DRBD and Lustre
DISCLAIMER - EXTERNAL CONTRIBUTOR CONTENT
This content was submitted by an external contributor. We provide this information as a resource for the Lustre™ open-source community, but we make no representation as to the accuracy, completeness or reliability of this information.
Distributed Replicated Block Device (DRBD) is a block device that is designed for building high-availability clusters. It works by mirroring the entire block device via a dedicated network. For more information, see the DRBD website.
Preliminary testing has been done to evaluate the use of DRBD as back-end storage for Lustre (to avoid shared storage solutions while retaining redundancy). Of particular interest is the performance impact of DRBD on a Lustre filesystem. The results of this preliminary testing appear below.
NOTE: When these test results were obtained, no fine-tuning of DRBD was done.
Tests with lmdd
These tests measured write throughput (2 OSTs on 1 OSS):
- Without DRBD: 160 MB/s
- With DRBD synchronous mode: 90 MB/s
- With DRBD semi-synchronous mode: 108 MB/s
- With DRBD asynchronous mode: 115 MB/s
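The three modes correspond to DRBD's `protocol` setting: `C` is synchronous (I/O completes only after the peer has written to disk), `B` is semi-synchronous (I/O completes once data has reached the peer's memory), and `A` is asynchronous (I/O completes once data is in the local TCP send buffer). A minimal sketch of a DRBD resource configuration for an OST backing device — hostnames, devices, and addresses are placeholders, not taken from the original tests:

```
# Hypothetical /etc/drbd.conf fragment (DRBD 8.x syntax).
# Change "protocol" to A, B, or C to select the replication mode.
resource ost0 {
  protocol C;                      # C = synchronous replication
  on oss1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.1.1:7788;    # dedicated replication network
    meta-disk internal;
  }
  on oss2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.1.2:7788;
    meta-disk internal;
  }
}
```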
In the lmdd tests, the write throughput loss due to DRBD is about 30% in asynchronous mode, and about 40% in synchronous mode.
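The quoted loss figures follow directly from the numbers above, as this small calculation (using only the measurements listed in this section) shows:

```python
# Write-throughput loss relative to the 160 MB/s non-DRBD baseline (lmdd tests).
baseline = 160  # MB/s without DRBD
results = {"synchronous": 90, "semi-synchronous": 108, "asynchronous": 115}

for mode, mbps in results.items():
    loss = (baseline - mbps) / baseline * 100
    print(f"{mode}: {loss:.1f}% loss")
```

The asynchronous figure works out to about 28%, and the synchronous figure to about 44%, matching the rounded percentages quoted in the text.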
Tests with IOzone
For this testing, IOzone was run this way:
./iozone -c -e -i 0 -s 1g -t 1 -F /mnt/lustre/iozone -w
./iozone -c -e -i 1 -s 1g -t 1 -F /mnt/lustre/iozone
These tests measured write/read throughput:
- Without DRBD: W 125 MB/s, R 180 MB/s
- With DRBD synchronous mode: W 90 MB/s, R 160 MB/s
- With DRBD semi-synchronous mode: W 50 MB/s, R 140 MB/s
- With DRBD asynchronous mode: W 115 MB/s, R 160 MB/s
In the IOzone tests, read throughput is the same in synchronous and asynchronous modes because DRBD always serves reads from the local disk. The read performance loss due to DRBD is about 10%; the write performance loss is about 10% in asynchronous mode and about 30% in synchronous mode.
The performance impact of DRBD is smaller with IOzone because IOzone issues independent writes, so the DRBD replication overhead does not accumulate with each individual I/O operation.