# longhorn-storage
Hey folks. I just installed Harvester v1.3.0, which uses Longhorn 1.6.0. It's a single node with NVMe drives. I ran a test against the underlying drives directly and got the following:
```
harvester01:/var/lib/harvester/extra-disks/b3fa2628c7e6619a6ae4f5fa954c0f5d # sudo dd if=/dev/zero of=/var/lib/harvester/extra-disks/b3fa2628c7e6619a6ae4f5fa954c0f5d/testfile bs=1G count=1 oflag=direct
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.836411 s, 1.3 GB/s
harvester01:/var/lib/harvester/extra-disks/b3fa2628c7e6619a6ae4f5fa954c0f5d # sudo dd if=/var/lib/harvester/extra-disks/b3fa2628c7e6619a6ae4f5fa954c0f5d/testfile of=/dev/null bs=1G count=1 iflag=direct
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.239335 s, 4.5 GB/s
```
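As an aside, a single `bs=1G` dd mostly measures the best-case sequential number at one request in flight. A sketch like the one below (paths, sizes, and block-size choices are arbitrary picks of mine, not anything from the test above) reruns the same write at smaller request sizes, which is often closer to what a VM's I/O path actually issues:

```shell
#!/bin/sh
# Sketch: repeat the dd write test at several block sizes to see how
# throughput falls off as requests get smaller. Path and counts are placeholders.
TESTFILE=./dd-testfile
for bs in 4k 64k 1M; do
    # conv=fsync forces a flush before dd reports its figure, so the number
    # stays honest even where oflag=direct is unsupported (e.g. tmpfs).
    printf '%s: ' "$bs"
    dd if=/dev/zero of="$TESTFILE" bs="$bs" count=64 conv=fsync 2>&1 | tail -n 1
done
rm -f "$TESTFILE"
```

A tool like fio would give more control (queue depth, random vs. sequential), but this keeps the comparison on the same dd footing as the numbers above.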
VERY fast. I then created two VMs from an Ubuntu 24.04 cloud image, one on the default harvester-longhorn storage class (replica=3) and one on a new storage class I created with replica=1. I get the same result from disk speed tests inside the VM whether it uses replica=1 or replica=3. Remember, this is one node, all local.
```
ubuntu@testvm:~$ sudo dd if=/dev/zero of=/tmp/testfile bs=1G count=1 oflag=direct
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.99403 s, 269 MB/s
ubuntu@testvm:~$ sudo dd if=/tmp/testfile of=/dev/null bs=1G count=1 iflag=direct
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.2846 s, 836 MB/s
```
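For reference, the replica=1 storage class mentioned above can be defined roughly like this. This is a sketch: the class name `longhorn-replica-1` is made up, and everything other than `numberOfReplicas` is just a common default, so adjust to taste:

```shell
# Sketch (assumption): a replica-1 Longhorn StorageClass applied with kubectl.
# The metadata.name is illustrative, not anything created in the test above.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-replica-1
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "1"
  staleReplicaTimeout: "30"
EOF
```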
On replica=1 with only one node, the I/O has to be local; there is no network to speak of (the box is plugged into a 1G switch, which would cap throughput at about 125 MB/s, while writes top out at 269 MB/s, roughly 2.15 Gbit/s, so the traffic obviously isn't flowing through the switch). Why is there such a huge disparity between the raw-disk and in-VM writes, and can I tune it out? What's happening?
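For the conversion itself (dd reports decimal megabytes, so MB/s × 8 gives Mbit/s), a quick sanity check, nothing Longhorn-specific:

```shell
# 269 MB/s from the dd output above, expressed in megabits per second:
echo "$((269 * 8)) Mbit/s"   # 2152 Mbit/s, i.e. ~2.15 Gbit/s, far above a 1 Gbit link
```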