# harvester
Here is the command I’m running:

```
fio --name=benchtest --size=800m --filename=/volume/test --direct=1 --rw=randrw --ioengine=libaio --bs=4k --iodepth=16 --numjobs=8 --time_based --runtime=60
```

Direct NVMe output:

```
READ: bw=228MiB/s (239MB/s), 27.9MiB/s-29.2MiB/s (29.3MB/s-30.6MB/s), io=13.3GiB (14.3GB), run=60001-60002msec
WRITE: bw=228MiB/s (239MB/s), 27.9MiB/s-29.2MiB/s (29.3MB/s-30.6MB/s), io=13.4GiB (14.3GB), run=60001-60002msec
Disk stats (read/write):
sdc: ios=3495766/3497698, merge=0/4, ticks=2908172/2704837, in_queue=5613021, util=99.99%
```

Longhorn 3-replica output:

```
READ: bw=10.3MiB/s (10.8MB/s), 617KiB/s-3385KiB/s (631kB/s-3466kB/s), io=617MiB (647MB), run=60004-60035msec
WRITE: bw=10.3MiB/s (10.8MB/s), 625KiB/s-3389KiB/s (640kB/s-3470kB/s), io=617MiB (647MB), run=60004-60035msec
Disk stats (read/write):
```
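For context, the aggregate figures above work out to roughly a 22x slowdown on Longhorn versus direct NVMe (a quick sanity check using the MB/s numbers fio reported):

```python
# Aggregate read bandwidth reported by fio, in MB/s
direct_nvme = 239.0   # Direct NVMe run
longhorn = 10.8       # Longhorn 3-replica run

slowdown = direct_nvme / longhorn
print(f"Longhorn is ~{slowdown:.0f}x slower than direct NVMe")  # ~22x
```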
I’ve provisioned 50 CPUs and 64GB of RAM to each node.
Note: these nodes are running as VMs inside Proxmox, and the NVMe drives are passed through to the Harvester VMs. Doing a similar thing on a different test cluster using Mayastor with 3 replicas gave me ~81 MB/s, which also felt low, but it at least shows the network should be capable of at least that speed with Longhorn.
I’m also running Harvester 1.2.0-rc4.
cc @salmon-city-57654
Seems like the best near-term solution is either a custom Harvester CSI driver or Longhorn SPDK? https://github.com/harvester/harvester/issues/2405