# harvester
b
Happy Harvesting to all! I have a couple of storage class questions (using v1.6): 1. When a k8s cluster is provisioned by the Harvester CSI driver (through Rancher), is there any benefit in adding the passthrough custom storage class to it https://docs.harvesterhci.io/v1.6/rancher/csi-driver#passthrough-custom-storageclass ? 2. I heard about 'longhorn in longhorn' and it doesn't sound good.. Is the 'harvester' storage class that comes out of the box in the k8s cluster from question 1 good enough to avoid it?
t
1. Yes. You can present the Longhorn on Harvester to the k8s cluster running on the VMs. This saves you the complexity of adding another storage layer inside the VMs. 2. The passthrough will prevent the longhorn-on-longhorn problem. You MAY want to use it for testing or tiering of storage. You WILL want to look at 3rd party CSI drivers for external storage; this will help with scaling. Bonus tip: make sure you have at least a 2.5Gb network for dev/test and 10Gb for prod.
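For reference, a minimal sketch of what the passthrough custom StorageClass from the linked docs boils down to, applied from a shell against the downstream cluster; the class name and the hostStorageClass value below are placeholders for whatever StorageClass is defined on the Harvester side:

```sh
# Hedged sketch, not verbatim from the docs: a StorageClass in the guest/downstream
# cluster that passes provisioning through to a named StorageClass on Harvester.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: harvester-passthrough          # placeholder name
provisioner: driver.harvesterhci.io
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  hostStorageClass: my-harvester-sc    # placeholder: a StorageClass that exists on Harvester
EOF
```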
b
Thanks for the clarification @thousands-advantage-10804. I was hoping I wouldn't need any custom storage class, but there seems to be some problem with it that I now need to dig into πŸ˜… When I create a pod and use this custom storage class to provision a volume, it all looks good on the cluster/Rancher side, but there is an error in Harvester:
Copy code
MapVolume.MapPodDevice failed for volume "pvc-dbf52476-201d-488c-b8fd-1c3a9ae14854" : rpc error: code = Internal desc = failed to bind mount "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-dbf52476-201d-488c-b8fd-1c3a9ae14854/pvc-dbf52476-201d-488c-b8fd-1c3a9ae14854" at "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-dbf52476-201d-488c-b8fd-1c3a9ae14854/6e44a2c0-ff84-4e4c-b0df-1cf685a48fd5": mount failed: exit status 32
Mounting command: mount
Mounting arguments: -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-dbf52476-201d-488c-b8fd-1c3a9ae14854/pvc-dbf52476-201d-488c-b8fd-1c3a9ae14854 /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-dbf52476-201d-488c-b8fd-1c3a9ae14854/6e44a2c0-ff84-4e4c-b0df-1cf685a48fd5
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-dbf52476-201d-488c-b8fd-1c3a9ae14854/6e44a2c0-ff84-4e4c-b0df-1cf685a48fd5: special device /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-dbf52476-201d-488c-b8fd-1c3a9ae14854/pvc-dbf52476-201d-488c-b8fd-1c3a9ae14854 does not exist.
dmesg(1) may have more information after failed mount system call.
It fails trying to create the hp-volume-xxxxx pod because there is some problem with hp-volume-xxxx.xxxxxxxx
From what I read, this 'hot plug' volume is not the one that I created for my pod, but something that comes from KubeVirt - "This feature is particularly useful for adding resources to running virtual machines that host critical services". I'm wondering if this is something that Harvester has to use, or whether there is a way to disable this 'hot plugging' if it isn't needed?
Figured it out (the link to the Longhorn UI in Harvester helped) - it failed because I specified 2 replicas in the storage class (for the 3 cluster VMs), but have only 1 node. Provisioning of the second replica was stuck.. all is good with the replica count set to 1
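For anyone hitting the same thing, a hedged sketch of the Harvester-side Longhorn StorageClass with the replica count dropped to 1, which is what resolved it here; numberOfReplicas is the standard Longhorn v1 parameter, and the class name is a placeholder:

```sh
# Hedged sketch: single-replica Longhorn (v1 engine) StorageClass on the Harvester
# side, for a one-node setup where extra replicas have nowhere to schedule.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-single-replica        # placeholder name
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  numberOfReplicas: "1"
  staleReplicaTimeout: "30"
EOF
```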
@thousands-advantage-10804 my Harvester has only 1 node with dual Epyc CPUs (64 cores are pretty cheap now) and plenty of RAM/NVMes. So I was hoping to get away with 1Gb, because there is no replication between nodes. Is there anything else that requires a super big pipe to the router/switch?
b
Heads up: if you pay for support, you'll need the Longhorn add-on for each of the downstream nodes (including non-worker control plane) if you do the passthrough.
b
Good to know, thanks! I'm a freeloader for now... It would be good to get to the point where I need/can afford paid support :)
b
Nothing wrong with trying to run unsupported. We were just caught off guard (we weren't even doing passthrough) by the fact that we needed extra licenses for each downstream cluster, including control plane nodes.
b
I'm comparing the performance of local storage (pod running on the bare-metal rke2 cluster) to the harvester-passthrough one (pod running in an rke2 cluster provisioned by Harvester) using
Copy code
fio --name=rand-read --ioengine=libaio --iodepth=16 --rw=randread --bs=4k --direct=1 --size=1G --numjobs=4 --runtime=60 --group_reporting
The hardware is identical: one node is Rancher with a local rke2 cluster, the other is Harvester with rke2. Local storage:
Copy code
Jobs: 4 (f=4): [r(4)][-.-%][r=1458MiB/s,w=0KiB/s][r=373k,w=0 IOPS][eta 00m:00s]
rand-read: (groupid=0, jobs=4): err= 0: pid=886: Tue Sep  2 22:41:18 2025
   read: IOPS=371k, BW=1447MiB/s (1518MB/s)(4096MiB/2830msec)
    slat (usec): min=5, max=263, avg= 8.84, stdev= 4.60
    clat (usec): min=58, max=894, avg=161.12, stdev=51.77
     lat (usec): min=64, max=901, avg=170.08, stdev=52.07
    clat percentiles (usec):
     |  1.00th=[   94],  5.00th=[  108], 10.00th=[  116], 20.00th=[  126],
     | 30.00th=[  135], 40.00th=[  141], 50.00th=[  149], 60.00th=[  157],
     | 70.00th=[  169], 80.00th=[  190], 90.00th=[  223], 95.00th=[  260],
     | 99.00th=[  359], 99.50th=[  400], 99.90th=[  510], 99.95th=[  562],
     | 99.99th=[  676]
   bw (  KiB/s): min=344808, max=382184, per=25.37%, avg=376002.00, stdev=7881.48, samples=20
   iops        : min=86202, max=95546, avg=94000.50, stdev=1970.37, samples=20
  lat (usec)   : 100=2.32%, 250=91.80%, 500=5.76%, 750=0.11%, 1000=0.01%
  cpu          : usr=12.90%, sys=86.16%, ctx=10206, majf=0, minf=101
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=1048576,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: bw=1447MiB/s (1518MB/s), 1447MiB/s-1447MiB/s (1518MB/s-1518MB/s), io=4096MiB (4295MB), run=2830-2830msec
Harvester-Longhorn Passthrough:
Copy code
Jobs: 4 (f=4): [r(4)][100.0%][r=90.5MiB/s,w=0KiB/s][r=23.2k,w=0 IOPS][eta 00m:00s]
rand-read: (groupid=0, jobs=4): err= 0: pid=660: Tue Sep  2 22:43:59 2025
   read: IOPS=23.7k, BW=92.5MiB/s (96.0MB/s)(4096MiB/44296msec)
    slat (usec): min=2, max=33905, avg= 8.78, stdev=43.28
    clat (usec): min=70, max=64533, avg=2693.15, stdev=862.13
     lat (usec): min=450, max=64554, avg=2702.09, stdev=864.54
    clat percentiles (usec):
     |  1.00th=[ 1631],  5.00th=[ 1876], 10.00th=[ 2008], 20.00th=[ 2180],
     | 30.00th=[ 2311], 40.00th=[ 2409], 50.00th=[ 2540], 60.00th=[ 2671],
     | 70.00th=[ 2835], 80.00th=[ 3064], 90.00th=[ 3589], 95.00th=[ 4113],
     | 99.00th=[ 5145], 99.50th=[ 5669], 99.90th=[ 8848], 99.95th=[10421],
     | 99.99th=[25822]
   bw (  KiB/s): min=20416, max=25728, per=24.95%, avg=23625.16, stdev=1017.91, samples=352
   iops        : min= 5104, max= 6432, avg=5906.27, stdev=254.48, samples=352
  lat (usec)   : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03%
  lat (msec)   : 2=9.96%, 4=84.12%, 10=5.82%, 20=0.04%, 50=0.01%
  lat (msec)   : 100=0.01%
  cpu          : usr=1.92%, sys=8.33%, ctx=533966, majf=0, minf=96
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=1048576,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: bw=92.5MiB/s (96.0MB/s), 92.5MiB/s-92.5MiB/s (96.0MB/s-96.0MB/s), io=4096MiB (4295MB), run=44296-44296msec
Does it make sense?
And here is the performance of the out-of-the-box harvester storage class ('longhorn in longhorn', created by Harvester during cluster provisioning):
Copy code
Jobs: 4 (f=4): [r(4)][97.8%][r=94.9MiB/s,w=0KiB/s][r=24.3k,w=0 IOPS][eta 00m:01s]
rand-read: (groupid=0, jobs=4): err= 0: pid=678: Tue Sep  2 23:03:32 2025
   read: IOPS=23.4k, BW=91.5MiB/s (95.0MB/s)(4096MiB/44749msec)
    slat (usec): min=2, max=7889, avg= 8.90, stdev=18.28
    clat (usec): min=79, max=38378, avg=2720.46, stdev=807.42
     lat (usec): min=413, max=38385, avg=2729.52, stdev=808.40
    clat percentiles (usec):
     |  1.00th=[ 1663],  5.00th=[ 1893], 10.00th=[ 2024], 20.00th=[ 2180],
     | 30.00th=[ 2311], 40.00th=[ 2442], 50.00th=[ 2573], 60.00th=[ 2704],
     | 70.00th=[ 2868], 80.00th=[ 3097], 90.00th=[ 3621], 95.00th=[ 4146],
     | 99.00th=[ 5211], 99.50th=[ 5800], 99.90th=[ 8979], 99.95th=[11076],
     | 99.99th=[21103]
   bw (  KiB/s): min=18856, max=28104, per=24.99%, avg=23421.57, stdev=1369.93, samples=356
   iops        : min= 4714, max= 7026, avg=5855.37, stdev=342.48, samples=356
  lat (usec)   : 100=0.01%, 500=0.01%, 750=0.02%, 1000=0.04%
  lat (msec)   : 2=8.85%, 4=85.04%, 10=5.99%, 20=0.06%, 50=0.01%
  cpu          : usr=1.94%, sys=8.28%, ctx=531092, majf=0, minf=103
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=1048576,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: bw=91.5MiB/s (95.0MB/s), 91.5MiB/s-91.5MiB/s (95.0MB/s-95.0MB/s), io=4096MiB (4295MB), run=44749-44749msec
I must be doing something wrong...
t
For a single node, 1Gb is fine. It matters when you have multiple nodes. Longhorn is a chatty thing. You are seeing the fact that Longhorn is NOT performant. Do you have access to an iSCSI NAS?
b
No, I don't have any storage outside the server right now. Is there anything that may help with the drives that are in the node itself? I didn't expect the custom passthrough storage class to have the same performance as the out-of-the-box harvester storage class, tbh..
going to try Longhorn V2..
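A hedged sketch of what trying the Longhorn v2 data engine looks like, assuming the Longhorn bundled with Harvester is recent enough to expose the dataEngine StorageClass parameter (older releases used a different parameter name) and that the v2 engine has been enabled in Longhorn's settings:

```sh
# Hedged sketch: StorageClass targeting Longhorn's v2 (SPDK-based) data engine.
# Assumes the v2 data engine is enabled in Longhorn settings; parameter names
# can differ between Longhorn releases.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-v2                    # placeholder name
provisioner: driver.longhorn.io
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  numberOfReplicas: "1"
  dataEngine: "v2"
EOF
```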
For the sake of completeness, here are the results of running the same test on one of the VMs provisioned by Harvester (from the Rancher UI) for the 3-VM cluster:
Copy code
Jobs: 4 (f=4): [r(4)][100.0%][r=93.4MiB/s][r=23.9k IOPS][eta 00m:00s]
rand-read: (groupid=0, jobs=4): err= 0: pid=11703: Wed Sep  3 00:02:06 2025
  read: IOPS=23.1k, BW=90.1MiB/s (94.5MB/s)(4096MiB/45446msec)
    slat (nsec): min=1860, max=7260.8k, avg=8448.01, stdev=15094.10
    clat (usec): min=294, max=34652, avg=2763.64, stdev=791.85
     lat (usec): min=545, max=34898, avg=2772.09, stdev=792.75
    clat percentiles (usec):
     |  1.00th=[ 1729],  5.00th=[ 1975], 10.00th=[ 2089], 20.00th=[ 2245],
     | 30.00th=[ 2376], 40.00th=[ 2507], 50.00th=[ 2606], 60.00th=[ 2737],
     | 70.00th=[ 2900], 80.00th=[ 3130], 90.00th=[ 3589], 95.00th=[ 4113],
     | 99.00th=[ 5211], 99.50th=[ 5800], 99.90th=[ 9372], 99.95th=[12125],
     | 99.99th=[22152]
   bw (  KiB/s): min=76696, max=100904, per=99.95%, avg=92243.84, stdev=971.57, samples=360
   iops        : min=19174, max=25226, avg=23060.91, stdev=242.89, samples=360
  lat (usec)   : 500=0.01%, 750=0.01%, 1000=0.02%
  lat (msec)   : 2=5.94%, 4=88.18%, 10=5.76%, 20=0.07%, 50=0.01%
  cpu          : usr=1.82%, sys=8.20%, ctx=543592, majf=0, minf=111
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1048576,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: bw=90.1MiB/s (94.5MB/s), 90.1MiB/s-90.1MiB/s (94.5MB/s-94.5MB/s), io=4096MiB (4295MB), run=45446-45446msec

Disk stats (read/write):
  vda: ios=1042778/2142, sectors=8342816/19688, merge=0/622, ticks=2864872/4975, in_queue=2869983, util=78.14%
So, passthrough works... it's just the Harvester Longhorn (V1) on the VM itself that is dead slow.
t
Honestly, if you have a single node you should reduce the replicas to 1. There is ZERO need for redundancy with only 1 drive backing it all.
βœ… 1
b
Already did. I initially created it with 2 because I planned to have a 3-node rke2 cluster, but now I'm thinking maybe a cluster with just one beefed-up node would be better (they all run on the same hardware anyway :)
t
How many VMs do you need to run? I have 2 separate single-node installs so I can play with new versions independently.
b
All my services are running in containers, so I don't even need VMs that much.. but it's nice to have the option if I need one.. otherwise I would just use rke2
t
sounds like you already have your answer. πŸ˜„
πŸ˜… 1
p
Yeah, something like PCI passthrough or directpv gives the best storage perf, I think? csi-lvm? It'd be nice if there was a CSI feature table / benchmarks somewhere
t
I pass an NVMe to a VM for TrueNAS. Works great.
πŸ™Œ 1
b
Tested the lvm-local-storage, results:
Copy code
Jobs: 4 (f=4): [r(4)][100.0%][r=461MiB/s][r=118k IOPS][eta 00m:00s]
rand-read: (groupid=0, jobs=4): err= 0: pid=21976: Wed Sep 3 22:39:38 2025
 read: IOPS=115k, BW=449MiB/s (471MB/s)(4096MiB/9114msec)
  slat (nsec): min=1810, max=5630.1k, avg=4182.16, stdev=8905.92
  clat (usec): min=26, max=26334, avg=549.87, stdev=269.73
   lat (usec): min=97, max=26909, avg=554.05, stdev=270.57
  clat percentiles (usec):
   | 1.00th=[ 306], 5.00th=[ 404], 10.00th=[ 445], 20.00th=[ 486],
   | 30.00th=[ 506], 40.00th=[ 519], 50.00th=[ 529], 60.00th=[ 545],
   | 70.00th=[ 562], 80.00th=[ 586], 90.00th=[ 644], 95.00th=[ 701],
   | 99.00th=[ 914], 99.50th=[ 1254], 99.90th=[ 3884], 99.95th=[ 5276],
   | 99.99th=[11469]
  bw ( KiB/s): min=372376, max=481072, per=100.00%, avg=460817.78, stdev=5968.76, samples=72
  iops    : min=93094, max=120268, avg=115204.44, stdev=1492.19, samples=72
 lat (usec)  : 50=0.01%, 100=0.01%, 250=0.44%, 500=25.97%, 750=70.51%
 lat (usec)  : 1000=2.30%
 lat (msec)  : 2=0.58%, 4=0.11%, 10=0.07%, 20=0.01%, 50=0.01%
 cpu     : usr=5.29%, sys=17.08%, ctx=90238, majf=0, minf=113
 IO depths  : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
   submit  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
   complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
   issued rwts: total=1048576,0,0,0 short=0,0,0,0 dropped=0,0,0,0
   latency  : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
  READ: bw=449MiB/s (471MB/s), 449MiB/s-449MiB/s (471MB/s-471MB/s), io=4096MiB (4295MB), run=9114-9114msec
So it is ~5 times faster than Harvester Longhorn and ~3 times slower than native/node storage. It would be great if you guys could share your setup and performance test results.
t
Harvester on a Minisforum mini PC with an NVMe disk. 1 Longhorn replica, tested from within a VM:
Copy code
[root@tetee ~]# fio --name=rand-read --ioengine=libaio --iodepth=16 --rw=randread --bs=4k --direct=1 --size=1G --numjobs=4 --runtime=60 --group_reporting
rand-read: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=16
...
fio-3.36
Starting 4 processes
rand-read: Laying out IO file (1 file / 1024MiB)
rand-read: Laying out IO file (1 file / 1024MiB)
rand-read: Laying out IO file (1 file / 1024MiB)
rand-read: Laying out IO file (1 file / 1024MiB)
Jobs: 3 (f=3): [r(1),_(1),r(2)][95.5%][r=201MiB/s][r=51.5k IOPS][eta 00m:01s]
rand-read: (groupid=0, jobs=4): err= 0: pid=2563: Wed Sep  3 23:22:07 2025
  read: IOPS=50.6k, BW=197MiB/s (207MB/s)(4096MiB/20743msec)
    slat (nsec): min=1993, max=604530, avg=3611.77, stdev=3067.62
    clat (usec): min=174, max=15966, avg=1237.90, stdev=457.75
     lat (usec): min=178, max=15969, avg=1241.51, stdev=457.86
    clat percentiles (usec):
     |  1.00th=[  478],  5.00th=[  676], 10.00th=[  734], 20.00th=[  816],
     | 30.00th=[  914], 40.00th=[ 1090], 50.00th=[ 1270], 60.00th=[ 1369],
     | 70.00th=[ 1450], 80.00th=[ 1565], 90.00th=[ 1713], 95.00th=[ 1860],
     | 99.00th=[ 2638], 99.50th=[ 3130], 99.90th=[ 4752], 99.95th=[ 5407],
     | 99.99th=[ 6194]
   bw (  KiB/s): min=168872, max=270720, per=100.00%, avg=204682.46, stdev=6910.42, samples=162
   iops        : min=42218, max=67680, avg=51170.61, stdev=1727.61, samples=162
  lat (usec)   : 250=0.15%, 500=0.89%, 750=10.77%, 1000=24.31%
  lat (msec)   : 2=60.52%, 4=3.17%, 10=0.19%, 20=0.01%
  cpu          : usr=1.55%, sys=7.92%, ctx=147840, majf=0, minf=111
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1048576,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: bw=197MiB/s (207MB/s), 197MiB/s-197MiB/s (207MB/s-207MB/s), io=4096MiB (4295MB), run=20743-20743msec

Disk stats (read/write):
  vda: ios=1035959/76, sectors=8287680/1366, merge=0/33, ticks=1254276/154, in_queue=1254430, util=85.17%
From the host itself:
Copy code
slim:~ # fio --name=rand-read --ioengine=libaio --iodepth=16 --rw=randread --bs=4k --direct=1 --size=1G --numjobs=4 --runtime=60 --group_reporting
rand-read: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=16
...
fio-3.23
Starting 4 processes
rand-read: Laying out IO file (1 file / 1024MiB)
rand-read: Laying out IO file (1 file / 1024MiB)
rand-read: Laying out IO file (1 file / 1024MiB)
rand-read: Laying out IO file (1 file / 1024MiB)
Jobs: 4 (f=4): [r(4)][-.-%][r=1338MiB/s][r=343k IOPS][eta 00m:00s]
rand-read: (groupid=0, jobs=4): err= 0: pid=31632: Wed Sep  3 23:25:51 2025
  read: IOPS=339k, BW=1324MiB/s (1388MB/s)(4096MiB/3094msec)
    slat (nsec): min=1603, max=686424, avg=2948.59, stdev=1674.56
    clat (usec): min=29, max=7096, avg=184.06, stdev=143.75
     lat (usec): min=50, max=7129, avg=187.06, stdev=143.82
    clat percentiles (usec):
     |  1.00th=[   57],  5.00th=[   61], 10.00th=[   66], 20.00th=[   81],
     | 30.00th=[   99], 40.00th=[  120], 50.00th=[  145], 60.00th=[  174],
     | 70.00th=[  210], 80.00th=[  265], 90.00th=[  355], 95.00th=[  441],
     | 99.00th=[  644], 99.50th=[  750], 99.90th=[ 1188], 99.95th=[ 1237],
     | 99.99th=[ 1369]
   bw (  MiB/s): min= 1313, max= 1361, per=100.00%, avg=1347.12, stdev= 4.20, samples=24
   iops        : min=336324, max=348659, avg=344862.50, stdev=1074.66, samples=24
  lat (usec)   : 50=0.01%, 100=30.47%, 250=47.33%, 500=19.05%, 750=2.65%
  lat (usec)   : 1000=0.23%
  lat (msec)   : 2=0.26%, 10=0.01%
  cpu          : usr=7.52%, sys=33.82%, ctx=591496, majf=0, minf=108
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1048576,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: bw=1324MiB/s (1388MB/s), 1324MiB/s-1324MiB/s (1388MB/s-1388MB/s), io=4096MiB (4295MB), run=3094-3094msec

Disk stats (read/write):
  nvme0n1: ios=993874/124, merge=0/49, ticks=177654/90, in_queue=177784, util=96.84%
πŸ‘ 1