# longhorn-storage
Hello everyone, hoping someone here can help me with this. I'm having some issues with Longhorn on our production cluster. Our resources are:

- Storage pool: 10 SAS SSDs
- Storage network set up with Multus
- 10 Gbit/s network, MTU 9000
- Longhorn Engine v1.6.2
- Dell R630, 28-core Xeon CPU
- SAS controller attached to the storage VM via PCI passthrough

I'm getting really poor IOPS performance. Below are our kbench results. Your thoughts on this would be highly appreciated.
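For context, the random-IOPS workload kbench drives is a small-block, high-queue-depth fio job. A sketch of what such a job file typically looks like (this is an approximation, not the exact `iops.fio` shipped with kbench; the `iodepth` and `runtime` values here are assumptions):

```ini
[global]
ioengine=libaio
direct=1          ; bypass the page cache so the volume itself is measured
bs=4k             ; small blocks stress IOPS rather than bandwidth
size=30g          ; matches TEST_SIZE below
runtime=30
time_based
group_reporting

[rand-read-iops]
rw=randread
iodepth=64

[rand-write-iops]
rw=randwrite
iodepth=64
```

With `direct=1` and 4k blocks, every I/O traverses the full Longhorn data path (engine, replicas, network), which is why replicated volumes show much lower numbers than Local-Path.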
```text
TEST_SIZE: 30G
Benchmarking iops.fio into Longhorn-iops.json
Benchmarking bandwidth.fio into Longhorn-bandwidth.json
Benchmarking latency.fio into Longhorn-latency.json

================================
FIO Benchmark Comparison Summary
For: Local-Path vs Longhorn
CPU Idleness Profiling: disabled
Size: 30G
Quick Mode: disabled
================================
                              Local-Path   vs                 Longhorn    :              Change
IOPS (Read/Write)
        Random:          75,237 / 37,531   vs              7,886 / 673    :   -89.52% / -98.21%
    Sequential:          78,704 / 39,660   vs           13,519 / 1,284    :   -82.82% / -96.76%

Bandwidth in KiB/sec (Read/Write)
        Random:      1,479,581 / 614,089   vs         304,961 / 71,547    :   -79.39% / -88.35%
    Sequential:        910,860 / 476,261   vs        236,124 / 107,440    :   -74.08% / -77.44%

Latency in ns (Read/Write)
        Random:        364,759 / 242,416   vs    2,626,861 / 2,816,804    :  620.16% / 1061.97%
    Sequential:        288,415 / 247,150   vs    2,601,315 / 2,835,670    :  801.93% / 1047.35%
```
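As a sanity check, the "Change" column above can be reproduced from the raw numbers; a minimal Python sketch:

```python
# Recompute kbench's "Change" column from the raw Local-Path vs
# Longhorn numbers in the summary above, to sanity-check the report.

def pct_change(base: float, new: float) -> float:
    """Percent change from base (Local-Path) to new (Longhorn)."""
    return round((new - base) / base * 100, 2)

# Random IOPS (read, write) from the summary
print(pct_change(75237, 7886))      # -89.52
print(pct_change(37531, 673))       # -98.21
# Random read latency in ns: positive change means higher (worse) latency
print(pct_change(364759, 2626861))  # 620.16
```

So the write side is the real outlier: random write IOPS drop ~98% and write latency is roughly 11x, consistent with every write being synchronously replicated over the network.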