# longhorn-storage
f
Hi, how can I add more disks to Longhorn via ssh/cli? Not clickops via a webUI. I would like to add them via /dev/disk/by-path since /dev/sd[X] seems to change between reboots.
b
Probably by editing the node objects
It won't be ssh, it'd be kubectl
Though you'll probably want to format them on the host first
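A minimal sketch of that host-side prep, assuming an ext4 filesystem disk for Longhorn; the by-path device is the one mentioned later in the thread and the mount point is a placeholder:

```sh
# Format the disk and mount it where Longhorn will be pointed at it.
# Device and mount point are illustrative, not prescriptive.
DISK=/dev/disk/by-path/pci-00005c00.0-sas-phy3-lun-0
MNT=/var/lib/harvester/extra-disks/disk1

mkfs.ext4 "$DISK"
mkdir -p "$MNT"
mount "$DISK" "$MNT"
```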
f
Would it be possible to do from the Harvester installation directly?
b
Probably with a pre-done config
f
What would be even better is if I could mount a block device directly and not have to format it with a filesystem.
b
I don't think Longhorn does that.
It does, however, pick up a default path, so you could use LVM or RAID to make a device available there
We use LVM on one of our production systems and it works pretty well
f
When I add a disk via the webUI I get the option for either File System or Block
b
For longhorn Volumes, yes, but I don't think it does that for the backing disks.
I'd be happy to be wrong.
Also I'm not on latest versions since most of our usage is in Harvester
f
Yes, only Harvester here
The Harvester installation lets me add a disk via data_disk, but I would love to add multiple disks.
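For reference, a sketch of the relevant install-config fragment; assuming current Harvester releases, data_disk takes a single device, which is why multiple extra disks still need post-install setup:

```yaml
# Harvester install config fragment (a sketch, field names per Harvester docs).
install:
  mode: create
  device: /dev/sda                                             # OS disk, placeholder
  data_disk: /dev/disk/by-path/pci-00005c00.0-sas-phy3-lun-0   # single data disk only
```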
b
Yeah, it's pretty easy to add via the UI, but it's not in the LH view, and there's an option to force-format it to ext4 but nothing to leave it as a block device.
[image.png: screenshot of the host disk-edit dialog]
^^ When you edit the config on a host, these are the only options.
f
Via Longhorn, I use the link from Harvester
But I need to mount it in the OS first.
b
As SUSE support is VERY fond of telling me:
The Longhorn UI is for debugging and informational purposes only.
f
🙂
b
You can do it that way, just expect it to be broken.
f
The problem is that it seems to add disks via /dev/sd[X], and that's bad. I would prefer to add them via an absolute path, since /dev/sd[X] seems to change when I reboot.
b
But the k8s object that stores that is a Node in longhorn-system:
kubectl -n longhorn-system get nodes.longhorn.io -o yaml
f
Thanks, will take a look in a few minutes. Got called into a meeting.
b
An ext4 LVM with the disks mounted at /var/lib/harvester/defaultdisk is probably your best/safest bet.
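A hedged sketch of that LVM approach, assuming a fresh node where the default path isn't already backed by another disk; the volume-group and logical-volume names are placeholders:

```sh
# Pool one or more extra disks into a single LV and mount it at
# Longhorn's default path. Repeat pvcreate/vgextend per extra disk.
pvcreate /dev/disk/by-path/pci-00005c00.0-sas-phy3-lun-0
vgcreate longhorn_vg /dev/disk/by-path/pci-00005c00.0-sas-phy3-lun-0
lvcreate -l 100%FREE -n longhorn_lv longhorn_vg
mkfs.ext4 /dev/longhorn_vg/longhorn_lv
mount /dev/longhorn_vg/longhorn_lv /var/lib/harvester/defaultdisk
```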
f
The interesting thing is that it's just an ext4 disk; it does not use LVM.
b
We're doing LVM and ext4 with Elemental on bare metal with RKE2 in prod
f
How do you add a disk via kubectl/yaml in Longhorn? I saw an issue saying that you need to format the disk and have it mounted somewhere.
b
kubectl -n longhorn-system get nodes.longhorn.io -o yaml
will give you the output of the nodes; the disks are under .spec.disks
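A minimal sketch of what that edit could look like, assuming the disk is already formatted and mounted; the node name, disk key, and path below are placeholders:

```sh
# Register an already-mounted disk by patching the node's .spec.disks.
# kubectl patch accepts a YAML merge patch; fields match the Node spec.
kubectl -n longhorn-system patch nodes.longhorn.io <node-name> --type merge -p '
spec:
  disks:
    extra-disk-1:
      allowScheduling: true
      diskType: filesystem
      evictionRequested: false
      path: /var/lib/harvester/extra-disks/disk1
      storageReserved: 0
      tags: []
'
```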
f
Yes, I can see that, but from what I can tell I must format and mount it first?
b
Yep, and add it to fstab and all the things.
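For completeness, the matching fstab line could look like this (a sketch; whether fstab edits persist depends on how the node OS is managed):

```sh
# Stable by-path name instead of /dev/sdX so the mount survives reboots.
echo '/dev/disk/by-path/pci-00005c00.0-sas-phy3-lun-0 /var/lib/harvester/extra-disks/disk1 ext4 defaults 0 2' >> /etc/fstab
```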
This is why I think LVM and the default path is the path of least resistance.
f
Ah yes
b
Or just use the UI
from Harvester, not longhorn
f
Yes
But that mounts /dev/sdX and I would like to mount it via /dev/disk/by-path/pci-00005c00.0-sas-phy3-lun-0 to make it more stable.
b
Mine shows sdX in the UI, but if you look at the YAML it's by UUID
f
That could be good enough.
b
i.e.:
```yaml
spec:
  allowScheduling: true
  disks:
    ac9cde8459d216725d505e2089d6c535:
      allowScheduling: true
      diskType: filesystem
      evictionRequested: false
      path: /var/lib/harvester/extra-disks/ac9cde8459d216725d505e2089d6c535
      storageReserved: 0
      tags: []
    default-disk-dc10f35963aa6dfb:
      allowScheduling: true
      diskType: filesystem
      evictionRequested: false
      path: /var/lib/harvester/defaultdisk
      storageReserved: 190890602496
      tags: []
    e007edc7c538d34ac8ba5f710b28f70a:
      allowScheduling: true
      diskType: filesystem
      evictionRequested: false
      path: /var/lib/harvester/extra-disks/e007edc7c538d34ac8ba5f710b28f70a
      storageReserved: 0
      tags: []
    f27cd533e309ba5d4df8c24d9e7b8c53:
      allowScheduling: true
      diskType: filesystem
      evictionRequested: false
      path: /var/lib/harvester/extra-disks/f27cd533e309ba5d4df8c24d9e7b8c53
      storageReserved: 0
      tags: []
```
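To double-check what actually backs those paths on a node, something like this works (standard util-linux tools, nothing Longhorn-specific):

```sh
# Map each Longhorn disk path back to its device and stable identifiers.
findmnt -no SOURCE /var/lib/harvester/defaultdisk
lsblk -o NAME,UUID,MOUNTPOINT
ls -l /dev/disk/by-path/
```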