why not use a CSI? <https://github.com/kubernetes-...
# harvester
t
g
Hi, I'm not very experienced with CSI, but I’d like to use iSCSI disks as backend storage — not by creating a separate volume for each VM (which would be the ideal pattern when using CSI), but by attaching the iSCSI disk directly to the node. Managing over 60 VMs would be very difficult if I had to handle individual volumes manually.

From what I understand, CSI is mainly used for provisioning volumes that are then attached to VMs, but I couldn't find a way to use a CSI-provisioned iSCSI volume directly on the node itself. It’s possible I’m misunderstanding something. If there’s a way to achieve this — using CSI to attach an iSCSI disk directly to a node — I’d really appreciate it if you could point me in the right direction or share any relevant documentation. Thanks in advance for your time!
t
There are basically two options. A. Boot the node from iSCSI: https://docs.harvesterhci.io/v1.5/install/external-disk-support/#iscsi-based-installation B. Add the iSCSI volume to Longhorn, though I'm not sure of the best way to make that persistent. It will need multipathd installed.
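For option B, a minimal sketch of logging in to an iSCSI target from the node by hand, so that multipathd can then claim the resulting paths. The portal address and IQN are hypothetical placeholders; setting `node.startup` to `automatic` is one way to make the login survive reboots. Set `DRY_RUN=1` to print the commands instead of running them:

```shell
# Sketch: attach an iSCSI LUN directly to the node with open-iscsi.
# Portal and IQN passed in are placeholders, not real targets.
iscsi_login() {
    portal="$1"   # e.g. 10.0.0.1:3260
    iqn="$2"      # e.g. iqn.2024-01.example:target0
    run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }
    # discover targets exposed by the portal
    run iscsiadm -m discovery -t sendtargets -p "$portal"
    # log in to the target
    run iscsiadm -m node -T "$iqn" -p "$portal" --login
    # make the login persistent across reboots
    run iscsiadm -m node -T "$iqn" -p "$portal" --op update \
        -n node.startup -v automatic
}
```

Once logged in, multipathd should pick up the new `/dev/sdX` paths and build the `mpathX` map on its own.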
g
Hi, I have two local disks on the server dedicated to Harvester itself, since I prefer not to rely on network boot. However, I need additional storage, so I'm trying to use external SAN storage. I've enabled multipath, and it correctly detects both paths through the two SAN controllers. Here's the output from `multipath -ll`:
```
mpathd (32017001378101100) dm-14 Qsan,XS3216
size=500G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 16:0:0:0 sde 8:64 active ready running
`-+- policy='service-time 0' prio=50 status=enabled
  `- 17:0:0:0 sdf 8:80 active ready running
```
The issue is that Longhorn doesn't detect disks presented via multipath (i.e., under `/dev/mapper/mpathX`) — it only recognizes block devices like `/dev/sdX`. If I disable multipath, the disks show up correctly in the Harvester UI and can be added without issues.

I considered modifying some of the Longhorn components in the `longhorn-system` namespace or manually adding a new block device via a YAML manifest, but that seems risky and potentially error-prone. Your suggestion about network boot could be a valid workaround, but I’m not sure how well it would integrate with multipath, and I’d really prefer to avoid using network boot if possible. If you (or anyone else) have suggestions or workarounds, I’d really appreciate it. Thanks a lot for your time!
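For reference, Longhorn can be pointed at an arbitrary mounted filesystem path by editing its `Node` CR in the `longhorn-system` namespace. This is precisely the manual-YAML route described above, so treat it as a sketch rather than a supported path; the node name, disk name, mount path, and reserved size below are illustrative assumptions, and the multipath device would need to be formatted and mounted first:

```yaml
# Longhorn Node CR fragment: register a mounted filesystem as a Longhorn disk.
# Node name, disk name, and path are placeholders for this example.
apiVersion: longhorn.io/v1beta2
kind: Node
metadata:
  name: harvester-node-1
  namespace: longhorn-system
spec:
  disks:
    mpathd-disk:
      path: /var/lib/harvester/extra-disks/mpathd   # where the mpath device is mounted
      allowScheduling: true
      storageReserved: 0
      tags: []
```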
m
I think this is because Harvester’s node disk manager skips LVM (device-mapper) devices.
t
I think what you’re trying to do really isn’t supported. Longhorn traditionally looks in a specific directory for its block devices.
👍 1
m
Harvester’s node disk manager makes a filesystem on the extra disk first, and then adds the filesystem path to Longhorn.
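A manual approximation of those steps, applied to a multipath device, might look like the following. The device name and mount point are illustrative assumptions, not what node disk manager actually uses; set `DRY_RUN=1` to print the commands instead of running them:

```shell
# Sketch: format an extra disk and mount it where Longhorn can be pointed at it,
# mimicking what Harvester's node disk manager does for plain /dev/sdX disks.
prepare_disk() {
    dev="$1"   # e.g. /dev/mapper/mpathd (placeholder)
    mnt="$2"   # e.g. /var/lib/harvester/extra-disks/mpathd (placeholder)
    run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }
    # make a filesystem on the device
    run mkfs.ext4 "$dev"
    # mount it at a stable path that Longhorn can be given as a disk
    run mkdir -p "$mnt"
    run mount "$dev" "$mnt"
}
```

Making the mount persistent across reboots (fstab or a mount unit) would still be needed, which is the part that is unclear for multipath devices on Harvester.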
g
Yes, you're absolutely right — this is currently not supported by Longhorn. That's why I'm asking if anyone knows how the disk attachment to the nodes is actually done, so I can dig a bit deeper into the Longhorn code and implement a temporary workaround until proper multipath support is added. Thanks to everyone for your time and the helpful suggestions!
👍 1
t
the workaround is a CSI. lol. 😄