# harvester
g
Hello everyone, I have a question I hope someone can help me with. Before updating Harvester, all nodes had the same IQN (`InitiatorName`), but after the update, the IQN has changed. Is it possible that the Harvester update caused this change, or is it something managed by Kubernetes? This change is causing issues with adding disks from our QSAN storage server, as the updated IQN no longer matches the expected configuration. Thank you in advance for your time and support! Best regards, Damyan
f
Did you use an iSCSI CSI driver? Or did you just keep going with manually setting up the target?
g
Hi, I'm setting it up manually via `iscsiadm`, not using a CSI driver.
f
This will not work because nothing you do in the operating system manually is saved between reboots. My understanding is that you must use the CSI.
☝️ 1
b
I think technically you might be able to hack the `/oem/90_custom.yaml` and write a script to get `iscsiadm` to perform the actions after every reboot, but best practice would be to use the CSI.
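For illustration, a minimal sketch of what such an `/oem/90_custom.yaml` stage could look like (the portal address and target IQN below are placeholders, not values from this thread):

```yaml
# Sketch only -- portal address and target IQN are placeholders.
name: "iSCSI login after boot"
stages:
  network:
    - name: "Log in to the iSCSI target on every boot"
      commands:
        - iscsiadm -m discovery -t sendtargets -p 192.0.2.10
        - iscsiadm -m node -T iqn.2024-01.com.example:target0 -p 192.0.2.10 --login
```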
a
It was an error that all Harvester nodes had the same initiatorname - these should be unique per host. This was fixed in v1.5.0, see: https://github.com/harvester/harvester/issues/6911
After that fix, if you wish to change the initiatorname, it should be enough to simply edit `/etc/iscsi/initiatorname.iscsi`, because those changes should now persist. For other versions, as @bland-article-62755 said, you can add YAML to `/oem` to override it (see the suggested workaround in the description of that issue).
That said, you're likely better off using CSI if possible 🙂
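For reference, `/etc/iscsi/initiatorname.iscsi` normally holds a single `InitiatorName=` line, along these lines (the IQN shown is just a placeholder):

```yaml
# /etc/iscsi/initiatorname.iscsi -- the IQN below is a placeholder, not a value from this thread
InitiatorName=iqn.2024-01.com.example:node1
```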
g
First of all, thank you all very much for your replies — I really appreciate the help. Secondly, I assume you’re suggesting that I use this: https://github.com/kubernetes-csi/csi-driver-iscsi to automatically attach the disk to the iSCSI node, and then re-add the disk through the Harvester UI. Thanks again for taking the time to support me!
🐿️ 1
a
I haven't tried that particular iSCSI CSI driver myself, but the idea with CSI is that you'd install a CSI driver, then create a storage class that uses it, then create volumes via that storage class to attach to your VMs, i.e. the VMs are talking (more or less directly) to iSCSI volumes. That you said "re-add the disk through the Harvester UI" makes me think you might actually be using your external iSCSI storage as backing storage for Longhorn, which is a different approach, and CSI won't help you with that.
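As a rough sketch of that flow, a statically provisioned PersistentVolume backed by csi-driver-iscsi might look roughly like this (driver and attribute names follow that project's examples and should be verified against its docs; the portal, IQN, and object names here are placeholders):

```yaml
# Sketch only -- verify against the kubernetes-csi/csi-driver-iscsi documentation.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qsan-lun0                          # hypothetical name
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: iscsi.csi.k8s.io               # driver name used by csi-driver-iscsi
    volumeHandle: qsan-lun0                # any unique ID
    volumeAttributes:
      targetPortal: "192.0.2.10:3260"      # placeholder portal
      iqn: "iqn.2024-01.com.example:target0"  # placeholder target IQN
      lun: "0"
```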
g
Hi, Yes, that's correct — I'm using iSCSI for external storage. I have two QSAN units that serve as external storage backends for Longhorn.
a
OK, in that case, if you're using Harvester v1.5 you should be able to set the initiatorname by editing `/etc/iscsi/initiatorname.iscsi` on each host, so if you really need to you could set them all back to the same name. BUT each host really is meant to have a different name, so it would be best to have unique names for each host and update the config on the storage server to allow connections from all hosts (not just the one name). I haven't tried backing Longhorn with iSCSI myself, so I'm interested to hear how it works for you.
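If unique names are wanted, a minimal sketch for each host, assuming the open-iscsi `iscsi-iname` utility is present on the node:

```sh
# Run on each Harvester host -- sketch only; iscsi-iname generates a random, unique IQN.
echo "InitiatorName=$(iscsi-iname)" > /etc/iscsi/initiatorname.iscsi
# Restart iscsid (or reboot) so the new initiator name takes effect.
systemctl restart iscsid
```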
f
You need to understand that Longhorn doesn't support iSCSI backing in the way that VMware/VMFS works. You can't have Harvester place virtual disks "on the SAN" in the same way it works in VMware. If this is your goal then you should stop here and reassess, because I can tell you with absolute certainty that this won't work. Harvester is not designed to work this way. If you mount the same iSCSI target on each host and point Longhorn at it then your data will be corrupted. If you mount separate LUNs on each host then you'll end up with multiple copies of the same disk on the SAN. It might be worth asking what you are expecting here?
a
Yeah, for it to work you'd need a separate iSCSI target per host, so that from Longhorn's perspective it just looked like a separate local disk per host; Longhorn would then replicate volumes across those just as it would with local disks (in theory). To the best of my knowledge the Longhorn engineering folks don't/haven't done any testing of this scenario.
f
I think the question is really why would you want to do this? Performance would likely be poor and storage efficiency would be terrible. Harvester really isn't built for shared block storage SANs, unless those SANs ship a CSI driver. Which those QSAN units allegedly have: https://www.qsan.com/en/os/xevo/cloudnative
a
Yeah, if you can switch to using CSI, then it's just Harvester VM -> CSI Volume (on iSCSI) -> SAN, vs: Harvester VM -> Longhorn Volume -> iSCSI -> SAN
and if your SAN is already doing replicated storage or RAID or something for volumes, then Longhorn replication on top of that is presumably redundant ... unless you set up a LH storage class to use a single replica, but even then you're still adding another layer in the storage stack for data to flow through (and potentially slow down)
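For reference, a single-replica Longhorn StorageClass is a small piece of YAML along these lines (the name is arbitrary; `numberOfReplicas` and `staleReplicaTimeout` are standard Longhorn StorageClass parameters, values here are illustrative):

```yaml
# Sketch: a Longhorn StorageClass with one replica, for volumes whose redundancy
# is already handled by the SAN's RAID.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-single-replica
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "1"
  staleReplicaTimeout: "2880"
```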
g
@full-night-53442 Thanks for the advice! Yes, I read that it's not possible to use the same target for multiple nodes, and that each node requires a separate target — that's exactly how I'm doing it now. I'm currently creating a separate target per node on the SAN servers, attaching the appropriate target to the corresponding node, and then adding it through the Harvester UI into Longhorn. I also followed the recommendation to change both the InitiatorName and the hostnqn. Thanks @ambitious-daybreak-95996 for the help as well!

Here's the YAML file I use to connect and mount the target automatically in case of a reboot:

```yaml
name: "ISCSI configuration"
stages:
  network:
    - name: "Add local IP address"
      commands:
        - ip addr add 192.168.70.20/24 dev enp216s0f1
    - name: "Set ISCSI automatic connection"
      commands:
        - iscsiadm -m discovery -t st -p 192.168.70.10
        - iscsiadm -m node -T TARGET_IQN -p ISCSI_IP --login
        - mkdir -p /var/lib/harvester/extra-disks/unique_mount_id
        - mount "$(readlink -f /dev/disk/by-path/ip-192.168.70.10:3260-iscsi-iqn.2024-01.com.example:target1-lun-0)" /var/lib/harvester/extra-disks/unique_mount_id
```

I also explored using volumes, but for a larger number of VMs it felt a bit inconvenient to create and attach a volume every time you set up a new VM. Personally, I think using a StorageClass and pre-created volumes makes sense when managing a small number of VMs. But since these two QSANs will be used solely as Longhorn storage, I find it more convenient to let Longhorn manage the disks directly. The SAN servers have four pools in total — two from each server: one 20TB RAID 5 pool and one 20TB RAID 6 pool.

Thanks again to everyone for your time and support — really appreciate the help!
The only issue I encountered was that I couldn't get it to work like this:
```sh
DISK_PATH=$(readlink -f /dev/disk/by-path/ip-192.168.70.10:3260-iscsi-iqn.2004-08.com.qsan:xs3216-000d47038:dev0.ctr1-lun-0)
mount "${DISK_PATH}" /var/lib/harvester/extra-disks/79de259774dce4983b72c9ddb950793c
```
It just doesn’t seem to work when I use a variable for the path, and I’m not sure why. Do you have any idea what might be causing this? Or am I possibly doing something wrong in the way I'm using variables?
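One possible cause, offered as an assumption since it depends on how the `/oem` stages execute their commands: if each entry under `commands:` runs in its own shell, a variable assigned in one entry is not visible to the next one. Under that assumption, a workaround sketch is to keep the assignment and the `mount` in a single entry using a YAML block scalar:

```yaml
# Sketch only -- assumes each list entry under "commands:" runs in a separate shell,
# so the assignment and the mount are kept together in one entry.
- name: "Mount the iSCSI disk"
  commands:
    - |
      DISK_PATH=$(readlink -f /dev/disk/by-path/ip-192.168.70.10:3260-iscsi-iqn.2004-08.com.qsan:xs3216-000d47038:dev0.ctr1-lun-0)
      mount "${DISK_PATH}" /var/lib/harvester/extra-disks/79de259774dce4983b72c9ddb950793c
```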