# harvester
b
Have you seen the latest docs? https://docs.harvesterhci.io/v1.4/rancher/csi-driver/#rwx-volumes-support You will need a storage network and to attach a NIC in the storage network to the VMs
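On the guest cluster side it then just looks like an ordinary PVC asking for ReadWriteMany; a minimal sketch, assuming the storage class is called "harvester" and 10Gi is just an example size (check `kubectl get storageclass` for yours):
```
# sketch: an RWX claim in the guest RKE2 cluster, served by the Harvester CSI driver
# "harvester" as the storageClassName and the 10Gi size are assumptions - check your cluster
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: harvester
  resources:
    requests:
      storage: 10Gi
EOF
```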
w
I did see that, however the host cluster is running Harvester 1.4.1, will take a quick look again tho. Also upgrading to 1.5 is our plan, but we'll go through the minor versions over the weekend
Also we have a dedicated storage network already, but will double check that too
b
We have it "working" with 1.4.1 but there is a big issue you should be aware of https://github.com/harvester/harvester/issues/7796
oh sorry, I misread. This isn't guest RWX, ignore me
Or are you trying to do this in a guest cluster pod?
w
We've a cluster installed on the Harvester, in which we were trying to install NeuVector, which requires RWX volumes - I hit this problem before and gave up in an earlier Harvester version and ended up running a dedicated NFS VM to serve shared volumes, but now I need to sort it
So for clarity: we have a separate Rancher pool (bare-metal, low-power machines) which manages a Harvester pool (high-power machines), on which we have used Rancher to deploy an RKE2 cluster with open-leap-micro as the base image (seems to have an NFS client pre-installed and it provisions nicely).
b
Thanks, so you might hit the issue I linked. We also had issues with the NFS share that backs the RWX volume "locking up" from time to time and all writes failed until we restarted the guest cluster pods. Sadly overall not a reliable experience and we moved away from it.
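When it locked up, bouncing whatever was mounting the share was enough to recover; roughly what we did, where the namespace and claim name are placeholders and jq is assumed to be installed:
```
# sketch: find and bounce the pods mounting a given PVC after the backing NFS share locks up
# "my-namespace" and "shared-data" are placeholders; assumes jq is available
kubectl get pods -n my-namespace -o json \
  | jq -r '.items[] | select(.spec.volumes[]?.persistentVolumeClaim.claimName == "shared-data") | .metadata.name' \
  | xargs -r kubectl delete pod -n my-namespace
```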
w
Bit of a blocker for using NeuVector then
Storage Network for RWX Volume Enabled: wasn't toggled in Longhorn
damn it... says all volumes must be detached before we can apply that!!!!
so that's pull the handbrake!
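For reference, that toggle is a Longhorn setting underneath - the name below is what the Longhorn docs call it, worth double checking against whatever Longhorn version 1.4.1 ships with:
```
# sketch: the Longhorn setting behind the "Storage Network for RWX Volume Enabled" toggle
# setting name taken from the Longhorn docs - verify with `kubectl -n longhorn-system get settings.longhorn.io`
kubectl -n longhorn-system get settings.longhorn.io storage-network-for-rwx-volume-enabled -o yaml
# it refuses to change while any volume is still attached, same as the UI warning
```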
b
ah yes, that was a pain
full shutdown
w
one for the weekend... will see if I can use the existing NFS for the NeuVector install in the meantime
b
if you are shutting down, make sure you have the storage network exclude IPs set in the storage network settings in Harvester, as that will also require a shutdown of all VMs
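e.g. it lives in the storage-network setting on the Harvester cluster itself, and the value is a JSON blob roughly like this (field names per the Harvester docs; the vlan, range and exclude values here are made up):
```
# sketch: what the Harvester storage-network setting roughly looks like with exclude IPs
# field names per the Harvester docs; the vlan, range and exclude values below are made up
kubectl get settings.harvesterhci.io storage-network -o jsonpath='{.value}'
# => {"vlan":100,"clusterNetwork":"storage","range":"10.150.8.0/22","exclude":["10.150.11.0/27"]}
```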
w
do i need to do that tho - as the vms use a completely different subnet
b
You need to attach a NIC from the storage network to the VMs hosting RKE2 and then give that NIC an IP from the excluded range (so it doesn't conflict with anything Harvester is doing)
e.g. one of our RKE2 VMs looks like this, with NICs:
```
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 0e:45:ec:4d:d6:bd brd ff:ff:ff:ff:ff:ff
    altname enp1s0
    inet 10.150.2.27/23 brd 10.150.3.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 0e:24:21:55:26:36 brd ff:ff:ff:ff:ff:ff
    altname enp2s0
    inet 10.150.11.1/22 brd 10.150.11.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
```
w
ahh ic
b
where the eth1 IP is one we've set from the excluded range in the storage network
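we set that statically inside the VM (via ansible in our case); done by hand it would look something like this, assuming NetworkManager is managing the NIC - the connection name and address are just examples:
```
# sketch: give the storage-network NIC a static IP from the excluded range
# assumes NetworkManager/nmcli inside the guest; connection name and address are examples
nmcli con add type ethernet ifname eth1 con-name storage \
  ipv4.method manual ipv4.addresses 10.150.11.1/22
nmcli con up storage
```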
w
I get you, makes sense - on the other end we dedicated separate 10G ports for storage on each Harvester node, so this needs to be routable to the pods then... think that's what you're saying... 🙂 almost fudged this with our dedicated NFS by manually creating the PV...
That's also painful though given the nodes in the RKE2 cluster are many...
also another consideration for when you scale the cluster up... won't that be better in the cloud-init for the worker templates?
Just seen on the nodes you can add networks there - so maybe a case of defining the storage network in Rancher so you can allocate it to the extra NIC from there
b
We set up the guest RKE2 cluster ourselves with Ansible and our workload size is fairly static, all the IPs are set via Ansible. Which isn't very dynamic but works for our use case. I don't think there is DHCP in the storage network to hook into but could be wrong.
and the VMs with Terraform, where we add a separate NIC config.
w
ahh - i've been driving this from rancher itself, will give this a play when I get a chance to shut down fully
fixed it for now - manually editing the PVC / PV that NeuVector uses makes it possible for me to use our dedicated NFS server. It's not perfect and only there for development - ultimately not a million miles away from what Longhorn would provision, only technically not as private or as tidy! It'll do for now while we find our way with this product. I'll take note of your points @brainy-kilobyte-33711, that's super helpful, and will let you know how we get on. The soonest I can likely try this will be the weekend - but I've quite a lot on and could do without shutting everything down just yet...
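For anyone reading this later, the stop-gap is just a hand-made NFS PV bound to the claim NeuVector mounts; a sketch of the idea, where the NFS server, export path, namespace and claim name are placeholders for our setup:
```
# sketch: "poor man's NFS" - a hand-made PV bound to the claim NeuVector mounts
# NFS server, export path, namespace and claim name are placeholders for our environment
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: neuvector-data-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  nfs:
    server: 10.150.2.50
    path: /export/neuvector
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: neuvector-data
  namespace: neuvector
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: neuvector-data-pv
  resources:
    requests:
      storage: 10Gi
EOF
```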
b
good luck!
w
Now I need to write all this up... thanks again, NeuVector is now running - albeit on poor man's NFS for now. I'll let you know how I get on with the RWX "proper" option later... though from the sounds of it I might be better off sticking with what we have!
b
Yeah, it might be more reliable to stick with yours until that issue I linked is fixed
w
Might lay the groundwork in the meantime - got to do 1.4.1 -> 1.5.0 yet
Just got the rancher/k8s side up-to-date and ready so next steps should be good
b
You know that you can also install NeuVector with an RWO volume? It's not ideal, but it's better than emptyDir...
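i.e. point the chart's persistence at an ordinary RWO class instead; roughly this in Helm, where the controller.pvc.* keys are from memory of the neuvector/core chart and should be double checked against your chart version:
```
# sketch: NeuVector controller persistence on an ordinary RWO volume instead of RWX
# the controller.pvc.* keys are from memory of the neuvector/core chart - verify before using
helm upgrade --install neuvector neuvector/core -n neuvector --create-namespace \
  --set controller.pvc.enabled=true \
  --set controller.pvc.accessModes='{ReadWriteOnce}' \
  --set controller.pvc.capacity=1Gi \
  --set controller.pvc.storageClass=harvester
```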
w
Well, I read that RWX was a requirement and, as stated, I got it working with an NFS PV prepared in advance and updated the Helm values, so we have persistence working just fine.