# harvester
h
Hey there, any suggestions on how to troubleshoot dynamic provisioning of Harvester volumes on guest k8s clusters? I see that static provisioning works perfectly fine - as in, if I create a PV, the PVC binds to it - but without a PV explicitly created, the PVC stays Pending.
w
make sure you have set up the storage class in the guest cluster to match the class in Harvester in Rancher
Explore -> More Resources -> CSIDrivers, make sure driver.harvesterhci.io is listed, then More Resources -> StorageClasses - set one up using that provisioner and make sure they match. Also make sure you have enough replicas so the volumes can migrate with pods.
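The same checks can also be done from the guest cluster with kubectl - a sketch; the StorageClass name here is just an example, use whatever yours is called:

```shell
# Confirm the Harvester CSI driver object is registered in the guest cluster
kubectl get csidrivers driver.harvesterhci.io

# List StorageClasses and check which provisioner each one uses
kubectl get storageclass -o wide

# Inspect a class to confirm the provisioner and host-storage-class
# parameter match what is configured on the Harvester side
kubectl get storageclass harvester-default -o yaml
```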
h
Thanks Craig - we don't have Rancher Manager in this setup. The RKE2 cluster was deployed manually on Harvester vended VMs.
w
Ahh - that's how you do it in Rancher with Longhorn 🤷
h
Yes, there is a storage class called 'longhorn-k8sguest' configured on Harvester. The number of replicas is set to 3, and the volume binding mode seems right as well.
w
it also has to be set up within your cluster with the driver, then it can connect - I don't know how you do that manually, but likely you can apply the charts that Rancher manages if you know what you're doing.
Your cluster won't know about the host otherwise
h
Yes, let me paste the manifest we use for configuring the guest k8s cluster's SC.
And the odd part is static provisioning of PVs seems to work fine! Just the dynamic provisioning seems to have a problem.
w
storage classes defined?
h
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: harvester-default
provisioner: driver.harvesterhci.io
parameters:
  host-storage-class: "longhorn-k8sguest"
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
```
This is the storage class defined on the guest k8s cluster.
And with this, I'm able to create PVs - pending PVCs are able to bind to them as well. However, PVCs stay Pending if a PV is not created.
w
That's what we have - but we named them the same
The PVC? Something like -
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: NAME
  namespace: NS
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: harvester-default
  volumeMode: Filesystem
```
h
yes, the same except for the default volumeMode
let me recreate the PVC with that and see if it makes a difference
```text
Type    Reason                Age                  From                         Message
----    ------                ----                 ----                         -------
Normal  ExternalProvisioning  1s (x12 over 2m46s)  persistentvolume-controller  Waiting for a volume to be created either by the external provisioner 'driver.harvesterhci.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
```
One part I'm not clear about is - shouldn't there be a dynamic provisioner pod running on the guest k8s cluster? I don't see one - how does the communication happen between the guest k8s VMs and the Harvester host CSI? It's a little unclear to me.
w
Isn't that what the harvester-csi-driver DaemonSet does?
h
In this setup, there is a considerable amount of network segmentation, so it's quite possible a firewall is blocking the interaction between the guest k8s VMs and the Harvester host. I don't really see any pod running on the guest k8s cluster pertaining to the CSI driver
w
then the driver is missing and that's what's preventing the dynamic provisioning, I suspect - I don't know how you install it, as it's done for me when I set up via Rancher with the Harvester drop-in
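One quick way to confirm whether the driver is installed at all - a sketch; the namespace and object names below are what the standard chart uses and may differ in a customized install:

```shell
# The chart normally installs a node DaemonSet in kube-system
kubectl -n kube-system get ds harvester-csi-driver

# The dynamic-provisioning side runs as a controller Deployment
# (csi-provisioner and friends), so check for Deployments too
kubectl -n kube-system get deploy | grep -i harvester
```

If neither shows up, the guest cluster has no component watching for PVCs with the driver.harvesterhci.io provisioner, which would explain PVCs staying Pending while statically created PVs still bind.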
h
do you see a provisioner pod running on your guest k8s cluster?
w
Yes
It's a DaemonSet - so one pod per node
h
yeah, that makes sense - can you share the output of kubectl get ds -A?
Also helm list -A - trying to see how the DaemonSet got created.
w
```text
% kubectl get ds -A
NAMESPACE        NAME                   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
calico-system    calico-node            10        10        6       10           6           kubernetes.io/os=linux   149d
kube-system      csi-nfs-node           10        10        6       10           6           kubernetes.io/os=linux   148d
kube-system      harvester-csi-driver   6         6         6       6            6           kubernetes.io/os=linux   149d
```
those are the lines you're interested in
h
yeah, I definitely don't have that harvester-csi-driver DaemonSet running on my guest k8s cluster
can you check if there was a chart deployed by Rancher Manager to deploy this?
I'm going to try to deploy this - https://github.com/harvester/charts/releases/tag/harvester-csi-driver-0.1.25 - and see if it helps.
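For a manually built RKE2 cluster, that chart can be installed with Helm - a sketch assuming the published Harvester chart repo, and assuming the cloud-provider config (a kubeconfig the driver uses to reach the Harvester cluster) is already in place on the nodes, which the chart expects:

```shell
# Add the Harvester chart repository (assumed URL) and install the
# CSI driver chart into kube-system on the guest cluster
helm repo add harvester https://charts.harvesterhci.io
helm repo update
helm install harvester-csi-driver harvester/harvester-csi-driver \
  --namespace kube-system --version 0.1.25
```

With the network segmentation mentioned above, it's also worth verifying the guest VMs can actually reach the Harvester cluster's API endpoint, since that's the path dynamic provisioning depends on.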
w
Was managed by Rancher for sure, apart from NFS - I vaguely remember setting that up - I'm off now - hope that info helps get you closer to where you need to be. That looks like your ticket! Good luck.
h
thanks Craig - much appreciated.
w
I'll fax the invoice
h
i'll send in the bitcoin 😄