Q: I’m having issues with rke2 cluster provisioning in my homelab when using the Harvester cloud provider. The clusters all sit at “waiting for agent to connect…”
• k3s clusters provision fine with either the Harvester or the default cloud provider
• rke2 clusters provision fine with the default rke2 cloud provider
• I add iptables to the cloud-init user data when using either the openSUSE Leap or Ubuntu Jammy cloud images, since I’m running Calico or Calico + Multus (rough sketch of that snippet after this list)
• My Rancher cluster is running a Let’s Encrypt public cert
• The above behavior happens with any of the 1.25-1.27 Kubernetes versions
• This happens with both 1-node and 3-node clusters
• The logs on the VM for the cluster that failed to provision don’t show SSL errors or service issues (the commands I’ve been running are at the end of this post)
• I had two 3-node rke2 clusters working with the Harvester cloud provider. After a third failed to deploy, I rebuilt Rancher and re-associated it with Harvester, but I’m still seeing the same behavior. I then deleted my existing rke2 clusters and now can’t provision any with the Harvester cloud provider
• Rancher is deployed as VMs on top of Harvester
• The VMs are all on the same layer 3 network
• On the Ubuntu Jammy images at least, ufw is disabled
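For reference, the cloud-init user data I add is roughly this (only the iptables part; SSH keys, qemu-guest-agent, and the rest are trimmed out, so treat it as a sketch rather than my exact config):

```
#cloud-config
# Rough sketch: only the part that pulls in iptables for Calico / Calico + Multus.
# Everything else (ssh keys, qemu-guest-agent, etc.) is omitted here.
package_update: true
packages:
  - iptables
```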
Trying to work out which differences in rke2 are causing this is confusing 🤔
For example, the host network setting being off on rke2 vs k3s, which I believe is off by default.
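For context, this is roughly what I’ve been running on a stuck node (rancher.example.com stands in for my actual Rancher URL), and none of it shows SSL errors or failing services:

```
# On a node stuck at "waiting for agent to connect…"
systemctl status rancher-system-agent
journalctl -u rancher-system-agent --no-pager | tail -n 50

# Check the node can reach Rancher (placeholder URL) and trusts the Let's Encrypt cert
curl -v https://rancher.example.com/ping

# rke2 itself
journalctl -u rke2-server --no-pager | tail -n 50
```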