This looks like an issue where Rancher does not have the correct state of the cluster, causing it to wait on something that will never finish. We might need to intervene manually to fix this, but I can't advise without more details:

- Rancher version
- Rancher install info (single node or HA, etc.)
- Kubernetes version of the cluster Rancher is installed on
- Debug/trace logs from the Rancher pods
- Depending on the output of the above, a dump of the relevant custom resources from the cluster, so we can compare the state Rancher has recorded with what actually exists

Please also include the list of nodes that actually exist, plus the output of kubectl get nodes, so I can match the current state with what Rancher thinks it is.
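If it helps, something along these lines should gather most of what I'm asking for. Treat it as a rough sketch: the server-version setting, the app=rancher label, and the loglevel helper inside the Rancher container are what I'd expect on a typical install, but they can differ depending on your Rancher version and how it was installed.

```bash
# --- Run against the cluster where Rancher itself is installed (local cluster) ---

# Rancher version (server-version is the setting I'd expect to hold it)
kubectl get settings.management.cattle.io server-version

# Kubernetes version and Rancher install footprint (how many replicas, HA or not)
kubectl version
kubectl -n cattle-system get pods -l app=rancher -o wide

# Enable debug logging in the Rancher pod(s) and collect logs
# (replace <rancher-pod> with the actual pod name from the command above)
kubectl -n cattle-system exec -it <rancher-pod> -- loglevel --set debug
kubectl -n cattle-system logs -f <rancher-pod>

# Rancher's view of the downstream cluster and its nodes
# (<cluster-id> is the c-xxxxx namespace that matches the affected cluster)
kubectl get clusters.management.cattle.io
kubectl get nodes.management.cattle.io -n <cluster-id> -o yaml

# --- Run against the affected downstream cluster itself ---

# Actual nodes as the cluster sees them, to compare against the Rancher resources above
kubectl get nodes -o wide
```

Comparing the nodes.management.cattle.io output with the kubectl get nodes output from the downstream cluster is usually the quickest way to spot where Rancher's recorded state has drifted from reality.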