kind-air-74358
10/23/2025, 11:21 AM
…local cluster).
We tried to update the Kubernetes version using Rancher's cluster management. During this update our CNI (Cilium) Helm values were somehow overwritten, which caused Cilium to lose networking (kubeProxyReplacement was disabled, but no kube-proxy was running). This in turn left our local cluster in a broken state. After redeploying Cilium with the correct Helm values, the local cluster recovered.
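For reference, a minimal sketch of the kind of Helm override that keeps Cilium's kube-proxy replacement enabled on RKE2. This assumes the bundled rke2-cilium chart configured through a HelmChartConfig; the API server host and port shown are placeholders, not our actual values:
```
# Hypothetical HelmChartConfig for RKE2's bundled Cilium chart.
# Placed in /var/lib/rancher/rke2/server/manifests/ on a server node.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: |-
    kubeProxyReplacement: true         # must stay enabled when no kube-proxy is deployed
    k8sServiceHost: <api-server-host>  # placeholder: reachable API server endpoint
    k8sServicePort: 6443               # placeholder: API server port
```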
However, because the cluster was completely broken in the meantime, the Kubernetes upgrade got stuck: three nodes are still running the old version (v1.31.7+rke2r1), while the other nodes are already running the updated version (v1.32.9+rke2r1).
Rancher now reports that it is trying to update one of the worker nodes still running v1.31.7+rke2r1, but nothing happens.
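As a starting point for digging into the stuck node, a few generic diagnostic commands (assuming the nodes were provisioned by Rancher and run rancher-system-agent; adjust the RKE2 unit name for agent vs. server nodes):
```
# Confirm which nodes are still on the old kubelet version
kubectl get nodes -o wide

# On the stuck node: see what the Rancher provisioning agent is doing
journalctl -u rancher-system-agent --since "1 hour ago"

# On the stuck node: check the RKE2 service itself
journalctl -u rke2-agent --since "1 hour ago"   # use rke2-server on control-plane nodes
```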
Is there any way to force Rancher to retry upgrading the failing nodes?