# general
k
Hi all, I'm having some trouble upgrading the Kubernetes version running Rancher (i.e. the `local` cluster). We tried to update the Kubernetes version through Rancher's cluster management. During this update our CNI (Cilium) Helm values were somehow overwritten, which caused Cilium to lose its networking (kubeProxyReplacement was disabled, but no kube-proxy was running). This in turn left our `local` cluster in a broken state. After redeploying Cilium with the correct Helm values, the `local` cluster recovered. But because the cluster had been completely broken, the Kubernetes upgrade got stuck: three nodes are still running the old version (v1.31.7+rke2r1), while the other nodes are running the updated version (v1.32.9+rke2r1). Rancher now reports that it is trying to update one of the worker nodes still on v1.31.7+rke2r1, but nothing happens. Is there any way to force Rancher to retry the failing nodes?
It looked like the system-upgrade-controller was giving an error; restarting the controller let the upgrade continue.
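In case it helps others, the restart was something like the following; it assumes the controller deployment lives in `cattle-system`, which is where Rancher runs it on our setup:
```
# Look at the controller's error before restarting
kubectl -n cattle-system logs deploy/system-upgrade-controller --tail=50

# Restart the controller so it re-reconciles the upgrade plans
kubectl -n cattle-system rollout restart deploy/system-upgrade-controller

# Watch the plans and upgrade jobs being picked up again
kubectl -n cattle-system get plans,jobs
```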
m
Upgrading the local cluster via the cluster management page is not supported. Whichever way you deployed/installed the underlying Kubernetes is the way you need to update K8s for your local Rancher cluster.
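For RKE2 (which the version strings above suggest you're on), the manual per-node upgrade is roughly the following sketch; verify the service names and drain/ordering against the RKE2 docs for your setup, and do servers one at a time before the agents:
```
# Optionally drain the node first
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

# On the node: re-run the installer pinned to the target version from this thread
curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION="v1.32.9+rke2r1" sh -

# Restart the service so the node comes up on the new version
systemctl restart rke2-server   # rke2-agent on worker nodes

# Confirm, then uncordon and move on to the next node
kubectl get nodes
kubectl uncordon <node-name>
```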
☝️ 2
k
That's interesting, as it is (unfortunately) possible via the UI 😞.