average-vr-7209

09/01/2022, 9:16 PM
Hello there. I have a strange issue: when I upgrade my cluster to a newer Kubernetes version, only 2 of my 4 worker nodes get upgraded. The master node and etcd node upgrade without issues. Rancher reports the upgrade as done after updating 2 out of 4 workers and gives no error. In the Rancher UI and with "kubectl get nodes" all my worker nodes look healthy. I also checked the clusterSpec in the Rancher API UI, and the 2 nodes are missing there as well. Anyone have an idea? It would be very much appreciated.
Rancher version: 2.5.16
Kubernetes version: 1.18.12
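A quick way to confirm which kubelet version each node is actually reporting, independent of what the Rancher UI shows (plain kubectl, nothing Rancher-specific assumed):

    # kubelet version per node, read straight from the node status
    kubectl get nodes -o custom-columns=NAME:.metadata.name,KUBELET:.status.nodeInfo.kubeletVersion

    # more detail per node (OS image, kernel, container runtime)
    kubectl get nodes -o wide

If the two skipped workers still report the old kubelet version here, the node components were never upgraded even though the nodes show as healthy.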

bright-whale-83501

09/02/2022, 9:14 AM
If your worker nodes are just worker nodes and don't have any other role, why don't you just delete them from Rancher, clean the nodes, and re-add them to the cluster? Is it a custom cluster?
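If you go that route, a rough sketch of the usual sequence (assuming the workloads on those nodes can be rescheduled elsewhere first; <node-name> is a placeholder):

    # move workloads off the node and mark it unschedulable
    kubectl cordon <node-name>
    kubectl drain <node-name> --ignore-daemonsets --delete-local-data
    # (--delete-local-data is the kubectl 1.18 flag; newer kubectl uses --delete-emptydir-data)

    # then delete the node from the cluster in the Rancher UI,
    # clean the host following Rancher's node cleanup documentation,
    # and re-register it with the cluster's node registration command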

average-vr-7209

09/08/2022, 10:18 PM
Hi @bright-whale-83501, I'm considering that, but I have Longhorn installed with some StatefulSets on those nodes. I'm not sure I can move them without complex volume migrations. Some volumes have only 1 replica.
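One way to reduce that risk before draining anything (a sketch, assuming Longhorn's Volume CRD in the longhorn-system namespace; field names such as spec.numberOfReplicas should be verified against your Longhorn version): raise the replica count on single-replica volumes so a second copy is rebuilt on another node first.

    # list Longhorn volumes with their configured replica count and current node
    kubectl -n longhorn-system get volumes.longhorn.io \
      -o custom-columns=NAME:.metadata.name,REPLICAS:.spec.numberOfReplicas,NODE:.status.currentNodeID

    # bump a single-replica volume to 2 replicas before touching the node it lives on
    kubectl -n longhorn-system patch volumes.longhorn.io <volume-name> \
      --type merge -p '{"spec":{"numberOfReplicas":2}}'

The same change can also be made per volume from the Longhorn UI.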