12/02/2022, 11:37 AM
Hi team! Could you help me? I have an RKE2 cluster with 6 nodes: 3 nodes with all roles and 3 nodes with the worker role. The cluster was installed as a Custom cluster from Rancher 2.6. How can I remove the worker role from the first three nodes (the all-roles ones)?


12/03/2022, 8:02 AM
Hi, I had a similar situation this week. I was able to make the change, but this was on an RKE1 Rancher installation. The process is not documented, though, and it can be risky if you don't take your time and do it properly. Here's what I did on my RKE1 installation. I cordoned and drained the node whose role I wanted to change. Yes, you'll have to remove the node first before you can modify the role, which, I agree, is stupid. Once that's done, remove the node in the cluster management page and let Rancher do its magic until your cluster is available again. On the node, clean up/prune the containers. Then re-register the node using the Rancher registration command:
sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.6.7 --server https://rancher.domain --token XXX --ca-checksum XXX --etcd --controlplane
Once done, your node should only have the etcd and controlplane roles. I'm not sure it will help you since you're using RKE2, but it may give you a hint.
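The cordon/drain and cleanup steps above can be sketched roughly like this (the node name `worker-1` is just a placeholder; the exact prune command depends on your container runtime):

```shell
# Placeholder node name; substitute your own.
NODE=worker-1

# Cordon the node so no new pods are scheduled on it,
# then evict the existing workloads.
kubectl cordon "$NODE"
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data

# After removing the node in the Rancher cluster management page,
# run this on the node itself to clean up old containers and images
# before re-registering (Docker-based RKE1 node assumed):
sudo docker system prune -a
```

After re-registering with the agent command below, check the roles with `kubectl get nodes` and confirm the node shows only etcd and controlplane.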


12/06/2022, 11:18 AM
Just taint the node, job done. As an example (it depends on what you want, so refer to the Kubernetes taints docs): kubectl taint nodes <your node> CriticalAddonsOnly=true:NoExecute
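To confirm the taint was applied after running the command above, you can inspect the node's spec (same `<your node>` placeholder):

```shell
# List any taints currently set on the node.
kubectl get node <your node> -o jsonpath='{.spec.taints}'

# To undo it later, remove the taint with a trailing dash:
kubectl taint nodes <your node> CriticalAddonsOnly=true:NoExecute-
```

Note that `NoExecute` evicts pods already running on the node that don't tolerate the taint, so workloads will be moved off immediately.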


12/06/2022, 12:44 PM
Thanks a lot for your help!