# rke2

adamant-kite-43734

12/22/2022, 9:59 AM
This message was deleted.

sparse-fireman-14239

12/22/2022, 2:30 PM
$ sudo /usr/local/bin/rke2-uninstall.sh
$ kubectl delete node <node>
Run the install again as agent.
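(For reference, roughly what that sequence looks like end to end — a sketch, assuming the standard install script from get.rke2.io; `<node>`, the server URL, and the token are placeholders:)
```bash
# 1. On the affected node: remove the existing rke2 install entirely
sudo /usr/local/bin/rke2-uninstall.sh

# 2. From a machine with cluster access: remove the stale node object
kubectl delete node <node>

# 3. Back on the node: reinstall, explicitly as an agent this time
curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_TYPE="agent" sh -

# 4. Recreate the agent config if the uninstall removed it, then start the
#    agent and wait for the node to re-register
sudo mkdir -p /etc/rancher/rke2
cat <<'EOF' | sudo tee /etc/rancher/rke2/config.yaml
server: https://<server>:9345
token: <token>
EOF
sudo systemctl enable --now rke2-agent.service
```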

steep-manchester-31195

12/22/2022, 2:31 PM
Hi @sparse-fireman-14239, thank you for this! I was hoping there was a way without uninstall.sh 😅, as this agent is running active workloads.
But if this is the only way...

sparse-fireman-14239

12/22/2022, 2:31 PM
There might be other methods, dunno.

steep-manchester-31195

12/22/2022, 2:32 PM
I'm just wondering what keeps scheduling the pods again after I delete them
The pods are controlled by the node object
But there's nothing in there that gives a hint
Could it be the rke2-config-hash annotation?
rke2.io/node-config-hash: redacted====
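(A quick, read-only way to see which annotations the node actually carries — `<node>` is a placeholder:)
```bash
# Dump all annotations on the node, including rke2.io/node-config-hash
kubectl get node <node> -o jsonpath='{.metadata.annotations}'

# Or in a more readable form
kubectl describe node <node> | grep -A 10 'Annotations:'
```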

cold-egg-49710

12/22/2022, 2:46 PM
Have you tried running rke2-killall.sh and then starting rke2-agent (systemctl start rke2-agent)?
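(On the node itself, that suggestion amounts to the following, assuming the helper script is in its default location:)
```bash
# Stop every rke2-managed process and container on this node
sudo /usr/local/bin/rke2-killall.sh

# Start the agent service again
sudo systemctl start rke2-agent
```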

steep-manchester-31195

12/22/2022, 2:46 PM
No I haven't, but I did reboot. Let me try.
Did not work, unfortunately.

hallowed-breakfast-56871

12/22/2022, 6:49 PM
I have been in the same boat due to Terraform typos. You will need to uninstall, then re-install for the node to come back into the cluster as a worker. It doesn't take long.

creamy-pencil-82913

12/22/2022, 7:30 PM
delete the control-plane component manifests from /var/lib/rancher/rke2/agent/pod-manifests
and then delete the node from the cluster and rejoin it to get the labels and annotations cleared
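(A sketch of this approach. The file names below are the control-plane static pod manifests rke2 typically writes to that directory, so list the directory first and remove only what is actually there; `<node>` is a placeholder:)
```bash
# On the affected node: check which control-plane static pod manifests exist
ls /var/lib/rancher/rke2/agent/pod-manifests/

# Remove the control-plane ones (delete only what the listing actually shows)
cd /var/lib/rancher/rke2/agent/pod-manifests/
sudo rm -f etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml

# From a machine with kubectl access: delete the node object so the stale
# labels and annotations are dropped when the agent re-registers
kubectl delete node <node>

# Back on the node: restart the agent so it rejoins the cluster
sudo systemctl restart rke2-agent
```
This also explains why deleting the pods alone didn't help earlier: they are static pods, so the kubelet recreates them from the manifests in that directory for as long as the files exist.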

steep-manchester-31195

01/24/2023, 3:20 PM
Thanks Brandon, should this happen again I'll try this approach!
@creamy-pencil-82913 FYI, I needed this again, and tried your approach. Worked perfectly. Thanks!