# rke2
s
```
$ sudo /usr/local/bin/rke2-uninstall.sh
$ kubectl delete node <node>
```
run the install again as agent
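A rough sketch of that full sequence, assuming the standard get.rke2.io install script was used (adjust if the node was installed another way); `<node>` is a placeholder:
```
# on the node: wipe the existing rke2 install
sudo /usr/local/bin/rke2-uninstall.sh

# from a machine with cluster access: remove the stale node object
kubectl delete node <node>

# back on the node: reinstall as an agent and start it
# (assumes /etc/rancher/rke2/config.yaml points at the server URL and join token)
curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_TYPE="agent" sh -
sudo systemctl enable --now rke2-agent.service
```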
s
Hi @sparse-fireman-14239, thank you for this! I was hoping there was a way without the uninstall.sh 😅 as this agent is running active workloads
But if this is the only way...
s
There might be other methods, dunno.
s
I'm just wondering what keeps scheduling the pods again after I delete them
The pods are controlled by the node object
But there's nothing in there that gives a hint
Could it be the rke2-config-hash annotation?
```
rke2.io/node-config-hash: redacted====
```
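If it helps, a quick way to see exactly which labels and annotations rke2 has stamped on the node object (sketch; `<node>` is a placeholder):
```
# dump the node object and look under metadata.annotations / metadata.labels
kubectl get node <node> -o yaml | grep -A 10 'annotations:'
```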
c
have you tried to run rke2-killall.sh and then to start rke2-agent (systemctl start rke2-agent)?
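That is, roughly (sketch, assuming a script/tarball install where the helper scripts live in /usr/local/bin):
```
# stop every rke2-managed process and container on the node
sudo /usr/local/bin/rke2-killall.sh

# then bring the agent back up
sudo systemctl start rke2-agent
```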
s
No I haven't, but I did reboot. Let me try.
Did not work unfortunately
h
I have been in the same boat due to Terraform typos. You will need to uninstall, then re-install for the node to come back into the cluster as a worker. Doesn't take long.
c
delete the control-plane component manifests from /var/lib/rancher/rke2/agent/pod-manifests
and then delete the node from the cluster and rejoin it to get the labels and annotations cleared
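A sketch of that cleanup, assuming this node should go back to being a plain agent; the exact manifest filenames can vary by rke2 version, so check the directory before deleting, and `<node>` is a placeholder:
```
# on the node: see which static control-plane manifests are present
ls /var/lib/rancher/rke2/agent/pod-manifests/

# remove the control-plane ones (filenames here are typical, not guaranteed)
sudo rm /var/lib/rancher/rke2/agent/pod-manifests/{etcd,kube-apiserver,kube-controller-manager,kube-scheduler}.yaml

# from a machine with cluster access: drop the stale node object
kubectl delete node <node>

# back on the node: restart the agent so it re-registers with clean labels/annotations
sudo systemctl restart rke2-agent
```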
s
Thanks Brandon, should this happen again I'll try this approach!
@creamy-pencil-82913 FYI, I needed this again, and tried your approach. Worked perfectly. Thanks!