# rke2
h
that config file is read when the service is started
n
There isn't a direct way to convert a server to an agent. But you can always remove the server node from the cluster (via `kubectl drain` and then `kubectl delete node`) + `rke2-killall.sh`, then re-add it as an agent.
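Roughly, assuming a systemd-managed install (the node name server-1 is just a placeholder), the conversion would look something like:
```bash
# From a machine with kubectl access: evict workloads and remove the node object
kubectl drain server-1 --ignore-daemonsets --delete-emptydir-data
kubectl delete node server-1

# On the node itself: stop the server unit and everything RKE2 spawned
systemctl disable --now rke2-server
rke2-killall.sh

# Re-join as an agent (config.yaml must point at an existing server and token)
systemctl enable --now rke2-agent
```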
We don't give examples of using the CLI because RKE2 is largely designed around running as a service. But all config values have an equivalent CLI flag. In this case,
```yaml
disable-etcd: true
```
is equivalent to `--disable-etcd=true` on the CLI
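For example, a minimal sketch assuming the default config path (whether disabling etcd makes sense depends on your topology, e.g. dedicated etcd nodes):
```yaml
# /etc/rancher/rke2/config.yaml -- read by the rke2-server service on start
disable-etcd: true
```
If you were running the binary by hand instead, that would be `rke2 server --disable-etcd=true`.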
l
thank you 🙏 let me evaluate the options
@nutritious-tomato-14686 it looks like `rke2-killall.sh` doesn't clean up everything? When I run the script and then try to run the agent, I still end up with control-plane components. Unless re-adding an agent involves something I'm missing? My procedure is:
• Cordon
• Drain
• Delete node
• rke2-killall
• start agent
n
I would also `rm -rf /var/lib/rancher/rke2/`; that should clean up all the etcd and server internal configuration.
After running rke2-killall
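For reference, a quick way to see what that wipes (contents differ between server and agent roles; the listing below is just what I'd expect on a server):
```bash
ls /var/lib/rancher/rke2/
# typically: agent/  data/  server/   -- server/db is where the etcd data lives
```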
l
Thank you 🙏 working on it
h
you should have /usr/bin/rke2-uninstall.sh, which does the uninstall and cleanup
that path is for RKE2 installed via RPM; for a tarball install the path may be different
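If I remember right, the tarball install drops the scripts under /usr/local/bin by default, so a quick way to find whichever you have:
```bash
# RPM install
ls -l /usr/bin/rke2-uninstall.sh /usr/bin/rke2-killall.sh
# tarball install (default prefix is /usr/local)
ls -l /usr/local/bin/rke2-uninstall.sh /usr/local/bin/rke2-killall.sh
```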
n
Yeah but in this case we don't want to uninstall the rke2 binary and service
h
oh right - he's changing the role
n
But you are correct, you could run `rke2-uninstall.sh` and then reinstall RKE2 as an agent... I wasn't sure what networking/host setup Joseph is running. Sometimes people have air-gapped setups or registry redirects
l
• Cordon
• Drain
• Delete node
• rke2-killall
• rm -rf /var/lib/rancher/rke2/
• start agent
did it for me, I will update with the complete procedure; need to put a fire out first 😄
This worked for me:
```
1. Cordon and drain the node.
2. Delete the node: kubectl delete node <node name>
3. Stop rke2-agent / rke2-server (we had cases where an rke2-server was running and in others the agent was running; it became messy)
4. Run the kill-all script
5. Clear these two directories (could be different and contextual):
   a. rm -rf /var/lib/kubelet/pods/*
   b. rm -rf /var/lib/rancher/rke2/
6. Start the agent
```
... then the directories are re-created and the node re-joins the cluster as a worker. We had to make sure rke2-server was masked and disabled on the worker nodes, and without clearing those two paths nothing would have changed.
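The masking step is the part that's easy to miss; on the converted worker it would look roughly like this, assuming the standard systemd units:
```bash
# Make sure the server unit never comes back on this node
systemctl disable --now rke2-server
systemctl mask rke2-server

# Keep the agent enabled so it survives reboots
systemctl enable --now rke2-agent
```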
Thank you for the assist, we are grateful 🙏 How we got here was originally caused by RKE2 certs expiring and not being auto-renewed via the server & agent reboot, along with unmasking of server and agent processes where they were masked. Long day 😄