Hi, I need some advice:
One of my RKE 1.3.11 nodes died because of a disk crash. I had to replace the disk and reprovision the OS with Ansible.
I'm not sure what the procedure is to bring that node back into the cluster. Can you confirm whether the following steps are correct?
• Should I first remove the node from the cluster with "kubectl delete node k8s-10.10.10.43" before proceeding to the next step?
• Then run "rke up --config myconfig.yaml --update-only"?
I'm not sure whether running with just --update-only could break the production cluster.
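For reference, the two steps above can be sketched as below. This is a minimal sketch, not a definitive procedure: the node name `k8s-10.10.10.43` and config file `myconfig.yaml` are taken from the question, and it assumes the replacement node keeps the same address, SSH credentials, and roles in `myconfig.yaml`. Per the RKE documentation, `--update-only` skips redeploying the etcd and control planes and only reconciles worker-plane changes, which is why it is commonly used when re-adding a worker node.

```shell
# 1. Check the current state and remove the stale Node object
#    (the old node will likely show NotReady after the disk crash):
kubectl get nodes
kubectl delete node k8s-10.10.10.43

# 2. Verify the reprovisioned node is still listed in myconfig.yaml with the
#    same address and roles, then reconcile only the worker plane:
rke up --config myconfig.yaml --update-only
```

If the failed node held an etcd or controlplane role rather than a worker role, `--update-only` would not be appropriate, since that flag deliberately skips those planes; a plain `rke up --config myconfig.yaml` would be needed instead.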