# rke2
a
This message was deleted.
b
Yes
to the first question, I mean.
You can simply drain each node, delete it, then reinstall or make whatever changes you need to, then rejoin it to the cluster when you're done. Rotate through all your nodes and wham, you've done just this.
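Roughly like this, per node - the node name "worker-1" is just a placeholder, and the exact drain flags depend on what you're running:
```
# drain and remove the node (example name)
kubectl drain worker-1 --ignore-daemonsets --delete-emptydir-data
kubectl delete node worker-1

# reinstall / reconfigure the host, then rejoin it to the cluster
# (for an rke2 agent: point it at your server URL + token and start the service again)

# confirm it comes back Ready before moving on to the next one
kubectl get nodes
```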
s
Ok,...so that should even work for master nodes that host the etcd things? No conflicts when re-joining a new node later on,...or having an even number of nodes?
b
Oh, gosh, that part is a little harder 🙂
Give me a second to explain.
s
Yeah, I thought so...:-) That's why I thought it might be safer to build a new cluster and just migrate things
b
So, you can remove your control plane nodes (server nodes) one at a time, and re-add them - that shouldn't cause conflicts
but just be aware sometimes removing a control plane node can go "wrong" and etcd will still think the node exists when it doesn't any more.
If your cluster is healthy and you go reasonably slowly it's fairly safe, I've done it a few times.
I'd take backups, and go for it once you're confident you have backups.
There's a 90% chance you'll be able to do it easily, without a full rebuild of your cluster.
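Off the top of my head, something like this around each server node - I believe the built-in snapshot subcommand is `rke2 etcd-snapshot save`, but pod names and cert paths will vary on your install:
```
# on a server node, before removing anything: take an etcd snapshot
rke2 etcd-snapshot save

# after removing and re-adding a server node, make sure etcd agrees with reality
kubectl -n kube-system get pods | grep etcd
# then run `etcdctl member list` inside one of the etcd pods (with the cluster's
# client certs) - it should list exactly the server nodes you expect, no stale members
```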
s
Hm, ok,...that makes sense, I guess. Yeah, I've got backups of everything running on the cluster, so that should not be an issue.
I think I will change hostnames and IP addresses just to be sure
b
As for "even" clusters
The issue with, say, a four-node etcd cluster is that it has no more redundancy than a three-node etcd cluster has
and it is in fact "worse" because you now have four nodes that can fail instead of three, and any two can take you offline
It's actually fine to run etcd with an even number of nodes - it's just A) not recommended and B) not something you should do long term. As a temporary thing it's fine.
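The quorum math behind that, if it helps - etcd needs a strict majority, i.e. floor(n/2) + 1 members:
```
# quorum = floor(n/2) + 1, tolerated failures = n - quorum
for n in 3 4 5; do
  q=$(( n / 2 + 1 ))
  echo "$n members -> quorum $q, tolerates $(( n - q )) failure(s)"
done
# 3 members -> quorum 2, tolerates 1 failure(s)
# 4 members -> quorum 3, tolerates 1 failure(s)   <- no better than 3
# 5 members -> quorum 3, tolerates 2 failure(s)
```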
s
Ok, that makes sense.
b
But perfectly safe and sane to do during node churn.
There's literally no way to avoid it in those circumstances, unless you're used to regularly throwing away and rebuilding clusters 🙂
s
Yep,....although this would still be easier than handling something similar on VMs,...;-)
b
Yeah, I'm... really appreciating Kubernetes more and more
as my current tasks involve running thousands of containers
and I just would never be able to do that at scale with VMs/LXCs without serious pain
s
Yeah, I totally agree, Kubernetes really helps in that scenario.