# k3s
r
For some reason, when I stop 2 of the 3, the last control-plane node also becomes unhappy because of health checks to the others.
s
Once your HA cluster no longer has quorum (i.e. a majority of the control-plane nodes is no longer available), it cannot continue.
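For context, embedded etcd needs a strict majority to keep accepting writes: with 3 servers quorum is 2, so a single surviving node can't form one on its own. A quick way to see what etcd itself reports is sketched below; the certificate paths are assumptions based on the default k3s data directory, and etcdctl has to be installed separately.
```sh
# Query the embedded etcd on a surviving k3s server node.
# Cert paths assume the default data dir /var/lib/rancher/k3s.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt \
  --cert=/var/lib/rancher/k3s/server/tls/etcd/server-client.crt \
  --key=/var/lib/rancher/k3s/server/tls/etcd/server-client.key \
  endpoint health
```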
r
Yeah, the question is… can I somehow reconfigure it to run with only 1 control-plane node?
s
I don't know whether you can remove the two nodes from the cluster, but it feels like you're in a catch-22 situation
r
You read my issue in the post before? Yeah… it pretty much feels like it. And I thought downgrading to a single control plane would make things easier… and then adding server nodes again later.
s
I had skimmed your issue previously, but it wasn't an issue I'd noticed.
I've tried looking for FAQs about not being able to scale down from a 3 node HA to a single node, but I haven't found anything yet. Maybe it was said in Slack and has disappeared.
c
Starting a node with --cluster-reset will make it the only member of the etcd cluster. At that point you could delete and rejoin the other nodes as agents.
r
So I would stop all three server instances and then restart one with cluster-reset, and it'll still have the data, but as a single control-plane node? That sounds good and might let me solve the issue I posted as a GitHub issue above :D
Would I have to recreate all agents again, or would they rejoin automatically if the token stays the same?
I'll try this out tomorrow in a test cluster :) But thanks, that's a good lead.
c
Stop the k3s service on all three nodes, run
k3s server --cluster-reset
on one of them, then start the service again.
Then reinstall the other servers as agents.
And yes, this keeps all the datastore data.
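For anyone following along later, here's a rough sketch of the whole procedure as described above; it assumes the standard get.k3s.io install with systemd, and <server-ip> / <token> are placeholders for the surviving server's address and its node token.
```sh
# On all three server nodes: stop the k3s service.
sudo systemctl stop k3s

# On the one server you want to keep: reset embedded etcd to a single member,
# then start the service again.
sudo k3s server --cluster-reset
sudo systemctl start k3s

# On each of the other two former servers: remove the server install and
# rejoin as an agent. The token can be read on the remaining server from
# /var/lib/rancher/k3s/server/node-token.
sudo /usr/local/bin/k3s-uninstall.sh
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -
```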
s
Wow - that really is magic.
c
That flag is documented in passing at https://docs.k3s.io/backup-restore#options
r
This seems to be a feasible way out of my checkmate situation 🙂 Downscaling the control plane to 1 server, changing the important config flags, and then adding more servers again seems to work fine (see the sketch below).
@creamy-pencil-82913 thanks again for the hint with the --cluster-reset flag - that was the missing link 😄 Now I can fix all my k3s clusters properly (at least bring them to 1.23 😄 let's see what the future brings).
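If someone wants to scale the control plane back up later, joining additional servers to the existing embedded-etcd cluster looks roughly like this; again, <server-ip> and <token> are placeholders for the first server's address and node token.
```sh
# On each new control-plane node: install k3s in server mode and point it at
# the existing server so it joins the etcd cluster instead of initializing a new one.
curl -sfL https://get.k3s.io | K3S_TOKEN=<token> sh -s - server --server https://<server-ip>:6443
```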