# k3s
l
This is Ubuntu 20.04.4, and when comparing the k3s.service files in /etc/systemd/system/k3s.service they look exactly the same … can't find the needle in the haystack.
Any advice? Thank you.
Comparing them side by side in e.g. VS Code, there's no difference AT ALL.
c
we’re going to release v1.23.7+k3s2 to fix that. The workaround is to upgrade the first server (the one pointed at by the --server flag on the failing node) first
l
aaah okay so it’s known … fair enough.
c
or remove the --server flag from the arguments
l
hmmm … removing the --server flag, no consequences of that one should know of?
c
not if it's already joined to the cluster
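For anyone following along, the --server flag lives in the ExecStart line of the systemd unit. A minimal sketch of what removing it might look like (the path and IP are illustrative placeholders, not taken from this thread):

```ini
# /etc/systemd/system/k3s.service (excerpt) - illustrative only
[Service]
Type=notify
# Before: the joining server pointed at an existing server
# ExecStart=/usr/local/bin/k3s server --server https://<first-server-ip>:6443
# After: once the node has joined the cluster, the flag can be dropped
ExecStart=/usr/local/bin/k3s server
```

After editing the unit, a `systemctl daemon-reload` followed by `systemctl restart k3s` would be needed for the change to take effect.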
l
And the issue is the same in v1.23.8+k3s1? I also tried going to that version directly from v1.23.6+k3s1 and the same thing happened.
c
yep
l
okay fair … so +k3s2 … on v1.23.8 coming up I guess 🙂
The server being pointed at in this setup is the API IP … handled via a keepalived floating IP "cluster" across the control-plane nodes.
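For context, a keepalived floating IP across control-plane nodes is typically a single VRRP instance along these lines (the interface name, VIP, and priorities below are made-up placeholders, not taken from this setup):

```conf
# /etc/keepalived/keepalived.conf - illustrative sketch
vrrp_instance K3S_API {
    state MASTER            # BACKUP on the other control-plane nodes
    interface eth0          # placeholder interface name
    virtual_router_id 51
    priority 200            # lower priority on the BACKUP nodes
    advert_int 1
    virtual_ipaddress {
        192.0.2.10/24       # placeholder API VIP that --server points at
    }
}
```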
Oooooh, interesting question arises from this debugging session I'm having here. So, I've coded some logic that introduces change, where change might be:
• worker node disk conf.
• k3s version
• OS version
  ◦ patches
• "hardware" resources
The change is introduced by interchanging the nodes … for the control-plane this is one at a time … querying the control-plane health and so on.
So the initial first control-plane node, the one with the --cluster-init flag. When that is interchanged, it is replaced with "one" with the same node name and the same IP. However, it no longer has the --cluster-init flag. Is this an issue?
Hmm, reading the docs, I think that the --cluster-init flag/arg actually needs to be removed.
c
it doesn’t. It’s ignored after the datastore is initialized.
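To make the answer concrete: only the very first server initializes the embedded datastore; after that the flag is ignored, and a replacement with the same name and IP simply joins. A hedged sketch using k3s's config-file form (the token and VIP are placeholders):

```yaml
# /etc/rancher/k3s/config.yaml on the *first* control-plane node (first boot)
cluster-init: true
token: <shared-cluster-token>     # placeholder

# On a replacement (or any additional) server, cluster-init is omitted;
# it joins via the API VIP instead:
# server: https://192.0.2.10:6443
# token: <shared-cluster-token>
```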
l
Super - thank you for the info.