# rke2
b
is there a doc that explains what happens during the upgrade process?
r
My understanding is you do need to go in steps between Kubernetes minor versions (1.n -> 1.n+1) and can't skip any. Containerd is bundled with RKE2, so as I understand it, it gets updated along with RKE2. How you handle restarting things depends on whether you care about your workloads being unavailable during upgrades, or have stateful services that may get mucked up. Otherwise you should be able to upgrade incrementally or all at once.
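As a rough sketch of what that per-node restart handling might look like (node name is hypothetical; `rke2-agent` applies to worker nodes, `rke2-server` to control plane nodes):

```bash
# Cordon and drain a node before restarting its RKE2 service, so workloads
# get rescheduled elsewhere instead of going down with the node.
kubectl cordon worker-1
kubectl drain worker-1 --ignore-daemonsets --delete-emptydir-data

# Restart the RKE2 service on that node (rke2-server on control plane nodes).
systemctl restart rke2-agent

# Let the node take workloads again once it's back and Ready.
kubectl uncordon worker-1
```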
b
so if I want to go from v1.22.x to v1.27.x I would have to perform 5 upgrades?
• 1.22 -> 1.23
• 1.23 -> 1.24
• 1.24 -> 1.25
• 1.25 -> 1.26
• 1.26 -> 1.27
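For reference, one hop of that sequence on a server node could look like the sketch below, using the install script's INSTALL_RKE2_VERSION variable (the patch version shown is illustrative; pick the latest patch release of whatever minor you're targeting):

```bash
# Upgrade this node one minor version at a time; repeat per node, per hop.
curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION="v1.23.17+rke2r1" sh -
systemctl restart rke2-server   # rke2-agent on worker nodes

# Confirm the new version before starting the next 1.n -> 1.n+1 hop.
rke2 --version
kubectl get nodes
```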
h
correct - that is general good practice for all kubernetes distributions and not just RKE2
b
does the entire cluster need to be upgraded at the same time, or can there be a difference in versions for the workers? e.g. all control plane nodes upgraded to 1.27 but worker nodes upgraded after?
h
how was this cluster deployed? via script, RPM, or from the Rancher UI? I have personally never tried to run mismatched versions on etcd/control plane and workers, so I don't know the answer.
r
I don't know for certain either, but my guess would be that it's unsupported to run mismatched control plane/etcd and worker versions for long periods of time (i.e. longer than the few hours a rolling upgrade of all nodes to the new version takes).
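One thing that's easy to check during a rolling upgrade is what version each node is actually reporting, since control plane and worker nodes show up side by side:

```bash
# kubeletVersion comes from .status.nodeInfo, so mixed versions during a
# rolling upgrade are easy to spot at a glance.
kubectl get nodes -o custom-columns=NAME:.metadata.name,VERSION:.status.nodeInfo.kubeletVersion
```

Upstream Kubernetes does tolerate kubelets lagging the API server by a couple of minor versions, but the control plane is expected to be upgraded first, which lines up with treating mismatched versions as a transient state rather than something to run long term.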
b
it was installed using the script
r
If you installed via script, that uses RPM and adds a repo to your system for upgrades, so when you do your normal yum/apt upgrades you'll be picking up at least patch versions of Kubernetes too.
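If you want to see what that looks like on an RPM-based system, something like this should show it (the exact repo file names are a guess and may differ by version):

```bash
# The install script drops Rancher RKE2 repo definitions here on RPM systems.
ls /etc/yum.repos.d/ | grep -i rke2

# List the rke2-server package versions the enabled repo currently offers.
yum --showduplicates list rke2-server
```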
b
thanks for the information, it's very helpful!
h
Also, in case you have not seen this already: https://docs.rke2.io/upgrade
b
I saw that, not a tonne of information there 🙂
s
For anything K8s related, keep up with the release cadence or you're potentially in for a bumpy ride when upgrading. Obviously it depends on how the cluster is deployed and which CNI you use, but upgrading RKE2 is pretty straightforward and "just works". I've only had one issue, and that was fixed by SUSE within a week or so, I think. I keep a sandbox cluster which is identical to test and prod; as long as everything works there, I've had no issues upgrading RKE2 etc. in test/prod.
s
Hi, has https://rpm.rancher.io/rke2/stable/1.23/centos/9/x86_64/repodata/repomd.xml been removed? I tried to upgrade my 1.22 cluster to 1.23 and found that that URL only starts at 1.24.
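Not sure about the history of that channel, but a quick way to sanity-check whether a given repo path exists is to hit its repomd.xml directly and look at the status code:

```bash
# 200 means the repo metadata is there; 403/404 suggests that channel/OS
# combination was never published or has been removed.
curl -sI https://rpm.rancher.io/rke2/stable/1.23/centos/9/x86_64/repodata/repomd.xml | head -n 1
```

It may also be worth trying the centos/8 path for 1.23, in case that minor predates the EL9 builds.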