# rke2
a
c
It's a best practice to go to the latest patch release, but it's not strictly required.
Concurrency for workers mostly depends on your workloads and on how many nodes you can tolerate being down at a time.
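For reference, concurrency is set per Plan in the system-upgrade-controller manifest. A rough sketch of a worker (agent) Plan, adapted from the RKE2 automated-upgrade guide; the plan name, version, and concurrency value here are placeholders, not anything from this thread:

```yaml
# Hypothetical agent (worker) Plan sketch -- name, version, and
# concurrency are example values, not a recommendation.
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: agent-plan
  namespace: system-upgrade
spec:
  concurrency: 2                # how many worker nodes upgrade at once
  nodeSelector:
    matchExpressions:
      # target only nodes that are not control plane nodes
      - key: node-role.kubernetes.io/control-plane
        operator: DoesNotExist
  serviceAccountName: system-upgrade
  cordon: true                  # cordon each node before upgrading it
  upgrade:
    image: rancher/rke2-upgrade
  version: v1.28.9+rke2r1       # example target version
```

Raising `spec.concurrency` trades upgrade speed against how many nodes are drained at once, which is why it depends on what your workloads can tolerate.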
a
Ok cool. I basically shut down the main app using our cluster when I upgrade. The main concern is Longhorn nodes, attached volumes, and replicas, but there aren't any attached volumes once I shut everything down, so I'm guessing doing 2 at a time would probably be fine.
Sorry, one last question. When the SUC plans finish and all of my nodes are running the upgraded version, I go back into Cluster Management, edit the config or YAML, and change the cluster version to match the upgraded version. Then I see it cycling through all of the nodes again, applying a plan one by one. What is that doing?
I don't see anything going on in the downstream cluster view, so maybe it's just upgrading the rancher-system-agent or something? I only see it in the Cluster Management view.
c
You're supposed to edit the version in the UI, which deploys the SUC plan for you. If you're using Rancher, you need to let Rancher manage the version; under the hood it uses the SUC the same way you are.
a
Oh, I've been following this guide: https://docs.rke2.io/upgrade/automated_upgrade. I go into the RKE2 cluster view -> Upgrade Controller -> Create From YAML, and after that finishes I go to Cluster Management and change the version there. I see what you're talking about: in Cluster Management I can go to Upgrade Strategy and choose the concurrency for the workers and the control plane. My only concern with that method is that we have separate control plane and etcd nodes. Will that still upgrade the etcd nodes too, and if so, do they fall under the control plane concurrency?
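If you ever need to target dedicated etcd nodes separately with your own SUC plan, the selection is done via node labels. A hedged fragment (the label keys below are the standard RKE2 node-role labels; this is an illustration, not confirmation of how Rancher's own plans group those nodes):

```yaml
# Hypothetical nodeSelector fragment for a Plan that matches only
# dedicated etcd nodes (etcd role present, control-plane role absent).
spec:
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/etcd
        operator: Exists
      - key: node-role.kubernetes.io/control-plane
        operator: DoesNotExist
```

With selectors like this you can give etcd-only nodes their own plan and concurrency instead of relying on how another plan happens to group them.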