# k3s
r
maybe you’ll need to run with
--disable local-storage
?
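If you want that to stick across restarts, a sketch of the same thing via the k3s config file (assuming the default config path, rather than editing the systemd unit):
```yaml
# /etc/rancher/k3s/config.yaml  (sketch; default k3s config path)
# Same effect as passing --disable local-storage to the k3s server;
# k3s only picks this up when the service restarts.
disable:
  - local-storage
```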
e
Ok, that seems like the right thing to do as I never plan on using local-path. Restarting my cluster sounds scary. 😄
c
it’s not. Hopefully you’re upgrading and restarting it frequently
r
Yeah. Better set a timer. I was once lazy and that ended in chaos. Since then I'm a frequent upgrader :D With the upgrade controller it's not too much of a hassle
e
I don't expect issues, but I'm always nervous. Ceph and k8s should both do the right thing. I think.
r
If you replace Ceph nodes, just don't do them too quickly one after another, and wait for health OK after each node upgrade
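For checking that, something like this works, assuming you have the Rook toolbox deployed (the deployment name rook-ceph-tools is the Rook default; adjust to your setup):
```sh
# Check cluster health from the Rook toolbox pod; wait for HEALTH_OK
# before cordoning/upgrading the next node.
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
```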
e
Good tip.
I should be able to go from 1.25.2 -> 1.26.2 directly, yes?
```
➜ kubectl get nodes
NAME    STATUS   ROLES                       AGE    VERSION
homer   Ready    control-plane,etcd,master   154d   v1.25.2+k3s1
lisa    Ready    control-plane,etcd,master   154d   v1.25.2+k3s1
marge   Ready    control-plane,etcd,master   154d   v1.25.2+k3s1
```
That's my cluster
r
Yeah, first upgrade the servers and then the agents. Minor versions don't matter in my experience. https://kubernetes.io/releases/version-skew-policy/
By minor I mean the z in 1.y.z :D
e
If I use the installation script method will that do everything or do I need to stop stuff first?
r
I only have experience with https://docs.k3s.io/upgrades/automated
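Installing the controller is basically one apply; the URL below is the system-upgrade-controller release manifest, but check the k3s docs for the exact manifests your version wants:
```sh
# Install the system-upgrade-controller that executes Plan resources.
# Newer releases may also want the CRD manifest applied separately,
# so follow the k3s automated-upgrades docs.
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml
```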
e
Oh, that looks like a much better way
So submit those as two separate plans. Do the agent one after the server one is completed?
r
yep, install the controller, install those plans, then label the nodes step by step in whatever way makes sense: the common nodes all at once, the rook-ceph storage nodes one by one until I verify that health is OK again (because the upgrade controller starts on the next node as soon as the previous one is Ready again)
I think it's even clever enough that agent upgrades don't run before the server upgrades are all through if you label all nodes at once. But I did that once… and rook-ceph got into an unhappy state 😄
e
I don't need ceph being mad at me so I'll do everything one at a time in order. 😄
Ok, so looking at that plan, I only need the one. I only have control-plane nodes. How do I control the rollout? Use a custom label that I can apply manually to each node, one at a time?
r
Yeah, in my setup I have to label the nodes where it should start. Not sure if that's in the default Plans
e
The default plan appears to just go based on the existence of the control-plane label, but still does them sequentially. I'd want better control over it so that I can make sure Ceph is happy before moving on to the next node.
r
that's what my control-plane Plan CRD looks like (created with Terraform), but yeah, the nodeSelector does the trick for explicit one-by-one upgrades
e
So I went with this (just now):
```yaml
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
    - key: node-role.kubernetes.io/rancher-upgrade
      operator: In
      values:
      - "true"
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  version: v1.26.3+k3s1
```
labelled each node one by one and watched ceph
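Per node that was roughly this (the label key matches the plan's nodeSelector; the ceph check assumes the Rook toolbox deployment):
```sh
# One node at a time: make it eligible for server-plan, then wait for
# Ceph to settle before labelling the next node.
kubectl label node homer node-role.kubernetes.io/rancher-upgrade=true
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
```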
flawless
```
➜ kubectl get nodes
NAME    STATUS   ROLES                                       AGE    VERSION
homer   Ready    control-plane,etcd,master,rancher-upgrade   158d   v1.26.3+k3s1
lisa    Ready    control-plane,etcd,master,rancher-upgrade   158d   v1.26.3+k3s1
marge   Ready    control-plane,etcd,master,rancher-upgrade   158d   v1.26.3+k3s1
```
Ceph never even blipped.
I'm impressed
r
👍