# k3s
maybe you’ll need to run with `--disable local-storage`
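If you want that flag to survive restarts without editing the systemd unit, k3s also reads a config file where flags map to keys; a sketch (assuming a script-based install, per the k3s config-file docs):

```yaml
# /etc/rancher/k3s/config.yaml -- equivalent to passing
# --disable local-storage on the k3s server command line
disable:
  - local-storage
```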
Ok, that seems like the right thing to do as I never plan on using local-path. Restarting my cluster sounds scary. 😄
it’s not. Hopefully you’re upgrading and restarting it frequently
Jeah. Better set a timer. I was once lazy and that ended in chaos. Since then I’m a frequent upgrader :D with the upgrade controller it’s not too much of a hassle
I don't expect issues, but always nervous. Ceph and k8s should both do the right thing. I think.
If you replace ceph nodes just don’t do it too quick after each other and wait for health OK after each node upgrade
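That wait can be scripted; a minimal sketch. The actual health command is an assumption about your setup: with Rook it would typically be something like `kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph health` (toolbox deployment name may differ).

```shell
# Poll a Ceph health command until it reports HEALTH_OK, or give up.
# $1: command that prints the health status; $2: max attempts (default 60)
wait_for_ceph_ok() {
  health_cmd="$1"
  tries="${2:-60}"
  i=1
  while [ "$i" -le "$tries" ]; do
    if [ "$($health_cmd)" = "HEALTH_OK" ]; then
      echo "HEALTH_OK after $i check(s)"
      return 0
    fi
    sleep 5
    i=$((i + 1))
  done
  echo "ceph still unhealthy after $tries checks" >&2
  return 1
}
```

Run it between node upgrades, e.g. `wait_for_ceph_ok "kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph health"`.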
Good tip.
I should be able to go from 1.25.2 -> 1.26.2 directly, yes?
```
➜ kubectl get nodes
NAME    STATUS   ROLES                       AGE    VERSION
homer   Ready    control-plane,etcd,master   154d   v1.25.2+k3s1
lisa    Ready    control-plane,etcd,master   154d   v1.25.2+k3s1
marge   Ready    control-plane,etcd,master   154d   v1.25.2+k3s1
```
That's my cluster
Jo, first upgrade the servers and then the agents. The patch version doesn’t matter in my experience.
With patch I mean the z in 1.y.z :D
If I use the installation script method will that do everything or do I need to stop stuff first?
I only have experience with
Oh, that looks like a much better way
So submit those as two separate plans. Do the agent one after the server one is completed?
yep, install the controller, install those plans, then I label the nodes step by step in a way that makes sense: for the common nodes all at once, for the rook-ceph storage nodes one by one until I verify that health=OK again (because the upgrade controller will start on the next node as soon as the former one is Ready again)
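For reference, the install step is usually just applying the upstream manifest and then your plans; the release-asset URL below is my recollection of the system-upgrade-controller releases (newer releases also ship a separate `crd.yaml`), so verify against the project README:

```shell
# Install the system-upgrade-controller from its GitHub release manifest
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml
# Then apply your Plan manifests
kubectl apply -f server-plan.yaml
```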
I think it’s even clever enough that agent upgrades don’t run before all the server upgrades are through, even if you label all nodes at once. But I did that once… and Rook-Ceph got into an unhappy state 😄
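That server-before-agent ordering comes from the agent plan’s `prepare` step. A sketch based on the example plans in the k3s automated-upgrades docs (label keys and version here are from those docs, not this thread):

```yaml
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: In
        values: ["true"]
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  version: v1.26.3+k3s1
---
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: agent-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: DoesNotExist
  # prepare gates each agent upgrade on server-plan having completed
  prepare:
    args: ["prepare", "server-plan"]
    image: rancher/k3s-upgrade
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  version: v1.26.3+k3s1
```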
I don't need ceph being mad at me so I'll do everything one at a time in order. 😄
Ok, so looking at that plan, I only need the one. I only have control-plane nodes. How do I control the rollout? use a custom annotation that I can apply manually to each node one at a time?
Jeah in my setup I have to label the nodes where it should start. Not sure if that’s in the default Plans
The default plan appears to just go based on the existence of the control-plane label, but still does them sequentially. But I'd want to have better control over it so that I can make sure ceph is happy first before going onto the next node.
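One-at-a-time control can then be just a custom label that the Plan’s nodeSelector matches on and that you apply manually; the key `upgrade-ready` below is hypothetical, substitute whatever your Plan selects:

```shell
# "upgrade-ready" is a made-up label key -- match it in the Plan's nodeSelector
kubectl label node homer upgrade-ready=true
# ...wait for the upgrade job to finish and ceph to report HEALTH_OK...
kubectl label node lisa upgrade-ready=true
```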
that’s what my control-plane Plan looks like (created with Terraform), but jeah, the nodeSelector does the trick for explicit one-by-one upgrades
So I went with this (just now):
```yaml
apiVersion: <|>
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: <|>
        operator: In
        values:
          - "true"
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  version: v1.26.3+k3s1
```
labelled each node one by one and watched ceph
```
➜ kubectl get nodes
NAME    STATUS   ROLES                                       AGE    VERSION
homer   Ready    control-plane,etcd,master,rancher-upgrade   158d   v1.26.3+k3s1
lisa    Ready    control-plane,etcd,master,rancher-upgrade   158d   v1.26.3+k3s1
marge   Ready    control-plane,etcd,master,rancher-upgrade   158d   v1.26.3+k3s1
```
Ceph never even blipped.
I'm impressed