
enough-carpet-20915

03/30/2023, 9:23 AM
Is there a way to get k3s to stop setting
local-path
as a default storage class? I try removing the default but k3s keeps putting it back.

rich-cartoon-70161

03/30/2023, 9:26 AM
maybe you’ll need to run with
--disable local-storage
?
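(For reference, the same setting can also be made persistent in the k3s config file instead of the command line; a minimal sketch, assuming the standard /etc/rancher/k3s/config.yaml read by the server on startup:)
# /etc/rancher/k3s/config.yaml on each server node
disable:
  - local-storage
(With the install script, the equivalent should be INSTALL_K3S_EXEC="server --disable local-storage".)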

enough-carpet-20915

03/30/2023, 9:34 AM
Ok, that seems like the right thing to do as I never plan on using local-path. Restarting my cluster sounds scary. 😄

creamy-pencil-82913

03/30/2023, 5:17 PM
it’s not. Hopefully you’re upgrading and restarting it frequently

rich-cartoon-70161

03/30/2023, 6:18 PM
Yeah. Better set a timer. I was once lazy and that ended in chaos. Since then I've been a frequent upgrader :D With the upgrade controller it's not too much of a hassle.

enough-carpet-20915

03/30/2023, 6:32 PM
I don't expect issues, but I'm always nervous. Ceph and k8s should both do the right thing. I think.

rich-cartoon-70161

03/30/2023, 6:33 PM
If you replace Ceph nodes, just don't do them too quickly one after another; wait for health OK after each node upgrade.
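(For that health check between nodes, something like the following works if the Rook toolbox is deployed; rook-ceph-tools and the rook-ceph namespace are the Rook defaults and may differ per cluster:)
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph health detail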

enough-carpet-20915

03/30/2023, 6:33 PM
Good tip.
I should be able to go from 1.25.2 -> 1.26.2 directly, yes?
➜ kubectl get nodes                        
NAME    STATUS   ROLES                       AGE    VERSION
homer   Ready    control-plane,etcd,master   154d   v1.25.2+k3s1
lisa    Ready    control-plane,etcd,master   154d   v1.25.2+k3s1
marge   Ready    control-plane,etcd,master   154d   v1.25.2+k3s1
That's my cluster

rich-cartoon-70161

03/30/2023, 6:45 PM
Yeah, first upgrade the servers and then the agents. Minor versions don't matter in my experience. https://kubernetes.io/releases/version-skew-policy/
With minor I mean the z in 1.y.z :D

enough-carpet-20915

03/30/2023, 6:48 PM
If I use the installation script method will that do everything or do I need to stop stuff first?

rich-cartoon-70161

03/30/2023, 6:50 PM
I only have experience with https://docs.k3s.io/upgrades/automated
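(The automated approach linked above boils down to installing the system-upgrade-controller and then applying Plan resources; roughly, per those docs, noting that the exact manifest names can change between releases:)
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml
# newer releases may also ship the CRDs as a separate manifest in the same release assets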

enough-carpet-20915

03/30/2023, 6:51 PM
Oh, that looks like a much better way
So submit those as two separate plans. Do the agent one after the server one is completed?

rich-cartoon-70161

03/30/2023, 7:02 PM
yep, install the controller, install those plans, then I label the nodes step by step in a way that makes sense: the common nodes all at once, the rook-ceph storage nodes one by one until I verify that health=OK again (because the upgrade controller will start on the next node as soon as the former one is Ready again).
I think it's even clever enough that agent upgrades don't run before the server upgrades are all through, even if you label all nodes at once. But I did that once… and Rook Ceph got into an unhappy state 😄
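(That ordering comes from the agent plan in the k3s upgrade docs, which waits on the server plan via a prepare step; a sketch along those lines, with the plan names and version as illustrative values:)
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: agent-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
    - key: node-role.kubernetes.io/control-plane
      operator: DoesNotExist
  prepare:
    args:
    - prepare
    - server-plan
    image: rancher/k3s-upgrade
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  version: v1.26.3+k3s1
(The prepare step is what makes agent nodes wait until the server plan has completed on all matching nodes.)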

enough-carpet-20915

03/30/2023, 7:40 PM
I don't need ceph being mad at me so I'll do everything one at a time in order. 😄
Ok, so looking at that plan, I only need the one, since I only have control-plane nodes. How do I control the rollout? Use a custom label that I can apply manually to each node, one at a time?

rich-cartoon-70161

03/31/2023, 3:41 PM
Yeah, in my setup I have to label the nodes where it should start. Not sure if that's in the default plans.

enough-carpet-20915

03/31/2023, 4:07 PM
The default plan appears to just go based on the existence of the control-plane label, but still does them sequentially. I'd want better control over it, though, so that I can make sure Ceph is happy before going on to the next node.

rich-cartoon-70161

04/03/2023, 7:33 AM
that's what my control-plane Plan looks like (created with Terraform), but yeah, the nodeSelector does the trick for one-by-one explicit upgrades
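(With a nodeSelector keyed on a custom label like that, the one-by-one rollout is just a matter of labelling nodes as you go; roughly, using the label key and node names from this thread:)
kubectl label node homer node-role.kubernetes.io/rancher-upgrade=true
# wait for homer to come back Ready on the new version and for Ceph to report HEALTH_OK, then:
kubectl label node lisa node-role.kubernetes.io/rancher-upgrade=true
kubectl label node marge node-role.kubernetes.io/rancher-upgrade=true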

enough-carpet-20915

04/04/2023, 10:09 AM
So I went with this (just now):
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
    - key: node-role.kubernetes.io/rancher-upgrade
      operator: In
      values:
      - "true"
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  version: v1.26.3+k3s1
labelled each node one by one and watched ceph
flawless
➜ kubectl get nodes
NAME    STATUS   ROLES                                       AGE    VERSION
homer   Ready    control-plane,etcd,master,rancher-upgrade   158d   v1.26.3+k3s1
lisa    Ready    control-plane,etcd,master,rancher-upgrade   158d   v1.26.3+k3s1
marge   Ready    control-plane,etcd,master,rancher-upgrade   158d   v1.26.3+k3s1
Ceph never even blipped.
I'm impressed
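(For anyone following along, the rollout can be watched through the Plan status and the upgrade Jobs the controller creates in the system-upgrade namespace; roughly:)
kubectl -n system-upgrade get plans -o yaml
kubectl -n system-upgrade get jobs -w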

rich-cartoon-70161

04/04/2023, 11:21 AM
👍