# k3s

melodic-hamburger-23329

09/14/2022, 1:05 AM
Is it possible to use the upgrade image for upgrading a single-node cluster? I tried the approach described here and here, but it doesn’t seem to work as expected. The upgrade image is executed, but
kubectl version
still shows the old server version.
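A quick sanity check, assuming the default kubeconfig on the server node, is to compare what the API server, the kubelet, and the installed binary each report:
Copy code
# Client and API server versions
kubectl version
# Version reported by each node's kubelet
kubectl get nodes -o wide
# Version of the k3s binary installed on the host
k3s --version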

creamy-pencil-82913

09/14/2022, 10:11 PM
what do the upgrade pod logs say?
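For reference, a minimal sketch of how to pull those logs, assuming the controller and plans run in the system-upgrade namespace as in the manifests further down:
Copy code
# Pods created by the system-upgrade-controller for each plan
kubectl -n system-upgrade get pods
# Logs of a specific apply pod (name varies per plan and node)
kubectl -n system-upgrade logs <apply-pod-name> --all-containers
# Logs of the controller itself
kubectl -n system-upgrade logs deployment/system-upgrade-controller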

melodic-hamburger-23329

09/15/2022, 3:24 AM
Unfortunately I don’t have the original logs anymore :S I was just wondering whether the upgrade image is also supported for a single-node cluster. I ended up upgrading manually.
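For context, one common way to upgrade a single node manually, assuming k3s was installed via the install script (the version shown is only an example; adjust if the binary was placed by hand):
Copy code
# Re-running the install script pinned to a release replaces the binary
# and restarts the k3s service
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.25.2+k3s1 sh -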

creamy-pencil-82913

09/15/2022, 3:30 AM
yes, it should work on clusters of any size
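One thing to note: the example plans further down select on a k3s-upgrade node label, so even a single-node cluster needs that label before a plan will match it. A minimal sketch, with the node name as a placeholder:
Copy code
# Label the (only) node so the plan's nodeSelector matches it
kubectl label node <node-name> k3s-upgrade=true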

melodic-hamburger-23329

09/29/2022, 1:37 AM
@creamy-pencil-82913 Tried the upgrade image again (v1.25.0 => v1.25.2). system-upgrade-controller.yaml:
Copy code
apiVersion: v1
kind: Namespace
metadata:
  name: system-upgrade
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: system-upgrade
  namespace: system-upgrade
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system-upgrade
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: system-upgrade
  namespace: system-upgrade
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: default-controller-env
  namespace: system-upgrade
data:
  SYSTEM_UPGRADE_CONTROLLER_DEBUG: "false"
  SYSTEM_UPGRADE_CONTROLLER_THREADS: "2"
  SYSTEM_UPGRADE_JOB_ACTIVE_DEADLINE_SECONDS: "900"
  SYSTEM_UPGRADE_JOB_BACKOFF_LIMIT: "99"
  SYSTEM_UPGRADE_JOB_IMAGE_PULL_POLICY: "Always"
  SYSTEM_UPGRADE_JOB_KUBECTL_IMAGE: "rancher/kubectl:v1.23.7"
  SYSTEM_UPGRADE_JOB_PRIVILEGED: "true"
  SYSTEM_UPGRADE_JOB_TTL_SECONDS_AFTER_FINISH: "900"
  SYSTEM_UPGRADE_PLAN_POLLING_INTERVAL: "15m"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-upgrade-controller
  namespace: system-upgrade
spec:
  selector:
    matchLabels:
      upgrade.cattle.io/controller: system-upgrade-controller
  template:
    metadata:
      labels:
        upgrade.cattle.io/controller: system-upgrade-controller # necessary to avoid drain
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - {key: "<http://node-role.kubernetes.io/master|node-role.kubernetes.io/master>", operator: Exists}
      serviceAccountName: system-upgrade
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
        - key: "<http://node-role.kubernetes.io/master|node-role.kubernetes.io/master>"
          operator: "Exists"
          effect: "NoSchedule"
        - key: "<http://node-role.kubernetes.io/controlplane|node-role.kubernetes.io/controlplane>"
          operator: "Exists"
          effect: "NoSchedule"
        - key: "<http://node-role.kubernetes.io/control-plane|node-role.kubernetes.io/control-plane>"
          operator: "Exists"
          effect: "NoSchedule"
        - key: "<http://node-role.kubernetes.io/etcd|node-role.kubernetes.io/etcd>"
          operator: "Exists"
          effect: "NoExecute"
      containers:
        - name: system-upgrade-controller
          image: rancher/system-upgrade-controller:v0.9.1
          imagePullPolicy: IfNotPresent
          envFrom:
            - configMapRef:
                name: default-controller-env
          env:
            - name: SYSTEM_UPGRADE_CONTROLLER_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['upgrade.cattle.io/controller']
            - name: SYSTEM_UPGRADE_CONTROLLER_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: etc-ssl
              mountPath: /etc/ssl
            - name: etc-pki
              mountPath: /etc/pki
            - name: etc-ca-certificates
              mountPath: /etc/ca-certificates
            - name: tmp
              mountPath: /tmp
      volumes:
        - name: etc-ssl
          hostPath:
            path: /etc/ssl
            type: Directory
        - name: etc-pki
          hostPath:
            path: /etc/pki
            type: DirectoryOrCreate
        - name: etc-ca-certificates
          hostPath:
            path: /etc/ca-certificates
            type: DirectoryOrCreate
        - name: tmp
          emptyDir: {}
system-upgrade.yaml:
Copy code
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: k3s-server
  namespace: system-upgrade
  labels:
    k3s-upgrade: server
spec:
  concurrency: 1 # Batch size (roughly maps to maximum number of unschedulable nodes)
  version: v1.25.2+k3s1
  nodeSelector:
    matchExpressions:
      - {key: k3s-upgrade, operator: Exists}
      - {key: k3s-upgrade, operator: NotIn, values: ["disabled", "false"]}
      - {key: k3os.io/mode, operator: DoesNotExist}
      - {key: node-role.kubernetes.io/control-plane, operator: Exists}
  serviceAccountName: system-upgrade
  cordon: true
  upgrade:
    image: rancher/k3s-upgrade
---
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: k3s-agent
  namespace: system-upgrade
  labels:
    k3s-upgrade: agent
spec:
  concurrency: 2 # Batch size (roughly maps to maximum number of unschedulable nodes)
  version: v1.25.2+k3s1
  nodeSelector:
    matchExpressions:
      - {key: k3s-upgrade, operator: Exists}
      - {key: k3s-upgrade, operator: NotIn, values: ["disabled", "false"]}
      - {key: k3os.io/mode, operator: DoesNotExist}
      - {key: node-role.kubernetes.io/control-plane, operator: DoesNotExist}
  serviceAccountName: system-upgrade
  prepare:
    # Defaults to the same "resolved" tag that is used for the `upgrade` container, NOT `latest`
    image: rancher/k3s-upgrade:v1.25.2-k3s1
    args: ["prepare", "k3s-server"]
  drain:
    force: true
    skipWaitForDeleteTimeout: 60 # 1.18+ (honor pod disruption budgets up to 60 seconds per pod then moves on)
  upgrade:
    image: rancher/k3s-upgrade:v1.25.2-k3s1
apply-k3s-server-… pod logs:
Copy code
Defaulted container "upgrade" out of: upgrade, cordon (init)
+ upgrade
+ get_k3s_process_info
+ ps -ef
+ grep+  -E 'k3s .*(server|agent)'grep
 -E -v '(init|grep|channelserver|supervise-daemon)'
+ awk '{print $1}'
[INFO]  K3S binary is running with pid 2652521
+ K3S_PID=2652521
[INFO]  Comparing old and new binaries
+ '[' -z 2652521 ]
+ info 'K3S binary is running with pid 2652521'
+ echo '[INFO] ' 'K3S binary is running with pid 2652521'
+ cat /host/proc/2652521/cmdline
+ awk '{print $1}'
+ head -n 1
+ K3S_BIN_PATH=k3s
+ '[' 2652521 '==' 1 ]
+ '[' -z k3s ]
+ return
+ replace_binary
+ NEW_BINARY=/opt/k3s
+ FULL_BIN_PATH=/hostk3s
+ '[' '!' -f /opt/k3s ]
+ info 'Comparing old and new binaries'
+ echo '[INFO] ' 'Comparing old and new binaries'
+ sha256sum+ cut /opt/k3s /hostk3s
 '-d ' -f1
+ uniq
+ wc -l
sha256sum: can't open '/hostk3s': No such file or directory
[INFO]  Binary already been replaced
+ BIN_COUNT=1
+ '[' 1 '==' 1 ]
+ info 'Binary already been replaced'
+ echo '[INFO] ' 'Binary already been replaced'
+ exit 0
controller pod logs:
Copy code
W0929 01:20:14.122954       1 client_config.go:552] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
time="2022-09-29T01:20:14Z" level=info msg="Applying CRD <http://plans.upgrade.cattle.io|plans.upgrade.cattle.io>"
time="2022-09-29T01:20:14Z" level=info msg="Starting /v1, Kind=Node controller"
time="2022-09-29T01:20:14Z" level=info msg="Starting /v1, Kind=Secret controller"
time="2022-09-29T01:20:14Z" level=info msg="Starting batch/v1, Kind=Job controller"
time="2022-09-29T01:20:14Z" level=info msg="Starting <http://upgrade.cattle.io/v1|upgrade.cattle.io/v1>, Kind=Plan controller"

creamy-pencil-82913

09/29/2022, 2:27 AM
Where did you get that plan from? There's an error there in the logs about /hostk3s not existing that suggests that a mount of some sort is missing.

melodic-hamburger-23329

09/30/2022, 7:03 AM
Is there an up-to-date reference example somewhere? It seems all the examples I could find are a bit old.

creamy-pencil-82913

09/30/2022, 8:01 AM
hmm. This bit here is failing:
Copy code
+ cat /host/proc/2652521/cmdline
+ awk '{print $1}'
+ head -n 1
+ K3S_BIN_PATH=k3s
That should result in an absolute path on the host filesystem, but it’s not. How did you install and start K3s on this host? What do you get from
cat /proc/2652521/cmdline
on that host? Can you post the output of
kubectl get node -o yaml
?
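Side note: /proc/<pid>/cmdline is NUL-separated, so one way to get a readable dump (pid taken from the logs above, purely illustrative):
Copy code
# Replace NULs with spaces; the first field is the path k3s was started with.
# The upgrade logs above show K3S_BIN_PATH=k3s (a relative path), which is why
# the script then looks for the non-existent /hostk3s instead of /host/<abs path>.
tr '\0' ' ' < /proc/2652521/cmdline; echo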

melodic-hamburger-23329

09/30/2022, 8:12 AM
Copy code
apiVersion: v1
items:
- apiVersion: v1
  kind: Node
  metadata:
    annotations:
      alpha.kubernetes.io/provided-node-ip: ...
      flannel.alpha.coreos.com/backend-data: '{"VNI":1,"VtepMAC":"..."}'
      flannel.alpha.coreos.com/backend-type: vxlan
      flannel.alpha.coreos.com/kube-subnet-manager: "true"
      flannel.alpha.coreos.com/public-ip: ...
      k3s.io/external-ip: ...
      k3s.io/hostname: ...
      k3s.io/internal-ip: ...
      k3s.io/node-args: '["server","--datastore-endpoint","********","--disable","traefik","--disable","servicelb","--disable","etcd","--flannel-iface","rvni0","--kube-apiserver-arg","allow-privileged=true","--kube-apiserver-arg","oidc-signing-algs=ES256","--kube-apiserver-arg","tls-cipher-suites=TLS_AES_128_GCM_SHA256","--kube-apiserver-arg","tls-min-version=VersionTLS13","--kubelet-arg","cgroup-driver=systemd","--kubelet-arg","provider-id=k3s","--kubelet-arg","serialize-image-pulls=false","--kubelet-arg","tls-cipher-suites=TLS_AES_128_GCM_SHA256","--kubelet-arg","tls-min-version=VersionTLS13","--kube-scheduler-arg","tls-cipher-suites=TLS_AES_128_GCM_SHA256","--kube-scheduler-arg","tls-min-version=VersionTLS13","--kube-controller-manager-arg","tls-cipher-suites=TLS_AES_128_GCM_SHA256","--kube-controller-manager-arg","tls-min-version=VersionTLS13","--log","/var/log/k3s.log","--node-ip","...","--node-external-ip","...","--resolv-conf","/etc/rancher/k3s/resolv.conf","--snapshotter","stargz","--tls-san","...","--tls-san","...","--token","********","--write-kubeconfig-mode","644"]'
      k3s.io/node-config-hash: ...
      k3s.io/node-env: '{"K3S_DATA_DIR":"/var/lib/rancher/k3s/data/57ca24d589d11eecebc8b6a6337899849a14c9de97f85e0f3ffa0c94945aa248"}'
      node.alpha.kubernetes.io/ttl: "0"
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    creationTimestamp: "2022-09-01T16:28:58Z"
    finalizers:
    - wrangler.cattle.io/node
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/instance-type: k3s
      beta.kubernetes.io/os: linux
      egress.k3s.io/cluster: "true"
      k3s-upgrade: "true"
      kubernetes.io/arch: amd64
      kubernetes.io/hostname: ...
      kubernetes.io/os: linux
      node-role.kubernetes.io/control-plane: "true"
      node-role.kubernetes.io/master: "true"
      node.kubernetes.io/instance-type: k3s
      plan.upgrade.cattle.io/k3s-server: eb03d11336fc8c3a0d60739cfa4eb1df891134dc092e36aa3637a85f
    name: ...
    resourceVersion: "8231615"
    uid: 9a50c9ff-d6a5-4986-8b32-3242c27c520d
  spec:
    podCIDR: 10.42.0.0/24
    podCIDRs:
    - 10.42.0.0/24
    providerID: k3s
  status:
    addresses:
    - address: ...
      type: InternalIP
    - address: ...
      type: ExternalIP
    - address: ...
      type: Hostname
    allocatable:
      cpu: "16"
      ephemeral-storage: "736545573516"
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      memory: 32775320Ki
      pods: "110"
    capacity:
      cpu: "16"
      ephemeral-storage: 757139776Ki
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      memory: 32775320Ki
      pods: "110"
    conditions:
    - lastHeartbeatTime: "2022-09-30T08:06:14Z"
      lastTransitionTime: "2022-09-01T16:28:58Z"
      message: kubelet has sufficient memory available
      reason: KubeletHasSufficientMemory
      status: "False"
      type: MemoryPressure
    - lastHeartbeatTime: "2022-09-30T08:06:14Z"
      lastTransitionTime: "2022-09-01T16:28:58Z"
      message: kubelet has no disk pressure
      reason: KubeletHasNoDiskPressure
      status: "False"
      type: DiskPressure
    - lastHeartbeatTime: "2022-09-30T08:06:14Z"
      lastTransitionTime: "2022-09-01T16:28:58Z"
      message: kubelet has sufficient PID available
      reason: KubeletHasSufficientPID
      status: "False"
      type: PIDPressure
    - lastHeartbeatTime: "2022-09-30T08:06:14Z"
      lastTransitionTime: "2022-09-30T07:40:42Z"
      message: kubelet is posting ready status. AppArmor enabled
      reason: KubeletReady
      status: "True"
      type: Ready
    daemonEndpoints:
      kubeletEndpoint:
        Port: 10250
    images:
    - names:
      - quay.io/argoproj/argocd@sha256:05216f405815bf4d007277392a1a23887fdf345b36d66fbab77e496ddc64161c
      - quay.io/argoproj/argocd:v2.4.12
      sizeBytes: 140773123
    - names:
      - docker.io/kubernetesui/dashboard@sha256:f12df071f8bad3e1965b5246095bd3f78df0eb76ceabcc0878d42849d33e4a10
      - docker.io/kubernetesui/dashboard:v2.6.1
      sizeBytes: 75788622
    - names:
      - docker.io/library/redis@sha256:1a727ed4923bcd25c7a13c6e12fe4f983b14ce859ada2fa97700c03a78c6dd0b
      - docker.io/library/redis:7.0.4-bullseye
      sizeBytes: 44703303
    - names:
      - docker.io/library/traefik@sha256:735fe45a1b4bf9bad7e4414338515ae10027a6a7dd67afcda834df6a294aa4b3
      - docker.io/library/traefik:2.8.7
      sizeBytes: 33668617
    - names:
      - docker.io/rancher/mirrored-metrics-server@sha256:6dadccbdb792893ac8f7dcc5f232844eab1aba1b6d98eb9f3b7e121c3e635aa9
      - docker.io/rancher/mirrored-metrics-server:v0.5.2
      sizeBytes: 26588818
    - names:
      - docker.io/kubernetesui/metrics-scraper@sha256:9fdef455b4f9a8ee315a0aa3bd71787cfd929e759da3b4d7e65aaa56510d747b
      - docker.io/kubernetesui/metrics-scraper:v1.0.8
      sizeBytes: 19745056
    - names:
      - docker.io/rancher/mirrored-coredns-coredns@sha256:e5a309df7c9cb478e444ea8827d1e6554a6872f5b0884da694bc3c170260df2c
      - docker.io/rancher/mirrored-coredns-coredns:1.9.1
      sizeBytes: 14071555
    - names:
      - docker.io/rancher/local-path-provisioner@sha256:610450eb24e51f2e5cefb9f643af2ea63e9bc4396cc8e51e4e732c671a0ad4e1
      - docker.io/rancher/local-path-provisioner:v0.0.21
      sizeBytes: 11420005
    - names:
      - docker.io/rancher/system-upgrade-controller@sha256:74dfc23d2a216de2c94800d48ba0312115937db51c8c8d7c13540d0b2a6d3f50
      - docker.io/rancher/system-upgrade-controller:v0.9.1
      sizeBytes: 8908363
    - names:
      - docker.io/rancher/mirrored-pause@sha256:a4f3b71ca7503c067b0355959576eaab3d39ac831f5b82bd87bb1324fb057b9e
      - docker.io/rancher/mirrored-pause:3.6
      sizeBytes: 298231
    nodeInfo:
      architecture: amd64
      bootID: 4d2f0668-5a0d-4741-b3ac-c6847894212a
      containerRuntimeVersion: containerd://1.6.8-k3s1
      kernelVersion: 5.8.0-33-generic
      kubeProxyVersion: v1.25.2+k3s1
      kubeletVersion: v1.25.2+k3s1
      machineID: ddf83343682c435dbfedcc79d2a3a922
      operatingSystem: linux
      osImage: Ubuntu 20.04 LTS
      systemUUID: 7ccfd7b2-e053-11e9-9552-b4a9fc5a9686
kind: List
metadata:
  resourceVersion: ""
This is after a manual upgrade to v1.25.2 (it seems containerd somehow recovered).