# k3s
p
Hi, I upgraded my test cluster from v1.31.7 to v1.32.5 (the nodes get replaced with Terraform), and now the providerID on some nodes gets set to `k3s://` instead of `openstack:///`. My labels are also no longer being set. Is there a known issue? Where should I look? Here is one of the affected nodes:
```yaml
apiVersion: v1
kind: Node
metadata:
  annotations:
    alpha.kubernetes.io/provided-node-ip: 192.168.45.63
    csi.volume.kubernetes.io/nodeid: '{"cinder.csi.openstack.org":"6c62f5d3-0851-4f47-8144-d6bee909a0d3"}'
    k3s.io/hostname: inovex-prod-daidalos-it-ingress-local-dev-0
    k3s.io/internal-ip: 192.168.45.63
    k3s.io/node-args: '["agent","--node-ip","192.168.45.63","10.0.10.109","--with-node-id","--node-label","openstackid=6c62f5d3-0851-4f47-8144-d6bee909a0d3","--node-label","instance-role.daidalos=ingress-local","--kubelet-arg","provider-id=openstack:///6c62f5d3-0851-4f47-8144-d6bee909a0d3","--kubelet-arg","cloud-provider=external","--node-taint","ingress-local=true:NoExecute"]'
    k3s.io/node-config-hash: I6BO6WNMSFTJEWEVRJ7ESVG64C4SQQ5HMJBZEHQYLDVHGFTO3MOQ====
    k3s.io/node-env: '{"K3S_TOKEN":"********","K3S_URL":"https://control-plane-dev.some.tld.:6443"}'
    node.alpha.kubernetes.io/ttl: "0"
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: "2025-06-18T11:40:52Z"
  finalizers:
  - wrangler.cattle.io/managed-etcd-controller
  - wrangler.cattle.io/node
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/instance-type: k3s
    beta.kubernetes.io/os: linux
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: inovex-prod-daidalos-it-ingress-local-dev-0
    kubernetes.io/os: linux
    node.kubernetes.io/instance-type: k3s
    topology.cinder.csi.openstack.org/zone: az1
  name: inovex-prod-daidalos-it-ingress-local-dev-0
  resourceVersion: "738104384"
  uid: 3139d6b0-bf57-458e-97be-bf2e83052e0b
spec:
  podCIDR: 10.42.23.0/24
  podCIDRs:
  - 10.42.23.0/24
  providerID: k3s://inovex-prod-daidalos-it-ingress-local-dev-0
status:
  addresses:
  - address: 192.168.45.63
    type: InternalIP
  - address: inovex-prod-daidalos-it-ingress-local-dev-0
    type: Hostname
  allocatable:
    cpu: "4"
    ephemeral-storage: "73958691987"
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 4005288Ki
    pods: "110"
  capacity:
    cpu: "4"
    ephemeral-storage: 76026616Ki
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 4005288Ki
    pods: "110"
  conditions:
  - lastHeartbeatTime: "2025-06-18T11:41:18Z"
    lastTransitionTime: "2025-06-18T11:41:18Z"
    message: Cilium is running on this node
    reason: CiliumIsUp
    status: "False"
    type: NetworkUnavailable
  - lastHeartbeatTime: "2025-06-18T11:47:00Z"
    lastTransitionTime: "2025-06-18T11:40:52Z"
    message: kubelet has sufficient memory available
    reason: KubeletHasSufficientMemory
    status: "False"
    type: MemoryPressure
  - lastHeartbeatTime: "2025-06-18T11:47:00Z"
    lastTransitionTime: "2025-06-18T11:40:52Z"
    message: kubelet has no disk pressure
    reason: KubeletHasNoDiskPressure
    status: "False"
    type: DiskPressure
  - lastHeartbeatTime: "2025-06-18T11:47:00Z"
    lastTransitionTime: "2025-06-18T11:40:52Z"
    message: kubelet has sufficient PID available
    reason: KubeletHasSufficientPID
    status: "False"
    type: PIDPressure
  - lastHeartbeatTime: "2025-06-18T11:47:00Z"
    lastTransitionTime: "2025-06-18T11:41:14Z"
    message: kubelet is posting ready status
    reason: KubeletReady
    status: "True"
    type: Ready
  daemonEndpoints:
    kubeletEndpoint:
      Port: 10250
  images:
  - names:
    - quay.io/cilium/cilium@sha256:24a73fe795351cf3279ac8e84918633000b52a9654ff73a6b0d7223bcff4a67a
    sizeBytes: 271358817
  - names:
    - docker.io/grafana/alloy@sha256:b5fc87ff9a8941d6ed3ae5f099d9cb8598b3cd42fef9a8af128ed782258b4017
    - docker.io/grafana/alloy:v1.9.1
    sizeBytes: 140860135
  - names:
    - docker.io/falcosecurity/falco@sha256:731c5b47e697c56749d97f1fb30399248e1019e6959b2a2db866a17af7af6395
    - docker.io/falcosecurity/falco:0.41.1
    sizeBytes: 80664481
  - names:
    - docker.io/velero/velero@sha256:c790429fcd543f0a5eed3a490e85a2c39bf9aefb8ce7ddbc7a158557745ab33f
    - docker.io/velero/velero:v1.16.1
    sizeBytes: 77400601
  - names:
    - docker.io/falcosecurity/falcoctl@sha256:8e02bfd0c44a954495a5c7f980693f603d47c1bec2e55c86319c55134b9a5b6e
    - docker.io/falcosecurity/falcoctl:0.11.2
    sizeBytes: 31964684
  - names:
    - registry.k8s.io/provider-os/cinder-csi-plugin@sha256:6ae514ebd71636705daabbb318256aff5e9fe03e601366e3181a547c179a8e06
    - registry.k8s.io/provider-os/cinder-csi-plugin:v1.33.0
    sizeBytes: 29401612
  - names:
    - quay.io/prometheus-operator/prometheus-config-reloader@sha256:959d47672fbff2776a04ec62b8afcec89e8c036af84dc5fade50019dab212746
    - quay.io/prometheus-operator/prometheus-config-reloader:v0.81.0
    sizeBytes: 14433657
  - names:
    - registry.k8s.io/sig-storage/livenessprobe@sha256:33692aed26aaf105b4d6e66280cceca9e0463f500c81b5d8c955428a75438f32
    - registry.k8s.io/sig-storage/livenessprobe:v2.14.0
    sizeBytes: 14311007
  - names:
    - registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0d23a6fd60c421054deec5e6d0405dc3498095a5a597e175236c0692f4adee0f
    - registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.12.0
    sizeBytes: 14038309
  - names:
    - quay.io/prometheus/node-exporter@sha256:d00a542e409ee618a4edc67da14dd48c5da66726bbd5537ab2af9c1dfc442c8a
    - quay.io/prometheus/node-exporter:v1.9.1
    sizeBytes: 12955907
  - names:
    - docker.io/library/bash@sha256:01a15c6f48f6a3c08431cd77e11567823530b18159889dca3b7309b707beef91
    - docker.io/library/bash:5.2.37
    sizeBytes: 6531375
  - names:
    - docker.io/rancher/mirrored-pause@sha256:74c4244427b7312c5b901fe0f67cbc53683d06f4f24c6faee65d4182bf0fa893
    - docker.io/rancher/mirrored-pause:3.6
    sizeBytes: 301463
  nodeInfo:
    architecture: amd64
    bootID: 4c935316-0443-43f0-bfdd-f3b1c980ad13
    containerRuntimeVersion: containerd://2.0.5-k3s1.32
    kernelVersion: 5.15.0-140-generic
    kubeProxyVersion: v1.32.5+k3s1
    kubeletVersion: v1.32.5+k3s1
    machineID: 6c62f5d308514f478144d6bee909a0d3
    operatingSystem: linux
    osImage: Ubuntu 22.04.5 LTS
    systemUUID: 6c62f5d3-0851-4f47-8144-d6bee909a0d3
  runtimeHandlers:
  - features:
      recursiveReadOnlyMounts: true
      userNamespaces: true
    name: runc
  - features:
      recursiveReadOnlyMounts: true
      userNamespaces: true
    name: ""
  - features:
      recursiveReadOnlyMounts: false
      userNamespaces: false
    name: runhcs-wcow-process
```

On the broken node, kubelet is started like this:
```
Running kubelet --cloud-provider=external --config-dir=/var/lib/rancher/k3s/agent/etc/kubelet.conf.d --containerd=/run/k3s/containerd/containerd.sock --hostname-override=inovex-prod-daidalos-it-ingress-local-dev-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-ip=192.168.45.63 --node-labels= --read-only-port=0
```

on the working nodes it looks like:
```
Jun 18 11:21:41 inovex-prod-daidalos-it-ingress-dev-1 k3s[8726]: time="2025-06-18T11:21:41Z" level=info msg="Running kubelet --cloud-provider=external --config-dir=/var/lib/rancher/k3s/agent/etc/kubelet.conf.d --containerd=/run/k3s/containerd/containerd.sock --hostname-override=inovex-prod-daidalos-it-ingress-dev-1-e1bae219 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-ip=192.168.45.234 --node-labels=openstackid=90c95650-1627-4c3e-aa29-2437383cbfd0,instance-role.daidalos=ingress --provider-id=openstack:///90c95650-1627-4c3e-aa29-2437383cbfd0 --read-only-port=0"
```

Why isn't kubelet being started with the right parameters?
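
A quick way to see which nodes ended up with the wrong providerID, and what arguments the agent actually registered with, is to compare `spec.providerID` and the `k3s.io/node-args` annotation across nodes. A minimal sketch, assuming `kubectl` access to the cluster; the node name is the one from this thread:

```bash
# List every node's providerID to spot the ones that got k3s:// instead of openstack:///
kubectl get nodes -o custom-columns=NAME:.metadata.name,PROVIDERID:.spec.providerID

# Show the arguments the k3s agent registered with on a suspect node
# (dots in the annotation key must be escaped in jsonpath)
kubectl get node inovex-prod-daidalos-it-ingress-local-dev-0 \
  -o jsonpath='{.metadata.annotations.k3s\.io/node-args}'
```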
I found the problem: the
`--node-ip","192.168.45.63","10.0.10.109"`
is not valid. When I set only one IP, kubelet starts with the right parameters. I couldn't find any logs about it; maybe k3s could log something there.
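
For reference, a corrected form of the agent arguments from the `k3s.io/node-args` annotation above, with a single value for `--node-ip`. This is only a sketch built from the flags shown in this thread: the stray second IP is dropped, and `K3S_URL`/`K3S_TOKEN` are still supplied via the environment as in the node-env annotation.

```bash
# Sketch: corrected k3s agent invocation. With only one value after --node-ip,
# the following --node-label/--kubelet-arg/--node-taint flags are parsed as flags
# instead of being dropped, which is what the empty --node-labels= and missing
# --provider-id on the broken node's kubelet command line suggest.
# K3S_URL and K3S_TOKEN come from the environment, as before.
k3s agent \
  --node-ip 192.168.45.63 \
  --with-node-id \
  --node-label openstackid=6c62f5d3-0851-4f47-8144-d6bee909a0d3 \
  --node-label instance-role.daidalos=ingress-local \
  --kubelet-arg provider-id=openstack:///6c62f5d3-0851-4f47-8144-d6bee909a0d3 \
  --kubelet-arg cloud-provider=external \
  --node-taint ingress-local=true:NoExecute
```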
c
It sounds like you installed the OpenStack CCM without disabling the embedded k3s CCM, so it's just a race as to which one gets to the node first and sets the provider-id.
As to why some of your nodes are missing the `--provider-id=openstack…` kubelet arg: that would be something you're adding in your config. Did you forget to set the kubelet-arg values on those nodes?
And yes, that `--node-ip,x,y` format is definitely not a valid way to pass that flag.
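
If the embedded cloud controller is indeed racing the OpenStack CCM, one way to take it out of the picture is to disable it on the k3s servers so that only the external CCM initializes nodes. A sketch, assuming the servers are started via the CLI (in a `config.yaml` this would be `disable-cloud-controller: true`). Note that `spec.providerID` can only be set once, so nodes that already registered with `k3s://` need to be re-registered, which the Terraform node replacement already covers.

```bash
# Sketch: run the k3s servers without the embedded cloud controller, so that the
# external OpenStack CCM is the only component setting spec.providerID on new nodes.
k3s server --disable-cloud-controller
```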
p
Thank you. My setup still has several misconfigurations left over from a historically grown system. Thanks for the pointers.