# rke2
m
>> ll /run/cilium
total 7112
drwxr-xr-x  3 root root      80 Apr 25 07:15 ./
drwxr-xr-x 39 root root    1120 Apr 25 07:15 ../
-rw-r--r--  1 root root 7278822 Apr 25 09:50 cilium-cni.log
drwxr-xr-x  2 root root    5180 Apr 25 07:29 deleteQueue/
>> tail cilium-cni.log
level=warning msg="Errors encountered while deleting endpoint" error="deletion queue directory /var/run/cilium/deleteQueue has too many entries; aborting" subsys=cilium-cni
level=warning msg="Failed to connect to agent socket at unix:///var/run/cilium/cilium.sock." containerID=ccf7dfdacdb2e20ebc45db766331c8c23d205d6e9c2775f19d9145f1de6d782d error="failed to create cilium agent client after 10.000000 seconds timeout: Get \"<http://localhost/v1/config>\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory" eventUUID=e2bccbc7-ca4c-4ae2-afc0-588288888f09 subsys=cilium-cni
level=warning msg="Failed to connect to agent socket at unix:///var/run/cilium/cilium.sock." containerID=6b6e6b5c7b7195a92be457ed91ac9ff3d55810281e16ac42eb29ad4f2c19c479 error="failed to create cilium agent client after 10.000000 seconds timeout: Get \"<http://localhost/v1/config>\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory" eventUUID=26181334-ebdf-4675-888f-178ecebe8860 subsys=cilium-cni
level=warning msg="Failed to connect to agent socket at unix:///var/run/cilium/cilium.sock." containerID=62ad481c0c964f9dc27863098fe73e24bd647638aea87e08e8426732b15bc5c9 error="failed to create cilium agent client after 10.000000 seconds timeout: Get \"<http://localhost/v1/config>\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory" eventUUID=eef9858c-d154-4d33-90fa-1e8d6524b128 subsys=cilium-cni
level=warning msg="Failed to connect to agent socket at unix:///var/run/cilium/cilium.sock." containerID=a819383dcc113eaea5ecd33763e6e7543c8610a36e9bbcb602d6267364ec3030 error="failed to create cilium agent client after 10.000000 seconds timeout: Get \"<http://localhost/v1/config>\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory" eventUUID=830bc22c-b3c0-46a6-adb5-fb61f9cefc41 subsys=cilium-cni
level=warning msg="Failed to connect to agent socket at unix:///var/run/cilium/cilium.sock." containerID=cdaa45ad2b10e567a0c0e1c029a790962f7db0ec222590d71c96351e41d34d16 error="failed to create cilium agent client after 10.000000 seconds timeout: Get \"<http://localhost/v1/config>\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory" eventUUID=a1cca5e8-d8a3-4a2d-98f9-64b7a08f9c4e subsys=cilium-cni
level=warning msg="Failed to connect to agent socket at unix:///var/run/cilium/cilium.sock." containerID=b3bd3b5d0e04174683f25daa877258122c36d08b7a922bd1c7d5e7c63d57386c error="failed to create cilium agent client after 10.000000 seconds timeout: Get \"<http://localhost/v1/config>\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory" eventUUID=d91ac6b3-3557-43f8-97e1-c1559fd14071 subsys=cilium-cni
level=warning msg="Failed to connect to agent socket at unix:///var/run/cilium/cilium.sock." containerID=98fab04a2a7675238f030ff4111bcbf9f667cf095d2fee4228c5b6b7c738a49f error="failed to create cilium agent client after 10.000000 seconds timeout: Get \"<http://localhost/v1/config>\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory" eventUUID=2281a530-7962-41a3-8b3a-db38ea483a14 subsys=cilium-cni
level=warning msg="Failed to connect to agent socket at unix:///var/run/cilium/cilium.sock." containerID=28b7a6e5300481138b7cd025846408fad269bd621c1cc73768ca480020604de3 error="failed to create cilium agent client after 10.000000 seconds timeout: Get \"<http://localhost/v1/config>\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory" eventUUID=862cd191-88e2-490f-881b-2c057ba44311 subsys=cilium-cni
level=info msg="Agent is down, falling back to deletion queue directory" containerID=a57a68e6c56eb530caa78bbca557111f6f24c85d85c78164920dd6a24ff87cbe eventUUID=4a0ab9bd-7b7f-4737-8bf8-40b5713d1360 subsys=cilium-cni
level=info msg="Queueing deletion request for endpoint" containerID=a57a68e6c56eb530caa78bbca557111f6f24c85d85c78164920dd6a24ff87cbe endpointID="container-id:a57a68e6c56eb530caa78bbca557111f6f24c85d85c78164920dd6a24ff87cbe" eventUUID=4a0ab9bd-7b7f-4737-8bf8-40b5713d1360 subsys=cilium-cni
level=warning msg="Errors encountered while deleting endpoint" error="deletion queue directory /var/run/cilium/deleteQueue has too many entries; aborting" subsys=cilium-cni
level=warning msg="Unable to enter namespace \"\", will not delete interface" error="failed to Statfs \"\": no such file or directory" subsys=cilium-cni
level=info msg="Agent is down, falling back to deletion queue directory" containerID=7728110cd4bfadd9366ed9e52218de8035af4638299dc68e926546d7e469388d eventUUID=03305114-3155-445d-ae37-33bdbf62e3c7 subsys=cilium-cni
level=info msg="Queueing deletion request for endpoint" containerID=7728110cd4bfadd9366ed9e52218de8035af4638299dc68e926546d7e469388d endpointID="container-id:7728110cd4bfadd9366ed9e52218de8035af4638299dc68e926546d7e469388d" eventUUID=03305114-3155-445d-ae37-33bdbf62e3c7 subsys=cilium-cni
level=warning msg="Errors encountered while deleting endpoint" error="deletion queue directory /var/run/cilium/deleteQueue has too many entries; aborting" subsys=cilium-cni
level=warning msg="Unable to enter namespace \"\", will not delete interface" error="failed to Statfs \"\": no such file or directory" subsys=cilium-cni
level=warning msg="Failed to connect to agent socket at unix:///var/run/cilium/cilium.sock." containerID=f69231181accee8e83e9f541fd749c02f17df83f530291ccc32aef6c4c90bfc3 error="failed to create cilium agent client after 10.000000 seconds timeout: Get \"<http://localhost/v1/config>\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory" eventUUID=d488c799-9a1e-4b79-8b57-969ed4141fe7 subsys=cilium-cni
level=info msg="Agent is down, falling back to deletion queue directory" containerID=25cd4d65c51fb44085e880363f5f0700ad426ed70b19829ba8b61629c2bd1bdf eventUUID=7ac39d84-da8e-44cb-a176-f739cd53d8c7 subsys=cilium-cni
level=info msg="Queueing deletion request for endpoint" containerID=25cd4d65c51fb44085e880363f5f0700ad426ed70b19829ba8b61629c2bd1bdf endpointID="container-id:25cd4d65c51fb44085e880363f5f0700ad426ed70b19829ba8b61629c2bd1bdf" eventUUID=7ac39d84-da8e-44cb-a176-f739cd53d8c7 subsys=cilium-cni
level=warning msg="Errors encountered while deleting endpoint" error="deletion queue directory /var/run/cilium/deleteQueue has too many entries; aborting" subsys=cilium-cni
level=warning msg="Unable to enter namespace \"\", will not delete interface" error="failed to Statfs \"\": no such file or directory" subsys=cilium-cni
level=warning msg="Failed to connect to agent socket at unix:///var/run/cilium/cilium.sock." containerID=e794607347adce7a9b030ad467cd0ccb139e0f5473d03093782d5ca22aabd0f7 error="failed to create cilium agent client after 10.000000 seconds timeout: Get \"<http://localhost/v1/config>\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory" eventUUID=f3877fde-195f-4548-b779-b8a22c68aaa8 subsys=cilium-cni
level=warning msg="Failed to connect to agent socket at unix:///var/run/cilium/cilium.sock." containerID=4b30060664a5a292bdd36926f77d9fec73d54735ec2ad864cd5bbd85b86b265d error="failed to create cilium agent client after 10.000000 seconds timeout: Get \"<http://localhost/v1/config>\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory" eventUUID=46c3bf54-b24a-44c5-9b5d-15d90cbf1169 subsys=cilium-cni
level=info msg="Agent is down, falling back to deletion queue directory" containerID=7c5df5a0f6a7014f02e99ce477091675fcf11b347f637d220a1aab7da84c5e2d eventUUID=d5db6b46-b0a5-4e67-8def-074cffb15642 subsys=cilium-cni
level=info msg="Queueing deletion request for endpoint" containerID=7c5df5a0f6a7014f02e99ce477091675fcf11b347f637d220a1aab7da84c5e2d endpointID="container-id:7c5df5a0f6a7014f02e99ce477091675fcf11b347f637d220a1aab7da84c5e2d" eventUUID=d5db6b46-b0a5-4e67-8def-074cffb15642 subsys=cilium-cni
level=warning msg="Errors encountered while deleting endpoint" error="deletion queue directory /var/run/cilium/deleteQueue has too many entries; aborting" subsys=cilium-cni
level=warning msg="Failed to connect to agent socket at unix:///var/run/cilium/cilium.sock." containerID=e7394bd6f7466e0b9addbb90feed1566110e05ae13aefa0454eabb4ee6730f30 error="failed to create cilium agent client after 10.000000 seconds timeout: Get \"<http://localhost/v1/config>\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory" eventUUID=32b6e61e-b6b1-4f36-867a-98aca519bb51 subsys=cilium-cni
level=info msg="Agent is down, falling back to deletion queue directory" containerID=00c69a854b21db30cd5fd8f710014a5b8987fc668148951b257765c6b721f513 eventUUID=bd423935-2f12-43b4-9ba8-774dd4f4a3b0 subsys=cilium-cni
level=info msg="Queueing deletion request for endpoint" containerID=00c69a854b21db30cd5fd8f710014a5b8987fc668148951b257765c6b721f513 endpointID="container-id:00c69a854b21db30cd5fd8f710014a5b8987fc668148951b257765c6b721f513" eventUUID=bd423935-2f12-43b4-9ba8-774dd4f4a3b0 subsys=cilium-cni
level=warning msg="Errors encountered while deleting endpoint" error="deletion queue directory /var/run/cilium/deleteQueue has too many entries; aborting" subsys=cilium-cni
level=warning msg="Unable to enter namespace \"\", will not delete interface" error="failed to Statfs \"\": no such file or directory" subsys=cilium-cni
level=info msg="Agent is down, falling back to deletion queue directory" containerID=3c8392ae013c7356561f2582a8d131677f78dbc4537be1ed37470e8822721be6 eventUUID=2e288cd6-c688-4458-be4e-4d17cadd6492 subsys=cilium-cni
level=info msg="Queueing deletion request for endpoint" containerID=3c8392ae013c7356561f2582a8d131677f78dbc4537be1ed37470e8822721be6 endpointID="container-id:3c8392ae013c7356561f2582a8d131677f78dbc4537be1ed37470e8822721be6" eventUUID=2e288cd6-c688-4458-be4e-4d17cadd6492 subsys=cilium-cni
level=warning msg="Errors encountered while deleting endpoint" error="deletion queue directory /var/run/cilium/deleteQueue has too many entries; aborting" subsys=cilium-cni
level=warning msg="Unable to enter namespace \"\", will not delete interface" error="failed to Statfs \"\": no such file or directory" subsys=cilium-cni
level=warning msg="Failed to connect to agent socket at unix:///var/run/cilium/cilium.sock." containerID=49ef80e9e9818d32bdf44bf09471662497654e6d335a57039c760e9a61704f3c error="failed to create cilium agent client after 10.000000 seconds timeout: Get \"<http://localhost/v1/config>\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory" eventUUID=3777a5d9-8739-4007-b8bd-ff7938d2d291 subsys=cilium-cni
level=info msg="Agent is down, falling back to deletion queue directory" containerID=39bfb62c06f022496988a5be9036627746766c35efa244a0ab7e122b359995f6 eventUUID=dd11eb6b-b698-40d1-920a-c974b5df6f58 subsys=cilium-cni
level=info msg="Queueing deletion request for endpoint" containerID=39bfb62c06f022496988a5be9036627746766c35efa244a0ab7e122b359995f6 endpointID="container-id:39bfb62c06f022496988a5be9036627746766c35efa244a0ab7e122b359995f6" eventUUID=dd11eb6b-b698-40d1-920a-c974b5df6f58 subsys=cilium-cni
level=warning msg="Errors encountered while deleting endpoint" error="deletion queue directory /var/run/cilium/deleteQueue has too many entries; aborting" subsys=cilium-cni
level=warning msg="Unable to enter namespace \"\", will not delete interface" error="failed to Statfs \"\": no such file or directory" subsys=cilium-cni
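For context, the log above shows two related symptoms: the CNI plugin cannot reach the agent socket at /var/run/cilium/cilium.sock, and the fallback deletion queue at /var/run/cilium/deleteQueue has filled up. A rough inspection sketch follows (paths are taken from the output above; the queue format and its size limit are Cilium-internal details, and a healthy cilium-agent normally drains this directory itself on startup, so getting the agent running again is the actual fix):

```bash
# How many queued endpoint-deletion requests have piled up?
ls /var/run/cilium/deleteQueue | wc -l

# Is the agent socket present at all? (The log says it is not.)
ls -l /var/run/cilium/cilium.sock

# Peek at one queued request file (contents/format may vary by Cilium version).
head -c 300 "$(ls -d /var/run/cilium/deleteQueue/* | head -n 1)"
```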
i
Hello! What is the reason for requiring cgroups v1? Could you please provide the config.yaml used to create your cluster? Kind regards
m
Small correction… I came to the conclusion that this is cgroup v1 based on https://kubernetes.io/docs/concepts/architecture/cgroups/#check-cgroup-version, since in this environment the check returned tmpfs. But I see the following in the environment:
>>>mount | grep cgroup
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
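For reference, a short sketch of how that Kubernetes check behaves on a hybrid layout like the one above; the /sys/fs/cgroup/unified path is taken from the mount output, so the expectations here are specific to this host:

```bash
# Check from the Kubernetes docs: filesystem type of the cgroup mount point.
#   cgroup2fs -> pure cgroup v2 (unified hierarchy)
#   tmpfs     -> cgroup v1, or a hybrid layout like this one
stat -fc %T /sys/fs/cgroup/

# On this host the v2 hierarchy is mounted separately under "unified",
# so the same check there should report cgroup2fs.
stat -fc %T /sys/fs/cgroup/unified/

# Kernel support for cgroup v2, independent of how it is mounted.
grep cgroup2 /proc/filesystems
```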
The rke2 config file:
>>> /etc/rancher/rke2/config.yaml

cni: ['cilium']
tls-san:
  - cluster.local
  - 192.168.3.8
disable: ['rke2-canal', 'rke2-calico', 'rke2-multus', 'calico']
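As a side note, one way to double-check that rke2 actually honoured the cni and disable options is to look at the HelmChart objects and generated manifests it manages. A minimal sketch, assuming the standard rke2 kubeconfig and manifest paths:

```bash
# Standard rke2 kubeconfig location (assumption: default install paths).
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml

# rke2's helm controller represents each packaged component as a HelmChart object.
kubectl -n kube-system get helmcharts.helm.cattle.io

# The on-disk manifests should include rke2-cilium*.yaml and no canal/calico/multus files.
ls -l /var/lib/rancher/rke2/server/manifests/
```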
I tried to restore etcd in the environment on a new v1.26.15+rke2r1 server, and noticed the following error in containerd.log:
time="2024-04-30T14:20:44.270411577Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config
Though I see the Cilium CNI helm file, it's not coming up in the environment.
/var/lib/rancher/rke2/server/manifests# ll
total 424
drwx------ 2 root root   4096 Apr 30 14:23 ./
drwx------ 8 root root   4096 Apr 30 13:27 ../
-rw-r--r-- 1 root root    409 Apr 30 14:13 rke2-cilium-config.yaml
-rw-r--r-- 1 root root 229394 Apr 30 14:20 rke2-cilium.yaml
-rw-r--r-- 1 root root  27793 Apr 30 14:20 rke2-coredns.yaml
-rw-r--r-- 1 root root  65487 Apr 30 14:20 rke2-ingress-nginx.yaml
-rw-r--r-- 1 root root  13647 Apr 30 14:20 rke2-metrics-server.yaml
-rw-r--r-- 1 root root  14397 Apr 30 14:20 rke2-snapshot-controller-crd.yaml
-rw-r--r-- 1 root root  18665 Apr 30 14:20 rke2-snapshot-controller.yaml
-rw-r--r-- 1 root root  20885 Apr 30 14:20 rke2-snapshot-validation-webhook.yaml
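The containerd error above means no CNI config was ever written to /etc/cni/net.d, which usually happens when the Cilium chart/daemonset never ran on the node. A hedged checklist, usable once the API server is reachable (the label and job naming below are the usual Cilium/rke2 defaults, so treat them as assumptions):

```bash
# 1. Did anything write a CNI config for containerd? (The error says no.)
ls -l /etc/cni/net.d/

# 2. Was a helm-install job for the cilium chart ever created?
kubectl -n kube-system get jobs | grep -i cilium

# 3. Are any cilium pods present on this node?
kubectl -n kube-system get pods -l k8s-app=cilium -o wide
```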
I see similar errors reported in https://github.com/rancher/rke2/issues/1926. I have also sometimes run into issues like those described in https://github.com/rancher/rke2/issues/4052.
i
Hello! Could you please provide the contents of the Cilium config file /var/lib/rancher/rke2/server/manifests/rke2-cilium-config.yaml? There may be some invalid config options in there.
m
kind: HelmChartConfig
metadata:
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: |-
    operator:
      replicas: 1
i
Hmm, that looks correct! Is the kubelet starting successfully? Also, could you please provide the logs of rke2-server?
journalctl -u rke2-server
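If it helps, a couple of standard journalctl variants that make the relevant part of the rke2-server log easier to capture and share:

```bash
# Dump the recent unit log to a file.
journalctl -u rke2-server --since "2 hours ago" --no-pager > rke2-server.log

# Or filter for the lines most likely related to the CNI/Helm problem.
journalctl -u rke2-server --no-pager | grep -iE 'cilium|helm|cni|error'
```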
m
systemctl status rke2-server
● rke2-server.service - Rancher Kubernetes Engine v2 (server)
    Loaded: loaded (/usr/local/lib/systemd/system/rke2-server.service; enabled; vendor preset: enabled)
    Active: active (running) since Tue 2024-04-30 14:23:02 UTC; 1h 14min ago
      Docs: https://github.com/rancher/rke2#readme
   Process: 130442 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited, status=0/SUCCESS)
   Process: 130444 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
   Process: 130445 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
  Main PID: 130446 (rke2)
     Tasks: 245
    Memory: 1.7G
    CGroup: /system.slice/rke2-server.service
            ├─130446 /usr/local/bin/rke2 server
            ├─130528 containerd -c /var/lib/rancher/rke2/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/rke2/agent/containerd
            ├─131407 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-f>
            ├─131867 /var/lib/rancher/rke2/data/v1.26.15-rke2r1-4876255903be/bin/containerd-shim-runc-v2 -namespace k8s.io -id 7f3eb9c962814addcef8158682138377d7e44635f27f1039a28260155df945bf -address /run/k3s/containerd/containerd.sock
            ├─132235 /var/lib/rancher/rke2/data/v1.26.15-rke2r1-4876255903be/bin/containerd-shim-runc-v2 -namespace k8s.io -id efbdc63862811f1938ead90b0d70d9a5a309071c09e6e471ae9bbf22dc96f116 -address /run/k3s/containerd/containerd.sock
            ├─132237 /var/lib/rancher/rke2/data/v1.26.15-rke2r1-4876255903be/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5e090a0ed92e1eb0a522ee16477301409126fec3980900acbc4f3c52e9f61ec0 -address /run/k3s/containerd/containerd.sock
            ├─132238 /var/lib/rancher/rke2/data/v1.26.15-rke2r1-4876255903be/bin/containerd-shim-runc-v2 -namespace k8s.io -id acd7bc422211c9c34254837e44ccd0574645c239db2dec7f03ee846300d6fd18 -address /run/k3s/containerd/containerd.sock
            ├─132520 /var/lib/rancher/rke2/data/v1.26.15-rke2r1-4876255903be/bin/containerd-shim-runc-v2 -namespace k8s.io -id c0741367139723bae9ab874c1fed944eaac6d1c5c26da76bc7e0f4970163eb23 -address /run/k3s/containerd/containerd.sock
            ├─132601 /var/lib/rancher/rke2/data/v1.26.15-rke2r1-4876255903be/bin/containerd-shim-runc-v2 -namespace k8s.io -id b895301d0cdc837e543bd1e200b2009dca9a25165f6bffc647026ad93999cd17 -address /run/k3s/containerd/containerd.sock
            ├─132605 /var/lib/rancher/rke2/data/v1.26.15-rke2r1-4876255903be/bin/containerd-shim-runc-v2 -namespace k8s.io -id 914398c0eb017f1bf8b1a9a2bdcea95277093b0c4c82a790a213792190f1d791 -address /run/k3s/containerd/containerd.sock
            ├─132769 /var/lib/rancher/rke2/data/v1.26.15-rke2r1-4876255903be/bin/containerd-shim-runc-v2 -namespace k8s.io -id c0eb4ac4613013121f0d83c3a6d494315f399266a63e98515de6dcaadb6383e8 -address /run/k3s/containerd/containerd.sock
            ├─132930 /var/lib/rancher/rke2/data/v1.26.15-rke2r1-4876255903be/bin/containerd-shim-runc-v2 -namespace k8s.io -id 434a098a22d1362d8d07f843b8aa33917b5db2aa83d906d2459a6dc9b344d69a -address /run/k3s/containerd/containerd.sock
            ├─132931 /var/lib/rancher/rke2/data/v1.26.15-rke2r1-4876255903be/bin/containerd-shim-runc-v2 -namespace k8s.io -id 9fd571de45be665613d2bb3ef14d1e4d401585d8de47204f0d0859d8687c0849 -address /run/k3s/containerd/containerd.sock
            ├─133077 /var/lib/rancher/rke2/data/v1.26.15-rke2r1-4876255903be/bin/containerd-shim-runc-v2 -namespace k8s.io -id 60e1b0afd6839b3242b14d72b9bca9b6ccda553805de5c50ca8cb9a5348a1095 -address /run/k3s/containerd/containerd.sock
            ├─133861 /var/lib/rancher/rke2/data/v1.26.15-rke2r1-4876255903be/bin/containerd-shim-runc-v2 -namespace k8s.io -id cd86762c51aef57d2f8ad000f17fff511cc97a5fb8ee94dbda4deb3fe89cc6d0 -address /run/k3s/containerd/containerd.sock
            └─135014 /var/lib/rancher/rke2/data/v1.26.15-rke2r1-4876255903be/bin/containerd-shim-runc-v2 -namespace k8s.io -id 8f656f9fd784dc6ff92b0ad1e6c7f5f7a1798eace36bf35cfba30a1ce244e463 -address /run/k3s/containerd/containerd.sock
i
You have provided the status of the service; I meant the logs of rke2-server, which you can get by running this command:
journalctl -u rke2-server
I notice that the kubelet is running successfully, which is a good sign. Is the control plane up?
kubectl get pods -A -o wide
m
>> kubectl get pods -n kube-system

NAME                                                    READY   STATUS             RESTARTS           AGE
etcd-svr-pro-2                                    1/1     Running            1                  27d
etcd-svr-pro-4                                    1/1     Running            2                  137m
etcd-svr-pro-1                                   1/1     Running            4                  145d
helm-install-rke2-coredns-qcswn                         0/1     CrashLoopBackOff   38 (2m43s ago)     137m
helm-install-rke2-ingress-nginx-44c9g                   0/1     Pending            0                  137m
helm-install-rke2-snapshot-controller-crd-zkw6k         0/1     Pending            0                  137m
helm-install-rke2-snapshot-controller-qxtb7             0/1     Pending            0                  137m
helm-install-rke2-snapshot-validation-webhook-8sxgz     0/1     Pending            0                  137m
kube-apiserver-svr-pro-2                          1/1     Running            2                  27d
kube-apiserver-svr-pro-4                          1/1     Running            2                  138m
kube-apiserver-svr-pro-1                         1/1     Running            5 (28d ago)        145d
kube-controller-manager-svr-pro-2                 1/1     Running            2 (7d16h ago)      27d
kube-controller-manager-svr-pro-4                 1/1     Running            2 (86m ago)        138m
kube-controller-manager-svr-pro-1                1/1     Running            13 (28d ago)       145d
kube-proxy-svr-pro-2                              1/1     Running            0                  7d16h
kube-proxy-svr-pro-4                              1/1     Running            0                  84m
kube-proxy-svr-pro-1                             1/1     Running            5 (28d ago)        145d
kube-scheduler-svr-pro-2                          1/1     Running            1 (7d16h ago)      27d
kube-scheduler-svr-pro-4                          1/1     Running            2 (86m ago)        138m
kube-scheduler-svr-pro-1                         1/1     Running            4 (28d ago)        145d
rke2-coredns-rke2-coredns-657cc8fbcc-q7kfb              0/1     Pending            0                  136m
rke2-coredns-rke2-coredns-657cc8fbcc-wllzz              0/1     Pending            0                  136m
rke2-coredns-rke2-coredns-autoscaler-6cc6bfffcf-qh2c9   0/1     Pending            0                  136m
rke2-ingress-nginx-controller-f9tqx                     1/1     Running            4 (28d ago)        146d
rke2-ingress-nginx-controller-rh72d                     0/1     Unknown            0                  32d
rke2-metrics-server-74f878b999-6647t                    0/1     Pending            0                  136m
rke2-snapshot-controller-7df8c9c5fc-4t9xk               0/1     Pending            0                  136m
rke2-snapshot-validation-webhook-8744b774d-jdkfr        0/1     Pending            0                  136m
sealed-secrets-controller-6449b96755-7jvj6              0/1     Pending            0                  136m
The Cilium CNI is not coming up, and that's the issue. I am trying to figure out the reason and a solution.
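One detail that stands out in the listing above: there are helm-install jobs for coredns, ingress-nginx and the snapshot charts, but none for rke2-cilium, and no cilium pods at all. A hedged sketch to confirm that and capture whatever detail does exist (the helm-install-rke2-cilium naming is the usual rke2 convention, not something confirmed from this cluster):

```bash
# Was the cilium chart ever submitted to the cluster?
kubectl -n kube-system get helmcharts.helm.cattle.io | grep -i cilium
kubectl -n kube-system get jobs,pods | grep -i cilium

# If a helm-install-rke2-cilium pod does exist but is failing, its logs usually say why.
kubectl -n kube-system logs -l job-name=helm-install-rke2-cilium --tail=50
```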
i
OK, it seems like the Cilium config is not being taken into account by the rke2-server process. Could you please reconfigure your /etc/rancher/rke2/config.yaml like this?
cni: cilium
tls-san:
- 192.168.3.8
We'll start with a basic configuration and see what happens next.
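A minimal sketch of applying that suggestion, assuming a standard rke2 install; it backs up the current file and restarts rke2-server so the packaged manifests get re-rendered:

```bash
# Back up the existing config before overwriting it.
cp /etc/rancher/rke2/config.yaml /etc/rancher/rke2/config.yaml.bak

# Write the minimal config suggested above.
cat > /etc/rancher/rke2/config.yaml <<'EOF'
cni: cilium
tls-san:
  - 192.168.3.8
EOF

# Restart and watch whether the rke2-cilium manifest gets applied this time.
systemctl restart rke2-server
journalctl -u rke2-server -f
```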