cuddly-egg-57762
08/08/2022, 12:17 PM
I put the k3s airgap images package in /var/lib/rancher/k3s/agent/images/ and its images are imported automatically, but when I do the same thing with the cilium operator and cilium "client" image tar files, they don't seem to be imported during cluster init.
Does the auto-import only work for the k3s airgap package, or am I doing something wrong?
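As a workaround while that's being figured out, the tarballs can be imported by hand with k3s's bundled ctr (a sketch, assuming the tar files are in a format containerd accepts, e.g. produced by docker save or ctr images export):

```shell
# Import the cilium images manually into k3s's containerd
# (k3s ships a ctr wrapper that talks to its own containerd socket)
k3s ctr images import /var/lib/rancher/k3s/agent/images/cilium.tar
k3s ctr images import /var/lib/rancher/k3s/agent/images/cilium-operator.tar

# Verify they now show up
crictl image list | grep cilium
```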
Here are the contents of the images directory and the crictl image list after the k3s cluster init:
[root@rocky1 srv]# ls /var/lib/rancher/k3s/agent/images/
cilium-operator.tar cilium.tar k3s-airgap-images-amd64.tar.gz metallb-controller.tar metallb-speaker.tar
[root@rocky1 srv]# crictl image list
IMAGE TAG IMAGE ID SIZE
docker.io/rancher/klipper-helm v0.7.3-build20220613 38b3b9ad736af 239MB
docker.io/rancher/klipper-lb v0.3.5 dbd43b6716a08 8.51MB
docker.io/rancher/local-path-provisioner v0.0.21 fb9b574e03c34 35.3MB
docker.io/rancher/mirrored-coredns-coredns 1.9.1 99376d8f35e0a 49.7MB
docker.io/rancher/mirrored-library-busybox 1.34.1 62aedd01bd852 1.47MB
docker.io/rancher/mirrored-library-traefik 2.6.2 72463d8000a35 103MB
docker.io/rancher/mirrored-metrics-server v0.5.2 f73640fb50619 65.7MB
docker.io/rancher/mirrored-pause 3.6 6270bb605e12e 686kB
Thanks a lot for your help!

gray-lawyer-73831
08/09/2022, 3:25 PM
Do you have a manifest in /var/lib/rancher/k3s/server/manifests as well that specifies to use cilium, and the flags in your config to not use the built-in CNI?
If not, you'll need those as well to make sure k3s knows to use cilium (and the images you've loaded there) instead of flannel.
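For reference, a minimal sketch of the two pieces described above, assuming Cilium is installed from its official Helm repo with default values (the config keys are real k3s options, but treat the manifest contents as a starting point, not something taken from this thread):

```shell
# /etc/rancher/k3s/config.yaml -- tell k3s not to deploy its built-in CNI (flannel)
cat > /etc/rancher/k3s/config.yaml <<'EOF'
flannel-backend: "none"
disable-network-policy: true
EOF

# /var/lib/rancher/k3s/server/manifests/cilium.yaml -- a HelmChart manifest
# that k3s's bundled helm-controller applies automatically on startup
cat > /var/lib/rancher/k3s/server/manifests/cilium.yaml <<'EOF'
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: cilium
  namespace: kube-system
spec:
  chart: cilium
  repo: https://helm.cilium.io
  targetNamespace: kube-system
EOF
```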
cuddly-egg-57762
08/10/2022, 7:20 AM
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
Is there a way to avoid this? I'm thinking about manually removing the not-ready taint, but I guess that's not really safe to do.
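On that last point: CNI DaemonSets like Cilium's normally ship a toleration for node.kubernetes.io/not-ready, so the agent pods can schedule despite the taint, and once they're running the kubelet clears the taint on its own; removing it by hand usually just masks the underlying problem. A sketch of inspecting and, if you really want, removing it (node name rocky1 taken from the shell prompt earlier in the thread):

```shell
# Show the taints currently set on the node
kubectl get node rocky1 -o jsonpath='{.spec.taints}'

# Remove the not-ready taint manually (trailing "-" means remove).
# The kubelet may re-add it while the node is still NotReady,
# so this only hides the symptom, not the cause.
kubectl taint nodes rocky1 node.kubernetes.io/not-ready-
```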