# k3s
you can’t, if you’re using flannel. You are welcome to set --flannel-backend=none and deploy your own CNI though.
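(For readers landing here: a minimal sketch of that setup, assuming a Helm-based Cilium install; adjust the chart values to your environment.)

```bash
# Sketch: start k3s without flannel (and without its network policy
# controller), then deploy Cilium as the CNI.
curl -sfL https://get.k3s.io | sh -s - server \
  --flannel-backend=none \
  --disable-network-policy

# Cilium via its official Helm chart; values here are deliberately minimal.
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --namespace kube-system
```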
This is what I thought. The thing is that Cilium installs itself in
/var/lib/rancher/k3s/data/[long_id]/bin
and, following a k3s upgrade, the CNI breaks because the cluster can't find the cilium-cni binary anymore, and I have to restart the cilium DaemonSet for the cluster to work again. This is why I was looking at changing the CNI binary location. Otherwise, I may need a ClusterPolicy with something like Kyverno to detect a Kubernetes upgrade and restart the pods accordingly, which isn't ideal.
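(For readers: the restart workaround described above is essentially the following, assuming the agent DaemonSet is named cilium and lives in kube-system.)

```bash
# Post-upgrade workaround: restart the Cilium agents so they re-copy
# cilium-cni into whatever bin dir containerd is currently using.
kubectl -n kube-system rollout restart daemonset/cilium
```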
Why is it going there? As you noted, that is the path for binaries bundled with k3s - other things shouldn’t be dropping things there if they’re intended to be available following an upgrade.
I'd have to check... Could I place it somewhere like /opt/bin instead?
If I instruct Cilium to install its binaries in /opt/cni/bin and then restart any pod, I get this:

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "225b6ecfcbf37ebab736b2534a0fa43fce8926a055a93d7d3b72f8f87e62f970": plugin type="cilium-cni" failed (add): failed to find plugin "cilium-cni" in path [/var/lib/rancher/k3s/data/f9f37b05ac205a3ca783d075d40c4ba8be73efd8caf83a27add3ed0ab8035e96/bin]

That's why I had moved the binaries there (into the k3s data dir) to make it work, but clearly something's not right.
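(A quick way to see the mismatch on a node; paths as in the error above, nothing assumed beyond the file locations already mentioned.)

```bash
# Where does containerd's CRI plugin look for CNI binaries?
grep bin_dir /var/lib/rancher/k3s/agent/etc/containerd/config.toml
# And where did cilium-cni actually end up?
ls -l /opt/cni/bin/cilium-cni /var/lib/rancher/k3s/data/*/bin/cilium-cni 2>/dev/null
```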
Apparently, the bin_dir location is set in this file:
/var/lib/rancher/k3s/agent/etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".cni]
bin_dir = "/var/lib/rancher/k3s/data/f9f37b05ac205a3ca783d075d40c4ba8be73efd8caf83a27add3ed0ab8035e96/bin"
conf_dir = "/var/lib/rancher/k3s/agent/etc/cni/net.d"
Any clue on installing CNI binaries (like Cilium as an independent install) outside of the standard path above? Can I install it into /opt/cni/bin and have it reachable?
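(One approach worth trying, sketched under two assumptions: that your k3s version honours a containerd config template at /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl, and that your Cilium chart exposes the cni.binPath / cni.confPath values. Copying the generated config freezes everything else in it, so recheck the template after k3s upgrades.)

```bash
# Sketch: make containerd look in /opt/cni/bin instead of the versioned
# k3s data dir. If config.toml.tmpl exists, k3s uses it as the template
# instead of regenerating config.toml on startup.
cd /var/lib/rancher/k3s/agent/etc/containerd
cp config.toml config.toml.tmpl
sed -i 's|bin_dir = ".*"|bin_dir = "/opt/cni/bin"|' config.toml.tmpl
systemctl restart k3s

# Then point Cilium at the same directory; the conf dir stays the k3s one.
helm upgrade cilium cilium/cilium --namespace kube-system \
  --reuse-values \
  --set cni.binPath=/opt/cni/bin \
  --set cni.confPath=/var/lib/rancher/k3s/agent/etc/cni/net.d
```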