# k3s
c
change the $PATH in your docker container to include that path
The k3s docker image does not self-extract to /var/lib/rancher/k3s/data like the standalone binary does, so those paths aren’t used. But I suppose we could include the new static CNI bin path in the default $PATH for compatibility.
f
I've had a look and it looks like the following happens: k3s looks for the `host-local` executable in the PATH and uses that location to generate the configuration in /var/lib/rancher/k3s/agent/etc/containerd/config.toml. At startup the /var/lib/rancher/k3s/data directory does not exist, which is how /bin ends up as the cni-bin-dir setting.
c
Yes, and it will never exist, because the docker image doesn't self-extract or add that path to the PATH env var
You can modify container env vars when running the k3s container though...
f
The lookup of the host-local path is only done once, before the config.toml is generated. So unless I mount the CNI drivers from the host, host-local will only be found in /bin, and /bin is the only path used for CNI plugins. I've configured /bin as the binDir in the Multus chart, so the pod from the DaemonSet installs the Multus CNI plugin into /bin. I don't know if it's bad to have the CNI drivers/plugins in the /bin dir; it seems to me you could trigger any of the executables there just by creating a network with a CNI plugin named poweroff or reboot.
c
You'd need something like Multus to try to get it to run arbitrary other executables as CNI plugins
And anyone with admin access to do that could already do the same with a privileged pod. It doesn't affect the threat model at all.
Im talking about setting env vars for the k3s container itself. Not for containerd. You understand what I'm saying, right?
```shell
docker run --name k3s-server-1 --privileged -v /var/lib/rancher/k3s/data/cni -e PATH=/var/lib/rancher/k3s/data/cni:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/bin/aux rancher/k3s:v1.31.4-k3s1 server
```
something like that. idk how you’re currently running it in Docker such that things persist across restarts.
Obviously it won’t use the bins in that directory on the first start, you’ll need to copy or symlink them there yourself and then restart
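That copy-or-symlink step could be scripted as something like the following sketch. The function name, source directory, and plugin list are assumptions for illustration, not anything k3s ships:

```shell
# Hypothetical helper: symlink the CNI binaries baked into the image into the
# persistent dir that k3s checks first when generating config.toml.
# The plugin list here is illustrative; adjust it to what your setup needs.
link_cni_plugins() {
  src="$1"  # directory holding the CNI binaries (e.g. /bin in the k3s image)
  dst="$2"  # persistent CNI bin dir (e.g. /var/lib/rancher/k3s/data/cni)
  mkdir -p "$dst"
  for plugin in host-local loopback bridge portmap; do
    if [ -x "$src/$plugin" ]; then
      ln -sf "$src/$plugin" "$dst/$plugin"
    fi
  done
}

# Inside the running container (then restart it so config.toml is regenerated):
#   link_cni_plugins /bin /var/lib/rancher/k3s/data/cni
```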
f
I'm running k3s in a container to make it part of an integration test. The docker command is pretty much what you mentioned. I'm using testcontainers from golang to create the cluster and deploy all resources. I guess I could override the entrypoint of the container to invoke a shell script that creates the directory and symlinks the CNI plugins, or create a Dockerfile to do the same.
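For the Dockerfile route, a minimal sketch might look like this (written via a heredoc so it can live next to the test harness). The image tag, paths, and plugin list are assumptions, and it presumes the base image ships a shell for `RUN`:

```shell
# Sketch: derive an image from rancher/k3s that pre-creates the persistent CNI
# dir and symlinks the built-in plugins into it, so the generated config.toml
# picks that dir up on first start.
cat > Dockerfile.k3s-test <<'EOF'
FROM rancher/k3s:v1.31.4-k3s1
RUN mkdir -p /var/lib/rancher/k3s/data/cni && \
    for p in host-local loopback bridge portmap; do \
        ln -sf "/bin/$p" "/var/lib/rancher/k3s/data/cni/$p"; \
    done
EOF

# Build it and point the testcontainers request at the new tag:
#   docker build -t k3s-cni-test -f Dockerfile.k3s-test .
```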
Thanks for the suggestions.
c
yeah, if you have the ability to build another image on top of the k3s image, that might not be as easy, but it'd also work. An init container would probably be more portable, if that's something you can do.
I did open https://github.com/k3s-io/k3s/issues/11497 to track this, hoping to have a fix for the January release cycle