# k3s
l
Isn’t it possible to “tell” the gpu-operator chart where the containerd sock is?
I think it’s more the gpu-operator that should support K3s and not the other way around
w
No, they do not make that path configurable in `values.yaml`
b
you could always symlink 🤷
🎯 1
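For reference, the symlink approach usually boils down to pointing the stock containerd socket path at the one k3s actually uses; a minimal sketch, assuming the default k3s socket location (verify both paths on your node before linking):
```sh
# k3s's embedded containerd listens on /run/k3s/containerd/containerd.sock;
# link it to the path most tooling expects by default.
sudo mkdir -p /run/containerd
sudo ln -s /run/k3s/containerd/containerd.sock /run/containerd/containerd.sock
```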
l
yup, or for something compatible with Helm you could use ytt from Carvel to overlay on the output of the Helm templating, by using the `--post-renderer` parameter to Helm
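For reference, a Helm post-renderer is just an executable that receives the fully rendered manifests on stdin and prints the modified manifests to stdout, so the wiring for the ytt idea can be as small as the sketch below. The script and overlay file names are made up for the example, and the overlay itself (rewriting the containerd socket path) still has to be written to match whatever objects the chart actually renders:
```sh
#!/bin/sh
# helm-ytt.sh (hypothetical name): Helm pipes the rendered manifests in on
# stdin; ytt applies the rewrites defined in containerd-sock-overlay.yaml
# (e.g. pointing CONTAINERD_SOCKET at /run/k3s/containerd/containerd.sock)
# and prints the result back for Helm to apply.
exec ytt -f - -f containerd-sock-overlay.yaml
```
and then:
```sh
chmod +x helm-ytt.sh
helm install gpu-operator nvidia/gpu-operator \
  --namespace gpu-operator --create-namespace \
  --post-renderer ./helm-ytt.sh
```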
w
Symlink did come to mind…
Anyways, just wanted to check with more experienced k3s users on this - thanks
c
You shouldn't need the operator. Just install the runtimes, k3s will find them when it starts. Check the k3s docs for more info.
w
Yes, this is how we do it now… Was just considering if GPU Operator was a “newer, better way” & if it could be used.
b
you can do both; if you just install the runtime, you must specify "`runtimeClassName: nvidia`" in every pod or deployment
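Something like this, for example (pod name and image are placeholders; the runtime class is the part that matters):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-test
spec:
  runtimeClassName: nvidia   # opt this pod into the NVIDIA container runtime
  restartPolicy: OnFailure
  containers:
  - name: cuda
    image: nvcr.io/nvidia/cuda:12.2.0-base-ubuntu22.04
    command: ["nvidia-smi"]
```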
if you want to install the GPU Operator, follow these steps:
```sh
helm repo add nvidia https://nvidia.github.io/gpu-operator \
  && helm repo update

# install straight from the repo:
helm install --wait --generate-name --namespace gpu-operator --create-namespace \
  nvidia/gpu-operator \
  --set operator.defaultRuntime="containerd"

# or pull and unpack the chart so you can edit values.yaml first,
# then install from the unpacked directory:
helm pull nvidia/gpu-operator
tar xzvf gpu-operator-v23.3.2.tgz
cd gpu-operator
helm install gpu-operator ./ --namespace gpu-operator --create-namespace \
  --set operator.defaultRuntime="containerd"
```
and change this in the chart's values.yaml:
```yaml
toolkit:
  env:
  - name: CONTAINERD_CONFIG
    value: /var/lib/rancher/k3s/agent/etc/containerd/config.toml
  - name: CONTAINERD_SOCKET
    value: /run/k3s/containerd/containerd.sock
```
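One way to sanity-check the install afterwards (namespace matches the commands above):
```sh
# watch the operator, toolkit and device-plugin pods come up
kubectl get pods -n gpu-operator -w

# once everything is Running, the node should advertise the GPU resource
kubectl describe node | grep -i nvidia.com/gpu
```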
w
Thanks, @boundless-spoon-6503! I will give this a try.
👍 1