# rancher-desktop
f
No, this is not possible.
What benefits do you expect to get from Rancher Desktop compared to just starting the docker daemon yourself? All the isolation advantages (and the ability to wipe the whole setup with a factory-reset) would no longer be available if you run directly on the host.
e
I see... in fact I was really missing an explanation of these details. OTOH I'd be very interested in comparing the isolation features/level currently obtained through "the qemu driver" with what we could get by implementing an "LXC driver" counterpart. I'm not an expert, but I was really impressed by running microk8s on LXD last week and I feel it may be worth it. I'd be happy to contribute in any case :-)
f
This is unlikely to fit the Rancher Desktop design, which is built around the assumption that it manages a container engine inside a VM. The VM is managed with Lima on macOS and Linux, and with WSL2 on Windows.
That's why I'm asking what benefits you expect from running Rancher Desktop against a native docker or containerd daemon. What additional functionality would it provide?
Note that we originally did not even plan to build a Linux version at all because we assumed everyone would just run the daemons natively, but we got enough requests from users to extend the platform list. These requests were all based on the ease of "wiping the slate clean", which you get from running the engine (and data store) inside a VM.
Adding an "LXC driver" for Linux seems like a lot of work for very little benefit.
e
Ok, so my main use case is setting a hard limit on the total resources consumed by a local k8s cluster and its workloads, so that it doesn't impact the system UI, IDE and so on. This can be easily achieved e.g. with a simple microk8s running in an LXC container (the same holds for k3d, I suppose). With that approach, even if I deploy dozens of angry pods without CPU limits on the k8s side, they compete among themselves for the available CPU cycles, but always under the limits imposed by LXC. The LXC container uses its own container runtime (containerd) to place the pods, so both the LXC container and the pods are confined and cannot exhaust host resources, though I can still mount directories as disks, proxy ports and so on. OTOH most of my colleagues would really benefit from the massive feature set of Rancher Desktop, but QEMU/KVM full virtualization imposes a performance penalty that I'd prefer to avoid if I can stick to a lighter approach. TBH I haven't properly measured performance so far, so I don't have clear metrics for comparison, but at first sight virtualization seems more cumbersome for this scenario.
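Concretely, the setup looks something like this (just a sketch assuming LXD; the container name `mk8s` and the 2 CPU / 8GiB limits are examples I picked, and microk8s inside LXD typically needs a few more profile tweaks per its upstream docs):

```shell
# Profile for running microk8s inside LXD; nesting lets the container
# run its own containerd for the pods.
lxc profile create microk8s
lxc profile set microk8s security.nesting true
lxc profile set microk8s security.privileged true

# Launch the container and cap its resources; pods compete for CPU and
# memory only within these limits, whatever their own k8s limits are.
lxc launch ubuntu:22.04 mk8s -p default -p microk8s
lxc config set mk8s limits.cpu 2
lxc config set mk8s limits.memory 8GiB

# Host integration still works: mount a host directory and proxy a port.
lxc config device add mk8s src disk source=$HOME/src path=/src
lxc config device add mk8s web proxy listen=tcp:127.0.0.1:8080 connect=tcp:127.0.0.1:80

# Install microk8s inside the confined container.
lxc exec mk8s -- snap install microk8s --classic
```

The point is that the cap lives outside the cluster, so nothing deployed to k8s can push past it.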