Ok, so my main use case is setting a hard cap on the total resources consumed by a local k8s cluster and its workloads, so that it doesn't impact the system UI, IDE and so on.
This is easily achieved, e.g. with a simple microk8s running in an LXC container (the same should go for k3d, I suppose).
With that approach, even if I deploy dozens of angry pods without CPU limits on the k8s side, they compete among themselves for the available CPU cycles, but always within the limits imposed on the LXC container.
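For reference, a minimal sketch of the kind of cap I mean, via LXD (the container name, image and limit values here are made up):

    # Hard-cap everything the cluster can use, no matter what runs inside
    lxc launch ubuntu:22.04 k8s-box -c limits.cpu=4 -c limits.memory=8GiB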
The LXC container runs its own container runtime (containerd, in microk8s's case) to place the pods.
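One caveat: a runtime-inside-a-container setup like this needs some extra permissions on the LXC side. Roughly something like the following (the profile name is my own, the exact keys depend on the LXD version, and the full profile the MicroK8s docs publish for LXD also relaxes AppArmor via raw.lxc):

    # Hypothetical profile; the official MicroK8s-on-LXD profile sets more keys
    lxc profile create k8s-lxc
    lxc profile set k8s-lxc security.nesting true
    lxc profile set k8s-lxc linux.kernel_modules ip_tables,ip6_tables,netlink_diag,nf_nat,overlay,br_netfilter
    lxc profile add k8s-box k8s-lxc   # attach it to the container from the sketch above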
So both the LXC container and the pods are confined, and cannot touch host resources beyond what I explicitly grant, though I can still mount host directories as disks, proxy ports and so on.
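Concretely, those are plain LXD device entries, e.g. (paths and ports made up):

    # Expose a host directory inside the container
    lxc config device add k8s-box code disk source=$HOME/projects path=/mnt/projects
    # Forward a host port to something listening inside the container
    lxc config device add k8s-box http proxy listen=tcp:127.0.0.1:8080 connect=tcp:127.0.0.1:80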
OTOH, most of my colleagues would really benefit from Rancher Desktop's massive feature set, but full QEMU/KVM virtualization carries a performance penalty that I'd prefer to avoid if I can stick to a lighter approach.
TBH I haven't properly measured performance so far, so I don't have hard numbers to compare, but at first glance full virtualization seems more cumbersome for this scenario.
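If/when I do measure it, I'd probably start crude: run the same synthetic load on the bare host, inside the LXC container, and inside the Rancher Desktop VM, and compare the deltas. Something like:

    # Same commands in each environment; the deltas give a rough idea of the overhead
    sysbench cpu --threads=4 --time=30 run
    sysbench memory --threads=4 run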