# rancher-desktop
j
Attention @calm-sugar-3169
c
@nice-toddler-37804 nobody is actively working on this issue at the moment; however, we do have it on our roadmap. Feel free to tackle this one, we look forward to your proposed changes. Also, feel free to ping me directly if you have any questions. Thanks 🙏
w
And remember, you can use WSLENV on Windows and it will automatically get written to /etc/environment on init, for all the RD processes to take advantage of. I find this works well for Docker/containerd, but my issue is with `no_proxy`, to prevent Kubernetes traffic from being forwarded as well.
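For what it's worth, here is a rough sketch of the mechanism (not RD's actual bootstrap code; the distro name, variables, and values are placeholders) showing how WSLENV's `/u` flag shares a Win32 variable into a distro spawned from Node:
```ts
// Rough sketch, not RD's real bootstrap: share proxy variables Win32 -> WSL
// via WSLENV's /u flag when spawning the distro. Values are placeholders.
import { spawn } from 'child_process';

const env = {
  ...process.env,
  HTTP_PROXY: 'http://proxy.example.com:3128',  // placeholder proxy address
  NO_PROXY:   'localhost,127.0.0.1',            // see the no_proxy discussion below
  // Append to any existing WSLENV; /u shares the variable only Win32 -> WSL.
  WSLENV: [process.env.WSLENV, 'HTTP_PROXY/u', 'NO_PROXY/u'].filter(Boolean).join(':'),
};

// printenv inside the distro should now show the forwarded variable.
spawn('wsl.exe', ['-d', 'rancher-desktop', 'printenv', 'HTTP_PROXY'],
      { env, stdio: 'inherit' });
```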
n
@calm-sugar-3169 @wide-mechanic-33041 If I may ping you directly for some questions. Right now it seems the preferences that trigger a change in the programs running in the background are the following:
* `kubernetes.port`
* `kubernetes.containerEngine`
* `kubernetes.enabled`
* `kubernetes.WSLIntegrations`
* `kubernetes.options.traefik`
* `kubernetes.options.flannel`
* `kubernetes.hostResolver`
All of these preferences live under the `kubernetes` object, but they don't all relate directly to the Kubernetes configuration. Correct me if I'm wrong, but for instance the parameter `WSLIntegrations` is in this object yet doesn't relate directly to Kubernetes. During the development of my WSLProxy feature, I added the `wslProxy` entry to the `Settings` object:
```ts
export const defaultSettings = {
  ...
  kubernetes: { ... },
  wslProxy: { address: '', port: 3128 },
  ...
}
```
My first questions are then:
1. Should all the preference settings that modify the backend be listed under the `kubernetes` entry of `Settings`, or can I keep it as I did? Right now, every change to a settings variable is checked in the function `requiresRestartReasons` in `pkg/rancher-desktop/backend/kube/wsl.ts`, and any change to one of those preferences triggers a full reset of Kubernetes in the WSL VM.
2. Is it planned to take the preferences into account more precisely, so that a simple change in the settings triggers only a specific subset of actions rather than the whole Kubernetes reset? If I want to apply the WSL Proxy settings, my setting will need to be checked in the body of `requiresRestartReasons` in `pkg/rancher-desktop/backend/wsl.ts` (see the sketch after this list).
3. Also, I first want to be sure: does any setting currently exist that the user can trigger to directly modify the content or execution of the WSL VM? To apply the WSL Proxy settings, I have to modify `WSLENV` variables or write them directly inside the running WSL VM.
4. If I want to modify `WSLENV`, is there any way to change that variable while the VM is running, or should I re-spawn a new VM?
Thank you
w
So I can’t speak to the RD bootstrap, but WSLENV is a Microsoft thing, and I believe it is only interpreted at the init of a distro. The issue I have had with WSL proxying has not been setting the proxy itself (WSLENV does great for that) but setting `no_proxy`. It’s been a while since I gave it a run, and I believe there were hints that the RD team was poking at vpnkit, which could allow for an explicit subnet.
n
I think I'm missing something, but what do you mean exactly by "setting the `no_proxy`"? I will look at `WSLENV` and maybe try restarting WSL just to test things out. I guess that if I just set the `http_proxy` variable in `/etc/profile`, I will need to restart Docker, k8s, etc. so that they take that variable into account anyway. vpnkit does look interesting to try out indeed; I did see it mentioned in one of the issues. Right now I was thinking of developing something simpler based on the env variable rather than importing something external into the rancher-desktop project. Of course, I'm open to investigating any option to solve this issue.
w
Well, also look at rc.conf etc. that start those daemons. It's a timing thing at its core. `no_proxy` is needed to tell kubectl and other internal cluster processes to ignore the proxy setting for intra-VM traffic,
because the external proxy won't be able to complete the CONNECT request to a nested VM.
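Something like this is what I mean, illustrative only; the CIDRs are the k3s defaults, and CIDR notation in `no_proxy` is honored by Go programs like kubectl but not by every tool:
```ts
// Illustrative only: the kind of no_proxy list that keeps intra-cluster
// traffic away from the external proxy. CIDRs below are the k3s defaults.
const noProxy = [
  'localhost',
  '127.0.0.1',
  '10.42.0.0/16',   // k3s default pod CIDR
  '10.43.0.0/16',   // k3s default service CIDR
  '.svc',           // in-cluster service DNS names
  '.cluster.local',
].join(',');

console.log(`no_proxy=${noProxy}`);
```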
n
I see, thank you @wide-mechanic-33041 for your insight. I will continue working on that.
w
I am confident that if it were an easy fix, they would already have a doc for it. macOS/Lima offers a bit more isolation than WSL, which I find seems to keep intra-cluster traffic working, but on WSL I don't have a great option yet.
c
@nice-toddler-37804 thanks for looking into this. Right now all the preferences are under `kubernetes` (even the ones that are unrelated to k8s); however, we are planning on refactoring this soon. I think @proud-jewelry-46860 can speak to that more. For now, you can also place the new preference property under `kubernetes`. As for modifying `WSLENV`, we need to be more cautious, since restarting WSL or re-spawning a new VM may not be an option: RD is only one of many distros that users could be running in their WSL, and if the WSL VM is restarted, all those distros are essentially restarted too. Furthermore, the proxy variable should ideally get updated without a restart of k8s or Docker, to prevent potential glitches or outages from the user-experience perspective. One way to do this is to use an intermediary proxy, either running on the host machine or inside the WSL VM; that way all traffic always goes through the intermediary, and any changes are applied at the proxy too, so there is no need to restart dockerd or WSL for that matter. I wrote a proposal for a solution in this document (under the proxy section): https://docs.google.com/document/d/1rEMnQJTxJ6oHePrtsdtuES8gi6p_Qibi4BOq1rmzp5E/edit?usp=sharing
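As a very rough illustration of that idea (not the design from the document; the address, port, and upstream name are placeholders): dockerd and k8s are pointed once at a fixed local address, and only the intermediary's upstream changes at runtime, so clients never restart:
```ts
// Minimal sketch of an intermediary proxy, with placeholder names throughout.
// Clients (dockerd, k8s) set http_proxy to the fixed local address once; when
// the real upstream proxy changes, only `upstream` is swapped, no restarts.
import * as net from 'net';

let upstream = { host: 'corp-proxy.example.com', port: 3128 }; // hypothetical

export function setUpstream(host: string, port: number): void {
  upstream = { host, port }; // takes effect for new connections immediately
}

net.createServer((client) => {
  // Relay bytes verbatim; since the upstream is itself an HTTP proxy, plain
  // requests and CONNECT tunnels both pass through unchanged.
  const server = net.connect(upstream.port, upstream.host, () => {
    client.pipe(server).pipe(client);
  });
  server.on('error', () => client.destroy());
  client.on('error', () => server.destroy());
}).listen(3128, '127.0.0.1'); // the fixed address clients are configured with
```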