# rancher-desktop
w
This has been causing issues for several of our developers: https://github.com/rancher-sandbox/rancher-desktop/issues/4305
w
Maybe this is related to my networking issues
w
In case it's relevant, we're running Windows 11. When Rancher Desktop launches and all services come up, the ports are correctly shown as bound in Windows:
netstat -ab
For example, port 443 is bound and you can access web apps inside the cluster. A while later these bindings are silently dropped, netstat shows they are no longer there, and access to your cluster-hosted services stops working until you restart Rancher Desktop
w
I'm also firing up Rancher, and it will work for a bit and then quit. In particular, I won't be able to access the ports from Windows, but can still hit them from within WSL via curl
c
@witty-sunset-95652 the issue has been fixed here: https://github.com/rancher-sandbox/rancher-desktop-agent/pull/33
we are working on getting a release out today.
w
Perfect, thank you for the update
👍 1
b
oh! i might be running into the same thing, cool.
m
just so we're clear: it's a tech preview of 1.9 that's out now, so it won't be part of the auto-update yet, since it gives people a chance to try a new feature. the fix, though, should be in it. we'll get that into the proper release of 1.9 in due time, so let us know if the fix works for you with this preview
👍 1
w
I've installed the 1.9 tech preview, and I'm not sure if this is related, but I've not seen it previously: we are running an nginx controller in the cluster with Traefik disabled, and the controller no longer starts
Several other services have the same issue too; OpenVPN, for example:
Wed Apr 19 09:06:11 2023 TCP/UDP: Socket bind failed on local address [undef]: Address in use
Host machine bindings for example port 443
Is anyone else having further port issues?
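The OpenVPN error above is the classic "address already in use" bind failure: something is still holding the port when the new service tries to claim it. A small self-contained Go sketch (illustrative only) reproduces that class of error by binding the same address twice:

```go
package main

import (
	"fmt"
	"net"
)

// bindTwice binds a listener on an ephemeral port, then attempts to
// bind the exact same address again, returning the second error.
func bindTwice() error {
	first, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer first.Close()

	// The second bind fails with "address already in use" -- the same
	// class of error OpenVPN and the nginx controller reported.
	second, err := net.Listen("tcp", first.Addr().String())
	if err == nil {
		second.Close()
	}
	return err
}

func main() {
	fmt.Println("second bind error:", bindTwice())
}
```

If a stale listener from a previous run is still holding 443/80, every new service that wants those ports hits exactly this failure.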
w
Can you guys check the Microsoft Store for Windows Subsystem for Linux? Let us know if it isn't installed (yes, this is different from the Windows Feature being enabled)
w
It is installed in the Microsoft Store for me
c
@witty-sunset-95652 since you mentioned Traefik is disabled, can you check what else might be listening on 0.0.0.0:443 and 0.0.0.0:80? Perhaps:
netstat -ano | findstr 443
w
I have checked that: when Rancher Desktop is stopped, there is nothing bound to those ports
And when it's running, it binds them (screenshot above)
And it's not just the web ports; all the other services seem to be complaining about their ports already being in use. I've only seen this issue on the 1.9 tech preview
c
can you give me repro steps, so I can reproduce this on my end?
w
I might have missed something but it seems that way so far
I've just started up RD after not having it running all afternoon and now typically it appears to be working
I did notice that when it wasn't working earlier, both iphlpsvc and wslrelay.exe were bound to port 443
Now that it's working, only iphlpsvc is bound to 443
c
can you share the output of
netsh interface portproxy show all
?
w
This is while it's in the working state and shows the open service ports as expected
c
@witty-sunset-95652 I'm currently investigating this and it might be an actual issue. Are you able to give me repro steps on how to replicate it?
w
Hi Nino, I don't really have any concrete repro steps to share at this stage; we have a series of scripts to configure Rancher Desktop and then apply our manifests. This is the current mode we put RD into:
rdctl start --kubernetes.options.traefik=false --container-engine.name=moby --kubernetes.version=1.24.12 --kubernetes.enabled=true
But after that I don't think there's anything special that we are doing, as when we were hitting the issue it seemed to affect all services with a port mapping inside the cluster
c
ok, thanks
w
I'd guess the issue is still around here in servicewatcher.go, where under some circumstances it's attempting to add ports that already exist, but I'm not sure
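A common guard against that failure mode is to track which host ports have already been forwarded and skip duplicates. The sketch below is illustrative only (the type and method names are invented, not the actual `servicewatcher.go` code):

```go
package main

import "fmt"

// portForwarder tracks which host ports have already been exposed, so a
// replayed service event doesn't try to bind the same port twice.
type portForwarder struct {
	active map[int]bool
}

func newPortForwarder() *portForwarder {
	return &portForwarder{active: make(map[int]bool)}
}

// Add forwards a port unless it is already forwarded; it returns
// whether a new forward was actually created.
func (f *portForwarder) Add(port int) bool {
	if f.active[port] {
		return false // already forwarded; re-binding would fail with "address in use"
	}
	f.active[port] = true
	// ... create the actual host listener / port proxy here ...
	return true
}

// Remove drops the forward so the port can be bound again later.
func (f *portForwarder) Remove(port int) {
	delete(f.active, port)
}

func main() {
	f := newPortForwarder()
	fmt.Println(f.Add(443)) // first forward succeeds
	fmt.Println(f.Add(443)) // duplicate is skipped
	f.Remove(443)
	fmt.Println(f.Add(443)) // succeeds again after cleanup
}
```

The key point is that Remove must always run on teardown; if it doesn't, the set (and the underlying listener) goes stale and every later Add for that port is wrongly rejected or collides.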
c
which container backend are you using?
also, are you able to share your logs with me? you can send them directly if you like
nvm, I see from your
rdctl start
you are using moby
👍 1
w
I'll send them over. It's difficult to tell if they'll contain the relevant errors, as I've reset / reinstalled plenty of times, but they look like they date back to yesterday morning while there were issues, based on the time of my messages above. It does appear to have been working fine since (the original port issue hasn't reappeared)
c
One more thing, this is a privileged/admin installation correct?
w
Yeah with the associated windows service running
c
cool, just wanted to make sure, I think I may have some ideas on what actually happened
👍 1
w
Sounds promising, let me know if there’s anything else you need
👍 1
c
Ok, so looking at the logic again in
servicewatcher.go
it seems to be correct for the most part, although there is a potential for it to forward duplicated ports to the host. However, we really don't have any way of notifying users at runtime, since that error needs to come from
kubectl
, so we leave it to the user's discretion to be mindful of duplicating ports on the host. In your case, though, what I suspect happened was an actual bug in the privileged service that runs on Windows: it was leaving behind the port proxies created previously (it never cleaned up after itself on shutdown), so the old Traefik port proxy listeners were still running on the host. When you created the new controller listening on
443/80
it got into a panic state. I believe the following PR should keep users from getting into that state, since it makes the privileged service clean up after itself.
here is the PR that I'm hoping fixes this: https://github.com/rancher-sandbox/rancher-desktop/pull/4503
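The fix described above boils down to a cleanup-on-shutdown pattern: the privileged service must tear down every port proxy it created before exiting, so stale listeners can't survive a restart and collide with new forwards. A minimal sketch of that pattern, with invented names standing in for the real netsh portproxy plumbing:

```go
package main

import "fmt"

// proxies stands in for the netsh portproxy entries the privileged
// service creates; the names here are illustrative, not the real code.
var proxies = map[int]string{443: "0.0.0.0:443", 80: "0.0.0.0:80"}

// cleanup removes every proxy this process created, so stale listeners
// don't survive a restart and collide with the next set of forwards.
func cleanup() {
	for port := range proxies {
		// ... would run `netsh interface portproxy delete ...` here ...
		delete(proxies, port)
	}
	fmt.Println("proxies remaining:", len(proxies))
}

// run serves until a shutdown is requested, then always tears down its
// proxies via defer -- the behavior the fix adds.
func run(shutdown <-chan struct{}) {
	defer cleanup()
	<-shutdown
}

func main() {
	shutdown := make(chan struct{})
	close(shutdown) // simulate a stop request
	run(shutdown)
}
```

Without the deferred cleanup, a crash or restart leaves the old 443/80 proxies registered, which is exactly the "address in use" state reported earlier in this thread.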
w
Ah good find, that sounds like it would do it, thank you for looking into it
c
No worries, thanks for taking the initiative to report it. I hope the assumption is correct and the PR will prevent you from getting into that state again
🤞 1