witty-sunset-95652
04/18/2023, 10:45 AM

witty-honey-18052
04/18/2023, 1:56 PM

witty-sunset-95652
04/18/2023, 1:58 PM
netstat -ab
For example, port 443 is bound and you can access web apps inside the cluster. A while later those bindings are silently dropped: netstat shows they are no longer there, and access to your cluster-hosted services stops working until you restart Rancher Desktop.
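A quick way to see the symptom from the Windows side is to watch whether the listener is still there; the output line in the comments is only a sketch of netstat's format:

    netstat -ano | findstr ":443"
    REM while things work this prints a LISTENING line, roughly:
    REM   TCP    0.0.0.0:443    0.0.0.0:0    LISTENING    <pid>
    REM once the binding is silently dropped, the line disappears even though Rancher Desktop is still running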
witty-honey-18052
04/18/2023, 2:03 PM

calm-sugar-3169
04/18/2023, 7:10 PM

witty-sunset-95652
04/18/2023, 7:11 PM

bright-fireman-42144
04/19/2023, 1:20 AM

magnificent-napkin-96586
04/19/2023, 2:30 AM

witty-sunset-95652
04/19/2023, 9:02 AM
Wed Apr 19 09:06:11 2023 TCP/UDP: Socket bind failed on local address [undef]: Address in use
witty-honey-18052
04/19/2023, 4:27 PM

witty-sunset-95652
04/19/2023, 6:04 PM

calm-sugar-3169
04/19/2023, 7:16 PM
0.0.0.0:443 and 0.0.0.0:80?
perhaps:
netstat -ano | findstr 443
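If that does turn up a listener, the owning process can be looked up from the PID column; the PID below is a placeholder for whatever netstat reports:

    netstat -ano | findstr 443
    tasklist /fi "PID eq 1234"
    REM replace 1234 with the PID from the last column of the netstat output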
witty-sunset-95652
04/19/2023, 7:18 PM

calm-sugar-3169
04/19/2023, 7:20 PM

witty-sunset-95652
04/19/2023, 7:20 PM

calm-sugar-3169
04/19/2023, 8:37 PM
netsh interface portproxy show all
?
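For reference, a stale entry from that command would show up roughly as sketched in the comments below; the connect address is a placeholder for whatever the proxy forwards to:

    netsh interface portproxy show all
    REM Listen on ipv4:             Connect to ipv4:
    REM Address         Port        Address             Port
    REM --------------- ----------  ------------------- ----------
    REM 0.0.0.0         443         <connect-address>   443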
witty-sunset-95652
04/20/2023, 7:18 AM

calm-sugar-3169
04/20/2023, 6:46 PM

witty-sunset-95652
04/20/2023, 6:51 PM
rdctl start --kubernetes.options.traefik=false --container-engine.name=moby --kubernetes.version=1.24.12 --kubernetes.enabled=true
But after that I don't think there's anything special that we are doing; when we were hitting the issue it seemed to affect all services with a port mapping inside the cluster.
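One way to confirm those flags actually took effect is to dump the active preferences afterwards; the key names in the comment are only indicative of what the JSON contains:

    rdctl list-settings
    REM prints the current settings as JSON, including the containerEngine and kubernetes sections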
calm-sugar-3169
04/20/2023, 7:00 PM

witty-sunset-95652
04/20/2023, 7:06 PM

calm-sugar-3169
04/20/2023, 7:25 PM
which container backend are you using? nvm, I see from your rdctl start you are using moby

witty-sunset-95652
04/20/2023, 7:39 PM

calm-sugar-3169
04/20/2023, 7:46 PM

witty-sunset-95652
04/20/2023, 7:47 PM

calm-sugar-3169
04/20/2023, 7:48 PM

witty-sunset-95652
04/20/2023, 7:51 PM

calm-sugar-3169
04/20/2023, 8:00 PM
servicewatcher.go seems to be correct for the most part, although there is a potential for it to forward duplicated ports to the host. However, we really don't have any way of notifying users at runtime, since that error needs to come from kubectl, so we leave it to the user's discretion to be mindful of duplicating ports on the host.

In your case, though, what I suspect happened was an actual bug in the privileged service that runs on Windows: the privileged service was leaving behind the port proxies that were created previously (it never cleaned up after itself on shutdown), so the old traefik port proxy listeners were already running on the host. When you created the new controller listening on 443/80, it kind of got into a panic state. I believe the following PR should keep users from getting into that state, since it makes the privileged service clean up after itself.
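Until a build with that fix is in place, a possible manual cleanup (assuming the stale traefik listeners show up under portproxy as described above) is to shut Rancher Desktop down and remove the leftover entries by hand:

    netsh interface portproxy show all
    REM any 0.0.0.0:443 or 0.0.0.0:80 entries still listed after shutdown are the stale listeners
    netsh interface portproxy delete v4tov4 listenaddress=0.0.0.0 listenport=443
    netsh interface portproxy delete v4tov4 listenaddress=0.0.0.0 listenport=80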
witty-sunset-95652
04/20/2023, 8:08 PM

calm-sugar-3169
04/20/2023, 8:09 PM