# rancher-desktop

best-wall-17038

09/08/2022, 8:34 AM
Hello, since yesterday rancher-desktop has not been running properly on my macOS. I am getting the error below; when I restart rancher-desktop it works, but then the error starts again. What might be the reason for this?
kubectl get pods -n comnext
I0908 10:32:01.731836   12084 versioner.go:56] Remote kubernetes server unreachable
Unable to connect to the server: net/http: TLS handshake timeout
upload the logs

wide-mechanic-33041

09/08/2022, 12:28 PM
do you have an http(s)_proxy var and use a no_proxy in your host where kubectl is running so traffic doesn’t try to hairpin back to your machine?
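For reference, a quick way to check is to inspect the environment of the shell where kubectl runs; a minimal sketch, assuming a POSIX shell:
env | grep -i proxy
# shows http_proxy / https_proxy / no_proxy if any are set;
# if a proxy is set, local traffic should be excluded, e.g.:
# export no_proxy=localhost,127.0.0.1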

best-wall-17038

09/08/2022, 2:00 PM
@wide-mechanic-33041 nope, I don't have any

wide-mechanic-33041

09/08/2022, 2:06 PM
i know i get tls issues from VPN stuff, but these flows should all be local

best-wall-17038

09/08/2022, 2:32 PM
yeah, I don't know what happened, but the response is also too slow even when I simply run docker ps
Sometimes it returns the output, but mostly I am getting this:
kubectl get ns
I0908 16:33:31.033046   31140 versioner.go:56] Remote kubernetes server unreachable

wide-mechanic-33041

09/08/2022, 3:04 PM
maybe a kubectl --v=9 get ns ?

best-wall-17038

09/08/2022, 3:05 PM
What does --v=9 mean?

wide-mechanic-33041

09/08/2022, 3:05 PM
verbose
max verbose really
just see some of the flow and why things may be getting stuck
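For context, kubectl's --v flag sets the klog verbosity level from 0 to 9; around 6 it logs request URLs and status codes, and 9 adds full HTTP headers and the curl-equivalent commands, as in the trace below:
kubectl --v=6 get ns   # request URLs and response codes
kubectl --v=9 get ns   # full headers plus curl-equivalent lines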

best-wall-17038

09/08/2022, 3:06 PM
kubectl --v=9 get ns
I0908 17:05:08.891214   32288 versioner.go:56] Remote kubernetes server unreachable
I0908 17:05:09.110461   32288 loader.go:372] Config loaded from file:  /Users/uralsem/.kube/config
I0908 17:05:09.130193   32288 round_trippers.go:466] curl -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubectl1.23.3/v1.23.3 (darwin/amd64) kubernetes/816c97a" 'https://127.0.0.1:6443/apis/metrics.k8s.io/v1beta1?timeout=32s'
I0908 17:05:09.136732   32288 round_trippers.go:510] HTTP Trace: Dial to tcp:127.0.0.1:6443 succeed
I0908 17:05:19.149190   32288 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 1 ms TLSHandshake 10007 ms Duration 10018 ms
I0908 17:05:19.149268   32288 round_trippers.go:577] Response Headers:
I0908 17:05:19.151381   32288 cached_discovery.go:78] skipped caching discovery info due to Get "https://127.0.0.1:6443/apis/metrics.k8s.io/v1beta1?timeout=32s": net/http: TLS handshake timeout
I0908 17:05:19.162021   32288 round_trippers.go:466] curl -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubectl1.23.3/v1.23.3 (darwin/amd64) kubernetes/816c97a" 'https://127.0.0.1:6443/apis/metrics.k8s.io/v1beta1?timeout=32s'
I0908 17:05:19.164376   32288 round_trippers.go:510] HTTP Trace: Dial to tcp:127.0.0.1:6443 succeed
I0908 17:05:29.169252   32288 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 2 ms TLSHandshake 10004 ms Duration 10007 ms
I0908 17:05:29.169354   32288 round_trippers.go:577] Response Headers:
I0908 17:05:29.169516   32288 cached_discovery.go:78] skipped caching discovery info due to Get "https://127.0.0.1:6443/apis/metrics.k8s.io/v1beta1?timeout=32s": net/http: TLS handshake timeout
I0908 17:05:29.170125   32288 shortcut.go:89] Error loading discovery information: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: Get "https://127.0.0.1:6443/apis/metrics.k8s.io/v1beta1?timeout=32s": net/http: TLS handshake timeout
I0908 17:05:29.187323   32288 round_trippers.go:466] curl -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubectl1.23.3/v1.23.3 (darwin/amd64) kubernetes/816c97a" 'https://127.0.0.1:6443/apis/metrics.k8s.io/v1beta1?timeout=32s'
I0908 17:05:29.192199   32288 round_trippers.go:510] HTTP Trace: Dial to tcp:127.0.0.1:6443 succeed
I0908 17:05:39.193968   32288 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 1 ms TLSHandshake 10001 ms Duration 10006 ms
I0908 17:05:39.194086   32288 round_trippers.go:577] Response Headers:
I0908 17:05:39.194288   32288 cached_discovery.go:78] skipped caching discovery info due to Get "https://127.0.0.1:6443/apis/metrics.k8s.io/v1beta1?timeout=32s": net/http: TLS handshake timeout
I0908 17:05:39.206093   32288 round_trippers.go:466] curl -v -XGET  -H "Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json" -H "User-Agent: kubectl1.23.3/v1.23.3 (darwin/amd64) kubernetes/816c97a" 'https://127.0.0.1:6443/api/v1/namespaces?limit=500'
I0908 17:05:39.209387   32288 round_trippers.go:510] HTTP Trace: Dial to tcp:127.0.0.1:6443 succeed
I0908 17:05:49.211215   32288 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 1 ms TLSHandshake 10001 ms Duration 10003 ms
I0908 17:05:49.211260   32288 round_trippers.go:577] Response Headers:
I0908 17:05:49.214209   32288 helpers.go:237] Connection error: Get https://127.0.0.1:6443/api/v1/namespaces?limit=500: net/http: TLS handshake timeout
F0908 17:05:49.214775   32288 helpers.go:118] Unable to connect to the server: net/http: TLS handshake timeout
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1038 +0x8a
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x3c83d20, 0x3, 0x0, 0xc0009a0070, 0x2, {0x31f383e, 0x10}, 0xc000188480, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:987 +0x5fd
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0xc0000401e0, 0x41, 0x0, {0x0, 0x0}, 0x0, {0xc000436dd0, 0x1, 0x1})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x1ae
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1518
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal({0xc0000401e0, 0x41}, 0xc000436d20)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:96 +0xc5
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr({0x2bed1e0, 0xc0006d4ae0}, 0x2a788e8)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:180 +0x69a
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:118
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func2(0xc0003a1680, {0xc000519540, 0x1, 0x2})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get/get.go:181 +0xc8
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0003a1680, {0xc000519520, 0x2, 0x2})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:860 +0x5f8
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0009b4500)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:974 +0x3bc
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/vendor/k8s.io/component-base/cli.run(0xc0009b4500)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/cli/run.go:146 +0x325
k8s.io/kubernetes/vendor/k8s.io/component-base/cli.RunNoErrOutput(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/cli/run.go:84
main.main()
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:30 +0x1e

goroutine 6 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1181 +0x6a
created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:420 +0xfb

goroutine 10 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0, {0x2bea760, 0xc000814000}, 0x1, 0xc000104360)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x13b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0, 0x12a05f200, 0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x28
created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:179 +0x8
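Worth noting: the trace shows the TCP dial succeeding in 1-2 ms while every TLS handshake times out after ~10 s, so the port forward accepts connections but the server never completes the handshake. One way to probe that handshake directly (a hedged sketch, not from the thread):
openssl s_client -connect 127.0.0.1:6443 </dev/null
# a healthy apiserver prints a certificate chain; hanging here reproduces the timeout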

wide-mechanic-33041

09/08/2022, 3:07 PM
and k8s is running in RD?

best-wall-17038

09/08/2022, 3:07 PM
yes

wide-mechanic-33041

09/08/2022, 3:09 PM
and that kubectl is in your macOS terminal, not in another lima VM? using a browser, does 127.0.0.1:6443 show anything?
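A curl equivalent of that browser check might be (illustrative; -k skips certificate verification, since the apiserver typically uses a self-signed CA):
curl -vk https://127.0.0.1:6443/version
# any HTTP response, even 401/403, means TLS completed; a hang matches the kubectl symptom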

best-wall-17038

09/08/2022, 3:09 PM
let me try.. even the docker ps command is too slow or sometimes returns nothing

wide-mechanic-33041

09/08/2022, 3:10 PM
i mean rosetta2 is playing in the background, but shouldn’t be too bad
hopefully not another xprotect bug

best-wall-17038

09/08/2022, 3:10 PM
it happened suddenly yesterday, though I did not change anything
the browser also did not return anything.. just pending 😞

wide-mechanic-33041

09/08/2022, 3:12 PM
and if you jump into rdctl shell can you get any responses?

best-wall-17038

09/08/2022, 3:12 PM
rancher-desktop dashboard is also not opening 😞

wide-mechanic-33041

09/08/2022, 3:13 PM
so either in the instance or in the port forwarding into the instance

best-wall-17038

09/08/2022, 3:13 PM
correct

wide-mechanic-33041

09/08/2022, 3:13 PM
if you are in the instance and you can curl those same endpoints, well, it's the forwarding
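Concretely, that check could look like this (a sketch; the /healthz path is illustrative):
rdctl shell
# then, inside the VM:
curl -vk https://127.0.0.1:6443/healthz
# a fast response inside the VM but a hang from the host would point at the forwarding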

best-wall-17038

09/08/2022, 3:13 PM
I0908 17:11:11.920375   32634 versioner.go:56] Remote kubernetes server unreachable
Starting to serve on 127.0.0.1:8001
E0908 17:13:05.925598   32634 proxy_server.go:147] Error while proxying request: net/http: TLS handshake timeout

wide-mechanic-33041

09/08/2022, 3:14 PM
yeah, we got that. 😉 it's the why that is unknown

best-wall-17038

09/08/2022, 3:15 PM
why 😞 it's just the output of kubectl proxy, by the way
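For context, kubectl proxy serves the API over plain HTTP on a local port and handles TLS to the apiserver itself, so the same handshake failure shows up as a proxying error; a sketch of that test:
kubectl proxy &
# default listen address is 127.0.0.1:8001
curl http://127.0.0.1:8001/api/v1/namespaces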

wide-mechanic-33041

09/08/2022, 3:15 PM
seeing that the error is "can't start docker", it seems like the cluster is in bad shape, but it could be just a bad event

best-wall-17038

09/08/2022, 3:17 PM
yeah, but why did this happen? 😞
how can I fix this?

wide-mechanic-33041

09/08/2022, 3:18 PM
did you try the Reset options in Troubleshooting?
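For reference, the Troubleshooting tab offers both a Kubernetes reset and a full Factory Reset; recent versions also expose the latter on the command line (hedged, availability depends on the Rancher Desktop version):
rdctl factory-reset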

best-wall-17038

09/08/2022, 3:18 PM
Nope
I am afraid to lose everything
Will it delete everything inside the cluster?

wide-mechanic-33041

09/08/2022, 3:18 PM
i mean your yamls should be outside the cluster. it may need to download the images again

best-wall-17038

09/08/2022, 3:19 PM
which yaml ? 😕

wide-mechanic-33041

09/08/2022, 3:19 PM
your apps?
what are you concerned about losing?

best-wall-17038

09/08/2022, 3:20 PM
I don't want to deploy everything again actually
I deployed lots of tools, like databases, argocd, etc

wide-mechanic-33041

09/08/2022, 3:20 PM
i mean then you will need to dig into the VM and see if you can figure out what has gone wrong

best-wall-17038

09/08/2022, 3:23 PM
mine is Monterey 😕

wide-mechanic-33041

09/08/2022, 3:24 PM
yup, not sure it's OS specific, but lots of issues to poke through on M1 stuff

best-wall-17038

09/08/2022, 3:25 PM
This bug has been fixed in the Monterey 12.3 release

wide-mechanic-33041

09/08/2022, 3:25 PM
the error with starting docker?

best-wall-17038

09/08/2022, 3:26 PM
actually the weird thing is it's sometimes working, sometimes not
I can get the output of the command sometimes, but if I hit it again then it's not working

wide-mechanic-33041

09/08/2022, 3:26 PM
does make it hard to troubleshoot. i would be spending time in the Alpine VM seeing if anything stands out
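A hedged starting point for poking around inside that VM (generic busybox tools only, nothing Rancher-specific):
rdctl shell
# then, inside the Alpine VM:
uptime          # load averages
free -m         # memory pressure
dmesg | tail    # recent kernel messages, e.g. OOM kills (may need sudo)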

best-wall-17038

09/08/2022, 3:26 PM
@fast-garage-66093 might have some idea I believe

fast-garage-66093

09/08/2022, 3:35 PM
Sorry, no immediate ideas; did you reboot the computer since this started?

best-wall-17038

09/08/2022, 3:35 PM
I did several times 😞
@fast-garage-66093 is there any way to debug this?

fast-garage-66093

09/08/2022, 4:56 PM
As @wide-mechanic-33041 said, you will have to dive into the VM to see what is going on there. I would make a copy of the "$HOME/Library/Application Support/rancher-desktop/lima" directory and then do a Factory Reset to see if that solves the issue. That way you know whether the problem is with your host or the VM, and then take it from there
(this assumes you haven't moved the lima directory elsewhere)
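That backup step might look like this (a sketch, assuming the default location and enough free disk space):
cp -a "$HOME/Library/Application Support/rancher-desktop/lima" ~/rd-lima-backup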

best-wall-17038

09/08/2022, 9:18 PM
@fast-garage-66093 seems my issue is similar to this one: https://github.com/rancher-sandbox/rancher-desktop/issues/2510

fast-garage-66093

09/08/2022, 9:46 PM
I don't think so, at least not from the symptoms you have described above. Issue 2510 seems to be due to an incorrect docker context or DOCKER_HOST setting, and would only affect docker commands run internally by RD. You were showing errors from kubernetes...