# rancher-desktop
a
Windows 10 environment
w
yup proxy issue. try https://docs.rancherdesktop.io/how-to-guides/running-air-gapped/ or you can try the new 1.8 with gvisor enabled and use WSLENV to inject your proxy into env. depending on your VPN you may not need gvisor, but something you can try.
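for reference, a minimal sketch of injecting a proxy via WSLENV (proxy host/port and the NO_PROXY list are placeholders for your environment):
Copy code
:: set the proxy on the Windows side (values are placeholders)
setx HTTP_PROXY "http://proxy.example.com:3128"
setx HTTPS_PROXY "http://proxy.example.com:3128"
setx NO_PROXY "localhost,127.0.0.1,.internal.example.com"
:: share those variables with WSL distributions (/u = pass Win32 -> WSL)
setx WSLENV "HTTP_PROXY/u:HTTPS_PROXY/u:NO_PROXY/u"
:: restart WSL so the new environment is picked up
wsl --shutdown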
a
Ok, thanks. `WSLENV` is set and available. The error message is still the same, although the `k3s-versions.json` file is present and populated in `%LOCALAPPDATA%\rancher-desktop\cache`.
background.log loops:
Copy code
2023-03-21T13:23:11.828Z: Launching background process Vtunnel Host Process.
2023-03-21T13:23:12.018Z: Background process Vtunnel Host Process exited with status 1 signal null
2023-03-21T13:23:12.019Z: Background process Vtunnel Host Process will restart.
diagnostics.log loops:
Copy code
2023-03-21T13:23:57.241Z: Running check CONNECTED_TO_INTERNET
2023-03-21T13:23:57.241Z: Running connectivity test with timeout of 5000 ms
2023-03-21T13:23:57.302Z: Connection test completed successfully
2023-03-21T13:23:57.302Z: Check CONNECTED_TO_INTERNET result: {"description":"The application cannot reach the general internet for updated kubernetes versions and other components, but can still operate.","passed":true,"fixes":[]}
update.log:
Copy code
2023-03-21T13:19:58.033Z: Checking for upgrades from https://desktop.version.rancher.io/v1/checkupgrade
2023-03-21T13:19:58.179Z: Error: FetchError: invalid json response body at https://desktop.version.rancher.io/v1/checkupgrade reason: Unexpected token '<', "<!DOCTYPE "... is not valid JSON
    at C:\Program Files\Rancher Desktop\resources\app.asar\node_modules\node-fetch\lib\index.js:273:32
w
yeah, that looks like it's more on the Electron side than the WSL side. did you download a tarball for your version? could the proxy require auth or other more finicky settings that RD could be tripping over?
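that "Unexpected token '<'" usually means the proxy answered with an HTML page (login or block page) instead of JSON. one way to see what actually comes back, assuming curl.exe is available and HTTPS_PROXY points at your proxy:
Copy code
:: fetch the update endpoint through the same proxy and inspect the body
curl -v --proxy %HTTPS_PROXY% https://desktop.version.rancher.io/v1/checkupgrade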
a
It is the 1.8.0 msi installer.
Ok, you mean the source code tarball to debug RD?
w
no the images tarball listed in https://docs.rancherdesktop.io/how-to-guides/running-air-gapped/#the-cache-directory after the versions json
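roughly, the cache should end up looking like the sketch below (the version string is just an example and the exact file names are in the linked guide, so treat this as approximate):
Copy code
:: under %LOCALAPPDATA%\rancher-desktop\cache (approximate layout, per the air-gapped guide)
::   k3s-versions.json
::   k3s\v1.25.7+k3s1\k3s
::   k3s\v1.25.7+k3s1\k3s-airgap-images-amd64.tar
::   k3s\v1.25.7+k3s1\sha256sum-amd64.txt
dir /s "%LOCALAPPDATA%\rancher-desktop\cache"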
a
Thanks, the air-gapped installation installs k3s and the Kubernetes distribution successfully. When running `kubectl run rdtest --image=rancher/hello-world`, this error occurs:
Copy code
3s          Warning   FailedCreatePodSandBox    pod/rdtest         Failed to create pod sandbox: rpc error: code = Unknown desc = failed pulling image "registry.k8s.io/pause:3.6": Error response from daemon: Get "https://registry.k8s.io/v2/": dial tcp: lookup registry.k8s.io on w.x.y.z:53: no such host
I added `/etc/rancher/k3s/registries.yaml`, ran `wsl --shutdown`, and started Rancher Desktop again, but the same error occurs. It is probably related to the proxy settings of the underlying docker/moby host installation.
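For illustration, a minimal sketch of the kind of mirror entry involved (the mirror hostname is a placeholder; registries.yaml configures containerd, so with the moby backend the Docker daemon will not pick it up):
Copy code
# inside the rancher-desktop distro; mirror host is a placeholder
cat <<'EOF' > /etc/rancher/k3s/registries.yaml
mirrors:
  registry.k8s.io:
    endpoint:
      - "https://registry-mirror.internal.example.com"
EOF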
w
that is a DNS issue (port 53) so look at how you have configured WSL for DNS, make sure your VPN is forwarding those packets, and check out host resolver https://github.com/rancher-sandbox/rancher-desktop/issues/1899#issuecomment-1109128277
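for example, a couple of quick checks inside the distro to see where name resolution breaks:
Copy code
# open a shell in the rancher-desktop distro
wsl -d rancher-desktop
# which nameserver did WSL generate, and does it resolve the registry?
cat /etc/resolv.conf
nslookup registry.k8s.io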
a
Now it is possible to pull images. The host resolver seems to have no effect; I added a nameserver to /etc/resolv.conf instead. The pull itself succeeds, but running the image then fails with the following error - not sure what to do in this case:
Copy code
docker run nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
3f9582a2cbe7: Pull complete
9a8c6f286718: Pull complete
e81b85700bc2: Pull complete
73ae4d451120: Pull complete
6058e3569a68: Pull complete
3a1b8f201356: Pull complete
Digest: sha256:aa0afebbb3cfa473099a62c4b32e9b3fb73ed23f2a75a65ce1d4b4f55a5c2ef2
Status: Downloaded newer image for nginx:latest
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: systemd not running on this host, cannot use systemd cgroups manager: unknown.
A similar error occurs when running `rdctl shell k3s check-config`:
Copy code
>rdctl shell k3s check-config

Verifying binaries in /var/lib/rancher/k3s/data/1d787a9b6122e3e3b24afe621daa97f895d85f2cb9cc66860ea5ff973b5c78f2/bin:
- sha256sum: good
- links: good

System:
- /sbin iptables v1.8.8 (legacy): ok
- swap: should be disabled
- routes: ok

Limits:
- /proc/sys/kernel/keys/root_maxkeys: 1000000

modprobe: can't change directory to '/lib/modules': No such file or directory
info: reading kernel config from /proc/config.gz ...

Generally Necessary:
- /var/lib/rancher/k3s/data/1d787a9b6122e3e3b24afe621daa97f895d85f2cb9cc66860ea5ff973b5c78f2/bin/check-config: line 344: can't open /proc/self/cgroup: no such file
cgroup hierarchy: cgroups Hybrid mounted, cpuset|memory controllers status: bad (fail)
    (for cgroups V1/Hybrid on non-Systemd init see <https://github.com/tianon/cgroupfs-mount>)
- CONFIG_NAMESPACES: enabled
w
yeah, host resolver is one of those depends-on-your-VPN things. Something seems pretty off in the config, as I don't think 1.8 takes advantage of systemd in WSL and still uses openrc.
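one way to confirm what the daemon thinks it is using (these are standard docker info format fields):
Copy code
# ask the docker daemon which cgroup driver and cgroup version it is configured for
docker info --format "{{.CgroupDriver}} / cgroup v{{.CgroupVersion}}"
# "systemd" reported here without systemd actually running would explain the runc error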
a
It is pretty much the RD installation out of the box.
Copy code
>ver

Microsoft Windows [Version 10.0.19044.2604]
Copy code
>wsl -l -v
  NAME                    STATE           VERSION
* rancher-desktop         Running         2
  rancher-desktop-data    Stopped         2
w
yeah but the bootstrap seemed to go off the rails at some point
the install init on first run does a ton of stuff
a
That would be logged to wsl.log most likely.
w
all the tasks are spread over all the logs
you may want to give the gvisor network a try (do a factory reset first) and combine it with a WSLENV of your proxy, and see if that works out of the box
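something like the sketch below - the factory reset wipes settings and images, and the exact rdctl flag spelling for the tunnel setting is an assumption on my part, so double-check rdctl set --help:
Copy code
:: wipe cached VM state, images and settings before retrying
rdctl factory-reset
:: flag path assumed to mirror the settings name; verify with: rdctl set --help
rdctl set --experimental.virtual-machine.networking-tunnel=true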
a
Switching to containerd and running rancher/hello-world leads to:
Copy code
/ # nerdctl run rancher/hello-world
docker.io/rancher/hello-world:latest:                                             resolved       |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:4b1559cb4b57ca36fa2b313a3c7dde774801aa3a2047930d94e11a45168bc053: done           |++++++++++++++++++++++++++++++++++++++|
config-sha256:cab3bc026f39f4070347ea317ad92a50ffac666454de81dc838b7d5e0cf8173d:   done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:f75c169d000f3785ca3855a5bdf2c5a1e2e55360716f63a7fe5f654457899fec:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:ff3a5c916c92643ff77519ffa742d3ec61b7f591b6b7504599d95a4a41134e28:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:eb7e1814d2d5eab53b5ae85c34b972db763c2112efa5e3d9386a9a0763e62c38:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:fc3738c569c3cfa3e5e4048ac60b9fe970e5ecdebeb295bf90b359759b3ec0c1:    done           |++++++++++++++++++++++++++++++++++++++|
elapsed: 12.5s                                                                    total:  7.4 Mi (603.9 KiB/s)
FATA[0012] readlink /proc/self/exe: no such file or directory
It does not work with either setting; for example, networkingTunnel set to true or false does not make a difference:
Copy code
>docker run rancher/hello-world
Unable to find image 'rancher/hello-world:latest' locally
latest: Pulling from rancher/hello-world
ff3a5c916c92: Pull complete
eb7e1814d2d5: Pull complete
fc3738c569c3: Pull complete
f75c169d000f: Pull complete
Digest: sha256:4b1559cb4b57ca36fa2b313a3c7dde774801aa3a2047930d94e11a45168bc053
Status: Downloaded newer image for rancher/hello-world:latest
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: systemd not running on this host, cannot use systemd cgroups manager: unknown.
docker works
k8s does not work:
Copy code
Events:
  Type     Reason                  Age                 From               Message
  ----     ------                  ----                ----               -------
  Normal   Scheduled               2m41s               default-scheduler  Successfully assigned default/hello-world-66fb979868-2dzr2 to m1rrzn13313
  Warning  FailedCreatePodSandBox  60s (x3 over 116s)  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed pulling image "registry.k8s.io/pause:3.6": Error response from daemon: Get "https://registry.k8s.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  FailedCreatePodSandBox  4s (x3 over 2m26s)  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed pulling image "registry.k8s.io/pause:3.6": Error response from daemon: Get "https://registry.k8s.io/v2/": proxyconnect tcp: dial tcp 10.244.100.44:318: i/o timeout
The error occurs with `experimental.virtual-machine.networking-tunnel` set to true or false.
...but with networking-tunnel set to true, k3s starts up; if set to false, k3s does not start. It works better with v1.21.14+k3s1 than with v1.24.3+k3s1.
The remaining error:
Warning  FailedCreatePodSandBox  118s                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "97b7694931a232988e12cbc56273b8168cf821f5df25f12de7e1469b888a0ec3" network for pod "svclb-traefik-4hbl4": networkPlugin cni failed to set up pod "svclb-traefik-4hbl4_kube-system" network: open /run/flannel/subnet.env: no such file or directory
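A way to narrow this down (assuming the default rancher-desktop distro) is to check whether flannel ever wrote its config and whether the kube-system pods come up:
Copy code
# does the flannel CNI config exist inside the VM?
rdctl shell cat /run/flannel/subnet.env
# are the kube-system pods (flannel/coredns/traefik) running?
kubectl -n kube-system get pods -o wide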
Rancher Desktop 1.8.1 on Windows 10 with host resolver and k3s v1.21.14+k3s1: kube-system svclb-traefik-4hbl4 is 0/2, CrashLoopBackOff, 160 restarts over 5h5m. Events of svclb-traefik-4hbl4:
Copy code
Events:
  Type     Reason          Age                   From     Message
  ----     ------          ----                  ----     -------
  Warning  BackOff         44m (x960 over 4h4m)  kubelet  Back-off restarting failed container
  Warning  FailedMount     37m                   kubelet  MountVolume.SetUp failed for volume "kube-api-access-lbpcr" : failed to fetch token: serviceaccounts "default" is forbidden: User "system:node:m1rrzn13313" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'm1rrzn13313' and this object
  Normal   SandboxChanged  37m                   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Created         37m (x2 over 37m)     kubelet  Created container lb-port-80
  Normal   Started         37m (x2 over 37m)     kubelet  Started container lb-port-80
  Normal   Pulled          37m (x2 over 37m)     kubelet  Container image "rancher/klipper-lb:v0.3.4" already present on machine
  Normal   Created         37m (x2 over 37m)     kubelet  Created container lb-port-443
  Normal   Started         37m (x2 over 37m)     kubelet  Started container lb-port-443
  Warning  BackOff         37m (x5 over 37m)     kubelet  Back-off restarting failed container
  Normal   Pulled          37m (x3 over 37m)     kubelet  Container image "rancher/klipper-lb:v0.3.4" already present on machine
  Warning  BackOff         22m (x76 over 37m)    kubelet  Back-off restarting failed container
  Warning  FailedMount     20m                   kubelet  MountVolume.SetUp failed for volume "kube-api-access-lbpcr" : failed to sync configmap cache: timed out waiting for the condition
  Normal   SandboxChanged  20m                   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Created         20m (x2 over 20m)     kubelet  Created container lb-port-80
  Normal   Started         20m (x2 over 20m)     kubelet  Started container lb-port-80
  Normal   Pulled          20m (x2 over 20m)     kubelet  Container image "rancher/klipper-lb:v0.3.4" already present on machine
  Normal   Created         20m (x2 over 20m)     kubelet  Created container lb-port-443
  Normal   Started         20m (x2 over 20m)     kubelet  Started container lb-port-443
  Warning  BackOff         19m (x5 over 20m)     kubelet  Back-off restarting failed container
  Normal   Pulled          19m (x3 over 20m)     kubelet  Container image "rancher/klipper-lb:v0.3.4" already present on machine
  Warning  BackOff         19s (x99 over 20m)    kubelet  Back-off restarting failed container
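The events above only show the back-off; the container's own log usually says why klipper-lb exits (pod and container names taken from the events above):
Copy code
# why is the service load-balancer container crashing?
kubectl -n kube-system logs svclb-traefik-4hbl4 -c lb-port-80 --previous
kubectl -n kube-system describe pod svclb-traefik-4hbl4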