melodic-kite-90272

12/07/2022, 12:56 PM
I'm periodically having this same problem; restarting RD fixes it, but I'd rather not have to do this. https://rancher-users.slack.com/archives/C0200L1N1MM/p1666489570301949 Does anyone have any advice on how to troubleshoot this?
Here's another example from inside the lima VM:
lima-rancher-desktop:~# k3s kubectl logs -n kube-system coredns-7796b77cd4-pqs4g
Error from server: Get "https://192.168.205.2:10250/containerLogs/kube-system/coredns-7796b77cd4-pqs4g/coredns": x509: certificate is valid for 127.0.0.1, 192.168.5.15, not 192.168.205.2
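One way to confirm which addresses the kubelet's serving certificate actually covers (a quick check from inside the VM; assumes openssl is installed there, which may need an apk add on Alpine):
lima-rancher-desktop:~# echo | openssl s_client -connect 192.168.205.2:10250 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'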
Here's my `networks.yaml`:
paths:
  vdeSwitch: /opt/rancher-desktop/bin/vde_switch
  vdeVMNet: /opt/rancher-desktop/bin/vde_vmnet
  varRun: /private/var/run
  sudoers: /private/etc/sudoers.d/zzzzz-rancher-desktop-lima
group: everyone
networks:
  rancher-desktop-shared:
    mode: shared
    gateway: 192.168.205.1
    dhcpEnd: 192.168.205.254
    netmask: 255.255.255.0
  host:
    mode: host
    gateway: 192.168.206.1
    dhcpEnd: 192.168.206.254
    netmask: 255.255.255.0
  rancher-desktop-bridged_en6:
    mode: bridged
    interface: en6
  rancher-desktop-bridged_en9:
    mode: bridged
    interface: en9
  rancher-desktop-bridged_en7:
    mode: bridged
    interface: en7
  rancher-desktop-bridged_en10:
    mode: bridged
    interface: en10
  rancher-desktop-bridged_en11:
    mode: bridged
    interface: en11
  rancher-desktop-bridged_en12:
    mode: bridged
    interface: en12
  rancher-desktop-bridged_en0:
    mode: bridged
    interface: en0
  rancher-desktop-bridged_bridge0:
    mode: bridged
    interface: bridge0
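For reference, you can see which interface inside the VM ends up on which of these networks by listing the addresses (eth0 should be the built-in user-mode network; the rest follow the config above):
lima-rancher-desktop:~# ip -4 addr show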
# k3s kubectl get nodes -o wide
NAME                   STATUS   ROLES                  AGE     VERSION         INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
lima-rancher-desktop   Ready    control-plane,master   5d23h   v1.22.12+k3s1   192.168.205.2   <none>        Alpine Linux v3.16   5.15.64-0-virt   docker://20.10.18

calm-sugar-3169

12/07/2022, 11:59 PM
@melodic-kite-90272 By the look of the error, the virtual network IP address has changed from 192.168.5.15 to 192.168.0.21. Did the VM IP address change somehow? Or did you perhaps change networks or something?

melodic-kite-90272

12/08/2022, 8:25 AM
Yeah, I change networks all the time on the host machine.
But I don't know why that would change the VIP of the guest.
You mean changed to 192.168.205.2, right?

calm-sugar-3169

12/08/2022, 5:52 PM
Yeah, that's the IP address I meant. I'm not sure why 192.168.5.15 would change. One obvious workaround I can think of is to pass --insecure-skip-tls-verify in your kubectl command, but of course that is not an ultimate fix. Looking at our issues, it looks like we have an open issue to generate the certs using the hostname as opposed to the IP address, which I believe should fix these kinds of issues: https://github.com/rancher-sandbox/rancher-desktop/issues/3186

melodic-kite-90272

12/08/2022, 6:36 PM
Btw, I guess it might not be obvious, but this problem only occurs when running kubectl logs. Other API requests work normally.
Does anyone have any other ideas here? This happens on a weekly basis for me..
For some reason Kubernetes thinks that 192.168.1.11 is the right address to query, even though the eth0 interface in the lima VM is:
eth0      Link encap:Ethernet  HWaddr 52:55:55:DB:30:FE
          inet addr:192.168.5.15  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::5055:55ff:fedb:30fe/64 Scope:Link
          inet6 addr: fec0::5055:55ff:fedb:30fe/64 Scope:Site
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4181360 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1650054 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3818593105 (3.5 GiB)  TX bytes:275946467 (263.1 MiB)
however
❯ kubectl logs local-path-provisioner-84bb864455-gdwc9
E0120 16:51:22.458417   30499 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0120 16:51:22.509654   30499 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0120 16:51:22.523427   30499 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
Error from server: Get "https://192.168.1.11:10250/containerLogs/kube-system/local-path-provisioner-84bb864455-gdwc9/local-path-provisioner": x509: certificate is valid for 127.0.0.1, 192.168.5.15, not 192.168.1.11
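If I understand it right, the apiserver picks the kubelet address to dial from the node object's status.addresses; one way to check what it currently reports, using the node name from the get nodes output above:
❯ kubectl get node lima-rancher-desktop -o jsonpath='{.status.addresses}'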

fast-garage-66093

01/20/2023, 5:04 PM
@calm-sugar-3169 It is not that the eth0 address is changing; it is that some requests seem to be routed to rd0 or rd1 instead.
I'm pretty confident that 192.168.1.11 is the IP address of rd0 (bridged network) and from the DHCP range of the local network.
It is only supposed to be used for cluster ingress (e.g. Traefik), but never for talking to the apiserver.
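A quick way to verify that from inside the lima VM (a sketch; assumes the interface is actually named rd0 as above):
lima-rancher-desktop:~# ip -4 addr show rd0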