
steep-london-53093

03/30/2023, 3:54 PM
Hello, after installing a new rke2 cluster I can get logs for pods scheduled on any node of the cluster via any master node's API. After restarting some master or worker nodes, I try to get logs again, and this time, via some master nodes' APIs, I can't get pod logs from some nodes. It fails with:
"https://192.168.0.15:10250/containerLogs/kube-system/kube-proxy-k8s-master-03/kube-proxy": proxy error from 127.0.0.1:9345 while dialing 192.168.0.15:10250, code 503: 503 Service Unavailable
Can you give me any hint to find the root cause of this problem? Thank you in advance!
Mar 30 16:21:08 k8s-master-01 rke2[60704]: time="2023-03-30T16:21:08Z" level=debug msg="Tunnel server handing HTTP/1.1 CONNECT request for //192.168.0.14:10250 from 127.0.0.1:33868"
Mar 30 16:21:08 k8s-master-01 rke2[60704]: time="2023-03-30T16:21:08Z" level=debug msg="Tunnel server egress proxy dial error: failed to find Session for client k8s-master-02"

creamy-pencil-82913

03/30/2023, 5:18 PM
The clients aren’t reconnecting to your server. Check the rke2-agent logs to see why.
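[Editor's note: a minimal sketch of what checking the agent logs might look like, assuming systemd-managed nodes and the standard rke2 unit names; the grep pattern is illustrative, not exhaustive.]

```shell
# On each node whose logs are unreachable, inspect the rke2 journal for
# tunnel/websocket errors. On worker nodes the unit is rke2-agent; on
# server nodes it is rke2-server.
#
#   journalctl -u rke2-agent --since "-1h" | grep -Ei 'remotedialer|tunnel|websocket'
#
# The same filter, applied here to the server-side line quoted above:
sample='msg="Tunnel server egress proxy dial error: failed to find Session for client k8s-master-02"'
echo "$sample" | grep -Ei 'remotedialer|tunnel|websocket'
```

A "failed to find Session" line on the server side means that node's agent has no active tunnel connection to that server, so the corresponding rke2-agent journal is the place to look for the disconnect reason.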

steep-london-53093

03/30/2023, 5:26 PM
After setting --egress-selector-mode=disabled, it looks like the problem has disappeared.
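[Editor's note: for reference, the same setting expressed in the server config file, assuming the default config path on each server node:]

```yaml
# /etc/rancher/rke2/config.yaml
# Disables the apiserver egress selector, so the apiserver dials kubelets
# directly instead of routing through the server's remotedialer tunnel.
egress-selector-mode: disabled
```

As the next reply notes, this works around the symptom rather than fixing the broken agent-to-server tunnel.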

creamy-pencil-82913

03/30/2023, 6:01 PM
Yeah that's the wrong fix though. You should figure out why the agents aren't reconnecting to the server. What do you get from
kubectl get node -o wide
?

steep-london-53093

03/30/2023, 6:11 PM
They are in the Ready state.
Maybe I didn't understand the question?