
creamy-pencil-82913

01/07/2023, 6:48 PM
It is not listening on IPv6 only. That is a dual-stack bind.
The "dial tcp x.x.x.x:10250: connect: no route to host" error indicates that the metrics-server pod is running on a node that cannot reach the node at that address, which is a common problem when running nodes across multiple providers.
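(A quick way to check both points, assuming the default kubelet port 10250 and substituting the actual node IP for x.x.x.x: a dual-stack bind shows up in ss as a listener on [::]:10250 or *:10250, and a reachability test from the node running metrics-server tells you whether this is a routing/firewall problem rather than a k3s one.)

# On the target node: confirm the kubelet listener is a dual-stack bind, not IPv6-only
sudo ss -tlnp | grep 10250
# On the node where the metrics-server pod runs: test reachability of the kubelet port
nc -zv -w 5 x.x.x.x 10250
# If the port is reachable, expect an HTTP 401/403 here rather than a timeout
curl -k https://x.x.x.x:10250/healthz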

mysterious-wire-57288

01/07/2023, 7:14 PM
@creamy-pencil-82913 what can I do in this scenario? Metrics are at the heart of autoscaling.
Additionally, I'm using public IPs for node-to-node communication.

creamy-pencil-82913

01/07/2023, 8:00 PM
What address is it showing a routing error for? Is it the internal or external IP for that node?
I don't really love building distributed clusters like that. Kubernetes isn't really designed to have a bunch of nodes with their ports exposed directly to the internet so that they can reach each other.
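(One common mitigation for multi-provider clusters, not discussed further in this thread, is to tunnel node-to-node traffic instead of exposing every port publicly. A minimal sketch using k3s's WireGuard flannel backend; flag names should be verified against the k3s docs for your version, and <public-ip> / <token> are placeholders.)

# server node
k3s server --node-external-ip <public-ip> --flannel-backend wireguard-native --flannel-external-ip
# agent nodes
k3s agent --server https://<server-public-ip>:6443 --token <token> --node-external-ip <public-ip>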

mysterious-wire-57288

01/07/2023, 8:41 PM
Public IPv4 in the log, @creamy-pencil-82913.
I searched a lot for this answer and figured it out. The problem wasn't with k3s; it was an iptables issue. Oracle's images ship with iptables rules that block most ports. Solved that with
sudo iptables -I INPUT -j ACCEPT
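(Note that -I INPUT -j ACCEPT effectively disables the host firewall. A narrower alternative, sketched here with the ports k3s typically needs per its documentation, would be to accept only those; adjust to your setup and persist the rules however your distro does it.)

sudo iptables -I INPUT -p tcp --dport 6443 -j ACCEPT    # Kubernetes API server
sudo iptables -I INPUT -p tcp --dport 10250 -j ACCEPT   # kubelet, scraped by metrics-server
sudo iptables -I INPUT -p udp --dport 8472 -j ACCEPT    # flannel VXLAN (default backend)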