# k3s
creamy-pencil-82913
It is not listening on IPv6 only; that is a dual-stack bind.
The `dial tcp x.x.x.x:10250: connect: no route to host` error indicates that the metrics-server pod is running on a node that cannot reach the node at that address, which is a common problem when running nodes across multiple providers.
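A quick way to confirm this is to check which node metrics-server landed on, what addresses each node advertises, and whether the kubelet port is actually reachable. A minimal sketch, assuming a default k3s install; `x.x.x.x` is the address from the log:

```
# Which node is metrics-server running on?
kubectl -n kube-system get pods -o wide | grep metrics-server

# What internal/external addresses does each node advertise?
kubectl get nodes -o wide

# From the node hosting metrics-server, is the kubelet port on the
# problem node reachable at all?
nc -vz x.x.x.x 10250
```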
m
@creamy-pencil-82913 what can I do in this scenario? Metrics are the heart of autoscaling.
Additionally, I'm using public IPs for node-to-node communication.
creamy-pencil-82913
What address is it showing a routing error for? Is it the internal or external IP for that node?
I don't really love building distributed clusters like that. Kubernetes isn't really designed to have a bunch of nodes with their ports exposed directly to the internet so that they can reach each other.
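For what it's worth, the k3s docs do describe a pattern for multi-provider clusters where nodes only share public connectivity: advertise each node's public address and tunnel pod traffic over WireGuard rather than exposing every port directly. A sketch with placeholder IPs and token; flag availability (`--node-external-ip`, `--flannel-backend wireguard-native`, `--flannel-external-ip`) depends on your k3s version:

```
# Server: advertise the public IP and carry flannel traffic over WireGuard
curl -sfL https://get.k3s.io | sh -s - server \
  --node-external-ip <server-public-ip> \
  --flannel-backend wireguard-native \
  --flannel-external-ip

# Each agent: join via the server's public IP and advertise its own public IP
curl -sfL https://get.k3s.io | K3S_URL=https://<server-public-ip>:6443 \
  K3S_TOKEN=<token> sh -s - agent --node-external-ip <agent-public-ip>
```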
m
It's the public IPv4 address in the log, @creamy-pencil-82913.
I have looked for this answer a lot, and I finally figured it out. The problem wasn't with k3s; it was an iptables issue. Oracle's images ship with iptables rules that block most ports by default. Solved that with
sudo iptables -I INPUT -j ACCEPT
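Worth noting that a blanket `-j ACCEPT` at the top of INPUT effectively turns the host firewall off. A narrower sketch, assuming only the kubelet port (10250) needs opening between nodes; the persistence path shown is the Ubuntu-style /etc/iptables/rules.v4 and varies by image:

```
# Open only the kubelet port instead of accepting all inbound traffic
sudo iptables -I INPUT -p tcp --dport 10250 -j ACCEPT

# Persist across reboots (path/tooling varies by distribution)
sudo iptables-save | sudo tee /etc/iptables/rules.v4 > /dev/null
```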