# general
**c:** Have you disabled firewalld/ufw? For some reason the pod is unable to reach the apiserver. Is the kube-proxy pod running on that node?
**i:** Yes, kube-proxy is running and does not have any errors in its log. firewalld is not installed; ufw is disabled, and I restarted the box after disabling it and setting the NetworkManager workaround.
**c:** Is the pod showing the error on a server, or an agent?
**i:** This error is happening on a server -- I'm setting up a fresh cluster, and this is the only node so far.
**c:** Hmm. Have you customized the configuration at all, or did you literally just run the install script and then start the rke2-server service? How long has it been in that state?
**i:** The only configuration I have done is to set `node-name`, `node-external-ip`, and `tls-san`. It was in this state almost immediately after installing RKE2.
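For reference, the three settings mentioned above would go in the RKE2 server config file. A minimal sketch, assuming hypothetical placeholder values for the node name, external IP, and SAN hostname:

```yaml
# /etc/rancher/rke2/config.yaml -- all values below are hypothetical examples
node-name: server-1
node-external-ip: 203.0.113.10
tls-san:
  - rke2.example.com
```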
**c:** Hmm, why did you set those? Do you see the same problem if you install with the default values? Is the address specified for `node-external-ip` reachable from the node itself? Is it an actual address bound to the node, or a NATed public IP?
**i:** Let me try doing an install without setting anything. In previous attempts, if I didn't set `node-external-ip`, it would use a NATed address that was causing problems for agents that weren't in the same network. I set `tls-san` because my intent was to set this up as an HA cluster behind a proxy.
c
node-external-ip is usually used to inform the cluster of the node’s public IP that is NATed to the primary --node-ip address. Both the internal and external IPs need to be reachable by cluster members, including the node itself.
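A sketch of how the two addresses pair up in the config file, assuming a node whose interface carries the private address 10.0.0.5 with the public IP 203.0.113.10 NATed to it (both addresses are hypothetical):

```yaml
# /etc/rancher/rke2/config.yaml
node-ip: 10.0.0.5              # address actually bound to the node's interface
node-external-ip: 203.0.113.10 # public IP NATed to node-ip; must be reachable
                               # by all cluster members, including this node
```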
**i:** Thank you for clarifying. Removing `node-external-ip` resolved the issue for me, and I am now able to start the RKE2 server successfully.