# rke
p
If you're able to resolve the host hurancher.zeomega.org from the node where the pod is running, then maybe try restarting the coredns deployment.
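For reference, a minimal way to check both of those things from the node (the CoreDNS deployment name below is the usual RKE2 one, rke2-coredns-rke2-coredns, which also shows up in the iptables comments later in this thread; adjust if yours differs):

# confirm the node itself can resolve the Rancher hostname
nslookup hurancher.zeomega.org

# restart and watch the CoreDNS deployment
kubectl -n kube-system rollout restart deployment rke2-coredns-rke2-coredns
kubectl -n kube-system rollout status deployment rke2-coredns-rke2-coredns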
g
Did you get this solved? I'm facing the same issue.
f
Yes. Do you have a firewall enabled on your machine?
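A quick, non-destructive way to check that on a RHEL-family node (assuming firewalld, which is what comes up later in the thread):

sudo firewall-cmd --state
sudo firewall-cmd --list-all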
g
I have iptables, yes.
I actually changed from nftables to iptables, since Cilium doesn't support nftables and there didn't seem to be any solid fix for using it.
f
My issue was that I had missed adding masquerade.
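For anyone landing here with the same symptom, a minimal sketch of the masquerade fix, assuming the RKE2 default pod CIDR 10.42.0.0/16 and the ens192 uplink interface that appears in the rules below; adjust both for your environment:

# firewalld variant
sudo firewall-cmd --permanent --add-masquerade
sudo firewall-cmd --reload

# plain iptables variant
sudo iptables -t nat -A POSTROUTING -s 10.42.0.0/16 -o ens192 -j MASQUERADE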
g
Ok, I just added it, but I'm still facing the same issue.
f
Can you share all the ports you have opened?
g
-P INPUT DROP
-P FORWARD DROP
-P OUTPUT ACCEPT
-N CILIUM_FORWARD
-N CILIUM_INPUT
-N CILIUM_OUTPUT
-N KUBE-EXTERNAL-SERVICES
-N KUBE-FIREWALL
-N KUBE-FORWARD
-N KUBE-KUBELET-CANARY
-N KUBE-NODEPORTS
-N KUBE-PROXY-CANARY
-N KUBE-PROXY-FIREWALL
-N KUBE-SERVICES
-A INPUT -m comment --comment "cilium-feeder: CILIUM_INPUT" -j CILIUM_INPUT
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -p tcp -m tcp --sport 53 -j ACCEPT
-A INPUT -p udp -m udp --sport 53 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A FORWARD -m comment --comment "cilium-feeder: CILIUM_FORWARD" -j CILIUM_FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A FORWARD -s 10.42.0.0/16 -o eth0 -j ACCEPT
-A FORWARD -d 10.42.0.0/16 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 10.42.0.0/16 -o ens192 -j ACCEPT
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT" -j CILIUM_OUTPUT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A CILIUM_FORWARD -o cilium_host -m comment --comment "cilium: any->cluster on cilium_host forward accept" -j ACCEPT
-A CILIUM_FORWARD -i cilium_host -m comment --comment "cilium: cluster->any on cilium_host forward accept (nodeport)" -j ACCEPT
-A CILIUM_FORWARD -i lxc+ -m comment --comment "cilium: cluster->any on lxc+ forward accept" -j ACCEPT
-A CILIUM_FORWARD -i cilium_net -m comment --comment "cilium: cluster->any on cilium_net forward accept (nodeport)" -j ACCEPT
-A CILIUM_INPUT -m mark --mark 0x200/0xf00 -m comment --comment "cilium: ACCEPT for proxy traffic" -j ACCEPT
-A CILIUM_OUTPUT -m mark --mark 0xa00/0xfffffeff -m comment --comment "cilium: ACCEPT for proxy return traffic" -j ACCEPT
-A CILIUM_OUTPUT -m mark --mark 0x800/0xe00 -m comment --comment "cilium: ACCEPT for l7 proxy upstream traffic" -j ACCEPT
-A CILIUM_OUTPUT -m mark ! --mark 0xe00/0xf00 -m mark ! --mark 0xd00/0xf00 -m mark ! --mark 0xa00/0xe00 -m mark ! --mark 0x800/0xe00 -m mark ! --mark 0xf00/0xf00 -m comment --comment "cilium: host->any mark as from host" -j MARK --set-xmark 0xc00/0xf00
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-SERVICES -d 10.43.0.10/32 -p udp -m comment --comment "kube-system/rke2-coredns-rke2-coredns:udp-53 has no endpoints" -m udp --dport 53 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.43.0.10/32 -p tcp -m comment --comment "kube-system/rke2-coredns-rke2-coredns:tcp-53 has no endpoints" -m tcp --dport 53 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.43.210.19/32 -p tcp -m comment --comment "cattle-system/cattle-cluster-agent:http has no endpoints" -m tcp --dport 80 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.43.210.19/32 -p tcp -m comment --comment "cattle-system/cattle-cluster-agent:https-internal has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
and the cluster-agent just tries to reach my rancher.example.com
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N CILIUM_OUTPUT_nat
-N CILIUM_POST_nat
-N CILIUM_PRE_nat
-N KUBE-EXT-SY4FJKJ2P7DRXVOG
-N KUBE-KUBELET-CANARY
-N KUBE-MARK-MASQ
-N KUBE-NODEPORTS
-N KUBE-POSTROUTING
-N KUBE-PROXY-CANARY
-N KUBE-SEP-4WIAZYJRPU5KMWQQ
-N KUBE-SEP-UV5I55VZVIGJHXA7
-N KUBE-SERVICES
-N KUBE-SVC-NPX46M4PTMTKRN6Y
-N KUBE-SVC-SY4FJKJ2P7DRXVOG
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_nat" -j CILIUM_PRE_nat
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_nat" -j CILIUM_OUTPUT_nat
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_nat" -j CILIUM_POST_nat
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -o ens192 -j MASQUERADE
-A POSTROUTING -s 10.42.0.0/24 -o ens192 -j MASQUERADE
-A CILIUM_POST_nat -s 10.42.0.0/24 ! -d 10.42.0.0/24 ! -o cilium_+ -m comment --comment "cilium masquerade non-cluster" -j MASQUERADE
-A CILIUM_POST_nat -m mark --mark 0xa00/0xe00 -m comment --comment "exclude proxy return traffic from masquerade" -j ACCEPT
-A CILIUM_POST_nat ! -s 10.42.0.0/24 ! -d 10.42.0.0/24 -o cilium_host -m comment --comment "cilium host->cluster masquerade" -j SNAT --to-source 10.42.0.187
-A CILIUM_POST_nat -s 127.0.0.1/32 -o cilium_host -m comment --comment "cilium host->cluster from 127.0.0.1 masquerade" -j SNAT --to-source 10.42.0.187
-A CILIUM_POST_nat -o cilium_host -m mark --mark 0xf00/0xf00 -m conntrack --ctstate DNAT -m comment --comment "hairpin traffic that originated from a local pod" -j SNAT --to-source 10.42.0.187
-A KUBE-EXT-SY4FJKJ2P7DRXVOG -m comment --comment "masquerade traffic for kube-system/rancher-vsphere-cpi-cloud-controller-manager external destinations" -j KUBE-MARK-MASQ
-A KUBE-EXT-SY4FJKJ2P7DRXVOG -j KUBE-SVC-SY4FJKJ2P7DRXVOG
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/rancher-vsphere-cpi-cloud-controller-manager" -m tcp --dport 32024 -j KUBE-EXT-SY4FJKJ2P7DRXVOG
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SEP-4WIAZYJRPU5KMWQQ -s 10.104.12.10/32 -m comment --comment "kube-system/rancher-vsphere-cpi-cloud-controller-manager" -j KUBE-MARK-MASQ
-A KUBE-SEP-4WIAZYJRPU5KMWQQ -p tcp -m comment --comment "kube-system/rancher-vsphere-cpi-cloud-controller-manager" -m tcp -j DNAT --to-destination 10.104.12.10:43001
-A KUBE-SEP-UV5I55VZVIGJHXA7 -s 10.104.12.10/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-UV5I55VZVIGJHXA7 -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 10.104.12.10:6443
-A KUBE-SERVICES -d 10.43.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.43.225.211/32 -p tcp -m comment --comment "kube-system/rancher-vsphere-cpi-cloud-controller-manager cluster IP" -m tcp --dport 43001 -j KUBE-SVC-SY4FJKJ2P7DRXVOG
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.42.0.0/16 -d 10.43.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https -> 10.104.12.10:6443" -j KUBE-SEP-UV5I55VZVIGJHXA7
-A KUBE-SVC-SY4FJKJ2P7DRXVOG ! -s 10.42.0.0/16 -d 10.43.225.211/32 -p tcp -m comment --comment "kube-system/rancher-vsphere-cpi-cloud-controller-manager cluster IP" -m tcp --dport 43001 -j KUBE-MARK-MASQ
-A KUBE-SVC-SY4FJKJ2P7DRXVOG -m comment --comment "kube-system/rancher-vsphere-cpi-cloud-controller-manager -> 10.104.12.10:43001" -j KUBE-SEP-4WIAZYJRPU5KMWQQ
those are the rules from the nat table
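(For reference, output in this format comes from iptables -S: the first dump above is the filter table, the second the nat table.)

sudo iptables -S
sudo iptables -t nat -S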
and it gets a bit further when I tested switching the cluster-agent dnsPolicy from ClusterFirst to Default, which then just uses the host's resolv.conf.
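A sketch of how that test can be made, using the deployment named in the rules above (cattle-system/cattle-cluster-agent); this is only a diagnostic step, not a recommended permanent setting:

kubectl -n cattle-system patch deployment cattle-cluster-agent \
  -p '{"spec":{"template":{"spec":{"dnsPolicy":"Default"}}}}'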
f
I think the ports are not opened correctly. Below is my list of open ports
[rke-admin@Poclphusanode2 ~]$ sudo firewall-cmd --list-ports
22/tcp 80/tcp 443/tcp 2376/tcp 2379/tcp 2380/tcp 6443/tcp 9099/tcp 10250/tcp 10254/tcp 30000-32767/tcp 53/udp 8472/udp 30000-32767/udp
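For comparison, the same list can be opened on another node by repeating --add-port (default zone assumed; ports copied from the output above):

sudo firewall-cmd --permanent \
  --add-port=22/tcp --add-port=80/tcp --add-port=443/tcp \
  --add-port=2376/tcp --add-port=2379/tcp --add-port=2380/tcp \
  --add-port=6443/tcp --add-port=9099/tcp --add-port=10250/tcp \
  --add-port=10254/tcp --add-port=30000-32767/tcp \
  --add-port=53/udp --add-port=8472/udp --add-port=30000-32767/udp
sudo firewall-cmd --reload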
g
I can confirm, it was the firewall after all. I'm using AlmaLinux 9.3, which natively has nftables, and switching to iptables was a bit of a pain in the ass.
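If it helps anyone else on a RHEL 9-family distro, two quick checks for which backend is actually in use (the FirewallBackend key lives in /etc/firewalld/firewalld.conf):

# shows whether the iptables binary is the nf_tables shim or legacy
iptables --version

# shows whether firewalld is driving nftables or iptables
grep ^FirewallBackend /etc/firewalld/firewalld.conf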