alert-motherboard-87423

04/28/2023, 2:14 PM
Hello Team, I have created a K3s cluster in which the master nodes are in the cloud and the worker nodes are Raspberry Pis at a remote location, and the two are on different networks. When I deploy my applications on the worker node, the pods can't access the internet. However, when I use
hostNetwork: true
the pods can access the internet. Am I missing something? Please help me fix the issue. Thanks, Ashis
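For context, the hostNetwork workaround looks like this in a pod spec (a minimal sketch; the pod name, image, and command are illustrative). hostNetwork: true attaches the pod to the node's own network namespace, bypassing the CNI entirely, which is why it hides the flannel problem rather than fixing it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: netcheck        # illustrative name
spec:
  hostNetwork: true     # pod shares the node's network namespace, skipping the CNI
  containers:
    - name: netcheck
      image: busybox    # illustrative image
      command: ["sh", "-c", "wget -qO- https://example.com"]
```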
bland-account-99790

04/28/2023, 3:13 PM
What config did you use to deploy k3s?
alert-motherboard-87423

04/28/2023, 3:19 PM
Hi @bland-account-99790, I'm using the following command to start the K3s server.
curl -sfL https://get.k3s.io | sh -s - \
    --write-kubeconfig-mode "0644" \
    --token "token" \
    --node-external-ip "ip" \
    --tls-san "ip" \
    --node-taint "CriticalAddonsOnly=true:NoExecute" \
    --disable=traefik \
    --flannel-backend=host-gw \
    --flannel-external-ip
Thanks, Ashis
bland-account-99790

04/28/2023, 3:25 PM
Internal cluster communication works? You only have problems with pods accessing the internet?
Verify that your iptables contain the masquerade rule.
👍 1
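A way to run that check (a sketch: the rule below has the shape flannel installs by default with the default 10.42.0.0/16 cluster CIDR; on a live node you would grep the real NAT table instead):

```shell
# On a live node: sudo iptables -t nat -S POSTROUTING | grep -i masquerade
# Here we grep a captured sample of that output (illustrative values):
printf '%s\n' '-A POSTROUTING -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE' \
  | grep -c MASQUERADE   # → 1 when the masquerade rule is present
```

If the count is 0, pod traffic leaves the node with an unroutable pod IP as its source and replies never come back.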
alert-motherboard-87423

04/28/2023, 3:28 PM
No, internal communication between pods doesn't work either.
bland-account-99790

04/28/2023, 3:50 PM
ah ok, that might be why internet communication does not work either. Check your routing table:
ip r
Does communication between pods running on the same node work?
alert-motherboard-87423

04/28/2023, 4:14 PM
This is the list of masquerade iptables rules.
This is the output of
ip r
Communication between pods also doesn't work.
bland-account-99790

04/28/2023, 4:41 PM
Do you see the veth interfaces when executing
sudo brctl show
?
Communication between pods on the same node does not work either?
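As an aside: brctl comes from the bridge-utils package and is absent on many newer distros. A sketch of the iproute2 equivalent, which lists the veth interfaces attached to flannel's cni0 bridge (cni0 only exists on a node where the CNI has set up pods):

```shell
# List interfaces enslaved to the cni0 bridge; one veth per running pod.
# Falls back to a message when the bridge (or the ip tool) is absent.
ip link show master cni0 2>/dev/null || echo "no cni0 bridge on this host"
```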
alert-motherboard-87423

04/29/2023, 6:27 AM
Hi @bland-account-99790, this is the output of the brctl command.
And communication between pods on the same node works.
bland-account-99790

05/01/2023, 1:44 PM
ah ok, I thought it did not
I would expect another route in the
ip r
output. Could you try deploying without
--flannel-backend=host-gw
? Just to verify that it works with vxlan. There might be a bug with flannel and host-gw.
alert-motherboard-87423

05/01/2023, 2:40 PM
Sure let me try
I tried without
--flannel-backend=host-gw
but no luck. I have attached the output of
ip r
bland-account-99790

05/03/2023, 11:16 AM
hey, could you create a GitHub issue in k3s? That way we will be able to troubleshoot it further.
alert-motherboard-87423

05/03/2023, 12:27 PM
Hi @bland-account-99790, I have already created the issue. Following is the link for the same. https://github.com/k3s-io/k3s/issues/5549
bland-account-99790

05/04/2023, 11:23 AM
uh, that's old
I'd prefer if you close that one and start a new one
But I have time now, let me deploy an example
I deployed with default configs
This is the routing table of the agent:
default via 10.1.1.1 dev eth0 proto dhcp src 10.1.1.9 metric 100 
10.1.1.0/24 dev eth0 proto kernel scope link src 10.1.1.9 
10.42.0.0/24 via 10.42.0.0 dev flannel.1 onlink 
10.42.1.0/24 dev cni0 proto kernel scope link src 10.42.1.1
I have two nodes
In the agent:
cat /var/run/flannel/subnet.env
FLANNEL_NETWORK=10.42.0.0/16
FLANNEL_SUBNET=10.42.1.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
In the server:
cat /var/run/flannel/subnet.env 
FLANNEL_NETWORK=10.42.0.0/16
FLANNEL_SUBNET=10.42.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
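Those two files can be cross-checked mechanically; a sketch using the captured values above instead of reading /var/run/flannel/subnet.env on live nodes:

```shell
# Both nodes must agree on FLANNEL_NETWORK and hold distinct FLANNEL_SUBNETs.
server='FLANNEL_NETWORK=10.42.0.0/16
FLANNEL_SUBNET=10.42.0.1/24'
agent='FLANNEL_NETWORK=10.42.0.0/16
FLANNEL_SUBNET=10.42.1.1/24'
net_s=$(printf '%s\n' "$server" | sed -n 's/^FLANNEL_NETWORK=//p')
net_a=$(printf '%s\n' "$agent" | sed -n 's/^FLANNEL_NETWORK=//p')
sub_s=$(printf '%s\n' "$server" | sed -n 's/^FLANNEL_SUBNET=//p')
sub_a=$(printf '%s\n' "$agent" | sed -n 's/^FLANNEL_SUBNET=//p')
[ "$net_s" = "$net_a" ] && echo "FLANNEL_NETWORK: match"
[ "$sub_s" != "$sub_a" ] && echo "FLANNEL_SUBNET: distinct"
```

A FLANNEL_NETWORK mismatch between nodes of the same cluster is exactly the symptom that shows up later in this thread.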
So the routing table of the agent is correct: for traffic going to
10.42.1.0/24
it uses the cni0 bridge, as this traffic is local. For traffic going to the server node (
10.42.0.0/24
), it uses the flannel.1 interface, which is the vxlan interface.
And my pods are able to communicate across nodes
Is it possible that you have a firewall or something blocking your pod traffic across nodes?
Could you verify with tcpdump if you see packets coming from one node to another?
Let me try now with hostgw backend
Let's start over for one second because I am a bit confused. How many nodes did you deploy? Is it one or two?
alert-motherboard-87423

05/05/2023, 7:50 AM
I have one master and one agent as of now
bland-account-99790

05/05/2023, 7:54 AM
can you show me
kubectl get nodes -o wide
please?
alert-motherboard-87423

05/05/2023, 7:55 AM
Here it is
bland-account-99790

05/05/2023, 7:55 AM
From your master node, can you ping
10.128.0.7
?
alert-motherboard-87423

05/05/2023, 7:56 AM
No, I can't ping it, because the two nodes are on different networks and that is a private IP.
bland-account-99790

05/05/2023, 7:56 AM
Can you show me the output of
cat /var/run/flannel/subnet.env
in each node?
alert-motherboard-87423

05/05/2023, 7:59 AM
Screenshot 2023-05-05 at 1.27.55 PM.png,Screenshot 2023-05-05 at 1.28.42 PM.png
bland-account-99790

05/05/2023, 8:01 AM
🤔 strange. They belong to the same cluster but the
FLANNEL_NETWORK
is different
what command did you use to start k3s in the agent?
alert-motherboard-87423

05/05/2023, 8:04 AM
curl -sfL https://get.k3s.io | K3S_TOKEN="token" K3S_URL="https://server_address:6443" sh -s - \
    --node-external-ip=ip \
    --node-label "svccontroller.k3s.cattle.io/enablelb=true" \
    --with-node-id
This is the command I used
bland-account-99790

05/05/2023, 8:05 AM
server_address
is the
node-external-ip
of the server, right?
Could you show me the output of
kubectl get nodes -o yaml | grep podCIDRs -n5
please?
alert-motherboard-87423

05/05/2023, 8:07 AM
No, server_address and node-external-ip are different. server_address is the master's IP and node-external-ip is the agent's external IP (public IP).
Screenshot 2023-05-05 at 1.36.44 PM.png
bland-account-99790

05/05/2023, 8:09 AM
server_address is master ip
yes, but master's external ip (public ip), right?
Something strange is happening in your agent. As you can observe, Kubernetes is assuming a global pod CIDR:
10.42.0.0/16
and it gave
10.42.0.0/24
to master and
10.42.1.0/24
to the agent. However, your flannel config in the agent is using
10.244.1.0/24
. I wonder if there was an old k3s cluster there and things were not cleaned up properly. Could you show me the output of
ip a show dev flannel.1
in the agent please?
alert-motherboard-87423

05/05/2023, 8:15 AM
It’s saying flannel.1 doesn’t exist.
bland-account-99790

05/05/2023, 8:16 AM
ok, could you show me all your interfaces please?
ip a
?
alert-motherboard-87423

05/05/2023, 8:17 AM
Screenshot 2023-05-05 at 1.46.44 PM.png
bland-account-99790

05/05/2023, 8:17 AM
This cluster is using
--flannel-backend=host-gw
?
You see, the cni0 interface, which is the bridge, is using 10.244.0.1, which is wrong.
alert-motherboard-87423

05/05/2023, 8:18 AM
No. As you said previously, I removed it.
b

bland-account-99790

05/05/2023, 8:20 AM
Did you deploy previously a cluster with
10.244.0.0/16
as podCIDR?
alert-motherboard-87423

05/05/2023, 8:22 AM
No, I haven't deployed a cluster with those CIDRs, and I'm not specifying a pod CIDR when deploying the cluster either.
b

bland-account-99790

05/05/2023, 8:22 AM
weird, I wonder where that config came from
Is it ok for you to redeploy? We can do it together
alert-motherboard-87423

05/05/2023, 8:23 AM
Yes, Sure
bland-account-99790

05/05/2023, 8:23 AM
As the nodes are not on the same network, the only flannel backend that will work is
wireguard-native
First of all, please run
k3s-agent-uninstall.sh
in the agent and
k3s-uninstall.sh
in the server
After that, check that
/var/run/flannel/subnet.env
does not exist in both server and agent. Also verify that the
cni0
interface disappeared
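A sketch of that verification (the uninstall-script paths below are the k3s defaults and are an assumption; check where your install placed them):

```shell
# k3s default uninstall script locations (assumption; verify on your nodes):
#   server: /usr/local/bin/k3s-uninstall.sh
#   agent:  /usr/local/bin/k3s-agent-uninstall.sh
# After uninstalling, both checks should report clean on each node:
[ ! -f /var/run/flannel/subnet.env ] && echo "subnet.env: clean"
ip link show cni0 >/dev/null 2>&1 || echo "cni0: clean"
```

If either check stays silent, stale flannel state survived the uninstall and will poison the redeploy.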
alert-motherboard-87423

05/05/2023, 8:35 AM
Sure, I'll do it and ping you here.
I have redeployed it.
/var/run/flannel/subnet.env
This file exists on both server and agent.
cni0
This interface didn't disappear.
bland-account-99790

05/05/2023, 2:55 PM
how does it look?
Still not able to ping among pods?
alert-motherboard-87423

05/08/2023, 6:44 AM
Yeah, pods are still not able to communicate with each other.
bland-account-99790

05/11/2023, 2:17 PM
did you follow the instructions I gave you? Is there any difference?
Do you still see conflicting ip ranges?
alert-motherboard-87423

05/14/2023, 7:06 PM
Yes, I followed the instructions you gave, and the IP ranges are still conflicting.
bland-account-99790

05/15/2023, 4:32 PM
10.42.1.0/24
and
10.42.0.0/24
, and your cni0 interface IP is not in that range?