# k3s
b
What config did you use to deploy k3s?
a
Hi @bland-account-99790, I’m using the following command to start K3s Server.
curl -sfL https://get.k3s.io | sh -s - \
    --write-kubeconfig-mode "0644" \
    --token "token" \
    --node-external-ip "ip" \
    --tls-san "ip" \
    --node-taint "CriticalAddonsOnly=true:NoExecute" \
    --disable=traefik \
    --flannel-backend=host-gw \
    --flannel-external-ip
Thanks, Ashis
b
internal cluster communication works? You only have problems with pods accessing the internet?
verify that your iptables contain the masquerade rule
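For example, something like this should list the NAT masquerade rules flannel adds (assuming the iptables backend is in use on your nodes):
# show POSTROUTING rules in the nat table and filter for masquerade
sudo iptables -t nat -S POSTROUTING | grep -i masq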
a
No, internal communication between pods doesn’t work either.
b
ah ok, that might be the reason why internet communication does not work either. Check your ip route
ip r
Communication of pods running in the same node works?
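A quick way to test that, as a sketch (the pod names here are just examples):
# start two throwaway busybox pods
kubectl run pinger --image=busybox --restart=Never -- sleep 3600
kubectl run target --image=busybox --restart=Never -- sleep 3600
# note the target pod IP, then ping it from the other pod
kubectl get pod target -o wide
kubectl exec pinger -- ping -c 3 <target-pod-ip>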
a
This is the list of masquerade iptables rules
This is the output of
ip r
Communication between pods also doesn’t work
b
Do you see the veth interfaces when executing
sudo brctl show
?
communication between pods in the same node does not work?
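If brctl is not installed, roughly the same information can be read with iproute2, for example:
# list the veth interfaces attached to the cni0 bridge
ip link show master cni0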
a
Hi @bland-account-99790, this is the output of the brctl command
And communication between pods in the same node works
b
ah ok, I thought it did not
I would expect another rule in the
ip r
, could you try deploying without
--flannel-backend=host-gw
? Just to verify that it works with vxlan. There might be a bug with flannel and host-gw
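After redeploying, one way to confirm which backend actually came up (a sketch):
# flannel.1 only exists with the vxlan backend
ip -d link show flannel.1
# with host-gw the 10.42.x.0/24 routes point out the physical NIC instead of flannel.1
ip r | grep 10.42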
a
Sure let me try
I tried without
--flannel-backend=host-gw
but no luck. I have attached the output of
ip r.
b
hey, could you create a GitHub issue in k3s? That way we will be able to troubleshoot it further
a
Hi @bland-account-99790, I have already created the issue. Following is the link for the same. https://github.com/k3s-io/k3s/issues/5549
b
uh, that's old
I'd prefer if you close that one and start a new one
But I have time now, let me deploy an example
I deployed with default configs
This is the routing table of the agent:
default via 10.1.1.1 dev eth0 proto dhcp src 10.1.1.9 metric 100 
10.1.1.0/24 dev eth0 proto kernel scope link src 10.1.1.9 
10.42.0.0/24 via 10.42.0.0 dev flannel.1 onlink 
10.42.1.0/24 dev cni0 proto kernel scope link src 10.42.1.1
I have two nodes
In the agent:
cat /var/run/flannel/subnet.env
FLANNEL_NETWORK=10.42.0.0/16
FLANNEL_SUBNET=10.42.1.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
In the server:
cat /var/run/flannel/subnet.env 
FLANNEL_NETWORK=10.42.0.0/16
FLANNEL_SUBNET=10.42.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
So the routing table of the agent is correct: for traffic going to
10.42.1.0/24
it uses the cni0 bridge, as this traffic is local. For traffic going to the server node (
10.42.0.0/24
), it uses the flannel.1 interface, which is the vxlan interface
And my pods are able to communicate across nodes
Is it possible that you have a firewall or something blocking your pod traffic across nodes?
Could you verify with tcpdump if you see packets coming from one node to another?
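For the vxlan backend that would be something like this (8472/udp is flannel's default vxlan port; adjust the interface name to yours):
sudo tcpdump -ni eth0 udp port 8472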
Let me try now with hostgw backend
Let's start over for one second because I am a bit confused. How many nodes did you deploy? Is it one or two?
a
I have one master and one agent as of now
b
can you show me
kubectl get nodes -o wide
please?
a
Here it is
b
From your master node, can you ping
10.128.0.7
?
a
No, I can’t ping it because the two nodes are in different networks, using private IPs.
b
Can you show me the output of
cat /var/run/flannel/subnet.env
in each node?
🤔 strange. They belong to the same cluster but the
FLANNEL_NETWORK
is different
what command did you use to start k3s in the agent?
a
curl -sfL https://get.k3s.io | K3S_TOKEN="token" K3S_URL="https://server_address:6443" sh -s - \
    --node-label "svccontroller.k3s.cattle.io/enablelb=true" --node-external-ip=ip --with-node-id
This is the command I used
b
server_address
is the
node-external-ip
of the server, right?
Could you show me the output of
kubectl get nodes -o yaml | grep podCIDRs -n5
please?
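Or, roughly equivalent, something like:
# print each node name with its assigned podCIDRs
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDRs}{"\n"}{end}'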
a
No, server_address and node-external-ip are different. server_address is the master’s IP and node-external-ip is the agent’s external IP (public IP)
b
server_address is master ip
yes, but master's external ip (public ip), right?
Something strange is happening in your agent. As you can observe, Kubernetes is assuming a global pod CIDR:
10.42.0.0/16
and it gave
10.42.0.0/24
to master and
10.42.1.0/24
to the agent. However, your flannel config in the agent is using
10.244.1.0/24
. I wonder if there was an old k3s cluster there and things were not cleaned up properly. Could you show me the output of
ip a show dev flannel.1
in the agent please?
a
It’s saying flannel.1 doesn’t exist.
b
ok, could you show me all your interfaces please?
ip a
?
This cluster is using
--flannel-backend=host-gw
?
You see, the cni0 interface, which is the bridge, is using 10.244.0.1, which is wrong
a
No, as you said previously, I removed it
b
Did you deploy previously a cluster with
10.244.0.0/16
as podCIDR?
a
No, I haven’t deployed a cluster with those CIDRs, and I’m not specifying a pod CIDR when deploying the cluster either.
b
weird, I wonder where that config came from
Is it ok for you to redeploy? We can do it together
a
Yes, Sure
b
As nodes are not in the same network, the only flannel backend that will work is
wireguard-native
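As a sketch of what the redeploy could look like (the public IPs are placeholders, and UDP 51820 needs to be open between the nodes for wireguard-native):
# server
curl -sfL https://get.k3s.io | sh -s - \
    --token "token" \
    --node-external-ip "SERVER_PUBLIC_IP" \
    --flannel-backend=wireguard-native \
    --flannel-external-ip
# agent
curl -sfL https://get.k3s.io | K3S_TOKEN="token" K3S_URL="https://SERVER_PUBLIC_IP:6443" sh -s - \
    --node-external-ip "AGENT_PUBLIC_IP"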
First of all, please run
k3s-agent-uninstall.sh
in the agent and
k3s-uninstall.sh
in the server
After that, check that
/var/run/flannel/subnet.env
does not exist in both server and agent. Also verify that the
cni0
interface disappeared
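Something along these lines should confirm the cleanup worked:
# both of these should fail once the uninstall scripts have run
ls /var/run/flannel/subnet.env
ip link show cni0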
a
Sure I’ll do it and ping you here
I have redeployed it.
/var/run/flannel/subnet.env
This file exists on both server and agent. The
cni0
interface didn’t disappear.
b
how does it look?
Still not able to ping among pods?
a
Yeah still not able to communicate among pods
b
did you follow the instructions I gave you? Is there any difference?
Do you still see conflicting ip ranges?
a
Yes, I followed the instructions you gave, and the IP ranges are still conflicting.
b
10.42.1.0/24
and
10.42.0.0/24
, and your cni0 interface IP is not in that range?
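One way to check, if useful:
# the cni0 address should sit inside the node's assigned podCIDR
ip -4 addr show dev cni0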