# k3s
c
k3s doesn’t stop container processes on either servers or agents
that doesn’t mean a process won’t crash for other reasons if the kubelet or apiserver goes away, but k3s itself doesn’t stop them.
have you opened an issue with the cilium folks? I’m not seeing anything k3s-specific here
w
Not yet. I was wondering which side is the source of the issue here.
@creamy-pencil-82913 I tested it a bit more and it seems that this line from k3s install script:
iptables-save | grep -v KUBE- | grep -iv flannel | $SUDO iptables-restore
is breaking the networking of the instance.
Maybe a flag to make the script skip this step would be useful. If someone is not using flannel, this step is pointless anyway, since there are no flannel rules for it to remove
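For anyone reading along, stage by stage that line does roughly this (annotated copy, with the script’s $SUDO written out as sudo):
# iptables-save      dumps every table, chain, and rule as text
# grep -v KUBE-      drops any line mentioning a KUBE- chain (kubelet/kube-proxy rules)
# grep -iv flannel   drops any line mentioning flannel, case-insensitively
# iptables-restore   atomically replaces the live ruleset with whatever text is left
iptables-save | grep -v KUBE- | grep -iv flannel | sudo iptables-restore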
c
breaking it how?
Removing the kubelet and flannel iptables rules shouldn’t break anything…
you can always set INSTALL_K3S_SKIP_ENABLE or INSTALL_K3S_SKIP_START to skip that bit
w
"breaking it how?" - all connectivity of the VM is broken, and no services can work until: • reset of vm is done • or I run commands for cleanup of cilium stuff mentioned here: https://docs.k3s.io/installation/network-options#custom-cni
"Removing the kubelet and flannel iptables rules shouldn’t break anything…" - my idea is that maybe it is related somehow to the order of rules? Not that easy to debug...
c
ah right, I forgot that cilium does dumb things with the host networking
w
"you can always set INSTALL_K3S_SKIP_ENABLE or INSTALL_K3S_SKIP_START to skip that bit" - this is interesting idea
c
you’ll have to start the service yourself afterwards, but that shouldn’t be too bad
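something like this should do it (a sketch; the server flags assume the custom-CNI setup from the docs linked above):
curl -sfL https://get.k3s.io | INSTALL_K3S_SKIP_START=true sh -s - \
    --flannel-backend=none --disable-network-policy
# install cilium here, then start the service yourself:
sudo systemctl start k3s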
w
That's pretty easy with ansible. I'll give it a try. Thanx
l
@worried-jackal-89144 I would suggest you use Cilium’s kube-proxy replacement mode. That reduces iptables use DRASTICALLY. Further, are you actually disabling flannel entirely when you use Cilium?
w
@late-needle-80860 Yes, flannel is disabled, and I’m using Cilium’s kube-proxy replacement (I tried with and without it, but the result was the same). The easiest solution is to use the install script with INSTALL_K3S_SKIP_START=true and then start the service explicitly
l
I would discuss this over on the Cilium Slack community as well … it seems weird. I’m running K3s v1.27.4+k3s1 as well … have no such issue with Cilium v1.14
We don’t configure the Pod CIDR on-prem though … as we don’t need to. However, I can’t see that affecting node network connectivity in any negative way
We use:
• bpf masquerading
• auto direct node routes
• kube-proxy replacement in strict mode
• the ipv4 native routing CIDR set
• tunnel mode disabled
all the above is of course Cilium conf. In helm terms it maps roughly to the below.
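(a sketch, not our exact values; option names shift a bit across Cilium releases, and 10.42.0.0/16 is just k3s’s default cluster CIDR standing in as a placeholder)
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --namespace kube-system \
    --set bpf.masquerade=true \
    --set autoDirectNodeRoutes=true \
    --set kubeProxyReplacement=strict \
    --set ipv4NativeRoutingCIDR=10.42.0.0/16 \
    --set tunnel=disabled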