
microscopic-memory-76904

04/26/2023, 2:32 AM
Hi All! I'm running into an issue with servicelb. The pods are not coming up and when I look at the logs, I'm getting the following error:
2023-04-25T19:12:18.599777283-07:00 + grep -Eq :
2023-04-25T19:12:18.601544851-07:00 + iptables -t filter -I FORWARD -s 0.0.0.0/0 -p TCP --dport 8686 -j ACCEPT
iptables v1.8.8 (legacy): can't initialize iptables table `filter': Table does not exist (do you need to insmod?)
2023-04-25T19:12:18.614809084-07:00 Perhaps iptables or your kernel needs to be upgraded.
Has anyone else run into this? I recently upgraded Rancher to 2.7.3 and k3s to v1.25.7+k3s1.
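(A quick way to check whether the filter table's kernel module is actually loaded on the affected node, assuming shell access to that node, is something like:
# List iptables-related kernel modules currently loaded
lsmod | grep -E 'ip_tables|iptable_filter'
# Try listing the filter table directly; if the module is missing, this fails the same way the pod does
sudo iptables -t filter -L -n
)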

creamy-pencil-82913

04/26/2023, 3:09 AM
can you post the full pod log? Also, confirm that you have appropriate iptables or nftables modules loaded…

rough-farmer-49135

04/26/2023, 2:06 PM
I recall something similar: one case needed a package install, and in another the module wouldn't load by default when it should have, so an explicit
modprobe iptable_filter
(or a different module) was required.
I checked; on RHEL 8 the module I was missing was
iptable_nat
, though that was for k3d. Also, on RHEL 8 and newer Ubuntu, switching to legacy iptables instead of nftables has fixed issues like this at times too. A minimal sketch of that workaround is below.
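(Assuming the missing module turns out to be iptable_filter; substitute iptable_nat or whichever module lsmod shows is absent:
# Load the module now
sudo modprobe iptable_filter
# Persist it across reboots; systemd loads everything under /etc/modules-load.d/ at boot
echo iptable_filter | sudo tee /etc/modules-load.d/iptables.conf
)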

microscopic-memory-76904

04/27/2023, 3:53 PM
Sorry I've taken a bit to get back. One thing I have discovered is that when I force klipper-lb 0.4.3, it works. I noticed in the release notes of 0.4.3 that it supports legacy iptables. This gives me a different problem, though: when I modify the daemonset configuration YAML and change the image from klipper-lb 0.4.0 to 0.4.3, the config reverts back to 0.4.0 within seconds. Is this something Rancher is enforcing?

creamy-pencil-82913

04/27/2023, 4:51 PM
Yes, the daemonset is managed by the ServiceLB controller. Can you post the pod logs? It appears to be misdetecting nftables and I'm curious why.
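(For pulling the full logs and checking which iptables backend the node uses, something along these lines should work; the pod name below is a placeholder, and on recent k3s the svclb pods typically live in kube-system:
# Find the svclb pods created by the ServiceLB controller
kubectl get pods -n kube-system | grep svclb
# Dump the full log from one of them (pod name is an example)
kubectl logs -n kube-system svclb-traefik-abcde
# On the node itself, the version string reports "(legacy)" or "(nf_tables)"
iptables --version
)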

microscopic-memory-76904

04/27/2023, 5:49 PM
@creamy-pencil-82913 I'm sorry I didn't get those for you. What I posted was all the log information I was getting from the pod in the Rancher UI. This morning I ended up needing to reboot my cluster; I'm running 5 Raspberry Pi 4s, one dedicated to Rancher and the other 4 running k3s. After rebooting, klipper-lb 0.4.0 came up fine. Maybe the upgrade didn't fully take effect until rebooting?

creamy-pencil-82913

04/27/2023, 6:01 PM
ah yeah, I guess it's possible that the controller was scheduling newer svclb pods onto nodes that hadn't been upgraded yet?
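(One quick way to verify that is to confirm every node reports the same k3s version after the upgrade, e.g.:
# The VERSION column should read v1.25.7+k3s1 on all nodes once the upgrade has fully rolled out
kubectl get nodes -o wide
)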

microscopic-memory-76904

04/27/2023, 6:10 PM
Yes, I would agree that's what happened, because at the time there were even still some remaining klipper-lb pods at 0.3.5 that hadn't died off yet.