# general
a
c
can you post the full pod log? Also, confirm that you have appropriate iptables or nftables modules loaded…
r
I recall something similar where I needed a package install, and another case where the module wouldn't load by default when it should have, so an explicit `modprobe iptable_filter` (or a different one) was required.
I checked; on RHEL 8 the module I was missing was `iptable_nat`, though that was for k3d. Also, on RHEL 8 and newer Ubuntu there's a difference with using legacy iptables instead of nftables, and switching to legacy fixed things like that at times too.
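A minimal sketch of the checks being described above, assuming a systemd-based host; the module names are the ones mentioned in this thread, and the `update-alternatives` switch applies to Debian/Ubuntu rather than RHEL 8:

```sh
# See which iptables-related modules are currently loaded
lsmod | grep -E 'iptable|nf_tables'

# Load a missing module for the current boot (iptable_filter / iptable_nat, as mentioned above)
sudo modprobe iptable_nat

# Persist the module load across reboots via systemd's modules-load.d
echo iptable_nat | sudo tee /etc/modules-load.d/iptable_nat.conf

# On Debian/Ubuntu, point iptables at the legacy backend instead of nftables
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
```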
m
Sorry I've taken a bit to get back. One thing I have discovered is that when I force Klipper-lb 0.4.3, it works. I noticed in the release notes for 0.4.3 that it supports legacy iptables. This gives me a different problem, though: when I modify the daemonset configuration YAML and change the image from Klipper-lb 0.4.0 to 0.4.3, the config reverts back to 0.4.0 within seconds. Is this something Rancher is enforcing?
c
Yes, the daemonset is managed by the ServiceLB controller. Can you post the pod logs? It appears to be misdetecting nftables and I'm curious why.
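A sketch of how those logs might be pulled with kubectl; `svclb-traefik` is just an example placeholder, since the actual svclb daemonset name depends on the LoadBalancer Service it fronts:

```sh
# List the ServiceLB (klipper-lb) daemonsets and pods in kube-system
kubectl -n kube-system get daemonsets | grep svclb
kubectl -n kube-system get pods -o wide | grep svclb

# Dump the full logs from one of the svclb daemonsets (example name)
kubectl -n kube-system logs daemonset/svclb-traefik --all-containers
```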
m
@creamy-pencil-82913 I'm sorry I didn't get those for you. What I posted was all the log information I was getting from the pod in the Rancher UI. This morning I ended up needing to reboot my cluster; I'm running 5 Raspberry Pi 4s, one dedicated to Rancher and the other 4 running k3s. After rebooting, Klipper-lb 0.4.0 came up fine. Maybe the upgrade didn't fully take until rebooting?
c
ah yeah, I guess it's possible that the controller was scheduling newer svclb pods onto nodes that hadn't been upgraded yet?
m
Yes, I would agree that's what had happened, because there were even still some remaining klipper-lb pods that were 0.3.5 and hadn't died off yet at the time.
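As a quick sanity check on that theory, a sketch of the kubectl queries that would confirm it; nothing here is specific to this cluster:

```sh
# Confirm every node reports the upgraded k3s version
kubectl get nodes -o wide

# Show which klipper-lb image each remaining svclb pod is still running
kubectl -n kube-system get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}' | grep svclb
```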