# rke2
p
I haven't experienced that. Are you seeing it on all the nodes? Check the Rancher logs in case something is wrong on the management side. Also, I noticed that a cluster update will overwrite some of the changes you made to Rancher-managed workloads (such as the ingress controller arguments, in my case).
b
yes, on all the nodes. Apparently the kube-proxy liveness health check endpoint is not accessible:
`http-get http://localhost:10256/healthz delay=10s timeout=15s period=10s #success=1 #failure=8`
p
You can't explore the cluster to check pod logs, correct?
b
I found the problem: the ip_vs module was not present in the kernel. I loaded it on all nodes and it's working again.
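For anyone hitting the same thing, a minimal sketch of the fix described above: load ip_vs now and persist it across reboots. The extra ip_vs_* scheduler modules and the modules-load.d mechanism are assumptions based on common kube-proxy IPVS setups, not details from this thread; the sketch writes to a local directory so it runs without root, whereas on a real node you would target /etc/modules-load.d.

```shell
#!/bin/sh
# Sketch only: load the ip_vs module kube-proxy was missing and persist it.
set -eu

# On a real node this would be /etc/modules-load.d (read by systemd at boot);
# default to a local directory here so the sketch runs without root.
CONF_DIR="${CONF_DIR:-./modules-load.d}"

# ip_vs is the module that was missing; the scheduler modules below are a
# common companion set (assumption, adjust to your kube-proxy config).
MODULES="ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh"

# Try to load each module now; ignore failures (no root, or built into the kernel).
for m in $MODULES; do
    modprobe "$m" 2>/dev/null || true
done

# Persist: one module name per line, picked up on the next boot.
mkdir -p "$CONF_DIR"
printf '%s\n' $MODULES > "$CONF_DIR/ipvs.conf"
cat "$CONF_DIR/ipvs.conf"
```

After a reboot, `lsmod | grep ip_vs` should confirm the modules are loaded.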
p
Oh, running your nodes on KVM? 😄
b
yes, on Proxmox VMs 😅
p
you found this topic, correct? 😄 https://github.com/rancher/rke2/issues/4416
p
Well, nice one. Kubernetes network issues are an effin' pain. Happy you managed to find the solution.
๐Ÿ‘ 1