#k3s

cool-ocean-71403

07/23/2022, 8:44 AM
Is there any way the k3s service lb can pass or preserve the source IP and port from the client's connection to my backend services? Right now, when I deploy any service using the service lb, my service can only see and log the IP addresses of the service lb daemonset pods; my backend service is unable to read the original client IP that is hitting the service lb.

Also, in future iterations of k3s, would it be possible to configure the klipper lb so that it creates only one daemonset and routes all load balancer service traffic through that single daemonset? So, only one pod per node, and maybe a node label flag to allow the daemonset to run on any node. A full daemonset for every load balancer service seems very inefficient in terms of resource usage, and it also creates 10 extra pods on every node if 10 services are of type LoadBalancer.
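(For reference, a minimal sketch of the kind of Service described above, with hypothetical names. With the default externalTrafficPolicy of Cluster, the traffic forwarded to the backend is source-NATed, which is why the backend only ever logs cluster-internal addresses.)
```
# Hypothetical sketch of a plain LoadBalancer Service as described above.
# With the default externalTrafficPolicy (Cluster), traffic forwarded by the
# svclb pods is source-NATed, so the backend logs the forwarding pod/node
# address instead of the real client IP.
apiVersion: v1
kind: Service
metadata:
  name: my-backend        # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: my-backend       # hypothetical label
  ports:
  - port: 80
    targetPort: 8080
```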

full-painter-23916

07/23/2022, 8:20 PM
The only way to do that for layer 4 (as opposed to layer 7/http/ingress) is for the request to hit a node where the load balancer is "running" and be responded to by a target pod on that same node, with (the service).spec.externalTrafficPolicy=Local.

Klipper is a ~10 line shell script whose entire purpose is to be a stupid-simple implementation so that LoadBalancer services are reachable instead of staying pending, because k8s comes with none. It sets up an iptables rule on startup and then does nothing after that until it's time to remove the rule. You are not "wasting" anything of significance by having multiple of them "running", other than counting against the default 110 pods per node limit. It is not listening for traffic or shuffling bits around.

A more complicated multiplexing design requiring configuration decisions is what metallb and friends do; if you want that, use them.
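(A minimal sketch of what that looks like on the Service, with hypothetical names. With Local, traffic is only delivered to backend pods on the node that received the connection, which is what lets the original client IP survive.)
```
# Hypothetical sketch: the same Service with externalTrafficPolicy: Local.
# Connections are only handed to backend pods on the node that received them,
# so the client source address is not rewritten on the way in.
apiVersion: v1
kind: Service
metadata:
  name: my-backend        # hypothetical name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: my-backend       # hypothetical label
  ports:
  - port: 80
    targetPort: 8080
```
The catch is exactly what is described above: a request that lands on a node with no ready backend pod for that service does not get answered, so the backend pods have to be placed (or spread) with that in mind.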

cool-ocean-71403

07/24/2022, 2:42 PM
Understood
The only problem I have so far with metallb is that it needs a free address pool from which to assign IP addresses to all the load balancer services. But each of my nodes has 1 public IP address and 1 private IP address. I am not sure if it is possible to hand over these single public IP addresses to metallb and then somehow have metallb do the networking internally so that a service hosted on node1 is reachable via node4's IP address, because the load balancer service has been assigned node4's IP even though the pod is on node1.
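(For context, this is roughly the shape of the pool metallb expects, assuming the CRD-based configuration used by recent MetalLB releases; the addresses and names below are placeholders, not a recommendation for this particular setup.)
```
# Placeholder sketch of a MetalLB address pool (CRD-style config).
# MetalLB hands out addresses from this range to LoadBalancer services,
# which is why it wants spare, routable IPs rather than the nodes' own
# single per-node addresses.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example-pool          # placeholder name
  namespace: metallb-system
spec:
  addresses:
  - 203.0.113.10-203.0.113.20   # placeholder range
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example-l2            # placeholder name
  namespace: metallb-system
spec:
  ipAddressPools:
  - example-pool
```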