# k3s
The only way to do that at layer 4 (as opposed to layer 7/HTTP/ingress) is for the request to hit a node where the load balancer is "running" and be answered by a target pod on that same node, with the Service's spec.externalTrafficPolicy=Local.

Klipper is a ~10-line shell script whose entire purpose is to be a stupid-simple implementation so that LoadBalancer services become reachable instead of staying pending, because k8s ships with none. It sets up an iptables rule on startup and then does nothing until it's time to remove the rule. It is not listening for traffic or shuffling bits around, so you are not "wasting" anything of significance by having multiple copies "running", other than counting against the default 110-pods-per-node limit.

A more complicated multiplexing design that requires configuration decisions is what MetalLB and friends do; if you want that, use them.
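As a concrete illustration of the externalTrafficPolicy=Local point above, here is a sketch of what such a Service manifest looks like. The name, selector, and ports are placeholders I made up for illustration, not anything from this conversation:

```shell
# Write a placeholder Service manifest; with externalTrafficPolicy: Local,
# a node only answers for this Service if a matching pod runs on that node.
cat <<'EOF' > svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: demo            # placeholder name
spec:
  type: LoadBalancer    # on k3s, Klipper/ServiceLB handles this type
  externalTrafficPolicy: Local   # respond only from pods on the receiving node
  selector:
    app: demo           # placeholder selector
  ports:
    - port: 80
      targetPort: 8080
EOF
echo "wrote svc.yaml"
```

You would then apply it with `kubectl apply -f svc.yaml`.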
Understood
The only problem I have so far with MetalLB is that it needs a free address pool from which to assign IPs to all the LoadBalancer services. But each of my nodes has one public IP and one private IP. I'm not sure it's possible to hand these single public IPs over to MetalLB and have it do the networking internally so that a service whose pod is on node1 is reachable via node4's IP, in the case where the load balancer service gets assigned node4's IP even though the pod lives on node1.
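For reference, the address pool MetalLB needs is declared with its IPAddressPool and L2Advertisement resources. The addresses below are placeholders; whether MetalLB behaves sanely when the pool consists of IPs already bound to the nodes' own interfaces is exactly the open question in this thread, so treat this as a sketch of the config shape, not a confirmed answer:

```shell
# Placeholder MetalLB pool of per-node public IPs (203.0.113.x are
# documentation addresses, not real). In L2 mode, the node MetalLB elects
# for an assigned IP answers ARP for it, and kube-proxy then forwards the
# traffic to the backing pod, which may sit on a different node.
cat <<'EOF' > metallb-pool.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: public-ips
  namespace: metallb-system
spec:
  addresses:
    - 203.0.113.10/32   # node1 public IP (placeholder)
    - 203.0.113.11/32   # node4 public IP (placeholder)
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: public-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - public-ips
EOF
echo "wrote metallb-pool.yaml"
```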