# k3d
w
Heyho 👋 The serverlb doesn't react to anything dynamically. k3d writes the confd values file when you run a command. There's nothing watching Ingress or Service objects etc.
c
Thanks for the answer! I got to that point already. Now I'm going down the klipper-lb rabbit hole. 🙂
w
Klipper is inside K3s. It reacts to services with `type: LoadBalancer`. It checks the desired port and tries to expose it on any node in the cluster using a proxy pod with `hostPort` exposed. The `externalIP` will then be the IP of that node.
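For illustration, a minimal Service manifest that would trigger this might look like the following (the name and selector are hypothetical):

```yaml
# Hypothetical example of a Service that klipper-lb reacts to
apiVersion: v1
kind: Service
metadata:
  name: whoami            # hypothetical name
spec:
  type: LoadBalancer      # this is what klipper-lb watches for
  selector:
    app: whoami           # hypothetical label
  ports:
    - port: 80            # the port klipper tries to claim on a node via hostPort
      targetPort: 8080
```

Once klipper has claimed a free host port, `kubectl get svc whoami` should show that node's IP under `EXTERNAL-IP`.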
c
Yeah, I got to that as well 🙂. What I don't get is how it differs from a NodePort service.
Because as far as I understood, it's practically the same, with the difference that it'll be on every node with the help of its DaemonSet, which will then forward all traffic to the clusterIP of the service.
w
NodePort is exposed on all nodes simultaneously and only offers a specific port range (e.g. no privileged ports), though that can be configured. Klipper is only something that tries to emulate the behavior of real LoadBalancers that you get from cloud providers.
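For comparison, a NodePort variant of the same hypothetical service; the `nodePort` has to fall inside the configured range, which defaults to 30000-32767 and can be changed via the kube-apiserver flag `--service-node-port-range`:

```yaml
# Hypothetical NodePort counterpart to the LoadBalancer example above
apiVersion: v1
kind: Service
metadata:
  name: whoami-nodeport   # hypothetical name
spec:
  type: NodePort
  selector:
    app: whoami           # hypothetical label
  ports:
    - port: 80            # cluster-internal port, reachable via the clusterIP
      targetPort: 8080
      nodePort: 30080     # exposed on every node; default range is 30000-32767
```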
c
That is the part that still hasn't clicked for me. I don't see how that emulation is supposed to work. The premise of both NodePort and klipper-lb seems the same to me.
Both expose a port on the nodes: NodePort does that without extra pods, while klipper-lb does so with the help of a DaemonSet. In both cases you'll use the same IPs to reach the service. Klipper additionally allows you to use privileged ports. In terms of k3d, I wouldn't be able to bind those ports to the host, as only the first pod would claim the port while all the other pods would error out because the port is no longer available. Would that be correct, or is there still more to klipper-lb?
w
I might be wrong... but Klipper does use single pods, not DaemonSets, right? It exposes the port on only a single node, not all of them, right?
Your example on k3d is only valid if you have only a single node. But in that case it would be valid everywhere.
From the Klipper repo:
> This works by using a host port for each service load balancer and setting up iptables to forward the request to the cluster IP. The regular k8s scheduler will find a free host port. If there are no free host ports, the service load balancer will stay in pending.
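As a rough sketch (not the exact spec K3s generates, which differs between versions), such a proxy pod boils down to something like this:

```yaml
# Simplified sketch of a klipper-lb proxy pod, assuming the example service above
apiVersion: v1
kind: Pod
metadata:
  name: svclb-whoami      # K3s prefixes these proxy pods with "svclb"
spec:
  containers:
    - name: lb-port-80
      image: rancher/klipper-lb   # image tag varies with the K3s version
      ports:
        - containerPort: 80
          hostPort: 80    # claims port 80 on whichever node the pod lands on
      securityContext:
        capabilities:
          add: ["NET_ADMIN"]      # needed to install the iptables forwarding rules
```

The container's only job is then to set up iptables rules that forward traffic hitting the `hostPort` to the service's cluster IP.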
c
> Your example on k3d is only valid if you have only a single node. But in that case it would be valid everywhere.
Isn't that the premise of k3d, that you have a single Docker host which can then run multiple K3s nodes in Docker? Or is it possible to run k3d on multiple machines? Regarding Klipper: https://rancher.com/docs/k3s/latest/en/networking/#how-the-service-lb-works
> K3s creates a controller that creates a Pod for the service load balancer, which is a Kubernetes object of kind Service.
> For each service load balancer, a DaemonSet is created. The DaemonSet creates a pod with the `svc` prefix on each node.
> ...
w
Yes, but in k3d, your Kubernetes (K3s) nodes are containers, and you can have multiple of them. In that case the external IP of a node is the IP of the K3s container. Regarding the docs... this seems off to me. Maybe we can cross-post this somewhere else to get clarification, as the last paragraph says
> If you try to create a Service LB that listens on port 80, the Service LB will try to find a free host in the cluster for port 80. If no host with that port is available, the LB will stay in Pending.
which wouldn't make sense if the proxies were DaemonSets, since those would cover all hosts.
I'm on my phone right now, but maybe you can just give it a try and see if it creates a DaemonSet? I could totally be wrong there..