# k3s
c
w
Yes, I did peruse that… Is there more info on `lbpool` config? Trying to do what in MetalLB would be an L2 load balancer.
Went also to the GitHub page for the KlipperLB project, no docs there…
c
It is very simple. As the docs say, all it does is create pods that use iptables to forward traffic from ports on the host to the cluster service, and advertises the hosts’ IPs as the LoadBalancer addresses. It is not a “real” LoadBalancer.
It is enough to get LoadBalancer services working, and works for most simple use cases. If you want to get fancy with it, you probably need a real LoadBalancer that will give you a Virtual IP for the LB.
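For reference, a minimal sketch of the MetalLB L2 setup mentioned above, assuming MetalLB v0.13+ is already installed in the `metallb-system` namespace and that `192.168.1.240-192.168.1.250` is an unused range on the LAN (the pool name and address range are placeholders):

```yaml
# IPAddressPool: addresses MetalLB may assign to LoadBalancer services
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lan-pool            # placeholder name
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # placeholder range on the local LAN
---
# L2Advertisement: announce the pool's addresses via ARP/NDP (L2 mode)
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lan-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - lan-pool
```

With something like this in place, MetalLB hands each LoadBalancer service a virtual IP from the pool and answers ARP for it from one node, which is the “Virtual IP” behaviour described above.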
w
OK, thanks. How would I remove ServiceLB then on my running cluster without impacting the cluster?
c
w
Yes, read that; where/how do I set that flag?
c
in the arguments in the systemd unit, or in the config file. your choice.
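For the systemd route, a rough sketch (this assumes the default install-script layout where the unit lives at `/etc/systemd/system/k3s.service`; adjust to your setup):

```sh
# Append the flag to the ExecStart line of the k3s server unit, e.g.:
#   ExecStart=/usr/local/bin/k3s server --disable=servicelb
sudo systemctl edit --full k3s.service   # or edit /etc/systemd/system/k3s.service by hand
sudo systemctl daemon-reload             # needed if you edited the file by hand
sudo systemctl restart k3s.service
```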
w
OK, thanks, I appreciate your assistance.
@creamy-pencil-82913 If I create the file `/etc/rancher/k3s/config.yaml` on my nodes, is the syntax for disabling ServiceLB:
`disable: servicelb`
or maybe
`servicelb: disable`
or something different? I cannot seem to find docs on the config.yaml directives & values.
c
https://docs.k3s.io/installation/configuration#configuration-file
CLI arguments map to their respective YAML key, with repeatable CLI arguments being represented as YAML lists. Boolean flags are represented as `true` or `false` in the YAML file.
So --disable=servicelb would be
disable: servicelb
or
disable:
  - servicelb
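As an illustration of the same mapping with other flags, a hypothetical combined `config.yaml` might look like this (the extra keys `write-kubeconfig-mode` and `disable-cloud-controller` are only examples of a value flag and a boolean flag, not something you need for this):

```yaml
# /etc/rancher/k3s/config.yaml (server node) - hypothetical example
disable:
  - servicelb                    # repeatable flag -> YAML list
write-kubeconfig-mode: "0644"    # flag with a value -> scalar
disable-cloud-controller: true   # boolean flag -> true/false
```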
w
Great, thx for explaining!
After I put this in place, I’d have to `systemctl restart k3s[-agent].service`, correct?
c
yep
w
thx
Hello @creamy-pencil-82913, one more q if I may... I have an HA 3-node k3s control plane; do I put the `/etc/rancher/k3s/config.yaml` on each cp node, then restart them one by one to maintain etcd quorum? Or some other way?
c
yeah, that's the best way to do it to reduce downtime
w
And it wouldn't be an issue that some are configured one way, and some another?
c
do it on the initial server first, and then go from there.
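A rough sketch of that rolling restart on a three-server etcd cluster (the check commands are just one way to do it; the point is to wait for each server to come back before touching the next, so quorum is never lost):

```sh
# Repeat on each control-plane node in turn, starting with the initial server:
sudo systemctl restart k3s.service

# From a machine with kubectl access, wait until the node is Ready
# and the API server reports healthy before moving to the next node:
kubectl get nodes
kubectl get --raw /readyz
```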
w
ok, thx
@creamy-pencil-82913 I put the `/etc/rancher/k3s/config.yaml` on all the cluster nodes, then restarted `k3s.service` via systemctl on the controller nodes, which worked; however, when I tried restarting `k3s-agent` on the worker nodes, it failed to start. If I moved the `config.yaml` out and restarted, then the k3s-agent service did start. Do I not need the `/etc/rancher/k3s/config.yaml` on the k3s-agent nodes?
c
Did you try to add that same flag to agents? It's a server flag, agents will complain and probably fail to start if you set it in them.
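If it happens again, the agent's journal usually shows which option it choked on, e.g. (the exact wording varies by k3s version):

```sh
journalctl -u k3s-agent.service -e --no-pager
# look for a startup/parse error that mentions the server-only flag,
# e.g. something along the lines of "unknown flag" / "flag provided but not defined"
```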
w
Yes I did, and I did find out they do fail to start 😛
So to confirm, it’s just needed on the control-plane (server) nodes then?
And how to confirm that ServiceLB is removed now?
c
yep, that's what the docs say:
> To disable ServiceLB, configure all servers in the cluster with the `--disable=servicelb` flag.
You could create a service with type=LoadBalancer and confirm that it remains pending?
If you already have LB services, you’ll want to delete and re-create them after deploying a new LB controller so that it picks them up
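A quick sketch of that check (the `lbtest` name is a placeholder, and the svclb pod-name pattern is my assumption; k3s normally names its ServiceLB pods `svclb-<service-name>-...`):

```sh
# 1. No ServiceLB pods should be left anywhere:
kubectl get pods -A | grep svclb

# 2. With no LB controller installed, a fresh LoadBalancer service should sit at <pending>:
kubectl create deployment lbtest --image=nginx
kubectl expose deployment lbtest --port=80 --type=LoadBalancer
kubectl get svc lbtest        # EXTERNAL-IP should stay <pending>

# clean up
kubectl delete service,deployment lbtest
```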
w
Ah, it’s just my old sysadmin brain - saw servers, thought “nodes” (we are running on bare metal)
What are the ServiceLB pods named? (I don’t see anything with “servicelb” string on anything…) I did have a service of type “LoadBalancer” before when ServiceLB was installed, but it remained in “pending”… It is still there in “pending”, haven’t installed a new LB implementation yet. Justt want to ensure that before I do that, that the old ServiceLB is indeed gone.