# rke2
a
I'm embarrassed to ask such a basic question, but I think I'm at the juncture where someone here MUST know. It's really a best-practices issue that we haven't figured out in our RKE2 proof-of-concept install. I have to use an external F5 load balancer for TLS termination and for VIPs to internal services. We're not focused on deploying any apps yet, just getting the cluster itself stable so the platform engineers can get comfortable. On the last test install, I rebooted the CP 1 server and it had an issue with another part of the OS, but after that I couldn't really get back into anything. Since it was testing, I tried too many things and probably added new problems. Thinking back on it, though, I just didn't have an external VIP set up for the CP nodes, so the other two never knew to respond, etc. Given the drawing attached:

1. Is this a reasonable understanding of how to LB the core components for maximum availability?
2. Anyone have any advice for when I get to the Rancher (Helm) install portion, for supporting external load balancer ingress?
3. I won't bother anyone with it yet, but the apps are going to be served using the F5 Container Ingress Service, and that's so scary I can't even think about the how there yet. lol

Thank you!
c
no. You really shouldn’t put the F5 in front of the apiserver. You really only want it in front of the ingress ports or other service ports.
The cluster members communicate directly with each other, they are not going to go through the F5.
And when you are accessing the apiserver externally, you don’t want to go through the F5 for that either, since Kubernetes does mutual TLS authentication, and if you’re offloading TLS to the F5… that won’t work.
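To make "in front of the ingress ports" concrete: RKE2's bundled ingress-nginx typically runs as a DaemonSet on the nodes, and those nodes are what the F5 pool for 80/443 should target. A quick way to see where it's running (a sketch; the exact pod names and labels vary by RKE2 version, so this just greps for them):

```bash
# list the ingress-nginx pods RKE2 deploys by default and the nodes they sit on;
# those node addresses are the backends for the F5 pool serving 80/443
kubectl -n kube-system get pods -o wide | grep ingress-nginx
```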
e
I guess I left out the reference: https://docs.rke2.io/install/ha. Reading it a second time, it's only port 9345 that gets balanced/health-checked in the drawing. How is this wrong? And thank you.
c
The fixed registration endpoint is ONLY used for the initial contact when registering a new node. Once the node joins the cluster, it communicates directly with other cluster members. So it’s not worth going through a bunch of work to set up a VIP in front of the servers, when it’s barely ever used.
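To be concrete, that registration endpoint is just the `server:` line in the joining nodes' config, something like this (a sketch based on the HA doc; the address and token are placeholders):

```yaml
# /etc/rancher/rke2/config.yaml on the second and third server nodes
server: https://<registration-address>:9345   # fixed registration address, only used to join
token: <cluster-join-token>
tls-san:
  - <registration-address>                    # extra SANs for the server cert; add a VIP here if you later point kubectl at one
```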
e
Ah gotcha.
Wouldn't that same VIP still be useful for load-balancing / making the API highly available for kubectl for the internal admins? In most F5 configs you're doing health checks, so a kubectl pointed at that VIP would always be able to reach a CP node?
c
you could but you’d need to turn off TLS offload because Kubernetes does TLS mutual auth
and you’d also need to do both 9345 and 6443
and make sure that your target group for 6443 only includes control-plane servers
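if you do go that route, a minimal sketch of pointing kubectl at the VIP (assuming the VIP was added to `tls-san` so the apiserver cert is valid for it, and the F5 virtual server is plain TCP passthrough on 6443, no offload):

```bash
# copy the admin kubeconfig off one of the server nodes...
scp server-1:/etc/rancher/rke2/rke2.yaml ~/.kube/config
# ...and swap the default 127.0.0.1 endpoint for the VIP
sed -i 's/127\.0\.0\.1/<vip>/' ~/.kube/config
kubectl get nodes   # should work as long as any CP node behind the VIP is healthy
```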
e
Last dumb questions (for now)… when I get to the next part, using the F5 for the Rancher web UI itself: can I skip the cert-manager part since I'm doing all external TLS termination? How do I tell the Helm install that I'm using external TLS? And does the VIP point to all nodes or just the CP servers?
c
the VIP for rancher should point to all nodes that run the ingress controller. if you’re running it as a daemonset, that would be all nodes.
you can skip cert-manager if you tell rancher that you’re using an external tls secret, but you still need to put the cert/ca-cert in a secret and tell rancher what it’s called, because it passes that cert into agents so that they know what to expect when they connect up.
that is covered in the rancher install docs and chart values, if you look for it
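if it helps, the shape of it is roughly this (a sketch based on the Rancher chart docs; the hostname, file names, and repo choice are placeholders, and the exact values to use are in the docs mentioned above):

```bash
# TLS terminates on the F5, so tell the chart the cluster-side ingress won't serve certs
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable

# if the F5's cert is signed by a private CA, put the CA cert in a secret first so
# rancher can pass it down to the agents (pairs with --set privateCA=true below)
kubectl create namespace cattle-system
kubectl -n cattle-system create secret generic tls-ca \
  --from-file=cacerts.pem=./cacerts.pem

helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set tls=external \
  --set privateCA=true   # only if you created the tls-ca secret above
```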