# rke2
a
This message was deleted.
b
Is this a downstream cluster?
I ask because the "Now after setup Rancher we [...]" makes it seem like it might be your upstream cluster.
Well, if it's downstream, here's how I think you might be able to solve it:
• Don't have a worker pool and just beef up the control plane pool so it can run your workloads. Since you already have kube-vip, that floating IP will keep your workloads available too.
• Use something else like MetalLB to get a VIP for your worker nodes.
• Use an external proxy (like HAProxy) to be the VIP for your entire cluster and route the appropriate traffic to the CP and worker nodes respectively.
b
Thanks @bland-article-62755 for your reply. Yes, it's a downstream cluster. That approach works for a standalone rke2 setup, but for a custom cluster provisioned from Rancher you can't change the kube-api IP for the workers: the system-agent-install tool only lets you provide the Rancher server. On your first point, yes, but I want to keep workers and CP separate. MetalLB will be for LoadBalancer services. For the external HAProxy solution you'd need some way, during installation, to give the workers that HAProxy IP so they connect to kube-api through it. To be clear, I'm only talking about HA for the workers: if any CP goes down, the workers should keep working without any issues. Even if the entire Rancher instance goes down, the downstream k8s cluster should keep working, and the workers should too when one of the CPs in that downstream cluster is down. Right now it seems HA works thanks to Rancher: all downstream k8s connections go through Rancher, and Rancher knows about all the CPs in a given cluster, which is what lets the workers run in HA mode. But what will happen when Rancher is down?
b
> Metallb will be for LB service
If you can already set up MetalLB then you're done? That'll make sure the services for the workers stay in HA.
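For reference, a minimal MetalLB layer-2 sketch (the pool name and the IP range are placeholders for your environment, and it assumes MetalLB ≥ 0.13 is already installed in the metallb-system namespace):

```bash
# Minimal sketch: give MetalLB a range of addresses to hand out to
# LoadBalancer Services in front of the workers (names/range are placeholders).
kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: worker-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: worker-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - worker-pool
EOF
```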
afaik, if your CP goes down, there's no way to keep up the workers for any meaningful amount of time.
> Now it seems that HA works thanks to Rancher, so all downstream k8s connections are going through
This is kinda true, but it doesn't have to be.
You can proxy through your rancher instance, but you can also talk to the kube api through the kube-vip address on the control plane.
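For example (a sketch; the 10.0.0.100 VIP is a placeholder, and it assumes the VIP was added to the apiserver's tls-san list so the certificate is valid for it):

```bash
# Talk to the downstream kube-apiserver directly via the kube-vip VIP,
# bypassing the Rancher proxy entirely (VIP address is a placeholder;
# kubeconfig copied from one of the server nodes).
kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml \
        --server https://10.0.0.100:6443 get nodes
```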
also, with this:
> For external haproxy solution you need to provide somehow during installation that haproxy IP for workers to connect via haproxy to kube-api
That's not true, but you'd have to provision it separately and have static IPs for the nodes that you provision.
Health checks would tell HAProxy to send traffic that way or not.
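Something like this, roughly (a sketch only; the control-plane IPs are placeholders and must be static, and for RKE2 you'd likely want a second frontend/backend pair for the supervisor port 9345 as well):

```bash
# Sketch of an external HAProxy fronting the kube-apiserver.
# The 'check' keyword is what pulls a dead control-plane node out of rotation.
cat <<'EOF' >> /etc/haproxy/haproxy.cfg
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend rke2-servers

backend rke2-servers
    mode tcp
    option tcp-check
    balance roundrobin
    server cp1 10.0.0.11:6443 check
    server cp2 10.0.0.12:6443 check
    server cp3 10.0.0.13:6443 check
EOF
systemctl reload haproxy
```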
The control plane and the workers should be talking on the backend private network and aren't routing their traffic through rancher at all. The rancher proxy is just how users or other external things (like fleet, or some other cicd, gitops, whatever) are going to connect to that downstream cluster.
kube-vip and MetalLB need to be on the external interface, not the internal one.
So your VIP is an external address that's routable from outside, but the nodes are going to be using the internal addresses for communication inside the cluster.
and your kube-vip VIP is there so your control plane stays HA via DNS, with something like downstream.cluster.example.com having an A record pointing to your VIP.
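So roughly (a sketch; the hostname and the 203.0.113.10 VIP are placeholders):

```bash
# One DNS A record for the downstream cluster points at the kube-vip VIP,
# and kubeconfigs reference the name instead of any single control-plane node.
dig +short downstream.cluster.example.com
# 203.0.113.10   <- the kube-vip VIP (placeholder)
kubectl config set-cluster downstream \
    --server=https://downstream.cluster.example.com:6443
```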
c
If you’re provisioning via Rancher, Rancher manages that.
You don't need to deploy kube-vip or anything. Rancher ensures that the agent's `server` address always points to a valid cluster member. And agents also watch all apiserver endpoints and run a local load-balancer that can send traffic to all server members, once they have bootstrapped.
Initially, the agent connects to the supervisor (and kube-apiserver) via the local load-balancer on port 6443. The load-balancer maintains a list of available endpoints to connect to. The default (and initially only) endpoint is seeded by the hostname from the `--server` address. Once it connects to the cluster, the agent retrieves a list of kube-apiserver addresses from the Kubernetes service endpoint list in the default namespace. Those endpoints are added to the load balancer, which then maintains stable connections to all servers in the cluster, providing a connection to the kube-apiserver that tolerates outages of individual servers.
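If it helps, you can see both pieces on a node (a sketch; the paths are the RKE2 defaults):

```bash
# The seed address the agent was installed with:
cat /etc/rancher/rke2/config.yaml              # contains the server: URL
# The full list of kube-apiserver endpoints the agent learns afterwards:
kubectl get endpoints kubernetes -n default -o wide
```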
b
@creamy-pencil-82913 cool, thank you. This is what I was looking for. This https://ranchermanager.docs.rancher.com/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters was probably the answer, but it wasn't clear to me 🙂 You've pasted a link to the k3s docs, but for rke2 (custom deployment via Rancher) it works the same way, right? @bland-article-62755 Thank you for your answer, but my question wasn't about HA for user access, it was about HA for the workers. For example, when you set up bare k8s with, say, kubeadm, you first set up something like kube-vip on the control planes, and then the workers (different nodes) point at the floating IP to ensure HA when one CP fails, right? Until @creamy-pencil-82913 explained how it works in Rancher, I was convinced that was how it had to be here too, but I couldn't see where to provide a floating IP for the workers during installation (via rancher-agent). As far as I understand now, the workers get HA thanks to the local load-balancer. I still need kube-vip anyway to provide HA for user access (so that if one CP fails, users still have a single point of access, the floating IP), right? Rancher doesn't provide any solution for that, did I understand it correctly?
@creamy-pencil-82913 I've already checked and on port 6443 there is only kube-api. I don't see any load balancer...
c
What are you looking for? As the docs say, this is talking about agents, which run a local loadbalancer.
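On a worker you can see it like this (a sketch; the on-disk state file path is an assumption based on the usual rke2 agent layout and may differ by version):

```bash
# The load-balancer is not a separate pod or service: it's built into the
# rke2 agent process, listening on the loopback interface and forwarding to
# whichever control-plane nodes are currently reachable.
ss -tlnp | grep -E ':(6443|9345)'
# The learned server endpoints are persisted on disk (path is an assumption):
cat /var/lib/rancher/rke2/agent/etc/*load-balancer*.json
```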
b
@creamy-pencil-82913 Not sure what I'm looking for. I'd like to verify and understand how it works, so for example trace the path the workers use to connect to the CP (to ensure HA). I thought I would see some service on the worker side acting as an LB for the kubelet, which then connects to kube-api and the other k8s components, instead of the kubelet connecting directly to kube-api/scheduler etc.
ok, @creamy-pencil-82913 I see it's just the rke2 service that the kubelet is connecting to (as you probably mentioned, but I didn't catch that :)), thank you very much for your explanation 🙂