# rke2
c
Depends on how you’re connecting to it: in-cluster, out-of-cluster, supervisor components? There are several different mechanisms in play.
What specifically are you asking about?
l
Out of cluster. I’m not sure about supervisor components. I know kube-vip can provide an HA IP address for the apiserver, but I’m not sure what RKE2 uses under the hood for the same functionality.
c
This falls under the “fixed registration address” in the docs. https://docs.rke2.io/install/ha#1-configure-the-fixed-registration-address
If you’re connecting from within the cluster to the Kubernetes in-cluster endpoint (the kubernetes.default service), HA is handled by kube-proxy. For supervisor components and the kubelet, RKE2 runs a local load-balancer that routes connections to available server nodes. If you’re connecting your own clients from outside the cluster, then you are responsible for providing your own HA solution. You could just point it at one of the servers, or use a DNS alias, or a load-balancer.
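For reference, a minimal sketch of the fixed-registration-address setup from those docs, assuming a placeholder DNS alias rke2.example.com pointed at the servers (the alias and token are placeholders, not anything rke2 provides):

```bash
# Joining servers register through the fixed address rather than a
# specific node; rke2.example.com and the token are placeholders.
cat <<'EOF' > /etc/rancher/rke2/config.yaml
server: https://rke2.example.com:9345
token: <shared-cluster-token>
tls-san:
  - rke2.example.com   # include the alias in the server certificate
EOF
systemctl restart rke2-server.service
```

The same alias can then be used as the server address in an out-of-cluster kubeconfig (apiserver port 6443), with the DNS record or load-balancer providing the failover.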
l
What if I’m using Cilium? Does Cilium handle kube-apiserver HA failover, or does it require kube-vip?
c
I’m not really sure what you’re asking. Cilium is the CNI, which handles communication between pods; it’s a layer down from the apiserver.
l
By changing the default kubernetes Service type from ClusterIP to LoadBalancer. Would that work?
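To be concrete, something like this (just a sketch of the change I mean):

```bash
# Flip the built-in kubernetes Service from ClusterIP to LoadBalancer
# and check whether the new type sticks.
kubectl patch svc kubernetes -n default -p '{"spec":{"type":"LoadBalancer"}}'
kubectl get svc kubernetes -n default
```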
c
You can change that, but the apiserver will reset it back to ClusterIP when it starts. This is core Kubernetes logic that cannot be disabled.
l
Really? Where is the logic that resets it back to ClusterIP on startup?
I just toggled that via `kubectl edit` and restarted rke2:
```bash
systemctl restart rke2-server.service
```
and it stayed LoadBalancer.
c
When the apiserver restarts, not when rke2 restarts.
Restarting rke2 doesn’t restart all the pods.
That’s my understanding of the intended behavior, at least.
l
OK, I just tried deleting the apiserver pod; it came back, and the default kubernetes Service is still showing as LoadBalancer.
This is a single-host test environment with one apiserver pod.
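Roughly what I ran to check; I’m assuming here that the static apiserver pod carries a `component=kube-apiserver` label:

```bash
# Delete the apiserver pod, watch it come back, and confirm the
# Service type survived (the component label is an assumption).
kubectl -n kube-system delete pod -l component=kube-apiserver
kubectl -n kube-system get pods -l component=kube-apiserver
kubectl get svc kubernetes -o jsonpath='{.spec.type}'
```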
c
If it works for you, then great. It’s not something I see people doing much; my understanding was that the kubernetes Service is managed by the apiserver.
It looks like at some point the apiserver stopped resetting the service type; I guess it just repairs the endpoints and ports now.
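If you want to see what it still reconciles, something like:

```bash
# The apiserver repairs the kubernetes Service's endpoints and ports,
# even though it no longer resets the type.
kubectl get endpoints kubernetes
kubectl get svc kubernetes -o jsonpath='{.spec.ports}'
```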
l
I doubted it would work, and I’m trying to clarify. Someone else mentioned this would work, and I don’t think so.
c
The code to do it is still there, but it doesn’t appear to be used any longer.
l
OK. I was hoping it would reset back to ClusterIP.
c
Looks like that behavior was removed in 1.14 without being called out in the changelog: https://github.com/kubernetes/kubernetes/pull/74668