# k3s
f
I've looked at kube-vip, which also provides a ServiceLB. We don't need that at the moment because we run MetalLB (though maybe it's worth considering converting).
I've also considered an haproxy / vrrp contraption outside of k3s, but I'm not sure that's really worthwhile.
Background: the cluster pretty much "just works" after it's bootstrapped. Once bootstrapped, the agents seem to use a Service/ClusterIP to reach the control plane, or at least seem to know about all the nodes in the control plane. There are a couple of scenarios where it'd be nice to have a VIP:
• We've had our control nodes get into a slightly screwed-up state, where they temporarily lost quorum and were trying to hit the first node via "--server FIRST_NODE" to re-establish it. That first node was offline, and our workaround was to change the remaining nodes' startup flags to "--server SECOND_NODE", which seemed to remedy the situation and let the cluster re-establish quorum. I'm wondering if it would actually have been better to have just used "--server CONTROL_VIP".
• We have some k8s clients where we'd like to hit a single VIP or DNS name to get to the cluster. If we have a problem like the one above, I'd like the clients to keep hitting the same DNS name and land on a working control node.
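For context, a minimal sketch of what pointing nodes at a control-plane VIP could look like via the k3s config file (the VIP address below is a made-up example, not from this thread):

```yaml
# /etc/rancher/k3s/config.yaml on agents and joining servers
# 10.0.0.100 is a hypothetical control-plane VIP
server: https://10.0.0.100:6443
token: <cluster-token>
```

This is equivalent to passing `--server https://10.0.0.100:6443` on the command line, so a node restart would always chase the VIP rather than one specific control node.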
b
We're running on bare metal, so for the machines that are hosting our rancher instances, we're using HAProxy with keepalived to float the proxy/VIP outside of the cluster.
👍 1
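For reference, a rough keepalived sketch of this kind of setup (the interface name, router id, and VIP below are placeholders, not the actual config):

```
# /etc/keepalived/keepalived.conf on the HAProxy hosts
# eth0, 51, and 10.0.0.100 are hypothetical values
vrrp_instance haproxy_vip {
    state MASTER           # BACKUP on the standby host
    interface eth0
    virtual_router_id 51
    priority 100           # lower on the standby
    advert_int 1
    virtual_ipaddress {
        10.0.0.100/24
    }
}
```

keepalived moves the VIP to whichever host wins the VRRP election, so the HAProxy endpoint stays reachable if one proxy host goes down.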
For downstream cluster (in Harvester mostly) we're using kube-vip in arp mode. Just had to change the cloud provider from Harvester to the default RKE Embedded to allow it to install.
The Harvester Cloud Provider has its own version to get the ServiceLB thing working with their own proxy, but it's a slimmed-down version, and the VIP for the CP doesn't work with the image they ship.
But we wanted a vip so we could keep a cluster up as HA and have vip for the DNS record.
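For anyone wanting to try this, a hedged sketch of generating a kube-vip manifest in ARP mode for the control plane (the interface and address are placeholders; check the kube-vip docs for the exact flags in your version):

```shell
# Generate a kube-vip DaemonSet manifest in ARP mode for the control plane.
# eth0 and 10.0.0.100 are hypothetical values.
kube-vip manifest daemonset \
  --interface eth0 \
  --address 10.0.0.100 \
  --inCluster \
  --arp \
  --controlplane \
  --leaderElection > kube-vip-ds.yaml
```

The resulting manifest can then be applied to the cluster (e.g. `kubectl apply -f kube-vip-ds.yaml`).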
f
That makes sense. When you talk about downstream clusters, would the upstream be a Harvester cluster that runs VMs, and the downstream cluster be a k8s cluster running the standard Kubernetes fare (pods, containers, etc.) in those VMs? I've only seen Harvester's website, never deployed it, so I wasn't sure if it did something different to nest a cluster.
b
Downstream meaning managed by a Rancher instance.
f
Ah ok
So a harvester cluster managed by rancher?
b
So Harvester is a hypervisor that runs on top of kubernetes, but can also be managed by Rancher. So it's a little confusing because you can deploy a cluster as VMs in harvester, which is also running kubernetes.
Harvester clusters can be managed by Rancher, yes.
You can also keep them separated.
f
Yeah, I was just wondering which configuration you were referring to
b
Harvester has its own thing for moving a VIP around; when you set up the initial node for a cluster, the installer asks you to provide a VIP for it.
I'm not sure what it actually uses under the hood, but I suspect it's kube-vip.
But yeah I was more meaning using kube-vip inside a k3s cluster running on VMs.
👍 1