# harvester
a
This message was deleted.
b
I don't feel authoritative enough to answer, but I think it's still "supported". They don't really care where your cluster runs; it just needs a dedicated cluster to run Rancher. You'd have to set it up manually in Harvester. You'd need to be sure to add labels or something to keep the VMs on different nodes for HA.
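Something like VM anti-affinity on a shared label would do it - a rough sketch of what the underlying KubeVirt VirtualMachine could look like (names, namespace and sizes are made up, and Harvester normally generates most of this from the UI anyway):
```yaml
# Illustrative only: VMs sharing the app=rancher-mgmt label are kept on
# different Harvester nodes via pod anti-affinity.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rancher-node-1          # hypothetical name
  namespace: rancher-vms        # hypothetical namespace
spec:
  running: true
  template:
    metadata:
      labels:
        app: rancher-mgmt       # shared label across all Rancher VMs
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: rancher-mgmt
              topologyKey: kubernetes.io/hostname   # i.e. at most one per host
      domain:
        cpu:
          cores: 4
        resources:
          requests:
            memory: 8Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: rancher-node-1-root   # hypothetical PVC
```
Repeat for rancher-node-2/3 with the same label and the scheduler will refuse to co-locate them.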
👍 1
w
It's not unsupported as such, but perhaps unwise - if you're running Rancher on a VM (whether as a Docker container or a Kubernetes cluster) within the Harvester cluster you're managing via Rancher, what happens if there's a problem? Fine for development/testing, but not production. The Harvester docs say to install Rancher on a separate server - see https://docs.harvesterhci.io/v1.3/rancher/rancher-integration#deploying-rancher-server An alternative is to install the rancher-vcluster addon to run Rancher on the Harvester cluster - see https://docs.harvesterhci.io/v1.3/advanced/addons/rancher-vcluster
n
Understood. If there's a problem with Rancher, the underlying hosts and Harvester itself are still accessible though, no?
w
yes, if there's a problem with Rancher, Harvester will be fine. I was thinking the other way round - what if there's a problem with Harvester? (upgrades come to mind 😱)
n
Currently we're running Rancher on bare metal, which hasn't been a great experience.
Do you know if Harvester supports rolling upgrades? If so, I'm guessing it'd be a matter of coordinating the scheduling/migration of the Rancher VM(s) so they're not on hosts that are getting upgraded. But yeah, good points indeed.
w
Yes, that's how Harvester upgrades work - it stores the target version's image, then upgrades each node one by one ... well, that's the idea 😉 theoretically moving workloads around as it goes ...
Currently the upgrade is started manually, but then it sorts itself out.
👍 1
w
We run our Rancher on a bank of 5 low-power machines (11W each) set up in HA. They were cheap and small and we’ve had no problem with them - highly recommend this approach. We even used one of them to run a DNSmasq service and provide DNS for the private network, giving some separation from the cluster - works really well for us and was reasonably straightforward to install. As Simon points out, if things go wrong... trust me, they can and do - the fewer layers of abstraction you have, the better things will run 😉
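Roughly this sort of thing in dnsmasq.conf, if it helps - all names and addresses below are made up, not our real config:
```
# Serve DNS for the private cluster network from one of the small boxes,
# forward everything else upstream (illustrative values only).
interface=eth1                              # listen on the private network side
domain-needed                               # don't forward plain hostnames
bogus-priv                                  # don't forward private reverse lookups
server=192.168.1.1                          # upstream resolver
address=/rancher.lab.internal/10.10.0.10    # Rancher HA endpoint
address=/apps.lab.internal/10.10.0.20       # this domain and subdomains -> LB IP
```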
n
Cool, rke2/k3s under the hood? For DNS we’ve been using CoreDNS with the k8s_gateway plugin, so Ingress/LoadBalancer Service resources in the cluster resolve externally via a forward lookup. No DNS configuration needed besides the forwarding - just define the k8s resources and they’ll be added.
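For reference, the Corefile is roughly this shape - assuming a CoreDNS build that includes the external k8s_gateway plugin, and the zone name here is just an example:
```
# Answer queries for an example delegated zone from the cluster's
# Ingress/LoadBalancer resources; forward everything else upstream.
k8s.example.org:53 {
    k8s_gateway k8s.example.org
    log
    errors
}
.:53 {
    forward . /etc/resolv.conf
    cache 30
    errors
}
```
Then the wider DNS just forwards/delegates that zone to this CoreDNS instance.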
w
I’ll look at that plug-in - yep, rke2/k3s for the mini cluster. DNS in our context was to do with the wider network - so we can point domains to the load balancer IP locally, or to specific VMs - the cluster’s network is isolated from our wider network - we’ve used MetalLB in cluster. Lots of ways you can do this however - good luck 😉