Hi all, I’m looking for a quick sanity check.
I’ve been running a 3-node Harvester cluster in my homelab, mainly relying on Rancher for workload management and using the GUI to get Kubernetes clusters up and running without having to fully dive into the CLI. It’s worked well for my needs so far.
I’ve recently started a new job at a small(ish) ISP that handles everything in-house — no AWS or cloud providers. They’re now considering adopting Harvester to replace their existing Proxmox setup. I’ll be learning Kubernetes in depth moving forward, including the CLI and deeper cluster internals, but for now I’m still getting up to speed.
The current infrastructure already uses Rancher (mandated by the CTO, though our lead engineer would’ve preferred upstream Kubernetes). After discussing his reasoning, I actually agree — upstream Kubernetes might suit our use cases better.
So here are my questions:
• How tightly coupled is Harvester to Rancher? I understand Rancher is used to manage Harvester, but are we able to use a combination of RKE2 and upstream Kubernetes clusters alongside Harvester without running into major issues?
• Our provisioning pipeline is driven by NetBox, which acts as our source of truth and triggers OpenTofu and Ansible playbooks. My concern is that Harvester/KubeVirt may introduce dynamic changes that aren’t easily tracked or reconciled in this workflow. Additionally, I’m not sure how standard the Harvester API is when it comes to integration.
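To make the drift concern concrete, here’s a toy sketch of the kind of reconciliation check we’d want to be able to run: compare the VM inventory NetBox says should exist against what the hypervisor actually reports. Everything here is hard-coded and hypothetical; in practice “desired” would come from the NetBox API and “observed” from whatever Harvester/KubeVirt exposes.

```python
# Toy drift check: NetBox-desired inventory vs. observed hypervisor state.
# All data is hard-coded for illustration; the real sources would be the
# NetBox API (desired) and the Harvester/KubeVirt API (observed).

def find_drift(desired: dict, observed: dict) -> dict:
    """Return VM names that are missing, unexpected, or changed."""
    missing = sorted(set(desired) - set(observed))       # in NetBox, not on the cluster
    unexpected = sorted(set(observed) - set(desired))    # on the cluster, not in NetBox
    changed = sorted(
        name for name in set(desired) & set(observed)
        if desired[name] != observed[name]               # spec differs from source of truth
    )
    return {"missing": missing, "unexpected": unexpected, "changed": changed}

desired = {
    "web-01": {"cpu": 4, "memory_gb": 8},
    "db-01": {"cpu": 8, "memory_gb": 32},
}
observed = {
    "web-01": {"cpu": 4, "memory_gb": 16},     # resized outside the IaC pipeline
    "scratch-vm": {"cpu": 2, "memory_gb": 4},  # created ad hoc via the GUI
}

print(find_drift(desired, observed))
```

My worry is exactly the “changed” and “unexpected” buckets above: if Harvester or the GUI makes those easy to produce, our NetBox → OpenTofu/Ansible loop has to detect and reconcile them somehow.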
• I did come across a project that implements CAPI for Harvester, but it looks abandoned. Has CAPI been integrated into Harvester more officially since then? (I don’t believe it has, as I couldn’t find it in the docs.) Full-blown CAPI support would eliminate many of our concerns. I also haven’t worked with the native Harvester API; how complete is it? Does it allow full lifecycle management?
• How viable would management be via an IaC workflow, without needing to go into the Rancher GUI?
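For what it’s worth, my rough mental model of a “no-GUI” workflow is based on the fact that Harvester VMs are KubeVirt `VirtualMachine` custom resources (`kubevirt.io/v1`), so in principle we could template manifests from NetBox data and apply them with `kubectl` or a Kubernetes client (or use Harvester’s Terraform provider). A minimal, hedged sketch of that templating step, with made-up names and sizes and the disk/volume details elided:

```python
import json

# Sketch only: build a minimal KubeVirt VirtualMachine manifest from
# inventory data (e.g. pulled from NetBox). Field names follow the
# kubevirt.io/v1 VirtualMachine schema; name/namespace/sizes are
# placeholders, and disks/volumes are intentionally left empty.

def vm_manifest(name: str, cores: int, memory: str) -> dict:
    return {
        "apiVersion": "kubevirt.io/v1",
        "kind": "VirtualMachine",
        "metadata": {"name": name, "namespace": "default"},
        "spec": {
            "runStrategy": "Always",  # keep the VM running
            "template": {
                "spec": {
                    "domain": {
                        "cpu": {"cores": cores},
                        "resources": {"requests": {"memory": memory}},
                        "devices": {"disks": []},  # disks elided in this sketch
                    },
                    "volumes": [],  # volumes elided in this sketch
                }
            },
        },
    }

manifest = vm_manifest("web-01", cores=4, memory="8Gi")
print(json.dumps(manifest, indent=2))  # could be piped into `kubectl apply -f -`
```

Whether that’s actually a supported/stable way to drive Harvester (vs. something the Rancher integration assumes it owns) is exactly what I’m trying to find out.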
One engineer who is open to trialing Harvester has also floated the idea of building our own virtualization platform using KubeVirt directly. Personally, I don’t think we have the resources to build and maintain such a platform from scratch — Harvester seems like the better option if it meets our needs.
Ideally, we’d use Harvester as our HCI layer, but avoid vendor lock-in by using a mix of RKE2 and upstream Kubernetes clusters on top, with custom tooling where appropriate. For that to be viable long-term, though, I feel we’d need proper CAPI support.