# harvester
d
Hi all, I’m looking for a quick sanity check. I’ve been running a 3-node Harvester cluster in my homelab, mainly relying on Rancher for workload management and using the GUI to get Kubernetes clusters up and running without having to fully dive into the CLI. It’s worked well for my needs so far. I’ve recently started a new job at a small(ish) ISP that handles everything in-house — no AWS or cloud providers. They’re now considering adopting Harvester to replace their existing Proxmox setup. I’ll be learning Kubernetes in depth moving forward, including the CLI and deeper cluster internals, but for now I’m still learning. The current infrastructure already uses Rancher (mandated by the CTO, though our lead engineer would’ve preferred upstream Kubernetes). After discussing his reasoning, I actually agree — upstream Kubernetes might suit our use cases better. So here are my questions:
• How tightly coupled is Harvester to Rancher? I understand Rancher is used to manage Harvester, but can we run a combination of RKE2 and upstream Kubernetes clusters alongside Harvester without hitting major issues?
• Our provisioning pipeline is driven by NetBox, which acts as our source of truth and triggers OpenTofu and Ansible playbooks. My concern is that Harvester/KubeVirt may introduce dynamic changes that aren’t easily tracked or reconciled in this workflow. I’m also not sure how standard the Harvester API is when it comes to integration.
• I did come across a project that implements CAPI for Harvester, but it looks abandoned. Has CAPI been integrated into Harvester more officially since then? (I don’t believe it has, as I couldn’t find it in the docs.) Full-blown CAPI support would eliminate many of our concerns. I haven’t worked with the native Harvester API; how complete is it? Does it allow full lifecycle management?
• How viable would management be via an IaC workflow, without needing to go into the Rancher GUI?
One engineer who is open to trialing Harvester has also floated the idea of building our own virtualization platform using KubeVirt directly. Personally, I don’t think we have the resources to build and maintain such a platform from scratch — Harvester seems like the better option if it meets our needs. Ideally, we’d use Harvester as our HCI layer, but avoid vendor lock-in by running a mix of RKE2 and upstream Kubernetes clusters on top, with custom tooling where appropriate. For that to be viable long-term, though, I feel we’d need proper CAPI support.
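On the IaC point specifically: there is a Terraform/OpenTofu provider for Harvester (`harvester/harvester` on the registry). The sketch below is illustrative only; the resource names and fields are taken from memory of the provider's documented examples, so verify them against the current provider schema before relying on this:

```hcl
terraform {
  required_providers {
    harvester = {
      source = "harvester/harvester"
    }
  }
}

# Points at the kubeconfig exported from the Harvester cluster.
provider "harvester" {
  kubeconfig = "~/.kube/harvester.yaml"
}

# Download a cloud image into Harvester's image store.
resource "harvester_image" "ubuntu" {
  name         = "ubuntu-22-04"
  namespace    = "default"
  display_name = "ubuntu-22.04"
  source_type  = "download"
  url          = "https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img"
}

# A VM declared entirely in code, no GUI involved.
resource "harvester_virtualmachine" "demo" {
  name      = "demo-vm"
  namespace = "default"
  cpu       = 2
  memory    = "4Gi"

  disk {
    name       = "rootdisk"
    type       = "disk"
    size       = "10Gi"
    bus        = "virtio"
    image      = harvester_image.ubuntu.id
    boot_order = 1
  }

  network_interface {
    name = "default"
  }
}
```

Since NetBox already triggers OpenTofu runs, a provider like this slots into that pipeline the same way any other cloud provider would, with the Harvester kubeconfig as the only credential.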
t
A lot to unpack here.
• Harvester’s GUI IS a version of Rancher.
• Rancher can manage ANY k8s. You can import any k8s cluster into the cluster manager.
• Rancher is not k8s! RKE2 is k8s. You can use ANY k8s you want. Rancher, similar to CAPI, can create “downstream” RKE2 and k3s clusters.
• You can automate Rancher’s cluster creation with Fleet:

https://youtu.be/L7TSawtl97w
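For reference, the Fleet-driven automation mentioned above boils down to pointing a `GitRepo` resource at a repository containing your cluster definitions (e.g. `provisioning.cattle.io` Cluster objects). A hedged sketch, with a placeholder repo URL and path:

```yaml
# Illustrative Fleet GitRepo; repo URL and paths are placeholders.
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: downstream-clusters
  # fleet-local targets the Rancher manager cluster itself, which is
  # where the provisioning objects for downstream clusters live.
  namespace: fleet-local
spec:
  repo: https://git.example.com/infra/clusters
  branch: main
  paths:
    - clusters/
```

Once applied to the Rancher manager, Fleet reconciles whatever manifests live under that path, so cluster creation becomes a git commit rather than a GUI workflow.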

I have seen companies/groups use upstream k8s, and it required a few engineers to work on packaging, testing, and maintaining the “fork”. When I was selling RKE2/Rancher/Harvester/Longhorn, the real value was that the time and effort spent packaging upstream could be spent on other things. Honestly, building a platform from scratch is a pretty crazy idea these days, especially when official support can be purchased if needed. I would have him test Harvester and the cluster templates: https://github.com/rancherfederal/rancher-cluster-templates Everything can be automated from one git repo. Check the video. 😄
d
Thank you, and apologies for the terminology mix-up. I'm trying to absorb so much at the moment and I'm not going to lie... this job is way above my knowledge level. But they oddly seem happy with me (I have been honest with them about my shortcomings).

I'm trying to balance this engineer's viewpoints against the opinions I'd formed so far, which leaned toward using pre-made tools (Rancher, Harvester, etc.) to get the results we want. BUT this engineer is far more knowledgeable than I am, so I don't want to discount his opinion. I think he has had a couple of issues with Rancher previously and is also concerned about/not a fan of SUSE, although he is willing to trial it. To me, having an engineer who is critical of a product is a huge asset: if he changes his mind, then we are definitely on the right track.

I have just submitted a PO for three new nodes of pretty decent spec, with the safety net of being able to repurpose them for Proxmox if the Harvester idea doesn't work out as I hope. Essentially I have been given the task of exploring and suggesting a new full stack for our k8s infra, which is pretty much a dream situation to be in. I'm just trying to balance everything out, and due to my lack of hands-on experience with all these components it is hard to evaluate them, but soon we will have a couple of test environments for our engineers to play around with.
t
I have been in the tech industry for a LONG time. Too many times we have STRONG opinions about things that may no longer be valid. The software gets better. The hardware gets better. This is where playing and testing is important and fun. Play with harvester and rke2. Show him what it can do now.
p
fwiw we do ~all harvester management via API. the GUI is basically just doing the same API calls over xhr
🦜 1
d
This is music to my ears. Honestly, thank you. I know we are talking in a Rancher/Harvester community, so I do expect some level of bias, but I think I also need to accept that the engineer in question may not know the full picture at this stage. @prehistoric-morning-49258 may I ask for a rough top-level view of what this looks like? No need to go into details, but some breadcrumbs to point me in the right direction would be greatly appreciated.
t
share your code! lol
🤭 1
d
Noooo 😆 no need to share code or anything.. that would take the fun out of it, just a quick overview of the tools used (Terraform/OpenTofu vs direct API calls etc)
p
yeah we have a fairly idiosyncratic setup, but essentially harvester is just kubevirt+longhorn+tools on an immutable OS, so I don't think there's much point rolling your own. we just use the REST API directly, usually find docs/examples or just set things up via the GUI and then check what it's doing. kubevirt docs are also helpful w/ minor differences
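To make that concrete: a Harvester VM is a regular KubeVirt `VirtualMachine` object, so it can be created with `kubectl` or a plain REST call against the Harvester kubeconfig. A minimal illustrative manifest (note Harvester normally backs volumes with Longhorn PVCs; the `containerDisk` here is just to keep the example self-contained):

```yaml
# Roughly what the Harvester GUI creates behind the scenes;
# names and sizes are placeholders.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
  namespace: default
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 2
        memory:
          guest: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
      networks:
        - name: default
          pod: {}
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
```

Apply it with `kubectl apply -f vm.yaml --kubeconfig harvester.yaml`, then inspect it with `kubectl get virtualmachines`, exactly like any other Kubernetes resource.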
there's also a lot of features we don't use, but afaict it's basically all just kubernetes. also by ~v1.6 the roadmap has most of our wishlist, ymmv: https://github.com/harvester/harvester/wiki/Roadmap#harvester-v160-july-2025
e.g. here's actually an example we've used: `kernelBoot` for VMs, which afaik wasn't documented for harvester anywhere but works fine via the API as an underlying kubevirt feature https://kubevirt.io/user-guide/user_workloads/boot_from_external_source/
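For anyone following along, the `kernelBoot` feature mentioned here sits under the VirtualMachine's `firmware` stanza, per the linked KubeVirt doc. A rough fragment (the artifact image and paths are placeholders):

```yaml
# Fragment of a VirtualMachine spec booting from an external
# kernel/initrd shipped in a container image (placeholder image).
spec:
  template:
    spec:
      domain:
        firmware:
          kernelBoot:
            kernelArgs: console=ttyS0
            container:
              image: registry.example.com/kernel-artifacts:latest
              kernelPath: /boot/vmlinuz
              initrdPath: /boot/initrd.img
```

Because Harvester passes the spec straight through to KubeVirt, features like this work over the API even when the Harvester GUI has no knob for them.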
d
@prehistoric-morning-49258 you are a wonderful human, this is pretty much the confidence boost I needed. I think I may suggest we separate the VM/HCI layer from the workload layer internally, as their current workflows for cluster management and workloads are in one repo/pipeline.
🦜 2