# harvester
s
1. Yes, Harvester is production ready.
2. Harvester would take the place of vSphere; it is a hypervisor in itself. You could use Rancher to provision nodes dynamically, or create something yourself.
3. It is free, but has a paid support model if you want enterprise support.
🙇 1
l
@swift-eve-48927 Great, thanks for your help. Are there any videos or docs about how Harvester integrates with vSphere? Thanks again.
m
To "just" provision on vSphere you don't need Harvester; Rancher can do this with the vSphere cloud provider. But, as Andrew said, you could use Harvester on your bare-metal servers instead of vSphere and then use Rancher to provision your VMs directly on Harvester (either as Kubernetes nodes or regular VM workloads). To create clusters dynamically you can then again use Rancher with the Harvester cloud provider.
I think it is even technically possible to set up a nested-virtualisation Harvester cluster on top of vSphere, but that of course is not officially supported and would also take a significant performance loss because of the overhead of nested virtualisation. I have used it briefly as a PoC before switching our vSphere hypervisors over to Harvester. I would say that Harvester is quite a young project, and I have encountered a couple of small- to medium-impact bugs myself, but I have also received help on some of them through the community here or the GitHub issues.
l
@miniature-lock-53926 Great, thanks for your help, quite a lot of useful information. Would nested virtualization hurt performance significantly? I agree with you that Harvester is quite a young project, so I decided to use it on top of vSphere. One headache is that it seems quite hard to enable both nested virtualization and PCI passthrough. ESXi supports much more mature features for VM and network management, e.g. BIOS tuning and vSwitch configuration, so we could not drop it.
m
Well, actually PCI passthrough is still marked as experimental, and there is an ongoing issue I personally have with passing through NVMes on our hardware: https://github.com/harvester/pcidevices/issues/57
But apart from that, as long as you pass the devices (GPUs or drives? or something else?) through to the nested Harvester VMs, I think they should be picked up by the pci-device-controller, though I have not tested that scenario. As to the performance loss, I can only say that in my PoC, provisioning on the nested Harvester VM from Rancher using the cloud provider was still faster than provisioning with the vSphere cloud provider. That might have been because of the lack of a proper storage backend on our vSphere cluster, compared to the built-in Longhorn SAN that Harvester brings for free; getting vSAN licenses for HCI bare-metal nodes, which you would need when using vSphere, is quite expensive.
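Not tested on my side, but if you want to see what the controller has picked up, something like this should work. A rough sketch, assuming the pcidevices add-on is enabled and that the PCIDevice / PCIDeviceClaim resources can be queried under those names (the resource names are my assumption):

```python
# Hedged sketch: list what the pcidevices controller has discovered on each node,
# assuming the pcidevices add-on is enabled and exposes PCIDevice / PCIDeviceClaim
# custom resources under these resource names (an assumption, not verified here).
import subprocess

def kubectl(*args: str) -> str:
    """Run kubectl against the Harvester cluster's kubeconfig and return stdout."""
    result = subprocess.run(
        ["kubectl", *args],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Devices the controller detected on the nodes (GPUs, NVMe drives, NICs, ...).
print(kubectl("get", "pcidevices", "-o", "wide"))

# Claims, i.e. devices that have been reserved for passthrough to a VM.
print(kubectl("get", "pcideviceclaims"))
```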
l
@miniature-lock-53926 So combining ESXi with Harvester could be a good idea for the current stage. E.g. the MinIO cluster stays outside Harvester and joins the Rancher k8s cluster directly as a VM node on ESXi, where it can use disk passthrough.
m
so you mean no vCenter in the mix?
l
Yes, vCenter takes control of managing the virtual machines used for installing Harvester nodes and those that need PCI passthrough, while Harvester is responsible for creating k8s nodes using the resources that vCenter gives it.
m
Ah sorry, I was confused... yes, (big) Harvester VMs on top of ESXi nodes, with drives passed through to the Harvester VMs, which then span the Longhorn cluster between the Harvester VMs, should be possible.
You can take that even further, because you could install the Harvester nodes on the vSphere VMs using iPXE. But I cannot say how much of a performance loss you will suffer.
l
vSAN is expensive, so this way we could take vSAN out and use the local datastore of every ESXi node.
m
ah yes that's what I was thinking about... because yes that is true
l
Yes, performance loss will be a problem: ESXi -> Harvester -> k8s node. We are using KubeVirt directly on top of RKE2, i.e. ESXi -> k8s node -> virtual machine. All k8s nodes would suffer a performance loss because of an additional layer of virtualization. I don't know if there is a performance benchmark report about this.
Using Harvester to provision VMs is faster; that's a good reason to add that layer if we do not have vSAN.
👍 1
m
If you want to do HCI directly on vSphere but are not willing to pay for vSAN (that's what we tried first), you can do that. But one big problem then is that you can use the vSphere cloud provider, yet you have to decide again where to put your new nodes: when spinning up new nodes you cannot tell Rancher to use a shared storage system for the root VMs, so you always have to decide which local datastore to use and also where the new VMs have to run... you don't actually get the automatic scheduling and such.
But when you do the nested virtualisation you can sidestep that problem, because then you use the Harvester cloud provider and the built-in Longhorn SAN, at the price of the performance hit of nested virt.
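For context on the Longhorn side: the built-in storage is consumed like any other CSI StorageClass. A minimal sketch, assuming the stock Longhorn CSI driver; the class name and parameter values below are illustrative, not Harvester's shipped defaults:

```python
# Hedged sketch of how the built-in Longhorn storage is exposed: a StorageClass
# backed by the Longhorn CSI driver. The class name and parameter values are
# illustrative assumptions, not Harvester's shipped defaults.
import subprocess
import textwrap

storage_class = textwrap.dedent("""\
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: longhorn-replicated        # hypothetical name
    provisioner: driver.longhorn.io    # Longhorn CSI driver
    allowVolumeExpansion: true
    parameters:
      numberOfReplicas: "3"            # replicas spread across Harvester nodes
      staleReplicaTimeout: "30"
""")

subprocess.run(
    ["kubectl", "apply", "-f", "-"],
    input=storage_class, text=True, check=True,
)
```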
exactly
One additional benefit is that you can then still host the high-performance workloads, like databases or your MinIO cluster, directly on ESXi...
I just read your post about KubeVirt. Do I understand you correctly that you run VMs on top of k8s using KubeVirt? Because Harvester itself is a K3s cluster with KubeVirt as the virtualisation driver, so you could run those VM workloads directly from Harvester, nested or otherwise.
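To make that concrete: under the hood a Harvester VM is a KubeVirt VirtualMachine object. A minimal sketch of such a VM workload, assuming an already-created root-disk PVC; names and sizes are placeholders:

```python
# Hedged sketch: a Harvester/KubeVirt VM is ultimately a kubevirt.io VirtualMachine
# resource. Name, sizing and the root-disk PVC below are illustrative assumptions.
import subprocess
import textwrap

vm = textwrap.dedent("""\
    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: legacy-app        # hypothetical VM name
      namespace: default
    spec:
      running: true
      template:
        metadata:
          labels:
            app: legacy-app
        spec:
          domain:
            cpu:
              cores: 2
            resources:
              requests:
                memory: 4Gi
            devices:
              disks:
                - name: rootdisk
                  disk:
                    bus: virtio
          volumes:
            - name: rootdisk
              persistentVolumeClaim:
                claimName: legacy-app-root   # pre-created root volume (assumption)
""")

subprocess.run(["kubectl", "apply", "-f", "-"], input=vm, text=True, check=True)
```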
l
Thanks for your reply. You mentioned that Harvester is a K3s cluster, so is it possible to use this K3s cluster directly to deploy applications? I found it recommended that users integrate with Rancher and provision clusters on top of it.
m
Yeah, it is possible, although it is not supported and the docs discourage doing that. I would not use the internal K3s cluster for more than debugging Harvester itself, and MAYBE to host some small cluster-internal services that you might want to integrate, like an external secrets provider or the like. You should set up a management HA Rancher cluster on top of the Harvester nodes, then import the Harvester cluster into this Rancher and use that as your cluster orchestration. Everything else would also not be supported AFAIK.
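For reference, that management Rancher install is a standard Helm deployment. A rough sketch, assuming cert-manager is already installed for the default certificate handling; the hostname and replica count are placeholders:

```python
# Hedged sketch of installing Rancher onto the management cluster with Helm
# (standard chart repo; hostname/replicas are placeholders, and cert-manager is
# assumed to already be installed for the default certificate handling).
import subprocess

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

run("helm", "repo", "add", "rancher-latest",
    "https://releases.rancher.com/server-charts/latest")
run("helm", "repo", "update")
run("kubectl", "create", "namespace", "cattle-system")
run("helm", "install", "rancher", "rancher-latest/rancher",
    "--namespace", "cattle-system",
    "--set", "hostname=rancher.example.internal",   # placeholder hostname
    "--set", "replicas=3")                          # HA: one replica per node
```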
l
Thanks for your reply. Setting up a dedicated Rancher cluster would be a great idea for achieving HA, rather than setting it up in Docker. What I am confused about is the concept of HCI, where I think virtual machines should be combined more tightly with Kubernetes pods. For example, in the KubeVirt use case, virtual machines can be treated as a pod / service, so some old and nearly deprecated programs can be migrated into the Kubernetes cluster with less modification. KubeVirt turns virtual machines into pods, so whether something is a virtual machine or a pod, it can be managed under the same abstraction, which makes management easier. That is what I think is the problem HCI wants to deal with (I am not sure if I understand it correctly). However, in Harvester, virtual machines are outside the Kubernetes cluster that is in use, so I wonder how to use some virtual machines to serve old programs that provide services for Kubernetes pods.
m
I mean, you could also just put the VMs in the same VM networks as your k8s nodes and then use e.g. ExternalName services to provide a way to reach the services running on the KubeVirt/Harvester VMs from inside downstream k8s clusters, I'd think.
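A minimal sketch of that ExternalName idea, assuming the VM's service is reachable under a DNS name from the downstream cluster (the name below is a placeholder):

```python
# Hedged sketch of the ExternalName approach: a Service inside a downstream
# cluster that resolves to the DNS name of a service running on a Harvester/
# KubeVirt VM (the DNS name below is a made-up placeholder).
import subprocess
import textwrap

svc = textwrap.dedent("""\
    apiVersion: v1
    kind: Service
    metadata:
      name: legacy-db
      namespace: default
    spec:
      type: ExternalName
      externalName: legacy-db.vmnet.example.internal   # DNS name of the VM (placeholder)
""")

subprocess.run(["kubectl", "apply", "-f", "-"], input=svc, text=True, check=True)
```

Pods can then talk to `legacy-db` as if it were an in-cluster service; if the VM only has an IP and no resolvable name, a selector-less Service plus a manually managed Endpoints object would be the usual alternative.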
l
I haven't tried that practice. k8s makes operation much simpler; that is to say, pods can run and restart anywhere, and so can services, and k8s will take care of that migration. What I don't know is: if a VM goes down, will Harvester bring up a new one on another host and keep the IP unchanged?
m
Yeah but Harvester will do that too for those VM Workloads that are running directly on the harvester/k3s/kubevirt stack
I am generally just trying to run the workloads with the least amount of virtualization overhead possible, but I mean you are of course free to set up another instance of KubeVirt on top of k8s on top of KubeVirt on top of a Harvester VM on top of ESXi bare metal... it just seems a little too much virtualiception for me 😉
l
Yep. To fulfill my needs, vSphere <-> k8s <-> KubeVirt VM is better. I want to migrate this architecture to Harvester, but I found it does not fit my case well.