# harvester
c
see rancher as the management layer for any clusters (plural), including harvester
would you really run your management interface on the thing that you are managing?
I wouldn't.
r
I understand this point and agree to a degree, but from the documentation I get the impression that it is not used to "manage" harvester as such (afaik harvester has an "integrated" rancher for that) but rather the kubernetes clusters (VMs) deployed to harvester (which I assume also means managing the harvester resources those clusters use). The answer to this question is more complicated than "I wouldn't". The physical servers we have are major overkill for running "just" rancher, but maybe I will get a mini PC or something just for that then.
c
to me it is the same thought process of not running rancher on a kubernetes cluster that rancher is itself managing.
instead, for our lab environment, we decided to set up rancher on our old (legacy) vcenter infrastructure: we have a few VMs running there, and rancher runs on top of a simple k3s cluster (rough sketch below). but I agree with you, it sounds crazy to have just some physical servers (even small ones) running rancher exclusively.
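for anyone wanting to reproduce that kind of lab setup, the install is roughly the following (a minimal sketch only, assuming a fresh VM, default k3s settings, and Rancher's standard Helm chart; `rancher.example.lab` and the bootstrap password are placeholders, check the Rancher docs for the versions you actually run):

```sh
# on each lab VM (one is enough for a throwaway lab): install k3s
curl -sfL https://get.k3s.io | sh -

# from a workstation with kubectl/helm pointed at that cluster:
helm repo add jetstack https://charts.jetstack.io
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo update

# cert-manager is required for Rancher's default self-signed TLS
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true

# Rancher itself; hostname and bootstrapPassword are placeholders
helm install rancher rancher-latest/rancher \
  --namespace cattle-system --create-namespace \
  --set hostname=rancher.example.lab \
  --set bootstrapPassword=changeme
```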
r
Thanks for your input.
n
@rhythmic-article-81903 how are you getting on with the cluster add-on? I am in a similar position, but have decided to go a different route... I've set up a server running rke2 and deployed rancher on that, currently as a VM on my old proxmox server, but I was thinking of moving it to a fairly low-spec dell r230. The plan would then be to create some more nodes on the harvester cluster and join them to the physical rke2 server to provide load balancing and some form of redundancy (see the sketch below). That cluster can then be used as a general management/auxiliary-services kubernetes cluster, so it wouldn't need to host just rancher; the harvester cluster can be used for the main workload, but if that cluster went down I would still be able to access the rancher management layer.
Would be interested to hear other people's thoughts on this though!!
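Roughly what that join would look like (a sketch only, assuming default rke2 settings; `<physical-node-ip>` and the token are placeholders taken from your own first server):

```sh
# on the physical box: install and start the rke2 server
curl -sfL https://get.rke2.io | sh -
systemctl enable --now rke2-server.service

# note the join token for the other nodes
cat /var/lib/rancher/rke2/server/node-token

# on each extra node (e.g. VMs created on Harvester): point it at the first server
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="server" sh -
mkdir -p /etc/rancher/rke2
cat <<EOF > /etc/rancher/rke2/config.yaml
server: https://<physical-node-ip>:9345
token: <token-from-above>
EOF
systemctl enable --now rke2-server.service
```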
a
We are using Rancher and Harvester in our lab environment. We run two Rancher Manager instances (a "test" and a "prod"... bearing in mind that this is a lab environment). The "test" instance is a 3-node cluster on Harvester using openSUSE as the OS, K3s as the runtime, and Cilium as the CNI. The "prod" instance is a 5-node cluster on VMware vCenter using Ubuntu 22.04 as the OS, RKE2 as the runtime, and Canal as the CNI. We only have a single Harvester instance.

What we discovered is that "either" our test instance or our prod instance of Rancher Manager could deploy K8s clusters to Harvester, but not "both". The Rancher Manager to Harvester relationship is weirdly one-to-one. This makes sense for "managing" the Harvester instance (good for authentication and permissions!), but not so good for "deploying" K8s workloads to Harvester as a target.

Ideally, we would use the Rancher Manager Harvester plugin just to manage the operations and health of the cluster, but use external test and prod Rancher Manager instances (which are dedicated to K8s deployment and management) to target the single Harvester cluster for deployments. Sadly that scenario (a single Harvester targeted by multiple Rancher Managers for K8s cluster deployment) is not available yet.
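If anyone wants to see that one-to-one binding on their own setup, a quick check looks roughly like this (a sketch only; it assumes kubeconfig access to both the Rancher local cluster and the Harvester cluster, and the usual agent naming Rancher uses today):

```sh
# on the Rancher local cluster: list the downstream clusters Rancher knows about
# (the imported Harvester shows up here on exactly one Rancher Manager)
kubectl get clusters.management.cattle.io

# on the Harvester cluster: the registration agent points at a single Rancher URL
kubectl -n cattle-system describe deployment cattle-cluster-agent | grep CATTLE_SERVER
```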