# elemental
w
Hi Chris! You came to the right place! I would say yes, Elemental is made for bare metal and will allow you to manage the host OS from Rancher in a declarative way!
n
Brilliant! I have not yet sat down with it to give it a go, but could you point me in the right direction to start off? Would I need a pre-existing/external Rancher setup to deploy/seed the main node on bare metal, or can I simply install the OS via an ISO and then set up RKE2 and Rancher on top?
w
You will need an existing Rancher for this; we have some quickstart documentation to get up and running. From the existing Rancher you basically set things up so you can download an ISO that, when installed, registers the node back to Rancher. These registered nodes can then be added to clusters in the dashboard, or automatically using node labels 👍
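For reference, a minimal sketch of the registration object behind that ISO, loosely based on the Elemental quickstart; the name, namespace, and labels here are illustrative, not required values:

```yaml
# Sketch of an Elemental MachineRegistration, assuming the elemental-operator
# is already installed in the Rancher (local) cluster. Name and labels are
# illustrative.
apiVersion: elemental.cattle.io/v1beta1
kind: MachineRegistration
metadata:
  name: bare-metal-nodes          # hypothetical name
  namespace: fleet-default
spec:
  # SMBIOS templating: each booted node registers under a unique machine name
  machineName: "bm-${System Information/UUID}"
  machineInventoryLabels:
    location: homelab             # example label; clusters can later select
                                  # inventories by labels like this
```

Rancher exposes a registration endpoint (and a downloadable ISO) for this object; a node booted from that ISO calls home and shows up as a MachineInventory that can then be assigned to clusters.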
It would be cool to be able to run Rancher on Elemental as well, and we have a ticket for it, but it hasn't been prioritized yet..
n
Ah, I see. So I wouldn't be able to make a temporary Rancher cluster, use it to create a new cluster with Elemental, install RKE2 and Rancher (on my BM server), transfer control to the new cluster, and then delete the previous seed Rancher? Again, I apologise if these are daft questions; I am just making a coffee and then will sit down with the docs.
w
No worries! It's not really supported, but it might be possible to do! You would probably need to log in to the new node and update some settings to get it to register to the new Rancher..
If you actually go through with it I would be very interested to hear how it went! 😄
b
We're doing something similar, only we deployed k3s + Rancher + SLES Transactional Server together and are using Elemental to provision the other nodes. You're gonna have a bit of a chicken-or-the-egg situation, because at some point you'll have to swap all the configs from the old Rancher into the new Rancher AND have the new Rancher already hold the Elemental configs/registrations to manage itself. If you can get an N+1 kind of setup, here's what might work?
• Set up a single-node cluster (k3s or RKE2) with a VIP that points at your single node.
• Install Rancher.
• Install Elemental.
• Use Elemental to provision an HA cluster onto your 3 production bare-metal boxes.
• Make a Rancher backup (sketched below).
• Shut down your single-node cluster.
• Point the VIP at the HA cluster.
• Install Rancher in the HA cluster.
• Restore the Rancher backup from the single node.
• Now you've "migrated" the bootstrap Rancher instance into production, and the bare metal should still be registered/managed by Elemental.
The only thing that might be weird is that the new local cluster may also have entries for the registered cluster. I think you might be able to force-delete the YAML objects to clean that up, but honestly I don't know.
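The backup and restore steps in the list above map onto the rancher-backup operator's custom resources. A rough sketch, assuming the operator is installed in both the single-node and the HA cluster; the object names and the backup filename are placeholders:

```yaml
# Sketch of the rancher-backup operator objects behind the "Make a Rancher
# backup" / "Restore the Rancher backup" steps. Without a storageLocation,
# the operator writes to its default storage (a PV or a configured S3 bucket).
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: pre-migration                       # placeholder name
spec:
  resourceSetName: rancher-resource-set     # default resource set shipped with the operator
---
# Applied on the new HA cluster, after installing the operator there too
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: restore-pre-migration               # placeholder name
spec:
  backupFilename: pre-migration-xxxx.tar.gz # placeholder; use the actual file name
```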
n
Thank you so much; it's nice to see what I thought I needed to do written out by someone with much more experience than I have. I am currently thinking of putting this idea to the back of my mind for now, as I have an awful lot to get my head around as it is. What I might do, though, as it ties into what I was already thinking of doing, is the following:
• Use the current MicroOS/RKE2/Rancher node (a single physical BM server) to provision additional nodes with Elemental within Harvester, and add them to the existing cluster as worker nodes plus an additional management node.
• Set up HA via a VIP (kube-vip; see the sketch below).
• Decommission the original BM server, so everything is in the Harvester cluster.
• Redeploy an additional management node onto the BM server using Elemental.
My goal here is to have a physical server outside the Harvester cluster to act as a management layer (and a worker if/when the Harvester cluster goes down), but offload the workers onto the Harvester nodes, and also have a backup management node for if/when the physical server goes down.
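For the kube-vip step, a sketch of what the static pod can look like in ARP mode for a control-plane VIP. The VIP address, interface, image tag, and kubeconfig path are all placeholders; on RKE2 the manifest would typically go under /var/lib/rancher/rke2/agent/pod-manifests/:

```yaml
# Sketch of a kube-vip static pod advertising a control-plane VIP via ARP.
# address, vip_interface, the image tag, and the kubeconfig path are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: kube-vip
      image: ghcr.io/kube-vip/kube-vip:v0.8.0   # placeholder tag
      args: ["manager"]
      env:
        - name: address
          value: "192.168.1.100"    # placeholder VIP
        - name: vip_interface
          value: "eth0"             # placeholder NIC
        - name: vip_arp
          value: "true"             # Layer-2/ARP mode
        - name: cp_enable
          value: "true"             # advertise the control plane on the VIP
        - name: vip_leaderelection
          value: "true"             # only one node holds the VIP at a time
      securityContext:
        capabilities:
          add: ["NET_ADMIN", "NET_RAW"]
      volumeMounts:
        - name: kubeconfig
          mountPath: /etc/kubernetes/admin.conf
  volumes:
    - name: kubeconfig
      hostPath:
        path: /etc/rancher/rke2/rke2.yaml   # placeholder; the server's kubeconfig
```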
b
Others might weigh in, but typically your Rancher cluster doesn't have a lot of "workloads", so it doesn't really benefit from separating the control plane from the worker nodes. While you CAN run other workloads on your Rancher cluster, SUSE doesn't recommend it, and it's officially not supported.
IIRC, part of the sizing recommendations assumes the control-plane and worker nodes are the same boxes, because of the amount of API calls Rancher is handling for downstream clusters/monitoring.
Most of that work is going to fall on the control plane regardless.
n
I did not know that, to be honest. I had planned on running some auxiliary services like Argo, my WiFi controller, a backup solution (MinIO), and a couple of other small things. But I shall definitely take what you have said on board.
b
Rancher kinda assumes that you're going to provision another downstream cluster for that.
It's still running on Kubernetes, so it would install on the local cluster, but it's not how it was designed.
You might also consider a k3s cluster instead of RKE2, because it'll have a lower overhead and still function the same way, but either should work.
I'm guessing you're installing Harvester on bare metal as well?
You might consider running Rancher as HA over 3 smaller physical machines (like Pis) and have your other physical box be another Harvester node, then just provision your other k8s clusters as managed Harvester nodes/VMs.
There's a built-in Rancher in Harvester, but they don't recommend it for production.
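If you go the small-HA-cluster route, one way to install Rancher on k3s without running Helm by hand is k3s's bundled Helm controller. A sketch, assuming cert-manager and the cattle-system namespace already exist; the hostname and replica count are placeholders:

```yaml
# Sketch: install Rancher via k3s's built-in Helm controller instead of a
# manual `helm install`. hostname is a placeholder and should resolve to
# the cluster's VIP/load balancer.
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: rancher
  namespace: kube-system          # where the Helm controller watches for charts
spec:
  repo: https://releases.rancher.com/server-charts/stable
  chart: rancher
  targetNamespace: cattle-system
  valuesContent: |-
    hostname: rancher.example.com   # placeholder
    replicas: 3                     # one per node in a three-node HA setup
```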
n
Yes, the Harvester cluster is also on BM. Currently, I have 3 nodes. The server I am running Rancher on is probably a bit overkill if I am not able to (or it is suggested not to) run anything other than the initial Rancher control plane: 32 cores / 32 GiB 🤣 but I wanted something I could throw in a rack cabinet.
I could add yet another layer on top solely for the initial Rancher management layer, use the 32/32 server as either a backup storage server or part of an auxiliary cluster (with the other nodes in VMs, as I mentioned before), and utilise the storage for something like MinIO. But that is yet another layer / looping logic. As you can see, I am pretty out of my depth here. But enjoying the learning process!
b
I'd use Pis or something similar for a three-node HA endpoint and use the other server as a new Harvester/Longhorn node.
but that's me.
Have fun figuring it out. 🙂
👍 1
n
And another circle completed, haha. I think I am going to deploy another hypervisor (thinking of using XCP-ng) in addition to Harvester; I can then host the Rancher local/management cluster outside of Harvester. I'd then be able to use the Dell R430 I have to host additional things like a backup server, DNS, and additional RKE2 nodes - stuff which I was going to host on the 'local' Rancher cluster. I can then migrate/add additional nodes to the local Rancher cluster, either on separate hardware (I could use a Pi and host a node at a second location) or on AWS/cloud services, if I have issues with reliability. This is of course going a bit OT for the channel, so I apologise for that :)
👍 1