# general
w
I run ours on a separate cluster of low-power machines entirely - but the update process we followed was to set up backup first, backing up to a remote S3 bucket, and note the versions in use - then upgrade. If all goes well, happy days; if not, you can reinstall the old version and restore from backup.
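For what it's worth, if you go the rancher-backup operator route, the backup is roughly one Backup resource pointing at the bucket - a minimal sketch, the bucket, secret and schedule values here are placeholders:

```yaml
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: rancher-nightly-backup
spec:
  resourceSetName: rancher-resource-set
  schedule: "0 3 * * *"       # recurring backup; drop this for a one-off
  retentionCount: 10          # keeps the bucket from growing forever
  storageLocation:
    s3:
      credentialSecretName: s3-creds        # placeholder secret holding the S3 access/secret keys
      credentialSecretNamespace: default
      bucketName: rancher-backups           # placeholder bucket name
      folder: rancher
      region: us-east-1
      endpoint: s3.us-east-1.amazonaws.com
```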
f
hey @worried-state-78253 thank you for your answer. Indeed that is a viable alternative. I will try it!
However, I would prefer to have a new VM after every PR - I like to deploy new VMs instead of keeping a long-lived one. It works well on AWS with their load balancing, and I am wondering how to do it on-premises with Harvester
w
I think you're overcomplicating this - I thought you were talking about updating Rancher (as a manager of your Harvester cluster), which from the sounds of it would itself be a k8s install in a VM that can be updated in place.
f
Yes, well, because I am running Rancher in a separate VM and not on the same k8s cluster as recommended, I would like to update the underlying VM as well. That gives me the possibility to run the cloud-init for this VM, where we for instance add/remove SSH keys for the VM itself
So imagine you merge a PR where you change the cloud-init section of a Terraform script, and CI/CD creates a new VM using that Terraform, applies the cloud-init changes in the new image and pushes it to production. With AWS this is doable
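To make it concrete, the kind of change I mean is just a small #cloud-config block that Terraform feeds to the new VM as user-data - a sketch, with a placeholder user name and keys:

```yaml
#cloud-config
# Sketch of the user-data rendered into the VM by the Terraform cloud-init section.
# The user name and keys below are placeholders.
users:
  - name: deploy
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... new-team-member
      # Dropping a line here and recreating the VM is how a key gets revoked,
      # since cloud-init only runs on the VM's first boot.
package_update: true
```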
w
If I understand you correctly - if this is the Rancher k8s cluster on a single VM, I think you're overengineering this - the separate VM would run k8s as a single node, I believe. How otherwise will you deal with registering the Harvester cluster to your Rancher cluster? Maybe I misunderstand.
f
yeah, I install k3s, cert-manager and Rancher, everything on one host, with https://github.com/vpereira/rancher-deploy - I just want to make sure I am able to "re-run" the cloud-init for the VM (which is not possible, so I always create a new VM whenever I have to "re-run" the cloud-init). Using AWS I guess I could do it with autoscaling + backup and restore, but with Harvester it is all new land to me 🤣
w
I honestly wouldn’t bother - unless k3s was being updated significantly - it feels like what you're trying to do is swimming against the tide. k8s clusters are designed to be highly available; if you want the gold standard, set up 3 small VMs and install Rancher in HA mode and deal with it separately. If something goes wrong in Harvester you're up the creek - although to be fair I have no idea about your particular use case.

The important thing is to think about the criticality of each component and the real-world impact - what's the worst that can happen if Rancher goes wrong? Reinstall, reconfigure, and maybe have some issues with the k8s clusters managed by Rancher (Harvester is fine regardless, but if you provision k8s clusters in Harvester, those clusters can't be resized from a new Rancher install once they were linked to the old one). Also, if you set up the backup policy in Rancher you can get around all those issues anyway, as you can start with a fresh install at the same version and be up in minutes. Good luck
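Concretely, getting back up again is roughly one Restore resource pointed at the last backup in the bucket - again a sketch assuming the rancher-backup operator, with placeholder filename and bucket details:

```yaml
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: restore-rancher
spec:
  # Placeholder: use the actual backup file name from the bucket.
  backupFilename: rancher-backup-<timestamp>.tar.gz
  storageLocation:
    s3:
      credentialSecretName: s3-creds        # placeholder secret with the S3 keys
      credentialSecretNamespace: default
      bucketName: rancher-backups           # placeholder bucket name
      folder: rancher
      region: us-east-1
      endpoint: s3.us-east-1.amazonaws.com
```

Apply that on a fresh install pinned to the same Rancher version you backed up from and it picks up where it left off.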
👍 1
f
Thank you @worried-state-78253, you provided me with a lot of good insights!
I kind of realized that too - if Rancher goes down, just reinstall. There is nothing really stateful there 👍
w
As long as you’ve got your backup