# general
b
Store your manifests in git and back up the manifests as text files rather than backing up the state of the cluster.
r
Yeah, I agree with Trent. I use the rancher-backup chart to back up the Rancher cluster itself to S3 on a schedule. For all other clusters, it's much more reliable and efficient to store YAMLs in Git and have something like ArgoCD keep the cluster in sync 🙂
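For reference, the scheduled backup is just a Backup resource from the rancher-backup operator — a rough sketch, where the bucket, region, and secret names are placeholders for your own setup:

```yaml
# Backup CR for the rancher-backup operator, applied in the Rancher (local) cluster.
# S3 bucket, region, and credential secret names below are placeholders.
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: rancher-nightly-backup
spec:
  resourceSetName: rancher-resource-set   # default resource set shipped with the chart
  schedule: "0 2 * * *"                   # cron schedule: every night at 02:00
  retentionCount: 14                      # keep the last 14 archives
  storageLocation:
    s3:
      credentialSecretName: s3-backup-creds
      credentialSecretNamespace: cattle-resources-system
      bucketName: my-rancher-backups
      folder: rancher
      region: eu-west-1
      endpoint: s3.eu-west-1.amazonaws.com
```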
c
Ok, what is the restore process? Will it be:
• restore the Rancher state on a new cluster
• define a playbook that can re-deploy everything, even if ArgoCD has crashed?
b
For Rancher:
• Use rancher-backup
• Create a new cluster
• Install the backup operator
• Restore the Rancher tar file

For downstream k8s:
• Create a new cluster
• Install Argo and configure it to look at the Git repo (rough sketch below)
• Argo deploys the manifests

With Fleet you can skip step 2. Cattle, not pets: Kubernetes is designed to be ephemeral and declarative. Trying to use Kubernetes in an imperative and stateful manner is an anti-pattern.
👍 1
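The Argo piece of that is just an Application manifest pointed at the Git repo — a minimal sketch, assuming a hypothetical repo URL, path, and destination:

```yaml
# Minimal ArgoCD Application; repo URL, path, and destination are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-apps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/manifests.git
    targetRevision: main
    path: clusters/production
  destination:
    server: https://kubernetes.default.svc   # the cluster Argo runs in
    namespace: default
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```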
r
I use this guide: https://ranchermanager.docs.rancher.com/pages-for-subheaders/backup-restore-and-disaster-recovery The easiest way in my experience is to just create a new one. That way, you can update the underlying K8s version as well as the Rancher version itself while you are at it. It's also zero downtime. Once you back up using the above doc, you can follow this: https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster After the new cluster is operational, I just update the DNS entry for the old one to point to the new one and everything magically works 🙂
👍 2
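The restore side of that migrate doc boils down to installing the backup operator on the new cluster and applying a Restore resource that points at the archive in S3 — a rough sketch with a hypothetical archive name and placeholder bucket details:

```yaml
# Restore CR applied on the new cluster after installing the backup operator.
# backupFilename and S3 details are placeholders; use the archive your Backup produced.
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: restore-rancher-migration
spec:
  backupFilename: rancher-nightly-backup-2024-01-01T02-00-00Z.tar.gz   # hypothetical archive name
  prune: false                     # don't delete resources that aren't in the backup
  storageLocation:
    s3:
      credentialSecretName: s3-backup-creds
      credentialSecretNamespace: cattle-resources-system
      bucketName: my-rancher-backups
      folder: rancher
      region: eu-west-1
      endpoint: s3.eu-west-1.amazonaws.com
```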
c
Will it work if I do it this way:
• restore the Rancher state in another new cluster, just for the management cluster
• re-deploy all the manifests I applied on the downstream clusters (assuming I have a playbook that does that)
I have a bunch of Helm charts for my applications, FYI
r
So for me, if I restore the state on the new cluster, everything from the downstream clusters is just there. You don't have to redeploy unless you want to change something 🙂
Also, remember to get rid of the old cluster once the new one is good. The docs don't mention it, but the old cluster keeps an older version of the rancher-agent pods running, which can cause the rancher-agent pods to redeploy every now and then.
👍 1