# k3s

adamant-kite-43734

07/05/2022, 8:22 PM
This message was deleted.

creamy-pencil-82913

07/05/2022, 8:34 PM
they all take backups if so configured. I’m not sure what you mean by ‘completing’; the file names have the node name in them, so for example if you’re using S3 you just get one etcd snapshot from each node at the requested interval.
👍 1
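(For reference, scheduled etcd snapshots with S3 upload are configured per server node in k3s, via CLI flags or the config file. The sketch below uses the k3s flag names in config-file form; the bucket, endpoint, schedule, and credentials are illustrative placeholders, not values from this conversation.)

```yaml
# /etc/rancher/k3s/config.yaml on each server (etcd) node.
# All values below are hypothetical examples.
etcd-snapshot-schedule-cron: "0 */12 * * *"   # every 12 hours
etcd-snapshot-retention: 5                    # keep the 5 most recent
etcd-s3: true
etcd-s3-endpoint: "s3.amazonaws.com"
etcd-s3-bucket: "my-k3s-snapshots"            # hypothetical bucket
etcd-s3-folder: "etcd"
etcd-s3-access-key: "<ACCESS_KEY>"
etcd-s3-secret-key: "<SECRET_KEY>"
```

Each server node runs this schedule independently, which is why the snapshot names carry the node name.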

hundreds-state-15112

07/05/2022, 9:13 PM
I was trying to compare it to the RKE snapshotting functionality, where it does one per cluster. So somehow it decides which single node is responsible for uploading the backups to S3, and it decides that with me just configuring the S3 backup settings. By “competing” I meant whether there would be two hosts trying to write to the same “cluster” level etcd snapshot instead of a node snapshot, but as you’ve explained (and I just validated with on-demand snapshotting) the node name is present.
I haven’t set up Rancher 2.6 yet, as I’m still playing with the k3s side, but is there a similar “Snapshots” tool I could configure for the “local” k3s cluster that would handle a similar cluster-level backup? Or is there a particular reason the k3s version is node-based vs cluster-based?
Okay I just found the rancher-backup operator
I think in my case the rancher-backup operator is more what I’m going for…? We aren’t really doing any special sauce in these k3s deploys, or using them outside of the mostly stock configuration for Rancher Server hosting. I guess I’d still like Rancher to eventually be able to configure cluster-level etcd snapshots on k3s-based clusters similar to RKE, though. Let me know if I’m missing something about any of these backup solutions.
I guess the k3s snapshot is still useful in addition to the rancher-backup operator: if a specific node goes bad and has to be replaced, without that backup we can’t properly add a node to replace the broken one. We’d have to rebuild and redeploy the entire Rancher Server cluster and then restore to it? But at the same time it seems like the k3s etcd backup captures everything the rancher-backup operator would, and more.
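(The on-demand snapshot and node-replacement flow discussed above roughly corresponds to the `k3s etcd-snapshot` and `--cluster-reset` machinery; the snapshot names and path below are hypothetical, and the exact restore procedure should be checked against the k3s backup/restore docs.)

```shell
# Take an on-demand snapshot on a server node
# (uploads to S3 as well, if S3 is configured):
k3s etcd-snapshot save --name pre-maintenance

# List the snapshots this node knows about:
k3s etcd-snapshot ls

# Restore: stop k3s, then reset the cluster from a snapshot file.
# The restored node comes back as a single-member etcd cluster;
# the other server nodes must rejoin it afterwards.
k3s server \
  --cluster-reset \
  --cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/pre-maintenance-<node>-<timestamp>
```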

creamy-pencil-82913

07/05/2022, 9:50 PM
the backup operator just backs up rancher-specific resources. etcd snapshots are taken directly out of etcd and include everything in the entire cluster at the datastore level
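(By contrast, the rancher-backup operator is driven by a `Backup` custom resource in the `resources.cattle.io` API group. A minimal sketch follows; the bucket, secret name, and schedule are hypothetical, and the exact field names should be verified against the Rancher backup docs for your version.)

```yaml
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: rancher-nightly          # hypothetical name
spec:
  resourceSetName: rancher-resource-set   # default ResourceSet shipped with the operator
  schedule: "0 3 * * *"                   # nightly at 03:00
  retentionCount: 7
  storageLocation:
    s3:
      bucketName: my-rancher-backups      # hypothetical bucket
      region: us-east-1
      endpoint: s3.us-east-1.amazonaws.com
      credentialSecretName: s3-creds      # hypothetical secret
      credentialSecretNamespace: cattle-resources-system
```

Note this captures only the Rancher-specific resources named by the ResourceSet, not the whole cluster datastore the way an etcd snapshot does.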

hundreds-state-15112

07/05/2022, 10:23 PM
Yeah I think I’m going to continue with the k3s etcd snapshots. Should I file a GitHub issue against k3s and/or Rancher requesting an equivalent cluster-level k3s snapshotting functionality that mirrors RKE? My search hasn’t revealed any open ones.

creamy-pencil-82913

07/05/2022, 10:41 PM
the RKE snapshots are equivalent to the k3s/rke2 snapshots. It’s just a zip file of an etcd snapshot plus the cluster config file. k3s and rke2 don’t have an equivalent to RKE’s cluster config, since they are not externally orchestrated.
so I’m not sure what you’d ask for exactly?

hundreds-state-15112

07/05/2022, 11:27 PM
To be clear I’m referring to this functionality in the Rancher Server UI. Pick a cluster, then Tools -> Snapshots and you’ll find yourself here with S3 snapshots if they’ve been configured via an RKE template
Here’s what I had to configure/provide answers for per cluster to enable that snapshotting
The distinction I’m making is that I never “deployed” etcd S3 snapshots to the RKE nodes themselves. I did it through this RKE template, which magically decided how to configure these without me ever having to worry about which node the etcd snapshot was “deployed to” or performed by.
Compare that to the k3s etcd backups, which, as you’ve said, are node-specific snapshots of etcd rather than the RKE-style cluster-level backup. So I’d like to be able to do an equivalent “cluster-level” enablement for k3s. Am I misunderstanding?

creamy-pencil-82913

07/06/2022, 1:42 AM
You can configure the same thing for k3s or rke2 when you provision them from rancher 2.6. It is not supported for imported clusters.

hundreds-state-15112

07/06/2022, 1:45 AM
Awesome that’s great news. I’ll check it out tomorrow then, thanks.