# general
Hi, we quite often see the following error in our Rancher pods: `error syncing 's3-etcd-snapshot-<cluster>-cp-xxxxx-xxxxx-1747180264-63080a': handler snapshotbackpopulate: etcdsnapshots.rke.cattle.io "<cluster>-etcd-snapshot-<cluster>-c7560" already exists, requeuing`. I've already found that once I remove the S3 snapshot from the downstream cluster, the error disappears, but after a while the errors slowly come back (for a different snapshot). Any idea what is going on and how to fix this permanently? (We are running Rancher 2.11.1 with RKE2 v1.31.7+rke2r1.)
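In case it helps anyone debugging the same thing, here is a minimal sketch of how to inspect the conflicting objects with kubectl. This assumes the `etcdsnapshots.rke.cattle.io` custom resources for the downstream cluster live in its namespace on the Rancher management (local) cluster, commonly `fleet-default`; that namespace and the `<cluster>` placeholders are assumptions, adjust them for your setup:

```bash
# List the ETCDSnapshot objects Rancher tracks for the cluster
# (run against the Rancher management/local cluster).
kubectl get etcdsnapshots.rke.cattle.io -n fleet-default | grep '<cluster>'

# Inspect the object named in the "already exists" error to see
# which local/S3 snapshot file it points at.
kubectl get etcdsnapshots.rke.cattle.io '<cluster>-etcd-snapshot-<cluster>-c7560' \
  -n fleet-default -o yaml

# If it turns out to be a stale duplicate, deleting it lets the
# snapshotbackpopulate handler recreate it on the next sync.
kubectl delete etcdsnapshots.rke.cattle.io '<cluster>-etcd-snapshot-<cluster>-c7560' \
  -n fleet-default
```

Deleting the object is only a workaround, of course; since the errors reappear for different snapshots, it may be worth comparing the listed objects to check whether two snapshot records (e.g. a local and an S3 one) keep colliding on the same generated name.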