crooked-cat-21365
06/01/2024, 5:05 AM
creamy-pencil-82913
06/01/2024, 6:37 AM
creamy-pencil-82913
06/01/2024, 6:38 AM
crooked-cat-21365
06/01/2024, 10:00 AM
creamy-pencil-82913
06/01/2024, 7:20 PM
crooked-cat-21365
06/02/2024, 9:02 AM
crooked-cat-21365
06/02/2024, 9:34 AM
:
"state": "active",
"message": "Resource is current"
},
{
"toId": "fleet-default/extkube001-etcd-snapshot-node01.dmz.aixigo.de-1708488004-s3",
"toType": "rke.cattle.io.etcdsnapshot",
"rel": "owner",
"state": "active",
"message": "Resource is current"
},
{
"toId": "fleet-default/extkube001-etcd-snapshot-node01.dmz.aixigo.de-1708617605-s3",
"toType": "rke.cattle.io.etcdsnapshot",
:
The interesting part is that "backup snapshots to S3" is disabled, see the attached screenshot. Nevertheless, there are 162 days of failed hourly backups somewhere in Rancher's database, all with 0 bytes written. I can see them in the snapshots overview, listed under "S3". Each one carries this error message:
failed to test for existence of bucket rancher02: Head "https://minio.ac.aixigo.de:9010/rancher02/": dial tcp 172.19.96.219:9010: connect: connection refused
The ECONNREFUSED is expected: I had configured internal S3 storage, as on other internal clusters, but this cluster runs on a different network, so I disabled S3 storage again.
The question is: how can I get rid of these failed zombie backups to an S3 bucket that doesn't exist? I tried editing the rke2-etcd-snapshots ConfigMap, but that didn't help; the entries come back.
crooked-cat-21365
06/03/2024, 11:02 AM
crooked-cat-21365
06/05/2024, 8:12 AM
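A possible cleanup route, sketched here as an assumption rather than a confirmed fix: the `toType` field in the JSON above (`rke.cattle.io.etcdsnapshot`) suggests each zombie entry is backed by an `ETCDSnapshot` custom resource in the `fleet-default` namespace on the Rancher management ("local") cluster, so listing and deleting those objects with kubectl might remove the entries. The resource name `etcdsnapshots.rke.cattle.io` and the example object name are taken from the thread; whether Rancher recreates deleted objects (as it did with the ConfigMap edit) is untested.

```shell
# Run against the Rancher management (local) cluster, not the downstream cluster.
# List the failed S3 snapshot records; the S3-backed ones end in "-s3":
kubectl -n fleet-default get etcdsnapshots.rke.cattle.io | grep -- '-s3$'

# Delete a single zombie entry (example name from the JSON fragment above):
kubectl -n fleet-default delete etcdsnapshot.rke.cattle.io \
  extkube001-etcd-snapshot-node01.dmz.aixigo.de-1708488004-s3

# Bulk variant: delete every S3-flagged snapshot object in one pass.
kubectl -n fleet-default get etcdsnapshots.rke.cattle.io -o name \
  | grep -- '-s3$' \
  | xargs -r kubectl -n fleet-default delete
```

If the objects reappear, the downstream cluster's snapshot controller is presumably still reconciling them, and the stale S3 configuration would need to be cleared on the cluster object itself first.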