# longhorn-storage
l
Anyone … ? I also see a lot of errors in the csi-snapshotter like
csi-snapshotter-7cb6bf8447-kznqb I0407 19:42:46.942091       1 snapshot_controller.go:325] createSnapshotWrapper: CreateSnapshot for content snapcontent-ee6510f9-86b1-456c-b54f-0f32e51b60b6 returned error: rpc error: code = NotFound desc = volume pvc-2adf3709-a96a-49ad-8030-7924ac8e324c not found
csi-snapshotter-7cb6bf8447-kznqb E0407 19:42:47.157322       1 snapshot_controller.go:124] checkandUpdateContentStatus [snapcontent-05544e79-736c-439f-a0ad-e750af2605db]: error occurred failed to take snapshot of the volume pvc-d4e68080-2f06-47db-87ce-39287988f03e: "rpc error: code = NotFound desc = volume pvc-d4e68080-2f06-47db-87ce-39287988f03e not found"
csi-snapshotter-7cb6bf8447-kznqb E0407 19:42:47.157362       1 snapshot_controller_base.go:265] could not sync content "snapcontent-05544e79-736c-439f-a0ad-e750af2605db": failed to take snapshot of the volume pvc-d4e68080-2f06-47db-87ce-39287988f03e: "rpc error: code = NotFound desc = volume pvc-d4e68080-2f06-47db-87ce-39287988f03e not found"
and
I0407 19:44:48.344188       1 util.go:218] storeObjectUpdate updating content "snapcontent-17f303c0-b0a1-4a78-97eb-f1cf6edceb45" with version 776829910
I0407 19:44:48.344196       1 snapshot_controller.go:88] synchronizing VolumeSnapshotContent[snapcontent-17f303c0-b0a1-4a78-97eb-f1cf6edceb45]: content is bound to snapshot monitoring/velero-data-loki-loki-distributed-ingester-1-brx8n
I0407 19:44:48.344213       1 snapshot_controller.go:90] syncContent[snapcontent-17f303c0-b0a1-4a78-97eb-f1cf6edceb45]: check if we should add invalid label on content
I0407 19:44:48.344221       1 snapshot_controller.go:1541] getSnapshotFromStore: snapshot monitoring/velero-data-loki-loki-distributed-ingester-1-brx8n not found
I0407 19:44:48.568848       1 snapshot_controller_base.go:200] enqueued "snapcontent-63ce4478-b7ef-45a8-ba41-8ffe81abd97f" for sync
I0407 19:44:48.568876       1 snapshot_controller_base.go:302] syncContentByKey[snapcontent-63ce4478-b7ef-45a8-ba41-8ffe81abd97f]
I0407 19:44:48.568889       1 util.go:218] storeObjectUpdate updating content "snapcontent-63ce4478-b7ef-45a8-ba41-8ffe81abd97f" with version 776829933
I0407 19:44:48.568897       1 snapshot_controller.go:88] synchronizing VolumeSnapshotContent[snapcontent-63ce4478-b7ef-45a8-ba41-8ffe81abd97f]: content is bound to snapshot certificates/velero-database-step-certificates-1-hdscc
I0407 19:44:48.568900       1 snapshot_controller.go:90] syncContent[snapcontent-63ce4478-b7ef-45a8-ba41-8ffe81abd97f]: check if we should add invalid label on content
I0407 19:44:48.568907       1 snapshot_controller.go:1541] getSnapshotFromStore: snapshot certificates/velero-database-step-certificates-1-hdscc not found
I0407 19:44:48.749320       1 snapshot_controller_base.go:200] enqueued "snapcontent-d84eca41-32d5-438f-a4d3-848c8f3c96a3" for sync
I0407 19:44:48.749432       1 snapshot_controller_base.go:302] syncContentByKey[snapcontent-d84eca41-32d5-438f-a4d3-848c8f3c96a3]
I0407 19:44:48.749461       1 util.go:218] storeObjectUpdate updating content "snapcontent-d84eca41-32d5-438f-a4d3-848c8f3c96a3" with version 776829950
I0407 19:44:48.749507       1 snapshot_controller.go:88] synchronizing VolumeSnapshotContent[snapcontent-d84eca41-32d5-438f-a4d3-848c8f3c96a3]: content is bound to snapshot certificates/velero-database-step-certificates-1-w6khh
I0407 19:44:48.749525       1 snapshot_controller.go:90] syncContent[snapcontent-d84eca41-32d5-438f-a4d3-848c8f3c96a3]: check if we should add invalid label on content
I0407 19:44:48.749543       1 snapshot_controller.go:1541] getSnapshotFromStore: snapshot certificates/velero-database-step-certificates-1-w6khh not found
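Those "getSnapshotFromStore: … not found" lines look like VolumeSnapshotContent objects whose bound VolumeSnapshot no longer exists. Assuming the standard snapshot.storage.k8s.io CRDs, one way to list each content together with the snapshot it claims to be bound to could be:

# List every VolumeSnapshotContent with the VolumeSnapshot it references;
# rows whose referenced snapshot is missing match the "not found" lines above
kubectl get volumesnapshotcontents.snapshot.storage.k8s.io \
  -o custom-columns=CONTENT:.metadata.name,SNAP_NS:.spec.volumeSnapshotRef.namespace,SNAPSHOT:.spec.volumeSnapshotRef.name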
See the thread … I’ve posted some error logs … thank you very much … --- Deleting backups.longhorn.io objects via the Python Longhorn library, or directly by requesting the REST API endpoints that the Longhorn UI uses - will that get the VolumeSnapshot and VolumeSnapshotContent objects deleted as well? Thank you
a
1. To delete backups: do you have any problem deleting the backup CRs with kubectl? I think that will be more convenient and quick. 2. Deleting backup CRs should not delete the VolumeSnapshot or VolumeSnapshotContent objects.
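For reference, a minimal sketch of that kubectl route, assuming Longhorn runs in the default longhorn-system namespace and BACKUP_NAME stands in for a real backup CR name:

# List the Longhorn backup CRs (namespace assumed to be longhorn-system)
kubectl -n longhorn-system get backups.longhorn.io

# Delete a single backup CR; BACKUP_NAME is a placeholder
kubectl -n longhorn-system delete backups.longhorn.io BACKUP_NAME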
l
Yes I do …. it’s the same experience when deleting backups.longhorn.io CR objects with kubectl .. I have to remove the finalizer … but before I can actually get to execute
kubectl delete backups.longhorn.io LONGHORN_BACKUP_NAME
the finalizer is set again. --- Okay, so deleting backups.longhorn.io objects does NOT trigger a deletion of VolumeSnapshot or VolumeSnapshotContent objects. Hmm - that matches my experience over the last couple of days, where I ended up scaling down the snapshot-controller, removing the finalizers on the VolumeSnapshotContent objects that no longer exist on the S3 backend, and then running kubectl delete on those … that worked; the VolumeSnapshotContent objects do not “come back” again via sync.
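Roughly, that workaround could look like the following, assuming the external snapshot-controller runs as a Deployment named snapshot-controller in kube-system (name and namespace vary by install) and SNAPCONTENT_NAME is a placeholder for an orphaned VolumeSnapshotContent:

# Stop the snapshot-controller so it stops re-syncing / re-adding finalizers
kubectl -n kube-system scale deployment snapshot-controller --replicas=0

# Clear the finalizers on the orphaned VolumeSnapshotContent
kubectl patch volumesnapshotcontent SNAPCONTENT_NAME --type merge -p '{"metadata":{"finalizers":null}}'

# Delete it; with the backing backup gone from S3 it should not be re-created by sync
kubectl delete volumesnapshotcontent SNAPCONTENT_NAME

# Bring the snapshot-controller back afterwards
kubectl -n kube-system scale deployment snapshot-controller --replicas=1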