I have a question. We're running Longhorn 1.7.2 on an RKE2 cluster and have noticed performance degradation, as well as several (but not all) volumes in a degraded state. Digging into it a bit, I see that some of the volumes are flagged "TooManySnapshots". I'm not able to list the snapshots in the UI because it's so slow, but I pulled the engine info with kubectl (kubectl -n longhorn-system get engines.longhorn.io pvc-name-e-0 -o yaml) and I see 88 snapshots for this volume, not 250 like in the setting.
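For reference, I believe the snapshot CRs can also be counted directly, something like this (the longhornvolume label selector is my assumption based on what I see on our snapshot objects):

# count Longhorn snapshot CRs for one volume
# (the longhornvolume label selector is an assumption on my part)
kubectl -n longhorn-system get snapshots.longhorn.io -l longhornvolume=pvc-name --no-headers | wc -l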
Regardless, that's more snapshots than I would expect. We have a daily recurring job that backs up the volumes, but it's set to retain 3, so I'm not sure why 88 of them are hanging around. Is there a way to safely remove them without corrupting the volume, or to force a cleanup?
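In case it's useful, this is what I was considering trying, based on my reading of the docs, so please tell me if any of it is unsafe. First, deleting individual snapshot CRs and letting Longhorn purge them:

# delete one snapshot CR; my understanding is Longhorn marks the snapshot
# as removed and the engine reclaims the space on its next purge
kubectl -n longhorn-system delete snapshots.longhorn.io <snapshot-name>

And longer term, adding a snapshot-delete recurring job so each volume gets trimmed back down automatically (the name, schedule, and group here are just placeholders):

kubectl apply -f - <<'EOF'
apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: nightly-snapshot-delete   # placeholder name
  namespace: longhorn-system
spec:
  task: snapshot-delete           # my understanding: trims each volume down to `retain` snapshots
  cron: "0 2 * * *"               # placeholder schedule
  retain: 3
  concurrency: 1
  groups:
  - default                       # applies to volumes in the default group
EOF

Would either of those be safe to run against volumes that are currently degraded?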