# longhorn-storage
w
I'm using Longhorn 1.3.0, installed via the Helm chart:
    autoSalvage: true
    autoDeletePodWhenVolumeDetachedUnexpectedly: true
    disableSchedulingOnCordonedNode: true
    allowRecurringJobWhileVolumeDetached: true
    replicaSoftAntiAffinity: true
    nodeDownPodDeletionPolicy: delete-both-statefulset-and-deployment-pod
    backupstorePollInterval: 500
    priorityClass: system-node-critical
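
(For context: in the Longhorn Helm chart these keys normally live under defaultSettings in values.yaml. A minimal sketch of that layout, assuming the standard chart structure and the key names from the snippet above:)

    defaultSettings:
      autoSalvage: true
      autoDeletePodWhenVolumeDetachedUnexpectedly: true
      disableSchedulingOnCordonedNode: true
      allowRecurringJobWhileVolumeDetached: true
      replicaSoftAntiAffinity: true
      nodeDownPodDeletionPolicy: delete-both-statefulset-and-deployment-pod
      backupstorePollInterval: 500
      priorityClass: system-node-critical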
c
There could be two things: 1. Are the volumes attached at the time of the scheduled jobs? There is a setting, Allow Recurring Job While Volume Is Detached, which you can set based on your requirement. 2. If there is no change in the data, you won't see the system taking backups/snapshots. Since backups/snapshots are deltas, there is no point in taking a backup/snapshot if there is no change in the data.
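
(That setting can also be inspected directly on the cluster. The resource type and setting name below are assumed from a default install; verify them against `kubectl -n longhorn-system get settings.longhorn.io`:)

    # setting name assumed to be the kebab-case form of the Helm key above
    kubectl -n longhorn-system get settings.longhorn.io allow-recurring-job-while-volume-detached -o yaml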
f
@cuddly-tomato-40568 IIRC, Longhorn now creates snapshots or backups blindly, regardless of whether anything has changed in the data.
Not sure if there is anything wrong with the backup creation part. Can you provide logs from the cron job pod or the Longhorn components?
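
(A minimal sketch of where those logs usually come from, assuming a default install in the longhorn-system namespace; the label selector is an assumption to verify in your cluster, and the pod name is a placeholder:)

    # core Longhorn controller logs
    kubectl -n longhorn-system logs -l app=longhorn-manager --tail=200
    # recurring-job workloads created for backups, if present
    kubectl -n longhorn-system get cronjobs,jobs,pods | grep -i backup
    # logs of a specific recurring-job pod
    kubectl -n longhorn-system logs <recurring-job-pod-name>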
w
Sure, I will gather some logs
but PVC pvc-27631369-0d08-41ff-8f84-be004f2e736e is not in the logs
    State: Attached
    Health: Healthy
    Ready for workload: Ready
    Conditions: restore, scheduled
    Frontend: Block Device
    Attached Node & Endpoint: worker1 (/dev/longhorn/pvc-27631369-0d08-41ff-8f84-be004f2e736e)
    Size: 8 Gi
    Actual Size: 308 Mi
    Data Locality: disabled
    Access Mode: ReadWriteOnce
    Engine Image: longhornio/longhorn-engine:v1.3.0
    Created: a month ago
    Encrypted: False
    Node Tags:
    Disk Tags: ssd
    Last Backup: backup-0240b82804034f28
    Last Backup At: 8 days ago
    Replicas Auto Balance: ignored
    Instance Manager: instance-manager-e-2df59a02
    Namespace: grafana-oncall-1
    PVC Name: redis-data-oncall-redis-replicas-1
    PV Name: pvc-27631369-0d08-41ff-8f84-be004f2e736e
    PV Status: Bound
    Revision Counter Disabled: False
    Pod Name: oncall-redis-replicas-1
    Pod Status: Running
    Workload Name: oncall-redis-replicas
    Workload Type: StatefulSet
I think the issue must be related to the cron job. I double-checked and not all volumes are in the default group; I'm not sure of the reason for that yet.
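
(As I understand it, a recurring job whose groups include default applies to volumes that have no recurring job assigned explicitly, so a volume that fell out of that group would simply be skipped. A minimal sketch for checking group membership, assuming the standard Longhorn CRDs; exact label keys vary by version, so read them from the output:)

    # recurring jobs and the groups they target
    kubectl -n longhorn-system get recurringjobs.longhorn.io -o yaml
    # the volume's labels show which recurring jobs/groups apply to it
    kubectl -n longhorn-system get volumes.longhorn.io --show-labels | grep pvc-27631369-0d08-41ff-8f84-be004f2e736e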