# harvester
p
Anyway, that's the backstory. I therefore wanted to make my own snapshot schedule for a VM that already has a backup schedule. I thought to myself, "hey, I could maybe just create a snapshot object with kubectl" and put that on a cronjob.
```yaml
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineBackup
metadata:
  annotations:
    harvesterhci.io/snapshotFreezeFS: "true"
  creationTimestamp: "2025-02-06T06:38:50Z"
  finalizers:
  - wrangler.cattle.io/harvester-vm-backup-controller
  - wrangler.cattle.io/vm-backup-controller
  generation: 5
  name: rancher-1-snapshot
  namespace: rancher-mgmt
  ownerReferences:
  - apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    name: rancher-1
    uid: fd9e0d2c-10b5-436e-8732-96fadf4a4f81
  resourceVersion: "63399898"
  uid: a9c723c2-b15b-4475-956f-f555d121b187
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: rancher-1
  type: snapshot
```
So this is a snapshot I took of a VM a while ago (`k get virtualmachinebackups.harvesterhci.io -n rancher-mgmt rancher-1-snapshot -o yaml`). I believe I could just create a similar manifest and apply it to get snapshots. Sadly, I'd then miss out on the retain and retry features of proper Harvester schedules, but oh well.
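For reference, I figure the manifest I'd actually apply only needs the user-settable fields; the rest (creationTimestamp, finalizers, generation, ownerReferences, resourceVersion, uid) gets filled in by the controller on creation. Something like this, where the name is just an example I made up:

```yaml
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineBackup
metadata:
  annotations:
    harvesterhci.io/snapshotFreezeFS: "true"
  name: rancher-1-snap-manual  # example name; each snapshot needs a unique one
  namespace: rancher-mgmt
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: rancher-1
  type: snapshot
```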
So yes, this is an idea I came up with because restoring backups is slow for me; next time something goes wrong, I want to be able to restore the VM faster than before.
Though of course, I would love to have a proper snapshot schedule which can coexist alongside the backup schedule 🙃
Applying just that yaml works, so now I'm wondering why I shouldn't do this.
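For the cronjob part, here's a rough sketch of what I had in mind. It assumes a ServiceAccount (`vm-snapshot-creator` is a made-up name) bound to a Role that can create `virtualmachinebackups.harvesterhci.io`, and it uses `generateName` so each run produces a uniquely named snapshot:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: rancher-1-snapshots  # made-up name
  namespace: rancher-mgmt
spec:
  schedule: "0 */8 * * *"  # placeholder cadence
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          # assumed ServiceAccount with create rights on
          # virtualmachinebackups.harvesterhci.io
          serviceAccountName: vm-snapshot-creator
          restartPolicy: Never
          containers:
          - name: create-snapshot
            image: bitnami/kubectl  # any image with kubectl works
            command:
            - /bin/sh
            - -c
            - |
              # generateName gives each snapshot a unique suffix,
              # which is why this uses `create` rather than `apply`
              cat <<EOF | kubectl create -f -
              apiVersion: harvesterhci.io/v1beta1
              kind: VirtualMachineBackup
              metadata:
                generateName: rancher-1-snap-
                namespace: rancher-mgmt
              spec:
                source:
                  apiGroup: kubevirt.io
                  kind: VirtualMachine
                  name: rancher-1
                type: snapshot
              EOF
```

(No retain or retry logic here, of course, which is exactly the part a proper Harvester schedule would give me.)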
m
When the schedule’s retention limit is reached, the controller will begin deleting outdated snapshots and backups, which can generate heavy I/O load. If multiple snapshot or backup deletions occur simultaneously, it can even cause the Longhorn engine to time out and become faulted. That’s why Harvester includes certain safeguards for schedule control.
We have an enhancement ticket to allow a VM’s schedule to rotate between backup and snapshot types.
p
Thank you. I understand the disk load issue. In that case, for the time being, I should be safe creating a backup at midnight and snapshots at 08:00, 12:00, and 16:00? I don't imagine the two schedules would interfere then.
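In cron terms, the snapshot CronJob above would just become:

```yaml
# backup stays on the Harvester-managed schedule at midnight (0 0 * * *);
# snapshots run at 08:00, 12:00, and 16:00
schedule: "0 8,12,16 * * *"
```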
m
As mentioned, we plan to improve this by allowing VMBackup to rotate between different types, rather than having two separate schedules. This approach helps Harvester avoid concurrent I/O from snapshot purging.