# harvester
e
The built-in migration feature in Harvester is meant to migrate VMs between nodes within a cluster. Its main purpose is to allow node maintenance or upgrades without disrupting workloads, though in practice it can prove tricky, especially when the cluster has heterogeneous hardware and the workloads are under high utilization. The best alternative I can think of is to back up the VM to an external backup target and then restore it in the other cluster: https://docs.harvesterhci.io/v1.4/vm/backup-restore#restore-a-new-vm-on-another-harvester-cluster
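For reference, a minimal sketch of how you'd check that prerequisite from the CLI. Both clusters must point at the same backup target for the cross-cluster restore to work; the resource names below (`settings.harvesterhci.io`, `virtualmachinebackups.harvesterhci.io`) are Harvester CRDs, but verify them against your Harvester version:

```shell
# On the source cluster: confirm an external backup target (S3 or NFS) is configured.
# An empty or default value means backups have nowhere to go.
kubectl get settings.harvesterhci.io backup-target -o jsonpath='{.value}'

# List existing VM backups across namespaces to confirm they land on the target.
kubectl get virtualmachinebackups.harvesterhci.io -A

# On the destination cluster, the backup-target setting must resolve to the
# same endpoint/bucket before the restore UI can see the source backups.
kubectl get settings.harvesterhci.io backup-target -o jsonpath='{.value}'
```

These are read-only checks; the actual backup and restore are driven from the Harvester UI as described in the linked doc.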
b
Yeah, when you use a supported third-party storage driver, like the rbd.ceph.io CSI driver, the external backup share causes the UI to crash.
So the backup/restore can't work from the docs you provided. (I'm bummed)
Even so, apparently that feature is locked to Longhorn-backed PVs only.
e
The UI crash sounds like a bug. Would you mind reporting it as an issue on https://github.com/harvester/harvester/issues? The backup/restore feature is tightly integrated with Longhorn; not much can be done about that, AFAIK.
b
There's an existing issue; I can look it up. I'm not really hopeful about making the integrated stuff work, but rather for some kubectl hack that exports the disks somehow and allows them to be imported into a new cluster.
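One possible shape for that kubectl hack, assuming the KubeVirt bundled with Harvester is recent enough to ship the VirtualMachineExport API (the `my-vm`/`my-vm-export` names are placeholders):

```shell
# Create a VirtualMachineExport pointing at the VM whose disks you want out.
# The export controller serves the disk images over HTTPS from inside the cluster.
cat <<'EOF' | kubectl apply -f -
apiVersion: export.kubevirt.io/v1alpha1
kind: VirtualMachineExport
metadata:
  name: my-vm-export
  namespace: default
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: my-vm
EOF

# Wait for the export to become Ready, then inspect the download links
# listed under .status.links.
kubectl get vmexport my-vm-export -n default -o yaml
```

Whether this works depends on the KubeVirt version and feature gates Harvester enables, so it's worth checking `kubectl api-resources | grep export` first.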
e
Thanks. Unfortunately, I don't think the backup-and-restore route is viable until https://github.com/harvester/harvester/issues/5816 is fixed.
b
yeah
I see that virtctl has an export API that might be viable for making copies of the blk devices, but I'm not sure if it's available in the version that Harvester ships.
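If the bundled virtctl does have the `vmexport` subcommand, the flow would look roughly like this (VM and export names are placeholders; the import step assumes CDI is available on the destination cluster):

```shell
# On the source cluster: create an export for the VM and download its disk.
# With --vm, virtctl creates the VirtualMachineExport object for you.
virtctl vmexport download my-vm-export --vm=my-vm --output=my-vm-disk.img.gz

# On the destination cluster: upload the image into a new volume.
# image-upload requires CDI (Containerized Data Importer) to be installed.
virtctl image-upload dv my-vm-disk --size=20Gi --image-path=my-vm-disk.img.gz
```

A quick way to find out if your version supports it is just `virtctl vmexport --help`; if the subcommand is missing, the KubeVirt release Harvester ships is too old.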