# harvester
b
So I'm struggling with a migration process. So far SUSE support has told me it's not possible, but I'm finding it hard to believe. Ultimately I need to move a VM from Harvester ClusterA to ClusterB. The docs suggest using a shared backup target, however there were two issues:
• If you use a third-party CSI driver for a secondary drive (a fully supported feature), the backup will fail.
• Shared backups will cause the Harvester UI to crash and become unavailable in older versions of Harvester.
I asked about manually exporting disk images, but they said it wasn't possible.
I'm hoping someone else has had to do this and either has a script, or can at least point me in the right direction towards making one.
virtctl or the UI has to have some sort of way to export an image or disk, right?
The thing is that most of these OS images on the VMs are cloudinit images that are image backed. I can already flatten them to be standard longhorn images (not image backed), but it would be better to be able to create an exportable/importable image with the PVC
I found this, but it's not immediately obvious to me if the full feature set is in Harvester 1.4.2: https://kubevirt.io/user-guide/storage/export_api/
m
i think the VirtualMachineExport api is the key, which is what virtctl vmexport or kubectl virt vmexport use
i wonder if we can extend the vm-import-controller to support a harvester-to-harvester scenario, since 3rd party csi is a pain atm
the controller is already using the VirtualMachineExport api
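fwiw, the rough cli flow should look something like this (untested on harvester 1.4.x, vm/export names are just placeholders):
# create an export object for the (stopped) vm, then pull the disk down locally
virtctl vmexport create my-vm-export --vm=my-vm
virtctl vmexport download my-vm-export --output=my-vm-disk.img.gz
# or create + download in one step and keep the export object around afterwards
virtctl vmexport download my-vm-export --vm=my-vm --output=my-vm-disk.img.gz --keep-vme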
b
It seems like the VirtualMachineExport is the path forward for sure. Using it instead of the longhorn backups because of the image backed disks (having to recreate them with the same IDs in the other clusters or convert them to flat non-image backed disks) seems to provide fewer steps/less effort than what's currently supported.
As I read it, it exports the PVCs as either files in a tar or as a standard qcow/disk image (compressed)
since it's the upstream kubevirt, it doesn't seem to care what the csi driver is doing.
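For reference, the raw object should look roughly like this if I'm reading the docs right (untested; the apiVersion and the token secret handling differ between kubevirt versions):
cat <<'EOF' | kubectl apply -f -
apiVersion: export.kubevirt.io/v1alpha1
kind: VirtualMachineExport
metadata:
  name: my-vm-export
  namespace: default
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: my-vm
  # older kubevirt versions may also want a spec.tokenSecretRef pointing at an export token secret
EOF
# the status then lists per-volume download links
kubectl get virtualmachineexport my-vm-export -o yaml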
m
it doesn't seem to care what the csi driver is doing.
i think that's the key. i'll try to find time to test this import scenario, unless you get to it first.
🦜 1
🙌 1
b
That would be awesome.
I'm not sure how much time I'll have for this given our current backlog at work.
If I do, I'll be sure to put anything I find out here. 🙂
👍 1
m
meanwhile, backup is still a pita. it has been a while, but i am thinking about velero.
i don't know if they have done much with vm since i was last there
b
I would think that if we could export the disks via the VirtualMachineExport api, it would be possible to download the exported disks/images/yaml and upload them to an s3 source as a "backup" and simplify things there too.
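Something like this is what I'm picturing (completely untested; bucket and vm names are made up):
# pull the export locally, then push disk + vm definition to s3 as a poor-man's backup
virtctl vmexport download my-vm-export --vm=my-vm --output=my-vm-disk.img.gz
kubectl get vm my-vm -o yaml > my-vm.yaml
aws s3 cp my-vm-disk.img.gz s3://my-backup-bucket/harvester-exports/
aws s3 cp my-vm.yaml s3://my-backup-bucket/harvester-exports/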
m
well to me, backup/restore and export/import are slightly different use cases
backup/restore involves crash-consistent snapshotting and data movement to provide stronger guarantees around rto, rpo, etc.
b
Yep, that's fair.
m
but yes, in a way, there are some overlaps
from a feature perspective, i see export/import as a one-off
backup is something that you can build a recurring schedule, retention policy, incremental vs. changed block around, etc.
b
It's just on my mind because of the regression with the 3rd party csi/non-boot disks.
m
loud and clear
@bland-article-62755 re: export/import, what kind of workloads are you running on the vm? from my testing with velero, even though i can successfully backup/restore vm, some workloads (like etcd) might not run successfully
b
It depends. Some of it is web hosting, some of it is databases. Some of it is research.
I did get a manual migration path figured out
m
are these workloads running in guest clusters?
b
These are VMs that might have a db running in it for a developer.
Or a professor wanting to do research with AI or datasets.
m
ok.. so no rke2, k3s etc. running on those vm?
b
Nope, just normal Linux
m
understood
b
alma mostly, some ubuntu and a few suse
m
what manual migration path did you come up with?
b
Let me pull up our little mop... hang on.
Ok, here are the manual steps for exporting/importing vms from one cluster to another:
• Stop the VM.
• Copy the MAC address from the network interface and make note of cores/RAM.
• Go to Volumes and find the disk(s). Export disk to image.
• Download the image.
• Extract the raw image from the tarball.
• Convert from raw to qcow (compressed), e.g. the command below (a rough script for the extract + convert steps is also sketched after this list):
qemu-img convert -f raw -O qcow2 -c -p -S 4k ~/Downloads/lti-dev-db ~/Downloads/lti-dev-db.qcow2
• Upload the image to the cluster directly (i.e. via the fqdn for the vip on the harvester cluster, not through rancher).
• Create a new disk with the image.
• Flatten the disk with the pvc-flattener script, e.g.
./pvc-disk-flatner lti-dev-db-root canvas lti-dev-db-flat
• Create a new VM with the flat disk and the MAC from the previous vm.
• Verify the VM boots properly.
• Delete the Backed Image PVC (you may need to delete the Job and completed pods from the flattening via the embedded rancher UI).
• Delete the imported image.
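And the rough script for the extract + convert steps (untested as a whole; paths are placeholders):
#!/bin/sh
# wrap "extract the raw image from the tarball" and "convert raw -> qcow2 (compressed)"
set -eu
TARBALL="$1"                                  # e.g. the tar downloaded from the image export
WORKDIR="$(mktemp -d)"
tar -xf "$TARBALL" -C "$WORKDIR"
RAW="$(find "$WORKDIR" -type f | head -n 1)"  # the export tar contains the raw disk
qemu-img convert -f raw -O qcow2 -c -p -S 4k "$RAW" "${TARBALL%%.tar*}.qcow2"
echo "wrote ${TARBALL%%.tar*}.qcow2"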
👍 1
If we had a block device in another storage class outside of longhorn, we used this other script to migrate it to longhorn so it could be exported.
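For anyone curious, one way to do that kind of copy (not our exact script, just the idea) is a Job that dd's the source block device into a fresh longhorn PVC. Untested sketch, claim names are placeholders, and it assumes both PVCs are volumeMode: Block and the VM is powered off:
cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: copy-disk-to-longhorn
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: copy
        image: busybox
        command: ["dd", "if=/dev/source-disk", "of=/dev/target-disk", "bs=4M"]
        volumeDevices:
        - name: source
          devicePath: /dev/source-disk
        - name: target
          devicePath: /dev/target-disk
      volumes:
      - name: source
        persistentVolumeClaim:
          claimName: old-csi-pvc        # the disk on the 3rd party storage class
      - name: target
        persistentVolumeClaim:
          claimName: new-longhorn-pvc   # pre-created longhorn PVC of at least the same size
EOF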
m
b
I think you're absolutely right to push back about the vm image.
Do you want me to comment more on the github issue or here?
m
gh issue please - thanks
yeah, i was trying to get the vm image to work, but it's a bit tricky
velero ended up fighting with kubevirt cdi
b
Yeah I'm not surprised.
BTW overall, I think this is great.
I think that's all I got.
Hope it's helpful!
👍 1
Sorry, freakin' github left my comments as pending until I "published" a review.
m
all good - appreciate the feedback so far
yeah, when i looked at the comments timestamp, i thought i missed them yesterday
b
not you... it was meeeeee. 🙃
m
ideally, i really don't want you folks to have to worry about LH BackingImage, BackupBackingImage, etc.