# harvester
a
I have some qcow2 images I uploaded to Harvester to migrate some libvirt virtual machines over to Harvester. Once I create a virtual machine from an uploaded image, am I safe to delete the uploaded image? If I understand correctly (and please do correct me if I am wrong), this should be fine, as a volume is created using the image, so after creation the image is not needed.
b
You're not. The images are "backing images", so it's like a layered snapshot: your "volume" is just keeping track of the diff against the backing image. You'll have to flatten the image onto a non-image-backed storage class.
I wrote a script for migrating the storage class, and one for flattening an image to the default storage class (which in Harvester defaults to `harvester-longhorn`). Let me know if you want either one of those. (I actually used this just this morning.)
a
I see, thanks for the explanation, that makes a lot of sense! I would much appreciate the migration scripts
b
This takes 3 arguments: `<pvcName> <namespace> <optionalNewVolume/PVCName>`. If you don't provide all 3, it slaps `-flat` onto the back of `$1`. It makes a Job that pulls a container and uses `dd` to make a block copy from one storage class to the other. This only works for block-device volumes! Licensed under MIT. 🙂
👍 2
example of running it:
```
./pvc-disk-flattener bobby-disk-0-bdny7 default bobby-root
```
pvcName is what shows up in the Volumes tab. You need `kubectl` working against your Harvester cluster for this script to work.
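That default-name behavior can be sketched in plain bash (the function and variable names here are my assumptions, not taken from the actual script):

```shell
# Sketch of the flattener's argument handling as described above.
# Function and variable names are assumptions, not the actual script's.
flatten_target_name() {
  local pvc="$1" new_name="$3"
  # No 3rd argument => slap "-flat" onto the back of $1:
  echo "${new_name:-${pvc}-flat}"
}

flatten_target_name bobby-disk-0-bdny7 default            # -> bobby-disk-0-bdny7-flat
flatten_target_name bobby-disk-0-bdny7 default bobby-root # -> bobby-root
```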
Same as above, only it'll migrate to whatever storage class you put in as the 4th parameter. `$3` is required to use this one.
example of running it:
```
./pvc-disk-migrator mtdev-db-data canvas mtdev-db-data-ceph csi-rbd-sc
```
The `csi-rbd-sc` is a Ceph RBD storage class we set up in the cluster, but you just swap that out with whatever you want to migrate to.
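Based on the `dd` description above, the kind of Job these scripts create might look roughly like this. The image, device paths, and resource names here are all assumptions for illustration, not the actual script's output:

```yaml
# Hypothetical sketch of a dd block-copy Job (all names are assumptions):
apiVersion: batch/v1
kind: Job
metadata:
  name: my-pvc-disk-flattener
  namespace: default
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: dd
        image: alpine
        command: ["dd", "if=/dev/source-disk", "of=/dev/target-disk", "bs=4M"]
        volumeDevices:
        - name: source
          devicePath: /dev/source-disk
        - name: target
          devicePath: /dev/target-disk
      volumes:
      - name: source
        persistentVolumeClaim:
          claimName: my-pvc
      - name: target
        persistentVolumeClaim:
          claimName: my-pvc-flat
```

Both PVCs have to be `volumeMode: Block` for `volumeDevices`/`devicePath` to work, which is consistent with the scripts only handling block-device volumes.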
After you run the script, delete the job/pods and you can delete the old volumes and images.
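That cleanup could be sketched like this; it only prints the `kubectl` commands so you can review them first, and the `<pvc>-disk-flattener` job name is an assumption (check `kubectl get jobs -n <namespace>` for the real one):

```shell
# Hypothetical cleanup helper; the "<pvc>-disk-flattener" job name is an
# assumption -- verify with `kubectl get jobs -n <namespace>` before deleting.
cleanup_cmds() {
  local pvc="$1" ns="$2"
  printf 'kubectl -n %s delete job %s-disk-flattener\n' "$ns" "$pvc"
  printf 'kubectl -n %s delete pvc %s\n' "$ns" "$pvc"
}

cleanup_cmds bobby-disk-0-bdny7 default
```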
a
Thanks! I will happily credit and greatly appreciate the scripts. Are there any downsides to running a flattened image like this?
b
None that I'm aware of. If anything, it makes the VMs more flexible.
Harvester's migration from OldHarvesterCluster -> NewHarvesterCluster is currently just a shared backup, but having image-backed VMs causes complications: you have to recreate the image backing and storage class manually, which is a pain. And it's a huge timebomb if you don't want to end up with 40 backing images of different versions of Alma 9 or Ubuntu or whatever for users.
👍 1
a
Trying to run pvc-disk-flattener gives me this output when I plug in a PVC and namespace (I'm okay with `<pvc>-flat`, so I omitted `$3`):
```
We got a image backed disk: vmi-2848ae11-328a-4db4-97ed-bdb45d71062d
The default storage class (harvester-longhorn) looks flat.
We're going to make a new 101Gi disk called  with the default StorageClass: harvester-longhorn.
jq: error: syntax error, unexpected '[', expecting FORMAT or QQSTRING_START (Unix shell quoting issues?) at <top-level>, line 1:
.metadata.annotations.["pv.kubernetes.io/bind-completed"]
jq: 1 compile error
```
Probably Unix formatting, like the error says… I fixed that by removing the `.` char after `metadata.annotations` and before the opening bracket `[` on line 73, so
```
jq -r '.metadata.annotations.["pv.kubernetes.io/bind-completed"]'
```
becomes
```
jq -r '.metadata.annotations["pv.kubernetes.io/bind-completed"]'
```
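A minimal repro of that jq fix, runnable anywhere jq is installed (the sample JSON here is made up for illustration):

```shell
# jq needs bracket notation for keys containing dots or slashes, and newer
# jq versions reject a '.' immediately before the '[':
json='{"metadata":{"annotations":{"pv.kubernetes.io/bind-completed":"yes"}}}'
printf '%s' "$json" | jq -r '.metadata.annotations["pv.kubernetes.io/bind-completed"]'
# prints: yes
```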
```
❯ ./pvc-disk-flattener cobbler-1-disk-0-ee6gp ops
Looking to flatten cobbler-1-disk-0-ee6gp from image backed
We got a image backed disk: vmi-2848ae11-328a-4db4-97ed-bdb45d71062d
The default storage class (harvester-longhorn) looks flat.
We're going to make a new 101Gi disk called  with the default StorageClass: harvester-longhorn.
error: error parsing STDIN: error converting YAML to JSON: yaml: line 16: could not find expected ':'

Waiting for cobbler-1-disk-0-ee6gp-disk-flattener job to complete...Error from server (NotFound): jobs.batch "cobbler-1-disk-0-ee6gp-disk-flattener" not found
```
b
Yeah, you're doing something weird. You're missing a `:` in the YAML, and it doesn't know what to call the new disk.
a
Woohoo! I'm on macOS 15.4 with jq 1.7.1.
b
I'm on Fedora 41 and jq is the same version. But honestly it's probably bash, not jq, that's screwing you up. Mine is 5.2.26.
a
5.2
b
Add the third value.
a
Oh wait, yep, I'm on 5.2.26.
b
```
./pvc-disk-flattener cobbler-1-disk-0-ee6gp ops cobbler-root
```
In case Slack screwed up the scripts somehow.
a
```
❯ ./pvc-disk-flattener cobbler-1-disk-0-ee6gp ops cobbler-root
Looking to flatten cobbler-1-disk-0-ee6gp from image backed
We got a image backed disk: vmi-2848ae11-328a-4db4-97ed-bdb45d71062d
The default storage class (harvester-longhorn) looks flat.
We're going to make a new 101Gi disk called sacops-cobbler-root with the default StorageClass: harvester-longhorn.
Error from server (NotFound): persistentvolumeclaims "cobbler-root" not found
error: error parsing STDIN: error converting YAML to JSON: yaml: line 16: could not find expected ':'

Waiting for cobbler-1-disk-0-ee6gp-disk-flattener job to complete...^C
```
I manually created a PVC and checked; it worked when done by hand…
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-new-disk
  namespace: ops
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: harvester-longhorn
  volumeMode: Block
```
```
kubectl apply -f test
persistentvolumeclaim/test-new-disk created

kubectl -n ops get pvc test-new-disk -o json | jq -r '.metadata.annotations["pv.kubernetes.io/bind-completed"]'
yes
```
Welp, the gist script worked. Thanks, Slack.
b
I hate Slack.
👆 1
Slack hates me
a
Worked wonderfully. Thanks again!