# harvester
m
Please. Am I doing something wrong with my questions? This is my 3rd or 4th attempt at getting some help here and I am always ignored. I would even appreciate a short "That use case is not officially supported" or something similar, because then I can look for other solutions. But to be left hanging sucks :(
b
You're not doing anything wrong with your questions, please don't feel discouraged by this. The most likely thing is that other people reading the questions don't know, or they don't have a viable configuration to test and reproduce what you're experiencing so they can't start to form hypotheses. If you run an
lsblk -f
from within the guest between boots, do the UUIDs stay the same? If so, I think you can update your fstab to refer to the devices by UUID.
Here's a page with more info on what that entails: https://askubuntu.com/a/1182114
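For example (the device name, mount point, and filesystem here are just placeholders), you'd read the UUID once and then reference it in fstab instead of the device name:
blkid -s UUID -o value /dev/vdb1
# prints the filesystem UUID; then the /etc/fstab entry looks like:
# UUID=<uuid-from-blkid>  /data  xfs  defaults  0 0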
m
Yeah sorry, I was just getting a bit frustrated with this issue. The problem with the UUID was that there was no way for me to know the UUID in advance, and I didn't want to manually pin the UUID in the fstab after first boot. So I was looking on the bare-metal nodes for a mapping from the PCI device addresses (which can be gathered from the Harvester UI) to some unique reference that is the same on the Harvester node and later in the VM with the drive attached. I thought I had a solution using the serial number that I got using
sudo lspci -vv -s 0000:81:00.0 | grep Serial
Capabilities: [270 v1] Device Serial Number 00-14-ee-83-01-17-4c-00
And I thought I could map this to the /dev/disk/by-id references
lrwxrwxrwx 1 root root 13 Aug 16 18:34 nvme-WUS4BB076D7P3E3_A061BA18 -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 Aug 16 18:34 nvme-WUS4BB076D7P3E3_A061BA18-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 13 Aug 16 18:34 nvme-WUS4BB076D7P3E3_A061FBCD -> ../../nvme1n1
lrwxrwxrwx 1 root root 15 Aug 16 18:34 nvme-WUS4BB076D7P3E3_A061FBCD-part1 -> ../../nvme1n1p1
lrwxrwxrwx 1 root root 13 Aug 16 18:34 nvme-WUS4BB076D7P3E3_A0659D0C -> ../../nvme3n1
lrwxrwxrwx 1 root root 15 Aug 16 18:34 nvme-WUS4BB076D7P3E3_A0659D0C-part1 -> ../../nvme3n1p1
lrwxrwxrwx 1 root root 13 Aug 16 18:34 nvme-WUS4BB076D7P3E3_A065E3A8 -> ../../nvme2n1
lrwxrwxrwx 1 root root 15 Aug 16 18:34 nvme-WUS4BB076D7P3E3_A065E3A8-part1 -> ../../nvme2n1p1
lrwxrwxrwx 1 root root 13 Aug 16 18:34 nvme-eui.01000000000000000014ee8300af9c00 -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 Aug 16 18:34 nvme-eui.01000000000000000014ee8300af9c00-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 13 Aug 16 18:34 nvme-eui.01000000000000000014ee8300b35000 -> ../../nvme1n1
lrwxrwxrwx 1 root root 15 Aug 16 18:34 nvme-eui.01000000000000000014ee8300b35000-part1 -> ../../nvme1n1p1
lrwxrwxrwx 1 root root 13 Aug 16 18:34 nvme-eui.01000000000000000014ee83013c9f80 -> ../../nvme3n1
lrwxrwxrwx 1 root root 15 Aug 16 18:34 nvme-eui.01000000000000000014ee83013c9f80-part1 -> ../../nvme3n1p1
lrwxrwxrwx 1 root root 13 Aug 16 18:34 nvme-eui.01000000000000000014ee8301596a80 -> ../../nvme2n1
lrwxrwxrwx 1 root root 15 Aug 16 18:34 nvme-eui.01000000000000000014ee8301596a80-part1 -> ../../nvme2n1p1
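(Side note: to cross-check on the node which serial belongs to which PCI address, something along these lines works; this assumes the usual sysfs layout for NVMe controllers and is just a sketch:
for c in /sys/class/nvme/nvme*; do
    echo "$(basename "$c") pci=$(cat "$c/address") serial=$(cat "$c/serial")"
done
The serial then shows up again in the nvme-<model>_<serial> links under /dev/disk/by-id.)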
And then just mount them inside my cloud-config with this ID, but the problem there is that Harvester/KubeVirt also randomly bumps the .01 part to .02, .03 on subsequent setups. Sooo... then I kinda took a step back and just wrote a silly little bash script that takes whatever ordering the disks have at first boot and creates an fstab that permanently mounts those drives by UUID with their current filesystems, no matter how the drives get ordered later. Now I just have to back up the fstab of each VM, and when I spin up new VMs I can reuse that fstab; as long as the filesystems on the drives are intact, the UUIDs should be good and the mount points will end up in the right order. (hopefully)
#!/bin/sh

# Backup the current fstab
cp /etc/fstab /etc/fstab.bak

# Iterate over the NVMe devices in name order
for idx in 0 1 2 3; do
    DEVICE="/dev/nvme${idx}n1p1"

    # Check if device exists
    if [ ! -b "$DEVICE" ]; then
        echo "Device $DEVICE does not exist. Exiting."
        exit 1
    fi

    # Get the UUID of the device
    UUID=$(blkid -s UUID -o value "$DEVICE")

    if [ -z "$UUID" ]; then
        echo "Failed to get UUID for $DEVICE. Exiting."
        exit 1
    fi

    # Define the mount point
    MOUNT_POINT="/mnt/disk$((idx+1))"

    # Create mount point if it doesn't exist
    [ ! -d "$MOUNT_POINT" ] && mkdir -p "$MOUNT_POINT"

    # Append to /etc/fstab
    echo "Adding UUID=$UUID to /etc/fstab for mounting at $MOUNT_POINT..."
    echo "UUID=$UUID $MOUNT_POINT xfs defaults 0 0" >> /etc/fstab
done

# Optional: Mount all filesystems according to fstab
echo "Mounting all filesystems according to /etc/fstab..."
mount -a

echo "Script completed!"
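In case it helps anyone, this is roughly how a script like that can be dropped into the cloud-init user data so it only runs on first boot (the path and filename are just examples I made up):
#cloud-config
write_files:
  - path: /usr/local/bin/make-fstab.sh
    permissions: "0755"
    content: |
      # ... the script above ...
runcmd:
  - [ sh, /usr/local/bin/make-fstab.sh ]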
I would just have liked to have more unique references to the individual disks in the Harvester UI, because the only unique identifier there is the PCI address, and those are just not available inside the VM... but maybe this is just out of scope for the hypervisor, and if someone wants to go the route of PCI-attached drives they should be aware that some self-tinkering is required. Thanks anyways for the response :)