adamant-kite-43734
08/16/2023, 12:19 PM

miniature-lock-53926
08/16/2023, 4:21 PM

bored-painting-68221
08/16/2023, 6:16 PM
If you run lsblk -f from within the guest in between boots, do the UUIDs stay the same? If so, I think you can update your fstab to refer to the devices by UUID.
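For example, an entry like this (the UUID here is illustrative; use the ones lsblk -f reports):

UUID=3e6be9de-8139-4a8b-9106-a43f08d823a6 /mnt/disk1 xfs defaults 0 0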
bored-painting-68221
08/16/2023, 6:18 PM

miniature-lock-53926
08/17/2023, 9:39 AM
sudo lspci -vv -s 0000:81:00.0 | grep Serial
Capabilities: [270 v1] Device Serial Number 00-14-ee-83-01-17-4c-00
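To check every NVMe controller at once instead of one PCI address at a time, a loop like this should work (0108 is the PCI class code for NVMe controllers; an untested sketch):

for addr in $(lspci -d ::0108 | awk '{print $1}'); do
    echo "$addr: $(sudo lspci -vv -s "$addr" | grep 'Serial Number')"
done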
miniature-lock-53926
08/17/2023, 9:50 AM
ls -l /dev/disk/by-id gives me:
lrwxrwxrwx 1 root root 13 Aug 16 18:34 nvme-WUS4BB076D7P3E3_A061BA18 -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 Aug 16 18:34 nvme-WUS4BB076D7P3E3_A061BA18-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 13 Aug 16 18:34 nvme-WUS4BB076D7P3E3_A061FBCD -> ../../nvme1n1
lrwxrwxrwx 1 root root 15 Aug 16 18:34 nvme-WUS4BB076D7P3E3_A061FBCD-part1 -> ../../nvme1n1p1
lrwxrwxrwx 1 root root 13 Aug 16 18:34 nvme-WUS4BB076D7P3E3_A0659D0C -> ../../nvme3n1
lrwxrwxrwx 1 root root 15 Aug 16 18:34 nvme-WUS4BB076D7P3E3_A0659D0C-part1 -> ../../nvme3n1p1
lrwxrwxrwx 1 root root 13 Aug 16 18:34 nvme-WUS4BB076D7P3E3_A065E3A8 -> ../../nvme2n1
lrwxrwxrwx 1 root root 15 Aug 16 18:34 nvme-WUS4BB076D7P3E3_A065E3A8-part1 -> ../../nvme2n1p1
lrwxrwxrwx 1 root root 13 Aug 16 18:34 nvme-eui.01000000000000000014ee8300af9c00 -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 Aug 16 18:34 nvme-eui.01000000000000000014ee8300af9c00-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 13 Aug 16 18:34 nvme-eui.01000000000000000014ee8300b35000 -> ../../nvme1n1
lrwxrwxrwx 1 root root 15 Aug 16 18:34 nvme-eui.01000000000000000014ee8300b35000-part1 -> ../../nvme1n1p1
lrwxrwxrwx 1 root root 13 Aug 16 18:34 nvme-eui.01000000000000000014ee83013c9f80 -> ../../nvme3n1
lrwxrwxrwx 1 root root 15 Aug 16 18:34 nvme-eui.01000000000000000014ee83013c9f80-part1 -> ../../nvme3n1p1
lrwxrwxrwx 1 root root 13 Aug 16 18:34 nvme-eui.01000000000000000014ee8301596a80 -> ../../nvme2n1
lrwxrwxrwx 1 root root 15 Aug 16 18:34 nvme-eui.01000000000000000014ee8301596a80-part1 -> ../../nvme2n1p1
And then I could just mount them inside my cloud-config with this ID, but the problem there is that Harvester/KubeVirt also randomly bumps the .01 part of the EUI to .02 or .03 on subsequent setups.
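For reference, a by-id mount in the cloud-config would have looked something like this (mount point, filesystem type, and options are illustrative; the by-id path is taken from the listing above):

#cloud-config
mounts:
  - [ /dev/disk/by-id/nvme-WUS4BB076D7P3E3_A061BA18-part1, /mnt/disk1, xfs, defaults, "0", "0" ]

Each mounts entry is [device, mount point, fs type, options, dump, pass], mirroring an fstab line.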
Sooo ... then I kinda took a step back and just wrote a silly little bash script that takes the ordering of the disks at first boot and builds an fstab from their UUIDs, so those drives are permanently mounted by filesystem UUID no matter how the devices are ordered later. Now I just have to back up the fstab of each VM, and when I spin up new VMs I can reuse that fstab; as long as the filesystems on the drives are intact, the UUIDs should be stable and the mount points will end up in the right order. (hopefully)
#!/bin/sh
set -eu

# Back up the current fstab before appending to it
cp /etc/fstab /etc/fstab.bak

# Iterate over the NVMe devices in name order
for idx in 0 1 2 3; do
    DEVICE="/dev/nvme${idx}n1p1"

    # Check that the block device exists
    if [ ! -b "$DEVICE" ]; then
        echo "Device $DEVICE does not exist. Exiting."
        exit 1
    fi

    # Get the filesystem UUID of the partition
    UUID=$(blkid -s UUID -o value "$DEVICE" || true)
    if [ -z "$UUID" ]; then
        echo "Failed to get UUID for $DEVICE. Exiting."
        exit 1
    fi

    # Define the mount point (nvme0n1p1 -> /mnt/disk1, and so on)
    MOUNT_POINT="/mnt/disk$((idx + 1))"
    mkdir -p "$MOUNT_POINT"

    # Append to /etc/fstab unless this UUID is already listed,
    # so re-running the script does not create duplicate entries
    if ! grep -q "UUID=$UUID" /etc/fstab; then
        echo "Adding UUID=$UUID to /etc/fstab for mounting at $MOUNT_POINT..."
        echo "UUID=$UUID $MOUNT_POINT xfs defaults 0 0" >> /etc/fstab
    fi
done

# Mount all filesystems according to fstab
echo "Mounting all filesystems according to /etc/fstab..."
mount -a

echo "Script completed!"
I would just have liked more unique references to the individual disks in the Harvester UI, because the only unique identifier there is the PCI address, and those are just not available inside the VM... but maybe this is out of scope for the hypervisor, and if someone wants to go the route of PCI-attached drives, they should be aware that some self-tinkering is required.
Thanks anyways for the response :)