I'm pretty sure my single Harvester node is not using the second NVMe disk and that everything is running on the first one.
In the UI, Storage shows the default disk as not ready and says the UUID does not match. When I select "Add Disk" it shows me nvme0n1, but then a format option (which I am not prepared to use unless I know Harvester is not actually using that disk).
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 3G 1 loop /
sda 8:0 0 476.9G 0 disk
├─sda1 8:1 0 64M 0 part
├─sda2 8:2 0 50M 0 part /oem
├─sda3 8:3 0 15G 0 part /run/initramfs/cos-state
├─sda4 8:4 0 8G 0 part
├─sda5 8:5 0 80G 0 part /usr/local
└─sda6 8:6 0 373.8G 0 part /var/lib/harvester/defaultdisk
nvme0n1 259:0 0 476.9G 0 disk
I have never really been a *nix engineer, but through years of supporting infrastructure equipment based on *nix (EMC, VMware, F5) and having to get into the guts of things, I have picked up a few things.
How do I determine whether nvme0n1 really is a separate, unused disk and not just another name for sda?
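For anyone following along, here is how I'd check from a root shell on the node (a sketch; lsblk, readlink, and the /dev/disk/by-id symlinks are all standard parts of the host OS):

```shell
# Distinct physical disks report distinct models, serial numbers, and WWNs:
lsblk -d -o NAME,MODEL,SERIAL,WWN,SIZE 2>/dev/null || true

# The by-id symlinks map serials/WWNs back to kernel device names:
ls -l /dev/disk/by-id/ 2>/dev/null || true

# Each device also has its own sysfs path: sda resolves under a SATA/SCSI
# controller, nvme0n1 under a PCIe NVMe controller.
readlink -f /sys/block/sda /sys/block/nvme0n1 || true
```

If two names showed the same serial or WWN they would be the same device (e.g. multipath), but a SATA/SCSI disk and an NVMe namespace sit behind different controllers and drivers, so distinct serials here mean distinct hardware.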
Sorry, I meant this to be a reply, for proper threading:
lspci is not installable on Harvester's read-only filesystem, and while the experimental "list PCI devices" option in the UI is enabled, I can't see anything through it.
/proc/bus/pci/devices is a bit too long for the combination of session, screen size, etc. that I am using.
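In case it helps anyone in the same spot: without lspci, sysfs exposes the same PCI IDs one file per device, which is much easier to read on a small screen than /proc/bus/pci/devices. (The PCI_ROOT variable is only there so the snippet can be tried against a test directory; on the node it's just /sys/bus/pci/devices.)

```shell
# List every PCI device as "address vendor:device class", one per line.
# NVMe controllers have PCI class 0x010802, so they are easy to spot.
PCI_ROOT=${PCI_ROOT:-/sys/bus/pci/devices}

for d in "$PCI_ROOT"/*; do
  printf '%s %s:%s class=%s\n' \
    "${d##*/}" "$(cat "$d/vendor")" "$(cat "$d/device")" "$(cat "$d/class")"
done
```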
I suspect a number of the problems I have been having are due to disk pressure on that one little NVMe guy. It's super fast, but I suppose that through all the layers of virtualization going on there is some latency, especially if any swapping is happening inside the KVM VMs.
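A quick way to see whether the node really is I/O-bound (a sketch; the PSI interface needs kernel >= 4.20, which Harvester's kernel should have, and /proc/diskstats is always there):

```shell
# Pressure Stall Information: the "full" line is how long tasks were
# completely stalled waiting on I/O (needs CONFIG_PSI; degrades gracefully).
cat /proc/pressure/io 2>/dev/null || echo "PSI not available on this kernel"

# Raw per-disk counters: field 13 of /proc/diskstats is milliseconds spent
# doing I/O, so sampling it twice gives a rough utilization figure.
awk '$3 ~ /^(sda|nvme0n1)$/ { print $3, "ms_doing_io=" $13 }' /proc/diskstats
```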
I took a chance and force-formatted it.
The UI says, "The disk has already been force-formatted. Current file system is ext4, You can format it manually."
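To double-check that the format took and the disk is now in service (as far as I recall from the docs, Harvester mounts extra disks under /var/lib/harvester/extra-disks/<uuid>, so treat that path as an assumption and trust /proc/mounts):

```shell
# Confirm the new ext4 filesystem is on the device:
lsblk -f /dev/nvme0n1 2>/dev/null || true

# Confirm the kernel actually has it mounted somewhere:
grep nvme0n1 /proc/mounts || echo "nvme0n1 not mounted yet"
```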