# harvester
@microscopic-salesclerk-94412 Ah, so you have a working Harvester cluster and you are about to add a node, right? Anyway, the issue is still the same: your installation wants to use /dev/sda, whatever device that is, and it fails. I'm new to Harvester, but normal Linux / Kubernetes troubleshooting should work; I would check the block devices with lsblk on the offered debug console (a quick sketch below).

Your VM issue seems to me a different one, and it probably happened to me as well in the past. Some nodes in your cluster are part of the Longhorn service, and some are not. The VM probably got scheduled to a node which is not part of the Longhorn service and so cannot use Longhorn storage. It was a new finding to me that in order to use storage on a given node, that node needs to be part of the Longhorn service; if for some reason a pod is started on a node which is not a Longhorn node, it will fail to get its storage. The solution in my case was to add all nodes in the cluster to Longhorn as well, and because only a few of them actually had disks, I excluded the others from storage replication. Longhorn was not fully happy, complaining about running a pod on a node which has no storage to host at least one copy of the volume data (data locality, which makes sense with high IOPS / large amounts of data moved), but the pod started and did its disk I/O over the network.
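Roughly what I'd run on the debug console to figure out what /dev/sda is, assuming it gives you a root shell (the output columns are standard lsblk ones):

```sh
# List all block devices with size, type, model and mountpoint, to see
# what /dev/sda actually is and whether the intended install disk exists
lsblk -o NAME,SIZE,TYPE,MODEL,MOUNTPOINT

# Show stable identifiers, useful if the installer should target a disk
# by /dev/disk/by-id/... instead of a bare /dev/sdX name that can change
ls -l /dev/disk/by-id/
```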
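For the Longhorn side, a minimal sketch of how excluding a diskless node from replica scheduling can look, assuming kubectl access and Longhorn in the usual longhorn-system namespace (the node name here is hypothetical):

```sh
# Longhorn tracks its nodes as nodes.longhorn.io resources; list them first
kubectl -n longhorn-system get nodes.longhorn.io

# Disable replica scheduling on a node that has no disks, so Longhorn
# won't try to place volume copies there; the node can still run pods
# that attach Longhorn volumes over the network
kubectl -n longhorn-system patch nodes.longhorn.io node-without-disk \
  --type merge -p '{"spec":{"allowScheduling":false}}'
```

The same toggle is available in the Longhorn UI under the Node tab, which is where I actually did it.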