05/29/2022, 2:03 PM
Having upgraded from Harvester 1.0.0 to 1.0.1, I now have a VM that is stuck in the "Starting" phase. The events from the VM say:
Pod virt-launcher-kube004-kgc64
(combined from similar events): Unable to attach or mount volumes: unmounted volumes=[rootdisk], unattached volumes=[libvirt-runtime cloudinitdisk-ndata public ephemeral-disks container-disks hotplug-disks sockets cloudinitdisk-udata private rootdisk cdrom-disk vardisk]: timed out waiting for the condition

Pod virt-launcher-kube004-kgc64
AttachVolume.Attach failed for volume "pvc-50ef97d3-065d-4a5f-b986-36294317c269" : rpc error: code = Aborted desc = volume pvc-50ef97d3-065d-4a5f-b986-36294317c269 is not ready for workloads
What is the best way for me to diagnose the issue (I'm guessing it's in Longhorn) and recover from this situation? Thanks :-)
It might just be easier to delete the VM and re-create it. Cattle not pets...


05/30/2022, 10:21 PM
No real advice, I'm just curious: did you run the upgrade while the VM was running? I'd also suggest checking Longhorn... it's only cattle when it's a Kubernetes node; otherwise it's a pet and you need to be able to fix it.
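For anyone landing here later, a rough sketch of how one might check the Longhorn side with kubectl (the volume name is taken from the error message above; `longhorn-system` is the default install namespace, and the `longhornvolume` label selector is an assumption based on how Longhorn usually labels its resources — adjust to your setup):

```shell
# List Longhorn Volume custom resources; the one backing the stuck PVC
# is named after the PV (pvc-50ef97d3-... in the error above).
kubectl -n longhorn-system get volumes.longhorn.io

# Show the state and robustness of the specific volume; a "faulted" or
# stuck "detached"/"attaching" state usually explains the
# "not ready for workloads" error.
kubectl -n longhorn-system get volumes.longhorn.io \
  pvc-50ef97d3-065d-4a5f-b986-36294317c269 \
  -o jsonpath='{.status.state} {.status.robustness}{"\n"}'

# Look at the replicas and engine for that volume for failed instances
# (label selector assumed; "kubectl describe" them for details).
kubectl -n longhorn-system get replicas.longhorn.io,engines.longhorn.io \
  -l longhornvolume=pvc-50ef97d3-065d-4a5f-b986-36294317c269
```

The Longhorn UI (reachable via the Harvester support page or by exposing the `longhorn-frontend` service) shows the same volume/replica health at a glance, which is often quicker than kubectl.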


05/31/2022, 8:01 AM
I didn't know how to check Longhorn, so I decided to cut my losses and rebuild. It was a Kubernetes agent node, so I wasn't personally attached to it.