# harvester
j
Here is the YAML config file.
```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  annotations:
    harvesterhci.io/vmRunStrategy: RerunOnFailure
    harvesterhci.io/volumeClaimTemplates: '[{"metadata":{"name":"vm-disk-0-blwtn","annotations":{"harvesterhci.io/imageId":"default/image-zjb8v"}},"spec":{"accessModes":["ReadWriteMany"],"resources":{"requests":{"storage":"10Gi"}},"volumeMode":"Block","storageClassName":"longhorn-image-zjb8v"}}]'
    kubevirt.io/latest-observed-api-version: v1
    kubevirt.io/storage-observed-api-version: v1alpha3
    network.harvesterhci.io/ips: '[]'
  creationTimestamp: "2022-06-29T19:31:21Z"
  finalizers:
  - wrangler.cattle.io/VMController.UnsetOwnerOfPVCs
  generation: 1
  labels:
    harvesterhci.io/creator: harvester
    harvesterhci.io/os: ""
  managedFields:
  - apiVersion: kubevirt.io/v1alpha3
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:kubevirt.io/latest-observed-api-version: {}
          f:kubevirt.io/storage-observed-api-version: {}
      f:status:
        .: {}
        f:conditions: {}
        f:created: {}
        f:printableStatus: {}
        f:volumeSnapshotStatuses: {}
    manager: Go-http-client
    operation: Update
    time: "2022-06-29T19:31:21Z"
  - apiVersion: kubevirt.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:harvesterhci.io/vmRunStrategy: {}
          f:harvesterhci.io/volumeClaimTemplates: {}
          f:network.harvesterhci.io/ips: {}
        f:finalizers:
          .: {}
          v:"wrangler.cattle.io/VMController.UnsetOwnerOfPVCs": {}
        f:labels:
          .: {}
          f:harvesterhci.io/creator: {}
          f:harvesterhci.io/os: {}
      f:spec:
        .: {}
        f:runStrategy: {}
        f:template:
          .: {}
          f:metadata:
            .: {}
            f:annotations: {}
            f:labels: {}
          f:spec:
            .: {}
            f:domain:
              .: {}
              f:cpu:
                .: {}
                f:cores: {}
                f:sockets: {}
                f:threads: {}
              f:devices:
                .: {}
                f:disks: {}
                f:inputs: {}
                f:interfaces: {}
              f:features:
                .: {}
                f:acpi:
                  .: {}
                  f:enabled: {}
              f:machine:
                .: {}
                f:type: {}
              f:resources:
                .: {}
                f:limits:
                  .: {}
                  f:cpu: {}
                  f:memory: {}
            f:evictionStrategy: {}
            f:hostname: {}
            f:networks: {}
            f:volumes: {}
    manager: harvester
    operation: Update
    time: "2022-06-29T19:31:21Z"
  name: vm
  namespace: default
  resourceVersion: "101838"
  uid: 82f3406c-e521-4639-9aba-719ade3630ea
spec:
  runStrategy: RerunOnFailure
  template:
    metadata:
      annotations:
        harvesterhci.io/sshNames: '[]'
      creationTimestamp: null
      labels:
        harvesterhci.io/vmName: vm
    spec:
      domain:
        cpu:
          cores: 12
          sockets: 1
          threads: 1
        devices:
          disks:
          - bootOrder: 1
            disk:
              bus: virtio
            name: disk-0
          - disk:
              bus: virtio
            name: cloudinitdisk
          inputs:
          - bus: usb
            name: tablet
            type: tablet
          interfaces:
          - masquerade: {}
            model: virtio
            name: default
        features:
          acpi:
            enabled: true
        machine:
          type: q35
        memory:
          guest: 12188Mi
        resources:
          limits:
            cpu: "12"
            memory: 12Gi
          requests:
            cpu: 750m
            memory: 8Gi
      evictionStrategy: LiveMigrate
      hostname: vm
      networks:
      - name: default
        pod: {}
      volumes:
      - name: disk-0
        persistentVolumeClaim:
          claimName: vm-disk-0-blwtn
      - cloudInitNoCloud:
          networkDataSecretRef:
            name: vm-7tpgf
          secretRef:
            name: vm-7tpgf
        name: cloudinitdisk
status:
  conditions:
  - lastProbeTime: "2022-06-29T19:31:21Z"
    lastTransitionTime: "2022-06-29T19:31:21Z"
    message: Guest VM is not reported as running
    reason: GuestNotRunning
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2022-06-29T19:31:21Z"
    message: '0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  created: true
  printableStatus: ErrorUnschedulable
  volumeSnapshotStatuses:
  - enabled: true
    name: disk-0
  - enabled: false
    name: cloudinitdisk
    reason: Snapshot is not supported for this volumeSource type [cloudinitdisk]
```
g
the PVC claim associated with this VM can't be satisfied
you may want to check the PVC claims in the same namespace as the VM
the associated launcher pod will have the PVC details if needed
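A minimal sketch of those checks, assuming the default namespace and the PVC name from the spec above; the kubevirt.io=virt-launcher label is the standard KubeVirt launcher-pod label, and <launcher-pod-name> is a placeholder:

```bash
# List PVCs in the VM's namespace and check whether they are Bound or Pending
kubectl get pvc -n default

# Show events on the VM's disk PVC (claimName taken from the spec above)
kubectl describe pvc vm-disk-0-blwtn -n default

# Find the virt-launcher pod for the VM and inspect it for volume/attach errors
kubectl get pods -n default -l kubevirt.io=virt-launcher
kubectl describe pod <launcher-pod-name> -n default
```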
j
hmmm, this is fresh out of the box after installation, everything is at defaults
is it possible that it's a hardware issue? I am installing Harvester on a consumer motherboard (an ROG board) with an AMD Ryzen CPU
is there any place I can get this information for debugging?
> the associated launcher pod will have the PVC details if needed
g
kubectl get pvc -n default
or you can just generate a support bundle and I can look at it
j
great! let me spin up the machine and get back to you, thanks!
g
👍
j
@great-bear-19718 Finally figured it out: it turned out I forgot to turn on KVM support for the AMD CPU, which indirectly detached the volume whenever I started the VM and caused this problem. Once I enabled the AMD KVM (SVM) support in the BIOS, it worked fine.
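For reference, a minimal sketch of how to confirm hardware virtualization is exposed to the OS, assuming shell access to the Harvester node; these are generic Linux checks, not Harvester-specific commands:

```bash
# Count CPU virtualization flags (svm for AMD, vmx for Intel); 0 means it is disabled in the BIOS
grep -E -c '(svm|vmx)' /proc/cpuinfo

# With virtualization enabled, the KVM device and kernel modules should be present
ls -l /dev/kvm
lsmod | grep kvm
```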