
stale-painting-80203

08/21/2022, 9:21 PM
Hi folks, I have a new Harvester install on an Intel machine, and when I try to create a VM I am getting the following error:
0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
I see the following events:
ProvisioningSucceeded
PersistentVolumeClaim wvm-01-disk-0-xltya
Successfully provisioned volume pvc-b9673386-f74a-48eb-b079-159252159b46
10 secs ago
ProvisioningSucceeded
PersistentVolumeClaim wvm-01-disk-1-ocfip
Successfully provisioned volume pvc-5b4f57a1-0ed2-4c7a-ab32-fcc005526dbc
10 secs ago
ExternalProvisioning
PersistentVolumeClaim wvm-01-disk-0-xltya
Waiting for a volume to be created, either by external provisioner "driver.longhorn.io" or manually created by system administrator
18 secs ago
Provisioning
PersistentVolumeClaim wvm-01-disk-0-xltya
External provisioner is provisioning volume for claim "default/wvm-01-disk-0-xltya"
18 secs ago
Provisioning
PersistentVolumeClaim wvm-01-disk-1-ocfip
External provisioner is provisioning volume for claim "default/wvm-01-disk-1-ocfip"
18 secs ago
ExternalProvisioning
PersistentVolumeClaim wvm-01-disk-1-ocfip
Waiting for a volume to be created, either by external provisioner "driver.longhorn.io" or manually created by system administrator
18 secs ago
SuccessfulCreate
VirtualMachineInstance wvm-01
Created virtual machine pod virt-launcher-wvm-01-w54fc
18 secs ago
SuccessfulCreate
VirtualMachineInstance wvm-01
Created PodDisruptionBudget kubevirt-disruption-budget-jwlsp
18 secs ago
SuccessfulCreate
VirtualMachine wvm-01
Started the virtual machine by creating the new virtual machine instance wvm-01
18 secs ago
Seems the disks did get created:
kubectl get pvc -n default
NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
wvm-01-disk-0-xltya   Bound    pvc-b9673386-f74a-48eb-b079-159252159b46   10Gi       RWX            longhorn-image-cjgst   5m38s
wvm-01-disk-1-ocfip   Bound    pvc-5b4f57a1-0ed2-4c7a-ab32-fcc005526dbc   60Gi       RWX            longhorn               5m38s
Any suggestions on how to debug this issue?
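A hedged first pass at debugging, using only the names that already appear in the events above, is to look at the pending virt-launcher pod's scheduling events and the two PVCs it is waiting on:

kubectl -n default describe pod virt-launcher-wvm-01-w54fc                 # scheduler events explain why the pod stays Pending
kubectl -n default describe pvc wvm-01-disk-0-xltya wvm-01-disk-1-ocfip    # PVC events show the provisioning side

The unbound-PVC message normally clears once the Longhorn volumes behind the claims can actually be attached, so the next step is usually on the Longhorn side.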

square-orange-60123

08/21/2022, 9:29 PM
do you have virtualization enabled on the machine?

stale-painting-80203

08/21/2022, 9:32 PM
Yes. I saw an earlier thread https://rancher-users.slack.com/archives/C01GKHKAG0K/p1656531309987229 where someone reported that on AMD, KVM support was not enabled, so I immediately checked my BIOS. It's an Intel machine and I see that virtualization is enabled.
🤔 1
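A quick way to double-check virtualization from the node itself, independent of the BIOS screen (a minimal sketch; how you reach a shell on the Harvester node depends on your setup):

grep -cE 'vmx|svm' /proc/cpuinfo    # Intel exposes vmx, AMD exposes svm; a non-zero count means VT-x/AMD-V is visible to the OS
ls -l /dev/kvm                      # present only when the kvm kernel module is loaded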

bland-farmer-13503

08/22/2022, 2:37 AM
Could you help check the status of the related Longhorn volumes?
> kubectl describe lhv -n longhorn-system pvc-b9673386-f74a-48eb-b079-159252159b46

> kubectl describe lhv -n longhorn-system pvc-5b4f57a1-0ed2-4c7a-ab32-fcc005526dbc

stale-painting-80203

08/22/2022, 3:42 AM
Thanks for the suggestion. Here is the output. I can't tell if there are any issues:
server1:~ # kubectl describe lhv -n longhorn-system pvc-5b4f57a1-0ed2-4c7a-ab32-fcc005526dbc
Name:         pvc-5b4f57a1-0ed2-4c7a-ab32-fcc005526dbc
Namespace:    longhorn-system
Labels:       longhornvolume=pvc-5b4f57a1-0ed2-4c7a-ab32-fcc005526dbc
              recurring-job-group.longhorn.io/default=enabled
Annotations:  <none>
API Version:  longhorn.io/v1beta1
Kind:         Volume
Metadata:
  Creation Timestamp:  2022-08-21T20:58:12Z
  Finalizers:
    longhorn.io
  Generation:  1
  Managed Fields:
    API Version:  longhorn.io/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"longhorn.io":
        f:labels:
          .:
          f:longhornvolume:
          f:recurring-job-group.longhorn.io/default:
      f:spec:
    Manager:      longhorn-manager
    Operation:    Update
    Time:         2022-08-21T20:58:12Z
    API Version:  longhorn.io/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
    Manager:         longhorn-manager
    Operation:       Update
    Subresource:     status
    Time:            2022-08-21T20:58:12Z
  Resource Version:  3566001
  UID:               0a7bcc60-21c9-4b39-a0d5-4a9376409ae4
Spec:
  Standby:                    false
  Access Mode:                rwx
  Backing Image:              
  Base Image:                 
  Data Locality:              disabled
  Data Source:                
  Disable Frontend:           false
  Disk Selector:              <nil>
  Encrypted:                  false
  Engine Image:               longhornio/longhorn-engine:v1.2.4
  From Backup:                
  Frontend:                   blockdev
  Last Attached By:           
  Migratable:                 true
  Migration Node ID:          
  Node ID:                    
  Node Selector:              <nil>
  Number Of Replicas:         3
  Replica Auto Balance:       ignored
  Revision Counter Disabled:  false
  Size:                       64424509440
  Stale Replica Timeout:      30
Status:
  Actual Size:  0
  Clone Status:
    Snapshot:       
    Source Volume:  
    State:          
  Conditions:
    Restore:
      Last Probe Time:       
      Last Transition Time:  2022-08-21T20:58:18Z
      Message:               
      Reason:                
      Status:                False
      Type:                  restore
    Scheduled:
      Last Probe Time:       
      Last Transition Time:  2022-08-21T20:58:18Z
      Message:               Reset schedulable due to allow volume creation with degraded availability
      Reason:                
      Status:                True
      Type:                  scheduled
    Toomanysnapshots:
      Last Probe Time:       
      Last Transition Time:  2022-08-21T20:58:18Z
      Message:               
      Reason:                
      Status:                False
      Type:                  toomanysnapshots
  Current Image:             longhornio/longhorn-engine:v1.2.4
  Current Node ID:           
  Expansion Required:        false
  Frontend Disabled:         false
  Is Standby:                false
  Kubernetes Status:
    Last PVC Ref At:  
    Last Pod Ref At:  
    Namespace:        default
    Pv Name:          pvc-5b4f57a1-0ed2-4c7a-ab32-fcc005526dbc
    Pv Status:        Bound
    Pvc Name:         wvm-01-disk-1-ocfip
    Workloads Status:
      Pod Name:          virt-launcher-wvm-01-w54fc
      Pod Status:        Pending
      Workload Name:     wvm-01
      Workload Type:     VirtualMachineInstance
  Last Backup:           
  Last Backup At:        
  Last Degraded At:      
  Owner ID:              server1
  Pending Node ID:       
  Remount Requested At:  
  Restore Initiated:     false
  Restore Required:      false
  Robustness:            unknown
  Share Endpoint:        
  Share State:           
  State:                 detached
Events:                  <none>
server1:~ #

bland-farmer-13503

08/22/2022, 7:45 AM
The State is detached. Could you check whether you can attach the volume on the Longhorn dashboard? Thank you. You can access the Longhorn dashboard as I mentioned in this comment: https://rancher-users.slack.com/archives/C01GKHKAG0K/p1660875605155319?thread_ts=1660869086.987049&cid=C01GKHKAG0K
👍 1
🙏 1
I think you can also check the longhorn-manager-<random-string> pod logs. There may be some error messages there.
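A rough sketch of both suggestions, assuming the app=longhorn-manager label and longhorn-frontend service that a stock Longhorn install ships with (an embedded Harvester install may expose the UI differently):

kubectl -n longhorn-system get pods -l app=longhorn-manager                   # one manager pod per node
kubectl -n longhorn-system logs <longhorn-manager-pod> | grep pvc-5b4f57a1    # grep the log for the stuck volume
kubectl -n longhorn-system port-forward svc/longhorn-frontend 8080:80         # then open http://localhost:8080 for the dashboard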

stale-painting-80203

08/22/2022, 5:11 PM
When I installed Harvester, I used nvme0n1 (1.8T) as the installation disk and nvme1n1 (1.8T) for VM data. I am wondering if I need to do any additional configuration to be able to use nvme1n1.
I figured out the root cause: the issue was due to specifying too much memory when creating the VM. Unfortunately the error is not obvious. I am now able to create new VMs, but I still see the scheduling failure as seen in the image above. Wondering if that is related to some config issue?
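For reference, one hedged way to spot that kind of over-allocation (assuming the single node is named server1, as in the describe output above) is to compare the node's allocatable memory with what is already requested:

kubectl describe node server1 | grep -A 7 'Allocatable:'            # what the node can actually offer
kubectl describe node server1 | grep -A 10 'Allocated resources:'   # what running pods have already requested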

bland-farmer-13503

08/23/2022, 1:00 AM
Not sure how many nodes you have in your cluster. By default, Longhorn will try to provision a volume with 3 replicas. You can try to modify spec.numberOfReplicas for existing volumes, or change the default replica count in Setting > General for future volumes.
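A minimal sketch of both options, using the lhv short name from earlier and Longhorn's default-replica-count setting (worth double-checking the setting name against your Longhorn version):

kubectl -n longhorn-system patch lhv pvc-5b4f57a1-0ed2-4c7a-ab32-fcc005526dbc \
  --type merge -p '{"spec":{"numberOfReplicas":1}}'                            # drop an existing volume to one replica
kubectl -n longhorn-system edit settings.longhorn.io default-replica-count     # default for future volumes (Setting > General in the UI)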

stale-painting-80203

08/23/2022, 6:41 AM
I have 1 Harvester node.

bland-farmer-13503

08/23/2022, 9:19 AM
Okay, I think you can change the replica count to 1, because LH will try to put replicas on different nodes by default.

stale-painting-80203

08/23/2022, 7:37 PM
Thanks for your help by the way. Also is there a way to make LH available to other clusters?

bland-farmer-13503

08/24/2022, 1:44 AM
Yes, do you want to use it with Harvester or just Longhorn?

stale-painting-80203

08/24/2022, 1:47 AM
I have a cluster set up on another server and want to give it access to the LH that was automatically set up as part of the Harvester install.
As an example, if I install Harbor (a container registry), it automatically creates the volumes it needs if LH is set up in its cluster.
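For context, that automatic provisioning is just a PVC that names the longhorn StorageClass; a minimal, hypothetical sketch of the kind of claim a chart like Harbor creates under the hood (the claim name and size here are illustrative, not from the thread):

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-registry-data        # hypothetical name, not from the thread
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn         # the StorageClass Longhorn installs
  resources:
    requests:
      storage: 10Gi
EOF

A cluster outside Harvester only sees that StorageClass if Longhorn (or another CSI driver) is installed in that cluster, which is why the backup/restore route below comes up.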

bland-farmer-13503

08/24/2022, 1:49 AM
So Harbor is not in a VM, it's like a StatefulSet using LH volumes?

stale-painting-80203

08/24/2022, 2:10 AM
Harbor will be in a VM.

bland-farmer-13503

08/25/2022, 3:02 AM
Sorry for the late reply. For this case, you have to take a backup in Harvester. After you set the same backup target on the new Harvester cluster, LH will automatically synchronize the data. However, the VM volume is based on a Backing Image, and restoring Backing Images is not supported yet (ref #4165). You have to manually restore the related VM images on the new cluster. You can take a look at the document: https://docs.harvesterhci.io/v1.0/vm/backup-restore/#restore-a-new-vm-on-another-harvester-cluster