# harvester
s
Do you have virtualization enabled on the machine?
s
Yes. I saw an earlier thread https://rancher-users.slack.com/archives/C01GKHKAG0K/p1656531309987229 where someone reported that KVM support was not enabled on their AMD machine, so I immediately checked my BIOS. It's an Intel machine, and I can see that virtualization is enabled.
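For what it's worth, virtualization support can also be confirmed from the OS side (a generic check, assuming a Linux host; not from the original thread):
```
# A non-zero count means the CPU exposes the VT-x (vmx) or AMD-V (svm) flags
grep -cE 'vmx|svm' /proc/cpuinfo
```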
b
Could you help check the status of the related Longhorn volumes?
```
kubectl describe lhv -n longhorn-system pvc-b9673386-f74a-48eb-b079-159252159b46
kubectl describe lhv -n longhorn-system pvc-5b4f57a1-0ed2-4c7a-ab32-fcc005526dbc
```
s
Thanks for the suggestion. Here is the output. I can't tell if there are any issues:
```
server1:~ # kubectl describe lhv -n longhorn-system pvc-5b4f57a1-0ed2-4c7a-ab32-fcc005526dbc
Name:         pvc-5b4f57a1-0ed2-4c7a-ab32-fcc005526dbc
Namespace:    longhorn-system
Labels:       longhornvolume=pvc-5b4f57a1-0ed2-4c7a-ab32-fcc005526dbc
              recurring-job-group.longhorn.io/default=enabled
Annotations:  <none>
API Version:  longhorn.io/v1beta1
Kind:         Volume
Metadata:
  Creation Timestamp:  2022-08-21T20:58:12Z
  Finalizers:
    longhorn.io
  Generation:  1
  Managed Fields:
    API Version:  longhorn.io/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"longhorn.io":
        f:labels:
          .:
          f:longhornvolume:
          f:recurring-job-group.longhorn.io/default:
      f:spec:
    Manager:      longhorn-manager
    Operation:    Update
    Time:         2022-08-21T20:58:12Z
    API Version:  longhorn.io/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
    Manager:         longhorn-manager
    Operation:       Update
    Subresource:     status
    Time:            2022-08-21T20:58:12Z
  Resource Version:  3566001
  UID:               0a7bcc60-21c9-4b39-a0d5-4a9376409ae4
Spec:
  Standby:                    false
  Access Mode:                rwx
  Backing Image:              
  Base Image:                 
  Data Locality:              disabled
  Data Source:                
  Disable Frontend:           false
  Disk Selector:              <nil>
  Encrypted:                  false
  Engine Image:               longhornio/longhorn-engine:v1.2.4
  From Backup:                
  Frontend:                   blockdev
  Last Attached By:           
  Migratable:                 true
  Migration Node ID:          
  Node ID:                    
  Node Selector:              <nil>
  Number Of Replicas:         3
  Replica Auto Balance:       ignored
  Revision Counter Disabled:  false
  Size:                       64424509440
  Stale Replica Timeout:      30
Status:
  Actual Size:  0
  Clone Status:
    Snapshot:       
    Source Volume:  
    State:          
  Conditions:
    Restore:
      Last Probe Time:       
      Last Transition Time:  2022-08-21T20:58:18Z
      Message:               
      Reason:                
      Status:                False
      Type:                  restore
    Scheduled:
      Last Probe Time:       
      Last Transition Time:  2022-08-21T20:58:18Z
      Message:               Reset schedulable due to allow volume creation with degraded availability
      Reason:                
      Status:                True
      Type:                  scheduled
    Toomanysnapshots:
      Last Probe Time:       
      Last Transition Time:  2022-08-21T20:58:18Z
      Message:               
      Reason:                
      Status:                False
      Type:                  toomanysnapshots
  Current Image:             longhornio/longhorn-engine:v1.2.4
  Current Node ID:           
  Expansion Required:        false
  Frontend Disabled:         false
  Is Standby:                false
  Kubernetes Status:
    Last PVC Ref At:  
    Last Pod Ref At:  
    Namespace:        default
    Pv Name:          pvc-5b4f57a1-0ed2-4c7a-ab32-fcc005526dbc
    Pv Status:        Bound
    Pvc Name:         wvm-01-disk-1-ocfip
    Workloads Status:
      Pod Name:          virt-launcher-wvm-01-w54fc
      Pod Status:        Pending
      Workload Name:     wvm-01
      Workload Type:     VirtualMachineInstance
  Last Backup:           
  Last Backup At:        
  Last Degraded At:      
  Owner ID:              server1
  Pending Node ID:       
  Remount Requested At:  
  Restore Initiated:     false
  Restore Required:      false
  Robustness:            unknown
  Share Endpoint:        
  Share State:           
  State:                 detached
Events:                  <none>
server1:~ #
```
b
The `State` is detached. Could you check whether you can attach the volume on the Longhorn dashboard? Thank you. You can access the Longhorn dashboard as I mentioned in this comment: https://rancher-users.slack.com/archives/C01GKHKAG0K/p1660875605155319?thread_ts=1660869086.987049&cid=C01GKHKAG0K
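If reaching the dashboard is awkward, a port-forward also works (a minimal sketch, assuming the standard longhorn-frontend service in the longhorn-system namespace):
```
# Forward the Longhorn UI to localhost, then browse to http://localhost:8080
kubectl port-forward -n longhorn-system svc/longhorn-frontend 8080:80
```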
I think you can also check the longhorn-manager-<random-string> pod logs. There may be some error messages there.
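For example (a sketch; app=longhorn-manager is the usual label selector, but verify it on your cluster):
```
# List the longhorn-manager pods, then tail their logs for errors
kubectl get pods -n longhorn-system -l app=longhorn-manager
kubectl logs -n longhorn-system -l app=longhorn-manager --tail=100
```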
s
I was able to get to the Longhorn dashboard and can see that the VM disks were not attached. I tried to attach them, but got an error: "The volume cannot be scheduled".
When I installed Harvester, I used nvme0n1 (1.8T) as the installation disk and nvme1n1 (1.8T) for VM data. I am wondering if I need to do any additional configuration to be able to use nvme1n1.
I figured out the root cause: the issue was due to specifying too much memory when creating a VM. Unfortunately the error is not obvious. I am now able to create new VMs, but I still see the scheduling failure shown in the image above. I'm wondering if that is related to some config issue?
b
I'm not sure how many nodes you have in your cluster. By default, Longhorn will try to provision a volume with 3 replicas. You can try to modify `spec.numberOfReplicas` for existing volumes (see the sketch below), or you can change the default replica count in Setting > General for future volumes.
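A minimal sketch of the per-volume change, using the volume name from the describe output above:
```
# Lower the replica count of an existing Longhorn volume to 1 (e.g. on a single-node cluster)
kubectl patch lhv -n longhorn-system pvc-5b4f57a1-0ed2-4c7a-ab32-fcc005526dbc \
  --type merge -p '{"spec":{"numberOfReplicas":1}}'
```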
s
I have 1 Harvester node.
b
Okay, I think you can change the replica count to 1, because LH will try to put each replica on a different node by default.
s
Thanks for your help, by the way. Also, is there a way to make LH available to other clusters?
b
Yes, do you want to use it with Harvester or just Longhorn?
s
I have a cluster set up on another server and want to give it access to the LH that was automatically set up as part of the Harvester install.
As an example, if I install Harbor (a container registry), it automatically creates the volumes it needs if LH is set up in its cluster.
b
So Harbor is not in a VM; it's like a StatefulSet using LH volumes?
s
Harbor will be in a VM.
b
Sorry for the late reply. For this case, you have to take a backup in Harvester. After you set the same backup target on the new Harvester cluster, LH will automatically synchronize the data. However, VM volumes are based on a Backing Image, and restoring Backing Images is not supported yet (ref #4165), so you have to manually restore the related VM images on the new cluster. You can take a look at the documentation: https://docs.harvesterhci.io/v1.0/vm/backup-restore/#restore-a-new-vm-on-another-harvester-cluster
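For reference, the currently configured backup target can also be inspected from the CLI (a sketch using Longhorn's Setting CRD; the backup-target setting name follows Longhorn's documented settings):
```
# Show the backup target Longhorn is configured to use
kubectl get settings.longhorn.io -n longhorn-system backup-target -o yaml
```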