# longhorn-storage
m
In the Longhorn GUI, go to your volumes, select the PVC, and there should be a three-dot menu. Click it, and one of the options should be replicas; set that to 1.
g
thanks for your kind response ... I used the Longhorn UI > click PVC > click the 3 dots, but there is no setting for replicas, possibly because this was set up with Ansible. I checked all the Ansible playbooks and set replicas to "1"; there must be some PVC default setting I am missing. thanks for your help, hubbert
running longhorn v1.5.1
running kubevirt 1.1.1
running k3s v1.25.7
m
give me a sec, I need to install longhorn real quick
Under Volumes, find your PVC and click on the PVC name.
Then, once in the PVC volume view, click on the hamburger menu and click Update Replicas Count.
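If the UI option is hard to find, the same change can likely be made from the CLI by patching the Longhorn Volume custom resource. A sketch; the volume name is a placeholder:

```
# Longhorn volumes are custom resources in the longhorn-system namespace;
# the volume name matches the PV name (pvc-<uuid>):
kubectl -n longhorn-system get volumes.longhorn.io

# Lower the replica count to 1 (replace <volume-name> with your volume):
kubectl -n longhorn-system patch volumes.longhorn.io <volume-name> \
  --type merge -p '{"spec":{"numberOfReplicas":1}}'
```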
g
hiya, thank you so much I found replicas and reduced replica count to 1
now, stuck here:

```
$ kubectl get pods
NAME                                                   READY   STATUS              RESTARTS       AGE
my-release-nginx-ingress-controller-5f8ccf57c8-6f5jx   1/1     Running             1 (4d2h ago)   7d17h
importer-prime-07943f42-da58-42c0-b3fd-f8c350e26eed    0/1     ContainerCreating   0              4d2h
```

thanks a TON for your help
m
Ok, check why the container is not starting. Check the events for the pod.
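For example, using the importer pod name from your output above:

```
# The pod's events appear at the bottom of the describe output:
kubectl describe pod importer-prime-07943f42-da58-42c0-b3fd-f8c350e26eed

# Or list the events for that pod directly:
kubectl get events --field-selector involvedObject.name=importer-prime-07943f42-da58-42c0-b3fd-f8c350e26eed
```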
g
```
Events:
  Type     Reason              Age                    From                     Message
  ----     ------              ----                   ----                     -------
  Warning  FailedMount         32m (x1572 over 4d3h)  kubelet                  Unable to attach or mount volumes: unmounted volumes=[cdi-scratch-vol], unattached volumes=[cdi-data-vol cdi-scratch-vol kube-api-access-qkdtg]: timed out waiting for the condition
  Warning  FailedMount         20m (x512 over 4d2h)   kubelet                  Unable to attach or mount volumes: unmounted volumes=[cdi-scratch-vol], unattached volumes=[kube-api-access-qkdtg cdi-data-vol cdi-scratch-vol]: timed out waiting for the condition
  Warning  FailedAttachVolume  19m (x2908 over 4d3h)  attachdetach-controller  AttachVolume.Attach failed for volume "pvc-4572cf8e-91b5-4a0a-813a-2521131c18d7" : rpc error: code = Aborted desc = volume pvc-4572cf8e-91b5-4a0a-813a-2521131c18d7 is not ready for workloads
  Warning  FailedMount         16m (x530 over 4d2h)   kubelet                  Unable to attach or mount volumes: unmounted volumes=[cdi-scratch-vol], unattached volumes=[cdi-scratch-vol kube-api-access-qkdtg cdi-data-vol]: timed out waiting for the condition
  Warning  FailedMount         7m (x12 over 15m)      kubelet                  Unable to attach or mount volumes: unmounted volumes=[cdi-scratch-vol], unattached volumes=[cdi-scratch-vol kube-api-access-qkdtg cdi-data-vol]: error processing PVC default/prime-07943f42-da58-42c0-b3fd-f8c350e26eed-scratch: PVC is being deleted
  Warning  FailedAttachVolume  2m45s (x7 over 15m)    attachdetach-controller  AttachVolume.Attach failed for volume "pvc-4572cf8e-91b5-4a0a-813a-2521131c18d7" : PersistentVolume "pvc-4572cf8e-91b5-4a0a-813a-2521131c18d7" is marked for deletion
  Warning  FailedMount         2m1s (x30 over 16m)    kubelet                  Unable to attach or mount volumes: unmounted volumes=[cdi-scratch-vol], unattached volumes=[cdi-data-vol cdi-scratch-vol kube-api-access-qkdtg]: error processing PVC default/prime-07943f42-da58-42c0-b3fd-f8c350e26eed-scratch: PVC is being deleted
```
m
Looking at that, your reclaim policy was set to Delete. The PVC is being deleted; check the logs for the volume in the Longhorn UI or longhorn-manager. What caused the PVC to detach? Your previous post mentioned you only have 60G of storage; did the PVC max out the storage?
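Something like this should confirm it (the longhorn-manager pod label assumes a default Longhorn install):

```
# Check the PV's reclaim policy (PV name taken from the events above):
kubectl get pv pvc-4572cf8e-91b5-4a0a-813a-2521131c18d7 \
  -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'

# Tail longhorn-manager for clues about why the volume detached:
kubectl -n longhorn-system logs -l app=longhorn-manager --tail=100
```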
g
yep, you put your finger right on it --- 60G is plenty with one replica, but 60G of storage is maxed out with 3 replicas, since each replica is a full copy (3 x 60G = 180G). I will work around it with 180GB for 3 replicas (yes, I know 3 replicas on one device is dumb)
l
Also bump K3s, Longhorn, and KubeVirt ... those are quite old versions.
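If Longhorn was installed from the upstream manifest (rather than Helm or your Ansible playbooks), the upgrade is roughly the following; this is only a sketch, the target version is illustrative, and the Longhorn docs list the supported upgrade paths:

```
# Apply the newer Longhorn manifest over the existing install:
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/longhorn.yaml
```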
m
In the Longhorn UI you should be able to delete the extra replicas. Check on the host whether the storage is recovered. You can also create a new storage class with 1 replica and use that one.
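A minimal sketch of such a storage class; the name is illustrative, and `numberOfReplicas` / `staleReplicaTimeout` are standard Longhorn parameters:

```yaml
# StorageClass backed by Longhorn with a single replica.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-single-replica
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: Retain            # keep the PV and its data if the PVC is deleted
volumeBindingMode: Immediate
parameters:
  numberOfReplicas: "1"          # one replica instead of Longhorn's default of 3
  staleReplicaTimeout: "2880"    # minutes before a failed replica is cleaned up
```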
g
thank you, will do!! hubbert@i4ops.com
still having problems -- I updated software versions
l
So what are you running now? And it seems KubeVirt is involved here. Are you using Harvester?
And are you trying to use CDI to import something?
f
Can you generate a support bundle?
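For reference, the Longhorn UI has a Generate Support Bundle link at the bottom of the page. On Longhorn v1.4+ a SupportBundle custom resource should also work; the fields below are from memory, so treat this as a sketch and double-check against the Longhorn docs:

```yaml
# Sketch: ask Longhorn to generate a support bundle (field names are assumptions).
apiVersion: longhorn.io/v1beta2
kind: SupportBundle
metadata:
  name: support-bundle-importer-stuck
  namespace: longhorn-system
spec:
  description: "importer pod stuck, volume not ready for workloads"
  issueURL: ""
```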
g
hi Phan --- support bundle attached. thanks so so much