hundreds-airport-66196
02/03/2023, 3:31 PM
sparse-businessperson-74827
02/03/2023, 3:36 PM
clean-lawyer-76009
02/03/2023, 4:52 PM
Not ready for workload, Volume Faulted
The output of the csi-attacher log is:
I0203 10:10:21.981136 1 csi_handler.go:231] Error processing "csi-": failed to attach: rpc error: code = Aborted desc = volume pvc- is not ready for workloads
The folder on the host belongs to the root user in the root group.
How can I drill down into this issue further?
Could this be a permission issue?
late-needle-80860
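One way to drill down, sketched here against the csi-attacher line quoted above (an illustration, not a definitive procedure): the gRPC status code in that line tells you which side rejected the attach, before you go looking at host-side file permissions.

```shell
# Pull the gRPC status code out of the csi-attacher error quoted above.
log='Error processing "csi-": failed to attach: rpc error: code = Aborted desc = volume pvc- is not ready for workloads'
code=$(printf '%s\n' "$log" | sed -n 's/.*code = \([A-Za-z]*\).*/\1/p')
echo "$code"
# An Aborted code with "not ready for workloads" comes from the Longhorn CSI
# plugin refusing the attach, not from kubelet file permissions, so a
# root-owned host folder is unlikely to be the cause. The next place to look
# is the volume CR itself, e.g.:
#   kubectl -n longhorn-system get volumes.longhorn.io <pvc-name> -o yaml
```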
02/04/2023, 9:07 PM
crooked-cat-21365
02/06/2023, 11:35 AM
stocky-article-82001
02/09/2023, 5:18 PM
stocky-article-82001
02/09/2023, 5:19 PM
stocky-article-82001
02/09/2023, 5:27 PM
unable to salvage volume pvc-87002c74-b16f-431e-af75-6f9b1f461616: invalid volume state to salvage: attaching
stocky-article-82001
02/09/2023, 5:32 PM
unable to salvage volume pvc-87002c74-b16f-431e-af75-6f9b1f461616: invalid robustness state to salvage: unknown, too
stocky-article-82001
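The two salvage errors above imply a precondition on the volume's state. A minimal sketch of that check, as an assumption inferred from the error text rather than from Longhorn's source: salvage is only accepted once the volume is detached and its robustness is faulted, which explains why it was rejected while the volume was stuck in attaching/unknown.

```shell
# Assumed salvage precondition, inferred from the errors quoted above.
can_salvage() {
  state=$1; robustness=$2
  [ "$state" = "detached" ] && [ "$robustness" = "faulted" ]
}

# The states from the error messages fail the check:
can_salvage attaching unknown && echo "salvageable" || echo "invalid state to salvage"
# A detached, faulted volume would pass it:
can_salvage detached faulted && echo "salvageable" || echo "invalid state to salvage"
```

In other words, the volume has to finish detaching (or be forced detached) before a salvage attempt can go through.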
02/09/2023, 5:32 PM
quaint-alarm-7893
02/10/2023, 6:19 AM
State: Attached
Health: Degraded (no node redundancy: all the healthy replicas are running on the same node)
Ready for workload: Not Ready
Conditions:
restore
scheduled
Frontend: Block Device
Attached Node & Endpoint:
Size: 100 Gi
Actual Size: Unknown
Data Locality: disabled
Access Mode: ReadWriteMany
Backing Image: vdi-image-hzwdh
Backing Image Size: 50 Gi
Engine Image: longhornio/longhorn-engine:v1.3.2
Created: a month ago
Encrypted: False
Node Tags:
Disk Tags:
Last Backup:
Last Backup At:
Replicas Auto Balance: ignored
Instance Manager:
instance-manager-e-c2f010c0
instance-manager-e-50d28d2d
Last time used by Pod: 10 hours ago
Namespace: vdi
PVC Name: vmname-rootdisk-ossjw
PV Name: pvc-13787ebc-7d4e-41ad-9206-de9a6adb938a
PV Status: Bound
Revision Counter Disabled: False
Last Pod Name: virt-launcher-vmname-mpbn2
Last Workload Name: vmname
Last Workload Type: VirtualMachineInstance
Note: it shows the volume attached to two instance managers. I've tried rebooting the nodes and deleting the instance-manager pods, with no luck...
sparse-fireman-14239
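A healthy Longhorn volume normally references exactly one engine ("-e-") instance manager, so two entries as shown in the status above suggest a stale engine record survived the reboots. A small sketch that just counts the entries from the pasted status (the real check would inspect the engine CRs, e.g. `kubectl -n longhorn-system get engines.longhorn.io`, to find the duplicate):

```shell
# Engine instance managers copied from the volume status pasted above.
managers='instance-manager-e-c2f010c0
instance-manager-e-50d28d2d'

# Count engine ("-e-") entries; more than one points at a stale engine record.
count=$(printf '%s\n' "$managers" | grep -c '^instance-manager-e-')
echo "$count engine instance managers"
[ "$count" -gt 1 ] && echo "stale engine likely: check for duplicate engine CRs"
```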
02/10/2023, 1:27 PM
mysterious-rose-43856
02/10/2023, 5:13 PM
quaint-alarm-7893
02/10/2023, 11:18 PM
mysterious-rose-43856
02/11/2023, 4:29 PM
broad-machine-78396
02/13/2023, 1:29 PM
hundreds-airport-66196
02/14/2023, 6:06 PM
loud-helmet-97067
02/16/2023, 1:36 AM
quaint-alarm-7893
02/22/2023, 2:14 PM
quaint-alarm-7893
02/23/2023, 9:19 PM
quaint-alarm-7893
02/23/2023, 11:59 PM
enough-memory-12110
03/01/2023, 6:48 AM
error listing backup volume names: Timeout executing: /var/lib/longhorn/engine-binaries/rancher-mirrored-longhornio-longhorn-engine-v1.2.2/longhorn [backup ls --volume-only <s3://backups@us-east-1/>], output , stderr, , error <nil>
I tried logging in to the container and checking the problem from its side, and this is what we get:
root@longhorn-manager-57lhs:/# /var/lib/longhorn/engine-binaries/rancher-mirrored-longhornio-longhorn-engine-v1.2.2/longhorn backup ls --volume-only <s3://backup/>
ERRO[0020] Fail to list s3: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors pkg=s3
I am sure the credentials in the secret are valid and properly formatted (I copied the MinIO configuration and adjusted it to our environment).
Can anyone here help us diagnose the problem further and solve it? (edited)
blue-painting-33432
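The NoCredentialProviders error above means the engine binary found no S3 credentials at all, which usually points at the backup-target credential secret not being picked up, rather than at wrong values inside it. Longhorn expects the keys AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and (for MinIO-style endpoints) AWS_ENDPOINTS in the secret named under Settings > Backup Target Credential Secret. A sketch of the key check against local placeholder values (the values below are illustrative, not from this cluster):

```shell
# Placeholder values standing in for the decoded secret contents.
AWS_ACCESS_KEY_ID='minioadmin'
AWS_SECRET_ACCESS_KEY='minioadmin'
AWS_ENDPOINTS='https://minio.example.local:9000'

# Verify every key Longhorn needs for a MinIO-style S3 backup target is set.
missing=0
for k in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_ENDPOINTS; do
  eval "v=\$$k"
  [ -n "$v" ] || { echo "missing key: $k"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all required keys present"
```

If the keys are all present and correctly named, the next thing to confirm is that the secret lives in the namespace Longhorn reads it from and that the Backup Target Credential Secret setting matches the secret's name exactly.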
03/03/2023, 12:23 AM
blue-painting-33432
03/03/2023, 12:23 AM
Unable to attach or mount volumes: unmounted volumes=[alertmanager-kube-prometheus-stack-alertmanager-db], unattached volumes=[web-config kube-api-access-8lfhh config-volume config-out tls-assets alertmanager-kube-prometheus-stack-alertmanager-db]: timed out waiting for the condition
blue-painting-33432
03/03/2023, 12:24 AM
blue-painting-33432
03/03/2023, 12:30 AM
blue-painting-33432
03/03/2023, 1:05 AM
acceptable-soccer-28720
03/03/2023, 2:04 AM
rancher kubectl describe pod gitlab-postgresql-0 -n gitlab
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 14m default-scheduler Successfully assigned gitlab/gitlab-postgresql-0 to vik8scases-w-2
Warning FailedMount 10m kubelet Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[kube-api-access-g28wp custom-init-scripts postgresql-password dshm data]: timed out waiting for the condition
Warning FailedMount 7m49s kubelet Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[postgresql-password dshm data kube-api-access-g28wp custom-init-scripts]: timed out waiting for the condition
Warning FailedMount 5m34s kubelet Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[dshm data kube-api-access-g28wp custom-init-scripts postgresql-password]: timed out waiting for the condition
Warning FailedMount 3m18s (x2 over 12m) kubelet Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data kube-api-access-g28wp custom-init-scripts postgresql-password dshm]: timed out waiting for the condition
Warning FailedAttachVolume 2m9s (x6 over 12m) attachdetach-controller AttachVolume.Attach failed for volume "pvc-b59dafa1-3efa-44fc-92ba-e2be23e5d4a4" : timed out waiting for external-attacher of <http://driver.longhorn.io|driver.longhorn.io> CSI driver to attach volume pvc-b59dafa1-3efa-44fc-92ba-e2be23e5d4a4
Warning FailedMount 64s kubelet Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[custom-init-scripts postgresql-password dshm data kube-api-access-g28wp]: timed out waiting for the condition
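In an event list like the one above, the repeated kubelet FailedMount warnings are usually downstream noise; the attachdetach-controller's FailedAttachVolume is the actionable one, since it names the volume the external attacher timed out on. A quick filter, shown here against abbreviated sample lines taken from the output above:

```shell
# Sample events, abbreviated from the kubectl describe output above.
events='Warning FailedMount 10m kubelet Unable to attach or mount volumes: timed out waiting for the condition
Warning FailedAttachVolume 2m9s attachdetach-controller AttachVolume.Attach failed for volume "pvc-b59dafa1-3efa-44fc-92ba-e2be23e5d4a4"'

# Surface only the attach failure, which identifies the PVC to chase in Longhorn.
printf '%s\n' "$events" | grep FailedAttachVolume
```

From there, the matching Longhorn volume CR for that PVC (and its csi-attacher logs, as earlier in this thread) is where the timeout's cause will show up.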
plain-breakfast-5576
03/03/2023, 6:02 PM
rich-shoe-36510
03/08/2023, 11:03 AM