# rke2
@creamy-pencil-82913, continuing https://rancher-users.slack.com/archives/C01PHNP149L/p1660833392861649: I followed https://docs.rke2.io/backup_restore/#restoring-a-snapshot-to-new-nodes and created an HA cluster. The static pods are up, but the cluster is failing to attach volumes.
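For reference, the restore on the first new server node followed the documented flow; roughly this (a sketch, assuming the default data dir, with the snapshot path as a placeholder):
```
# stop the service on the first new server node before resetting
systemctl stop rke2-server

# reset the cluster from the etcd snapshot copied onto this node
# (<SNAPSHOT-FILE> is a placeholder for the actual snapshot name)
rke2 server \
  --cluster-reset \
  --cluster-reset-restore-path=/var/lib/rancher/rke2/server/db/snapshots/<SNAPSHOT-FILE>

# bring the first server back up, then re-join the remaining servers and agents
systemctl start rke2-server
```
After the restore and re-joining the other nodes, kube-system looks like this: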
```
>> kubectl get pod -n kube-system

cilium-4xc5q                                            1/1     Running            0          8h
cilium-89vrg                                            1/1     Running            0          8h
cilium-cg8gn                                            1/1     Running            6          8h
cilium-gbbl7                                            1/1     Running            1          8h
cilium-j8s9t                                            1/1     Running            3          8h
cilium-jfs9f                                            1/1     Running            1          179m
cilium-ld9fc                                            1/1     Running            0          8h
cilium-lz2hj                                            1/1     Running            0          8h
cilium-node-init-7ltcv                                  1/1     Running            0          8h
cilium-node-init-gzhvc                                  1/1     Running            0          8h
cilium-node-init-hqnrk                                  1/1     Running            0          179m
cilium-node-init-j2ffd                                  1/1     Running            0          8h
cilium-node-init-j5q52                                  1/1     Running            3          8h
cilium-node-init-mmbjj                                  1/1     Running            0          8h
cilium-node-init-qk6pj                                  1/1     Running            1          8h
cilium-node-init-w87qb                                  1/1     Running            3          8h
cilium-node-init-zfrt9                                  1/1     Running            0          8h
cilium-nxqxb                                            1/1     Running            0          8h
cilium-operator-fccb67dc5-srt76                         1/1     Running            5          8h
cilium-operator-fccb67dc5-wsr5m                         1/1     Running            3          8h
cloud-controller-manager-sv-svr1                           1/1     Running            3          9h
cloud-controller-manager-sv-svr2                           1/1     Running            3          8h
cloud-controller-manager-sv-svr3                           1/1     Running            3          8h
etcd-sv-svr1                                               1/1     Running            8          9h
etcd-sv-svr2                                               1/1     Running            3          8h
etcd-sv-svr3                                               1/1     Running            3          147m
external-dns-dc9dd7d74-h6dqw                            1/1     Running            1          90d
helm-install-rke2-metrics-server-cmgjc                  0/1     CrashLoopBackOff   72         5h40m
kube-apiserver-sv-svr1                                     1/1     Running            1          9h
kube-apiserver-sv-svr2                                     1/1     Running            3          8h
kube-apiserver-sv-svr3                                     1/1     Running            3          140m
kube-controller-manager-sv-svr1                            1/1     Running            3          9h
kube-controller-manager-sv-svr2                            1/1     Running            3          8h
kube-controller-manager-sv-svr3                            1/1     Running            3          8h
kube-proxy-sv-agent3                                         1/1     Running            0          7h40m
kube-proxy-sv-agent4                                         1/1     Running            0          8h
kube-proxy-sv-agent5                                         1/1     Running            0          8h
kube-proxy-sv-agent6                                         1/1     Running            0          8h
kube-proxy-sv-svr1                                         1/1     Running            1          9h
kube-proxy-sv-svr2                                         1/1     Running            3          8h
kube-proxy-sv-svr3                                         1/1     Running            3          8h
kube-proxy-sv-agent1                                          1/1     Running            0          8h
kube-proxy-sv-agent2                                          1/1     Running            0          3h
kube-scheduler-sv-svr1                                     1/1     Running            3          9h
kube-scheduler-sv-svr2                                     1/1     Running            3          8h
kube-scheduler-sv-svr3                                     1/1     Running            3          8h
kube-vip-cloud-provider-0                               1/1     Running            3          8h
kube-vip-ds-5q5qw                                       1/1     Running            3          8h
kube-vip-ds-fw8zv                                       1/1     Running            3          8h
kube-vip-ds-rmqhc                                       1/1     Running            4          8h
metrics-server-8bbfb4bdb-rzpnp                          1/1     Running            5          7h33m
rke2-coredns-rke2-coredns-855c5d9879-9fwhx              1/1     Running            0          5h40m
rke2-coredns-rke2-coredns-855c5d9879-j7wbc              0/1     CrashLoopBackOff   41         3h3m
rke2-coredns-rke2-coredns-autoscaler-7c77dcfb76-hm78m   1/1     Running            3          8h
rke2-ingress-nginx-controller-4kvdx                     1/1     Running            2          8h
rke2-ingress-nginx-controller-8k5z8                     1/1     Running            0          8h
rke2-ingress-nginx-controller-c6r5q                     1/1     Running            0          179m
rke2-ingress-nginx-controller-cx88s                     1/1     Running            0          8h
rke2-ingress-nginx-controller-jl74q                     1/1     Running            1          8h
rke2-ingress-nginx-controller-nr2qp                     1/1     Running            8          8h
rke2-ingress-nginx-controller-p6sfq                     1/1     Running            3          8h
rke2-ingress-nginx-controller-qmbzn                     1/1     Running            0          8h
rke2-ingress-nginx-controller-wj54z                     1/1     Running            0          8h
rke2-metrics-server-5df7d77b5b-b4qlw                    1/1     Running            20         74d
```
The following kubelet errors are being logged, and the volumes are not getting mounted.
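For context, these lines come from the embedded kubelet's log; roughly like this, assuming the default RKE2 data dir and service names:
```
# kubelet runs embedded in RKE2; its log is written under the agent directory
tail -f /var/lib/rancher/rke2/agent/logs/kubelet.log

# the same errors show up in the journald units
journalctl -u rke2-server -f   # on server nodes
journalctl -u rke2-agent -f    # on agent nodes
```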
```
E0911 20:16:38.965933   17195 kubelet.go:1701] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[data], unattached volumes=[data kube-api-access-ztp4j dshm]: timed out waiting for the condition" pod="cvat/cvat-postgresql-0"
E0911 20:23:07.393663   16782 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container pod=metadata-grpc-deployment-f8d68f687-5fvbs_kubeflow(d72591f7-e2c4-475f-ad83-fc59c996219a)\"" pod="kubeflow/metadata-grpc-deployment-f8d68f687-5fvbs" podUID=d72591f7-e2c4-475f-ad83-fc59c996219a
I0911 20:23:08.718940   16782 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-62552b22-3e99-4b63-8a56-69519573ae1d\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-62552b22-3e99-4b63-8a56-69519573ae1d\") pod \"loki-0\" (UID: \"8aef7574-fb66-415f-a130-6b8ec9091672\") "
E0911 20:23:08.724147   16782 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-62552b22-3e99-4b63-8a56-69519573ae1d podName: nodeName:}" failed. No retries permitted until 2022-09-11 20:25:10.724134581 +0000 UTC m=+21624.816950484 (durationBeforeRetry 2m2s). Error: "Volume not attached according to node status for volume \"pvc-62552b22-3e99-4b63-8a56-69519573ae1d\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-62552b22-3e99-4b63-8a56-69519573ae1d\") pod \"loki-0\" (UID: \"8aef7574-fb66-415f-a130-6b8ec9091672\") "
I0911 20:23:09.829046   16782 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c6597566-f0c6-40b3-be5b-9d670f51748d\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-c6597566-f0c6-40b3-be5b-9d670f51748d\") pod \"harbor-redis-0\" (UID: \"912226dd-12cf-4cb5-a54b-fb831b4e7e73\") "
E0911 20:23:09.831850   16782 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-c6597566-f0c6-40b3-be5b-9d670f51748d podName: nodeName:}" failed. No retries permitted until 2022-09-11 20:25:11.831837052 +0000 UTC m=+21625.924652956 (durationBeforeRetry 2m2s). Error: "Volume not attached according to node status for volume \"pvc-c6597566-f0c6-40b3-be5b-9d670f51748d\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-c6597566-f0c6-40b3-be5b-9d670f51748d\") pod \"harbor-redis-0\" (UID: \"912226dd-12cf-4cb5-a54b-fb831b4e7e73\") "
```
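Before forcing anything, this is roughly how I am checking where the volumes think they are attached (a sketch; the longhorn-system namespace and the app labels are the Longhorn defaults, adjust if yours differ):
```
# which node does Kubernetes think each CSI volume is attached to?
kubectl get volumeattachment | grep pvc-62552b22

# what does Longhorn itself report for the volume and its CSI components?
kubectl -n longhorn-system get volumes.longhorn.io pvc-62552b22-3e99-4b63-8a56-69519573ae1d -o wide
kubectl -n longhorn-system get pods -l app=csi-attacher
kubectl -n longhorn-system get pods -l app=longhorn-manager
```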
Any inputs on how to recover?