# longhorn-storage
a
This message was deleted.
b
Have you observed a similar log to the one described in the issue? What is your Kubernetes version?
b
Thanks for your response! My k8s version is 1.22.7 with Rancher 2.6.3. I found some unexpected kubelet log entries, as follows:
```
E0124 03:25:56.760940   14156 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-36029258-9957-4488-9ce1-e17dd8038931 podName: nodeName:}" failed. No retries permitted until 2024-01-24 03:27:00.760898181 +0000 UTC m=+60735.623166411 (durationBeforeRetry 1m4s). Error: MapVolume.WaitForAttach failed for volume "pvc-36029258-9957-4488-9ce1-e17dd8038931" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-36029258-9957-4488-9ce1-e17dd8038931") pod "virt-launcher-40-1-jsk7m" (UID: "841fa055-d9ca-48dd-b822-d12d0fc4bad8") : rpc error: code = DeadlineExceeded desc = volume pvc-36029258-9957-4488-9ce1-e17dd8038931 failed to attach to node sw-10-113-1-40
```
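For reference, a minimal sketch of how to cross-check this from the Kubernetes side (the PVC name is the one from the log above; kubectl access and the default longhorn-system namespace are assumed):

```sh
# See which node each VolumeAttachment targets and whether the CSI attacher
# considers it attached
kubectl get volumeattachment \
  -o custom-columns=NAME:.metadata.name,PV:.spec.source.persistentVolumeName,NODE:.spec.nodeName,ATTACHED:.status.attached

# Compare with what Longhorn itself reports for the same volume
# (Longhorn names its volume CR after the PV)
kubectl -n longhorn-system get volumes.longhorn.io pvc-36029258-9957-4488-9ce1-e17dd8038931
```

If the node in the VolumeAttachment differs from the node Longhorn reports, that mismatch is usually the first thing worth chasing.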
b
The log doesn't look similar to the one outlined in the issue. It might be something different. What is your Longhorn version? Have you attempted to detach and reattach the volume? Anything in the longhorn-manager logs? Additionally, consider setting the priority class to prevent the kubelet from evicting Longhorn components first when the node is under pressure.
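As a rough sketch of the two suggestions above (the label selector and setting name are assumed from a default Longhorn v1.4.x install, so adjust if your deployment differs):

```sh
# Look through the longhorn-manager logs around the time of the failed attach
kubectl -n longhorn-system logs -l app=longhorn-manager --tail=200

# Inspect or change the priority class via the Longhorn setting
# (also available in the UI under Setting > General)
kubectl -n longhorn-system get settings.longhorn.io priority-class
kubectl -n longhorn-system edit settings.longhorn.io priority-class
```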
b
My Longhorn version is v1.4.4. The volume's state is Attached in the Longhorn UI. I haven't checked the longhorn-manager logs. I'm not sure how to set the priority class, but I am sure the node is not under pressure. The priority class is as follows:
b
Have you tried deleting the pod (the one controlled by the workload) to see if the volume detaches and attaches again?
b
I deleted the pod successfully and then recreated it with the volume. The VolumeAttachment object's status is false.
When I check the volume status, it is attached, and I don't get any errors from Longhorn.
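A possible way to see why the VolumeAttachment reports false (the object name below is a placeholder; pick the one that references your PV):

```sh
# Find the VolumeAttachment that references the PV
kubectl get volumeattachment | grep pvc-36029258-9957-4488-9ce1-e17dd8038931

# status.attached and status.attachError (if any) show what the CSI
# external-attacher last recorded for this attach attempt
kubectl describe volumeattachment <volumeattachment-name>
```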
b
When you delete the pod, can you observe Longhorn detaching the volume, or does the volume remain attached?
b
Yes! The volume remains in the attached state.
b
After the pod is deleted, can you try manually detaching the volume, then creating the pod again?
b
When I manually detach the volume and then create the pod again, the volume attaches to the node, but the VolumeAttachment still gets a false status.
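One hedged guess at where to look next: if Longhorn considers the volume attached but the VolumeAttachment never flips to true, the CSI external-attacher may be stuck or erroring. The label below assumes a default Longhorn install:

```sh
# Check the CSI attacher pods and their recent logs
kubectl -n longhorn-system get pods -l app=csi-attacher
kubectl -n longhorn-system logs -l app=csi-attacher --tail=100
```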
b
hmmm 🤔
b
I think the situation is chaotic. I tried to reproduce it but failed. Have you tried freezing a worker or master node to see what happens to the volumes? And if I shut down all nodes (workers and masters together) and then start them all back up, will the volumes be normal?
b
Can you create an issue and attach the support-bundle to it?
b
Maybe I can't, because the situation appeared suddenly. I do have a vague direction, though: I will manually freeze a node to try to reproduce it.