late-needle-80860

06/20/2022, 5:58 PM
I’m having issues with Longhorn v1.2.3 prepping the filesystem of a Kafka workload. I’m seeing exit status 32 when mount … ext4 is executed, and I see this error in the csi-plugin Pod on the worker node where Longhorn is having the issue:
Mounting command: mount
Mounting arguments: -t ext4 -o defaults /dev/longhorn/pvc-77efeb41-63ad-43f6-8cc0-e67b2e820aad /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-77efeb41-63ad-43f6-8cc0-e67b2e820aad/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-77efeb41-63ad-43f6-8cc0-e67b2e820aad/globalmount: wrong fs type, bad option, bad superblock on /dev/longhorn/pvc-77efeb41-63ad-43f6-8cc0-e67b2e820aad, missing codepage or helper program, or other error.



time="2022-06-20T14:14:12Z" level=error msg="NodeStageVolume: err: rpc error: code = Internal desc = mount failed: exit status 32\nMounting command: mount\nMounting arguments: -t ext4 -o defaults /dev/longhorn/pvc-77efeb41-63ad-43f6-8cc0-e67b2e820aad /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-77efeb41-63ad-43f6-8cc0-e67b2e820aad/globalmount\nOutput: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-77efeb41-63ad-43f6-8cc0-e67b2e820aad/globalmount: wrong fs type, bad option, bad superblock on /dev/longhorn/pvc-77efeb41-63ad-43f6-8cc0-e67b2e820aad, missing codepage or helper program, or other error.\n"
Suggestions are much appreciated … thank you very much.
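A minimal first check, assuming shell access to that worker node (device path taken from the logs above): does the block device actually carry an ext4 superblock?
# What filesystem signature, if any, does the kernel see on the device?
blkid /dev/longhorn/pvc-77efeb41-63ad-43f6-8cc0-e67b2e820aad
# Read-only fsck: -n answers "no" to every prompt, so nothing is modified
fsck.ext4 -n /dev/longhorn/pvc-77efeb41-63ad-43f6-8cc0-e67b2e820aad
# Kernel messages around the failed mount
dmesg | grep -i ext4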
Hmm, solved. It helped to execute apt upgrade … that updated nfs-common; the other workers were one patch version higher than the one on the worker having the issue.
I meant open-iscsi, not nfs-common.
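Since apt upgrade fixed it the first time, a quick way to compare the open-iscsi version across workers, assuming Debian/Ubuntu nodes as the use of apt implies:
# Run on each worker node
dpkg -s open-iscsi | grep '^Version'
# Confirm the iSCSI daemon is healthy after the upgrade
# (iscsid is the unit name on Ubuntu; it may differ per distro)
systemctl status iscsid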
Having the same issue again. Updated open-iscsi and that did not do the trick. Nodes have been rebooted as well; no fix there either. With dmesg I can see VFS: Can't find ext4 filesystem … is something there giving us a clue?
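That dmesg line means the kernel found no ext4 magic where the superblock should be. One way to inspect that region directly, assuming the same device path as in the earlier logs: ext4 keeps its primary superblock at byte offset 1024, and on a formatted volume the magic bytes 53 ef should appear at offset 0x438.
# Dump the primary superblock region; an all-zero dump means no filesystem
hexdump -C -s 1024 -n 128 /dev/longhorn/pvc-77efeb41-63ad-43f6-8cc0-e67b2e820aad | head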
This is another workload, PostgreSQL-ha.
I have other workloads running with no issues. Longhorn is now at v1.3.0.
Anyone? Please

cuddly-vase-67379

06/29/2022, 12:24 PM
Did you find the same log in the csi-plugin? And is the volume attached to the same node as the workload? Could you provide the support bundle?
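One way to check the attachment question, assuming Longhorn's default longhorn-system namespace; the volume name is the PVC from the logs above, and the workload label selector is hypothetical:
# The Node column shows where the Longhorn volume is attached
kubectl -n longhorn-system get volumes.longhorn.io pvc-77efeb41-63ad-43f6-8cc0-e67b2e820aad
# Compare with the node the workload pod is scheduled on
kubectl get pods -o wide -l app=kafka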

late-needle-80860

06/29/2022, 12:27 PM
Yes, there's exactly the same log in the csi-plugin. Nope, we don't necessarily balance replicas onto the same nodes where the workload runs. Could that be it? I'll provide the bundle.
Bundle provided
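For anyone following along, a sketch of pulling the csi-plugin log that feeds into a bundle like this, assuming the default longhorn-system namespace and the standard app=longhorn-csi-plugin label (verify both on your install):
# Tail the CSI plugin container across all matching pods
kubectl -n longhorn-system logs -l app=longhorn-csi-plugin -c longhorn-csi-plugin --tail=200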