helpful-beard-54962
05/03/2022, 11:24 AM
AttachVolume.Attach failed for volume "pvc-47999557-f6c8-4a99-b594-1b26b18d260d" : rpc error: code = Internal desc = Bad response statusCode [500]. Status [500 Internal Server Error]. Body: [detail=, message=EOF, code=Server Error] from [http://longhorn-backend:9500/v1/volumes/pvc-47999557-f6c8-4a99-b594-1b26b18d260d?action=attach]
and a few redeploys later
Unable to attach or mount volumes: unmounted volumes=[temp-volume kube-api-access-m4gxw logs packages-volume code-store-volume], unattached volumes=[temp-volume kube-api-access-m4gxw logs packages-volume code-store-volume]: timed out waiting for the condition
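In case it helps later readers, a few generic first checks for a volume stuck like this; the namespace and label below are the Longhorn defaults, and the volume name is taken from the error above:

kubectl -n longhorn-system get volumes.longhorn.io pvc-47999557-f6c8-4a99-b594-1b26b18d260d -o yaml   # state/robustness of the volume CR
kubectl -n longhorn-system logs -l app=longhorn-manager --tail=200 | grep pvc-47999557                # manager-side errors mentioning this volume
kubectl -n longhorn-system get pods                                                                   # are all manager / instance-manager pods healthy?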
big-engine-61989
05/04/2022, 7:28 PM
taintToleration: "longhorn=true:NoSchedule;othertaint=true:NoSchedule"
defaultSettings:
  taintToleration: "longhorn=true:NoSchedule;othertaint=true:NoSchedule"
longhornManager:
  tolerations:
    - key: "longhorn"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
    - key: "othertaint"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
longhornDriver:
  tolerations:
    - key: "longhorn"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
    - key: "othertaint"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
average-gigabyte-2667
05/07/2022, 9:11 PM
cool-state-48214
05/09/2022, 7:35 PM
wonderful-kangaroo-15590
05/11/2022, 8:45 AM
flaky-coat-75909
05/11/2022, 2:25 PM
wide-easter-7639
05/11/2022, 4:08 PM
sticky-truck-78998
05/12/2022, 5:42 AM
helpful-beard-54962
05/15/2022, 10:58 AM
hundreds-hairdresser-46043
05/23/2022, 10:14 AM
red-planet-35817
05/24/2022, 12:50 PM
great-photographer-94826
05/24/2022, 2:49 PM
flaky-coat-75909
05/26/2022, 11:50 AM
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-longhorn-sc
...
parameters:
  ...
  nodeSelector: "storage"
?
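For comparison, a complete Longhorn StorageClass using the nodeSelector parameter might look roughly like this; the "storage" tag comes from the snippet above, the replica count and fsType are only illustrative, and the selector matches Longhorn node tags (so the target nodes would need a "storage" tag set in Longhorn):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-longhorn-sc
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "2"      # illustrative
  fsType: "ext4"             # illustrative
  nodeSelector: "storage"    # replicas are scheduled only on nodes tagged "storage"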
wide-easter-7639
05/27/2022, 4:11 PM
flaky-coat-75909
05/28/2022, 2:43 PM
flaky-coat-75909
05/30/2022, 1:10 PM
hundreds-hairdresser-46043
06/03/2022, 12:26 PM
agreeable-vegetable-79181
06/08/2022, 2:17 AM
flaky-coat-75909
06/09/2022, 8:20 AM
flaky-coat-75909
06/09/2022, 11:38 AM
602Mi
but in the frontend UI I see the Actual Size is 2.26 Gi
I have 2 snapshots
• Size: 531Mi
• Size: 907Mi
and VolumeHead is 883Mi
why do the snapshots take so much space when the total size of the postgresql data (602Mi) is less than the second snapshot?
flaky-coat-75909
06/10/2022, 1:15 PM
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/longhorn/pvc-c6b78d1e-109a-4c44-af17-ae2ad46e28b6 20G 1.5G 19G 8% /bitnami/postgresql
so Used is 1.5G
$ du -sh /bitnami/postgresql
1.5G /bitnami/postgresql
Whereas on the Longhorn frontend
I see
/dev/longhorn/pvc-c6b78d1e-109a-4c44-af17-ae2ad46e28b6
Size: 20 Gi
Actual Size: 10.7 Gi
which means the frontend is showing almost 10 times more.
And it is probably because I'm doing snapshots.
On my node, in that directory, I see these files:
[root@server22 pvc-c6b78d1e-109a-4c44-af17-ae2ad46e28b6-3277a500]# pwd
/var/lib/longhorn/replicas/pvc-c6b78d1e-109a-4c44-af17-ae2ad46e28b6-3277a500
[root@server22 pvc-c6b78d1e-109a-4c44-af17-ae2ad46e28b6-3277a500]# du -sh *
4.0K revision.counter
189M volume-head-010.img
4.0K volume-head-010.img.meta
4.0K volume.meta
531M volume-snap-035b7630-78a9-4299-af2f-ff3cc8be0f06.img
4.0K volume-snap-035b7630-78a9-4299-af2f-ff3cc8be0f06.img.meta
2.1G volume-snap-c-bi3bca-c-0dddc3f6.img
4.0K volume-snap-c-bi3bca-c-0dddc3f6.img.meta
2.5G volume-snap-c-bi3bca-c-1908b8fa.img
4.0K volume-snap-c-bi3bca-c-1908b8fa.img.meta
1.6G volume-snap-c-bi3bca-c-4b2590fb.img
4.0K volume-snap-c-bi3bca-c-4b2590fb.img.meta
1.9G volume-snap-c-bi3bca-c-d08a8ddd.img
4.0K volume-snap-c-bi3bca-c-d08a8ddd.img.meta
2.2G volume-snap-c-bi3bca-c-e827be1c.img
4.0K volume-snap-c-bi3bca-c-e827be1c.img.meta
but why are the snapshots taking so much space while all my data is only
1.5G
What am I doing wrong?
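A rough sketch of how the space could be inspected and partly reclaimed, assuming the default paths visible above; Longhorn snapshots keep blocks that the filesystem has since overwritten or deleted, so the sum of the snapshot *.img files can easily exceed what df/du report inside the volume:

du -sh /var/lib/longhorn/replicas/*    # per-replica on-disk usage on the node
# After deleting unneeded snapshots in the Longhorn UI, Longhorn coalesces and
# purges the old volume-snap-*.img files. Depending on the Longhorn version,
# trimming the mounted filesystem (run from inside the pod that mounts it) can
# also release blocks the application already freed:
fstrim /bitnami/postgresql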
stocky-article-82001
06/19/2022, 4:48 PM
stocky-beard-10620
06/20/2022, 5:44 PM
Attaching
and I can't seem to figure out what the issue is. Can I get some help troubleshooting? I've tried restarting all the pods, with no help. Thanks!
late-needle-80860
06/20/2022, 5:58 PM
exit 32 when mount … ext4 is being executed … and I see this error in the csi-plugin Pod on the worker node where Longhorn is having the issue.
Mounting command: mount
Mounting arguments: -t ext4 -o defaults /dev/longhorn/pvc-77efeb41-63ad-43f6-8cc0-e67b2e820aad /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-77efeb41-63ad-43f6-8cc0-e67b2e820aad/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-77efeb41-63ad-43f6-8cc0-e67b2e820aad/globalmount: wrong fs type, bad option, bad superblock on /dev/longhorn/pvc-77efeb41-63ad-43f6-8cc0-e67b2e820aad, missing codepage or helper program, or other error.
time="2022-06-20T14:14:12Z" level=error msg="NodeStageVolume: err: rpc error: code = Internal desc = mount failed: exit status 32\nMounting command: mount\nMounting arguments: -t ext4 -o defaults /dev/longhorn/pvc-77efeb41-63ad-43f6-8cc0-e67b2e820aad /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-77efeb41-63ad-43f6-8cc0-e67b2e820aad/globalmount\nOutput: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-77efeb41-63ad-43f6-8cc0-e67b2e820aad/globalmount: wrong fs type, bad option, bad superblock on /dev/longhorn/pvc-77efeb41-63ad-43f6-8cc0-e67b2e820aad, missing codepage or helper program, or other error.\n"
Suggestions are cherished … thank you very much.
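Not Longhorn-specific, but standard ext4 debugging for "wrong fs type, bad superblock", run on the node where the device is attached; e2fsck should only be run while nothing has the volume mounted, and repairs can be destructive:

blkid /dev/longhorn/pvc-77efeb41-63ad-43f6-8cc0-e67b2e820aad      # does the device actually carry an ext4 signature?
dmesg | tail -n 50                                                # kernel messages from the failed mount attempt
e2fsck -f /dev/longhorn/pvc-77efeb41-63ad-43f6-8cc0-e67b2e820aad  # check (and optionally repair) the filesystem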
stocky-beard-10620
06/20/2022, 6:50 PM
Stopped, Failed, or Unknown. If I inspect a given volume, it tells me that the only replica is "running" on an instance-manager with a name that doesn't match any of the currently running instance-manager pods (neither the -e nor the -r ones). Is it possible that during the upgrade the connection between replicas and instance-managers got lost or out of sync and now has to be fixed manually?
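One way to check whether the CRs still reference stale instance managers is to compare them against the pods that actually exist (longhorn-system is the default namespace; exact columns and status fields vary by Longhorn version):

kubectl -n longhorn-system get instancemanagers.longhorn.io
kubectl -n longhorn-system get replicas.longhorn.io -o wide       # inspect a replica (e.g. -o yaml) to see which instance manager it references
kubectl -n longhorn-system get engines.longhorn.io -o wide
kubectl -n longhorn-system get pods | grep instance-manager       # the instance-manager pods currently running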
hundreds-hairdresser-46043
06/21/2022, 10:53 AM
flaky-coat-75909
06/23/2022, 10:53 AM
root@longhorn-manager:/# curl 10.43.0.3:9500/metrics
does not return all the metrics, for example
longhorn_instance_manager_cpu_usage_millicpu
or
longhorn_node_cpu_capacity_millicpu
Should I fetch them from another pod? On which port?
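A guess at what is happening, in case it is useful: the longhorn-backend service load-balances across the longhorn-manager pods, and each manager exposes the metrics for its own node on port 9500, so a single curl against the service IP may miss per-node and per-instance-manager metrics served by the other managers. Something like the following would check each manager directly (label and port are the Longhorn defaults):

kubectl -n longhorn-system get pods -l app=longhorn-manager -o wide
# then, for each manager pod IP:
curl http://<manager-pod-ip>:9500/metrics | grep -E 'longhorn_(instance_manager_cpu_usage|node_cpu_capacity)_millicpu'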
helpful-beard-54962
06/28/2022, 2:54 PM
sent 57,683 bytes received 337 bytes 1,172.12 bytes/sec
sent 53,839 bytes received 337 bytes 976.14 bytes/sec
Anyone else?
ripe-queen-73614
06/28/2022, 3:17 PM
bumpy-portugal-40754
06/28/2022, 11:57 PM