late-needle-80860
06/29/2022, 12:37 PM
steep-family-74984
06/30/2022, 7:58 PM
flaky-coat-75909
07/01/2022, 2:00 PM
ancient-raincoat-46356
07/01/2022, 9:00 PM
/var/lib/longhorn is where I am thinking my provisioned PVs will be created and replicas will be stored. So, wanting to use those 20G disks on each node, I partitioned the disk, created a logical volume out of it, and then formatted it with the XFS filesystem, which is our standard format. I then mounted that disk at /mnt/DATA, created a subfolder named longhorn, and then symlinked /mnt/DATA/longhorn -> /var/lib/longhorn. Is this the correct approach to use? This is the information I am failing to find anywhere, including in the Longhorn docs.
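For reference, a rough sketch of those steps, assuming the 20G disk is /dev/sdb and using made-up LVM names (vg_data, lv_data):

# LVM on the secondary disk (device and VG/LV names here are assumptions)
pvcreate /dev/sdb
vgcreate vg_data /dev/sdb
lvcreate -l 100%FREE -n lv_data vg_data
mkfs.xfs /dev/vg_data/lv_data
# Mount at /mnt/DATA and point /var/lib/longhorn at a subfolder
mkdir -p /mnt/DATA
mount /dev/vg_data/lv_data /mnt/DATA
mkdir -p /mnt/DATA/longhorn
ln -s /mnt/DATA/longhorn /var/lib/longhorn   # assumes /var/lib/longhorn does not already exist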
sparse-businessperson-74827
07/06/2022, 12:17 PM
Liveness probe failed: dial tcp 10.42.5.68:8500: i/o timeout
when this happens, but it only happens on these pods; none of the other pods are impacted. The nodes are not overloaded either.
busy-crowd-80458
07/13/2022, 5:31 PM
MountVolume.WaitForAttach failed for volume "pvc-8c978b7c-28db-43e9-88d3-f4b43c532891" : volume pvc-8c978b7c-28db-43e9-88d3-f4b43c532891 has GET error for volume attachment csi-c1dd4b0084f8679ff1701abcdc314f2c594e85c0cfb9c287d954b7a6cf3ac4ba: volumeattachments.storage.k8s.io "csi-c1dd4b0084f8679ff1701abcdc314f2c594e85c0cfb9c287d954b7a6cf3ac4ba" is forbidden: User "system:node:emerald02" cannot get resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope: no relationship found between node 'emerald02' and this object
ripe-queen-73614
07/13/2022, 7:26 PM
worried-businessperson-13284
07/16/2022, 5:34 PM
Persistent Volume pvc-aafbf3de-7b46-43fd-a536-8595a38505f7 started to use/reuse Longhorn volume pvc-aafbf3de-7b46-43fd-a536-8595a38505f7
Does this mean LH found the old PVCs from the previous install?
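One way to check, assuming the default longhorn-system namespace, is to list the Longhorn volume CRs that survived the reinstall:

# Longhorn volume CRs left over from a previous install would show up here
kubectl -n longhorn-system get volumes.longhorn.io
# Cross-check against the cluster's PVs
kubectl get pv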
flaky-coat-75909
07/19/2022, 9:42 AM
ancient-raincoat-46356
07/21/2022, 4:46 PM
Would the replicas be the same count as the number of worker nodes, or would it be one less than the number of worker nodes (n-1)? Thinking if I deploy a Pod/PVC and it gets assigned to a node, wouldn't the replicas only need to exist on the worker nodes that are not running the Pod workload?
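For context, the replica count is set per volume or StorageClass rather than derived from the node count, and Longhorn does not exclude the node running the workload when scheduling replicas. A minimal StorageClass sketch, with an illustrative name:

# Pin the replica count for volumes provisioned from this class
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-two-replica   # illustrative name
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "2"   # each volume from this class keeps 2 replicas
EOF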
ancient-raincoat-46356
07/21/2022, 6:31 PM
Can anyone provide an example /etc/fstab entry for mounting /var/lib/longhorn to a secondary disk, to be used exclusively by Longhorn, so I'm not using the root partition space for any disk provisioning?
A little context: I have a secondary disk I want to use for Longhorn PVs. Running on Ubuntu 20.04; the secondary disk is configured as a Logical Volume. It has been formatted with the XFS filesystem and mounted at /mnt/DATA. It is 20G, and I only want Longhorn to use this 20G when provisioning new PVs.
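A minimal sketch of the fstab side under those assumptions (VG/LV names are illustrative; mounting the LV directly at /var/lib/longhorn would avoid the symlink):

# /etc/fstab entry; substitute your VG/LV names, or a UUID from blkid
/dev/vg_data/lv_data  /var/lib/longhorn  xfs  defaults  0  0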
stocky-article-82001
07/24/2022, 2:37 PM
stocky-article-82001
07/24/2022, 3:50 PM
many-rocket-71417
07/24/2022, 4:36 PM
famous-journalist-11332
07/26/2022, 12:53 AM
full-window-19269
07/26/2022, 6:38 PM
busy-crowd-80458
07/27/2022, 4:58 AM
busy-crowd-80458
07/28/2022, 9:22 AM
busy-crowd-80458
07/28/2022, 9:22 AM
ancient-raincoat-46356
07/29/2022, 3:51 PM
Warning FailedMount 82s (x9 over 3m31s) kubelet MountVolume.MountDevice failed for volume "pvc-113ff49c-1565-489c-9213-14f4d58ae27f" : rpc error: code = Internal desc = format of disk "/dev/longhorn/pvc-113ff49c-1565-489c-9213-14f4d58ae27f" failed: type:("ext4") target:("/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-113ff49c-1565-489c-9213-14f4d58ae27f/globalmount") options:("defaults") errcode:(exit status 1) output:(mke2fs 1.43.8 (1-Jan-2018)
/dev/longhorn/pvc-113ff49c-1565-489c-9213-14f4d58ae27f is apparently in use by the system; will not make a filesystem here!
)
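One commonly reported cause of mke2fs failing with "apparently in use by the system" on a /dev/longhorn/* device is multipathd claiming it; a quick check, as a sketch:

# See whether the Longhorn block device has been claimed by another layer
lsblk /dev/longhorn/pvc-113ff49c-1565-489c-9213-14f4d58ae27f
# If multipathd lists the device here, blacklisting it in /etc/multipath.conf
# and restarting multipathd is the usual workaround
multipath -ll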
ancient-raincoat-46356
07/29/2022, 3:52 PM
ancient-raincoat-46356
07/29/2022, 4:55 PM
bland-painting-61617
07/31/2022, 12:20 AM
flaky-coat-75909
08/01/2022, 10:32 PM
flaky-coat-75909
08/01/2022, 10:32 PM
- longhorn-manager
- -d
- daemon
- --engine-image
- longhornio/longhorn-engine:v1.3.0
- --instance-manager-image
- longhornio/longhorn-instance-manager:v1_20220611
busy-crowd-80458
08/02/2022, 2:43 AM
busy-crowd-80458
08/02/2022, 2:44 AM
busy-crowd-80458
08/02/2022, 2:44 AM
flaky-coat-75909
08/01/2022, 11:32 PM
To set EngineManagerCPURequest int `json:"engineManagerCPURequest"` to a positive value, I just add spec.engineManagerCPURequest: 200 to my kind: Node, and the Node resource comes from longhorn.io/v1beta2 nodes. Am I right?
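If that reading is right, a patch along these lines should apply it (the node name worker-1 is a placeholder):

# Set the per-node engine manager CPU request on the Longhorn Node CR
kubectl -n longhorn-system patch nodes.longhorn.io worker-1 \
  --type merge -p '{"spec":{"engineManagerCPURequest":200}}'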
many-rocket-71417
08/05/2022, 7:41 PM