longhorn-storage
  • l

    late-needle-80860

    06/29/2022, 12:37 PM
    Here’s the bundle @cuddly-vase-67379
    longhorn-support-bundle_bf5f8b3a-37f6-47b1-a12f-60db695a2208_2022-06-29T12-34-26Z.zip
    👍 1
    ✅ 1
    a
    • 2
    • 5
  • s

    steep-family-74984

    06/30/2022, 7:58 PM
    Hi everybody! 👋 Is anyone using multinetwork with Multus in Longhorn 1.3?
    w
    • 2
    • 44
  • f

    flaky-coat-75909

    07/01/2022, 2:00 PM
    From time to time my Longhorn setup crashes and I do not know why. For example, Redis stops working (my Redis uses a volume backed by Longhorn), and then almost everything that uses Longhorn (and its dependencies) stops working for a while. How can I debug this? I have metrics and logs, but I do not know where I should look. For example, from the metrics I know the volumes are inaccessible.
    l
    • 2
    • 8
  • a

    ancient-raincoat-46356

    07/01/2022, 9:00 PM
    So I have 6 VMs (Ubuntu 20.04) with 3 master / 3 worker nodes. On the worker nodes I have attached a second 20G disk, and here is where I made some assumptions... I learned that Longhorn installs at /var/lib/longhorn, and I am thinking this is where my provisioned PVs will be created and replicas will be stored. So, wanting to use those 20G disks on each node, I partitioned the disk, created a logical volume out of it, and then formatted it with the XFS filesystem, which is our standard format. I then mounted that disk at /mnt/DATA, created a subfolder named longhorn, and then symlinked /mnt/DATA/longhorn -> /var/lib/longhorn. Is this the correct approach to use? This is the information I am failing to find anywhere, including in the Longhorn docs.
    m
    • 2
    • 6
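For reference, a minimal sketch of one alternative to the symlink: mount the prepared logical volume directly at Longhorn's data path. This assumes the LV is already formatted with XFS, and the device name /dev/vg_data/lv_longhorn is hypothetical; do this before Longhorn starts writing data, or migrate the existing contents first.
    # mount the dedicated LV at the Longhorn data path instead of symlinking
    sudo mkdir -p /var/lib/longhorn
    sudo mount /dev/vg_data/lv_longhorn /var/lib/longhorn
    # persist the mount across reboots
    echo '/dev/vg_data/lv_longhorn /var/lib/longhorn xfs defaults 0 0' | sudo tee -a /etc/fstab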
  • s

    sparse-businessperson-74827

    07/06/2022, 12:17 PM
    Can anyone help me out? Longhorn started behaving strangely. The instance-manager-e pod gets terminated on all nodes every 2-5 minutes. I'm seeing Liveness probe failed: dial tcp 10.42.5.68:8500: i/o timeout when this happens, but it only happens on these pods; none of the other pods are impacted. The nodes are not overloaded either.
    h
    a
    • 3
    • 8
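A generic way to inspect the failing pods, assuming Longhorn runs in the default longhorn-system namespace (pod names below are placeholders):
    kubectl -n longhorn-system get pods -o wide | grep instance-manager
    kubectl -n longhorn-system describe pod <instance-manager-e-pod>     # events show the liveness probe failures
    kubectl -n longhorn-system logs <instance-manager-e-pod> --previous  # logs from the last terminated container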
  • b

    busy-crowd-80458

    07/13/2022, 5:31 PM
    On Kubernetes 1.22.10+rke2r2 with Longhorn 1.2.4, I've got a bunch of issues like this:
    MountVolume.WaitForAttach failed for volume "pvc-8c978b7c-28db-43e9-88d3-f4b43c532891" : volume pvc-8c978b7c-28db-43e9-88d3-f4b43c532891 has GET error for volume attachment csi-c1dd4b0084f8679ff1701abcdc314f2c594e85c0cfb9c287d954b7a6cf3ac4ba: volumeattachments.storage.k8s.io "csi-c1dd4b0084f8679ff1701abcdc314f2c594e85c0cfb9c287d954b7a6cf3ac4ba" is forbidden: User "system:node:emerald02" cannot get resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope: no relationship found between node 'emerald02' and this object
    a
    • 2
    • 20
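A way to look at the VolumeAttachment named in the error with standard kubectl (the object name is taken from the message above):
    kubectl get volumeattachments.storage.k8s.io
    kubectl get volumeattachments.storage.k8s.io csi-c1dd4b0084f8679ff1701abcdc314f2c594e85c0cfb9c287d954b7a6cf3ac4ba -o yaml
    # check .spec.nodeName: the "no relationship found" error suggests the attachment points at a different node than emerald02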
  • r

    ripe-queen-73614

    07/13/2022, 7:26 PM
    Hi all, I've been stuck for a week: Elasticsearch (ECK) doesn't work for me when using Longhorn storage. The volumes keep getting attached and detached, and the pod events show this: [17:52] Warning FailedAttachVolume 10s attachdetach-controller AttachVolume.Attach failed for volume "pvc-70a16c86-2e98-4a7e-a306-ef7238969f85" : rpc error: code = DeadlineExceeded desc = vol
    a
    f
    • 3
    • 20
  • w

    worried-businessperson-13284

    07/16/2022, 5:34 PM
    I rebuilt my k3s cluster but didn't wipe the LH disks. All the new PVCs are stuck in "Attaching". I noticed this msg in the events:
    Persistent Volume pvc-aafbf3de-7b46-43fd-a536-8595a38505f7 started to use/reuse Longhorn volume pvc-aafbf3de-7b46-43fd-a536-8595a38505f7
    Does this mean LH found the old PVCs from the previous install?
    a
    • 2
    • 2
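A way to check whether Longhorn picked the old volumes back up, assuming the default longhorn-system namespace and the volumes.longhorn.io CRD:
    kubectl -n longhorn-system get volumes.longhorn.io
    kubectl -n longhorn-system get volumes.longhorn.io pvc-aafbf3de-7b46-43fd-a536-8595a38505f7 -o yaml   # state/robustness of the reused volume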
  • f

    flaky-coat-75909

    07/19/2022, 9:42 AM
    While I'm using the Longhorn storageClass, my CPU and disk IO saturations are very high. How can I debug the reason why this happens? I will give more info in the thread.
    • 1
    • 14
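A generic starting point for narrowing down the saturation, assuming metrics-server (for kubectl top) and the sysstat package (for iostat); nothing here is Longhorn-specific:
    kubectl top nodes                     # which node is CPU-bound
    kubectl -n longhorn-system top pods   # CPU used by instance-manager / engine pods
    iostat -x 5                           # per-device utilization and latency on the affected node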
  • a

    ancient-raincoat-46356

    07/21/2022, 4:46 PM
    For the default Longhorn storage class, should my replica count be the same as the number of worker nodes, or would it be one less than the number of worker nodes (n-1)? Thinking: if I deploy a Pod/PVC and it gets assigned to a node, wouldn't the replicas only need to exist on the worker nodes that are not running the Pod workload?
    c
    • 2
    • 5
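For reference, the replica count is set per StorageClass via the numberOfReplicas parameter; a minimal sketch, with illustrative values and a hypothetical class name:
    # longhorn-2r.yaml (hypothetical StorageClass with 2 replicas)
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: longhorn-2r
    provisioner: driver.longhorn.io    # Longhorn CSI provisioner
    parameters:
      numberOfReplicas: "2"            # replicas created for each volume from this class
      staleReplicaTimeout: "2880"
    # apply with: kubectl apply -f longhorn-2r.yaml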
  • a

    ancient-raincoat-46356

    07/21/2022, 6:31 PM
    Does anyone have an example of a bind mount they set up in their /etc/fstab for mounting /var/lib/longhorn to a secondary disk, to be used exclusively by Longhorn, so I'm not using the root partition space for any disk provisioning? A little context: I have a secondary disk I want to use for Longhorn PVs. Running on Ubuntu 20.04, the secondary disk is configured as a Logical Volume. It has been formatted with the XFS filesystem and mounted at /mnt/DATA. It is 20G and I only want Longhorn to use this 20G when provisioning new PVs.
    ✅ 1
    b
    f
    • 3
    • 13
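A minimal sketch of such an /etc/fstab bind mount, assuming the LV is already mounted at /mnt/DATA and /mnt/DATA/longhorn exists (the LV device name is illustrative):
    # /etc/fstab entries (then run: sudo mount -a)
    /dev/mapper/vg_data-lv_data  /mnt/DATA          xfs   defaults  0 0   # the 20G secondary disk
    /mnt/DATA/longhorn           /var/lib/longhorn  none  bind      0 0   # bind the subfolder over Longhorn's data path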
  • s

    stocky-article-82001

    07/24/2022, 2:37 PM
    I need to update Longhorn, but it was installed via the old Rancher Apps and then moved to a different Rancher instance, so it doesn't appear under "Apps" anymore. Any ideas are greatly appreciated.
    ✅ 1
    p
    • 2
    • 2
  • s

    stocky-article-82001

    07/24/2022, 3:50 PM
    Currently leaning towards just backing up all the volumes, reinstalling longhorn and importing the backups
  • m

    many-rocket-71417

    07/24/2022, 4:36 PM
    That's exactly what I had to do. When I asked this question a couple of months ago, no one had any answers either.
    f
    s
    • 3
    • 4
  • f

    famous-journalist-11332

    07/26/2022, 12:53 AM
    Yeah, it could be normal. Can you see if this document clears up the confusion? https://longhorn.io/docs/1.3.0/volumes-and-nodes/volume-size/
    f
    • 2
    • 1
  • f

    full-window-19269

    07/26/2022, 6:38 PM
    Hi guys, I am facing an issue with Longhorn. I have a MongoDB volume which gets into a loop of attaching and detaching for a pod. Not sure what the root cause could be. Any help will be much appreciated.
    f
    • 2
    • 1
  • b

    busy-crowd-80458

    07/27/2022, 4:58 AM
    hi folks... I have a volume in Longhorn that has been stuck in deleting for 12+ hours
    f
    • 2
    • 5
  • b

    busy-crowd-80458

    07/28/2022, 9:22 AM
    what does "Scheduling Failure Replica Scheduling Failure" mean?
  • b

    busy-crowd-80458

    07/28/2022, 9:22 AM
    I guess it means there's no valid place to put one of the replicas?
    f
    • 2
    • 2
  • a

    ancient-raincoat-46356

    07/29/2022, 3:51 PM
    Hello all. I keep getting an error with one of my pods (StatefulSet) trying to mount a Longhorn volume. It's working fine on 2 out of 3 nodes, but this one node keeps giving me this error. Can anyone help?
    Warning  FailedMount             82s (x9 over 3m31s)  kubelet                  MountVolume.MountDevice failed for volume "pvc-113ff49c-1565-489c-9213-14f4d58ae27f" : rpc error: code = Internal desc = format of disk "/dev/longhorn/pvc-113ff49c-1565-489c-9213-14f4d58ae27f" failed: type:("ext4") target:("/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-113ff49c-1565-489c-9213-14f4d58ae27f/globalmount") options:("defaults") errcode:(exit status 1) output:(mke2fs 1.43.8 (1-Jan-2018)
    /dev/longhorn/pvc-113ff49c-1565-489c-9213-14f4d58ae27f is apparently in use by the system; will not make a filesystem here!
    )
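"apparently in use by the system" from mke2fs usually means something on the host has already claimed the block device; a way to check on the failing node with standard tools (the device path is the one from the error above):
    lsblk /dev/longhorn/pvc-113ff49c-1565-489c-9213-14f4d58ae27f   # any child device (e.g. a device-mapper/multipath map) on top of it?
    multipath -ll                                                  # is multipathd holding the Longhorn device?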
  • a

    ancient-raincoat-46356

    07/29/2022, 3:52 PM
    I've removed the app and uninstalled/reinstalled Longhorn, but whatever pod gets assigned to this one worker node always fails with this message.
  • a

    ancient-raincoat-46356

    07/29/2022, 4:55 PM
    Found my solution here. https://github.com/longhorn/longhorn/issues/1210#issuecomment-671689746
  • b

    bland-painting-61617

    07/31/2022, 12:20 AM
    After upgrading to 1.3.0, all my volumes are dead - stuck in attaching or detaching. I can see all instance managers are stopped, and there are a bunch of errors in the longhorn-manager logs about it not being able to find the last instance managers... Backups? Some exist; I actually upgraded because the backups were failing for larger volumes... How can I figure out what's wrong? The instance-manager log just says 'installed'...
    f
    • 2
    • 1
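A generic way to pull the relevant logs, assuming the default longhorn-system namespace (pod names are placeholders):
    kubectl -n longhorn-system get pods -o wide | grep -E 'longhorn-manager|instance-manager'
    kubectl -n longhorn-system logs <longhorn-manager-pod> | grep -i 'instance manager'   # the errors about missing instance managers
    kubectl -n longhorn-system describe pod <instance-manager-pod>                        # why the instance managers stay stopped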
  • f

    flaky-coat-75909

    08/01/2022, 10:32 PM
    Can I limit instance-manager CPU usage?
  • f

    flaky-coat-75909

    08/01/2022, 10:32 PM
    It is created from:
        - longhorn-manager
        - -d
        - daemon
        - --engine-image
        - longhornio/longhorn-engine:v1.3.0
        - --instance-manager-image
        - longhornio/longhorn-instance-manager:v1_20220611
  • b

    busy-crowd-80458

    08/02/2022, 2:43 AM
    hey, just a suggestion
  • b

    busy-crowd-80458

    08/02/2022, 2:44 AM
    we ran into an issue with our Longhorn cluster today, and helpfully there was a guide - https://longhorn.io/kb/troubleshooting-volume-with-multipath/ - with a tweak in it that got us back online
    👍 1
  • b

    busy-crowd-80458

    08/02/2022, 2:44 AM
    that tweak may be worth including as a suggestion in the install guide by default...
    👀 2
    b
    • 2
    • 2
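For context, the tweak in that KB article boils down to keeping multipathd away from Longhorn's virtual block devices; a sketch of the idea (check the linked article for the exact recommendation before applying):
    # /etc/multipath.conf
    blacklist {
        devnode "^sd[a-z0-9]+"
    }
    # then restart the daemon
    sudo systemctl restart multipathd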
  • f

    flaky-coat-75909

    08/01/2022, 11:32 PM
    If I understand this line https://github.com/longhorn/longhorn-manager/blob/b88ebf82936c7335c6a3855ab44a39a2bb790d8b/controller/controller_manager.go#L169 correctly, I should set
    EngineManagerCPURequest int `json:"engineManagerCPURequest"`
    to a positive value, i.e. just add Node.spec.engineManagerCPURequest: 200 to my kind: Node object, where the Node comes from longhorn.io/v1beta2 nodes. Am I right?
    f
    • 2
    • 4
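A sketch of setting that field on the Longhorn node CR with kubectl; the field name and API group are taken from the message above, the node name is a placeholder, and whether this is the recommended knob (versus Longhorn's global guaranteed-CPU settings) is not confirmed here:
    kubectl -n longhorn-system patch nodes.longhorn.io <node-name> \
      --type merge \
      -p '{"spec":{"engineManagerCPURequest":200}}'   # value as in the message above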
  • m

    many-rocket-71417

    08/05/2022, 7:41 PM
    Anybody have a way to delete many snapshots at once? I have a bunch left over from schedules that no longer exist and I don't want to go by hand through 1000+ snapshots.
    ✅ 1
    b
    f
    • 3
    • 3
b

billowy-painting-56466

08/08/2022, 12:56 AM
Have you tried the retain field for the snapshot?
f

famous-shampoo-18483

08/08/2022, 4:20 AM
In v1.3.0, you can try to directly delete snapshot CRs for one volume. And the snapshots of one volume can be filtered by the label longhornvolume: <volume name>.
m

many-rocket-71417

08/08/2022, 4:48 AM
@billowy-painting-56466 Yes, that field is always set, but sometimes they don't get cleaned up, and in my case some snapshots exist from other jobs that have since been deleted, so retain doesn't seem to affect those. Which makes sense. @famous-shampoo-18483 This seems to be the way; I might make a quick script to filter by this label and remove them. Thank you
👍 2
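A sketch of the one-liner that approach boils down to, assuming the default longhorn-system namespace, the snapshots.longhorn.io CRD from v1.3.0, and the label mentioned above (the volume name is a placeholder):
    kubectl -n longhorn-system delete snapshots.longhorn.io -l longhornvolume=<volume-name>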