longhorn-storage
  • white-battery-15789
    08/10/2022, 9:50 PM
    I'm using Longhorn 1.3.0, helm chart
  • big-judge-33880
    08/11/2022, 8:59 AM
    Is there a way to make Longhorn restore from backup with the correct (or a custom) volumeMode? When I restore volumes that were volumeMode=Block, I get a volume with volumeMode=Filesystem, which appears to cause issues getting these volumes mounted by a downstream RKE2 cluster that uses the Longhorn CSI driver.
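    A sketch of the documented restore path with the volumeMode stated explicitly on the new claim; the backup URL and names below are illustrative, and if the restored volume still surfaces as Filesystem it is likely worth a GitHub issue:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: longhorn-restore-block
    provisioner: driver.longhorn.io
    parameters:
      numberOfReplicas: "3"
      # illustrative backup URL; copy the real one from the backup list
      fromBackup: "s3://backupbucket@us-east-1/?backup=backup-xxxx&volume=pvc-yyyy"
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: restored-block-pvc
    spec:
      accessModes: ["ReadWriteOnce"]
      volumeMode: Block            # request raw block explicitly on the restored claim
      storageClassName: longhorn-restore-block
      resources:
        requests:
          storage: 10Gi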
  • lively-hospital-24301
    08/12/2022, 3:24 PM
    Hi folks, we are running Prometheus with a persistent volume provided by Longhorn, and we have some issues with Prometheus compaction. There are some related messages in the Prometheus logs:
    level=warn ts=2022-08-11T20:36:59.034Z caller=main.go:849 fs_type=NFS_SUPER_MAGIC msg="This filesystem is not supported and may lead to data corruption and data loss. Please carefully read <https://prometheus.io/docs/prometheus/latest/storage/> to learn more about supported filesystems."
    level=error ts=2022-08-12T11:00:13.904Z caller=db.go:821 component=tsdb msg="compaction failed" err="WAL truncation in Compact: create checkpoint: read segments: corruption in segment /prometheus/wal/00000016 at 60187438: unexpected full record"
    Could anyone point me to the right direction of troubleshooting this issue?
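    The fs_type=NFS_SUPER_MAGIC warning suggests the claim is a ReadWriteMany Longhorn volume, which Longhorn serves over NFS via its share-manager, and Prometheus's TSDB explicitly does not support NFS. A minimal sketch of a claim that keeps the data on a local block mount, assuming a single Prometheus replica:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: prometheus-data
    spec:
      accessModes: ["ReadWriteOnce"]   # RWO attaches the volume as ext4/xfs, not NFS
      storageClassName: longhorn
      resources:
        requests:
          storage: 50Gi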
  • straight-businessperson-27680
    08/13/2022, 12:52 PM
    These volumes basically don't exist and don't have any replicas. How can I clean them up?
  • straight-businessperson-27680
    08/13/2022, 1:00 PM
    Never mind, they are gone after restarting the Longhorn manager.
  • stocky-article-82001
    08/17/2022, 11:50 AM
    Is there a way to change an existing PV/PVC from ext4 to XFS?
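    There is no in-place conversion: Longhorn formats the filesystem when the volume is first attached, so the usual route is a new class plus a data copy. A sketch, assuming the chart's fsType StorageClass parameter (the class name is illustrative); create a new PVC from this class and copy the data across with a one-off pod that mounts both volumes:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: longhorn-xfs
    provisioner: driver.longhorn.io
    parameters:
      numberOfReplicas: "3"
      fsType: xfs        # applied on first attach; existing volumes keep ext4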
  • shy-megabyte-75492
    08/17/2022, 11:54 AM
    Howdy, I'm installing Longhorn on my RKE2 (latest) bare-metal single-node cluster and I'm not able to get access to the UI. I followed the quick-start docs to set up the ingress controller, and I get "connection refused" when I try to reach the UI. Any help?
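    A minimal Ingress sketch for the UI, assuming the stock longhorn-frontend service and the RKE2 ingress-nginx controller (the hostname is illustrative):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: longhorn-ui
      namespace: longhorn-system
    spec:
      rules:
        - host: longhorn.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: longhorn-frontend   # the stock Longhorn UI service
                    port:
                      number: 80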
  • gifted-stone-19912
    08/17/2022, 2:55 PM
    Hi all, for Longhorn to use ext4 for the volumes, does the disk/LVM on the node (/var/lib/longhorn) also have to be ext4, or can it be xfs and still serve ext4 volumes later on?
  • flaky-coat-75909
    08/19/2022, 11:22 AM
    Hi, I'm receiving events with the message
    Message: EXT4-fs error (device sda): ext4_find_entry:1446: inode #12: comm grafana-server: reading directory lblock 0
    and my Node Problem Detector is showing
    {*reason*="Ext4Error"}
    more than a thousand occurrences on one node, but in the UI everything looks fine (all green) and Grafana is working as well. I have 5 nodes, say node1 through node5; Grafana is running on node3 and the data replicas are on node4 and node5. Maybe something is missing?
  • bright-fireman-42144
    08/23/2022, 11:04 PM
    Where do I begin troubleshooting deployments that fail because PVCs are consistently unbound? I have looked at kubectl -n longhorn-system get pod:
    NAME                                           READY   STATUS             RESTARTS           AGE
    csi-attacher-85444f77cd-465b4                  1/1     Running            0                  6d
    csi-attacher-85444f77cd-5xhd5                  1/1     Running            0                  6d
    csi-attacher-85444f77cd-qvr56                  1/1     Running            0                  6d
    csi-provisioner-5b58594849-2ccs4               1/1     Running            0                  6d
    csi-provisioner-5b58594849-6vs94               1/1     Running            0                  6d
    csi-provisioner-5b58594849-rb2rz               1/1     Running            0                  6d
    csi-resizer-d55c5477f-fzt74                    1/1     Running            0                  6d
    csi-resizer-d55c5477f-pmd8f                    1/1     Running            0                  6d
    csi-resizer-d55c5477f-skflm                    1/1     Running            0                  6d
    csi-snapshotter-6d977dbf5f-ddkhz               1/1     Running            0                  6d
    csi-snapshotter-6d977dbf5f-tggts               1/1     Running            0                  6d
    csi-snapshotter-6d977dbf5f-vxglc               1/1     Running            0                  6d
    engine-image-ei-dae99989-997kv                 1/1     Running            0                  6d
    engine-image-ei-dae99989-m5xff                 1/1     Running            0                  6d
    engine-image-ei-dae99989-wr9rq                 1/1     Running            0                  6d
    engine-image-ei-dae99989-xjvdr                 0/1     Running            0                  6d
    engine-image-ei-dae99989-zvdgk                 0/1     Running            0                  41h
    instance-manager-e-167824d3                    1/1     Running            0                  6d
    instance-manager-e-8d21ff63                    1/1     Running            0                  6d
    instance-manager-e-a56f1eea                    1/1     Running            0                  6d
    instance-manager-r-53eca689                    1/1     Running            0                  6d
    instance-manager-r-b31ebc94                    1/1     Running            0                  6d
    instance-manager-r-b7c13d7c                    1/1     Running            0                  6d
    longhorn-admission-webhook-5bbc4cf8c4-m9xbg    1/1     Running            0                  6d
    longhorn-admission-webhook-5bbc4cf8c4-qxnff    1/1     Running            0                  6d
    longhorn-conversion-webhook-7677cc4f5f-4c6kz   1/1     Running            0                  6d
    longhorn-conversion-webhook-7677cc4f5f-8q9n6   1/1     Running            0                  6d
    longhorn-csi-plugin-9fsh8                      2/2     Running            0                  6d
    longhorn-csi-plugin-bv6md                      2/2     Running            0                  6d
    longhorn-csi-plugin-djqbf                      2/2     Running            0                  6d
    longhorn-csi-plugin-qpsxt                      2/2     Running            0                  6d
    longhorn-csi-plugin-vkfdg                      2/2     Running            0                  41h
    longhorn-driver-deployer-968865df6-v2xb2       1/1     Running            0                  6d
    longhorn-manager-2njr6                         0/1     CrashLoopBackOff   1698 (2m34s ago)   6d
    longhorn-manager-fpxz7                         0/1     CrashLoopBackOff   492 (4m23s ago)    41h
    longhorn-manager-pspgl                         1/1     Running            0                  6d
    longhorn-manager-qpcm5                         1/1     Running            0                  6d
    longhorn-manager-x42rz                         1/1     Running            1 (6d ago)         6d
    longhorn-ui-768cf55d4d-m9nzw                   1/1     Running            0                  6d
    I think I might have set up my taints and tolerations wrong with NoExecute. I have some worker nodes that can't run the CSI components and 3 Ubuntu nodes for Longhorn storage on GKE. Been running into weird problems. Not sure how I installed it, as it's not in my app catalog. Where can I alter the YAML to make sure it's correct? Using Rancher 2.6.6.
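    The two CrashLooping longhorn-manager pods line up with the taint theory. A sketch of the toleration knobs in the Longhorn chart values, assuming the storage nodes carry an illustrative node-role/storage=true:NoExecute taint (the same setting is exposed in the UI as "Kubernetes Taint Toleration"):

    # values.yaml fragment for the longhorn chart (sketch)
    defaultSettings:
      taintToleration: "node-role/storage=true:NoExecute"   # system components tolerate the taint
    longhornManager:
      tolerations:
        - key: node-role/storage
          operator: Equal
          value: "true"
          effect: NoExecute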
  • bright-fireman-42144
    08/23/2022, 11:25 PM
    I don't have any volumes in use. I think I'm going to go through the checklist for removing it and making sure I follow all the best practices (perhaps properly this time? LOL) on longhorn.io
  • aloof-hair-13897
    08/31/2022, 2:22 PM
    Yeah, Longhorn 1.3.1 is stable.
  • swift-zebra-42479
    09/01/2022, 5:46 AM
    Hi, how can I add a new worker node to an existing Longhorn cluster (the existing cluster has 3 worker nodes)? Please suggest a way to do it using the CLI or the web console.
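    Longhorn runs as a DaemonSet, so there is no separate join step: once the new node is added to the Kubernetes cluster and has open-iscsi installed, longhorn-manager starts there and registers it. A sketch of the verification, assuming a stock install in longhorn-system:

    kubectl -n longhorn-system get nodes.longhorn.io          # the new node should appear here
    kubectl -n longhorn-system get pods -l app=longhorn-manager -o wide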
  • bored-apple-66429
    09/08/2022, 9:30 AM
    Hey, we managed to delete one of our volumes via the UI. Is the volume lost, or are there any remains on the nodes?
  • magnificent-vr-88571
    09/11/2022, 8:30 PM
    Guys, I have restored a cluster following https://docs.rke2.io/backup_restore/#restoring-a-snapshot-to-new-nodes and created an HA server. I noticed the following errors in the journalctl logs, and the volumes are not mounted.
    E0911 20:16:38.965933   17195 kubelet.go:1701] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[data], unattached volumes=[data kube-api-access-ztp4j dshm]: timed out waiting for the condition" pod="cvat/cvat-postgresql-0"
    E0911 20:23:07.393663   16782 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container pod=metadata-grpc-deployment-f8d68f687-5fvbs_kubeflow(d72591f7-e2c4-475f-ad83-fc59c996219a)\"" pod="kubeflow/metadata-grpc-deployment-f8d68f687-5fvbs" podUID=d72591f7-e2c4-475f-ad83-fc59c996219a
    I0911 20:23:08.718940   16782 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-62552b22-3e99-4b63-8a56-69519573ae1d\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-62552b22-3e99-4b63-8a56-69519573ae1d\") pod \"loki-0\" (UID: \"8aef7574-fb66-415f-a130-6b8ec9091672\") "
    E0911 20:23:08.724147   16782 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-62552b22-3e99-4b63-8a56-69519573ae1d podName: nodeName:}" failed. No retries permitted until 2022-09-11 20:25:10.724134581 +0000 UTC m=+21624.816950484 (durationBeforeRetry 2m2s). Error: "Volume not attached according to node status for volume \"pvc-62552b22-3e99-4b63-8a56-69519573ae1d\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-62552b22-3e99-4b63-8a56-69519573ae1d\") pod \"loki-0\" (UID: \"8aef7574-fb66-415f-a130-6b8ec9091672\") "
    I0911 20:23:09.829046   16782 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c6597566-f0c6-40b3-be5b-9d670f51748d\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-c6597566-f0c6-40b3-be5b-9d670f51748d\") pod \"harbor-redis-0\" (UID: \"912226dd-12cf-4cb5-a54b-fb831b4e7e73\") "
    E0911 20:23:09.831850   16782 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-c6597566-f0c6-40b3-be5b-9d670f51748d podName: nodeName:}" failed. No retries permitted until 2022-09-11 20:25:11.831837052 +0000 UTC m=+21625.924652956 (durationBeforeRetry 2m2s). Error: "Volume not attached according to node status for volume \"pvc-c6597566-f0c6-40b3-be5b-9d670f51748d\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-c6597566-f0c6-40b3-be5b-9d670f51748d\") pod \"harbor-redis-0\" (UID: \"912226dd-12cf-4cb5-a54b-fb831b4e7e73\") "
    Any solution to recover?
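    "Volume not attached according to node status" means the kubelet is still waiting on the attach, so the next place to look is whether Longhorn itself thinks the volumes are attached. A sketch of the first checks, assuming a stock longhorn-system namespace:

    kubectl -n longhorn-system get volumes.longhorn.io        # state should be attached
    kubectl get volumeattachments | grep driver.longhorn.io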
  • creamy-mechanic-63134
    09/19/2022, 12:26 PM
    Hi team, getting this error while doing a Longhorn upgrade:
    \"<http://backingimagemanagers.longhorn.io|backingimagemanagers.longhorn.io>\" is invalid: status.storedVersions[1]: Invalid value: \"v1beta2\": must appear in spec.versions && cannot patch \"<http://backingimagedatasources.longhorn.io|backingimagedatasources.longhorn.io>\" with kind CustomResourceDefinition: <http://CustomResourceDefinition.apiextensions.k8s.io|CustomResourceDefinition.apiextensions.k8s.io> \"<http://backingimagedatasources.longhorn.io|backingimagedatasources.longhorn.io>\" is invalid: status.storedVersions[1]: Invalid value: \"v1beta2\": must appear in spec.versions && cannot patch \"<http://backuptargets.longhorn.io|backuptargets.longhorn.io>\" with kind CustomResourceDefinition: <http://CustomResourceDefinition.apiextensions.k8s.io|CustomResourceDefinition.apiextensions.k8s.io> \"<http://backuptargets.longhorn.io|backuptargets.longhorn.io>\" is invalid: status.storedVersions[1]: Invalid value: \"v1beta2\": must appear in spec.versions && cannot patch \"<http://backupvolumes.longhorn.io|backupvolumes.longhorn.io>\" with kind CustomResourceDefinition: <http://CustomResourceDefinition.apiextensions.k8s.io|CustomResourceDefinition.apiextensions.k8s.io> \"<http://backupvolumes.longhorn.io|backupvolumes.longhorn.io>\" is invalid: status.storedVersions[1]: Invalid value: \"v1beta2\": must appear in spec.versions && cannot patch \"<http://backups.longhorn.io|backups.longhorn.io>\" with kind CustomResourceDefinition: <http://CustomResourceDefinition.apiextensions.k8s.io|CustomResourceDefinition.apiextensions.k8s.io>
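    The CRDs being applied are missing a version (v1beta2) that the API server has already stored, which usually means the chart being installed is older than the Longhorn release that is running. A sketch of the check, using backingimagemanagers as the example:

    kubectl get crd backingimagemanagers.longhorn.io -o jsonpath='{.status.storedVersions}{"\n"}'
    kubectl get crd backingimagemanagers.longhorn.io -o jsonpath='{.spec.versions[*].name}{"\n"}'

    Every version in status.storedVersions has to remain in spec.versions of the applied CRD, so confirm the target chart version is newer than the running one before retrying.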
  • helpful-beard-54962
    09/19/2022, 1:34 PM
    Hi all, did anyone ever notice any storage problem with symlinks and actual storage? I have a storage of 50 GB and there are only 1.2 GB of files, but the actual storage reported by Longhorn is 100% full, 50/50 GB. I have many folders with a symlink to a folder on the local drive that's controlled through a DaemonSet, and that folder has ~2 GB. The only explanation is that Longhorn counts symlinks as actual storage, and now my disk is full and very slow!
  • loud-daybreak-83328
    09/19/2022, 2:56 PM
    Hi. I'm attempting to get Kasten backing up our environment, and it seems to be having Longhorn snapshotting issues. I've looked through the various discussions and am not seeing an issue similar to ours. We're on Rancher 2.6.8, Longhorn 1.3.1, and Kasten 5.0.8. The error I'm seeing when backing up something that is on a Longhorn PVC is: Failed to create snapshot: failed to take snapshot of the volume pvc-af7ba930-e103-468c-85e3-cdc28d2bebde: "rpc error: code = NotFound desc = volume id pvc-af7ba930-e103-468c-85e3-cdc28d2bebde does not exist in the volumes list". Any help you can offer would be fantastic. @bland-byte-60612
  • loud-daybreak-83328
    09/20/2022, 12:07 PM
    Starting a new thread with this. I'm having trouble getting CSI snapshots to work with Longhorn (Rancher 2.6.8, Longhorn 1.3.1; RKE1 and RKE2 both have the issue), K8s 1.22. I followed the directions to the letter here: https://longhorn.io/docs/1.3.1/snapshots-and-backups/csi-snapshot-support/enable-csi-snapshot-support/. I verified that my backup area works and that snapshots function through the GUI. When I try to create the snapshot through CSI, it doesn't seem to actually do anything. Kubernetes accepts the YAML and creates a VolumeSnapshot object, but Longhorn doesn't do anything. Any idea where I can look for messages to indicate what's going on? Thanks.
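    For reference, the VolumeSnapshotClass from the linked page looks like the sketch below (the name is illustrative); if the class is missing, or the VolumeSnapshot doesn't reference it, the snapshot object just sits there and Longhorn never sees it. The csi-snapshotter pod logs in longhorn-system are the place to look for the handoff.

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: longhorn-snapshot-vsc
    driver: driver.longhorn.io    # must match the Longhorn CSI driver name
    deletionPolicy: Delete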
  • high-butcher-71851
    09/20/2022, 10:35 PM
    Where is the best place to find logs to figure out why a volume has entered "Faulted" state after attempting to "attach to host"? Thanks!
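    A sketch of the usual places, assuming a stock install; the manager logs name the replicas that failed, and the instance-manager pod on the affected node has the engine/replica detail:

    kubectl -n longhorn-system logs -l app=longhorn-manager --tail=200
    kubectl -n longhorn-system get events --sort-by=.lastTimestamp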
  • narrow-noon-75604
    09/22/2022, 11:05 AM
    Hi, I am using an RKE2 cluster with 1 master node and 5 worker nodes. I have deployed Longhorn as a storage class and deployed a MongoDB application. Longhorn successfully created the PV and it is in Bound state as well, but the pod does not reach Running state because the volume is not attached to any of the worker nodes. I found that the "/dev/longhorn" folder is missing on all the worker nodes. Also, the "instance-manager" pods are throwing the following error:
    [pvc-28f57305-2b5b-44c8-9447-549101dea147-e-ded1735d] time="2022-09-22T10:59:13Z" level=warning msg="FAIL to discover due to Failed to execute: nsenter [--mount=/host/proc/1/ns/mnt --net=/host/proc/1/ns/net iscsiadm -m discovery -t sendtargets -p 10.42.50.26], output , stderr, iscsiadm: Cannot perform discovery. Invalid Initiatorname.\niscsiadm: Could not perform SendTargets discovery: invalid parameter\n, error exit status 7"
    [pvc-28f57305-2b5b-44c8-9447-549101dea147-e-ded1735d] time="2022-09-22T10:59:13Z" level=warning msg="Nodes cleaned up for iqn.2019-10.io.longhorn:pvc-28f57305-2b5b-44c8-9447-549101dea147"
    I am not sure what I am missing. Any suggestions would be appreciated.
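    "Invalid Initiatorname" from iscsiadm usually means open-iscsi is missing or iscsid never generated /etc/iscsi/initiatorname.iscsi on the node. A sketch of the node-side check, assuming Ubuntu workers (package and service names differ per distro):

    sudo apt-get install -y open-iscsi        # zypper/yum equivalents elsewhere
    sudo systemctl enable --now iscsid
    cat /etc/iscsi/initiatorname.iscsi        # should print an InitiatorName=iqn.* line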
  • big-judge-33880
    09/26/2022, 3:07 PM
    Upgrading to k3s 1.25 was a great idea… :face_with_peeking_eye:
  • proud-salesmen-12221
    09/29/2022, 11:32 PM
    Does anyone have experience with encrypted volumes? I'm setting up a StorageClass and PersistentVolumeClaim with encrypted: true and per-volume secrets with the YAML below. There are no errors when creating them; however, the Longhorn UI shows the volume as not encrypted. Any ideas why?
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: longhorn-crypto-v1-volume
    provisioner: driver.longhorn.io
    allowVolumeExpansion: true
    parameters:
      numberOfReplicas: "3"
      staleReplicaTimeout: "2880" # 48 hours in minutes
      fromBackup: ""
      encrypted: "true"
      csi.storage.k8s.io/provisioner-secret-name: ${pvc.name}
      csi.storage.k8s.io/provisioner-secret-namespace: ${pvc.namespace}
      csi.storage.k8s.io/node-publish-secret-name: ${pvc.name}
      csi.storage.k8s.io/node-publish-secret-namespace: ${pvc.namespace}
      csi.storage.k8s.io/node-stage-secret-name: ${pvc.name}
      csi.storage.k8s.io/node-stage-secret-namespace: ${pvc.namespace}
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: nginx-v1-pvc
      namespace: v1-ns
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: longhorn
      resources:
        requests:
          storage: 2Gi
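    One thing stands out in the YAML above: the PVC asks for storageClassName: longhorn, not the longhorn-crypto-v1-volume class that carries encrypted: "true", so the volume is provisioned by the plain class and comes up unencrypted. A sketch of the corrected claim, assuming the per-volume secret named nginx-v1-pvc exists in v1-ns:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: nginx-v1-pvc
      namespace: v1-ns
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: longhorn-crypto-v1-volume   # must reference the encrypted class
      resources:
        requests:
          storage: 2Gi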
  • proud-salesmen-12221
    09/29/2022, 11:34 PM
    Any ideas why it displays as not encrypted? YAML files for the StorageClass and PVC are above ^^
  • steep-furniture-72588
    09/30/2022, 1:04 PM
    So I'm in the situation of needing to restore some volumes that were deleted because Fleet removed the git repos (that's another topic). I thought this would be a good test of restoring Longhorn data from scratch to deployments. It's a bit unclear how to restore data from a detached volume to a new deployment. Do we attach the old volume to a node, pull off the data, and then put it into the new volume? Is there a mechanism in Longhorn to do something like this? Many thanks for any guidance.
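    If the Longhorn volume still exists in a detached state, it can be handed to a new workload without any copying by statically creating a PV whose volumeHandle is the Longhorn volume name; the Longhorn UI's "Create PV/PVC" action generates the same thing. A sketch with illustrative names:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: restored-data
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: longhorn
      csi:
        driver: driver.longhorn.io
        volumeHandle: pvc-old-volume-name   # the existing Longhorn volume's name
        fsType: ext4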
  • bland-painting-61617
    10/02/2022, 9:00 AM
    Wanted to bring this topic back up. "Allow Node Drain with the Last Healthy Replica" is on, but kured is unable to drain the node; there is one volume left on the node, but it should be rebooted. Yesterday another node was stuck in this state with no volumes on the node itself. A bug?
    evicting pod longhorn-system/instance-manager-e-9ca60819
    error when evicting pods/"instance-manager-e-9ca60819" -n "longhorn-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
    error when evicting pods/"instance-manager-r-850386c0" -n "longhorn-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
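    The eviction failures come from Longhorn's own PodDisruptionBudgets, which it creates per instance-manager pod to protect the last healthy replica. A sketch of the check, using the pod name from the message above:

    kubectl -n longhorn-system get pdb
    kubectl -n longhorn-system describe pdb instance-manager-e-9ca60819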
  • bland-painting-61617
    10/03/2022, 1:14 PM
    I think after you deploy Longhorn once, those changes in values have no effect. You need to set the path up in the UI.
  • full-toddler-53694
    10/06/2022, 10:14 AM
    👋 Hello, team! Longhorn works fine on AlmaLinux and Ubuntu with k8s 1.23.10. On openSUSE with k8s 1.25.0 I get a pod security error and the pods do not start. Some error logs from the openSUSE k8s 1.25.0 cluster:
    Defaulted container "longhorn-admission-webhook" out of: longhorn-admission-webhook, wait-longhorn-conversion-webhook (init)
    time="2022-10-06T07:53:46Z" level=info msg="Starting longhorn admission webhook server"
    W1006 07:53:46.121563 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
    I1006 07:53:46.122457 1 shared_informer.go:240] Waiting for caches to sync for longhorn datastore
    W1006 07:53:46.140328 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CronJob: the server could not find the requested resource
    E1006 07:53:46.140407 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CronJob: failed to list *v1beta1.CronJob: the server could not find the requested resource
    W1006 07:53:46.141159 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
    E1006 07:53:46.141192 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
    W1006 07:53:47.128605 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CronJob: the server could not find the requested resource
    E1006 07:53:47.128635 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CronJob: failed to list *v1beta1.CronJob: the server could not find the requested resource
    W1006 07:53:47.297626 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
    E1006 07:53:47.297649 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
    I1006 07:53:47.322972 1 request.go:665] Waited for 1.195963826s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/longhorn.io/v1beta2/backingimagedatasources?limit=500&resourceVersion=0
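    The v1beta1 CronJob and PodDisruptionBudget APIs were removed in k8s 1.25, which is why those list/watch calls fail (and, as the reply below notes, why this Longhorn version can't run there). The pod security side can be checked separately; a sketch, assuming Pod Security admission is what's rejecting the pods:

    kubectl label namespace longhorn-system \
      pod-security.kubernetes.io/enforce=privileged --overwrite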
  • full-toddler-53694
    10/06/2022, 12:44 PM
    Actually, is openSUSE supported with a btrfs host filesystem?
  • big-judge-33880
    10/07/2022, 6:54 AM
    btrfs is the default fs in openSUSE, but Longhorn isn't compatible with k8s 1.25.