# harvester
b
Hey, is there a timeline for when the harvester-csi-driver-lvm will move out of Experimental? We would like to stripe some of our smaller disks for Longhorn to use (as it works better with larger drives), and we're hoping to use the LVM driver for this rather than a bunch of custom commands in the install config.
m
Not exactly. We tend, on average, to take a few release cycles to gather feedback about add-ons or features that we release in either Experimental or Technical Preview status. Our measure of stability has a lot to do with adoption and user-centric feedback (bugs, issues, additional feature requests, etc.). I haven't seen much in this regard until your message now (except for these: https://github.com/harvester/harvester/issues?q=state%3Aopen%20label%3A%22area%2Flocal-volume-lvm%22)
🙌 1
s
Hi @brainy-kilobyte-33711, I would like to get back to the use case. You mentioned using the LVM CSI driver to stripe disks for Longhorn. Did you mean you would like to use the LVM CSI driver to create a volume and then use that volume for Longhorn?
b
Hey Vicente - yes, we are struggling with Longhorn. We have 6 x 2TB disks in each bare-metal server, and with PVCs that can be 1TB, snapshots and backups can cause disks to run out of space and fault replicas (raised https://github.com/harvester/harvester/issues/8280). The Longhorn docs say: "Since Longhorn doesn't currently support sharding between the different disks, we recommend using LVM to aggregate all the disks for Longhorn into a single partition, so it can be easily extended in the future." We hope we can use the LVM CSI driver to achieve this rather than having to run custom commands during the Harvester install. So we wouldn't be using the LVM CSI driver for creating volumes, just for managing LVM itself.
👀 1
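(For context, the "custom commands in the install config" being avoided here would be along the following lines; the disk names, VG/LV names, and mount path are illustrative assumptions, not taken from the thread:)
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg    # the six 2TB data disks (names assumed)
vgcreate longhorn-vg /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg    # aggregate them into one VG
lvcreate -n longhorn-lv -i 6 -l 100%FREE longhorn-vg    # one LV striped across all six disks (-i 6 = six stripes)
mkfs.xfs /dev/longhorn-vg/longhorn-lv
mount /dev/longhorn-vg/longhorn-lv /var/lib/longhorn-disk    # presented to Longhorn as a single large disk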
s
Hmm, I have not tried it this way, but it should work. However, it doesn't relate to the LVM CSI driver GA, because this use case doesn't fully rely on the LVM CSI driver's capabilities (snapshot/resize, etc.). Anyway, the use case sounds interesting. Feel free to share any further progress here. Thanks!
b
Unfortunately I couldn't get it fully working. The steps I took were:
1. Created a VG with the Harvester CSI LVM driver.
2. Created a storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-lvm-sc
provisioner: lvm.driver.harvesterhci.io
parameters:
  type: striped                      # stripe LVs across the PVs in the VG
  vgName: lvm-test                   # the VG created in step 1
  csi.storage.k8s.io/fstype: xfs
allowVolumeExpansion: false
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.lvm.csi/node
        values:
          - harvester-node-7         # provision only on this node
reclaimPolicy: Retain
volumeBindingMode: Immediate
3. Created a PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-lvm-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2500Gi    # exceeds any single 2TB disk, so the striped LV must span multiple PVs
  storageClassName: test-lvm-sc
  volumeMode: Filesystem
The LV gets created on the node and the PVC is bound, but the LV is never mounted, so I can't point Longhorn at it. It only seems to get mounted when a pod uses it (which makes sense). I can mount it manually on the node, but then I need to investigate the persistence options and how it might behave after an upgrade, which is what we were trying to avoid. It would be nice if the Harvester node driver supported LVM; then we wouldn't need to worry about the persistence of the mount.
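(A minimal sketch of what forces the mount, assuming the PVC above; the pod name, image, and mount path are illustrative:)
apiVersion: v1
kind: Pod
metadata:
  name: test-lvm-mounter
  namespace: default
spec:
  nodeName: harvester-node-7       # pin to the node where the LV lives
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.9
      volumeMounts:
        - name: lvm-vol
          mountPath: /data         # kubelet mounts the LV here for the pod
  volumes:
    - name: lvm-vol
      persistentVolumeClaim:
        claimName: test-lvm-pvc
(Even then, the mount lives under kubelet's per-pod volume directory on the host, not at a stable host path that Longhorn could be pointed at, which is the problem described above.)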
s
Do you mean node-disk-manager supporting LVM? Actually, yes: node-disk-manager supports the VG operations, but you need to create the LV from the VG manually.
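(For reference, with a VG already created by node-disk-manager, the remaining manual step would look roughly like this; the LV name and mount point are illustrative, and -i 6 assumes six PVs in the VG:)
lvcreate -n longhorn-lv -i 6 -l 100%FREE lvm-test    # stripe the LV across the VG's six PVs
mkfs.xfs /dev/lvm-test/longhorn-lv
mount /dev/lvm-test/longhorn-lv /var/lib/longhorn-disk    # would also need a persistent mount entry to survive reboots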
b
Thanks, but that's the bit we were hoping to avoid, as we would then need to test how persistent it is across Harvester upgrades, reboots, etc.
I can see "Sharding" and "Single Logical Volume Store Across Multiple Disks" in the roadmap for the Longhorn v2 data engine GA, so we will likely wait until then.