# general
r
So there's an in-tree NFS provisioner? Would I still need to define a storage class that calls upon the provisioner? I didn't find any clues about what you mentioned over here in https://kubernetes.io/docs/concepts/storage/storage-classes/#nfs.
f
Sorry, yes — you just mount the NFS server volume directly into the container. You'll need `nfs-common` installed on any worker nodes that will be connecting out to the NFS server
If you meant you want to spin up an NFS server in-cluster, that's been dead and buried as far as support goes, and is a security nightmare these days
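A minimal sketch of what that direct mount looks like — the server IP, export path, and image below are placeholders, not anything from this thread:

```yaml
# Sketch: mounting an NFS export directly into a Pod via the in-tree nfs volume type.
# Every node this Pod can schedule onto needs `nfs-common` installed.
apiVersion: v1
kind: Pod
metadata:
  name: nfs-client-example
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      nfs:
        server: 10.0.0.10    # placeholder NFS server address
        path: /exports/data  # placeholder export path
```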
r
I'm not in any hurry to use NFS again either! It is sometimes unavoidable, though, so I'm trying to catch up on what's built in to K8s now. It sounds like it's an admin, node-level operation. I was wondering if it was something I could use with a PVC as a pod deployer who's not a cluster admin.
via storage class, I mean
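For the non-admin angle: the in-tree `nfs` volume type also works with statically provisioned PersistentVolumes — an admin creates the PV, and then an ordinary namespace user binds it with a PVC (no StorageClass or provisioner involved). A minimal sketch, with server, path, and names as placeholders:

```yaml
# Admin side: a statically provisioned PV backed by the in-tree nfs plugin.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-example
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  storageClassName: ""          # empty class: bind only by explicit reference
  nfs:
    server: 10.0.0.10           # placeholder
    path: /exports/data         # placeholder
---
# Non-admin side: a PVC that binds to the PV above by name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-example
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: nfs-pv-example
  resources:
    requests:
      storage: 10Gi
```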
f
So what I would recommend: mount the NFS shares directly on the VMs you want to use as storage brokers, then spin up Longhorn and point its storage location at that mounted NFS share
That would give you NFS storage backing along with all its resiliency features, plus a proper storage API to call for PVC creation
It might seem weird to have two different storage types next to each other, but keep in mind we do stuff like that with nginx, for example, all the time
NFS is still good as a storage protocol imo, it just lacks the API features of the newer stuff like Ceph and Longhorn
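To illustrate the "proper storage API" part: once Longhorn is installed it ships a `longhorn` StorageClass, so PVC creation is dynamic. A sketch (the class name is the Longhorn default, worth double-checking against your install; the claim name and size are placeholders):

```yaml
# A dynamically provisioned PVC against Longhorn's default StorageClass.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn   # default class created by the Longhorn install
  resources:
    requests:
      storage: 5Gi
```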
r
Best of both worlds.
👍 1
w
same goes for k3s?
f
I mean, my home lab pi cluster uses k3s and longhorn just fine
s
@flaky-winter-94949 placing Longhorn volumes on NFS feels counterintuitive. Now your NFS backend needs to be VERY highly available, otherwise you lose all your persistent volumes. For a proper HA NFS cluster you'd need three NFS servers, in which case you might as well (I think..) set up three additional K8s nodes with Longhorn and have them serve the volumes using its built-in NFS support.