# longhorn-storage
e
A couple of questions that are probably answered in the docs I haven't had time to read:
1. Is there a command-line way to view the same data that's in the Longhorn web UI?
2. Do I understand correctly that once space is allocated to a PV, there's no way to recover/shrink that PV if the data on it is removed?
h
there's a `longhornctl` - if you haven't already, you may want to check it out: https://longhorn.io/docs/1.9.0/advanced-resources/longhornctl/
👀 1
c
You can also just interact with the CR types via kubectl
The volumes do support trim, but it will not reduce the space used on the backing disk.
e
So do I really gain anything?
c
Not if you're trying to recover space on the host, no.
e
do PVs share space with each other, so trim would allow a different PV to use that freed-up space?
c
No.
Not sure what you mean by sharing space. Like using the same blocks for multiple disks somehow?
e
Let's say I have 1TB of space assigned to Longhorn for PV use. I have 2 pods that have 500GB PVs, but only 100GB of data on each. If they both filled up one day, but then I cleaned them back down to 100GB, the host has to write that off as 1TB forever, but each PV still has 400GB available now. If I tried to add a 3rd pod that needed 200GB, assuming I allowed 2x overprovisioning to begin with, would that 800GB of "free" space be available to the 3rd pod? Would trim make it available?
feel free to give me a facepalm emoji if I am making no sense and need to go read the docs... 🙂
m
In your scenario, the 2 pods using 500GB wouldn't get assigned to the same node, because Longhorn reserves 25% of the disk for system use: https://longhorn.io/docs/1.6.1/nodes-and-volumes/nodes/node-space-usage/#whole-cluster-space-usage

But let's take your hypothetical case and say you accounted for the 25% reserved space. No, it still wouldn't schedule, because your 2 PVCs are using all the available storage. Even if a pod only uses 100GB, you already allocated 500GB, so Longhorn has reserved all of it for each pod.
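The arithmetic above can be sketched as a small function. This is a simplified illustration of the scheduling rule (allocated size checked against usable capacity times the over-provisioning percentage), not Longhorn source code; the function name and the choice of GB units are my own:

```python
# Simplified sketch of the replica-scheduling arithmetic discussed above.
# Sizes are in GB. Longhorn's real check involves more factors (per-node
# disks, replica counts), so treat this as a back-of-envelope model only.
def schedulable(capacity, reserved_pct, overprov_factor, allocated, requested):
    usable = capacity * (1 - reserved_pct)   # e.g. 25% reserved for system use
    budget = usable * overprov_factor        # allocation budget after over-provisioning
    return allocated + requested <= budget   # does the new volume's *allocated* size fit?

# 1 TB disk, reserve already accounted for, default 1x over-provisioning,
# two 500 GB PVs allocated, new 200 GB volume requested:
print(schedulable(1000, 0.0, 1.0, allocated=1000, requested=200))  # False

# Raising the over-provisioning setting to 2x changes the outcome,
# because scheduling counts allocated size, not the 100 GB actually used:
print(schedulable(1000, 0.0, 2.0, allocated=1000, requested=200))  # True
```

Note that actual data usage (the 100GB per pod) never appears in the check: only the allocated sizes count, which is why trimming doesn't help with scheduling.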
Also, most of this information is in the docs.
👍 1