# longhorn-storage
I'm new to Longhorn, but I've managed to get some nodes, a volume started, etc.; all the basic stuff is working well. I see how I can manually add the disks my servers have so Longhorn can use them, but I can't find a good example online of how I might add drives declaratively. I might be using the wrong words, but 3 of my servers have 20 drives, sda-sdu. How can I add those in the simplest way? (i.e. I'm trying to develop a repeatable way to add drives when adding new servers, and not have to manually set up 60 drives with clicks.)
cc @little-kangaroo-65735
You can add disks with `kubectl` as described in https://longhorn.io/docs/1.9.0/nodes-and-volumes/nodes/multidisk/
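For context, the linked page walks through editing the node resource directly, along these lines (a sketch; the node name is a placeholder, and Longhorn node names match the Kubernetes node names):

```bash
# Open the Longhorn node resource and add one entry per mounted disk under spec.disks,
# as shown on the multidisk page linked above.
kubectl -n longhorn-system edit nodes.longhorn.io <node-name>
```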
Hello @acoustic-bird-95130 To add multiple filesystem-type disks (used for v1 volumes) to Longhorn, you can follow these steps:
1. Format each disk with an extent-based filesystem such as `ext4` or `xfs`.
2. Mount the formatted disks at the desired mount points.
3. Add the disks to the Longhorn node by patching the `node.longhorn.io` resource.
The Longhorn documentation provides an example using `kubectl edit`, but you can also achieve this with `kubectl patch`. These steps can easily be automated with a shell script using a `for` loop (see the sketch after this message). Feel free to let me know if you need help. Thank you.
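To make steps 1 and 2 concrete, here is a minimal sketch of the on-node formatting half. It assumes the listed devices are empty and dedicated to Longhorn; the device list, the `/mnt/disks/<dev>` mount-point convention, and the choice of `xfs` are placeholders, not Longhorn requirements:

```bash
#!/usr/bin/env bash
# Sketch: format and mount data disks on one storage node (placeholder device list).
set -euo pipefail

DISKS="sda sdb sdc"                  # expand to your real device list

for dev in $DISKS; do
  mnt="/mnt/disks/${dev}"            # assumed mount-point convention
  mkfs.xfs -f "/dev/${dev}"          # ext4 works as well; both are extent-based
  mkdir -p "${mnt}"
  uuid=$(blkid -s UUID -o value "/dev/${dev}")
  echo "UUID=${uuid} ${mnt} xfs defaults 0 2" >> /etc/fstab   # persist across reboots
  mount "${mnt}"
done
```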
@icy-agency-38675 that helps tremendously. I had just barely started a script to handle the formatting portion when I stumbled on two questions, and then saw your reply!
1. This server has 20 SATA SSDs and 8 NVMe drives. I see all of them as block devices with `lsblk` or `udevadm`. Should I iterate through them as filesystem-type disks for the SATA SSDs and block-type (v2) disks for the NVMe drives? I planned to tag them as SSD and NVMe so I can isolate them with storage configs.
2. What is the best/common practice for scripting the on-node and `kubectl` commands? The Longhorn storage is on worker nodes, which aren't configured with `kubectl` by default. Should I configure `kubectl` to talk to the cluster from the workers, or run them as two different scripts, with the `kubectl` part from a control node?
Hello @acoustic-bird-95130
1. I believe you are using both the SATA and NVMe SSDs for the v1 data engine (note that v2 is still experimental and not recommended in a production environment), so you can iterate over all of them, format them, and add them to `node.longhorn.io` as filesystem-type disks.
2. `kubectl` is a client command-line tool and has nothing to do with the control or data plane. If you run `kubectl -n longhorn-system get nodes.longhorn.io`, you can see all Longhorn nodes in your cluster. If your node A has one disk, you can only add that disk to node A's `node.longhorn.io` resource. If your control-plane node doesn't have any disks for storage, you don't need to handle that node. One script is enough for the iteration; see the sketch below.
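As an illustration of the `kubectl patch` half, run from any machine with a kubeconfig for the cluster (a control node or a workstation is fine), here is a hedged sketch; the node names, device list, mount paths, and the `ssd` tag are placeholders:

```bash
#!/usr/bin/env bash
# Sketch: register already-formatted-and-mounted disks with Longhorn
# (node names, device list, paths, and tags are placeholders).
set -euo pipefail

NODES="worker-1 worker-2 worker-3"
DISKS="sda sdb sdc"

for node in $NODES; do
  for dev in $DISKS; do
    kubectl -n longhorn-system patch nodes.longhorn.io "${node}" --type merge -p "{
      \"spec\": {
        \"disks\": {
          \"disk-${dev}\": {
            \"path\": \"/mnt/disks/${dev}\",
            \"allowScheduling\": true,
            \"evictionRequested\": false,
            \"storageReserved\": 0,
            \"tags\": [\"ssd\"],
            \"diskType\": \"filesystem\"
          }
        }
      }
    }"
  done
done
```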
Thank you again, @icy-agency-38675. I managed to cobble together an amateur-ish script. I just got scared, so I did one script to do the formatting and naming, and then a different one to run the Longhorn `kubectl` commands to add them to Longhorn. So far so good! They all showed up, with their tags. Not to extend a simple question into a multi-question thing, but from this point: are there any best-practice naming schemes, or things to know about creating custom storage classes to accommodate things like targeting NVMe versus SSD (or even slower)? I've seen people use "fast" and "slow" in their configs, but I'm thinking I won't know how to explain this to developers, who will probably pick fast for everything. I'm also thinking ahead: with the CloudNativePG operator and database instances, I won't want them to have 3 replicas; I'd want them to have one and sync at the application level. Do people just create a storage class for database targets?
> Are there any best-practice naming schemes, or things to know about creating custom storage classes to accommodate things like targeting NVMe versus SSD (or even slower)?

No, we don't have any recommendations for the naming.

> Do people just create a storage class for database targets?

This is okay. No worries.
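As a sketch of what such classes could look like (not an official recommendation; the class names, the `nvme` disk tag, and the parameter values are assumptions): one class steers replicas onto disks tagged `nvme` via `diskSelector`, and a database-oriented class keeps a single Longhorn replica so replication is left to the application (e.g. CloudNativePG):

```bash
# Sketch: two example Longhorn StorageClasses (names, tags, and values are placeholders).
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-nvme
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "2880"
  diskSelector: "nvme"            # only use disks tagged "nvme"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-database
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "1"           # single Longhorn replica; the database replicates at the app level
  staleReplicaTimeout: "2880"
  diskSelector: "nvme"
EOF
```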