# general
s
by default it stores the data at /var/lib/longhorn/ on each node. You can add more disks/directories if you want.
However, is it possible to attach data to containers which are running on k8s clusters via Longhorn?
What do you mean? What are you trying to accomplish?
g
Thank you @shy-shoe-31898. I want to share a volume with my workloads using Longhorn. Is that possible?
s
yes, you just need to attach a ReadWriteMany volume to the pods https://longhorn.io/docs/1.6.2/nodes-and-volumes/volumes/rwx-volumes/
this is in case you have multiple pods that you want to have the same volume mounted to.
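for anyone finding this thread later, a minimal sketch of what that looks like. The claim/pod names like `shared-logs` and `writer-1` are just placeholders, and it assumes Longhorn is installed with its default `longhorn` StorageClass:

```yaml
# Hypothetical PVC requesting a Longhorn-backed ReadWriteMany volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-logs
spec:
  accessModes:
    - ReadWriteMany          # lets multiple pods (even on different nodes) mount it
  storageClassName: longhorn
  resources:
    requests:
      storage: 10Gi
---
# Example pod mounting the shared claim; any number of pods can do the same.
apiVersion: v1
kind: Pod
metadata:
  name: writer-1
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello >> /data/out.log && sleep 3600"]
      volumeMounts:
        - name: shared
          mountPath: /data
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-logs
```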
g
Thank you very much. Should I gather logs of my deployments using Longhorn, or should I use Elasticsearch? We are using log4j and we gather logs inside the containers. For this reason I want to take all container logs and store them on a Longhorn volume.
s
so that they share the same directory between them
g
I understand. Is it better than NFS and GlusterFS?
s
well it depends on the specific use case and your infrastructure. If you have a huge cluster with a lot of storage, you might want to take a look at Rook-Ceph.
but Longhorn is good enough for most cases i guess.
i don't think i understand the logs part, you have elasticsearch deployed outside your kubernetes?
or do you want to deploy elasticsearch inside your kubernetes and use longhorn as its storage provider?
ooh, you have multiple log4j instances, and you want to create a shared volume between them all?
g
Rook-Ceph is doing the same thing as Longhorn, isn't it? I don't have log management now. I want to save all logs. I was thinking of saving the logs with volumes. I'm thinking about adding Elasticsearch, but I'll save all the logs on the Longhorn volume anyway, right?
Yes, I have multiple log4j instances and I want to create a shared volume between them all. Or what can I do instead of that?
s
yes, rook-ceph basically does the same, but using Ceph. Longhorn is much simpler.
you can do what you want to do with both
or you could deploy an elasticsearch cluster
depends on what you need
for simple log monitoring and storage, it's ok to do it with a shared volume
we use an elasticsearch cluster outside kubernetes, since we have multiple clusters from which we collect logs. It was a pain to set up and configure, not to mention tuning it so it doesn't use terabytes of storage.
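in case it helps, a rough sketch of the shared-volume approach for the log4j case. Names like `log4j-app` and the image are made up, and it assumes a ReadWriteMany claim named `shared-logs` already exists (per the Longhorn RWX docs linked above):

```yaml
# Hypothetical Deployment: several log4j app replicas append their logs
# under the same shared RWX volume, one subdirectory per pod so their
# log files don't collide.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: log4j-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: log4j-app
  template:
    metadata:
      labels:
        app: log4j-app
    spec:
      containers:
        - name: app
          image: my-log4j-app:latest   # placeholder image
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          volumeMounts:
            - name: logs
              mountPath: /var/log/app   # point the log4j file appender here
              subPathExpr: $(POD_NAME)  # per-pod subdirectory on the shared volume
      volumes:
        - name: logs
          persistentVolumeClaim:
            claimName: shared-logs      # pre-existing RWX claim
```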
g
I understand, I can use an Elasticsearch cluster inside Kubernetes. It can be useful. But if I use the Longhorn driver I will store all data on 7 nodes, while if I use GlusterFS I will store all data on 2 machines. For this reason, using the Longhorn driver for log management is not wise I think, because it will take up more space.