# harvester
b
Just to clarify, the numbers you're seeing are not the Reserved numbers, but Used, correct? A lot of the CPU reservations come from Longhorn's default out-of-the-box CPU reservation settings, and they can be adjusted. Beyond the first 3 nodes, the remaining nodes will be worker nodes, so they won't have to run control-plane components like etcd or the API server, so I would expect their reservations and usage to be lower, yes
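For reference, Longhorn's per-node CPU reservation is controlled through a Longhorn `Setting` object in the `longhorn-system` namespace. A hypothetical sketch of lowering it might look like this (the setting name and value format vary between Longhorn versions, so check the settings reference for your release before applying anything):

```yaml
# Hypothetical example: reduce the percentage of each node's CPU that Longhorn
# reserves for its instance-manager pods. Older Longhorn releases split this
# into separate engine-manager and replica-manager settings, so verify the
# exact setting name against your version's documentation.
apiVersion: longhorn.io/v1beta2
kind: Setting
metadata:
  name: guaranteed-instance-manager-cpu
  namespace: longhorn-system
value: "6"
```

The same value can usually be changed from the Longhorn UI under Settings instead of applying YAML, which is often easier on a running Harvester cluster.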
s
Yes, it used 11 GB of RAM and 4 vCPUs. I suspected that Longhorn was responsible for the CPU usage because I have another node with K3s and Longhorn installed, and I've noticed the load is mainly coming from it. Is it correct that each node will have Longhorn installed? Also, it seems that compute and storage are in the same boat in Harvester. I plan to look into it further. How does Longhorn compare to Ceph or OpenStack Cinder in terms of architecture and technology? Are there any recommended resources for me to explore more?
b
> Is it correct that each node will have Longhorn installed? Also, it seems that compute and storage are in the same boat in Harvester.
Yes, Harvester is a hyperconverged hypervisor, so it will cluster compute, networking, and storage across all nodes for high availability.
> I plan to look into it further. How does Longhorn compare to Ceph or OpenStack Cinder in terms of architecture and technology? Are there any recommended resources for me to explore more?
To be honest, I have not used any of those 😅 I recommend putting all of the workloads you think you'll run down on paper and drafting some requirements from them. Then investigate the Harvester docs https://docs.harvesterhci.io/v1.2 or check out some demos/talks on YouTube to see whether or not it satisfies your requirements before pursuing a proof-of-concept on bare metal.