# vsphere
a
TL;DR - Let Kubernetes do the work for you. It sounds like vRops might not be accounting for eviction thresholds, kubelet reserved resources, workload bursting & reservations, etc. (a.k.a. it has zero awareness of Kubernetes).

As far as workloads are concerned: Longhorn, as an example, reserves a not-insignificant amount of system resources for replica scheduling & rebuilding, which in turn has an impact on scheduling additional workloads - that has to be considered when sizing nodes. And if Harbor's postgres instance is using Longhorn for persistent storage, it can consume quite a bit of CPU, disk, and network, since the chatter between volume replicas is almost non-stop when pushing & scanning images. General cluster traffic (API requests/responses, etcd replication, etc.) comes into play as well.

Cluster node autoscaling might be a good fit for your environment, since nodes can then scale in or out as workload demands. And since you're technically already using Vertical Pod Autoscaling via Goldilocks (albeit in Recommend mode), taking the extra step of letting VPA actually scale some workloads up or down for you would also be a benefit.
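To make that last VPA step concrete, here's a minimal sketch of what promoting a workload from recommendations to automatic scaling might look like - the workload name, namespace, and resource bounds are all placeholders for your environment, and it assumes the VPA CRDs (which Goldilocks already depends on) are installed:

```yaml
# Hypothetical example: let VPA apply recommendations to a Deployment
# instead of just reporting them. Names and limits are placeholders.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: harbor-postgres-vpa   # placeholder name
  namespace: harbor           # placeholder namespace
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: harbor-postgres     # placeholder workload
  updatePolicy:
    # "Off" is effectively what Goldilocks' Recommend mode observes;
    # "Auto" lets VPA apply recommendations (currently by recreating pods).
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
      - containerName: "*"
        minAllowed:           # guardrails so VPA can't starve the workload
          cpu: 100m
          memory: 256Mi
        maxAllowed:           # ...or let it balloon past node capacity
          cpu: "2"
          memory: 4Gi
```

Worth noting the min/max guardrails: since "Auto" mode evicts pods to apply new requests, bounding it keeps a noisy workload from triggering churn.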
b
Thanks for the details! I will look into cluster node autoscaling - do you have any recommendations? We are also investigating SLE Micro instead of a traditional Linux OS, leveraging SUSE's Elemental Operator, so that nodes just run containers - trimming the fat, if you will - to see if that helps.
a
The only autoscaler you can really use is the upstream Kubernetes Cluster Autoscaler - usage example HERE. Elemental/SLE Micro is a great stack to try!
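For reference, the upstream Cluster Autoscaler runs as an in-cluster Deployment whose flags depend on your provider - this is just a rough sketch of the container args, where the cloud-provider value and node-group bounds are assumptions you'd swap for your setup:

```yaml
# Hypothetical fragment of a cluster-autoscaler Deployment's pod spec.
# Flags shown are real upstream flags; the values are placeholders.
containers:
  - name: cluster-autoscaler
    image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.28.0
    command:
      - ./cluster-autoscaler
      - --cloud-provider=clusterapi        # pick the provider matching your stack
      - --nodes=1:5:worker-pool            # min:max:node-group (format is provider-specific)
      - --scan-interval=30s                # how often to re-evaluate scaling
      - --scale-down-delay-after-add=10m   # cool-down before scaling back in
```

The min/max bounds in `--nodes` are where the earlier sizing discussion (Longhorn reservations, kubelet reserved resources, etc.) feeds in - the max has to reflect what your vSphere capacity can actually absorb.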