TL;DR - Let Kubernetes do the work for you
It sounds like vROps might not be accounting for eviction thresholds, kubelet reserved resources, workload bursting & reservations, etc. (in other words, it has zero awareness of Kubernetes). As far as workloads are concerned: Longhorn, as an example, reserves a not-insignificant amount of system resources for replica scheduling & rebuilding, which in turn impacts the scheduling of additional workloads, and those are things that must be considered when sizing nodes. And if Harbor's Postgres instance is using Longhorn for persistent storage, it can consume quite a bit of CPU, disk, and network resources, as the chatter between volume replicas is almost non-stop when pushing & scanning images. There is also general cluster traffic that comes into play: API requests/responses, etcd replication, and so on.
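As a rough illustration of what a VM-level tool doesn't see: the kubelet carves reservations and eviction headroom out of each node before anything is schedulable. A hypothetical KubeletConfiguration fragment (the values here are placeholders, not sizing recommendations) might look like:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Resources set aside for Kubernetes daemons (kubelet, container runtime)
kubeReserved:
  cpu: "500m"
  memory: "1Gi"
# Resources set aside for OS system daemons (sshd, journald, etc.)
systemReserved:
  cpu: "500m"
  memory: "512Mi"
# Headroom below which the kubelet starts evicting pods
evictionHard:
  memory.available: "200Mi"
  nodefs.available: "10%"
```

With settings like these, a node's allocatable capacity is its raw capacity minus kubeReserved, systemReserved, and the evictionHard headroom, which is precisely the gap between what vROps reports and what Kubernetes will actually schedule.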
Cluster node autoscaling might be a good fit for your environment, since nodes can then scale out or in as workload demand changes. And since you're technically already using Vertical Pod Autoscaling via Goldilocks (albeit in Recommend mode), taking the extra step of letting VPA actually scale some workloads up or down for you would also be a benefit.
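If you go that route, moving a workload from recommend-only to active resizing is a matter of the updateMode on its VPA object. A sketch, assuming a hypothetical harbor-database StatefulSet as the target:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: harbor-database-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: harbor-database   # hypothetical target workload
  updatePolicy:
    # Goldilocks-created VPAs default to "Off" (recommend-only);
    # "Auto" lets the VPA updater apply the recommendations
    updateMode: "Auto"
```

One caveat: in "Auto" mode the VPA updater evicts pods to apply new resource requests, so it's worth starting with stateless or easily restarted workloads before touching anything like a database.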