Hi Team, for a K8s cluster with ~20 nodes, each running ~100 pods, how should we plan to scale the controller pods? Also, is it OK to set up an HPA for this instead of a statically scaled set of controllers?
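For concreteness, this is roughly what I have in mind (a sketch only; the target Deployment name and namespace are assumed from a standard install, so adjust to match yours):
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: neuvector-controller-hpa
  namespace: neuvector               # assumed namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: neuvector-controller-pod   # assumed Deployment name
  minReplicas: 3
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization            # requires CPU requests set on the pods
        averageUtilization: 70
```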
quaint-candle-18606
03/22/2023, 1:07 PM
Regarding horizontal scale, you’re definitely fine at the default 3
NeuVector containers, inclusive of the Controllers, are generally pretty good at not gobbling up more resources than they need. So I’d again suggest you’re good with the defaults, being mindful that they like about 1 GB of memory and 1 vCPU each to run.
Certainly keep an eye on things as you test, but all should be fine until you’re at the many hundreds of nodes supporting many thousands of pods scale. 🙂
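If you want to make those defaults explicit, here’s a minimal sketch of a values override for the NeuVector Helm chart (assuming your chart version exposes a `controller.resources` block; confirm with `helm show values`):
```yaml
# values.yaml override (keys assumed; verify against your chart version)
controller:
  replicas: 3          # the default; no need to go higher at ~20 nodes
  resources:
    requests:
      cpu: "1"
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 1Gi
```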
full-lawyer-94872
03/22/2023, 6:31 PM
Currently the controller pods are given the requests and limits below.
For CPU, the limit seems to be well below 1 vCPU, but I could not find a case where usage reached even 10% of it.
Memory, on the other hand, does trouble me: there were instances where usage came very close to the limit, and I suspect the pods even went OOM, since there were a few restarts in the cluster. That led to some unexpected behaviour too, like losing multiple network rules while running in monitor mode, or the same rule being detected twice or not detected at all.
(Running version 5.1.1 with 5 controller pods in this case.)
Hence I got very curious about these settings, which is what prompted the question 🙂
Also, I see a gradual increase in memory usage for the controller pods each day after a fresh restart, and I am wondering whether not having enabled persistence yet has anything to do with it? (The checks I have been running are below.)
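For reference, these are the sorts of spot checks I have been running (a sketch; the `app=neuvector-controller-pod` label and the `neuvector` namespace are assumed from a standard install):
```sh
# Look for restarts and confirm whether the last termination was OOMKilled
kubectl -n neuvector get pods -l app=neuvector-controller-pod
kubectl -n neuvector describe pod <controller-pod-name> | grep -A 5 "Last State"

# Compare live usage against the limits (needs metrics-server installed)
kubectl -n neuvector top pods -l app=neuvector-controller-pod
```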
quaint-candle-18606
03/23/2023, 2:48 PM
I don’t think deploying 5 Controllers is helping in your situation; stick with 3. 🙂