# neuvector-security
q
Regarding horizontal scale, you’re definitely fine at the default of 3 Controllers.
NeuVector containers, inclusive of the Controllers, are generally pretty good at not gobbling up more resources than they need. So I’d again suggest you’re good with the defaults, being mindful that they like about 1GB/1vcpu to run.
Certainly keep an eye on things as you test, but all should be fine until you’re at the many hundreds of nodes supporting many thousands of pods scale. 🙂
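If you’re deploying with the official neuvector-helm chart, the replica count is just a values override. A minimal sketch, assuming the chart’s `controller.replicas` key (key names may differ by chart version):
```yaml
# values.yaml override for the NeuVector core chart (assumed key names)
controller:
  replicas: 3   # the chart default; no need to raise this at small scale
```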
f
Currently the controller pods are given the requests and limits below:
```yaml
resources:
  limits:
    cpu: 400m
    memory: 2792Mi
  requests:
    cpu: 100m
    memory: 2Gi
```
For CPU, the limit is way below 1 vCPU, but I couldn’t find a case where usage even went above 10% of it.
Memory, on the other hand, does trouble me: there were instances where usage got very close to the limit, and I suspect it even went OOM, since there were a few restarts in the cluster that led to some unexpected behaviour too, like losing multiple network rules while running in Monitor mode, or the same rule being detected twice or not detected at all.
(Running version 5.1.1 with 5 controller pods in this case.)
Hence I’ve become very curious about these settings, and that’s why I asked 🙂
Also, I see a gradual daily increase in memory usage for the controller pods after a fresh restart, and I’m wondering whether not having enabled persistence yet has anything to do with it?
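For reference, if we do end up enabling persistence, my understanding is that it’s a values override in the official neuvector-helm chart. A sketch assuming the chart’s `controller.pvc.*` keys (names may vary by chart version, and the size/storage class here are hypothetical):
```yaml
# values.yaml override: back the controller config with a PVC (assumed key names)
controller:
  pvc:
    enabled: true
    capacity: 1Gi           # hypothetical size; adjust to your cluster
    storageClass: standard  # hypothetical storage class name
```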
q
I don’t think deploying 5 Controllers is helping in your situation; stick with 3. 🙂
f
+1, will stick to 3 controllers then
How about the requests and limits?
Below is what we already had:
```yaml
resources:
  limits:
    cpu: 400m
    memory: 2792Mi
  requests:
    cpu: 100m
    memory: 2Gi
```
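In case it helps, here’s the same block expressed as a Helm values override, so it survives chart upgrades. This is a sketch assuming the chart exposes a `controller.resources` key (names may differ by chart version):
```yaml
# values.yaml override mirroring our current requests/limits (assumed key names)
controller:
  resources:
    limits:
      cpu: 400m
      memory: 2792Mi
    requests:
      cpu: 100m
      memory: 2Gi
```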
q
Yeah, give that a try.
👍 1