# k3s
b
The node can allocate however much CPU it actually has. If you have 8 cores, that would be 8000m of reservable/requestable CPU for pods. You say it has 2 threads of a beefy Ryzen -- how did you limit it to 2 cores? That'd be where to increase it.
f
It's a vm.
b
The other option is to reduce the requests of other pods on the machine so that mongo's requests can be satisfied
then give the vm more cores and/or reduce the requests across deployed pods so that you can fit them all in 2 cpus
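As a rough sketch of what reducing a request looks like in practice -- the deployment name, image, and numbers here are made up, not from this thread:
```
# illustrative Deployment with a deliberately small CPU request so it fits on a 2-CPU node
apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-app            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: some-app
  template:
    metadata:
      labels:
        app: some-app
    spec:
      containers:
        - name: some-app
          image: some-app:latest
          resources:
            requests:
              cpu: 100m     # was e.g. 500m; lowering this frees schedulable CPU for mongo
              memory: 128Mi
```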
f
The limit is 55%, which seems unreasonable for a dev vm.
b
Perhaps you shouldn't be setting requests on your deployed pods at all if there's no real workload guarantees you need to make?
When you describe the node, you'll see how much k8s thinks it has to work with:
```
Allocatable:
  cpu:                8
  ephemeral-storage:  222891278366
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             16270508Ki
  pods:               110
```
On an 8 cpu vm for example ^
your node has 72% of its 2 cpus already requested; the 1450m output in your top comment shows that.
requests are similar to a guaranteed reservation, not actual cpu usage.
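The 72% / 1450m figures come from the "Allocated resources" section of kubectl describe node; on a 2-cpu node it looks roughly like this (the limit and memory numbers here are illustrative):
```
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests      Limits
  --------           --------      ------
  cpu                1450m (72%)   2300m (115%)
  memory             1100Mi (6%)   2600Mi (16%)
```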
f
Right, so how do I change the limit? 55% of two cores is unreasonable.
b
the limit is per-pod. so a pod might request 500m cpu and be limited to 1000m cpu. the limits there are not impacting your ability to deploy; the requests are.
basically k8s is telling you that your pods are already reserving 1.45 cores, and mongo is asking for more than the remaining .55 cores you have left to dish out
on a small dev node like that, you probably don't want to set requests for anything but critical workloads - or anything you know will need consistent cpu resources
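A minimal sketch of that request-vs-limit distinction on a single container, with made-up values:
```
resources:
  requests:
    cpu: 500m    # counted against the node's allocatable CPU at scheduling time
  limits:
    cpu: 1000m   # runtime throttling ceiling; has no effect on whether the pod can schedule
```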
f
My CR doesn't define any limits/requests.
After deleting the CR, the limits/requests seem to remain unchanged.
b
there's likely a default applied.
in kubectl describe node nodename you can see the requests of all the existing pods (example ^), so you can see what's requesting what already and consider reducing their requests too.
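One possible source of a default like that is a LimitRange in the namespace (the operator itself may also inject defaults); a quick check, assuming the resources live in a namespace called mongodb:
```
kubectl get limitrange -n mongodb
kubectl describe limitrange -n mongodb
```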
f
It looks like there's an example for overriding limits/requests: https://github.com/mongodb/mongodb-kubernetes-operator/blob/master/config/samples/mongodb.com_v1_mongodbcommunity_specify_pod_resources.yaml However, there isn't one for disabling it.
b
you could set resources: {} which will zero it out
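Going by the structure of the sample linked above, that override would nest under the CR's statefulSet template -- a sketch only, with a placeholder metadata name and the other required spec fields omitted:
```
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: example-mongodb     # placeholder name
spec:
  # ...members, type, version, users, etc. as in the other samples...
  statefulSet:
    spec:
      template:
        spec:
          containers:
            - name: mongod
              resources: {}
            - name: mongodb-agent
              resources: {}
          initContainers:
            - name: mongodb-agent-readinessprobe
              resources: {}
```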
f
Doesn't seem to like it:
```
containers:
            - name: mongod
              resources: {}
            - name: mongodb-agent
              resources: {}
          initContainers:
            - name: mongodb-agent-readinessprobe
              resources: {}
```
Throws:
```
error converting YAML to JSON: yaml: line 37: did not find expected key
```
Line 37 is the first resources line.
b
hm that really should work 🤔
f
Looks like the failure was formatting; I was able to get it to work with those after some indenting. But now there's a new error about disk pressure:
```
Warning  FailedScheduling  89s   default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/disk-pressure: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
```
But it seemed to work when I set lower reqs.
Okay, deleting and reapplying allowed it to go back to throwing an error about cpu bounds. Setting lower requests allowed this to proceed.
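For reference, the shape that finally scheduled was the same statefulSet override but with small explicit requests instead of empty ones, along these lines (values illustrative):
```
# slots into spec.statefulSet.spec.template.spec of the MongoDBCommunity CR, as above
containers:
  - name: mongod
    resources:
      requests:
        cpu: 100m        # small enough to fit the node's remaining allocatable CPU
        memory: 256Mi
  - name: mongodb-agent
    resources:
      requests:
        cpu: 50m
        memory: 128Mi
initContainers:
  - name: mongodb-agent-readinessprobe
    resources:
      requests:
        cpu: 25m
        memory: 64Mi
```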
Thanks for the help.
b
welcome!