powerful-soccer-11224

11/15/2022, 6:50 AM
Hi Everyone, We have a harvester node with the following resources
Allocated CPU: 96C
Allocated Memory: 188GiB
Allocated Storage: 758GiB
Now, when we try to create VMs (
CPU: 4 cores
Memory: 8Gi
) on the node using our custom k8s controller (operator), we observed that
actual memory usage was close to 35%, but reserved memory on the Harvester node was almost 99%, and VM creation was no longer going through.
(snippet attached). Can anyone explain the reason for this and how we can overcome it?
full-plastic-79795

11/15/2022, 7:34 AM
The reserved value is the request value, not the actual usage.
You can use
kubectl describe node <NODE NAME>
to see which VM requested a large value.
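To see why reserved can sit near 99% while real usage stays around 35%: the scheduler admits pods based on the sum of their memory *requests* against the node's allocatable memory, not on live usage. A minimal sketch of that arithmetic, using illustrative numbers assumed from this thread (188GiB allocatable node, VMs requesting 8Gi each; the VM count and per-VM real usage are hypothetical):

```python
# Kubernetes schedules on *requests*, not live usage.
# All counts below are illustrative assumptions, not values from the actual node.

GIB = 1024 ** 3

node_allocatable = 188 * GIB             # node's allocatable memory (from the thread)
vm_request = 8 * GIB                     # each VM requests 8Gi (from the thread)
num_vms = 23                             # hypothetical number of running VMs

reserved = num_vms * vm_request          # what the scheduler counts
actual_usage = int(num_vms * 2.8 * GIB)  # hypothetical real usage (~2.8Gi per VM)

reserved_pct = 100 * reserved / node_allocatable
usage_pct = 100 * actual_usage / node_allocatable

print(f"reserved: {reserved_pct:.0f}%")  # near 100% -> new VMs fail to schedule
print(f"actual:   {usage_pct:.0f}%")     # RAM looks mostly free
```

Once the summed requests approach the allocatable total, the scheduler rejects new VMs even though most physical RAM is idle.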
ancient-pizza-13099

11/15/2022, 11:26 AM
@powerful-soccer-11224 please run these 3 commands to get the memory requests; you will find which pod requests a big amount of memory. I suspect some already-created VMs are using the wrong memory size, since in the
create vm
web UI the default memory unit is
Gi
, not
Mi
.
kubectl get pods -A -o yaml | yq e '.items[].metadata.name' -
kubectl get pods -A -o yaml | yq e '.items[].spec.containers[].resources.requests.memory' -
kubectl get pods -A -o yaml | yq e '.items[].spec.containers[].resources.limits.memory' -
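The unit mix-up mentioned above is easy to quantify: Kubernetes memory quantities use binary suffixes, so a VM accidentally given 8Gi where 8Mi was intended requests 1024x more memory. A small sketch of the conversion (a hypothetical helper, not part of kubectl or yq, covering only the common suffixes):

```python
# Hypothetical helper: convert a Kubernetes memory quantity string to bytes.
# Handles the common binary (Ki/Mi/Gi) and decimal (K/M/G) suffixes only.

SUFFIXES = {
    "Ki": 1024, "Mi": 1024**2, "Gi": 1024**3,
    "K": 1000, "M": 1000**2, "G": 1000**3,
}

def to_bytes(quantity: str) -> int:
    for suffix, factor in SUFFIXES.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # a bare integer means bytes

# Choosing Gi where Mi was intended inflates the request 1024x:
print(to_bytes("8Gi") // to_bytes("8Mi"))  # -> 1024
```

That factor of 1024 is exactly the kind of mistake the three commands above will surface, as one pod's request dwarfing all the others.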
powerful-soccer-11224

11/15/2022, 11:43 AM
Thanks @full-plastic-79795 and @ancient-pizza-13099. Let me try those commands, and I will update you further.