# rke2
Memory discrepancy (sorry in case of duplicate)

Environment:
- 7 x VMs (Red Hat Enterprise Linux release 9.2)
- 1 x VM as master, 6 x VMs as workers
- RKE2 v1.26 (stable)

Problem description: I see inconsistent memory figures between `kubectl top nodes` and `free -m` (or `top`) run directly on the VMs.

Output of `kubectl top nodes`:

```
NAME        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master      707m         8%     12529Mi         80%
worker-01   489m         6%     10616Mi         68%
worker-02   415m         5%     10785Mi         69%
--- omitted ---
```
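As I understand it (an assumption on my part, not something I have confirmed), the MEMORY% column is just MEMORY(bytes) divided by the node's allocatable memory, so the denominator behind the 80% can be backed out from the master row:

```shell
#!/bin/sh
# Back out the denominator behind the master row of `kubectl top nodes`.
# Assumes MEMORY% = MEMORY(bytes) / allocatable, which is how metrics-server
# percentages are usually computed (assumption, not confirmed here).
usage_mi=12529   # MEMORY(bytes) for master
percent=80       # MEMORY% for master

implied_allocatable_mi=$(( usage_mi * 100 / percent ))
echo "implied allocatable: ${implied_allocatable_mi} Mi"
```

That comes out to roughly 15661 Mi, close to the 15524 MiB total that `free -m` reports below, so the percentage math looks sane and the surprising part is the 12529Mi numerator.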
Output of `free -m` on the master:

```
# free -m
               total        used        free      shared  buff/cache   available
Mem:           15524        4121         504         327       11590       11402
Swap:           2047          23        2024
```
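One possible explanation I am considering (an assumption, not confirmed): metrics-server reports the kubelet's working-set figure, which is memory usage minus inactive file cache, so *active* page cache counts as used, while `free -m` excludes page cache from "used" entirely. A sketch with the numbers above and an invented cache split:

```shell
#!/bin/sh
# Contrast the two accounting schemes. All values in MiB, taken from the
# free -m output above except where marked hypothetical.
total=15524        # free -m "total"
used=4121          # free -m "used" (excludes page cache)
buff_cache=11590   # free -m "buff/cache"

# free -m does not report how the cache splits into active/inactive pages;
# this split is purely hypothetical, chosen for illustration only.
inactive_file=3200

# kubelet-style working set: usage including cache, minus inactive file pages.
working_set=$(( used + buff_cache - inactive_file ))

echo "free 'used':          ${used} MiB ($(( used * 100 / total ))%)"
echo "working set estimate: ${working_set} MiB ($(( working_set * 100 / total ))%)"
```

With that made-up split the estimate lands near the 12529Mi / 80% that `kubectl top nodes` shows, which would at least be consistent with the page cache being counted.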
Output of `kubectl describe nodes master`:

```
MemoryPressure       False   Mon, 04 Dec 2023 10:26:15 +0000   Mon, 16 Oct 2023 10:22:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests      Limits
  --------           --------      ------
  cpu                1475m (18%)   200m (2%)
  memory             2320Mi (14%)  192Mi (1%)
  ephemeral-storage  0 (0%)        0 (0%)
  hugepages-1Gi      0 (0%)        0 (0%)
  hugepages-2Mi      0 (0%)        0 (0%)
```
Processes that consume memory (from `ps_mem`):

```
------- omitted ----------------------
 58.2 MiB +   0.5 KiB =  58.3 MiB       coredns
 58.3 MiB +   0.5 KiB =  58.3 MiB       kube-proxy
 58.8 MiB +   0.5 KiB =  58.8 MiB       speaker
 61.8 MiB +   0.5 KiB =  61.8 MiB       cloud-controlle
 66.8 MiB +   0.5 KiB =  66.8 MiB       controller
 68.6 MiB +   0.5 KiB =  68.6 MiB       metrics-server
 69.1 MiB +   0.5 KiB =  69.1 MiB       kube-scheduler
 95.2 MiB +   0.5 KiB =  95.2 MiB       containerd
 97.8 MiB +  47.3 MiB = 145.1 MiB       calico-node (5)
150.7 MiB +   0.5 KiB = 150.7 MiB       kubelet
158.2 MiB +   0.5 KiB = 158.2 MiB       etcd
146.7 MiB +  12.0 MiB = 158.7 MiB       containerd-shim-runc-v2 (14)
186.2 MiB +   0.5 KiB = 186.2 MiB       kube-controller-manager
226.1 MiB +   0.5 KiB = 226.1 MiB       rke2
  1.2 GiB +   0.5 KiB =   1.2 GiB       kube-apiserver
---------------------------------
total used                3.1 GiB
```
As you can see, `kubectl top nodes` reports 80% memory usage on the master, yet `kubectl describe node` says there is no memory pressure and that only ~14% of memory is requested. Checks run directly on the VMs also show that memory usage is fine, and those look like the correct metrics to me. So why do `kubectl top nodes` and the Rancher UI show such large, inconsistent numbers?
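For cross-checking on a node, I believe (again, an assumption) the working set can be approximated from `/proc/meminfo` plus the `inactive_file` counter in cgroup v2's `memory.stat` (RHEL 9 uses cgroup v2). The file names below are real; the values are stand-ins so the sketch runs anywhere:

```shell
#!/bin/sh
# Working-set cross-check on a node. Real inputs would be /proc/meminfo
# (MemTotal, MemFree) and /sys/fs/cgroup/memory.stat (inactive_file); the
# stand-in values below are illustrative only, not from a real node.
mem_total_kib=$(( 15524 * 1024 ))   # MemTotal stand-in
mem_free_kib=$(( 504 * 1024 ))      # MemFree stand-in

# Stand-in excerpt of cgroup v2 memory.stat (values in bytes):
inactive_file_bytes=$(cat <<'EOF' | awk '$1 == "inactive_file" { print $2 }'
anon 4227858432
file 11870928896
inactive_file 3355443200
active_file 8515485696
EOF
)

# kubelet-style working set ~ (MemTotal - MemFree) - inactive_file
working_set_mib=$(( (mem_total_kib - mem_free_kib) / 1024 - inactive_file_bytes / 1024 / 1024 ))
echo "working set: ${working_set_mib} MiB"
```

If the kubelet really does account this way, the gap between `free -m` "used" and `kubectl top nodes` would just be active page cache, which the kernel can mostly reclaim, matching the `KubeletHasSufficientMemory` condition.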