# rke2
h
what is a significant increase? I do not have 1.26 or 1.27... but this is what I see on 1.28.11:
```
NAME                                               CPU(cores)   MEMORY(bytes)
kube-apiserver-node1                               179m         1164Mi
kube-apiserver-node2                               79m          1143Mi
kube-apiserver-node3                               73m          1119Mi
```
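For reference, per-pod figures like these can be pulled with kubectl top (this assumes metrics-server is running in the cluster):
```
# CPU/memory of the kube-apiserver pods in kube-system
kubectl top pods -n kube-system | grep kube-apiserver
```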
f
Thank you for the response. Yeah, I see similar CPU and memory usage for my clusters too.
g
It's normal, by the way, for RKE2 to consume that much. But maybe you can compare with a vanilla installation.
f
okay, thank you, I will check that.
I rolled back the Kubernetes version to 1.26 to check if the high CPU load was due to version 1.27. After the rollback, the CPU load dropped from 91% to 13%. The logs don't show anything suspicious, so I am unsure of the root cause. I am using the RKE2 provider, and the entire setup is configured using Terraform.
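A quick way to compare node-level CPU before and after a rollback like this (again assuming metrics-server is installed; otherwise top/htop on the node itself works just as well):
```
# node-level CPU and memory utilization
kubectl top nodes
```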
h
which patch of RKE2 v1.27 did you have? From the 1.27.12 release notes: https://docs.rke2.io/release-notes/v1.27.X#changes-since-v12711rke2r1
Fixes issue with excessive CPU utilization while waiting for containerd to start
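If you want to double-check the exact patch on each node, the kubelet version reported for RKE2 nodes includes the rke2 revision (a quick check, assuming kubectl access):
```
# kubelet version per node includes the rke2 revision, e.g. v1.27.15+rke2r1
kubectl get nodes -o custom-columns=NAME:.metadata.name,VERSION:.status.nodeInfo.kubeletVersion
```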
f
I have v1.27.15+rke2r1
h
any lingering etcd snapshots? https://github.com/rancher/rancher/issues/41876 You may want to open a new issue?
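One way to check for leftover snapshots on a server node (the rke2 etcd-snapshot subcommand lists the snapshots it knows about; the path below is the default local snapshot directory):
```
# list etcd snapshots managed by rke2 (run on a server node)
rke2 etcd-snapshot list

# default on-disk location for local snapshots
ls -lh /var/lib/rancher/rke2/server/db/snapshots/
```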
f
checking... this might be the issue.
I do not have any lingering etcd snapshots either.
g
This is the CPU and memory consumption of kube-apiserver on my RKE2 1.30 cluster
f
Hope to get it resolved soon
It's a Calico bug, which got resolved by following the steps here: https://docs.tigera.io/calico/latest/operations/ebpf/troubleshoot-ebpf
g
Which steps?
f
kubectl edit felixconfiguration -o yaml
Then add the line below under spec:
bpfKubeProxyIptablesCleanupEnabled: false
I made this change directly in the Rancher UI, so restarting any service is not required there.
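A minimal sketch of what the edited resource ends up looking like (the cluster-wide FelixConfiguration is typically named "default"; other spec fields are omitted here, and the apiVersion may be projectcalico.org/v3 if you go through the Calico API server instead of the CRD):
```yaml
# kubectl edit felixconfiguration default
apiVersion: crd.projectcalico.org/v1
kind: FelixConfiguration
metadata:
  name: default
spec:
  # stop Felix from cleaning up kube-proxy's iptables rules
  bpfKubeProxyIptablesCleanupEnabled: false
```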
g
Why are you doing this? Your claim was about kube-apiserver