adamant-kite-43734
12/07/2024, 9:48 AM

powerful-table-93807
12/07/2024, 10:21 AM

billowy-computer-46613
12/10/2024, 3:56 PM

powerful-table-93807
12/10/2024, 7:18 PM

billowy-computer-46613
12/10/2024, 8:28 PM
k get po took 5-6 seconds to complete ...
So first of all, check how big your db is: ls -lh /var/lib/rancher/k3s/server/db/state.db
If it's in the GB range, you might have the same issue.

billowy-computer-46613
12/10/2024, 8:31 PM
SQLite has a vacuum command, which will reclaim the disk space held by deleted rows.
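For example — assuming the sqlite3 CLI is installed on the node and you can stop k3s for a moment so the db isn't in use — something like:

systemctl stop k3s
sqlite3 /var/lib/rancher/k3s/server/db/state.db "VACUUM;"   # rewrites the file, dropping space held by deleted rows
systemctl start k3s

If deleted revisions were the problem, the size reported by the earlier ls -lh should drop noticeably afterwards.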

powerful-table-93807
12/10/2024, 8:36 PM

billowy-computer-46613
12/10/2024, 8:58 PM
In top you only see the k3s process consuming all the cpu. If you use htop you can see the threads, but you still won't be able to figure out which component is actually eating the cpu.
The problem is that k3s is by design a single process, unlike vanilla k8s where you would have api-server, scheduler, etcd, controller-manager, kubelet and containerd all as separate processes.
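To see what that looks like in practice, a per-thread view only shows anonymous Go worker threads, not named components (the pgrep pattern is just an assumption about how the server process was launched):

top -H -p "$(pgrep -f 'k3s server' | head -n1)"   # per-thread CPU usage of the single k3s server process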
You have 2 options. One is to enable the pprof option, which is golang's built-in profiler:
• it will make k3s a little bit slower, but you can check various metrics, like what actual code is using the cpu
• to do that, you have to install golang on the box, collect metrics, and then use go tool pprof -top to find the critical program code (rough example after this list)
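A rough sketch of that flow, with some assumptions: that your k3s version has the experimental --enable-pprof server flag, which exposes Go's /debug/pprof endpoints on the supervisor port (6443 here), and that the endpoint is reachable with curl -k on the node itself — adjust for your install:

# 1. add --enable-pprof to the 'k3s server' arguments in /etc/systemd/system/k3s.service, then:
systemctl daemon-reload && systemctl restart k3s

# 2. collect a 30-second CPU profile from the pprof endpoint
curl -sk "https://127.0.0.1:6443/debug/pprof/profile?seconds=30" -o k3s-cpu.pprof

# 3. summarize the hottest code paths (needs golang installed, as mentioned above)
go tool pprof -top k3s-cpu.pprof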
The other option is to extract the "etcd" part out of k3s, by running a kine --debug command in /var/lib/rancher/k3s/server/db/ and telling the k3s server to connect to that instead of running it internally in a thread. You have to edit /etc/systemd/system/k3s.service and add a --datastore-endpoint option.
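Roughly, that split could look like the following — assuming kine's defaults (it serves an etcd-compatible endpoint, typically on port 2379) and that it picks up the existing state.db when started from that directory; if not, point it at the file via kine's backend endpoint option:

# run kine as a separate process against the existing sqlite datastore, with debug logging
cd /var/lib/rancher/k3s/server/db/
kine --debug

# in another shell: edit /etc/systemd/system/k3s.service and append to the 'k3s server' line:
#   --datastore-endpoint=http://127.0.0.1:2379
systemctl daemon-reload && systemctl restart k3s

With that in place kine shows up as its own process in top, so you can at least tell whether the datastore side or the rest of k3s is eating the cpu.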