# rke2
The apiserver triggers compaction automatically. It is not recommended to run etcd with its built-in compaction thresholds enabled.
That is vanilla Kubernetes behavior
thanks! how do I verify if compaction is happening?
I guarantee it is, as there is no way to disable it. Why do you need to check? This is covered in the etcd docs though: https://etcd.io/docs/v3.5/dev-guide/interacting_v3/ See the "Compacted revisions" section.
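For reference, the check from that docs section looks roughly like this (a sketch; assumes `etcdctl` v3 on your PATH, pointed at the cluster with the right endpoint/cert flags, which are omitted here, and the key and revision numbers are made up):

```shell
# Compact away history before revision 5 (normally the apiserver does this):
etcdctl compact 5
# Reads at a revision older than the compaction point now fail, which
# confirms compaction has happened:
etcdctl get --rev=4 foo
#   Error: etcdserver: mvcc: required revision has been compacted
```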
There is also a Prometheus metric on the apiserver, I believe; I can't remember it off the top of my head.
Thanks! I’ll dig into the metrics on both the apiserver and etcd. I’m seeing database size bloat, so I’m checking whether manual compaction and defrag help.
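For anyone trying the same thing, the manual version is roughly this (a sketch; again assumes `etcdctl` v3 with the endpoint/cert flags for your cluster, which are omitted):

```shell
# Grab the current keyspace revision, compact history up to it, then
# defragment to reclaim the freed pages. Defrag is disruptive, so run
# it against one member at a time.
rev=$(etcdctl endpoint status --write-out=json \
      | grep -o '"revision":[0-9]*' | grep -o '[0-9]*')
etcdctl compact "$rev"
etcdctl defrag
```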
etcd_debugging_mvcc_db_compaction_last
shows me compaction last happened 5 minutes ago, so it's probably a defrag problem
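In case it helps anyone later: that metric comes from etcd's `/metrics` endpoint (the curl line in the comment is a sketch with made-up cert paths), and its value is a unix timestamp of the last compaction, which you can parse out like this:

```shell
# Scraping the live endpoint would look something like:
#   curl -s --cert client.crt --key client.key --cacert ca.crt \
#     https://127.0.0.1:2379/metrics | grep etcd_debugging_mvcc_db_compaction_last
# Here we parse a sample scrape line instead:
sample='etcd_debugging_mvcc_db_compaction_last 1.700000000e+09'
last=$(printf '%s\n' "$sample" | awk '{printf "%.0f", $2}')
echo "last compaction at unix time $last"
```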
Compaction is generally only a problem if you are creating and then deleting a bunch of resources. If you have a stable workload, it usually hits a steady state at some point, and you might as well let the database stay over-allocated for a bit, knowing that it uses that space at times. RKE2 defrags every time the service is started, since defrags are disruptive and should not be run while the apiserver is up.
That’s good to know that RKE2 defrags on restart. We’ve got Argo CD syncing hundreds of apps and Kyverno mutating them, so tons of resources come and go; perhaps we just need to resize etcd up from the default 2 GiB quota it currently runs at.
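Following up for the archive: on RKE2 the quota can be raised through the etcd argument passthrough in the server config file (a sketch; the 8 GiB value is just an example, and `quota-backend-bytes` is the flag etcd documents for its storage quota):

```yaml
# /etc/rancher/rke2/config.yaml on each server node
etcd-arg:
  - "quota-backend-bytes=8589934592"  # 8 GiB; etcd's default quota is ~2 GiB
```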