# k3s
f
This is our memory usage trend for the last month or so
c
did you check the upstream release notes? https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.30.md#changelog-since-v1305
• Fixes a kubelet and kube-apiserver memory leak in default 1.29 configurations related to tracing. (#126984, @dashpole) [SIG API Machinery and Node]
🎯 1
k3s is 99% other projects (kubernetes, containerd, flannel, and so on). Most things are actually fixed elsewhere.
f
Thanks -- I had not looked at the Kubernetes release notes, but I'll make a note to do that in the future
👍 1
So if I wanted to pass `--feature-gates=APIServerTracing=false` to the kube-apiserver, how would I do that?
`--kube-apiserver-arg=feature-gates=APIServerTracing=false`
?
c
same for `--kubelet-arg` as well, I think
f
did I get that correct, though?
c
that’s the correct way to pass feature-gates, yes
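For reference, here is a sketch of how those flags could be made persistent via the k3s config file instead of the command line (assuming the default `/etc/rancher/k3s/config.yaml` path; `KubeletTracing` is the upstream gate name for the kubelet side, included here as an assumption since the thread only mentions `--kubelet-arg`):

```shell
# Sketch: write the feature-gate overrides into the k3s config file,
# where repeatable CLI flags become YAML lists. These settings survive
# service restarts, unlike ad-hoc command-line flags.
cat <<'EOF' > /etc/rancher/k3s/config.yaml
kube-apiserver-arg:
  - "feature-gates=APIServerTracing=false"
kubelet-arg:
  - "feature-gates=KubeletTracing=false"
EOF

# Then restart the service to pick up the change:
systemctl restart k3s
```

The same keys work on agents (restart `k3s-agent` there); only the `kubelet-arg` entry applies on agent nodes, since agents don't run the apiserver.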
f
ty 🙂
I wound up experiencing the same sort of bug on 1.31.1, and 1.31.2 seems to have patched it on the control node. Interestingly enough, k3s-agent memory usage still seems to be increasing slowly over time
This is a look at our production cluster and when we upgraded within the 1.30.x line to address the issue (this is what the original message was about).
One interesting thing of note: memory usage on the control nodes seems to stabilize completely, while the compute nodes show upward-trending k3s memory usage
Nothing for you to do here -- just some interesting observations. Given my lack of cycles to take a serious look at it, I'll probably just keep an eye on it and stay more on top of minor release upgrades