# k3s
c
Figure out what you’ve got deployed that’s using up so much storage
q
Nothing new, that’s what is confusing me. I guess I should probably just use etcd tools and look at its contents then?
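(For reference, a sketch of what poking at the embedded etcd directly could look like on a k3s server node. The endpoint and cert paths below are the usual k3s defaults and may need adjusting; the pipeline just counts keys per /registry/ resource prefix.)
```sh
# Sketch: count etcd keys per resource prefix on a k3s server node.
# Cert paths are k3s's usual embedded-etcd defaults; adjust if yours differ.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt \
  --cert=/var/lib/rancher/k3s/server/tls/etcd/server-client.crt \
  --key=/var/lib/rancher/k3s/server/tls/etcd/server-client.key \
  get /registry/ --prefix --keys-only \
  | sed -n 's|^/registry/\([^/]*\)/.*|\1|p' | sort | uniq -c | sort -rn | head
```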
c
for a very quick object count you can do
kubectl get --raw /metrics | grep apiserver_storage_objects | sort -rnk 2 | head
but that won’t tell you if you’ve got something churning updates to the same objects over and over. For that you’d have to enable audit logs, or look at the
apiserver_request_body_size_bytes_count
metric sliced by verb and try to figure out if you’ve got something increasing at an unusual rate.
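(One rough way to do that without audit logs, as a sketch: snapshot that counter twice a few minutes apart and print the series that grew the most. This assumes the same kubectl get --raw access as above; the temp file paths and the 5-minute interval are arbitrary.)
```sh
# Sketch: snapshot apiserver_request_body_size_bytes_count twice and print the
# series (resource/verb label combinations) whose counters grew the most.
kubectl get --raw /metrics | grep '^apiserver_request_body_size_bytes_count' | sort > /tmp/body_count_1
sleep 300
kubectl get --raw /metrics | grep '^apiserver_request_body_size_bytes_count' | sort > /tmp/body_count_2
# join the two snapshots on the series name, compute the delta, largest first
join -j 1 -o 1.1,1.2,2.2 /tmp/body_count_1 /tmp/body_count_2 \
  | awk '{d = $3 - $2; if (d > 0) print d, $1}' | sort -rn | head
```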
q
That already helps quite a bit, thank you very much!
Ok, figured it out. Longhorn is at fault, it is just spamming everything because some volumes seem to be stuck in a detaching / attaching loop
l
What version of Longhorn are you running?
q
1.8.0
l
hmm
q
Upgraded it just today as well: first did k3s, which seemed to be working, then the other stuff like Longhorn. And then the issues started happening an hour later
Managed to stop the etcd ballooning for now by disabling Longhorn’s automatic moving of volumes around
Now just gotta figure out how to fix the faulty volumes lol
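(The chat doesn’t name the setting that was disabled; one plausible candidate for “auto moving of volumes” is Longhorn’s global replica-auto-balance setting. A sketch of flipping it via the Setting resource, assuming the default longhorn-system namespace; the Longhorn UI’s settings page works just as well.)
```sh
# Sketch, assuming replica-auto-balance is the setting meant and Longhorn
# is installed in the default longhorn-system namespace.
kubectl -n longhorn-system get settings.longhorn.io replica-auto-balance
kubectl -n longhorn-system patch settings.longhorn.io replica-auto-balance \
  --type merge -p '{"value":"disabled"}'
```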
l
okay … do you use strict locality for your volumes? Maybe that could be the cause …
q
Nah, had best-effort. But I did set it to disabled now so it stops moving around
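(For context, data locality is a per-volume Longhorn setting with the values strict-local / best-effort / disabled. A sketch of inspecting and changing it on an existing volume; “pvc-example” is a placeholder name and the longhorn-system namespace is assumed.)
```sh
# Sketch: inspect and change dataLocality on an existing Longhorn volume.
# "pvc-example" is a placeholder volume name.
kubectl -n longhorn-system get volumes.longhorn.io \
  -o custom-columns=NAME:.metadata.name,LOCALITY:.spec.dataLocality
kubectl -n longhorn-system patch volumes.longhorn.io pvc-example \
  --type merge -p '{"spec":{"dataLocality":"disabled"}}'
```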
l
okay