# k3s
c
The K3s apiserver is the same as the upstream Kubernetes apiserver, so everything covered in the docs here is true in K3s as well: https://kubernetes.io/docs/concepts/architecture/garbage-collection/#cascading-deletion
The question is probably: how are you linking the two resources together, and how are you expecting them to be cleaned up? Do you need a finalizer? Are you expecting Kubernetes to handle it for you? What fields are you using to track the relationship?
r
Actually, the thing is that on my local Kubernetes cluster garbage collection works correctly, but in K3s the secondary resource still exists after I delete the primary resource. I believe this might be due to the way the K3s cluster is provisioned: I am using Java Testcontainers to provision it.
Can you suggest where to look for logs, or what to debug to find the issue?
I am using version `k3s:v1.22.6-k3s1`
c
What kind of resources are we talking about?
r
I have a custom resource that creates Istio resources underneath
c
Are these custom resources, or core resources that the controller-manager is responsible for? Can you answer any of the questions above about how the relationships are tracked?
With regard to garbage collection, there's nothing unique about our apiserver or controller-manager compared to a "vanilla" Kubernetes cluster.
r
Weird, I think that 5 seconds is not enough to garbage collect the resources 🤔 That was the timeout I set, and after waiting for a longer period I see that the resource was deleted 😛
Answering your questions: I use owner references; the secondary resource references the primary resource via an `ownerReference`
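(For reference, this is roughly what that relationship looks like on the secondary resource; all names, kinds, and the UID below are illustrative, and the `uid` must match the primary resource's actual UID for garbage collection to pick it up:)

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-secondary            # illustrative name
  namespace: default
  ownerReferences:
    - apiVersion: example.com/v1   # the primary custom resource's group/version (illustrative)
      kind: MyPrimary                # illustrative kind
      name: my-primary
      uid: 1234abcd-0000-0000-0000-000000000000   # must be the primary's real UID
      controller: true
      blockOwnerDeletion: true
```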
I wonder if it is possible to override the garbage collector's poll interval (?) in K3s, so the resources would be cleaned up faster?
c
I’m not sure. There’s probably a tunable somewhere but that’s not something I’ve ever needed to mess with.
r
ok, thanks for that. I'll switch to checking that the resource either does not exist OR was marked for deletion. That should do the job, thanks! 🙂
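(That check could be sketched like this; a minimal, hypothetical helper assuming the test fetches the resource as a dict, or `None` once it no longer exists. The function names and polling parameters are illustrative, not from any particular client library:)

```python
import time

def is_effectively_deleted(resource):
    """Treat a resource as cleaned up if it no longer exists (None)
    or if it is marked for deletion (metadata.deletionTimestamp set)."""
    if resource is None:
        return True
    return resource.get("metadata", {}).get("deletionTimestamp") is not None

def wait_for_cleanup(fetch, timeout=30.0, interval=1.0):
    """Poll `fetch` (a zero-arg callable returning the resource dict or
    None) until the resource is effectively deleted or `timeout` expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_effectively_deleted(fetch()):
            return True
        time.sleep(interval)
    return False
```

This avoids flaky tests that depend on how quickly the garbage collector actually removes the dependent object, since a set `deletionTimestamp` already proves the deletion was requested.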