# k3s

adamant-kite-43734

08/15/2022, 10:58 PM
This message was deleted.

square-engine-61315

08/16/2022, 11:47 AM
Start here:
```shell
kubectl --namespace kube-system describe deploy/coredns
```

kind-nightfall-56861

08/16/2022, 12:09 PM
For me it usually works to execute this command to check the status of the kube-system pods:
```shell
kubectl get all -n kube-system
```
And if it turns out that one or more pods are being a pain in the *, then I pretty much force them to redeploy:
```shell
kubectl delete --all pods -n kube-system --force
```
Same approach works for any namespace tbh
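The check-then-clean-up workflow above can be sketched as a tiny script. To keep it runnable without a cluster, the `kubectl get pods` output below is a hypothetical captured sample, not real output from this thread:

```shell
# Hypothetical sample of `kubectl get pods -n kube-system` output,
# captured into a variable so the filtering step can run without a cluster.
sample_output='NAME                                      READY   STATUS             RESTARTS   AGE
coredns-b96499967-qjqzl                   1/1     Running            0          3d
local-path-provisioner-7b7dc8d6f5-m9z4k   0/1     CrashLoopBackOff   12         3d
metrics-server-668d979685-xk2rn           1/1     Running            0          3d'

# Print only pods that are NOT in the Running state (skip the header line).
# These are the candidates you might force-delete so they get recreated.
printf '%s\n' "$sample_output" | awk 'NR > 1 && $3 != "Running" { print $1, $3 }'
```

Against a live cluster you would pipe `kubectl get pods -n kube-system --no-headers` into the same `awk` filter instead of the captured sample.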

square-engine-61315

08/16/2022, 12:12 PM
@kind-nightfall-56861 that works sometimes. But instead of deleting the pods, you could restart the deployment that controls them:
```shell
kubectl rollout restart -n kube-system deployment coredns
```
But you might want to find out why the deployment is failing before you do that. This is what I suggest:
```shell
kubectl --namespace kube-system describe deploy/coredns
```
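When reading the `describe` output, the Conditions section is usually the fastest pointer to what is wrong. A small sketch, using a hypothetical captured excerpt so it runs without a live cluster:

```shell
# Hypothetical excerpt of `kubectl --namespace kube-system describe deploy/coredns`
# output, captured into a variable; real output has many more sections.
describe_output='Name:                   coredns
Namespace:              kube-system
Replicas:               1 desired | 1 updated | 1 total | 0 available | 1 unavailable
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    True    ReplicaSetUpdated'

# Print everything from the Conditions: header to the end; an Available
# condition with Status False plus its Reason is the quickest diagnosis hint.
printf '%s\n' "$describe_output" | sed -n '/^Conditions:/,$p'
```

On a live cluster, pipe the real `kubectl describe` output through the same `sed` filter.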

kind-nightfall-56861

08/16/2022, 12:19 PM
Tbh, when I try to restart through the Rancher interface, it almost never works, but idk if that restart translates to the same console command. I'm finding that my method works 100% of the time, but I might be mistaken.

square-engine-61315

08/16/2022, 12:54 PM
Deleting the pod is almost like restarting a deployment that has `.spec.strategy.type==Recreate`. I think the default is `.spec.strategy.type==RollingUpdate`. The latter will try to start the new pod before stopping the old one, which is nice for high availability, but does not work with all applications or pods.
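For reference, a sketch of where those settings live in a Deployment manifest. The field names are from the Kubernetes Deployment API; the percentage values shown are the defaults, and everything else is illustrative:

```yaml
# Illustrative Deployment fragment: only .spec.strategy matters here.
spec:
  strategy:
    type: RollingUpdate        # the default; Recreate is the other option
    rollingUpdate:
      maxUnavailable: 25%      # how many old pods may be down during the rollout
      maxSurge: 25%            # how many extra new pods may be started early
```

With `type: Recreate`, the `rollingUpdate` block is not allowed: all old pods are stopped before any new ones start, which matches the effect of deleting the pods by hand.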
👍 1