# k3s
Start here:
kubectl --namespace kube-system describe deploy/coredns
For me it usually works to run this command to check the status of the kube-system pods:
kubectl get all -n kube-system
And if it turns out that one or more pods are being a pain in the *, then I pretty much force them to redeploy:
kubectl delete --all pods -n kube-system --force
Same way of working for any namespace, tbh.
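If you only want to see the pods that are misbehaving, a field selector narrows the list down (this is plain kubectl, nothing k3s-specific):
kubectl get pods -n kube-system --field-selector=status.phase!=Running
Also worth noting: --force is typically paired with --grace-period=0 to skip the graceful shutdown entirely, and pods owned by a Deployment or DaemonSet get recreated by their controller right after the delete:
kubectl delete --all pods -n kube-system --force --grace-period=0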
@kind-nightfall-56861 that works sometimes. But instead of deleting pods, you could restart the deployment that controls the pods:
kubectl rollout restart -n kube-system deployment coredns
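If you go the rollout route, you can watch until the new pods are ready before moving on (standard kubectl, nothing special assumed):
kubectl rollout status -n kube-system deployment coredns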
But you might want to find out why the deployment is failing before you do that. Here's what I suggest:
kubectl --namespace kube-system describe deploy/coredns
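describe mostly shows replica counts and recent conditions; for the actual failure reason, the container logs and the namespace events are usually more telling. Something along these lines should work:
kubectl --namespace kube-system logs deploy/coredns
kubectl --namespace kube-system get events --sort-by=.lastTimestamp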
Tbh, when I try to restart through the Rancher interface it almost never works, but idk if that restart translates to that command line. I'm finding that my method works 100% of the time, but I might be mistaken.
Deleting the pod is almost like restarting a deployment that has .spec.strategy.type==Recreate. I think the default is .spec.strategy.type==RollingUpdate. The latter will try to start the new pod before stopping the old one, which is nice for high availability, but does not work with all applications or pods.
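If you want to check which strategy a given deployment uses, a jsonpath query like this should do it (coredns here just mirrors the example above; for Deployments it prints RollingUpdate unless the strategy was overridden):
kubectl -n kube-system get deploy coredns -o jsonpath='{.spec.strategy.type}'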