# k3s
a
Hi! I've got a bit of a weird issue and I'm wondering if it's a known problem with a known solution. I've got a 3-node cluster on Proxmox and I'm doing some failure testing. When I stop a node, the pods can no longer resolve DNS, either internally or externally, which breaks access to services. How could I go about debugging CoreDNS or the network on k3s?
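For concreteness, a minimal sketch of the kind of in-pod check I mean (run from a throwaway debug pod; the hostnames are just illustrative, one in-cluster name and one external name):

```python
# Minimal DNS check meant to run from inside a pod on the cluster,
# e.g. a throwaway python:3.12 debug pod. Hostnames are illustrative.
import socket

NAMES = [
    "kubernetes.default.svc.cluster.local",  # in-cluster service name
    "example.com",                           # external name
]

for name in NAMES:
    try:
        addr = socket.gethostbyname(name)
        print(f"{name} -> {addr}")
    except OSError as exc:
        print(f"{name} -> FAILED ({exc})")
```

When the node is down, both lookups fail; with all nodes up, both resolve.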
c
Are all 3 nodes servers? Did you wait for the CoreDNS pod to get rescheduled to a new node, or do you want to scale it up to more than one replica so you don't have to wait for rescheduling? This is just normal Kubernetes pod HA stuff... A rough sketch of both checks is below.
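Something like this with the official Python client, assuming the k3s defaults (a `coredns` Deployment in `kube-system`, pods labelled `k8s-app=kube-dns`); just a sketch, not a drop-in fix:

```python
# Check where CoreDNS is running and scale it to 2 replicas.
# Assumes k3s defaults: Deployment "coredns" in "kube-system",
# pods labelled k8s-app=kube-dns.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster

core = client.CoreV1Api()
apps = client.AppsV1Api()

# Where is CoreDNS actually running right now?
pods = core.list_namespaced_pod("kube-system", label_selector="k8s-app=kube-dns")
for pod in pods.items:
    print(pod.metadata.name, pod.status.phase, "on", pod.spec.node_name)

# Bump to two replicas so a single node failure doesn't take DNS down
# while you wait for rescheduling. Note: k3s may re-apply its packaged
# manifests on server restart, which can reset this replica count.
apps.patch_namespaced_deployment_scale(
    "coredns", "kube-system", {"spec": {"replicas": 2}}
)
```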
a
I believe so, they've all got the roles `control-plane,etcd,master`.
Yeah, I waited for a good while and all pods were happy, but still no access. The DNS outage might just be a symptom.
I don't think any service was reachable, which would explain the DNS outage too. Let me just recreate the problem again.
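While I recreate it, this is roughly the check that should separate "CoreDNS has no endpoints" from "the ClusterIP isn't reachable at all". It's a sketch assuming the k3s default `kube-dns` Service in `kube-system` and a service account that can read Services/Endpoints:

```python
# Run from inside a pod: compare the Service's endpoints with whether its
# ClusterIP answers on TCP 53. Assumes k3s defaults (Service "kube-dns"
# in "kube-system") and RBAC to read Services/Endpoints.
import socket
from kubernetes import client, config

config.load_incluster_config()
core = client.CoreV1Api()

svc = core.read_namespaced_service("kube-dns", "kube-system")
eps = core.read_namespaced_endpoints("kube-dns", "kube-system")
addresses = [
    addr.ip
    for subset in (eps.subsets or [])
    for addr in (subset.addresses or [])
]
print("ClusterIP:", svc.spec.cluster_ip, "endpoints:", addresses)

# If endpoints exist but this connect times out, the overlay network /
# kube-proxy path is the suspect rather than CoreDNS itself.
try:
    with socket.create_connection((svc.spec.cluster_ip, 53), timeout=2):
        print("TCP 53 to the ClusterIP works")
except OSError as exc:
    print("TCP 53 to the ClusterIP failed:", exc)
```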