# rke2
c
it doesn’t change the dns server address… it just runs a local cache pod and adds iptables rules that redirect dns traffic to that pod. this is covered in the docs. https://docs.rke2.io/networking/networking_services#nodelocal-dnscache
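fwiw, enabling it on rke2 is just a HelmChartConfig that overrides the rke2-coredns chart values. a minimal sketch going off that doc page; double-check the exact values key against your rke2 version:
```yaml
# /var/lib/rancher/rke2/server/manifests/rke2-coredns-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-coredns
  namespace: kube-system
spec:
  valuesContent: |-
    nodelocal:
      enabled: true
```
rke2 picks up anything dropped in that manifests directory automatically, so no helm CLI is needed on the node.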
c
Ooohhh okay, so it does not look the same as RKE1, got it. Thanks
m
This thing fixed my dns issue. I was always getting timeouts and this fixed it. Thanks. It's more like a mandatory thing for me than a performance one.
c
It shouldn't be mandatory. If DNS doesn't work when you don't have a replica on all nodes, then something is causing CNI overlay traffic between nodes to get dropped. You should find out why and fix that.
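a quick way to check (a sketch, assuming you can schedule throwaway pods): query the cluster dns service VIP from a pod on a node that has no coredns replica, then query a coredns pod IP on another node directly. if cross-node queries time out while same-node ones work, inter-node overlay traffic is being dropped, commonly vxlan udp/8472 blocked between nodes or a checksum offload bug.
```bash
# see which nodes run coredns, and find the cluster DNS service VIP
kubectl -n kube-system get pods -o wide | grep coredns
kubectl -n kube-system get svc | grep dns

# throwaway test pod; pin it to a node WITHOUT a coredns replica
kubectl run dnstest --rm -it --restart=Never \
  --image=busybox:1.36 \
  --overrides='{"spec":{"nodeName":"<node-without-coredns>"}}' -- sh

# inside the pod: query the service VIP, then a coredns pod IP on another node
nslookup kubernetes.default.svc.cluster.local <cluster-dns-vip>
nslookup kubernetes.default.svc.cluster.local <coredns-pod-ip>
```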
m
I think so. It works sometimes, but often fails. Now I am back to square one: it won't work. I will study how to check the CNI overlay traffic. When I bash into the coredns pods, nslookup and curl work fine. But when I bash into other pods, like the fleet-related ones, nslookup rarely works or is very slow. Currently I think I have this issue and I was working on that: https://github.com/k3s-io/k3s/issues/5013
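The fix people post in that thread turns off VXLAN checksum offload on the flannel interface. A sketch only: it helps only if you are hitting that specific kernel offload bug, and it assumes canal/flannel's default interface name, flannel.1:
```bash
# run on every node; flannel.1 is the VXLAN interface canal/flannel creates
sudo ethtool -K flannel.1 tx-checksum-ip-generic off
```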
I am using RKE2, but saw someone using this to resolve a dns issue. My config is pretty basic:
```yaml
# Listener
# IPv4 address that the apiserver advertises to cluster members
advertise-address: $(hostname -I | awk '{print $1}')
# additional hostnames/IPv4s as subject alternative names on the server TLS cert
tls-san:
- $(hostname -I | awk '{print $1}')
- $(wget -qO- ifconfig.me | grep "ip_addr:" | awk '{print $2}')
tls-san-security: true
# Networking
cni:
- multus
- canal
# cluster settings
service-cidr: 172.16.0.0/16
cluster-cidr: 172.17.0.0/16
cluster-dns: 172.16.0.10
cluster-domain: "cluster.local"
# Kube client
write-kubeconfig-mode: "0644"
write-kubeconfig: "/home/ubuntu/.kube/config"
# cluster
token: SECRET_TOKEN
# etcd settings
etcd-expose-metrics: true
etcd-snapshot-schedule-cron: "0 */23 * * *"
etcd-snapshot-retention: 5
etcd-snapshot-dir: /var/lib/rancher/rke2/db/snapshots
# Node (details)
node-name: master-$(hostname -I | awk '{print $1}')
# Register kubelet with this set of labels
node-label:
- "node-role=server"
- "env=stag"
# Components
disable-cloud-controller: true
```
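Worth noting: rke2 reads /etc/rancher/rke2/config.yaml as plain YAML, so those `$(hostname -I ...)` expressions are not expanded by rke2 itself; they only work if the file is rendered by a shell beforehand. A minimal sketch of doing that, using the same values as above:
```bash
#!/bin/sh
# render real values before rke2 ever reads the config
NODE_IP="$(hostname -I | awk '{print $1}')"
PUBLIC_IP="$(wget -qO- ifconfig.me | grep "ip_addr:" | awk '{print $2}')"

mkdir -p /etc/rancher/rke2
cat > /etc/rancher/rke2/config.yaml <<EOF
advertise-address: ${NODE_IP}
tls-san:
- ${NODE_IP}
- ${PUBLIC_IP}
node-name: master-${NODE_IP}
# ...remaining static settings as above...
EOF
```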