# harvester
Solved it by SSHing into one of the management nodes and updating the Corefile of Harvester's CoreDNS deployment to forward DNS queries for the internal domain to the upstream DNS server. For anyone who finds this:

1. SSH into a management node.
2. Take root privileges:
   ```
   sudo -i
   ```
3. Get the current CoreDNS config:
   ```
   kubectl -n kube-system get configmaps rke2-coredns-rke2-coredns -o yaml
   ```
4. Grab the Corefile field and convert it from the escaped string it's stored as into a normal file (restore the newlines).
5. Edit the Corefile in a text editor. To route a specific subdomain to an upstream DNS server, I added a server block ahead of the default one:
   ```
   sam.intranet:53 {
     forward . 10.43.1.1:53
   }
   .:53 {
       errors
       health {
           lameduck 5s
       }
       ready
       kubernetes cluster.local cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
       }
       prometheus 0.0.0.0:9153
       forward . /etc/resolv.conf
       cache 30
       loop
       reload
       loadbalance
   }
   ```
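Step 4 above can be scripted: with `-o json` instead of `-o yaml`, the Corefile comes back as a single JSON string with `\n` escapes, and `json.loads` restores the newlines. A minimal Python sketch with a trimmed, illustrative payload (the field names match the real ConfigMap; the Corefile content here is truncated):

```python
import json

# Trimmed stand-in for the output of:
#   kubectl -n kube-system get configmaps rke2-coredns-rke2-coredns -o json
# In the real output, data.Corefile is one long string with \n escapes.
configmap_json = '''
{
  "apiVersion": "v1",
  "kind": "ConfigMap",
  "data": {
    "Corefile": "sam.intranet:53 {\\n  forward . 10.43.1.1:53\\n}\\n"
  }
}
'''

# json.loads turns the escaped string back into a normal multi-line file.
corefile = json.loads(configmap_json)["data"]["Corefile"]
print(corefile)
```

From here the `corefile` string can be written to disk and edited normally.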
6. Edit the CoreDNS config in vim:
   ```
   kubectl -n kube-system edit configmaps rke2-coredns-rke2-coredns -o yaml
   ```
7. Delete the old Corefile value and replace it with:
   ```
   data:
     Corefile: |
       sam.intranet:53 {
         forward . 10.43.1.1:53
       }
       .:53 {
         errors
         health {
           lameduck 5s
         }
         ready
         kubernetes cluster.local cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
         }
         prometheus 0.0.0.0:9153
         forward . /etc/resolv.conf
         cache 30
         loop
         reload
         loadbalance
       }
   ```
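As an alternative to editing the live ConfigMap in vim (steps 6–7), the same change could be applied non-interactively with `kubectl patch` and a strategic-merge patch file. A hedged sketch that only builds the patch JSON; the Corefile here is truncated to the new server block, and in practice the full file, including the default `.:53` block, must be kept in the replacement value:

```python
import json

# Truncated for illustration: a real patch must carry the entire Corefile,
# since the patch replaces the whole data.Corefile value.
new_corefile = """sam.intranet:53 {
  forward . 10.43.1.1:53
}
"""

# Strategic-merge patch body, usable as:
#   kubectl -n kube-system patch configmap rke2-coredns-rke2-coredns \
#     --patch-file patch.json
patch = json.dumps({"data": {"Corefile": new_corefile}}, indent=2)
print(patch)
```

This avoids hand-editing the escaped string inside the editor session.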
8. Wait ~30s for CoreDNS's `reload` plugin to pick up the change automatically.