
future-vase-71145

02/10/2023, 7:01 PM
We're using k3d locally and are upgrading from 4.4.6 to 5.4.6. With 4.4.6 we mounted an updated coredns configmap to /var/lib/rancher/k3s/server/manifests/d-coredns-patch.yaml so that we could add
rewrite name regex (.*).local.example.com public-nginx-ingress-nginx-controller.default.svc.cluster.local
(Everything else was kept default.) It appears from my reading of GitHub issues that modifying the .:53 block in coredns is no longer supported? Any pointers on how to handle this? For now I'm doing this, but it feels... wrong.
kubectl -n kube-system patch configmap coredns --patch-file "coredns-patch.yaml"
kubectl wait --for=condition=Ready=true pod -l k8s-app=kube-dns -n kube-system
kubectl -n kube-system rollout restart deployment coredns
@wide-garage-9465 pinging you because I see you responding to other questions here (sorry if that's not ok). Any suggestions? Looks like this isn't actually working (it keeps getting reverted) and I may need to go back to 4.4.6.

wide-garage-9465

02/14/2023, 8:19 PM
Is that actually a patch that you're deploying there?
Could you get by with the coredns-custom configmap instead?

future-vase-71145

02/14/2023, 8:21 PM
I tried the coredns-custom configmap, but couldn't figure out how to get it working.
And yes, I'm using an actual patch. The contents of that patch file are:
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        rewrite name regex (.*).local.example.com public-nginx-ingress-nginx-controller.default.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        hosts /etc/coredns/NodeHosts {
          ttl 60
          reload 15s
          fallthrough
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
I just copied the default and added my rewrite rule
I'm new to dealing with coredns and I've never managed dns zone files, so it's entirely possible I'm missing something simple.
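For anyone following along: since CoreDNS interprets that pattern as a regular expression, one quick way to sanity-check what it matches (outside the cluster entirely) is to run candidate names through grep. The host names below are made up for illustration; note that the unescaped dots match any character, so (.*)\.local\.example\.com would be the stricter form:

```shell
# Check which names the rewrite pattern would match.
pattern='(.*).local.example.com'
for name in api.local.example.com local.example.com other.example.com; do
  if printf '%s\n' "$name" | grep -Eq "^${pattern}$"; then
    echo "match:    $name"
  else
    echo "no match: $name"
  fi
done
```

Only api.local.example.com should match: the bare zone apex is one character too short for the pattern, and other.example.com lacks the literal "local" label.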

wide-garage-9465

02/14/2023, 8:26 PM
OK. k3d is now also overwriting K3s' default CoreDNS file. It was meant as a temporary solution until coredns-custom was ready, but it hasn't been removed yet. I could imagine (though I don't know how) that it interferes with your solution here. With the coredns-custom configmap you can add another zone for .example.com, which would be separate from the root zone (.:53)

future-vase-71145

02/14/2023, 8:27 PM
I think I tried that, but I tried enough things that I can't remember for sure. 🙃 I'll try it again.
Hope that helps 🙂
I have to get into this topic again 😬

future-vase-71145

02/14/2023, 8:32 PM
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  example.server: |
    .example.com {
      rewrite name regex (.*).local.example.com public-nginx-ingress-nginx-controller.default.svc.cluster.local
    }
I think this is what should work then?

wide-garage-9465

02/14/2023, 8:33 PM
Yup, I guess/hope so

future-vase-71145

02/14/2023, 8:33 PM
ok, gonna try that 🤞

wide-garage-9465

02/14/2023, 8:33 PM
Not sure if any additional configuration is needed then

future-vase-71145

02/14/2023, 10:05 PM
So far no luck 😞
Finally got it. I'm using a config file for k3d and I was setting image to an old version. 🤦 I removed that and I'm letting k3d set the image now. With that in place this works:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  local.server: |
    local.example.com {
        rewrite name regex (.*).local.example.com public-nginx-ingress-nginx-controller.default.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
    }
Thanks for the pointers. I'd seen the GitHub thread, but that blog post was really helpful for testing.
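For reference, the k3d config change described above might look something like this (a minimal sketch; the exact filename, pinned version tag, and cluster sizing are assumptions, not taken from the thread):

```yaml
# Minimal k3d config file, e.g. passed via `k3d cluster create --config k3d.yaml`
apiVersion: k3d.io/v1alpha4
kind: Simple
servers: 1
agents: 1
# image: rancher/k3s:v1.21.7-k3s1   # hypothetical old pin; removing this line
#                                   # lets k3d pick the default image matching
#                                   # its own version, so coredns-custom works
```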

wide-garage-9465

02/15/2023, 6:19 AM
Great! Glad that you got it working :)