# rke2
c
You want to automatically create DNS records for things, just by adding them as a hostname in an ingress rule?
b
ideally, yes, but adding an extra resource is fine
c
I’ve not seen anything that’ll do that…
things like external-dns will create records for loadbalancer-type services, but I’ve not seen anything that will do that for hostnames referenced in ingress rules
usually folks just point a wildcard dns record at the ingress and slap a wildcard cert on it. That way whatever hostname you put in the ingress resource is already set up.
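A minimal sketch of that wildcard pattern, assuming a `*.apps.example.com` DNS record pointing at the ingress and a pre-created wildcard cert in a secret (all names below are placeholders):

```yaml
# Sketch: any hostname under *.apps.example.com already resolves to the
# ingress via the wildcard DNS record, and the wildcard cert covers it.
# "apps.example.com", "wildcard-apps-tls" and the ingress class are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: default
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - myapp.apps.example.com
      secretName: wildcard-apps-tls   # pre-created *.apps.example.com cert
  rules:
    - host: myapp.apps.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```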
b
but if the nodes change, the ingress IP would change, so how should I deal with that? I don't feel like manually updating an IP is a good solution
c
that's why people usually use an external loadbalancer, or something like kube-vip or metallb that will provision IPs for the loadbalancer service.
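For reference, a sketch of what that IP provisioning looks like with MetalLB's CRD-based config in L2 mode; the address range is an assumption and should be whatever is free on the node network:

```yaml
# Rough MetalLB L2 setup (metallb.io/v1beta1 CRDs); the address range is a
# placeholder for a free range on the node network.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.10.200-192.168.10.220
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```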
b
I experimented a little with kube-vip but ended up with a dead cluster, though possibly unrelated to kube-vip
c
If you point things directly at the node IPs you’re tied to those node IPs not changing.
b
I tried adding a second VLAN interface to the nodes for kube-vip to use for the LB, but the interface didn't come up and I couldn't figure out what to put in the cloud-init
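For what it's worth, a tagged sub-interface in cloud-init's network-config (version 2) would look roughly like the sketch below; the interface name, VLAN ID and addressing are assumptions, and whether the guest even needs to tag the VLAN itself depends on how the Harvester VM network is set up (it often hands the VLAN to the guest untagged):

```yaml
# Hypothetical cloud-init network-config (version 2) for a second NIC
# carrying VLAN 100; names, IDs and addresses are assumptions, not a
# tested Harvester setup.
network:
  version: 2
  ethernets:
    eth1:
      dhcp4: false
  vlans:
    eth1.100:
      id: 100
      link: eth1
      dhcp4: false
      addresses:
        - 192.168.100.10/24
```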
c
I’ve not ever seen anyone try to add a second interface just for VIPs… why would that be necessary?
b
I just wanted to try it, keeping traffic and management separate
c
I would probably look at existing patterns for what you’re trying to do, and work on successfully implementing one of those, instead of trying to invent something new.
b
sounds like a good idea, got a resource for that I can look over?
c
I would try to get metallb or kube-vip working, use one of those to expose your ingress service, and then point a DNS wildcard at that VIP.
I believe harvester also has a loadbalancer service controller though, doesn’t it?
any reason why you’re not using that?
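One way to wire that up on rke2, sketched on the assumption that the bundled rke2-ingress-nginx chart is in use and that MetalLB or kube-vip is handing out LoadBalancer IPs (the values follow the upstream ingress-nginx chart, so double-check them against the rke2 docs):

```yaml
# Sketch: ask the bundled rke2-ingress-nginx chart to expose the controller
# through a LoadBalancer Service; MetalLB/kube-vip then gives it a stable VIP,
# and the wildcard DNS record points at that VIP instead of node IPs.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      service:
        enabled: true
        type: LoadBalancer
```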
b
to use metallb, I would need to set that up on a separate vm, yeah? kube-vip sounds more appealing to me in this use case
or can I get rancher to set up metallb on the nodes it provisions in harvester somehow?
c
b
no, I have not.. the docs are kind of hard to read for me due to them not supporting dark mode, so I've just skimmed parts as I stumbled into them via google
thanks, I'll look over that section of the doc and do some more experimentation when time allows 🙂
for right now I need to go figure out what happened to my harvester, it kind of appears to have died when I was setting up this test rke2 cluster on it
seems like it had restarted, and the box is a bit weird in that it won't boot without an attached display. also added some more RAM to double what I had in it 🙂
after the little unexpected reboot, the cluster node provisioning is struggling a little..
```
Failed creating server [fleet-default/infra-pool1-12e24e65-bc9r6] of kind (HarvesterMachine) for machine infra-pool1-5c548759cfx5956p-klcxr in infrastructure provider: CreateError: Downloading driver from https://rancher.runsafe.no/assets/docker-machine-driver-harvester
Doing /etc/rancher/ssl docker-machine-driver-harvester
docker-machine-driver-harvester: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, stripped
Trying to access option which does not exist
THIS ***WILL*** CAUSE UNEXPECTED BEHAVIOR
Type assertion did not go smoothly to string for key
Running pre-create checks...
Error with pre-create check: "an error on the server (\"error trying to reach service: cluster agent disconnected\") has prevented the request from succeeding (get settings.harvesterhci.io server-version)"
The default lines below are for a sh/bash shell, you can specify the shell you're using, with the --shell flag.
```
harvester is listed as `Unavailable` under virtualization management, but I can click it and manage it just fine
a
Going back to the original question, I've been looking at both ExternalDNS (https://github.com/kubernetes-sigs/external-dns/tree/master) and Hashicorp Consul (https://www.consul.io/use-cases/discover-services). My use cases are:
1. Have services that grab a MetalLB IP dynamically update an upstream InfoBlox IPAM that owns the IP range that MetalLB is configured with. ExternalDNS looks promising.
2. Have services that need external SSL offload receive a load balancer entry on a central NGINX load balancer. Consul appears to be able to do this, though it may require a dedicated NGINX instance just for Consul.

I know that these options are not DNS delegation, but maybe dynamic DNS updates or dynamic service registration may meet your core requirements.
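For use case 1, a sketch of how ExternalDNS typically picks up a MetalLB-assigned IP: it watches Services (and optionally Ingresses) and writes a record for the annotated hostname into whatever provider backend it is configured with. The hostname and names below are placeholders:

```yaml
# Sketch: MetalLB assigns the external IP to this Service, and ExternalDNS
# (running with --source=service) creates/updates a record for the annotated
# hostname in its configured provider. Names and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: myapp
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp.example.com
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
```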
b
DNS updates would definitely fit my bill, so long as Windows DNS Server or BIND are the ones being supported
had a look at the harvester lb stuff just now and I didn't really want to use my harvester vlan for this
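For the BIND side of that, external-dns ships an rfc2136 (dynamic DNS update) provider; a rough sketch of the container args, with the server, zone, TSIG key name and secret as placeholders (check the flag names against the external-dns docs for the version in use):

```yaml
# Excerpt of an external-dns Deployment (container spec only) using the
# rfc2136 provider against a BIND server that allows TSIG-authenticated
# dynamic updates. Host, zone, key name, secret and version are placeholders.
containers:
  - name: external-dns
    image: registry.k8s.io/external-dns/external-dns:v0.14.0
    args:
      - --source=service
      - --source=ingress
      - --provider=rfc2136
      - --rfc2136-host=192.168.10.5
      - --rfc2136-port=53
      - --rfc2136-zone=example.com
      - --rfc2136-tsig-keyname=externaldns-key
      - --rfc2136-tsig-secret=REPLACE_ME
      - --rfc2136-tsig-secret-alg=hmac-sha256
      - --rfc2136-tsig-axfr
      - --txt-owner-id=homelab
```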