# rke2
b
thanks, i saw that. I'm not sure what parts need to change. I will have to research.
v
you can change as many or as few values as you need to customize the component
we use it for preventing ingress pods on specific node types which have existing services on hostport 80/443, as well as adding config data to things like coredns
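To make that concrete, here's a minimal sketch of the mechanism being described: RKE2 lets you override packaged chart values with a `HelmChartConfig` manifest dropped into the server's manifests directory. The node label key and values below are hypothetical, but the resource kind and namespace are what RKE2 expects for its bundled ingress-nginx chart:

```yaml
# /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      # hypothetical label: keep ingress pods off nodes that already
      # bind hostports 80/443 for other services
      nodeSelector:
        node-role.example.com/ingress: "true"
```

Any values the upstream chart accepts can go under `valuesContent`; everything not overridden keeps the RKE2 defaults.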
b
right, i'm just not sure how much will work in a self-hosted environment, e.g. no LoadBalancer service.
ah i see
v
at that point it is just a vanilla k8s ingress controller, but you can still point things like a wildcard DNS record and configure your individual ingresses for things like TLS offload
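As a sketch of the per-ingress TLS offload mentioned here (hostnames, secret, and service names are illustrative, assuming the default nginx ingress class and a pre-created TLS secret):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  namespace: default
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com          # covered by the wildcard DNS record
      secretName: app-tls          # kubernetes.io/tls secret with cert + key
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
```

The controller terminates TLS using `app-tls` and forwards plain HTTP to the `app` service.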
b
i've heard i need MetalLB to get the LoadBalancer service but i'm trying to keep things as simple as possible 🙂
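For reference, if MetalLB were added later, its basic layer-2 setup is just two small resources (the pool name and address range below are hypothetical):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # hypothetical LAN range for LoadBalancer IPs
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```

With this in place, Services of type LoadBalancer get an IP from the pool and MetalLB answers ARP for it on the local network.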
ah ok cool - thanks!
so the default ingress controller should work like you expect without any modifications?
excluding the load balancer
v
for the most part, yeah
we use a vanilla internal RKE2 cluster on VMs running Canal to host the Rancher chart, and it uses ingresses to handle the TLS offload and route traffic to the service
b
ok cool - really appreciate it. it's been a few years since i worked with k8s and i'm only now working my way through the CKA cert lol
i need to review Canal - i remember setting that up
v
the difference between this approach and an external LB is we still need things like DNS records to get the traffic to the ingress (which includes addresses for all nodes running ingress), and don’t support things like stateful client connections that can survive ECMP rehash
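The "addresses for all nodes running ingress" part can be sketched as a wildcard record with multiple A records, one per ingress node (zone, TTL, and IPs below are hypothetical); the resolver round-robins across them, which is what makes long-lived stateful connections fragile when a node drops out:

```
; hypothetical zone entries: three nodes run the ingress controller
*.apps.example.com.  300  IN  A  10.0.0.11
*.apps.example.com.  300  IN  A  10.0.0.12
*.apps.example.com.  300  IN  A  10.0.0.13
```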
b
gotcha