# rke2
c
```
tls-san:
  - $HAPROXY_ADDRESS
```
on the servers?
where HAPROXY_ADDRESS is your actual address
should probably specify both the IP and the hostname
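A minimal sketch of what that looks like in practice, assuming the standard rke2 config location and hypothetical placeholder values for the haproxy address:
```
# /etc/rancher/rke2/config.yaml on each server node
# lb.example.com and 10.0.0.50 are placeholders for the haproxy hostname and IP
tls-san:
  - lb.example.com
  - 10.0.0.50
```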
w
just right in the cluster.yml for the downstream cluster?
or right on the control plane nodes
looks like putting it right on the control plane nodes works. it doesn't like the cert now:
```
Unable to connect to the server: x509: certificate signed by unknown authority
```
i do have a valid cert we use for rancher & one of our downstream clusters, so perhaps i just have to specify that in the same spot. i'll dig into it, thanks.
i did add a secret in the past and specify it on those nodes:
```
apiVersion: helm.cattle.io/v1   # assuming the standard HelmChartConfig for rke2-ingress-nginx
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      extraArgs:
        default-ssl-certificate: "default/tls-wildcard-cert"
```
That's for the nginx ingress of course, and not the api.
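For completeness, a secret like the default/tls-wildcard-cert referenced above would have been created along these lines (the cert/key file names are hypothetical):
```
# creates the default/tls-wildcard-cert secret referenced in the HelmChartConfig
kubectl -n default create secret tls tls-wildcard-cert \
  --cert=wildcard.crt --key=wildcard.key
```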
c
the clusters themselves always use self-signed CA certs.
w
okay, kubectl seems to complain though. Is that avoidable?
or do i need to add the flag to skip tls verification somewhere on the cluster (or just tell the users to do it themselves in their kubeconfig)
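For reference, the skip-verification escape hatch lives per cluster entry in the user's kubeconfig; a sketch of what users would otherwise have to set themselves (server URL hypothetical):
```
# not recommended -- disables CA verification for this cluster entry
clusters:
- cluster:
    server: https://lb.example.com
    insecure-skip-tls-verify: true
  name: downstream
```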
c
it shouldn’t… are you pointing at 6443, or 443?
w
does kubeconfig use 443 by default if you specify https://<fqdn>?
c
I’m not exactly sure how rancher sets up the context for the authorized endpoint… I’m not sure how it picks what cert to use for the CA.
yes, if you don’t use a port then 443 is the default for https
you’re probably hitting the ingress instead of the apiserver
w
kubectl is going to the haproxy server at https://<fqdn>. haproxy has an ACL set up to identify that this is API traffic (based on hostname), and it forwards (round robin) to port 6443 on the control plane nodes over the private network.
so kubectl thinks https in this case, but the traffic is routed to 6443.
c
if haproxy is inspecting the request based on hostname, then haproxy is offloading TLS. Don’t do that; it needs to pass TLS through to the backend.
w
it's not offloading tls because we're using tcp mode. it can inspect the fqdn w/o decrypting anything.
c
hmm, generally speaking you can’t decrypt a TLS payload to get at the request without terminating it, but OK. If you’re sure it’s passing TLS through to the backend.
w
it's not the payload, it's the URL itself.
you provide a DNS pointer (they call it a CNAME) to the A record of the proxy server, and it sees that
(if dns was encrypted, that'd be different)
c
yeahhhh that’s not how it works. the host header is part of the HTTP request, which is encrypted inside the TLS session. I could see it doing SNI sniffing on the TLS handshake, but it is certainly not inspecting the HTTP request without terminating.
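An SNI-based passthrough of the kind described here looks roughly like this: haproxy matches the server name from the TLS ClientHello and forwards the still-encrypted stream to the backend (hostnames and addresses hypothetical):
```
# hypothetical sketch: route on the TLS SNI without terminating TLS
frontend k8s_api
    bind *:443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    use_backend rke2_apiservers if { req.ssl_sni -i kubeapi.example.com }

backend rke2_apiservers
    mode tcp
    balance roundrobin
    server cp1 10.0.0.11:6443 check
    server cp2 10.0.0.12:6443 check
    server cp3 10.0.0.13:6443 check
```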
w
okay, i use haproxy a fair bit to do acl based on host headers.
c
the server doesn’t know what DNS lookup you made to connect to it, it has to get that from the TLS SNI handshake or host header in the actual HTTP request.
Can you confirm that the cert you get from connecting to 6443 on the control-plane nodes is the same cert you get when connecting to 443 on your VIP?
w
sure, i'll check.
```
$ openssl s_client -showcerts -connect <fqdn>:6443 | grep -iE -A3 "(chain|subject)"
depth=1 CN = rke2-server-ca@1704385527
verify error:num=19:self signed certificate in certificate chain
verify return:1
depth=1 CN = rke2-server-ca@1704385527
verify return:1
depth=0 CN = kube-apiserver
verify return:1
^C
$ openssl s_client -showcerts -connect <control-plane-node>:6443 | grep -iE -A3 "(chain|subject)"
Can't use SSL_get_servername
depth=1 CN = rke2-server-ca@1704385527
verify error:num=19:self signed certificate in certificate chain
verify return:1
depth=1 CN = rke2-server-ca@1704385527
verify return:1
depth=0 CN = kube-apiserver
verify return:1
^C
$ openssl s_client -showcerts -connect <fqdn>:443 | grep -iE -A3 "(chain|subject)"
depth=1 CN = rke2-server-ca@1704385527
verify error:num=19:self signed certificate in certificate chain
verify return:1
depth=1 CN = rke2-server-ca@1704385527
verify return:1
depth=0 CN = kube-apiserver
verify return:1
```
the middle example is directly connecting to the control plane node.
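A quicker way to make the same comparison is to look at the issuer each endpoint presents rather than eyeballing full chains; a sketch:
```
# identical issuers mean both endpoints chain to the same cluster CA
openssl s_client -connect <fqdn>:443 </dev/null 2>/dev/null | openssl x509 -noout -issuer
openssl s_client -connect <control-plane-node>:6443 </dev/null 2>/dev/null | openssl x509 -noout -issuer
```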
c
ok, that looks good. Does the CA cert in the authorized endpoint context of your kubeconfig match the CA you see there?
w
i didn't re-download the kubeconfig after modifying the control plane (with that additional tls-san setting), so that might be it
c
Make sure that the certificate info you provided when configuring the authorized endpoint is correct for the downstream cluster.
You’ll probably want to give it the rke2-server-ca as seen in your openssl tests, so that the kubeconfig context has the correct CA info.
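In the kubeconfig, that boils down to the cluster entry carrying the CA; a sketch with a hypothetical server URL and a truncated placeholder for the base64-encoded rke2-server-ca cert:
```
clusters:
- cluster:
    server: https://lb.example.com
    # base64 of the rke2-server-ca certificate (truncated placeholder)
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t...
  name: downstream-direct
```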
w
okay, it wasn't clear to me what cert, if any, to use there. Thanks!
and those self-signed certs are rotated, so i will need to update this whenever they get rotated, correct? or can i point to the secret itself?
c
the docs on that page about when to provide certificate information for the authorized endpoint aren’t very good. Realistically you will always want to provide the cluster’s CA info since you’re connecting directly to the cluster.
the cluster CA is valid for 10 years.
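For reference, the cluster CA can be read straight off any rke2 server node if it ever needs to be re-checked:
```
# rke2's standard location for the cluster server CA
sudo cat /var/lib/rancher/rke2/server/tls/server-ca.crt
# the same CA, base64-encoded, in the node-local admin kubeconfig
sudo grep certificate-authority-data /etc/rancher/rke2/rke2.yaml
```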
w
then i better document this or i'll certainly have forgotten it in 10 years. i forget what i did thursday
c
lol
w
all right, i can confirm that worked. thanks.
if you want any docs on an example haproxy setup i may be able to provide some.
just in case it's something you think the community would want
oh, i spoke too soon. i think the certs i get differ per control node.
i probably just need to change which i trust.
c
the nodes should have different certs but the same root and intermediate CA
w
yeah, i got that. our internal docs are being reviewed now. I think we're all good. thanks for your patience.
i do near-zero k8s work so i'm all thumbs when it comes to this stuff.