# k3s
c
our team doesn’t maintain either of those, and we don’t get much, if any, outreach from the folks who do, so unfortunately I can’t offer much help. I will say that the kubelet certs are generated using the hostname, node-ip, and node-external-ip values - so the missing bit is probably that the nodes aren’t being passed the proper addresses during provisioning. I have heard in the past that the Linode cloud provider likes to add node addresses after the fact, ones that aren’t known at the time the kubelet certs are being generated, and that causes problems. Not sure how folks solved that.
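for example, passing those explicitly at provisioning time would look something like this - placeholder addresses, obviously, and whether your provisioner lets you inject them is another question:
```
# example only - substitute the node's real private/public addresses.
# these flags feed straight into the SANs on the generated kubelet serving cert.
k3s agent \
  --node-ip 10.0.0.5 \
  --node-external-ip 203.0.113.10
```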
s
ah, interesting. I wonder if the way we are setting our hostname makes it differ from what other clouds do, and whether that’s why it isn’t getting added to the certs.
from what I can tell, none of the k3s/rke2 CAPI code sets node-ip/node-external-ip explicitly in its config, but I could be missing something.
c
note that the error about certs that you get when doing `kubectl logs` or `kubectl exec` is from the apiserver connecting to the kubelet, not from kubectl connecting to the control-plane endpoint. It has nothing to do with the LB SANs; it’s only about the node address that the apiserver is trying to use to connect to the node not being in the kubelet’s serving cert.
and the only things that determine that are the node’s hostname, node-ip, and node-external-ip. You can change the order in which the apiserver tries those fields with the apiserver’s `--kubelet-preferred-address-types` flag: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/
```
--kubelet-preferred-address-types strings     Default: "Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP"
List of the preferred NodeAddressTypes to use for kubelet connections.
```
k3s sets that to `InternalIP,ExternalIP,Hostname` by default; the help text above shows the upstream Kubernetes defaults.
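if you wanted to override that on a k3s server, it would look something like this - the ordering here is just an example:
```
# example only - reorder the address types the apiserver tries for kubelet
# connections; --kube-apiserver-arg passes the flag through to kube-apiserver.
k3s server \
  --kube-apiserver-arg=kubelet-preferred-address-types=Hostname,InternalIP,ExternalIP
```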
s
and it is the node-ip/node-external-ip set when starting the k3s service, correct? not what is actively observed in the cluster? so, for instance, the Linode CCM updating it after the fact would not help here
c
the kubelet sets the hostname and IP when it initially creates the node object, but the internal and external IPs can be overridden by the CCM.
you might compare the SANs on the kubelet serving cert with the addresses on the node resource that you get from `kubectl get node`, figure out which ones are both valid on the cert and reachable from the apiserver, and set those first in `--kubelet-preferred-address-types`.
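something like this, assuming the default k3s agent cert path and a placeholder node name:
```
# example only - default k3s path, placeholder node name.
# SANs actually on the kubelet serving cert:
openssl x509 -in /var/lib/rancher/k3s/agent/serving-kubelet.crt -noout -text \
  | grep -A1 'Subject Alternative Name'

# addresses the apiserver sees on the node resource:
kubectl get node my-node -o jsonpath='{.status.addresses}'
```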
s
yeah, based on what I’ve seen in the past there, the `node` resource does return the correct internal/external IP, but what we are seeing is that unless we set it with the k3s args/config, those IPs are not getting added to the kubelet cert’s SANs
do you happen to know at what point the kubelet cert gets generated? is that when the node is first registered, when it becomes ready, or at some other point?
c
it gets all its certs and config first thing. It can’t connect to the apiserver without certs.
I don’t think there’s any way to have it dynamically reload the cert later, either. I might look into that. If we could regenerate the cert while it’s running to stay in sync with the addresses, that would make some things easier when using external cloud providers.
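in the meantime, the workaround I’d try, assuming a systemd-managed agent and the default k3s paths: k3s regenerates missing certs on startup, so deleting the serving cert pair and restarting should get you a cert that matches the current addresses:
```
# example workaround only - paths and service name are the k3s defaults.
# k3s recreates missing certs at startup, picking up the current addresses.
rm /var/lib/rancher/k3s/agent/serving-kubelet.crt \
   /var/lib/rancher/k3s/agent/serving-kubelet.key
systemctl restart k3s-agent
```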