# rke2
b
The kubeconfigs I generate/use from Rancher all have a 30-day TTL. I always have to download a new one, or copy/paste in the new token from the UI.
o
i haven't had to update mine since we installed it last year, hm
cluster is 334 days old - thought we had eclipsed the 1-year (365-day) mark, so thought the certs were the issue
b
I know not all rke2 deployments are through Rancher, but that's the only way we've deployed it.
I think there's a newer setting and default now vs. the enforced 30 days from when we deployed it; we just haven't changed it yet.
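If you want to check or change it, the TTL lives in a Rancher setting on the management ("local") cluster. Something like this should work, assuming Rancher 2.6+ with kubectl pointed at the local cluster (setting name from memory, so double-check it in your version):

```bash
# Run against the Rancher management ("local") cluster, not the downstream RKE2 cluster.
# Show the current default TTL for generated kubeconfig tokens (value is in minutes).
kubectl get settings.management.cattle.io kubeconfig-default-token-ttl-minutes

# Example: raise it to 90 days (90 * 24 * 60 = 129600 minutes).
kubectl patch settings.management.cattle.io kubeconfig-default-token-ttl-minutes \
  --type=merge -p '{"value":"129600"}'
```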
o
i gotcha.. i'll try to copy over the kubeconfig for the cluster, see if that changes anything
b
If you took the config from the cluster, rotating the certs might have rotated the token it used as well.
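For reference, this is roughly how we pull a fresh copy off a server node after a rotation (RKE2's default paths; <server-node> is a placeholder for your own host):

```bash
# On an RKE2 server node, the admin kubeconfig is regenerated as needed at startup.
# Copy it down and point it at the server instead of the local loopback address.
scp root@<server-node>:/etc/rancher/rke2/rke2.yaml ~/.kube/rke2-cluster.yaml
sed -i 's/127.0.0.1/<server-node>/' ~/.kube/rke2-cluster.yaml

# Quick sanity check with the refreshed config.
KUBECONFIG=~/.kube/rke2-cluster.yaml kubectl get nodes
```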
o
i didn't previously; however, i just did and it worked
so that's good to know, and glad this worked
thanks for bringing up what you've done in the past, it brought me to a solution!
now if only I could get the ECK folks to be as helpful 🙂
c
the client cert embedded in your kubeconfig expired. The admin cert on the RKE2 server is valid for 365 days and is updated as necessary when RKE2 starts, so if you copy them off the server nodes you’ll need to refresh your copy periodically. but it sounds like you figured that out.
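if you want to confirm that's what happened next time, you can pull the embedded cert out of the kubeconfig and check its dates. something like this, assuming the first user entry in the file is the one you're actually using:

```bash
# extract the client cert embedded in the current kubeconfig and print its subject and expiry
kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' \
  | base64 -d | openssl x509 -noout -subject -enddate
```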
o
@creamy-pencil-82913 but it was at 344 days, so it threw me.. it worked out for sure. thanks dude
quick question - would this impact the nginx ingress controller? since the refresh, it seems all of the connections to the services are now getting connection refused
b
It might. You could try triggering a new rollout
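Something along these lines; the RKE2-packaged controller normally lives in kube-system, though the exact resource name can vary by version, so list it first:

```bash
# Find the ingress controller workload (name/kind can differ between RKE2 versions).
kubectl -n kube-system get daemonset,deployment | grep -i ingress

# Restart it; swap in whatever name the previous command showed.
kubectl -n kube-system rollout restart daemonset/rke2-ingress-nginx-controller
```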
o
i'll try to roll out / restart the services and see if it shifts
hm, same results. our front end resides in the default namespace, the ingress resource is in the default namespace, and the path and everything is unchanged.. service running, pod running.. nothing has changed aside from the cert changes, and it's still not loading. i restarted the service/pods by completely removing them and then re-creating them in the cluster too
interesting - we restarted the host yesterday, the ingress worked, everything connected... went back in to check today, and nada. back to connection refused
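this is the checklist i'm running through to split firewall problems from ingress problems, for what it's worth (<frontend-ingress>, <frontend-service>, <node-ip>, and <your-hostname> are placeholders for our actual names):

```bash
# is the controller pod actually running on the node?
kubectl -n kube-system get pods -o wide | grep -i ingress

# does the ingress still resolve to a backend with ready endpoints?
kubectl -n default describe ingress <frontend-ingress>
kubectl -n default get endpoints <frontend-service>

# hit the node directly to see whether traffic even reaches the controller
curl -vk https://<node-ip>/ -H 'Host: <your-hostname>'
```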
c
the only way it’d change is if someone tried to rotate or switch to unencrypted via the cli
o
would firewalld have an impact if it somehow started interfering again, despite the rules we had in place to keep it from messing with anything RKE2/k8s related?
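i'll check on the nodes whether it's back on; something like this should tell us (and the rke2 docs suggest just disabling firewalld on the nodes, if i remember right):

```bash
# see whether firewalld is running again and what it's enforcing
systemctl is-active firewalld
firewall-cmd --state
firewall-cmd --list-all

# if it is back, disabling it on rke2 nodes is the usual recommendation
sudo systemctl disable --now firewalld
```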
and back when the cert rotation happened, we just followed the commands from the documentation:
# Stop RKE2
systemctl stop rke2-server

# Rotate certificates
rke2 certificate rotate

# Start RKE2
systemctl start rke2-server
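and to double-check the rotate actually took, i figure something like this on the server would confirm the new expiry (assuming the default rke2 data dir and paths):

```bash
# confirm the on-disk admin client cert got a new expiry after the rotate
sudo openssl x509 -noout -enddate \
  -in /var/lib/rancher/rke2/server/tls/client-admin.crt

# the server-side kubeconfig embeds the refreshed cert, so this should just work
sudo /var/lib/rancher/rke2/bin/kubectl \
  --kubeconfig /etc/rancher/rke2/rke2.yaml get nodes
```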