# rke2
c
You're supposed to upgrade it periodically, or at least restart the services when the certs are within 3 months of expiring so they auto-renew.
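You can check how close a cert is with openssl. A minimal sketch, assuming the default rke2 TLS path (k3s keeps its certs under /var/lib/rancher/k3s/server/tls instead):

```bash
# Print the expiry date of the apiserver serving cert (default rke2 path).
openssl x509 -enddate -noout \
  -in /var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt

# Or sweep every cert in the TLS directory.
for crt in /var/lib/rancher/rke2/server/tls/*.crt; do
  echo "$crt"; openssl x509 -enddate -noout -in "$crt"
done
```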
q
so i have a k3s cluster with rancherd, then inside that, i have an rke2 cluster.
so two questions, 1) do i have to do this for both? 2) what's the "upgrade" process?
and it's my understanding the restart also happens if you reboot the node, correct? if yes, should i just reboot each node once a month or something?
c
The cluster is not “inside” rancherd, it is managed by rancher. Certificate rotation is a cluster level operation, so yes they would both need to be restarted to effect the renewal.
Automatic renewal happens whenever the service starts, so it’s up to you if you want to reboot the node, or just restart the service.
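For example, on a default rke2 install the restart is just the systemd unit, no reboot required:

```bash
# On server (etcd/control-plane) nodes:
systemctl restart rke2-server
# On agent (worker) nodes:
systemctl restart rke2-agent
```

k3s is the same idea with the k3s / k3s-agent units.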
q
followup. same thing for windows nodes?
c
windows nodes are agents and behave the same as linux agents
q
okay, i know when i hit the cert rotation button on the rancher site the windows node threw errors. after looking into it, it looks like there's a bug there, but it sounds like i should be able to just bounce that node and it'll renew itself
thanks!
i'll see if i can do some of this to revive the nodes that are having issues and see if that fixes it. can you do this after the cert is already expired? or is it too late then?
c
you’d need to restart the server nodes first, then the agents.
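Something like this, with hypothetical hostnames, restarting servers one at a time and waiting for each to come back before touching the agents:

```bash
# Servers first, one at a time.
for node in server-1 server-2 server-3; do
  ssh "$node" systemctl restart rke2-server
  until ssh "$node" systemctl is-active --quiet rke2-server; do sleep 5; done
done
# Then the agents.
for node in agent-1 agent-2; do
  ssh "$node" systemctl restart rke2-agent
done
```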
q
by server nodes, you mean control-plane / etcd / master nodes?
sorry, running harvester and k3s and rancher and rke2. it's like inception for services. 😄
(fyi my k3s setup is on proxmox, not harvester)
c
etcd/control-plane roles are servers, as far as k3s/rke2 are concerned. Other nodes are agents.
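You can see the split in kubectl: servers carry the role labels, agents show `<none>` under ROLES:

```bash
kubectl get nodes
# Or select the servers explicitly by label:
kubectl get nodes -l node-role.kubernetes.io/control-plane=true
```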
q
my setup is basically proxmox w/ k3s on it running rancherd, harvester w/ rke2 nodes, and rke2 is managing harvester as well
got it. thanks!
so basically reboot all my master nodes, then agent nodes.
i set up a custom rke2 cluster with Rancher. it's v1.30.4. on my prior cluster i used v1.26.8 with the default ingress, and i could go to a master node's ip and get an ssl cert (https://{ip}). but now i'm not getting a page, just a timeout. nodes for both clusters have the same taints, and the helmchartconfig is the same for ingress. i don't understand how i could go to the ip before but can't now. this is mainly an issue because on the cluster i used to run, i had kube-vip providing a floating / virtual ip between the master nodes. i could then use that as my endpoint for ingress, kubeconfig, etc, and if a node was down i was still able to access resources on the cluster.
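for anyone following along, here's roughly what i'm poking at to debug it (the pod label assumes the default rke2-ingress-nginx chart):

```bash
# is the ingress controller actually running, and on which nodes?
kubectl -n kube-system get pods -o wide \
  -l app.kubernetes.io/name=rke2-ingress-nginx
# on the master node itself: is anything bound to 80/443?
ss -tlnp | grep -E ':(80|443)'
```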