# rke2
d
Hi Rancher Community, I'm experiencing an issue with adding new worker nodes to my RKE2 cluster managed by Rancher. Here's my current setup and problem:

**Environment:**
- RKE2 cluster managed through the Rancher UI
- Management node running RKE2 server (version: v1.28.9-rke2r1)
- Attempting to add new worker nodes using the standard registration command

**Problem:** When I run the worker node registration command:

```
curl -fL https://rancher.aaaaa-7.com/system-agent-install.sh | sudo sh -s - \
  --server https://rancher.within-7.com \
  --label 'cattle.io/os=linux' \
  --token [token] \
  --ca-checksum [checksum] \
  --worker
```

the RKE2 agent fails to connect because it's trying to reach an old IPv6 address of the management node. The management node's IPv6 address was changed previously, but RKE2 seems to be using the cached/stored old IPv6 address.

**Current status:**
- RKE2 server is running successfully on the management node
- The management node has both IPv4 and current IPv6 addresses

**Question:** I suspect the old IPv6 address might be stored in TLS certificates (SAN fields) or in the etcd member information. What's the best approach to:
1. Update the stored IPv6 address in certificates/configuration
2. Allow new worker nodes to register successfully
3. Avoid disrupting the existing cluster

Should I regenerate specific certificates, update the cluster configuration, or is there a cleaner way to handle this IPv6 address change in RKE2? Any guidance would be greatly appreciated!
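One way to check whether an address is baked into a certificate is to inspect the SAN fields of the cert the server presents on its supervisor port (e.g. with `openssl s_client -connect`). As a self-contained illustration of reading SANs, here is a sketch that generates a throwaway certificate carrying an IPv6 SAN and dumps it back out; the file names and the demo address are placeholders, not anything from a real RKE2 install:

```shell
# Generate a throwaway self-signed cert with an IPv6 SAN (stand-in for an
# RKE2 serving cert). Requires OpenSSL 1.1.1+ for the -addext flag.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/rke2-demo-key.pem -out /tmp/rke2-demo-cert.pem \
  -days 1 -subj "/CN=rke2-demo" \
  -addext "subjectAltName=IP:2600:1f16:de1:d800:6974:3bdb:c038:fedb,DNS:example.invalid"

# Dump only the SAN extension -- this is where a stale node IP would show up
openssl x509 -in /tmp/rke2-demo-cert.pem -noout -ext subjectAltName
```

Against a live server you would point `openssl s_client -connect` at the supervisor endpoint and pipe the presented cert into the same `openssl x509 -noout -ext subjectAltName` invocation.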
c
What do you mean by "an old IPv6 address of the management node"? Server node IPs must be static for the duration of the node's life in the cluster. If the IP changes, you will likely need to delete the node from the cluster and rejoin it, and take whatever steps are necessary to ensure that it does not change again in the future.
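For the rejoin path, the piece that points an agent at the server is the `server:` entry in the agent's RKE2 config file. A minimal sketch of what `/etc/rancher/rke2/config.yaml` on a worker might contain, where the IPv6 literal and the token value are placeholders:

```yaml
# /etc/rancher/rke2/config.yaml on the worker (agent) node.
# The address below is a placeholder for the server's *current* IPv6 address;
# note that IPv6 literals in URLs must be wrapped in square brackets.
server: https://[2600:aaaa:bbbb::1]:9345
token: <cluster-join-token>
```

On the server side, listing the new address under `tls-san:` in the server's own `config.yaml` tells RKE2 to include it in the serving certificate's SAN fields, which is usually cleaner than hand-editing certificates.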
d
Thank you for the guidance! Let me clarify the exact situation.

When I initially set up the management node, it had the IPv6 address `2600:1f16:de1:d800:6974:3bdb:c038:fedb`, and the server was registered with this address. Later, the IPv6 address was accidentally changed to a new one. Now when new worker nodes try to join, they're still trying to connect to the old address:

```
"server": "https://[2600:1f16:de1:d800:6974:3bdb:c038:fedb]:9345"
```

This causes connection failures since that old IPv6 address is no longer valid.

My questions:

1. Where is this old IPv6 address stored that I need to update? Is it in:
   - RKE2 server certificates (SAN fields)?
   - Rancher cluster configuration?
   - etcd cluster member information?
2. Can I manually update the stored IPv6 address to the current one, or do I need to rebuild the entire cluster?
3. If I need to rebuild, what's the proper procedure for a single-node management setup?

I understand the importance of keeping server IPs static going forward. Right now I'm trying to figure out which configuration files or certificates need to be updated to reflect the current IPv6 address. What specific files or commands should I check/modify to update this stored IPv6 address? Thanks for your patience!
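On the agent side, the stale address would live in `/etc/rancher/rke2/config.yaml` (and in any Rancher system-agent plan files), so one check is simply to grep for the old literal and rewrite it. A sketch of that edit against a stand-in copy in `/tmp`; the new address is a placeholder, and note that fixing the agent file alone does not touch anything the server stores in its certificates or etcd member list:

```shell
# Stand-in for /etc/rancher/rke2/config.yaml on an agent node,
# containing the stale bracketed IPv6 literal from this thread
cat > /tmp/rke2-demo-config.yaml <<'EOF'
server: https://[2600:1f16:de1:d800:6974:3bdb:c038:fedb]:9345
token: REDACTED
EOF

# NEW_ADDR is a placeholder for the server's current IPv6 address
NEW_ADDR='2600:aaaa:bbbb::1'

# Replace the first bracketed IPv6 literal with the current one
sed -i "s|\[[^]]*\]|[${NEW_ADDR}]|" /tmp/rke2-demo-config.yaml

# Confirm the server line now points at the new address
grep '^server:' /tmp/rke2-demo-config.yaml
```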