Not sure if this is the proper place to ask this question. I'm seeing a problem with two cattle-node-agents that won't connect/run (it appears to be only those two), and also an issue on two downstream clusters where the cluster agent is not connected. Here is my scenario and how I sort of broke it...
• The dev cluster was fine and working. We set up the new Rancher control plane with HA and imported the dev cluster from an older control plane. We never got around to importing the production cluster into this new control plane and put that on pause.
• Sometime later I went ahead and tried to import/register the prod cluster with this new control plane, but in doing so all of my dev namespaces and resources ended up under the production cluster name, which is not what I was expecting. This part was my fault: I think I still had the dev config file set as my kubeconfig, so I suspect the registration manifest got applied against the dev cluster.
◦ I attempted to retrace my steps and re-register the dev cluster, and in doing so I managed to disconnect its cluster agent... at least I think that's what happened. This is where things got worse: I set my kubeconfig back to the production config file and attempted to re-register the production downstream cluster (the one now showing the dev resources), which caused the production cluster agent to disconnect as well (a rough sketch of how I'm double-checking contexts now is after this list).
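For reference, this is roughly how I'm double-checking which cluster I'm pointed at before applying any registration manifest again (the context names here are just placeholders for my setup):

```
# Show which kubeconfig file and context are currently in use
echo "$KUBECONFIG"
kubectl config current-context

# List all contexts to confirm which one maps to dev vs. prod
# (dev-cluster / prod-cluster are placeholder names)
kubectl config get-contexts

# Explicitly switch to the intended cluster before applying anything
kubectl config use-context prod-cluster

# Sanity-check the nodes to make sure this really is the prod cluster
kubectl get nodes -o wide
```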
Wondering if anyone has encountered this issue before and what the steps are to rectify it. At this point in time I can only talk to my production namespaces. It appears the production cattle-node-agents are trying to resolve the old control plane address. I tried manually editing the server URL, but I just get errors when I try to apply the new YAML; both cattle-node-agents are still pointing at the old control plane URL for the server. Roughly what I tried is below.
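This is approximately what I was attempting when editing the server URL on the agents. The new Rancher URL is a placeholder, and I'm not sure whether changing the env var directly is even the right approach versus re-running the registration command from the new control plane's UI, so treat this as a sketch of my attempt rather than a known-good fix:

```
# Check what server URL the agents are currently configured with
kubectl -n cattle-system get daemonset cattle-node-agent \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="CATTLE_SERVER")].value}'
kubectl -n cattle-system get deployment cattle-cluster-agent \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="CATTLE_SERVER")].value}'

# What I tried: point both agents at the new control plane
# (https://new-rancher.example.com is a placeholder URL)
kubectl -n cattle-system set env daemonset/cattle-node-agent \
  CATTLE_SERVER=https://new-rancher.example.com
kubectl -n cattle-system set env deployment/cattle-cluster-agent \
  CATTLE_SERVER=https://new-rancher.example.com

# Then watch whether the agent pods restart and reconnect
kubectl -n cattle-system get pods -w
```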