I’ve somehow gotten into a weird situation where I accidentally registered the local cluster as a downstream cluster. Everything seems to be working now, with one small issue: the downstream cluster intermittently shows the wrong cluster information. Anyone have any ideas what to check?
future-night-17486
05/01/2023, 5:23 PM
Is it the same local cluster that Rancher is running on?
polite-action-59010
05/01/2023, 6:37 PM
Yes, basically. I accidentally ran the cluster registration manifest on the local cluster, associated with the downstream cluster’s ID.
So what I think is happening is that Rancher is still watching resources for both clusters, but I don’t know how to prove it. I don’t see the same behavior when interacting with the cluster via kubectl, only via the UI.
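One way to look for evidence of the stray registration is to inspect what Rancher tracks on the local cluster directly. A hedged sketch of some checks, assuming a standard Rancher install (the `clusters.management.cattle.io` CRD, the `cattle-system` namespace, and the `cattle-cluster-agent` deployment exist in stock installs, but exact output and whether the local cluster runs its own agent vary by Rancher version):

```shell
# On the local (Rancher) cluster, list the cluster objects Rancher tracks.
# "local" is expected; note whether the downstream cluster's ID also has
# unexpected state here.
kubectl get clusters.management.cattle.io

# Inspect the cluster agent running on this cluster and the registration
# it was started with (server URL, env vars). Depending on your Rancher
# version the local cluster may legitimately run its own agent, so the
# question is which registration this one belongs to.
kubectl -n cattle-system get deployment cattle-cluster-agent -o yaml

# Registration credentials live in cattle-system; a secret left over from
# the accidental downstream registration may still be present here.
kubectl -n cattle-system get secrets | grep cattle-credentials
```

If the agent on the local cluster turns out to be pointing at the downstream cluster’s registration, that would explain Rancher watching both and the UI flapping between them.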
future-night-17486
05/01/2023, 9:04 PM
I have not heard of anyone running into this scenario, and I don’t have enough understanding to answer your question.
If it is possible, can you try deleting the downstream cluster from Rancher and see what happens? AFAIK, Rancher will do some cleanup on the cluster, but I am unsure how big the impact would be in this case. (It is totally fine if you cannot delete it because the setup is important.)
polite-action-59010
05/01/2023, 9:11 PM
I was hoping to avoid that, since I’d have to regenerate auth contexts.
future-night-17486
05/01/2023, 9:15 PM
Understood. Sorry that I don’t have a good answer right now. I have brought this question to the team and hopefully someone else can help.