# general
f
Is it the same local cluster that Rancher is running on?
p
Yes, basically. I accidentally ran the cluster registration on the local cluster and associated it with the downstream cluster ID.
So what I think is happening is that Rancher is still watching resources for both clusters, but I don’t know how to prove it. I don’t see the same behavior when interacting with the cluster via kubectl, only via the UI.
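A rough way to check from the local cluster whether the registration actually landed there (a sketch; resource and deployment names assume a standard Rancher import manifest):
```sh
# List the cluster objects Rancher is tracking on the local cluster.
# Seeing the downstream cluster ID here is expected; the question is whether
# the local cluster also got wired up as that downstream cluster.
kubectl get clusters.management.cattle.io

# The registration manifest installs the cluster agent into cattle-system.
# On a normal local cluster this deployment should not exist, so finding it
# here would indicate the registration was applied to the wrong cluster.
kubectl -n cattle-system get deployment cattle-cluster-agent -o wide
```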
f
I have not heard of anyone running into this scenario and do not have enough understanding to answer your question. If possible, can you try deleting the downstream cluster from Rancher and see what happens? AFAIK, Rancher will do some cleanup on the cluster, but I am unsure how big the impact will be in this case. (It is totally fine if you cannot delete it because the setup is important.)
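For reference, deleting the downstream cluster via kubectl on the local cluster is roughly equivalent to deleting it in the UI and triggers the same Rancher-side cleanup (a sketch; `<c-xxxxx>` is a placeholder for the downstream cluster ID):
```sh
# Removing the management cluster object makes Rancher tear down its
# bookkeeping for that downstream cluster. Placeholder ID shown below.
kubectl delete clusters.management.cattle.io <c-xxxxx>
```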
p
I was hoping to avoid that since I’d have to regenerate auth contexts
f
Understood. Sorry that I do not have a good answer right now. I have brought this question to the team, and hopefully someone else can help.