# general
a
On the clusters.management.cattle.io object, there is an eksConfig field that should have an imported field. Set imported to true and let that settle (nothing should really happen besides some CRDs getting updated), then delete the cluster from Rancher; that should remove it from Rancher without deleting it from EKS. Be aware, though: there are some Rancher-managed things left over. I am not sure how those leftovers will affect anything when/if you try to add the cluster back to Rancher.
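A rough sketch of what that could look like with kubectl against the Rancher local cluster, assuming the field path is spec.eksConfig.imported and using c-abc12 as a placeholder cluster ID (verify both on your own object first):

```sh
# Mark the cluster object as imported (assumed field path: spec.eksConfig.imported).
kubectl patch clusters.management.cattle.io c-abc12 \
  --type merge -p '{"spec":{"eksConfig":{"imported":true}}}'

# Once that has settled, remove the cluster from Rancher (via the UI or kubectl);
# with imported set to true this should not delete the underlying EKS cluster.
kubectl delete clusters.management.cattle.io c-abc12
```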
b
Hi Donnie, Thanks a lot for that, I'll do some tests today with that. For cleaning up the cluster created by the 2.6 instance I had, I just deleted everything under the cattle-system, cattle-fleet-system, cattle-impersonation-system and local namespaces, and then deleted the namespaces themselves. After importing the cluster again, some resources got marked as already existing (mostly the CustomResourceDefinitions), some got patched, and most of them got created from scratch; after a few seconds the cluster was available again.
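For reference, a minimal sketch of that kind of cleanup, run against the downstream cluster's kubeconfig (not the Rancher local cluster); the namespace list is taken from the message above, and note that deleting a namespace removes everything namespaced inside it, while cluster-scoped leftovers such as CRDs and ClusterRoles stay behind:

```sh
# Deleting each namespace also deletes the workloads, secrets, etc. inside it.
for ns in cattle-system cattle-fleet-system cattle-impersonation-system local; do
  kubectl delete namespace "$ns" --wait=true
done
```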
a
Sorry, let me clarify my remarks. When I said “Rancher-managed things leftover” I was talking about Rancher-managed things in AWS (service roles, VPC, launch templates). Since those things were created by Rancher, they may not get cleaned up properly via the method I mentioned above.
b
Oh, I see. I actually want the cluster to remain functionally the same on the AWS side, so the roles, launch templates, auto scaling groups and other objects remaining there actually works for me. Thanks a lot for the advice.
Hi Donnie, I tried the steps you mentioned on a separate dev environment and it worked perfectly. However, on a few clusters in the prod environment I noticed the "imported" field is not there. Checking further, it seems the driverName field somehow switched from EKS to amazonelasticcontainerservice, and on a single cluster the provider in the UI shows as ElasticContainerService as well. No idea how it could have been switched, but apparently it always showed up as EKS in the UI up until we started having this issue, so it may be related.
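One hedged way to compare this across clusters from the Rancher local cluster is to dump the cluster objects and search for the driver-related fields, since the exact field paths can differ between Rancher versions:

```sh
# Dump every management cluster object and pull out driver-related lines.
kubectl get clusters.management.cattle.io -o yaml | grep -inE 'driverName|driver:'
```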
t
If you remove a Rancher-provisioned cluster from Rancher, its RBAC will be frozen in time. There is no supported use case to gracefully detach a Rancher-provisioned cluster, as we only support removing imported clusters. https://github.com/rancher/rancher/issues/25234
b
Hi Catherine, We don't actually want to detach the cluster, but it's the only thing we could think of to stop Rancher from constantly crashing. We submitted a bug report three weeks ago: https://github.com/rancher/rancher/issues/38377 If we could stop Rancher from crashing without removing any cluster, that would be even better for us. Thanks for the reply.
t
I'm not sure if this is the same error, but v2.6.6 included a fix for a performance issue in which Rancher would crash repeatedly: https://github.com/rancher/rancher/issues/37250 Edit: never mind, I just saw you are on Rancher v2.5.x - that bug does not appear to be present in 2.5.x.
b
Hi Catherine, Yes, we are using version 2.5.15. Kubectl access works perfectly, it's just the main Rancher pods that keep crashing and restarting.