05/15/2023, 3:27 PM
Hello, I'm new here, but I would like some help with my EKS cluster on Rancher 2.5.5 (I know it's deprecated, but we couldn't update yet). I just can't access my cluster; it keeps saying "*Error* cannot determine access, cluster is unavailable". I've tried with the admin root user and tried creating another admin user, but neither worked. All other clusters on this Rancher are working fine; only this one has the error, and I don't know what to do. Important to note: this access was working last Friday, but today it is not. No one worked over the weekend, so there were no changes between Friday and today. I do have access via the AWS Management Console, but I need to access it through Rancher, not the EKS console. Any idea what's going on here?


05/15/2023, 3:58 PM
What's the status of the pods in the cattle-system namespace? Anything in their logs that may give you a hint?
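For reference, a minimal way to run that check against the downstream cluster, assuming a kubeconfig that still works (e.g. one generated from the AWS console); the label selector below matches Rancher 2.5's agent deployment but may differ in other versions:

```shell
# List the Rancher agent pods in the downstream cluster
kubectl -n cattle-system get pods

# Tail the cluster-agent logs for connection/authentication errors
kubectl -n cattle-system logs -l app=cattle-cluster-agent --tail=100
```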


05/15/2023, 4:31 PM
From the local cluster, this is the status of all pods in the cattle-system namespace.
From the cluster I'm having the issue with, I can't see anything, because I can't connect to it.
Does anyone have any other ideas? I still don't know how to solve this problem. I noticed the eksStatus field in the API info for this cluster is empty; it looks like the credential was deleted. I tried to recreate it using the same name and CA/endpoint info, but it didn't work.
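One way to confirm the empty eksStatus is to read the cluster object from the Rancher v3 API; a quick sketch, where the URL, token, and cluster ID are all placeholders:

```shell
# Inspect the cluster object in the Rancher API; an empty/missing eksStatus
# suggests the cloud credential or EKS config was lost.
# Placeholders: rancher.example.com, $RANCHER_TOKEN, c-xxxxx.
curl -sk -H "Authorization: Bearer $RANCHER_TOKEN" \
  "https://rancher.example.com/v3/clusters/c-xxxxx" | jq '.eksStatus'
```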
Btw, I could connect via the CLI (kubectl); it's only via the Rancher UI that I can't.
I tried to recreate the Rancher agent, but that didn't work either.
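For anyone hitting the same wall, a rough sketch of redeploying the agent, assuming the registration manifest is re-downloaded from the Rancher UI (the URL and TOKEN below are placeholders, and the exact flow differs between imported and Rancher-provisioned EKS clusters):

```shell
# Remove the existing cluster agent so the manifest recreates it cleanly
kubectl -n cattle-system delete deployment cattle-cluster-agent

# Re-apply the registration manifest served by Rancher
# (placeholder URL and TOKEN; copy the real command from the Rancher UI)
kubectl apply -f "https://rancher.example.com/v3/import/TOKEN.yaml"
```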