# general
We just resolved it. We increased the node count from 2 to 3 and it worked. Thanks to anyone who was preparing a solution for me.
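For anyone who wants to script the same fix rather than use the console: a minimal sketch of bumping an EKS managed node group's desired size with boto3. The cluster name, node group name, and region below are placeholders, not values from this thread; the same change can also be made from the EKS console or eksctl.

```python
# Hedged sketch: scale an EKS managed node group from 2 to 3 nodes with boto3.
# clusterName, nodegroupName, and region_name are placeholders.
import boto3

eks = boto3.client("eks", region_name="us-east-1")  # assumed region

response = eks.update_nodegroup_config(
    clusterName="my-imported-cluster",   # placeholder cluster name
    nodegroupName="default-workers",     # placeholder node group name
    scalingConfig={
        "minSize": 3,
        "maxSize": 3,
        "desiredSize": 3,
    },
)
# The call returns an update object whose status starts as "InProgress".
print(response["update"]["status"])
```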
wow i thought it was a misconfiguration
There was a cluster agent pod that kept restarting, and we couldn't work out what the issue was. We tried many other things, including AWS IAM settings, then tried increasing the number of nodes to check, and voila. Interestingly, this same Rancher instance had been used to import a GKE cluster successfully.
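An aside for anyone hitting the same symptom: a quick way to confirm the agent pod's restart loop before changing anything, sketched with the official kubernetes Python client. The cattle-system namespace and app=cattle-cluster-agent label are Rancher's usual defaults for the imported-cluster agent, not values confirmed in this thread.

```python
# Hedged sketch: list the Rancher cluster agent pod(s) and their restart counts.
# Namespace and label selector follow Rancher's usual defaults; adjust if yours differ.
from kubernetes import client, config

config.load_kube_config()  # uses the kubeconfig pointing at the imported EKS cluster
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod(
    namespace="cattle-system",
    label_selector="app=cattle-cluster-agent",
)
for pod in pods.items:
    for cs in pod.status.container_statuses or []:
        # A steadily climbing restart count alongside pending/unschedulable events
        # often points at resource pressure on the nodes rather than a misconfiguration.
        print(pod.metadata.name, cs.name, "restarts:", cs.restart_count)
```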
I see, so you had a 1-node cluster (not sure if that's even possible) and then you increased the node count and it worked?
It was an EKS cluster with two worker nodes