# general
I am just going through this now
I also had trouble with the node selector, but I think the "node-role.kubernetes.io/control-plane" label isn't standard across k8s flavors - I'm on RKE2 and had to set its value to "true" in the selector
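For reference, this is roughly what the DaemonSet's scheduling section ended up looking like for me on RKE2 - just a sketch, check `kubectl get nodes --show-labels` on your own distro since the label value varies (some flavors use an empty value instead of "true"):
```yaml
# Sketch of the cloud-controller-manager DaemonSet scheduling bits on RKE2.
# RKE2 sets node-role.kubernetes.io/control-plane="true"; other distros may
# set the label with an empty value, so adjust the selector to match.
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/control-plane: "true"
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
```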
To my knowledge, once the DS is running and healthy you're good except potentially for the ECR credential provider. If you relied on the auto-ECR auth before, then you have to install the credential provider separately (a pain, not happy with how k8s handled that but not Rancher's fault)
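In case it helps anyone else, the kubelet side of the ECR credential provider looks roughly like this - a sketch only, the bin directory and how the kubelet flags get set depend on your distro/install method:
```yaml
# Sketch of a kubelet image credential provider config for ECR.
# The ecr-credential-provider binary goes in the directory passed via
# --image-credential-provider-bin-dir, and this file is passed via
# --image-credential-provider-config on the kubelet.
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
  - name: ecr-credential-provider
    apiVersion: credentialprovider.kubelet.k8s.io/v1
    matchImages:
      - "*.dkr.ecr.*.amazonaws.com"
    defaultCacheDuration: "12h"
```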
s
Thanks for the response. We're not sure we're using any features provided by the cloud provider at this point. We do have node templates with AWS configs, and we spin up EC2 instances into etcd, control plane, and worker node groups. We don't create security groups, load balancers, or anything like that. We used this provider years and years ago without understanding the benefits of doing so, and I'm not sure it's even needed. I was able to get the migration to work, or at least I think it works to an extent when I look at the cloud-controller logs. The cluster's cloud provider is set to aws-external and the controller is running. But if I create a new cluster through the Rancher UI and choose EC2 with the out-of-tree provider, it sets the cloud provider to just plain "external", which contradicts what happens in the migration process. I don't understand why it's different.
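For anyone comparing, the value I'm talking about shows up in the provisioning cluster spec, roughly like this (Rancher v2 provisioning; treat the exact nesting/field names as approximate, they may differ by Rancher version) - the migrated cluster has aws-external here while a fresh out-of-tree cluster gets plain external:
```yaml
# Rough sketch of where the cloud provider name lands in the downstream
# cluster object (provisioning.cattle.io/v1). The migration steps leave
# "aws-external" here, while the UI's out-of-tree option writes "external".
apiVersion: provisioning.cattle.io/v1
kind: Cluster
spec:
  rkeConfig:
    machineSelectorConfig:
      - config:
          cloud-provider-name: external   # shows up as "aws-external" after migration
```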
Just as an update: we ended up removing the cloud provider altogether and rebuilt our clusters. Everything is running without issue.