I enabled the AWS cloud provider in my RKE1 cluster, but it doesn't appear to be happy with my host names. Our host names follow the format ip-xxx-xxx-xxx-xxx, but apparently the k8s cloud provider expects the format ip-xxx-xxx-xxx-xxx.region.compute.internal. This left the cluster in a weird state, with new control plane and etcd nodes in Rancher that don't resolve. Has anyone run into this before?
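For reference, the bit I enabled in cluster.yml is roughly this (trimmed to just the relevant section; no extra cloud config beyond the provider name):

```yaml
# cluster.yml (excerpt)
cloud_provider:
  name: aws
```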
I was able to recover by removing the cloud provider section from cluster.yml, rebooting one of the control plane nodes, and then deleting the "new" etcd and control plane nodes from Rancher. Each node appeared twice, once as ip-xxx-xxx-xxx-xxx and once as ip-xxx-xxx-xxx-xxx.region.compute.internal; I deleted the latter. After about 10 minutes the control plane and etcd nodes cleared up.
I would still like to get the cloud provider working. It seems like my only option is to change the host names of all the nodes to the expected format. Is that possible to do in a running cluster?
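In case it helps anyone comparing notes, here's a quick sketch of the check I've been doing by hand: the cloud provider wants the node name to match the EC2 private DNS name, which you can read from the instance metadata service. This assumes the standard IMDSv1 metadata path is reachable (if you've enforced IMDSv2, you'd need to add a token header first); the function names are mine, not from any tool:

```python
import socket
import urllib.request

# Standard EC2 instance metadata path for the private DNS name,
# e.g. ip-10-0-0-1.us-east-1.compute.internal (IMDSv1; IMDSv2 needs a token).
IMDS_URL = "http://169.254.169.254/latest/meta-data/local-hostname"


def ec2_private_dns(timeout: float = 2.0) -> str:
    """Fetch this instance's private DNS name from the metadata service."""
    with urllib.request.urlopen(IMDS_URL, timeout=timeout) as resp:
        return resp.read().decode().strip()


def hostname_matches(hostname: str, private_dns: str) -> bool:
    """The AWS cloud provider expects the node name to equal the private DNS name."""
    return hostname == private_dns


if __name__ == "__main__":
    dns = ec2_private_dns()
    host = socket.gethostname()
    print(f"hostname={host} private_dns={dns} match={hostname_matches(host, dns)}")
```

On my nodes this reports a mismatch, since the hostname is the short ip-xxx-xxx-xxx-xxx form rather than the full private DNS name.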