# rke2
a
I just ran into a very frustrating issue with the AWS cloud controller manager. I followed the instructions here, which worked great a while back when I built an RKE2 1.27.x cluster; I later upgraded that cluster to 1.28.x and it still works there. Now I'm building a new cluster at version 1.32 in a new air-gapped C2S partition. The cloud controller kept failing, mostly not reading the custom URL in my cloud.conf and defaulting to ec2.<region>.amazonaws.com. Finally I realized that it keeps pulling down the cloud-controller-manager:v1.27.1 image and running at that version. Nothing in the docs or instructions says anything about having to specify the image version. Once I added the image and tag (v1.32.1) for the cloud controller manager, it worked. I think what happened is that by default it installs 1.27.1 because that's what the helm chart pins, and it's a coincidence that it worked for me before because I was also running RKE2 v1.27.x, and it kept working after the upgrade because 1.28 is still within one minor version of skew. Is this a problem with the docs? Or should it be installing the correct version of the cloud controller manager? Or did I miss something?
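For reference, this is roughly what I ended up adding; the values keys and registry path are from memory, so double-check them against the chart's values.yaml:

```yaml
# /var/lib/rancher/rke2/server/manifests/aws-ccm.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: aws-cloud-controller-manager
  namespace: kube-system
spec:
  chart: aws-cloud-controller-manager
  repo: https://kubernetes.github.io/cloud-provider-aws
  targetNamespace: kube-system
  bootstrap: true
  valuesContent: |-
    # pin the image explicitly instead of taking the chart's v1.27.1 default
    # (swap in your internal mirror if you're air-gapped)
    image:
      repository: registry.k8s.io/provider-aws/cloud-controller-manager
      tag: v1.32.1
    # ...plus the args/nodeSelector values from the docs example, unchanged
```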
c
If you’re using the HelmChart suggested there, without a version specified, it’ll deploy whatever the latest version is in that Helm repo.
https://kubernetes.github.io/cloud-provider-aws/index.yaml suggests that 1.27.1 is the latest available
I don’t see anything newer in the repo. Perhaps AWS has abandoned maintenance of their helm chart? https://github.com/kubernetes/cloud-provider-aws/blob/master/charts/aws-cloud-controller-manager/Chart.yaml
a
Yea, it appears so. That's what I gathered as well. There are newer images, obviously, but the helm chart version hasn't changed in 2 years.
c
you might try just changing the image tag to point at a newer version, via chart values?
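if the chart is already being deployed from a HelmChart manifest you'd rather not touch, RKE2 will also merge in a HelmChartConfig with the same name and namespace. Something like this, assuming the chart exposes the tag under image.tag (worth confirming in its values.yaml):

```yaml
# dropped into /var/lib/rancher/rke2/server/manifests/ next to the HelmChart;
# the name and namespace have to match the existing HelmChart object
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: aws-cloud-controller-manager
  namespace: kube-system
spec:
  valuesContent: |-
    image:
      tag: v1.32.1
```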
a
it's possible the old version would have still worked if I weren't trying to run this in a new air-gapped AWS partition that the old AWS SDK isn't aware of
yea that's what I did, I just added an image tag in the add-on section of my cluster yaml when provisioning the new cluster
it's still odd that it was ignoring both the custom URL in my cloud.conf and the AWS_EC2_ENDPOINT_URL variable that I set in the controller
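in case it helps anyone else, the endpoint override in my cloud.conf is roughly this shape, assuming the ServiceOverride mechanism from the provider's legacy cloud config format is what's in play (the region and hostname below are placeholders, not the real ones):

```ini
[Global]
Zone = <availability-zone>

[ServiceOverride "ec2"]
Service = ec2
Region = <partition-region>
URL = https://ec2.<partition-region>.<partition-domain>
SigningRegion = <partition-region>
```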