# rke2
c
this has nothing to do with the helm repo CA. This is the cloud controller itself saying that it doesn’t trust the cert on the EC2 service endpoint in your environment.
Are you using a custom ec2 service endpoint? Why did you redact the ec2 service URL?
a
Yes it's a custom ec2 service endpoint. The CA for that endpoint is in my CA chain in our ec2 instances
Step 5 here says to verify the config by looking for the AWS labels with `kubectl get nodes --show-labels`. I do not see the same AWS labels that I see in our non-air-gapped deployment, but I'm not sure if those get added only after the cloud controller is working, since this part just worked in the non-air-gapped environment
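For reference, a quick way to filter the label listing for the provider-applied labels. The label keys below are the standard well-known ones the AWS provider sets on initialized nodes; the node name and zone in the example are illustrative, and exact output depends on the cluster:

```shell
# Nodes initialized by the AWS cloud controller normally carry the
# well-known topology/instance-type labels; grep for them directly.
kubectl get nodes --show-labels \
  | grep -E 'topology\.kubernetes\.io/(zone|region)|node\.kubernetes\.io/instance-type'
```

If nothing matches, the CCM has not yet initialized the nodes, which is consistent with the controller itself failing.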
does the cloud controller pod use the CA certs on the local machine, or do they need to be mounted in the pod? If the latter I'd imagine I'd need to add something to the `valuesContent` section here
```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: aws-cloud-controller-manager
  namespace: kube-system
spec:
  chart: aws-cloud-controller-manager
  repo: https://kubernetes.github.io/cloud-provider-aws
  targetNamespace: kube-system
  bootstrap: true
  valuesContent: |-
    hostNetworking: true
    nodeSelector:
      node-role.kubernetes.io/control-plane: "true"
    args:
      - --configure-cloud-routes=false
      - --v=5
      - --cloud-provider=aws
```
c
It uses the certs from the image, which are just a generic public CA bundle. I suspect that you’re going to need to mount the govcloud CAs into the pod, as they don’t use commodity CA certs for those endpoints as far as I know.
a
Yea that's definitely the case, I know exactly which CA I need to add, just trying to figure out where/how to add it. Unless I'm looking in the wrong place the documentation for that isn't great
c
what chart did you deploy for the AWS CCM?
a
we have it mirrored, looking at the mirror I think the latest chart we have is 0.0.8 and that's from August of 2023 so that might need to be updated. But yea I was just going down the exact path you recommended, I can just reuse the configmap I loaded for the helm CA
wait no I think that's the latest, so we're probably fine with the chart
c
You might need to add `args` to point it at the ca file as well
a
yea probably `--client-ca-file`. I assume it's safe to mount it to /etc/ssl/certs/ca-bundle.crt, just don't want to overwrite anything
ugh this is driving me crazy. I'm positive I have the mounts set up correctly but the cloud-controller-manager keeps going into a crash loop. The logs show the usage statement and then `unable to execute command: unable to load client CA provider: open "/ca.crt": no such file or directory`. I have this in my cluster yaml under AddOns
```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: aws-cloud-controller-manager
  namespace: kube-system
spec:
  chart: aws-cloud-controller-manager
  repo: https://kubernetes.github.io/cloud-provider-aws
  repoCAConfigMap:
    name: helm-repo-ca
  targetNamespace: kube-system
  bootstrap: true
  valuesContent: |-
    hostNetworking: true
    nodeSelector:
      node-role.kubernetes.io/control-plane: "true"
    extraVolumes:
      - name: ca-vol
        configMap:
          name: helm-repo-ca
      - name: dir0
        hostPath:
          path: /etc/ssl/certs/ca-bundle.crt
    extraVolumeMounts:
      - name: ca-vol
        mountPath: /ca.crt
        subPath: ca.crt
      - name: dir0
        mountPath: /etc/ssl/certs/ca-bundle.crt
    args:
      - --configure-cloud-routes=false
      - --v=5
      - --cloud-provider=aws
      - --client-ca-file="/ca.crt"
```
I know the configmap is there because 1) it shows up under `kubectl describe configmap helm-repo-ca` and 2) helm is using it. Just to try something different I added the hostPath mount for the ca certs on my local filesystem and changed the flag to `--client-ca-file="/etc/ssl/certs/ca-bundle.crt"` and I still got a no such file or directory error. The pod dies pretty quickly so I don't know how to actually look in the pod and confirm what's actually mounted. Describing the aws-cloud-controller-manager pod shows
```
Mounts:
  /ca.crt from ca-vol (rw,path="ca.crt")
  /etc/ssl/certs/ca-bundle.crt from dir0 (rw)
```
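For a pod that dies too fast for `kubectl exec`, one way to look inside is `kubectl debug --copy-to`, which clones the pod with an overridden command so it stays alive (the volume mounts come along with the copy). The pod and container names below are illustrative, and this assumes the image contains a `sleep` binary; if it doesn't, the mounts can still be read from `kubectl get pod -o yaml`:

```shell
# Clone the crash-looping pod, replacing its entrypoint with sleep so
# the container stays up long enough to inspect the filesystem.
kubectl debug -n kube-system -it pod/aws-cloud-controller-manager-abcde \
  --copy-to=ccm-debug --container=aws-cloud-controller-manager -- sleep 1d
# In another terminal, check what is actually present at the mount points:
kubectl exec -n kube-system ccm-debug -- ls -l /ca.crt /etc/ssl/certs/ca-bundle.crt
```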
c
you might try not mounting them at the root? I dunno that seems weird. Pick some other arbitrary path.
a
I did that originally, I mounted it in `/etc/ssl/certs/ca.crt`. I was just trying to make it as simple as possible while troubleshooting
c
your configmap has a `ca.crt` key in it?
a
yea that's the only thing in it, ca.crt is the key and pem bundle is the value
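One way to double-check both the key name and the PEM contents from the CLI, using the ConfigMap name and namespace from this thread (note that `openssl x509` only reads the first certificate of a bundle):

```shell
# Dump the ca.crt key (the dot in the key name must be escaped in
# jsonpath) and sanity-check that the value parses as a certificate.
kubectl get configmap helm-repo-ca -n kube-system \
  -o jsonpath='{.data.ca\.crt}' \
  | openssl x509 -noout -subject -enddate
```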
c
what does the resulting daemonset spec look like?
a
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-system
  name: helm-repo-ca
data:
  ca.crt: |-
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```
It has multiple certs in there but I'm transcribing manually across networks lol. I know that works because helm is using it, without that helm was throwing the CA error too
nothing stands out when I describe the daemonset. It shows the args I have above and then the mounts
```
Mounts:
  /ca.crt from ca-vol (rw,path="ca.crt")
  /etc/ssl/certs/ca-bundle.crt from dir0 (rw)

Volumes:
  ca-vol:
    Type: ConfigMap
    Name: helm-repo-ca
    Optional: false
  dir0:
    Type: HostPath (bare host directory volume)
    Path: /etc/ssl/certs/ca-bundle.crt
    HostPathType:
  Priority Class Name:
```

Then some events down here, just delete and create of pods
`kubectl get ds -A` shows 1 desired, 1 current, 0 ready, 1 up-to-date, 0 available
c
I mean the ds spec. I was wondering if the chart values are getting mangled somehow and not mounting the volumes you want
a
dumb question, how do I check that?
c
`kubectl get ds -o yaml -n <namespace> <daemonset name>`
a
well FML, it didn't like quotes around the path with `--client-ca-file="/ca.crt"`. Took the quotes off and it worked
that's my cue to go home, thanks for the help
c
haha