# rke2
c
s
That process works as expected, but through the Rancher MCM I get different results
c
if you’re doing it via Rancher, you should put the chart values in the UI where it prompts you for CNI config, instead of providing your own HelmChartConfig for rke2-cilium. Otherwise you’ll end up with conflicts between the HelmChartConfig you deploy and the one Rancher provides.
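For context, this is roughly what a standalone HelmChartConfig for rke2-cilium looks like on a plain (non-Rancher-managed) rke2 server — the approach being warned against here. A sketch only; the values shown are illustrative:

```yaml
# /var/lib/rancher/rke2/server/manifests/rke2-cilium-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: |-
    kubeProxyReplacement: true
    k8sServiceHost: 127.0.0.1
    k8sServicePort: 6443
```

On a Rancher-provisioned cluster, Rancher renders its own chart values from the UI, which is why a hand-made HelmChartConfig like this ends up fighting with it.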
s
I see, do you mean in the add-on config section?
c
specifically in the Cilium Configuration section, yes. I said CNI config because it prompts you for config for whatever CNI you’ve chosen, not just Cilium
s
Ah yes, that makes sense
b
Hey, we did actually. It got quite challenging, with various Cilium settings that needed to be set up — they don't come out of the box as standard. The reason it was somewhat troublesome was that we also host several services internally, outside k8s, but on the same VLAN. Cilium can't quite tell the difference between inside and outside k8s. So what we did was change the k8s internal CIDR blocks to resolve that.
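For anyone wondering where those CIDRs live: on rke2 they are set in the server config at cluster creation time (changing them on an existing cluster is not a simple edit). A sketch — the ranges below are made up, pick ones that don't overlap your VLAN:

```yaml
# /etc/rancher/rke2/config.yaml (rke2 server nodes)
# Illustrative ranges only; choose blocks that don't collide
# with services hosted outside k8s on the same VLAN.
cluster-cidr: 10.128.0.0/16   # pod network
service-cidr: 10.129.0.0/16   # ClusterIP services
```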
s
Interesting
We've found that options added to the "add-on" config don't seem to "stick" during install. I'm curious if you used Rancher to deploy a downstream cluster with cilium in this configuration @bland-appointment-12982
b
No, I actually added it in the /manifests folder for the initial cluster setup, along with a ConfigMap Cilium uses for its settings. It's then applied while bootstrapping the cluster. This approach ensured that Cilium was always available when a node/cluster was restarted.
Btw, I'll always recommend this method, since no manual intervention is necessary
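The mechanism being described is rke2's auto-deploying manifests directory: any Kubernetes manifest dropped under /var/lib/rancher/rke2/server/manifests is applied automatically when the server bootstraps. A minimal sketch of the ConfigMap approach — the name and setting here are illustrative, not the exact manifest used above:

```yaml
# /var/lib/rancher/rke2/server/manifests/cilium-settings.yaml
# Applied automatically at bootstrap, so the CNI config is in
# place before any manual intervention is possible.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-config        # ConfigMap the Cilium agent reads
  namespace: kube-system
data:
  enable-ipv6: "true"        # illustrative setting only
```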
l
This is the config that worked for us. After multiple failed attempts one of my colleagues figured it out. The key point is that when you create a cluster via the Rancher web UI with Cilium selected and IPv6 enabled, Rancher somehow puts that under
`rke2-cilium.cilium.ipv6: enabled`
.. but all the config bits Cilium needs to replace kube-proxy have to go directly under
`rke2-cilium`
as shown below
```yaml
kubernetesVersion: v1.28.12+rke2r1
localClusterAuthEndpoint:
  caCerts: ''
  enabled: false
  fqdn: ''
rkeConfig:
  chartValues:
    rke2-cilium:
      kubeProxyReplacement: true
      k8sServiceHost: 127.0.0.1
      k8sServicePort: 6443
      ipv6:
        enabled: true
  etcd:
    disableSnapshots: false
    s3:
#      bucket: string
#      cloudCredentialName: string
#      endpoint: string
#      endpointCA: string
#      folder: string
#      region: string
#      skipSSLVerify: boolean
    snapshotRetention: 5
    snapshotScheduleCron: 0 */5 * * *
  machineGlobalConfig:
    cluster-cidr: 10.42.0.0/16,fd00:cafe:42::/56
    cni: cilium
    disable: []
    disable-kube-proxy: true
    etcd-expose-metrics: false
    service-cidr: 10.43.0.0/16,fd00:cafe:43::/112
    profile: null
```