# general
k
I also tried to provision the specific Cilium configuration as Additional Manifests in Rancher, but the add-on still eventually overwrites this configuration during an upgrade.
c
How are you customizing the config? The correct way to do it would be via HelmChartConfig; any other way is likely to get reset.
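A minimal sketch of such a HelmChartConfig for the bundled rke2-cilium chart would look like this (the bgpControlPlane value is just an example override):
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  # name/namespace must match the Cilium chart that RKE2 ships
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: |-
    bgpControlPlane:
      enabled: true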
k
I create a cluster using the Rancher UI. Once it is created, if I choose 'Edit YAML' I get the following YAML file:
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: test-cluster
  annotations:
    {}
  labels:
    {}
  namespace: fleet-default
spec:
  cloudCredentialSecretName: cattle-global-data:cc-vc742
  defaultPodSecurityAdmissionConfigurationTemplateName: ''
  defaultPodSecurityPolicyTemplateName: ''
  kubernetesVersion: v1.25.12+rke2r1
  localClusterAuthEndpoint:
    caCerts: ''
    enabled: false
    fqdn: ''
  rkeConfig:
    additionalManifest: |-
      apiVersion: helm.cattle.io/v1
      kind: HelmChartConfig
      metadata:
        name: rke2-cilium
        namespace: kube-system
      spec:
        valuesContent: |-
          bgpControlPlane:
            enabled: true
    chartValues:
      rke2-cilium: {}
    etcd:
      disableSnapshots: true
      s3:
      snapshotRetention: 5
      snapshotScheduleCron: 0 */5 * * *
    machineGlobalConfig:
      cluster-cidr: 10.36.0.0/14
      cni: cilium
      disable-kube-proxy: false
      etcd-expose-metrics: false
      service-cidr: 10.40.0.0/14
      profile: null
    machinePools:
      - name: control-plane
        etcdRole: true
        controlPlaneRole: true
        workerRole: true
        hostnamePrefix: ''
        quantity: 1
        unhealthyNodeTimeout: 0m
        machineConfigRef:
          kind: VmwarevsphereConfig
          name: nc-test-cluster-control-plane-7xhhl
        machineOS: linux
        labels: {}
    machineSelectorConfig:
      - config:
          protect-kernel-defaults: false
    registries:
      configs:
        {}
      mirrors:
        {}
    upgradeStrategy:
      controlPlaneConcurrency: '1'
      controlPlaneDrainOptions:
        deleteEmptyDirData: true
        disableEviction: false
        enabled: false
        force: false
        gracePeriod: -1
        ignoreDaemonSets: true
        skipWaitForDeleteTimeoutSeconds: 0
        timeout: 12
      workerConcurrency: '1'
      workerDrainOptions:
        deleteEmptyDirData: true
        disableEviction: false
        enabled: false
        force: false
        gracePeriod: -1
        ignoreDaemonSets: true
        skipWaitForDeleteTimeoutSeconds: 0
        timeout: 120
  machineSelectorConfig:
    - config: {}
In this case, when I look on the actual control-plane node at
/var/lib/rancher/rke2/server/manifests/rancher
I see two HelmChartConfigs for Cilium; one in
addons.yaml
containing the following YAML:
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: |-
    bgpControlPlane:
      enabled: true
And the other in
managed-chart-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  creationTimestamp: null
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: '{"global":{"cattle":{"clusterId":"c-m-9x7v9dg8"}}}'
Taking a look at the Cilium logs, I see that the flag --enabled-bgp=false is set. The HelmChartConfig actually applied in the cluster is the one from
managed-chart-config.yaml
When I update the original Cluster YAML, remove the additionalManifest, and instead add the config to the chartValues:
bgpControlPlane:
  enabled: true
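For clarity, in the Cluster spec that snippet sits nested under the chart name in rkeConfig.chartValues, roughly like this (the same rke2-cilium key that was empty in the original YAML):
rkeConfig:
  chartValues:
    # values under the chart name get rendered into a HelmChartConfig for rke2-cilium
    rke2-cilium:
      bgpControlPlane:
        enabled: true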
In this case the file
addons.yaml
is empty, and
managed-chart-config.yaml
contains the actual Cilium config. The configuration is then applied to the cluster (once Cilium is restarted).
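Presumably Rancher merges those chart values with the clusterId it injects, so the generated managed-chart-config.yaml would end up looking roughly like this (the exact rendering is an assumption; the clusterId is the one shown above):
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: |-
    # assumed merge of chartValues with the Rancher-injected global values
    bgpControlPlane:
      enabled: true
    global:
      cattle:
        clusterId: c-m-9x7v9dg8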
Somehow I can't reproduce it any more. I just tested it out with a fresh cluster and the same setup. I guess I made a mistake somewhere...