# terraform-provider-rancher2

silly-jordan-81965

06/03/2022, 8:53 AM
Hi guys, I'm trying to add chart_values to rke2-cilium, but the values don't get applied. I'm doing the following:
```hcl
machine_global_config = <<EOF
    cni: "cilium"
    disable:
    - rke2-ingress-nginx
    resolv-conf: "/run/systemd/resolve/resolv.conf"
    EOF
chart_values          = <<EOF
    rke2-cilium:
      k8sServiceHost: 127.0.0.1
      k8sServicePort: 6443
      kubeProxyReplacement: strict
    EOF
```
I don't see the values reflected in the cilium-config ConfigMap and the cluster doesn't finish provisioning. What am I missing? Should the chart_values be added to machine_selector_config instead?
If I do a `helm get values` on rke2-cilium I see the following:
```yaml
k8sServiceHost: 127.0.0.1
k8sServicePort: 6443
kubeProxyReplacement: strict
```
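One detail worth noting in the snippet above, although the thread never records a resolution for this first question: with a plain `<<EOF` heredoc, Terraform preserves the leading whitespace, so the YAML is handed over indented by four spaces. Terraform's indented heredoc form (`<<-EOF`) strips the common leading whitespace instead. A minimal sketch of the same two attributes in that form; whether this changes the ConfigMap behaviour is not confirmed here:
```hcl
rke_config {
  # <<-EOF trims the shared indentation, so the YAML starts at column zero.
  machine_global_config = <<-EOF
    cni: "cilium"
    disable:
    - rke2-ingress-nginx
    resolv-conf: "/run/systemd/resolve/resolv.conf"
  EOF

  # The top-level key names the packaged chart; its values are nested below it.
  chart_values = <<-EOF
    rke2-cilium:
      k8sServiceHost: 127.0.0.1
      k8sServicePort: 6443
      kubeProxyReplacement: strict
  EOF
}
```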

adorable-wolf-32316

09/06/2023, 7:00 AM
Hi! Did you get this to work? We have the following code, but the config is not present in cilium-config, and since we disabled kube-proxy, the operator still tries to connect to the default service IP instead of the api-server IP; as a result nothing can be started and the node never becomes active.
```hcl
resource "rancher2_cluster_v2" "managed_cluster" {
  count                 = 1
  name                  = var.name
  kubernetes_version    = var.kubernetes_version
  enable_network_policy = var.network_policy

  rke_config {
    machine_global_config = <<EOF
write-kubeconfig-mode: "644"
selinux: true
cni: "cilium"
disable-kube-proxy: true
disable-cloud-controller: true
etcd-expose-metrics: false
EOF

    chart_values = <<EOF
rke2-cilium:
  cilium:
    kubeProxyReplacement: "true"
    k8sServiceHost: "${var.lb_ip_addr}"
    k8sServicePort: "443"
    ipv6:
      enabled: true
EOF

    machine_selector_config {
      config = {
        cloud-provider-name     = "external"
        profile                 = "cis-1.23"
        protect-kernel-defaults = true
      }
    }
  }
}
```
Got it: the values have to be set directly at the rke2-cilium level, not under a cilium key:
```hcl
chart_values = <<EOF
rke2-cilium:
  kubeProxyReplacement: "true"
  k8sServiceHost: "${var.lb_ip_addr}"
  k8sServicePort: "443"
  ipv6:
    enabled: true
  #cilium:
  #  kubeProxyReplacement: "true"
  #  k8sServiceHost: "${var.lb_ip_addr}"
  #  k8sServicePort: "443"
  #  ipv6:
  #    enabled: true
EOF
```
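For anyone hitting the same thing: the top-level key in chart_values selects which packaged chart the values are handed to, so everything nested under rke2-cilium: is passed to that chart as-is; an extra cilium: level apparently just becomes a literal values key named cilium, which the chart ignores. A minimal sketch of the shape, with the two levels marked as YAML comments; the result can typically be checked on the downstream cluster with `helm get values rke2-cilium -n kube-system`, where RKE2's packaged charts live:
```hcl
chart_values = <<EOF
rke2-cilium:                   # level 1: the chart the values belong to
  kubeProxyReplacement: "true" # level 2: values passed to that chart as-is
EOF
```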
Cilium is working now, but if I add this part to the code, the nodes do not come up:
```hcl
profile                 = "cis-1.23"
protect-kernel-defaults = true
```
If I install the cluster without it, all nodes become active; and if I then set the CIS profile on the running cluster, it shows many permission errors and connection refused for about an hour, but the nodes do come up after a while. Any idea why it does not work when I add it from the start?
Forgot to add the default pod security admission template name; the cluster is up and running now...
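For completeness: in recent versions of the rancher2 provider that setting appears to correspond to the `default_pod_security_admission_configuration_template_name` argument on `rancher2_cluster_v2`. A minimal sketch, assuming the built-in `rancher-restricted` template fits the workloads (the right template name depends on your environment):
```hcl
resource "rancher2_cluster_v2" "managed_cluster" {
  name               = var.name
  kubernetes_version = var.kubernetes_version

  # Per the thread, nodes stayed stuck under the CIS profile until a
  # default pod security admission template was set on the cluster.
  default_pod_security_admission_configuration_template_name = "rancher-restricted"

  rke_config {
    machine_selector_config {
      config = {
        cloud-provider-name     = "external"
        profile                 = "cis-1.23"
        protect-kernel-defaults = true
      }
    }
  }
}
```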