# terraform-provider-rancher2
If I do a helm get values on rke2-cilium, I see the following:
k8sServiceHost: 127.0.0.1
k8sServicePort: 6443
kubeProxyReplacement: strict
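For reference, a sketch of the full command that produces that output (assuming rke2-cilium is deployed into the kube-system namespace, which is where RKE2 places it by default):

helm get values rke2-cilium -n kube-system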
Hi! Did you get this to work? We have the following code, but the config is not present in cilium-config, and although we disabled kube-proxy, the operator still tries to connect to the default kubernetes service IP instead of the API server IP, so the node never becomes active because nothing can be started.

resource "rancher2_cluster_v2" "managed_cluster" {
  count                 = 1
  name                  = var.name
  kubernetes_version    = var.kubernetes_version
  enable_network_policy = var.network_policy

  rke_config {
    machine_global_config = <<EOF
write-kubeconfig-mode: "644"
selinux: true
cni: "cilium"
disable-kube-proxy: true
disable-cloud-controller: true
etcd-expose-metrics: false
EOF

    chart_values = <<EOF
rke2-cilium:
  cilium:
    kubeProxyReplacement: "true"
    k8sServiceHost: "${var.lb_ip_addr}"
    k8sServicePort: "443"
    ipv6:
      enabled: true
EOF

    machine_selector_config {
      config = {
        cloud-provider-name     = "external"
        profile                 = "cis-1.23"
        protect-kernel-defaults = true
      }
    }
  }
}
Got it, we have to set the values directly at the rke2-cilium level, not nested under cilium:

chart_values = <<EOF
rke2-cilium:
  kubeProxyReplacement: "true"
  k8sServiceHost: "${var.lb_ip_addr}"
  k8sServicePort: "443"
  ipv6:
    enabled: true
  #cilium:
  #  kubeProxyReplacement: "true"
  #  k8sServiceHost: "${var.lb_ip_addr}"
  #  k8sServicePort: "443"
  #  ipv6:
  #    enabled: true
EOF
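Putting the two snippets together, a minimal sketch of the corrected resource (same variable names as above; the k8sServicePort of "443" is an assumption that depends on how the load balancer in front of the API server is configured, so adjust as needed):

resource "rancher2_cluster_v2" "managed_cluster" {
  name               = var.name
  kubernetes_version = var.kubernetes_version

  rke_config {
    # kube-proxy stays disabled; Cilium takes over service handling
    machine_global_config = <<EOF
cni: "cilium"
disable-kube-proxy: true
EOF

    # values go directly under rke2-cilium, not under rke2-cilium.cilium
    chart_values = <<EOF
rke2-cilium:
  kubeProxyReplacement: "true"
  k8sServiceHost: "${var.lb_ip_addr}"
  k8sServicePort: "443"
EOF
  }
}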
Cilium is working now, but if I add this part to the code, the nodes do not come up:

profile                 = "cis-1.23"
protect-kernel-defaults = true

If I install the cluster without it, all nodes become active, and if I then set the CIS profile on the running cluster, it shows many permission and connection-refused errors for about an hour, but the nodes do come up after a while. Any idea why it does not work when I set it from the start?
Forgot to add the default pod security admission template name; the cluster is up and running now...
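For anyone hitting the same thing, a minimal sketch of what I believe the fix looks like on the cluster resource. The argument name default_pod_security_admission_configuration_template_name and the built-in rancher-restricted template are taken from newer provider/Rancher docs, so double-check they exist in your provider version:

resource "rancher2_cluster_v2" "managed_cluster" {
  name               = var.name
  kubernetes_version = var.kubernetes_version

  # Needed when hardening with profile = "cis-1.23"; without a default
  # PSA template the workloads are rejected and the nodes never go active.
  default_pod_security_admission_configuration_template_name = "rancher-restricted"

  rke_config {
    # ... rest of the config as above ...
  }
}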