swift-byte-32159
06/14/2022, 2:54 PM
1. cluster_v2 machine pools has a `labels` field that should be for node labels, but for some reason the provider logic transforms the map of labels into `machineDeploymentLabels`. I think this might be the function responsible: https://github.com/rancher/terraform-provider-rancher2/blob/master/rancher2/structure_cluster_v2_rke_config_machine_pool.go#L66
2. I'm trying to pass a map of `kubelet-arg` values into cluster_v2 `machine_selector_configs`, but it's also undocumented in the provider docs. I'm forced to pass an HCL map to the `config` key, but the `kubelet-arg` subkey should be an array while the provider expects only a string. I've tried properly formatted HCL and even hacked around it with multiline strings and formatted single-line YAML arrays, but no luck so far.
3. The last two issues make me wonder why I'm passing rke_config values in as HCL maps in the first place when Rancher/RKE2 expects them as YAML. I think we could keep closer parity between the Rancher/RKE2 configs and the Terraform plans if we just passed those configs in as YAML, which would also make the codebase more maintainable. Thoughts?

some-winter-95391
06/22/2022, 4:26 PM
> (1) cluster_v2 machine pools has a `labels` field that should be for node labels but for some reason the provider logic transforms a map of `labels` into `machineDeploymentLabels`
I noticed (1) as well. There is an issue for it on GitHub:
• https://github.com/rancher/terraform-provider-rancher2/issues/949
And I created a PR to "fix" it, but wasn't quite sure what the intended behaviour should be.
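Not part of the original thread: a minimal sketch of the machine-pool block issue (1) concerns, assuming the current `rancher2_cluster_v2` schema. The resource name, version string, and label key are hypothetical.

```hcl
# Hypothetical sketch of issue (1): the labels map below reads as if it sets
# node labels on pool members, but per the thread the provider's flattener
# stores it as machineDeploymentLabels on the MachineDeployment instead.
resource "rancher2_cluster_v2" "example" {
  name               = "example-cluster"  # hypothetical
  kubernetes_version = "v1.24.4+rke2r1"   # hypothetical

  rke_config {
    machine_pools {
      name = "worker"
      labels = {
        "example.com/tier" = "worker"     # intended as a node label
      }
      # ... machine_config, quantity, etc. omitted
    }
  }
}
```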
swift-byte-32159
06/22/2022, 4:54 PM
> (2) I'm trying to pass in a map of `kubelet-arg` values into cluster_v2 machine_selector_configs but it's also undocumented in the provider docs.
Instead of passing the values into `machine_selector_config`, we passed them into `machine_global_config`, so the relevant block in the Terraform plan looks like this when I set a static CPU policy on the cluster:
machine_global_config = <<-EOF
kubelet-arg:
- cpu-manager-policy=static
- kube-reserved=cpu=500m,memory=1Gi,ephemeral-storage=1Gi
- system-reserved=cpu=300m,memory=500Mi,ephemeral-storage=1Gi
- eviction-hard=memory.available<500Mi,nodefs.available<10%
EOF
Verified that the kubelet started with the correct args, and I was able to create `Guaranteed` QoS workloads with this method.
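Not from the thread: one way to sanity-check the static CPU policy above is a Guaranteed-QoS workload, sketched here with the hashicorp/kubernetes provider (resource name and image are hypothetical, and the provider is assumed to be pointed at the cluster). A pod is Guaranteed only when every container's requests equal its limits, and the static CPU manager grants exclusive cores only for integer CPU requests.

```hcl
# Hypothetical Guaranteed-QoS pod: requests == limits for every container,
# with an integer CPU value so the static CPU manager can pin exclusive cores.
resource "kubernetes_pod_v1" "guaranteed_demo" {
  metadata {
    name = "guaranteed-demo"
  }

  spec {
    container {
      name  = "app"
      image = "nginx:1.23"  # hypothetical image

      resources {
        limits = {
          cpu    = "1"
          memory = "256Mi"
        }
        requests = {
          cpu    = "1"
          memory = "256Mi"
        }
      }
    }
  }
}
```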