# terraform-provider-rancher2
After switching to Rancher monitoring v2 we realized we had forgotten to remove a cluster monitoring setting beforehand, and when we attempt to remove it now we always see the following in the terraform plan for the `rancher2_cluster` resource:
```
      - cluster_monitoring_input {
          - answers = {
              - "prometheus.resources.core.limits.memory" = "2000Mi"
            } -> null
        }
```
Removing this from state doesn't help, since the value is evidently still set somewhere in the cluster itself, but we're unsure where it lives or how to clear it at the cluster level. Any ideas? Presumably something via kubectl?
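
For reference, this is roughly what we had in mind — a minimal sketch, assuming the leftover monitoring answers are stored on the cluster object (`clusters.management.cattle.io`) in the Rancher management ("local") cluster; `<cluster-id>` is a placeholder for the Rancher-internal cluster ID (e.g. `c-xxxxx`):

```sh
# Run against the Rancher management ("local") cluster, not the downstream cluster.
# <cluster-id> is a placeholder for the Rancher-internal cluster ID (e.g. c-xxxxx).

# Dump the cluster object and check whether the stale monitoring answer is present:
kubectl get clusters.management.cattle.io <cluster-id> -o yaml \
  | grep -n "prometheus.resources.core.limits.memory"

# If it shows up, open the object and remove that answers entry by hand:
kubectl edit clusters.management.cattle.io <cluster-id>
```

We haven't verified that this is the right object to touch, so corrections are welcome.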