# general
q
Sorry, I don't get it, can you explain it in a different way? I don't know what you're referring to when you say "load balancer created by the user-addon".
m
My problem is that I use the Terraform rancher2 provider to create downstream RKE Rancher clusters on Azure. When I create the downstream RKE cluster with a user-addon (a manifest that creates a Service of type LoadBalancer pointing to the ingress controller and is run as a user-addon job at cluster creation time), the resulting cloud Load Balancer has not been provisioned by Terraform, and that prevents me from running terraform destroy successfully because the Load Balancer stays in the resource group and uses the subnet created by Terraform.
It appears that Rancher does not remove the Load Balancer it created on Azure when it destroys the cluster, which means the Load Balancer remains in the resource group, and because it uses the subnet created by Terraform, it prevents Terraform from destroying the subnet and vnet.
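For context, the setup in question looks roughly like this (a minimal sketch, not the actual configuration: the cluster name, namespace, Service name and selector below are placeholders):
```
# Sketch of a downstream RKE cluster whose user-addon creates a
# LoadBalancer-type Service in front of the ingress controller.
# All names and selectors are placeholders.
resource "rancher2_cluster" "downstream" {
  name = "downstream-rke"

  rke_config {
    # ... node pools, Azure cloud provider config, etc. ...

    # User-addon: applied by Rancher as a job at cluster creation time.
    # The Azure cloud provider then provisions a cloud Load Balancer for
    # this Service, outside of Terraform's state.
    addons = <<-EOF
      apiVersion: v1
      kind: Service
      metadata:
        name: ingress-lb
        namespace: ingress-nginx
      spec:
        type: LoadBalancer
        selector:
          app.kubernetes.io/name: ingress-nginx
        ports:
          - name: http
            port: 80
            targetPort: 80
          - name: https
            port: 443
            targetPort: 443
    EOF
  }
}
```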
q
Ok, understood. I guess this Reddit post was made by you, wasn't it? https://www.reddit.com/r/rancher/comments/xqje3e/cloud_load_balancer_created_by_useraddon_not/
And your code is probably similar to this: https://github.com/rancher/terraform-provider-rancher2/issues/972, am I right?
Using the above example as a reference ... would it work for you to add some code that manually removes the Kubernetes LoadBalancer-type service created by the user-addon when destroying resource "rancher2_node_template" "control_plane"? (using a provisioner with when = destroy, something like the sketch below)
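Along these lines, for example (only a sketch: the kubeconfig path, namespace and Service name are assumptions you would need to adapt, and keep in mind that a when = destroy provisioner can only reference self):
```
resource "rancher2_node_template" "control_plane" {
  # ... existing node template arguments ...

  # Destroy-time provisioner: delete the LoadBalancer-type Service first so
  # the Azure cloud provider removes the cloud Load Balancer before Terraform
  # tears down the subnet and vnet.
  # Placeholders: ./kubeconfig, namespace ingress-nginx, service ingress-lb.
  provisioner "local-exec" {
    when    = destroy
    command = "kubectl --kubeconfig ./kubeconfig -n ingress-nginx delete service ingress-lb --ignore-not-found"
  }
}
```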
Another possible option I see is that you create the service not with the user-addon but with a kubernetes_service resource, using Terraform's Kubernetes provider.
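Roughly like this (again a sketch; the namespace, selector and ports are assumptions that have to match your ingress controller deployment):
```
resource "kubernetes_service" "ingress_lb" {
  metadata {
    name      = "ingress-lb"
    namespace = "ingress-nginx"
  }

  spec {
    type = "LoadBalancer"

    # Placeholder selector; must match the ingress controller pods.
    selector = {
      "app.kubernetes.io/name" = "ingress-nginx"
    }

    port {
      name        = "http"
      port        = 80
      target_port = 80
    }

    port {
      name        = "https"
      port        = 443
      target_port = 443
    }
  }
}
```
Since Terraform then owns the Service, terraform destroy removes it before the subnet, and the Azure cloud provider cleans up the cloud Load Balancer on its own.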
m
Yes, but I resolved it by using a NAT Gateway. I then had to add an Application Gateway with WAF settings and use an internal LB instead of a public one. I could indeed add a script to remove the Load Balancer, but I really don't like that solution and would avoid it if possible. I also started thinking of using the K8s provider, but when the provider is used in the same module where the kubeconfig is generated from the cluster, it sometimes appears to reset the provider config on consecutive Terraform runs, which is why I wanted to avoid that as well.
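For reference, the internal-LB part is mostly an annotation on the Service; a rough sketch assuming the kubernetes_service approach above (all names and the selector are placeholders):
```
resource "kubernetes_service" "ingress_lb_internal" {
  metadata {
    name      = "ingress-lb"
    namespace = "ingress-nginx"

    # Ask the Azure cloud provider for an internal Load Balancer
    # instead of a public one.
    annotations = {
      "service.beta.kubernetes.io/azure-load-balancer-internal" = "true"
    }
  }

  spec {
    type = "LoadBalancer"

    selector = {
      "app.kubernetes.io/name" = "ingress-nginx"
    }

    port {
      name        = "https"
      port        = 443
      target_port = 443
    }
  }
}
```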