# terraform-provider-rancher2
I have an active cluster using Rancher 2.7.6, RKE2 Kubernetes version v1.26.8+rke2r1, provider Custom. The entire infrastructure runs on AWS EC2. I am having problems creating Persistent Volume Claims: each PVC stays pending with the following event:

```
waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
```

Following some documentation, I am trying to enable the AWS cloud provider on the cluster. The infrastructure is all managed with Terraform, using this resource:

```hcl
resource "rancher2_cluster_v2" "cluster" {
  provider           = rancher2.admin
  name               = var.cluster_name
  kubernetes_version = var.cluster_kubernetes_version

  rke_config {
    machine_selector_config {
      config = {
        cloud-provider-name = "aws"
      }
    }

    etcd {
      snapshot_schedule_cron = "0 */5 * * *"
      snapshot_retention     = 20

      s3_config {
        bucket   = var.bucket_etcd_bkp_name
        endpoint = "s3.amazonaws.com"
        folder   = "${var.cluster_name}-etcd-backup"
        region   = data.aws_s3_bucket.selected.region
      }
    }
  }
}
```

When I create the cluster, the cloud provider is set to `aws`, but the cluster never becomes active: it stays waiting for Calico, and Rancher logs the following error:

```
2023/10/23 15:57:02 [ERROR] error syncing '_all_': handler user-controllers-controller: userControllersController: failed to set peers for key _all_: failed to start user controllers for cluster c-m-zk9bqklq: ClusterUnavailable 503: cluster not found, requeuing
```

I don't know if I am missing some configuration. I have already entered the credentials and attached the necessary permissions to the instance profile of the instances. Any ideas that might help me?
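For context on the PVC error: as far as I understand, RKE2 does not bundle the AWS EBS CSI driver, so the `ebs.csi.aws.com` provisioner only exists in the cluster if it is installed separately. Below is a rough sketch of how I imagine installing it with the same rancher2 provider; the chart repo URL, chart name, and the `storageClasses` values key are my assumptions based on the upstream kubernetes-sigs chart, not something I have verified against this setup:

```hcl
# Sketch, not verified: register the upstream Helm repo on the cluster
# and install the AWS EBS CSI driver, which provides the
# "ebs.csi.aws.com" provisioner that my PVCs are waiting for.

resource "rancher2_catalog_v2" "aws_ebs_csi_driver" {
  provider   = rancher2.admin
  cluster_id = rancher2_cluster_v2.cluster.cluster_v1_id
  name       = "aws-ebs-csi-driver"
  url        = "https://kubernetes-sigs.github.io/aws-ebs-csi-driver"
}

resource "rancher2_app_v2" "aws_ebs_csi_driver" {
  provider   = rancher2.admin
  cluster_id = rancher2_cluster_v2.cluster.cluster_v1_id
  name       = "aws-ebs-csi-driver"
  namespace  = "kube-system"
  repo_name  = rancher2_catalog_v2.aws_ebs_csi_driver.name
  chart_name = "aws-ebs-csi-driver"

  # Ask the chart to create a StorageClass backed by gp3 volumes, so
  # PVCs referencing it get provisioned by ebs.csi.aws.com. The
  # "storageClasses" key is assumed from the upstream chart's values.
  values = yamlencode({
    storageClasses = [{
      name              = "ebs-gp3"
      volumeBindingMode = "WaitForFirstConsumer"
      parameters        = { type = "gp3" }
    }]
  })
}
```

If that is the missing piece, the driver itself would presumably also need the EC2 volume permissions on the node instance profile, which I believe I already have in place.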