many-terabyte-68707
05/12/2023, 7:23 AM
curved-army-69172
05/24/2023, 7:48 AM
limited-eye-27484
06/01/2023, 6:37 PM
Error: [ERROR] Creating Project: Cluster ID c-dzn98 is not active
with module.test-ci.rancher2_project.project,
on ../../../providers/rancher/modules/project/main.tf line 1, in resource "rancher2_project" "project":
1: resource "rancher2_project" "project" {
The cluster itself is alive and reported as healthy by Rancher. Any ideas what I can do to fix this cluster ID not being detected as active from Terraform?
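One pattern that may address this (a sketch only, not verified against this setup): gate the project on a rancher2_cluster_sync resource so Terraform waits for the cluster to report active before creating the project. The resource names below are placeholders.
resource "rancher2_cluster_sync" "wait_active" {
  # Hypothetical reference; point this at whatever resource or variable holds the cluster ID.
  cluster_id = rancher2_cluster.example.id
}

resource "rancher2_project" "project" {
  name = "example"
  # Referencing the sync resource makes the project implicitly wait for the cluster to be active.
  cluster_id = rancher2_cluster_sync.wait_active.id
}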
kind-air-74358
06/13/2023, 2:06 PM
My rancher2_cluster_v2 resource has several rke_config.machine_pools blocks, each containing a machine_config block. There I reference the kind and name of a just-created rancher2_machine_config_v2 resource. As this machine_config_v2 doesn’t contain any pool-specific configuration, I am re-using the same machine_config_v2 for another machine_pool. See the example below.
resource "rancher2_machine_config_v2" "main" {
generate_name = "gen1-standard-4"
fleet_namespace = "fleet-default"
vsphere_config {
... # some variables
}
}
resource "rancher2_cluster_v2" "main" {
name = var.name
fleet_namespace = "fleet-default"
...
rke_config {
machine_global_config = ...
machine_pools {
name = "control-plane-01"
cloud_credential_secret_name = ...
control_plane_role = true
etcd_role = true
quantity = 3
machine_config {
kind = rancher2_machine_config_v2.main.kind
name = rancher2_machine_config_v2.main.name
}
}
machine_pools {
name = "workers-01"
cloud_credential_secret_name = ...
worker_role = true
quantity = 6
machine_config {
kind = rancher2_machine_config_v2.main.kind
name = rancher2_machine_config_v2.main.name
}
}
...
}
When I then use the Rancher UI to increase the number of machines (using the + / - buttons) in the workers-01 pool, Rancher actually increases the number of instances in the control-plane-01 pool.
Is this due to using the same machine_config_v2 resource for both pools, or is this actually a bug in the Rancher interface?
I know that when creating a cluster through the Rancher UI, a separate machine_config_v2 is created for each machine_pool, but I don’t see any use for that when using Terraform to just create a re-usable machine_config.
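For comparison, a sketch of the one-config-per-pool layout the UI produces (untested, and whether sharing a single config actually causes the scaling mix-up is not confirmed here); pool names mirror the example above:
resource "rancher2_machine_config_v2" "pool" {
  # One machine config per pool, mirroring what the Rancher UI does.
  for_each        = toset(["control-plane-01", "workers-01"])
  generate_name   = "gen1-standard-4-${each.key}"
  fleet_namespace = "fleet-default"
  vsphere_config {
    # ... same variables as in the shared config above
  }
}
Each machine_pools block would then reference rancher2_machine_config_v2.pool["control-plane-01"] or rancher2_machine_config_v2.pool["workers-01"] for its kind and name.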
few-park-83463
06/27/2023, 2:40 PM
few-park-83463
06/27/2023, 2:40 PM
╷
│ Error: Too many etcd blocks
│
│ on <http://main.tf|main.tf> line 133, in resource "rancher2_cluster_v2" "cluster_on_vsphere":
│ 133: content {
│
│ No more than 1 "etcd" blocks are allowed
╵
few-park-83463
06/27/2023, 2:40 PM
dynamic "etcd" {
  for_each = var.etcd_backup
  content {
    snapshot_schedule_cron = etcd.snapshot_schedule_cron
    snapshot_retention     = etcd.snapshot_retention
    s3_config {
      bucket                = etcd.value.bucket
      endpoint              = etcd.value.endpoint
      cloud_credential_name = data.rancher2_cloud_credential.s3_cred.id
      folder                = etcd.value.folder
      skip_ssl_verify       = true
    }
  }
}
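The error usually means the dynamic block expanded into more than one etcd block. A sketch of one way to keep it to at most one, assuming var.etcd_backup is a single object (or null) rather than a list, and reading the iterator values via etcd.value:
dynamic "etcd" {
  # Wrap the single object in a one-element list (or an empty list when unset)
  # so at most one etcd block is generated.
  for_each = var.etcd_backup == null ? [] : [var.etcd_backup]
  content {
    snapshot_schedule_cron = etcd.value.snapshot_schedule_cron
    snapshot_retention     = etcd.value.snapshot_retention
    s3_config {
      bucket                = etcd.value.bucket
      endpoint              = etcd.value.endpoint
      cloud_credential_name = data.rancher2_cloud_credential.s3_cred.id
      folder                = etcd.value.folder
      skip_ssl_verify       = true
    }
  }
}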
brainy-morning-84675
07/05/2023, 3:08 PM
kind-air-74358
07/10/2023, 8:02 PM
You could write the kube_config to some local file on the system and use that one with the kubernetes provider, but that feels a bit over-engineered.
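An alternative sketch that skips the intermediate file by decoding the attribute directly (untested; it assumes the generated kubeconfig uses token auth and embeds a CA, and that the cluster already exists when the kubernetes provider is configured):
locals {
  # Hypothetical: rancher2_cluster_v2.main is the downstream cluster resource.
  kubeconfig = yamldecode(rancher2_cluster_v2.main.kube_config)
}

provider "kubernetes" {
  host                   = local.kubeconfig.clusters[0].cluster.server
  token                  = local.kubeconfig.users[0].user.token
  cluster_ca_certificate = base64decode(local.kubeconfig.clusters[0].cluster["certificate-authority-data"])
}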
ancient-car-21816
07/11/2023, 2:26 PM
millions-ocean-48249
07/21/2023, 2:27 PM
vapp_property = [
"guestinfo.interface.0.ip.0.address:ip:vswitch-pg",
"guestinfo.interface.0.ip.0.netmask:$${netmask:vswitch-pg}",
"guestinfo.interface.0.route.0.gateway:$${gateway:vswitch-pg}",
"guestinfo.dns.servers:$${dns:vswitch-pg}"
]
I've been stuck on this and can't really find any concrete examples. Any help is greatly appreciated.
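For reference, $${ is Terraform's escape for a literal ${, so the entries above reach vSphere as ${netmask:vswitch-pg} and so on, to be resolved by the OVF environment rather than by Terraform. A sketch with all four entries in that OVF-property form (whether the address entry should also use it depends on your template):
vapp_property = [
  "guestinfo.interface.0.ip.0.address:$${ip:vswitch-pg}",
  "guestinfo.interface.0.ip.0.netmask:$${netmask:vswitch-pg}",
  "guestinfo.interface.0.route.0.gateway:$${gateway:vswitch-pg}",
  "guestinfo.dns.servers:$${dns:vswitch-pg}"
]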
limited-eye-27484
07/21/2023, 11:10 PM
millions-ocean-48249
07/24/2023, 7:16 PM
faint-shampoo-17603
08/02/2023, 3:24 PM
millions-ocean-48249
08/07/2023, 4:31 PM
best-address-42882
08/09/2023, 7:16 PM
refined-analyst-8898
08/09/2023, 8:10 PM
square-policeman-85866
08/15/2023, 8:19 AM
kind-air-74358
08/16/2023, 8:22 AM
We are using a dynamic block for machine_pools in the rancher2_cluster_v2 resource. We have the following resource defined:
variable "machine_pools" {
type = map(object({
quantity = number
cpu_count = number
memory_size = number
}
resource "rancher2_cluster_v2" "main" {
name = var.name
fleet_namespace = "fleet-default"
...
rke_config {
# Workers
dynamic "machine_pools" {
for_each = var.machine_pools
content {
name = machine_pools.key
cloud_credential_secret_name = data.rancher2_cloud_credential.main.id
worker_role = true
quantity = machine_pools.value["quantity"]
...
machine_config {
kind = rancher2_machine_config_v2.workers[machine_pools.key].kind
name = rancher2_machine_config_v2.workers[machine_pools.key].name
}
}
}
}
}
The input then looks something like:
machine_pools = {
  "cpu2-mem8" = {
    quantity    = 2
    cpu_count   = 2
    memory_size = 8192
  },
  "cpu4-mem8" = {
    quantity    = 2
    cpu_count   = 4
    memory_size = 8192
  }
}
But once we want to add the following machine_pool
"cpu2-mem16" = {
quantity = 2
cpu_count = 2
memory_size = 16384
}
the cluster resource is updated, but the machine_pools are updated as follows
~ machine_pools {
    ~ name = "cpu4-mem8" -> "cpu2-mem16"
      # (11 unchanged attributes hidden)
    ~ machine_config {
        - kind = "xxx" -> null
        - name = "nc-np48-w-cpu4-mem8-v25tb" -> null
      }
  }
+ machine_pools {
    + ...
    + name = "cpu4-mem8"
    + ...
  }
So basically it doesn't change the first machine_pool, it updates the second machine_pool with the configuration of the newly added machine_pool, and it creates a new machine_pool for the already existing pool, which is then effectively replaced by the new one.
Shouldn't the name of the machine_pool be leading here, instead of the lexical order?
Is this an issue on our side, or is the provider not dealing with this correctly? Any idea how to fix this or work around it?
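One thing that can be checked cheaply: for_each over a map emits its blocks in lexical key order, and the plan diff appears to compare machine_pools by position rather than by name, so a new key that sorts before an existing one shifts every later block. A small sketch for inspecting the order (the output name is hypothetical):
output "machine_pool_order" {
  # keys() returns the map keys sorted lexically; this is the order in which the
  # dynamic "machine_pools" blocks are generated.
  value = keys(var.machine_pools)
}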
alert-king-94787
08/31/2023, 4:20 PM
terraform apply to bootstrap rancher2
I'm launching infrastructure as code with Terraform, following docs like k3s HA + Rancher + cert-manager.
Now I'm trying to run the rancher2 bootstrap, but the execution returns an error in the terraform-provider-rancher2_v3.1.1 plugin:
Error: Plugin did not respond
│
│ with module.rancher-server.rancher2_bootstrap.admin,
│ on ../../../../../../../infrastructure-modules/modules/aws-rancher-server/rancher.tf line 4, in resource "rancher2_bootstrap" "admin":
│ 4: resource "rancher2_bootstrap" "admin" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may contain more details.
╵
Releasing state lock. This may take a few moments...
Stack trace from the terraform-provider-rancher2_v3.1.1 plugin:
panic: interface conversion: interface {} is nil, not string
goroutine 70 [running]:
<http://github.com/rancher/terraform-provider-rancher2/rancher2.DoUserLogin({0xc0001260c0|github.com/rancher/terraform-provider-rancher2/rancher2.DoUserLogin({0xc0001260c0>?, 0xc0010df4b8?}, {0x222af22, 0x5}, {0xc0010c606a, 0x5}, {0x222a46e, 0x5}, {0x228a036, 0x21}, ...)
/go/src/github.com/rancher/terraform-provider-rancher2/rancher2/util.go:154 +0x5f9
<http://github.com/rancher/terraform-provider-rancher2/rancher2.bootstrapDoLogin(0xc000b1f628|github.com/rancher/terraform-provider-rancher2/rancher2.bootstrapDoLogin(0xc000b1f628>?, {0x2221660?, 0xc000474180})
/go/src/github.com/rancher/terraform-provider-rancher2/rancher2/resource_rancher2_bootstrap.go:275 +0x445
<http://github.com/rancher/terraform-provider-rancher2/rancher2.resourceRancher2BootstrapCreate(0x1edf280|github.com/rancher/terraform-provider-rancher2/rancher2.resourceRancher2BootstrapCreate(0x1edf280>?, {0x2221660?, 0xc000474180})
/go/src/github.com/rancher/terraform-provider-rancher2/rancher2/resource_rancher2_bootstrap.go:27 +0x70
<http://github.com/hashicorp/terraform-plugin-sdk/helper/schema.(*Resource).Apply(0xc000423040|github.com/hashicorp/terraform-plugin-sdk/helper/schema.(*Resource).Apply(0xc000423040>, 0xc0010ce460, 0xc0010d4260, {0x2221660, 0xc000474180})
/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk@v1.17.2/helper/schema/resource.go:320 +0x438
<http://github.com/hashicorp/terraform-plugin-sdk/helper/schema.(*Provider).Apply(0xc00042a000|github.com/hashicorp/terraform-plugin-sdk/helper/schema.(*Provider).Apply(0xc00042a000>, 0xc000b1f8d0, 0x224f874?, 0xf?)
/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk@v1.17.2/helper/schema/provider.go:294 +0x70
<http://github.com/hashicorp/terraform-plugin-sdk/internal/helper/plugin.(*GRPCProviderServer).ApplyResourceChange(0xc00011c868|github.com/hashicorp/terraform-plugin-sdk/internal/helper/plugin.(*GRPCProviderServer).ApplyResourceChange(0xc00011c868>, {0xc0010d8070?, 0x4bffa6?}, 0xc0010d8070)
/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk@v1.17.2/internal/helper/plugin/grpc_provider.go:895 +0x805
<http://github.com/hashicorp/terraform-plugin-sdk/internal/tfplugin5._Provider_ApplyResourceChange_Handler({0x2156980|github.com/hashicorp/terraform-plugin-sdk/internal/tfplugin5._Provider_ApplyResourceChange_Handler({0x2156980>?, 0xc00011c868}, {0x279c2b0, 0xc0010ca1b0}, 0xc0010d8000, 0x0)
/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk@v1.17.2/internal/tfplugin5/tfplugin5.pb.go:3305 +0x170
<http://google.golang.org/grpc.(*Server).processUnaryRPC(0xc0002a01e0|google.golang.org/grpc.(*Server).processUnaryRPC(0xc0002a01e0>, {0x27a5f00, 0xc0007024e0}, 0xc00033b320, 0xc00062ae40, 0x3949e00, 0x0)
/go/pkg/mod/google.golang.org/grpc@v1.53.0/server.go:1336 +0xd23
<http://google.golang.org/grpc.(*Server).handleStream(0xc0002a01e0|google.golang.org/grpc.(*Server).handleStream(0xc0002a01e0>, {0x27a5f00, 0xc0007024e0}, 0xc00033b320, 0x0)
/go/pkg/mod/google.golang.org/grpc@v1.53.0/server.go:1704 +0xa2f
<http://google.golang.org/grpc.(*Server).serveStreams.func1.2()|google.golang.org/grpc.(*Server).serveStreams.func1.2()>
/go/pkg/mod/google.golang.org/grpc@v1.53.0/server.go:965 +0x98
created by <http://google.golang.org/grpc.(*Server).serveStreams.func1|google.golang.org/grpc.(*Server).serveStreams.func1>
/go/pkg/mod/google.golang.org/grpc@v1.53.0/server.go:963 +0x28a
Error: The terraform-provider-rancher2_v3.1.1 plugin crashed!
This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.
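For context, a minimal sketch of the bootstrap pattern being attempted (placeholder values; this illustrates the shape of the config rather than a fix for the panic itself):
provider "rancher2" {
  alias     = "bootstrap"
  api_url   = "https://rancher.example.com" # placeholder URL
  bootstrap = true
  insecure  = true
}

resource "rancher2_bootstrap" "admin" {
  provider         = rancher2.bootstrap
  initial_password = var.rancher_initial_password # hypothetical variable
  password         = var.rancher_admin_password   # hypothetical variable
  telemetry        = false
}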
alert-king-94787
08/31/2023, 4:20 PM
kubectl --kubeconfig kube_config_ranhcer_server_k3s_temp.yaml get pods --all-namespaces
NAMESPACE             NAME                                        READY   STATUS      RESTARTS      AGE
kube-system           coredns-74448699cf-lr2hz                    1/1     Running     0             3d1h
kube-system           local-path-provisioner-597bc7dccd-4txlt     1/1     Running     0             3d1h
kube-system           helm-install-traefik-crd-sxlcb              0/1     Completed   0             3d1h
kube-system           metrics-server-667586758d-2vttm             1/1     Running     0             3d1h
kube-system           helm-install-traefik-hgg65                  0/1     Completed   2             3d1h
kube-system           svclb-traefik-126decc1-lg8bn                2/2     Running     0             3d1h
kube-system           traefik-7467b667d9-t62mq                    1/1     Running     0             3d1h
cert-manager          cert-manager-cainjector-557c547f54-ng22p    1/1     Running     0             4h5m
cert-manager          cert-manager-5674b9b755-zczgh               1/1     Running     0             4h5m
cert-manager          cert-manager-webhook-86868b95db-hfl2v       1/1     Running     0             4h5m
cattle-system         rancher-99ccc5df-f65hv                      1/1     Running     0             4h3m
cattle-fleet-system   fleet-controller-56786984f4-4tvjc           1/1     Running     0             4h
cattle-fleet-system   gitjob-845b9dcc47-bg8x6                     1/1     Running     0             4h
cattle-system         rancher-webhook-998454b77-8bwlf             1/1     Running     0             3h59m
kube-system           svclb-traefik-126decc1-72xqb                2/2     Running     1 (14m ago)   14m
alert-king-94787
08/31/2023, 4:20 PM
terraform version
Terraform v1.5.6
on linux_amd64
+ provider <http://registry.terraform.io/hashicorp/aws|registry.terraform.io/hashicorp/aws> v5.10.0
+ provider <http://registry.terraform.io/hashicorp/helm|registry.terraform.io/hashicorp/helm> v2.11.0
+ provider <http://registry.terraform.io/hashicorp/local|registry.terraform.io/hashicorp/local> v2.4.0
+ provider <http://registry.terraform.io/hashicorp/random|registry.terraform.io/hashicorp/random> v3.5.1
+ provider <http://registry.terraform.io/hashicorp/template|registry.terraform.io/hashicorp/template> v2.2.0
+ provider <http://registry.terraform.io/hashicorp/time|registry.terraform.io/hashicorp/time> v0.9.1
+ provider <http://registry.terraform.io/hashicorp/tls|registry.terraform.io/hashicorp/tls> v4.0.4
+ provider <http://registry.terraform.io/rancher/rancher2|registry.terraform.io/rancher/rancher2> v3.1.1
alert-king-94787
08/31/2023, 4:21 PM
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.10.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "2.11"
    }
    local = {
      source  = "hashicorp/local"
      version = "2.4.0"
    }
    rancher2 = {
      source  = "rancher/rancher2"
      version = "3.1.1"
    }
  }
  required_version = "~> 1.5.4"
}
alert-king-94787
08/31/2023, 4:23 PM
variable "rancher_kubernetes_version" {
  default = "v1.24.14+k3s1"
}
variable "cert_manager_version" {
  default = "1.12.3"
}
variable "rancher_version" {
  default = "2.7.5"
}
variable "k3s_version" {
  default = "v1.24.14+k3s1"
}
alert-king-94787
08/31/2023, 4:25 PM
kind-air-74358
09/04/2023, 1:59 PM
Is it possible to create a rancher2_role_template using custom verbs? I.e. I want to create a role which implements the verbs updatepsa and manage-namespaces, but I get the error: expected rules.0.verbs.1 to be one of [* create delete deletecollection get list patch update view watch own use bind escalate impersonate], got manage-namespaces
Those verbs are specified by Rancher: https://github.com/rancher/webhook/blob/bd1de5c5620665de8e7158c9bbe83e09780cf9bc/docs.md
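A sketch of the kind of role template that triggers this validation (the resource name and rule targets are illustrative, loosely based on the webhook docs linked above); the provider rejects it because the rules.verbs values are checked against its built-in allow-list:
resource "rancher2_role_template" "psa_manager" {
  name    = "psa-manager"
  context = "cluster"
  rules {
    # Hypothetical rule targeting Rancher's project objects.
    api_groups = ["management.cattle.io"]
    resources  = ["projects"]
    verbs      = ["updatepsa", "manage-namespaces"]
  }
}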
future-accountant-19088
09/08/2023, 5:28 PM
glamorous-painting-54907
09/22/2023, 11:08 AM
clean-furniture-7088
09/22/2023, 2:25 PM
few-park-83463
09/22/2023, 5:15 PM
The worker_node_vsphere_config is now defined at a higher level instead of being nested in each worker pool. When TF runs, it can't interpolate the value if it is stored in each worker pool's mapped block.