alert-king-94787
08/31/2023, 4:20 PM
kubectl --kubeconfig kube_config_ranhcer_server_k3s_temp.yaml get pods --all-namespaces
NAMESPACE            NAME                                      READY   STATUS      RESTARTS      AGE
kube-system          coredns-74448699cf-lr2hz                  1/1     Running     0             3d1h
kube-system          local-path-provisioner-597bc7dccd-4txlt   1/1     Running     0             3d1h
kube-system          helm-install-traefik-crd-sxlcb            0/1     Completed   0             3d1h
kube-system          metrics-server-667586758d-2vttm           1/1     Running     0             3d1h
kube-system          helm-install-traefik-hgg65                0/1     Completed   2             3d1h
kube-system          svclb-traefik-126decc1-lg8bn              2/2     Running     0             3d1h
kube-system          traefik-7467b667d9-t62mq                  1/1     Running     0             3d1h
cert-manager         cert-manager-cainjector-557c547f54-ng22p  1/1     Running     0             4h5m
cert-manager         cert-manager-5674b9b755-zczgh             1/1     Running     0             4h5m
cert-manager         cert-manager-webhook-86868b95db-hfl2v     1/1     Running     0             4h5m
cattle-system        rancher-99ccc5df-f65hv                    1/1     Running     0             4h3m
cattle-fleet-system  fleet-controller-56786984f4-4tvjc         1/1     Running     0             4h
cattle-fleet-system  gitjob-845b9dcc47-bg8x6                   1/1     Running     0             4h
cattle-system        rancher-webhook-998454b77-8bwlf           1/1     Running     0             3h59m
kube-system          svclb-traefik-126decc1-72xqb              2/2     Running     1 (14m ago)   14m
alert-king-94787
08/31/2023, 4:20 PM
terraform version
Terraform v1.5.6
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v5.10.0
+ provider registry.terraform.io/hashicorp/helm v2.11.0
+ provider registry.terraform.io/hashicorp/local v2.4.0
+ provider registry.terraform.io/hashicorp/random v3.5.1
+ provider registry.terraform.io/hashicorp/template v2.2.0
+ provider registry.terraform.io/hashicorp/time v0.9.1
+ provider registry.terraform.io/hashicorp/tls v4.0.4
+ provider registry.terraform.io/rancher/rancher2 v3.1.1
alert-king-94787
08/31/2023, 4:21 PM
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.10.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "2.11"
    }
    local = {
      source  = "hashicorp/local"
      version = "2.4.0"
    }
    rancher2 = {
      source  = "rancher/rancher2"
      version = "3.1.1"
    }
  }
  required_version = "~> 1.5.4"
}
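The required_providers block above only pins versions; the providers still need their own configuration. As a hedged sketch (the variable names here are assumptions, not taken from this thread), a matching rancher2 provider block could look like:

```hcl
# Hypothetical rancher2 provider configuration to pair with the pinned
# versions above. var.rancher_api_url and var.rancher_token are assumed
# names, not from this thread.
provider "rancher2" {
  api_url   = var.rancher_api_url  # the Rancher server URL
  token_key = var.rancher_token
  insecure  = false
}
```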
alert-king-94787
08/31/2023, 4:23 PM
variable "rancher_kubernetes_version" {
  default = "v1.24.14+k3s1"
}
variable "cert_manager_version" {
  default = "1.12.3"
}
variable "rancher_version" {
  default = "2.7.5"
}
variable "k3s_version" {
  default = "v1.24.14+k3s1"
}
</variable>
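Version variables like these are typically fed into helm_release resources. A hedged sketch of how var.cert_manager_version might be consumed (the chart name and repository follow the upstream cert-manager chart docs; this resource is not from the thread):

```hcl
# Sketch only: installs cert-manager at the pinned version via the helm provider.
resource "helm_release" "cert_manager" {
  name             = "cert-manager"
  repository       = "https://charts.jetstack.io"
  chart            = "cert-manager"
  version          = var.cert_manager_version
  namespace        = "cert-manager"
  create_namespace = true

  set {
    name  = "installCRDs"
    value = "true"
  }
}
```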
few-park-83463
09/22/2023, 5:15 PM
worker_node_vsphere_config instead of being nested in each worker pool. When TF runs, it can't interpolate the value if it is stored in each worker pool's mapped block.
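One way to read the suggestion above is to define the vSphere machine config once, at the top level, and point every worker pool at it instead of nesting the config per pool. A rough sketch (attribute names follow the rancher2 provider docs; the settings themselves are placeholders):

```hcl
# Hedged sketch: a single shared machine config for all worker pools.
resource "rancher2_machine_config_v2" "worker" {
  generate_name = "worker-config"
  vsphere_config {
    # shared vSphere settings (datacenter, datastore, network, ...)
  }
}

# Inside rancher2_cluster_v2, each machine_pools block would then reference
# the shared config instead of carrying its own copy:
#   machine_config {
#     kind = rancher2_machine_config_v2.worker.kind
#     name = rancher2_machine_config_v2.worker.name
#   }
```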
average-eye-88240
02/04/2024, 3:15 AM
rancher2_storage_class_v2 Resource
When running the following:
resource "rancher2_storage_class_v2" "storage_class" {
  allow_volume_expansion = "false"
  cluster_id             = trimprefix(rancher2_cluster_v2.rke2.id, "fleet-default/")
  k8s_provisioner        = "driver.harvesterhci.io"
  name                   = trimprefix(rancher2_cluster_v2.rke2.id, "fleet-default/")
  reclaim_policy         = "Delete"
  volume_binding_mode    = "Immediate"
}
I get the following error:
rancher2_cluster_v2.rke2: Creation complete after 7m56s [id=fleet-default/inspired-rattler]
rancher2_storage_class_v2.storage_class: Creating...
rancher2_storage_class_v2.storage_class: Still creating... [10s elapsed]
rancher2_storage_class_v2.storage_class: Still creating... [20s elapsed]
rancher2_storage_class_v2.storage_class: Still creating... [30s elapsed]
rancher2_storage_class_v2.storage_class: Still creating... [40s elapsed]
rancher2_storage_class_v2.storage_class: Still creating... [50s elapsed]
rancher2_storage_class_v2.storage_class: Still creating... [1m0s elapsed]
rancher2_storage_class_v2.storage_class: Still creating... [1m10s elapsed]
rancher2_storage_class_v2.storage_class: Still creating... [1m20s elapsed]
rancher2_storage_class_v2.storage_class: Still creating... [1m30s elapsed]
rancher2_storage_class_v2.storage_class: Still creating... [1m40s elapsed]
rancher2_storage_class_v2.storage_class: Still creating... [1m50s elapsed]
rancher2_storage_class_v2.storage_class: Still creating... [2m0s elapsed]
╷
│ Error: Creating storageClass V2: Timeout getting Catalog V2 Client at cluster ID inspired-rattler: Bad response statusCode [500]. Status [500 Internal Server Error]. Body: [lost connection to cluster: failed to find Session for client stv-cluster-inspired-rattler] from [https://rancher.home.arpa/k8s/clusters/inspired-rattler/v1]
│
│ with rancher2_storage_class_v2.storage_class,
│ on storage.tf line 1, in resource "rancher2_storage_class_v2" "storage_class":
│ 1: resource "rancher2_storage_class_v2" "storage_class" {
I am not sure why this is occurring. Any help would be much appreciated!
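The 500 "lost connection to cluster" body suggests Terraform tried to reach the downstream cluster before its agent tunnel was registered with Rancher. One commonly suggested mitigation (a sketch, not a confirmed fix) is to gate the storage class on rancher2_cluster_sync, so the apply waits for the cluster to report active:

```hcl
# Sketch: wait for the downstream cluster to report active before creating
# resources inside it. rancher2_cluster_sync and the cluster_v1_id attribute
# are documented in the rancher2 provider.
resource "rancher2_cluster_sync" "rke2" {
  cluster_id    = rancher2_cluster_v2.rke2.cluster_v1_id
  state_confirm = 3  # require several consecutive "active" checks
}

resource "rancher2_storage_class_v2" "storage_class" {
  depends_on = [rancher2_cluster_sync.rke2]
  # ...same arguments as in the question above...
}
```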