future-gigabyte-33261
09/21/2025, 9:35 PM

creamy-pencil-82913
09/21/2025, 11:38 PM

creamy-pencil-82913
09/21/2025, 11:39 PM

future-gigabyte-33261
09/22/2025, 12:52 AM

future-gigabyte-33261
09/22/2025, 12:54 AM

future-gigabyte-33261
09/22/2025, 12:58 AM

future-gigabyte-33261
09/22/2025, 12:59 AM

future-gigabyte-33261
09/22/2025, 1:01 AM
namespace: default
resourceVersion: '31587430'
uid: c6683a4d-2023-4bbe-a3fc-19d3d8e785f6
reason: InvalidDiskCapacity
reportingComponent: kubelet

future-gigabyte-33261
09/22/2025, 1:13 AM

creamy-pencil-82913
09/22/2025, 3:44 AM

creamy-pencil-82913
09/22/2025, 3:46 AM

future-gigabyte-33261
09/22/2025, 5:07 AM

few-appointment-23216
09/22/2025, 8:36 PM
node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
however, we are not setting this taint during the deployment via TFR2P. Any ideas why this happens?

creamy-pencil-82913
09/22/2025, 8:44 PM

creamy-pencil-82913
09/22/2025, 8:44 PM

few-appointment-23216
09/22/2025, 8:54 PM
machine_selector_config {
  config = jsonencode({
    cloud-provider-config: local_file.harvester-kube-config.content
    cloud-provider-name: "harvester"
  })
}

creamy-pencil-82913
09/22/2025, 8:55 PM

few-appointment-23216
09/22/2025, 8:56 PM

few-appointment-23216
09/22/2025, 8:56 PM

creamy-pencil-82913
09/22/2025, 8:56 PM

few-appointment-23216
09/22/2025, 8:56 PM

creamy-pencil-82913
09/22/2025, 8:57 PM

few-appointment-23216
09/22/2025, 9:04 PM

creamy-pencil-82913
09/22/2025, 9:04 PM

creamy-pencil-82913
09/22/2025, 9:05 PM

creamy-pencil-82913
09/22/2025, 9:06 PM

future-gigabyte-33261
09/22/2025, 9:08 PM

future-gigabyte-33261
09/22/2025, 9:09 PM

creamy-pencil-82913
09/22/2025, 9:09 PM

creamy-pencil-82913
09/22/2025, 9:10 PM

creamy-pencil-82913
09/22/2025, 9:10 PM

creamy-pencil-82913
09/22/2025, 9:11 PM

few-appointment-23216
09/22/2025, 9:16 PM

creamy-pencil-82913
09/22/2025, 9:17 PM

creamy-pencil-82913
09/22/2025, 9:17 PM

few-appointment-23216
09/22/2025, 9:17 PM
Error getting instance metadata for node addresses: virtualmachines.kubevirt.io "test-cluster-allroles-fgqkm-lbsvp" not found

creamy-pencil-82913
09/22/2025, 9:18 PM

few-appointment-23216
09/22/2025, 9:21 PM

few-appointment-23216
09/22/2025, 9:25 PM

future-gigabyte-33261
09/22/2025, 9:30 PM

creamy-pencil-82913
09/22/2025, 9:31 PM

future-gigabyte-33261
09/22/2025, 9:33 PM

future-gigabyte-33261
09/22/2025, 9:34 PM
tofu-test-allroles-fgqkm-lbsvp:~ # /var/lib/rancher/rke2/bin/kubectl logs harvester-cloud-provider-78d55bc78d-psfz4 -n kube-system
I0922 20:35:31.885816 1 serving.go:348] Generated self-signed cert in-memory
W0922 20:35:31.885910 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
W0922 20:35:32.246871 1 main.go:84] detected a cluster without a ClusterID. A ClusterID will be required in the future. Please tag your cluster to avoid any future issues
I0922 20:35:32.246900 1 controllermanager.go:152] Version: v0.0.0-master+$Format:%H$
I0922 20:35:32.248191 1 secure_serving.go:213] Serving securely on [::]:10258
I0922 20:35:32.248297 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0922 20:35:32.248504 1 leaderelection.go:248] attempting to acquire leader lease kube-system/cloud-controller-manager...
I0922 20:35:55.023176 1 leaderelection.go:258] successfully acquired lease kube-system/cloud-controller-manager
I0922 20:35:55.023385 1 event.go:294] "Event occurred" object="kube-system/cloud-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="tofu-test-allroles-fgqkm-lbsvp_d6cf7c8f-962f-4284-a1bc-27d502ec0875 became leader"
time="2025-09-22T20:35:55Z" level=info msg="start watching virtual machine instance" controller=harvester-cloudprovider-resync-topology namespace=default
W0922 20:35:55.080327 1 core.go:111] --configure-cloud-routes is set, but cloud provider does not support routes. Will not configure cloud provider routes.
W0922 20:35:55.080338 1 controllermanager.go:299] Skipping "route"
I0922 20:35:55.080613 1 controllermanager.go:311] Started "cloud-node"
I0922 20:35:55.080779 1 controllermanager.go:311] Started "cloud-node-lifecycle"
I0922 20:35:55.080828 1 node_controller.go:157] Sending events to api server.
I0922 20:35:55.080884 1 node_controller.go:166] Waiting for informer caches to sync
I0922 20:35:55.080940 1 node_lifecycle_controller.go:113] Sending events to api server
I0922 20:35:55.081035 1 controllermanager.go:311] Started "service"
I0922 20:35:55.081167 1 controller.go:227] Starting service controller
I0922 20:35:55.081248 1 shared_informer.go:270] Waiting for caches to sync for service
I0922 20:35:55.181595 1 shared_informer.go:277] Caches are synced for service
E0922 20:35:55.360526 1 node_controller.go:258] Error getting instance metadata for node addresses: virtualmachines.kubevirt.io "tofu-test-allroles-fgqkm-lbsvp" not found
time="2025-09-22T20:35:55Z" level=info msg="Starting kubevirt.io/v1, Kind=VirtualMachineInstance controller"
time="2025-09-22T20:35:55Z" level=info msg="Starting /v1, Kind=Service controller"
time="2025-09-22T20:35:55Z" level=info msg="Starting /v1, Kind=Node controller"
E0922 20:37:59.542014 1 leaderelection.go:367] Failed to update lock: Put "https://10.43.0.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cloud-controller-manager?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0922 20:40:55.393928 1 node_controller.go:258] Error getting instance metadata for node addresses: virtualmachines.kubevirt.io "tofu-test-allroles-fgqkm-lbsvp" not found
E0922 20:45:55.447995 1 node_controller.go:258] Error getting instance metadata for node addresses: virtualmachines.kubevirt.io "tofu-test-allroles-fgqkm-lbsvp" not found
E0922 20:50:55.478969 1 node_controller.go:258] Error getting instance metadata for node addresses: virtualmachines.kubevirt.io "tofu-test-allroles-fgqkm-lbsvp" not found
E0922 20:55:55.512430 1 node_controller.go:258] Error getting instance metadata for node addresses: virtualmachines.kubevirt.io "tofu-test-allroles-fgqkm-lbsvp" not found
E0922 21:00:55.545766 1 node_controller.go:258] Error getting instance metadata for node addresses: virtualmachines.kubevirt.io "tofu-test-allroles-fgqkm-lbsvp" not found
E0922 21:05:55.577520 1 node_controller.go:258] Error getting instance metadata for node addresses: virtualmachines.kubevirt.io "tofu-test-allroles-fgqkm-lbsvp" not found
E0922 21:10:55.613498 1 node_controller.go:258] Error getting instance metadata for node addresses: virtualmachines.kubevirt.io "tofu-test-allroles-fgqkm-lbsvp" not found

creamy-pencil-82913
09/22/2025, 9:34 PM

future-gigabyte-33261
09/22/2025, 9:35 PM

creamy-pencil-82913
09/22/2025, 9:35 PM
virtualmachines.kubevirt.io resources from being created. I suspect it's just running very slowly and/or crashing due to poor datastore performance.

future-gigabyte-33261
09/22/2025, 9:37 PM

future-gigabyte-33261
09/22/2025, 9:38 PM

few-appointment-23216
09/22/2025, 10:09 PM
kubectl patch node tofu-test-allroles-fgqkm-xxxxx -p '{"spec":{"providerID":"harvester://harvester-public/tofu-test-allroles-fgqkm-xxxxx"}}'
Once this was done, the harvester-cloud-manager was able to complete whatever was pending and finally all nodes joined the cluster.
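If several nodes get stuck the same way, the patch above can be scripted. A sketch, assuming every node name matches its Harvester VM name and the VMs live in the harvester-public namespace; the make_patch helper is hypothetical, not part of any tool in this thread:

```shell
# Build the providerID patch document for one node (pure string work,
# so it can be sanity-checked without cluster access).
make_patch() {
  printf '{"spec":{"providerID":"harvester://%s/%s"}}' "$1" "$2"
}

# With kubectl and cluster access, this loop would apply it to every node:
# for node in $(kubectl get nodes -o name); do
#   kubectl patch "$node" -p "$(make_patch harvester-public "${node#node/}")"
# done

make_patch harvester-public tofu-test-allroles-fgqkm-xxxxx
# → {"spec":{"providerID":"harvester://harvester-public/tofu-test-allroles-fgqkm-xxxxx"}}
```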
few-appointment-23216
09/22/2025, 10:10 PM
providerId field? Shouldn't this be computed by the values that are used in machine_config_v2?

creamy-pencil-82913
09/22/2025, 10:15 PM

few-appointment-23216
09/22/2025, 10:19 PM

few-appointment-23216
09/22/2025, 10:20 PM

future-gigabyte-33261
09/22/2025, 10:24 PM

creamy-pencil-82913
09/22/2025, 10:28 PM

future-gigabyte-33261
09/22/2025, 10:30 PM

few-appointment-23216
10/02/2025, 7:20 AM
.../<cluster name>?action=generateKubeconfig endpoint. Due to this, the harvester-cloud-provider pod searches for but fails to recognize the child cluster nodes; thus Calico, the RKE2 agent, and a bunch of other pods are not able to initialize/run properly.
Is there a way to include the namespace in the kubeconfig file? I tried both using the Terraform http provider to generate the file and manually using curl; both times I supplied the namespace in the request body, along with the service principal account. In neither case does the resulting kubeconfig contain the namespace.
P.S. The service principal has the relevant cluster role binding.
Regards,
Ronald

creamy-pencil-82913
10/02/2025, 7:40 AM

few-appointment-23216
10/02/2025, 9:00 AM

future-gigabyte-33261
10/06/2025, 12:53 PM

future-gigabyte-33261
10/06/2025, 12:53 PM

future-gigabyte-33261
10/06/2025, 12:54 PM

creamy-pencil-82913
10/06/2025, 4:23 PM

creamy-pencil-82913
10/06/2025, 4:23 PM

future-gigabyte-33261
10/08/2025, 3:39 PM

few-appointment-23216
10/08/2025, 8:23 PMdata "http" "kubeconfig" {
url = "<https://rancher.domain.com/v3/clusters/c-j9pl8?action=generateKubeconfig>"
method = "POST"
request_headers = {
Authorization = "Bearer ${var.token}"
Accept = "application/json"
}
request_body = jsonencode({
"clusterRoleName" = "harvesterhci.io:cloudprovider"
"namespace" = "harvester-public"
"serviceAccountName" = "tofu-test"
})
}
resource "local_file" "harvester-kube-config" {
filename = "${path.module}/tofu-test-kubeconfig"
content = jsondecode(data.http.kubeconfig.response_body).config
}
which results in the following content:
apiVersion: v1
kind: Config
clusters:
- name: "core"
cluster:
server: "<https://rancher.domain.com/k8s/clusters/c-j9pl8>"
users:
- name: "core"
user:
token: "kubeconfig-user-1ff2gx566s:hf9vftgtq5<REDACTED>d2tglp8m8b8vq26"
contexts:
- name: "core"
context:
user: "core"
cluster: "core"
current-context: "core"
However, the Harvester CCM requires the namespace, which I did add manually under contexts[0].context.namespace with the value harvester-public. Once I did this, the Harvester CCM pod (harvester-cloud-provider) was able to locate the VM and properly initialize it.
So, my question is this: is there a way to include the namespace information in the kubeconfig that is generated? I also tried all of the above using curl and the result was the same (missing namespace information).creamy-pencil-82913
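As a workaround until the endpoint returns it, the namespace can be spliced into the generated file after the fact. A sketch using GNU sed; the file path and namespace are the ones from this thread, and the heredoc merely recreates the relevant slice of the generated kubeconfig so the command can be tried in isolation:

```shell
# Recreate the contexts section that generateKubeconfig returns
# (sample data standing in for the real tofu-test-kubeconfig file).
cat > tofu-test-kubeconfig <<'EOF'
contexts:
- name: "core"
  context:
    user: "core"
    cluster: "core"
current-context: "core"
EOF

# GNU sed: append a namespace line directly under the context: key,
# which is the field the Harvester CCM needs to locate the VMs.
sed -i 's/^  context:$/&\n    namespace: "harvester-public"/' tofu-test-kubeconfig

grep 'namespace' tofu-test-kubeconfig  # shows the inserted line
```

If kubectl is available, `kubectl config set-context core --namespace=harvester-public --kubeconfig=tofu-test-kubeconfig` should achieve the same edit; in Terraform, a `replace()` over the response body before writing the `local_file` is another option.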
creamy-pencil-82913
10/08/2025, 8:43 PM

few-appointment-23216
10/08/2025, 8:44 PM

creamy-pencil-82913
10/08/2025, 8:46 PM

creamy-pencil-82913
10/08/2025, 8:47 PM

few-appointment-23216
10/08/2025, 8:48 PM
serviceAccountName = tofu-test in the request body

creamy-pencil-82913
10/08/2025, 8:51 PM

creamy-pencil-82913
10/08/2025, 8:56 PM

creamy-pencil-82913
10/08/2025, 8:57 PM
curl -sfL https://raw.githubusercontent.com/harvester/cloud-provider-harvester/master/deploy/generate_addon.sh | bash -s <serviceaccount name> <namespace>
> You must specify the namespace in which the guest cluster will be created.

creamy-pencil-82913
10/08/2025, 8:57 PM

creamy-pencil-82913
10/08/2025, 8:58 PM

few-appointment-23216
10/08/2025, 8:59 PM

few-appointment-23216
10/08/2025, 9:00 PM

few-appointment-23216
10/08/2025, 9:01 PM
curl with some env vars

few-appointment-23216
10/08/2025, 9:02 PM

creamy-pencil-82913
10/08/2025, 9:27 PM

creamy-pencil-82913
10/08/2025, 9:29 PM

bumpy-tomato-36167
10/08/2025, 9:48 PM

bumpy-tomato-36167
10/08/2025, 9:51 PM

creamy-pencil-82913
10/08/2025, 9:52 PM

creamy-pencil-82913
10/08/2025, 9:53 PM

bumpy-tomato-36167
10/08/2025, 9:54 PM

few-appointment-23216
10/08/2025, 10:53 PM

creamy-pencil-82913
10/08/2025, 10:55 PM