# harvester
s
Sadly, my 2-node v1.4.2 to v1.5.0 upgrade has been stuck for >24 hours. I'm following the troubleshooting page.
Copy code
# kubectl get jobs -n harvester-system -l harvesterhci.io/upgradeComponent=manifest
NAME                                 STATUS    COMPLETIONS   DURATION   AGE
hvst-upgrade-bckj7-apply-manifests   Running   0/1           44h        44h
and
Copy code
...
longhorn nodeDrainPolicy has been set in managedchart, do not patch again
true
longhorn detachManuallyAttachedVolumesWhenCordoned has been set in managedchart, do not patch again
longhorn related config
defaultSettings:
  defaultDataPath: /var/lib/harvester/defaultdisk
  detachManuallyAttachedVolumesWhenCordoned: true
  nodeDrainPolicy: allow-if-replica-is-stopped
  taintToleration: kubevirt.io/drain:NoSchedule
enabled: true
managedchart.management.cattle.io/harvester configured
managedchart.management.cattle.io/harvester patched
managedchart.management.cattle.io/harvester-crd patched
Waiting for ManagedChart fleet-local/harvester from generation 8
Target version: 1.5.0, Target state: ready
Current version: 1.5.0, Current state: OutOfSync, Current generation: 10
Sleep for 5 seconds to retry
Current version: 1.5.0, Current state: WaitApplied, Current generation: 10
Sleep for 5 seconds to retry
Current version: 1.5.0, Current state: WaitApplied, Current generation: 10
Sleep for 5 seconds to retry
Current version: 1.5.0, Current state: WaitApplied, Current generation: 10
Sleep for 5 seconds to retry
Current version: 1.5.0, Current state: WaitApplied, Current generation: 10
Sleep for 5 seconds to retry
... (repeats ad nauseam)
There doesn't seem to be any remediation described for "Phase 3: Upgrade the system services".
f
s
Hi @sticky-summer-13450, could you generate an SB (support bundle) for investigation? Also, could you get the following command's result?
Copy code
$ kubectl get bundle -A
s
Thank you @salmon-city-57654.
Copy code
# kubectl get bundle -A
NAMESPACE     NAME                                          BUNDLEDEPLOYMENTS-READY   STATUS
fleet-local   fleet-agent-local                             1/1                       
fleet-local   local-managed-system-agent                    1/1                       
fleet-local   mcc-harvester                                 0/1                       Modified(1) [Cluster fleet-local/local]; cdi.cdi.kubevirt.io cdi missing
fleet-local   mcc-harvester-crd                             1/1                       
fleet-local   mcc-hvst-upgrade-bckj7-upgradelog-operator    0/1                       ErrApplied(1) [Cluster fleet-local/local: execution error at (rancher-logging/templates/validate-install-crd.yaml:26:7): Required CRDs are missing. Please install the corresponding CRD chart before installing this chart.]
fleet-local   mcc-local-managed-system-upgrade-controller   1/1                       
fleet-local   mcc-rancher-logging-crd                       1/1                       
fleet-local   mcc-rancher-monitoring-crd                    1/1
Did these details help in discovering why this upgrade has stalled?
s
Looks like the cdi CRD is already added, but I am investigating why the fleet is still showing the warning message: cdi.cdi.kubevirt.io cdi missing
👍 1
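A quick way to confirm the CDI CRD is actually present and to see whether it carries the Helm/fleet labels (a sketch; the CRD name cdis.cdi.kubevirt.io is taken from the YAML later in this thread):
$ kubectl get crd cdis.cdi.kubevirt.io
$ kubectl get crd cdis.cdi.kubevirt.io -o jsonpath='{.metadata.labels}'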
cc @ancient-pizza-13099, did you ever see a similar situation? (I mean, the fleet-agent did not get up to date after we applied the new CRDs.)
Hi @sticky-summer-13450, Could you help to show the following command result?
Copy code
$ helm history harvester -n harvester-system
s
Sure:
Copy code
# helm history harvester -n harvester-system
REVISION	UPDATED                 	STATUS    	CHART          	APP VERSION	DESCRIPTION     
5       	Sat Apr 26 13:02:24 2025	superseded	harvester-1.4.2	v1.4.2     	Rollback to 3   
6       	Sat Apr 26 13:03:11 2025	deployed  	harvester-1.5.0	v1.5.0     	Upgrade complete
s
Hmm, could you try to delete (restart) the fleet-agent pod to let fleet recheck? It uses the namespace cattle-fleet-local-system
s
sure:
Copy code
# kubectl get pod -n cattle-fleet-local-system
NAME                           READY   STATUS    RESTARTS   AGE
fleet-agent-79c87b85c5-9lcmk   1/1     Running   0          9d

# kubectl delete pod fleet-agent-79c87b85c5-9lcmk -n cattle-fleet-local-system
pod "fleet-agent-79c87b85c5-9lcmk" deleted
The logs from the hvst-upgrade-bckj7-apply-manifests job are continuing to output:
Copy code
Sleep for 5 seconds to retry
Current version: 1.5.0, Current state: Modified, Current generation: 10
Sleep for 5 seconds to retry
Current version: 1.5.0, Current state: Modified, Current generation: 10
The logs of the new fleet-agent are repeating this error:
Copy code
{"level":"info","ts":"2025-05-06T09:19:10Z","logger":"bundledeployment.helm-deployer.install","msg":"Upgrading helm release","controller":"bundledeployment","controllerGroup":"fleet.cattle.io","controllerKind":"BundleDeployment","BundleDeployment":{"name":"mcc-hvst-upgrade-bckj7-upgradelog-operator","namespace":"cluster-fleet-local-local-1a3d67d0a899"},"namespace":"cluster-fleet-local-local-1a3d67d0a899","name":"mcc-hvst-upgrade-bckj7-upgradelog-operator","reconcileID":"5168e9e0-3168-405e-8b2c-8e7ec021173c","commit":"","dryRun":false}
{"level":"error","ts":"2025-05-06T09:19:10Z","msg":"Reconciler error","controller":"bundledeployment","controllerGroup":"fleet.cattle.io","controllerKind":"BundleDeployment","BundleDeployment":{"name":"mcc-hvst-upgrade-bckj7-upgradelog-operator","namespace":"cluster-fleet-local-local-1a3d67d0a899"},"namespace":"cluster-fleet-local-local-1a3d67d0a899","name":"mcc-hvst-upgrade-bckj7-upgradelog-operator","reconcileID":"5168e9e0-3168-405e-8b2c-8e7ec021173c","error":"failed deploying bundle: execution error at (rancher-logging/templates/validate-install-crd.yaml:26:7): Required CRDs are missing. Please install the corresponding CRD chart before installing this chart.","errorCauses":[{"error":"failed deploying bundle: execution error at (rancher-logging/templates/validate-install-crd.yaml:26:7): Required CRDs are missing. Please install the corresponding CRD chart before installing this chart."}],"stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.2/pkg/internal/controller/controller.go:341\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.2/pkg/internal/controller/controller.go:288\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.20.2/pkg/internal/controller/controller.go:249"}
s
What is the result of the following command?
Copy code
$ kubectl get bundle -A
s
Copy code
# kubectl get bundle -A
NAMESPACE     NAME                                          BUNDLEDEPLOYMENTS-READY   STATUS
fleet-local   fleet-agent-local                             1/1                       
fleet-local   local-managed-system-agent                    1/1                       
fleet-local   mcc-harvester                                 0/1                       Modified(1) [Cluster fleet-local/local]; cdi.cdi.kubevirt.io cdi missing
fleet-local   mcc-harvester-crd                             1/1                       
fleet-local   mcc-hvst-upgrade-bckj7-upgradelog-operator    0/1                       ErrApplied(1) [Cluster fleet-local/local: execution error at (rancher-logging/templates/validate-install-crd.yaml:26:7): Required CRDs are missing. Please install the corresponding CRD chart before installing this chart.]
fleet-local   mcc-local-managed-system-upgrade-controller   1/1                       
fleet-local   mcc-rancher-logging-crd                       1/1                       
fleet-local   mcc-rancher-monitoring-crd                    1/1
s
I am wondering if the error from rancher-logging might be a false alarm, but I need some time to investigate… @ancient-pizza-13099, did you ever see this message for rancher-logging?
cc @great-bear-19718
Hi @sticky-summer-13450, Could you try to patch the mcc-harvester bundle? To add this field:
spec.forceSyncGeneration: 1
It should trigger the redeployment.
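A hedged sketch of what that patch could look like (the exact kubectl invocation is an assumption, not quoted from this thread):
$ kubectl patch bundle mcc-harvester -n fleet-local --type merge -p '{"spec":{"forceSyncGeneration":1}}'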
s
Hi. I should have time this evening (UK time), I'm on a family hike at the moment 😊
s
Sure, no worries, just updating the investigation here.
BTW, could you create a GH issue for it? We would like to have a record of your case.
a
The harvester managedchart issue seems to be due to: Modified(1) [Cluster fleet-local/local]; cdi.cdi.kubevirt.io cdi missing. The mcc-hvst-upgrade-bckj7-upgradelog-operator issue is due to other causes.
a normally installed CRD has labels and annotations like these:
Copy code
- apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    annotations:
      helm.sh/resource-policy: keep
      meta.helm.sh/release-name: harvester-crd
      meta.helm.sh/release-namespace: harvester-system
      objectset.rio.cattle.io/id: default-mcc-harvester-crd-cattle-fleet-local-system
    creationTimestamp: "2024-12-01T12:57:22Z"
    generation: 1
    labels:
      app.kubernetes.io/managed-by: Helm
      objectset.rio.cattle.io/hash: d4a83267ddde6a8769c04362d4a0e5605db9baa7
    managedFields:
but the cdi.cdi.kubevirt.io CRD created by kubectl create -f has no such labels or annotations:
Copy code
    creationTimestamp: "2025-04-26T12:58:06Z"
    generation: 3
    managedFields:
    - apiVersion: apiextensions.k8s.io/v1
,,
      manager: kube-apiserver
      operation: Update
      subresource: status
      time: "2025-04-26T12:58:06Z"
    - apiVersion: apiextensions.k8s.io/v1
,,
      manager: kubectl-client-side-apply
      operation: Update
      time: "2025-04-26T13:03:46Z"
    name: cdis.cdi.kubevirt.io
    resourceVersion: "147939263"
    uid: 22c13f71-f4b3-4094-b82f-3e953587d010
  spec:
    conversion:
      strategy: None
    group: cdi.kubevirt.io
    names:
      kind: CDI
      listKind: CDIList
      plural: cdis
      shortNames:
      - cdi
      - cdis
      singular: cdi
    scope: Cluster
@sticky-summer-13450 Did you ever install third-party logging on your cluster? If you installed and removed it, that may have accidentally removed the CRDs installed by rancher-logging-crd, and the mcc-hvst-upgrade-bckj7-upgradelog-operator is reporting those errors - but that did not block the upgrade.
@sticky-summer-13450 From the support-bundle, the cdi.cdi... resource is present in the CRDs; you may try the following: kubectl edit managedchart -n fleet-local harvester, set paused to `true`; wait 5 minutes, set it to `false` again, wait another 5 minutes, then check the managedchart and bundle to see if the harvester managedchart has changed.
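A sketch of the same pause/unpause cycle done with kubectl patch instead of kubectl edit (an equivalent approach, not the exact commands from this thread):
$ kubectl patch managedchart harvester -n fleet-local --type merge -p '{"spec":{"paused":true}}'
(wait ~5 minutes)
$ kubectl patch managedchart harvester -n fleet-local --type merge -p '{"spec":{"paused":false}}'
(wait another ~5 minutes, then check)
$ kubectl get managedchart -n fleet-local harvester
$ kubectl get bundle -n fleet-local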
I suspect this is a side-effect of https://github.com/harvester/harvester/issues/8239
s
> BTW, could you create a GH issue for it?
I have created https://github.com/harvester/harvester/issues/8240. I’ll try to add more notes to it as I get time during my vacation (when there is enough bandwidth for a VPN to the cluster)
> Did you ever install third party logging on your cluster?
No, I have not installed anything directly into the cluster. This cluster was installed as v1.4.0 and upgraded successfully through v1.4.1 and v1.4.2.
> Could you try to patch the mcc-harvester bundle? To add this field: spec.forceSyncGeneration: 1
I think this suggestion has been superseded in the time I’ve managed to reply. Please tell me if I’ve got that wrong. Edit: See below.
> kubectl edit managedchart -n fleet-local harvester, set paused to `true`; wait 5 minutes, set it to `false` again, wait another 5 minutes, then check the managedchart and bundle to see if the harvester managedchart has changed.
I did this and nothing has changed in the harvester ManagedChart or in the errors listed in the bundle resources.
> Could you try to patch the mcc-harvester bundle? To add this field: spec.forceSyncGeneration: 1
I did this and after a couple of minutes the status cleared of notifications/errors and the deployed mcc-harvester bundle became ready; then 5 minutes later the mcc-hvst-upgrade-bckj7-upgradelog-operator bundle became ready too. The ‘Upgrading System Service’ stage is now at 100%, one node has moved into the ‘Pre-draining’ phase, and the VMs are being migrated to the other node. Thanks all 👏 - looks like this upgrade might be un-stuck now 🎉
👍 3
(GH issue updated with the above thread)
👍 1
a
Sounds great.
s
The field spec.forceSyncGeneration: 1 helps fleet re-deploy the chart. We wanted to do that because somehow the CDI CR was deleted. Happy to hear your upgrade is back on track!
s
Just a sad continuation… The upgrade of the nodes stalled at the pre-draining stage. I stopped all of the running VMs and rebooted each node individually, as I have in the past. That caused the upgrade to continue to completion on each node. Sadly, none of the VMs will start and, judging from the Longhorn UI, all the data is lost.
I’ve tried creating a new VM from one of the backups but that errors saying the VLANs are not available.
admission webhook "validator.harvesterhci.io" denied the request: Failed to get network attachment definition vlan648, err: network-attachment-definitions.k8s.cni.cncf.io "vlan648" not found
Which, as far as I can see, they do.
So, all in all, a disappointing upgrade for me, unless something unexpected happens.
s
Hi @sticky-summer-13450, sorry to hear that. 🫠 Could you generate the support bundle again? I would like to check your current status.
I think the most disturbing thing is that “Restore New” from a Virtual Machine Backup is erroring as I described above. I’m quite worried that the VMs did not survive the upgrade, but not being able to restore from the backup is really devastating.
g
the support bundle is incomplete..
s
yeah, @sticky-summer-13450, sorry, I just checked your SB. It lacks the yamls folder. Could you generate the SB again and check that it contains the yamls folder?
s
Just created a new SB. It does not contain a yamls folder.
s
Do you have any expired certs?
We had a similar case with another community user, and he found that his cert had expired.
s
Which cert? I added the UI cert, which is valid until July. Aren’t all the internal certs managed by Harvester?
g
likely scheduler cert etc
lol.. that was you
s
Haha. I'll do that again. This cluster has only been running for 161 days. And that bug was filed over 2 years ago 😉
I did what I wrote that I’d previously done, as you linked to:
Copy code
rancher@harvester003:~> echo "Rotating kube-controller-manager certificate"
Rotating kube-controller-manager certificate
rancher@harvester003:~> sudo rm /var/lib/rancher/rke2/server/tls/kube-controller-manager/kube-controller-manager.{crt,key}
rancher@harvester003:~> sudo /var/lib/rancher/rke2/bin/crictl -r unix:///var/run/k3s/containerd/containerd.sock rm -f $(sudo /var/lib/rancher/rke2/bin/crictl -r unix:///var/run/k3s/containerd/containerd.sock ps -q --name kube-controller-manager)
WARN[0000] Config "/etc/crictl.yaml" does not exist, trying next: "/var/lib/rancher/rke2/data/v1.32.3-rke2r1-543f84e7e830/bin/crictl.yaml" 
WARN[0000] Config "/etc/crictl.yaml" does not exist, trying next: "/var/lib/rancher/rke2/data/v1.32.3-rke2r1-543f84e7e830/bin/crictl.yaml" 
9972fb861a1d4ffd8ca8bd577a3c2fc5c50895b4aa04425576c71a7037f04c19
9972fb861a1d4ffd8ca8bd577a3c2fc5c50895b4aa04425576c71a7037f04c19

rancher@harvester003:~> echo "Rotating kube-scheduler certificate"
Rotating kube-scheduler certificate
rancher@harvester003:~> sudo rm /var/lib/rancher/rke2/server/tls/kube-scheduler/kube-scheduler.{crt,key}
rancher@harvester003:~> sudo /var/lib/rancher/rke2/bin/crictl -r unix:///var/run/k3s/containerd/containerd.sock rm -f $(sudo /var/lib/rancher/rke2/bin/crictl -r unix:///var/run/k3s/containerd/containerd.sock ps -q --name kube-scheduler)
WARN[0000] Config "/etc/crictl.yaml" does not exist, trying next: "/var/lib/rancher/rke2/data/v1.32.3-rke2r1-543f84e7e830/bin/crictl.yaml" 
WARN[0000] Config "/etc/crictl.yaml" does not exist, trying next: "/var/lib/rancher/rke2/data/v1.32.3-rke2r1-543f84e7e830/bin/crictl.yaml" 
e17947504f0df0b4add0ad466ecb1fcbcb6c28fbe83c2dbdf75928bb8dc442ce
e17947504f0df0b4add0ad466ecb1fcbcb6c28fbe83c2dbdf75928bb8dc442ce
Then I created a new SB but it still does not contain a yamls folder.
Is there a correct way to check all of the certificates?
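One way to list expiry dates for the RKE2-managed certificates on a node (a sketch, assuming the standard RKE2 TLS directory layout under /var/lib/rancher/rke2/server/tls):
$ for f in /var/lib/rancher/rke2/server/tls/*.crt /var/lib/rancher/rke2/server/tls/*/*.crt; do echo -n "$f: "; sudo openssl x509 -enddate -noout -in "$f"; done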
g
there should be a support-bundle-manager deployment running; it would be nice to check its logs
s
No errors in there - I wonder if you need the logs from the more transient supportbundle-agent-bundle pods on each node.
Curious - there is a yamls folder now. I wonder if I needed to have waited longer for the certificates to be recreated. Here’s an SB.
s
Thanks @sticky-summer-13450, I will check this today
👍 1
Hi @sticky-summer-13450, sorry for the delay - I checked your SB. Looks like the volume and the corresponding replicas are still there. Could you describe the original issue after your upgrade? Thanks!
s
Hi Vincent. The problems are:
• at upgrade time the VMs would not migrate, so I shut them down
• post upgrade the VMs would not start
• Longhorn says the volumes are failed (see example screenshot from mobile - I’m on the beach on holiday)
• attempting to restore from backup is failing, saying that the VLANs don’t exist
• creating an SB, the zip does not contain a yamls folder
I have created a couple of new VMs using Terraform which are working fine; just the old ones are not. When I get back to a real computer this evening I could attempt to start a VM and grab another SB?
Got to a PC. Selected to start a VM - it’s hanging in the “starting” phase. Selected to create an SB - it doesn’t contain a yamls folder. The logs from the supportbundle-manager-bundle pod:
Copy code
+ exec tini -- /usr/bin/support-bundle-kit manager
time="2025-05-16T15:13:52Z" level=info msg="Running phase init"
time="2025-05-16T15:13:52Z" level=debug msg="Create a local state store. (harvester-system/bundle-5mexo)"
time="2025-05-16T15:13:52Z" level=debug msg="Get supportbundle harvester-system/bundle-5mexo state generating"
time="2025-05-16T15:13:52Z" level=info msg="Succeed to run phase init. Progress (16)."
time="2025-05-16T15:13:52Z" level=info msg="Running phase cluster bundle"
time="2025-05-16T15:13:52Z" level=debug msg="Generating cluster bundle..."
time="2025-05-16T15:13:53Z" level=info msg="Prepare to get all support bundle yamls!"
time="2025-05-16T15:13:53Z" level=info msg="[Harvester] generate YAMLs, yamlsDir: /tmp/support-bundle-kit/bundle/yamls"
time="2025-05-16T15:13:53Z" level=error msg="Unable to fetch namespaced resources" error="unable to retrieve the complete list of server APIs: ext.cattle.io/v1: stale GroupVersion discovery: ext.cattle.io/v1"
time="2025-05-16T15:13:53Z" level=info msg="[Cluster] generate YAMLs, yamlsDir: /tmp/support-bundle-kit/bundle/yamls"
time="2025-05-16T15:13:53Z" level=error msg="Unable to fetch cluster resources" error="unable to retrieve the complete list of server APIs: ext.cattle.io/v1: stale GroupVersion discovery: ext.cattle.io/v1"
time="2025-05-16T15:13:53Z" level=info msg="[Default] generate YAMLs, yamlsDir: /tmp/support-bundle-kit/bundle/yamls"
time="2025-05-16T15:13:53Z" level=error msg="Unable to fetch namespaced resources" error="unable to retrieve the complete list of server APIs: ext.cattle.io/v1: stale GroupVersion discovery: ext.cattle.io/v1"
time="2025-05-16T15:13:53Z" level=error msg="Unable to fetch namespaced resources" error="unable to retrieve the complete list of server APIs: ext.cattle.io/v1: stale GroupVersion discovery: ext.cattle.io/v1"
time="2025-05-16T15:13:53Z" level=error msg="Unable to fetch namespaced resources" error="unable to retrieve the complete list of server APIs: ext.cattle.io/v1: stale GroupVersion discovery: ext.cattle.io/v1"
time="2025-05-16T15:13:53Z" level=error msg="Unable to fetch namespaced resources" error="unable to retrieve the complete list of server APIs: ext.cattle.io/v1: stale GroupVersion discovery: ext.cattle.io/v1"
time="2025-05-16T15:13:53Z" level=error msg="Unable to fetch namespaced resources" error="unable to retrieve the complete list of server APIs: ext.cattle.io/v1: stale GroupVersion discovery: ext.cattle.io/v1"
time="2025-05-16T15:13:53Z" level=error msg="Unable to fetch namespaced resources" error="unable to retrieve the complete list of server APIs: ext.cattle.io/v1: stale GroupVersion discovery: ext.cattle.io/v1"
time="2025-05-16T15:13:53Z" level=error msg="Unable to fetch namespaced resources" error="unable to retrieve the complete list of server APIs: ext.cattle.io/v1: stale GroupVersion discovery: ext.cattle.io/v1"
time="2025-05-16T15:13:53Z" level=error msg="Unable to fetch namespaced resources" error="unable to retrieve the complete list of server APIs: ext.cattle.io/v1: stale GroupVersion discovery: ext.cattle.io/v1"
time="2025-05-16T15:13:53Z" level=error msg="Unable to fetch namespaced resources" error="unable to retrieve the complete list of server APIs: ext.cattle.io/v1: stale GroupVersion discovery: ext.cattle.io/v1"
time="2025-05-16T15:13:53Z" level=error msg="Unable to fetch namespaced resources" error="unable to retrieve the complete list of server APIs: ext.cattle.io/v1: stale GroupVersion discovery: ext.cattle.io/v1"
time="2025-05-16T15:13:53Z" level=error msg="Unable to fetch namespaced resources" error="unable to retrieve the complete list of server APIs: ext.cattle.io/v1: stale GroupVersion discovery: ext.cattle.io/v1"
time="2025-05-16T15:13:53Z" level=error msg="Unable to fetch namespaced resources" error="unable to retrieve the complete list of server APIs: ext.cattle.io/v1: stale GroupVersion discovery: ext.cattle.io/v1"
time="2025-05-16T15:13:53Z" level=error msg="Unable to fetch namespaced resources" error="unable to retrieve the complete list of server APIs: ext.cattle.io/v1: stale GroupVersion discovery: ext.cattle.io/v1"
time="2025-05-16T15:13:53Z" level=error msg="Unable to fetch namespaced resources" error="unable to retrieve the complete list of server APIs: ext.cattle.io/v1: stale GroupVersion discovery: ext.cattle.io/v1"
time="2025-05-16T15:13:53Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/default/virt-launcher-kube004-dqv79/compute.log"
time="2025-05-16T15:13:53Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/default/virt-launcher-kube004-dqv79/guest-console-log.log"
time="2025-05-16T15:13:53Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/cloud-controller-manager-harvester003/cloud-controller-manager.log"
time="2025-05-16T15:13:53Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/cloud-controller-manager-harvester003/cloud-controller-manager.log.1"
time="2025-05-16T15:13:54Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/etcd-harvester003/etcd.log"
time="2025-05-16T15:13:54Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/harvester-snapshot-validation-webhook-8594c5f8f8-rb5nx/snapshot-validation-webhook.log"
time="2025-05-16T15:13:55Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/harvester-snapshot-validation-webhook-8594c5f8f8-zbfcj/snapshot-validation-webhook.log"
time="2025-05-16T15:13:55Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/harvester-whereabouts-6gr2x/whereabouts.log"
time="2025-05-16T15:13:55Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/harvester-whereabouts-6gr2x/whereabouts.log.1"
time="2025-05-16T15:13:56Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/harvester-whereabouts-6p8fg/whereabouts.log"
time="2025-05-16T15:13:56Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/harvester-whereabouts-6p8fg/whereabouts.log.1"
time="2025-05-16T15:13:56Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/helm-install-rke2-canal-29t8m/helm.log"
time="2025-05-16T15:13:57Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/helm-install-rke2-coredns-slr28/helm.log"
time="2025-05-16T15:13:57Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/helm-install-rke2-multus-ncxwq/helm.log"
time="2025-05-16T15:13:57Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/kube-apiserver-harvester003/kube-apiserver.log"
time="2025-05-16T15:13:58Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/kube-controller-manager-harvester003/kube-controller-manager.log"
time="2025-05-16T15:13:58Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/kube-controller-manager-harvester003/kube-controller-manager.log.1"
time="2025-05-16T15:13:59Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/kube-proxy-harvester001/kube-proxy.log"
time="2025-05-16T15:13:59Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/kube-proxy-harvester003/kube-proxy.log"
time="2025-05-16T15:13:59Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/kube-scheduler-harvester003/kube-scheduler.log"
time="2025-05-16T15:14:00Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/kube-scheduler-harvester003/kube-scheduler.log.1"
time="2025-05-16T15:14:00Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/rke2-canal-qh2cc/calico-node.log"
time="2025-05-16T15:14:00Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/rke2-canal-qh2cc/kube-flannel.log"
time="2025-05-16T15:14:01Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/rke2-canal-tmwz8/calico-node.log"
time="2025-05-16T15:14:01Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/rke2-canal-tmwz8/calico-node.log.1"
time="2025-05-16T15:14:01Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/rke2-canal-tmwz8/kube-flannel.log"
time="2025-05-16T15:14:02Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/rke2-canal-tmwz8/kube-flannel.log.1"
time="2025-05-16T15:14:02Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/rke2-coredns-rke2-coredns-7769c87bd-8klns/coredns.log"
time="2025-05-16T15:14:02Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/rke2-coredns-rke2-coredns-7769c87bd-m56rs/coredns.log"
time="2025-05-16T15:14:03Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/rke2-coredns-rke2-coredns-autoscaler-5b9b76dbf6-n9kn8/autoscaler.log"
time="2025-05-16T15:14:03Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/rke2-ingress-nginx-controller-h2dbz/rke2-ingress-nginx-controller.log"
time="2025-05-16T15:14:04Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/rke2-ingress-nginx-controller-h2dbz/rke2-ingress-nginx-controller.log.1"
time="2025-05-16T15:14:04Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/rke2-ingress-nginx-controller-hfcpp/rke2-ingress-nginx-controller.log"
time="2025-05-16T15:14:04Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/rke2-ingress-nginx-controller-hfcpp/rke2-ingress-nginx-controller.log.1"
time="2025-05-16T15:14:04Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/rke2-metrics-server-85479b695c-rqpqz/metrics-server.log"
time="2025-05-16T15:14:05Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/rke2-multus-8dllq/kube-rke2-multus.log"
time="2025-05-16T15:14:05Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/rke2-multus-8dllq/kube-rke2-multus.log.1"
time="2025-05-16T15:14:05Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/rke2-multus-9hw5n/kube-rke2-multus.log"
time="2025-05-16T15:14:06Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/snapshot-controller-5fb6d65787-lfjbt/snapshot-controller.log"
time="2025-05-16T15:14:06Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/kube-system/snapshot-controller-5fb6d65787-nmpsc/snapshot-controller.log"
time="2025-05-16T15:14:07Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-system/apply-hvst-upgrade-bckj7-cleanup-on-harvester001-with-310-9qd62/upgrade.log"
time="2025-05-16T15:14:07Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-system/apply-hvst-upgrade-bckj7-cleanup-on-harvester003-with-310-twfxs/upgrade.log"
time="2025-05-16T15:14:08Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-system/harvester-cluster-repo-558c6dd488-xjmt5/httpd.log"
time="2025-05-16T15:14:08Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-system/rancher-556698cf6f-6cqm8/rancher.log"
time="2025-05-16T15:14:08Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-system/rancher-556698cf6f-75mcg/rancher.log"
time="2025-05-16T15:14:09Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-system/rancher-webhook-5d578f98cf-9dtdj/rancher-webhook.log"
time="2025-05-16T15:14:09Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-system/system-upgrade-controller-6bb66b458b-nqhts/system-upgrade-controller.log"
time="2025-05-16T15:14:10Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-system/system-upgrade-controller-6bb66b458b-nqhts/system-upgrade-controller.log.1"
time="2025-05-16T15:14:10Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-fleet-local-system/fleet-agent-566b7845f-2vdwq/fleet-agent.log"
time="2025-05-16T15:14:11Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-fleet-system/fleet-controller-76fb9c5fbd-j67lp/fleet-controller.log"
time="2025-05-16T15:14:11Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-fleet-system/fleet-controller-76fb9c5fbd-j67lp/fleet-controller.log.1"
time="2025-05-16T15:14:11Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-fleet-system/fleet-controller-76fb9c5fbd-j67lp/fleet-cleanup.log"
time="2025-05-16T15:14:12Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-fleet-system/fleet-controller-76fb9c5fbd-j67lp/fleet-cleanup.log.1"
time="2025-05-16T15:14:12Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-fleet-system/fleet-controller-76fb9c5fbd-j67lp/fleet-agentmanagement.log"
time="2025-05-16T15:14:12Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-fleet-system/fleet-controller-76fb9c5fbd-j67lp/fleet-agentmanagement.log.1"
time="2025-05-16T15:14:13Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-fleet-system/gitjob-7bd6855c9b-jhcmj/gitjob.log"
time="2025-05-16T15:14:13Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-fleet-system/gitjob-7bd6855c9b-jhcmj/gitjob.log.1"
time="2025-05-16T15:14:14Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-monitoring-system/alertmanager-rancher-monitoring-alertmanager-0/alertmanager.log"
time="2025-05-16T15:14:14Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-monitoring-system/alertmanager-rancher-monitoring-alertmanager-0/config-reloader.log"
time="2025-05-16T15:14:14Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-monitoring-system/prometheus-rancher-monitoring-prometheus-0/prometheus.log"
time="2025-05-16T15:14:15Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-monitoring-system/prometheus-rancher-monitoring-prometheus-0/config-reloader.log"
time="2025-05-16T15:14:15Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-monitoring-system/prometheus-rancher-monitoring-prometheus-0/prometheus-proxy.log"
time="2025-05-16T15:14:16Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-monitoring-system/rancher-monitoring-grafana-7887776f6f-xwbhv/grafana-sc-dashboard.log"
time="2025-05-16T15:14:16Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-monitoring-system/rancher-monitoring-grafana-7887776f6f-xwbhv/grafana.log"
time="2025-05-16T15:14:16Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-monitoring-system/rancher-monitoring-grafana-7887776f6f-xwbhv/grafana-proxy.log"
time="2025-05-16T15:14:17Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-monitoring-system/rancher-monitoring-kube-state-metrics-596b877446-jcw6n/kube-state-metrics.log"
time="2025-05-16T15:14:17Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-monitoring-system/rancher-monitoring-operator-d5f56bff9-k4brp/rancher-monitoring.log"
time="2025-05-16T15:14:18Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-monitoring-system/rancher-monitoring-prometheus-adapter-9d8f566c4-9jszm/prometheus-adapter.log"
time="2025-05-16T15:14:18Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-monitoring-system/rancher-monitoring-prometheus-node-exporter-c8f9b/node-exporter.log"
time="2025-05-16T15:14:18Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-monitoring-system/rancher-monitoring-prometheus-node-exporter-c8f9b/node-exporter.log.1"
time="2025-05-16T15:14:19Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-monitoring-system/rancher-monitoring-prometheus-node-exporter-x6kwd/node-exporter.log"
time="2025-05-16T15:14:19Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-monitoring-system/rancher-monitoring-prometheus-node-exporter-x6kwd/node-exporter.log.1"
time="2025-05-16T15:14:20Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/cdi-apiserver-58bf7b8b74-spgs5/cdi-apiserver.log"
time="2025-05-16T15:14:20Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/cdi-deployment-6cf7d9b445-rnbx5/cdi-deployment.log"
time="2025-05-16T15:14:20Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/cdi-operator-7b6695c56d-fp799/cdi-operator.log"
time="2025-05-16T15:14:21Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/cdi-uploadproxy-68d59ccffb-hqt8c/cdi-uploadproxy.log"
time="2025-05-16T15:14:21Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-6586b5ff7b-bzvw5/apiserver.log"
time="2025-05-16T15:14:22Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-6586b5ff7b-c7w8n/apiserver.log"
time="2025-05-16T15:14:23Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-load-balancer-f6c5654f4-mvj5h/harvester-load-balancer.log"
time="2025-05-16T15:14:23Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-load-balancer-webhook-66794df8ff-tt9nt/harvester-load-balancer-webhook.log"
time="2025-05-16T15:14:23Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-network-controller-k5dnr/harvester-network.log"
time="2025-05-16T15:14:23Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-network-controller-k5dnr/harvester-network.log.1"
time="2025-05-16T15:14:23Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-network-controller-kkq8q/harvester-network.log"
time="2025-05-16T15:14:24Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-network-controller-kkq8q/harvester-network.log.1"
time="2025-05-16T15:14:24Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-network-controller-manager-b65465c9-5xqfq/harvester-network-manager.log"
time="2025-05-16T15:14:24Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-network-controller-manager-b65465c9-vfbwz/harvester-network-manager.log"
time="2025-05-16T15:14:25Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-network-webhook-6c55485b58-mdspn/harvester-network-webhook.log"
time="2025-05-16T15:14:25Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-networkfs-manager-kbzfv/harvester-networkfs-manager.log"
time="2025-05-16T15:14:26Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-networkfs-manager-kbzfv/harvester-networkfs-manager.log.1"
time="2025-05-16T15:14:26Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-networkfs-manager-p57q2/harvester-networkfs-manager.log"
time="2025-05-16T15:14:26Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-networkfs-manager-p57q2/harvester-networkfs-manager.log.1"
time="2025-05-16T15:14:26Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-node-disk-manager-8vgff/harvester-node-disk-manager.log"
time="2025-05-16T15:14:27Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-node-disk-manager-8vgff/harvester-node-disk-manager.log.1"
time="2025-05-16T15:14:27Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-node-disk-manager-m9f62/harvester-node-disk-manager.log"
time="2025-05-16T15:14:27Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-node-disk-manager-m9f62/harvester-node-disk-manager.log.1"
time="2025-05-16T15:14:28Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-node-disk-manager-webhook-55b8694747-285jf/harvester-node-disk-manager-webhook.log"
time="2025-05-16T15:14:28Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-node-manager-cjj97/node-manager.log"
time="2025-05-16T15:14:28Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-node-manager-cjj97/node-manager.log.1"
time="2025-05-16T15:14:29Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-node-manager-gr4gt/node-manager.log"
time="2025-05-16T15:14:29Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-node-manager-gr4gt/node-manager.log.1"
time="2025-05-16T15:14:29Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-node-manager-webhook-77d466598d-b2hxx/harvester-node-manager-webhook.log"
time="2025-05-16T15:14:30Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-node-manager-webhook-77d466598d-tjxkh/harvester-node-manager-webhook.log"
time="2025-05-16T15:14:30Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-vm-import-controller-7485d5dd56-dxnpf/harvester-vm-import-controller.log"
time="2025-05-16T15:14:30Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-webhook-588b5d84f5-76nq6/harvester-webhook.log"
time="2025-05-16T15:14:31Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/harvester-webhook-588b5d84f5-xkqh7/harvester-webhook.log"
time="2025-05-16T15:14:31Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/hvst-upgrade-bckj7-post-drain-harvester001-s9gs5/apply.log"
time="2025-05-16T15:14:32Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/hvst-upgrade-bckj7-post-drain-harvester003-xxpfq/apply.log"
time="2025-05-16T15:14:32Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/kube-vip-wt5j5/kube-vip.log"
time="2025-05-16T15:14:33Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/kube-vip-wt5j5/kube-vip.log.1"
time="2025-05-16T15:14:33Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/supportbundle-manager-bundle-5mexo-54bf9fd574-fr5z9/manager.log"
time="2025-05-16T15:14:33Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/svmb-c1ef3a33-1307-4c70-aca1-6ea90b4385a6-29119860-lckbl/svmb-c1ef3a33-1307-4c70-aca1-6ea90b4385a6.log"
time="2025-05-16T15:14:34Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/svmb-c1ef3a33-1307-4c70-aca1-6ea90b4385a6-29121300-6nthg/svmb-c1ef3a33-1307-4c70-aca1-6ea90b4385a6.log"
time="2025-05-16T15:14:34Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/svmb-c1ef3a33-1307-4c70-aca1-6ea90b4385a6-29122740-xmj8b/svmb-c1ef3a33-1307-4c70-aca1-6ea90b4385a6.log"
time="2025-05-16T15:14:35Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/virt-api-745c68d9f-dksl4/virt-api.log"
time="2025-05-16T15:14:35Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/virt-api-745c68d9f-mbd4n/virt-api.log"
time="2025-05-16T15:14:35Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/virt-controller-5dd599df-22w9p/virt-controller.log"
time="2025-05-16T15:14:36Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/virt-controller-5dd599df-gl5dw/virt-controller.log"
time="2025-05-16T15:14:36Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/virt-handler-jcrt6/virt-handler.log"
time="2025-05-16T15:14:37Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/virt-handler-jcrt6/virt-handler.log.1"
time="2025-05-16T15:14:37Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/virt-handler-pmrb7/virt-handler.log"
time="2025-05-16T15:14:37Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/virt-handler-pmrb7/virt-handler.log.1"
time="2025-05-16T15:14:37Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/harvester-system/virt-operator-7888b89b5-klgkm/virt-operator.log"
time="2025-05-16T15:14:38Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/backing-image-manager-8494-378e/backing-image-manager.log"
time="2025-05-16T15:14:39Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/backing-image-manager-8494-cddb/backing-image-manager.log"
time="2025-05-16T15:14:39Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/csi-attacher-867869c54d-krjbx/csi-attacher.log"
time="2025-05-16T15:14:39Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/csi-attacher-867869c54d-mnd5n/csi-attacher.log"
time="2025-05-16T15:14:40Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/csi-attacher-867869c54d-xl444/csi-attacher.log"
time="2025-05-16T15:14:40Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/csi-provisioner-8846bb87-9kv5g/csi-provisioner.log"
time="2025-05-16T15:14:41Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/csi-provisioner-8846bb87-ldmhz/csi-provisioner.log"
time="2025-05-16T15:14:41Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/csi-provisioner-8846bb87-zvcn9/csi-provisioner.log"
time="2025-05-16T15:14:41Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/csi-resizer-76679bb7f8-5zd6l/csi-resizer.log"
time="2025-05-16T15:14:42Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/csi-resizer-76679bb7f8-lvcxh/csi-resizer.log"
time="2025-05-16T15:14:42Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/csi-resizer-76679bb7f8-rnt7f/csi-resizer.log"
time="2025-05-16T15:14:43Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/csi-snapshotter-58c98ccf94-9cnr9/csi-snapshotter.log"
time="2025-05-16T15:14:43Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/csi-snapshotter-58c98ccf94-ltjq4/csi-snapshotter.log"
time="2025-05-16T15:14:43Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/csi-snapshotter-58c98ccf94-zkwsb/csi-snapshotter.log"
time="2025-05-16T15:14:44Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/engine-image-ei-51cc7b9c-sn5hl/engine-image-ei-51cc7b9c.log"
time="2025-05-16T15:14:44Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/engine-image-ei-51cc7b9c-sn5hl/engine-image-ei-51cc7b9c.log.1"
time="2025-05-16T15:14:44Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/engine-image-ei-51cc7b9c-tdj6j/engine-image-ei-51cc7b9c.log"
time="2025-05-16T15:14:45Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/engine-image-ei-51cc7b9c-tdj6j/engine-image-ei-51cc7b9c.log.1"
time="2025-05-16T15:14:45Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/engine-image-ei-db6c2b6f-2tnm2/engine-image-ei-db6c2b6f.log"
time="2025-05-16T15:14:45Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/engine-image-ei-db6c2b6f-2tnm2/engine-image-ei-db6c2b6f.log.1"
time="2025-05-16T15:14:46Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/engine-image-ei-db6c2b6f-5wzq6/engine-image-ei-db6c2b6f.log"
time="2025-05-16T15:14:46Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/engine-image-ei-db6c2b6f-5wzq6/engine-image-ei-db6c2b6f.log.1"
time="2025-05-16T15:14:46Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/instance-manager-6cd61b2e5d4b2d4263b4eef12e449c2d/instance-manager.log"
time="2025-05-16T15:14:47Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/instance-manager-ecad7f7d3a2682bd91344371908f4d91/instance-manager.log"
time="2025-05-16T15:14:47Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-csi-plugin-29jcz/node-driver-registrar.log"
time="2025-05-16T15:14:47Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-csi-plugin-29jcz/node-driver-registrar.log.1"
time="2025-05-16T15:14:48Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-csi-plugin-29jcz/longhorn-liveness-probe.log"
time="2025-05-16T15:14:48Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-csi-plugin-29jcz/longhorn-liveness-probe.log.1"
time="2025-05-16T15:14:48Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-csi-plugin-29jcz/longhorn-csi-plugin.log"
time="2025-05-16T15:14:49Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-csi-plugin-29jcz/longhorn-csi-plugin.log.1"
time="2025-05-16T15:14:49Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-csi-plugin-2wnxz/node-driver-registrar.log"
time="2025-05-16T15:14:49Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-csi-plugin-2wnxz/node-driver-registrar.log.1"
time="2025-05-16T15:14:49Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-csi-plugin-2wnxz/longhorn-liveness-probe.log"
time="2025-05-16T15:14:50Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-csi-plugin-2wnxz/longhorn-liveness-probe.log.1"
time="2025-05-16T15:14:50Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-csi-plugin-2wnxz/longhorn-csi-plugin.log"
time="2025-05-16T15:14:50Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-csi-plugin-2wnxz/longhorn-csi-plugin.log.1"
time="2025-05-16T15:14:51Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-driver-deployer-597c97c75b-ns72l/longhorn-driver-deployer.log"
time="2025-05-16T15:14:51Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-loop-device-cleaner-4bx9f/longhorn-loop-device-cleaner.log"
time="2025-05-16T15:14:52Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-loop-device-cleaner-4bx9f/longhorn-loop-device-cleaner.log.1"
time="2025-05-16T15:14:52Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-loop-device-cleaner-qmfqw/longhorn-loop-device-cleaner.log"
time="2025-05-16T15:14:53Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-loop-device-cleaner-qmfqw/longhorn-loop-device-cleaner.log.1"
time="2025-05-16T15:14:53Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-manager-mdkvh/longhorn-manager.log"
time="2025-05-16T15:14:54Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-manager-mdkvh/longhorn-manager.log.1"
time="2025-05-16T15:14:54Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-manager-mdkvh/pre-pull-share-manager-image.log"
time="2025-05-16T15:14:54Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-manager-mdkvh/pre-pull-share-manager-image.log.1"
time="2025-05-16T15:14:54Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-manager-pjfgq/longhorn-manager.log"
time="2025-05-16T15:14:54Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-manager-pjfgq/longhorn-manager.log.1"
time="2025-05-16T15:14:54Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-manager-pjfgq/pre-pull-share-manager-image.log"
time="2025-05-16T15:14:54Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-manager-pjfgq/pre-pull-share-manager-image.log.1"
time="2025-05-16T15:14:55Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-ui-c55c69896-m5bqk/longhorn-ui.log"
time="2025-05-16T15:14:55Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/longhorn-system/longhorn-ui-c55c69896-zz7lt/longhorn-ui.log"
time="2025-05-16T15:14:56Z" level=debug msg="Prepare to log to file: /tmp/support-bundle-kit/bundle/logs/cattle-provisioning-capi-system/capi-controller-manager-d55f9f8d-c8mf9/manager.log"
time="2025-05-16T15:14:56Z" level=info msg="Succeed to run phase cluster bundle. Progress (33)."
time="2025-05-16T15:14:56Z" level=info msg="Running phase node bundle"
time="2025-05-16T15:14:56Z" level=debug msg="Creating daemonset supportbundle-agent-bundle-5mexo with image rancher/support-bundle-kit:v0.0.46"
time="2025-05-16T15:14:57Z" level=debug msg="Waiting for the creation of agent DaemonSet Pods for scheduled node names collection"
time="2025-05-16T15:14:57Z" level=debug msg="Expecting bundles from nodes: map[harvester001: harvester003:]"
time="2025-05-16T15:15:45Z" level=debug msg="Handle create node bundle for harvester003"
time="2025-05-16T15:15:45Z" level=debug msg="Complete node harvester003"
time="2025-05-16T15:15:53Z" level=debug msg="Handle create node bundle for harvester001"
time="2025-05-16T15:15:53Z" level=debug msg="Complete node harvester001"
time="2025-05-16T15:15:53Z" level=debug msg="All nodes are completed"
time="2025-05-16T15:15:53Z" level=info msg="All node bundles are received."
time="2025-05-16T15:15:53Z" level=info msg="Succeed to run phase node bundle. Progress (50)."
time="2025-05-16T15:15:53Z" level=info msg="Running phase prometheus bundle"
time="2025-05-16T15:15:53Z" level=info msg="Succeed to run phase prometheus bundle. Progress (66)."
time="2025-05-16T15:15:53Z" level=info msg="Running phase package"
time="2025-05-16T15:15:54Z" level=info msg="Succeed to run phase package. Progress (83)."
time="2025-05-16T15:15:54Z" level=info msg="Running phase done"
time="2025-05-16T15:15:54Z" level=info msg="Support bundle /tmp/support-bundle-kit/supportbundle_5351197f-4e80-4b39-99a7-44607b6fdbd8_2025-05-16T15-13-53Z.zip ready to download"
time="2025-05-16T15:15:54Z" level=info msg="Succeed to run phase done. Progress (100)."
s
Hi @sticky-summer-13450, sorry for the late reply. What is the problem with the VM namespace? Could you try to collect the VM with the corresponding namespace through the support bundle? If you still encounter the issue where the yamls folder is missing, could you try to refresh the certificates again like you did above? Or create a GH issue - that’s weird…
r
regarding the missing yamls folder situation, we’ve created a GH issue under the SBK project, see https://github.com/rancher/support-bundle-kit/issues/137
👍 2
s
Hi @salmon-city-57654 > If you still encounter the issue that the
yaml
folder is missing, could you try to refresh the certificates again like you do above? I have refreshed the certs again, and waited a few hours in case that was needed. • The first time I made an SB it did not have a
yamls
folder. • The second time I made an SB it did have a
yamls
folder. • The third time I made an SB it did not have a
yamls
folder. • The fourth time I made an SB it did not have a
yamls
folder. This is very odd. It should not be indeterminate: it should either work or not work.
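(For what it's worth, a quick, non-destructive way to check each archive for the folder without fully extracting it is something like this.)
Copy code
# List the archive contents and look for a yamls/ directory
unzip -l supportbundle_*.zip | grep 'yamls/' | head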
Hi @salmon-city-57654
What is the problem with the VM namespace?
I have no idea - I haven't mentioned there being a problem with the VM namespace.
Could you try to collect the VM with the corresponding namespace through the support bundle?
I don't know what you mean. Let me recap what I know: • I have some VMs which were working - running right up to the upgrade from 1.4.2 to 1.5.0 - which will not start now. One is in the starting state in the SB I'm about to attach. • I have daily backups of the VMs which cannot be started, but they cannot be restored because the VLAN definition apparently cannot be found
admission webhook "<http://validator.harvesterhci.io|validator.harvesterhci.io>" denied the request: Failed to get network attachment definition vlan640, err: <http://network-attachment-definitions.k8s.cni.cncf.io|network-attachment-definitions.k8s.cni.cncf.io> "vlan640" not found
even though the VLANs exist perfectly well.
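(That the NADs do exist, and which namespace they live in, can be confirmed with something like this:)
Copy code
# List all NetworkAttachmentDefinitions across namespaces; note the NAMESPACE column
kubectl get network-attachment-definitions.k8s.cni.cncf.io -A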
I guess there was nothing you could find in the SB
s
Sorry @sticky-summer-13450, I missed that update. After checking
• I have some VMs which were working - running right up to the upgrade from 1.4.2 to 1.5.0 - which will not start now. One is in the starting state in the SB I'm about to attach.
I saw some errors below:
Copy code
2025-05-09T18:33:42.827813286Z time="2025-05-09T18:33:42Z" level=warning msg="Unable to create new replica pvc-df2594d5-7f12-4bd4-907b-729f3828cda8-r-3dade017" func="controller.(*VolumeController).replenishReplicas" file="volume_controller.go:2328" accessMode=rwx controller=longhorn-volume error="No available disk candidates to create a new replica of size 34359738368" frontend=blockdev migratable=true node=harvester003 owner=harvester003 shareEndpoint= shareState= state=attached volume=pvc-df2594d5-7f12-4bd4-907b-729f3828cda8
2025-05-09T18:33:42.854694473Z time="2025-05-09T18:33:42Z" level=warning msg="Unable to create new replica pvc-df2594d5-7f12-4bd4-907b-729f3828cda8-r-c8f2a9f4" func="controller.(*VolumeController).replenishReplicas" file="volume_controller.go:2328" accessMode=rwx controller=longhorn-volume error="No available disk candidates to create a new replica of size 34359738368" frontend=blockdev migratable=true node=harvester003 owner=harvester003 shareEndpoint= shareState= state=attached volume=pvc-df2594d5-7f12-4bd4-907b-729f3828cda8
It seems Longhorn wants to rebuild the replica but fails. Could you check the disk usage on the node
harvester003
?
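A couple of ways to check it, as a rough sketch (the data path below is the usual Harvester default, so treat it as an assumption for your setup):
Copy code
# Longhorn's view of the node's disks: compare storageAvailable with storageScheduled
kubectl -n longhorn-system get nodes.longhorn.io harvester003 -o yaml
# Raw filesystem usage, run on harvester003 itself (assumes the default Harvester data path)
df -h /var/lib/harvester/defaultdisk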
• I have daily backups of the VMs which cannot be started, but they cannot be restored because the VLAN definition apparently cannot be found
admission webhook "<http://validator.harvesterhci.io|validator.harvesterhci.io>" denied the request: Failed to get network attachment definition vlan640, err: <http://network-attachment-definitions.k8s.cni.cncf.io|network-attachment-definitions.k8s.cni.cncf.io> "vlan640" not found
even though the VLANs exist perfectly well.
This was weird. I checked the code logic. It should not fail here. What is the restore source name?
s
I'm sorry, @salmon-city-57654 - I don't remember which backup I was restoring. It may be important to say that I was restoring to a new VM - using
Restore New
- not restoring over the existing VM with
Restore Existing
.
s
Hmm, anyway, that’s weird… Does it still happen in your environment? I mean the
admission webhook "<http://validator.harvesterhci.io|validator.harvesterhci.io>" denied the request: Failed to get network attachment definition vlan640, err: <http://network-attachment-definitions.k8s.cni.cncf.io|network-attachment-definitions.k8s.cni.cncf.io> "vlan640" not found
s
Yes, it still does it in my environment:
s
Can you try to reproduce it again and generate an SB after that?
s
Sure - here's an SB. I tried the restore at 18:30 ish. I've tried extracting the SB 5 times, including following https://github.com/harvester/harvester/issues/3863#issuecomment-1539681311, to get the
yamls
folder in the SB. But no luck.
I guess this SB wasn't usable to you.
I guess if this cluster is not able to give you any useful information to fix issues then I'd better trash it and accept the loss of the VMs
s
Hi @sticky-summer-13450, sorry, I have been busy with other stuff. I just saw this and am checking it now.
We cannot load the above SB because of the following error:
Copy code
FATA[0005] Error loading cluster scoped objects error during dir walk lstat supportbundle_5351197f-4e80-4b39-99a7-44607b6fdbd8_2025-06-17T17-46-16Z/yamls/cluster: no such file or directory
From the screenshot of the above error, I rechecked the code. It’s weird because it’s just a simple resource get. I will confirm with my colleagues tomorrow.
@magnificent-pencil-261 could you help with that?
m
I think it might be related to the
networkName
being
vlan640
, which lacks a namespace. Regarding the VMBackup mentioned above:
Copy code
apiVersion: <http://harvesterhci.io/v1beta1|harvesterhci.io/v1beta1>
  kind: VirtualMachineBackup
  metadata:
    name: svmb-ecd8ef47-2636-435e-99b5-76d862c7f6f9-20250426.0130
    namespace: default
  spec:
    source:
      apiGroup: <http://kubevirt.io|kubevirt.io>
      kind: VirtualMachine
      name: nagioskube001
    type: backup
  status:
    readyToUse: true
    source:
      metadata:
        name: nagioskube001
        namespace: default
      spec:
        runStrategy: RerunOnFailure
        template:
          metadata:
            annotations: {}
          spec:
            architecture: amd64
            evictionStrategy: LiveMigrate
            hostname: nagioskube001
            networks:
            - multus:
                networkName: vlan640
              name: nic-1
            volumes:
            - name: rootdisk
              persistentVolumeClaim:
                claimName: nagioskube001-rootdisk-4gxcx
In the VMRestore webhook, it attempts to parse the
networkName
, but in this case, it ends up with
namespace = ""
and
name = vlan640
. • Validator code • Parsing logic (`api_id.go`). The networkName is derived from the VM manifest during the restore process.
Copy code
apiVersion: <http://kubevirt.io/v1|kubevirt.io/v1>
  kind: VirtualMachine
  metadata:
    labels:
      <http://harvesterhci.io/creator|harvesterhci.io/creator>: terraform-provider-harvester
      <http://harvesterhci.io/vmName|harvesterhci.io/vmName>: nagioskube001
      <http://tag.harvesterhci.io/ssh-user|tag.harvesterhci.io/ssh-user>: ubuntu
    name: nagioskube001
    namespace: default
    resourceVersion: "1918"
  spec:
    runStrategy: Halted
    template:
      metadata:
        annotations: {}
      spec:
        architecture: amd64
        evictionStrategy: LiveMigrate
        hostname: nagioskube001
        networks:
        - multus:
            networkName: vlan640
          name: nic-1
        - multus:
            networkName: vlan648
          name: nic-2
        - multus:
            networkName: vlan64
          name: nic-3
        - multus:
            networkName: vlan364
          name: nic-4
        terminationGracePeriodSeconds: 120
The VM used a NAD named
vlan640
, but in my environment it's usually formatted like
default/mgmt-vlan
. For example:
Copy code
apiVersion: <http://kubevirt.io/v1|kubevirt.io/v1>
  kind: VirtualMachine
  metadata:
    labels:
      <http://harvesterhci.io/creator|harvesterhci.io/creator>: docker-machine-driver-harvester
      <http://harvesterhci.io/vmName|harvesterhci.io/vmName>: guest1-pool1-x2cdd-rf2sc
    name: guest1-pool1-x2cdd-rf2sc
    namespace: default
  spec:
    runStrategy: RerunOnFailure
    template:
      metadata:
        annotations:
          <http://harvesterhci.io/sshNames|harvesterhci.io/sshNames>: '[]'
          <http://harvesterhci.io/waitForLeaseInterfaceNames|harvesterhci.io/waitForLeaseInterfaceNames>: '[]'
        creationTimestamp: null
        labels:
          <http://harvesterhci.io/creator|harvesterhci.io/creator>: docker-machine-driver-harvester
          <http://harvesterhci.io/vmName|harvesterhci.io/vmName>: guest1-pool1-x2cdd-rf2sc
      spec:
        networks:
        - multus:
            networkName: default/mgmt-vlan
          name: nic-0
        terminationGracePeriodSeconds: 120
I think the issue in the VMRestore webhook is caused by the VM manifest’s NAD missing the namespace. The NAD reference in the manifest uses a different format, which might be due to how the VM was created — possibly through the Terraform provider. cc @salmon-city-57654 @bland-farmer-13503
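(As a rough check, the networkName values that a given VMBackup actually recorded can be printed with a jsonpath query like the one below, using the backup shown above.)
Copy code
# Print the networkName values stored in the VMBackup's saved VM spec
kubectl -n default get virtualmachinebackup \
  svmb-ecd8ef47-2636-435e-99b5-76d862c7f6f9-20250426.0130 \
  -o jsonpath='{.status.source.spec.template.spec.networks[*].multus.networkName}'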
👍 1
s
Thanks @magnificent-pencil-261! I noticed that, but I thought the NS defaults to default when it is empty. Seems I was wrong. 🤦
👍 1
m
This VM was created with Terraform; we can check whether that is why the NAD reference has a different format. We should also improve the parser in the VMRestore webhook.
Hi @sticky-summer-13450 For the VMBackup
svmb-ecd8ef47-2636-435e-99b5-76d862c7f6f9-20250426.0130
, could you try adding the namespace to all
networkName
fields under
spec.template.spec.networks
, and then attempt to create a VMRestore from it?
s
Hi @magnificent-pencil-261. Yes, the VMs were created with the Harvester Terraform provider - v0.6.1 to be exact. I've interpreted your request as
kubectl edit VirtualMachineBackup svmb-ecd8ef47-2636-435e-99b5-76d862c7f6f9-20250426.0130
to add
default/
to the
networkName
values. Yes, I did that, and it solved the
vlan640
not-found issue. I hope that helps improve Harvester. After 10 minutes the restore is still at 0% complete, and in the cluster events I see lots of warnings.
Copy code
LAST SEEN   TYPE      REASON                 OBJECT                                                                                                                                MESSAGE
17m         Normal    NoPods                 poddisruptionbudget/kubevirt-disruption-budget-w25tb                                                                                  No matching pods found
4m          Warning   VolumeFailedDelete     persistentvolume/pvc-9997c9fd-07da-4ec2-b159-edf66da1c736                                                                             rpc error: code = DeadlineExceeded desc = failed to delete volume pvc-9997c9fd-07da-4ec2-b159-edf66da1c736
2m21s       Normal    ExternalProvisioning   persistentvolumeclaim/restore-svmb-ecd8ef47-2636-435e-99b5-76d862c7f6f9-20250426.0130-4686e3b9-9768-4778-945f-b2980821348f-rootdisk   Waiting for a volume to be created either by the external provisioner '<http://driver.longhorn.io|driver.longhorn.io>' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
3m53s       Normal    Provisioning           persistentvolumeclaim/restore-svmb-ecd8ef47-2636-435e-99b5-76d862c7f6f9-20250426.0130-4686e3b9-9768-4778-945f-b2980821348f-rootdisk   External provisioner is provisioning volume for claim "default/restore-svmb-ecd8ef47-2636-435e-99b5-76d862c7f6f9-20250426.0130-4686e3b9-9768-4778-945f-b2980821348f-rootdisk"
8m53s       Warning   ProvisioningFailed     persistentvolumeclaim/restore-svmb-ecd8ef47-2636-435e-99b5-76d862c7f6f9-20250426.0130-4686e3b9-9768-4778-945f-b2980821348f-rootdisk   failed to provision volume with StorageClass "longhorn-image-lf8cl": rpc error: code = Internal desc = Bad response statusCode [500]. Status [500 Internal Server Error]. Body: [code=Internal Server Error, detail=, message=failed to create volume: unable to create volume pvc-bf59d8a7-15b7-4e07-81cd-44f25729b1b8: admission webhook "<http://mutator.longhorn.io|mutator.longhorn.io>" denied the request: cannot get backup volume for backup target  and volume pvc-44bbacfc-b76d-46fa-b9e8-1a30e4bdb0d9: backup target name and volume name cannot be empty] from [<http://longhorn-backend:9500/v1/volumes>]
3m53s       Warning   ProvisioningFailed     persistentvolumeclaim/restore-svmb-ecd8ef47-2636-435e-99b5-76d862c7f6f9-20250426.0130-4686e3b9-9768-4778-945f-b2980821348f-rootdisk   failed to provision volume with StorageClass "longhorn-image-lf8cl": rpc error: code = Internal desc = Bad response statusCode [500]. Status [500 Internal Server Error]. Body: [message=failed to create volume: unable to create volume pvc-bf59d8a7-15b7-4e07-81cd-44f25729b1b8: admission webhook "<http://mutator.longhorn.io|mutator.longhorn.io>" denied the request: cannot get backup volume for backup target  and volume pvc-44bbacfc-b76d-46fa-b9e8-1a30e4bdb0d9: backup target name and volume name cannot be empty, code=Internal Server Error, detail=] from [<http://longhorn-backend:9500/v1/volumes>]
13m         Warning   ProvisioningFailed     persistentvolumeclaim/restore-svmb-ecd8ef47-2636-435e-99b5-76d862c7f6f9-20250426.0130-4686e3b9-9768-4778-945f-b2980821348f-rootdisk   failed to provision volume with StorageClass "longhorn-image-lf8cl": rpc error: code = Internal desc = Bad response statusCode [500]. Status [500 Internal Server Error]. Body: [detail=, message=failed to create volume: unable to create volume pvc-bf59d8a7-15b7-4e07-81cd-44f25729b1b8: admission webhook "<http://mutator.longhorn.io|mutator.longhorn.io>" denied the request: cannot get backup volume for backup target  and volume pvc-44bbacfc-b76d-46fa-b9e8-1a30e4bdb0d9: backup target name and volume name cannot be empty, code=Internal Server Error] from [<http://longhorn-backend:9500/v1/volumes>]
17m         Normal    SuccessfulCreate       virtualmachine/test-restore-nagioskube                                                                                                Started the virtual machine by creating the new virtual machine instance test-restore-nagioskube
17m         Normal    SuccessfulCreate       virtualmachineinstance/test-restore-nagioskube                                                                                        Created PodDisruptionBudget kubevirt-disruption-budget-w25tb
17m         Normal    SuccessfulCreate       virtualmachineinstance/test-restore-nagioskube                                                                                        Created virtual machine pod virt-launcher-test-restore-nagioskube-2g9vz
17m         Warning   FailedScheduling       pod/virt-launcher-test-restore-nagioskube-2g9vz                                                                                       0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
7m8s        Warning   FailedScheduling       pod/virt-launcher-test-restore-nagioskube-2g9vz                                                                                       0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
I think it's time to burn this cluster and the VMs, and start again.
m
It seems the Longhorn backup no longer exists on the remote target.
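(A rough way to cross-check what Longhorn itself can see on the target, assuming the standard Longhorn CRDs are available in the cluster:)
Copy code
# The configured backup target(s) and whether Longhorn reports them as available
kubectl -n longhorn-system get backuptargets.longhorn.io
# The backup volumes and individual backups Longhorn has synced from that target
kubectl -n longhorn-system get backupvolumes.longhorn.io
kubectl -n longhorn-system get backups.longhorn.io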
s
I can see stuff in my NAS where Harvester stored it
m
In my opinion, Harvester appears to be functioning correctly, but the remote target is missing the necessary data.
Can you successfully create a new VMBackup for the same VM
nagioskube001
?
s
The second problem in this thread was that Harvester could not start the VMs because Longhorn could not provide the disk images. This is why I needed to restore from backups. So, no, I cannot create a new backup of the VM.
m
> The second problem in this thread was that Harvester could not start the VMs because Longhorn could provide the disk images The VM can't start because LH could provide the disk images?
s
Oops - edited to fix missed word