full-crayon-745
02/27/2023, 10:49 AM
handsome-monitor-68857
02/27/2023, 3:24 PM
handsome-monitor-68857
02/27/2023, 4:14 PM
[INFO ] waiting for viable init node
[INFO ] configuring bootstrap node(s) cluster08-pool08-69c7d74894-776ff: waiting for agent to check in and apply initial plan
[INFO ] configuring bootstrap node(s) cluster08-pool08-69c7d74894-776ff: waiting for probes: calico, etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet
[INFO ] configuring bootstrap node(s) cluster08-pool08-69c7d74894-776ff: waiting for probes: calico, etcd, kube-apiserver, kube-controller-manager, kube-scheduler
[INFO ] configuring bootstrap node(s) cluster08-pool08-69c7d74894-776ff: waiting for probes: calico, kube-apiserver, kube-controller-manager, kube-scheduler
[INFO ] configuring bootstrap node(s) cluster08-pool08-69c7d74894-776ff: waiting for probes: calico
[INFO ] configuring bootstrap node(s) cluster08-pool08-69c7d74894-776ff: waiting for probes: calico, etcd, kube-apiserver
[INFO ] configuring bootstrap node(s) cluster08-pool08-69c7d74894-776ff: waiting for probes: calico, kube-controller-manager, kube-scheduler
[INFO ] configuring bootstrap node(s) cluster08-pool08-69c7d74894-776ff: waiting for probes: calico
[INFO ] configuring bootstrap node(s) cluster08-pool08-69c7d74894-776ff: waiting for probes: calico, etcd, kube-apiserver
[INFO ] configuring bootstrap node(s) cluster08-pool08-69c7d74894-776ff: waiting for probes: calico, kubelet
[INFO ] configuring bootstrap node(s) cluster08-pool08-69c7d74894-776ff: waiting for probes: calico
[INFO ] configuring bootstrap node(s) cluster08-pool08-69c7d74894-776ff: waiting for probes: calico, kube-apiserver
[INFO ] configuring bootstrap node(s) cluster08-pool08-69c7d74894-776ff: waiting for probes: calico
[INFO ] configuring bootstrap node(s) cluster08-pool08-69c7d74894-776ff: waiting for probes: calico, etcd, kube-apiserver
[INFO ] configuring bootstrap node(s) cluster08-pool08-69c7d74894-776ff: waiting for probes: calico, kube-controller-manager, kube-scheduler
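A minimal way to cross-check those probes directly on the bootstrap node (a sketch, assuming SSH access to cluster08-pool08-69c7d74894-776ff and an RKE2-provisioned pool; the paths below are the RKE2 defaults):
# is the RKE2 server up, and what is it complaining about?
systemctl status rke2-server
journalctl -u rke2-server --no-pager | tail -n 50
# the probed components (etcd, kube-apiserver, calico, ...) run as pods; check them with the node-local kubeconfig
/var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get pods -n kube-system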
rough-insurance-93926
02/27/2023, 5:34 PM
rough-insurance-93926
02/27/2023, 5:34 PM
stale-painting-80203
02/27/2023, 9:29 PM
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m57s default-scheduler Successfully assigned default/virt-launcher-sle-15-sp4-base-9zjw2 to harvester-01
Warning FailedMount 3m54s kubelet Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-1], unattached volumes=[private disk-0 cloudinitdisk-ndata container-disks hotplug-disks sockets cloudinitdisk-udata public ephemeral-disks libvirt-runtime disk-1]: timed out waiting for the condition
Warning FailedAttachVolume 102s (x10 over 5m57s) attachdetach-controller AttachVolume.Attach failed for volume "pvc-87fe9694-9c15-49fd-adc5-2bbe7337ce85" : PersistentVolume "pvc-87fe9694-9c15-49fd-adc5-2bbe7337ce85" is marked for deletion
Warning FailedAttachVolume 102s (x10 over 5m57s) attachdetach-controller AttachVolume.Attach failed for volume "pvc-e9108404-e61d-402a-a553-62610bd987a9" : PersistentVolume "pvc-e9108404-e61d-402a-a553-62610bd987a9" is marked for deletion
Warning FailedMount 99s kubelet Unable to attach or mount volumes: unmounted volumes=[disk-0 disk-1], unattached volumes=[public hotplug-disks libvirt-runtime cloudinitdisk-ndata sockets disk-0 private ephemeral-disks container-disks cloudinitdisk-udata disk-1]: timed out waiting for the condition
Did a describe on the pvc:
Name: sle-15-sp4-base-disk-0-6oyol
Namespace: default
StorageClass: longhorn-image-d48sr
Status: Bound
Volume: pvc-87fe9694-9c15-49fd-adc5-2bbe7337ce85
Labels: <none>
Annotations:   harvesterhci.io/imageId: default/image-d48sr
               harvesterhci.io/owned-by: [{"schema":"kubevirt.io.virtualmachine","refs":["default/sle-15-sp4-base"]}]
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: driver.longhorn.io
               volume.kubernetes.io/storage-provisioner: driver.longhorn.io
Finalizers:    [kubernetes.io/pvc-protection provisioner.storage.kubernetes.io/cloning-protection]
Capacity: 20Gi
Access Modes: RWX
VolumeMode: Block
Used By: virt-launcher-sle-15-sp4-base-9zjw2
Events: <none>
Name: sle-15-sp4-base-disk-1-upd7u
Namespace: default
StorageClass: harvester-longhorn
Status: Bound
Volume: pvc-e9108404-e61d-402a-a553-62610bd987a9
Labels: <none>
Annotations:   harvesterhci.io/owned-by: [{"schema":"kubevirt.io.virtualmachine","refs":["default/sle-15-sp4-base"]}]
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: driver.longhorn.io
               volume.kubernetes.io/storage-provisioner: driver.longhorn.io
Finalizers:    [kubernetes.io/pvc-protection provisioner.storage.kubernetes.io/cloning-protection]
Capacity: 60Gi
Access Modes: RWX
VolumeMode: Block
Used By: virt-launcher-sle-15-sp4-base-9zjw2
Events: <none>
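Since both PVCs show Bound while the attach errors say the PersistentVolumes are marked for deletion, checking the PV objects themselves for a deletionTimestamp and leftover finalizers (and the backing Longhorn volumes) is a reasonable next step. A sketch, using the PV names from the events above:
kubectl get pv pvc-87fe9694-9c15-49fd-adc5-2bbe7337ce85 pvc-e9108404-e61d-402a-a553-62610bd987a9 \
  -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,DELETED:.metadata.deletionTimestamp,FINALIZERS:.metadata.finalizers
# cross-check the backing Longhorn volumes
kubectl -n longhorn-system get volumes.longhorn.io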
billowy-country-12148
02/28/2023, 12:12 AM
I used CTRL + ALT + F2 to switch to the console TTY per the troubleshooting guide, https://docs.harvesterhci.io/v1.1/troubleshooting/installation. There is a default route and working DNS, and I can ping and resolve things such as google.com. But it seems that K3S just isn't installed, so everything related to docker is throwing an error expecting k3s to exist? The troubleshooting doc has nada to say about this issue.
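From that console TTY it may also help to confirm whether the embedded Kubernetes services ever started; current Harvester runs RKE2 rather than K3s, so errors about a missing k3s binary can be misleading. A rough sketch (unit names may vary by Harvester version):
systemctl status rancherd rke2-server
journalctl -u rancherd -b --no-pager | tail -n 50     # rancherd drives the initial cluster bootstrap
journalctl -u rke2-server -b --no-pager | tail -n 50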
stale-painting-80203
02/28/2023, 1:37 AM
iptables -t nat -A POSTROUTING -o mgmt-br -j MASQUERADE
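To verify the suggested rule actually matches traffic once added, the packet counters and the outbound route can be checked; a sketch, not Harvester-specific:
iptables -t nat -L POSTROUTING -n -v --line-numbers   # byte/packet counters show whether the MASQUERADE rule is hit
ip route get 8.8.8.8                                  # confirm which interface outbound traffic uses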
billowy-country-12148
02/28/2023, 3:35 AM
fancy-appointment-4748
02/28/2023, 2:02 PM
stale-painting-80203
02/28/2023, 5:53 PM
little-dress-13576
02/28/2023, 6:12 PM
witty-jelly-95845
02/28/2023, 11:03 PM
witty-jelly-95845
02/28/2023, 11:03 PM
bright-fireman-42144
03/01/2023, 1:46 AM
billowy-country-12148
03/01/2023, 10:17 PM
little-dress-13576
03/02/2023, 12:40 AM
flat-finland-50817
03/02/2023, 10:06 AM
waiting for cluster agent to connect
... I was able to log into the VM, rke2-server and rancher-system-agent are both running fine, and I'm able to curl the rancher https endpoint (I'm using self-signed certificates). What am I missing, and any idea on how I can debug this?
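For a cluster stuck at "waiting for cluster agent to connect", a few places usually worth checking on the node and in the downstream cluster (a sketch, assuming RKE2 and the default cattle-system namespace; with self-signed certificates the agent also has to trust the Rancher CA):
journalctl -u rancher-system-agent --no-pager | tail -n 50   # plan application / registration errors
journalctl -u rke2-server --no-pager | tail -n 50
/var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml -n cattle-system get pods
/var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml -n cattle-system logs deploy/cattle-cluster-agent | tail -n 50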
rapid-flag-87720
03/03/2023, 7:40 PM
harvester-management:/home/rancher # kubectl get vmwaresource.migration
NAME STATUS
vcsim clusterNotReady
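The clusterNotReady status usually carries more detail in the object's conditions and in the importer's logs. A sketch using the resource name from above (the controller's namespace and pod name are assumptions and may differ by install):
kubectl describe vmwaresource.migration vcsim          # conditions explain why the source shows clusterNotReady
kubectl -n harvester-system get pods | grep -i import  # then read that pod's logs for the vCenter connection error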
careful-dusk-92915
03/06/2023, 4:32 AM
careful-dusk-92915
03/06/2023, 4:38 AM
loud-apartment-45889
03/08/2023, 2:21 AM
quaint-alarm-7893
03/08/2023, 2:54 AM
loud-apartment-45889
03/08/2023, 11:00 AM
loud-apartment-45889
03/08/2023, 11:12 AM
big-judge-33880
03/10/2023, 8:30 PM
[ 76.204319] A link change request failed with some changes committed already. Interface vm-br may have been left with an inconsistent configuration, please check.
controller:
time="2023-03-10T20:03:15Z" level=error msg="error syncing 'har-04': handler harvester-network-vlanconfig-controller: set up VLAN failed, vlanconfig: har-04, node: har-04, error: ensure bridge vm-br failed, error: set vlan filtering failed, error: invalid argument, iface: &{0xc0006de140}, requeuing"
I0310 20:03:15.252249 1 controller.go:75] vlan config har-04 has been changed, spec: {Description: ClusterNetwork:vm NodeSelector:map[kubernetes.io/hostname:har-04] Uplink:{NICs:[eno1 eno2] LinkAttrs:0xc001c85e30 BondOptions:0xc00052ad50}}
The associated bond is brought up just fine; any ideas what could be causing such a scenario?
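For the "set vlan filtering failed ... invalid argument" error it can help to inspect the bridge's current state directly on har-04; a sketch with standard iproute2 commands:
ip -d link show vm-br        # detailed output includes the bridge's vlan_filtering flag
bridge vlan show             # VLANs currently configured per port
ip link show master vm-br    # ports (e.g. the bond) attached to vm-br
dmesg | grep -iE 'vm-br|vlan' | tail -n 20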
quaint-alarm-7893
03/10/2023, 8:39 PM
dry-animal-96145
03/14/2023, 1:41 AM
quaint-alarm-7893
03/14/2023, 7:59 PM
quaint-alarm-7893
03/14/2023, 7:59 PM