# general
Seems a bit dangerous... I now have no workers, because I made a config change that caused VMs to fail to be created, so they time out, get deleted (sometimes), and get recreated. I fixed the config change (I think), and it went and deleted the only nodes left in the pool.
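For context, the timeout driving this delete/recreate churn is configurable per machine pool on the provisioning cluster. A minimal sketch, assuming a cluster named `my-cluster` in `fleet-default` (names are placeholders; verify the field against your Rancher version's docs):

```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: my-cluster          # placeholder cluster name
  namespace: fleet-default
spec:
  rkeConfig:
    machinePools:
      - name: worker-pool   # placeholder pool name
        workerRole: true
        quantity: 3
        # how long a machine may stay unhealthy before Rancher
        # deletes and recreates it; raising this slows the churn
        unhealthyNodeTimeout: 15m
```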
Rancher then loses track of the machines it thinks it deleted, and I'm left with many running VMs.
Is there some log I can provide to get this type of behavior fixed, if it is a defect?
heh, it's stuck in a reconcile loop. The nodes go active in k8s, but Rancher kills them anyway.
ohh, I think I found the root cause: I added a second network adapter, and that causes the machines to fail some condition. I can't remove it. lol, I edited it out via the API and it got added back.
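A minimal sketch of what removing that adapter looks like on the machine config object, assuming (as in the vSphere node driver) the networks live in a top-level `network` list; the object name is a placeholder:

```yaml
apiVersion: rke-machine-config.cattle.io/v1
kind: VmwarevsphereConfig
metadata:
  name: nc-my-cluster-worker   # placeholder name
  namespace: fleet-default
# machine-config fields sit at the top level, not under spec
network:
  - VM Network                 # keep the original adapter
  # the second entry added here is the one to delete, e.g.:
  # - Second Network
```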
rke-machine.cattle.io.vmwarevspheremachinetemplate is what I edited
maybe I have to edit rke-machine-config.cattle.io.vmwarevsphereconfigs instead
ahh, I had to edit the configs first, then the machine template.
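In kubectl terms, the order that worked above would look roughly like this. Resource names follow the API groups mentioned in the chat; the object names and namespace are placeholders, and note that Rancher may regenerate the machine template from the config rather than honor a direct edit:

```sh
# 1. fix the source of truth first: the machine config
kubectl -n fleet-default edit vmwarevsphereconfigs.rke-machine-config.cattle.io nc-my-cluster-worker
# 2. then the machine template that was generated from it
kubectl -n fleet-default edit vmwarevspheremachinetemplates.rke-machine.cattle.io my-cluster-worker
```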
heh, crisis averted
I wonder how this could be prevented in the future?