# harvester
j
Have you tried the pre-check script? That's how I found out, for example, that having a single-replica volume attached will lock pre-drain when upgrading on versions <1.4 (for any workload, not just VMs).
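For anyone following along: the script comes from the harvester/upgrade-helpers repo and gets run directly on a node, roughly like this (the URL/branch is from memory, so double-check the upgrade docs for your version):

```bash
# Fetch the pre-check script (pin the branch/tag to your Harvester release)
curl -sfL https://raw.githubusercontent.com/harvester/upgrade-helpers/main/pre-check.sh -o pre-check.sh
chmod +x pre-check.sh

# Run as root on each node; it flags upgrade blockers such as
# unhealthy or single-replica Longhorn volumes and certificate problems
sudo ./pre-check.sh
```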
c
If it's in the pre-drained state, it usually means the node is being drained and something is blocking it. Check the logs on Harvester: `kubectl logs -n cattle-system deploy/rancher -f`
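If the Rancher log isn't conclusive, you can also poke at the upgrade objects directly; something like this (the CRD and namespace names are from memory, so list first and adapt):

```bash
# Check the overall upgrade object and the per-node jobs it spawns
kubectl get upgrades.harvesterhci.io -n harvester-system
kubectl get jobs -n harvester-system

# A draining node is cordoned; its taints and conditions can hint at what's stuck
kubectl describe node <node-name>
```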
q
issue was a) PVCs it didn't like, which I fixed, and b) certs that needed rotation. the pre-check script was super helpful. now that node is in post-drain, but as part of troubleshooting the issue earlier, I deleted the upgrade VM, so now I'm trying to figure out how to get it to re-create it 😞
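in case anyone hits the same thing: the single-replica volume blocker can usually be cleared by bumping the replica count before the drain retries. a rough sketch (Longhorn CRD fields from memory, and `<volume-name>` is a placeholder):

```bash
# List Longhorn volumes with their replica counts and state
kubectl get volumes.longhorn.io -n longhorn-system \
  -o custom-columns=NAME:.metadata.name,REPLICAS:.spec.numberOfReplicas,STATE:.status.state

# Bump a single-replica volume up to 3 replicas so the drain can proceed
kubectl patch volumes.longhorn.io <volume-name> -n longhorn-system \
  --type merge -p '{"spec":{"numberOfReplicas":3}}'
```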