# general
d
I want to follow this as well. The only thing we found is that if the HASH of that machine pool is trying to provision, the only way to stop it is to get it to a good state, e.g. apply another change that rolls it back or gives it the correct settings on the VM side so the nodes roll to the correct state. The side effect is that the bad HASH version keeps creating and deleting VMs... On the other hand, if you are trying to delete a machine and the VM is already gone, you may have to remove the finalizer on the machine to get it to go away properly
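Roughly what that looks like from the CLI, assuming a Rancher v2.6+ / Cluster API style setup where the machines live on the management cluster (namespace and names below are placeholders, adjust to yours):
```
# List the machines the provisioning controller manages and spot the stuck ones.
# fleet-default is the namespace Rancher typically uses for provisioned clusters.
kubectl get machines.cluster.x-k8s.io -n fleet-default

# Look at the status and events of a stuck one
kubectl describe machines.cluster.x-k8s.io <machine-name> -n fleet-default
```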
r
How do you go about removing the finalizer?
hmm…running a kubectl get nodes in the cluster doesn’t show the ones in a bad state
d
right, on the management server you will need to edit the machine that is in the bad state, find the finalizers section, and delete the entries there
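something like this on the management cluster, the machine name and namespace are just examples:
```
# Open the stuck Machine and delete the entries under metadata.finalizers, then save
kubectl edit machines.cluster.x-k8s.io <machine-name> -n fleet-default

# Or strip the finalizers in one shot. Use with care: only do this when the backing VM
# is really gone, otherwise the controller's cleanup never gets a chance to run.
kubectl patch machines.cluster.x-k8s.io <machine-name> -n fleet-default \
  --type merge -p '{"metadata":{"finalizers":[]}}'
```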
r
I’m lost on how to get at it from the management server.
Rancher has all sorts of stuff in a bad state. Gotta say I haven’t been thrilled with Rancher
d
p-* is a project, c-* is a cluster; it looks like a whole cluster is in the process of being deleted, including the projects
From the Rancher UI, go to Cluster Management, then inside the cluster you should see the machines; you can edit one there and remove the finalizer
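If the UI won't cooperate, the same objects are visible from the CLI on the management cluster. Rough sketch, the cluster ID here is just the one from this thread:
```
# Clusters are cluster-scoped objects named c-*
kubectl get clusters.management.cattle.io

# Projects are p-* objects namespaced under the cluster's own namespace
kubectl get projects.management.cattle.io -n c-xl7rc
```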
r
That’s not even a valid cluster anymore.
The cluster I’m working with is c-xl7rc
Can’t edit the YAML or use the API to remove the finalizer
got it - removed the finalizers from the bad nodes.
Oddly, the UI is still saying it’s waiting for the node to finish provisioning
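Poking at the provisioning-side cluster object to see where that status comes from, assuming that's where the UI reads it (cluster name swapped out here):
```
kubectl get clusters.provisioning.cattle.io -n fleet-default
kubectl get clusters.provisioning.cattle.io <cluster-name> -n fleet-default -o yaml
```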
Guess I have a bit of cleanup to do. I also removed the old v1 monitoring and alerting, yet they keep coming back.
d
you might need to track that down if it keeps coming back; probably another CRD is triggering it
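a rough starting point for tracking it down, the exact CRDs and namespace depend on how the legacy monitoring was installed so treat these as examples:
```
# See which legacy monitoring/alerting CRDs are still registered
kubectl get crd | grep -iE 'alert|monitor'

# If v1 monitoring was installed as a legacy catalog app it may still be listed here
kubectl get apps.project.cattle.io -A

# Check what owns the objects that keep reappearing (v1 monitoring used to live in
# cattle-prometheus) so you can find what is recreating them
kubectl get deploy -n cattle-prometheus -o yaml | grep -B2 -A6 ownerReferences
```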