# general
p
Delete the machines too
c
machines are deleted (the VMs in OpenStack), but the machine objects in the Rancher "local" cluster are stuck and deleting them does not work...
p
Did you delete the machines here and not just the nodes?
c
they are stuck there:
p
And did you delete the nodes as well?
c
i deleted whatever i found. which screen are you referring to?
p
this one
c
that's on the cluster - as the VMs are gone i can no longer "jump into" the cluster
p
oh right
And you can't delete the entire cluster, correct?
c
right, it shows as:
delete does nothing
p
kubectl get clusters.management.cattle.io  # find the cluster you want to delete
export CLUSTERID="c-xxxxxxxxx"
kubectl patch clusters.management.cattle.io $CLUSTERID -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl delete clusters.management.cattle.io $CLUSTERID
stolen from https://github.com/longhorn/longhorn/issues/6313 does not work?
c
yes, this one i already tried πŸ˜„
but i'll do again, sec
p
kubectl delete does nothing?
c
it does, but the stuff is still shown in rancher ui πŸ˜‰
p
weird
c
yes, get clusters... no longer returns the cluster
p
restart the rancher manager perhaps?
c
cluster id was: c-m-wnvn4tbj
that's what's left in the related resources tab:
the CAPI cluster cannot be deleted
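For anyone in the same spot: a rough sketch of how to see what is holding such a CAPI cluster object, assuming Rancher put it in the fleet-default namespace (adjust namespace and name to your setup):
kubectl -n fleet-default get clusters.cluster.x-k8s.io                                                 # list the CAPI cluster objects Rancher created
kubectl -n fleet-default get clusters.cluster.x-k8s.io test-kube -o jsonpath='{.metadata.finalizers}'  # show the finalizers blocking deletion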
p
try to delete that cluster object
test-kube currently in check-in?
c
the cluster object doesn't offer delete πŸ˜‰ and yes the check-in is where the cluster got broken
it never really built correctly
the VMs came up but somewhere in the provisioning of the rancher-system-agent or the k3s service it got stuck
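When provisioning stalls at that stage, the usual place to look is the service logs on the stuck node itself; a sketch, assuming SSH access to the VM:
journalctl -u rancher-system-agent -f      # what the Rancher provisioning agent is doing
journalctl -u k3s -f                       # whether the k3s service came up (k3s-agent on worker-only nodes)
systemctl status rancher-system-agent k3s  # quick overview of both units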
p
I never had the issue of nuking the whole cluster when i was experimenting, weird.
c
we had load balancer issues with Rancher, so the connections from the downstream clusters to Rancher itself got disconnected quite heavily. so i guess that's the reason for this state πŸ˜‰
meanwhile the LB in front of Rancher is fixed and all the other clusters which were not touched in the meantime are fine again πŸ™‚ only this oddball is left, as we played with test-kube during this "LB issue"
p
Is rancher installed as docker single node or on a cluster?
c
on a three node k3s cluster with etcd as datastore
but yes, maybe a restart is what we want to do πŸ˜„
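For context, a minimal sketch of how such a three-node k3s cluster with embedded etcd is typically bootstrapped (token and hostname are placeholders):
curl -sfL https://get.k3s.io | sh -s - server --cluster-init --token <shared-token>                        # first server: initialise the embedded etcd cluster
curl -sfL https://get.k3s.io | sh -s - server --server https://<first-server>:6443 --token <shared-token>  # second and third servers join it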
p
try to remove those if they're still here
c
those i did already - all gone, they could be deleted successfully
p
Also you could try to force update the cluster if not already done
c
how would i do that? any pointers to what that refers to?
p
It's an option available in the cluster object, like edit yaml
c
hm:
p
not here
cluster object, the one in check-in
c
ahh
thanks
ahh
i could delete the machines after pausing
πŸ˜„
it's gone
thanks!
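For anyone hitting the same stuck state, a kubectl-level sketch of roughly what that pause-then-delete looks like, assuming the objects live in the fleet-default namespace and the cluster is named test-kube:
kubectl -n fleet-default patch clusters.cluster.x-k8s.io test-kube --type merge -p '{"spec":{"paused":true}}'                # stop CAPI reconciliation
kubectl -n fleet-default get machines.cluster.x-k8s.io                                                                       # find the stuck machine objects
kubectl -n fleet-default delete machines.cluster.x-k8s.io <machine-name>
kubectl -n fleet-default patch machines.cluster.x-k8s.io <machine-name> --type merge -p '{"metadata":{"finalizers":[]}}'     # only if deletion still hangs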
p
somehow, it worked
Y'see, that's what i mean when i say my job description is "linux bonker"
πŸ˜† 1
Anyways, happy i could help
Now SUSE, can i work for you?
c
me too! thanks 1000x
now i can try breaking it again πŸ˜„
p
That's the best way to learn!
break things, ONCE
πŸ‘ 1
c
Hey @curved-army-69172, I'm curious: are you using OpenStack as a cloud provider for Rancher?
c
The cluster hosting Rancher - no. But I create clusters via Rancher, those boot nodes on OpenStack, and those clusters get the OpenStack Cloud Controller Manager and Cinder CSI installed
c
Nice, that's awesome! Are you using Octavia for load balancing? If so, does Octavia also provision the external load balancer and manage the pool during scaling operations? Are you using k3s or rke2? I'm a bit of a noob for rancher/kubernetes so any tips are appreciated πŸ™‚
c
Yes, we use an Octavia load balancer for the traefik ingress controller (i think that one is a daemon-set) and yes, new worker nodes are automatically added to the load balancer pool. we're using k3s - not rke2. Octavia load balancers are also created when you deploy a service of type LoadBalancer. I haven't used that much, would need to check how that one works in terms of pools (up to now it just worked, so i didn't care much πŸ˜‰ )
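For completeness, a minimal sketch of the Service-of-type-LoadBalancer case mentioned above: with the OpenStack Cloud Controller Manager installed, creating such a service is what triggers the Octavia load balancer and its pool (nginx is just a placeholder workload):
kubectl create deployment nginx --image=nginx                    # placeholder workload to expose
kubectl expose deployment nginx --type=LoadBalancer --port=80    # the CCM provisions an Octavia LB and pool for it
kubectl get svc nginx -w                                         # the external/floating IP appears once Octavia is ready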
c
Nice! Thanks for that info, that's awesome and exactly what I'm looking to do. Any reason you're using k3s over rke2? More lightweight? We are starting on kubernetes and there's a 100 ways to deploy clusters, I'm still trying to figure out my way around lol
c
Yeah, we thought let's start with k3s as it's the most lightweight and has the helm operator + system upgrade controller built-in. So far really easy to use...