# rke2
I was able to get the cluster working again by changing the node's hostname, running the uninstall script, and then having it join the cluster again. I also deleted the rancher folders in /etc and /var/lib
However, unique hostnames and that workaround don't seem to scale well. Maybe "scale" isn't the right word, but for my use case hostname reuse would be common: a node goes away, then the node comes back, that sort of thing
For the time being, I will append a UUID to the hostnames and see if that makes this pain go away
Yes, it looks like the same issue. I was able to "overcome" it by adding a few lines to the cloud-init script to generate a somewhat unique hostname, but it is just a hack to make it work
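The UUID-suffix hack mentioned above could look something like this, a minimal sketch assuming a hypothetical base name `rke2-node` (the base name and the 8-character suffix length are illustrative, not from the chat):

```shell
#!/bin/sh
# Build a somewhat-unique hostname by appending a short random suffix.
# The kernel's random UUID interface avoids depending on uuidgen.
base="rke2-node"                                        # assumed base name
suffix=$(cut -c1-8 /proc/sys/kernel/random/uuid)        # first 8 hex chars
unique="${base}-${suffix}"
echo "$unique"
# In cloud-init (e.g. a runcmd entry) you would then apply it before
# the node joins the cluster, for example with:
#   hostnamectl set-hostname "$unique"
```

The script only prints the name here; actually setting it requires root, so in practice this would run from cloud-init before the rke2 agent starts.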