# general
We recently upgraded to Rancher 2.6.8 and some of our clusters entered an unavailable state. Luckily we have snapshots for these environments, but when attempting to restore by following the process of removing all etcd and control plane nodes and creating a single node with the etcd, controlplane, and worker roles, we ran into a UI bug saying:

> "Timeout" should be between 1 and 10800
We have tried a combination of changes but were unable to commit them to the node pools because of this. Has anyone seen this before?
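For reference, the range check the UI appears to be enforcing amounts to something like the sketch below (hypothetical; the actual Rancher validation code may differ, and the function name here is made up):

```python
def validate_timeout(timeout: int) -> bool:
    """Mimic the UI's rule: "Timeout" must be between 1 and 10800
    (10800 seconds = 3 hours). Values outside this range are rejected,
    which is what seems to block committing the node pool edit."""
    return 1 <= timeout <= 10800

print(validate_timeout(300))  # an in-range value passes
print(validate_timeout(0))    # zero fails the check
```

So any field the restore flow populates with 0 (or leaves empty and coerces to 0) would trip this validation even if we never touched it.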