# harvester
A few things to note here: you should never have an even number of control plane nodes because of the way etcd works in Kubernetes. Keep in mind that Harvester is Kubernetes under the hood.

Etcd (the backing Kubernetes data store) uses leader election, which means a majority of members must be available to elect a leader. So if you have a 2-node cluster where both nodes are control plane nodes and one of them fails, you lose the majority (>50%), and etcd on the remaining node will try to protect itself from split-brain by going read-only, which breaks cluster functionality. Counter-intuitively, 2 nodes gives you double the chance of failure with no fault tolerance. You can read more about this in the upstream docs: https://etcd.io/docs/v3.5/faq/#why-an-odd-number-of-cluster-members

What you did is label the node, which is really just metadata. You didn't actually promote it; you tricked Kubernetes into thinking you have another control plane node. What actually makes a node a control plane node is the pods running on it, such as kube-apiserver and etcd. "Worker nodes" are just nodes that accept workloads. In this case there is no reason to run another set of control plane components, so Harvester is picking the most sane configuration. I would strongly advise keeping the default configuration for the reasons listed above.
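To make the quorum arithmetic concrete, here's a minimal illustrative sketch (plain Python, not Harvester code) of the majority rule from the etcd FAQ: quorum is floor(n/2) + 1 voting members, and fault tolerance is however many members you can lose while still holding quorum.

```python
# Minimal sketch of the etcd quorum arithmetic described above.
# Quorum is a strict majority: floor(n/2) + 1 voting members.
# Fault tolerance is how many members can fail while quorum still holds.

def quorum(members: int) -> int:
    return members // 2 + 1

def fault_tolerance(members: int) -> int:
    return members - quorum(members)

for n in range(1, 8):
    print(f"{n} members: quorum={quorum(n)}, can lose {fault_tolerance(n)}")

# Output shows why even member counts don't buy anything:
# 1 members: quorum=1, can lose 0
# 2 members: quorum=2, can lose 0   <- two nodes, still zero tolerance
# 3 members: quorum=2, can lose 1
# 4 members: quorum=3, can lose 1   <- the 4th node adds no tolerance
# 5 members: quorum=3, can lose 2
```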
@bulky-sunset-52084 or whoever, back to this. How can the 2nd node be promoted? We have a specific case with only two servers. What we actually need to do is demote server 1 and promote server 2.
Rebuilding is the best bet at that point
Not an option, need to stay running
Interesting thought, but in VMware, if you only have a two-machine vSAN cluster, you can run a virtual machine elsewhere that acts as a witness. It stores metadata but not data and satisfies the 3-node requirement. I wonder if something like this would be possible here. There are plenty of use cases.
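For what it's worth, the quorum math does work out the way a witness implies: a vote-only third member restores fault tolerance even if it holds no real data. Whether Harvester/etcd can actually run such a member is the open question (etcd's non-voting "learners" are the opposite: data but no vote). The sketch below only extends the arithmetic above, under the assumption that a witness counts as a full voter:

```python
# Extending the quorum sketch above: treat a hypothetical witness as a
# voting member that stores no data. This is purely majority arithmetic;
# whether Harvester/etcd supports such a member is an open question.

def quorum(voters: int) -> int:
    return voters // 2 + 1

def survives(voters: int, failed: int) -> bool:
    return voters - failed >= quorum(voters)

# Two data nodes alone: losing one kills quorum.
print(survives(voters=2, failed=1))  # False

# Two data nodes plus one witness (3 voters): losing one data node is fine.
print(survives(voters=3, failed=1))  # True
```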