A few things to note here:
You should never have an even number of control plane nodes because of the way etcd works on Kubernetes. Keep in mind - Harvester is Kubernetes under the hood. etcd (the backing Kubernetes data store) uses leader election, meaning a majority of members must be available to elect a leader. So if you have a 2 node cluster where both nodes are control-plane nodes and one of them fails, you lose the majority (>50%), and etcd on the remaining node will protect itself from split-brain by going read-only, which will break cluster functionality. So 2 nodes is counterintuitive: you double the chance of a failure while gaining no fault tolerance. You can read more about this in the upstream docs:
https://etcd.io/docs/v3.5/faq/#why-an-odd-number-of-cluster-members
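To make the quorum math concrete, here's a minimal sketch (plain Go, nothing Harvester-specific) that computes how many member failures each cluster size can survive - note that 4 members tolerate no more failures than 3, they just add another machine that can fail:

```go
package main

import "fmt"

func main() {
	// etcd needs a strict majority (quorum) of members to elect a
	// leader and keep accepting writes: quorum = floor(n/2) + 1.
	for n := 1; n <= 7; n++ {
		quorum := n/2 + 1
		tolerated := n - quorum // members that can fail before quorum is lost
		fmt.Printf("%d member(s): quorum=%d, tolerates %d failure(s)\n",
			n, quorum, tolerated)
	}
}
```

Running it shows 2 members tolerate 0 failures (same as 1), and 4 tolerate only 1 (same as 3), which is why odd sizes are the rule.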
What you did is label the node, which is really just metadata. You didn't actually promote it - you just tricked Kubernetes into thinking you have another control-plane node. What actually determines a control-plane node is the pods running on it, such as kube-apiserver and etcd. "Worker nodes" are just nodes that accept workloads. In this case there is no reason to run another set of the control plane components, so Harvester is just picking the most sane configuration. I would strongly advise keeping the default configuration for the reasons listed above.
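If you want to verify this on your own cluster, here's a rough client-go sketch comparing nodes that merely carry the control-plane label against nodes actually running kube-apiserver. The kubeconfig path and the component=kube-apiserver pod label are assumptions (that label is the common convention for static pods, but check what your RKE2/Harvester install actually uses):

```go
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location; adjust for your setup.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Nodes that merely carry the control-plane label (metadata only).
	labeled, err := clientset.CoreV1().Nodes().List(ctx, metav1.ListOptions{
		LabelSelector: "node-role.kubernetes.io/control-plane",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("Labeled as control-plane:")
	for _, n := range labeled.Items {
		fmt.Println(" -", n.Name)
	}

	// Nodes actually running kube-apiserver (assumed pod label).
	pods, err := clientset.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
		LabelSelector: "component=kube-apiserver",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("Actually running kube-apiserver:")
	for _, p := range pods.Items {
		fmt.Println(" -", p.Spec.NodeName)
	}
}
```

If the first list is longer than the second, all you changed was the metadata.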