
sticky-summer-13450

04/17/2022, 12:11 PM
I have finally added a third node to my Harvester v1.0.0 cluster. (I started with node 003, then added 002, and finally added 001 today). I thought that after adding the third node the cluster would turn into an HA cluster where all three nodes are the control plane with etcd, but that has not happened. Have I done something wrong?
Copy code
$ kubectl get nodes -o wide --context harvester003
NAME           STATUS   ROLES                       AGE   VERSION          INTERNAL-IP   EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION         CONTAINER-RUNTIME
harvester001   Ready    <none>                      29m   v1.21.7+rke2r1   10.64.0.16    <none>        Harvester v1.0.0   5.3.18-59.37-default   containerd://1.4.12-k3s1
harvester002   Ready    <none>                      63d   v1.21.7+rke2r1   10.64.0.17    <none>        Harvester v1.0.0   5.3.18-59.37-default   containerd://1.4.12-k3s1
harvester003   Ready    control-plane,etcd,master   84d   v1.21.7+rke2r1   10.64.0.18    <none>        Harvester v1.0.0   5.3.18-59.37-default   containerd://1.4.12-k3s1
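One way to cross-check which nodes are actually running control-plane components: Harvester is built on RKE2, which runs etcd and the apiserver as static pods in kube-system, so a filter like the one below should only match harvester003 here (a sketch; exact pod names depend on the RKE2 version).
Copy code
$ kubectl get pods -n kube-system -o wide --context harvester003 | grep -E '^(etcd|kube-apiserver)-'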

modern-wire-33567

04/17/2022, 8:05 PM
Last time I read the docs, they said that HA was automatically set up once the cluster had 4 or more nodes (the first 2 nodes would be the masters).
When there are more than three nodes, the two other nodes that first joined are automatically promoted to management nodes to form a HA cluster.

sticky-summer-13450

04/17/2022, 8:23 PM
Oh, so I have to spend another thousand pounds to make a 4 node cluster just to make the first three nodes HA. No. There's something wrong in the logic there. We have to be able to make a 3 node HA cluster.

modern-wire-33567

04/17/2022, 8:24 PM
Don't shoot the messenger 🤣

sticky-summer-13450

04/17/2022, 8:25 PM
Indeed. Thanks for re-reading the docs I have read many times.
👍 2
and pointing out my misreading...
Although I feel sure it said something different before

calm-twilight-27465

04/18/2022, 7:20 PM
I think the idea is to avoid running all the control-plane services on smaller clusters so resources aren't over-allocated, but it would definitely be nice to let operators adjust that trade-off for themselves by providing a documented mechanism for promoting a node manually.
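In the meantime, a rough way to see whether the promotion logic has touched a given node is to look for promotion-related labels and annotations on it (a sketch; the exact harvesterhci.io label and annotation keys are an assumption and vary by version):
Copy code
$ kubectl get nodes --show-labels | grep -i promote
$ kubectl describe node harvester002 | grep -i harvesterhci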

better-garage-30620

04/18/2022, 9:20 PM
on 0.3.0, i definitely had HA after adding a 3rd node to the cluster. did that behavior change in 1.0.0?

sticky-summer-13450

04/19/2022, 7:48 AM
did that behavior change in 1.0.0?
I think it might have. I think I read and remembered that I needed 3 nodes, so I planned my finances to be able to achieve that. For a minimal HA home-lab I see no point in requiring more than 3 nodes.
💯 1
🙌 1

happy-cat-90847

04/25/2022, 7:07 PM
@sticky-summer-13450 this should not be the case. Can you point to where you read this? Harvester will do a 3 node HA cluster with the first 3 nodes and the additional nodes will be considered workers-only. If something changed/happened, it would be a bug.
Copy code
> kubectl get nodes
NAME     STATUS   ROLES                       AGE    VERSION
r620-1   Ready    control-plane,etcd,master   112m   v1.21.11+rke2r1
r620-2   Ready    control-plane,etcd,master   63m    v1.21.11+rke2r1
r620-3   Ready    control-plane,etcd,master   41m    v1.21.11+rke2r1

sticky-summer-13450

04/25/2022, 8:37 PM
@happy-cat-90847 the first paragraph on https://docs.harvesterhci.io/v1.0/host/host/ says
When there are more than three nodes, the two other nodes that first joined are automatically promoted to management nodes to form a HA cluster.
As I said at the top of this thread - I have a 3 node Harvester v1.0.0 cluster which has one master and two workers.
Copy code
$ kubectl get nodes
NAME           STATUS   ROLES                       AGE   VERSION
harvester001   Ready    <none>                      8d    v1.21.7+rke2r1
harvester002   Ready    <none>                      71d   v1.21.7+rke2r1
harvester003   Ready    control-plane,etcd,master   93d   v1.21.7+rke2r1
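For anyone hitting the same thing, it may be worth checking whether a promotion was ever attempted, e.g. by looking for promote jobs and grepping the Harvester controller logs (a sketch; the job and deployment names are assumptions, so adjust to whatever actually exists in harvester-system):
Copy code
$ kubectl get jobs -A | grep -i promote
$ kubectl -n harvester-system logs deployment/harvester --tail=200 | grep -i promote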

adventurous-engine-93026

04/29/2022, 5:06 PM
When I added my 3rd node, it made it HA (I have one cordoned which is why vm01 says SchedulingDisabled)

sticky-summer-13450

04/29/2022, 5:07 PM
Did you build with Harvester 1.0.1 or have you upgraded to it?

adventurous-engine-93026

04/29/2022, 5:07 PM
ahh, it was all built with fresh 1.0.1

sticky-summer-13450

04/29/2022, 5:09 PM
Humm - mine was built with 1.0.0 and I have not upgraded yet because I'm a little bit worried, especially with the cluster not being HA...

adventurous-engine-93026

04/29/2022, 5:13 PM

sticky-summer-13450

04/29/2022, 8:56 PM
@adventurous-engine-93026 yes, I created it and linked to it above. Why?

adventurous-engine-93026

04/30/2022, 2:39 PM
I thought it looked similar to your situation. I couldn't tell it was you though. 🙂

sticky-summer-13450

05/03/2022, 12:57 PM
So - seeing as I am in the impossible situation that I have a three node Harvester cluster which has not automatically converted itself into an HA cluster, is there a way I can get it to happen manually - preferably without losing all the workloads? 🙂

witty-jelly-95845

05/03/2022, 3:24 PM
As per the note at the top of the docs page ("Because Harvester is built on top of Kubernetes and uses etcd as its database, the maximum node fault toleration is one when there are three management nodes."), my interpretation is that with 3 nodes only 1 will be management, and only when you have 4 do 2 more get promoted. Yet @happy-cat-90847 suggests in your bug that he's seeing all 3 as management with 3 nodes. I've not yet added my 3rd node; I don't want to break my cluster before I've recorded my SUSECon session.
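The fault-toleration numbers in that note are just etcd quorum math: a cluster of N members needs floor(N/2)+1 members for quorum, so it tolerates floor((N-1)/2) failures, which is why 3 management nodes tolerate exactly one fault and why 2 members are no better than 1. A throwaway shell loop to illustrate:
Copy code
$ for n in 1 2 3 4 5; do echo "members=$n quorum=$(( n/2 + 1 )) tolerated_failures=$(( (n-1)/2 ))"; done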

sticky-summer-13450

05/03/2022, 5:14 PM
Yeh - anecdotally, the documentation is wrong. I'd love to get some traction from SUSE representatives to confirm or deny that.

happy-cat-90847

05/03/2022, 5:18 PM
There is no need to get caught up on this. If we have 3 nodes, we provide an HA stack. Any node after the third is only added as a worker. I don't know what happens when only two are added; I should try it. Any scenario where you have 3 nodes and they are not all cp/etcd/worker is a bug. If the docs say otherwise, please file a bug.

sticky-summer-13450

05/03/2022, 5:20 PM
Thank you - I'm creating a documentation ticket now.

happy-cat-90847

05/03/2022, 5:20 PM
Thank you!

sticky-summer-13450

05/03/2022, 5:30 PM
With @happy-cat-90847’s explicit statement of "If we have 3 nodes, we provide an HA stack", I still have a 3 node cluster which is not HA. I still have no idea how to make it HA - without deleting it and starting again - which would be a PIA for me 😞, would probably take up a weekend which my better half would not appreciate either 😞😞, and may not work.

happy-cat-90847

05/03/2022, 5:40 PM
Right - like I said before, you may have hit a bug from a previous version. Did you start from zero with 1.0, or was one of the nodes installed with a previous version?

sticky-summer-13450

05/03/2022, 5:40 PM
Sorry - I don't mean to sound grumpy. It's been a long day, and Harvester is so close to being brilliant!
This cluster was created from v1.0.0
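If it helps to confirm what the cluster reports now, the version can be read off the nodes, and Harvester also exposes it as a setting (a sketch; the server-version setting name is an assumption, and the same information is shown in the dashboard):
Copy code
$ kubectl get nodes -o custom-columns=NAME:.metadata.name,OS-IMAGE:.status.nodeInfo.osImage
$ kubectl get settings.harvesterhci.io server-version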

prehistoric-balloon-31801

05/05/2022, 10:13 AM
@sticky-summer-13450 Need your help to dump one more thing. Could you get
Copy code
kubectl get secrets -n fleet-local -o yaml
and send the output to me privately? Slack is OK. This can help me understand why some Rancher plans are not applied on the harvester002 node. (It's another thing not included in the support bundle, because the resource type is a secret.) Thanks a lot.
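If you'd rather check it before sharing, dumping to a file first works just as well (a sketch; the file name is arbitrary):
Copy code
$ kubectl get secrets -n fleet-local -o yaml > fleet-local-secrets.yaml
$ less fleet-local-secrets.yaml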

sticky-summer-13450

05/05/2022, 10:59 AM
I assume I can do that from anywhere that can access the cluster...