# harvester
m
Last time I read the docs, they said that HA was set up automatically once the cluster had 4 or more nodes (the first 2 nodes would be the masters).
When there are more than three nodes, the two other nodes that first joined are automatically promoted to management nodes to form a HA cluster.
s
Oh, so I have to spend another thousand pounds to make a 4 node cluster just to make the first three nodes HA. No. There's something wrong in the logic there. We have to be able to make a 3 node HA cluster.
m
Don't shoot the messenger 🤣
s
Indeed. Thanks for re-reading the docs I have read many times.
👍 2
and pointing out my misreading...
Although I feel sure it said something different before
c
I think the idea is to avoid running all the control-plane services on smaller clusters so resources aren't over-allocated, but it would definitely be nice to let operators adjust that trade-off for themselves by providing a documented mechanism for promoting a node manually.
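For anyone wanting to look at this themselves, a minimal sketch of how to inspect which nodes currently carry the management roles. The node-role.kubernetes.io/* labels are the standard role labels RKE2 applies; the harvesterhci.io/promote-status label is an assumption about how Harvester records promotion state and may not exist on every version:
```
# List nodes and surface the standard role labels RKE2 sets on management nodes.
kubectl get nodes --show-labels | grep -E 'node-role.kubernetes.io/(control-plane|etcd)'

# Show a (hypothetical) Harvester promotion-state label as its own column.
kubectl get nodes -L harvesterhci.io/promote-status
```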
b
on 0.3.0, i definitely had HA after adding a 3rd node to the cluster. did that behavior change in 1.0.0?
s
did that behavior change in 1.0.0?
I think it might have. I think I read and remembered that I needed 3 nodes, so I planned my finances to be able to achieve that. For a minimal HA home-lab I see no point in requiring more than 3 nodes.
💯 1
🙌 1
h
@sticky-summer-13450 this should not be the case. Can you point to where you read this? Harvester will form a 3 node HA cluster with the first 3 nodes, and any additional nodes will be considered worker-only. If something changed/happened, it would be a bug.
```
> kubectl get nodes
NAME     STATUS   ROLES                       AGE    VERSION
r620-1   Ready    control-plane,etcd,master   112m   v1.21.11+rke2r1
r620-2   Ready    control-plane,etcd,master   63m    v1.21.11+rke2r1
r620-3   Ready    control-plane,etcd,master   41m    v1.21.11+rke2r1
```
s
@happy-cat-90847 the first paragraph on https://docs.harvesterhci.io/v1.0/host/host/ says
When there are more than three nodes, the two other nodes that first joined are automatically promoted to management nodes to form a HA cluster.
As I said at the top of this thread - I have a 3 node Harvester v1.0.0 cluster which has one master and two workers.
```
$ kubectl get nodes
NAME           STATUS   ROLES                       AGE   VERSION
harvester001   Ready    <none>                      8d    v1.21.7+rke2r1
harvester002   Ready    <none>                      71d   v1.21.7+rke2r1
harvester003   Ready    control-plane,etcd,master   93d   v1.21.7+rke2r1
```
a
When I added my 3rd node, the cluster became HA (I have one node cordoned, which is why vm01 shows SchedulingDisabled)
s
Did you build with Harvester 1.0.1 or have you upgraded to it?
a
Ahh, it was all built with a fresh 1.0.1 install
s
Humm - mine was built with 1.0.0 and I have not upgraded yet because I'm a little bit worried, especially with the cluster not being HA...
a
s
@adventurous-engine-93026 yes, I created it and linked to it above. Why?
a
I thought it looked similar to your situation. I couldn't tell it was you though. 🙂
s
So - seeing as I am in the supposedly impossible situation of having a three node Harvester cluster which has not automatically converted itself into an HA cluster, is there a way I can make that happen manually - preferably without losing all the workloads? 🙂
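One diagnostic that can help before attempting anything manual (a sketch, assuming Harvester's standard RKE2 layout): management nodes run the rke2-server service, while worker-only nodes run rke2-agent, so checking which service is active on each node shows what it currently is.
```
# Run on each Harvester node (e.g. over SSH). Diagnostic only, not a promotion procedure.
systemctl is-active rke2-server   # expected "active" on management (control-plane/etcd) nodes
systemctl is-active rke2-agent    # expected "active" on worker-only nodes
```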
w
As per the note at the top of the docs page, "Because Harvester is built on top of Kubernetes and uses etcd as its database, the maximum node fault toleration is one when there are three management nodes.", my interpretation is that with 3 nodes only 1 will be management, and only when you have 4 do 2 more get promoted. Yet @happy-cat-90847 suggests in your bug that he's seeing all 3 as management with 3 nodes. I've not yet added my 3rd node; I don't want to break my cluster before I've recorded my SUSECon session
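For context on those numbers: etcd stays writable only while it has a quorum of floor(n/2) + 1 members, so one management node tolerates zero failures and three tolerate one. A rough way to count the management (etcd) nodes, assuming the usual RKE2 role label:
```
# Count nodes carrying the etcd role label (the label key is assumed from RKE2's defaults).
kubectl get nodes -l node-role.kubernetes.io/etcd=true --no-headers | wc -l
```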
s
Yeh - anecdotally, the documentation is wrong. I'd love to get some traction from SUSE representatives to confirm or deny that.
h
There is no need to get caught up on this. If we have 3 nodes, we provide an HA stack. Any node after the third is only added as a worker. I don't know what happens when only two are added; I should try it. Any scenario where you have 3 nodes and they are not all cp/etcd/worker is a bug. If the docs say otherwise, please file a bug.
s
Thank you - I'm creating a documentation ticket now.
h
Thank you!
s
With @happy-cat-90847’s explicit statement of "If we have 3 nodes, we provide an HA stack", I still have a 3 node cluster which is not HA. I still have no idea how to make it HA - without deleting it and starting again - which would be a PIA for me 😞, would probably take up a weekend which my better half would not appreciate either 😞😞, and may not work.
h
Right - like I said before, you may have hit a bug from a previous version. Did you start from zero with 1.0? Or was any node installed with a previous version?
s
Sorry - I don't mean to sound grumpy. It's been a long day, and Harvester is so close to being brilliant!
This cluster was created from v1.0.0
p
@sticky-summer-13450 Need your help to dump one more thing. Could you run
```
kubectl get secrets -n fleet-local -o yaml
```
and send the output to me privately? Slack is OK. This can help me understand why some Rancher plans are not applied on the harvester002 node. (It's another thing not included in the support bundle, because the resource name is "secret".) Thanks a lot.
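A minimal way to capture that output for sharing, assuming kubectl access to the cluster (the file name is just an example):
```
# Dump the fleet-local secrets to a local file to send privately.
kubectl get secrets -n fleet-local -o yaml > fleet-local-secrets.yaml
```
Since these are Secret resources, it is worth reviewing the file for sensitive values before sending it.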
s
I assume I can do that from anywhere that can access the cluster...