# rke2
Those are reserved labels. Nodes are not allowed to set them on themselves; controllers built into the cluster set them for you as appropriate. The error tells you which labels you are allowed to set.
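For example, a label under a custom prefix set in the agent's config.yaml would pass that check. A minimal sketch (the `example.com/kube-vip` key is made up for illustration):

```yaml
# /etc/rancher/rke2/config.yaml on the agent
# node-role.kubernetes.io/* is reserved; a custom prefix is allowed
node-label:
  - "example.com/kube-vip=true"
```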
Why are you trying to have the node register with those labels?
c
Because I want to automatically deploy kube-vip to nodes that have those labels
I guess I must modify the kube-vip DaemonSet in order to deploy on the nodes reserved as controllers @creamy-pencil-82913
Or what would you suggest? Because as of now my kube-vip DaemonSet has this (by default):
```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: node-role.kubernetes.io/master
          operator: Exists
      - matchExpressions:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
```
So I am not able to land pods on controller2 (node2) and controller3 (node3), only on controller1.
If nodes 2 and 3 don't have those labels then they're not control-plane nodes (servers), they're agents
If you want the daemonset to run on those nodes, you should fix the selector, not add bogus labels to the nodes.
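If you did keep a custom label, the selector would have to match it instead of the reserved role labels. A rough sketch, reusing the made-up `example.com/kube-vip` key from above:

```yaml
# sketch: select nodes by a custom, non-reserved label
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: example.com/kube-vip
          operator: In
          values: ["true"]
```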
Or did you perhaps want to make those other 2 nodes servers instead of agents so that they get those roles?
Those other two should be "backups" for when the main node01 goes down, so that my cluster API is still available. I am using kube-vip to share an IP address between all three. It would be cool to make those other two servers, if it's not a big deal, so that the roles are already set up and I don't have to manually modify the kube-vip manifest for my pods to land on node2 and node3. My current issue is that kube-vip only runs on node1, so when node1 goes down my cluster isn't available anymore.
I'm not sure if I explained myself.
I'm not sure how to edit the kube-vip DaemonSet via Ansible; that's why I was thinking that if I add the etcd, control-plane, and master roles to the remaining two nodes (which would serve as control planes regardless), maybe I can solve the issue quicker.
Just make all three of them servers and they will have the correct labels.
And you will also actually have HA. If you only have one server, it doesn't matter whether kube-vip is running; you don't have an HA control plane.
So to run all three as servers,
instead of launching the rke2-agent service,
I'd run rke2-server?
Yes, just like that: run rke2-server on those nodes as well so they join the main server.
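For node2 and node3 that would look roughly like this; a sketch where the address, token, and VIP are placeholders:

```yaml
# /etc/rancher/rke2/config.yaml on node2 and node3
# join the first server via the RKE2 supervisor port (9345)
server: https://node1.example.com:9345
# token from /var/lib/rancher/rke2/server/node-token on node1
token: <node-token>
# include the kube-vip address in the apiserver certificate
tls-san:
  - <your-vip-address>
```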