# rke2
a
The trick here is that you have to ensure that the leader gets the right configuration, and probably the best way to accomplish this is replicating the configuration on all master nodes, because leadership can change...
🙌 1
k
Thank you, so every node deals with its ‘own’ configuration, even if this can cause conflicts between the configurations of different nodes.
a
Well, I don't think that's really true. When the leader restarts and reads its config file, any changes will be applied. That's why you have to keep the files synced: leadership may change in your cluster, and misconfigurations may appear if the files differ from node to node.
k
Is there actually a leader node within RKE2? As far as I understand, there is only an initial server node. After joining a second and third node, all master nodes are equal. So when having a configuration (either /etc/rancher/rke2/config or /var/lib/rancher/rke2/server/manifests) on master 1 which differs from master 2, it looks like just the most recent configuration is applied. Either way, the configuration should be in sync on all the master nodes.
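To make the "keep it in sync" idea concrete, here is a minimal sketch of what an identical server config on each master might look like; the token, hostname, and SAN values are illustrative placeholders, not a recommended setup:

```yaml
# /etc/rancher/rke2/config.yaml -- keep this file identical on every server node
# (values below are illustrative placeholders)
token: my-shared-cluster-token
server: https://rke2-master-1.example.internal:9345   # omit on the very first server
tls-san:
  - rke2.example.internal
```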
a
The masters will stay equal until you make changes to their configs. If you want to upgrade a configuration, you will change the files and may end up unsynchronized...
r
Most distributed systems have a leader election system, so they can use the leader as a checkpoint to avoid race conditions, or as a place to tally votes to decide things. Programming follows the programmer's thought, and having a "leader" to make decisions and validate the ordering of events fits pretty well with human thought. I'd expect etcd to have a leader, and I seem to recall kube-scheduler & kube-controller-manager both had leader elections as well (I was having a problem where leader election for both of those was failing, so I couldn't get new pods launched, didn't get updates on how many pods were running, and the logs were just spinning on leader election attempts).
c
manifest application is not leader elected. All servers will apply their manifests. So either pick one node to put your manifests on, or make sure they’re in sync on all of them.
🙌 1
a
Not sure if I am wrong, but HelmChartConfig only applies on restart. Am I right?
c
all manifests are applied whenever they change on disk
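For reference, a HelmChartConfig dropped into /var/lib/rancher/rke2/server/manifests/ is just another manifest file, so it should be picked up when the file changes. A sketch along these lines, where the values override the bundled rke2-ingress-nginx chart and are purely illustrative:

```yaml
# Example HelmChartConfig overriding values of the bundled ingress chart.
# The metadata.name must match the bundled HelmChart; the values are illustrative.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      config:
        use-forwarded-headers: "true"
```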
k
Thank you @creamy-pencil-82913, can you advise on a common best practice? Like deploying all manifests on all servers (via Ansible or some other tool), or keeping them just on a single server?
Or does it actually not really matter, as long as there is just a single source of truth for the manifests?
a
@creamy-pencil-82913, if changes are applied whenever they appear on disk, what happens when different files exist on different nodes? There should be some mechanism to manage this situation, as it can be critical.
r
From what was said, my guess is all manifests basically get a kubectl apply. Whether they only get that when they change on disk, or whether there is a periodic check that re-applies when things diverge, isn't clear, though my guess would be the first.
c
what happens when different files exist on different nodes?
If there are different files on different nodes, then you get the combination of the files on all the different nodes deployed to the cluster. If you have the same files on all the nodes, but with different content, then that’s bad. Don’t do that.
But this is all just standard system administration stuff. If you are deploying content or configuration to multiple nodes, you should make sure that it is in sync across all of those nodes. This is why tools like ansible/terraform/etc exist.
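As a sketch of that kind of sync (the rke2_servers group name and the local manifests/ directory are assumptions, not anything RKE2 requires), an Ansible play could push one source of truth to every server node:

```yaml
# Hypothetical Ansible play: copy the same manifest set to every server node.
- hosts: rke2_servers            # assumed inventory group of RKE2 server nodes
  become: true
  tasks:
    - name: Sync RKE2 auto-deploy manifests
      ansible.builtin.copy:
        src: manifests/          # assumed local directory holding the manifests
        dest: /var/lib/rancher/rke2/server/manifests/
        owner: root
        group: root
        mode: "0600"
```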
a
Then it seems easier to maintain your own system Helm charts outside of this RKE2 automation, and that will ensure your configuration is applied correctly. That now seems like the best approach to me.
c
yeah I probably wouldn’t maintain too much complicated stuff via the manifests. It’s mostly used when you have a core component with a fairly static configuration that you need applied right when the cluster comes up. Cloud provider, ingress controller, et cetera.
a
In fact, my question was related to reconfiguring the ingress controller for some special needs. I will probably remove the built-in ingress controller and deploy one whose configuration is easier to control. Thanks.
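For what it's worth, removing the bundled ingress controller is done through the server config rather than the manifests directory; a minimal sketch, assuming the component name rke2-ingress-nginx used by current RKE2 releases:

```yaml
# /etc/rancher/rke2/config.yaml on every server node
disable:
  - rke2-ingress-nginx   # skip deploying the bundled ingress controller
```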