# rke2
m
I've also been looking for a similar solution. I install with a Helm chart, but the CNI options aren't documented there. I need to set the autodetection interface manually after each install by editing the Calico Installation CRD, which feels messy.
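For reference, the section I end up editing looks roughly like this (a sketch; the field path is from the Tigera operator's Installation API, and ens3 is just a placeholder interface name):

```yaml
# Tigera operator Installation resource (sketch): the autodetection
# setting lives under spec.calicoNetwork. ens3 is a placeholder.
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    nodeAddressAutodetectionV4:
      interface: ens3
```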
a
So you haven't installed Calico along with the RKE2 cluster but only later, using the Helm chart? And even after editing and upgrading the Helm installation, the options don't change?
I checked the Calico Helm chart, hopefully this is the latest (https://github.com/projectcalico/calico/tree/v3.26.1/charts/calico). Looking at the calico-node template, I found no indication of any nodeAddressAutodetection setting being used. So the reason this isn't mentioned anywhere in the Helm documentation could be that it simply isn't implemented.
The rke2-calico chart could be different, though.
m
Sorry, I was unclear. I'm actually installing the RKE2 cluster itself with Helm using a cluster template: https://github.com/rancher/cluster-template-examples. But this doesn't seem to offer a way to specify the Calico config in more detail. It installs the rke2-calico chart, which is fine by me, except that I then need to go and enable BGP and select the interface after provisioning.
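If it helps to make it concrete, what I'd want to end up with is something like this in the rke2-calico chart values (a sketch; as far as I can tell the chart's installation key wraps the Tigera operator's Installation spec, and ens3 is a placeholder):

```yaml
# rke2-calico chart values (sketch): the `installation` block is
# forwarded to the Tigera operator Installation resource.
installation:
  calicoNetwork:
    bgp: Enabled
    nodeAddressAutodetectionV4:
      interface: ens3   # placeholder interface name
```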
a
I see; I seem to be in a similarly dark room as you are. 🙂 I will check the deployed rke2-calico Helm chart, though.
m
When you change the autodetection interface, are you editing the Installation named default? Or somewhere else? In my experience the changes there haven't been overwritten on upgrades.
a
I modified the calico-node DaemonSet. I believe that would be overwritten by subsequent upgrades.
m
Yes. Just edit it in the Installation resource that's there. So `kubectl edit installation default` and change the line related to the autodetection method.
Just make sure you only have one option there; it'll error out with two.
Also, this will cause all of the calico-node pods to restart.
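The bit to change looks something like this (a sketch of the relevant part of the Installation spec; ens3 is a placeholder interface):

```yaml
# Relevant section of `kubectl edit installation default` (sketch).
# Exactly one autodetection option may be set; the operator rejects two.
spec:
  calicoNetwork:
    nodeAddressAutodetectionV4:
      interface: ens3     # keep only one option here
      # firstFound: true  # e.g. remove this when setting interface
```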
a
Sure, but as I pointed out, the change might get lost during an update. Did you try an update after modifying the DaemonSet definition?
I mean an update from the Rancher server.
a
Hi guys, I don't know if you still need this, but I found out how to make it configurable at cluster deployment time.

First of all, plan your Kubernetes (RKE2) cluster according to your needs and check carefully how many NICs you'll need and which one specifically serves Calico for communicating with the Kubernetes API. As an example, here is my use case. My Kubernetes nodes needed two different NICs:
- LAN NIC [ens3]: the cluster's main communication NIC
- DMZ NIC [ens4]: the NIC used to expose cluster services to the world

During the Rancher-managed cluster setup, we must change the configuration inside "Add-On Config" according to our network schema. By default these parameters (the ones inside the red square in my screenshots) will not be present, so we have to add them manually:

```yaml
nodeAddressAutodetectionV4:
  interface: ens3
```

After doing this, the cluster will be deployed using the specified NIC, as we can also see in the Tigera configuration, so it will no longer default to "first-found". Unfortunately, I still haven't found any way to change this once the cluster is already deployed. Feel free to test it out with your unique use case; hope this helped some of you :) To see my reply on the GitHub issue with images: https://github.com/rancher/rancher/issues/41296
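For those editing the cluster YAML directly instead of the Add-On Config form, I believe the same values land under the rke2-calico chart values in the provisioning spec (a sketch; the chartValues path is from the provisioning.cattle.io/v1 Cluster API, and ens3 is the example LAN NIC from above):

```yaml
# Rancher provisioning cluster spec excerpt (sketch). The Add-On Config
# form edits these chart values; ens3 is the example LAN NIC.
spec:
  rkeConfig:
    chartValues:
      rke2-calico:
        installation:
          calicoNetwork:
            nodeAddressAutodetectionV4:
              interface: ens3
```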