Hi guys, I don't know if you still need this, but I found out how to make it configurable at cluster deployment time.
First of all, plan your Kubernetes (RKE2) cluster according to your needs and check carefully how many NICs you'll need, and specifically which one Calico should use to communicate with the kube-apiserver. As an example, I'll show my use case.
In my specific use case I needed my Kubernetes nodes to have two different NICs:
LAN NIC [ens3] = serves as the cluster's main communication NIC
DMZ NIC [ens4] = serves as the NIC that exposes cluster services to the outside world
During the Rancher-managed cluster setup we must change the configuration inside "Add-On Config" according to our network schema.
By default the parameters inside the red square (see the screenshots in the linked issue) will not be present, so we have to add them manually:
```yaml
nodeAddressAutodetectionV4:
  interface: ens3
```
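For context, here is roughly where that snippet sits in the rke2-calico chart values pasted into the Add-On Config editor. This is a hedged sketch: the surrounding keys follow the Tigera operator's Installation spec, and the interface name `ens3` comes from my setup, so adjust both against your Rancher version and your NIC naming:

```yaml
# rke2-calico Helm chart values (Add-On Config)
installation:
  calicoNetwork:
    nodeAddressAutodetectionV4:
      # Pin IPv4 address autodetection to the LAN NIC
      # instead of the default "first-found" behavior.
      # "ens3" is specific to my nodes; change it to yours.
      interface: ens3
```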
After doing this our cluster will be deployed using the specified NIC, as we can also see in the Tigera configuration.
So now the autodetection method will no longer default to "first-found".
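Once the cluster is up, one way to confirm the setting landed is to inspect the Tigera operator's Installation resource. This assumes the operator's default resource name (`default`) and must be run against the cluster's kubeconfig:

```
# Show the IPv4 autodetection method Calico is actually using
kubectl get installation default \
  -o jsonpath='{.spec.calicoNetwork.nodeAddressAutodetectionV4}'
```

If the Add-On Config was applied, the output should include the interface you configured rather than a first-found setting.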
Unfortunately I still haven't found a way to change this on a cluster that is already deployed.
Feel free to test it out with your own use case. I hope this helps some of you :)
To see my reply on the GitHub issue with images:
https://github.com/rancher/rancher/issues/41296