I have a question regarding managing the CNI outside of the cluster deployment (for example, Cilium Enterprise must be managed outside of the cluster deployment). I'm doing this on RKE2, but it would also apply to K3s.
I ran into a case with K3s recently where I had installed Cilium via its Helm chart. When I upgraded the cluster to a newer version (in my case, from 1.29.x to 1.30.x), the cluster started to behave strangely. I then realized that the CNI binary is installed in a per-version binary location which, in this case, was not repopulated automatically. I had to restart the Cilium pods so that they would reinstall the binaries in the right place.
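For reference, this is roughly the workaround I used; it's a minimal sketch assuming Cilium was installed as a DaemonSet named `cilium` in the `kube-system` namespace (names may differ depending on your Helm values):

```sh
# Restart the Cilium DaemonSet so its pods re-copy the CNI binary
# into the node's (per-version) CNI bin directory.
# Assumes the default Helm install: DaemonSet "cilium" in "kube-system".
kubectl -n kube-system rollout restart daemonset/cilium

# Wait until all Cilium pods are back up before relying on networking again.
kubectl -n kube-system rollout status daemonset/cilium
```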
Is there any way to tell the cluster upgrade procedure that some pods need to be restarted after the upgrade?