# rke2
g
Looks like the v1.20 reference may be related to this: https://github.com/helm/helm/issues/13248. Given that the Rancher console is performing the upgrade, there's no way to declare the `--kube-version` parameter, right? Is there another way to fix this?
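For reference, this is the flag in question when invoking Helm by hand; a minimal sketch, assuming you could render the chart yourself (the release, repo, and chart names are placeholders, not from this thread):

```sh
# Tell Helm which Kubernetes version to evaluate the chart's kubeVersion
# constraint (>= 1.23.0-0) against, instead of whatever version it detects.
helm template my-release my-repo/my-chart \
  --namespace my-namespace \
  --kube-version v1.26.11
```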
c
This is a Rancher issue, not RKE2. You might search for issues in the rancher/rancher repo on GH, and open an issue if you can’t find one. I’m sure I don’t need to tell you that everything here is many, many months past end of life.
g
Yep, that's why I'm going through the motions to get everything up to date. Googling seems to have gotten me nowhere in particular, but I'll dig deeper and raise an issue if I can't resolve it... Any other hints would be appreciated.
c
You might check whether any fleet updates are available. If you're airgapped, you could be missing post-release updates to Rancher components that a non-airgapped cluster would normally receive via driver metadata updates.
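A quick way to check the installed Fleet version, assuming the default install locations (the namespace and deployment names below are the Rancher defaults; verify in your cluster):

```sh
# Image tag of the Fleet controller on the local cluster.
kubectl -n cattle-fleet-system get deploy fleet-controller \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'

# Chart versions of the Rancher-managed fleet releases.
helm -n cattle-fleet-system list
```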
m
What's the version of the local cluster running Rancher?
g
The local Rancher cluster is running v1.24.11+rke2r1. I was wondering if it's worth upgrading that first, but wanted to test the process on a downstream dev environment beforehand.
Negative, upgrading the local cluster to v1.26.11 didn't do the trick; we're stuck with the same error when trying to upgrade a downstream cluster:
```
level=info msg="While calculating status.ResourceKey, error running helm template for bundle mcc-monitoring-managed-system-upgrade-controller with target options from : chart requires kubeVersion: >= 1.23.0-0 which is incompatible with Kubernetes v1.20.0"
```
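The Kubernetes version Fleet complains about there comes from its stored record of the downstream cluster rather than a live lookup, so it can go stale. A hedged way to compare the two (the cluster name and Fleet namespace below are placeholders):

```sh
# What the downstream cluster actually reports, asked directly:
kubectl version -o json | jq -r '.serverVersion.gitVersion'

# What Fleet has recorded, viewed from the local (Rancher) cluster;
# a mismatch suggests the fleet agent isn't checking in.
kubectl -n fleet-default get clusters.fleet.cattle.io my-downstream -o yaml
```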
For anyone else stumbling on this: the issue (at least for our clusters) was that the fleet agent couldn't register with the controller. We were using the hosts file on the cluster node to define the IP of the Rancher server, which worked for most containers because we set dnsPolicy to ClusterFirstWithHostNet on the cattle-cluster-agent deployment. The fleet pod evidently wasn't using that config, as it couldn't reach the Rancher server. After creating a CNAME to the Rancher server and forcing the cluster update via Fleet > Clusters, updating the RKE2 version from the console worked.
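For completeness, the dnsPolicy tweak mentioned above looks roughly like this; a sketch assuming the standard cattle-cluster-agent deployment in cattle-system (verify names in your cluster before patching):

```sh
# Have the agent pod use cluster DNS even when it's on the host network,
# which is the setting referenced above for cattle-cluster-agent.
kubectl -n cattle-system patch deployment cattle-cluster-agent \
  --type merge \
  -p '{"spec":{"template":{"spec":{"dnsPolicy":"ClusterFirstWithHostNet"}}}}'
```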
Exactly
m
Yeah, you can't alias things; the Rancher agent and everything related to it does DNS lookups.
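A quick way to see what those lookups resolve to from inside the cluster, using a throwaway pod (the hostname is a placeholder):

```sh
# Resolve the Rancher hostname via the same cluster DNS the agents use.
kubectl -n cattle-fleet-system run dns-check --rm -it --restart=Never \
  --image=busybox:1.36 -- nslookup rancher.example.com
```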