# rke2
c
We put out a release of RKE2 for every release from upstream. When there are no more Kubernetes releases from the 1.24 branch, there will be no more RKE2 releases from the 1.24 branch.
h
Is this the same for all Rancher enterprise customers? What about if there is a security vulnerability in, let's say, rke2 1.24.10? Does this mean that I have to upgrade to the next available version?
c
If there is a vuln in 1.24.10 that is fixed in 1.24.11, you would need to upgrade to 1.24.11. We’re not going to backport stuff from one patch release to a previous one.
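For the record, here's a rough sketch of what pinning a patch release looks like with the documented system-upgrade-controller integration; the plan name is made up, and the nodeSelector and version need adjusting for your cluster:

```yaml
# Sketch: a system-upgrade-controller Plan pinning RKE2 servers to a
# specific patch release. Assumes the controller is already installed
# in the system-upgrade namespace; the plan name is hypothetical.
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: rke2-server-plan
  namespace: system-upgrade
spec:
  concurrency: 1               # upgrade one server node at a time
  cordon: true                 # cordon each node before upgrading it
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: In
        values:
          - "true"
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/rke2-upgrade
  version: v1.24.11+rke2r1     # target patch release from the example above
```

On a single node you can also just re-run the install script with `INSTALL_RKE2_VERSION="v1.24.11+rke2r1"` set and restart the service.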
h
but what if 1.24.10 is the last patch of upstream kubernetes?
c
It isn’t. 1.24 isn’t end of life for another 5 months.
A better example might be v1.22.17?
That is the last release on the 1.22 branch. There will be no additional releases of 1.22, either from Kubernetes or from Rancher.
That is also a good example, as 1.22 was supposed to go end of life with 1.22.15, but they ended up doing two additional releases to address regressions and vulns.
h
ok, got it. Thanks brandond!
c
but yes, if it comes out today that there is an issue with 1.22.17 you would need to upgrade to 1.23 or newer
h
yeah, we're looking to upgrade but GlusterFS is deprecated in 1.25. I have to replace Gluster first with Longhorn before I can upgrade
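(The migration target is just ordinary PVCs against Longhorn's StorageClass; each Gluster volume becomes roughly the sketch below, with the claim name and size made up:)

```yaml
# Sketch of the migration target: a Longhorn-backed PVC replacing a
# Gluster volume. Claim name and size are hypothetical.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn   # Longhorn's default StorageClass
  resources:
    requests:
      storage: 10Gi            # sized to match the Gluster volume it replaces
```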
s
Where can you find info on end of life dates for releases?
r
Out of curiosity, what about if there's a critical vulnerability in something bundled with RKE2, such as the nginx ingress controller? I can see how it doesn't make sense for scarce Rancher resources to go back and work on that, but is there a guide/documentation for how an enterprising user might re-package a custom release to handle the bundled issue?
(or maybe to decouple it after the fact or something)
c
We have not yet made the call to do any additional RKE2 or K3s releases past the end of the Kubernetes release cycle. If there was ever some super-critical vuln in a packaged component, and we were just past the EOL date, we might consider doing a k3s2/rke2r2 release but that hasn’t been necessary yet.
r
I can see that being fair, so that's why I was hoping for maybe documentation on how people could decouple or rebuild with the new piece themselves as kind of "emergency first aid" for that kind of situation as a possible alternative.
c
the best way to address that would probably be to either disable the component and replace it with your own, or use HelmChartConfig to customize it to deploy a fix.
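As a rough sketch (the tag below is made up, and the exact values keys depend on the chart version shipped with your release), a HelmChartConfig dropped into /var/lib/rancher/rke2/server/manifests/ could pin a patched ingress-nginx image:

```yaml
# Sketch: override the image used by the packaged rke2-ingress-nginx
# chart. The tag is hypothetical; check the bundled chart's values
# for the actual keys before relying on this.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx     # must match the packaged chart's name
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      image:
        tag: "v1.2.1"          # hypothetical patched controller tag
        digest: ""             # clear any pinned digest so the tag is used
```

Disabling it entirely is `disable: rke2-ingress-nginx` in /etc/rancher/rke2/config.yaml, after which you deploy whatever ingress controller you want yourself.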
f
FWIW, "deprecated" != "removed", it should still work in 1.25
s
A better method is to "simply" follow the K8s releases and continually move forward, then you won't get stuck. Get out of the "don't touch it" enterprise mindset and embrace CI/CD for your entire stack. Features which are deprecated don't go out of support for a long time, so if you keep up to date with what's going on, you'll have plenty of time to migrate out of whatever.
By out of support I mean removed entirely.
r
I wouldn't disagree at all about that being better practice, but sadly I don't call the shots so need to have my plans ready for what I expect to happen.
s
@rough-farmer-49135 I fully understand that, just putting it out there because a lot of people have a point-and-shoot approach, or rather "install and forget", where if it hasn't blown up, all is good 😉
r
Yeah, and I'll admit that I have used the Kubernetes release cadence to get people like that to pause a bit, but I haven't succeeded in getting it to truly penetrate their thinking yet. I suspect they'll need to get burned badly first.
s
It's unfortunately classic, and the majority of companies still operate with what used to work 20+ years ago.
r
Which I find entertaining, because if those same people are looking at Kubernetes, then they're likely replacing something from a decade or more ago, and they've likely been talking up new technology to get approval/funding.
s
Indeed 😄
Drank the Kool-Aid, put it into production. Now what? 😆🙈
h
Deprecated in 1.25 and totally removed in 1.26