# rke2
a
I can't see any proxy settings being added as env variables to the deployment on the downstream cluster.
When provisioning a cluster I can set extra Agent env variables, but this option doesn't seem to be available for imported clusters? Or am I missing something here?
Okay, so I can configure the Agent env variables when importing the cluster, but I can't edit them after importing it? Is that correct?
Perhaps the question is aimed more at Rancher than RKE2...
Regardless, the question still stands: I've set up an HTTP proxy config that isn't being respected by cattle-cluster-agent. Any ideas why?
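For context, the "HTTP proxy config" in question is presumably the node-level env file described in the RKE2 docs, along these lines (the proxy address and NO_PROXY list are placeholders):

```
# Sketch of a typical RKE2 proxy configuration, e.g. in /etc/default/rke2-server
# (or /etc/sysconfig/rke2-server, depending on the distro). Values are placeholders.
HTTP_PROXY=http://proxy.example.com:8080
HTTPS_PROXY=http://proxy.example.com:8080
NO_PROXY=127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local
```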
c
no. Env vars from the systemd service do not propagate into pods. Note that “pods” are not listed where it says:
> These proxy settings will then be used in RKE2 and passed down to the embedded containerd and kubelet.
If you want env vars set in pods, you should include them in the pod spec.
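For illustration, one way to do that on an existing workload (the namespace, deployment name, and proxy address below are hypothetical placeholders):

```
# Hypothetical example: inject proxy env vars into a deployment's pod spec.
kubectl -n my-namespace set env deployment/my-app \
  HTTP_PROXY=http://proxy.example.com:8080 \
  HTTPS_PROXY=http://proxy.example.com:8080 \
  NO_PROXY=127.0.0.0/8,10.0.0.0/8,.svc,.cluster.local
```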
Check the Rancher docs on how to set env vars for Rancher components. Hint: it’s in the UI under “agent env vars” when importing the cluster.
a
Right, but I can't add them after a cluster has already been imported. Is the only way to either manually patch the cattle-cluster-agent deployment or delete and re-import the whole cluster?
c
what do you mean you can’t? What’s stopping you? It should be under Config -> Advanced Options
Just add whatever you want and then save it, it’ll update the deployment for you. If you tried to edit it on the downstream cluster directly, it would just revert to whatever was configured in Rancher.
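A quick way to confirm the change actually propagated (a sketch; the agent deployment lives in cattle-system on the downstream cluster):

```
# Check the env vars Rancher rendered onto the downstream agent deployment.
kubectl -n cattle-system get deployment cattle-cluster-agent \
  -o jsonpath='{.spec.template.spec.containers[0].env}'
```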
a
It's not there 😭
When *editing* already-imported clusters, that is
I can't send a screenshot unfortunately but I've got:
• Member Roles
• Labels & Annotations
• RKE2 Options
And no Agent Environment Variables under RKE2 Options, just:
• Kubernetes Version
• Drain/Concurrency Options
• Authorized Cluster Endpoint
c
what version of Rancher are you on?
a
2.8.5
Rancher is running on RKE1
c
That is odd. I’m not sure why they’d be there for K3s but not RKE2
That’d be a Rancher issue though. Would you mind opening an issue at https://github.com/rancher/rancher?
If you open a shell on the Rancher local cluster, you should be able to do
`kubectl edit cluster.management.cattle.io <ID>`
where the ID is the `c-m-xxx` string from the URL when you’re viewing the cluster in the dashboard
oh actually sorry, you want to edit the provisioning object:
`kubectl edit cluster.provisioning.cattle.io -n fleet-default <NAME>`
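The field to add in that object should be spec.agentEnvVars (the same field the “agent env vars” UI option writes). A non-interactive sketch of the equivalent edit, with a placeholder cluster name and proxy address:

```
# Sketch: add agent env vars via a merge patch on the provisioning object.
kubectl -n fleet-default patch cluster.provisioning.cattle.io my-imported-cluster \
  --type=merge -p '{"spec":{"agentEnvVars":[
    {"name":"HTTP_PROXY","value":"http://proxy.example.com:8080"},
    {"name":"HTTPS_PROXY","value":"http://proxy.example.com:8080"},
    {"name":"NO_PROXY","value":"127.0.0.0/8,10.0.0.0/8,cattle-system.svc"}]}}'
```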
a
Yep - I see it
c
or easier than that even, just add `?mode=edit&as=yaml` to the end of the URL when you’re viewing the cluster lol
a
Wow lmao
That works yeah
c
have to be in the cluster detail view, not the config view, but it should work
a
Just editing the objects with kubectl is fine with me 😄
c
if you wanna open an issue I can poke the UI folks about why it doesn’t show the env vars for RKE2. I think there’s another issue with it, in that editing it for K3s changes the management object, not the provisioning object, so the keys end up set inconsistently; they do get set downstream but I think they could be lost if the provisioning object is modified.
a
👍
Thank you so much for the help! Proxy really is the mother of all headaches
c
gl!