# k3s
c
The controller has always created the CRDs itself when it starts
That just added an auto-generated CRD to the build artifacts, as opposed to having a stale one that was checked into git
b
am I missing it in the build artifacts?
c
it’s included at the top of both of the yaml files in the release assets
b
aaaaahh
sorry I missed that... lastly, is there a comprehensive list of the RBAC needed in case I don't run it in kube-system?
I'm after cluster-wide but installing in a non-kube-system ns
c
all the RBAC should be included in the manifest, I think you’d just need to update the namespaces to wherever you’re deploying it?
b
I don't see any rbac in the manifest (this time I swear I've looked at it 🙂 )
the cluster-wide one has 2 CRDs and the deployment
c
Ahh right. It’s been a while since I looked at this and I’m not the primary maintainer of that project, but iirc it just uses the default serviceaccount which has permission to create Jobs. The Jobs in turn have full cluster access so that they can create whatever resources are in the Helm chart.
I don’t think we have an example of using custom RBAC and a separate SA for the controller.
because in either case it essentially needs full cluster admin for Helm to be able to do what it needs to do. The only difference between the namespaced and cluster-scoped installations is where the controller watches for HelmChart resources.
b
right, but implicitly running in kube-system means full access no?
c
no, there’s nothing special about that namespace privilege-wise
the Helm job pod itself runs with a ServiceAccount that is bound to the cluster-admin built-in role https://github.com/k3s-io/helm-controller/blob/master/pkg/controllers/chart/chart.go#L517
that’s where the full access comes from, not from the controller itself
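the binding it creates is roughly this shape (names are illustrative, the controller generates them per chart):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: helm-traefik            # illustrative; generated per chart by the controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: helm-traefik          # the job pod’s ServiceAccount, also created by the controller
    namespace: kube-system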
b
ok, but how would the controller be able to watch the CRDs cluster-wide without RBAC if it's not running in kube-system?
c
I’d have to go poke through the cluster RBAC to refresh my memory
b
no worries, it may be easier to deploy to kube-system, but I'll dump it in my preferred ns and see how bad it blows up 😄
see if I can't get the proper rbac squared away
I0313 20:48:00.594811       1 leaderelection.go:248] attempting to acquire leader lease kube-system/helm-controller-lock...
E0313 20:48:00.596531       1 leaderelection.go:330] error retrieving resource lock kube-system/helm-controller-lock: configmaps "helm-controller-lock" is forbidden: User "system:serviceaccount:operators:default" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
E0313 20:48:04.062603       1 leaderelection.go:330] error retrieving resource lock kube-system/helm-controller-lock: configmaps "helm-controller-lock" is forbidden: User "system:serviceaccount:operators:default" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
seems to be pretty hard-coded to some things
I'll just move it I suppose
even in kube-system..
I0313 21:07:18.118032       1 controllers.go:94] Starting helm controller with no namespace
I0313 21:07:18.118051       1 leaderelection.go:248] attempting to acquire leader lease kube-system/helm-controller-lock...
E0313 21:07:18.121038       1 leaderelection.go:330] error retrieving resource lock kube-system/helm-controller-lock: configmaps "helm-controller-lock" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
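guessing a Role/RoleBinding along these lines would at least cover the leader-election lock (untested, and probably not everything it needs):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: helm-controller-lock    # hypothetical name
  namespace: kube-system        # the lock lives here regardless of where the controller runs
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: helm-controller-lock
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: helm-controller-lock
subjects:
  - kind: ServiceAccount
    name: default               # whichever SA the controller deployment actually runs as
    namespace: operators        # or kube-system, wherever the controller pod lives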
c
honestly I’m not sure how much this gets used as a standalone controller. It was built for use in K3s/RKE2 where it is embedded into the supervisor as a basic way to get the packaged components deployed into the cluster at startup or during upgrade. If you’re trying to deploy a standalone Helm controller there are better, more feature-complete ones out there.
b
I predominantly run rke/rke2 and don't see it in any of the clusters
c
yes, it’s built in and runs in the supervisor
b
I don't deploy with rancher but they are joined
c
If you’re running K3s or RKE2 you don’t need to install anything, it’s already there
just deploy a HelmChart manifest and it’ll do its thing
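something along these lines, for example (repo/chart/values are just placeholders):
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: grafana                 # placeholder
  namespace: kube-system        # the built-in controller watches this namespace
spec:
  repo: https://grafana.github.io/helm-charts
  chart: grafana
  targetNamespace: monitoring
  valuesContent: |-
    replicas: 1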
b
there is no such crd in my clusters
c
did you start them with --disable-helm-controller or something?
b
no
c
what version of RKE2 are you running?
It’s not in RKE, only K3s/RKE2
If you look at a K3s or RKE2 cluster, it’ll be there
b
ok, most of them are rke but indeed I do see it in my rke2 clusters
I do not, however, see a controller pod running (it may be handled differently)
c
that’s what I mean by built into the supervisor
b
I'll just mess around with it and get it running... I'll have clusters not managed by rke2 that will probably need it at some point as well
c
it looks like the tests at least do set up a one-off SA and ClusterRoleBinding for it. When it’s deployed as part of RKE2/K3s it uses the supervisor’s admin RBAC. If you want to open an issue to include RBAC in the example manifests we can try to do that at some point.
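from memory it’s roughly this shape, plus pointing the deployment’s serviceAccountName at it (not the exact test manifest, and cluster-admin here is my assumption since the jobs need broad access anyway):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: helm-controller         # illustrative name
  namespace: operators          # wherever the controller is deployed
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: helm-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin           # assumption; check the actual test manifests for the exact role
subjects:
  - kind: ServiceAccount
    name: helm-controller
    namespace: operators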
b
I'll just do some trial/error and get it nailed down