# general
c
does the owner reference give you any hints?
Copy code
ownerReferences:
  - apiVersion: management.cattle.io/v3
    kind: Cluster
    name: local
s
Yes, it gives me a pretty clear hint that Rancher created it, hence why I am here. However, it is not super clear exactly what component creates it and why (beyond it being related to cluster creation/registration), which is why I asked the follow-up question in the first place. Also, a search of the Rancher docs did not reveal anything useful. And although I really want to believe that your question was good natured and not snark, it came across as pretty snarky.
c
you’ll find a namespace for each cluster, including the local cluster
that namespace contains all the resources that rancher needs to track and manage things for that cluster
s
Thank you. They are all empty in our registered clusters, so it was very unclear what they were for, but we don't do much with Rancher besides register them with the Rancher master, and have fleet deploy a single bundle to them.
Yeah, I just found them using the get-all kubectl plugin. There are indeed 4 resources in there.
c
there should be some rancher custom resources in every cluster ns, but if you're just doing kubectl get all on that ns they probably won't show up, because that doesn't actually list custom resources - just some of the default kubernetes resource types like pods and services and whatnot
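if you want to see everything in that namespace without the plugin, something along these lines works - roughly what get-all does under the hood (the namespace name is a placeholder):
Copy code
# enumerate every namespaced resource type the API server knows about,
# then list instances of each in the cluster's namespace
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n <cluster-namespace>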
the rancher docs are mostly concerned with how to use rancher. they don’t cover implementation details, like what all the internal resource types are, where they live, and what specific controller within rancher is responsible for managing them.
s
Thank you that helped a lot. As a cluster operator, I'm trying to make sure our team understands the details of how the clusters are setup, so some of the details are important. Thank you for helping shed a bit of light on this.
What should be in the cluster, and why, is an important one to conquer: for security, troubleshooting, validation, etc.
c
I am not sure you will find that level of detail in the rancher docs, unfortunately
s
Yeah. I know. That is why I end up here. Diving into the code is an option when trying to track down a bug, but things like this should at least be lightly covered in the docs. Things like: when registering a cluster, you will see it create new namespaces named X, Y, and Z, and this is what each one is responsible for.
c
yeah, that sort of stuff is generally considered an implementation detail. I don't think we cover things down to the specific names, types, and locations of managed resources for any of our products.
s
I'd agree if you all were managing the clusters, but since our engineers have to keep everything running, it is pretty important that they know why something as obvious as a namespace was created in the cluster, what it is for, etc. The fact that getting the answer to pretty surface-level questions (e.g. What is that namespace for?) is really difficult, even with some self-motivated searching, is really doing a disservice to the operators of these Rancher clusters.
b
Rancher creates a namespace for all of the clusters. For example the Rancher feature "projects" uses this namespace to store project manifests for the local cluster.
There is a good reason we ask Rancher to run in a standalone cluster: https://www.suse.com/support/kb/doc/?id=000020442
c
I mean, most of the resources will have fairly easy-to-discover cross-references. Like in this case, there is an owner reference back to the rancher Cluster object that indicates pretty clearly that the namespace is owned by that cluster.
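e.g. you can follow that reference by hand with something like this (assuming the clusters.management.cattle.io CRD exists wherever you run it):
Copy code
# print the namespace's owner references, then look up the owning Cluster object
kubectl get namespace local -o jsonpath='{.metadata.ownerReferences}{"\n"}'
kubectl get clusters.management.cattle.io local -o yaml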
s
@bulky-sunset-52084 I'm not sure why you mentioned that knowledge base. The local namespace is in all of our clusters, the main Rancher master (which runs nothing but Rancher) and all the other clusters that are registered with the Rancher master. Nowhere did I say that we were running other workloads on the server. What I did say is that the documentation should really be good enough that cluster operators understand how the product works at least at a high level. I honestly feel that this is a silly thing to be arguing over. I'm simply trying to say that, as a user of your product, our team had a question about how it was operating, and getting the answer was harder than it should be. I suppose I was hoping that would be heard, but I feel that the response has been a combination of both "it should be obvious based on the evidence" and "we don't really think that you need to know that", both of which are basically disregarding the fact that we needed an answer to the question to complete our understanding of an aspect of a system which we are responsible for running.
b
Woah bud sorry maybe I just misread your post. IDK why you have a local namespace in your downstream cluster. I thought you were talking about the rancher cluster - sorry about that.
s
@bulky-sunset-52084 Sorry if that came off a bit hot, I just feel that Brent was short with the initial response and that set the tone a bit. Yeah. There is a local cluster in every single cluster that we register with Rancher. We are using the Terraform provider to do the cluster import (these are imported not created by Rancher), so maybe that has something to do with it, but it definitely wasn't (and now maybe isn't again) clear how it got there, or what it was for.
c
> There is a local cluster in every single cluster that we register with Rancher. Wait, I’m sorry. Are you installing Rancher to clusters, and then importing those clusters into another Rancher instance?
There should only be a rancher-managed local namespace in the cluster that runs rancher, not in downstream clusters that only run Rancher cluster agents.
I don’t believe we support running Rancher in a cluster that is also managed by Rancher
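a quick way to check where that namespace actually exists, assuming all the clusters are contexts in one kubeconfig, would be something along these lines:
Copy code
# loop over every kubeconfig context and report whether a "local" namespace exists there
for ctx in $(kubectl config get-contexts -o name); do
  echo "== ${ctx}"
  kubectl --context "${ctx}" get namespace local --ignore-not-found
done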
b
Yea I just wanna get a clear understanding too. Are you installing rancher as in the helm chart in more than one cluster?
s
~Nope. So, Rancher (the admin management console, etc) is running on a cluster, then: Via Terraform, we then spin up new cluster in Azure, and import them into theNope.~
b
Okok thanks for clarifying that
s
one sec. I hit enter too early
b
Lol no problem 😄
s
Nope. So, Rancher (the admin management console, etc) is running on a cluster. Then, via Terraform, we spin up new clusters in Azure and import them into Rancher via the rancher/rancher2 Terraform provider, using something like this:
Copy code
resource "rancher2_cluster" "cluster_import_rancher" {
  name        = var.cluster_name
  description = "Imported cluster"

  labels = {
     ...
  }
}
And all of those imported clusters have a local namespace. It actually looks like it is empty, except for things that we (or Kubernetes) created in every namespace.
b
Ok awesome thanks so much for clarifying - my knee jerk reaction is you're right this has something to do with the way Rancher2 does imports. Give me a few I can take a look.
s
Ok. Thanks. It isn't an urgent problem, but we had no idea what the heck that namespace was for and it is everywhere, hence the original question, which I now see was somewhat unclear, likely because of assumptions on both sides about what was going on.
We are currently defining the provider like this:
Copy code
rancher2 = {
      source  = "rancher/rancher2"
      version = "~>3.1.1"
    }
c
That doesn’t seem right, afaik the cluster namespaces should only be present in the cluster running Rancher MCM itself.
s
I agree. What component actually normally creates that namespace?
Or is it the helm chart?
c
downstream clusters don't deploy things via the helm chart, the agent installs everything. On the MCM cluster, the helm chart just spins up the main Rancher pod in the cattle-system namespace, and things kind of fan out from there.
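for context, the MCM side is just the rancher chart into cattle-system, roughly like this (a sketch - cert-manager/TLS options omitted, and the hostname is a placeholder):
Copy code
# minimal sketch of a Rancher MCM install; a real install also needs cert-manager or custom TLS settings
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo update
helm install rancher rancher-latest/rancher \
  --namespace cattle-system --create-namespace \
  --set hostname=rancher.example.com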
s
Right. I get that. I just meant what normally creates it in the management cluster?
Helm or one of the controllers, etc?
We do also use fleet if that matters. So, the fleet agent is on these clusters as well.
c
what are you naming these clusters when you import them?
they’re not being named “local” by any chance are they
there is special handling for things with that name
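if you want to double-check what names Rancher has for them, a rough sketch against the MCM cluster (displayName is an assumption about where the human-readable name lives):
Copy code
# list the management Cluster objects and their display names (these are cluster-scoped resources)
kubectl get clusters.management.cattle.io \
  -o custom-columns=ID:.metadata.name,NAME:.spec.displayName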
s
No. 😄 They are generally something like team-org-lab-aks-1002-westus
Terraform also generally treats local specially as well, although you could pass it around as a string.
c
what version of Rancher are you running?
There were some changes to how RBAC is handled in downstream clusters, I suspect the creation of a local ns in downstream clusters might have been part of that? I mostly work on RKE2 and K3s, I haven’t been super in sync with some of the recent changes to move things into the downstream cluster agent.
it does seem odd to me that there is a reference to a cluster.management.cattle.io resource outside the MCM cluster though.
s
Let me check. It is likely a bit older.
b
Yea the clusters CRDs should only exist in the rancher cluster. I actually am curious what a kubectl get crds looks like on the downstream cluster.
The only thing that should exist on the downstream cluster is the cattle-cluster-agent and the cattle-system namespace it's in...
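a quick way to eyeball what is actually there on the downstream side would be something like:
Copy code
# list rancher-related namespaces and the workloads in cattle-system
kubectl get namespaces | grep -i cattle
kubectl -n cattle-system get deployments,daemonsets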
c
I’m seeing some annotations in your ns yaml that I think we only reference in the context of cleaning up from old versions of Rancher.
s
on the Rancher master:
Copy code
Component	Version
Rancher	    v2.8.1
Dashboard	v2.8.0
Helm	    v2.16.8-rancher2
Machine	    v0.15.0-rancher106
c
hmm interesting. that’s all very up to date
s
on a downstream (registered) cluster:
Copy code
$ kubectl get crds  | grep -e cattle -e fleet
apiservices.management.cattle.io                                  2023-12-07T22:30:03Z
apps.catalog.cattle.io                                            2023-12-07T22:30:03Z
authconfigs.management.cattle.io                                  2023-12-07T22:30:03Z
clusterregistrationtokens.management.cattle.io                    2023-12-07T22:30:03Z
clusterrepos.catalog.cattle.io                                    2023-12-07T22:30:03Z
clusters.management.cattle.io                                     2023-12-07T22:30:03Z
features.management.cattle.io                                     2023-12-07T22:30:00Z
groupmembers.management.cattle.io                                 2023-12-07T22:30:03Z
groups.management.cattle.io                                       2023-12-07T22:30:03Z
navlinks.ui.cattle.io                                             2023-12-07T22:30:02Z
operations.catalog.cattle.io                                      2023-12-07T22:30:03Z
podsecurityadmissionconfigurationtemplates.management.cattle.io   2023-12-07T22:30:03Z
preferences.management.cattle.io                                  2023-12-07T22:30:03Z
settings.management.cattle.io                                     2023-12-07T22:30:03Z
tokens.management.cattle.io                                       2023-12-07T22:30:03Z
userattributes.management.cattle.io                               2023-12-07T22:30:03Z
users.management.cattle.io                                        2023-12-07T22:30:03Z
b
Users.management.cattle.io? This should only ever exist in the upstream cluster... This is extremely odd..
c
what is the output of
kubectl get namespace local -o yaml --show-managed-fields=true
You’re sure you’re not deploying the rancher helm chart to these downstream clusters for some reason?
was someone under the impression that the rancher chart needed to be deployed to managed clusters?
b
Yea these are CRDs straight from the rancher helm chart for sure
c
all signs point to you having artifacts from rancher itself installed in this cluster. not just the rancher agent.
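one way to check for a leftover full Rancher install (as opposed to just the agent) would be something along these lines:
Copy code
# a full Rancher install would normally show up as a helm release and a "rancher" deployment
helm -n cattle-system list --all
kubectl -n cattle-system get deployments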
c
you’re sure you’re looking at the imported cluster, and not the cluster that rancher MCM is deployed to?
s
So, honestly I am an outside engineer helping this team with a project to help improve their cluster management, and I don't know all the history, but maybe they have a mistake somewhere that is a leftover from some older process. Maybe they have some old internal operator that is installing the CRDs and creating the namespace for some reason.
I see references to KOPF in here, so maybe it is some operator in their environment that is doing this...
Copy code
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    cattle.io/status: '{"Conditions":[{"Type":"ResourceQuotaInit","Status":"True","Message":"","LastUpdateTime":"2023-12-07T22:30:10Z"},{"Type":"InitialRolesPopulated","Status":"True","Message":"","LastUpdateTime":"2023-12-07T22:30:10Z"}]}'
    kopf.zalando.org/last-handled-configuration: |
      {"spec":{"finalizers":["kubernetes"]},"metadata":{"labels":{"team.cloud.example.com/bootstrap-component":"true","kubernetes.io/metadata.name":"local"},"annotations":{"cattle.io/status":"{\"Conditions\":[{\"Type\":\"ResourceQuotaInit\",\"Status\":\"True\",\"Message\":\"\",\"LastUpdateTime\":\"2023-12-07T22:30:10Z\"},{\"Type\":\"InitialRolesPopulated\",\"Status\":\"True\",\"Message\":\"\",\"LastUpdateTime\":\"2023-12-07T22:30:10Z\"}]}","lifecycle.cattle.io/create.namespace-auth":"true","scheduler.alpha.kubernetes.io/defaultTolerations":"[{\"operator\":\"Equal\",\"effect\":\"NoSchedule\",\"key\":\"kubernetes.azure.com/scalesetpriority\",\"value\":\"spot\"}]"}}}
    lifecycle.cattle.io/create.namespace-auth: "true"
    scheduler.alpha.kubernetes.io/defaultTolerations: '[{"operator":"Equal","effect":"NoSchedule","key":"kubernetes.azure.com/scalesetpriority","value":"spot"}]'
  creationTimestamp: "2023-12-07T22:30:09Z"
  finalizers:
  - controller.cattle.io/namespace-auth
  labels:
    team.cloud.example.com/bootstrap-component: "true"
    kubernetes.io/metadata.name: local
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          f:team.cloud.example.com/bootstrap-component: {}
    manager: Terraform
    operation: Apply
    time: "2024-02-12T22:34:26Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:kubernetes.io/metadata.name: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"a4910f0b-0063-4dc4-8c1d-d7d0cc465c75"}: {}
    manager: agent
    operation: Update
    time: "2023-12-07T22:30:09Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:cattle.io/status: {}
          f:lifecycle.cattle.io/create.namespace-auth: {}
        f:finalizers:
          .: {}
          v:"controller.cattle.io/namespace-auth": {}
    manager: rancher
    operation: Update
    time: "2023-12-07T22:30:09Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:kopf.zalando.org/last-handled-configuration: {}
    manager: kopf
    operation: Update
    time: "2024-02-12T22:34:26Z"
  name: local
  ownerReferences:
  - apiVersion: management.cattle.io/v3
    kind: Cluster
    name: local
    uid: a4910f0b-0063-4dc4-8c1d-d7d0cc465c75
  resourceVersion: "362385096"
  uid: 22626fcb-2c6b-4413-bfc1-1d616dcaeefb
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
Maybe an engineer made a mistake a long time ago that has just been doing its thing until someone was curious enough to ask. This is one of those reasons, though, that I think there should be better operator docs, explaining more clearly what people should expect to see in the cluster, etc.
b
It looks like they installed rancher in this cluster and attempted to "clean it up". Rancher really doesn't do a great job at uninstalling. It's generally advised to remove the entire cluster when finished with it
Well the only thing that should be there is the cattle-cluster-agent and cattle-system namespace after import. Really nothing else
c
yeah, somehow or another this cluster has rancher MCM bits on it. Whether it was installed at some point and then cleaned up, or it is being partially deployed by some automation, it's hard to say.
If you’re confident this is the downstream cluster and not the actual cluster that rancher is running in.
you have confirmed that, right?
it can be easy to point at the wrong kubeconfig or context and be listing things from the wrong cluster
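e.g. a quick sanity check of which API server you are actually talking to:
Copy code
# show the active context and the API server endpoint it points at
kubectl config current-context
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}{"\n"}'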
s
Hmmm. I'll dig into it a bit and report back. I built one of these clusters a few weeks ago, and Rancher was never installed on it, but I am now wondering if some internal operator is installing some CRDs and maybe pre-creating the namespace or something odd like that.
I can confirm that I am looking at the correct downstream cluster.
Thank you for the information so far. That at least gives me enough to dig in more on this side.
c
the ManagedFields info on that NS indicates that Rancher was running on that cluster as of 2 months ago
Copy code
- apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:cattle.io/status: {}
          f:lifecycle.cattle.io/create.namespace-auth: {}
        f:finalizers:
          .: {}
          v:"controller.cattle.io/namespace-auth": {}
    manager: rancher
    operation: Update
    time: "2023-12-07T22:30:09Z"
not the agent, but a full-on rancher mcm install
but it sounds like it's not there any more, so… not sure what's going on.
s
That is very, very strange. Well, might be time to tear this lab server down, spin it back up and see what happens.
b
Ok this is everything rancher would ever create in the downstream cluster: https://github.com/rancher/rancher/blob/release%2Fv2.9/pkg%2Fsystemtemplate%2Ftemplate.go Some of that won't be in there, like the Windows stuff, if you're not running a Windows cluster
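You can also pull down the rendered import manifest and read exactly what the registration command applies. Rough sketch only - the URL shape here is from memory, the real one is whatever the cluster's registration command gives you:
Copy code
# inspect the import manifest instead of blindly piping it into kubectl apply
curl -sfkL "https://<rancher-server>/v3/import/<token>_<cluster-id>.yaml" | less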
s
Oh. That is helpful. Thanks! Interesting. We also have other namespaces like cattle-impersonation-system and cattle-fleet-system (this is legit, I assume, since it has the fleet agent)
b
Dang forgot about fleet but yes the fleet stuff for sure
There is a fleet-agent that runs in its own namespace
c
yeah fleet will have some things of its own but that cattle-impersonation-system would also be another MCM namespace I believe
b
https://github.com/rancher/fleet/blob/master/docs%2Fdesign.md Yes it should only be that one ns for fleet
In the document above the local-fleet-cluster and the managed-fleet-cluster are both the rancher cluster
Cattle-impersonation-system is an upstream namespace for sure https://github.com/rancher/rancher/blob/release%2Fv2.9/pkg%2Fimpersonation%2Fimpersonation.go
s
Yeah, I was just looking at the PR that added it here: https://github.com/rancher/rancher/pull/33591/files
So, I am getting pretty confident that this Terraform resource is the point at which the namespaces and management CRDs show up in our downstream clusters. Should we not be using this, and be using something else instead? https://registry.terraform.io/providers/rancher/rancher2/latest/docs/resources/cluster#creating-rancher-v2-imported-cluster
Copy code
resource "rancher2_cluster" "cluster_import_rancher" {
  name        = var.cluster_name
  description = "imported cluster"

  labels = {
    "<http://provider.cattle.io|provider.cattle.io>"                 = "aks",
    ...
  }
}
Is it possible that one of these things creates the local namespace or installs the CRDs?
Copy code
cattle-fleet-system             fleet-agent-ddc444854-mht9n                                  1/1     Running     0          19m
cattle-system                   cattle-cluster-agent-7f7955dbdb-d97xq                        1/1     Running     0          19m
cattle-system                   cattle-cluster-agent-7f7955dbdb-mjb5x                        1/1     Running     0          19m
cattle-system                   helm-operation-hfrs9                                         0/2     Completed   0          19m
cattle-system                   rancher-webhook-5677dbfdf5-6tmsn
Those are the Rancher-related pods that appear in the cluster, post registration.
That webhook appears to run:
Copy code
---------------------------------------------------------------------
SUCCESS: helm upgrade --force-adopt=true --history-max=5 --install=true --namespace=cattle-system --reset-values=true --timeout=5m0s --values=/home/shell/helm/values-rancher-webhook-103.0.1-up0.4.2.yaml --version=103.0.1+up0.4.2 --wait=true rancher-webhook /home/shell/helm/rancher-webhook-103.0.1-up0.4.2.tgz
---------------------------------------------------------------------
@bulky-sunset-52084 @creamy-pencil-82913 Any thoughts or ideas about what is actually installing those CRDs and creating the local namespace?
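One way to see what is applying the CRDs is to grep the agent logs, roughly:
Copy code
# look for the CRD apply messages from the cluster agent's registration process
kubectl -n cattle-system logs deploy/cattle-cluster-agent --all-containers | grep "Applying CRD"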
Ok. The CRDs are being installed into our downstream clusters by the cattle-cluster-agent:
Copy code
cattle-cluster-agent-7f7955dbdb-7z2r8 cluster-register time="2024-02-15T15:28:40Z" level=info msg="Applying CRD features.management.cattle.io"
cattle-cluster-agent-7f7955dbdb-7z2r8 cluster-register time="2024-02-15T15:28:40Z" level=info msg="Applying CRD navlinks.ui.cattle.io"
cattle-cluster-agent-7f7955dbdb-7z2r8 cluster-register time="2024-02-15T15:28:40Z" level=info msg="Applying CRD podsecurityadmissionconfigurationtemplates.management.cattle.io"
cattle-cluster-agent-7f7955dbdb-7z2r8 cluster-register time="2024-02-15T15:28:40Z" level=info msg="Applying CRD clusters.management.cattle.io"
cattle-cluster-agent-7f7955dbdb-7z2r8 cluster-register time="2024-02-15T15:28:40Z" level=info msg="Applying CRD apiservices.management.cattle.io"
cattle-cluster-agent-7f7955dbdb-7z2r8 cluster-register time="2024-02-15T15:28:40Z" level=info msg="Applying CRD clusterregistrationtokens.management.cattle.io"
cattle-cluster-agent-7f7955dbdb-7z2r8 cluster-register time="2024-02-15T15:28:40Z" level=info msg="Applying CRD settings.management.cattle.io"
cattle-cluster-agent-7f7955dbdb-7z2r8 cluster-register time="2024-02-15T15:28:40Z" level=info msg="Applying CRD preferences.management.cattle.io"
cattle-cluster-agent-7f7955dbdb-7z2r8 cluster-register time="2024-02-15T15:28:40Z" level=info msg="Applying CRD features.management.cattle.io"
cattle-cluster-agent-7f7955dbdb-7z2r8 cluster-register time="2024-02-15T15:28:40Z" level=info msg="Applying CRD clusterrepos.catalog.cattle.io"
cattle-cluster-agent-7f7955dbdb-7z2r8 cluster-register time="2024-02-15T15:28:40Z" level=info msg="Applying CRD operations.catalog.cattle.io"
cattle-cluster-agent-7f7955dbdb-7z2r8 cluster-register time="2024-02-15T15:28:40Z" level=info msg="Applying CRD apps.catalog.cattle.io"
cattle-cluster-agent-7f7955dbdb-7z2r8 cluster-register time="2024-02-15T15:28:41Z" level=info msg="Starting API controllers"

cattle-cluster-agent-7f7955dbdb-h2zgg cluster-register INFO: Environment: CATTLE_ADDRESS=192.168.8.8 CATTLE_CA_CHECKSUM= CATTLE_CLUSTER=true CATTLE_CLUSTER_AGENT_PORT=tcp://192.168.253.21:80 CATTLE_CLUSTER_AGENT_PORT_443_TCP=tcp://192.168.253.21:443 CATTLE_CLUSTER_AGENT_PORT_443_TCP_ADDR=192.168.253.21 CATTLE_CLUSTER_AGENT_PORT_443_TCP_PORT=443 CATTLE_CLUSTER_AGENT_PORT_443_TCP_PROTO=tcp CATTLE_CLUSTER_AGENT_PORT_80_TCP=tcp://192.168.253.21:80 CATTLE_CLUSTER_AGENT_PORT_80_TCP_ADDR=192.168.253.21 CATTLE_CLUSTER_AGENT_PORT_80_TCP_PORT=80 CATTLE_CLUSTER_AGENT_PORT_80_TCP_PROTO=tcp CATTLE_CLUSTER_AGENT_SERVICE_HOST=192.168.253.21 CATTLE_CLUSTER_AGENT_SERVICE_PORT=80 CATTLE_CLUSTER_AGENT_SERVICE_PORT_HTTP=80 CATTLE_CLUSTER_AGENT_SERVICE_PORT_HTTPS_INTERNAL=443 CATTLE_CLUSTER_REGISTRY= CATTLE_FEATURES=embedded-cluster-api=false,fleet=false,monitoringv1=false,multi-cluster-management=false,multi-cluster-management-agent=true,provisioningv2=false,rke2=false CATTLE_INGRESS_IP_DOMAIN=sslip.io CATTLE_INSTALL_UUID=4a7df5d1-e641-4541-96e4-58bc2ca8ea14 CATTLE_INTERNAL_ADDRESS= CATTLE_IS_RKE=false CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=cattle-cluster-agent-7f7955dbdb-h2zgg CATTLE_RANCHER_WEBHOOK_VERSION=103.0.1+up0.4.2 CATTLE_SERVER=https://master-rke.k8s.example.com CATTLE_SERVER_VERSION=v2.8.1
So, if the CRDs aren't supposed to be there, are we registering them wrong, configuring the agent incorrectly somehow, or is there a bug in the Rancher2 Terraform provider? It looks like the agent applies these CRDs every single time the pod is started.
When I delete the local namespace and then restart the cattle-cluster-agent pods, I see the local namespace being recreated soon afterwards, right about the time these log lines appear.
Copy code
cattle-system cattle-cluster-agent-7f7955dbdb-9h4nv cluster-register 2024-02-15T08:54:08.803745223-08:00 I0215 16:54:08.803638      39 leaderelection.go:255] successfully acquired lease kube-system/cattle-controllers
cattle-system cattle-cluster-agent-7f7955dbdb-9h4nv cluster-register 2024-02-15T08:54:08.933203361-08:00 time="2024-02-15T16:54:08Z" level=info msg="Steve auth startup complete"
cattle-system cattle-cluster-agent-7f7955dbdb-9h4nv cluster-register 2024-02-15T08:54:09.010251910-08:00 time="2024-02-15T16:54:09Z" level=info msg="Registering namespaceHandler for adding labels "
cattle-system cattle-cluster-agent-7f7955dbdb-9h4nv cluster-register 2024-02-15T08:54:09.114096197-08:00 time="2024-02-15T16:54:09Z" level=info msg="Starting apps/v1, Kind=Deployment controller"
cattle-system cattle-cluster-agent-7f7955dbdb-9h4nv cluster-register 2024-02-15T08:54:09.114129697-08:00 time="2024-02-15T16:54:09Z" level=info msg="Starting apps/v1, Kind=DaemonSet controller"
cattle-system cattle-cluster-agent-7f7955dbdb-9h4nv cluster-register 2024-02-15T08:54:09.114144797-08:00 time="2024-02-15T16:54:09Z" level=info msg="Starting /v1, Kind=Node controller"
cattle-system cattle-cluster-agent-7f7955dbdb-9h4nv cluster-register 2024-02-15T08:54:09.114154797-08:00 time="2024-02-15T16:54:09Z" level=info msg="Starting catalog.cattle.io/v1, Kind=App controller"
cattle-system cattle-cluster-agent-7f7955dbdb-9h4nv cluster-register 2024-02-15T08:54:09.114159497-08:00 time="2024-02-15T16:54:09Z" level=info msg="Starting apps/v1, Kind=ReplicaSet controller"
cattle-system cattle-cluster-agent-7f7955dbdb-9h4nv cluster-register 2024-02-15T08:54:09.114164597-08:00 time="2024-02-15T16:54:09Z" level=info msg="Starting admissionregistration.k8s.io/v1, Kind=MutatingWebhookConfiguration controller"
cattle-system cattle-cluster-agent-7f7955dbdb-9h4nv cluster-register 2024-02-15T08:54:09.149590351-08:00 time="2024-02-15T16:54:09Z" level=info msg="Starting batch/v1, Kind=Job controller"
cattle-system cattle-cluster-agent-7f7955dbdb-9h4nv cluster-register 2024-02-15T08:54:09.149617251-08:00 time="2024-02-15T16:54:09Z" level=info msg="Starting /v1, Kind=Service controller"
cattle-system cattle-cluster-agent-7f7955dbdb-9h4nv cluster-register 2024-02-15T08:54:09.149622451-08:00 time="2024-02-15T16:54:09Z" level=info msg="Starting /v1, Kind=ReplicationController controller"
cattle-system cattle-cluster-agent-7f7955dbdb-9h4nv cluster-register 2024-02-15T08:54:09.149627051-08:00 time="2024-02-15T16:54:09Z" level=info msg="Starting batch/v1, Kind=CronJob controller"
cattle-system cattle-cluster-agent-7f7955dbdb-9h4nv cluster-register 2024-02-15T08:54:09.149646351-08:00 time="2024-02-15T16:54:09Z" level=info msg="Starting catalog.cattle.io/v1, Kind=Operation controller"
cattle-system cattle-cluster-agent-7f7955dbdb-9h4nv cluster-register 2024-02-15T08:54:09.149651051-08:00 time="2024-02-15T16:54:09Z" level=info msg="Starting /v1, Kind=Endpoints controller"
cattle-system cattle-cluster-agent-7f7955dbdb-9h4nv cluster-register 2024-02-15T08:54:09.149655251-08:00 time="2024-02-15T16:54:09Z" level=info msg="Starting apps/v1, Kind=StatefulSet controller"
cattle-system cattle-cluster-agent-7f7955dbdb-9h4nv cluster-register 2024-02-15T08:54:09.149659351-08:00 time="2024-02-15T16:54:09Z" level=info msg="Starting /v1, Kind=Pod controller"
cattle-system cattle-cluster-agent-7f7955dbdb-9h4nv cluster-register 2024-02-15T08:54:09.149663450-08:00 time="2024-02-15T16:54:09Z" level=info msg="Starting networking.k8s.io/v1, Kind=Ingress controller"
cattle-system cattle-cluster-agent-7f7955dbdb-9h4nv cluster-register 2024-02-15T08:54:09.149668050-08:00 time="2024-02-15T16:54:09Z" level=info msg="Starting admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration controller"
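The repro was essentially the following (a sketch - exact invocations may differ):
Copy code
# delete the namespace, bounce the agent, and watch the namespace get recreated
kubectl delete namespace local
kubectl -n cattle-system rollout restart deployment/cattle-cluster-agent
kubectl get namespace local --watch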
So, based on all this investigation, it appears that the cattle-cluster-agent is installing the CRDs and creating the local namespace in all of our downstream clusters. @bulky-sunset-52084 @creamy-pencil-82913 Any thoughts or ideas on next steps?
• If it shouldn't be doing this for downstream clusters, what might be causing this issue?
  ◦ Are we registering them wrong, configuring the agent incorrectly somehow, or is there a bug in the Rancher2 Terraform provider?
• If this is expected behavior, can you please confirm that? (since earlier it was stated that this should NOT be happening)
@creamy-pencil-82913 @bulky-sunset-52084 Does this look to be what you would both expect? You both said that the local namespace and CRDs should not exist in downstream clusters, but I have confirmed that the cattle-cluster-agent is installing the CRDs and very likely creating the local namespace. Does this mean this is actually normal behavior? Or is it a sign that the cattle-cluster-agent is mis-configured somehow?