# fleet
the chart version is 102.0.1+up40.1.2 and my fleet.yaml diffs look like those. The last one was my attempt to fix it myself, but apparently I have not fully understood what the modified error is referencing and how. I thought that, because the error was literally complaining about everything under the spec of this ServiceMonitor, I could just remove the whole path, but it did not work.
Name:         rancher-monitoring-kubelet
Namespace:    kube-system
Labels:       app=rancher-monitoring-kubelet
              app.kubernetes.io/instance=rancher-monitoring
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/part-of=rancher-monitoring
              app.kubernetes.io/version=102.0.1_up40.1.2
              chart=rancher-monitoring-102.0.1_up40.1.2
              heritage=Helm
              objectset.rio.cattle.io/hash=dcdf7bd94ce32ab9d267436aabd13ff6e6022ae3
              release=rancher-monitoring
Annotations:  meta.helm.sh/release-name: rancher-monitoring
              meta.helm.sh/release-namespace: cattle-monitoring-system
              objectset.rio.cattle.io/id: default-core-bundle-rancher-monitoring
API Version:  monitoring.coreos.com/v1
Kind:         ServiceMonitor
Metadata:
  Creation Timestamp:  2023-07-04T15:26:34Z
  Generation:          1
  Managed Fields:
    API Version:  monitoring.coreos.com/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:meta.helm.sh/release-name:
          f:meta.helm.sh/release-namespace:
          f:objectset.rio.cattle.io/id:
        f:labels:
          .:
          f:app:
          f:app.kubernetes.io/instance:
          f:app.kubernetes.io/managed-by:
          f:app.kubernetes.io/part-of:
          f:app.kubernetes.io/version:
          f:chart:
          f:heritage:
          f:objectset.rio.cattle.io/hash:
          f:release:
      f:spec:
        .:
        f:endpoints:
        f:jobLabel:
        f:namespaceSelector:
          .:
          f:matchNames:
        f:selector:
    Manager:         fleetagent
    Operation:       Update
    Time:            2023-07-04T15:26:34Z
  Resource Version:  65805
  UID:               10425148-9b4b-44e7-b098-240eddbc9768
Spec:
  Endpoints:
    Bearer Token File:  /var/run/secrets/kubernetes.io/serviceaccount/token
    Honor Labels:       true
    Port:               https-metrics
    Relabelings:
      Action:  replace
      Source Labels:
        __metrics_path__
      Target Label:  metrics_path
    Scheme:          https
    Tls Config:
      Ca File:               /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      Insecure Skip Verify:  true
    Bearer Token File:       /var/run/secrets/kubernetes.io/serviceaccount/token
    Honor Labels:            true
    Metric Relabelings:
      Action:  drop
      Regex:   container_cpu_(cfs_throttled_seconds_total|load_average_10s|system_seconds_total|user_seconds_total)
      Source Labels:
        __name__
      Action:  drop
      Regex:   container_fs_(io_current|io_time_seconds_total|io_time_weighted_seconds_total|reads_merged_total|sector_reads_total|sector_writes_total|writes_merged_total)
      Source Labels:
        __name__
      Action:  drop
      Regex:   container_memory_(mapped_file|swap)
      Source Labels:
        __name__
      Action:  drop
      Regex:   container_(file_descriptors|tasks_state|threads_max)
      Source Labels:
        __name__
      Action:  drop
      Regex:   container_spec.*
      Source Labels:
        __name__
      Action:  drop
      Regex:   .+;
      Source Labels:
        id
        pod
    Path:  /metrics/cadvisor
    Port:  https-metrics
    Relabelings:
      Action:  replace
      Source Labels:
        __metrics_path__
      Target Label:  metrics_path
    Scheme:          https
    Tls Config:
      Ca File:               /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      Insecure Skip Verify:  true
    Bearer Token File:       /var/run/secrets/kubernetes.io/serviceaccount/token
    Honor Labels:            true
    Path:                    /metrics/probes
    Port:                    https-metrics
    Relabelings:
      Action:  replace
      Source Labels:
        __metrics_path__
      Target Label:  metrics_path
    Scheme:          https
    Tls Config:
      Ca File:               /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      Insecure Skip Verify:  true
  Job Label:                 k8s-app
  Namespace Selector:
    Match Names:
      kube-system
  Selector:
    Match Labels:
      app.kubernetes.io/name:  kubelet
      k8s-app:                 kubelet
Events:                        <none>
this is the `kubectl describe` output of the problematic ServiceMonitor
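For context on what a `comparePatches` remove operation actually does before Fleet compares live and desired state, here is a minimal Python sketch. The object and patch path are illustrative (a stripped-down ServiceMonitor, not the real manifest), and the JSON Pointer handling is simplified (no escape sequences, no array indices):

```python
def apply_remove(obj, path):
    """Remove the value at a JSON Pointer path (simplified: no escapes, no array indices)."""
    parts = path.lstrip("/").split("/")
    target = obj
    # Walk down to the parent of the field to remove.
    for key in parts[:-1]:
        target = target[key]
    # Drop the final key; silently ignore it if it is already absent.
    target.pop(parts[-1], None)
    return obj

# A stripped-down stand-in for the ServiceMonitor above.
service_monitor = {
    "kind": "ServiceMonitor",
    "spec": {
        "endpoints": [{"port": "https-metrics"}],
        "jobLabel": "k8s-app",
    },
}

apply_remove(service_monitor, "/spec/endpoints")
print(service_monitor["spec"])  # "endpoints" is gone; "jobLabel" remains
```

Because the same operation is applied to both objects before diffing, removing `/spec/endpoints` means any drift under that path (such as the kubelet endpoints the cluster rewrites) can no longer trigger a modified state.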
OK, I apparently fixed it with this and a complete new redeploy of the GitRepo:
diff:
  comparePatches:
    - apiVersion: admissionregistration.k8s.io/v1
      kind: MutatingWebhookConfiguration
      name: rancher-monitoring-admission
      operations:
        - {"op":"remove", "path":"/webhooks"}
    - apiVersion: admissionregistration.k8s.io/v1
      kind: ValidatingWebhookConfiguration
      name: rancher-monitoring-admission
      operations:
        - {"op":"remove", "path":"/webhooks"}
    - apiVersion: monitoring.coreos.com/v1
      kind: ServiceMonitor
      name: rancher-monitoring-kubelet
      namespace: kube-system
      operations:
        - {"op":"remove", "path":"/spec/endpoints"}
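For readers wondering where this `diff:` section lives: it sits at the top level of `fleet.yaml`, alongside the chart configuration. A minimal sketch below, where the `helm:` values (repo URL, chart name, release name) are illustrative placeholders for a rancher-monitoring deployment, not copied from the original thread:

```yaml
# fleet.yaml (sketch; helm values are illustrative placeholders)
defaultNamespace: cattle-monitoring-system
helm:
  releaseName: rancher-monitoring
  chart: rancher-monitoring
  version: 102.0.1+up40.1.2
diff:
  comparePatches:
    # Ignore fields the cluster mutates after deploy, so Fleet does not
    # report the bundle as "modified".
    - apiVersion: monitoring.coreos.com/v1
      kind: ServiceMonitor
      name: rancher-monitoring-kubelet
      namespace: kube-system
      operations:
        - {"op": "remove", "path": "/spec/endpoints"}
```

Note that `name` and `namespace` must match the live object exactly; a patch that targets the wrong namespace is silently ineffective.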
hello, I think this PR is related to this issue. However, working with fleet 0.8.2, I still face the same problem with the prometheus operator and ServiceMonitor