limited-eye-27484
11/03/2022, 6:22 PM
I configured the cattle-monitoring-system Alertmanager by hand, editing and replacing the contents of the alertmanager-rancher-monitoring-alertmanager opaque secret with a real alertmanager.yaml file that routes to a Slack channel.
Now the question is: does that also need to be done when using the Prometheus Federator in a Project Monitor with its own Prometheus, Alertmanager, and Grafana?
I tried using the exact same alertmanager.yaml file that is now working for the cattle-monitoring-system Alertmanager in a Project Monitor, with no luck. I can see my alerts in the Project Monitor Alertmanager; they just never fire into my Slack channel, since the Alertmanager instance in the Project thinks it has no real config.

apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  creationTimestamp: "2022-11-03T05:46:38Z"
  generation: 3
  managedFields:
  - apiVersion: monitoring.coreos.com/v1alpha1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        .: {}
        f:receivers: {}
        f:route:
          .: {}
          f:groupBy: {}
          f:groupInterval: {}
          f:groupWait: {}
          f:matchers: {}
          f:repeatInterval: {}
    manager: agent
    operation: Update
    time: "2022-11-03T05:46:38Z"
  name: default
  namespace: cattle-project-p-glrz6
  resourceVersion: "38078839"
  uid: 67c2fa11-763c-4e29-abf6-11e63daaba96
spec:
  receivers:
  - name: default
    slackConfigs:
    - apiURL:
        key: SLACK_URL
        name: slack-channel
      channel: '#alerts'
      httpConfig: {}
      sendResolved: true
      text: '{{ template "slack.rancher.text" . }}'
  route:
    groupBy:
    - job_name
    - namespace
    groupInterval: 5m
    groupWait: 30s
    matchers:
    - name: alertname
      value: KubeJobFailed
    - name: severity
      value: critical
    - name: severity
      value: warning
    repeatInterval: 4h
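For comparison, a minimal sketch of the kind of raw alertmanager.yaml that would go into the cluster-level opaque secret mentioned earlier (the webhook URL and channel below are placeholders, not values from this thread):

```yaml
# Minimal Alertmanager config that routes all alerts to Slack.
# slack_api_url is a placeholder; substitute your real incoming-webhook URL.
global:
  resolve_timeout: 5m
  slack_api_url: "https://hooks.slack.com/services/XXX/YYY/ZZZ"
route:
  receiver: slack
  group_by: [alertname, namespace]
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h
receivers:
- name: slack
  slack_configs:
  - channel: "#alerts"
    send_resolved: true
    text: '{{ template "slack.rancher.text" . }}'
```

Note that this raw-secret format differs from the AlertmanagerConfig CRD above: the CRD uses camelCase fields (slackConfigs, groupBy) and is namespaced, while the secret holds the native snake_case Alertmanager file.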
millions-pizza-50389
01/11/2023, 11:43 PM
additionalAlertRelabelConfigs:
- action: replace
  regex: (.*)
  replacement: "$1"
  source_labels:
  - namespace
  target_label: source_namespace
- action: replace
  regex: (.+)
  replacement: "default-namespace-name"
  source_labels:
  - namespace
  target_label: namespace
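If that snippet is intended for the Prometheus spec, it would typically sit under prometheus.prometheusSpec in the kube-prometheus-stack / rancher-monitoring Helm values, roughly like this (a sketch assuming that chart layout; "default-namespace-name" is whatever fixed value you want):

```yaml
# Sketch: placement of additionalAlertRelabelConfigs in Helm values.
# The two rules run before alerts are sent to Alertmanager: the first
# copies the original namespace label into source_namespace, the second
# overwrites namespace with a fixed value.
prometheus:
  prometheusSpec:
    additionalAlertRelabelConfigs:
    - action: replace
      regex: (.*)
      replacement: "$1"
      source_labels: [namespace]
      target_label: source_namespace
    - action: replace
      regex: (.+)
      replacement: "default-namespace-name"
      source_labels: [namespace]
      target_label: namespace
```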