# k3s
c
Which port are you scraping metrics from?
It sounds like you're scraping one of the Kubernetes ports and getting etcd client metrics, instead of scraping the etcd server metrics port
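A quick way to tell the two apart (a sketch, assuming you're on a k3s server node running embedded etcd): the embedded etcd serves its own metrics on 127.0.0.1:2381, while the apiserver's /metrics endpoint only carries etcd client metrics.
```bash
# Run on a k3s server node. 2381 is etcd's default metrics port;
# etcd_server_* series only appear in the real etcd server metrics.
curl -s http://127.0.0.1:2381/metrics | grep -m 5 '^etcd_server'
```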
b
I'm only using the default community kube-prometheus-stack. Any idea where I can find the etcd metrics, since it's embedded in k3s?
Oh, I see now: I found the service at 127.0.0.1. I can manage from here; maybe I just need to expose the port or something.
c
There is a flag for that:
```
--etcd-expose-metrics                      (db) Expose etcd metrics to client interface. (default: false)
```
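For anyone else hitting this, one way to enable it (a sketch, assuming k3s runs as a systemd service and reads the default config file at /etc/rancher/k3s/config.yaml on the server/etcd nodes):
```bash
# Config-file equivalent of the --etcd-expose-metrics flag; the restart picks it up.
echo 'etcd-expose-metrics: true' | sudo tee -a /etc/rancher/k3s/config.yaml
sudo systemctl restart k3s
```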
b
Yep, and then I created a fake etcd endpoint for the default kube-prometheus-stack etcd Service by adding a DaemonSet like this:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: etcd-metrics-proxy
spec:
  selector:
    matchLabels:
      app: etcd-metrics-proxy
  template:
    metadata:
      labels:
        app: etcd-metrics-proxy
        component: etcd   # the label kube-prometheus-stack's etcd Service selects on
    spec:
      nodeSelector:
        node-role.kubernetes.io/etcd: "true"   # only run on the etcd nodes
      hostNetwork: true   # pod IP = node IP, so the node's 2381 metrics port is reachable via the pod
      containers:
      - name: pause
        image: rancher/mirrored-pause:3.6   # no-op container that just keeps the pod alive
        ports:
          - containerPort: 2381
            name: metrics
```
Works beautifully!
kube-prometheus-stack expects etcd pods with the label "component: etcd".
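That label matches the chart's default kubeEtcd service selector, which is how the proxy pods end up as endpoints of the etcd Service the chart creates. A rough way to double-check (the Helm repo alias, release name, and kube-system namespace below are assumptions about a default install):
```bash
# Show the chart's default etcd scrape settings (selector, port)
helm show values prometheus-community/kube-prometheus-stack | grep -A 10 '^kubeEtcd:'
# Confirm the proxy pods are now endpoints of the chart-managed etcd Service
kubectl -n kube-system get endpoints kube-prometheus-stack-kube-etcd -o wide
```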