# opni
b
There might be something wrong with our opinionated setup that could affect metric grouping. I would check that the raw query has the right cardinality (i.e. two or more distinct matching series), via the CLI:
```
opni metrics admin query --clusters=<cluster-id>  'kube_node_status_condition{job="kube-state-metrics",condition="Ready",status="true"} == 0'
```
or via a Grafana dashboard
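(For reference, a minimal sketch of that cardinality check done directly against a Prometheus-compatible query endpoint, using the standard Prometheus Go client. The address is a placeholder and auth is omitted, so adjust for your gateway setup.)
```go
// Sketch: count how many series the raw alerting query matches, assuming a
// Prometheus-compatible query API is reachable. Address is a placeholder.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/prometheus/client_golang/api"
	v1 "github.com/prometheus/client_golang/api/prometheus/v1"
	"github.com/prometheus/common/model"
)

func main() {
	client, err := api.NewClient(api.Config{
		Address: "http://localhost:9090", // placeholder: point at the cluster's query endpoint
	})
	if err != nil {
		log.Fatal(err)
	}
	promAPI := v1.NewAPI(client)

	query := `kube_node_status_condition{job="kube-state-metrics",condition="Ready",status="true"} == 0`
	result, warnings, err := promAPI.Query(context.Background(), query, time.Now())
	if err != nil {
		log.Fatal(err)
	}
	if len(warnings) > 0 {
		log.Println("warnings:", warnings)
	}

	// The grouping question only matters if two or more distinct series match.
	if vec, ok := result.(model.Vector); ok {
		fmt.Printf("matched %d series\n", len(vec))
		for _, sample := range vec {
			fmt.Println(sample.Metric)
		}
	}
}
```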
b
The CLI command returns the following response regardless of the numeric value supplied or whether the cluster-id is valid:
{"status":"success","data":{"resultType":"vector","result":[]}}
b
oh yeah sorry, that should be a query range,
with the time interval covering the period when the nodes were brought down
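(If the CLI's query-range invocation is unclear, the same range query can be sketched against a Prometheus-compatible endpoint with the Prometheus Go client. The address, window, and step below are placeholders.)
```go
// Sketch: range query over the window when the nodes were brought down.
// Address, time window, and step are placeholders to adjust for your setup.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/prometheus/client_golang/api"
	v1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

func main() {
	client, err := api.NewClient(api.Config{Address: "http://localhost:9090"})
	if err != nil {
		log.Fatal(err)
	}
	promAPI := v1.NewAPI(client)

	// Placeholder window: the hour during which the nodes were down.
	r := v1.Range{
		Start: time.Now().Add(-1 * time.Hour),
		End:   time.Now(),
		Step:  30 * time.Second,
	}
	query := `kube_node_status_condition{job="kube-state-metrics",condition="Ready",status="true"} == 0`
	result, _, err := promAPI.QueryRange(context.Background(), query, r)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result)
}
```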
b
Do you have a moment to provide a sample?
query-range does not seem to be working for me.
b
I looked into the opinionated setup in opni-alerting, and it is too eager when grouping received labels. For example, if
• label=A, uuid=conditionId
• label=B, uuid=conditionId
happen close together, they will be treated as the same instance of the alert. I am working on a fix.
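(To make the grouping behavior concrete, here is a simplified, hypothetical Go sketch, not opni's actual code, of what happens when the grouping key is derived only from the condition UUID.)
```go
// Hypothetical illustration (not opni's actual implementation): if the grouping
// key considers only the condition UUID, alerts that differ in other labels
// collapse into a single group.
package main

import "fmt"

type alert struct {
	labels map[string]string
}

// groupKey mimics an over-eager grouping: only the uuid label is considered.
func groupKey(a alert) string {
	return a.labels["uuid"]
}

func main() {
	incoming := []alert{
		{labels: map[string]string{"label": "A", "uuid": "conditionId"}},
		{labels: map[string]string{"label": "B", "uuid": "conditionId"}},
	}

	groups := map[string][]alert{}
	for _, a := range incoming {
		k := groupKey(a)
		groups[k] = append(groups[k], a)
	}

	// Both alerts land in the same group, so they are treated as one instance.
	fmt.Printf("%d group(s)\n", len(groups))
	for k, g := range groups {
		fmt.Printf("group %q has %d alert(s)\n", k, len(g))
	}
}
```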
b
Do you know if this problem is already captured in an open GitHub issue?
b
there is no gh issue atm
b
Hi @bright-oil-36284. Would you suggest I create a GitHub issue for this bug?
Also, are you still thinking this is a label-grouping problem?
b
sure, you can create a GitHub issue. I think this is a label-grouping problem on opni-alerting's side: when we encapsulate a condition, we assume we should group by anything coming from the condition's source...
It has upsides, like reducing footprint if the user configures a condition that can output many matching series or many labels (for non-metrics alerts), but it definitely doesn't help when trying to capture situations that evolve over time.
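(A hypothetical sketch of the trade-off just described, again not the actual fix: widening the grouping key to all source labels keeps the two instances apart, at the cost of one group per matching series.)
```go
// Hypothetical sketch of the trade-off: grouping by all source labels keeps
// label=A and label=B as separate instances, but produces one group per
// matching series, which increases footprint.
package main

import (
	"fmt"
	"sort"
	"strings"
)

type alert struct {
	labels map[string]string
}

// groupKey builds a key from every label, in sorted order, so alerts that
// differ in any source label end up in different groups.
func groupKey(a alert) string {
	keys := make([]string, 0, len(a.labels))
	for k := range a.labels {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	parts := make([]string, 0, len(keys))
	for _, k := range keys {
		parts = append(parts, k+"="+a.labels[k])
	}
	return strings.Join(parts, ",")
}

func main() {
	incoming := []alert{
		{labels: map[string]string{"label": "A", "uuid": "conditionId"}},
		{labels: map[string]string{"label": "B", "uuid": "conditionId"}},
	}
	groups := map[string][]alert{}
	for _, a := range incoming {
		groups[groupKey(a)] = append(groups[groupKey(a)], a)
	}
	fmt.Printf("%d group(s)\n", len(groups)) // prints 2: the instances stay distinct
}
```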