orange-airplane-98016
01/24/2023, 10:35 PM

breezy-action-99129
03/22/2023, 5:40 AM

astonishing-king-67727
04/27/2023, 11:30 AM

astonishing-king-67727
05/10/2023, 4:08 AM

breezy-action-99129
05/10/2023, 9:52 AM

modern-winter-77378
05/16/2023, 8:58 AM

faint-holiday-44205
05/17/2023, 4:25 PM
I have some errors when installing Opni:
opni-cluster-system opni-agent-0 2/3 CrashLoopBackOff 7 (42s ago)
opni-cluster-system: Recent Operation: helm-operation-wxt2j Failed
exit code: 123
Opni 0.92 + Rancher 2.73. Does anyone have an idea?
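A hedged first step for both failures, assuming kubectl access to the cluster (the pod and operation names below come from the errors above; the location of the helm-operation pod is an assumption, Rancher may run it in cattle-system on the management cluster instead):
# Which of the agent's three containers is crashing, and why
$ kubectl -n opni-cluster-system describe pod opni-agent-0
# Logs from the last crash of that container (<container> is a placeholder for the one shown in CrashLoopBackOff)
$ kubectl -n opni-cluster-system logs opni-agent-0 -c <container> --previous
# The actual error behind "exit code: 123" from the failed helm operation pod
$ kubectl -n opni-cluster-system logs helm-operation-wxt2j --all-containers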
breezy-action-99129
05/19/2023, 9:27 AM

breezy-action-99129
05/31/2023, 9:35 AM

creamy-wolf-46823
07/06/2023, 11:50 AM

creamy-wolf-46823
07/08/2023, 9:46 AM

creamy-wolf-46823
07/08/2023, 10:12 AM

best-microphone-20624
07/11/2023, 3:58 AM

best-microphone-20624
07/13/2023, 1:20 PM

creamy-wolf-46823
07/19/2023, 8:36 AM

creamy-wolf-46823
07/20/2023, 7:09 PM

creamy-wolf-46823
07/22/2023, 8:32 PM

creamy-wolf-46823
08/24/2023, 3:35 AM

best-microphone-20624
09/01/2023, 6:52 AM
[403 Forbidden] {"error":{"root_cause":[{"type":"security_exception","reason":"no permissions for [cluster:monitor/task/get] and User [name=internalopni, backend_roles=[], requestedTenant=null]"}],"type":"security_exception","reason":"no permissions for [cluster:monitor/task/get] and User [name=internalopni, backend_roles=[], requestedTenant=null]"},"status":403}: opensearch request unsuccessful
Thanks.
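The 403 above says the internalopni user is missing the cluster:monitor/task/get cluster permission in OpenSearch. A minimal sketch of granting it through the OpenSearch Security REST API follows; the role name, admin credentials, and host are placeholders, and if the Opni operator manages the security configuration it may overwrite a manual change:
# Create a role carrying the missing cluster permission (role name is made up)
$ curl -k -u admin:<admin-password> -H 'Content-Type: application/json' \
    -X PUT "https://<opensearch-host>:9200/_plugins/_security/api/roles/opni_task_monitor" \
    -d '{"cluster_permissions":["cluster:monitor/task/get"]}'
# Map the internalopni user to that role
$ curl -k -u admin:<admin-password> -H 'Content-Type: application/json' \
    -X PUT "https://<opensearch-host>:9200/_plugins/_security/api/rolesmapping/opni_task_monitor" \
    -d '{"users":["internalopni"]}'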
best-microphone-20624
09/08/2023, 1:25 AM

creamy-wolf-46823
09/13/2023, 8:30 AM

creamy-wolf-46823
09/16/2023, 9:41 AM

best-microphone-20624
09/18/2023, 1:30 AM
opni-grafana-ingress. When I attempt to log in to Grafana, I am presented with "Sign in with oAuth". When I select that option, I receive the following error message:
{
"error": "invalid_request",
"error_description": "The request is missing a required parameter, includes an invalid parameter value, includes a parameter more than once, or is otherwise malformed. The 'redirect_uri' parameter does not match any of the OAuth 2.0 Client's pre-registered redirect urls."
}
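That Grafana error generally means the redirect_uri Grafana sends (built from its server root_url plus the OAuth callback path, /login/generic_oauth for generic OAuth) does not match a redirect URL registered on the OAuth client. A rough sketch of checking both sides, assuming Grafana runs in the opni namespace and is configured via the standard GF_SERVER_ROOT_URL environment variable (the deployment name is a placeholder):
# Hostname actually served by the ingress
$ kubectl -n opni get ingress opni-grafana-ingress -o jsonpath='{.spec.rules[*].host}'
# root_url Grafana thinks it has; compare against the redirect URLs registered on the OAuth client
$ kubectl -n opni exec deploy/<grafana-deployment> -- env | grep -i GF_SERVER_ROOT_URL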
best-microphone-20624
09/19/2023, 4:18 AM
connection error: desc = "transport: error while dialing: dial unix /tmp/plugin653687210: connect: connection refused"
Is there a technique to reinitialize the Alerting Backend when it reaches this state?
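There may not be a dedicated reset, but a common workaround for a stale plugin socket like this is to restart the pods behind the alerting backend so the plugin is relaunched with a fresh socket. A rough sketch, assuming the components run in the opni namespace (the grep and the workload name are placeholders to adapt):
# Find the alerting-related workloads
$ kubectl -n opni get deployments,statefulsets | grep -i alert
# Restart whichever workload the previous command returns
$ kubectl -n opni rollout restart deployment <alerting-workload>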
elegant-address-3986
09/19/2023, 6:16 PM
opni-grafana-ingress. When attempting to log in, we are also presented with the "Sign in with oAuth" option; however, clicking on that redirects us to a page that says "select user", with no users to select from. There is nowhere to type anything and no menu to select from.
elegant-address-3986
09/21/2023, 2:35 PM

brief-jordan-43130
09/21/2023, 5:42 PM

creamy-wolf-46823
09/23/2023, 3:43 PM

best-microphone-20624
09/28/2023, 1:37 PM
$ k -n opni logs opni-collector-agent-rwsqq -c otel-collector | more
2023-09-28T13:14:40.581Z info service/telemetry.go:82 Skipping telemetry setup. {"address": ":8888", "level": "None"}
2023-09-28T13:14:40.582Z info kube/client.go:101 k8s filtering {"kind": "processor", "name": "k8sattributes", "pipeline": "logs", "labelSelector": "", "fieldSelector": ""}
2023-09-28T13:14:40.582Z info memorylimiterprocessor@v0.74.0/memorylimiter.go:113 Memory limiter configured {"kind": "processor", "name": "memory_limiter", "pipeline": "logs", "limit_mib": 250, "spike_limit_mib": 50, "check_interval": 1}
2023-09-28T13:14:40.583Z info service/service.go:128 Starting otelcol-custom... {"Version": "0.74.0", "NumCPU": 16}
2023-09-28T13:14:40.583Z info extensions/extensions.go:41 Starting extensions...
2023-09-28T13:14:40.583Z info adapter/receiver.go:56 Starting stanza receiver {"kind": "receiver", "name": "journald/rke2", "data_type": "logs"}
2023-09-28T13:14:40.584Z info adapter/receiver.go:56 Starting stanza receiver {"kind": "receiver", "name": "filelog/rke2", "data_type": "logs"}
2023-09-28T13:14:40.584Z info adapter/receiver.go:56 Starting stanza receiver {"kind": "receiver", "name": "filelog/k8s", "data_type": "logs"}
2023-09-28T13:14:40.588Z info service/service.go:145 Everything is ready. Begin running and processing data.
2023-09-28T13:14:40.785Z info fileconsumer/file.go:196 Started watching file {"kind": "receiver", "name": "filelog/rke2", "data_type": "logs", "component": "fileconsumer", "path": "/var/lib/rancher/rke2/agent/logs/kubelet.log"}
2023-09-28T13:14:40.785Z error helper/transformer.go:110 Failed to process entry {"kind": "receiver", "name": "filelog/rke2", "data_type": "logs", "operator_id": "time-sev", "operator_type": "regex_parser", "error": "regex pattern does not match", "action": "drop", "entry": {"observed_timestamp":"2023-09-28T13:14:40.785813939Z","timestamp":"0001-01-01T00:00:00Z","body":"Log file created at: 2023/09/24 01:29:45","attributes":{"log.file.path":"/var/lib/rancher/rke2/agent/logs/kubelet.log"},"severity":0,"scope_name":""}}
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/helper.(*TransformerOperator).HandleEntryError
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.74.0/operator/helper/transformer.go:110
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/helper.(*ParserOperator).ParseWith
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.74.0/operator/helper/parser.go:151
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/helper.(*ParserOperator).ProcessWithCallback
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.74.0/operator/helper/parser.go:123
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/helper.(*ParserOperator).ProcessWith
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.74.0/operator/helper/parser.go:109
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/parser/regex.(*Parser).Process
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.74.0/operator/parser/regex/regex.go:110
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/helper.(*WriterOperator).Write
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.74.0/operator/helper/writer.go:64
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/input/file.(*Input).emit
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.74.0/operator/input/file/file.go:65
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/fileconsumer.(*Reader).ReadToEnd
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.74.0/fileconsumer/reader.go:102
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/fileconsumer.(*Manager).consume.func1
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.74.0/fileconsumer/file.go:151
2023-09-28T13:14:40.796Z info fileconsumer/file.go:196 Started watching file {"kind": "receiver", "name": "filelog/k8s", "data_type": "logs", "component": "fileconsumer", "path": "/var/log/pods/calico-system_calico-kube-controllers-857487c55f-hl72v_56de42ad-344f-4530-b31e-95114e2076fb/calico-kube-controllers/11.log"}
2023-09-28T13:14:41.021Z error exporterhelper/queued_retry.go:317 Dropping data because sending_queue is full. Try increasing queue_size. {"kind": "exporter", "data_type": "logs", "name": "otlp", "dropped_items": 100}
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).send
	go.opentelemetry.io/collector/exporter@v0.74.0/exporterhelper/queued_retry.go:317
go.opentelemetry.io/collector/exporter/exporterhelper.NewLogsExporter.func2
	go.opentelemetry.io/collector/exporter@v0.74.0/exporterhelper/logs.go:115
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
	go.opentelemetry.io/collector/consumer@v0.74.0/logs.go:36
go.opentelemetry.io/collector/processor/processorhelper.NewLogsProcessor.func1
	go.opentelemetry.io/collector@v0.74.0/processor/processorhelper/logs.go:71
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
	go.opentelemetry.io/collector/consumer@v0.74.0/logs.go:36
go.opentelemetry.io/collector/processor/processorhelper.NewLogsProcessor.func1
	go.opentelemetry.io/collector@v0.74.0/processor/processorhelper/logs.go:71
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
	go.opentelemetry.io/collector/consumer@v0.74.0/logs.go:36
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
	go.opentelemetry.io/collector/consumer@v0.74.0/logs.go:36
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/adapter.(*receiver).consumerLoop
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.74.0/adapter/receiver.go:135
2023-09-28T13:14:41.021Z error adapter/receiver.go:137 ConsumeLogs() failed {"kind": "receiver", "name": "filelog/k8s", "data_type": "logs", "error": "sending_queue is full"}
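The drops at the end come from the otlp exporter's sending_queue filling up (the log itself suggests raising queue_size); the earlier regex_parser error appears to be just the kubelet.log header line being dropped. A hedged sketch of locating and adjusting the queue setting follows; the ConfigMap name and where the Opni/collector chart exposes this value are assumptions:
# Locate the rendered collector configuration
$ kubectl -n opni get configmap -o name | grep -i collector
$ kubectl -n opni get configmap <collector-configmap> -o yaml | grep -B2 -A4 sending_queue
# The upstream exporterhelper setting behind the drops (default queue_size is 1000):
#   exporters:
#     otlp:
#       sending_queue:
#         enabled: true
#         queue_size: 5000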
best-microphone-20624
09/29/2023, 4:25 PM
kube-node-not-ready Prometheus Alarm configured with the query kube_node_status_condition{job="kube-state-metrics",condition="Ready",status="true"} == 0, associated with a 3-node Kubernetes cluster and an alertmanager-webhook-logger Endpoint. I shut down one of the cluster nodes and verified that "firing" entries were being added to the alertmanager-webhook-logger log as expected. I then restarted the downed cluster node and shut down a different cluster node. In the logger log, I see a "resolved" entry added 5 minutes after the last "firing", as expected. However, I do not see a new "firing" entry for the other cluster node that was shut down. Any ideas why this might be the case?
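Two things worth checking here, sketched below under the assumption of port-forward access to the monitoring stack (namespace, service names, and ports are placeholders): whether the expression actually returns a series with value 0 for the second node, and whether a second alert exists in Alertmanager but is being held back by notification grouping (e.g. group_interval) rather than never firing.
# Does the alert expression match the second node?
$ kubectl -n <monitoring-namespace> port-forward svc/<prometheus-service> 9090:9090 &
$ curl -sG 'http://localhost:9090/api/v1/query' \
    --data-urlencode 'query=kube_node_status_condition{job="kube-state-metrics",condition="Ready",status="true"} == 0'
# Is the alert present in Alertmanager even though no new notification reached the webhook logger?
$ kubectl -n <monitoring-namespace> port-forward svc/<alertmanager-service> 9093:9093 &
$ curl -s 'http://localhost:9093/api/v2/alerts'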