# opni
a
This message was deleted.
f
This indicates that the logs are having trouble reaching Opensearch. The logs go otel-collector -> agent -> gateway -> opni-otel-preprocessor -> opensearch. Would you be able to work your way along that path and check each component for additional errors? It might be easier to track this in a GitHub issue.
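For anyone following along, one way to work along that path is to pull the recent logs from each hop and scan them for error lines. Below is a rough Go sketch that shells out to kubectl for that; the `opni-system` namespace and the label selectors are assumptions, not opni's actual labels, so adjust them to match the deployment.

```go
package main

// Rough helper for walking the log pipeline
// (otel-collector -> agent -> gateway -> opni-otel-preprocessor -> opensearch)
// and scanning each hop's recent pod logs for error lines. The namespace and
// label selectors below are assumptions -- adjust them to match the actual
// install before running this.

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	namespace := "opni-system" // assumption: wherever the opni components run

	// Hypothetical label selectors for each hop along the path.
	hops := []struct{ name, selector string }{
		{"otel-collector", "app.kubernetes.io/name=otel-collector"},
		{"agent", "app.kubernetes.io/name=opni-agent"},
		{"gateway", "app.kubernetes.io/name=opni-gateway"},
		{"opni-otel-preprocessor", "app.kubernetes.io/name=opni-otel-preprocessor"},
	}

	for _, hop := range hops {
		out, err := exec.Command("kubectl", "logs",
			"-n", namespace, "-l", hop.selector, "--tail=200").CombinedOutput()
		if err != nil {
			fmt.Printf("[%s] kubectl logs failed: %v\n", hop.name, err)
			continue
		}
		// Surface only lines that mention an error.
		scanner := bufio.NewScanner(bytes.NewReader(out))
		for scanner.Scan() {
			line := scanner.Text()
			if strings.Contains(strings.ToLower(line), "error") {
				fmt.Printf("[%s] %s\n", hop.name, line)
			}
		}
	}
}
```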
b
Will do. Thanks.
Hi @famous-dusk-36365 I fixed the path to Opensearch and the logs look much cleaner now. Thanks! Currently, I'm just periodically seeing messages like the following. Is additional tuning required somewhere in the path or should these errors be ignored?

```
2023-09-29T13:22:38.698Z  error  exporterhelper/queued_retry.go:317  Dropping data because sending_queue is full. Try increasing queue_size.  {"kind": "exporter", "data_type": "logs", "name": "otlp", "dropped_items": 6}
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).send
	go.opentelemetry.io/collector/exporter@v0.74.0/exporterhelper/queued_retry.go:317
go.opentelemetry.io/collector/exporter/exporterhelper.NewLogsExporter.func2
	go.opentelemetry.io/collector/exporter@v0.74.0/exporterhelper/logs.go:115
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
	go.opentelemetry.io/collector/consumer@v0.74.0/logs.go:36
go.opentelemetry.io/collector/processor/processorhelper.NewLogsProcessor.func1
	go.opentelemetry.io/collector@v0.74.0/processor/processorhelper/logs.go:71
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
	go.opentelemetry.io/collector/consumer@v0.74.0/logs.go:36
go.opentelemetry.io/collector/processor/processorhelper.NewLogsProcessor.func1
	go.opentelemetry.io/collector@v0.74.0/processor/processorhelper/logs.go:71
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
	go.opentelemetry.io/collector/consumer@v0.74.0/logs.go:36
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
	go.opentelemetry.io/collector/consumer@v0.74.0/logs.go:36
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/adapter.(*receiver).consumerLoop
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.74.0/adapter/receiver.go:135
2023-09-29T13:22:38.698Z  error  adapter/receiver.go:137  ConsumeLogs() failed  {"kind": "receiver", "name": "filelog/k8s", "data_type": "logs", "error": "sending_queue is full"}
```
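For context, that error comes from the otlp exporter's bounded sending_queue: batches are buffered in a fixed-size queue while the exporter drains them toward the backend, and once the queue fills up new data is dropped rather than back-pressured. The usual remedies are raising the exporter's `sending_queue.queue_size`, adding `num_consumers`, or fixing whatever is slowing the downstream hop. Below is a minimal stdlib Go sketch of that drop-on-full behavior; the queue size and timings are made-up numbers, and this is an illustration only, not the collector's actual implementation.

```go
package main

// Toy model of an exporter sending_queue: a producer enqueues log batches
// into a fixed-capacity buffer while a slower consumer drains it toward the
// backend. When the buffer is full, new batches are dropped -- the same
// condition behind "Dropping data because sending_queue is full". The size
// and sleep durations are made-up numbers for illustration.

import (
	"fmt"
	"time"
)

func main() {
	const queueSize = 5 // analogous to sending_queue.queue_size

	queue := make(chan string, queueSize)
	dropped := 0

	// Consumer: drains the queue more slowly than the producer fills it,
	// standing in for a slow or overloaded backend.
	go func() {
		for batch := range queue {
			time.Sleep(50 * time.Millisecond)
			fmt.Println("exported", batch)
		}
	}()

	// Producer: offers batches without blocking; if the queue is full,
	// the batch is dropped instead of applying backpressure.
	for i := 0; i < 20; i++ {
		batch := fmt.Sprintf("batch-%d", i)
		select {
		case queue <- batch:
		default:
			dropped++
			fmt.Println("dropping", batch, "because sending_queue is full")
		}
		time.Sleep(10 * time.Millisecond)
	}

	time.Sleep(500 * time.Millisecond)
	fmt.Println("dropped_items:", dropped)
}
```

Raising `queueSize` (or speeding up the consumer) makes the drops disappear, which is what the collector's "Try increasing queue_size" hint points at; whether that knob is actually exposed here depends on how opni manages the collector config, as noted below.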
b
Let me double-check, Ron, but I believe the queue size may be managed internally by our agent Kubernetes controller, so it may not be configurable at this time.