# opni
b
The etcd and apiserver logs are not being collected. How can I fix this?
Actually, the etcd log is collected normally, but its log type is still "workload".
f
What distribution cluster is this running on?
b
On a native k8s cluster. And I think it is because the hasField method in the opnipreprocessing plugin does not work as expected: https://github.com/tybalex/opni-ingest-plugin/blob/f1a024e46e1dbca077c5180a3a120f9[…]a/org/opensearch/opnipreprocessing/plugin/OpniPreProcessor.java
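A plausible explanation for the behavior described above (a sketch, not the actual plugin code): ingest-style field lookup treats dots in a field name as a path into nested maps, while the OpenTelemetry collector emits flat keys that literally contain dots, such as `k8s.container.name`. The helper name `has_field_nested` below is hypothetical, written only to illustrate the mismatch.

```python
def has_field_nested(doc: dict, path: str) -> bool:
    """Mimic path-style field lookup: each dot descends into a nested map."""
    node = doc
    for part in path.split("."):
        if not isinstance(node, dict) or part not in node:
            return False
        node = node[part]
    return True

# The OTel collector emits flat keys that contain literal dots:
otel_doc = {"k8s.container.name": "etcd", "log": "..."}

# Path-style lookup fails because there is no top-level "k8s" map:
print(has_field_nested(otel_doc, "k8s.container.name"))  # False
# A flat lookup on the literal key succeeds:
print("k8s.container.name" in otel_doc)                  # True
```

If the preprocessor classifies control-plane logs by checking such fields, a lookup that always fails would leave the log type at its default, which matches the "still workload" symptom.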
f
That should work if the OpenTelemetry collector is scraping the logs and annotating them correctly. Is your cluster deployed by kubeadm? We can do some testing on this.
b
Maybe you can insert the log into OpenSearch to reproduce the problem:
```
POST _bulk
{ "create": { "_index": "logs" } }
{"k8s.container.name":"etcd","k8s.namespace.name":"kube-system","container.image.tag":"60080/tkestack/etcd","log":"{\"level\":\"info\",\"ts\":\"2023-05-11T03:23:28.748Z\",\"caller\":\"mvcc/hash.go:137\",\"msg\":\"storing new hash\",\"hash\":3343241951,\"revision\":5815545,\"compact-revision\":5814139}","k8s.pod.uid":"","Body.value":"2023-05-11T11:23:28.748397999+08:00 stderr F {\"level\":\"info\",\"ts\":\"2023-05-11T03:23:28.748Z\",\"caller\":\"mvcc/hash.go:137\",\"msg\":\"storing new hash\",\"hash\":3343241951,\"revision\":5815545,\"compact-revision\":5814139}","logtag":"F","namespace_name":"kube-system","SeverityNumber":0,"cluster_id":"180a2c2f-63ba-4b8f-a532-ba4a549bcfb1","log_type":"zlc_test","container.image.name":"registry.cn","k8s.pod.name":"etcd-192.168.137.245","stream":"stderr","kubernetes_component":"","k8s.pod.labels.tier":"control-plane","deployment":"","raw_ts":"raw time","k8s.pod.confighash":"29d6e6a1c05a2d07020e80d4b0ad7aae","restart_count":"2","template_matched":"","k8s.pod.labels.component":"etcd","log_source_field":"message","container_image":"registry.alauda.cn","anomaly_level":"","pod_name":"etcd-192.168.137.245","TraceFlags":0,"@timestamp":"2023-05-11T03:23:28.748397999Z","k8s.node.name":"192.168.137.245","ingest_at":"2023-05-11 03:23:44.705","service":"","time":"2023-05-11T11:23:28.748397999+08:00"}
```
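When replaying such a document outside of Dev Tools, note that the `_bulk` API requires NDJSON: the action metadata and the document each on their own line, with the body terminated by a trailing newline. A minimal sketch of building such a payload (the index name and the trimmed-down document fields are just taken from the example above):

```python
import json

# Action line and document line for the _bulk NDJSON body.
action = {"create": {"_index": "logs"}}
doc = {
    "k8s.container.name": "etcd",
    "k8s.namespace.name": "kube-system",
    "log_type": "zlc_test",
}

# Each JSON object on its own line; the body must end with a newline.
payload = json.dumps(action) + "\n" + json.dumps(doc) + "\n"

# Could then be sent with, e.g.:
#   curl -s -H 'Content-Type: application/x-ndjson' \
#        -XPOST http://localhost:9200/_bulk --data-binary @payload.ndjson
```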
f
I see this is actually happening on RKE2. I'll create an issue to look into this
b
I think it is because of the dots in the field names too, so I temporarily modified the preprocessor as a workaround. Thanks for your help.
BTW, is there any article introducing the log anomaly detection algorithm?