# logging
p
that was the exact doc i was going to recommend, fwiw that setup works for me and i get those logs now (they are very VERY spammy btw)
did you verify your clusteroutput is working correctly?
also make sure your fluentbit daemonset is running on the control nodes
p
it is running on the control nodes, and unfortunately I only have those containers set up to go to the clusteroutput.
from looking at the fluent-bit config that gets generated I’m failing to see how this could possibly work. the control-plane components don’t run under kubernetes, so the files it’s configured to tail don’t exist, and the cluster wouldn’t know about those components to add the metadata
p
fluentbit runs on my control nodes, what do you mean by the control-plane components?
ah you know what, maybe it works for me because i used rke1 not rke2?
p
control-plane components being
etcd
kube-apiserver
kubelet
etc
p
they are running on the actual control plane nodes if you do a docker ps
so in theory fluentbit could track them based on the containerd socket or w/e
thats for rke1 though, rke2 i think uses its own special container engine thingy
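something like this extra tail input is what i’m picturing for the rke1/json-file case (the path and tag are guesses on my part, not anything the operator generates):
```
[INPUT]
    # tail docker's json-file logs directly, outside of /var/log/containers
    # path and tag here are assumptions for a stock docker json-file setup
    Name              tail
    Tag               docker.*
    Path              /var/lib/docker/containers/*/*-json.log
    Parser            docker
    # keep the file path on the record so at least the container id is recoverable
    Path_Key          filepath
    Refresh_Interval  5
    Skip_Long_Lines   On
```
that only gets you the raw log lines plus the container id from the path though, no names or namespaces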
p
but the generated config is this
```
[SERVICE]
    Flush        1
    Grace        5
    Daemon       Off
    Log_Level    info
    Parsers_File parsers.conf
    Coro_Stack_Size    24576
    storage.path  /buffers

[INPUT]
    Name         tail
    DB  /tail-db/tail-containers-state.db
    DB.locking  true
    Mem_Buf_Limit  5MB
    Parser  docker
    Path  /var/log/containers/*.log
    Refresh_Interval  5
    Skip_Long_Lines  On
    Tag  kubernetes.*
[FILTER]
    Name        kubernetes
    Buffer_Size  0
    Kube_CA_File  /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    Kube_Tag_Prefix  kubernetes.var.log.containers
    Kube_Token_File  /var/run/secrets/kubernetes.io/serviceaccount/token
    Kube_Token_TTL  600
    Kube_URL  https://kubernetes.default.svc:443
    Match  kubernetes.*
    Merge_Log  On
    Use_Kubelet  Off
[OUTPUT]
    Name          forward
    Match         *
    Host          myfluentd.logging-operator.svc.cluster.local
    Port          24240

    Retry_Limit  False
```
so it’s only ever tailing logs under /var/log/containers and then trying to use the kubernetes api to determine their metadata
the files under /var/log/containers/* first only exist for containers run under kubernetes (they are symlinks), and second are just random (not really random) strings with no included metadata; you’d have to query docker via a lua script (per all the examples I’ve found) to find out the names
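the closest i can picture is hanging a lua filter off a separately tagged input to do that lookup, roughly like this (docker_names.lua and add_container_name are made-up placeholders, and the script itself is the part you’d have to write):
```
[FILTER]
    # enrich records from a hypothetical docker.* tagged input with container names
    # docker_names.lua / add_container_name are placeholders, not generated by the operator
    Name    lua
    Match   docker.*
    script  docker_names.lua
    call    add_container_name
```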
are you using json-file or journald for your log driver?
im on rke1 btw
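if it’s journald, i think the cleaner option would be fluent-bit’s systemd input instead of tail, roughly (the unit filter is an assumption for a docker-based setup):
```
[INPUT]
    # read control-plane container logs from the journal instead of tailing files
    # the unit filter assumes the containers log via docker's journald driver
    Name            systemd
    Tag             host.docker
    Systemd_Filter  _SYSTEMD_UNIT=docker.service
    Read_From_Tail  On
```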