# logging
that was the exact doc i was going to recommend, fwiw that setup works for me and i get those logs now (they are very VERY spammy btw)
did you verify your clusteroutput is working correctly?
also make sure your fluentbit daemonset is running on the control nodes
it is running on the control nodes, and unfortunately I only have those containers set up to go to the clusteroutput.
from looking at the fluent-bit config that gets generated I’m failing to see how this could possibly work. the control-plane components don’t run under kubernetes, so the files it’s configured to tail don’t exist, and the cluster wouldn’t know about those components to add the metadata
fluentbit runs on my control nodes, what do you mean by the control-plane components?
ah you know what, maybe it works for me because i used rke1 not rke2?
control-plane components being
they are running on the actual control plane nodes if you do a docker ps
so in theory fluentbit could track them based on the containerd socket or w/e
that’s for rke1 though, rke2 i think uses its own special container engine thingy
but the generated config is this
```
[SERVICE]
    Flush        1
    Grace        5
    Daemon       Off
    Log_Level    info
    Parsers_File parsers.conf
    Coro_Stack_Size    24576
    storage.path  /buffers

[INPUT]
    Name         tail
    DB  /tail-db/tail-containers-state.db
    DB.locking  true
    Mem_Buf_Limit  5MB
    Parser  docker
    Path  /var/log/containers/*.log
    Refresh_Interval  5
    Skip_Long_Lines  On
    Tag  kubernetes.*

[FILTER]
    Name        kubernetes
    Buffer_Size  0
    Kube_CA_File  /var/run/secrets/
    Kube_Tag_Prefix  kubernetes.var.log.containers
    Kube_Token_File  /var/run/secrets/
    Kube_Token_TTL  600
    Kube_URL  https://kubernetes.default.svc:443
    Match  kubernetes.*
    Merge_Log  On
    Use_Kubelet  Off

[OUTPUT]
    Name          forward
    Match         *
    Host          myfluentd.logging-operator.svc.cluster.local
    Port          24240
    Retry_Limit  False
```
so it’s only ever tailing logs under /var/log/containers and then trying to use the kubernetes api to determine their metadata
the files under /var/log/containers/* first only exist for containers run under kubernetes (they’re symlinks), and second are just opaque (not really random) container-ID strings with no metadata included; you’d have to query docker via a lua script (per all the examples I’ve found) to find out the names
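for what it’s worth, here’s a rough sketch of what picking up the RKE1 control-plane containers directly might look like, assuming docker’s json-file driver writes them under /var/lib/docker/containers/&lt;id&gt;/&lt;id&gt;-json.log — the tag and record values are my own placeholders, not from the generated config:

```
# hypothetical extra input: tail the docker-managed logs directly,
# since nothing under /var/log/containers points at them
[INPUT]
    Name              tail
    Tag               controlplane.*
    Path              /var/lib/docker/containers/*/*-json.log
    Parser            docker
    Mem_Buf_Limit     5MB
    Skip_Long_Lines   On

# tag records statically, since the kubernetes filter can't resolve them;
# mapping container IDs back to names would still need that lua-script-querying-docker trick
[FILTER]
    Name    record_modifier
    Match   controlplane.*
    Record  source rke1-control-plane
```

caveat: that glob also matches the kubernetes-managed containers, so you’d get duplicates unless you exclude those paths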
are you using json-file or journald for your log driver?
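(asking because it changes the approach entirely — with journald, docker sends container output to the journal instead of per-container json files, so a tail input wouldn’t see it at all. a sketch of the systemd input you’d reach for instead, unit name being an assumption:)

```
# hypothetical: with the journald log driver, read from the journal instead of tailing files
[INPUT]
    Name            systemd
    Tag             controlplane.*
    Systemd_Filter  _SYSTEMD_UNIT=docker.service
    Read_From_Tail  On
```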
im on rke1 btw