# general
c
On all currently available releases, you need to add an extra mount for the log directory in order to get the log file actually written to the host
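As a minimal sketch, that extra mount could look like this in `/etc/rancher/rke2/config.yaml` — `kube-apiserver-arg` and `kube-apiserver-extra-mount` are RKE2 options, but the exact paths below are placeholders, not the poster's actual config:

```yaml
# Sketch only: paths are illustrative, adjust to your environment.
kube-apiserver-arg:
  - audit-log-path=/var/log/audit-k8s/audit.log
  - audit-policy-file=/etc/rancher/audit/policy.yaml
# Without this extra hostpath mount, the apiserver writes the log
# inside the container and it never appears on the host.
kube-apiserver-extra-mount:
  - /var/log/audit-k8s:/var/log/audit-k8s
```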
b
thanks, will try this
is there any full reference for those options? https://github.com/rancher/rke2/issues/1183#issuecomment-1242031356
also, i have noticed that 2 of 3 master nodes have problems with the rke2-server service
and it seems the kube-apiserver manifest has been updated, but the change was not applied to the pod
c
I suspect you have a syntax error somewhere. What's the complete configuration file look like?
b
here is my current spec:
the only settings i added are:
c
what does the resulting config on the rke2 node look like? under /etc/rancher/rke2/config.yaml.d ?
I believe Rancher does some filtering of the config, you can’t set everything via the cluster editor. You might have to drop the required config directly on the node.
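If Rancher does filter those keys, one way to sketch the node-local approach is a drop-in file in the config directory on each server node (the filename `99-audit.yaml` is just an example; rke2-server reads these files at startup, so it needs a restart to pick them up):

```yaml
# /etc/rancher/rke2/config.yaml.d/99-audit.yaml (example filename, example paths)
kube-apiserver-arg:
  - audit-log-path=/var/log/audit-k8s/audit.log
kube-apiserver-extra-mount:
  - /var/log/audit-k8s:/var/log/audit-k8s
```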
b
the current configuration results in a read-only mount for the log collection directory
changing the audit output to audit-log-path=- produces another issue: one master works fine with all logs going to stdout, but the other two masters remain broken
c
audit-policy-file=/var/log/audit-k8s/policy-test.yaml
Can you try putting the policy file somewhere other than where you’re going to be writing logs?
The log output mount shouldn’t have any other files mounted under it, and the policy file would violate that
b
i have moved the audit policy file to a different dir, but the issue with the read-only mount remains:
c
Do you have selinux policy or something else that is blocking the write? We don't set up the mount as read-only.
It is just a normal hostpath mount
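One way to check on the affected node whether the runtime actually mounted the path read-only, and whether SELinux is denying writes — a diagnostic sketch to run as root on the server node (container names and output format may differ by runtime):

```shell
# Find the kube-apiserver container and look at the mount entry for the log dir
CID=$(crictl ps --name kube-apiserver -q | head -n1)
crictl inspect "$CID" | grep -A3 'audit-k8s'

# If SELinux is enforcing, look for recent denials touching that path
getenforce
ausearch -m avc -ts recent 2>/dev/null | grep audit-k8s
```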
b
no, i do not have anything like that. inside the kube-apiserver manifest this path is set as read-only once i add it
things get even weirder: the same cluster configuration has finally been updated on all masters, but now 2 of 3 servers have kube-apiserver running with incorrect settings. in the manifest i see the following:
crictl inspect $POD:
/etc/rancher/rke2/config.yaml.d/50-rancher.yaml:
i have just noticed that kubelet is not able to start properly; that's probably why kube-apiserver hasn't restarted yet
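To chase the kubelet failure, the usual places to look on an RKE2 server node are the rke2-server journal and kubelet's own log file under the RKE2 agent directory (a sketch, run as root):

```shell
# rke2-server supervises kubelet; its journal usually shows why kubelet won't start
journalctl -u rke2-server --since "1 hour ago" | grep -i kubelet | tail -n 20

# kubelet's own log lives under the RKE2 agent directory
tail -n 50 /var/lib/rancher/rke2/agent/logs/kubelet.log
```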