# rke2
c
what versions of container-selinux and rke2-selinux do you have installed?
and did you install them before starting rke2 for the first time?
If you initially started without it, you might try running a restorecon? https://github.com/rancher/rke2-selinux/blob/master/policy/centos9/rke2-selinux.spec#L12-L21
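Roughly what that relabel pass looks like, as a sketch; the paths below are assumptions for a default-location install, and the linked spec's relabel macro is the authoritative list:
```
# Re-apply SELinux labels to the usual rke2 locations (default paths assumed).
restorecon -R -i /etc/systemd/system/rke2*       # unit files, if present
restorecon -R -i /usr/lib/systemd/system/rke2*   # unit files from the RPM
restorecon -R /var/lib/rancher                   # rke2 data dir lives under here
restorecon -R /var/lib/kubelet
restorecon -R -i /var/lib/cni /opt/cni
restorecon -R -i /var/run/k3s /var/run/flannel
```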
c
This is a fresh install. I installed rke2-server, which installed:
```
# rpm -q container-selinux rke2-selinux
container-selinux-2.229.0-1.el9_3.noarch
rke2-selinux-0.18-1.el9.noarch
```
I am going to try to disable auditing, which is why I created the following at first:
```
module rke2audit 1.0;

require {
        type rke2_service_t;
        type container_var_lib_t;
        class file { append create };
}

#============= rke2_service_t ==============
allow rke2_service_t container_var_lib_t:file create;
allow rke2_service_t container_var_lib_t:file append;
```
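(For reference, a local module like this would typically be compiled and loaded with the standard SELinux tooling; a sketch, assuming the file is saved as rke2audit.te:)
```
# Compile and load the local policy module
checkmodule -M -m -o rke2audit.mod rke2audit.te
semodule_package -o rke2audit.pp -m rke2audit.mod
semodule -i rke2audit.pp

# To back it out later
semodule -r rke2audit
```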
I'll remove this module, disable auditing, and see where that gets me.
I ran rke2-killall.sh and rke2-uninstall.sh, then rebooted.
Yep, then I see this:
```
type=AVC msg=audit(1719521450.168:12079): avc:  denied  { create } for  pid=9240 comm="kube-apiserver" name="audit.log" scontext=system_u:system_r:rke2_service_t:s0:c264,c550 tcontext=system_u:object_r:container_var_lib_t:s0 tclass=file permissive=0
```
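For digging into denials like that one, the standard audit tooling can pull the recent AVCs and show what allow rules they would imply (a sketch; review the output rather than loading it blindly):
```
# Show recent AVC denials from kube-apiserver
ausearch -m avc -ts recent -c kube-apiserver

# Translate them into the allow rules they would require (for review only)
ausearch -m avc -ts recent -c kube-apiserver | audit2allow
```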
c
wait you are creating your own rke2_service_t that replaces the one defined here? https://github.com/rancher/rke2-selinux/blob/master/policy/centos9/rke2.te#L10-L15 Oh, I see you’re just adding allows…
where are you trying to put the audit log that is being rejected by SELinux? the default location should be allowed by our policy.
c
I was just using the default. I updated the audit policy to actually log a bit more. The moment I did that and tried to rebuild, I got that error.
All I did was replace:
```
/etc/rancher/rke2/audit-policy.yaml
```
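(For illustration only: the actual replacement rules aren't shown in this thread, but a hypothetical "log a bit more" policy dropped in that path could look something like this:)
```
# Hypothetical example policy; not the exact file used here.
cat > /etc/rancher/rke2/audit-policy.yaml <<'EOF'
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Log request metadata for everything (a step up from an empty rule set)
  - level: Metadata
EOF
```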
I removed that file and reinstalled again. It's back to doing what I see above:
```
CONTAINER           IMAGE               CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
b774fd9dd2636       0929b4140ada6       2 minutes ago       Exited              helm                       6                   97493d85d8234       helm-install-rke2-canal-vjzcp
4878ca78dc767       0929b4140ada6       2 minutes ago       Exited              helm                       6                   3c94b8565bd15       helm-install-rke2-coredns-gksnk
4c0bf3c59424a       b7e03d90f06bb       8 minutes ago       Running             kube-proxy                 0                   6d32a9d410ccc       kube-proxy-bpvmn0kct1p.ftscenclave.com
ae72ab95c2758       3525a3daa55c9       8 minutes ago       Running             cloud-controller-manager   0                   4e3e0ec771104       cloud-controller-manager-bpvmn0kct1p.ftscenclave.com
a3633abdf6221       b7e03d90f06bb       8 minutes ago       Running             kube-controller-manager    0                   e8da5d6f127e0       kube-controller-manager-bpvmn0kct1p.ftscenclave.com
b187a1407862f       b7e03d90f06bb       8 minutes ago       Running             kube-scheduler             0                   2c6a6331f7b09       kube-scheduler-bpvmn0kct1p.ftscenclave.com
97679d7172a20       b7e03d90f06bb       8 minutes ago       Running             kube-apiserver             0                   954fd3f96aec2       kube-apiserver-bpvmn0kct1p.ftscenclave.com
6411adbc0714b       7893f7425a52a       8 minutes ago       Running             etcd                       0                   15e27932b2e6b       etcd-bpvmn0kct1p.ftscenclave.com
```
```
Error relocating /usr/lib/libreadline.so.8: RELRO protection failed: No such file or directory
Error relocating /lib/ld-musl-x86_64.so.1: RELRO protection failed: No such file or directory
Error relocating /usr/lib/libncursesw.so.6: RELRO protection failed: No such file or directory
Error relocating /usr/bin/entry: RELRO protection failed: No such file or directory
```
c
can you show the output of
kubectl get node -o yaml | grep rke2.io/node
are you by any chance using a custom data-dir, other than /var/lib/rancher/rke2?
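(A custom data dir, if set, would be in the rke2 config file; a quick way to check, assuming the default config location:)
```
# A custom data dir would show up here (default config path assumed);
# no output means rke2 is using /var/lib/rancher/rke2.
grep -i 'data-dir' /etc/rancher/rke2/config.yaml 2>/dev/null
```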
c
```
# kubectl get node -o yaml | grep rke2.io/node
      rke2.io/node-args: '["server"]'
      rke2.io/node-config-hash: 37MVZBQX23DVZCJMZ6XM3CBRSRAWS4ELRMIQGCKYL7C7RLHL4W3A====
      rke2.io/node-env: '{"RKE2_SELINUX":"true"}'
```
Nope, no custom data directory
Well, good news. I was able to get the default install going this time, after running:
dnf reinstall container-selinux
Okay, my normal install, without the updated audit policy and my SELinux module, works. Now to change the policy while the cluster is running and see what happens, which is the whole reason I created the SELinux module.
This log is empty:
```
-rw-------. 1 root root system_u:object_r:container_log_t:s0 0 Jun 28 14:48 /var/lib/rancher/rke2/server/logs/audit.log
```
Now I modified /etc/rancher/rke2/audit-policy.yaml in place and restarted rke2-server:
```
-rw-------. 1 root root system_u:object_r:container_log_t:s0 2803229 Jun 28 15:14 /var/lib/rancher/rke2/server/logs/audit.log
```
I am getting logs! So I shouldn't have needed to do that. Great! I noticed the context of that file is different from the one in the type enforcement I created. Somehow things with SELinux got messed up, maybe related to replacing the audit-policy.yaml file and it not having the same context as the file that gets created automatically? If I had done the
dnf reinstall container-selinux
from the beginning, it would have saved me so much time.
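(One way to confirm that kind of label mismatch is to compare the file's on-disk context with what the loaded policy expects, and relabel if they differ; a sketch using standard SELinux tools and the log path from above:)
```
LOG=/var/lib/rancher/rke2/server/logs/audit.log

# Actual context on disk vs. the context the loaded policy expects
ls -Z "$LOG"
matchpathcon "$LOG"

# If they disagree, reset the file to the expected label
restorecon -v "$LOG"
```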
@creamy-pencil-82913 thanks 🙂
k
Hello people, regarding my last post about SELinux issues on Rocky Linux 9 and my message some minutes ago saying that it works... I have to admit I was too hasty. While the K8s node was starting and running fine for a while, the SELinux message now appears again.
rke2[18938]: time="2024-08-24T00:33:49+02:00" level=warning msg="SELinux is enabled for rke2 but process is not running in context 'container_runtime_t', rke2-selinux policy may need to be applied"
It looks like the correct context gets lost somewhere along the way...
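(One way to narrow that down might be to check the label on the rke2 binary and on the running processes; a sketch, with the binary paths being assumptions that depend on whether it was an RPM or tarball install:)
```
# Label on the installed binary (paths vary by install method)
ls -Z /usr/bin/rke2 /usr/local/bin/rke2 2>/dev/null

# Contexts of the running rke2 and containerd processes
ps -eZ | grep -E 'rke2|containerd'

# If the binary's label doesn't match what the policy expects, relabel and restart
restorecon -v -i /usr/bin/rke2 /usr/local/bin/rke2
systemctl restart rke2-server
```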