# rancher-desktop
r
If I provide a
--kube-apiserver-arg
, is that an additive operation, or does it override the option I'm changing? I'm guessing the option I'd like to add to may already have values set, e.g.:
--enable-admission-plugins=NodeRestriction,PodSecurity
f
It will just be appended
r
Oh great, I'll give it a go. Thanks for the assistance
f
Yeah, for options that take a list, I'm not sure how the apiserver handles it. The effect will be the same as passing
--enable-admission-plugins=NodeRestriction --enable-admission-plugins=PodSecurity
, and I'm not sure if that will work.
r
In the clusters I'm used to working with, I can view the pod spec for the kube-apiserver, which is nice for debugging the flags. I'm not very experienced with k3s yet; maybe it's not running as a pod? I can't seem to find it.
f
You can find more information in the CIS Hardening Guide in the K3s docs.
Note that Rancher Desktop is meant to be a dev environment and should not be used for production. But of course you should be able to use Pod Security admission for development/testing purposes
r
Excellent, that's my intended purpose 🙂 Thanks a bunch for your quick and thorough assistance!
Just circling back, this override worked to get PSA enabled:
env:
  K3S_EXEC: "--kube-apiserver-arg=enable-admission-plugins=PodSecurity"
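For context, a minimal sketch of the full override file that env snippet lives in. The path in the comment is an assumption based on a default macOS install of Rancher Desktop; adjust for your setup:

```yaml
# Assumed location: ~/Library/Application Support/rancher-desktop/lima/_config/override.yaml
# (restart Kubernetes in Rancher Desktop after editing for it to take effect)
env:
  K3S_EXEC: "--kube-apiserver-arg=enable-admission-plugins=PodSecurity"
```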
f
But is the
NodeRestriction
plugin still loaded or not?
r
I'll see if I can find it; I have yet to locate where the kube-apiserver instance is running
I suppose it's somewhere in the Lima VM, but I haven't figured out how to look around in there
f
rdctl shell
r
Found the k3s.log:
~/Library/logs/rancher-desktop/k3s.log
cat k3s.log | grep "Running kube-apiserver"
time="2023-04-20T17:28:08Z" level=info msg="Running kube-apiserver --advertise-address=192.168.205.2 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=PodSecurity --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
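Since that command line is long, it can help to pull out just the flag of interest. A small sketch, run against a stand-in string here so it's self-contained; in practice, point the grep at the real k3s.log:

```shell
# Stand-in for one line of k3s.log; in practice something like:
#   grep -o 'enable-admission-plugins=[^ "]*' ~/Library/logs/rancher-desktop/k3s.log
line='msg="Running kube-apiserver --allow-privileged=true --enable-admission-plugins=PodSecurity --secure-port=6444"'
# -o prints only the matching part; the char class stops at the next space or quote
printf '%s\n' "$line" | grep -o 'enable-admission-plugins=[^ "]*'
# → enable-admission-plugins=PodSecurity
```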
I'm not actually sure if
NodeRestriction
was there to start; I'll remove my override and see what's there
f
I thought it was there by default, but not sure either
r
Yeah,
NodeRestriction
is there by default. So it looks like I should redefine the entire option in the override:
cat ~/Library/logs/rancher-desktop/k3s.log | grep "Running kube-apiserver"
time="2023-04-20T18:00:50Z" level=info msg="Running kube-apiserver --advertise-address=192.168.205.2 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --etcd-servers=unix://kine.sock --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
This override works and keeps NodeRestriction. override.yaml:
env:
  K3S_EXEC: "--kube-apiserver-arg=enable-admission-plugins=NodeRestriction,PodSecurity"
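To confirm the redefined list took effect, a quick check that both plugins appear in the extracted flag value. A sketch against a stand-in string, so it's self-contained; in practice extract the value from the real k3s.log as above:

```shell
# Stand-in for the relevant k3s.log line after applying the override
line='msg="Running kube-apiserver --enable-admission-plugins=NodeRestriction,PodSecurity --secure-port=6444"'
# Extract the flag value, then pattern-match for both plugin names in order
plugins=$(printf '%s\n' "$line" | grep -o 'enable-admission-plugins=[^ "]*')
case "$plugins" in
  *NodeRestriction*PodSecurity*) echo "both plugins enabled" ;;
  *) echo "missing a plugin" ;;
esac
# → both plugins enabled
```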