# rke2
It's one of the first files in the directory too
```
root@rke2-master-0:~# ls -larth /var/lib/rancher/rke2/agent/pod-manifests/
total 44K
-rw-r--r-- 1 root root 1.4K Jan 26 20:43 encryption-config-generator.yaml
-rw------- 1 root root 3.4K Jan 26 20:44 etcd.yaml
-rw------- 1 root root  10K Jan 26 20:59 kube-apiserver.yaml
-rw------- 1 root root 2.8K Jan 26 20:59 kube-scheduler.yaml
-rw------- 1 root root 5.8K Jan 26 20:59 kube-controller-manager.yaml
-rw------- 1 root root 3.8K Jan 26 20:59 cloud-controller-manager.yaml
drwxr-xr-x 7 root root 4.0K Jan 26 20:59 ..
drwxr-xr-x 2 root root 4.0K Jan 26 20:59 .
```
no… what is the static pod doing that you can’t coordinate using the same process that is dropping the pod manifest?
the kubelet just syncs all the static pods basically simultaneously; there is no concept of dependencies between them. If the pod were coming from the apiserver, that's the sort of thing that would normally be handled as part of scheduling, before it ever got to the kubelet.
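For context, the kubelet's static pod handling is just a directory watch, so there's nowhere to express ordering. A rough sketch of the relevant KubeletConfiguration field, assuming it points at the rke2 directory listed above:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Every manifest found here is run as a static pod; manifests are synced
# independently of each other, with no ordering or dependency handling.
staticPodPath: /var/lib/rancher/rke2/agent/pod-manifests
```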
Yeah, I guess I just need to drop the initial encryption config in place instead of using a static pod
thanks!
The static pod fetches encryption keys from Vault and does rotations - I guess I need to rethink this design for the initial state
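A minimal sketch of what dropping the initial encryption config in place could look like, using the standard EncryptionConfiguration format; the key name and placeholder secret are illustrative, not anything rke2 or Vault produces:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # New writes are encrypted with this bootstrap key...
      - aescbc:
          keys:
            - name: bootstrap-key-1
              secret: <base64-encoded 32-byte key>
      # ...while existing plaintext data can still be read.
      - identity: {}
```

The kube-apiserver reads it via the --encryption-provider-config flag, so rotation becomes a matter of rewriting this file and letting the apiserver pick up the change, rather than coordinating a static pod.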
For something like Vault, the proper way to do that would probably be to use their KMS provider, not to try to shoehorn in static key retrieval from Vault.
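With a KMS provider, the same EncryptionConfiguration format delegates key handling to a plugin over a local socket. A hedged sketch, where the plugin name and socket path depend on whichever Vault KMS plugin gets deployed:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          apiVersion: v2                               # KMS v2 plugin API
          name: vault-kms                              # hypothetical plugin name
          endpoint: unix:///run/vault-kms/socket.sock  # socket exposed by the Vault KMS plugin (assumed path)
          timeout: 3s
      - identity: {}
```

That way the key material stays in Vault and rotation is handled on the Vault side, instead of a static pod pulling keys down onto the node.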
Yup! And that’s the path I’m gonna head down now