# general
Hi Team, I was trying to perform a DR exercise. My setup has three control plane nodes, which are also etcd nodes, and two worker nodes. I deleted all my control plane nodes, brought up one new node with ALL roles (control plane, worker, and etcd), and tried to restore from the available snapshot. The procedure is similar to what is explained at https://www.suse.com/support/kb/doc/?id=000020695.

In the rkestate file I can see the entry below for kubelet. In my original running cluster I had both the credential provider config and the binary stored under /etc/kubernetes, but as soon as I delete all the control plane nodes and bring the new server up with ALL roles, the kubelet container starts restarting because it can't find the binary and the credential provider config at /etc/kubernetes.

My question is: does the snapshot I am restoring from only restore some default set of files under /etc/kubernetes, or do I need to store these files somewhere else? The reason they were put in /etc/kubernetes is that the Docker image of kubelet has /etc/kubernetes as a bind mount. How can I make sure the credential provider config and binary do not vanish when we restore from the snapshot? Any help in resolving this issue would be appreciated.
"kubelet": {
135           "image": "rancher/hyperkube:v1.28.9-rancher1",
136           "extraArgs": {
137             "image-credential-provider-bin-dir": "/etc/kubernetes",
138             "image-credential-provider-config": "/etc/kubernetes/credential-provider-config.yaml",
139             "kube-reserved": "cpu=250m,memory=256Mi"
140           },
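For reference, this is how I understand the same kubelet settings map to the services section of cluster.yml in RKE1. The extra_binds entry and the host path /opt/kubelet-credential-provider are assumptions on my part for illustration (one way to keep the binary and config on a host path that is pre-staged on the new node, rather than relying on the RKE-managed /etc/kubernetes), not something confirmed to fix the restart loop:

```yaml
services:
  kubelet:
    extra_args:
      # Point kubelet at the bind-mounted directory instead of /etc/kubernetes.
      # The host path is hypothetical; adjust to wherever the provider is staged.
      image-credential-provider-bin-dir: /opt/kubelet-credential-provider
      image-credential-provider-config: /opt/kubelet-credential-provider/credential-provider-config.yaml
      kube-reserved: "cpu=250m,memory=256Mi"
    extra_binds:
      # Mount a host directory that RKE does not manage into the kubelet container
      # so the binary and config survive rebuilding the node.
      - "/opt/kubelet-credential-provider:/opt/kubelet-credential-provider"
```

Whether something like this is actually required, or whether the restore is supposed to bring those files back on its own, is exactly what I am trying to confirm.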