
mysterious-apartment-28373

02/26/2023, 3:11 PM
Asking again before I open a bug.. This issue is reproducible 😞 I'm trying to restore my cluster from a snapshot. I've done it many times with v1.19.6, but now it doesn't work for me with v1.22.6. After running:
/usr/local/bin/k3s server --cluster-init --cluster-reset --cluster-reset-restore-path=/root/k3s-dr-recovery-master1-1674728519
I get the following error:
Jan 26 10:50:28 master1 k3s[9982]: time="2023-01-26T10:50:28Z" level=warning msg=" doesn't exist. continuing..."
Jan 26 10:50:28 master1 k3s[9982]: time="2023-01-26T10:50:28Z" level=warning msg=" doesn't exist. continuing..."
Jan 26 10:50:28 master1 k3s[9982]: time="2023-01-26T10:50:28Z" level=info msg="Cluster reset: backing up certificates directory to /var/lib/rancher/k3s/server/tls-1674730228"
Jan 26 10:50:28 master1 k3s[9982]: time="2023-01-26T10:50:28Z" level=warning msg="updating bootstrap data on disk from datastore"
Jan 26 10:50:28 master1 k3s[9982]: time="2023-01-26T10:50:28Z" level=fatal msg="failed to write to : open : no such file or directory"
Jan 26 10:50:29 master1 systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Jan 26 10:50:29 master1 systemd[1]: Failed to start Lightweight Kubernetes.
That empty path doesn't seem right.. Any suggestions? Thanks

creamy-pencil-82913

02/26/2023, 8:36 PM
1. Don't use --cluster-init and --cluster-reset at the same time.
2. You're not on the latest 1.22 patch; I believe this bug has been fixed - but in the meantime, I suspect you're using --secrets-encrypt when running k3s? If so, you also need to pass that flag when running the --cluster-reset command.
3. Get to the latest 1.22 release if at all possible; preferably move up to 1.23 and then 1.24, as both 1.22 and 1.23 are end of life.
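Putting those points together, a restore invocation along those lines might look like the following - a sketch only, assuming secrets encryption is enabled on this cluster (drop --secrets-encrypt otherwise) and reusing the snapshot path from the message above:
/usr/local/bin/k3s server --cluster-reset --secrets-encrypt --cluster-reset-restore-path=/root/k3s-dr-recovery-master1-1674728519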