gray-lawyer-73831
12/05/2022, 4:18 PM
Note the token stored in /var/lib/rancher/k3s/server/node-token on the master node. Then, stop all nodes in the cluster.
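For illustration, grabbing the token and stopping the nodes on a standard systemd-based install might look something like this (the k3s-killall.sh script installed alongside k3s is an alternative if you also want to tear down running pods):
# On the master node: note the token you will pass to --token during the restore
$ sudo cat /var/lib/rancher/k3s/server/node-token

# Stop k3s on every node (the service is k3s on servers, k3s-agent on agents)
$ sudo systemctl stop k3s
$ sudo systemctl stop k3s-agent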
3. On your new machine, restore the snapshot. For example, if using S3, your commands might look something like this:
$ curl -sfL https://get.k3s.io/ | sudo INSTALL_K3S_VERSION=$VERSION INSTALL_K3S_SKIP_ENABLE=true sh -
$ sudo k3s server --cluster-reset --etcd-s3 \
--cluster-reset-restore-path=SNAPSHOT_NAME \
--etcd-s3-bucket=YOUR_BUCKET_NAME --etcd-s3-folder=OPTIONAL_FOLDER --etcd-s3-region=YOUR_REGION \
--etcd-s3-access-key=YOURKEY --etcd-s3-secret-key="YOURSECRET" \
--token=TOKEN
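If the snapshot is a local file on the new machine rather than in S3, the same reset should work without the S3 flags, roughly:
$ sudo k3s server --cluster-reset \
--cluster-reset-restore-path=/path/to/your/snapshot-file \
--token=TOKEN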
4. Ensure any settings, volumes, and other dependencies are present on your new machine, and that your k3s config.yaml is set up as you'd like it. Then start the k3s service, and after a little while (5-10 minutes) you should have your cluster running just as it was. This new node will show Ready and the other nodes will show NotReady. Then you should be able to do any additional migrations you'd like within the cluster and terminate the old machines!
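A rough sketch of that last step, assuming the install above (INSTALL_K3S_SKIP_ENABLE=true means the service still has to be enabled) and an illustrative node name:
# Enable and start k3s on the new machine
$ sudo systemctl enable --now k3s

# Watch the new node come up Ready; the old nodes will sit in NotReady
$ sudo k3s kubectl get nodes

# Once everything you need has been migrated, drop the old nodes from the cluster
$ sudo k3s kubectl delete node OLD_NODE_NAME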
prehistoric-diamond-4224
12/05/2022, 4:52 PM
So the other nodes will just show a NotReady state?
I have another question: since this is a somewhat old cluster, it started out on k3s 1.19 in SQLite mode and got upgraded to 1.24. From what I understand it should still be using SQLite, so would you suggest converting to etcd first and then migrating?
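For reference, the k3s docs describe converting an existing SQLite-backed server to embedded etcd by restarting it once with the --cluster-init flag set; a minimal sketch, assuming a systemd install and the default config.yaml location:
# Add the flag to /etc/rancher/k3s/config.yaml on the current server:
#   cluster-init: true
# then restart so k3s migrates the SQLite data into embedded etcd
$ sudo systemctl restart k3s

# Confirm the node comes back healthy before taking the etcd snapshot for the migration
$ sudo k3s kubectl get nodes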