quiet-memory-19288 — 11/02/2022, 9:20 PM
[message text missing]

cool-forest-29147 — 11/03/2022, 9:04 AM
[message text missing]

quiet-memory-19288 — 11/03/2022, 8:46 PM
[message text missing]

nutritious-tomato-14686 — 11/04/2022, 5:02 PM
[…] /var/lib/rancher/k3s/server […]. You would need to switch to the embedded etcd db for that functionality. See https://docs.k3s.io/backup-restore#backup-and-restore-with-embedded-etcd-datastore-experimental
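While still on the sqlite datastore, a consistent copy can be taken with sqlite's online backup API instead of plain file copying (which can catch the database mid-write). A minimal sketch; the `state.db` filename under `/var/lib/rancher/k3s/server/db/` is an assumption about the default kine datastore path, and the destination path is hypothetical:

```python
import sqlite3

# Assumed default k3s kine/sqlite datastore path (verify on your node).
SRC = "/var/lib/rancher/k3s/server/db/state.db"
# Hypothetical backup destination.
DST = "/tmp/state-backup.db"

def backup_sqlite(src: str, dst: str) -> None:
    """Copy a live sqlite database using the online backup API, which
    yields a consistent snapshot even while the source is being written,
    unlike cp on the raw file."""
    with sqlite3.connect(src) as source, sqlite3.connect(dst) as target:
        source.backup(target)

# On the k3s node you would run e.g.:
#     backup_sqlite(SRC, DST)
```

Note this only captures the datastore, not certificates or other state under /var/lib/rancher/k3s/server, and it is not the snapshot/restore workflow the embedded etcd datastore provides.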
quiet-memory-19288 — 11/07/2022, 3:58 PM
[message text missing]

nutritious-tomato-14686 — 11/07/2022, 5:47 PM
[…] k3s-killall.sh […] or a power loss) will cause corruption in your DB. You might get lucky and those lost writes are not needed, but do it a few thousand times and I guarantee you'll run into issues.
I'm not sure what others are doing to "ensure k3s stays happy"; there isn't a way, AFAIK, to verify the integrity of the K3s DB. At the end of the day, this is why high-availability K8s is so valuable: it's very resilient when you have 10s of clusters made up of 3 nodes vs 100s of single-node clusters.
cool-forest-29147 — 11/08/2022, 4:20 PM
[message text missing]

quiet-memory-19288 — 11/29/2022, 3:53 PM
kubectl get --raw='/livez/etcd'
ok
That is working, but I am on sqlite. @nutritious-tomato-14686 any chance that is checking sqlite and I could use it as my health check? If it ever fails, I just nuke (like I said above) and re-deploy?

nutritious-tomato-14686 — 11/29/2022, 6:08 PM
PRAGMA quick_check
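The `PRAGMA quick_check` suggestion above can be wrapped into a small health-check script. A sketch using Python's standard sqlite3 module; the function name is mine, and the datastore path in the comment is an assumption about the k3s default:

```python
import sqlite3

def sqlite_healthy(db_path: str) -> bool:
    """Run PRAGMA quick_check against the database file; returns True
    when sqlite reports 'ok'. quick_check skips some of the index
    verification that the slower PRAGMA integrity_check performs."""
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute("PRAGMA quick_check").fetchall()
        return rows == [("ok",)]
    finally:
        con.close()

# On a k3s node you might point this at the (assumed) default datastore:
#     sqlite_healthy("/var/lib/rancher/k3s/server/db/state.db")
```

If it returns False, that would fit the "nuke and re-deploy" approach discussed earlier in the thread.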
quiet-memory-19288 — 11/29/2022, 6:46 PM
[message text missing]