# k3s
I have a highly available k3s cluster with 5 embedded etcd server nodes. Sometimes when I fail 2 server nodes, I lose my cluster entirely. Why does this happen only sometimes? Also, when I then go to restore from a snapshot, I notice that not all of the original images I imported into each node's ctr image store are there. Why does this happen? It makes no sense, because I imported the images months ago, and I'm using the most recent snapshot, from 1 day ago.
Any logs for this?
What do you mean by

> i lost my cluster entirely

? You should be able to tolerate a 2-node outage with 5 nodes. What does "lost" mean? What specifically are you seeing?
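The "2-node outage with 5 nodes" claim follows from etcd's quorum rule: a cluster of n members needs floor(n/2) + 1 healthy members to accept writes. A minimal sketch of the arithmetic (the function names here are illustrative, not part of etcd):

```python
# etcd quorum math: a write must be acknowledged by a majority of members.
def quorum(n: int) -> int:
    """Members required for a majority in an n-member etcd cluster."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """Members that can fail while the cluster still has quorum."""
    return n - quorum(n)

for n in (1, 3, 5):
    print(f"{n} members: quorum={quorum(n)}, tolerates {tolerated_failures(n)} failures")
# 5 members: quorum=3, tolerates 2 failures
```

So losing exactly 2 of 5 should leave a working cluster; if it does not, a third member was likely already unhealthy or partitioned before the test, which is worth checking in the etcd member list before failing nodes.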
> when i then go to restore from snapshot, i notice not all of my original images i imported into each nodes ctr image repo are there

Snapshots are etcd snapshots. They contain only Kubernetes resources. They are not snapshots of the host OS, or of the containerd image store, or anything else. Restoring a snapshot only gets you back the Kubernetes resources that were in etcd at the time the snapshot was taken. If you've modified the host OS, or if the kubelet has cleaned up some images from the containerd image store due to garbage collection, those will not be affected in any way by restoring a snapshot.
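If kubelet image garbage collection is what removed the imported images, its thresholds can be tuned through k3s's `kubelet-arg` config option. A sketch, assuming the standard kubelet GC flags and the default k3s config path; verify both against your k3s version before relying on it:

```yaml
# /etc/rancher/k3s/config.yaml (sketch)
# The kubelet deletes unused images when disk usage crosses the high
# threshold, and frees space until it falls below the low threshold.
kubelet-arg:
  - "image-gc-high-threshold=90"
  - "image-gc-low-threshold=80"
```

Raising the thresholds only delays GC; images imported directly with `ctr` and not referenced by any running pod remain candidates for cleanup, so re-importing them after a disk-pressure event (or hosting them in a registry) is the more durable fix.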