# rke2
m
Not sure if this is interesting given my env setup - running rke2 CLI commands from inside a container for unit tests - but I get an occasional panic at the final stages of `cluster-reset`. If it would be useful I can open an issue.
```
2025-10-20T11:28:50+00:00 >> time="2025-10-20T11:28:50Z" level=info msg="Connected to etcd v3.5.21 - datastore using 16384 of 20480 bytes"
2025-10-20T11:28:50+00:00 >> time="2025-10-20T11:28:50Z" level=info msg="Defragmenting etcd database"
2025-10-20T11:28:50+00:00 >> time="2025-10-20T11:28:50Z" level=info msg="Connected to etcd v3.5.21 - datastore using 12288 of 20480 bytes"
2025-10-20T11:28:50+00:00 >> time="2025-10-20T11:28:50Z" level=info msg="Defragmenting etcd database"
2025-10-20T11:28:50+00:00 >> time="2025-10-20T11:28:50Z" level=info msg="Datastore using 12288 of 20480 bytes after defragment"
2025-10-20T11:28:50+00:00 >> time="2025-10-20T11:28:50Z" level=info msg="Datastore using 12288 of 20480 bytes after defragment"
2025-10-20T11:28:50+00:00 >> time="2025-10-20T11:28:50Z" level=info msg="Connection to etcd is ready"
2025-10-20T11:28:50+00:00 >> time="2025-10-20T11:28:50Z" level=info msg="ETCD server is now running"
2025-10-20T11:28:50+00:00 >> time="2025-10-20T11:28:50Z" level=info msg="rke2 is up and running"
2025-10-20T11:28:50+00:00 >> time="2025-10-20T11:28:50Z" level=info msg="Saving cluster bootstrap data to datastore"
2025-10-20T11:28:50+00:00 >> time="2025-10-20T11:28:50Z" level=info msg="Bootstrap key locked for initial create"
2025-10-20T11:28:50+00:00 >> time="2025-10-20T11:28:50Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1760959689: notBefore=2025-10-20 11:28:09 +0000 UTC notAfter=2026-10-20 11:28:50 +0000 UTC"
2025-10-20T11:28:50+00:00 >> time="2025-10-20T11:28:50Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1760959689: notBefore=2025-10-20 11:28:09 +0000 UTC notAfter=2026-10-20 11:28:50 +0000 UTC"
2025-10-20T11:28:50+00:00 >> time="2025-10-20T11:28:50Z" level=info msg="Shutting down kubelet and etcd"
2025-10-20T11:28:50+00:00 >> time="2025-10-20T11:28:50Z" level=error msg="Kubelet exited: signal: killed"
2025-10-20T11:28:50+00:00 >> time="2025-10-20T11:28:50Z" level=info msg="Bootstrap key lock is held"
2025-10-20T11:28:50+00:00 >> {"level":"warn","ts":"2025-10-20T11:28:50.180515Z","logger":"etcd-client","caller":"v3@v3.5.21-k3s1/retry_interceptor.go:63","msg":"retrying of unary invoker failed","target":"<etcd-endpoints://0xc00125a3c0/127.0.0.1:2379>","attempt":0,"error":"rpc error: code = Unavailable desc = error reading from server: EOF"}
2025-10-20T11:28:50+00:00 >> panic: rpc error: code = Unavailable desc = error reading from server: EOF
2025-10-20T11:28:50+00:00 >> 
2025-10-20T11:28:50+00:00 >> goroutine 629 [running]:
2025-10-20T11:28:50+00:00 >> github.com/k3s-io/k3s/pkg/cluster.(*Cluster).Start.func2()
2025-10-20T11:28:50+00:00 >> 	/go/pkg/mod/github.com/k3s-io/k3s@v1.33.2-0.20250616230217-4256d5813c74/pkg/cluster/cluster.go:79 +0x165
2025-10-20T11:28:50+00:00 >> created by github.com/k3s-io/k3s/pkg/cluster.(*Cluster).Start in goroutine 1
2025-10-20T11:28:50+00:00 >> 	/go/pkg/mod/github.com/k3s-io/k3s@v1.33.2-0.20250616230217-4256d5813c74/pkg/cluster/cluster.go:72 +0x19b
```
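(Editor's note: the "datastore using X of Y bytes" and "Defragmenting etcd database" lines above correspond roughly to the etcd maintenance API. A minimal sketch using `go.etcd.io/etcd/client/v3` - the endpoint is illustrative, and this is not the actual rke2/k3s code:)

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Illustrative endpoint; rke2 runs its managed etcd on 127.0.0.1:2379.
	endpoint := "127.0.0.1:2379"

	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{endpoint},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Status reports DbSizeInUse / DbSize, i.e. the "using X of Y bytes" figures.
	status, err := cli.Status(ctx, endpoint)
	if err != nil {
		panic(err)
	}
	fmt.Printf("datastore using %d of %d bytes\n", status.DbSizeInUse, status.DbSize)

	// Defragment rewrites the backend database to reclaim free pages.
	if _, err := cli.Defragment(ctx, endpoint); err != nil {
		panic(err)
	}
}
```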
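(And the panic itself comes from a background goroutine in k3s's cluster startup that saves bootstrap data and panics if the save fails. A simplified, runnable sketch of that pattern - type and method names are illustrative stand-ins, not the actual `pkg/cluster/cluster.go` source:)

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// Cluster is an illustrative stand-in for the k3s cluster type.
type Cluster struct{}

// save stands in for persisting bootstrap data to etcd; here it fails the
// way the log shows ("Unavailable ... EOF") to demonstrate the failure path.
func (c *Cluster) save(ctx context.Context) error {
	return errors.New("rpc error: code = Unavailable desc = error reading from server: EOF")
}

// Start mirrors the shape of the goroutine in the trace above: bootstrap
// data is saved in the background, and a failed save panics the process.
func (c *Cluster) Start(ctx context.Context) error {
	go func() {
		if err := c.save(ctx); err != nil {
			// If etcd is shut down concurrently (as at the end of
			// cluster-reset), the save races with the teardown and
			// the panic takes down the whole process.
			panic(err)
		}
	}()
	return nil
}

func main() {
	c := &Cluster{}
	_ = c.Start(context.Background())
	fmt.Println("server running; background save in flight")
	time.Sleep(time.Second) // the background goroutine panics here
}
```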
c
Wait, what are you doing?
You're running the cluster-reset command inside a container? Is all of rke2 running in a container, or are you trying to reset the cluster from inside a pod running on that cluster?
m
The entire cluster (single node) is running inside a container, started via the CLI alone rather than through systemctl. I understand this isn't the way to go - it's only meant for some specific unit-test scenarios - but since the panic isn't consistent, it might suggest a race.
c
Is the main rke2 server process also running when you run this command?
m
no
c
You might try a newer release? That one's a few months old, and that bit of code has since been refactored to no longer panic if the save fails.
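(Editor's note: a hypothetical sketch of what "no longer panic" can look like in that goroutine - log and retry rather than crash. It reuses the illustrative `Cluster` type from the sketch above; the actual refactored k3s code may differ in detail:)

```go
import (
	"context"
	"time"

	"github.com/sirupsen/logrus"
)

// Start is a hypothetical non-panicking variant of the same goroutine.
func (c *Cluster) Start(ctx context.Context) error {
	go func() {
		for {
			err := c.save(ctx)
			if err == nil {
				return
			}
			// Log and retry instead of panicking, so a transient
			// "Unavailable ... EOF" during shutdown no longer kills
			// the process.
			logrus.Errorf("failed to save bootstrap data: %v", err)
			select {
			case <-ctx.Done():
				return
			case <-time.After(5 * time.Second):
			}
		}
	}()
	return nil
}
```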
m
sure, thank you