# k3s
f
The purple line is resource=events, and the high green line is resource=secrets
c
something (probably a pod) is trying to watch a future revision. It is likely very confused because the current revision is not retained when you switch datastores.
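(For context: a rough way to spot this on the server side is to grep the k3s journal for etcd's "future revision" complaint. This is just a sketch; it assumes a default systemd install where the service unit is named `k3s`, and the exact error wording may differ.)

```sh
# Sketch, assuming k3s runs as the systemd unit "k3s" and etcd's usual
# "required revision is a future revision" wording applies.
journalctl -u k3s --since "1 hour ago" | grep -i "future revision"
```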
f
ahhhh ok
c
Restart the nodes, or just do a k3s-killall.sh and then start the services again.
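(Roughly, per node, something like the below; this sketch assumes a default k3s install where the killall script lives in /usr/local/bin and the systemd unit is `k3s` on servers, `k3s-agent` on workers.)

```sh
# Sketch for a server node with a default k3s install; on agents start k3s-agent instead.
sudo /usr/local/bin/k3s-killall.sh   # stop k3s and everything it spawned
sudo systemctl start k3s             # bring the service back up
```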
f
kk
Or I can go dig around and start deleting pods 😆
c
yeah if you want to be more surgical you can go see which pod has watch errors in its logs
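(A blunt way to do that sweep, as a sketch; the grep pattern and `--tail` depth are guesses, adjust to taste.)

```sh
# Sketch: scan recent logs of every pod for watch/revision-looking errors.
for ns in $(kubectl get ns -o name | cut -d/ -f2); do
  for pod in $(kubectl get pods -n "$ns" -o name); do
    if kubectl logs -n "$ns" "$pod" --tail=200 2>/dev/null \
        | grep -Eqi 'future revision|watch.*(error|closed)'; then
      echo "$ns/$pod"
    fi
  done
done
```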
f
Is that the same revision that shows up in metadata when you do
kubectl get ...
?
I might just out of curiosity
ty for the help
c
no
revision is a global change counter for etcd that is incremented any time any resource is created/altered/deleted
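(If you're curious, you can look at that counter directly. The sketch below assumes embedded etcd with k3s's default cert paths under /var/lib/rancher/k3s/server/tls/etcd/ and that etcdctl is installed on the node, since k3s doesn't ship it; the current revision shows up in the response header of the JSON output.)

```sh
# Sketch, assuming embedded etcd and k3s's default client cert locations.
sudo ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt \
  --cert=/var/lib/rancher/k3s/server/tls/etcd/client.crt \
  --key=/var/lib/rancher/k3s/server/tls/etcd/client.key \
  endpoint status -w json
```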
f
Ahhh ok. Hah, so if I just wait long enough it'll fix itself once the change counter gets up high enough :P j/k - I'm not willing to wait that long. Plus I kinda want to see what's doing it. Wonder if that'd show up in the pod logs too
It's interesting how some errors/warnings bubble up in kube API clients and come through almost the same as they show on the server. Guessing that's something that's built into the rpc libraries.
so far: longhorn-manager, rancher-webhook, fleet-agent