# k3s
Yes, it works if I manually execute this script and reboot. But it could lead to unsynced writes and thus data loss. Could you please advise whether we should include this in the k3s systemd service?
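(For reference, a minimal sketch of what wiring the killall script into the unit could look like: a systemd drop-in, assuming the default script-install path of /usr/local/bin/k3s-killall.sh. One caveat: the stock k3s unit uses KillMode=process precisely so containers survive service restarts, so a hook like this would also kill all workloads on every `systemctl restart k3s`, not only at shutdown.)

```bash
# Hypothetical drop-in (not part of the stock k3s packaging): run the
# killall script whenever the k3s unit stops.
sudo mkdir -p /etc/systemd/system/k3s.service.d
sudo tee /etc/systemd/system/k3s.service.d/killall.conf >/dev/null <<'EOF'
[Service]
# Default install location of the script placed by the k3s installer
ExecStopPost=/usr/local/bin/k3s-killall.sh
EOF
sudo systemctl daemon-reload
```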
you can drain the node in the Rancher UI, or via the CLI (https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/), then run the k3s-killall script, then reboot your node
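A minimal sketch of that sequence from a shell, assuming a default k3s install; `<node-name>` is a placeholder for the name shown by `kubectl get nodes`, and on kubectl releases older than ~v1.20 the flag was `--delete-local-data`:

```bash
# Evict pods from the node (DaemonSet pods can't be evicted, so skip them).
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

# Stop k3s and kill all the container processes it spawned.
# /usr/local/bin is the default install location for the script.
/usr/local/bin/k3s-killall.sh

sudo reboot

# After the node comes back up, make it schedulable again:
#   kubectl uncordon <node-name>
```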
those are the steps I follow when doing OS updates, and that appears to work for the deployments I have... you will have to assess your cluster and use your best judgement; perhaps try it in non-prod first
@creamy-pencil-82913 @hundreds-evening-84071 I created a service which drains my node and then stops the k3s service. This solves the 90-second shutdown delay, but the k3s service now exits with an error:

level=error msg="scheduler exited: finished without leader elect"
systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
systemd[1]: k3s.service: Failed with result 'exit-code'.
systemd[1]: k3s.service: Unit process 2011 (containerd-shim) remains running after unit stopped.
systemd[1]: Stopped k3s.service - Lightweight Kubernetes.

Is it because the node is drained and it is a single-node cluster, so no leader can be elected?
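For illustration, a hypothetical unit along those lines (the unit name, paths, and timeout are assumptions, not something k3s ships). It leans on systemd stopping units in reverse start order: `After=k3s.service` means its `ExecStop` runs while k3s is still up, so the drain can reach the API server. `%H` expands to the hostname, which is the default k3s node name; `--timeout=60s` keeps the drain inside systemd's default 90-second stop timeout.

```bash
# Hypothetical drain-before-stop unit; names, paths, and timeout are
# illustrative. Because systemd stops units in reverse start order,
# After=k3s.service makes this unit's ExecStop run before k3s stops.
sudo tee /etc/systemd/system/k3s-drain.service >/dev/null <<'EOF'
[Unit]
Description=Drain this node before k3s stops
After=k3s.service
Requires=k3s.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
# "k3s kubectl" uses the local admin kubeconfig; %H is the hostname,
# which matches the default k3s node name.
ExecStop=/usr/local/bin/k3s kubectl drain %H --ignore-daemonsets --delete-emptydir-data --timeout=60s

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now k3s-drain.service
```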