# k3s
c
Can you show how you created the token and passed it to the agent?
Did the node ever join the cluster? If it did join once, there should be a node for it, and it will try to use node identity to authenticate instead of the token. If you delete the node from the cluster, you may have to clean some files from the agent to get it to use the token again.
If you can describe exactly which steps you followed, that would be helpful.
w
Thanks @creamy-pencil-82913. In the end I scrapped all the VMs and recreated them from scratch with Terraform, and after fixing some bugs the node was added correctly. What I did (maybe it will be useful for someone): 1. the token was created with:
k3s token create  --ttl 10m
2. it was passed to the install script via the K3S_TOKEN environment variable. The issue was probably that I created one token and passed it to the install script, but the k3s agent failed to start properly. I then fixed something, created a new token, and passed that to the install script. However, IMHO the install script does not restart/refresh the service with the new token, so the service was probably still using the outdated one. A better way to pass the token might be via a token file or the config file, followed by a service restart.
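The flow described above can be sketched as a shell sequence. The server URL and node names are placeholders, and the config-file step at the end is the alternative suggested in the message, not what the install script does by default:

```shell
# On the server: create a short-lived join token (10-minute TTL)
k3s token create --ttl 10m

# On the agent: pass the token to the install script via K3S_TOKEN
# (https://my-server:6443 is a placeholder for the real server URL)
curl -sfL https://get.k3s.io | \
  K3S_URL=https://my-server:6443 K3S_TOKEN=<token> sh -

# Alternative suggested above: put the token in the agent's config file
# and restart the service, so a newly created token actually takes effect
cat >/etc/rancher/k3s/config.yaml <<'EOF'
server: https://my-server:6443
token: <new-token>
EOF
systemctl restart k3s-agent
```

The point of the config-file variant is that re-running the install script with a fresh K3S_TOKEN does not restart the already-running agent, while editing the config file and restarting the service does.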
I did some experiments. I deleted a node from the cluster (
kubectl delete node xxx
) and tried both the original token (the one used for bootstrapping the node) and a fresh new one, but in both cases I got a 401 Unauthorized error
I followed https://docs.k3s.io/architecture#how-agent-node-registration-works and deleted /etc/rancher/node/password from the agent node
To bootstrap the node again after it was deleted, I had to execute:
root@dev-worker-0:/var/lib/rancher/k3s/agent# rm -r  /etc/rancher/node/ client-kubelet.crt client-kubelet.key kubelet.kubeconfig serving-kubelet.crt serving-kubelet.key k3scontroller.kubeconfig client-k3s-controller.crt client-k3s-controller.key kubeproxy.kubeconfig
It might not be the minimal set of files to delete
c
Would you mind opening an issue on GitHub? The agent should probably handle this better.
All you should need to delete is the client-kubelet cert and key, I believe
Maybe the kubeconfig too?
w
I deleted
rm -r  /etc/rancher/node/ /var/lib/rancher/k3s/agent/client-kubelet.crt /var/lib/rancher/k3s/agent/client-kubelet.key
and that was enough.
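Putting the thread together, the minimal re-join procedure that worked seems to be the following sketch. The node name is taken from the log above, and the service name assumes the standard install-script setup for an agent node:

```shell
# On the server: remove the stale node object
kubectl delete node dev-worker-0

# On the agent (as root): drop the stored node password and the kubelet
# client cert/key, so the agent falls back to the join token instead of
# its old node identity
rm -r /etc/rancher/node/ \
      /var/lib/rancher/k3s/agent/client-kubelet.crt \
      /var/lib/rancher/k3s/agent/client-kubelet.key

# Restart the agent so it re-registers using the token
systemctl restart k3s-agent
```

This matches the linked architecture doc: the node password in /etc/rancher/node/ and the kubelet client certificate are what the agent uses to authenticate as an existing node, so removing both forces token-based registration.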
Thanks @creamy-pencil-82913. I've just reported this as a bug here: https://github.com/k3s-io/k3s/issues/7797
c
It should probably just fall back to the token if node identity fails. It should be fixable by the July releases; the June release will be out early next week, so it's too late for that one.