Hey, I have a quick question. I'm setting up a k3s cluster on a number of Raspberry Pis that I want to import into Rancher. When I do kubectl get nodes on the controller node, everything looks fine:
NAME STATUS ROLES AGE VERSION
client-rpi2 Ready <none> 10m v1.24.9+k3s1
raspberrypi Ready control-plane,master 22m v1.24.9+k3s1
But when I run any kubectl command on client-rpi2 (the worker node), I get the following:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I know that to fix this error on the master, I need to export the kubeconfig file, but I can't find information on how to do that on the worker nodes.
My question is: is this behavior expected on worker nodes, so that I should use kubectl only on the controller node? Or can I somehow call kubectl from any node?
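(For reference, this is what already works for me on the controller node; a minimal sketch assuming k3s's default kubeconfig path:)

```shell
# On the k3s server (controller) node, the admin kubeconfig is written to
# the k3s default path; exporting KUBECONFIG makes kubectl pick it up:
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
# then: kubectl get nodes
```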
01/25/2023, 10:36 AM
The agents don't have a copy of the admin kubeconfig, so yes, what you're seeing is normal.
You can copy the kubeconfig from the server and change the address to point at the server by name instead of localhost if you want to use it elsewhere.
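Roughly like this. It's a sketch, not a definitive recipe: /etc/rancher/k3s/k3s.yaml is the k3s default kubeconfig location, "raspberrypi" is the server hostname from your node list, and the scp user and the sample file below are illustrative; the address rewrite is demonstrated on a local sample so you can see what changes.

```shell
# On the worker, fetch the admin kubeconfig from the server, e.g.:
#   scp pi@raspberrypi:/etc/rancher/k3s/k3s.yaml ~/.kube/config
# (user "pi" is an assumption; use whatever account can read that file)

# The copied file points at 127.0.0.1, which only works on the server itself.
# Demonstrated here on a sample file with the same structure:
mkdir -p /tmp/kubedemo
cat > /tmp/kubedemo/config <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF

# Rewrite the loopback address to the server's hostname:
sed -i 's/127\.0\.0\.1/raspberrypi/' /tmp/kubedemo/config
grep 'server:' /tmp/kubedemo/config

# Then tell kubectl on the worker to use it:
#   export KUBECONFIG=~/.kube/config
#   kubectl get nodes
```

Run the same sed against the real copied file; after that, kubectl on the worker talks to the API server over the network instead of localhost:8080.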