# general
c
Howdy, I'm having trouble adding nodes to an existing single-node cluster. I set up a single-node cluster using

```sh
~/k3s server --flannel-backend wireguard-native --node-name someNodeName --cluster-init
```

and that seems to work; however, the server in `/etc/rancher/k3s/k3s.yaml` is set to `https://127.0.0.1:6443`.
When I try to join another server node to this cluster using

```sh
~/k3s server --flannel-backend wireguard-native --node-name someNodeName --server https://192.168.88.212:6443
```

(where 192.168.88.212 is the IP of the above single node), I get the following from `journalctl`:
```
time="2025-08-21T02:20:23Z" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token"
time="2025-08-21T02:20:23Z" level=info msg="To join server node to cluster: k3s server -s <https://192.168.88.211:6443> -t ${SERVER_NODE_TOKEN}"
time="2025-08-21T02:20:23Z" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token"
time="2025-08-21T02:20:23Z" level=info msg="To join agent node to cluster: k3s agent -s <https://192.168.88.211:6443> -t ${AGENT_NODE_TOKEN}"
time="2025-08-21T02:20:23Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
time="2025-08-21T02:20:23Z" level=info msg="Run: k3s kubectl"
time="2025-08-21T02:20:24Z" level=info msg="Password verified locally for node somenodename"
time="2025-08-21T02:20:24Z" level=info msg="certificate CN=somenodename signed by CN=k3s-server-ca@1755699413: notBefore=2025-08-20 14:16:53 +0000 UTC notAfter=2026-08-21 02:20:24 +0000 UTC"
time="2025-08-21T02:20:24Z" level=info msg="certificate CN=system:node:somenodename,O=system:nodes signed by CN=k3s-client-ca@1755699413: notBefore=2025-08-20 14:16:53 +0000 UTC notAfter=2026-08-21 02:20:24 +0000 UTC"
time="2025-08-21T02:20:24Z" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1755699413: notBefore=2025-08-20 14:16:53 +0000 UTC notAfter=2026-08-21 02:20:24 +0000 UTC"
time="2025-08-21T02:20:24Z" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1755699413: notBefore=2025-08-20 14:16:53 +0000 UTC notAfter=2026-08-21 02:20:24 +0000 UTC"
time="2025-08-21T02:20:24Z" level=info msg="Module overlay was already loaded"
time="2025-08-21T02:20:24Z" level=info msg="Module nf_conntrack was already loaded"
time="2025-08-21T02:20:24Z" level=info msg="Module br_netfilter was already loaded"
time="2025-08-21T02:20:24Z" level=info msg="Module iptable_nat was already loaded"
time="2025-08-21T02:20:24Z" level=info msg="Module iptable_filter was already loaded"
time="2025-08-21T02:20:24Z" level=warning msg="Failed to load kernel module nft-expr-counter with modprobe"
time="2025-08-21T02:20:24Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2025-08-21T02:20:24Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
time="2025-08-21T02:20:25Z" level=info msg="containerd is now running"
time="2025-08-21T02:20:25Z" level=info msg="Creating k3s-cert-monitor event broadcaster"
time="2025-08-21T02:20:25Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --feature-gates=CloudDualStackNodeIPs=true --healthz-bind-address=127.0.0.1 --hostname-override=somenodename --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-ip=192.168.88.211 --node-labels= --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
time="2025-08-21T02:20:25Z" level=info msg="Connecting to proxy" url="<wss://127.0.0.1:6443/v1-k3s/connect>"
time="2025-08-21T02:20:25Z" level=info msg="Handling backend connection request [somenodename]"
time="2025-08-21T02:20:25Z" level=info msg="Remotedialer connected to proxy" url="<wss://127.0.0.1:6443/v1-k3s/connect>"
time="2025-08-21T02:20:25Z" level=error msg="Sending HTTP/1.1 503 response to 127.0.0.1:35958: runtime core not ready"
time="2025-08-21T02:20:25Z" level=info msg="Adding member somenodename-c230b040=<https://192.168.88.211:2380> to etcd cluster [lilz-kubernetes-1-db6c7708=<https://192.168.88.212:2380>]"
time="2025-08-21T02:20:25Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=somenodename --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
E0821 02:20:25.900707   12934 server.go:1039] "Failed to retrieve node info" err="apiserver not ready"
E0821 02:20:27.001367   12934 server.go:1039] "Failed to retrieve node info" err="apiserver not ready"
time="2025-08-21T02:20:28Z" level=info msg="Failed to test data store connection: failed to get etcd status: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""
E0821 02:20:29.390830   12934 server.go:1039] "Failed to retrieve node info" err="apiserver not ready"
```
Any insights would be greatly appreciated.
c
The admin kubeconfig always points at localhost. This is normal.
Did you remember to pass the token as well when joining the second node? Are the correct ports open?
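If you want to use that kubeconfig from another machine later, copy it off the node and swap in the real address. A minimal sketch, assuming the first node's IP from your messages:

```sh
# Copy the admin kubeconfig off the first server, then point it at the
# node's LAN IP instead of 127.0.0.1 (192.168.88.212 assumed from the thread).
scp root@192.168.88.212:/etc/rancher/k3s/k3s.yaml ~/.kube/config
sed -i 's/127.0.0.1/192.168.88.212/g' ~/.kube/config
kubectl get nodes
```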
c
The open ports are 22, 80, 443, and 6443. And yeah, the token is passed with `K3S_TOKEN` (sorry for missing that above).
The only other env var is `INSTALL_K3S_VERSION: "v1.29.15+k3s1"`.
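Putting those together, the join attempt looks roughly like this via the install script (a sketch reconstructed from the flags and env vars quoted above, not the exact command I ran):

```sh
# Sketch of the second server's join, assuming the get.k3s.io install script;
# the flags and env vars are the ones quoted earlier in this thread.
curl -sfL https://get.k3s.io | \
  INSTALL_K3S_VERSION="v1.29.15+k3s1" \
  K3S_TOKEN="<contents of /var/lib/rancher/k3s/server/token on the first node>" \
  sh -s - server \
    --flannel-backend wireguard-native \
    --node-name someNodeName \
    --server https://192.168.88.212:6443
```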
c
That is not the correct set of ports. Please read the docs.
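Roughly, an HA setup with embedded etcd and wireguard-native needs these open between the nodes; a sketch using ufw as an example firewall (the authoritative list is in the k3s server requirements docs):

```sh
# Inbound rules between cluster nodes for k3s with embedded etcd and
# wireguard-native flannel; ufw shown here as an example, adapt as needed.
ufw allow 6443/tcp        # Kubernetes API server / k3s supervisor
ufw allow 2379:2380/tcp   # embedded etcd client and peer traffic (server nodes)
ufw allow 10250/tcp       # kubelet metrics
ufw allow 51820/udp       # flannel wireguard-native (IPv4)
ufw allow 51821/udp       # flannel wireguard-native (IPv6, if dual-stack)
```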
c
Ah yeah, I was looking for this page
I stumbled across it a couple of years ago but forgot where it was
Thanks
I'll give that a shot now
That moved things forward. I've run into this, though:
```
level=fatal msg="starting kubernetes: preparing server: failed to bootstrap cluster data: Get \"https://192.168.88.212:6443/v1-k3s/server-bootstrap\": tls: failed to verify certificate: x509: certificate signed by unknown authority"
```
If I recall correctly, there's a config param to skip certificate validation, right?
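(Side note, rather than skipping validation: the full server token embeds a hash of the cluster CA, format `K10<sha256-of-cluster-CA>::server:<password>`, so a token saved from a previous, since-recreated cluster will fail verification exactly like this. Re-reading it from the current first node is the usual fix; a sketch, with the path taken from the log above:)

```sh
# Fetch the current full token from the existing server; it pins the cluster
# CA, letting the joining node verify the cert during bootstrap.
ssh root@192.168.88.212 cat /var/lib/rancher/k3s/server/token
```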
Weird, I recreated the cluster and the second server node successfully registered this time
I guess I must've done something slightly different last time