# rke2
s
Hello everyone, I have a question regarding setting up a cluster using RKE2.
• Currently, I have an rke2-server set up on-premises, and I want to add an rke2-agent that sits on a different LAN (an AWS EC2 instance).
• Joining the cluster over the supervisor port 9345 worked: I forwarded the rke2-server node's port 9345 to external port 25345 and pointed the agent config's server field at it (server: https://<ON PREMISE PUBLIC IP>:25345).
• However, a problem occurred when the agent tried to connect to kube-apiserver, so I would like to remap the kube-apiserver port 6443 to another port (for example, 25443) in the rke2-agent config.
• However, I couldn't find any option or documentation for this setup (either an option to make the agent point to a different port, or one to make the server use a port other than 6443).
I'm reaching out for your help with this. Any small help would be greatly appreciated. Please feel free to offer some advice.
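(For context, this is roughly the port-forward described above, written as iptables DNAT rules on a hypothetical on-premises router; 192.168.0.25 is the rke2-server's LAN IP from the config below, and eth0 is an assumed WAN interface, so adjust both to taste:)
# Forward external 25345 to the rke2-server supervisor port 9345
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 25345 \
  -j DNAT --to-destination 192.168.0.25:9345
# The analogous rule wanted for the apiserver: external 25443 -> internal 6443
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 25443 \
  -j DNAT --to-destination 192.168.0.25:6443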
[situation] → the above was wrong; I've edited the flow below.
[my agent config]
server: https://<ON PREMISE PUBLIC IP>:25345
token: <SERVER TOKEN>
node-name: aws-worker01
node-label:
  - node-role=aws
[my server config]
node-name: control-plane
node-external-ip: <ON PREMISE PUBLIC IP>
tls-san:
  - <ON PREMISE PUBLIC IP>
  - 192.168.0.25
cni: "calico"
write-kubeconfig-mode: "0644"
kube-apiserver-arg:
  - advertise-port=25443
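(Note, hedged: entries under kube-apiserver-arg are passed straight through to kube-apiserver, which documents flags like --advertise-address, --bind-address, and --secure-port but, as far as I know, no --advertise-port; advertise-port exists as a k3s server flag rather than a kube-apiserver one. A sketch of the pass-through shape using documented flags, purely for illustration:)
# Illustration only: each list entry becomes a --<flag>=<value> on kube-apiserver.
kube-apiserver-arg:
  - advertise-address=<ON PREMISE PUBLIC IP>  # documented kube-apiserver flag
  - secure-port=6443                          # documented flag; RKE2 assumes 6443 elsewhere, so changing it is risky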
[agent start log]
May 11 05:44:34 ip-<EC2 Public IP>.ap-northeast-2.compute.internal systemd[1]: Starting rke2-agent.service - Rancher Kubernetes Engine v2 (agent)...
May 11 05:44:34 ip-<EC2 Public IP>.ap-northeast-2.compute.internal sh[3115]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
May 11 05:44:34 ip-<EC2 Public IP>.ap-northeast-2.compute.internal systemctl[3116]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory
May 11 05:44:34 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:34Z" level=warning msg="not running in CIS mode"
May 11 05:44:34 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:34Z" level=info msg="Applying Pod Security Admission Configuration"
May 11 05:44:34 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:34Z" level=info msg="Starting rke2 agent v1.31.8+rke2r1 (f598b218a6f58bd566d6d757a352efb8260de42e)"
May 11 05:44:34 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:34Z" level=info msg="Updated load balancer rke2-agent-load-balancer default server: <ON-PREMISE RKE2 SERVER Public IP>:25345"
May 11 05:44:34 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:34Z" level=info msg="Server <ON-PREMISE RKE2 SERVER Public IP>:25345@STANDBY*->UNCHECKED from add to load balancer rke2-agent-load-balancer"
May 11 05:44:34 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:34Z" level=info msg="Updated load balancer rke2-agent-load-balancer server addresses -> [<ON-PREMISE RKE2 SERVER Public IP>:25345] [default: <ON-PREMISE RKE2 SERVER Public IP>:25345]"
May 11 05:44:34 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:34Z" level=info msg="Running load balancer rke2-agent-load-balancer 127.0.0.1:6444 -> [<ON-PREMISE RKE2 SERVER Public IP>:25345] [default: <ON-PREMISE RKE2 SERVER Public IP>:25345]"
May 11 05:44:34 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:34Z" level=info msg="Server <ON-PREMISE RKE2 SERVER Public IP>:25345@UNCHECKED*->RECOVERING from successful dial"
May 11 05:44:34 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:34Z" level=info msg="Updated load balancer rke2-api-server-agent-load-balancer default server: <ON-PREMISE RKE2 SERVER Public IP>:6443"
May 11 05:44:34 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:34Z" level=info msg="Server <ON-PREMISE RKE2 SERVER Public IP>:6443@STANDBY*->UNCHECKED from add to load balancer rke2-api-server-agent-load-balancer"
May 11 05:44:34 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:34Z" level=info msg="Updated load balancer rke2-api-server-agent-load-balancer server addresses -> [<ON-PREMISE RKE2 SERVER Public IP>:6443] [default: <ON-PREMISE RKE2 SERVER Public IP>:6443]"
May 11 05:44:34 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:34Z" level=info msg="Running load balancer rke2-api-server-agent-load-balancer 127.0.0.1:6443 -> [<ON-PREMISE RKE2 SERVER Public IP>:6443] [default: <ON-PREMISE RKE2 SERVER Public IP>:6443]"
May 11 05:44:35 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:35Z" level=info msg="Module overlay was already loaded"
May 11 05:44:35 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:35Z" level=info msg="Module nf_conntrack was already loaded"
May 11 05:44:35 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:35Z" level=info msg="Module br_netfilter was already loaded"
May 11 05:44:35 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:35Z" level=info msg="Module iptable_nat was already loaded"
May 11 05:44:35 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:35Z" level=info msg="Module iptable_filter was already loaded"
May 11 05:44:35 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:35Z" level=warning msg="Failed to load kernel module nft-expr-counter with modprobe"
May 11 05:44:35 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:35Z" level=info msg="Runtime image index.docker.io/rancher/rke2-runtime:v1.31.8-rke2r1 bin and charts directories already exist; skipping extract"
May 11 05:44:35 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:35Z" level=info msg="Removed kube-proxy static pod manifest"
May 11 05:44:35 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:35Z" level=info msg="Logging containerd to /var/lib/rancher/rke2/agent/containerd/containerd.log"
May 11 05:44:35 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:35Z" level=info msg="Running containerd -c /var/lib/rancher/rke2/agent/etc/containerd/config.toml"
May 11 05:44:36 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:36Z" level=info msg="containerd is now running"
May 11 05:44:36 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:36Z" level=info msg="Pulling images from /var/lib/rancher/rke2/agent/images/kube-proxy-image.txt"
May 11 05:44:36 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:36Z" level=info msg="Image index.docker.io/rancher/hardened-kubernetes:v1.31.8-rke2r1-build20250423 has already been pulled"
May 11 05:44:36 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:36Z" level=info msg="Imported docker.io/rancher/hardened-kubernetes:v1.31.8-rke2r1-build20250423"
May 11 05:44:36 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:36Z" level=info msg="Imported images from /var/lib/rancher/rke2/agent/images/kube-proxy-image.txt in 4.095258ms"
May 11 05:44:36 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:36Z" level=info msg="Getting list of apiserver endpoints from server"
May 11 05:44:36 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:36Z" level=info msg="Creating rke2-cert-monitor event broadcaster"
May 11 05:44:36 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:36Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=aws-worker01 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=172.31.0.177 --node-labels=node-role=aws --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key"
May 11 05:44:36 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:36Z" level=info msg="Got apiserver addresses from supervisor: [<ON-PREMISE RKE2 SERVER Public IP>:6443]"
May 11 05:44:36 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:36Z" level=info msg="Connecting to proxy" url="wss://<ON-PREMISE RKE2 SERVER Public IP>:25345/v1-rke2/connect"
May 11 05:44:36 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:36Z" level=info msg="Remotedialer connected to proxy" url="wss://<ON-PREMISE RKE2 SERVER Public IP>:25345/v1-rke2/connect"
May 11 05:44:36 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:36Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=aws-worker01 --kubeconfig=/var/lib/rancher/rke2/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
May 11 05:44:36 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:36Z" level=info msg="Server <ON-PREMISE RKE2 SERVER Public IP>:25345@RECOVERING*->ACTIVE from successful health check"
May 11 05:44:36 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:36Z" level=info msg="Server <ON-PREMISE RKE2 SERVER Public IP>:6443@UNCHECKED*->RECOVERING from successful health check"
May 11 05:44:37 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:37Z" level=info msg="Server <ON-PREMISE RKE2 SERVER Public IP>:6443@RECOVERING*->PREFERRED from successful health check"
May 11 05:44:45 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:45Z" level=info msg="Polling for API server readiness: GET /readyz failed: Get \"https://127.0.0.1:6443/readyz?timeout=15s&verbose=\": net/http: TLS handshake timeout"
May 11 05:44:45 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:45Z" level=info msg="Server <ON-PREMISE RKE2 SERVER Public IP>:6443@PREFERRED*->FAILED from failed dial"
May 11 05:44:45 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:45Z" level=info msg="Server <ON-PREMISE RKE2 SERVER Public IP>:6443@FAILED*->RECOVERING from successful health check"
May 11 05:44:46 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:46Z" level=info msg="Server <ON-PREMISE RKE2 SERVER Public IP>:6443@RECOVERING*->FAILED from failed dial"
May 11 05:44:46 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:46Z" level=info msg="Server <ON-PREMISE RKE2 SERVER Public IP>:6443@FAILED*->RECOVERING from successful health check"
May 11 05:44:47 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:47Z" level=info msg="Server <ON-PREMISE RKE2 SERVER Public IP>:6443@RECOVERING*->PREFERRED from successful health check"
May 11 05:44:55 ip-<EC2 Public IP>.ap-northeast-2.compute.internal rke2[3119]: time="2025-05-11T05:44:55Z" level=info msg="Server <ON-PREMISE RKE2 SERVER Public IP>:6443@PREFERRED*->FAILED from failed dial"
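(A quick connectivity probe one could run from the agent node to confirm the pattern above, assuming curl is available; the /ping path is the supervisor health endpoint exposed by k3s-based distributions. The supervisor port is forwarded while raw 6443 is not:)
# Supervisor through the forwarded port – expected to answer
curl -vk --connect-timeout 5 https://<ON PREMISE PUBLIC IP>:25345/ping
# Raw apiserver port – expected to hang or time out here, matching the failed dials
curl -vk --connect-timeout 5 https://<ON PREMISE PUBLIC IP>:6443/version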
b
What does your agent /etc/rancher/rke2/config.yaml (on the agent node) look like for server:? Does it have :25443 or something else?
Also, do you have anything set for "tls-san:"?
s
• Thank you so much for your reply.
• Below are my server config, agent config, and agent log.
• I think the problem is that the agent gets the kube-apiserver IP and port from the supervisor, and the supervisor tells the agent the kube-apiserver port is 6443. (
May 11 07:59:50 ip-<EC2 PUBLIC IP>.ap-northeast-2.compute.internal rke2[9669]: time="2025-05-11T07:59:50Z" level=info msg="Got apiserver addresses from supervisor: [<ON PREMISE PUBLIC IP>:6443]"
)
• And I want to change this port via the server config, but I can't find a config option.
[server config]
node-name: control-plane
node-external-ip: <ON PREMISE PUBLIC IP>
tls-san:
  - <ON PREMISE PUBLIC IP>
  - 192.168.0.25
cni: "calico"
write-kubeconfig-mode: "0644"
[agent config]
server: https://<ON PREMISE PUBLIC IP>:25345
token: <node token>
node-name: aws-worker01
node-label:
  - node-role=aws
[agent log]
May 11 07:59:50 ip-<EC2 PUBLIC IP>.ap-northeast-2.compute.internal rke2[9669]: time="2025-05-11T07:59:50Z" level=info msg="Getting list of apiserver endpoints from server"
May 11 07:59:50 ip-<EC2 PUBLIC IP>.ap-northeast-2.compute.internal rke2[9669]: time="2025-05-11T07:59:50Z" level=info msg="Creating rke2-cert-monitor event broadcaster"
May 11 07:59:50 ip-<EC2 PUBLIC IP>.ap-northeast-2.compute.internal rke2[9669]: time="2025-05-11T07:59:50Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=aws-worker01 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=172.31.0.177 --node-labels=node-role=aws --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key"
May 11 07:59:50 ip-<EC2 PUBLIC IP>.ap-northeast-2.compute.internal rke2[9669]: time="2025-05-11T07:59:50Z" level=info msg="Got apiserver addresses from supervisor: [<ON PREMISE PUBLIC IP>:6443]"
May 09 08:46:47 ip-172-31-0-177.ap-northeast-2.compute.internal rke2[47413]: time="2025-05-09T08:46:47Z" level=info msg="Server 175.117.110.135:6443@PREFERRED*->FAILED from failed dial"
May 09 08:46:48 ip-172-31-0-177.ap-northeast-2.compute.internal rke2[47413]: time="2025-05-09T08:46:48Z" level=info msg="Server 175.117.110.135:6443@FAILED*->RECOVERING from successful health check"
May 09 08:46:49 ip-172-31-0-177.ap-northeast-2.compute.internal rke2[47413]: time="2025-05-09T08:46:49Z" level=info msg="Server 175.117.110.135:6443@RECOVERING*->FAILED from failed dial"
May 09 08:46:49 ip-172-31-0-177.ap-northeast-2.compute.internal rke2[47413]: time="2025-05-09T08:46:49Z" level=info msg="Server 175.117.110.135:6443@FAILED*->RECOVERING from successful health check"
May 09 08:46:49 ip-172-31-0-177.ap-northeast-2.compute.internal rke2[47413]: time="2025-05-09T08:46:49Z" level=info msg="Server 175.117.110.135:6443@RECOVERING*->FAILED from failed dial"
May 09 08:46:50 ip-172-31-0-177.ap-northeast-2.compute.internal rke2[47413]: time="2025-05-09T08:46:50Z" level=info msg="Server 175.117.110.135:6443@FAILED*->RECOVERING from successful health check"
May 09 08:46:51 ip-172-31-0-177.ap-northeast-2.compute.internal rke2[47413]: time="2025-05-09T08:46:51Z" level=info msg="Server 175.117.110.135:6443@RECOVERING*->PREFERRED from successful health check"
May 09 08:46:57 ip-172-31-0-177.ap-northeast-2.compute.internal rke2[47413]: time="2025-05-09T08:46:57Z" level=info msg="Server 175.117.110.135:6443@PREFERRED*->FAILED from failed dial"
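(One way to see what address:port pair the supervisor is handing out: the advertised apiserver endpoint also shows up in the default kubernetes Endpoints object, so, assuming kubectl access on the server node:)
# On the rke2-server node, with the admin kubeconfig RKE2 writes out:
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
kubectl get endpoints kubernetes -n default   # lists the advertised apiserver address:6443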
b
On the master (server) nodes, I believe you still need to put the load balancer (router) DNS or IP address in the tls-san:
node-name: control-plane
node-external-ip: <ON PREMISE PUBLIC IP>
tls-san:
  - <Load Balancer DNS or IP>
  - 192.168.0.25
cni: "calico"
write-kubeconfig-mode: "0644"
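(After restarting rke2-server, you can verify the new SAN actually made it into the serving certificate by inspecting what the supervisor port presents; this assumes OpenSSL 1.1.1+ for the -ext option:)
# Print the Subject Alternative Names on the cert served at the supervisor port:
echo | openssl s_client -connect <Load Balancer DNS or IP>:25345 2>/dev/null \
  | openssl x509 -noout -ext subjectAltName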
The agent config would also need the load balancer or router DNS/IP in the server: portion:
[agent config]

server: https://<Load balancer or Router DNS or IP>:25345
token: <node token>
node-name: aws-worker01
node-label:
  - node-role=aws
You could also add the following for more verbose logging and to use the kubeconfig from the server to test kubectl commands (/etc/rancher/rke2/rke2.yaml):
debug: true
v: 6
write-kubeconfig-mode: "0644"
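(Usage sketch of the two suggestions above: with write-kubeconfig-mode relaxed you can test kubectl directly on the server, and with debug enabled you can follow the agent's verbose output:)
# On the server node, after restarting rke2-server:
kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get nodes -o wide
# On the agent node, follow the debug-level logs:
journalctl -u rke2-agent -f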