stocky-dentist-56601
01/03/2025, 11:50 PM
Jan 03 17:46:09 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:09-06:00" level=info msg="Defragmenting etcd database"
Jan 03 17:46:09 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:09-06:00" level=info msg="etcd data store connection OK"
Jan 03 17:46:09 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:09-06:00" level=info msg="Saving cluster bootstrap data to datastore"
Jan 03 17:46:09 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:09-06:00" level=info msg="Waiting for API server to become available"
Jan 03 17:46:09 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:09-06:00" level=warning msg="Bootstrap key already exists"
Jan 03 17:46:12 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:12-06:00" level=info msg="Annotations and labels have already set on node: nuc-k8s-1"
Jan 03 17:46:12 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:12-06:00" level=info msg="Kube API server is now running"
Jan 03 17:46:12 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:12-06:00" level=info msg="Applying Cluster Role Bindings"
Jan 03 17:46:12 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:12-06:00" level=info msg="Waiting for cloud-controller-manager privileges to become available"
Jan 03 17:46:12 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:12-06:00" level=info msg="Watching for delete of nuc-k8s-1 Node object"
Jan 03 17:46:12 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:12-06:00" level=info msg="Creating rke2-supervisor event broadcaster"
Jan 03 17:46:12 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:12-06:00" level=info msg="Applying CRD addons.k3s.cattle.io"
Jan 03 17:46:13 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:13-06:00" level=info msg="Applying CRD etcdsnapshotfiles.k3s.cattle.io"
Jan 03 17:46:13 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:13-06:00" level=info msg="Applying CRD helmcharts.helm.cattle.io"
Jan 03 17:46:13 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:13-06:00" level=info msg="Applying CRD helmchartconfigs.helm.cattle.io"
Jan 03 17:46:13 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:13-06:00" level=info msg="Handling backend connection request [nuc-k8s-2]"
Jan 03 17:46:13 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:13-06:00" level=info msg="Handling backend connection request [node-server-camera-nvr]"
Jan 03 17:46:13 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:13-06:00" level=info msg="Handling backend connection request [node-nuc-hassio]"
Jan 03 17:46:13 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:13-06:00" level=info msg="Handling backend connection request [node-nuc-1]"
Jan 03 17:46:13 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:13-06:00" level=info msg="Stopped tunnel to 127.0.0.1:9345"
Jan 03 17:46:13 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:13-06:00" level=info msg="Connecting to proxy" url="wss://192.168.2.23:9345/v1-rke2/connect"
Jan 03 17:46:13 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:13-06:00" level=info msg="Connecting to proxy" url="wss://192.168.2.7:9345/v1-rke2/connect"
Jan 03 17:46:13 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:13-06:00" level=info msg="Proxy done" err="context canceled" url="wss://127.0.0.1:9345/v1-rke2/connect"
Jan 03 17:46:13 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:13-06:00" level=info msg="Connecting to proxy" url="wss://192.168.2.5:9345/v1-rke2/connect"
Jan 03 17:46:13 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:13-06:00" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
Jan 03 17:46:13 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:13-06:00" level=info msg="Handling backend connection request [nuc-k8s-1]"
Jan 03 17:46:13 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:13-06:00" level=info msg="Remotedialer connected to proxy" url="wss://192.168.2.5:9345/v1-rke2/connect"
Jan 03 17:46:13 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:13-06:00" level=info msg="Remotedialer connected to proxy" url="wss://192.168.2.23:9345/v1-rke2/connect"
Jan 03 17:46:13 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:13-06:00" level=info msg="Tunnel authorizer set Kubelet Port 10250"
Jan 03 17:46:13 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:13-06:00" level=info msg="Remotedialer connected to proxy" url="wss://192.168.2.7:9345/v1-rke2/connect"
Jan 03 17:46:14 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:14-06:00" level=info msg="Cluster Role Bindings applied successfully"
Jan 03 17:46:24 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:24-06:00" level=info msg="Pod for etcd not synced (waiting for termination of old pod sandbox), retrying"
Jan 03 17:46:44 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:44-06:00" level=info msg="Pod for etcd is synced"
Jan 03 17:46:44 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:44-06:00" level=info msg="Pod for kube-apiserver not synced (waiting for termination of old pod sandbox), retrying"
Jan 03 17:46:46 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:46-06:00" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
Jan 03 17:46:46 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:46-06:00" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
Jan 03 17:46:46 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:46-06:00" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
Jan 03 17:46:46 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:46-06:00" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
Jan 03 17:46:48 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:48-06:00" level=info msg="Handling backend connection request [nuc-k8s-2]"
Jan 03 17:46:48 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:48-06:00" level=info msg="Handling backend connection request [node-server-camera-nvr]"
Jan 03 17:46:48 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:48-06:00" level=info msg="Handling backend connection request [node-nuc-hassio]"
Jan 03 17:46:48 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:46:48-06:00" level=info msg="Handling backend connection request [node-nuc-1]"
Jan 03 17:47:04 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:47:04-06:00" level=info msg="Pod for etcd is synced"
Jan 03 17:47:04 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:47:04-06:00" level=info msg="Pod for kube-apiserver not synced (waiting for termination of old pod sandbox), retrying"
Jan 03 17:47:24 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:47:24-06:00" level=info msg="Pod for etcd is synced"
Jan 03 17:47:24 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:47:24-06:00" level=info msg="Pod for kube-apiserver is synced"
Jan 03 17:47:24 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:47:24-06:00" level=info msg="ETCD server is now running"
Jan 03 17:47:24 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:47:24-06:00" level=info msg="rke2 is up and running"
Jan 03 17:47:24 nuc-k8s-1 systemd[1]: Started Rancher Kubernetes Engine v2 (server).
░░ Subject: A start job for unit rke2-server.service has finished successfully
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ A start job for unit rke2-server.service has finished successfully.
░░
░░ The job identifier is 152.
Jan 03 17:47:24 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:47:24-06:00" level=info msg="Failed to get existing traefik HelmChart" error="helmcharts.helm.cattle.io \"traefik\" not found"
Jan 03 17:47:24 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:47:24-06:00" level=info msg="Reconciling ETCDSnapshotFile resources"
Jan 03 17:47:24 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:47:24-06:00" level=info msg="Starting dynamiclistener CN filter node controller"
Jan 03 17:47:24 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:47:24-06:00" level=info msg="Tunnel server egress proxy mode: agent"
Jan 03 17:47:24 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:47:24-06:00" level=info msg="Starting managed etcd node metadata controller"
Jan 03 17:47:24 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:47:24-06:00" level=info msg="Reconciliation of ETCDSnapshotFile resources complete"
Jan 03 17:47:24 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:47:24-06:00" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Jan 03 17:47:24 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:47:24-06:00" level=info msg="Creating deploy event broadcaster"
Jan 03 17:47:24 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:47:24-06:00" level=info msg="Starting /v1, Kind=Node controller"
Jan 03 17:47:24 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:47:24-06:00" level=info msg="Cluster dns configmap already exists"
Jan 03 17:47:24 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:47:24-06:00" level=info msg="Labels and annotations have been set successfully on node: nuc-k8s-1"
Jan 03 17:47:25 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:47:25-06:00" level=info msg="Starting /v1, Kind=Secret controller"
Jan 03 17:47:25 nuc-k8s-1 rke2[1100]: time="2025-01-03T17:47:25-06:00" level=info msg="Updating TLS secret for kube-system/rke2-serving (count: 14): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.2.23:192.168.2.23 listener.cattle.io/cn-192.168.2.5:192.168.2.5 listener.cattle.io/cn-192.168.2.7:192.168.2.7 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/cn-node-nuc-hassio:node-nuc-hassio listener.cattle.io/cn-nuc-k8s-1:nuc-k8s-1 listener.cattle.io/cn-nuc-k8s-2:nuc-k8s-2 listener.cattle.io/fingerprint:SHA1=57791EF61FBBDF00D66F1D5CD102A81FBCB7B0FB]"
Jan 03 17:47:39 nuc-k8s-1 rke2[1100]: 2025/01/03 17:47:39 ERROR: [transport] Client received GoAway with error code ENHANCE_YOUR_CALM and debug data equal to ASCII "too_many_pings".