# general
Gonna need more info to help with this. Things like:
• What kind of cluster? K3s? RKE?
• What docs are you following, and what step are you on?
• Do you have output logs? (attach them?) journald output?
• What OS are you using?
• Where is that reference from? The nodes? Journals? Logs? Rancher? YAML?
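If it helps, here's roughly how I'd pull that info off a node. A sketch assuming the standard rancher-system-agent and rke2-server systemd units (use rke2-agent instead on worker-only nodes):

```bash
# Service logs from journald (attach these files)
journalctl -u rancher-system-agent --no-pager --since "1 hour ago" > rancher-system-agent.log
journalctl -u rke2-server --no-pager --since "1 hour ago" > rke2-server.log

# Version / OS details
rke2 --version
cat /etc/os-release
```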
Thank you for the quick response!
RKE2: v1.27.16+rke2r2
Container Network: cilium
OS: Red Hat Enterprise Linux release 8.10

Provisioning log:
[INFO ] waiting for at least one control plane, etcd, and worker node to be registered
[INFO ] configuring bootstrap node(s) custom-a8b8e341dcb2: waiting for agent to check in and apply initial plan
[INFO ] configuring bootstrap node(s) custom-a8b8e341dcb2: waiting for probes: etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet
[INFO ] configuring bootstrap node(s) custom-a8b8e341dcb2: waiting for probes: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
[INFO ] configuring bootstrap node(s) custom-a8b8e341dcb2: waiting for probes: kube-apiserver, kube-controller-manager, kube-scheduler
[INFO ] configuring bootstrap node(s) custom-a8b8e341dcb2: waiting for probes: kube-controller-manager, kube-scheduler
[INFO ] configuring bootstrap node(s) custom-a8b8e341dcb2: waiting for cluster agent to connect
[INFO ] non-ready bootstrap machine(s) custom-a8b8e341dcb2 and join url to be available on bootstrap node

Journals:
Nov 06 11:14:55 mynode1 rancher-system-agent[221890]: W1106 11:14:55.524142 221890 reflector.go:443] pkg/mod/github.com/rancher/client-go@v1.24.0-rancher1/tools/cache/reflector.go:168: watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 123; INTERNAL_ERROR; received from peer") has prevented the request from succeeding
Nov 06 11:15:06 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.BDIAaI.mount: Succeeded.
Nov 06 11:15:58 mynode1 rancher-system-agent[221890]: W1106 11:15:58.474897 221890 reflector.go:443] pkg/mod/github.com/rancher/client-go@v1.24.0-rancher1/tools/cache/reflector.go:168: watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 127; INTERNAL_ERROR; received from peer") has prevented the request from succeeding
Nov 06 11:16:16 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.afDIiJ.mount: Succeeded.
Nov 06 11:16:21 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.hjbmkJ.mount: Succeeded.
Nov 06 11:16:26 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.MOiHoJ.mount: Succeeded.
Nov 06 11:17:02 mynode1 rancher-system-agent[221890]: W1106 11:17:02.939911 221890 reflector.go:443] pkg/mod/github.com/rancher/client-go@v1.24.0-rancher1/tools/cache/reflector.go:168: watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 131; INTERNAL_ERROR; received from peer") has prevented the request from succeeding
Nov 06 11:17:06 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.LaAneK.mount: Succeeded.
Nov 06 11:17:36 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.IKIpGL.mount: Succeeded.
Nov 06 11:18:12 mynode1 rancher-system-agent[221890]: W1106 11:18:12.388461 221890 reflector.go:443] pkg/mod/github.com/rancher/client-go@v1.24.0-rancher1/tools/cache/reflector.go:168: watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 135; INTERNAL_ERROR; received from peer") has prevented the request from succeeding
Nov 06 11:18:16 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.cgcelL.mount: Succeeded.
Nov 06 11:18:56 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.DCCEdM.mount: Succeeded.
Nov 06 11:19:26 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.jhJHEN.mount: Succeeded.
Nov 06 11:20:01 mynode1 systemd[1]: Starting system activity accounting tool...
Nov 06 11:20:01 mynode1 systemd[1]: sysstat-collect.service: Succeeded.
Nov 06 11:20:01 mynode1 systemd[1]: Started system activity accounting tool.
Nov 06 11:20:16 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.CaMOpN.mount: Succeeded.
Nov 06 11:20:28 mynode1 rancher-system-agent[221890]: W1106 11:20:28.794069 221890 reflector.go:443] pkg/mod/github.com/rancher/client-go@v1.24.0-rancher1/tools/cache/reflector.go:168: watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 139; INTERNAL_ERROR; received from peer") has prevented the request from succeeding
Nov 06 11:20:46 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.djpfdO.mount: Succeeded.
Nov 06 11:20:56 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.FpiogO.mount: Succeeded.
Nov 06 11:21:06 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.LdbjnO.mount: Succeeded.
Nov 06 11:21:26 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.mLhDIP.mount: Succeeded.
Nov 06 11:21:36 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.pafFfP.mount: Succeeded.
Nov 06 11:21:56 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.AlDajP.mount: Succeeded.
Nov 06 11:22:29 mynode1 rancher-system-agent[221890]: W1106 11:22:29.588087 221890 reflector.go:443] pkg/mod/github.com/rancher/client-go@v1.24.0-rancher1/tools/cache/reflector.go:168: watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 143; INTERNAL_ERROR; received from peer") has prevented the request from succeeding
Nov 06 11:22:36 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.alfDaa.mount: Succeeded.
Nov 06 11:22:51 mynode1 rancher-system-agent[221890]: time="2024-11-06T11:22:51-06:00" level=info msg="[Applyinator] No image provided, creating empty working directory /var/lib/rancher/agent/work/20241106-112251/99f5f3ce6c833e3e8ac17f35f7ab88463307edf69f060d79cfb34ec9362f20b7_0"
Nov 06 11:22:51 mynode1 rancher-system-agent[221890]: time="2024-11-06T11:22:51-06:00" level=info msg="[Applyinator] Running command: sh [-c rke2 etcd-snapshot list --etcd-s3=false 2>/dev/null]"
Nov 06 11:22:51 mynode1 rancher-system-agent[221890]: time="2024-11-06T11:22:51-06:00" level=info msg="[99f5f3ce6c833e3e8ac17f35f7ab88463307edf69f060d79cfb34ec9362f20b7_0:stdout] Name Location Size Created"
Nov 06 11:22:51 mynode1 rancher-system-agent[221890]: time="2024-11-06T11:22:51-06:00" level=info msg="[Applyinator] Command sh [-c rke2 etcd-snapshot list --etcd-s3=false 2>/dev/null] finished with err: <nil> and exit code: 0"
Nov 06 11:22:51 mynode1 rancher-system-agent[221890]: time="2024-11-06T11:22:51-06:00" level=info msg="[K8s] updated plan secret fleet-default/custom-a8b8e341dcb2-machine-plan with feedback"
Nov 06 11:22:56 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.KAAMla.mount: Succeeded.
Nov 06 11:22:56 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.fApIna.mount: Succeeded.
Nov 06 11:24:31 mynode1 rancher-system-agent[221890]: W1106 11:24:31.690918 221890 reflector.go:443] pkg/mod/github.com/rancher/client-go@v1.24.0-rancher1/tools/cache/reflector.go:168: watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 147; INTERNAL_ERROR; received from peer") has prevented the request from succeeding
Nov 06 11:24:36 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.AmBIfc.mount: Succeeded.
Nov 06 11:24:41 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.PBMIjc.mount: Succeeded.
Nov 06 11:25:26 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.glKbdd.mount: Succeeded.
Nov 06 11:25:33 mynode1 rancher-system-agent[221890]: W1106 11:25:33.229070 221890 reflector.go:443] pkg/mod/github.com/rancher/client-go@v1.24.0-rancher1/tools/cache/reflector.go:168: watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 153; INTERNAL_ERROR; received from peer") has prevented the request from succeeding
Nov 06 11:25:46 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.fJMOod.mount: Succeeded.
Nov 06 11:26:35 mynode1 rancher-system-agent[221890]: W1106 11:26:35.354222 221890 reflector.go:443] pkg/mod/github.com/rancher/client-go@v1.24.0-rancher1/tools/cache/reflector.go:168: watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 157; INTERNAL_ERROR; received from peer") has prevented the request from succeeding
Nov 06 11:26:46 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.lIIcAf.mount: Succeeded.
Nov 06 11:27:06 mynode1 systemd[1]: run-containerd-runc-k8s.io-107109908e65d1a3e5666cca4be09bcbc38f9eb6774c0e04406ec5e859b49590-runc.adLeLf.mount: Succeeded.
Any idea? Really appreciate it!
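Since the provisioning log is stuck at "waiting for cluster agent to connect", it may be worth checking whether cattle-cluster-agent is actually running on the downstream cluster. A sketch, assuming the default RKE2 paths and the usual cattle-system namespace and app=cattle-cluster-agent label (verify the label on your install):

```bash
# On the RKE2 bootstrap node, using the kubeconfig RKE2 writes locally
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
export PATH="$PATH:/var/lib/rancher/rke2/bin"

# Is the cluster agent scheduled and healthy?
kubectl get pods -n cattle-system
kubectl logs -n cattle-system -l app=cattle-cluster-agent --tail=100
```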
Found the fix here: “The workaround that worked for us was to set the cluster.x-k8s.io/cluster-name on the kubeconfig secret in the upstream cluster.” https://github.com/rancher/rancher/issues/44939
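For anyone else landing here, applying that workaround on the upstream (Rancher local) cluster looks roughly like this. The secret name is an assumption — Rancher's provisioning kubeconfig secrets typically sit in fleet-default as <cluster-name>-kubeconfig — so confirm the actual name first; cluster.x-k8s.io/cluster-name is applied as a label here, which is how Cluster API normally consumes it:

```bash
# Find the downstream cluster's kubeconfig secret in the upstream cluster
kubectl get secrets -n fleet-default | grep kubeconfig

# Label it with the CAPI cluster name (replace <cluster-name> with your cluster's name)
kubectl label secret <cluster-name>-kubeconfig -n fleet-default \
  cluster.x-k8s.io/cluster-name=<cluster-name>
```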