# rke2
I recently configured a new Harvester node and tried to deploy an RKE2 downstream cluster, but the cluster remains in the "waiting for cluster agent to connect" state. I have verified connectivity between the VM and the Rancher server (URL) from which the RKE2 cluster is deployed (a rough sketch of that check is included after the etcd log below). I do see an error in the etcd pod:
```
kubectl logs etcd-harbor1-pool1-114b4517-k4lh7  -n kube-system | grep rejected
{"level":"warn","ts":"2022-08-23T04:08:54.054Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:51538","server-name":"","error":"EOF"}
```
A number of pods are stuck in the Pending state:
```
NAMESPACE         NAME                                                        READY   STATUS      RESTARTS   AGE
calico-system     pod/calico-kube-controllers-677d488b5f-stpbg                0/1     Pending     0          55m
calico-system     pod/calico-node-7hntn                                       0/1     Running     0          55m
calico-system     pod/calico-typha-66d8ff6684-rnlx6                           0/1     Pending     0          55m
cattle-system     pod/cattle-cluster-agent-8df9b48fd-2dkdm                    0/1     Pending     0          56m
kube-system       pod/etcd-harbor1-pool1-114b4517-k4lh7                       1/1     Running     0          56m
kube-system       pod/harvester-cloud-provider-748f954ffb-ml8bd               1/1     Running     0          56m
kube-system       pod/harvester-csi-driver-controllers-779c557d47-8h7qb       0/3     Pending     0          56m
kube-system       pod/harvester-csi-driver-controllers-779c557d47-gvb9d       0/3     Pending     0          56m
kube-system       pod/harvester-csi-driver-controllers-779c557d47-rjvwt       0/3     Pending     0          56m
kube-system       pod/helm-install-harvester-cloud-provider-z87qt             0/1     Completed   0          56m
kube-system       pod/helm-install-harvester-csi-driver-m9pkj                 0/1     Completed   0          56m
kube-system       pod/helm-install-rke2-calico-crd-scvcm                      0/1     Completed   0          56m
kube-system       pod/helm-install-rke2-calico-kmppr                          0/1     Completed   1          56m
kube-system       pod/helm-install-rke2-coredns-plrx2                         0/1     Completed   0          56m
kube-system       pod/helm-install-rke2-ingress-nginx-dvrd4                   0/1     Pending     0          56m
kube-system       pod/helm-install-rke2-metrics-server-bwvwv                  0/1     Pending     0          56m
kube-system       pod/kube-apiserver-harbor1-pool1-114b4517-k4lh7             1/1     Running     0          56m
kube-system       pod/kube-controller-manager-harbor1-pool1-114b4517-k4lh7    1/1     Running     0          55m
kube-system       pod/kube-proxy-harbor1-pool1-114b4517-k4lh7                 1/1     Running     0          56m
kube-system       pod/kube-scheduler-harbor1-pool1-114b4517-k4lh7             1/1     Running     0          55m
kube-system       pod/rke2-coredns-rke2-coredns-76cb76d66-vl2wg               0/1     Pending     0          56m
kube-system       pod/rke2-coredns-rke2-coredns-autoscaler-58867f8fc5-whhwc   0/1     Pending     0          56m
tigera-operator   pod/tigera-operator-6457fc8c7c-s97d9                        1/1     Running     0          56m
```
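The next thing I can check is why the scheduler is holding these pods. A sketch of those checks is below; the pod name is taken from the listing above, the node name is only inferred from the static pod names, and I have not captured the output yet:
```
# Show the scheduler events for one of the Pending pods (events usually name the
# taint or resource reason blocking scheduling).
kubectl describe pod cattle-cluster-agent-8df9b48fd-2dkdm -n cattle-system

# Inspect the node's conditions and taints; an uninitialized CNI commonly leaves
# node.kubernetes.io/not-ready or NetworkUnavailable set, which keeps pods Pending.
kubectl describe node harbor1-pool1-114b4517-k4lh7

# Recent events across all namespaces, newest last.
kubectl get events -A --sort-by=.lastTimestamp | tail -20
```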
Any suggestions on how to debug this further, or on the possible root cause?