# rke2
a
Are all of the pods up and ready?
kubectl get pods -A
g
Ahh, good point. Indeed, there is a pod cloud-controller-manager-k8s-2-master-0 restarting all the time. I had only checked whether I could deploy a pod on the cluster (which worked) 😉
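A quick sketch of how to inspect a crash-looping pod like that, assuming it lives in kube-system (as the RKE2 cloud controller manager does on a default install) and using the pod name from this thread:
```
# List kube-system pods with their restart counts
kubectl get pods -n kube-system

# Current and previous logs of the restarting pod (replace the name with yours)
kubectl logs -n kube-system cloud-controller-manager-k8s-2-master-0
kubectl logs -n kube-system cloud-controller-manager-k8s-2-master-0 --previous
```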
I get a
```
E0525 12:18:58.265324       1 node_controller.go:229] error syncing 'k8s-2-worker-02.herren5.local': failed to get instance metadata for node k8s-2-worker-02.herren5.local: address annotations not yet set, requeuing
```
in the cloud-controller-manager pod. Does this cause the taint?
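One way to check, as a rough sketch (the node name is the one from this thread): look at the taints on the node and at the annotations and addresses it has actually reported, which is what that cloud-controller message is waiting on.
```
# Show any taints currently applied to the node
kubectl describe node k8s-2-worker-02.herren5.local | grep -A 3 Taints

# Show the node's annotations and reported addresses
kubectl get node k8s-2-worker-02.herren5.local -o jsonpath='{.metadata.annotations}'
kubectl get node k8s-2-worker-02.herren5.local -o jsonpath='{.status.addresses}'
```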
Ok, I now see that the master node has
```
KubeletNotReady: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
```
I thought Canal would be installed by default?
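Canal is the default CNI for RKE2, so it should be there on a standard install. A hedged way to check whether it actually came up (the pod and chart names are assumptions based on a default RKE2 install):
```
# On a default RKE2 install the CNI runs as rke2-canal pods in kube-system
kubectl get pods -n kube-system | grep -i canal

# RKE2 deploys its bundled components through HelmChart objects; check their status too
kubectl get helmcharts -n kube-system
```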
h
never mind; I misread your original post
g
Found the error 🤦 As it was a tiny lab setup, I only gave 1 CPU to each node; the requirements call for a minimum of 2 CPUs. Now it is all good 😉
🎉 2
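For anyone hitting the same thing, a quick way to confirm how many CPUs each node actually reports to the cluster (plain kubectl; the custom-columns query just reads the node's capacity field):
```
# CPU capacity as seen by the kubelet on each node
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu

# Or check directly on a node
nproc
```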
r
As a note, default RKE2 installs don't apply the usual control-plane/etcd taints unless you specifically tell them to. I think that's just for simplicity, since people often test on a one-node cluster (where you'd need the servers to run workloads). If you install via the Rancher UI and uncheck the Worker option for your control plane nodes, it will add the taints. The takeaway is that RKE2 by default treats server nodes as worker nodes too, unless you specify otherwise (unchecking Worker in the UI, adding taints in config.yaml, adding taints with kubectl, etc.); see the sketch below.
πŸ‘ 1
βž• 1
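If you do want dedicated server nodes, here is a minimal sketch of both approaches mentioned above; the config path is the standard RKE2 location, the taint values are the usual Kubernetes/RKE2 ones, and the node name is just the one from this thread:
```
# Option 1: set the taint in the RKE2 server config before starting rke2-server
cat <<'EOF' | sudo tee -a /etc/rancher/rke2/config.yaml
node-taint:
  - "CriticalAddonsOnly=true:NoExecute"
EOF

# Option 2: taint an already-registered server node with kubectl
kubectl taint nodes k8s-2-master-0 node-role.kubernetes.io/control-plane=:NoSchedule
```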