
wonderful-judge-81095

05/08/2023, 9:46 PM
Hello, I'm new to Rancher as an admin, and the Rancher version is very old (2.3.1). I had to rebuild the cluster from scratch and I'm using configuration settings similar to another working cluster, minus the cluster name and workload names. I have 5 physical servers running CentOS 7: two are worker nodes only and three are etcd/control plane. All "system" nodes are active. The problem is with the default workload of 10 Elasticsearch pods, 5 pods on one physical server and 5 pods on the other. The error I get from the pod events is "0/5 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 4 node(s) didn't match node selector." I'm at a loss for what is wrong; I assume something with the taints? Any advice appreciated.
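For reference, both halves of that error can usually be narrowed down with something like the following (a rough sketch; the namespace and pod name in angle brackets are placeholders, not names from this cluster):

# List every node with its labels, to compare against the pod's nodeSelector
kubectl get nodes --show-labels

# Print the nodeSelector and tolerations of one of the pending Elasticsearch pods
kubectl -n <namespace> get pod <elasticsearch-pod> -o jsonpath='{.spec.nodeSelector}{"\n"}{.spec.tolerations}{"\n"}'

If the nodeSelector asks for a label that no schedulable node carries, you get "didn't match node selector"; if the only matching nodes are tainted and the pod has no matching toleration, you get "had taints that the pod didn't tolerate".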
I'm not sure if this is normal - tolerations: node.kubernetes.io/not-ready:NoExecute for 300s, node.kubernetes.io/unreachable:NoExecute for 300s
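Those two tolerations are normal: the DefaultTolerationSeconds admission plugin adds them to every pod so it gets evicted 300s after its node goes NotReady or unreachable, and they have nothing to do with the Pending/scheduling error. A quick sanity check (pod and namespace names are placeholders) is that any other running pod shows the same pair:

# Any healthy pod in the cluster should list the same not-ready/unreachable tolerations
kubectl -n <namespace> get pod <any-running-pod> -o jsonpath='{.spec.tolerations}{"\n"}'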
More info:

kubectl get nodes -o=custom-columns=NodeName:.metadata.name,TaintKey:.spec.taints[*].key,TaintValue:.spec.taints[*].value,TaintEffect:.spec.taints[*].effect

NodeName          TaintKey                                                             TaintValue   TaintEffect
ca-capture01      <none>                                                               <none>       <none>
ca-capture02      <none>                                                               <none>       <none>
ca-capture03      node-role.kubernetes.io/controlplane,node-role.kubernetes.io/etcd   true,true    NoSchedule,NoExecute
ca819-elastic01   node-role.kubernetes.io/controlplane,node-role.kubernetes.io/etcd   true,true    NoSchedule,NoExecute
ca819-elastic02   node-role.kubernetes.io/controlplane,node-role.kubernetes.io/etcd   true,true    NoSchedule,NoExecute
Solved it by removing the taints from the control plane/etcd nodes.
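For anyone hitting the same thing, removing those taints looks roughly like this (node names taken from the output above; the trailing "-" deletes the taint):

# Remove the controlplane/etcd taints from each tainted node
for node in ca-capture03 ca819-elastic01 ca819-elastic02; do
  kubectl taint nodes "$node" node-role.kubernetes.io/controlplane:NoSchedule-
  kubectl taint nodes "$node" node-role.kubernetes.io/etcd:NoExecute-
done

Keep in mind this also allows ordinary workloads to schedule onto the etcd/control plane nodes; the alternative is to leave the taints in place and give the Elasticsearch workload matching tolerations and a node selector pointing at the intended nodes.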