# k3s
We’re seeing weird behavior (it may be us) when installing 1.32.4+k3s1 by removing and then re-introducing a control-plane node, one at a time. The re-introduced node does not join and seems to be creating a new cluster instead.

I then looked at the `k3s.service` file and noticed that the `server` subcommand is actually passed twice to the `ExecStart=/usr/local/bin/k3s` binary. Removing the second one, then `daemon-reload`, then restarting the k3s service does the trick: the node joins. Having `server` declared twice previously worked; I verified that 1.32.1 and 1.30.5 also have it.

Here’s the command line we use:

```shell
curl -sfL https://get.k3s.io | K3S_TOKEN=my_token sh -s - server \
  --server https://192.168.114.85:6443 \
  --node-taint CriticalAddonsOnly=true:NoExecute \
  --data-dir=/k3s-data \
  --disable=coredns \
  --disable-cloud-controller \
  --disable-kube-proxy \
  --disable=local-storage \
  --disable=metrics-server \
  --disable-network-policy \
  --disable=servicelb \
  --disable=traefik \
  --kube-apiserver-arg=audit-log-path=/var/lib/rancher/audit/audit.log \
  --kube-apiserver-arg=audit-log-maxage=30 \
  --kube-apiserver-arg=audit-log-maxbackup=280 \
  --kube-apiserver-arg=audit-log-maxsize=40 \
  --kube-apiserver-arg=audit-policy-file=/var/lib/rancher/audit/audit-policy.yaml \
  --kube-apiserver-arg=allow-privileged=true \
  --kubelet-arg=container-log-max-files=2 \
  --kubelet-arg=container-log-max-size=3Mi \
  --cluster-dns=10.241.8.10 \
  --cluster-cidr=10.240.32.0/20 \
  --service-cidr=10.241.8.0/22 \
  --embedded-registry \
  --tls-san 192.168.114.83
```

So when one uses `server` after `sh -s -`, it is added to the `k3s.service` systemd unit file.

Looking at the get.k3s.io script confirms that the subcommand passed to the k3s binary will be `server` in this case. Are we simply doing something wrong in the way we call the k3s install script?
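For reference, the workaround described above (remove the duplicated `server`, reload, restart) can be sketched as a script. This is only an illustration against a throwaway copy of the unit file; the exact layout of the real `ExecStart` line may differ, and on a real node the file lives at `/etc/systemd/system/k3s.service` and the edit would be followed by `systemctl daemon-reload && systemctl restart k3s`:

```shell
#!/bin/sh
# Sketch of the workaround on a sample unit file (illustrative only).
# The duplicated subcommand is assumed to appear as:
#   ExecStart=/usr/local/bin/k3s server server --server ...
unit=$(mktemp)
cat > "$unit" <<'EOF'
ExecStart=/usr/local/bin/k3s server server --server https://192.168.114.85:6443
EOF

# Drop the second 'server' token immediately after the first one.
sed -i 's|\(/usr/local/bin/k3s server\) server |\1 |' "$unit"

cat "$unit"   # ExecStart now has a single 'server' subcommand
rm -f "$unit"
```

On the real node, run the `sed` against `/etc/systemd/system/k3s.service` (after taking a backup), then reload systemd and restart the service so the corrected `ExecStart` takes effect.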