bumpy-glass-83711
05/21/2023, 6:28 PM
connection error: Get https://74.220.22.141:6443/api?timeout=32s: tls: failed to verify certificate: x509: certificate is valid for 10.43.0.1, 127.0.0.1, 192.168.1.3, 192.168.1.4, 192.168.1.5, 192.168.1.6, ::1, not 74.220.22.141
Unable to connect to the server: tls: failed to verify certificate: x509: certificate is valid for 10.43.0.1, 127.0.0.1, 192.168.1.3, 192.168.1.4, 192.168.1.5, 192.168.1.6, ::1, not 74.220.22.141
These nodes are VMs on a cloud platform, so the public IPs are dynamic; here 74.220.22.141 is the public IP of the load balancer.
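The error itself is plain SAN matching: the client compares the address it dialed (74.220.22.141) against the certificate's subjectAltName list, which only contains the cluster and private IPs. The same failure can be reproduced locally with a throwaway self-signed certificate (the /tmp paths and the shortened SAN list are illustrative, not from the cluster):

```shell
# Reproduce the SAN check locally with a throwaway self-signed cert whose
# subjectAltName mimics the k3s serving cert (paths under /tmp are arbitrary).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=kubernetes" \
  -addext "subjectAltName=IP:10.43.0.1,IP:127.0.0.1,IP:192.168.1.3" \
  -keyout /tmp/k3s-test.key -out /tmp/k3s-test.crt 2>/dev/null

# A private IP that is in the SAN list verifies fine:
openssl verify -CAfile /tmp/k3s-test.crt -verify_ip 192.168.1.3 /tmp/k3s-test.crt

# The LB public IP is not in the list, so this fails with an IP address
# mismatch -- the same condition kubectl is reporting:
openssl verify -CAfile /tmp/k3s-test.crt -verify_ip 74.220.22.141 /tmp/k3s-test.crt || true
```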
The error goes away when I run:
kubectl cluster-info --insecure-skip-tls-verify
and after that all kubectl commands work normally.
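Note that --insecure-skip-tls-verify only hides the problem: it disables certificate verification entirely. A sketch of the usual fix (not taken from this thread, assuming the addresses above): pass every address clients will dial as an extra --tls-san when installing the server, ideally a stable DNS name for the LB, since the cloud public IP is dynamic:

```shell
# Sketch: install the k3s server with the public address as an extra SAN.
# 74.220.22.141 is the LB public IP from the error above; a stable DNS name
# pointing at the LB is safer, because the cloud public IP can change.
curl -sfL https://get.k3s.io | sh -s - server \
  --node-taint CriticalAddonsOnly=true:NoExecute \
  --tls-san 192.168.1.8 \
  --tls-san 74.220.22.141
```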
Here is the shell command I used to create the control plane with the known load balancer private IP:
curl -sfL https://get.k3s.io | sh -s - server --node-taint CriticalAddonsOnly=true:NoExecute --tls-san 192.168.1.8
Here 192.168.1.8 is the private IP of the LB.
To add additional control-plane nodes I used:
curl -sfL https://get.k3s.io | sh -s - server --token=$SECRET --node-taint CriticalAddonsOnly=true:NoExecute --tls-san 192.168.1.8
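A side note on the join command, based on the k3s documentation rather than this thread: additional server nodes normally also need --server pointing at an existing server (or the LB), otherwise the installer starts a brand-new cluster instead of joining. Something like:

```shell
# Sketch: join an extra control-plane node through the LB's private IP
# (192.168.1.8 and $SECRET are the values from the commands above).
curl -sfL https://get.k3s.io | sh -s - server \
  --server https://192.168.1.8:6443 \
  --token=$SECRET \
  --node-taint CriticalAddonsOnly=true:NoExecute \
  --tls-san 192.168.1.8
```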
Here is the HA proxy config
cat <<EOF > /etc/haproxy/haproxy.cfg
frontend kubernetes-frontend
bind *:6443
mode tcp
option tcplog
timeout client 10s
default_backend kubernetes-backend
backend kubernetes-backend
timeout connect 10s
timeout server 10s
mode tcp
option tcp-check
balance roundrobin
server k3sserver1 <privateip>:6443
server k3sserver2 <privateip>:6443
EOF
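Before putting the config into service it can be syntax-checked (assuming haproxy is installed and the path matches the heredoc target above):

```shell
# Validate the config file written above, then reload the service
haproxy -c -f /etc/haproxy/haproxy.cfg && systemctl reload haproxy
```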
alert-policeman-61846
05/23/2023, 7:27 AM
bumpy-glass-83711
05/23/2023, 7:46 AM
alert-policeman-61846
05/23/2023, 2:54 PM