# harvester
Hi, hoping someone can point me to the right configuration I've likely messed up myself. Any time the kube-vip leader is a different node than the first installed node (10.21.10.10), I get certificate issues that prevent new deployments and other activities within an RKE2 guest cluster.
```
Error updating load balancer with new hosts map[slack-etcd-0c23a992-8xfxv:{} slack-etcd-0c23a992-jq7nb:{} slack-etcd-0c23a992-wxnbn:{} slack-frigate-fe78c213-qhdj2:{} slack-worker-3f543955-749kd:{} slack-worker-3f543955-krqw5:{} slack-zigbee-f6788c22-5r2dm:{}]: update load balancer failed, error: create or update lb slack/slack-slackpenny-mongodb-service-67363435 failed, error: Get "https://10.21.10.10:6443/apis/loadbalancer.harvesterhci.io/v1beta1/namespaces/slack/loadbalancers/slack-slackpenny-mongodb-service-67363435": x509: certificate is valid for 127.0.0.1, ::1, 10.21.10.12, 10.53.0.1, not 10.21.10.10
```
The fix is to kill the kube-vip daemon on whichever other master node it happens to be running on until it ends up back on hv01 (10.21.10.10). The error above occurred while hv02 (10.21.10.12) held kube-vip. Since I don't see anyone else reporting this, I suspect it's 100% user error. I had experienced some certificate issues during previous upgrades, and I suspect that in my own attempts to correct them I misconfigured something. If I had to guess, it was something I changed in the /oem files or in an /etc/rancher file. On the off chance someone here recognizes this or can point me towards a clue, I'd be grateful 🙂 I am running Harvester 1.2.1, but again, each upgrade I've had some hiccups and often do my own triage to work things out (part of this is learning k8s through pain).
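In case it helps with diagnosis, here's a rough sketch of one way to compare which SANs each node's kube-apiserver certificate actually carries. The IP addresses are just the ones from this post (swap in your own), and it assumes the `cryptography` package is installed:

```python
import socket
import ssl

from cryptography import x509
from cryptography.x509.oid import ExtensionOID


def serving_cert_sans(host: str, port: int = 6443):
    """Fetch the TLS certificate presented at host:port and return its SAN entries."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # we only want to read the cert, not trust it
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der)
    san = cert.extensions.get_extension_for_oid(
        ExtensionOID.SUBJECT_ALTERNATIVE_NAME
    ).value
    return {
        "ips": [str(ip) for ip in san.get_values_for_type(x509.IPAddress)],
        "dns": san.get_values_for_type(x509.DNSName),
    }


# Addresses from this post (hv01/VIP and hv02); replace with your own.
for addr in ["10.21.10.10", "10.21.10.12"]:
    print(addr, serving_cert_sans(addr))
```

If hv01's cert lists 10.21.10.10 but hv02's doesn't (as the error above suggests), comparing the `tls-san` entries in each node's /etc/rancher/rke2/config.yaml (or whatever /oem file writes them) would be one place to look for drift, since that's where RKE2 picks up extra SANs for the apiserver serving cert.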