# harvester
r
I suspect the cause is that DHCP has a reservation for the VIP tied to only one Ethernet address. This raises the question: after creating the Harvester cluster and selecting DHCP as the method of obtaining an IP for the VIP, will all nodes, including the first node, thereafter "remember" the VIP and automatically bind it according to the leader election algorithm, or is it necessary to create static lease reservations for all nodes?
The three nodes have these harvester.config lines matching "vip":
```
TRY nuc1
  vip: ""
  viphwaddr: ""
  vipmode: ""
TRY nuc2
  vip: 192.168.30.220
  viphwaddr: 02:31:01:75:56:93
  vipmode: dhcp
TRY node3
  vip: ""
  viphwaddr: ""
  vipmode: ""
```
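A loop roughly like this reproduces that output (assuming the default /oem/harvester.config location and the default rancher SSH user):
```bash
# Grab the vip-related lines from each node's install config.
for h in nuc1 nuc2 node3; do
  echo "TRY $h"
  ssh rancher@"$h" 'sudo grep vip /oem/harvester.config'
done
```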
I noticed the VIP IP on the mgmt-br interface has /32 mask, whereas the node IP has /24. Is that expected?
b
> Where should I look in Harvester's Kube API to diagnose why none of the three nodes are binding the VIP IP on their mgmt-br interface?
If you have SSH access to a Harvester node and/or already have kubectl pointed at its kubeconfig, there should be kube-vip pods in the harvester-system namespace:
```
# kubectl get pods -n harvester-system
```
Then you can read the pod logs:
```
# kubectl logs <kube-vip-pod-name> -n harvester-system
```
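If it helps, a rough loop like this tails the recent logs from everything with kube-vip in its name (pod names are illustrative; the actual pods get a random suffix):
```bash
# Tail the last 50 log lines of every kube-vip pod in harvester-system.
for p in $(kubectl get pods -n harvester-system -o name | grep kube-vip); do
  echo "== $p =="
  kubectl logs -n harvester-system "$p" --tail=50
done
```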
> This raises the question: after creating the Harvester cluster and selecting DHCP as the method of obtaining an IP for the VIP, will all nodes, including the first node, thereafter "remember" the VIP and automatically bind it according to the leader election algorithm
Yes.
> or is it necessary to create static lease reservations for all nodes?
You need to create static DHCP leases for each node address and then one static lease for the VIP (02:31:01:75:56:93 -> 192.168.30.220).
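For example, with dnsmasq as the DHCP server that would look roughly like the following sketch (only the VIP entry uses values from your output; the node MACs and IPs are placeholders):
```bash
# Sketch: append static leases for the three nodes plus the VIP, then restart dnsmasq.
cat >> /etc/dnsmasq.d/harvester-static-leases.conf <<'EOF'
dhcp-host=aa:bb:cc:dd:ee:01,nuc1,192.168.30.231
dhcp-host=aa:bb:cc:dd:ee:02,nuc2,192.168.30.232
dhcp-host=aa:bb:cc:dd:ee:03,node3,192.168.30.233
dhcp-host=02:31:01:75:56:93,harvester-vip,192.168.30.220
EOF
systemctl restart dnsmasq
```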
> I noticed the VIP IP on the mgmt-br interface has /32 mask, whereas the node IP has /24. Is that expected?
Yeah
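On whichever node currently holds the VIP you should see both at once, e.g.:
```bash
# Expect the node address with /24 (from DHCP) plus the VIP as a /32 secondary.
ip -4 addr show dev mgmt-br
```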
r
I don't quite understand why the VIP has a different Ethernet address. It's a secondary IP on the mgmt-br interface, which has the Ethernet address of its enslaved device. I can confirm that something is leasing an IP with the specified VIP Ethernet address, but the switch sees ARP only for the nodes' respective Ethernet addresses, not the VIP's Ethernet address. What role does that VIP Ethernet address play if not for migrating the VIP at L2? I'm using the default active-backup bond mode.
b
> What role does that VIP Ethernet address play if not for migrating the VIP at L2?
Migrating the VIP at L2 is the main point of it having its own MAC address, as I understand it.
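If you want to watch it happen, sniffing ARP for the VIP on one of the nodes during a failover should show gratuitous ARP announcing the VIP from the new leader:
```bash
# Watch ARP for the VIP on the management bridge (run as root on any node).
tcpdump -eni mgmt-br arp and host 192.168.30.220
```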
r
The static reservation makes sense now that I think about it. It wouldn't be possible to orchestrate the VIP via DHCP if it didn't have its own unique MAC address.
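A quick way to double-check that lease is to grep the DHCP server's lease table for the VIP MAC (dnsmasq default path shown; other servers keep their leases elsewhere):
```bash
# Look up the VIP's MAC in the dnsmasq lease database.
grep -i '02:31:01:75:56:93' /var/lib/misc/dnsmasq.leases
```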