brave-garden-49376
08/18/2025, 4:31 PM
bland-article-62755
08/18/2025, 4:46 PM
brave-garden-49376
08/18/2025, 5:06 PM
bland-article-62755
08/18/2025, 5:09 PM
bland-article-62755
08/18/2025, 5:11 PM
Assume the Harvester nodes' management interfaces are attached to the VLAN 1 network. The user creates an additional cluster network called provider (which implies using a secondary network interface on the nodes, separate from the management one) and then creates three VM networks, net-1, net-100, and net-200, with VLAN IDs 1, 100, and 200, respectively. The net-1 and net-100 networks are associated with the default mgmt cluster network; the remaining one, net-200, is associated with the provider cluster network. The user then creates three LB IP pools for the three VM networks, called pool-1, pool-100, and pool-200. The nine combinations below cover each pairing; a quick check for the failing ones is sketched after the list.
Case 1: The VM is attached to the net-1 network, and the LB is created from the pool-1 IP pool. This configuration is straightforward and works out of the box.
Case 2: The VM is attached to the net-1 network, and the LB is created from the pool-100 IP pool. This configuration doesn't work because the LB IP address is currently always bound to the mgmt-br interface. Since the management interface is attached to the VLAN 1 network, VLAN 100 traffic never reaches the mgmt-br interface.
Case 3: The VM is attached to the net-1 network, and the LB is created from the pool-200 IP pool. This configuration doesn't work, for the same reason as Case 2.
Case 4: The VM is attached to the net-100 network, and the LB is created from the pool-1 IP pool. This configuration works because the LB IP address, which is bound to the mgmt-br interface, is in the same VLAN network as the node's management interface. Once traffic reaches the mgmt-br interface, it is DNAT'd and routed by the default gateway to the backend VM.
Case 5: The VM is attached to the net-100 network, and the LB is created from the pool-100 IP pool. This configuration doesn't work, for the same reason as Case 2.
Case 6: The VM is attached to the net-100 network, and the LB is created from the pool-200 IP pool. This configuration doesn't work, for the same reason as Case 2.
Case 7: The VM is attached to the net-200 network, and the LB is created from the pool-1 IP pool. This configuration works for the same reason as Case 4; the only difference is that the DNAT'd and routed traffic goes through the provider-br bridge instead of mgmt-br.
Case 8: The VM is attached to the net-200 network, and the LB is created from the pool-100 IP pool. This configuration doesn't work, for the same reason as Case 2.
Case 9: The VM is attached to the net-200 network, and the LB is created from the pool-200 IP pool. This configuration doesn't work, for the same reason as Case 2.
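For the failing combinations, a quick check from a node's shell could look like the sketch below. It assumes the behavior described above (the LB IP always landing on mgmt-br) and reuses the example names from this thread; the exact commands are illustrative and the output will vary by environment.

# On a Harvester node; mgmt-br and the net-*/pool-* names come from the example
# above, everything else is an illustrative assumption.
ip addr show dev mgmt-br        # the LB IP is expected to appear here, whichever pool it came from
bridge vlan show dev mgmt-br    # which VLANs the bridge's own interface participates in
                                # (assumption: only the untagged/VLAN 1 management network, so
                                # frames tagged VLAN 100/200 for the LB IP never reach it)
ip route get <backend-vm-ip>    # for the working cases (4 and 7), the DNAT'd traffic leaves
                                # via the default gateway toward the VM's VLAN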
brave-garden-49376
08/18/2025, 6:38 PM
bland-article-62755
08/18/2025, 6:40 PM
bland-article-62755
08/18/2025, 6:40 PM
bland-article-62755
08/18/2025, 6:40 PM
bland-article-62755
08/18/2025, 6:40 PM
brave-garden-49376
08/18/2025, 6:46 PM
brave-garden-49376
08/20/2025, 8:48 PM
brave-garden-49376
08/20/2025, 8:49 PM
brave-garden-49376
08/20/2025, 9:29 PM
5,6c5,6
< Currently Active Slave: None
< MII Status: down
---
> Currently Active Slave: enp59s0f0np0
> MII Status: up
10a11,18
>
> Slave Interface: enp59s0f0np0
> MII Status: up
> Speed: 25000 Mbps
> Duplex: full
> Link Failure Count: 0
> Permanent HW addr: 7c:fe:90:cb:73:62
> Slave queue ID: 0
Somehow the Network Config is failing to bring up the bond on all but one node.
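A quick way to compare the bond state across nodes is sketched below. It assumes the diff above was taken from /proc/net/bonding/mgmt-bo; the node names and SSH user are placeholders to adjust for the environment.

# MII Status and Currently Active Slave are the fields that differed in the diff above.
for n in node1 node2 node3; do
  echo "== $n =="
  ssh rancher@"$n" "grep -E 'MII Status|Currently Active Slave' /proc/net/bonding/mgmt-bo"
done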
brave-garden-49376
08/21/2025, 12:30 PM
brave-garden-49376
08/21/2025, 9:47 PM
ou26l-compute:
----------------
mgmt-bo
ou26r-compute:
----------------
cnet1-bo
cnet2-bo
mgmt-bo
ou31c-compute:
----------------
mgmt-bo
Should I clean those cnet*-bo files by hand?
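Before removing anything by hand, it may be worth confirming the cnet*-bo entries are really stale. A sketch, assuming the listing above came from a salt run against the compute nodes and that the leftovers are old bond definitions; the paths and targets are assumptions, not verified on these nodes.

# Does the kernel still have the old bonds, and does any config still reference them?
salt 'ou26r-compute' cmd.run 'ls /proc/net/bonding/'
salt 'ou26r-compute' cmd.run 'ip -br link show type bond'
salt 'ou26r-compute' cmd.run 'grep -rl cnet /etc/sysconfig/network/ 2>/dev/null'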
brave-garden-49376
08/22/2025, 1:35 PM
brash-petabyte-67855
08/22/2025, 6:18 PM
brash-petabyte-67855
08/22/2025, 6:21 PM
bland-article-62755
08/22/2025, 6:50 PM
brave-garden-49376
08/22/2025, 7:03 PM