adamant-kite-43734 (07/04/2024, 1:08 PM)

prehistoric-balloon-31801 (07/08/2024, 2:07 AM)

some-addition-13540 (07/08/2024, 9:06 AM)

red-king-19196 (07/09/2024, 3:27 AM)
[...] the mgmt one? Or did you just create the L2 VLAN VM Network associated with the mgmt cluster network?

some-addition-13540 (07/09/2024, 7:56 AM)

some-addition-13540 (07/09/2024, 7:38 PM)

some-addition-13540 (07/09/2024, 8:06 PM)

adventurous-portugal-91104 (07/09/2024, 8:08 PM)

adventurous-portugal-91104 (07/09/2024, 8:12 PM)

red-king-19196 (07/10/2024, 9:37 AM)

some-addition-13540 (07/10/2024, 9:55 AM)

some-addition-13540 (07/10/2024, 9:56 AM)
07/10/2024, 9:56 AMtime="2024-07-10T09:54:55Z" level=info msg="probe error, I/O timeout, address: 10.0.0.227:80, timeout: 3s"
But I can curl 10.0.0.227:80
totally fine from the Bastion itselfsome-addition-13540
07/10/2024, 9:56 AMsome-addition-13540
[...] hypervisor-br, which I thought is the one I had assigned it to in the LB IPAM Pool:
8: mgmt-br: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 1c:98:ec:5c:18:28 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.10/24 brd 10.0.2.255 scope global mgmt-br
       valid_lft forever preferred_lft forever
    inet 10.0.2.20/32 scope global mgmt-br
       valid_lft forever preferred_lft forever
    inet 10.0.0.30/32 scope global mgmt-br
       valid_lft forever preferred_lft forever
    inet 10.0.0.31/32 scope global mgmt-br
       valid_lft forever preferred_lft forever
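A quick way to confirm where the LB addresses actually landed, using the two bridge names mentioned in this thread (hypervisor-br is assumed to be the secondary bridge named above):

ip -4 addr show dev mgmt-br | grep 'scope global'        # the /32 LB IPs show up here
ip -4 addr show dev hypervisor-br | grep 'scope global'  # the bridge the IPAM pool was expected to use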
some-addition-13540 (07/10/2024, 2:13 PM)

faint-art-23779 (07/10/2024, 2:16 PM)
Can you run ip route show and ip neigh on the node and provide the output?
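For reference, the two diagnostics requested above, run on the Harvester node:

ip route show   # routing table: which interface egresses traffic for each subnet
ip neigh        # ARP/neighbour cache: which addresses the node has resolved, and where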
some-addition-13540 (07/10/2024, 2:27 PM)

faint-art-23779 (07/10/2024, 2:40 PM)

some-addition-13540 (07/10/2024, 2:46 PM)

faint-art-23779 (07/10/2024, 2:58 PM)
Can you run iptables-save to save the output to a file and post the file here?

some-addition-13540 (07/10/2024, 3:05 PM)

some-addition-13540 (07/10/2024, 3:05 PM)

faint-art-23779 (07/10/2024, 3:11 PM)
sysctl -a | grep bridge-nf-call should show net.bridge.bridge-nf-call-iptables=0 in your configuration. If so, please run sysctl -w net.bridge.bridge-nf-call-iptables=1 to check whether the ping/curl starts working again. Thanks
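Putting the suggested check and toggle together (run on the Harvester node; 10.0.0.227 is the probe target from the earlier log):

sysctl -a | grep bridge-nf-call                  # expect net.bridge.bridge-nf-call-iptables = 0
sysctl -w net.bridge.bridge-nf-call-iptables=1   # make bridged traffic traverse iptables
curl --connect-timeout 3 10.0.0.227:80           # retest the backend that the LB probes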
some-addition-13540 (07/10/2024, 3:12 PM)

faint-art-23779 (07/10/2024, 3:13 PM)

faint-art-23779 (07/10/2024, 3:13 PM)

some-addition-13540 (07/10/2024, 3:25 PM)

faint-art-23779 (07/10/2024, 3:28 PM)
Please run iptables-save to check how the ping/HTTP packets are being redirected...

some-addition-13540 (07/10/2024, 3:31 PM)

faint-art-23779 (07/10/2024, 3:39 PM)
Please curl 10.0.0.31 (the nginx). 10.0.0.30 is only for 6443 (the apiserver); port 80 is not allowed there.

some-addition-13540 (07/10/2024, 5:28 PM)

some-addition-13540 (07/10/2024, 5:30 PM)
curl --connect-timeout 1 10.0.0.31:80
curl: (28) Failed to connect to 10.0.0.31 port 80 after 1001 ms: Timeout was reached

some-addition-13540 (07/10/2024, 5:30 PM)
curl --connect-timeout 1 10.0.0.30:6443
curl: (28) Failed to connect to 10.0.0.30 port 6443 after 1001 ms: Timeout was reached

some-addition-13540 (07/10/2024, 5:30 PM)
from the bastion

some-addition-13540 (07/10/2024, 5:30 PM)

some-addition-13540 (07/10/2024, 10:40 PM)
With sysctl -w net.bridge.bridge-nf-call-iptables=0, the console / Harvester VIP is available from the bastion. If I set it back with sysctl -w net.bridge.bridge-nf-call-iptables=1, then it isn't.

some-addition-13540 (07/10/2024, 10:50 PM)

red-king-19196 (07/11/2024, 3:02 AM)
• The target VMs are on the 10.0.0.0/24 subnet
• The LB IP address for the target VMs is 10.0.0.30
• The bastion VM is also on the 10.0.0.0/24 subnet but outside the Harvester cluster
• The Harvester nodes are on the 10.0.2.0/24 subnet
• The Harvester cluster VIP address is 10.0.2.30
• The VM Network for the 10.0.0.0/24 subnet is associated with the hypervisor Cluster Network using a secondary NIC on each Harvester node
And you’re unable to access the LB IP address with port 6443 from the bastion VM. Direct access to the target VMs’ port 6443 is okay. Is that correct?
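A sketch of the setup as summarized above:

  10.0.2.0/24 (mgmt network)              10.0.0.0/24 (VM network, hypervisor Cluster Network)
  +--------------------------+            +--------------------------------------+
  | Harvester nodes          | secondary  | target VMs   <- LB IP 10.0.0.30      |
  | mgmt-br, VIP 10.0.2.30   +---NICs-----+ bastion VM (outside the cluster)     |
  +--------------------------+            +--------------------------------------+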
red-king-19196 (07/11/2024, 4:08 AM)
If that’s the case, the Harvester Load Balancer currently does not support assigning LB IP addresses to secondary interfaces. It will always bind the LB IP addresses, regardless of their subnet, to the management interface, i.e., mgmt-br. So, for this kind of VM-type load balancer usage, it’s required to create LBs from an IP pool with the same subnet range as the Harvester nodes’ management network.
The other way is to move the LB IP address inside the target VMs. If they are a guest cluster that has the Harvester Cloud Provider running, you can create an LB-type Service to announce the LB IP address on the 10.0.0.0/24 subnet.
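A minimal sketch of that second option, assuming a guest cluster that runs the Harvester Cloud Provider. The Service name and selector are hypothetical, and the ipam annotation is the cloud provider's pool mode, which asks the underlying Harvester cluster to allocate the LB address from one of its IP pools:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb                               # hypothetical name
  annotations:
    cloudprovider.harvesterhci.io/ipam: pool   # allocate the LB IP from a Harvester IP pool
spec:
  type: LoadBalancer
  selector:
    app: nginx                                 # hypothetical label on the target pods
  ports:
    - port: 80
      targetPort: 80
EOF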
adventurous-portugal-91104 (07/11/2024, 6:42 AM)

adventurous-portugal-91104 (07/11/2024, 6:56 AM)

adventurous-portugal-91104 (07/11/2024, 8:04 AM)

prehistoric-balloon-31801 (07/11/2024, 8:46 AM)

adventurous-portugal-91104 (07/11/2024, 8:47 AM)

adventurous-portugal-91104 (07/11/2024, 8:47 AM)

adventurous-portugal-91104 (07/11/2024, 8:47 AM)

adventurous-portugal-91104 (07/11/2024, 8:47 AM)

adventurous-portugal-91104 (07/11/2024, 8:51 AM)

adventurous-portugal-91104 (07/11/2024, 8:53 AM)

adventurous-portugal-91104 (07/11/2024, 8:54 AM)

adventurous-portugal-91104 (07/11/2024, 8:54 AM)

adventurous-portugal-91104 (07/11/2024, 8:55 AM)

adventurous-portugal-91104 (07/11/2024, 8:59 AM)
> If that’s the case, the Harvester Load Balancer currently does not support assigning LB IP addresses to secondary interfaces. It will always bind the LB IP addresses, regardless of their subnet, to the management interface, i.e., mgmt-br. So, for this kind of VM-type load balancer usage, it’s required to create LBs from an IP pool with the same subnet range as the Harvester nodes’ management network. The other way is to move the LB IP address inside the target VMs. If they are a guest cluster that has the Harvester Cloud Provider running, you can create an LB-type Service to announce the LB IP address on the 10.0.0.0/24 subnet.
So, to understand you correctly: to get this to work, we would need to use a Rancher server, connect it to Harvester with the Harvester Cloud Provider, and use that path to provision a Kubernetes cluster, in order to get a working LB in a different network than the mgmt network?
red-king-19196 (07/11/2024, 10:41 AM)
> so if I create another LB, but this LB is set to the mgmt network 10.0.2.0/24, THEN it transfers that traffic to the VM in the hypervisor network as expected, though this LB is on the mgmt network and not the hypervisor network.
This is the way to go. Traffic destined for 10.0.0.30 will never be routed to the mgmt-br interface.
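One way to see this for yourself: ask the kernel which next hop it would pick for the LB address (a read-only diagnostic; run it from the bastion or from a Harvester node):

ip route get 10.0.0.30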
red-king-19196 (07/11/2024, 10:44 AM)

adventurous-portugal-91104 (07/11/2024, 10:45 AM)

cool-thailand-26552 (07/11/2024, 10:46 AM)

adventurous-portugal-91104 (07/11/2024, 10:56 AM)

cool-thailand-26552 (07/11/2024, 10:57 AM)

adventurous-portugal-91104 (07/11/2024, 10:57 AM)

adventurous-portugal-91104 (07/11/2024, 10:57 AM)

adventurous-portugal-91104 (07/11/2024, 11:06 AM)

adventurous-portugal-91104 (07/11/2024, 11:07 AM)

some-addition-13540 (07/11/2024, 11:14 AM)

red-king-19196 (07/11/2024, 11:27 AM)
> My understanding is that using ipPools would circumvent that
It wouldn’t. It’s a connectivity issue. Packets won’t be routed to the right interface where the LB IP address is bound.
red-king-19196 (07/11/2024, 11:28 AM)

cool-thailand-26552 (07/11/2024, 11:29 AM)
Ok, so it looks like it worked for me on Equinix Metal using Metal Gateways because, somehow, the mgmt-br interface was routed to the Metal gateway subnets...

red-king-19196 (07/11/2024, 11:29 AM)

red-king-19196 (07/11/2024, 11:32 AM)
> Ok, so it looks like it worked for me on Equinix Metal using Metal Gateways because, somehow, the mgmt-br interface was routed to the Metal gateway subnets...
If the VLANs and routing rules are configured appropriately, it would work 👍
adventurous-portugal-91104 (07/11/2024, 11:33 AM)

adventurous-portugal-91104 (07/11/2024, 11:40 AM)

red-king-19196 (07/11/2024, 11:41 AM)

adventurous-portugal-91104 (07/11/2024, 11:42 AM)

adventurous-portugal-91104 (07/11/2024, 11:43 AM)
> You’ll need to add a static route for network 10.0.0.30/32 to go to the first NIC of the Harvester node on the router. I don’t have a complex environment to check if that’s all you need to do.
I will check up on this.
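A minimal sketch of the quoted suggestion, assuming a Linux-based router and taking 10.0.2.10 (the node address from the earlier ip output) as the first NIC of a Harvester node:

ip route add 10.0.0.30/32 via 10.0.2.10   # steer the LB /32 toward the node's mgmt NIC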
red-king-19196 (07/11/2024, 11:43 AM)

adventurous-portugal-91104 (07/11/2024, 11:43 AM)

adventurous-portugal-91104 (07/11/2024, 11:43 AM)

red-king-19196 (07/11/2024, 11:43 AM)

adventurous-portugal-91104 (07/11/2024, 11:44 AM)

red-king-19196 (07/11/2024, 11:45 AM)

some-addition-13540 (07/11/2024, 11:51 AM)

red-king-19196 (07/11/2024, 11:52 AM)

some-addition-13540 (07/11/2024, 11:53 AM)

red-king-19196 (07/11/2024, 11:59 AM)

some-addition-13540 (07/11/2024, 12:02 PM)

some-addition-13540 (07/11/2024, 12:03 PM)

red-king-19196 (07/11/2024, 12:04 PM)

some-addition-13540 (07/11/2024, 12:05 PM)

some-addition-13540 (07/11/2024, 12:05 PM)

red-king-19196 (07/11/2024, 12:06 PM)

red-king-19196 (07/11/2024, 12:07 PM)

adventurous-portugal-91104 (07/11/2024, 12:07 PM)

adventurous-portugal-91104 (07/11/2024, 12:08 PM)

some-addition-13540 (07/11/2024, 1:40 PM)

cool-thailand-26552 (07/11/2024, 9:13 PM)
[...] ProviderID tagging and labelling with availability zones. For instance, without a CPI on a workload cluster, CAPI is not able to recognize a Machine as being Ready (CAPI connects to the workload cluster API, checks the Node objects, matches them with the Machine objects in the management cluster, and bubbles up the ProviderID to the Machine object).
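A quick way to observe this in a workload cluster: list each Node's spec.providerID. An empty column means the CPI has not set it, so CAPI cannot match the Node to its Machine:

kubectl get nodes -o custom-columns=NAME:.metadata.name,PROVIDERID:.spec.providerID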
red-king-19196 (07/12/2024, 2:27 AM)
> One question: are we able to use, say, Cilium as the CNI here? I know it is possible for a standard Rancher deployment not on Harvester.
Yes, you can choose Cilium as the CNI plugin during cluster creation.
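Assuming an RKE2-based guest cluster, the same choice the cluster-creation UI offers can be expressed in the node config file /etc/rancher/rke2/config.yaml:

cni: cilium   # select Cilium instead of the default CNI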
red-king-19196 (07/12/2024, 2:34 AM)
> I don’t get it: why do you need to have the CPI if you have Cilium, kube-vip, or MetalLB running in L2 mode here?
From the LB point of view, it also allows you to allocate IP addresses for LB-type Services from the IP pools managed by the underlying Harvester cluster. You can still use kube-vip or MetalLB in the guest Kubernetes cluster; it’s just that you’ll need to decide and manage which IP addresses are allocated for LB Services.