# harvester
Ok, that seems to be a network error on our end or in our firewall. Sorry for the false alarm
I still have problems accessing my workloads over the load balancer IP, even though everything is now active. I have simplified my scenario and am just trying to reach an nginx running in a VM on port 80. I can curl it using the VM IP, but not the load balancer IP, no matter where I try from. I am going back over the docs (https://docs.harvesterhci.io/v1.2/networking/loadbalancer#limitations). Can somebody explain this section from the Limitations of the new Harvester LoadBalancer for VM workloads:
> Access Restriction: The VM LB address is exposed only within the same network as the Harvester hosts. To access the LB from outside the network, you must provide a route from outside to the LB address.
I think I got it working, but does it mean I can only expose the LB IP in the same subnet that my Harvester hardware nodes live in? For example, if my Harvester hardware net is 10.10.0.0/24 and my Rancher net is 10.10.10.0/24, do I NEED to set the Rancher LB IP to something in the 10.10.0.0/24 net like 10.10.0.33, or is it somehow possible to use the Rancher net for the LB IP, e.g. something in 10.10.10.0/24 like 10.10.10.33? And if it is possible, what routes do I need to set for each net to make this work?
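(For illustration only, not from the thread: a minimal sketch of the kind of route the docs are talking about, assuming the LB IP stays in the Harvester net 10.10.0.0/24 and that a router bridging both nets is reachable from the Rancher net at 10.10.10.1; both the gateway address and the idea of adding the route on the client side are assumptions.)

```
# On a client in the Rancher net (10.10.10.0/24): add a static route to the
# Harvester/LB net via the assumed gateway that connects the two subnets.
sudo ip route add 10.10.0.0/24 via 10.10.10.1

# Check which path the kernel would use for the example LB IP, then test it.
ip route get 10.10.0.33
curl -v http://10.10.0.33
```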
The weirdest thing is happening with the load balancer. I now have the following situation: when I put an instance label on my debug nginx VM, it is picked up by the load balancer and I can use all listener ports (80, 443, 6443, 9345) against the nginx and get served the nginx default site. BUT when I switch the instance label to my Rancher VM with a running RKE2 server, the new load-balancing backend is picked up, but instead of my Rancher/RKE2 nodes becoming accessible under the load balancer VIP, I am served the Harvester dashboard. WTF is happening?
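(Side note, not from the thread: one hedged way to see who is actually answering on the VIP is to compare the TLS certificate served there with the one served by the RKE2 VM directly; 10.10.0.33 and the VM address below are assumed placeholders.)

```
# Look at the certificate presented on the LB VIP. If it is the Harvester
# management certificate rather than the RKE2/Rancher one, the VIP is being
# answered by the Harvester management plane instead of the LB backend.
curl -vk https://10.10.0.33:443 2>&1 | grep -i 'subject\|issuer'

# Compare with the RKE2 server answering on the VM IP directly.
curl -vk https://<rke2-vm-ip>:6443 2>&1 | grep -i 'subject\|issuer'
```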