adamant-kite-43734 [02/04/2025, 10:33 AM]
bland-article-62755 [02/04/2025, 7:58 PM]
powerful-easter-15334 [02/04/2025, 8:00 PM]
powerful-easter-15334 [02/04/2025, 8:00 PM]
powerful-easter-15334 [02/04/2025, 8:10 PM]
powerful-easter-15334 [02/04/2025, 8:11 PM]
powerful-easter-15334 [02/05/2025, 4:40 AM]
bland-article-62755 [02/05/2025, 4:45 AM]
bland-article-62755 [02/05/2025, 4:46 AM]
powerful-easter-15334 [02/05/2025, 4:47 AM]
powerful-easter-15334 [02/05/2025, 4:48 AM]
6: mgmt-br: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether d4:f5:ef:63:58:2c brd ff:ff:ff:ff:ff:ff
inet 10.0.1.61/16 brd 10.0.255.255 scope global mgmt-br
valid_lft forever preferred_lft forever
inet 10.0.1.69/32 scope global mgmt-br
valid_lft forever preferred_lft forever
inet 10.0.1.60/32 scope global mgmt-br
valid_lft forever preferred_lft forever
inet 10.0.1.70/32 scope global mgmt-br
The harvester hosts can reach the VMs though, and the LB IPs (.60, .69, .70) are also attached to the mgmt
bland-article-62755 [02/05/2025, 4:50 AM]
mgmt-br
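The /32 addresses on mgmt-br are the floating LB VIPs, while the /16 is the node's own address. A quick way to list just the VIPs (a sketch: it parses the output pasted above as a stand-in for a live node, where you would run `ip -4 addr show dev mgmt-br` instead):

```shell
# Sketch: pull the /32 LB VIPs out of `ip -4 addr show dev mgmt-br` output.
# The pasted output from the thread stands in for a live node here.
ip_output='inet 10.0.1.61/16 brd 10.0.255.255 scope global mgmt-br
inet 10.0.1.69/32 scope global mgmt-br
inet 10.0.1.60/32 scope global mgmt-br
inet 10.0.1.70/32 scope global mgmt-br'

# /32 entries are the floating VIPs; the /16 entry is the node IP itself.
vips="$(printf '%s\n' "$ip_output" | awk '$2 ~ /\/32$/ {print $2}')"
printf '%s\n' "$vips"
```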
bland-article-62755 [02/05/2025, 4:51 AM]
powerful-easter-15334 [02/05/2025, 4:52 AM]
powerful-easter-15334 [02/05/2025, 4:54 AM]
powerful-easter-15334 [02/05/2025, 5:22 AM]
traceroute to 10.5.106.77 (10.5.106.77), 30 hops max, 60 byte packets
1 10.0.1.32 (10.0.1.32) 0.689 ms 0.622 ms 0.590 ms
2 10.5.106.77 (10.5.106.77) 0.565 ms 0.539 ms 0.513 ms
This is from a host to a VM - it passes through the gateway
powerful-easter-15334 [02/05/2025, 5:27 AM]
opensuse@test:~> ip route show
default via 10.5.106.254 dev eth0
10.5.106.0/24 dev eth0 proto kernel scope link src 10.5.106.77
That's the correct gateway for the VM to reach the servers too
But a traceroute to a host doesn't even pass through there
opensuse@test:~> sudo traceroute 10.0.1.61
traceroute to 10.0.1.61 (10.0.1.61), 30 hops max, 60 byte packets
1 * * *
2 * * *
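A quick sanity check on the addressing (a sketch using only the IPs from the thread; the subnet math is illustrative, not a live routing test): the host IP 10.0.1.61 falls outside the VM's 10.5.106.0/24, so every probe from the VM has to go via 10.5.106.254, and silence at hop 1 means either the probe never reaches that gateway or the reply has no route back.

```shell
# Sketch: confirm 10.0.1.61 is off-link for a VM on 10.5.106.0/24,
# using the addresses from the thread (pure shell arithmetic, no network calls).
ip_to_int() { local IFS=.; set -- $1; echo $(( ($1<<24)+($2<<16)+($3<<8)+$4 )); }

vm_net=$(ip_to_int 10.5.106.0)   # the VM's connected network
target=$(ip_to_int 10.0.1.61)    # the harvester host's address
mask=$(( 0xFFFFFF00 ))           # /24 netmask

if [ $(( target & mask )) -eq "$vm_net" ]; then
  echo "10.0.1.61 is on-link for the VM"
else
  echo "10.0.1.61 is off-link: the VM must send via 10.5.106.254 and the reply must route back"
fi
```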
powerful-easter-15334 [02/05/2025, 6:02 AM]
powerful-easter-15334 [02/05/2025, 6:33 AM]
powerful-easter-15334 [02/05/2025, 9:09 AM]
powerful-easter-15334 [02/05/2025, 9:09 AM]
powerful-easter-15334 [02/05/2025, 9:12 AM]
powerful-easter-15334 [02/05/2025, 10:10 AM]
harvester-1:/home/rancher # kubectl describe svc rancher-lb -n rancher-mgmt
Name: rancher-lb
Namespace: rancher-mgmt
Labels: loadbalancer.harvesterhci.io/servicelb=true
Annotations: kube-vip.io/ignore-service-security: true
kube-vip.io/loadbalancerIPs: 10.0.1.70
kube-vip.io/vipHost: harvester-1
Selector: <none>
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.53.1.100
IPs: 10.53.1.100
IP: 10.0.1.70
LoadBalancer Ingress: 10.0.1.70
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 32390/TCP
Endpoints: <none>
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 32047/TCP
Endpoints: <none>
Port: kubeapi 6443/TCP
TargetPort: 6443/TCP
NodePort: kubeapi 32108/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Things here that differ from, say, the ingress-expose svc:
• the selector is <none>
• there are two values for IP?
• no Endpoints values. For ingress-expose, the service's endpoints are the pod IPs of the nginx ingress
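The empty Endpoints follow from the empty selector: in stock Kubernetes, a Service with no selector gets no Endpoints object created automatically; something (a controller, or a human) has to create one with the same name as the Service. A minimal sketch of that pattern (names and the backend IP below are illustrative, not taken from the cluster above):

```yaml
# Sketch: a selector-less Service plus the manually managed Endpoints
# object that stock Kubernetes expects to pair with it by name.
apiVersion: v1
kind: Service
metadata:
  name: rancher-lb
  namespace: rancher-mgmt
spec:
  type: LoadBalancer
  ports:
    - name: https
      port: 443
      targetPort: 443
---
apiVersion: v1
kind: Endpoints
metadata:
  name: rancher-lb        # must match the Service name exactly
  namespace: rancher-mgmt
subsets:
  - addresses:
      - ip: 10.0.1.10     # illustrative backend IP, not from the thread
    ports:
      - name: https
        port: 443
```

So "Endpoints: <none>" here may just mean whatever controller is supposed to populate the backing Endpoints for rancher-lb hasn't done so, which would also explain why traffic to 10.0.1.70 goes nowhere.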