# harvester
**l:**
Here is the Harvester project info (via developer Rancher access for the cluster) that it seems to be referring to with the strange project name (I thought it was referring to a guest cluster project name 😛). This is actually showing my RKE2 VMs:
```yaml
apiVersion: loadbalancer.harvesterhci.io/v1beta1
kind: LoadBalancer
metadata:
  annotations:
    cloudprovider.harvesterhci.io/service-uuid: 305b4f79-ceff-4fc1-be08-17c740cd24f9
    loadbalancer.harvesterhci.io/namespace: default
    loadbalancer.harvesterhci.io/network: ''
    loadbalancer.harvesterhci.io/project: c-m-kb9nwxh2/p-kfl9f
  creationTimestamp: '2024-01-24T19:18:25Z'
  finalizers:
    - wrangler.cattle.io/harvester-lb-controller
  generation: 10
  labels:
    cloudprovider.harvesterhci.io/cluster: dev
  name: dev-argocd-lb-09a33510
  namespace: default
  resourceVersion: '9013702'
  uid: 0c3e2048-cbd5-4381-997b-189839651835
spec:
  backendServerSelector:
    harvesterhci.io/vmName:
      - dev-pool1-62d36532-2mchw
      - dev-pool1-62d36532-djjzc
      - dev-pool1-62d36532-wcwhq
  ipam: pool
  listeners:
    - backendPort: 30657
      name: http
      port: 80
      protocol: TCP
    - backendPort: 32215
      name: https
      port: 443
      protocol: TCP
status:
  backendServers:
    - 192.168.112.21
    - 192.168.112.22
    - 192.168.112.20
  conditions:
    - lastUpdateTime: '2024-01-24T19:18:35Z'
      message: >-
        allocate ip for lb default/dev-argocd-lb-09a33510 failed, error:
        192.168.112.9 has been allocated to default/dev-argocd-lb-09a33510,
        duplicate allocation is not allowed
      status: 'False'
      type: Ready
```
I was also poking around and found this as well:
```yaml
apiVersion: loadbalancer.harvesterhci.io/v1beta1
kind: IPPool
metadata:
  creationTimestamp: '2024-01-24T17:00:14Z'
  finalizers:
    - wrangler.cattle.io/harvester-ipam-controller
  generation: 34
  labels:
    loadbalancer.harvesterhci.io/global-ip-pool: 'true'
    loadbalancer.harvesterhci.io/vid: '112'
  managedFields:
    - apiVersion: loadbalancer.harvesterhci.io/v1beta1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          .: {}
          f:ranges: {}
          f:selector:
            .: {}
            f:network: {}
            f:scope: {}
      manager: harvester
      operation: Update
      time: '2024-01-24T19:29:09Z'
    - apiVersion: loadbalancer.harvesterhci.io/v1beta1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"wrangler.cattle.io/harvester-ipam-controller": {}
        f:status:
          .: {}
          f:allocated:
            .: {}
            f:192.168.112.2: {}
            f:192.168.112.3: {}
            f:192.168.112.4: {}
            f:192.168.112.5: {}
            f:192.168.112.6: {}
            f:192.168.112.7: {}
            f:192.168.112.8: {}
            f:192.168.112.9: {}
          f:available: {}
          f:conditions: {}
          f:lastAllocated: {}
          f:total: {}
      manager: harvester-load-balancer
      operation: Update
      time: '2024-01-24T19:29:09Z'
  name: global-ip-pool
  resourceVersion: '9013591'
  uid: f1cbd5ca-8dcd-4260-992d-cc24f58d276e
spec:
  ranges:
    - gateway: 192.168.112.1
      rangeEnd: 192.168.112.9
      rangeStart: 192.168.112.2
      subnet: 192.168.112.0/24
  selector:
    network: default/k8s
    scope:
      - guestCluster: '*'
        namespace: '*'
        project: '*'
status:
  allocated:
    192.168.112.2: default/dev-argocd-lb-81963e40
    192.168.112.3: default/dev-argocd-lb-98940262
    192.168.112.4: default/dev-argocd-lb-f6583253
    192.168.112.5: default/dev-argocd-lb-55e4ea27
    192.168.112.6: default/dev-argocd-lb-43fe840d
    192.168.112.7: default/dev-argocd-lb-b5a5dc14
    192.168.112.8: default/dev-argocd-lb-18edf10b
    192.168.112.9: default/dev-argocd-lb-09a33510
  available: 0
  conditions:
    - lastUpdateTime: '2024-01-24T17:00:14Z'
      status: 'True'
      type: Ready
  lastAllocated: 192.168.112.9
  total: 8
```
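Two things stand out in this pool: it is exhausted (`available: 0` of `total: 8`), and `192.168.112.9` is recorded against `default/dev-argocd-lb-09a33510`, the same LoadBalancer whose allocation fails above, so the LB appears to be colliding with its own existing IPAM record. If it were simple exhaustion, widening the range would look roughly like this (a sketch; the new `rangeEnd` is an arbitrary illustrative value):
```yaml
# Sketch: widening the exhausted global pool. rangeEnd 192.168.112.50 is
# illustrative only, not a value actually used in this cluster.
spec:
  ranges:
    - gateway: 192.168.112.1
      rangeStart: 192.168.112.2
      rangeEnd: 192.168.112.50   # was 192.168.112.9
      subnet: 192.168.112.0/24
```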
I also tried deploying a load balancer within Rancher, and it seems to assign an IP, but the load balancer shows a timeout:
```yaml
apiVersion: loadbalancer.harvesterhci.io/v1beta1
kind: LoadBalancer
metadata:
  annotations:
    loadbalancer.harvesterhci.io/namespace: default
    loadbalancer.harvesterhci.io/network: ''
    loadbalancer.harvesterhci.io/project: c-m-kb9nwxh2/p-kfl9f
  creationTimestamp: '2024-01-24T21:43:16Z'
  finalizers:
    - wrangler.cattle.io/harvester-lb-controller
  generation: 2
  managedFields:
    - apiVersion: loadbalancer.harvesterhci.io/v1beta1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          .: {}
          f:backendServerSelector:
            .: {}
            f:app: {}
          f:healthCheck:
            .: {}
            f:port: {}
          f:ipPool: {}
          f:ipam: {}
          f:listeners: {}
          f:workloadType: {}
      manager: harvester
      operation: Update
      time: '2024-01-24T21:43:16Z'
    - apiVersion: loadbalancer.harvesterhci.io/v1beta1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"wrangler.cattle.io/harvester-lb-controller": {}
        f:status:
          .: {}
          f:allocatedAddress:
            .: {}
            f:gateway: {}
            f:ip: {}
            f:ipPool: {}
            f:mask: {}
          f:conditions: {}
      manager: harvester-load-balancer
      operation: Update
      time: '2024-01-24T21:43:22Z'
  name: argocd-dev
  namespace: default
  resourceVersion: '9180035'
  uid: 659e60e3-5255-4525-9cbb-f2f7a9fcfe1a
spec:
  backendServerSelector:
    app:
      - argocd-dev
  healthCheck:
    port: 443
  ipPool: argocd-dev
  ipam: pool
  listeners:
    - backendPort: 443
      name: https
      port: 443
      protocol: TCP
  workloadType: vm
status:
  allocatedAddress:
    gateway: 192.168.112.1
    ip: 192.168.112.11
    ipPool: argocd-dev
    mask: 255.255.255.0
  conditions:
    - lastUpdateTime: '2024-01-24T21:43:21Z'
      message: 'wait service default/argocd-dev external ip failed, error: timeout'
      status: 'False'
      type: Ready
```
Here is the IP pool:
```yaml
apiVersion: loadbalancer.harvesterhci.io/v1beta1
kind: IPPool
metadata:
  creationTimestamp: '2024-01-24T21:37:04Z'
  finalizers:
    - wrangler.cattle.io/harvester-ipam-controller
  generation: 10
  labels:
    loadbalancer.harvesterhci.io/global-ip-pool: 'true'
    loadbalancer.harvesterhci.io/vid: '112'
  managedFields:
    - apiVersion: loadbalancer.harvesterhci.io/v1beta1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          .: {}
          f:ranges: {}
          f:selector:
            .: {}
            f:network: {}
            f:scope: {}
      manager: harvester
      operation: Update
      time: '2024-01-24T21:42:38Z'
    - apiVersion: loadbalancer.harvesterhci.io/v1beta1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"wrangler.cattle.io/harvester-ipam-controller": {}
        f:status:
          .: {}
          f:allocated:
            .: {}
            f:192.168.112.11: {}
          f:available: {}
          f:conditions: {}
          f:lastAllocated: {}
          f:total: {}
      manager: harvester-load-balancer
      operation: Update
      time: '2024-01-24T21:43:16Z'
  name: argocd-dev
  resourceVersion: '9179923'
  uid: 900bc85c-9a96-4675-9095-19d5678a27e9
spec:
  ranges:
    - gateway: 192.168.112.1
      rangeEnd: 192.168.112.12
      rangeStart: 192.168.112.11
      subnet: 192.168.112.0/24
  selector:
    network: default/k8s
    scope:
      - guestCluster: '*'
        namespace: '*'
        project: '*'
status:
  allocated:
    192.168.112.11: default/argocd-dev
  available: 1
  conditions:
    - lastUpdateTime: '2024-01-24T21:37:04Z'
      status: 'True'
      type: Ready
  lastAllocated: 192.168.112.11
  total: 2
```
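Worth noting: unlike the first LoadBalancer, this one sets `spec.ipPool: argocd-dev` explicitly, which presumably names the pool directly instead of relying on selector matching. That would explain why it received `192.168.112.11` even though its network annotation is still `''`, and why the remaining failure is the external-IP timeout rather than allocation. Side by side (excerpts from the dumps above; the note on matching behavior is my assumption):
```yaml
# First LB (excerpt): no spec.ipPool, so pool selection presumably
# depends on an IPPool's selector matching the LB (my assumption).
spec:
  ipam: pool
---
# Second LB (excerpt): names the pool directly via spec.ipPool.
spec:
  ipam: pool
  ipPool: argocd-dev
```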
**a:**
Please double check. Your LB:
```yaml
apiVersion: loadbalancer.harvesterhci.io/v1beta1
kind: LoadBalancer
metadata:
  annotations:
    cloudprovider.harvesterhci.io/service-uuid: 8e61862b-b930-41f5-872d-f9ceb7937463
    loadbalancer.harvesterhci.io/namespace: default
    loadbalancer.harvesterhci.io/network: ''
    loadbalancer.harvesterhci.io/project: c-m-kb9nwxh2/p-kfl9f
  ..
  name: dev-argocd-lb-7d6b07a7
  namespace: default
```
```yaml
apiVersion: loadbalancer.harvesterhci.io/v1beta1
kind: IPPool
metadata:
spec:
  ranges:
    - gateway: 192.168.112.1
      rangeEnd: 192.168.112.10
      rangeStart: 192.168.112.2
      subnet: 192.168.112.0/24
  selector:
    network: default/k8s
    scope:
      - guestCluster: '*'
        namespace: 'default'
        project: c-m-kb9nwxh2/p-kfl9f
```
The IPPool has a `selector.network` value of `default/k8s`, but your LB above has `loadbalancer.harvesterhci.io/network: ''`, which prevents the LB from selecting the target IPPool.
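A minimal sketch of the fix on the LB side, assuming the annotation is what gets matched against the pool's `selector.network`:
```yaml
# Sketch: make the LB's network annotation match the IPPool's
# selector.network (default/k8s) so the pool can be selected.
apiVersion: loadbalancer.harvesterhci.io/v1beta1
kind: LoadBalancer
metadata:
  name: dev-argocd-lb-7d6b07a7   # name taken from the dump above
  namespace: default
  annotations:
    loadbalancer.harvesterhci.io/network: default/k8s   # was ''
```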
**l:**
Thank you for your suggestions. However, I've continued to face persistent issues with IP allocation for LoadBalancers in both Harvester-managed and guest cluster contexts, despite following the recommended configurations. I've attempted to create both Harvester-level and guest-level LoadBalancers, adjusting the IP pool selectors to be both specific and minimal, yet I continue to encounter IP allocation errors (`duplicate allocation is not allowed`, `wait service external ip failed, error: timeout`). For instance, even when I manually created a Harvester IPPool and a corresponding LoadBalancer with minimal selectors, the system failed to assign an IP correctly, resulting in a timeout error. This suggests a deeper issue, possibly a bug in Harvester's load balancing mechanism or its integration with the IPAM controller. Given the consistency of these failures across different approaches and configurations, it seems we're dealing with a bug or a system limitation that standard configuration adjustments don't address. I'd appreciate any further insights or recommendations, and I'm also open to exploring alternative solutions or workarounds. I'll post it as a bug on GitHub. Thanks again! 🙂
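For reference, the manual Harvester-level pair I described boils down to something like this (a sketch reconstructed from the dumps above; the names and range are illustrative, not my exact manifests):
```yaml
# Hypothetical minimal repro: a wildcard-scoped IPPool plus a LoadBalancer
# that names it explicitly. Field names mirror the dumps earlier in this
# thread; repro-pool/repro-lb and the range are illustrative placeholders.
apiVersion: loadbalancer.harvesterhci.io/v1beta1
kind: IPPool
metadata:
  name: repro-pool
spec:
  ranges:
    - subnet: 192.168.112.0/24
      gateway: 192.168.112.1
      rangeStart: 192.168.112.30   # illustrative range
      rangeEnd: 192.168.112.39
  selector:
    network: default/k8s
    scope:
      - guestCluster: '*'
        namespace: '*'
        project: '*'
---
apiVersion: loadbalancer.harvesterhci.io/v1beta1
kind: LoadBalancer
metadata:
  name: repro-lb
  namespace: default
spec:
  ipam: pool
  ipPool: repro-pool
  workloadType: vm
  backendServerSelector:
    app:
      - argocd-dev
  listeners:
    - name: https
      port: 443
      backendPort: 443
      protocol: TCP
```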
Issue posted here: [[Bug] Load Balancer Deployment Fails in Both Guest and Harvester Cluster Scenarios · Issue #5033 · harvester/harvester](https://github.com/harvester/harvester/issues/5033) 🙂