# general
a
This message was deleted.
c
check the MetalLB logs to see why it's not provisioning it?
👍 1
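For reference, a minimal way to pull both sets of logs, assuming the default `metallb-system` namespace and the standard v0.12.x labels (the speaker selector below matches the one visible in the log further down); adjust the namespace if your install differs:

```
# Logs from the controller, which assigns IPs to LoadBalancer Services
kubectl logs -n metallb-system -l app=metallb,component=controller --tail=200

# Logs from the speakers, which announce the assigned IPs from the nodes
kubectl logs -n metallb-system -l app=metallb,component=speaker --tail=200
```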
r
I have checked the log, but I am still not sure what the issue is. Here is part of the log:
E0607 23:08:58.712189       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"-mssql-2019-1-loadbalancer.1762c16020f21132", GenerateName:"", Namespace:"-static-database", SelfLink:"", UID:"", ResourceVersion:"562966093", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Service", Namespace:"-static-database", Name:"-mssql-2019-1-loadbalancer", UID:"9a1e8e79-a84a-4575-8cfa-83bcc90a356f", APIVersion:"v1", ResourceVersion:"34661861", FieldPath:""}, Reason:"nodeAssigned", Message:"announcing from node \"worker-bed-usbediac-cl-14\"", Source:v1.EventSource{Component:"metallb-speaker", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63820718629, loc:(*time.Location)(0x23fbc20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1186135b5b8e7d9, ext:1057330396583873, loc:(*time.Location)(0x23fbc20)}}, Count:24, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unknown desc = context deadline exceeded' (will not retry!)
{"caller":"level.go:63","level":"info","msg":"triggering discovery","op":"memberDiscovery","ts":"2023-06-08T00:38:53.475776894Z"}
{"caller":"level.go:63","error":"Get \"<https://10.43.0.1:443/api/v1/namespaces/metallb-system/pods?labelSelector=app%3Dmetallb%2Ccomponent%3Dspeaker>\": stream error: stream ID 53365; INTERNAL_ERROR","level":"error","msg":"failed to get pod IPs","op":"memberDiscovery","ts":"2023-06-08T00:42:43.702728778Z"}
{"caller":"level.go:63","level":"info","msg":"triggering discovery","op":"memberDiscovery","ts":"2023-06-08T00:43:54.69067581Z"}
{"caller":"level.go:63","error":"the server was unable to return a response in the time allotted, but may still be processing the request (get pods)","level":"error","msg":"failed to get pod IPs","op":"memberDiscovery","ts":"2023-06-08T00:48:43.703895942Z"}
{"caller":"level.go:63","level":"info","msg":"triggering discovery","op":"memberDiscovery","ts":"2023-06-08T00:48:55.8834029Z"}
.go:63","event":"deleteARPResponder","interface":"cali85b1362e492","level":"info","msg":"deleted ARP responder for interface","ts":"2023-06-08T17:21:22.875042116Z"}
{"caller":"level.go:63","event":"deleteNDPResponder","interface":"cali85b1362e492","level":"info","msg":"deleted NDP responder for interface","ts":"2023-06-08T17:21:22.875280174Z"}
{"caller":"level.go:63","level":"info","msg":"triggering discovery","op":"memberDiscovery","ts":"2023-06-08T17:21:29.588732145Z"}
{"caller":"level.go:63","level":"info","msg":"triggering discovery","op":"memberDiscovery","ts":"2023-06-08T17:21:55.766236676Z"}
{"caller":"level.go:63","error":"the server was unable to return a response in the time allotted, but may still be processing the request (get pods)","level":"error","msg":"failed to get pod IPs","op":"memberDiscovery","ts":"2023-06-08T17:23:14.698464989Z"}
{"caller":"level.go:63","level":"info","msg":"triggering discovery","op":"memberDiscovery","ts":"2023-06-08T17:26:56.947869852Z"}
c
what kind of cluster are you running this on? you didn’t say.
r
It is an RKE cluster: Rancher v2.6.9, MetalLB v0.12.1, Kubernetes v1.20.15.
c
hmm. There’s not much in the log other than what appear to be some timeouts accessing the apiserver. I would look at the time frame around when you created the LB service to confirm that the MetalLB controller is seeing it and trying to respond. You might also check the status field on the service.
👍 1
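A sketch of those two checks; `<namespace>` and `<service>` are placeholders for the Service that appears in the log above, and the `--since-time` value is whatever time you created the LB service:

```
# Confirm the MetalLB controller saw the Service around the time it was created
kubectl logs -n metallb-system -l app=metallb,component=controller \
  --since-time='2023-06-07T23:00:00Z' | grep '<service>'

# The Service status should list the allocated IP under loadBalancer.ingress
kubectl get svc -n <namespace> <service> -o jsonpath='{.status.loadBalancer.ingress}'

# Events attached to the Service usually spell out allocation failures
kubectl describe svc -n <namespace> <service>
```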