
victorious-monkey-50550

07/08/2022, 6:19 PM
We are using the KlipperLB load balancer and are finding that the svclb pods are getting created in the kube-system namespace. In addition, we cannot delete the pods in that namespace - a pod is deleted, but then it gets recreated, so it never goes away (the owner check sketched after the second listing below shows what recreates them). Initial pods in the kube-system namespace:
k get pods -n kube-system
NAME                                                        READY   STATUS    RESTARTS       AGE
coredns-d76bd69b-bnx9m                                      1/1     Running   3 (9d ago)     9d
metrics-server-7cd5fcb6b7-gflch                             1/1     Running   3 (9d ago)     9d
local-path-provisioner-6c79684f77-hlwsg                     1/1     Running   5 (9d ago)     9d
svclb-rook-ceph-mgr-dashboard-loadbalancer-1364e2b9-k7wsp   1/1     Running   0              7d23h
svclb-rook-ceph-mgr-dashboard-loadbalancer-1364e2b9-df8cj   1/1     Running   0              7d23h
svclb-rook-ceph-mgr-dashboard-loadbalancer-1364e2b9-sxb88   1/1     Running   0              7d23h
svclb-rook-ceph-mgr-dashboard-loadbalancer-1364e2b9-7nqq6   1/1     Running   1 (7d6h ago)   7d23h
svclb-istio-ingressgateway-c9776a98-m6hzn                   3/3     Running   0              3d2h
svclb-istio-ingressgateway-c9776a98-kwvr7                   3/3     Running   0              3d2h
svclb-istio-ingressgateway-c9776a98-4jn4m                   3/3     Running   0              3d2h
svclb-istio-ingressgateway-c9776a98-hd4jw                   3/3     Running   0              3d2h
svclb-keycloak-fb0704bb-pnq85                               1/1     Running   0              2d23h
svclb-keycloak-fb0704bb-xfh2p                               1/1     Running   0              2d23h
svclb-keycloak-fb0704bb-llc4r                               1/1     Running   0              2d23h
svclb-keycloak-fb0704bb-cjx4v                               1/1     Running   0              2d23h
svclb-postgres-db-40a02dd4-7h7q7                            1/1     Running   0              2d22h
svclb-postgres-db-40a02dd4-svf68                            1/1     Running   0              2d22h
svclb-postgres-db-40a02dd4-cn25l                            1/1     Running   0              2d22h
svclb-postgres-db-40a02dd4-tskt4                            1/1     Running   0              2d22h
svclb-stackgres-restapi-e9fc8149-xqhgw                      0/1     Pending   0              23h
svclb-stackgres-restapi-e9fc8149-jp4cx                      0/1     Pending   0              23h
svclb-stackgres-restapi-e9fc8149-444fj                      0/1     Pending   0              23h
svclb-stackgres-restapi-e9fc8149-jxgjm                      0/1     Pending   0              23h
svclb-nginx-service-2fcd8743-zbxv6                          1/1     Running   0              3h9m
svclb-nginx-service-2fcd8743-dpw6q                          1/1     Running   0              3h9m
svclb-nginx-service-2fcd8743-hj4gk                          1/1     Running   0              3h9m
svclb-nginx-service-2fcd8743-dt5pn                          1/1     Running   0              3h9m
svclb-nginx-b1379761-dhw6w                                  1/1     Running   0              3h27m
svclb-nginx-b1379761-zcgnd                                  1/1     Running   0              3h27m
svclb-nginx-b1379761-czflw                                  1/1     Running   0              3h27m
svclb-nginx-b1379761-jbpmv                                  1/1     Running   0              3h27m
svclb-west2-service-9d3852cf-rjhfw                          1/1     Running   0              64m
svclb-west2-service-9d3852cf-9p29w                          1/1     Running   0              64m
svclb-west2-service-9d3852cf-t2s5h                          1/1     Running   0              64m
svclb-west2-service-9d3852cf-7z2xw                          1/1     Running   0              64m
svclb-test-service-ac1dfd76-6qj28                           1/1     Running   0              47m
svclb-test-service-ac1dfd76-ckfdz                           1/1     Running   0              47m
svclb-test-service-ac1dfd76-cmdgz                           1/1     Running   0              47m
svclb-test-service-ac1dfd76-47j66                           1/1     Running   0              47m
svclb-test-f79880dc-gxrcx                                   0/2     Pending   0              4m35s
svclb-test-f79880dc-l9b9k                                   0/2     Pending   0              4m35s
svclb-test-f79880dc-x2bcz                                   0/2     Pending   0              4m35s
svclb-test-f79880dc-zkwdz                                   0/2     Pending   0              4m35s
svclb-test-replicas-2fef0816-l2xfv                          0/2     Pending   0              4m35s
svclb-test-replicas-2fef0816-zctcp                          0/2     Pending   0              4m35s
svclb-test-replicas-2fef0816-lgj2v                          0/2     Pending   0              4m35s
svclb-test-replicas-2fef0816-gjgwv                          0/2     Pending   0              4m35s
After deleting a couple of pods:
k delete pods -n kube-system svclb-test-f79880dc-gxrcx svclb-test-f79880dc-l9b9k
pod "svclb-test-f79880dc-gxrcx" deleted
pod "svclb-test-f79880dc-l9b9k" deleted
(envgen)$ k get pods -n kube-system
NAME                                                        READY   STATUS    RESTARTS       AGE
coredns-d76bd69b-bnx9m                                      1/1     Running   3 (9d ago)     9d
metrics-server-7cd5fcb6b7-gflch                             1/1     Running   3 (9d ago)     9d
local-path-provisioner-6c79684f77-hlwsg                     1/1     Running   5 (9d ago)     9d
svclb-rook-ceph-mgr-dashboard-loadbalancer-1364e2b9-k7wsp   1/1     Running   0              7d23h
svclb-rook-ceph-mgr-dashboard-loadbalancer-1364e2b9-df8cj   1/1     Running   0              7d23h
svclb-rook-ceph-mgr-dashboard-loadbalancer-1364e2b9-sxb88   1/1     Running   0              7d23h
svclb-rook-ceph-mgr-dashboard-loadbalancer-1364e2b9-7nqq6   1/1     Running   1 (7d6h ago)   7d23h
svclb-istio-ingressgateway-c9776a98-m6hzn                   3/3     Running   0              3d2h
svclb-istio-ingressgateway-c9776a98-kwvr7                   3/3     Running   0              3d2h
svclb-istio-ingressgateway-c9776a98-4jn4m                   3/3     Running   0              3d2h
svclb-istio-ingressgateway-c9776a98-hd4jw                   3/3     Running   0              3d2h
svclb-keycloak-fb0704bb-pnq85                               1/1     Running   0              2d23h
svclb-keycloak-fb0704bb-xfh2p                               1/1     Running   0              2d23h
svclb-keycloak-fb0704bb-llc4r                               1/1     Running   0              2d23h
svclb-keycloak-fb0704bb-cjx4v                               1/1     Running   0              2d23h
svclb-postgres-db-40a02dd4-7h7q7                            1/1     Running   0              2d22h
svclb-postgres-db-40a02dd4-svf68                            1/1     Running   0              2d22h
svclb-postgres-db-40a02dd4-cn25l                            1/1     Running   0              2d22h
svclb-postgres-db-40a02dd4-tskt4                            1/1     Running   0              2d22h
svclb-stackgres-restapi-e9fc8149-xqhgw                      0/1     Pending   0              23h
svclb-stackgres-restapi-e9fc8149-jp4cx                      0/1     Pending   0              23h
svclb-stackgres-restapi-e9fc8149-444fj                      0/1     Pending   0              23h
svclb-stackgres-restapi-e9fc8149-jxgjm                      0/1     Pending   0              23h
svclb-nginx-service-2fcd8743-zbxv6                          1/1     Running   0              3h11m
svclb-nginx-service-2fcd8743-dpw6q                          1/1     Running   0              3h11m
svclb-nginx-service-2fcd8743-hj4gk                          1/1     Running   0              3h11m
svclb-nginx-service-2fcd8743-dt5pn                          1/1     Running   0              3h11m
svclb-nginx-b1379761-dhw6w                                  1/1     Running   0              3h29m
svclb-nginx-b1379761-zcgnd                                  1/1     Running   0              3h29m
svclb-nginx-b1379761-czflw                                  1/1     Running   0              3h29m
svclb-nginx-b1379761-jbpmv                                  1/1     Running   0              3h29m
svclb-west2-service-9d3852cf-rjhfw                          1/1     Running   0              66m
svclb-west2-service-9d3852cf-9p29w                          1/1     Running   0              66m
svclb-west2-service-9d3852cf-t2s5h                          1/1     Running   0              66m
svclb-west2-service-9d3852cf-7z2xw                          1/1     Running   0              66m
svclb-test-service-ac1dfd76-6qj28                           1/1     Running   0              48m
svclb-test-service-ac1dfd76-ckfdz                           1/1     Running   0              48m
svclb-test-service-ac1dfd76-cmdgz                           1/1     Running   0              48m
svclb-test-service-ac1dfd76-47j66                           1/1     Running   0              48m
svclb-test-f79880dc-x2bcz                                   0/2     Pending   0              6m23s
svclb-test-f79880dc-zkwdz                                   0/2     Pending   0              6m23s
svclb-test-replicas-2fef0816-l2xfv                          0/2     Pending   0              6m23s
svclb-test-replicas-2fef0816-zctcp                          0/2     Pending   0              6m23s
svclb-test-replicas-2fef0816-lgj2v                          0/2     Pending   0              6m23s
svclb-test-replicas-2fef0816-gjgwv                          0/2     Pending   0              6m23s
svclb-test-f79880dc-bfhcc                                   0/2     Pending   0              3s
svclb-test-f79880dc-vz52q                                   0/2     Pending   0              3s
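
For context on why the delete does not stick: in k3s, each set of svclb-* pods is owned by a DaemonSet that ServiceLB (klipper-lb) maintains, and the DaemonSet controller replaces any pod that is deleted. A quick way to confirm the owner, as a sketch (the pod name is taken from the listing above):

# Show which controller owns one of the recreated svclb pods
kubectl get pod svclb-test-f79880dc-bfhcc -n kube-system \
  -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'

# List the svclb DaemonSets that ServiceLB is maintaining
kubectl get daemonsets -n kube-system | grep svclb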

creamy-pencil-82913

07/08/2022, 6:22 PM
Why are you trying to delete them? They're managed by klipper-lb, so yes, if you delete them, it will re-create them as long as the Service that requires them exists.
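
A minimal sketch of that lifecycle, with hypothetical names (a Deployment and Service both called test in the default namespace): the svclb pods exist because the LoadBalancer Service exists, and deleting the Service, rather than the pods, is what removes them.

# Exposing a Deployment as type LoadBalancer makes ServiceLB create svclb-test-* pods
kubectl expose deployment test --name=test --type=LoadBalancer --port=80 -n default

# Deleting the Service (not its pods) is what should remove the svclb-test-* pods again
kubectl delete service test -n default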

victorious-monkey-50550

07/08/2022, 6:46 PM
@creamy-pencil-82913 when we stop the LoadBalancer service, the svclb instances are not killed, as we have seen in the past. When we restart the LB service (with new definitions), we end up with port/name conflicts due to the lingering svclb pods.

creamy-pencil-82913

07/08/2022, 7:00 PM
What do you mean by 'stop' the LB service? You mean the pods don't get cleaned up when the Service is deleted?
Kubernetes doesn’t really have a concept of “stopping” a Service. You can delete it, or delete (scale to 0) the pods that back it while leaving the service defined, but Services themselves don’t have a ‘started’ or ‘stopped’ state.
If you are deleting the service and the svclb pods remain, that’s a problem though.
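
To illustrate the distinction with hypothetical names (a Service test backed by a Deployment test):

# "Stopping" the workload: scale the backing pods to zero. The Service object
# stays defined, so ServiceLB keeps its svclb pods around.
kubectl scale deployment test --replicas=0 -n default

# Removing the Service entirely: this is what should also clean up its svclb pods.
kubectl delete service test -n default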

victorious-monkey-50550

07/08/2022, 7:23 PM
Yes, we delete the LB service and the pods remain. We agree it is a bug. Is there a workaround to get the pods to go away?
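
One possible interim workaround, sketched only and not confirmed here as the supported fix: if a svclb DaemonSet was left behind after its Service was deleted, deleting that DaemonSet directly should take the pods with it. The DaemonSet name below is inferred from the pod names in the listing above and may differ.

# Find any leftover svclb DaemonSets, then delete the orphaned one explicitly
kubectl get daemonsets -n kube-system | grep svclb
kubectl delete daemonset svclb-test-f79880dc -n kube-system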

creamy-pencil-82913

07/08/2022, 8:15 PM
Would you mind opening a github issue?
Created https://github.com/k3s-io/k3s/issues/5823 for you; please add any additional details you can share.