# k3s
a
Hi all, curious if anyone has seen this behavior with services & endpoints on k3s... I just have a single machine running k3s, no complex cluster. I have simple manifests for a deployment with 1 container, a service for that container, and a Traefik ingress. This should be a 1:1:1 relationship, yet when I run
kubectl describe service/myservice
I see 6 endpoints, and unfortunately, Traefik's dashboard confirms this under its "services" view. I can totally delete everything, even the API resources, and then apply these manifests and the 6 endpoints will show up again. I've been spinning my wheels on this for a few days and am not sure where to look or what to do. A colleague who's more of a k8s admin wonders if it's some kind of bug in k3s; he didn't see anything wrong with my manifests and recommended I post here.
c
you know that services can have more than one port, right?
there’s an endpoint per port per address family
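For illustration, a hypothetical two-port service like the sketch below would list two endpoints per pod IP (one per port), and double that again on a dual-stack cluster; the names here are made up, not from the poster's manifests:
```yaml
# Hypothetical example: a service exposing two named ports.
# With one matching pod, `kubectl describe service` shows two
# endpoints (pod IP × each port); on a dual-stack cluster each
# address family multiplies that again.
apiVersion: v1
kind: Service
metadata:
  name: example-service   # illustrative name
spec:
  selector:
    app: example
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  - name: metrics
    port: 9090
    targetPort: 9090
```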
a
yes, this one does not. This has 6 endpoint IP addresses all with the same port.
c
did you scale the application up to 6 replicas?
what’s the output of
kubectl get service -n NAMESPACE SERVICE -o yaml
and
kubectl get endpoints -n NAMESPACE SERVICE -o yaml
a
No, I only have the 1 node. IIRC multiple replicas fail when there's only one node, correct?
c
Or are you perhaps using a poorly specified label selector for the service, and it’s matching other pods?
get the service and endpoint yaml and I might be able to point you at what to check next
a
I need to scrub that output of sensitive info... it'll take me a bit. There are a number of other things deployed in that namespace.
c
that’s why I said to just include the specific service’s service and endpoints…
a
sorry, missed that.
service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"myapp","environment":"staging"},"name":"ui-service","namespace":"default"},"spec":{"ports":[{"name":"ui-port","port":8080,"targetPort":8080}],"selector":{"app":"myapp","environment":"staging"}}}
  creationTimestamp: "2024-06-03T18:42:03Z"
  labels:
    app: myapp
    environment: staging
  name: ui-service
  namespace: default
  resourceVersion: "3620836"
  uid: ffd69d15-24d2-4c6c-acc9-480381fab216
spec:
  clusterIP: 10.43.19.167
  clusterIPs:
  - 10.43.19.167
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: ui-port
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: myapp
    environment: staging
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
On the endpoints, I can already see IPs associated with other deployments being tied to this service for some reason. Those need to go away. I'll scrub that and post it here in a sec.
endpoints:
apiVersion: v1
kind: Endpoints
metadata:
  creationTimestamp: "2024-06-04T17:48:23Z"
  labels:
    app: myapp
    environment: staging
  name: ui-service
  namespace: default
  resourceVersion: "3729444"
  uid: 09617112-10cb-42cf-b466-444ed7b2e574
subsets:
- addresses:
  - ip: 10.42.0.193
    nodeName: cicd3
    targetRef:
      kind: Pod
      name: ctrl-deployment-775f47748d-8v2z2
      namespace: default
      uid: ad46ea2c-1a7f-4e3f-8f23-ea1ed77143aa
  - ip: 10.42.0.194
    nodeName: cicd3
    targetRef:
      kind: Pod
      name: logger-deployment-88c9877f9-qpn48
      namespace: default
      uid: 7382821e-7f63-4c76-926c-3664abbd58ce
  - ip: 10.42.0.198
    nodeName: cicd3
    targetRef:
      kind: Pod
      name: postgres-deployment-7686575558-8h5ks
      namespace: default
      uid: 228c0c78-0cb8-4394-8922-2cb092e896db
  - ip: 10.42.0.200
    nodeName: cicd3
    targetRef:
      kind: Pod
      name: cp-deployment-596dc55f85-wnv5v
      namespace: default
      uid: 9fee2059-ad8c-48f4-bfae-078f1d8abaff
  - ip: 10.42.0.201
    nodeName: cicd3
    targetRef:
      kind: Pod
      name: ui-deployment-676c47cc97-vthmm
      namespace: default
      uid: 039049eb-4fd8-49d6-bc29-f3dfd0edf1d4
  - ip: 10.42.0.202
    nodeName: cicd3
    targetRef:
      kind: Pod
      name: mb-deployment-5f645dc9cc-5r99q
      namespace: default
      uid: 3653ef44-f35d-4bbc-90f6-8daad1a0a52d
  ports:
  - name: ui-port
    port: 8080
    protocol: TCP
The only one that should relate to this service is the ui-deployment, if that's not obvious... but I'm not sure how these are getting crossed between deployments.
anyway, appreciate the look you're giving it.
c
what do you see if you do
kubectl get pod -n default -l app=myapp,environment=staging
a
maybe because I'm using the same selector string across these related services?
c
that’s the label selector you put on the service, so any pods that match those labels will be used for the service
if you don't want them used, you should use unique labels, or change the label selector on the service
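A minimal sketch of that fix, assuming a per-component label (the `component: ui` label name is illustrative, not from the actual manifests): give each deployment's pod template a distinguishing label and have each service select on it.
```yaml
# Hypothetical sketch: a per-component label keeps this service
# from matching the cp/ctrl/mb/logger/postgres pods that share
# app=myapp,environment=staging.
apiVersion: v1
kind: Service
metadata:
  name: ui-service
spec:
  selector:
    app: myapp
    environment: staging
    component: ui        # only the ui pods carry this label
  ports:
  - name: ui-port
    port: 8080
    targetPort: 8080
---
# The ui deployment's pod template must carry the same label:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ui-deployment
spec:
  selector:
    matchLabels:
      app: myapp
      component: ui
  template:
    metadata:
      labels:
        app: myapp
        environment: staging
        component: ui
    spec:
      containers:
      - name: ui
        image: myapp/ui:latest   # placeholder image
```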
a
that's got to be it...let me see if I can unscrew it.
c
selector:
  app: myapp
  environment: staging
is where you tell it what labels to match
a
yeah, I get that... and I've got separate labels for the cp, ctrl, mb, etc. deployments. I wonder if this is lingering from an older version of the manifests in my repo.
I might have something from kustomize bugging this up too. Thanks, I'll dig in and report back
Looks like this is an artifact of using commonLabels in my kustomization.yaml and, at some point, not having specific/unique labels in the service/deployment manifests. I cleaned up the ones I care about (right now) and found that clearing everything out and re-deploying left the postgres service with 6 IP:PORT endpoints, just like ui-service had. Thanks for the 2nd set of eyes.
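For anyone hitting this later: Kustomize's `commonLabels` injects the labels not only into `metadata.labels` but also into every Service's `spec.selector` (and every Deployment's `matchLabels`), so a kustomization like the sketch below makes every service in the overlay select every pod carrying those labels. The file names here are illustrative.
```yaml
# kustomization.yaml sketch: commonLabels is injected into every
# Service spec.selector, so all services end up matching all pods
# that carry these two labels.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  app: myapp
  environment: staging
resources:
- ui-service.yaml        # illustrative file names
- postgres-service.yaml

# Safer alternative: the newer `labels` field can add the labels to
# metadata only, leaving selectors alone:
# labels:
# - pairs:
#     app: myapp
#     environment: staging
#   includeSelectors: false
```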