# k3s
Where exactly do you get that error?
From traefik. I get a "no route to host" error.
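For reference, one way to pull those errors straight out of the bundled traefik deployment (assuming the stock k3s install, where traefik lives in kube-system):

```
# Tail the packaged traefik deployment's logs
kubectl logs -n kube-system deployment/traefik --tail=100 -f

# Note: debug-level entries (like the 502 traces shown later) usually require
# raising traefik's log level in the chart values first.
```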
are you sure the CNI is working properly?
What's the best way to validate? I can hit the svc IP and the pod IP from the host on which k3s is running, but it doesn't seem that traefik can from within the cluster.
I've seen metallb mentioned a few times, but my understanding was that k3s comes with klipper built in, so I shouldn't need it?
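One quick way to sanity-check the CNI from inside the cluster (rather than from the host) is to run a throwaway pod and hit the service and pod IPs from it. A rough sketch; the image and placeholder addresses are just examples:

```
# Temporary pod on the cluster network; substitute the real ClusterIP/pod IP and port
kubectl run net-test --rm -it --restart=Never --image=busybox:1.36 -- \
  wget -qO- -T 5 http://<service-cluster-ip>:<port>

# Repeat against the pod IP directly to separate service routing from pod reachability
kubectl run net-test --rm -it --restart=Never --image=busybox:1.36 -- \
  wget -qO- -T 5 http://<pod-ip>:<port>
```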
no… metallb would be a replacement for servicelb (klipper-lb)
Can you show the specific configuration you have in place and the error you're getting? What does the ingress resource look like, and what is the full error from traefik?
```
[gerard@gumption ~]$ kubectl get all -A
NAMESPACE     NAME                                          READY   STATUS        RESTARTS       AGE
kube-system   pod/helm-install-traefik-crd-wzwmn            0/1     Completed     0              28d
kube-system   pod/helm-install-traefik-dfmw8                0/1     Completed     1              28d
kube-system   pod/traefik-6d7d945c64-zjvtv                  1/1     Terminating   3 (17h ago)    2d23h
kube-system   pod/svclb-my-nginx-52dde893-5bv9f             0/1     Pending       0              18h
default       pod/unmanic-5cbfc4c76c-hwk7w                  1/1     Running       2 (67m ago)    16h
kube-system   pod/svclb-traefik-cca01f78-92jm8              2/2     Running       28 (67m ago)   28d
kube-system   pod/traefik-6d7d945c64-hqg6n                  1/1     Running       2 (67m ago)    16h
default       pod/my-nginx-5df9d8fcf-hbzx6                  1/1     Running       2 (67m ago)    16h
kube-system   pod/coredns-6799fbcd5-tc88x                   1/1     Running       14 (67m ago)   28d
kube-system   pod/metrics-server-67c658944b-flbh8           0/1     Running       22 (66m ago)   28d
kube-system   pod/local-path-provisioner-84db5d44d9-wwv7x   1/1     Running       22 (66m ago)   28d

NAMESPACE     NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
default       service/kubernetes        ClusterIP      10.43.0.1       <none>         443/TCP                      28d
kube-system   service/kube-dns          ClusterIP      10.43.0.10      <none>         53/UDP,53/TCP,9153/TCP       28d
kube-system   service/metrics-server    ClusterIP      10.43.228.46    <none>         443/TCP                      28d
default       service/unmanic-service   ClusterIP      10.43.124.186   <none>         8888/TCP                     2d23h
default       service/my-nginx          LoadBalancer   10.43.86.157    <pending>      80:30584/TCP                 18h
kube-system   service/traefik           LoadBalancer   10.43.168.174   10.20.30.161   80:31175/TCP,443:31161/TCP   28d

NAMESPACE     NAME                                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   daemonset.apps/svclb-traefik-cca01f78    1         1         1       1            1           <none>          28d
kube-system   daemonset.apps/svclb-my-nginx-52dde893   1         1         0       1            0           <none>          18h

NAMESPACE     NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/metrics-server           0/1     1            0           28d
default       deployment.apps/unmanic                  1/1     1            1           27d
default       deployment.apps/my-nginx                 1/1     1            1           18h
kube-system   deployment.apps/traefik                  1/1     1            1           28d
kube-system   deployment.apps/coredns                  1/1     1            1           28d
kube-system   deployment.apps/local-path-provisioner   1/1     1            1           28d

NAMESPACE     NAME                                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/metrics-server-67c658944b           1         1         0       28d
kube-system   replicaset.apps/traefik-f4564c4f4                   0         0         0       28d
default       replicaset.apps/unmanic-5cbfc4c76c                  1         1         1       27d
default       replicaset.apps/my-nginx-5df9d8fcf                  1         1         1       18h
kube-system   replicaset.apps/traefik-6d7d945c64                  1         1         1       2d23h
kube-system   replicaset.apps/coredns-6799fbcd5                   1         1         1       28d
kube-system   replicaset.apps/local-path-provisioner-84db5d44d9   1         1         1       28d

NAMESPACE     NAME                                 COMPLETIONS   DURATION   AGE
kube-system   job.batch/helm-install-traefik-crd   1/1           13s        28d
kube-system   job.batch/helm-install-traefik       1/1           17s        28d
[gerard@gumption ~]$ kubectl get ing -A
NAMESPACE   NAME              CLASS     HOSTS   ADDRESS        PORTS   AGE
default     unmanic-ingress   traefik   *       10.20.30.161   80      3d
default     my-nginx          traefik   *       10.20.30.161   80      18h
[gerard@gumption ~]$ kubectl get ing unmanic-ingress -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{},"name":"unmanic-ingress","namespace":"default"},"spec":{"rules":[{"http":{"paths":[{"backend":{"service":{"name":"unmanic-service","port":{"number":8888}}},"path":"/unmanic","pathType":"Prefix"}]}}]}}
  creationTimestamp: "2024-01-22T19:11:56Z"
  generation: 2
  name: unmanic-ingress
  namespace: default
  resourceVersion: "239573"
  uid: a66eaf61-8c85-4843-b76f-363dbde520cb
spec:
  ingressClassName: traefik
  rules:
  - http:
      paths:
      - backend:
          service:
            name: unmanic-service
            port:
              number: 8888
        path: /unmanic
        pathType: Prefix
status:
  loadBalancer:
    ingress:
    - ip: 10.20.30.161
```
```
time="2024-01-25T18:16:33Z" level=debug msg="'502 Bad Gateway' caused by: dial tcp 10.42.0.3:8888: connect: no route to host"
time="2024-01-25T18:16:34Z" level=debug msg="'502 Bad Gateway' caused by: dial tcp 10.42.0.3:8888: connect: no route to host"
time="2024-01-25T18:16:35Z" level=debug msg="'502 Bad Gateway' caused by: dial tcp 10.42.0.3:8888: connect: no route to host"
```
it’s trying to connect to the pod, not the service. The service has a 10.43. address; I suspect 10.42.0.3 is the `unmanic-5cbfc4c76c-hwk7w` pod, but I can’t tell.
```
default       service/unmanic-service   ClusterIP      10.43.124.186   <none>         8888/TCP                     2d23h
```
What is the output of `kubectl get pod -A -o wide`? Also, are you sure the process in the pod is actually listening on 8888? You’ll need to show the full YAML for the service and pod as well.
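Worth noting why the error shows a 10.42. address: traefik resolves the Ingress backend Service to its Endpoints and dials the pod IP plus targetPort directly, so 10.42.0.3:8888 is expected. A rough way to confirm the whole chain lines up, using the names from the output above:

```
# Which pod IP/port does the service resolve to?
kubectl get endpoints unmanic-service

# What port/targetPort does the service declare?
kubectl get svc unmanic-service -o jsonpath='{.spec.ports}{"\n"}'

# Which containerPort does the pod actually expose?
kubectl get pod unmanic-5cbfc4c76c-hwk7w -o jsonpath='{.spec.containers[*].ports}{"\n"}'
```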
```
NAMESPACE     NAME                                      READY   STATUS        RESTARTS       AGE     IP           NODE       NOMINATED NODE   READINESS GATES
kube-system   helm-install-traefik-crd-wzwmn            0/1     Completed     0              28d     <none>       gumption   <none>           <none>
kube-system   helm-install-traefik-dfmw8                0/1     Completed     1              28d     <none>       gumption   <none>           <none>
kube-system   traefik-6d7d945c64-zjvtv                  1/1     Terminating   3 (17h ago)    2d23h   10.42.0.84   gumption   <none>           <none>
kube-system   svclb-my-nginx-52dde893-5bv9f             0/1     Pending       0              18h     <none>       <none>     <none>           <none>
default       unmanic-5cbfc4c76c-hwk7w                  1/1     Running       2 (79m ago)    17h     10.42.0.3    gumption   <none>           <none>
kube-system   svclb-traefik-cca01f78-92jm8              2/2     Running       28 (79m ago)   28d     10.42.0.4    gumption   <none>           <none>
kube-system   traefik-6d7d945c64-hqg6n                  1/1     Running       2 (79m ago)    17h     10.42.0.8    gumption   <none>           <none>
default       my-nginx-5df9d8fcf-hbzx6                  1/1     Running       2 (79m ago)    17h     10.42.0.2    gumption   <none>           <none>
kube-system   coredns-6799fbcd5-tc88x                   1/1     Running       14 (79m ago)   28d     10.42.0.5    gumption   <none>           <none>
kube-system   metrics-server-67c658944b-flbh8           0/1     Running       22 (78m ago)   28d     10.42.0.6    gumption   <none>           <none>
kube-system   local-path-provisioner-84db5d44d9-wwv7x   1/1     Running       22 (78m ago)   28d     10.42.0.7    gumption   <none>           <none>
```
I suspect that the service doesn’t have the ports mapped correctly - what does 8888 on the service map to for the pod, and is that the correct port for the pod?
Alternatively, do you have a network policy deployed that is blocking the connection?
```
[gerard@gumption ~]$ curl http://10.42.0.3:8888 -vvv
*   Trying 10.42.0.3:8888...
* Connected to 10.42.0.3 (10.42.0.3) port 8888
> GET / HTTP/1.1
> Host: 10.42.0.3:8888
> User-Agent: curl/8.5.0
> Accept: */*
> 
< HTTP/1.1 301 Moved Permanently
< Server: TornadoServer/6.0.2
< Content-Type: text/html; charset=UTF-8
< Date: Thu, 25 Jan 2024 19:37:54 GMT
< Location: /unmanic/ui/dashboard/
< Content-Length: 0
< 
* Connection #0 to host 10.42.0.3 left intact
```
I can hit the pod directly.
```
[gerard@gumption ~]$ kubectl get networkpolicies -A
No resources found
```
No netpols.
do you have firewalld/ufw running?
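A couple of standard commands for checking that, in case it helps (firewalld and ufw shown):

```
# Is firewalld active on the node?
systemctl is-active firewalld

# Dump the currently active zone configuration
sudo firewall-cmd --list-all

# ufw equivalent, if that's what the distro uses
sudo ufw status verbose
```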
Looks like I do have firewalld running, but nothing there:
```
public (default, active)
  target: default
  ingress-priority: 0
  egress-priority: 0
  icmp-block-inversion: no
  interfaces: wlan0
  sources: 
  services: dhcpv6-client ssh
  ports: 
  protocols: 
  forward: yes
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:
```
Disable that and restart the node.
Sorry, this is kind of basic - I figured you’d looked at that first.
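For anyone hitting the same thing, a sketch of the two usual fixes. Disabling firewalld is simplest; the k3s docs alternatively suggest keeping it and trusting the pod and service CIDRs (10.42.0.0/16 and 10.43.0.0/16 on a default install):

```
# Option 1: turn firewalld off entirely, then restart the node
sudo systemctl disable --now firewalld
sudo reboot

# Option 2: keep firewalld but trust the k3s networks (default CIDRs assumed)
sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
sudo firewall-cmd --permanent --zone=trusted --add-source=10.43.0.0/16
sudo firewall-cmd --reload
```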
Wow, alright. That looks like it was the issue.
I would never have guessed - definitely an RTFM moment.
Thanks for your help!