# k3s
a
c
Is it wrong about your services not having any endpoints?
Or are there other ingresses that are not working but aren’t throwing any errors?
When you say they “aren’t functional”, what IS happening? Do you get an error page, do you get a connection refused, what?
k
Hi Brandon, thank you for responding. I am getting a connection refused. The services appear online, and all pods are running and in the ready state. This didn’t occur until I tried the upgrade; everything was functional before then.
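(For anyone following along: the symptom can be confirmed from outside the cluster with something like the sketch below — the ingress address is a placeholder for whatever the Traefik LoadBalancer IP is in your cluster.)

```shell
# "Connection refused" fails at the TCP level before any HTTP response arrives;
# a reachable ingress with no matching route usually returns an HTTP 404 instead.
curl -v http://<traefik-loadbalancer-ip>/
```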
c
Are you using servicelb? What is the status of the Traefik service and the svclb-traefik pods?
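(The checks suggested here translate to roughly the following commands, assuming the default Traefik deployment in `kube-system`.)

```shell
# Traefik LoadBalancer service: the EXTERNAL-IP column should show the expected address
kubectl -n kube-system get svc traefik -o wide

# ServiceLB pods, if k3s's built-in load balancer is (still) deployed;
# k3s names them with an svclb- prefix
kubectl -n kube-system get pods -o wide | grep svclb-traefik
```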
k
I am using metallb
Status of the services follows:

```
kubectl get svc --all-namespaces -o wide
NAMESPACE         NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE     SELECTOR
default           kubernetes                           ClusterIP      10.43.0.1       <none>         443/TCP                      120d    <none>
kube-system       kube-dns                             ClusterIP      10.43.0.10      <none>         53/UDP,53/TCP,9153/TCP       120d    k8s-app=kube-dns
kube-system       metrics-server                       ClusterIP      10.43.60.134    <none>         443/TCP                      120d    k8s-app=metrics-server
metallb           metallb-webhook-service              ClusterIP      10.43.210.166   <none>         443/TCP                      98d     app.kubernetes.io/component=controller,app.kubernetes.io/instance=metallb,app.kubernetes.io/name=metallb
cert-manager      cert-manager-webhook                 ClusterIP      10.43.175.138   <none>         443/TCP                      98d     app.kubernetes.io/component=webhook,app.kubernetes.io/instance=cert-manager,app.kubernetes.io/name=webhook
cert-manager      cert-manager                         ClusterIP      10.43.230.168   <none>         9402/TCP                     98d     app.kubernetes.io/component=controller,app.kubernetes.io/instance=cert-manager,app.kubernetes.io/name=cert-manager
cert-manager      godaddy-webhook                      ClusterIP      10.43.52.253    <none>         443/TCP                      98d     app.kubernetes.io/instance=godaddy-webhook,app.kubernetes.io/name=godaddy-webhook
grafana           grafana                              ClusterIP      10.43.65.48     <none>         80/TCP                       85d     app.kubernetes.io/instance=grafana,app.kubernetes.io/name=grafana
prometheus        prometheus-kube-state-metrics        ClusterIP      10.43.45.113    <none>         8080/TCP                     74d     app.kubernetes.io/instance=prometheus,app.kubernetes.io/name=kube-state-metrics
prometheus        prometheus-node-exporter             ClusterIP      10.43.113.35    <none>         9100/TCP                     74d     app=prometheus,component=node-exporter,release=prometheus
prometheus        prometheus-server                    ClusterIP      10.43.85.141    <none>         80/TCP                       74d     app=prometheus,component=server,release=prometheus
prometheus        prometheus-alertmanager              ClusterIP      10.43.207.19    <none>         80/TCP                       74d     app=prometheus,component=alertmanager,release=prometheus
prometheus        prometheus-pushgateway               ClusterIP      10.43.138.83    <none>         9091/TCP                     74d     app=prometheus,component=pushgateway,release=prometheus
opensearch        opensearch-dashboards                ClusterIP      10.43.30.247    <none>         5601/TCP                     51d     app=opensearch-dashboards,release=opensearch-dashboards
longhorn-system   longhorn-replica-manager             ClusterIP      None            <none>         <none>                       28d     longhorn.io/component=instance-manager,longhorn.io/instance-manager-type=replica
longhorn-system   longhorn-engine-manager              ClusterIP      None            <none>         <none>                       28d     longhorn.io/component=instance-manager,longhorn.io/instance-manager-type=engine
longhorn-system   longhorn-frontend                    ClusterIP      10.43.190.135   <none>         80/TCP                       28d     app=longhorn-ui
longhorn-system   longhorn-admission-webhook           ClusterIP      10.43.239.46    <none>         9443/TCP                     28d     app=longhorn-admission-webhook
longhorn-system   longhorn-backend                     ClusterIP      10.43.6.50      <none>         9500/TCP                     28d     app=longhorn-manager
longhorn-system   longhorn-conversion-webhook          ClusterIP      10.43.246.198   <none>         9443/TCP                     28d     app=longhorn-conversion-webhook
longhorn-system   longhorn-recovery-backend            ClusterIP      10.43.232.22    <none>         9600/TCP                     28d     app=longhorn-recovery-backend
opensearch        opensearch-cluster-master-headless   ClusterIP      None            <none>         9200/TCP,9300/TCP            28d     app.kubernetes.io/instance=opensearch,app.kubernetes.io/name=opensearch
opensearch        opensearch-cluster-master            ClusterIP      10.43.189.200   <none>         9200/TCP,9300/TCP            28d     app.kubernetes.io/instance=opensearch,app.kubernetes.io/name=opensearch
plex              plex                                 ClusterIP      10.43.164.71    <none>         80/TCP                       8d      app.kubernetes.io/name=plex
plex              plex-udp-32410                       LoadBalancer   10.43.137.204   192.168.0.10   32410:31992/UDP              8d      app.kubernetes.io/instance=plex,app.kubernetes.io/name=plex
plex              plex-udp-32413                       LoadBalancer   10.43.238.249   192.168.0.10   32413:30605/UDP              8d      app.kubernetes.io/instance=plex,app.kubernetes.io/name=plex
plex              plex-udp-32412                       LoadBalancer   10.43.237.2     192.168.0.10   32412:32739/UDP              8d      app.kubernetes.io/instance=plex,app.kubernetes.io/name=plex
plex              plex-tcp-3005                        LoadBalancer   10.43.132.253   192.168.0.10   3005:31438/TCP               8d      app.kubernetes.io/instance=plex,app.kubernetes.io/name=plex
plex              plex-udp-1900                        LoadBalancer   10.43.89.39     192.168.0.10   1900:30977/UDP               8d      app.kubernetes.io/instance=plex,app.kubernetes.io/name=plex
plex              plex-tcp-32400                       LoadBalancer   10.43.59.80     192.168.0.10   32400:32761/TCP              8d      app.kubernetes.io/instance=plex,app.kubernetes.io/name=plex
plex              plex-tcp-8324                        LoadBalancer   10.43.72.109    192.168.0.10   8324:32134/TCP               8d      app.kubernetes.io/instance=plex,app.kubernetes.io/name=plex
plex              plex-tcp-32469                       LoadBalancer   10.43.111.19    192.168.0.10   32469:30623/TCP              8d      app.kubernetes.io/instance=plex,app.kubernetes.io/name=plex
plex              plex-udp-32414                       LoadBalancer   10.43.253.58    192.168.0.10   32414:31949/UDP              8d      app.kubernetes.io/instance=plex,app.kubernetes.io/name=plex
rpi-dns           rpi-dns                              ClusterIP      10.43.48.200    <none>         80/TCP                       4d14h   app.kubernetes.io/instance=rpi-dns,app.kubernetes.io/name=rpi-dns
rpi-dns           rpi-dns-udp-svc                      LoadBalancer   10.43.37.46     192.168.0.10   53:30276/UDP                 4d14h   app.kubernetes.io/instance=rpi-dns,app.kubernetes.io/name=rpi-dns
rpi-dns           rpi-dns-tcp-svc                      LoadBalancer   10.43.162.167   192.168.0.10   53:31562/TCP                 4d14h   app.kubernetes.io/instance=rpi-dns,app.kubernetes.io/name=rpi-dns
kube-system       traefik                              LoadBalancer   10.43.248.126   192.168.0.11   80:31675/TCP,443:30315/TCP   13h     app.kubernetes.io/instance=traefik,app.kubernetes.io/name=traefik
longhorn-system   csi-attacher                         ClusterIP      10.43.108.167   <none>         12345/TCP                    13h     app=csi-attacher
longhorn-system   csi-provisioner                      ClusterIP      10.43.164.76    <none>         12345/TCP                    13h     app=csi-provisioner
longhorn-system   csi-resizer                          ClusterIP      10.43.60.125    <none>         12345/TCP                    13h     app=csi-resizer
longhorn-system   csi-snapshotter                      ClusterIP      10.43.144.158   <none>         12345/TCP                    13h     app=csi-snapshotter
```
Pod status follows:

```
NAME                                      READY   STATUS      RESTARTS       AGE    IP            NODE   NOMINATED NODE   READINESS GATES
csi-nfs-node-7pc98                        3/3     Running     54 (13h ago)   101d   192.168.0.5   pi1    <none>           <none>
local-path-provisioner-7b7dc8d6f5-l92tq   1/1     Running     0              13h    10.42.0.240   pi1    <none>           <none>
coredns-b96499967-2dv64                   1/1     Running     0              13h    10.42.0.241   pi1    <none>           <none>
helm-install-traefik-crd-c4wws            0/1     Completed   0              13h    10.42.0.242   pi1    <none>           <none>
helm-install-traefik-l8k2n                0/1     Completed   1              13h    10.42.0.239   pi1    <none>           <none>
traefik-9c6dc6686-thvv8                   1/1     Running     0              13h    10.42.0.243   pi1    <none>           <none>
csi-nfs-node-vbcwn                        3/3     Running     73 (13h ago)   101d   192.168.0.6   pi2    <none>           <none>
metrics-server-5c8978b444-bqvml           1/1     Running     2 (13h ago)    15h    10.42.1.9     pi2    <none>           <none>
csi-nfs-controller-f56b4b4b-cgqr8         3/3     Running     7 (13h ago)    15h    192.168.0.7   pi3    <none>           <none>
csi-nfs-node-hkvqr                        3/3     Running     69 (13h ago)   101d   192.168.0.7   pi3    <none>           <none>
csi-nfs-node-t86w4                        3/3     Running     69 (13h ago)   101d   192.168.0.8   pi4    <none>           <none>
```
I figured out the issue. It was a conflict between MetalLB and the k3s built-in service load balancer (ServiceLB). I thought I had ServiceLB disabled, but I didn’t have the right command-line switch set. To fix it, I used the correct switch and made sure my internal DNS server was pointing to the correct load balancer IP.
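(For anyone hitting the same conflict: a sketch of the fix as described, assuming the standard k3s install script and that the switch in question is ServiceLB's disable flag from the k3s docs; the hostname below is a placeholder.)

```shell
# Reinstall/restart the k3s server with the built-in ServiceLB disabled,
# so MetalLB is the only controller assigning LoadBalancer IPs:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=servicelb" sh -s - server

# Then confirm the internal DNS server answers with the Traefik
# LoadBalancer IP (192.168.0.11 in the svc listing), not a stale address:
nslookup myservice.example.lan 192.168.0.10
```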