# k3d
m
Service:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: argo-server
spec:
  ports:
  - name: web
    port: 2746
    targetPort: 2746
  selector:
    app: argo-server
  type: LoadBalancer
```
Ingress:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argo-server-ingress
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: argo-server
            port:
              number: 2746
        path: /
        pathType: Prefix
```
w
So which do you want to use? The LoadBalancer-type service or the Ingress? Since you're forwarding to ports that are not 80 or 443, I guess you're not using Ingress at all. Can you please check `kubectl get pods -A` and see if there are any `svclb` pods pending?
m
Ah, yeah, I guess locally I just need the service of type LoadBalancer:
```shell
╰─ k -n myns get po
NAME                      READY   STATUS    RESTARTS   AGE
svclb-argo-server-pnf69   1/1     Running   0          38m
```
(I'm still a bit confused about when I do and don't need a Service of type ClusterIP, NodePort, or LoadBalancer.)
When you ask
> So which do you want to use? The LoadBalancer-type service or the Ingress?
I get confused. I want to expose these things externally in my live clusters (k3s), and I also want to expose them when using k3d locally. Preferably, the configuration for the live clusters and the localhost cluster should look as similar as possible. In my live clusters, I think I do need a service of type LoadBalancer OR type NodePort to get the Ingress working. Answers like this one (https://stackoverflow.com/a/60076900) tell me that.
So my services are of type LoadBalancer at the moment. Now your question leads me to think that, when using k3d, I can choose whether to route traffic to the service (of type LoadBalancer) OR to the Ingress. Is that correct?
I don't think I'm able to get https://k3d.io/v5.4.1/usage/exposing_services/#1-via-ingress-recommended working. It worked when my service was of type LoadBalancer, but it doesn't seem to work when my service is of type ClusterIP. I'm following the steps pretty much exactly.
I'm sure there's something about
> map port `8081` from the host to port `80` on the container which matches the nodefilter `loadbalancer`
that I'm misunderstanding.
So far I understand that `-p "8081:3000@loadbalancer"` will map port 8081 on my system to port 3000 on the load balancer. I think the load balancer is the second one here:
```shell
╰─ k3d node list
NAME                        ROLE           CLUSTER        STATUS
k3d-localcluster-server-0   server         localcluster   running
k3d-localcluster-serverlb   loadbalancer   localcluster   running
```
I also noticed this:
```shell
╰─ docker ps
CONTAINER ID   IMAGE                            COMMAND                  CREATED         STATUS         PORTS                                                                                                                                                                                                         NAMES
66828b350766   ghcr.io/k3d-io/k3d-proxy:5.4.1   "/bin/sh -c nginx-pr…"   8 minutes ago   Up 8 minutes   80/tcp, 0.0.0.0:8086->8086/tcp, :::8086->8086/tcp, 0.0.0.0:8087->2746/tcp, :::8087->2746/tcp, 0.0.0.0:8081->3000/tcp, :::8081->3000/tcp, 0.0.0.0:37503->6443/tcp, 0.0.0.0:8082->8000/tcp, :::8082->8000/tcp   k3d-localcluster-serverlb
0ef97c8d1ace   rancher/k3s:v1.22.7-k3s1         "/bin/k3d-entrypoint…"   8 minutes ago   Up 8 minutes                                                                                                                                                                                                                 k3d-localcluster-server-0
```
There I can see those port mappings on the Docker container. What I don't understand yet is why localhost:8081 in my browser doesn't work now. What I've changed is to make all of my services of type ClusterIP. I still have the Ingresses.
I did confirm that the steps in https://k3d.io/v5.4.1/usage/exposing_services/#1-via-ingress-recommended work, so I must be doing something wrong in my own setup.
When looking at the logs of the load balancer Docker container, I see this when trying to go to e.g. localhost:8081:
```shell
╰─ docker logs k3d-localcluster-serverlb -f
2022/06/24 12:49:28 [error] 58#58: *187 connect() failed (111: Connection refused) while connecting to upstream, client: 172.29.0.1, server: 0.0.0.0:3000, upstream: "172.29.0.2:3000", bytes from/to client:0/0, bytes from/to upstream:0/0
```
I'm also confused by this:
> Since you're forwarding to ports that are not 80 or 443, I guess you're not using Ingress at all.
Can I not use Ingress to expose something on ports other than those? https://stackoverflow.com/a/56243253 tells me no 🤔
w
Woah, now that's a bunch of posts here 😁 Let's start with Ingress:
• By default, Ingress is meant for plain HTTP and HTTPS traffic, usually served on the privileged ports 80 and 443 respectively.
• The Traefik Ingress Controller listens on those two ports by default, but you can configure it to listen on different ports (I wouldn't recommend that).
• In K3s, the Traefik Ingress Controller is exposed using a service of `type: LoadBalancer`, which (in K3s) means that some `svclb` pods will be spawned on the nodes (k3d containers) that forward traffic from the nodes' ports 80 and 443 to the Ingress Controller (simplified).
• Now, as per the k3d documentation, you map some port of your host, say `8080`, to port `80` on the `loadbalancer` (which is the one container ending in `serverlb`, always spawned by k3d, as you noticed already) -> this makes the serverlb forward everything coming in on port 80 to port 80 on all server nodes in your k3d cluster, where it is then routed to the Ingress Controller.
• The Ingress Controller configures the cluster (simplified) so that traffic is then routed to the Kubernetes service you defined in the Ingress object -> here it doesn't matter whether that service is of type `NodePort`, `LoadBalancer`, or just `ClusterIP`, as the traffic is already inside the cluster through the Ingress.
• You use Ingress to access your services by domain name, e.g. myservice.mydomain.com.
With that said, let's have a look at one of your previous comments:
> What I don't understand yet is why localhost:8081 in my browser now doesn't work. What I've changed now is to make all of my services of type ClusterIP. I still have the Ingresses.
You forward port 8081 from your machine to 3000 on the loadbalancer, which forwards that to port 3000 on the server-0 container. But since you don't have anything exposed on that port on the server-0 container (`type: ClusterIP` makes the service available only inside the cluster), there is nothing it can talk to, hence the error in the proxy logs as well. There are many ways to expose your service externally, e.g.:
• `type: NodePort` opens a port on every node in your cluster and forwards traffic from there to your service;
• `type: LoadBalancer` talks to the API to get a load balancer with an external IP provisioned (at cloud providers that's a VM with a public IP; in K3s that's the IP of one of your nodes, since it cannot provision anything);
• `hostPort` in your PodSpec maps directly to a port on the node the pod runs on, without using a service at all.
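As a sketch of the `NodePort` variant (service name and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 3000        # port of the service inside the cluster
    targetPort: 3000  # container port on the matching pods
    nodePort: 30080   # opened on every node (default range 30000-32767)
```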
m
Thank you very much for the thorough answer! 🙏
From https://k3d.io/v5.4.1/usage/exposing_services/ it seems that what I need is:
• some web server deployment (nginx)
• a ClusterIP service
• an Ingress
and then to start k3d with `k3d cluster create -p "8081:80@loadbalancer"`.
To check my understanding: when I go to localhost:8081 after this, the request goes through (at least) these steps:
• (outside the cluster) to the k3d Docker container k3d-localcluster-serverlb (the k3d load balancer);
• (inside the cluster) to/through the traefik (LoadBalancer) service;
• (inside the cluster) to/through the svclb-traefik-xxxxx pod (this pod should've picked up any `kind: Ingress` that's defined);
• (inside the cluster) to/through the nginx ClusterIP service;
• (inside the cluster) to/through the nginx pod.
Does that look correct?
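The three pieces above could be sketched as manifests like these (names and image are my assumptions, loosely following the k3d docs example, not taken from it verbatim):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP       # no external exposure needed; the Ingress routes into it
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
```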
w
Simplified, that's how it works yep, though the order is a bit off I think 🤔 Additionally, there's all the Kubernetes Networking in between, e.g. kube-proxy and iptables rules to route traffic to a Kubernetes service. But you listed most components.
`svclb` is an instance of Rancher's `klipper-lb` (https://github.com/k3s-io/klipper-lb), which watches for `LoadBalancer`-type services and binds to the specified service ports on one of the nodes to "imitate" a real `LoadBalancer`.
On the k3d level: User -> 8081@localhost -> 80@loadbalancer-container -> 80@server-container.
In K3s (simplified flow): Ingress -> Service -> Pod.
m
Thanks! 🙂 My biggest initial misunderstanding was that the Ingress (through Traefik) goes through 80/443 either way. So I can't have multiple Ingresses answering on the same path (`/`) when developing locally; they'd get in each other's way.
So I can have one single k3d port forward (`-p "8081:80@loadbalancer"`) to the Traefik Ingress Controller, but I can have additional services of type LoadBalancer.
It seems to me that when developing locally, I'm better off just using services of type LoadBalancer rather than relying on Ingress, as long as I need to expose multiple things. I'm having problems exposing multiple things locally using Traefik, because then I need to set different paths for the different services, and then I need to deal with something like https://stackoverflow.com/questions/70789020/traefik-ingress-rewrite-target-does-nothing to make everything work.
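A sketch of that LoadBalancer-per-service route, with made-up names and ports: each service gets `type: LoadBalancer` (so klipper-lb binds its port on a node) plus its own k3d host mapping at cluster creation, e.g. `-p "8081:3000@loadbalancer" -p "8082:8000@loadbalancer"`, bypassing the Ingress entirely:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-a
spec:
  type: LoadBalancer   # reachable from the host via -p "8081:3000@loadbalancer"
  selector:
    app: service-a
  ports:
  - port: 3000
    targetPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: service-b
spec:
  type: LoadBalancer   # reachable from the host via -p "8082:8000@loadbalancer"
  selector:
    app: service-b
  ports:
  - port: 8000
    targetPort: 8000
```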
w
You can also use entirely different hosts instead of different paths, e.g. serviceA.localhost, serviceB.localhost, etc. At least on Linux you can use `libnss-myhostname` to resolve all `*.localhost` addresses to 127.0.0.1, so you won't have to touch your hosts file.
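For illustration (service names are assumptions), one Ingress rule per host, both backends staying plain ClusterIP services:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: local-dev
spec:
  rules:
  - host: servicea.localhost    # e.g. http://servicea.localhost:8081/
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-a
            port:
              number: 3000
  - host: serviceb.localhost    # e.g. http://serviceb.localhost:8081/
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-b
            port:
              number: 8000
```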
But there are many ways to solve the same challenge 😉
m
Ah thanks, nice tip!