#k3s

brash-controller-15153

03/21/2023, 5:04 PM
Hi! Does anyone have experience with a k3s cluster where the k3s server has a public IP and the k3s agents are in a private network behind NAT (like a home net, for example)?
• I opened the UDP ports 8472, 51820, 51821 and TCP ports 6443, 10250 on my router to allow connections to the private IPs where the agents are located.
• I also started the agents with the dynamic IP address given by my ISP, and the server with the public IP address. But somehow the Traefik ingress controller (or the Ingress) is not able to forward the incoming requests from the public URL staging.company.org to the agents on my private net. I also created other agents with public IPs, and they are able to serve a whoami application through staging.company.org, but when the load balancer selects the pods running on the nodes in the private net, it just hangs and no answer comes from the pods.
v1.24.10+k3s1
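Given the ports listed (8472 for VXLAN, 51820/51821 for WireGuard, 6443 for the API server, 10250 for the kubelet), the setup being described is presumably along these lines. A minimal sketch, assuming the `wireguard-native` flannel backend and a placeholder public IP (`203.0.113.10`) and token:

```shell
# On the server node (public IP is a placeholder):
curl -sfL https://get.k3s.io | sh -s - server \
  --flannel-backend=wireguard-native \
  --node-external-ip=203.0.113.10

# On each NATed agent; the token comes from
# /var/lib/rancher/k3s/server/node-token on the server:
curl -sfL https://get.k3s.io | \
  K3S_URL=https://203.0.113.10:6443 K3S_TOKEN=<token> sh -s - agent
```

With `wireguard-native`, flannel terminates the WireGuard tunnel in the k3s process itself, so agents only need outbound reachability to the server's advertised IP and ports.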

creamy-pencil-82913

03/21/2023, 5:33 PM
if you’re seeing responses come from pods without NATing, that sounds a lot like https://github.com/k3s-io/k3s/issues/7096 - can you try the workaround mentioned in the comments?

brash-controller-15153

03/21/2023, 6:24 PM
The iptables version installed is iptables v1.8.7 (nf_tables), so it should be OK.
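The linked issue involves the host's iptables variant interacting badly with the rules k3s writes. A quick way to check what each node is actually running (paths are typical Debian/Ubuntu locations, not guaranteed on every distro):

```shell
# Report the host iptables variant and version
# ("nf_tables" or "legacy" appears in parentheses):
iptables --version

# On Debian-family systems, show which backend the
# iptables alternative currently points at:
update-alternatives --display iptables 2>/dev/null
```

Versions 1.8.0 through 1.8.4 of the nf_tables build are the ones known to misbehave with k3s; 1.8.7 should indeed be outside that range.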

plain-byte-79620

03/22/2023, 11:11 AM
Is the service listening on the nodes on the private network? Maybe you have to configure your router to NAT those ports to the right private node IP.

brash-controller-15153

03/22/2023, 11:18 AM
sorry… what do you mean by "service listening on the nodes"? 🙂

plain-byte-79620

03/22/2023, 11:20 AM
Are you not exposing the pods on the agents?

brash-controller-15153

03/22/2023, 5:25 PM
mmm, do you mean with the Service?

plain-byte-79620

03/23/2023, 9:32 AM
Yes. I am trying to understand your setup. From the ports that you opened, it seems that you are using WireGuard as the backend. The server starts with no issue with a public IP. Does the agent start or not?

brash-controller-15153

03/23/2023, 4:19 PM
The agents could start. I created the deployment, the service, and the ingress:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - k3sde3
                      # - k3sde2
                      # - k3sde3
      containers:
        - name: whoami
          image: traefik/whoami
          ports:
            - containerPort: 80
          resources:
            limits:
              memory: 256Mi
              cpu: "0.5"
            requests:
              memory: 20Mi
              cpu: "0.1"
---
# whoami Service
apiVersion: v1
kind: Service
metadata:
  name: whoami-service
spec:
  selector:
    app: whoami
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
---
# whoami Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-staging"
spec:
  tls:
    - hosts:
        - staging.staging-buky.co
      secretName: staging-co
  rules:
    - host: staging.staging-buky.co
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami-service
                port:
                  name: http
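After applying manifests like these, one way to confirm that the Service actually has endpoints and that the routing works in-cluster, before involving the router/NAT at all (namespace `default` and filename `whoami.yaml` are assumptions):

```shell
kubectl apply -f whoami.yaml

# Should list the pod IP(s) scheduled on k3sde3; an empty
# ENDPOINTS column means the selector matches no running pod:
kubectl get endpoints whoami-service

# Shows which backend the Ingress resolved and any events:
kubectl describe ingress whoami-ingress

# Hit the Service from inside the cluster, bypassing DNS,
# Cloudflare, and the home router entirely:
kubectl run curltest --rm -it --image=curlimages/curl --restart=Never -- \
  curl -s http://whoami-service.default.svc.cluster.local
```

If the in-cluster curl hangs only when the pod lands on a private-net node, the problem is node-to-node overlay traffic rather than the Ingress definition.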

plain-byte-79620

03/23/2023, 4:23 PM
You are creating this service on port 80, and it is deployed on a worker, I presume. How are you trying to reach it?

brash-controller-15153

03/23/2023, 4:41 PM
I thought I just have to create the service in the same namespace as the ingress and the deployment. I declared the worker k3sde3 since this is the one in my private network.
I'm not trying to contact it directly. Shouldn't k3s handle this? I mean, through wireguard-native?
I appreciate your help very much

plain-byte-79620

03/23/2023, 4:53 PM
WireGuard is only used for the pod-to-pod traffic. Services are managed by kube-proxy. In the case of an Ingress resource, the ingress controller exposes the service and then forwards the traffic to the right pod. Are you using a tutorial for this configuration?

brash-controller-15153

03/23/2023, 5:15 PM
Not at all, I'm following the information in the official documentation. But I'm wondering how the traffic forwarding works, I mean from client -> DNS (Cloudflare) -> k3s server -> ServiceLB -> Traefik ingress controller -> Ingress -> Service -> pod. This will work if the nodes declare a public IP, but how should the setup look for NATed nodes in a private network?
I also know that k3s starts a ServiceLB inside all nodes
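The hops in that chain can be inspected individually. A sketch, assuming k3s's default Traefik deployment in `kube-system`:

```shell
# ServiceLB publishes Traefik as a LoadBalancer service; the
# EXTERNAL-IP column lists every node IP it advertises -- only
# publicly reachable ones belong in the Cloudflare DNS record:
kubectl -n kube-system get svc traefik -o wide

# Compare each node's INTERNAL-IP and EXTERNAL-IP to see which
# nodes are actually reachable from outside the NAT:
kubectl get nodes -o wide
```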

creamy-pencil-82913

03/23/2023, 5:31 PM
Clients connect to the ingress Service, which is a LoadBalancer service. The ingress routes from the ingress pod to the backend service's pods.
If some nodes don't have a public IP for clients to connect to, then you shouldn't direct client traffic to those nodes. Don't put private addresses in the DNS record for whatever hostname you are configuring in Cloudflare DNS.
The ingress should still be able to route traffic to the backend service if its pods are running on those nodes, because that's in-cluster traffic (assuming you have some way of getting that traffic between nodes, such as all the nodes being on the same private network).

brash-controller-15153

03/23/2023, 7:28 PM
Okay, I thank you both @creamy-pencil-82913 and @plain-byte-79620 for helping me. I had to install wireguard on the k3sserver and on the k3sagents in order to make it work. I also had to apply the port-forwarding rules in my AirPort Express for the known ports.
After that, a restart of the agent and server processes helped as well.
Under Ubuntu Server 22.04.2 LTS it works as expected. Some agents running Raspberry Pi OS are still not reachable, maybe because of iptables.
I'm also wondering why the documentation says that port forwarding is needed, since the k3s server will act as a central point for communication between the nodes. When using WireGuard, only the k3s server's port (default 6443 for the Kubernetes API) needs to be accessible from the agents. The agents initiate the connection to the server, so no additional port forwarding should be needed for the agents behind NAT.
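For the unreachable Raspberry Pi OS agents, a commonly cited workaround (from k3s's known-issues documentation for Debian-family hosts with older nf_tables builds of iptables) is to switch the host to the legacy iptables backend. A sketch, to be run on each affected Pi:

```shell
# Flush any rules written by the buggy nf_tables binary first:
sudo iptables -F

# Point the iptables/ip6tables alternatives at the legacy backend:
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy

# Reboot so k3s and the kernel start from a clean rule set:
sudo reboot
```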

creamy-pencil-82913

03/23/2023, 8:04 PM
The server is not a “central point for communication”. Intra-cluster traffic (pod to pod or pod to service) goes directly between nodes - peer to peer if you will.
The server is a central point of communication only for traffic to/from the control plane.