# k3s
c
if you’re seeing responses come from pods without NATing, that sounds a lot like https://github.com/k3s-io/k3s/issues/7096 - can you try the workaround mentioned in the comments?
b
the iptables version installed is
iptables v1.8.7 (nf_tables)
so it should be OK.
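(For reference, a quick way to confirm which iptables variant and mode the host is using; the `update-alternatives` call assumes a Debian/Ubuntu-style system:)
```sh
# Print the host iptables version and mode (legacy vs nf_tables);
# nf_tables iptables older than 1.8.4 is known to mangle k3s NAT rules.
iptables --version
# On Debian/Ubuntu, show which implementation the iptables symlink points at.
update-alternatives --display iptables
```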
p
Is the service listening on the nodes on the private network? Maybe you have to configure your router to NAT those ports to the right private node IP.
b
sorry… what do you mean by the service listening on the nodes? 🙂
p
Are you not exposing the pods on the agents?
b
mmm, do you mean the Service?
p
yes. I am trying to understand your setup. From the port that you opened, it seems that you are using WireGuard as the backend. The server starts with no issue with a public IP. Does the agent start or not?
b
the agents did start. I created the deployment, the service, and the ingress:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - k3sde3
                      # - k3sde2
                      # - k3sde3
      containers:
        - name: whoami
          image: traefik/whoami
          ports:
            - containerPort: 80
          resources:
            limits:
              memory: 256Mi
              cpu: "0.5"
            requests:
              memory: 20Mi
              cpu: "0.1"
---
# whoami Service
apiVersion: v1
kind: Service
metadata:
  name: whoami-service
spec:
  selector:
    app: whoami
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
---
# whoami Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-staging"
spec:
  tls:
    - hosts:
        - staging.staging-buky.co
      secretName: staging-co
  rules:
    - host: staging.staging-buky.co
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami-service
                port:
                  name: http
```
p
You are creating this service on port 80, deployed on a worker I presume. How are you trying to contact it?
b
I thought I just have to create the service in the same namespace as the ingress and the deployment. I selected the worker k3sde3 since it is the one in my private network
I'm not trying to contact it. Shouldn't k3s handle this? I mean through wireguard-native?
I appreciate your help very much
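(A minimal way to check the Service from inside the cluster, assuming the manifests above were applied in the default namespace; `curl-test` is just a throwaway pod name:)
```sh
# Confirm the Service selector actually matches the whoami pod.
kubectl get endpoints whoami-service
# Curl the Service by its in-cluster DNS name from a temporary pod.
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://whoami-service.default.svc.cluster.local
```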
p
WireGuard is only used for pod-to-pod traffic. Services are managed by kube-proxy. In the case of an Ingress resource, the Ingress exposes the service and then forwards the traffic to the right pod. Are you using a tutorial for this configuration?
b
not at all. I'm following the information in the official documentation, but I'm wondering how the traffic forwarding works. I mean: client -> DNS (Cloudflare) -> k3s server -> ServiceLB -> Traefik ingress controller -> Ingress -> Service -> pod. This works if the nodes have a public IP, but what should the setup be for NATed nodes in a private network?
I also know that k3s starts a ServiceLB on all nodes
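(With k3s defaults you can see this directly: the bundled Traefik ingress is exposed as a LoadBalancer service, and ServiceLB runs a small forwarding pod on every node, advertising the node IPs as EXTERNAL-IP:)
```sh
# The bundled Traefik ingress controller, exposed by ServiceLB.
kubectl get svc -n kube-system traefik
# One svclb-* pod per node forwards host ports 80/443 into Traefik.
kubectl get pods -n kube-system -o wide | grep svclb
```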
c
the clients connect to the Ingress Service, which is a LoadBalancer service. The ingress routes from the ingress pod to the backend service's pods.
If some nodes don’t have a public IP for clients to connect to, then you shouldn’t direct client traffic to those nodes. Don’t put private addresses in the DNS record for whatever hostname you are configuring in CloudFlare DNS.
the ingress should still be able to route traffic to the backend service if its pods are running on those nodes, because that's in-cluster traffic (assuming you have some way of getting that traffic between nodes, such as all the nodes being on the same private network)
b
Okay, thank you both @creamy-pencil-82913 and @plain-byte-79620 for helping me. I had to install wireguard on the k3sserver and on the k3sagents in order to make it work. I also had to apply the port forwarding rules in my AirPort Express for the known ports. After that, a restart of the agent and master processes helped as well.
Under Ubuntu Server 22.04.2 LTS it works as expected. Some agents running Raspberry Pi OS are still not reachable, maybe because of iptables.
I'm also wondering why the documentation says that port forwarding is needed, since the k3s server acts as a central point for communication between the nodes. When using WireGuard, only the k3s server's port (default 6443 for the Kubernetes API) needs to be accessible from the agents. The agents initiate the connection to the server, so no additional port forwarding should be needed for agents behind NAT.
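(For comparison, the inbound rules the k3s networking docs list for a wireguard-native cluster; a sketch using ufw, assuming Ubuntu and IPv4 only:)
```sh
# Kubernetes API: needed from agents (and kubectl) to the server only.
ufw allow 6443/tcp
# Kubelet metrics: needed between all nodes.
ufw allow 10250/tcp
# flannel wireguard-native data path: needed between ALL nodes,
# which is why a NATed agent still needs this port forwarded to it.
ufw allow 51820/udp
```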
c
The server is not a “central point for communication”. Intra-cluster traffic (pod to pod or pod to service) goes directly between nodes - peer to peer if you will.
The server is a central point of communication only for traffic to/from the control plane.
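(A sketch of the flags involved, per the k3s docs; the addresses and token are placeholders. `--node-external-ip` tells flannel which address WireGuard peers should dial, which is what makes the node-to-node data path work across NAT:)
```sh
# On the server (reachable on a public IP):
k3s server --flannel-backend=wireguard-native \
  --node-external-ip=<server-public-ip>

# On each agent behind NAT; it dials the server on 6443, but other
# nodes still need to reach its forwarded WireGuard port (51820/udp).
k3s agent --server https://<server-public-ip>:6443 --token <token> \
  --node-external-ip=<router-public-ip>
```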