# k3s
c
Agents expect to be able to connect directly to the server IPs. If the server nodes are actually pods running somewhere, with inaccessible addresses, that will not work.
a
This works because the nodes are in the same subnet as the agents?
c
Right, but even if you do that, the agents still need to be able to connect directly to the servers. The external LB just provides a fixed registration address that the agents initially use to find a server. Once they are connected, they switch over to connecting directly to the servers.
All cluster members need to be able to connect directly to each other.
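For example, registering an agent looks something like this (the LB hostname and token here are made-up placeholders):

```sh
# The agent only uses the LB as a fixed registration address;
# after joining, it connects straight to the individual servers.
k3s agent --server https://k3s-lb.example.com:6443 --token <cluster-join-token>
```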
a
Okay that makes sense, so the best option here is to establish L3 connectivity in some way via wireguard or some other solution?
c
idk that you can run wireguard in a pod, I’ve not tried it.
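If you did try it, the pod would at least need NET_ADMIN, and the host would need the wireguard kernel module (or you’d have to fall back to userspace wireguard-go). Roughly:

```yaml
# Hypothetical container securityContext for running WireGuard inside a pod;
# also assumes the wireguard kernel module is available on the host.
securityContext:
  capabilities:
    add: ["NET_ADMIN"]
```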
a
This is probably not the place to ask, but have you used k0s for a setup like this before? They seem to advertise supporting a setup along these lines, but I was wondering if I would end up in the same situation with that.
c
no, I have never used k0s
What you’re doing here might work better if you looked at it as a virtual control-plane and ran the servers with --disable-agent, so that they are not full members of the cluster. This isn’t a supported configuration, but neither is running in an environment that lacks full connectivity between all nodes.
a
Unfortunately they do already have --disable-agent. These are the flags I am passing:
```yaml
- server
- --disable-agent
- --disable=coredns,servicelb,traefik
- --tls-san={{tlsSan}}
- --flannel-backend=none
- --egress-selector-mode=cluster
```
c
yeah, so do you know how `kubectl logs` and `kubectl exec` work? And what the egress-selector is doing?
a
Correct me if I am wrong but essentially the k8s control plane establishes a websocket? connection to the kubelet on the node which executes the action. As for the egress selector, I am not sure.
c
right, so when you run one of those commands, the apiserver makes a connection to the kubelet to pull logs, or to run the command in the pod and pipe output back to the client. This means that the server MUST be able to open a connection to the kubelet. I am guessing that your server pods can’t connect to the agents? K3s includes an embedded egress proxy so that the apiserver can connect to kubelets, using a websocket tunnel connection initiated by the agent. This means that you only need agent -> server connectivity, not server -> agent. However, this does mean that the agents need to be able to connect directly to all of the servers, so that every server has a websocket tunnel to use when it needs to talk to that kubelet. The problem you have in your environment is that servers can’t connect to agents, and agents can’t connect to servers. All anything can connect to is the LB you’ve put in front of the servers.
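This is the same mechanism upstream Kubernetes uses for konnectivity. K3s wires it up internally when you set --egress-selector-mode, but the upstream equivalent is an apiserver egress selector config roughly like this (the socket path is a hypothetical example):

```yaml
# Sketch of the upstream EgressSelectorConfiguration; k3s generates the
# equivalent internally rather than having you write this file yourself.
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
- name: cluster                # traffic from the apiserver to kubelets/pods
  connection:
    proxyProtocol: HTTPConnect # tunneled via the embedded egress proxy
    transport:
      uds:
        udsName: /run/konnectivity/egress.socket
```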
Basically, you need to rearchitect your design so that you have at least SOME sort of functional connectivity between agents and servers. You cannot rely on the reverse proxy in front of the servers handling everything.
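A quick sanity check once you have that: from an agent, hit a server’s supervisor port directly instead of going through the LB (the address below is made up):

```sh
# k3s answers GET /ping on the supervisor port; a 200 "pong" here means
# the agent can reach this server directly, which is what the tunnel needs.
curl -vk https://10.0.1.10:6443/ping
```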
a
Thank you for the detailed explanation and help, looks like I have some work to do 🙂