# k3s
c
I’m not sure what you mean by controllers and workers. Do you mean servers and agents?
b
Yes
```
root@control0:~# kubectl get nodes
NAME                        STATUS   ROLES                       AGE   VERSION
control0.kube.nadybot.org   Ready    control-plane,etcd,master   87m   v1.28.5+k3s1
control1.kube.nadybot.org   Ready    control-plane,etcd,master   84m   v1.28.5+k3s1
control2.kube.nadybot.org   Ready    control-plane,etcd,master   82m   v1.28.5+k3s1
worker0.kube.nadybot.org    Ready    <none>                      79m   v1.28.5+k3s1
worker1.kube.nadybot.org    Ready    <none>                      79m   v1.28.5+k3s1
worker2.kube.nadybot.org    Ready    <none>                      79m   v1.28.5+k3s1
worker3.kube.nadybot.org    Ready    <none>                      79m   v1.28.5+k3s1
worker4.kube.nadybot.org    Ready    <none>                      79m   v1.28.5+k3s1
```
c
the kubelet bind address needs to be reachable from the other nodes in the cluster for metrics, kubectl exec, kubectl logs, and so on to work right.
and it should be one of the node’s internal or external IP addresses. If the kubelet bind address doesn’t match one of the addresses shown in
kubectl get node -o wide
you’re going to have problems.
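For example, one quick way to line those up (a hypothetical check, not anything K3s-specific; the jsonpath just prints each node's name next to its address list so you can compare against the kubelet's bind address):
```
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[*].address}{"\n"}{end}'
```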
b
The IP address matches, and they can all reach each other on that IP. Is securing the servers by using a private network overlay actually the right approach, or would it be safe enough to just have K3s bind to the external, world-reachable IP?
c
if they’re all running on a private network and you’re only exposing them via a single external IP and an external load balancer, I don’t personally see a lot of value in putting another overlay on the private network that they all share.
b
They only have one external IP. I'm creating the private network via Netmaker (a WireGuard mesh VPN). The question is: what is recommended security-wise? Is it safe to bind all K3s services to the external IP, or is my private VPN the better choice?
c
right, so if you have only one external IP for the whole cluster, then all the nodes are communicating between themselves via private IPs, correct?
b
No, I have one external IP per VPS and nothing else. That's why I added a private VPN on top
c
oh. You said you have only one public IP. If you have one PER NODE that is something else entirely.
b
Yes, sorry. One public IP per node and that's it. The provider doesn't have any private network I could use
c
Have you considered just using the built-in WireGuard/Tailscale support? Otherwise you’re running vxlan over the top of whatever other VPN you are using, which isn’t particularly efficient.
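That built-in support is flannel's wireguard-native backend. A sketch of what enabling it might look like on the servers, assuming a standard get.k3s.io install (agents pick the backend up from the server):
```
curl -sfL https://get.k3s.io | sh -s - server \
    --flannel-backend=wireguard-native
```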
b
I didn't understand whether that solution would give me what I need. Would it prevent K3s from binding to the external IP address and only run WireGuard there?
c
no. there’s not a good way to manually bind everything to a private IP. Do you not have the ability to simply not expose some ports on the public IPs?
b
Like how? K3s binds to loads of ports, and unless I set up firewall rules that's not really an option. And the problem with firewall rules is that they need to be changed for every agent that joins
c
most cloud providers offer security group rules or some other equivalent that protects the nodes via an external firewall type system.
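Without that, a host-level firewall scoped to the mesh subnet would avoid per-agent rule churn, since a subnet rule covers new agents automatically. A sketch using ufw; the 10.23.0.0/24 subnet is a guess based on the node IPs that come up later, and it assumes ufw's default deny-incoming policy:
```
# allow the standard K3s ports only from the WireGuard mesh
ufw allow from 10.23.0.0/24 to any port 6443 proto tcp        # apiserver (servers)
ufw allow from 10.23.0.0/24 to any port 10250 proto tcp       # kubelet (all nodes)
ufw allow from 10.23.0.0/24 to any port 8472 proto udp        # flannel vxlan (all nodes)
ufw allow from 10.23.0.0/24 to any port 2379:2380 proto tcp   # embedded etcd (servers)
```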
b
Trust me, this one doesn't. Given that, a WireGuard mesh with K3s on top doesn't seem like a bad choice then?
c
yeah, not ideal, but if that’s the best you can do, I guess
You might play with the flannel backend options to find one with minimum overhead
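For reference, the backend is a server-side flag; roughly how the usual options trade overhead for requirements (a sketch; whether host-gw works over a mesh VPN depends on the nodes having directly-routable addresses):
```
# vxlan             default, ~50 bytes of encapsulation per packet
# host-gw           no encapsulation, but needs directly-routable node IPs
# wireguard-native  encrypted, roughly 60-80 bytes of overhead
# none              bring your own CNI
k3s server --flannel-backend=host-gw
```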
b
The vxlan one isn't adding encryption, right? Since my hypervisors are all on pretty modern CPUs, and my VPSs all have 8 or more cores, the WireGuard encryption shouldn't add too much load
c
no, but anything that encapsulates will further reduce the packet MTU
b
That's true. What's the actual visible effect then?
c
more packets to send the same amount of data, a bit of additional processing to handle the headers, and so on?
b
My MTU on the private VPN is 1420
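To put numbers on that 1420 (assuming a 1500-byte NIC MTU; WireGuard reserves 80 bytes in the worst case, and vxlan adds roughly another 50):
```
echo $((1500 - 80))   # 1420 -> MTU inside the WireGuard mesh
echo $((1420 - 50))   # 1370 -> MTU inside flannel vxlan running on top of it
```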
c
just… overhead
b
Okay, if that's all, and we're not talking connections dropping or anything like that… I can see if I can tune it up
Thank you for answering all my questions. I'm off to bed. Wish you a great rest of the day!
@creamy-pencil-82913 After creating several deployments, I noticed that even though all my servers and agents are configured identically network-wise, I cannot get logs from pods running on any of the servers. It works on the agents, not on the servers. All agents and servers are completely bound to the VPN interface. The error in K3s is:
```
k3s[140407]: E0108 12:02:02.257141  140407 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"error dialing backend: proxy error from 10.23.0.8:6443 while dialing 10.23.0.8:10250, code 502: 502 Bad Gateway"}: error dialing backend: proxy error from 10.23.0.8:6443 while dialing 10.23.0.8:10250, code 502: 502 Bad Gateway
```
I get this even when I'm on 10.23.0.8, but I can definitely connect to the kubelet from there:
```
curl -i -k 'https://10.23.0.8:10250/containerLogs/kube-system/traefik-f4564c4f4-8mjdm/traefik'
HTTP/2 401
content-type: text/plain; charset=utf-8
content-length: 12
date: Mon, 08 Jan 2024 14:07:48 GMT

Unauthorized
```
Network configuration in k3s.service is done like this:
```
…
	'--node-ip=10.23.0.8' \
	'--flannel-iface=netmaker' \
	'--bind-address=10.23.0.8' \
	'--kubelet-arg=address=10.23.0.8' \
…
```
What might be causing this? These arguments are the same on the agents.
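One way to confirm the kubelet really is bound where those arguments say it is (assumes iproute2's `ss` is available on the node):
```
ss -tlnp | grep 10250
# expect LISTEN on 10.23.0.8:10250, not *:10250 or the public IP
```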
I ditched the whole cluster and redid everything from scratch, only to find out that suddenly no node was able to retrieve logs from any node, not even its own. After much trial and error, I found out that
--egress-selector-mode disabled
fixes the problem for me. I'm not 100% sure why, but maybe it makes sense?
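It plausibly does: in the default egress-selector mode, the apiserver reaches kubelets through K3s' tunnel proxy on the supervisor port, which looks like exactly the hop failing in the 502 above (proxy error from 10.23.0.8:6443 while dialing 10.23.0.8:10250); disabling it makes the apiserver dial port 10250 directly, which works here since every node can reach every kubelet over the VPN. If you'd rather not edit k3s.service, a config-file form of the same fix (server nodes; /etc/rancher/k3s/config.yaml is the standard K3s config path):
```
# equivalent to the --egress-selector-mode=disabled CLI flag
echo 'egress-selector-mode: disabled' >> /etc/rancher/k3s/config.yaml
systemctl restart k3s
```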