# k3d
a
This message was deleted.
w
It was the fastest to implement for me back then 🤷‍♂️ Since it's completely unrelated to K3s internals and we only need plain TCP/UDP proxying, I didn't bother too much with choosing something else.
e
fair enough! thanks for the answer ❤️
w
Sure thing 🙂
e
do you do anything specific on the network layer to route traffic on different OSes, or is that done on other layers of k3d/k3s?
w
k3d really doesn't do much OS-specific in general. Cluster-External networking is handled by Docker (we just do some "manual" IPAM to ensure that server node containers keep the same IP across restarts). Cluster-Internal is handled by K3s. What's your problem/challenge?
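As a rough illustration of that "manual" IPAM idea — plain Docker only, with made-up names, subnet, and a stand-in image, not k3d's actual code — a user-defined network with a fixed subnet lets you pin a container to a specific address so it comes back with the same IP after a restart:

```sh
# Hypothetical network/subnet/container names, just to show the Docker mechanism
docker network create --subnet 172.28.0.0/16 demonet

# --ip only works on user-defined networks; the container keeps this address across restarts
docker run -d --name demo-server --network demonet --ip 172.28.0.10 nginx:alpine

docker inspect -f '{{ .NetworkSettings.Networks.demonet.IPAddress }}' demo-server
docker restart demo-server
docker inspect -f '{{ .NetworkSettings.Networks.demonet.IPAddress }}' demo-server   # same IP as before
```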
e
I don’t have a specific problem atm. just trying to understand how the different layers work.
I’m also running docker-for-mac which has its own VM/network routing
w
There are some docs over here: https://k3d.io/v5.5.1/design/project/ Not sure if that's helpful enough
e
I have a ways to go. I also don’t fully understand why `docker ps` won’t return all the containers in deployed pods, but I think that has something to do with Docker-in-Docker runtimes
w
Because K3s is nested inside containers. So when k3d creates a K3s cluster you have:
• 1 Docker container for the k3d-proxy (optional, but default)
• 1 Docker container for the k3d-tools (optional)
• n Docker containers for the K3s nodes
Inside the K3s node containers, K3s is running containerd. So when you deploy a pod, it will end up being run by containerd within the K3s node containers, so it's not connected at all to your host's Docker.
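To see that split in practice — assuming a default cluster created with `k3d cluster create mycluster`; container names follow k3d's `k3d-<cluster>-...` pattern and the example output is illustrative:

```sh
# The host's Docker only knows about the k3d containers themselves ...
docker ps --format '{{.Names}}'
# e.g. k3d-mycluster-serverlb
#      k3d-mycluster-server-0

# ... while the pod containers run in containerd inside the node container.
# K3s bundles crictl, so you can list them from within the node:
docker exec k3d-mycluster-server-0 crictl ps
```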
e
so docker-in-docker or more specifically containerd-in-docker?
w
Yep
Well, it's k3d = K3s-in-Docker, where K3s incorporates containerd as the runtime
e
👍
I have more reading to do on linux namespaces, cgroups, chroot
w
Diving into the basics of containers? Have fun! 🙂
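For a quick hands-on taste of those namespaces before the reading, util-linux's `unshare` is enough of a sketch (assuming unprivileged user namespaces are enabled on your kernel, which is the default on most current distros):

```sh
# New user + PID + mount namespaces; --mount-proc remounts /proc
# so `ps` only sees processes in the new PID namespace
unshare --user --map-root-user --pid --fork --mount-proc sh -c 'ps -ef'
# Typically prints just the sh and ps processes instead of the whole host
```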
e
I’m a bit time constrained, but that’s where I’m at, yeah 😄
tnx
while we’re on the subject, are there any tools you could recommend for learning/debugging k8s network layers aside from the obvious ones (netstat, mtr, dig, ping, ...)?
I’ve heard about telepresence, and it seems like every blog is recommending it, but it seems dangerous unless you’re connecting only to a dev cluster.
w
Isn't telepresence for spawning connections to your cluster to develop/debug your application rather than the k8s network stack?
e
not sure. I don’t understand it fully yet
w
It all depends on the CNI plugin that you have deployed as well. E.g. Cilium has lots of nice built-in tooling and a good CLI and UI (Hubble). You have some other CNI but also kube-proxy? Better know some iptables/nftables as well.
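For the kube-proxy/iptables side, a quick way to see what it actually programs — assuming kube-proxy runs in its default iptables mode and you have a shell plus iptables on a node; the `KUBE-*` chains in the nat table are the ones kube-proxy itself creates:

```sh
# Entry point for ClusterIP/NodePort traffic
sudo iptables -t nat -L KUBE-SERVICES -n | head

# All kube-proxy rules: per-Service KUBE-SVC-* chains fan out to per-endpoint KUBE-SEP-* chains
sudo iptables-save -t nat | grep '^-A KUBE-' | head
```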
e
k3d/k3s uses flannel by default, yes?
w
e
networking knowledge on linux is another thing I lack 🙂
w
That's going to be a tough ride for you then 😬
e
yeah. I know some stuff. Usually, if I keep at it long enough, I can solve it, but that’s not good enough 🙃
w
What's your goal? Just learning? Or you have some task to do that requires some in-depth knowledge?
e
just learning at the moment. I have 2 goals atm:
1. close the gap between our segment of prod and the local/dev env
2. be able to contribute when our k8s services go down
w
Sounds good 👍
e
I’ve derailed this thread after you’ve already answered my original question. Do you mind if I derail it some more?
I want to ask about source code volumes in dev. I could ask it in the main channel
w
Yeah no worries, though you may want to get multiple different answers to some questions 👍