# k3d
a
Thanks @creamy-pencil-82913. Would the solution be to pass the --prefer-bundled-bin flag to K3s like below? I gave it a try but it didn't make any difference; I also tried a few syntax variations such as --prefer-bundled-bin=true. config.yaml:
options:
  k3s:
    extraArgs:
      - arg: --prefer-bundled-bin
        nodeFilters:
          - server:*
          - agent:*
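(For reference, the CLI equivalent at cluster-creation time would look roughly like this, assuming k3d v5's --k3s-arg node-filter syntax; the cluster name matches the nodes shown later in the thread:)
k3d cluster create linklocalcluster \
  --k3s-arg "--prefer-bundled-bin@server:*" \
  --k3s-arg "--prefer-bundled-bin@agent:*"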
Versions I'm running:
❯ k3d --version
k3d version v5.4.9
k3s version v1.25.7-k3s1 (default)

❯ iptables --version
iptables v1.8.7 (nf_tables)
c
no, just use 1.25.8
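(For reference, pinning the K3s image in the k3d config file looks roughly like this; a minimal sketch assuming the k3d.io/v1alpha4 simple-config schema and the single-server, single-agent layout shown later in the thread:)
apiVersion: k3d.io/v1alpha4
kind: Simple
metadata:
  name: linklocalcluster
servers: 1
agents: 1
image: rancher/k3s:v1.25.8-k3s1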
a
I'm running 1.25.8 on the nodes now (set image: rancher/k3s:v1.25.8-k3s1 in my config.yaml), but I still can't communicate between services in the cluster except for the one that has port 80 exposed.
bash-5.1# nc -zv postgresdb 5432
bash-5.1# nc -zv dashboard-localdev 80
dashboard-localdev (10.43.151.120:80) open
I can connect fine to external services:
bash-5.1# nc -zv google.com 443
google.com (142.250.74.142:443) open
Do you have any other ideas for what I could look into @creamy-pencil-82913?
❯ kubectl  get nodes
NAME                            STATUS   ROLES                  AGE     VERSION
k3d-linklocalcluster-server-0   Ready    control-plane,master   3m49s   v1.25.8+k3s1
k3d-linklocalcluster-agent-0    Ready    <none>                 3m43s   v1.25.8+k3s1
c
a couple of nc commands with no output don't really provide much info to work off of
a
I was wondering whether there was some strange setting on my computer, but I have reproduced it on both my home and work computers running Ubuntu 22.04.2. Since k3s runs on my Docker network, could it be some setting in my host machine's iptables, /etc/resolv.conf, /etc/hosts or something similar that is messing up the inter-pod communication in k3s? DNS seems to resolve fine in the cluster, but what should the routes look like from inside the pods, for example? This is how it looks for me:
bash-5.1# ip route
default via 10.42.1.1 dev eth0 
10.42.0.0/16 via 10.42.1.1 dev eth0 
10.42.1.0/24 dev eth0 proto kernel scope link src 10.42.1.15
c
I would probably start a little less low-level. Does the service have pods running? Are they on the same nodes? Can you reach the ports on any of the pods directly if you bypass the service?
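(For reference, those checks might look roughly like this; postgresdb is the service from this thread, and <client-pod> and <pod-ip> are placeholders:)
kubectl get pods -o wide                    # are the backing pods Running, and on which nodes?
kubectl get svc,endpoints postgresdb        # does the service actually have endpoints behind it?
kubectl exec -it <client-pod> -- nc -zv <pod-ip> 5432   # bypass the service and hit a pod IP directly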
a
I've tried it all:
• All services have pods running successfully
• I've tried running the services both on different nodes and on the same node; it doesn't make any difference
• Can't reach the ports on the pods directly
• Can't reach the ports on the host (but they are not exposed as NodePorts either)
c
did you customize your docker networking or anything?
I’m not sure exactly how k3d sets up networking between the nodes but I don’t generally hear anyone complaining about it not working.
a
No, far from it. I'm no network expert, but I have a few Docker networks left over from running some apps locally with docker-compose:
❯ docker network ls
NETWORK ID     NAME                   DRIVER    SCOPE
0b7f8d70e980   bridge                 bridge    local
d66b00cbe5c7   data-api_app-network   bridge    local
7be6054f08fe   host                   host      local
a7f9b31a8512   k3d-linklocalcluster   bridge    local
5dcc66fe03dd   none                   null      local
I've previously deleted all the ones Docker would allow me to, but it didn't make any difference.
Agreed, it's really strange; pod communication is such a core feature that it feels like I'm missing something really basic.
c
did you deploy anything other than just your DB?
a
Yeah, I'm trying to get our app (which currently runs on AKS) running in K3s, so I have six services running: the postgresdb, Redis, RabbitMQ, and some internal Node.js apps.
c
did you deploy any network policies?
a
Ah, I did actually. One moment, that could be it.
c
you might delete all the NPs just to be sure they’re not overly generous in what pods they’re matching
and then if that works re-add them one by one and see when things break
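(For reference, that sweep might look roughly like this; <namespace> and <policy>.yaml are placeholders for the app's namespace and policy manifests:)
kubectl get networkpolicies -A                      # list every policy and the namespace it lives in
kubectl delete networkpolicy --all -n <namespace>   # clear them out of the affected namespace
kubectl apply -f <policy>.yaml                      # then re-apply them one at a time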
a
Thanks Brad, you're a genius. I had completely overlooked that I'd deployed a NP that was interfering.
Thanks a ton for your help
c
np, gl!