# general
h
ok, at least one issue may be that one of the images doesn't actually manage to listen on the port in the pod. the log says:
The directory /usr/share/nginx/html is not mounted.
Therefore, over-writing the default index.html file with some useful information:
RTNETLINK answers: Network unreachable
is there any way to see if a pod container actually listens on the port on the pod ip? describe pod only shows what is exposed, i think.
c
are you sure that ipv6 is working between your nodes? It sounds like this is a cluster with ipv6 as the primary address family, how did you configure the cluster and service cidrs and CNI?
h
well i have 2 working ingress->service->pod chains now, so i would assume so. I did try to use my own addresses for the cluster and service cidr, but the clusters would not bootstrap and register to the rancher webui. when i stopped trying to give the script from https://get.rke2.io a cluster and service cidr, it defaulted to these ULA addresses i am testing with now. i would prefer GUA addresses, but at least this gave a working cluster that i could register to the rancher web ui
c
have you tried using the example ranges from our docs? https://docs.rke2.io/networking/basic_network_options#dual-stack-configuration
just to ensure it’s not any problem with your cluster network…
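for reference, that config goes in /etc/rancher/rke2/config.yaml and is read when the rke2 service starts. iirc the dual-stack example is something like this, but double-check the exact ranges on that page:
cluster-cidr: "10.42.0.0/16,2001:cafe:42:0::/56"
service-cidr: "10.43.0.0/16,2001:cafe:42:1::/112"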
you might also confirm that you can ping and curl between pods using their ipv6 addresses.
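e.g. grab the pod IPs with
kubectl get pods -A -o wide
then from a shell in another pod, something like this (address and port here are just placeholders, note the brackets around the IPv6 literal):
curl -g "http://[fd00:42::1234]:8080/"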
h
ping between pods works. curl works if the container actually listens on the port. I think some of the images i have been testing with may listen on legacy ip only
i did not try the GUA addresses from the example, i did not want to use someone else's official ip's
i assume for a single stack config just omit the ipv4 addresses from that dual stack example ?
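(just guessing) i.e. something like this in config.yaml, with only the v6 ranges:
cluster-cidr: "fd00:42::/56"
service-cidr: "fd00:43::/112"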
c
Those are not “someone else’s IPs”. iirc they are currently reserved by IANA and are not in use. Even if they were, they aren’t exposed outside your cluster and use of them wouldn’t affect anything.
use of some reserved classes of IPv6 address will cause odd behavior, I would try the documented examples if you’re troubleshooting.
h
so their function would be the same as the default addresses i have now. I can try those on another cluster. but since ping works between pods, and even from the hosts to pods, it seems ok
the fd00:: range that seems to be the default is the ULA address space for internal, non-globally-routable networks though. so using it here seems very logical?
c
you haven’t even said what you’re actually using for the cluster and service cidrs. what are they?
h
although you are right that ULA has some odd behaviors, those are usually in combination with ipv4... I will try the addresses on another cluster. but i suspect the issue i was seeing was containers hardcoded to listen on legacy ip only, on 0.0.0.0
but is there an easy way to detect this happening? a container starting, but failing to listen on a port? i guess a health check would have detected it, but those are usually not included in the images
c
health check would be part of the pod spec, not the image. If you are creating the pods, you have the ability to configure the health check.
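for example, a tcp readiness probe would have flagged this, since the kubelet connects to the pod IP and the probe fails if nothing is listening there. a minimal sketch (pod name, image and port are placeholders):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: web
    image: nginx
    readinessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
EOF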
h
i know. But I was looking for a quick and easy way to verify whether a pod is listening, since i spent many hours troubleshooting this ingress today and only just realized the pod may not have been listening. also... why would kubectl port-forward work if the pod was not truly listening? does it connect to the same host the pod is running on and reach the pod via localhost lo networking, since that does have ipv4?
c
I suspect that it uses the pod’s primary IP as the target of the forwarded traffic, but I’m not sure.
so you’re not actually connecting to localhost. You’re connecting to pod-ip:8080 within the pod’s network namespace.
h
well that did not work with curl, from any of the hosts, not even the host where the pod was running. but kubectl port-forward did work.. perhaps it starts a sidecar in the same pod.
Ok, it is possible to see from the host if the pod is listening. find the interface with the pod ip:
ip -6 neigh ls | grep fd00:42:0:6::4a
find the network namespace with the interface:
ip a | grep -i cali317a725c155 -A3
check for listening ports:
ss -plont -N cni-a955439b-7f2d-9851-b274-92da8c874690
State   Recv-Q  Send-Q  Local Address:Port  Peer Address:Port  Process
LISTEN  0       9       *:8000              *:*                users:(("httpd",pid=1346947,fd=3))
compare with a pod running a container that hardcodes listening to ipv4 only:
ss -plont -N cni-a15cd4c7-9e74-89f8-1587-e1bd816dfa58
State   Recv-Q  Send-Q  Local Address:Port  Peer Address:Port  Process
LISTEN  0       10      0.0.0.0:5858        0.0.0.0:*          users:(("node",pid=1203637,fd=20))
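or, from the kubernetes side instead of the host, something like this should show the same thing (pod name is a placeholder, and the first one assumes the image has ss):
kubectl exec -it mypod -- ss -tlnp
or, if the image is too minimal, an ephemeral debug container shares the pod's network namespace:
kubectl debug -it mypod --image=nicolaka/netshoot -- ss -tlnp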
c
it doesn’t use a sidecar, but it does interface with the container runtime to open a tunnel into the pod sandbox’s network namespace.
h
may explain why it works with port-forward. thanks for the support. taking a break for tonight 🙂