# rancher-desktop
q
I don't think there is one. You can add one manually.
a
Interesting, I can pull from it (without setting anything up). I ended up tagging and pushing the images from this magical place to my personal Docker Hub repo, and then my apps just pull from there 😞
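For anyone following along, the tag-and-push workaround described above can be sketched like this; the image name and Docker Hub account are hypothetical, not from this conversation:

```shell
# Sketch of the workaround, assuming a locally built image and a
# hypothetical Docker Hub account. Requires a running docker daemon
# and `docker login` credentials.

# Re-tag the locally built image for the personal Docker Hub repo...
docker tag my-app:latest mydockerid/my-app:latest

# ...push it there...
docker push mydockerid/my-app:latest

# ...then reference mydockerid/my-app:latest in the workload's image field.
```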
a
I will keep an eye on that, thanks. For now, I will continue to tag/push to my public repo from the registry Rancher uses when Docker images are built. Now I just need the Rancher containers to know about the ingress URLs and use them like service URLs, so that I don't need separate URLs for the web side of my app and internal URLs for API calls between multiple containers.
w
that’s not normally how flows happen. pod-to-pod traffic tends to stay inside the cluster, container-to-container traffic stays inside the pod, and the interface is made public via a service. any reason you are pushing traffic outside of the cluster only to come back in? https://learnk8s.io/kubernetes-network-packets
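As a concrete illustration of the in-cluster path described above: a Service in front of the forms API pods gives every other pod a stable DNS name, so traffic never has to leave the cluster. The names and ports here are made up for the sketch:

```yaml
# Hypothetical Service: other pods in the cluster can reach the forms API at
# http://forms-api.default.svc.cluster.local:3001 (or just http://forms-api:3001
# from the same namespace) without going through the ingress at all.
apiVersion: v1
kind: Service
metadata:
  name: forms-api
  namespace: default
spec:
  selector:
    app: forms-api        # must match the labels on the forms API pods
  ports:
    - port: 3001          # port other pods connect to
      targetPort: 3001    # container port on the pods
```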
a
This is a weird case. I am using Camunda and forms.io (via formsflow.ai), and for some unknown reason, running on OpenShift (where DNS does route, since the URLs are real), it works fine when Camunda communicates with the forms API pod using the external route. If I use the internal service name, I get a weird Java error. On localhost, when we use the service name, I can see that it got a forms.io token, but the transaction fails with the same weird Java error. And if I use the ingress URL, I get an error where it can't resolve the URL. Very strange. Also, since the ingress URLs aren't understood by the other containers, when I configure the web container I need an external route configuration different from the one for internal-to-internal communication. It just makes things messy.
w
yeah, and that is working around key parts of k8s, so it feels like it will cause problems. so can your camunda pod not talk to your forms pod via a service, or is the forms endpoint completely external to the cluster? if the forms service isn’t reachable by the camunda pod, something else is up
and localhost should be used for containers in the same pod, but shouldn’t be used for pod to pod
a
hmm.. so I think the issue is that the Node.js web frontend creates the URL to post to the API, but it only knows the ingress URL. Then the API container posts the payload created by the frontend container to the Camunda API container, which doesn't understand the ingress URL.. Sigh... it would be nice if all the containers knew all the ingress URLs, but I can understand why they don't. I will have to try something else. Maybe adjust the hosts file to point the service names to localhost??
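One common alternative to hosts-file hacks is to make the URL depend on where the code runs: the browser posts to a relative path that the ingress routes, while server-side code uses the in-cluster Service DNS name. A minimal sketch, assuming a hypothetical `FORMS_API_URL` environment variable and Service name (neither comes from this conversation):

```javascript
// Sketch: choose the forms API base URL depending on execution context.
// FORMS_API_URL and the service DNS name below are assumptions for illustration.
function resolveApiBase() {
  if (typeof window === "undefined") {
    // Server side (inside a pod): use the in-cluster Service DNS name,
    // ideally injected via an env var in the Deployment spec.
    return process.env.FORMS_API_URL ||
      "http://forms-api.default.svc.cluster.local:3001";
  }
  // Browser side: a relative path lets the ingress do the routing,
  // so the frontend never needs a separate external URL baked in.
  return "/api";
}
```

The frontend then builds requests like `resolveApiBase() + "/form/submit"`, and the same code works both in the browser and inside the cluster.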