
cool-monkey-71774

02/20/2023, 10:27 AM
Hi everyone, I have a service that I want to expose, and I do it with a NodePort. I want to restrict which IP addresses can access the service, but the IP I see is 10.something, so I can't use a NetworkPolicy to allow only my IP. I have read that in order to get the client IP with a NodePort you have to set externalTrafficPolicy to Local in the NodePort definition, but unfortunately when I do this I can't access my service at all, even without a NetworkPolicy. Does someone have any idea how to fix that? Thank you!
The IP that hits the pod when I access it through the NodePort is 10.43.3.0. If I use this IP in the network policy I can access the service, but it is not the IP I am connecting from. I noticed that a similar setup with an nginx works fine; I guess that is because the protocol is HTTP and it can read the header that tells it the client IP.
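For reference, the kind of NodePort Service I mean looks roughly like this (names and ports are placeholders, not my actual manifest):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-tcp-service
spec:
  type: NodePort
  # Local preserves the original client source IP, but only nodes that
  # actually run a backing pod answer on the NodePort; other nodes drop it.
  externalTrafficPolicy: Local
  selector:
    app: my-app          # placeholder selector
  ports:
    - port: 9000         # placeholder ports
      targetPort: 9000
      nodePort: 30900
```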

rough-farmer-49135

02/21/2023, 2:07 PM
10.42.x.y is the default range for pod IPs and 10.43.a.b is the default range for service IPs; neither is reachable from outside the cluster by default. You can use the ingress controller (nginx is the default) to forward HTTP(S) traffic to a cluster-internal address like that, but otherwise you'd need to do something to make the port listen on the external network rather than the internal one.
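Those are the defaults, roughly what you'd see if you set them explicitly in /etc/rancher/rke2/config.yaml:
```yaml
# RKE2 server config (default ranges shown for reference)
cluster-cidr: "10.42.0.0/16"   # pod IPs
service-cidr: "10.43.0.0/16"   # service (ClusterIP) IPs
```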

cool-monkey-71774

02/21/2023, 2:11 PM
I just solved my issue in a way I don't quite like: I replaced the NodePort with a TCP rule for nginx-ingress, so I can expose whatever port I want. Then I added a NetworkPolicy ingress rule on the rke2-ingress-controller to only accept traffic from my proxies, and I added my whitelist rule in my HAProxy configuration. I don't like it because I have to edit my HAProxy config every time I want to change the whitelist. I could not use a NetworkPolicy directly, as the source IP it sees is either the NodePort one or the ingress one if I go the TCP ingress way.
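Roughly what I ended up with, with placeholder names, ports, and proxy IP (and assuming the packaged rke2-ingress-nginx chart accepts the upstream tcp value):
```yaml
# Expose a raw TCP port through the RKE2-packaged ingress-nginx controller.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    tcp:
      "9000": "default/my-tcp-service:9000"   # external port -> namespace/service:port
---
# Only let my proxies reach the ingress controller pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-only-from-proxies
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: rke2-ingress-nginx   # label assumed, check your controller pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 198.51.100.10/32   # placeholder proxy IP
```
Depending on how the controller is exposed (hostPort vs. a Service), the extra TCP port may also need to be published on the node.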

rough-farmer-49135

02/21/2023, 2:22 PM
You could go for a wildcard DNS if it'd be easier to separate by hostnames in your ingresses, though I'm not sure if that helps you or not as I'm not fully getting what you're trying to do.

cool-monkey-71774

02/22/2023, 6:39 AM
What I am trying to do is restrict access to a service to the IPs that I choose. With an Ingress I can do it with the whitelist-source-range annotation, but Ingresses only work for HTTP(S) services; for plain TCP I can't use an Ingress. I thought I could use a NetworkPolicy to filter access to my service, but it applies to the pods, and the IP that connects to them is the service's, so a NetworkPolicy doesn't help. My workaround is to make sure all connections to my cluster can only come from my proxies by adding a NetworkPolicy to the rke2-ingress-controller (I switched from using a NodePort to a TCP ingress with nginx-ingress), and then I filter which IPs have access by editing my HAProxy configuration. I had already added the wildcard DNS, but that has nothing to do with what I am trying to do.
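For an HTTP(S) service the annotation approach looks roughly like this (host, service name, and CIDRs are placeholders):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-http-app
  annotations:
    # Only these source ranges are allowed; everything else is rejected.
    nginx.ingress.kubernetes.io/whitelist-source-range: "203.0.113.0/24,198.51.100.7/32"
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-http-app
                port:
                  number: 80
```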

rough-farmer-49135

02/22/2023, 2:46 PM
The only way you could use a NetworkPolicy for that would be to set up a separate ingress or load balancer that your front-end proxy redirects the appropriate traffic towards. I think it'd be a slightly easier implementation if you filtered by hostname or subdomain on your front-end proxies and just tossed in a key pair and did 2-way SSL, so that your proxy only presents the cert on traffic to the right hostnames. If you're already doing anything auth-wise with something like Keycloak, it could fit into that fairly easily. On the other hand, if you've got it working, I doubt it'd be worth changing either.
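Something roughly like this for the 2-way SSL idea, if it helps (secret, host, and service names are placeholders, and this assumes ingress-nginx's auth-tls annotations):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-mtls
  annotations:
    # Require a client certificate signed by the CA in the referenced secret,
    # so only proxies holding that cert get through.
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/proxy-client-ca"
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```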

cool-monkey-71774

02/22/2023, 2:48 PM
Yeah, I think I will stick with my solution. Thank you anyway for your help, I'll keep the idea in mind!