# rancher-desktop
e
looking at networks.yaml I see:
```yaml
paths:
  varRun: /private/var/run
  socketVMNet: /opt/rancher-desktop/bin/socket_vmnet
  sudoers: /private/etc/sudoers.d/zzzzz-rancher-desktop-lima
group: everyone
networks:
  rancher-desktop-shared:
    mode: shared
    gateway: 192.168.205.1
    dhcpEnd: 192.168.205.254
    netmask: 255.255.255.0
  host:
    mode: host
    gateway: 192.168.206.1
    dhcpEnd: 192.168.206.254
    netmask: 255.255.255.0
  rancher-desktop-bridged_en4:
    mode: bridged
    interface: en4
  rancher-desktop-bridged_en5:
    mode: bridged
    interface: en5
  rancher-desktop-bridged_en6:
    mode: bridged
    interface: en6
  rancher-desktop-bridged_en7:
    mode: bridged
    interface: en7
  rancher-desktop-bridged_en8:
    mode: bridged
    interface: en8
  rancher-desktop-bridged_bridge0:
    mode: bridged
    interface: bridge0
  rancher-desktop-bridged_en0:
    mode: bridged
    interface: en0
```
f
Depends on the network config. Any reason why you don't use 127.0.0.1?
Traefik is always forwarded to localhost:
```shell
❯ curl http://127.0.0.1.sslip.io
404 page not found
```
(obviously there is no app on that route, but you see the 404 error from Traefik)
e
oh interesting, so I can just do something like point all dev.io requests (domain + subdomain) to 127.0.0.1?
and then a curl of http://dev.io would route there
f
Yes, but depending on how you do it, you may need to redirect each subdomain manually (e.g. when you go through /etc/hosts)
e
since I'm on macOS I believe there's a resolver folder that dynamically does it 🤷 neat, I'll give it a shot tomorrow
thanks!
f
Cool! Let me know if you get the subdomain stuff working automatically.
e
something like this I expect should do it, will let you know:
```shell
sudo tee /etc/resolver/dev.io << EOF
nameserver 127.0.0.1
domain dev.io
search_order 1
EOF
```
f
Thanks. I thought this would just tell it to use a DNS server on localhost to resolve any names ending with dev.io. But you would still have to run a local dnsmasq or something. Anyways, let me know when you have it working 😄
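For context, a minimal dnsmasq config for this setup might look like the sketch below (the option names are real dnsmasq options, but treating dev.io as the wildcard domain and running it on localhost are assumptions carried over from the thread):

```
# dnsmasq.conf -- answer every *.dev.io query with 127.0.0.1, so Traefik
# listening on localhost receives the traffic; bind only to loopback
address=/dev.io/127.0.0.1
listen-address=127.0.0.1
```

Combined with the /etc/resolver/dev.io file above, macOS would send all dev.io lookups to this dnsmasq instance.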
e
not sure I understand what you meant by needing to run a local dnsmasq. Let's say I added this to Traefik and made my DNS change as mentioned above:
```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    ingressRoute:
      dashboard:
        enabled: true
        matchRule: Host(`traefik.dev.io`)
```
shouldn't that work to send the request to 127.0.0.1?
ah never mind, I see: I was thinking Traefik also resolves DNS queries. Not sure why I had that misunderstanding
I was trying to move away from dnsmasq to a dependency-free approach, but it seems like outside of manually editing /etc/hosts I'm wasting time. Thanks
was the network tab removed?
ah it's for QEMU specifically
if I wanted to access a Kubernetes service through .svc.cluster.local, how could I expose that? I currently use ifconfig with bridge100 to ensure the IP requests go through my Lima node
basically host-to-VM requests
f
For your DNS issue, if you don't want to run e.g. dnsmasq, then I think using a "magic" DNS name is the best way to get automatic wildcard support. E.g. any name *.127.0.0.1.sslip.io will resolve to 127.0.0.1 (any other IP address in the name will work too). So you can use foo.bar.127.0.0.1.sslip.io to access a specific service in your cluster via Traefik.
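A quick offline sketch of how sslip.io names embed the target address (no DNS query involved; the hostname is just an example):

```shell
# The last four dotted octets before .sslip.io are exactly what sslip.io's
# DNS servers return as the A record for the name
host="foo.bar.127.0.0.1.sslip.io"
echo "$host" | sed -E 's/.*\.([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)\.sslip\.io$/\1/'
# prints 127.0.0.1
```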
If you want short names, then either editing /etc/hosts or using a local DNS server are the only options I'm aware of.
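For the /etc/hosts option, wildcards aren't supported, so every short name needs its own line (the hostnames below are hypothetical examples):

```
# /etc/hosts -- one entry per subdomain; *.dev.io wildcards do not work here
127.0.0.1  traefik.dev.io
127.0.0.1  app.dev.io
```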
e
yup, so what I ended up doing was running dnsmasq on localhost port 53 to resolve everything to localhost. That way Traefik can pick up the requests. For the in-cluster routing I just used the route command to route certain network traffic to the Lima VM host IP
both work perfectly fine, just running into issues where the VM or something about routing requests seems to get suspended about 20% of the time. Unsure if it's because of whatever pods I'm running
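A sketch of the host-to-VM route mentioned above, using macOS route syntax (the VM address 192.168.5.15 and the k3s default service CIDR 10.43.0.0/16 are placeholders; substitute your own values):

```
# Send cluster service traffic through the Lima VM so *.svc.cluster.local
# service IPs become reachable from the macOS host
sudo route -n add -net 10.43.0.0/16 192.168.5.15
```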