Powered by Linen
rancher-desktop

    stale-balloon-14828

    10/11/2022, 9:18 PM
Hi, I am migrating from Docker Desktop to Rancher Desktop on Mac and am running into issues with bind mounting certain directories (e.g. /tmp or /srv). It appears to succeed for /Users, for example. In Docker Desktop there was a file-sharing setting that needed to be modified to allow exporting paths besides the macOS defaults. Is there a similar configuration available for Rancher Desktop? If not, is there documentation anywhere on what directories can be bind mounted?
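For context, Rancher Desktop on macOS runs containers inside a Lima VM, and only some host paths are shared into it by default. One commonly suggested approach is a Lima override file with extra mounts; the path and schema below are assumptions based on Lima's mounts option, not official Rancher Desktop documentation, so back up before editing:

```shell
# Sketch: add /srv as a writable mount in the Rancher Desktop Lima VM
# (override file location and schema are assumptions based on Lima's docs)
mkdir -p "$HOME/Library/Application Support/rancher-desktop/lima/_config"
cat >> "$HOME/Library/Application Support/rancher-desktop/lima/_config/override.yaml" <<'EOF'
mounts:
  - location: /srv
    writable: true
EOF
# Restart Rancher Desktop for the new mount to take effect
```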

    narrow-flower-69849

    10/12/2022, 6:42 AM
    #rancher-desktop I need help with the issue below, I've been struggling with this issue for 2 weeks now without any breakthrough:

    narrow-flower-69849

    10/12/2022, 6:43 AM
2022-10-12T06:25:41.684Z: Running: wsl.exe --distribution rancher-desktop --exec docker images
2022-10-12T06:25:45.672Z: Running: wsl.exe --distribution rancher-desktop --exec /sbin/rc-update --update
2022-10-12T06:25:46.228Z: Running: wsl.exe --distribution rancher-desktop --exec /usr/local/bin/wsl-service k3s start
2022-10-12T06:25:52.023Z: Capturing output: wsl.exe --distribution rancher-desktop --exec cat /proc/net/route
2022-10-12T06:25:52.533Z: Capturing output: wsl.exe --distribution rancher-desktop --exec cat /proc/net/fib_trie
2022-10-12T06:26:14.047Z: Capturing output: wsl.exe --distribution rancher-desktop --exec /bin/sh -c if test -r /etc/rancher/k3s/k3s.yaml; then echo yes; else echo no; fi
2022-10-12T06:26:14.515Z: Capturing output: wsl.exe --distribution rancher-desktop --exec wslpath -a -u C:\Users\Lucky\AppData\Local\Programs\Rancher Desktop\resources\resources\linux\wsl-helper
2022-10-12T06:26:14.973Z: Capturing output: wsl.exe --distribution rancher-desktop --exec /mnt/c/Users/Lucky/AppData/Local/Programs/Rancher Desktop/resources/resources/linux/wsl-helper k3s kubeconfig

    narrow-flower-69849

    10/12/2022, 6:44 AM
    wsl-helper logs

    narrow-flower-69849

    10/12/2022, 6:44 AM
Error: could not detect WSL2 VM: could not find WSL2 VM ID: <no error>
Error: could not detect WSL2 VM: could not find WSL2 VM ID: <no error>
Error: could not detect WSL2 VM: could not find WSL2 VM ID: could not dial VM 968D396D-EAFF-4FBF-BDF7-986D7BBED448: could not dial Hyper-V socket: connect(968d396d-eaff-4fbf-bdf7-986d7bbed448:016a6eb7-facb-11e6-bd58-64006a7986d3) failed: The requested address is not valid in its context.
Error: could not detect WSL2 VM: could not find WSL2 VM ID: could not dial VM 968D396D-EAFF-4FBF-BDF7-986D7BBED448: could not dial Hyper-V socket: connect(968d396d-eaff-4fbf-bdf7-986d7bbed448:016a6eb7-facb-11e6-bd58-64006a7986d3) failed: The requested address is not valid in its context.
Error: could not detect WSL2 VM: could not find WSL2 VM ID: could not dial VM 968D396D-EAFF-4FBF-BDF7-986D7BBED448: could not dial Hyper-V socket: connect(968d396d-eaff-4fbf-bdf7-986d7bbed448:016a6eb7-facb-11e6-bd58-64006a7986d3) failed: The requested address is not valid in its context.
Error: could not detect WSL2 VM: could not find WSL2 VM ID: could not dial VM 968D396D-EAFF-4FBF-BDF7-986D7BBED448: could not dial Hyper-V socket: connect(968d396d-eaff-4fbf-bdf7-986d7bbed448:016a6eb7-facb-11e6-bd58-64006a7986d3) failed: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
Error: could not detect WSL2 VM: could not find WSL2 VM ID: could not dial VM 968D396D-EAFF-4FBF-BDF7-986D7BBED448: could not dial Hyper-V socket: connect(968d396d-eaff-4fbf-bdf7-986d7bbed448:016a6eb7-facb-11e6-bd58-64006a7986d3) failed: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
time="2022-10-12T08:25:50+02:00" level=info msg="Got WSL2 VM" guid=968d396d-eaff-4fbf-bdf7-986d7bbed448
time="2022-10-12T08:25:50+02:00" level=info msg=Listening endpoint="npipe:////./pipe/docker_engine"

    narrow-flower-69849

    10/12/2022, 6:48 AM
I'm not sure if this is a security issue or whether I need to use WSL 2 instead of WSL 1... please advise. I'd appreciate any help in this regard, because the issue has become a blocker on my side.
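Rancher Desktop requires WSL 2; a quick way to check each distribution's version and convert one from a Windows prompt:

```shell
# List distributions and which WSL version each uses
wsl -l -v
# Convert a distribution that shows VERSION 1 to WSL 2
wsl --set-version <DistroName> 2
# Make WSL 2 the default for any distributions installed later
wsl --set-default-version 2
```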

    wide-mechanic-33041

    10/12/2022, 11:50 AM
I don’t believe RD supports WSL v1 distros, so the solution will be to figure out v2. Are other WSL2 distros working well?

    fast-garage-66093

    10/12/2022, 5:04 PM
    Auto-update to 1.6.0 release has been enabled

    icy-parrot-30770

    10/12/2022, 7:43 PM
Hello everyone, I need your help. I have a lot of access-rights problems using the "rancher.io/local-path" provisioner of Rancher Desktop. Wanting to be as close as possible to the target configuration, I would like to see whether using rook.io + Ceph as a provisioner solves my access-rights issues. For the installation of rook.io, there is a "cluster-test.yaml" file dedicated to that purpose, but you must define the path of the volume (on the VM) containing the data, which must be blank and unformatted. For this I would like to use an external USB key (or an external HDD). -> Could you tell me if it is possible to directly mount this USB key on the Rancher Desktop VM? (I'm on macOS 12.6 Monterey)

    proud-easter-69184

    10/13/2022, 9:53 AM
    installs 1.6.0 Instantly get 3 errors about symlinks I didn’t create being wrong 😂

    thankful-sandwich-80010

    10/13/2022, 6:10 PM
Somewhat new to Rancher Desktop; hoping someone can point me in the right direction to understand the following issue, i.e. is there something obvious to explain this, or where to begin in understanding the difference/cause… Our dev team has a MySQL container with a couple dozen stored procedures. We noticed with Rancher Desktop (container engine: dockerd) on macOS Monterey (16 GB mem), the stored procedures are extremely slow compared to Docker Desktop using the same docker image and data. My teammate just uninstalled Rancher Desktop and reinstalled Docker Desktop, then grabbed some timings on the stored procedures. Here are two examples of the extreme differences in execution time:
Rancher Desktop 0 hr 39 min | Docker Desktop 0 hr 02 min
Rancher Desktop 1 hr 10 min | Docker Desktop 0 hr 04 min
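One variable worth ruling out before blaming the engine itself: the CPU and memory allocated to the Rancher Desktop VM, which may differ from what Docker Desktop was using. A sketch for inspecting that:

```shell
# Dump the current Rancher Desktop settings (including VM memory/CPU allocation)
# so they can be compared against the resources Docker Desktop had
rdctl list-settings
```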

    purple-action-15801

    10/13/2022, 8:12 PM
    Hey everyone. I have a weird issue that I can't seem to find the answer to anywhere on the interwebz and wanted to post it here to see if anyone has seen this behavior. • This does not happen for MacOS or Linux, just Windows.

    purple-action-15801

    10/13/2022, 8:18 PM
    Let me try this again... Hey everyone. I have a weird issue that I can't seem to find the answer to anywhere on the interwebz and wanted to post it here to see if anyone has seen this behavior. Context: • This does not happen for MacOS or Linux, just Windows. • I install Rancher-Desktop on Windows • I have a corporate VPN • I do not have admin access to my machine because, yuh know, corporate. • When I am NOT on the VPN, I can do
    nerdctl pull mongo
    against hub.docker.com no problem at all. The Problem: • When I AM on the VPN, I can't pull from my corporate container registry. ◦ However, my MacOS can and my Linux machine can. So this is just with windows. I suspect that it is because Rancher-Desktop can't access
    /etc/rancher/desktop/credfwd
    because it throws an error
    /usr/local/bin/docker-credential-rancher-desktop: source: line 5: can't open '/etc/rancher/desktop/credfwd': No such file or directory

    better-nail-51710

    10/14/2022, 8:02 AM
Hello again, I'm running Rancher Desktop 1.6.0 on a MacBook. I want to scan an image that was built locally (with nerdctl), but I receive the following error:
FATAL	image scan error: scan error: unable to initialize a scanner: unable to initialize a docker scanner: 4 errors occurred:
    	* unable to inspect the image (nginx-helloworld:latest): Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
    	* unable to initialize Podman client: no podman socket found: stat podman/podman.sock: no such file or directory
    	* failed to initialize a containerd client: failed to dial "/run/k3s/containerd/containerd.sock": connection error: desc = "transport: error while dialing: dial unix /run/k3s/containerd/containerd.sock: connect: permission denied"
	* GET https://index.docker.io/v2/library/nginx-helloworld/manifests/latest: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/nginx-helloworld Type:repository]]
    The same situation if I try to scan an image pulled from my company private registry:
FATAL	image scan error: scan error: unable to initialize a scanner: unable to initialize a docker scanner: 4 errors occurred:
	* unable to inspect the image (artifactory.mycompany.com/images/hello-app:v0.0.1): Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
    	* unable to initialize Podman client: no podman socket found: stat podman/podman.sock: no such file or directory
    	* failed to initialize a containerd client: failed to dial "/run/k3s/containerd/containerd.sock": connection error: desc = "transport: error while dialing: dial unix /run/k3s/containerd/containerd.sock: connect: permission denied"
	* GET https://artifactory.mycompany.com/v2/images/hello-app/manifests/v0.0.1: UNAUTHORIZED: The client does not have permission for manifest; map[manifest:hello-app/v0.0.1/manifest.json]
I found this issue (still open): https://github.com/rancher-sandbox/rancher-desktop/issues/539 Is there any possibility of scanning a locally built image with nerdctl, or an image pulled from a private registry? If not, this is a severe blocker in the process of adopting Rancher Desktop as a local Kubernetes development environment. Thank you!
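As a possible workaround while that issue is open (a sketch, assuming Trivy is installed on the host; the image name is the one from the error above): export the image with nerdctl and scan the tarball, which sidesteps the daemon sockets entirely:

```shell
# Export the locally built image to a tar archive
nerdctl save -o nginx-helloworld.tar nginx-helloworld:latest
# Scan the archive directly, without needing a docker/containerd socket
trivy image --input nginx-helloworld.tar
```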

    bumpy-tiger-73147

    10/14/2022, 10:28 AM
Hi ranch folks! I'm running Rancher Desktop on an Apple Silicon Mac. I installed following this tutorial: https://blog.brazdeikis.io/today-i-learned/install-docker-on-mac-without-docker-desktop/ Everything works well, but I just updated a few minutes ago and now I'm getting an annoying message in "Diagnostics":
    The file ~/.docker/cli-plugins/docker-compose should be a symlink to ~/.rd/bin/docker-compose, but points to /opt/homebrew/opt/docker-compose/bin/docker-compose.
My docker-compose comes from brew, so this is normal, but I didn't know I shouldn't use it and should instead use the one shipped with rancher-desktop. Should I just change the link to point to
~/.rd/bin/docker-compose
? Or is there something else going on here?
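If you want the diagnostic to go away, repointing the symlink is a sketch along these lines (keeping the Homebrew copy around as a backup):

```shell
# Keep the Homebrew-installed plugin link as a backup
mv ~/.docker/cli-plugins/docker-compose ~/.docker/cli-plugins/docker-compose.brew
# Point the docker CLI plugin at the copy shipped with Rancher Desktop
ln -s ~/.rd/bin/docker-compose ~/.docker/cli-plugins/docker-compose
```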

    magnificent-pilot-35275

    10/14/2022, 3:00 PM
hi. attempting to run Rancher Desktop on an Apple Silicon Mac here

    flaky-dusk-65029

    10/14/2022, 6:12 PM
    Q: Are Rancher Desktop, Podman Desktop and Docker Desktop able to co-exist? I'm on Windows 11 + WSL2 (Ubuntu 22.04) with administrator access to my dev machine. Docker in WSL doesn't work at all. Podman sorta works. Rancher Kubernetes/k3s seems to just hang upon startup.

    numerous-night-87848

    10/14/2022, 9:40 PM
    Hello! I’m getting started with rancher-desktop on macOS. I see this warning in diagnostics.
    Kubernetes is using context kind-argo instead of rancher-desktop.
    What does this mean?
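The warning means kubectl's active context points at a kind cluster (kind-argo) rather than the rancher-desktop cluster. Switching back:

```shell
# List available contexts; rancher-desktop should be among them
kubectl config get-contexts
# Make rancher-desktop the active context
kubectl config use-context rancher-desktop
```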

    echoing-ability-7881

    10/15/2022, 11:48 AM
Hi Rancher community, how are you? My question today is related to CronJobs in the Rancher RKE GUI. Let's suppose I have an application deployed as a Rancher workload and I want a schedule for it that runs every minute, all day, and I also want some other command to run constantly. What should I do in RKE? Please help. #general #k3s #rancher-desktop
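For the per-minute part, a CronJob can be created from the CLI; a command that should run constantly is better modeled as a Deployment. A minimal sketch with a placeholder image and hypothetical names:

```shell
# Run a command every minute of every day (names/image are placeholders)
kubectl create cronjob my-task --image=busybox --schedule="* * * * *" \
  -- /bin/sh -c "echo scheduled run"
# Run a command constantly as a long-lived workload
kubectl create deployment my-daemon --image=busybox \
  -- /bin/sh -c "while true; do echo tick; sleep 60; done"
```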

    quick-keyboard-83126

    10/16/2022, 6:53 AM
    docker-credential-osxkeychain is really annoying.

    quick-keyboard-83126

    10/16/2022, 7:16 AM
    @fast-garage-66093 it'd be nice if this diagnostic explained what it was seeing:
    The application cannot reach the general internet for updated kubernetes versions and other components, but can still operate.

    future-keyboard-11621

    10/17/2022, 11:19 AM
I am trying to access a rancher-desktop cluster from outside; it is running on a different computer on my network.
E1017 13:18:31.887786 22909 proxy_server.go:147] Error while proxying request: x509: certificate is valid for 10.43.0.1, 127.0.0.1, 192.168.5.15, ::1, fec0::5055:55ff:fe64:2235, not 10.0.0.5
The error suggests that the certificate is only valid for the given IPs, not for external access; how could I bypass/fix this?
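One workaround sketch, assuming SSH access to the machine running Rancher Desktop (the user name is a placeholder; 10.0.0.5 is the address from the error above): tunnel the API server port so the client connects via 127.0.0.1, which the certificate does cover:

```shell
# Forward local port 6443 to the API server on the remote host
ssh -L 6443:127.0.0.1:6443 user@10.0.0.5
# In another terminal, point the client at the tunnel; 127.0.0.1 is in the cert's SANs
kubectl --server=https://127.0.0.1:6443 get nodes
```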

    helpful-butcher-75556

    10/17/2022, 1:38 PM
    I see differences in the way environment variables are injected depending on if
    nerdctl run
    is used vs
    nerdctl compose up
    with an
    env-file
    is used. This is on RD 1.6.0 Mac M1 If you have this in a
    .env
    file:
    ZZ='.env file $test pound # and more'
    ZZ_NOQUOTES=.env file $test pound # and more
    nerdctl run --rm --env-file ./.env  busybox /bin/sh -c set
    # looks like it adds single quotes
    ZZ=''"'"'.env file $test pound # and more'"'"
    ZZ_NOQUOTES='.env file $test pound # and more'
But if you run
    nerdctl compose up
and use that file, the single quotes work as the spec says they should (truncating the second value, since # starts a comment):
    ZZ='.env file $test pound # and more'
    ZZ_NOQUOTES='.env file  pound'
    The compose file:
    services:
      busybox:
        image: busybox
        env_file:
          - .env
        command: ['sh', '-c', 'set']

    adventurous-piano-43924

    10/17/2022, 2:31 PM
    Hi everyone! Thanks for the great rancher desktop 🙂 I upgraded to version 1.6 and I started getting some weird errors like

    adventurous-piano-43924

    10/17/2022, 2:31 PM
kubectl port-forward service/mysql-dna-mariadb 3306:3306
error: error upgrading connection: error dialing backend: x509: certificate is valid for 127.0.0.1, 192.168.5.15, fec0::5055:55ff:fe35:8a75, not 192.168.205.2
It seems the certificate somehow got messed up? The cluster seems to work well (all pods running, no errors in diagnostics), but whenever I try to reach the pods it gives errors.

    chilly-farmer-59320

    10/17/2022, 10:04 PM
Hi everyone, I installed Rancher Desktop and upgraded to version 1.6.0 on an M1 Mac. I want to use Kubernetes with containerd. However, it looks like nerdctl keeps timing out on pulls from the Docker registry. See below:
➜  bin ./nerdctl run nginx
docker.io/library/nginx:latest: resolving      |--------------------------------------|
elapsed: 9.9 s                  total:   0.0 B (0.0 B/s)
INFO[0010] trying next host                              error="failed to do request: Head \"https://registry-1.docker.io/v2/library/nginx/manifests/latest\": dial tcp: lookup registry-1.docker.io on 192.168.5.3:53: read udp 192.168.5.15:34708->192.168.5.3:53: i/o timeout" host=registry-1.docker.io
FATA[0010] failed to resolve reference "docker.io/library/nginx:latest": failed to do request: Head "https://registry-1.docker.io/v2/library/nginx/manifests/latest": dial tcp: lookup registry-1.docker.io on 192.168.5.3:53: read udp 192.168.5.15:34708->192.168.5.3:53: i/o timeout
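Given that the lookup fails against the VM's own resolver (192.168.5.3), a first step might be to test DNS from inside the VM; `rdctl shell` runs a command in the Rancher Desktop VM:

```shell
# Ask the VM's default resolver for the registry host
rdctl shell nslookup registry-1.docker.io
# Compare against a public resolver to see whether only the built-in DNS is failing
rdctl shell nslookup registry-1.docker.io 1.1.1.1
```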

    clever-pillow-66654

    10/18/2022, 11:13 AM
Anyone else stuck on ‘waiting for nodes’ in 1.6.0 on macOS (Ventura beta 22A5373b)? The main error I noticed is in server.log:
    2022-10-18T11:12:22.502Z: Auth failure: user/password validation failure for attempted login of user interactive-user

    many-kite-98414

    10/18/2022, 11:45 AM
    Hi Everyone,

    many-kite-98414

    10/18/2022, 11:46 AM
I have installed Rancher Desktop 1.6.0 on Windows. Upon reboot, the docker commands are not recognized. Below is the command-line output:
C:\>docker -v
'docker' is not recognized as an internal or external command, operable program or batch file.
WSL status is as below:
C:\>wsl -l -v
  NAME                   STATE           VERSION
* rancher-desktop        Running         2
  rancher-desktop-data   Stopped         2
Diagnostics: No problems detected. Rancher Desktop appears to be functioning correctly.
Images: I see the list of images.
Container engine: dockerd (moby)
I appreciate your help in resolving the issue. Thanks,
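This often comes down to PATH. A sketch for cmd.exe; the `%USERPROFILE%\.rd\bin` location is an assumption about where Rancher Desktop places its CLI shims on Windows:

```shell
:: Check whether any docker CLI is on PATH at all
where docker
:: Check the Rancher Desktop CLI directory exists (location is an assumption)
dir "%USERPROFILE%\.rd\bin"
:: If it exists, add it to PATH for this session and retry
set "PATH=%USERPROFILE%\.rd\bin;%PATH%"
docker -v
```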

    hundreds-sandwich-7373

    10/18/2022, 1:37 PM
So I'm using Rancher Desktop 1.6 on macOS 12.5.1 on an M1, using dockerd (moby), and I'm having some issues with swarm ingress.
I did a docker swarm init and my nodes look like this:
$ docker node ls
ID                            HOSTNAME               STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
1vdqq80ak1g4s0nnph4xjsean *   lima-rancher-desktop   Ready     Active         Leader           20.10.18
and created a network, and my network looks like this
$ docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
34aa9f60baee   bridge            bridge    local
7cd7acb2040c   docker_gwbridge   bridge    local
b928699d5d6c   host              host      local
1qd6e2d7zqrg   ingress           overlay   swarm
4583883d6b30   none              null      local
tw4bolgoata9   svc_default       overlay   swarm
and did a docker stack deploy co -c docker-compose.yaml
and the docker-compose.yaml looks like this:
version: "3.8"
services:
  co-svc:
    image: co-test:1.0
    command:
      - serve
    deploy:
      replicas: 2
    ports:
      - mode: ingress
        protocol: tcp
        published: 8080
        target: 8080
    networks:
      - svc_default
    environment:
      JAVA_OPTS: -Dcoherence.wka=tasks.co_co-svc

networks:
  svc_default:
    external: true
I can see my service running:
$ docker service ls
ID             NAME        MODE         REPLICAS   IMAGE         PORTS
6ygzdu8w6f7s   co_co-svc   replicated   2/2        co-test:1.0   *:8080->8080/tcp
according to the logs it's healthy
$ docker service logs co_co-svc -n 5
co_co-svc.1.vhj6hny4nxpb@lima-rancher-desktop    | 13:24:06.005 [main] DEBUG io.smallrye.openapi.jaxrs - SROAP10001: Processing JaxRs method: io.example.entity.Task updateTask(java.util.UUID id, io.example.entity.Task task)
co_co-svc.1.vhj6hny4nxpb@lima-rancher-desktop    | 13:24:06.006 [main] DEBUG io.smallrye.openapi.jaxrs - SROAP10000: Processing a JAX-RS resource class: CoherenceResource
co_co-svc.1.vhj6hny4nxpb@lima-rancher-desktop    | 13:24:06.006 [main] DEBUG io.smallrye.openapi.jaxrs - SROAP10001: Processing JaxRs method: javax.ws.rs.core.Response get()
co_co-svc.1.vhj6hny4nxpb@lima-rancher-desktop    | 13:24:06.016 [features-thread] INFO io.helidon.common.HelidonFeatures - Helidon MP 2.5.4 features: [CDI, Config, Fault Tolerance, Health, JAX-RS, Metrics, Open API, REST Client, Security, Server, Tracing]
co_co-svc.1.vhj6hny4nxpb@lima-rancher-desktop    | 13:24:06.134 [Logger@9255624 22.06.2] INFO coherence - (thread=PagedTopic:Topic, member=2, up=9.698): Partition ownership has stabilized with 2 nodes
co_co-svc.2.724kxhhi8f3d@lima-rancher-desktop    | 13:24:05.795 [main] DEBUG io.smallrye.openapi.jaxrs - SROAP10001: Processing JaxRs method: javax.ws.rs.core.Response get()
co_co-svc.2.724kxhhi8f3d@lima-rancher-desktop    | 13:24:05.841 [Logger@9242415 22.06.2] INFO coherence - (thread=PagedTopic:Topic, member=1, up=9.399): Transferring primary PartitionSet{0..127} to member 2 requesting 128
co_co-svc.2.724kxhhi8f3d@lima-rancher-desktop    | 13:24:05.851 [features-thread] INFO io.helidon.common.HelidonFeatures - Helidon MP 2.5.4 features: [CDI, Config, Fault Tolerance, Health, JAX-RS, Metrics, Open API, REST Client, Security, Server, Tracing]
co_co-svc.2.724kxhhi8f3d@lima-rancher-desktop    | 13:24:06.026 [Logger@9242415 22.06.2] INFO coherence - (thread=PagedTopic:Topic, member=1, up=9.584): Partition ownership has stabilized with 2 nodes
co_co-svc.2.724kxhhi8f3d@lima-rancher-desktop    | 13:24:06.128 [Logger@9242415 22.06.2] INFO coherence - (thread=PagedTopic:Topic, member=1, up=9.686): Partition ownership has stabilized with 2 nodes
but when I try to access it via curl, it just ... hangs
$ curl -vvv http://localhost:8080/health
*   Trying 127.0.0.1:8080...
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET /health HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.79.1
> Accept: */*
>
this goes on forever...
I have no idea how to even debug this, so any help would be greatly appreciated - please don't come at me with "move to k8s" because that's fundamentally unhelpful.
fwiw -- from inside the swarm I can see everything just fine:
$ docker exec -ti 24011efe1863 bash
bash-4.4# curl http://co-svc:8080/health
{"outcome":"UP","status":"UP","checks":[{"name":"deadlock","state":"UP","status":"UP"},{"name":"diskSpace","state":"UP","status":"UP","data":{"free":"90.58 GB","freeBytes":97263853568,"percentFree":"92.55%","total":"97.87 GB","totalBytes":105088212992}},{"name":"heapMemory","state":"UP","status":"UP","data":{"free":"54.82 MB","freeBytes":57482184,"max":"1.45 GB","maxBytes":1553989632,"percentFree":"97.29%","total":"95.00 MB","totalBytes":99614720}}]}
bash-4.4# curl http://co-svc:8080/coherence
{"cluster":"root's cluster","members":[{"name":null,"address":"/10.0.4.7"},{"name":null,"address":"tasks.co_co-svc/10.0.4.6"}]}
bash-4.4#

kind-iron-72902

10/18/2022, 5:23 PM
Hi @hundreds-sandwich-7373 Another user reported another network issue on M1 yesterday at https://rancher-users.slack.com/archives/C0200L1N1MM/p1666044276563699 where they had to disable their vpn/firewall to fix it? Is that the case here as well?

hundreds-sandwich-7373

10/18/2022, 5:59 PM
I don't think so; when I start it with docker run:
docker run --rm -ti -p "8080:8080" -e "JAVA_OPTS=-Dcoherence.localhost=127.0.0.1" co-test:1.0 serve
it works fine.
@kind-iron-72902 no, this is entirely different. I managed to pull and build images just fine.

kind-iron-72902

10/18/2022, 6:31 PM
Thanks. Someone with access to an M1 machine is going to look at this.
Also could you file an issue at https://github.com/rancher-sandbox/rancher-desktop/issues/new/choose ? That makes it easier for us to manage problems.

creamy-laptop-9977

10/18/2022, 8:29 PM
@hundreds-sandwich-7373 I’m trying to test this, but unable to make much progress without your
co-test:1.0
image. Can you replicate this with another public image? (I’ll continue to look for a possible test image as well.)
… and did this work under earlier versions of Rancher Desktop?
Also, can you provide the output of:
docker network inspect svc_default
Finally (for tonight) can you check and see if your app is listening on IPv4 or v6 in the VM?
rdctl shell
, then
sudo netstat -nlp | grep 8080
. In my testing, there seems to be a disconnect between IPv4 and IPv6, but it could easily be due to my test configuration.

hundreds-sandwich-7373

10/19/2022, 5:43 AM
@creamy-laptop-9977 you can use any image at all - an nginx serving helloworld.html would be functionally equivalent for these purposes.
network inspect:
$ docker network inspect svc_default
[
    {
        "Name": "svc_default",
        "Id": "tw4bolgoata9gpp785ql59jwb",
        "Created": "2022-10-18T09:05:29.876631272Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.4.0/24",
                    "Gateway": "10.0.4.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4100"
        },
        "Labels": null
    }
]
$ rdctl shell
lima-rancher-desktop:/Users/lcoughli$ sudo netstat -nlp | grep 8080
tcp        0      0 :::8080                 :::*                    LISTEN      2860/dockerd
I have no idea if it worked under a previous version - this is my first go around with rancher-desktop. It has worked under docker-desktop.
it does look like an IPv6/v4 thing maybe? That's fascinating.
fwiw:
$ curl -g -6 "http://[::1]:8080/"
curl: (7) Failed to connect to ::1 port 8080 after 12 ms: Connection refused
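A complementary check (a sketch; the port and /health path come from earlier in the thread): force IPv4 from the host to see whether only the v6 path is affected:

```shell
# Force IPv4 explicitly; if this also hangs, the problem is not limited to IPv6
curl -4 -v http://127.0.0.1:8080/health
```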
Also, when you list the ports in the compose section, it's actually the ingress network that is responsible for the mapping, I think, and that's picked up appropriately
$ docker network inspect ingress
[
    {
        "Name": "ingress",
        "Id": "1qd6e2d7zqrgwsnsus1mn0nx0",
        "Created": "2022-10-18T13:22:14.42613455Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": true,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "02fc1a3f9c335feb51d51c7ae87610401cd3befb98650bf9652e2884d9eb3c28": {
                "Name": "co_co-svc.2.lwvctygwa616q7up8zozq4mwg",
                "EndpointID": "eb482388ddded26424d09a2385cbc47f3bf9214bf086116cfd0938a757f1a2fd",
                "MacAddress": "02:42:0a:00:00:0a",
                "IPv4Address": "10.0.0.10/24",
                "IPv6Address": ""
            },
            "1976e6b0b7f6a9dba6a8dd0d43cfc9e38b8381ac175ae09ffbd3967f605dbcdd": {
                "Name": "co_co-svc.1.clatiekuuz08610tosp328c50",
                "EndpointID": "b16ba537248fe430698660c2b6a2b3863e0538a4357dc3a1ee6a565cad24b292",
                "MacAddress": "02:42:0a:00:00:09",
                "IPv4Address": "10.0.0.9/24",
                "IPv6Address": ""
            },
            "ingress-sbox": {
                "Name": "ingress-endpoint",
                "EndpointID": "82b7925d1c6e6d2afb74889bf156544284007a343ac6f38a2ea6e648ed24ca89",
                "MacAddress": "02:42:0a:00:00:02",
                "IPv4Address": "10.0.0.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4096"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "e49e33aed00b",
                "IP": "192.168.5.15"
            }
        ]
    }
]

calm-sugar-3169

10/19/2022, 5:23 PM
I suspect the IPv6 thing might be a red herring. I managed to repro this on a non-M1 Mac, and I can see that the service is actually bound and responds over IPv4 within the VM, and there is actually a corresponding rule for it in the iptables:
Chain DOCKER-INGRESS (2 references)
 pkts bytes target     prot opt in     out     source               destination
    5   300 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8080 to:172.18.0.2:8080
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0
as for why netstat is showing IPv6, this might be the explanation: https://github.com/moby/moby/issues/2174#issuecomment-237439515
now this might be one of those situations where the request comes in but has no idea how to get out 🤷
Ok, I can confirm that this behavior is the same on 1.5.1, so it is not a regression. Below is my observation: it seems as though the application listens on both IPv4 and IPv6; you can confirm this by curling the application from within the VM:
curl 127.0.0.1:8080
However, curling with IPv6:
curl -g -6 'http://localhost:8080'
hangs. This could be due to this open issue: https://github.com/moby/moby/issues/24379 In addition, looking at the tcpdump within the VM, I can see that the curl request destined for port 8080 from the client (the host in this case) is always forced to IPv6:
20:02:55.367583 lo    In  IP6 ::1.54490 > ::1.8080: Flags [S], seq 3666321179, win 65476, options [mss 65476,sackOK,TS val 869371798 ecr 0,nop,wscale 7], length 0
20:02:55.367593 lo    In  IP6 ::1.8080 > ::1.54490: Flags [S.], seq 3999623390, ack 3666321180, win 65464, options [mss 65476,sackOK,TS val 869371798 ecr 869371798,nop,wscale 7], length 0
20:02:55.367602 lo    In  IP6 ::1.54490 > ::1.8080: Flags [.], ack 1, win 512, options [nop,nop,TS val 869371798 ecr 869371798], length 0
20:02:55.368230 lo    In  IP6 ::1.54490 > ::1.8080: Flags [P.], seq 1:79, ack 1, win 512, options [nop,nop,TS val 869371798 ecr 869371798], length 78: HTTP: GET / HTTP/1.1
20:02:55.368235 lo    In  IP6 ::1.8080 > ::1.54490: Flags [.], ack 79, win 511, options [nop,nop,TS val 869371798 ecr 869371798], length 0
20:02:59.929836 lo    In  IP6 ::1.54490 > ::1.8080: Flags [F.], seq 79, ack 1, win 512, options [nop,nop,TS val 869376360 ecr 869371798], length 0
20:02:59.977814 lo    In  IP6 ::1.8080 > ::1.54490: Flags [.], ack 80, win 511, options [nop,nop,TS val 869376408 ecr 869376360], length 0
This could potentially be related to the open issue mentioned above, or perhaps a misconfiguration in Lima’s underlying switch that always forces IPv6 with swarm’s overlay networks. However, it is worth noting that when a container with a published port is deployed:
docker run -p 8081:80 -td nginx
Everything works as expected and looking at the tcpdump in the VM the requests are made over IPV4 as expected:
19:34:54.386979 lo    In  IP 127.0.0.1.35410 > 127.0.0.1.8081: Flags [S], seq 3673555498, win 65495, options [mss 65495,sackOK,TS val 1782487875 ecr 0,nop,wscale 7], length 0
19:34:54.387035 lo    In  IP 127.0.0.1.8081 > 127.0.0.1.35410: Flags [S.], seq 228590965, ack 3673555499, win 65483, options [mss 65495,sackOK,TS val 1782487875 ecr 1782487875,nop,wscale 7], length 0
19:34:54.387092 lo    In  IP 127.0.0.1.35410 > 127.0.0.1.8081: Flags [.], ack 1, win 512, options [nop,nop,TS val 1782487875 ecr 1782487875], length 0
19:34:54.387974 lo    In  IP 127.0.0.1.35410 > 127.0.0.1.8081: Flags [P.], seq 1:79, ack 1, win 512, options [nop,nop,TS val 1782487876 ecr 1782487875], length 78
19:34:54.388017 lo    In  IP 127.0.0.1.8081 > 127.0.0.1.35410: Flags [.], ack 79, win 511, options [nop,nop,TS val 1782487876 ecr 1782487876], length 0
19:34:54.388655 lo    In  IP 127.0.0.1.8081 > 127.0.0.1.35410: Flags [P.], seq 1:854, ack 79, win 512, options [nop,nop,TS val 1782487877 ecr 1782487876], length 853
19:34:54.388703 lo    In  IP 127.0.0.1.35410 > 127.0.0.1.8081: Flags [.], ack 854, win 506, options [nop,nop,TS val 1782487877 ecr 1782487877], length 0
19:34:54.389693 lo    In  IP 127.0.0.1.35410 > 127.0.0.1.8081: Flags [F.], seq 79, ack 854, win 512, options [nop,nop,TS val 1782487878 ecr 1782487877], length 0
19:34:54.390404 lo    In  IP 127.0.0.1.8081 > 127.0.0.1.35410: Flags [F.], seq 854, ack 80, win 512, options [nop,nop,TS val 1782487879 ecr 1782487878], length 0
19:34:54.390534 lo    In  IP 127.0.0.1.35410 > 127.0.0.1.8081: Flags [.], ack 855, win 512, options [nop,nop,TS val 1782487879 ecr 1782487879], length 0
To summarize: this could be a combination of the open issue mentioned above and a misconfiguration of Lima’s underlying switch (vde_vmnet), or the misbehaviour could be driven entirely by the open issue here: https://github.com/moby/moby/issues/24379
Also, created the corresponding issue to keep track of this: https://github.com/rancher-sandbox/rancher-desktop/issues/3221

creamy-laptop-9977

10/19/2022, 8:54 PM
Thanks for continuing to dig into this, @calm-sugar-3169! Interesting situation indeed!

hundreds-sandwich-7373

10/20/2022, 5:22 AM
Thank you so much for this follow up, I really appreciate it, unfortunately it looks like I decanned the worms here.