stale-balloon-14828
10/11/2022, 9:18 PM
narrow-flower-69849
10/12/2022, 6:42 AM
narrow-flower-69849
10/12/2022, 6:43 AM
narrow-flower-69849
10/12/2022, 6:44 AM
narrow-flower-69849
10/12/2022, 6:44 AM
narrow-flower-69849
10/12/2022, 6:48 AM
wide-mechanic-33041
10/12/2022, 11:50 AM
fast-garage-66093
10/12/2022, 5:04 PM
icy-parrot-30770
10/12/2022, 7:43 PM
proud-easter-69184
10/13/2022, 9:53 AM
thankful-sandwich-80010
10/13/2022, 6:10 PM
purple-action-15801
10/13/2022, 8:12 PM
purple-action-15801
10/13/2022, 8:18 PM
nerdctl pull mongo
against hub.docker.com works with no problem at all.
The Problem:
• When I AM on the VPN, I can't pull from my corporate container registry.
◦ However, my macOS machine can and my Linux machine can, so this is only an issue on Windows.
I suspect that it is because Rancher Desktop can't access /etc/rancher/desktop/credfwd,
because it throws an error: /usr/local/bin/docker-credential-rancher-desktop: source: line 5: can't open '/etc/rancher/desktop/credfwd': No such file or directory
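A quick sanity check (a sketch only, assuming rdctl is on the PATH and can run a command inside the Rancher Desktop distro): see whether the file the helper is complaining about actually exists in the VM.
# sketch: look for the forwarded-credentials file from inside the Rancher Desktop distro
rdctl shell ls -l /etc/rancher/desktop/credfwd
# sketch: the helper script that tries to source it, for reference
rdctl shell cat /usr/local/bin/docker-credential-rancher-desktop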
better-nail-51710
10/14/2022, 8:02 AM
FATAL image scan error: scan error: unable to initialize a scanner: unable to initialize a docker scanner: 4 errors occurred:
* unable to inspect the image (nginx-helloworld:latest): Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
* unable to initialize Podman client: no podman socket found: stat podman/podman.sock: no such file or directory
* failed to initialize a containerd client: failed to dial "/run/k3s/containerd/containerd.sock": connection error: desc = "transport: error while dialing: dial unix /run/k3s/containerd/containerd.sock: connect: permission denied"
* GET https://index.docker.io/v2/library/nginx-helloworld/manifests/latest: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/nginx-helloworld Type:repository]]
The same thing happens if I try to scan an image pulled from my company's private registry:
FATAL image scan error: scan error: unable to initialize a scanner: unable to initialize a docker scanner: 4 errors occurred:
* unable to inspect the image (artifactory.mycompany.com/images/hello-app:v0.0.1): Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
* unable to initialize Podman client: no podman socket found: stat podman/podman.sock: no such file or directory
* failed to initialize a containerd client: failed to dial "/run/k3s/containerd/containerd.sock": connection error: desc = "transport: error while dialing: dial unix /run/k3s/containerd/containerd.sock: connect: permission denied"
* GET https://artifactory.mycompany.com/v2/images/hello-app/manifests/v0.0.1: UNAUTHORIZED: The client does not have permission for manifest; map[manifest:hello-app/v0.0.1/manifest.json]
I found this issue (still open): https://github.com/rancher-sandbox/rancher-desktop/issues/539
Is there any possibility of scanning a locally built image with nerdctl, or an image pulled from a private registry? If not, this is a severe blocker for adopting Rancher Desktop as a local Kubernetes development environment.
Thank you!
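One workaround sketch to try for the containerd case (assumptions: Trivy is installed inside the VM, it honors the CONTAINERD_ADDRESS environment variable, and the permission-denied error on the socket is the only blocker):
# sketch: scan against the k3s containerd socket from inside the VM, as root
rdctl shell
sudo CONTAINERD_ADDRESS=/run/k3s/containerd/containerd.sock trivy image nginx-helloworld:latest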
bumpy-tiger-73147
10/14/2022, 10:28 AM
The file ~/.docker/cli-plugins/docker-compose should be a symlink to ~/.rd/bin/docker-compose, but it points to /opt/homebrew/opt/docker-compose/bin/docker-compose.
My docker-compose comes from brew, so this is expected, but I didn't know I shouldn't use it and should instead use the one shipped with Rancher Desktop. Should I just change the link to point to ~/.rd/bin/docker-compose? Or is there something else going on here?
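If re-pointing the link is the way to go, it is a one-liner (a sketch, assuming Rancher Desktop's copy lives at the default ~/.rd/bin path):
# sketch: make the Docker CLI compose plugin use the binary shipped with Rancher Desktop
ln -sf ~/.rd/bin/docker-compose ~/.docker/cli-plugins/docker-compose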
magnificent-pilot-35275
10/14/2022, 3:00 PM
flaky-dusk-65029
10/14/2022, 6:12 PM
numerous-night-87848
10/14/2022, 9:40 PM
Kubernetes is using context kind-argo instead of rancher-desktop.
What does this mean?
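It means kubectl's current context in ~/.kube/config points at a kind cluster named argo rather than at the Rancher Desktop cluster. Switching back is straightforward (a sketch, assuming the rancher-desktop context is present in the kubeconfig):
# sketch: list the available contexts and make rancher-desktop the active one
kubectl config get-contexts
kubectl config use-context rancher-desktop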
echoing-ability-7881
10/15/2022, 11:48 AM
quick-keyboard-83126
10/16/2022, 6:53 AM
quick-keyboard-83126
10/16/2022, 7:16 AM
The application cannot reach the general internet for updated kubernetes versions and other components, but can still operate.
future-keyboard-11621
10/17/2022, 11:19 AM
E1017 13:18:31.887786 22909 proxy_server.go:147] Error while proxying request: x509: certificate is valid for 10.43.0.1, 127.0.0.1, 192.168.5.15, ::1, fec0::5055:55ff:fe64:2235, not 10.0.0.5
The error suggests that the certificate is only valid for the listed IPs, not for external access. How could I bypass or fix this?
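A quick, insecure way around it while testing, plus the flag that would fix it properly (a sketch; how to pass extra k3s server arguments under Rancher Desktop is not covered here and may vary by version):
# sketch: skip server-certificate verification for a single kubectl call (testing only)
kubectl --insecure-skip-tls-verify=true get nodes
# the proper fix is to get 10.0.0.5 into the API server certificate's SANs;
# for a k3s server that means starting it with: --tls-san 10.0.0.5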
helpful-butcher-75556
10/17/2022, 1:38 PM
Env-file values are handled differently when nerdctl run is used vs when nerdctl compose up is used with an env-file. This is on RD 1.6.0, Mac M1.
If you have this in a .env file:
ZZ='.env file $test pound # and more'
ZZ_NOQUOTES=.env file $test pound # and more
then nerdctl run --rm --env-file ./.env busybox /bin/sh -c set
# looks like it adds single quotes
ZZ=''"'"'.env file $test pound # and more'"'"
ZZ_NOQUOTES='.env file $test pound # and more'
But if you use nerdctl compose up with that file, the single quotes work as the spec says they should (truncating the second value, since # starts a comment):
ZZ='.env file $test pound # and more'
ZZ_NOQUOTES='.env file pound'
The compose file:
services:
  busybox:
    image: busybox
    env_file:
      - .env
    command: ['sh', '-c', 'set']
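For the nerdctl run quoting difference described above, one interim workaround sketch (it simply sidesteps --env-file for the affected variable; the value shown is the example from this thread):
# sketch: pass the variable explicitly so no extra quoting layer is added
nerdctl run --rm -e 'ZZ=.env file $test pound # and more' busybox /bin/sh -c set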
adventurous-piano-43924
10/17/2022, 2:31 PM
adventurous-piano-43924
10/17/2022, 2:31 PM
chilly-farmer-59320
10/17/2022, 10:04 PM
➜ bin ./nerdctl run nginx
docker.io/library/nginx:latest: resolving |--------------------------------------|
elapsed: 9.9 s total: 0.0 B (0.0 B/s)
INFO[0010] trying next host error="failed to do request: Head \"https://registry-1.docker.io/v2/library/nginx/manifests/latest\": dial tcp: lookup registry-1.docker.io on 192.168.5.3:53: read udp 192.168.5.15:34708->192.168.5.3:53: i/o timeout" host=registry-1.docker.io
FATA[0010] failed to resolve reference "docker.io/library/nginx:latest": failed to do request: Head "https://registry-1.docker.io/v2/library/nginx/manifests/latest": dial tcp: lookup registry-1.docker.io on 192.168.5.3:53: read udp 192.168.5.15:34708->192.168.5.3:53: i/o timeout
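The failure is a DNS timeout inside the VM (the lookup of registry-1.docker.io against 192.168.5.3:53 times out). A quick way to confirm from the host (a sketch; assumes the usual BusyBox tools are present in the VM):
# sketch: check which resolver the VM uses and whether it can resolve the registry host
rdctl shell cat /etc/resolv.conf
rdctl shell nslookup registry-1.docker.io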
clever-pillow-66654
10/18/2022, 11:13 AM
2022-10-18T11:12:22.502Z: Auth failure: user/password validation failure for attempted login of user interactive-user
many-kite-98414
10/18/2022, 11:45 AM
many-kite-98414
10/18/2022, 11:46 AM
hundreds-sandwich-7373
10/18/2022, 1:37 PM
hundreds-sandwich-7373
10/18/2022, 1:37 PM
$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
1vdqq80ak1g4s0nnph4xjsean * lima-rancher-desktop Ready Active Leader 20.10.18
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
34aa9f60baee bridge bridge local
7cd7acb2040c docker_gwbridge bridge local
b928699d5d6c host host local
1qd6e2d7zqrg ingress overlay swarm
4583883d6b30 none null local
tw4bolgoata9 svc_default overlay swarm
version: "3.8"
services:
co-svc:
image: co-test:1.0
command:
- serve
deploy:
replicas: 2
ports:
- mode: ingress
protocol: tcp
published: 8080
target: 8080
networks:
- svc_default
environment:
JAVA_OPTS: -Dcoherence.wka=tasks.co_co-svc
networks:
svc_default:
external: true
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
6ygzdu8w6f7s co_co-svc replicated 2/2 co-test:1.0 *:8080->8080/tcp
$ docker service logs co_co-svc -n 5
co_co-svc.1.vhj6hny4nxpb@lima-rancher-desktop | 13:24:06.005 [main] DEBUG io.smallrye.openapi.jaxrs - SROAP10001: Processing JaxRs method: io.example.entity.Task updateTask(java.util.UUID id, io.example.entity.Task task)
co_co-svc.1.vhj6hny4nxpb@lima-rancher-desktop | 13:24:06.006 [main] DEBUG io.smallrye.openapi.jaxrs - SROAP10000: Processing a JAX-RS resource class: CoherenceResource
co_co-svc.1.vhj6hny4nxpb@lima-rancher-desktop | 13:24:06.006 [main] DEBUG io.smallrye.openapi.jaxrs - SROAP10001: Processing JaxRs method: javax.ws.rs.core.Response get()
co_co-svc.1.vhj6hny4nxpb@lima-rancher-desktop | 13:24:06.016 [features-thread] INFO io.helidon.common.HelidonFeatures - Helidon MP 2.5.4 features: [CDI, Config, Fault Tolerance, Health, JAX-RS, Metrics, Open API, REST Client, Security, Server, Tracing]
co_co-svc.1.vhj6hny4nxpb@lima-rancher-desktop | 13:24:06.134 [Logger@9255624 22.06.2] INFO coherence - (thread=PagedTopic:Topic, member=2, up=9.698): Partition ownership has stabilized with 2 nodes
co_co-svc.2.724kxhhi8f3d@lima-rancher-desktop | 13:24:05.795 [main] DEBUG io.smallrye.openapi.jaxrs - SROAP10001: Processing JaxRs method: javax.ws.rs.core.Response get()
co_co-svc.2.724kxhhi8f3d@lima-rancher-desktop | 13:24:05.841 [Logger@9242415 22.06.2] INFO coherence - (thread=PagedTopic:Topic, member=1, up=9.399): Transferring primary PartitionSet{0..127} to member 2 requesting 128
co_co-svc.2.724kxhhi8f3d@lima-rancher-desktop | 13:24:05.851 [features-thread] INFO io.helidon.common.HelidonFeatures - Helidon MP 2.5.4 features: [CDI, Config, Fault Tolerance, Health, JAX-RS, Metrics, Open API, REST Client, Security, Server, Tracing]
co_co-svc.2.724kxhhi8f3d@lima-rancher-desktop | 13:24:06.026 [Logger@9242415 22.06.2] INFO coherence - (thread=PagedTopic:Topic, member=1, up=9.584): Partition ownership has stabilized with 2 nodes
co_co-svc.2.724kxhhi8f3d@lima-rancher-desktop | 13:24:06.128 [Logger@9242415 22.06.2] INFO coherence - (thread=PagedTopic:Topic, member=1, up=9.686): Partition ownership has stabilized with 2 nodes
$ curl -vvv http://localhost:8080/health
* Trying 127.0.0.1:8080...
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET /health HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.79.1
> Accept: */*
>
this goes on forever...
$ docker exec -ti 24011efe1863 bash
bash-4.4# curl http://co-svc:8080/health
{"outcome":"UP","status":"UP","checks":[{"name":"deadlock","state":"UP","status":"UP"},{"name":"diskSpace","state":"UP","status":"UP","data":{"free":"90.58 GB","freeBytes":97263853568,"percentFree":"92.55%","total":"97.87 GB","totalBytes":105088212992}},{"name":"heapMemory","state":"UP","status":"UP","data":{"free":"54.82 MB","freeBytes":57482184,"max":"1.45 GB","maxBytes":1553989632,"percentFree":"97.29%","total":"95.00 MB","totalBytes":99614720}}]}
bash-4.4# curl http://co-svc:8080/coherence
{"cluster":"root's cluster","members":[{"name":null,"address":"/10.0.4.7"},{"name":null,"address":"tasks.co_co-svc/10.0.4.6"}]}
bash-4.4#
kind-iron-72902
10/18/2022, 5:23 PM
hundreds-sandwich-7373
10/18/2022, 5:59 PM
docker run --rm -ti -p "8080:8080" -e "JAVA_OPTS=-Dcoherence.localhost=127.0.0.1" co-test:1.0 serve
kind-iron-72902
10/18/2022, 6:31 PM
creamy-laptop-9977
10/18/2022, 8:29 PM
I don't have access to the co-test:1.0 image. Can you replicate this with another public image? (I'll continue to look for a possible test image as well.)
Could you also share the output of docker network inspect svc_default, and of rdctl shell, then sudo netstat -nlp | grep 8080?
In my testing, there seems to be a disconnect between IPv4 and IPv6, but it could easily be due to my test configuration.
hundreds-sandwich-7373
10/19/2022, 5:43 AM
$ docker network inspect svc_default
[
{
"Name": "svc_default",
"Id": "tw4bolgoata9gpp785ql59jwb",
"Created": "2022-10-18T09:05:29.876631272Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.4.0/24",
"Gateway": "10.0.4.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": null,
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4100"
},
"Labels": null
}
]
$ rdctl shell
lima-rancher-desktop:/Users/lcoughli$ sudo netstat -nlp | grep 8080
tcp 0 0 :::8080 :::* LISTEN 2860/dockerd
$ curl -g -6 "http://[::1]:8080/"
curl: (7) Failed to connect to ::1 port 8080 after 12 ms: Connection refused
$ docker network inspect ingress
[
{
"Name": "ingress",
"Id": "1qd6e2d7zqrgwsnsus1mn0nx0",
"Created": "2022-10-18T13:22:14.42613455Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": true,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"02fc1a3f9c335feb51d51c7ae87610401cd3befb98650bf9652e2884d9eb3c28": {
"Name": "co_co-svc.2.lwvctygwa616q7up8zozq4mwg",
"EndpointID": "eb482388ddded26424d09a2385cbc47f3bf9214bf086116cfd0938a757f1a2fd",
"MacAddress": "02:42:0a:00:00:0a",
"IPv4Address": "10.0.0.10/24",
"IPv6Address": ""
},
"1976e6b0b7f6a9dba6a8dd0d43cfc9e38b8381ac175ae09ffbd3967f605dbcdd": {
"Name": "co_co-svc.1.clatiekuuz08610tosp328c50",
"EndpointID": "b16ba537248fe430698660c2b6a2b3863e0538a4357dc3a1ee6a565cad24b292",
"MacAddress": "02:42:0a:00:00:09",
"IPv4Address": "10.0.0.9/24",
"IPv6Address": ""
},
"ingress-sbox": {
"Name": "ingress-endpoint",
"EndpointID": "82b7925d1c6e6d2afb74889bf156544284007a343ac6f38a2ea6e648ed24ca89",
"MacAddress": "02:42:0a:00:00:02",
"IPv4Address": "10.0.0.2/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4096"
},
"Labels": {},
"Peers": [
{
"Name": "e49e33aed00b",
"IP": "192.168.5.15"
}
]
}
]
calm-sugar-3169
10/19/2022, 5:23 PM
Chain DOCKER-INGRESS (2 references)
pkts bytes target prot opt in out source destination
5 300 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080 to:172.18.0.2:8080
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
curl 127.0.0.1:8080 works.
However, curling with IPv6:
curl -g -6 'http://localhost:8080'
hangs. This could be due to this open issue: https://github.com/moby/moby/issues/24379
In addition, looking at the tcpdump within the VM, I can see that the curl request destined for port 8080 from the client (the host in this case) is always forced onto IPv6:
20:02:55.367583 lo In IP6 ::1.54490 > ::1.8080: Flags [S], seq 3666321179, win 65476, options [mss 65476,sackOK,TS val 869371798 ecr 0,nop,wscale 7], length 0
20:02:55.367593 lo In IP6 ::1.8080 > ::1.54490: Flags [S.], seq 3999623390, ack 3666321180, win 65464, options [mss 65476,sackOK,TS val 869371798 ecr 869371798,nop,wscale 7], length 0
20:02:55.367602 lo In IP6 ::1.54490 > ::1.8080: Flags [.], ack 1, win 512, options [nop,nop,TS val 869371798 ecr 869371798], length 0
20:02:55.368230 lo In IP6 ::1.54490 > ::1.8080: Flags [P.], seq 1:79, ack 1, win 512, options [nop,nop,TS val 869371798 ecr 869371798], length 78: HTTP: GET / HTTP/1.1
20:02:55.368235 lo In IP6 ::1.8080 > ::1.54490: Flags [.], ack 79, win 511, options [nop,nop,TS val 869371798 ecr 869371798], length 0
20:02:59.929836 lo In IP6 ::1.54490 > ::1.8080: Flags [F.], seq 79, ack 1, win 512, options [nop,nop,TS val 869376360 ecr 869371798], length 0
20:02:59.977814 lo In IP6 ::1.8080 > ::1.54490: Flags [.], ack 80, win 511, options [nop,nop,TS val 869376408 ecr 869376360], length 0
This could potentially be related to the open issue mentioned above, or perhaps to a misconfiguration in Lima's underlying switch that always forces IPv6 with the swarm's overlay networks.
However, it is worth noting that when a container with a published port is deployed directly:
docker run -p 8081:80 -td nginx
everything works as expected, and looking at the tcpdump in the VM the requests are made over IPv4 as expected:
19:34:54.386979 lo In IP 127.0.0.1.35410 > 127.0.0.1.8081: Flags [S], seq 3673555498, win 65495, options [mss 65495,sackOK,TS val 1782487875 ecr 0,nop,wscale 7], length 0
19:34:54.387035 lo In IP 127.0.0.1.8081 > 127.0.0.1.35410: Flags [S.], seq 228590965, ack 3673555499, win 65483, options [mss 65495,sackOK,TS val 1782487875 ecr 1782487875,nop,wscale 7], length 0
19:34:54.387092 lo In IP 127.0.0.1.35410 > 127.0.0.1.8081: Flags [.], ack 1, win 512, options [nop,nop,TS val 1782487875 ecr 1782487875], length 0
19:34:54.387974 lo In IP 127.0.0.1.35410 > 127.0.0.1.8081: Flags [P.], seq 1:79, ack 1, win 512, options [nop,nop,TS val 1782487876 ecr 1782487875], length 78
19:34:54.388017 lo In IP 127.0.0.1.8081 > 127.0.0.1.35410: Flags [.], ack 79, win 511, options [nop,nop,TS val 1782487876 ecr 1782487876], length 0
19:34:54.388655 lo In IP 127.0.0.1.8081 > 127.0.0.1.35410: Flags [P.], seq 1:854, ack 79, win 512, options [nop,nop,TS val 1782487877 ecr 1782487876], length 853
19:34:54.388703 lo In IP 127.0.0.1.35410 > 127.0.0.1.8081: Flags [.], ack 854, win 506, options [nop,nop,TS val 1782487877 ecr 1782487877], length 0
19:34:54.389693 lo In IP 127.0.0.1.35410 > 127.0.0.1.8081: Flags [F.], seq 79, ack 854, win 512, options [nop,nop,TS val 1782487878 ecr 1782487877], length 0
19:34:54.390404 lo In IP 127.0.0.1.8081 > 127.0.0.1.35410: Flags [F.], seq 854, ack 80, win 512, options [nop,nop,TS val 1782487879 ecr 1782487878], length 0
19:34:54.390534 lo In IP 127.0.0.1.35410 > 127.0.0.1.8081: Flags [.], ack 855, win 512, options [nop,nop,TS val 1782487879 ecr 1782487879], length 0
To summarize, this could be a combination of the open issue mentioned above and a misconfiguration of Lima's underlying switch (vde_vmnet), or the misbehaviour could be driven entirely by the open issue here: https://github.com/moby/moby/issues/24379
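As an interim workaround (a sketch; it avoids the IPv6 path rather than fixing it), curl can be forced onto IPv4 either with the -4 flag or by using the literal loopback address:
# sketch: hit the swarm-published port over IPv4 explicitly
curl -4 http://localhost:8080/health
curl http://127.0.0.1:8080/health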
creamy-laptop-9977
10/19/2022, 8:54 PM
hundreds-sandwich-7373
10/20/2022, 5:22 AM