adamant-kite-43734
freezing-engineer-70181
10/27/2023, 4:27 AM
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8a52da2764c2 registry:2 "/entrypoint.sh /etc…" 7 days ago Up 2 days 0.0.0.0:5000->5000/tcp k3d-local-reg
f3e178aef06f 1b9bf3d4c187 "/bin/sh -c nginx-pr…" 7 days ago Up 2 days 80/tcp, 0.0.0.0:30000-30767->30000-30767/tcp, 0.0.0.0:60784->6443/tcp, 0.0.0.0:32769->30040/tcp k3d-k3s-default-serverlb
fae6c1f8a49c rancher/k3s:v1.27.4-k3s1 "/bin/k3s server --t…" 8 weeks ago Up 2 days k3d-k3s-default-server-0
wide-garage-9465
10/27/2023, 4:27 AM
docker ps
and docker logs <k3d-loadbalancer>
would be interesting
freezing-engineer-70181
10/27/2023, 4:30 AM
2023/10/24 12:13:34 [notice] 48#48: signal 29 (SIGIO) received
2023/10/24 12:16:33 [error] 129#129: *25 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: 0.0.0.0:30010, upstream: "172.19.0.2:30010", bytes from/to client:0/0, bytes from/to upstream:0/0
2023/10/24 12:16:33 [error] 129#129: *27 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: 0.0.0.0:30010, upstream: "172.19.0.2:30010", bytes from/to client:0/0, bytes from/to upstream:0/0
2023/10/24 12:16:33 [error] 129#129: *29 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: 0.0.0.0:30010, upstream: "172.19.0.2:30010", bytes from/to client:0/0, bytes from/to upstream:0/0
2023/10/24 12:22:05 [notice] 62#62: exiting
2023/10/24 12:22:05 [notice] 62#62: exit
2023/10/24 12:22:05 [notice] 48#48: signal 17 (SIGCHLD) received from 62
2023/10/24 12:22:05 [notice] 48#48: worker process 62 exited with code 0
2023/10/24 12:22:05 [notice] 48#48: signal 29 (SIGIO) received
2023/10/24 12:22:07 [error] 129#129: *39 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: 0.0.0.0:30030, upstream: "172.19.0.2:30030", bytes from/to client:0/0, bytes from/to upstream:0/0
2023/10/24 12:22:08 [error] 129#129: *41 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: 0.0.0.0:30030, upstream: "172.19.0.2:30030", bytes from/to client:0/0, bytes from/to upstream:0/0
2023/10/26 04:17:11 [error] 129#129: *1571 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: 0.0.0.0:30030, upstream: "172.19.0.2:30030", bytes from/to client:0/0, bytes from/to upstream:0/0
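The repeated "connect() failed (111: Connection refused) while connecting to upstream" entries mean the nginx proxy inside the serverlb container reached the server node, but nothing was listening on that NodePort there. A sketch of checks one could run (port numbers taken from the log above; container name from the docker ps output above):

```shell
# Does any Service actually claim NodePort 30010 or 30030?
kubectl get svc --all-namespaces | grep -E '30010|30030'

# Are there ready endpoints (running pods) behind those Services?
kubectl get endpoints --all-namespaces

# Recent load-balancer logs for context
docker logs k3d-k3s-default-serverlb --tail 50
```

If the Service exists but has no endpoints, the problem is the workload behind it, not the k3d load balancer.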
wide-garage-9465
10/27/2023, 4:34 AM
docker ps
showing the k3d containers, please
wide-garage-9465
10/27/2023, 4:42 AM
At the beginning it was good; it broke about 7 days ago. How can I recover it, please?
Since I still don't have the information I need, I cannot really help you here 🤔
wide-garage-9465
10/27/2023, 4:42 AM
docker ps
please, and full logs of the serverlb and server-0 containers
freezing-engineer-70181
wide-garage-9465
10/27/2023, 4:50 AM
docker ps
?
freezing-engineer-70181
10/27/2023, 4:52 AM
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8a52da2764c2 registry:2 "/entrypoint.sh /etc…" 7 days ago Up 2 days 0.0.0.0:5000->5000/tcp k3d-local-reg
f3e178aef06f 1b9bf3d4c187 "/bin/sh -c nginx-pr…" 7 days ago Up 2 days 80/tcp, 0.0.0.0:30000-30767->30000-30767/tcp, 0.0.0.0:60784->6443/tcp, 0.0.0.0:32769->30040/tcp k3d-k3s-default-serverlb
fae6c1f8a49c rancher/k3s:v1.27.4-k3s1 "/bin/k3s server --t…" 8 weeks ago Up 2 days k3d-k3s-default-server-0
freezing-engineer-70181
10/27/2023, 4:52 AM
docker ps
wide-garage-9465
10/27/2023, 4:57 AM
`k3d-k3s-default-serverlb`: Created 7 days ago
`k3d-k3s-default-server-0`: Created 8 weeks ago
Also, the IMAGE of the loadbalancer does not check out. It should be something like ghcr.io/k3d-io/k3d-proxy.
So it looks like there were changes made to the loadbalancer 7 days ago, which matches your observation that it stopped working around then.
wide-garage-9465
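One way to confirm this diagnosis (container name as shown in the docker ps output above; a stock k3d load balancer is built from ghcr.io/k3d-io/k3d-proxy):

```shell
# Show which image the serverlb container was actually created from
docker inspect --format '{{.Config.Image}}' k3d-k3s-default-serverlb

# A healthy k3d cluster should have the proxy image available locally
docker images | grep k3d-proxy
```

If the inspected image is not the k3d proxy, the load balancer container was replaced or rebuilt outside of k3d, which would explain the breakage.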
10/27/2023, 4:59 AM
How about after the cluster is created?
Create it, make it accessible from the K3s container, e.g. by attaching it to the Docker network, then use it in your pod definitions (and define imagePullSecrets if you're using a password there).
freezing-engineer-70181
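Those steps might look like this (registry and network names taken from the docker ps output above; `k3d-k3s-default` is the network k3d creates for a cluster named `k3s-default`; the credentials and secret name are purely illustrative):

```shell
# Make the registry container reachable from the K3s node container
docker network connect k3d-k3s-default k3d-local-reg

# If the registry requires authentication, create a pull secret
kubectl create secret docker-registry regcred \
  --docker-server=k3d-local-reg:5000 \
  --docker-username=myuser \
  --docker-password=mypassword
# ...then reference "regcred" under imagePullSecrets in the pod spec
```

Note that `docker network connect` fails if the container is already attached to that network, which is harmless here.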
wide-garage-9465
10/27/2023, 5:06 AM
…k3d for actual production workloads though, especially because of the limitations that Docker brings with it.
freezing-engineer-70181