# k3d
w
Did you make any manual changes there? The config generated is super simple, so you can just spin up a k3d-proxy container with some env vars. I'm not at my desk right now, but that's definitely possible.
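A minimal sketch of how to recover the proxy's settings before recreating it, assuming k3d's default container naming (`k3d-k3s-default-serverlb`); the exact env vars and config paths are best read off the existing container rather than guessed:

```bash
# Hedged sketch: read the existing (possibly broken) loadbalancer's settings off
# the container itself, so an equivalent k3d-proxy container can be recreated.
# The container name is an assumption based on k3d's default cluster name "k3s-default".
LB=k3d-k3s-default-serverlb

# Image, env vars and port mappings the proxy was started with
docker inspect "$LB" --format '{{.Config.Image}}'
docker inspect "$LB" --format '{{json .Config.Env}}'
docker inspect "$LB" --format '{{json .HostConfig.PortBindings}}'

# The nginx config the proxy is actually serving (standard nginx path assumed)
docker exec "$LB" cat /etc/nginx/nginx.conf
```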
f
Thanks for your response. I didn't change anything. BTW, I used k3d on Win10 with Docker Desktop. kubectl complains that it can't connect to the API port. How do I solve the problem, please?
I tried to telnet the API port on localhost, but I get no response.
My question: is there a way to generate another loadbalancer and connect it to the previous cluster?
w
WSL2? Can you give me some docker output? The k3d containers including the loadbalancer, and the logs of the loadbalancer?
f
kubectl.exe get pods
E1027 12:25:00.801946 43500 memcache.go:265] couldn't get current server API group list: Get "https://host.docker.internal:60784/api?timeout=32s": dial tcp 10.55.99.80:60784: connectex: No connection could be made because the target machine actively refused it.
E1027 12:25:02.848662 43500 memcache.go:265] couldn't get current server API group list: Get "https://host.docker.internal:60784/api?timeout=32s": dial tcp 10.55.99.80:60784: connectex: No connection could be made because the target machine actively refused it.
E1027 12:25:04.881390 43500 memcache.go:265] couldn't get current server API group list: Get "https://host.docker.internal:60784/api?timeout=32s": dial tcp 10.55.99.80:60784: connectex: No connection could be made because the target machine actively refused it.
E1027 12:25:06.920690 43500 memcache.go:265] couldn't get current server API group list: Get "https://host.docker.internal:60784/api?timeout=32s": dial tcp 10.55.99.80:60784: connectex: No connection could be made because the target machine actively refused it.
E1027 12:25:08.947904 43500 memcache.go:265] couldn't get current server API group list: Get "https://host.docker.internal:60784/api?timeout=32s": dial tcp 10.55.99.80:60784: connectex: No connection could be made because the target machine actively refused it.
Unable to connect to the server: dial tcp 10.55.99.80:60784: connectex: No connection could be made because the target machine actively refused it.
CONTAINER ID   IMAGE                      COMMAND                   CREATED       STATUS      PORTS                                                                                                                         NAMES
8a52da2764c2   registry:2                 "/entrypoint.sh /etc…"   7 days ago    Up 2 days   0.0.0.0:5000->5000/tcp                                                                                                        k3d-local-reg
f3e178aef06f   1b9bf3d4c187               "/bin/sh -c nginx-pr…"   7 days ago    Up 2 days   80/tcp, 0.0.0.0:30000-30767->30000-30767/tcp, 0.0.0.0:60784->6443/tcp, 0.0.0.0:32769->30040/tcp                               k3d-k3s-default-serverlb
fae6c1f8a49c   rancher/k3s:v1.27.4-k3s1   "/bin/k3s server --t…"   8 weeks ago   Up 2 days                                                                                                                                 k3d-k3s-default-server-0
w
`docker ps` and `docker logs <k3d-loadbalancer>` would be interesting.
f
2023/10/24 12:13:34 [notice] 48#48: signal 29 (SIGIO) received
2023/10/24 12:16:33 [error] 129#129: *25 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: 0.0.0.0:30010, upstream: "172.19.0.2:30010", bytes from/to client:0/0, bytes from/to upstream:0/0
2023/10/24 12:16:33 [error] 129#129: *27 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: 0.0.0.0:30010, upstream: "172.19.0.2:30010", bytes from/to client:0/0, bytes from/to upstream:0/0
2023/10/24 12:16:33 [error] 129#129: *29 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: 0.0.0.0:30010, upstream: "172.19.0.2:30010", bytes from/to client:0/0, bytes from/to upstream:0/0
2023/10/24 12:22:05 [notice] 62#62: exiting
2023/10/24 12:22:05 [notice] 62#62: exit
2023/10/24 12:22:05 [notice] 48#48: signal 17 (SIGCHLD) received from 62
2023/10/24 12:22:05 [notice] 48#48: worker process 62 exited with code 0
2023/10/24 12:22:05 [notice] 48#48: signal 29 (SIGIO) received
2023/10/24 12:22:07 [error] 129#129: *39 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: 0.0.0.0:30030, upstream: "172.19.0.2:30030", bytes from/to client:0/0, bytes from/to upstream:0/0
2023/10/24 12:22:08 [error] 129#129: *41 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: 0.0.0.0:30030, upstream: "172.19.0.2:30030", bytes from/to client:0/0, bytes from/to upstream:0/0
2023/10/26 04:17:11 [error] 129#129: *1571 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: 0.0.0.0:30030, upstream: "172.19.0.2:30030", bytes from/to client:0/0, bytes from/to upstream:0/0
These are the latest logs of the k3d loadbalancer.
server: https://host.docker.internal:60784. I used `k3d kubeconfig get --all` to find that API port, but I can't telnet to it.
w
I need the k3d command you used to create the cluster and the output of `docker ps` showing the k3d containers, please.
f
k3d cluster create k3s -p 30000-30767:30000-30767
At the beginning it was good. It broke about 7 days ago. How can I recover it, please?
w
Did that work? The port range is pretty large and docker doesn't cope very well with that.
> At the beginning it was good. It broke about 7 days ago. How can I recover it, please?
Since I still don't have the information I need, I cannot really help you here 🤔
`docker ps` please, and full logs of the serverlb and server-0 containers.
f
Yup, at the very beginning it worked.
Here are the log files (7zip archive).
The k3d version is 5.6.0.
w
What about `docker ps`?
f
docker ps
CONTAINER ID   IMAGE                      COMMAND                   CREATED       STATUS      PORTS                                                                                                                         NAMES
8a52da2764c2   registry:2                 "/entrypoint.sh /etc…"   7 days ago    Up 2 days   0.0.0.0:5000->5000/tcp                                                                                                        k3d-local-reg
f3e178aef06f   1b9bf3d4c187               "/bin/sh -c nginx-pr…"   7 days ago    Up 2 days   80/tcp, 0.0.0.0:30000-30767->30000-30767/tcp, 0.0.0.0:60784->6443/tcp, 0.0.0.0:32769->30040/tcp                               k3d-k3s-default-serverlb
fae6c1f8a49c   rancher/k3s:v1.27.4-k3s1   "/bin/k3s server --t…"   8 weeks ago   Up 2 days                                                                                                                                 k3d-k3s-default-server-0
Sorry, here is `docker ps`.
w
Ah it was up there already, sorry
f
I found that pulling images inside k3s often fails, so I want to create a local registry and attach it to the existing cluster. Can I?
w
Yeah, you just need to do all the manual work for that yourself. k3d has options to do that for you when creating the cluster.
f
How about after the cluster is created?
w
`k3d-k3s-default-serverlb`: created 7 days ago
`k3d-k3s-default-server-0`: created 8 weeks ago
Also, the IMAGE of the loadbalancer does not check out. It should be something like `ghcr.io/k3d-io/k3d-proxy`. So it looks like there were changes made to the loadbalancer 7 days ago, which matches your observation that it hasn't worked since then.
> How about after the cluster is created?
Create it, make it accessible from the K3s container, e.g. by attaching it to the docker network, then use it in your pod definitions (and define `imagePullSecrets` if you're using a password there).
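A rough sketch of those manual steps, assuming a plain-HTTP registry and k3d's default naming; the names `my-registry` and `my-app` and the network `k3d-k3s-default` are illustrative, not taken from this thread:

```bash
# Hedged sketch, not the official k3d flow for an existing cluster.

# 1. Create a registry managed by k3d (ends up as container "k3d-my-registry")
k3d registry create my-registry --port 5000

# 2. Make it reachable from the existing cluster's nodes by joining the
#    cluster's docker network (network name assumed from the cluster name)
docker network connect k3d-k3s-default k3d-my-registry

# 3. Push an image to it from the host ...
docker tag my-app:latest localhost:5000/my-app:latest
docker push localhost:5000/my-app:latest

# 4. ... and reference it from inside the cluster via the container name.
#    For a plain-HTTP registry, the K3s node may also need an entry in
#    /etc/rancher/k3s/registries.yaml (and a restart) so containerd accepts
#    it as an insecure registry.
kubectl set image deployment/my-app my-app=k3d-my-registry:5000/my-app:latest
```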
f
Yes. I modified the lb with the `k3d node edit --port-add` command to remove the 30040 port, but it didn't succeed.
w
Yeah I don't think you can remove a port from a port-range.
I'd recommend just setting up a new cluster with the correct ports and the built-in registry support
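For illustration only, a replacement cluster along those lines could look roughly like this; the cluster name, registry name and the much smaller NodePort range are all assumptions:

```bash
# Hedged example of a replacement cluster: a small port range mapped through
# the loadbalancer plus a k3d-managed registry. Names and range are illustrative.
k3d cluster create iot-edge \
  -p "30000-30100:30000-30100@loadbalancer" \
  --registry-create iot-edge-registry
```

`k3d kubeconfig get iot-edge` should then show the new API endpoint to use from the host.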
f
So is there a way to remove the exposed port?
w
No
How did you try that? k3d doesn't offer any command for it.
f
I think so. I will try to migrate from the existing cluster to a new one on the same machine.
I cannot remove any exposed port.
w
Do you run production workloads there?
f
Yes, it is an IoT edge device.
w
It's not recommended to use `k3d` for actual production workloads though, especially because of those limitations that docker brings with it.
f
Right now I can only use kubectl inside the container, which is a bit inefficient.
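A hedged workaround sketch for exactly that situation, and for the migration mentioned above: kubectl can be driven through the server container while the external loadbalancer endpoint is broken. Container and namespace names below are assumptions.

```bash
# Hedged sketch: run kubectl via the K3s server container while the external
# loadbalancer endpoint is broken, and dump workloads so they can be re-applied
# on the replacement cluster. Names are illustrative.

# kubectl is bundled in the k3s binary inside the server container
docker exec k3d-k3s-default-server-0 kubectl get pods -A

# Export the manifests of a namespace for the migration
docker exec k3d-k3s-default-server-0 \
  kubectl get deploy,svc,cm -n my-app -o yaml > my-app-backup.yaml

# After creating the new cluster, re-apply them from the host
kubectl apply -f my-app-backup.yaml
```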
w
Also, docker + k3d/k3s sounds like an unnecessary overhead for an IOT edge device 🤔
f
Actually it is a Windows 10 OS.
I want to use k3d to create k3s so that I can operate it the same way as k8s in the cloud.
w
Fair enough, just be aware of the limitations. k3d is a development/testing tool mainly and cannot overcome limitations imposed by running within Docker.
f
Thanks for your suggestion. If I encounter the issue again after I migrate to the new cluster, I will recommend they use k3s on a Linux OS with k3d.
w
If they're on Linux, please recommend K3s without k3d for production.
f
Got it, I will.
Thank you very much. I have to go migrate the broken cluster :-) See you.