# k3d

adamant-kite-43734

04/21/2023, 3:36 PM
This message was deleted.

rough-farmer-49135

04/21/2023, 3:55 PM
Personally, the only times I've seen references to a Kubernetes connection on 8080 have been when the kubeconfig is missing. Might be other possible reasons, though?

silly-fish-63272

04/21/2023, 4:07 PM
What is really bothersome is that deleting everything (and of course rebooting) and then re-installing using the same installation procedure still results in the error. It leads me to think there is something still stuck somewhere on the drive that is borking it up. Just today we spun up another clean Linux box/VM, ran the install procedures, and it all came up perfectly. It just "feels" like there is something still on the drives somewhere that has a configuration value of "stopWorkingAfterRedeployments: True".

creamy-pencil-82913

04/21/2023, 4:11 PM
As Bill said, if you see something using http://localhost:8080, it almost always means that you forgot to use a kubeconfig file, and it’s falling back to the default server address. K3s will always use https and port 6443.
You didn’t mention where you are seeing that error, either
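The fallback being described can be sketched as a small check. This is a minimal sketch: the lookup order ($KUBECONFIG first, then ~/.kube/config) is kubectl's documented precedence, but the helper function name is ours, not anything kubectl or k3d ships.

```shell
# Sketch of kubectl's kubeconfig lookup: an explicit KUBECONFIG env var wins,
# then the per-user default file; with neither, client tools fall back to
# the insecure default address, which is where "localhost:8080" comes from.
kubeconfig_in_use() {
    if [ -n "${KUBECONFIG:-}" ]; then
        echo "$KUBECONFIG"
    elif [ -f "$HOME/.kube/config" ]; then
        echo "$HOME/.kube/config"
    else
        echo "none - kubectl falls back to http://localhost:8080"
    fi
}
kubeconfig_in_use
```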

silly-fish-63272

04/21/2023, 5:07 PM
Oh, right, the error is shown when we do kubectl get events.

creamy-pencil-82913

04/21/2023, 5:08 PM
is there an event that says that, or are you just forgetting to point kubectl at the correct kubeconfig to talk to your cluster?

silly-fish-63272

04/21/2023, 5:10 PM
There are several events that show up in the events log (besides our application just totally horking up). I'll have to double-check our install/start-up procedure re the kubeconfig file.

creamy-pencil-82913

04/21/2023, 5:10 PM
can you post the complete output from the command when you see that?

silly-fish-63272

04/21/2023, 5:14 PM
If you mean the get Events command:
Last login: Fri Apr 21 12:31:31 2023 from 172.16.15.11
itng@tmc1:~$ kubectl get events
E0421 13:05:13.670109    5769 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0421 13:05:13.670910    5769 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0421 13:05:14.157934    5769 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0421 13:05:14.159035    5769 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0421 13:05:14.160043    5769 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I think you may be on to something re the config file, I see on one of the working boxes that there IS a config file under the user's .kube folder, but NOT on one of the machines that has been toasted.

creamy-pencil-82913

04/21/2023, 5:18 PM
Did you forget to set the KUBECONFIG environment variable, or did the file that it points at get deleted somehow? This just looks like user error, there’s probably nothing wrong with your cluster at all.

silly-fish-63272

04/21/2023, 5:58 PM
We're using the basic k3d installation which did not call out setting KUBECONFIG. Also, doing "env | grep KUBE" on one of the working Linux boxes does not show that variable being set.

creamy-pencil-82913

04/21/2023, 6:08 PM
you can run kubectl config view -v=6 to see the current kubeconfig and where it’s coming from. Maybe compare that between your nodes?
By default it will look at ~/.kube/config
Are you maybe logging in as a different user than the one that ran k3d to create the cluster, and don’t have the config file dropped by k3d?
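The comparison being suggested here can be sketched as a one-shot report to run on both a working and a broken box. A hedged sketch: the report function is purely illustrative, and it only inspects the two locations named in this thread, not kubectl's full precedence chain.

```shell
# Print the kubeconfig state worth comparing between a working box
# and a broken one: the env var override and the per-user default file.
kube_report() {
    printf 'KUBECONFIG env var: %s\n' "${KUBECONFIG:-(unset)}"
    if [ -f "$HOME/.kube/config" ]; then
        printf 'default ~/.kube/config: present\n'
    else
        printf 'default ~/.kube/config: missing\n'
    fi
}
kube_report
```

If the broken box prints "(unset)" and "missing" while the working box shows a present file, that matches the symptom in this thread.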

silly-fish-63272

04/21/2023, 6:43 PM
Brilliant, I think you are right, I just got one of the toasted Linux clusters to come back to life: I deleted K3D and Docker, but it appears that neither the user's ".kube" nor ".k3d" directories get deleted, so I deleted them by hand and then re-installed K3D and Docker. Seems to be working now, will post more info if I can get the exact cause. Thanks for your help and insight!
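The manual cleanup described in this message can be sketched as below. Paths are taken from the thread itself; review before running, since this removes per-user state for all k3d clusters and any other kubeconfig entries the user has.

```shell
# Leftover per-user state that uninstalling k3d/Docker does not remove;
# deleting it by hand is what brought the broken boxes back to life.
cleanup_k3d_user_state() {
    rm -rf "$HOME/.kube" "$HOME/.k3d"
}
# cleanup_k3d_user_state   # then re-install Docker and k3d, recreate the cluster
```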