# k3d
```
~                                                          14:17:19
❯ k3d cluster create mycluster
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-mycluster'
INFO[0000] Created image volume k3d-mycluster-images
INFO[0000] Starting new tools node...
INFO[0000] Starting Node 'k3d-mycluster-tools'
INFO[0001] Creating node 'k3d-mycluster-server-0'
INFO[0001] Creating LoadBalancer 'k3d-mycluster-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
INFO[0001] HostIP: using network gateway 172.21.0.1 address
INFO[0001] Starting cluster 'mycluster'
INFO[0001] Starting servers...
INFO[0001] Starting Node 'k3d-mycluster-server-0'
INFO[0006] All agents already running.
INFO[0006] Starting helpers...
INFO[0006] Starting Node 'k3d-mycluster-serverlb'
INFO[0012] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap...
INFO[0014] Cluster 'mycluster' created successfully!
INFO[0014] You can now use it like this:
kubectl cluster-info

~ 15s                                                      14:17:39
❯ kubectl cluster-info
E0227 14:17:51.700887 1004432 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0227 14:17:51.705815 1004432 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0227 14:17:51.708650 1004432 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0227 14:17:51.709917 1004432 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
Kubernetes control plane is running at https://0.0.0.0:39541
CoreDNS is running at https://0.0.0.0:39541/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:39541/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

~                                                          14:18:07
❯ kubectl get pods -n kube-system
E0227 14:18:22.176214 1005153 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0227 14:18:22.197499 1005153 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0227 14:18:22.198935 1005153 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0227 14:18:22.200326 1005153 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
NAME                                      READY   STATUS    RESTARTS   AGE
helm-install-traefik-p4kzq                0/1     Pending   0          38s
helm-install-traefik-crd-m5jgv            0/1     Pending   0          38s
local-path-provisioner-79f67d76f8-lrbh2   0/1     Pending   0          38s
coredns-597584b69b-thvd9                  0/1     Pending   0          38s
metrics-server-5f9f776df5-vf52f           0/1     Pending   0          38s

~                                                          14:47:23
❯ kubectl describe node
E0227 14:47:40.154262 1042003 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0227 14:47:40.163356 1042003 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0227 14:47:40.165430 1042003 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0227 14:47:40.167727 1042003 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
No resources found in default namespace.
```
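When every kube-system pod sits in `Pending` like this, the usual next step is to ask the cluster why: check whether the node ever registered (`kubectl get nodes`), look at recent events, and describe one of the stuck pods. On a live cluster that would be something like `kubectl get events -n kube-system --sort-by=.lastTimestamp` and `kubectl describe pod -n kube-system <pod-name>`. As a quick sanity filter, the Pending pods can also be picked out of the listing above with a one-liner (the sample input below is a shortened copy of the output shown; on a real cluster you would pipe `kubectl get pods -n kube-system` instead):

```shell
# Sample of the `kubectl get pods -n kube-system` output from above.
kubectl_output='NAME READY STATUS RESTARTS AGE
helm-install-traefik-p4kzq 0/1 Pending 0 38s
coredns-597584b69b-thvd9 0/1 Pending 0 38s'

# Print the names of pods whose STATUS column (field 3) is "Pending".
printf '%s\n' "$kubectl_output" | awk '$3 == "Pending" { print $1 }'
```

Note also that `kubectl describe node` returning nothing means no node object was ever registered, which points at the node's container runtime rather than at the workloads themselves.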
I'm not sure what to look for in the logs, but this might be relevant:
```
Feb 27 14:40:03 sylvain-thinkpad kernel: [43848.161346] overlayfs: upper fs does not support RENAME_WHITEOUT.
Feb 27 14:40:03 sylvain-thinkpad kernel: [43848.161365] overlayfs: upper fs missing required features.
```
(happens a few times when launching k3d)
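These kernel messages can be spotted quickly by grepping the kernel ring buffer. The snippet below runs against a sample line copied from the messages above; on a live system you would pipe `dmesg` or `journalctl -k` into the same grep instead:

```shell
# Sample kernel log line (copied from the messages above).
sample='overlayfs: upper fs does not support RENAME_WHITEOUT.'

# Count lines mentioning the overlayfs incompatibility.
# On a real host: dmesg | grep -c 'RENAME_WHITEOUT'
printf '%s\n' "$sample" | grep -c 'RENAME_WHITEOUT'
```

`RENAME_WHITEOUT` missing on the upper filesystem is a classic sign that overlayfs is being asked to sit on top of a filesystem (such as ZFS) that doesn't support it.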
Well, I found this line in the Docker logs:
```
time="2023-02-27T14:07:43Z" level=info msg="Waiting to retrieve agent configuration; server is not ready: \"overlayfs\" snapshotter cannot be enabled for \"/var/lib/rancher/k3s/agent/containerd\", try using \"fuse-overlayfs\" or \"native\": failed to mount overlay: invalid argument"
```
so I'm going to try the workaround for ZFS found in the FAQ and report back
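For reference, the error message itself suggests a separate escape hatch besides the FAQ's ZFS workaround: switching k3s off the overlayfs snapshotter. This is a sketch, not necessarily the FAQ's fix; it assumes the cluster name `mycluster` used above. k3s has a `--snapshotter` agent flag, and k3d v5 forwards k3s flags with `--k3s-arg '<flag>@<nodefilter>'`:

```shell
# Not the FAQ's exact workaround, just what the error message hints at:
# recreate the cluster with k3s using the "native" snapshotter instead
# of overlayfs, which cannot mount on top of ZFS.
k3d cluster delete mycluster
k3d cluster create mycluster --k3s-arg '--snapshotter=native@server:*'
```

The `@server:*` node filter applies the flag to all server nodes; `fuse-overlayfs` is the other value the log message proposes.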
OK, I solved my problem: it was the ZFS issue. Even though I created a single (local) server setup and didn't see the exact errors described in the FAQ, the workaround from https://k3d.io/v5.2.2/faq/faq/#issues-with-zfs fixed it.