creamy-pencil-82913
03/10/2023, 8:05 PM
creamy-pencil-82913
03/10/2023, 8:05 PM
INSTALL_K3S_VERSION=v1.22.17+k3s1
not INSTALL_K3S_VERSION=v1.22.17
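For reference, the full pinned install that version string would be used with looks roughly like this (a generic sketch of the install-script usage, not a command taken from this thread):

# Pin the complete k3s release string, including the +k3s1 suffix
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.22.17+k3s1 sh -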
bright-fireman-42144
03/10/2023, 8:06 PM
creamy-pencil-82913
03/10/2023, 8:06 PM
bright-fireman-42144
03/10/2023, 8:07 PM
bright-fireman-42144
03/10/2023, 8:09 PM
creamy-pencil-82913
03/10/2023, 8:13 PM
hundreds-evening-84071
03/10/2023, 9:45 PM
curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=stable sh -
or should it be done in steps? 1.22 to 1.23 to 1.24 and so on?
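For illustration, a stepwise upgrade via the install script would look roughly like this (the version strings are placeholders for this sketch, and each step would be followed by waiting for the node to come back Ready):

# Upgrade one Kubernetes minor at a time by pinning each release
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.23.17+k3s1 sh -
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.24.10+k3s1 sh -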
hundreds-evening-84071
03/10/2023, 9:45 PM
creamy-pencil-82913
03/10/2023, 9:57 PM
handsome-salesclerk-54324
03/11/2023, 7:56 PM
W0311 11:54:32.405026 2510 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Anyone know why?
acceptable-leather-15942
03/11/2023, 10:35 PM
Topology Aware Hints
works on k3s? I can’t seem to get this working. All my nodes have a different topology.kubernetes.io/zone label. Adding the annotation service.kubernetes.io/topology-aware-hints: auto to my service should better route the traffic, but it doesn’t seem to have an effect.
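For reference, a generic version of that setup looks like the sketch below (node, service, and zone names are placeholders, not taken from this cluster):

# Each node carries a zone label, and the Service opts in to topology-aware hints
kubectl label node my-node-1 topology.kubernetes.io/zone=zone-a
kubectl annotate service my-service service.kubernetes.io/topology-aware-hints=auto

Note that the control plane only populates hints when the endpoints are spread roughly in proportion to each zone's capacity, so the annotation alone does not always change routing.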
loud-apartment-45889
03/12/2023, 6:53 AM
broad-farmer-70498
03/13/2023, 6:55 PM
white-garden-41931
03/14/2023, 12:29 AM
2023/03/03 00:09:28 Starting NATS Server Reloader v0.7.4
Error: too many open files
Stream closed EOF for testkube/testkube-nats-0 (reloader)
which prevents one of my pods (testkube/testkube-nats-0) from starting.
and I'm not sure if that is specific to k3s or upstream Kubernetes.
Is this a known issue, or should I investigate more deeply?
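One generic thing worth ruling out (an assumption on my part, not something confirmed in this thread) is the node's inotify limits, since low defaults can show up as "too many open files" for watch-heavy pods:

# Check the current inotify limits on the k3s node and raise them temporarily to test
sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches
sudo sysctl -w fs.inotify.max_user_instances=512
sudo sysctl -w fs.inotify.max_user_watches=524288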
straight-midnight-66298
03/14/2023, 11:42 AM
limited-needle-7506
03/14/2023, 9:33 PM
/etc/rancher/k3s/.
But say the dev machine running k3s is shared by users with different credentials, each with access to the registries.yaml file and the /etc/rancher/k3s/ directory.
How would I prevent user x from accessing or viewing user y's credentials stored in registries.yaml, while at the same time using user x's credentials to pull and push to the private registry?
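For context, registries.yaml is a single node-level file read by the k3s service itself, roughly along these lines (the registry host and credentials below are placeholders for the sketch):

# /etc/rancher/k3s/registries.yaml is consumed by k3s as root, so it is per-node rather than per-user
cat <<'EOF' | sudo tee /etc/rancher/k3s/registries.yaml
configs:
  "registry.example.com":
    auth:
      username: example-user
      password: example-pass
EOF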
Thanks in advance.
adamant-pencil-35455
03/15/2023, 12:32 PM
adamant-pencil-35455
03/15/2023, 12:33 PM
important-kitchen-32874
03/15/2023, 1:45 PM
important-kitchen-32874
03/15/2023, 1:51 PM
important-kitchen-32874
03/15/2023, 1:57 PM
delightful-author-23241
03/15/2023, 11:04 PM
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox image "rancher/mirrored-pause:3.6": failed to pull image "rancher/mirrored-pause:3.6": failed to pull and unpack image "docker.io/rancher/mirrored-pause:3.6": failed to extract layer sha256:c640e628658788773e4478ae837822c9bc7db5b512442f54286a98ad50f88fd4: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount3367908043: signal: segmentation fault: : unknown
A segmentation fault always seems like something is going quite wrong, and I couldn't find anything related to this when googling (additionally, I have no idea what I'm doing when it comes to k8s), so I thought maybe you people could give me some guidance here. Or is this more of an issue with containerd itself?
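One generic way to narrow it down (a diagnostic sketch, not a confirmed fix) is to pull the image directly through k3s's bundled containerd client and see whether the extraction segfaults outside the kubelet path too:

# If this also fails, the problem is in containerd/layer extraction rather than Kubernetes itself
sudo k3s ctr images pull docker.io/rancher/mirrored-pause:3.6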
breezy-autumn-81048
03/16/2023, 11:44 AM
Trace[1068908304]: [30.003276269s] [30.003276269s] END
E0314 15:02:02.236947 1 reflector.go:140] k8s.io/client-go@v0.26.0/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://10.43.0.1:443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-webhook-ca&resourceVersion=360915": dial tcp 10.43.0.1:443: i/o timeout
W0314 15:03:28.953687 1 reflector.go:424] k8s.io/client-go@v0.26.0/tools/cache/reflector.go:169: failed to list *v1.Secret: Get "https://10.43.0.1:443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-webhook-ca&resourceVersion=360915": dial tcp 10.43.0.1:443: i/o timeout
I0314 15:03:28.953816 1 trace.go:219] Trace[516939538]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.26.0/tools/cache/reflector.go:169 (14-Mar-2023 15:02:58.949) (total time: 30004ms):
Trace[516939538]: ---"Objects listed" error:Get "https://10.43.0.1:443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-webhook-ca&resourceVersion=360915": dial tcp 10.43.0.1:443: i/o timeout 30004ms (15:03:28.953)
Trace[516939538]: [30.004226263s] [30.004226263s] END
E0314 15:03:28.953837 1 reflector.go:140] k8s.io/client-go@v0.26.0/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://10.43.0.1:443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-webhook-ca&resourceVersion=360915": dial tcp 10.43.0.1:443: i/o timeout
W0314 15:04:44.919380 1 reflector.go:424] k8s.io/client-go@v0.26.0/tools/cache/reflector.go:169: failed to list *v1.Secret: Get "https://10.43.0.1:443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-webhook-ca&resourceVersion=360915": dial tcp 10.43.0.1:443: i/o timeout
I0314 15:04:44.919458 1 trace.go:219] Trace[430405071]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.26.0/tools/cache/reflector.go:169 (14-Mar-2023 15:04:14.918) (total time: 30000ms):
Trace[430405071]: ---"Objects listed" error:Get "https://10.43.0.1:443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-webhook-ca&resourceVersion=360915": dial tcp 10.43.0.1:443: i/o timeout 30000ms (15:04:44.919)
Trace[430405071]: [30.000964846s] [30.000964846s] END
E0314 15:04:44.919472 1 reflector.go:140] k8s.io/client-go@v0.26.0/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://10.43.0.1:443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-webhook-ca&resourceVersion=360915": dial tcp 10.43.0.1:443: i/o timeout
Can someone explain what's wrong? It seems that I can't fully install a helm chart because of this issue. (I noticed this issue when I was trying to install the helm chart of actions-runner-controller, and the error I got was: Error: Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": context deadline exceeded)
As well, here are some events from the actions-runner-controller pod:
Warning FailedMount 17m kubelet Unable to attach or mount volumes: unmounted volumes=[cert], unattached volumes=[kube-api-access-v48zj secret tmp cert]: timed out waiting for the condition
Warning FailedMount 8m32s kubelet Unable to attach or mount volumes: unmounted volumes=[cert], unattached volumes=[tmp cert kube-api-access-v48zj secret]: timed out waiting for the condition
Warning FailedMount 6m18s (x5 over 19m) kubelet Unable to attach or mount volumes: unmounted volumes=[cert], unattached volumes=[secret tmp cert kube-api-access-v48zj]: timed out waiting for the condition
Warning FailedMount 103s (x2 over 4m1s) kubelet Unable to attach or mount volumes: unmounted volumes=[cert], unattached volumes=[cert kube-api-access-v48zj secret tmp]: timed out waiting for the condition
Warning FailedMount 86s (x18 over 21m) kubelet MountVolume.SetUp failed for volume "cert" : secret "actions-runner-controller-serving-cert" not found
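Since every failure above is a timeout reaching 10.43.0.1:443, a quick generic probe (pod name and image are placeholders, and this is only a connectivity check, not a diagnosis) is to see whether any pod in that namespace can reach the in-cluster API service at all:

# A timeout here would point at pod-to-service networking rather than cert-manager itself
kubectl -n cert-manager run apicheck --rm -it --restart=Never --image=curlimages/curl -- curl -sk -m 5 https://10.43.0.1/version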
Thanks in advance,
wonderful-crayon-55427
03/16/2023, 2:19 PM
root# touch /etc/pki/file
touch: cannot touch '/etc/pki/file': No such file or directory
Which is odd, as I'm logged in as root and do not have any special mounts on this directory, or any PV/PVC claiming it.
Any ideas on where to look?
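A few generic checks from inside the container might narrow it down (just a diagnostic sketch, assuming the image has a shell):

# "No such file or directory" from touch as root usually means the parent directory is missing or a dangling symlink
ls -ld /etc/pki
readlink -f /etc/pki
grep ' /etc' /proc/mounts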
I have considered doing something along the lines of:
$ kubectl -n <namespace> get pod <pod> -o yaml > config.yml
# (create a new kustomization.yml which patches the current pod with an emptyDir: {} volume and a securityContext)
$ kustomize build . | kubectl -n <namespace> apply -f -
But that seems egregious and unnecessary.
flat-whale-67864
03/16/2023, 7:56 PM
echoing-tomato-53055
03/17/2023, 5:30 PM
level=info msg="[Applyinator] No image provided, creating empty working directory /var/lib/rancher/agent/work/
bored-horse-3670
03/20/2023, 2:28 PM
careful-honey-96496
03/20/2023, 7:38 PM
docker run --runtime=sysbox-runc -it --rm -P --hostname=syscont nestybox/ubuntu-bionic-systemd-docker:latest
• From within the system container I ran docker run --name k3s-server-1 --hostname k3s-server-1 -p 6443:6443 -d rancher/k3s:v1.24.10-k3s1 server
I can see the k3s-server container is running, but all pods are Pending and it doesn’t show the server node. In the logs of the k3s container I see this message/error:
Waiting to retrieve agent configuration; server is not ready: \"overlayfs\" snapshotter cannot be enabled for \"/var/lib/rancher/k3s/agent/containerd\", try using \"fuse-overlayfs\" or \"native\": failed to mount overlay: operation not permitted
Anyone who has experience in setting up k3s in Sysbox?
I haven’t been able to find where I can adjust the snapshotter, to native for example.
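For what it's worth, the snapshotter is selectable via a k3s flag, so a variation of the inner docker run along these lines should switch it to native (a sketch based on the command above, not verified under Sysbox):

# Same inner container as before, but telling k3s to use the native snapshotter instead of overlayfs
docker run --name k3s-server-1 --hostname k3s-server-1 -p 6443:6443 -d rancher/k3s:v1.24.10-k3s1 server --snapshotter native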
brash-controller-15153
03/21/2023, 5:04 PM
k3s cluster with a k3s server with a public IP and k3s agents in a private network behind NAT (like a home net, for example)?
• I opened the UDP ports 8472, 51820, 51821 and the TCP ports 6443, 10250 in my router to allow connections to the private IPs where the agents are located.
• I also started the agents with the dynamic IP address given by my ISP, and the server with the public IP address.
But somehow the traefik ingress controller or the Ingress is not able to forward the incoming requests from the public URL staging.company.org to the agents in my private net.
I also created other agents with public IPs and they are able to serve a whoami application through staging.company.org, but when the load balancer selects the pods running inside the nodes on the private net, it just hangs and no answer comes from the pods.
v1.24.10+k3s1
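For completeness, the agents in this kind of setup would typically be joined against the server's public address roughly like this (placeholders, not the exact commands used here):

# On each agent behind NAT, point K3S_URL at the server's public IP or DNS name
curl -sfL https://get.k3s.io | K3S_URL=https://SERVER_PUBLIC_IP:6443 K3S_TOKEN=<token> sh -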
creamy-pencil-82913
03/21/2023, 5:33 PM
brash-controller-15153
03/21/2023, 6:24 PM
iptables v1.8.7 (nf_tables)
so it should be ok.
plain-byte-79620
03/22/2023, 11:11 AM
brash-controller-15153
03/22/2023, 11:18 AM
plain-byte-79620
03/22/2023, 11:20 AM