k3s
  • c

    cuddly-egg-57762

    10/14/2022, 10:45 AM
    Hello good people. I'm using k3s v1.23.6+k3s1 with cilium:v1.12.2 on top of Rocky Linux 8.6, in a 3-node cluster (Proxmox VMs). Today I saw some very strange behavior: on the first node of the cluster, the k3s service restarted. In the service logs I see only this error:
    ott 14 10:24:13 node1 k3s[23642]: E1014 10:24:13.249106   23642 leaderelection.go:330] error retrieving resource lock kube-system/cloud-controller-manager: Get "https://127.0.0.1:6444/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cloud-controller-manager?timeou…
    ott 14 10:24:13 node1 k3s[23642]: I1014 10:24:13.249252   23642 leaderelection.go:283] failed to renew lease kube-system/cloud-controller-manager: timed out waiting for the condition
    ott 14 10:24:13 node1 k3s[23642]: F1014 10:24:13.249332   23642 controllermanager.go:237] leaderelection lost
    ott 14 10:24:13 node1 k3s[23642]: E1014 10:24:13.249629   23642 runtime.go:76] Observed a panic: F1014 10:24:13.249332   23642 controllermanager.go:237] leaderelection lost
    ott 14 10:24:13 node1 k3s[23642]: goroutine 22078 [running]:
    ott 14 10:24:13 node1 k3s[23642]: k8s.io/apimachinery/pkg/util/runtime.logPanic({0x3f70c00, 0xc0230f6340})
    ott 14 10:24:13 node1 k3s[23642]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apimachinery@v1.23.6-k3s1/pkg/util/runtime/runtime.go:74 +0x7d
    ott 14 10:24:13 node1 k3s[23642]: k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x1})
    ott 14 10:24:13 node1 k3s[23642]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apimachinery@v1.23.6-k3s1/pkg/util/runtime/runtime.go:48 +0x75
    ott 14 10:24:13 node1 k3s[23642]: panic({0x3f70c00, 0xc0230f6340})
    ott 14 10:24:13 node1 k3s[23642]:         /usr/local/go/src/runtime/panic.go:1038 +0x215
    ott 14 10:24:13 node1 k3s[23642]: k8s.io/klog/v2.(*loggingT).output(0x7c06200, 0x3, 0x0, 0xc02be58700, 0x0, {0x6018774, 0x0}, 0xc00eabd1c0, 0x0)
    ott 14 10:24:13 node1 k3s[23642]:         /go/pkg/mod/github.com/k3s-io/klog/v2@v2.30.0-k3s1/klog.go:982 +0x625
    ott 14 10:24:13 node1 k3s[23642]: k8s.io/klog/v2.(*loggingT).printf(0x0, 0x0, 0x0, {0x0, 0x0}, {0x4811943, 0x13}, {0x0, 0x0, 0x0})
    ott 14 10:24:13 node1 k3s[23642]:         /go/pkg/mod/github.com/k3s-io/klog/v2@v2.30.0-k3s1/klog.go:753 +0x1c5
    ott 14 10:24:13 node1 k3s[23642]: k8s.io/klog/v2.Fatalf(...)
    ott 14 10:24:13 node1 k3s[23642]:         /go/pkg/mod/github.com/k3s-io/klog/v2@v2.30.0-k3s1/klog.go:1513
    ott 14 10:24:13 node1 k3s[23642]: k8s.io/cloud-provider/app.Run.func3()
    ott 14 10:24:13 node1 k3s[23642]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/cloud-provider@v1.23.6-k3s1/app/controllermanager.go:237 +0x55
    ott 14 10:24:13 node1 k3s[23642]: k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run.func1()
    ott 14 10:24:13 node1 k3s[23642]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/client-go@v1.23.6-k3s1/tools/leaderelection/leaderelection.go:203 +0x1f
    ott 14 10:24:13 node1 k3s[23642]: k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run(0xc00991f440, {0x51d17b0, 0xc00012a008})
    ott 14 10:24:13 node1 k3s[23642]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/client-go@v1.23.6-k3s1/tools/leaderelection/leaderelection.go:213 +0x189
    ott 14 10:24:13 node1 k3s[23642]: k8s.io/client-go/tools/leaderelection.RunOrDie({0x51d17b0, 0xc00012a008}, {{0x5223e48, 0xc00eba1cc0}, 0x37e11d600, 0x2540be400, 0x77359400, {0xc0076699e0, 0x4bacd10, 0x0}, ...})
    ott 14 10:24:13 node1 k3s[23642]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/client-go@v1.23.6-k3s1/tools/leaderelection/leaderelection.go:226 +0x94
    ott 14 10:24:13 node1 k3s[23642]: k8s.io/cloud-provider/app.leaderElectAndRun(0xc0072e29a8, {0xc00c4fd410, 0x2a}, 0xc007319308, {0x47dabc2, 0x6}, {0x4829eef, 0x18}, {0xc0076699e0, 0x4bacd10, ...})
    ott 14 10:24:13 node1 k3s[23642]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/cloud-provider@v1.23.6-k3s1/app/controllermanager.go:516 +0x2c5
    ott 14 10:24:13 node1 k3s[23642]: created by k8s.io/cloud-provider/app.Run
    ott 14 10:24:13 node1 k3s[23642]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/cloud-provider@v1.23.6-k3s1/app/controllermanager.go:220 +0x7a5
    ott 14 10:24:13 node1 k3s[23642]: panic: unreachable
    ott 14 10:24:13 node1 k3s[23642]: goroutine 22078 [running]:
    ott 14 10:24:13 node1 k3s[23642]: k8s.io/cloud-provider/app.leaderElectAndRun(0xc0072e29a8, {0xc00c4fd410, 0x2a}, 0xc007319308, {0x47dabc2, 0x6}, {0x4829eef, 0x18}, {0xc0076699e0, 0x4bacd10, ...})
    ott 14 10:24:13 node1 k3s[23642]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/cloud-provider@v1.23.6-k3s1/app/controllermanager.go:526 +0x2d8
    ott 14 10:24:13 node1 k3s[23642]: created by k8s.io/cloud-provider/app.Run
    ott 14 10:24:13 node1 k3s[23642]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/cloud-provider@v1.23.6-k3s1/app/controllermanager.go:220 +0x7a5
    ott 14 10:24:17 node1 systemd[1]: k3s.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
    Out of curiosity I also checked the kernel messages and saw this strange stack trace related to eBPF:
    [ 6381.435296] WARNING: CPU: 3 PID: 66903 at mm/vmalloc.c:330 vmalloc_to_page+0x219/0x220
    [ 6381.436034] Modules linked in: tcp_diag inet_diag fuse sunrpc cls_bpf sch_ingress xt_TPROXY nf_tproxy_ipv6 nf_tproxy_ipv4 xt_nat xt_set xt_CT veth ip_set_hash_ip xt_socket nf_socket_ipv4 nf_socket_ipv6 ip6table_filter ip6table_raw ip6table_mangle ip6_tables iptable_filter iptable_raw iptable_mangle ip6t_MASQUERADE ipt_MASQUERADE xt_conntrack xt_comment nft_counter xt_mark nft_compat iptable_nat ip_tables dm_thin_pool dm_persistent_data dm_bio_prison dm_snapshot dm_bufio br_netfilter bridge stp llc overlay nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip_set nf_tables nfnetlink bochs_drm drm_vram_helper drm_ttm_helper ttm drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops drm joydev pcspkr virtio_balloon i2c_piix4 xfs libcrc32c sr_mod cdrom sd_mod t10_pi sg ata_generic ata_piix libata virtio_net serio_raw net_failover virtio_console failover virtio_scsi dm_mirror dm_region_hash dm_log dm_mod
    [ 6381.436234]  [last unloaded: nft_fib]
    [ 6381.442676] Red Hat flags: eBPF/cls eBPF/cgroup
    [ 6381.443442] CPU: 3 PID: 66903 Comm: tc Kdump: loaded Not tainted 4.18.0-372.9.1.el8.x86_64 #1
    [ 6381.444220] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
    [ 6381.445856] RIP: 0010:vmalloc_to_page+0x219/0x220
    [ 6381.446707] Code: 50 ec 00 c3 48 81 e7 00 00 00 c0 e9 1a ff ff ff 48 8b 3d 32 bf ed 00 48 81 e7 00 f0 ff ff 48 89 fa eb 8d 0f 0b e9 15 fe ff ff <0f> 0b 31 c0 c3 66 90 0f 1f 44 00 00 e8 d6 fd ff ff 48 2b 05 ff 4f
    [ 6381.448372] RSP: 0018:ffffa3a70de3bca0 EFLAGS: 00010293
    [ 6381.449215] RAX: 0000000000000063 RBX: fffff440c41877c0 RCX: 0000000000000000
    [ 6381.450091] RDX: 0000000000000000 RSI: ffffffffc0c008bb RDI: 0000000000000000
    [ 6381.450970] RBP: ffffffffc0bff8bb R08: 0000000000000000 R09: 0000000000000001
    [ 6381.451854] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000001
    [ 6381.452774] R13: ffffa3a70de3bcef R14: ffffffffc0c008bb R15: 0000000000000001
    [ 6381.453647] FS:  00007f61cc987740(0000) GS:ffff96fe73d80000(0000) knlGS:0000000000000000
    [ 6381.454571] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    [ 6381.455437] CR2: 000055c5f2725df0 CR3: 0000000106106000 CR4: 00000000000006e0
    [ 6381.456367] Call Trace:
    [ 6381.457321]  __text_poke+0x203/0x260
    [ 6381.458295]  text_poke_bp_batch+0x67/0x160
    [ 6381.459135]  ? bpf_prog_c13c61c429dc020a_handle_policy+0x25c3/0x2d08
    [ 6381.460008]  text_poke_bp+0x3a/0x54
    [ 6381.460830]  ? bpf_prog_c13c61c429dc020a_handle_policy+0x25c3/0x2d08
    [ 6381.461658]  __bpf_arch_text_poke+0x19e/0x1b0
    [ 6381.462551]  prog_array_map_poke_run+0xe2/0x1d0
    [ 6381.463447]  bpf_fd_array_map_update_elem+0x8b/0xe0
    [ 6381.464299]  map_update_elem+0x1cf/0x1f0
    [ 6381.465148]  __do_sys_bpf+0x92b/0x1da0
    [ 6381.465987]  ? audit_log_exit+0x2b2/0xd60
    [ 6381.466788]  do_syscall_64+0x5b/0x1a0
    [ 6381.467656]  entry_SYSCALL_64_after_hwframe+0x65/0xca
    [ 6381.468496] RIP: 0033:0x7f61ccac073d
    [ 6381.469318] Code: 00 c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 23 37 0d 00 f7 d8 64 89 01 48
    [ 6381.471074] RSP: 002b:00007fff72060128 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
    [ 6381.471970] RAX: ffffffffffffffda RBX: 00007fff72062530 RCX: 00007f61ccac073d
    [ 6381.472845] RDX: 0000000000000080 RSI: 00007fff72060130 RDI: 0000000000000002
    [ 6381.473702] RBP: 0000564de994f1f0 R08: 00007fff7206020c R09: 0000000000000008
    [ 6381.474554] R10: 0000000000000000 R11: 0000000000000246 R12: 0000564de9901a80
    [ 6381.475530] R13: 0000564de994f700 R14: 00007fff72060260 R15: 0000564de9991938
    [ 6381.476413] ---[ end trace b293578addd05c82 ]---
    It's the first time in months that I've seen this problem: do you know why it's happening? Could it be caused by the kernel version (4.18.0-372.9.1.el8.x86_64)?
    a
    • 2
    • 5
  • e

    enough-carpet-20915

    10/15/2022, 9:55 AM
    How do I add
    InsecureSkipVerify = true
    to the built-in Traefik in k3s?
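    A possible approach (a sketch, not verified on this cluster): the Traefik bundled with k3s is deployed from a Helm chart, and k3s picks up chart overrides from HelmChartConfig manifests dropped into /var/lib/rancher/k3s/server/manifests/. Passing Traefik's --serversTransport.insecureSkipVerify=true static-config flag through additionalArguments makes it skip certificate verification toward HTTPS backends; the file name below is arbitrary.
    # /var/lib/rancher/k3s/server/manifests/traefik-config.yaml (assumed path)
    apiVersion: helm.cattle.io/v1
    kind: HelmChartConfig
    metadata:
      name: traefik
      namespace: kube-system
    spec:
      valuesContent: |-
        additionalArguments:
          - "--serversTransport.insecureSkipVerify=true"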
  • e

    enough-carpet-20915

    10/15/2022, 10:11 AM
    Ah, never mind, I figured out how to make the StackGres admin UI run over plain HTTP
  • e

    echoing-ability-7881

    10/15/2022, 11:47 AM
    Hi Rancher community, how are you? My question today is about CronJobs in the Rancher RKE GUI. Suppose I have an application deployed as a Rancher workload and I want a schedule for it that runs every minute, all day, and I also want some other command to run constantly. What should I do in RKE? Please help. #general #k3s #rancher-desktop
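    For the "run every minute" part, the usual Kubernetes object behind the Rancher workload UI is a CronJob; a minimal sketch (name, image and command are placeholders) that can be deployed through the RKE GUI or kubectl:
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: every-minute-task
    spec:
      schedule: "* * * * *"   # every minute, all day
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
                - name: task
                  image: busybox:1.35
                  command: ["sh", "-c", "echo run the scheduled work here"]
    A command that has to run constantly is usually better modeled as a regular Deployment (or an extra container in the existing workload) rather than a CronJob.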
  • f

    flat-waiter-75025

    10/15/2022, 12:30 PM
    Hi community, when provisioning K3s Vagrant E2E test machines, is it possible to avoid downloading K3s on each provisioning run (e.g. by providing the binary from the host machine)?
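    One way to do this (a sketch, assuming the binary and the install script are shared from the host, e.g. via Vagrant's default /vagrant synced folder): the official install script honours INSTALL_K3S_SKIP_DOWNLOAD, so the provisioner only has to put the binary in place first.
    # shell provisioner (paths are assumptions)
    cp /vagrant/k3s /usr/local/bin/k3s && chmod +x /usr/local/bin/k3s
    INSTALL_K3S_SKIP_DOWNLOAD=true sh /vagrant/install.sh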
  • m

    melodic-hamburger-23329

    10/17/2022, 1:00 AM
    Is there a manual for setting up Buildkit for k3s? I’d like to be able to build images inside containers running in k3s using nerdctl.
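    I'm not aware of a single official manual, but one common pattern (a sketch; the socket path is the k3s default, and the flags should be double-checked against your buildkit/nerdctl versions) is to run buildkitd against k3s's embedded containerd and point nerdctl at the same socket, so built images land in the k8s.io namespace that k3s pulls from:
    # on the node (or in a privileged pod with the socket mounted)
    buildkitd --oci-worker=false --containerd-worker=true \
      --containerd-worker-addr /run/k3s/containerd/containerd.sock &
    nerdctl --address /run/k3s/containerd/containerd.sock --namespace k8s.io \
      build -t registry.example.com/myapp:dev .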
  • m

    melodic-hamburger-23329

    10/17/2022, 5:43 AM
    I'm trying to enable the experimental tracing feature. However, I'm getting the following error when trying to create the configs:
    resource mapping not found for name: "" namespace: "" from "kube-tracing.yaml": no matches for kind "TracingConfiguration" in version "apiserver.config.k8s.io/v1alpha1"
    ensure CRDs are installed first
    resource mapping not found for name: "" namespace: "" from "kube-tracing.yaml": no matches for kind "KubeletConfiguration" in version "kubelet.config.k8s.io/v1beta1"
    ensure CRDs are installed first
    kube-tracing.yaml:
    apiVersion: apiserver.config.k8s.io/v1alpha1
    kind: TracingConfiguration
    endpoint: <...>:4317
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    featureGates:
      KubeletTracing: true
    tracing:
      endpoint: <...>:4317
    I enabled the feature gates (
    feature-gates=APIServerTracing=true,KubeletTracing=true
    ), but it seems the CRDs are missing. How do I get the CRDs..?
    c
    • 2
    • 1
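    A note on why this fails, plus a sketch: TracingConfiguration and KubeletConfiguration are component configuration files, not cluster API objects, so there are no CRDs to install and kubectl apply will always report "no matches for kind". They have to be handed to the components at startup instead: the API server via --tracing-config-file, the kubelet via its --config file. In k3s that can be wired up roughly like this (the file paths and splitting kube-tracing.yaml into two separate files are assumptions; k3s also generates its own kubelet settings, so the kubelet part may need adjusting):
    # /etc/rancher/k3s/config.yaml
    kube-apiserver-arg:
      - "feature-gates=APIServerTracing=true"
      - "tracing-config-file=/etc/rancher/k3s/apiserver-tracing.yaml"
    kubelet-arg:
      - "config=/etc/rancher/k3s/kubelet-config.yaml"
    where apiserver-tracing.yaml contains only the TracingConfiguration document and kubelet-config.yaml contains only the KubeletConfiguration document (featureGates and tracing included), followed by a restart of the k3s service.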
  • g

    great-monitor-14716

    10/17/2022, 7:55 AM
    Any idea why my cilium-agent on the k3s agents is trying to connect to the api-server on https://127.0.0.1:6443? The 3 k3s servers have all cilium pods running smoothly, but as soon as I join a k3s-agent it fails to start up cilium-agent there, with this in the logs:
    level=info msg="Initializing daemon" subsys=daemon
    level=info msg="Establishing connection to apiserver" host="<https://127.0.0.1:6443>" subsys=k8s
    level=info msg="Establishing connection to apiserver" host="<https://127.0.0.1:6443>" subsys=k8s
    level=info msg="Establishing connection to apiserver" host="<https://127.0.0.1:6443>" subsys=k8s
    level=info msg="Establishing connection to apiserver" host="<https://127.0.0.1:6443>" subsys=k8s
    I've also tried using cilium CLI and set
    cilium config set k8sServiceHost 192.168.250.11
    cilium config set k8sServicePort 6443
    but it keeps trying to connect to 127.0.0.1:6443. I disabled kube-proxy, network policy, servicelb and traefik during the k3s install.
    kubectl get nodes -A -o wide
    NAME          STATUS   ROLES                       AGE   VERSION        INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
    k3s-agent1    Ready    <none>                      8h    v1.24.6+k3s1   192.168.250.14   <none>        Ubuntu 20.04.5 LTS   5.4.0-128-generic   containerd://1.6.8-k3s1
    k3s-server1   Ready    control-plane,etcd,master   9h    v1.24.6+k3s1   192.168.250.11   <none>        Ubuntu 20.04.5 LTS   5.4.0-128-generic   containerd://1.6.8-k3s1
    k3s-server2   Ready    control-plane,etcd,master   9h    v1.24.6+k3s1   192.168.250.12   <none>        Ubuntu 20.04.5 LTS   5.4.0-128-generic   containerd://1.6.8-k3s1
    k3s-server3   Ready    control-plane,etcd,master   8h    v1.24.6+k3s1   192.168.250.13   <none>        Ubuntu 20.04.5 LTS   5.4.0-128-generic   containerd://1.6.8-k3s1
    
    root@k3s-server1:~# cilium status
        /¯¯\
     /¯¯\__/¯¯\    Cilium:         2 errors
     \__/¯¯\__/    Operator:       OK
     /¯¯\__/¯¯\    Hubble:         disabled
     \__/¯¯\__/    ClusterMesh:    disabled
        \__/
    
    DaemonSet         cilium             Desired: 4, Ready: 3/4, Available: 3/4, Unavailable: 1/4
    Deployment        cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
    Containers:       cilium             Running: 4
                      cilium-operator    Running: 1
    Cluster Pods:     3/3 managed by Cilium
    Image versions    cilium             quay.io/cilium/cilium:v1.12.2@sha256:986f8b04cfdb35cf714701e58e35da0ee63da2b8a048ab596ccb49de58d5ba36: 4
                      cilium-operator    quay.io/cilium/operator-generic:v1.12.2@sha256:00508f78dae5412161fa40ee30069c2802aef20f7bdd20e91423103ba8c0df6e: 1
    Errors:           cilium             cilium          1 pods of DaemonSet cilium are not ready
                      cilium             cilium-4spdt    unable to retrieve cilium status: container cilium-agent is in CrashLoopBackOff, exited with code 1: level=fatal msg="Unable to initialize Kubernetes subsystem" error="unable to create k8s client: unable to create k8s client: Get \"https://127.0.0.1:6443/api/v1/namespaces/kube-system\": dial tcp 127.0.0.1:6443: connect: connection refused" subsys=daemon
    
    root@k3s-server1:~# kubectl get pods -A -o wide
    NAMESPACE     NAME                                      READY   STATUS             RESTARTS         AGE     IP               NODE          NOMINATED NODE   READINESS GATES
    kube-system   cilium-4spdt                              0/1     CrashLoopBackOff   99 (2m25s ago)   7h15m   192.168.250.14   k3s-agent1    <none>           <none>
    kube-system   cilium-6mjjd                              1/1     Running            1 (26m ago)      7h15m   192.168.250.13   k3s-server3   <none>           <none>
    kube-system   cilium-9xcpk                              1/1     Running            1 (26m ago)      7h15m   192.168.250.11   k3s-server1   <none>           <none>
    kube-system   cilium-b7grx                              1/1     Running            1 (25m ago)      7h15m   192.168.250.12   k3s-server2   <none>           <none>
    kube-system   cilium-operator-7b5b55f786-4nfmd          1/1     Running            1 (25m ago)      9h      192.168.250.12   k3s-server2   <none>           <none>
    kube-system   coredns-b96499967-96fsc                   1/1     Running            1 (25m ago)      9h      10.0.1.76        k3s-server2   <none>           <none>
    kube-system   local-path-provisioner-7b7dc8d6f5-gbxjc   1/1     Running            1 (25m ago)      9h      10.0.1.20        k3s-server2   <none>           <none>
    kube-system   metrics-server-668d979685-j6xzf           1/1     Running            1 (25m ago)      9h      10.0.1.177       k3s-server2   <none>           <none>
    • 1
    • 1
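    A hedged explanation of what is usually going on here: with kube-proxy replacement, the agent needs an explicit API server address, and the Helm values k8sServiceHost / k8sServicePort are what end up in the cilium-agent DaemonSet, which is why cilium config set on the ConfigMap after the fact doesn't change where it connects. 127.0.0.1:6443 only works on the server nodes, where kube-apiserver listens locally, so the agent node fails. A sketch of setting it with the same CLI (the IP is k3s-server1 from the node list above; a VIP or load-balanced address in front of the three servers would be more robust, and on an existing install the equivalent Helm upgrade with the same values may be needed, since cilium install refuses to run twice):
    cilium install \
        --kube-proxy-replacement=strict \
        --helm-set k8sServiceHost=192.168.250.11 \
        --helm-set k8sServicePort=6443
    followed by restarting the cilium DaemonSet pods so they pick up the new values.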
  • r

    rapid-jelly-9995

    10/17/2022, 12:12 PM
    Hi! I have a question regarding node registration for k3s clusters. I am running k3s v1.22.15-k3s1 (started using k3d v5.3.0) in combination with [KubeEdge](kubeedge.io), so there are nodes that aren't managed via k3d and are "unknown" to k3s. Joining nodes works, and rolling out workloads does too. However, the kubectl exec and logs features don't work, because it says:
    Error from server: Get "https://192.168.1.100:10351/containerLogs/kubeedge/edgemesh-agent-q78bp/edgemesh-agent": x509: cannot validate certificate for 192.168.1.100 because it doesn't contain any IP SANs
    Now, my question: is it somehow possible to accept nodes blindly without validating their certificates? One thing to mention is that this is a local only dev cluster.
    l
    • 2
    • 5
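    A possible workaround for the logs part only (hedged; it may or may not help with the KubeEdge CloudStream proxy on port 10351 sitting in the path, and kubectl exec has no equivalent flag): kubectl logs can be told to skip verification of the backend's serving certificate. The cleaner fix is a serving certificate on the edge side that includes 192.168.1.100 as an IP SAN.
    kubectl logs --insecure-skip-tls-verify-backend=true -n kubeedge edgemesh-agent-q78bp -c edgemesh-agent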
  • g

    great-monitor-14716

    10/17/2022, 2:28 PM
    Hi! Not sure if this is k3s or cilium related. I'm trying to add a feature to cilium CNI (installed through cilium CLI) following the docs:
    cilium install \
        --kube-proxy-replacement=strict \
        --helm-set ingressController.enabled=true
    Getting error:
    Error: Unable to install Cilium: unable to create secret kube-system/hubble-server-certs: secrets "hubble-server-certs" already exists
    Should I throw it out and install Cilium through Helm instead? Or is there a way to use the CLI to get the same result?
    c
    • 2
    • 3
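    One hedged way around this, assuming the secret is a leftover from the earlier install rather than something managed separately: delete the stale secret and re-run the install (or, if the CLI refuses because Cilium is already installed, switch to managing the release with Helm, where re-applying values is idempotent):
    kubectl -n kube-system delete secret hubble-server-certs
    cilium install \
        --kube-proxy-replacement=strict \
        --helm-set ingressController.enabled=true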
  • e

    enough-carpet-20915

    10/17/2022, 5:45 PM
    So, what does everyone use for Grafana dashboards? I've got 15282, which is pretty good, but the math is wrong for the CPU/RAM: the total RAM and CPU shown is for one node, not for the entire cluster of 4 nodes.
    l
    • 2
    • 1
  • a

    ancient-raincoat-46356

    10/17/2022, 9:07 PM
    I created a bash script today to start taking backups of my K3s cluster. Currently I'm only grabbing the following
    COPY_CONTENTS=( "/var/lib/rancher/k3s/server/db/snapshots" "/etc/rancher/k3s/k3s.yaml" "/etc/default/k3s" )
    The docs really only say to back up the etcd snapshots, but I'm finding a few other things that seem important. Does anyone have any other suggestions of things I should be including?
    c
    • 2
    • 7
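    A hedged sketch of a slightly longer list in the same style: the server token is worth including because it is needed when restoring an etcd snapshot onto fresh machines, and config.yaml / registries.yaml (if present) capture install-time configuration that k3s.yaml does not:
    COPY_CONTENTS=(
      "/var/lib/rancher/k3s/server/db/snapshots"
      "/var/lib/rancher/k3s/server/token"
      "/etc/rancher/k3s/k3s.yaml"
      "/etc/rancher/k3s/config.yaml"
      "/etc/rancher/k3s/registries.yaml"
      "/etc/default/k3s"
    )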
  • a

    agreeable-garden-91801

    10/18/2022, 8:26 AM
    Hi! Is there a proper way to change the host IP of a master node in a running k3s cluster (single-node or multi-node)? After changing the IP address, the k3s service does not start. Is there a correct way to change the IP without deleting the node? Thanks
    b
    c
    • 3
    • 3
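    A hedged outline for the single-server, embedded-etcd case (try it on a throwaway cluster first; the old IP is recorded in the etcd member list, which is one reason k3s refuses to start after the address changes):
    systemctl stop k3s
    # update node-ip / advertise-address / tls-san in /etc/rancher/k3s/config.yaml to the new address
    k3s server --cluster-reset
    systemctl start k3s
    Agents then re-join against the new server URL. For multi-node control planes it is usually simpler to join a node with the new IP and delete the old one.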
  • f

    faint-pillow-41543

    10/18/2022, 2:26 PM
    Hello there, may I ask if anybody can point me to a doc explaining how to set up HTTP-to-HTTPS redirection with Traefik IngressRoute CRDs? I've been scavenging for hours without any clear answers.
  • f

    faint-pillow-41543

    10/18/2022, 2:40 PM
    nvm, found it, it looks like posting a question somewhere just brings me good fortune. I used this solution https://pet2cattle.com/2022/10/traefik-ingressroute-http-to-https
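    For anyone landing here later, the usual Traefik-CRD pattern for this is a redirectScheme Middleware referenced from the route on the web (HTTP) entrypoint; a sketch with placeholder names:
    apiVersion: traefik.containo.us/v1alpha1
    kind: Middleware
    metadata:
      name: redirect-https
    spec:
      redirectScheme:
        scheme: https
        permanent: true
    ---
    apiVersion: traefik.containo.us/v1alpha1
    kind: IngressRoute
    metadata:
      name: myapp-http
    spec:
      entryPoints: ["web"]
      routes:
        - match: Host(`myapp.example.com`)
          kind: Rule
          middlewares:
            - name: redirect-https
          services:
            - name: myapp
              port: 80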
  • s

    stocky-controller-51764

    10/20/2022, 6:38 AM
    Hi there! Are there any use cases of k3s with a large number of worker nodes, maybe up to X thousand?
    k
    n
    f
    • 4
    • 4
  • f

    faint-pillow-41543

    10/20/2022, 1:30 PM
    Hello, I'm trying to expose a TCP port on my single-node cluster (exposing a Redis server) but I'm struggling to do so using IngressRouteTCP. I managed to add a custom entryPoint to the Traefik static conf and, apparently, from the Traefik dashboard the router is OK and pointing to the correct svc. However, the port (6379) is still closed from outside (I can't see any iptables rule opening it for outside connections either). May I ask if someone can point me to a straightforward doc on how to open a TCP port on a freshly installed K3s cluster?
    c
    • 2
    • 2
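    A sketch of the two pieces that usually have to line up with the bundled Traefik (assumption: the static entryPoint alone adds a listener inside the Traefik pod but does not expose the port on the Traefik Service/LoadBalancer, which would explain why nothing answers from outside). Declaring the port through the Helm chart keeps the entryPoint and the Service port in sync; names and the target service are placeholders, and the two documents are separate manifests:
    apiVersion: helm.cattle.io/v1
    kind: HelmChartConfig
    metadata:
      name: traefik
      namespace: kube-system
    spec:
      valuesContent: |-
        ports:
          redis:
            port: 6379
            expose: true
            exposedPort: 6379
            protocol: TCP
    ---
    apiVersion: traefik.containo.us/v1alpha1
    kind: IngressRouteTCP
    metadata:
      name: redis
    spec:
      entryPoints: ["redis"]
      routes:
        - match: HostSNI(`*`)
          services:
            - name: redis
              port: 6379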
  • p

    prehistoric-judge-25958

    10/20/2022, 8:06 PM
    How do I enable feature gates in my K3s cluster? I want to enable StatefulSetAutoDeletePVC
    --feature-gates=StatefulSetAutoDeletePVC=true
    ✅ 1
    g
    • 2
    • 6
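    A sketch of one way to do this in k3s (StatefulSetAutoDeletePVC is checked by both the API server and the controller manager, so the gate goes on both; restart the k3s service afterwards):
    # /etc/rancher/k3s/config.yaml
    kube-apiserver-arg:
      - "feature-gates=StatefulSetAutoDeletePVC=true"
    kube-controller-manager-arg:
      - "feature-gates=StatefulSetAutoDeletePVC=true"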
  • t

    tall-belgium-91376

    10/21/2022, 12:45 PM
    Hi, today I updated my master's certs because they were expired, and now I see this
    invalid bearer token, Token has been invalidated
    all over the master's logs. Do I need to do something on the agent side other than restarting the
    k3s-agent
    ?
    n
    • 2
    • 9
  • c

    clever-air-65544

    10/21/2022, 1:59 PM
    new weekly is up! https://github.com/k3s-io/k3s/discussions/6313
    🚀 3
  • c

    careful-piano-35019

    10/21/2022, 3:55 PM
    @colossal-toddler-39587 https://twitter.com/alexellisuk/status/1583467517670195200?s=20&t=I8Tf1TR4c25caFX9yhLemQ maybe here is a better place to look into ^^
    c
    • 2
    • 2
  • m

    melodic-hamburger-23329

    10/23/2022, 4:49 AM
    Does kubectl exec require the service load balancer? I disabled Klipper and I can otherwise manage k3s using kubectl, but kubectl exec gets stuck and times out.
  • a

    able-mechanic-45652

    10/24/2022, 9:07 AM
    I need a bit of debugging help; it seems our k3s cluster is unable to resolve names correctly. It's probably an issue with worker iptables or something: pods running on the same worker as CoreDNS resolve names correctly, but pods on other workers don't seem to be able to resolve the cluster's internal names, so they appear unable to connect to CoreDNS.
    b
    • 2
    • 11
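    A few hedged checks for this symptom (the pod image and label below are assumptions/k3s defaults; the classic culprit on multi-node flannel is UDP 8472 VXLAN traffic being blocked between workers, which breaks exactly this cross-node path to CoreDNS):
    # does DNS resolve from a pod scheduled on a different node than CoreDNS?
    kubectl run -it --rm dnstest --image=busybox:1.35 --restart=Never -- nslookup kubernetes.default.svc.cluster.local
    # where is CoreDNS running, and what is its pod IP?
    kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
    # on each worker: is VXLAN (udp/8472) allowed between the node IPs?
    sudo iptables -L -n | grep -i 8472   # or firewall-cmd --list-all on firewalld distros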
  • s

    square-engine-61315

    10/24/2022, 11:44 AM
    I have a pod stuck in terminating status with the following error:
    error killing pod: failed to "KillPodSandbox" for "26335a48-9ee2-422c-8909-c8c8db421e85" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"...\": failed to get network \"cbr0\" cached result: decoding version from network config: unexpected end of JSON input"
    I'm not sure whether this is a bug, nor where to file it if it is! I have tried restarting the node, but it makes no difference. Any ideas?
    b
    • 2
    • 12
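    A hedged workaround for getting the pod object out of Terminating while the sandbox problem is investigated (this only removes it from the API; the error itself points at a truncated or corrupted cached CNI result for that sandbox on the node, so a k3s/containerd restart on that node is the usual follow-up). Pod name and namespace are placeholders:
    kubectl delete pod <pod-name> -n <namespace> --grace-period=0 --force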
  • e

    enough-carpet-20915

    10/24/2022, 5:47 PM
    Oops
  • e

    enough-carpet-20915

    10/24/2022, 5:48 PM
    Is what's in the logs
  • e

    enough-carpet-20915

    10/24/2022, 5:49 PM
    Never mind. I forgot to give the --cluster-cidr flag to the second master as well
  • f

    fierce-horse-50681

    10/25/2022, 3:21 AM
    Hello everyone, can I ask Flannel-related questions here? The Flannel README led me here, but this is the k3s channel, which feels a little strange.
    c
    b
    • 3
    • 5
  • e

    enough-carpet-20915

    10/25/2022, 11:39 AM
    If I have servers with two IPs (one public and one private), how do I tell k3s to use the private interface for all inter-node communication? Is setting
    --flannel-iface
    all I need?
    p
    • 2
    • 8
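    A sketch of the flags that usually matter here besides --flannel-iface (values are placeholders; they can go in /etc/rancher/k3s/config.yaml or on the command line): --node-ip controls the address the node registers and that other nodes use to reach it, while --node-external-ip advertises the public address separately.
    # server, /etc/rancher/k3s/config.yaml
    node-ip: 10.0.0.11            # private address
    node-external-ip: 203.0.113.11
    flannel-iface: eth1           # private interface
    # agents join over the private address, e.g. K3S_URL=https://10.0.0.11:6443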
  • w

    white-garden-41931

    10/26/2022, 12:24 PM
    Thank you, @bulky-vase-94318 ! This is almost exactly what I'm trying to build at home (
    s/Centos/Fedora/
    )