k3s
  • m

    melodic-hamburger-23329

    07/08/2022, 4:12 AM
Sorry for the flood of questions 😅 I'm trying to set up a POC of a CI/CD system running on k3s. How do I give pods access to the containerd instance managed by k3s? Also, is it OK to have multiple pods accessing that same containerd?
    c
    • 2
    • 1
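A minimal sketch of what this could look like, assuming the default k3s socket path (/run/k3s/containerd/containerd.sock); the pod name and image are placeholders. containerd serves multiple clients over one socket, so several pods sharing it is generally fine, though each needs enough privilege to use the host runtime:

apiVersion: v1
kind: Pod
metadata:
  name: ci-builder                  # placeholder name
spec:
  containers:
  - name: builder
    image: moby/buildkit:latest     # any client that can talk to containerd
    securityContext:
      privileged: true              # host-runtime access usually requires this
    volumeMounts:
    - name: containerd-sock
      mountPath: /run/containerd/containerd.sock   # where the client expects it
  volumes:
  - name: containerd-sock
    hostPath:
      path: /run/k3s/containerd/containerd.sock    # k3s's embedded containerd
      type: Socket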
  • f

    flat-ice-58483

    07/10/2022, 8:58 AM
    Hi. Are there any good guides on setting up Let's Encrypt with k3s (running on k3os)?
    w
    • 2
    • 37
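One common route is cert-manager with an ACME ClusterIssuer solving HTTP-01 through the bundled Traefik ingress; a sketch, with the email and issuer name as placeholders:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com          # placeholder; expiry notices go here
    privateKeySecretRef:
      name: letsencrypt-prod-key    # secret cert-manager creates for the ACME account
    solvers:
    - http01:
        ingress:
          class: traefik            # k3s/k3os ship Traefik as the default ingress

An Ingress annotated with cert-manager.io/cluster-issuer: letsencrypt-prod and carrying a tls section then gets its certificate issued and renewed automatically.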
  • b

    bland-painting-61617

    07/10/2022, 10:06 PM
Hey guys, I have a cluster with 3 master nodes. One of my offsite nodes, a master, was offline for 2 weeks due to a hardware failure. I brought it up, but K3s didn't start for an hour or so, so I attempted deleting the etcd db files and manually removed the node from the etcd cluster. I tried to start it again and saw the database size grow nicely as it synced the content. At around 600 MB of the total 780 MB, etcd complains that the member was removed from the cluster and dies. I'm not sure what removes the member. Would it be K3s timing out on startup and aborting the process?
    • 1
    • 2
  • j

    jolly-waitress-71272

    07/12/2022, 7:55 PM
Does K3s still install its own kubectl? If so, where?
    c
    • 2
    • 7
  • q

    quiet-energy-91205

    07/13/2022, 12:41 PM
Hey! I'm trying to use Cilium alongside K3s, currently Cilium 1.11.6 and K3s v1.24.2+k3s2 (I also tried v1.23.8+k3s2). However, I am receiving this error:
    kubectl logs -n kube-system cilium-4vt6k 
    Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), clean-cilium-state (init)
Error from server: Get "https://192.168.1.3:10250/containerLogs/kube-system/cilium-4vt6k/cilium-agent": proxy error from 192.168.1.3:6443 while dialing 192.168.1.3:10250, code 500: 500 Internal Server Error
    K3s config:
    advertise-address: {{ private_ip }}
    bind-address: {{ private_ip }}
    node-ip: {{ private_ip }}
    
    node-external-ip: {{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}
    
    kubelet-arg:
      - address={{ private_ip }}
    
    kube-apiserver-arg:
      - kubelet-preferred-address-types=InternalIP
    
    flannel-backend: none
    disable-kube-proxy: true
    disable-network-policy: true
    disable:
      - servicelb
      - traefik
      - local-storage
    Cilium config:
    k8sServiceHost: {{ hostvars[groups['server'][0]]['private_ip'] }}
    k8sServicePort: 6443
    kubeProxyReplacement: strict
    
    bandwidthManager: true
    
    externalIPs:
      enabled: true
    
    hostPort:
      enabled: true
    
    nodePort:
      enabled: true
    
    hostServices:
      enabled: true
    
    hubble:
      enabled: false
    
    nodeinit:
      restartPods: true
    c
    • 2
    • 3
  • t

    thousands-mechanic-72394

    07/13/2022, 7:34 PM
In a Kubernetes workshop at UberConf 2022, a provided deployment based on MySQL 5.5.x would never start and entered a CrashLoopBackOff state. What was frustrating was that there were no logs for the container that failed. The k3s logs told me an error had occurred and that the deployment went to CrashLoopBackOff. (The image runs fine in Docker.) I upgraded the Docker image to be based on MySQL 8.0.x and it worked fine. I ran an image scan against mysql:5.5.45 and found 38 critical issues, but no critical issues were found for mysql:8.0.29. Given the absence of log information, I'm wondering: does K3s / RD scan the image before deployment and block the deployment if critical vulnerabilities are found?
    c
    • 2
    • 13
  • n

    numerous-chef-53737

    07/13/2022, 7:44 PM
Hi, I only recently started with k8s as well as k3s, so I apologize for the stupid questions in advance. I am trying to use my own TLS certificate (+ intermediate) with a newly brought-up k3s cluster (with the built-in Traefik ingress). I tried the most obvious way: created a TLS secret and referenced it from a normal Ingress object, but Traefik still serves its self-generated cert instead of mine. Can someone help me with that?
    c
    w
    • 3
    • 5
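A frequent cause of Traefik falling back to its self-signed default is the secret living in a different namespace than the Ingress, or the tls host not matching the rule host or the cert's SAN. A sketch of the working shape, with names and hosts as placeholders:

apiVersion: v1
kind: Secret
metadata:
  name: my-tls
  namespace: my-app        # must be the same namespace as the Ingress below
type: kubernetes.io/tls
data:
  tls.crt: <base64 of the server cert followed by the intermediate>
  tls.key: <base64 of the private key>
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: my-app
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
  tls:
  - hosts:
    - app.example.com      # must match the rule host and a SAN in the cert
    secretName: my-tls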
  • c

    careful-table-97595

    07/14/2022, 12:51 AM
Hello. I tried to upgrade my k3s cluster from v1.22.7+k3s1 to v1.23.8+k3s2 (cluster in HA mode) by stopping k3s, running the k3s-killall script, replacing the k3s binary with the expected version, then restarting the k3s daemon. The upgrade itself went well, but a lot of pods were not able to start correctly on the nodes with the new version, with this error:
incompatible CNI versions; config is "1.0.0", plugin supports ["0.1.0" "0.2.0" "0.3.0" "0.3.1" "0.4.0"]
So I reverted back to my original version (v1.22.7+k3s1). Did I miss something? I haven't found anything relevant in the documentation.
    c
    f
    • 3
    • 14
  • a

    ancient-air-32350

    07/14/2022, 6:39 AM
Hello! Is it possible to use the CNI bandwidth plugin with k3s and flannel, or do I need to switch to Calico for that? https://www.cni.dev/plugins/current/meta/bandwidth/
    w
    • 2
    • 2
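For what it's worth, the bandwidth plugin is driven by pod annotations; whether k3s's bundled flannel CNI config chains the bandwidth meta-plugin is exactly the open question here, so treat this as a sketch that assumes it is chained (pod name and rates are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: rate-limited       # placeholder
  annotations:
    kubernetes.io/ingress-bandwidth: 10M   # shaping applied by the CNI bandwidth plugin
    kubernetes.io/egress-bandwidth: 10M
spec:
  containers:
  - name: app
    image: nginx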
  • a

    average-arm-20932

    07/15/2022, 5:23 PM
    Hello Team,
    
I'm using K3s version v1.22.4+k3s1. I need to send SSL connections directly to the backend, not decrypt them at my Traefik; the backend needs to receive https requests.
The annotation below is not working. Could anyone help me here? Any help is appreciated.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cp-certissuer-ing
  namespace: cp-certissuer
  annotations:
    traefik.ingress.kubernetes.io/ssl.passthrough: "True"
spec:
  rules:
  - host: server.example.com
    http:
      paths:
      - backend:
          service:
            name: cp-certissuer
            port:
              number: 8080
        path: /cert/actuator/info
        pathType: Prefix

  tls:
  - hosts:
    - server.example.com
    secretName: cp-certissuer-ssl-secret
    w
    • 2
    • 10
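That ssl.passthrough annotation is Traefik 1.x syntax; the Traefik 2.x bundled with recent k3s does TLS passthrough via an IngressRouteTCP CRD instead. A sketch against the manifest above (the route name is a placeholder):

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: cp-certissuer-passthrough   # placeholder name
  namespace: cp-certissuer
spec:
  entryPoints:
  - websecure                  # Traefik's default TLS entrypoint in k3s
  routes:
  - match: HostSNI(`server.example.com`)
    services:
    - name: cp-certissuer
      port: 8080
  tls:
    passthrough: true          # hand the encrypted stream to the backend untouched

Note that with passthrough Traefik never decrypts the stream, so path-based rules like /cert/actuator/info can't apply; routing is by SNI only.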
  • b

    bland-painting-61617

    07/17/2022, 4:13 PM
Is it just me, or is the documentation for Rancher and K3s written by someone who assumes we all already know how it works? Are we supposed to dig into the code to find out? For example:
--etcd-s3-endpoint-ca value                (db) S3 custom CA cert to connect to S3 endpoint
A path to a custom cert file? A base64-encoded PEM file? A PEM file with no newlines?
    b
    • 2
    • 4
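For the record, as far as I can tell the flag takes a filesystem path to an ordinary PEM file; a sketch in config-file form, with the endpoint and path as placeholders:

# /etc/rancher/k3s/config.yaml
etcd-s3: true
etcd-s3-endpoint: minio.example.com:9000           # placeholder endpoint
etcd-s3-endpoint-ca: /etc/ssl/certs/minio-ca.pem   # plain PEM file on disk, newlines and all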
  • f

    flat-engine-95579

    07/17/2022, 5:01 PM
    Nevermind, fixed it with https://github.com/k3s-io/k3s/issues/4234, my bad.
    c
    • 2
    • 1
  • a

    alert-elephant-31589

    07/17/2022, 5:23 PM
    Anyone that can help me out here? https://rancher-users.slack.com/archives/CGGQEHPPW/p1658071877156619
    b
    • 2
    • 2
  • s

    strong-tomato-67726

    07/18/2022, 10:11 AM
How does secret encryption at rest work under the hood?
    c
    • 2
    • 1
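In broad strokes: when started with --secrets-encryption, k3s generates a standard Kubernetes EncryptionConfiguration (kept under /var/lib/rancher/k3s/server/cred/) and the apiserver AES-encrypts each Secret before writing it to the datastore. A sketch of that config's shape, with the key material elided:

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets            # only Secret objects are encrypted
  providers:
  - aescbc:
      keys:
      - name: aescbckey
        secret: <base64-encoded 32-byte key>
  - identity: {}       # fallback so pre-existing plaintext secrets can still be read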
  • b

    brainy-electrician-41196

    07/19/2022, 7:18 AM
Salutations~! I noticed one of my k3s-agent nodes was misbehaving, so I deleted the node from the cluster, ran k3s-agent-uninstall.sh on it, and went to add it to the cluster again. It came up complaining about the CNI not initializing properly. I'm using Cilium as my CNI; do I need to tell the agent not to do anything with flannel?
    • 1
    • 3
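The flannel/CNI switches live on the servers rather than the agents; a sketch of the server-side config that keeps k3s from laying down flannel so Cilium owns the CNI (a rejoining agent should then simply wait until the Cilium DaemonSet drops its CNI config onto the node):

# /etc/rancher/k3s/config.yaml on the server nodes
flannel-backend: none          # don't install flannel; agents wait for the real CNI
disable-network-policy: true   # Cilium enforces NetworkPolicy itself
disable-kube-proxy: true       # only if Cilium runs with kubeProxyReplacement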
  • c

    crooked-rocket-3338

    07/19/2022, 2:51 PM
Hi guys, is the issue of kube-router not cleaning up IPs (https://github.com/rancher/rancher/issues/14504) still happening? I have a k3s cluster (1.22) and I realized some of my load balancers failed with the
failed to allocate for range
error.
    c
    • 2
    • 13
  • f

    fancy-insurance-98888

    07/22/2022, 2:10 AM
Hello, the mirror site (rancher-mirror.cnrancher.com) is broken. Is there any way to solve this?
  • b

    best-oil-69865

    07/22/2022, 10:20 AM
Can I install k3s inside LXC?
  • b

    best-oil-69865

    07/22/2022, 10:20 AM
I ask because I get an overlayFS issue.
  • f

    flaky-dusk-65029

    07/22/2022, 7:13 PM
Q: Has anybody tried running k3s in containers within WSL with podman? I don't have admin access on my Windows laptop, so I was hoping not to (and am unable to) install Docker Desktop, Podman Desktop, or Rancher Desktop.
  • g

    gorgeous-nightfall-51562

    07/23/2022, 2:00 AM
    https://github.com/stazdx/demo-loki-k3s
  • c

    cool-ocean-71403

    07/23/2022, 8:44 AM
Is there any way the k3s service LB can pass or preserve the source IP and port from the client's connection to my backend services? Right now, when I deploy any service using the service LB, my service can only see and log the IP addresses of the service LB daemonset; my backend service is unable to read the original client IP hitting the service LB. Also, in future iterations of k3s, would it be possible to configure the klipper LB so that it creates only one daemonset and routes all load-balancer service traffic through that single daemonset? So, only one pod per node, and maybe a node-label flag to let the daemonset scale onto any node. A full daemonset for every load-balancer service seems very inefficient in terms of resource usage, and it creates 10 extra pods on every node if 10 services are of type LoadBalancer.
    f
    • 2
    • 3
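For the source-IP half of this, the standard Kubernetes knob is externalTrafficPolicy: Local, which skips the SNAT hop and keeps traffic on the node that received it; how faithfully klipper-lb's forwarding preserves the client address is worth verifying on your version, so consider this a sketch (names and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app                     # placeholder
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local     # preserve the client source IP; no cross-node hop
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080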
  • m

    magnificent-egg-26329

    07/23/2022, 2:14 PM
Regarding agent networking: is it possible to specify the value
--node-ip value, -i value
when I am installing with the https://get.k3s.io/ shell script?
    b
    • 2
    • 4
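Yes; the installer passes anything in INSTALL_K3S_EXEC through to the k3s command line, and equivalently any flag can go in the config file before running the script. A config-file sketch with a placeholder address:

# /etc/rancher/k3s/config.yaml, created before running the get.k3s.io script
node-ip: 10.0.0.5        # placeholder; the flag form is --node-ip 10.0.0.5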
  • a

    agreeable-area-10613

    07/24/2022, 7:05 AM
Hi, we have weird behavior with jobs. Every pod of a failed/succeeded job disappears after 2-5 minutes. We are not using
ttlSecondsAfterFinished
. We are only using
backoffLimit: 0
and
restartPolicy: OnFailure
We are using the latest k3s version, and I remember that in the past failed jobs weren't deleted and we were able to see the logs a day later. Any idea?
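One way to take the guesswork out of retention is to set ttlSecondsAfterFinished explicitly, so the finished Job and its pods survive exactly as long as you choose regardless of any controller defaults; a sketch with placeholder names and values:

apiVersion: batch/v1
kind: Job
metadata:
  name: example-job              # placeholder
spec:
  backoffLimit: 0
  ttlSecondsAfterFinished: 86400 # keep the finished Job and its pods for a day
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: work
        image: busybox
        command: ["sh", "-c", "echo done"]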
  • s

    strong-tomato-67726

    07/25/2022, 5:13 PM
    --kube-apiserver-arg="enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount"
    h
    • 2
    • 12
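The config-file equivalent of that flag, for anyone who prefers /etc/rancher/k3s/config.yaml over the command line (repeated flags become YAML list entries):

# /etc/rancher/k3s/config.yaml
kube-apiserver-arg:
- enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount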
  • n

    numerous-sunset-21016

    07/26/2022, 11:01 AM
Is the way to generate my own PSK for
--flannel-backend=ipsec
just to pre-populate /var/lib/rancher/k3s/server/cred/ipsec.psk? The docs don't seem to say who is expected to own this.
    • 1
    • 1
  • a

    ancient-raincoat-46356

    07/28/2022, 7:04 PM
Has anyone ever attempted to change the hostname of their cluster servers? I had built a cluster for an environment, we decided to change the name of the env, and I therefore wanted to update the cluster servers' hostnames to reflect this. After updating the hostname on one of the master nodes and restarting k3s, the k3s server is failing to start. I was hoping to make this change without affecting the workloads on the cluster.
    c
    • 2
    • 2
  • a

    ancient-raincoat-46356

    07/28/2022, 7:05 PM
    I think it might have to do with the Kubeconfig. It would seem that the name of the server might be embedded in the cert data. Is there a way to regenerate the kubeconfig, similar to what one might do with
    kubeadm init phase admin
    ?
  • b

    broad-rocket-5348

    07/29/2022, 4:56 AM
Hi, does anyone know if a k3s cluster can communicate only over public IPs?
    c
    • 2
    • 1
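It can, though every node then needs to advertise its public address and pod traffic crosses the open internet, so an encrypted flannel backend is prudent. A sketch with placeholder addresses:

# /etc/rancher/k3s/config.yaml on each node (placeholder IP)
node-ip: 203.0.113.10
node-external-ip: 203.0.113.10
flannel-backend: wireguard-native   # older releases spell this "wireguard"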
  • s

    steep-continent-12582

    08/01/2022, 5:40 AM
Hi folks, I'm running k3s on top of an NFS-based rootfs, which has worked for me for years, but I recently did a pretty large set of upgrades, so now I'm on containerd vs docker and having issues with some containers and snapshotters:
• The overlay snapshotter, understandably, seems not to work here.
• Trying the native snapshotter, some containers throw errors like:
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd container: copying of parent failed: failed to copy file info for /var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.native/snapshots/new-4170709921: failed to chown /var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.native/snapshots/new-4170709921: lchown /var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.native/snapshots/new-4170709921: invalid argument
• Other times (I haven't figured out the pattern; I thought it was under the overlay config but it seems not), I get a similar but different error where the native snapshotter is not mentioned. This one seems to relate to the pull/unpack rather than the start:
Failed to pull image "quay.io/prometheus/busybox": rpc error: code = Unknown desc = failed to pull and unpack image "quay.io/prometheus/busybox:latest": failed to extract layer sha256:c7412c2a678786efc84fe4bd0b1a79ecca47457b0eb4d4bceb1f79d6b4f75695: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount4066054735: failed to Lchown "/var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount4066054735/bin" for UID 0, GID 0: lchown /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount4066054735/bin: invalid argument: unknown
• For both of the above, my GitHub searches seemed to indicate they're caused by not running on an ext4 filesystem, which is true in my situation.
• In some cases I can't start any containers in a pod; in others I can start most except busybox.
• I recalled that on docker I believe I was using devmapper, which seemed to work fine and did not have this problem. But as far as I can tell, devmapper is either not included at all in the k3s version of containerd, or just not configured (some places say it's not included, but I see errors in containerd.log and in ctr plugin ls output that kind of indicate it's there but not working). I'm also not 100% sure it would fix my problem.
So, my questions (and thanks in advance) are:
• How come only the busybox image has this particular problem so far? I've gotten a bunch of others to work.
• Is there some tweak to the native snapshotter that can be done to allow things to work?
• If not, is there a simple way to get devmapper working?
    c
    e
    • 3
    • 4
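Two knobs that may help while experimenting: k3s exposes the containerd snapshotter choice directly, and the generated containerd config can be overridden wholesale with a template file if a non-default snapshotter such as devmapper needs extra configuration. A sketch:

# /etc/rancher/k3s/config.yaml
snapshotter: native    # or overlayfs, fuse-overlayfs; same as the --snapshotter flag
# For snapshotters k3s doesn't wire up itself, a template at
# /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl
# replaces the generated containerd config entirely.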
c

creamy-pencil-82913

08/01/2022, 6:30 AM
I think you're going to be kind of on your own with this. Running containerd or docker on nfs isn't really done. If you need to run on diskless nodes I'd probably use iscsi or something else like that to expose a traditional block device and then put ext4 or xfs on it, since those are the two filesystems that both of those projects are tested against. The native snapshotter is barely more than a "better than nothing" naive implementation and putting that on top of nfs just makes it worse. It's cool that it worked for you for so long but honestly I'm surprised.
s

steep-continent-12582

08/01/2022, 6:51 AM
Hey, thanks for the reply! Yeah, I was kind of surprised it worked too, but it was as simple as just setting docker back in 1.18 and off it went, no problems. I can look into iSCSI, since it's all off a TrueNAS box; maybe just mounting the volume as the containerd directory would be easier than trying to boot from it, but I'll see. Shame it's not as simple as getting devmapper to work, though, but I hear you on it being kind of uncharted!
e

early-sundown-21192

08/03/2022, 11:23 PM
there is an lvm snapshotter as well, but I’d go the iscsi route like brandond said with a filesystem like ext4 or xfs directly
s

steep-continent-12582

08/03/2022, 11:25 PM
Thanks @early-sundown-21192. In the spirit of speed and getting my cluster back online, I've decided to do something in between and just mount an iSCSI volume at
/var/lib/k3s/agent/containerd
and leave the rest on NFS, rather than trying to figure out iSCSI boot right now. I've had it working on one node since last night and so far it seems to work well; I just need to repeat it for the others 🙂