Powered by Linen
k3s
  • c

    cool-ocean-71403

    06/28/2022, 9:17 PM
    Is the ResourceQuota admission plugin enabled by default in k3s install?
    c
    • 2
    • 8
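For context: upstream kube-apiserver enables ResourceQuota in its default admission plugin set, and k3s passes through those defaults unless explicitly overridden. One hedged way to check on a running node (assumes a systemd-based install logging to the journal):

```shell
# k3s logs the full kube-apiserver command line at startup;
# look for an explicit --enable-admission-plugins override there.
journalctl -u k3s | grep -m1 'Running kube-apiserver'
```

If no override appears, the Kubernetes defaults (which include ResourceQuota) should be in effect.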
  • d

    damp-ram-19757

    06/29/2022, 9:19 AM
Trying to launch k3s with Ansible, and I am getting Python errors. Does anyone have any ideas about what is going on? Python is installed on all machines.
    q
    • 2
    • 1
  • m

    melodic-hamburger-23329

    06/29/2022, 1:35 PM
    Any idea how to fix
    unable to select an IP from default routes
    error preventing k3s startup?
    c
    • 2
    • 17
  • m

    melodic-hamburger-23329

    06/30/2022, 9:36 AM
I’m trying to get k3s to use an image registry mirror, but apparently the images either are not pulled from the correct registry (maybe an issue with the rewrite?) or they are pulled from Docker Hub. `crictl info`:
    "registry": {
          "configPath": "",
          "mirrors": {
            "*": {
              "endpoint": [
                "https://myregistry.net"
              ],
              "rewrite": {
                "^/(.*)": "some-namespace/$1"
              }
            },
            "docker.io": {
              "endpoint": [
                "https://myregistry.net"
              ],
              "rewrite": {
                "^/(.*)": "some-namespace/$1"
              }
            }
          }
    I would like to manage all images from an internal registry, including kube-system images.
    • 1
    • 1
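For reference, mirrors and rewrites are normally configured in `/etc/rancher/k3s/registries.yaml` rather than in containerd's rendered config; `crictl info` only shows the result. A minimal sketch using the registry host and namespace from the question (written to /tmp here purely for illustration; the real file goes in /etc/rancher/k3s and needs a k3s restart, and wildcard `"*"` mirrors depend on the k3s version):

```shell
# Sketch of /etc/rancher/k3s/registries.yaml -- k3s renders this into
# containerd's config at startup, so restart k3s after editing it.
cat > /tmp/registries.yaml <<'EOF'
mirrors:
  "*":
    endpoint:
      - "https://myregistry.net"
    rewrite:
      "^/(.*)": "some-namespace/$1"
  docker.io:
    endpoint:
      - "https://myregistry.net"
    rewrite:
      "^/(.*)": "some-namespace/$1"
EOF
grep -c 'some-namespace' /tmp/registries.yaml
```

Since kube-system images are also pulled through containerd, a working mirror entry should cover them too.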
  • g

    gorgeous-pencil-75892

    06/30/2022, 1:28 PM
    Hi, I have a k3s cluster with 3 master nodes (embedded etcd). I once converted it from a single sqlite master node. Is there a way to get back to a single master with sqlite as storage without starting a new cluster from scratch?
    n
    • 2
    • 2
  • g

    green-shampoo-61471

    06/30/2022, 5:09 PM
    Hey, I have a strange scenario where a cluster deployed at a site is going to be moved to another site with a different IP schema. The problem is that we do not manage the network at that site. Ideally I would send someone to just rebuild the cluster, but I am trying to avoid doing that. I know this can be done by stopping Kubernetes on the master node, making some config changes (deleting some stuff), and bringing it back up; then you are able to rejoin your nodes to the master, and this should bring everything back. The problem is that I am currently running k3OS, and with it being immutable I'm not sure if this is even a valid option. Is there a way to just re-start the bootstrap part at k3OS startup so that I can serve the files over the internet?
    n
    c
    • 3
    • 4
  • h

    hundreds-state-15112

    06/30/2022, 10:15 PM
    Hey y’all, hopefully a quick question. I’m trying to automate the installation of a k3s Rancher Server cluster with Ansible and have questions about the embedded etcd init functionality (thread)
    c
    • 2
    • 12
  • m

    melodic-hamburger-23329

    07/01/2022, 10:26 AM
    Is it possible to override the ContainerdConfigTemplate? I’d like to provide my own config file instead of the template, and also use version 2 syntax (currently getting a warning in the logs because k3s generates the config file with the old syntax). Ref: https://rancher.com/docs/k3s/latest/en/advanced/#configuring-containerd
    • 1
    • 1
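On the template question: k3s skips generating the containerd config when it finds a user-supplied template next to the generated file, per the linked docs. A sketch (whether a v2-syntax template avoids the warning is worth verifying against your k3s version):

```shell
# If config.toml.tmpl exists, k3s uses it instead of generating config.toml.
cd /var/lib/rancher/k3s/agent/etc/containerd
cp config.toml config.toml.tmpl   # start from the currently generated config
# edit config.toml.tmpl as needed, then:
systemctl restart k3s
```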
  • c

    cool-ocean-71403

    07/01/2022, 10:44 AM
    How do I enable mount propagation on my alpine linux k3s installation to make it work with longhorn storage? I tried to deploy longhorn but the manager container is not starting up due to this mount propagation error. Any help would be appreciated. @high-waitress-66594 I think perhaps you can provide some insights on how to solve this.
    h
    • 2
    • 4
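In case it helps others: Longhorn needs shared mount propagation on the host's root mount, which Alpine's OpenRC init doesn't set up the way systemd distros do. A hedged sketch:

```shell
# Make the root mount shared so mount events propagate into pods.
mount --make-rshared /
# To persist across reboots on OpenRC, put the line above in a
# /etc/local.d/*.start script and enable the "local" service.
```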
  • b

    brave-afternoon-4801

    07/01/2022, 2:04 PM
    I am having an issue getting k3s to delete everything, which is strange. Background: I've used the official install shell script to install directly onto an ubuntu workstation, and am using the following commands to wipe the machine
    sudo systemctl stop k3s
    sudo k3s-killall.sh
    docker rm -f $(docker ps -aq)
    sudo rm -rf /var/lib/rancher /var/lib/kubelet
    However, when I start k3s again, there are containers from a helm chart I was playing with that are still being created. What is the proper way to wipe everything?
    ✅ 1
    b
    • 2
    • 3
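Worth noting: k3s runs its own embedded containerd, so `docker rm` doesn't touch k3s-managed containers, and leftover state can repopulate things on restart. The install script also drops an uninstall helper; a sketch:

```shell
# Removes the service, data directories, and binaries the script installed.
/usr/local/bin/k3s-uninstall.sh
# (k3s-killall.sh only stops processes and cleans up containers/network;
# it does not delete state.)
```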
  • l

    late-needle-80860

    07/01/2022, 6:26 PM
    I’m trying to update a cluster to v1.23.7+k3s1 from v1.23.6+k3s1. However, when the first control-plane node comes up it can’t start, and
    journalctl -u k3s.service -f
    shows:
    Jul 01 20:25:25 test-test-master-0 systemd[1]: Failed to start Lightweight Kubernetes.
    Jul 01 20:25:30 test-test-master-0 systemd[1]: k3s.service: Scheduled restart job, restart counter is at 151.
    Jul 01 20:25:30 test-test-master-0 systemd[1]: Stopped Lightweight Kubernetes.
    Jul 01 20:25:30 test-test-master-0 systemd[1]: Starting Lightweight Kubernetes...
    Jul 01 20:25:30 test-test-master-0 sh[53137]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
    Jul 01 20:25:30 test-test-master-0 sh[53138]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory
    Jul 01 20:25:30 test-test-master-0 k3s[53141]: time="2022-07-01T20:25:30+02:00" level=info msg="Starting k3s v1.23.7+k3s1 (ec61c667)"
    Jul 01 20:25:30 test-test-master-0 k3s[53141]: time="2022-07-01T20:25:30+02:00" level=warning msg="Cluster CA certificate is not trusted by the host CA bundle, but the token does not include a CA hash. Use the full token from the server's node-token file to enable Cluster CA validation."
    Jul 01 20:25:30 test-test-master-0 k3s[53141]: time="2022-07-01T20:25:30+02:00" level=info msg="Managed etcd cluster not yet initialized"
    Jul 01 20:25:30 test-test-master-0 k3s[53141]: time="2022-07-01T20:25:30+02:00" level=warning msg="Cluster CA certificate is not trusted by the host CA bundle, but the token does not include a CA hash. Use the full token from the server's node-token file to enable Cluster CA validation."
    Jul 01 20:25:30 test-test-master-0 k3s[53141]: time="2022-07-01T20:25:30+02:00" level=fatal msg="starting kubernetes: preparing server: failed to validate server configuration: critical configuration value mismatch"
    Jul 01 20:25:30 test-test-master-0 systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
    Jul 01 20:25:30 test-test-master-0 systemd[1]: k3s.service: Failed with result 'exit-code'.
    Jul 01 20:25:30 test-test-master-0 systemd[1]: Failed to start Lightweight Kubernetes.
    c
    • 2
    • 18
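Reading the warnings in that log, one possibility is that the joining server is using a short token without the CA hash; the full token from the first server embeds it. A sketch of what to check (the "critical configuration value mismatch" can also mean other server flags differ from the cluster's stored configuration):

```shell
# On an existing, healthy server: the full token, including the CA hash.
cat /var/lib/rancher/k3s/server/node-token
# Use that value as K3S_TOKEN (or token: in /etc/rancher/k3s/config.yaml)
# on the node being upgraded, then restart k3s.
```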
  • c

    cool-ocean-71403

    07/01/2022, 7:35 PM
    Is kine being used at all when using an external etcd datastore in a k3s cluster? I just saw kine 0.9.3 in the RC, so I was wondering if I need to upgrade, or if it is not necessary at all because I am using an external etcd datastore. Also, how do I clear the external etcd datastore completely if I am re-deploying k3s on a fresh install?
    c
    • 2
    • 6
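On clearing an external etcd datastore: etcd has no "drop database" command, but you can delete every key with a ranged delete. A sketch (endpoints and cert paths are placeholders):

```shell
# Delete all keys in the keyspace (destructive!).
ETCDCTL_API=3 etcdctl \
  --endpoints=https://etcd-1:2379 \
  --cacert=ca.pem --cert=client.pem --key=client-key.pem \
  del "" --from-key=true
```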
  • n

    numerous-kilobyte-30360

    07/01/2022, 9:39 PM
    Hi! I'm new to k3s, coming from Docker Swarm in a small home lab environment. In my k3s deployment, all (3) nodes have shared storage already, located under
    /mnt/storage
    . For this reason, I don't want to use Longhorn. I'd like to use my existing replicated storage. k3s seems to default to
    /var/lib/rancher/k3s/storage
    with the local storage provider. I see an option for use during initial k3s setup,
    --default-local-storage-path
    per the documentation at https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#storage-class The setting appears to be located in
    /var/lib/rancher/k3s/server/manifests/local-storage.yaml
    on each node, but it also appears from reading around on forums that this gets overwritten and should be changed with tools, rather than directly. • Is it possible to change the
    --default-local-storage-path
    parameter without reinstalling k3s? • Is this even the correct parameter to change? Thanks!
    c
    • 2
    • 11
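On the first question: the manifest under server/manifests is re-rendered from the CLI flag, which is why direct edits get overwritten. The supported route, as I understand it, is to change the flag and restart the server, e.g. via the config file (note that existing PVs keep their old paths; only new volumes use the new location):

```shell
# Flags map 1:1 to keys in /etc/rancher/k3s/config.yaml.
echo 'default-local-storage-path: /mnt/storage' >> /etc/rancher/k3s/config.yaml
systemctl restart k3s
```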
  • m

    melodic-hamburger-23329

    07/02/2022, 6:22 AM
    I’m trying to get stargz work with private image registry when using k3s and containerd. I got the authentication working using deprecated syntax (
    [plugins."io.containerd.snapshotter.v1.stargz".registry.mirrors."*"]
    and
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."*"]
    ; not sure if I need both?). I’d like to get the authentication working using the new
    config_path
    &
    certs.d
    syntax. Does anyone have insight into the correct and recommended way? I’ve been checking the docs for stargz and containerd/cri, but am a bit lost. Also, although the services eventually boot successfully, I’m currently getting a lot of errors like these during k3s install:
    time="2022-07-02T14:51:25.485441124+09:00" level=info msg="Received status code: 401 Unauthorized. Refreshing creds..." key="k8s.io/25/extract-737524037-rTpL sha256:47539da01eebacb627943ac3a63d918b69534684b239edac42ce4c64742fb4fd" mountpoint=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.stargz/snapshotter/snapshots/17/fs parent="k8s.io/24/sha256:114c5ec8519affa4cba972ba546f900b4803d5863f5242a2daee68aa6133fd7d" src="docker.io/rancher/klipper-helm:v0.7.3-build20220613/sha256:5b8b2ba8c050ccd08bd8f00f15210c40a3f50674a0b9457b3f06870723889cd0"
    
    time="2022-07-02T14:51:26.853585280+09:00" level=warning msg="failed to prepare remote snapshot" error="failed to resolve layer: failed to resolve layer \"sha256:5b8b2ba8c050ccd08bd8f00f15210c40a3f50674a0b9457b3f06870723889cd0\" from \"docker.io/rancher/klipper-helm:v0.7.3-build20220613\": failed to resolve the blob: failed to resolve the source: cannot resolve layer: failed to redirect (host \"registry-1.docker.io\", ref:\"docker.io/rancher/klipper-helm:v0.7.3-build20220613\", digest:\"sha256:5b8b2ba8c050ccd08bd8f00f15210c40a3f50674a0b9457b3f06870723889cd0\"): failed to access to the registry with code 404: failed to redirect (host \"registry-jpe2.r-local.net\", ref:\"docker.io/rancher/klipper-helm:v0.7.3-build20220613\", digest:\"sha256:5b8b2ba8c050ccd08bd8f00f15210c40a3f50674a0b9457b3f06870723889cd0\"): failed to access to the registry with code 401: failed to resolve: failed to resolve target" key="k8s.io/25/extract-737524037-rTpL sha256:47539da01eebacb627943ac3a63d918b69534684b239edac42ce4c64742fb4fd" parent="k8s.io/24/sha256:114c5ec8519affa4cba972ba546f900b4803d5863f5242a2daee68aa6133fd7d" remote-snapshot-prepared=false
    
    time="2022-07-02T14:51:37.262725372+09:00" level=error msg="ContainerStatus for \"1e7c670c740754bfa1d6e3c4ca2540025adb6cd8063abeb057911f9535d0f1b3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e7c670c740754bfa1d6e3c4ca2540025adb6cd8063abeb057911f9535d0f1b3\": not found"
    time="2022-07-02T14:51:37.263923059+09:00" level=error msg="ContainerStatus for \"5af6495a6eeb8d647ce97e2ece46489961799cb711a85533fd4a38892b0af32e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5af6495a6eeb8d647ce97e2ece46489961799cb711a85533fd4a38892b0af32e\": not found"
    Is this normal with private registries, or do I possibly have some misconfiguration somewhere…?
    • 1
    • 1
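For the `config_path`/`certs.d` half of the question, the containerd-level shape looks roughly like this, per containerd's hosts.toml scheme (the mirror host is the placeholder from earlier in the thread; how this interacts with k3s's generated config and the stargz snapshotter is worth verifying):

```shell
# certs.d layout: one directory per registry host, each with a hosts.toml.
mkdir -p /etc/containerd/certs.d/docker.io
cat > /etc/containerd/certs.d/docker.io/hosts.toml <<'EOF'
server = "https://registry-1.docker.io"

[host."https://myregistry.net"]
  capabilities = ["pull", "resolve"]
EOF
```

The `config_path` option in containerd's CRI registry config then points at the `certs.d` directory.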
  • m

    melodic-hamburger-23329

    07/04/2022, 6:19 AM
    Is it possible to specify helm chart versions for k3s?
    • 1
    • 1
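Yes: the HelmChart resource consumed by k3s's embedded helm controller has a `version` field for pinning the chart version. A hedged sketch (the chart, repo, and version here are illustrative examples, not from the original):

```shell
kubectl apply -f - <<'EOF'
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: grafana
  namespace: kube-system
spec:
  repo: https://grafana.github.io/helm-charts
  chart: grafana
  version: "6.32.2"   # pin the chart version here
EOF
```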
  • r

    ripe-restaurant-90224

    07/05/2022, 9:10 AM
    Hi all. I have been using an nginx ingress controller with
    hostnetwork: true
    and a public IP and it has been doing exactly what I want but now I'd like to put it behind an
    sslh
    so I can use one of the ports for ssh also. I can't seem to find how to set the bind address to
    127.0.0.1
    or the private IP so
    sslh
    can listen on the external IP and proxy to the ingress. Does anyone know what I need to set?
    • 1
    • 1
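One possibility, assuming the community ingress-nginx controller: its ConfigMap supports a `bind-address` key that limits the addresses nginx listens on, which with `hostNetwork: true` should leave the external IP free for sslh. A sketch (ConfigMap name and namespace are assumptions from a default Helm install):

```shell
kubectl -n ingress-nginx patch configmap ingress-nginx-controller \
  --type merge -p '{"data":{"bind-address":"127.0.0.1"}}'
```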
  • a

    average-autumn-93923

    07/05/2022, 5:25 PM
    Here’s a funny one… I have two master nodes that disagree about which one is ready.
    root@dp6448:~# k3s kubectl get nodes
    NAME     STATUS     ROLES                       AGE   VERSION
    dp6448   Ready      control-plane,etcd,master   69d   v1.23.4+k3s1
    dp6449   NotReady   control-plane,etcd,master   69d   v1.23.4+k3s1
    
    root@dp6449:~# k3s kubectl get nodes
    NAME     STATUS     ROLES                       AGE   VERSION
    dp6448   NotReady   control-plane,etcd,master   69d   v1.23.4+k3s1
    dp6449   Ready      control-plane,etcd,master   69d   v1.23.4+k3s1
    If I ask them to
    kubectl describe node
    each other, I get
    Conditions:
      Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
      ----             ------    -----------------                 ------------------                ------              -------
      MemoryPressure   Unknown   Tue, 05 Jul 2022 10:32:22 +0000   Tue, 05 Jul 2022 10:33:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
      DiskPressure     Unknown   Tue, 05 Jul 2022 10:32:22 +0000   Tue, 05 Jul 2022 10:33:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
      PIDPressure      Unknown   Tue, 05 Jul 2022 10:32:22 +0000   Tue, 05 Jul 2022 10:33:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
      Ready            Unknown   Tue, 05 Jul 2022 10:32:22 +0000   Tue, 05 Jul 2022 10:33:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
    Everything other than that seems fine… etcd is healthy (one of the nodes is the leader and the other is not); they’re just each reporting that the other node isn’t ready. Confused as to how this is possible with healthy etcd.
    c
    • 2
    • 3
  • b

    billowy-needle-49036

    07/05/2022, 8:03 PM
    curl https://get.k3s.io | INSTALL_K3S_VERSION="v1.23.6+k3s1" INSTALL_K3S_EXEC="--disable=traefik --node-ip 10.2.0.1" sh -
    curl http://10.42.0.4:9153
    works (gives 404, ok). But on a neighbor host (10.2.0.210),
    curl http://10.2.0.1:9153
    is connection-refused. Shouldn't that work? Where do I debug next?
    c
    • 2
    • 23
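A debugging note on the above: 10.42.0.4 is a pod IP, and pod ports are not bound on the node's interfaces, so connection refused on the node IP is expected. To reach CoreDNS metrics from a neighbor host you would expose them through a Service, for example (a sketch; names assume the stock k3s coredns deployment):

```shell
# Create a NodePort service in front of the coredns deployment's 9153 port.
kubectl -n kube-system expose deployment coredns \
  --name coredns-metrics --port 9153 --target-port 9153 --type NodePort
kubectl -n kube-system get svc coredns-metrics   # note the allocated node port
```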
  • h

    hundreds-state-15112

    07/05/2022, 8:22 PM
    For the etcd backups, I should run all 3 k3s instances with the same flags, right? Deploying to just one doesn’t set up etcd backups on all of them? Also, do they coordinate, or are all 3 competing to back up their own version of the world at the same time? Do they just let the leader do it?
    c
    • 2
    • 17
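For reference, the snapshot settings are per-server flags; as far as I know each etcd-enabled server takes and retains its own snapshots locally, so they are usually set identically on all three. A config sketch (values are examples):

```shell
cat >> /etc/rancher/k3s/config.yaml <<'EOF'
etcd-snapshot-schedule-cron: "0 */12 * * *"
etcd-snapshot-retention: 5
EOF
systemctl restart k3s
```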
  • h

    high-fall-28740

    07/05/2022, 11:55 PM
    Hello - Is there a K3s equivalent for a path location where certs are stored? E.g. /etc/kubernetes/pki/ ?
    ✅ 1
    h
    c
    • 3
    • 21
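For anyone searching later, the k3s equivalent of /etc/kubernetes/pki lives under the k3s data directory:

```shell
# Cluster CA and serving certs/keys:
ls /var/lib/rancher/k3s/server/tls
# Agent/kubelet-side certs live under the agent directory:
ls /var/lib/rancher/k3s/agent
```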
  • m

    melodic-hamburger-23329

    07/06/2022, 4:10 AM
    Where can I find HelmChart CRD definition (preferably Git)?
    c
    • 2
    • 13
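The CRD comes from the k3s-io/helm-controller project on GitHub; you can also dump the installed definition from a running cluster:

```shell
kubectl get crd helmcharts.helm.cattle.io -o yaml
```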
  • m

    melodic-hamburger-23329

    07/06/2022, 10:07 AM
    FYI, it seems the info related to kubernetes-dashboard does not apply anymore. Also, you apparently need kubectl 1.24+ in order to create the token.
    n
    • 2
    • 1
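For context, the old approach of reading an auto-created ServiceAccount token Secret no longer works on recent Kubernetes; with kubectl 1.24+ you request a token explicitly (the namespace and ServiceAccount names below follow the dashboard docs' usual example and may differ in your setup):

```shell
kubectl -n kubernetes-dashboard create token admin-user
```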
  • r

    rough-ice-65066

    07/06/2022, 12:39 PM
    Hi all, I have a k3s cluster with 2 nodes (1 GB RAM and 2 CPUs). When I run some compute-intensive workload on the cluster, the node goes to a NotReady state (because of OOM). I want to avoid the node going to the NotReady state; instead, I want to kill the pod which is consuming more resources. I tried setting the
    kube-reserved, system-reserved, kube-reserved-cgroup, system-reserved-cgroup
    flags on the kubelet. But if I don't have requests or limits in my pod spec, it still crashes the node (NotReady state). I have tried combinations of the above flags with the eviction-hard flag as well. Is there any other solution to overcome this?
    c
    • 2
    • 1
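A config-file sketch of the flags being discussed, in k3s form (values are illustrative; with only 1 GB per node the reservations and thresholds likely need tuning):

```shell
cat >> /etc/rancher/k3s/config.yaml <<'EOF'
kubelet-arg:
  - "system-reserved=cpu=200m,memory=200Mi"
  - "kube-reserved=cpu=200m,memory=200Mi"
  - "eviction-hard=memory.available<100Mi"
EOF
systemctl restart k3s
```

Note that without requests/limits all pods land in the lowest-priority QoS class, so kernel OOM behavior can still be unpredictable; eviction thresholds only help when the kubelet sees the pressure first.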
  • h

    hundreds-state-15112

    07/06/2022, 8:43 PM
    Opened PR for documentation https://github.com/rancher/docs/pull/4185
    c
    n
    • 3
    • 20
  • b

    bulky-potato-48137

    07/07/2022, 1:42 PM
    Hi all, guess I'm pretty late checking out k3s. It's wonderfully lightweight. I'm trying to
    go get github.com/k3s-io/k3s
    but getting error
    go: downloading github.com/k3s-io/k3s v1.21.9
    go get: github.com/k3s-io/k3s@v1.21.9: parsing go.mod:
    	module declares its path as: github.com/rancher/k3s
    	        but was required as: github.com/k3s-io/k3s
    I then tried
    go get github.com/rancher/k3s
    but again it errors
    go: downloading github.com/rancher/k3s v1.21.9
    go get: github.com/rancher/k3s@v1.21.9 requires
    	github.com/kubernetes-sigs/cri-tools@v0.0.0-00010101000000-000000000000: invalid version: unknown revision 000000000000
    How can I
    go get
    like most go modules?
    c
    • 2
    • 1
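Part of the problem is that k3s's go.mod pins Kubernetes forks via replace directives (including the zero-version cri-tools seen in the error), so it isn't really consumable with a plain `go get` like most modules. The usual route is to clone and build from a release tag (the tag below is an example):

```shell
git clone --depth 1 --branch v1.23.8+k3s1 https://github.com/k3s-io/k3s.git
cd k3s
make    # dapper-based build; see BUILDING.md in the repo for details
```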
  • b

    best-accountant-61831

    07/07/2022, 3:27 PM
    IIRC, it was possible to write a
    upgrade.cattle.io/v1
    plan for Ubuntu package upgrades. I think I once saw an example of that, but my search-fu is lacking. Any known pointers to such a thing?
    c
    • 2
    • 2
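Something along these lines is what the system-upgrade-controller examples show for OS packages; a hedged sketch (namespace, ServiceAccount, and label follow the controller's quickstart conventions and may differ in your setup):

```shell
kubectl apply -f - <<'EOF'
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: apt-dist-upgrade
  namespace: system-upgrade
spec:
  concurrency: 1
  serviceAccountName: system-upgrade
  nodeSelector:
    matchExpressions:
      # Only nodes labeled plan.upgrade.cattle.io/apt-dist-upgrade are upgraded.
      - {key: plan.upgrade.cattle.io/apt-dist-upgrade, operator: Exists}
  version: "1"          # bump to re-run the plan
  drain:
    force: true
  upgrade:
    image: ubuntu:20.04
    command: ["chroot", "/host"]
    args: ["sh", "-c", "apt-get update && apt-get -y dist-upgrade"]
EOF
```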
  • b

    bright-london-1095

    07/07/2022, 7:28 PM
    Hi Team, I have upgraded
    k3s
    version from
    1.20.7+k3s1
    to
    1.22.8+k3s1
    I see some changes, like the image being named
    rancher/mirrored-coredns-coredns:1.9.1
    for the CoreDNS pod and
    rancher/mirrored-library-traefik:2.6.1
    for the Traefik pod. Is this expected, am I missing some configuration, or is the image incorrect? Please note both are part of the
    k3s
    installation and not installed separately... TIA
    c
    • 2
    • 9
  • m

    melodic-hamburger-23329

    07/08/2022, 1:17 AM
    Does k3s (including all the components, e.g. containerd) run on a btrfs filesystem? If so, are there benefits or possible issues compared to ext4?
    c
    • 2
    • 1
  • a

    agreeable-mouse-95550

    07/08/2022, 1:33 AM
    Hi folks. I’m comparing a few different Kubernetes variants, looking into how much memory they use per CRD installed. Note that I mean literally per CustomResourceDefinition that is created - not per custom resource. I’m seeing
    k3s server
    w/sqlite use ~4MB/CRD, which is about the same as the upstream API server. Does that sound about right?
    c
    • 2
    • 4
  • m

    melodic-hamburger-23329

    07/08/2022, 3:58 AM
    I’m curious about the helm-install pod. Could this somehow be utilized in a non-k3s environment? I.e., have some service running that listens for changes to HelmChart CRDs and would do deployments automatically?
    c
    • 2
    • 1
c

creamy-pencil-82913

07/08/2022, 7:29 AM
That's what the helm controller embedded in k3s does already... it is also available as a standalone project on GitHub, but I will warn you that it's probably not anywhere near suitable for general purpose use.
✅ 1