#k3s
  • creamy-pencil-82913 (03/10/2023, 8:05 PM)
    Where did you see that 1.22.3 was the latest stable?
  • creamy-pencil-82913 (03/10/2023, 8:05 PM)
    Also, you're missing bits from the end of the version in your install command. It would be something like
    INSTALL_K3S_VERSION=v1.22.17+k3s1
    not
    INSTALL_K3S_VERSION=v1.22.17
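    (For reference, a minimal sketch of a pinned-version install along those lines, assuming the standard get.k3s.io installer; the exact tag here is an example and should be taken from the k3s releases page:
    curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.22.17+k3s1" sh -
    The "+k3s1" suffix is part of the release tag, so it has to be included.)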
  • bright-fireman-42144 (03/10/2023, 8:06 PM)
    to be honest.... out of date information that has trained OpenAI. I turned to the bot as I didn't want to keep spamming all you fine folk.
  • creamy-pencil-82913 (03/10/2023, 8:06 PM)
    Why anyone uses chatbots instead of looking at documentation or project repos I'll never understand
  • bright-fireman-42144 (03/10/2023, 8:07 PM)
    it pointed me to all the different compatibility matrices.. so it did its job there.
  • bright-fireman-42144 (03/10/2023, 8:09 PM)
    anyways.... back at it. Thanks again for your help @creamy-pencil-82913!
  • creamy-pencil-82913 (03/10/2023, 8:13 PM)
    1.23 is end of life as well, I would probably use the latest 1.24 release. You can get that from INSTALL_K3S_CHANNEL=v1.24
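    (A minimal sketch of that channel-pinned install, assuming the standard get.k3s.io installer; the channel name just follows the pattern mentioned above:
    curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL="v1.24" sh -
    This installs the latest release published to that minor-version channel rather than a single pinned tag.)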
  • hundreds-evening-84071 (03/10/2023, 9:45 PM)
    With a k3s cluster running on 1.22, is it okay to run this to go to the latest version available in the stable channel?
    curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=stable sh -
    Or should it be done in steps? 1.22 to 1.23 to 1.24 and so on?
  • hundreds-evening-84071 (03/10/2023, 9:45 PM)
    looking at this doc: https://docs.k3s.io/upgrades/manual
  • creamy-pencil-82913 (03/10/2023, 9:57 PM)
    See the Kubernetes version skew policy link on that page
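    (The skew policy expects the control plane to move one minor version at a time, so a hedged sketch of a stepwise upgrade using the channel-based installer would be roughly:
    curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL="v1.23" sh -
    kubectl get nodes   # confirm the node reports v1.23.x before continuing
    curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL="v1.24" sh -
    Pointing a 1.22 cluster straight at the stable channel can skip more than one minor version, which the skew policy does not cover.)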
  • handsome-salesclerk-54324 (03/11/2023, 7:56 PM)
    Running k3s on WSL 2 I'm getting:
    W0311 11:54:32.405026    2510 sysinfo.go:203] Nodes topology is not available, providing CPU topology
    Anyone know why?
  • acceptable-leather-15942 (03/11/2023, 10:35 PM)
    Does anyone know if Topology Aware Hints work on k3s? I can't seem to get this working. All my nodes have a different topology.kubernetes.io/zone label. Adding the annotation service.kubernetes.io/topology-aware-hints: auto to my service should better route the traffic, but it doesn't seem to have an effect.
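    (A minimal sketch of applying that annotation and checking the result with kubectl; my-ns and my-service are placeholder names, not from the conversation:
    kubectl -n my-ns annotate service my-service service.kubernetes.io/topology-aware-hints=auto
    kubectl -n my-ns get endpointslices -o yaml | grep -A3 hints   # per-endpoint zone hints should appear once active
    Hints are only written when the EndpointSlice controller decides the per-zone endpoint distribution is safe, so a small cluster may never get them even with the annotation set.)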
  • loud-apartment-45889 (03/12/2023, 6:53 AM)
    Setup info: 3x Alpine VMs running on VMware Workstation 17. 1. Can I deploy k3s onto those 3 VMs from Rancher, instead of installing k3s on them first and then installing Rancher? If yes, is there any doc on that? 2. I installed Rancher on Docker, but it takes 2.5 GB of RAM while sitting idle. Why so big?
  • broad-farmer-70498 (03/13/2023, 6:55 PM)
    @creamy-pencil-82913 does this mean the image binary installs the crd now when launched? https://github.com/k3s-io/helm-controller/commit/09dedbf504bdaa722b99d9e7ea8bd67fba787bd2
  • white-garden-41931 (03/14/2023, 12:29 AM)
    I haven't seen this reported before... running k3s 1.25.4+k3s1 on Fedora 36 I get an error:
    2023/03/03 00:09:28 Starting NATS Server Reloader v0.7.4
    Error: too many open files
    Stream closed EOF for testkube/testkube-nats-0 (reloader)
    which prevents one of my pods (testkube/testkube-nats-0) from starting, and I'm not sure if that is specific to k3s or upstream Kubernetes. Is this a known issue or should I investigate more deeply?
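    ("Too many open files" from a config reloader like this is often an inotify limit on the host rather than a file-descriptor limit. A hedged sketch of raising the limits on the node, assuming default Fedora sysctls are the cause, which this thread does not confirm:
    sudo sysctl -w fs.inotify.max_user_instances=512
    sudo sysctl -w fs.inotify.max_user_watches=524288
    printf 'fs.inotify.max_user_instances=512\nfs.inotify.max_user_watches=524288\n' | sudo tee /etc/sysctl.d/99-inotify.conf
    This is general Linux tuning rather than anything k3s-specific, so it may or may not apply here.)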
  • straight-midnight-66298 (03/14/2023, 11:42 AM)
    Quick question: is Rancher 2.7.1 compatible with Kubernetes 1.25? I tried searching the GitHub repo readme and everything, but I can't see a clear statement of up to what version it is compatible. I need to use the cluster autoscaler function, and it says that is only available from Kubernetes 1.25, but it's now confusing whether I can use the autoscaler function with Rancher 2.7.1 and k3s... Anybody who can shine a light here please? Thanks!
  • limited-needle-7506 (03/14/2023, 9:33 PM)
    Hi, when it comes to private registries, the user's credentials are stored in registries.yaml, which is found in /etc/rancher/k3s/. But say the dev machine running k3s is shared by users with different credentials, each with access to the registries.yaml file and the /etc/rancher/k3s/ directory. How would I prevent user x from accessing/viewing user y's credentials stored in registries.yaml, while at the same time using user x's credentials to pull and push to the private registry? Thanks in advance.
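    (One hedged workaround, since registries.yaml is node-wide config read by k3s as root and has no notion of per-user credentials: keep shared or machine credentials out of that file (and chmod 600 it), and give each user a per-namespace image pull secret for in-cluster pulls. A sketch with placeholder names, not taken from the conversation:
    kubectl -n user-x-ns create secret docker-registry regcred \
      --docker-server=registry.example.com \
      --docker-username=user-x \
      --docker-password='<password>'
    Pushes from a dev workflow would then use each user's own docker/podman login rather than anything under /etc/rancher/k3s/.)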
  • adamant-pencil-35455 (03/15/2023, 12:32 PM)
    Greetings
  • adamant-pencil-35455 (03/15/2023, 12:33 PM)
    I have some networking issues with a k3s setup. Is there a specific way to ask a question here?
  • important-kitchen-32874 (03/15/2023, 1:45 PM)
    Hi folks! Is there some authoritative source for the base runtime cost of running a k3s server node?
  • important-kitchen-32874 (03/15/2023, 1:51 PM)
    I found a comparison from MicroK8s here but of course there must be some bias there 🙂
  • important-kitchen-32874 (03/15/2023, 1:57 PM)
    Ah, after a lot more digging I found https://docs.k3s.io/reference/resource-profiling - might be good to cross-link that in a more visible way, since all the banner claims about k3s are about size 🙂
  • delightful-author-23241 (03/15/2023, 11:04 PM)
    Hello everyone! I may be having a very edge-casey issue. I tried setting up k3s on Asahi Linux on an M1 Mac for the fun of it, and I seem to be getting the following issue across the board:
    Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox image "rancher/mirrored-pause:3.6": failed to pull image "rancher/mirrored-pause:3.6": failed to pull and unpack image "docker.io/rancher/mirrored-pause:3.6": failed to extract layer sha256:c640e628658788773e4478ae837822c9bc7db5b512442f54286a98ad50f88fd4: mount callback failed on /var/lib/rancher/k3s/agent/containerd/tmpmounts/containerd-mount3367908043: signal: segmentation fault: : unknown
    A segmentation fault always seems like quite something is going wrong, and I couldn't find anything related to this when googling (additionally, I have no idea what I'm doing when it comes to k8s), so I thought maybe you people can give me some guidance here. Or is this more of an issue with containerd itself?
  • breezy-autumn-81048 (03/16/2023, 11:44 AM)
    Hi community, I have deployed a k3s cluster using Rancher and installed cert-manager v1.11.0 on top of it. All pods are running, however, the cert-manager-webhook pod is logging some errors:
    Trace[1068908304]: [30.003276269s] [30.003276269s] END
    E0314 15:02:02.236947       1 reflector.go:140] k8s.io/client-go@v0.26.0/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://10.43.0.1:443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-webhook-ca&resourceVersion=360915": dial tcp 10.43.0.1:443: i/o timeout
    (the same list/watch i/o timeout against 10.43.0.1:443 then repeats, at 15:03:28 and again at 15:04:44, each ending in a ~30s "Reflector ListAndWatch" trace)
    Can someone explain what's wrong? It feels like I can't fully install a helm chart because of this issue. (I noticed it while trying to install the helm chart of actions-runner-controller, and the error I got was:
    Error: Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": context deadline exceeded )
    As well, here are some events from the actions-runner-controller pod:
    Warning  FailedMount  17m                  kubelet            Unable to attach or mount volumes: unmounted volumes=[cert], unattached volumes=[kube-api-access-v48zj secret tmp cert]: timed out waiting for the condition
    Warning  FailedMount  8m32s                kubelet            Unable to attach or mount volumes: unmounted volumes=[cert], unattached volumes=[tmp cert kube-api-access-v48zj secret]: timed out waiting for the condition
    Warning  FailedMount  6m18s (x5 over 19m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[cert], unattached volumes=[secret tmp cert kube-api-access-v48zj]: timed out waiting for the condition
    Warning  FailedMount  103s (x2 over 4m1s)  kubelet            Unable to attach or mount volumes: unmounted volumes=[cert], unattached volumes=[cert kube-api-access-v48zj secret tmp]: timed out waiting for the condition
    Warning  FailedMount  86s (x18 over 21m)   kubelet            MountVolume.SetUp failed for volume "cert" : secret "actions-runner-controller-serving-cert" not found
    Thanks in advance.
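    (The dial tcp 10.43.0.1:443: i/o timeout lines mean the webhook pod cannot reach the in-cluster kubernetes Service at all, which usually points at pod networking on the node, for example firewall rules blocking flannel/VXLAN traffic, rather than at cert-manager itself. A hedged diagnostic sketch using a throwaway pod:
    kubectl run nettest --rm -it --image=busybox:1.36 --restart=Never -- wget -O- --no-check-certificate https://10.43.0.1:443/version
    Even a 403 response would prove connectivity, whereas a timeout from pods on a given node points at that node's firewall/CNI configuration.)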
  • wonderful-crayon-55427 (03/16/2023, 2:19 PM)
    I seem to have some directories within containers within pods with /etc/pki (and subdirectories) as read-only, so much so that when I try:
    root# touch /etc/pki/file
    touch: cannot touch '/etc/pki/file': No such file or directory
    Which is odd, as I'm logged in as root and do not have any special mounts on this directory, or any pv/pvc claiming it. Any ideas on where to look? I have considered doing something along the lines of:
    $ kubectl -n <namespace> get pod <pod> -o yaml > config.yml
    # create a new kustomization.yml which patches the current pod with emptyDir: {} and a securityContext
    $ kustomize build . | kubectl -n <namespace> apply -f -
    But that seems egregious and unnecessary.
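    (A hedged first check from inside the affected container, before patching anything, would be to see whether /etc/pki is really writable-but-missing or sitting on a read-only/overlay mount or a dangling symlink:
    ls -ld /etc/pki
    grep pki /proc/mounts
    A read-only entry in /proc/mounts would explain the behaviour, while "No such file or directory" from touch on a directory that ls can list often points at a broken symlink target instead.)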
  • echoing-tomato-53055 (03/17/2023, 5:30 PM)
    @here: has anyone faced, or is anyone facing, the below issue when spinning up a Kubernetes cluster using Rancher 2.7?
    level=info msg="[Applyinator] No image provided, creating empty working directory /var/lib/rancher/agent/work/
  • bored-horse-3670 (03/20/2023, 2:28 PM)
    Hey, I have a k3s that's been running on an Ubuntu VM (single node) for a couple of years now. I noticed that it recently stopped forwarding egress traffic. I tried switching the flannel mode from the default vxlan to the wireguard-native type. It is definitely using WireGuard, but the egress traffic still times out. It's a Proxmox VM connected to a bridge device on the Proxmox host. The weird thing is that the packets leaving the Ubuntu VM still have the container's address set as the source IP. I tried restarting k3s and also tried restarting the whole host. I saw a few similar issues on the GitHub issue tracker. I'll include my iptables-save output in a thread.
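    (For the symptom of pod-source addresses leaving the uplink, a hedged place to start is checking whether flannel's masquerade rules are still present on the host, since egress from pods normally gets a POSTROUTING MASQUERADE applied:
    sudo iptables-save | grep -iE 'FLANNEL|MASQUERADE'
    sudo iptables -t nat -L POSTROUTING -n -v
    If the flannel postrouting entries are missing or have zero hits, something on the host is likely rewriting iptables; this is only a diagnostic sketch, not a confirmed cause from the thread.)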
  • careful-honey-96496 (03/20/2023, 7:38 PM)
    Hi all, I am new to k3s. I am trying to set it up in Sysbox.
    • I set up an Ubuntu 22.04 VM in Azure
    • Installed Docker on the VM per https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository
    • Installed Sysbox on the VM per https://github.com/nestybox/sysbox/blob/master/docs/user-guide/install-package.md#installing-sysbox
    • On the VM I ran
    docker run --runtime=sysbox-runc -it --rm -P --hostname=syscont nestybox/ubuntu-bionic-systemd-docker:latest
    • From within the system container I ran
    docker run --name k3s-server-1 --hostname k3s-server-1 -p 6443:6443 -d rancher/k3s:v1.24.10-k3s1 server
    I can see the k3s-server container is running, but all pods are pending and it doesn't show the server node. In the logs of the k3s container I see this message/error:
    Waiting to retrieve agent configuration; server is not ready: "overlayfs" snapshotter cannot be enabled for "/var/lib/rancher/k3s/agent/containerd", try using "fuse-overlayfs" or "native": failed to mount overlay: operation not permitted
    Does anyone have experience setting up k3s in Sysbox? I haven't been able to find where I can adjust the snapshotter to native, for example.
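    (k3s exposes the snapshotter choice as a CLI flag on the server/agent, so a hedged sketch of forcing the native snapshotter in that setup, keeping everything else from the command above, would be:
    docker run --name k3s-server-1 --hostname k3s-server-1 -p 6443:6443 -d rancher/k3s:v1.24.10-k3s1 server --snapshotter=native
    The same value can also be set in /etc/rancher/k3s/config.yaml as snapshotter: native; whether native or fuse-overlayfs actually works under Sysbox is something to verify.)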
  • brash-controller-15153 (03/21/2023, 5:04 PM)
    Hi! Does anyone have experience with a k3s cluster where the k3s server has a public IP and the k3s agents are in a private network behind NAT (like a home network, for example)?
    • I opened the UDP ports 8472, 51820, 51821 and the TCP ports 6443, 10250 in my router to allow connections to the private IPs where the agents are located.
    • I also started the agents with the dynamic IP address given by my ISP, and the server with the public IP address.
    But somehow the traefik ingress controller or the Ingress is not able to forward incoming requests from the public URL staging.company.org to the agents in my private net. I also created other agents with public IPs and they are able to serve a whoami application through staging.company.org, but when the load balancer selects the pods running inside the nodes on the private net, it just hangs and no answer comes from the pods.
    v1.24.10+k3s1
  • creamy-pencil-82913 (03/21/2023, 5:33 PM)
    If you're seeing responses come from pods without NATing, that sounds a lot like https://github.com/k3s-io/k3s/issues/7096 - can you try the workaround mentioned in the comments?
  • brash-controller-15153 (03/21/2023, 6:24 PM)
    The iptables version installed is iptables v1.8.7 (nf_tables), so it should be ok.
  • plain-byte-79620 (03/22/2023, 11:11 AM)
    Is the service listening on the nodes on the private network? Maybe you have to configure your router to NAT those ports to the right private node IP.
  • brash-controller-15153 (03/22/2023, 11:18 AM)
    sorry… what do you mean by the service listening on the nodes? 🙂
  • plain-byte-79620 (03/22/2023, 11:20 AM)
    Are you not exposing the pods on the agents?
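    (A hedged way to check which side of the NAT the selected backends are on; the namespace and service name whoami are placeholders based on the application mentioned above:
    kubectl get pods -o wide -n <namespace>        # shows the node and pod IP for each whoami replica
    kubectl get endpoints whoami -n <namespace>    # shows which pod IPs the Service is load-balancing to
    If endpoints on the private-net nodes are listed but never answer, the issue is reachability of those pod IPs across the NAT rather than the Service definition itself.)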