Powered by Linen
k3s
  • s

    square-engine-61315

    08/16/2022, 11:40 AM
    👋 Hi, I'm new here. I'm trying to upgrade a k3s cluster, and I'm reading things like this in the docs:
    Please note that upgrades from experimental Dqlite to embedded etcd are not supported. If you attempt an upgrade it will not succeed and data will be lost.
    How do I know which one I'm using?
    ✅ 1
    k
    • 2
    • 6
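For anyone else hitting this: a quick, hedged way to check which datastore a server node is using, assuming the default data directory (`/var/lib/rancher/k3s`). If neither path is present on an older install, the k3s startup logs also state the datastore (e.g. a `Kine available at ...` line for sqlite/external databases, as seen in a log further down this page).

```shell
# Sketch: inspect the default k3s data dir to see which datastore is in use.
# Paths are assumptions based on a default install; run this on a server node.
db_dir="/var/lib/rancher/k3s/server/db"
if [ -d "$db_dir/etcd" ]; then
  datastore="embedded etcd"
elif [ -f "$db_dir/state.db" ]; then
  datastore="kine over sqlite"
else
  datastore="no local datastore found (external DB, or not a server node)"
fi
echo "$datastore"
```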
  • e

    eager-librarian-82484

    08/16/2022, 4:40 PM
    --cluster-init seems to be a one-time deal? Is there a way to just --cluster-init and quit? How harmful is it to keep the option all the time? What happens if cluster-init is on multiple servers? Also, after initial bootstrapping, can you specify a list of servers so that you aren't always trying to connect to a single server?
    ✅ 1
    b
    • 2
    • 4
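For later readers, a sketch of the usual pattern (the token and registration address are placeholders): `--cluster-init` only matters when bootstrapping the first server's embedded etcd and is effectively ignored once the datastore exists, so leaving it on that server is harmless; putting it on several fresh servers at once would bootstrap separate one-node clusters. Additional servers join with `server:` pointed at a stable registration address (a DNS name or load balancer in front of the servers) rather than a single node:

```yaml
# First server only (/etc/rancher/k3s/config.yaml) - bootstraps embedded etcd
cluster-init: true
token: <shared-token>

# Every additional server (its own config.yaml) - join via a stable address
server: https://k3s.example.internal:6443   # placeholder LB/DNS name
token: <shared-token>
```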
  • b

    brainy-postman-1566

    08/16/2022, 11:35 PM
    https://rancher-users.slack.com/archives/C3ASABBD1/p1660691848276429
  • p

    powerful-summer-42797

    08/17/2022, 8:59 AM
    Hello, does anyone use calico instead of flannel with k3s? I can observe that the node takes far more time to start with calico than with flannel (pods are running after 200s instead of 30s). Any clue on how to fix that?
    b
    • 2
    • 3
  • n

    nutritious-crayon-45180

    08/18/2022, 3:04 AM
    Did anyone try to create more than 3000 K3s clusters (preferably on Raspberry Pi) and manage them from Rancher? I wanted to know the scalability of K3s with Rancher. Any suggestions on how to manage clusters at this scale would help.
    k
    h
    w
    • 4
    • 10
  • r

    red-waitress-37932

    08/18/2022, 4:19 PM
    I'm trying to diagnose why a readiness probe mysteriously fails on my k3os cluster. Is there a log file or something where that would show up?
  • w

    wonderful-spring-28306

    08/19/2022, 7:59 AM
    Hey @creamy-pencil-82913, following on from this: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/ My aim is to create a k3s cluster where the control-plane nodes are not listed by
    kubectl get no
    Research shows me that I should start the kubernetes process with
    --register-node=false
    Is this possible with k3s or do you have a better suggestion on how I could achieve this?
    c
    l
    • 3
    • 14
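One possible approach, untested here: k3s can pass extra flags through to its embedded kubelet via `kubelet-arg`, so the `--register-node=false` flag from the linked kubelet reference could be forwarded that way. Whether this fully hides a k3s control-plane node is an assumption to verify, since k3s runs the kubelet in-process:

```yaml
# /etc/rancher/k3s/config.yaml on the control-plane node (sketch; untested)
kubelet-arg:
  - "register-node=false"
```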
  • r

    red-musician-8168

    08/19/2022, 8:27 PM
    This is what really brought me here . . .
  • h

    hundreds-state-15112

    08/19/2022, 10:00 PM
    Quick question, I see in the release notes for 1.24.3 that etcd snapshot compression has been added and is used “if enabled” but I can’t find anything in the docs on this new option and it’s not entirely clear from these commits whether it is on by default, or what key I’d use to configure it
    c
    • 2
    • 5
  • c

    creamy-pencil-82913

    08/20/2022, 5:04 AM
    No. It's not a real thing that you can ping. Services are just iptables port forwarding managed by kube-proxy.
    👍 1
  • f

    faint-airport-39912

    08/20/2022, 7:29 AM
    Hello everyone, I am trying to add a new master node to a highly available k3s setup (I am using Postgres as the database to store the k3s state), and here is the command I'm using to add the new master node:
    curl -sfL https://get.k3s.io | sh -s - server --datastore-endpoint='postgres://k3s:password@192.168.1013:5432/k3s' --token='K102xxxxx'
    It is giving me the following error; can someone look into it and help? FYI, this highly available k3s cluster was set up a year ago.
    systemd[1]: k3s.service: Service hold-off time over, scheduling restart.
    systemd[1]: k3s.service: Scheduled restart job, restart counter is at 97783.
    systemd[1]: Stopped Lightweight Kubernetes.
    systemd[1]: Starting Lightweight Kubernetes...
    sh[23500]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
    sh[23500]: /bin/sh: 1: /usr/bin/systemctl: not found
    k3s[23512]: time="2022-08-11T07:19:00Z" level=info msg="Starting k3s v1.24.3+k3s1 (990ba0e8)"
    k3s[23512]: time="2022-08-11T07:19:00Z" level=info msg="Configuring postgres database connection pooling: maxIdleConns=2, maxOpenConns=
    k3s[23512]: time="2022-08-11T07:19:00Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
    k3s[23512]: time="2022-08-11T07:19:00Z" level=info msg="Database tables and indexes are up to date"
    k3s[23512]: time="2022-08-11T07:19:00Z" level=info msg="Kine available at unix://kine.sock"
    k3s[23512]: time="2022-08-11T07:19:00Z" level=fatal msg="starting kubernetes: preparing server: bootstrap data already found and encryp
    systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
    systemd[1]: k3s.service: Failed with result 'exit-code'.
  • c

    chilly-telephone-51989

    08/20/2022, 8:22 PM
    I'm trying to make the database service available inside kubernetes for which I created the following service along with the endpoint
    apiVersion: v1
    kind: Service
    metadata:
      name: postgres
      namespace: xp
      labels:
        service: postgres
        type: database
    spec:
      type: ClusterIP
      selector:
        service: postgres
        type: database
      ports:
        - name: client
          protocol: TCP
          port: 5432
          targetPort: 5432
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: postgres
      namespace: xp
      labels:
        service: postgres
        type: database
    subsets:
      - addresses:
          - ip: 172.19.0.2
        ports:
          - name: client
            port: 5432
            protocol: TCP
    I tried verifying the service using busybox, but nslookup and ping both fail for postgres.xp; however, pinging the IP 172.19.0.2 works just fine. How do I expose the IP of an external service inside the cluster?
    s
    • 2
    • 5
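A likely culprit worth noting for later readers: when a Service has a `selector`, the endpoints controller owns its Endpoints and will overwrite a hand-written Endpoints object (here with an empty set, since no pods carry `service: postgres`). A Service fronting an external backend is normally written without a selector, e.g.:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: xp
spec:
  type: ClusterIP
  # no selector: the endpoints controller leaves the manual Endpoints alone
  ports:
    - name: client
      protocol: TCP
      port: 5432
      targetPort: 5432
```

Also, ClusterIPs are not pingable (they are iptables rules, not hosts), so verify with nslookup plus a TCP connection to port 5432 rather than ping.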
  • b

    billowy-needle-49036

    08/20/2022, 9:46 PM
    A 1-node setup like https://github.com/drewp/infra/blob/main/multikube.py I run a hello service but can't curl it from the one node; details, config, and logs here: https://gist.github.com/drewp/e381498de3b1cf8c51f26e74bc08625d (line 44 is what I think should work). A setup like this 1) works on blank Ubuntu VMs at DigitalOcean, and 2) used to work at home. Did I mess with some subtle IPv6 setting or something?
    • 1
    • 1
  • c

    chilly-telephone-51989

    08/21/2022, 6:10 PM
    I'm running k3s locally and I need an ingress that diverts traffic to a "gateway" service. This is my very simple ingress file; I'm unable to access it even with my local machine's IP. Is there something wrong with the file?
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ingress
      namespace: xplore
      annotations:
        kubernetes.io/ingress.class: traefik
    spec:
      rules:
        - http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: gateway
                    port:
                      number: 80
    ✅ 1
    k
    • 2
    • 12
  • k

    kind-nightfall-56861

    08/22/2022, 7:18 AM
    I'm looking for best practices for caching inside containers. I have a .NET 6 backend which uses MemoryCache to store certain keys and values to improve overall performance and lessen the load on the 3rd-party backend. Turns out that MemoryCache is pod-specific (duh), and it sadly took me way too long to figure out why I was getting 5 different cache responses (I had 5 active pods). How have people resolved such an issue? Do you divert to a volume / database?
    ✅ 1
    q
    • 2
    • 4
  • m

    many-church-13850

    08/23/2022, 4:27 AM
    Hi guys, is there any documentation on how to deploy a k3s cluster using Terraform?
    b
    • 2
    • 2
  • e

    elegant-article-67113

    08/23/2022, 12:41 PM
    Is there a way to view what install options were used on k3s after the fact?
    s
    b
    • 3
    • 3
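A sketch for answering this on a node, assuming the default install-script locations: the script renders CLI flags into the systemd unit and an env file, and `/etc/rancher/k3s/config.yaml` holds any file-based options. The paths are assumptions for a standard systemd install:

```shell
# Print the files where a default k3s install records its options.
found=0
for f in /etc/systemd/system/k3s.service \
         /etc/systemd/system/k3s.service.env \
         /etc/rancher/k3s/config.yaml; do
  if [ -f "$f" ]; then
    echo "== $f =="
    cat "$f"
    found=$((found + 1))
  else
    echo "$f: not present"
  fi
done
echo "files found: $found"
```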
  • m

    melodic-hamburger-23329

    08/25/2022, 4:47 AM
    k3s still seems to be using the 1.5 series of containerd. Is there something preventing an upgrade to 1.6? I noticed there's a 1.6.6 tag in the fork repo, but there doesn't seem to be a k3s release using it. Also, is there any documentation on the differences between official containerd and the fork?
    c
    • 2
    • 2
  • r

    refined-toddler-64572

    08/25/2022, 8:02 PM
    Greetings.. using K3s (`1.24.3+k3s1`) on Ubuntu 22.04.1 with ZFS (`2.1.4`) and external containerd (`1.5.9`). Works GREAT, no day-to-day issues. What I've been struggling with is that when K3s uses an external containerd, kubelet / cadvisor is a bit wonky with metrics: the `image=` and `container=` labels are missing, which breaks many dashboards. I can't tell if this is a K3s / kubelet issue, a cadvisor issue, or a containerd issue.. not sure where to seek advice. Kube-Prometheus-Stack is deployed and works well, as long as a dashboard doesn't try to use something like `container_cpu_usage_seconds_total{image!=""}`, which returns an empty set because the image reference is missing; it does return data when K3s uses the bundled containerd. Suggestions welcomed.
  • c

    creamy-river-81697

    08/25/2022, 8:50 PM
    ☝️ For more context on this issue: https://github.com/dotdc/grafana-dashboards-kubernetes/issues/18 Seems that the problem comes from using an external containerd. @refined-toddler-64572 is doing this because he's using ZFS and the embedded containerd in k3s doesn't seem to support it (maybe due to the overlay FS). Is there a way to make k3s work with ZFS? If yes, how? If not, is there any plan to support ZFS in the future?
    r
    • 2
    • 3
  • m

    melodic-hamburger-23329

    08/26/2022, 1:11 AM
    If I want to run `nerdctl build` inside a container running in k3s, what should I do? k3s doesn't bundle buildkitd, I think, so I guess I need to set up buildkitd manually. k3s bundles containerd, so I'm not quite sure how this manual applies: https://github.com/containerd/nerdctl/blob/master/docs/build.md
    • 1
    • 2
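A minimal sketch of the usual pattern, assuming you run your own buildkitd in-cluster and point `nerdctl`/`buildctl` at it with `BUILDKIT_HOST`; the image tag, port, and privileged mode are assumptions to check against the nerdctl build docs linked above:

```yaml
# Sketch: run buildkitd in-cluster, reachable over TCP from build pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: buildkitd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: buildkitd
  template:
    metadata:
      labels:
        app: buildkitd
    spec:
      containers:
        - name: buildkitd
          image: moby/buildkit:master   # pin a real release tag in practice
          args: ["--addr", "tcp://0.0.0.0:1234"]
          securityContext:
            privileged: true   # rootless setups exist but need extra config
---
apiVersion: v1
kind: Service
metadata:
  name: buildkitd
spec:
  selector:
    app: buildkitd
  ports:
    - port: 1234
```

Then something like `BUILDKIT_HOST=tcp://buildkitd:1234 nerdctl build .` from a pod that can reach the Service; `BUILDKIT_HOST` is how nerdctl finds a remote buildkitd.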
  • f

    flat-fish-63205

    08/26/2022, 12:26 PM
    Hi, we started seeing this issue https://github.com/kubernetes/kubernetes/issues/97129 in recent versions of k3s, v1.22 and 1.24. Can anyone please help resolve this?
    while true; do kubectl exec -it <pod_name> /bin/ls; done
    Running the above command gives an intermittent error:
    Error from server: error dialing backend: EOF
  • c

    clever-air-65544

    08/26/2022, 3:46 PM
    Latest k3s weekly report is up! https://github.com/k3s-io/k3s/discussions/6039
  • s

    stale-fish-49559

    08/26/2022, 4:45 PM
    Hi, I am able to boot k3s on Yocto without any issues that I know of, but coredns just keeps printing `still waiting on: kubernetes` and eventually times out on the API server:
    Kubernetes API connection failure: Get "https://10.43.0.1:443/version": dial tcp 10.43.0.1:443: connect: network is unreachable
    Any ideas on how to approach this problem? A DNS cache issue, flannel?
    • 1
    • 8
  • p

    prehistoric-diamond-4224

    08/27/2022, 11:14 AM
    Hi there! I have a bit of a problem at hand: one of our less experienced fellows upgraded k3s by several versions at once, from before 1.22 to 1.24. This of course caused mayhem for a lot of workloads and clients that relied on older APIs. Traefik ingress is one of those; as of now all of our ingresses are down. Since there was an instance of traefik v1.81 present on the cluster at the time of upgrade, k3s did not touch it, but of course now traefik complains that it cannot find `*v1beta1.Ingress`, which was removed in 1.22. Is there a traefik v1 version I can upgrade to that supports `v1.Ingress`? I'd rather not update traefik to v2 since I'd have to migrate all of our ingresses, many of which are managed by helm charts. Would nginx be compatible with standard Ingress resources? Is a migration from traefik v1.81 to nginx ingress feasible without modifying existing ingress resources? Thank you
  • m

    melodic-hamburger-23329

    08/29/2022, 3:17 AM
    How do I upgrade from v1.24.3+k3s1 to v1.24.4+k3s1? Is just `stop k3s => replace k3s binary => start k3s` enough?
    h
    r
    w
    • 4
    • 4
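For later readers, the two paths people usually take, printed as a sketch rather than executed (the exact release-asset URL is an assumption to double-check against the GitHub release page):

```shell
# Sketch of two in-place upgrade paths for a k3s server binary.
# Printed here rather than run; execute the relevant commands on the node.
steps='
# Option 1: re-run the installer pinned to the target version
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.24.4+k3s1" sh -

# Option 2: manual binary swap (what stop/replace/start amounts to)
systemctl stop k3s
curl -Lo /usr/local/bin/k3s "https://github.com/k3s-io/k3s/releases/download/v1.24.4%2Bk3s1/k3s"
chmod +x /usr/local/bin/k3s
systemctl start k3s
'
echo "$steps"
```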
  • a

    average-monitor-43003

    08/29/2022, 4:31 PM
    I got this error this morning from `pod/helm-install-traefik-q76r7`:
    + echo 'Installing helm_v3 chart'
    + helm_v3 install --set-string global.systemDefaultRegistry= traefik https://10.43.0.1:443/static/charts/traefik-10.19.300.tgz --values /config/values-01_HelmChart.yaml --values /config/values-10_HelmChartConfig.yaml
    Error: INSTALLATION FAILED: cannot re-use a name that is still in use
    k3s-1.24.3
    kube-system    pod/helm-install-traefik-crd-f8qps                      0/1     Completed          0                  18m
    kube-system    pod/helm-install-traefik-q76r7                          0/1     CrashLoopBackOff   5 (63s ago)        4m32s
    kube-system   job.batch/helm-install-traefik-crd                   1/1           14m        18m
    kube-system   job.batch/helm-install-traefik                       0/1           4m35s      4m39s
    ✅ 1
    c
    • 2
    • 16
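A hedged sketch of the usual cleanup for `cannot re-use a name that is still in use`: Helm believes a release named traefik still exists, recorded in a release secret (`sh.helm.release.v1.<name>.v<revision>`). The revision suffix below is hypothetical; list first, then delete the stale record and the failed job so the k3s helm controller can recreate it:

```shell
# Sketch: commands printed, not executed; run against the affected cluster.
fix='
# Inspect the stuck release record
kubectl -n kube-system get secrets | grep sh.helm.release.v1.traefik

# Remove the stale record and the failed job, then let it retry
kubectl -n kube-system delete secret sh.helm.release.v1.traefik.v1
kubectl -n kube-system delete job helm-install-traefik
'
echo "$fix"
```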
  • g

    gifted-morning-94496

    08/29/2022, 4:54 PM
    #k3s Has anybody built a k3s cluster through Terraform on vSphere? Any documentation would be really great.
  • m

    microscopic-smartphone-81961

    08/29/2022, 5:57 PM
    I'm having an issue where k3s v1.20.11+k3s2 with Rancher 2.5.x, in a cluster of 2 VM nodes with MySQL as its external database, is using all available CPU on the host and showing anywhere from 1500 to 12000 sockets in TIME_WAIT to the MySQL server port, varying by sometimes 200+ connections per second. Rancher pods are showing 100%+ CPU, Traefik is using 50%, MySQL is using 50%, and k3s-server is at times using > 600% CPU. Rancher only manages one 3-node bare-metal RKE cluster. I'm unable to determine what is causing the massive number of TCP connections and/or the high CPU usage within the k3s cluster. Both clusters were running fine until Saturday, when the Rancher k3s cluster pretty much blew up and hasn't been able to recover since. Both k3s nodes are complaining about a possible SYN flood due to the number of connections, but traffic is all internal.
  • m

    melodic-hamburger-23329

    08/30/2022, 3:02 AM
    Any idea why I'm not seeing any resources in kubernetes-dashboard running in k3s, even though I'm logging in with a token generated for a cluster-admin account? I set up the dashboard using the helm chart.
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: admin
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: admin
      namespace: kube-system
    `kubectl get clusterrole cluster-admin -oyaml`:
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      annotations:
        rbac.authorization.kubernetes.io/autoupdate: "true"
      creationTimestamp: "2022-08-29T05:57:20Z"
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
      name: cluster-admin
      resourceVersion: "72"
      uid: f381f7e3-9f54-4da5-bcc4-39745a2e8bbd
    rules:
    - apiGroups:
      - '*'
      resources:
      - '*'
      verbs:
      - '*'
    - nonResourceURLs:
      - '*'
      verbs:
      - '*'
    • 1
    • 1
Not sure why, but the issue seems to have been solved after I deployed the dashboard to the kubernetes-dashboard namespace instead of kube-system.