Powered by Linen
k3s

    proud-finland-41550

    08/02/2022, 5:08 AM
Hi, I'm having trouble finding any information about the Cloud Controller Manager that ships with k3s. Is it configurable? What does it work with?

    proud-plumber-22060

    08/03/2022, 4:27 PM
Hi all. I've been a long-time k3s user. I'm trying to stand up a new k3s single-node cluster on DigitalOcean. I have an older 1.18 install that's worked fine, but it needs to be upgraded. I haven't been able to get k3s to work at all, on Debian, Ubuntu, and CoreOS. I have basic issues with
    kubectl get nodes
    it will just timeout. At times I can't even install the k8s example nginx app. I've installed k3s via the standard k3s install script, no options set.
    sudo k3s check-config
passes (there are two missing modules). The logs are loaded with slow-SQL errors and errors like this:
: Get "https://127.0.0.1:6443/api/v1/nodes/ubuntu-s-1vcpu-1gb-sfo3-01?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
    and many other endpoints.
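For timeouts like this, a first step (assuming the default systemd-based install) is usually to check the k3s service logs and the apiserver health endpoints directly on the node. The "slow sql" messages point at the embedded SQLite datastore, so disk latency is worth checking too; for what it's worth, the node name in the error suggests a 1 vCPU / 1 GB droplet, which is at the very low end for a k3s server.

```shell
# Follow the k3s server logs (systemd installs log to journald)
sudo journalctl -u k3s -f

# Ask the apiserver for a verbose health report, directly on the node
sudo k3s kubectl get --raw='/readyz?verbose'

# "slow sql" usually means the embedded SQLite datastore is starved for
# disk I/O; iostat (from the sysstat package) shows per-device latency
iostat -x 1 5
```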

    broad-tomato-47786

    08/04/2022, 1:57 AM
Hi folks. I'm trying to set up a multiple-node (multiple k8s nodes) cluster on a single Linux machine with k3s. In the k3s docs (https://rancher.com/docs/k3s/latest/en/quick-start/) I failed to find any information about such a setup.

    broad-tomato-47786

    08/04/2022, 1:58 AM
I'm wondering if the following steps could achieve the goal above:
1. set up a k3s cluster with
curl -sfL https://get.k3s.io | sh -
2. set up another Kubernetes node by running
k3s agent ...
on the same machine

    broad-tomato-47786

    08/04/2022, 1:59 AM
I haven't tried it yet, but I would like to confirm that it works before applying the changes to my host machine. Any help would be appreciated!
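For what it's worth, one common way to get several k3s nodes onto a single Linux machine is k3d (a separate community tool, not part of k3s itself), which runs every node as a Docker container; running a second `k3s agent` directly on the same host as the server generally conflicts over ports and data directories. A minimal sketch, assuming Docker and k3d are installed:

```shell
# One server plus two agents, each "node" in its own Docker container
k3d cluster create demo --servers 1 --agents 2

# The nodes show up as if they were separate machines
kubectl get nodes
```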

    flat-engine-95579

    08/04/2022, 6:42 AM
I want to use k3s as an agent on an old Raspberry Pi (more specifically, the Raspberry Pi 1 Model B+). The Pi is running Raspberry Pi OS (formerly known as Raspbian) with an armv6l architecture. My first thought was to build k3s from source on the Pi, but I ran into memory problems. After adding a swap file the build went fine (although it took several days), but the binaries are still not the correct architecture and give an
illegal instruction
when executed. My second try is to build it on a much more powerful x86_64 machine. I have tried using qemu and docker to emulate the Pi's architecture, but some of the binaries are still the wrong architecture. The build process for k3s with dapper is also somewhat contrived, and running the build inside a (qemu) docker container makes it kind of hard to "trick" the whole build process into thinking it's on an old Raspberry Pi. Would it be easier to try and cross-compile the binaries? Also, what is https://github.com/k3s-io/k3s-root used for exactly? Should I also build this?

    refined-magician-25478

    08/05/2022, 3:06 PM
    Hi everyone, I implemented using TLS for a local registry running in the cluster but am confused as to why the registry.yaml configuration file that containerd references needs to have the key. In this case I would consider containerd the client which would only need the public cert/ca and not the private key. Does anyone know why this is or does anyone know of documentation that explains the need for the key? I did some google searches but nothing seemed to explain why the key is needed.
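For reference, in k3s's registries.yaml the `key_file` (together with `cert_file`) is only needed for mutual TLS, i.e. when the registry requires the client to present a certificate; for ordinary one-way TLS, `ca_file` alone should be enough. A sketch, with the registry name and file paths as placeholders:

```shell
# /etc/rancher/k3s/registries.yaml -- read by k3s's containerd at startup
sudo tee /etc/rancher/k3s/registries.yaml >/dev/null <<'EOF'
configs:
  "registry.example.local:5000":
    tls:
      ca_file: /etc/rancher/k3s/registry-ca.crt
      # cert_file/key_file are only needed when the registry enforces
      # mutual TLS and must authenticate containerd as a client:
      # cert_file: /etc/rancher/k3s/client.crt
      # key_file:  /etc/rancher/k3s/client.key
EOF
sudo systemctl restart k3s
```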

    important-art-22288

    08/05/2022, 4:59 PM
    Hey team! I’m running k3s on a raspberry pi and trying to set up a networking configuration where some ingresses are exposed to the public internet (e.g. a public-facing API) and others that are only exposed to within my local network. My first pass was using Traefik’s IP Whitelist to try and only whitelist internal CIDR ranges, but after a lot of troubleshooting I found that doesn’t work due to the limitations of forwarding the client IP
    x-forwarded-for
    headers through the CNI — all requests were showing up internally with the internal IP of the CNI in the header. It looks like there are some complicated workarounds with Flannel there but I ditched that route given the complexity. Is there any good way to do this, or am I thinking about it wrong? All I need to do is to have some ingresses only exposed to the local network, and others exposed to the public internet, if possible. The current Pi networking config just forwards 443 and 80 through the router configuration, but if there’s a better way to do that I’m open to it

    mysterious-toddler-89639

    08/05/2022, 5:22 PM
Hi all! I'm new here. I'm trying to install k3s:
    [root@ip-172-21-1-217 rocky]# k3s --version
    k3s version v1.24.3+k3s1 (990ba0e8)
    go version go1.18.1
The installation script runs successfully. I disabled SELinux beforehand to make the k3s setup a little faster and easier, but at the moment I'm getting this error from the metrics service:
E0805 17:15:09.404108    3602 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.42.0.24:4443/apis/metrics.k8s.io/v1beta1: Get "https://10.42.0.24:4443/apis/metrics.k8s.io/v1beta1": proxy error from 127.0.0.1:6443 while dialing 10.42.0.24:4443, code 503: 503 Service Unavailable
W0805 17:15:10.410003    3602 handler_proxy.go:102] no RequestInfo found in the context
W0805 17:15:10.410002    3602 handler_proxy.go:102] no RequestInfo found in the context
E0805 17:15:10.410062    3602 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
E0805 17:15:10.410084    3602 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
I0805 17:15:10.410093    3602 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0805 17:15:10.412183    3602 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
What would be the best way to debug this? Or, if you have had this issue in the past, what was the solution? Thanks in advance.
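Generic metrics-server triage (not from the thread) would start by checking whether the aggregated APIService is registered and whether the metrics-server pod itself is healthy; a "proxy error while dialing" a 10.42.x.x address usually means the server can't reach the pod network at all, so the CNI is worth a look too.

```shell
# Availability of the aggregated API the errors complain about
kubectl get apiservice v1beta1.metrics.k8s.io

# State and logs of the metrics-server deployment k3s bundles
kubectl -n kube-system get pods -l k8s-app=metrics-server
kubectl -n kube-system logs deploy/metrics-server
```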

    rich-crowd-19730

    08/06/2022, 7:33 PM
It looks like when running k3s with embedded etcd, it looks for other nodes to form an HA cluster. Is there a way to use etcd with k3s on a single node, without HA?
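For reference, k3s can run embedded etcd on a single server: the `--cluster-init` flag switches the datastore from the default SQLite to etcd without requiring any additional nodes (more servers can join later, or never). A sketch using the standard install script:

```shell
# Single node with embedded etcd instead of the default SQLite datastore
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# The node should come up and report control-plane/etcd roles
sudo k3s kubectl get nodes -o wide
```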

    best-wall-17038

    08/07/2022, 5:43 PM
Hi all, I would like to deploy Postgres in my local cluster, but I am a bit confused about how I should configure the PV. Can someone help me please? Do I have to create a StorageClass, or can I use
    local-path
    ?
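For a local cluster, the `local-path` StorageClass that k3s ships (and marks as the default) is usually enough: a plain PVC like the sketch below gets a hostPath-backed volume provisioned automatically. The claim name and size are placeholders; note that local-path volumes are node-local, so the Postgres pod stays pinned to the node holding the data.

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data            # placeholder name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path   # may be omitted, local-path is the default in k3s
  resources:
    requests:
      storage: 5Gi
EOF
```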

    important-art-22288

    08/07/2022, 7:39 PM
    Hey team, I’ve got a k3s cluster set up on my local network that I’m exposing apps internally and externally to. Internally, I’m accessing the apps via the cluster local IP, or through
    hostname.local
    syntax which is working correctly — however, I’m trying to host several apps at once and running into issues with path name collision. Since I don’t have access to adding subdomains given that I’m hosting locally (would rather not go the route of host files) I’m trying to deploy apps under
    /path/{subpaths}
    but running into issues with the names not resolving. For example with the traefik dashboard I was hoping to deploy an ingress under
    hostname.local/traefik/dashboard
rather than `hostname.local/dashboard`, to avoid the name clash. Is something like this possible or do I need to pursue a different route?
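Path-based hosting like this is possible with the Traefik v2 that recent k3s releases bundle, but most apps need the path prefix stripped before the request reaches them (and apps that emit absolute URLs can still misbehave, which would explain names not resolving). A sketch with placeholder names, pairing a `stripPrefix` Middleware with an Ingress; the annotation value follows Traefik's `<namespace>-<middleware-name>@kubernetescrd` convention:

```shell
kubectl apply -f - <<'EOF'
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: strip-myapp
spec:
  stripPrefix:
    prefixes: ["/myapp"]
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    traefik.ingress.kubernetes.io/router.middlewares: default-strip-myapp@kubernetescrd
spec:
  rules:
  - host: hostname.local
    http:
      paths:
      - path: /myapp
        pathType: Prefix
        backend:
          service:
            name: myapp          # placeholder ClusterIP service
            port:
              number: 80
EOF
```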

    thousands-advantage-10804

    08/07/2022, 10:52 PM
Who is the team that handles the k3s yum repo?
    Error: Transaction test error:
      package k3s-selinux-1.1-1.el8.noarch does not verify: Header V4 RSA/SHA1 Signature, key ID e257814a: BAD

    cuddly-egg-57762

    08/08/2022, 12:17 PM
Hello people. I'm trying the offline installation of k3s, and I would also like to use Cilium as the network provider (and MetalLB, but that's another story). The k3s images are correctly imported by putting the airgap tar file into
/var/lib/rancher/k3s/agent/images/
, but when I do the same thing with the cilium-operator and cilium "client" image tar files, they don't seem to be imported automatically during cluster init. Does the auto-import only work for the k3s airgap package? Or am I doing something wrong? I put here also the listing of the images directory and the crictl image list after the k3s cluster init:
    [root@rocky1 srv]# ls /var/lib/rancher/k3s/agent/images/
    cilium-operator.tar  cilium.tar  k3s-airgap-images-amd64.tar.gz  metallb-controller.tar  metallb-speaker.tar
    [root@rocky1 srv]# crictl image list
    IMAGE                                        TAG                    IMAGE ID            SIZE
docker.io/rancher/klipper-helm               v0.7.3-build20220613   38b3b9ad736af       239MB
docker.io/rancher/klipper-lb                 v0.3.5                 dbd43b6716a08       8.51MB
docker.io/rancher/local-path-provisioner     v0.0.21                fb9b574e03c34       35.3MB
docker.io/rancher/mirrored-coredns-coredns   1.9.1                  99376d8f35e0a       49.7MB
docker.io/rancher/mirrored-library-busybox   1.34.1                 62aedd01bd852       1.47MB
docker.io/rancher/mirrored-library-traefik   2.6.2                  72463d8000a35       103MB
docker.io/rancher/mirrored-metrics-server    v0.5.2                 f73640fb50619       65.7MB
docker.io/rancher/mirrored-pause             3.6                    6270bb605e12e       686kB
    Thanks a lot for your help!
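For reference, the images directory is scanned when the agent starts, so archives added afterwards need a restart or a manual import; importing by hand is also a quick way to check whether the archive files themselves are the problem:

```shell
# Manually import the archives into k3s's containerd
sudo k3s ctr images import /var/lib/rancher/k3s/agent/images/cilium.tar
sudo k3s ctr images import /var/lib/rancher/k3s/agent/images/cilium-operator.tar

# They should now appear alongside the k3s airgap images
sudo crictl images | grep cilium
```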

    bumpy-agency-19657

    08/08/2022, 8:50 PM
    Hello team, I have a k3s cluster with 1 master node and 1 worker. When I try to run the exec command on a pod running on the worker node:
    "kubectl exec -it myapp-deploy2-859f8f4dfc-9xv8v -- ls"
I get the following error: "Error from server: error dialing backend: x509: certificate is valid for localhost, not worker". How can I execute an exec command in a pod on the worker node?

    jolly-waitress-71272

    08/08/2022, 9:04 PM
I have 4 bare-metal machines I want to commit to a k3s cluster. How many should be masters?

    melodic-hamburger-23329

    08/09/2022, 7:37 AM
    nerdctl system prune --all
    doesn’t seem to work with k3s. `$ nerdctl version`:
    WARN[0000] unable to determine buildctl version: exec: "buildctl": executable file not found in $PATH
    WARN[0000] unable to determine runc version: exec: "runc": executable file not found in $PATH
    Client:
     Version:	v0.22.2
     OS/Arch:	linux/amd64
     Git commit:	2899222cb0715f1e5ffe356d10c3439ee8ee3ba4
     builctl:
      Version:
    
    Server:
     containerd:
      Version:	v1.6.6-k3s1
      GitCommit:
     runc:
      Version:
    `nerdctl system prune --all`:
    WARNING! This will remove:
      - all stopped containers
      - all networks not used by at least one container
      - all images without at least one container associated to them
    
    Are you sure you want to continue? [y/N] y
FATA[0000] needs CNI plugin "firewall" to be installed in CNI_PATH ("/var/lib/rancher/k3s/data/current/bin"), see https://github.com/containernetworking/plugins/releases: exec: "/var/lib/rancher/k3s/data/current/bin/firewall": stat /var/lib/rancher/k3s/data/current/bin/firewall: no such file or directory
    `cat /etc/nerdctl/nerdctl.toml`:
    address        = "unix:///run/k3s/containerd/containerd.sock"
namespace      = "k8s.io"
    snapshotter    = "stargz"
    cgroup_manager = "systemd"
    cni_path       = "/var/lib/rancher/k3s/data/current/bin"
    cni_netconfpath = "/var/lib/rancher/k3s/agent/etc/cni/net.d"
    With plain v1.6.6 containerd (RD 1.5.0 in containerd mode with k8s disabled) the commands executed without issues.

    cool-forest-29147

    08/09/2022, 9:31 AM
Hey folks. We're looking at setting up a k3s cluster on a slightly unusual setup: we have two powerful machines with 100G NICs, which we can direct-attach to each other (i.e. no switch), plus regular 10G NICs; and a number of low-powered machines, each with a regular 10G NIC. All running Ubuntu. How would you go about setting this up? I'm thinking of trying to bridge a 100G and a 10G NIC on one of the big machines to get everything on the same subnet and keep k3s happy. Would this work? Is there a better topology for k3s?

    crooked-elephant-85769

    08/09/2022, 5:49 PM
Hi all, I'm running k3s / containerd in a payload cluster and need to exec a root shell inside an existing container. Unfortunately, no sudo or su is available, so I'm looking for a way to start a shell process as user "root". "crictl exec -ti <id> sh" runs the shell successfully, albeit not as root. No option like "-u 0" seems to be available. Any idea how to proceed? # k3s --version k3s version v1.22.10+k3s1 (b004f4d5) go version go1.16.10
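One workaround that doesn't depend on crictl growing a user flag: resolve the container's init PID from the runtime and enter its namespaces with nsenter, which gives a root shell inside the container's view of the world. A sketch, with `<container-id>` to be filled in; `kubectl debug` with an ephemeral container is another option, but on v1.22 ephemeral containers are still behind a feature gate.

```shell
# Find the container's init PID via the CRI runtime
PID=$(sudo crictl inspect --output go-template --template '{{.info.pid}}' <container-id>)

# Enter its mount/UTS/IPC/network/PID namespaces as root
sudo nsenter -t "$PID" -m -u -i -n -p -- sh
```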

    aloof-oyster-85392

    08/10/2022, 7:27 AM
    In case this needs attention, I am unable to install the latest version of k3s on my Jetson Nano device. Previous versions can be installed, though, using this command:
curl -sfL https://get.k3s.io | K3S_URL=https://10.0.0.90:6443/ K3S_TOKEN=PRE_SHARED_TOKEN_KEY INSTALL_K3S_EXEC="--docker" INSTALL_K3S_VERSION="v1.23.9+k3s1" sh -s -
    Some information about my Jetson Nano:
    ubuntu@w5:~$ docker info
    Client:
     Context:    default
     Debug Mode: false
     Plugins:
      app: Docker App (Docker Inc., v0.9.1-beta3)
      buildx: Docker Buildx (Docker Inc., v0.8.2-docker)
      compose: Docker Compose (Docker Inc., v2.6.0)
    
    Server:
     Containers: 9
      Running: 6
      Paused: 0
      Stopped: 3
     Images: 592
     Server Version: 20.10.17
     Storage Driver: overlay2
      Backing Filesystem: extfs
      Supports d_type: true
      Native Overlay Diff: true
      userxattr: false
     Logging Driver: json-file
     Cgroup Driver: cgroupfs
     Cgroup Version: 1
     Plugins:
      Volume: local
      Network: bridge host ipvlan macvlan null overlay
      Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
     Swarm: inactive
     Runtimes: runc io.containerd.runc.v2 io.containerd.runtime.v1.linux nvidia
     Default Runtime: nvidia
     Init Binary: docker-init
     containerd version: 0197261a30bf81f1ee8e6a4dd2dea0ef95d67ccb
     runc version: v1.1.3-0-g6724737
     init version: de40ad0
     Security Options:
      seccomp
       Profile: default
     Kernel Version: 4.9.253-tegra
     Operating System: Ubuntu 18.04.6 LTS
     OSType: linux
     Architecture: aarch64
     CPUs: 4
     Total Memory: 3.863GiB
     Name: w5
     ID: FVGZ:HQ4F:6UZT:JDNG:CWYN:SUFJ:RM2P:MI5U:44OS:WA4R:ZMT2:6QCT
     Docker Root Dir: /var/lib/docker
     Debug Mode: false
     Username: aslanpour
 Registry: https://index.docker.io/v1/
     Labels:
     Experimental: false
     Insecure Registries:
      127.0.0.0/8
     Live Restore Enabled: false

    cuddly-egg-57762

    08/10/2022, 9:07 AM
    hello, I'm having problems using the manifest file to deploy cilium after cluster init because it find that the node is not ready. Of course it's not ready because the network is missing so the job pod which should install cilium cannot be scheduled 😄 :
    0/1 nodes are available: 1 node(s) had untolerated taint {<http://node.kubernetes.io/not-ready|node.kubernetes.io/not-ready>: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
    am I missing something? Or should I deploy cilium in a different way and then deploy other helm resource using the manifests? Thanks for your help

    stale-fish-49559

    08/10/2022, 7:22 PM
Hi, I am limited to a sysvinit system. I want to run k3s single-node, then multi-node. Can anyone provide info to quickly get started? I am looking at https://rancher.com/docs/k3s/latest/en/installation/install-options/#configuration-file to get started, but I cannot see where the kubeconfig is being written. Any help would be appreciated.
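For reference, the server writes the admin kubeconfig to /etc/rancher/k3s/k3s.yaml by default; the location and file mode can be steered through the config file from the linked docs (or the matching CLI flags). A sketch, with the target path as a placeholder:

```shell
# Default location of the generated kubeconfig on the server node
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes

# Or steer it elsewhere via the config file, before starting the server
sudo tee -a /etc/rancher/k3s/config.yaml >/dev/null <<'EOF'
write-kubeconfig: /home/user/.kube/config
write-kubeconfig-mode: "0644"
EOF
```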

    cuddly-egg-57762

    08/11/2022, 8:14 AM
Good morning, good people. Is there a way to pass a local chart tar archive to the helm-install job pod which is responsible for deploying a HelmChart manifest? I want to install MetalLB in an offline environment, so I put the images under
/var/lib/rancher/k3s/agent/images/
and specified a manifest file in
/var/lib/rancher/k3s/server/manifests/
. The problem, of course, since it is air-gapped, is that the job pod reports the following error:
Error: failed to download "metallb/metallb" at version "0.12.1"
How can I provide the job pod with a tar.gz containing the chart definition? Is that possible? Thanks in advance for your help
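One option, available in recent k3s/helm-controller releases (worth verifying against your version), is the HelmChart CRD's `chartContent` field, which takes the chart tarball base64-encoded inline, so the job pod never needs network access. A sketch, assuming the chart archive is already on the server:

```shell
# Inline the chart archive into the HelmChart manifest as base64
cat > /var/lib/rancher/k3s/server/manifests/metallb.yaml <<EOF
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: metallb
  namespace: kube-system
spec:
  chartContent: $(base64 -w0 metallb-0.12.1.tgz)
  targetNamespace: metallb-system
EOF
```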

    incalculable-air-54033

    08/11/2022, 12:44 PM
    Hello 🙂 Which version of K3s contains the most updated versions of the following packages please?
docker.io/rancher/klipper-helm
docker.io/rancher/klipper-lb
docker.io/rancher/local-path-provisioner
docker.io/rancher/mirrored-coredns-coredns
docker.io/rancher/mirrored-library-busybox
docker.io/rancher/mirrored-library-traefik
docker.io/rancher/mirrored-metrics-server
docker.io/rancher/mirrored-pause

    stale-fish-49559

    08/11/2022, 3:59 PM
Hi, I am getting a lot of cgroup-related errors on Yocto. Any idea how to fix this?
level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/cpuacct/kubepods/burstable/pod0ec7b43f-bdb3-4601-8b5b-a6353c88ce93/69bf1ec361729a88f1ccbe8e5566e4d7f0a137b59e1602f1ee3bd38cfb5ec5a4: device or resource busy"

    most-crowd-3167

    08/11/2022, 8:21 PM
    Hi I'm using the helm-controller embedded in k3s, but it seems like deleting a HelmChart does not actually uninstall the release according to
    helm list
    . I want to look at the logs of helm-controller, but it doesn't seem like there is actually a pod running for this.

    limited-traffic-81887

    08/12/2022, 9:24 PM
Hello! I have an "at-home" cluster set up on 3 Raspberry Pis; the master and 2 worker nodes are all healthy and "Ready". And I have a healthy running pod printing something every 2 seconds on a loop. The issue is that, from either the master or my desktop kubectl, I am getting a proxy 503 error when running
kubectl logs hello-raspi
The full error looks like this:
Error from server: Get "https://10.0.0.142:10250/containerLogs/default/hello-raspi/hello-raspi": proxy error from 127.0.0.1:6443 while dialing 10.0.0.142:10250, code 503: 503 Service Unavailable
Can anyone point me in the right direction? Google is tough on a specific error like this.

    kind-nightfall-56861

    08/14/2022, 10:33 PM
Hey, similar to many other people I have given up on hiring a provider to host my software, due to cost and limitations. So, a few weeks ago I abandoned my web host and my in-house Windows server (power cost) and transferred everything to a Raspberry Pi mini-cluster. While I'm still in the transition of moving my applications to Docker containers, I'm already running into a few problems with the software that I did migrate. At the moment the biggest pain in the * is the way Ingress works, and I'm not sure if that's my mistake or a limitation of Ingress. So I'm using Cloudflare as my (proxied) DNS resolver, and Ingress to resolve requested hosts to specific pods. While you'd think it's fine... it's acting kind of weird.
1. My pod has a NodePort service, exposing port 80 and targeting port 31199 (haven't figured out SSL certs yet)
2. Next I have an Ingress set up for my host (http://preview.krakensoftware.eu)
But strangely, when I navigate to http://preview.krakensoftware.eu/ it doesn't work and returns either a Cloudflare error screen or a connection-refused error. It only works if I navigate to the chosen port, http://preview.krakensoftware.eu:31199/, which bugs me: I thought that this was one of the things that Ingress should resolve. Does anyone have an idea?

    average-photographer-35368

    08/15/2022, 5:40 PM
I'm having some problems installing k3s (via the bash installer script). It looks like the channel URL that the install script uses is returning a 503:
ubuntu@ip-10-0-1-25:~/mentha/k3s-garden$ curl https://update.k3s.io/v1-release/channels/stable
    <html>
    <head><title>503 Service Temporarily Unavailable</title></head>
    <body>
    <center><h1>503 Service Temporarily Unavailable</h1></center>
    <hr><center>openresty/1.15.8.1</center>
    </body>
    </html>

    fierce-monkey-81592

    08/15/2022, 10:58 PM
I’m trying to install Rancher on a single server, just to make running some dev-ops tooling on a single box easier. I’ve followed the instructions here: https://rancher.com/docs/rancher/v2.5/en/installation/other-installation-methods/single-node-docker/ but it doesn’t seem to work. It looks like coredns won’t deploy, and I can’t seem to create any other deployments either… they all just sit in “Pending”…. Not sure where to begin troubleshooting this issue!
Powered by Linen

square-engine-61315

08/16/2022, 11:47 AM
Start here:
kubectl --namespace kube-system describe deploy/coredns

kind-nightfall-56861

08/16/2022, 12:09 PM
For me it usually works to execute this command to check the status of the kube-system pods:
kubectl get all -n kube-system
And if it turns out that one or more pods are being a pain in the *, then I pretty much force them to redeploy:
kubectl delete --all pods -n kube-system --force
Same way of working for any namespace, tbh.

square-engine-61315

08/16/2022, 12:12 PM
@kind-nightfall-56861 that works sometimes. But instead of deleting pods, you could restart the deployment that controls the pods:
kubectl rollout restart -n kube-system deployment coredns
But you might want to find out why the deployment is failing before you do that. That's what I suggest:
kubectl --namespace kube-system describe deploy/coredns

kind-nightfall-56861

08/16/2022, 12:19 PM
Tbh, when I try to restart through the Rancher interface, restarting almost never works, but idk if that restart translates to that console line. I'm finding that my method works 100% of the time, but I might be mistaken.

square-engine-61315

08/16/2022, 12:54 PM
Deleting the pod is almost like restarting a deployment that has
.spec.strategy.type==Recreate
. I think the default is
.spec.strategy.type==RollingUpdate
. The latter will try to start the new pod before stopping the old one, which is nice for high availability, but does not work with all applications or pods.
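The strategy can be switched per deployment when an app can't tolerate two copies running at once; `myapp` below is a placeholder, and `rollingUpdate` is cleared explicitly because its parameters only apply to the RollingUpdate type:

```shell
# Force "stop old pod, then start new pod" semantics for this deployment
kubectl patch deployment myapp --type merge \
  -p '{"spec":{"strategy":{"type":"Recreate","rollingUpdate":null}}}'
```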