# general
p
you are most likely running out of memory and rancher can't complete startup, it's a pretty memory-hungry app
are you starting up rancher outside of k3s or inside?
i.e. the Docker version of rancher or the Kubernetes Helm version?
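A quick way to check whether the node actually has memory headroom before digging further (a sketch; `kubectl top` relies on metrics-server, which k3s normally bundles):

```
# Allocatable memory vs. what pods have already requested on each node
kubectl describe nodes | grep -A 10 "Allocated resources"

# Live usage, if metrics-server is running (bundled with k3s by default)
kubectl top nodes
kubectl top pods -n cattle-system
```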
w
starting up rancher inside of k3s
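If it helps confirm the setup, the Helm install can be verified with the commands below (a sketch, assuming the release is named `rancher` as in the standard install docs):

```
# Show the Rancher Helm release and its workloads
helm list -n cattle-system
kubectl get deploy,pods -n cattle-system
```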
p
what does kubectl describe pod rancher -n cattle-system show?
what does kubectl logs rancher -n cattle-system --previous show?
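Since the Deployment generates the pod name (e.g. `rancher-85dbd69598-hv2pw` below), a label selector avoids having to copy it; a sketch assuming the chart's `app=rancher` label, which the describe output further down confirms:

```
# Find the pod and describe/fetch logs by label instead of by generated name
kubectl get pods -n cattle-system -l app=rancher
kubectl describe pod -n cattle-system -l app=rancher
kubectl logs -n cattle-system -l app=rancher --previous
```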
w
describe pod
```
Name:                 rancher-85dbd69598-hv2pw
Namespace:            cattle-system
Priority:             1000000000
Priority Class Name:  rancher-critical
Node:                 ip-172-31-12-135.us-west-1.compute.internal/172.31.12.135
Start Time:           Thu, 23 Mar 2023 03:34:59 +0000
Labels:               app=rancher
                      pod-template-hash=85dbd69598
                      release=rancher
Annotations:          <none>
Status:               Running
IP:                   10.42.0.20
IPs:
  IP:           10.42.0.20
Controlled By:  ReplicaSet/rancher-85dbd69598
Containers:
  rancher:
    Container ID:  containerd://6b36fd65fffe2c388fa610e6396d85bd8349e19cc037ae0dbe4b1b16f165aee1
    Image:         rancher/rancher:v2.7.1
    Image ID:      docker.io/rancher/rancher@sha256:188ac186125ca1d4bef1e741568ec381eed1af4e9a876c7b15eabcc98325f2b0
    Port:          80/TCP
    Host Port:     0/TCP
    Args:
      --no-cacerts
      --http-listen-port=80
      --https-listen-port=443
      --add-local=true
    State:          Running
      Started:      Thu, 23 Mar 2023 03:35:01 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:80/healthz delay=60s timeout=1s period=30s #success=1 #failure=3
    Readiness:      http-get http://:80/healthz delay=5s timeout=1s period=30s #success=1 #failure=3
    Environment:
      CATTLE_NAMESPACE:           cattle-system
      CATTLE_PEER_SERVICE:        rancher
      CATTLE_BOOTSTRAP_PASSWORD:  <set to the key 'bootstrapPassword' in secret 'bootstrap-secret'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wbbjc (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-wbbjc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 cattle.io/os=linux:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
```
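Worth noting from the output above: `QoS Class: BestEffort` means no resource requests or limits are set, so under node memory pressure this pod is typically among the first candidates for eviction. A hedged sketch of adding a memory request through the Helm chart (assuming the chart exposes a standard `resources` value and that the repo was added as `rancher-latest`; check `helm show values` first):

```
# Confirm the chart actually exposes a resources value
helm show values rancher-latest/rancher | grep -A 6 "resources"

# Example only: add a memory request so the pod becomes Burstable instead of BestEffort
helm upgrade rancher rancher-latest/rancher -n cattle-system \
  --reuse-values \
  --set resources.requests.memory=2Gi
```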
p
in the previous pod's logs/status you'd be looking for exit code 137 if it is a memory issue
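A couple of hedged ways to check for that exit code without scrolling through logs (the label selector assumes `app=rancher` as above):

```
# Last terminated state of the rancher container; exit code 137 = killed (usually OOM)
kubectl get pod -n cattle-system -l app=rancher \
  -o jsonpath='{.items[0].status.containerStatuses[0].lastState.terminated}{"\n"}'

# Recent namespace events also surface OOMKilled / evictions
kubectl get events -n cattle-system --sort-by=.lastTimestamp
```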