wide-kitchen-20738

03/23/2023, 5:16 AM
Hi all, we are trying to install Rancher 2.7 on a k3s cluster (v1.23.17+k3s1) on a single-node EC2 instance (t3.medium). We selected external TLS and one replica. After the installation completes, we get "404 page not found" when we try to access Rancher. I see some errors in the Rancher logs:
2023/03/23 03:36:04 [INFO] Starting rke.cattle.io/v1, Kind=RKECluster controller
2023/03/23 03:36:04 [ERROR] failed to start controller for cluster.x-k8s.io/v1alpha3, Kind=Cluster: failed to wait for caches to sync
2023/03/23 03:36:04 [ERROR] failed to start controller for cluster.x-k8s.io/v1alpha3, Kind=MachineHealthCheck: failed to wait for caches to sync
2023/03/23 03:36:04 [ERROR] failed to start controller for cluster.x-k8s.io/v1alpha3, Kind=MachineSet: failed to wait for caches to sync
E0323 03:36:04.152482      33 gvks.go:69] failed to sync schemas: failed to sync cache for rke-machine-config.cattle.io/v1, Kind=DigitaloceanConfig
2023/03/23 03:36:04 [INFO] Watching metadata for rke-machine-config.cattle.io/v1, Kind=Amazonec2Config
2023/03/23 03:36:04 [INFO] Starting cluster.x-k8s.io/v1alpha3, Kind=MachineDeployment controller
2023/03/23 03:36:04 [INFO] [CleanupOrphanBindingsDone] orphan bindings cleanup has already run, skipping
2023/03/23 03:36:04 [INFO] checking configmap cattle-system/admincreated to determine if orphan bindings cleanup needs to run
2023/03/23 03:36:04 [INFO] duplicate bindings cleanup has already run, skipping
2023/03/23 03:36:04 [INFO] Watching metadata for rke-machine-config.cattle.io/v1, Kind=AzureConfig
2023/03/23 03:36:04 [INFO] [clean-catalog-orphan-bindings] cleaning up orphaned catalog bindings
2023/03/23 03:36:04 [INFO] Starting rke-machine-config.cattle.io/v1, Kind=Amazonec2Config controller
2023/03/23 03:36:04 [INFO] [clean-catalog-orphan-bindings] Processing 2 rolebindings
2023/03/23 03:36:04 [INFO] [clean-catalog-orphan-bindings] Deleting orphaned role global-catalog
2023/03/23 03:36:04 [INFO] Watching metadata for rke-machine-config.cattle.io/v1, Kind=DigitaloceanConfig
2023/03/23 03:36:04 [INFO] Starting rke-machine-config.cattle.io/v1, Kind=AzureConfig controller
2023/03/23 03:36:04 [WARNING] [clean-catalog-orphan-bindings] Error when deleting role global-catalog, roles.rbac.authorization.k8s.io "global-catalog" not found
2023/03/23 03:36:04 [WARNING] [CleanupOrphanCatalogBindingsDone] error during orphan binding cleanup: roles.rbac.authorization.k8s.io "global-catalog" not found
2023/03/23 03:36:04 [ERROR] failed to cleanup orphan catalog bindings
2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine-config.cattle.io/v1, Kind=HarvesterConfig
2023/03/23 03:36:05 [INFO] Starting rke-machine-config.cattle.io/v1, Kind=DigitaloceanConfig controller
2023/03/23 03:36:05 [INFO] driverMetadata: refreshing data from upstream https://releases.rancher.com/kontainer-driver-metadata/release-v2.7/data.json
2023/03/23 03:36:05 [INFO] Retrieve data.json from local path /var/lib/rancher-data/driver-metadata/data.json
2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine-config.cattle.io/v1, Kind=LinodeConfig
2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine-config.cattle.io/v1, Kind=VmwarevsphereConfig
2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine.cattle.io/v1, Kind=VmwarevsphereMachine
2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine.cattle.io/v1, Kind=AzureMachineTemplate
2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine.cattle.io/v1, Kind=HarvesterMachineTemplate
2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine.cattle.io/v1, Kind=Amazonec2Machine
2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine.cattle.io/v1, Kind=DigitaloceanMachineTemplate
2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine.cattle.io/v1, Kind=Amazonec2MachineTemplate
2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine.cattle.io/v1, Kind=LinodeMachineTemplate
2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine.cattle.io/v1, Kind=LinodeMachine
2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine.cattle.io/v1, Kind=AzureMachine
2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine.cattle.io/v1, Kind=DigitaloceanMachine
2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine.cattle.io/v1, Kind=VmwarevsphereMachineTemplate
2023/03/23 03:36:05 [INFO] Watching metadata for rke-machine.cattle.io/v1, Kind=HarvesterMachine
2023/03/23 03:36:05 [INFO] Watching metadata for cluster.x-k8s.io/v1alpha3, Kind=Cluster
2023/03/23 03:36:05 [INFO] Watching metadata for cluster.x-k8s.io/v1alpha3, Kind=MachineHealthCheck
2023/03/23 03:36:05 [INFO] Watching metadata for cluster.x-k8s.io/v1alpha3, Kind=MachineSet
2023/03/23 03:36:05 [INFO] Starting rke-machine-config.cattle.io/v1, Kind=HarvesterConfig controller
2023/03/23 03:36:05 [INFO] Starting rke-machine-config.cattle.io/v1, Kind=LinodeConfig controller
2023/03/23 03:36:05 [INFO] Starting rke-machine-config.cattle.io/v1, Kind=VmwarevsphereConfig controller
2023/03/23 03:36:05 [INFO] Starting rke-machine.cattle.io/v1, Kind=VmwarevsphereMachine controller
2023/03/23 03:36:05 [INFO] Starting rke-machine.cattle.io/v1, Kind=AzureMachineTemplate controller
2023/03/23 03:36:05 [INFO] Starting rke-machine.cattle.io/v1, Kind=HarvesterMachineTemplate controller
2023/03/23 03:36:05 [INFO] Starting rke-machine.cattle.io/v1, Kind=Amazonec2Machine controller
2023/03/23 03:36:05 [INFO] Starting rke-machine.cattle.io/v1, Kind=DigitaloceanMachineTemplate controller
2023/03/23 03:36:05 [INFO] Starting rke-machine.cattle.io/v1, Kind=Amazonec2MachineTemplate controller
2023/03/23 03:36:05 [INFO] Starting rke-machine.cattle.io/v1, Kind=LinodeMachineTemplate controller
2023/03/23 03:36:05 [INFO] Starting rke-machine.cattle.io/v1, Kind=LinodeMachine controller
2023/03/23 03:36:05 [INFO] Starting rke-machine.cattle.io/v1, Kind=AzureMachine controller
2023/03/23 03:36:05 [INFO] Starting rke-machine.cattle.io/v1, Kind=DigitaloceanMachine controller
2023/03/23 03:36:05 [INFO] Starting rke-machine.cattle.io/v1, Kind=VmwarevsphereMachineTemplate controller
2023/03/23 03:36:05 [INFO] Starting rke-machine.cattle.io/v1, Kind=HarvesterMachine controller
2023/03/23 03:36:05 [INFO] Starting cluster.x-k8s.io/v1alpha3, Kind=Cluster controller
2023/03/23 03:36:05 [INFO] Starting cluster.x-k8s.io/v1alpha3, Kind=MachineHealthCheck controller
2023/03/23 03:36:05 [INFO] Starting cluster.x-k8s.io/v1alpha3, Kind=MachineSet controller
2023/03/23 03:36:08 [INFO] Loaded configuration from https://releases.rancher.com/kontainer-driver-metadata/release-v2.7/data.json in [0x6d10098]
2023/03/23 03:36:08 [INFO] Loaded configuration from https://releases.rancher.com/kontainer-driver-metadata/release-v2.7/data.json in [0x6d10098]
2023/03/23 03:36:08 [INFO] kontainerdriver amazonelasticcontainerservice listening on address 127.0.0.1:42731
2023/03/23 03:36:08 [INFO] kontainerdriver googlekubernetesengine listening on address 127.0.0.1:41457
2023/03/23 03:36:08 [INFO] kontainerdriver azurekubernetesservice listening on address 127.0.0.1:45945
2023/03/23 03:36:08 [INFO] kontainerdriver amazonelasticcontainerservice stopped
2023/03/23 03:36:08 [INFO] dynamic schema for kontainerdriver amazonelasticcontainerservice updating
2023/03/23 03:36:08 [INFO] kontainerdriver azurekubernetesservice stopped
2023/03/23 03:36:08 [INFO] dynamic schema for kontainerdriver azurekubernetesservice updating
2023/03/23 03:36:08 [INFO] kontainerdriver googlekubernetesengine stopped
2023/03/23 03:36:08 [INFO] dynamic schema for kontainerdriver googlekubernetesengine updating
I'm new to Rancher and have tried almost every solution I found on Google.
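
(For context, a minimal sketch of what such an install typically looks like; the chart repo and hostname below are illustrative assumptions, not taken from the thread. tls=external matches the "external TLS" choice and means TLS is expected to terminate at an external load balancer:)

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm install rancher rancher-latest/rancher \
  --namespace cattle-system --create-namespace \
  --set hostname=rancher.example.com \
  --set tls=external \
  --set replicas=1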

polite-piano-74233

03/23/2023, 5:20 AM
You are most likely running out of memory and Rancher can't complete startup; it's a pretty memory-hungry app.
Are you starting up Rancher outside of k3s or inside?
i.e. the Docker version of Rancher, or the Kubernetes Helm version?
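
(A t3.medium has 4 GB of RAM, which is tight for Rancher plus k3s. A quick way to check memory pressure; kubectl top assumes metrics-server is available, which k3s ships by default:)

# On the node itself:
free -h
# From the cluster:
kubectl top node
kubectl top pod -n cattle-system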

wide-kitchen-20738

03/23/2023, 5:21 AM
Starting up Rancher inside of k3s.

polite-piano-74233

03/23/2023, 5:29 AM
what does kubectl describe pod rancher -n cattle-system show?
what does kubectl logs rancher -n cattle-system --previous show?
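
(Since the pod name carries a ReplicaSet hash suffix, targeting it by label or via the Deployment is an easy way to run these; a sketch:)

kubectl describe pod -n cattle-system -l app=rancher
kubectl logs -n cattle-system deployment/rancher --previous
# --previous only returns output if the container has restarted at least once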

wide-kitchen-20738

03/23/2023, 5:32 AM
describe pod
Name:                 rancher-85dbd69598-hv2pw
Namespace:            cattle-system
Priority:             1000000000
Priority Class Name:  rancher-critical
Node:                 ip-172-31-12-135.us-west-1.compute.internal/172.31.12.135
Start Time:           Thu, 23 Mar 2023 03:34:59 +0000
Labels:               app=rancher
                      pod-template-hash=85dbd69598
                      release=rancher
Annotations:          <none>
Status:               Running
IP:                   10.42.0.20
IPs:
  IP:           10.42.0.20
Controlled By:  ReplicaSet/rancher-85dbd69598
Containers:
  rancher:
    Container ID:  containerd://6b36fd65fffe2c388fa610e6396d85bd8349e19cc037ae0dbe4b1b16f165aee1
    Image:         rancher/rancher:v2.7.1
    Image ID:      docker.io/rancher/rancher@sha256:188ac186125ca1d4bef1e741568ec381eed1af4e9a876c7b15eabcc98325f2b0
    Port:          80/TCP
    Host Port:     0/TCP
    Args:
      --no-cacerts
      --http-listen-port=80
      --https-listen-port=443
      --add-local=true
    State:          Running
      Started:      Thu, 23 Mar 2023 03:35:01 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:80/healthz delay=60s timeout=1s period=30s #success=1 #failure=3
    Readiness:      http-get http://:80/healthz delay=5s timeout=1s period=30s #success=1 #failure=3
    Environment:
      CATTLE_NAMESPACE:           cattle-system
      CATTLE_PEER_SERVICE:        rancher
      CATTLE_BOOTSTRAP_PASSWORD:  <set to the key 'bootstrapPassword' in secret 'bootstrap-secret'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wbbjc (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-wbbjc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 cattle.io/os=linux:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
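
(Note the output above shows Restart Count: 0 and Ready: True, and the BestEffort QoS class means no memory requests or limits are set. If the container had been OOM-killed, you would expect its section of the describe output to show a last state along these lines:)

    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137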

polite-piano-74233

03/23/2023, 5:36 AM
In the previous pod logs you'd be looking for exit code 137 if it is a memory issue.
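
(A couple of generic ways to check for that; these are sketches, not commands from the thread:)

# Last terminated exit code of the rancher container, if any:
kubectl get pods -n cattle-system -l app=rancher \
  -o jsonpath='{.items[*].status.containerStatuses[*].lastState.terminated.exitCode}'
# On the node, look for kernel OOM-killer activity:
sudo dmesg | grep -i 'out of memory'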