# general
a
a
Could you try deleting
-v /rancher:/var/lib/rancher
and try again?
a
Sorry, I didn't get you
a
I would like to check if you get the same error without a mounted volume
a
After mounting the volume I am still getting the same error:
sudo docker run -d --restart=unless-stopped -p 80:80 -p 5055:443 -v /rancher:/var/lib/rancher --privileged rancher/rancher:latest
docker logs 4ba08385f5b1 --follow
INFO: Running k3s server --cluster-init --cluster-reset
ERROR: time="2024-07-09T10:51:17Z" level=warning msg="remove /var/lib/rancher/k3s/agent/etc/k3s-api-server-agent-load-balancer.json: no such file or directory"
time="2024-07-09T10:51:17Z" level=info msg="Starting k3s v1.28.6+k3s2 (c9f49a3b)"
time="2024-07-09T10:51:17Z" level=info msg="Managed etcd cluster bootstrap already complete and initialized"
time="2024-07-09T10:51:17Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1719570345: notBefore=2024-06-28 10:25:45 +0000 UTC notAfter=2025-07-09 10:51:17 +0000 UTC"
time="2024-07-09T10:51:17Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1719570345: notBefore=2024-06-28 10:25:45 +0000 UTC notAfter=2025-07-09 10:51:17 +0000 UTC"
time="2024-07-09T10:51:17Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1719570345: notBefore=2024-06-28 10:25:45 +0000 UTC notAfter=2025-07-09 10:51:17 +0000 UTC"
time="2024-07-09T10:51:17Z" level=fatal msg="starting kubernetes: preparing server: start managed database: Managed etcd cluster membership was previously reset, please remove the cluster-reset flag and start k3s normally. If you need to perform another cluster reset, you must first manually delete the file at /var/lib/rancher/k3s/server/db/reset-flag"
INFO: Running k3s server --cluster-init --cluster-reset
2024/07/09 10:51:32 [INFO] Rancher version v2.8.5 (7af1354e9) is starting
2024/07/09 10:51:32 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:true Embedded:false BindHost: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false Trace:false NoCACerts:false AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Features: ClusterRegistry:}
2024/07/09 10:51:32 [INFO] Listening on /tmp/log.sock
2024/07/09 10:51:32 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2024/07/09 10:51:34 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2024/07/09 10:51:36 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2024/07/09 10:51:38 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2024/07/09 10:51:40 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2024/07/09 10:51:42 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2024/07/09 10:51:44 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2024/07/09 10:51:46 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2024/07/09 10:51:48 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2024/07/09 10:51:50 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2024/07/09 10:51:52 [INFO] Running in single server mode, will not peer connections
2024/07/09 10:51:53 [INFO] Applying CRD features.management.cattle.io
2024/07/09 10:51:53 [INFO] Updating embedded CRD clusterroletemplatebindings.management.cattle.io
2024/07/09 10:51:53 [INFO] Updating embedded CRD globalroles.management.cattle.io
2024/07/09 10:51:53 [INFO] Updating embedded CRD globalrolebindings.management.cattle.io
2024/07/09 10:51:53 [INFO] Updating embedded CRD projects.management.cattle.io
2024/07/09 10:51:53 [INFO] Updating embedded CRD projectroletemplatebindings.management.cattle.io
2024/07/09 10:51:53 [INFO] Updating embedded CRD roletemplates.management.cattle.io
2024/07/09 10:51:53 [INFO] Applying CRD navlinks.ui.cattle.io
2024/07/09 10:51:53 [INFO] Applying CRD podsecurityadmissionconfigurationtemplates.management.cattle.io
2024/07/09 10:51:53 [INFO] Applying CRD clusters.management.cattle.io
2024/07/09 10:51:53 [INFO] Applying CRD apiservices.management.cattle.io
2024/07/09 10:51:53 [INFO] Applying CRD clusterregistrationtokens.management.cattle.io
2024/07/09 10:51:53 [INFO] Applying CRD settings.management.cattle.io
2024/07/09 10:51:53 [INFO] Applying CRD preferences.management.cattle.io
2024/07/09 10:51:53 [INFO] Applying CRD features.management.cattle.io
2024/07/09 10:51:53 [INFO] Applying CRD clusterrepos.catalog.cattle.io
2024/07/09 10:51:53 [INFO] Applying CRD operations.catalog.cattle.io
2024/07/09 10:51:54 [INFO] Applying CRD apps.catalog.cattle.io
2024/07/09 10:51:54 [INFO] Applying CRD fleetworkspaces.management.cattle.io
2024/07/09 10:51:54 [INFO] Applying CRD managedcharts.management.cattle.io
2024/07/09 10:51:54 [INFO] Applying CRD clusters.provisioning.cattle.io
2024/07/09 10:51:54 [INFO] Applying CRD clusters.provisioning.cattle.io
2024/07/09 10:51:54 [INFO] Applying CRD rkeclusters.rke.cattle.io
2024/07/09 10:51:54 [INFO] Applying CRD rkecontrolplanes.rke.cattle.io
2024/07/09 10:51:54 [INFO] Applying CRD rkebootstraps.rke.cattle.io
2024/07/09 10:51:54 [FATAL] k3s exited with: exit status 2
The container keeps restarting every 12 seconds even after mounting the volume. Can you please help me with this?
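The fatal line itself says what is blocking the start: a leftover reset flag from a previous cluster reset. A minimal sketch of clearing it, assuming the host path /rancher and the container ID from the logs above (both are just the values seen in this thread):
sudo docker stop 4ba08385f5b1          # stop the crash-looping container first
# inside the container the file is /var/lib/rancher/k3s/server/db/reset-flag;
# with -v /rancher:/var/lib/rancher it maps to this host path:
sudo rm /rancher/k3s/server/db/reset-flag
sudo docker start 4ba08385f5b1         # k3s should then start without complaining about the reset flag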
a
Would you try without persistent volumes?
sudo docker run -d --restart=unless-stopped -p 80:80 -p 5055:443 --privileged rancher/rancher:latest
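If it still crash-loops, a sketch of a fully clean retry, assuming the old container and the /rancher host directory from earlier in this thread are safe to remove (destructive, so only if nothing else uses them):
sudo docker stop 4ba08385f5b1 && sudo docker rm 4ba08385f5b1   # drop the old container
sudo rm -rf /rancher                                           # wipe the previously persisted k3s/etcd state, including reset-flag
sudo docker run -d --restart=unless-stopped -p 80:80 -p 5055:443 --privileged rancher/rancher:latest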
a
Yes, I tried both, but the issue is the same
p
stop the container, do "docker system prune" and try again
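One thing to keep in mind: docker system prune on its own removes stopped containers, unused networks, dangling images and build cache, but leaves volumes in place. A rough sketch of a more thorough cleanup (destructive, and the container ID is a placeholder for whatever "docker ps -a" shows):
sudo docker stop <rancher-container-id>    # prune only touches stopped containers
sudo docker rm <rancher-container-id>
sudo docker system prune --volumes         # also removes unused volumes
# or, more selectively:
sudo docker volume ls
sudo docker volume rm <volume-name>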
a
After doing docker system prune I am still getting this error:
docker logs aa1a00cfaeeff2630310d7ad402e772c94bdf704a49d58222696777c2bc02afb --follow
INFO: Running k3s server --cluster-init --cluster-reset
ERROR: time="2024-07-09T17:34:21Z" level=warning msg="remove /var/lib/rancher/k3s/agent/etc/k3s-api-server-agent-load-balancer.json: no such file or directory"
time="2024-07-09T17:34:21Z" level=info msg="Starting k3s v1.28.6+k3s2 (c9f49a3b)"
time="2024-07-09T17:34:21Z" level=info msg="Managed etcd cluster bootstrap already complete and initialized"
time="2024-07-09T17:34:21Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1720544075: notBefore=2024-07-09 16:54:35 +0000 UTC notAfter=2025-07-09 17:34:21 +0000 UTC"
time="2024-07-09T17:34:21Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1720544075: notBefore=2024-07-09 16:54:35 +0000 UTC notAfter=2025-07-09 17:34:21 +0000 UTC"
time="2024-07-09T17:34:21Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1720544075: notBefore=2024-07-09 16:54:35 +0000 UTC notAfter=2025-07-09 17:34:21 +0000 UTC"
time="2024-07-09T17:34:21Z" level=fatal msg="starting kubernetes: preparing server: start managed database: Managed etcd cluster membership was previously reset, please remove the cluster-reset flag and start k3s normally. If you need to perform another cluster reset, you must first manually delete the file at /var/lib/rancher/k3s/server/db/reset-flag"
INFO: Running k3s server --cluster-init --cluster-reset
2024/07/09 17:34:37 [INFO] Rancher version v2.8.5 (7af1354e9) is starting
2024/07/09 17:34:37 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:true Embedded:false BindHost: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false Trace:false NoCACerts:false AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Features: ClusterRegistry:}
2024/07/09 17:34:37 [INFO] Listening on /tmp/log.sock
2024/07/09 17:34:37 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2024/07/09 17:34:39 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2024/07/09 17:34:41 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2024/07/09 17:34:43 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2024/07/09 17:34:45 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2024/07/09 17:34:47 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2024/07/09 17:34:49 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2024/07/09 17:34:51 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6444/version?timeout=15m0s": dial tcp 127.0.0.1:6444: connect: connection refused
2024/07/09 17:34:53 [INFO] Running in single server mode, will not peer connections
2024/07/09 17:34:53 [INFO] Applying CRD features.management.cattle.io
2024/07/09 17:34:53 [INFO] Updating embedded CRD clusterroletemplatebindings.management.cattle.io
2024/07/09 17:34:53 [INFO] Updating embedded CRD globalroles.management.cattle.io
2024/07/09 17:34:53 [INFO] Updating embedded CRD globalrolebindings.management.cattle.io
2024/07/09 17:34:53 [INFO] Updating embedded CRD projects.management.cattle.io
2024/07/09 17:34:53 [INFO] Updating embedded CRD projectroletemplatebindings.management.cattle.io
2024/07/09 17:34:53 [INFO] Updating embedded CRD roletemplates.management.cattle.io
2024/07/09 17:34:53 [INFO] Applying CRD navlinks.ui.cattle.io
2024/07/09 17:34:53 [INFO] Applying CRD podsecurityadmissionconfigurationtemplates.management.cattle.io
2024/07/09 17:34:53 [INFO] Applying CRD clusters.management.cattle.io
2024/07/09 17:34:53 [INFO] Applying CRD apiservices.management.cattle.io
2024/07/09 17:34:53 [INFO] Applying CRD clusterregistrationtokens.management.cattle.io
2024/07/09 17:34:53 [INFO] Applying CRD settings.management.cattle.io
2024/07/09 17:34:53 [INFO] Applying CRD preferences.management.cattle.io
2024/07/09 17:34:54 [INFO] Applying CRD features.management.cattle.io
2024/07/09 17:34:54 [INFO] Applying CRD clusterrepos.catalog.cattle.io
2024/07/09 17:34:54 [INFO] Applying CRD operations.catalog.cattle.io
2024/07/09 17:34:54 [INFO] Applying CRD apps.catalog.cattle.io
2024/07/09 17:34:54 [INFO] Applying CRD fleetworkspaces.management.cattle.io
2024/07/09 17:34:54 [INFO] Applying CRD managedcharts.management.cattle.io
2024/07/09 17:34:54 [INFO] Applying CRD clusters.provisioning.cattle.io
2024/07/09 17:34:54 [INFO] Applying CRD clusters.provisioning.cattle.io
2024/07/09 17:34:54 [INFO] Applying CRD rkeclusters.rke.cattle.io
2024/07/09 17:34:54 [INFO] Applying CRD rkecontrolplanes.rke.cattle.io
2024/07/09 17:34:54 [INFO] Applying CRD rkebootstraps.rke.cattle.io
2024/07/09 17:34:54 [INFO] Applying CRD rkebootstraptemplates.rke.cattle.io
2024/07/09 17:34:54 [INFO] Applying CRD rkecontrolplanes.rke.cattle.io
2024/07/09 17:34:54 [INFO] Applying CRD custommachines.rke.cattle.io
2024/07/09 17:34:54 [INFO] Applying CRD etcdsnapshots.rke.cattle.io
2024/07/09 17:34:54 [INFO] Applying CRD clusters.cluster.x-k8s.io
2024/07/09 17:34:54 [INFO] Applying CRD machinedeployments.cluster.x-k8s.io
2024/07/09 17:34:54 [INFO] Applying CRD machinehealthchecks.cluster.x-k8s.io
2024/07/09 17:34:55 [INFO] Applying CRD machines.cluster.x-k8s.io
2024/07/09 17:34:55 [FATAL] k3s exited with: exit status 1
Can you please help me with this issue?
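Since the old etcd state keeps reappearing even on a "fresh" container, it may help to check where Docker is actually keeping /var/lib/rancher (the rancher/rancher image generally declares it as a volume, so an anonymous volume is created even without -v). A sketch, with <container-id> standing in for the ID from "docker ps -a":
sudo docker inspect <container-id> --format '{{ json .Mounts }}'   # shows which volume or host path backs /var/lib/rancher
sudo docker volume ls                                              # look for any leftover volumes from earlier runs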
@powerful-librarian-10572 @acceptable-nest-4738 please help me with this issue
p
Have no clue
a
Are there any global options I need to add to this docker command?
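Not so much extra options as making the state easy to find and wipe. A sketch of the usual single-node run with an explicit container name and a named volume (the names "rancher" and "rancher-data" are just examples, and the 5055 mapping is kept from earlier in this thread):
sudo docker run -d --restart=unless-stopped \
  --name rancher \
  -p 80:80 -p 5055:443 \
  -v rancher-data:/var/lib/rancher \
  --privileged rancher/rancher:latest
# if it gets wedged again, the persisted state can be removed with:
# sudo docker rm -f rancher && sudo docker volume rm rancher-data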