# general
Here’s the first snippet of my startup log:
2022/11/16 18:21:10 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:true Embedded:false BindHost: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false Trace:false NoCACerts:true AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Agent:false Features: ClusterRegistry:}
2022/11/16 18:21:10 [INFO] Listening on /tmp/log.sock
2022/11/16 18:21:10 [INFO] Running etcd --data-dir=management-state/etcd --heartbeat-interval=500 --election-timeout=5000
2022-11-16 18:21:10.780741 W | pkg/flags: unrecognized environment variable ETCD_URL_arm64=<>
2022-11-16 18:21:10.780792 W | pkg/flags: unrecognized environment variable ETCD_URL_amd64=<>
2022-11-16 18:21:10.780797 W | pkg/flags: unrecognized environment variable ETCD_UNSUPPORTED_ARCH=amd64
2022-11-16 18:21:10.780802 W | pkg/flags: unrecognized environment variable ETCD_URL=ETCD_URL_amd64
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2022-11-16 18:21:10.780826 I | etcdmain: etcd Version: 3.4.3
2022-11-16 18:21:10.780832 I | etcdmain: Git SHA: 3cf2f69b5
2022-11-16 18:21:10.780835 I | etcdmain: Go Version: go1.12.12
2022-11-16 18:21:10.780839 I | etcdmain: Go OS/Arch: linux/amd64
2022-11-16 18:21:10.780842 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
2022-11-16 18:21:10.780888 N | etcdmain: the server is already initialized as member before, starting as etcd member...
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2022-11-16 18:21:10.781263 I | embed: name = default
2022-11-16 18:21:10.781276 I | embed: data dir = management-state/etcd
2022-11-16 18:21:10.781282 I | embed: member dir = management-state/etcd/member
2022-11-16 18:21:10.781285 I | embed: heartbeat = 500ms
2022-11-16 18:21:10.781288 I | embed: election = 5000ms
2022-11-16 18:21:10.781291 I | embed: snapshot count = 100000
2022-11-16 18:21:10.781299 I | embed: advertise client URLs = http://localhost:2379
2022-11-16 18:21:10.781303 I | embed: initial advertise peer URLs = http://localhost:2380
2022-11-16 18:21:10.781308 I | embed: initial cluster = 
2022-11-16 18:21:10.858833 I | etcdserver: recovered store from snapshot at index 394803990
2022-11-16 18:21:10.859317 I | mvcc: restore compact to 371885592
2022-11-16 18:21:11.411583 I | etcdserver: restarting member 8e9e05c52164694d in cluster cdf818194e3a8c32 at commit index 394866842
raft2022/11/16 18:21:11 INFO: 8e9e05c52164694d switched to configuration voters=(10276657743932975437)
raft2022/11/16 18:21:11 INFO: 8e9e05c52164694d became follower at term 1520
raft2022/11/16 18:21:11 INFO: newRaft 8e9e05c52164694d [peers: [8e9e05c52164694d], term: 1520, commit: 394866842, applied: 394803990, lastindex: 394866842, lastterm: 1520]
2022-11-16 18:21:11.414671 I | etcdserver/api: enabled capabilities for version 3.4
2022-11-16 18:21:11.414686 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32 from store
2022-11-16 18:21:11.414693 I | etcdserver/membership: set the cluster version to 3.4 from store
2022-11-16 18:21:11.415664 I | mvcc: restore compact to 371885592
2022-11-16 18:21:11.489724 W | auth: simple token is not cryptographically signed
2022-11-16 18:21:11.490627 I | etcdserver: starting server... [version: 3.4.3, cluster version: 3.4]
2022-11-16 18:21:11.490876 I | etcdserver: 8e9e05c52164694d as single-node; fast-forwarding 9 ticks (election ticks 10)
2022-11-16 18:21:11.492215 I | embed: listening for peers on
raft2022/11/16 18:21:15 INFO: 8e9e05c52164694d is starting a new election at term 1520
raft2022/11/16 18:21:15 INFO: 8e9e05c52164694d became candidate at term 1521
raft2022/11/16 18:21:15 INFO: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 1521
raft2022/11/16 18:21:15 INFO: 8e9e05c52164694d became leader at term 1521
raft2022/11/16 18:21:15 INFO: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 1521
2022-11-16 18:21:15.915457 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
2022-11-16 18:21:15.915481 I | embed: ready to serve client requests
2022-11-16 18:21:15.916121 N | embed: serving insecure client requests on, this is strongly discouraged!
2022/11/16 18:21:15 [INFO] Waiting for k3s to start
2022/11/16 18:21:16 [INFO] Waiting for k3s to start
What was the full command you used to upgrade Rancher using Docker? P.S. Docker installs are only for non-production environments.
Righty, this Rancher server instance serves up Dev and Pre clusters and services - nothing in our Prod environments. I did the standard procedure: stopped the container, made a data backup, pulled the new Rancher image, started the new Rancher version using the data volume, and let it run to do the upgrade. Once done and verified, I deleted the old Rancher container. I’ll get the exact command I ran when I’m back in the office tomorrow.
We bring our own certs, and I have the data volume mounted on the host VM at a path.
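The procedure described above can be sketched as a shell sequence. This is only an illustration of the steps as described, not the exact commands run; the container name rancher-server and the backup destination are hypothetical, while the /opt/rancher bind-mount path and --no-cacerts (bring-your-own certs) come from this thread.

```shell
# 1. Stop the old container (hypothetical name).
docker stop rancher-server

# 2. Back up the host-mounted data directory before upgrading.
tar czf /root/rancher-backup-$(date +%F).tar.gz /opt/rancher

# 3. Pull the new Rancher image.
docker pull rancher/rancher:v2.5.16

# 4. Start the new version against the same host data path.
docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged \
  -v /opt/rancher:/var/lib/rancher \
  rancher/rancher:v2.5.16 --no-cacerts

# 5. After verifying the upgrade, remove the old container.
docker rm rancher-server
```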
@square-orange-60123 here is the full command:
docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged \
  -v /mnt/ssl/incommon/current/wildcard-cert.pem:/etc/rancher/ssl/cert.pem \
  -v /mnt/ssl/incommon/current/wildcard-key.pem:/etc/rancher/ssl/key.pem \
  -v /opt/rancher:/var/lib/rancher \
  rancher/rancher:v2.5.16 --no-cacerts
Did you follow these docs? If so, I don’t see the
--volumes-from rancher-data
flag in your command.
Yeah; that’s only needed if you have the volume data inside the container. Since we mount that data in from the host VM and start the new container using that same path, the --volumes-from flag is not needed here.
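To make the distinction concrete, here is a hedged sketch of the two flows; container names, the old version tag, and the port/cert flags are illustrative assumptions, not commands from this thread.

```shell
# Docs flow: data lives only inside the old container, so it must be
# carried over via a data container before starting the new version.
docker stop rancher-server
docker create --volumes-from rancher-server --name rancher-data rancher/rancher:v2.5.15
docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged \
  --volumes-from rancher-data rancher/rancher:v2.5.16

# Bind-mount flow (as used here): the data is on the host at /opt/rancher,
# so the new container simply re-mounts the same path and --volumes-from
# has nothing to carry over.
docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged \
  -v /opt/rancher:/var/lib/rancher rancher/rancher:v2.5.16
```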
The Rancher server upgrades fine, and in fact the clusters do too. It’s just that the storage class is now broken when trying to provision new PVCs/PVs. The existing PVCs/PVs still function and mount/bind.
A fresh, new cluster provisioned by that same Rancher server works fine, using the same cluster YAML. So I’m really confused.
I’m not sure about the answer to that one. Are you using the vSphere storage provider in the local cluster?
I am not.
Sorry, ignore those last two posts; different issue on another thread, lol.