future-account-50371
08/22/2022, 11:02 AM
most-sunset-36476
08/22/2022, 11:15 AM
important-umbrella-22006
08/22/2022, 12:34 PM
TTLAfterFinished=true
But when I change the kube-controller-manager and kube-apiserver pod configuration, the pods get restarted and the configuration files are reverted to the original. Is this controlled by Fleet?
Can someone help me with the process of modifying the kube-controller-manager and kube-apiserver pod configuration?
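[Editor's note: on an RKE/Rancher-provisioned cluster the control-plane components run as static containers that the provisioning controller (not Fleet) reconciles back to the cluster spec, which is why direct edits are reverted. A minimal sketch of the supported route, assuming an RKE cluster.yml; the services/extra_args keys are standard RKE options, and the same settings can be edited as YAML in the Rancher UI for Rancher-managed clusters:]
# cluster.yml -- set the feature gate through the cluster spec so it survives reconciliation
services:
  kube-api:
    extra_args:
      feature-gates: "TTLAfterFinished=true"
  kube-controller:
    extra_args:
      feature-gates: "TTLAfterFinished=true"
[Re-applying with rke up --config cluster.yml restarts the components with the new flags.]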
handsome-mouse-36138
08/22/2022, 1:35 PM
loud-belgium-84462
08/22/2022, 2:33 PM
rough-teacher-88372
08/22/2022, 4:15 PM
Error: UPGRADE FAILED: unable to build kubernetes objects from current release manifest: resource mapping not found for name: "rancher" namespace: "" from "": no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"
ensure CRDs are installed first
I'd absolutely love any help on this
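[Editor's note: this upgrade error appears when the stored Helm release manifest still references the networking.k8s.io/v1beta1 Ingress API, which was removed in Kubernetes 1.22. One common fix, sketched here with the helm-mapkubeapis plugin; the release name rancher and namespace cattle-system are assumed from the usual Rancher install and may differ:]
# rewrite deprecated API versions inside the stored release manifest
helm plugin install https://github.com/helm/helm-mapkubeapis
helm mapkubeapis rancher --namespace cattle-system
# then retry the upgrade, targeting a chart version that supports your Kubernetes release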
bland-dress-30943
08/22/2022, 8:09 PM
brash-monitor-41966
08/22/2022, 8:21 PM
brash-monitor-41966
08/22/2022, 8:22 PM
some-oil-32535
08/22/2022, 9:58 PM
cuddly-restaurant-47972
08/22/2022, 10:49 PM
many-church-13850
08/23/2022, 4:16 AM
freezing-wolf-83208
08/23/2022, 5:50 AM
most-sunset-36476
08/23/2022, 6:41 AM
time="2022-08-19T07:04:25Z" level=warning msg="Error while getting agent config: Get \"https://rancher.gandalf.mordor.net/v3/connect/config\": dial tcp 20.42.142.242:443: i/o timeout"
time="2022-08-19T07:04:28Z" level=error msg="Failed to connect to proxy. Empty dialer response" error="dial tcp 20.42.142.242:443: i/o timeout"
time="2022-08-19T07:04:28Z" level=error msg="Remotedialer proxy error" error="dial tcp 20.42.142.242:443: i/o timeout"
time="2022-08-19T07:04:38Z" level=info msg="Connecting to wss://rancher.gandalf.mordor.net/v3/connect with token starting with 4sfxz9dk54w2hp9xv729k8tbgnf"
time="2022-08-19T07:04:38Z" level=info msg="Connecting to proxy" url="wss://rancher.gandalf.mordor.net/v3/connect"
time="2022-08-19T07:04:48Z" level=error msg="Failed to connect to proxy. Empty dialer response" error="dial tcp 20.42.142.242:443: i/o timeout"
time="2022-08-19T07:04:48Z" level=error msg="Remotedialer proxy error" error="dial tcp 20.42.142.242:443: i/o timeout"
time="2022-08-19T07:04:58Z" level=info msg="Connecting to wss://rancher.gandalf.mordor.net/v3/connect with token starting with 4sfxz9dk54w2hp9xv729k8tbgnf"
time="2022-08-19T07:04:58Z" level=info msg="Connecting to proxy" url="wss://rancher.gandalf.mordor.net/v3/connect"
time="2022-08-19T07:05:00Z" level=warning msg="Error while getting agent config: Get \"https://rancher.gandalf.mordor.net/v3/connect/config\": dial tcp 20.42.142.242:443: i/o timeout"
most-sunset-36476
08/23/2022, 6:42 AM
most-sunset-36476
08/23/2022, 6:47 AM
most-sunset-36476
08/23/2022, 6:48 AM
2022/08/22 14:45:36 [TRACE] dialerFactory: Skipping node [system1] for tunneling the cluster connection because nodeConditions are not as expected
2022/08/22 14:45:36 [TRACE] dialerFactory: Skipping node [general1] for tunneling the cluster connection because nodeConditions are not as expected
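[Editor's note: Rancher only tunnels the cluster connection through nodes whose conditions look healthy, so the skipped-node trace is worth chasing. A quick check of the conditions on the two skipped nodes, names taken from the trace above:]
# show the condition block for each skipped node
kubectl describe node system1 | grep -A10 "Conditions:"
kubectl describe node general1 | grep -A10 "Conditions:"
# Ready should be True, and MemoryPressure/DiskPressure/PIDPressure should be False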
most-sunset-36476
08/23/2022, 6:57 AM
2022/08/22 13:13:03 [ERROR] failed to sync schemas: Post "https://10.0.0.1:443/apis/authorization.k8s.io/v1/selfsubjectaccessreviews?timeout=15m0s": unexpected EOF
2022/08/22 14:01:59 [ERROR] error syncing 'c-fzndh/m-2r949': handler rke-worker-upgrader: getNodePlan error for node [m-2r949]: failed to find plan for 10.100.0.4, requeuing
2022/08/22 14:01:59 [ERROR] error syncing 'c-fzndh/m-7whc7': handler rke-worker-upgrader: getNodePlan error for node [m-7whc7]: failed to find plan for 10.100.0.5, requeuing
2022/08/22 14:01:59 [ERROR] error syncing 'c-fzndh/m-7whc7': handler rke-worker-upgrader: getNodePlan error for node [m-7whc7]: failed to find plan for 10.100.0.5, requeuing
2022/08/22 14:01:59 [ERROR] error syncing 'c-fzndh/m-2r949': handler rke-worker-upgrader: getNodePlan error for node [m-2r949]: failed to find plan for 10.100.0.4, requeuing
2022/08/22 14:01:59 [ERROR] error syncing 'c-fzndh/m-7whc7': handler rke-worker-upgrader: getNodePlan error for node [m-7whc7]: failed to find plan for 10.100.0.5, requeuing
2022/08/22 14:01:59 [ERROR] error syncing 'c-fzndh/m-2r949': handler rke-worker-upgrader: getNodePlan error for node [m-2r949]: failed to find plan for 10.100.0.4, requeuing
2022/08/22 14:01:59 [ERROR] error syncing 'c-fzndh/m-7whc7': handler rke-worker-upgrader: getNodePlan error for node [m-7whc7]: failed to find plan for 10.100.0.5, requeuing
2022/08/22 14:01:59 [ERROR] error syncing 'c-fzndh/m-2r949': handler rke-worker-upgrader: getNodePlan error for node [m-2r949]: failed to find plan for 10.100.0.4, requeuing
2022/08/22 14:01:59 [ERROR] error syncing 'c-fzndh/m-7whc7': handler rke-worker-upgrader: getNodePlan error for node [m-7whc7]: failed to find plan for 10.100.0.5, requeuing
2022/08/22 14:01:59 [ERROR] error syncing 'c-fzndh/m-2r949': handler rke-worker-upgrader: getNodePlan error for node [m-2r949]: failed to find plan for 10.100.0.4, requeuing
2022/08/22 14:01:59 [ERROR] error syncing 'c-fzndh/m-7whc7': handler rke-worker-upgrader: getNodePlan error for node [m-7whc7]: failed to find plan for 10.100.0.5, requeuing
2022/08/22 14:01:59 [ERROR] error syncing 'c-fzndh/m-2r949': handler rke-worker-upgrader: getNodePlan error for node [m-2r949]: failed to find plan for 10.100.0.4, requeuing
2022/08/22 14:02:00 [ERROR] error syncing 'c-fzndh/m-7whc7': handler rke-worker-upgrader: getNodePlan error for node [m-7whc7]: failed to find plan for 10.100.0.5, requeuing
2022/08/22 14:02:00 [ERROR] error syncing 'c-fzndh/m-2r949': handler rke-worker-upgrader: getNodePlan error for node [m-2r949]: failed to find plan for 10.100.0.4, requeuing
2022/08/22 14:02:00 [ERROR] error syncing 'c-fzndh/m-2r949': handler rke-worker-upgrader: getNodePlan error for node [m-2r949]: failed to find plan for 10.100.0.4, requeuing
2022/08/22 14:02:00 [ERROR] error syncing 'c-fzndh/m-7whc7': handler rke-worker-upgrader: getNodePlan error for node [m-7whc7]: failed to find plan for 10.100.0.5, requeuing
2022/08/22 14:02:00 [ERROR] error syncing 'c-fzndh/m-7whc7': handler rke-worker-upgrader: getNodePlan error for node [m-7whc7]: failed to find plan for 10.100.0.5, requeuing
2022/08/22 14:02:00 [ERROR] error syncing 'c-fzndh/m-2r949': handler rke-worker-upgrader: getNodePlan error for node [m-2r949]: failed to find plan for 10.100.0.4, requeuing
2022/08/22 14:02:02 [ERROR] error syncing 'c-fzndh/u-3svhcofub3-admin': handler cluster-crtb-sync: couldn't ensure service account impersonator: failed to get secret for service account: cattle-impersonation-system/cattle-impersonation-u-3svhcofub3, error: timed out waiting for the condition, requeuing
2022/08/22 14:02:03 [ERROR] error syncing 'c-fzndh/m-7whc7': handler rke-worker-upgrader: getNodePlan error for node [m-7whc7]: failed to find plan for 10.100.0.5, requeuing
2022/08/22 14:02:03 [ERROR] error syncing 'c-fzndh/m-2r949': handler rke-worker-upgrader: getNodePlan error for node [m-2r949]: failed to find plan for 10.100.0.4, requeuing
2022/08/22 14:02:05 [ERROR] error syncing 'p-9w5l9/creator-project-owner': handler cluster-prtb-sync: couldn't ensure service account impersonator: failed to get secret for service account: cattle-impersonation-system/cattle-impersonation-user-ff9gr, error: timed out waiting for the condition, requeuing
2022/08/22 14:02:07 [ERROR] error syncing 'p-29gg6/creator-project-owner': handler cluster-prtb-sync: couldn't ensure service account impersonator: failed to get secret for service account: cattle-impersonation-system/cattle-impersonation-user-ff9gr, error: timed out waiting for the condition, requeuing
2022/08/22 14:02:08 [ERROR] error syncing 'c-fzndh/m-7whc7': handler rke-worker-upgrader: getNodePlan error for node [m-7whc7]: failed to find plan for 10.100.0.5, requeuing
2022/08/22 14:02:08 [ERROR] error syncing 'c-fzndh/m-2r949': handler rke-worker-upgrader: getNodePlan error for node [m-2r949]: failed to find plan for 10.100.0.4, requeuing
2022/08/22 14:02:18 [ERROR] error syncing 'c-fzndh/m-7whc7': handler rke-worker-upgrader: getNodePlan error for node [m-7whc7]: failed to find plan for 10.100.0.5, requeuing
2022/08/22 14:02:18 [ERROR] error syncing 'c-fzndh/m-2r949': handler rke-worker-upgrader: getNodePlan error for node [m-2r949]: failed to find plan for 10.100.0.4, requeuing
2022/08/22 14:02:39 [ERROR] error syncing 'c-fzndh/m-7whc7': handler rke-worker-upgrader: getNodePlan error for node [m-7whc7]: failed to find plan for 10.100.0.5, requeuing
2022/08/22 14:02:39 [ERROR] error syncing 'c-fzndh/m-2r949': handler rke-worker-upgrader: getNodePlan error for node [m-2r949]: failed to find plan for 10.100.0.4, requeuing
2022/08/22 14:03:09 [ERROR] error syncing 'c-fzndh/m-7whc7': handler rke-worker-upgrader: getNodePlan error for node [m-7whc7]: failed to find plan for 10.100.0.5, requeuing
2022/08/22 14:03:09 [ERROR] error syncing 'c-fzndh/m-2r949': handler rke-worker-upgrader: getNodePlan error for node [m-2r949]: failed to find plan for 10.100.0.4, requeuing
2022/08/22 14:08:51 [ERROR] Failed to handling tunnel request from remote address 10.244.0.10:55478 (X-Forwarded-For: 10.1.0.4): response 401: failed authentication
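[Editor's note: the repeating "failed to find plan" errors suggest the rke-worker-upgrader never received a node plan for those two machines. A hedged way to inspect the Rancher-side node objects from the local (management) cluster; the cluster ID c-fzndh and node IDs come from the log above:]
# list the node objects Rancher tracks for this cluster
kubectl -n c-fzndh get nodes.management.cattle.io
kubectl -n c-fzndh get nodes.management.cattle.io m-7whc7 -o yaml
# compare status.conditions and the applied node version against a node that upgraded cleanly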
most-sunset-36476
08/23/2022, 7:36 AM
BTW the [m-7whc7] and [m-2r949] are the stuck nodes.
colossal-policeman-83714
08/23/2022, 7:38 AM
2022/08/23 07:40:14 [INFO] Done waiting for CRD projectmonitorgraphs.management.cattle.io to become available
2022/08/23 07:40:14 [INFO] Waiting for CRD cisbenchmarkversions.management.cattle.io to become available
2022/08/23 07:40:15 [INFO] Done waiting for CRD cisbenchmarkversions.management.cattle.io to become available
2022/08/23 07:40:15 [INFO] Waiting for CRD templates.management.cattle.io to become available
2022/08/23 07:40:15 [INFO] Done waiting for CRD templates.management.cattle.io to become available
2022/08/23 07:40:15 [INFO] Waiting for CRD monitormetrics.management.cattle.io to become available
2022/08/23 07:40:16 [INFO] Done waiting for CRD monitormetrics.management.cattle.io to become available
2022/08/23 07:40:17 [FATAL] k3s exited with: exit status 255
colossal-policeman-83714
08/23/2022, 7:39 AM
2022/08/23 07:41:52 [DEBUG] Loglevel set to [debug]
2022/08/23 07:41:52 [INFO] Rancher version v2.6.3 (3c1d5fac3) is starting
2022/08/23 07:41:52 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:true Embedded:false BindHost: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:true Trace:false NoCACerts:false AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Features: ClusterRegistry:}
2022/08/23 07:41:52 [INFO] Listening on /tmp/log.sock
2022/08/23 07:41:52 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/08/23 07:41:54 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/08/23 07:41:56 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/08/23 07:41:58 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/08/23 07:42:00 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/08/23 07:42:02 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2022/08/23 07:42:04 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/08/23 07:42:06 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/08/23 07:42:08 [INFO] Waiting for server to become available: an error on the server ("apiserver not ready") has prevented the request from succeeding
2022/08/23 07:42:14 [FATAL] k3s exited with: exit status 255
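[Editor's note: "[FATAL] k3s exited with: exit status 255" in a single-node Docker install of Rancher means the embedded k3s is crashing before its apiserver comes up, so Rancher never gets past the wait loop. A generic first pass, assuming a Docker install; the container name is a placeholder:]
# look for the underlying k3s/etcd failure in the container output
docker logs --tail 200 <rancher-container> 2>&1 | grep -iE "fatal|panic|etcd|no space"
# k3s and etcd both refuse to run on a nearly full disk
df -h /var/lib/docker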
colossal-policeman-83714
08/23/2022, 7:43 AM
rough-pager-2467
08/23/2022, 9:05 AM
adventurous-addition-59971
08/23/2022, 10:21 AM
careful-optician-75900
08/23/2022, 10:35 AM
2022/08/12 19:27:36 [ERROR] Error during subscribe write tcp 192.68.102.197:80->192.68.101.253:53682: write: broken pipe
2022/08/12 19:27:36 [ERROR] Error during subscribe write tcp 192.68.102.197:80->192.68.101.253:53714: write: broken pipe
2022/08/12 19:27:36 [ERROR] Error during subscribe write tcp 192.68.102.197:80->192.68.101.253:53698: write: broken pipe
2022/08/12 19:27:49 [ERROR] Error during subscribe write tcp 192.68.102.197:80->192.68.101.253:44424: write: broken pipe
2022/08/12 19:28:06 [ERROR] Failed to handling tunnel request from remote address 192.68.101.98:37902: response 400: cluster not found
2022/08/12 19:28:06 [INFO] Active TLS secret serving-cert (ver=570139701) (count 12): map[listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.68.101.118:192.68.101.118 listener.cattle.io/cn-192.68.101.218:192.68.101.218 listener.cattle.io/cn-192.68.101.223:192.68.101.223 listener.cattle.io/cn-192.68.101.98:192.68.101.98 listener.cattle.io/cn-192.68.102.146:192.68.102.146 listener.cattle.io/cn-192.68.102.160:192.68.102.160 listener.cattle.io/cn-192.68.102.197:192.68.102.197 listener.cattle.io/cn-192.68.102.232:192.68.102.232 listener.cattle.io/cn-localhost:localhost listener.cattle.io/cn-rancher.cattle-system:rancher.cattle-system listener.cattle.io/fingerprint:SHA1=
2022/08/12 19:28:06 [INFO] Active TLS secret tls-rancher-internal (ver=570139702) (count 1): map[listener.cattle.io/cn-10.100.249.233:10.100.249.233 listener.cattle.io/fingerprint:SHA1=86080A37DFA63E45479DCAA425785F79B5AAF953]
2022/08/12 19:28:11 [ERROR] Failed to serve peer connection 192.68.101.118: websocket: close 1006 (abnormal closure): unexpected EOF
2022/08/12 19:28:11 [INFO] error in remotedialer server [400]: read tcp 192.68.102.197:443->192.68.102.160:43614: use of closed network connection
2022/08/12 19:28:11 [ERROR] Failed to serve peer connection 192.68.102.160: websocket: close 1006 (abnormal closure): unexpected EOF
2022/08/12 19:28:11 [INFO] error in remotedialer server [400]: read tcp 192.68.102.197:443->192.68.101.118:50186: use of closed network connection
2022/08/12 19:28:16 [INFO] Handling backend connection request [192.68.102.160]
2022/08/12 19:28:16 [INFO] Handling backend connection request [192.68.101.118]
2022/08/12 19:28:16 [ERROR] Failed to handling tunnel request from remote address 192.68.101.98:37962: response 400: cluster not found
2022/08/12 19:28:16 [ERROR] Failed to handling tunnel request fr
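[Editor's note: "response 400: cluster not found" means an agent keeps dialing in with a cluster ID this Rancher installation no longer knows, typically a leftover cattle-cluster-agent from a deleted or re-imported cluster. A sketch of the cross-check, first against the Rancher local cluster and then against the downstream cluster sending the requests:]
# which cluster IDs does Rancher currently know?
kubectl get clusters.management.cattle.io
# which server URL is the downstream agent configured for?
kubectl -n cattle-system get deployment cattle-cluster-agent -o yaml | grep CATTLE_SERVER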
brash-monitor-41966
08/23/2022, 12:56 PM
abundant-printer-55142
08/23/2022, 1:22 PM
acoustic-photographer-83125
08/23/2022, 1:37 PM
little-smartphone-40189
08/23/2022, 2:10 PM
brash-monitor-41966
08/23/2022, 2:30 PM