clever-air-65544
09/16/2022, 2:29 PM
curved-lifeguard-39360
09/16/2022, 2:35 PM
InvalidParameterException: You cannot specify an AMI Type other than CUSTOM, when specifying an image id in your launch template.
{
  RespMetadata: { StatusCode: 400, RequestID: "f92492c3-3f77-4f63-b91b-c6794fc81488" },
  ClusterName: "pano-prod",
  Message_: "You cannot specify an AMI Type other than CUSTOM, when specifying an image id in your launch template.",
  NodegroupName: "pool-pvt"
}
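For reference, the pairing the API enforces, as a sketch using the AWS CLI; cluster and nodegroup names are taken from the error above, while the subnets, role ARN, and launch template ID are placeholders:

# A launch template that pins an image id requires --ami-type CUSTOM
aws eks create-nodegroup \
  --cluster-name pano-prod \
  --nodegroup-name pool-pvt \
  --ami-type CUSTOM \
  --launch-template id=lt-0123456789abcdef0,version=1 \
  --subnets subnet-aaaa subnet-bbbb \
  --node-role arn:aws:iam::111111111111:role/eksNodeRole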
proud-ram-62490
09/16/2022, 3:08 PM
failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: open /dev/pts/0: operation not permitted: unknown
exit code: 128
proud-ram-62490
09/16/2022, 3:09 PM
icy-winter-80635
09/17/2022, 5:33 AM
wooden-angle-771
09/17/2022, 10:13 AM
The /var/lib/rancher/rke2/bin folder is missing, and therefore the containerd binary is missing and can't start. Using CentOS, installed via the RPM method. Is anyone seeing something similar?
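A quick way to confirm what the RPM install actually laid down and why the service failed, as a sketch assuming the standard rke2-server package and systemd unit names:

# List the files the rke2-server package installed
rpm -ql rke2-server | head -n 20
# Check whether the service started, and pull recent logs for the failure
systemctl status rke2-server
journalctl -u rke2-server --no-pager | tail -n 50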
hundreds-sugar-37524
09/17/2022, 10:24 AM
agreeable-oil-87482
09/17/2022, 5:56 PM
adventurous-magazine-8486
09/18/2022, 3:45 AM
plain-portugal-37007
09/18/2022, 8:58 AM
2m3s Normal Created pod/prometheus-rancher-monitoring-prometheus-0 Created container config-reloader
2m3s Normal Started pod/prometheus-rancher-monitoring-prometheus-0 Started container config-reloader
2m3s Normal Pulled pod/prometheus-rancher-monitoring-prometheus-0 Container image "rancher/mirrored-library-nginx:1.21.1-alpine" already present on machine
2m3s Normal Created pod/prometheus-rancher-monitoring-prometheus-0 Created container prometheus-proxy
2m2s Normal Started pod/prometheus-rancher-monitoring-prometheus-0 Started container prometheus-proxy
2m2s Normal Killing pod/prometheus-rancher-monitoring-prometheus-0 Stopping container prometheus
2m2s Normal Killing pod/prometheus-rancher-monitoring-prometheus-0 Stopping container config-reloader
2m2s Normal Killing pod/prometheus-rancher-monitoring-prometheus-0 Stopping container prometheus-proxy
2m Normal Scheduled pod/prometheus-rancher-monitoring-prometheus-0 Successfully assigned cattle-monitoring-system/prometheus-rancher-monitoring-prometheus-0 to ip-10-102-66-114.ec2.internal
2m Normal Pulled pod/prometheus-rancher-monitoring-prometheus-0 Container image "quay.io/prometheus-operator/prometheus-config-reloader:v0.56.0" already present on machine
2m Normal Created pod/prometheus-rancher-monitoring-prometheus-0 Created container init-config-reloader
2m Normal Started pod/prometheus-rancher-monitoring-prometheus-0 Started container init-config-reloader
117s Normal Scheduled pod/prometheus-rancher-monitoring-prometheus-0 Successfully assigned cattle-monitoring-system/prometheus-rancher-monitoring-prometheus-0 to ip-10-102-66-114.ec2.internal
117s Normal Pulled pod/prometheus-rancher-monitoring-prometheus-0 Container image "quay.io/prometheus-operator/prometheus-config-reloader:v0.56.0" already present on machine
117s Normal Created pod/prometheus-rancher-monitoring-prometheus-0 Created container init-config-reloader
116s Normal Started pod/prometheus-rancher-monitoring-prometheus-0 Started container init-config-reloader
114s Normal Scheduled pod/prometheus-rancher-monitoring-prometheus-0 Successfully assigned cattle-monitoring-system/prometheus-rancher-monitoring-prometheus-0 to ip-10-102-66-114.ec2.internal
50s Warning FailedMount pod/prometheus-rancher-monitoring-prometheus-0 MountVolume.SetUp failed for volume "tls-assets" : secret "prometheus-rancher-monitoring-prometheus-tls-assets" not found
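One hedged check for the FailedMount above: that secret is normally created and maintained by the prometheus-operator, so verify it exists and, if not, nudge the operator. A sketch assuming the standard rancher-monitoring deployment name:

# Does the TLS assets secret exist?
kubectl -n cattle-monitoring-system get secret prometheus-rancher-monitoring-prometheus-tls-assets
# If it is missing, restarting the operator usually makes it regenerate the secret
kubectl -n cattle-monitoring-system rollout restart deploy/rancher-monitoring-operator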
ambitious-motherboard-40337
09/18/2022, 4:15 PM
silly-jordan-81965
09/19/2022, 5:41 AM
calm-dinner-82480
09/19/2022, 6:39 AM
agreeable-school-15335
09/19/2022, 9:36 AM
I can't use `kubectl` with the kubeconfig on another computer for two specific clusters (older EKS clusters). Both clusters are on version 1.21. I get this error after every `kubectl`/`helm` command:
Error from server (InternalError): an error on the server ("unable to create impersonator account: ClusterUnavailable 503: ClusterUnavailable 503: cluster not found") has prevented the request from succeeding
One other weird thing: I can't use Execute Shell or view logs directly in the Rancher GUI for these two clusters. It worked fine before 2.6.6.
I decided to upgrade to Rancher 2.6.8, but for these two clusters only, the cluster-agent upgrade is not propagated (they are still on cluster-agent 2.6.6).
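A sketch for confirming which agent image a downstream cluster is actually running, with kubectl pointed at the downstream cluster rather than at Rancher:

# The image tag reveals the cluster-agent version (e.g. rancher/rancher-agent:v2.6.6)
kubectl -n cattle-system get deploy cattle-cluster-agent \
  -o jsonpath='{.spec.template.spec.containers[0].image}'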
While navigating the Rancher console, I saw one of these clusters in an error state on the Cluster Management page. It says the cluster is unavailable (but I can still navigate and manipulate it in the Rancher interface). I researched this and found this command:
kubectl patch clusters.management.cattle.io <REPLACE_WITH_CLUSTERID> -p '{"status":{"agentImage":"dummy"}}' --type merge
I tried this for the "unavailable" cluster, but nothing changed. I decided to watch the logs of the Rancher deployment, and I get this error:
[ERROR] [secretmigrator] failed to migrate service account token secret for cluster c-wrh8l, will retry: Operation cannot be fulfilled on clusters.management.cattle.io "c-wrh8l": the object has been modified; please apply your changes to the latest version and try again
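For anyone hitting the same message: "the object has been modified" is an optimistic-concurrency conflict on the cluster object, so a hedged first step is to look at its current state before retrying any patch. The cluster ID below is taken from the log line above:

# Inspect the agentImage field the secretmigrator keeps colliding with
kubectl get clusters.management.cattle.io c-wrh8l -o jsonpath='{.status.agentImage}'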
Can somebody help me?
cold-nightfall-40279
09/19/2022, 10:54 AM
cold-nightfall-40279
09/19/2022, 10:59 AM
(node:14232) UnhandledPromiseRejectionWarning: FetchError: invalid json response body at https://desktop.version.rancher.io/v1/checkupgrade reason: Unexpected token < in JSON at position 0
at C:\Users\sau\AppData\Local\Programs\Rancher Desktop\resources\app.asar\node_modules\node-fetch\lib\index.js:273:32
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async Xn.checkForUpdates (C:\Users\sau\AppData\Local\Programs\Rancher Desktop\resources\app.asar\dist\app\background.js:29:58509)
at async Xn.getLatestVersion (C:\Users\sau\AppData\Local\Programs\Rancher Desktop\resources\app.asar\dist\app\background.js:29:60057)
at async NsisUpdater.getUpdateInfoAndProvider (C:\Users\sau\AppData\Local\Programs\Rancher Desktop\resources\app.asar\node_modules\electron-updater\out\AppUpdater.js:298:19)
at async NsisUpdater.doCheckForUpdates (C:\Users\sau\AppData\Local\Programs\Rancher Desktop\resources\app.asar\node_modules\electron-updater\out\AppUpdater.js:312:24)
at async ci (C:\Users\sau\AppData\Local\Programs\Rancher Desktop\resources\app.asar\dist\app\background.js:29:63495)
(Use `Rancher Desktop --trace-warnings ...` to show where the warning was created)
(node:14232) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 2)
Config file has no clusters, will retry later
[21072:0919/114238.364:ERROR:gpu_init.cc(446)] Passthrough is not supported, GL is disabled, ANGLE is
As per the message, it appears that Rancher Desktop is being blocked by the proxy while attempting to download some data.
What destination IP/hostname is Rancher Desktop trying to access, so that I can ask my IT team to relax the restrictions for that host?
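From the stack trace above, the endpoint being contacted is desktop.version.rancher.io, and "Unexpected token < in JSON" suggests the proxy returned an HTML block page instead of JSON. A minimal connectivity test through the proxy; the proxy URL is a placeholder:

# An HTML response here reproduces the "Unexpected token <" failure shown above
curl -x http://proxy.corp.example:8080 -sv \
  https://desktop.version.rancher.io/v1/checkupgrade -o /dev/null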
Thanks,
Sau
polite-breakfast-84569
09/19/2022, 2:41 PM
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: rancher.sand.example.com
  namespace: istio-system
spec:
  privateKey:
    rotationPolicy: Always
  secretName: rancher.sand.example.com
  commonName: rancher.sand.example.com
  issuerRef:
    name: letsencrypt-prod-istio
    kind: ClusterIssuer
  dnsNames:
    - rancher.sand.example.com
The cluster already has Istio installed, so I created the following VirtualService and Gateway:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: rancher
  namespace: cattle-system
spec:
  selector:
    app: istio-ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      hosts:
        - rancher.sand.example.com
      tls:
        mode: SIMPLE
        credentialName: rancher.sand.example.com
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: rancher
  namespace: cattle-system
spec:
  gateways:
    - rancher
  hosts:
    - rancher.sand.example.com
  http:
    - name: "http"
      route:
        - destination:
            host: rancher.cattle-system.svc.cluster.local
            port:
              number: 80
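A hedged sanity check that the Gateway's selector actually matches the ingress pods and that Istio sees no configuration errors; the label value can differ between installs:

# The Gateway above selects on app=istio-ingressgateway; confirm the pods carry that label
kubectl -n istio-system get pods -l app=istio-ingressgateway --show-labels
# Ask Istio to lint the namespace for misconfigurations
istioctl analyze -n cattle-system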
Everything works except when I run `kubectl exec` and `kubectl port-forward` from my terminal:
$ kubectl exec -v=7 -it myPod -- bash
I0919 16:30:52.730382 61542 round_trippers.go:457] Response Status: 403 Forbidden in 78 milliseconds
I0919 16:30:52.730998 61542 helpers.go:216] server response object: [{
"metadata": {}
}]
F0919 16:30:52.731059 61542 helpers.go:115] Error from server:
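One isolation test, as a sketch with a placeholder kubeconfig: point kubectl straight at the cluster, bypassing the Istio ingress in front of Rancher, to see whether the 403 comes from the mesh or from Rancher itself:

# If this works, the exec upgrade stream is being rejected somewhere in the Istio path
kubectl --kubeconfig ~/.kube/direct-cluster.yaml exec -it myPod -- bash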
Has anyone had this issue before?
proud-salesmen-12221
09/19/2022, 4:30 PM
rapid-bear-5359
09/19/2022, 6:35 PM
rapid-bear-5359
09/19/2022, 6:36 PM
rapid-bear-5359
09/19/2022, 6:40 PM
creamy-crowd-89310
09/19/2022, 7:55 PM
brash-machine-34636
09/19/2022, 11:33 PM
brash-machine-34636
09/19/2022, 11:35 PM
fierce-coat-52387
09/20/2022, 12:13 AM
brash-planet-10109
09/20/2022, 8:23 AM
creamy-crowd-89310
09/20/2022, 8:35 AM
brash-planet-10109
09/20/2022, 8:37 AM
creamy-crowd-89310
09/20/2022, 8:39 AM
brash-planet-10109
09/20/2022, 8:49 AM