brash-zebra-92886
12/01/2022, 7:33 PM

lemon-jelly-91576
12/01/2022, 7:48 PM
Running kubectl version, I’m getting this error:
invalid configuration: no configuration has been provided
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:36:36Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v4.5.7
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Something in the configuration seems to have become corrupted. How can I reset ~/.kube/config or re-establish a new configuration?
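
That "connection to the server localhost:8080 was refused" message usually just means kubectl could not find a usable context. A minimal sketch of how one might inspect and rebuild it, assuming the default ~/.kube/config path and, for the regeneration step, an EKS cluster reachable with the AWS CLI (region and cluster name are placeholders):

  cp ~/.kube/config ~/.kube/config.bak 2>/dev/null   # back up whatever is currently there
  kubectl config view                                # see what configuration kubectl is actually loading
  kubectl config get-contexts
  aws eks update-kubeconfig --region <region> --name <cluster-name>

For a Rancher-managed cluster, the kubeconfig can also be re-downloaded from the cluster's page in the Rancher UI.

acceptable-printer-7134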
12/01/2022, 8:35 PM
aws-auth config map entry for the MapUsers: where can we check logs regarding importing?
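
A hedged sketch of where such logs usually live, assuming a default Rancher setup (resource and container names below are the usual defaults and may differ):

  # On the Rancher server side (Helm install):
  kubectl -n cattle-system logs -l app=rancher --tail=200
  # Or, for a single-container Docker install:
  docker logs <rancher-container-name> --tail 200

  # On the cluster being imported:
  kubectl -n cattle-system logs deploy/cattle-cluster-agent --tail=200

miniature-advantage-78722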
12/01/2022, 10:04 PM
--cgroup-driver flag?

famous-grass-8099
12/01/2022, 11:11 PM
rancher-server_1 | 2022/12/01 04:54:09 [ERROR] Failed to handle tunnel request from remote address 172.19.0.2:49716 (X-Forwarded-For: 44.230.106.196, 172.31.30.132): response 400: websocket: the client is not using the websocket protocol: 'websocket' token not found in 'Upgrade' header
rancher-server_1 | 2022/12/01 04:54:09 [ERROR] Failed to handle tunnel request from remote address 172.19.0.2:49716 (X-Forwarded-For: 44.230.106.196, 172.31.30.132): response 400: Error during upgrade for host [c-sxlgx]: websocket: the client is not using the websocket protocol: 'websocket' token not found in 'Upgrade' header
I have another AWS EKS cluster that I am trying to import into the above Rancher instance. It is showing a "waiting" status.
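
Those "'websocket' token not found in 'Upgrade' header" errors typically point at a proxy or load balancer in front of Rancher that does not forward WebSocket upgrades, which would keep the agent tunnel from registering. A hedged first check from the EKS side, assuming the registration manifest has already been applied (the Rancher URL is a placeholder):

  kubectl -n cattle-system get pods -l app=cattle-cluster-agent
  kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -- \
    curl -skI https://<rancher-server-url>/healthz

early-lifeguard-63817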
12/01/2022, 11:43 PM

bright-fireman-42144
12/02/2022, 1:32 AM

damp-dinner-23240
12/02/2022, 8:33 AM
version: "3"
services:
  db:
    container_name: spring-db
    image: mysql
    platform: linux/amd64
    environment:
      MYSQL_DATABASE: todos
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./db/data:/var/lib/mysql:rw
    ports:
      - "3307:3307"
    restart: always
Below is the error I see:
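
Whatever the exact error, two details in this compose file commonly trip up the official mysql image: MYSQL_USER must not be root (the root account is configured solely via MYSQL_ROOT_PASSWORD), and the container listens on 3306, so a "3307:3307" mapping publishes a port nothing is listening on. A hedged variant of the service, keeping the rest of the file as-is:

  db:
    container_name: spring-db
    image: mysql
    platform: linux/amd64
    environment:
      MYSQL_DATABASE: todos
      MYSQL_ROOT_PASSWORD: root   # root is configured here; drop MYSQL_USER/MYSQL_PASSWORD or use a non-root user
    volumes:
      - ./db/data:/var/lib/mysql:rw
    ports:
      - "3307:3306"               # host 3307 -> container 3306 (MySQL's default port)
    restart: always

kind-waitress-15815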
12/02/2022, 9:31 AM

salmon-carpenter-62625
12/02/2022, 10:08 AM

billions-plastic-92005
12/02/2022, 12:06 PM

creamy-room-58344
12/02/2022, 4:13 PM

freezing-fireman-44188
12/02/2022, 4:39 PM

creamy-accountant-88363
12/02/2022, 5:11 PM
Is it possible to provision a cluster through the clusters.provisioning.cattle.io/v1 API using a kubernetesVersion that is built/supplied by someone else? Currently this only seems to work with the rke2 or k3s Kubernetes versions bundled with Rancher.
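
For reference, a minimal sketch of the object in question (names, namespace, and the version string are placeholders). Rancher validates kubernetesVersion against the release metadata it knows about, which is presumably why only the bundled rke2/k3s versions are accepted:

  apiVersion: provisioning.cattle.io/v1
  kind: Cluster
  metadata:
    name: example-cluster
    namespace: fleet-default
  spec:
    kubernetesVersion: v1.24.8+rke2r1   # must be a version Rancher's metadata knows about
    rkeConfig: {}

gentle-petabyte-40055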
12/02/2022, 7:38 PM

few-carpenter-10741
12/02/2022, 10:10 PM
My cluster is stuck at "Waiting for cp-zema-uat1 to finish provisioning". Has anyone had this problem before, and how can I delete it?
Thanks in advance.
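
A hedged way to remove a cluster that is stuck provisioning, assuming Rancher v2.6+ where provisioned clusters live in the fleet-default namespace (the cluster name is a placeholder):

  kubectl -n fleet-default get clusters.provisioning.cattle.io
  kubectl -n fleet-default delete clusters.provisioning.cattle.io <cluster-name>

  # If the object hangs in deletion, inspect its finalizers before (cautiously) clearing them:
  kubectl -n fleet-default get clusters.provisioning.cattle.io <cluster-name> -o yaml | grep -A5 finalizers
  kubectl -n fleet-default patch clusters.provisioning.cattle.io <cluster-name> \
    --type=merge -p '{"metadata":{"finalizers":[]}}'

gentle-petabyte-40055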
12/03/2022, 4:29 AM

gentle-petabyte-40055
12/03/2022, 4:29 AM

gentle-petabyte-40055
12/03/2022, 4:30 AM

gentle-advantage-38637
12/03/2022, 12:35 PM

gentle-advantage-38637
12/03/2022, 9:17 PM

bright-fish-35393
12/05/2022, 1:10 AM

lively-night-78214
12/05/2022, 7:24 AM

adorable-photographer-68517
12/05/2022, 10:08 AM

sparse-potato-80319
12/05/2022, 11:29 AM

ancient-air-32350
12/05/2022, 12:12 PM

ambitious-student-74765
12/05/2022, 1:38 PM

thankful-balloon-877
12/05/2022, 4:16 PM
I am starting the service with systemctl start rke2-server and waiting for it to come up. This time, the service will not come up. In the kubelet.log file I find several entries of this:
E1205 13:53:15.683661 2726 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
E1205 13:53:15.683713 2726 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" pod="kube-system/etcd-rancher-har-nue-01"
E1205 13:53:15.683746 2726 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" pod="kube-system/etcd-rancher-har-nue-01"
E1205 13:53:15.683802 2726 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-rancher-har-nue-01_kube-system(e18aa5e5b83a5a3c56d78e4054612394)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-rancher-har-nue-01_kube-system(e18aa5e5b83a5a3c56d78e4054612394)\\\": rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/etcd-rancher-har-nue-01" podUID=e18aa5e5b83a5a3c56d78e4054612394
E1205 13:53:15.723238 2726 kubelet.go:2466] "Error getting node" err="node \"rancher-har-nue-01\" not found"
Am I right in thinking that this is my issue? If yes, any ideas what is happening here and where that "invalid argument: unknown" could come from?
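
The "write /proc/self/attr/keycreate: invalid argument" failure from runc is usually SELinux-related (for example a container-selinux / rke2-selinux policy mismatch after an update). A hedged set of checks, assuming an RPM-based host with auditing enabled:

  getenforce
  rpm -qa | grep -E 'container-selinux|rke2-selinux'
  ausearch -m avc -ts recent                      # recent SELinux denials, if audit tooling is installed
  journalctl -u rke2-server --since "1 hour ago" | tail -n 50

silly-solstice-24970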
12/05/2022, 8:29 PM
I am running rke version v1.3.12, and while attempting to monitor the k8s services with prometheus + grafana (without success) I noticed they didn’t get an endpoint IP:
NAMESPACE     NAME                                      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                        AGE
kube-system   kube-prometheus-coredns                   ClusterIP   None         <none>        9153/TCP                       127m
kube-system   kube-prometheus-kube-controller-manager   ClusterIP   None         <none>        10257/TCP                      127m
kube-system   kube-prometheus-kube-etcd                 ClusterIP   None         <none>        2381/TCP                       127m
kube-system   kube-prometheus-kube-proxy                ClusterIP   None         <none>        10249/TCP                      127m
kube-system   kube-prometheus-kube-scheduler            ClusterIP   None         <none>        10259/TCP                      127m
kube-system   kube-prometheus-kubelet                   ClusterIP   None         <none>        10250/TCP,10255/TCP,4194/TCP   127m
Because of that, we are unable to monitor those metrics; this is an example configuration for one of our clusters:
---
nodes:
- address: node1.localdomain
  hostname_override: node01
  user: rke
  role:
  - controlplane
  - worker
  - etcd
  labels:
    role: storage-node
- address: node02.localdomain
  hostname_override: node02
  user: rke
  role:
  - controlplane
  - worker
  - etcd
  labels:
    role: storage-node
- address: node03.localdomain
  hostname_override: node03
  user: rke
  role:
  - controlplane
  - worker
  - etcd
  labels:
    role: storage-node
- address: node04.localdomain
  hostname_override: node04
  user: rke
  role:
  - worker
  labels:
    role: storage-node
- address: node05.localdomain
  hostname_override: node05
  user: rke
  role:
  - worker
  labels:
    role: storage-node
services:
  etcd:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    uid: 0
    gid: 0
    snapshot: null
    retention: ""
    creation: ""
    backup_config: null
  kube-api:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: ""
    pod_security_policy: false
    always_pull_images: false
    secrets_encryption_config: null
    audit_log: null
    admission_configuration: null
    event_rate_limit: null
  kube-controller:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
  kubelet:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_domain: cluster.local
    infra_container_image: ""
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
    generate_serving_certificate: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
network:
  plugin: canal
  options: {}
  mtu: 0
  node_selector: {}
authentication:
  strategy: x509
  sans: []
  webhook: null
addons: ""
addons_include: []
ssh_key_path: ~/.ssh/id_rsa
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: false
kubernetes_version: "v1.23.7-rancher1-1"
..........
Is there anything I have to do to enable those metrics?
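
In RKE the controller-manager, scheduler, etcd, and kube-proxy bind their metrics ports to 127.0.0.1 and do not run as pods that the chart's selectors can match, so those headless services end up without endpoints. A hedged sketch of one common workaround, assuming a kube-prometheus-stack-style chart: expose the metrics listeners via extra_args in the services section above (flag names follow the upstream components and may need adjusting per Kubernetes version), then run rke up:

  services:
    kube-controller:
      extra_args:
        bind-address: "0.0.0.0"
    scheduler:
      extra_args:
        bind-address: "0.0.0.0"
    kubeproxy:
      extra_args:
        metrics-bind-address: "0.0.0.0"
    etcd:
      extra_args:
        listen-metrics-urls: "http://0.0.0.0:2381"

In addition, the chart's kubeControllerManager.endpoints, kubeScheduler.endpoints, kubeEtcd.endpoints, and kubeProxy.endpoints values can be pointed at the control-plane node IPs so the services get endpoints to scrape.

creamy-accountant-88363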
12/05/2022, 9:09 PM