early-lifeguard-63817
12/01/2022, 2:09 PM
dazzling-computer-84464
12/01/2022, 7:32 PM
2022/12/01 19:28:41 [DEBUG] No active connection for cluster [c-rw259], will wait for about 30 seconds
2022/12/01 19:28:41 [TRACE] dialerFactory: apiEndpoint hostPort for cluster [c-pw867] is [172.20.0.1:443]
2022/12/01 19:28:41 [TRACE] dialerFactory: no tunnel session found for cluster [c-pw867], falling back to nodeDialer
brash-zebra-92886
12/01/2022, 7:33 PM
lemon-jelly-91576
12/01/2022, 7:48 PM
kubectl version
I’m getting this error:
invalid configuration: no configuration has been provided
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:36:36Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v4.5.7
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Something in the configuration seems to have become corrupted somehow. How can I reset ~/.kube/config or re-establish a new configuration?
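A minimal sketch of one way to rebuild it, assuming an EKS- or Rancher-managed cluster; the region, cluster name, and kubeconfig path below are placeholders:
# back up the possibly-corrupted kubeconfig and start fresh
mv ~/.kube/config ~/.kube/config.bak
# for an EKS cluster, the AWS CLI can regenerate the context (needs working AWS credentials)
aws eks update-kubeconfig --region us-west-2 --name my-cluster
# for a Rancher-managed cluster, download a fresh kubeconfig from the cluster's page
# in the Rancher UI and point KUBECONFIG at it
export KUBECONFIG=~/Downloads/my-cluster.yaml
kubectl config get-contexts
kubectl get nodes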
acceptable-printer-7134
12/01/2022, 8:35 PM
aws-auth
config map entry for the MapUsers
where can we check logs regarding importing?
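A quick sketch of where to look, assuming kubectl access to the EKS cluster; <rancher-container> and <cluster-id> are placeholders:
# the mapUsers / mapRoles entries live in the aws-auth ConfigMap
kubectl -n kube-system get configmap aws-auth -o yaml
# import-related messages on the Rancher side (single-container docker install)
docker logs <rancher-container> 2>&1 | grep -i <cluster-id>
# or, for Rancher installed on Kubernetes via helm
kubectl -n cattle-system logs deploy/rancher | grep -i <cluster-id>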
miniature-advantage-78722
12/01/2022, 10:04 PM
--cgroup-driver flag?
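If the question is which cgroup driver is actually in effect, a small sketch assuming a containerd-based node with the usual default paths:
# kubelet side: explicit --cgroup-driver flag or the config-file setting
ps -ef | grep kubelet | tr ' ' '\n' | grep -i cgroup
grep -i cgroupDriver /var/lib/kubelet/config.yaml
# containerd side: SystemdCgroup under the runc runtime options
grep -i SystemdCgroup /etc/containerd/config.toml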
famous-grass-8099
12/01/2022, 11:11 PM
rancher-server_1 | 2022/12/01 04:54:09 [ERROR] Failed to handle tunnel request from remote address 172.19.0.2:49716 (X-Forwarded-For: 44.230.106.196, 172.31.30.132): response 400: websocket: the client is not using the websocket protocol: 'websocket' token not found in 'Upgrade' header
rancher-server_1 | 2022/12/01 04:54:09 [ERROR] Failed to handle tunnel request from remote address 172.19.0.2:49716 (X-Forwarded-For: 44.230.106.196, 172.31.30.132): response 400: Error during upgrade for host [c-sxlgx]: websocket: the client is not using the websocket protocol: 'websocket' token not found in 'Upgrade' header
I have another AWS EKS cluster that I am trying to import into the above Rancher instance. It is showing 'waiting' status.
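The 400 'websocket token not found in Upgrade header' errors above typically mean a proxy or load balancer in front of Rancher is not forwarding the WebSocket Upgrade/Connection headers. A sketch of the first things to check on the cluster being imported, assuming kubectl access to it:
# is the registration agent running, and what is it logging?
kubectl -n cattle-system get pods
kubectl -n cattle-system logs deploy/cattle-cluster-agent --tail=100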
early-lifeguard-63817
12/01/2022, 11:43 PM
bright-fireman-42144
12/02/2022, 1:32 AM
damp-dinner-23240
12/02/2022, 8:33 AM
version: "3"
services:
  db:
    container_name: spring-db
    image: mysql
    platform: linux/amd64
    environment:
      MYSQL_DATABASE: todos
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./db/data:/var/lib/mysql:rw
    ports:
      - "3307:3307"
    restart: always
Below is the error I see
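The error output itself did not come through above; a rough sketch of how to surface it, plus two settings in the compose file that the official mysql image is known to trip over (guesses until the actual error is seen):
docker compose up -d db
docker compose logs db
# likely culprits in the compose file above:
#  - MYSQL_USER: root  -> the official mysql image refuses this; root is configured via
#                         MYSQL_ROOT_PASSWORD, so drop MYSQL_USER/MYSQL_PASSWORD or use a non-root name
#  - "3307:3307"       -> mysqld listens on 3306 inside the container, so the mapping
#                         was probably meant to be "3307:3306"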
kind-waitress-15815
12/02/2022, 9:31 AM
salmon-carpenter-62625
12/02/2022, 10:08 AM
billions-plastic-92005
12/02/2022, 12:06 PM
creamy-room-58344
12/02/2022, 4:13 PM
freezing-fireman-44188
12/02/2022, 4:39 PM
creamy-accountant-88363
12/02/2022, 5:11 PM
clusters.provisioning.cattle.io/v1
API using a kubernetesVersion
that is built/supplied by someone else. Currently this only seems to work with the rke2 or k3s kubernetes versions bundled with Rancher.
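For reference, the field in question can be read back from the provisioning object; a minimal sketch assuming the cluster object lives in the usual fleet-default namespace, with <cluster-name> as a placeholder:
kubectl -n fleet-default get clusters.provisioning.cattle.io
kubectl -n fleet-default get clusters.provisioning.cattle.io <cluster-name> -o jsonpath='{.spec.kubernetesVersion}'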
gentle-petabyte-40055
12/02/2022, 7:38 PM
few-carpenter-10741
12/02/2022, 10:10 PM
Waiting for cp-zema-uat1 to finish provisioning
has anyone had this problem before and how can I delete it?
thanks in advance
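A sketch of where the stuck objects usually live, assuming Rancher v2.6-style provisioning (cluster-api objects in the local cluster's fleet-default namespace); deletion is destructive, so double-check the names first:
kubectl -n fleet-default get clusters.provisioning.cattle.io
kubectl -n fleet-default get machines.cluster.x-k8s.io | grep cp-zema-uat1
# deleting the machine object is what usually clears a node stuck in provisioning
kubectl -n fleet-default delete machines.cluster.x-k8s.io <machine-name>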
gentle-petabyte-40055
12/03/2022, 4:29 AM
gentle-petabyte-40055
12/03/2022, 4:29 AM
gentle-petabyte-40055
12/03/2022, 4:30 AM
gentle-advantage-38637
12/03/2022, 12:35 PM
gentle-advantage-38637
12/03/2022, 9:17 PM
bright-fish-35393
12/05/2022, 1:10 AM
lively-night-78214
12/05/2022, 7:24 AM
adorable-photographer-68517
12/05/2022, 10:08 AM
sparse-potato-80319
12/05/2022, 11:29 AM
ancient-air-32350
12/05/2022, 12:12 PM
ambitious-student-74765
12/05/2022, 1:38 PM
thankful-balloon-877
12/05/2022, 4:16 PM
systemctl start rke2-server
, and waiting for it to come up. This time, the service will not come up - in the kubelet.log file I find several entries of this:
E1205 13:53:15.683661 2726 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
E1205 13:53:15.683713 2726 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" pod="kube-system/etcd-rancher-har-nue-01"
E1205 13:53:15.683746 2726 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" pod="kube-system/etcd-rancher-har-nue-01"
E1205 13:53:15.683802 2726 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-rancher-har-nue-01_kube-system(e18aa5e5b83a5a3c56d78e4054612394)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-rancher-har-nue-01_kube-system(e18aa5e5b83a5a3c56d78e4054612394)\\\": rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/etcd-rancher-har-nue-01" podUID=e18aa5e5b83a5a3c56d78e4054612394
E1205 13:53:15.723238 2726 kubelet.go:2466] "Error getting node" err="node \"rancher-har-nue-01\" not found"
Am I right in thinking that this is my issue? If yes, any ideas what is happening here and where that "invalid argument: unknown" could come from?
creamy-pencil-82913
12/05/2022, 5:53 PM
thankful-balloon-877
12/05/2022, 6:10 PM
creamy-pencil-82913
12/05/2022, 6:26 PM
thankful-balloon-877
12/05/2022, 6:26 PM
creamy-pencil-82913
12/05/2022, 7:00 PM
thankful-balloon-877
12/05/2022, 7:16 PM
rancher-har-nue-01:~ # ls /var/log/containers/
rancher-har-nue-01:~ # rpm -qa|grep container
container-selinux-2.188.0-150400.1.8.noarch
creamy-pencil-82913
12/05/2022, 7:25 PM
thankful-balloon-877
12/05/2022, 7:28 PM
time="2022-12-05T19:26:40.803544362Z" level=warning msg="cleanup warnings time=\"2022-12-05T19:26:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=17802\ntime=\"2022-12-05T19:26:40Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/3e36f5b6a0971ee3b62b5597f9c9931d8e58edc45b9a55ecf509272f6bb5a1a2/init.pid: no such file or directory\"\n"
time="2022-12-05T19:26:40.803953105Z" level=error msg="copy shim log" error="read /proc/self/fd/21: file already closed"
time="2022-12-05T19:26:40.818880484Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-rancher-har-nue-01,Uid:e18aa5e5b83a5a3c56d78e4054612394,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd task: failed to create shim: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
if you want I can upload the full file, but it seems similar to the kubelet log?
rancher-har-nue-01:~ # getenforce
Permissive
creamy-pencil-82913
12/05/2022, 7:29 PM
thankful-balloon-877
12/05/2022, 7:29 PM
rancher-har-nue-01:~ # rpm -qa|grep rke
rke2-selinux-0.11-1.sle.noarch
rke2-common-1.23.14~rke2r1-0.x86_64
rke2-server-1.23.14~rke2r1-0.x86_64
rancher-har-nue-01:~ # grep PRETTY /etc/os-release
PRETTY_NAME="SUSE Linux Enterprise Micro 5.3"
creamy-pencil-82913
12/05/2022, 7:32 PM
selinux: true
or remove the other selinux bits.
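A minimal sketch of the first option, assuming the default rke2 config location:
# enable rke2's SELinux support in the server config and restart
grep -q '^selinux:' /etc/rancher/rke2/config.yaml 2>/dev/null || echo 'selinux: true' >> /etc/rancher/rke2/config.yaml
systemctl restart rke2-server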
thankful-balloon-877
12/05/2022, 7:54 PM
selinux: true
added to config.yaml, but that seems to get stuck with the same loop.
what I notice is that on one of my existing installations (same setup just slightly older versions)
ls -RZ /var/lib/rancher/|grep container_var_lib
....
system_u:object_r:container_var_lib_t:s0 rke2
system_u:object_r:container_var_lib_t:s0 agent
system_u:object_r:container_var_lib_t:s0 bin
system_u:object_r:container_var_lib_t:s0 server
....
whereas on the new one
ls -RZ /var/lib/rancher/|grep container_var_lib
<empty>
rancher-har-nue-01:~ # restorecon -Rvn /var/lib/rancher/|grep container_var_lib
<empty>
rancher-prv-01:~ # rpm -qa|egrep 'selinux|rke'
selinux-policy-targeted-20210716-150400.2.3.noarch
patterns-microos-selinux-5.3.3-150400.1.1.x86_64
libselinux1-3.1-150400.1.69.x86_64
selinux-policy-20210716-150400.2.3.noarch
container-selinux-2.188.0-150400.1.2.noarch
selinux-tools-3.1-150400.1.69.x86_64
rke2-selinux-0.9-1.sle.noarch
rke2-common-1.23.9~rke2r1-0.x86_64
rke2-server-1.23.9~rke2r1-0.x86_64
creamy-pencil-82913
12/05/2022, 7:57 PM
thankful-balloon-877
12/05/2022, 7:57 PM
creamy-pencil-82913
12/05/2022, 7:57 PM
thankful-balloon-877
12/05/2022, 7:58 PM
rancher-har-nue-01:~ # rpm -q container-selinux
container-selinux-2.188.0-150400.1.8.noarch
rancher-har-nue-01:~ # rpm -q rke2-selinux
rke2-selinux-0.11-1.sle.noarch
rancher-har-nue-01:~ # rpm -q --requires rke2-selinux |grep container
container-selinux >= 2.164.2-1.1
creamy-pencil-82913
12/05/2022, 7:58 PM
thankful-balloon-877
12/05/2022, 7:58 PM
creamy-pencil-82913
12/05/2022, 7:58 PM
thankful-balloon-877
12/05/2022, 8:01 PM
old:
rancher-prv-01:~ # ls -Z /usr/bin/rke2
system_u:object_r:container_runtime_exec_t:s0 /usr/bin/rke2
new:
rancher-har-nue-01:~ # ls -Z /usr/bin/rke2
system_u:object_r:bin_t:s0 /usr/bin/rke2
restorecon -Rvn /usr/bin/rke2
doesn't return anything 😕
rancher-har-nue-01:~ # kubectl get node
NAME                 STATUS   ROLES                       AGE    VERSION
rancher-har-nue-01   Ready    control-plane,etcd,master   118s   v1.23.14+rke2r1
some combination of copying /etc/selinux from the other setup, installing libselinux and container-selinux devel packages from security:SELinux, ignoring some relabel errors during boot, copying the rke2 binaries to /usr/local/bin, and repeated restorecons on the latter and on /var/lib/rancher (it would keep resetting) made it come up now. Hm, I'd rather not keep it so MacGyvered; I wonder whether something in the packages changed between versions that made the SELinux container policies no longer install correctly themselves.
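For the record, a less MacGyvered sketch of the same idea, assuming zypper-managed packages (on SLE Micro the install step normally goes through transactional-update plus a reboot):
# reinstall the policy packages so the rke2/container file contexts get registered again
zypper install --force container-selinux rke2-selinux
# confirm the policy module is loaded, then relabel the paths rke2 uses
semodule -l | grep -i rke2
restorecon -v /usr/bin/rke2
restorecon -Rv /var/lib/rancher /etc/rancher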