green-rain-9522
06/19/2023, 8:49 AM
I added
kubelet:
  extra_args:
    eviction-hard: >-
      memory.available<100Mi,nodefs.available<10%,imagefs.available<15%,nodefs.inodesFree<5%
to the /etc/rancher/rke2/config.yaml, but it doesn't seem to affect the worker nodes
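For reference, RKE2's /etc/rancher/rke2/config.yaml normally takes kubelet flags through a kubelet-arg list rather than the nested kubelet.extra_args block used in RKE1's cluster.yml. A minimal sketch of the same thresholds in that form (assuming it is placed on each worker/agent node and the rke2-agent service is restarted afterwards):

# /etc/rancher/rke2/config.yaml (sketch, not verified against this cluster)
kubelet-arg:
  - "eviction-hard=memory.available<100Mi,nodefs.available<10%,imagefs.available<15%,nodefs.inodesFree<5%"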

best-jordan-89798
06/19/2023, 11:41 AM

best-jordan-89798
06/19/2023, 11:41 AM

loud-eve-73457
06/20/2023, 10:54 AM
Plan CRD. Should I delete it manually or just leave it there untouched? Thanks.
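Assuming the CRD in question is the system-upgrade-controller's plans.upgrade.cattle.io (the message does not say which Plan CRD it is), a quick way to inspect it and, only if nothing still uses it, remove it:

# list any remaining upgrade Plans across all namespaces
kubectl get plans.upgrade.cattle.io -A
# delete the CRD itself only if no Plans remain and nothing depends on it
kubectl delete crd plans.upgrade.cattle.io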

broad-farmer-70498
06/20/2023, 3:02 PM

wonderful-rain-13345
06/20/2023, 11:09 PM

brief-mouse-13981
06/21/2023, 7:35 AM

wonderful-rain-13345
06/21/2023, 10:55 PM

shy-lunch-4399
06/22/2023, 11:57 AM
The rke2-ingress-controller pods on my rke2-agent nodes are not coming up.
The one on the rke2-server node is working correctly.
When I view the logs of the 2 agent nodes, there is no error, but each container is trying to use 10.43.0.1 as the pod IP, and only the one on my server node succeeds in doing so.
Can anybody help me with this matter?
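Worth noting, as an assumption since the CNI setup is not described: 10.43.0.1 is the default ClusterIP of the kubernetes API Service in RKE2, not a pod IP, so pods referring to it is expected; what usually differs between nodes is whether that address is reachable. A quick check:

# the kubernetes Service normally has ClusterIP 10.43.0.1 with RKE2's default service CIDR
kubectl get svc kubernetes -n default
# where the ingress controller pods are scheduled and which IPs they actually received
kubectl get pods -A -o wide | grep ingress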

hundreds-evening-84071
06/22/2023, 1:26 PM
# kubectl get no
E0622 09:22:58.972153 15285 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0622 09:22:58.987766 15285 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0622 09:22:58.990105 15285 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0622 09:22:58.991244 15285 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
NAME STATUS ROLES AGE VERSION
But everything appears to be okay. Should I ignore these lines in the output?
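Those memcache.go errors usually just mean the aggregated metrics API is not reporting as available. A minimal check (plain kubectl, nothing RKE2-specific assumed):

# shows whether the metrics APIService is Available and, if not, the reported reason
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl -n kube-system get pods | grep metrics-server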

eager-refrigerator-66976
06/22/2023, 1:38 PM
Kubernetes v1.25 removes the beta PodSecurityPolicy admission plugin. Please follow the upstream documentation to migrate from PSP if using the built-in PodSecurity Admission Plugin, prior to upgrading to v1.25.0+rke2r1.
While upstream documentation tells me to disable the PodSecurityPolicy admission plugin, I've removed it from kube-apiserver-arg
enable-admission-plugins=NodeRestriction
but it still somehow gets added back… I am guessing it is enforced by RKE2 hardening, so what I actually get is:
I0622 09:17:49.341327 1 flags.go:64] FLAG: --enable-admission-plugins="[NodeRestriction,PodSecurityPolicy,NodeRestriction]"
I also tried to add
--disable-admission-plugins="[PodSecurityPolicy]"
but this causes the API server to fail:
"command failed" err="[PodSecurityPolicy] in enable-admission-plugins and disable-admission-plugins overlapped"
So my question is: how do I disable the PodSecurityPolicy admission plugin so that I can remove all PSPs before upgrading to 1.25.x 🙏
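Whatever happens with the admission-plugin flag, removing the PSP objects themselves before the upgrade only needs plain kubectl. A small sketch, assuming you have first confirmed that no workload relies on the policies (the policy name below is a placeholder):

# inventory existing PodSecurityPolicies and anything bound to them
kubectl get psp
kubectl get clusterrolebinding -o wide | grep -i psp
# delete a specific policy once it is confirmed unused
kubectl delete psp <policy-name>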

straight-mechanic-16308
06/22/2023, 2:10 PM

hundreds-evening-84071
06/22/2023, 2:45 PM
Is rke2-agent required to run just Rancher on an RKE2 cluster?
What I am trying to do is run a 3-node RKE2 cluster for Rancher; these 3 nodes will have rke2-server, but I am unsure whether separate rke2-agent nodes will be required for the Rancher application.

elegant-action-27521
06/22/2023, 5:02 PM

white-lunch-51379
06/23/2023, 9:09 PM

green-rain-9522
06/25/2023, 9:21 AM

loud-eve-73457
06/25/2023, 10:40 AM
global-unrestricted-psp. I would like to know how I should prepare before upgrading my cluster to v1.25. Will the remaining PSP configuration of this kind break my app and cluster? How can I delete them? Thanks
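As a rough sketch, assuming global-unrestricted-psp is the RKE2-provided default policy and nothing in the cluster was built to depend on it, it can be checked and removed with kubectl before the v1.25 upgrade:

# check whether any roles still reference the policy before removing it
kubectl get clusterroles -o yaml | grep -n "global-unrestricted-psp"
# once nothing references it, the policy can be removed
kubectl delete psp global-unrestricted-psp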

kind-air-74358
06/26/2023, 10:42 AM
Failed to test data store connection: this server is a not a member of the etcd cluster. Found [rancher-dev-control-plane-02-abc=https://<old-ip-address:2380
What should be the best way to work around this?
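One commonly used recovery path when a server's address has changed, which is an assumption here since this node's history is not given, is RKE2's etcd cluster reset; it rewrites the member list to just the local node, after which any other servers must rejoin:

# on the affected server, with the service stopped
systemctl stop rke2-server
rke2 server --cluster-reset
# then start the service again and re-join the remaining server nodes
systemctl start rke2-server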

best-jordan-89798
06/28/2023, 6:41 AM

best-jordan-89798
06/28/2023, 6:41 AM

purple-pilot-92297
06/28/2023, 8:28 PM
Jun 28 20:23:33 rke2-bld-01 rke2[2805]: {"level":"info","ts":"2023-06-28T20:23:33.508Z","logger":"etcd-client","caller":"v3@v3.5.7-k3s1/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
Jun 28 20:23:33 rke2-bld-01 rke2[2805]: time="2023-06-28T20:23:33Z" level=error msg="Kubelet exited: exit status 1"
Jun 28 20:23:36 rke2-bld-01 rke2[2805]: time="2023-06-28T20:23:36Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error"
Jun 28 20:23:39 rke2-bld-01 rke2[2805]: time="2023-06-28T20:23:39Z" level=error msg="Kubelet exited: exit status 1"
Jun 28 20:23:39 rke2-bld-01 rke2[2805]: time="2023-06-28T20:23:39Z" level=info msg="Container for etcd not found (no matching container found), retrying"
I can see the images there:
root@rke2-bld-01:~# /var/lib/rancher/rke2/bin/ctr --address /run/k3s/containerd/containerd.sock --namespace k8s.io image ls
REF TYPE DIGEST SIZE PLATFORMS LABELS
docker.io/rancher/hardened-etcd:v3.5.7-k3s1-build20230609 application/vnd.docker.distribution.manifest.list.v2+json sha256:faa679c5463e1f42e2a8fff66cf0a1b47e18be09bdf073b8b965642ea4e4a9d5 59.6 MiB linux/amd64,linux/arm64,linux/s390x io.cri-containerd.image=managed
docker.io/rancher/hardened-etcd@sha256:faa679c5463e1f42e2a8fff66cf0a1b47e18be09bdf073b8b965642ea4e4a9d5 application/vnd.docker.distribution.manifest.list.v2+json sha256:faa679c5463e1f42e2a8fff66cf0a1b47e18be09bdf073b8b965642ea4e4a9d5 59.6 MiB linux/amd64,linux/arm64,linux/s390x io.cri-containerd.image=managed
docker.io/rancher/hardened-kubernetes:v1.27.3-rke2r1-build20230614 application/vnd.docker.distribution.manifest.list.v2+json sha256:8ad41bdeae0d1f3f569512380610f822b419b2720d96c03110690563be62fbac 191.4 MiB linux/amd64,linux/arm64,linux/s390x io.cri-containerd.image=managed
docker.io/rancher/hardened-kubernetes@sha256:8ad41bdeae0d1f3f569512380610f822b419b2720d96c03110690563be62fbac application/vnd.docker.distribution.manifest.list.v2+json sha256:8ad41bdeae0d1f3f569512380610f822b419b2720d96c03110690563be62fbac 191.4 MiB linux/amd64,linux/arm64,linux/s390x io.cri-containerd.image=managed
sha256:08610ff5575b078662d9a87cf08b8e78c7445cb6440f48757ae127d541dc715c application/vnd.docker.distribution.manifest.list.v2+json sha256:8ad41bdeae0d1f3f569512380610f822b419b2720d96c03110690563be62fbac 191.4 MiB linux/amd64,linux/arm64,linux/s390x io.cri-containerd.image=managed
sha256:d52d36ccbf1ad9932b0e995da0ed5049875db322705a20f572e5f4cd6ed87bb1 application/vnd.docker.distribution.manifest.list.v2+json sha256:faa679c5463e1f42e2a8fff66cf0a1b47e18be09bdf073b8b965642ea4e4a9d5 59.6 MiB linux/amd64,linux/arm64,linux/s390x io.cri-containerd.image=managed
But no containers are running:
root@rke2-bld-01:~# /var/lib/rancher/rke2/bin/ctr --address /run/k3s/containerd/containerd.sock --namespace k8s.io container ls
CONTAINER IMAGE RUNTIME
root@rke2-bld-01:~#
Please let me know if there is any further information I can provide, or if this would be better suited for a GitHub issue.
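Since the journal only shows "Kubelet exited: exit status 1" without the underlying reason, the component logs RKE2 writes on the node itself are usually the next place to look (standard RKE2 log locations; nothing cluster-specific assumed):

# kubelet's own log, which normally contains the actual exit reason
tail -n 100 /var/lib/rancher/rke2/agent/logs/kubelet.log
# containerd's log, useful when the etcd container is never created
tail -n 100 /var/lib/rancher/rke2/agent/containerd/containerd.log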

hundreds-evening-84071
06/30/2023, 3:17 PM

ambitious-plastic-3551
06/30/2023, 3:22 PM

ambitious-plastic-3551
06/30/2023, 3:22 PM

hundreds-evening-84071
06/30/2023, 3:26 PM

rough-farmer-49135
06/30/2023, 4:48 PM

bright-fireman-42144
07/03/2023, 3:31 AM

narrow-noon-75604
07/03/2023, 8:22 AM

fancy-agency-31734
07/03/2023, 12:24 PM

modern-kangaroo-10377
07/03/2023, 12:55 PM