busy-crowd-80458
03/23/2023, 7:17 AM
root@awx1:~# /var/lib/rancher/rke2/bin/kubectl get nodes
NAME   STATUS   ROLES                              AGE     VERSION
awx1   Ready    control-plane,etcd,master,worker   4h38m   v1.24.11+rke2r1
awx2   Ready    control-plane,etcd,master,worker   4h36m   v1.24.11+rke2r1
awx3   Ready    control-plane,etcd,master,worker   4h36m   v1.24.11+rke2r1
busy-crowd-80458
03/23/2023, 7:28 AM
This output indicates that each replica is running on a different node, and thus the worker role is functioning on all three nodes.
Regarding the rke2-agent, it might not be running on all nodes because the RKE2 architecture combines the control plane and worker roles in a single binary (rke2-server). In an RKE2 cluster, you typically have one or more nodes running rke2-server and other nodes running rke2-agent. However, since your nodes are running both roles, the rke2-server binary takes care of both control plane and worker functionality, and there's no need for the rke2-agent to be running separately.
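A sketch of what that looks like in configuration (the server URL and token below are illustrative placeholders, not values from this cluster): every node in such a cluster carries a server config, and there is no agent config anywhere.

```yaml
# /etc/rancher/rke2/config.yaml on a joining server node.
# Because the node runs the rke2-server service rather than rke2-agent,
# it takes on the control-plane, etcd, and worker roles together.
server: https://awx1:9345
token: <cluster-join-token>
```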
stale-painting-80203
03/24/2023, 10:57 PM
kube-system   helm-install-rke2-ingress-nginx-tp5xd    0/1   CrashLoopBackOff   45 (58s ago)
kube-system   helm-install-rke2-metrics-server-rq8mc   0/1   CrashLoopBackOff   45 (104s ago)
Both seem to have the same error:
+ helm_v3 install --set-string global.clusterCIDR=10.42.0.0/16 --set-string global.clusterCIDRv4=10.42.0.0/16 --set-string global.clusterDNS=10.43.0.10 --set-string global.clusterDomain=cluster.local --set-string global.rke2DataDir=/var/lib/rancher/rke2 --set-string global.serviceCIDR=10.43.0.0/16 rke2-ingress-nginx /tmp/rke2-ingress-nginx.tgz
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "https://10.43.0.1:443/version": dial tcp 10.43.0.1:443: i/o timeout
+ exit
+ helm_v3 install --set-string global.clusterCIDR=10.42.0.0/16 --set-string global.clusterCIDRv4=10.42.0.0/16 --set-string global.clusterDNS=10.43.0.10 --set-string global.clusterDomain=cluster.local --set-string global.rke2DataDir=/var/lib/rancher/rke2 --set-string global.serviceCIDR=10.43.0.0/16 rke2-metrics-server /tmp/rke2-metrics-server.tgz
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "https://10.43.0.1:443/version": dial tcp 10.43.0.1:443: i/o timeout
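For context, 10.43.0.1 is the in-cluster kubernetes Service VIP, so an i/o timeout from the helm-install job pods usually points at pod networking (the CNI or kube-proxy) rather than at the API server itself. A hedged checklist, assuming default RKE2 paths:

```shell
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
# Are the CNI and kube-proxy pods healthy on every node?
/var/lib/rancher/rke2/bin/kubectl -n kube-system get pods -o wide | grep -E 'canal|kube-proxy'
# Does the kubernetes Service VIP have real endpoints behind it?
/var/lib/rancher/rke2/bin/kubectl get endpoints kubernetes
```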
refined-analyst-8898
03/27/2023, 12:33 PM
ingress-nginx chart. That's expected, but I'm hesitant to customize it because I don't have a complete mental map of the life cycle, especially upgrades. I've patched the daemonset to add the optional arg. It'd be helpful to hear from someone more familiar with RKE2 whether this is entirely expected or a patently bad idea!
hundreds-evening-84071
03/27/2023, 2:09 PM
With k3s I can run sudo k3s crictl images to see what images have been pulled locally, and sudo k3s crictl rmi --prune to delete any images not currently used by a running container. Is there something similar for RKE2?
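RKE2 bundles the same crictl binary; the equivalents below assume the default RKE2 install paths.

```shell
# Point crictl at RKE2's containerd socket via its generated config.
export CRI_CONFIG_FILE=/var/lib/rancher/rke2/agent/etc/crictl.yaml
/var/lib/rancher/rke2/bin/crictl images        # images pulled locally
/var/lib/rancher/rke2/bin/crictl rmi --prune   # delete images not used by any container
```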
hundreds-evening-84071
03/27/2023, 8:11 PM
rke2-coredns-rke2-coredns
There are 2 pods for rke2-coredns; the first is running on a Linux node and the second remains in the Pending state.
Is this one of those things that will only run on Linux?
refined-analyst-8898
03/27/2023, 9:00 PM
kubectl too.
The cluster seems to be functioning despite all the ruckus in the pod list.
Is it normal for you to clean up a bunch of stuck pods after a cluster configuration change?
worried-ram-81084
03/28/2023, 11:16 AM
Mar 28 11:10:06 soss-m1 rke2[1006]: time="2023-03-28T11:10:06Z" level=info msg="Failed to test data store connection: this server is a not a member of the etcd cluster. Found [soss-m1-a184a80d=https://172.16.1.11:2380 soss-m3-bd71347c=https://172.16.1.13:2380], expect: soss-m1-a184a80d=https://10.1.1.11:2380"
how do i change the expected IP address?
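One hedged possibility, not a guaranteed fix: the etcd peer address a node advertises follows node-ip in the RKE2 config, but if the member was registered under the old address, the etcd member list itself may also need repair (for example via a snapshot restore or cluster reset) before the node can rejoin.

```yaml
# /etc/rancher/rke2/config.yaml - node-ip sets the address this node
# advertises, including its etcd peer URL. The address below is taken
# from the "Found" side of the log message above.
node-ip: 172.16.1.11
```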
ancient-army-24563
03/29/2023, 7:05 AM
ingress:
  provider: nginx
  options:
    map-hash-bucket-size: "128"
    ssl-protocols: SSLv2
  extra_args:
    enable-ssl-passthrough: ""
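That ingress: block is RKE1 cluster.yml syntax; on RKE2 the packaged chart is customized with a HelmChartConfig manifest instead. A sketch mapping the same settings (the file name is arbitrary; the values follow the rke2-ingress-nginx chart):

```yaml
# /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      config:
        map-hash-bucket-size: "128"
        ssl-protocols: "SSLv2"
      extraArgs:
        enable-ssl-passthrough: true
```

Manifests dropped into server/manifests are reapplied by RKE2 itself, which addresses the earlier worry about hand-patched daemonsets being overwritten on upgrade.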
little-doctor-70130
03/30/2023, 12:48 PM
kubectl get nodes says the master node is still v1.21.3. Checking rke2 --version says it is v1.21.14 - what am I missing?
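One hedged explanation: the VERSION column in kubectl get nodes reports each node's running kubelet, while rke2 --version reports the installed binary, so a gap between the two usually means the service was not restarted after the upgrade.

```shell
# Restart the upgraded service so the embedded kubelet re-registers
# with its new version (use rke2-agent instead on worker-only nodes).
sudo systemctl restart rke2-server
/var/lib/rancher/rke2/bin/kubectl get nodes
```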
steep-london-53093
03/30/2023, 3:54 PM
"https://192.168.0.15:10250/containerLogs/kube-system/kube-proxy-k8s-master-03/kube-proxy": proxy error from 127.0.0.1:9345 while dialing 192.168.0.15:10250, code 503: 503 Service Unavailable
Can you give me any hint to find the root cause of this problem?
Thank you in advance!
polite-translator-35958
03/30/2023, 6:06 PM
yum install rke2-server
but I'd really like to add rpm.rancher.io to my company's Artifactory server as a remote so that we can cache the RPMs locally. I can't seem to browse the rpm.rancher.io trees and I can't seem to configure an Artifactory remote for it. Anyone ever done this? Anyone ever mirrored one of the rpm.rancher.io repos?
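The repos are not browsable as directory listings, but each channel/version path does serve standard yum repodata, so a remote can point at a concrete baseurl. An illustrative .repo matching what the get.rke2.io install script writes; treat the exact paths and version numbers as assumptions to verify for your distro and channel:

```ini
[rancher-rke2-common-stable]
name=Rancher RKE2 Common (stable)
baseurl=https://rpm.rancher.io/rke2/stable/common/centos/8/noarch
enabled=1
gpgcheck=1
gpgkey=https://rpm.rancher.io/public.key

[rancher-rke2-1-24-stable]
name=Rancher RKE2 1.24 (stable)
baseurl=https://rpm.rancher.io/rke2/stable/1.24/centos/8/x86_64
enabled=1
gpgcheck=1
gpgkey=https://rpm.rancher.io/public.key
```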
magnificent-vr-88571
03/31/2023, 4:33 PM
Defaulting debug container name to debugger-bzglt.
error: ephemeral containers are disabled for this cluster (error from server: "the server could not find the requested resource").
Any documentation to enable it?
stale-painting-80203
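On the ephemeral-containers question above: "the server could not find the requested resource" typically means the API server predates Kubernetes v1.23, where EphemeralContainers became enabled by default. Upgrading the cluster is the cleaner path; on an older cluster a hedged sketch is enabling the feature gate through the RKE2 config and restarting rke2-server:

```yaml
# /etc/rancher/rke2/config.yaml - only relevant on Kubernetes < v1.23,
# where the EphemeralContainers feature gate is off by default.
kube-apiserver-arg:
  - "feature-gates=EphemeralContainers=true"
kubelet-arg:
  - "feature-gates=EphemeralContainers=true"
```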
03/31/2023, 6:59 PM
Warning  Failed  20m (x3 over 20m)  kubelet  Failed to pull image "harbor10165.senode.dev/sgs/webapp:2.0": rpc error: code = Unknown desc = failed to pull and unpack image "harbor10165.senode.dev/sgs/webapp:2.0": failed to resolve reference "harbor10165.senode.dev/sgs/webapp:2.0": failed to do request: Head "https://harbor10165.senode.dev/v2/sgs/webapp/manifests/2.0": x509: certificate signed by unknown authority
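A hedged sketch for the x509 error above: containerd on each node has to trust the CA that signed the Harbor certificate, which RKE2 wires up through registries.yaml (the CA file path below is a placeholder):

```yaml
# /etc/rancher/rke2/registries.yaml on every node, then restart
# rke2-server / rke2-agent. ca_file points at the private CA that
# signed harbor10165.senode.dev's certificate.
configs:
  "harbor10165.senode.dev":
    tls:
      ca_file: /etc/ssl/certs/harbor-ca.crt
```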