astonishing-action-56877
09/12/2023, 4:54 PM
modern-dress-80156
09/12/2023, 8:04 PM
journalctl -u rke2-server | grep proxy
Sep 11 22:12:41 server1 rke2[1192239]: time="2023-09-11T22:12:41Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: <https://127.0.0.1:9345/v1-rke2/readyz>: 500 Internal Server Error"
Sep 11 22:12:46 server1 rke2[1192239]: time="2023-09-11T22:12:46Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: <https://127.0.0.1:9345/v1-rke2/readyz>: 500 Internal Server Error"
Sep 11 22:12:51 server1 rke2[1192239]: time="2023-09-11T22:12:51Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: <https://127.0.0.1:9345/v1-rke2/readyz>: 500 Internal Server Error"
Sep 11 22:12:56 server1 rke2[1192239]: time="2023-09-11T22:12:56Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: <https://127.0.0.1:9345/v1-rke2/readyz>: 500 Internal Server Error"
Sep 11 22:13:01 server1 rke2[1192239]: time="2023-09-11T22:13:01Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: <https://127.0.0.1:9345/v1-rke2/readyz>: 500 Internal Server Error"
Sep 11 22:13:06 server1 rke2[1192239]: time="2023-09-11T22:13:06Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: <https://127.0.0.1:9345/v1-rke2/readyz>: 500 Internal Server Error"
Sep 11 22:13:11 server1 rke2[1192239]: time="2023-09-11T22:13:11Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: <https://127.0.0.1:9345/v1-rke2/readyz>: 500 Internal Server Error"
Sep 11 22:13:16 server1 rke2[1192239]: time="2023-09-11T22:13:16Z" level=info msg="Tunnel server egress proxy mode: agent"
Sep 11 22:13:16 server1 rke2[1192239]: time="2023-09-11T22:13:16Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=server1 --kubeconfig=/var/lib/rancher/rke2/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
full-wire-66533
09/13/2023, 7:13 AM
rke2-killall.sh and rke2-uninstall.sh and started my install routine with version v1.26.8+rke2r1 again. Now my first server doesn't come up; in the logs I found errors like:
containerd-log:
time="2023-09-13T09:12:31.551578482+02:00" level=error msg="StopPodSandbox for \"a9fbe97808f75a6a5e13edebcfcae7312dde8a68dac50b6dbfe98db0a63346b1\" failed" error="failed to destroy network for sandbox \"a9fbe97808f75a6a5e13edebcfcae7312dde8a68dac50b6dbfe98db0a63346b1\": cni plugin not initialized"
rke2-server-log:
INFO[0309] Waiting to retrieve kube-proxy configuration; server is not ready: <https://127.0.0.1:9345/v1-rke2/readyz>: 500 Internal Server Error
{"level":"warn","ts":"2023-09-13T09:13:28.937025+0200","logger":"etcd-client","caller":"v3@v3.5.9-k3s1/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"<etcd-endpoints://0xc00056ee00/127.0.0.1:2379>","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: authentication handshake failed: context deadline exceeded\""}
{"level":"info","ts":"2023-09-13T09:13:28.937108+0200","logger":"etcd-client","caller":"v3@v3.5.9-k3s1/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
INFO[0307] Waiting for API server to become available
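Since this is a reinstall after rke2-killall.sh / rke2-uninstall.sh, one thing worth verifying first (a sanity-check sketch, not a diagnosis; the server/db and server/tls subdirectories are assumptions about the default data-dir layout) is that no stale state survived the uninstall, since leftover etcd data or TLS material can produce handshake errors like the one above:
# These paths should normally be gone on a freshly uninstalled node
ls -la /etc/rancher/rke2 /var/lib/rancher/rke2/server/db /var/lib/rancher/rke2/server/tls 2>/dev/null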
echoing-father-81877
09/13/2023, 4:35 PM
root 280354 279542 4 16:27 ? 00:00:18 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<1%,nodefs.available<1% --eviction-minimum-reclaim=imagefs.available=500Mi,nodefs.available=500Mi --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=lima-ubuntu-sensor --image-gc-high-threshold=100 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --node-labels= --pod-infra-container-image=index.docker.io/rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
echoing-father-81877
09/13/2023, 7:55 PM
rke2 certificate
was hoping there was a way to check expiration, much like kubeadm offers.
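(A rough sketch of one way to check expiry with openssl until such a subcommand exists; the server/tls path is an assumption based on the default /var/lib/rancher/rke2 layout, only the agent cert paths appear earlier in this thread:)
# Print the notAfter date of each certificate under the default RKE2 data dir
for crt in /var/lib/rancher/rke2/server/tls/*.crt /var/lib/rancher/rke2/agent/*.crt; do
  [ -f "$crt" ] || continue
  printf '%s: ' "$crt"
  openssl x509 -noout -enddate -in "$crt"
done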
hundreds-train-60767
09/14/2023, 4:33 AM
hundreds-train-60767
09/14/2023, 4:34 AM
hundreds-train-60767
09/14/2023, 4:34 AM
fast-motorcycle-89632
09/14/2023, 11:11 AM
Cluster->Registration->Step 2 section.
After I run the command, the terminal shows the info below (I ran the command on a clean Ubuntu server):
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 30869 0 30869 0 0 1866k 0 --:--:-- --:--:-- --:--:-- 1884k
[INFO] Label: cattle.io/os=linux
[INFO] Role requested: etcd
[INFO] Role requested: controlplane
[INFO] Using default agent configuration directory /etc/rancher/agent
[INFO] Using default agent var directory /var/lib/rancher/agent
[INFO] Determined CA is necessary to connect to Rancher
[INFO] Successfully downloaded CA certificate
[INFO] Value from <https://xxx.com/cacerts> is an x509 certificate
[INFO] Successfully tested Rancher connection
[INFO] Downloading rancher-system-agent binary from <https://xxx.com/assets/rancher-system-agent-amd64>
[INFO] Successfully downloaded the rancher-system-agent binary.
[INFO] Downloading rancher-system-agent-uninstall.sh script from <https://xxx.com/assets/system-agent-uninstall.sh>
[INFO] Successfully downloaded the rancher-system-agent-uninstall.sh script.
[INFO] Generating Cattle ID
[INFO] Successfully downloaded Rancher connection information
[INFO] systemd: Creating service file
[INFO] Creating environment file /etc/systemd/system/rancher-system-agent.env
[INFO] Enabling rancher-system-agent.service
Created symlink /etc/systemd/system/multi-user.target.wants/rancher-system-agent.service → /etc/systemd/system/rancher-system-agent.service.
[INFO] Starting/restarting rancher-system-agent.service
But in the Rancher UI it stopped at the Waiting for Node Ref
status. Can anyone help me fix it? 🥺
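A first place to look (just a sketch of where to start, not a definitive fix) is the log of the agent service that the registration script above just enabled, on the node being registered:
# Follow the provisioning agent's log on the new node
journalctl -u rancher-system-agent -f
# Once the agent has installed RKE2, the RKE2 service log is usually the next place to look
journalctl -u rke2-server -f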
quiet-musician-28232
09/14/2023, 4:46 PM
shy-zebra-53074
09/15/2023, 3:47 PM
curl -sfL <https://get.rke2.io> | INSTALL_RKE2_TYPE="server" INSTALL_RKE2_VERSION="v1.28.2+rke2r1" sh -
But I’m getting the following:
fatal: [127.0.0.1]: FAILED! => {"changed": true, "cmd": "curl -sfL <https://get.rke2.io> | INSTALL_RKE2_TYPE=\"server\" INSTALL_RKE2_VERSION=\"v1.28.2+rke2r1\" sh -", "delta": "0:00:03.991457", "end": "2023-09-15 11:40:12.170800", "msg": "non-zero return code", "rc": 1, "start": "2023-09-15 11:40:08.179343", "stderr": "Errors during downloading metadata for repository 'rancher-rke2-1.28-stable':\n - Status code: 404 for <https://rpm.rancher.io/rke2/stable/1.28/centos/8/x86_64/repodata/repomd.xml> (IP: 104.21.2.160)\nError: Failed to download metadata for repo 'rancher-rke2-1.28-stable': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried", "stderr_lines": ["Errors during downloading metadata for repository 'rancher-rke2-1.28-stable':", " - Status code: 404 for <https://rpm.rancher.io/rke2/stable/1.28/centos/8/x86_64/repodata/repomd.xml> (IP: 104.21.2.160)", "Error: Failed to download metadata for repo 'rancher-rke2-1.28-stable': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried"], "stdout": "[INFO] using 1.28 series from channel stable\nRancher RKE2 Common (stable) 2.8 kB/s | 2.3 kB 00:00 \nRancher RKE2 1.28 (stable) 407 B/s | 387 B 00:00 ", "stdout_lines": ["[INFO] using 1.28 series from channel stable", "Rancher RKE2 Common (stable) 2.8 kB/s | 2.3 kB 00:00 ", "Rancher RKE2 1.28 (stable) 407 B/s | 387 B 00:00 "]}
Has 1.28 been released on the stable RPM channel yet?
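If the 1.28 RPMs aren't published on that repo yet, one possible workaround (a sketch relying on the install script's INSTALL_RKE2_METHOD option, assuming a tarball install is acceptable on that host) is to bypass the RPM repositories entirely:
# Same install as above, but forcing the tarball method instead of RPM
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="server" INSTALL_RKE2_VERSION="v1.28.2+rke2r1" INSTALL_RKE2_METHOD="tar" sh -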
bland-machine-10503
09/15/2023, 4:16 PM
STABLE_CHANNEL='<https://update.rke2.io/v1-release/channels/stable>'
LATEST_CHANNEL='<https://update.rke2.io/v1-release/channels/latest>'
STABLE_CHANNEL_A=$(curl -si ${STABLE_CHANNEL} | grep ^location: | awk '{print $2}' | tr -d '\r')
STABLE_CHANNEL_V=$(echo ${STABLE_CHANNEL_A} | awk -F/ '{print $NF}')
LATEST_CHANNEL_A=$(curl -si ${LATEST_CHANNEL} | grep ^location: | awk '{print $2}' | tr -d '\r')
LATEST_CHANNEL_V=$(echo ${LATEST_CHANNEL_A} | awk -F/ '{print $NF}')
echo "Latest channel ${LATEST_CHANNEL_V}"
echo "Stable channel ${STABLE_CHANNEL_V}"
echo "All versions: <https://github.com/rancher/rke2/releases/>"
echo "SUSE RKE2 Support Matrix: <https://www.suse.com/suse-rke2/support-matrix/all-supported-versions/>"
echo "Kubernetes Releases: <https://kubernetes.io/releases/>"
echo "Docker Images at <https://hub.docker.com/r/rancher/rke2-upgrade/tags>"
echo "ALL CHANNELS:"
for csv_line in $(curl -s <https://update.rke2.io/v1-release/channels> | jq '.data[]|select(.type=="channel")|[.name, .latest]|@csv' -cr);
do
X0=$(echo "${csv_line}" | awk -F, '{print $1}' | tr -d '"')
X1=$(echo "${csv_line}" | awk -F, '{print $2}' | tr -d '"')
X2=$(echo "${X1}" | tr '+' '-')
echo "Channel $X0 -> Release $X1 -> Docker image $X2"
done
which provides, among other things ...
Channel stable -> Release v1.25.13+rke2r1 -> Docker image v1.25.13-rke2r1
Channel latest -> Release v1.28.1+rke2r1 -> Docker image v1.28.1-rke2r1
Channel testing -> Release v1.28.2-rc1+rke2r1 -> Docker image v1.28.2-rc1-rke2r1
shy-zebra-53074
09/15/2023, 4:18 PM
creamy-pencil-82913
09/15/2023, 4:48 PM
bulky-eve-17563
09/18/2023, 12:21 PM
abundant-noon-17295
09/19/2023, 1:43 PM
quaint-alarm-7893
09/20/2023, 2:25 AM
quaint-alarm-7893
09/20/2023, 2:25 AM
plain-planet-80115
09/20/2023, 11:47 AM
cis 1.23 profile, and the PSACT is rancher-restricted. This is a fresh cluster with only the RKE2 components and no user workloads.
The 1.23-hardened CIS scan profile is failing on this fresh cluster. I have completed the prerequisites for setting up the nodes. Is this expected? The following are the tests that are failing:
little-hair-13922
09/20/2023, 12:38 PM
ambitious-plastic-3551
09/20/2023, 1:22 PM
ambitious-plastic-3551
09/20/2023, 1:22 PM
colossal-spring-98913
09/21/2023, 2:32 AM
token in config.yaml and the EncryptionConfig? I'm trying to figure out how just the token is sufficient to restore a node from an etcd backup. Is the EncryptionConfig also stored in etcd?
sparse-flag-14809
09/21/2023, 2:32 PM
sparse-flag-14809
09/21/2023, 2:33 PM
sparse-flag-14809
09/21/2023, 2:34 PM
wonderful-rain-13345
09/21/2023, 6:42 PM
E0921 18:35:26.833337 270467 memcache.go:265] couldn't get current server API group list: Get "<https://rancher.internal.nullreference.io/k8s/clusters/local/api?timeout=32s>": tls: failed to verify certificate: x509: certificate signed by unknown authority
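One way to see which CA is actually presented by that endpoint (a diagnostic sketch only; the hostname is taken from the error above) is to pull the certificate with openssl and compare its issuer against the CA your kubeconfig trusts:
# Show issuer, subject and validity of the cert served by the Rancher endpoint
openssl s_client -connect rancher.internal.nullreference.io:443 -servername rancher.internal.nullreference.io </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject -dates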
ambitious-plastic-3551
09/22/2023, 1:09 PM
mammoth-memory-36508
09/22/2023, 10:11 PM
fierce-tomato-30072
09/25/2023, 1:22 AM
root 2159939 9.8 5.3 1198428 438608 ? Ssl Sep24 67:40 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --advertise-address=IP PUBLIC
my config:
server: <https://192.168.0.61:9345>
data-dir: /var/lib/rancher/rke2
tls-san:
- cluster.local
- 192.168.0.61
- 14.225.53.251
node-external-ip: IP PUBLIC
But "kubectl get nodes -o wide" shows nothing in the EXTERNAL-IP column:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master-01 Ready control-plane,etcd,master 2d17h v1.26.4+rke2r1 192.168.0.61 <none> Ubuntu 22.04.2 LTS 5.15.0-75-generic <containerd://1.6.19-k3s1>
worker-01 Ready <none> 2d17h v1.26.4+rke2r1 192.168.0.5 <none> Ubuntu 22.04.2 LTS 5.15.0-75-generic <containerd://1.6.19-k3>
Thanks all.
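One quick check (a diagnostic sketch only, assuming kubectl access to the cluster; the node name is taken from the output above) is to look at the addresses the kubelet actually registered; an ExternalIP entry should normally appear here when node-external-ip has been applied on that node:
# List the address types/values Kubernetes has recorded for master-01
kubectl get node master-01 -o jsonpath='{.status.addresses}'; echo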