some-salesmen-70989
02/08/2023, 4:30 PM
best-carpet-6588
02/08/2023, 5:23 PM
high-family-7634
02/08/2023, 6:16 PM
shy-hamburger-95730
02/08/2023, 6:23 PM
shy-hamburger-95730
02/08/2023, 6:24 PM
high-family-7634
02/08/2023, 7:37 PM
rough-farmer-49135
02/08/2023, 8:08 PM
loud-eve-73457
02/09/2023, 5:17 AM
loud-eve-73457
02/09/2023, 5:25 AM
high-vase-1301
02/09/2023, 9:14 AM
stocky-article-82001
02/09/2023, 12:43 PM
rke2-server logs and I’m at a loss.
"Waiting to retrieve kube-proxy configuration; server is not ready: <https://127.0.0.1:9345/v1-rke2/readyz>: 500 Internal Server Error"
{"level":"warn","ts":"2023-02-09T07:42:48.269-0500","logger":"etcd-client","caller":"v3@v3.5.4-k3s1/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"<etcd-endpoints://0xc0005e3a40/127.0.0.1:2379>","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
stocky-article-82001
02/09/2023, 12:43 PM
stocky-article-82001
02/09/2023, 12:45 PM
elegant-action-27521
02/09/2023, 5:54 PM
# ./kubectl get nodes
NAME                                  STATUS   ROLES                              AGE   VERSION
henry-cluster4-pool1-27cb5e2d-2k2mw   Ready    control-plane,etcd,master,worker   22h   v1.24.9+rke2r2
henry-cluster4-pool1-27cb5e2d-82llq   Ready    control-plane,etcd,master,worker   22h   v1.24.9+rke2r2
henry-cluster4-pool1-27cb5e2d-fsfxg   Ready    control-plane,etcd,master,worker   22h   v1.24.9+rke2r2
I truncated the output for brevity:
# ./kubectl describe pod kube-apiserver-henry-cluster4-pool1-27cb5e2d-fsfxg -n kube-system
Name: kube-apiserver-henry-cluster4-pool1-27cb5e2d-fsfxg
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node: henry-cluster4-pool1-27cb5e2d-fsfxg/xxxxxxxxxxx
Start Time: Wed, 08 Feb 2023 14:06:08 -0500
Labels: component=kube-apiserver
tier=control-plane
Annotations: kubernetes.io/config.hash: 4874a08227e8932676b83ca998a390f3
kubernetes.io/config.mirror: 4874a08227e8932676b83ca998a390f3
kubernetes.io/config.seen: 2023-02-08T13:54:41.543487161-05:00
kubernetes.io/config.source: file
kubernetes.io/psp: global-unrestricted-psp
Status: Running
IP: XXXXXXXXXX
IPs:
IP: XXXXXXXXXX
Controlled By: Node/henry-cluster4-pool1-27cb5e2d-fsfxg
Containers:
kube-apiserver:
Container ID: containerd://07c1f36b907b90784ed1a6d7e4c95c6817018a113ce7a9b555a87660b72e4fce
Image: index.docker.io/rancher/hardened-kubernetes:v1.24.9-rke2r2-build20230104
Image ID: docker.io/rancher/hardened-kubernetes@sha256:284ed583bf9011db9110b47683084c3238127c49ab937ad31be3efbf5656a0bc
Port: <none>
Host Port: <none>
Command:
kube-apiserver
Args:
--allow-privileged=true
--anonymous-auth=false
--api-audiences=https://kubernetes.default.svc.cluster.local,rke2
--authorization-mode=Node,RBAC
--bind-address=0.0.0.0
--cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs
--client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt
--egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml
--enable-admission-plugins=NodeRestriction,PodSecurityPolicy
--enable-aggregator-routing=true
--encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json
--etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt
--etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt
--etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key
--etcd-servers=https://127.0.0.1:2379
--feature-gates=JobTrackingWithFinalizers=true
--kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt
--kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt
--kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--profiling=false
--proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt
--proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key
--requestheader-allowed-names=system:auth-proxy
--requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--secure-port=6443
--service-account-issuer=https://kubernetes.default.svc.cluster.local
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key
--service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key
--service-cluster-ip-range=10.43.0.0/16
--service-node-port-range=30000-32767
--storage-backend=etcd3
--tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt
--tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
State: Running
Started: Wed, 08 Feb 2023 14:12:32 -0500
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Wed, 08 Feb 2023 13:54:42 -0500
Finished: Wed, 08 Feb 2023 14:12:10 -0500
Ready: True
Restart Count: 1
Requests:
cpu: 250m
memory: 1Gi
Liveness: exec [kubectl get --server=https://localhost:6443/ --client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --raw=/livez] delay=10s timeout=15s period=10s #success=1 #failure=8
Readiness: exec [kubectl get --server=https://localhost:6443/ --client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --raw=/readyz] delay=0s timeout=15s period=5s #success=1 #failure=3
Startup: exec [kubectl get --server=https://localhost:6443/ --client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --raw=/livez] delay=10s timeout=5s period=10s #success=1 #failure=24
Environment:
FILE_HASH: 65c747a3981e887284258667d87df8f0aac1d5eb238d049859c1986c85b92190
NO_PROXY: .svc,.cluster.local,10.42.0.0/16,10.43.0.0/16
POD_HASH: 0d017805ffe058495a528aa9431ffa9b
Mounts:
/etc/ca-certificates from dir1 (rw)
/etc/ssl/certs from dir0 (rw)
/var/lib/rancher/rke2/server/cred/encryption-config.json from file1 (ro)
/var/lib/rancher/rke2/server/db/etcd/name from file0 (ro)
/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml from file2 (ro)
/var/lib/rancher/rke2/server/logs from dir2 (rw)
/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt from file3 (ro)
/var/lib/rancher/rke2/server/tls/client-auth-proxy.key from file4 (ro)
/var/lib/rancher/rke2/server/tls/client-ca.crt from file5 (ro)
/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt from file6 (ro)
/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key from file7 (ro)
/var/lib/rancher/rke2/server/tls/etcd/client.crt from file8 (ro)
/var/lib/rancher/rke2/server/tls/etcd/client.key from file9 (ro)
/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt from file10 (ro)
/var/lib/rancher/rke2/server/tls/request-header-ca.crt from file11 (ro)
/var/lib/rancher/rke2/server/tls/server-ca.crt from file12 (ro)
/var/lib/rancher/rke2/server/tls/service.key from file13 (ro)
/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt from file14 (ro)
/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key from file15 (ro)
---
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 28m (x32 over 22h) kubelet Readiness probe failed: Error from server (InternalError): an error on the server ("[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]informer-sync ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[+]shutdown ok\nreadyz check failed") has prevented the request from succeeding
When I run the readiness probe's command manually, I get the following error:
# ./kubectl get --server=https://localhost:6443/ --client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --raw=/readyz
Error in configuration:
* client-cert-data and client-cert are both specified for default. client-cert-data will override.
* client-key-data and client-key are both specified for default; client-key-data will override
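That "Error in configuration" is kubectl merging the probe's certificate flags with a kubeconfig it loads by default, one that already embeds client-cert-data; the probe environment presumably has no such file. A minimal sketch of rerunning the same check against an empty kubeconfig, using the paths from the probe definition above:

# Point kubectl at an empty kubeconfig so the command-line cert flags
# are not merged with an existing context that embeds cert data.
KUBECONFIG=/dev/null ./kubectl get \
  --server=https://localhost:6443/ \
  --client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt \
  --client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key \
  --certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt \
  --raw=/readyz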
hundreds-airport-66196
02/09/2023, 7:16 PM
cool-monkey-71774
02/10/2023, 7:50 AM
worried-nest-47450
02/10/2023, 12:16 PM
fleet-default. Help? 🙃
handsome-tiger-45123
02/11/2023, 2:43 PM
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      customPorts:
        - port: 1094
          targetPort: 1094
          protocol: TCP
          name: xrootd
Unfortunately it does not work. Am I missing something? Thanks!
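For reference, the upstream ingress-nginx chart that rke2-ingress-nginx wraps normally exposes extra TCP ports through a top-level tcp: values map rather than a controller.customPorts list, so a variant along these lines may be closer to what the chart expects (the xrootd/xrootd-service target is a hypothetical name, not from this thread):

# Hypothetical alternative: route TCP 1094 via the chart's top-level
# `tcp` values map, which takes "namespace/service:port" entries.
kubectl apply -f - <<'EOF'
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    tcp:
      "1094": "xrootd/xrootd-service:1094"
EOF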
bright-fireman-42144
02/11/2023, 9:49 PM
white-address-50409
02/12/2023, 12:26 PM
loud-eve-73457
02/13/2023, 4:52 AM
crooked-cat-21365
02/13/2023, 9:33 AM
I0210 14:18:58.178856 320 round_trippers.go:466] curl -v -XPOST -H "X-Stream-Protocol-Version: v4.channel.k8s.io" -H "X-Stream-Protocol-Version: v3.channel.k8s.io" -H "X-Stream-Protocol-Version: v2.channel.k8s.io" -H "X-Stream-Protocol-Version: channel.k8s.io" -H "User-Agent: kubectl/v1.24.10 (linux/amd64) kubernetes/5c1d2d4" -H "Authorization: Bearer <masked>" 'https://kas.gitlab.com/k8s-proxy/api/v1/namespaces/sample-kube001/pods/ubuntu/exec?command=echo&command=hello&container=ubuntu&stderr=true&stdout=true'
I0210 14:18:58.652230 320 round_trippers.go:553] POST https://kas.gitlab.com/k8s-proxy/api/v1/namespaces/sample-kube001/pods/ubuntu/exec?command=echo&command=hello&container=ubuntu&stderr=true&stdout=true 101 Switching Protocols in 473 milliseconds
I0210 14:18:58.652260 320 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 0 ms Duration 473 ms
I0210 14:18:58.652274 320 round_trippers.go:577] Response Headers:
I0210 14:18:58.652330 320 round_trippers.go:580] Connection: upgrade
I0210 14:18:58.652349 320 round_trippers.go:580] Upgrade: SPDY/3.1
I0210 14:18:58.652366 320 round_trippers.go:580] Via: 1.1 gitlab-agent/v15.9.0-rc1/add168d2
I0210 14:18:58.652383 320 round_trippers.go:580] Via: gRPC/1.0 gitlab-kas/v15.9.0-rc1/v15.9.0-rc1
I0210 14:18:58.652397 320 round_trippers.go:580] X-Stream-Protocol-Version: v4.channel.k8s.io
I0210 14:18:58.652411 320 round_trippers.go:580] Date: Fri, 10 Feb 2023 14:18:58 GMT
I0210 14:18:58.652458 320 log.go:198] (0xc000276000) (0xc000992640) Create stream
I0210 14:18:58.652474 320 log.go:198] (0xc000276000) (0xc000992640) Stream added, broadcasting: 1
error: Timeout occurred
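The exec upgrade succeeds (101 Switching Protocols) and then the stream setup times out, which points somewhere between kubectl and the kubelet along the kas/agent hop. One way to split the problem is to run the same exec directly against the cluster, bypassing kas.gitlab.com; a sketch, where direct-cluster.yaml stands in for a kubeconfig that reaches the API server without the GitLab proxy:

# Same exec as in the trace above, minus the GitLab agent proxy.
# If this succeeds, the hang is in the kas/agent path, not the cluster.
kubectl --kubeconfig=direct-cluster.yaml \
  exec -n sample-kube001 ubuntu -c ubuntu -- echo hello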
loud-eve-73457
02/13/2023, 9:53 AM
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  9000: "default/example-go:8080"
ambitious-plastic-3551
02/13/2023, 12:28 PM
ambitious-plastic-3551
02/13/2023, 12:29 PM
ambitious-plastic-3551
02/13/2023, 1:35 PM
ambitious-plastic-3551
02/13/2023, 2:24 PM
ambitious-plastic-3551
02/13/2023, 2:25 PM
glamorous-lighter-5580
02/14/2023, 6:35 AM
loud-eve-73457
02/15/2023, 5:49 AM