#k3s

mysterious-toddler-89639

08/05/2022, 5:22 PM
Hi all! I'm new here. I'm trying to install k3s:
[root@ip-172-21-1-217 rocky]# k3s --version
k3s version v1.24.3+k3s1 (990ba0e8)
go version go1.18.1
The installation script ran successfully. I disabled SELinux beforehand to make the k3s setup a bit faster and easier, but at the moment I'm getting this error from the metrics service:
E0805 17:15:09.404108    3602 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.42.0.24:4443/apis/metrics.k8s.io/v1beta1: Get "https://10.42.0.24:4443/apis/metrics.k8s.io/v1beta1": proxy error from 127.0.0.1:6443 while dialing 10.42.0.24:4443, code 503: 503 Service Unavailable
W0805 17:15:10.410003    3602 handler_proxy.go:102] no RequestInfo found in the context
W0805 17:15:10.410002    3602 handler_proxy.go:102] no RequestInfo found in the context
E0805 17:15:10.410062    3602 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
E0805 17:15:10.410084    3602 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
I0805 17:15:10.410093    3602 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0805 17:15:10.412183    3602 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
What would be the best way to debug this? Or, if you've hit this issue in the past, what was the solution? Thanks in advance.
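For anyone landing here with the same 503, a sketch of first checks (assuming a default k3s install, where the generated kubeconfig lives at /etc/rancher/k3s/k3s.yaml; the 10.42.0.24:4443 address and path are copied from the error above, and the pod label is assumed from the upstream metrics-server manifest):

```shell
# Point kubectl at the k3s-generated kubeconfig (default server path).
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# 1. Is the metrics-server pod running and ready?
kubectl -n kube-system get pods -l k8s-app=metrics-server

# 2. What does the aggregated API registration report?
kubectl get apiservice v1beta1.metrics.k8s.io

# 3. What is metrics-server itself logging?
kubectl -n kube-system logs deploy/metrics-server

# 4. Can this host reach the pod's serving port at all?
#    (address copied from the 503; -k because the cert is self-signed)
curl -vk https://10.42.0.24:4443/apis/metrics.k8s.io/v1beta1
```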

creamy-pencil-82913

08/05/2022, 5:29 PM
Is the metrics-server pod running?

mysterious-toddler-89639

08/05/2022, 5:31 PM
Yes, the metrics-server pod is running:
[root@ip-172-21-1-217 rocky]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                      READY   STATUS      RESTARTS       AGE
kube-system   helm-install-traefik-crd-kcsjt            0/1     Completed   0              7d
kube-system   helm-install-traefik-jkm25                0/1     Completed   1              7d
kube-system   svclb-traefik-d961ae38-8vw2q              2/2     Running     16 (61s ago)   62m
kube-system   coredns-d6954f7dc-228gn                   1/1     Running     8 (61s ago)    62m
kube-system   local-path-provisioner-76f9df454c-b8q2c   1/1     Running     11 (61s ago)   62m
kube-system   traefik-6bf5457c7f-m8hj6                  1/1     Running     8 (61s ago)    62m
kube-system   metrics-server-9676cb865-qkkwm            1/1     Running     9 (61s ago)    62m
It did restart a few times, but then it runs.
Is there a specific way to configure the hosts file for k3s? Maybe the default config from the AWS image isn't right for the k3s deployment, and that could cause a problem at the DNS or cert level.
In case it is of any help:
[root@ip-172-21-1-217 rocky]# kubectl get apiservice v1beta1.metrics.k8s.io -o yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  annotations:
    objectset.rio.cattle.io/applied: H4sIAAAAAAAA/4SSQY/aMBCF/0o1Z5NiQlKw1APHqq2ERIr72JnAbIgd2U5WCPHfV4bArlZiOU7evJdvnnwC7HhLPrCzoNLgaccheozsbNYsQsbu5yBBQMO2AgWr9Z8N+YENgYCWIlYYEdQJ0FoXL7aQRqdfycRAMfPsMoMxHihlcQoB8VB3b5b8ZDc0oKDJwydlkOLHX7bV71VVOfs0wmJLoBKiZxMm2HG4cz93hg5Nsje9pkk4hkgtnAUcUNPh2/v2GPagoJjrgvRC53VdlLnBXC7nsvhVynK2NEuja6SaZlim0JF0kJoiymwkHstPC6Ejk/65867vPm66bYirsPbsPMfjf7bc9i0oOZ0KYBvI9J82DXcv/zZb8lwfQUXfk4BbI+oEX+pKCnm4wj0oY7i/mxEd7t9uLBeI8/k9AAD//wQqzKFnAgAA
    objectset.rio.cattle.io/id: ""
    objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
    objectset.rio.cattle.io/owner-name: metrics-apiservice
    objectset.rio.cattle.io/owner-namespace: kube-system
  creationTimestamp: "2022-07-29T16:36:39Z"
  labels:
    objectset.rio.cattle.io/hash: 54b5eb8b3ff563ca319415761629c9cbfaefe2a6
  name: v1beta1.metrics.k8s.io
  resourceVersion: "125803"
  uid: c65908cf-bfff-4ab0-b426-cc2388dfdbc7
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
    port: 443
  version: v1beta1
  versionPriority: 100
status:
  conditions:
  - lastTransitionTime: "2022-08-05T17:43:53Z"
    message: all checks passed
    reason: Passed
    status: "True"
    type: Available
Logs from the metrics server:
[root@ip-172-21-1-217 rocky]# kubectl logs -f metrics-server-9676cb865-qkkwm
I0805 17:46:51.064471       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0805 17:46:51.064423       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0805 17:46:51.064505       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I0805 17:46:51.064531       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0805 17:46:51.064537       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0805 17:46:51.064543       1 dynamic_serving_content.go:130] Starting serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key
I0805 17:46:51.064568       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0805 17:46:51.064575       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0805 17:46:51.064435       1 secure_serving.go:202] Serving securely on [::]:4443
I0805 17:46:51.078705       1 server.go:188] "Failed probe" probe="metric-storage-ready" err="not metrics to serve"
I0805 17:46:51.165476       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
I0805 17:46:51.165552       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
I0805 17:46:51.165664       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I0805 17:46:51.846458       1 server.go:188] "Failed probe" probe="metric-storage-ready" err="not metrics to serve"
I0805 17:46:52.430066       1 server.go:188] "Failed probe" probe="metric-storage-ready" err="not metrics to serve"
I0805 17:46:52.846553       1 server.go:188] "Failed probe" probe="metric-storage-ready" err="not metrics to serve"
I0805 17:46:54.430663       1 server.go:188] "Failed probe" probe="metric-storage-ready" err="not metrics to serve"
I0805 17:46:56.431340       1 server.go:188] "Failed probe" probe="metric-storage-ready" err="not metrics to serve"
I0805 17:46:58.431308       1 server.go:188] "Failed probe" probe="metric-storage-ready" err="not metrics to serve"
I0805 17:47:00.431189       1 server.go:188] "Failed probe" probe="metric-storage-ready" err="not metrics to serve"
I0805 17:47:02.431913       1 server.go:188] "Failed probe" probe="metric-storage-ready" err="not metrics to serve"
I0805 17:47:04.430440       1 server.go:188] "Failed probe" probe="metric-storage-ready" err="not metrics to serve"
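On the hosts-file question above: k3s derives the node name from the machine hostname unless told otherwise, so a quick sanity check is whether the hostname resolves locally (a sketch; --node-name and --node-ip are real k3s server flags, and the IP is this node's private address as seen in the shell prompt):

```shell
# Does the hostname resolve? Cloud images sometimes ship an /etc/hosts
# that doesn't list the instance's own name.
hostname
getent hosts "$(hostname)"

# If resolution is broken, either add the name to /etc/hosts:
#   172.21.1.217  ip-172-21-1-217
# or pin the node identity explicitly when starting the server:
#   k3s server --node-name ip-172-21-1-217 --node-ip 172.21.1.217
```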

creamy-pencil-82913

08/05/2022, 7:08 PM
It looks like it's working now? The APIService status is OK, and it's normal for the metrics-server pod to log errors until it is able to successfully scrape the node. Does
kubectl top nodes
work?

mysterious-toddler-89639

08/08/2022, 8:14 AM
Yes, it seems to be working now; maybe the errors were only appearing at boot time. The
kubectl top nodes
command works properly. Somehow I'm still getting an error, though:
curl -I http://localhost:8080/version
curl: (7) Failed to connect to localhost port 8080: Connection refused
The same error appears when trying to run a helm command to install the GitLab agent.
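For what it's worth, port 8080 is the legacy insecure apiserver port, which k3s does not serve at all; the API is only available over TLS on 6443, so both curl and helm need to go through the kubeconfig. A sketch, assuming the default k3s paths:

```shell
# helm and kubectl both honor KUBECONFIG; the k3s default location is:
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

kubectl version   # talks to https://127.0.0.1:6443
helm list -A      # should now reach the cluster instead of :8080

# Direct equivalent of the failing curl, over TLS with the cluster CA
# (the /version endpoint is readable without client credentials):
curl -s --cacert /var/lib/rancher/k3s/server/tls/server-ca.crt \
  https://127.0.0.1:6443/version
```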