# k3s
c:
Is the metrics-server pod running?
m:
Yes, the metrics-server pod is running:
```
[root@ip-172-21-1-217 rocky]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                      READY   STATUS      RESTARTS       AGE
kube-system   helm-install-traefik-crd-kcsjt            0/1     Completed   0              7d
kube-system   helm-install-traefik-jkm25                0/1     Completed   1              7d
kube-system   svclb-traefik-d961ae38-8vw2q              2/2     Running     16 (61s ago)   62m
kube-system   coredns-d6954f7dc-228gn                   1/1     Running     8 (61s ago)    62m
kube-system   local-path-provisioner-76f9df454c-b8q2c   1/1     Running     11 (61s ago)   62m
kube-system   traefik-6bf5457c7f-m8hj6                  1/1     Running     8 (61s ago)    62m
kube-system   metrics-server-9676cb865-qkkwm            1/1     Running     9 (61s ago)    62m
```
It did restart a few times, but then it runs.
Is there a specific way to configure the hosts file for k3s? Maybe the default configuration from the AWS image is not correct for a k3s deployment, and it could cause a problem at the DNS or certificate level.
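A hedged suggestion rather than a confirmed fix: k3s generally expects the node's hostname to resolve, and cloud images sometimes ship an `/etc/hosts` that doesn't cover it. The entries below are a sketch based on the hostname and private IP visible in the shell prompt above; any FQDN suffix would depend on your VPC/DNS settings:

```
# /etc/hosts — make sure the node's hostname resolves to its private IP
127.0.0.1      localhost localhost.localdomain
172.21.1.217   ip-172-21-1-217

# quick sanity checks
hostnamectl                   # show the configured hostname
getent hosts "$(hostname)"    # confirm the hostname resolves
```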
If it is of any help:
```
[root@ip-172-21-1-217 rocky]# kubectl get apiservice v1beta1.metrics.k8s.io -o yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  annotations:
    objectset.rio.cattle.io/applied: H4sIAAAAAAAA/4SSQY/aMBCF/0o1Z5NiQlKw1APHqq2ExIr72JnAbIgd2U5WCPHfV4bArlZiOU7evJdvnnwC7HhLPrCzoNLgaccheozsbNYsQsbu5yBBQMO2AgWr9Z8N+YENgYCWIlYYEdQJ0FoXL7aQRqdfycRAMfPsMoMxHihlcQoB8VB3b5b8ZDc0oKDJwydlkOLHX7bV71VVOfs0wmJLoBKiZxMm2HG4cz93hg5Nsje9pkk4hkgtnAUcUNPh2/v2GPagoJjrgvRC53VdlLnBXC7nsvhVynK2NEuja6SaZlim0JF0kJoiymwkHstPC6Ejk/65867vPm66bYirsPbsPMfjf7bc9i0oOZ0KYBvI9J42DXcv/zZb8lwfQUXfk4BbI+oEX+pKCnm4wj0oY7i/mxEd7t9uLBeI8/k9AAD//wQqzKFnAgAA
    objectset.rio.cattle.io/id: ""
    objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
    objectset.rio.cattle.io/owner-name: metrics-apiservice
    objectset.rio.cattle.io/owner-namespace: kube-system
  creationTimestamp: "2022-07-29T16:36:39Z"
  labels:
    objectset.rio.cattle.io/hash: 54b5eb8b3ff563ca319415761629c9cbfaefe2a6
  name: v1beta1.metrics.k8s.io
  resourceVersion: "125803"
  uid: c65908cf-bfff-4ab0-b426-cc2388dfdbc7
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
    port: 443
  version: v1beta1
  versionPriority: 100
status:
  conditions:
  - lastTransitionTime: "2022-08-05T17:43:53Z"
    message: all checks passed
    reason: Passed
    status: "True"
    type: Available
```
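As a generic end-to-end check (not something from this thread), you can also query the aggregated metrics API directly through the apiserver; it should return node metrics as JSON once metrics-server is healthy:

```
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
```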
Logs from the metrics-server:
```
[root@ip-172-21-1-217 rocky]# kubectl logs -f metrics-server-9676cb865-qkkwm
I0805 17:46:51.064471       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0805 17:46:51.064423       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0805 17:46:51.064505       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I0805 17:46:51.064531       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0805 17:46:51.064537       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0805 17:46:51.064543       1 dynamic_serving_content.go:130] Starting serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key
I0805 17:46:51.064568       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0805 17:46:51.064575       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0805 17:46:51.064435       1 secure_serving.go:202] Serving securely on [::]:4443
I0805 17:46:51.078705       1 server.go:188] "Failed probe" probe="metric-storage-ready" err="not metrics to serve"
I0805 17:46:51.165476       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0805 17:46:51.165552       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController
I0805 17:46:51.165664       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0805 17:46:51.846458       1 server.go:188] "Failed probe" probe="metric-storage-ready" err="not metrics to serve"
I0805 17:46:52.430066       1 server.go:188] "Failed probe" probe="metric-storage-ready" err="not metrics to serve"
I0805 17:46:52.846553       1 server.go:188] "Failed probe" probe="metric-storage-ready" err="not metrics to serve"
I0805 17:46:54.430663       1 server.go:188] "Failed probe" probe="metric-storage-ready" err="not metrics to serve"
I0805 17:46:56.431340       1 server.go:188] "Failed probe" probe="metric-storage-ready" err="not metrics to serve"
I0805 17:46:58.431308       1 server.go:188] "Failed probe" probe="metric-storage-ready" err="not metrics to serve"
I0805 17:47:00.431189       1 server.go:188] "Failed probe" probe="metric-storage-ready" err="not metrics to serve"
I0805 17:47:02.431913       1 server.go:188] "Failed probe" probe="metric-storage-ready" err="not metrics to serve"
I0805 17:47:04.430440       1 server.go:188] "Failed probe" probe="metric-storage-ready" err="not metrics to serve"
```
c:
It looks like it's working now? The APIService status is OK, and it's normal for the metrics-server pod to log errors until it is able to successfully scrape the node. Does `kubectl top nodes` work?
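For reference, healthy output on a single-node cluster looks roughly like this (the values below are illustrative, not taken from this thread):

```
NAME              CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
ip-172-21-1-217   123m         6%     1456Mi          38%
```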
m:
Yes, it seems to be working now; maybe the errors were appearing at boot time. `kubectl top nodes` works properly. Somehow I'm still getting an error:
```
curl -I http://localhost:8080/version
curl: (7) Failed to connect to localhost port 8080: Connection refused
```
The same error appears when trying to run a helm command to install the GitLab agent.
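For what it's worth, a `Connection refused` on `localhost:8080` usually means the client found no kubeconfig and fell back to the legacy default API address. On k3s the kubeconfig is written to `/etc/rancher/k3s/k3s.yaml`, and external tools such as helm need to be pointed at it explicitly; whether that is the cause here depends on the environment:

```
# point external tooling (helm, a separately installed kubectl, etc.) at the k3s kubeconfig
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# or pass it per command
helm --kubeconfig /etc/rancher/k3s/k3s.yaml list -A
```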