# harvester
r
Maybe try this?
```
$ curl -X GET "$APISERVER/livez?verbose" --header "Authorization: Bearer $TOKEN" --insecure
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
livez check passed
```
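Note that $APISERVER and $TOKEN aren't defined above. A minimal sketch for populating them, assuming kubeconfig access and kubectl 1.24+ (for kubectl create token); the default ServiceAccount in the default namespace is only an example, and whichever account you use still needs RBAC access to the /livez endpoint:
```
# API server URL taken from the current kubeconfig context
APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')

# Short-lived token for an existing ServiceAccount (kubectl 1.24+);
# "default" in namespace "default" is only an example account
TOKEN=$(kubectl -n default create token default)
```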
a
But this does not hold for everything on a Harvester cluster. At the moment, Harvester does not seem to define a cluster-level health endpoint.
r
That’s right. We could reuse the criteria/logic implemented in the dashboard console and expose it via an endpoint. Currently, the checks are:
• check if harvester pods are ready
• check if harvester webhook pods are ready
• check if rancher pods are ready
• check if harvester api is ready
👍 1
c
@red-king-19196 I had tried to use the underlying Kubernetes cluster health endpoint as a substitute, but it did not work. Do you have any suggestions on how to check if the API is ready? (Or do we just construct kubectl commands and do some regex parsing?) Any help would be appreciated.
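One alternative to grepping kubectl output, as a sketch rather than anything Harvester itself uses: kubectl wait can assert the Ready condition directly and reflects the result in its exit code.
```
# Exits 0 only if every pod matching the selector reaches Ready
# within the timeout; non-zero on timeout or if nothing matches
kubectl wait --for=condition=Ready pod \
  -n harvester-system \
  -l app.kubernetes.io/name=harvester,app.kubernetes.io/component=apiserver \
  --timeout=10s
```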
a
FWIW, the harvester installer console displays "Ready" based on the following code https://github.com/harvester/harvester-installer/blob/4adc9fea30a6e6f612b3e75fdd8f6eb37cb50c3d/pkg/console/dashboard_panels.go#L518-L538, which checks that:
• At least one harvester apiserver pod is ready
• At least one harvester webhook-server pod is ready
• At least one rancher pod is ready
• You can use curl to hit https://$HARVESTER_VIP/version
The actual commands invoked are:
```
> kubectl get po -n harvester-system -l app.kubernetes.io/name=harvester -l app.kubernetes.io/component=apiserver -o jsonpath='{range .items[*]}{range @.status.conditions[*]}{@.type}={@.status};{end}{"\n"}{end}' | grep "Ready=True"

> kubectl get po -n harvester-system -l app.kubernetes.io/name=harvester -l app.kubernetes.io/component=webhook-server -o jsonpath='{range .items[*]}{range @.status.conditions[*]}{@.type}={@.status};{end}{"\n"}{end}' | grep "Ready=True"

> kubectl get po -n cattle-system -l app=rancher -o jsonpath='{range .items[*]}{range @.status.conditions[*]}{@.type}={@.status};{end}{"\n"}{end}' | grep "Ready=True"

> curl -fk https://$HARVESTER_VIP/version
```
(It's possible grep "Ready=True" is a bit loose, because it also matches ContainersReady=True even if there's also Ready=False in the output, but I don't know whether that's an actual problem in practice or not.) This doesn't mean all components of the cluster are up and running, but it does mean enough of the cluster is running that you can talk to it.
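Putting those four checks together, here's a minimal sketch of how they could be combined into a single pass/fail exit status, roughly the shape a future health endpoint might take. The script name and the anchored grep pattern (which addresses the ContainersReady=True looseness above) are additions for illustration, not Harvester's own code:
```
#!/bin/sh
# harvester-ready.sh: hypothetical wrapper, not part of Harvester itself.
# Exits 0 if all four dashboard checks pass, non-zero otherwise.
# Assumes kubectl access and HARVESTER_VIP set to the cluster's virtual IP.
set -e

JSONPATH='{range .items[*]}{range @.status.conditions[*]}{@.type}={@.status};{end}{"\n"}{end}'
# Anchored so it matches Ready=True only at line start or after a ';',
# and therefore does not also match ContainersReady=True
READY='(^|;)Ready=True'

kubectl get po -n harvester-system \
  -l app.kubernetes.io/name=harvester -l app.kubernetes.io/component=apiserver \
  -o jsonpath="$JSONPATH" | grep -qE "$READY"

kubectl get po -n harvester-system \
  -l app.kubernetes.io/name=harvester -l app.kubernetes.io/component=webhook-server \
  -o jsonpath="$JSONPATH" | grep -qE "$READY"

kubectl get po -n cattle-system -l app=rancher \
  -o jsonpath="$JSONPATH" | grep -qE "$READY"

curl -fks "https://$HARVESTER_VIP/version" > /dev/null

echo "harvester ready"
```
An endpoint wrapping this would just map exit code 0 to healthy.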
🙌 2
There is also an open enhancement request for some sort of proper readiness or health check endpoint https://github.com/harvester/harvester/issues/5303
👀 2