# k3s
g
`kubectl version`, `k3s -v`, or `kubectl get nodes` are probably the best ways to show the version. But it is most likely 1.23, yes. 1.24 isn’t marked as stable yet.
f
How about these alerts firing? Should I worry? I googled it without success…
g
That endpoint looks like `http`, not `https`, and I believe only https is allowed now. See https://github.com/k3s-io/k3s/issues/3619#issuecomment-973188304, which might guide you in the right direction to scrape the correct scheme now.
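For reference, a ServiceMonitor that scrapes a control-plane component over HTTPS typically looks roughly like this. This is a sketch, not taken from the cluster in this thread; the name, namespace, and labels are assumptions:

```yaml
# Illustrative ServiceMonitor fragment: scrape kube-controller-manager
# over https instead of http. Names/labels here are assumptions.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kube-controller-manager
  namespace: monitoring
spec:
  endpoints:
    - port: http-metrics
      scheme: https            # the secure port only serves https
      bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      tlsConfig:
        insecureSkipVerify: true   # the component uses a self-signed cert
  selector:
    matchLabels:
      k8s-app: kube-controller-manager
```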
f
Man, I'm screwed. Anyone wanna walk me through it?
These are my notes so far. Still haven't figured it out. Really not trying to start over from scratch. But I guess that's how we learn…
g
It’s okay, it’s all a learning experience! In this case, what will probably help you best/fastest is looking up how to configure Prometheus metrics in your Kubernetes cluster.
f
Now the Grafana dashboard doesn't come up and monitoring is in a loop.
g
Probably misconfigured during an update. Was this a cluster that someone else set up and handed down to you?
f
Nope, I set it up
Jeff Herring and NetworkChuck tutorials
Jeff Geerling, sorry

https://youtu.be/IafVCHkJbtI

And this one

https://youtu.be/X9fSMGkjtug

To be more specific
g
so, without explicitly watching those, I believe you should be able to do the same thing, but then just edit whatever config you used to deploy prometheus to use the https endpoint instead of http
f
They were make commands
These were the steps
I need to create a YAML and deploy it, I think
Don't know my endpoints or ports
This is what I was going to put in the YAML, like you suggested. But I know their endpoints aren't the same as mine, and the ports
Not sure what to call the YAML file either
g
Usually the name of the file doesn’t matter. I would check in `manifests/setup` and `manifests` to see if there are any files for Prometheus with some of this data. Also, I suspect the endpoints section in that data isn’t necessary, but I’m not an expert with Prometheus so I’m not 100% certain about that.
Those are pretty aptly named! You can go through them and see what to edit. But it's likely the `prometheus-kubeControllerManager` and `prometheus-kubeScheduler` ones.
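If you want a quick way to find those, something like this would work from the repo root. A minimal sketch, assuming the kube-prometheus style `manifests/` layout used by that project:

```shell
# Sketch: list the controller-manager / scheduler related files in a
# kube-prometheus style manifests directory (directory path assumed).
find_cm_sched_manifests() {
  ls "$1" | grep -Ei 'kubecontrollermanager|kubescheduler'
}

# Usage, from the cluster-monitoring repo root:
# find_cm_sched_manifests manifests/
```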
f
That looks right
I'm getting closer, I think
Don't wanna f it up. Too close, I think
g
I think you are close too! Go ahead and f it up and keep learning. It might take some time, but you’ll figure it out 🙂
f
Lol
Thanks for all the help
g
That yaml looks like the endpoint. I think you ideally want to get to the ServiceMonitor resource.
and no problem
f
How about this info?
Still trudging
g
You’ll want to describe that pod to see why it’s crashlooping like that
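Something like this would show why it's restarting (the namespace and pod name here are guesses based on this thread, and the commands are echoed rather than executed so you can review them first):

```shell
# Sketch: the commands I'd run to debug a crashlooping pod. Echoed so
# they can be reviewed before running; namespace/pod name are assumed.
crashloop_debug_cmds() {
  ns=$1; pod=$2
  echo "kubectl describe -n $ns pod/$pod"
  echo "kubectl logs -n $ns $pod --previous"
}
crashloop_debug_cmds monitoring prometheus-k8s-0
```

`kubectl logs --previous` shows the logs from the last crashed run, which is usually where the actual error is.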
f
I'm hoping to run `kubectl logs deployment/prometheus-config-reloader -c prometheus-k3s-0` ??? when I get home today for some more clues…
Still looking for guidance
g
what does `prometheus-kubeControllerManagerPrometheusDiscoveryService.yaml` contain?
f
I tried changing it to https and did a reboot, but it's still crashing
g
what about `prometheus-serviceMonitorKubeControllerManager.yaml`?
f
That looks like that
g
what about towards the bottom of that file? Same thing?
f
I could change that to https. Haven't tried that
Rebooting now
g
nah that’s not going to work. The name of that actually matches the name of the port in the Service definition
f
Yeah, I didn't like that. Fan started fast
g
Will you put that back to http-metrics and actually just change the ports in `prometheus-kubeControllerManagerPrometheusDiscoveryService.yaml` to both use `10257`? Probably put back whatever other changes you made to these files as well. I think just updating those ports to be 10257 should work. Also update the ports in `prometheus-kubeSchedulerPrometheusDiscoveryService.yaml` to be `10259`. I expect in that file for the scheduler they're showing `10251` currently.
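A sketch of the edit being described, assuming the usual Service layout in those files (the real files have more fields than shown here):

```yaml
# Fragment of prometheus-kubeControllerManagerPrometheusDiscoveryService.yaml:
# keep the port NAME http-metrics (the ServiceMonitor matches on it),
# change only the numbers. 10257 is kube-controller-manager's secure port.
spec:
  ports:
    - name: http-metrics
      port: 10257
      targetPort: 10257
# In prometheus-kubeSchedulerPrometheusDiscoveryService.yaml, use 10259
# (kube-scheduler's secure port) the same way.
```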
f
Yes
Do I still reboot?
g
Reboot the machine? Probably not necessary. But rebuild using those files. Might be able to just `kubectl apply -f <file_name>` them too.
f
I don't think so
Unless it takes a while
g
`kubectl describe -n monitoring pod/prometheus-k8s-0`
all the other files are set back to their original values, right?
f
Correct, I was very careful about that
Damn, there's so much
g
it’s okay, you showed enough here
I’d like to get back to the point in this screenshot and then just edit those service ports 😕
f
https://github.com/carlosedp/cluster-monitoring This is what I did, per Jeff Geerling. Not sure if it helps
So I ran
`kubectl delete -n monitoring pod/prometheus-k8s-0`
Then it recreated and I had access to Grafana again. Don't really know if it fixed it; I doubt it