# k3s
a
Unable to install prometheus
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "http://localhost:8080/version": dial tcp [::1]:8080: connect: connection refused
jsimons@blueberry:~$ sudo helm install kube-prometheus-stack \                                                                            
>   --create-namespace \
>   --namespace monitoring \
>   prometheus-community/kube-prometheus-stack
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "http://localhost:8080/version": dial tcp [::1]:8080: connect: connection refused
jsimons@blueberry:~/.kube$ ls
cache  config
jsimons@blueberry:~/.kube$ ls -la
total 16
drwxr-x---  3 jsimons jsimons 4096 Jun 10 19:23 .
drwxr-x--- 10 jsimons jsimons 4096 Aug 27 22:57 ..
drwxr-x---  4 jsimons jsimons 4096 May 20 22:12 cache
-rw-------  1 jsimons jsimons 2957 Jun 11 17:57 config
jsimons@blueberry:~/.kube$ pwd
/home/jsimons/.kube
jsimons@blueberry:~/.kube$ ls la
ls: cannot access 'la': No such file or directory
jsimons@blueberry:~/.kube$ pwd  
/home/jsimons/.kube
jsimons@blueberry:~/.kube$ ls -la
total 16
drwxr-x---  3 jsimons jsimons 4096 Jun 10 19:23 .
drwxr-x--- 10 jsimons jsimons 4096 Aug 27 22:57 ..
drwxr-x---  4 jsimons jsimons 4096 May 20 22:12 cache
-rw-------  1 jsimons jsimons 2957 Jun 11 17:57 config
jsimons@blueberry:~/.kube$ cat config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <REDACTED>
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: <REDACTED>
    client-key-data: <REDACTED>
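A note on the error above: when helm (or kubectl) cannot find any kubeconfig at all, it falls back to http://localhost:8080, which is exactly the refused connection in the message. Under `sudo`, the default `env_reset` policy scrubs the calling shell's environment, so a KUBECONFIG exported by the user never reaches helm. A minimal sketch of the effect, using `env -i` to reproduce the scrubbed environment without needing root:

```shell
# KUBECONFIG set in the parent shell...
export KUBECONFIG="$HOME/.kube/config"
# ...does not survive an environment scrub (sudo's env_reset behaves the same way):
env -i sh -c 'echo "child sees: [$KUBECONFIG]"'   # prints: child sees: []
# Workarounds: keep just this variable across sudo (sudo >= 1.8.21),
# or pass the path explicitly so no env var is needed:
#   sudo --preserve-env=KUBECONFIG helm list -A
#   sudo helm list -A --kubeconfig "$HOME/.kube/config"
```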
jsimons@blueberry:~$ sudo k3s check-config

Verifying binaries in /var/lib/rancher/k3s/data/num/bin:
- sha256sum: good
- links: good

System:
- /usr/sbin iptables v1.8.10 (nf_tables): ok
- swap: disabled
- routes: default CIDRs 10.42.0.0/16 or 10.43.0.0/16 already routed

Limits:
- /proc/sys/kernel/keys/root_maxkeys: 30000004

info: reading kernel config from /proc/config.gz ...

Generally Necessary:
- cgroup hierarchy: cgroups V2 mounted, cpu|cpuset|memory controllers status: good
- /usr/sbin/apparmor_parser
apparmor: enabled and tools installed
- CONFIG_NAMESPACES: enabled
- CONFIG_NET_NS: enabled
- CONFIG_PID_NS: enabled
- CONFIG_IPC_NS: enabled
- CONFIG_UTS_NS: enabled
- CONFIG_CGROUPS: enabled
- CONFIG_CGROUP_PIDS: enabled
- CONFIG_CGROUP_CPUACCT: enabled
- CONFIG_CGROUP_DEVICE: enabled
- CONFIG_CGROUP_FREEZER: enabled
- CONFIG_CGROUP_SCHED: enabled
- CONFIG_CPUSETS: enabled
- CONFIG_MEMCG: enabled
- CONFIG_SECCOMP: enabled
- CONFIG_KEYS: enabled
- CONFIG_VETH: enabled (as module)
- CONFIG_BRIDGE: enabled (as module)
- CONFIG_BRIDGE_NETFILTER: enabled (as module)
- CONFIG_IP_NF_FILTER: enabled (as module)
- CONFIG_IP_NF_TARGET_MASQUERADE: enabled (as module)
- CONFIG_IP_NF_TARGET_REJECT: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_CONNTRACK: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_IPVS: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_MULTIPORT: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_STATISTIC: enabled (as module)
- CONFIG_IP_NF_NAT: enabled (as module)
- CONFIG_NF_NAT: enabled (as module)
- CONFIG_POSIX_MQUEUE: enabled

Optional Features:
- CONFIG_USER_NS: enabled
- CONFIG_BLK_CGROUP: enabled
- CONFIG_BLK_DEV_THROTTLING: enabled
- CONFIG_CGROUP_PERF: enabled
- CONFIG_CGROUP_HUGETLB: enabled
- CONFIG_NET_CLS_CGROUP: enabled (as module)
- CONFIG_CGROUP_NET_PRIO: enabled
- CONFIG_CFS_BANDWIDTH: enabled
- CONFIG_FAIR_GROUP_SCHED: enabled
- CONFIG_RT_GROUP_SCHED: missing
- CONFIG_IP_NF_TARGET_REDIRECT: enabled (as module)
- CONFIG_IP_SET: enabled (as module)
- CONFIG_IP_VS: enabled (as module)
- CONFIG_IP_VS_NFCT: enabled
- CONFIG_IP_VS_PROTO_TCP: enabled
- CONFIG_IP_VS_PROTO_UDP: enabled
- CONFIG_IP_VS_RR: enabled (as module)
- CONFIG_EXT4_FS: enabled
- CONFIG_EXT4_FS_POSIX_ACL: enabled
- CONFIG_EXT4_FS_SECURITY: enabled
- Network Drivers:
  - "overlay":
    - CONFIG_VXLAN: enabled (as module)
      Optional (for encrypted networks):
      - CONFIG_CRYPTO: enabled
      - CONFIG_CRYPTO_AEAD: enabled (as module)
      - CONFIG_CRYPTO_GCM: enabled (as module)
      - CONFIG_CRYPTO_SEQIV: enabled (as module)
      - CONFIG_CRYPTO_GHASH: enabled (as module)
      - CONFIG_XFRM: enabled
      - CONFIG_XFRM_USER: enabled (as module)
      - CONFIG_XFRM_ALGO: enabled (as module)
      - CONFIG_INET_ESP: enabled (as module)
      - CONFIG_INET_XFRM_MODE_TRANSPORT: missing
- Storage Drivers:
  - "overlay":
    - CONFIG_OVERLAY_FS: enabled (as module)

STATUS: pass
b
The localhost:8080 makes me wonder, do you have KUBECONFIG in your env pointing to the relevant file?
c
You do not.
That is not the correct address for the cluster that would be used if you were using the kubeconfig as shown in the docs.
a
k config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED
what do you suggest i do?
also I can use this command below and that kind of works
sudo helm install kube-prometheus-stack --create-namespace --namespace kube-prometheus-stack oci://ghcr.io/prometheus-community/charts/kube-prometheus-stack --kubeconfig=$HOME/.kube/config
c
why are you using sudo? sudo does not retain env vars, unless you tell it on the command line which ones to keep
a
sudo seems to work
jsimons@blueberry:~$ kubectl get nodes
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes)
jsimons@blueberry:~$ sudo kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
blueberry    Ready    control-plane,master   97d   v1.32.4+k3s1
pumpkin      Ready    etcd                   97d   v1.32.4+k3s1
raspberry    Ready    worker                 97d   v1.32.4+k3s1
strawberry   Ready    worker                 97d   v1.32.4+k3s1
c
The kubectl that comes with k3s has been modified to look at the correct path for the kubeconfig. Helm has not.
This is covered in the docs.
I don't know what is going on with the service unavailable error you get when running without sudo. Are you using the wrong kubeconfig for that?
a
this is my kube/config. I am looking for you to help me and I am very appreciative of your time; thank you for being here. The docs are great but did not help, and I am all about reading the docs.
jsimons@blueberry:~/.kube$ cat config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <REDACTED>
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: <REDACTED>
    client-key-data: <REDACTED>
Per https://helm.sh/docs/helm/helm/ , $KUBECONFIG sets an alternative Kubernetes configuration file (default "~/.kube/config"). My config file location is the one from https://docs.k3s.io/installation/configuration ; I will point KUBECONFIG at ~/.kube/config because root owns the file at the default path.
updated env
KUBECONFIG=~/.kube/config
I still get this error
jsimons@blueberry:~$ kubectl get nodes
E0828 14:13:35.890983    7501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E0828 14:13:35.895866    7501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E0828 14:13:35.900646    7501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E0828 14:13:35.905488    7501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E0828 14:13:35.912609    7501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
The connection to the server localhost:8080 was refused - did you specify the right host or port?
jsimons@blueberry:~$ k get nodes      
NAME         STATUS   ROLES                  AGE   VERSION
blueberry    Ready    control-plane,master   98d   v1.32.4+k3s1
pumpkin      Ready    etcd                   98d   v1.32.4+k3s1
raspberry    Ready    worker                 98d   v1.32.4+k3s1
strawberry   Ready    worker                 98d   v1.32.4+k3s1
jsimons@blueberry:~$
is there a way to trace all the function calls to see why it errors out, like debugging it in real time?
the actual config file is owned by root
jsimons@blueberry:~$ ls -l /etc/rancher/k3s/                                       
total 12
drwxr-xr-x 3 root root 4096 Aug 17 19:00 certs.d
-rw-r--r-- 1 root root 2957 Aug 28 12:35 k3s.yaml
-rw-r--r-- 1 root root  365 Aug 20 22:10 registries.yaml
jsimons@blueberry:~$ ls -l /                           
total 60
lrwxrwxrwx   1 root root     7 Oct  7  2024 bin -> usr/bin
drwxr-xr-x   4 root root  4096 Jul 26 00:06 boot
drwxr-xr-x  18 root root 14020 Aug 28 12:35 dev
drwxr-xr-x 115 root root  4096 Aug 27 22:56 etc
drwxr-xr-x   3 root root  4096 Oct  7  2024 home
lrwxrwxrwx   1 root root     7 Oct  7  2024 lib -> usr/lib
drwx------   2 root root 16384 Oct  7  2024 lost+found
drwxr-xr-x   2 root root  4096 Oct  7  2024 media
drwxr-xr-x   2 root root  4096 Oct  7  2024 mnt
drwxr-xr-x   2 root root  4096 Oct  7  2024 opt
dr-xr-xr-x 227 root root     0 Dec 31  1969 proc
drwx------   8 root root  4096 Aug 27 22:49 root
drwxr-xr-x  34 root root   940 Aug 28 12:42 run
lrwxrwxrwx   1 root root     8 Oct  7  2024 sbin -> usr/sbin
drwxr-xr-x   7 root root  4096 Aug 17 01:05 snap
drwxr-xr-x   2 root root  4096 Oct  7  2024 srv
dr-xr-xr-x  12 root root     0 Aug 28 14:03 sys
drwxrwxrwt   8 root root   160 Aug 28 12:50 tmp
drwxr-xr-x  11 root root  4096 Oct  7  2024 usr
drwxr-xr-x  13 root root  4096 Oct  7  2024 var
jsimons@blueberry:~$ 

the config file in my home directory is still here, not lost
jsimons@blueberry:~$ ls -l .kube/   
total 8
drwxr-x--- 4 jsimons jsimons 4096 May 20 22:12 cache
-rw------- 1 jsimons jsimons 2957 Jun 11 17:57 config
jsimons@blueberry:~$
c
You need to
export KUBECONFIG=~/.kube/config
if you want other commands to see that env var
or preferably you would just set --write-kubeconfig-group and/or --write-kubeconfig-mode in the K3s config so that your non-root user can read the file at /etc/rancher/k3s/k3s.yaml, and then point the KUBECONFIG env var at that.
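For reference, the server-side route described here would look roughly like this in the K3s config file (a sketch: the group name is an assumption for this host, and the path is the documented default config location):

```yaml
# /etc/rancher/k3s/config.yaml
write-kubeconfig-mode: "0640"        # make the generated kubeconfig group-readable
write-kubeconfig-group: "jsimons"    # assumption: a group your non-root user belongs to
```

After restarting the k3s service, `export KUBECONFIG=/etc/rancher/k3s/k3s.yaml` then works for that user without sudo.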
a
I already did that first part and it did not work
c
or alternatively, just get a root shell if all of this is too complicated
if you did that then you wouldn’t be having this problem
a
history output:
 2005  export KUBECONFIG="~/.kube/config"
 2006  env
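One thing worth flagging in the history above: the tilde is inside double quotes, and the shell does not perform tilde expansion inside quotes. KUBECONFIG therefore ends up holding the literal string `~/.kube/config`, a path kubectl cannot open, which would explain why the export "did not work". A quick demonstration:

```shell
KUBECONFIG="~/.kube/config"   # quoted: tilde stays literal
echo "$KUBECONFIG"            # prints: ~/.kube/config
KUBECONFIG=~/.kube/config     # unquoted: tilde expands
echo "$KUBECONFIG"            # prints your home directory + /.kube/config
```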
I am all for doing something complicated if its the best way of accomplishing the goal
i see what you sent in the docs and if you look at the terminal output you will see that my original config aligns exactly with the docs
to solve this issue I had to create a symlink; I wonder why this is not in the docs
sudo ln -s /etc/rancher/k3s/k3s.yaml ~/.kube/config
jsimons@blueberry:~$ ls -l .kube/config 
lrwxrwxrwx 1 root root 25 Aug 28 19:46 .kube/config -> /etc/rancher/k3s/k3s.yaml

jsimons@blueberry:~$ kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
blueberry    Ready    control-plane,master   98d   v1.32.4+k3s1
pumpkin      Ready    etcd                   98d   v1.32.4+k3s1
raspberry    Ready    worker                 98d   v1.32.4+k3s1
strawberry   Ready    worker                 98d   v1.32.4+k3s1
Strange issue I guess
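The symlink works here because permissions on a symlink itself are ignored; the kernel checks access on the target, and the `ls -l` output earlier in the thread shows `/etc/rancher/k3s/k3s.yaml` is world-readable (mode 644) on this box. A small sketch of that behavior on scratch files:

```shell
# Access checks follow the symlink to its target, never the link itself
target=$(mktemp)
echo ok > "$target"
chmod 644 "$target"        # target readable by everyone
link="$target.link"
ln -s "$target" "$link"    # the link's own lrwxrwxrwx bits are irrelevant
cat "$link"                # prints: ok
```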
b
There are various ways around it. I use setfacl and run sudo setfacl -m g:<group name>:r /etc/rancher/k3s/k3s.yaml && echo 'export KUBECONFIG=/etc/rancher/k3s/k3s.yaml' >> ~/.bashrc && . ~/.bashrc. Whichever way is easiest for you; it's just the same as with any other Kubernetes-style installation, needing to set up KUBECONFIG to use kubectl.
a
If you look at what I posted you will find that my original KUBECONFIG env var was set to the correct path; it's in the thread
Prometheus works; I had to use a NodePort.