rapid-area-86392
08/31/2023, 1:41 PM

prehistoric-gpu-28362
08/31/2023, 3:39 PM

echoing-tomato-53055
09/01/2023, 11:22 AM

quiet-rain-75318
09/02/2023, 10:34 AM

white-garden-41931
09/04/2023, 5:00 PM
--local
) and the metrics server is failing with "no route to host localhost.localdomain".
I wonder what the best way to troubleshoot this is?
I tried editing the k3s.service unit file and then realized that it's probably a simple problem (k3s on Fedora 38 server).

white-garden-41931
09/04/2023, 11:48 PMNAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
localhost.localdomain Ready control-plane,master 86s v1.27.4+k3s1 192.168.86.41 <none> Fedora Linux 38 (Server Edition) 6.4.13-200.fc38.x86_64 containerd://1.7.1-k3s1
I feel like I should have a better domain name for the node, any suggestions?
homelab.myhomelabdomain.io with A records using my private address?

high-alligator-99144
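(A minimal sketch of one way to get a node name other than localhost.localdomain: k3s takes the node name from the OS hostname unless overridden, and `node-name` can be set declaratively in its config file. The hostname below is the example name from the thread, not a required value; note a changed node name generally means the old node object has to be deleted and the node re-registered.)

```yaml
# /etc/rancher/k3s/config.yaml -- equivalent to passing --node-name at install time
node-name: homelab.myhomelabdomain.io
```

Alternatively, set the hostname with `hostnamectl set-hostname` before installing k3s, so the default name is already sensible.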
09/05/2023, 9:50 AM
cto@vm01:~$ kubectl run cluster-tester -it --rm --restart=Never --image=busybox:latest
If you don't see a command prompt, try pressing enter.
/ #
/ #
/ # wget google.com
Connecting to google.com (142.250.113.101:80)
wget: can't connect to remote host (142.250.113.101): Connection timed out
When I manually set the http/https proxy for the pod, it works.
/ # export http_proxy=http://proxy_ip:port
/ # export https_proxy=http://proxy_ip:port
/ # wget google.com
Connecting to proxy_ip:port (proxy_ip:port)
saving to 'index.html'
index.html 100% |*********************************************************************************************************************************| 19265 0:00:00 ETA
'index.html' saved
/ #
How do I set proxy details so that they're available for every pod that gets created in this environment?

worried-jackal-89144
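(A hedged sketch of the usual first step: k3s's systemd unit reads an EnvironmentFile at /etc/systemd/system/k3s.service.env, so proxy variables placed there are picked up by the k3s supervisor and its embedded containerd for image pulls. This does not automatically inject the variables into pod containers; for that you typically need per-pod env entries or a mutating admission webhook. `proxy_ip:port` below are the placeholders from the message, and 10.42.0.0/16 / 10.43.0.0/16 are the k3s default cluster and service CIDRs.)

```shell
# /etc/systemd/system/k3s.service.env -- read by k3s.service on start
HTTP_PROXY=http://proxy_ip:port
HTTPS_PROXY=http://proxy_ip:port
NO_PROXY=localhost,127.0.0.0/8,10.42.0.0/16,10.43.0.0/16,192.168.0.0/16,.svc,.cluster.local
```

Restart k3s after editing the file so the unit re-reads it.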
09/05/2023, 8:16 PM
ipam.operator.clusterPoolIPv4PodCIDRList={{ pod_cidr }}
After that, each time I run the install script ( curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=agent INSTALL_K3S_SKIP_START=false INSTALL_K3S_SKIP_ENABLE=false K3S_TOKEN=REDACTED K3S_URL="https://192.168.189.108:6443" sh - ) on a worker node, that node's networking breaks. To fix it I have to reset the VM.
It doesn't happen when k3s' default flannel is used. It also doesn't affect control-plane nodes.
Last entries reported using journalctl before networking is frozen:
Sep 05 22:03:45 k3s-dev-worker-0 audit[13896]: NETFILTER_CFG table=mangle:305 family=2 entries=18 op=nft_register_chain pid=13896 subj=unconfined comm="iptables-restor"
Sep 05 22:03:45 k3s-dev-worker-0 audit[13896]: NETFILTER_CFG table=mangle:305 family=2 entries=13 op=nft_unregister_table pid=13896 subj=unconfined comm="iptables-restor"
Sep 05 22:03:45 k3s-dev-worker-0 audit[13896]: SYSCALL arch=c000003e syscall=46 success=yes exit=4236 a0=3 a1=7ffcc5b52480 a2=0 a3=7ffcc5b5246c items=0 ppid=13725 pid=13896 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts5 ses=8 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=unconfined key=(null)
Sep 05 22:03:45 k3s-dev-worker-0 audit: PROCTITLE proctitle="iptables-restore"
Sep 05 22:03:45 k3s-dev-worker-0 audit[13896]: NETFILTER_CFG table=raw:306 family=2 entries=14 op=nft_register_chain pid=13896 subj=unconfined comm="iptables-restor"
Sep 05 22:03:45 k3s-dev-worker-0 audit[13896]: NETFILTER_CFG table=raw:306 family=2 entries=12 op=nft_unregister_table pid=13896 subj=unconfined comm="iptables-restor"
Sep 05 22:03:45 k3s-dev-worker-0 audit[13896]: SYSCALL arch=c000003e syscall=46 success=yes exit=4820 a0=3 a1=7ffcc5b52480 a2=0 a3=5f183649a000 items=0 ppid=13725 pid=13896 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts5 ses=8 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=unconfined key=(null)
Sep 05 22:03:45 k3s-dev-worker-0 audit: PROCTITLE proctitle="iptables-restore"
Sep 05 22:03:45 k3s-dev-worker-0 audit[13896]: NETFILTER_CFG table=filter:307 family=2 entries=21 op=nft_register_chain pid=13896 subj=unconfined comm="iptables-restor"
The same seems to happen when I run /usr/local/bin/k3s-killall.sh on a worker node.
The article https://docs.k3s.io/upgrades/killall says that a standard upgrade doesn't terminate the containers, but from my observations this is only true for master nodes. Worker nodes get their pods killed.

fresh-summer-90494
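(For reference: the `--set` path at the top of this message matches the Cilium chart's IPAM values, so the same setting expressed as a values file would look like the sketch below. `{{ pod_cidr }}` is the thread's own template placeholder, left as-is.)

```yaml
# Helm values equivalent of --set ipam.operator.clusterPoolIPv4PodCIDRList={{ pod_cidr }}
ipam:
  operator:
    clusterPoolIPv4PodCIDRList:
      - "{{ pod_cidr }}"
```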
09/06/2023, 8:54 AM

magnificent-scooter-98714
09/06/2023, 10:26 PM

magnificent-scooter-98714
09/06/2023, 10:33 PM

microscopic-fountain-47918
09/07/2023, 9:12 AM
2023-09-07. But I discovered later that the system time was incorrect, so I used date -s 2023-08-20 to adjust the system time to the past.
Finally, I found that k3s is not working properly, like this:
Unable to connect to the server: tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2023-08-20T00:00:47Z is before 2023-09-07T09:08:43Z
So, how do I solve this error? Is there a better way to solve it besides reinstalling?

some-florist-16358
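(The certificates here are still valid; the error only means the clock was moved before the certificate's NotBefore date. The usual fix is to restore correct time, e.g. `timedatectl set-ntp true`, and restart k3s; recent k3s releases also ship a `k3s certificate rotate` subcommand if certificates genuinely need regenerating. The self-contained sketch below just illustrates inspecting a certificate's validity window with openssl, using a throwaway cert with arbitrary paths and CN.)

```shell
# Generate a throwaway self-signed cert, then print its validity window.
# A system clock set before "notBefore" triggers exactly the x509 error quoted above.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.crt -days 365 2>/dev/null
openssl x509 -in /tmp/demo.crt -noout -startdate -enddate
```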
09/07/2023, 11:24 AM

white-garden-41931
09/07/2023, 8:27 PM

echoing-butcher-4609
09/08/2023, 9:50 AM

blue-farmer-46993
09/08/2023, 12:01 PM

white-garden-41931
09/08/2023, 4:48 PM
k3s cluster to practice for my CKA, and I'm looking for advice on the best traefik Ingress configuration.
When I look at https://docs.k3s.io/networking#traefik-ingress-controller it mentions /var/lib/rancher/k3s/server/manifests/traefik.yaml, but that directory doesn't exist.
I only have /var/lib/rancher/k3s/server
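(The manifests directory only exists on server nodes and may need to be created; also, on current k3s the packaged Traefik is customized not by editing traefik.yaml, which k3s overwrites, but by dropping a HelmChartConfig next to it. A hedged sketch; the filename and the value being overridden are illustrative.)

```yaml
# /var/lib/rancher/k3s/server/manifests/traefik-config.yaml
# HelmChartConfig overlays values onto the Traefik HelmChart that k3s bundles.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    logs:
      access:
        enabled: true   # illustrative value override
```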
numerous-tiger-84394
09/08/2023, 8:59 PM
curl localhost:80 times out. What could be wrong?
* Trying 127.0.0.1:80...
* connect to 127.0.0.1 port 80 failed: Connection refused
* Trying ::1:80...
white-garden-41931
09/10/2023, 2:17 AM

happy-crayon-4402
09/11/2023, 4:40 PM
ping. We suspected an iptables-related issue with the nft backend being used by default in Ubuntu Jammy. However, inspecting iptables counters we are not observing any counts for DROP or REJECT rules (using iptables -L -v). We also tried passing the k3s flag --prefer-bundled-bin without more luck. Any other ideas on how to troubleshoot this issue? Thanks in advance for your help.

many-telephone-82541
09/14/2023, 11:39 AM
dnsConfig stanza to the pod; without it, it doesn't.

blue-painter-27587
09/15/2023, 11:19 AM

late-needle-80860
09/18/2023, 9:42 AM
--tls-san parameter.
• Should I configure / set it on all server nodes?
• And if I upgrade control-plane/server nodes by replacing them one by one (they will end up with the same name and IP), should I set the parameter in that scenario as well?
---
With the v1.28.1 release (and others) this parameter has come into focus because of the need to set it to patch the cert. CVE on k3s…
Thank you

worried-receptionist-18982
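(On the first question: each k3s server generates and serves its own API certificate, so tls-san is generally set on every server node, and it can live in the config file so a replacement node picks it up automatically. A sketch; the hostname below is a placeholder and the IP is the one quoted earlier in this thread.)

```yaml
# /etc/rancher/k3s/config.yaml on each server node -- equivalent to repeated --tls-san flags
tls-san:
  - k3s.example.internal
  - 192.168.189.108
```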
09/18/2023, 11:09 PM
Using kubectl port-forward I can get any agent node to connect to a k3s service just fine. But if I start a pod, the pod can't seem to access the service. Is there a good way to debug this issue? I'm using the default flannel config; maybe it's an iptables issue?

gorgeous-thailand-32105
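(One way to narrow this down, as a sketch: launch a throwaway pod and test the service by pod IP, by ClusterIP, and by DNS name separately. If the backing pod's IP works but the ClusterIP doesn't, suspect kube-proxy/iptables; if the ClusterIP works but the DNS name doesn't, suspect CoreDNS. The pod name below is arbitrary.)

```yaml
# Throwaway debug pod -- exec into it to probe the service from inside the cluster
apiVersion: v1
kind: Pod
metadata:
  name: net-debug
spec:
  restartPolicy: Never
  containers:
    - name: shell
      image: busybox:latest
      command: ["sleep", "3600"]
```

Then, for example: kubectl exec -it net-debug -- wget -qO- CLUSTER_IP:PORT (substituting the service's ClusterIP and port).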
09/19/2023, 9:14 AM

stale-addition-90512
09/19/2023, 9:18 AM

stale-addition-90512
09/19/2023, 10:28 AM

hallowed-iron-50950
09/20/2023, 4:30 PM

curved-army-69172
09/22/2023, 9:12 AM

shy-nest-74542
09/22/2023, 10:27 AM
rootless: true
prefer-bundled-bin: true
OS config
ubuntu@ip-172-31-34-14:~$ cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.2 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.2 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
k3s check-config is still utilizing the host iptables:
ubuntu@ip-172-31-34-14:~$ k3s check-config
Verifying binaries in /var/lib/rancher/k3s/data/42d7f04c5f9849f0016361b8fd9226a044204a3bca9f576ac8aabe93d2560386/bin:
- sha256sum: good
- links: good
System:
- /usr/sbin iptables v1.8.7 (nf_tables): ok
- swap: disabled
- routes: ok
Limits:
- /proc/sys/kernel/keys/root_maxkeys: 1000000
...
...
STATUS: pass
Can anyone please explain why prefer-bundled-bin is not getting enforced?
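(One hedged reading, an assumption rather than something confirmed from the k3s source here: check-config reports whichever iptables it finds on its own PATH, while prefer-bundled-bin affects the PATH of the processes the k3s supervisor spawns, so the check-config line alone doesn't prove the flag is being ignored. The mechanism is plain PATH shadowing, sketched below with a stand-in binary in a temp directory.)

```shell
# Build a fake "bundled" iptables, then shadow the host one by prepending
# its directory to PATH -- the same precedence trick a bundled-bin dir relies on.
mkdir -p /tmp/bundled-bin
printf '#!/bin/sh\necho "bundled iptables"\n' > /tmp/bundled-bin/iptables
chmod +x /tmp/bundled-bin/iptables
PATH="/tmp/bundled-bin:$PATH" iptables   # prints "bundled iptables", not host output
```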