chilly-exabyte-62143
10/26/2022, 6:19 PM
NodeHosts? I know they are in the coredns ConfigMap, so maybe using RBAC and calling the kube API directly is the simplest way, and then reloading the nginx config?
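A minimal sketch of that direct-API approach, assuming a stock k3s cluster where the node entries live under the NodeHosts key of the coredns ConfigMap in kube-system; the reload step is a placeholder for however the nginx config gets rendered:

# One-shot read of the hosts-file-style node entries CoreDNS serves
kubectl -n kube-system get configmap coredns -o jsonpath='{.data.NodeHosts}'

# Re-render the nginx upstream list from that output, then reload
nginx -s reload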
lively-battery-54332
10/26/2022, 10:54 PM

enough-carpet-20915
10/26/2022, 11:52 PM
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.25.2+k3s1 K3S_TOKEN="REDACTED" sh -s - server --cluster-init --cluster-cidr "10.44.0.0/16" --flannel-iface "enp35s0.4000" --node-ip "10.45.0.1" --node-external-ip "95.217.198.219"
It’s up and running fine.
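If more servers are meant to join that --cluster-init node, a sketch of the usual shape is the same install line pointed at the first server; the IP is a placeholder:

curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.25.2+k3s1 K3S_TOKEN="REDACTED" sh -s - server \
  --server https://<first-server-ip>:6443 \
  --cluster-cidr "10.44.0.0/16" --flannel-iface "enp35s0.4000"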
enough-carpet-20915
10/26/2022, 11:53 PM

enough-carpet-20915
10/26/2022, 11:54 PM
k3s-serve 26328 root 11u IPv4 475400 0t0 TCP localhost.localdomain:2380 (LISTEN)

enough-carpet-20915
10/26/2022, 11:54 PM

enough-carpet-20915
10/27/2022, 5:54 PM
kubectl works just fine on the server, but as soon as I scp /etc/rancher/k3s/k3s.yaml to my desktop (and edit the server setting to point to the server name instead of localhost) I get this error: Unable to connect to the server: x509: certificate signed by unknown authority
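One hedged way to narrow down an unknown-authority error like this is to compare the CA actually presented on the new server name against the certificate-authority-data embedded in the copied kubeconfig; hostname and file path are placeholders:

# Certificate chain the endpoint really serves on 6443
openssl s_client -connect server.example.com:6443 </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject

# CA the copied kubeconfig trusts
grep certificate-authority-data ~/k3s.yaml | awk '{print $2}' | base64 -d | openssl x509 -noout -issuer -subject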
enough-carpet-20915
10/27/2022, 6:03 PM

chilly-exabyte-62143
10/27/2022, 9:27 PM

melodic-hamburger-23329
10/28/2022, 3:39 AM
Why does kubectl exec hang and time out when trying to connect to the k3s API server via a Traefik TCP ingress route? Other commands, including port-forward, work.
> kubectl exec -i -t <...> -n <...> --v=9 -- /bin/sh
...
I1028 09:26:17.416564 9632 round_trippers.go:553] POST https://<...>:443/api/v1/namespaces/<...>/pods/<...>/exec?command=%2Fbin%2Fsh&container=<...>&stdin=true&stdout=true&tty=true 101 Switching Protocols in 104 milliseconds
I1028 09:26:17.416585 9632 round_trippers.go:570] HTTP Statistics: DNSLookup 15 ms Dial 19 ms TLSHandshake 0 ms Duration 104 ms
I1028 09:26:17.416596 9632 round_trippers.go:577] Response Headers:
I1028 09:26:17.416603 9632 round_trippers.go:580] Connection: Upgrade
I1028 09:26:17.416608 9632 round_trippers.go:580] Upgrade: SPDY/3.1
I1028 09:26:17.416613 9632 round_trippers.go:580] X-Stream-Protocol-Version: v4.channel.k8s.io
error: Timeout occurred
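The 101 Switching Protocols followed by a hang is consistent with the SPDY upgrade dying at the proxy. A sketch of one common fix, assuming Traefik is currently terminating TLS on that route: pass the raw TLS stream through to the API server instead; host, entrypoint, and names here are illustrative, not from the thread.

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: k3s-api
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    - match: HostSNI(`k3s.example.com`)
      services:
        - name: kubernetes   # the default/kubernetes Service fronting the API server
          port: 443
  tls:
    passthrough: true        # do not terminate TLS; the exec upgrade must reach the API server end-to-end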
fast-agency-28891
10/28/2022, 6:18 AM

great-winter-35080
10/28/2022, 12:15 PM

clever-air-65544
10/28/2022, 5:29 PM

numerous-country-20400
10/31/2022, 10:10 AM
curl -fL https://get.k3s.io | K3S_KUBECONFIG_MODE="640" INSTALL_K3S_EXEC="server --disable-kube-proxy --disable=servicelb --disable-network-policy --flannel-backend=none --disable traefik" sh
And then install the tigera helm chart https://artifacthub.io/packages/helm/projectcalico/tigera-operator (using v3.24.3 right now) and the nginx ingress helm chart https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx (using 4.2.5).
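A sketch of those two installs at the versions mentioned, assuming the repo URLs published on the linked Artifact Hub pages; release names and namespaces are placeholders:

helm repo add projectcalico https://docs.tigera.io/calico/charts
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install calico projectcalico/tigera-operator --version v3.24.3 --namespace tigera-operator --create-namespace
helm install ingress-nginx ingress-nginx/ingress-nginx --version 4.2.5 --namespace ingress-nginx --create-namespace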
brainy-ram-25474
11/01/2022, 8:19 AM

cuddly-fountain-93707
11/01/2022, 8:09 PM

damp-xylophone-94549
11/01/2022, 10:19 PM

bland-summer-47692
11/02/2022, 6:27 AM

bland-summer-47692
11/02/2022, 6:28 AM

bland-summer-47692
11/02/2022, 6:30 AM

bland-summer-47692
11/02/2022, 6:30 AM

quaint-library-7108
11/02/2022, 2:23 PM

quaint-library-7108
11/02/2022, 2:23 PM

quaint-library-7108
11/02/2022, 2:23 PM

quiet-memory-19288
11/02/2022, 9:20 PM

quiet-memory-19288
11/02/2022, 10:03 PM
INSTALL_K3S_CHANNEL=v1.2xxx sh -s - --disable=traefik --write-kubeconfig-mode 644
Can I get leaner? We only use the host network; can I rip out flannel and turn off logging (pod logging too?) and the metrics server?
Does anyone already have a study on this I can read? I want just enough to run like 5 very little pods, but I need nothing else…
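A hedged sketch of a leaner server along those lines, using only documented k3s switches; the channel value is a placeholder, and --flannel-backend=none is the risky part, safe only if every workload really is hostNetwork:

curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=stable sh -s - server \
  --disable traefik --disable servicelb --disable metrics-server --disable local-storage \
  --disable-network-policy \
  --disable-cloud-controller \
  --write-kubeconfig-mode 644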
able-traffic-85986
11/03/2022, 3:22 PM
externalTrafficPolicy: Local and many more. Is there any solution at all with my recent setup? Do I have to exchange Traefik and/or KlipperLB with another product like MetalLB, NGINX-Ingress, or something else to get this working on-prem? Do I have to disable NAT masquerading to preserve the IP, and is it mandatory to configure routes myself without NAT, or can k3s handle that by itself? Or do I have to change something in flannel?
Another problem is that randomly some nodes rapidly consume a lot of RAM and freeze. Only a reboot can fix that. It happens with master and worker nodes. Could that be a side effect of forwarding traffic inside the cluster? Is that a known issue, and is there a solution to fix it?
For installation the following commands were used:
# First Master
curl -sfL https://get.k3s.io | sh -s - server --datastore-endpoint ${K3S_DATASTORE_ENDPOINT} --node-taint CriticalAddonsOnly=true:NoExecute --node-ip <LOCAL IP MASTER01> --node-external-ip <PUBLIC IP MASTER01> --tls-san <LOCAL IP MASTER01> --tls-san <LOCAL IP MASTER02> --tls-san <LOCAL IP MASTER03> --tls-san <PUBLIC IP MASTER01> --tls-san <PUBLIC IP MASTER02> --tls-san <PUBLIC IP MASTER03> --tls-san master.example.com --tls-san master01.example.com --tls-san master02.example.com --tls-san master03.example.com --flannel-iface=eth1
# Second/Third Master (same command with exchanged IPs plus token and server URL)
# Worker Nodes
curl -sfL https://get.k3s.io | sh -s - agent --server ${K3S_URL} --token ${K3S_NODE_TOKEN} --node-ip <NODE LOCAL IP> --node-external-ip <NODE PUBLIC IP> --flannel-iface eth1
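For reference, a minimal sketch of the Service shape that preserves the client IP, with placeholder names; note that externalTrafficPolicy: Local only delivers to pods on the node that received the traffic, so the load balancer must target nodes that actually run a backend pod:

apiVersion: v1
kind: Service
metadata:
  name: ingress-entry
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # skips the inter-node SNAT hop, so pods see the real client IP
  selector:
    app: ingress
  ports:
    - port: 443
      targetPort: 8443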
wide-author-88664
11/03/2022, 8:57 PM
If I put a kind: HelmChart manifest in /var/lib/rancher/k3s/server/manifests, will it automatically deploy it? Or does it go in /var/lib/rancher/k3s/server/static/charts?
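A minimal sketch of the HelmChart shape the k3s Helm controller picks up from that manifests directory; the chart, repo, and values below are illustrative, not from the thread:

# /var/lib/rancher/k3s/server/manifests/example.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: grafana
  namespace: kube-system
spec:
  chart: grafana
  repo: https://grafana.github.io/helm-charts
  targetNamespace: monitoring
  valuesContent: |-
    adminPassword: "REDACTED"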
boundless-smartphone-66270
11/04/2022, 6:12 PM

prehistoric-judge-25958
11/06/2022, 10:33 PM
k3s check-config: what exactly do these errors mean? Should I solve this, and if so, how? I am running k3s on Debian 11 bullseye with 3 masters (etcd).
root:# k3s check-config
Verifying binaries in /var/lib/rancher/k3s/data/2ef87ff954adbb390309ce4dc07500f29c319f84feec1719bfb5059c8808ec6a/bin:
- sha256sum: good
- links: aux/ip6tables should link to iptables-detect.sh (fail)
- links: aux/ip6tables-restore should link to iptables-detect.sh (fail)
- links: aux/ip6tables-save should link to iptables-detect.sh (fail)
- links: aux/iptables should link to iptables-detect.sh (fail)
- links: aux/iptables-restore should link to iptables-detect.sh (fail)
- links: aux/iptables-save should link to iptables-detect.sh (fail)
....
STATUS: 6 (fail)
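A hedged way to see what the link check is complaining about, assuming a default data dir (the current symlink points at the active hash directory); Debian 11's nftables default is a common source of iptables mismatches:

# Inspect the bundled iptables shims the check is validating
ls -l /var/lib/rancher/k3s/data/current/bin/aux/

# Show which iptables backend the host itself uses (nft vs legacy)
update-alternatives --display iptables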