helpful-helmet-41924
02/05/2023, 5:27 AM
09:23:53 root@T7910-k3sm1 ~ → curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.24.10+k3s1 sh -
[INFO] Using v1.24.10+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.24.10+k3s1/sha256sum-amd64.txt
[INFO] Skipping binary downloaded, installed k3s matches hash
Rancher K3s Common (stable) 0.0 B/s | 0 B 00:00
Errors during downloading metadata for repository 'rancher-k3s-common-stable':
- Curl error (35): SSL connect error for https://rpm.rancher.io/k3s/stable/common/centos/8/noarch/repodata/repomd.xml [OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to rpm.rancher.io:443]
Error: Failed to download metadata for repo 'rancher-k3s-common-stable': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
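A possible workaround if rpm.rancher.io is unreachable from this host: the install script can skip the k3s-selinux RPM, which also skips the rancher-k3s-common-stable repo (this sidesteps the repo, it does not fix the underlying TLS/proxy problem):
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.24.10+k3s1 INSTALL_K3S_SKIP_SELINUX_RPM=true sh -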
polite-engineer-55788
02/05/2023, 10:09 AM
little-shampoo-26858
02/05/2023, 11:04 PM
resolv.conf that points to my local DNS server. The nodes are all running v1.24.6+k3s1. Has anyone ever experienced similar issues or have any troubleshooting advice?
brief-nightfall-38236
02/07/2023, 8:18 AM
mysterious-wire-57288
02/07/2023, 9:04 AM
apiVersion: v1
kind: Service
metadata:
  name: test-service
  labels:
    svccontroller.k3s.cattle.io/lbpool: pool1
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: test-service-1
  labels:
    svccontroller.k3s.cattle.io/lbpool: pool2
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer
The node has labels, but the first service is still taking all the IPs. Why is that?
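For comparison, the ServiceLB pool mechanism matches the service's svccontroller.k3s.cattle.io/lbpool label against the same label on nodes, and (as I read the docs) those nodes also need the enablelb label before the pool label is honored. A sketch with placeholder node names:
kubectl label node node-a svccontroller.k3s.cattle.io/enablelb=true svccontroller.k3s.cattle.io/lbpool=pool1
kubectl label node node-b svccontroller.k3s.cattle.io/enablelb=true svccontroller.k3s.cattle.io/lbpool=pool2
# once any node carries enablelb, ServiceLB only places its svclb daemonset pods on labeled nodes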
victorious-hair-86525
02/07/2023, 11:32 AM
curl -sfL https://get.k3s.io | sh -
Now I would like to change the config to the embedded etcd cluster version of k3s (--cluster-init). Is it possible to do that without completely reinstalling k3s? Thanks
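The k3s HA docs describe converting an existing server from the default SQLite datastore to embedded etcd by restarting it with --cluster-init, so a full reinstall should not be necessary. A minimal sketch, assuming the config-file approach:
echo "cluster-init: true" >> /etc/rancher/k3s/config.yaml
systemctl restart k3s
# after the restart, the SQLite data should have been migrated into /var/lib/rancher/k3s/server/db/etcd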
elegant-article-67113
02/07/2023, 3:07 PM
steep-branch-9388
02/07/2023, 3:18 PM
kubectl create secret tls mycert -n myns --cert=mychain.pem --key=mykey.key
and refer to it in /var/lib/rancher/k3s/server/manifests/traefik-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    extraVolumeMounts:
    - name: ssl
      mountPath: /ssl
    extraVolumes:
    - name: ssl
      secret:
        secretName: mycert
Does this look fine? Do you know of working examples for k3s + custom Traefik certificates?
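An alternative that avoids mounting files into the pod: point Traefik's default TLS store at the secret through the CRDs bundled with k3s. A sketch, assuming the secret lives in (or is copied to) the same namespace as the TLSStore, and noting that newer Traefik versions use the traefik.io/v1alpha1 API group instead:
cat > /var/lib/rancher/k3s/server/manifests/traefik-default-cert.yaml <<'EOF'
apiVersion: traefik.containo.us/v1alpha1
kind: TLSStore
metadata:
  name: default
  namespace: kube-system
spec:
  defaultCertificate:
    secretName: mycert
EOF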
few-artist-8751
02/07/2023, 4:54 PM
MountVolume.SetUp failed for volume “flannel-cfg” : failed to sync configmap cache: timed out waiting for the condition
). In the flannel logs I am seeing W0206 13:37:12.951287 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
Anyone have a thought about that? How can I pass the kubeconfig to flannel? Btw, really thank you for all your help and attention 🙂 Following are the logs:
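If this is the standalone kube-flannel DaemonSet rather than the flannel embedded in k3s, the inClusterConfig warning is usually harmless; flanneld can be given an explicit kubeconfig with its --kubeconfig-file flag if needed. A rough sketch for checking the current args (namespace and names follow the upstream manifest and may differ here):
kubectl -n kube-flannel get ds kube-flannel -o jsonpath='{.spec.template.spec.containers[0].args}'
# flags of interest on the flanneld container: --kube-subnet-mgr --kubeconfig-file=/path/to/kubeconfig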
acceptable-airline-39173
02/08/2023, 5:25 AM
acceptable-airline-39173
02/08/2023, 5:30 AM
brief-sunset-33126
02/08/2023, 12:42 PM
brief-sunset-33126
02/08/2023, 12:46 PM
breezy-notebook-17719
02/09/2023, 8:24 AM
v1.24.3+k3s1 to v1.26.1+k3s1, and now my pods are not using the proxy anymore, a.k.a. they cannot reach anything online. DNS resolution works, and setting the proxy manually inside a test pod solves the issue. How can I solve this issue generally? I have the proxy settings in /etc/systemd/system/k3s.service.env, which worked for the previous version, but now it doesn't seem to set the proxy properly. Any hints/pointers would be highly appreciated 🙂
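For reference, a sketch of /etc/systemd/system/k3s.service.env with the proxy variables k3s documents, assuming the default cluster and service CIDRs and a placeholder proxy URL. These reach the k3s process and containerd; workload pods never inherit them automatically:
HTTP_PROXY=http://proxy.example.com:3128
HTTPS_PROXY=http://proxy.example.com:3128
NO_PROXY=127.0.0.0/8,10.42.0.0/16,10.43.0.0/16,.svc,.cluster.local
# recent k3s releases also accept CONTAINERD_HTTP_PROXY / CONTAINERD_HTTPS_PROXY / CONTAINERD_NO_PROXY
# when the proxy should apply to image pulls only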
chilly-airport-96481
02/09/2023, 11:52 AM
eager-area-27176
02/10/2023, 1:48 PM
narrow-article-96388
02/13/2023, 3:26 PM
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik" sh -s - --docker
on worker:
curl -sfL https://get.k3s.io | K3S_URL=https://10.10.1.60:6443 K3S_TOKEN={{ Token }} sh -s - --docker --node-ip 10.116.212.4 --node-external-ip 10.116.212.4 --flannel-iface eth1
- I installed K3S using WireGuard following this guideline https://www.inovex.de/de/blog/how-to-set-up-a-k3s-cluster-on-wireguard/ and K3S is running well:
on master:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik" sh -s - --docker --advertise-address 10.222.0.1 --node-external-ip 10.10.1.60 --flannel-iface=wg0 --flannel-backend=wireguard-native --flannel-external-ip
on worker:
curl -sfL https://get.k3s.io | K3S_URL=https://10.10.1.60:6443 K3S_TOKEN={{ Token }} sh -s - --docker --node-ip 10.116.212.4 --node-external-ip 10.116.212.4 --flannel-iface eth1
- Baremetal01 (10.116.1.2) can connect to VPC A & VPC B via IPSec Site to Site
- All instances in VPC A & B can connect to baremetal01
But I have one problem: the pods on baremetal01 cannot connect to VPC A and VPC B; they can only reach baremetal01's internal IP.
Any advice would be appreciated. Thank you
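A few things worth checking in a setup like this, assuming the default k3s pod CIDR 10.42.0.0/16 and placeholder addresses:
# does pod traffic toward the VPCs leave baremetal01 masqueraded to the node IP?
iptables -t nat -L POSTROUTING -n -v | grep 10.42
# where do packets from a pod to a VPC instance stop?
kubectl run tmp-shell --rm -it --image=nicolaka/netshoot -- traceroute <vpc-instance-ip>
# if pod traffic leaves with a 10.42.x.x source, the IPSec traffic selectors and the VPC
# route tables also need to cover 10.42.0.0/16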
wonderful-baker-81666
02/14/2023, 4:21 PM
loud-helmet-97067
02/15/2023, 12:01 PM
cat /etc/rancher/k3s/config.yaml.d/50-rancher.yaml
{
"agent-token": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
"disable-apiserver": false,
"disable-cloud-controller": false,
"disable-controller-manager": false,
"disable-etcd": false,
"disable-kube-proxy": false,
"disable-network-policy": false,
"disable-scheduler": false,
"docker": false,
"etcd-expose-metrics": false,
"etcd-snapshot-retention": 5,
"etcd-snapshot-schedule-cron": "0 */5 * * *",
"kube-controller-manager-arg": [
"cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager",
"secure-port=10257"
],
"kube-scheduler-arg": [
"cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler",
"secure-port=10259"
],
"node-label": [
"<http://cattle.io/os=linux|cattle.io/os=linux>",
"<http://rke.cattle.io/machine=b89290bb-5f82-47e7-96bc-9cc16f126a5c|rke.cattle.io/machine=b89290bb-5f82-47e7-96bc-9cc16f126a5c>"
],
"node-taint": [
"<http://node-role.kubernetes.io/control-plane:NoSchedule|node-role.kubernetes.io/control-plane:NoSchedule>",
"<http://node-role.kubernetes.io/etcd:NoExecute|node-role.kubernetes.io/etcd:NoExecute>"
],
"private-registry": "/etc/rancher/k3s/registries.yaml",
"protect-kernel-defaults": false,
"secrets-encryption": false,
"selinux": false,
"server": "<https://x.x.x.1:6443>",
"token": "YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY"
Master Node 1 [Sample ip: x.x.x.1]:
cat /etc/rancher/k3s/config.yaml.d/50-rancher.yaml
{
"agent-token": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
"cluster-init": true,
"disable-apiserver": false,
"disable-cloud-controller": false,
"disable-controller-manager": false,
"disable-etcd": false,
"disable-kube-proxy": false,
"disable-network-policy": false,
"disable-scheduler": false,
"docker": false,
"etcd-expose-metrics": false,
"etcd-snapshot-retention": 5,
"etcd-snapshot-schedule-cron": "0 */5 * * *",
"kube-controller-manager-arg": [
"cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager",
"secure-port=10257"
],
"kube-scheduler-arg": [
"cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler",
"secure-port=10259"
],
"node-label": [
"<http://cattle.io/os=linux|cattle.io/os=linux>",
"<http://rke.cattle.io/machine=77f5f3c6-a380-48b0-8b74-c7c3da330ff6|rke.cattle.io/machine=77f5f3c6-a380-48b0-8b74-c7c3da330ff6>"
],
"node-taint": [
"<http://node-role.kubernetes.io/control-plane:NoSchedule|node-role.kubernetes.io/control-plane:NoSchedule>",
"<http://node-role.kubernetes.io/etcd:NoExecute|node-role.kubernetes.io/etcd:NoExecute>"
],
"private-registry": "/etc/rancher/k3s/registries.yaml",
"protect-kernel-defaults": false,
"secrets-encryption": false,
"selinux": false,
"token": "YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY"
}
On testing, the issue appears when Master Node1 goes down. When Master Node0 goes down, kubectl sometimes works (not 100% of the time). We assume the kube-apiserver and related pods are distributed across both master nodes, along with etcd datastore sync, when we provision from the Rancher UI.
Any insights/feedback on how to correctly achieve HA for Rancher-UI-provisioned k3s when one of the master nodes goes down would be highly appreciated.
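One detail that may explain this: with the embedded etcd datastore (cluster-init in the configs above), two server nodes cannot maintain quorum, so losing either of them can make the apiserver unavailable; etcd-based HA needs an odd number of servers (three or more). A sketch of joining a third server to the existing cluster, reusing the placeholder address and token from the configs above (when provisioning through the Rancher UI, the equivalent is adding a third machine with the etcd and control-plane roles):
curl -sfL https://get.k3s.io | sh -s - server \
  --server https://x.x.x.1:6443 \
  --token YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY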
breezy-autumn-81048
02/15/2023, 3:12 PM
elegant-article-67113
02/15/2023, 6:00 PM
bitter-photographer-12652
02/16/2023, 1:58 PM
λ kubectl get pods -A
NAMESPACE       NAME                   READY   STATUS              RESTARTS   AGE
cattle-system   helm-operation-229zl   0/2     ContainerCreating   0          23m
cattle-system   helm-operation-58tk4   0/2     ContainerCreating   0          4m52s
cattle-system   helm-operation-5fgf4   0/2     ImagePullBackOff    0          37m
cattle-system   helm-operation-5fml2   0/2     ImagePullBackOff    0          38m
cattle-system   helm-operation-6dlgv   0/2     ContainerCreating   0          3m50s
cattle-system   helm-operation-7fgf4   0/2     ContainerCreating   0          24m
cattle-system   helm-operation-8cm54   0/2     ImagePullBackOff    0          39m
cattle-system   helm-operation-dc45v   0/2     ContainerCreating   0          17m
cattle-system   helm-operation-dslxw   0/2     ContainerCreating   0          21m
cattle-system   helm-operation-f7blm   0/2     ImagePullBackOff    0          36m
cattle-system   helm-operation-fqf8j   0/2     ContainerCreating   0          13m
cattle-system   helm-operation-gbt25   0/2     ImagePullBackOff    0          41m
cattle-system   helm-operation-jlvb5   0/2     ContainerCreating   0          18m
cattle-system   helm-operation-jpqgq   0/2     ContainerCreating   0          6m57s
cattle-system   helm-operation-jxdkh   0/2     ContainerCreating   0          26m
cattle-system   helm-operation-kn6mz   0/2     ContainerCreating   0          22m
cattle-system   helm-operation-l9x8z   0/2     ImagePullBackOff    0          40m
cattle-system   helm-operation-lljpq   0/2     ContainerCreating   0          106s
cattle-system   helm-operation-lpfp5   0/2     ContainerCreating   0          14m
cattle-system   helm-operation-m5jpb   0/2     ContainerCreating   0          20m
cattle-system   helm-operation-p2cbz   0/2     ContainerCreating   0          9m1s
cattle-system   helm-operation-pgvx8   0/2     ContainerCreating   0          44s
cattle-system   helm-operation-pxvtn   0/2     ContainerCreating   0          12m
cattle-system   helm-operation-pzq64   0/2     ContainerCreating   0          11m
cattle-system   helm-operation-rcfl8   0/2     ContainerCreating   0          15m
cattle-system   helm-operation-rqx9l   0/2     ContainerCreating   0          2m48s
cattle-system   helm-operation-thdwv   0/2     ContainerCreating   0          7m59s
cattle-system   helm-operation-vf7nq   0/2     ContainerCreating   0          5m54s
cattle-system   helm-operation-vswpd   0/2     ContainerCreating   0          10m
cattle-system   helm-operation-vvzmr   0/2     ContainerCreating   0          16m
cattle-system   helm-operation-wz2m6   0/2     ImagePullBackOff    0          35m
creamy-action-76081
02/17/2023, 3:17 AM
cold-rain-40382
02/17/2023, 11:57 AM
cold-rain-40382
02/17/2023, 12:05 PM
flaky-nail-94620
02/17/2023, 2:41 PM
best-fountain-73060
02/19/2023, 7:47 AM
steep-branch-9388
02/20/2023, 9:29 AM
valuesContent
How can I use a values.yaml file when I autodeploy helm charts?
https://docs.k3s.io/helm#customizing-packaged-components-with-helmchartconfig
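One way to apply that doc: the contents of a values.yaml can be embedded under valuesContent in an auto-deployed HelmChart (or HelmChartConfig) manifest. A sketch with a hypothetical chart, repo, and values:
cat > /var/lib/rancher/k3s/server/manifests/myapp.yaml <<'EOF'
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: myapp
  namespace: kube-system
spec:
  repo: https://charts.example.com
  chart: myapp
  targetNamespace: default
  valuesContent: |-
    replicaCount: 2
    image:
      tag: "1.2.3"
EOF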
blue-farmer-46993
02/20/2023, 9:29 AM
breezy-autumn-81048
02/20/2023, 1:10 PM
E0220 13:04:57.268766 26 remote_image.go:113] PullImage "rancher/shell:v0.1.6" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/rancher/shell:v0.1.6": failed to resolve reference "docker.io/rancher/shell:v0.1.6": failed to do request: Head https://registry-1.docker.io/v2/rancher/shell/manifests/v0.1.6: dial tcp: lookup registry-1.docker.io: no such host
E0220 13:04:57.268808 26 kuberuntime_image.go:50] Pull image "rancher/shell:v0.1.6" failed: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/rancher/shell:v0.1.6": failed to resolve reference "docker.io/rancher/shell:v0.1.6": failed to do request: Head https://registry-1.docker.io/v2/rancher/shell/manifests/v0.1.6: dial tcp: lookup registry-1.docker.io: no such host
E0220 13:04:57.268865 26 kuberuntime_manager.go:801] container start failed: ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/rancher/shell:v0.1.6": failed to resolve reference "docker.io/rancher/shell:v0.1.6": failed to do request: Head https://registry-1.docker.io/v2/rancher/shell/manifests/v0.1.6: dial tcp: lookup registry-1.docker.io: no such host
E0220 13:04:57.269670 26 pod_workers.go:191] Error syncing pod f7002f13-3f4a-477d-b38e-b31a86854c8f ("helm-operation-d6b4f_cattle-system(f7002f13-3f4a-477d-b38e-b31a86854c8f)"), skipping: [failed to "StartContainer" for "helm" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/rancher/shell:v0.1.6\": failed to resolve reference \"docker.io/rancher/shell:v0.1.6\": failed to do request: Head https://registry-1.docker.io/v2/rancher/shell/manifests/v0.1.6: dial tcp: lookup registry-1.docker.io: no such host", failed to "StartContainer" for "proxy" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\""]
E0220 13:05:00.264620 26 pod_workers.go:191] Error syncing pod 7f461430-9fa9-40a0-9292-b232999d4c68 ("helm-operation-csqtv_cattle-system(7f461430-9fa9-40a0-9292-b232999d4c68)"), skipping: [failed to "StartContainer" for "helm" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\"", failed to "StartContainer" for "proxy" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\""]
E0220 13:05:01.263203 26 pod_workers.go:191] Error syncing pod b36eedc8-bfaa-40fe-a347-7afe191c4806 ("helm-operation-zpkh5_cattle-system(b36eedc8-bfaa-40fe-a347-7afe191c4806)"), skipping: [failed to "StartContainer" for "helm" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\"", failed to "StartContainer" for "proxy" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\""]
W0220 13:05:01.348877 26 iptables.go:550] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 3: iptables v1.8.3 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
E0220 13:05:02.263542 26 pod_workers.go:191] Error syncing pod cfa1b9ea-6f54-4486-b72c-516d85758097 ("helm-operation-qlplj_cattle-system(cfa1b9ea-6f54-4486-b72c-516d85758097)"), skipping: [failed to "StartContainer" for "helm" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\"", failed to "StartContainer" for "proxy" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\""]
E0220 13:05:05.262470 26 pod_workers.go:191] Error syncing pod 5c049d76-8ee5-42d6-b64d-da0715a58253 ("helm-operation-s9w8x_cattle-system(5c049d76-8ee5-42d6-b64d-da0715a58253)"), skipping: [failed to "StartContainer" for "helm" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\"", failed to "StartContainer" for "proxy" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\""]
E0220 13:05:07.264113 26 pod_workers.go:191] Error syncing pod ea230264-a67c-4829-a04f-e1cca0a26858 ("helm-operation-qdcft_cattle-system(ea230264-a67c-4829-a04f-e1cca0a26858)"), skipping: [failed to "StartContainer" for "helm" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\"", failed to "StartContainer" for "proxy" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\""]
E0220 13:05:07.264202 26 pod_workers.go:191] Error syncing pod e3db70fb-dc2e-4b8e-b70c-58c77fce33c3 ("helm-operation-6dx7j_cattle-system(e3db70fb-dc2e-4b8e-b70c-58c77fce33c3)"), skipping: [failed to "StartContainer" for "helm" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\"", failed to "StartContainer" for "proxy" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\""]
E0220 13:05:09.262982 26 pod_workers.go:191] Error syncing pod f7002f13-3f4a-477d-b38e-b31a86854c8f ("helm-operation-d6b4f_cattle-system(f7002f13-3f4a-477d-b38e-b31a86854c8f)"), skipping: [failed to "StartContainer" for "helm" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\"", failed to "StartContainer" for "proxy" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\""]
E0220 13:05:12.941958 26 proxier.go:833] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 3: iptables v1.8.3 (legacy): can't initialize iptables table `filter': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
I0220 13:05:12.941981 26 proxier.go:825] Sync failed; retrying in 30s
If it uses https://registry-1.docker.io/v2/rancher/shell/manifests/v0.1.6 by default, how can I change that and point it at the remote repository in my Artifactory?
Why is it reporting that something is wrong with iptables?
Thanks in advance,
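Regarding the registry question above: k3s reads /etc/rancher/k3s/registries.yaml, and a mirror entry for docker.io can redirect pulls to another endpoint, which is the usual way to wire in an Artifactory remote repository. A sketch with a placeholder hostname:
cat > /etc/rancher/k3s/registries.yaml <<'EOF'
mirrors:
  docker.io:
    endpoint:
      - "https://artifactory.example.com"
EOF
systemctl restart k3s   # containerd only reloads registry configuration on restart (use k3s-agent on workers)
As for the iptables warnings, they look like a separate issue: the mangle and filter tables cannot be initialized, which usually points to missing iptables kernel modules or a legacy/nft mismatch on the node, not to the image pull failure.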