creamy-pencil-82913
08/30/2022, 7:17 AM
cuddly-egg-57762
08/30/2022, 9:34 AM
k3s[2515485]: time="2022-08-30T09:12:03Z" level=error msg="Failed to process config: failed to process /var/lib/rancher/k3s/server/manifests/cilium.yaml: yaml: line 14: could not find expected ':'"
even though everything looks fine to me:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: cilium
  namespace: kube-system
spec:
  bootstrap: True
  chartContent: <b64 encoded file>
  targetNamespace: kube-system
  valuesContent: |-
    operator:
      replicas: 2
      image:
        useDigest: false
    tunnel: disabled
    autoDirectNodeRoutes: true
    kubeProxyReplacement: strict
    loadBalancer:
      mode: dsr
    k8sServiceHost: 10.130.42.248
    k8sServicePort: 6443
    nativeRoutingCIDR: 10.0.0.0/16
    image:
      useDigest: false
      pullPolicy: IfNotPresent
To generate the content of spec.chartContent I run "base64 cilium-x.y.z.tgz" and paste the result into it. Am I doing something wrong? Or am I missing something?
dazzling-appointment-98003
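A likely culprit here: GNU coreutils base64 wraps its output at 76 columns by default, so the pasted chartContent value spans many lines and breaks the manifest's indentation, which would produce exactly a "could not find expected ':'" parse error. A minimal sketch, assuming GNU base64 and a stand-in file in place of cilium-x.y.z.tgz:

```shell
# base64 wraps at 76 chars by default; -w0 disables wrapping so the
# output is a single line suitable for a YAML scalar like chartContent:
printf 'stand-in chart payload' > chart.tgz   # placeholder for the real chart tarball
base64 -w0 chart.tgz > chart.b64
# a single-line encoding contains at most one trailing newline
wc -l < chart.b64
```

(BSD/macOS base64 does not wrap by default, so there the plain command already yields one line.)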
08/30/2022, 10:05 AM
straight-businessperson-27680
08/30/2022, 6:17 PM
clever-art-93319
08/31/2022, 7:41 AM
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox image "rancher/mirrored-pause:3.1": failed to pull image "rancher/mirrored-pause:3.1": failed to pull and unpack image "docker.io/rancher/mirrored-pause:3.1": failed to resolve reference "docker.io/rancher/mirrored-pause:3.1": failed to do request: Head "https://registry-1.docker.io/v2/rancher/mirrored-pause/manifests/3.1": net/http: TLS handshake timeout
glamorous-flag-56432
08/31/2022, 11:09 AM
--cluster-init
etc, but not sure what is required for external etcd
brainy-electrician-41196
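For context, `--cluster-init` starts the embedded etcd; an external etcd cluster is instead pointed at with the `datastore-endpoint` family of options, which replace `--cluster-init` entirely. A hedged sketch of a server config (endpoint addresses and certificate paths are illustrative):

```yaml
# /etc/rancher/k3s/config.yaml on each server node:
# external etcd instead of the embedded (--cluster-init) datastore
datastore-endpoint: "https://etcd-1:2379,https://etcd-2:2379,https://etcd-3:2379"
datastore-cafile: /etc/ssl/etcd/ca.crt
datastore-certfile: /etc/ssl/etcd/client.crt
datastore-keyfile: /etc/ssl/etcd/client.key
```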
08/31/2022, 7:07 PM
jolly-waitress-71272
08/31/2022, 9:12 PM
refined-magician-25478
08/31/2022, 9:50 PM
refined-toddler-64572
09/01/2022, 2:43 PM
1.24.4+k3s1
and I've been monitoring the logs, seeing stuff I haven't noticed before:
refined-toddler-64572
09/01/2022, 2:43 PM
k3s02 systemd[1]: Started Lightweight Kubernetes.
k3s02 k3s[567197]: time="2022-09-01T10:27:00-04:00" level=info msg="Tunnel server egress proxy waiting for runtime core to become available"
k3s02 k3s[567197]: time="2022-09-01T10:27:00-04:00" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: <https://127.0.0.1:6443/v1-k3s/readyz>: 500 Internal Server Error"
k3s02 k3s[567197]: Flag --cloud-provider has been deprecated, will be removed in 1.24 or later, in favor of removing cloud provider code from Kubelet.
k3s02 k3s[567197]: Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
k3s02 k3s[567197]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI
k3s02 k3s[567197]: I0901 10:27:00.937171 567197 server.go:192] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime
melodic-hamburger-23329
09/02/2022, 12:35 AM
chilly-telephone-51989
09/02/2022, 1:59 AM
curl -sfL https://get.k3s.io | sh -s - server
but i did not get the token anywhere in /etc/rancher/k3s
the only file that is there is k3s.yaml
where do I get the token?
chilly-telephone-51989
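For anyone searching later: the join token is not written under /etc/rancher/k3s (which only holds k3s.yaml); per the k3s docs it is generated in the server's data directory. A minimal sketch:

```shell
# the server writes the agent join token here, not under /etc/rancher/k3s
TOKEN_PATH=/var/lib/rancher/k3s/server/node-token
# on the server: sudo cat "$TOKEN_PATH"
# on an agent it is then passed as K3S_TOKEN, e.g.:
#   curl -sfL https://get.k3s.io | K3S_URL=https://<server>:6443 K3S_TOKEN=<token> sh -
echo "$TOKEN_PATH"
```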
09/02/2022, 2:10 AM
chilly-telephone-51989
09/02/2022, 2:14 AM
careful-optician-75900
09/02/2022, 9:45 AM
annotations:
  field.cattle.io/projectId: ""
The local copy of the SSL certificate is being updated on the classic load balancer every 1 min. How can I troubleshoot this?
Nginx-ingress logs:
8 controller.go:177] Configuration changes detected, backend reload required.
8 backend_ssl.go:189] Updating local copy of SSL certificate "cattle-system/tls-rancher-ingress" with missing intermediate CA certs
I0830 05:07:32.859819 8 controller.go:195] Backend successfully reloaded.
Any ideas why the SSL certificate is re-uploaded every 1 min? Many thanks
able-mechanic-45652
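The "missing intermediate CA certs" message suggests that tls.crt in the secret contains only the leaf certificate, so the controller keeps rebuilding its local copy on every sync. One hedged way to check (secret and namespace names taken from the log above; the PEM-counting technique is demonstrated on a stand-in file):

```shell
# On the cluster, count the certificates bundled in the secret's tls.crt:
#   kubectl -n cattle-system get secret tls-rancher-ingress \
#     -o jsonpath='{.data.tls\.crt}' | base64 -d | grep -c 'BEGIN CERTIFICATE'
# A count of 1 means no intermediate is bundled. Demonstration on a stand-in
# PEM that does include leaf + intermediate:
printf -- '-----BEGIN CERTIFICATE-----\nleaf\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nintermediate\n-----END CERTIFICATE-----\n' > chain.pem
CERT_COUNT=$(grep -c 'BEGIN CERTIFICATE' chain.pem)
echo "$CERT_COUNT"
```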
09/02/2022, 9:59 AM
clever-air-65544
09/02/2022, 5:10 PM
high-controller-26526
09/03/2022, 12:28 PM
KUBECONFIG=k3s.yaml kubectl get nodes
NAME STATUS ROLES AGE VERSION
0-server Ready control-plane,master 2y43d v1.24.4+k3s1
1-agent Ready <none> 2y43d v1.24.4+k3s1
2-agent Ready <none> 2y43d v1.24.4+k3s1
3-agent Ready <none> 2y43d v1.24.4+k3s1
kind-nightfall-56861
09/03/2022, 10:34 PM
prehistoric-diamond-4224
09/04/2022, 9:00 PM
Ingress
resources? I am aware that there is now the new IngressRoute
, but if I were to upgrade now, would v2 be compatible with all the old Ingress resources already present in the cluster?
stale-orange-90901
09/06/2022, 6:27 PM
calm-france-2744
09/07/2022, 12:01 AM
numerous-zoo-73399
09/07/2022, 9:37 AM
control-plane,etcd,master
) and I am trying to update them. I do not use an upgrade plan, as I need additional operations, but the flow is the same.
I replace the k3s binary and kill the process.
It seems to work on the first node, but after that the 2 others become NotReady
, and I do not see (or may have missed) the logic regarding the order of doing it in the k3s-upgrade
component.
Moreover, are there any limitations on k3s version compatibility? Meaning if I have one node at version v1.23.8+k3s1
and 2 others still at v1.21.4+k3s1
- can that be a limitation?
Thanks in advance 🙏 :rancher_employee:
careful-optician-75900
09/07/2022, 10:02 AM
red-musician-8168
09/08/2022, 11:45 PM
stale-dinner-99388
09/09/2022, 12:59 PM
helm upgrade --install rancher rancher-latest/rancher --namespace cattle-system --set hostname=url --set ingress.tls.source="letsEncrypt" --set bootstrapPassword=abcdxyz --set letsEncrypt.email="email" --set letsEncrypt.environment="production"
clever-air-65544
09/09/2022, 6:39 PM
stocky-sundown-51677
09/10/2022, 9:17 AM
chilly-telephone-51989
09/11/2022, 1:11 AM
k3s-agent.service - Lightweight Kubernetes
Loaded: loaded (/etc/systemd/system/k3s-agent.service; enabled; vendor preset: enabled)
Active: activating (start) since Sun 2022-09-11 00:57:46 UTC; 2min 17s ago
Docs: <https://k3s.io>
Process: 2842576 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited, status=0/SUCCESS)
Process: 2842578 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
Process: 2842579 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 2842580 (k3s-agent)
Tasks: 9
Memory: 15.5M
CPU: 146ms
CGroup: /system.slice/k3s-agent.service
└─2842580 "/usr/local/bin/k3s agent"
Sep 11 00:57:46 ip-172-31-41-97 sh[2842576]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Sep 11 00:57:46 ip-172-31-41-97 sh[2842577]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory
Sep 11 00:57:46 ip-172-31-41-97 k3s[2842580]: time="2022-09-11T00:57:46Z" level=info msg="Starting k3s agent v1.24.4+k3s1 (c3f830e9)"
Sep 11 00:57:46 ip-172-31-41-97 k3s[2842580]: time="2022-09-11T00:57:46Z" level=info msg="Running load balancer k3s-agent-load-balancer 127.0.0.1:6444 -> [3.128.3.142:6443]"
Sep 11 00:58:06 ip-172-31-41-97 k3s[2842580]: time="2022-09-11T00:58:06Z" level=error msg="failed to get CA certs: Get \"<https://127.0.0.1:6444/cacerts>\": context deadline >
Sep 11 00:58:28 ip-172-31-41-97 k3s[2842580]: time="2022-09-11T00:58:28Z" level=error msg="failed to get CA certs: Get \"<https://127.0.0.1:6444/cacerts>\": context deadline >
Sep 11 00:58:50 ip-172-31-41-97 k3s[2842580]: time="2022-09-11T00:58:50Z" level=error msg="failed to get CA certs: Get \"<https://127.0.0.1:6444/cacerts>\": context deadline >
Sep 11 00:59:12 ip-172-31-41-97 k3s[2842580]: time="2022-09-11T00:59:12Z" level=error msg="failed to get CA certs: Get \"<https://127.0.0.1:6444/cacerts>\": context deadline >
Sep 11 00:59:34 ip-172-31-41-97 k3s[2842580]: time="2022-09-11T00:59:34Z" level=error msg="failed to get CA certs: Get \"<https://127.0.0.1:6444/cacerts>\": context deadline >
Sep 11 00:59:56 ip-172-31-41-97 k3s[2842580]: time="2022-09-11T00:59:56Z" level=error msg="failed to get CA certs: Get \"<https://127.0.0.1:6444/cacerts>\": context deadline >
i tried using curl but it won't work.
creamy-pencil-82913
09/11/2022, 1:23 AM
chilly-telephone-51989
09/11/2022, 1:24 AM
$ curl http://172.31.41.97:6444/cacerts
curl: (7) Failed to connect to 172.31.41.97 port 6444 after 0 ms: Connection refused
$ curl http://127.0.0.1:6444/cacerts
curl: (56) Recv failure: Connection reset by peer
creamy-pencil-82913
09/11/2022, 1:26 AM
chilly-telephone-51989
09/11/2022, 1:27 AM
curl -sfL https://get.k3s.io | K3S_URL=https://172.31.46.55:6443 K3S_TOKEN=<token> sh -
creamy-pencil-82913
09/11/2022, 1:29 AM
chilly-telephone-51989
09/11/2022, 1:31 AM
$ curl https://172.31.46.55:6443/cacerts
where do I need to look? Is there some configuration file I need to change on the server?
k3s.service - Lightweight Kubernetes
Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2022-09-10 19:09:11 UTC; 6h ago
Docs: <https://k3s.io>
Process: 169774 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited, status=0/SUCCESS)
Process: 169776 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
Process: 169777 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 169778 (k3s-server)
Tasks: 108
Memory: 1.1G
CPU: 22min 1.441s
CGroup: /system.slice/k3s.service
├─169778 "/usr/local/bin/k3s server"
├─169883 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib>
├─171029 /var/lib/rancher/k3s/data/577968fa3d58539cc4265245941b7be688833e6bf5ad7869fa2afe02f15f1cd2/bin/containerd-shim-runc-v2 -namespace <http://k8s.io|k8s.io> -id 7e75a1f48>
├─171071 /var/lib/rancher/k3s/data/577968fa3d58539cc4265245941b7be688833e6bf5ad7869fa2afe02f15f1cd2/bin/containerd-shim-runc-v2 -namespace <http://k8s.io|k8s.io> -id e19ef81c2>
├─171214 /var/lib/rancher/k3s/data/577968fa3d58539cc4265245941b7be688833e6bf5ad7869fa2afe02f15f1cd2/bin/containerd-shim-runc-v2 -namespace <http://k8s.io|k8s.io> -id aa12f560d>
├─172321 /var/lib/rancher/k3s/data/577968fa3d58539cc4265245941b7be688833e6bf5ad7869fa2afe02f15f1cd2/bin/containerd-shim-runc-v2 -namespace <http://k8s.io|k8s.io> -id 78657bc84>
└─172352 /var/lib/rancher/k3s/data/577968fa3d58539cc4265245941b7be688833e6bf5ad7869fa2afe02f15f1cd2/bin/containerd-shim-runc-v2 -namespace <http://k8s.io|k8s.io> -id 6dbbbaa64>
Sep 10 19:10:11 ip-172-31-46-55 k3s[169778]: E0910 19:10:11.590025 169778 remote_runtime.go:604] "ContainerStatus from runtime service failed" err="rpc error: code = NotFo>
Sep 10 19:10:11 ip-172-31-46-55 k3s[169778]: I0910 19:10:11.590090 169778 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="0c2cbd5a84703>
Sep 10 19:10:11 ip-172-31-46-55 k3s[169778]: E0910 19:10:11.590892 169778 remote_runtime.go:604] "ContainerStatus from runtime service failed" err="rpc error: code = NotFo>
Sep 10 19:10:11 ip-172-31-46-55 k3s[169778]: I0910 19:10:11.591111 169778 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="72dcddebde696>
Sep 10 19:10:11 ip-172-31-46-55 k3s[169778]: E0910 19:10:11.591894 169778 remote_runtime.go:604] "ContainerStatus from runtime service failed" err="rpc error: code = NotFo>
Sep 10 19:10:11 ip-172-31-46-55 k3s[169778]: I0910 19:10:11.591975 169778 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="3d3b976e33d45>
Sep 10 19:10:11 ip-172-31-46-55 k3s[169778]: E0910 19:10:11.592433 169778 remote_runtime.go:604] "ContainerStatus from runtime service failed" err="rpc error: code = NotFo>
Sep 10 19:10:11 ip-172-31-46-55 k3s[169778]: I0910 19:10:11.592465 169778 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="bc744a234a1ae>
Sep 10 21:25:35 ip-172-31-46-55 k3s[169778]: time="2022-09-10T21:25:35Z" level=warning msg="Proxy error: write failed: write tcp 127.0.0.1:6443->127.0.0.1:54670: write: con>
Sep 10 23:40:26 ip-172-31-46-55 k3s[169778]: time="2022-09-10T23:40:26Z" level=warning msg="Proxy error: write failed: write tcp 127.0.0.1:6443->127.0.0.1:49300: write: bro>
creamy-pencil-82913
09/11/2022, 3:46 AM
chilly-telephone-51989
09/11/2022, 3:51 AM
creamy-pencil-82913
09/11/2022, 5:27 AM
chilly-telephone-51989
09/11/2022, 5:28 AM
creamy-pencil-82913
09/11/2022, 7:13 AM
chilly-telephone-51989
09/13/2022, 6:23 PM
$ sudo systemctl status k3s-agent
● k3s-agent.service - Lightweight Kubernetes
Loaded: loaded (/etc/systemd/system/k3s-agent.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Tue 2022-09-13 18:18:54 UTC; 1s ago
Docs: <https://k3s.io>
Process: 1368106 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited, status=0/SUCCESS)
Process: 1368108 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
Process: 1368109 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Process: 1368110 ExecStart=/usr/local/bin/k3s agent (code=exited, status=1/FAILURE)
Main PID: 1368110 (code=exited, status=1/FAILURE)
CPU: 557ms
Sep 13 18:18:54 ip-172-31-41-97 systemd[1]: k3s-agent.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 18:18:54 ip-172-31-41-97 systemd[1]: k3s-agent.service: Failed with result 'exit-code'.
$ journalctl -xe
░░ The job identifier is 19227.
Sep 13 18:19:21 ip-172-31-41-97 k3s[1368328]: I0913 18:19:21.881020 1368328 server.go:395] "Kubelet version" kubeletVersion="v1.24.4+k3s1"
Sep 13 18:19:21 ip-172-31-41-97 k3s[1368328]: I0913 18:19:21.881053 1368328 server.go:397] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 18:19:21 ip-172-31-41-97 k3s[1368328]: W0913 18:19:21.883573 1368328 manager.go:159] Cannot detect current cgroup on cgroup v2
Sep 13 18:19:21 ip-172-31-41-97 k3s[1368328]: I0913 18:19:21.883986 1368328 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt"
Sep 13 18:19:21 ip-172-31-41-97 k3s[1368328]: I0913 18:19:21.886066 1368328 server.go:644] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 18:19:21 ip-172-31-41-97 k3s[1368328]: I0913 18:19:21.886447 1368328 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 18:19:21 ip-172-31-41-97 k3s[1368328]: I0913 18:19:21.886636 1368328 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroups>
Sep 13 18:19:21 ip-172-31-41-97 k3s[1368328]: I0913 18:19:21.886834 1368328 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Sep 13 18:19:21 ip-172-31-41-97 k3s[1368328]: I0913 18:19:21.886961 1368328 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true
Sep 13 18:19:21 ip-172-31-41-97 k3s[1368328]: I0913 18:19:21.887091 1368328 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 18:19:21 ip-172-31-41-97 k3s[1368328]: I0913 18:19:21.892131 1368328 kubelet.go:376] "Attempting to sync node with API server"
Sep 13 18:19:21 ip-172-31-41-97 k3s[1368328]: I0913 18:19:21.892400 1368328 kubelet.go:267] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"
Sep 13 18:19:21 ip-172-31-41-97 k3s[1368328]: I0913 18:19:21.892597 1368328 kubelet.go:278] "Adding apiserver pod source"
Sep 13 18:19:21 ip-172-31-41-97 k3s[1368328]: I0913 18:19:21.892796 1368328 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 18:19:21 ip-172-31-41-97 k3s[1368328]: I0913 18:19:21.893836 1368328 kuberuntime_manager.go:239] "Container runtime initialized" containerRuntime="containerd" version="v1.6.6-k3s1" apiVersion="v1"
Sep 13 18:19:21 ip-172-31-41-97 systemd[1]: run-rc488dc88be974c37af8525a45620ac98.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: <http://www.ubuntu.com/support>
░░
░░ The unit run-rc488dc88be974c37af8525a45620ac98.scope has successfully entered the 'dead' state.
Sep 13 18:19:21 ip-172-31-41-97 k3s[1368328]: I0913 18:19:21.894515 1368328 server.go:1177] "Started kubelet"
Sep 13 18:19:21 ip-172-31-41-97 k3s[1368328]: I0913 18:19:21.895424 1368328 server.go:150] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 18:19:21 ip-172-31-41-97 k3s[1368328]: I0913 18:19:21.896198 1368328 server.go:410] "Adding debug handlers to kubelet server"
Sep 13 18:19:21 ip-172-31-41-97 k3s[1368328]: E0913 18:19:21.896811 1368328 server.go:166] "Failed to listen and serve" err="listen tcp 0.0.0.0:10250: bind: address already in use"
Sep 13 18:19:21 ip-172-31-41-97 microk8s.daemon-kubelite[1249831]: E0913 18:19:21.934045 1249831 kubelet.go:2424] "Error getting node" err="node \"ip-172-31-41-97\" not found"
Sep 13 18:19:21 ip-172-31-41-97 systemd[1]: k3s-agent.service: Main process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: <http://www.ubuntu.com/support>
░░
░░ An ExecStart= process belonging to unit k3s-agent.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 1.
Sep 13 18:19:21 ip-172-31-41-97 systemd[1]: k3s-agent.service: Failed with result 'exit-code'.
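The decisive line in this journal is "listen tcp 0.0.0.0:10250: bind: address already in use", together with microk8s.daemon-kubelite logging on the same host: another kubelet already owns the kubelet port, so the k3s agent's kubelet cannot bind and exits. A quick way to see who holds it:

```shell
# show any listener on the kubelet port (10250); if microk8s.daemon-kubelite
# owns it, the k3s agent's kubelet cannot bind and exits with status 1
ss -tlnp 2>/dev/null | grep ':10250' || echo "port 10250 is free"
```

Stopping the other kubelet (for microk8s, something like `sudo snap stop microk8s`) before restarting k3s-agent would be the likely fix here.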
$ sudo systemctl status k3s-agent
● k3s-agent.service - Lightweight Kubernetes
Loaded: loaded (/etc/systemd/system/k3s-agent.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2022-09-13 18:25:02 UTC; 35s ago
Docs: <https://k3s.io>
Main PID: 1371914 (k3s-agent)
Tasks: 43
Memory: 80.0M
CPU: 2.583s
CGroup: /system.slice/k3s-agent.service
├─1371914 "/usr/local/bin/k3s agent"
├─1371928 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
└─1372208 /var/lib/rancher/k3s/data/577968fa3d58539cc4265245941b7be688833e6bf5ad7869fa2afe02f15f1cd2/bin/containerd-shim-runc-v2 -namespace <http://k8s.io|k8s.io> -id 206042638c2e1d9b44189a36d4de2d2adb726ea6e0bd41>
Sep 13 18:25:03 ip-172-31-41-97 k3s[1371914]: I0913 18:25:03.056267 1371914 vxlan.go:138] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
Sep 13 18:25:03 ip-172-31-41-97 k3s[1371914]: I0913 18:25:03.068475 1371914 kube.go:357] Skip setting NodeNetworkUnavailable
Sep 13 18:25:03 ip-172-31-41-97 k3s[1371914]: I0913 18:25:03.073387 1371914 apiserver.go:52] "Watching apiserver"
Sep 13 18:25:03 ip-172-31-41-97 k3s[1371914]: time="2022-09-13T18:25:03Z" level=info msg="Wrote flannel subnet file to /run/flannel/subnet.env"
Sep 13 18:25:03 ip-172-31-41-97 k3s[1371914]: time="2022-09-13T18:25:03Z" level=info msg="Running flannel backend."
Sep 13 18:25:03 ip-172-31-41-97 k3s[1371914]: I0913 18:25:03.076088 1371914 vxlan_network.go:61] watching for new subnet leases
Sep 13 18:25:03 ip-172-31-41-97 k3s[1371914]: I0913 18:25:03.080718 1371914 topology_manager.go:200] "Topology Admit Handler"
Sep 13 18:25:03 ip-172-31-41-97 k3s[1371914]: I0913 18:25:03.085814 1371914 reconciler.go:159] "Reconciler: start to sync state"
Sep 13 18:25:03 ip-172-31-41-97 k3s[1371914]: I0913 18:25:03.092278 1371914 iptables.go:177] bootstrap done
Sep 13 18:25:03 ip-172-31-41-97 k3s[1371914]: I0913 18:25:03.096937 1371914 iptables.go:177] bootstrap done
Sep 13 18:51:44 ip-172-31-34-0 k3s[11609]: time="2022-09-13T18:51:44Z" level=error msg="failed to get CA certs: Get \"<https://127.0.0.1:6444/cacerts>\": read tcp 127.0.0.1:51848->127.0.0.1:6444: read: connection>
Sep 13 18:51:56 ip-172-31-34-0 k3s[11609]: time="2022-09-13T18:51:56Z" level=error msg="failed to get CA certs: Get \"<https://127.0.0.1:6444/cacerts>\": read tcp 127.0.0.1:46952->127.0.0.1:6444: read: connection>
Sep 13 18:52:08 ip-172-31-34-0 k3s[11609]: time="2022-09-13T18:52:08Z" level=error msg="failed to get CA certs: Get \"<https://127.0.0.1:6444/cacerts>\": read tcp 127.0.0.1:47310->127.0.0.1:6444: read: connection>
Sep 13 18:52:20 ip-172-31-34-0 k3s[11609]: time="2022-09-13T18:52:20Z" level=error msg="failed to get CA certs: Get \"<https://127.0.0.1:6444/cacerts>\": read tcp 127.0.0.1:48554->127.0.0.1:6444: read: connection>
Sep 13 18:52:32 ip-172-31-34-0 k3s[11609]: time="2022-09-13T18:52:32Z" level=error msg="failed to get CA certs: Get \"<https://127.0.0.1:6444/cacerts>\": read tcp 127.0.0.1:34618->127.0.0.1:6444: read: connection>
Sep 13 18:52:45 ip-172-31-34-0 k3s[11609]: time="2022-09-13T18:52:45Z" level=error msg="failed to get CA certs: Get \"<https://127.0.0.1:6444/cacerts>\": read tcp 127.0.0.1:43442->127.0.0.1:6444: read: connection>
Sep 13 18:52:57 ip-172-31-34-0 k3s[11609]: time="2022-09-13T18:52:57Z" level=error msg="failed to get CA certs: Get \"<https://127.0.0.1:6444/cacerts>\": read tcp 127.0.0.1:33678->127.0.0.1:6444: read: connection>
Sep 13 18:53:09 ip-172-31-34-0 k3s[11609]: time="2022-09-13T18:53:09Z" level=error msg="failed to get CA certs: Get \"<https://127.0.0.1:6444/cacerts>\": read tcp 127.0.0.1:47584->127.0.0.1:6444: read: connection>
Sep 13 18:53:21 ip-172-31-34-0 k3s[11609]: time="2022-09-13T18:53:21Z" level=error msg="failed to get CA certs: Get \"<https://127.0.0.1:6444/cacerts>\": read tcp 127.0.0.1:34560->127.0.0.1:6444: read: connection>
Sep 13 18:53:33 ip-172-31-34-0 k3s[11609]: time="2022-09-13T18:53:33Z" level=error msg="failed to get CA certs: Get \"<https://127.0.0.1:6444/cacerts>\": read tcp 127.0.0.1:52804->127.0.0.1:6444: read: connection>
Sep 13 18:53:45 ip-172-31-34-0 k3s[11609]: time="2022-09-13T18:53:45Z" level=error msg="failed to get CA certs: Get \"<https://127.0.0.1:6444/cacerts>\": read tcp 127.0.0.1:38864->127.0.0.1:6444: read: connection>
Sep 13 18:53:57 ip-172-31-34-0 k3s[11609]: time="2022-09-13T18:53:57Z" level=error msg="failed to get CA certs: Get \"<https://127.0.0.1:6444/cacerts>\": read tcp 127.0.0.1:49322->127.0.0.1:6444: read: connection>
Sep 13 18:54:09 ip-172-31-34-0 k3s[11609]: time="2022-09-13T18:54:09Z" level=error msg="failed to get CA certs: Get \"<https://127.0.0.1:6444/cacerts>\": read tcp 127.0.0.1:49990->127.0.0.1:6444: read: connection>
Sep 13 18:54:21 ip-172-31-34-0 k3s[11609]: time="2022-09-13T18:54:21Z" level=error msg="failed to get CA certs: Get \"<https://127.0.0.1:6444/cacerts>\": read tcp 127.0.0.1:42560->127.0.0.1:6444: read: connection>
Sep 13 18:54:32 ip-172-31-34-0 k3s[11609]: time="2022-09-13T18:54:32Z" level=error msg="failed to get CA certs: Get \"<https://127.0.0.1:6444/cacerts>\": read tcp 127.0.0.1:43896->127.0.0.1:6444: read: connection>
Sep 13 18:54:44 ip-172-31-34-0 k3s[11609]: time="2022-09-13T18:54:44Z" level=error msg="failed to get CA certs: Get \"<https://127.0.0.1:6444/cacerts>\": read tcp 127.0.0.1:50946->127.0.0.1:6444: read: connection>
Sep 13 18:54:56 ip-172-31-34-0 k3s[11609]: time="2022-09-13T18:54:56Z" level=error msg="failed to get CA certs: Get \"<https://127.0.0.1:6444/cacerts>\": read tcp 127.0.0.1:52860->127.0.0.1:6444: read: connection>
from https://github.com/k3s-io/k3s/issues/2852 I tried the curl:
curl -vk https://127.0.0.1:6444/cacerts
* Trying 127.0.0.1:6444...
* Connected to 127.0.0.1 (127.0.0.1) port 6444 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: Connection reset by peer in connection to 127.0.0.1:6444
* Closing connection 0
* TLSv1.0 (OUT), TLS header, Unknown (21):
* TLSv1.3 (OUT), TLS alert, decode error (562):
curl: (35) OpenSSL SSL_connect: Connection reset by peer in connection to 127.0.0.1:6444
creamy-pencil-82913
09/13/2022, 7:04 PM
Sep 11 00:57:46 ip-172-31-41-97 k3s[2842580]: time="2022-09-11T00:57:46Z" level=info msg="Running load balancer k3s-agent-load-balancer 127.0.0.1:6444 -> [3.128.3.142:6443]"
chilly-telephone-51989
09/13/2022, 7:10 PM
~$ curl -vk https://3.128.3.142:6443
* Trying 3.128.3.142:6443...
* Connected to 3.128.3.142 (3.128.3.142) port 6443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS header, Finished (20):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.2 (OUT), TLS header, Finished (20):
creamy-pencil-82913
09/13/2022, 7:12 PM
chilly-telephone-51989
09/13/2022, 7:13 PM
5Gq9ygUwCgYIKoZIzj0EAwIDSQAwRgIhAJ6cFBI+o9tU8K2p0HhOH9Vh/d6l0p6N
hZByi9VeHIVKAiEAoYvANyaaZorxBrP3nSRtfJcI8yIAqrkXaRjWTNdH0jI=
-----END CERTIFICATE-----
* Connection #0 to host 3.128.3.142 left intact
should I paste all?
creamy-pencil-82913
09/13/2022, 7:24 PM
curl -vks https://3.128.3.142:6443/ping
and get a pong response
chilly-telephone-51989
09/13/2022, 7:26 PM
$ curl -vks https://3.128.3.142:6443/ping
* Trying 3.128.3.142:6443...
* Connected to 3.128.3.142 (3.128.3.142) port 6443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS header, Finished (20):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.2 (OUT), TLS header, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.3 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: O=k3s; CN=k3s
* start date: Sep 10 19:09:06 2022 GMT
* expire date: Sep 13 19:09:54 2023 GMT
* issuer: CN=k3s-server-ca@1662836946
* SSL certificate verify result: self-signed certificate in certificate chain (19), continuing anyway.
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* Using Stream ID: 1 (easy handle 0x55b3fee32010)
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
> GET /ping HTTP/2
> Host: 3.128.3.142:6443
> user-agent: curl/7.81.0
> accept: */*
>
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* Connection state changed (MAX_CONCURRENT_STREAMS == 250)!
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
< HTTP/2 200
< content-type: text/plain
< content-length: 4
< date: Tue, 13 Sep 2022 19:26:06 GMT
<
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* Connection #0 to host 3.128.3.142 left intact
pong
curl -vk https://3.128.3.142:6443/ping | more
* Trying 3.128.3.142:6443...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to 3.128.3.142 (3.128.3.142) port 6443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
} [5 bytes data]
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
} [512 bytes data]
* TLSv1.2 (IN), TLS header, Certificate Status (22):
{ [5 bytes data]
* TLSv1.3 (IN), TLS handshake, Server hello (2):
{ [122 bytes data]
* TLSv1.2 (IN), TLS header, Finished (20):
{ [5 bytes data]
* TLSv1.2 (IN), TLS header, Supplemental data (23):
{ [5 bytes data]
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
{ [15 bytes data]
* TLSv1.2 (IN), TLS header, Supplemental data (23):
{ [5 bytes data]
* TLSv1.3 (IN), TLS handshake, Request CERT (13):
{ [45 bytes data]
* TLSv1.2 (IN), TLS header, Supplemental data (23):
{ [5 bytes data]
* TLSv1.3 (IN), TLS handshake, Certificate (11):
{ [963 bytes data]
* TLSv1.2 (IN), TLS header, Supplemental data (23):
{ [5 bytes data]
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
{ [79 bytes data]
* TLSv1.2 (IN), TLS header, Supplemental data (23):
{ [5 bytes data]
* TLSv1.3 (IN), TLS handshake, Finished (20):
{ [36 bytes data]
* TLSv1.2 (OUT), TLS header, Finished (20):
} [5 bytes data]
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
} [1 bytes data]
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
} [5 bytes data]
* TLSv1.3 (OUT), TLS handshake, Certificate (11):
} [8 bytes data]
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
} [5 bytes data]
* TLSv1.3 (OUT), TLS handshake, Finished (20):
} [36 bytes data]
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: O=k3s; CN=k3s
* start date: Sep 10 19:09:06 2022 GMT
* expire date: Sep 13 19:09:54 2023 GMT
* issuer: CN=k3s-server-ca@1662836946
* SSL certificate verify result: self-signed certificate in certificate chain (19), continuing anyway.
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
} [5 bytes data]
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
} [5 bytes data]
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
} [5 bytes data]
* Using Stream ID: 1 (easy handle 0x5632688f4550)
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
} [5 bytes data]
> GET /ping HTTP/2
> Host: 3.128.3.142:6443
> user-agent: curl/7.81.0
> accept: */*
>
* TLSv1.2 (IN), TLS header, Supplemental data (23):
{ [5 bytes data]
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
{ [130 bytes data]
* TLSv1.2 (IN), TLS header, Supplemental data (23):
{ [5 bytes data]
* Connection state changed (MAX_CONCURRENT_STREAMS == 250)!
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
} [5 bytes data]
* TLSv1.2 (IN), TLS header, Supplemental data (23):
{ [5 bytes data]
* TLSv1.2 (IN), TLS header, Supplemental data (23):
{ [5 bytes data]
* TLSv1.2 (IN), TLS header, Supplemental data (23):
{ [5 bytes data]
< HTTP/2 200
< content-type: text/plain
< content-length: 4
< date: Tue, 13 Sep 2022 19:32:04 GMT
<
* TLSv1.2 (IN), TLS header, Supplemental data (23):
{ [5 bytes data]
100 4 100 4 0 0 504 0 --:--:-- --:--:-- --:--:-- 571
* Connection #0 to host 3.128.3.142 left intact
pong
creamy-pencil-82913
09/13/2022, 7:34 PM
chilly-telephone-51989
09/13/2022, 7:35 PM
creamy-pencil-82913
09/13/2022, 7:40 PM
journalctl -u k3s-agent --no-pager > k3s-agent.log
chilly-telephone-51989
09/14/2022, 2:14 AM
$ sudo netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 541/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 870/sshd: /usr/sbin
tcp 0 0 127.0.0.1:6444 0.0.0.0:* LISTEN 12440/k3s agent
tcp6 0 0 :::22 :::* LISTEN 870/sshd: /usr/sbin
udp 0 0 0.0.0.0:40987 0.0.0.0:* 12440/k3s agent
udp 0 0 127.0.0.53:53 0.0.0.0:* 541/systemd-resolve
udp 0 0 172.31.34.0:68 0.0.0.0:* 539/systemd-network
udp 0 0 127.0.0.1:323 0.0.0.0:* 717/chronyd
udp6 0 0 ::1:323 :::* 717/chronyd
$ curl https://127.0.0.1:6444/cacerts/ping
curl: (35) OpenSSL SSL_connect: Connection reset by peer in connection to 127.0.0.1:6444
curl -vk https://127.0.0.1:6444/cacerts
* Trying 127.0.0.1:6444...
* Connected to 127.0.0.1 (127.0.0.1) port 6444 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: Connection reset by peer in connection to 127.0.0.1:6444
* Closing connection 0
* TLSv1.0 (OUT), TLS header, Unknown (21):
* TLSv1.3 (OUT), TLS alert, decode error (562):
curl: (35) OpenSSL SSL_connect: Connection reset by peer in connection to 127.0.0.1:6444
The connection reset by peer error is gone now.
I got the latest agent log too, but the problem remains the same there; nothing has changed. Should I reinstall it or something?
UFW is already off (but after every boot with sudo reboot it gets re-enabled somehow, so I have to disable it manually).
I did restart the service with sudo systemctl restart k3s-agent, but I'm still getting the same error, though curl, as you can see, has started working for https://127.0.0.1:6444/cacerts
$ sudo systemctl status k3s-agent
● k3s-agent.service - Lightweight Kubernetes
Loaded: loaded (/etc/systemd/system/k3s-agent.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2022-09-14 11:51:30 UTC; 45s ago
Docs: https://k3s.io
Process: 17418 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited, status=0/SUCCESS)
Process: 17420 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
Process: 17421 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 17422 (k3s-agent)
Tasks: 42
Memory: 293.5M
CPU: 5.056s
CGroup: /system.slice/k3s-agent.service
├─17422 "/usr/local/bin/k3s agent"
├─17441 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
└─17810 /var/lib/rancher/k3s/data/577968fa3d58539cc4265245941b7be688833e6bf5ad7869fa2afe02f15f1cd2/bin/containerd-shim-runc-v2 -namespace k8s.io -id e5fa2f35fc2d8b44bcf5fcb92ae2e6644eeab37602306248>
Sep 14 11:51:31 ip-172-31-34-0 k3s[17422]: I0914 11:51:31.558658 17422 reconciler.go:159] "Reconciler: start to sync state"
Sep 14 11:51:31 ip-172-31-34-0 k3s[17422]: I0914 11:51:31.824386 17422 kube.go:128] Node controller sync successful
Sep 14 11:51:31 ip-172-31-34-0 k3s[17422]: I0914 11:51:31.824503 17422 vxlan.go:138] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
Sep 14 11:51:31 ip-172-31-34-0 k3s[17422]: I0914 11:51:31.846396 17422 kube.go:357] Skip setting NodeNetworkUnavailable
Sep 14 11:51:31 ip-172-31-34-0 k3s[17422]: time="2022-09-14T11:51:31Z" level=info msg="Wrote flannel subnet file to /run/flannel/subnet.env"
Sep 14 11:51:31 ip-172-31-34-0 k3s[17422]: time="2022-09-14T11:51:31Z" level=info msg="Running flannel backend."
Sep 14 11:51:31 ip-172-31-34-0 k3s[17422]: I0914 11:51:31.849661 17422 vxlan_network.go:61] watching for new subnet leases
Sep 14 11:51:31 ip-172-31-34-0 k3s[17422]: I0914 11:51:31.859232 17422 iptables.go:177] bootstrap done
Sep 14 11:51:31 ip-172-31-34-0 k3s[17422]: I0914 11:51:31.862995 17422 iptables.go:177] bootstrap done
Sep 14 11:51:41 ip-172-31-34-0 k3s[17422]: I0914 11:51:41.235876 17422 topology_manager.go:200] "Topology Admit Handler"
$ k get nodes
NAME STATUS ROLES AGE VERSION
ip-172-31-46-55 Ready control-plane,master 20m v1.24.4+k3s1
ip-172-31-34-0 Ready <none> 17m v1.24.4+k3s1
ip-172-31-41-97 Ready <none> 32s v1.24.4+k3s1
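[Editor's note] On the UFW point raised above (the firewall re-enabling itself after every sudo reboot): this usually happens because the ufw systemd unit is still enabled, so it re-applies UFW's saved "enabled" state at boot. A minimal sketch of making the disable persistent, assuming UFW is managed by the standard ufw.service unit (the default on Ubuntu):

```shell
# Turn UFW off for the current boot and clear its own "enabled" flag,
# which is what ufw.service re-applies at startup.
sudo ufw disable

# Stop the systemd unit now and prevent it from starting on future boots.
sudo systemctl disable --now ufw

# Verify: "disabled" here means the unit will not start on the next boot.
systemctl is-enabled ufw
```

Note that on a k3s node it is often preferable to add allow rules for the k3s ports (6443/tcp for the supervisor/API, 8472/udp for flannel VXLAN) rather than disabling the firewall outright; the sketch above only addresses the re-enabling symptom described in the thread.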