proud-ram-62490
04/19/2023, 2:55 PM
dazzling-action-9795
04/19/2023, 4:36 PM
Apr 19 16:28:55 ng-10-51-52-101.lab k3s[989]: time="2023-04-19T16:28:55Z" level=info msg="Waiting for etcd server to become available"
Apr 19 16:28:55 ng-10-51-52-101.lab k3s[989]: time="2023-04-19T16:28:55Z" level=info msg="Waiting for API server to become available"
Apr 19 16:28:57 ng-10-51-52-101.lab k3s[989]: time="2023-04-19T16:28:57Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: <https://127.0.0.1:6443/v1-k3s/readyz>: 500 Internal Server Error"
Apr 19 16:29:00 ng-10-51-52-101.lab k3s[989]: time="2023-04-19T16:29:00Z" level=info msg="Tunnel server egress proxy waiting for runtime core to become available"
Apr 19 16:29:02 ng-10-51-52-101.lab k3s[989]: time="2023-04-19T16:29:02Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: <https://127.0.0.1:6443/v1-k3s/readyz>: 500 Internal Server Error"
Apr 19 16:29:05 ng-10-51-52-101.lab k3s[989]: time="2023-04-19T16:29:05Z" level=info msg="Tunnel server egress proxy waiting for runtime core to become available"
Apr 19 16:29:07 ng-10-51-52-101.lab k3s[989]: time="2023-04-19T16:29:07Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: <https://127.0.0.1:6443/v1-k3s/readyz>: 500 Internal Server Error"
Apr 19 16:29:10 ng-10-51-52-101.lab k3s[989]: time="2023-04-19T16:29:10Z" level=info msg="Tunnel server egress proxy waiting for runtime core to become available"
Apr 19 16:29:10 ng-10-51-52-101.lab k3s[989]: {"level":"warn","ts":"2023-04-19T16:29:10.926Z","caller":"etcdserver/server.go:2065","msg":"failed to publish local member to cluster through raft","local-member-id":"7dff6a3be78a5ee3","local-member-attributes":"{Name:ng-10-51-52-101.lab-accc9f42 ClientURLs:[<https://10.51.52.101:2379>]}","request-path":"/0/members/7dff6a3be78a5ee3/attributes","publish-timeout":"15s","error":"etcdserver: request timed out"}
Apr 19 16:29:11 ng-10-51-52-101.lab k3s[989]: {"level":"warn","ts":"2023-04-19T16:29:11.746Z","logger":"etcd-client","caller":"v3@v3.5.3-k3s1/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"<etcd-endpoints://0xc0005428c0/127.0.0.1:2379>","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
Apr 19 16:29:11 ng-10-51-52-101.lab k3s[989]: time="2023-04-19T16:29:11Z" level=error msg="Failed to check local etcd status for learner management: context deadline exceeded"
Apr 19 16:29:12 ng-10-51-52-101.lab k3s[989]: time="2023-04-19T16:29:12Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: <https://127.0.0.1:6443/v1-k3s/readyz>: 500 Internal Server Error"
Apr 19 16:29:15 ng-10-51-52-101.lab k3s[989]: time="2023-04-19T16:29:15Z" level=info msg="Tunnel server egress proxy waiting for runtime core to become available"
Apr 19 16:29:17 ng-10-51-52-101.lab k3s[989]: time="2023-04-19T16:29:17Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: <https://127.0.0.1:6443/v1-k3s/readyz>: 500 Internal Server Error"
Apr 19 16:29:20 ng-10-51-52-101.lab k3s[989]: time="2023-04-19T16:29:20Z" level=info msg="Tunnel server egress proxy waiting for runtime core to become available"
Apr 19 16:29:22 ng-10-51-52-101.lab k3s[989]: time="2023-04-19T16:29:22Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: <https://127.0.0.1:6443/v1-k3s/readyz>: 500 Internal Server Error"
Apr 19 16:29:25 ng-10-51-52-101.lab k3s[989]: time="2023-04-19T16:29:25Z" level=info msg="Tunnel server egress proxy waiting for runtime core to become available"
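The repeated "request timed out" and "failed to publish local member to cluster through raft" warnings usually mean this etcd member cannot reach quorum with its peers, which is why the readyz endpoint keeps returning 500. A quick way to check from the affected node is to query the embedded etcd directly; this is only a sketch, assuming the default k3s certificate paths under /var/lib/rancher/k3s/server/tls/etcd/ and a separately installed etcdctl (k3s does not ship one):

# list the members this node knows about, then check local endpoint health
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt \
  --cert=/var/lib/rancher/k3s/server/tls/etcd/client.crt \
  --key=/var/lib/rancher/k3s/server/tls/etcd/client.key \
  member list -w table
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt \
  --cert=/var/lib/rancher/k3s/server/tls/etcd/client.crt \
  --key=/var/lib/rancher/k3s/server/tls/etcd/client.key \
  endpoint health

If endpoint health also times out, the problem is below k3s (peer connectivity on 2380, a lost quorum majority, or disk latency) rather than in k3s itself.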
clever-mechanic-71254
04/19/2023, 5:20 PM
few-park-83463
04/19/2023, 7:02 PM
few-park-83463
04/19/2023, 7:03 PM
few-park-83463
04/19/2023, 7:03 PM
few-night-45972
04/19/2023, 7:09 PM
DEBU[0002] Waiting for node k3d-dc3-server-0 to get ready (Log: 'k3s is up and running')
When I look at the docker logs for the server node, it's crapping out on:
time="2023-04-18T18:11:05Z" level=info msg="Run: k3s kubectl"
time="2023-04-18T18:11:05Z" level=fatal msg="failed to find cpu cgroup (v2)"
This only started happening after a Windows update. Has anyone seen this issue? I tried a WSL2 kernel hack and got it working:
[wsl2]
kernelCommandLine = cgroup_no_v1=all cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
But now clusters that were working fine before keep crashing pods with OOM errors that never happened before. So this workaround doesn't seem great.
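If the OOM kills only started once the kernel was switched to pure cgroup v2, one possible explanation is that the memory controller is now actually enforcing limits that were previously ignored, and the WSL2 VM's default memory cap becomes the hard ceiling for everything in the cluster. A sketch of a .wslconfig that keeps the same cgroup boot options but gives the VM explicit headroom; the memory and swap values are placeholders, size them to your machine:

[wsl2]
# placeholder sizes - WSL2 otherwise caps the VM at a fraction of host RAM
memory=8GB
swap=2GB
kernelCommandLine = cgroup_no_v1=all cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory

It is also worth checking whether the pods being killed have memory limits that simply were not enforced before; kubectl describe pod on an OOMKilled pod shows the limit it was killed against.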
full-cat-32507
04/19/2023, 9:03 PM
rhythmic-lion-63405
04/20/2023, 6:36 AM
rhythmic-lion-63405
04/20/2023, 7:01 AM
ambitious-furniture-5481
04/20/2023, 9:34 AM
lemon-midnight-46796
04/20/2023, 1:05 PM
glamorous-lighter-5580
04/20/2023, 3:58 PM
few-park-83463
04/20/2023, 7:33 PM
few-park-83463
04/20/2023, 7:34 PM
few-park-83463
04/20/2023, 7:34 PM
astonishing-hydrogen-51487
04/20/2023, 10:23 PM
astonishing-hydrogen-51487
04/20/2023, 10:26 PM
broad-monitor-34340
04/21/2023, 8:48 AM
xhr.js:178 Mixed Content: The page at 'https://lb-0a6dbbcb4432406688a3339d691ebfb7-1.upcloudlb.com/dashboard/' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint 'http://lb-0a6dbbcb4432406688a3339d691ebfb7-1.upcloudlb.com/v3-public/localProviders/local?action=login'. This request has been blocked; the content must be served over HTTPS.
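That /v3-public/localProviders/local login path is Rancher's local auth endpoint, so this looks like Rancher sitting behind a load balancer that terminates TLS while Rancher itself still believes it is serving plain HTTP, which makes it emit http:// URLs and trip the browser's mixed-content blocker. A sketch of the usual fix, assuming Rancher was installed from the official Helm chart (the repo alias rancher-latest is an assumption; the hostname is taken from the error above):

# tell the Rancher chart that TLS is terminated externally on the load balancer
helm upgrade --install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=lb-0a6dbbcb4432406688a3339d691ebfb7-1.upcloudlb.com \
  --set tls=external
# whatever proxies the traffic must also pass X-Forwarded-Proto: https,
# otherwise Rancher keeps generating http:// links for the dashboard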
white-branch-93180
04/21/2023, 9:08 AM
microscopic-balloon-29953
04/21/2023, 10:15 AM
microscopic-balloon-29953
04/21/2023, 10:18 AM
billions-mechanic-56038
04/21/2023, 11:18 AM
white-branch-93180
04/21/2023, 11:25 AM
melodic-market-77727
04/21/2023, 11:42 AM
best-address-42882
04/21/2023, 3:08 PM
little-autumn-44758
04/21/2023, 4:07 PM
agreeable-pharmacist-24247
04/21/2023, 5:39 PM
proud-ram-62490
04/21/2023, 7:23 PM
many-artist-13412
04/21/2023, 9:41 PM