creamy-waiter-66684
09/29/2022, 6:16 PM
melodic-hamburger-23329
09/30/2022, 7:34 AM
time="2022-09-30T16:30:47.729342645+09:00" level=fatal msg="failed to create new snapshotter" error="failed to restore remote snapshot: failed to prepare remote snapshot: sha256:08b10ee4e4d584086d7203095776335fc5f3a541402bb19e89e908096b30df2e: failed to resolve layer: failed to resolve layer \"sha256:a42e3d1ba15a55b32c4b95cd3486aab3103d7b685b471ce68130d718c16b4e88\" from \"...\": failed to resolve the blob: failed to resolve the source: cannot resolve layer: failed to redirect (host \"...\", ref:\"...\", digest:\"sha256:a42e3d1ba15a55b32c4b95cd3486aab3103d7b685b471ce68130d718c16b4e88\"): failed to access to the registry with code 404: failed to resolve: failed to resolve target"
Kind of impossible to do upgrades if I basically need to recreate the cluster every time :S
Am I possibly doing something wrong?
Steps:
• download the latest k3s binary and put it in /usr/local/bin
• systemctl stop k3s (or k3s-killall.sh; not sure which one is recommended?)
• systemctl start k3s (or rerun the install script; same result)
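For reference, a minimal sketch of that manual binary upgrade flow, assuming a single-server, systemd-managed install (the download URL, paths and commands here are illustrative defaults, not a confirmed procedure):

# Sketch of the manual upgrade path (single server, systemd-managed; adjust arch/paths as needed).
sudo systemctl stop k3s          # k3s-killall.sh additionally stops running pods/containers
curl -Lo /tmp/k3s https://github.com/k3s-io/k3s/releases/latest/download/k3s
sudo install -m 755 /tmp/k3s /usr/local/bin/k3s
sudo systemctl start k3s
sudo journalctl -u k3s -f        # watch startup logs, e.g. for snapshotter errors like the one above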
melodic-hamburger-23329
09/30/2022, 8:04 AM
disable:
  - "etcd"
or
disable-etcd: true
https://rancher.com/docs/k3s/latest/en/installation/disable-flags/
https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/
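For reference only, a minimal sketch of where such an option would live when using the default server config file (the path and the restart step are assumptions; the disable: list form shown above goes in the same file):

# Sketch: drop the flag into the K3s server config file and restart.
echo 'disable-etcd: true' | sudo tee -a /etc/rancher/k3s/config.yaml
sudo systemctl restart k3s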
clever-air-65544
09/30/2022, 4:50 PM
elegant-article-67113
10/01/2022, 4:03 PM
eager-cartoon-94692
10/01/2022, 11:39 PM
prehistoric-diamond-4224
10/03/2022, 10:31 AM
green-energy-38738
10/03/2022, 2:15 PM
chilly-telephone-51989
10/03/2022, 2:37 PM
bright-jordan-61721
10/03/2022, 3:28 PM
I'm on v1.24.6+k3s1 and I have some pods configured with dnsPolicy: ClusterFirst (which is the default), and I'm noticing weird DNS resolution problems.
When I shell into a pod with this DNS policy and cat /etc/resolv.conf, this is what I see:
bash-5.1# cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local [home search domains redacted]
nameserver 10.43.0.10
options ndots:5
I believe ndots:5 is causing the problem, as ping github.com fails due to DNS resolution, but ping github.com. works instead.
Why is k3s setting the ndots:5 option by default? I'm not setting this with the pod's dnsConfig at all. If this option were removed or reduced to ndots:1 it would likely solve my issue.
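For context, ndots:5 is the upstream kubelet default for ClusterFirst pods rather than something K3s adds; a per-pod dnsConfig override is one way to work around it. A minimal sketch (pod and image names are illustrative):

# Sketch: override ndots for a single pod via dnsConfig.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: dns-test          # illustrative name
spec:
  dnsPolicy: ClusterFirst
  dnsConfig:
    options:
      - name: ndots
        value: "1"
  containers:
    - name: shell
      image: busybox:1.36
      command: ["sleep", "3600"]
EOF
kubectl exec dns-test -- cat /etc/resolv.conf   # should now show ndots:1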
prehistoric-judge-25958
10/03/2022, 5:59 PM
After adding /etc/rancher/k3s/registries.yaml, my node goes into a NotReady state after "systemctl restart k3s":
mirrors:
  harbor.k8s.lan:
    endpoint:
      - "https://harbor.k8s.lan:443"
configs:
  "harbor.k8s.lan:443":
    tls:
      cert_file: /etc/rancher/k3s/certs/cert.pem
      key_file: /etc/rancher/k3s/certs/cert-key.pem
      ca_file: /etc/rancher/k3s/certs/k8s-lan.crt
      insecure_skip_verify: "true"
I am using self-signed certificates for my k8s.lan domain and put them in the directory /etc/rancher/k3s/certs/
describe node k3s-master-01 output:
Normal Starting 23m kubelet Starting kubelet.
Warning InvalidDiskCapacity 23m kubelet invalid capacity 0 on image filesystem
Normal NodeAllocatableEnforced 23m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 23m (x2 over 23m) kubelet Node k3s-master-01 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 23m (x2 over 23m) kubelet Node k3s-master-01 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 23m (x2 over 23m) kubelet Node k3s-master-01 status is now: NodeHasSufficientPID
Normal NodeReady 23m kubelet Node k3s-master-01 status is now: NodeReady
Normal NodeNotReady 20m (x3 over 63m) node-controller Node k3s-master-01 status is now: NodeNotReady
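One thing that might be worth checking in the registries.yaml above: insecure_skip_verify is documented as a boolean, so quoting it as the string "true" may make the file fail to parse and leave containerd unhappy, which would line up with the node flapping to NotReady after the restart. A hedged sketch with only that changed (hostnames and cert paths kept from the message above; normally you'd use either ca_file or insecure_skip_verify, not both):

# Sketch: same registries.yaml, with insecure_skip_verify as a YAML boolean.
sudo tee /etc/rancher/k3s/registries.yaml <<'EOF'
mirrors:
  harbor.k8s.lan:
    endpoint:
      - "https://harbor.k8s.lan:443"
configs:
  "harbor.k8s.lan:443":
    tls:
      cert_file: /etc/rancher/k3s/certs/cert.pem
      key_file: /etc/rancher/k3s/certs/cert-key.pem
      ca_file: /etc/rancher/k3s/certs/k8s-lan.crt
      insecure_skip_verify: false
EOF
sudo systemctl restart k3s
# Confirm the registry config made it into the generated containerd config:
sudo grep -A6 'harbor' /var/lib/rancher/k3s/agent/etc/containerd/config.toml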
late-needle-80860
10/04/2022, 2:04 PM
I'd like to add --egress-selector-mode=disabled to an already running cluster (server/control-plane side, of course). Is that possible, or does one need to redeploy the cluster anew?
The reason for this is that I'm seeing failed: error dialing backend: EOF when e.g. running the connectivity test provided by the Cilium CLI.
When I tried introducing it on a running test cluster I got the infamous failed to validate server configuration critical configuration value mismatch ….
Is there a workaround to get this in on a live/already running cluster?
Thank you very much
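For what it's worth, a hedged sketch of how one might try flipping the mode on an existing cluster; the config path, the ordering, and the assumption that every server has to carry the same value are mine, not confirmed by the thread:

# Sketch: put the same value on every server node, then restart them one at a time.
echo 'egress-selector-mode: disabled' | sudo tee -a /etc/rancher/k3s/config.yaml
sudo systemctl restart k3s
# Watch for the "critical configuration value mismatch" error clearing up:
sudo journalctl -u k3s -f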
adamant-waiter-35487
10/05/2022, 8:30 AM
curl -sfL https://get.k3s.io | sh - installs both server and agent, but the tutorial on embedded HA asks us to start the server on 3 nodes and then join agents later. I am not sure if this means I need 3 nodes just for the HA control plane, and need more nodes to behave as agent (worker) nodes?
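A rough sketch of the embedded-etcd HA bootstrap being referred to (token and addresses are placeholders). Note that K3s servers also run the agent by default, so the three servers can schedule workloads themselves; dedicated agent nodes are optional unless you want to keep the control plane workload-free:

# First server: initialize the embedded etcd cluster.
curl -sfL https://get.k3s.io | K3S_TOKEN=<shared-secret> sh -s - server --cluster-init

# Servers 2 and 3: join the existing cluster.
curl -sfL https://get.k3s.io | K3S_TOKEN=<shared-secret> sh -s - server \
  --server https://<server1-ip>:6443

# Optional worker-only nodes:
curl -sfL https://get.k3s.io | K3S_URL=https://<server1-ip>:6443 K3S_TOKEN=<shared-secret> sh -s - agent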
late-needle-80860
10/05/2022, 6:23 PM
I now have --egress-selector-mode=disabled set on the servers of X K3s cluster I have running - on v1.24.4+k3s1.
However, when running the cilium connectivity test … command for the Cilium CNI, I now get the following error: unable to start container process: open /dev/pts/0: operation not permitted: unknown. Troubleshooting that error leads me to:
• https://github.com/opencontainers/runc/pull/3554
• and this release: https://github.com/opencontainers/runc/releases/tag/v1.1.4
What K3s release is that part of - if any?
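One hedged way to check this locally rather than waiting on release notes: K3s ships its own runc under its data directory, so the bundled version can be printed directly (the "current" symlink is the default layout; adjust if your data dir differs):

# Print the runc bundled with the installed K3s build:
sudo /var/lib/rancher/k3s/data/current/bin/runc --version
# Cross-check against the component list in the K3s release notes for your version:
k3s --version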
quiet-chef-27276
10/06/2022, 1:55 AM
late-needle-80860
10/07/2022, 8:05 AM
What are the implications of setting --egress-selector-mode=disabled on nodes in a cluster? What is one not getting? What's the downside?
In the docs it says:
The apiserver does not use agent tunnels to communicate with nodes. Requires that servers run agents, and have direct connectivity to the kubelet on agents, or the apiserver will not be able to access service endpoints or perform kubectl exec and kubectl logs.
So that sounds fine to me. I wasn't disabling the agent on servers anyway, so I'm not losing anything there.
Are there any downsides or considerations one should have in mind?
Thank you very much
late-needle-80860
10/07/2022, 10:40 AM
I'd like worker node joins to come up cordoned, in order for different processes to complete in due time before regular workloads start piling in on the new worker.
Some of these processes might be/are:
• the configuration of containerd for a private self-hosted registry
• Longhorn bootstrapping … and storage space setup … which need to be fully up and ready before regular workloads needing persistent storage start appearing
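A hedged sketch of two ways to get that effect (the taint key, node names and token are illustrative): join the worker with a temporary NoSchedule taint via the agent's --node-taint flag, or cordon it right after it registers.

# Option A: register the new worker with a startup taint, lift it when bootstrap work is done.
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<shared-secret> \
  sh -s - agent --node-taint 'node.example.com/bootstrapping=true:NoSchedule'
# ...after the containerd registry config / Longhorn setup are ready:
kubectl taint node <new-worker> node.example.com/bootstrapping-

# Option B: cordon immediately after the node shows up, uncordon when ready.
kubectl cordon <new-worker>
kubectl uncordon <new-worker>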
handsome-painter-48813
10/07/2022, 10:48 AM
failed to find cpuset cgroup (v2)
k3s check-config:
Generally Necessary:
- cgroup hierarchy: cgroups V2 mounted, cpu|cpuset|memory controllers status: bad (fail)
(for cgroups V1/Hybrid on non-Systemd init see https://github.com/tianon/cgroupfs-mount)
- /usr/sbin/apparmor_parser
apparmor: enabled and tools installed
I already set
GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=0"
and it does not work 😕
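In case it's just the boot sequence: a kernel cmdline change in /etc/default/grub only takes effect after the GRUB config is regenerated and the machine rebooted. A sketch of the usual sequence plus checks (the grub command differs per distro, and on Raspberry Pi-style boards the parameters go in /boot/cmdline.txt instead):

# Regenerate GRUB config and reboot so the new cmdline is actually used.
sudo update-grub          # Debian/Ubuntu; e.g. 'sudo grub2-mkconfig -o /boot/grub2/grub.cfg' on RHEL-likes
sudo reboot

# After reboot, verify what the kernel booted with and which controllers exist:
cat /proc/cmdline
cat /sys/fs/cgroup/cgroup.controllers 2>/dev/null   # cgroup v2: should list cpu cpuset memory
ls -d /sys/fs/cgroup/cpuset 2>/dev/null             # cgroup v1: cpuset hierarchy present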
clever-air-65544
10/07/2022, 4:45 PM
red-boots-23091
10/08/2022, 11:50 AM
enough-carpet-20915
10/08/2022, 5:58 PM
Error from server: error dialing backend: x509: certificate is valid for 127.0.0.1, 45.x.x.x, 2a02:c206:xxxx:xxxx::1, not 38.x.x.x
I tried doing k3s certificate rotate (from https://github.com/k3s-io/k3s/wiki/K3s-Cert-Rotation), which seems to have rotated certs, but I'm still getting the same error.
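Since the same x509 error persists after rotation, it may be the kubelet serving certificate on that node that still lacks 38.x.x.x in its SANs, and k3s certificate rotate would not necessarily touch that. A hedged way to confirm what the node registered with and what its kubelet cert actually presents (addresses are placeholders):

# See which internal/external IPs the node is registered with:
kubectl get nodes -o wide

# Inspect the SANs the kubelet serves on port 10250 (replace with the node's address):
openssl s_client -connect <node-address>:10250 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
# If the external address is missing, registering the agent with --node-external-ip
# (and restarting it) is one possible way to get it included -- an assumption, not a confirmed fix.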
enough-carpet-20915
10/08/2022, 6:56 PM
enough-carpet-20915
10/08/2022, 6:56 PM
admin@marge:~$ sudo k3s certificate rotate
INFO[0000] Server detected, rotating server certificates
INFO[0000] Rotating certificates for admin service
INFO[0000] Rotating certificates for etcd service
INFO[0000] Rotating certificates for api-server service
INFO[0000] Rotating certificates for controller-manager service
INFO[0000] Rotating certificates for cloud-controller service
INFO[0000] Rotating certificates for scheduler service
INFO[0000] Rotating certificates for k3s-server service
INFO[0000] Rotating dynamic listener certificate
INFO[0000] Rotating certificates for k3s-controller service
INFO[0000] Rotating certificates for auth-proxy service
INFO[0000] Rotating certificates for kubelet service
INFO[0000] Rotating certificates for kube-proxy service
INFO[0000] Successfully backed up certificates for all services to path /var/lib/rancher/k3s/server/tls-1665255335, please restart k3s server or agent to rotate certificates
admin@marge:~$ sudo diff -sr /var/lib/rancher/k3s/server/tls /var/lib/rancher/k3s/server/tls-1665255335/ | grep -i identical | awk '{print $2}' | xargs basename -a | awk 'BEGIN{print "Identical Files: "}; {print $1}'
Identical Files:
client-ca.crt
client-ca.key
dynamic-cert.json
peer-ca.crt
peer-ca.key
server-ca.crt
server-ca.key
request-header-ca.crt
request-header-ca.key
server-ca.crt
server-ca.key
service.key
apiserver-loopback-client__.crt
apiserver-loopback-client__.key
gifted-branch-26934
10/10/2022, 12:12 PM
average-arm-20932
10/11/2022, 6:36 PM
famous-flag-15098
10/12/2022, 3:38 PM
famous-flag-15098
10/12/2022, 3:38 PM
billowy-bird-32869
10/13/2022, 9:40 AM
I have a HelmChart resource in /var/lib/rancher/k3s/server/manifests/, and in that resource I have used spec.set to change the value of some property. The value for the property is delivered via an environment variable at the moment, as I cannot think of any other way after looking at the options from K3s. Would someone have an idea how I could parametrize my chart, or, if the way I have done it is correct, how do I supply the value? Thanks.
Sample:
spec:
  helmVersion: v3
  repo: https://charts.gitlab.io
  chart: gitlab-runner
  targetNamespace: gitlab-runners
  set:
    runners.tags: "$my_tag"
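As far as I can tell, nothing expands $my_tag inside a manifest dropped into that directory, so the value has to be rendered into the file before (or as) it is written. A hedged sketch using shell expansion at write time plus spec.valuesContent instead of spec.set (the file name, namespace and tag value are illustrative):

# Sketch: expand the environment variable while writing the manifest into the
# auto-deploy directory; the unquoted heredoc delimiter lets the shell substitute ${my_tag}.
export my_tag="docker-builds"   # illustrative value
sudo tee /var/lib/rancher/k3s/server/manifests/gitlab-runner.yaml >/dev/null <<EOF
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: gitlab-runner
  namespace: kube-system
spec:
  helmVersion: v3
  repo: https://charts.gitlab.io
  chart: gitlab-runner
  targetNamespace: gitlab-runners
  valuesContent: |-
    runners:
      tags: "${my_tag}"
EOF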
stale-vegetable-37217
10/13/2022, 1:52 PM