cool-ocean-71403
06/28/2022, 9:17 PM

damp-ram-19757
06/29/2022, 9:19 AM

melodic-hamburger-23329
06/29/2022, 1:35 PM
"unable to select an IP from default routes" error preventing k3s startup?

melodic-hamburger-23329
06/30/2022, 9:36 AM
"registry": {
  "configPath": "",
  "mirrors": {
    "*": {
      "endpoint": [
        "https://myregistry.net"
      ],
      "rewrite": {
        "^/(.*)": "some-namespace/$1"
      }
    },
    "docker.io": {
      "endpoint": [
        "https://myregistry.net"
      ],
      "rewrite": {
        "^/(.*)": "some-namespace/$1"
      }
    }
  }
}
I would like to manage all images from an internal registry, including the kube-system images.

gorgeous-pencil-75892
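The rendered containerd config above corresponds to k3s's registries.yaml. A minimal sketch of the equivalent /etc/rancher/k3s/registries.yaml, reusing the placeholders from the snippet (myregistry.net, some-namespace) and assuming a k3s version recent enough to honor the "*" wildcard mirror; the credentials block is illustrative:

```yaml
# /etc/rancher/k3s/registries.yaml -- sketch; restart k3s after editing
mirrors:
  "*":
    endpoint:
      - "https://myregistry.net"
    rewrite:
      "^/(.*)": "some-namespace/$1"
configs:
  "myregistry.net":
    auth:
      username: someuser      # placeholder credentials
      password: somepassword
```

For the bundled kube-system images specifically, the k3s server also accepts a --system-default-registry flag (or `system-default-registry:` in /etc/rancher/k3s/config.yaml) so the packaged components are pulled from the private registry instead of docker.io.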
06/30/2022, 1:28 PM

green-shampoo-61471
06/30/2022, 5:09 PM

hundreds-state-15112
06/30/2022, 10:15 PM

melodic-hamburger-23329
07/01/2022, 10:26 AM

cool-ocean-71403
07/01/2022, 10:44 AM

brave-afternoon-4801
07/01/2022, 2:04 PM
sudo systemctl stop k3s
sudo k3s-killall.sh
docker rm -f $(docker ps -aq)
sudo rm -rf /var/lib/rancher /var/lib/kubelet
However, when I start k3s again, containers from a Helm chart I was playing with are still being created. What is the proper way to wipe everything?

late-needle-80860
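If k3s was installed via the get.k3s.io script, that script drops dedicated teardown helpers which also clear the state (deployed manifests, mounts, network interfaces) that a manual `systemctl stop` plus `rm -rf` can miss. A sketch, assuming a script-based install:

```
# stop k3s and kill all child processes/containers, unmount leftovers
/usr/local/bin/k3s-killall.sh

# remove the k3s binary, data directories, and the systemd unit entirely
/usr/local/bin/k3s-uninstall.sh
```

After the uninstall script, a fresh install should no longer recreate the old Helm chart's containers.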
07/01/2022, 6:26 PM
journalctl -u k3s.service -f
shows:
Jul 01 20:25:25 test-test-master-0 systemd[1]: Failed to start Lightweight Kubernetes.
Jul 01 20:25:30 test-test-master-0 systemd[1]: k3s.service: Scheduled restart job, restart counter is at 151.
Jul 01 20:25:30 test-test-master-0 systemd[1]: Stopped Lightweight Kubernetes.
Jul 01 20:25:30 test-test-master-0 systemd[1]: Starting Lightweight Kubernetes...
Jul 01 20:25:30 test-test-master-0 sh[53137]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Jul 01 20:25:30 test-test-master-0 sh[53138]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory
Jul 01 20:25:30 test-test-master-0 k3s[53141]: time="2022-07-01T20:25:30+02:00" level=info msg="Starting k3s v1.23.7+k3s1 (ec61c667)"
Jul 01 20:25:30 test-test-master-0 k3s[53141]: time="2022-07-01T20:25:30+02:00" level=warning msg="Cluster CA certificate is not trusted by the host CA bundle, but the token does not include a CA hash. Use the full token from the server's node-token file to enable Cluster CA validation."
Jul 01 20:25:30 test-test-master-0 k3s[53141]: time="2022-07-01T20:25:30+02:00" level=info msg="Managed etcd cluster not yet initialized"
Jul 01 20:25:30 test-test-master-0 k3s[53141]: time="2022-07-01T20:25:30+02:00" level=warning msg="Cluster CA certificate is not trusted by the host CA bundle, but the token does not include a CA hash. Use the full token from the server's node-token file to enable Cluster CA validation."
Jul 01 20:25:30 test-test-master-0 k3s[53141]: time="2022-07-01T20:25:30+02:00" level=fatal msg="starting kubernetes: preparing server: failed to validate server configuration: critical configuration value mismatch"
Jul 01 20:25:30 test-test-master-0 systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Jul 01 20:25:30 test-test-master-0 systemd[1]: k3s.service: Failed with result 'exit-code'.
Jul 01 20:25:30 test-test-master-0 systemd[1]: Failed to start Lightweight Kubernetes.
cool-ocean-71403
07/01/2022, 7:35 PM

numerous-kilobyte-30360
07/01/2022, 9:39 PM
/mnt/storage. For this reason, I don't want to use Longhorn; I'd like to use my existing replicated storage.
k3s seems to default to /var/lib/rancher/k3s/storage with the local storage provisioner.
I see an option for use during initial k3s setup, --default-local-storage-path, per the documentation at https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#storage-class
The setting appears to be located in /var/lib/rancher/k3s/server/manifests/local-storage.yaml on each node, but from reading around on forums it also appears that this file gets overwritten and should be changed with tooling rather than edited directly.
• Is it possible to change the --default-local-storage-path parameter without reinstalling k3s?
• Is this even the correct parameter to change?
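A sketch of the reinstall-free route, assuming the bundled local-path provisioner: k3s reads its server flags from /etc/rancher/k3s/config.yaml and re-renders local-storage.yaml from them on startup, so the path can be changed with a config edit and a restart (existing PersistentVolumes keep their old location):

```yaml
# /etc/rancher/k3s/config.yaml -- sketch; /mnt/storage is the path from the question
default-local-storage-path: /mnt/storage
```

Then `sudo systemctl restart k3s` on the server node; only newly provisioned volumes land under /mnt/storage.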
Thanks!

melodic-hamburger-23329
07/02/2022, 6:22 AM
[plugins."io.containerd.snapshotter.v1.stargz".registry.mirrors."*"] and [plugins."io.containerd.grpc.v1.cri".registry.mirrors."*"] (not sure if I need both?). I'd also like to get authentication working using the new config_path & certs.d syntax. Does anyone have insight into the correct and recommended way? I've been checking the stargz and containerd/CRI docs, but am a bit lost.
Also, although the services eventually boot successfully, I'm currently getting a lot of errors like these during the k3s install:
time="2022-07-02T14:51:25.485441124+09:00" level=info msg="Received status code: 401 Unauthorized. Refreshing creds..." key="k8s.io/25/extract-737524037-rTpL sha256:47539da01eebacb627943ac3a63d918b69534684b239edac42ce4c64742fb4fd" mountpoint=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.stargz/snapshotter/snapshots/17/fs parent="k8s.io/24/sha256:114c5ec8519affa4cba972ba546f900b4803d5863f5242a2daee68aa6133fd7d" src="docker.io/rancher/klipper-helm:v0.7.3-build20220613/sha256:5b8b2ba8c050ccd08bd8f00f15210c40a3f50674a0b9457b3f06870723889cd0"
time="2022-07-02T14:51:26.853585280+09:00" level=warning msg="failed to prepare remote snapshot" error="failed to resolve layer: failed to resolve layer \"sha256:5b8b2ba8c050ccd08bd8f00f15210c40a3f50674a0b9457b3f06870723889cd0\" from \"docker.io/rancher/klipper-helm:v0.7.3-build20220613\": failed to resolve the blob: failed to resolve the source: cannot resolve layer: failed to redirect (host \"registry-1.docker.io\", ref:\"docker.io/rancher/klipper-helm:v0.7.3-build20220613\", digest:\"sha256:5b8b2ba8c050ccd08bd8f00f15210c40a3f50674a0b9457b3f06870723889cd0\"): failed to access to the registry with code 404: failed to redirect (host \"registry-jpe2.r-local.net\", ref:\"docker.io/rancher/klipper-helm:v0.7.3-build20220613\", digest:\"sha256:5b8b2ba8c050ccd08bd8f00f15210c40a3f50674a0b9457b3f06870723889cd0\"): failed to access to the registry with code 401: failed to resolve: failed to resolve target" key="k8s.io/25/extract-737524037-rTpL sha256:47539da01eebacb627943ac3a63d918b69534684b239edac42ce4c64742fb4fd" parent="k8s.io/24/sha256:114c5ec8519affa4cba972ba546f900b4803d5863f5242a2daee68aa6133fd7d" remote-snapshot-prepared=false
time="2022-07-02T14:51:37.262725372+09:00" level=error msg="ContainerStatus for \"1e7c670c740754bfa1d6e3c4ca2540025adb6cd8063abeb057911f9535d0f1b3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e7c670c740754bfa1d6e3c4ca2540025adb6cd8063abeb057911f9535d0f1b3\": not found"
time="2022-07-02T14:51:37.263923059+09:00" level=error msg="ContainerStatus for \"5af6495a6eeb8d647ce97e2ece46489961799cb711a85533fd4a38892b0af32e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5af6495a6eeb8d647ce97e2ece46489961799cb711a85533fd4a38892b0af32e\": not found"
Is this normal with private registries, or do I possibly have some misconfiguration somewhere…?

melodic-hamburger-23329
07/04/2022, 6:19 AM

ripe-restaurant-90224
07/05/2022, 9:10 AM
hostnetwork: true and a public IP, and it has been doing exactly what I want, but now I'd like to put it behind an sslh so I can use one of the ports for ssh as well. I can't seem to find how to set the bind address to 127.0.0.1 or the private IP so that sslh can listen on the external IP and proxy to the ingress. Does anyone know what I need to set?

average-autumn-93923
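Assuming the ingress is k3s's bundled Traefik running with hostNetwork, one hedged option is to override the entrypoint bind addresses via a HelmChartConfig; Traefik's entrypoint address syntax is `[host]:port`, so binding to loopback leaves the external IP free for sslh. The file name and port choices here are illustrative:

```yaml
# /var/lib/rancher/k3s/server/manifests/traefik-config.yaml -- sketch
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    additionalArguments:
      - "--entrypoints.web.address=127.0.0.1:80"
      - "--entrypoints.websecure.address=127.0.0.1:443"
```

sslh would then listen on the external IP's port 443 and forward TLS traffic to 127.0.0.1:443 and ssh to sshd.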
07/05/2022, 5:25 PM
root@dp6448:~# k3s kubectl get nodes
NAME     STATUS     ROLES                       AGE   VERSION
dp6448   Ready      control-plane,etcd,master   69d   v1.23.4+k3s1
dp6449   NotReady   control-plane,etcd,master   69d   v1.23.4+k3s1
root@dp6449:~# k3s kubectl get nodes
NAME     STATUS     ROLES                       AGE   VERSION
dp6448   NotReady   control-plane,etcd,master   69d   v1.23.4+k3s1
dp6449   Ready      control-plane,etcd,master   69d   v1.23.4+k3s1
If I ask them to kubectl describe node each other, I get:
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure Unknown Tue, 05 Jul 2022 10:32:22 +0000 Tue, 05 Jul 2022 10:33:42 +0000 NodeStatusUnknown Kubelet stopped posting node status.
DiskPressure Unknown Tue, 05 Jul 2022 10:32:22 +0000 Tue, 05 Jul 2022 10:33:42 +0000 NodeStatusUnknown Kubelet stopped posting node status.
PIDPressure Unknown Tue, 05 Jul 2022 10:32:22 +0000 Tue, 05 Jul 2022 10:33:42 +0000 NodeStatusUnknown Kubelet stopped posting node status.
Ready Unknown Tue, 05 Jul 2022 10:32:22 +0000 Tue, 05 Jul 2022 10:33:42 +0000 NodeStatusUnknown Kubelet stopped posting node status.
Everything other than that seems fine… etcd is healthy (one of the nodes is the leader and the other is not); they're just both surfacing that the other node isn't ready. Confused as to how this is possible with healthy etcd.

billowy-needle-49036
07/05/2022, 8:03 PM
curl https://get.k3s.io/ | INSTALL_K3S_VERSION="v1.23.6+k3s1" INSTALL_K3S_EXEC="--disable=traefik --node-ip 10.2.0.1" sh -
curl http://10.42.0.4:9153 works (gives a 404, OK). But from a neighbor host (10.2.0.210), curl http://10.2.0.1:9153 is connection refused. Shouldn't that work? Where do I debug next?

hundreds-state-15112
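One likely explanation, offered as a sketch: 10.42.0.4 is a pod IP (9153 is CoreDNS's metrics port), and pod IPs are only routable inside the cluster network; nothing binds 9153 on the node address 10.2.0.1, hence the refusal from a neighbor host. To reach a pod port from outside, one option is a NodePort Service; the name and nodePort value here are arbitrary:

```yaml
# Sketch: expose CoreDNS metrics on every node's IP
apiVersion: v1
kind: Service
metadata:
  name: coredns-metrics
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: kube-dns      # label carried by k3s's CoreDNS pods
  ports:
    - name: metrics
      port: 9153
      targetPort: 9153
      nodePort: 30153      # arbitrary port in the NodePort range
```

With that applied, `curl http://10.2.0.1:30153/metrics` from the neighbor host should answer.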
07/05/2022, 8:22 PM

high-fall-28740
07/05/2022, 11:55 PM

melodic-hamburger-23329
07/06/2022, 4:10 AM

melodic-hamburger-23329
07/06/2022, 10:07 AM

rough-ice-65066
07/06/2022, 12:39 PM
kube-reserved, system-reserved, kube-reserved-cgroup, and system-reserved-cgroup flags on the kubelet. But if I don't have requests or limits in my pod spec, it still crashes the node (NotReady state). I have tried combinations of the above flags with the eviction-hard flag as well. Is there any other solution to overcome this?

hundreds-state-15112
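Reservations only shrink the node's allocatable; a pod that declares no requests or limits can still consume everything inside what remains. A common complement (a sketch, not from this thread) is a per-namespace LimitRange that injects default requests and limits into containers that omit them, so the scheduler and the OOM killer have something to work with:

```yaml
# Sketch: defaults applied to every container in the namespace that
# does not set its own requests/limits; sizes are placeholders.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: default      # apply one per namespace
spec:
  limits:
    - type: Container
      default:            # default limits
        memory: 512Mi
        cpu: 500m
      defaultRequest:     # default requests
        memory: 256Mi
        cpu: 100m
```

Combined with eviction-hard, this keeps runaway pods evictable before the kubelet itself starves.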
07/06/2022, 8:43 PM

bulky-potato-48137
07/07/2022, 1:42 PM
go get github.com/k3s-io/k3s but am getting the error:
go: downloading github.com/k3s-io/k3s v1.21.9
go get: github.com/k3s-io/k3s@v1.21.9: parsing go.mod:
        module declares its path as: github.com/rancher/k3s
                but was required as: github.com/k3s-io/k3s
I then tried go get github.com/rancher/k3s but again it errors:
go: downloading github.com/rancher/k3s v1.21.9
go get: github.com/rancher/k3s@v1.21.9 requires
        github.com/kubernetes-sigs/cri-tools@v0.0.0-00010101000000-000000000000: invalid version: unknown revision 000000000000
How can I go get it like most go modules?

best-accountant-61831
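k3s is not really designed to be consumed with a plain go get: its go.mod leans on replace directives, and replace directives do not propagate to consumers, which is why the cri-tools placeholder version (v0.0.0-00010101000000-…) surfaces. The usual workaround is to pin a commit via a pseudo-version and copy the relevant replace lines from the matching k3s tag's go.mod into your own. A sketch with entirely hypothetical versions:

```
// go.mod -- illustrative only; the pseudo-version and the replacement
// version must be taken from the actual k3s commit/tag you target.
module example.com/consumer

go 1.16

require github.com/rancher/k3s v0.0.0-20220301000000-0123456789ab // hypothetical pseudo-version

// copied from that tag's go.mod (hypothetical version shown):
replace github.com/kubernetes-sigs/cri-tools => github.com/k3s-io/cri-tools v1.21.0-k3s1
```

Every replace directive in k3s's own go.mod that your import paths touch needs the same treatment.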
07/07/2022, 3:27 PM
upgrade.cattle.io/v1 plan, for Ubuntu package upgrades. I think I at least once saw an example of that, but my search fu is lacking. Any known pointers to such a thing?

bright-london-1095
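A hypothetical shape for such a Plan, assuming the system-upgrade-controller is installed (it mounts the host filesystem at /host inside the upgrade container); the image, commands, and version string are illustrative, not from the thread:

```yaml
# Sketch: run apt upgrades node-by-node via system-upgrade-controller
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: apt-upgrade
  namespace: system-upgrade
spec:
  concurrency: 1                  # one node at a time
  version: "20220707"             # bump this string to re-trigger the plan
  nodeSelector:
    matchExpressions:
      - {key: kubernetes.io/os, operator: In, values: ["linux"]}
  serviceAccountName: system-upgrade
  upgrade:
    image: ubuntu:20.04
    command: ["chroot", "/host"]
    args: ["sh", "-c", "apt-get update && apt-get -y upgrade"]
```

Changing spec.version is what makes the controller schedule a fresh run on each matching node.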
07/07/2022, 7:28 PM
k3s version from 1.20.7+k3s1 to 1.22.8+k3s1, I see some changes, like the image name being rancher/mirrored-coredns-coredns:1.9.1 for the CoreDNS pod and rancher/mirrored-library-traefik:2.6.1 for the Traefik pod.
Is this expected, or am I missing some configuration, or is the image being used incorrect? Please note both are part of the k3s installation and not installed separately...
TIA

melodic-hamburger-23329
07/08/2022, 1:17 AM

agreeable-mouse-95550
07/08/2022, 1:33 AM
k3s server w/ sqlite uses ~4 MB per CRD, which is about the same as the upstream API server. Does that sound about right?

melodic-hamburger-23329
07/08/2022, 3:58 AM