rapid-businessperson-28408
11/07/2022, 7:23 PM
billions-airline-85860
11/08/2022, 10:05 AM
server {
listen 443;
proxy_pass my_agent_nodes_443;
}
upstream my_agent_nodes_443 {
server 185.230.138.110:30679;
server 185.239.208.173:30679;
server 38.242.195.7:30679;
server 38.242.134.42:30679;
}
And everything is working hunky-dory (which is why I'm flabbergasted). What I still don't understand is: how did I get the port numbers for the worker nodes, i.e. 30679?
I do all this as a hobby and did this work like 6 months ago, but didn't think to document how I got to the port number, and I can't find any mention of those ports in the emissary configs, only listeners for ports 80 and 443. 😮
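A plausible answer, as a hedged sketch: 30679 sits in Kubernetes' default NodePort range (30000–32767), so it was most likely auto-assigned to the emissary Service rather than configured anywhere by hand. The service name and namespace below are assumptions; adjust to the actual cluster:
# Find which Service owns the NodePort:
kubectl get svc -A | grep 30679
# Or read the NodePort directly off the (assumed) emissary Service:
kubectl -n emissary get svc emissary-ingress \
  -o jsonpath='{.spec.ports[?(@.port==443)].nodePort}'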
billions-airline-85860
11/08/2022, 10:06 AM
billions-airline-85860
11/08/2022, 10:14 AM
jolly-waitress-71272
11/09/2022, 7:27 PM
clever-air-65544
11/11/2022, 2:57 PM
shy-shampoo-22224
11/14/2022, 3:41 PM
gray-florist-76480
11/14/2022, 8:44 PM
Ingress
but I'm not sure what to do about a raw service like that, which I need to access at a predictable address.
faint-airport-39912
11/15/2022, 8:30 AM
I'm getting this timeout from the kubectl command, so please suggest a solution to this problem:
Unable to connect to the server: net/http: TLS handshake timeout
Whenever I get a timeout response from kubectl, the application exposed via ingress also stops working and returns a 502.
These are the error logs from /var/log/syslog
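A hedged diagnostic sketch for this symptom: TLS handshake timeouts from kubectl, combined with 502s from the ingress, usually point at an overloaded or resource-starved apiserver node rather than a client problem. Some first checks on the server (standard k3s locations):
# Is the k3s service itself healthy?
systemctl status k3s
# Memory/CPU pressure is a common cause of handshake timeouts:
free -m; uptime
# Recent apiserver-side errors:
journalctl -u k3s --since "15 min ago" | grep -iE "error|timeout"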
limited-potato-16824
11/16/2022, 2:41 PM
apply-k3s-master-plan-on-ip-1-2-3-444-with-37285dc25da-spg7r
Inspecting the jobs, it looks like it wants to downgrade to v1.22.12 again, since that is the version I can see in Rancher:
I have read the page https://docs.k3s.io/upgrades/automated; our k3s clusters are managed by Rancher, but we want to be in control of the upgrades.
Is there any way to prevent Rancher from trying to run the upgrade plans, since I suspect the upgrade plans are installed through the cattle-cluster-agent? I see this in our k3s cluster:
$ kubectl get plans.upgrade.cattle.io -n cattle-system
NAME              AGE
k3s-worker-plan   7d1h
k3s-master-plan   7d1h
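One possible stopgap, as a hedged sketch (Rancher may reconcile anything you delete back into place, so treat this as exploratory rather than a supported switch): the plans are only acted on by the system-upgrade-controller, so pausing that controller, or removing the plans, stops the upgrade jobs from running:
# Stop the controller that executes the plans (assumes it runs in cattle-system,
# as on Rancher-provisioned k3s clusters):
kubectl -n cattle-system scale deployment system-upgrade-controller --replicas=0
# Or remove the plans themselves; Rancher may recreate them:
kubectl -n cattle-system delete plans.upgrade.cattle.io k3s-master-plan k3s-worker-plan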
few-minister-97494
11/16/2022, 4:48 PM
future-fountain-82544
11/17/2022, 10:31 PM
future-fountain-82544
11/17/2022, 10:33 PM
K[33 random bytes hex-encoded]::server:[16 random bytes hex-encoded]
The thing I'm not sure of is whether the two values are related (i.e. whether they're a public/private keypair or something).
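For what it's worth, a hedged sketch of how this can be checked: in k3s the hex string before the :: is understood to be a K10-prefixed SHA-256 of the cluster CA bundle, and the value after server: is an independently generated random password, so the two are not a keypair. Assuming standard k3s paths:
# Fetch the CA bundle the server hands out and hash it; the digest should
# match the hex portion of the token before the "::" separator:
curl -sk https://<server-ip>:6443/cacerts | sha256sum
# The full token itself lives on the server at a standard k3s path:
sudo cat /var/lib/rancher/k3s/server/token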
famous-flag-15098
11/18/2022, 4:13 PM
# server
ExecStart=/usr/local/bin/k3s \
    server \
    '--disable' \
    'traefik' \
    '--disable' \
    'servicelb'

# agent
ExecStart=/usr/local/bin/k3s \
    agent \
    '--disable=traefik,servicelb'
When I first encountered this, I added the above to the systemd unit files, did a daemon-reload, and restarted. All the servicelb pods terminated and all seemed well.
Approximately two weeks later, some of my services were not accessible, and sure enough, servicelb had started up again.
What gives?
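A hedged guess at the mechanism: the k3s install script regenerates k3s.service whenever it re-runs (for example during an upgrade), which silently reverts hand-edits to ExecStart. Flags placed in the k3s config file survive reinstalls; a minimal sketch, to be run on the server node:
# /etc/rancher/k3s/config.yaml is read by k3s on startup and survives
# re-running the install script, unlike edits to the unit file:
cat <<'EOF' | sudo tee /etc/rancher/k3s/config.yaml
disable:
  - traefik
  - servicelb
EOF
sudo systemctl restart k3s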
bright-london-1095
11/21/2022, 10:13 AM
jolly-waitress-71272
11/22/2022, 4:56 PM
k3s kubectl create secret generic kubeconfig --from-file=/etc/rancher/k3s/k3s.yaml
I'm looking at https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_create_secret_generic/ and what I think happens is that k3s.yaml gets stored as a Kubernetes Secret, essentially a protected variable inside the cluster.
Is this something people normally do, and for what reason? I don't see it referenced at any other point in the instructions.
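That reading seems right, as a hedged sketch: the command stores the file in a Secret named kubeconfig in the current namespace, from which pods or operators can mount or read it. It can be inspected afterwards like any other Secret:
# Confirm the secret exists and recover the stored file (the key defaults to
# the source filename, k3s.yaml):
kubectl get secret kubeconfig
kubectl get secret kubeconfig -o jsonpath='{.data.k3s\.yaml}' | base64 -d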
bright-london-1095
11/23/2022, 1:06 AM
k3s version 1.22.8, and the traefik pod is in CrashLoopBackOff and not accessible. I tried deleting the pod, but it didn't help!
+ helm_v3 install --set-string global.systemDefaultRegistry= traefik https://10.49.0.22:443/static/charts/traefik-10.14.100.tgz --values /config/values-01_HelmChart.yaml --values /config/values-10_HelmChartConfig.yaml
Error: INSTALLATION FAILED: cannot re-use a name that is still in use
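A hedged sketch of the usual cause: "cannot re-use a name that is still in use" means a Helm release record named traefik already exists (possibly stuck in a failed or pending state), so the install job cannot create a new one. The record can be located like this:
# List all releases in kube-system, including failed/pending ones:
helm -n kube-system ls -a
# Helm v3 stores release records as Secrets with these labels:
kubectl -n kube-system get secrets -l owner=helm,name=traefik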
bright-london-1095
11/23/2022, 1:15 AM
kube-system   pod/helm-install-traefik-q76r7   0/1   CrashLoopBackOff   5 (63s ago)   4m32s
bright-london-1095
11/23/2022, 1:26 AM
k3s logs:
Nov 23 01:19:01 nestle-test k3s[1186]: I1123 01:19:01.070591 1186 scope.go:110] "RemoveContainer" containerID="e87d34e8fb2f4d3e07ab9ee9e8656ee73f0cd1dd00f7bce0ee9956d3bc091c67"
Nov 23 01:19:01 nestle-test k3s[1186]: E1123 01:19:01.070873 1186 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helm\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=helm pod=helm-install-traefik--1-8htfg_kube-system(9833a11e-3065-48d7-908f-baf37fef0168)\"" pod="kube-system/helm-install-traefik--1-8htfg" podUID=9833a11e-3065-48d7-908f-baf37fef0168
Nov 23 01:19:03 nestle-test k3s[1186]: E1123 01:19:03.366518 1186 remote_runtime.go:164] "RemovePodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to remove volatile sandbox root directory \"/run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/345c21c66d2e341863fb9b95eb829a477f2c6e799016674d3b2a80fe113601a1\": unlinkat /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/345c21c66d2e341863fb9b95eb829a477f2c6e799016674d3b2a80fe113601a1/shm: device or resource busy" podSandboxID="345c21c66d2e341863fb9b95eb829a477f2c6e799016674d3b2a80fe113601a1"
bright-london-1095
11/23/2022, 1:31 AM
traefik shipped with k3s (v1.22.8+k3s1)
early-solstice-46134
11/23/2022, 8:04 AM
Is there a way to get logs from the containerd daemon inside of K3s? I need to debug some stuff regarding our private registry, but I can't dig deep enough. journalctl -u k3s | grep <image-name> does not show anything related to containerd pulling/pushing said image, even when I start k3s with the --debug flag. The k3s logs do show that it picks up the /etc/rancher/k3s/registries.yaml file correctly.
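A hedged pointer, assuming a standard k3s layout: the embedded containerd writes its own log file under the k3s data directory rather than to the journal, and the bundled crictl/ctr tools can query it directly:
# Embedded containerd keeps a separate log file (standard k3s path):
tail -f /var/lib/rancher/k3s/agent/containerd/containerd.log
# Query the embedded containerd directly with the bundled CLI tools:
k3s crictl images | grep <image-name>
k3s ctr -n k8s.io images ls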
kind-helmet-90731
11/24/2022, 6:39 AM
kind-helmet-90731
11/24/2022, 6:39 AM
time="2022-11-24T05:48:21Z" level=info msg="Configuration loaded from flags."
time="2022-11-24T05:51:04Z" level=error msg="Skipping service: no endpoints found" providerName=kubernetes ingress=prometheus-server serviceName=prometheus-server namespace=prometheus servicePort="&ServiceBackendPort{Name:,Number:80,}"
time="2022-11-24T05:51:05Z" level=error msg="Skipping service: no endpoints found" namespace=prometheus serviceName=prometheus-server servicePort="&ServiceBackendPort{Name:,Number:80,}" providerName=kubernetes ingress=prometheus-server
time="2022-11-24T05:51:05Z" level=error msg="Skipping service: no endpoints found" serviceName=opensearch-dashboards servicePort="&ServiceBackendPort{Name:,Number:5601,}" namespace=opensearch providerName=kubernetes ingress=opensearch-dashboards
time="2022-11-24T05:51:05Z" level=error msg="Skipping service: no endpoints found" namespace=prometheus servicePort="&ServiceBackendPort{Name:,Number:80,}" serviceName=prometheus-server providerName=kubernetes ingress=prometheus-server
time="2022-11-24T05:51:05Z" level=error msg="Skipping service: no endpoints found" namespace=opensearch providerName=kubernetes serviceName=opensearch-dashboards servicePort="&ServiceBackendPort{Name:,Number:5601,}" ingress=opensearch-dashboards
time="2022-11-24T05:51:05Z" level=error msg="Skipping service: no endpoints found" providerName=kubernetes serviceName=prometheus-server servicePort="&ServiceBackendPort{Name:,Number:80,}" ingress=prometheus-server namespace=prometheus
kind-helmet-90731
11/24/2022, 6:42 AM
bright-postman-91926
11/24/2022, 7:41 AM
full-park-34540
11/24/2022, 3:36 PM
creamy-room-58344
11/25/2022, 3:21 PM
handsome-painter-48813
11/25/2022, 4:24 PM
overlay 468G 15G 430G 4% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/5e03898951de490bdc8e32ca67c24a1c691cf9efa8629ee494460f3fc5a6bc9b/rootfs
It goes up by 1 GB each second; how can I find the container that is doing this?
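A hedged sketch: the long hex ID in the overlay mount path is the container ID, so it can be matched back to a pod with the bundled crictl:
# Match the overlay mount's hex ID back to a container (prefix is enough):
k3s crictl ps -a --id 5e03898951de
# Inspect it to see which pod/image it belongs to:
k3s crictl inspect 5e03898951de
# Per-container disk usage is also visible here:
k3s crictl stats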
red-boots-23091
11/25/2022, 5:36 PM
flat-continent-80260
11/26/2022, 12:35 AM