high-lunch-88646
04/12/2024, 4:49 PM
With kubeconfig-generate-token set to false, it is working well; my only problem is that the username used for the rancher token command is not the expected one (more details in this thread).
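For context, with kubeconfig-generate-token set to false the downloaded kubeconfig does not embed a token; instead it carries an exec credential plugin stanza that shells out to the Rancher CLI. A rough sketch of that stanza is below; the server URL and user value are placeholders, and the exact args vary by Rancher version, so treat this as an assumption rather than the verbatim output:
users:
- name: my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: rancher
      args:
      - token
      - --server=https://rancher.example.com
      - --user=u-xxxxx
The unexpected username the poster describes would presumably be whatever ends up in that user argument.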
clever-umbrella-15166
04/15/2024, 12:15 PM
When pulling images from my local registry (http://abcd.com:5000), I'm receiving the following error:
Error response from daemon: Get "https://abcd.com:5000/v2/": dial tcp 100.xx.xx.xx:5000: i/o timeout (Client.Timeout exceeded while awaiting headers)
Configuration Details:
• I have created an /etc/docker/daemon.json file with the following content:
{
  "graph": "/scratch/docker-root",
  "storage-driver": "overlay",
  "insecure-registries": [
    "http://abcd.com:5000"
  ]
}
• The local registry is hosted at http://abcd.com:5000.
• Additionally, I have configured a proxy for Rancher Desktop.
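For reference, Docker expects insecure-registries entries as plain host:port values with no URL scheme, and the graph key was renamed to data-root years ago. A cleaned-up version of the file above would look roughly like this (same path and hostname as in the message; overlay2 substituted for the legacy overlay driver as an assumption about the Docker version in use):
{
  "data-root": "/scratch/docker-root",
  "storage-driver": "overlay2",
  "insecure-registries": [
    "abcd.com:5000"
  ]
}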
most-dog-64048
04/17/2024, 5:17 PMcontainer_settings.allow_ip_forwarding = true
in the CNI when using calico as a custom CNI? Calico’s default is for this to be set to false
, at least when installed via tigera-operator
https://docs.k3s.io/networking/basic-network-options#custom-cnirefined-application-74576
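For reference, allow_ip_forwarding lives in the container_settings section of the Calico CNI config, and with a tigera-operator install it is normally driven from the Installation resource rather than by editing the CNI config directly. A minimal sketch, assuming the operator.tigera.io/v1 API (the containerIPForwarding field name is worth verifying against your Calico version):
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    containerIPForwarding: Enabled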
refined-application-74576
04/17/2024, 6:28 PM
Running rke2 etcd-snapshot save fails. Looking at the source, it looks like 127.0.0.1:9345 is hardcoded as the server. Is there some way to override this? https://github.com/rancher/rke2/blob/master/pkg/cli/cmds/etcd_snapshot.go#L24-L26
refined-application-74576
04/19/2024, 3:54 AM
I'm running rke2 with datastore-endpoint: sqlite:///var/lib/rancher/rke2/server/db/rke2-sqlite.db. The initial setup/install works as expected, but I've noticed an issue with restarting the rke2-server service. If I run systemctl restart rke2-server, it takes a long time. I've tried this multiple times, and each time the service restart takes around 6m40s. The logs are chock full of:
Apr 19 03:34:27 jammy-01 rke2[1735185]: time="2024-04-19T03:34:27Z" level=info msg="Pod for kube-apiserver is synced"
Apr 19 03:34:31 jammy-01 rke2[1735185]: time="2024-04-19T03:34:31Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:9345/v1-rke2/readyz: 500 Internal Server Error"
when this happens.
However, if I issue a systemctl restart rke2-server and follow up with a kill -9 $(pidof kube-apiserver), the service restarts very shortly after. It seems as if there's some internal timer to restart kube-apiserver after ~6m40s? I've also noticed that if I let systemctl restart rke2-server do its thing, after ~6m40s the pid of kube-apiserver indeed does change.
example:
sean@jammy-01:~$ pidof kube-apiserver
1733097
sean@jammy-01:~$ time sudo systemctl restart rke2-server && pidof kube-apiserver
real 6m39.124s
user 0m0.003s
sys 0m0.001s
1741042
Issuing a restart and killing kube-apiserver shortly after:
sean@jammy-01:~$ cat kill.sh
#!/bin/bash
sleep 10
sudo kill -9 $(pidof kube-apiserver)
sean@jammy-01:~$ ./kill.sh &
[1] 1759136
sean@jammy-01:~$ time sudo systemctl restart rke2-server
[1]+ Done ./kill.sh
real 0m21.220s
user 0m0.006s
sys 0m0.005s
Is this expected behavior? Should I file an issue on GitHub?
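An aside on the numbers above: 6m40s is exactly 400 seconds, which reads like a fixed timeout rather than anything load-dependent. To watch what the restart is actually blocked on, plain journal tooling is enough; nothing rke2-specific is assumed here:
# follow rke2-server during the restart and surface the readyz polling
journalctl -u rke2-server -f | grep -E 'readyz|kube-apiserver'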
gifted-sandwich-56892
04/19/2024, 9:37 AM
time="2024-04-19T09:30:20Z" level=error msg="Failed to connect to proxy. Response status: 400 - 400 Bad Request. Response body: websocket: the client is not using the websocket protocol: 'upgrade' token not found in 'Connection' header" error="websocket: bad handshake"
Error during upgrade for host [c-m-6xskmbv9]: websocket: the client is not using the websocket protocol: 'upgrade' token not found in 'Connection' header
Has anyone faced or seen this issue before?
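That "'upgrade' token not found in 'Connection' header" handshake failure is the classic symptom of a reverse proxy in front of Rancher dropping the WebSocket upgrade headers. For nginx, the relevant location-level directives are the standard ones, shown here as a generic sketch rather than a Rancher-specific config:
# forward WebSocket upgrades instead of stripping the hop-by-hop headers
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";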