# rke2
c
The first one is normal; you’ll see errors connecting to etcd on the server at https://127.0.0.1:2379 until the etcd pod is up. The second one is normal as things bootstrap; you’ll see it switch the load-balancer tunnel connection over to the new endpoint.
The last one is unusual; the agent shouldn’t be trying to connect to etcd. Are you sure you didn’t enable the rke2-server service instead of rke2-agent?
what is the actual problem you’re trying to diagnose? Just posting log lines without describing the problem doesn’t give us much to work with.
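For anyone reading later: assuming a tarball install (where both unit files live under /usr/local/lib/systemd/system), a hypothetical helper like this can show which role a node is actually set up for:

```shell
# Quick check for which RKE2 role a node is configured to run.
# Paths assume a tarball install; adjust for RPM installs.
check_rke2_role() {
  ls -l /usr/local/lib/systemd/system/rke2-*.service 2>/dev/null
  # An agent-only node should report rke2-agent enabled and active,
  # and rke2-server disabled/inactive.
  systemctl is-enabled rke2-server rke2-agent
  systemctl is-active rke2-server rke2-agent
}
# run on the suspect node:
# check_rke2_role
```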
m
In the restored cluster I observed those error messages on a server, so I wanted to confirm whether they are safe to ignore. On one of the agents I restarted the services, but it failed to join and showed those error messages.
resolved this issue by removing `etcd.yaml` from `/var/lib/rancher/rke2/agent/pod-manifests` on the agent node. Not sure about the reason/scenario in which `etcd.yaml` got created on the agent node.
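For anyone hitting the same symptom, the cleanup described above could be sketched as a hypothetical helper like this (verify the manifest really is unexpected before deleting it):

```shell
# Cleanup sketch for a stray etcd static-pod manifest on an agent node.
# Stop the agent first so the kubelet doesn't keep the stale pod running.
remove_stray_etcd_manifest() {
  sudo systemctl stop rke2-agent
  sudo rm /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml
  sudo systemctl start rke2-agent
}
# run on the affected agent node:
# remove_stray_etcd_manifest
```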
c
did you accidentally start rke2-server service on the agent at some point?
m
not really sure about that incident, but on agent nodes, if we execute `systemctl restart rke2-server`, it will fail, right, since it can’t find the `rke2-server` unit file?
c
depends on how you install it. If you install from the tarball, both units are available.
m
oh yeah, I just now noticed that on a server:
```
-rw-r--r-- 1 root root  11 Jul  5 05:27 /usr/local/lib/systemd/system/rke2-agent.env
-rw-r--r-- 1 root root 870 Jul  5 05:27 /usr/local/lib/systemd/system/rke2-agent.service
-rw-r--r-- 1 root root  11 Jul  5 05:27 /usr/local/lib/systemd/system/rke2-server.env
-rw-r--r-- 1 root root 871 Jul  5 05:27 /usr/local/lib/systemd/system/rke2-server.service
```
That’s really weird, since our understanding is that in RKE2 a node can be either a server or an agent, but not both. I strongly believe there is a possibility of hitting the wrong service file on both server and agent nodes. Regardless of the installer, it would be good to have only the `server` or the `agent` unit file present, to avoid such a mishap.
Shall I create a GitHub issue? Or are there already reasons for shipping both in the tarball install?
Probably a similar mishap occurred in the issue we discussed earlier in https://rancher-users.slack.com/archives/C01PHNP149L/p1662855237037759?thread_ts=1660833392.861649&cid=C01PHNP149L
c
no, there is just one tarball and it contains everything including both units, unlike the RPM which is split into three (common/server/agent). The rancher provisioning framework exclusively uses the tarball and needs to be able to install a single package and then later configure the node to perform the appropriate roles.
We expect users not to start the one they don’t want 😉
m
As part of configuring the appropriate roles, it would be good to remove the unwanted unit files. 🙂 It also looks worth documenting in the tarball procedure that those unwanted unit files should be removed.
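One way to do that, sketched as a hypothetical helper (`systemctl mask` is the reversible option; removing the unit files is permanent):

```shell
# On a node that should only ever be an agent (tarball install),
# prevent the server unit from being started by accident.
disable_server_unit() {
  sudo systemctl disable --now rke2-server
  sudo systemctl mask rke2-server
  # or remove the files entirely:
  # sudo rm /usr/local/lib/systemd/system/rke2-server.service \
  #         /usr/local/lib/systemd/system/rke2-server.env
}
# run on the agent node:
# disable_server_unit
```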
c
we do support switching agents to servers by running `rke2 server` instead of `rke2 agent`, but if you prefer not to have the other files around, you’re free to remove them
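Assuming the tarball install’s unit files, that switch might look roughly like this hypothetical helper (the config.yaml details the server role needs are left out):

```shell
# Sketch of promoting an agent to a server via the systemd units
# (tarball install; server-specific config goes in config.yaml as usual).
promote_agent_to_server() {
  sudo systemctl disable --now rke2-agent
  sudo systemctl enable --now rke2-server
}
# run on the node being promoted:
# promote_agent_to_server
```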
m
sure, got it. To improve awareness among users that those unit files exist on the system, it would be good to document this.
> we do support switching agents to servers by running `rke2 server` instead of `rke2 agent`
similarly, can we switch servers to agents by running `rke2 agent` instead of `rke2 server`?
c
no, switching back is not currently supported; there is a bunch of component cleanup that would need to be handled.
m
sure got it, thank you.