stale-painting-80203

05/17/2022, 2:33 PM
I am trying to install and run RKE2 in a SLES 15 SP3 VM and am running into issues. I am following the guide at https://docs.rke2.io/install/quickstart/. I have disabled the firewall and enabled IPv4/IPv6 forwarding in Wicked. The server seems to be running, but I do see errors in the logs. Also, when I installed the agent on the same VM, I got this error:
msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:48636->127.0.0.1:6444: read: connection reset by peer
curl gives the following error:
curl -v https://127.0.0.1:6444/cacerts
*  Trying 127.0.0.1:6444...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 6444 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 127.0.0.1:6444 
* Closing connection 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 127.0.0.1:6444
Does anyone know what might be causing this? curl -k https://127.0.0.1:6443 does give a response.
rke2-server.service - Rancher Kubernetes Engine v2 (server)
   Loaded: loaded (/etc/systemd/system/rke2-server.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2022-05-16 20:46:15 PDT; 4min 55s ago
    Docs: https://github.com/rancher/rke2#readme
  Process: 48957 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited>
  Process: 48960 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
  Process: 48961 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
  Main PID: 48962 (rke2)
   Tasks: 205
   CGroup: /system.slice/rke2-server.service
       ├─ 2652 /var/lib/rancher/rke2/data/v1.22.9-rke2r2-88ecb1441384/bin/containerd-shim-runc-v2 -namespace k8s.>
       ├─ 2673 /pause
       ├─ 2707 /var/lib/rancher/rke2/data/v1.22.9-rke2r2-88ecb1441384/bin/containerd-shim-runc-v2 -namespace k8s.>
       ├─ 2711 /var/lib/rancher/rke2/data/v1.22.9-rke2r2-88ecb1441384/bin/containerd-shim-runc-v2 -namespace k8s.>
       ├─ 2737 /var/lib/rancher/rke2/data/v1.22.9-rke2r2-88ecb1441384/bin/containerd-shim-runc-v2 -namespace k8s.>
       ├─ 2758 /pause
       ├─ 2778 /pause
       ├─ 2813 /var/lib/rancher/rke2/data/v1.22.9-rke2r2-88ecb1441384/bin/containerd-shim-runc-v2 -namespace k8s.>
       ├─ 2836 /var/lib/rancher/rke2/data/v1.22.9-rke2r2-88ecb1441384/bin/containerd-shim-runc-v2 -namespace k8s.>
       ├─ 2841 /pause
       ├─ 2866 /pause
       ├─ 2882 kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wa>
       ├─ 3069 etcd --config-file=/var/lib/rancher/rke2/server/db/etcd/config
       ├─ 3761 /var/lib/rancher/rke2/data/v1.22.9-rke2r2-88ecb1441384/bin/containerd-shim-runc-v2 -namespace k8s.>
       ├─ 3782 /pause
       ├─ 3807 /var/lib/rancher/rke2/data/v1.22.9-rke2r2-88ecb1441384/bin/containerd-shim-runc-v2 -namespace k8s.>
       ├─ 3835 /pause
       ├─ 3863 /var/lib/rancher/rke2/data/v1.22.9-rke2r2-88ecb1441384/bin/containerd-shim-runc-v2 -namespace k8s.>
       ├─ 3890 /var/lib/rancher/rke2/data/v1.22.9-rke2r2-88ecb1441384/bin/containerd-shim-runc-v2 -namespace k8s.>
       ├─ 3920 /var/lib/rancher/rke2/data/v1.22.9-rke2r2-88ecb1441384/bin/containerd-shim-runc-v2 -namespace k8s.>
       ├─ 3940 /pause
       ├─ 3941 /pause
       ├─ 4073 /coredns -conf /etc/coredns/Corefile
       ├─38373 kube-apiserver --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --allow-privileged>
       ├─42523 /cluster-proportional-autoscaler --namespace=kube-system --configmap=rke2-coredns-rke2-coredns-aut>
       ├─43471 /var/lib/rancher/rke2/data/v1.22.9-rke2r2-88ecb1441384/bin/containerd-shim-runc-v2 -namespace k8s.>
       ├─43493 /pause
       ├─43738 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
       ├─45408 /usr/sbin/runsvdir -P /etc/service/enabled
       ├─45734 runsv felix
       ├─45735 runsv monitor-addresses
       ├─45736 runsv allocate-tunnel-addrs
       ├─45737 runsv node-status-reporter
       ├─45738 runsv cni
       ├─45739 calico-node -monitor-addresses
       ├─45740 calico-node -felix
       ├─45741 calico-node -allocate-tunnel-addrs
       ├─45742 calico-node -status-reporter
       ├─45743 calico-node -monitor-token
       ├─46226 /var/lib/rancher/rke2/data/v1.22.9-rke2r2-88ecb1441384/bin/containerd-shim-runc-v2 -namespace k8s.>
       ├─46261 /pause
       ├─48962 /opt/rke2/bin/rke2 server
       ├─48995 containerd -c /var/lib/rancher/rke2/agent/etc/containerd/config.toml -a /run/k3s/containerd/contai>
       ├─49051 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-freque>
       └─49452 kube-scheduler --permit-port-sharing=true --authentication-kubeconfig=/var/lib/rancher/rke2/server>

May 16 20:50:57 suse-vm rke2[48962]: E0516 20:50:57.578889  48962 memcache.go:101] couldn't get resource list for metr>
May 16 20:51:05 suse-vm rke2[48962]: I0516 20:51:04.992233  48962 trace.go:205] Trace[411526421]: "Reflector ListAndWa>
May 16 20:51:05 suse-vm rke2[48962]: Trace[411526421]: ---"Objects listed" 61358ms (20:51:03.401)
May 16 20:51:05 suse-vm rke2[48962]: Trace[411526421]: [1m2.51858886s] [1m2.51858886s] END
May 16 20:51:08 suse-vm rke2[48962]: time="2022-05-16T20:51:08-07:00" level=info msg="Event(v1.ObjectReference{Kind:\"A>
May 16 20:51:09 suse-vm rke2[48962]: E0516 20:51:09.360628  48962 memcache.go:196] couldn't get resource list for metr>
May 16 20:51:09 suse-vm rke2[48962]: E0516 20:51:09.931690  48962 memcache.go:101] couldn't get resource list for metr>
May 16 20:51:10 suse-vm rke2[48962]: time="2022-05-16T20:51:10-07:00" level=info msg="Starting /v1, Kind=Secret control>
May 16 20:51:10 suse-vm rke2[48962]: time="2022-05-16T20:51:10-07:00" level=info msg="Starting /v1, Kind=Node controlle>
May 16 20:51:10 suse-vm rke2[48962]: I0516 20:51:10.690858  48962 leaderelection.go:248] attempting to acquire leader
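
Since the server unit looks healthy, a minimal way to sanity-check it from the same VM is with the kubeconfig and kubectl that the rke2-server install drops in place (a sketch, assuming the default quickstart paths):

export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
export PATH=$PATH:/var/lib/rancher/rke2/bin
# the node should report Ready and the control-plane pods should be Running
kubectl get nodes -o wide
kubectl get pods -A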

careful-piano-35019

05/17/2022, 2:41 PM

stale-painting-80203

05/17/2022, 2:43 PM
Yes, I applied those port-forwarding settings.
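
For reference, the forwarding settings usually needed on SLES with Wicked can be persisted with a sysctl drop-in (a sketch; the filename is illustrative):

# /etc/sysctl.d/90-rke2.conf (illustrative name)
net.ipv4.conf.all.forwarding = 1
net.ipv6.conf.all.forwarding = 1
# apply without rebooting
sysctl -p /etc/sysctl.d/90-rke2.conf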

narrow-jelly-24220

05/17/2022, 4:38 PM
@stale-painting-80203 As for the server, I can see that the logs are fine, but I am confused: why did you install the agent on the same node?

stale-painting-80203

05/17/2022, 4:41 PM
Mostly because I was trying to learn RKE2 and wanted to try it out. I could install the agent on another VM instead.
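
If you do try a second VM, a rough sketch of the agent setup from the quickstart (server address and token are placeholders you need to fill in):

# on the second VM, install in agent mode
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" sh -
# point the agent at the existing server; 9345 is the RKE2 supervisor port,
# and the token lives in /var/lib/rancher/rke2/server/node-token on the server
mkdir -p /etc/rancher/rke2
cat <<EOF > /etc/rancher/rke2/config.yaml
server: https://<server-ip>:9345
token: <token-from-the-server>
EOF
systemctl enable --now rke2-agent.service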

narrow-jelly-24220

05/17/2022, 4:42 PM
No, sorry for the misunderstanding. The server already runs an embedded agent on the same node, so you don't need to install the agent there separately.
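
This is also visible in the status output above: kubelet, containerd, and kube-proxy all run as children of rke2-server. A quick way to confirm on the node (assuming the same unit name):

systemctl status rke2-server.service | grep -E 'kubelet|containerd|kube-proxy'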

stale-painting-80203

05/17/2022, 4:52 PM
I didn't know that. Thanks for the clarification.

narrow-jelly-24220

05/17/2022, 4:55 PM
no problem