# rke2
don’t do that!
I would recommend k3s for a Pi. Even that is going to be a stretch to do much with if you only have an older 1GB model.
the idea was: I have a few Raspberry Pi 3Bs (six of them) at my workplace, and I wanted to see whether it's possible to deploy k8s on them, and if so, how much headroom would remain. I've just been experimenting, basically. I did see the etcd pod start up, with about 100M of memory remaining. Then I saw there's an option to split server roles and distribute the control plane, so I tried separating etcd from the rest, but I still only saw kube-proxy start up on the node. That's when I decided to try running only kube-apiserver to see if it would work, and the answer was still the same. Which is why I've been wondering: how much resource does kube-apiserver even need? On a bare VM at idle, the largest resource request I saw was etcd's at 512M, while the others were around the 200M mark, so I expected it to at least start up...
I know it's a super extreme example, but I've just been trying to see how much is possible at this point. And it seems hard to get it running well.
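For reference, the role split I was attempting looked roughly like this in /etc/rancher/rke2/config.yaml. The flag names are from the rke2 server docs as I remember them, so treat this as a sketch and double-check against your version:

```yaml
# config.yaml on the dedicated etcd node:
# run etcd only, no apiserver/controller-manager/scheduler
disable-apiserver: true
disable-controller-manager: true
disable-scheduler: true
```

```yaml
# config.yaml on a control-plane-only node:
# no local etcd; join via the etcd node's supervisor port (9345)
disable-etcd: true
server: https://<etcd-node-ip>:9345
token: <cluster-token>
```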
why would you want to use rke2 for that instead of k3s?
Looking around various online sources, I saw people claiming RKE2 is basically k3s but hardened, so I figured this could be a good use case.
Going through the deployment process, I see a lot of resources also running under the k3s name; for instance, the containerd socket itself runs at unix:///run/k3s/containerd/containerd.sock. But does RKE2 also make various other changes compared to k3s?
I can see that k3s uses embedded SQLite as its DB, but that's not recommended for a multi-node setup, so you have to use embedded etcd, and again this seemed similar to what RKE2 is doing.
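(For multi-node, my understanding from the k3s HA docs is that you opt into embedded etcd on the first server with --cluster-init, roughly like so; worth verifying the exact invocation for your release:)

```sh
# first server: embedded etcd instead of the default sqlite/kine
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# additional servers join the existing etcd cluster
curl -sfL https://get.k3s.io | K3S_TOKEN=<token> \
  sh -s - server --server https://<first-server-ip>:6443
```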
RKE2 does use most of the same code as k3s, but there's more to it than just hardening.
RKE2 runs all the control-plane components as pods, so resource utilization is much higher. It also defaults to etcd instead of kine/SQLite.
RKE2 also ships a bunch more stuff by default.
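If memory is the constraint, you can at least switch off some of the packaged components in config.yaml; something like the below (the component names are from the RKE2 packaged-components docs, so verify them for your release):

```yaml
# /etc/rancher/rke2/config.yaml
# skip packaged components you don't need on a small node
disable:
  - rke2-ingress-nginx
  - rke2-metrics-server
```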
so just getting kube-apiserver alone running wouldn't get the cluster trotting along? Would it even be possible to split these components across all the Pis? I mean, at that point I obviously couldn't run any workloads on them, but I've been wondering whether it would even function.
Although at this point I might just deploy k3s to see if it's even possible to have something running on them. I've also been wondering what the default resource request and limit for the apiserver is, given that it doesn't even start up. It's also super weird that there's no log anywhere saying the apiserver couldn't start because of a lack of resources; it just fails silently, then waits indefinitely for the pod to start.
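If the manifests are where I think they are (the default RKE2 data dir, per the docs; may differ on your setup), something like this should show the actual requests:

```sh
# static pod manifests rendered by rke2 (default data dir assumed)
grep -n -A 6 'resources:' \
  /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
```

RKE2 also seems to expose a control-plane-resource-requests option for tuning these, though I haven't tried it.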
the kubelet's feedback when it fails to schedule static pods due to lack of resources is very bad. There might be something buried in the kubelet log; that's it.
I wouldn't recommend wasting any more time trying to get this to work on a 1GB Pi. Just use k3s, or get Pis with more memory.
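If I remember right, RKE2's kubelet logs to a file rather than journald (default data dir assumed), so grepping that is your best shot:

```sh
# look for admission failures such as "Insufficient memory"
grep -iE 'insufficient|evict|predicate' \
  /var/lib/rancher/rke2/agent/logs/kubelet.log
```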
yeah, at this point k3s is probably my best bet. I'll try it and see if switching to k3s makes any difference. Seeing as it was the apiserver that failed to start up this time and not etcd, hopefully it won't be an issue. Thanks though
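I'll probably start it stripped down, something like this (the --disable values are from the k3s docs; double-check the names against the current release):

```sh
# minimal k3s server for a 1GB Pi: drop the packaged extras
curl -sfL https://get.k3s.io | sh -s - server \
  --disable traefik --disable servicelb --disable metrics-server
```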