# rke2
a
Ok thank you. So basically the answer is no, we're stuck at 255 nodes in this cluster. We have over a hundred TBs of Longhorn volumes and PVCs that we probably wouldn't be able to transfer to a new cluster. Dang
c
You need to plan cluster sizing (number of nodes, and pods per node) in advance, with larger CIDR blocks and node CIDR masks. They effectively cannot be changed after the fact.
There is work in progress in Kubernetes to support multiple CIDR blocks, but it is still in alpha and, as far as I know, not supported by any CNIs.
👍 1
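For anyone who finds this later: those ranges are set once in the rke2 server config, and the defaults (a /16 cluster CIDR with a /24 per-node mask) are exactly where the ~255-node ceiling comes from. A rough sketch with illustrative values only:

```yaml
# /etc/rancher/rke2/config.yaml on the server nodes -- illustrative values only;
# these must be chosen before the cluster is created and cannot be changed later.
cluster-cidr: "10.32.0.0/13"     # pod network; a /13 with /24 node masks allows 2^(24-13) = 2048 nodes
service-cidr: "10.43.0.0/16"     # ClusterIP service range (must not overlap the cluster CIDR)
kube-controller-manager-arg:
  - "node-cidr-mask-size=24"     # per-node pod subnet: a /24 gives ~254 pod IPs per node
```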
a
Ok thanks. I think we can kind of get around it for now by going from T3 instance types with 32G of RAM to M4s with 64G of RAM, and just having fewer worker nodes.
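One related knob if you go the fewer-but-bigger-nodes route: the kubelet's default limit is 110 pods per node, which can be raised via the rke2 agent config without touching the CIDRs, as long as it stays under the ~254 pod IPs a /24 node mask provides. A minimal sketch (the value is just an example):

```yaml
# /etc/rancher/rke2/config.yaml on the agent nodes -- example value only;
# keep it below the ~254 pod IPs available from a /24 per-node pod CIDR.
kubelet-arg:
  - "max-pods=200"
```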
h
Over a hundred TBs of Longhorn - wow! 😎 How fast are your NICs and disks?
a
We're just using t3.2xlarge instances with attached EBS volumes for our nodes with Longhorn storage, and we have separate dedicated "compute" nodes that run the pods that attach the Longhorn volumes. This is all for Coder (https://coder.com/) workspaces, which each have a roughly 50GB persistent volume mounted as the home directory in the workspace pod. A little over 900 Longhorn volumes in our biggest deployment. Right now we have a little over 50 dedicated Longhorn nodes, each with a 2TB EBS volume attached, and anywhere between 50 and 80 replicas on each Longhorn node. It's a big PITA to reboot these nodes for maintenance, so we've considered increasing the EBS volume size so we have fewer dedicated Longhorn nodes... but I don't really know if that would end up being too many replicas on a single node
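For readers wondering how a dedicated-storage-node layout like that is typically wired up: Longhorn node tags plus a StorageClass nodeSelector keep replicas on the storage nodes while the workspace pods run elsewhere. The class name, tag, and values below are hypothetical, not the actual config. (On the replica math: ~900 volumes at Longhorn's default of 3 replicas each is ~2700 replicas, which spread over ~50 storage nodes lines up with the 50-80 per node mentioned above.)

```yaml
# Hypothetical StorageClass for the workspace home volumes -- names and values
# are illustrative, not the poster's actual configuration.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-coder-home
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"        # replicas per volume; drives the replica count seen on each storage node
  staleReplicaTimeout: "2880"  # minutes before an errored replica of a detached volume is cleaned up
  nodeSelector: "storage"      # Longhorn node tag applied to the dedicated Longhorn/EBS nodes
  dataLocality: "disabled"     # workspace pods run on separate compute nodes, so no local replica
allowVolumeExpansion: true
```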