# rke2
b
Specifically, I want to spawn about 500 pods on a single node, and a total of about 12000 pods in a single cluster with 24 nodes... The /16 in use should provide more than enough addresses
but I understand that each node is using a /24, which is sort of expected as that's the way it's supposed to work 🙂
I would like to reconfigure each node to use a /22 instead...
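For context, the address math works out: splitting a /16 into /22 blocks gives 2^(22-16) = 64 per-node allocations of 1024 addresses each, so 24 nodes at up to 500 pods apiece (12000 pods total) fit comfortably, whereas the default /24 leaves each node well short of 500 usable pod IPs. A minimal sketch of the relevant cluster network settings in `/etc/rancher/rke2/config.yaml`, assuming RKE2's default `10.42.0.0/16` cluster CIDR (substitute the actual /16 in use):
```yaml
# Sketch only: as noted further down in the thread, these cluster network
# settings need to be in place before the cluster is started for the first time.
cluster-cidr: "10.42.0.0/16"      # assumed default; replace with the /16 actually in use
kube-controller-manager-arg:
  - "node-cidr-mask-size=22"      # 64 x /22 node blocks, ~1000 usable pod IPs per node
```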
I tried adding this to `/etc/rancher/rke2/config.yaml` on each control plane node:
```yaml
kube-controller-manager-arg:
  - "node-cidr-mask-size=22"
```
and then restarting `rke2-server`.
(and then deleting and re-adding each node)
Whoops! It looks like a few of my server nodes were actually not properly reconfigured... Trying again now
c
You need to set up cluster network settings before starting the cluster for the first time. Nodes are allocated CIDRs when they join the cluster, and those are not resized if you change the mask later on.
You'll also need to increase the max pods but I imagine you figured that out already?
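For completeness, a minimal sketch of the max-pods change, also in `/etc/rancher/rke2/config.yaml` and applied on every node (agents as well as servers); the value 500 simply matches the per-node target mentioned above, and the kubelet default is 110:
```yaml
# Sketch only: raise the kubelet's per-node pod limit to match the target.
kubelet-arg:
  - "max-pods=500"   # kubelet default is 110
```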
b
Yep, I increased max pods
and as far as I can see, this did work without having to recreate the entire cluster
I had to kick out every single node and readmit them one by one
c
You should be aware that upstream Kubernetes does not test with more than 110 pods per node so you may run into oddities and need to tune other thresholds or intervals. https://kubernetes.io/docs/setup/best-practices/cluster-large/
πŸ‘ 1
b
note the IPs Longhorn sees (which come from each node) are now consistently in /22s
c
Yeah, deleting and re-adding nodes should do it.
πŸ™ 1
πŸ™Œ 1
b
Yep! I spent about five hours on this tonight... god it was painful 😄
I had to delete and recreate every node twice because the first time I screwed up and missed a control plane node (or two?)
so some had the "wrong" settings and were a "bad influence" on the rejoining worker nodes
Hi Brandon! Thanks again for the help. We were able to successfully use RKE2 to run a large simulation of a 7500-node p2p network, with 7500 pods actively communicating in a single namespace and across 24 virtual k8s workers!
🙌 1