Hello team!
I'm investigating k3s as a potential solution for storing a large amount of configuration information for various service teams in my organization. We're considering using Custom Resource Definitions (CRDs) in the API server. Stock K8s was not viable due to the 8GB storage limit etcd imposes.

While the external database feature of k3s solves the size limit, I noticed that while inserting a large number of very large Custom Resources (CRs), the k3s server begins to consume a large amount of the system's RAM. I'm inserting around 1 million of these large CRs on an EC2 instance with over 128GB of RAM. At around 140,000 objects inserted, the k3s server is using over 70% of the system's RAM.

Is this intended? I assume the k3s server is caching the inserted objects? Is there any way to alleviate the high memory usage? Even after stopping the insertions, RAM utilization still sits around 60-70% (it drops only 20-30% from peak).
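For reference, here is roughly what the setup looks like. This is a sketch, not my exact commands: the datastore endpoint, credentials, and manifest paths are placeholders.

```shell
# Start k3s against an external datastore instead of the embedded store.
# (Endpoint URL, user, password, and database name below are placeholders.)
k3s server \
  --datastore-endpoint="mysql://user:pass@tcp(db.example.internal:3306)/k3s"

# Rough shape of the bulk-insert loop that triggers the memory growth.
# my-crs/ is a hypothetical directory holding the generated CR manifests.
for f in my-crs/*.yaml; do
  kubectl apply -f "$f"
done
```

The memory growth happens during the `kubectl apply` loop and persists after it stops, which is what makes me suspect server-side caching rather than transient request handling.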