# harvester
p
Sorry, it’s about 3.3 MB per second.
I removed the guest cluster but still see 600 kB/s read and 700 kB/s write. I just wonder why the rate is so high at idle, and how long the server SSDs will last under this load.
After some time I only see a 300 kB/s Harvester write rate.
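Since the SSD-lifetime worry is basically arithmetic, here is a back-of-the-envelope sketch. The TBW rating is a hypothetical placeholder (check the drives' spec sheets), and this ignores how RAID 6 spreads and amplifies writes across the array:

```python
# Rough SSD endurance estimate from a sustained write rate.
# All numbers are illustrative assumptions, not specs for these drives.
BYTES_PER_MB = 1_000_000
SECONDS_PER_YEAR = 365 * 24 * 3600

write_rate_mb_s = 3.3   # observed cluster-wide write rate from the metrics above
tbw_rating_tb = 150     # assumed endurance rating for a small SSD (placeholder)

tb_written_per_year = write_rate_mb_s * BYTES_PER_MB * SECONDS_PER_YEAR / 1e12
years_to_tbw = tbw_rating_tb / tb_written_per_year
print(f"~{tb_written_per_year:.0f} TB/year, "
      f"~{years_to_tbw:.1f} years to reach {tbw_rating_tb} TBW")
```

At 3.3 MB/s that is roughly 104 TB written per year, so the real answer depends heavily on the drives' actual TBW rating and the RAID layout.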
s
Hi @powerful-table-93807, did you check this throughput on your service pods, or did you benchmark inside your guest VM? Also, could you describe the hardware you used? e.g. NVMe or SSD, and a specific StorageClass or the default one?
p
@salmon-city-57654 Hi. I used all the defaults provided by Harvester 1.2.1 and checked the throughput in the VM metrics. They showed one control plane node with a constant 1.8 MB/s write rate and the rest at 300 or 150 kB/s. I created the cluster with 3 control-plane/etcd nodes + 2 worker nodes and configured Fleet with a small demo project (5 pods or so). Hardware-wise, the server has 10 × 200 GB SSDs in RAID 6.
s
Could you run a benchmark directly on this RAID? RAID 6 does reduce write performance somewhat, but we need to know the baseline of this array (that’s why I asked for the benchmark) before we can interpret the numbers. Did you use a hardware RAID card for the RAID 6? If so, does the card have a BBU?
p
I tried some benchmarks on Ubuntu; read speed was about 600 MB/s and write about 400 MB/s.
s
Hi @powerful-table-93807, did you run the benchmark by (1) installing Ubuntu directly on this RAID 6, or (2) creating an Ubuntu VM on Harvester? Also, how did you get this result: fio, dd, or another benchmark tool? And what was the benchmark IO pattern?
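For what it’s worth, etcd’s fsync-heavy pattern can be approximated with an fio job along these lines. This is a sketch only: the directory, size, and block size are placeholder assumptions, and `fdatasync=1` forces a sync after every write, which is the part that usually dominates on RAID 6:

```
; Sketch of an etcd-like fio job: small sequential writes,
; with fdatasync after every write.
[etcd-like]
ioengine=sync
rw=write
bs=2300
size=200m
fdatasync=1
; Placeholder: point at a mount on the RAID 6 array.
directory=/mnt/raid6-test
```

Running it with `fio <jobfile>` gives sync latency and IOPS figures, which tell you much more about etcd-style load than the sequential MB/s numbers from desktop benchmark tools.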
p
@salmon-city-57654 Is there any correlation between the RAID write speed and how much data Harvester + an RKE2 cluster installed by Rancher will constantly write to disk?
I did the benchmark with the default Disks application on Ubuntu 23 desktop, and also used CrystalDiskMark on Windows Server.
Here is what I have now: disk write per VM, and the overall Harvester cluster write.
s
Is there any correlation between the RAID write speed and how much data Harvester + an RKE2 cluster installed by Rancher will constantly write to disk?
Because the guest cluster on Harvester uses the Harvester StorageClass (i.e. Longhorn), I would like to see benchmark data from the Harvester cluster itself.
Or could you elaborate more on your RKE2 cluster architecture? I am afraid I may have misunderstood something.
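One thing worth keeping in mind when comparing guest-level and host-level numbers (an assumption to verify against your actual StorageClass settings): Longhorn writes every volume update to each replica, so the write rate seen on the host can be a multiple of what the guest reports. A toy illustration:

```python
# Toy Longhorn write-amplification estimate.
# The replica count is an assumption; check the replica setting
# of the harvester/longhorn StorageClass actually in use.
guest_write_kb_s = 600   # example guest-level write rate
replicas = 3             # Longhorn's common default replica count (assumption)
host_write_kb_s = guest_write_kb_s * replicas
print(host_write_kb_s)
```

With a single-node Harvester the effective replica count may be lower, so this is only a sketch of why the two dashboards can legitimately disagree.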
p
@salmon-city-57654 Our company is looking to build a new private cloud infrastructure instead of using VMware, because of the big licensing price. Harvester is one of the products we found, and for testing we installed it on one of our servers to evaluate and compare it with other solutions like MAAS from Canonical. So I have a 1-node Harvester cluster, plus an RKE2 cluster deployed through Rancher with the following configuration:
• 3 etcd nodes: 4 CPU, 8 GB
• 2 control plane nodes: 4 CPU, 8 GB
• 2 worker nodes: 20 CPU, 40 GB
One thing that worries me is the high constant disk writing, which will kill the disks in a few years, plus that Harvester out of the box uses a lot of resources.
Here are some screenshots where you can see that the RKE2 cluster by itself does not consume many resources, and that no individual node has a high disk write rate (the etcd nodes write about 150 kB/s, which I know is fine), about 600 kB/s in total. But the Harvester dashboard shows an overall cluster disk write rate of about 2.7 MB/s, which is much, much higher. And the metrics from the Harvester server show it using a huge amount of RAM and CPU (I don't know why it constantly uses 7 CPUs). My question: is this normal Harvester behavior, or is something off that is causing issues?
s
Hi @powerful-table-93807, thanks for your reply. So it looks like you have already deployed a 7-node RKE2 cluster and a one-node Harvester, right? You also mentioned you would like to build a new private cloud infrastructure, so I assume you would like to use Harvester to provision VMs? But I am not really sure about the role of the RKE2 cluster here. If most of your cases just need a VM, you can use a VM directly. If your scenario is containers + VMs, you need Rancher + Harvester; that is more flexible. Would you like to share more about your scenario?
Also, about the disk writes: first, what type of disks are these? NVMe, SSD, or HDD? Harvester has its own embedded Rancher cluster and various components (like Longhorn and KubeVirt), so its disk IO will be somewhat higher than a plain etcd node's. Could we look at the disk IO more specifically? I think this figure shows the total IO, but I am unsure about the write portion. Could you share the write statistics?
My question: is this normal Harvester behavior, or is something off that is causing issues?
Sorry, did you mean the disk IO or the CPU/memory usage?
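To pin down where the writes actually go on the Harvester host, the per-device write counters in `/proc/diskstats` can be sampled directly. A minimal sketch, with field positions per the documented Linux diskstats layout; sample twice and difference the values to get a rate:

```python
def bytes_written(lines):
    """Map device name -> cumulative bytes written, from /proc/diskstats lines.

    In each line, field 3 is the device name and field 10 is the count of
    sectors written; diskstats sectors are always 512 bytes.
    """
    stats = {}
    for line in lines:
        fields = line.split()
        if len(fields) < 10:
            continue
        stats[fields[2]] = int(fields[9]) * 512
    return stats

# On the host: bytes_written(open("/proc/diskstats"))
# Sampling twice, e.g. one second apart, and subtracting gives bytes/s
# per device, which can be compared against the dashboard figure.
```

This shows whether the 2.7 MB/s lands on one device or is spread across the array.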
p
@salmon-city-57654 Yes. I also think that in my case it may be better to use, for example, Proxmox to create the VMs and then set up the RKE2 cluster using Rancher.