# rke2
a
Hello, does someone have experience with running RKE2 on VMs on OpenStack? Or just with low IOPS for etcd in general? If so, did you solve it using PCI passthrough, or do you have other alternatives/recommendations I can try before going the passthrough way? Right now my IOPS is far too low for etcd to live the life it wants to 😆
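For anyone landing on this thread later: before changing the storage layer, it helps to quantify what the etcd disk actually delivers. A minimal sketch using fio to mimic etcd's write-then-fdatasync pattern (the directory path is an assumption for an RKE2 server node; point it at whatever disk backs etcd, and the size/block-size values are the commonly cited etcd benchmark defaults):

```shell
# Benchmark sequential writes with an fdatasync after every write,
# which is the I/O pattern etcd's WAL uses.
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/rancher/rke2/server/db \
    --size=22m --bs=2300 --name=etcd-disk-check

# In the output, look at the fsync/fdatasync latency percentiles:
# etcd generally wants the 99th percentile below roughly 10ms.
```

This gives a concrete number to compare between the bare-metal nodes and the Ceph-backed VMs.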
b
It wasn't on OpenStack, but we had this issue running on hypervisors with spinners. The etcd disks had to be at least SSD or NVMe to be viable; the spinning rust, even in a RAID, just couldn't keep up. I don't think giving the VMs a whole disk via passthrough is going to get you the IOPS boost you need. What's backing your VM disks? Ceph?
a
Yes, it is Ceph. Do you have any tips or tricks? I'm not that familiar with this area, but I saw many etcd failures and I think it's because of the IOPS. When running RKE2 straight on bare metal it wasn't an issue, and I had 10x the IOPS.
b
Yeah, so your IOPS are going to be limited by the Ceph block device, which means the network and disk speeds of your Ceph cluster.
How that pool is set up is what's going to drive those IOPS.
How many Ceph nodes does your pool have?
And are your Ceph disks SSDs? NVMe? HDDs? Or mixed?
a
My pool has 3 OSD nodes, which are all a mix of NVMe and SSD.
No HDDs are used.
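Side note for readers: the class mix can be confirmed from the cluster itself. A quick sketch, assuming admin access to the Ceph CLI:

```shell
# List the device classes Ceph has assigned to OSDs (typically hdd/ssd/nvme)
ceph osd crush class ls

# Show each OSD with its class, host, weight, and up/down status
ceph osd tree

# Cluster-wide and per-pool capacity/usage summary
ceph df
# 'ceph osd df' adds per-OSD utilization, useful for spotting imbalance
```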
b
You might want to split your NVMe and SSD disks into separate device classes/pools if you can. The Ceph folks always told me that you really want more than 3 nodes, and that you start to see real performance gains at around 7+ nodes.
I'd also check your pool/Ceph settings to make sure you have the best priority set for your OpenStack disks.
There's more tuning available for Ceph, like setting the MTU to 9000 on the NICs, that might help.
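The class split described above can be done with CRUSH rules. A rough sketch (the rule/pool names and PG counts are illustrative placeholders, not values from this thread; size the PGs for your own cluster):

```shell
# Create a CRUSH rule that only selects NVMe-classed OSDs,
# then a pool that uses it
ceph osd crush rule create-replicated nvme-only default host nvme
ceph osd pool create fast-rbd 128 128 replicated nvme-only

# Same idea for an SSD-backed pool
ceph osd crush rule create-replicated ssd-only default host ssd
ceph osd pool create bulk-rbd 128 128 replicated ssd-only

# For the MTU 9000 idea: verify jumbo frames actually pass end to end
# (9000-byte MTU minus 28 bytes of IP+ICMP headers = 8972-byte payload)
ping -M do -s 8972 -c 3 <ceph-node-ip>
```

The jumbo-frame check matters because every switch and NIC on the path has to agree on the MTU, or you get silent fragmentation failures rather than a speedup.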
a
Thank you for pointing me in the right direction, I appreciate it very much. I now know where to start digging, which is awesome. I'm going to start by expanding the OSDs onto 3 more nodes, since I have split the mgrs and OSDs now, and those nodes have more disks and power to use 😄
b
good luck
h
This is an old one but a good one... some of the GitHub links need updating https://support.scc.suse.com/s/kb/360045276411?language=en_US