https://rancher.com/

polite-piano-74233

03/22/2023, 8:03 PM
i assume because they are separate VMs that would make it fine, given how flannel etc. set up VXLAN
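For context, flannel's VXLAN backend is chosen in the `net-conf.json` it reads from its ConfigMap; a typical default looks like the sketch below (`10.244.0.0/16` is flannel's default pod CIDR — separate clusters on the same L2 segment would each want their own non-overlapping range):

```json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
```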

rough-farmer-49135

03/22/2023, 10:55 PM
I also don't know, but I'd also guess it would work. Though it's not impossible that if the hypervisor is trying to do something too fancy with VM networking, things might get crossed.
I know I set up an 8-VM-node cluster on 5 physical nodes and had worker & control plane VMs on the same host (CentOS 7, VMs via KVM) and saw no problems there, though since those VMs were all in the same cluster, it's not impossible that separate clusters might've had an issue. I also had another 5 physical nodes with another 8-VM cluster on the same switch & VLAN (hypervisors with bonded NICs & VMs bridged to the same IP space as the hypervisors) and also saw no issues. Still not quite the same situation.
Strike that, I'm dumb. Those clusters were Rancher, so I had a 3-master cluster hosting Rancher on the same 5 physical nodes next to the 8-node downstream cluster (all clusters RKE2). I did have issues at times, but a lot of that was leaving SELinux enforcing before it was fully production-supported for RKE2, and other similar decisions. Things seemed OK after getting a successful setup, though.
So I can tell you it worked with CentOS 7 hypervisors using KVM for virtualization, CentOS 7 VMs on a bridged network, and RKE2 as the Kubernetes distro.
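For anyone reproducing that layout, a minimal libvirt network definition that puts VMs straight onto an existing host bridge looks roughly like this (the names `br0` / `br0-net` are placeholders — this assumes the bridge is already configured on the hypervisor):

```xml
<!-- define and start with:
     virsh net-define bridged.xml
     virsh net-start br0-net -->
<network>
  <name>br0-net</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
```

With this, VM NICs attached to `br0-net` get addresses in the same IP space as the hypervisors, matching the setup described above.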
👍 2