# harvester
a
For the 10Gb mgmt network, I would say for production it's recommended to have that ready before you start installing and configuring Harvester. It's more about the necessary speed for images, but also access to Longhorn and virtual network storage, which can very easily get congested on a 1Gb network, especially if you have multiple physical Harvester nodes that use the same shared Longhorn network storage across hundreds of VMs.
w
I'm not sure I follow - once installed, there is the option of choosing a dedicated NIC for Longhorn traffic, i.e. [YOURDOMAIN]/dashboard/harvester/c/local/harvesterhci.io.setting/storage-network?mode=edit. This gives you the option to set a dedicated network as the storage-network. Are you saying that transferring images and moving data still happen over management, i.e. the storage network only carries traffic between volumes - the raw data between them, i.e. replicas?
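(For reference, that dashboard form edits a Harvester setting that can also be read or changed through the Kubernetes API. A minimal sketch, assuming the setting is exposed as a cluster-scoped `settings.harvesterhci.io/v1beta1` object named `storage-network` whose `value` field is a JSON string with `vlan`/`clusterNetwork`/`range` keys - verify the exact schema against your Harvester version:)

```python
# Minimal sketch, assuming Harvester exposes its settings as cluster-scoped
# custom resources (group harvesterhci.io, version v1beta1, plural "settings")
# and that the storage-network value is a JSON string - verify before use.
import json
from kubernetes import client, config

config.load_kube_config()  # kubeconfig pointing at the Harvester cluster
api = client.CustomObjectsApi()

# Read the current storage-network setting.
setting = api.get_cluster_custom_object(
    group="harvesterhci.io", version="v1beta1",
    plural="settings", name="storage-network",
)
print("current value:", setting.get("value"))

# Dedicate a VLAN / cluster network to Longhorn replica traffic.
desired = {
    "vlan": 100,                      # hypothetical VLAN ID
    "clusterNetwork": "storage-net",  # hypothetical cluster network name
    "range": "192.168.100.0/24",      # IP range handed out to Longhorn pods
}
api.patch_cluster_custom_object(
    group="harvesterhci.io", version="v1beta1",
    plural="settings", name="storage-network",
    body={"value": json.dumps(desired)},
)
```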
a
Oh, well then I guess you're fine. I just use the mgmt interface for all things related to this, on SFP+ 10Gb+.
w
I think you're right though - if images are transmitted over the management NIC to other nodes, and the storage itself needs a network, then those are both heavy processes - and if management is on 10G and so is your storage, then all of this happens on one interface, which we have found to be OK 90% of the time. Where things come unstuck is when you provision something that causes large I/O on the storage - then the rest of the management, e.g. the kube API, starts to struggle - hence the desire to segregate the storage onto its own link.
a
True, but you're still on a 10Gb interface, preferably on SFP+, so you should be fine.
w
However, if the rest of the management is heavy on I/O too, then obviously the 1G link is not suitable - so I'll have to use both SFP+ 10G ports for Harvester, which is a pain as we're limited on ports.
a
If you're worried (if this is a 10+ node cluster), then you should maybe consider 40Gb SFP interfaces,
or bundle two 10Gb interfaces (I don't remember the function name right now).
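(The feature being described is NIC bonding / link aggregation - LACP, bond mode 802.3ad. A rough sketch of creating a bonded uplink through the Kubernetes API, assuming Harvester's cluster-scoped `VlanConfig` resource in `network.harvesterhci.io/v1beta1` with `uplink.nics`/`bondOptions` fields; the NIC names and cluster-network name are hypothetical, and the same configuration is available through the Harvester UI's network configuration screens:)

```python
# Rough sketch: bond two 10Gb ports into one uplink for a Harvester cluster
# network. Assumes the cluster-scoped VlanConfig resource in
# network.harvesterhci.io/v1beta1 (plural "vlanconfigs") with
# uplink.nics / uplink.bondOptions fields; NIC names and the cluster-network
# name are hypothetical - check your Harvester version's schema.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

vlan_config = {
    "apiVersion": "network.harvesterhci.io/v1beta1",
    "kind": "VlanConfig",
    "metadata": {"name": "storage-bond"},
    "spec": {
        "clusterNetwork": "storage-net",   # hypothetical dedicated network
        "uplink": {
            "nics": ["ens1f0", "ens1f1"],  # the two SFP+ 10Gb ports
            "bondOptions": {
                "mode": "802.3ad",         # LACP; switch ports must match
                "miimon": 100,             # link-monitor interval in ms
            },
        },
    },
}

api.create_cluster_custom_object(
    group="network.harvesterhci.io", version="v1beta1",
    plural="vlanconfigs", body=vlan_config,
)
```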
w
no - it's 5 nodes atm
a
How many VMs, approximately?
w
only a handful of dedicated ones - the rest are for k8s, running clusters on top with Rancher
hence I/O between nodes is high
a
I guess it all depends on traffic, but I know for about 300 VMs at work, we have 40Gb interfaces to the dedicated storage that holds all the VM LUNs,
but we have VMs that can be very I/O heavy.
w
I think the NVMe drives can saturate the I/O though, as they are very fast.
a
well, if you stream the disk from storage to the VM,
then you will use bandwidth
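(Rough numbers for context: 10 GbE carries about 10 Gbit/s ÷ 8 ≈ 1.25 GB/s of payload, while a single current NVMe drive can stream several GB/s sequentially - so one busy replica rebuild or image copy can saturate the link well before the drive itself becomes the bottleneck.)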
w
Hence thinking that if we put them on their own link it might help... the problem is replicas and the HA-ness of it all... it's fine 90% of the time, and we're also not 100% sure the NIC is the problem, as it may be the resources allocated creating bottlenecks. Thanks for your take though.
I'm going to recreate some of the problem edge cases we hit earlier today, but with more resources allocated to the cluster, to help rule out CPU as the culprit for dropouts of VMs during provisioning of a large volume... I've got a second cluster building for the upgrade, so I'm trying a few things out before we test restoring from backups. (Looking at upgrading the main cluster, hence the large I/O from getting things backed up and ready, and the issues with stability - so thinking about improvements.)
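(One way to rule the NIC in or out before re-running those tests: confirm Longhorn traffic is actually going over the dedicated storage network. A sketch, assuming a default Harvester install where Multus records attached networks in the `k8s.v1.cni.cncf.io/network-status` pod annotation and the instance-manager pods carry the `longhorn.io/component=instance-manager` label:)

```python
# Sketch: check whether the Longhorn instance-manager pods actually have a
# second interface on the storage network. Assumes a default Harvester install
# where Multus records attachments in the k8s.v1.cni.cncf.io/network-status
# annotation and instance-manager pods carry this label - adjust as needed.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod(
    "longhorn-system",
    label_selector="longhorn.io/component=instance-manager",
)
for pod in pods.items:
    annotations = pod.metadata.annotations or {}
    status = annotations.get("k8s.v1.cni.cncf.io/network-status", "<none>")
    # An attachment with an IP in the storage-network range means replica
    # traffic can use the dedicated link; "<none>" means it is all on mgmt.
    print(pod.metadata.name, status)
```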