# harvester
a
This message was deleted.
s
Hi @adamant-twilight-5237. Did you mean to replace a NIC or add an extra NIC?
a
I have 3 NICs on all of my servers: 1 management NIC (1 Gb/s) and 2 dataplane NICs (25 Gb/s, in LACP and trunk). It seems as if all of the live migration traffic is going over the management NIC. I would like for it to go over the dataplane NICs and not the management NIC.
b
We are in a similar setup and are now wondering if the 10 Gbps production requirement applies to the management NIC if we have 25 Gbps for storage and compute
but only 1 Gbps for mgmt
a
I was going to dig through the KubeVirt documentation but figured I would ask here first
👍 1
s
It seems as if all of the live migration traffic is going over the management NIC.
That’s true. And if your source VM has a large amount of memory that keeps getting dirtied, the migration might take longer, because we must wait for the memory to sync over the management network. Could you also check this document? https://kubevirt.io/user-guide/operations/live_migration/#using-a-different-network-for-migrations I think we should add a similar document on the Harvester side. Thanks for the reminder.
👍 2
BTW, we have not tested this before. I created a GH issue for the enhancement: https://github.com/harvester/harvester/issues/5848
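For reference, the KubeVirt doc linked above pins migration traffic to a secondary network by setting spec.configuration.migrations.network on the KubeVirt CR to the name of a Multus NetworkAttachmentDefinition in KubeVirt's own namespace. A minimal sketch of what that could look like on Harvester follows; it assumes the CR is named kubevirt in the harvester-system namespace and reuses the livemigration NAD name that comes up later in this thread, all of which should be verified on a real cluster since this path is untested here.
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt              # assumption: Harvester's KubeVirt CR name; vanilla KubeVirt uses the kubevirt namespace instead
  namespace: harvester-system
spec:
  configuration:
    migrations:
      # Name of a Multus NetworkAttachmentDefinition in the same namespace as KubeVirt.
      # The NAD must provide IPAM so each virt-handler gets an address on the migration network.
      network: livemigration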
a
Thank you so much. Let me know how I can help: testing, configs, etc.
So this is saying that a physical network is required. Is that a hard requirement, or can we put a VLAN in there like we do with the storage network?
s
Yeah, it seems KubeVirt would create the Multus network internally, but I am not sure whether we could directly configure it with a Multus network (maybe the storage network).
a
So, I have been thinking about it, and it seems like it would be easy to take the current storage NetworkAttachmentDefinition (NAD) as a template and make a live migration NAD from it. I have some test equipment, and I am going to test it and see if it takes.
I will report my findings here
I created a NetworkAttachmentDefinition for the live migration network in a test cluster setup. Here is the definition:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: 'livemigration'
  annotations:
  finalizers:
    - wrangler.cattle.io/harvester-network-nad-controller
    - wrangler.cattle.io/harvester-network-manager-nad-controller
  labels:
    network.harvesterhci.io/clusternetwork: backend-bond
    network.harvesterhci.io/ready: 'true'
    network.harvesterhci.io/type: L2VlanNetwork
    network.harvesterhci.io/vlan-id: '21'
  namespace: harvester-system
spec:
  config: '{"cniVersion":"0.3.1","type":"bridge","dns":{},"bridge":"backend-bond-br","promiscMode":true,"vlan":21,"ipam":{"type":"whereabouts","range":"192.168.21.129/26"}}'
It failed to provision the IP addresses, so the migration failed. I went through the whereabouts documentation, but it seemed pretty simplistic: add the whereabouts type to your NetworkAttachmentDefinition and it should just provision the IPs it needs.
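One thing that might be worth double-checking is the whereabouts range: whereabouts treats range as a CIDR (or an explicit start-end within one) and allocates addresses inside it, and it also supports range_start, range_end, and exclude for carving a slice out of a shared subnet. Below is a hedged sketch of a more explicit IPAM section; the 192.168.21.128/26 subnet, the .130-.190 allocation window, and the excluded .129 gateway are illustrative assumptions, not values taken from the cluster above.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  # Harvester labels/finalizers from the NAD above omitted here for brevity.
  name: livemigration
  namespace: harvester-system
spec:
  # Same bridge/VLAN settings as above, with the whereabouts IPAM spelled out.
  # Subnet, start/end window, and excluded gateway are assumptions for illustration.
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "backend-bond-br",
      "promiscMode": true,
      "vlan": 21,
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.21.128/26",
        "range_start": "192.168.21.130",
        "range_end": "192.168.21.190",
        "exclude": ["192.168.21.129/32"]
      }
    }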