# harvester
g
would it be possible for you to please create an issue with detailed steps of what you are trying to do and requirements?
it is hard to answer without complete context
b
I will create an issue if I think it's an issue. Currently it could be anything. Let's just start by answering the simple question: why is the CNI MTU in Harvester always 1500 (or rather 1450, because of the VXLAN header)?
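For context, the 1450 figure is the standard VXLAN encapsulation overhead subtracted from the default 1500. A rough back-of-the-envelope check, assuming an IPv4 outer header:

```shell
# VXLAN adds an outer Ethernet (14) + IPv4 (20) + UDP (8) + VXLAN (8)
# header around each inner frame, i.e. 50 bytes of overhead.
vxlan_overhead=$((14 + 20 + 8 + 8))
echo "inner MTU with 1500 underlay: $((1500 - vxlan_overhead))"
echo "inner MTU with 9000 underlay: $((9000 - vxlan_overhead))"
```

So with jumbo frames on the underlay, the pod-side MTU could in principle be 8950 instead of 1450.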
g
Likely because that is what is packaged as the default with the cni
b
Yes, probably (sadly). Does it make sense for a storage network (which runs in K8s pods) to use 1450 even when the host adapters use 9000? My answer is no.
This is what it looks like when a storage network is configured:
```
instance-manager-e-3a8b8e09:/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if42: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether c6:6e:22:a3:63:af brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.52.1.75/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::c46e:22ff:fea3:63af/64 scope link 
       valid_lft forever preferred_lft forever
4: lhnet1@if44: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 36:e6:4e:18:cf:33 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.190.139.81/26 brd 10.190.139.127 scope global lhnet1
       valid_lft forever preferred_lft forever
    inet6 fe80::34e6:4eff:fe18:cf33/64 scope link 
       valid_lft forever preferred_lft forever
```
lhnet1 is the storage network, automatically configured by Harvester. It has MTU 1500. If no storage network is configured, the storage traffic uses eth0, which has 1450. The underlying adapters have MTU 9000, but that is wasted.
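For illustration, the MTU would presumably live in the CNI config behind the storage network's NetworkAttachmentDefinition. A hypothetical sketch (the names, IPAM plugin, and range here are assumptions, not what Harvester actually generates):

```json
{
  "cniVersion": "0.3.1",
  "name": "storage-network",
  "type": "bridge",
  "bridge": "br-storage",
  "mtu": 9000,
  "ipam": {
    "type": "whereabouts",
    "range": "10.190.139.64/26"
  }
}
```

The standard `bridge` CNI plugin does accept an `mtu` field, so the question is whether Harvester exposes (or preserves) it.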
BTW: I'm not sure if this is a "Harvester only" feature. Maybe I should ask in #longhorn-storage ?
No ideas? No replies in longhorn either? Is that requirement that strange?
I created a case... let's see...
g
the setting is the default from the cni
b
Sure. But intentionally? I mean, why are you able to set MTU 9000 on the physical adapters when it's never used in the upper networks?
g
I am not questioning the requirement. I am waiting for the longhorn team to answer whether they benchmark longhorn against different MTUs and see an issue in doing so
if it seems like a recurring ask from end users we can expose this setting. Right now, if you apply the change manually, it is likely going to be reset during the upgrade path
b
Ok, then let's wait for the longhorn people. I think it will be very useful to have a bigger MTU on the (dedicated) storage network. I doubt it's useful on the public and management networks. And it might be useful in customer VMs. But currently it's not possible to test because it can't be configured.
g
you can configure it if you want to... like I said, it will most likely be wiped on upgrades, and I just need to test out a way to persist it across upgrades