# harvester
b
1 - Set up a node (or nodes) for each VLAN and just use tags to make sure they have the appropriate access. Doesn't give as much flexibility/redundancy, but it would probably work.
2 - Add additional NICs for the tagged VLANs. Tagged VLAN traffic just gets routed to the appropriate interface, and we can still manage it through the GUI.
3 - Hack the nodes to add the bridges manually. Per the aforementioned issue, if this gets screwed up it removes all bridge networks, which would be bad.
t
I tested having a VM with multiple tagged VLANs on the same port by configuring the VM to use an untagged port and turning off VLAN filtering on the underlying bridge on the host OS side. To Harvester the VM interface looks untagged, while the VM does its own tagging/untagging:

```bash
echo 0 > /sys/devices/virtual/net/${CLUSTER_NETWORK}-br/bridge/vlan_filtering
```

You can also leave filtering enabled, but then you need to manually manage the allowed VLANs on the necessary interfaces.
It might not work the same if you are sharing the mgmt interface. In my case, I had a dedicated NIC for VM traffic.
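If you keep filtering enabled, managing the allowed VLANs manually looks something like this (a sketch; the interface names and VLAN IDs are made up, not from our setup):

```bash
UPLINK=eth1   # physical NIC enslaved to the bridge
VNET=vnet0    # the VM's tap interface on the bridge

# Allow tagged VLANs 100 and 200 through both ports
for DEV in "$UPLINK" "$VNET"; do
  bridge vlan add dev "$DEV" vid 100
  bridge vlan add dev "$DEV" vid 200
done

# Verify which VLANs each port will pass
bridge vlan show
```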
b
That's helpful! Thanks Chris!
@thousands-action-31843 Do you use cloud-init to set up a VLAN interface on the VM side?
t
No, we did not.
b
We went ahead and set up a separate interface for the VLANs to be untagged/unfiltered on the hosts, but we're trying to figure out the best way to get the VMs to do the tagging. It works so far when we've done it manually.
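Inside the guest, the manual part is roughly this (a sketch; the interface name, VLAN ID, and address are examples):

```bash
# Create a tagged sub-interface for VLAN 100 on the guest's untagged NIC
ip link add link eth0 name eth0.100 type vlan id 100
ip link set eth0.100 up
ip addr add 10.0.100.10/24 dev eth0.100
```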
t
My experience with cloud-init is minimal, so I haven't tried to set it up for VLAN tagging personally. Someone else generated the VM YAML for our test workloads and I didn't even look at it.
b
Is there a mechanism for making

```bash
echo 0 > /sys/devices/virtual/net/${CLUSTER_NETWORK}-br/bridge/vlan_filtering
```

or

```bash
ip link set trunk-br type bridge vlan_filtering 0
```

persistent? It seems like the bridge isn't fully up in time for the 90_custom.yaml in /oem for it to work properly.
t
We have our own boot script which runs when the system boots. I added something to it which waits for the bridge interface to be created by Harvester and then applies the change. I don't know if there is any other way to hook in or apply the change via a Kubernetes config.
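The core of it is something like this (a sketch, not our exact script; no timeout handling):

```bash
#!/bin/bash
# Wait for Harvester to create the cluster-network bridge, then
# disable VLAN filtering on it. Adjust the bridge name to yours.
BR="${CLUSTER_NETWORK}-br"

until [ -e "/sys/devices/virtual/net/${BR}/bridge/vlan_filtering" ]; do
  sleep 2
done

echo 0 > "/sys/devices/virtual/net/${BR}/bridge/vlan_filtering"
```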
b
Mind sharing where you add the script?
t
Our method is overly complicated due to some other requirements. I think the simplest method would be to add a command to the rootfs section of /oem/90_custom.yaml. Currently that section is empty:
```yaml
name: Harvester Configuration
stages:
  rootfs:
  - commands: []
```
You should be able to do something like this:
```yaml
name: Harvester Configuration
stages:
  rootfs:
  - commands:
    - /oem/your_script_to_disable_vlan_filtering.sh
```
I didn't show all the other stages like initramfs, etc. We don't run any commands in rootfs, but we do in the initramfs stage using the same method.
b
That would work. It doesn't wait for the script to finish or anything, right?
t
I am not sure; it probably does, so you can always background another script inside your script to handle that.
Have a wrapper script which just nohups the real script and runs it in the background.
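Something like this (sketch; the path is just the example name from above):

```bash
#!/bin/bash
# Detach the real script so the rootfs stage isn't blocked waiting on it
nohup /oem/your_script_to_disable_vlan_filtering.sh \
  >/var/log/vlan-filtering.log 2>&1 &
```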
b
Yeah, then have it exit 0 so it thinks it's clean.
thanks
Yeah... so I'm not sure why, but it seems like the nohup gets started and then the background process gets killed somehow. Maybe I'll end up writing this as a systemd unit instead.
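If I do, I'm picturing something like this (untested sketch; the unit and script names are made up, and on Harvester the unit itself would probably have to be laid down from the oem config since the OS is immutable):

```bash
# Install and start a oneshot unit that runs the filtering script at boot
cat > /etc/systemd/system/disable-vlan-filtering.service <<'EOF'
[Unit]
Description=Disable VLAN filtering on the Harvester cluster bridge
After=network.target

[Service]
Type=oneshot
ExecStart=/oem/your_script_to_disable_vlan_filtering.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now disable-vlan-filtering.service
```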