# harvester
s
Is there a DHCP server on your VLAN network? If not, it should be OK to manually configure that interface on the guest VM.
p
There is, but it only hands out IPs in the .50-.150 range, so I figure I should be okay setting a manual IP like .20, for example.
Also, a few things I'm not super sure of:
• I don't need to create another cluster network for the second VLAN in order to use it?
• The details for the VM network should be identical to the details of the existing VLAN, right?
• I should have both management as masquerade and the VM network as bridge (2nd iface) for the VM?
• And finally, I should be able to set the route + address inside the Linux box?
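For reference, a minimal sketch of setting the address and route by hand inside the guest, assuming the second (bridged) NIC shows up as `eth1`; the addresses are placeholders for this setup, with the static IP picked outside the DHCP pool:

```bash
# Hypothetical static configuration of the VM's second, bridged interface.
# eth1, 10.5.106.20/24 and 10.5.106.254 are placeholders; choose an address
# outside the DHCP range (.50-.150).
ip addr add 10.5.106.20/24 dev eth1
ip link set eth1 up
# Only add a default route here if this NIC should carry default traffic;
# otherwise add a more specific route for what is reachable via the VLAN.
ip route add default via 10.5.106.254 dev eth1
```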
s
> I don’t need to create another cluster network for the second VLAN in order to use it?
Yeah, I think so. I am not sure how to avoid the DHCP in your case. cc @red-king-19196, @faint-art-23779, do you have any thoughts?
p
The really weird bit is that when I create the cluster network, I need to add two configs, since the two nodes have differently named interfaces. But the moment I add the network config, the interface stops working, in the sense that from the node I cannot ping the gateway (for example). And unlike the mgmt-br interface, which "claims" eth0 and takes the IP from it, the newclusterntwrk-br interface doesn't claim the ethernet port.
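A quick way to check whether the new cluster network's bridge really claimed the uplink, assuming the `<name>-br` / `<name>-bo` naming pattern that shows up later in this thread and a placeholder NIC name:

```bash
# Check whether the uplink NIC was enslaved by the Harvester-managed bond/bridge.
# Replace eno4/newclusterntwrk-* with the actual names on each node.
ip -br link show eno4                 # should show "master newclusterntwrk-bo" once managed
ip -br link show newclusterntwrk-bo   # bond created for the cluster network
bridge link show                      # the bond should appear as a port of newclusterntwrk-br
```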
r
Hi @powerful-easter-15334 Could you attach the support bundle file so we can examine your network setup, including the cluster network and other relevant configurations? If the new cluster network’s bridge does not contain the uplink interface, it cannot form a “cluster-wide” network.
p
Hello. Yep, I'll take care of that in a moment
I'll send the support bundle in a second; I decided to retry everything from scratch. Created the cluster network, then the configs (one for each VM), and then the VM network. Now I have a VM running which got an IP from the DHCP server (even though I set it to manual, but whatever, doesn't matter). The VM can ping the gateway on the VLAN, but the host nodes are now unable to ping anything in that VLAN anymore. I think it might have something to do with how I configured my interface (openSUSE, by the way). eno4 does become a slave to trunk-bo:
eno4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master trunk-bo state UP group default qlen 1000
trunk-bo: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master trunk-br state UP group default qlen 1000
trunk-br: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
And the state of trunk-br (the cluster network interface) is in an Unknown state.
Ah, I was setting the eno4 address manually with `ip address/route add`.
Flushing eno4, then applying the same to trunk-br lets the server ping the gateway again. Though state is still unknown. The VM and host can both ping the gateway, though they can't ping each other, and the ethernet symbol in the VM is grayed out with a ? on top
I'm fairly confident I mucked up the host network side of things
Oh, progress. I went and created ifcfg-enoX files for each interface. Then I manually created the ifcfg-trunk-br/bo/route files. The mgmt interfaces have their own iface config files, but the trunk ones didn't.
Now trunk appears to work, except I accidentally dropped my connection to the second server, and need to manually restart it
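For readers following along, the manual workaround described above roughly amounts to the following (addresses are placeholders; as explained later in the thread, Harvester does not expect an IP on this bridge, so this was eventually reverted):

```bash
# Manual workaround (later reverted): move the node's VLAN address from the
# uplink NIC onto the cluster-network bridge. Addresses are placeholders.
ip addr flush dev eno4
ip addr add 10.5.106.10/24 dev trunk-br
ip route add default via 10.5.106.254 dev trunk-br   # or a more specific route, depending on the setup
```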
r
I’m currently loading the support bundle into my simulator. In the meantime, I’d like to clarify some points:

> Now I have a VM running which got an IP from the DHCP server (even though I set it to manual, but whatever, doesn’t matter)

If I understand correctly, you’re saying you created a VM Network with the `manual` route mode selected. This, by design, has nothing to do with the external DHCP server; Harvester cannot manage a DHCP server running outside of the cluster. The `manual` option here is just a way for the user to provide the route information about the to-be-created VM Network. So IMO it’s expected that the VM still gets its IP address from the DHCP server. From the output of the `ip link` command provided above, it looks like all the network interfaces and devices were created correctly. It seems you’re performing manual configurations against the network that the Harvester network controller should manage. Would you please describe what you want to achieve? I’d like to know about it and see how we can help, at least with the existing capabilities of the Harvester network controller and without any manual configs.

Do your VM Network and VMs exist in the `default` namespace? I don’t see anything in the support bundle.

You can add additional namespaces to collect in the support bundle here. After updating the setting, please generate the support bundle again. Thank you.
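A quick way to confirm which namespaces the VM Networks and VMs actually live in, using the standard Multus and KubeVirt resource types that Harvester builds on:

```bash
# List VM Networks (NetworkAttachmentDefinitions) and VMs across all namespaces.
kubectl get network-attachment-definitions -A
kubectl get vm,vmi -A   # KubeVirt VirtualMachines / VirtualMachineInstances
```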
p
Ohh, so that's what `manual` means in this scenario. Okay, the manual config was because when I created the cluster network and the associated network configs, I expected Harvester to auto-create the configs for eno4 and the trunk-br and trunk-bo interfaces. I found it weird that it didn't, so I tried a workaround. Now I've copied the eno1 and mgmt-br/bo configs, and the connection appears to be working. I will try creating a VM and namespaces, and more things to test, in a minute - lunch takes priority 🙂
👍 1
r
Hmm, that’s odd. When you create the `trunk` ClusterNetwork and the associated VlanConfig objects (equivalent to the “Network Config” on the dashboard), behind the scenes the Harvester network controller will set the uplink interface’s master to `trunk-bo` and attach it to `trunk-br`. You don’t have to manually create and configure them. Do you mean the above did not happen?
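For reference, the per-node Network Configs look roughly like the VlanConfig sketch below. The field names are an approximation from memory of the Harvester docs and may need adjusting; the idea is one VlanConfig per node (or node group), each naming that node's own uplink NIC, all pointing at the same cluster network:

```bash
# Rough sketch (field names approximate): one VlanConfig per node so that nodes
# with differently named NICs can join the same "trunk" cluster network.
kubectl apply -f - <<'EOF'
apiVersion: network.harvesterhci.io/v1beta1
kind: VlanConfig
metadata:
  name: trunk-node1
spec:
  clusterNetwork: trunk            # the ClusterNetwork created beforehand
  nodeSelector:
    kubernetes.io/hostname: node1  # placeholder node name
  uplink:
    nics:
      - eno4                       # this node's uplink NIC
EOF
```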
p
It created them, yes. But there was no IP. With the mgmt network, `ip a` shows the IPs associated with the mgmt-br (or bo, I forgot) interface. There was no IP associated with the corresponding trunk interface.
r
I see. Other than `mgmt-br`, bridges created by the Harvester network controller won’t have any IP addresses assigned. This is by design and is expected. Harvester constructs L2 networks that span all nodes so that VMs can run on top of them. If you want to let the VMs on a specific VM Network communicate freely with the management network, you’ll have to make these two networks “routable” in your network infrastructure, and this is out of Harvester’s control. For instance, when the VM wants to communicate with the Harvester VIP, the traffic should be sent from the VM, through the L2 VM Network, and routed by the gateway to the management network. We architected the cluster network like this, so there’s no need to bind IP addresses on bridges other than `mgmt-br`.
And that’s why you’ll see a column called “Route Connectivity” on the VM Network page of the dashboard. When it’s “Active”, it means the VM Network is reachable from the management network.
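A simple way to sanity-check that routability from a Harvester node on the management network; the gateway address is the one mentioned later in this thread, so substitute your own:

```bash
# From a node on the management network, the VM Network's gateway should answer,
# and the route lookup shows which interface/next hop the node would use.
ping -c 3 10.5.106.254
ip route get 10.5.106.254
```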
p
Alright, it got very complex very fast. Is there a particular reason why the trunk-* networks might show as down when I run `ip a`?
Sorry, trying to break it down as much as possible
r
It’s okay. Do you mean the command output you posted above? `eno4`, `trunk-bo`, and `trunk-br` are all `UP,LOWER_UP`. Do you experience any network connectivity issues with the VM attached to the relevant VM Network?

Please ignore the `UNKNOWN` state; it’s not relevant. We don’t bind IP addresses on the bridge interface. We use it in a pure L2 way.
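To see this on a node, assuming the interface names used earlier in the thread:

```bash
# The cluster-network bridge is used purely at L2: no IP address is expected on it,
# and an operstate of UNKNOWN on a bridge device is harmless.
ip -br addr show trunk-br   # typically shows no IPv4 address
bridge vlan                 # per-port VLAN filtering table on the bridges
```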
p
Okay okay. Thank you very much for your explanations, I'm understanding a lot more of what Harvester does and why. I'll test quickly with my manual config just for sanity, then wipe it back to default and go from there again.
r
Sure, if you find something weird, just ask in the channel. We’re glad to help 🙂
p
Thank you so much for the help
🙌 1
Alright, did a reset on all the network changes I made. Everything looks like you said it should with the different interfaces.
• The VM can ping the gateway - good.
• The VM gets a static IP from DHCP - nice.
• Default route points to the gateway - good.
• But the VM cannot ping the host, or vice versa. The host also cannot ping the gateway.
So, overall good and desired results, except for the last one.
Got a better support bundle - with a VM running in the default ns
Ah, and route connectivity on the VM Networks tab shows 'inactive'
And just to be absolutely 100% sure - when creating the VM network, I am using the actual real VLAN which exists in the "physical" network.
r
Things are getting better 😄 So, the route connectivity of the VM Network is `inactive`, which is the culprit for the communication issue between the VM and the host. The VM Network must be routable from the management network via the gateway. Is the gateway outside the Harvester cluster, or is it another VM running on the Harvester cluster? Regarding the VLAN, is it a tagged (trunk mode) or untagged (access mode) VLAN on the switch side? If it’s the latter, no matter what the VLAN ID is in your physical network, you’ll need to create the VM Network with VLAN ID 1 (or choose the “Untagged Network” type).
Anyways, your VLAN config is likely correct. Otherwise, you won’t see the DHCP IP configured on the VM, and the VM will not be able to ping the gateway.
So better to check the gateway/router config
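One way to check from a node whether the switch delivers that VLAN tagged or untagged on the uplink, assuming the uplink NIC name used earlier in the thread:

```bash
# Watch the uplink for 802.1Q tags. On a tagged (trunk) port, the VM's traffic
# shows "vlan <id>" in the link-level header; on an untagged (access) port it doesn't.
tcpdump -nei eno4 vlan   # captures only VLAN-tagged frames
tcpdump -nei eno4        # everything on the uplink; inspect whether tags are present
```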
p
The gateway is outside the cluster. I have H1 and H2 on 10.5.106.10 and .11 respectively, and the gateway is .254. I have the DHCP server giving VMs .50-.150. I tried both with tagged and untagged networks - same result (though maybe I should delete the tagged one when testing untagged). And finally, it should be fine to just have that VM network, and not have the mgmt network at all?
r
> And finally, it should be fine to just have that VM network, and not have the mgmt network at all?
If you’re not planning to use the Rancher Integration and load balancer features, then yes, you can totally ignore the route connectivity issue.
p
@red-king-19196 I got back to the office to bother my network guy. Thank you a ton, it works great now!
🙌 1