# harvester
b
is it possible to rename network interfaces so they are consistent across nodes?
b
hehehe I just asked this yesterday.
I figured it out... mostly
The Elemental folks had another idea: adding udev rules, which you could do too.
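For reference, a udev rename rule is basically a one-liner per NIC. A minimal sketch, where the MAC address and target name are placeholders and the file path/priority is a common convention rather than anything Elemental mandates (each node would map its own MAC to the same name, so the names line up across nodes):

```
# /etc/udev/rules.d/70-persistent-net.rules  (placeholder path/name)
# Rename the NIC with this MAC to "nic0" so interface names match on every node.
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="nic0"
```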
b
I think the intended way forward is quite a bit more clicking, but:
1. Networks > Cluster Networks/Configs > Create Cluster Network (top right)
2. Then you can click "Create Network Config", schedule it to a specific node, and select its NIC on the "Uplinks" tab

You'd have to create a Network Config under that Cluster Network for each node, but after that, you should be able to create a VM network using the Cluster Network you defined above.
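If you'd rather do it in YAML, my understanding is that the UI is creating a ClusterNetwork plus one per-node network config under the hood. A rough sketch, assuming the `network.harvesterhci.io/v1beta1` CRDs; field names are from memory, so double-check against what your cluster actually has:

```yaml
# Hypothetical sketch of what the Cluster Network / Network Config UI creates.
apiVersion: network.harvesterhci.io/v1beta1
kind: ClusterNetwork
metadata:
  name: vmnet
---
# One of these per node, pinned with a node selector; "node1" and "ens3f0"
# are placeholders for your node name and its uplink NIC.
apiVersion: network.harvesterhci.io/v1beta1
kind: VlanConfig
metadata:
  name: vmnet-node1
spec:
  clusterNetwork: vmnet
  nodeSelector:
    kubernetes.io/hostname: node1
  uplink:
    nics:
      - ens3f0
```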
b
> Then you can click "Create Network Config" and schedule it to a specific node and then select its NIC on the "Uplinks" tab

In testing, when we've done this, VMs spin up fine on all of the nodes, but the VMs fail to migrate from one node to another.
b
What do you mean by fail? The migration doesn't start, the migration starts and then fails, or there's nowhere to migrate it to? Not sure if you have the appetite to remanufacture this setup, but if so, would you be willing to file a GitHub issue? https://github.com/harvester/harvester/issues And if you'd be willing to submit a support bundle to harvester-support-bundle@suse.com, that would give us the best surface area for diagnosing what's going on.
b
It triggers a migration, but the migration tab/section under the VM's config never gets filled out. There's an error message about a scheduler failure for the pod labels.
Right now we're still trying to get our primary harvester instance set up, and we have a rancher support contract but not one for harvester. I'm pretty sure we're still going to set up a staging/dev instance and I'll be happy to fill out a GitHub issue once we get that all set up.
g
Is it possible to file an issue with the details and a support bundle which includes the namespaces where the VM is?
That will make it easier for us to figure out what is going on.
b
I actually spent a bunch of time on this yesterday.
Turns out all the tests I was running, where the rename was working but adding the NICs via the node selector wasn't, were flawed. It ended up being really stupid and sanity-harming after I figured it out: it was a compatibility issue with the hardware CPU arch/capabilities. My "fix" just happened to always place the VMs on the oldest CPU models, so those VMs would migrate back and forth no problem, and by chance it always created the VMs with the multiple network configs on the newest CPUs. VMs that *started* on the older hardware could always move around freely, but when a VM started on the newer ones, migration would fail with a message about no matching labels for the pods (which kind of obfuscated what was going on).
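For what it's worth, the usual way around that mixed-CPU migration trap in KubeVirt (which Harvester uses under the hood) is to pin the VM to a CPU model every node supports instead of the default host-model. A sketch, where the VM name and model are only example values, not a recommendation for your hardware:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm   # placeholder name
spec:
  template:
    spec:
      domain:
        cpu:
          # Pick a lowest-common-denominator model that every node's CPU
          # supports, so the scheduler can place (and migrate) the VM on
          # any node; "Haswell-noTSX" is just an example.
          model: Haswell-noTSX
```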