# harvester
p
Hi all, I'm using Harvester 1.4.2 and have 3 VMs on the default management network where I have k0s cluster set up (each VM on its own Harvester node). After VM reboot, the IPs have changed I found, any way to set them back to previous value i.e. pin them? Reason being my etcd state is now broken, and I'd like to recover the k0s cluster. Thanks.
t
do you want to use cloud-init for that?
In case you do, in the VM config, go to Advanced Options and scroll down to Network Data. Here is the format:
```yaml
network:
  version: 2
  ethernets:
    enp1s0:
        addresses:
        - 10.23.18.100/24
        gateway4: 10.23.18.1
        nameservers:
          addresses: [10.23.0.111, 10.23.0.112]
```
🙌 1
this will set a static IP to the VM.
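(Aside: before rebooting, the values themselves can be sanity-checked with Python's stdlib `ipaddress` module, e.g. that the gateway actually sits inside the address's subnet. A minimal sketch using the addresses from the snippet above:)

```python
import ipaddress

# Static config from the Network Data example above.
iface = ipaddress.ip_interface("10.23.18.100/24")
gateway = ipaddress.ip_address("10.23.18.1")

# The gateway must be reachable on the interface's own subnet,
# otherwise the default route will never come up.
print(iface.network)             # 10.23.18.0/24
print(gateway in iface.network)  # True
```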
p
@thousands-advantage-10804 Thanks, I'll try that.
@thousands-advantage-10804 Sadly it didn't have any effect after I rebooted the VM. I did set the values correctly though.
t
k0s? need to check if it supports cloud-init
what is the host OS?
p
Actually my indentation was off, retrying. The (VM) host OS is Ubuntu.
Same result unfortunately. The IP changed to .154 on the first reboot, and to .155 after the latest one.
t
ubuntu supports cloud-init. you may have local settings that override it. And the indents are basically 2 spaces.
```yaml
network:
  version: 2
  ethernets:
    enp1s0:
      addresses:
      - 10.23.18.100/24
      gateway4: 10.23.18.1
      nameservers:
        addresses: [10.23.0.111, 10.23.0.112]
```
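(If in doubt about the indentation, you can feed the block to a YAML parser and confirm the keys actually nest under `enp1s0`. A quick check, assuming PyYAML is installed:)

```python
import yaml  # PyYAML; assumed available (pip install pyyaml)

# The corrected snippet: every nesting level is exactly 2 spaces.
network_data = """\
network:
  version: 2
  ethernets:
    enp1s0:
      addresses:
      - 10.23.18.100/24
      gateway4: 10.23.18.1
      nameservers:
        addresses: [10.23.0.111, 10.23.0.112]
"""

doc = yaml.safe_load(network_data)
eth = doc["network"]["ethernets"]["enp1s0"]
print(eth["addresses"])  # ['10.23.18.100/24']
print(eth["gateway4"])   # 10.23.18.1
```

If `addresses` or `gateway4` end up somewhere other than under `enp1s0` (or the parse fails), the indentation is off and cloud-init will not apply the config as intended.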
p
Yea it does, but looks like it's not re-running cloud-init. I'll try and tinker with it and post back here.
t
p
Didn't work, tried a couple of things but to no avail. Looks like the management network is restricted, and ideally the VMs should be on their own network.
t
that is not the case. I run all my VMs on the mgnt network. I wonder if you have anything in the User Data. Maybe test with a new VM. It works on a large Harvester demo cluster I have.
👀 1
p
This is what I got (e.g. of the networks tab on one of my VMs), in my user-data I do nothing fancy, install some packages, start some services, etc.
@thousands-advantage-10804 Do you run `bridge` or `masquerade` for the network type?
Also when I do `ip route` on such a VM I get this as the default GW, is that the same for you?
t
I run bridge, here is a clip. Looks like you don't have DHCP on your mgnt network; the 169.254.x.x address shows that. Did you try using a static with cloud-init on a new VM?
🙌 1
a static worked for me.
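(Aside: 169.254.0.0/16 is the IPv4 link-local range a host self-assigns when no DHCP server answers, which is why seeing it is a strong hint DHCP failed on that network. Python's stdlib `ipaddress` module can distinguish the two cases; example addresses below are made up:)

```python
import ipaddress

# 169.254.x.x means the host gave up waiting for DHCP and
# self-assigned a link-local address.
print(ipaddress.ip_address("169.254.23.18").is_link_local)   # True

# A normally DHCP-assigned private address is not link-local.
print(ipaddress.ip_address("10.23.18.154").is_link_local)    # False
```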
p
Thanks for the clip, so something is clearly not right on my end from the network side. I assume you're running multiple hosts (Harvester) as well?
t
single node for that one.
✅ 1
p
So you observe the same behavior in a cluster setup too? That the VM IP remains regardless of on what Harvester node it runs?
t
Yes
Are you running vlans? Check the switch if you are.
p
AFAIK not, should be the default one (1), it's a 10G FS switch.
t
Are you seeing different ips depending on the node?
p
Yea I do: for node 01 the subnet is 10.x.0.x, for node 02 it's 10.x.1.x, and if the VM runs on the last node it's 10.x.2.x
t
Is there a router between the subnets? Or is it a /23 or /22?
p
The IPs are all in the /24 subnet
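(Aside: addresses that differ in the third octet, like 10.x.0.x vs 10.x.1.x, cannot actually share a single /24, since a /24 only spans the last octet; they would share a /22, which is why the question above matters. A quick check with stdlib `ipaddress`, using made-up concrete octets:)

```python
import ipaddress

# A /24 only covers the last octet, so 10.23.0.x and 10.23.1.x
# are different /24s.
net24 = ipaddress.ip_network("10.23.0.0/24")
print(ipaddress.ip_address("10.23.0.154") in net24)  # True
print(ipaddress.ip_address("10.23.1.154") in net24)  # False

# A /22 spans four consecutive /24s (10.23.0.0 - 10.23.3.255),
# so both addresses fit in it.
net22 = ipaddress.ip_network("10.23.0.0/22")
print(ipaddress.ip_address("10.23.1.154") in net22)  # True
```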
t
why split subnets for the nodes? I can see this causing problems with Longhorn if packets are not routing correctly.
are the nodes in the same datacenter?
p
No, the Harvester nodes are on their own subnet (also /24), but the VM IPs are split as mentioned (depending what node is running it)
I haven't done anything extra during Harvester spin-up for the network, so I'm confused why I'm unable to set static IPs like in your case.
t
so wait. you are setting a static for a subnet that may NOT be exposed to the nodes. Did you configure the vm network on all nodes?
p
No, it was handled by Harvester itself, I didn't do anything fancy with the network, just chose the settings as shared in my image previously.
t
it is odd that it works on 1 node and not on other two.
p
For perspective, I've attached screenshots. All 3 of these VMs sit on one of the Harvester nodes, but have these IPs (I didn't choose them, and am unable to as well, must be the built-in DHCP doing that).
The actual IPs of the Harvester nodes are in their own subnet with my other servers connected on the switch.
t
Those are the management overlay. Did you create a virtual machine network?
p
I did not create a VM network though, as you pointed out.
t
AH you need one..
✅ 1
🙏 1
p
That seems to be it
t
HUZZAH!
p
Thanks man! I'll need to do some reconfiguration 😄
t
vms --> vmnet --> cluster net --> nics.. clear as mud..
✅ 1
p
Yea, I'll jump on that come Monday and straighten that out and re-test. Thanks again!
t
any time !
🙌 1