# harvester
w
I’m in the same situation. Something must have changed somewhere in the last couple of days. The only thing that works right now is to specify DHCP in the network config:
```yaml
network:
  ethernets:
    enp1s0:
      dhcp4: true
      dhcp6: true
    enp2s0:
      dhcp6: true
      dhcp4: true
  version: 2
```
Also, you need to add iptables to the packages list:
```yaml
#cloud-config
package_update: true
packages:
  - qemu-guest-agent
  - iptables
runcmd:
  - - systemctl
    - enable
    - '--now'
    - qemu-guest-agent.service
```
b
Thanks. Yeah, I always put iptables in. I’m trying to specify the DHCP options now. Are you using Ubuntu Jammy? Also, could you let me know how your VM network is configured?
Still a no-go. Is there an example of a bare-minimum Harvester and Rancher cluster combo setup? My config is pretty minimal and done according to the docs in a fairly simple way, but I may not be understanding the VM network config portion.
w
I have tried with Rocky Linux, the Ubuntu 22.04 cloud image, etc. No difference. My network has an internal network (VLAN) of 172.30.10.0/24 and an external network on a separate VLAN. Both have DHCP servers. If I create nodes manually and install RKE2 or K3s, everything works fine, so it’s something with the Rancher setup or with the Harvester cloud provider.
b
Hrm. OK. Yeah, I don’t have a DHCP server set up. I just assumed Harvester’s machinery included one to manage IP assignment. Is that a bad assumption?
It works fine allocating IPs for manually created VMs,
so I assumed the same would happen with Rancher K8s cluster builds.
On my Harvester node:
```
4: mgmt-br: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d0:67:26:d5:dc:0b brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.100/24 brd 172.16.1.255 scope global mgmt-br
       valid_lft forever preferred_lft forever
    inet 172.16.1.10/32 scope global mgmt-br
       valid_lft forever preferred_lft forever
    inet 172.16.1.200/32 scope global mgmt-br
       valid_lft forever preferred_lft forever
10: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether d6:80:0c:f2:2b:b6 brd ff:ff:ff:ff:ff:ff
    inet 10.52.0.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
```
And a LOT of Calico interfaces, obviously. But, very simple.
w
Yeah, if you have no DHCP, the interfaces will be in state DOWN if you log in to the node and run
ip a
b
Well, actually, without DHCP they are UP, just with no IP assignment on the VMs provisioned through Rancher. However, like I said, when I provision VMs via Harvester, they come up with IPs no problem.
I assume that’s being done through cloud-init or some other mechanism.
Correction to the above net config: that’s on my Harvester node.
w
Interesting. This is one of the hard parts of setting things up: how to make sure everything is separated but still has an IP address.
How have you defined the IP range?
b
I have.
I can easily set up a DHCP server on the routers.
just didn’t realize
w
Or, hm, yes, that is one of the Harvester nodes. That has a static IP setup.
b
Sorry, “how have you defined the IP range”? You mean the IP Pool?
w
I think the hard part is figuring out how to isolate the VMs from your secure internal network but still have a way of reaching the Harvester provider from “outside”.
b
Christian. That’s fine. Right now I can keep it flat as I just want to test functionality
After that, I can start segregating it all out
w
OK! But it’s interesting how the VMs get IP addresses today without any DHCP?
b
I mean, like I said, the only thing I can think of is Harvester doing static assignment via cloud-init. Let me see if something is exposed.
w
Yeah, see if you can find a netplan config on the node.
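For reference, a static assignment rendered by cloud-init usually ends up in something like /etc/netplan/50-cloud-init.yaml. A minimal sketch of what it might look like; the interface name and addresses below are placeholders, not values from this setup:
```yaml
# Hypothetical netplan file rendered by cloud-init; interface name,
# addresses, and gateway are illustrative placeholders only.
network:
  version: 2
  ethernets:
    enp1s0:
      addresses:
        - 172.16.1.50/24
      routes:
        - to: default
          via: 172.16.1.1
      nameservers:
        addresses:
          - 172.16.1.1
```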
b
Doing it right now. So the one difference is that when doing it through Harvester, it doesn’t force you to use a VM network. It uses the Harvester mgmt net with “masquerade” mode. That works. I’m trying a manual provision selecting the same options that Rancher would, which is a VM network you have to define (untagged in my case) with mode = bridge.
probably won’t work
Rancher doesn’t give me the option to use the default Harvester mgmt net with masquerade mode.
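For context, a Harvester VM network is backed by a NetworkAttachmentDefinition using the bridge CNI, so a bridged NIC gets no IPAM from Harvester and relies on whatever DHCP exists on that network. A rough sketch of what an untagged network on the mgmt cluster network might look like; the name and config fields here are assumptions, so check the actual object in your cluster:
```yaml
# Hypothetical sketch of an untagged Harvester VM network; field values
# are assumptions and may differ from what Harvester actually generates.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: untagged-net
  namespace: default
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "mgmt-br",
      "promiscMode": true,
      "ipam": {}
    }
```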
w
Ah yes you are right. That is probably the reason
If you configure a DHCP server, you should be fine.
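If it helps, a minimal dnsmasq snippet for serving DHCP on the VM network could look something like this; the interface name and address range are assumptions for illustration:
```
# /etc/dnsmasq.d/harvester-vms.conf (hypothetical example; interface and
# range are placeholders for whatever your VM VLAN actually uses)
interface=vlan10
bind-interfaces
dhcp-range=172.16.1.150,172.16.1.199,255.255.255.0,12h
dhcp-option=option:router,172.16.1.1
dhcp-option=option:dns-server,172.16.1.1
```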
How do you reach them from outside?
Are you using the load balancer?
b
Through a bounce box for SSH. Yes, I use a Harvester LB to map ports. I can also set up port forwards via the router.
w
OK, I’m trying to use the ingress but have no idea how to configure it as a load balancer.
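For what it’s worth, with the Harvester cloud provider installed in the guest cluster, the usual route is a Service of type LoadBalancer in front of the ingress controller rather than configuring the ingress itself. A minimal sketch; the namespace, selector, and ports are assumptions that would need to match your actual ingress controller:
```yaml
# Hypothetical Service exposing an ingress controller through the
# Harvester cloud provider's load balancer; selector/ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: ingress-lb
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```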
b
Looks like the DHCP server worked
thank you sir!
It’s just building out the cluster now 😃
hopefully it gets all the way
w
Cool!
Let me know how it goes
b
thank you!!