# harvester
a
At first, when you install a Harvester (k8s) cluster, you may have many NODEs. Each NODE has its NODE IP as its identity; it can not be changed in the life-cycle of this cluster. Let's say your cluster has 3 NODEs, with the NODE IPs 192.168.3.1, .2, .3.
You can use those IPs to ssh login into those NODEs.
i
Hello Jian, thanks for taking the time
So it seems the problem I am facing is that for a single node install I will need at least 2 IP addresses, is this correct?
a
But for the whole cluster, we have the concept of a VIP, let's say 192.168.122.100. This IP may be located at any of those NODEs.
i
seems the VIP is not allowed to be the same IP as a node's
a
No, you need N (nodes) + 1 IPs
sure, each NODE IP is that node's identity and can't change
i
and can this be a normal IP, or do I have to have a floating IP for it?
I can just obtain normal IPs at my cloud provider. Hope the floating part is then realised by kube-vip?
a
Viewed as a black box, this VIP is a static IP of the cluster.
Internally, Harvester can float this IP to any NODE.
i
are you part of the development team?
a
yes
yes, a static IP from your provider; Harvester floats it internally with kube-vip
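For reference, this is roughly where those addresses end up in a Harvester install config; the field names follow the Harvester configuration docs, and the interface, gateway, and IP values are just the example addresses from this conversation:

```yaml
install:
  mode: create
  management_interface:
    interfaces:
      - name: eth0          # assumed NIC name
    method: static
    ip: 192.168.3.1         # this node's fixed NODE IP
    subnet_mask: 255.255.255.0
    gateway: 192.168.3.254  # assumed gateway
  vip: 192.168.122.100      # cluster VIP, floated between NODEs by kube-vip
  vip_mode: static          # a provider-assigned static IP, not DHCP
```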
i
great. Then maybe an idea in this regard: would it not be possible to use a DNS record for routing to the relevant node? I just ask because an additional IP costs $100 a year πŸ™‚
a
The Harvester VIP is backed by L2 ARP broadcast for now; it has no L3 routing ability to announce itself. That means a DNS record is not directly usable at the moment.
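A quick way to see this in action: because kube-vip announces the VIP with gratuitous ARP, the MAC address that answers an ARP probe identifies the node currently holding it. This sketch uses the example VIP from this chat and assumes the management NIC is eth0:

```shell
VIP=192.168.122.100   # example VIP from this conversation
IFACE=eth0            # assumed management interface name

# Passive check: what does the kernel's neighbor (ARP) table already know?
ip neigh show to "$VIP" 2>/dev/null || echo "ip command unavailable"

# Active probe, if iputils arping is installed; the replying MAC belongs
# to whichever NODE currently holds the VIP
if command -v arping >/dev/null 2>&1; then
  arping -c 3 -I "$IFACE" "$VIP" || echo "no ARP reply for $VIP on $IFACE"
fi
echo "probe finished for $VIP"
```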
i
I was actually searching for the last 4 hours how this VIP thing could work. The image is different from what I see in my installer. The text below just says VIP and not domain name of the management node. Is this already for the 1.3 version?
Ok, then it seems like for now I have to invest in an additional IP. Thanks a lot for your help! And maybe a last thing in regards to the documentation: it would be great if you could extend the description with a definition for VIP. It should become clear from the description that a normal additional IP is used. When searching for VIP I found things like floating IPs associated with a specially formatted virtual MAC and so on. Thought I'd need to switch my cloud provider because of this πŸ™‚
a
even for a `domain name`, it is still resolved to an `ip` at system installation. Strictly speaking, a `pure dynamic dns` record is not a fit for the Harvester `vip` yet.
i
You will be at ITSA next week? I mean with SUSE?
a
there are many concepts like `virtual ip`, `floating ip`, `elastic ip` ...; sometimes the implementation is mixed up with the usage, which causes confusion
i
yep, I learned this today! Therefore it would have been helpful to know that just a standard additional IP address is required πŸ™‚
Then in regards to the image, "Domain name of the management node..." Is this a new feature for Harvester 1.3 already?
a
sorry for that..., and I am also not at ITSA
i
Ok, in that case I would have invited you for a coffee. Maybe another time πŸ™‚
🀝 1
a
V1.2.0 is the latest version; which version are you using ?
i
I am currently trying to give V1.2 a go
downloaded today from github
i
On my screen I do not see the descriptive text below the menu
This is why I was asking
a
You got the picture from here, and it seems to be outdated :( https://docs.harvesterhci.io/v1.2/install/management-address
i
well, with the text it did look more feature-rich πŸ™‚
hence my assumption that there should be a different option than requiring an additional IP
a
there could be many possible solutions, but to be honest, the current one makes the fewest assumptions about the infrastructure. Not too many administrators can handle floating IPs easily, so Harvester/kube-vip takes this burden
i
Well, it is very good not to have this floating IP stuff. Even better would be if one could go without an additional IP address. I mean for free-time usage, spending 100 bucks extra is something where I had to think twice πŸ™‚
Well, the IP is ordered; hope I can give Harvester a go tonight πŸ™‚
a
good luck
i
Jian, maybe another technical question you can answer. I am currently considering using Harvester in a hybrid setup; hence, to maintain a Rancher cluster in the datacenter and at the same time connect it with a cluster at home. I assume it makes sense to have Rancher and Harvester colocated in the cloud, right? I am thinking about bandwidth and traffic requirements. Having an architecture description on how to build those hybrids would also be interesting.
I mean, is it better to have distributed clusters or multiple clusters? Same for Harvester: better to have multiple installs in different locations, or could they be centrally managed...
a
if you have some diagrams, it will be easier to understand
i
I plan to split the computing requirements for an app I am developing. I would like to do the GPU processing stuff from a home server, as it gets too costly at the data center.
Unfortunately I do not have any. The final setup will be something like 1 server with GPU at home + 4 virtual root servers in the cloud. I have some stuff I need to manage in VMs in the cloud; app-related stuff will all run on Kubernetes.
Would be nice if the home server could be part of the cluster and managed within Harvester as well. Then I do not need to think about monitoring and VPN connection, etc
a
k8s/etcd is `sensitive` to latency and `write` speed
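To make the `write speed` point concrete: etcd issues an fdatasync for every write-ahead-log append, so sync write latency matters more than raw throughput. A rough sketch of checking this with plain dd (the path and sizes are arbitrary illustration values; conv=fdatasync forces one flush at the end rather than one per append, so treat the result only as an upper bound on disk health):

```shell
# Write ~2.3 MB in 2300-byte records (about the size of a typical etcd WAL
# entry), then force the data to disk before dd reports its timing
dd if=/dev/zero of=/tmp/etcd-disk-test bs=2300 count=1000 conv=fdatasync
rm -f /tmp/etcd-disk-test   # clean up the scratch file
```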
i
I assumed this indeed. I am on a 1.5 mbit upstream 😞 Hence, I expected already that I cannot place the home server within the same cluster.
But good to hear it from someone who knows the technology. Thanks for your help!
Maybe this is a little gift in case you are interested: https://my-nutri-diary.org/ It is free of charge and my free-time project πŸ™‚
πŸ‘ 1
a
not sure whether your home internet has a fixed IP; if yes, you can at least try to add your home PC as a worker node in the cluster
And, can your home PC connect to your remote NODE directly? Or does it need a VPN?
i
my idea was using vpn. Will try the worker node thing.
a
FYI: We will update the document, thanks. https://github.com/harvester/docs/issues/465.
i
Great. Jian, may I ask you another question? I have now successfully installed Harvester. The first thing I of course tried was to create a VM. I downloaded the openSUSE Leap 15.5 ISO, created a VM with the CD-ROM set to the ISO, and added a volume (longhorn-harvester default). Unfortunately, the container does not start. The only error message I get is: `0/1 nodes are available: 1 Insufficient devices.kubevirt.io/kvm. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.` The PVC seems to be created. Do you have an idea what might go wrong?
a
@incalculable-painting-97670 The Harvester HOST needs virtualization/nested-virtualization enabled. Is your HOST a VM from a certain vendor? Please check:
```
lsmod | grep kvm
```
if your Host does not support virtualization, then you can't create VMs
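A slightly fuller check, as a sketch (the sysfs paths are standard Linux; which kvm module is present depends on the CPU vendor, and /dev/kvm is what KubeVirt's `devices.kubevirt.io/kvm` resource ultimately needs):

```shell
# Report whether nested virtualization is enabled for the loaded KVM module
for mod in kvm_amd kvm_intel; do
  f="/sys/module/$mod/parameters/nested"
  # "1" or "Y" means nested virtualization is on for that module
  [ -r "$f" ] && echo "$mod nested: $(cat "$f")"
done

# The device node KubeVirt needs in order to run VMs at all
if [ -e /dev/kvm ]; then
  echo "/dev/kvm present: this host can run KVM guests"
else
  echo "/dev/kvm missing: this host cannot run KVM guests"
fi
```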
i
Good morning Jian, I have two server variant options: Root Server and Virtual Server. Currently Harvester is installed on a virtual server. When I execute the command on the Virtual Server running Harvester, I get the output in the attached image. When running the command on the Root Server, grep does not find anything at all. Hence, the behavior on the Root and Virtual Server is different. What should the output be in order for Harvester to work?
From the Root Server
a
my working node, a KVM VM as Harvester OS; it supports (nested) kvm:
```
harv41:~ # lsmod | grep kvm
kvm_amd               147456  0
ccp                   106496  1 kvm_amd
kvm                  1056768  1 kvm_amd
irqbypass              16384  1 kvm
harv41:~ #
```
i
I assume I would have to see kvm_amd for it to work, right?
a
what's the CPU of your `Virtual Server`?
```
cat /proc/cpuinfo
```
i
seems like an AMD processor; the rest is QEMU CPU..
so this is from the Root Server. I need to reboot the virtual server with Harvester first
a
my host PC, an AMD 59XX:
```
processor	: 23
vendor_id	: AuthenticAMD
cpu family	: 25
model		: 33
model name	: AMD Ryzen 9 5900X 12-Core Processor
stepping	: 0
microcode	: 0xa201016
```
the KVM VM, as Harvester OS:
```
processor	: 9
vendor_id	: AuthenticAMD
cpu family	: 25
model		: 1
model name	: AMD EPYC-Milan Processor
stepping	: 1
microcode	: 0x1000065
cpu MHz		: 3693.060
cache size	: 512 KB
```
i
Last night I read that my cloud provider seems to have nested virtualization disabled. Do you know how I can find out? I contacted them yesterday evening already but do not have feedback yet
a
have you:
```
harv41:~ # ls /dev/kvm
/dev/kvm
```
i
no such file or directory
ok, this is the problem then
thanks for your help!
I will now get real hardware at Hetzner. Processor-wise, should Intel and ARM both work?
I was thinking about the following server
a
we support x86_64 (AMD / Intel); ARM is on the way ...
i
ok, then the Intel Core i9 should not become a problem πŸ™‚
a
It looks like the `virt server` and `root server` are both QEMU-simulated; they seem not to support `nested-virtualization` well.
i
I assume that is a Netcup thing then. Likely they have configured their HW in a way that users are not able to use their quota completely
well, good to know. The decision is made to switch to another data center then
One question: I have read that I should not install Rancher on top of Harvester for a production environment. Is this something which has to be expected to break? I do not want to spend extra money for another Rancher server
My assumption is that even if Rancher broke, I could still log in to Harvester and play back backups of the Rancher server, or is this wrong once Harvester is configured to connect with Rancher?
I was also considering using RKE2 to install Rancher within a VM running on Harvester, in order to be able to scale out Rancher to a real server later on
I mean in case it should ever be required
a
We have a better solution now.
It is experimental in v1.2.0, and will be stable in the next release.
i
great. Would you recommend going with the experimental version now, or simply installing Rancher within a VM on top of Harvester?
so migration to 1.3 should then work?
a
you may try both solutions and decide which one to go with
Harvester spends a big effort on `upgrading`; we make sure the upgrade path works smoothly.
i
another question, because I currently have another Rancher cluster running in production: can I import a downstream cluster into multiple Rancher servers to maintain them, or would this break things?
I did not want to take the risk of trying it out so far
and I am rather limited with test hardware
a
I am not sure; I need to check `Rancher`'s documentation. Having multiple bosses carries a natural risk.
i
well, maybe I leave this out for the moment till I have a new setup running
πŸ˜€ 1
the other way around: could I use multiple Rancher servers with a single Harvester instance?
a
when Rancher runs in multiple VMs, then ok; but the `rancher-addon` does not support multiple instances yet
i
ok, good to know, thanks
seems like a lot of migration awaiting me on the weekend πŸ™‚
πŸ‘ 1
Hello Jian, I would have another question, as I am currently preparing the setup on a different server. I am wondering: is the hard disk in Harvester encrypted by default? I have not seen any options in Harvester (Longhorn) on how this could be activated in the overlay fs. Also, I would be interested whether VLANs are encrypted by default, and how and if encryption could be enabled
a
the hard disk seems not to be encrypted yet, but LH acts like a block driver; the data is not simply written continuously to hard disk blocks
i
and does Longhorn have an option to encrypt things? I mean, from a security point of view, even if it's a block driver, parts of the data might be reconstructable.
Would the Harvester installer reuse an existing filesystem if I preset one? I mean, I could create an encrypted BTRFS partition.
i
ok, do you know if MicroOS comes with dm_crypt and cryptsetup?
I have not worked with it before.
a
no, I am blank in those areas πŸ™‚
i
well, nevertheless thanks for the directions. Will try it in the Slack chat then once I got things installed. Unfortunately, Hetzner cloud is a nightmare as well: no easy way to install an ISO, and KVM access costs 10 bucks extra per 3 hours 😞 Germany is lost...!
a
Buy a PC for 1000 Euro and you can run a Harvester single-node cluster
i
well, countryside: more than 2 mbit upload is not possible at my place. No other choice than a data centre
a
the cloud is a technically demanding investment
i
I know, very expensive once things scale. Therefore I try to get a root server only and run Kubernetes on top
a
ah, I mean those cloud providers: they need to spend a big amount of money to have a technical team
that then meets user requirements quickly and smoothly
i
well, at least for my latest one it did not look like well-invested money. The service was something which worked as long as you did not need it πŸ™‚
πŸ˜‚ 1
I mean, the nice thing is, when you have to help yourself, you continuously learn new things. So it is also ok, even though it's not the easy way
regarding VLAN, I assume using something like a service mesh would then likely be the right way to encrypt traffic