# harvester
b
Did you see the thread above?
There were some caveats.
f
Oh, ok. I was worried because I found very few results when searching for "Harvester NFS" and "Harvester iSCSI".
b
I haven't dug into those links, but it seems to be the place for "supported" third party drivers.
That being said, you might be able to run other types of claims through that, but I don't know if NFS is one of the supported types.
It might need to be block based storage.
f
Yes, the documentation seems to be lacking a bit.
b
(For VMs I assume it would)
But also, if you have file based storage, you can just extend the NFS into the VM in that layer instead of trying to do it through the hypervisor.
One of the things that was mentioned in Leonardo's thread was that the third party drivers aren't supported for boot devices.
f
Supporting Read-Write-Many (RWX)? Well, that would be problematic for iSCSI using xfs/ext4.
b
¯\_(ツ)_/¯
RWX is only required for live migration
f
It's a bit strange that there is so little documentation.
b
Did you see the release note?
It was introduced in 1.2.0
So it's brand new.
They don't have step-by-step instructions for how exactly to set it up for each driver, but the concepts seem well laid out.
If you have a support contract, I'm sure the SUSE support folks could whip something up for you if you need it.
f
I wouldn't like to have a support contract for something that I can't easily set up myself.
b
All I'll mention is that "easily" is pretty subjective.
That being said, there's docs for doing this with Ceph.
f
Yes, but then you need to run ceph. I run NFS and iSCSI, but I prefer NFS.
Building up a complete storage cluster for a Harvester lab seems a bit much.
b
This is the first time I've heard you mention a lab. That being said, network storage for a lab seems like a lot vs just adding some storage to nodes and letting Longhorn do the rest.
f
Well, I've got 120 GB drives in each machine. Nice blade servers. I don't want to use local storage.
b
You can attach iSCSI there instead.
f
Yes, but I guess I do that inside the Harvester UI? Also, is the requirement one disk per physical node?
b
We run bare metal and have multiple disks, but you'll need at least one extra disk for Longhorn. You need to set up the disk in the OS layer first, then install Longhorn to use it.
f
Ah I only have two disks in raid1 booting Harvester.
b
Right, so you'd need to connect an iSCSI disk on each host so they're visible as block devices.
f
I am just trying to understand how Harvester wants to manage it, I guess via the Harvester UI?
Does it handle it cluster wide or per node?
In VMware I have VMFS as a cluster file system, for example.
b
Harvester itself doesn't care.
It just sees/generates a claim.
It offloads onto whatever storage class you're using for the management part
f
Yes, but only one node can read/write to an iSCSI claim, right? Or is there some cluster FS?
b
Each node is going to need a separate iSCSI claim.
f
Great 🙂
b
So if you have a 3 node cluster you're gonna use 3 disks
Longhorn is gonna replicate data across those disks.
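For reference, Longhorn's replica count is set per StorageClass; a sketch along those lines (the class name is illustrative, while the provisioner and parameter names follow Longhorn's documented StorageClass format):

```yaml
# Illustrative StorageClass: Longhorn keeps 3 replicas of every volume,
# spread across the disks it manages on the cluster nodes.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-3-replicas   # hypothetical name
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "30"
```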
f
Cool!
b
On your storage backend you'll probably have dedupe stuff going on to shrink the size.
f
I guess I mount it via some config after the OS boots? So I can reinstall the nodes without having to individually enter and manually configure them?
b
Yep, add all the bits to mount an iSCSI disk and add it to your fstab.
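A rough sketch of those bits, assuming open-iscsi is available on the host; the portal address and IQN below are made-up placeholders:

```shell
# Discover targets on the portal and log in (placeholder address/IQN)
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
iscsiadm -m node -T iqn.2024-01.example.com:harvester-disk1 -p 192.0.2.10:3260 --login
# Reconnect automatically on boot
iscsiadm -m node -T iqn.2024-01.example.com:harvester-disk1 --op update -n node.startup -v automatic

# Format once, then mount via fstab using a stable by-path name;
# _netdev delays the mount until the network is up
mkfs.ext4 /dev/disk/by-path/ip-192.0.2.10:3260-iscsi-iqn.2024-01.example.com:harvester-disk1-lun-0
echo '/dev/disk/by-path/ip-192.0.2.10:3260-iscsi-iqn.2024-01.example.com:harvester-disk1-lun-0 /var/lib/longhorn-disk1 ext4 _netdev 0 0' >> /etc/fstab
mkdir -p /var/lib/longhorn-disk1 && mount /var/lib/longhorn-disk1
```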
f
OK, so it will go away if I reinstall via PXE?
b
Then point longhorn at that block device and set up your k8s storage pools.
Yeah, if you re-install, all the keys change and recovery will become difficult.
f
OK, so manually mount on each machine via fstab, got it.
I can fix that.
b
That being said, you can bootstrap in configs to do the setup/mounting
Or run ansible... or...
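As a sketch of the "bootstrap in configs" route, using generic cloud-init fields (runcmd, mounts); Harvester's own install config schema may differ, and the address/IQN are the same placeholders as above:

```yaml
# cloud-init-style bootstrap: log in to the iSCSI target and mount it.
# Field names assume a cloud-init-compatible layer; values are placeholders.
runcmd:
  - iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
  - iscsiadm -m node -T iqn.2024-01.example.com:harvester-disk1 --login
mounts:
  - [ "/dev/disk/by-path/ip-192.0.2.10:3260-iscsi-iqn.2024-01.example.com:harvester-disk1-lun-0",
      "/var/lib/longhorn-disk1", "ext4", "_netdev" ]
```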
f
Bootstrap in config is great.
And yes Ansible could be good as well 🙂
b
There are a dozen different ways depending on your setup.
f
Yep, I was just interested if I set it up via the Harvester UI.
b
Only if the disks are already present on the machine.
f
Just thought it was strange that NFS was only mentioned for backup and not primary storage.
b
It's fine for mounting in workloads, but it's not block storage
You need block storage for VMs
f
Ah OK.
I guess I am more used to VMware and Proxmox.
b
Exactly, you can't mount an NFS share directly into a VMware VM, right?
You need vdisk file
f
I mount it in the cluster
b
which emulates the block storage.
f
Nope
I just mount the NFS cluster-wide and then it handles VMs as VMDK files.
Same with VMFS over iSCSI
b
That's what I'm saying
f
All done via one UI
No manual stuff.
b
you need vmdk files, not NFS
vmdk provides the "block"
f
thanks vmdk.
b
NFS doesn't do that.
f
Correct, but Harvester can't just run a qcow2 image on an NFS mount?
b
without snapshotting that probably gets messy.
And sometimes the lag from NFS can be an issue. We're talking about what's possible, not what's currently implemented in the upstream projects and production-ready.
anyways
f
Thanks for your help
b
np
f
If you have a good answer to my other question below this, I would love it; I have not solved that problem yet.
b
I don't, but full logs will probably be helpful for those that might know.
f
OK, will fix that, thanks!
Have a great day
g
The disk needs to be from a PVC that supports block volumes.
I don't think the NFS CSI driver supports block volumes, but I may be wrong.
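For context, "block volume" here means a PVC with volumeMode: Block, which hands the consumer a raw device instead of a mounted filesystem. This is standard Kubernetes, but only some CSI drivers implement it; the names below are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-disk-example        # hypothetical name
spec:
  accessModes:
    - ReadWriteMany            # needed for live migration, per the discussion above
  volumeMode: Block            # raw block device rather than a filesystem mount
  storageClassName: longhorn   # illustrative; must point at a block-capable driver
  resources:
    requests:
      storage: 10Gi
```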
s
Hi @flat-librarian-14243, I quickly checked the thread you mentioned. You have the local driver for the OS and general purposes, and you would like to know what external storage we support, right? If yes: as of v1.2.0, we support external storage like Rook. You could try attaching the iSCSI device as the Longhorn backend for your case. We do not currently support NFS as a storage backend. If not, could you describe in more detail the scenario you have in mind?
f
I am digging into this, and I'm noticing that all installed Harvester servers get the same iSCSI initiator name, which is very strange.
s
Yes, we found that recently. In the current design, Longhorn is the only component that uses iSCSI. The same initiator name does not cause problems for Longhorn, but we will handle it.
f
Since I need to mount disks on these via iSCSI in the OS, it's a problem that takes a lot of extra time.
So, if I only have a small physical boot disk, what is the standard and supported way to get more storage available for my harvester servers? For both VMs and containers.
s
Right now, Harvester only supports NetApp and Rook for external storage. Note: you still need Longhorn for images. Or add more internal storage.
f
Vicente, is there an ETA to fix the initiator name error?
s
Could you help create an issue on GitHub for the duplicate initiator name? We do not have an issue filed for it because we do not support external SANs, but if we want to support them, there could be one. Thanks!
f
I am on it!