# harvester
w
you import the disks for longhorn to use - they're then managed by longhorn. when you set up storage classes you can choose which types of disk to use (you can tag your different types of drive to make this simple) and how much replication you want - longhorn then ensures you always have the required number of copies of your data, and presents the volume to your cluster as a single drive when you mount it.
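roughly, the storage class side looks like this - a minimal sketch, where the class name, disk tag and replica count are just example values:

```yaml
# Sketch of a Longhorn StorageClass pinned to tagged disks.
# "longhorn-nvme" and the "nvme" tag are example names.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-nvme
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: Delete
parameters:
  numberOfReplicas: "3"        # how many copies longhorn keeps, on separate nodes
  staleReplicaTimeout: "2880"  # minutes before a failed replica is cleaned up
  diskSelector: "nvme"         # only place replicas on disks carrying this tag
```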
a
yes okay, but this means the size of a single disk is the upper bound for a volume, so i am not able to use e.g. 6 disks in a "software RAID5 or RAID10"-like way, correct?
w
do not use raid
add the disks raw and let longhorn manage them for you - I would recommend you tag them by bus type and storage type (see the sketch below) - longhorn manages replication for you, so I believe adding raid to the mix just wastes resources and adds overhead - fellow "Harvesters" feel free to shoot me down
also - I would spread the disks across your nodes evenly
don't put them all in one machine - unless you're just testing
that way you build in some hardware failure tolerance
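the tags live on the disks in longhorn's node object - you normally set them in the UI, but as a sketch via the Node CR it looks roughly like this (disk key, path and tags are example values):

```yaml
# Sketch: tagging a disk on a Longhorn node. Disk key, path and
# tags below are example values - the UI does the same thing.
apiVersion: longhorn.io/v1beta2
kind: Node
metadata:
  name: node-1
  namespace: longhorn-system
spec:
  disks:
    disk-nvme-0:
      path: /var/lib/harvester/extra-disks/nvme0   # example mount path
      allowScheduling: true
      tags:
        - nvme   # bus/storage type tags the StorageClass can select on
        - fast
```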
a
i am planning a 6-node cluster with 6 NVMe SSDs (2TB each) per node, so i will have to make sure it is replicated within my environment
so at least 36 disks in total
w
so 6x6 drives - nice...
cool - just let longhorn manage them
it'll deal with the replication for you
typically a volume has 3 replicas
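a workload then just claims storage against the class and longhorn maintains the replica count behind the scenes - a sketch, names and size are placeholders:

```yaml
# Sketch: a PVC against the example class above. Longhorn creates the
# volume and keeps the configured number of replicas for it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-example
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn-nvme   # example class from the earlier sketch
  resources:
    requests:
      storage: 50Gi
```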
a
okay, this is all a "new world" for me, coming from SAN/network-storage-based ESX/vSphere envs
w
me too - we just decommissioned a large RAID 6 SAN in favour of this type of env
a
oh great, then i might want to bother you a bit more in future, if that's okay? 🙂
w
for example - this is a random volume in longhorn - in our cloud
so that volume has 2 replicas on two discrete nodes...
a
that's what i want to achieve - if one node fails, just restart the VMs / workloads on an available node
w
we're just updating our cluster to have a dedicated network for storage - watch out, as NVMe can saturate a 10G adapter - we use 2x10G: one for storage and one for management
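in harvester that's the storage-network setting - it moves the longhorn replication traffic onto its own network. a sketch, where the VLAN, cluster network name and range are example values (check the docs for your version):

```yaml
# Sketch: Harvester's storage-network setting. The JSON value below
# uses example VLAN/network/range values - adjust for your setup.
apiVersion: harvesterhci.io/v1beta1
kind: Setting
metadata:
  name: storage-network
value: '{"vlan":100,"clusterNetwork":"storage","range":"192.168.100.0/24"}'
```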
a
planning on 2x10G for storage and 2x10G for workload
w
I don't work for rancherlabs btw - had some issues with an upgrade, so i've been on here trying to get some help before a re-install...
a
good to know
w
2x10G for workload? do you mean management? as that's huge.... if you've got the ports... we use 1x10G for storage, 1x10G for management, and 1x1G for ingress from the external network (it's a 1G line, so it only needs 1G of bandwidth). our local network can connect to the management network for administration.
πŸ‘€ 1
For our new install we're also using a stripped-down machine as a witness node, so we can reduce the overhead of running full control planes
that's got a 1x10G connection and a small SSD for storage only
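the witness bit is just a role you pick when the node joins the cluster - roughly like this in the harvester install config (v1.2+); field names here are best-effort, so check the configuration docs for your release:

```yaml
# Sketch: joining a stripped-down node as a witness via the Harvester
# install config. server_url/token/interface are placeholder values.
server_url: https://<vip>:443
token: <cluster-token>
install:
  mode: join
  role: witness
  management_interface:
    interfaces:
      - name: eth0
    method: dhcp
  device: /dev/sda
```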
would be interested to hear how you get on...
what kinda compute you using?
a
this right now 😉 just some random small AMD SoC i had lying around
w
wow - that's a bit tight on CPU cores
that's similar to our low-power rancher cluster, which we run in HA mode to manage the k8s clusters in our harvester install
but that doesn't need much compute
that's our basic layout
a
planned: 1x AMD Epyc 7313P 3 GHz, 16 cores / 32 threads, 128MB cache, 256GB RAM, 1x 960GB Samsung M.2 PM9A3 (Harvester install), 6x 1.92TB Samsung U.2 PM9A3 (VMs), 4x10G Ethernet
w
nice - we went low power.... 65W TDP πŸ˜›
155W - that's more than 2x the power consumption... worth thinking about...
guess you get 8 more cores for that too... our stack, with all the networking gear, cruises along at around 250W - we were burning that on one node with the old intel xeons...
a
this infra should last the next 3-5 years, forecast at around 250 VMs and 3-5 kubernetes clusters, mainly dev/qa internal
w
the boards were a bastard to get hold of tho... we want to get at least that out of this setup.... similar use case here - it's our in-house private cluster, but as power is expensive we wanted to keep it low...
a
we're a memory-heavy type of user
able to share the specs / details on the hardware?
w
as the rack is onsite, we recently moved out of the 1U chassis into conventional cases with big coolers, and the whole lot runs silent 🙂
I've got a load of chassis here if you need them.... they are noisy tho - drove us nuts...
CPUs we used - AMD Ryzen 9 7900 3.7GHz. it was a PITA getting the right board, and DDR5 ECC RAM was pricey. the boards have remote management - we had to import them and it took a while. these do run hot - in a 1U chassis the little copper coolers were struggling; in a larger chassis they run at half the temp with a decent fan. watch your airflow tho, so it's exhausting out of the case the right way depending on how your rack is set up.... good luck
That's an AM5 socket - so you have to be really careful with the thermal compound when fitting the cooler. Anyway - we're UK based if you need a second site for backup....
a
germany here, and sorry to say: non-EU based is a pain in a tight compliance environment like ours 😕
w
ok - if you branch into brexit land and need a partner - look us up πŸ˜‰