# longhorn-storage
I'm no expert, but I'll share my two cents. Running it converged with workloads on the same nodes hasn't been a problem for us. The obvious factor when going converged is the hardware you're trying to run it on. We only see it use 1-3% CPU on R630s with dual 14-core CPUs and 10 SSDs, across 3 nodes.
I just caught the hypervisor part. I'm curious why you'd run a hypervisor with a converged approach in production. Maybe you should check out Harvester or KubeVirt if you need VMs.
Hi Garet, thanks for your reply.
when you say converged you mean running in the same nodes longhorn, workloads and share the underlying disks via longhorn, right?
I also have a question. As far as I understand, just to state it: Longhorn (or any other storage manager) shares the volumes with the requesting pod over the network. I have only used Kubernetes on EKS and never had to worry about this. How do you isolate workload network bandwidth from storage network bandwidth? I mean, to keep storage I/O from throttling application traffic performance.
That feature is hopefully coming in 1.3. We currently run everything on 10G LAGs and haven't had any issues, but this enhancement will follow best-practice designs.
Thanks for making me look. Now we have to accommodate the storage network in 1.3 😁 I've been wanting to play with Multus for other reasons, like VLAN to Pod.
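For anyone reading later, here's a minimal sketch of what that could look like: a Multus NetworkAttachmentDefinition on a dedicated storage VLAN, with Longhorn's storage-network setting pointed at it. The interface name, VLAN ID, subnet, and resource names below are illustrative assumptions, not from this thread:

```yaml
# Hypothetical second network for Longhorn traffic: a macvlan attachment
# riding a VLAN subinterface of the LAG (bond0.100 is an assumed name).
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: storage-vlan
  namespace: longhorn-system
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "bond0.100",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "10.10.100.0/24"
      }
    }
---
# Longhorn's storage-network setting takes the attachment in
# "<namespace>/<name>" form; it can also be set from the UI.
apiVersion: longhorn.io/v1beta2
kind: Setting
metadata:
  name: storage-network
  namespace: longhorn-system
value: longhorn-system/storage-vlan
```

With something like this in place, replica traffic should ride the dedicated interface while application traffic stays on the default CNI network.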
Great to know you're not having issues on a 10G network.
Thanks for your comments!