# k3s
s
This sounds interesting. 😉
b
Cool!!!! Really interesting indeed
c
absolutely, would love to hear about that!
f
it's for ML pipelines
s
it would (likely) be useful to show some info on performance, latency, error handling, packet size, etc., to help understand the motivating factors... every decision comes with its own side order of thorns and tiger traps, y'know
for some reason the 'smoke signals' and 'carrier pigeon' backends seem to be especially lossy 🙂
f
I'm not sure what you mean, I was just stating my use case for the ask
s
Well, I was wondering why a faster fabric was necessary for GPUs... along what axis you were measuring to assess 'faster', and what struggles you've run into... clearly it's important to you, or you wouldn't be building it 😉 I was just curious what those motivations were... because there's always a trade-off
was just idle curiosity, really.
c
Uh, subscribe 🙂 We're also using GPUs heavily, would be really interested to see what you're up to.
πŸ‘ 1
f
I need 10,000 GPUs
...I'm not kidding
graph neural network problem
ability to go from 5 to 1,000 GPUs in a minute based on the compute need of a graph slice; up to 10K.
then the elastic fabric scales back down once compute is finished for that sub-graph
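(For anyone following along: the scale-up/scale-down behavior described above can be sketched as a toy policy. Everything here is illustrative, not a real fabric API: the names, the min/max bounds, and the assumed per-GPU node capacity are placeholders for whatever the actual system uses.)

```python
# Toy sketch of an elastic GPU pool: size the pool to the compute need
# of one graph slice, clamped to a 5..10,000 GPU range, then release
# workers once the sub-graph is finished. All names are hypothetical.

MIN_GPUS = 5
MAX_GPUS = 10_000
NODES_PER_GPU = 50_000  # assumed capacity per worker, purely illustrative


def desired_gpus(slice_num_nodes: int) -> int:
    """Pick a worker count from the size of one graph slice."""
    need = -(-slice_num_nodes // NODES_PER_GPU)  # ceiling division
    return max(MIN_GPUS, min(MAX_GPUS, need))


class ElasticPool:
    """Stand-in for the elastic fabric: just tracks current pool size."""

    def __init__(self) -> None:
        self.size = MIN_GPUS

    def scale_to(self, n: int) -> None:
        # In a real system this would request/release accelerators.
        self.size = n

    def release(self) -> None:
        """Scale back down once compute for the sub-graph is finished."""
        self.size = MIN_GPUS


pool = ElasticPool()
pool.scale_to(desired_gpus(42_000_000))  # a 42M-node slice -> 840 GPUs
print(pool.size)  # 840
pool.release()
print(pool.size)  # back to 5
```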
c
That's a hell of a use case, dang.
f
yea, GNNs are rather beefy