04/27/2023, 6:44 PM
Quick (hopefully) question: I'm looking to deploy Longhorn on 2 specific nodes of a 10-worker-node cluster. I came across a KB article on this, but one thing seemed a bit confusing/concerning:
> Result: workloads that use Longhorn volumes can run on any nodes. Longhorn only schedules replicas of the volumes on the labeled nodes.
Would that mean that, potentially (in my case), workloads would only have a 20% chance of getting scheduled on a node where Longhorn is running (and, by extension, where the PVs are)? 1. Am I reading this right? 2. Does this KB still apply, given that the links in it go to the v1.2.2 docs?


04/29/2023, 9:02 AM
1. I think it means that when you deploy a pod and PVC, the Longhorn volume-related workloads will only be launched on the labeled nodes. In your case, the workloads, e.g. replicas, will always be created on those two nodes. 2. Yes, the document still works with the latest version. 3. As mentioned in the official documentation, the best practice is to deploy Longhorn in an environment with at least 3 nodes.
👍 1
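For reference, the KB's approach boils down to labeling the target nodes and telling Longhorn to only create default disks on labeled nodes. A hedged sketch (the node names and the Helm values layout are assumptions on my side; double-check against your chart version):

```yaml
# Label the two target nodes first (names are placeholders):
#   kubectl label node worker-1 node.longhorn.io/create-default-disk=true
#   kubectl label node worker-2 node.longhorn.io/create-default-disk=true
#
# Then, in the Longhorn Helm chart's values, restrict default disk
# creation to the labeled nodes:
defaultSettings:
  createDefaultDiskLabeledNodes: true
```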


05/01/2023, 2:53 PM
Yeah, we're actually going to be setting the replica count to 1 because we don't actually need replication. We're basically looking for a more dynamic alternative, without all the pitfalls of some of the other dynamic provisioners that can't enforce PVC/PV quotas.
A bit outside the typical use case for Longhorn, but we're lacking better options.
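For the single-replica setup, a minimal Longhorn StorageClass sketch (the class name is a placeholder; parameter values are string-quoted as Longhorn expects):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-single-replica   # placeholder name
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "1"       # no replication, as discussed above
  staleReplicaTimeout: "30"
```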


06/01/2023, 5:35 PM
Maybe Longhorn's strict-local data locality could match your requirement (still Longhorn-managed, but local replica access?).
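If I understand strict-local correctly (it landed in newer Longhorn releases; this is an untested sketch, and the class name is a placeholder), it's set via the `dataLocality` StorageClass parameter and requires a single replica:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-strict-local     # placeholder name
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "1"           # strict-local requires a single replica
  dataLocality: "strict-local"    # replica lives on the node the volume attaches to
```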
@narrow-egg-98197 w.r.t. the node selector, it's still unclear to me. In my context, all my nodes have Longhorn components, but I want to define a StorageClass targeting only certain nodes to host the related replicas.
I guess that's close to my solution: using node and disk labels (tags). The documentation only mentions Longhorn UI usage. Is it possible to apply the labels to the Longhorn CRs with kubectl apply (on the Node CR?)
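Partially answering my own question: I believe the tags can also be set on the `nodes.longhorn.io` CR directly and then matched from a StorageClass. A sketch under those assumptions (node name, disk name, and tag values below are all placeholders; check the CRD schema for your Longhorn version):

```yaml
# Tag a Longhorn node and one of its disks via the Node CR:
apiVersion: longhorn.io/v1beta2
kind: Node
metadata:
  name: worker-1                  # placeholder node name
  namespace: longhorn-system
spec:
  tags: ["fast"]                  # node tag
  disks:
    disk-1:                       # placeholder disk name
      path: /var/lib/longhorn
      allowScheduling: true
      tags: ["ssd"]               # disk tag
---
# StorageClass that only schedules replicas on matching nodes/disks:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-fast             # placeholder name
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "1"
  nodeSelector: "fast"            # matches node tags
  diskSelector: "ssd"             # matches disk tags
```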