# longhorn-storage
b
I'm thinking about it. The only ways that come to my mind would be to either:
• scale dependent workloads down to 0, which will automatically detach the volumes. Longhorn could remember the scale setting and restore it afterwards. I can see how this solution could be a bit cumbersome; it is not the business of Longhorn to interfere with workloads, but it already does so by being able to remove pods of broken volumes.
• an easier solution: save the setting and only apply it when volumes are recreated/mounted. I don't see how that would be a problem; it's not like applying the setting would suddenly terminate all the Longhorn-system-managed pods, would it?
I think saving the setting and letting it propagate slowly would be the best way to do it. It's like upgrading Longhorn: it happens sort of transparently while the engine pods etc. are upgraded.
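As a rough illustration of the first option, here is a minimal client-go sketch that remembers a Deployment's replica count in an annotation before scaling it to zero (which detaches its volumes) and restores it afterwards. The annotation key and function names are invented for illustration; this is not how Longhorn itself manages workloads.

```go
// Hypothetical sketch of the first option: scale a dependent Deployment to
// zero so its Longhorn volumes detach, remembering the previous replica count
// in an annotation so the original scale can be restored afterwards.
// The annotation key is invented for illustration only.
package example

import (
	"context"
	"fmt"
	"strconv"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

const savedReplicasAnno = "example.longhorn.io/saved-replicas" // hypothetical key

// scaleDownAndRemember records the current replica count, then scales to zero.
func scaleDownAndRemember(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	dep, err := c.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	replicas := int32(1)
	if dep.Spec.Replicas != nil {
		replicas = *dep.Spec.Replicas
	}
	if dep.Annotations == nil {
		dep.Annotations = map[string]string{}
	}
	dep.Annotations[savedReplicasAnno] = strconv.Itoa(int(replicas))
	zero := int32(0)
	dep.Spec.Replicas = &zero
	_, err = c.AppsV1().Deployments(ns).Update(ctx, dep, metav1.UpdateOptions{})
	return err
}

// restoreScale scales the Deployment back to the remembered replica count.
func restoreScale(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	dep, err := c.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	saved, err := strconv.Atoi(dep.Annotations[savedReplicasAnno])
	if err != nil {
		return fmt.Errorf("no saved replica count on %s/%s: %w", ns, name, err)
	}
	replicas := int32(saved)
	dep.Spec.Replicas = &replicas
	delete(dep.Annotations, savedReplicasAnno)
	_, err = c.AppsV1().Deployments(ns).Update(ctx, dep, metav1.UpdateOptions{})
	return err
}
```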
i
@famous-journalist-11332 @famous-shampoo-18483 Do you have any ideas about this question?
l
Option 2 is the most logical one to me… and thank you for bringing this to the table @bland-painting-61617
👍 1
b
I think this would be a good way to handle it. Thanks for taking my feedback 😊
f
IIRC, we raised a similar idea before: when a user changes a setting in the danger zone (which typically requires all volumes to be detached), Longhorn should enqueue the request and then automatically apply it once there is a safe chance to do so. For example, if there is a single moment in which all volumes/instances on a node are stopped, Longhorn should apply the setting before a new instance/volume starts on that node. Not sure if there is a ticket tracking this.
👍 3
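As a rough sketch of that "enqueue the request, apply it when safe" idea, using made-up types rather than Longhorn's real datastore or controllers: a pending danger-zone setting is remembered and only flushed to a node at a moment when no volume instances are running there, i.e. before a new instance starts on that node.

```go
// A sketch of the "enqueue the request, apply it when safe" idea using
// made-up types; this is not Longhorn's real datastore or controller code.
package example

import "sync"

// PendingSetting is a danger-zone setting change that could not be applied
// immediately because volumes were still attached.
type PendingSetting struct {
	Name  string
	Value string
}

// Node stands in for whatever tracks the engine/replica instances on a node.
type Node struct {
	Name             string
	RunningInstances int               // instances currently running on the node
	AppliedSettings  map[string]string // settings currently in effect there
}

// SettingQueue remembers requested settings until each node hits a safe moment.
type SettingQueue struct {
	mu      sync.Mutex
	pending map[string]PendingSetting
}

func NewSettingQueue() *SettingQueue {
	return &SettingQueue{pending: map[string]PendingSetting{}}
}

// Request records the desired value instead of rejecting the update outright.
func (q *SettingQueue) Request(s PendingSetting) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.pending[s.Name] = s
}

// ReconcileNode would be called before starting a new instance on a node:
// if the node is momentarily idle, flush the pending settings to it first.
func (q *SettingQueue) ReconcileNode(n *Node) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if n.RunningInstances > 0 {
		return // not a safe moment; keep the requests queued
	}
	if n.AppliedSettings == nil {
		n.AppliedSettings = map[string]string{}
	}
	for name, s := range q.pending {
		n.AppliedSettings[name] = s.Value
	}
	// Entries stay queued so that other nodes can also pick them up during
	// their own idle moments; per-node completion tracking is omitted here.
}
```

The setting request is never rejected; it simply waits until each node naturally reaches an instance-free moment, which matches the "let it propagate slowly" behaviour described above.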