# harvester
s
I haven't had this issue. One of my hosts filled up at one point and Longhorn seemingly rebalanced volumes across the other nodes automatically. You could go into the Longhorn UI and manually evict some volumes from that node? If you wanted to expand it: put it into maintenance, add drives, and then bring it back up?
b
My latest issue is a physical 2 TiB device getting written 100% full by one single replica, due to the unfortunate combination of the 200% storage over-provisioning setting plus Harvester setting extra disks to 0% reserved by default
(an appliance writing and writing and writing, and no TRIM)
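To make the failure mode above concrete, here's a rough sketch of Longhorn's disk-scheduling check as I understand it from the docs (the formula and all numbers are illustrative assumptions, not pulled from this chat):

```python
TiB = 2**40

def schedulable(disk_size, reserved, already_scheduled, replica_size,
                over_provisioning_pct=200):
    """Return True if a disk would accept another replica.

    Approximates Longhorn's condition:
    scheduled + new <= (size - reserved) * overProvisioningPercentage / 100
    """
    budget = (disk_size - reserved) * over_provisioning_pct / 100
    return already_scheduled + replica_size <= budget

# 2 TiB disk, 0% reserved, 200% over-provisioning: the scheduler accepts
# up to 4 TiB of replicas, so one thin-provisioned replica can keep
# growing until the physical disk is completely full.
print(schedulable(2 * TiB, 0, 0, 2 * TiB))        # True
print(schedulable(2 * TiB, 0, 2 * TiB, 2 * TiB))  # True -> 4 TiB "fits"

# With ~30% reserved and 100% over-provisioning, the same request fails:
print(schedulable(2 * TiB, int(0.3 * 2 * TiB), 2 * TiB, 2 * TiB,
                  over_provisioning_pct=100))     # False
```

So with the defaults described above, nothing in the scheduler stops a single busy appliance from eating the whole device.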
l
Use e.g. LVM2 or a similar technology. This is also recommended by Longhorn.
b
That's probably the way to go. I've been having issues with the Harvester node's LVM picking up LVM volumes that live inside Longhorn volumes, which causes I/O error spam in the kernel log when machines are migrated to other nodes, so I'll have to look into the LVM config - currently I'm just ignoring everything through global_filter
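For reference, a targeted filter could look something like this in `/etc/lvm/lvm.conf` on the hosts - a sketch only; the accept pattern for the host PVs is an assumption you'd adjust to your actual disk layout (Longhorn-attached volumes show up under `/dev/longhorn/` on the node running the workload):

```
devices {
    # Reject Longhorn-attached block devices so the host's LVM never
    # scans guest LVM metadata living inside Longhorn volumes;
    # accept only the host's own PVs (example pattern, adjust as needed),
    # and reject everything else.
    global_filter = [ "r|^/dev/longhorn/.*|", "a|^/dev/sda[0-9]*$|", "r|.*|" ]
}
```

This is narrower than a blanket reject-all filter, since the host's own volume groups are still scanned and activated normally.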