Thanks for the hints. The reason I'm asking for guidance is that I don't know the correct operation process. I tried `kubectl drain` on the node with the --ignore-daemonsets and --delete-emptydir-data flags, tried cordoning the node, tried putting it into maintenance mode via the Harvester UI, and also tried combinations of these operations, but I don't know how to verify whether the node can be rebooted safely. @salmon-city-57654, I'm not sure which operation sets the NoSchedule taint, kubectl drain or the Harvester UI maintenance mode. Replica rebuilding happens on the nodes that have an SSD and the schedulable flag on, never on the one being rebooted. I know the situation I'm facing might be my misconfiguration rather than a Harvester or Longhorn issue, so I'm just asking for pointers to the docs. Thanks again.
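For reference, this is roughly what I ran (the node name is just a placeholder, and the Longhorn volume check at the end is only my guess at how to confirm it's safe to reboot, not something I found in the docs):

```sh
# Cordon first, then drain (what I tried before rebooting)
kubectl cordon harvester-node-1
kubectl drain harvester-node-1 --ignore-daemonsets --delete-emptydir-data

# Check whether drain or Harvester maintenance mode set the NoSchedule taint
kubectl describe node harvester-node-1 | grep -A 3 Taints

# My guess at a pre-reboot safety check: look at Longhorn volume robustness
kubectl -n longhorn-system get volumes.longhorn.io
```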