# harvester
I've had a node fail, and one VM had that node selected under "node scheduling". All its disks have 3 replicas, one of which is on the failed machine; the other 2 replicas are healthy. The problem is that I can't change the "node scheduling" config in Harvester: the UI lets me edit it, but the change never takes effect and the VM stays pinned to the failed node, so I can't migrate this VM to a working node at present. Anyone know how I can fix this? Harvester version 1.4.1.

I've gone into Longhorn, reduced the replica count to 2, and deleted the failed replicas on the failed node, then changed "node scheduling" to any node and tried to start the VM. The change still isn't applied and it refuses to start as "Unschedulable", so the config changes are not being saved.
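For context, my understanding is that the replica-count change in the Longhorn UI just edits the volume CR, roughly like the sketch below (the volume name is a placeholder and the apiVersion may differ between Longhorn releases):

```yaml
# Sketch of the Longhorn Volume object behind the UI's replica-count setting.
# "pvc-placeholder" stands in for the real volume name; Longhorn volumes live
# in the longhorn-system namespace.
apiVersion: longhorn.io/v1beta2
kind: Volume
metadata:
  name: pvc-placeholder
  namespace: longhorn-system
spec:
  numberOfReplicas: 2   # dropped from 3 after deleting the replicas on the failed node
```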
Basically, it doesn't look like changing the "node scheduling" is possible, despite there being available capacity and healthy replicas. Other than restoring manually, I can't see an easy way to sort this out.
I'll raise this as a bug, since it should be possible to change the node scheduling. For now I'm cloning the disk and starting a new VM from the clone as a workaround.
OK - the workaround was to edit the VM's YAML directly and change the scheduled node to a healthy one; that got me running again.
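Roughly what I edited, in case anyone else hits this. The VM is a KubeVirt VirtualMachine object underneath, so the scheduling rule sits under spec.template.spec; the names below are placeholders and the exact affinity layout will depend on how the UI wrote the rule:

```yaml
# Sketch of the node-scheduling part of the VM YAML (edit with e.g.
# `kubectl edit vm <name> -n <namespace>`). "my-vm", "default" and
# "healthy-node-1" are placeholders.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: my-vm
  namespace: default
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - healthy-node-1   # previously the failed node; point it at a live one
```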
The issue is on GH: https://github.com/harvester/harvester/issues/8351. I'll be updating to 1.4.2 soon, so I'll re-test this once it's upgraded.
Hi Craig, I have some updates on the GH ticket. Please take a look. BTW, did you try editing the node selectors using the Harvester UI?