# harvester
b
No, because when the Node that holds the replica reboots, there's no disk available for the VM to run on.
Unless I'm not understanding your question
Do you have enough storage to replicate to 2 copies for the upgrade?
Cause you could do that and then scale back down.
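If it helps: Harvester volumes are Longhorn under the hood, so temporarily bumping a volume to 2 replicas should be a matter of patching the Longhorn `Volume` object, roughly like this (a sketch; the volume name is a placeholder, and you should double-check the field against your Longhorn version):

```yaml
# Hypothetical sketch: raise the replica count on a Longhorn volume for the
# duration of the upgrade, then set it back to 1 afterwards.
apiVersion: longhorn.io/v1beta2
kind: Volume
metadata:
  name: pvc-0123abcd          # placeholder volume name
  namespace: longhorn-system
spec:
  numberOfReplicas: 2         # temporarily 2 during the upgrade; revert to 1 after
```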
Last thing I'll mention, and sorry that it's not at all part of your question: I'd just make note that 1.4.0 is a `latest` release last I saw, and there's a big warning about not running `latest` in production and only using `stable` releases.
b
The process we are looking for: say we have a 3-node Harvester cluster with 3 VMs on replica-1 storage that form a DB cluster, and each VM is scheduled on a different node via affinity rules. We are happy for any 1 of these VMs to be turned off, as the DB cluster would still be active and could service traffic. For a zero-downtime upgrade we would like to upgrade each Harvester node one at a time, so we could turn off 1 VM at a time and still have a working DB cluster. We don't need all 3 VMs running during the upgrade.
b
Oh, you just want to ignore the safety check.
I have no idea about that.
I'm sorry
b
Yeah we are happy for it to be "stuck" until we manually turn off the VM on that harvester node
b
Coffee clearly hasn't kicked in yet for me.
The only way I've seen end users able to kick off an upgrade is through the UI and there's no settings for ignoring those checks afaik.
If you had a 4th node, I think it could migrate it and it just wouldn't get stuck, but this still isn't what you're asking about
c
Each VM has an Advanced Option called `Maintenance strategy` that may help here. The default appears to be `Migrate`, but there is a `Shutdown` option too.
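In case you want to set it outside the UI, that option looks like it maps to an annotation on the `VirtualMachine` object, something like this (the annotation key is from what I can see in the API, so verify it against your version's docs before relying on it):

```yaml
# Hypothetical sketch: setting the maintenance strategy directly on the VM.
# Annotation key recalled from Harvester's API; confirm before use.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: db-0                  # placeholder VM name
  annotations:
    harvesterhci.io/maintain-mode-strategy: ShutdownAndRestartAfterDisable
```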
b
oh nice, ShutdownAndRestartAfterDisable looks like it could be a winner if the checks cater for it. Will try on the next upgrade (the upgrade to 1.4.0 went fine after shutting the VM down)
c
I'm struggling to find documentation on it, though
c
Ahh nice, thanks. I'll give this a shot next upgrade too. I had the same issue you had on my last upgrade: I noticed Harvester would just get stuck until I stopped the VM. I'm running some of my VMs with 1 replica too
Hopefully harvester honors that setting
b
It'll probably honor it, but whether they thought to use it as an exclusion in the pre-upgrade validation is what'll make the difference, I think.
b
Did it let you start the upgrade? When I clicked upgrade I got a webhook error as a result of this change https://github.com/harvester/harvester/pull/4956
but yeah let us know!
b
@brainy-kilobyte-33711 based on the release notes it looks like it got merged in with the 1.3.x releases, so prior to that (1.2.x -> 1.3.x) it would just get stuck
c
Yeah, when I did my upgrade, I believe the precheck script and Harvester complained but it still let me do the upgrade. I didn't get the error shown in that ticket
That would make sense why I didn't see it