# lima
e
Nice - That should make migrating to a bigger colima instance easier, right?
b
How do you mean? It's more about making snapshots, not the size of the VM
e
Oh, so you can’t make a snapshot and restore it to a different-size disk? That’s too bad.
The inability to resize a disk without losing it is definitely a failing of lima/colima.
b
Oh, haven’t tried it
But qemu-img does support resize; the trick is repartitioning and resizing the filesystem on the “inside”
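For reference, the host-side half of that is just qemu-img resize on the stopped VM's disk image. A minimal Go sketch that shells out to qemu-img; the image path and the "+40G" delta are assumptions, and the partition and filesystem still have to be grown separately inside the guest (e.g. growpart + resize2fs):

```go
// grow_disk.go: grow a qcow2 image from the host side with qemu-img.
// Hypothetical sketch: the image path and the "+40G" delta are placeholders,
// and the VM must be stopped (and the image snapshot-free) before running it.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func growImage(image, delta string) error {
	// Equivalent to: qemu-img resize <image> +40G
	out, err := exec.Command("qemu-img", "resize", image, delta).CombinedOutput()
	if err != nil {
		return fmt.Errorf("qemu-img resize: %v: %s", err, out)
	}
	// The guest still sees the old partition table; the partition and the
	// filesystem must then be grown inside the VM, e.g.:
	//   growpart /dev/vda 1 && resize2fs /dev/vda1
	return nil
}

func main() {
	if err := growImage("disk.qcow2", "+40G"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("image grown; now grow the partition and filesystem inside the guest")
}
```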
e
It would be a big win to sort out that process.
b
@enough-smartphone-71732: it works the other way around, you can resize a running disk - but not a disk with snapshots
{"error": {"class": "GenericError", "desc": "Can't resize an image which has snapshots"}}
So it would be a separate command (block_resize)
For instance, ext4 does support live resizing of the filesystem
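A minimal sketch of that “inside” half, assuming the virtual disk has already been grown, an ext4 root on /dev/vda1, a lima instance named default, and growpart (from cloud-guest-utils) available in the guest; all of those are assumptions:

```go
// grow_inside.go: grow the partition and ext4 filesystem inside a lima guest
// after the virtual disk itself has been enlarged (qemu-img resize or QMP
// block_resize). The instance name and /dev/vda1 are assumptions; growpart
// comes from cloud-guest-utils and may need to be installed in the guest.
package main

import (
	"log"
	"os"
	"os/exec"
)

// runInGuest runs a command as root inside the lima instance via limactl shell.
func runInGuest(instance string, args ...string) error {
	cmd := exec.Command("limactl", append([]string{"shell", instance, "sudo"}, args...)...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	const instance = "default" // colima's instance is typically named "colima"

	// Grow partition 1 of /dev/vda to fill the newly available space.
	if err := runInGuest(instance, "growpart", "/dev/vda", "1"); err != nil {
		log.Fatal(err)
	}
	// ext4 supports online resizing, so this works on the running guest.
	if err := runInGuest(instance, "resize2fs", "/dev/vda1"); err != nil {
		log.Fatal(err)
	}
}
```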
e
Think you could explain how to do it in the case where there aren’t snapshots? Currently, all DDEV colima users who run out of space have to delete and recreate.
Yes, ext4 does it fine; I’ve certainly done it with a variety of OSs. But I sure don’t know how to do it with lima/colima.
b
(QEMU) block_resize size=214748364800 node-name=#block339
{"return": {}}
Something like that. I used another command, query-block, to find the node name
So it would be a different QMP command
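A minimal sketch of that sequence over a raw QMP socket: negotiate capabilities, use query-block to find the node name, then issue block_resize. The socket path, the choice of drive, and the target size are assumptions (214748364800 bytes is 200 GiB):

```go
// qmp_resize.go: grow a block node on a running VM via QMP block_resize.
// Sketch only: the QMP socket path, which drive to pick, and the new size
// are assumptions. Fails if the image has internal snapshots (as above).
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net"
)

type command struct {
	Execute   string      `json:"execute"`
	Arguments interface{} `json:"arguments,omitempty"`
}

// run sends one QMP command and returns its "return" payload, skipping
// any asynchronous events that arrive in between.
func run(enc *json.Encoder, dec *json.Decoder, cmd command) (json.RawMessage, error) {
	if err := enc.Encode(cmd); err != nil {
		return nil, err
	}
	for {
		var resp struct {
			Return json.RawMessage `json:"return"`
			Error  *struct {
				Class string `json:"class"`
				Desc  string `json:"desc"`
			} `json:"error"`
			Event string `json:"event"`
		}
		if err := dec.Decode(&resp); err != nil {
			return nil, err
		}
		if resp.Event != "" {
			continue // ignore async events
		}
		if resp.Error != nil {
			return nil, fmt.Errorf("%s: %s", resp.Error.Class, resp.Error.Desc)
		}
		return resp.Return, nil
	}
}

func main() {
	conn, err := net.Dial("unix", "qmp.sock") // socket path is an assumption
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	enc, dec := json.NewEncoder(conn), json.NewDecoder(conn)

	// QMP sends a greeting on connect; capabilities must be negotiated
	// before any other command is accepted.
	var greeting map[string]interface{}
	if err := dec.Decode(&greeting); err != nil {
		log.Fatal(err)
	}
	if _, err := run(enc, dec, command{Execute: "qmp_capabilities"}); err != nil {
		log.Fatal(err)
	}

	// query-block lists the drives; the node name lives under "inserted".
	ret, err := run(enc, dec, command{Execute: "query-block"})
	if err != nil {
		log.Fatal(err)
	}
	var blocks []struct {
		Device   string `json:"device"`
		Inserted *struct {
			NodeName string `json:"node-name"`
		} `json:"inserted"`
	}
	if err := json.Unmarshal(ret, &blocks); err != nil {
		log.Fatal(err)
	}

	// Resize the first inserted drive; real code would match a specific one.
	for _, b := range blocks {
		if b.Inserted == nil {
			continue
		}
		// 214748364800 bytes = 200 GiB, as in the example above.
		args := map[string]interface{}{"node-name": b.Inserted.NodeName, "size": int64(200) << 30}
		if _, err := run(enc, dec, command{Execute: "block_resize", Arguments: args}); err != nil {
			log.Fatal(err)
		}
		fmt.Println("resized node", b.Inserted.NodeName)
		break
	}
}
```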
e
Yeah, it would have to be boiled down a lot more than that.
b
Not too much more, but yeah: func (m *Monitor) BlockResize(device *string, nodeName *string, size int64) (err error)
e
I’m saying it would have to be wrapped in a much simpler setup to be useful to the average user.
b
Oh, for sure. Tried resizing the CD first; it worked poorly
but it doesn't have to involve snapshots
I don't think there is any ticket about it, though? Only the boring workaround of stop + resize/convert + start, for “shrinking”
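A minimal sketch of that stop + convert + start workaround, assuming qemu-img on the host and placeholder file names; qemu-img convert rewrites the image without the unallocated clusters, which compacts the file but does not change the virtual disk size:

```go
// compact_disk.go: the stop + convert + start workaround for reclaiming
// space from a qcow2 image. Sketch only: file names are placeholders, the
// VM must be stopped, and this rewrites the whole image (so it needs free
// space for the copy). It compacts the file; it does not change the
// virtual disk size.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	src, dst := "disk.qcow2", "disk.compacted.qcow2"

	// qemu-img convert skips unallocated and zero clusters, so the copy is
	// effectively a compacted version of the original image.
	out, err := exec.Command("qemu-img", "convert", "-O", "qcow2", src, dst).CombinedOutput()
	if err != nil {
		log.Fatalf("qemu-img convert: %v: %s", err, out)
	}

	// Swap the compacted copy into place (keep a backup in real usage).
	if err := os.Rename(dst, src); err != nil {
		log.Fatal(err)
	}
	log.Println("compacted", src)
}
```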
e
Yeah, shrinking is nice, but people live without that possibility in dozens of environments, including Docker Desktop. But enlarging is such a standard need that it hits everybody at some point. I’ve started recommending starting with 100GB for DDEV to avoid this.
b
? 100GB is the current default
docker-machine just had 20GB
e
It just takes a few images, a few volumes, and a lack of cleanup.
b
this was more about reclaiming sparse holes, but sure
e
People routinely hit the 60GB level. I think Colima’s default is 60GB, and folks definitely hit it.
b
the data disk would help, but resizing could still be useful (without having to use qemu-img)
Besides the whole emulating and pretending business, partitioning is the worst part of the container host VMs
having to decide up front how much CPU/RAM/disk you will pretend to have, instead of just using the machine
e
Yup