# rancher-desktop
q
what version of rancher desktop?
and have you checked the logs?
i have quite a few images that are more than 5 mins old (RD 1.9.0 / m1)
is it deleting all images or only some?
p
Note that Kubernetes likes to delete unused images if you're running out of disk space, IIRC, so if you have that enabled (and if you don't need it) you could see if disabling that helps.
f
The kubelet garbage collection will only trigger once disk usage reaches 80%, so you would have to have almost 80GB of images inside the VM before it starts running. It does however delete images in order of when they were last used, so freshly built images that were never used in Kubernetes are probably the first to go.
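In upstream Kubernetes those thresholds are tunable via kubelet args; something like the following (values are illustrative, and I'm not sure Rancher Desktop lets you pass these to its k3s directly):
```
# hypothetical k3s invocation showing the kubelet GC threshold knobs
k3s server --kubelet-arg=image-gc-high-threshold=85 --kubelet-arg=image-gc-low-threshold=75
```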
The best way to deal with this would be to manually delete older images, and also to run `docker image prune` to remove all "dangling" images that are not reachable anymore.
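If you also want to clear out older tagged images, a filter like this should work (untested; the cutoff value is just an example):
```
# remove all unused images older than a week
docker image prune -a --filter "until=168h"
```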
q
Eww, it deletes the most recently built image if it was never used? yuck.
f
An alternative is using `containerd`, which keeps Kubernetes images in a separate namespace, so kubelet will not delete images that you only want to run directly.
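For example, with the containerd runtime you can list the two sets of images separately via nerdctl (assuming the standard `k8s.io` namespace that kubelet uses):
```
nerdctl --namespace k8s.io images     # images Kubernetes pulled
nerdctl --namespace default images    # images you built or pulled yourself
```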
kubelet tries to keep the current working set if it runs out of space. Why would it keep images that were never used by the cluster?
Kubernetes doesn't really expect you to build your images locally on the node; it expects to pull them on demand from a registry.
q
Sigh. It's a good thing I generally disable k8s on RD 🙂
d
Hello! The version of Rancher Desktop is 1.9.0, and I’m using the dockerd (moby) container runtime. It is deleting images that are not being used at the moment, as well as images that have been used before but aren't anymore. The machine is not running out of disk space; it has over 200GB free. Now I have only the images that RD requires, and one other image that I built which, oddly enough, is not used. Note that I’m not talking about Kubernetes, only Docker. But I guess the same thing happens in Kubernetes as well.
q
is kubernetes enabled? i disabled it
d
Enabled
q
if you aren't using it, try disabling it
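you can also toggle it from the cli, i think (the flag spelling varies between versions, so check `rdctl set --help` first):
```
rdctl set --kubernetes.enabled=false
```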
note that there are two things you could call a "machine", your mac, and a thing called "lima vm", when we talk about a machine running out of space, we mean the lima vm.
```
% rdctl shell df -h 2>/dev/null|tail -1
                         97.9G     88.7G      4.2G  95% /root
```
d
Yes, root partition in lima vm is 100%
But
```
❯ rdctl shell df -h 2>/dev/null|tail -1
:/var/folders           460.4G    255.7G    204.7G  56% /var/folders
```
q
```
% rdctl shell df -h /root 2>/dev/null
Filesystem                Size      Used Available Use% Mounted on
mount1                   97.9G     88.7G      4.2G  95% /tmp/rancher-desktop
```
why would your vm be 100%....
d
No idea
```
❯ rdctl shell df -h /root 2>/dev/null
Filesystem                Size      Used Available Use% Mounted on
/dev/disk/by-label/data-volume
                         97.9G     93.0G         0 100% /mnt/data
```
q
that seems bad
😅 1
let's look at the world:
```
rdctl shell sh -c 'df -h 2> /dev/null|sort -h -r -k3|grep %' 2>/dev/null
```
d
```
❯ rdctl shell sh -c 'df -h 2> /dev/null|sort -h -r -k3|grep %' 2>/dev/null
:/var/folders           460.4G    255.7G    204.7G  56% /var/folders
:/Volumes               460.4G    255.7G    204.7G  56% /Volumes
                        460.4G    255.7G    204.7G  56% /tmp/rancher-desktop
                        460.4G    255.7G    204.7G  56% /Users/stef
tmpfs                     3.9G    509.2M      3.4G  13% /
/dev/vda                233.4M    233.4M         0 100% /media/vda
/dev/loop0               14.1M     14.1M         0 100% /.modloop
tmpfs                     1.5G    968.0K      1.5G   0% /run
shm                       3.9G         0      3.9G   0% /dev/shm
devtmpfs                 10.0M         0     10.0M   0% /dev
cgroup_root              10.0M         0     10.0M   0% /sys/fs/cgroup
                         97.9G     93.0G         0 100% /var/lib
                         97.9G     93.0G         0 100% /usr/local
                         97.9G     93.0G         0 100% /tmp
                         97.9G     93.0G         0 100% /root
                         97.9G     93.0G         0 100% /root
                         97.9G     93.0G         0 100% /mnt/data
                         97.9G     93.0G         0 100% /home
                         97.9G     93.0G         0 100% /etc
                         35.2M     35.2M         0 100% /mnt/lima-cidata
```
q
those 100%s seem really problematic
d
I guess that's the cause of the issue. But I don't have much in this VM, in fact only what RD creates.
q
yeah, i'm not sure what's in it or how it's slicing things, i wouldn't have expected mine to be at 95%, but that gives me 5% to play w/ 🙂
d
I ran `docker system df -v` and one of the results is
```
Build cache usage: 47.63GB
```
lol
q
`Build cache usage: 23.23GB`
ok, that's where your space is going 🙂
you can increase the size of your vm's disk in settings, i'd probably just do that 🙂
sorry, while i'm technically using that part of moby, it's something i clearly don't notice 🙂
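if you want to see what's actually in that cache before clearing it, buildx should break it down (assuming RD's moby ships the buildx plugin, which i believe it does), and you can prune to a size budget instead of wiping everything:
```
docker buildx du --verbose              # itemize build cache entries
docker builder prune --keep-storage 10GB   # keep the newest ~10GB of cache
```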
d
I ran
```
docker builder prune
docker volume prune
```
and dropped to 36%
Let’s see how it goes from now on
👍 1
Thanks 😄
q
@fast-garage-66093 this needs a diagnostics