# rancher-desktop
c
Here’s essentially what I’m running:
# Terminal 1
docker run -it -e <var1> -e <var2> -v /pathA:/containerPathA:ro -v /pathB:/containerPathB --rm <image>

# Terminal 2
docker run -it -e <var1> -e <var2> -v /pathA:/containerPathA:ro -v /pathC:/containerPathB --rm <image>

# Terminal 3
docker run -it -e <var1> -e <var2> -v /pathA:/containerPathA:ro -v /pathD:/containerPathB --rm <image>

# Terminal 4
docker run -it -e <var1> -e <var2> -v /pathA:/containerPathA:ro -v /pathE:/containerPathB --rm <image>
q
Can you do rdctl shell?
c
Yes - do you mean during a crash?
q
Yes...
I mean, you can leave it open in advance. The idea is to talk to the VM using that shell and ask it questions, e.g. top.
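A minimal sketch of that kind of check (the VM-side commands here are just examples; anything that samples CPU and memory inside the VM would do):

# Host side: open a shell into the Rancher Desktop VM, ideally before reproducing the hang
rdctl shell

# Inside the VM: sample load and memory
top
free -m
uptime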
c
Gotcha. Yeah, I should have some time later today to try crashing it again and run through this step.
q
BTW, you didn't specify which system is hosting your VM (macOS, Windows, Linux)...
c
Rancher Desktop 1.8.1 on macOS 13.1
UPDATE: I was able to replicate the error again. This time I left an rdctl shell open. However, when the runtime hung, rdctl behaved the same way - the shell was unresponsive, and when I opened another terminal window and tried rdctl there, it hung too. So both docker and rdctl lock up when this happens. The only commands for either CLI that don’t hang are the help and version options; everything else hangs. Is there any hope in switching runtimes to nerdctl?
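For reference, a rough sketch of what switching runtimes would look like, assuming the engine is changed via rdctl set (it can also be done in the Rancher Desktop preferences); the nerdctl flags mirror the docker run flags above:

# Switch Rancher Desktop from the moby (docker) engine to containerd
rdctl set --container-engine containerd

# Same invocation as before, driven through nerdctl instead of docker
nerdctl run -it -e <var1> -e <var2> -v /pathA:/containerPathA:ro -v /pathB:/containerPathB --rm <image>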
For more context, the containers I’m running are executing Terraform via Ansible, but they each use separate source volumes and each points at different cloud resources, so they aren’t writing over each other or anything like that. I have no issues unless I run them simultaneously.
q
Interesting. To me, that feels like qemu or the ssh bridge getting stuck. You're past where I can help. I mean, you could look into setting up strace or qemu logging, but I haven't done much of that in this area.
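A rough sketch of the strace route, assuming the VM is Alpine-based with network access and sudo available inside rdctl shell; dockerd is just the most likely process to watch:

# Host side: open a shell into the VM
rdctl shell

# Inside the VM: install strace, then attach to the docker daemon and log syscalls to a file
sudo apk add strace
sudo strace -f -tt -p "$(pgrep -o dockerd)" -o /tmp/dockerd.trace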
c
Appreciate the help anyway! Was looking at my VM config - it’s set to only 2 CPUs and 2 GB of RAM, which feels very low. I ran another test (which didn’t crash this time 🤷‍♂️), but RAM usage never looked close to the limit. I’ll probably try bumping up to 8 GB of RAM and maybe 4 CPUs just to see if that helps.
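For anyone doing the same bump, a quick sketch via rdctl (the same values can be set in the Rancher Desktop preferences; the change may need a VM restart to take effect):

# Bump the VM to 8 GB of memory and 4 CPUs
rdctl set --memory 8 --cpus 4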
Just in case anyone else comes across this thread: it does not appear to be a resource problem. After bumping up to 16 GB of RAM, I’m still seeing this behavior.
Will continue conversation over there