CUDA Out of Memory Error when running ai_pickup with --use_llm

Hello,

When I stopped the run by pressing Ctrl+Z and then ran this command again:

python -m stretch.app.ai_pickup --use_llm

I still got this error, as shown in the screenshot:

Do I need to run nvidia-smi and then kill -9 [ ] every time?

Hi, @Mallak_Alqaisi,

In this case, it’s better to stop the process using Ctrl+C instead of Ctrl+Z. Ctrl+C properly terminates the program and releases the GPU memory.

When using Ctrl+Z, the process is only paused (not stopped), so it continues holding onto GPU memory in the background. That’s why you run into the CUDA out-of-memory error when you try to run the command again.

If Ctrl+C doesn’t work and you end up using Ctrl+Z, then yes, you will need to manually stop the process by finding its PID (e.g., via nvidia-smi) and running:

kill -9 <PID>

Best,
Jason

This is what I got when I used Ctrl+C and it didn't work, as shown in the screenshot. Is there another way I can stop the run instead of Ctrl+Z?

If Ctrl+C doesn’t work, there isn’t really another way besides killing the process.

You can do:

kill %1

This will stop job 1 in your shell's job list (the job you suspended with Ctrl+Z). If needed, you can force it with:

kill -9 %1