I successfully trained the network but got this error during validation:

RuntimeError: CUDA error: out of memory


Best Answer


The error occurs because you ran out of memory on your GPU.

One way to solve it is to reduce the batch size until your code runs without this error.

The 'CUDA out of memory' error means that the GPU's memory is insufficient to handle the current workload. To resolve the issue, there are several steps you can take:

1. Reduce batch size: Decreasing the batch size will lower the memory requirements, but may also impact the model's performance.

2. Use smaller models: Consider using a lighter model architecture that requires less memory.

3. Gradient checkpointing: Implement gradient checkpointing to trade additional computation time for lower memory usage (a minimal sketch is shown below).

4. Memory optimization: Optimize your code and make sure to release unnecessary GPU memory allocations.

By following these steps, you should be able to overcome the 'cuda out of memory' error and continue with your GPU-accelerated computations.
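As an illustration of point 3, here is a minimal, hypothetical sketch of gradient checkpointing with torch.utils.checkpoint; the layer sizes are arbitrary, and the use_reentrant=False argument is only available in recent PyTorch releases (drop it on older versions):

import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU())
        self.block2 = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU())

    def forward(self, x):
        # Activations inside checkpointed blocks are not kept in memory;
        # they are recomputed during the backward pass instead.
        x = checkpoint(self.block1, x, use_reentrant=False)
        x = checkpoint(self.block2, x, use_reentrant=False)
        return x

The backward pass recomputes the checkpointed activations, so training is somewhat slower but peaks at a lower memory footprint.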

The best way is to find the process occupying GPU memory and kill it.

Find the PID of the Python process with:

nvidia-smi

then copy the PID and kill it with:

sudo kill -9 pid

1. When you only perform validation, not training, you don't need to calculate gradients for the forward and backward passes. In that situation, your code can be placed inside a torch.no_grad() block:

with torch.no_grad():
    ...
    net = Net()
    pred_for_validation = net(input)
    ...

The code above doesn't store the computation graph for gradients, so it uses far less GPU memory.
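For context, here is a slightly fuller sketch of the same idea; Net, val_loader and device are placeholders for your own model, DataLoader and device:

import torch

net = Net().to(device)          # Net and device are placeholders for your own objects
net.eval()                      # disable dropout and switch batch-norm to inference mode

with torch.no_grad():           # no computation graph is recorded
    for inputs, labels in val_loader:   # val_loader is a placeholder DataLoader
        inputs = inputs.to(device)
        preds = net(inputs)
        # ... compute validation metrics on preds ...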

2. If you use the += operator on tensors in your code, it can keep extending the gradient graph and accumulate memory. In that case, you need to use float(), as explained here:
https://pytorch.org/docs/stable/notes/faq.html#my-model-reports-cuda-runtime-error-2-out-of-memory

Although the docs suggest float(), in my case item() also worked:

entire_loss = 0.0
for i in range(100):
    one_loss = loss_function(prediction, label)
    entire_loss += one_loss.item()

3. If you use a for loop in your training code, intermediate variables can stay alive until the entire loop ends. In that case, you can explicitly delete them after calling optimizer.step():

for one_epoch in range(100):
    ...
    optimizer.step()
    del intermediate_variable1, intermediate_variable2, ...

I had the same issue and this code worked for me :

import gc
import torch

gc.collect()
torch.cuda.empty_cache()

It might happen for a number of reasons, which I try to list below:

  1. Module parameters: check the dimensions of your modules. Linear layers that transform a big input tensor (e.g., size 1000) into another big output tensor (e.g., size 1000) require a weight matrix of size (1000, 1000) (see the sketch after this list).
  2. RNN decoder maximum steps: if you're using an RNN decoder in your architecture, avoid looping for a big number of steps. Usually, you fix a given number of decoding steps that is reasonable for your dataset.
  3. Tensors usage: minimise the number of tensors that you create. The garbage collector won't release them until they go out of scope.
  4. Batch size: incrementally increase your batch size until you go out of memory. It's a common trick that even well-known libraries implement (see the biggest_batch_first description for the BucketIterator in AllenNLP).
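Regarding point 1, here is a quick back-of-the-envelope check you can run yourself; the layer sizes are just an example:

import torch.nn as nn

layer = nn.Linear(1000, 1000)
n_params = sum(p.numel() for p in layer.parameters())   # 1000*1000 weights + 1000 biases
mib_fp32 = n_params * 4 / 1024**2                        # float32 uses 4 bytes per element
print(f"{n_params:,} parameters ~ {mib_fp32:.1f} MiB for this single layer")

Keep in mind that gradients and optimizer state multiply this figure, and activation memory scales with the batch size.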

In addition, I would recommend you have a look at the official PyTorch documentation: https://pytorch.org/docs/stable/notes/faq.html

I am a PyTorch user. In my case, the cause of this error message was actually not GPU memory, but a version mismatch between PyTorch and CUDA.

Check whether the cause really is your GPU memory with the code below.

import torch
foo = torch.tensor([1, 2, 3])
foo = foo.to('cuda')

If an error still occurs for the above code, it is better to reinstall PyTorch to match your CUDA version (in my case, this solved the problem): PyTorch install link

A similar situation can also occur with TensorFlow/Keras.

If you are getting this error in Google Colab use this code:

import torch
torch.cuda.empty_cache()

Not sure if this'll help you or not, but this is what solved the issue for me:

export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

Nothing else in this thread helped.

In my experience, this is not a typical CUDA OOM Error caused by PyTorch trying to allocate more memory on the GPU than you currently have.

The giveaway is the distinct lack of the following text in the error message.

Tried to allocate xxx GiB (GPU Y; XXX GiB total capacity; yyy MiB already allocated; zzz GiB free; aaa MiB reserved in total by PyTorch)

In my experience, this is an Nvidia driver issue. A reboot has always solved the issue for me, but there are times when a reboot is not possible.

One alternative to rebooting is to kill all Nvidia processes and reload the drivers manually. I always refer to the unaccepted answer of this question written by Comzyh when performing the driver cycle. Hope this helps anyone trapped in this situation.

If someone arrives here because of fast.ai, the batch size of a loader such as ImageDataLoaders can be controlled via bs=N where N is the size of the batch.

My dedicated GPU is limited to 2 GB of memory; using bs=8 in the following example worked in my situation:

from fastai.vision.all import *

path = untar_data(URLs.PETS)/'images'

def is_cat(x): return x[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(244), num_workers=0, bs=8)

learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)

Problem solved by the following code:

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '2,3'

If you're running Keras/TF in Jupyter on a local server and another notebook is open which was accessing the GPU, you can also get this error. Just halt and close the other notebook(s). This can occur even if the other notebook isn't actively running anything.

This is distinct from PyTorch OOM errors, which typically refer to PyTorch's allocation of GPU RAM and are of the form

OutOfMemoryError: CUDA out of memory. Tried to allocate 734.00 MiB (GPU 0; 7.79 GiB total capacity; 5.20 GiB already allocated; 139.94 MiB free; 6.78 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Because PyTorch manages a subset of GPU RAM for a given job, it can sometimes throw an OOM error even though there's sufficient RAM available on the GPU (just not enough in Torch's self-allocation).

These errors can be a bit obscure to troubleshoot, but generally three techniques can be helpful:

  1. at the head of your notebook, add these lines: import os; os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:64"
  2. delete objects that are on the GPU as soon as you don't need them anymore
  3. reduce things like batch_size in training or testing scenarios

You can monitor GPU RAM simplistically with watch nvidia-smi:

Every 2.0s: nvidia-smi     numbaCruncha123: Wed May 31 11:30:57 2023

Wed May 31 11:30:57 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.108.03    Driver Version: 510.108.03    CUDA Version: 11.6   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:26:00.0 Off |                  N/A |
| 37%   33C    P2    34W / 175W |   7915MiB /  8192MiB |      3%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      2905      C   ...user/z_Venv/NC/bin/python     1641MiB |
|    0   N/A  N/A     31511      C   ...user/z_Venv/NC/bin/python     6271MiB |
+-----------------------------------------------------------------------------+

This will tell you what's using RAM across the entire GPU.

Note: if you've got a notebook running but don't see anything here, it's possible you're running on the CPU.
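If you prefer to check from inside the Python process rather than with watch nvidia-smi, PyTorch also exposes allocator statistics; a minimal sketch:

import torch

# Figures are in bytes and describe the current CUDA device
print(torch.cuda.memory_allocated() / 1024**2, "MiB currently held by tensors")
print(torch.cuda.memory_reserved() / 1024**2, "MiB reserved by PyTorch's caching allocator")
print(torch.cuda.memory_summary())   # detailed breakdown, handy when chasing fragmentation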

Find out what other processes are also using the GPU and free up that space.

Find the PID of the Python process by running:

nvidia-smi

and kill it using

sudo kill -9 pid

I had this same error: RuntimeError: CUDA error: out of memory

I was able to resolve this on a machine with 4 GPUs by first running nvidia-smi and learning that GPU 1 was already being used at full capacity by another user, which caused the error because my script also tried to use that GPU. I then ran export CUDA_VISIBLE_DEVICES=2,3,4 on the CLI. My script now looks only at GPUs 2, 3, and 4, ignoring GPU 1.

In my case, my code didn't actually need a GPU but was trying to use one, so I set export CUDA_VISIBLE_DEVICES="" and now it runs on the CPU without attempting to use the GPU.
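For reference, here is a minimal sketch of the same idea done from Python instead of the shell, assuming the variable is set before anything initialises CUDA; it also shows how to verify which devices your script can actually see:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "2,3,4"   # must be set before CUDA is initialised

import torch
print(torch.cuda.is_available())    # False if CUDA_VISIBLE_DEVICES=""
print(torch.cuda.device_count())    # 3 here; the visible GPUs are re-indexed as 0, 1, 2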

I faced the same issue with my computer. All you have to do is customize your configuration file to match your computer's specifications. It turns out my machine only handles image sizes below 600 x 600, and when I adjusted that in the configuration file, the program ran smoothly.

[Picture describing my cfg file]