How to resolve “RuntimeError: CUDA out of memory”?

RuntimeError: CUDA out of memory. Tried to allocate 576.00 MiB
(GPU 0; 39.42 GiB total capacity; 20.89 GiB already allocated;
403.69 MiB free; 20.90 GiB reserved in total by PyTorch)
If reserved memory is >> allocated memory try setting max_split_size_mb
to avoid fragmentation. See documentation for Memory Management and
PYTORCH_CUDA_ALLOC_CONF
The error means PyTorch could not allocate the requested block on the GPU; as the message itself hints, when reserved memory is much larger than allocated memory, fragmentation of the allocator's cache is the likely cause. The solutions are as follows:

1: In your Python script, call torch.cuda.empty_cache() before loading the model to release cached but unused blocks, and/or set the allocator configuration near the top of the script, before the first CUDA allocation:

import os
os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'max_split_size_mb:xxx'

Replace 'xxx' with a value appropriate for your GPU memory and experiment with different values to see what works best for your specific situation. Note that the option uses a colon, not an equals sign. A combined sketch is shown below.
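A minimal runnable sketch of this step, assuming a CUDA-capable GPU; the 128 MiB value is only an example to experiment from, not a recommendation:

import os

# Set before the first CUDA allocation so the caching allocator picks it up.
os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'max_split_size_mb:128'

import torch

# Release cached, unused blocks held by PyTorch's allocator.
torch.cuda.empty_cache()

# Hypothetical stand-in for loading a real model:
model = torch.nn.Linear(4096, 4096).cuda()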

2: Run export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:32 in your shell before launching the Python script; processes started from that shell inherit the setting.
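To confirm the variable reached your process, you can print it from Python (a quick sketch; PyTorch reads PYTORCH_CUDA_ALLOC_CONF when its caching allocator initializes, around the first CUDA allocation):

import os
import torch

print(os.environ.get('PYTORCH_CUDA_ALLOC_CONF'))  # expect 'max_split_size_mb:32'
x = torch.empty(1024, device='cuda')              # first allocation initializes the allocator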

3: Run nvidia-smi to monitor your GPU memory usage and identify other processes occupying the GPU, then kill any you no longer need:
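For example (replace <PID> with a process id reported in nvidia-smi's process table, and only kill processes you own and no longer need):

nvidia-smi       # per-process GPU memory usage appears in the table at the bottom
kill -9 <PID>    # terminate a stale process that is holding GPU memory

You can also inspect PyTorch's own view of GPU memory from inside the script, using the standard torch.cuda statistics API:

import torch

mib = 2 ** 20
print(f"allocated: {torch.cuda.memory_allocated() / mib:.1f} MiB")  # memory held by live tensors
print(f"reserved:  {torch.cuda.memory_reserved() / mib:.1f} MiB")   # memory cached by the allocator
print(torch.cuda.memory_summary(abbreviated=True))                  # detailed per-pool breakdown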
