RuntimeError: CUDA out of memory. Tried to allocate 32.00 MiB || Checking GPU memory

RuntimeError: CUDA out of memory. Tried to allocate 32.00 MiB (GPU 0; 6.00 GiB total capacity; 4.40 GiB already allocated; 0 bytes free; 4.43 GiB reserved in total by PyTorch)

Traceback (most recent call last):
  File "D:/Papers to read/2022.07/DeamNet-main/DeamNet-main/train.py", line 170, in <module>
    train(epoch)
  File "D:/Papers to read/2022.07/DeamNet-main/DeamNet-main/train.py", line 73, in train
    prediction = model(input)
................
 File "D:\ProgramData\Anaconda3\envs\python36\lib\site-packages\torch\nn\functional.py", line 1136, in relu
    result = torch.relu(input)
RuntimeError: CUDA out of memory. Tried to allocate 32.00 MiB (GPU 0; 6.00 GiB total capacity; 4.40 GiB already allocated; 0 bytes free; 4.43 GiB reserved in total by PyTorch)

Process finished with exit code 1

Methods found

1. Check the GPU memory. In a cmd window, enter:

nvidia-smi.exe -h


Then enter:

nvidia-smi.exe -i 0

This shows the GPU's memory capacity and current usage.

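The same numbers can also be queried from inside PyTorch itself. A minimal sketch (the helper name `gpu_mem_report` is mine, not from the repo; `memory_reserved` requires PyTorch ≥ 1.4):

```python
import torch

def gpu_mem_report(device_index=0):
    """Return (total, allocated, reserved) bytes for one GPU, or None without CUDA."""
    if not torch.cuda.is_available():
        return None
    total = torch.cuda.get_device_properties(device_index).total_memory
    allocated = torch.cuda.memory_allocated(device_index)   # memory held by live tensors
    reserved = torch.cuda.memory_reserved(device_index)     # memory cached by the allocator
    return total, allocated, reserved

print(gpu_mem_report())
```

The "4.43 GiB reserved in total by PyTorch" in the error message corresponds to the `reserved` figure here: memory the caching allocator holds even when tensors have been freed.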

As for the GPU-selection code that search results commonly suggest, it is already present in the source code, so that is not the cause here:

os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")  # Added: use CUDA if available, otherwise fall back to the CPU
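With `device` defined this way, the model and its inputs must both be moved to that same device before the forward pass. A minimal sketch (the `nn.Linear` toy model is illustrative, not DeamNet):

```python
import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = nn.Linear(4, 2).to(device)      # move the model's parameters to `device`
x = torch.randn(8, 4, device=device)    # create the input on the same device
prediction = model(x)
print(prediction.shape)  # torch.Size([8, 2])
```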

2. Another method is to reduce the batch size.

The batch size was originally 8; after changing it to 1, training ran successfully:

parser.add_argument('--batchSize', type=int, default=1, help='training batch size') 
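For completeness, a sketch of how the `--batchSize` flag typically flows into a `DataLoader` (the toy `TensorDataset` is illustrative; in the repo the loader is built from the actual training set):

```python
import argparse
import torch
from torch.utils.data import DataLoader, TensorDataset

parser = argparse.ArgumentParser()
parser.add_argument('--batchSize', type=int, default=1, help='training batch size')
opt = parser.parse_args([])  # [] -> use defaults here; drop it when running from the CLI

dataset = TensorDataset(torch.randn(16, 3), torch.randn(16, 1))
loader = DataLoader(dataset, batch_size=opt.batchSize, shuffle=True)

# With batchSize=1, each iteration keeps only one sample's activations on the GPU,
# which is why shrinking the batch size resolves the out-of-memory error.
for inputs, targets in loader:
    pass
print(len(loader))  # 16 batches of size 1
```

The trade-off is slower, noisier training; if the smaller batch hurts convergence, gradient accumulation over several mini-batches is a common way to recover the effective batch size without the memory cost.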
