Blood and tears: CUDA out of memory

Problem description

I ran into an open-source codebase that does not print the epoch or iteration. The model was in fact training, but at the very end, when saving the model (that is, during validation), it crashed with CUDA out of memory. That made me believe the model had been failing right from the start and had never trained successfully. I went from a 16 GB GPU to a 32 GB GPU to a 48 GB GPU, and after hitting CUDA out of memory again and again, I gave up hope: this was clearly not a VRAM problem but a problem in the model code itself.

For example:

RuntimeError: CUDA out of memory. Tried to allocate 16.00 GiB (GPU 0; 15.73 GiB total capacity; 5.45 GiB already allocated; 606.88 MiB free; 13.47 GiB reserved in total by PyTorch)
  0%|                                                                                                                          | 0/2 [00:23<?, ?image/s]

When this happens, you must read the earlier part of the traceback carefully and work out exactly where the problem is coming from.
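Besides reading the traceback, it can help to print PyTorch's own memory statistics right before the failing call, to see whether memory is genuinely exhausted or mostly sitting in the caching allocator. This uses the standard torch.cuda API and is not specific to this repository:

import torch

# Memory held by live tensors vs. memory reserved by the caching allocator.
# A large gap between the two usually points to fragmentation rather than a real shortage.
print(f"allocated: {torch.cuda.memory_allocated() / 1024**3:.2f} GiB")
print(f"reserved:  {torch.cuda.memory_reserved() / 1024**3:.2f} GiB")

# Full per-device breakdown; print it just before the line that raises the OOM.
print(torch.cuda.memory_summary(abbreviated=True))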

Cause analysis:

Traceback (most recent call last):
  File "basicsr/train.py", line 244, in <module>
    train_pipeline(root_path)
  File "basicsr/train.py", line 237, in train_pipeline
    model.validation(val_loader, current_iter, tb_logger, opt['val']['save_img'])
  File "/home/iecy/pycharm/Openbayes/QuanTexSR-main/basicsr/models/base_model.py", line 48, in validation
    self.nondist_validation(dataloader, current_iter, tb_logger, save_img, save_as_dir)
  File "/home/iecy/pycharm/Openbayes/QuanTexSR-main/basicsr/models/qsr_model.py", line 289, in nondist_validation
    self.test()
  File "/home/iecy/pycharm/Openbayes/QuanTexSR-main/basicsr/models/qsr_model.py", line 253, in test
    self.output = net_g.test(lq_input)
  File "/home/iecy/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/home/iecy/pycharm/Openbayes/QuanTexSR-main/basicsr/archs/quantsr_arch.py", line 417, in test
    input = self.content_model(input)
  File "/home/iecy/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/iecy/pycharm/Openbayes/QuanTexSR-main/basicsr/archs/rrdbnet_arch.py", line 117, in forward
    feat = self.lrelu(self.conv_up2(F.interpolate(feat, scale_factor=2, mode='nearest')))
  File "/home/iecy/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/functional.py", line 3690, in interpolate
    return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors)

One of the lines in the traceback above is the real culprit. After I fixed it, CUDA out of memory never appeared again and validation ran through:

Test 0801_s001: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:10<00:00, 10.36s/image]
2022-04-21 18:43:44,104 INFO: Validation General_Image_Valid
         # psnr: 8.9772 Best: 8.9772 @ 0 iter
         # ssim: 0.0348 Best: 0.0348 @ 0 iter
         # lpips: 0.9741        Best: 0.9741 @ 0 iter

Solutions:

1. Cause 1: find the failing line and wrap the inference call in torch.no_grad(), so that no computation graph (and none of its intermediate activations) is kept:

with torch.no_grad():
    outputs = Net_(inputs)  # the line that was raising the OOM error
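For context, here is a minimal sketch of what a validation loop looks like with that fix applied; net_g, val_loader, and compute_metrics are placeholder names, not the repository's actual ones:

import torch

net_g.eval()                  # switch off dropout / batch-norm updates
with torch.no_grad():         # no graph is built, so activations are freed immediately
    for lq, gt in val_loader:
        lq = lq.cuda(non_blocking=True)
        output = net_g(lq)
        compute_metrics(output.cpu(), gt)  # move results off the GPU before the next batch
torch.cuda.empty_cache()      # optionally release cached blocks back to the driver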

2. Cause 2: find the failing line, figure out what it is actually doing, and fix that piece of code.

3. Cause 3: the wrong GPU(s) are selected.

os.environ["CUDA_VISIBLE_DEVICES"] = "0,2,3"
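Note that CUDA_VISIBLE_DEVICES only takes effect if it is set before CUDA is initialized, so the safest place for it is at the very top of the entry script; the device indices below are just an example:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,2,3"  # must be set before the first CUDA call

import torch
print(torch.cuda.device_count())  # should report 3: only the GPUs listed above are visible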

4. Reduce the batch size and the input size.
But keep in mind: if the error still shows up no matter how small you make the batch size and the input, the problem is in the model code itself.
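For reference, the batch size is usually set where the DataLoader is built (or in the training YAML for BasicSR-style repositories). A minimal sketch, assuming a standard PyTorch DataLoader and a placeholder train_dataset:

from torch.utils.data import DataLoader

# Halving batch_size roughly halves the activation memory used per forward pass.
train_loader = DataLoader(train_dataset, batch_size=4, shuffle=True,
                          num_workers=4, pin_memory=True)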

Update:

Since the error was raised during validation, it was most likely my validation images. The inputs at the time were 2048*2048, which is probably what caused the CUDA out of memory.
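In that case, a common workaround for super-resolution models is to run validation tile by tile instead of pushing the entire 2048*2048 image through the network at once, so the upsampling layers never see a full-size feature map. This is only a rough sketch of the idea, not the code actually used in QuanTexSR; net_g, the 4x upscaling factor, and the tile size are assumptions:

import torch

def tiled_inference(net_g, lq, tile=512, scale=4):
    """Run net_g over non-overlapping tiles so peak GPU memory stays bounded."""
    b, c, h, w = lq.shape
    out = lq.new_zeros(b, c, h * scale, w * scale)  # assumes the output has the same channel count
    with torch.no_grad():
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                patch = lq[:, :, y:y + tile, x:x + tile]
                sr = net_g(patch)
                ph, pw = patch.shape[2], patch.shape[3]
                out[:, :, y * scale:(y + ph) * scale, x * scale:(x + pw) * scale] = sr
    return out

# Non-overlapping tiles can leave visible seams; overlapping tiles with blending
# avoid that at the cost of a bit more computation.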
