RuntimeError: cuda runtime error : out of memory during PyTorch testing

I tried `Variable(x, volatile=True)`, but the `volatile` argument of `Variable` has been deprecated.

Solution: in `main.py`, wrap the entire test call in `torch.no_grad()` so no computation graph is built during inference:

    with torch.no_grad():
        tester.test()

Reference:

> These two have different goals: model.eval() will notify all your layers that you are in eval mode, that way, batchnorm or dropout layers will work in eval mode instead of training mode. torch.no_grad() impacts the autograd engine and deactivate it. It will reduce memory usage and speed up …

ref: https://discuss.pytorch.org/t/model-eval-vs-with-torch-no-grad/19615
