RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

Traceback (most recent call last):
  File "train.py", line 182, in <module>
    train(config)
  File "train.py", line 120, in train
    loss.backward()
  File "/home/zqzhu/anaconda3/lib/python3.7/site-packages/torch/tensor.py", line 102, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/zqzhu/anaconda3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 90, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

First, the cause of the error: while computing the loss I added a new term, and wrote it like this:

loss += new_loss

That line produced the error above. Analysis: PyTorch 0.4 changed some behavior relative to 0.3. Most importantly, 0.4 merged Tensor and Variable into a single class, and autograd now raises an error when a tensor that is still needed for gradient computation is modified in place (which is exactly what `+=` does).
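A minimal sketch that reproduces the error (this is an illustrative example, not the author's training code; `torch.sigmoid` saves its output for the backward pass, so mutating that output in place trips autograd's version check):

```python
import torch

x = torch.ones(3, requires_grad=True)
y = torch.sigmoid(x)  # sigmoid saves its output y for use in backward()
y += 1                # in-place op modifies y and bumps its version counter

try:
    y.sum().backward()
except RuntimeError as e:
    # "one of the variables needed for gradient computation has been
    # modified by an inplace operation"
    print(e)
```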
Solutions:
1. Change every inplace=True to inplace=False.
2. Replace every in-place `+=` such as loss += new_loss with out-of-place addition: loss = loss + new_loss.
3. Roll PyTorch back to 0.3, or set up a separate conda environment with PyTorch 0.3.
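Solution 2 can be sketched as follows (the loss terms here are hypothetical stand-ins; out-of-place addition builds a new tensor instead of mutating one autograd may still need):

```python
import torch

x = torch.ones(3, requires_grad=True)
loss = torch.sigmoid(x).sum()  # original loss (hypothetical example)
new_loss = (x * 2).sum()       # the extra term added later

loss = loss + new_loss         # out-of-place: safe for autograd
loss.backward()
print(x.grad)                  # gradients flow without a RuntimeError
```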

References:
记PyTorch踩过的坑 (Notes on pitfalls encountered in PyTorch)
