PyTorch bug roundup

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [64, 256, 56, 56]], which is output 0 of ReluBackward0, is at version 24; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

This error means that a tensor needed for the backward pass was modified in-place after autograd saved it, so its version counter no longer matches the saved version and the gradient cannot be computed.
(Switching PyTorch versions and setting ReLU's `inplace=False` did not help here.)
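The failure can be reproduced in a few lines without any model. This is a standalone sketch (not the original code): ReLU saves its output for the backward pass, so an in-place write into that output bumps its version counter, and backward then raises the error above.

```python
import torch

x = torch.randn(4, requires_grad=True)
y = torch.relu(x)   # ReLU saves its output tensor for the backward pass
s = y.sum()
y[0] = 1.0          # in-place write bumps y's version counter after it was saved
try:
    s.backward()    # ReluBackward0 finds y at a newer version than expected
except RuntimeError as e:
    print(type(e).__name__)  # RuntimeError
```

As the hint in the error message suggests, wrapping the forward pass in `torch.autograd.set_detect_anomaly(True)` makes the traceback point at the exact operation that performed the in-place write.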

Solution:

# Problematic code
```python
x = self.layer2(x)
x[0] = t1 + t2  # this in-place write invalidates the saved tensor, causing the error
x = self.layer3(x)
```

# Fix
```python
x = self.layer2(x)
temp = x.clone()
temp[0] = t1 + t2  # modify a clone instead, then assign it back to x
x = temp
x = self.layer3(x)
```
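The same standalone sketch shows why the clone works: `clone` is an out-of-place, autograd-tracked copy, so writing into the clone leaves the saved tensor's version counter untouched and backward succeeds.

```python
import torch

x = torch.randn(4, requires_grad=True)
y = torch.relu(x)        # saved for backward; must stay at its saved version
temp = y.clone()         # out-of-place copy; gradients still flow through it
temp[0] = 1.0            # modify the clone, not the saved tensor
temp.sum().backward()    # no version mismatch; backward succeeds
print(x.grad is not None)  # True
```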
