PyTorch RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [4, 512, 16, 16]], which is output 0 of ConstantPadNdBackward, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later.

Reference: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [4, 512, 16, 16]], which is output 0 of ConstantPadNdBackward, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True). · Issue #23 · NVlabs/FUNIT · GitHub

The error is raised at loss.backward().
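
As the hint in the issue title suggests, wrapping the run with torch.autograd.set_detect_anomaly(True) makes the traceback point at the forward-pass operation whose output was later modified. Below is a minimal, self-contained sketch of this class of error; the shapes and ops are illustrative stand-ins, not the FUNIT code:

import torch
import torch.nn.functional as F

torch.autograd.set_detect_anomaly(True)  # traceback will now point at the pad call

x = torch.randn(4, 512, 16, 16, requires_grad=True)
y = F.pad(x, (1, 1, 1, 1))   # output of ConstantPadNd, as in the error above
z = y ** 2                   # pow saves y for its backward pass
y.relu_()                    # in-place ReLU bumps y's version counter to 1
z.sum().backward()           # RuntimeError: ... is at version 1; expected version 0

Here pow saves y for its backward pass, and the in-place relu_ bumps y's version counter from 0 to 1, which is exactly the version mismatch the error reports.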

According to that issue: "This error can be resolved by setting inplace=False in nn.ReLU and nn.LeakyReLU in blocks.py."
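
For context, here is a hypothetical block mirroring the kind of fix the issue describes; the layers are made up for illustration, and only the inplace=False flags are the point:

import torch.nn as nn

# Hypothetical conv block; with inplace=True the activations overwrite
# their input buffers, which breaks autograd if those buffers were saved
# for the backward pass.
block = nn.Sequential(
    nn.Conv2d(512, 512, kernel_size=3, padding=1),
    nn.ReLU(inplace=False),            # was inplace=True
    nn.Conv2d(512, 512, kernel_size=3, padding=1),
    nn.LeakyReLU(0.2, inplace=False),  # was inplace=True
)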

In my case, I followed the same idea and replaced F.relu with an out-of-place implementation:

import torch

def my_relu(x):
    # Out-of-place ReLU: builds a new tensor instead of overwriting x,
    # so tensors saved for backward keep their expected version.
    return torch.maximum(x, torch.zeros_like(x))
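
Note that F.relu(x) is already out-of-place by default; only F.relu(x, inplace=True) or torch.relu_(x) overwrite their input. A quick sanity check that the replacement leaves saved tensors untouched (the graph here is just for illustration):

import torch

x = torch.randn(4, 512, 16, 16, requires_grad=True)
z = x.exp()            # exp saves its output z for the backward pass
out = my_relu(z)       # out-of-place: z stays at version 0
out.sum().backward()   # succeeds
# Using torch.relu_(z) instead would modify z in place and reproduce
# the RuntimeError, because exp's backward needs the saved (unmodified) z.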
