PyTorch: grad can be implicitly created only for scalar outputs

  • Error message:
  File "***.py", line 101, in train
    loss.backward()
  File "***/anaconda3/envs/ngepc/lib/python3.8/site-packages/torch/tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "***/anaconda3/envs/ngepc/lib/python3.8/site-packages/torch/autograd/__init__.py", line 126, in backward
    grad_tensors_ = _make_grads(tensors, grad_tensors_)
  File "***/anaconda3/envs/ngepc/lib/python3.8/site-packages/torch/autograd/__init__.py", line 50, in _make_grads
    raise RuntimeError("grad can be implicitly created only for scalar outputs")
RuntimeError: grad can be implicitly created only for scalar outputs
  • Cause: the grad_outputs argument of autograd.grad() (and the gradient argument of backward()) defaults to None, which only works when the output is a scalar. With reduction='none', nn.MSELoss applies no reduction and returns a matrix of per-element losses, so the elements must be summed or averaged to produce a scalar loss instead of a matrix. Change:

self.l = nn.MSELoss(reduction='none')

to:

self.l = nn.MSELoss(reduction='mean')
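
For reference, here is a minimal self-contained sketch of both behaviors; the tensor shapes and names are illustrative, not taken from the original training code:

import torch
import torch.nn as nn

pred = torch.randn(4, 3, requires_grad=True)
target = torch.randn(4, 3)

# reduction='none' keeps a 4x3 matrix of per-element losses
loss_matrix = nn.MSELoss(reduction='none')(pred, target)
# loss_matrix.backward()  # would raise the RuntimeError above

# reduction='mean' collapses the matrix to a scalar, so backward() works
loss_scalar = nn.MSELoss(reduction='mean')(pred, target)
loss_scalar.backward()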

Analogously, changing:

 import torch
 from torch.autograd import Variable
 a = Variable(torch.FloatTensor([1, 2, 3]), requires_grad=True)
 torch.autograd.grad(outputs=a, inputs=a)  # raises the same RuntimeError: a is not a scalar

to:

 import torch
 from torch.autograd import Variable
 a = Variable(torch.FloatTensor([1, 2, 3]), requires_grad=True)
 torch.autograd.grad(outputs=a.sum() / a.shape[0], inputs=a)  # a is 1-D, so a.shape[0], not a.shape[1]
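
If the non-scalar output is actually what you want, the alternative to reducing is to supply the gradient explicitly. A minimal sketch using the grad_outputs argument of torch.autograd.grad() and the equivalent gradient argument of Tensor.backward():

import torch

a = torch.tensor([1., 2., 3.], requires_grad=True)
b = a * 2  # non-scalar output

# pass grad_outputs explicitly instead of reducing b to a scalar
g, = torch.autograd.grad(outputs=b, inputs=a, grad_outputs=torch.ones_like(b))
print(g)  # tensor([2., 2., 2.])

# equivalent for backward(): pass the gradient tensor as an argument
b = a * 2
b.backward(torch.ones_like(b))
print(a.grad)  # tensor([2., 2., 2.])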
