Usage of torch.nn.MSELoss

CLASS torch.nn.MSELoss(size_average=None, reduce=None, reduction='mean')

torch.nn.functional.mse_loss(input, target, size_average=None, reduce=None, reduction='mean') → Tensor
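
For reference, the computation itself (as in the official PyTorch documentation):

l_n = (x_n - y_n)^2          # reduction='none': one loss per element
L = mean(l_1, ..., l_N)      # reduction='mean' (default)
L = sum(l_1, ..., l_N)       # reduction='sum'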


Parameters

  • size_average: defaults to True, in which case the losses over a batch are averaged; ignored when reduce is False;

  • reduce: defaults to True, in which case the losses over a batch are reduced to their mean or sum;

    • reduce = False: size_average is ignored and the returned loss is a tensor with the same shape as the input (one loss per element);

    • reduce = True: size_average takes effect and the returned loss is a scalar;

      • size_average = True: returns loss.mean();
      • size_average = False: returns loss.sum();
  • reduction: 'none' | 'mean' | 'sum', defaulting to 'mean'. size_average and reduce are the deprecated spellings: if either of them is specified, reduction is ignored, and vice versa. The mapping between the two APIs is shown in the sketch below.
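
Since size_average and reduce are deprecated in recent PyTorch, here is a minimal sketch of how the legacy flag combinations map onto reduction (the tensor shapes are arbitrary):

import torch
import torch.nn as nn

input = torch.randn(3, 5)
target = torch.randn(3, 5)

# Legacy flags                      ->  modern argument
#   reduce=False                        reduction='none'
#   reduce=True, size_average=True      reduction='mean' (default)
#   reduce=True, size_average=False     reduction='sum'
per_elem = nn.MSELoss(reduction='none')(input, target)  # shape (3, 5)
mean_val = nn.MSELoss(reduction='mean')(input, target)  # scalar
sum_val = nn.MSELoss(reduction='sum')(input, target)    # scalar

assert torch.isclose(mean_val, per_elem.mean())
assert torch.isclose(sum_val, per_elem.sum())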

Input

"mse_cpu" not implemented for 'Int'
  • Input: (N,∗) where *∗ means, any number of additional dimensions;input.float()
  • Target: (N,∗) , same shape as the input;Target.float()
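
A minimal sketch of this dtype pitfall (the tensor values here are placeholders): integer inputs trigger the RuntimeError above, and casting to float fixes it.

import torch
import torch.nn as nn

int_input = torch.ones(2, 3, dtype=torch.int32)
int_target = torch.zeros(2, 3, dtype=torch.int32)
try:
    nn.MSELoss()(int_input, int_target)
except RuntimeError as e:
    print(e)  # "mse_cpu" not implemented for 'Int'

# Casting to floating point makes the call succeed:
print(nn.MSELoss()(int_input.float(), int_target.float()))  # tensor(1.)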
import torch
import torch.nn as nn

# reduce=False (legacy spelling of reduction='none'): per-element losses
loss = nn.MSELoss(reduce=False, size_average=False)
input = torch.randn(3, 5)
target = torch.randn(3, 5)
output = loss(input.float(), target.float())
print(output)

#output:
#tensor([[1.2459e+01, 5.8741e-02, 1.8397e-01, 4.9688e-01, 7.3362e-02],
#[8.0921e-01, 1.8580e+00, 4.5180e+00, 7.5342e-01, 4.1929e-01],
#[2.6371e-02, 1.5204e+00, 1.5778e+00, 1.1634e+00, 9.5338e-03]])
# reduce=True, size_average=True (legacy spelling of reduction='mean'): scalar mean loss
loss = nn.MSELoss(reduce=True, size_average=True)
input = torch.randn(3, 5)
target = torch.randn(3, 5)
output = loss(input, target)
print(output)

#output:
#tensor(1.2368)

A big pitfall here: be careful about the order of input and target. More concretely, target should be a value that is not updated by training, i.e., one with requires_grad=False; otherwise older PyTorch versions raise an error (and newer versions will silently compute gradients with respect to the target, which is usually not what you want)!

loss = nn.MSELoss()
input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5)  # requires_grad defaults to False
error = loss(input, target)
error.backward()
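
If the target itself comes out of a computation that tracks gradients (say, another network's output), a common fix is to detach it first. A minimal sketch, with pseudo_target standing in for any such tensor:

import torch
import torch.nn as nn

loss = nn.MSELoss()
input = torch.randn(3, 5, requires_grad=True)
pseudo_target = torch.randn(3, 5, requires_grad=True)  # e.g. another model's output

# detach() cuts the tensor out of the autograd graph, so no gradient
# is computed with respect to the target.
error = loss(input, pseudo_target.detach())
error.backward()
print(input.grad.shape)  # torch.Size([3, 5])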

Also, a note on how MSELoss reduces:

If you set loss = torch.nn.MSELoss(reduction='mean'), the final output is the sum of the squared element-wise differences of (target - input) divided by the total number of elements (batch_size x width x height), i.e., it averages over both the batch and the feature dimensions. If you only want to average over the batch, you can write it like this:

# Note that input and target here are mini-batches of shape (batch_size, ...)
loss_fn = torch.nn.MSELoss(reduction='sum')
loss = loss_fn(input, target) / target.size(0)
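
As a quick sanity check (a sketch; shapes as in the examples above), the batch-averaged value equals the element-wise mean times the number of elements per sample:

import torch

input = torch.randn(3, 5)
target = torch.randn(3, 5)

per_batch = torch.nn.MSELoss(reduction='sum')(input, target) / target.size(0)
per_elem = torch.nn.MSELoss(reduction='mean')(input, target)

# sum / batch_size == mean * (elements per sample)
assert torch.isclose(per_batch, per_elem * target[0].numel())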
