Comparing MSELoss and smooth_l1_loss

Test code:

import torch
import torch.nn.functional as F

conf_mask = torch.FloatTensor([0.0, 10.0, 0.0, 1.0, 1.0])
conf_data = torch.FloatTensor([10.1, 0.9, 0.0, 10.2, 10.2])

# Default reduction='mean' averages the element-wise losses
loss_fn = torch.nn.MSELoss()

x = loss_fn(conf_data, conf_mask).item()
print('MSE loss:', x)

loc_loss = F.smooth_l1_loss(conf_data, conf_mask)
print('smooth L1 loss:', loc_loss)
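
Why the gap is so large: for a residual d with |d| >= 1, MSE contributes d^2 per element while smooth L1 contributes only |d| - 0.5, so large errors dominate the MSE. A minimal element-wise sketch (the variable names below are illustrative) that reproduces both values:

import torch

conf_mask = torch.FloatTensor([0.0, 10.0, 0.0, 1.0, 1.0])
conf_data = torch.FloatTensor([10.1, 0.9, 0.0, 10.2, 10.2])

d = conf_data - conf_mask                # residuals: [10.1, -9.1, 0.0, 9.2, 9.2]
mse_elem = d ** 2                        # quadratic everywhere
huber_elem = torch.where(d.abs() < 1, 0.5 * d ** 2, d.abs() - 0.5)

print(mse_elem.mean())    # same value as torch.nn.MSELoss() above
print(huber_elem.mean())  # same value as F.smooth_l1_loss above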

Computing MSELoss on slices also inflates the loss: the default reduction is 'mean', so each slice is averaged over a different number of elements, and the slice losses no longer recombine into the full-tensor loss.

import torch

conf_mask = torch.FloatTensor([0.0, 10.0, 0.0, 1.0, 1.0])
conf_data = torch.FloatTensor([10.1, 0.9, 0.0, 10.2, 10.2])

loss_fn = torch.nn.MSELoss()  # default reduction='mean'

x = loss_fn(conf_data, conf_mask).item()
print('full tensor:', x)

x = loss_fn(conf_data[:3], conf_mask[:3]).item()
print('first 3 elements:', x)

x = loss_fn(conf_data[3:], conf_mask[3:]).item()
print('last 2 elements:', x)
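
The discrepancy disappears if you sum instead of average. A small sketch using reduction='sum' (a standard MSELoss argument) shows that the slice losses are additive:

import torch

conf_mask = torch.FloatTensor([0.0, 10.0, 0.0, 1.0, 1.0])
conf_data = torch.FloatTensor([10.1, 0.9, 0.0, 10.2, 10.2])

loss_sum = torch.nn.MSELoss(reduction='sum')

full  = loss_sum(conf_data, conf_mask).item()
part1 = loss_sum(conf_data[:3], conf_mask[:3]).item()
part2 = loss_sum(conf_data[3:], conf_mask[3:]).item()

print(full, part1 + part2)  # equal: sums are additive, means are not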

nn.SmoothL1Loss
Also known as the Huber loss: when the error is in (-1, 1) it is a squared loss; otherwise it is an L1 loss.
$$
\mathrm{loss}(x_i, y_i) =
\begin{cases}
\frac{1}{2}(x_i - y_i)^2 & \text{if } |x_i - y_i| < 1 \\
|x_i - y_i| - \frac{1}{2} & \text{otherwise}
\end{cases}
$$
Like L1Loss above, this is an element-wise operation; the subscript i denotes the i-th element of x.

import torch

# reduction='none' keeps the element-wise losses
# (replaces the deprecated reduce=False / size_average=False arguments)
loss_fn = torch.nn.SmoothL1Loss(reduction='none')
input = torch.randn(3, 4)
target = torch.randn(3, 4)
loss = loss_fn(input, target)
print(input); print(target); print(loss)
print(input.size(), target.size(), loss.size())
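
As a sanity check on the piecewise formula above, a hand-rolled version (the helper name smooth_l1_manual is illustrative) can be compared against the builtin with element-wise output:

import torch
import torch.nn.functional as F

def smooth_l1_manual(x, y):
    # 0.5 * d^2 inside (-1, 1), |d| - 0.5 outside
    d = x - y
    return torch.where(d.abs() < 1, 0.5 * d ** 2, d.abs() - 0.5)

x = torch.randn(3, 4)
y = torch.randn(3, 4)
print(torch.allclose(smooth_l1_manual(x, y),
                     F.smooth_l1_loss(x, y, reduction='none')))  # True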

nn.MSELoss
Mean squared error loss. Usage is similar to the above: loss, x, and y all have the same shape (vector or matrix), and i is the element index.
$$
\mathrm{loss}(x_i, y_i) = (x_i - y_i)^2
$$
import torch

loss_fn = torch.nn.MSELoss(reduction='none')  # element-wise output
input = torch.randn(3, 4)
target = torch.randn(3, 4)
loss = loss_fn(input, target)
print(input); print(target); print(loss)
print(input.size(), target.size(), loss.size())
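
The same kind of check works for the MSE formula (names below are illustrative): with reduction='none' the output is exactly the element-wise squared difference:

import torch

x = torch.randn(3, 4)
y = torch.randn(3, 4)
loss_fn = torch.nn.MSELoss(reduction='none')
print(torch.allclose(loss_fn(x, y), (x - y) ** 2))  # True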