Loss Functions in PyTorch

Reposted from: http://sshuair.com/2017/10/21/pytorch-loss/


       There are many loss functions in deep learning; common ones include L1, L2, hinge loss and cross entropy. Their ultimate purpose is to measure the difference between the prediction $f(x)$ and the ground truth $y$, and the optimizer's job is to minimize that difference; once the loss value has stabilized, the parameters $W$ of $f(x)$ are at their best. Different loss functions suit different scenarios, and their implementations in the various deep learning frameworks differ only in minor details. Here PyTorch is used to walk through the common loss functions. We start by constructing a prediction $\hat{y}$ and a ground truth $y$.

Cross Entropy

Cross entropy comes from Shannon's information theory. Put simply, it measures the effort required to remove the uncertainty of a system when, given the true distribution $p_k$, we use the strategy $f(x)$ prescribed by a non-true distribution $q_k$. The lower the cross entropy, the better the strategy; we always minimize it, because a smaller cross entropy shows that the strategy produced by the algorithm is closer to the optimal one, which in turn means that the estimated (non-true) distribution is closer to the true distribution. From the information-theoretic point of view, the cross-entropy loss actually comes from the KL divergence; the final step of the derivation simply turns out to be equivalent to the cross-entropy formula:

$$H(p, q) = -\sum_{k=1}^{N} p_k \log q_k$$

Maximum likelihood estimation, negative log likelihood (NLL), KL divergence and cross entropy are in fact equivalent and can be derived from one another; MSE can also be derived from cross entropy (see Deep Learning Book, p. 132).
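
To make the KL connection concrete, here is a minimal NumPy sketch (the distributions p and q below are made up for illustration) that checks $H(p, q) = H(p) + D_{KL}(p \| q)$:

import numpy as np

p = np.array([0.1, 0.2, 0.6, 0.05, 0.05])  # "true" distribution, made up for illustration
q = np.array([0.2, 0.2, 0.4, 0.1, 0.1])    # "predicted" distribution, made up for illustration

entropy = -np.sum(p * np.log(p))            # H(p)
cross_entropy = -np.sum(p * np.log(q))      # H(p, q)
kl = np.sum(p * np.log(p / q))              # D_KL(p || q)

print(cross_entropy, entropy + kl)          # the two numbers agree

Minimizing the cross entropy with respect to q is therefore the same as minimizing the KL divergence, since $H(p)$ does not depend on $q$.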

Cross entropy can be used for classification problems and also for semantic segmentation. For classification, the output layer is usually a Sigmoid or a Softmax, although the raw weighted outputs can also be used directly. The PyTorch loss functions related to cross entropy include:

  • CrossEntropyLoss: combines LogSoftMax and NLLLoss in one single class, i.e. the network does not need any extra output layer at the end; this loss function packages it all for us;
  • NLLLoss: the negative log likelihood loss; to obtain a log distribution you need to add a LogSoftmax layer at the end of the network
  • NLLLoss2d: the two-dimensional negative log likelihood loss, mostly used for segmentation problems
  • BCELoss: binary cross entropy, usually for binary classification although it can also handle multi-class problems; it is normally paired with a sigmoid on the last layer of the network, and the target $y$ has to be one-hot encoded; BCELoss can also be used for multi-label classification
  • BCEWithLogitsLoss: combines a Sigmoid layer and the BCELoss in one class (see the short sketch after this list)
  • KLDivLoss: TODO
  • PoissonNLLLoss: TODO
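
A minimal sketch of the BCEWithLogitsLoss point above (the logits and targets are made up for illustration, and the old-style Variable API used in the rest of this post is kept): applying BCEWithLogitsLoss to raw scores should give the same value as a sigmoid followed by BCELoss.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.autograd as autograd

# made-up raw network outputs (logits) and binary targets
logits = autograd.Variable(torch.FloatTensor([[1.5, -0.3], [0.2, 2.0]]))
targets = autograd.Variable(torch.FloatTensor([[1, 0], [0, 1]]))

loss_a = nn.BCEWithLogitsLoss()(logits, targets)   # sigmoid is applied internally
loss_b = nn.BCELoss()(F.sigmoid(logits), targets)  # sigmoid applied explicitly
print(loss_a, loss_b)                              # the two values should match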

Below, PyTorch is used to illustrate the loss functions listed above.

CrossEntropyLoss

In PyTorch, CrossEntropyLoss is computed in two steps: the first computes the log softmax, the second computes the cross entropy (or, equivalently, the negative log likelihood). With CrossEntropyLoss there is no need to add softmax and log layers at the end of the network; the raw fully connected output can be fed in directly. With NLLLoss, on the other hand, a softmax plus log (i.e. LogSoftmax) layer must be added as the last layer when defining the network.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.autograd as autograd
import numpy as np

# Prediction f(x): construct a sample, the output layer of the network
inputs_tensor = torch.FloatTensor( [
 [10, 2, 1,-2,-3],
 [-1,-6,-0,-3,-5],
 [-5, 4, 8, 2, 1]
 ])

# Ground truth y
targets_tensor = torch.LongTensor([1,3,2])
# targets_tensor = torch.LongTensor([1])

inputs_variable = autograd.Variable(inputs_tensor, requires_grad=True) 
targets_variable = autograd.Variable(targets_tensor)
print('input tensor(nBatch x nClasses): {}'.format(inputs_tensor.shape))
print('target tensor shape: {}'.format(targets_tensor.shape))
input tensor(nBatch x nClasses): torch.Size([3, 5])
target tensor shape: torch.Size([3])
loss = nn.CrossEntropyLoss()
output = loss(inputs_variable, targets_variable)
# output.backward()
print('CrossEntropyLoss computed by PyTorch: {}'.format(output))
CrossEntropyLoss computed by PyTorch: Variable containing:
 3.7925
[torch.FloatTensor of size 1]

Manual calculation

1. log softmax

# Manually compute log softmax; the softmax values lie in [0, 1]
softmax_result = F.softmax(inputs_variable)  # compute softmax
print('softmax_result(sum=1):{} \n'.format(softmax_result))
logsoftmax_result = np.log(softmax_result.data)  # natural log; after this every value is <= 0
print('manually calculated logsoftmax_result:{} \n'.format(logsoftmax_result))

# Call F.log_softmax directly; it returns a Variable holding the same values
logsoftmax_variable = F.log_softmax(inputs_variable)
print('F.log_softmax calculate logsoftmax_result:{} \n'.format(logsoftmax_variable.data))
softmax_result(sum=1):Variable containing:
 9.9953e-01  3.3531e-04  1.2335e-04  6.1413e-06  2.2593e-06
 2.5782e-01  1.7372e-03  7.0083e-01  3.4892e-02  4.7221e-03
 2.2123e-06  1.7926e-02  9.7875e-01  2.4261e-03  8.9251e-04
[torch.FloatTensor of size 3x5]


manually calculated logsoftmax_result:
-4.6717e-04 -8.0005e+00 -9.0005e+00 -1.2000e+01 -1.3000e+01
-1.3555e+00 -6.3555e+00 -3.5549e-01 -3.3555e+00 -5.3555e+00
-1.3021e+01 -4.0215e+00 -2.1476e-02 -6.0215e+00 -7.0215e+00
[torch.FloatTensor of size 3x5]


F.log_softmax calculate logsoftmax_result:
-4.6717e-04 -8.0005e+00 -9.0005e+00 -1.2000e+01 -1.3000e+01
-1.3555e+00 -6.3555e+00 -3.5549e-01 -3.3555e+00 -5.3555e+00
-1.3021e+01 -4.0215e+00 -2.1476e-02 -6.0215e+00 -7.0215e+00
[torch.FloatTensor of size 3x5]
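
For reference, the log softmax computed above is, for every row $x$ and class $i$:

$$\log \mathrm{softmax}(x)_i = x_i - \log \sum_{j=1}^{N} e^{x_j}$$

which is why every entry is $\le 0$ (internally the sum is typically evaluated in a numerically stable way by subtracting $\max_j x_j$ first).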

2. Manually computing the loss

NLLLoss in PyTorch is defined as:

$$\mathrm{loss}(x, class) = -x[class]$$

Why can it be written this way? The second sample (the row whose target class is 3) is used below to explain.

After one-hot encoding, the true distribution $p_k$ (or $p_{model}$) is (there are 5 classes in total): [0, 0, 0, 1, 0]

The per-class probabilities predicted by the model, i.e. the non-true distribution $q_k$ (or $p_{pred}$), are: [2.5782e-01, 1.7372e-03, 7.0083e-01, 3.4892e-02, 4.7221e-03]. Note: the probabilities must sum to 1, so the softmax result is used here, not the log softmax.

Then, according to the cross entropy: $-\sum_{k=1}^{N} p_k \log q_k$

or the negative log likelihood (maximum likelihood): $-\sum_{i=1}^{m} \log p_{model}(y_i \mid x_i; \theta)$

multiplying the corresponding terms gives the final loss:

$$-\left[0 \times \log(2.5782 \cdot 10^{-1}) + 0 \times \log(1.7372 \cdot 10^{-3}) + 0 \times \log(7.0083 \cdot 10^{-1}) + 1 \times \log(3.4892 \cdot 10^{-2}) + 0 \times \log(4.7221 \cdot 10^{-3})\right] = -\log(3.4892 \cdot 10^{-2})$$

which is identically

$$-\left[0 \times (-1.3555) + 0 \times (-6.3555) + 0 \times (-0.35549) + 1 \times (-3.3555) + 0 \times (-5.3555)\right] = 3.3555$$

Since all the other classes are 0 and the probability of the true class is always 1, this simplifies to $\mathrm{loss}(x, class) = -x[class]$, which here is $-x[3] = 3.3555$ (with $x$ being the log softmax output).

Below, the cross entropy of each sample is computed separately and then averaged; as you can see, the final result is the same as calling nn.CrossEntropyLoss directly.

# pick the log-probability of the target class for each sample (targets are [1, 3, 2])
sample_loss1 = -logsoftmax_variable[0,1]
sample_loss2 = -logsoftmax_variable[1,3]
sample_loss3 = -logsoftmax_variable[2,2]

print(sample_loss1.data, sample_loss2.data, sample_loss3.data)
final_loss = (sample_loss1 + sample_loss2 + sample_loss3)/3
print('Final computed loss:', final_loss)
 8.0005
[torch.FloatTensor of size 1]

 3.3555
[torch.FloatTensor of size 1]

1.00000e-02 *
  2.1476
[torch.FloatTensor of size 1]

Final computed loss: Variable containing:
 3.7925
[torch.FloatTensor of size 1]

NLLLoss

Next we feed the log softmax result computed above into NLLLoss; as you can see, the result is the same as with CrossEntropyLoss.

inputs_tensor = torch.FloatTensor([
    [-4.6717e-04, -8.0005e+00, -9.0005e+00, -1.2000e+01, -1.3000e+01],
    [-1.3555e+00, -6.3555e+00, -3.5549e-01, -3.3555e+00, -5.3555e+00],
    [-1.3021e+01, -4.0215e+00, -2.1476e-02, -6.0215e+00, -7.0215e+00]
])
    
targets_tensor = torch.LongTensor([1,3,2])
# targets_tensor = torch.LongTensor([1])

inputs_variable = autograd.Variable(inputs_tensor, requires_grad=True)
targets_variable = autograd.Variable(targets_tensor)

loss = nn.NLLLoss()
output = loss(inputs_variable, targets_variable)
print('NLLLoss result: {}'.format(output))
NLLLoss result: Variable containing:
 3.7925
[torch.FloatTensor of size 1]

NLLLoss2d

NLLLoss2d computes the loss on two-dimensional outputs. The difference from NLLLoss is that NLLLoss works on per-sample class scores (nBatch x nClasses), whereas NLLLoss2d works on per-pixel class maps: with, say, 20 classes, the network output should be 20 x width x height and the corresponding target is 1 x width x height. To compute the loss, each pixel of the output is paired with the target value at the same position, the per-pixel losses are computed over the whole image, and they are then summed and averaged (in the batch size = 1 case). It is mostly used in semantic segmentation.

# Construct a sample: assume there are 4 classes, the image is 5x5 and batch_size is 1
inputs_tensor = torch.FloatTensor([
[[10, 2, 1,-2,-3],
 [10, 2, 1,-2,-3],
 [-1,-6,-0,-3,-5],
 [-1,-6,-0,-3,-5],
 [-5, 4, 8, 2, 1]],
[[10, 2, 1,-2,-3],
 [10, 2, 1,-2,-3],
 [-1,-6,-0,-3,-5],
 [-1,-6,-0,-3,-5],
 [-5, 4, 8, 2, 1]],
[[10, 2, 1,-2,-3],
 [10, 2, 1,-2,-3],
 [-1,-6,-0,-3,-5],
 [-1,-6,-0,-3,-5],
 [-5, 4, 8, 2, 1]],
[[10, 2, 1,-2,-3],
 [10, 2, 1,-2,-3],
 [-1,-6,-0,-3,-5],
 [-1,-6,-0,-3,-5],
 [-5, 4, 8, 2, 1]],
    ])
inputs_tensor = torch.unsqueeze(inputs_tensor,0)
# inputs_tensor = torch.unsqueeze(inputs_tensor,1)
print('input size(nBatch x nClasses x height x width): ', inputs_tensor.shape)

targets_tensor = torch.LongTensor([
 [0, 0, 1, 1, 0],
 [0, 1, 1, 1, 2],
 [0, 1, 2, 2, 2],
 [1, 1, 2, 2, 3],
 [0, 3, 2, 3, 3]
])

targets_tensor = torch.unsqueeze(targets_tensor,0)
print('target size(nBatch x height x width): ', targets_tensor.shape)

inputs_variable = autograd.Variable(inputs_tensor, requires_grad=True)
inputs_variable = F.log_softmax(inputs_variable) # compute log softmax
targets_variable = autograd.Variable(targets_tensor)

loss = nn.NLLLoss2d()
output = loss(inputs_variable, targets_variable)
print('NLLLoss result: {}'.format(output))
input size(nBatch x nClasses x height x width):  torch.Size([1, 4, 5, 5])
target size(nBatch x height x width):  torch.Size([1, 5, 5])
NLLLoss result: Variable containing:
 1.3863
[torch.FloatTensor of size 1]
# Construct a sample: assume there are 4 classes, the image is 2x2 and batch_size is 1
inputs_tensor = torch.FloatTensor([
[[2, 4],
 [1, 2]],
[[5, 3],
 [3, 0]],
[[5, 3],
 [5, 2]],
[[4, 2],
 [3, 2]],
    ])
inputs_tensor = torch.unsqueeze(inputs_tensor,0)
# inputs_tensor = torch.unsqueeze(inputs_tensor,1)
print('input size(nBatch x nClasses x height x width): ', inputs_tensor.shape)

targets_tensor = torch.LongTensor([
 [0, 2],
 [2, 3]
])

targets_tensor = torch.unsqueeze(targets_tensor,0)
print('target size(nBatch x height x width): ', targets_tensor.shape)

inputs_variable = autograd.Variable(inputs_tensor, requires_grad=True)
inputs_variable = F.log_softmax(inputs_variable) # compute log softmax
targets_variable = autograd.Variable(targets_tensor)

loss = nn.NLLLoss2d()
output = loss(inputs_variable, targets_variable)
print('NLLLoss result: {}'.format(output))
input size(nBatch x nClasses x height x width):  torch.Size([1, 4, 2, 2])
target size(nBatch x height x width):  torch.Size([1, 2, 2])
NLLLoss result: Variable containing:
 1.7265
[torch.FloatTensor of size 1]

The data actually fed into the loss function is therefore the log softmax result:

print('inputs:{} \n'.format(inputs_variable))
print('target:{} \n'.format(targets_tensor))
inputs:Variable containing:
(0 ,0 ,.,.) = 
 -3.8828 -0.6265
 -4.2539 -1.1427

(0 ,1 ,.,.) = 
 -0.8828 -1.6265
 -2.2539 -3.1427

(0 ,2 ,.,.) = 
 -0.8828 -1.6265
 -0.2539 -1.1427

(0 ,3 ,.,.) = 
 -1.8828 -2.6265
 -2.2539 -1.1427
[torch.FloatTensor of size 1x4x2x2]


target:
(0 ,.,.) = 
  0  2
  2  3
[torch.LongTensor of size 1x2x2]

In the same way as for CrossEntropyLoss above, we use the target labels: for every pixel we simply pick, out of its 4 class entries, the value at the position given by the target label, as in the calculation below.

The final loss then works out as follows:

# compute the loss result
-(-3.8828 + (-1.6265) + (-0.2539)+ (-1.1427))/4
1.7264749999999998
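
The same per-pixel selection can also be done programmatically. A minimal sketch (reusing inputs_variable, i.e. the log softmax of the 2x2 example, and targets_variable from above) using torch.gather along the class dimension:

# pick, for every pixel, the log-probability of its target class
picked = torch.gather(inputs_variable, 1, targets_variable.unsqueeze(1))  # shape 1 x 1 x 2 x 2
manual_loss = -picked.mean()
print(manual_loss)  # should agree with the NLLLoss2d result above (about 1.7265)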

BCELoss


$$\mathrm{loss}(o, t) = -\frac{1}{N}\sum_{i=1}^{N} \left[ t_i \log(o_i) + (1 - t_i) \log(1 - o_i) \right]$$
# Prediction f(x): construct a sample, the output layer of the network
inputs_tensor = torch.FloatTensor( [
 [10, 2, 1,-2,-3],
 [-1,-6,-0,-3,-5],
 [-5, 4, 8, 2, 1]
 ])
# BCELoss expects probabilities, so squash the raw outputs with a sigmoid first
inputs_tensor = torch.sigmoid(inputs_tensor)
# Ground truth y: one-hot encoded, and must be a FloatTensor for BCELoss
targets_tensor = torch.FloatTensor([
 [1,0,0,0,0],
 [0,1,0,0,0],
 [0,0,0,0,1]
])
# targets_tensor = torch.LongTensor([1])

inputs_variable = autograd.Variable(inputs_tensor, requires_grad=True) 
targets_variable = autograd.Variable(targets_tensor)
print('input tensor (nBatch x nClasses): {}'.format(inputs_tensor.shape))
print('target tensor shape: {}'.format(targets_tensor.shape))
input tensor (nBatch x nClasses): torch.Size([3, 5])
target tensor shape: torch.Size([3, 5])
loss = nn.BCELoss()
output = loss(inputs_variable, targets_variable)
# output.backward()
print('BCELoss computed by PyTorch: {}'.format(output))
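
As with the losses above, the result can be checked by hand. A minimal sketch (reusing inputs_tensor, i.e. the probabilities after the sigmoid, and targets_tensor from the block above) of the BCE formula:

import numpy as np

o = inputs_tensor.numpy()   # predicted probabilities
t = targets_tensor.numpy()  # one-hot targets
# element-wise binary cross entropy, averaged over all entries
manual_bce = -np.mean(t * np.log(o) + (1 - t) * np.log(1 - o))
print(manual_bce)           # should match the nn.BCELoss value above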

Hinge
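
A minimal hinge-loss sketch: PyTorch exposes a multi-class hinge (max-margin) loss as nn.MultiMarginLoss, whose per-sample loss is $\frac{1}{C}\sum_{i \ne y} \max(0,\ \mathrm{margin} - x[y] + x[i])$ with margin = 1 by default. Reusing the 3 x 5 scores and targets from the CrossEntropyLoss example above:

# multi-class hinge loss on the same scores/targets as the CrossEntropyLoss example
scores = autograd.Variable(torch.FloatTensor([
 [10, 2, 1,-2,-3],
 [-1,-6,-0,-3,-5],
 [-5, 4, 8, 2, 1]
]))
labels = autograd.Variable(torch.LongTensor([1, 3, 2]))
hinge_loss = nn.MultiMarginLoss(margin=1.0)
print(hinge_loss(scores, labels))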

L1 & L2

L1 and L2 are mostly used for regression problems, i.e. when $y$ is a continuous value.

L1 is defined as:

$$L1(\hat{y}, y) = \frac{1}{m}\sum |\hat{y}_i - y_i|$$

L2 is defined as:

$$L2(\hat{y}, y) = \frac{1}{m}\sum |\hat{y}_i - y_i|^2$$

where $\hat{y}$ is the prediction, $y$ the ground truth and $m$ the number of samples; unless stated otherwise, this notation is used throughout.
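
In PyTorch these correspond to nn.L1Loss and nn.MSELoss. A minimal sketch with made-up prediction and target values, checked against the two formulas with NumPy:

import torch
import torch.nn as nn
import torch.autograd as autograd
import numpy as np

# made-up prediction y_hat and ground truth y, for illustration only
y_hat = autograd.Variable(torch.FloatTensor([2.5, 0.0, 2.0, 8.0]))
y     = autograd.Variable(torch.FloatTensor([3.0, -0.5, 2.0, 7.0]))

l1 = nn.L1Loss()(y_hat, y)    # (1/m) * sum |y_hat_i - y_i|
l2 = nn.MSELoss()(y_hat, y)   # (1/m) * sum (y_hat_i - y_i)^2

# manual check of both formulas
diff = y_hat.data.numpy() - y.data.numpy()
print(l1, np.mean(np.abs(diff)))
print(l2, np.mean(diff ** 2))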



