import torch
import torch.nn as nn
x = torch.Tensor([1, 2, 3])
target = torch.Tensor([2, 2, 4])
criterion = nn.L1Loss()
loss = criterion(x, target)
loss # tensor(0.6667)
Calculation:
(|1-2| + |2-2| + |3-4|)/3 = 0.6667
SmoothL1Loss is a variant of the L1 loss: when the absolute error is less than 1 it uses half the squared error, and otherwise it uses the absolute error minus 0.5.
Formula:
Loss(x, y) = \frac{1}{N} \sum_{i=1}^{N} \begin{cases} \frac{1}{2}(x_i - y_i)^2, & |x_i - y_i| < 1 \\ |x_i - y_i| - 0.5, & \text{otherwise} \end{cases}
Code:
x = torch.Tensor([1, 2, 3])
target = torch.Tensor([2, 2, 4])
criterion = nn.SmoothL1Loss()
loss = criterion(x, target)
loss # tensor(0.3333)
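Calculation: the per-element errors are 1, 0, 1; the error of 0 falls in the squared branch, while the errors of exactly 1 use the |x_i - y_i| - 0.5 branch:
(0.5 + 0 + 0.5)/3 = 0.3333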
MSELoss computes the mean squared error. Formula:
Loss(x, y) = \frac{1}{N} \sum_{i=1}^{N} (x_i - y_i)^2
Code:
x = torch.Tensor([1, 2, 3])
target = torch.Tensor([2, 2, 4])
criterion = nn.MSELoss()
loss = criterion(x, target)
loss # tensor(0.6667)
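Calculation:
((1-2)^2 + (2-2)^2 + (3-4)^2)/3 = 0.6667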
BCELoss computes the binary cross-entropy between predicted probabilities o_i and binary targets t_i. Formula:
Loss(o, t) = -\frac{1}{N} \sum_{i=1}^{N} [t_i \log(o_i) + (1 - t_i) \log(1 - o_i)]
Code:
predict = torch.Tensor([0.3, 0.2, 0.8, 0.5, 0.7, 0.2])  # predicted probabilities
label = torch.Tensor([0, 0, 1, 0, 1, 0])                 # binary targets
criterion = nn.BCELoss()
loss = criterion(predict, label)
loss # tensor(0.3460)
Calculation:
import math
sum_loss = (1-0) * math.log(1-0.3) + (1-0) * math.log(1-0.2) + math.log(0.8) + (1-0) * math.log(1-0.5) + math.log(0.7) + (1-0) * math.log(1-0.2)
loss = (-sum_loss)/6
loss # 0.34598795373000657
Binary cross-entropy has a variant, BCEWithLogitsLoss, which builds the Sigmoid into the loss and takes raw logits instead of probabilities.
Compared with applying Sigmoid and BCELoss separately, BCEWithLogitsLoss is more numerically stable.
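A minimal sketch of the equivalence, using hypothetical logits invented for illustration (not taken from the example above):
logits = torch.Tensor([-0.8, -1.4, 1.4, 0.0, 0.8, -1.4])  # hypothetical raw scores
label = torch.Tensor([0, 0, 1, 0, 1, 0])
loss_fused = nn.BCEWithLogitsLoss()(logits, label)       # Sigmoid + BCE fused in one stable op
loss_split = nn.BCELoss()(torch.sigmoid(logits), label)  # explicit Sigmoid, then BCELoss
# loss_fused and loss_split agree up to floating-point error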
CrossEntropyLoss takes raw, unnormalized scores and integer class indices; its reduction argument controls how the per-sample losses are aggregated.
predict = torch.Tensor([[0.1, 0.5, 0.4], [0.1, 0.6, 0.1]])
label = torch.LongTensor([1, 2])
loss = nn.CrossEntropyLoss(reduction='none')
loss(predict, label) # tensor([0.9459, 1.2944])
loss = nn.CrossEntropyLoss(reduction='mean')
loss(predict, label) # tensor(1.1201)
loss = nn.CrossEntropyLoss(reduction='sum')
loss(predict, label) # tensor(2.2403)
CrossEntropyLoss can be decomposed into softmax, log, and NLLLoss:
import torch.nn.functional as F
predict = torch.Tensor([[0.1, 0.5, 0.4], [0.1, 0.6, 0.1]])
label = torch.LongTensor([1, 2])
softmax = torch.softmax(predict, dim=1)
print('softmax : ', softmax)
_log = torch.log(softmax)
print('log : ', _log)
nll_loss = F.nll_loss(_log, label)
print('nll_loss : ', nll_loss)
'''
softmax :  tensor([[0.2603, 0.3883, 0.3514],
        [0.2741, 0.4519, 0.2741]])
log :  tensor([[-1.3459, -0.9459, -1.0459],
        [-1.2944, -0.7944, -1.2944]])
nll_loss :  tensor(1.1201)
'''
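As a cross-check, F.cross_entropy fuses the three steps into one call; internally it uses log_softmax, which is more numerically stable than softmax followed by log.
F.cross_entropy(predict, label) # tensor(1.1201)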
KLDivLoss computes the Kullback-Leibler divergence. Formula:
D_{KL}(p \| q) = \sum_{i=1}^{N} p(x_i) \log\frac{p(x_i)}{q(x_i)}
Code:
predict = torch.Tensor([0.1, 0.3, 0.6]).log()  # KLDivLoss expects log-probabilities as input
label = torch.Tensor([0.1, 0.6, 0.3])          # the target is a plain probability distribution
loss = nn.KLDivLoss(reduction='sum')           # 'sum' returns the full KL divergence
loss(predict, label) # tensor(0.2079)
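Calculation, with label as p and the (pre-log) predict as q:
0.1*log(0.1/0.1) + 0.6*log(0.6/0.3) + 0.3*log(0.3/0.6) = 0.3*log(2) ≈ 0.2079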
CosineEmbeddingLoss measures how similar two vectors are, given a label of 1 (should be similar) or -1 (should be dissimilar). Formula:
Loss(x_1, x_2, y) = \begin{cases} 1 - \cos(x_1, x_2), & y = 1 \\ \max(0, \cos(x_1, x_2) - margin), & y = -1 \end{cases}
x = torch.Tensor([[0.1, 0.5, 0.4], [0.1, 0.5, 0.4]])
y = torch.Tensor([[0.5, 0.4, 0.1], [0.1, 0.5, 0.4]])
label = torch.Tensor([-1, 1])
loss = nn.CosineEmbeddingLoss()
loss(x, y, label) # tensor(0.3452)
torch.cosine_similarity(x, y)
# tensor([0.6905, 1.0000])
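Calculation (default margin = 0): the first pair has label -1 and contributes max(0, 0.6905 - 0) = 0.6905; the second pair has label 1 and contributes 1 - 1.0 = 0; the mean is 0.6905/2 = 0.3452.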
MultiLabelMarginLoss handles multi-label classification, where one sample can carry several labels at once: a skirt, for example, can be a long skirt, a short skirt, and a dress, or a pleated skirt and an A-line skirt.
Formula:
Loss(x, y) = \frac{1}{N} \sum_{j} \sum_{i \ne y_j} \max(0, 1 - (x_{y_j} - x_i))
Here N is the number of classes, and -1 in y acts as a placeholder: only the indices before the first -1 are target labels (the outer sum runs over them), and every class outside the target set is treated as a negative.
loss = nn.MultiLabelMarginLoss()
x = torch.FloatTensor([[0.1, 0.2, 0.4, 0.8, 1.1, 4, 7]])
y = torch.LongTensor([[5, 4, 3, 0, -1, 1, 2]])
loss(x, y) # tensor(4.2571)
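Calculation: the target classes are {5, 4, 3, 0} (the indices before the first -1) and the remaining classes {1, 2, 6} are negatives. Summing max(0, 1 - (x[target] - x[negative])) over all pairs gives 4.0 + 7.3 + 8.2 + 10.3 = 29.8; dividing by N = 7 classes gives 4.2571.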
2023-11-14 (Sat)