[pytorch] Implementing PyTorch's built-in loss functions from the underlying math: L1Loss, MSELoss (L2Loss), BCELoss, BCEWithLogitsLoss, NLLLoss, CrossEntropyLoss

nn.L1Loss — mean absolute error loss
# one batch of 4 samples; the loss is averaged over all elements

import torch
import torch.nn as nn

def L1Loss(y, yhead):
    # mean of element-wise absolute differences
    return torch.mean(torch.abs(y - yhead))
y = torch.rand(4, 3)
yhead = torch.rand(4, 3)
print(L1Loss(y, yhead))
print(nn.L1Loss(reduction='mean')(y, yhead))

Output:

tensor(0.4083)
tensor(0.4083)
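
The same check works for the other reduction modes. A minimal sketch, reusing the y and yhead above ('sum' and 'none' are simply the other built-in reduction options):

# 'sum': total absolute error over all 12 elements
print(torch.sum(torch.abs(y - yhead)))
print(nn.L1Loss(reduction='sum')(y, yhead))
# 'none': per-element losses, same shape as the inputs
print(torch.abs(y - yhead))
print(nn.L1Loss(reduction='none')(y, yhead))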

nn.MSELoss — mean squared error loss

def L2Loss(y, yhead):
    # mean of element-wise squared differences
    return torch.mean((y - yhead) ** 2)
y = torch.rand(4, 3)
yhead = torch.rand(4, 3)
print(L2Loss(y, yhead))
print(nn.MSELoss(reduction='mean')(y, yhead))

Output:

tensor(0.1401)
tensor(0.1401)
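
The same loss is also exposed through the functional API; a one-line sketch (torch.nn.functional.mse_loss should give the same value as both versions above):

import torch.nn.functional as F

print(F.mse_loss(y, yhead, reduction='mean'))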

nn.BCELoss — binary cross entropy loss

def BCELoss(y, yhead):
    # here y acts as the predicted probabilities in (0, 1) and yhead as
    # the (soft) targets: mean of -(t*log(p) + (1-t)*log(1-p))
    return torch.mean(-(torch.log(y) * yhead + torch.log(1 - y) * (1 - yhead)))
y = torch.rand(4, 3)   # random values in (0, 1) are valid probabilities/targets
yhead = torch.rand(4, 3)
print(BCELoss(y, yhead))
print(nn.BCELoss(reduction='mean')(y, yhead))

Output:

tensor(2.0282)
tensor(2.0282)
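
One caveat with the hand-rolled version: if a prediction hits exactly 0 or 1, torch.log returns -inf and the loss can become inf or nan. PyTorch's nn.BCELoss documents that it clamps its log terms at -100 to keep the loss finite; a sketch of the same guard (BCELossSafe is a name made up for this example):

def BCELossSafe(y, yhead):
    # clamp the log outputs at -100, mirroring the behavior documented
    # for nn.BCELoss, so y == 0 or y == 1 stays finite
    log_y = torch.clamp(torch.log(y), min=-100)
    log_1my = torch.clamp(torch.log(1 - y), min=-100)
    return torch.mean(-(log_y * yhead + log_1my * (1 - yhead)))

y = torch.tensor([[0.0, 0.5, 1.0]])
yhead = torch.tensor([[0.0, 1.0, 1.0]])
print(BCELossSafe(y, yhead))                   # finite
print(nn.BCELoss(reduction='mean')(y, yhead))  # finite as well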

nn.BCEWithLogitsLoss — binary cross entropy with the sigmoid built in

def BCELossWithSigmoid(y, yhead):
    # apply sigmoid to the raw logits first, then the same BCE formula
    return torch.mean(-(torch.log(torch.sigmoid(y)) * yhead
                        + torch.log(1 - torch.sigmoid(y)) * (1 - yhead)))
y = torch.rand(4, 3)
yhead = torch.rand(4, 3)
print(BCELossWithSigmoid(y, yhead))
print(nn.BCEWithLogitsLoss(reduction='mean')(y, yhead))

Output:

tensor(0.7641)
tensor(0.7641)
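
The naive composition above is fine for small logits, but for large-magnitude logits the sigmoid saturates to exactly 0 or 1 and the log blows up to -inf. nn.BCEWithLogitsLoss avoids this by fusing the sigmoid into the loss with the log-sum-exp trick; a sketch of the standard numerically stable rewrite (not PyTorch's exact internal code):

def BCEWithLogitsStable(x, z):
    # max(x, 0) - x*z + log(1 + exp(-|x|)) is algebraically equal to
    # -(z*log(sigmoid(x)) + (1-z)*log(1 - sigmoid(x))), but never overflows
    return torch.mean(torch.clamp(x, min=0) - x * z
                      + torch.log1p(torch.exp(-torch.abs(x))))

x = torch.tensor([[100.0, -100.0, 0.0]])  # extreme logits break the naive version
z = torch.tensor([[1.0, 0.0, 1.0]])
print(BCEWithLogitsStable(x, z))                     # finite
print(nn.BCEWithLogitsLoss(reduction='mean')(x, z))  # finite, same value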

'''Negative log-likelihood (maximum likelihood / log-likelihood) loss.
y is the output of log_softmax() and labelIndex is the target class index
for each sample. NLLLoss is meant to follow a log_softmax() layer: it does
not take a logarithm itself, it assumes the input already holds
log-probabilities. Conceptually the target is one-hot encoded: every
element except the one at labelIndex is multiplied by 0 and drops out, so
the loss simply picks the value at labelIndex (weight 1), negates it, and
averages over the batch.
'''

def NLLLoss(y, labelIndex):
    loss = 0
    for i, index in enumerate(labelIndex):
        # accumulate the (log-)probability at the target index of sample i
        loss += y[i][index]
    return -loss / y.size(0)
y = torch.rand(2, 3)  # stand-in for log_softmax output; fine for checking the arithmetic
labelIndex = torch.LongTensor([1, 2])
print(y)
print(NLLLoss(y, labelIndex))
print(nn.NLLLoss(reduction='mean')(y, labelIndex))

Output:

tensor([[0.5495, 0.5472, 0.2328],
        [0.6703, 0.0252, 0.5524]])
tensor(-0.5498)
tensor(-0.5498)
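
The Python loop is easy to read but slow on large batches; the same indexing can be done in one call with torch.gather. A sketch of an equivalent vectorized version (NLLLossVectorized is a name made up here):

def NLLLossVectorized(y, labelIndex):
    # gather y[i, labelIndex[i]] for every row, then negate and average
    picked = y.gather(1, labelIndex.unsqueeze(1)).squeeze(1)
    return -picked.mean()

print(NLLLossVectorized(y, labelIndex))  # same value as the loop version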

log_softmax:
$$\text{LogSoftmax}(x_i) = \log\left(\frac{\exp(x_i)}{\sum_j \exp(x_j)}\right)$$

nn.CrossEntropyLoss — equivalent to log_softmax followed by the NLLLoss defined above

loss = nn.CrossEntropyLoss()
input = torch.randn(3, 5, requires_grad=True)        # raw logits
target = torch.empty(3, dtype=torch.long).random_(5)  # class indices in [0, 5)
output = loss(input, target)
print(output)

def myCrossEntropyLoss(y, yhead):
    # log_softmax turns the raw logits into log-probabilities,
    # then the NLLLoss defined above picks and negates the target entries
    return NLLLoss(torch.nn.LogSoftmax(dim=-1)(y), yhead)

output = myCrossEntropyLoss(input, target)
print(output)

Output:

tensor(1.9680, grad_fn=<NllLossBackward0>)
tensor(1.9680, grad_fn=<DivBackward0>)

CrossEntropyLoss: log_softmax+NLLLoss
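
Expanding that composition by hand: the per-sample cross entropy is the log-sum-exp of the logits minus the logit at the target index, since -log_softmax(x)[t] = logsumexp(x) - x[t]. A sketch of this direct form (myCrossEntropyLoss2 is a hypothetical name, reusing input and target from above):

def myCrossEntropyLoss2(y, labelIndex):
    # CE_i = log(sum_j exp(y_ij)) - y_i[labelIndex_i], averaged over the batch
    lse = torch.logsumexp(y, dim=-1)
    picked = y.gather(1, labelIndex.unsqueeze(1)).squeeze(1)
    return (lse - picked).mean()

print(myCrossEntropyLoss2(input, target))  # matches nn.CrossEntropyLoss above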
