The most commonly used loss function for classification problems is cross-entropy (Cross Entropy Loss).
Cross-entropy describes the distance between two probability distributions: the smaller the cross-entropy, the closer the two distributions are.
Entropy is the expected value of information content; it measures the uncertainty of a random variable.
The larger the entropy, the more uncertain the variable's value; conversely, the smaller the entropy, the more certain its value.
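To make the definition concrete, the cross-entropy between a true distribution p and a predicted distribution q is H(p, q) = -Σ p(x)·log q(x). A minimal sketch (the distributions below are made-up examples, not from any model):

import numpy as np

def cross_entropy(p, q):
    # H(p, q) = -sum_x p(x) * log(q(x))
    return -np.sum(p * np.log(q))

p  = np.array([1.0, 0.0, 0.0])   # true distribution (one-hot label)
q1 = np.array([0.7, 0.2, 0.1])   # prediction close to p
q2 = np.array([0.1, 0.2, 0.7])   # prediction far from p
print(cross_entropy(p, q1))      # ~0.357: closer distribution, smaller cross-entropy
print(cross_entropy(p, q2))      # ~2.303: farther distribution, larger cross-entropy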
This article walks through the cross-entropy loss function with a simple example; the code is for reference and learning only.
First, the imports (example):
import torch
import torch.nn as nn
import numpy as np
import math
Basic usage (example). With reduction='none', the loss of every sample in the batch is returned individually:
loss_f = nn.CrossEntropyLoss(weight=None, reduction='none')  # size_average/reduce are deprecated; reduction='none' keeps per-sample losses
output = torch.ones(2, 3, requires_grad=True) * 0.5  # assume a 3-class task with batch size 2, and every logit equal to 0.5
target = torch.from_numpy(np.array([0, 1])).type(torch.LongTensor)
loss = loss_f(output, target)
print('--------------------------------------------------- CrossEntropy loss: base')
print('loss: ', loss)
print("with reduction='none' the loss of every sample is visible; the output is [1.0986, 1.0986]")
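For comparison, here are the other two reduction modes, continuing with the output and target defined above ('mean' averages the per-sample losses, 'sum' adds them; a quick sketch):

loss_mean = nn.CrossEntropyLoss(reduction='mean')(output, target)
loss_sum = nn.CrossEntropyLoss(reduction='sum')(output, target)
print(loss_mean)  # ~1.0986 = (1.0986 + 1.0986) / 2
print(loss_sum)   # ~2.1972 = 1.0986 + 1.0986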
# Reproduce the formula by hand for the first sample:
# loss(x, class) = -x[class] + log(sum_j exp(x[j]))
output = output[0].detach().numpy()  # logits of the first sample: [0.5, 0.5, 0.5]
target_1 = target[0].numpy()         # label of the first sample: 0
# first term: the logit of the target class
x_class = output[target_1]
# second term: log of the sum of exponentials over all logits
sigma_exp_x = math.exp(output[0]) + math.exp(output[1]) + math.exp(output[2])
log_sigma_exp_x = math.log(sigma_exp_x)
# add the two terms
loss_1 = -x_class + log_sigma_exp_x
print('--------------------------------------------------- manual calculation')
print('loss of the first sample:', loss_1)
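The same value can be cross-checked against torch.nn.functional (a quick verification sketch; fresh tensors are created because output was overwritten above):

import torch.nn.functional as F
logits = torch.ones(2, 3) * 0.5
labels = torch.tensor([0, 1])
# -log_softmax picks out exactly -x[class] + log(sum_j exp(x[j]))
print(-F.log_softmax(logits, dim=1)[0, 0])                # ~1.0986
print(F.cross_entropy(logits, labels, reduction='none'))  # tensor([1.0986, 1.0986])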
# ----------------------------------- CrossEntropy loss: weight
weight = torch.from_numpy(np.array([0.6, 0.2, 0.2])).float()
loss_f = nn.CrossEntropyLoss(weight=weight, reduction='none')
output = torch.ones(2, 3, requires_grad=True) * 0.5  # assume a 3-class task with batch size 2, and every logit equal to 0.5
target = torch.from_numpy(np.array([0, 1])).type(torch.LongTensor)
loss = loss_f(output, target)
print('\n\n--------------------------------------------------- CrossEntropy loss: weight')
print('loss: ', loss)
print('the unweighted loss is 1.0986; the first sample belongs to class 0 with weight=0.6, so its loss becomes 1.0986 * 0.6 =', 1.0986 * 0.6)
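Note that when a weight is combined with reduction='mean', PyTorch divides by the sum of the weights of the target classes rather than by the batch size (a quick sketch continuing the tensors above):

loss_mean = nn.CrossEntropyLoss(weight=weight, reduction='mean')(output, target)
# (1.0986*0.6 + 1.0986*0.2) / (0.6 + 0.2) = 1.0986
print(loss_mean)  # ~1.0986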
# ----------------------------------- CrossEntropy loss: ignore_index
loss_f_1 = nn.CrossEntropyLoss(weight=None, reduction='none', ignore_index=1)
loss_f_2 = nn.CrossEntropyLoss(weight=None, reduction='none', ignore_index=2)
output = torch.ones(3, 3, requires_grad=True) * 0.5  # assume a 3-class task with batch size 3, and every logit equal to 0.5
target = torch.from_numpy(np.array([0, 1, 2])).type(torch.LongTensor)
loss_1 = loss_f_1(output, target)
loss_2 = loss_f_2(output, target)
print('\n\n--------------------------------------------------- CrossEntropy loss: ignore_index')
print('ignore_index = 1: ', loss_1) # the loss of samples labelled 1 is 0
print('ignore_index = 2: ', loss_2) # the loss of samples labelled 2 is 0
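With reduction='mean', ignored samples are excluded from the denominator as well, so the average runs only over the remaining samples (a quick sketch continuing the tensors above):

loss_mean = nn.CrossEntropyLoss(reduction='mean', ignore_index=1)(output, target)
# (1.0986 + 1.0986) / 2 = 1.0986 -- the ignored sample is not counted at all
print(loss_mean)  # ~1.0986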
For classification, cross-entropy not only reflects the quality of a prediction well, it also remains a convex function of the logits (softmax followed by the negative log-likelihood reduces to log-sum-exp minus a linear term), which keeps gradient-descent search well behaved. The sum-of-squares loss, by contrast, becomes non-convex once it is composed with softmax.
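This can be checked numerically: along any line through logit space, a convex function must have a non-negative second difference f(t+h) - 2*f(t) + f(t-h). A minimal sketch (the direction d, step h, and probe points are made-up choices for illustration):

import torch
import torch.nn.functional as F

one_hot = torch.tensor([1.0, 0.0, 0.0])  # target class 0

def ce(logits):
    # cross-entropy after softmax, as a function of the logits
    return -torch.log(F.softmax(logits, dim=0)[0])

def mse(logits):
    # sum of squares after softmax, as a function of the logits
    return torch.sum((F.softmax(logits, dim=0) - one_hot) ** 2)

d, h = torch.tensor([1.0, -1.0, 0.0]), 0.1
for t in [-4.0, 0.0, 4.0]:
    for name, f in [('CE ', ce), ('MSE', mse)]:
        second = f((t + h) * d) - 2 * f(t * d) + f((t - h) * d)
        print(name, t, float(second))
# CE's second difference stays >= 0 at every probe point, while MSE's
# turns negative at t = -4.0, exposing the non-convexity.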