The previous post on linear regression dealt with a regression problem; this post covers a classification problem. Softmax regression is a single-layer neural network. In the previous post the input data had dimension 2; here we use the Fashion-MNIST dataset as an example, where each input is a 28*28 image. Flattening the 28*28 pixels gives a 784-dimensional input vector, so the input dimension in this example is 784, and the weight matrix $W$ from the previous post now has 784 rows, one per input feature.
In the previous post the linear regression output was 1-dimensional. In softmax regression the output is multi-dimensional; concretely, it is the image's class, which might be cat, dog, and so on. If there are 10 class labels, the $W$ matrix has shape 784*10.
Take an input of dimension 4 and an output of dimension 3 as an example:
$o_1=x_1w_{11}+x_2w_{21}+x_3w_{31}+x_4w_{41}+b_1$
$o_2=x_1w_{12}+x_2w_{22}+x_3w_{32}+x_4w_{42}+b_2$
$o_3=x_1w_{13}+x_2w_{23}+x_3w_{33}+x_4w_{43}+b_3$
Since this is a classification problem, the output must indicate the class of the input. Here the model simply outputs the raw values, and we could just take whichever of $o_1$, $o_2$, $o_3$ is largest to decide the class. For example, if $o_1$, $o_2$, $o_3$ are 0.1, 100 and 1 respectively, we say sample $x$ belongs to class 2.
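As a minimal sketch of this rule (the numbers are just the toy values above), picking the class by the largest output is an argmax:
import torch
o = torch.tensor([0.1, 100.0, 1.0])   # the toy outputs o1, o2, o3 from above
print(o.argmax().item())              # 1, i.e. the second entry -- "class 2" in the text above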
Directly outputting these raw values, however, brings two problems: the values have no fixed range, so they are hard to read as a level of confidence, and it is hard to measure the error between such arbitrary real values and the discrete true labels.
This is where softmax comes in:
$y_i=\frac{e^{o_i}}{\sum_{j=1}^{n}e^{o_j}},\quad n=\mathrm{len}(labels)$
Writing this out for the example above:
$\acute{o_1}=\frac{e^{o_1}}{e^{o_1}+e^{o_2}+e^{o_3}}$
$\acute{o_2}=\frac{e^{o_2}}{e^{o_1}+e^{o_2}+e^{o_3}}$
$\acute{o_3}=\frac{e^{o_3}}{e^{o_1}+e^{o_2}+e^{o_3}}$
The benefit is that the outputs are confined to the range $[0,1]$, so the final classification result is expressed as probabilities. Clearly $\acute{o_1}+\acute{o_2}+\acute{o_3}=1$.
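A quick numeric sketch with the toy outputs above (0.1, 100, 1) shows how softmax turns raw scores into probabilities that sum to 1:
import torch
o = torch.tensor([0.1, 100.0, 1.0], dtype=torch.float64)  # float64 because exp(100) overflows float32
probs = o.exp() / o.exp().sum()
print(probs)        # essentially (0, 1, 0): class 2 takes almost all the probability mass
print(probs.sum())  # 1.0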
Putting it together, for a single sample we have:
$o^i=x^iW+b$
$\acute{y^i}=\mathrm{softmax}(o^i)$
If X is a mini-batch of samples:
$O=XW+b$
$\acute{Y}=\mathrm{softmax}(O)$
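A shape-level sketch of the mini-batch version (the batch size of 256 and the built-in torch.softmax are used only for illustration):
import torch
X = torch.randn(256, 784)             # a mini-batch of 256 flattened 28*28 images
W = torch.randn(784, 10)
b = torch.zeros(10)
O = torch.mm(X, W) + b                # shape (256, 10): one row of class scores per sample
Y_hat = torch.softmax(O, dim=1)       # each row becomes a probability distribution over 10 classes
print(O.shape, Y_hat.sum(dim=1)[:3])  # torch.Size([256, 10]) and row sums of 1.0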
In a classification problem we do not need the exact probability of every class; we only need the probability of the predicted class to be higher than the others, so the prediction is inherently approximate. This makes a loss function that measures the difference between two probability distributions a better fit: the cross-entropy loss. Cross-entropy describes the distance between the actual output probabilities and the expected output probabilities:
$H=-\sum_{x}p(x)\log q(x)$
For example, suppose the expected output is $p=(1,0,0)$, the mini-batch size is 2, and the actual outputs are $q_1=(0.5,0.2,0.3)$ and $q_2=(0.8,0.1,0.1)$. Then (using base-10 logarithms here):
$H_1(p,q_1)=-(1\cdot\log 0.5+0\cdot\log 0.2+0\cdot\log 0.3)\approx 0.3$
$H_2(p,q_2)=-(1\cdot\log 0.8+0\cdot\log 0.1+0\cdot\log 0.1)\approx 0.1$
Averaging over the mini-batch:
$\frac{0.3+0.1}{2}=0.2$
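The same arithmetic in a few lines of Python (base-10 log, matching the numbers above):
import math
p  = [1, 0, 0]
q1 = [0.5, 0.2, 0.3]
q2 = [0.8, 0.1, 0.1]
H1 = -sum(pi * math.log10(qi) for pi, qi in zip(p, q1))  # about 0.301
H2 = -sum(pi * math.log10(qi) for pi, qi in zip(p, q2))  # about 0.097
print((H1 + H2) / 2)                                     # about 0.2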
Cross-entropy also has another form:
$H=-\sum_{x}\bigl(p(x)\log q(x)+(1-p(x))\log(1-q(x))\bigr)$
Obtaining the Fashion-MNIST dataset
The torchvision package is the PyTorch toolkit for building computer vision models. It mainly consists of the following modules: torchvision.datasets (common datasets and their loaders), torchvision.models (popular model architectures, with pretrained weights), torchvision.transforms (common image transformations), and torchvision.utils (miscellaneous utilities).
Import packages
from IPython import display
import matplotlib.pyplot as plt
import torch
import torchvision
import torchvision.transforms as transforms
import time
import numpy as np
import sys
sys.path.append("/home/kesci/input")
import d2lzh as d2l
Get the dataset and download it locally
mnist_train = torchvision.datasets.FashionMNIST(root='/home/kesci/input/FashionMNIST2065', train=True, download=True, transform=transforms.ToTensor())
mnist_test = torchvision.datasets.FashionMNIST(root='/home/kesci/input/FashionMNIST2065', train=False, download=True, transform=transforms.ToTensor())
Without the transform=transforms.ToTensor() argument, the samples are read as PIL images.
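A quick way to verify this, reusing the same root path as above (a sketch):
# Without transform, each sample comes back as a (PIL image, int label) pair
mnist_raw = torchvision.datasets.FashionMNIST(root='/home/kesci/input/FashionMNIST2065', train=True, download=True)
img, lbl = mnist_raw[0]
print(type(img), lbl)   # <class 'PIL.Image.Image'> 9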
class torchvision.datasets.FashionMNIST(root, train=True, transform=None, target_transform=None, download=False)
root (string): root directory of the dataset, where processed/training.pt and processed/test.pt are stored.
train (bool, optional): if True, create the dataset from training.pt, otherwise from test.pt.
download (bool, optional): if True, download the data from the internet and put it under root. If the data already exists under root, it is not downloaded again.
transform (callable, optional): a function/transform that takes a PIL image and returns the transformed data, e.g. transforms.RandomCrop.
target_transform (callable, optional): a function/transform that takes the target and transforms it.
# We can access any sample by its index
feature, label = mnist_train[0]
print(feature.shape, label) # Channel x Height x Width
torch.Size([1, 28, 28]) 9
# Mapping from numeric labels to their text descriptions
def get_fashion_mnist_labels(labels):
    text_labels = ['t-shirt', 'trouser', 'pullover', 'dress', 'coat',
                   'sandal', 'shirt', 'sneaker', 'bag', 'ankle boot']
    return [text_labels[int(i)] for i in labels]
Read the local dataset
# Build data iterators for the training and test sets
batch_size = 256
num_workers = 4
train_iter = torch.utils.data.DataLoader(mnist_train, batch_size=batch_size, shuffle=True, num_workers=num_workers)
test_iter = torch.utils.data.DataLoader(mnist_test, batch_size=batch_size, shuffle=False, num_workers=num_workers)
Model initialization
# 10 classes; each image has 28*28 = 784 pixels
num_inputs = 784
num_outputs = 10
W = torch.tensor(np.random.normal(0, 0.01, (num_inputs, num_outputs)), dtype=torch.float)
b = torch.zeros(num_outputs, dtype=torch.float)
W.requires_grad_(requires_grad=True)
b.requires_grad_(requires_grad=True)
Shape of W:
torch.Size([784, 10])
Shape of b:
torch.Size([10])
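As a quick sanity check of these shapes, pushing the single sample loaded above through xW + b gives one score per class (a sketch):
x = feature.view(-1, num_inputs)    # flatten the 1x28x28 image into a 1x784 row
print((torch.mm(x, W) + b).shape)   # torch.Size([1, 10])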
Define softmax
def softmax(X):
    # Apply the exponential to every element of X.
    # Example input X:
    # tensor([[0.0094, 0.3445, 0.2100, 0.7920, 0.7117],
    #         [0.8559, 0.2465, 0.4939, 0.7290, 0.2683]])
    # Corresponding X_exp:
    # tensor([[1.0095, 1.4113, 1.2337, 2.2078, 2.0374],
    #         [2.3536, 1.2796, 1.6386, 2.0730, 1.3078]])
    X_exp = X.exp()
    # Sum along each row and keep the row dimension, so the result is an n*1 matrix;
    # with keepdim=False it would be a 1-D tensor of length n instead.
    partition = X_exp.sum(dim=1, keepdim=True)
    # print("X size is ", X_exp.size())
    # print("partition size is ", partition, partition.size())
    return X_exp / partition  # broadcasting divides each row by its own sum
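A quick check that the function behaves as intended: every output lies in [0, 1] and each row sums to 1.
X_test = torch.rand((2, 5))
X_prob = softmax(X_test)
print(X_prob, X_prob.sum(dim=1))   # each row of X_prob sums to 1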
Define the network using the softmax above
def net(X):
    return softmax(torch.mm(X.view((-1, num_inputs)), W) + b)
Define the cross-entropy loss (to be refined later)
def cross_entropy(y_hat, y):
    return - torch.log(y_hat.gather(1, y.view(-1, 1)))
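The gather call picks out, from each row of y_hat, the predicted probability of that row's true class. A small sketch:
y_hat = torch.tensor([[0.1, 0.3, 0.6], [0.3, 0.2, 0.5]])
y = torch.LongTensor([0, 2])                   # true classes for the two samples
print(y_hat.gather(1, y.view(-1, 1)))          # tensor([[0.1000], [0.5000]])
print(cross_entropy(y_hat, y))                 # the negative log of those picked values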
Define accuracy
def accuracy(y_hat, y):
    return (y_hat.argmax(dim=1) == y).float().mean().item()

def evaluate_accuracy(data_iter, net):
    acc_sum, n = 0.0, 0
    for X, y in data_iter:
        acc_sum += (net(X).argmax(dim=1) == y).float().sum().item()
        n += y.shape[0]
    return acc_sum / n
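Using the small y_hat and y from the sketch above, accuracy compares each row's argmax against the label; evaluate_accuracy does the same over a whole data iterator:
print(accuracy(y_hat, y))                  # 0.5: only the second sample is classified correctly
print(evaluate_accuracy(test_iter, net))   # roughly 0.1 before training, i.e. random guessing over 10 classes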
Train the model
num_epochs, lr = 5, 0.1
# This function is also saved in the d2lzh_pytorch package for later reuse
def train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size,
              params=None, lr=None, optimizer=None):
    for epoch in range(num_epochs):
        train_l_sum, train_acc_sum, n = 0.0, 0.0, 0
        for X, y in train_iter:
            y_hat = net(X)
            l = loss(y_hat, y).sum()
            # Zero the gradients
            if optimizer is not None:
                optimizer.zero_grad()
            elif params is not None and params[0].grad is not None:
                for param in params:
                    param.grad.data.zero_()
            l.backward()
            if optimizer is None:
                d2l.sgd(params, lr, batch_size)
            else:
                optimizer.step()
            train_l_sum += l.item()
            train_acc_sum += (y_hat.argmax(dim=1) == y).sum().item()
            n += y.shape[0]
        test_acc = evaluate_accuracy(test_iter, net)
        print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f'
              % (epoch + 1, train_l_sum / n, train_acc_sum / n, test_acc))
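When no optimizer is passed in, the update step relies on d2l.sgd. Its body is not shown in this post; in the d2l-style packages it is assumed to be plain mini-batch SGD, roughly:
# A sketch of what d2l.sgd is assumed to do -- not the package's exact source
def sgd(params, lr, batch_size):
    for param in params:
        param.data -= lr * param.grad / batch_size   # gradients were summed over the batch, so divide by batch_size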
train_ch3(net, train_iter, test_iter, cross_entropy, num_epochs, batch_size, [W, b], lr)
Model prediction
X, y = next(iter(test_iter))
true_labels = d2l.get_fashion_mnist_labels(y.numpy())
pred_labels = d2l.get_fashion_mnist_labels(net(X).argmax(dim=1).numpy())
titles = [true + '\n' + pred for true, pred in zip(true_labels, pred_labels)]
d2l.show_fashion_mnist(X[0:9], titles[0:9])
Import packages for the concise implementation with PyTorch's nn module
# Load the needed packages and modules
import torch
from torch import nn
from torch.nn import init
import numpy as np
import sys
sys.path.append("/home/kesci/input")
import d2lzh as d2l
from collections import OrderedDict
batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, root='/home/kesci/input/FashionMNIST2065')
num_inputs = 784
num_outputs = 10
class LinearNet(nn.Module):
    def __init__(self, num_inputs, num_outputs):
        super(LinearNet, self).__init__()
        self.linear = nn.Linear(num_inputs, num_outputs)
    def forward(self, x):  # x shape: (batch, 1, 28, 28)
        y = self.linear(x.view(x.shape[0], -1))
        return y
# net = LinearNet(num_inputs, num_outputs)
class FlattenLayer(nn.Module):
    def __init__(self):
        super(FlattenLayer, self).__init__()
    def forward(self, x):  # x shape: (batch, *, *, ...)
        return x.view(x.shape[0], -1)

net = nn.Sequential(
    # FlattenLayer(),
    # LinearNet(num_inputs, num_outputs)
    OrderedDict([
        ('flatten', FlattenLayer()),
        # our own LinearNet(num_inputs, num_outputs) would also work here
        ('linear', nn.Linear(num_inputs, num_outputs))])
)
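Printing the container and pushing a dummy batch through it is a quick way to confirm the layout (the batch size of 4 is arbitrary):
print(net)                        # shows the 'flatten' and 'linear' layers in order
dummy = torch.rand(4, 1, 28, 28)
print(net(dummy).shape)           # torch.Size([4, 10]): raw class scores (logits)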
# Initialize model parameters
init.normal_(net.linear.weight, mean=0, std=0.01)
init.constant_(net.linear.bias, val=0)
# Define the loss function
loss = nn.CrossEntropyLoss()  # its prototype is shown below
# class torch.nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')
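Note that nn.CrossEntropyLoss applies LogSoftmax and NLLLoss internally, which is why the net above ends with a plain nn.Linear and no explicit softmax: the loss expects raw scores (logits) and handles the normalization in a numerically stable way. A tiny sketch:
scores = torch.tensor([[0.1, 100.0, 1.0]])   # raw logits for one sample
target = torch.LongTensor([1])               # its true class index
print(loss(scores, target))                  # close to 0: the correct class already dominates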
# Define the optimizer
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)  # prototype below
# class torch.optim.SGD(params, lr=, momentum=0, dampening=0, weight_decay=0, nesterov=False)
# Train the model
num_epochs = 5
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size, None, None, optimizer)