PyTorch uses a dynamic computation graph engine. Unlike TensorFlow's static graphs (when eager mode is not used), operations execute as soon as they are defined, so there is no need to build the entire graph (all operations) before any actual computation can run.
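A minimal illustration of eager execution (a sketch; the tensor values are arbitrary): each operation runs immediately and returns a concrete result, so intermediate tensors can be inspected at any point without compiling a graph first.
import torch

x = torch.tensor([1.0, 2.0, 3.0])
y = x * 2 + 1        # executed immediately, no graph/session compilation step
print(y)             # tensor([3., 5., 7.])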
PyTorch API List
Package | Description |
---|---|
torch | The top-level PyTorch package and tensor library. |
torch.nn | A subpackage that contains modules and extensible classes for building neural networks. |
torch.autograd | A subpackage that supports all the differentiable Tensor operations in PyTorch. |
torch.nn.functional | A functional interface that contains typical operations used for building neural networks like loss functions, activation functions, and convolution operations. |
torch.optim | A subpackage that contains standard optimization operations like SGD and Adam. |
torch.utils | A subpackage that contains utility classes like data sets and data loaders that make data preprocessing easier. |
torchvision | A package that provides access to popular datasets, model architectures, and image transformations for computer vision. |
Tensor, Dataset & DataLoader
Tensors are similar to numpy arrays. During network training, the tensor shape is usually [Batch, Channels, Height, Width]. Common tensor transformations, and the ETL (Extract, Transform, Load) steps of data preprocessing, are shown in the code examples below:
import torch
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import Dataset
class Getdata(Dataset):
    def __init__(self, csv_file):
        self.data = pd.read_csv(csv_file)

    def __getitem__(self, index):
        r = self.data.iloc[index]
        label = torch.tensor(r.is_up_day, dtype=torch.long)
        # pack the features into one tensor; normalize() is a user-defined method (not shown)
        sample = self.normalize(torch.tensor([r.open, r.high, r.low, r.close], dtype=torch.float32))
        return sample, label

    def __len__(self):
        return len(self.data)
train_set = torchvision.datasets.FashionMNIST(
    root='./data/FashionMNIST',
    train=True,
    download=True,
    transform=transforms.Compose([
        transforms.ToTensor()
    ])
)
# Explore the data
len(train_set)
train_set.targets            # train_labels in older torchvision versions
train_set.targets.bincount() # class balance; if imbalanced, the simplest oversampling is to duplicate minority-class samples
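# Class-imbalance sketch (FashionMNIST is already balanced, so this is illustrative only):
# torch.utils.data.WeightedRandomSampler oversamples rare classes by drawing with replacement
class_counts = train_set.targets.bincount()
sample_weights = 1.0 / class_counts[train_set.targets].float()
sampler = torch.utils.data.WeightedRandomSampler(sample_weights, num_samples=len(sample_weights))
balanced_loader = torch.utils.data.DataLoader(train_set, batch_size=100, sampler=sampler)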
# Access a single element of the dataset
sample = next(iter(train_set))
len(sample)
# len: 2
type(sample)
# type: tuple
# sample is a sequence (a tuple); unpack each element with sequence unpacking
image, label = sample
# squeeze removes dimensions of size 1, e.g. the channel dimension of a grayscale image
image.squeeze()
t1 = torch.tensor([
[1,1,1,1],
[1,1,1,1],
[1,1,1,1],
[1,1,1,1]
])
t2 = torch.tensor([
[2,2,2,2],
[2,2,2,2],
[2,2,2,2],
[2,2,2,2]
])
t3 = torch.tensor([
[3,3,3,3],
[3,3,3,3],
[3,3,3,3],
[3,3,3,3]
])
t = torch.stack((t1, t2, t3)) # stack the three tensors along a new (batch) dimension
t = t.reshape([3,1,4,4]) # add a channel dimension to match (Batch, Channels, Height, Width)
t.flatten(start_dim=1).shape
# torch.Size([3, 16])
t = t.cuda()
# .cuda() returns a copy of the tensor on the GPU; reassign to keep the result
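# A device-agnostic alternative: pick the GPU when available, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
t = t.to(device)   # like .cuda(), .to() returns a new tensor, so reassign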
plt.imshow(image.squeeze(), cmap='gray') # color map
# Load the data
train_loader = torch.utils.data.DataLoader(train_set,
    batch_size=10,   # a small batch so a single batch can be visualized below
    shuffle=True
)
# Fetch a single batch
batch = next(iter(train_loader))
len(batch)
# 2
type(batch)
# list
images, labels = batch
images.shape
# torch.Size([10, 1, 28, 28]) corresponding to (Batch Size, Channels, Height, Width)
# Display the results
grid = torchvision.utils.make_grid(images, nrow=10)
plt.figure(figsize=(15,15))
plt.imshow(np.transpose(grid.numpy(), (1, 2, 0)))
# displays the ten grayscale images in a single row
print('labels:', labels)
# labels: tensor([9, 0, 0, 3, 0, 2, 7, 2, 5, 5])
Model/Network
torch.nn.Module is the base class for every layer-containing network module in PyTorch; custom network models are built by subclassing it. In the constructor, the predefined layers in torch.nn are composed into the model; in the forward method, the forward pass is defined using the layer attributes or the operations provided by the nn.functional API. forward takes a tensor and returns a tensor, implementing the data transformation.
Only the forward function has to be written by hand; the backward function is defined automatically by autograd. Historically, autograd.Variable wrapped a Tensor and recorded the history of operations applied to it (since PyTorch 0.4 this functionality lives directly in Tensor): each tensor produced by an operation has a .grad_fn attribute (formerly .creator) referencing the Function that created it. Calling backward() on a result computes all the gradients automatically; the raw tensor is available through .data, and the gradients are accumulated into .grad.
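A small autograd example, written with the modern requires_grad flag rather than the Variable wrapper (a sketch; the values are arbitrary):
import torch

x = torch.ones(2, 2, requires_grad=True)  # track all operations on x
y = (x * 3).sum()                         # y remembers how it was created
print(y.grad_fn)                          # a SumBackward Function object
y.backward()                              # compute dy/dx automatically
print(x.grad)                             # tensor([[3., 3.], [3., 3.]])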
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)
        # the first in_channels and the last out_features are data-dependent parameters
        # out_channels is the number of filters
        self.conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)
        self.fc1 = nn.Linear(in_features=12*4*4, out_features=120)
        # outputSizeOfConv = (inputSize + 2*pad - filterSize)/stride + 1
        # conv1 output: 28-5+1 = 24, then 2x2 max pooling -> 12x12
        # conv2 output: 12-5+1 = 8, then 2x2 max pooling -> 4x4
        self.fc2 = nn.Linear(in_features=120, out_features=60)
        self.out = nn.Linear(in_features=60, out_features=10)
        # nn.Linear corresponds to a Dense (fully connected) layer

    def forward(self, t):
        # convolution + ReLU + pooling
        t = F.max_pool2d(F.relu(self.conv1(t)), (2, 2))
        t = F.max_pool2d(F.relu(self.conv2(t)), (2, 2))
        t = t.flatten(start_dim=1)  # flatten to (batch, 12*4*4) before the fully connected layers
        t = F.relu(self.fc1(t))
        t = F.relu(self.fc2(t))
        t = self.out(t)
        return t
net = Network()
net.cuda()
# nn.Parameter is a kind of Tensor that is automatically registered as a parameter when assigned as a Module attribute
params = list(net.parameters())
params[0].shape
# torch.Size([6,1,5,5]) conv1's weight
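# Inspect every registered parameter and its shape (the weights and biases of each layer)
for name, param in net.named_parameters():
    print(name, '\t', param.shape)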
input = Variable(torch.randn(1, 1, 28, 28)).cuda()  # inputs must be on the same device as the model
output = net(input)
# Compute the loss
target = Variable(torch.tensor([3])).cuda()  # a dummy class index for the single-sample batch
criterion = nn.CrossEntropyLoss()
loss = criterion(output, target)
# Backpropagate and update the network weights
# zero the gradient buffers first, otherwise new gradients accumulate on top of the existing ones
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
optimizer.zero_grad()
loss.backward()
optimizer.step()
Training the Network
Putting the pieces above together, we train the network for multiple epochs.
# Move the loss computation and the backward/update code above into the training loop
# First define the loss function and the gradient-descent optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(10):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(train_loader, 0):
        # get the inputs
        inputs, labels = data
        # wrap in Variable and move to the GPU
        inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda())
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.item()  # loss.data[0] in PyTorch < 0.4
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0
Analyzing the Training Results
# build the test set and loader the same way as the training data (a sketch; not defined above)
test_set = torchvision.datasets.FashionMNIST(
    root='./data/FashionMNIST',
    train=False,
    download=True,
    transform=transforms.Compose([transforms.ToTensor()])
)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=100)

correct = 0
total = 0
for data in test_loader:
    images, labels = data
    outputs = net(Variable(images.cuda()))
    # torch.max(..., 1) returns the max of each row, torch.max(..., 0) the max of each column;
    # _ is the max value itself, predicted is its index, i.e. the predicted label
    _, predicted = torch.max(outputs.data, 1)
    total += labels.size(0)
    correct += (predicted.cpu() == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / total))
Common issues:
- Difference between load and load_state_dict
torch.load loads an entire model saved with torch.save(model, PATH), whereas load_state_dict loads only the saved parameters (the state_dict):
torch.save(model.state_dict(), PATH)
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))
model.eval()
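For contrast, saving and loading the entire model object looks like this (a minimal sketch):
torch.save(model, PATH)
model = torch.load(PATH)
model.eval()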
model.eval() sets dropout and batch normalization layers to evaluation mode.
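In practice, model.eval() is usually combined with torch.no_grad() during inference so that no gradients are tracked, e.g.:
model.eval()
with torch.no_grad():
    outputs = model(inputs)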
- Difference between view and unsqueeze
view reshapes a tensor to the dimensions given as arguments, keeping the total number of elements unchanged (see the example after the unsqueeze one below).
unsqueeze returns a new tensor with a dimension of size one inserted at the specified position.
>>> x = torch.tensor([1, 2, 3, 4])
>>> torch.unsqueeze(x, 0)
tensor([[ 1, 2, 3, 4]])
>>> torch.unsqueeze(x, 1)
tensor([[ 1],
[ 2],
[ 3],
[ 4]])
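For comparison, a minimal view example on the same tensor (reshaping instead of inserting a size-1 dimension):
>>> x = torch.tensor([1, 2, 3, 4])
>>> x.view(2, 2)
tensor([[1, 2],
        [3, 4]])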
References:
PyTorch official documentation
Deeplizard Python tutorials