The LeNet-5 network architecture is shown below (image from Yann LeCun's paper).
This exercise walks through the following flow:
3×32×32 → conv1(3,6,5) → 6×28×28 → relu → maxpool(2,2) → 6×14×14 → conv2(6,16,5) → 16×10×10 → relu → maxpool(2,2) → 16×5×5 → view(-1,16×5×5) → 1×400 → fc1(400,120) → 1×120 → relu → fc2(120,84) → 1×84 → relu → fc3(84,10) → softmax(10)
① The network class inherits from the torch.nn.Module base class.
② The class constructor defines the network's conv, pool, and fc attributes:
self.conv=nn.Conv2d(in_channels,out_channels,kernel_size): input channels, number of filters, filter size; kernel_size may be an int or a tuple such as (4,3)
self.pool=nn.MaxPool2d(2,2): kernel size and stride; if stride is not given, it defaults to the kernel size
self.fc=nn.Linear(in_features,out_features): number of input features, number of output features
③ The subclass must override the forward(self,x) method to define the forward pass, for example:
x=self.pool(F.relu(self.conv(x))): one convolution + activation + pooling layer
x=F.relu(self.fc(x)): one fully connected layer
Example code:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # convolution layers
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # pooling layers
        self.pool1 = nn.MaxPool2d(2, 2)
        self.pool2 = nn.MaxPool2d(2, 2)
        # fully connected layers
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)
    def forward(self, x):
        x = self.pool1(F.relu(self.conv1(x)))  # layer 1: conv + relu + pool
        x = self.pool2(F.relu(self.conv2(x)))  # layer 2: conv + relu + pool
        x = x.view(-1, 16 * 5 * 5)             # flatten to a 1-D feature vector
        x = F.relu(self.fc1(x))                # layer 3: fully connected 1
        x = F.relu(self.fc2(x))                # layer 4: fully connected 2
        x = self.fc3(x)                        # layer 5: fully connected 3
        return x
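To sanity-check the layer-by-layer shapes in the flow above, each stage of this Net can be applied to a dummy input (a minimal sketch, assuming the Net class above is in scope; the batch size of 1 is illustrative only):
import torch
import torch.nn.functional as F

net = Net()
x = torch.randn(1, 3, 32, 32)                # one CIFAR-10-sized input: 3x32x32
x = F.relu(net.conv1(x)); print(x.shape)     # torch.Size([1, 6, 28, 28])
x = net.pool1(x); print(x.shape)             # torch.Size([1, 6, 14, 14])
x = F.relu(net.conv2(x)); print(x.shape)     # torch.Size([1, 16, 10, 10])
x = net.pool2(x); print(x.shape)             # torch.Size([1, 16, 5, 5])
x = x.view(-1, 16 * 5 * 5); print(x.shape)   # torch.Size([1, 400])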
① Once the network object has been created, it must be handed to the GPU:
net=Net()
net.to(device)
② Loss function: criterion=nn.CrossEntropyLoss() (other criteria can be substituted). It is used as:
loss=criterion(outputs,labels): computes the loss between the actual outputs and the target labels
③ The optimizer's main setting is the learning rate lr:
optimizer=optim.SGD(net.parameters(),lr=0.001,momentum=0.9): SGD is chosen here; other optimizers are available
Demo code:
net = Net()                        # create the network object
net.to(device)
criterion = nn.CrossEntropyLoss()  # define the loss function
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
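As a quick illustration of what criterion computes, it can be called on dummy data shaped like this network's output (a minimal sketch; the logits and labels below are made up). Note that nn.CrossEntropyLoss applies log-softmax internally, which is why forward() ends at fc3 without an explicit softmax layer:
dummy_outputs = torch.randn(4, 10)             # raw scores (logits) for a batch of 4, 10 classes
dummy_labels = torch.tensor([3, 0, 9, 1])      # one target class index per sample
loss = criterion(dummy_outputs, dummy_labels)  # scalar tensor
print(loss.item())                             # around ln(10) ≈ 2.3 for random logits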
① Training set and loader setup (CIFAR-10 is used here):
trainset=torchvision.datasets.CIFAR10(root="./data",train=True,download=True,transform=transform)
root: where the downloaded training set is stored. The download comes from an external server and can be slow; see https://blog.csdn.net/york1996/article/details/81780065 for a workaround.
trainloader=torch.utils.data.DataLoader(trainset,batch_size=4,shuffle=True,num_workers=2)
batch_size: the number of samples per batch
num_workers: the number of worker processes, set to 2 here; with num_workers > 0 the loader must be used under the main guard or an error is raised. See: http://www.manongjc.com/article/60188.html
② Training loop. Training iterates over the loader; the number of epochs can be chosen as needed. A single pass looks like:
for i, data in enumerate(trainloader, 0):  # enumerate wraps an iterable: arg 1 is the iterable, arg 2 the start index; i is the index, data the batch
    inputs, labels = data                  # unpack the batch into images and their labels
    inputs, labels = inputs.to(device), labels.to(device)  # must be assigned back: tensor.to() returns a new tensor instead of modifying the original, unlike Module.to(). See: https://blog.csdn.net/qq_27261889/article/details/86575033
    optimizer.zero_grad()                  # clear accumulated gradients
    outputs = net(inputs)
    loss = criterion(outputs, labels)      # compute the loss
    loss.backward()                        # backpropagate
    optimizer.step()                       # take one optimization step
Demo code:
for epoch in range(2):
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data
        #### the results must be assigned back to the original variables,
        #### otherwise a type-mismatch error follows: tensor.to() returns
        #### a new tensor instead of modifying the original in place.
        #### See: https://blog.csdn.net/qq_27261889/article/details/86575033
        inputs = inputs.to(device)
        labels = labels.to(device)
        # zero the gradients
        optimizer.zero_grad()
        # forward pass, backward pass, optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
The procedure mirrors training, except that gradient computation and backpropagation are no longer needed. Here the training set is reused as the test set.
Example code:
correct = 0
total = 0
with torch.no_grad():                 # no gradients needed during evaluation
    for data in trainloader:          # the training set doubles as the test set here
        images, labels = data
        images = images.to(device)
        labels = labels.to(device)
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)  # index of the highest score is the predicted class
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d test images: %d %%' % (
    total, 100 * correct / total))
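The line _, predicted = torch.max(outputs.data, 1) works because torch.max along a dimension returns a (values, indices) pair, and the index of the largest score is the predicted class. A small illustration with made-up scores:
scores = torch.tensor([[0.1, 2.0, 0.3],
                       [1.5, 0.2, 0.9]])
values, indices = torch.max(scores, 1)  # max over dim 1, the class dimension
print(values)                           # tensor([2.0000, 1.5000])
print(indices)                          # tensor([1, 0]), the predicted class per row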
Full program:
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
import numpy as np
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # convolution layers
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # pooling layers
        self.pool1 = nn.MaxPool2d(2, 2)
        self.pool2 = nn.MaxPool2d(2, 2)
        # fully connected layers
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)
    def forward(self, x):
        x = self.pool1(F.relu(self.conv1(x)))  # layer 1: conv + relu + pool
        x = self.pool2(F.relu(self.conv2(x)))  # layer 2: conv + relu + pool
        x = x.view(-1, 16 * 5 * 5)             # flatten to a 1-D feature vector
        x = F.relu(self.fc1(x))                # layer 3: fully connected 1
        x = F.relu(self.fc2(x))                # layer 4: fully connected 2
        x = self.fc3(x)                        # layer 5: fully connected 3
        return x
# image display helper
def imshow(img):
    img = img / 2 + 0.5    # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()
    # plt.show() is required before the image is actually displayed, but the
    # window then blocks the rest of the program; multithreading (or a
    # non-blocking show, see note ③ at the end) works around this
classes=(
"plane","car","bird","cat","deer","dog","frog","horse","ship","truck"
)
# define the device: run on the GPU if one is available, otherwise on the CPU
# the net, inputs, outputs, images, labels and targets objects must all be
# moved to the GPU with .to(device)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
if __name__ == '__main__':
    net = Net()                        # create the network object
    net.to(device)
    criterion = nn.CrossEntropyLoss()  # define the loss function
    optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
    # set up the training set
    transform = transforms.Compose(
        [transforms.ToTensor(),
         transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
    trainset = torchvision.datasets.CIFAR10(
        root="./data", train=True, download=True, transform=transform
    )
    trainloader = torch.utils.data.DataLoader(
        trainset, batch_size=4, shuffle=True, num_workers=2
    )
    # training loop
    for epoch in range(2):
        running_loss = 0.0
        for i, data in enumerate(trainloader, 0):
            # get the inputs
            inputs, labels = data
            #### the results must be assigned back to the original variables,
            #### otherwise a type-mismatch error follows: tensor.to() returns
            #### a new tensor instead of modifying the original in place.
            #### See: https://blog.csdn.net/qq_27261889/article/details/86575033
            inputs = inputs.to(device)
            labels = labels.to(device)
            # zero the gradients
            optimizer.zero_grad()
            # forward pass, backward pass, optimize
            outputs = net(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
            # print statistics
            running_loss += loss.item()
            if i % 2000 == 1999:    # print every 2000 mini-batches
                print('[%d, %5d] loss: %.3f' %
                      (epoch + 1, i + 1, running_loss / 2000))
                running_loss = 0.0
    print("Finished Training")
    # ###### observe training results on the test set
    # # set up the test set
    # testset = torchvision.datasets.CIFAR10(
    #     root="./data", train=False, download=True, transform=transform
    # )
    # testloader = torch.utils.data.DataLoader(
    #     testset, batch_size=4, shuffle=False, num_workers=2
    # )
    # dataiter = iter(testloader)
    # images, labels = next(dataiter)
    # images = images.to(device)
    # labels = labels.to(device)
    # print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
    # # inputs from the test set
    # outputs = net(images)
    # _, predicted = torch.max(outputs, 1)
    # print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
    #                               for j in range(4)))
    # # display the images
    # imshow(torchvision.utils.make_grid(images))
    ##### evaluate the network over the whole set
    correct = 0
    total = 0
    with torch.no_grad():
        for data in trainloader:    # the training set doubles as the test set here
            images, labels = data
            images = images.to(device)
            labels = labels.to(device)
            outputs = net(images)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    print('Accuracy of the network on the %d test images: %d %%' % (
        total, 100 * correct / total))
Output:
Files already downloaded and verified
[1, 2000] loss: 2.228
[1, 4000] loss: 4.108
[1, 6000] loss: 5.779
[1, 8000] loss: 7.347
[1, 10000] loss: 8.839
[1, 12000] loss: 10.298
[2, 2000] loss: 1.394
[2, 4000] loss: 2.759
[2, 6000] loss: 4.097
[2, 8000] loss: 5.419
[2, 10000] loss: 6.720
[2, 12000] loss: 7.987
Finished Training
Accuracy of the network on the 10000 test images: 50 %
① If backward needs to be called twice on the same graph, an error is raised; the fix is to pass retain_graph=True on the first call so the graph buffers are not freed. See: https://oldpan.me/archives/pytorch-retain_graph-work
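A minimal sketch of the pattern (the tiny graph below is illustrative; the point is the two backward() calls on the same graph):
import torch
x = torch.ones(2, requires_grad=True)
y = (x * 3).sum()
y.backward(retain_graph=True)  # keep the graph buffers so it can be traversed again
y.backward()                   # second pass; raises a RuntimeError without retain_graph=True above
print(x.grad)                  # gradients accumulate across passes: tensor([6., 6.])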
② With num_workers set to a nonzero value, the trainloader raises a multiprocessing error; it must be created and used inside the main guard. See:
http://www.manongjc.com/article/60188.html
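A minimal sketch of the fix, assuming trainset is defined as in the full program: with num_workers > 0 the worker processes re-import the script, so the loader must only be created and iterated under the main guard:
if __name__ == '__main__':
    trainloader = torch.utils.data.DataLoader(
        trainset, batch_size=4, shuffle=True, num_workers=2
    )
    for i, data in enumerate(trainloader, 0):
        pass  # training step goes here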
③ An image drawn with plt.imshow() is not displayed until plt.show() is called, but plt.show() then blocks the rest of the program; multithreading can work around this.
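Besides a separate thread, a simpler workaround is the non-blocking form of show (a minimal sketch, reusing npimg from the imshow helper above; plt.pause gives the GUI event loop a moment to draw):
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show(block=False)  # returns immediately instead of blocking
plt.pause(0.001)       # let the window actually render
# the rest of the program keeps running; a final blocking plt.show() keeps windows open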
④ When moving inputs, images, or labels to the GPU, the result must be assigned back to the variable, or a type-mismatch error follows: tensor.to() returns a new object instead of modifying the original. See: https://blog.csdn.net/qq_27261889/article/details/86575033
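The difference is easy to demonstrate (a minimal sketch, assuming a CUDA device is available):
t = torch.ones(2)
t2 = t.to(device)                     # tensor.to() returns a NEW tensor on the target device
print(t.device, t2.device)            # cpu cuda:0, the original tensor is unchanged
net = Net()
net.to(device)                        # Module.to() moves the parameters in place (and returns self)
print(next(net.parameters()).device)  # cuda:0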