Following the previous post on data loading, this post covers a key step in computer vision: building the recognition model. No discussion of visual recognition is complete without the convolutional neural network (CNN), so CNNs are the focus here.
1. A brief history of convolutional neural networks
In papers published in 1979 and 1980, Kunihiko Fukushima designed a neural network called the "neocognitron", modeled on the biological visual cortex. The neocognitron has a deep structure and is one of the earliest proposed deep learning algorithms. In 1987, Alexander Waibel et al. proposed the Time Delay Neural Network (TDNN), a convolutional network applied to speech recognition. In 1988, Wei Zhang proposed the first two-dimensional convolutional neural network, the shift-invariant artificial neural network (SIANN), and applied it to detection in medical images. In 1989, Yann LeCun built a convolutional neural network for computer vision problems, the original version of LeNet, and in 1993 AT&T Bell Laboratories completed the LeNet code. In the twenty-first century, with improvements in computer hardware, neural networks have advanced considerably.
2. The structure of a convolutional neural network
A convolutional neural network is a layered network just like a traditional neural network; only the function and form of the layers change, so it can be viewed as an improvement on the traditional architecture.
The layers of a convolutional neural network are as follows (a minimal PyTorch sketch appears after the list):
Input layer:
This layer preprocesses the raw image data, typically including mean subtraction and normalization.
Convolutional layer:
This is the most important layer of a convolutional neural network and the source of its name.
The convolutional layer relies on two key operations: local connectivity (each neuron is connected only to a local region of the input) and a sliding window whose weights are shared across positions.
Activation layer:
This layer applies a nonlinear mapping to the output of the convolutional layer. CNNs generally use ReLU (the Rectified Linear Unit), which converges quickly and has a simple gradient, but is somewhat fragile (units can "die" and stop updating).
Pooling layer:
Pooling layers sit between successive convolutional layers and compress the amount of data and parameters, reducing overfitting. In short, if the input is an image, the main job of the pooling layer is to shrink that image.
Fully connected layer:
Every neuron in one layer is connected by a weight to every neuron in the next; fully connected layers usually appear at the tail of a convolutional neural network. In other words, the neurons are connected in the same way as in a traditional neural network.
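To make the layer roles concrete, here is a minimal sketch (an illustration added here, not the model used later in this post) that stacks these layer types in PyTorch, assuming a 3-channel 64x64 input:

import torch
import torch.nn as nn

tiny_cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # convolutional layer: local connectivity, shared weights
    nn.ReLU(),                                    # activation layer: nonlinear mapping
    nn.MaxPool2d(2),                              # pooling layer: 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 10),                  # fully connected layer: 10-way classification head
)

x = torch.randn(1, 3, 64, 64)   # a dummy preprocessed input batch
print(tiny_cnn(x).shape)        # torch.Size([1, 10])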
3. Code implementation and notes
# Load the data and prepare the training and validation sets
import os, sys, glob, shutil, json
os.environ["CUDA_VISIBLE_DEVICES"] = '0'
import cv2
from PIL import Image
import numpy as np
from tqdm import tqdm, tqdm_notebook
import torch
torch.manual_seed(0)
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.benchmark = True
import torchvision.models as models
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from torch.utils.data.dataset import Dataset
# Step 1: define the Dataset that reads the images
class SVHNDataset(Dataset):
    def __init__(self, img_path, img_label, transform=None):
        self.img_path = img_path
        self.img_label = img_label
        if transform is not None:
            self.transform = transform
        else:
            self.transform = None

    def __getitem__(self, index):
        img = Image.open(self.img_path[index]).convert('RGB')
        if self.transform is not None:
            img = self.transform(img)
        # Cap the label length at 5 characters; pad short labels with class 10 ("no character")
        lbl = np.array(self.img_label[index], dtype=int)
        lbl = list(lbl) + (5 - len(lbl)) * [10]
        return img, torch.from_numpy(np.array(lbl[:5]))

    def __len__(self):
        return len(self.img_path)
train_path = glob.glob('cv/mchar_train/mchar_train/*.png')
train_path.sort()
train_json = json.load(open('cv/mchar_train.json'))
train_label = [train_json[x]['label'] for x in train_json]
print(len(train_path), len(train_label))
train_loader = torch.utils.data.DataLoader(
    SVHNDataset(train_path, train_label,
                transforms.Compose([
                    transforms.Resize((64, 128)),
                    transforms.RandomCrop((60, 120)),
                    transforms.ColorJitter(0.3, 0.3, 0.2),
                    transforms.RandomRotation(5),
                    transforms.ToTensor(),
                    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
                ])),
    batch_size=40,
    shuffle=True,
    num_workers=0,
)
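As a quick sanity check (an addition, not part of the original script), one batch can be pulled from the loader to confirm the crop size and the 5-position labels:

for data, label in train_loader:
    print(data.shape, label.shape)  # expected: torch.Size([40, 3, 60, 120]) torch.Size([40, 5])
    break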
val_path = glob.glob('cv/mchar_val/mchar_val/*.png')
val_path.sort()
val_json = json.load(open('cv/mchar_val.json'))
val_label = [val_json[x]['label'] for x in val_json]
print(len(val_path), len(val_label))
val_loader = torch.utils.data.DataLoader(
    SVHNDataset(val_path, val_label,
                transforms.Compose([
                    transforms.Resize((60, 120)),
                    # transforms.ColorJitter(0.3, 0.3, 0.2),
                    # transforms.RandomRotation(5),
                    transforms.ToTensor(),
                    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
                ])),
    batch_size=40,
    shuffle=False,
    num_workers=0,
)
# Define the model; since accuracy is the goal, a pretrained backbone is used
class SVHN_Model1(nn.Module):
    def __init__(self):
        super(SVHN_Model1, self).__init__()
        model_conv = models.resnet18(pretrained=True)
        model_conv.avgpool = nn.AdaptiveAvgPool2d(1)
        model_conv = nn.Sequential(*list(model_conv.children())[:-1])
        self.cnn = model_conv
        # A hand-written CNN feature extractor (kept for reference, replaced by ResNet18 above)
        # self.cnn = nn.Sequential(
        #     nn.Conv2d(3, 16, kernel_size=(3, 3), stride=(2, 2)),
        #     nn.ReLU(),
        #     nn.MaxPool2d(2),
        #     nn.Conv2d(16, 32, kernel_size=(3, 3), stride=(2, 2)),
        #     nn.ReLU(),
        #     nn.MaxPool2d(2),
        # )
        # Five parallel heads, one per character position; 11 classes (digits 0-9 plus "no character")
        self.fc1 = nn.Linear(512, 11)
        self.fc2 = nn.Linear(512, 11)
        self.fc3 = nn.Linear(512, 11)
        self.fc4 = nn.Linear(512, 11)
        self.fc5 = nn.Linear(512, 11)
        # self.fc6 = nn.Linear(32 * 3 * 7, 11)

    def forward(self, img):
        feat = self.cnn(img)
        # print(feat.shape)
        feat = feat.view(feat.shape[0], -1)
        c1 = self.fc1(feat)
        c2 = self.fc2(feat)
        c3 = self.fc3(feat)
        c4 = self.fc4(feat)
        c5 = self.fc5(feat)
        # c6 = self.fc6(feat)
        return c1, c2, c3, c4, c5  # , c6
model = SVHN_Model1()
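A quick shape check (again an illustration added here) confirms that each of the five heads produces an 11-way score per sample:

dummy = torch.randn(2, 3, 60, 120)   # matches the cropped training size
with torch.no_grad():
    c1, c2, c3, c4, c5 = model(dummy)
print(c1.shape)                       # expected: torch.Size([2, 11])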
# Training setup: loss, optimizer, and device
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), 0.0001)
best_loss = 100.0
use_cuda = False
if use_cuda:
    model = model.cuda()
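The loop below calls train, validate, and predict helpers that this post does not show. Here is a minimal sketch of what they might look like, under the assumptions that the loss is the sum of the five per-position cross-entropies and that predict concatenates the five head outputs along dimension 1 (so each row carries 5 * 11 = 55 scores, as the slicing in the loop expects):

def train(train_loader, model, criterion, optimizer):
    # One pass over the training set; returns the mean loss
    model.train()
    train_loss = []
    for data, label in train_loader:
        if use_cuda:
            data, label = data.cuda(), label.cuda()
        c1, c2, c3, c4, c5 = model(data)
        loss = criterion(c1, label[:, 0]) + criterion(c2, label[:, 1]) + \
               criterion(c3, label[:, 2]) + criterion(c4, label[:, 3]) + \
               criterion(c5, label[:, 4])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        train_loss.append(loss.item())
    return np.mean(train_loss)

def validate(val_loader, model, criterion):
    # Mean loss on the validation set, without gradient updates
    model.eval()
    val_loss = []
    with torch.no_grad():
        for data, label in val_loader:
            if use_cuda:
                data, label = data.cuda(), label.cuda()
            c1, c2, c3, c4, c5 = model(data)
            loss = criterion(c1, label[:, 0]) + criterion(c2, label[:, 1]) + \
                   criterion(c3, label[:, 2]) + criterion(c4, label[:, 3]) + \
                   criterion(c5, label[:, 4])
            val_loss.append(loss.item())
    return np.mean(val_loss)

def predict(test_loader, model, tta=1):
    # Stack the five head outputs into rows of 55 scores; tta > 1 sums several augmented passes
    model.eval()
    test_pred_tta = None
    for _ in range(tta):
        test_pred = []
        with torch.no_grad():
            for data, label in test_loader:
                if use_cuda:
                    data = data.cuda()
                c1, c2, c3, c4, c5 = model(data)
                output = torch.cat([c1, c2, c3, c4, c5], dim=1)
                test_pred.append(output.cpu().numpy())
        test_pred = np.vstack(test_pred)
        test_pred_tta = test_pred if test_pred_tta is None else test_pred_tta + test_pred
    return test_pred_tta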
if __name__ == '__main__':
    for epoch in range(10):
        # train_loss = train(train_loader, model, criterion, optimizer, epoch)
        train_loss = train(train_loader, model, criterion, optimizer)
        val_loss = validate(val_loader, model, criterion)

        val_label = [''.join(map(str, x)) for x in val_loader.dataset.img_label]
        val_predict_label = predict(val_loader, model, 1)
        val_predict_label = np.vstack([
            val_predict_label[:, :11].argmax(1),
            val_predict_label[:, 11:22].argmax(1),
            val_predict_label[:, 22:33].argmax(1),
            val_predict_label[:, 33:44].argmax(1),
            val_predict_label[:, 44:55].argmax(1),
        ]).T
        val_label_pred = []
        for x in val_predict_label:
            val_label_pred.append(''.join(map(str, x[x != 10])))
        val_char_acc = np.mean(np.array(val_label_pred) == np.array(val_label))

        print('Epoch: {0}, Train loss: {1} \t Val loss: {2}'.format(epoch, train_loss, val_loss))
        print(val_char_acc)
        # Save the weights whenever the validation loss improves
        if val_loss < best_loss:
            best_loss = val_loss
            torch.save(model.state_dict(), './model_1.pt')
Here a pretrained model is applied directly for training.
Summary
Walking through the development of convolutional neural networks made the evolution of this part of modern computer science much clearer. Overall, it was a rewarding exercise.