L09_Convolutional Neural Networks: CIFAR Image Classification

0. Import packages

1. The torch module provides the core tensor and neural-network APIs of the framework.

2. DataLoader is used to build the data loaders.

3. torchvision is PyTorch's computer-vision package; it provides the datasets and image transforms used below.

4. F (torch.nn.functional) gives access to the functional API, e.g. the relu activation.

5. optim provides the optimizers.

import torch
from torch.utils.data import DataLoader
from torchvision import datasets,transforms
import torch.nn.functional as F
import torch.optim as optim

1. Prepare the data

1. Lines 1-2 define the batch sizes used when loading the training and test data (i.e., how many samples are fetched at a time).

2. Line 4: ToTensor() converts image data (a PIL Image or ndarray) to a tensor and scales pixel values to [0, 1]; Normalize() then standardizes each channel with the given per-channel means and standard deviations.

3. Line 6 downloads the training set and applies the preprocessing via the transform argument.

4. Line 7 builds the data loader.

batch_size1 = 64
batch_size2 = 32

transform = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))])

train_dataset = datasets.CIFAR10(root='./data/',train=True,download=True,transform=transform)
train_loader = DataLoader(train_dataset,batch_size=batch_size1,shuffle=True,num_workers=2)

test_dataset = datasets.CIFAR10(root='./data/',train=False,download=True,transform=transform)
test_loader = DataLoader(test_dataset,batch_size=batch_size2,shuffle=True,num_workers=2)
Files already downloaded and verified
Files already downloaded and verified

2. Design the model

class CNN_CIFAR(torch.nn.Module):
    def __init__(self):
        super(CNN_CIFAR,self).__init__()
        self.conv1 = torch.nn.Conv2d(3,12,kernel_size=5)
        self.conv2 = torch.nn.Conv2d(12,24,kernel_size=5)
        self.pooling = torch.nn.MaxPool2d(2)
        self.fc = torch.nn.Linear(5*5*24,10)
        
    def forward(self,x):
        BATCH_SIZE=x.size(0)
        x = F.relu(self.pooling(self.conv1(x)))
        x = F.relu(self.pooling(self.conv2(x)))
        x = x.view(BATCH_SIZE, -1)
        x = self.fc(x)
        return x

model = CNN_CIFAR()
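
The 5*5*24 input size of the fully connected layer comes from tracing shapes through the network: a 5x5 kernel with no padding shrinks each spatial dimension by 4, and each 2x2 max-pool halves it, so 32 → 28 → 14 → 10 → 5. A small sketch to confirm:

```python
import torch

# Same layers as the model above, applied to one dummy CIFAR-10 image.
conv1 = torch.nn.Conv2d(3, 12, kernel_size=5)
conv2 = torch.nn.Conv2d(12, 24, kernel_size=5)
pooling = torch.nn.MaxPool2d(2)

x = torch.randn(1, 3, 32, 32)
x = pooling(conv1(x))
print(x.shape)              # torch.Size([1, 12, 14, 14])
x = pooling(conv2(x))
print(x.shape)              # torch.Size([1, 24, 5, 5])
print(x.view(1, -1).shape)  # torch.Size([1, 600]) = 5*5*24 features into fc
```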

3. Build the loss function and optimizer

criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(),lr=0.01)
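
Note that CrossEntropyLoss applies log-softmax internally, which is why the model's forward returns raw logits with no final softmax. A tiny illustration with made-up values (not from the dataset):

```python
import torch

criterion = torch.nn.CrossEntropyLoss()

# Two dummy samples with 3 classes each; targets are class indices.
logits = torch.tensor([[2.0, 0.5, 0.1],
                       [0.2, 3.0, 0.3]])
target = torch.tensor([0, 1])

loss = criterion(logits, target)
# Equivalent to the mean of -log(softmax(logits)[i, target[i]]):
manual = (-torch.log_softmax(logits, dim=1)[torch.arange(2), target]).mean()
print(loss.item(), manual.item())  # the two values match
```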

4. Training

def train(epoch):
    running_loss = 0.0
    # iterate over the training data in batches
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, target = data
        optimizer.zero_grad()

        outputs = model(inputs)
        loss = criterion(outputs, target)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        # print the average loss so far every 300 batches
        if batch_idx % 300 == 299:
            print(f'epoch:{epoch+1},batch_idx:{batch_idx+1},loss:{running_loss/(batch_idx+1):.3f}')
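
One detail worth calling out: optimizer.zero_grad() must run before each backward pass, because backward() accumulates into .grad rather than overwriting it. A standalone sketch:

```python
import torch

w = torch.tensor([1.0], requires_grad=True)

loss = (2 * w).sum()
loss.backward()
print(w.grad)       # tensor([2.])

# A second backward() without clearing accumulates the gradient:
loss = (2 * w).sum()
loss.backward()
print(w.grad)       # tensor([4.]), not tensor([2.])
```

Skipping zero_grad() would mix gradients from different batches into every update step.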

5. Testing

def test():
    correct = 0
    total = 0
    # gradients are not needed during evaluation
    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            outputs = model(images)
            # the index of the largest logit is the predicted class
            _, predicted = torch.max(outputs.data, dim=1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
        print(f'accuracy:{100*correct/total}%')
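
How the correct-count works: torch.max over dim=1 returns (max values, argmax indices), and the indices serve as the predicted class labels. With dummy outputs:

```python
import torch

# Hypothetical logits for 3 samples and 2 classes.
outputs = torch.tensor([[0.1, 0.9],
                        [0.8, 0.2],
                        [0.3, 0.7]])
_, predicted = torch.max(outputs, dim=1)
labels = torch.tensor([1, 0, 0])

correct = (predicted == labels).sum().item()
print(predicted)    # tensor([1, 0, 1])
print(correct)      # 2 of the 3 predictions match the labels
```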

6. Program entry point

if __name__ == '__main__':
    for epoch in range(3):
        train(epoch)
        test()
epoch:1,batch_idx:300,loss:2.022
epoch:1,batch_idx:600,loss:1.889
accuracy:36.11%
epoch:2,batch_idx:300,loss:1.535
epoch:2,batch_idx:600,loss:1.503
accuracy:46.29%
epoch:3,batch_idx:300,loss:1.385
epoch:3,batch_idx:600,loss:1.374
accuracy:51.91%
