PyTorch-day05-Visualization

PyTorch Visualization

  • 1. Model structure visualization
  • 2. Training process visualization
  • 3. Model evaluation visualization
# Import common packages
import os 
import numpy as np 
import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
import torchvision
import torch.nn.functional as F
# Custom model (a small LeNet-style CNN)
class DemoModel(nn.Module):
    def __init__(self):
        super(DemoModel, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
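With 32×32 RGB inputs, conv1 (5×5 kernel) yields 28×28 feature maps, pooling halves that to 14×14, conv2 yields 10×10, and pooling gives 5×5, which is where the 16 * 5 * 5 flatten size in fc1 comes from. A quick sanity check (not in the original notebook):

# Shape check: confirms the 16 * 5 * 5 flatten assumed by fc1 for 32x32 inputs
out = DemoModel()(torch.rand(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 10])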

Method 1: print (model structure visualization)

model = DemoModel()
# Method 1: print the model structure
print(model)
DemoModel(
  (conv1): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
  (pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
  (fc1): Linear(in_features=400, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)
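print(model) shows each layer's configuration but not its parameter counts. A minimal sketch (not in the original notebook) that derives the counts directly from named_parameters():

# Complement to print(model): list every parameter tensor and total the counts
total = 0
for name, p in model.named_parameters():
    print(name, tuple(p.shape))
    total += p.numel()
print("total params:", total)  # 62006, matching the torchinfo summary below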

Method 2: torchinfo

Commonly used to inspect a model's input/output shapes and parameter sizes.

torchinfo

#pip3 install torchinfo

  • torchinfo is simple to use: just call torchinfo.summary(). The required arguments are the model and input_size = (batch_size, channel, h, w)
  • It reports per-module information (each layer's type, output shape, and parameter count), the model's total parameter count, the model size, and the memory needed for one forward/backward pass, among other details
from torchinfo import summary
model = DemoModel() # instantiate the model
# Method 2: torchinfo model structure summary
summary(model, (1, 3, 32, 32)) # 1: batch_size, 3: number of channels, 32: image height/width
==========================================================================================
Layer (type:depth-idx)                   Output Shape              Param #
==========================================================================================
DemoModel                                [1, 10]                   --
├─Conv2d: 1-1                            [1, 6, 28, 28]            456
├─MaxPool2d: 1-2                         [1, 6, 14, 14]            --
├─Conv2d: 1-3                            [1, 16, 10, 10]           2,416
├─MaxPool2d: 1-4                         [1, 16, 5, 5]             --
├─Linear: 1-5                            [1, 120]                  48,120
├─Linear: 1-6                            [1, 84]                   10,164
├─Linear: 1-7                            [1, 10]                   850
==========================================================================================
Total params: 62,006
Trainable params: 62,006
Non-trainable params: 0
Total mult-adds (M): 0.66
==========================================================================================
Input size (MB): 0.01
Forward/backward pass size (MB): 0.05
Params size (MB): 0.25
Estimated Total Size (MB): 0.31
==========================================================================================
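torchinfo can also show extra per-layer columns. A hedged example, assuming the col_names and depth arguments of your installed torchinfo version:

# Optional: request per-layer input sizes alongside output shapes and param counts
summary(model, (1, 3, 32, 32),
        col_names=("input_size", "output_size", "num_params"),
        depth=2)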

Method 3: TensorBoard (most commonly used; visualizes the training process)

TensorBoard

  • TensorBoard is a visualization tool that covers input data (especially images), model structure, parameter distributions, and debugging needs
  • TensorBoard can record any data we specify, including each layer's feature maps, weights, training loss, and more
  • Below, TensorBoard is used to visualize the training process

Installation

pip install tensorboard

Launching TensorBoard

tensorboard --logdir=/path/to/logs/ --port=xxxx

  • Here /path/to/logs/ is the directory where TensorBoard records are saved, equivalent to "./runs" below
  • port is the port for accessing TensorBoard from outside; open ip:port in a browser to view it (see the example command below)
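For the writer created below with SummaryWriter('./runs'), the matching launch command would be (6006 is TensorBoard's default port):

tensorboard --logdir=./runs --port=6006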

Key points:

tensorboard --logdir=path1 and writer = SummaryWriter(path1) must point to the same directory, otherwise TensorBoard will not display any results.
Create a writer, then call writer.add_graph() to record the model graph.

# Note: SummaryWriter lives in torch.utils.tensorboard, not the tensorboard package
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter('./runs')
print(model)
DemoModel(
  (conv1): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
  (pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
  (fc1): Linear(in_features=400, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)
# Method 3: visualize with TensorBoard
writer.add_graph(model,torch.rand(1, 3, 32, 32))
writer.close()
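Beyond the graph, the same writer API can record the weight distributions mentioned above via add_histogram. A minimal sketch (call it once per epoch, passing the epoch index as global_step):

# Log each parameter's distribution to TensorBoard's HISTOGRAMS tab
writer = SummaryWriter('./runs')
for name, param in model.named_parameters():
    writer.add_histogram(name, param, global_step=0)
writer.close()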

TensorBoard graph view (screenshot omitted)

# Hyperparameter definitions
# batch size
batch_size = 16 # alternatives: 32, 64, 128
# optimizer learning rate
lr = 1e-4
# number of epochs to run
max_epochs = 2
# Option 1: select GPUs via an environment variable
# os.environ['CUDA_VISIBLE_DEVICES'] = '0,1' # expose GPUs 0 and 1

# Option 2: use a "device" object and move tensors to it with .to(device)
# device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # use GPU 0

# Data loading
# Using the CIFAR-10 dataset as an example of building a Dataset
from torchvision import datasets

# "data_transform" applies image transformations such as flips, crops, and normalization; customize as needed
data_transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5))
                   ])


# set download=True on the first run if the data is not already present locally
train_cifar_dataset = datasets.CIFAR10('cifar10',train=True, download=False,transform=data_transform)
test_cifar_dataset = datasets.CIFAR10('cifar10',train=False, download=False,transform=data_transform)

# With the Dataset built, DataLoader can read the data in batches

train_loader = torch.utils.data.DataLoader(train_cifar_dataset, 
                                           batch_size=batch_size, num_workers=4, 
                                           shuffle=True, drop_last=True)

test_loader = torch.utils.data.DataLoader(test_cifar_dataset, 
                                         batch_size=batch_size, num_workers=4, 
                                         shuffle=False)
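Before training, the input pipeline can be sanity-checked by writing one batch of images to TensorBoard. A minimal sketch (not in the original notebook; it reuses the ./runs directory):

# Visualize one batch of CIFAR-10 images as a grid in TensorBoard's IMAGES tab
import torchvision.utils as vutils
sample_images, sample_labels = next(iter(train_loader))
grid = vutils.make_grid(sample_images, nrow=8, normalize=True)
check_writer = SummaryWriter('./runs')
check_writer.add_image("cifar10_batch", grid, global_step=0)
check_writer.close()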
# Training & validation
writer = SummaryWriter('./runs')
# Set a fixed random number seed
torch.manual_seed(42)
# Define the loss function and optimizer
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
My_model = DemoModel()
My_model = My_model.to(device)
# cross-entropy loss
criterion = torch.nn.CrossEntropyLoss()
# optimizer
optimizer = torch.optim.Adam(My_model.parameters(), lr=lr)
epoch = max_epochs
epoch = max_epochs

total_step = len(train_loader)
train_all_loss = []
test_all_loss = []
for i in range(epoch):
    My_model.train()
    train_total_loss = 0
    train_total_num = 0
    train_total_correct = 0

    for iter, (images,labels) in enumerate(train_loader):
        images = images.to(device)
        labels = labels.to(device)
        
        # Write the network graph at epoch 0, batch 0
        # (bug fix: the original tested `epoch == 0`, but `epoch` holds max_epochs;
        # the running epoch index is `i`)
        if i == 0 and iter == 0:
            writer.add_graph(My_model, input_to_model=images, verbose=True)

        # Write an example image at batch 0 of every epoch
        if iter == 0:
            writer.add_image("Example input", images[0], global_step=i)
        
        outputs = My_model(images)
        loss = criterion(outputs,labels)
        train_total_correct += (outputs.argmax(1) == labels).sum().item()
        # backward pass
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        
        train_total_num += labels.shape[0]
        train_total_loss += loss.item()
        
        # Log the cumulative training loss against the number of samples seen
        writer.add_scalar("Loss/Minibatches", train_total_loss, train_total_num)
        
        print("Epoch [{}/{}], Iter [{}/{}], train_loss:{:4f}".format(i+1,epoch,iter+1,total_step,loss.item()/labels.shape[0]))
    
    # Write the loss for this epoch (bug fix: use the epoch index `i`, not `epoch`)
    writer.add_scalar("Loss/Epochs", train_total_loss, i)
    
    My_model.eval()
    test_total_loss = 0
    test_total_correct = 0
    test_total_num = 0
    # disable gradient tracking during evaluation
    with torch.no_grad():
        for iter,(images,labels) in enumerate(test_loader):
            images = images.to(device)
            labels = labels.to(device)

            outputs = My_model(images)
            loss = criterion(outputs,labels)
            test_total_correct += (outputs.argmax(1) == labels).sum().item()
            test_total_loss += loss.item()
            test_total_num += labels.shape[0]
    print("Epoch [{}/{}], train_loss:{:.4f}, train_acc:{:.4f}%, test_loss:{:.4f}, test_acc:{:.4f}%".format(
        i+1, epoch, train_total_loss / train_total_num, train_total_correct / train_total_num * 100, test_total_loss / test_total_num, test_total_correct / test_total_num * 100
    
    ))
    train_all_loss.append(np.round(train_total_loss / train_total_num,4))
    test_all_loss.append(np.round(test_total_loss / test_total_num,4))

Epoch [1/2], Iter [1/3125], train_loss:0.144669

Epoch [1/2], Iter [710/3125], train_loss:0.119339

Epoch [1/2], Iter [1254/3125], train_loss:0.121789
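The outline's third item, model evaluation visualization, can be covered with the per-epoch losses collected in train_all_loss and test_all_loss. A minimal sketch (assuming matplotlib is installed) that plots both curves and also logs them to a single TensorBoard chart via add_scalars:

# Plot and log the train/test loss curves collected during training
import matplotlib.pyplot as plt

for ep, (tr, te) in enumerate(zip(train_all_loss, test_all_loss)):
    writer.add_scalars("Loss/Compare", {"train": tr, "test": te}, ep)
writer.close()

plt.plot(train_all_loss, label="train loss")
plt.plot(test_all_loss, label="test loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.savefig("loss_curve.png")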

Comparison

  1. Method 1: print(model) prints the structure of the model's blocks.
  2. Method 2: torchinfo takes an input size and provides fuller module information, including each layer's type, output shape, and parameter count. (torchsummary offers similar functionality.)
    https://blog.csdn.net/weixin_43183872/article/details/108329776
  3. Method 3: TensorBoard can record any data we specify, including each layer's feature maps, weights, training loss, and more.
    When using TensorBoard, remember:

tensorboard --logdir=path1 and writer = SummaryWriter(path1) must point to the same directory, otherwise TensorBoard will not display any results.
Create a writer, then call writer.add_graph() (and the other add_* methods) to record data.
