TensorBoard Visualization

Loss reference: https://wuyaogexing.com/65/340362.html

Convolution visualization reference: https://blog.csdn.net/qq_40042726/article/details/121192531

Using TensorBoard to visualize loss

1. The data produced at each iteration is saved to a log file. Running the `tensorboard` command on that log starts a local server, and the visualized data can then be viewed in a browser.

2. Declare the writer object; differences between TensorBoard versions can be handled with try/except.

3. Use the library that ships with PyTorch for visualization:

from torch.utils.tensorboard import SummaryWriter

4. SummaryWriter() creates a TensorBoard event file. Its common parameters are log_dir (the directory where the file is stored) and flush_secs (how often, in seconds, pending events are flushed to disk).

It is called as follows:

writer = SummaryWriter(log_dir='logs', flush_secs=60)

5. writer.add_scalar() logs the loss curve in TensorBoard. Its common parameters are tag (the name of the curve), scalar_value (the value to record), and global_step (the x-axis coordinate).

writer.add_scalar('Train_loss', loss, epoch * epoch_size + iteration)
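Putting steps 4 and 5 together, a minimal logging loop might look like the sketch below. The `epoch_size` value and the decaying dummy loss are made-up stand-ins for a real training loop:

```python
# Minimal sketch: log a (dummy) training loss to TensorBoard.
# 'Train_loss' is the curve tag; the loss values here are illustrative only.
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir='logs', flush_secs=60)

epoch_size = 100  # iterations per epoch (assumed)
for epoch in range(2):
    for iteration in range(epoch_size):
        loss = 1.0 / (epoch * epoch_size + iteration + 1)  # stand-in for a real loss
        # global_step keeps the x-axis continuous across epochs
        writer.add_scalar('Train_loss', loss, epoch * epoch_size + iteration)
writer.close()
```

After running this, `tensorboard --logdir=logs` shows one `Train_loss` curve spanning both epochs.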

6. Once the event file has been generated, start the TensorBoard server from a terminal with tensorboard --logdir=<log directory>, then open the URL it prints in a browser.

tensorboard --logdir="D:\Study\Collection\Tensorboard-pytorch\logs"

A complete example:

import torch
from torch import nn
from torch.nn import Conv2d
from torch.utils.data import DataLoader
import torchvision
from torch.utils.tensorboard import SummaryWriter
dataset = torchvision.datasets.CIFAR10('./data',
                                       train=False,
                                       transform=torchvision.transforms.ToTensor(),
                                       download=True)
dataloader = DataLoader(dataset, batch_size=64)


class qingfeng(nn.Module):
    def __init__(self):
        super(qingfeng, self).__init__()
        self.conv1 = Conv2d(in_channels=3,
                            out_channels=6,
                            kernel_size=3,
                            stride=1,
                            padding=0)

    def forward(self, x):
        x = self.conv1(x)
        return x

model = qingfeng()
print(model)
writer = SummaryWriter('./logs')
step = 0
for data in dataloader:
    imgs, targets = data
    print("input shape:", imgs.shape)
    output = model(imgs)
    print("output shape:", output.shape)
    writer.add_images("input", imgs, step)

    # add_images expects 3 channels, so fold the 6 output channels
    # into extra batch entries: (64, 6, 30, 30) -> (128, 3, 30, 30)
    output = torch.reshape(output, (-1, 3, 30, 30))
    print("reshaped output:", output.shape)
    writer.add_images('output', output, step)
    step += 1
writer.close()

Using TensorBoard to visualize convolutional feature maps:

torch.nn.Module.register_forward_hook(hook_func) makes feature-map visualization possible. register_forward_hook registers a hook: once it is set, the custom function passed to register_forward_hook is executed every time an input is propagated forward through the module.

hook_func receives three arguments from the forward pass: hook_func(module, input, output). module is the module itself, with its concrete parameters: for a ReLU layer it is ReLU(), for a convolution it is Conv2d(in_channels=...). input and output are the feature maps flowing into and out of the module. A module may have several inputs (a concat layer, for example) but only one output, so input is a tuple in which every element is a Tensor, while output is a single Tensor. In practice output is usually the one analyzed. Inside hook_func the feature maps can be drawn and saved as images.
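A minimal sketch of the signature described above (the Conv2d layer and tensor shapes here are made up for illustration): a forward hook can record what a module receives and produces.

```python
import torch
import torch.nn as nn

shapes = []

# register_forward_hook passes (module, input, output) to the hook:
# input is a tuple of Tensors, output is a single Tensor.
def hook_fn(module, input, output):
    shapes.append((type(module).__name__,
                   tuple(input[0].shape),
                   tuple(output.shape)))

conv = nn.Conv2d(3, 6, kernel_size=3)
handle = conv.register_forward_hook(hook_fn)

x = torch.randn(1, 3, 32, 32)
with torch.no_grad():
    conv(x)

handle.remove()  # detach the hook when done
print(shapes)    # [('Conv2d', (1, 3, 32, 32), (1, 6, 30, 30))]
```

The 3×3 kernel with no padding shrinks 32×32 to 30×30, which the hook observes without touching the model code itself.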

import torch
import torch.nn as nn
import torchvision
from torchvision import transforms
import torch.nn.functional as F
from torch.utils.tensorboard import SummaryWriter
import os
import cv2

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = torch.nn.Sequential(
            torch.nn.Conv2d(1, 32, kernel_size=3, stride=1, padding=1),
            torch.nn.ReLU(),
            torch.nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),
            torch.nn.ReLU(),
            torch.nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(stride=2, kernel_size=2)
        )

        self.dense = torch.nn.Sequential(
            torch.nn.Linear(14 * 14 * 128, 1024),
            torch.nn.ReLU(),
            torch.nn.Dropout(p=0.5),
            torch.nn.Linear(1024, 10)
        )

    def forward(self, x):
        x = self.conv1(x)
        x = x.view(-1, 14 * 14 * 128)  # flatten: 28x28 input pooled to 14x14
        x = self.dense(x)
        output = F.log_softmax(x, dim=1)
        return output

# This hook is registered with register_forward_pre_hook below, so it only
# receives (module, input); input is a tuple of the module's input tensors.
def hook_func(module, input):
    x = input[0][0]      # first sample of the batch: (C, H, W)
    x = x.unsqueeze(1)   # treat each channel as a grayscale image: (C, 1, H, W)
    global i
    # tile the channel images into one grid; padding is the gap between tiles
    image_batch = torchvision.utils.make_grid(x, padding=4)
    # CHW -> HWC, as expected by dataformats='HWC'
    image_batch = image_batch.cpu().numpy().transpose(1, 2, 0)
    writer.add_image("feature_maps", image_batch, i, dataformats='HWC')
    i += 1

if __name__ == '__main__':
    os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
    writer = SummaryWriter("./logs")
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    pipline = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,)),
    ])
    map_location = "cuda" if torch.cuda.is_available() else "cpu"
    model = MyModel().to(device)
    # load trained weights ('.pth' is a placeholder for the checkpoint path)
    model.load_state_dict(torch.load('.pth', map_location=map_location))
    i = 0
    for name, m in model.named_modules():
        # uncomment the isinstance check to visualize only the conv layers
        # if isinstance(m, torch.nn.Conv2d):
        m.register_forward_pre_hook(hook_func)
    img = cv2.imread('./1.png')
    writer.add_image("img", img, 1, dataformats='HWC')
    img = pipline(img).unsqueeze(0).to(device)
    img = transforms.functional.resize(img, [28, 28])
    # the model expects 1-channel input, so the 3 BGR channels are split
    # into 3 separate single-channel samples
    img = img.reshape(-1, 1, 28, 28)
    with torch.no_grad():
        model(img)
    writer.close()
