Viewing a network's structure with torchinfo

With torchinfo you only need to set the batch size and the input image size. Here I import a network architecture of my own; you can substitute any model you like.

from network.classifier import *   # provides MesoInception4

# model = MesoInception4()
# print(model)

from torchinfo import summary

model = MesoInception4()
batch_size = 128
summary(model, input_size=(batch_size, 3, 256, 256))  # (batch, channels, height, width)

Output:

==========================================================================================
Layer (type:depth-idx)                   Output Shape              Param #
==========================================================================================
MesoInception4                           [128, 2]                  --
├─Conv2d: 1-1                            [128, 1, 256, 256]        3
├─Conv2d: 1-2                            [128, 4, 256, 256]        12
├─Conv2d: 1-3                            [128, 4, 256, 256]        144
├─Conv2d: 1-4                            [128, 4, 256, 256]        12
├─Conv2d: 1-5                            [128, 4, 256, 256]        144
├─Conv2d: 1-6                            [128, 2, 256, 256]        6
├─Conv2d: 1-7                            [128, 2, 256, 256]        36
├─BatchNorm2d: 1-8                       [128, 11, 256, 256]       22
├─MaxPool2d: 1-9                         [128, 11, 128, 128]       --
├─Conv2d: 1-10                           [128, 2, 128, 128]        22
├─Conv2d: 1-11                           [128, 4, 128, 128]        44
├─Conv2d: 1-12                           [128, 4, 128, 128]        144
├─Conv2d: 1-13                           [128, 4, 128, 128]        44
├─Conv2d: 1-14                           [128, 4, 128, 128]        144
├─Conv2d: 1-15                           [128, 2, 128, 128]        22
├─Conv2d: 1-16                           [128, 2, 128, 128]        36
├─BatchNorm2d: 1-17                      [128, 12, 128, 128]       24
├─MaxPool2d: 1-18                        [128, 12, 64, 64]         --
├─Conv2d: 1-19                           [128, 16, 64, 64]         4,800
├─ReLU: 1-20                             [128, 16, 64, 64]         --
├─BatchNorm2d: 1-21                      [128, 16, 64, 64]         32
├─MaxPool2d: 1-22                        [128, 16, 32, 32]         --
├─Conv2d: 1-23                           [128, 16, 32, 32]         6,400
├─ReLU: 1-24                             [128, 16, 32, 32]         --
├─BatchNorm2d: 1-25                      [128, 16, 32, 32]         (recursive)
├─MaxPool2d: 1-26                        [128, 16, 8, 8]           --
├─Dropout2d: 1-27                        [128, 1024]               --
├─Linear: 1-28                           [128, 16]                 16,400
├─LeakyReLU: 1-29                        [128, 16]                 --
├─Dropout2d: 1-30                        [128, 16]                 --
├─Linear: 1-31                           [128, 2]                  34
==========================================================================================
Total params: 28,525
Trainable params: 28,525
Non-trainable params: 0
Total mult-adds (G): 7.31
==========================================================================================
Input size (MB): 100.66
Forward/backward pass size (MB): 2868.92
Params size (MB): 0.11
Estimated Total Size (MB): 2969.70
==========================================================================================
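If you want more detail in the table, summary also accepts optional arguments such as col_names, depth, and device. The snippet below is a minimal sketch; the chosen column names and depth value are just an illustration of the current torchinfo API.

from torchinfo import summary

model = MesoInception4()
summary(
    model,
    input_size=(128, 3, 256, 256),
    col_names=("input_size", "output_size", "num_params", "mult_adds"),  # extra columns to display
    depth=2,        # how many levels of nested modules to expand
    device="cpu",   # run the dummy forward pass on CPU
)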

Calling print(model) directly gives the output below; the two views complement each other. Best of all is to render the structure as a diagram (see the sketch after this listing).

MesoInception4(
  (Incption1_conv1): Conv2d(3, 1, kernel_size=(1, 1), stride=(1, 1), bias=False)
  (Incption1_conv2_1): Conv2d(3, 4, kernel_size=(1, 1), stride=(1, 1), bias=False)
  (Incption1_conv2_2): Conv2d(4, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  (Incption1_conv3_1): Conv2d(3, 4, kernel_size=(1, 1), stride=(1, 1), bias=False)
  (Incption1_conv3_2): Conv2d(4, 4, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
  (Incption1_conv4_1): Conv2d(3, 2, kernel_size=(1, 1), stride=(1, 1), bias=False)
  (Incption1_conv4_2): Conv2d(2, 2, kernel_size=(3, 3), stride=(1, 1), padding=(3, 3), dilation=(3, 3), bias=False)
  (Incption1_bn): BatchNorm2d(11, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (Incption2_conv1): Conv2d(11, 2, kernel_size=(1, 1), stride=(1, 1), bias=False)
  (Incption2_conv2_1): Conv2d(11, 4, kernel_size=(1, 1), stride=(1, 1), bias=False)
  (Incption2_conv2_2): Conv2d(4, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  (Incption2_conv3_1): Conv2d(11, 4, kernel_size=(1, 1), stride=(1, 1), bias=False)
  (Incption2_conv3_2): Conv2d(4, 4, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
  (Incption2_conv4_1): Conv2d(11, 2, kernel_size=(1, 1), stride=(1, 1), bias=False)
  (Incption2_conv4_2): Conv2d(2, 2, kernel_size=(3, 3), stride=(1, 1), padding=(3, 3), dilation=(3, 3), bias=False)
  (Incption2_bn): BatchNorm2d(12, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (conv1): Conv2d(12, 16, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), bias=False)
  (relu): ReLU(inplace=True)
  (leakyrelu): LeakyReLU(negative_slope=0.1)
  (bn1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (maxpooling1): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
  (conv2): Conv2d(16, 16, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), bias=False)
  (maxpooling2): MaxPool2d(kernel_size=(4, 4), stride=(4, 4), padding=0, dilation=1, ceil_mode=False)
  (dropout): Dropout2d(p=0.5, inplace=False)
  (fc1): Linear(in_features=1024, out_features=16, bias=True)
  (fc2): Linear(in_features=16, out_features=2, bias=True)
)
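For a picture rather than text, one option is torchviz, which traces a forward pass and renders the autograd graph with Graphviz. This is a minimal sketch, assuming torchviz and Graphviz are installed; the output file name meso_inception4 is arbitrary.

import torch
from torchviz import make_dot   # pip install torchviz; also needs the Graphviz binaries

model = MesoInception4()
x = torch.randn(1, 3, 256, 256)                 # dummy input, same shape as used above
y = model(x)                                     # forward pass builds the autograd graph
dot = make_dot(y, params=dict(model.named_parameters()))
dot.render("meso_inception4", format="png")      # writes meso_inception4.png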
