Calculating Convolutional Neural Network Parameter Counts (Model Parameters, Memory Usage, GPU Memory Usage)

0. Two ways to count the parameters of a convolutional neural network

Use torchstat, or use torchsummary.

pip install torchsummary
pip install torchstat

The full implementation is below (feel free to try other convolutional models).

from torchstat import stat
import torchvision.models as models
from torchsummary import summary
# from model import vgg11, vgg13, vgg, vgg19
from torch import nn


class Vgg16_net(nn.Module):
    def __init__(self):
        super(Vgg16_net, self).__init__()

        self.layer1 = nn.Sequential(
            nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1),  # (32-3+2)/1+1=32   32*32*64
            nn.BatchNorm2d(64),
            # inplace=True chooses whether to overwrite the input in place,
            # i.e. whether the computed values replace the previous ones
            nn.ReLU(inplace=True),
            # the tensor passed down from the Conv2d above is modified directly,
            # which saves memory because no extra copy needs to be stored

            nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1, padding=1),
            # (32-3+2)/1+1=32    32*32*64
            # Batch Normalization pulls the activations back towards zero mean and unit variance,
            # which keeps the data distribution consistent and helps avoid vanishing gradients.
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),

            nn.MaxPool2d(kernel_size=2, stride=2)  # (32-2)/2+1=16         16*16*64
        )

        self.layer2 = nn.Sequential(
            nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=1, padding=1),
            # (16-3+2)/1+1=16  16*16*128
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),

            nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, stride=1, padding=1),
            # (16-3+2)/1+1=16   16*16*128
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),

            nn.MaxPool2d(2, 2)  # (16-2)/2+1=8     8*8*128
        )

        self.layer3 = nn.Sequential(
            nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, stride=1, padding=1),  # (8-3+2)/1+1=8   8*8*256
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),

            nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, stride=1, padding=1),  # (8-3+2)/1+1=8   8*8*256
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),

            nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, stride=1, padding=1),  # (8-3+2)/1+1=8   8*8*256
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),

            nn.MaxPool2d(2, 2)  # (8-2)/2+1=4      4*4*256
        )

        self.layer4 = nn.Sequential(
            nn.Conv2d(in_channels=256, out_channels=512, kernel_size=3, stride=1, padding=1),
            # (4-3+2)/1+1=4    4*4*512
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),

            nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1),
            # (4-3+2)/1+1=4    4*4*512
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),

            nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1),
            # (4-3+2)/1+1=4    4*4*512
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),

            nn.MaxPool2d(2, 2)  # (4-2)/2+1=2     2*2*512
        )

        self.layer5 = nn.Sequential(
            nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1),
            # (2-3+2)/1+1=2    2*2*512
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),

            nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1),
            # (2-3+2)/1+1=2     2*2*512
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),

            nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1),
            # (2-3+2)/1+1=2      2*2*512
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),

            nn.MaxPool2d(2, 2)  # (2-2)/2+1=1      1*1*512
        )

        self.conv = nn.Sequential(
            self.layer1,
            self.layer2,
            self.layer3,
            self.layer4,
            self.layer5
        )

        self.fc = nn.Sequential(
            # y = x A^T + b, where x is the input, A the weight matrix, b the bias and y the output
            # nn.Linear(in_features, out_features, bias)
            # in_features: number of columns of the input x; input shape [batch_size, in_features]
            # out_features: number of columns of the output y; output shape [batch_size, out_features]
            # bias: bool, defaults to True
            # a linear layer does not change the number of rows of x, only the number of columns
            nn.Linear(512, 512),
            nn.ReLU(inplace=True),
            nn.Dropout(0.9),

            nn.Linear(512, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(0.9),

            nn.Linear(256, 10)
        )

    def forward(self, x):
        x = self.conv(x)
        # -1 lets view infer the missing dimension: we know the output should have 512 columns,
        # so the number of rows is worked out automatically

        # x.size(0) is the batch size, so an equivalent form is:
        # x = x.view(x.size(0), -1)
        x = x.view(-1, 512)
        x = self.fc(x)
        return x


# Method 1: count the parameters with torchstat
net1 = Vgg16_net()
# the forward pass hard-codes x.view(-1, 512), so the model expects 32x32 inputs (e.g. CIFAR-10)
stat(net1, (3, 32, 32))

# Method 2: count the parameters with torchsummary
# summary(net1, input_size=[(3, 32, 32)], batch_size=1, device="cpu")

1. Model parameters

The parameter count of a convolutional layer does not depend on the feature-map size; it depends only on the kernel size, the number of input and output channels, the bias, and any BN parameters.

1. Parameters of each convolutional layer (the +1 accounts for the bias):
Co x (Kw x Kh x Cin + 1)

2. Parameters of a fully connected layer:
(D1 + 1) x D2

3. Parameters of a BN layer:
A BN layer learns two parameters per channel, γ and β, so its parameter count is 2 x Co.

print('# total parameters:', sum(param.numel() for param in net1.parameters()))
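
As a quick sanity check, these formulas can be compared against what PyTorch reports for individual layers. This is a minimal sketch using the layer sizes of layer1 and the first fully connected layer of the model above:

from torch import nn

conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1)
bn = nn.BatchNorm2d(64)
fc = nn.Linear(512, 512)

# conv: Co x (Kw x Kh x Cin + 1) = 64 x (3*3*3 + 1) = 1792
print(sum(p.numel() for p in conv.parameters()))  # 1792
# BN:   2 x Co = 2 x 64 = 128 (gamma and beta; running mean/var are buffers, not parameters)
print(sum(p.numel() for p in bn.parameters()))    # 128
# FC:   (D1 + 1) x D2 = (512 + 1) x 512 = 262656
print(sum(p.numel() for p in fc.parameters()))    # 262656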

2. Memory usage

1. Memory occupied by the parameters
(a 32-bit float occupies 4 bytes)
Memory(MB) = params x 4 / 1024 / 1024
For example: VGG-16 has about 138 million parameters; since one million float32 parameters take roughly 3.8 MB, this is about 138 x 3.8 ≈ 526 MB.
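
The same calculation for an arbitrary model, as a small sketch (param_memory_mb is just an illustrative helper name):

def param_memory_mb(model):
    # each float32 parameter occupies 4 bytes; /1024/1024 converts bytes to MB
    n_params = sum(p.numel() for p in model.parameters())
    return n_params * 4 / 1024 / 1024

print(param_memory_mb(net1))  # roughly 58 MB for the Vgg16_net above (~15 million parameters)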

2. Memory occupied per image
The memory taken by all feature maps produced while processing one image is the sum of Fw x Fh x C over every layer, multiplied by 4 bytes and then divided by 1024 twice to get MB.
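
One way to measure this directly is to register forward hooks that add up the size of every intermediate output for a single forward pass. This is a sketch assuming float32 activations (activation_memory_mb is a hypothetical helper name); note that inplace ReLU outputs share storage with their inputs, so the number is an upper bound:

import torch

def activation_memory_mb(model, input_size=(1, 3, 32, 32)):
    # sums Fw x Fh x C over every leaf module's output for one forward pass
    total = 0
    hooks = []

    def count(module, inputs, output):
        nonlocal total
        if torch.is_tensor(output):
            total += output.numel()

    for m in model.modules():
        if len(list(m.children())) == 0:  # hook leaf modules only, to avoid double counting
            hooks.append(m.register_forward_hook(count))

    model.eval()
    with torch.no_grad():
        model(torch.zeros(input_size))
    for h in hooks:
        h.remove()
    return total * 4 / 1024 / 1024  # float32: 4 bytes per value

print(activation_memory_mb(net1))  # per-image feature-map memory of Vgg16_net, in MB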

3. GPU memory used during training

For example:
with 5 million parameters, the parameter memory is about 19 MB;
with about 1 million feature-map values per image, the per-image activation memory is about 4 MB;
batch size = 128.

Then:
GPU memory for the model: 19 x 2 = 38 MB (1x for the parameters, 1x for the Adam optimizer state)
GPU memory for the activations: 128 x 4 x 2 = 1024 MB (2x for forward and backward)
Total GPU memory required: 38 + 1024 MB, i.e. more than 1 GB
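
The same back-of-the-envelope arithmetic written out as code, as a sketch that just reproduces the numbers above (including the 2x rules of thumb for optimizer state and for forward/backward activations):

params = 5_000_000          # 5 million parameters
acts_per_image = 1_000_000  # ~1 million feature-map values per image
batch_size = 128

param_mb = params * 4 / 1024 / 1024        # ≈ 19 MB
act_mb = acts_per_image * 4 / 1024 / 1024  # ≈ 4 MB per image

model_mem = param_mb * 2                   # 1x parameters + 1x Adam optimizer state
activation_mem = batch_size * act_mb * 2   # 2x for forward and backward
print(model_mem, activation_mem, model_mem + activation_mem)  # ≈ 38 MB + ≈ 977 MB ≈ 1 GB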


 
