Counting and printing the number of parameters in a PyTorch model

Sometimes we need to count the parameters of a model we built ourselves, in order to compare it against a baseline network.

Counting neural network model parameters

Method 1:

def get_parameter_number(net):
    # numel() gives the element count of each parameter tensor;
    # summing over net.parameters() yields the total parameter count.
    total_num = sum(p.numel() for p in net.parameters())
    trainable_num = sum(p.numel() for p in net.parameters() if p.requires_grad)
    return {'Total': total_num, 'Trainable': trainable_num}
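As a quick sanity check, the function can be exercised on a toy layer (a hypothetical `nn.Linear`, not a model from this post): a `Linear(10, 5)` layer has 10 × 5 weights plus 5 biases, i.e. 55 parameters, and freezing a tensor removes it from the trainable count.

```python
import torch.nn as nn

def get_parameter_number(net):
    total_num = sum(p.numel() for p in net.parameters())
    trainable_num = sum(p.numel() for p in net.parameters() if p.requires_grad)
    return {'Total': total_num, 'Trainable': trainable_num}

# Linear(10, 5): 10*5 weights + 5 biases = 55 parameters.
layer = nn.Linear(10, 5)
print(get_parameter_number(layer))  # {'Total': 55, 'Trainable': 55}

# Freezing the weight tensor leaves only the 5 bias terms trainable.
layer.weight.requires_grad = False
print(get_parameter_number(layer)['Trainable'])  # 5
```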

Printing the results:

    print("VPTR_Enc total parameter count: {}".format(get_parameter_number(VPTR_Enc)["Total"]))
    print("VPTR_Dec total parameter count: {}".format(get_parameter_number(VPTR_Dec)["Total"]))
    print("VPTR_Disc total parameter count: {}".format(get_parameter_number(VPTR_Disc)["Total"]))

To report the result in MB rather than as a raw count, note that a parameter count alone is not a size: multiply by the bytes per parameter (4 for float32) before dividing by 1024 twice, and change the print format to:

    print("VPTR_Enc total size: {:.3f}MB".format(get_parameter_number(VPTR_Enc)["Total"] * 4 / 1024 / 1024))
    print("VPTR_Dec total size: {:.3f}MB".format(get_parameter_number(VPTR_Dec)["Total"] * 4 / 1024 / 1024))
    print("VPTR_Disc total size: {:.3f}MB".format(get_parameter_number(VPTR_Disc)["Total"] * 4 / 1024 / 1024))
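The count-to-size conversion can be factored into a small helper. This is a sketch under the assumption that every parameter is float32 (4 bytes); `param_size_mb` is a hypothetical name, not part of any library:

```python
def param_size_mb(total_num, bytes_per_param=4):
    """Approximate storage size of the parameters in MB.

    Assumes float32 parameters (4 bytes each); pass
    bytes_per_param=2 for float16/bfloat16 models.
    """
    return total_num * bytes_per_param / 1024 / 1024

# e.g. 2,410 float32 parameters occupy roughly 0.009 MB.
print("{:.3f}MB".format(param_size_mb(2410)))  # 0.009MB
```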

Method 2:

import torch.nn as nn
from torchsummary import summary
# Define the network structure
net = nn.Sequential(
            nn.Conv2d(1,8,kernel_size=7),
            nn.MaxPool2d(2,stride=2),
            nn.ReLU(True),
            nn.Conv2d(8,10,kernel_size=5),
            nn.MaxPool2d(2,stride=2),
            nn.ReLU(True)
)
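Before running `summary`, the output shapes it will report can be predicted by hand: a valid (no-padding, stride-1) convolution shrinks each spatial dimension by k−1, and `MaxPool2d(2, stride=2)` halves it with floor division. A minimal sketch of that arithmetic for the 28×28 input above:

```python
def conv_out(size, k):
    # Valid convolution, stride 1, no padding: output = size - k + 1
    return size - k + 1

def pool_out(size):
    # MaxPool2d(2, stride=2): output = floor(size / 2)
    return size // 2

s = 28
s = pool_out(conv_out(s, 7))  # Conv2d(k=7) then pool: 28 -> 22 -> 11
print(s)  # 11
s = pool_out(conv_out(s, 5))  # Conv2d(k=5) then pool: 11 -> 7 -> 3
print(s)  # 3
```

These match the 22×22, 11×11, 7×7, and 3×3 shapes in the summary table below.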
# Print per-layer parameter information
summary(net,(1,28,28),batch_size=1,device="cpu")
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1            [1, 8, 22, 22]             400
         MaxPool2d-2            [1, 8, 11, 11]               0
              ReLU-3            [1, 8, 11, 11]               0
            Conv2d-4             [1, 10, 7, 7]           2,010
         MaxPool2d-5             [1, 10, 3, 3]               0
              ReLU-6             [1, 10, 3, 3]               0
================================================================
Total params: 2,410
Trainable params: 2,410
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.05
Params size (MB): 0.01
Estimated Total Size (MB): 0.06
----------------------------------------------------------------
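The Param # column can be verified by hand: a `Conv2d` layer has out_channels × (in_channels × k × k + 1) parameters, where the +1 accounts for the per-channel bias. A short sketch of that check for the network above:

```python
def conv2d_params(in_ch, out_ch, k):
    # Each output channel has an in_ch*k*k kernel plus one bias term.
    return out_ch * (in_ch * k * k + 1)

print(conv2d_params(1, 8, 7))   # 400  -> Conv2d-1
print(conv2d_params(8, 10, 5))  # 2010 -> Conv2d-4

# Pooling and ReLU layers have no parameters, so the total is:
print(conv2d_params(1, 8, 7) + conv2d_params(8, 10, 5))  # 2410
```

This agrees with the "Total params: 2,410" line reported by torchsummary.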

Method 2 has not been independently verified.
