The input feature map has shape $[B, C, H, W]$, and the convolution kernel is $k \times k$ with $C_{out}$ output channels. The number of parameters is
$k \times k \times C \times C_{out} + C_{out}$
The output feature-map height (the width is analogous) is
$H_{out}=\frac{H+2 \times Padding-Kernel}{stride}+1$
As for the trailing $+1$, a quick sketch makes it obvious: with $H=7$, $Kernel=3$, $Padding=0$, $stride=1$, the kernel fits at $7-3+1=5$ positions.
FLOPS (uppercase S): floating point operations per second, a measure of hardware speed; not what is counted here.
FLOPs (lowercase s): short for floating point operations (the s marks the plural), i.e. the number of floating-point operations. It can be read as the amount of computation and is used to measure the complexity of an algorithm/model.
MACs (lowercase s): short for multiply–accumulate operations (the s again marks the plural), sometimes written MAdd. One MAC is one multiplication plus one addition ($a + b \times c$), so FLOPs is usually about 2× MACs. MACs is less commonly used than FLOPs.
For one kernel applied at one position (producing one point of the output feature map), the cost is $C \times k \times k$ multiplications and $C \times k \times k - 1$ additions, i.e. $2 \times C \times k \times k - 1$ operations. If a bias is used (added to that output point), add one more: $2 \times C \times k \times k$.
The total FLOPs of the convolution is this multiplied by the number of points in the output feature map:
$2 \times C \times k \times k \times H_{out} \times W_{out} \times C_{out}$
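As a quick sanity check, here is a minimal sketch (plain Python; the function name and the example numbers are purely illustrative) that evaluates the two formulas above:
def conv_params_flops(C_in, C_out, k, H_out, W_out):
    # parameters: k*k*C_in weights per output channel, plus one bias per output channel
    params = k * k * C_in * C_out + C_out
    # FLOPs: each output point costs C_in*k*k multiplies and C_in*k*k adds (bias included)
    flops = 2 * C_in * k * k * H_out * W_out * C_out
    return params, flops

# e.g. a 3x3 conv from 64 to 128 channels producing a 56x56 output map
print(conv_params_flops(64, 128, 3, 56, 56))  # (73856, 462422016)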
The BN layer is used to address internal covariate shift: (i) without it, each layer has to keep adapting to the shifting distribution of its inputs, which slows down learning; (ii) keeping the inputs of the activation functions in a stable distribution helps them stay out of the gradient-saturation region, so the network does not converge slowly.
The statistics are updated once per batch.
The learnable parameters are $\gamma$ and $\beta$, so the parameter count is
$2 \times C_{out}$
The output feature-map size stays unchanged. The FLOPs (one scale and one shift per element) are
$2 \times C \times H \times W$
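The same kind of sketch for a BN layer (again, names and numbers are only illustrative):
def bn_params_flops(C, H, W):
    # learnable parameters: one gamma and one beta per channel
    params = 2 * C
    # at inference BN is a per-element scale and shift: 2 operations per element
    flops = 2 * C * H * W
    return params, flops

print(bn_params_flops(64, 56, 56))  # (128, 401408)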
A pooling layer has no learnable parameters, so its parameter count is 0.
$C_{in}=C_{out}$, and the output $H$ and $W$ are computed the same way as for a convolution layer:
$H_{out}=\frac{H+2 \times Padding-Kernel}{stride}+1$
Take average pooling as an example: one kernel application involves $k \times k - 1$ additions and 1 division, i.e. $k \times k$ operations in total. Total FLOPs:
$k \times k \times C_{out} \times H_{out} \times W_{out}$
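A matching sketch for average pooling (non-overlapping $k \times k$ windows assumed; the numbers are illustrative):
def avgpool_flops(k, C_out, H_out, W_out):
    # per output point: k*k-1 additions plus 1 division = k*k operations; no parameters
    return k * k * C_out * H_out * W_out

print(avgpool_flops(2, 64, 28, 28))  # 200704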
An activation layer (e.g. ReLU) has no learnable parameters either.
The feature-map size stays unchanged.
Every point has to be evaluated once, so the FLOPs are
$C \times H \times W$
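And the corresponding one-liner for a ReLU-style activation (sketch only):
def relu_flops(C, H, W):
    # one comparison against zero per element; no parameters
    return C * H * W

print(relu_flops(64, 56, 56))  # 200704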
pytorch-ConvTranspose2d
weight: $k \times k \times C_{in} \times C_{out}$
bias: $C_{out}$
This is the inverse operation of an ordinary convolution layer; in PyTorch it takes an extra output_padding argument, which can be used to make the output feature map come out at exactly the desired size.
$H_{out} = (H-1) \times Stride - 2 \times Padding + Kernel + output\_padding$
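A quick check of the output-size formula and of the parameter count using PyTorch itself (the channel and size numbers are arbitrary):
import torch
import torch.nn as nn

# (16-1)*2 - 2*1 + 3 + 1 = 32: stride=2 with output_padding=1 exactly doubles H and W
deconv = nn.ConvTranspose2d(8, 4, kernel_size=3, stride=2, padding=1, output_padding=1)
print(deconv(torch.randn(1, 8, 16, 16)).shape)      # torch.Size([1, 4, 32, 32])
print(sum(p.numel() for p in deconv.parameters()))  # 3*3*8*4 + 4 = 292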
Let's start with an example network:
import torch.nn as nn
def conv_block(in_dim,out_dim,act_fn):
model = nn.Sequential(
nn.Conv2d(in_dim,out_dim, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(out_dim),
act_fn,
)
return model
def conv_trans_block(in_dim,out_dim,act_fn):
model = nn.Sequential(
nn.ConvTranspose2d(in_dim,out_dim, kernel_size=3, stride=2, padding=1,output_padding=1),
nn.BatchNorm2d(out_dim),
act_fn,
)
return model
def maxpool():
pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
return pool
def conv_block_3(in_dim,out_dim,act_fn):
model = nn.Sequential(
conv_block(in_dim,out_dim,act_fn),
conv_block(out_dim,out_dim,act_fn),
nn.Conv2d(out_dim,out_dim, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(out_dim),
)
return model
class Conv_residual_conv(nn.Module):
def __init__(self,in_dim,out_dim,act_fn):
super(Conv_residual_conv,self).__init__()
self.in_dim = in_dim
self.out_dim = out_dim
act_fn = act_fn
self.conv_1 = conv_block(self.in_dim,self.out_dim,act_fn)
self.conv_2 = conv_block_3(self.out_dim,self.out_dim,act_fn)
self.conv_3 = conv_block(self.out_dim,self.out_dim,act_fn)
def forward(self,input):
conv_1 = self.conv_1(input)
conv_2 = self.conv_2(conv_1)
res = conv_1 + conv_2
conv_3 = self.conv_3(res)
return conv_3
class FusionNet(nn.Module):
def __init__(self, input_nc=6, output_nc=2, ngf=32):
super(FusionNet,self).__init__()
self.in_dim = input_nc
self.out_dim = ngf
self.final_out_dim = output_nc
act_fn = nn.LeakyReLU(0.2, inplace=True)
act_fn_2 = nn.ReLU()
print("\n------Initiating FusionNet------\n")
# encoder
self.down_1 = Conv_residual_conv(self.in_dim, self.out_dim, act_fn)
self.pool_1 = maxpool()
self.down_2 = Conv_residual_conv(self.out_dim, self.out_dim * 2, act_fn)
self.pool_2 = maxpool()
self.down_3 = Conv_residual_conv(self.out_dim * 2, self.out_dim * 4, act_fn)
self.pool_3 = maxpool()
self.down_4 = Conv_residual_conv(self.out_dim * 4, self.out_dim * 8, act_fn)
self.pool_4 = maxpool()
# bridge
self.bridge = Conv_residual_conv(self.out_dim * 8, self.out_dim * 16, act_fn)
# decoder
self.deconv_1 = conv_trans_block(self.out_dim * 16, self.out_dim * 8, act_fn_2)
self.up_1 = Conv_residual_conv(self.out_dim * 8, self.out_dim * 8, act_fn_2)
self.deconv_2 = conv_trans_block(self.out_dim * 8, self.out_dim * 4, act_fn_2)
self.up_2 = Conv_residual_conv(self.out_dim * 4, self.out_dim * 4, act_fn_2)
self.deconv_3 = conv_trans_block(self.out_dim * 4, self.out_dim * 2, act_fn_2)
self.up_3 = Conv_residual_conv(self.out_dim * 2, self.out_dim * 2, act_fn_2)
self.deconv_4 = conv_trans_block(self.out_dim * 2, self.out_dim, act_fn_2)
self.up_4 = Conv_residual_conv(self.out_dim, self.out_dim, act_fn_2)
# output
self.out = nn.Conv2d(self.out_dim,self.final_out_dim, kernel_size=3, stride=1, padding=1)
# self.out_2 = nn.Tanh()
# self.out_2 = nn.Sigmoid()
# initialization
for m in self.modules():
if isinstance(m, nn.Conv2d):
m.weight.data.normal_(0.0, 0.02)
m.bias.data.fill_(0)
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.normal_(1.0, 0.02)
m.bias.data.fill_(0)
def forward(self,input):
down_1 = self.down_1(input)
pool_1 = self.pool_1(down_1)
down_2 = self.down_2(pool_1)
pool_2 = self.pool_2(down_2)
down_3 = self.down_3(pool_2)
pool_3 = self.pool_3(down_3)
down_4 = self.down_4(pool_3)
pool_4 = self.pool_4(down_4)
bridge = self.bridge(pool_4)
deconv_1 = self.deconv_1(bridge)
skip_1 = (deconv_1 + down_4)/2
up_1 = self.up_1(skip_1)
deconv_2 = self.deconv_2(up_1)
skip_2 = (deconv_2 + down_3)/2
up_2 = self.up_2(skip_2)
deconv_3 = self.deconv_3(up_2)
skip_3 = (deconv_3 + down_2)/2
up_3 = self.up_3(skip_3)
deconv_4 = self.deconv_4(up_3)
skip_4 = (deconv_4 + down_1)/2
up_4 = self.up_4(skip_4)
out = self.out(up_4)
# out = self.out_2(out)
#out = torch.clamp(out, min=-1, max=1)
return out
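Before counting anything, a dummy forward pass (assuming the FusionNet definition above) confirms that the encoder and decoder sizes line up:
import torch

net = FusionNet(input_nc=6, output_nc=2, ngf=32)
x = torch.randn(1, 6, 256, 256)   # [B, C, H, W]
print(net(x).shape)               # torch.Size([1, 2, 256, 256])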
Count the parameters:
def get_parameter_number(net):
total_num = sum(p.numel() for p in net.parameters())
trainable_num = sum(p.numel() for p in net.parameters() if p.requires_grad)
return {'Total': total_num, 'Trainable': trainable_num}
net=FusionNet(input_nc=1, output_nc=1).cuda()
print(net)
para_num_dict= get_parameter_number(net)
print(para_num_dict['Total'])
Now compute the parameter count of the down_1 layer (input_nc=1, ngf=32). It stacks five Conv2d(3×3)+BatchNorm2d units, each contributing $k \times k \times C_{in} \times C_{out} + C_{out}$ (conv) plus $2 \times C_{out}$ (BN), so the parameter count is
$1 \times 32 \times 3 \times 3 + 32 + 32 \times 2 \\ + 32 \times 32 \times 3 \times 3 + 32 + 32 \times 2 \\ + 32 \times 32 \times 3 \times 3 + 32 + 32 \times 2 \\ + 32 \times 32 \times 3 \times 3 + 32 + 32 \times 2 \\ + 32 \times 32 \times 3 \times 3 + 32 + 32 \times 2 = 37632$
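The same number can be read off the model directly, reusing the net and get_parameter_number defined above:
print(get_parameter_number(net.down_1))  # {'Total': 37632, 'Trainable': 37632}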
ptflops-github
Requirements: Pytorch >= 1.1, torchvision >= 0.3
pip install ptflops
# or download the tarball from https://pypi.org/project/ptflops/#files and install it
pip install ptflops-0.6.9.tar.gz
from ptflops import get_model_complexity_info

net = FusionNet(input_nc=6, output_nc=2)
model_name = 'FusionNet'
flops, params = get_model_complexity_info(net, (6, 256, 256), as_strings=True, print_per_layer_stat=True)
print("%s |FLOPs: %s |params: %s" % (model_name, flops, params))
torchstat
Install:
pip install torchstat
Usage:
from torchstat import stat
import torchvision.models as models
model = models.resnet18()
stat(model, (3, 224, 224))
torchsummary
pip install torchsummary
from torchsummary import summary
unet = UNet(1, 1)  # UNet here is a model defined elsewhere; any nn.Module works the same way
summary(unet.cuda(), input_size=(1, 256, 256), batch_size=-1)