Calculating the Parameter Count of Convolution Operations

1. Parameter count for a 1-D convolution:

Input:
from tensorflow.keras import Sequential, layers

model = Sequential()
model.add(layers.Conv1D(64, 15, strides=2, input_shape=(178, 1), use_bias=False))
model.add(layers.ReLU())
model.add(layers.Conv1D(64, 3))
model.add(layers.Conv1D(64, 3, strides=2))
model.add(layers.ReLU())
model.add(layers.Conv1D(64, 3))
model.add(layers.Conv1D(64, 3, strides=2))  # [None, 54, 64]
model.add(layers.BatchNormalization())
model.add(layers.LSTM(64, dropout=0.5, return_sequences=True))
model.add(layers.LSTM(64, dropout=0.5, return_sequences=True))
model.add(layers.LSTM(32))
model.add(layers.Dense(5, activation="softmax"))
model.summary()
_________________________________________________________________
Output:
Model: "sequential"
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d (Conv1D)              (None, 82, 64)            960       
_________________________________________________________________
re_lu (ReLU)                 (None, 82, 64)            0         
_________________________________________________________________
conv1d_1 (Conv1D)            (None, 80, 64)            12352     
_________________________________________________________________
conv1d_2 (Conv1D)            (None, 39, 64)            12352     
_________________________________________________________________
re_lu_1 (ReLU)               (None, 39, 64)            0         
_________________________________________________________________
conv1d_3 (Conv1D)            (None, 37, 64)            12352     
_________________________________________________________________
conv1d_4 (Conv1D)            (None, 18, 64)            12352     
_________________________________________________________________
batch_normalization (BatchNo (None, 18, 64)            256       
_________________________________________________________________
lstm (LSTM)                  (None, 18, 64)            33024     
_________________________________________________________________
lstm_1 (LSTM)                (None, 18, 64)            33024     
_________________________________________________________________
lstm_2 (LSTM)                (None, 32)                12416     
_________________________________________________________________
dense (Dense)                (None, 5)                 165       
=================================================================
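Before the parameter counts, the output lengths in this summary are also worth checking: with Keras's default 'valid' padding, a Conv1D output length is floor((L - k)/s) + 1 for input length L, kernel size k, and stride s. A small sketch (the helper name is my own):

```python
def conv1d_out_len(length, kernel_size, strides=1):
    # 'valid' padding (no padding): floor((length - kernel_size) / strides) + 1
    return (length - kernel_size) // strides + 1

print(conv1d_out_len(178, 15, strides=2))  # 82, matches conv1d
print(conv1d_out_len(82, 3))               # 80, matches conv1d_1
print(conv1d_out_len(80, 3, strides=2))    # 39, matches conv1d_2
```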

Calculating the 1-D convolution parameters:
① First layer: model.add(layers.Conv1D(64, 15, strides=2, input_shape=(178, 1), use_bias=False))
Since use_bias=False, there is no bias term b. The formula is out_channels * (kernel_size * in_channels + bias), i.e. 64 * (15 * 1 + 0) = 960.
② Second layer: model.add(layers.Conv1D(64, 3))
Here the bias is on by default (one per output channel) and the input now has 64 channels, so the count is 64 * (3 * 64 + 1) = 12352.
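The formula above can be wrapped in a small helper to double-check the summary (the function name is my own, not part of Keras):

```python
def conv1d_params(out_channels, kernel_size, in_channels, use_bias=True):
    # out_channels * (kernel_size * in_channels + bias)
    return out_channels * (kernel_size * in_channels + (1 if use_bias else 0))

print(conv1d_params(64, 15, 1, use_bias=False))  # 960, matches conv1d
print(conv1d_params(64, 3, 64))                  # 12352, matches conv1d_1
```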

2. Parameter count for a 2-D convolution:

    Input (partial); the kernel size is 3×3:
    self.h1 = cnn_cell(32, self.inputs)
    self.h2 = cnn_cell(64, self.h1)
    self.h3 = cnn_cell(128, self.h2)
    self.h4 = cnn_cell(128, self.h3, pool=False)
    the_inputs (InputLayer)         (None, None, 200, 1) 0                                            
    
    Output (partial):
	conv2d_1 (Conv2D)               (None, None, 200, 32 320         the_inputs[0][0]                 
	__________________________________________________________________________________________________
	batch_normalization_1 (BatchNor (None, None, 200, 32 128         conv2d_1[0][0]                   
	__________________________________________________________________________________________________
	conv2d_2 (Conv2D)               (None, None, 200, 32 9248        batch_normalization_1[0][0]      
	__________________________________________________________________________________________________
	batch_normalization_2 (BatchNor (None, None, 200, 32 128         conv2d_2[0][0]                   
	__________________________________________________________________________________________________
	max_pooling2d_1 (MaxPooling2D)  (None, None, 100, 32 0           batch_normalization_2[0][0]      
	__________________________________________________________________________________________________
	conv2d_3 (Conv2D)               (None, None, 100, 64 18496       max_pooling2d_1[0][0]            
	__________________________________________________________________________________________________
	batch_normalization_3 (BatchNor (None, None, 100, 64 256         conv2d_3[0][0]                   
	__________________________________________________________________________________________________
	conv2d_4 (Conv2D)               (None, None, 100, 64 36928       batch_normalization_3[0][0]      
	__________________________________________________________________________________________________
	batch_normalization_4 (BatchNor (None, None, 100, 64 256         conv2d_4[0][0]                   
	__________________________________________________________________________________________________
	max_pooling2d_2 (MaxPooling2D)  (None, None, 50, 64) 0           batch_normalization_4[0][0]      
	__________________________________________________________________________________________________
	conv2d_5 (Conv2D)               (None, None, 50, 128 73856       max_pooling2d_2[0][0]            
	__________________________________________________________________________________________________
	batch_normalization_5 (BatchNor (None, None, 50, 128 512         conv2d_5[0][0]              

The 2-D convolution parameters are computed in the same way:
① First layer (conv2d_1): out_channels * (kernel_area * in_channels + bias), i.e. 32 * (3 * 3 * 1 + 1) = 320.
② First layer of the second cnn_cell (conv2d_3): the input now has 32 channels, so 64 * (3 * 3 * 32 + 1) = 18496. (Likewise, conv2d_2 above it is 32 * (3 * 3 * 32 + 1) = 9248.)
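The same check works for every Conv2D row in the summary; a sketch with a hypothetical helper:

```python
def conv2d_params(out_channels, kernel_h, kernel_w, in_channels, use_bias=True):
    # out_channels * (kernel_area * in_channels + bias)
    return out_channels * (kernel_h * kernel_w * in_channels + (1 if use_bias else 0))

print(conv2d_params(32, 3, 3, 1))    # 320, matches conv2d_1
print(conv2d_params(32, 3, 3, 32))   # 9248, matches conv2d_2
print(conv2d_params(64, 3, 3, 32))   # 18496, matches conv2d_3
print(conv2d_params(128, 3, 3, 64))  # 73856, matches conv2d_5
```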

The remaining layers are computed the same way.
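The BatchNormalization rows in both summaries can be verified with the same approach: each normalized channel carries four parameters (gamma, beta, moving mean, moving variance), so the count is 4 * channels. A sketch:

```python
def batchnorm_params(channels):
    # gamma, beta, moving_mean, moving_variance: 4 parameters per channel
    return 4 * channels

print(batchnorm_params(64))  # 256, matches batch_normalization in the 1-D model
print(batchnorm_params(32))  # 128, matches batch_normalization_1 in the 2-D model
```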
