Course 4 of the Deep Learning specialization is Convolutional Neural Networks, which spans four weeks:
Link to the assignments and answers
The link above contains the assignments and answers. Here I mainly summarize the Week 2 assignment and write down what I learned as notes.
First, a link to the Chinese documentation for the Keras framework, Keras中文文档: when something is unclear, consulting the official documentation is still the most effective approach.
Before Keras there were already many deep learning frameworks such as TensorFlow, Caffe, and Theano; Keras appeared to further simplify how beginners build neural networks. If you only have a conceptual understanding of deep learning and are not yet comfortable with TensorFlow notions such as tensors, you can use Keras to put a network together quickly, which also fits Andrew Ng's idea of prototyping your own network fast.
The layers in Keras are simple to use: just import them from keras.layers. Here is a 2D convolutional layer as an example:
from keras import layers
from keras.layers import Conv2D
# The output is the X on the left; the input is the X in the trailing parentheses. A 2D convolution with 8 filters of size 3x3.
X = Conv2D(8,kernel_size=(3,3),strides=(1,1))(X)
keras.layers.convolutional.Conv2D(filters, kernel_size, strides=(1, 1), padding='valid', data_format=None, dilation_rate=(1, 1), activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)
The meaning of each parameter is described in the official documentation.
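As a quick illustration of how padding and strides affect the output shape (the input size below is chosen arbitrarily): with padding='valid' a 3x3 window shrinks the feature map by 2 in each spatial dimension, while padding='same' pads the input so the spatial size is preserved when the stride is 1.
from keras.layers import Input, Conv2D
X_in = Input(shape=(64, 64, 3))
X_valid = Conv2D(8, kernel_size=(3, 3), strides=(1, 1), padding='valid')(X_in)  # output 62x62x8
X_same = Conv2D(8, kernel_size=(3, 3), strides=(1, 1), padding='same')(X_in)    # output 64x64x8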
The keras.utils.vis_utils module provides a function for plotting Keras models (it relies on graphviz).
Two dependency packages have to be installed first, in this order: graphviz, then pydot-ng.
graphviz:
sudo apt-get install graphviz
If apt reports unmet dependencies, fix those first and then install graphviz again:
sudo apt-get -f install
sudo apt-get install graphviz
Then pydot-ng:
sudo pip3 install pydot-ng
Once both are installed you can plot your own network model from code. Here is the happyModel from the first assignment as an example:
### keras visualization
from keras.utils import plot_model
plot_model(happyModel, to_file='happymodel.png', show_shapes = True)
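If graphviz is not available, Keras models also have a summary() method that prints a plain-text listing of the layers, output shapes, and parameter counts, which is often enough for a quick check:
happyModel.summary()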
Residual networks were proposed in Kaiming He's paper Deep Residual Learning for Image Recognition.
They are mainly intended to solve the problem that, as a network gets deeper, gradients vanish and learning slows down.
There are many explanations of residual networks online. Intuitively, my impression is that they feed the data from earlier layers forward to later layers, so the intermediate layers can effectively be "masked out", which lets a very deep network behave like a simplified one. (Just my own take, to be refined as I keep learning.)
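Written as a formula, a residual block computes y = F(x) + x: if the weights of the residual branch F drift toward zero, the block reduces to the identity map, so stacking such blocks should not make the network harder to optimize. A minimal sketch of this skip connection in Keras (layer sizes chosen arbitrarily for illustration):
from keras.layers import Input, Dense, Activation, add
x = Input(shape=(64,))
f_x = Dense(64, activation='relu')(x)  # residual branch F(x)
f_x = Dense(64)(f_x)
y = add([f_x, x])                      # y = F(x) + x, the skip connection
y = Activation('relu')(y)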
Below are the main functions from the ResNet assignment. They all use Keras: write out the required layers and wire them together.
# Imports used by the code below (Keras 2.x)
from keras import layers
from keras.layers import Input, Conv2D, BatchNormalization, Activation
from keras.layers import ZeroPadding2D, MaxPooling2D, AveragePooling2D, Flatten, Dense
from keras.models import Model
from keras.initializers import glorot_uniform
################# block identity #################
def identity_block(X, f, filters, stage, block):
"""
Implementation of the identity block as defined in Figure 4
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
Returns:
X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
"""
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# retrieve filters
F1, F2, F3 = filters
# Save the input value. You'll need this later to add back to the main path.
X_shortcut = X
# First component of main path
X = Conv2D(filters = F1, kernel_size=(1,1), strides=(1,1), padding='valid',name=conv_name_base + '2a',
kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
X = Activation('relu')(X)
# Second component of main path
X = Conv2D(filters= F2, kernel_size=(f,f), strides=(1,1), padding=('same'), name=conv_name_base + '2b',
kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3,name=bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path
X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding=('valid'), name=conv_name_base + '2c',
kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)
# Final step: Add shortcut value to main path, and pass it through a RELU activation
X = layers.add([X, X_shortcut])
X = Activation('relu')(X)
return X
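Because the identity block adds the shortcut back onto the main path, the input and output shapes must match: the spatial size is preserved (the convolutions are 1x1 or 'same'-padded with stride 1) and F3 must equal the number of input channels. A quick sanity check on a dummy tensor (the shape, stage, and block name are purely illustrative):
X_test = Input(shape=(8, 8, 256))
X_out = identity_block(X_test, f=3, filters=[64, 64, 256], stage=9, block='z')
print(X_out.shape)  # same 8x8x256 shape as the input (plus the batch dimension)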
################# block convolutional #################
# GRADED FUNCTION: convolutional_block
def convolutional_block(X, f, filters, stage, block, s=2):
"""
Implementation of the convolutional block as defined in Figure 4
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
s -- Integer, specifying the stride to be used
Returns:
X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
"""
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value
X_shortcut = X
##### MAIN PATH #####
# First component of main path
X = Conv2D(F1, (1, 1), strides=(s, s), name=conv_name_base + '2a', padding='valid',
kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
X = Activation('relu')(X)
### START CODE HERE ###
# Second component of main path (≈3 lines)
X = Conv2D(F2, (f, f), strides=(1, 1), name=conv_name_base + '2b', padding='same',
kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path (≈2 lines)
X = Conv2D(F3, (1, 1), strides=(1, 1), name=conv_name_base + '2c', padding='valid',
kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)
##### SHORTCUT PATH #### (≈2 lines)
X_shortcut = Conv2D(F3, (1, 1), strides=(s, s), name=conv_name_base + '1', padding='valid',
kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
X_shortcut = BatchNormalization(axis=3, name=bn_name_base + '1')(X_shortcut)
# Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
X = layers.add([X, X_shortcut])
X = Activation('relu')(X)
### END CODE HERE ###
return X
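Compared with the identity block, the convolutional block has an extra Conv2D + BatchNormalization on the shortcut path. It is needed when the main path changes the tensor shape (the stride s shrinks the spatial size and F3 changes the channel count), so the shortcut must be projected to the same shape before the add. A stage therefore typically starts with one convolutional block followed by identity blocks, for example (shapes and names purely illustrative):
X_test = Input(shape=(16, 16, 256))
X = convolutional_block(X_test, f=3, filters=[128, 128, 512], stage=9, block='a', s=2)  # 16x16x256 -> 8x8x512
X = identity_block(X, f=3, filters=[128, 128, 512], stage=9, block='b')                 # shape unchanged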
################# ResNet50 #################
# ResNet50
def ResNet50(input_shape=(64,64,3), classes = 6):
"""
Implementation of the popular ResNet50 with the following architecture:
CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
-> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER
Arguments:
input_shape -- shape of the images of the dataset
classes -- integer, number of classes
Returns:
model -- a Model() instance in Keras
"""
# Define the input as a tensor with shape input_shape
X_input = Input(input_shape)
# Zero-padding
X = ZeroPadding2D((3,3))(X_input)
# Stage 1
X = Conv2D(64, (7,7), strides=(2,2), name='conv1', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3,name='bn_conv1')(X)
X = Activation('relu')(X)
X = MaxPooling2D((3,3), strides=(2,2))(X)
# Stage2
X = convolutional_block(X, f=3, filters=[64,64,256], stage=2, block='a', s=1)
X = identity_block(X, 3, [64,64,256], stage=2, block='b')
X = identity_block(X, 3, [64,64,256], stage=2, block='c')
# Stage3
X = convolutional_block(X, f=3, filters=[128,128,512], stage=3, block='a',s=2)
X = identity_block(X, f=3, filters=[128,128,512], stage=3, block='b')
X = identity_block(X, f=3, filters=[128,128,512], stage=3, block='c')
X = identity_block(X, f=3, filters=[128,128,512], stage=3, block='d')
# Stage4
X = convolutional_block(X, f=3, filters=[256, 256, 1024], block='a', stage=4, s=2)
X = identity_block(X, f=3, filters=[256, 256, 1024], block='b', stage=4)
X = identity_block(X, f=3, filters=[256, 256, 1024], block='c', stage=4)
X = identity_block(X, f=3, filters=[256, 256, 1024], block='d', stage=4)
X = identity_block(X, f=3, filters=[256, 256, 1024], block='e', stage=4)
X = identity_block(X, f=3, filters=[256, 256, 1024], block='f', stage=4)
# Stage 5 (≈3 lines)
# The convolutional block uses three sets of filters of size [512, 512, 2048], "f" is 3, "s" is 2 and the block is "a".
# The 2 identity blocks use three sets of filters of size [256, 256, 2048], "f" is 3 and the blocks are "b" and "c".
X = convolutional_block(X, f = 3, filters=[512, 512, 2048], stage=5, block='a', s = 2)
X = identity_block(X, f = 3, filters=[256, 256, 2048], stage=5, block='b')
X = identity_block(X, f = 3, filters=[256, 256, 2048], stage=5, block='c')
# Avgpool
X = AveragePooling2D(pool_size=(2,2))(X)
# output layer
X = Flatten()(X)
X = Dense(classes,activation='softmax', name='fc'+str(classes), kernel_initializer=glorot_uniform(seed=0))(X)
# create model
model = Model(inputs=X_input, outputs=X, name='ResNet50')
return model
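As in the assignment notebook, the returned model is compiled and trained like any other Keras model. The call pattern below uses random placeholder data in place of the preprocessed SIGNS arrays (X_train / Y_train); the optimizer, epochs, and batch size simply mirror the exercise and are otherwise illustrative:
import numpy as np
model = ResNet50(input_shape=(64, 64, 3), classes=6)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
X_train = np.random.rand(32, 64, 64, 3)           # placeholder images
Y_train = np.eye(6)[np.random.randint(0, 6, 32)]  # placeholder one-hot labels
model.fit(X_train, Y_train, epochs=2, batch_size=32)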