[Deep Learning] VGGNet: Principles and Implementation


        VGGNet was proposed jointly by the Visual Geometry Group at the University of Oxford and researchers at Google DeepMind. It took first place in the ILSVRC-2014 localization task and second place in the classification task. Its key contribution was showing that stacking very small (3×3) convolution kernels while increasing network depth can significantly improve model performance, and that VGGNet generalizes well to other datasets. To this day, VGGNet is still frequently used to extract image features.

    VGGNet explored the relationship between the depth of a CNN and its performance: by repeatedly stacking 3×3 convolution kernels and 2×2 max-pooling layers, it successfully built CNNs 16 to 19 layers deep.

I. VGGNet Architecture

    VGGNet has six configurations (A, A-LRN, B, C, D, E). From A to E the networks get progressively deeper, yet the parameter count does not grow much (Figure 6-7). The reason is that most parameters are consumed by the last three fully connected layers; the convolutional layers, though numerous, account for relatively few parameters. The convolutional layers are, however, the most time-consuming part of training because of their heavy computation.
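
The claim about where the parameters live can be checked with quick arithmetic. The following plain-Python sketch counts the weights of configuration D (VGG-16), assuming a 224×224×3 input so that the first fully connected layer sees a 7×7×512 feature map:

```python
# Parameter count for VGG-16 (configuration D), assuming a 224x224x3 input.
# All conv kernels are 3x3; each tuple is (in_channels, out_channels).
conv_layers = [(3, 64), (64, 64),                    # stage 1
               (64, 128), (128, 128),                # stage 2
               (128, 256), (256, 256), (256, 256),   # stage 3
               (256, 512), (512, 512), (512, 512),   # stage 4
               (512, 512), (512, 512), (512, 512)]   # stage 5
conv_params = sum(3 * 3 * c_in * c_out + c_out       # weights + biases
                  for c_in, c_out in conv_layers)

# After five 2x2 stride-2 pools, 224 -> 7, so fc6 sees 7*7*512 inputs.
fc_layers = [(7 * 7 * 512, 4096), (4096, 4096), (4096, 1000)]
fc_params = sum(n_in * n_out + n_out for n_in, n_out in fc_layers)

print(conv_params)               # 14714688  (~14.7M)
print(fc_params)                 # 123642856 (~123.6M)
print(conv_params + fc_params)   # 138357544 (~138.4M)
```

Roughly 89% of the ~138M parameters sit in the three fully connected layers, which is why deepening the convolutional part barely moves the total.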

    Among them, D and E are the commonly cited VGGNet-16 and VGGNet-19. Configuration C is interesting: compared with B it adds several 1×1 convolutional layers. A 1×1 convolution performs a linear transformation across channels; here the input and output channel counts are equal, so no dimensionality reduction takes place.
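
To see concretely that a 1×1 convolution is a per-pixel linear transformation over channels, here is a small NumPy sketch (the array shapes and values are made up for illustration):

```python
import numpy as np

# A 1x1 convolution applies one linear map to the channel vector at every
# spatial position, which is exactly a matrix multiply over the channel axis.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4, 8))       # H x W x C_in feature map
w = rng.standard_normal((8, 8))          # 1x1 kernel: C_in x C_out (equal here)

y_conv = np.einsum('hwc,cd->hwd', x, w)  # the "1x1 convolution"
y_mat = (x.reshape(-1, 8) @ w).reshape(4, 4, 8)  # plain per-pixel matmul

print(np.allclose(y_conv, y_mat))        # True
```

Because C_in equals C_out here, the map mixes channels without changing dimensionality, matching configuration C.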

[figure: VGGNet configurations A-E and their parameter counts]

VGGNet performance: [figure not reproduced]

Characteristics of VGGNet

1. VGGNet has five convolutional stages, each containing two or three convolutional layers, and each stage ends with a max-pooling layer that shrinks the feature map.

2. Within a stage, every convolutional layer has the same number of kernels; later stages have more, increasing as 64, 128, 256, 512, 512.

3. Deeper networks perform better (Figure 6-9).

4. LRN layers help little (the authors' conclusion).

5. 1×1 convolutions are also effective, but not as good as 3×3 ones: a somewhat larger kernel can learn larger spatial features.


Why stack multiple 3×3 convolutional layers within one stage?

This is a very useful design. Two stacked 3×3 convolutional layers have the same effective receptive field as one 5×5 layer: each output pixel is connected to a 5×5 neighborhood of the input. Likewise, three stacked 3×3 layers match one 7×7 layer. Moreover, two 3×3 layers use fewer weights than one 5×5 layer: per input/output channel pair, the former needs 2×3×3 = 18 weights versus 1×5×5 = 25 for the latter.

More importantly, two 3×3 layers apply more non-linear transformations than one 5×5 layer (the former applies ReLU twice, the latter only once), which strengthens the CNN's ability to learn features.

So the advantages of stacking 3×3 convolutional layers are:

(1) fewer parameters;

(2) more non-linear transformations than a single large kernel, giving the CNN a stronger ability to learn features.
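
Both points are easy to verify with a little arithmetic; the sketch below uses the standard receptive-field recurrence for stride-1 layers:

```python
def stacked_receptive_field(n_layers, k=3):
    # Receptive field of n stacked k x k, stride-1 conv layers:
    # each extra layer grows the field by (k - 1).
    rf = 1
    for _ in range(n_layers):
        rf += k - 1
    return rf

print(stacked_receptive_field(2))  # 5 -> two 3x3 layers see a 5x5 window
print(stacked_receptive_field(3))  # 7 -> three 3x3 layers see a 7x7 window

# Weights per input/output channel pair:
print(2 * 3 * 3, "vs", 5 * 5)      # 18 vs 25
print(3 * 3 * 3, "vs", 7 * 7)      # 27 vs 49
```

The saving grows with kernel size: three 3×3 layers use 27 weights per channel pair against 49 for one 7×7 layer.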

Comparison with other networks:

            [figure: comparison of VGGNet with other networks' ILSVRC results]

    Compared with the best results of ILSVRC-2012 and ILSVRC-2013, VGGNet has a large advantage. Compared with GoogLeNet, although VGGNet's 7-network ensemble does not beat GoogLeNet, its single-network test error is better, and an ensemble of just 2 VGGNet models performs about as well as GoogLeNet's 7-network ensemble.

II. VGGNet Implementation

The VGGNet structure as visualized by TensorBoard:

           [figure: TensorBoard graph of the VGGNet implementation]

1. Convolution layer

    Because the convolution operation is needed many times, it is factored out into a standalone helper.

def conv_op(input, kh, kw, n_out, dh, dw, parameters, name):
    """
    Define a convolution layer.
    :param input: input tensor
    :param kh: kernel height
    :param kw: kernel width
    :param n_out: number of output channels (i.e. number of kernels)
    :param dh: vertical stride
    :param dw: horizontal stride
    :param parameters: parameter list (appended to as a side effect)
    :param name: layer name
    :return: the activated output of the convolution layer
    """

    n_in = input.get_shape()[-1].value  # number of input channels

    with tf.name_scope(name) as scope:
        kernel = tf.get_variable(scope + 'w',
                                 shape=[kh, kw, n_in, n_out], dtype=tf.float32,
                                 initializer=tf.contrib.layers.xavier_initializer_conv2d())
        conv = tf.nn.conv2d(input, kernel, [1, dh, dw, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[n_out], dtype=tf.float32),
                             trainable=True, name='b')
        z = tf.nn.bias_add(conv, biases)  # wx + b
        activation = tf.nn.relu(z, name=scope)
        parameters += [kernel, biases]
        return activation
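
As background for `tf.nn.conv2d`, the per-channel computation can be sketched in plain NumPy. This minimal version does a single-channel, no-padding ('VALID'-style) cross-correlation; the real op additionally sums over input channels and zero-pads for 'SAME':

```python
import numpy as np

def conv2d_single(x, k):
    """Valid cross-correlation of one 2-D input with one 2-D kernel."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # each output pixel is the elementwise product of a kh x kw
            # input window with the kernel, summed up
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
k = np.ones((3, 3))            # unnormalized 3x3 box kernel
print(conv2d_single(x, k))     # sums of each 3x3 window: 45, 54, 81, 90
```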

2. Fully connected layer

def fc_op(input, n_out, parameters, name):
    """
    Define a fully connected layer.
    Note: a convolution output must be flattened before it can feed an fc
    layer; this fc op includes the ReLU activation.
    :param input: input tensor
    :param n_out: number of output channels (i.e. number of neurons)
    :param parameters: parameter list (appended to as a side effect)
    :param name: layer name
    :return: the activated output of the fully connected layer
    """

    n_in = input.get_shape()[-1].value

    with tf.name_scope(name) as scope:
        kernel = tf.get_variable(scope + 'w',
                                 shape=[n_in, n_out], dtype=tf.float32,
                                 initializer=tf.contrib.layers.xavier_initializer())
        biases = tf.Variable(tf.constant(0.1, shape=[n_out], dtype=tf.float32),
                             trainable=True, name='b')
        activation = tf.nn.relu(tf.matmul(input, kernel) + biases, name=scope)  # ReLU
        parameters += [kernel, biases]
        return activation

3. Max-pooling layer

def maxPool_op(input, kh, kw, dh, dw, name):
    # kh/kw: pooling window size; dh/dw: strides
    return tf.nn.max_pool(input, ksize=[1, kh, kw, 1], strides=[1, dh, dw, 1],
                          padding='SAME', name=name)
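
With `padding='SAME'`, TensorFlow's output spatial size is `ceil(input_size / stride)`, so each 2×2 stride-2 pool halves the feature map (rounding up). A quick sketch of the five pooling stages:

```python
import math

def same_output_size(size, stride):
    # With 'SAME' padding, output spatial size is ceil(size / stride).
    return math.ceil(size / stride)

sizes = []
size = 224                      # VGGNet's input resolution
for _ in range(5):              # the five 2x2, stride-2 pools
    size = same_output_size(size, 2)
    sizes.append(size)
print(sizes)                    # [112, 56, 28, 14, 7]
```

The final 7×7×512 map is what gets flattened into the 25088-wide input of fc6.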

4. Assembling VGGNet

def vggNet(input,keep_prob):
    parameters =[]

    # conv1 stage
    conv1_1 =conv_op(input,kh=3,kw=3,n_out=64,dh=1,dw=1,
                    parameters=parameters,name='conv1_1')
    conv1_2 =conv_op(conv1_1, kh=3, kw=3, n_out=64, dh=1, dw=1,
                      parameters=parameters, name='conv1_2')
    pool1 =maxPool_op(conv1_2,kh=2,kw=2,dh=2,dw=2,name='pool1')

    # conv2 stage
    conv2_1 = conv_op(pool1, kh=3, kw=3, n_out=128, dh=1, dw=1,
                      parameters=parameters, name='conv2_1')
    conv2_2 = conv_op(conv2_1, kh=3, kw=3, n_out=128, dh=1, dw=1,
                      parameters=parameters, name='conv2_2')
    pool2 = maxPool_op(conv2_2, kh=2, kw=2, dh=2, dw=2, name='pool2')

    # conv3 stage
    conv3_1 = conv_op(pool2, kh=3, kw=3, n_out=256, dh=1, dw=1,
                      parameters=parameters, name='conv3_1')
    conv3_2 = conv_op(conv3_1, kh=3, kw=3, n_out=256, dh=1, dw=1,
                      parameters=parameters, name='conv3_2')
    conv3_3 = conv_op(conv3_2, kh=3, kw=3, n_out=256, dh=1, dw=1,
                      parameters=parameters, name='conv3_3')
    pool3 = maxPool_op(conv3_3, kh=2, kw=2, dh=2, dw=2, name='pool3')


    # conv4 stage
    conv4_1 = conv_op(pool3, kh=3, kw=3, n_out=512, dh=1, dw=1,
                      parameters=parameters, name='conv4_1')
    conv4_2 = conv_op(conv4_1, kh=3, kw=3, n_out=512, dh=1, dw=1,
                      parameters=parameters, name='conv4_2')
    conv4_3 = conv_op(conv4_2, kh=3, kw=3, n_out=512, dh=1, dw=1,
                      parameters=parameters, name='conv4_3')
    pool4 = maxPool_op(conv4_3, kh=2, kw=2, dh=2, dw=2, name='pool4')

    # conv5 stage
    conv5_1 = conv_op(pool4, kh=3, kw=3, n_out=512, dh=1, dw=1,
                      parameters=parameters, name='conv5_1')
    conv5_2 = conv_op(conv5_1, kh=3, kw=3, n_out=512, dh=1, dw=1,
                      parameters=parameters, name='conv5_2')
    conv5_3 = conv_op(conv5_2, kh=3, kw=3, n_out=512, dh=1, dw=1,
                      parameters=parameters, name='conv5_3')
    pool5 = maxPool_op(conv5_3, kh=2, kw=2, dh=2, dw=2, name='pool5')

    # flatten the last conv layer's output: one row per sample
    conv_shape=pool5.get_shape()
    col=conv_shape[1].value *conv_shape[2].value * conv_shape[3].value
    flat=tf.reshape(pool5, [-1,col],name='flat')

    # fc6 stage
    fc6 =fc_op(input=flat,n_out=4096,parameters=parameters,name='fc6')
    fc6_dropout =tf.nn.dropout(fc6,keep_prob,name='fc6_drop')

    # fc7 stage
    fc7 = fc_op(input=fc6_dropout, n_out=4096, parameters=parameters, name='fc7')
    fc7_dropout = tf.nn.dropout(fc7, keep_prob, name='fc7_drop')

    # fc8 stage: the final fully connected layer; softmax yields class probabilities
    fc8=fc_op(input=fc7_dropout,n_out=1000,parameters=parameters,name='fc8')
    softmax =tf.nn.softmax(fc8)
    predictions = tf.argmax(softmax, 1)
    return predictions,softmax,fc8,parameters

5. Benchmark: forward and backward timing

def time_compute(session, target, feed, info_string):
    num_batch = 100
    num_step_burn_in = 10  # warm-up iterations: the first few batches are
                           # distorted by memory loading, cache misses, etc.
    total_duration = 0.0   # total elapsed time
    total_duration_squared = 0.0
    for i in range(num_batch + num_step_burn_in):
        start_time = time.time()
        _ = session.run(target, feed_dict=feed)
        duration = time.time() - start_time
        if i >= num_step_burn_in:
            if i % 10 == 0:  # report the duration every 10 iterations
                print("%s: step %d, duration=%.5f" % (datetime.now(), i - num_step_burn_in, duration))
            total_duration += duration
            total_duration_squared += duration * duration
    time_mean = total_duration / num_batch
    time_variance = total_duration_squared / num_batch - time_mean * time_mean
    time_stddev = math.sqrt(time_variance)
    # iterations finished; report the summary
    print("%s: %s across %d steps, %.3f +/- %.3f sec per batch" %
          (datetime.now(), info_string, num_batch, time_mean, time_stddev))
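
The running-sum statistics used here (a mean, and E[x²] − mean² for the variance) do not depend on TensorFlow; the same harness can be sketched framework-free, with `dummy_op` as a hypothetical stand-in for `session.run`:

```python
import math
import time

def benchmark(fn, num_batch=100, num_step_burn_in=10):
    total, total_sq = 0.0, 0.0
    for i in range(num_batch + num_step_burn_in):
        start = time.time()
        fn()
        duration = time.time() - start
        if i >= num_step_burn_in:          # discard warm-up iterations
            total += duration
            total_sq += duration * duration
    mean = total / num_batch
    # variance = E[x^2] - mean^2; clamp at 0 against floating-point rounding
    stddev = math.sqrt(max(0.0, total_sq / num_batch - mean * mean))
    return mean, stddev

def dummy_op():                            # hypothetical stand-in workload
    sum(range(10000))

mean, stddev = benchmark(dummy_op)
print("%.6f +/- %.6f sec per batch" % (mean, stddev))
```

The `max(0.0, ...)` clamp is the one addition over the original: when the variance is tiny, rounding can push E[x²] − mean² slightly negative and crash `math.sqrt`.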

6. Results

To save time, batch_size was set to 2. The timing results for the forward and backward passes are shown below.

Forward prediction:

[figure: forward-pass timing output]

Backward learning:

[figure: backward-pass timing output]

    The backward pass takes 3 to 4 times as long as the forward pass.

[Appendix] Complete code

# -*- coding:utf-8 -*-
"""
@author:Lisa
@file:VggNet.py.py
@note:from
@time:2018/6/25 0025下午 7:29
"""

import tensorflow as tf
import math
import time
from datetime import datetime

def conv_op(input, kh, kw, n_out, dh, dw, parameters, name):
    """
    Define a convolution layer.
    :param input: input tensor
    :param kh: kernel height
    :param kw: kernel width
    :param n_out: number of output channels (i.e. number of kernels)
    :param dh: vertical stride
    :param dw: horizontal stride
    :param parameters: parameter list (appended to as a side effect)
    :param name: layer name
    :return: the activated output of the convolution layer
    """

    n_in = input.get_shape()[-1].value  # number of input channels

    with tf.name_scope(name) as scope:
        kernel = tf.get_variable(scope + 'w',
                                 shape=[kh, kw, n_in, n_out], dtype=tf.float32,
                                 initializer=tf.contrib.layers.xavier_initializer_conv2d())
        conv = tf.nn.conv2d(input, kernel, [1, dh, dw, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[n_out], dtype=tf.float32),
                             trainable=True, name='b')
        z = tf.nn.bias_add(conv, biases)  # wx + b
        activation = tf.nn.relu(z, name=scope)
        parameters += [kernel, biases]
        return activation

def fc_op(input, n_out, parameters, name):
    """
    Define a fully connected layer.
    Note: a convolution output must be flattened before it can feed an fc layer.
    :param input: input tensor
    :param n_out: number of output channels (i.e. number of neurons)
    :param parameters: parameter list (appended to as a side effect)
    :param name: layer name
    :return: the activated output of the fully connected layer
    """

    n_in = input.get_shape()[-1].value

    with tf.name_scope(name) as scope:
        kernel = tf.get_variable(scope + 'w',
                                 shape=[n_in, n_out], dtype=tf.float32,
                                 initializer=tf.contrib.layers.xavier_initializer())
        biases = tf.Variable(tf.constant(0.1, shape=[n_out], dtype=tf.float32),
                             trainable=True, name='b')
        activation = tf.nn.relu(tf.matmul(input, kernel) + biases, name=scope)  # ReLU
        parameters += [kernel, biases]
        return activation


def maxPool_op(input, kh, kw, dh, dw, name):
    # kh/kw: pooling window size; dh/dw: strides
    return tf.nn.max_pool(input, ksize=[1, kh, kw, 1], strides=[1, dh, dw, 1],
                          padding='SAME', name=name)



def vggNet(input,keep_prob):
    parameters =[]

    # conv1 stage
    conv1_1 =conv_op(input,kh=3,kw=3,n_out=64,dh=1,dw=1,
                    parameters=parameters,name='conv1_1')
    conv1_2 =conv_op(conv1_1, kh=3, kw=3, n_out=64, dh=1, dw=1,
                      parameters=parameters, name='conv1_2')
    pool1 =maxPool_op(conv1_2,kh=2,kw=2,dh=2,dw=2,name='pool1')

    # conv2 stage
    conv2_1 = conv_op(pool1, kh=3, kw=3, n_out=128, dh=1, dw=1,
                      parameters=parameters, name='conv2_1')
    conv2_2 = conv_op(conv2_1, kh=3, kw=3, n_out=128, dh=1, dw=1,
                      parameters=parameters, name='conv2_2')
    pool2 = maxPool_op(conv2_2, kh=2, kw=2, dh=2, dw=2, name='pool2')

    # conv3 stage
    conv3_1 = conv_op(pool2, kh=3, kw=3, n_out=256, dh=1, dw=1,
                      parameters=parameters, name='conv3_1')
    conv3_2 = conv_op(conv3_1, kh=3, kw=3, n_out=256, dh=1, dw=1,
                      parameters=parameters, name='conv3_2')
    conv3_3 = conv_op(conv3_2, kh=3, kw=3, n_out=256, dh=1, dw=1,
                      parameters=parameters, name='conv3_3')
    pool3 = maxPool_op(conv3_3, kh=2, kw=2, dh=2, dw=2, name='pool3')


    # conv4 stage
    conv4_1 = conv_op(pool3, kh=3, kw=3, n_out=512, dh=1, dw=1,
                      parameters=parameters, name='conv4_1')
    conv4_2 = conv_op(conv4_1, kh=3, kw=3, n_out=512, dh=1, dw=1,
                      parameters=parameters, name='conv4_2')
    conv4_3 = conv_op(conv4_2, kh=3, kw=3, n_out=512, dh=1, dw=1,
                      parameters=parameters, name='conv4_3')
    pool4 = maxPool_op(conv4_3, kh=2, kw=2, dh=2, dw=2, name='pool4')

    # conv5 stage
    conv5_1 = conv_op(pool4, kh=3, kw=3, n_out=512, dh=1, dw=1,
                      parameters=parameters, name='conv5_1')
    conv5_2 = conv_op(conv5_1, kh=3, kw=3, n_out=512, dh=1, dw=1,
                      parameters=parameters, name='conv5_2')
    conv5_3 = conv_op(conv5_2, kh=3, kw=3, n_out=512, dh=1, dw=1,
                      parameters=parameters, name='conv5_3')
    pool5 = maxPool_op(conv5_3, kh=2, kw=2, dh=2, dw=2, name='pool5')

    # flatten the last conv layer's output: one row per sample
    conv_shape=pool5.get_shape()
    col=conv_shape[1].value *conv_shape[2].value * conv_shape[3].value
    flat=tf.reshape(pool5, [-1,col],name='flat')

    # fc6 stage
    fc6 =fc_op(input=flat,n_out=4096,parameters=parameters,name='fc6')
    fc6_dropout =tf.nn.dropout(fc6,keep_prob,name='fc6_drop')

    # fc7 stage
    fc7 = fc_op(input=fc6_dropout, n_out=4096, parameters=parameters, name='fc7')
    fc7_dropout = tf.nn.dropout(fc7, keep_prob, name='fc7_drop')

    # fc8 stage: the final fully connected layer; softmax yields class probabilities
    fc8=fc_op(input=fc7_dropout,n_out=1000,parameters=parameters,name='fc8')
    softmax =tf.nn.softmax(fc8)
    predictions = tf.argmax(softmax, 1)
    return predictions,softmax,fc8,parameters


def time_compute(session, target, feed, info_string):
    num_batch = 100
    num_step_burn_in = 10  # warm-up iterations: the first few batches are
                           # distorted by memory loading, cache misses, etc.
    total_duration = 0.0   # total elapsed time
    total_duration_squared = 0.0
    for i in range(num_batch + num_step_burn_in):
        start_time = time.time()
        _ = session.run(target, feed_dict=feed)
        duration = time.time() - start_time
        if i >= num_step_burn_in:
            if i % 10 == 0:  # report the duration every 10 iterations
                print("%s: step %d, duration=%.5f" % (datetime.now(), i - num_step_burn_in, duration))
            total_duration += duration
            total_duration_squared += duration * duration
    time_mean = total_duration / num_batch
    time_variance = total_duration_squared / num_batch - time_mean * time_mean
    time_stddev = math.sqrt(time_variance)
    # iterations finished; report the summary
    print("%s: %s across %d steps, %.3f +/- %.3f sec per batch" %
          (datetime.now(), info_string, num_batch, time_mean, time_stddev))


def main():
    with tf.Graph().as_default():
        """仅使用随机图片数据 测试前馈和反馈计算的耗时"""
        image_size = 224
        batch_size = 2   # kept small to save time; a real run might use 32

        images = tf.Variable(tf.random_normal([batch_size, image_size, image_size, 3],
                                              dtype=tf.float32, stddev=0.1))

        keep_prob=tf.placeholder(tf.float32)
        predictions,softmax,fc8, parameters = vggNet(images,keep_prob)

        init = tf.global_variables_initializer()
        sess = tf.Session()
        sess.run(init)

        """
        AlexNet forward 计算的测评
        传入的target:fc8(即最后一层的输出)
        优化目标:loss
        使用tf.gradients求相对于loss的所有模型参数的梯度


        AlexNet Backward 计算的测评
        target:grad

        """
        time_compute(sess, target=fc8, feed={keep_prob:1.0},info_string="Forward")

        obj = tf.nn.l2_loss(fc8)
        grad = tf.gradients(obj, parameters)
        time_compute(sess, grad, feed={keep_prob:0.5},info_string="Forward-backward")


if __name__ == "__main__":
    main()

------------------------------------------------------         END      ----------------------------------------------------------

References:

"TensorFlow实战", Huang Wenjian (most of the content and code in this article come from this book; thanks!)

The original paper: "Very Deep Convolutional Networks for Large-Scale Image Recognition", Karen Simonyan et al.

VGGNet notes: https://blog.csdn.net/muyiyushan/article/details/62895202 (a translation)

