[Keras in Practice] Building a DenseNet to Reach 90%+ Accuracy on CIFAR-10


The CIFAR-10 Dataset

This article uses CIFAR-10. The dataset's origin and details are covered in plenty of articles online, so there is no need for a lengthy introduction here; the image below basically sums it up:

(Figure: overview of the ten CIFAR-10 classes with sample images)

Experimental Setup

  • Keras 2.0.5
  • TensorFlow 1.2.0
  • NVIDIA Titan X GPU × 2

Data Augmentation

Data augmentation plays a very important role in deep learning, especially in image recognition. Keras provides a convenient augmentation tool:

from keras.preprocessing.image import ImageDataGenerator

def getDataGenerator(train_phase, rescale=1./255):
    """Return an ImageDataGenerator; augmentation is applied only during training."""
    if train_phase:
        datagen = ImageDataGenerator(
            rotation_range=0.,
            width_shift_range=0.05,
            height_shift_range=0.05,
            shear_range=0.05,
            zoom_range=0.05,
            channel_shift_range=0.,
            fill_mode='nearest',
            horizontal_flip=True,
            vertical_flip=False,
            rescale=rescale)
    else:
        datagen = ImageDataGenerator(rescale=rescale)

    return datagen

getDataGenerator returns a Keras data generator; the transformations it applies are clear from the arguments we pass. Note that we must indicate whether we are training or doing validation/testing: for the latter two we apply no augmentation, only a simple rescale of the pixel values.
Next, let's see what the generator actually produces. First, load the dataset used in this experiment:

from keras.datasets import cifar10

(x_train,y_train),(x_test,y_test) = cifar10.load_data()
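As a quick sanity check (not shown in the original post), the arrays come back with the shapes documented for keras.datasets.cifar10:

print(x_train.shape, y_train.shape)  # (50000, 32, 32, 3) (50000, 1)
print(x_test.shape, y_test.shape)    # (10000, 32, 32, 3) (10000, 1)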

We only want to glance at a few images, so the training data alone is enough. First, some simple preprocessing:

    import keras

    x_train = x_train.astype('float32')
    y_train = keras.utils.to_categorical(y_train, 10)  # integer labels -> one-hot vectors

Here we mapped the training labels y to one-hot class vectors, but for visualization we need to recover the concrete class each vector denotes. The class-name lookup table can be obtained like this:

    import os
    import pickle

    # CIFAR-10 ships with a metadata file that maps label indices to class names
    label_list_path = 'datasets/cifar-10-batches-py/batches.meta'
    keras_dir = os.path.expanduser(os.path.join('~', '.keras'))
    datadir_base = os.path.expanduser(keras_dir)
    if not os.access(datadir_base, os.W_OK):
        datadir_base = os.path.join('/tmp', '.keras')
    label_list_path = os.path.join(datadir_base, label_list_path)
    with open(label_list_path, mode='rb') as f:
        labels = pickle.load(f)
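The labels object is a small dict whose 'label_names' entry lists the ten class names in index order, so np.argmax inverts the one-hot encoding. A quick illustrative check (these names are fixed by the CIFAR-10 metadata):

    import numpy as np

    print(labels['label_names'])
    # ['airplane', 'automobile', 'bird', 'cat', 'deer',
    #  'dog', 'frog', 'horse', 'ship', 'truck']
    print(labels['label_names'][np.argmax(y_train[0])])  # class name of the first training image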

Next, obtain the generator we defined earlier:

datagen = getDataGenerator(train_phase=True)

The generator yields batches of augmented, preprocessed images; we plot them with matplotlib:

    import matplotlib.pyplot as plt
    from keras.preprocessing.image import array_to_img

    pics_num = 10  # how many samples to display; a multiple of 5 (value not shown in the original post)
    figure = plt.figure()
    plt.subplots_adjust(left=0.1, bottom=0.1, right=0.9, top=0.9, hspace=0.5, wspace=0.3)
    for x_batch, y_batch in datagen.flow(x_train, y_train, batch_size=pics_num):
        for i in range(pics_num):
            pics_raw = x_batch[i]
            pics = array_to_img(pics_raw)
            ax = plt.subplot(pics_num // 5, 5, i + 1)
            ax.axis('off')
            ax.set_title(labels['label_names'][np.argmax(y_batch[i])])
            plt.imshow(pics)
        plt.savefig("./processed_data.jpg")
        break  # the generator loops forever, so stop after one batch

Finally, open processed_data.jpg for a quick look at the preprocessed data:
(Figure 1: a grid of augmented training images, each titled with its class name)

DenseNet

For the DenseNet architecture itself, read the paper directly (Huang et al., Densely Connected Convolutional Networks).
In short, the body of a DenseNet consists of DenseBlocks, with TransitionBlocks connecting one DenseBlock to the next.
To build the DenseBlock structure, we first construct a single plain convolutional layer:

from keras.layers import Activation, Convolution2D, Dropout
from keras.regularizers import l2

def conv_block(input, nb_filter, dropout_rate=None, weight_decay=1E-4):
    # Pre-activation style: ReLU, then a 3x3 convolution producing nb_filter maps
    x = Activation('relu')(input)
    x = Convolution2D(nb_filter, (3, 3), kernel_initializer="he_uniform", padding="same", use_bias=False,
                      kernel_regularizer=l2(weight_decay))(x)
    if dropout_rate is not None:
        x = Dropout(dropout_rate)(x)
    return x

A DenseBlock is really just a stack of such convolutional layers, except that each layer is connected to every earlier layer within the same DenseBlock (their outputs are concatenated):

from keras.layers import Concatenate
from keras import backend as K

def dense_block(x, nb_layers, nb_filter, growth_rate, dropout_rate=None, weight_decay=1E-4):
    # Channel axis: 1 for Theano ordering ("th"), -1 for TensorFlow
    concat_axis = 1 if K.image_dim_ordering() == "th" else -1

    feature_list = [x]

    for i in range(nb_layers):
        # Each layer adds growth_rate new feature maps...
        x = conv_block(x, growth_rate, dropout_rate, weight_decay)
        feature_list.append(x)
        # ...and the next layer sees the concatenation of all previous outputs
        x = Concatenate(axis=concat_axis)(feature_list)
        nb_filter += growth_rate

    return x, nb_filter
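To make the channel bookkeeping concrete: with this post's settings (nb_filter=16 after the initial convolution, growth_rate=12, nb_layers=12 per block), each conv layer contributes 12 new feature maps, so the first dense block widens its input from 16 to 160 channels. A small illustrative calculation, not part of the model code:

nb_filter, growth_rate, nb_layers = 16, 12, 12
nb_filter += growth_rate * nb_layers
print(nb_filter)  # 16 + 12 * 12 = 160 channels leaving the first dense block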

There is nothing special to say about the TransitionBlock; we follow the paper:

from keras.layers import AveragePooling2D, BatchNormalization

def transition_block(input, nb_filter, dropout_rate=None, weight_decay=1E-4):
    concat_axis = 1 if K.image_dim_ordering() == "th" else -1

    # 1x1 convolution, then 2x2 average pooling halves the spatial resolution
    x = Convolution2D(nb_filter, (1, 1), kernel_initializer="he_uniform", padding="same", use_bias=False,
                      kernel_regularizer=l2(weight_decay))(input)
    if dropout_rate is not None:
        x = Dropout(dropout_rate)(x)
    x = AveragePooling2D((2, 2), strides=(2, 2))(x)

    x = BatchNormalization(axis=concat_axis, gamma_regularizer=l2(weight_decay),
                           beta_regularizer=l2(weight_decay))(x)

    return x

Next, we assemble the whole DenseNet:

from keras.layers import Input, Dense, GlobalAveragePooling2D
from keras.models import Model

def createDenseNet(nb_classes, img_dim, depth=40, nb_dense_block=3, growth_rate=12, nb_filter=16, dropout_rate=None,
                   weight_decay=1E-4, verbose=True):

    model_input = Input(shape=img_dim)

    concat_axis = 1 if K.image_dim_ordering() == "th" else -1

    # 3 dense blocks of N conv layers each, plus the initial conv, two
    # transition layers, and the classifier, give a total depth of 3N + 4
    assert (depth - 4) % 3 == 0, "Depth must be 3N + 4"

    # layers in each dense block
    nb_layers = int((depth - 4) / 3)

    # Initial convolution
    x = Convolution2D(nb_filter, (3, 3), kernel_initializer="he_uniform", padding="same", name="initial_conv2D", use_bias=False,
                      kernel_regularizer=l2(weight_decay))(model_input)

    x = BatchNormalization(axis=concat_axis, gamma_regularizer=l2(weight_decay),
                           beta_regularizer=l2(weight_decay))(x)

    # Add dense blocks, each followed by a transition block
    for block_idx in range(nb_dense_block - 1):
        x, nb_filter = dense_block(x, nb_layers, nb_filter, growth_rate, dropout_rate=dropout_rate,
                                   weight_decay=weight_decay)
        x = transition_block(x, nb_filter, dropout_rate=dropout_rate, weight_decay=weight_decay)

    # The last dense block is not followed by a transition block
    x, nb_filter = dense_block(x, nb_layers, nb_filter, growth_rate, dropout_rate=dropout_rate,
                               weight_decay=weight_decay)

    # Classifier head: global average pooling followed by softmax
    x = Activation('relu')(x)
    x = GlobalAveragePooling2D()(x)
    x = Dense(nb_classes, activation='softmax', kernel_regularizer=l2(weight_decay), bias_regularizer=l2(weight_decay))(x)

    densenet = Model(inputs=model_input, outputs=x)

    if verbose:
        print("DenseNet-%d-%d created." % (depth, growth_rate))

    return densenet
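As a quick sanity check (illustrative, not in the original post), building the model at this post's settings and counting parameters should land around one million, in line with the roughly 1.0M the DenseNet paper reports for the depth-40, growth-rate-12 configuration:

model = createDenseNet(nb_classes=10, img_dim=(32, 32, 3), depth=40, growth_rate=12)
print(model.count_params())  # on the order of 1e6 for DenseNet-40-12
model.summary()              # layer-by-layer breakdown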

Training

With the model and the data ready, we can start training.
First, define the parameters we will need:

# DenseNet parameters
ROWS = 32
COLS = 32
CHANNELS = 3
nb_classes = 10
batch_size = 32
nb_epoch = 40
img_dim = (ROWS,COLS,CHANNELS)
densenet_depth = 40
densenet_growth_rate = 12

We use the DenseNet-40-12 architecture here: depth 40 means (40 - 4) / 3 = 12 convolutional layers per dense block. Images are (32, 32, 3), using TensorFlow's default channels-last ordering.
Next, load the dataset, do the remaining preprocessing (the generators handle the 1/255 normalization), and obtain the generators:

    (x_train,y_train),(x_test,y_test) = cifar10.load_data()
    x_train = x_train.astype('float32')
    x_test = x_test.astype('float32')
    # Note: no manual division by 255 here. getDataGenerator already rescales
    # by 1/255; dividing here as well would normalize the inputs twice.
    y_train = keras.utils.to_categorical(y_train, nb_classes)
    y_test = keras.utils.to_categorical(y_test, nb_classes)
    train_datagen = getDataGenerator(train_phase=True)
    train_datagen = train_datagen.flow(x_train, y_train, batch_size=batch_size)
    validation_datagen = getDataGenerator(train_phase=False)
    validation_datagen = validation_datagen.flow(x_test, y_test, batch_size=batch_size)
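A quick illustrative check (not in the original post) that the training generator emits correctly shaped, rescaled batches:

    x_batch, y_batch = next(train_datagen)
    print(x_batch.shape, y_batch.shape)  # (32, 32, 32, 3) (32, 10)
    print(x_batch.min(), x_batch.max())  # pixel values now lie within [0, 1]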

Next, build and compile the model:

    from keras.optimizers import Adam, SGD

    model = createDenseNet(nb_classes=nb_classes, img_dim=img_dim, depth=densenet_depth,
                           growth_rate=densenet_growth_rate)
    # resume and check_point_file are assumed to be defined alongside the other
    # parameters above (a boolean flag and a weights-file path)
    if resume == True:
        model.load_weights(check_point_file)

    optimizer = Adam()
    #optimizer = SGD(lr=0.001)

    model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])

Note that the resume flag decides whether we continue a previous run or start a fresh one. As for the optimizer, Adam was used for the first 20 epochs; it converges noticeably faster than SGD. After 20 epochs the accuracy plateaued around 85%, at which point I interrupted training and switched to SGD, and the model eventually converged to roughly 90% accuracy.
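Concretely, the switch amounts to reloading the best checkpoint so far and recompiling with SGD. A minimal sketch of that second phase; only the lr=0.001 value appears (commented out) in the code above, so treat the rest as an assumption:

    from keras.optimizers import SGD

    # Resume from the best Adam-phase checkpoint, then continue with SGD
    model.load_weights(check_point_file)
    model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.001), metrics=['accuracy'])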
Next, we define some callbacks to checkpoint the model during training; when training with SGD we also want to lower the learning rate automatically once accuracy stops improving:

    """
    lr_reducer = ReduceLROnPlateau(monitor='val_acc', factor=np.sqrt(0.1),
                                    cooldown=0, patience=3, min_lr=1e-6)
    """
    model_checkpoint = ModelCheckpoint(check_point_file, monitor="val_acc", save_best_only=True,
                                  save_weights_only=True, verbose=1)

    #callbacks=[lr_reducer,model_checkpoint]
    callbacks=[model_checkpoint]

Finally, we can start training:

    history = model.fit_generator(generator=train_datagen,
                    steps_per_epoch= x_train.shape[0] // batch_size,
                    epochs=nb_epoch,
                    callbacks=callbacks,
                    validation_data=validation_datagen,
                    validation_steps = x_test.shape[0] // batch_size,
                    verbose=1)
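The history object returned by fit_generator records the per-epoch metrics, which the code above captures but never uses. A short sketch (not in the original post) to plot the curves; note that Keras 2.0 names the metrics 'acc' and 'val_acc':

    import matplotlib.pyplot as plt

    # Training vs. validation accuracy across epochs
    plt.figure()
    plt.plot(history.history['acc'], label='train acc')
    plt.plot(history.history['val_acc'], label='val acc')
    plt.xlabel('epoch')
    plt.legend()
    plt.savefig('./training_curves.jpg')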

Testing the Model

With a trained model, we need to test its performance.
First, of course, load the best weights saved by the checkpoint callback during training:

    model = createDenseNet(nb_classes=nb_classes,img_dim=img_dim,depth=densenet_depth,
                  growth_rate = densenet_growth_rate)
    model.load_weights(check_point_file)
    optimizer = Adam()
    model.compile(loss='categorical_crossentropy',optimizer=optimizer,metrics=['accuracy'])

Then we can measure the model's accuracy on the test set:

    (x_train,y_train),(x_test,y_test) = cifar10.load_data()
    x_test = x_test.astype('float32')
    # As in training, the generator performs the 1/255 rescale, so no manual division here
    y_test = keras.utils.to_categorical(y_test, nb_classes)
    test_datagen = getDataGenerator(train_phase=False)
    test_datagen = test_datagen.flow(x_test, y_test, batch_size=batch_size, shuffle=False)

    # Evaluate the model on the full test set
    evaluation = model.evaluate_generator(test_datagen,
                                          steps=x_test.shape[0] // batch_size,
                                          workers=4)
    print('Model Accuracy = %.2f' % (evaluation[1]))
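A single accuracy number hides which classes are hard. As an illustrative extension (not in the original post), per-class accuracy can be computed with predict_generator; the generator is re-created so predictions start from the first test image and stay aligned with y_test:

    import numpy as np

    # Fresh unshuffled generator, since evaluate_generator already consumed batches
    test_datagen = getDataGenerator(train_phase=False).flow(x_test, y_test,
                                                            batch_size=batch_size, shuffle=False)
    steps = x_test.shape[0] // batch_size
    predictions = model.predict_generator(test_datagen, steps=steps)
    predicted = np.argmax(predictions, axis=-1)
    actual = np.argmax(y_test[:len(predicted)], axis=-1)
    for c in range(nb_classes):
        mask = actual == c
        print('%-10s accuracy: %.3f' % (labels['label_names'][c],
                                        (predicted[mask] == actual[mask]).mean()))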

Finally, let's see which images the model misclassified:

    counter = 0
    figure = plt.figure()
    plt.subplots_adjust(left=0.1, bottom=0.1, right=0.9, top=0.9, hspace=0.5, wspace=0.3)
    for x_batch, y_batch in test_datagen:
        predict_res = model.predict_on_batch(x_batch)
        for i in range(batch_size):
            actual_label = labels['label_names'][np.argmax(y_batch[i])]
            predicted_label = labels['label_names'][np.argmax(predict_res[i])]
            if actual_label != predicted_label:
                counter += 1
                pics_raw = x_batch[i]
                pics_raw *= 255  # undo the generator's 1/255 rescale for display
                pics = array_to_img(pics_raw)
                ax = plt.subplot(25 // 5, 5, counter)
                ax.axis('off')
                ax.set_title(predicted_label)
                plt.imshow(pics)
            if counter >= 25:
                plt.savefig("./wrong_predicted.jpg")
                break
        if counter >= 25:
            break

This produces the image below; we kept only 25 of the misclassified images, each titled with the model's wrong prediction:
(Figure 2: 25 misclassified test images, each titled with the model's prediction)

Source Code

DenseNet-Cifar10

References

titu1994/DenseNet
fchollet/keras
Keras Documentation
