A DCNN-based Xception model


  • Contents
    • Paper overview
      • Data augmentation
      • Xception architecture
      • Optimizer and loss function
    • Code implementation
      • Import required packages
      • Define the network
        • My own unfinished attempt
        • GitHub code
      • Read the data
      • Set the train/test paths and label function, and shuffle
      • Create the training and test datasets
      • Define the learning-rate schedule
      • Compile the model
      • Train the model
      • Evaluation and testing
    • Summary


Paper overview

I came across a paper that uses Xception for binary and multi-class classification of breast histopathology images. The paper gives the network structure, and since I had just been learning a bit about custom layers in TensorFlow, I wanted to try it out myself. On its own dataset, the proposed model reaches 99.01% binary-classification accuracy and 96.57% multi-class accuracy.

Paper: Multi-Classification of Breast Histopathological Image Using Xception

The dataset used in the paper is BREAKHIS: microscope biopsy images of breast tumors collected from 82 patients, at magnifications of 40X, 100X, 200X and 400X, stored as 700 × 460 three-channel RGB images. I use my own dataset instead, so I won't go into further detail here; see the paper if you are interested.


Data augmentation

The paper uses the Keras ImageDataGenerator to augment the data, generating more images through rotations, scaling, flips and so on. I'll add this step next time to see how it affects the results; a rough sketch of what it might look like follows below.
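
I haven't tried it yet, but a minimal sketch with ImageDataGenerator might look like the following; the rotation/zoom/shift values are placeholders of my own, not the settings from the paper.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Minimal augmentation sketch; the parameter values below are placeholders,
# not the settings reported in the paper.
datagen = ImageDataGenerator(
    rotation_range=40,       # random rotations of up to 40 degrees
    zoom_range=0.2,          # random zoom in/out by up to 20%
    width_shift_range=0.1,   # random horizontal shifts
    height_shift_range=0.1,  # random vertical shifts
    horizontal_flip=True,    # random horizontal flips
    vertical_flip=True)      # random vertical flips

# Typical usage on an in-memory batch x_train of shape (N, 128, 128, 3):
# for x_batch in datagen.flow(x_train, batch_size=32):
#     ...  # train on the augmented batch, then break out when done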


Xception architecture

The paper describes the classifier, a DCNN-based Xception model, as a low-cost, low-computation model. It is built entirely from depthwise separable convolution layers: 36 convolutional layers with 3 × 3 kernels extract the features, and these layers are organized into 14 modules, all of which are wrapped in linear residual connections except for the first.

[Figure 1: Xception network structure from the paper]

The residual (skip) connections help counter the vanishing-gradient problem, the convolutional layers extract and learn the features, and the depthwise separable convolutions cut the amount of computation, which reduces both overfitting and computational cost; the small sketch below makes the saving concrete.
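
Here is a small sketch comparing the parameter count of a regular 3 × 3 convolution with a depthwise separable one on the same 64-channel input (the 8,896-parameter figure also shows up in the model summary further below):

import tensorflow as tf

# Regular vs. depthwise separable 3x3 convolution on a 64-channel input.
inp = tf.keras.Input(shape=(59, 59, 64))
regular = tf.keras.Model(inp, tf.keras.layers.Conv2D(128, 3)(inp))
separable = tf.keras.Model(inp, tf.keras.layers.SeparableConv2D(128, 3)(inp))

print(regular.count_params())    # 73856 = 3*3*64*128 weights + 128 biases
print(separable.count_params())  # 8896  = 3*3*64 depthwise + 64*128 pointwise + 128 biases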


Optimizer and loss function

The proposed model uses the Adam optimizer, categorical cross-entropy as the loss, and ReLU as the activation function; a reference sketch of that setup follows.
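
For reference, compiling with the paper's choices would look roughly like the sketch below; note that my own code later in this post uses SGD with sparse categorical cross-entropy instead.

import tensorflow as tf

def compile_like_paper(model):
    # Sketch of the paper's setup: Adam optimizer + categorical cross-entropy.
    # Assumes the model ends in a softmax layer and the labels are one-hot encoded.
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss=tf.keras.losses.CategoricalCrossentropy(),
                  metrics=['accuracy'])
    return model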


Code implementation


Import required packages

from d2l import tensorflow as d2l
import tensorflow as tf
import numpy as np
import random
import tensorflow.keras as keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Dropout
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras.optimizers import SGD
import os
import pathlib
import shutil
from random import shuffle
from glob import glob
import matplotlib.pyplot as plt

Define the network

My own unfinished attempt

# First create an input node (the sample shape):
img_inputs = keras.Input(shape=(128, 128, 3))
img_inputs.shape

# Two regular convolutions, then a separable-convolution branch whose output is
# added to a strided-convolution shortcut taken from the second convolution.
x = layers.Conv2D(32, 3, (2, 2), activation="relu")(img_inputs)
x = layers.Conv2D(64, 3, activation="relu")(x)
y = layers.SeparableConv2D(128, 3)(x)
y = layers.Activation("relu")(y)
y = layers.SeparableConv2D(128, 3, activation="relu")(y)
y = layers.MaxPooling2D(3, (2, 2))(y)
a = layers.Conv2D(128, 6, (2, 2))(x)  # shortcut branch, shaped to match y
b = layers.add([a, y])                # residual addition
model = keras.Model(img_inputs, b, name="toy_resnet")

model.summary()

Output:

Model: "toy_resnet"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 128, 128, 3) 0                                            
__________________________________________________________________________________________________
conv2d_41 (Conv2D)              (None, 63, 63, 32)   896         input_1[0][0]                    
__________________________________________________________________________________________________
conv2d_42 (Conv2D)              (None, 61, 61, 64)   18496       conv2d_41[0][0]                  
__________________________________________________________________________________________________
separable_conv2d_27 (SeparableC (None, 59, 59, 128)  8896        conv2d_42[0][0]                  
__________________________________________________________________________________________________
activation_14 (Activation)      (None, 59, 59, 128)  0           separable_conv2d_27[0][0]        
__________________________________________________________________________________________________
separable_conv2d_28 (SeparableC (None, 57, 57, 128)  17664       activation_14[0][0]              
__________________________________________________________________________________________________
conv2d_43 (Conv2D)              (None, 28, 28, 128)  295040      conv2d_42[0][0]                  
__________________________________________________________________________________________________
max_pooling2d_13 (MaxPooling2D) (None, 28, 28, 128)  0           separable_conv2d_28[0][0]        
__________________________________________________________________________________________________
add_9 (Add)                     (None, 28, 28, 128)  0           conv2d_43[0][0]                  
                                                                 max_pooling2d_13[0][0]           
==================================================================================================
Total params: 340,992
Trainable params: 340,992
Non-trainable params: 0
__________________________________________________________________________________________________

Visualize the model graph:

keras.utils.plot_model(model, "mini_resnet.png", show_shapes=True)

[Figure 2: plot_model output for toy_resnet]

After getting partway through I got lazy and went looking on GitHub, where I actually found code for the same network, so I copied it over and used it.


GitHub code

Code source: TanyaChutani/Xception-Tf2.0

def block(layer, filters, kernel_size, strides=1, padding='valid', layer_name='conv',
          pool_size=2, pool_strides=None, filter_2=False):

    if layer_name == 'conv':
        layer = tf.keras.layers.Conv2D(filters=filters, kernel_size=kernel_size,
                                       strides=strides, use_bias=False)(layer)
        layer = tf.keras.layers.BatchNormalization()(layer)
        layer = tf.keras.layers.ReLU()(layer)
        layer = tf.keras.layers.Conv2D(filters=filters*2, kernel_size=kernel_size,
                                       use_bias=False)(layer)
        layer = tf.keras.layers.BatchNormalization()(layer)
        layer = tf.keras.layers.ReLU()(layer)

    elif layer_name == 'separable_conv':
        layer = tf.keras.layers.SeparableConv2D(filters, kernel_size,
                                                padding=padding, use_bias=False)(layer)
        layer = tf.keras.layers.BatchNormalization()(layer)
        layer = tf.keras.layers.ReLU()(layer)
        if filter_2:
            layer = tf.keras.layers.SeparableConv2D(filter_2, kernel_size,
                                                    padding=padding, use_bias=False)(layer)
        else:
            layer = tf.keras.layers.SeparableConv2D(filters, kernel_size,
                                                    padding=padding, use_bias=False)(layer)
        layer = tf.keras.layers.BatchNormalization()(layer)
        layer = tf.keras.layers.MaxPooling2D(pool_size, strides=pool_strides,
                                             padding=padding)(layer)
    return layer


def add_block(layer, filters, kernel_size, strides=1, padding='valid', pool_size=2, pool_strides=None):
    layer = tf.keras.layers.ReLU()(layer)
    layer = tf.keras.layers.SeparableConv2D(
        filters, kernel_size, padding=padding, use_bias=False)(layer)
    layer = tf.keras.layers.BatchNormalization()(layer)
    layer = tf.keras.layers.ReLU()(layer)
    layer = tf.keras.layers.SeparableConv2D(
        filters, kernel_size, padding=padding, use_bias=False)(layer)
    layer = tf.keras.layers.BatchNormalization()(layer)
    layer = tf.keras.layers.MaxPooling2D(pool_size, strides=pool_strides,
                                         padding=padding)(layer)
    return layer


def entry_flow(input_layer):
    # Entry flow: an initial plain-conv block followed by three separable-conv
    # blocks, each with a strided 1x1 conv shortcut (residual connection).
    block_1 = block(input_layer, 32, 3, 2, layer_name='conv')

    block_2 = block(block_1, 128, 3, padding='same',
                    layer_name='separable_conv')
    layer_add = tf.keras.layers.Conv2D(filters=128, kernel_size=1, strides=2,
                                       padding='same', use_bias=False)(block_1)
    layer_add = tf.keras.layers.BatchNormalization()(layer_add)
    layer = tf.keras.layers.Add()([block_2, layer_add])

    block_3 = add_block(layer, 256, 3, 1, 'same', 3, 2)
    layer_add = tf.keras.layers.Conv2D(filters=256, kernel_size=1, strides=2,
                                       padding='same', use_bias=False)(layer)
    layer_add = tf.keras.layers.BatchNormalization()(layer_add)
    layer = tf.keras.layers.Add()([block_3, layer_add])

    block_4 = add_block(layer, 728, 3, 1, 'same', 3, 2)
    layer_add = tf.keras.layers.Conv2D(filters=728, kernel_size=1, strides=2,
                                       padding='same', use_bias=False)(layer)
    layer_add = tf.keras.layers.BatchNormalization()(layer_add)
    layer = tf.keras.layers.Add()([block_4, layer_add])
    return layer


def middle_flow(input_layer):
    # Stack eight middle-flow modules; each module is three ReLU + separable-conv
    # + batch-norm blocks wrapped in a residual connection.
    layer = input_layer
    for _ in range(8):
        shortcut = layer
        for _ in range(3):
            layer = tf.keras.layers.ReLU()(layer)
            layer = tf.keras.layers.SeparableConv2D(filters=728, kernel_size=3,
                                                    padding='same', use_bias=False)(layer)
            layer = tf.keras.layers.BatchNormalization()(layer)
        layer = tf.keras.layers.Add()([shortcut, layer])
    return layer


def exit_flow(input_layer):
    # Exit flow: one more residual separable-conv block, two wider separable
    # convs, global average pooling and a 1000-unit ReLU feature layer.
    layer = tf.keras.layers.ReLU()(input_layer)
    block_1 = block(layer, 728, 3, padding='same', layer_name='separable_conv',
                    pool_size=3, pool_strides=2, filter_2=1024)

    layer_add = tf.keras.layers.Conv2D(filters=1024, kernel_size=1,
                                       strides=2, padding='same', use_bias=False)(input_layer)
    layer_add = tf.keras.layers.BatchNormalization()(layer_add)
    layer = tf.keras.layers.Add()([block_1, layer_add])

    layer = tf.keras.layers.SeparableConv2D(filters=1536, kernel_size=3,
                                            padding='same', use_bias=False)(layer)
    layer = tf.keras.layers.BatchNormalization()(layer)
    layer = tf.keras.layers.ReLU()(layer)
    layer = tf.keras.layers.SeparableConv2D(filters=2048, kernel_size=3,
                                            padding='same', use_bias=False)(layer)
    layer = tf.keras.layers.BatchNormalization()(layer)
    layer = tf.keras.layers.ReLU()(layer)

    layer = tf.keras.layers.GlobalAvgPool2D()(layer)
    layer = tf.keras.layers.Dense(1000, activation='relu')(layer)

    return layer


def xception(shape, include_top):
    model_input = tf.keras.layers.Input(shape=shape)
    entry_block = entry_flow(model_input)
    mid_block = middle_flow(entry_block)
    exit_block = exit_flow(mid_block)

    # With include_top a 10-way classification head is added; otherwise the
    # 1000-unit feature layer from exit_flow is used as the output.
    if include_top:
        model_output = tf.keras.layers.Dense(10)(exit_block)
    else:
        model_output = exit_block
    model = tf.keras.models.Model(model_input, model_output)
    model.summary()
    return model


shape = 128, 128, 3
model = xception(shape, include_top=True)

Visualize the model graph:

[Figure 3: plot_model output for the Xception model]



Read the data

The code here also comes from the GitHub link above.


read_img() turns an image path into a tensor and does the basic preprocessing (resizing and value normalization).

def read_img(image_path):
  img = tf.io.read_file(image_path)
  img = tf.image.decode_image(img, channels=3)
  img.set_shape([None, None, 3])
  img = tf.image.resize(img, [128, 128])  # resize; adjust the target size as needed
  img = img / 255.0                       # scale pixel values to [0, 1]
  return img
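
A quick sanity check on a single file (the path below is only a placeholder; substitute one of your own images):

sample = read_img('data_sets/train_test_dataset/train/example.png')  # placeholder path
print(sample.shape, sample.dtype)  # expected: (128, 128, 3) <dtype: 'float32'>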

load_data() bundles an image with its corresponding label.

def load_data(image_path, label):
  image = read_img(image_path)
  return image, label

data_generator() builds a batched, prefetched tf.data dataset for training and testing.

def data_generator(features,labels):
  dataset = tf.data.Dataset.from_tensor_slices((features,labels))
  dataset = dataset.shuffle(buffer_size=100)
  autotune = tf.data.experimental.AUTOTUNE
  dataset = dataset.map(load_data, num_parallel_calls=autotune)
  dataset = dataset.batch(batch_size=batch_size)
  #dataset = dataset.repeat()
  dataset = dataset.prefetch(autotune)
  return dataset

Set the train/test paths and label function, and shuffle

train_images = glob('data_sets/train_test_dataset/train/*')
np.random.shuffle(train_images)
# The character at index 35 of each training path encodes the class ('A' -> 1).
lb = lambda img: 1 if img[35] == 'A' else 0
train_y = [lb(file) for file in train_images]

test_images = glob('data_sets/train_test_dataset/test/*')
np.random.shuffle(test_images)
# In the test paths the class letter sits at index 34.
lb = lambda img: 1 if img[34] == 'A' else 0
test_y = [lb(file) for file in test_images]
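
The label lambdas rely on the class letter sitting at a fixed character index of the full path, which breaks as soon as the directory prefix changes. A slightly more robust sketch derives the label from the file name instead; it assumes, hypothetically, that the class letter is the first character of the file name, so the position would need adjusting to the actual naming scheme.

def label_from_path(path, pos=0):
    # Derive the label from the file name rather than the full path.
    # `pos` is the assumed index of the class letter within the file name.
    name = os.path.basename(path)
    return 1 if name[pos] == 'A' else 0

# train_y = [label_from_path(p) for p in train_images]
# test_y = [label_from_path(p) for p in test_images]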
Set batch_size and the number of epochs:

batch_size = 64
epochs = 1

Create the training and test datasets

train_dataset = data_generator(train_images,train_y)
test_dataset = data_generator(test_images,test_y)

Define the learning-rate schedule

starter_learning_rate = 1e-2
end_learning_rate = 1e-5
decay_steps = 80000
learning_rate_fn = tf.keras.optimizers.schedules.PolynomialDecay(
    starter_learning_rate,
    decay_steps,
    end_learning_rate,
    power=0.8)
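
The schedule object is callable with a global step, which makes it easy to check what learning rate it will produce at different points in training:

print(float(learning_rate_fn(0)))      # 0.01 at the start of training
print(float(learning_rate_fn(40000)))  # partway through the decay
print(float(learning_rate_fn(80000)))  # 1e-05 once decay_steps is reached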

Compile the model

model.compile(loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
              optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate_fn),
              metrics=[tf.metrics.SparseCategoricalAccuracy()])

Train the model

model.fit(train_dataset, epochs=epochs, steps_per_epoch=len(train_images) // batch_size)

Training output:

 2/130 [..............................] - ETA: 51s - loss: 0.4251 - sparse_categorical_accuracy: 0.8203WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.1196s vs `on_train_batch_end` time: 0.3521s). Check your callbacks.
130/130 [==============================] - 69s 529ms/step - loss: 0.4845 - sparse_categorical_accuracy: 0.7669


Evaluation and testing

test_scores = model.evaluate(test_dataset, verbose=2)
print("Test loss:", test_scores[0])
print("Test accuracy:", test_scores[1])

Test output:

56/56 - 6s - loss: 0.5455 - sparse_categorical_accuracy: 0.7230
Test loss: 0.5454705953598022
Test accuracy: 0.7229691743850708
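
To look at individual predictions rather than aggregate metrics, the raw logits can be turned into class indices, for example:

# The model outputs raw logits (no softmax), so take the argmax per image.
logits = model.predict(test_dataset.take(1))       # one batch of logits
pred_classes = tf.argmax(logits, axis=-1).numpy()  # predicted class per image
print(pred_classes[:10])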

Summary

Judging purely by the training and test results, the model is not good enough yet, but there are plenty of ways to improve it: extract better features from the data, add data augmentation, try different optimizers and loss functions, tune how the learning rate changes during training, add a validation set, train for more epochs, save the model after each epoch, and plot the training and test curves to watch how they evolve (a small sketch of a couple of these is below). Still, I'm glad I managed to piece together this small task, even if much of it was copied from elsewhere. I write too little code myself and need more practice; at the same time I want the project to move forward quickly, which probably means I'm being a bit impatient and not learning the right way, so I'll try to set aside time to work through simple exercises and get more fluent with the language.
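
As a sketch of two of those ideas, per-epoch checkpoints and a validation set, assuming a hypothetical val_dataset built with data_generator() like the other two:

# Hypothetical sketch: save weights after every epoch and track a validation set.
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    'xception_epoch_{epoch:02d}.h5', save_weights_only=True)

history = model.fit(train_dataset,
                    validation_data=val_dataset,   # assumed to exist
                    epochs=epochs,
                    steps_per_epoch=len(train_images) // batch_size,
                    callbacks=[checkpoint_cb])

# history.history holds the per-epoch loss/accuracy for plotting the curves.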
