Building a Keras-Style Deep Learning Framework from Scratch, Part 5: Implementing the Convolution Layer

Learning in earnest, blogging at a leisurely pace.

In the previous chapters we implemented a simple fully connected network. This chapter implements the convolutional layer and walks through its forward pass and backward pass (with two implementations of the latter) in detail.

With the fully connected layer as groundwork, the convolutional layer is much easier to follow. Here is the full code first; the forward and backward passes are discussed in detail afterwards:

from enet.layers.base_layer import Layer
from enet.optimizer import optimizer_dict
from enet.utils import img2col, col2img

import numpy as np


class Conv2D(Layer):
    """
    2维卷积层
    """

    def __init__(self, filters, kernel_size=(3, 3), strides=1, padding="same", activation=None, optimizer="sgd",
                 input_shape=None, name=None, **k_args):
        """
        初始化变量
        :param filters: 卷积核个数
        :param kernel_size: 卷积核大小
        :param strides: 卷积步长
        :param padding: same或valid
        :param activation: 激活函数
        :param optimizer: 优化器
        :param input_shape:
        :param name: 网络层名字
        :param k_args:
        """
        super(Conv2D, self).__init__(layer_type="conv2d")

        assert padding.lower() in {"same", "valid"}

        self.filters = filters
        self.name = name

        if input_shape:
            self.input_shape = input_shape

        self.kernel_size = kernel_size
        self.strides = strides

        self.padding = padding

        assert activation in {None, "sigmoid", "relu", "softmax"}
        assert optimizer in {"sgd", "momentum", "adagrad", "adam", "rmsprop"}

        self.activation = activation
        self.optimizer = optimizer_dict[optimizer](**k_args)

        self.weight = None
        self.bias = None

        # padding_shape records the input shape after padding
        self.padding_shape = None
        self.cache_weight = None

    def build(self, input_shape):
        """
        根据input_shape来构建网络模型参数
        :param input_shape: 输入形状
        :return: 无返回值
        """
        self.input_shape = input_shape

        self.weight_shape = (self.kernel_size + (input_shape[-1], self.filters))
        self.weight = self.add_weight(shape=self.weight_shape, node_num=input_shape)
        self.bias = self.add_weight(shape=(self.filters, ), initializer="zero")

        if self.padding == "same":
            self.output_shape = ((input_shape[0] - 1) // self.strides + 1,
                                 (input_shape[1] - 1) // self.strides + 1,
                                 self.filters)
        else:
            self.output_shape = ((input_shape[0] - self.kernel_size[0]) // self.strides + 1,
                                 (input_shape[1] - self.kernel_size[1]) // self.strides + 1,
                                 self.filters)

    def forward(self, input_signal, *args, **k_args):
        """
        前向传播
        :param input_signal: 输入信息
        :param args:
        :param k_args:
        :return:
        """
        # Pad the borders when padding is "same"
        if self.padding == "same":
            input_signal = np.pad(input_signal,
                                  ((0, 0),
                                   (self.kernel_size[0] // 2, self.kernel_size[0] // 2),
                                   (self.kernel_size[1] // 2, self.kernel_size[1] // 2),
                                   (0, 0)),
                                  mode="constant"
                                  )

        self.padding_shape = input_signal.shape

        matrix_weight = self.weight.reshape((-1, self.filters))
        matrix_image = img2col(input_signal, self.kernel_size, self.strides)

        self.cache_weight = matrix_weight
        self.cache = matrix_image

        output_signal = np.matmul(matrix_image, matrix_weight) + self.bias

        return output_signal.reshape((-1,) + self.output_shape)

    def backward(self, delta):
        """
        反向传播, 使用col2img方式
        :param delta: 梯度
        :return:
        """
        delta_col = delta.reshape((delta.shape[0], -1, self.filters))

        delta_w = np.sum(np.matmul(self.cache.transpose(0, 2, 1), delta_col), axis=0).reshape(self.weight.shape)
        delta_b = np.sum(delta_col, axis=(0, 1))

        # Accumulate the gradients into the optimizer
        self.optimizer.grand(delta_w=delta_w, delta_b=delta_b)

        delta_padding_image_col = np.matmul(delta_col, self.cache_weight.transpose())
        output_delta = col2img(delta_padding_image_col, self.kernel_size, self.padding_shape, self.strides)

        # If padding is "same", strip the padded borders
        if self.padding == "same":
            output_delta = output_delta[:,
                                        self.kernel_size[0] // 2: - (self.kernel_size[0] // 2),
                                        self.kernel_size[1] // 2: - (self.kernel_size[1] // 2),
                                        :]

        return output_delta

    def alternative_backward(self, delta):
        """
        另一种反向传播,使用
        :param delta: 梯度
        :return:
        """
        delta_col = delta.reshape((delta.shape[0], -1, self.filters))

        delta_w = np.sum(np.matmul(self.cache.transpose(0, 2, 1), delta_col), axis=0).reshape(self.weight.shape)
        delta_b = np.sum(delta_col, axis=(0, 1))

        # Accumulate the gradients into the optimizer
        self.optimizer.grand(delta_w=delta_w, delta_b=delta_b)

        # This approach convolves the incoming gradient with the flipped kernel.
        # First dilate delta: if the stride is not 1, scatter the gradient into a
        # zero tensor sized as the stride-1 output, leaving zeros elsewhere.
        if self.padding == "same":
            back_per_stride_height, back_per_stride_width = self.input_shape[0], self.input_shape[1]
        else:
            back_per_stride_height, back_per_stride_width = self.input_shape[0] - self.kernel_size[0] + 1, \
                                                            self.input_shape[1] - self.kernel_size[1] + 1

        if self.strides != 1:
            new_delta = np.zeros(shape=(delta.shape[0],
                                        back_per_stride_height,
                                        back_per_stride_width,
                                        delta.shape[-1]))
            new_delta[:, ::self.strides, ::self.strides, :] = delta
            delta = new_delta

        # Flip the kernel spatially, then swap the input- and output-channel axes
        flip_weight = np.flip(self.weight, axis=(0, 1)).swapaxes(2, 3).reshape((-1, self.input_shape[-1]))

        # Pad the gradient borders
        pixel = [k_size // 2 if self.padding == "same" else k_size - 1 for k_size in self.kernel_size]
        delta = np.pad(delta, ((0, 0), (pixel[0], pixel[0]), (pixel[1], pixel[1]), (0, 0)), mode="constant")
        matrix_delta = img2col(delta, self.kernel_size, 1)

        return np.dot(matrix_delta, flip_weight).reshape((delta.shape[0],) + self.input_shape)

    def update(self, lr):
        """
        更新参数
        :param lr: 学习率
        :return:
        """
        delta_w, delta_b = self.optimizer.get_delta_and_reset(lr, "delta_w", "delta_b")

        self.weight += delta_w
        self.bias += delta_b

Our framework uses the same channel layout as TensorFlow: inputs are (batch, height, width, channel), and kernels are (kernel_height, kernel_width, input_channel, output_channel).
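To make those conventions concrete, here is a small hand check with made-up sizes (not framework code) of the NHWC/HWIO shapes and the output-size formulas used in build:

```python
import numpy as np

# NHWC input: batch=2, height=5, width=5, channels=3
x = np.zeros((2, 5, 5, 3))

# HWIO kernel: 3x3 spatial size, 3 input channels, 8 filters
w = np.zeros((3, 3, 3, 8))

stride = 2
# "same": output spatial size is ceil(input / stride)
same_h = (x.shape[1] - 1) // stride + 1            # 3
# "valid": no padding, every window must fit inside the image
valid_h = (x.shape[1] - w.shape[0]) // stride + 1  # 2

print(same_h, valid_h)
```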

Unlike the plain matrix multiplication of a fully connected layer, convolution first extracts neighborhood patches of pixels and then applies the kernel to each patch. There are many write-ups of this trick; here is one: https://www.zhihu.com/question/28385679/answer/44297845

There are also many implementations of this operation online; most versions of img2col reshape the 4-D image into a 2-D matrix and reshape the kernel into a 2-D matrix as well. We do it slightly differently: the image only needs to become 3-D (keeping the batch axis), since NumPy's broadcasting handles the rest and the code stays easier to follow. img2col looks like this:

def img2col(image, kernel_size, stride):
    """
    img2col的实现,加速卷积运算
    :param image: 图像
    :param kernel_size: 核大小例如(3, 3)
    :param stride: 步长
    :return: 生成的矩阵, 为3维, batch,
    """
    batch_size, height, width, channel = image.shape
    out_h, out_w = (height - kernel_size[0]) // stride + 1, (width - kernel_size[1]) // stride + 1

    image_col = np.zeros(shape=(batch_size, out_h * out_w, kernel_size[0] * kernel_size[1] * channel))
    for i in range(out_h):
        h_min = i * stride
        h_max = i * stride + kernel_size[0]
        for j in range(out_w):
            w_min = j * stride
            w_max = j * stride + kernel_size[1]

            image_col[:, i * out_w + j, :] = image[:, h_min: h_max, w_min: w_max, :].reshape((batch_size, -1))

    return image_col
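As a quick sanity check (img2col is restated here so the snippet runs standalone), the result should have shape (batch, out_h * out_w, kernel_h * kernel_w * channel), and each row should be exactly the flattened window it came from:

```python
import numpy as np

def img2col(image, kernel_size, stride):
    # Same logic as above: one row per output position, one flattened patch per row
    batch_size, height, width, channel = image.shape
    out_h = (height - kernel_size[0]) // stride + 1
    out_w = (width - kernel_size[1]) // stride + 1
    image_col = np.zeros((batch_size, out_h * out_w, kernel_size[0] * kernel_size[1] * channel))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[:, i * stride: i * stride + kernel_size[0],
                          j * stride: j * stride + kernel_size[1], :]
            image_col[:, i * out_w + j, :] = patch.reshape((batch_size, -1))
    return image_col

x = np.arange(2 * 4 * 4 * 3, dtype=float).reshape((2, 4, 4, 3))
cols = img2col(x, (3, 3), 1)

print(cols.shape)  # (2, 4, 27): 2x2 output positions, 3*3*3 values per patch
```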

Once the image has been unfolded into a matrix, the convolution becomes a plain matrix multiplication; just keep in mind how the padding mode affects the size of the output map.
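The weight gradient in backward above follows the same matrix-calculus rule as the dense layer: for Y = X·W per sample, dW sums Xᵀ·δ over the batch, and db sums δ over batch and positions. A minimal numeric check of that rule, with made-up shapes:

```python
import numpy as np

rng = np.random.default_rng(0)

# X: (batch, positions, patch_size), W: (patch_size, filters)
X = rng.normal(size=(2, 4, 27))
W = rng.normal(size=(27, 8))
delta = rng.normal(size=(2, 4, 8))  # upstream gradient dL/dY for Y = X @ W

# Closed form, as in backward(): dW = sum over batch of X^T @ delta
dW = np.sum(np.matmul(X.transpose(0, 2, 1), delta), axis=0)
db = np.sum(delta, axis=(0, 1))

# Brute-force accumulation over every (batch, position) pair
dW_ref = np.zeros_like(W)
for b in range(X.shape[0]):
    for p in range(X.shape[1]):
        dW_ref += np.outer(X[b, p], delta[b, p])

print(dW.shape, db.shape)
```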

The backward pass of a convolutional layer is another tricky point. Most references compute the incoming gradient by convolving it with the flipped kernel, as in alternative_backward above, but the detailed derivation is hard to explain cleanly. Rather than poring over other people's explanations, I suggest drawing a few diagrams and tracing the convolution by hand; that usually makes the idea click. Alternatively, we can propagate the gradient with col2img, which is more direct and easier to understand.
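The equivalence of the two approaches can be checked numerically on a tiny case (single channel, single filter, stride 1, valid padding; a hand-rolled sketch, not the framework's code path):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 3
x = rng.normal(size=(6, 6))      # one image, one channel
w = rng.normal(size=(k, k))      # one 3x3 filter
delta = rng.normal(size=(4, 4))  # upstream gradient; "valid" output is 4x4

# Method 1 (the col2img idea): scatter delta[i, j] * w back onto each input window
dx_scatter = np.zeros_like(x)
for i in range(4):
    for j in range(4):
        dx_scatter[i:i + k, j:j + k] += delta[i, j] * w

# Method 2 (the alternative_backward idea): pad delta by k-1 on each side,
# then cross-correlate with the spatially flipped kernel
pad = np.pad(delta, k - 1, mode="constant")
w_flip = np.flip(w, axis=(0, 1))
dx_conv = np.zeros_like(x)
for i in range(6):
    for j in range(6):
        dx_conv[i, j] = np.sum(pad[i:i + k, j:j + k] * w_flip)

print(np.allclose(dx_scatter, dx_conv))  # True
```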

When implementing the dense layer we listed the matrix-derivative formulas, and they apply unchanged here: since we have already unfolded the image into a 3-D matrix, the derivative of a 3-D matrix product (and of a 2-D one, by the same reasoning) is easy to write down. From that 3-D gradient we then implement col2img:

def col2img(image_col, kernel_size, padding_shape, stride):
    """
    col2img的实现,回传梯度
    :param stride: 步长
    :param image_col: 图像列信息
    :param kernel_size: 核大小
    :param padding_shape: 原图像pad之后的图像大小
    :return:
    """
    batch_size, height, width, channel = padding_shape
    out_h, out_w = (height - kernel_size[0]) // stride + 1, (width - kernel_size[1]) // stride + 1

    padding_image = np.zeros(shape=padding_shape)

    for i in range(out_h):
        h_min = i * stride
        h_max = i * stride + kernel_size[0]
        for j in range(out_w):
            w_min = j * stride
            w_max = j * stride + kernel_size[1]

            padding_image[:, h_min: h_max, w_min: w_max, :] += image_col[:, i * out_w + j, :].reshape((batch_size,
                                                                                                       kernel_size[0],
                                                                                                       kernel_size[1],
                                                                                                       channel))
    return padding_image

Because img2col can map one pixel into several patch rows, the contributions must be summed when folding back; and again, watch the borders for each padding mode.
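The accumulation is easy to see by scattering a "1" back from every (window, offset) pair, as col2img does: each input pixel receives one contribution per window that covered it, so interior pixels of a 4x4 image under a 3x3 stride-1 kernel are counted more than once:

```python
import numpy as np

k, stride = 3, 1
h = w = 4
out = (h - k) // stride + 1  # 2 output positions per axis

# Count how many windows touch each input pixel
coverage = np.zeros((h, w))
for i in range(out):
    for j in range(out):
        coverage[i * stride: i * stride + k, j * stride: j * stride + k] += 1.0

print(coverage)
# Corners are covered by exactly 1 window; the 2x2 center by all 4
```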

That wraps up the convolution layer. The pooling layers are implemented along the same lines (see max_pooling and average_pooling), so they are not repeated here; the next chapter covers the batch_normalization layer.

The full code is on GitHub at https://github.com/darkwhale/neural_network and is still being updated.
