The Differences Between Auto-encoders, RBMs, and CNNs

While learning DL, I found that it has its own branches: DBNs (deep belief networks) built from RBMs, and CNNs (convolutional neural networks). I know a little about each, but couldn't clearly explain the differences between them, so I put together this summary:
An autoencoder is a simple 3-layer neural network whose output units are connected directly back to the input units, e.g. in a network like this:

(Figure 1: autoencoder network diagram)
`output[i]` has an edge back to `input[i]` for every `i`. Typically, the number of hidden units is much smaller than the number of visible (input/output) units. As a result, when you pass data through such a network, it first compresses (encodes) the input vector to "fit" into a smaller representation, and then tries to reconstruct (decode) it back. The task of training is to minimize the reconstruction error, i.e. to find the most efficient compact representation (encoding) of the input data.
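A minimal numpy sketch of such an autoencoder, assuming toy random data, tied encoder/decoder weights, and an 8 → 3 → 8 layout (all names and sizes here are illustrative choices, not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20 samples of 8-dimensional input.
X = rng.normal(size=(20, 8))

# Tied-weight autoencoder: 8 visible units, 3 hidden units.
W = rng.normal(scale=0.1, size=(8, 3))
b_h = np.zeros(3)   # hidden bias
b_v = np.zeros(8)   # visible bias

def forward(X, W, b_h, b_v):
    H = np.tanh(X @ W + b_h)   # encode: compress to 3 dims
    X_hat = H @ W.T + b_v      # decode: reconstruct 8 dims
    return H, X_hat

lr = 0.05
errors = []
for _ in range(200):
    H, X_hat = forward(X, W, b_h, b_v)
    diff = X_hat - X
    errors.append((diff ** 2).mean())
    # Gradient of the mean squared reconstruction error.
    dXhat = 2 * diff / X.size
    dH = dXhat @ W
    dpre = dH * (1 - H ** 2)          # tanh derivative
    gW = X.T @ dpre + dXhat.T @ H     # W appears in both encoder and decoder
    W -= lr * gW
    b_h -= lr * dpre.sum(axis=0)
    b_v -= lr * dXhat.sum(axis=0)

print(f"MSE: {errors[0]:.4f} -> {errors[-1]:.4f}")
```

The only training signal is the input itself: the error shrinks as the hidden layer finds a 3-dimensional code that reconstructs the 8-dimensional input.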

An RBM shares a similar idea, but uses a stochastic approach. Instead of deterministic units (e.g. logistic or ReLU), it uses stochastic units with a particular (usually binary or Gaussian) distribution. The learning procedure consists of several steps of Gibbs sampling (propagate: sample hiddens given visibles; reconstruct: sample visibles given hiddens; repeat) and adjusting the weights to minimize the reconstruction error.
(Figure 2: RBM diagram)
The intuition behind RBMs is that there are some visible random variables (e.g. film reviews from different users) and some hidden variables (e.g. film genres or other internal features), and the task of training is to find out how these two sets of variables are actually connected to each other.
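The sampling loop described above can be sketched as CD-1 (a single step of contrastive divergence) on toy binary data; the sizes, data, and learning rate below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Binary RBM: 6 visible units (e.g. films liked/disliked), 4 hidden units.
n_v, n_h = 6, 4
W = rng.normal(scale=0.1, size=(n_v, n_h))
b_v = np.zeros(n_v)
b_h = np.zeros(n_h)

# Toy data with structure: two repeated preference patterns.
proto = np.array([[1, 1, 1, 0, 0, 0],
                  [0, 0, 0, 1, 1, 1]], dtype=float)
V = proto[rng.integers(0, 2, size=60)]

def recon_error(V, W, b_v, b_h):
    # Mean-field reconstruction: visibles -> hidden probs -> visible probs.
    return ((V - sigmoid(sigmoid(V @ W + b_h) @ W.T + b_v)) ** 2).mean()

err_before = recon_error(V, W, b_v, b_h)

lr = 0.1
for _ in range(200):
    # Propagate: sample hidden units given visibles.
    p_h = sigmoid(V @ W + b_h)
    h = (rng.random(p_h.shape) < p_h).astype(float)
    # Reconstruct: sample visibles given hiddens, then re-infer hiddens (CD-1).
    p_v = sigmoid(h @ W.T + b_v)
    v_neg = (rng.random(p_v.shape) < p_v).astype(float)
    p_h_neg = sigmoid(v_neg @ W + b_h)
    # Contrastive-divergence updates: positive minus negative statistics.
    W += lr * (V.T @ p_h - v_neg.T @ p_h_neg) / len(V)
    b_v += lr * (V - v_neg).mean(axis=0)
    b_h += lr * (p_h - p_h_neg).mean(axis=0)

err_after = recon_error(V, W, b_v, b_h)
print(f"reconstruction error: {err_before:.3f} -> {err_after:.3f}")
```

Unlike the autoencoder sketch, every layer here is sampled rather than computed deterministically, which is what makes the trained RBM generative.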
For a more detailed introduction to RBMs, see the original article.

Convolutional Neural Networks are somewhat similar to these two, but instead of learning a single global weight matrix between two layers, they aim to find a set of locally connected neurons. CNNs are mostly used in image recognition. Their name comes from the "convolution" operator, or simply "filter". In short, filters are an easy way to perform complex operations by means of a simple change of the convolution kernel. Apply a Gaussian blur kernel and the image gets smoothed. Apply a Canny kernel and you'll see all the edges. Apply a Gabor kernel to get gradient features.
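To make the filter idea concrete, here is a small numpy sketch of 2-D convolution, using a box-blur kernel and a Sobel kernel as stand-ins for the kernels mentioned above (the image and kernel choices are illustrative assumptions):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation (what DL libraries call 'convolution')."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i+kh, j:j+kw] * kernel).sum()
    return out

image = np.zeros((8, 8))
image[:, 4:] = 1.0   # a vertical edge in the middle of the image

# Box-blur kernel: averages each 3x3 neighborhood, smoothing the image.
blur = np.full((3, 3), 1 / 9)

# Sobel kernel: responds strongly to vertical edges.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

smoothed = conv2d(image, blur)
edges = conv2d(image, sobel_x)
print(edges[2])  # nonzero only at the columns where the edge sits
```

The same `conv2d` machinery produces completely different features depending only on the 3×3 kernel, which is exactly the point made above.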
(Figure 3: convolution kernels applied to an image)
The goal of convolutional neural networks is not to use one of the predefined kernels, but instead to learn data-specific kernels. The idea is the same as with autoencoders or RBMs - translate many low-level features (e.g. user reviews or image pixels) into a compressed high-level representation (e.g. film genres or edges) - but now the weights are learned only from neurons that are spatially close to each other.
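The parameter savings from this local connectivity and weight sharing can be seen with simple arithmetic (the 28×28 image size and 5×5 kernel below are chosen purely for illustration):

```python
# Fully connected layer mapping a 28x28 image to a 28x28 output:
# one weight for every (input pixel, output pixel) pair.
dense_params = (28 * 28) * (28 * 28)

# Convolutional layer with a single shared 5x5 kernel:
# the same 25 weights (plus one bias) slide over the whole image.
conv_params = 5 * 5 + 1

print(dense_params, conv_params)  # 614656 vs 26
```

Fewer parameters means less data needed to fit them, which is one reason convolution works so well on images.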
(Figure 4: locally connected convolutional weights)
More detailed material is available in the original source.

All three models have their own application scenarios, each with strengths and weaknesses. Their main characteristics are summarized below:

  1. Auto-encoders are the simplest of the three models. They are easy to understand intuitively and to apply, which also gives them good interpretability; compared with RBMs, it is easier to find good parameters for them.

  2. RBMs are generative. That is, unlike autoencoders, which only discriminate among data vectors, RBMs can also generate new data from the learned joint distribution, which makes them richer and more flexible. The same holds for DBNs (deep belief networks built from stacked RBMs) versus CNNs: they overlap in some respects and differ in others. A DBN is an unsupervised machine-learning model, while a CNN is a supervised one.

  3. CNNs are models built for specific tasks. Most of today's top algorithms in image recognition are based on CNNs, but outside this domain they are hardly applicable (for instance, what would be the reason to analyze film reviews with convolution?).

This is just my current understanding; I will update it as I learn more.
