Paper Reading - A Fast Learning Algorithm for Deep Belief Nets

Author: G. E. Hinton et al.
Date: 2006
Type: article
Source: Neural Computation
Assessment: the eve of Deep Learning
Paper link: http://www.cs.toronto.edu/~hinton/absps/ncfast.pdf
The paper is rather "hardcore": explanations of algorithmic principles, mathematical formulas, and terminology everywhere. The authors are also very fond of describing the model with biological vocabulary, such as synapse strength and mind, which left me scratching my head while reading. It assumes background on RBMs (restricted Boltzmann machines) and the wake-sleep algorithm, which I happen to lack, so it was a hard read; these notes are only a rough outline with excerpts.

1 Purpose

  1. To design a generative model to surpass discriminative models.
  2. To train a deep, densely connected belief network efficiently.
    The explaining-away effect makes inference difficult in densely connected belief nets that have many hidden layers (a toy numerical example follows this list).
  • Challenges
  1. It is difficult to infer the conditional distribution of the hidden activities given a data vector.
  2. Variational methods use simple approximations to the true conditional distribution, but the approximation may be poor, especially at the deepest hidden layer, where the prior assumes independence.
  3. Variational learning still requires all of the parameters to be learned together, which makes the learning time scale poorly (i.e., become extremely time-consuming) as the number of parameters increases.

2 The previous work

  1. Back propagation nets
  2. support vector machines

3 The proposed method

  1. The authors designed a hybrid model in which the top two hidden layers form an undirected associative memory, and the remaining hidden layers form a directed acyclic graph that converts the representations in the associative memory into observable variables such as the pixels of an image (a rough sketch of this generative pass appears after my notes below).
    (figure from the paper)
    My understanding:
    • associative memory: the top two hidden layers. It really confused me while reading some parts of the paper when "associative memory" jumped out; actually it is just the top two hidden layers.
    • directed graph and undirected associative memory: supervised layers and unsupervised layers??
    • It is a generative model; what is its relation to the currently hot generative network, namely the GAN (generative adversarial network)?

(Note on "pixels of an image": the paper actually says observable variables, and simply states that the remaining layers form a directed graph.)
The authors derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory (??).
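
A minimal sketch of how I understand this generative architecture (my own toy Python, with made-up layer sizes, random weights, and no biases, so the output is noise; it only shows the flow of sampling): Gibbs sampling runs in the top-level undirected "associative memory", and the result is then pushed down through the directed layers to produce the observable variables.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
sample = lambda p: (rng.random(p.shape) < p).astype(float)   # stochastic binary units

# Hypothetical layer sizes and randomly initialized weights, just to show the flow.
sizes = [784, 500, 500, 2000]          # visible, h1, h2, top associative layer
W_dir = [rng.normal(0, 0.01, (sizes[i + 1], sizes[i])) for i in range(2)]  # directed, used top-down
W_top = rng.normal(0, 0.01, (sizes[2], sizes[3]))                          # undirected RBM weights

def generate(gibbs_steps=50):
    # 1) Alternating Gibbs sampling between the top two layers (the associative memory).
    h2 = sample(np.full(sizes[2], 0.5))
    for _ in range(gibbs_steps):
        top = sample(sigmoid(h2 @ W_top))
        h2 = sample(sigmoid(top @ W_top.T))
    # 2) A single top-down directed pass turns the sample into "observable variables".
    h1 = sample(sigmoid(h2 @ W_dir[1]))
    v = sigmoid(h1 @ W_dir[0])           # real-valued pixel probabilities at the bottom
    return v

print(generate().shape)                  # (784,)
```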

3.1 Hypotheses

In Section 7, the authors use the term mind to describe the internal state of the model, and they emphasize that they do not intend it as a metaphor D:).

3.2 Methodology

  1. In order to deal with the explaining-away phenomenon, the model introduces the idea of a "complementary" prior.
  2. It introduces a fast, greedy learning algorithm for constructing multilayer directed networks one layer at a time (a rough sketch follows this list). Using a variational bound, the paper shows that the overall generative model improves as each layer is added.
  3. It uses an up-down algorithm to fine-tune the weights produced by the fast, greedy algorithm.
    The up-down algorithm is a contrastive version of the wake-sleep algorithm.
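
A minimal sketch of greedy layer-wise pretraining with one-step contrastive divergence (CD-1). This is my own simplified Python (biases, momentum, mini-batches, and the label units are all omitted, and the function names are made up), not the paper's exact procedure; the up-down fine-tuning stage is not sketched at all.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
sample = lambda p: (rng.random(p.shape) < p).astype(float)

def train_rbm(data, n_hidden, epochs=5, lr=0.1):
    """Train a single RBM layer with CD-1 (biases omitted for brevity)."""
    W = rng.normal(0, 0.01, (data.shape[1], n_hidden))
    for _ in range(epochs):
        for v0 in data:
            h0 = sample(sigmoid(v0 @ W))        # up: hidden sample given the data
            v1 = sigmoid(h0 @ W.T)              # down: one-step reconstruction
            h1 = sigmoid(v1 @ W)                # up again: hidden given the reconstruction
            # CD-1 update: <v h>_data - <v h>_reconstruction
            W += lr * (np.outer(v0, h0) - np.outer(v1, h1))
    return W

def greedy_pretrain(data, layer_sizes):
    """Stack RBMs: each new layer is trained on the hidden activities of the one below."""
    weights, layer_input = [], data
    for n_hidden in layer_sizes:
        W = train_rbm(layer_input, n_hidden)
        weights.append(W)
        layer_input = sigmoid(layer_input @ W)  # propagate the data up to train the next layer
    return weights

# Toy usage: 100 random binary "images" of 784 pixels, three hidden layers.
toy = (rng.random((100, 784)) > 0.5).astype(float)
weights = greedy_pretrain(toy, [500, 500, 2000])
```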

3.3 Experiment

  1. In Section 6, a network with three hidden layers and about 1.7 million weights was tested on the MNIST set of handwritten digits. It achieved an error rate of 1.25%, which outperforms the best backpropagation nets and support vector machines (as reported in 2002).

3.4 Data sets

MNIST handwritten digits set.

Are the data sets sufficient?

In 2006, yes, it was a sufficient data set. Even though it looks small from a modern perspective, it took days to train a model on it in 2006.

3.5 Features

  1. In the proposed model, there is a fast, greedy learning algorithm that can find a fairly good set of parameters quickly, even in deep networks with millions of parameters and many layers.
  2. The learning algorithm is unsupervised but can be applied to labeled data by learning a model that generates both the label and the data. (Does this mean learning the joint distribution of labels and data?)
  3. In the proposed model, there is a fine-tuning algorithm that learns an excellent generative model that outperforms discriminative methods on the MNIST database of handwritten digits. (What does "learns a ... model" mean? Does the algorithm create a model, or is the model simply trained well?)
  4. The generative model makes it easy to interpret the distributed representations in the deep hidden layers. (The authors did this by generating images from the model, to look into the "mind" of a neural network.)
  5. The inference required for forming a percept is both fast and accurate. (Does this mean the model can form percepts quickly?)
  6. The learning algorithm is local. Adjustments to a synapse strength depend only on the states of the presynaptic and postsynaptic neurons. (What does synapse mean here? A weight between two units of the network? See the update rule sketched after this list.)
  7. The communication is simple. Neurons only need to communicate their stochastic binary states.
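
Regarding items 6 and 7, my reading is that a "synapse" here is just the weight w_ij between two connected units, and the locality refers to the contrastive-divergence-style weight update used throughout the paper, roughly (notation mine):

```latex
\Delta w_{ij} \;=\; \varepsilon \left( \langle v_i h_j \rangle_{\text{data}} \;-\; \langle v_i h_j \rangle_{\text{reconstruction}} \right)
```

Only the binary states of the two units that the weight connects appear in the update, so no global error signal has to be propagated.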

3.6 Advantages

It has some major advantages compared with discriminative models.

  1. Generative models can learn low-level features without requiring feedback from the label, and they can learn many more parameters than discriminative models without overfitting.
  2. It is easy to see what the network has learned by generating from its model.
  3. It is possible to interpret the nonlinear, distributed hidden representations in the deep hidden layers by generating images from them.
  4. The superior classification performance of discriminative learning methods holds only for domains in which it is not possible to learn a good generative model. This set of domains is being eroded by Moore’s law.(So, you mean that as the computational ability of computers grows, those domains will diminish?)

3.7 Weakness

The authors list the limitations of their model:

  1. It is designed for images in which nonbinary values can be treated as probabilities (which is not the case for natural images);
  2. Its use of top-down feedback during perception is limited to the associative memory in the top two layers; (again, the associative memory is just the top two layers)
  3. It does not have a systematic way to deal with perceptual invariances. (Perspective distortion? Uneven illumination?)
  4. It assumes that segmentation has already been performed.
  5. It does not learn to sequentially attend to the most informative parts of objects when discrimination is difficult.

3.8 Application

Not mentioned.

4 What is the author’s next step?

Not mentioned; perhaps breaking through the limitations? Actually, limitations 3 and 5 have largely been resolved by deep convolutional neural networks and attention mechanisms.

4.1 Do you agree with the author about the next step?

I agree that breaking through limitations 3 and 5 is the right next step.

5 What do other researchers say about his work?

  • [George E. Dahl et al. | Context-Dependent Pre-trained Deep Neural Networks for Large Vocabulary Speech Recognition | IEEE | 2012]

    • The pretraining algorithm we use is the deep belief network (DBN) pre-training algorithm of [24].
    • we abandon the deep belief network once pre-training is complete and only retain and continue training the recognition weights
    • It often outperforms random initialization for the deeper architectures we are interested in training and provides results very robust to the initial random seed. The generative model learned during pre-training helps prevent overfitting, even when using models with very high capacity and can aid in the subsequent optimization of the recognition weights
  • [Daniela M. Witten et al. | Covariance-regularized regression and classification for high dimensional problems | Journal of the Royal Statistical Society | 2009]
    Indeed, many methods in the deep learning literature involve processing the features without using the outcome. Principal components regression is a classical example of this; a more recent example with much more extensive preprocessing is in Hinton et al. (2006).

  • [Ruslan Salakhutdinov et al. | Semantic Hashing | International Journal of Approximate Reasoning | 2009 ]
    The model can be trained efficiently by using a Restricted Boltzmann Machine (RBM) to learn one layer of hidden variables at a time [8]. (Hinton is the second author of this paper.)

Reference Blogs

  • A Fast Learning Algorithm for Deep Belief Nets. I did not fully understand some concepts, such as explaining away; this blog's rendering of them is slightly better.
  • Deep Belief Network 简介 (Introduction to DBNs). It explains DBNs fairly clearly and helps with understanding.
