SYSU Summer Camp - Research Notes

Paper 1:

1. Why deep learning (DL)?

From : "A Tutorial Survey of Architectures, Algorithms, and
Applications for Deep Learning"

Most machine learning and signal processing techniques had exploited shallow-structured architectures (e.g., Gaussian mixture models (GMMs), hidden Markov models (HMMs), support vector machines (SVMs), logistic regression, ...).

Shallow architectures have been shown effective in solving many simple or well-constrained problems, but their limited modeling and representational power can cause difficulties when dealing with more complicated real-world applications involving natural signals such as human speech, natural sound and language, and natural image and visual scenes. (Shallow architectures do not suit such complex applications, so deeper structures are needed to tackle them.)

Human information processing mechanisms (e.g., vision and speech), however, suggest the need of deep architectures for extracting complex structure and building internal representation from rich sensory inputs. For example, human speech production and perception systems are both equipped with clearly layered hierarchical structures in transforming the information from the waveform level to the linguistic level. (The multi-layer structure of deep learning resembles the way humans process information -- though I don't fully understand how human information processing is actually layered -- which gives it stronger modeling and abstraction power and lets it capture more complex models.)

Deep learning grew out of neural networks, but plain BP (back-propagation) networks are not enough on their own: because training relies on purely local gradient descent, it often gets trapped in poor local optima, and the severity of this problem increases markedly with network depth.

2. A brief history of deep learning

deep belief network (DBN)

which is composed of a stack of Restricted Boltzmann Machines (RBMs). A core component of the DBN is a greedy, layer-by-layer learning algorithm which optimizes DBN weights at time complexity linear to the size and depth of the networks.

A core component of the DBN is a greedy, layer-by-layer learning algorithm that optimizes the weights at a favorable (linear) time complexity.

My understanding: the DBN involves an unsupervised pre-training phase that uses RBMs to initialize the weights and biases, providing a good set of initial parameters for the DNN.

Using DBNs to facilitate the training of DNNs plays an important role in igniting interest in deep learning for speech feature coding and speech recognition. (The DBN's generative model effectively aids DNN training, but there are also simpler alternatives, e.g., discriminative layer-wise pre-training that starts shallow: train -> insert a new hidden layer -> retrain the whole network -> insert another -> ...)
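To make the greedy layer-by-layer idea concrete, here is a toy sketch of my own (not from the survey, and unrelated to BigDL): a binary RBM trained with one-step contrastive divergence (CD-1), stacked so that each trained layer's hidden probabilities become the next layer's input. All class/function names (Rbm, cd1Update, ...) and the random toy data are made up for illustration.

```scala
import scala.util.Random

object DbnSketch {
  val rng = new Random(0)

  def sigmoid(x: Double): Double = 1.0 / (1.0 + math.exp(-x))
  def sampleBernoulli(p: Double): Double = if (rng.nextDouble() < p) 1.0 else 0.0

  // A minimal binary RBM with CD-1 updates.
  class Rbm(val nVisible: Int, val nHidden: Int) {
    val w    = Array.fill(nVisible, nHidden)(0.01 * rng.nextGaussian())
    val bVis = Array.fill(nVisible)(0.0)
    val bHid = Array.fill(nHidden)(0.0)

    def hiddenProbs(v: Array[Double]): Array[Double] =
      Array.tabulate(nHidden) { j =>
        sigmoid(bHid(j) + (0 until nVisible).map(i => v(i) * w(i)(j)).sum)
      }

    def visibleProbs(h: Array[Double]): Array[Double] =
      Array.tabulate(nVisible) { i =>
        sigmoid(bVis(i) + (0 until nHidden).map(j => h(j) * w(i)(j)).sum)
      }

    // One contrastive-divergence (CD-1) step on a single training vector.
    def cd1Update(v0: Array[Double], lr: Double): Unit = {
      val h0       = hiddenProbs(v0)
      val h0Sample = h0.map(sampleBernoulli)
      val v1       = visibleProbs(h0Sample)
      val h1       = hiddenProbs(v1)
      for (i <- 0 until nVisible; j <- 0 until nHidden)
        w(i)(j) += lr * (v0(i) * h0(j) - v1(i) * h1(j))
      for (i <- 0 until nVisible) bVis(i) += lr * (v0(i) - v1(i))
      for (j <- 0 until nHidden)  bHid(j) += lr * (h0(j) - h1(j))
    }
  }

  def main(args: Array[String]): Unit = {
    // Toy binary data; a real run would use actual features.
    val data       = Array.fill(200)(Array.fill(16)(if (rng.nextDouble() < 0.3) 1.0 else 0.0))
    val layerSizes = Seq(16, 12, 8) // visible -> hidden1 -> hidden2

    // Greedy layer-by-layer pre-training: train an RBM, feed its hidden
    // probabilities to the next RBM as data, and repeat up the stack.
    var input = data
    val stack = layerSizes.sliding(2).map { case Seq(nv, nh) =>
      val rbm = new Rbm(nv, nh)
      for (_ <- 1 to 10; v <- input) rbm.cd1Update(v, lr = 0.1)
      input = input.map(rbm.hiddenProbs)
      rbm
    }.toList

    println(s"Pre-trained ${stack.size} RBM layers; their weights would now initialise a DNN.")
  }
}
```

The point of the stacking loop is exactly the "provide good initial parameters" idea above: after pre-training, the stacked weights seed a DNN that is then fine-tuned with back-propagation.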

3. Three broad classes of deep architectures

  • Generative architectures

  • Discriminative architectures

  • Hybrid generative-discriminative architectures

Hands-on session

Installing BigDL

I had already installed Hadoop and HDFS not long ago, so I simply reused that setup this time.

  • First, clone the BigDL source code from Git.
  • Then install Maven (downloads were painfully slow; a quick Google search suggested adding the Aliyun Maven mirror).
  • Run make-dist.sh (this failed countless times; watch out for the Spark version).
  • Use BigDL in spark-shell (the pitfalls below came up, none of which the official docs mention; a sketch of the smoke test I was aiming for follows after this list).
  • Startup spewed errors (failed to create various DBs, etc.): analyzing the logs showed that spark-shell had no permissions on the BigDL directory; chmod 777 on it fixed this.
  • Could not import the Intel packages.
    • The official site gives the wrong package name and path; make sure the jar you load at startup is the one under {BigDL_HOME}/dist/lib/bigdl_XXXX.jar.
  • In the end it died on a weird ...; it feels like BigDL needs to be rebuilt, but I ran out of time.
(screenshot)
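For reference, this is roughly the smoke test I was trying to get working inside spark-shell once the jar loads. It is a sketch only: the package paths and calls (com.intel.analytics.bigdl.utils.Engine, nn.Sequential, nn.Linear, Engine.init) are from the BigDL 0.x Scala API as I remember it, and should be treated as assumptions rather than something verified against the version this build produced.

```scala
// Assumes spark-shell was launched with the BigDL jar from {BigDL_HOME}/dist/lib
// passed via --jars. Package names follow the BigDL 0.x Scala API (assumption).
import com.intel.analytics.bigdl.utils.Engine
import com.intel.analytics.bigdl.nn.{Sequential, Linear, ReLU}

Engine.init  // initialise BigDL's execution engine on top of the Spark context

// Build a tiny MLP just to confirm the classes resolve and the engine is up.
val model = Sequential[Float]()
  .add(Linear[Float](10, 5))
  .add(ReLU[Float]())
  .add(Linear[Float](5, 2))

println(model)
```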

Spark

Studied some of Spark's execution fundamentals (a small spark-shell sketch follows the list):

  • Transformations and actions
  • Wide vs. narrow dependencies
  • RDD
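As a reminder of the lazy-evaluation point, a minimal sketch one can paste into spark-shell (sc is the SparkContext the shell provides; the numbers are toy data):

```scala
// Transformations (map, filter) are lazy: they only extend the RDD lineage.
val nums    = sc.parallelize(1 to 10)        // RDD[Int]
val squares = nums.map(n => n * n)           // narrow dependency: no shuffle
val evens   = squares.filter(_ % 2 == 0)     // still nothing has executed

// Actions (count, collect, reduce) trigger an actual job over the lineage.
println(evens.count())                       // 5
println(evens.collect().mkString(", "))      // 4, 16, 36, 64, 100

// A key-based transformation such as reduceByKey requires a shuffle, i.e. a
// wide dependency between parent and child partitions.
val pairs = nums.map(n => (n % 2, n))
println(pairs.reduceByKey(_ + _).collect().toSeq) // e.g. (0,30), (1,25); order may vary
```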

Also ran the word2vec example (a rough sketch of the corresponding code is below):


(screenshots of the word2vec run)
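For reference, the Spark MLlib word2vec example boils down to something like the following in spark-shell; the input path and the query word here are placeholders, not the exact ones from my run.

```scala
import org.apache.spark.mllib.feature.Word2Vec

// Each element of the RDD is one "sentence" as a sequence of tokens.
// "data/text8" is a placeholder path for a plain-text corpus.
val input = sc.textFile("data/text8").map(line => line.split(" ").toSeq)

val word2vec = new Word2Vec().setVectorSize(100).setMinCount(5)
val model    = word2vec.fit(input)

// Nearest neighbours of a word in the learned embedding space.
model.findSynonyms("china", 5).foreach { case (word, cosSim) =>
  println(f"$word%-15s $cosSim%.4f")
}
```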
