SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size

  • Preface
  • Abstract
  • Introduction and Motivation
  • Sec 2 SqueezeNet
  • Sec 3 Related Work
  • Sec 4 Evaluation
  • Sec 5 Conclusions and Future Work

Preface

Paper: http://arxiv.org/abs/1602.07360
Code: https://github.com/DeepScale/SqueezeNet
The paper is still at its first arXiv version, so there is not a great deal of content yet.

Abstract

The abstract first points out three advantages that a smaller deep neural network offers at the same level of accuracy:

  1. Smaller DNNs require less communication across servers during distributed training.
  2. Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car.
  3. Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory.

It then introduces the paper's contribution: SqueezeNet!

  1. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters.
  2. With model compression techniques we are able to compress SqueezeNet to less than 1MB (461x smaller than AlexNet).

Quite enticing, isn't it?
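As a quick back-of-the-envelope check on these headline numbers, here is a minimal sketch. The weight counts below (~60M for AlexNet, ~1.25M for SqueezeNet) are approximate figures from the paper's evaluation table, not from this summary, so treat them as illustrative:

```python
# Rough arithmetic behind the headline claims (figures are approximate
# values reported in the paper, rounded here for illustration).
alexnet_params = 60_000_000      # AlexNet: ~60M weights (~240MB at 32-bit)
squeezenet_params = 1_250_000    # SqueezeNet: ~1.25M weights

# "50x fewer parameters": the ratio of raw weight counts.
ratio = alexnet_params / squeezenet_params
print(round(ratio))              # ~48, roughly the advertised 50x

# Uncompressed, SqueezeNet's weights occupy about 1.25M * 4 bytes ~ 5MB;
# it is Deep Compression that pushes this below 1MB (the 461x claim).
print(squeezenet_params * 4 / 1e6)
```

So the 50x figure comes straight from parameter counts; the sub-1MB figure additionally relies on compression.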

Introduction and Motivation

The first paragraph elaborates on the three advantages mentioned in the abstract; these three points also serve as the authors' motivation.
The introduction then briefly outlines each section of the paper, so the structure is easy to follow:

  1. Section 2, we describe SqueezeNet.
  2. We review related work in Section 3.
  3. We evaluate SqueezeNet in Section 4.
  4. We conclude in Section 5.

Sec. 2 SqueezeNet

The central theme: maintain accuracy while using far fewer parameters.

  1. Section 2.1 lays out the Architectural Design Strategies, arguably the essence of the whole paper. There are three:

    • Replace 3x3 filters with 1x1 filters.
    • Decrease the number of input channels to 3x3 filters.
    • Downsample late in the network so that convolution layers have large activation maps.
  2. Section 2.2 describes how the Fire module is designed. See the figure:
    [Figure 1: the Fire module]

  3. Section 2.3 presents the SqueezeNet architecture. See the figure:
    [Figure 2: the SqueezeNet architecture]
    This subsection also mentions some further design details of SqueezeNet.
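To see how the first two strategies cut parameters, here is a small sketch (my own illustration, not the authors' code) that counts the weights in a Fire module versus a plain 3x3 convolution of the same output width; the layer sizes mirror the paper's fire2 module (96 input channels; 16 squeeze filters; 64 + 64 expand filters):

```python
def conv_params(in_ch, out_ch, k):
    """Weight count of a k x k convolution layer (biases ignored)."""
    return in_ch * out_ch * k * k

def fire_params(in_ch, s1x1, e1x1, e3x3):
    """A Fire module: a 1x1 'squeeze' layer with s1x1 filters feeding an
    'expand' layer that mixes e1x1 1x1 filters and e3x3 3x3 filters."""
    squeeze = conv_params(in_ch, s1x1, 1)   # strategy 2: shrink input channels
    expand = (conv_params(s1x1, e1x1, 1)    # strategy 1: many 1x1 filters
              + conv_params(s1x1, e3x3, 3))
    return squeeze + expand

# Plain 3x3 layer: 96 in, 128 out, vs. a fire2-like module (also 128 out).
plain = conv_params(96, 128, 3)                     # 110592 weights
fire = fire_params(96, s1x1=16, e1x1=64, e3x3=64)   # 11776 weights
print(plain, fire, round(plain / fire, 1))
```

With these sizes the Fire module needs roughly 9x fewer weights than the plain 3x3 layer, which is exactly the kind of saving the three strategies are after.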

Sec. 3 Related Work

The goal of the entire line of work is:

to identify a model that has very few parameters while preserving accuracy.

Hence, compressing the network becomes an indispensable part of the job.

Sec. 4 Evaluation

Not much needs to be said about this section; a single table tells the whole story.
[Figure 3: the evaluation table]
At the end of the section, the authors take the opportunity to show off the power of Deep Compression: even though SqueezeNet already has very few parameters, it can still be compressed by a further 9.2x with no loss of accuracy.
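That 9.2x figure lines up with the abstract's 461x claim. A tiny consistency check, using approximate model sizes from the paper (AlexNet ~240MB, SqueezeNet ~4.8MB; my numbers, not stated in this summary):

```python
# Checking that the two compression claims are mutually consistent
# (model sizes are approximate values reported in the paper).
alexnet_mb = 240.0
squeezenet_mb = 4.8                       # already ~50x smaller than AlexNet
compressed_mb = squeezenet_mb / 9.2       # Deep Compression's extra 9.2x

print(round(compressed_mb, 2))            # ~0.52 MB, i.e. "less than 1MB"
print(round(alexnet_mb / compressed_mb))  # ~460x, matching the ~461x claim
```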

Sec. 5 Conclusions and Future Work

I will close with two quotes from the paper.

The first shows how highly the authors regard SqueezeNet:

We think SqueezeNet will be a good candidate DNN architecture for a variety of applications, especially those in which small model size is of importance.

The second expresses the authors' hope that this work will inspire readers to build on it:

We hope that SqueezeNet will inspire the reader to consider and explore the broad range of possibilities in the design space of DNN architectures.


Update, August 22, 2016

SqueezeNet's drawbacks: a relatively large memory footprint at runtime, and slow execution on mobile devices.
