Song Han's PhD Dissertation (Concise Summary)


The dissertation consists of three parts, corresponding to Song Han's main works during his PhD. The papers and companion write-ups are listed below:

  1. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. A software-side pipeline that drastically compresses the network's weights. Reference: https://blog.csdn.net/weixin_36474809/article/details/80643784
  2. DSD: Dense-Sparse-Dense Training for Deep Neural Networks. A dense -> sparse -> dense training schedule that improves network accuracy purely on the training side. Reference: https://blog.csdn.net/weixin_36474809/article/details/85322584
  3. EIE: Efficient Inference Engine on Compressed Deep Neural Network. Building on Deep Compression, EIE is a hardware implementation that accelerates the sparse, compressed network and achieves very good results in silicon. Reference: https://blog.csdn.net/weixin_36474809/article/details/85326634

Minimal code sketches of each of the three ideas follow this list.
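To make the Deep Compression pipeline concrete, here is a minimal NumPy sketch of its first two stages, magnitude pruning and k-means weight sharing (trained quantization). The function names, the toy 256×256 layer, and the hyperparameters (90% sparsity, 16 clusters, 20 k-means iterations) are illustrative assumptions, not the paper's actual code; the final Huffman-coding stage and the retraining/fine-tuning steps are only noted in comments.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights (the pruning stage)."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) > threshold
    return weights * mask, mask

def kmeans_quantize(weights, mask, n_clusters=16, iters=20):
    """Cluster the surviving weights so each one stores only a small codebook
    index (the trained-quantization / weight-sharing stage)."""
    vals = weights[mask]
    # Linear initialization of centroids over the surviving weight range.
    centroids = np.linspace(vals.min(), vals.max(), n_clusters)
    for _ in range(iters):
        idx = np.argmin(np.abs(vals[:, None] - centroids[None, :]), axis=1)
        for k in range(n_clusters):
            if np.any(idx == k):
                centroids[k] = vals[idx == k].mean()
    quantized = weights.copy()
    quantized[mask] = centroids[idx]
    return quantized, centroids, idx

# Toy 256x256 layer (illustrative only).
w = np.random.randn(256, 256).astype(np.float32)
w_pruned, mask = magnitude_prune(w, sparsity=0.9)
w_q, codebook, codes = kmeans_quantize(w_pruned, mask, n_clusters=16)
# The ~10% surviving weights are now 4-bit indices into a 16-entry codebook;
# the paper retrains after pruning, fine-tunes the shared centroids, and then
# compresses the index stream further with Huffman coding.
```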
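The DSD schedule can be illustrated on a toy least-squares problem: train dense, prune the smallest weights and retrain under the mask, then release the mask and train dense again. The toy data, the 50% pruning ratio, and the plain gradient-descent train() helper below are assumptions for illustration only, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(512, 32))
true_w = rng.normal(size=(32, 1)) * (rng.random((32, 1)) > 0.5)  # sparse target
y = X @ true_w + 0.01 * rng.normal(size=(512, 1))

def train(w, mask=None, steps=200, lr=0.05):
    """One training phase of plain gradient descent on least squares.
    If a mask is given, pruned weights are held at zero (the Sparse phase)."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(X)
        w = w - lr * grad
        if mask is not None:
            w = w * mask
    return w

# Dense: learn a full (dense) weight vector from scratch.
w = train(np.zeros((32, 1)))
# Sparse: prune the 50% smallest-magnitude weights and retrain under the mask.
mask = (np.abs(w) >= np.quantile(np.abs(w), 0.5)).astype(w.dtype)
w = train(w * mask, mask=mask)
# Dense: drop the mask and retrain all weights, starting from the sparse solution.
w = train(w)
```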
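EIE itself is a hardware design, but the computation it accelerates, a sparse matrix-vector product that skips both zero weights and zero activations, can be sketched in software. The CSC-style layout and the function names below are illustrative assumptions; the real engine additionally uses 4-bit codebook indices, relative row indexing, and many parallel processing elements.

```python
import numpy as np

def to_csc(w):
    """Store a pruned weight matrix column by column: for each input column,
    keep only the nonzero values and their row indices (a CSC-like layout,
    similar in spirit to EIE's compressed format, minus the 4-bit encoding)."""
    values, rows, col_ptr = [], [], [0]
    for j in range(w.shape[1]):
        nz = np.nonzero(w[:, j])[0]
        rows.extend(nz)
        values.extend(w[nz, j])
        col_ptr.append(len(values))
    return np.array(values), np.array(rows), np.array(col_ptr)

def sparse_matvec(values, rows, col_ptr, x, n_rows):
    """y = W @ x, skipping zero weights (never stored) and zero activations
    (columns whose input x[j] is zero are never visited) -- the two kinds of
    sparsity EIE exploits in hardware."""
    y = np.zeros(n_rows)
    for j in np.nonzero(x)[0]:                       # dynamic activation sparsity
        for k in range(col_ptr[j], col_ptr[j + 1]):  # static weight sparsity
            y[rows[k]] += values[k] * x[j]
    return y

# Toy check against a dense matvec (sizes and the ~90% sparsity are illustrative).
w = np.random.randn(64, 128) * (np.random.rand(64, 128) > 0.9)
x = np.maximum(np.random.randn(128), 0)   # ReLU-style activations, many zeros
vals, rows, ptr = to_csc(w)
assert np.allclose(sparse_matvec(vals, rows, ptr, x, 64), w @ x)
```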

Motivation (Why): Neural networks are difficult to deploy on embedded systems with limited hardware resources.

  * Computationally intensive
  * Memory intensive

Approach (How): Co-design the algorithm and the hardware for deep learning.

  * Simplify and compress DNN models
  * Design customized hardware for the compressed model

Result: EIE outperforms a CPU, GPU, and mobile GPU by factors of 189×, 13×, and 307×, respectively, and consumes 24,000×, 3,400×, and 2,700× less energy than the CPU, GPU, and mobile GPU.

For full details see: https://blog.csdn.net/weixin_36474809/article/details/85613013

 
