Notes Comparing the Cambricon Paper Series

Key papers:

Cambricon's series, beginning in 2014:
(1)DianNao: A Small-Footprint High-Throughput Accelerator for Ubiquitous Machine-Learning
(2)DaDianNao: A Machine-Learning Supercomputer
(3)PuDianNao: A Polyvalent Machine Learning Accelerator
(4)ShiDianNao: Shifting Vision Processing Closer to the Sensor
(5)Cambricon-X: An Accelerator for Sparse Neural Networks

Focus of each paper:

(1) DianNao: can be regarded as the foundation of the hardware design

(2) DaDianNao: a high-performance computing architecture aimed at the server side

(3) ShiDianNao: aimed at edge-device application scenarios

(4) PuDianNao: aimed at more general machine-learning algorithms

(5) Cambricon: an instruction set architecture aimed at a broader range of machine-learning accelerators

Cambricon's DianNao series of chip architectures likewise adopts streaming multiply-add trees (DianNao [2], DaDianNao [3], PuDianNao [4]) and a systolic-array-like structure (ShiDianNao [5]). To handle small-scale matrix operations while keeping utilization high, and to better support concurrent multitasking, DaDianNao and PuDianNao lower the compute granularity with a two-level hierarchical architecture: each PE in the top-level PE array is itself built from multiple smaller functional units. The finer-grained task allocation and scheduling consumes extra logic, but it helps keep every functional unit computing efficiently and keeps power under control.
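To make the multiply-add tree concrete, here is a minimal software sketch of the two compute stages of a DianNao-style functional unit: a bank of parallel multipliers feeding a binary adder tree. The 16-lane width is in the spirit of DianNao's NFU, but the names (`mac_tree`, `LANES`) and the use of `float` are illustrative assumptions; the actual datapath is fixed-point hardware.

```c
#include <stdio.h>

#define LANES 16  /* parallel multiplier lanes; illustrative assumption */

/* One reduction for one output neuron: multiply LANES input/weight
 * pairs in parallel (the multiplier stage), then collapse the
 * products through a log2(LANES)-deep adder tree (the adder stage).
 * Hardware evaluates each tree level in its own pipeline stage;
 * this software model simply serializes the levels. */
static float mac_tree(const float in[LANES], const float w[LANES]) {
    float stage[LANES];
    for (int i = 0; i < LANES; i++)       /* multiplier stage */
        stage[i] = in[i] * w[i];
    for (int n = LANES; n > 1; n /= 2)    /* adder tree: halve each level */
        for (int i = 0; i < n / 2; i++)
            stage[i] = stage[2 * i] + stage[2 * i + 1];
    return stage[0];                      /* partial sum for one output neuron */
}

int main(void) {
    float in[LANES], w[LANES];
    for (int i = 0; i < LANES; i++) {
        in[i] = 1.0f;
        w[i] = 0.5f;
    }
    printf("partial sum = %f\n", mac_tree(in, w));  /* expect 8.0 */
    return 0;
}
```

The two-level refinement in DaDianNao and PuDianNao amounts to replicating several narrower trees like this inside each top-level PE, so that small matrices and concurrent tasks can each keep their own units busy instead of underfilling one wide tree.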

1. DianNao

Reference: https://blog.csdn.net/evolone/article/details/80765094

2. DaDianNao

Reference: https://blog.csdn.net/u013108511/article/details/88831132

3. ShiDianNao

Reference: https://blog.csdn.net/evolone/article/details/82594250

https://www.dazhuanlan.com/2019/12/18/5df9db0b0812e/

https://blog.csdn.net/weixin_33810006/article/details/87977439


References:

[2] Chen T, Du Z, Sun N, et al. DianNao: a small-footprint high-throughput accelerator for ubiquitous machine-learning[C]// International Conference on Architectural Support for Programming Languages and Operating Systems. ACM, 2014: 269-284.
[3] Chen Y, Luo T, Liu S, et al. DaDianNao: a machine-learning supercomputer[C]// IEEE/ACM International Symposium on Microarchitecture. IEEE, 2014: 609-622.
[4] Liu D, Chen T, Liu S, et al. PuDianNao: a polyvalent machine learning accelerator[C]// Twentieth International Conference on Architectural Support for Programming Languages and Operating Systems. ACM, 2015: 369-381.
[5] Du Z, Fasthuber R, Chen T, et al. ShiDianNao: shifting vision processing closer to the sensor[C]// ACM/IEEE International Symposium on Computer Architecture. IEEE, 2015: 92-104.
[6] Chung E, Fowers J, Ovtcharov K, et al. Accelerating persistent neural networks at datacenter scale. Hot Chips 2017.
[7] Meng W, Gu Z, Zhang M, et al. Two-bit networks for deep learning on resource-constrained embedded devices[J]. arXiv preprint arXiv:1701.00485, 2017.
[8] Hubara I, Courbariaux M, Soudry D, et al. Binarized neural networks[C]// Advances in Neural Information Processing Systems. 2016: 4107-4115.
[9] Qiu J, Wang J, Yao S, et al. Going deeper with embedded FPGA platform for convolutional neural network[C]// Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays. ACM, 2016: 26-35.
[10] Xilinx. Deep Learning with INT8 Optimization on Xilinx Devices. www.xilinx.com/support/doc…
[11] Han S, Kang J, Mao H, et al. ESE: efficient speech recognition engine with compressed LSTM on FPGA[J]. arXiv preprint arXiv:1612.00694, 2016.
[12] Zhang S, Du Z, Zhang L, et al. Cambricon-X: an accelerator for sparse neural networks[C]// IEEE/ACM International Symposium on Microarchitecture. IEEE Computer Society, 2016: 1-12.
[13] Shafiee A, Nag A, Muralimanohar N, et al. ISAAC: a convolutional neural network accelerator with in-situ analog arithmetic in crossbars[C]// Proceedings of the 43rd International Symposium on Computer Architecture. IEEE Press, 2016: 14-26.
