References

Pruning:

[1] Song Han, Huizi Mao, and William J. Dally. Deep Compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In ICLR, 2016.

Quantization:

[1] Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.

[2] Raghuraman Krishnamoorthi. Quantizing deep convolutional networks for efficient inference: A whitepaper. CoRR, abs/1806.08342, 2018.

[3] Jiaxiang Wu, Cong Leng, Yuhang Wang, Qinghao Hu, and Jian Cheng. Quantized convolutional neural networks for mobile devices. CoRR, abs/1512.06473, 2015.

[4] Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, and Yurong Chen. Incremental network quantization: Towards lossless CNNs with low-precision weights. CoRR, abs/1702.03044, 2017.

[5] Shuchang Zhou, Zekun Ni, Xinyu Zhou, He Wen, Yuxin Wu, and Yuheng Zou. DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients. CoRR, abs/1606.06160, 2016.

[6] Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, and Kurt Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv:1602.07360 [cs.CV] (https://arxiv.org/abs/1602.07360).

[7] Michael Zhu and Suyog Gupta. To prune, or not to prune: exploring the efficacy of pruning for model compression. 2017 NIPS Workshop on Machine Learning on the Phone and other Consumer Devices (https://arxiv.org/pdf/1710.01878.pdf).

[8] Sharan Narang, Gregory Diamos, Shubho Sengupta, and Erich Elsen. Exploring sparsity in recurrent neural networks. 2017 (https://arxiv.org/abs/1704.05119).

[9] Raanan Y. Yehezkel Rohekar, Guy Koren, Shami Nisimov, and Gal Novik. Unsupervised deep structure learning by recursive independence testing. 2017 NIPS Workshop on Bayesian Deep Learning (http://bayesiandeeplearning.org/2017/papers/18.pdf).

Fusion:

[1] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.

Parameter sharing:

[1] Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, and Jeff Dean. Efficient neural architecture search via parameter sharing. CoRR, abs/1802.03268, 2018.

Edge devices:

[1] Tien-Ju Yang, Andrew G. Howard, Bo Chen, Xiao Zhang, Alec Go, Mark Sandler, Vivienne Sze, and Hartwig Adam. NetAdapt: Platform-aware neural network adaptation for mobile applications. In ECCV, 2018.

[2] Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. ShuffleNet: An extremely efficient convolutional neural network for mobile devices. CoRR, abs/1707.01083, 2017.

Cambricon:

[1] Zidong Du et al. ShiDianNao: Shifting vision processing closer to the sensor. In 2015 ACM/IEEE 42nd Annual International Symposium on Computer Architecture (ISCA), 2015, pp. 92-104.

[2] Tianshi Chen, Zidong Du, Ninghui Sun, et al. DianNao: A small-footprint high-throughput accelerator for ubiquitous machine learning. In Proceedings of the 19th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2014.

[3] Tao Luo, Shaoli Liu, Ling Li, et al. DaDianNao: A neural network supercomputer. IEEE Transactions on Computers, 2017, 66(1): 73-88.

PSPNet:

[1] Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia. Pyramid scene parsing network. In CVPR, 2017.

ResNet:

[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

DeepLab:

[1] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. TPAMI, 2017.


TPU:

[1] Norman P. Jouppi, Cliff Young, Nishant Patil, David Patterson, et al. In-datacenter performance analysis of a Tensor Processing Unit. In ISCA, 2017.

MobileNet:

[1] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications. CoRR, abs/1704.04861, 2017.

[2] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. MobileNetV2: Inverted residuals and linear bottlenecks. In CVPR, 2018.

[3] Andrew Howard, Mark Sandler, Grace Chu, et al. Searching for MobileNetV3. In ICCV, 2019.

FCN:

[1] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431-3440, 2015.

Datasets:

[1] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The Cityscapes dataset for semantic urban scene understanding. In CVPR, 2016.

[2] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.

Distillation:

[1] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop, 2015.

SqueezeNet:

[1] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. arXiv e-prints, Sept. 2017.

[2] Forrest N. Iandola, Matthew W. Moskewicz, Khalid Ashraf, Song Han, William J. Dally, and Kurt Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. CoRR, abs/1602.07360, 2016.

AlexNet:

[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.

BatchNorm:

[1] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
