Training:
1. Caffe parameter reference: https://blog.csdn.net/qq_14845119/article/details/54929389
2. Effect of batch_size on training results:
https://www.zhihu.com/question/61607442
https://www.zhihu.com/question/32673260
http://playground.tensorflow.org/
3. Negative samples in SSD:
Deep-learning notes on the SSD (Single Shot MultiBox Detector) object detection model:
https://www.sohu.com/a/168738025_717210
4. Survey of Crowd Counting research (focus on the datasets):
https://mp.weixin.qq.com/s?__biz=MzUxNjcxMjQxNg==&mid=2247486326&idx=2&sn=53f317e573f4eca98e29ce99101ddb87&chksm=f9a279f9ced5f0efcb7ec0c40e18b55b8bcd2f2bfee0d12fd38d04faca14fa6616e0350fcf9b&mpshare=1&scene=1&srcid=1214IMVPSu1SQvXaqgfcdsi1#rd
5. Ideas for optimizing training:
Start from the data (filtering, cleaning, and increasing the amount of data), from the training hyperparameters, and from the accuracy-evaluation method (detection_eval, counting accuracy on video detection).
Pruning & Compression:
1. MobileNet compression: https://blog.csdn.net/muwu5635/article/details/75309434
2. Papers on model compression:
https://blog.csdn.net/sanallen/article/details/79237581
3. A survey of CNN model compression and acceleration algorithms:
https://mp.weixin.qq.com/s/KQo8jFmFnWLnydHli3XwEw?scene=25#wechat_redirect
4. Model compression methods: compressing-deep-neural-nets
Original: http://machinethink.net/blog/compressing-deep-neural-nets/
Chinese translation: http://www.360doc.com/content/17/0904/15/1609415_684551872.shtml
5. Model compression: https://blog.csdn.net/wspba/article/details/75671573
6. Pruning and compressing a caffemodel (part 2): https://blog.csdn.net/Dlyldxwl/article/details/79502829
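A recurring idea in the pruning links above is magnitude-based weight pruning: zero out the smallest-magnitude fraction of the weights, then fine-tune to recover accuracy. A minimal NumPy sketch of that one step (the function name and example values are my own, not taken from any of the linked posts):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to prune
    if k == 0:
        return weights.copy()
    # Threshold = k-th smallest absolute value; everything at or below it is pruned.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.array([[0.1, -2.0, 0.03],
              [1.5, -0.02, 0.7]])
pruned = magnitude_prune(w, 0.5)  # prune the ~50% smallest-magnitude weights
```

In a real pipeline this is applied layer by layer (often with per-layer sparsity targets) and alternated with fine-tuning; the sketch only shows the masking step itself.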
Quantization:
1. Quantization methods for deep learning models (paper notes & the TensorFlow Lite quantization scheme):
https://blog.csdn.net/cokeonly/article/details/79024279
2. An introduction to CNN quantization via the TensorFlow Lite source code:
https://zhuanlan.zhihu.com/p/42811261
http://openaccess.thecvf.com/content_cvpr_2018/papers/Jacob_Quantization_and_Training_CVPR_2018_paper.pdf
3. Why GEMM is at the heart of deep learning:
https://petewarden.com/2015/04/20/why-gemm-is-at-the-heart-of-deep-learning/
4. Quantization in NVIDIA TensorRT:
https://note.youdao.com/share/?id=829ba6cabfde990e2832b048a4f492b3&type=note#/
5. ncnn quantization:
https://github.com/Tencent/ncnn/wiki/quantized-int8-inference
https://github.com/BUG1989/caffe-int8-convert-tools
6. IQmath float-to-fixed-point conversion:
https://wenku.baidu.com/view/066eedc9da38376baf1fae16.html
Intel:
https://software.intel.com/zh-cn/articles/lower-numerical-precision-deep-learning-inference-and-training
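The int8 schemes in the links above (TensorFlow Lite, TensorRT, ncnn) share one core idea: approximate a float32 tensor as an int8 tensor times a per-tensor scale, chosen so the value range maps onto [-127, 127]. A minimal symmetric-quantization sketch in NumPy (function names are my own; real toolchains additionally calibrate the scale over a dataset rather than using the raw max, and may use per-channel scales):

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor int8 quantization: x ≈ scale * q.
    Assumes x contains at least one nonzero element."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

x = np.array([0.5, -1.27, 0.02, 1.0], dtype=np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
# Per-element rounding error is bounded by scale / 2.
```

This is also why the GEMM link belongs in this section: once weights and activations are int8, the convolution's inner GEMM runs on 8-bit integer multiply-accumulate units, and the scales are folded back in afterward.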