Running Neural Network Inference on an ARM Microcontroller

Most people assume that neural-network inference belongs on GPUs, TPUs, and other dedicated ASICs. In fact, it can also run on an ARM microcontroller.

A while back (about half a year ago), ARM released CMSIS-NN, but few people seemed interested.

In practice, the overhead of running an NN on ARM is not that large. On-device learning is another matter, of course: training costs far more than inference.

Actual overhead (from the official example). All of the buffers below can be allocated only when needed and freed right after use:

scratch_buffer ≈ 40KB

col_buffer ≈ 3KB

col_buffer is CNN_IMG_SIZE * CNN_IMG_SIZE * NUM_OUT_CH bytes. For example, with 3 colour channels (RGB) and a CNN input size of 32, that is 3 * 32 * 32 = 3072 bytes ≈ 3 KB.

Output buffer: IP1_OUT_DIM, 10 bytes in the example, sized by the number of possible classes.

ARM actually provides a good worked example, which I summarize here.

First, install the Caffe deep-learning framework on a PC. The rough procedure is as follows.

1) Prepare the system environment.

apt install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler

apt install libboost-all-dev

apt install libatlas-base-dev

apt install libgflags-dev libgoogle-glog-dev liblmdb-dev

apt install python-dev

2) Install Anaconda2.

wget https://mirrors.tuna.tsinghua.edu.cn/anaconda/archive/Anaconda2-5.0.1-Linux-x86_64.sh

bash Anaconda2-5.0.1-Linux-x86_64.sh

3) Download Caffe.

git clone https://github.com/BVLC/caffe.git

4) Export the relevant environment variables in .bashrc.

export PATH=/root/anaconda2/bin:$PATH

export PYTHONPATH=/root/caffe/python:$PYTHONPATH

5) Copy the example config file, then edit it.

cp Makefile.config.example Makefile.config

6) Reference Makefile.config after editing (CPU-only build):

## Refer to http://caffe.berkeleyvision.org/installation.html

# Contributions simplifying and improving our build system are welcome!

# cuDNN acceleration switch (uncomment to build with cuDNN).

# USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support)

CPU_ONLY := 1

# uncomment to disable IO dependencies and corresponding data layers

# USE_OPENCV := 0

# USE_LEVELDB := 0

# USE_LMDB := 0

# This code is taken from https://github.com/sh1r0/caffe-android-lib

# USE_HDF5 := 0

# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)

#You should not set this flag if you will be reading LMDBs with any

#possibility of simultaneous read and write

# ALLOW_LMDB_NOLOCK := 1

# Uncomment if you're using OpenCV 3

# OPENCV_VERSION := 3

# To customize your choice of compiler, uncomment and set the following.

# N.B. the default for Linux is g++ and the default for OSX is clang++

# CUSTOM_CXX := g++

# CUDA directory contains bin/ and lib/ directories that we need.

CUDA_DIR := /usr/local/cuda

# On Ubuntu 14.04, if cuda tools are installed via

# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:

# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.

# For CUDA < 6.0, comment the *_50 through *_61 lines for compatibility.

# For CUDA < 8.0, comment the *_60 and *_61 lines for compatibility.

# For CUDA >= 9.0, comment the *_20 and *_21 lines for compatibility.

CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
		-gencode arch=compute_20,code=sm_21 \
		-gencode arch=compute_30,code=sm_30 \
		-gencode arch=compute_35,code=sm_35 \
		-gencode arch=compute_50,code=sm_50 \
		-gencode arch=compute_52,code=sm_52 \
		-gencode arch=compute_60,code=sm_60 \
		-gencode arch=compute_61,code=sm_61 \
		-gencode arch=compute_61,code=compute_61

# BLAS choice:

# atlas for ATLAS (default)

# mkl for MKL

# open for OpenBlas

BLAS := atlas

# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.

# Leave commented to accept the defaults for your choice of BLAS

# (which should work)!

# BLAS_INCLUDE := /path/to/your/blas

# BLAS_LIB := /path/to/your/blas

# Homebrew puts openblas in a directory that is not on the standard search path

# BLAS_INCLUDE := $(shell brew --prefix openblas)/include

# BLAS_LIB := $(shell brew --prefix openblas)/lib

# This is required only if you will compile the matlab interface.

# MATLAB directory should contain the mex binary in /bin.

# MATLAB_DIR := /usr/local

# MATLAB_DIR := /Applications/MATLAB_R2012b.app

# NOTE: this is required only if you will compile the python interface.

# We need to be able to find Python.h and numpy/arrayobject.h.

PYTHON_INCLUDE := /usr/include/python2.7 \
		/usr/lib/python2.7/dist-packages/numpy/core/include

# Anaconda Python distribution is quite popular. Include path:

# Verify anaconda location, sometimes it's in root.

ANACONDA_HOME := $(HOME)/anaconda2

PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
		$(ANACONDA_HOME)/include/python2.7 \
		$(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include

# Uncomment to use Python 3 (default is Python 2)

# PYTHON_LIBRARIES := boost_python3 python3.5m

# PYTHON_INCLUDE := /usr/include/python3.5m

# /usr/lib/python3.5/dist-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.

#PYTHON_LIB := /usr/lib

PYTHON_LIB := $(ANACONDA_HOME)/lib

# Homebrew installs numpy in a non standard path (keg only)

# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include

# PYTHON_LIB += $(shell brew --prefix numpy)/lib

# Uncomment to support layers written in Python (will link against Python libs)

# WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.

INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial/

LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib

# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies

# INCLUDE_DIRS += $(shell brew --prefix)/include

# LIBRARY_DIRS += $(shell brew --prefix)/lib

# NCCL acceleration switch (uncomment to build with NCCL)

# https://github.com/NVIDIA/nccl (last tested version: v1.2.3-1+cuda8.0)

# USE_NCCL := 1

# Uncomment to use `pkg-config` to specify OpenCV library paths.

# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)

# USE_PKG_CONFIG := 1

# N.B. both build and distribute dirs are cleared on `make clean`

BUILD_DIR := build

DISTRIBUTE_DIR := distribute

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171

# DEBUG := 1

# The ID of the GPU that 'make runtest' will use to run unit tests.

TEST_GPUID := 0

# enable pretty build (comment to see full commands)

Q ?= @

7) Fix the HDF5 library references, mainly in the Makefile in the Caffe root directory.

[Add the serial HDF5 path to the include dirs, and in the link libraries change hdf5_hl and hdf5 to hdf5_serial_hl and hdf5_serial:]

INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial/

LIBRARIES += hdf5_serial_hl hdf5_serial

8) Build.

make pycaffe -j8

make all -j8

make test -j8

make runtest -j8

This is only a rough outline; if it is still unclear, search for more detail yourself, or see my earlier articles.

Then clone ARM's example repository.

git clone https://github.com/ARM-software/ML-examples

The LMDB files it needs are produced by the data-preparation step below; first, edit the paths in the prototxt so they point at your own training set.

[These files do not exist yet at this point: they still have to be generated, and creating them by hand would be useless.]

[Screenshot: editing the LMDB paths in the prototxt]

Move over to Caffe to fetch and create the training data.

cd caffe/data/cifar10

./get_cifar10.sh

In the same directory, the paths in create_cifar10.sh need a small fix.

[Screenshot: the path fixes in create_cifar10.sh]

Also, recent versions of Caffe require the script to be run from the repository root; in my case, from ~/caffe:

./examples/cifar10/create_cifar10.sh

Once all of the above is done, two LMDB directories are produced.

[Screenshot: the two generated LMDB directories]

Finally, go into ARM's ML examples directory and run the quantization step on the trained network.

python nn_quantizer.py --model models/cifar10_m4_train_test.prototxt --weights models/cifar10_m4_iter_70000.caffemodel.h5 --save models/cifar10_m4.pkl

Because ARM's NN library uses only 8-bit (q7) precision and the iteration count is modest, this step is relatively fast. If you need higher accuracy, you can edit the prototxt yourself and find your own trade-off.

[Screenshot: quantizer output]

Generate the deployment files.

python code_gen.py --model models/cifar10_m4.pkl --out_dir code/m4

The tool then reports the exact memory cost of every layer (the shared buffers only need to satisfy the largest layer), along with the generated code.

[Screenshot: per-layer buffer sizes reported by code_gen.py]

The code directory now contains the generated files, including the key functions.

[Screenshot: the generated code and its key functions]

As the comments at the top of the generated code show, the input parameter is a small square RGB888 image, and the output is the vector of per-class scores.

What comes next? An image-resize routine, and how the application layer should consume these outputs.
