Installing ncnn

Install protobuf

Download the protobuf source code:
sudo apt-get install autoconf automake libtool curl
git clone https://github.com/google/protobuf 
cd protobuf 

Note that gmock can fail to download because of network problems, so make sure you have a working internet connection.
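Depending on which protobuf version you checked out, gtest/gmock may ship as git submodules instead of being downloaded by autogen.sh. A hedged sketch of fetching them explicitly (run inside the protobuf checkout; harmless if there are no submodules):

```shell
# Fetch googletest/googlemock if they are tracked as git submodules
# (this is the case on newer protobuf checkouts).
if git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
    git submodule update --init --recursive
else
    echo "run this inside the protobuf source directory"
fi
```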
Build and install protobuf:
./autogen.sh
./configure
make
make check
sudo make install
sudo ldconfig 

The default install path for the libraries is /usr/local/lib.
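A quick sanity check that the install succeeded (the exact version string depends on the commit you checked out):

```shell
# protoc should now be on PATH, and libprotobuf visible to the dynamic
# linker after the ldconfig above
protoc --version
ldconfig -p | grep libprotobuf
```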

Install ncnn

git clone https://github.com/Tencent/ncnn

After the download completes, build the source:
cd ncnn
mkdir build && cd build
cmake ..

-- CMAKE_INSTALL_PREFIX = /home/tclxa/ncnn/build/install
-- Configuring done
-- Generating done
-- Build files have been written to: /home/tclxa/ncnn/build
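As an alternative to a plain `cmake ..`, recent ncnn versions expose a CMake switch to build the bundled examples in the same pass (the `NCNN_BUILD_EXAMPLES` option is assumed from current ncnn; older checkouts may not have it and instead require editing CMakeLists.txt, as shown in the test section later):

```shell
# Configure a release build and ask for the example programs as well.
# NCNN_BUILD_EXAMPLES may not exist on older ncnn checkouts.
cmake -DCMAKE_BUILD_TYPE=Release -DNCNN_BUILD_EXAMPLES=ON ..
```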

make -j8

[  1%] Running C++ protocol buffer compiler on onnx.proto
[  2%] Running C++ protocol buffer compiler on caffe.proto
[  4%] Built target mxnet2ncnn
[ 90%] Built target ncnn
Scanning dependencies of target onnx2ncnn
[ 92%] Built target ncnn2mem
[ 95%] Building CXX object tools/onnx/CMakeFiles/onnx2ncnn.dir/onnx.pb.cc.o
[ 95%] Building CXX object tools/onnx/CMakeFiles/onnx2ncnn.dir/onnx2ncnn.cpp.o
Scanning dependencies of target caffe2ncnn
[ 96%] Building CXX object tools/caffe/CMakeFiles/caffe2ncnn.dir/caffe2ncnn.cpp.o
[ 97%] Building CXX object tools/caffe/CMakeFiles/caffe2ncnn.dir/caffe.pb.cc.o
/home/tclxa/ncnn/tools/onnx/onnx2ncnn.cpp: In function ‘int main(int, char**)’:
/home/tclxa/ncnn/tools/onnx/onnx2ncnn.cpp:463:32: warning: unused variable ‘output_name’ [-Wunused-variable]
             const std::string& output_name = node.output(j);
                                ^~~~~~~~~~~
/home/tclxa/ncnn/tools/onnx/onnx2ncnn.cpp:1397:81: warning: format ‘%d’ expects argument of type ‘int’, but argument 4 has type ‘google::protobuf::int64 {aka long int}’ [-Wformat=]
                     fprintf(stderr, "  # %s=%d\n", attr.name().c_str(), attr.i());
                                                                         ~~~~~~~~^
[ 98%] Linking CXX executable onnx2ncnn
[ 98%] Built target onnx2ncnn
[100%] Linking CXX executable caffe2ncnn
[100%] Built target caffe2ncnn

make install

[ 85%] Built target ncnn
[ 87%] Built target ncnn2mem
[ 92%] Built target caffe2ncnn
[ 95%] Built target mxnet2ncnn
[100%] Built target onnx2ncnn
Install the project...
-- Install configuration: "release"
-- Installing: /home/tclxa/ncnn/build/install/lib/libncnn.a
-- Installing: /home/tclxa/ncnn/build/install/include/allocator.h
-- Installing: /home/tclxa/ncnn/build/install/include/blob.h
-- Installing: /home/tclxa/ncnn/build/install/include/cpu.h
-- Installing: /home/tclxa/ncnn/build/install/include/layer.h
-- Installing: /home/tclxa/ncnn/build/install/include/layer_type.h
-- Installing: /home/tclxa/ncnn/build/install/include/mat.h
-- Installing: /home/tclxa/ncnn/build/install/include/modelbin.h
......
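The installed headers and static library can then be used from your own programs. A minimal compile-line sketch, assuming the install prefix shown above (`my_app.cpp` is a placeholder for your own source file; ncnn is typically built with OpenMP, hence `-fopenmp`):

```shell
# Hypothetical compile/link line against the install tree produced above;
# adjust the paths to your own CMAKE_INSTALL_PREFIX.
g++ my_app.cpp -std=c++11 \
    -I/home/tclxa/ncnn/build/install/include \
    -L/home/tclxa/ncnn/build/install/lib \
    -lncnn -fopenmp -o my_app
```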

Install Caffe (CPU-only)

cd into the Caffe source tree (obtained with git clone https://github.com/BVLC/caffe), then create the build configuration from the template and edit it:
cp Makefile.config.example Makefile.config
nano Makefile.config:

## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!

# cuDNN acceleration switch (uncomment to build with cuDNN).
# USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support).
CPU_ONLY := 1

# uncomment to disable IO dependencies and corresponding data layers
USE_OPENCV := 0
# USE_LEVELDB := 0
# USE_LMDB := 0
# This code is taken from https://github.com/sh1r0/caffe-android-lib
# USE_HDF5 := 0

# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
#	You should not set this flag if you will be reading LMDBs with any
#	possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1

# Uncomment if you're using OpenCV 3
OPENCV_VERSION := 2

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++

# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 through *_61 lines for compatibility.
# For CUDA < 8.0, comment the *_60 and *_61 lines for compatibility.
# For CUDA >= 9.0, comment the *_20 and *_21 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
		-gencode arch=compute_20,code=sm_21 \
		-gencode arch=compute_30,code=sm_30 \
		-gencode arch=compute_35,code=sm_35 \
		-gencode arch=compute_50,code=sm_50 \
		-gencode arch=compute_52,code=sm_52 \
		-gencode arch=compute_60,code=sm_60 \
		-gencode arch=compute_61,code=sm_61 \
		-gencode arch=compute_61,code=compute_61

# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := atlas
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
# BLAS_INCLUDE := /path/to/your/blas
# BLAS_LIB := /path/to/your/blas

# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib

# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
PYTHON_INCLUDE := /usr/include/python2.7 \
		/usr/lib/python2.7/dist-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
# ANACONDA_HOME := $(HOME)/anaconda
#PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
		# $(ANACONDA_HOME)/include/python2.7 \
		# $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include

# Uncomment to use Python 3 (default is Python 2)
# PYTHON_LIBRARIES := boost_python3 python3.6m
# PYTHON_INCLUDE := /usr/include/python3.6m \
                 /usr/lib/python3.6/dist-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib
# PYTHON_LIB := $(ANACONDA_HOME)/lib

# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib

# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial/
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu/hdf5/serial/ 

# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib

# NCCL acceleration switch (uncomment to build with NCCL)
# https://github.com/NVIDIA/nccl (last tested version: v1.2.3-1+cuda8.0)
# USE_NCCL := 1

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1

# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1

# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0

# enable pretty build (comment to see full commands)
Q ?= @

# LINKFLAGS := -Wl,-rpath,$(HOME)/anaconda2/lib
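Before building, it is worth double-checking which settings are actually active (uncommented) in Makefile.config. A sketch of such a filter, run here on a small inline sample so it is self-contained; point the grep at your real Makefile.config instead:

```shell
# Build a tiny sample config, then list only the active (uncommented)
# "NAME := value" settings in it.
cat > /tmp/Makefile.config.sample <<'EOF'
# USE_CUDNN := 1
CPU_ONLY := 1
USE_OPENCV := 0
BLAS := atlas
EOF
grep -E '^[A-Za-z_]+ *:?=' /tmp/Makefile.config.sample
```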

make all -j8

PROTOC src/caffe/proto/caffe.proto
CXX .build_release/src/caffe/proto/caffe.pb.cc
CXX src/caffe/solvers/nesterov_solver.cpp
CXX src/caffe/solvers/adadelta_solver.cpp
CXX src/caffe/solvers/rmsprop_solver.cpp
CXX src/caffe/solvers/sgd_solver.cpp
CXX src/caffe/solvers/adagrad_solver.cpp
CXX src/caffe/solvers/adam_solver.cpp
CXX src/caffe/common.cpp
CXX src/caffe/layers/sigmoid_cross_entropy_loss_layer.cpp
CXX src/caffe/layers/swish_layer.cpp
CXX src/caffe/layers/lstm_layer.cpp
CXX src/caffe/layers/deconv_layer.cpp
CXX src/caffe/layers/rnn_layer.cpp
CXX src/caffe/layers/batch_norm_layer.cpp
......

make test -j8

CXX src/caffe/test/test_util_blas.cpp
CXX src/caffe/test/test_math_functions.cpp
CXX src/caffe/test/test_sigmoid_cross_entropy_loss_layer.cpp
CXX src/caffe/test/test_pooling_layer.cpp
CXX src/caffe/test/test_filter_layer.cpp
CXX src/caffe/test/test_tile_layer.cpp
CXX src/caffe/test/test_threshold_layer.cpp
CXX src/caffe/test/test_softmax_with_loss_layer.cpp
CXX src/caffe/test/test_filler.cpp
CXX src/caffe/test/test_data_layer.cpp
CXX src/caffe/test/test_syncedmem.cpp
CXX src/caffe/test/test_inner_product_layer.cpp
CXX src/caffe/test/test_bias_layer.cpp
CXX src/caffe/test/test_common.cpp
CXX src/caffe/test/test_gradient_based_solver.cpp
CXX src/caffe/test/test_lstm_layer.cpp
CXX src/caffe/test/test_platform.cpp
CXX src/caffe/test/test_embed_layer.cpp
....

make runtest -j8

[----------] Global test environment tear-down
[==========] 1058 tests from 143 test cases ran. (36036 ms total)
[  PASSED  ] 1058 tests.

Test ncnn

tclxa@tclxa:~/ncnn$ cat CMakeLists.txt
Uncomment the line below so the examples get built (then re-run cmake and make):
# add_subdirectory(examples)

# Copy the model files squeezenet_v1.1.param, squeezenet_v1.1.bin, squeezenet_v1.1.prototxt and squeezenet_v1.1.caffemodel
tclxa@tclxa:~/ncnn$ cp examples/squeezenet*  build/examples/

tclxa@tclxa:~/ncnn/build/examples$ ./squeezenet test.jpg 
404 = 0.988602
405 = 0.005207
908 = 0.004395
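Each printed line is a "class_index = confidence" pair; the indices refer to the ImageNet class list shipped with the example data (e.g. a synset_words.txt file; the exact file name is an assumption). The example already prints its top results, but as a sketch, a top-k can be recovered from unsorted lines of this shape with standard tools:

```shell
# Sort "class_index = confidence" lines by confidence, descending,
# and keep the top 3 (inline sample data matching the output above).
printf '405 = 0.005207\n404 = 0.988602\n908 = 0.004395\n' \
    | sort -t= -k2 -rn | head -n 3
```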
