Installing Caffe on Ubuntu 16.04 (GPU build), plus the problems I ran into along the way and how I solved them

This article is in two parts: how to install Caffe, and the problems you may run into during installation along with their solutions.

Installation environment

In my experience, Caffe does not support Python 3 very well. That may just be because I use Anaconda 3; I have not tested a standalone Python 3 install (if you have, feedback in the comments is very welcome). So, odd as it sounds, this guide installs under Anaconda 3, in the following way:

Use conda to create a Python 2.7 virtual environment, then perform all of the steps below inside that environment.
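The environment setup described above can be sketched with the following conda commands (the environment name caffe-py27 is just an example, not from the original post):

```shell
# Create an isolated Python 2.7 environment and activate it;
# every build step in this guide then runs inside it.
conda create -n caffe-py27 python=2.7 -y
source activate caffe-py27    # on newer conda versions: conda activate caffe-py27
```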

Installing Caffe

1. Install the dependencies

sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler

sudo apt-get install --no-install-recommends libboost-all-dev

sudo apt-get install libopenblas-dev liblapack-dev libatlas-base-dev

sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev

sudo apt-get install git cmake build-essential

2. Clone the repository into your chosen install location (usually under /home/user/, i.e. ~/):

git clone https://github.com/BVLC/caffe.git

3. Enter the caffe directory and copy Makefile.config.example to a file named Makefile.config. From inside the caffe directory, the copy can be done with:

sudo cp Makefile.config.example Makefile.config

The copy is needed because the build reads Makefile.config; Makefile.config.example is only the sample configuration shipped with Caffe and is not used by the build itself.

4. Open Makefile.config in the caffe directory for editing:

sudo vim Makefile.config

5. Edit the contents of Makefile.config:

Here is the file after my changes:

## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!

# cuDNN acceleration switch (uncomment to build with cuDNN).
USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1

# uncomment to disable IO dependencies and corresponding data layers
# USE_OPENCV := 0
# USE_LEVELDB := 0
# USE_LMDB := 0
# This code is taken from https://github.com/sh1r0/caffe-android-lib
# USE_HDF5 := 0

# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
#	You should not set this flag if you will be reading LMDBs with any
#	possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1

# Uncomment if you're using OpenCV 3
OPENCV_VERSION := 3

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++

# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 through *_61 lines for compatibility.
# For CUDA < 8.0, comment the *_60 and *_61 lines for compatibility.
# For CUDA >= 9.0, comment the *_20 and *_21 lines for compatibility.
CUDA_ARCH :=-gencode arch=compute_30,code=sm_30 \
		-gencode arch=compute_35,code=sm_35 \
		-gencode arch=compute_50,code=sm_50 \
		-gencode arch=compute_52,code=sm_52 \
		-gencode arch=compute_60,code=sm_60 \
		-gencode arch=compute_61,code=sm_61 \
		-gencode arch=compute_61,code=compute_61

# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := atlas
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
# BLAS_INCLUDE := /path/to/your/blas
# BLAS_LIB := /path/to/your/blas

# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib

# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
PYTHON_INCLUDE := /usr/include/python2.7 \
		/usr/lib/python2.7/dist-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
# ANACONDA_HOME := $(HOME)/anaconda
# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
		# $(ANACONDA_HOME)/include/python2.7 \
		# $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include

# Uncomment to use Python 3 (default is Python 2)
# PYTHON_LIBRARIES := boost_python3 python3.5m
# PYTHON_INCLUDE := /usr/include/python3.5m \
#                 /usr/lib/python3.5/dist-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib
# PYTHON_LIB := $(ANACONDA_HOME)/lib

# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib

# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1

LINKFLAGS := -Wl,-rpath,/home/super-r/anaconda3/lib

# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu /usr/lib/x86_64-linux-gnu/hdf5/serial

# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib

# NCCL acceleration switch (uncomment to build with NCCL)
# https://github.com/NVIDIA/nccl (last tested version: v1.2.3-1+cuda8.0)
# USE_NCCL := 1

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1

# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1

# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0

# enable pretty build (comment to see full commands)
Q ?= @

The main changes are the following:

1) Enable cuDNN

Change
#USE_CUDNN := 1
to:
USE_CUDNN := 1

2) Use OpenCV 3

Change
#OPENCV_VERSION := 3
to:
OPENCV_VERSION := 3

3) Enable layers written in Python

Change
#WITH_PYTHON_LAYER := 1
to:
WITH_PYTHON_LAYER := 1

4) Extend the include and library paths (to pick up the serial HDF5 packages)

Change
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib
to:
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu /usr/lib/x86_64-linux-gnu/hdf5/serial

6. Edit the Makefile in the caffe directory:

Replace:
NVCCFLAGS +=-ccbin=$(CXX) -Xcompiler -fPIC $(COMMON_FLAGS)
with:
NVCCFLAGS += -D_FORCE_INLINES -ccbin=$(CXX) -Xcompiler -fPIC $(COMMON_FLAGS)
and replace:
ifeq ($(USE_HDF5), 1)
	LIBRARIES += hdf5_hl hdf5
with:
ifeq ($(USE_HDF5), 1)
	LIBRARIES += hdf5_serial_hl hdf5_serial
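The two Makefile edits above can also be applied non-interactively with sed. A minimal sketch, demonstrated on a stand-in file (Makefile.demo) so the substitutions can be checked safely; point the same expressions at the real Makefile, after backing it up, to apply them for real:

```shell
# Recreate the two original lines in a stand-in file.
printf '%s\n' \
  'NVCCFLAGS +=-ccbin=$(CXX) -Xcompiler -fPIC $(COMMON_FLAGS)' \
  'LIBRARIES += hdf5_hl hdf5' > Makefile.demo

# 1) add -D_FORCE_INLINES to NVCCFLAGS
# 2) switch to the serial HDF5 library names
sed -e 's/NVCCFLAGS +=-ccbin=/NVCCFLAGS += -D_FORCE_INLINES -ccbin=/' \
    -e 's/hdf5_hl hdf5$/hdf5_serial_hl hdf5_serial/' Makefile.demo
```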

7. Build. From the caffe directory, run:

sudo make all -j8

(-j8 means 8 parallel jobs; set it to your CPU core count, or just run make all if you are unsure.)
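If you don't know your core count, the shell can supply it. A small sketch, assuming GNU coreutils' nproc (standard on Ubuntu); the commented-out line is the actual build command:

```shell
# Pick the parallel job count from the machine's CPU core count.
CORES="$(nproc)"
echo "building with ${CORES} parallel jobs"
# sudo make all -j"${CORES}"   # the actual build command from step 7
```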

8. Once the build succeeds, run:

sudo make runtest

If the output looks like the figure below, Caffe is installed successfully:

9. Test the installed Caffe with the MNIST dataset

1) First, enter the caffe directory:

cd ~/caffe

2) Download the MNIST dataset:

./data/mnist/get_mnist.sh

3) Convert the downloaded data to the LMDB database format (easier and faster to read):

./examples/mnist/create_mnist.sh

4) Train the network:

./examples/mnist/train_lenet.sh

During training you can watch the loss and accuracy values, as in the figure below:

Building Caffe's Python interface

First, enter the caffe directory.

1. Install the dependencies:

sudo pip install -r python/requirements.txt

2. Build:

sudo make pycaffe

Output like the figure below means the build succeeded:

3. Configure the environment variable

Open the file:
gedit ~/.bashrc
Append the caffe path at the end of the file (this is my install path; substitute your own):
export PYTHONPATH=~/caffe/python
Make the change take effect:
source ~/.bashrc

Then start python and run import caffe; it should import without errors, as shown below:

 

Building the MATLAB interface

First enter the caffe directory, then run:

make matcaffe

Environment variable: if you already configured the caffe environment variable earlier, you do not need to configure it again.

Possible problems

1) Errors while building caffe:

a)/home/ubuntu/anaconda3/lib/libpng16.so.16: undefined reference to `inflateValidate@ZLIB_1.2.9'

b) Makefile:607: recipe for target '.build_release/tools/caffe.bin' failed
make: *** [.build_release/tools/caffe.bin] Error 1

Cause: python was probably installed through anaconda rather than directly on the system.
Fix: add the following line to Makefile.config:

LINKFLAGS := -Wl,-rpath,/home/ubuntu/anaconda3/lib

If you are using a conda-created python virtual environment, point the lib path at that environment instead.

For example:

/home/ubuntu/anaconda3/envs/python27/lib
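To find the right lib directory, you can ask the interpreter itself for its prefix. A sketch, shown with python3 for portability; run it with whichever python you actually build Caffe against:

```shell
# Print the lib directory of the python interpreter in use;
# this is the path that LINKFLAGS should point at.
python3 -c "import sys; print(sys.prefix + '/lib')"
```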

2) compute_20 errors

Cause: CUDA 9.0 and later no longer support the compute_20/compute_21 GPU architectures.

Fix: delete the compute_20/21 lines from the CUDA_ARCH setting in Makefile.config. (Do not just prepend # inside the continuation: in Make, a backslash-newline inside a comment continues the comment, which would swallow the following lines as well.) The setting becomes:

# For CUDA >= 9.0, comment the *_20 and *_21 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_30,code=sm_30 \
		-gencode arch=compute_35,code=sm_35 \
		-gencode arch=compute_50,code=sm_50 \
		-gencode arch=compute_52,code=sm_52 \
		-gencode arch=compute_60,code=sm_60 \
		-gencode arch=compute_61,code=sm_61 \
		-gencode arch=compute_61,code=compute_61

3) make pycaffe fails with: fatal error: pyconfig.h: No such file or directory

That is:

/usr/include/boost/python/detail/wrap_python.hpp:50:23: fatal error: pyconfig.h: No such file or directory
compilation terminated.
make: *** [python/caffe/_caffe.so] Error 1

Fix: add an environment variable:

export CPLUS_INCLUDE_PATH=/usr/include/python2.7

4) make runtest fails with:

ubuntu@ubuntu-ST-KN:~/caffe$ make runtest
.build_release/tools/caffe
.build_release/tools/caffe: error while loading shared libraries: libcudart.so.9.0: cannot open shared object file: No such file or directory
Makefile:544: recipe for target 'runtest' failed
make: *** [runtest] Error 127

Fix:

ubuntu@ubuntu-ST-KN:~/caffe$ sudo cp /usr/local/cuda/lib64/libcudart.so.9.0 /usr/local/lib/libcudart.so.9.0 && sudo ldconfig
ubuntu@ubuntu-ST-KN:~/caffe$ make runtest

Similar missing-shared-library errors can be fixed the same way.

5) make pycaffe fails with: Makefile:501: recipe for target 'python/caffe/_caffe.so' failed, as shown below:

Cause: the numpy path is wrong.

Fix: open Makefile.config and put the correct path under PYTHON_INCLUDE.

To find that path, start python in the environment you actually use and run:

>>>import numpy as np
>>>np.__file__

The printed file path is the numpy actually in use; adjust the PYTHON_INCLUDE lines in Makefile.config accordingly.
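numpy can also report its include directory directly, which is exactly the value the PYTHON_INCLUDE lines need. A sketch, shown with python3; substitute the interpreter you build Caffe against:

```shell
# numpy.get_include() returns the directory containing numpy/arrayobject.h,
# i.e. the path to add to PYTHON_INCLUDE in Makefile.config.
python3 -c "import numpy; print(numpy.get_include())"
```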

6) Running import caffe in python fails with: ImportError: No module named google.protobuf.internal

Fix: in the relevant python environment, run pip install protobuf.

This error will not occur if sudo pip install -r python/requirements.txt completed successfully before make pycaffe, because requirements.txt already includes the protobuf dependency.

7) One final note: before every rebuild, run make clean first; otherwise the subsequent build may fail.
