Setting Up Caffe, Docker, TensorFlow, and PyTorch on Linux (CentOS 7)

Note: model training, testing, and deployment can all be done inside a Docker environment, which avoids most environment problems.

1. CUDA 8.0 Installation

Download and install CUDA 8.0 from the NVIDIA developer site.

Configure the environment variables (e.g. in ~/.bashrc):

# CUDA PATH

export PATH="/usr/local/cuda-8.0/bin:$PATH"

# CUDA LD_LIBRARY_PATH

export LD_LIBRARY_PATH="/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH"

Check the CUDA installation:

$ nvcc --version

nvcc: NVIDIA (R) Cuda compiler driver

Copyright (c) 2005-2016 NVIDIA Corporation

Built on Tue_Jan_10_13:22:03_CST_2017

Cuda compilation tools, release 8.0, V8.0.61
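Optionally, verify that the driver can see the GPU and that a compiled sample runs; the sample path below assumes the default CUDA 8.0 samples location and may differ on your machine.

# Check that the driver sees the GPU
nvidia-smi

# Optional: build and run the deviceQuery sample (default samples path assumed)
cd /usr/local/cuda-8.0/samples/1_Utilities/deviceQuery
sudo make
./deviceQuery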

2. cuDNN Installation

# Extract the cuDNN archive

tar zxvf cudnn-8.0-linux-x64-v5.1.tgz

cd cuda

# copy include file

sudo cp include/cudnn.h /usr/local/cuda-8.0/include/

# copy .so file

sudo cp lib64/libcudnn.so.5.1.10 /usr/local/cuda-8.0/lib64/

# Create symlinks

cd /usr/local/cuda-8.0/lib64/

sudo ln -s libcudnn.so.5.1.10 libcudnn.so.5

sudo ln -s libcudnn.so.5 libcudnn.so
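As a quick sanity check (not part of the original steps), refresh the linker cache and read the version macros from the copied header; it should report 5.1.x.

# Refresh the dynamic linker cache
sudo ldconfig

# Confirm the cuDNN version recorded in the header
grep -A 2 "define CUDNN_MAJOR" /usr/local/cuda-8.0/include/cudnn.h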

3. NCCL Installation

# Clone NCCL and build the tests

git clone https://github.com/NVIDIA/nccl.git

cd nccl

make CUDA_HOME=/usr/local/cuda-8.0 test

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:./build/lib

./build/test/single/all_reduce_test 10000000

make PREFIX=nccl install

# Copy files

sudo cp /yourpath/nccl/build/include/nccl.h /usr/local/include

sudo cp /yourpath/nccl/build/lib/libnccl* /usr/local/lib

# Edit ~/.bashrc

export LD_LIBRARY_PATH="/usr/local/cuda-8.0/lib64:/yourpath/nccl/build/lib:$LD_LIBRARY_PATH"

4. Caffe Installation

Install dependencies

sudo yum install protobuf-devel leveldb-devel snappy-devel opencv-devel boost-devel hdf5-devel gflags-devel glog-devel lmdb-devel atlas-devel

sudo yum install python-pip

sudo pip install --upgrade pip

sudo pip install numpy

Build Caffe from source:
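The concrete build commands are not listed here; below is a typical Makefile-based build following the official Caffe instructions, assuming the repository is cloned into ~/caffe and Makefile.config is edited for CUDA 8.0, cuDNN, and ATLAS.

# Get the source (target directory ~/caffe is an assumption)
git clone https://github.com/BVLC/caffe.git ~/caffe
cd ~/caffe

# Start from the example config; enable USE_CUDNN := 1, set BLAS := atlas,
# and point CUDA_DIR at /usr/local/cuda-8.0
cp Makefile.config.example Makefile.config

# Build the library, tools, and Python bindings
make all -j"$(nproc)"
make pycaffe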

Test the Caffe build:
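A typical way to exercise the build (a sketch, assuming the build above succeeded):

# Compile and run the unit tests
make test -j"$(nproc)"
make runtest

# Check the Python bindings (requires ~/caffe/python on PYTHONPATH)
export PYTHONPATH=~/caffe/python:$PYTHONPATH
python -c "import caffe; print('caffe OK')"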

5. TensorFlow Installation

sudo pip install tensorflow-gpu
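Note that with CUDA 8.0 and cuDNN 5.1 the newest tensorflow-gpu release may expect a newer cuDNN, so pinning an older 1.x release (e.g. the 1.2 line) may be necessary; the exact version is an assumption. A quick GPU check after installing:

# Verify the install and list visible devices (expect a /gpu:0 entry)
python -c "import tensorflow as tf; print(tf.__version__)"
python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"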

6. PyTorch Installation

pip install http://download.pytorch.org/whl/cu80/torch-0.1.12.post2-cp27-none-linux_x86_64.whl

pip install torchvision

pip install lmdb

pip install mahotas

pip install cffi
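A minimal check that the wheel installed correctly and that CUDA is usable:

# Verify PyTorch and torchvision, and that the GPU is visible
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import torchvision; print(torchvision.__version__)"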

7. Docker Installation

# Install docker

sudo yum install docker-ce
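On a stock CentOS 7 the docker-ce package is not in the default repositories; if the install above fails, add the official Docker CE repository first (following Docker's CentOS instructions) and retry.

# Add the Docker CE repository
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce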

# Start docker

sudo systemctl start docker

# Test docker

sudo docker run hello-world
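Two optional follow-ups not in the original steps: start Docker on boot and allow the current user to run docker without sudo.

# Start Docker automatically at boot
sudo systemctl enable docker

# Run docker without sudo (log out and back in for this to take effect)
sudo usermod -aG docker $USER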

8. nvidia-docker Installation

# Install nvidia-docker

# https://github.com/NVIDIA/nvidia-docker

wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker-1.0.1-1.x86_64.rpm

sudo rpm -i /tmp/nvidia-docker*.rpm && rm /tmp/nvidia-docker*.rpm

# start

sudo systemctl start nvidia-docker

# Test nvidia-smi

nvidia-docker run --rm nvidia/cuda nvidia-smi
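As noted at the top, training and testing can also run entirely inside containers. For example, a Caffe GPU image can be launched as below; the image tags are assumptions, substitute whatever image matches your framework.

# Run Caffe from the BVLC GPU image (image tag assumed)
nvidia-docker run --rm bvlc/caffe:gpu caffe --version

# Or open an interactive CUDA 8.0 + cuDNN 5 container
nvidia-docker run -it --rm nvidia/cuda:8.0-cudnn5-devel bash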
