This note applies only to Faster-RCNN_TF.
Step 1: open the mutex.h file under
(your environment path)/lib/python2.7/site-packages/tensorflow/include/tensorflow/core/platform/default/
If you are not using Anaconda, the path looks something like this:
/usr/local/lib/python2.7/dist-packages/tensorflow/include/tensorflow/core/platform/default/mutex.h
If you are using Anaconda (as I am), the path looks something like this:
/home/lbz/.conda/envs/py2/lib/python2.7/site-packages/tensorflow/include/tensorflow/core/platform/default/mutex.h
In mutex.h, change
#include "nsync_cv.h"
#include "nsync_mu.h"
to
#include "external/nsync/public/nsync_cv.h"
#include "external/nsync/public/nsync_mu.h"
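If you would rather not edit the header by hand, the same two-line substitution can be done with sed. The snippet below is a sketch that demonstrates the substitution on a scratch copy of the two lines; against the real file you would run the same sed command on the mutex.h path shown above (and keep a backup first).

```shell
# Demonstrate the two include rewrites on a scratch file; point MUTEX_H
# at your real mutex.h (backed up) to apply the fix for real.
MUTEX_H=$(mktemp)
printf '%s\n' '#include "nsync_cv.h"' '#include "nsync_mu.h"' > "$MUTEX_H"

sed -i \
  -e 's|#include "nsync_cv.h"|#include "external/nsync/public/nsync_cv.h"|' \
  -e 's|#include "nsync_mu.h"|#include "external/nsync/public/nsync_mu.h"|' \
  "$MUTEX_H"

cat "$MUTEX_H"
# prints:
#   #include "external/nsync/public/nsync_cv.h"
#   #include "external/nsync/public/nsync_mu.h"
```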
Then replace the contents of the make.sh file in the lib folder with the following and recompile:
TF_INC=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())')
TF_LIB=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())')
NSYNC_INC=$TF_INC"/external/nsync/public"
# Alternative nsync header location (under tensorflow/contrib/makefile), in case
# external/nsync/public does not exist in your install:
# NSYNC_INC=/home/lbz/.conda/envs/py2/lib/python2.7/site-packages/tensorflow/contrib/makefile/downloads/nsync/public
CUDA_PATH=/usr/local/cuda/
CXXFLAGS=''
if [[ "$OSTYPE" =~ ^darwin ]]; then
CXXFLAGS+='-undefined dynamic_lookup'
fi
cd roi_pooling_layer
if [ -d "$CUDA_PATH" ]; then
    nvcc -std=c++11 -c -o roi_pooling_op.cu.o roi_pooling_op_gpu.cu.cc \
        -I $TF_INC -D GOOGLE_CUDA=1 -x cu -Xcompiler -fPIC $CXXFLAGS \
        -arch=sm_37

    # g++ -std=c++11 -shared -o roi_pooling.so roi_pooling_op.cc \
    #     roi_pooling_op.cu.o -I $TF_INC -D GOOGLE_CUDA=1 -fPIC $CXXFLAGS \
    #     -lcudart -L $CUDA_PATH/lib64

    g++ -std=c++11 -shared -o roi_pooling.so roi_pooling_op.cc -D_GLIBCXX_USE_CXX11_ABI=0 \
        roi_pooling_op.cu.o -I $TF_INC -L $TF_LIB -ltensorflow_framework -D GOOGLE_CUDA=1 \
        -fPIC $CXXFLAGS -lcudart -L $CUDA_PATH/lib64
else
    g++ -std=c++11 -shared -o roi_pooling.so roi_pooling_op.cc \
        -I $TF_INC -fPIC $CXXFLAGS
fi
cd ..
#cd feature_extrapolating_layer
#nvcc -std=c++11 -c -o feature_extrapolating_op.cu.o feature_extrapolating_op_gpu.cu.cc \
# -I $TF_INC -D GOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -arch=sm_50
#g++ -std=c++11 -shared -o feature_extrapolating.so feature_extrapolating_op.cc \
# feature_extrapolating_op.cu.o -I $TF_INC -fPIC -lcudart -L $CUDA_PATH/lib64
#cd ..
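Note that the script's only branch is on whether the CUDA toolkit directory exists: if it does, roi_pooling is compiled for GPU (nvcc then g++ with -D GOOGLE_CUDA=1); otherwise a CPU-only g++ build is done. The check in isolation looks like this (the echoed messages are mine, for illustration):

```shell
# Same existence check make.sh uses to pick the GPU vs CPU-only build path.
CUDA_PATH=/usr/local/cuda/
if [ -d "$CUDA_PATH" ]; then
    echo "CUDA found: GPU build (nvcc + g++ with -D GOOGLE_CUDA=1)"
else
    echo "no CUDA: CPU-only build (g++ only)"
fi
```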
I do not fully understand why this works; explanations from anyone more knowledgeable would be much appreciated.