Building tensorflow + tensorrt + object_detection from source in an Anaconda environment (CUDA 10.0, cuDNN 7.4.1)
os: ubuntu 16.04
cuda: 10.0
cudnn: 7.4.1 (other versions also work, e.g. 7.3.1)
tensorflow: 1.13.0rc (building 1.13.1 tends to fail)
python: 3.5 (building 1.13.0 with python 3.7 fails)
tensorrt: 5.0.2
TensorRT downloads for all versions (registration and login required): https://developer.nvidia.com/nvidia-tensorrt-5x-download
Official installation guide: https://developer.download.nvidia.cn/compute/machine-learning/tensorrt/docs/5.0/GA_5.0.2.6/TensorRT-Installation-Guide.pdf
The official guide offers three installation methods: deb, tar, and rpm. I installed successfully with both deb and tar; I have not tried rpm. The tar method is recommended.
Supported TensorRT configurations:

TensorRT | Ubuntu | CUDA | TensorFlow |
---|---|---|---|
5.1.5 | ubuntu 16.04 | cuda 10.1.168 | tensorflow 1.13.1 |
5.1.2 | ubuntu 16.04 | cuda 10.1.105 | tensorflow 1.13.1 |
5.0.2 | ubuntu 16.04 | cuda 10.0.130 | tensorflow 1.13.0rc0 / tensorflow 1.12.0 |

In my experience, the third configuration builds most reliably.
If you use Anaconda, first create a virtual environment, e.g. a standalone environment named trt:
conda create -n trt python=3.5
# activate the environment
conda activate trt
Download TensorRT and extract it to your install path, e.g. /home/user_1/software/TensorRT-5.0.2.6.
You can extract with the Ubuntu 16.04 file manager, or run:
$ tar xzvf TensorRT-5.1.x.x.Ubuntu-1x.04.x.x86_64-gnu.cuda-x.x.cudnn7.x.tar.gz
where:
5.1.x.x is the downloaded TensorRT version
Ubuntu-1x.04.x is 14.04.5, 16.04.4, or 18.04.1
cuda-x.x is the CUDA version: 9.0, 10.0, or 10.1
cudnn7.x is the cuDNN version, e.g. 7.5
After extraction, run:
$ ls TensorRT-5.1.x.x
bin data doc graphsurgeon include TensorRT-Release-Notes.pdf uff
Add the TensorRT lib directory to the LD_LIBRARY_PATH environment variable:
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<TensorRT install path>/lib
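This export only lasts for the current shell; to make it permanent, append the same line to ~/.bashrc. A minimal sketch of composing the path (the install prefix below is the example path from earlier; substitute your own):

```shell
# Example install prefix -- adjust to your actual TensorRT location
TRT_HOME=/home/user_1/software/TensorRT-5.0.2.6
# Append the TensorRT libraries to the dynamic-loader search path
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${TRT_HOME}/lib
echo "$LD_LIBRARY_PATH"
```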
Install the TensorRT wheel file:
# sudo can be omitted inside a virtual environment
$ cd TensorRT-5.1.x.x/python
$ sudo pip3 install tensorrt-5.1.x.x-cp3x-none-linux_x86_64.whl
Install the uff and graphsurgeon wheel files (note that the graphsurgeon wheel lives in its own directory, not in uff):
$ cd TensorRT-5.1.x.x/uff
$ sudo pip3 install uff-0.x.x-py2.py3-none-any.whl
$ cd TensorRT-5.1.x.x/graphsurgeon
$ sudo pip3 install graphsurgeon-0.4.0-py2.py3-none-any.whl
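After installing the wheels, a quick way to confirm they are importable (run inside the trt environment) is a small Python check. The helper below is my own sketch, not part of TensorRT:

```python
def check_imports(mods=("tensorrt", "uff", "graphsurgeon")):
    """Try to import each module; return {name: version string or None}."""
    versions = {}
    for name in mods:
        try:
            module = __import__(name)
            versions[name] = getattr(module, "__version__", "unknown")
        except ImportError:
            versions[name] = None  # wheel not installed, or wrong env active
    return versions

print(check_imports())
```

Any None entry means the corresponding wheel is missing or the wrong environment is active.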
With TensorRT installed, you can build TensorFlow following the official build-from-source guide: https://tensorflow.google.cn/install/source
(see the official docs for full details)
Activate the Python environment created earlier:
conda activate trt
Install the dependencies (quote the version constraint so the shell does not treat >= as a redirect):
pip install -U --user pip six numpy wheel setuptools mock 'future>=0.17.1'
pip install -U --user keras_applications==1.0.6 --no-deps
pip install -U --user keras_preprocessing==1.0.5 --no-deps
Install the build tool Bazel.
Follow the official docs (different TensorFlow versions require different Bazel versions): https://docs.bazel.build/versions/master/install.html
Download the TensorFlow source code:
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
Run the configuration script to set install paths and build options:
./configure
A few things to watch during configuration: first, the Python version you choose and its corresponding site-packages path; second, that the CUDA version and install path are correct; third, TensorRT support: answer y and enter the correct TensorRT install path (with a deb install the default works; with a tar install you must paste your TensorRT install path). Answer n to everything else. If the build fails, check for configuration mistakes; after fixing one, run bazel clean and then rerun ./configure. Note: when asked whether to use clang as the CUDA compiler, answer N:
You have bazel 0.15.0 installed.
Please specify the location of python. [Default is /home/shx/anaconda3/envs/trt/bin/python]: // choose your python path; since I activated the virtual environment, the default already points to it
Found possible Python library paths:
/home/shx/anaconda3/envs/trt/lib/python3.5/site-packages
Please input the desired Python library path to use. Default is [/home/shx/anaconda3/envs/trt/lib/python3.5/site-packages]: // this selects the site-packages path for the chosen python
Do you wish to build TensorFlow with jemalloc as malloc support? [Y/n]:
jemalloc as malloc support will be enabled for TensorFlow.
Do you wish to build TensorFlow with Google Cloud Platform support? [Y/n]:
Google Cloud Platform support will be enabled for TensorFlow.
Do you wish to build TensorFlow with Hadoop File System support? [Y/n]:
Hadoop File System support will be enabled for TensorFlow.
Do you wish to build TensorFlow with Amazon AWS Platform support? [Y/n]:
Amazon AWS Platform support will be enabled for TensorFlow.
Do you wish to build TensorFlow with Apache Kafka Platform support? [Y/n]:
Apache Kafka Platform support will be enabled for TensorFlow.
Do you wish to build TensorFlow with XLA JIT support? [y/N]:
No XLA JIT support will be enabled for TensorFlow.
Do you wish to build TensorFlow with GDR support? [y/N]:
No GDR support will be enabled for TensorFlow.
Do you wish to build TensorFlow with VERBS support? [y/N]:
No VERBS support will be enabled for TensorFlow.
Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]:
No OpenCL SYCL support will be enabled for TensorFlow.
Do you wish to build TensorFlow with CUDA support? [y/N]: Y
CUDA support will be enabled for TensorFlow.
Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 9.0]: 9.0 // this sample transcript shows 9.0; for the setup in this guide, enter 10.0
Please specify the location where CUDA 9.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7.0]: 7.0
Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Do you wish to build TensorFlow with TensorRT support? [y/N]: // this sample transcript declines; for the setup in this guide, answer y and supply the TensorRT install path
No TensorRT support will be enabled for TensorFlow.
Please specify the NCCL version you want to use. If NCCL 2.2 is not installed, then you can use version 1.3 that can be fetched automatically but it may have worse performance with multiple GPUs. [Default is 2.2]: 1.3
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https:// developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your
build time and binary size. [Default is: 3.5,7.0] 6.1
Do you want to use clang as CUDA compiler? [y/N]:
nvcc will be used as CUDA compiler.
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
Do you wish to build TensorFlow with MPI support? [y/N]:
No MPI support will be enabled for TensorFlow.
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]:
Not configuring the WORKSPACE for Android builds.
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details.
--config=mkl # Build with MKL support.
--config=monolithic # Config for mostly static monolithic build.
Configuration finished
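If you rebuild often, ./configure can also be answered non-interactively: TensorFlow 1.13's configure script reads its answers from environment variables. A sketch with example values matching this guide (the tar install path is the example from earlier; adjust everything to your machine):

```shell
# Pre-answer the ./configure prompts via environment variables (example values)
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10.0
export TF_CUDNN_VERSION=7
export TF_NEED_TENSORRT=1
export TENSORRT_INSTALL_PATH=/home/user_1/software/TensorRT-5.0.2.6  # tar install path
export TF_CUDA_COMPUTE_CAPABILITIES=6.1
export TF_CUDA_CLANG=0   # use nvcc, not clang, as the CUDA compiler
```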
After configuration finishes, build TensorFlow with:
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
When the build completes, generate the wheel file (it is written to /tmp/tensorflow_pkg):
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
The packaging step prints output like the following:
warning: no files found matching '*.pyd' under directory '*'
warning: no files found matching '*.pd' under directory '*'
warning: no files found matching '*.dll' under directory '*'
warning: no files found matching '*.lib' under directory '*'
warning: no files found matching '*.h' under directory 'tensorflow/include/ tensorflow'
warning: no files found matching '*' under directory 'tensorflow/include/ Eigen'
warning: no files found matching '*.h' under directory 'tensorflow/include/ google'
warning: no files found matching '*' under directory 'tensorflow/include/ third_party'
warning: no files found matching '*' under directory 'tensorflow/include/ unsupported'
2019年 07月 23日 星期二 08:54:08 CST : === Output wheel file is in: /tmp/tensorflow_pkg
Install the TensorFlow wheel file:
pip install /tmp/tensorflow_pkg/tensorflow-*.whl
The TensorRT install path contains a demo at TensorRT-5.0.2.6/samples/python/end_to_end_tensorflow_mnist:
$ cd TensorRT-5.0.2.6/samples/python/end_to_end_tensorflow_mnist
$ mkdir models
$ python model.py
This prints output like:
2019-07-23 09:09:03.239484: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally
60000/60000 [==============================] - 8s 135us/sample - loss: 0.1978 - acc: 0.9409
Epoch 2/5
60000/60000 [==============================] - 8s 131us/sample - loss: 0.0789 - acc: 0.9761
Epoch 3/5
60000/60000 [==============================] - 8s 130us/sample - loss: 0.0530 - acc: 0.9831
Epoch 4/5
60000/60000 [==============================] - 8s 129us/sample - loss: 0.0351 - acc: 0.9893
Epoch 5/5
60000/60000 [==============================] - 8s 128us/sample - loss: 0.0279 - acc: 0.9912
10000/10000 [==============================] - 1s 65us/sample - loss: 0.0751 - acc: 0.9767
Then run:
$ python sample.py -d /home/user_XX/software/TensorRT-5.0.2.6/data
Test Case: 4
Prediction: 4
With a deb install you can simply run python sample.py. With a tar install you must point -d at the data directory, which sits under the TensorRT install root.
Test sample 4 is predicted as 4, which confirms that TensorRT and TensorFlow are installed correctly.
The TensorFlow Object Detection API is an open-source framework built on TensorFlow that makes it easy to build, train, and deploy object detection models. Official overview and installation guide: https://github.com/tensorflow/models/tree/master/research/object_detection
If installation per the official guide fails with: from tensorflow.python.eager import monitoring ImportError: cannot import monitoring
check with pip whether the tensorflow-estimator version matches your TensorFlow. If it does not, uninstall it:
$ pip uninstall tensorflow-estimator
then run
$ pip install tensorflow-estimator==
(the resulting error message lists the available versions), and finally install the matching one:
$ pip install tensorflow-estimator==<version>
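The installed version can also be checked programmatically. A sketch using pkg_resources (bundled with setuptools; the helper function name is my own):

```python
import pkg_resources

def installed_version(dist_name):
    """Return the installed version string for a distribution, or None."""
    try:
        return pkg_resources.get_distribution(dist_name).version
    except pkg_resources.DistributionNotFound:
        return None

# e.g. installed_version("tensorflow-estimator") inside the trt env
print(installed_version("setuptools"))
```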
With all of the above installed, work through the official TensorRT examples: https://github.com/tensorflow/tensorrt/tree/r1.13/tftrt/examples/object_detection
* Note: the tensorflow/tensorrt examples repo has two branches; the latest master branch targets TensorFlow 1.14 and later. If you installed an earlier TensorFlow, use the r1.13 branch (1.13 or below).
Download
from tftrt.examples.object_detection import download_model
config_path, checkpoint_path = download_model('ssd_mobilenet_v1_coco', output_dir='models')
# help(download_model) for more
Optimize
from tftrt.examples.object_detection import optimize_model
frozen_graph = optimize_model(
config_path=config_path,
checkpoint_path=checkpoint_path,
use_trt=True,
precision_mode='FP16'
)
# help(optimize_model) for other parameters
Note: if you hit Segmentation fault (core dumped) (this cost me a week; I kept assuming my build was broken!), don't panic! The likely cause is that in the call
trt.create_inference_graph(
input_graph_def=frozen_graph,
outputs=output_names,
max_batch_size=max_batch_size,
max_workspace_size_bytes=max_workspace_size_bytes,
precision_mode=precision_mode,
minimum_segment_size=minimum_segment_size,
is_dynamic_op=False,
maximum_cached_engines=maximum_cached_engines)
the max_workspace_size_bytes parameter is set too small. These parameters can be changed in object_detection.py. After much trial and error, the best fix was to modify this code to load the .pb file directly and convert it.
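When enlarging the workspace, the value is a plain byte count. A sketch (the 2 GiB figure is my assumption; pick what your GPU memory allows):

```python
# Workspace sizes are raw byte counts; bit-shifting keeps them readable.
GiB = 1 << 30                       # 1 gibibyte = 1073741824 bytes
max_workspace_size_bytes = 2 * GiB  # let TensorRT use up to 2 GiB of scratch space

# This value would then be passed through to the converter, e.g.:
# trt.create_inference_graph(..., max_workspace_size_bytes=max_workspace_size_bytes, ...)
print(max_workspace_size_bytes)  # 2147483648
```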