Ubuntu 16.04: Configuring TensorFlow 1.10.1 (Object Detection API) to Run MobileNet-SSD (ssd_mobilenet_v1_coco)

    • How to install TensorFlow
      • Installing from source
        • 1. Clone the TensorFlow repository
        • 2. Install Bazel and other dependencies
          •   1. Download the binary installer
          •   2. Install the required packages
          •   3. Run the installer
          •   4. Set the environment variable
        • 3. Build and install the pip package
          •   1. Configure TensorFlow
          •   2. Build the package
          •   3. Install the package
    • Installing the object_detection API
      • 1. Clone models
      • 2. Install dependencies
      • 3. Compile the Protobuf files
        • Installing the protobuf compiler manually
      • 4. Add the libraries to PYTHONPATH
    • Testing the installation
      • Testing TensorFlow
      • Testing the Tensorflow Object Detection API
    • Running MobileNet-SSD
      • Download the pre-trained model

This article describes configuring TensorFlow on Ubuntu 16.04 on a system where dependencies such as FFmpeg and OpenCV are already set up, and it focuses on the TensorFlow part of the configuration. If you need the OpenCV setup details, see my earlier post (https://blog.csdn.net/Arvin_liang/article/details/83583871), which covers the OpenCV configuration in detail; that setup was itself done on top of a Caffe installation, and the TensorFlow configuration here builds on it.

How to install TensorFlow

  TensorFlow officially offers more than one installation method: binary (pip) install, VirtualEnv-based install, Docker-based install, and building from source.
  If you want to install TensorFlow quickly, the binary install is the most convenient.
  If you don't want to worry about dependencies, the Docker-based install resolves TensorFlow's dependencies in one step.
  If you want to keep your existing development environment untouched, the VirtualEnv-based install is recommended: the environment is isolated, and installation problems are easy to track down.
  If none of the above is a concern, build from source. That is the method described in this article, because configuring the Object_detection API calls for a customized installation.

Installing from source

  Building from source is the relatively more involved approach, and the installation of TensorFlow's Object_detection API described later in this article is done on top of it. The steps are as follows:

1. Clone the TensorFlow repository

  $ git clone --recurse-submodules https://github.com/tensorflow/tensorflow
  $ cd */tensorflow
  $ git checkout v1.10.1    # switch the source tree to the desired version
  --recurse-submodules fetches the protobuf library that TensorFlow depends on; for a source build this flag is required.

2. Install Bazel and other dependencies

  Bazel can also be installed in several ways: with a binary installer, by building from source, and so on (see https://docs.bazel.build/versions/master/install-ubuntu.html). My attempts to build Bazel from source kept failing, and since Bazel itself is not the focus here, I used the binary installer, as follows:

  1. Download the binary installer

  Go to https://github.com/bazelbuild/bazel/releases and download the installer for the version you need. The official note warns that the latest release (0.19.2 at the time of writing) may be unstable, so I chose the slightly older 0.18.1: bazel-0.18.1-installer-linux-x86_64.sh

  2. Install the required packages

    $ sudo apt-get install pkg-config zip g++ zlib1g-dev unzip python

  3. Run the installer

  Choose one of the following:
    $ chmod +x bazel-0.18.1-installer-linux-x86_64.sh
    $ ./bazel-0.18.1-installer-linux-x86_64.sh --user
    The --user flag installs Bazel into the $HOME/bin directory and sets the .bazelrc path to $HOME/.bazelrc.
  Or:
    $ sudo sh bazel-0.18.1-installer-linux-x86_64.sh

---------------

Bazel is bundled with software licensed under the GPLv2 with Classpath exception.
You can find the sources next to the installer on our release page:
   https://github.com/bazelbuild/bazel/releases

# Release 0.18.1 (2018-10-31)

Baseline: c062b1f1730f3562d5c16a037b374fc07dc8d9a2

Cherry picks:

   + 2834613f93f74e988c51cf27eac0e59c79ff3b8f:
     Include also ext jars in the bootclasspath jar.
   + 2579b791c023a78a577e8cb827890139d6fb7534:
     Fix toolchain_java9 on --host_javabase= after
     7eb9ea150fb889a93908d96896db77d5658e5005
   + faaff7fa440939d4367f284ee268225a6f40b826:
     Release notes: fix markdown
   + b073a18e3fac05e647ddc6b45128a6158b34de2c:
     Fix NestHost length computation Fixes #5987
   + bf6a63d64a010f4c363d218e3ec54dc4dc9d8f34:
     Fixes #6219. Don't rethrow any remote cache failures on either
     download or upload, only warn. Added more tests.
   + c1a7b4c574f956c385de5c531383bcab2e01cadd:
     Fix broken IdlClassTest on Bazel's CI.
   + 71926bc25b3b91fcb44471e2739b89511807f96b:
     Fix the Xcode version detection which got broken by the upgrade
     to Xcode 10.0.
   + 86a8217d12263d598e3a1baf2c6aa91b2e0e2eb5:
     Temporarily restore processing of workspace-wide tools/bazel.rc
     file.
   + 914b4ce14624171a97ff8b41f9202058f10d15b2:
     Windows: Fix Precondition check for addDynamicInputLinkOptions
   + e025726006236520f7e91e196b9e7f139e0af5f4:
     Update turbine

Important changes:

  - Fix regression #6219, remote cache failures

## Build informations
   - [Commit](https://github.com/bazelbuild/bazel/commit/1050764)
Uncompressing.......

Bazel is now installed!

Make sure you have "/usr/local/bin" in your path. You can also activate bash
completion by adding the following line to your ~/.bashrc:
  source /usr/local/lib/bazel/bin/bazel-complete.bash

See http://bazel.build/docs/getting-started.html to start a new project!

  4. Set the environment variable

  Normally, when the Bazel installer is run as above, the Bazel executable ends up in /usr/bin or /usr/local/bin. Once you have confirmed which directory it is, add that directory to your default PATH:
  $ sudo vi ~/.bashrc
  Add
  export PATH="$PATH:/usr/bin"   or   export PATH="$PATH:/usr/local/bin"
  Reboot, or run $ source ~/.bashrc, for the change to take effect.

3. Build and install the pip package

  1. Configure TensorFlow

   $ cd */tensorflow    # enter the source directory
   $ ./configure        # run the configuration script and answer the prompts as follows

WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.google.protobuf.UnsafeUtil (file:/home/alpha_gpu/.cache/bazel/_bazel_alpha_gpu/install/cdf71f2489ca9ccb60f7831c47fd37f1/_embedded_binaries/A-server.jar) to field java.lang.String.value
WARNING: Please consider reporting this to the maintainers of com.google.protobuf.UnsafeUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 0.18.1 installed.
Please specify the location of python. [Default is /usr/bin/python]: 


Found possible Python library paths:
 /usr/local/lib/python2.7/dist-packages
 /home/alpha_gpu/Downloads/tensorflow/models/research
 /usr/lib/python2.7/dist-packages
 /home/alpha_gpu/Downloads/tensorflow/models/research/slim
Please input the desired Python library path to use.  Default is [/usr/local/lib/python2.7/dist-packages]

Do you wish to build TensorFlow with jemalloc as malloc support? [Y/n]: n
No jemalloc as malloc support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Google Cloud Platform support? [Y/n]: n
No Google Cloud Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Hadoop File System support? [Y/n]: n
No Hadoop File System support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Amazon AWS Platform support? [Y/n]: n
No Amazon AWS Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Apache Kafka Platform support? [Y/n]: n
No Apache Kafka Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with XLA JIT support? [y/N]: n
No XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with GDR support? [y/N]: n
No GDR support will be enabled for TensorFlow.

Do you wish to build TensorFlow with VERBS support? [y/N]: n
No VERBS support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: n
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: n
No CUDA support will be enabled for TensorFlow.

Do you wish to download a fresh release of clang? (Experimental) [y/N]: n
Clang will not be downloaded.

Do you wish to build TensorFlow with MPI support? [y/N]: n
No MPI support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]: 


Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details.
       --config=mkl            # Build with MKL support.
       --config=monolithic     # Config for mostly static monolithic build.
Configuration finished

  Since this is a CPU-only build, I answered N to both the OpenCL SYCL and CUDA prompts.
  If you need CUDA, refer to the "Optional: Install CUDA (GPU support on Linux)" section of http://www.tensorfly.cn/tfdoc/get_started/os_setup.html.
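
  Once the wheel built in the next two steps is installed, you can double-check from Python which options the binary was actually compiled with; a minimal sketch using TensorFlow's standard test helper:

# Minimal sketch: confirm the installed build matches the choices made in
# ./configure (run this only after the pip install in step 3 below).
import tensorflow as tf

print(tf.test.is_built_with_cuda())   # expect False for this CPU-only build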

  2. Build the package

  $ bazel build -c opt //tensorflow/tools/pip_package:build_pip_package
  $ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

  3. Install the package

  $ pip install /tmp/tensorflow_pkg/tensorflow-1.10.1-cp27-none-linux_x86_64.whl
  The actual name of the .whl file depends on the source version.

This completes the TensorFlow source installation. We will verify it after configuring the object_detection API below.
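
If you want a quick sanity check of the wheel right away, before the fuller tests later in this article, here is a minimal sketch (the expected version string assumes the v1.10.1 tag checked out earlier):

# Minimal sanity check: the freshly installed wheel imports and runs a graph.
import tensorflow as tf

print(tf.__version__)   # expected: 1.10.1

hello = tf.constant('Hello, TensorFlow!')
with tf.Session() as sess:
    print(sess.run(hello))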

Installing the object_detection API

1. Clone models

Clone models into the tensorflow directory:
$ cd */tensorflow
$ git clone https://github.com/tensorflow/models.git
This puts a models directory, with all of its files, under the tensorflow directory.

2. Install dependencies

The TensorFlow models repository requires the following dependencies:

  • Protobuf 3.0.0
  • Python-tk
  • Pillow 1.0
  • lxml
  • tf Slim (already included in "models/research/")
  • Jupyter notebook
  • Matplotlib
  • Tensorflow (>=1.9.0)
  • Cython
  • contextlib2
  • cocoapi

Since the CPU build of Tensorflow has already been installed above, only the following remain to be installed:
  $ sudo apt-get install protobuf-compiler python-pil python-lxml python-tk
  $ pip install --user Cython
  $ pip install --user contextlib2
  $ pip install --user jupyter
  $ pip install --user matplotlib

COCO API installation
  $ git clone https://github.com/cocodataset/cocoapi.git
  $ cd cocoapi/PythonAPI
  $ make
  $ cp -r pycocotools */tensorflow/models/research/    # copy the compiled pycocotools folder to tensorflow/models/research/
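
A quick way to confirm the copied package is usable is to import it from the research/ directory; a minimal sketch (it loads no annotation file, it only checks that the extension built by make imports cleanly):

# Run from */tensorflow/models/research/ so the copied pycocotools package is
# on sys.path; a clean import means the compiled extension works.
from pycocotools import coco, mask
print('pycocotools OK:', coco.__file__)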

3. Compile the Protobuf files

  $ cd */tensorflow/models/research/
  $ protoc object_detection/protos/*.proto --python_out=.
If the compilation fails, it is likely a Protobuf version problem (the official docs note that apt-get may install a protobuf-compiler newer than 3.0.0, which can cause issues at compile time); in that case, install Protobuf 3.0.0 manually.

Installing the protobuf compiler manually

  $ cd */tensorflow/models/research/
  $ wget -O protobuf.zip https://github.com/google/protobuf/releases/download/v3.0.0/protoc-3.0.0-linux-x86_64.zip
  $ unzip protobuf.zip
  Using the downloaded protoc, run the compilation again:
$ ./bin/protoc object_detection/protos/*.proto --python_out=.    # must be run from the tensorflow/models/research/ directory
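
  A successful compilation leaves a *_pb2.py file next to each .proto under object_detection/protos/. A minimal sketch that imports one of them (string_int_label_map is one of the protos in the repository; run it from */tensorflow/models/research/ so the package resolves even before PYTHONPATH is set in the next step):

# Run from */tensorflow/models/research/. If protoc succeeded, the generated
# Python module imports without errors.
from object_detection.protos import string_int_label_map_pb2
print('generated protos OK:', string_int_label_map_pb2.__name__)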

4. Add the libraries to PYTHONPATH

  Add tensorflow/models/research/ and tensorflow/models/research/slim to the PYTHONPATH variable.
  $ sudo vi ~/.bashrc
  Add at the end of the file:
  export PYTHONPATH=$PYTHONPATH:*/tensorflow/models/research/:*/tensorflow/models/research/slim
  The line above is only an example; point it at the actual locations of your tensorflow/models/research/ and tensorflow/models/research/slim directories.
  Reboot, or run $ source ~/.bashrc, for the change to take effect.
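
  To confirm the new PYTHONPATH is picked up, open a fresh shell and import the two top-level packages from any directory; a minimal sketch (nets is the package provided by the slim directory):

# Run from any directory in a new shell. Both imports rely purely on
# PYTHONPATH pointing at research/ and research/slim.
import object_detection
import nets    # provided by tensorflow/models/research/slim
print('PYTHONPATH OK')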

Testing the installation

Testing TensorFlow

$ cd models/tutorials/image/mnist
$ python convolutional.py

alpha_gpu@alpha-gpu-server:~/Downloads/tensorflow/models/tutorials/image/mnist$ python convolutional.py 
Extracting data/train-images-idx3-ubyte.gz
Extracting data/train-labels-idx1-ubyte.gz
Extracting data/t10k-images-idx3-ubyte.gz
Extracting data/t10k-labels-idx1-ubyte.gz
2018-11-28 11:42:10.811194: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
Initialized!
Step 0 (epoch 0.00), 2.1 ms
Minibatch loss: 8.334, learning rate: 0.010000
Minibatch error: 85.9%
Validation error: 84.6%
Step 100 (epoch 0.12), 94.7 ms
Minibatch loss: 3.252, learning rate: 0.010000
Minibatch error: 6.2%
Validation error: 7.7%
Step 200 (epoch 0.23), 94.7 ms
Minibatch loss: 3.346, learning rate: 0.010000
Minibatch error: 7.8%
Validation error: 4.5%
Step 300 (epoch 0.35), 94.4 ms
Minibatch loss: 3.127, learning rate: 0.010000
Minibatch error: 1.6%
Validation error: 3.1%
...

If you see output like the above, TensorFlow was installed successfully.

Testing the Tensorflow Object Detection API

From */tensorflow/models/research/, run:
$ python object_detection/builders/model_builder_test.py

alpha_gpu@alpha-gpu-server:~/Downloads/tensorflow/models/research$ python object_detection/builders/model_builder_test.py
......................
----------------------------------------------------------------------
Ran 22 tests in 0.061s

OK

If you see output like the above, the Object Detection API is configured successfully.

The TensorFlow installation follows: http://www.tensorfly.cn/tfdoc/get_started/os_setup.html
The TensorFlow Object Detection API installation follows: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md

Running MobileNet-SSD

TensorFlow provides many pre-trained models. Here I take one of them, ssd_mobilenet_v1_coco, and run inference with it for object detection.

Download the pre-trained model

Download page:
  https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md
Or download it directly:
  $ wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_2018_01_28.tar.gz
Extract the archive into the */tensorflow/models/research/object_detection/ directory:
  $ cd */tensorflow/models/research/object_detection/
  $ tar -xzvf ssd_mobilenet_v1_coco_2018_01_28.tar.gz
Under */tensorflow/models/research/object_detection/, create a Python file MobileNet_SSD.py with the following code:

import os
import cv2
import time
import numpy as np
import tensorflow as tf

from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util

CWD_PATH = os.getcwd()

# Path to frozen detection graph. This is the actual model that is used for the object detection.
MODEL_NAME = 'ssd_mobilenet_v1_coco_2018_01_28'
PATH_TO_CKPT = os.path.join(CWD_PATH, 'object_detection', MODEL_NAME, 'frozen_inference_graph.pb')

# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join(CWD_PATH, 'object_detection', 'data', 'mscoco_label_map.pbtxt')

NUM_CLASSES = 90

# Loading label map
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES,
                                                            use_display_name=True)
category_index = label_map_util.create_category_index(categories)

#Open video file
cap = cv2.VideoCapture("video1.mp4") 
#cap = cv2.VideoCapture(0)
#Load a frozen TF model 
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')

with detection_graph.as_default():
    with tf.Session(graph=detection_graph) as sess:
        while cap.isOpened():
            ret, frame = cap.read()
            if not ret:
                # End of the video stream (or read failure): stop cleanly.
                break
            start = time.time()
            image_np = frame
            # The model expects a batch dimension: [1, height, width, 3].
            image_np_expanded = np.expand_dims(image_np, axis=0)
            image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
            boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
            scores = detection_graph.get_tensor_by_name('detection_scores:0')
            classes = detection_graph.get_tensor_by_name('detection_classes:0')
            num_detections = detection_graph.get_tensor_by_name('num_detections:0')
            # Actual detection.
            (boxes, scores, classes, num_detections) = sess.run(
                [boxes, scores, classes, num_detections],
                feed_dict={image_tensor: image_np_expanded})
            vis_util.visualize_boxes_and_labels_on_image_array(
                image_np,
                np.squeeze(boxes),
                np.squeeze(classes).astype(np.int32),
                np.squeeze(scores),
                category_index,
                use_normalized_coordinates=True,
                line_thickness=8)
            end = time.time()
            print('frame:', 1.0 / (end - start))
            cv2.imshow("capture", image_np)
            cv2.waitKey(1)

cap.release()
cv2.destroyAllWindows()

Run MobileNet_SSD.py to see the result. On my machine (i7-4770) it reaches 16~19 fps.
Make sure all the environment variables configured above have taken effect, otherwise the script will fail with errors.
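
Note that PATH_TO_CKPT and PATH_TO_LABELS in the script are built from the current working directory plus 'object_detection/...', so launch the script from */tensorflow/models/research/ (for example: python object_detection/MobileNet_SSD.py), and place a video1.mp4 in that directory (or switch to the webcam line). A minimal sketch to confirm the expected model and label files are visible from your working directory before launching (paths taken from the script above):

# Run from */tensorflow/models/research/ before starting MobileNet_SSD.py.
import os

for rel in ('object_detection/ssd_mobilenet_v1_coco_2018_01_28/frozen_inference_graph.pb',
            'object_detection/data/mscoco_label_map.pbtxt'):
    print(rel, '->', 'found' if os.path.isfile(rel) else 'MISSING')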
