Installing TensorFlow on the Jetson Nano and Running Object Detection

Contents

    • 1. Switching the Package Mirror
    • 2. Updating the System
    • 3. Installing TensorFlow-GPU
    • 4. Running Object Detection

1. Switching the Package Mirror

  1. First flash the system as described in the blog post "Flashing the Jetson Nano and running the DeepStream 4.0 demo".
  2. Switch to the Tsinghua (TUNA) mirror:
    sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak # back up the original sources.list first, so a mistake can be undone
    sudo gedit /etc/apt/sources.list
    Delete the file's entire contents and paste in the following:

deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-security main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-updates main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-backports main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-security main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-updates main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-backports main multiverse restricted universe

Save sources.list.
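The mirror switch can also be done non-interactively. A sketch that stages the new list in a local file first, so it can be inspected before the final copy into `/etc/apt/` (which is left to run manually with sudo):

```shell
# Write the four TUNA binary-package lines to a staging file; the deb-src
# lines from above can be appended the same way if source packages are needed
cat > sources.list.tuna <<'EOF'
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-security main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-updates main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-backports main multiverse restricted universe
EOF
# Sanity check: exactly 4 binary-package ("deb ") lines
grep -c '^deb ' sources.list.tuna
# Then, after the backup step above, install it:
#   sudo cp sources.list.tuna /etc/apt/sources.list
```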

2. Updating the System

  1. sudo apt-get update
  2. sudo apt-get upgrade
  3. Add the CUDA paths to the environment variables:
    sudo gedit ~/.bashrc
    Append at the end:
export CUDA_HOME=/usr/local/cuda-10.0
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64:$LD_LIBRARY_PATH
export PATH=/usr/local/cuda-10.0/bin:$PATH
  4. Save and exit, then run source ~/.bashrc
  5. nvcc -V should now print the CUDA version.
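To confirm the additions took effect in the current shell, a quick check (assuming the CUDA 10.0 path that JetPack installs; adjust the version if yours differs):

```shell
# Same exports as added to ~/.bashrc above
export CUDA_HOME=/usr/local/cuda-10.0
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64:$LD_LIBRARY_PATH
export PATH=/usr/local/cuda-10.0/bin:$PATH
# PATH should now contain the CUDA bin directory; on the Nano,
# `nvcc -V` will then resolve and print the CUDA version
echo "$PATH" | tr ':' '\n' | grep -x "$CUDA_HOME/bin"
```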

3. Installing TensorFlow-GPU

  1. sudo apt-get install python3-pip libhdf5-serial-dev hdf5-tools
  2. python3 -m pip install --upgrade pip
  3. pip3 install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu==1.13.1+nv19.5 --user
  4. pip3 install numpy pycuda --user

4. Running Object Detection

  1. Download the detection model from the TensorFlow model zoo.
    Here we use ssd_mobilenet_v2_coco:
    ssd_mobilenet_v2_coco_2018_03_29.tar.gz
  2. Download the TRT object detection program.
  3. Copy TRT_object_detection-master.zip onto the Nano and unzip it.
  4. cd TRT_object_detection
  5. mkdir model
  6. Extract ssd_mobilenet_v2_coco_2018_03_29.tar.gz
  7. Copy frozen_inference_graph.pb from the ssd_mobilenet_v2_coco_2018_03_29 folder into the model folder.
  8. gedit /usr/lib/python3.6/dist-packages/graphsurgeon/node_manipulation.py — add the line marked `+`, which gives every new node a default dtype of DT_FLOAT (enum value 1) when the caller does not pass one:
     node = NodeDef()
     node.name = name
     node.op = op if op else name
+    node.attr["dtype"].type = 1
     for key, val in kwargs.items():
         if key == "dtype":
             node.attr["dtype"].type = val.as_datatype_enum
  9. Edit model_ssd_mobilenet_v2_coco_2018_03_29.py in TRT_object_detection/config so the model path points at the copied .pb file:
import graphsurgeon as gs

-  path = 'model/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb'
+  path = 'model/frozen_inference_graph.pb'
   TRTbin = 'TRT_ssd_mobilenet_v2_coco_2018_03_29.bin'
  10. python3 main.py image.jpg
    [Figure 1: detection result]
    The figure above shows a detected "person".
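Steps 5-7 above can also be scripted. The sketch below fabricates a dummy archive so the commands can be tried anywhere; on the Nano, skip the fabrication block and run the last four commands against the real tarball inside TRT_object_detection:

```shell
set -e
work=$(mktemp -d) && cd "$work"
# --- stand-in for the real download (remove on the Nano) ---
mkdir -p ssd_mobilenet_v2_coco_2018_03_29
touch ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb
tar -czf ssd_mobilenet_v2_coco_2018_03_29.tar.gz ssd_mobilenet_v2_coco_2018_03_29
# --- steps 5-7: make the model dir, extract, copy the frozen graph ---
mkdir -p model
tar -xzf ssd_mobilenet_v2_coco_2018_03_29.tar.gz
cp ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb model/
ls model/   # should list frozen_inference_graph.pb
```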
