Notes (8): Getting jetson-inference Running on the Jetson Nano

The jetson-inference repository uses NVIDIA TensorRT to deploy neural networks efficiently on the embedded Jetson platform, improving performance and power efficiency through graph optimization, kernel fusion, and FP16/INT8 precision. Its vision primitives (imageNet for image recognition, detectNet for object detection, and segNet for semantic segmentation) all inherit from a shared tensorNet object.

See the jetson-inference project homepage for details.

This tutorial draws on the official documentation and other guides (links at the end of the post) and was verified in the environment below. As the official instructions evolve, some steps here may stop working; apologies in advance if so.

  • OS: Ubuntu 18.04 LTS (arm64)
  • JetPack: 4.4
  • CUDA: 10.2
  • TensorFlow: 1.3.0
  • Last updated: 2020-10-11
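To confirm your own board matches this setup, you can query the L4T release (the JetPack base) and the CUDA toolkit version directly on the Nano. A quick sketch; the paths below are standard on JetPack images but may differ elsewhere:

```shell
# L4T release string (JetPack 4.4 is based on L4T R32.4.x); the file only exists on Jetson devices
L4T=$(head -n 1 /etc/nv_tegra_release 2>/dev/null || echo "not a Jetson")
echo "$L4T"

# CUDA toolkit version, if the toolkit is on the default JetPack path
CUDA=$(/usr/local/cuda/bin/nvcc --version 2>/dev/null | grep -o 'release [0-9.]*' || echo "nvcc not found")
echo "$CUDA"
```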

Besides the walkthrough itself, this post mirrors the relevant repositories on gitee for faster downloads in China. If you want the very latest versions, use the official repositories throughout.

1. Getting the Files

First, install the dependencies:

sudo apt-get update
sudo apt-get install git cmake libpython3-dev python3-numpy

Then clone the project from the git repository:

# go to the desktop
cd ~/Desktop

# pick ONE of the following
# [official] clone the project
git clone https://github.com/dusty-nv/jetson-inference
# [gitee] clone the author's gitee mirror (faster download, but not necessarily the latest version)
# You don't have to use this mirror: with a gitee account you can import the official project into gitee yourself and clone that, which is just as fast.
git clone https://gitee.com/XPSWorld/jetson-inference

cd jetson-inference
git submodule update --init

# create the build directory
mkdir build

A Jetson Nano system needs at least a 16 GB SD card, and the models for this example alone take upwards of 2 GB, so most of them are not shipped on the SD card image (the card only carries the TensorRT runtime needed to execute them).
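Since the models take over 2 GB, it's worth checking free space on the SD card before downloading. For example (using GNU coreutils df):

```shell
# show available space on the root filesystem
AVAIL=$(df -h --output=avail / | tail -n 1 | tr -d ' ')
echo "free on /: $AVAIL"
```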

Because of network conditions in China, the default model downloads may fail or be extremely slow. Some posts suggest a proxy solves this; the author has not tested it, but feel free to try.

The jetson-inference documentation covers this (see "Model Download Mirror"): a download mirror is provided for users in China, so you can simply fetch the models from that page:

# go to the model directory
cd data/networks
# download models with wget as needed
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/AlexNet.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/Deep-Homography-COCO.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/DetectNet-COCO-Airplane.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/DetectNet-COCO-Bottle.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/DetectNet-COCO-Chair.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/DetectNet-COCO-Dog.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/facenet-120.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/FCN-Alexnet-Aerial-FPV-720p.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/FCN-Alexnet-Cityscapes-HD.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/FCN-Alexnet-Cityscapes-SD.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/FCN-Alexnet-Pascal-VOC.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/FCN-Alexnet-SYNTHIA-CVPR16.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/FCN-Alexnet-SYNTHIA-Summer-HD.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/FCN-Alexnet-SYNTHIA-Summer-SD.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/FCN-ResNet18-Cityscapes-1024x512.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/FCN-ResNet18-Cityscapes-2048x1024.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/FCN-ResNet18-Cityscapes-512x256.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/FCN-ResNet18-DeepScene-576x320.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/FCN-ResNet18-DeepScene-864x480.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/FCN-ResNet18-MHP-512x320.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/FCN-ResNet18-MHP-640x360.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/FCN-ResNet18-Pascal-VOC-320x320.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/FCN-ResNet18-Pascal-VOC-512x320.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/FCN-ResNet18-SUN-RGBD-512x400.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/FCN-ResNet18-SUN-RGBD-640x512.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/GoogleNet-ILSVRC12-subset.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/GoogleNet.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/Inception-v4.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/multiped-500.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/ped-100.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/ResNet-101.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/ResNet-152.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/ResNet-18.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/ResNet-50.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/SSD-Inception-v2.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/SSD-Mobilenet-v1.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/SSD-Mobilenet-v2.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/Super-Resolution-BSD500.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/VGG-16.tar.gz
wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/VGG-19.tar.gz
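You rarely need all of these. Since every URL above shares one base, a short loop can fetch just a chosen subset. A sketch (edit MODELS to taste; the actual wget is left commented so the loop is safe to dry-run):

```shell
MIRROR=https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618
MODELS="GoogleNet ResNet-18 SSD-Mobilenet-v2"   # pick only the models you need

for m in $MODELS; do
    echo "would fetch: $MIRROR/$m.tar.gz"
    # wget -c "$MIRROR/$m.tar.gz"   # -c resumes a partially downloaded archive
done
```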

Here is a brief overview of the pre-trained models above.

Image recognition pre-trained models:

| Network      | CLI argument | NetworkType enum |
|--------------|--------------|------------------|
| AlexNet      | alexnet      | ALEXNET          |
| GoogleNet    | googlenet    | GOOGLENET        |
| GoogleNet-12 | googlenet-12 | GOOGLENET_12     |
| ResNet-18    | resnet-18    | RESNET_18        |
| ResNet-50    | resnet-50    | RESNET_50        |
| ResNet-101   | resnet-101   | RESNET_101       |
| ResNet-152   | resnet-152   | RESNET_152       |
| VGG-16       | vgg-16       | VGG-16           |
| VGG-19       | vgg-19       | VGG-19           |
| Inception-v4 | inception-v4 | INCEPTION_V4     |

Object detection pre-trained models:

| Network                 | CLI argument     | NetworkType enum | Object classes       |
|-------------------------|------------------|------------------|----------------------|
| SSD-Mobilenet-v1        | ssd-mobilenet-v1 | SSD_MOBILENET_V1 | 91 (COCO classes)    |
| SSD-Mobilenet-v2        | ssd-mobilenet-v2 | SSD_MOBILENET_V2 | 91 (COCO classes)    |
| SSD-Inception-v2        | ssd-inception-v2 | SSD_INCEPTION_V2 | 91 (COCO classes)    |
| DetectNet-COCO-Dog      | coco-dog         | COCO_DOG         | dogs                 |
| DetectNet-COCO-Bottle   | coco-bottle      | COCO_BOTTLE      | bottles              |
| DetectNet-COCO-Chair    | coco-chair       | COCO_CHAIR       | chairs               |
| DetectNet-COCO-Airplane | coco-airplane    | COCO_AIRPLANE    | airplanes            |
| ped-100                 | pednet           | PEDNET           | pedestrians          |
| multiped-500            | multiped         | PEDNET_MULTI     | pedestrians, luggage |
| facenet-120             | facenet          | FACENET          | faces                |

Semantic segmentation pre-trained models:

| Dataset     | Resolution | CLI Argument                      | Accuracy | Jetson Nano | Jetson Xavier |
|-------------|------------|-----------------------------------|----------|-------------|---------------|
| Cityscapes  | 512x256    | fcn-resnet18-cityscapes-512x256   | 83.3%    | 48 FPS      | 480 FPS       |
| Cityscapes  | 1024x512   | fcn-resnet18-cityscapes-1024x512  | 87.3%    | 12 FPS      | 175 FPS       |
| Cityscapes  | 2048x1024  | fcn-resnet18-cityscapes-2048x1024 | 89.6%    | 3 FPS       | 47 FPS        |
| DeepScene   | 576x320    | fcn-resnet18-deepscene-576x320    | 96.4%    | 26 FPS      | 360 FPS       |
| DeepScene   | 864x480    | fcn-resnet18-deepscene-864x480    | 96.9%    | 14 FPS      | 190 FPS       |
| Multi-Human | 512x320    | fcn-resnet18-mhp-512x320          | 86.5%    | 34 FPS      | 370 FPS       |
| Multi-Human | 640x360    | fcn-resnet18-mhp-640x360          | 87.1%    | 23 FPS      | 325 FPS       |
| Pascal VOC  | 320x320    | fcn-resnet18-voc-320x320          | 85.9%    | 45 FPS      | 508 FPS       |
| Pascal VOC  | 512x320    | fcn-resnet18-voc-512x320          | 88.5%    | 34 FPS      | 375 FPS       |
| SUN RGB-D   | 512x400    | fcn-resnet18-sun-512x400          | 64.3%    | 28 FPS      | 340 FPS       |
| SUN RGB-D   | 640x512    | fcn-resnet18-sun-640x512          | 65.1%    | 17 FPS      | 224 FPS       |

Legacy segmentation models:

| Network                | CLI Argument                  | NetworkType enum              | Classes |
|------------------------|-------------------------------|-------------------------------|---------|
| Cityscapes (2048x2048) | fcn-alexnet-cityscapes-hd     | FCN_ALEXNET_CITYSCAPES_HD     | 21      |
| Cityscapes (1024x1024) | fcn-alexnet-cityscapes-sd     | FCN_ALEXNET_CITYSCAPES_SD     | 21      |
| Pascal VOC (500x356)   | fcn-alexnet-pascal-voc        | FCN_ALEXNET_PASCAL_VOC        | 21      |
| Synthia (CVPR16)       | fcn-alexnet-synthia-cvpr      | FCN_ALEXNET_SYNTHIA_CVPR      | 14      |
| Synthia (Summer-HD)    | fcn-alexnet-synthia-summer-hd | FCN_ALEXNET_SYNTHIA_SUMMER_HD | 14      |
| Synthia (Summer-SD)    | fcn-alexnet-synthia-summer-sd | FCN_ALEXNET_SYNTHIA_SUMMER_SD | 14      |
| Aerial-FPV (1280x720)  | fcn-alexnet-aerial-fpv-720p   | FCN_ALEXNET_AERIAL_FPV_720p   | 2       |

Extract the models with tar. tar only extracts one archive per invocation, but you can batch-extract them with the command below:

# batch extract
for tar in *.tar.gz; do tar -zxvf "$tar"; done

Downloading all of the models above takes roughly 2.2 GB. After extracting, you can delete the archives with the command below; if you have space to spare, you can move them to another folder instead:

# delete all the archives (no sudo needed; they belong to your user)
rm *.tar.gz

Since the models are already downloaded, the download step in the pre-build script needs to be commented out:

# go back to the jetson-inference root directory
# edit the CMakePreBuild.sh file
nano CMakePreBuild.sh

Find the block below and comment out the ./download-models.sh $BUILD_INTERACTIVE line; unless the script has changed since, it sits near the end of the file:

# download/install models and PyTorch
if [ $BUILD_CONTAINER = "NO" ]; then
	# ./download-models.sh $BUILD_INTERACTIVE
	./install-pytorch.sh $BUILD_INTERACTIVE
else
	# in container, the models are mounted and PyTorch is already installed
	echo "Running in Docker container => skipping model downloads";
fi


echo "[Pre-build]  Finished CMakePreBuild script"
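If you prefer not to edit by hand, a sed substitution can comment that line out. Demonstrated here against a stand-in file so it's safe to try anywhere; running the same sed on CMakePreBuild.sh in the repository root applies it for real:

```shell
# stand-in for the relevant line of CMakePreBuild.sh
printf '\t./download-models.sh $BUILD_INTERACTIVE\n' > /tmp/prebuild.demo

# prefix the download-models.sh call with '# ', preserving the indentation
sed -i 's|^\([[:space:]]*\)\./download-models.sh|\1# ./download-models.sh|' /tmp/prebuild.demo

cat /tmp/prebuild.demo
```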

2. Building the Project

Method 1 (official):

With the download line commented out, start the build. The command below also installs PyTorch, and those downloads can be very slow; to speed things up, see the second method below:

cd build
# this runs the CMakePreBuild.sh script, which asks for sudo privileges while installing some prerequisite packages on the Jetson
cmake ../

Method 2:

In the previous section we commented out the model download; the line just below it runs the PyTorch installation script.

Both the model download script (download-models.sh) and the PyTorch install script (install-pytorch.sh) live in the tools folder; running cmake ../ from the build directory copies them into build and executes them.

Open install-pytorch.sh and locate the install_pytorch_v160_python36_jp44 function (the one executed in the author's environment; it downloads PyTorch 1.6.0, a wheel of nearly 300 MB). Inside it you will find the lines below, which download the PyTorch wheel from the official site and install it:

# install pytorch wheel
download_wheel pip3 "torch-1.6.0-cp36-cp36m-linux_aarch64.whl" "https://nvidia.box.com/shared/static/9eptse6jyly1ggt9axbja2yrmj6pbarc.whl"

local wheel_status=$?

if [ $wheel_status != 0 ]; then
	echo "$LOG failed to install PyTorch v1.6.0 (Python 3.6)"
	return 1
fi

The author downloaded the whl file locally with the Motrix download manager; Xunlei (Thunder) or other dedicated downloaders work too, since they fetch in parallel chunks and are much faster. After installing PyTorch from the local wheel, you can comment those lines out. Details follow:

The author has also uploaded the file (PyTorch 1.6.0) to gitee as a split archive; clone it with the command below and extract the parts:

git clone https://gitee.com/XPSWorld/pytorch1.6.0.git

Transfer the downloaded whl file to the Jetson Nano with an FTP tool such as Xftp, then install it with pip3:

sudo pip3 install torch-1.6.0-cp36-cp36m-linux_aarch64.whl

Once it is installed, don't forget to comment out those install lines in the script.
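A quick way to confirm the wheel installed correctly is to try importing torch (prints the version, or a hint if the import fails):

```shell
# capture the torch version, or "missing" if torch cannot be imported
TORCH_OK=$(python3 -c "import torch; print(torch.__version__)" 2>/dev/null || echo "missing")

if [ "$TORCH_OK" = "missing" ]; then
    echo "torch not importable -- check the wheel install"
else
    echo "torch $TORCH_OK installed"
    python3 -c "import torch; print('CUDA available:', torch.cuda.is_available())"
fi
```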

Continuing further down in install_pytorch_v160_python36_jp44, find the commands that build torchvision:

# build torchvision
move_ffmpeg
echo "$LOG cloning torchvision..."
sudo rm -r -f torchvision-36
git clone -b v0.7.0 https://github.com/pytorch/vision torchvision-36
cd torchvision-36
echo "$LOG building torchvision for Python 3.6..."
sudo python3 setup.py install
cd ../
restore_ffmpeg

These commands clone the project from GitHub and build it. Cloning from GitHub can be slow, so you can switch the address to the author's mirror (not necessarily the latest version), replacing

https://github.com/pytorch/vision

with:

https://gitee.com/XPSWorld/vision.git

or with your own gitee mirror address.
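The URL swap can also be scripted. Demonstrated here on a stand-in line; running the same sed on tools/install-pytorch.sh applies it for real:

```shell
# stand-in for the clone line in install-pytorch.sh
echo 'git clone -b v0.7.0 https://github.com/pytorch/vision torchvision-36' > /tmp/install-pytorch.demo

# point the clone at the gitee mirror instead of GitHub
sed -i 's|https://github.com/pytorch/vision|https://gitee.com/XPSWorld/vision.git|' /tmp/install-pytorch.demo

cat /tmp/install-pytorch.demo
```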

Save the changes, go back to the project root, and run the following commands to build:

cd build       # enter the build directory
cmake ../      # run cmake; it automatically executes CMakePreBuild.sh from the parent directory
# ... (long wait)

make                # build
sudo make install   # install
# ... (long wait)

If the build succeeds, it produces the directory structure below (the tree command must first be installed via apt):

sworld@xp:~/Desktop/jetson-inference/build$ tree -L 1
.
├── aarch64
├── CMakeCache.txt
├── CMakeFiles
├── cmake_install.cmake
├── docs
├── download-models.rc
├── download-models.sh
├── examples
├── install_manifest.txt
├── install-pytorch.rc
├── install-pytorch.sh
├── Makefile
├── python
├── tools
├── torchvision-36
└── utils

With that, the project is built!

3. Testing

From the project root, enter the binaries folder and run the test program:

cd build/aarch64/bin
# run the test program
./imagenet-console images/orange_0.jpg output_0.jpg

The output looks like the following, and a result image output_0.jpg is written to the same directory:

sworld@xp:~/Desktop/jetson-inference/build/aarch64/bin$ ./imagenet-console images/orange_0.jpg output_0.jpg 
[video]  created imageLoader from file:///home/sworld/Desktop/jetson-inference/build/aarch64/bin/images/orange_0.jpg
------------------------------------------------
imageLoader video options:
------------------------------------------------
# ...
# (output omitted)
# ...

[TRT]    imageNet -- loaded 1000 class info entries
[TRT]    imageNet -- networks/bvlc_googlenet.caffemodel initialized.
[image] loaded 'images/orange_0.jpg'  (1024x683, 3 channels)
class 0950 - 0.966797  (orange)
imagenet:  96.67969% class #950 (orange)
[image] saved 'output_0.jpg'  (1024x683, 3 channels)

[TRT]    ------------------------------------------------
[TRT]    Timing Report networks/bvlc_googlenet.caffemodel
[TRT]    ------------------------------------------------
[TRT]    Pre-Process   CPU   0.14021ms  CUDA   1.72536ms
[TRT]    Network       CPU  85.34181ms  CUDA  83.09109ms
[TRT]    Post-Process  CPU   0.40840ms  CUDA   0.71406ms
[TRT]    Total         CPU  85.89043ms  CUDA  85.53053ms
[TRT]    ------------------------------------------------

[TRT]    note -- when processing a single image, run 'sudo jetson_clocks' before
                to disable DVFS for more accurate profiling/timing measurements

[image] imageLoader -- End of Stream (EOS) has been reached, stream has been closed
imagenet:  shutting down...
imagenet:  shutdown complete.
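To classify more than one sample, the same binary can be looped over every image in the samples folder. A sketch; run it from build/aarch64/bin, where the images directory lives:

```shell
for img in images/*.jpg; do
    [ -e "$img" ] || continue                             # skip cleanly if no samples are present
    ./imagenet-console "$img" "out_$(basename "$img")"    # writes out_<name>.jpg next to the binary
done
```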

The image is classified as an orange with 96.68% confidence:

(Figure 1: the classification result image, output_0.jpg)

Finally, a screenshot of the project open in Visual Studio Code. The author finds VS Code more comfortable than PyCharm (more plugins, more customizable) and may later write up how to set up a Jetson Nano development environment with VS Code and PyCharm:

(Figure 2: the project open in Visual Studio Code)

References:

Building the Project from Source

Official tutorial: jetson-inference

Fun with Jetson Nano (4): Running jetson-inference

Linux: batch-extracting archives with the tar command
