A gst plugin is a GStreamer plugin; it is the mechanism by which DeepStream functionality is embedded into a GStreamer encode/decode pipeline.
GStreamer is a framework for plugins, data flow, and media type handling/negotiation, used to create streaming media applications. Plugins are shared libraries that are loaded dynamically at runtime and can be extended and upgraded independently. When arranged and linked together, plugins form a processing pipeline that defines the data flow of the streaming application. You can learn more about GStreamer through its extensive online documentation, starting with "What is GStreamer?".
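As a quick illustration of plugins being linked into a pipeline, here is a minimal sketch using only stock GStreamer elements (videotestsrc, videoconvert and autovideosink are standard elements, not DeepStream-specific):
# A test video source, a format converter and an auto-selected video sink linked into one pipeline
gst-launch-1.0 videotestsrc ! videoconvert ! autovideosink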
Open-source GStreamer plugins:
In addition to the open-source plugins provided with the GStreamer framework libraries, the DeepStream SDK includes NVIDIA hardware-accelerated plugins that leverage GPU capabilities. For the complete list of DeepStream GStreamer plugins, see the NVIDIA DeepStream Plugin Manual.
NVIDIA hardware-accelerated plugins:
The NVIDIA DeepStream SDK is a streaming analytics toolkit based on the open-source GStreamer multimedia framework. The DeepStream SDK speeds up the development of scalable IVA applications, making it easier for developers to build the core deep learning networks instead of designing end-to-end applications from scratch. The SDK is supported on systems containing an NVIDIA Jetson module or an NVIDIA dGPU adapter. It consists of an extensible collection of hardware-accelerated plugins that interact with low-level libraries to optimize performance, and it defines a standardized metadata structure that enables custom/user-specific additions.
For more details and instructions on the DeepStream SDK, refer to the following materials:
NVIDIA DeepStream SDK Development Guide
NVIDIA DeepStream Plugin Manual
NVIDIA DeepStream SDK API Reference
The gst-inspect command shows that there are many official NVIDIA plugins:
nvidia@nvidia-desktop:~/projects/deepstream-test1-app_toson/build$ gst-inspect-1.0 -a |grep NVIDIA
dsexample: Author NVIDIA Corporation. Post on Deepstream for Tesla forum for any queries @ https://devtalk.nvidia.com/default/board/209/
dsexample: Description NVIDIA example plugin for integration with DeepStream on DGPU
dsexample: Binary package NVIDIA DeepStream 3rdparty IP integration example plugin
nvof: Author NVIDIA Corporation. Post on Deepstream for Jetson/Tesla forum for any queries @ https://devtalk.nvidia.com/default/board/209/
nvof: Description NVIDIA opticalflow plugin for integration with DeepStream on DGPU
nvof: Binary package NVIDIA DeepStream 3rdparty IP integration opticalflow plugin
nvinfer: Author NVIDIA Corporation. Deepstream for Tesla forum: https://devtalk.nvidia.com/default/board/209
nvinfer: Description NVIDIA DeepStreamSDK TensorRT plugin
nvinfer: Binary package NVIDIA DeepStreamSDK TensorRT plugin
nvmultistreamtiler: Author NVIDIA Corporation. Post on Deepstream for Tesla forum for any queries @ https://devtalk.nvidia.com/default/board/209/
nvmultistreamtiler: Description NVIDIA Multistream Tiler plugin
nvmultistreamtiler: Binary package NVIDIA Multistream Plugins
nvdewarper: Author NVIDIA Corporation. Post on Deepstream for Tesla forum for any queries @ https://devtalk.nvidia.com/default/board/209/
nvsegvisual: Author NVIDIA Corporation. Post on Deepstream for Jetson/Tesla forum for any queries @ https://devtalk.nvidia.com/default/board/209/
nvsegvisual: Description NVIDIA segmentation visualization plugin for integration with DeepStream on DGPU
nvsegvisual: Binary package NVIDIA DeepStream Segmantation Visualization Plugin
nvdsosd: Author NVIDIA Corporation. Post on Deepstream for Tesla forum for any queries @ https://devtalk.nvidia.com/default/board/209/
nvmsgconv: Author NVIDIA Corporation. Post on Deepstream for Tesla forum for any queries @ https://devtalk.nvidia.com/default/board/209/
nvofvisual: Author NVIDIA Corporation. Post on Deepstream for Jetson/Tesla forum for any queries @ https://devtalk.nvidia.com/default/board/209/
nvofvisual: Description NVIDIA opticalflow visualization plugin for integration with DeepStream on DGPU
nvofvisual: Binary package NVIDIA DeepStream Optical Flow Visualization Plugin
nvmsgbroker: Author NVIDIA Corporation. Post on Deepstream for Tesla forum for any queries @ https://devtalk.nvidia.com/default/board/209/
nvstreamdemux: Author NVIDIA Corporation. Post on Deepstream for Tesla forum for any queries @ https://devtalk.nvidia.com/default/board/209/
nvstreamdemux: Description NVIDIA Multistream mux/demux plugin
nvstreamdemux: Binary package NVIDIA Multistream Plugins
nvstreammux: Author NVIDIA Corporation. Post on Deepstream for Tesla forum for any queries @ https://devtalk.nvidia.com/default/board/209/
nvstreammux: Description NVIDIA Multistream mux/demux plugin
nvstreammux: Binary package NVIDIA Multistream Plugins
nvtracker: Author NVIDIA Corporation. Post on Deepstream SDK forum for any queries @ https://devtalk.nvidia.com/default/board/209/
nvv4l2decoder: Long-name NVIDIA v4l2 video decoder
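To look at any one of these elements in detail (its pad templates, capabilities and properties, such as the full-frame property used below), point gst-inspect-1.0 at it:
# Print detailed information about the dsexample element
gst-inspect-1.0 dsexample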
See the official documentation: https://docs.nvidia.com/metropolis/deepstream/dev-guide/DeepStream%20Development%20Guide/deepstream_custom_plugin.html#wwpID0E0TB0HA
Only a few plugins come with source code; most are not open source. For now we can start from the plugins whose source is available; if they cannot cover a use case, writing our own plugin is the fallback, which is considerably more tedious.
The DeepStream open-source plugins are located in: /opt/nvidia/deepstream/deepstream-4.0/sources/gst-plugins/
nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-4.0/sources/gst-plugins$ ls
gst-dsexample gst-nvinfer gst-nvmsgbroker gst-nvmsgconv
We can start with dsexample:
/opt/nvidia/deepstream/deepstream-4.0/sources/gst-plugins/gst-dsexample/gstdsexample.cpp
Description of the gst-dsexample plugin: what this GStreamer example plugin (gst-dsexample) demonstrates is described in the official documentation:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/DeepStream%20Development%20Guide/deepstream_custom_plugin.html#wwpID0E0TB0HA
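A prebuilt dsexample is already registered (as the gst-inspect output above shows); if you modify gstdsexample.cpp, the plugin has to be rebuilt and reinstalled. A minimal sketch, assuming the stock DeepStream 4.0 gst-dsexample Makefile (which expects CUDA_VER to be set and provides an install target):
# Rebuild and reinstall the example plugin so gst-launch picks up the new dsexample element
$ cd /opt/nvidia/deepstream/deepstream-4.0/sources/gst-plugins/gst-dsexample
$ export CUDA_VER=10.0
$ make
$ sudo make install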
A quick test can be done from the command line:
#----------------------- use dsexample plugin -----------------------
# use mp4 files
gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream-4.0/samples/streams/sample_720p.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvvideoconvert ! dsexample full-frame=1 ! nvdsosd ! nvegltransform ! nveglglessink
# use rtsp camera
gst-launch-1.0 rtspsrc latency=2000 location="rtsp://admin:[email protected]:554/cam/realmonitor?channel=1&subtype=0" ! rtph264depay ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvvideoconvert ! dsexample full-frame=1 ! nvdsosd ! nvegltransform ! nveglglessink
Result:
(As shown in the figure, two fixed, non-moving boxes are drawn; it is not obvious what they are meant to demonstrate. This can be worked out from gstdsexample.cpp; I am still studying it.)
"Building a Real-time Redaction App Using NVIDIA DeepStream":
《Building a Real-time Redaction App Using NVIDIA DeepStream, Part 1: Training》
《Building a Real-time Redaction App Using NVIDIA DeepStream, Part 2: Deployment》
In Part 1, you learn how to train a RetinaNet network with a ResNet34 backbone for object detection. This covers working with containers, preparing the dataset, tuning hyperparameters, and training the model.
In Part 2, you learn how to build and deploy a real-time AI-based application. The model is deployed with the DeepStream SDK on an NVIDIA Jetson AGX Xavier edge device to redact faces on multiple video streams in real time.
PS: Unfortunately I did not finish debugging this: I had no environment for training, and since the ONNX model file from Part 1 could not be obtained, Part 2 could not be completed either.
Note: when building /opt/nvidia/deepstream/deepstream-4.0/sources/apps/retinanet_for_redaction_with_deepstream in Part 2, there is a pitfall: the Makefile has to be modified at line 47 to add the library -lgstrtp-1.0.
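The -lgstrtp-1.0 flag links against the GStreamer RTP library (part of gst-plugins-base). If you want to confirm the exact linker flags on your system, pkg-config can report them, assuming the GStreamer development packages are installed:
# The output should include -lgstrtp-1.0
$ pkg-config --libs gstreamer-rtp-1.0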
I asked around but have not yet found other learning resources, so the plan is to go back to studying the sample applications in the DeepStream sources.
objectDetector_SSD sample:
Source path: /opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_SSD
See the README for instructions.
This sample requires TensorFlow to be installed. For Jetson, refer to https://elinux.org/Jetson_Zoo#TensorFlow
It also requires downloading: http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2017_11_17.tar.gz
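A sketch of fetching and unpacking the archive before the conversion step below:
# Download and extract the SSD Inception v2 COCO model referenced above
$ wget http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2017_11_17.tar.gz
$ tar -xzvf ssd_inception_v2_coco_2017_11_17.tar.gz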
$ cd ssd_inception_v2_coco_2017_11_17
$ python /usr/lib/python2.7/dist-packages/uff/bin/convert_to_uff.py \
frozen_inference_graph.pb -O NMS \
-p /usr/src/tensorrt/samples/sampleUffSSD/config.py \
-o sample_ssd_relu6.uff
Then copy the resulting sample_ssd_relu6.uff model file into the sample directory (the one containing the README).
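For example, assuming the source path given above:
$ cp sample_ssd_relu6.uff /opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_SSD/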
Build and run:
# the CUDA version must be specified
$ export CUDA_VER=10.0
$ make -C nvdsinfer_custom_impl_ssd
$ gst-launch-1.0 filesrc location=../../samples/streams/sample_1080p_h264.mp4 ! \
decodebin ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 \
height=720 ! nvinfer config-file-path=config_infer_primary_ssd.txt ! \
nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink
# or:
$ deepstream-app -c deepstream_app_config_ssd.txt
objectDetector_Yolo sample:
Source path: /opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo
See the README for instructions.
First download the YOLO model files: $ ./prebuild.sh
(Note: this downloads yolo-v2, yolo-v2-tiny, yolo-v3, and yolo-v3-tiny; you can disable the downloads you do not need.)
Modify the configuration file config_infer_primary_yolo[...].txt according to the README.
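Building and running is analogous to the SSD sample. A sketch assuming the directory and configuration file names from the DeepStream 4.0 objectDetector_Yolo README (verify against your local README):
# the CUDA version must be specified, as with the SSD sample
$ export CUDA_VER=10.0
$ make -C nvdsinfer_custom_impl_Yolo
# run with the config that matches the model you downloaded, e.g. YOLOv3
$ deepstream-app -c deepstream_app_config_yoloV3.txt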