An Analysis of DeepStream's deepstream_test1_app

1. Project Introduction

Project path: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test1

This project is a simple DeepStream demo that shows how to assemble various DeepStream SDK elements into a pipeline and extract meaningful information from a video stream.

2. Overall Application Pipeline Diagram

(Figure 1: overall application pipeline — filesrc → h264parse → nvv4l2decoder → nvstreammux → nvinfer → nvvideoconvert → nvdsosd → video renderer)
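For reference, the pipeline can be sketched in C roughly as follows. Element and property names come from the sample app; bus setup, error handling, and the nvstreammux request-pad link are abbreviated, so treat this as a sketch rather than a drop-in replacement for deepstream_test1_app.c:

```c
#include <gst/gst.h>

/* Sketch of how deepstream-test1 assembles its pipeline. */
static GstElement *
make_pipeline (const char *filepath)
{
  GstElement *pipeline  = gst_pipeline_new ("dstest1-pipeline");
  GstElement *source    = gst_element_factory_make ("filesrc", "file-source");
  GstElement *parser    = gst_element_factory_make ("h264parse", "h264-parser");
  GstElement *decoder   = gst_element_factory_make ("nvv4l2decoder", "nvv4l2-decoder");
  GstElement *streammux = gst_element_factory_make ("nvstreammux", "stream-muxer");
  GstElement *pgie      = gst_element_factory_make ("nvinfer", "primary-nvinference-engine");
  GstElement *nvvidconv = gst_element_factory_make ("nvvideoconvert", "nvvideo-converter");
  GstElement *nvosd     = gst_element_factory_make ("nvdsosd", "nv-onscreendisplay");
  GstElement *sink      = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");

  g_object_set (G_OBJECT (source), "location", filepath, NULL);
  g_object_set (G_OBJECT (streammux), "batch-size", 1, NULL);
  g_object_set (G_OBJECT (pgie),
      "config-file-path", "dstest1_pgie_config.txt", NULL);

  gst_bin_add_many (GST_BIN (pipeline), source, parser, decoder,
      streammux, pgie, nvvidconv, nvosd, sink, NULL);

  /* filesrc -> h264parse -> nvv4l2decoder; in the real app the decoder
   * then feeds nvstreammux through a request pad ("sink_0"). */
  gst_element_link_many (source, parser, decoder, NULL);
  gst_element_link_many (streammux, pgie, nvvidconv, nvosd, sink, NULL);
  return pipeline;
}
```

On Jetson the sample inserts a platform-specific transform (nvegltransform) before the renderer; on x86 nveglglessink is linked directly as above.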

3. How to Use It

Compilation Steps:
  $ cd apps/deepstream-test1/
  $ make
  $ ./deepstream-test1-app <h264_elementary_stream>
# Example:
  $ ./deepstream-test1-app /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264

4. Element Analysis

The diagram above briefly shows what each element does; below we focus on the DeepStream SDK's nvinfer element.

DeepStream uses an nvinfer instance to accelerate neural-network inference through the TensorRT API. In the application, we must configure the nvinfer instance correctly via a config file.

In the application, the following code points the nvinfer instance at its config file:

  /* Set all the necessary properties of the nvinfer element,
   * the necessary ones are: */
  g_object_set (G_OBJECT (pgie),
      "config-file-path", "dstest1_pgie_config.txt", NULL);

Below we walk through some of the important parameters in this config file:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
# ------------- caffe model -----------------------
model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
# Path to the TensorRT engine. Once an engine has been built on a given platform,
# this parameter lets nvinfer load it directly and skip rebuilding the engine.
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
# -----------------------------------------------------------------------
# Label file listing the model's classes
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
# ----------------- only int8 ----------------------------
# Required when running INT8 inference (if the platform supports it)
int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
# ----------------------------------
force-implicit-batch-dim=1
batch-size=1
# Network inference precision
# 0 = FP32, 1 = INT8, 2 = FP16 (e.g. Jetson TX2)
network-mode=1
# Number of classes the model detects
num-detected-classes=4
interval=0
gie-unique-id=1
# -------------- for caffe ----------
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
# ---------------------------------------
#scaling-filter=0
#scaling-compute-hw=0

[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1

The following explains some of the general parameters.

  • Parameters that must be specified to generate the network definition and build the engine
# Following properties are mandatory when engine files are not specified:
#   int8-calib-file (only required when running in INT8 mode)
# --------- mandatory when the network is parsed from a Caffe model ---------
#   Caffemodel mandatory properties: model-file, proto-file, output-blob-names
# ----------------------------------------------------------------------------
# --------- mandatory when the network is parsed from a TensorFlow (UFF) model ---------
#   UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
# --------------------------------------------------------------------------
# --------- mandatory when the network is parsed from an ONNX model ---------
#   ONNX: onnx-file
#--------------------------------------------------------------------------
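As an illustration of the ONNX case, a hypothetical minimal config might look like the fragment below; the file names are placeholders (not files shipped with the SDK), and any omitted properties fall back to nvinfer defaults:

```
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
onnx-file=model.onnx
labelfile-path=labels.txt
batch-size=1
# 0 = FP32, 1 = INT8, 2 = FP16
network-mode=0
num-detected-classes=4
gie-unique-id=1
```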

5. Handling nvinfer's Output Data

Before nvinfer's output is rendered by the nvosd element, we can post-process it. By adding a probe to the nvosd element's sink pad, the app draws each bounding box's class label and shows per-class counts at the top left of the video.
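Hooking the probe up happens while the pipeline is being built. In the sample source the callback is named osd_sink_pad_buffer_probe, and it is attached roughly like this (an excerpt-style sketch; `nvosd` is the nvdsosd element created earlier):

```c
/* Attach a buffer probe to the sink pad of the OSD element so the
 * callback runs on every buffer just before it is rendered. */
GstPad *osd_sink_pad = gst_element_get_static_pad (nvosd, "sink");
if (!osd_sink_pad) {
  g_print ("Unable to get sink pad\n");
} else {
  gst_pad_add_probe (osd_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
      osd_sink_pad_buffer_probe, NULL, NULL);
  gst_object_unref (osd_sink_pad);
}
```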

// osd_sink_pad_buffer_probe will extract metadata received on the OSD sink pad
// and update params for drawing rectangle, object information etc.

static guint frame_number = 0;

static GstPadProbeReturn
osd_sink_pad_buffer_probe(GstPad* pad, GstPadProbeInfo* info, gpointer u_data)
{
	GstBuffer* buf = (GstBuffer*) info->data;
	guint num_rects = 0;
	NvDsObjectMeta* obj_meta= NULL;
	guint vehicle_count = 0;
	guint person_count = 0;
	NvDsMetaList* l_frame = NULL;
	NvDsMetaList* l_obj = NULL;
	NvDsDisplayMeta* display_meta = NULL;

	NvDsBatchMeta* batch_meta = gst_buffer_get_nvds_batch_meta(buf);
	// frame information analysis
	for(l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next)
	{
		NvDsFrameMeta* frame_meta = (NvDsFrameMeta*) (l_frame->data);
		int offset = 0;
		// objects in each frame 
		for(l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next)
		{

			obj_meta = (NvDsObjectMeta*)(l_obj->data);
			// each object information
			if(obj_meta->class_id == PGIE_CLASS_ID_VEHICLE)
			{
				vehicle_count++;
				num_rects++;
			}
			if(obj_meta->class_id == PGIE_CLASS_ID_PERSON)
			{
				person_count++;
				num_rects++;
			}
		}
		display_meta = nvds_acquire_display_meta_from_pool(batch_meta);
		NVOSD_TextParams* txt_params = &display_meta->text_params[0];
		display_meta->num_labels = 1;
		txt_params->display_text = g_malloc0(MAX_DISPLAY_LEN);
		offset = snprintf(txt_params->display_text, MAX_DISPLAY_LEN, "Person = %d ", person_count);
		snprintf(txt_params->display_text + offset, MAX_DISPLAY_LEN - offset, "Vehicle = %d ", vehicle_count);

		/* Now set the offsets where the string should appear */
        txt_params->x_offset = 10;
        txt_params->y_offset = 12;

        /* Font , font-color and font-size */
        txt_params->font_params.font_name = "Serif";
        txt_params->font_params.font_size = 10;
        txt_params->font_params.font_color.red = 1.0;
        txt_params->font_params.font_color.green = 1.0;
        txt_params->font_params.font_color.blue = 1.0;
        txt_params->font_params.font_color.alpha = 1.0;

        /* Text background color */
        txt_params->set_bg_clr = 1;
        txt_params->text_bg_clr.red = 0.0;
        txt_params->text_bg_clr.green = 0.0;
        txt_params->text_bg_clr.blue = 0.0;
        txt_params->text_bg_clr.alpha = 1.0;

        nvds_add_display_meta_to_frame(frame_meta, display_meta);

	}
	g_print ("Frame Number = %d Number of objects = %d "
            "Vehicle Count = %d Person Count = %d\n",
            frame_number, num_rects, vehicle_count, person_count);
    frame_number++;
    return GST_PAD_PROBE_OK;
}

That concludes my brief walkthrough of DeepStream's first demo. If you spot any mistakes, please point them out.
