【TensorRT】execute_async VS execute_async_v2

        execute_async and execute_async_v2 are the TensorRT APIs for asynchronous inference. Their official descriptions differ by only a single sentence.

execute_async(self: tensorrt.tensorrt.IExecutionContext, batch_size: int = 1, bindings: List[int], stream_handle: int, input_consumed: capsule = None) → bool

Asynchronously execute inference on a batch. This method requires an array of input and output buffers. The mapping from tensor names to indices can be queried using ICudaEngine::get_binding_index().

Parameters

  • batch_size – The batch size. This is at most the value supplied when the ICudaEngine was built.

  • bindings – A list of integers representing input and output buffer addresses for the network.

  • stream_handle – A handle for a CUDA stream on which the inference kernels will be executed.

  • input_consumed – An optional event which will be signaled when the input buffers can be refilled with new data.

execute_async_v2(self: tensorrt.tensorrt.IExecutionContext, bindings: List[int], stream_handle: int, input_consumed: capsule = None) → bool

Asynchronously execute inference on a batch. This method requires an array of input and output buffers. The mapping from tensor names to indices can be queried using ICudaEngine::get_binding_index(). This method only works for execution contexts built from networks with no implicit batch dimension.

Parameters

  • bindings – A list of integers representing input and output buffer addresses for the network.

  • stream_handle – A handle for a CUDA stream on which the inference kernels will be executed.

  • input_consumed – An optional event which will be signaled when the input buffers can be refilled with new data.
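
The one functional difference the documentation calls out is batch handling: execute_async takes a batch_size and targets engines built from implicit-batch networks, while execute_async_v2 only works with explicit-batch networks. As a rough sketch (not from the original post; it assumes a TensorRT 7/8-era API and an ONNX workflow, and build_explicit_batch_engine is just an illustrative name), an explicit-batch engine can be built like this:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_explicit_batch_engine(onnx_path):
    builder = trt.Builder(TRT_LOGGER)
    # EXPLICIT_BATCH makes the batch dimension part of the network definition,
    # which is what execute_async_v2 requires.
    flag = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flag)
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(str(parser.get_error(0)))
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30  # 1 GiB scratch space (deprecated in newer TensorRT versions)
    return builder.build_engine(network, config)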

However, what I really wanted to know is the speed difference in actual use, so I ran a simple comparison: the RPN network from PointPillars is used to run inference on the same point-cloud frame with both APIs. In Experiment 1 the stream.synchronize() call is kept; in Experiment 2 it is commented out, and the results are compared.

import time

import pycuda.driver as cuda  # assumed import: provides the async memcpys and Stream used below


def inference_sync(context, bindings, inputs, outputs, stream, batch_size=1):
    # Transfer input data to the GPU.
    start = time.time()
    [cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
    # Run inference (implicit-batch API).
    context.execute_async(batch_size=batch_size, bindings=bindings, stream_handle=stream.handle)
    # Transfer predictions back from the GPU.
    [cuda.memcpy_dtoh_async(out.host, out.device, stream) for out in outputs]
    # Synchronize the stream.
    stream.synchronize()
    print("time", time.time() - start)
    # Return only the host outputs.
    return [out.host for out in outputs]


def inference_async_v2(context, bindings, inputs, outputs, stream):
    # Transfer input data to the GPU.
    start = time.time()
    [cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
    # Run inference (explicit-batch API, no batch_size argument).
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    # Transfer predictions back from the GPU.
    [cuda.memcpy_dtoh_async(out.host, out.device, stream) for out in outputs]
    # Synchronize the stream.
    stream.synchronize()
    print("v2 time", time.time() - start)
    # Return only the host outputs.
    return [out.host for out in outputs]
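
The inputs, outputs, bindings, and stream arguments above are not shown in the post; they follow the usual HostDeviceMem pattern from NVIDIA's TensorRT Python samples (common.py). A minimal allocation sketch, assuming pycuda and that same sample pattern:

import pycuda.autoinit  # noqa: F401 - creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt


class HostDeviceMem:
    """Pairs a pagelocked host buffer with its device counterpart."""
    def __init__(self, host_mem, device_mem):
        self.host = host_mem
        self.device = device_mem


def allocate_buffers(engine):
    # Adapted from the TensorRT Python samples: one host/device pair per binding.
    inputs, outputs, bindings = [], [], []
    stream = cuda.Stream()
    for binding in engine:
        size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        host_mem = cuda.pagelocked_empty(size, dtype)
        device_mem = cuda.mem_alloc(host_mem.nbytes)
        bindings.append(int(device_mem))
        if engine.binding_is_input(binding):
            inputs.append(HostDeviceMem(host_mem, device_mem))
        else:
            outputs.append(HostDeviceMem(host_mem, device_mem))
    return inputs, outputs, bindings, stream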

Experiment 1: with stream.synchronize()

Run  execute_async (s)       execute_async_v2 (s)
1    0.01885676383972168     0.012494564056396484
2    0.012447595596313477    0.012444019317626953
3    0.012630224227905273    0.012173175811767578
4    0.01241612434387207     0.01211094856262207
5    0.012379646301269531    0.01217961311340332

Experiment 2: without stream.synchronize()

Run  execute_async (s)       execute_async_v2 (s)
1    0.006377458572387695    0.012206554412841797
2    0.006362199783325195    0.012171268463134766
3    0.0064013004302978516   0.012173175811767578
4    0.006360769271850586    0.01211094856262207
5    0.006306886672973633    0.01217961311340332
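
With stream.synchronize() both calls land at roughly the same end-to-end time (about 12 ms after the first warm-up run). Without it, the host timer only measures how quickly each call returns after enqueueing work on the stream, not the GPU inference itself, so the apparent 2x gap in Experiment 2 mainly reflects enqueue/return behavior rather than faster inference. To time just the GPU-side execution, CUDA events on the stream can be used instead of host timers; a small sketch (time_inference_gpu is an illustrative helper, assuming the same pycuda stream and bindings as above):

import pycuda.driver as cuda

def time_inference_gpu(context, bindings, stream, iters=10):
    # Record events on the same stream so the measurement covers only the
    # enqueued GPU work, independent of when the host thread regains control.
    start, end = cuda.Event(), cuda.Event()
    start.record(stream)
    for _ in range(iters):
        context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    end.record(stream)
    end.synchronize()
    return end.time_since(start) / iters  # average milliseconds per inference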

【References】

https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/infer/Core/ExecutionContext.html
