ONNX Runtime requires explicitly specifying providers

ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)

I ran into the error above while running a program today; the message itself already points to the fix.

import onnxruntime as ort

def run_inference(model, input_data):
    # On ORT >= 1.9 this raises the ValueError above: providers must be set explicitly.
    ort_session = ort.InferenceSession(model)

    outputs = ort_session.run(
        None,
        {"input": input_data},  # assumes the model's input tensor is named "input"
    )
    return outputs

Change it to the following (CPU); if TensorRT or a GPU is available, pass 'TensorrtExecutionProvider' or 'CUDAExecutionProvider' instead:

import onnxruntime as ort

def run_inference(model, input_data):
    # Since ORT 1.9, the providers list must be passed explicitly.
    ort_session = ort.InferenceSession(model, providers=["CPUExecutionProvider"])

    outputs = ort_session.run(
        None,
        {"input": input_data},  # assumes the model's input tensor is named "input"
    )
    return outputs
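
If the same script has to run on machines with different ORT builds, you can also build the providers list at runtime from whatever the installed build actually supports. A minimal sketch; the path "model.onnx" is a placeholder:

import onnxruntime as ort

# Providers compiled into this ORT build, e.g.
# ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
available = ort.get_available_providers()

# Prefer TensorRT, then CUDA, and fall back to CPU.
preferred = ["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in preferred if p in available] or ["CPUExecutionProvider"]

ort_session = ort.InferenceSession("model.onnx", providers=providers)
print(ort_session.get_providers())  # providers actually selected for this session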
