Model formats that TVM can load and compile

Relay is essentially TVM's model-description IR: models from the various frameworks are first converted into Relay, and all further optimization happens on that representation.

Example usage: load an ONNX model with tvm.relay.frontend.from_onnx(onnx_model). Note that from_onnx expects an already-loaded onnx.ModelProto (e.g. from onnx.load("/path/to/onnx/model")), not a file path, and it returns a Relay module plus a parameter dict.
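The usual ONNX flow can be sketched as below. This is a minimal, untested-here sketch: the input name and shape are assumptions that must match the actual model, and imports are deferred so the function can be defined without tvm/onnx installed.

```python
def compile_onnx_model(model_path, input_name="input", input_shape=(1, 3, 224, 224)):
    # Deferred imports: this sketch only needs tvm/onnx when actually called.
    import onnx
    import tvm
    from tvm import relay

    onnx_model = onnx.load(model_path)       # from_onnx wants an onnx.ModelProto
    shape_dict = {input_name: input_shape}   # frontend needs concrete input shapes
    mod, params = relay.frontend.from_onnx(onnx_model, shape=shape_dict)

    # Compile the Relay module for the local CPU.
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm", params=params)
    return lib
```

The returned `lib` can then be wrapped in a `tvm.contrib.graph_executor` module to run inference.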

tvm.relay.frontend

Functions:

from_caffe(init_net, predict_net, …): Convert a Caffe model into a compatible Relay Function.
from_caffe2(init_net, predict_net[, shape, …]): Load a Caffe2 graph (init_net and predict_net) into a Relay Function.
from_coreml(model[, shape]): Convert a Core ML model into a Relay Function.
from_darknet(net[, shape, dtype]): Convert a Darknet model into a compatible Relay Function.
from_keras(model[, shape, layout]): Convert a Keras model to a Relay Function.
from_mxnet(symbol[, shape, dtype, …]): Convert an MXNet model into a compatible Relay Function.
from_onnx(model[, shape, dtype, opset, …]): Convert an ONNX model into an equivalent Relay Function.
from_pytorch(script_module, input_infos[, …]): Load a scripted/traced PyTorch model and convert it into Relay.
from_tensorflow(graph[, layout, shape, outputs]): Load a TensorFlow graph (a Python GraphDef object) into Relay.
from_tflite(model[, shape_dict, dtype_dict]): Convert a TFLite model into a compatible Relay Function.
quantize_conv_bias_mkldnn_from_var(bias_var, …): Quantized conv2d bias.
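Frontends other than ONNX follow the same pattern: produce the framework's own graph object, then hand it to the matching from_* function. As one hedged sketch, from_pytorch takes a TorchScript module (from torch.jit.trace or torch.jit.script) plus a list of (input_name, shape) pairs; the name "input0" below is an illustrative assumption, not required by TVM.

```python
def compile_pytorch_model(model, example_input):
    # Deferred imports: defining this sketch does not require torch/tvm.
    import torch
    import tvm
    from tvm import relay

    # from_pytorch needs a TorchScript module, so trace the eager model first.
    scripted = torch.jit.trace(model.eval(), example_input)
    input_infos = [("input0", tuple(example_input.shape))]
    mod, params = relay.frontend.from_pytorch(scripted, input_infos)

    # Compile the Relay module for the local CPU.
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm", params=params)
    return lib
```

Whichever frontend is used, the result is the same (mod, params) pair, so everything downstream of the conversion step is framework-independent.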
