TVM 0.8: Displaying Instructions and IR


  1. Install TVM following the instructions on the official TVM website
  2. Run the examples
    cd tvm/tutorials/frontend

    So far only two examples have been tried:
    python from_onnx.py
    python from_pytorch.py
    The scripts download resources from the internet while running; if the downloads fail, stage the resources in advance:
    (1) Download the attached resource.zip
    (2) Download the script cp_resource.sh,
    change its first line, and run:
    source cp_resource.sh
    Reference: https://tvm.apache.org/docs/tutorials/index.html
user="yourname"

unzip resource.zip

mkdir -p "/home/${user}/.tvm_test_data/data"
mkdir -p "/home/${user}/.tvm_test_data/onnx"

cp resource/cat.png "/home/${user}/.tvm_test_data/data/cat.png"
cp resource/super_resolution_0.2.onnx "/home/${user}/.tvm_test_data/onnx/super_resolution.onnx"

cp resource/imagenet_synsets.txt "/home/${user}/.tvm_test_data/data/imagenet_synsets.txt"
cp resource/imagenet_classes.txt "/home/${user}/.tvm_test_data/data/imagenet_classes.txt"
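The staging done by cp_resource.sh can also be sketched in Python. The file names below match the script above; the temporary directory and placeholder file contents are stand-ins for illustration only (in real use the destination is the user's home directory and the files come from resource.zip):

```python
import shutil
import tempfile
from pathlib import Path

# Stand-in for /home/${user}; in real use this would be the user's home directory.
home = Path(tempfile.mkdtemp())

# Stage demo input files the way unzipping resource.zip would (contents are placeholders).
resource = home / "resource"
resource.mkdir()
for name in ["cat.png", "super_resolution_0.2.onnx",
             "imagenet_synsets.txt", "imagenet_classes.txt"]:
    (resource / name).write_bytes(b"placeholder")

# Mirror cp_resource.sh: create ~/.tvm_test_data/{data,onnx} and copy the files in,
# renaming the ONNX model to the file name the tutorials look up.
dests = {
    "cat.png": "data/cat.png",
    "super_resolution_0.2.onnx": "onnx/super_resolution.onnx",
    "imagenet_synsets.txt": "data/imagenet_synsets.txt",
    "imagenet_classes.txt": "data/imagenet_classes.txt",
}
for src, dst in dests.items():
    target = home / ".tvm_test_data" / dst
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy(resource / src, target)

print(sorted(p.name for p in (home / ".tvm_test_data").rglob("*") if p.is_file()))
# → ['cat.png', 'imagenet_classes.txt', 'imagenet_synsets.txt', 'super_resolution.onnx']
```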
  3. Print the Relay module
    Taking from_onnx.py as an example, change line 81 to print(mod)
  4. Print the TVM IR
    Download the zip file: https://github.com/uwsampl/tophub
    unzip tophub-master.zip
    cp tophub-master/tophub/* ~/.tvm/tophub/
    cd tvm/vta/tests/python/integration
    Open test_benchmark_topi_conv2d.py and change print_ir=False on line 88 to print_ir=True
    python test_benchmark_topi_conv2d.py
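The one-line edit above can also be scripted. The file created below is a hypothetical single-line stand-in for line 88 of test_benchmark_topi_conv2d.py, since only the flag flip matters here:

```python
from pathlib import Path

# Hypothetical stand-in for line 88 of test_benchmark_topi_conv2d.py.
demo = Path("demo_conv2d_line88.py")
demo.write_text(
    "run_conv2d(env, remote, wl, target, check_correctness=True, print_ir=False)\n"
)

# Flip the flag exactly as the step above describes doing by hand.
demo.write_text(demo.read_text().replace("print_ir=False", "print_ir=True"))
print("print_ir=True" in demo.read_text())  # → True
```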
  5. Print the VTA instructions
    Follow the "print VTA Instructions" procedure
  6. Print the NVVM IR
    6.1 Enable the CUDA backend:
    Edit build/config.cmake to customize the compilation options.
    Change set(USE_CUDA OFF) to set(USE_CUDA /path/to/cuda), for example: set(USE_CUDA /usr/local/cuda-10.2)
    cmake .. ; make -j4
    6.2 Download the demo file and run it: python dump_NVVM_IR_demo.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
"""
Compile ONNX Models
===================
**Author**: `Joshua Z. Zhang `_

This article is an introductory tutorial to deploy ONNX models with Relay.

To begin, the ONNX package must be installed.

A quick solution is to install the protobuf compiler, and

.. code-block:: bash

    pip install onnx --user

or refer to the official site:
https://github.com/onnx/onnx
"""
import os

import onnx
import numpy as np
import tvm
from tvm import te
import tvm.relay as relay
from tvm.contrib.download import download_testdata

######################################################################
# Load pretrained ONNX model
# ---------------------------------------------
# The example super resolution model used here is exactly the same model in onnx tutorial
# http://pytorch.org/tutorials/advanced/super_resolution_with_caffe2.html
# we skip the pytorch model construction part, and download the saved onnx model
model_url = "".join(
    [
        "https://gist.github.com/zhreshold/",
        "bcda4716699ac97ea44f791c24310193/raw/",
        "93672b029103648953c4e5ad3ac3aadf346a4cdc/",
        "super_resolution_0.2.onnx",
    ]
)
model_path = download_testdata(model_url, "super_resolution.onnx", module="onnx")
# now you have super_resolution.onnx on disk
onnx_model = onnx.load(model_path)

######################################################################
# Load a test image
# ---------------------------------------------
# A single cat dominates the examples!
from PIL import Image

img_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true"
img_path = download_testdata(img_url, "cat.png", module="data")
img = Image.open(img_path).resize((224, 224))
img_ycbcr = img.convert("YCbCr")  # convert to YCbCr
img_y, img_cb, img_cr = img_ycbcr.split()
x = np.array(img_y)[np.newaxis, np.newaxis, :, :]

######################################################################
# Compile the model with relay
# ---------------------------------------------
target = tvm.target.cuda()

input_name = "1"
shape_dict = {input_name: x.shape}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
print(mod)

with tvm.transform.PassContext(opt_level=1):
    lib = relay.build_module.build(mod, target=target, params=params)

    # export the compiled library, then unpack the archive to expose devc.o
    filename = "net.tar"
    lib.export_library(filename)

    os.system("tar -xvf {}".format(filename))

    filename = "devc.o"
    print("INFO. saved to ", filename)
    print("INFO. search 'NVVM Compiler'")

6.3 A temporary file devc.o appears in the working directory. Open it with vim or gvim and search for NVVM to see the NVVM IR. Delete the binary characters by hand, then save the result as a text file.
Note, useful vim commands:
Delete everything from the first line through the current line: dgg
Delete everything from the current line through the last line: dG
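Instead of deleting the binary bytes in vim by hand, the printable IR text can be pulled out of devc.o programmatically, much like the Unix strings utility. The byte blob below is a made-up stand-in for devc.o contents, used only to illustrate the extraction:

```python
import re

# Made-up stand-in for devc.o: object-file bytes with embedded NVVM IR text.
blob = (
    b"\x7fELF\x02\x01\x00\x00"
    + b"; ModuleID = 'fadd_kernel'\n"
    + b'target triple = "nvptx64-nvidia-cuda"\n'
    + b"\x00\xde\xad\xbe\xef"
)

# Keep runs of printable ASCII (plus newlines) at least 8 bytes long,
# similar to `strings -n 8`; non-text bytes break the runs and are dropped.
chunks = re.findall(rb"[\x20-\x7e\n]{8,}", blob)
ir_text = b"".join(chunks).decode("ascii")
print(ir_text)
```

In real use, read the blob with `open("devc.o", "rb").read()` and write `ir_text` out as the text file.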
6.4 NVVM IR specification: https://docs.nvidia.com/cuda/nvvm-ir-spec/index.html#introduction
