TVM Learning Notes (continuously updated)

The first thing, naturally, is installation; the official site provides a guide.

  • https://tvm.apache.org/docs/install/index.html (enable cuda, cublas, and llvm in the build configuration)
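For reference, the relevant switches in TVM's `config.cmake` look roughly like this (a sketch; `USE_LLVM` can also be set to the path of a local `llvm-config` binary instead of `ON`):

```cmake
# Enable CUDA code generation and the cuBLAS contrib library
set(USE_CUDA ON)
set(USE_CUBLAS ON)
# Enable the LLVM backend (or point this at your llvm-config path)
set(USE_LLVM ON)
```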

Features

Compiling deep learning models

https://zhuanlan.zhihu.com/p/335374653

Expressions for Operators
References

https://tvm.apache.org/docs/tutorials/get_started/tensor_expr_get_started.html#sphx-glr-tutorials-get-started-tensor-expr-get-started-py and https://zhuanlan.zhihu.com/p/267671270

Describe the Computation
import tvm
from tvm import te

n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute(A.shape, lambda i: A[i] + B[i], name="C")
s = te.create_schedule([C.op])
print(tvm.lower(s, [A, B, C], simple_mode=True))  # inspect the lowered schedule result

This is similar to writing TensorFlow code: the computation is described symbolically first. See the reference links above for details on usage.
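The declaration above is purely symbolic; nothing is computed until the schedule is built and run. Its semantics are just elementwise addition, which this NumPy sketch stands in for (the concrete size `n = 8` is an arbitrary choice for illustration):

```python
import numpy as np

n = 8                                # concrete size for the symbolic te.var("n")
A = np.arange(n, dtype="float32")    # stand-ins for the te.placeholder tensors
B = np.ones(n, dtype="float32")
C = A + B                            # what te.compute(..., lambda i: A[i] + B[i]) describes
print(C)
```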

Several scheduling methods:
  • Split: adds one level of loop nesting without changing the loop body. From a parallel-computing standpoint, each iteration of the resulting outer loop can potentially execute independently, which creates the opportunity for parallel speedup. However, if the data in the original loop body is strongly coupled, so that different outer-loop iterations depend on each other, this kind of parallelization is not possible. Below is an example together with the lowered result.
m = te.var("m")
A = te.placeholder((m,), name="A")
B = te.compute((m,), lambda i: A[i] * 2, name="B")
s = te.create_schedule(B.op)
xo, xi = s[B].split(B.op.axis[0], factor=32)  # nparts=32 would instead fix the number of outer iterations
print(tvm.lower(s, [A, B], simple_mode=True))

The result after splitting, effectively one extra level of loop nesting:

primfn(A_1: handle, B_1: handle) -> ()
  attr = {"global_symbol": "main", "tir.noalias": True}
  buffers = {B: Buffer(B_2: Pointer(float32), float32, [m: int32], [stride: int32], type="auto"),
             A: Buffer(A_2: Pointer(float32), float32, [m], [stride_1: int32], type="auto")}
  buffer_map = {A_1: A, B_1: B} {
  for (i.outer: int32, 0, floordiv((m + 31), 32)) {
    for (i.inner: int32, 0, 32) {
      if @tir.likely((((i.outer*32) + i.inner) < m), dtype=bool) {
        B_2[(((i.outer*32) + i.inner)*stride)] = ((float32*)A_2[(((i.outer*32) + i.inner)*stride_1)]*2f32)
      }
    }
  }
}
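What split does can be mimicked in plain Python: the single loop over m becomes an outer/inner pair, and the bounds check plays the role of the `@tir.likely` tail guard when m is not a multiple of the factor (an analogy for the lowered code above, with an arbitrary m = 100):

```python
m, factor = 100, 32
A = [float(i) for i in range(m)]
B = [0.0] * m

# original loop:  for i in range(m): B[i] = A[i] * 2
# after split with factor=32:
for i_outer in range((m + factor - 1) // factor):  # ceil(m / 32) outer iterations
    for i_inner in range(factor):
        i = i_outer * factor + i_inner
        if i < m:                                  # tail guard, as in @tir.likely
            B[i] = A[i] * 2
```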
  • Tile: adds nested loops over a 2-D domain, splitting axis 0 and axis 1 at the same time.
A = te.placeholder((m, n), name="A")
B = te.compute((m, n), lambda i, j: A[i, j], name="B")

s = te.create_schedule(B.op)
xo, yo, xi, yi = s[B].tile(B.op.axis[0], B.op.axis[1], x_factor=10, y_factor=5)
print(tvm.lower(s, [A, B], simple_mode=True))
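In loop terms, tile with x_factor=10 and y_factor=5 is a split on each axis plus a reorder, giving four nested loops (a plain-Python analogy; sizes here are chosen divisible by the factors so no tail guard is needed):

```python
m, n = 20, 10
A = [[float(i * n + j) for j in range(n)] for i in range(m)]
B = [[0.0] * n for _ in range(m)]

# loop order after tile: (i.outer, j.outer, i.inner, j.inner)
for i_outer in range(m // 10):
    for j_outer in range(n // 5):
        for i_inner in range(10):
            for j_inner in range(5):
                i = i_outer * 10 + i_inner
                j = j_outer * 5 + j_inner
                B[i][j] = A[i][j]
```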
  • Fuse: fusing is the inverse of splitting; it merges two axes into a single loop.
A = te.placeholder((m, n), name="A")
B = te.compute((m, n), lambda i, j: A[i, j], name="B")

s = te.create_schedule(B.op)
# tile into four axes first: (i.outer, j.outer, i.inner, j.inner)
xo, yo, xi, yi = s[B].tile(B.op.axis[0], B.op.axis[1], x_factor=10, y_factor=5)
# then fuse (i.inner, j.inner) into one axis: (i.inner.j.inner.fused)
fused = s[B].fuse(xi, yi)
print(tvm.lower(s, [A, B], simple_mode=True))
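Fusing (i.inner, j.inner) flattens the two inner loops into a single loop of length 10 * 5 = 50; the fused index is decomposed back into the two inner indices with divmod (a plain-Python analogy for the schedule above):

```python
m, n = 20, 10
A = [[float(i * n + j) for j in range(n)] for i in range(m)]
B = [[0.0] * n for _ in range(m)]

for i_outer in range(m // 10):
    for j_outer in range(n // 5):
        for fused in range(10 * 5):              # the i.inner.j.inner.fused axis
            i_inner, j_inner = divmod(fused, 5)  # recover the two inner indices
            i = i_outer * 10 + i_inner
            j = j_outer * 5 + j_inner
            B[i][j] = A[i][j]
```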
  • Reorder: rearranges the loop axes into a specified order, mainly to improve cache hit rates. A 2-D row-major array can be viewed as a sequence of rows. Each iteration of the loop body below does three reads and one write. With k as the innermost loop, the accesses to A[i,k] are cache-friendly (contiguous in k), but the access patterns for C[i,j] and B[k,j] are not. The idea is therefore to hoist k outward and optimize over j as the innermost axis.
for i in range(1024):
    for j in range(1024):
        for k in range(32):
            C[i,j] += A[i,k] * B[k,j]

# after reorder: k is hoisted outward, j becomes innermost
for k in range(32):
    for i in range(1024):
        for j in range(1024):
            C[i,j] += A[i,k] * B[k,j]

How this is expressed in TVM:

A = te.placeholder((m, n), name="A")
B = te.compute((m, n), lambda i, j: A[i, j], name="B")

s = te.create_schedule(B.op)
# tile into four axes first: (i.outer, j.outer, i.inner, j.inner)
xo, yo, xi, yi = s[B].tile(B.op.axis[0], B.op.axis[1], x_factor=10, y_factor=5)
# then reorder the axes: (i.inner, j.outer, i.outer, j.inner)
s[B].reorder(xi, yo, xo, yi)
print(tvm.lower(s, [A, B], simple_mode=True))
  • A few other methods (compute_at, compute_root, compute_inline). Start from two chained stages:
B = te.compute((m,), lambda i: A[i] + 1, name="B")
C = te.compute((m,), lambda i: B[i] * 2, name="C")

s = te.create_schedule(C.op)
print(tvm.lower(s, [A, B, C], simple_mode=True))


primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
  attr = {"global_symbol": "main", "tir.noalias": True}
  buffers = {C: Buffer(C_2: Pointer(float32), float32, [m: int32], [stride: int32], type="auto"),
             B: Buffer(B_2: Pointer(float32), float32, [m], [stride_1: int32], type="auto"),
             A: Buffer(A_2: Pointer(float32), float32, [m], [stride_2: int32], type="auto")}
  buffer_map = {A_1: A, B_1: B, C_1: C} {
  for (i: int32, 0, m) {
    B_2[(i*stride_1)] = ((float32*)A_2[(i*stride_2)] + 1f32)
  }
  for (i_1: int32, 0, m) {
    C_2[(i_1*stride)] = ((float32*)B_2[(i_1*stride_1)]*2f32)
  }
}

We use compute_at to move the computation of B into the first axis of C's computation, so the two loops are merged into one.

A = te.placeholder((m,), name="A")
B = te.compute((m,), lambda i: A[i] + 1, name="B")
C = te.compute((m,), lambda i: B[i] * 2, name="C")

s = te.create_schedule(C.op)
s[B].compute_at(s[C], C.op.axis[0])
# s[B].compute_root()  # calling this would move B back out to its own root-level loop, undoing compute_at

print(tvm.lower(s, [A, B, C], simple_mode=True))

primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
  attr = {"global_symbol": "main", "tir.noalias": True}
  buffers = {B: Buffer(B_2: Pointer(float32), float32, [m: int32], [stride: int32], type="auto"),
             C: Buffer(C_2: Pointer(float32), float32, [m], [stride_1: int32], type="auto"),
             A: Buffer(A_2: Pointer(float32), float32, [m], [stride_2: int32], type="auto")}
  buffer_map = {A_1: A, B_1: B, C_1: C} {
  for (i: int32, 0, m) {
    B_2[(i*stride)] = ((float32*)A_2[(i*stride_2)] + 1f32)
    C_2[(i*stride_1)] = ((float32*)B_2[(i*stride)]*2f32)
  }
}
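The effect of compute_at, as in the lowered code above, is producer-consumer fusion: B[i] is computed right where C[i] consumes it, inside a single loop (a plain-Python analogy with an arbitrary m = 8):

```python
m = 8
A = [float(i) for i in range(m)]
B = [0.0] * m
C = [0.0] * m

# before compute_at: two separate loops over m, one for B and one for C
# after s[B].compute_at(s[C], C.op.axis[0]): one loop computes both
for i in range(m):
    B[i] = A[i] + 1
    C[i] = B[i] * 2
```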

compute_inline marks a stage as inlined: its expression is substituted directly into its consumers instead of being materialized in a loop of its own.

A = te.placeholder((m,), name="A")
B = te.compute((m,), lambda i: A[i] + 1, name="B")
C = te.compute((m,), lambda i: B[i] * 2, name="C")

s = te.create_schedule(C.op)
s[B].compute_inline()
print(tvm.lower(s, [A, B, C], simple_mode=True))

primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
  attr = {"global_symbol": "main", "tir.noalias": True}
  buffers = {C: Buffer(C_2: Pointer(float32), float32, [m: int32], [stride: int32], type="auto"),
             B: Buffer(B_2: Pointer(float32), float32, [m], [stride_1: int32], type="auto"),
             A: Buffer(A_2: Pointer(float32), float32, [m], [stride_2: int32], type="auto")}
  buffer_map = {A_1: A, B_1: B, C_1: C} {
  for (i: int32, 0, m) {
    C_2[(i*stride)] = (((float32*)A_2[(i*stride_2)] + 1f32)*2f32)
  }
}
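After compute_inline, B no longer has a loop or buffer of its own; its expression A[i] + 1 is substituted directly into C, matching the lowered body above (a plain-Python analogy with an arbitrary m = 8):

```python
m = 8
A = [float(i) for i in range(m)]
C = [0.0] * m

# B[i] = A[i] + 1 has been substituted into C's expression
for i in range(m):
    C[i] = (A[i] + 1) * 2
```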
How to optimize a Keras model with auto-tuning:
  • https://zhuanlan.zhihu.com/p/340557261
