About TVM - 190522 - TVM Series #001

About TVM

【Text】

TVM is an open deep learning compiler stack for CPUs, GPUs, and specialized accelerators. 【TVM: a compiler stack for CPUs, GPUs, and specialized accelerators】 It aims to close the gap between productivity-focused deep learning frameworks and performance- or efficiency-oriented hardware backends. 【NOTE1: TVM narrows the gap between productivity-focused deep learning frameworks and performance/efficiency-oriented hardware backends】 TVM provides the following main features: 【NOTE2: two main features: 1) compile deep learning models into minimal deployable modules; 2) automatically generate and optimize tensor operators】

  • Compilation of deep learning models in Keras, MXNet, PyTorch, TensorFlow, CoreML, and DarkNet into minimal deployable modules on diverse hardware backends.
  • Infrastructure to automatically generate and optimize tensor operators on more backends with better performance (a minimal sketch follows this list).
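
To make the second feature concrete, here is a minimal sketch of how a tensor operator is declared and optimized by hand, assuming TVM's Python tensor-expression API (tvm.te); exact names and methods vary across releases, so treat this as illustrative rather than canonical:

```python
import numpy as np
import tvm
from tvm import te

# Declare the computation: a vector add. te.compute describes *what* to
# compute; the schedule below decides *how* the loops are generated.
n = te.var("n")
A = te.placeholder((n,), name="A", dtype="float32")
B = te.placeholder((n,), name="B", dtype="float32")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")

# A hand-written optimization: split the loop and vectorize the inner part.
s = te.create_schedule(C.op)
outer, inner = s[C].split(C.op.axis[0], factor=8)
s[C].vectorize(inner)

# Generate an LLVM (CPU) kernel from the schedule.
fadd = tvm.build(s, [A, B, C], target="llvm", name="vector_add")

# Quick numerical check that the generated operator is correct.
dev = tvm.cpu(0)
a = tvm.nd.array(np.random.rand(1024).astype("float32"), dev)
b = tvm.nd.array(np.random.rand(1024).astype("float32"), dev)
c = tvm.nd.array(np.zeros(1024, dtype="float32"), dev)
fadd(a, b, c)
np.testing.assert_allclose(c.numpy(), a.numpy() + b.numpy(), rtol=1e-5)
```

The compute/schedule split is what lets TVM search over loop transformations per backend without touching the operator's mathematical definition.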
The TVM stack began as a research project of the SAMPL group at the Paul G. Allen School of Computer Science & Engineering, University of Washington. The project is now driven by an open source community involving multiple industry and academic institutions. The project adopts an Apache-style, merit-based governance model. 【NOTE3: adopts an Apache-style, merit-based governance model】

TVM provides two levels of optimization, shown in the figure below: computational graph optimization, which performs tasks such as high-level operator fusion, layout transformation, and memory management; and a tensor operator optimization and code generation layer, which optimizes individual tensor operators. More details can be found in the tech report.
[Figure 1: overview of the TVM stack and its hardware backends (source: https://tvm.ai/about)]
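
As an illustration of how the two levels fit together, the sketch below builds a tiny conv2d + ReLU graph with the Relay Python API and compiles it at opt_level=3, which enables graph-level passes such as operator fusion before tensor-level code generation. The shapes, target, and API spelling (post-0.7 releases) are assumptions for the example, not part of the original text:

```python
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# A tiny Relay graph: conv2d followed by ReLU. Graph-level optimization can
# fuse these two operators into a single kernel.
data = relay.var("data", shape=(1, 3, 224, 224), dtype="float32")
weight = relay.var("weight", shape=(16, 3, 3, 3), dtype="float32")
conv = relay.nn.conv2d(data, weight, padding=(1, 1))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], relay.nn.relu(conv)))

# opt_level=3 turns on graph-level passes (fusion, layout transformation, ...);
# tensor-level scheduling and code generation happen inside relay.build.
target = "llvm"  # CPU backend; could also be "cuda", "metal", etc.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target)

# The result is a deployable module; run it once to confirm it works.
dev = tvm.device(target, 0)
rt = graph_executor.GraphModule(lib["default"](dev))
rt.set_input("data", np.random.rand(1, 3, 224, 224).astype("float32"))
rt.set_input("weight", np.random.rand(16, 3, 3, 3).astype("float32"))
rt.run()
print(rt.get_output(0).shape)  # (1, 16, 224, 224)
```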

【NOTE4】
  • Intel Xeon processors
  • Raspberry Pi
  • High-performance GPUs
  • Mobile devices

【Question ?】

  • LLVM
  • CUDA
  • Metal
  • Edge FPGA
  • Cloud FPGA
  • Device Fleet

【Answers !】

  • CUDA

CUDA (Compute Unified Device Architecture) is a computing platform introduced by the GPU vendor NVIDIA. CUDA™ is a general-purpose parallel computing architecture from NVIDIA that enables GPUs to solve complex computational problems. It comprises the CUDA instruction set architecture (ISA) and the parallel compute engines inside the GPU. Developers can write programs for the CUDA™ architecture in C, the most widely used high-level programming language, and those programs run with very high performance on CUDA-capable processors. CUDA 3.0 added support for C++ and Fortran.
ref: Baidu Baike, https://baike.baidu.com/item/CUDA/1186262?fr=aladdin
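
To connect this back to TVM: with a CUDA-enabled TVM build and an NVIDIA GPU available, the same tensor-expression workflow sketched earlier can target CUDA directly. The snippet below is an illustrative assumption (API names follow recent TVM releases) that binds the loop to CUDA blocks and threads and prints the CUDA C source TVM generates:

```python
import tvm
from tvm import te

# Same vector-add declaration as in the earlier sketch.
n = te.var("n")
A = te.placeholder((n,), name="A", dtype="float32")
B = te.placeholder((n,), name="B", dtype="float32")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")

# On a GPU the loop must be mapped onto the CUDA execution model:
# outer iterations -> blocks, inner iterations -> threads.
s = te.create_schedule(C.op)
bx, tx = s[C].split(C.op.axis[0], factor=64)
s[C].bind(bx, te.thread_axis("blockIdx.x"))
s[C].bind(tx, te.thread_axis("threadIdx.x"))

fadd_cuda = tvm.build(s, [A, B, C], target="cuda", name="vector_add")
# Inspect the CUDA C kernel that TVM generated for this operator.
print(fadd_cuda.imported_modules[0].get_source())
```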

【Conclusion】

  • TVM is an end-to-end compiler framework for deploying, optimizing, and accelerating deep learning models.
