Building onnxruntime from source

Setting the GPU architecture flags in CMakeLists.txt

set(CMAKE_CUDA_FLAGS "${CMAKE_CUDA_FLAGS} -gencode=arch=compute_62,code=sm_62") # tx2
set(CMAKE_CUDA_FLAGS "${CMAKE_CUDA_FLAGS} -gencode=arch=compute_53,code=sm_53") # nano
set(CMAKE_CUDA_FLAGS "${CMAKE_CUDA_FLAGS} -gencode=arch=compute_61,code=sm_61") # 1070 gtx
set(CMAKE_CUDA_FLAGS "${CMAKE_CUDA_FLAGS} -gencode=arch=compute_72,code=sm_72") # nx 
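
The four -gencode flags can also be combined into one setting so that a single build runs on all of the boards above (a minimal sketch; embedding every architecture makes the resulting binaries noticeably larger, so listing only the devices actually targeted is usually preferable):

set(CMAKE_CUDA_FLAGS "${CMAKE_CUDA_FLAGS} -gencode=arch=compute_53,code=sm_53 -gencode=arch=compute_61,code=sm_61 -gencode=arch=compute_62,code=sm_62 -gencode=arch=compute_72,code=sm_72") # nano + 1070 gtx + tx2 + nx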

Building the CPU version of onnxruntime from source

  • Windows (defaults to the Visual Studio 2017 generator; a newer Visual Studio 16 2019 generator can be selected instead, as shown in the example after this list)
.\build.bat --config RelWithDebInfo --build_shared_lib --parallel
  • Linux
./build.sh --config RelWithDebInfo --build_shared_lib --parallel
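
A sketch of the Windows invocation with the newer generator selected explicitly; the --cmake_generator option is forwarded to CMake by the build script, but the set of accepted generator names depends on the onnxruntime version:

.\build.bat --config RelWithDebInfo --build_shared_lib --parallel --cmake_generator "Visual Studio 16 2019"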

Building the CUDA version of onnxruntime from source

CUDA prerequisites:
Install CUDA and cuDNN
ONNX Runtime is built and tested with CUDA 10.2 and cuDNN 8.0.3 using Visual Studio 2019 version 16.7.
ONNX Runtime can also be built with CUDA versions from 10.1 up to 11.0, and cuDNN versions from 7.6 up to 8.0.
The path to the CUDA installation must be provided via the CUDA_PATH environment variable, or the --cuda_home parameter
The path to the cuDNN installation (include the cuda folder in the path) must be provided via the cuDNN_PATH environment variable, or --cudnn_home parameter. The cuDNN path should contain bin, include and lib directories.
The path to the cuDNN bin directory must be added to the PATH environment variable so that cudnn64_8.dll is found.
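
For example, on Windows the build console can be prepared like this before running build.bat (both install paths below are placeholders for wherever CUDA and cuDNN were actually unpacked):

set CUDA_PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2
set CUDNN_PATH=C:\tools\cudnn-10.2-windows10-x64-v8.0.3
set PATH=%CUDNN_PATH%\bin;%PATH%
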
Build Instructions
Windows

.\build.bat --use_cuda --cudnn_home <cudnn home path> --cuda_home <cuda home path>

Linux

./build.sh --use_cuda --cudnn_home <cudnn home path> --cuda_home <cuda home path>
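
A filled-in Linux example, assuming CUDA 10.2 lives under /usr/local/cuda-10.2 and the cuDNN headers and libraries were copied into that same directory (adjust both paths to the actual install locations):

./build.sh --use_cuda --cuda_home /usr/local/cuda-10.2 --cudnn_home /usr/local/cuda-10.2 --config RelWithDebInfo --build_shared_lib --parallel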

Building onnxruntime from source with DNNL and MKLML

Notes:
The DNNL execution provider can be built for Intel CPU or GPU. To build for Intel GPU, install Intel SDK for OpenCL Applications. Install the latest GPU driver - Windows graphics driver, Linux graphics compute runtime and OpenCL driver.

Note that DNNL is built as a shared provider library.

Windows

.\build.bat --use_dnnl

Linux

./build.sh --use_dnnl

To build for Intel GPU, replace dnnl_opencl_root with the path of the Intel SDK for OpenCL Applications.

Windows

.\build.bat --use_dnnl --dnnl_gpu_runtime ocl --dnnl_opencl_root "c:\program files (x86)\intelswtools\sw_dev_tools\opencl\sdk"

Linux

./build.sh --use_dnnl --dnnl_gpu_runtime ocl --dnnl_opencl_root "/opt/intel/sw_dev_tools/opencl-sdk"
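
Whichever target is chosen, --use_dnnl combines with the same general options used elsewhere in this post; a minimal Linux sketch that also builds the Python wheel so the result can be checked with the standard onnxruntime Python API:

./build.sh --use_dnnl --config RelWithDebInfo --build_shared_lib --parallel --build_wheel
python -c "import onnxruntime; print(onnxruntime.get_available_providers())" # 'DnnlExecutionProvider' should appear in the list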

Summary

bash ./build.sh --skip_tests --use_cuda --config RelWithDebInfo --build_shared_lib --parallel --cmake_path=/home/xxx/work/tools/cmake-3.15.6/bin/cmake --ctest_path=/home/xxx/work/tools/cmake-3.15.6/bin/ctest --cuda_home=/usr/local/cuda-10.0/ --cudnn_home=/usr/local/cuda-10.0 --build_wheel
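
After the build finishes, the wheel typically lands under build/Linux/RelWithDebInfo/dist/ (the exact file name depends on the onnxruntime version and the Python environment); installing it and listing the registered providers is a quick sanity check:

pip install build/Linux/RelWithDebInfo/dist/*.whl
python -c "import onnxruntime; print(onnxruntime.get_available_providers())" # 'CUDAExecutionProvider' should appear in the list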
