Installing libtorch (PyTorch 1.0) on Windows 10

PyTorch 1.0 allows an existing Python model to be converted into a serialized representation that can be loaded and executed purely from C++, with no dependency on Python. In other words, you can write the inference stage of a model entirely in C++ (training in C++ is possible too, but development is slower, so the usual recommendation is to train in Python, convert the model, and write the forward pass in C++).
The C++ interface to PyTorch, the PyTorch C++ API, is also known as LibTorch. Training a model with the Python API, converting it, and then rewriting the forward pass in C++ is a fairly common pattern. I have only just started learning this myself and hope we can study and discuss it together.

Part 1: Installing PyTorch 1.0

Go to https://pytorch.org/get-started/locally/ and pick the version you need (the commands in many online tutorials only fit their authors' environments; to get one that fits yours, select it here).
In my environment the command is:

conda install pytorch torchvision cuda80 -c pytorch
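After the install finishes, a quick sanity check (assuming the conda environment you installed into is active) confirms the version and whether CUDA is visible:

python -c "import torch; print(torch.__version__); print(torch.cuda.is_available())"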

Commands for other CUDA versions:

NO CUDA:

conda install pytorch-cpu torchvision-cpu -c pytorch

CUDA 9.0:

conda install pytorch torchvision -c pytorch

Or install with pip:

Python 2.7

pip install https://download.pytorch.org/whl/cpu/torch-1.0.0-cp27-cp27mu-linux_x86_64.whl
pip install torchvision

If the above command does not work, you have Python 2.7 built with UCS2; use this command instead:

pip install https://download.pytorch.org/whl/cpu/torch-1.0.0-cp27-cp27m-linux_x86_64.whl

Python 3.5

pip3 install https://download.pytorch.org/whl/cpu/torch-1.0.0-cp35-cp35m-win_amd64.whl
pip3 install torchvision

Python 3.6

pip3 install https://download.pytorch.org/whl/cpu/torch-1.0.0-cp36-cp36m-win_amd64.whl
pip3 install torchvision

Python 3.7

pip3 install https://download.pytorch.org/whl/cpu/torch-1.0.0-cp37-cp37m-win_amd64.whl
pip3 install torchvision

CUDA 8.x:

conda install pytorch torchvision cuda80 -c pytorch

pip installation:
Python 3.5

pip3 install https://download.pytorch.org/whl/cu80/torch-1.0.0-cp35-cp35m-win_amd64.whl
pip3 install torchvision

Python 3.6

pip3 install https://download.pytorch.org/whl/cu80/torch-1.0.0-cp36-cp36m-win_amd64.whl
pip3 install torchvision

Python 3.7

pip3 install https://download.pytorch.org/whl/cu80/torch-1.0.0-cp37-cp37m-win_amd64.whl
pip3 install torchvision

CUDA 10.0:

conda install pytorch torchvision cuda100 -c pytorch

CUDA 10 apparently does not support Python 2.7.

Python 3.5

pip3 install https://download.pytorch.org/whl/cu100/torch-1.0.0-cp35-cp35m-win_amd64.whl
pip3 install torchvision

Python 3.6

pip3 install https://download.pytorch.org/whl/cu100/torch-1.0.0-cp36-cp36m-win_amd64.whl
pip3 install torchvision

Python 3.7

pip3 install https://download.pytorch.org/whl/cu100/torch-1.0.0-cp37-cp37m-win_amd64.whl
pip3 install torchvision

Installing from source

Install the dependencies:

conda install numpy pyyaml mkl mkl-include setuptools cmake cffi typing
If you want to build on Windows, the Visual Studio 2017 14.11 toolset and NVTX are also needed. In particular, a CUDA 8 build on Windows additionally requires VS 2015 Update 3 plus a patch for it. The details of the patch can be found here.

Build

git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
set "VS150COMNTOOLS=C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\VC\Auxiliary\Build"
set CMAKE_GENERATOR=Visual Studio 15 2017 Win64
set DISTUTILS_USE_SDK=1
REM The following two lines are needed for Python 2.7, but the support for it is very experimental.
set MSSdk=1
set FORCE_PY27_BUILD=1
REM As for CUDA 8, VS2015 Update 3 is also required to build PyTorch. Use the following two lines.
set "PREBUILD_COMMAND=%VS140COMNTOOLS%…\VC\vcvarsall.bat"
set PREBUILD_COMMAND_ARGS=x64
call "%VS150COMNTOOLS%\vcvarsall.bat" x64 -vcvars_ver=14.11
python setup.py install

While we are at it, install libtorch as well.
The download link (the CUDA 8.0 Windows build, matching the conda install above) is:
https://download.pytorch.org/libtorch/cu80/libtorch-win-shared-with-deps-latest.zip

Part 2: Model Conversion (TorchScript)

Official docs: https://pytorch.org/docs/master/jit.html
Reference tutorial: https://pytorch.org/tutorials/advanced/cpp_export.html

step 1: Convert the PyTorch model to Torch Script

There are two main approaches:

1. Conversion via tracing

Example code:

import torch
import torchvision
# An instance of your model.
model = torchvision.models.resnet18()
# An example input you would normally provide to your model's forward() method.
example = torch.rand(1, 3, 224, 224)
# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, example)

In[1]: output = traced_script_module(torch.ones(1, 3, 224, 224))
In[2]: output[0, :5]
Out[2]: tensor([-0.2698, -0.0381,  0.4023, -0.3010, -0.0448], grad_fn=<SliceBackward>)
2. Conversion via annotation

Start from an ordinary torch.nn.Module whose forward contains data-dependent control flow:
import torch
class MyModule(torch.nn.Module):
    def __init__(self, N, M):
        super(MyModule, self).__init__()
        self.weight = torch.nn.Parameter(torch.rand(N, M))

    def forward(self, input):
        if input.sum() > 0:
            output = self.weight.mv(input)
        else:
            output = self.weight + input
        return output

The same module rewritten as a torch.jit.ScriptModule, with forward marked by @torch.jit.script_method:

import torch

class MyModule(torch.jit.ScriptModule):
    def __init__(self, N, M):
        super(MyModule, self).__init__()
        self.weight = torch.nn.Parameter(torch.rand(N, M))

    @torch.jit.script_method
    def forward(self, input):
        if input.sum() > 0:
            output = self.weight.mv(input)
        else:
            output = self.weight + input
        return output

my_script_module = MyModule(2, 3)  # e.g. N=2, M=3

step 2: Save the model

traced_script_module.save("model.pt")
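Before moving to C++, an optional quick check that the file deserializes correctly (a minimal sketch, assuming model.pt sits in the current directory):

python -c "import torch; print(torch.jit.load('model.pt'))"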

step 3: Load the model in C++

Example code:

#include <torch/script.h> // One-stop header.

#include <cassert>
#include <iostream>
#include <memory>

int main(int argc, const char* argv[]) {
  if (argc != 2) {
    std::cerr << "usage: example-app <path-to-exported-script-module>\n";
    return -1;
  }

  // Deserialize the ScriptModule from a file using torch::jit::load().
  std::shared_ptr<torch::jit::script::Module> module = torch::jit::load(argv[1]);

  assert(module != nullptr);
  std::cout << "ok\n";
}
Build with CMake

Example CMakeLists.txt:

cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(custom_ops)
find_package(Torch REQUIRED)
add_executable(example-app example-app.cpp)
target_link_libraries(example-app "${TORCH_LIBRARIES}")
set_property(TARGET example-app PROPERTY CXX_STANDARD 11)

The libtorch directory structure:

libtorch/
  bin/
  include/
  lib/
  share/

  1. The lib/ folder contains the shared libraries you must link against,
  2. The include/ folder contains the header files your program needs to include,
  3. The share/ folder contains the CMake configuration needed to enable the simple find_package(Torch) command above.

The last step is to build the application. For this, assume our example directory is laid out as follows:

example-app/
  CMakeLists.txt
  example-app.cpp

Run the following commands:

mkdir build
cd build
cmake -DCMAKE_PREFIX_PATH=/path/to/libtorch ..
make

Output:

root@4b5a67132e81:/example-app# mkdir build
root@4b5a67132e81:/example-app# cd build
root@4b5a67132e81:/example-app/build# cmake -DCMAKE_PREFIX_PATH=/path/to/libtorch ..
-- The C compiler identification is GNU 5.4.0
-- The CXX compiler identification is GNU 5.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Configuring done
-- Generating done
-- Build files have been written to: /example-app/build
root@4b5a67132e81:/example-app/build# make
Scanning dependencies of target example-app
[ 50%] Building CXX object CMakeFiles/example-app.dir/example-app.cpp.o
[100%] Linking CXX executable example-app
[100%] Built target example-app

Running the app succeeds:

root@4b5a67132e81:/example-app/build# ./example-app model.pt
ok
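The transcript above is from the Linux version of the official tutorial. Since this post targets Windows 10, a rough equivalent (a sketch assuming Visual Studio 2017 and that the libtorch zip was extracted to C:\libtorch, an illustrative path) would be:

mkdir build
cd build
cmake -DCMAKE_PREFIX_PATH=C:\libtorch -G "Visual Studio 15 2017 Win64" ..
cmake --build . --config Release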

step 4: Run the forward pass in C++

// Create a vector of inputs.
std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::ones({1, 3, 224, 224}));

// Execute the model and turn its output into a tensor.
at::Tensor output = module->forward(inputs).toTensor();

std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\n';
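For reference, merging the loading code from step 3 with this forward pass gives a complete example-app.cpp along these lines (a sketch that follows the flow of the official tutorial):

#include <torch/script.h> // One-stop header.

#include <cassert>
#include <iostream>
#include <memory>
#include <vector>

int main(int argc, const char* argv[]) {
  if (argc != 2) {
    std::cerr << "usage: example-app <path-to-exported-script-module>\n";
    return -1;
  }

  // Deserialize the ScriptModule produced in step 2.
  std::shared_ptr<torch::jit::script::Module> module = torch::jit::load(argv[1]);
  assert(module != nullptr);

  // Build an input matching the shape the model was traced with (1, 3, 224, 224).
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::ones({1, 3, 224, 224}));

  // Run the forward pass and print the first five entries of the output.
  at::Tensor output = module->forward(inputs).toTensor();
  std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\n';
  return 0;
}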

Rebuild and run; the result is:

root@4b5a67132e81:/example-app/build# make
Scanning dependencies of target example-app
[ 50%] Building CXX object CMakeFiles/example-app.dir/example-app.cpp.o
[100%] Linking CXX executable example-app
[100%] Built target example-app
root@4b5a67132e81:/example-app/build# ./example-app model.pt
-0.2698 -0.0381  0.4023 -0.3010 -0.0448
[ Variable[CPUFloatType]{1,5} ]

For comparison, the Python output was:

tensor([-0.2698, -0.0381, 0.4023, -0.3010, -0.0448], grad_fn=<SliceBackward>)

step 5: Learn more

  • The Torch Script reference: https://pytorch.org/docs/master/jit.html
  • The PyTorch C++ API documentation: https://pytorch.org/cppdocs/
  • The PyTorch Python API documentation: https://pytorch.org/docs/
