TensorRT Installation

First install CUDA and cuDNN. I won't cover that here, since there are plenty of articles about it online, for example:

https://blog.csdn.net/qq_42393859/article/details/85294126

Note that the CUDA and cuDNN versions must be chosen according to what your target TensorRT version requires.

The relevant files are on the NVIDIA website, though you have to sign in to an account first. My CUDA is version 10.0.130 with the matching cuDNN 7.5; together the downloads are a bit over 2 GB.
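
If you are not sure which CUDA/cuDNN combination is already installed, a quick check such as the rough sketch below can help before picking a TensorRT package (assumptions: nvcc is on the PATH, and the cuDNN header lives at /usr/include/cudnn.h; on some systems it is under /usr/local/cuda/include instead):

    # check_versions.py -- rough helper to confirm local CUDA / cuDNN versions
    # before choosing a TensorRT tar package. The cuDNN header path is an
    # assumption; adjust it for your system.
    import re
    import subprocess

    # CUDA toolkit version as reported by nvcc (e.g. "release 10.0, V10.0.130")
    nvcc_out = subprocess.check_output(["nvcc", "--version"]).decode()
    print("nvcc:", re.search(r"release [\d.]+, V[\d.]+", nvcc_out).group(0))

    # cuDNN version from the CUDNN_MAJOR / CUDNN_MINOR / CUDNN_PATCHLEVEL macros
    with open("/usr/include/cudnn.h") as f:
        header = f.read()
    parts = [re.search(r"#define CUDNN_%s\s+(\d+)" % name, header).group(1)
             for name in ("MAJOR", "MINOR", "PATCHLEVEL")]
    print("cuDNN:", ".".join(parts))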

Download page for all TensorRT versions: https://developer.nvidia.com/nvidia-tensorrt-download. I downloaded the 5.1.5 GA tar package:

Tar File Install Packages For Linux x86 TensorRT 5.1.5.0 GA for Ubuntu 16.04 and CUDA 10.0 tar package

Choose the package that matches your own environment.

Installation documentation for each version: https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/index.html

Example: installing a 5.1.x.x version from the tar package

 Tar File Installation
Note: Before issuing the following commands, you'll need to replace 5.1.x.x with your specific TensorRT version. The following commands are examples.

    Install the following dependencies, if not already present:
        Install the CUDA Toolkit 9.0, 10.0 or 10.1
        cuDNN 7.5.0
        Python 2 or Python 3 (Optional)
    Download the TensorRT tar file that matches the Linux distribution you are using.
    Choose where you want to install TensorRT. This tar file will install everything into a subdirectory called TensorRT-5.1.x.x.
    Unpack the tar file.

    $ tar xzvf TensorRT-5.1.x.x.<os>.<arch>-gnu.cuda-x.x.cudnn7.x.tar.gz

    Where:
        5.1.x.x is your TensorRT version
        <os> is Ubuntu-14.04.5, Ubuntu-16.04.5, Ubuntu-18.04.2, Red-Hat, or CentOS-Linux
        <arch> is x86_64 or ppc64le
        cuda-x.x is CUDA version 9.0, 10.0, or 10.1
        cudnn7.x is cuDNN version 7.5
    This directory will have sub-directories like lib, include, data, etc…

    $ ls TensorRT-5.1.x.x
    bin  data  doc  graphsurgeon  include  lib  python  samples  targets  TensorRT-Release-Notes.pdf  uff

    Add the absolute path to the TensorRT lib directory to the environment variable LD_LIBRARY_PATH:

    $ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<TensorRT-5.1.x.x/lib>

    Install the Python TensorRT wheel file.

    $ cd TensorRT-5.1.x.x/python

    If using Python 2.7:

    $ sudo pip2 install tensorrt-5.1.x.x-cp27-none-linux_x86_64.whl

    If using Python 3.x:

    $ sudo pip3 install tensorrt-5.1.x.x-cp3x-none-linux_x86_64.whl

    Install the Python UFF wheel file. This is only required if you plan to use TensorRT with TensorFlow.

    $ cd TensorRT-5.1.x.x/uff

    If using Python 2.7:

    $ sudo pip2 install uff-0.6.3-py2.py3-none-any.whl

    If using Python 3.x:

    $ sudo pip3 install uff-0.6.3-py2.py3-none-any.whl

    In either case:

    $ which convert-to-uff
    /usr/local/bin/convert-to-uff

    Install the Python graphsurgeon wheel file.

    $ cd TensorRT-5.1.x.x/graphsurgeon

    If using Python 2.7:

    $ sudo pip2 install graphsurgeon-0.4.1-py2.py3-none-any.whl

    If using Python 3.x:

    $ sudo pip3 install graphsurgeon-0.4.1-py2.py3-none-any.whl

    Verify the installation:
        Ensure that the installed files are located in the correct directories. For example, run the tree -d command to check whether all supported installed files are in place in the lib, include, data, etc… directories.
        Build and run one of the shipped samples, for example, sampleMNIST in the installed directory. You should be able to compile and execute the sample without additional settings. For more information about sampleMNIST, see the "Hello World" For TensorRT sample.
        The Python samples are in the samples/python directory. A quick Python-side sanity check is also sketched right after this list.
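
As that quick Python-side sanity check, the installed wheels should import and the native bindings should load. This is only a minimal sketch: it assumes LD_LIBRARY_PATH already contains the TensorRT-5.1.x.x/lib directory from the step above, otherwise the tensorrt import cannot locate libnvinfer; uff and graphsurgeon are only needed for the TensorFlow workflow.

    # verify_trt.py -- sanity check that the TensorRT Python wheels installed correctly
    import tensorrt as trt
    import uff            # only needed for the TensorFlow/UFF workflow
    import graphsurgeon   # only needed for the TensorFlow/UFF workflow

    print("TensorRT version:", trt.__version__)   # expected: 5.1.x.x

    # Creating a Logger and a Builder exercises the bindings to the native library.
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    print("TensorRT Builder created successfully")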


Appendix: steps to accelerate a Torch model with TensorRT:
1. Set up TensorRT
read ./TensorRT-5.1.5.0/tensorrt_install.md
2. Set up torch2trt
https://github.com/NVIDIA-AI-IOT/torch2trt
3. Set up trt_pose
https://github.com/NVIDIA-AI-IOT/trt_pose
4. Convert the Torch model to a TensorRT model (densenet121 or resnet18); see the sketch after this list
./trt_pose_module/build_trt.py
5. Test trt_pose_module
./trt_pose_module/trt_pose_module.py
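
For step 4, the conversion itself boils down to a single torch2trt call. The following is only a sketch of the idea (the project's own ./trt_pose_module/build_trt.py is the script actually used); it assumes torch, torchvision and torch2trt are installed with a CUDA GPU available, and uses a plain torchvision resnet18 instead of the trt_pose model:

    # build_trt_sketch.py -- minimal illustration of converting a torch model to
    # a TensorRT engine with torch2trt. Not the project's build_trt.py; assumes
    # torch, torchvision and torch2trt are installed and a CUDA GPU is present.
    import torch
    import torchvision
    from torch2trt import torch2trt, TRTModule

    # Load a resnet18 in eval mode on the GPU.
    model = torchvision.models.resnet18(pretrained=True).eval().cuda()

    # The example input fixes the shape the TensorRT engine is optimized for.
    x = torch.ones((1, 3, 224, 224)).cuda()

    # Convert; fp16_mode trades a little accuracy for speed on GPUs with fast FP16.
    model_trt = torch2trt(model, [x], fp16_mode=True)

    # The converted module is called like a normal torch module.
    y = model(x)
    y_trt = model_trt(x)
    print("max abs difference:", torch.max(torch.abs(y - y_trt)).item())

    # Save and reload the optimized module.
    torch.save(model_trt.state_dict(), "resnet18_trt.pth")
    model_trt_loaded = TRTModule()
    model_trt_loaded.load_state_dict(torch.load("resnet18_trt.pth"))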

 
