PyTorch-1: Installing PyTorch on the Jetson TX2 (personally tested!)



This post has been superseded; please see the updated version: https://blog.csdn.net/qq_33869371/article/details/88591538


As an embedded deep-learning platform, the TX2 has a capable GPU with CUDA compute capability 6.2, which means good support for half-precision (FP16) arithmetic. A practical workflow is therefore to train a model on a desktop machine, move it to the TX2, and run inference there in FP16 for production deployment.
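As a sketch of that deployment idea (using a toy torch.nn.Linear model as a stand-in for a real network; run this only after the install below succeeds), FP16 inference is just a matter of casting the model and its inputs to half precision:

```shell
# FP16 inference sketch on the TX2 GPU (toy model, not a real network)
python3 - <<'EOF'
import torch

model = torch.nn.Linear(128, 10).cuda().half()  # FP16 weights on the GPU
x = torch.randn(1, 128).cuda().half()           # FP16 input
with torch.no_grad():
    y = model(x)
print(y.dtype)  # torch.float16
EOF
```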

This article shows how to build PyTorch from source on the TX2 (a master snapshot taken a few days after the 1.0.1 release).

First we need a relatively clean JetPack system; versions 3.2-3.3 (and the newer 4.1.1) all work. It is best to re-flash the TX2 so that leftover packages do not cause unrelated compatibility errors.

Flashing: download the TX2 system image from the NVIDIA site: https://developer.nvidia.com/embedded/jetpack

Steps

Next, follow these steps closely to build PyTorch from source.

On a JetPack-3.2 system there are normally two Python versions: the python command maps to Python 2.7, while python3 maps to Python 3.5. We use python3 as the build environment here; be careful to keep the two command sets apart, or the build will fail.

You can run which python3 to see where the current Python 3.5 interpreter lives.
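For example, the interpreter/pip pairing can be confirmed like this:

```shell
which python3        # e.g. /usr/bin/python3
python3 --version    # Python 3.5.x on JetPack 3.2
pip3 --version       # the trailing "(python 3.5)" confirms pip3 is bound to Python 3
```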

Dependencies

Before anything else, set up a swap file of at least 1 GB (very important!). See the previous post for the detailed steps: https://blog.csdn.net/qq_33869371/article/details/87706617
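The previous post covers this in detail; as a quick sketch (the 4 GB size and the /swapfile path are just my choices, not requirements):

```shell
# create and enable a swap file (one-off; add it to /etc/fstab to make it permanent)
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
free -m              # the Swap row should now show the new space
```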

 

 

First install the dependencies:

Note that the pip3 command corresponds to python3. If you are unsure which Python a pip command is bound to, run pip --version; on my system, for example:

pip3 --version
pip 9.0.1 from path/to/lib/python3.5/site-packages/pip (python 3.5)

In short, always use the pip bound to python3. Install pip3 first, then install the required packages into the Python 3 environment.

sudo apt install libopenblas-dev libatlas-dev liblapack-dev
sudo apt install liblapacke-dev checkinstall # for OpenCV
sudo apt-get install python3-pip

pip3 install --upgrade pip==9.0.1   # the package is named pip, even when invoked as pip3
sudo apt-get install python3-dev

sudo pip3 install numpy scipy # this takes a while, roughly 20-30 minutes
sudo pip3 install pyyaml
sudo pip3 install scikit-build
sudo apt-get -y install cmake
sudo apt install libffi-dev
sudo pip3 install cffi

Next, add the cuDNN lib and include paths. Why is this needed? After flashing, CUDA and cuDNN are already installed, but JetPack places cuDNN in a different location than a typical Ubuntu install (for the background, see https://oldpan.me/archives/pytorch-gpu-ubuntu-nvidia-cuda90). We therefore add the cuDNN paths to the environment and reload it:

sudo gedit ~/.bashrc
# append the following two lines to ~/.bashrc, then save:
export CUDNN_LIB_DIR=/usr/lib/aarch64-linux-gnu
export CUDNN_INCLUDE_DIR=/usr/include
source ~/.bashrc
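Before starting the build, you can sanity-check that the two variables point at real files:

```shell
ls /usr/lib/aarch64-linux-gnu/libcudnn*      # the cuDNN libraries
ls /usr/include/cudnn.h                      # the cuDNN header
echo "$CUDNN_LIB_DIR / $CUDNN_INCLUDE_DIR"   # confirm the variables are set
```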

Download the PyTorch source

Clone the PyTorch source from GitHub, install all required Python packages with pip3, and update the third-party submodules.

# optional: speed up the build
sudo nvpmodel -m 0         # switch to the maximum-performance power mode
sudo ~/jetson_clocks.sh    # lock the clocks at maximum (also runs the fan at full speed)
wget https://rpmfind.net/linux/mageia/distrib/cauldron/aarch64/media/core/release/ninja-1.9.0-1.mga7.aarch64.rpm
# the commands below use a ninja rpm; adjust the version numbers in the filenames
# to match the file you actually downloaded. If the link is dead, trim the URL back
# to the directory and browse for the current build.
sudo add-apt-repository universe

sudo apt-get update

sudo apt-get install alien

sudo apt-get install nano

sudo alien ninja-1.8.2-3.mga7.aarch64.rpm

# if the conversion succeeds, install the resulting package:
#   sudo dpkg -i ninja-1.8.2-3.mga7.aarch64.deb
# if it fails, rebuild the package manually:

      sudo alien -g ninja-1.8.2-3.mga7.aarch64.rpm

      cd ninja-1.8.2

      sudo nano debian/control

      # in the Architecture field, change "aarch64" to "aarch64, arm64"

      sudo debian/rules binary

      cd ..

sudo dpkg -i ninja_1.8.2-4_arm64.deb

sudo apt install ninja-build

Reference: https://blog.csdn.net/qq_25689397/article/details/50932575

# Clone the source. The latest master seems to have build problems, so check out
# an older release; this command (which took a while to find) fetches v0.4.1.
# ${PYTORCH_VERSION=0.4.1} is bash default-assignment: it sets the variable to
# 0.4.1 unless it is already defined.

git clone --recursive --depth 1 https://github.com/pytorch/pytorch.git -b v${PYTORCH_VERSION=0.4.1}

   The download takes close to half an hour; if it aborts part-way, it is usually just a slow connection.

 

# alternatively, clone master from GitHub, or use a Gitee mirror if GitHub is slow:
git clone --recursive https://github.com/pytorch/pytorch.git
git clone --recursive https://gitee.com/wangdong_cn_admin/pytorch

Update CMake:

Releases are listed at https://cmake.org/files/v3.14/. The simplest route is via pip:

python3 -m pip install --upgrade pip
python3 -m pip install cmake

Alternatively, build from the source tarball. Note that the prebuilt cmake-3.14.0-Linux-i386 (and x86_64) binaries will not run on the TX2's aarch64 CPU; use the source archive instead. A walkthrough is at https://blog.csdn.net/cm_cyj_1116/article/details/79316115: place the downloaded archive under /usr and extract it:

   tar zxvf cmake-3.14.0.tar.gz
 

https://blog.csdn.net/learning_tortosie/article/details/80593956


 

If you run pip3 install -r requirements.txt outside the source tree, you will see:

nvidia@tegra-ubuntu:~$ sudo pip3 install -r requirements.txt
The directory '/home/nvidia/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/home/nvidia/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
nvidia@tegra-ubuntu:~$ 

The fix is simply to cd into the pytorch directory first.


 

 

# enter the source tree

cd pytorch

# make sure all submodules are checked out (note --recursive; without it the
# gloo and other third-party sources are missing and the build fails)

git submodule update --init --recursive
 

# install the Python dependencies

sudo pip3 install -U setuptools
sudo pip3 install -r requirements.txt
 

Build and install:

sudo python3 setup.py install

# use plain "install" here; the "develop" and "build" variants that circulate
# online did not work for me after a week of experimenting
 

# install one more dependency

pip3 install tensorboardX


# clone and install torchvision as well

git clone https://github.com/pytorch/vision

cd vision

# install

sudo python3 setup.py install
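Finally, a quick smoke test of the finished build (run it from outside the source directories, or Python will pick up the local torch folder instead of the installed package):

```shell
cd ~
python3 -c "import torch; print(torch.__version__)"
python3 -c "import torch; print(torch.cuda.is_available())"   # should print True on the TX2
python3 -c "import torchvision; print(torchvision.__version__)"
```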

 

Problems encountered:

sudo python3 setup.py install
fatal: ambiguous argument 'HEAD': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
Building wheel torch-1.1.0a0
-- Building version 1.1.0a0
 

Could not find /home/nvidia/pytorch/third_party/gloo/CMakeLists.txt
Did you run 'git submodule update --init --recursive'?

As the message says, this error appears when the submodules were initialized without --recursive; run git submodule update --init --recursive and build again.
 

nvidia@tegra-ubuntu:~/pytorch-0.3.0$ git submodule update --init
nvidia@tegra-ubuntu:~/pytorch-0.3.0$ sudo python3 setup.py install
[sudo] password for nvidia: 
fatal: ambiguous argument 'HEAD': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
running install
running build_deps
-- The C compiler identification is GNU 5.4.0
-- The CXX compiler identification is GNU 5.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Try OpenMP C flag = [-fopenmp]
-- Performing Test OpenMP_FLAG_DETECTED
-- Performing Test OpenMP_FLAG_DETECTED - Success
-- Try OpenMP CXX flag = [-fopenmp]
-- Performing Test OpenMP_FLAG_DETECTED
-- Performing Test OpenMP_FLAG_DETECTED - Success
-- Found OpenMP: -fopenmp  
-- Compiling with OpenMP support
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- asimd/Neon found with compiler flag : -D__NEON__
-- Looking for cpuid.h
-- Looking for cpuid.h - not found
-- Performing Test NO_GCC_EBX_FPIC_BUG
-- Performing Test NO_GCC_EBX_FPIC_BUG - Failed

...........
...............

[100%] Linking CXX static library libTHD.a
[100%] Built target THD
Install the project...
-- Install configuration: "Release"
-- Installing: /home/nvidia/pytorch-0.3.0/torch/lib/tmp_install/lib/libTHD.a
-- Installing: /home/nvidia/pytorch-0.3.0/torch/lib/tmp_install/include/THD/THD.h
-- Installing: /home/nvidia/pytorch-0.3.0/torch/lib/tmp_install/include/THD/base/DataChannelRequest.h
-- Installing: /home/nvidia/pytorch-0.3.0/torch/lib/tmp_install/include/THD/base/DataChannel.h
-- Installing: /home/nvidia/pytorch-0.3.0/torch/lib/tmp_install/include/THD/base/THDGenerateAllTypes.h
-- Installing: /home/nvidia/pytorch-0.3.0/torch/lib/tmp_install/include/THD/base/ChannelType.h
-- Installing: /home/nvidia/pytorch-0.3.0/torch/lib/tmp_install/include/THD/base/Cuda.h
-- Installing: /home/nvidia/pytorch-0.3.0/torch/lib/tmp_install/include/THD/base/TensorDescriptor.h
-- Installing: /home/nvidia/pytorch-0.3.0/torch/lib/tmp_install/include/THD/process_group/General.h
-- Installing: /home/nvidia/pytorch-0.3.0/torch/lib/tmp_install/include/THD/process_group/Collectives.h
-- Installing: /home/nvidia/pytorch-0.3.0/torch/lib/tmp_install/include/THD/master_worker/master/State.h
-- Installing: /home/nvidia/pytorch-0.3.0/torch/lib/tmp_install/include/THD/master_worker/master/generic/THDTensorRandom.h
-- Installing: /home/nvidia/pytorch-0.3.0/torch/lib/tmp_install/include/THD/master_worker/master/generic/THDTensorMath.h
-- Installing: /home/nvidia/pytorch-0.3.0/torch/lib/tmp_install/include/THD/master_worker/master/generic/THDTensorLapack.h
-- Installing: /home/nvidia/pytorch-0.3.0/torch/lib/tmp_install/include/THD/master_worker/master/generic/THDTensor.h
-- Installing: /home/nvidia/pytorch-0.3.0/torch/lib/tmp_install/include/THD/master_worker/master/generic/THDTensorCopy.h
-- Installing: /home/nvidia/pytorch-0.3.0/torch/lib/tmp_install/include/THD/master_worker/master/generic/THDStorage.h
-- Installing: /home/nvidia/pytorch-0.3.0/torch/lib/tmp_install/include/THD/master_worker/master/THDTensor.h
-- Installing: /home/nvidia/pytorch-0.3.0/torch/lib/tmp_install/include/THD/master_worker/master/THDStorage.h
-- Installing: /home/nvidia/pytorch-0.3.0/torch/lib/tmp_install/include/THD/master_worker/master/THDRandom.h
-- Installing: /home/nvidia/pytorch-0.3.0/torch/lib/tmp_install/include/THD/master_worker/master/Master.h
-- Installing: /home/nvidia/pytorch-0.3.0/torch/lib/tmp_install/include/THD/master_worker/worker/Worker.h
Traceback (most recent call last):
  File "setup.py", line 662, in <module>
    install_requires=['pyyaml', 'numpy'],
  File "/usr/local/lib/python3.5/dist-packages/setuptools/__init__.py", line 145, in setup
    return distutils.core.setup(**attrs)
  File "/usr/lib/python3.5/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/usr/lib/python3.5/distutils/dist.py", line 955, in run_commands
    self.run_command(cmd)
  File "/usr/lib/python3.5/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "setup.py", line 290, in run
    self.run_command('build_deps')
  File "/usr/lib/python3.5/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python3.5/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "setup.py", line 129, in run
    build_libs(libs)
  File "setup.py", line 105, in build_libs
    from tools.nnwrap import generate_wrappers as generate_nn_wrappers
  File "/home/nvidia/pytorch-0.3.0/tools/nnwrap/__init__.py", line 1, in <module>
    from .generate_wrappers import generate_wrappers, wrap_function, \
  File "/home/nvidia/pytorch-0.3.0/tools/nnwrap/generate_wrappers.py", line 4, in <module>
    from ..cwrap import cwrap
  File "/home/nvidia/pytorch-0.3.0/tools/cwrap/__init__.py", line 1, in <module>
    from .cwrap import cwrap
  File "/home/nvidia/pytorch-0.3.0/tools/cwrap/cwrap.py", line 5, in <module>
    from .plugins import ArgcountChecker, OptionalArguments, ArgumentReferences, \
  File "/home/nvidia/pytorch-0.3.0/tools/cwrap/plugins/__init__.py", line 425, in <module>
    from .OptionalArguments import OptionalArguments
  File "/home/nvidia/pytorch-0.3.0/tools/cwrap/plugins/OptionalArguments.py", line 5, in <module>
    from ...shared import cwrap_common
  File "/home/nvidia/pytorch-0.3.0/tools/shared/__init__.py", line 2, in <module>
    from .cwrap_common import set_declaration_defaults, \
  File "/home/nvidia/pytorch-0.3.0/tools/shared/cwrap_common.py", line 1
    ../../torch/lib/ATen/common_with_cwrap.py
    ^
SyntaxError: invalid syntax

This SyntaxError is a broken symlink: tools/shared/cwrap_common.py is supposed to be a symlink to torch/lib/ATen/common_with_cwrap.py. If the source was downloaded as a GitHub zip/tar archive rather than via git clone, symlinks are replaced by plain text files containing only the target path, which Python then tries to parse as code. Fetch the source with git clone instead.

 

https://devtalk.nvidia.com/default/topic/1041716/jetson-agx-xavier/pytorch-install-problem/

nvidia@tegra-ubuntu:~/pytorch-0.4.1$ git submodule update --init
nvidia@tegra-ubuntu:~/pytorch-0.4.1$ sudo python3 setup.py install
[sudo] password for nvidia: 
Sorry, try again.
[sudo] password for nvidia: 
fatal: ambiguous argument 'HEAD': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
running install
running build_deps
+ USE_CUDA=0
+ USE_ROCM=0
+ USE_NNPACK=0
+ USE_MKLDNN=0
+ USE_GLOO_IBVERBS=0
+ USE_DISTRIBUTED_MW=0
+ FULL_CAFFE2=0
+ [[ 9 -gt 0 ]]
+ case "$1" in
+ USE_CUDA=1
+ shift
+ [[ 8 -gt 0 ]]
+ case "$1" in
+ USE_NNPACK=1
+ shift
+ [[ 7 -gt 0 ]]
+ case "$1" in
+ break
+ CMAKE_INSTALL='make install'
+ USER_CFLAGS=
+ USER_LDFLAGS=
+ [[ -n '' ]]
+ [[ -n '' ]]
+ [[ -n '' ]]
++ uname
+ '[' Linux == Darwin ']'
++ dirname tools/build_pytorch_libs.sh
+ cd tools/..
+++ pwd
++ printf '%q\n' /home/nvidia/pytorch-0.4.1
+ PWD=/home/nvidia/pytorch-0.4.1
+ BASE_DIR=/home/nvidia/pytorch-0.4.1
+ TORCH_LIB_DIR=/home/nvidia/pytorch-0.4.1/torch/lib
+ INSTALL_DIR=/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install
+ THIRD_PARTY_DIR=/home/nvidia/pytorch-0.4.1/third_party
+ CMAKE_VERSION=cmake
+ C_FLAGS=' -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include"   -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/TH" -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THC"   -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THS" -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THCS"   -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THNN" -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THCUNN"'
+ C_FLAGS=' -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include"   -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/TH" -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THC"   -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THS" -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THCS"   -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THNN" -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THCUNN" -DOMPI_SKIP_MPICXX=1'
+ LDFLAGS='-L"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/lib" '
+ LD_POSTFIX=.so
++ uname
+ [[ Linux == \D\a\r\w\i\n ]]
+ LDFLAGS='-L"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/lib"  -Wl,-rpath,$ORIGIN'
+ CPP_FLAGS=' -std=c++11 '
+ GLOO_FLAGS=
+ THD_FLAGS=
+ NCCL_ROOT_DIR=/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install
+ [[ 1 -eq 1 ]]
+ GLOO_FLAGS='-DUSE_CUDA=1 -DNCCL_ROOT_DIR=/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install'
+ [[ 0 -eq 1 ]]
+ [[ 0 -eq 1 ]]
+ CWRAP_FILES='/home/nvidia/pytorch-0.4.1/torch/lib/ATen/Declarations.cwrap;/home/nvidia/pytorch-0.4.1/torch/lib/THNN/generic/THNN.h;/home/nvidia/pytorch-0.4.1/torch/lib/THCUNN/generic/THCUNN.h;/home/nvidia/pytorch-0.4.1/torch/lib/ATen/nn.yaml'
+ CUDA_NVCC_FLAGS=' -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include"   -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/TH" -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THC"   -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THS" -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THCS"   -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THNN" -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THCUNN" -DOMPI_SKIP_MPICXX=1'
+ [[ -z '' ]]
+ CUDA_DEVICE_DEBUG=0
+ '[' -z 6 ']'
+ BUILD_TYPE=Release
+ [[ -n '' ]]
+ [[ -n '' ]]
+ echo 'Building in Release mode'
Building in Release mode
+ mkdir -p torch/lib/tmp_install
+ for arg in '"$@"'
+ [[ nccl == \n\c\c\l ]]
+ pushd /home/nvidia/pytorch-0.4.1/third_party
~/pytorch-0.4.1/third_party ~/pytorch-0.4.1
+ build_nccl
+ mkdir -p build/nccl
+ pushd build/nccl
~/pytorch-0.4.1/third_party/build/nccl ~/pytorch-0.4.1/third_party ~/pytorch-0.4.1
+ cmake ../../nccl -DCMAKE_MODULE_PATH=/home/nvidia/pytorch-0.4.1/cmake/Modules_CUDA_fix -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install '-DCMAKE_C_FLAGS= -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include"   -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/TH" -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THC"   -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THS" -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THCS"   -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THNN" -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THCUNN" -DOMPI_SKIP_MPICXX=1 ' '-DCMAKE_CXX_FLAGS= -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include"   -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/TH" -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THC"   -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THS" -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THCS"   -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THNN" -I"/home/nvidia/pytorch-0.4.1/torch/lib/tmp_install/include/THCUNN" -DOMPI_SKIP_MPICXX=1  -std=c++11  ' -DCMAKE_SHARED_LINKER_FLAGS= -DCMAKE_UTILS_PATH=/home/nvidia/pytorch-0.4.1/cmake/public/utils.cmake -DNUM_JOBS=6
CMake Error at /home/nvidia/pytorch-0.4.1/cmake/Modules_CUDA_fix/upstream/FindPackageHandleStandardArgs.cmake:247 (string):
  string does not recognize sub-command APPEND
Call Stack (most recent call first):
  /home/nvidia/pytorch-0.4.1/cmake/Modules_CUDA_fix/upstream/FindCUDA.cmake:1104 (find_package_handle_standard_args)
  /home/nvidia/pytorch-0.4.1/cmake/Modules_CUDA_fix/FindCUDA.cmake:11 (include)
  CMakeLists.txt:5 (FIND_PACKAGE)


(the same "string does not recognize sub-command APPEND" error is repeated several more times in the log)

-- Autodetected CUDA architecture(s): 6.2 
-- Set NVCC_GENCODE for building NCCL: -gencode=arch=compute_62,code=sm_62
-- Configuring incomplete, errors occurred!
See also "/home/nvidia/pytorch-0.4.1/third_party/build/nccl/CMakeFiles/CMakeOutput.log".
See also "/home/nvidia/pytorch-0.4.1/third_party/build/nccl/CMakeFiles/CMakeError.log".
Failed to run 'bash tools/build_pytorch_libs.sh --use-cuda --use-nnpack nccl caffe2 nanopb libshm gloo THD c10d'

The string(APPEND ...) sub-command requires CMake >= 3.4, so these errors mean the system CMake is too old; upgrade CMake as described earlier and rerun the build.

 

nvidia@tegra-ubuntu:~$ git clone --recursive https://github.com/pytorch/pytorch.git
Cloning into 'pytorch'...
remote: Enumerating objects: 317, done.
remote: Counting objects: 100% (317/317), done.
remote: Compressing objects: 100% (217/217), done.
error: RPC failed; curl 56 GnuTLS recv error (-24): Decryption has failed.
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

This clone failure is a plain network error; retry the clone (or use the Gitee mirror mentioned earlier).
[ 64%] Building CXX object caffe2/CMakeFiles/caffe2.dir/sgd/clip_tensor_op.cc.o
[ 64%] Building CXX object caffe2/CMakeFiles/caffe2.dir/sgd/ftrl_op.cc.o
[ 64%] Building CXX object caffe2/CMakeFiles/caffe2.dir/sgd/gftrl_op.cc.o
[ 64%] Building CXX object caffe2/CMakeFiles/caffe2.dir/sgd/iter_op.cc.o
[ 64%] Building CXX object caffe2/CMakeFiles/caffe2.dir/sgd/lars_op.cc.o
[ 64%] Building CXX object caffe2/CMakeFiles/caffe2.dir/sgd/learning_rate_adaption_op.cc.o
[ 64%] Building CXX object caffe2/CMakeFiles/caffe2.dir/sgd/learning_rate_op.cc.o
[ 65%] Building CXX object caffe2/CMakeFiles/caffe2.dir/sgd/momentum_sgd_op.cc.o
[ 65%] Building CXX object caffe2/CMakeFiles/caffe2.dir/sgd/rmsprop_op.cc.o
[ 65%] Building CXX object caffe2/CMakeFiles/caffe2.dir/sgd/wngrad_op.cc.o
[ 65%] Building CXX object caffe2/CMakeFiles/caffe2.dir/sgd/yellowfin_op.cc.o
[ 65%] Building CXX object caffe2/CMakeFiles/caffe2.dir/share/contrib/nnpack/conv_op.cc.o
[ 65%] Building CXX object caffe2/CMakeFiles/caffe2.dir/share/contrib/depthwise/depthwise3x3_conv_op.cc.o
[ 65%] Building CXX object caffe2/CMakeFiles/caffe2.dir/transforms/common_subexpression_elimination.cc.o
[ 65%] Building CXX object caffe2/CMakeFiles/caffe2.dir/transforms/conv_to_nnpack_transform.cc.o
[ 65%] Building CXX object caffe2/CMakeFiles/caffe2.dir/transforms/pattern_net_transform.cc.o
[ 65%] Building CXX object caffe2/CMakeFiles/caffe2.dir/transforms/single_op_transform.cc.o
[ 65%] Linking CXX shared library ../lib/libcaffe2.so
[ 65%] Built target caffe2
Makefile:140: recipe for target 'all' failed
make: *** [all] Error 2
Failed to run 'bash ../tools/build_pytorch_libs.sh --use-cuda --use-fbgemm --use-nnpack --use-qnnpack caffe2'
nvidia@tegra-ubuntu:~/pytorch-stable$ 

A failure at this late linking stage on the TX2 is usually the board running out of memory; make sure the swap file set up at the start of this post is enabled, then rerun the install command (the build resumes where it stopped).

 

 

References:

https://m.oldpan.me/archives/nvidia-jetson-tx2-source-build-pytorch

https://blog.csdn.net/qq_36118564/article/details/85417331

https://blog.csdn.net/zsfcg/article/details/84099869

https://rpmfind.net/linux/mageia/distrib/cauldron/aarch64/media/core/release/
