GeForce RTX 3080 with CUDA capability sm_86 is not compatible with the current PyTorch installation.

The RTX 3080 + CUDA problem

 

1. While setting up a CUDA environment recently, I found that the following command alone cannot tell you whether CUDA actually works.

import torch
# In an interactive Python session this returned True, yet the torch program still failed at runtime.
torch.cuda.is_available()


2. If you want to test whether your CUDA build of torch is actually usable, the following command is a better check.

import torch
# Allocates a tensor and moves it to the GPU, which forces a real kernel launch.
torch.zeros(1).cuda()


If this raises "GeForce RTX 3080 with CUDA capability sm_86 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.", then it is definitely a CUDA version problem: the installed CUDA/PyTorch combination is wrong for this GPU.
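
For a slightly fuller check, a small sketch along these lines (just an illustration, not part of the original post) prints what the installed build knows about the GPU and then forces a real kernel launch:

import torch
# What the installed build reports before any GPU work is attempted.
print("CUDA available:", torch.cuda.is_available())
print("Built against CUDA:", torch.version.cuda)
if torch.cuda.is_available():
    # e.g. (8, 6) for an RTX 3080, i.e. sm_86.
    print("Device:", torch.cuda.get_device_name(0))
    print("Compute capability:", torch.cuda.get_device_capability(0))
    # The real test: allocate on the GPU, which forces a kernel launch.
    x = torch.zeros(1).cuda()
    print("Kernel launch OK:", (x + 1).item() == 1.0)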

 

Solution:

 

1. When running a PyTorch program, the following error appears:
GeForce RTX 3080 with CUDA capability sm_86 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.
My environment: Ubuntu 20, NVIDIA 455 driver, CUDA 10.2 with the matching cuDNN 8.0.5.39, and torch-1.7.0-cp38-cp38-linux_x86_64.
Looking at PyTorch's GitHub and the official CUDA documentation, I found the following:
A CUDA application binary (with one or more GPU kernels) can contain the compiled GPU code in two forms, binary cubin objects and forward-compatible PTX assembly for each kernel. Both cubin and PTX are generated for a certain target compute capability. A cubin generated for a certain compute capability is supported to run on any GPU with the same major revision and same or higher minor revision of compute capability. For example, a cubin generated for compute capability 7.0 is supported to run on a GPU with compute capability 7.5, however a cubin generated for compute capability 7.5 is not supported to run on a GPU with compute capability 7.0, and a cubin generated with compute capability 7.x is not supported to run on a GPU with compute capability 8.x.
(Roughly: if your GPU's compute capability is 7.5, a cubin built for 7.0 can run on it, but a cubin built for 7.5 cannot run on a 7.0 GPU, and a cubin built for 7.x cannot run on an 8.x GPU.) In plain terms, the code's target compute capability must be lower than or equal to the GPU's, and it cannot cross a major version.
So, putting this together, we need a build targeting compute capability 8.x for programs to run on this GPU; the sketch below spells out the rule.
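
The rule can be written down as a tiny sketch (compute capabilities as (major, minor) tuples; the helper is made up purely for illustration):

# Hypothetical helper encoding the cubin compatibility rule quoted above.
def cubin_runs_on(cubin_cc, gpu_cc):
    # Same major version, and the GPU's minor revision is equal or higher.
    return cubin_cc[0] == gpu_cc[0] and cubin_cc[1] <= gpu_cc[1]
print(cubin_runs_on((7, 0), (7, 5)))  # True:  a 7.0 cubin runs on a 7.5 GPU
print(cubin_runs_on((7, 5), (7, 0)))  # False: a 7.5 cubin cannot run on a 7.0 GPU
print(cubin_runs_on((7, 5), (8, 6)))  # False: a 7.x cubin cannot run on an 8.x GPU such as the RTX 3080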
2. The official CUDA documentation shows that CUDA 10.2 supports at most compute capability 7.5, so the error is expected.
With versions 11.0 of the CUDA Toolkit, nvcc can generate cubin native to the NVIDIA Ampere GPU architecture (compute capability 8.0). When using CUDA Toolkit 11.0, to ensure that nvcc will generate cubin files for all recent GPU architectures as well as a PTX version for forward compatibility with future GPU architectures, specify the appropriate -gencode= parameters on the nvcc command line as shown in the examples below.
From the passage above, CUDA 11.0 is enough, so I went off to try it out; if it works, I will post a tutorial.
3. Solution:
Download CUDA 11 and set up the environment again.
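
After reinstalling, a minimal check that the new setup actually targets the card (assuming a PyTorch wheel built against CUDA 11.0, for example the 1.7.0+cu110 wheels listed on PyTorch's previous-versions page; confirm the exact install command there):

import torch
# Should now report 11.0 (or newer) instead of 10.2.
print(torch.version.cuda)
# Should succeed without the sm_86 incompatibility message.
print(torch.zeros(1).cuda() + 1)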
References
1. Reference article (CSDN): https://blog.csdn.net/weixin_43896241/article/details/108979744#commentBox
2. Official CUDA documentation (source of the English passages quoted above): https://docs.nvidia.com/cuda/ampere-compatibility-guide/index.html#building-applications-with-ampere-support
3. PyTorch GitHub issue (others ran into the same problem): https://github.com/pytorch/pytorch/issues/45028
