GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation

The server reported the following error while training a network:

anaconda3/envs/particlesfm/lib/python3.7/site-packages/torch/cuda/__init__.py:104: UserWarning:
GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.
If you want to use the GeForce RTX 3090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))

[Solution]
CUDA capability sm_86 does not match the installed PyTorch build: the RTX 3090 has compute capability 8.6, but the CUDA build that the current PyTorch was compiled against only supports capabilities 3.7, 5.0, 6.0, 6.1, 7.0 and 7.5. The fix is to install a PyTorch/CUDA combination that matches the device:

pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
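
After the install finishes, a quick sanity check in Python (a minimal sketch using only standard torch / torch.cuda calls, not part of the original post) confirms that the new wheel was actually compiled with sm_86 support and that kernels launch on the 3090:

import torch

print(torch.__version__)                    # expect something like '1.8.0+cu111'
print(torch.version.cuda)                   # CUDA version the wheel was built against, e.g. '11.1'
print(torch.cuda.get_device_capability(0))  # (8, 6) for an RTX 3090
print(torch.cuda.get_arch_list())           # the list should now contain 'sm_86'

# run a tiny matmul on the GPU to make sure kernels actually launch
x = torch.randn(64, 64, device="cuda")
print((x @ x.t()).sum().item())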

However, the command above failed with:

ERROR: Exception:
Traceback (most recent call last):
File "/home/yitong/anaconda3/envs/particlesfm/lib/python3.7/site-packages/pip/_vendor/urllib3/response.py", line 438, in _error_catcher
yield
File "/home/yitong/anaconda3/envs/particlesfm/lib/python3.7/contextlib.py", line 130, in __exit__
self.gen.throw(type, value, traceback)
File "/home/yitong/anaconda3/envs/particlesfm/lib/python3.7/site-packages/pip/_vendor/urllib3/response.py", line 443, in _error_catcher
raise ReadTimeoutError(self._pool, None, "Read timed out.")
pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='download.pytorch.org', port=443): Read timed out.

This is a network problem; once the connection is sorted out (for example, by raising pip's --default-timeout or retrying on a more stable network), rerunning the install command completes normally.
While searching for a solution I also came across another approach: install the PyTorch build for the matching CUDA version directly, without pinning a specific torch version:

pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113

However, in my tests this did not resolve the problem; presumably pip still resolved to an incompatible PyTorch build.
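
To see why the unpinned command can still leave you with an incompatible build, the same kind of check (a small diagnostic sketch, not from the original post) shows what pip actually resolved to: a CPU-only wheel reports torch.version.cuda as None, and a wheel built without sm_86 will not list it among its compiled architectures.

import torch

print("torch:", torch.__version__)            # a '+cpu' suffix means a CPU-only wheel was picked
print("built for CUDA:", torch.version.cuda)  # None for CPU-only builds
arch_list = torch.cuda.get_arch_list() if torch.cuda.is_available() else []
print("compiled architectures:", arch_list)
if "sm_86" not in arch_list:
    print("no sm_86 support; pin an explicit +cuXXX build instead")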

