tensorflow-gpu cannot use the GPU inside an nvidia-docker image

nvidia-docker: tensorflow-gpu cannot use the GPU

  • 一、The error (inside the Docker container)
  • 二、The fix (requires nvidia-docker to already be installed)
      • 1. Check nvidia-docker (on the host)
      • 2. Suggested solution
  • 三、Verify that tensorflow-gpu and PyTorch can use the GPU inside the container

一、The error (inside the Docker container)

no NVIDIA GPU device is present: /dev/nvidia0 does not exist

(Screenshot: the error message inside the container)
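This error means the NVIDIA device nodes were never mapped into the container, which typically happens when the container was started without GPU access. A minimal sketch (assuming a standard Linux setup; the paths below are the usual NVIDIA device nodes) to confirm this from inside the container:

# Hypothetical helper, not part of the original post: check for NVIDIA device nodes
import os

for dev in ("/dev/nvidia0", "/dev/nvidiactl", "/dev/nvidia-uvm"):
    print(dev, "exists" if os.path.exists(dev) else "MISSING")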

二、The fix (requires nvidia-docker to already be installed)

1. Check nvidia-docker (on the host)

tpx@aiot-3000:~$ docker run --runtime=nvidia --rm nvidia/cuda:10.1-base nvidia-smi

(Screenshot: nvidia-smi output on the host)

2. Suggested solution

  • As per the docs here and here, you have to add a "gpus" argument when creating the Docker container to enable GPU support.
  • So you should start your container with something like this; "--gpus all" makes all GPUs on the host visible to the container.
docker run -it -d --gpus all --restart=always --name <container_name> <image_id> /bin/bash

You can also run nvidia-smi on the TensorFlow image to quickly check whether the GPU is accessible from the container.

docker run -it --rm --gpus all tensorflow/tensorflow:latest-gpu-jupyter nvidia-smi
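If you create containers from Python instead of the CLI, the equivalent of "--gpus all" can be requested through the Docker SDK for Python (docker-py) via device_requests. A minimal sketch, assuming docker-py is installed and using the TensorFlow image above only as an example:

# Sketch using the Docker SDK for Python (pip install docker)
import docker

client = docker.from_env()

# DeviceRequest(count=-1, capabilities=[["gpu"]]) is the SDK equivalent of --gpus all
logs = client.containers.run(
    "tensorflow/tensorflow:latest-gpu-jupyter",  # example image
    "nvidia-smi",                                # quick GPU visibility check
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
    remove=True,
)
print(logs.decode())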

三、Verify that tensorflow-gpu and PyTorch can use the GPU inside the container

>>> import tensorflow as tf
>>> print(tf.test.is_gpu_available())  # should print True when the GPU is usable
If this returns True, the setup is complete.
>>> import torch
>>> print(torch.cuda.is_available())  # check whether the GPU is available
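
For a slightly more detailed check, the sketch below lists the GPUs each framework can see (assuming TensorFlow 2.x and a CUDA-enabled PyTorch build are installed in the container):

# Run inside the container to confirm GPU visibility
import tensorflow as tf
import torch

# TensorFlow 2.x: list the physical GPU devices it can see
print("TF GPUs:", tf.config.list_physical_devices("GPU"))

# PyTorch: confirm CUDA is available and show the device name
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))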
