Specifying which GPU a program runs on

Scenario: suppose the lab server has 8 GPUs and the first 7 are already occupied. If we launch our program without choosing a card, it will fail with an out-of-memory error like the following:

Traceback (most recent call last):
  File "train.py", line 182, in 
    fasterRCNN = vgg16(classes, pretrained=True, class_agnostic=args.class_agnostic)
  File "/home/lthpc/lw/WeaklyObjectDetection/lib/model/faster_rcnn/vgg16.py", line 26, in __init__
    _fasterRCNN.__init__(self, classes, class_agnostic)
  File "/home/lthpc/lw/WeaklyObjectDetection/lib/model/faster_rcnn/faster_rcnn.py", line 35, in __init__
    self.RCNN_rpn = _RPN(self.dout_base_model)
  File "/home/lthpc/lw/WeaklyObjectDetection/lib/model/rpn/rpn.py", line 74, in __init__
    sequence_length=sequence_length)
  File "/home/lthpc/lw/WeaklyObjectDetection/lib/model/rpn/bi_lstm.py", line 50, in __init__
    self.u_omega = Variable(torch.randn(self.attention_size).cuda(), requires_grad=True)
RuntimeError: CUDA error: out of memory

The allocation fails because .cuda() places tensors on the current default device (GPU 0 unless told otherwise), which here is already full. At this point we need to explicitly specify which GPU to use. First, check the GPU status with nvidia-smi:

[Figure 1: nvidia-smi output showing per-GPU memory usage]
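If the full table is hard to scan, nvidia-smi can also print just the memory figures in machine-readable form (these query fields are part of the standard nvidia-smi CLI; exact field availability can vary with driver version):

nvidia-smi --query-gpu=index,memory.used,memory.free,utilization.gpu --format=csv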

Once we have checked the GPU information, pick whichever card is free and run the program on it. The command is as follows:

CUDA_VISIBLE_DEVICES=2 python train.py

(where 2 is the index of the free GPU reported by nvidia-smi)
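The same restriction can also be applied inside the script, as long as the environment variable is set before PyTorch initializes CUDA. A minimal sketch (train.py and the index 2 are just the running example; pick whichever card is actually free):

import os

# Must be set before torch initializes the CUDA driver context,
# i.e. before "import torch" or at least before the first CUDA call.
os.environ["CUDA_VISIBLE_DEVICES"] = "2"

import torch

# With the mask in place, only the chosen physical GPU is visible,
# and it is exposed to the process as cuda:0.
device = torch.device("cuda:0")
print(torch.cuda.device_count())        # -> 1
x = torch.randn(3, 3, device=device)    # allocated on physical GPU 2

Note that once CUDA_VISIBLE_DEVICES=2 is in effect, the process sees only that one card and it is renumbered as cuda:0, so existing .cuda() calls in the training code need no changes.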

 
