First, check how many GPUs are available with torch.cuda.device_count(),
then move tensors or models onto a specific GPU.
import torch
import torch.nn as nn

if __name__ == '__main__':
    # torch.cuda.current_device()
    # for i in range(torch.cuda.device_count()):
    #     print(torch.cuda.get_device_name(i), torch.cuda.is_available())

    # Put the tensor on the last visible GPU, or fall back to CPU.
    a = torch.randn(3, 5).cuda(torch.cuda.device_count() - 1) if torch.cuda.is_available() else torch.randn(3, 5)
    print(a)
    # Build the layer directly on GPU 3 (requires at least 4 GPUs).
    model = nn.Conv2d(3, 128, kernel_size=(3, 3), device='cuda:3')
    print(model)
Output:
tensor([[ 0.3226, -0.0360,  0.5029,  0.8832,  1.8058],
        [-0.7347, -2.1244,  0.2387, -0.7839,  0.9663],
        [-0.1772,  0.9707,  0.0945, -1.5504, -0.4405]], device='cuda:3')
Conv2d(3, 128, kernel_size=(3, 3), stride=(1, 1))
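Hardcoding a device index like 'cuda:3' breaks on machines with fewer GPUs. A more portable sketch is to pick the device once and move everything onto it with .to(device); this runs unchanged on CPU-only machines:

```python
import torch
import torch.nn as nn

# Pick the device once; falls back to CPU when no GPU is visible.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# .to(device) works uniformly for tensors and modules.
x = torch.randn(3, 5).to(device)
model = nn.Conv2d(3, 128, kernel_size=(3, 3)).to(device)

print(x.device)                          # cuda:0 on a GPU machine, cpu otherwise
print(next(model.parameters()).device)   # model parameters live on the same device
```

This way the same script works on a laptop and on a multi-GPU server; to target a specific card you would write torch.device('cuda:3') instead.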
To check GPU usage on Linux:
nvidia-smi
To refresh the view dynamically every 5 seconds:
watch -n 5 nvidia-smi
Then, given a PID shown by nvidia-smi, check which user is running that process:
ps -f -p 36396
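The PID above (36396) is just an example taken from the nvidia-smi process table. A minimal sketch of the lookup, using the current shell's own PID ($$) as a stand-in for a PID copied from nvidia-smi:

```shell
# Stand-in for a PID read from the nvidia-smi process list.
pid=$$

# Full process details (UID, PPID, start time, command):
ps -f -p "$pid"

# Or print only the owning user:
ps -o user= -p "$pid"
```

The `-o user=` form is handy in scripts, since it emits the bare username with no header line.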