Using the GPU with PyTorch

  1. Initialize the device (an argparse sketch of how opt might be built follows this list)
    if torch.cuda.is_available():
        if opt.gpuid is None:  # opt is the parsed command-line arguments object
            opt.gpuid = 0
        opt.device = torch.device("cuda:%d" % opt.gpuid)
    else:
        opt.device = torch.device("cpu")
        opt.gpuid = -1
        print("CUDA is not available, falling back to CPU.")
  2. Move the criterion, the model, and tensors to the GPU (a full training-step sketch follows this list)
    # model = Seq2SeqModel(opt)
    model = model.to(opt.device)
    criterion = torch.nn.CrossEntropyLoss().to(opt.device)
    # src: a LongTensor
    src = src.to(opt.device)
  3. Move data back from the GPU (a CPU/NumPy conversion sketch follows this list)
    src = src.cpu()
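
For step 1, a minimal sketch of how the opt object could be produced with argparse; the --gpuid flag, its default, and the parser itself are illustrative assumptions, not part of the original code:

    import argparse
    import torch

    # Hypothetical CLI parser; --gpuid is an assumed flag name.
    parser = argparse.ArgumentParser()
    parser.add_argument("--gpuid", type=int, default=None, help="index of the GPU to use")
    opt = parser.parse_args()

    if torch.cuda.is_available():
        if opt.gpuid is None:
            opt.gpuid = 0
        opt.device = torch.device("cuda:%d" % opt.gpuid)
    else:
        opt.device = torch.device("cpu")
        opt.gpuid = -1
        print("CUDA is not available, falling back to CPU.")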
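
For step 2, a minimal sketch of one training iteration with everything on opt.device, assuming opt was set up as in step 1. The toy embedding+linear model, the tensor sizes, and the SGD optimizer are illustrative stand-ins, not the original Seq2SeqModel:

    import torch
    import torch.nn as nn

    # Toy stand-in model and data; sizes are arbitrary assumptions.
    model = nn.Sequential(nn.Embedding(100, 32), nn.Linear(32, 100)).to(opt.device)
    criterion = nn.CrossEntropyLoss().to(opt.device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    src = torch.randint(0, 100, (8, 20)).to(opt.device)   # batch of token ids
    tgt = torch.randint(0, 100, (8, 20)).to(opt.device)   # target token ids

    logits = model(src)                                    # forward pass runs on opt.device
    loss = criterion(logits.view(-1, logits.size(-1)), tgt.view(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()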
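
For step 3, pulling results back to the CPU is also how you hand them to NumPy or to logging code; loss and logits here refer to the sketch above, while .item(), .detach(), .cpu(), and .numpy() are standard PyTorch calls:

    loss_value = loss.item()                               # scalar copied to the CPU as a Python float
    preds = logits.argmax(dim=-1).detach().cpu().numpy()   # detach from the graph before converting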
