[Solved] RuntimeError: Tensor for argument #2 'mat1' is on CPU, but expected it to be on GPU

When running the program, the following error is raised:

RuntimeError: Tensor for argument #2 'mat1' is on CPU, but expected it to be on GPU (while checking arguments for addmm)

The error occurs at this line:

predict = model(torch.autograd.Variable(x_train))

The preceding code comes from an earlier post:

pytorch网络的模板_子根的博客-CSDN博客

Cause analysis:

This happens because the model was moved onto CUDA, so the input data x_train also needs to be moved onto CUDA.
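The general rule can be sketched as follows. This is a minimal, hypothetical setup (a one-layer linear model, not the code from the earlier post): the model's parameters and the input must live on the same device before the forward pass, which is what `addmm` was complaining about.

```python
import torch
import torch.nn as nn

# Pick the GPU when available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(1, 1).to(device)   # model parameters now live on `device`
x_train = torch.randn(15, 1)         # freshly created tensors start on the CPU

# Moving the input to the model's device avoids the addmm device-mismatch error.
predict = model(x_train.to(device))
print(predict.device)                # same device as the model's parameters
```

Writing the code against a `device` variable like this also keeps it runnable on machines without a GPU.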

Improved version 1:

predict = model(torch.autograd.Variable(x_train.cuda()))

This no longer raises the error, but I want to compare the predicted values in predict against the labels by plotting them on a 2-D chart, which fails with:

TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

The error occurs in the plotting code:

plt.plot(x_train.numpy(), predict.numpy(), 'r.')

Friendly reminder: at this point x_train is a plain (CPU) Tensor, while predict is also a Tensor but carries a little tail (device='cuda:0'), so the two don't match and the plot fails:

tensor([[ 3.3000],
        [ 4.4000],
        [ 5.5000],
        [ 6.7100],
        [ 6.9300],
        [ 4.1680],
        [ 9.7790],
        [ 6.1820],
        [ 7.5900],
        [ 2.1670],
        [ 7.0420],
        [10.7910],
        [ 5.3130],
        [ 7.9970],
        [ 3.1000]])

tensor([[1.0622],
        [1.5116],
        [1.9626],
        [2.4584],
        [2.5479],
        [1.4172],
        [3.7154],
        [2.2425],
        [2.8182],
        [0.5984],
        [2.5943],
        [4.1281],
        [1.8858],
        [2.9845],
        [0.9798]], device='cuda:0', grad_fn=)
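That "little tail" is the tensor's device attribute, and it is exactly what NumPy trips over: NumPy arrays can only live in host (CPU) memory. A small sketch of the behavior (the tensor values here are illustrative, not the training data from the post):

```python
import torch

t_cpu = torch.tensor([[3.3], [4.4]])
print(t_cpu.device)                 # cpu; .numpy() works fine here

# On a machine with CUDA, a tensor moved to the GPU reports device='cuda:0',
# and calling .numpy() on it raises the TypeError shown above. Copying it
# back with .cpu() makes the conversion legal again.
if torch.cuda.is_available():
    t_gpu = t_cpu.cuda()
    try:
        t_gpu.numpy()
    except TypeError as e:
        print(e)                    # can't convert cuda:0 device type tensor ...
    print(t_gpu.cpu().numpy())      # host copy converts without error
```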

Improved version 2:

The fix is simply to move the result back onto the CPU (there may well be simpler ways, but I don't know them):

predict = model(torch.autograd.Variable(x_train.cuda())).cpu()

And then the plot comes out perfectly.
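One caveat worth noting: in recent PyTorch versions, calling `.numpy()` on a tensor that still carries a `grad_fn` raises a RuntimeError, so the common idiom is `.detach().cpu().numpy()`. A minimal sketch, again using a hypothetical one-layer model rather than the post's original network:

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Linear(1, 1).to(device)
x_train = torch.randn(15, 1)

predict = model(x_train.to(device))

# detach() drops the grad_fn, cpu() copies the tensor to host memory,
# numpy() converts it; the result is safe to hand to matplotlib.
y = predict.detach().cpu().numpy()
print(y.shape)                      # (15, 1)
```

This also replaces the deprecated `torch.autograd.Variable` wrapper, which has been a no-op since PyTorch 0.4: plain tensors can be fed to the model directly.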

Reference blog:

python - Tensor for argument #2 'mat1' is on CPU, but expected it to be on GPU - 探索字符串
