<input>:1: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed.

I ran into this problem today and got the warning above.

At first the fix seemed simple: put the data on the GPU. So I moved the data onto the GPU and reassigned requires_grad = True on the moved tensors:

    # first attempt: move each custom weight tensor to the GPU ...
    self.w_1_p = self.w_1_p.to(device)
    self.w_2_p = self.w_2_p.to(device)
    self.w_3_p = self.w_3_p.to(device)
    self.fc_w = self.fc_w.to(device)

    # ... and then try to turn gradient tracking back on
    self.w_1_p.requires_grad = True
    self.w_2_p.requires_grad = True
    self.w_3_p.requires_grad = True
    self.fc_w.requires_grad = True

The steps above did not solve the problem; even forcing the tensors to be leaf nodes failed.
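
The reason this cannot work is that .to(device) is itself tracked by autograd: applied to a tensor that requires grad, it returns a new, non-leaf tensor, and .grad is only populated on leaf tensors. A minimal repro of the warning (assuming a CUDA device is available):

    import torch

    w = torch.randn(2, 2, requires_grad=True)  # a leaf tensor
    w = w.to("cuda")    # .to() is an autograd op, so the result is NOT a leaf
    print(w.is_leaf)    # False

    loss = w.sum()
    loss.backward()
    print(w.grad)       # None, plus the "not a leaf Tensor" UserWarning
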
The key reason is that the original code used Variable, which is ancient: it dates back to the 0.x versions of PyTorch. With current PyTorch, define custom parameters directly as torch.nn.Parameter and place them on the GPU at creation time. The concrete code is as follows:

    # the weights are loaded from .mat files with scipy.io (imported as sio)
    tmp = sio.loadmat('./tmp/afew/w_1.mat')['w_1']
    # self.w_1_p = Variable(torch.from_numpy(tmp), requires_grad=True)
    self.w_1_p = torch.nn.Parameter(torch.from_numpy(tmp).cuda())

    tmp = sio.loadmat('./tmp/afew/w_2.mat')['w_2']
    # self.w_2_p = Variable(torch.from_numpy(tmp), requires_grad=True)
    self.w_2_p = torch.nn.Parameter(torch.from_numpy(tmp).cuda())

    tmp = sio.loadmat('./tmp/afew/w_3.mat')['w_3']
    # self.w_3_p = Variable(torch.from_numpy(tmp), requires_grad=True)
    self.w_3_p = torch.nn.Parameter(torch.from_numpy(tmp).cuda())

    tmp = sio.loadmat('./tmp/afew/fc.mat')['theta']
    # self.fc_w = Variable(torch.from_numpy(tmp.astype(np.float64)), requires_grad=True)
    self.fc_w = torch.nn.Parameter(torch.from_numpy(tmp).cuda())
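
Why this works: an nn.Parameter is always a leaf tensor with requires_grad=True, and when it is registered as a module attribute, model.cuda() / model.to(device) moves it in place, so it stays a leaf and its .grad gets populated. A minimal sketch with a hypothetical module (again assuming CUDA):

    import torch

    class MyModel(torch.nn.Module):
        def __init__(self):
            super().__init__()
            # nn.Parameter is a leaf and requires grad by default
            self.w = torch.nn.Parameter(torch.randn(4, 4))

    model = MyModel().cuda()         # parameters are moved in place, staying leaves
    print(model.w.is_leaf)           # True

    model.w.sum().backward()
    print(model.w.grad is not None)  # True: .grad is filled in, no warning
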

The original code was written entirely in NumPy; torch offers analogous operations for GPU computation, so there is no need to convert back to NumPy.
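
For example (a hypothetical matrix product, assuming CUDA), a NumPy operation has a direct torch counterpart on the GPU:

    import numpy as np
    import torch

    a = np.random.randn(3, 3)
    b = np.random.randn(3, 3)
    c_np = a @ b                  # NumPy version, on the CPU

    a_t = torch.from_numpy(a).cuda()
    b_t = torch.from_numpy(b).cuda()
    c_t = a_t @ b_t               # the same product, computed on the GPU
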
