After the PyTorch 0.4 update, Variable and Tensor were merged.
torch.Tensor() can be updated through backpropagation just like Variable, and the return value is a Tensor.
Variable simply creates a tensor and its return value is also a Tensor, so there is no longer any need to use Variable.
When a Tensor is created, requires_grad defaults to False.
The default False can be changed to True with xxx.requires_grad_().
Code is given below, followed by the example from the official documentation:
import torch
from torch.autograd import Variable  # Variable must be imported before it can be used
lis = torch.range(1, 6).reshape((-1, 3))  # create a FloatTensor holding 1~6 with 3 columns and
# the number of rows left unspecified (-1 means "let PyTorch work it out");
# note: torch.range is deprecated, torch.arange(1., 7.) produces the same values
print(lis)
print(lis.requires_grad)  # check that requires_grad defaults to False
lis.requires_grad_()  # use .requires_grad_() to change the default from False to True
print(lis.requires_grad)
The output is:
tensor([[1., 2., 3.],
        [4., 5., 6.]])
False
True
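Once requires_grad is True, autograd tracks every operation on lis and backward() fills in lis.grad. A minimal sketch continuing the example above (sum() is used here simply as an assumed way to get a scalar to call backward() on):
out = lis.sum()  # any operation on lis now records a grad_fn
out.backward()  # backpropagate from the scalar result
print(lis.grad)  # a (2, 3) tensor of ones, since d(sum)/d(lis) = 1 everywhere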
To create a Variable, Variable must receive Tensor data; it cannot be written directly as a=Variable(range(6)).reshape((-1,3)),
otherwise it raises the error: Variable data has to be a tensor, but got range
The correct version is:
import torch
from torch.autograd import Variable
tensor = torch.FloatTensor(range(8)).reshape((-1, 4))  # wrap the Python range in a FloatTensor first
my_ten = Variable(tensor)  # Variable() simply hands the Tensor back
print(my_ten)
print(my_ten.requires_grad)
my_ten.requires_grad_()
print(my_ten.requires_grad)
Output:
tensor([[0., 1., 2., 3.],
        [4., 5., 6., 7.]])
False
True
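As a quick check that Variable really does just return an ordinary tensor, you can inspect its type (a minimal sketch reusing my_ten from the example above; the printed type is what current PyTorch reports, not part of the original output):
print(type(my_ten))  # <class 'torch.Tensor'>
print(isinstance(my_ten, torch.Tensor))  # True: Variable(...) returns a plain Tensor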
From the above it is clear that Tensor can completely replace Variable. The official documentation example follows:
# by default a Tensor is created with requires_grad = False
x = torch.ones(1)  # create a tensor with requires_grad=False (default)
x.requires_grad
# out: False
# create another Tensor, also with requires_grad = False
y = torch.ones(1)  # another tensor with requires_grad=False
# both inputs have requires_grad=False, so does the output
z = x + y
# because x and y both have requires_grad=False, neither supports automatic differentiation,
# so the result z of the operation z = x + y cannot be differentiated either: requires_grad=False
z.requires_grad
# out: False
# then autograd won't track this computation. let's verify!
# autograd therefore cannot run, and the program raises an error
z.backward()
# out: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
# now create a tensor with requires_grad=True
w = torch.ones(1, requires_grad=True)
w.requires_grad
# out: True
# add to the previous result that has require_grad=False
# because the input w has requires_grad=True, this operation supports backpropagation
# and automatic differentiation
total = w + z
# the total sum now requires grad!
total.requires_grad
# out: True
# autograd can compute the gradients as well
total.backward()
w.grad
# out: tensor([ 1.])
# and no computation is wasted to compute gradients for x, y and z, which don't require grad
# since z, x and y have requires_grad=False, their gradients were not computed
z.grad == x.grad == y.grad == None
# out: True
# requires_grad can also be switched on after the fact with .requires_grad_()
existing_tensor.requires_grad_()
existing_tensor.requires_grad
# out: True
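To see autograd compute something other than a gradient of ones, here is a small self-contained sketch (the function 3*w*w + x is just an assumed example, not taken from the official documentation):
import torch
w = torch.ones(1, requires_grad=True)  # the parameter we want a gradient for
x = torch.ones(1)  # requires_grad=False, so it is treated as a constant
total = (3 * w * w + x).sum()  # d(total)/dw = 6*w, which is 6 at w = 1
total.backward()
print(w.grad)  # tensor([6.])
print(x.grad)  # None: no gradient is tracked for x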
Alternatively, pass requires_grad=True directly when creating the Tensor:
my_tensor = torch.zeros(3, 4, requires_grad=True)
my_tensor.requires_grad
# out: True
lis = torch.range(1, 6, requires_grad=True).reshape((-1, 3))  # requires_grad set at creation time
# note: lis is a reshaped view rather than a leaf tensor, so the requires_grad_() call below
# is redundant here and may be rejected for non-leaf tensors in newer PyTorch versions
print(lis)
print(lis.requires_grad)
lis.requires_grad_()
print(lis.requires_grad)
Output:
tensor([[1., 2., 3.],
        [4., 5., 6.]], requires_grad=True)
True
True