PyTorch Variable objects: deriving gradients

From simple to complex.
Look at the code:
import torch
from torch.autograd import Variable

a = Variable(torch.ones(4),requires_grad=True)
b=a+2
c=b.mean()
print(c)
c.backward()
print(a.grad)
Output:
tensor(3., grad_fn=<MeanBackward0>)
tensor([0.2500, 0.2500, 0.2500, 0.2500])
Analysis: a is a tensor with 4 elements, and c = mean(b) = (1/4)*(a_1+2 + a_2+2 + a_3+2 + a_4+2). The derivative of c with respect to each a_i is 1/4, so a's gradient is [0.2500, 0.2500, 0.2500, 0.2500].
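The same gradient can be checked without the Variable wrapper, since current PyTorch lets you set requires_grad directly on a tensor (a small sketch, not part of the original code); the starting values do not matter here because the derivative of mean(a + 2) is a constant 1/4:

import torch

# Sketch: the gradient of mean(a + 2) is 1/4 per element,
# independent of the values stored in a.
a = torch.tensor([1.0, 2.0, 3.0, 4.0], requires_grad=True)
c = (a + 2).mean()
c.backward()
print(a.grad)   # tensor([0.2500, 0.2500, 0.2500, 0.2500])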

Adding a bit of difficulty:
import torch
from torch.autograd import Variable

a = Variable(torch.ones(4),requires_grad=True)
b=a*a
c=b.mean()
print(c)
c.backward()
print(a.grad)
Output:
tensor(1., grad_fn=<MeanBackward0>)
tensor([0.5000, 0.5000, 0.5000, 0.5000])
Analysis: here c = mean(a*a) = (1/4)*(a_1² + a_2² + a_3² + a_4²), so the derivative with respect to each a_i is a_i/2. Since every element of a is 1, each entry of the gradient is 0.5.
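With all elements equal to 1 the per-element dependence is hidden, so here is a small sketch (not from the original post) using distinct starting values; the gradient of mean(a * a) with respect to a_i is a_i/2:

import torch

# Sketch: the gradient of mean(a * a) is a_i / 2 for each element.
a = torch.tensor([1.0, 2.0, 3.0, 4.0], requires_grad=True)
c = (a * a).mean()
c.backward()
print(a.grad)           # tensor([0.5000, 1.0000, 1.5000, 2.0000])
print(a.detach() / 2)   # same values, matching the analytic derivative a_i / 2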

Another example:
import torch
from torch.autograd import Variable

t1 = torch.ones(4)
for i in range(4):
    t1[i] = i
print(t1)
a = Variable(t1,requires_grad=True)
c=torch.sin(a)
print(c.grad_fn)
d=c.mean()
d.backward()
print(a.grad)
Result:
tensor([0., 1., 2., 3.])
<SinBackward0 object at 0x...>
tensor([ 0.2500, 0.1351, -0.1040, -0.2475])
Analysis: this is the chain rule for a composite function, d = f(g(x)) with g(x) = sin(x) and f the mean, so d = (1/4)*(sin(x_1) + ... + sin(x_4)) and the derivative with respect to each x_i is (1/4)*cos(x_i). Substituting x = [0, 1, 2, 3] gives the result above.
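As a cross-check (again a sketch, not part of the original post), the analytic chain-rule gradient (1/4)*cos(x) can be compared directly with what autograd computes:

import torch

# Sketch: autograd gradient of mean(sin(x)) vs. the analytic (1/4) * cos(x).
x = torch.arange(4.0, requires_grad=True)   # tensor([0., 1., 2., 3.])
d = torch.sin(x).mean()
d.backward()
print(x.grad)                      # tensor([ 0.2500,  0.1351, -0.1040, -0.2475])
print(torch.cos(x.detach()) / 4)   # analytic result, identical values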
