PyTorch - autograd Automatic Differentiation

Paper: Automatic Differentiation in Machine Learning: a Survey

Reference: the PyTorch tutorial AUTOMATIC DIFFERENTIATION WITH TORCH.AUTOGRAD


The loss is a scalar. For a non-scalar output, autograd computes a Jacobian product v^T · J (a vector-Jacobian product) rather than the full Jacobian matrix.
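A minimal sketch of this Jacobian product (the tensors here are illustrative, not from the original post): for a non-scalar output, backward() takes a vector v and stores v^T · J in x.grad.

import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x * 2                           # non-scalar output; the Jacobian is 2·I
v = torch.tensor([1.0, 0.5, 0.25])  # the v in v^T · J
y.backward(v)                       # computes v^T · J and accumulates it into x.grad
print(x.grad)                       # tensor([2.0000, 1.0000, 0.5000])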

primitive operation: the computational graph is built from primitive operations, and each one knows how to compute its value in the forward pass and its gradient in the backward pass.
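As an illustration (continuing from the snippet above, not part of the original post), a custom primitive can be defined with torch.autograd.Function by supplying both the forward computation and the backward gradient rule; this Exp example follows the standard Function pattern:

class Exp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        result = torch.exp(x)
        ctx.save_for_backward(result)  # cache the output for the backward pass
        return result

    @staticmethod
    def backward(ctx, grad_output):
        result, = ctx.saved_tensors
        return grad_output * result    # d/dx exp(x) = exp(x), chained with the incoming gradient

t = torch.randn(3, requires_grad=True)
out = Exp.apply(t)                     # custom Functions are invoked via apply()
out.sum().backward()
print(t.grad)                          # equals torch.exp(t)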


torch.autograd computes the gradients:

import torch

x = torch.ones(5)  # input tensor
print(f"x: {x}")
y = torch.zeros(3)  # expected output (labels)
print(f"y: {y}")
w = torch.randn(5, 3, requires_grad=True)  # requires_grad=True enables gradient tracking
print(f"w: {w}")
b = torch.randn(3, requires_grad=True)
print(f"b: {b}")
z = torch.matmul(x, w)+b

loss = torch.nn.functional.binary_cross_entropy_with_logits(z, y)

Gradients are computed with the back-propagation algorithm.

backward() is a method of the Tensor class. A scalar loss can call backward() with no arguments; if loss is a non-scalar tensor, backward() must be passed a gradient tensor of the same shape.

print(f"Gradient function for z = {z.grad_fn}")
print(f"Gradient function for loss = {loss.grad_fn}")

loss.backward()
print(w.grad)
print(b.grad)
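An alternative, shown here as a sketch that recomputes the graph from the tensors above, is torch.autograd.grad, which returns the gradients directly instead of accumulating them into .grad:

z = torch.matmul(x, w) + b
loss = torch.nn.functional.binary_cross_entropy_with_logits(z, y)
w_grad, b_grad = torch.autograd.grad(loss, [w, b])  # returns a tuple of gradients
print(w_grad)
print(b_grad)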

retain_graph=True keeps the computation graph after backward(). Without it, the graph is freed and a second call raises: RuntimeError: Trying to backward through the graph a second time
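A minimal sketch of that behaviour, using a throwaway tensor: the first two calls succeed because the graph was retained, and the third raises the RuntimeError.

a = torch.ones(2, requires_grad=True)
s = (a * 3).sum()
s.backward(retain_graph=True)  # graph is kept alive
s.backward()                   # still works, but this call frees the graph
try:
    s.backward()               # graph is gone now
except RuntimeError as err:
    print(err)                 # Trying to backward through the graph a second time ...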

torch.no_grad() disables gradient tracking:

z = torch.matmul(x, w)+b
print(z.requires_grad)

with torch.no_grad():
    z = torch.matmul(x, w)+b
print(z.requires_grad)
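torch.no_grad() can also be used as a decorator; a small sketch with a hypothetical helper function predict():

@torch.no_grad()
def predict(x, w, b):
    # no computation graph is built inside this function
    return torch.matmul(x, w) + b

print(predict(x, w, b).requires_grad)  # False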

After z = z.detach(), z.requires_grad is False:

z = z.detach()
print(z.requires_grad)
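A common use of detach() is to stop gradients from flowing through part of a computation; a small illustrative sketch (the tensors and the "target" branch are hypothetical):

a = torch.randn(3, requires_grad=True)
target = (a * 2).detach()              # detached: treated as a constant by autograd
pred = a * 3
loss = ((pred - target) ** 2).sum()
loss.backward()
print(a.grad)                          # gradient flows only through the pred branch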

DAG: directed acyclic graph. Autograd records the operations in a DAG whose leaves are the input tensors and whose roots are the output tensors.
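The DAG can be inspected through grad_fn and its next_functions attribute; a small sketch (the exact grad_fn class names can vary across PyTorch versions):

p = torch.ones(2, requires_grad=True)
q = (p * 2).sum()
print(q.grad_fn)                     # e.g. SumBackward0, the root of the backward DAG
print(q.grad_fn.next_functions)      # e.g. ((MulBackward0, 0),), the next node toward the leaves
print(q.grad_fn.next_functions[0][0].next_functions)  # eventually AccumulateGrad for the leaf p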

When the loss is a tensor, pass a gradient tensor of the same shape (here torch.ones_like(inp)) to backward().

retain_graph=True keeps the graph so that backward() can be called repeatedly.

Gradients accumulate across calls, so zero them with inp.grad.zero_().

inp = torch.eye(5, requires_grad=True)
out = (inp+1).pow(2)
print(out)
out.backward(torch.ones_like(inp), retain_graph=True)
print(f"First call\n{inp.grad}")
out.backward(torch.ones_like(inp), retain_graph=True)
print(f"\nSecond call\n{inp.grad}")
inp.grad.zero_()
out.backward(torch.ones_like(inp), retain_graph=True)
print(f"\nCall after zeroing gradients\n{inp.grad}")
