Getting Started with PyTorch: Autograd

1. What Is Autograd

Autograd is at the core of every neural network in PyTorch: it provides automatic computation of gradients for tensors. With Autograd, we only need to define the forward pass when building a network, and PyTorch automatically derives the corresponding backward-pass computation.

2. How to Use Autograd

Tensor is the central class of Autograd. When using it, set requires_grad=True on tensors whose gradients should be computed automatically, and leave it as False for tensors that do not need gradients.

We can read a tensor's gradient through tensor.grad, while tensor.grad_fn references the Function that created the tensor, which autograd uses to compute gradients during the backward pass.

Code placed inside a with torch.no_grad(): block runs without gradient tracking.

The code example is as follows:

import torch

###############################################################
# Create a tensor and set requires_grad=True to track computation with it
x = torch.ones(2, 2, requires_grad=True)
print(x)

###############################################################
# Do an operation of tensor:
y = x + 2
print(y)

###############################################################
# ``y`` was created as a result of an operation, so it has a ``grad_fn``.
print(y.grad_fn)

###############################################################
# Do more operations on y
z = y * y * 3
out = z.mean()

print(z, out)
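
###############################################################
# Since every entry of ``x`` is 1, every entry of ``y`` is 3, so each
# entry of ``z`` is 3 * 3**2 = 27 and ``out`` (the mean of ``z``) is 27.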

################################################################
# ``.requires_grad_( ... )`` changes an existing Tensor's ``requires_grad``
# flag in-place. The input flag defaults to ``False`` if not given.
a = torch.randn(2, 2)
a = ((a * 3) / (a - 1))
print(a.requires_grad)
a.requires_grad_(True)
print(a.requires_grad)
b = (a * a).sum()
print(b.grad_fn)

###############################################################
# Gradients
# ---------
# Let's backprop now
# Because ``out`` contains a single scalar, ``out.backward()`` is
# equivalent to ``out.backward(torch.tensor(1.))``.

out.backward()  # backpropagate to compute the gradients

###############################################################
# print gradients d(out)/dx
#

print(x.grad)
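
###############################################################
# Every entry of ``x.grad`` is 4.5. Calling ``out`` "o", we have
# o = (1/4) * sum_i 3 * (x_i + 2)**2, so
# d(o)/d(x_i) = (3/2) * (x_i + 2) = 4.5 at x_i = 1.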

x = torch.randn(3, requires_grad=True)

y = x * 2
while y.data.norm() < 1000:
    y = y * 2

print(y)

###############################################################
# Now in this case ``y`` is no longer a scalar. ``torch.autograd``
# could not compute the full Jacobian directly, but if we just
# want the Jacobian-vector product, simply pass the vector to
# ``backward`` as argument:
v = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(v)

print(x.grad)
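
###############################################################
# Each doubling in the loop above multiplies ``y`` (and therefore
# dy/dx) by 2, so ``x.grad`` equals ``v`` scaled by that accumulated
# power of two; for example, with a factor of 1024 it would be
# tensor([1.0240e+02, 1.0240e+03, 1.0240e-01]). The exact factor
# depends on the random initialization of ``x``.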

###############################################################
# You can also stop autograd from tracking history on Tensors
# with ``.requires_grad=True`` by wrapping the code block in
# ``with torch.no_grad()``:
print(x.requires_grad)
print((x ** 2).requires_grad)

with torch.no_grad():
    print((x ** 2).requires_grad)
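
###############################################################
# A minimal sketch (not from the original tutorial): a common use of
# ``torch.no_grad()`` is a manual gradient-descent update, so that the
# in-place parameter update itself is not recorded in the graph.
# The learning rate 0.1 and the toy loss below are illustrative only.
w = torch.randn(3, requires_grad=True)
loss = (w ** 2).sum()   # toy scalar loss
loss.backward()         # populates w.grad

with torch.no_grad():
    w -= 0.1 * w.grad   # the update is not tracked by autograd
    w.grad.zero_()      # clear the accumulated gradient before the next step

print(w.requires_grad)  # still True: only the update was untracked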
