torch.autograd provides classes and functions implementing automatic differentiation of arbitrary scalar-valued functions. It requires minimal changes to the existing code - you only need to wrap all tensors in Variable objects.
torch.autograd.backward(variables, grad_variables, retain_variables=False)
Computes the sum of gradients of given variables w.r.t. graph leaves.
The graph is differentiated using the chain rule. If any of variables are non-scalar (i.e. their data has more than one element) and require gradient, the function additionally requires specifying grad_variables. It should be a sequence of matching length that contains the gradient of the differentiated function w.r.t. the corresponding variables (None is an acceptable value for all variables that don't need gradient tensors).
This function accumulates gradients in the leaves - you might need to zero them before calling it.
Parameters:
    variables (sequence of Variable) – Variables of which the derivative will be computed.
    grad_variables (sequence of Tensor) – Gradients w.r.t. each element of the corresponding variables. Required only for non-scalar variables that require gradient.
    retain_variables (bool) – If True, buffers necessary for computing gradients won't be freed after use. Only needed if you want to differentiate some subgraph multiple times.
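Example: a minimal sketch of calling it on a non-scalar variable (values chosen arbitrarily):
>>> import torch
>>> from torch.autograd import Variable
>>> x = Variable(torch.ones(2), requires_grad=True)
>>> y = x * 3  # non-scalar output, so grad_variables must be given
>>> torch.autograd.backward([y], [torch.ones(2)])
>>> x.grad.data  # d(3x)/dx = 3, accumulated into the leaf
3
3
[torch.FloatTensor of size 2]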
Variable API is nearly the same as the regular Tensor API (with the exception of a couple of in-place methods that would overwrite inputs required for gradient computation). In most cases Tensors can be safely replaced with Variables and the code will continue to work just fine. Because of this, we're not documenting all the operations on variables, and you should refer to the torch.Tensor docs for this purpose.
Supporting in-place operations in autograd is a hard matter, and we discourage their use in most cases. Autograd’s aggressive buffer freeing and reuse makes it very efficient and there are very few occasions when in-place operations actually lower memory usage by any significant amount. Unless you’re operating under heavy memory pressure, you might never need to use them.
All Variables keep track of in-place operations applied to them, and if the implementation detects that a variable was saved for backward in one of the functions but was modified in-place afterwards, an error will be raised once the backward pass is started. This ensures that if you're using in-place functions and not seeing any errors, you can be sure that the computed gradients are correct.
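For example, a sketch of this check (assuming tanh saves its output for the backward pass):
>>> x = Variable(torch.randn(3), requires_grad=True)
>>> y = x.tanh()               # tanh saves its output to compute the gradient
>>> y.add_(1)                  # in-place modification of a saved variable
>>> y.backward(torch.ones(3))  # raises an error: a saved variable was modified in-place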
torch.autograd.Variable
Wraps a tensor and records the operations applied to it.
Variable is a thin wrapper around a Tensor object that also holds the gradient w.r.t. it, and a reference to the function that created it. This reference allows retracing the whole chain of operations that created the data. If the Variable has been created by the user, its creator will be None and we call such objects leaf Variables.
Since autograd only supports scalar-valued function differentiation, grad size always matches the data size. Also, grad is normally only allocated for leaf variables, and will always be zero otherwise.
Variables:
    data – The tensor wrapped by this Variable.
    grad – Variable holding the gradient w.r.t. the data, with size matching the size of data; lazily allocated.
    requires_grad (bool) – Indicates whether the Variable requires gradient. Can only be changed on leaf Variables.
    volatile (bool) – Indicates that the Variable should be used in inference mode, i.e. without saving the history.
    creator (Function) – The Function that created this Variable (None for leaf Variables).
Parameters:
    data (any tensor class) – Tensor to wrap.
    requires_grad (bool) – Value of the requires_grad flag.
    volatile (bool) – Value of the volatile flag.
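Example distinguishing leaf Variables by their creator:
>>> x = Variable(torch.randn(2, 2), requires_grad=True)
>>> y = x.sum()
>>> print(x.creator)  # x was created by the user, so it is a leaf Variable
None
>>> y.creator is None  # y was created by an operation, so it has a creator
False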
backward(gradient=None, retain_variables=False)
Computes the gradient of current variable w.r.t. graph leaves.
The graph is differentiated using the chain rule. If the variable is non-scalar (i.e. its data has more than one element) and requires gradient, the function additionally requires specifying gradient. It should be a tensor of matching type and location that contains the gradient of the differentiated function w.r.t. self.
This function accumulates gradients in the leaves - you might need to zero them before calling it.
Parameters:
    gradient (Tensor) – Gradient of the differentiated function w.r.t. self. Required only if self is non-scalar and requires gradient.
    retain_variables (bool) – If True, buffers necessary for computing gradients won't be freed after use. Only needed if you intend to backpropagate through the graph more than once.
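Example with a scalar output, for which gradient can be omitted:
>>> x = Variable(torch.ones(3), requires_grad=True)
>>> y = (x * x).sum()  # scalar result
>>> y.backward()
>>> x.grad.data  # d(sum(x^2))/dx = 2x
2
2
2
[torch.FloatTensor of size 3]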
detach()
Returns a new Variable, detached from the current graph.
Result will never require gradient. If the input is volatile, the output will be volatile too.
Note
Returned Variable uses the same data tensor as the original one, and in-place modifications on either of them will be seen, and may trigger errors in correctness checks.
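For example:
>>> a = Variable(torch.ones(3), requires_grad=True)
>>> b = a.detach()  # b will never require gradient
>>> b.data += 6     # in-place change to the shared tensor, visible through a as well
>>> a.data[0]
7.0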
detach_()
Detaches the Variable from the graph that created it, making it a leaf.
register_hook(hook)
Registers a backward hook.
The hook will be called every time a gradient with respect to the variable is computed. The hook should have the following signature:
hook(grad) -> Variable or None
The hook should not modify its argument, but it can optionally return a new gradient which will be used in place of grad.
This function returns a handle with a method handle.remove() that removes the hook from the variable.
Example
>>> v = Variable(torch.Tensor([0, 0, 0]), requires_grad=True)
>>> h = v.register_hook(lambda grad: grad * 2) # double the gradient
>>> v.backward(torch.Tensor([1, 1, 1]))
>>> v.grad.data
2
2
2
[torch.FloatTensor of size 3]
>>> h.remove() # removes the hook
reinforce(reward)
Registers a reward obtained as a result of a stochastic process.
Differentiating stochastic nodes requires providing them with a reward value. If your graph contains any stochastic operations, you should call this function on their outputs. Otherwise an error will be raised.
Parameters:
    reward (Tensor) – Tensor with per-element rewards. It has to match the device location and shape of Variable's data.
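For example, in a REINFORCE-style setup (policy and env here are hypothetical placeholders, not part of this API, and we assume multinomial() is one of the stochastic operations):
probs = policy(state)           # Variable of action probabilities (assumed model)
action = probs.multinomial()    # stochastic node sampled from the distribution
reward = env.step(action.data)  # hypothetical environment returning a reward Tensor
action.reinforce(reward)        # register the reward on the stochastic output
action.backward()               # differentiate through the stochastic graph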
torch.autograd.Function
Records operation history and defines formulas for differentiating ops.
Every operation performed on Variables creates a new function object that performs the computation and records that it happened. The history is retained in the form of a DAG of functions, with edges denoting data dependencies (input <- output). Then, when backward is called, the graph is processed in topological order, by calling the backward() methods of each Function object and passing the returned gradients on to the next Functions.
Normally, the only way users interact with functions is by creating subclasses and defining new operations. This is a recommended way of extending torch.autograd.
Since Function logic is a hotspot in most scripts, almost all of it was moved to our C backend, to ensure that the framework overhead is minimal.
Each function is meant to be used only once (in the forward pass).
Variables:
    saved_tensors – Tuple of tensors that were saved in the call to forward() (see save_for_backward()).
    needs_input_grad – Tuple of booleans, one per input to forward(), indicating whether a given input requires gradient.
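For example, a sketch of a user-defined function computing exp in this style:
import torch
from torch.autograd import Function, Variable

class Exp(Function):

    def forward(self, i):
        result = i.exp()
        self.save_for_backward(result)  # keep the output for the backward formula
        return result

    def backward(self, grad_output):
        result, = self.saved_tensors
        return grad_output * result     # d/dx exp(x) = exp(x)

x = Variable(torch.randn(3), requires_grad=True)
y = Exp()(x)  # a fresh instance per call, since each Function is used only once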
backward(*grad_output)
Defines a formula for differentiating the operation.
This function is to be overridden by all subclasses.
All arguments are tensors. It has to accept exactly as many arguments as there were outputs of forward(), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.
forward(*input)
Performs the operation.
This function is to be overridden by all subclasses.
It can take and return an arbitrary number of tensors.
mark_dirty(*args)
Marks given tensors as modified in an in-place operation.
This should be called at most once, only from inside the forward() method, and all arguments should be inputs.
Every tensor that's been modified in-place in a call to forward() should be given to this function, to ensure correctness of our checks. It doesn't matter whether the function is called before or after modification.
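For example, a sketch of a hypothetical in-place op declaring its modification:
class AddOneInplace(Function):

    def forward(self, x):
        x.add_(1)           # modify the input tensor in-place
        self.mark_dirty(x)  # declare the in-place modification to autograd
        return x

    def backward(self, grad_output):
        return grad_output  # d(x + 1)/dx = 1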
mark_non_differentiable(*args)
Marks outputs as non-differentiable.
This should be called at most once, only from inside the forward() method, and all arguments should be outputs.
This will mark outputs as not requiring gradients, increasing the efficiency of backward computation. You still need to accept a gradient for each output in backward(), but it's always going to be None.
This is used e.g. for indices returned from a max Function.
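For example, a sketch of a hypothetical 1-D sort function whose indices output is non-differentiable:
class Sort1d(Function):

    def forward(self, x):
        y, indices = x.sort()                  # ascending sort of a 1-D tensor
        self.mark_non_differentiable(indices)  # indices carry no gradient
        self.save_for_backward(indices)
        return y, indices

    def backward(self, grad_output, grad_indices):
        # grad_indices is always None for the non-differentiable output
        indices, = self.saved_tensors
        grad_x = grad_output.new(grad_output.size()).zero_()
        grad_x.scatter_(0, indices, grad_output)  # route gradients back to the original positions
        return grad_x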
mark_shared_storage(*pairs)
Marks that given pairs of distinct tensors are sharing storage.
This should be called at most once, only from inside the forward() method, and all arguments should be pairs of (input, output).
If some of the outputs are going to be tensors sharing storage with some of the inputs, all pairs of (input_arg, output_arg) should be given to this function, to ensure correctness checking of in-place modification. The only exception is when an output is exactly the same tensor as an input (e.g. in-place ops). In such a case it's easy to conclude that they're sharing data, so we don't require specifying such dependencies.
This function is not needed in most functions. It’s primarily used in indexing and transpose ops.
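For instance, a transpose-like function would declare the sharing like this (a sketch, assuming the pair-based signature described above):
class Transpose2d(Function):

    def forward(self, x):
        y = x.t()                         # y shares storage with x
        self.mark_shared_storage((x, y))  # declare the (input, output) pair
        return y

    def backward(self, grad_output):
        return grad_output.t()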
save_for_backward(*tensors)
Saves given tensors for a future call to backward().
This should be called at most once, and only from inside the forward() method.
Later, saved tensors can be accessed through the saved_tensors attribute. Before returning them to the user, a check is made to ensure they weren't used in any in-place operation that modified their content.
Arguments can also be None.
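For example, a sketch of a product function saving both of its inputs:
class Mul(Function):

    def forward(self, a, b):
        self.save_for_backward(a, b)  # both inputs are needed by the backward formula
        return a * b

    def backward(self, grad_output):
        a, b = self.saved_tensors
        return grad_output * b, grad_output * a  # product rule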