How does PyTorch compute derivatives? How do I compute partial derivatives with PyTorch?

I want to use PyTorch to get the partial derivatives of the output with respect to the inputs. Suppose I have the function Y = 5*x1^4 + 3*x2^3 + 7*x1^2 + 9*x2 - 5, and I train a network to replace this function; then I use autograd to calculate dY/dx1 and dY/dx2:

net = torch.load('net_723.pkl')
x = torch.tensor([[1., -1.]], requires_grad=True)
y = net(x)
grad_c = torch.autograd.grad(y, x, create_graph=True, retain_graph=True)[0]

Then I get a wrong derivative:

>>> tensor([[ 7.5583, -5.3173]])

but when I evaluate the function analytically and differentiate it, I get the right answer:

Y = 5*x[0,0]**4 + 3*x[0,1]**3 + 7*x[0,0]**2 + 9*x[0,1] - 5

grad_c = torch.autograd.grad(Y,x,create_graph=True,retain_graph=True)[0]

>>> tensor([[ 34., 18.]])

Why does this happen?

Solution

A neural network is a universal function approximator. That means that, given enough computational resources, training time, nodes, etc., it can approximate any function.

Without any further information on how you trained your network in the first example, I would suspect that your network simply does not fit the underlying function well enough, meaning that its internal representation actually models a different function!
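As a sanity check, one can train a fresh network on the same function and compare its gradient at (1, -1) against the analytic one. This is a minimal sketch: the architecture, training budget, and sampling range below are made up for illustration and are not the asker's net_723.pkl.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def f(x):
    # Y = 5*x1^4 + 3*x2^3 + 7*x1^2 + 9*x2 - 5
    return (5*x[:, 0]**4 + 3*x[:, 1]**3
            + 7*x[:, 0]**2 + 9*x[:, 1] - 5).unsqueeze(1)

# Small MLP as a stand-in for the asker's saved network.
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    xb = torch.rand(256, 2) * 4 - 2          # sample inputs from [-2, 2]^2
    loss = nn.functional.mse_loss(net(xb), f(xb))
    opt.zero_grad()
    loss.backward()
    opt.step()

x = torch.tensor([[1.0, -1.0]], requires_grad=True)

# Gradient of the network's output: only as good as the fit.
g_net = torch.autograd.grad(net(x), x)[0]

# Gradient of the true function: exact.
g_true = torch.autograd.grad(f(x), x)[0]

print(g_net)   # approaches tensor([[34., 18.]]) only if the fit is good
print(g_true)  # tensor([[34., 18.]])
```

How close g_net lands to (34, 18) depends entirely on how well the network fits near that point, which is exactly the effect the answer describes.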

For the second code snippet, automatic differentiation does give you the exact partial derivatives. It does so via a different method; see another one of my answers on SO on the topic of AutoDiff/Autograd specifically.
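To see that autograd is exact here, one can compare it term by term against the hand-derived partials dY/dx1 = 20*x1^3 + 14*x1 and dY/dx2 = 9*x2^2 + 9 at a few points. This small check is not part of the original answer, just an illustration:

```python
import torch

for x1, x2 in [(1.0, -1.0), (0.5, 2.0), (-2.0, 3.0)]:
    x = torch.tensor([[x1, x2]], requires_grad=True)
    Y = 5*x[0, 0]**4 + 3*x[0, 1]**3 + 7*x[0, 0]**2 + 9*x[0, 1] - 5
    g = torch.autograd.grad(Y, x)[0]
    # Closed-form partial derivatives of Y.
    expected = torch.tensor([[20*x1**3 + 14*x1, 9*x2**2 + 9]])
    assert torch.allclose(g, expected)

print("autograd matches the closed form at every test point")
```

At (1, -1) this reproduces the tensor([[34., 18.]]) from the question.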
