PyTorch is a deep learning framework released by Facebook with a distinctive design: unlike TensorFlow, Keras, Theano, and other frameworks, it uses a dynamic computation graph. A PyTorch model can change its structure at runtime based on the values it is processing, whereas the other frameworks use static computation graphs whose structure is fixed before the model runs.
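The difference is easiest to see in a short sketch (this example is illustrative and not from the original text): because the graph is rebuilt on every forward pass, ordinary Python control flow can depend on the data itself.

```python
import torch

def forward(x):
    # The number of multiplications depends on the runtime value of x,
    # which is natural in a dynamic-graph framework.
    y = x
    while y.norm() < 10:
        y = y * 2
    return y

x = torch.ones(2)   # norm is sqrt(2), well below 10
y = forward(x)
print(y)            # doubled repeatedly until the norm reaches 10
```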
pip install numpy
pip install scipy
pip install http://download.pytorch.org/whl/cu75/torch-0.1.12.post2-cp27-none-linux_x86_64.whl
import torch
x = torch.Tensor(3,4)  # a 3x4 tensor; the values are uninitialized memory
print("x Tensor: ",x)
import torch
from torch.autograd import Variable
x=Variable(torch.Tensor(2,2))
print("x variable: ",x)
import torch
from torch.autograd import Variable
x = Variable(torch.ones(2, 2), requires_grad=True)
print(x)
y=x+2
print(y)
z = y * y * 3
out = z.mean()
print(z, out)
out.backward() # backward computes the gradients
print(x.grad)
Here z = (x+2)*(x+2)*3, and out = z.mean() averages over the 4 elements, so d(out)/dx = 6*(x+2)/4 = 3*(x+2)/2.
When x = 1, the gradient is 3*(1+2)/2 = 4.5.
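The hand calculation can be confirmed by autograd itself, reusing the names from the snippet above:

```python
import torch
from torch.autograd import Variable

x = Variable(torch.ones(2, 2), requires_grad=True)
out = ((x + 2) * (x + 2) * 3).mean()   # out = mean(3 * (x + 2)^2)
out.backward()
print(x.grad)   # every entry is 3 * (1 + 2) / 2 = 4.5
```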
Weight update rule: learning_rate is the learning rate, usually written lr; it is the step size. Step size * gradient gives the delta applied to the weights on each update. A larger lr means faster learning, at the cost of precision. Since we descend along the gradient, the update subtracts:
weight = weight - learning_rate * gradient
learning_rate = 0.01
for f in model.parameters():
    f.data.sub_(f.grad.data * learning_rate)
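Put into context, one complete update step might look like this; the tiny linear model and the data are made-up placeholders (not from the original text), just so there are gradients to apply:

```python
import torch
import torch.nn as nn

model = nn.Linear(3, 1)          # hypothetical tiny model
x = torch.ones(4, 3)
target = torch.zeros(4, 1)

loss = ((model(x) - target) ** 2).mean()
loss.backward()                  # fills in f.grad for every parameter

learning_rate = 0.01
for f in model.parameters():
    # in-place update: weight <- weight - lr * gradient
    f.data.sub_(f.grad.data * learning_rate)
```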
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
sample = Variable(torch.ones(2,2))
a=torch.Tensor(2,2)
a[0,0]=0
a[0,1]=1
a[1,0]=2
a[1,1]=3
target = Variable(a)
The value of sample is [[1,1],[1,1]]
The value of target is [[0,1],[2,3]]
criterion = nn.L1Loss()
loss = criterion(sample, target)
print(loss)
loss = (|0-1|+|1-1|+|2-1|+|3-1|) / 4 = 1
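That arithmetic can be checked in plain Python:

```python
sample = [1, 1, 1, 1]   # torch.ones(2, 2) flattened
target = [0, 1, 2, 3]

# L1 loss: mean of the absolute element-wise differences
loss = sum(abs(t - s) for s, t in zip(sample, target)) / len(sample)
print(loss)   # 1.0
```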
criterion = nn.SmoothL1Loss()
loss = criterion(sample, target)
print(loss)
loss = 0.625
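The 0.625 comes from the SmoothL1 (Huber) rule: for an element-wise difference d, the loss is 0.5*d^2 when |d| < 1 and |d| - 0.5 otherwise, averaged over all elements:

```python
sample = [1, 1, 1, 1]
target = [0, 1, 2, 3]

def smooth_l1(d):
    d = abs(d)
    return 0.5 * d * d if d < 1 else d - 0.5

# per-element losses: 0.5, 0.0, 0.5, 1.5
loss = sum(smooth_l1(t - s) for s, t in zip(sample, target)) / len(sample)
print(loss)   # 0.625
```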
criterion = nn.MSELoss()
loss = criterion(sample, target)
print(loss)
loss = 1.5
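Likewise for the mean squared error:

```python
sample = [1, 1, 1, 1]
target = [0, 1, 2, 3]

# MSE loss: mean of the squared element-wise differences
loss = sum((t - s) ** 2 for s, t in zip(sample, target)) / len(sample)
print(loss)   # (1 + 0 + 1 + 4) / 4 = 1.5
```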
criterion = nn.BCELoss()
loss = criterion(sample, target)
print(loss)
loss = -13.8155
Note: BCELoss expects both the input and the target to lie in [0, 1] (the input is treated as a probability). The target above contains 2 and 3, so this is not a valid use of BCELoss; recent PyTorch versions reject such targets with an error, and the -13.8155 printed by old versions is not a meaningful cross-entropy.
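For comparison, a valid use of BCELoss keeps both tensors in [0, 1]; the probabilities and labels below are made-up values for illustration:

```python
import torch
from torch.autograd import Variable
import torch.nn as nn

sample = Variable(torch.Tensor([0.9, 0.1, 0.8, 0.3]))  # predicted probabilities
target = Variable(torch.Tensor([1.0, 0.0, 1.0, 0.0]))  # binary labels

criterion = nn.BCELoss()
# BCE = -mean(t * log(p) + (1 - t) * log(1 - p))
loss = criterion(sample, target)
print(loss)
```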