Commonly Used PyTorch Functions (still updating)

A running list of commonly used functions in torch.

Model Creation

nn.Sequential

A sequential container. Modules will be added to it in the order they are passed in the constructor. Alternatively, an ordered dict of modules can also be passed in.

Simply put, you place neural network modules into it in order, and together they form a complete network.
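
A minimal sketch of chaining layers with nn.Sequential (the layer sizes here are made up for illustration):

import torch
from torch import nn

# Layers run in the order they are listed.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 10),
)

x = torch.randn(8, 1, 28, 28)  # a fake batch of 8 single-channel 28x28 images
print(model(x).shape)          # torch.Size([8, 10])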

nn.Conv2d

Applies a 2D convolution over an input signal composed of several input planes.

Simply a 2D convolutional layer.
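
A quick shape check (the channel counts and image size are arbitrary):

import torch
from torch import nn

# in_channels=3, out_channels=8, 3x3 kernel, stride 1, one pixel of zero padding
conv = nn.Conv2d(3, 8, kernel_size=3, stride=1, padding=1)

x = torch.randn(4, 3, 32, 32)  # (batch, channels, height, width)
print(conv(x).shape)           # torch.Size([4, 8, 32, 32]); padding=1 preserves H and W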

nn.MaxPool2d

Applies a 2D max pooling over an input signal composed of several input planes.

A 2D max-pooling layer, used to downsample the spatial dimensions.
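
Another quick shape check (sizes are illustrative):

import torch
from torch import nn

pool = nn.MaxPool2d(kernel_size=2)  # stride defaults to kernel_size

x = torch.randn(4, 8, 32, 32)
print(pool(x).shape)  # torch.Size([4, 8, 16, 16]); spatial size is halved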

nn.Flatten()

Flattens a contiguous range of dims into a tensor

Flattens the input. Note that because it is meant to sit inside a convolutional network, it flattens from the second dimension onward by default (start_dim=1), leaving the batch dimension intact. This is the key difference from torch.flatten, whose default start_dim=0 flattens everything, as the sketch below shows.
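
A sketch contrasting the two defaults:

import torch
from torch import nn

x = torch.randn(4, 8, 16, 16)  # (batch, channels, height, width)

flat = nn.Flatten()            # default start_dim=1 keeps the batch dimension
print(flat(x).shape)           # torch.Size([4, 2048])

print(torch.flatten(x).shape)  # torch.Size([8192]); default start_dim=0 flattens everything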

nn.Linear

Applies a linear transformation to the incoming data: y = xA^T + b

The fully connected layer in a CNN; these layers typically hold the bulk of the parameters that have to be learned.
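
A short sketch, including a parameter count (the weight is out_features x in_features, plus a bias vector):

import torch
from torch import nn

fc = nn.Linear(in_features=2048, out_features=10)

x = torch.randn(4, 2048)
print(fc(x).shape)  # torch.Size([4, 10])

# 10 * 2048 weights + 10 biases = 20490 learnable parameters
print(sum(p.numel() for p in fc.parameters()))  # 20490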

forward

When you subclass nn.Module, you must implement this method yourself, or the model cannot run on each iteration. The detailed reason: nn.Module.__call__ dispatches to forward, so calling model(x) only works when forward is defined. (nn.Sequential already provides one for you.)
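
A minimal sketch of a custom nn.Module; calling net(x) goes through nn.Module.__call__, which dispatches to forward (the architecture here is made up):

import torch
from torch import nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)
        self.fc = nn.Linear(16 * 14 * 14, 10)

    def forward(self, x):
        # Defines how data flows through the layers on every call.
        x = self.pool(torch.relu(self.conv(x)))
        x = torch.flatten(x, start_dim=1)
        return self.fc(x)

net = TinyNet()
print(net(torch.randn(8, 1, 28, 28)).shape)  # torch.Size([8, 10])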

Updating the Model

nn.CrossEntropyLoss()

This criterion computes the cross entropy loss between input and target.

Computes the cross-entropy loss.

Note: do not run the model output through a softmax layer first. This criterion expects raw logits and applies log-softmax internally.
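
A small sketch; the inputs are raw logits, not probabilities:

import torch
from torch import nn

criterion = nn.CrossEntropyLoss()

logits = torch.randn(8, 10)           # raw scores for 8 samples and 10 classes (no softmax)
targets = torch.randint(0, 10, (8,))  # ground-truth class indices

loss = criterion(logits, targets)
print(loss.item())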

loss.backward()

Computes the gradient of the loss with respect to every tensor that has requires_grad=True, accumulating the result into each tensor's .grad attribute.
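
A tiny autograd sketch showing where the gradient lands:

import torch

w = torch.tensor([2.0], requires_grad=True)
loss = (w * 3.0).sum()  # d(loss)/dw = 3

loss.backward()
print(w.grad)           # tensor([3.])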

torch.optim.*

torch.optim is a package implementing various optimization algorithms. Most commonly used methods are already supported, and the interface is general enough, so that more sophisticated ones can be also easily integrated in the future.

Model optimization: after the loss is computed on each iteration, the optimizer uses the resulting gradients to update the parameters.
Commonly used ones are

from .adadelta import Adadelta as Adadelta
from .adagrad import Adagrad as Adagrad
from .adam import Adam as Adam
from .adamax import Adamax as Adamax
from .adamw import AdamW as AdamW
from .asgd import ASGD as ASGD
from .lbfgs import LBFGS as LBFGS
from .nadam import NAdam as NAdam
from .optimizer import Optimizer as Optimizer
from .radam import RAdam as RAdam
from .rmsprop import RMSprop as RMSprop
from .rprop import Rprop as Rprop
from .sgd import SGD as SGD
from .sparse_adam import SparseAdam as SparseAdam

The differences between each of them (only six are covered there; I haven't figured out the rest either, so if anyone finds a good reference, please give me a nudge).
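
Construction is the same for all of them: pass in the model's parameters plus hyperparameters (the learning rates below are just typical values):

import torch
from torch import nn

model = nn.Linear(10, 2)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# or, for a different algorithm:
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)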

torch.optim.*.zero_grad()

Sets the gradients of all optimized torch.Tensor s to zero.

Resets the gradients to zero. Note: when training with minibatches, you must zero the gradients at the start of every batch, because backward() accumulates into .grad rather than overwriting it.
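
A sketch of why this matters; gradients accumulate across backward() calls until they are reset:

import torch

w = torch.tensor([1.0], requires_grad=True)

(w * 2.0).sum().backward()
print(w.grad)   # tensor([2.])

(w * 2.0).sum().backward()
print(w.grad)   # tensor([4.]); the second gradient was added on top

w.grad.zero_()  # optimizer.zero_grad() does this for every parameter (newer PyTorch sets .grad to None instead)
print(w.grad)   # tensor([0.])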

torch.optim.*.step()

Some optimization algorithms such as Conjugate Gradient and LBFGS need to reevaluate the function multiple times, so you have to pass in a closure that allows them to recompute your model. The closure should clear the gradients, compute the loss, and return it.

Updates the parameters. step(), zero_grad(), and backward() always appear together in a training loop; the sketch below shows how they fit together.
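
A minimal training-step sketch; the model, criterion, and fake batch are placeholders standing in for your own:

import torch
from torch import nn

model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One fake minibatch; in practice this comes from a DataLoader.
inputs = torch.randn(8, 10)
targets = torch.randint(0, 2, (8,))

optimizer.zero_grad()  # 1. clear gradients left over from the previous batch
loss = criterion(model(inputs), targets)
loss.backward()        # 2. compute fresh gradients
optimizer.step()       # 3. update the parameters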
