1. traingd: batch gradient descent. It adjusts the network's weights and biases along the negative gradient of the performance function.
2. traingdm: batch gradient descent with momentum, also a batch training method for feedforward networks. It converges faster than plain gradient descent, and the added momentum term helps the search ride past shallow local minima during training.
3. trainrp: resilient backpropagation (RPROP). It removes the harmful influence of the gradient's magnitude on training and speeds up convergence. (The step sizes are adapted mainly through the delt_inc and delt_dec parameters.)
4. trainlm: the Levenberg-Marquardt algorithm. It has the fastest convergence for moderately sized BP networks and is the toolbox default. Because it avoids computing the Hessian matrix directly, it reduces the computation needed during training, but it requires a large amount of memory.
5. traincgb: conjugate gradient with Powell-Beale restarts. It tests the orthogonality between successive gradients to decide whether to reset the search direction back to the negative gradient.
6. trainscg: scaled conjugate gradient. It combines a model-trust-region approach with the conjugate gradient method, reducing the time spent searching along each adjustment direction.
In general, traingd and traingdm are the basic training functions, while traingda, traingdx, trainrp, traincgf, traincgb, trainscg, trainbfg, and so on are the fast training functions. The main practical difference between them is training time, along with some difference in accuracy.
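To make the differences concrete, here is a minimal Python sketch (not the toolbox's actual implementations) of the update rules behind traingd, traingdm, and trainrp, run on the one-dimensional quadratic f(w) = 0.5·w². All learning rates and the delt_inc/delt_dec values are illustrative defaults, not the toolbox's.

```python
def grad(w):
    return w  # gradient of f(w) = 0.5 * w^2

def traingd_like(w, lr=0.1, steps=50):
    # Plain gradient descent: step along the negative gradient.
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def traingdm_like(w, lr=0.1, mc=0.9, steps=50):
    # Gradient descent with momentum: each update blends the previous
    # step with the current gradient, which can carry the search past
    # shallow local minima.
    dw = 0.0
    for _ in range(steps):
        dw = mc * dw - lr * grad(w)
        w += dw
    return w

def trainrp_like(w, delt_inc=1.2, delt_dec=0.5, delta=0.1, steps=50):
    # RPROP: only the SIGN of the gradient is used. The step size delta
    # grows (delt_inc) while the sign stays stable and shrinks (delt_dec)
    # when it flips, removing the influence of the gradient's magnitude.
    prev_g = 0.0
    for _ in range(steps):
        g = grad(w)
        if g * prev_g > 0:
            delta *= delt_inc
        elif g * prev_g < 0:
            delta *= delt_dec
        w -= delta * (1 if g > 0 else -1 if g < 0 else 0)
        prev_g = g
    return w

for name, fn in [("traingd", traingd_like),
                 ("traingdm", traingdm_like),
                 ("trainrp", trainrp_like)]:
    print(f"{name}-style final w: {fn(5.0):+.4f}")
```

All three drive w toward the minimum at 0; what differs is how the step is chosen, which is exactly where the speed differences noted above come from.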
(The information above was found online; I have forgotten the source.)
(The following is from the MATLAB help documentation.)
nntrain
Neural Network Toolbox Training Functions.
To change a neural network's training algorithm set the net.trainFcn
property to the name of the corresponding function. For example, to use
the scaled conjugate gradient backprop training algorithm:
net.trainFcn = 'trainscg';
Backpropagation training functions that use Jacobian derivatives
These algorithms can be faster but require more memory than gradient
backpropagation. They are also not supported on GPU hardware.
trainlm - Levenberg-Marquardt backpropagation.
trainbr - Bayesian Regulation backpropagation.
Backpropagation training functions that use gradient derivatives
These algorithms may not be as fast as Jacobian backpropagation.
They are supported on GPU hardware with the Parallel Computing Toolbox.
trainbfg - BFGS quasi-Newton backpropagation.
traincgb - Conjugate gradient backpropagation with Powell-Beale restarts.
traincgf - Conjugate gradient backpropagation with Fletcher-Reeves updates.
traincgp - Conjugate gradient backpropagation with Polak-Ribiere updates.
traingd - Gradient descent backpropagation.
traingda - Gradient descent with adaptive lr backpropagation.
traingdm - Gradient descent with momentum.
traingdx - Gradient descent w/momentum & adaptive lr backpropagation.
trainoss - One step secant backpropagation.
trainrp - RPROP backpropagation.
trainscg - Scaled conjugate gradient backpropagation.
Supervised weight/bias training functions
trainb - Batch training with weight & bias learning rules.
trainc - Cyclical order weight/bias training.
trainr - Random order weight/bias training.
trains - Sequential order weight/bias training.
Unsupervised weight/bias training functions
trainbu - Unsupervised batch training with weight & bias learning rules.
trainru - Unsupervised random order weight/bias training.
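Putting the help text into practice, here is a minimal MATLAB sketch of switching the training function before training. The network size and the toy data are my own illustration, not from the documentation:

```matlab
% Toy regression data: 2 inputs, 100 samples, target = sum of the inputs.
x = rand(2, 100);
t = sum(x);

net = feedforwardnet(10);    % one hidden layer with 10 neurons
net.trainFcn = 'trainscg';   % scaled conjugate gradient (gradient-based,
                             % so GPU-capable per the help text above)
net = train(net, x, t);      % train with the selected algorithm
y = net(x);                  % simulate the trained network
```

Swapping `'trainscg'` for `'trainlm'` or `'trainrp'` is all it takes to compare the algorithms on the same data.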