MATLAB newff Function Explained

  • newff
  • Syntax
  • Description
  • Examples
  • Algorithm
  • Transfer Functions TFi
  • Training Functions BTF
  • Parameter Reference

See: http://matlab.izmiran.ru/help/toolbox/nnet/newff.html

newff

Create a feed-forward backpropagation network

Syntax

net = newff

net = newff(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF)

Description

net = newff creates a new network with a dialog box.

net = newff(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF) takes,

PR -- R x 2 matrix of min and max values for R input elements

Si -- Size of ith layer, for Nl layers

TFi -- Transfer function of ith layer, default = 'tansig'

BTF -- Backpropagation network training function, default = 'trainlm'

BLF -- Backpropagation weight/bias learning function, default = 'learngdm'

PF -- Performance function, default = 'mse'

and returns an Nl-layer feed-forward backprop network.
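For illustration, here is a minimal sketch of a call that spells out every argument explicitly; the values simply restate the defaults described above, with an assumed single input ranging from 0 to 10:

% Assumed input range [0 10]; the last three arguments restate the defaults.
net = newff([0 10], [5 1], {'tansig' 'purelin'}, 'trainlm', 'learngdm', 'mse');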

The transfer functions TFi can be any differentiable transfer function such as tansig, logsig, or purelin.

The training function BTF can be any of the backprop training functions such as trainlm, trainbfg, trainrp, traingd, etc.

Caution: trainlm is the default training function because it is very fast, but it requires a lot of memory to run. If you get an "out-of-memory" error when training, try one of these:

  1. Slow trainlm training, but reduce memory requirements, by setting net.trainParam.mem_reduc to 2 or more. (See help trainlm; a one-line sketch follows this list.)
  2. Use trainbfg, which is slower but more memory-efficient than trainlm.
  3. Use trainrp, which is slower but more memory-efficient than trainbfg.
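A minimal sketch of option 1, which splits the Jacobian computation into pieces to lower memory use at the cost of speed:

% Trade trainlm speed for lower memory; larger values reduce memory further.
net.trainParam.mem_reduc = 2;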
The learning function BLF can be either of the backpropagation learning functions such as learngd or learngdm.

The performance function can be any of the differentiable performance functions such as mse or msereg.

Examples

Here is a problem consisting of inputs P and targets T that we would like to solve with a network.

P = [0 1 2 3 4 5 6 7 8 9 10];
T = [0 1 2 3 4 3 2 1 2 3 4];

Here a two-layer feed-forward network is created. The network's single input ranges from 0 to 10. The first layer has five tansig neurons; the second layer has one purelin neuron. The trainlm network training function (the default) is to be used.

net = newff([0 10],[5 1],{'tansig' 'purelin'});

Here the network is simulated and its output plotted against the targets.

Y = sim(net,P);
plot(P,T,P,Y,'o')

Here the network is trained for 50 epochs. Again the network’s output is plotted.

net.trainParam.epochs = 50;
net = train(net,P,T);
Y = sim(net,P);
plot(P,T,P,Y,'o')

Algorithm

Feed-forward networks consist of Nl layers using the dotprod weight function, netsum net input function, and the specified transfer functions.

The first layer has weights coming from the input. Each subsequent layer has a weight coming from the previous layer. All layers have biases. The last layer is the network output.

Each layer’s weights and biases are initialized with initnw.

Adaption is done with trains, which updates weights with the specified learning function. Training is done with the specified training function. Performance is measured according to the specified performance function.
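Since initialization goes through initnw, calling init on the network reapplies it; a minimal sketch:

% Re-initialize all layer weights and biases (uses the configured initnw).
net = init(net);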

Transfer Functions TFi

purelin: linear transfer function.
tansig: hyperbolic tangent sigmoid transfer function.
logsig: log-sigmoid transfer function.

The choice of hidden-layer and output-layer transfer functions has a large effect on the prediction accuracy of a BP neural network. Typically, hidden-layer nodes use tansig or logsig, and output-layer nodes use tansig or purelin.
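These three functions can be called directly, which makes it easy to compare their output ranges; a minimal sketch over an assumed input range of -5 to 5:

% Plot the three transfer functions side by side (input range is assumed).
x = -5:0.1:5;
plot(x, tansig(x), x, logsig(x), x, purelin(x))
legend('tansig (-1,1)', 'logsig (0,1)', 'purelin (unbounded)')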

Training Functions BTF

traingd: gradient (steepest) descent backpropagation.
traingdm: gradient descent with momentum.
traingda: gradient descent with adaptive learning rate.
traingdx: gradient descent with momentum and adaptive learning rate.
trainrp: resilient backpropagation (Rprop).

Conjugate gradient and quasi-Newton algorithms:

traincgf: Fletcher-Reeves conjugate gradient algorithm.
traincgp: Polak-Ribiere conjugate gradient algorithm.
traincgb: Powell-Beale restarts conjugate gradient algorithm.
trainbfg: BFGS quasi-Newton algorithm.
trainoss: one-step secant (OSS) algorithm.
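Any of these names can be passed as the BTF argument; a minimal sketch reusing the earlier example network, assuming the same [0 10] input range:

% Same architecture as before, but trained with resilient backpropagation.
net = newff([0 10], [5 1], {'tansig' 'purelin'}, 'trainrp');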

Parameter Reference

Run net.trainParam to view the training parameters; the fields below are those of trainlm.

Show Training Window Feedback   showWindow: true
Show Command Line Feedback showCommandLine: false
Command Line Frequency                show: 25    epochs between progress displays
Maximum Epochs                      epochs: 1000  maximum number of training epochs
Maximum Training Time                 time: Inf   maximum training time (seconds)
Performance Goal                      goal: 0     performance goal (training stops once reached)
Minimum Gradient                  min_grad: 1e-07 minimum performance gradient
Maximum Validation Checks         max_fail: 6     maximum validation failures
Mu                                      mu: 0.001  initial mu (trainlm's adaptive step parameter)
Mu Decrease Ratio                   mu_dec: 0.1   factor by which mu is decreased
Mu Increase Ratio                   mu_inc: 10    factor by which mu is increased
Maximum mu                          mu_max: 1e10  maximum mu (training stops if mu exceeds it)
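These fields can be set directly before calling train; a short sketch (the values are illustrative, not recommendations):

% Adjust a few training parameters, then retrain the example network.
net.trainParam.epochs = 500;   % cap training at 500 epochs
net.trainParam.goal = 1e-4;    % stop early once MSE falls below 1e-4
net.trainParam.show = 10;      % report progress every 10 epochs
net = train(net, P, T);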
