MATLAB Neural Networks: The Perceptron (4)

To train a perceptron, use train: it repeatedly applies a set of input vectors to the network, updating the network each time, until a stopping criterion is met.
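As a preview, here is the end-to-end workflow this post walks through, condensed into a minimal sketch (Neural Network Toolbox functions newp, train, and sim, with the same P and T used below):

P = [0 1 0 1 1; 1 1 1 0 0];   % input vectors, one per column
T = [0 1 0 0 0];              % target class for each input vector
net = newp(P, T);             % create a perceptron sized to P and T
net.trainParam.epochs = 20;   % cap the number of training epochs
net = train(net, P, T);       % batch training (trainc)
y = sim(net, P)               % verify the trained network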

>> P

P =

     0     1     0     1     1
     1     1     1     0     0

>> T

T =

     0     1     0     0     0

From the newp documentation:

Define a sequence of targets T1 (together P1 and T1 define the operation of an AND gate), and then let the network adapt for 10 passes through the sequence. Then simulate the updated network.
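Note that this excerpt from the newp help starts mid-example: the network and the sequential inputs P1 are created earlier in the help text. In the standard documentation they are defined roughly as follows (shown here so the excerpt is self-contained):

net = newp([0 1; 0 1], 1);            % perceptron with two inputs in [0,1]
P1 = {[0; 0] [0; 1] [1; 0] [1; 1]};   % cell array = a sequence of time steps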

T1 = {0 0 0 1};

net.adaptParam.passes = 10;

net = adapt(net,P1,T1);

Y = sim(net,P1)

Now define a new problem, an OR gate, with batch inputs P2 and targets T2.

P2 = [0 0 1 1; 0 1 0 1];

T2 = [0 1 1 1];

Here you initialize the perceptron (resulting in new random weight and bias values), simulate its output, train for a maximum of 20 epochs, and then simulate it again.

net = init(net);

Y = sim(net,P2)

net.trainParam.epochs = 20;

net = train(net,P2,T2);

Y = sim(net,P2)

Notes

Perceptrons can classify linearly separable classes in a finite amount of time. If input vectors have large variances in their lengths, learnpn can be faster than learnp.
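For example, a minimal sketch of switching a perceptron from the default learnp to the normalized learnpn rule (using the learnFcn properties described in the trains help quoted below):

net = newp([0 1; 0 1], 1);
net.inputWeights{1,1}.learnFcn = 'learnpn';  % normalized perceptron rule
net.biases{1}.learnFcn = 'learnpn';          % use it for the bias as well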

>> net = newp(P,T)

net =

Neural Network object:

architecture:

numInputs: 1

numLayers: 1

biasConnect: [1]

inputConnect: [1]

layerConnect: [0]

outputConnect: [1]

numOutputs: 1  (read-only)

numInputDelays: 0  (read-only)

numLayerDelays: 0  (read-only)

subobject structures:

inputs: {1×1 cell} of inputs

layers: {1×1 cell} of layers

outputs: {1×1 cell} containing 1 output

biases: {1×1 cell} containing 1 bias

inputWeights: {1×1 cell} containing 1 input weight

layerWeights: {1×1 cell} containing no layer weights

functions:

adaptFcn: 'trains'

divideFcn: (none)

gradientFcn: 'calcgrad'

initFcn: 'initlay'

performFcn: 'mae'

plotFcns: {'plotperform','plottrainstate'}

trainFcn: 'trainc'

parameters:

adaptParam: .passes

divideParam: (none)

gradientParam: (none)

initParam: (none)

performParam: (none)

trainParam: .show, .showWindow, .showCommandLine, .epochs,

.goal, .time

weight and bias values:

IW: {1×1 cell} containing 1 input weight matrix

LW: {1×1 cell} containing no layer weight matrices

b: {1×1 cell} containing 1 bias vector

other:

name: ''

userdata: (user information)

>> y=sim(net,P)

y =

1     1     1     1     1

>>

Simulation shows the result is very poor: the weights and bias are still at their default value of 0, so every input is classified as 1. Let's look at the parameters of the adaptation and training functions.
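This is easy to confirm: with zero weights and a zero bias the net input is always 0, and hardlim(0) returns 1, so every input maps to class 1:

net.IW{1,1}   % 1x2 input weight matrix, initially [0 0]
net.b{1}      % bias, initially 0
hardlim(0)    % returns 1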

>> help(net.adaptFcn)

TRAINS Sequential order incremental training w/learning functions.

Syntax

[net,TR,Ac,El] = trains(net,Pd,Tl,Ai,Q,TS,VV,TV)

info = trains(code)

Description

TRAINS is not called directly.  Instead it is called by TRAIN for

networks whose NET.trainFcn property is set to 'trains'.

TRAINS trains a network with weight and bias learning rules with

sequential updates. The sequence of inputs is presented to the network

with updates occurring after each time step.

This incremental training algorithm is commonly used for adaptive

applications.

TRAINS takes these inputs:

NET – Neural network.

Pd  – Delayed inputs.

Tl  – Layer targets.

Ai  – Initial input conditions.

Q   – Batch size.

TS  – Time steps.

VV  – Ignored.

TV  – Ignored.

and after training the network with its weight and bias

learning functions returns:

NET – Updated network.

TR  – Training record.

TR.timesteps – Number of time steps.

TR.perf – performance for each time step.

Ac  – Collective layer outputs.

El  – Layer errors.

Training occurs according to TRAINS' training parameter,

shown here with its default value:

net.trainParam.passes    1  Number of times to present sequence

Dimensions for these variables are:

Pd – NoxNixTS cell array, each element Pd{i,j,ts} is a ZijxQ matrix.

Tl – NlxTS cell array, each element Tl{i,ts} is a VixQ matrix or [].

Ai – NlxLD cell array, each element Ai{i,k} is an SixQ matrix.

Ac – Nlx(LD+TS) cell array, each element Ac{i,k} is an SixQ matrix.

El – NlxTS cell array, each element El{i,k} is an SixQ matrix or [].

Where

Ni = net.numInputs

Nl = net.numLayers

LD = net.numLayerDelays

Ri = net.inputs{i}.size

Si = net.layers{i}.size

Vi = net.targets{i}.size

Zij = Ri * length(net.inputWeights{i,j}.delays)

TRAINS(CODE) returns useful information for each CODE string:

'pnames'    – Names of training parameters.

'pdefaults' – Default training parameters.
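For example, the parameter names and their defaults can be queried directly:

trains('pnames')      % names of trains' training parameters
trains('pdefaults')   % default values (passes = 1)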

Network Use

You can create a standard network that uses TRAINS for adapting

by calling NEWP or NEWLIN.

To prepare a custom network to adapt with TRAINS:

1) Set NET.adaptFcn to 'trains'.

(This will set NET.adaptParam to TRAINS' default parameters.)

2) Set each NET.inputWeights{i,j}.learnFcn to a learning function.

Set each NET.layerWeights{i,j}.learnFcn to a learning function.

Set each NET.biases{i}.learnFcn to a learning function.

(Weight and bias learning parameters will automatically be

set to default values for the given learning function.)

To allow the network to adapt:

1) Set weight and bias learning parameters to desired values.

2) Call ADAPT.

See NEWP and NEWLIN for adaption examples.

Algorithm

Each weight and bias is updated according to its learning function

after each time step in the input sequence.
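Concretely, the perceptron learning function learnp computes the error e = t - a and applies the updates dW = e*p' and db = e at each time step. A hand-rolled sketch of a single update (using the net from the workspace):

p = [1; 0];                              % one input vector
t = 0;                                   % its target
a = hardlim(net.IW{1,1}*p + net.b{1});   % current output
e = t - a;                               % error
net.IW{1,1} = net.IW{1,1} + e*p';        % learnp weight update: dW = e*p'
net.b{1} = net.b{1} + e;                 % learnp bias update:  db = e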

net.adaptParam.passes is the number of times the current sequence is presented to the network during adaptation, i.e. the number of adaptive passes.

net.trainParam.epochs is the maximum number of training epochs.
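A minimal sketch of the adapt path (incremental updates via trains), converting the batch matrices P and T into time sequences with num2cell:

net.adaptParam.passes = 10;   % present the whole sequence 10 times
Pseq = num2cell(P, 1);        % each column of P becomes one time step
Tseq = num2cell(T, 1);
[net, Y, E] = adapt(net, Pseq, Tseq);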

>> net.trainParam.epochs = 20

>> net=train(net,P,T)

net =

Neural Network object:

(The display is identical to the one shown above, except that trainParam now also lists .passes, and the input weight matrix IW and bias b now hold the trained values.)

>>

Training is complete; let's simulate the network and see.

>> y=sim(net,P)

y =

0     1     0     0     0

>> T

T =

0     1     0     0     0

>>

The result is excellent: the output matches the target T exactly, with no error.
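Since the perceptron's performance function (performFcn) is 'mae', the zero error can also be confirmed numerically:

err = mae(T - y)   % mean absolute error; 0 when y matches T exactly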

>> y=sim(net,[1;1])

y =

1

>> y=sim(net,[1;0])

y =

0

>> plotpv(P,T)

>>
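plotpv only draws the labeled input vectors. To also see the decision boundary the perceptron has learned, plotpc can be overlaid on the same axes:

plotpv(P, T);                    % input vectors, marked by class
plotpc(net.IW{1,1}, net.b{1});   % overlay the perceptron's decision line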

Reposted from: 深未来
