The weights and bias learned in the previous example are as follows:
>> net.iw{1,1}
ans =
2 1
>> net.b{1}
ans =
-3
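As a quick sanity check (a sketch assuming the AND data of that example), the perceptron's decision rule w*p + b >= 0 can be evaluated by hand:

% hardlim fires when the net input is >= 0, so with w = [2 1] and b = -3
% only the pattern [1;1] reaches 2*1 + 1*1 - 3 = 0 and outputs 1.
w = [2 1];  b = -3;
P = [0 0 1 1; 0 1 0 1];       % the four AND input patterns
A = hardlim(w*P + b)          % expected: 0 0 0 1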
adapt: the adaptive learning function
>> help adapt
--- help for network/adapt ---
ADAPT Allow a neural network to adapt.
Syntax
[net,Y,E,Pf,Af,tr] = adapt(NET,P,T,Pi,Ai)
Description
[NET,Y,E,Pf,Af,tr] = ADAPT(NET,P,T,Pi,Ai) takes,
NET - Network.
P - Network inputs.
T - Network targets, default = zeros.
Pi - Initial input delay conditions, default = zeros.
Ai - Initial layer delay conditions, default = zeros.
and returns the following after applying the adapt function
NET.adaptFcn with the adaption parameters NET.adaptParam:
NET - Updated network.
Y - Network outputs.
E - Network errors.
Pf - Final input delay conditions.
Af - Final layer delay conditions.
TR - Training record (epoch and perf).
Note that T is optional and only needs to be used for networks
that require targets. Pi and Ai are also optional and need
only be used for networks that have input or layer delays.
ADAPT's signal arguments can have two formats: cell array or matrix.
The cell array format is easiest to describe. It is most
convenient for networks with multiple inputs and outputs,
and allows sequences of inputs to be presented:
P - NixTS cell array, each element P{i,ts} is an RixQ matrix.
T - NtxTS cell array, each element T{i,ts} is a VixQ matrix.
Pi - NixID cell array, each element Pi{i,k} is an RixQ matrix.
Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
Y - NoxTS cell array, each element Y{i,ts} is a UixQ matrix.
E - NoxTS cell array, each element E{i,ts} is a UixQ matrix.
Pf - NixID cell array, each element Pf{i,k} is an RixQ matrix.
Af - NlxLD cell array, each element Af{i,k} is an SixQ matrix.
Where:
Ni = net.numInputs
Nl = net.numLayers
No = net.numOutputs
ID = net.numInputDelays
LD = net.numLayerDelays
TS = number of time steps
Q = batch size
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Ui = net.outputs{i}.size
Vi = net.targets{i}.size
The columns of Pi, Pf, Ai, and Af are ordered from oldest delay
condition to most recent:
Pi{i,k} = input i at time ts=k-ID.
Pf{i,k} = input i at time ts=TS+k-ID.
Ai{i,k} = layer output i at time ts=k-LD.
Af{i,k} = layer output i at time ts=TS+k-LD.
The matrix format can be used if only one time step is to be
simulated (TS = 1). It is convenient for networks with
only one input and output, but can be used with networks that
have more.
Each matrix argument is found by storing the elements of
the corresponding cell array argument into a single matrix:
P - (sum of Ri)xQ matrix.
T - (sum of Vi)xQ matrix.
Pi - (sum of Ri)x(ID*Q) matrix.
Ai - (sum of Si)x(LD*Q) matrix.
Y - (sum of Ui)xQ matrix.
E - (sum of Ui)xQ matrix.
Pf - (sum of Ri)x(ID*Q) matrix.
Af - (sum of Si)x(LD*Q) matrix.
Examples
Here two sequences of 12 steps (where T1 is known to depend
on P1) are used to define the operation of a filter.
p1 = {-1 0 1 0 1 1 -1 0 -1 1 0 1};
t1 = {-1 -1 1 1 1 2 0 -1 -1 0 1 1};
Here NEWLIN is used to create a layer with an input range
of [-1 1], one neuron, input delays of 0 and 1, and a
learning rate of 0.5. The linear layer is then simulated.
net = newlin([-1 1],1,[0 1],0.5);
Here the network adapts for one pass through the sequence.
The network's mean squared error is displayed. (Since this
is the first call of ADAPT the default Pi is used.)
[net,y,e,pf] = adapt(net,p1,t1);
mse(e)
Note the errors are quite large. Here the network adapts
to another 12 time steps (using the previous Pf as the
new initial delay conditions.)
p2 = {1 -1 -1 1 1 -1 0 0 0 1 -1 -1};
t2 = {2 0 -2 0 2 0 -1 0 0 1 0 -1};
[net,y,e,pf] = adapt(net,p2,t2,pf);
mse(e)
Here the network adapts for 100 passes through
the entire sequence.
p3 = [p1 p2];
t3 = [t1 t2];
net.adaptParam.passes = 100;
[net,y,e] = adapt(net,p3,t3);
mse(e)
The error after 100 passes through the sequence is very
small - the network has adapted to the relationship
between the input and target signals.
Algorithm
ADAPT calls the function indicated by NET.adaptFcn, using the
adaption parameter values indicated by NET.adaptParam.
Given an input sequence with TS steps the network is
updated as follows. Each step in the sequence of inputs is
presented to the network one at a time. The network's weight and
bias values are updated after each step, before the next step in
the sequence is presented. Thus the network is updated TS times.
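To make the per-step update concrete, here is a minimal hand-rolled sketch of one adapt pass for a perceptron. It assumes the perceptron learning rule learnp (dW = e*p', db = e) and illustrates the algorithm described above, not the toolbox internals:

% One pass of incremental perceptron learning on the AND data below.
% adapt with NET.adaptParam.passes = N repeats this loop N times.
P = [0 1 0 1 1; 1 1 1 0 0];  T = [0 1 0 0 0];
w = zeros(1,2);  b = 0;
for ts = 1:size(P,2)
    a = hardlim(w*P(:,ts) + b);   % present step ts to the network
    e = T(ts) - a;                % error for this single step
    w = w + e*P(:,ts)';           % learnp weight update: dW = e*p'
    b = b + e;                    % learnp bias update:   db = e
end                               % the weights change TS = 5 times per pass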
Now let's use adaptive learning to train the AND operation:
P=[0 1 0 1 1;1 1 1 0 0]
T=[0 1 0 0 0]
net = newp(P,T)
net.adaptParam.passes=10
[net,y,E] = adapt(net,P,T)
[net,y,E] = adapt(net,P,T)
Two calls to adapt complete the task:
>> Y = sim(net,P)
Y =
0 1 0 0 0
>> T
T =
0 1 0 0 0
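The match can also be checked numerically rather than by eye; a one-line sketch using the error matrix E returned by the second adapt call above:

mae(E)   % mean absolute error of the last pass; 0 means all 5 patterns are correct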
Let's put this in a test.m script:
P=[0 1 0 1 1;1 1 1 0 0]
T=[0 1 0 0 0]
net = newp(P,T)
net.adaptParam.passes=10
e = 1;                      % seed the error so the first while test is defined
while (mae(e)>1e-20)
    [net,y,e]=adapt(net,P,T)
end
mae(e)
y=sim(net,P)
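One caveat before running it: if the data were not linearly separable, adapt could never drive mae(e) to zero and the while loop would spin forever. A guarded variant (a sketch; the maxPasses cap is an assumption, not part of the original script):

e = 1;  k = 0;  maxPasses = 100;   % assumed safety cap on adapt calls
while mae(e) > 1e-20 && k < maxPasses
    [net,y,e] = adapt(net,P,T);
    k = k + 1;
end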
Running test.m gives the expected result:
>> test
P =
0 1 0 1 1
1 1 1 0 0
T =
0 1 0 0 0
net =
Neural Network object:
architecture:
numInputs: 1
numLayers: 1
biasConnect: [1]
inputConnect: [1]
layerConnect: [0]
outputConnect: [1]
numOutputs: 1 (read-only)
numInputDelays: 0 (read-only)
numLayerDelays: 0 (read-only)
subobject structures:
inputs: {1x1 cell} of inputs
layers: {1x1 cell} of layers
outputs: {1x1 cell} containing 1 output
biases: {1x1 cell} containing 1 bias
inputWeights: {1x1 cell} containing 1 input weight
layerWeights: {1x1 cell} containing no layer weights
functions:
adaptFcn: 'trains'
divideFcn: (none)
gradientFcn: 'calcgrad'
initFcn: 'initlay'
performFcn: 'mae'
plotFcns: {'plotperform','plottrainstate'}
trainFcn: 'trainc'
parameters:
adaptParam: .passes
divideParam: (none)
gradientParam: (none)
initParam: (none)
performParam: (none)
trainParam: .show, .showWindow, .showCommandLine, .epochs,
.goal, .time
weight and bias values:
IW: {1x1 cell} containing 1 input weight matrix
LW: {1x1 cell} containing no layer weight matrices
b: {1x1 cell} containing 1 bias vector
other:
name: ''
userdata: (user information)
net =
    Neural Network object: (the same display is printed twice more, once by
    the net.adaptParam.passes assignment and once by the first adapt call;
    omitted here)
y =
0 0 0 0 0
e =
0 1 0 0 0
net =
    Neural Network object: (display omitted; identical to the one above)
y =
0 1 0 0 0
e =
0 0 0 0 0
ans =
0
y =
0 1 0 0 0
>> y
y =
0 1 0 0 0
>> T
T =
0 1 0 0 0
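As at the start of this section, the learned parameters can be read back directly (the exact values depend on the initial weights and the order of the updates):

net.IW{1,1}   % learned input weight matrix
net.b{1}      % learned bias; together they satisfy w*p + b >= 0 only for p = [1;1]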