1. Introduction to the new version of newff
Syntax
· net = newff(P,T,[S1 S2...S(N-1)],{TF1 TF2...TFN},BTF,BLF,PF,IPF,OPF,DDF)
Description
newff(P,T,[S1 S2...S(N-1)],{TF1 TF2...TFN},BTF,BLF,PF,IPF,OPF,DDF) takes the following arguments:
P - R x Q1 matrix of Q1 sample R-element input vectors
T - SN x Q2 matrix of Q2 sample SN-element target vectors
Si - Size of ith layer, for the N-1 hidden layers (default = []; the output layer size SN is determined from T)
TFi - Transfer function of ith layer (default = 'tansig' for hidden layers, 'purelin' for the output layer)
BTF - Backpropagation network training function (default = 'trainlm')
BLF - Backpropagation weight/bias learning function (default = 'learngdm')
PF - Performance function (default = 'mse')
IPF - Row cell array of input processing functions (default = {'fixunknowns','removeconstantrows','mapminmax'})
OPF - Row cell array of output processing functions (default = {'removeconstantrows','mapminmax'})
DDF - Data division function (default = 'dividerand')
Examples
Here is a problem consisting of inputs P and targets T to be solved with a network.
· P = [0 1 2 3 4 5 6 7 8 9 10];T = [0 1 2 3 4 3 2 1 2 3 4];
Here a network is created with one hidden layer of five neurons.
· net = newff(P,T,5);
The network is simulated and its output plotted against the targets.
· Y = sim(net,P);plot(P,T,P,Y,'o')
The network is trained for 50 epochs. Again the network's output is plotted.
· net.trainParam.epochs = 50;net = train(net,P,T);Y = sim(net,P);plot(P,T,P,Y,'o')
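To make the optional arguments listed above concrete, here is a minimal sketch that spells out every argument of the new syntax; the data matrices, layer sizes and function choices are illustrative assumptions, not values taken from the documentation.
% Minimal sketch: all arguments of the new newff syntax written out explicitly.
% The data and the layer sizes here are illustrative assumptions.
P = rand(3,100);                           % 3 x 100 input matrix (R = 3, Q1 = 100)
T = rand(2,100);                           % 2 x 100 target matrix (SN = 2, Q2 = 100)
net = newff(P,T,[10 5], ...                % hidden layer sizes S1 = 10, S2 = 5
    {'tansig','tansig','purelin'}, ...     % transfer functions for the three layers
    'trainlm','learngdm','mse', ...        % BTF, BLF, PF
    {'removeconstantrows','mapminmax'}, ...% IPF
    {'removeconstantrows','mapminmax'}, ...% OPF
    'dividerand');                         % DDF
net = train(net,P,T);
Y = sim(net,P);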
2. Calling-syntax comparison between the new and old newff
Example 1
Suppose the input matrix input is 6x1000 and the output matrix output is 4x1000. Then:
Old-style definition: net=newff(minmax(input),[14,4],{'tansig','purelin'},'trainlm');
New-style definition: net=newff(input,output,14,{'tansig','purelin'},'trainlm');
Example 2
Again with an input of size 6x1000 and an output of size 4x1000:
Old-style definition: net=newff(minmax(input),[49,14,4],{'tansig','tansig','tansig'},'traingdx');
New-style definition: net=newff(input,output,[49,14],{'tansig','tansig','tansig'},'traingdx');
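For reference, Example 1 can be run end to end with placeholder data; the random matrices below are only an assumption so that the two calls can be compared directly.
% Placeholder data with the dimensions used above (6 x 1000 input, 4 x 1000 output)
input  = rand(6,1000);
output = rand(4,1000);
% Old-style call: minmax of the input plus an explicit list of all layer sizes
net_old = newff(minmax(input),[14,4],{'tansig','purelin'},'trainlm');
% New-style call: input/target matrices plus only the hidden layer size
net_new = newff(input,output,14,{'tansig','purelin'},'trainlm');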
3. Using the old newff calling style in the new version
Note: a network defined with the old newff syntax can still be used in the new version, but it produces a warning like the following:
Warning: NEWFF used in an obsolete way.
> In obs_use at 18
In newff>create_network at 127
In newff at 102
See help for NEWFF to update calls to the new argument list.
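For example, an old-style call along the lines of the sketch below (the data is a made-up placeholder) is what triggers this warning in a newer toolbox release; the network is still created, the call is simply converted internally.
% Old-style call; still accepted, but reported as obsolete by newer toolbox versions
p = rand(2,50);
t = rand(1,50);
net = newff(minmax(p),[5,1],{'tansig','purelin'},'trainlm');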
4. Training-result comparison between the new and old newff usage
Old usage: more training epochs are needed, but the accuracy is higher.
New usage: fewer training epochs are needed, but the accuracy may not meet requirements.
The reason for this is:
The initial weights and biases in the program are assigned randomly, so the result is different on every run, sometimes better and sometimes worse.
You can take the weights and biases of a network that happens to predict well and use them as the initial values.
Specifically, look at the values of net.iw{1,1}, net.lw{2,1}, net.b{1} and net.b{2}, as in the sketch below.
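A minimal sketch of this idea, assuming a network with one hidden layer (i.e. two weight layers in total): save the weights and biases of a run that predicts well, then assign them back to a freshly created network of the same architecture before training again. net.IW and net.LW are the documented property names for the values written as net.iw and net.lw above.
% Sketch: keep the weights/biases of a good run and reuse them as initial values
% (assumes one hidden layer, i.e. a two-layer network).
IW1 = net.IW{1,1};   % input-to-hidden weights
LW2 = net.LW{2,1};   % hidden-to-output weights
b1  = net.b{1};      % hidden layer biases
b2  = net.b{2};      % output layer biases
% ... later, after newff has created a network with the same architecture:
net.IW{1,1} = IW1;
net.LW{2,1} = LW2;
net.b{1}    = b1;
net.b{2}    = b2;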
A complete example follows.
%% Clear the environment
clc
clear
%% Training and prediction data
data=importdata('test.txt');
% Random permutation of the integers 1 to 768
k=rand(1,768);
[m,n]=sort(k);
% Input and output data
input=data(:,1:8);
output =data(:,9);
% Randomly take 500 samples for training and the remaining 268 for prediction
input_train=input(n(1:500),:)';
output_train=output(n(1:500),:)';
input_test=input(n(501:768),:)';
output_test=output(n(501:768),:)';
% Normalize the input data
[inputn,inputps]=mapminmax(input_train);
%% BP network training
% Initialize the network structure
net=newff(inputn,output_train,10);
net.trainParam.epochs=1000;
net.trainParam.lr=0.1;
net.trainParam.goal=0.0000004;
%% Train the network
net=train(net,inputn,output_train);
%% BP network prediction
% Normalize the prediction data
inputn_test=mapminmax('apply',input_test,inputps);
% Network prediction output
BPoutput=sim(net,inputn_test);
%% Results analysis
% Assign each sample to a class based on the network output
BPoutput(BPoutput<0.5)=0;
BPoutput(BPoutput>=0.5)=1;
% Plot the predicted classes against the actual classes
figure(1)
plot(BPoutput,'og')
hold on
plot(output_test,'r*');
legend('Predicted class','Actual class')
title('BP network predicted classes vs. actual classes','fontsize',12)
ylabel('Class label','fontsize',12)
xlabel('Sample index','fontsize',12)
ylim([-0.5 1.5])
% Prediction accuracy
rightnumber=0;
for i=1:size(output_test,2)
    if BPoutput(i)==output_test(i)
        rightnumber=rightnumber+1;
    end
end
rightratio=rightnumber/size(output_test,2)*100;
sprintf('Test accuracy = %0.2f%%',rightratio)
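Since both the 500/268 split (via rand) and the initial weights are random, the accuracy printed above will differ from run to run. If you want a reproducible run, one option is to fix the random number state at the top of the script; rand('state',...) is the older MATLAB interface, rng the newer one.
% Optional: fix the random state so the sample split and the weight
% initialization are identical on every run (use rng(0) on newer releases).
rand('state',0);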