1 - Perceptron: MATLAB Neural Networks and Optimization Algorithms - Preliminary Study

2022-12-03. Studying MATLAB neural networks and optimization algorithms. Source: bilibili.

太强大了!【MATLAB神经网络和优化算法】68讲全!从放弃到精通看这个视频就够了!中国人不骗中国人!-MATLAB、神经网络、优化算法_哔哩哔哩_bilibili

Code first; theory to follow.

% 2022-12-03
% Neural network first steps
% Perceptron network - point classification
clear 
clc

% Input vectors
p = [0,0,1,1;0,1,0,1];
% Target vector (the logical OR of the two inputs)
t = [0,1,1,1];

% Create the perceptron network
net = newp(minmax(p),1,'hardlim','learnp');
% Train the network
net = train(net,p,t);
% Simulate the network
y = sim(net,p);
% Plot the sample points
plotpv(p,y)
% Draw the decision boundary on the perceptron vector plot
plotpc(net.IW{1,1},net.b{1});
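If the Deep Learning Toolbox is unavailable, the same idea can be sketched in plain Python. This is my own minimal illustration of the classic perceptron rule applied to the OR truth table above, not the toolbox code; the function names are made up:

```python
# Minimal perceptron trained on the OR truth table (pure-Python sketch,
# not the toolbox implementation -- just the classic perceptron rule).

def hardlim(n):
    """Hard-limit transfer function: 1 if n >= 0, else 0."""
    return 1 if n >= 0 else 0

def train_perceptron(samples, targets, lr=1.0, max_epochs=100):
    """Return (weights, bias) after perceptron-rule training."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(max_epochs):
        errors = 0
        for x, t in zip(samples, targets):
            y = hardlim(sum(wi * xi for wi, xi in zip(w, x)) + b)
            e = t - y                      # the error drives the update
            if e != 0:
                w = [wi + lr * e * xi for wi, xi in zip(w, x)]
                b += lr * e
                errors += 1
        if errors == 0:                    # every sample classified correctly
            break
    return w, b

# Same data as the MATLAB script: the OR gate
P = [(0, 0), (0, 1), (1, 0), (1, 1)]
T = [0, 1, 1, 1]
w, b = train_perceptron(P, T)
predictions = [hardlim(sum(wi * xi for wi, xi in zip(w, x)) + b) for x in P]
print(predictions)  # -> [0, 1, 1, 1]
```

With zero initial weights the rule converges on OR in a few epochs, since the classes are linearly separable.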

Strange: my results differ from the video author's. Is it because I didn't save the classifier? Hmm.

mae - mean absolute error

mse - mean squared error

sse - sum of squared errors

Calling convention: perf = mae(e)
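These three performance functions reduce to one-liners. A plain-Python sketch (the toolbox versions also accept matrices and network objects, which this ignores):

```python
# Pure-Python equivalents of the toolbox performance functions
# mae, mse, and sse, applied to an error vector e = t - y.

def mae(e):
    """Mean absolute error."""
    return sum(abs(v) for v in e) / len(e)

def mse(e):
    """Mean squared error."""
    return sum(v * v for v in e) / len(e)

def sse(e):
    """Sum of squared errors."""
    return sum(v * v for v in e)

e = [1, -2, 0, 1]
print(mae(e), mse(e), sse(e))  # -> 1.0 1.5 6
```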

Threshold (hard-limit) transfer functions: hardlim and hardlims.
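Their behavior is easy to mirror in plain Python (a sketch of the documented behavior: hardlim maps n >= 0 to 1 and everything else to 0; hardlims maps to +1/-1 instead):

```python
# hardlim and hardlims threshold functions (pure-Python sketch of the
# toolbox behavior; both treat n == 0 as the "on" side of the threshold).

def hardlim(n):
    """Binary output: 1 if n >= 0, else 0."""
    return 1 if n >= 0 else 0

def hardlims(n):
    """Bipolar (symmetric) output: 1 if n >= 0, else -1."""
    return 1 if n >= 0 else -1

print([hardlim(n) for n in (-1, 0, 2)])   # -> [0, 1, 1]
print([hardlims(n) for n in (-1, 0, 2)])  # -> [-1, 1, 1]
```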

plotpv is called as plotpv(x,t), where x is an n-by-m matrix with n at most 3, and t is an n-by-m matrix with n at most 3.

plotpc draws the classification line on a perceptron vector plot; it is called as plotpc(w,b), where w must be at most 3-by-3.

learnp is the perceptron weight and bias learning function; see the help documentation for details.

adapt performs adaptive (incremental) training, updating the network after each pass through the data.
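For reference, learnp's update is dW = e * p' and db = e, with e = t - a. A plain-Python sketch of one such step (my own illustration; `learnp_step` is a made-up name, not a toolbox function):

```python
# One learnp-style update for a single hardlim neuron (sketch):
# compute output a, error e = t - a, then dW = e * p' and db = e.

def learnp_step(w, b, p, t):
    """Apply one perceptron-rule update; returns (w, b, e).
    w: weight list, b: bias, p: input list, t: 0/1 target."""
    a = 1 if sum(wi * pi for wi, pi in zip(w, p)) + b >= 0 else 0  # hardlim
    e = t - a
    w = [wi + e * pi for wi, pi in zip(w, p)]
    b = b + e
    return w, b, e

# A single misclassified point pulls the weights toward the correct side
w, b, e = learnp_step([0.0, 0.0], 0.0, [1.0, 2.0], 0)
print(w, b, e)  # -> [-1.0, -2.0] -1.0 -1
```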

% Use the trained network for classification
% Train and save the network, so it can be loaded and reused later

clear 
clc

% Input vectors
p = [-0.4,-0.4,0.5,-0.2,-0.7; -0.6,0.6,-0.4,0.3,0.8];
% Target vector
t = [1,1,0,0,1];
% Plot the samples
plotpv(p,t)
% Create the perceptron network
net = newp(minmax(p),1,'hardlim','learnp');
hold on
linehandle = plotpc(net.IW{1},net.b{1});
% e is the error matrix/vector
e = 1;
net.adaptParam.passes = 10;

% Keep training until the mean absolute error reaches zero
while mae(e)
    % Adapt the perceptron network
    [net,y,e] = adapt(net,p,t);
    linehandle = plotpc(net.IW{1},net.b{1},linehandle);
    drawnow
end
% Save the trained network
% save net1 net;

% Classify new points
% load net1.mat
x = [-0.3,0.3,0.9;-0.6,0.2,0.8];
y = sim(net,x);
figure
plotpv(x,y)
plotpc(net.IW{1},net.b{1})
set(gcf,'position',[60,60,300,300]);

Perceptron Learning

《matlab机器学习及其应用教程》1-6简单感知机的matlab仿真,《小杰matlab》淘宝店店长“小猪老师”出品_哔哩哔哩_bilibili

Perceptron

①Given a set of points N_{i}=(x_{i},y_{i}), i=1,2,...,n, with targets t_{i} equal to 1 or -1 (or 0), and a learning rate \eta.

②Initialize the weights w and bias b, typically to 0 or to small random numbers.

③Take the points (N_{i},t_{i}) in turn; if t_{i}(w N_{i}^{'}+b)<=0, correct the weights and bias:

w=w+\eta (t_{i}-o_{i})N_{i}

b=b+\eta (t_{i}-o_{i})

where o_{i} is the network's current output for N_{i}.

④Return to step ③ until all points are correctly classified.

Very strange: with the same code as the video author, the resulting w and b are different. Hmm.

clc
clear
% Input points, one per row (three points)
x = [3,3;4,3;1,1];
t = [1,1,-1];
% Initialize the weights
% The points are two-dimensional, so w has two components
% (for three-dimensional points, use w = [0,0,0])
w = [0,0];
% Initialize the bias
b = 0;
% Set the learning rate
p = 0.5;

% Initialize the loop flag
flag = 0;

while flag==0
    d = [0,0,0];
    for i = 1:3
        % Transpose the point to form the inner product
        o = w * x(i,:)' + b;
        % hardlims-style output: treat 0 as the positive side
        if o >= 0
            o = 1;
        else
            o = -1;
        end
        % Decide whether to update the weights and bias
        if t(i) * (w * x(i,:)' + b) <= 0
            w = w + p*(t(i)-o) * x(i,:);
            b = b + p*(t(i)-o);
        else
            % The point is correctly classified
            d(i) = 1;
        end
    end
    if isequal(d,[1,1,1])
        flag = 1;
    end
end

x = x';
t(t == -1) = 0;
figure
plotpv(x,t)
plotpc(w,b)
xlabel('x')
ylabel('y')

Note that this code never explicitly forms the error e (an error matrix or vector): e = t - y, where t is the target vector and y is the output vector.
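Steps ① through ④ can also be sketched in plain Python (my own illustration, not from the video; names like `sign_pm` and `train` are made up), this time keeping the targets in {1, -1}:

```python
# Perceptron steps (1)-(4) on the same three points with targets in {1, -1}
# (pure-Python sketch; eta is the learning rate from the notes).

def sign_pm(n):
    """hardlims-style output: 1 if n >= 0, else -1."""
    return 1 if n >= 0 else -1

def train(points, targets, eta=0.5, max_epochs=100):
    w = [0.0] * len(points[0])
    b = 0.0
    for _ in range(max_epochs):
        all_correct = True
        for x, t in zip(points, targets):
            s = sum(wi * xi for wi, xi in zip(w, x)) + b
            if t * s <= 0:                 # misclassified (or on the line)
                o = sign_pm(s)
                w = [wi + eta * (t - o) * xi for wi, xi in zip(w, x)]
                b += eta * (t - o)
                all_correct = False
        if all_correct:
            break
    return w, b

X = [(3, 3), (4, 3), (1, 1)]
T = [1, 1, -1]
w, b = train(X, T)
# Every point should now satisfy t * (w . x + b) > 0
print(all(t * (sum(wi * xi for wi, xi in zip(w, x)) + b) > 0
          for x, t in zip(X, T)))
```

The separation check at the end mirrors step ④: training stops only once no point triggers the correction in step ③.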

MATLAB 2019b Example Code

%% Normalized Perceptron Rule
% A 2-input hard limit neuron is trained to classify 5 input vectors into two 
% categories. Despite the fact that one input vector is much bigger than the others, 
% training with LEARNPN is quick.
% 
% Each of the five column vectors in X defines a 2-element input vector, 
% and a row vector T defines the vectors' target categories. Plot these vectors 
% with PLOTPV.

X = [ -0.5 -0.5 +0.3 -0.1 -40; ...
      -0.5 +0.5 -0.5 +1.0 50];
T = [1 1 0 0 1];
plotpv(X,T);
%% 
% Note that 4 input vectors have much smaller magnitudes than the fifth 
% vector in the upper left of the plot. The perceptron must properly classify 
% the 5 input vectors in X into the two categories defined by T. 
% 
% PERCEPTRON creates a new network with the LEARNPN learning rule, which is less 
% sensitive to large variations in input vector size than LEARNP (the default).
% 
% The network is then configured with the input and target data which results 
% in initial values for its weights and bias. (Configuration is normally not necessary, 
% as it is done automatically by ADAPT and TRAIN.)
%%
net = perceptron('hardlim','learnpn');
net = configure(net,X,T);
%% 
% Add the neuron's initial attempt at classification to the plot.
% 
% The initial weights are set to zero, so any input gives the same output 
% and the classification line does not even appear on the plot. Fear not... we 
% are going to train it!
%%
hold on
linehandle = plotpc(net.IW{1},net.b{1});
%% 
% ADAPT returns a new network object that performs as a better classifier, 
% the network output, and the error. This loop allows the network to adapt, plots 
% the classification line, and continues until the error is zero.
%%
E = 1;
while (sse(E))
   [net,Y,E] = adapt(net,X,T);
   linehandle = plotpc(net.IW{1},net.b{1},linehandle);
   drawnow;
end
%% 
% Note that training with LEARNPN took only 3 epochs, while solving the same 
% problem with LEARNP required 32 epochs. Thus, LEARNPN does a much better job 
% than LEARNP when there are large variations in input vector size.
% 
% Now SIM can be used to classify any other input vector. For example, classify 
% an input vector of [0.7; 1.2].
% 
% A plot of this new point with the original training set shows how the network 
% performs. To distinguish it from the training set, color it red.
%%
x = [0.7; 1.2];
y = net(x);
plotpv(x,y);
circle = findobj(gca,'type','line');
circle.Color = 'red';
%% 
% Turn on "hold" so the previous plot is not erased. Add the training set 
% and the classification line to the plot.
%%
hold on;
plotpv(X,T);
plotpc(net.IW{1},net.b{1});
hold off;
%% 
% Finally, zoom into the area of interest.
% 
% The perceptron correctly classified our new point (in red) as category 
% "zero" (represented by a circle) and not a "one" (represented by a plus). The 
% perceptron learns properly in much shorter time in spite of the outlier (compare 
% with the "Outlier Input Vectors" example).
%%
axis([-2 2 -2 2]);
%% 
% Copyright 1992-2014 The MathWorks, Inc.
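To see why the normalized rule tolerates the (-40, 50) outlier, here is a plain-Python sketch. The normalization pn = p / sqrt(1 + ||p||^2) is my recollection of the learnpn documentation, so treat the exact form as an assumption:

```python
# Sketch of why a learnpn-style rule tolerates outliers: the update uses a
# normalized input pn = p / sqrt(1 + |p|^2) (assumed form of the rule), so a
# huge vector like (-40, 50) cannot dominate the weight change the way it
# does under plain learnp, where dW = e * p'.
import math

def normalized_update(p, e, lr=1.0):
    """learnpn-style weight change dW = lr * e * pn."""
    scale = math.sqrt(1 + sum(v * v for v in p))
    return [lr * e * v / scale for v in p]

small = normalized_update([0.3, -0.5], 1)
big = normalized_update([-40.0, 50.0], 1)
# The outlier's update has magnitude just under 1, not magnitude ~64
print(math.hypot(*small), math.hypot(*big))
```

Since ||pn|| = ||p|| / sqrt(1 + ||p||^2) < 1 for any p, every update step is bounded regardless of input magnitude, which is exactly the property the demo's epoch counts illustrate.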
