Learning Vector Quantization (LVQ)

I. Self-Organizing Competitive Neural Network: net=newc([0 1;0 1],2)
1. Network structure
A single-layer network of neurons; the input nodes are fully connected to the output nodes.
The competition takes place among the neurons: when a neuron wins, it outputs 1; otherwise it outputs 0 (see the sketch below).
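To see this winner-take-all output concretely, here is a minimal sketch using the toolbox's compet() transfer function (the net-input values are invented for illustration):

  n = [0.2; 0.6; 0.1];   % net inputs of three competing neurons
  a = compet(n)          % returns [0; 1; 0]: only the strongest neuron outputs 1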
2. Training process
Weight adjustment: the Kohonen learning rule, dw=learnk(w,p,[],[],a,[],[],[],[],[],lp,[]);
Only the winning neuron's weights are adjusted, and they move toward the input vector. As a result, the winner becomes even more likely to win when similar vectors (those covered by its bias b) appear again, and the network ends up classifying the input vectors.
Bias adjustment: the bias (conscience) learning rule, [dB,LS]=learncon(B,P,Z,N,A,T,E,gW,gA,D,LP,LS)
It makes the biases of frequently active neurons smaller and smaller, so that rarely active neurons come to win more often.
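Putting the pieces together, a minimal end-to-end sketch using the classic toolbox API (the sample points are invented and form two loose clusters):

  P = [0.1 0.8 0.1 0.9 0.2  0.7;      % six 2-D inputs in [0,1]
       0.2 0.9 0.1 0.8 0.15 0.95];
  net = newc([0 1; 0 1], 2);          % two competitive neurons; learnk/learncon by default
  net.trainParam.epochs = 50;
  net = train(net, P);
  a = sim(net, P);                    % one 1 per column: the winning neuron
  classes = vec2ind(a)                % winner index for each input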

II. Self-Organizing Feature Map (SOFM) Neural Network
1. Network structure
Structurally, it mimics the two-dimensional lattice arrangement of neurons in the cerebral cortex.

The input layer and the competitive layer form a single-layer network:
Input layer: a one-dimensional array of n neurons.
Competitive layer: a two-dimensional topology of neurons, possibly with local connections between them.

Topologies: rectangular grid  gridtop()
            hexagonal grid    hextop()
            random layout     randtop()

Neuron distances: Euclidean distance  dist();  box distance  boxdist();
                  link distance  linkdist();  Manhattan distance  mandist()  (sketched below)
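A quick sketch of these topology and distance helpers (standard toolbox functions; the 3x3 grid is just an example size):

  pos = gridtop(3, 3);      % positions of 9 neurons on a 3x3 rectangular grid
  posh = hextop(3, 3);      % the same 9 neurons on a hexagonal layout
  d_euc  = dist(pos);       % 9x9 matrix of Euclidean distances between neurons
  d_box  = boxdist(pos);    % box distances
  d_link = linkdist(pos);   % link distances (connection steps between neurons)
  d_man  = mandist(pos);    % Manhattan distances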

  2. Training process
    Weights are adjusted for the winning node and for every node within radius k of it, with k shrinking over time until it contains only the winner itself. In this way the winning node comes to respond maximally to patterns of its class, while neighboring nodes respond more weakly.
    Weight adjustment: learnsom():
    Ordering phase: the learning rate falls from its initial value down to the tuning-phase learning rate; the neighborhood size shrinks from the maximum neuron distance down to 1.
    Tuning phase: the learning rate decays slowly toward 0; the neighborhood size stays fixed at 1. A newsom() sketch follows.
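A minimal SOFM sketch with the classic newsom() API (random data invented for illustration; training follows the ordering/tuning schedule described above):

  P = rand(2, 100);                   % 100 random 2-D input vectors
  net = newsom([0 1; 0 1], [3 3]);    % 3x3 map; hextop/linkdist by default
  net.trainParam.epochs = 100;
  net = train(net, P);
  plotsom(net.IW{1,1}, net.layers{1}.distances)   % visualize the trained map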
III. Learning Vector Quantization (LVQ) Neural Network
  1. Network structure
    A competitive layer (the hidden layer) followed by a linear layer.
    One desired class in the linear layer corresponds to several subclasses in the competitive layer.
  2. Learning rule
    The competitive layer learns on its own to classify the input vectors, and the resulting classification depends only on the distances between input vectors: if two input vectors are very close to each other, the competitive layer puts them in the same class. A newlvq() sketch follows.
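A minimal LVQ sketch (essentially the classic newlvq() usage pattern; the data are invented for illustration):

  P = [-3 -2 -2  0  0  0  0  2  2  3;
        0  1 -1  2  1 -1 -2  1 -1  0];
  Tc = [1 1 1 2 2 2 2 1 1 1];            % desired class of each input
  T  = ind2vec(Tc);                      % classes as target vectors
  net = newlvq(minmax(P), 4, [0.6 0.4]); % 4 competitive subclasses; class fractions 60%/40%
  net.trainParam.epochs = 150;
  net = train(net, P, T);
  Yc = vec2ind(sim(net, P))              % predicted classes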

A detailed introduction is available at: http://www.doc88.com/p-8495503025413.html

function [dw,ls] = learnlv3(w,p,z,n,a,t,e,gW,gA,d,lp,ls)
%LEARNLV3 LVQ3 weight learning function.
%
%   Syntax
%   
%     [dW,LS] = learnlv3(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
%     info = learnlv3(code)
%
%   Description
%
%     LEARNLV3 is the LVQ3 weight learning function.
%
%     LEARNLV3(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
%       W  - SxR weight matrix (or Sx1 bias vector).
%       P  - RxQ input vectors (or ones(1,Q)).
%       Z  - SxQ weighted input vectors.
%       N  - SxQ net input vectors.
%       A  - SxQ output vectors.
%       T  - SxQ layer target vectors.
%       E  - SxQ layer error vectors.
%       gW - SxR weight gradient with respect to performance.
%       gA - SxQ output gradient with respect to performance.
%       D  - SxS neuron distances.
%       LP - Learning parameters: LP.lr and LP.window.
%       LS - Learning state, initially should be = [].
%     and returns,
%       dW - SxR weight (or bias) change matrix.
%       LS - New learning state.
%
%     Learning occurs according to LEARNLV3's learning parameters,
%     shown here with their default values.
%       LP.lr     - 0.01 - Learning rate
%       LP.window - 0.25 - Window size
%
%     LEARNLV3(CODE) returns useful information for each CODE string:
%       'pnames'    - Returns names of learning parameters.
%       'pdefaults' - Returns default learning parameters.
%       'needg'     - Returns 1 if this function uses gW or gA.
%
%   Examples
%
%     Here we define a sample input P, weight matrix W, net input N,
%     output A, and output gradient gA for a layer with a 2-element
%     input and 3 neurons.  We also define the learning rate LR and
%     window size.
%
%       p = rand(2,1);
%       w = rand(3,2);
%       n = negdist(w,p);
%       a = compet(n);
%       gA = [-1;1;1];
%       lp.lr = 0.5;
%       lp.window = 0.25;
%
%     Since LEARNLV3 only needs these values to calculate a weight
%     change (see Algorithm below), we will use them to do so.
%
%       dW = learnlv3(w,p,[],n,a,[],[],[],gA,[],lp,[])
%
%   Network Use
%
%     You can create a standard network that uses LEARNLV3 with NEWLVQ.
%
%     To prepare the weights of layer i of a custom network
%     to learn with LEARNLV3:
%     1) Set NET.trainFcn to 'trainwb1'.
%        (NET.trainParam will automatically become TRAINWB1's default parameters.)
%     2) Set NET.adaptFcn to 'adaptwb'.
%        (NET.adaptParam will automatically become ADAPTWB's default parameters.)
%     3) Set each NET.inputWeights{i,j}.learnFcn to 'learnlv3'.
%        Set each NET.layerWeights{i,j}.learnFcn to 'learnlv3'.
%        (Each weight learning parameter property will automatically
%        be set to LEARNLV3's default parameters.)
%
%     To train the network (or enable it to adapt):
%     1) Set NET.trainParam (or NET.adaptParam) properties as desired.
%     2) Call TRAIN (or ADAPT).
%
%   Algorithm
%
%     LEARNLV3 calculates the weight change dW for a given neuron from
%     the neuron's input P, net input N, output gradient gA, and the
%     learning parameters LR and WINDOW, according to the LVQ3 rule.
%
%     Let k1 be the winning neuron and k2 the runner-up, at distances
%     d1 and d2 from the input.  The input falls into the window if
%
%       d1/d2 > (1-window)/(1+window)
%
%     If so, and k1 and k2 belong to different classes
%     (gA(k1,q) ~= gA(k2,q)), the incorrectly classifying neuron i is
%     pushed away from the input while the correct neuron j is pulled
%     toward it:
%
%       dw(i,:) = -lr*(p'-w(i,:))
%       dw(j,:) = +lr*(p'-w(j,:))
%
%     If both neurons belong to the correct class, the winner is moved
%     toward the input with a reduced step (the LVQ3 epsilon update).
%
%   See also LEARNLV1, ADAPTWB, TRAINWB, ADAPT, TRAIN.

% Mark Beale, 11-31-97
% Copyright (c) 1992-1998 by The MathWorks, Inc.
% $Revision: 1.1.1.1 $

% FUNCTION INFO
% =============
if ischar(w)
  switch lower(w)
  case 'name'
      dw = 'Learning Vector Quantization 3';
  case 'pnames'
    dw = {'lr';'window'};
  case 'pdefaults'
    lp.lr = 0.01;
    lp.window = 0.25;
    dw = lp;
  case 'needg'
    dw = 1;
  otherwise
    error('NNET:Arguments','Unrecognized code.')
  end
  return
end


% CALCULATION
% ===========

[S,R] = size(w);
Q = size(p,2);
pt = p';
dw = zeros(S,R);
% For each q...
for q=1:Q

  % Find closest neuron k1 (the winning neuron)
  nq = n(:,q);
  k1 = find(nq == max(nq));
  k1 = k1(1);

  % Find next closest neuron k2 (the runner-up)
  nq(k1) = -inf;
  k2 = find(nq == max(nq));
  k2 = k2(1);


  % and if the input falls into the window
  % (d1 <= d2, so min(d1/d2,d2/d1) = d1/d2)...
  d1 = abs(n(k1,q)); % shorter distance (to the winner)
  d2 = abs(n(k2,q)); % greater distance (to the runner-up)

  if d1/d2 > ((1-lp.window)/(1+lp.window))

      % then move incorrect neuron away from input,
      % and the correct neuron towards the input
      ptq = pt(q,:);
      if gA(k1,q) ~= gA(k2,q)
          % indicate the incorrect neuron with i, the other with j
          if gA(k1,q) ~= 0
              i = k1;
              j = k2;
          else
              i = k2;
              j = k1;
          end
          dw(i,:) = dw(i,:) - lp.lr*(ptq - w(i,:));
          dw(j,:) = dw(j,:) + lp.lr*(ptq - w(j,:));
      else
          % both neurons carry the same class signal; apply the LVQ3
          % epsilon update only when that class is the correct one
          if gA(k1,q) == 0
              dw(k1,:) = dw(k1,:) + 0.11*lp.window*(ptq-w(k1,:));
           %  dw(k2,:) = dw(k2,:) + 0.11*lp.window*(ptq-w(k2,:));
          end
      end
  end
end

The code above is reproduced from: http://blog.csdn.net/cxf7394373/article/details/6400372
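To exercise the function directly, here is a call mirroring the Examples section of its header (negdist and compet are standard toolbox primitives; the gradient gA is invented for illustration):

  p = rand(2,1);  w = rand(3,2);
  n = negdist(w, p);                 % negative distances: the maximum marks the closest neuron
  a = compet(n);                     % winner-take-all output
  gA = [-1; 1; 1];                   % assumed class-error gradient
  lp.lr = 0.5;  lp.window = 0.25;
  [dW, LS] = learnlv3(w, p, [], n, a, [], [], [], gA, [], lp, [])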
