OpenCV 3.0 neural network class: MLP (Multi-Layer Perceptron) [cv::ml::ANN_MLP Class Reference]

The inheritance chain is:

cv::ml::ANN_MLP → cv::ml::StatModel → cv::Algorithm

Public Enumeration Types

enum ActivationFunctions { 
  IDENTITY = 0, 
  SIGMOID_SYM = 1, 
  GAUSSIAN = 2 
}
enum TrainFlags { 
  UPDATE_WEIGHTS = 1, 
  NO_INPUT_SCALE = 2, 
  NO_OUTPUT_SCALE = 4 
}
enum TrainingMethods { 
  BACKPROP = 0, 
  RPROP = 1 
}

Public Enumeration Types inherited from cv::ml::StatModel

enum Flags { 
  UPDATE_MODEL = 1, 
  RAW_OUTPUT = 1, 
  COMPRESSED_INPUT = 2, 
  PREPROCESSED_INPUT = 4 
}

Public Member Functions

Getters

virtual double 	getBackpropMomentumScale () const =0

virtual double 	getBackpropWeightScale () const =0

virtual cv::Mat getLayerSizes () const =0

virtual double 	getRpropDW0 () const =0

virtual double 	getRpropDWMax () const =0

virtual double 	getRpropDWMin () const =0

virtual double 	getRpropDWMinus () const =0

virtual double 	getRpropDWPlus () const =0

virtual TermCriteria 	getTermCriteria () const =0  Returns the termination criteria of the training algorithm.

virtual int 	getTrainMethod () const =0

virtual Mat 	getWeights (int layerIdx) const =0

Setters


virtual void 	setActivationFunction (int type, double param1=0, double param2=0)=0

virtual void 	setBackpropMomentumScale (double val)=0

virtual void 	setBackpropWeightScale (double val)=0

virtual void 	setLayerSizes (InputArray _layer_sizes)=0

virtual void 	setRpropDW0 (double val)=0

virtual void 	setRpropDWMax (double val)=0

virtual void 	setRpropDWMin (double val)=0

virtual void 	setRpropDWMinus (double val)=0

virtual void 	setRpropDWPlus (double val)=0

virtual void 	setTermCriteria (TermCriteria val)=0

virtual void 	setTrainMethod (int method, double param1=0, double param2=0)=0

Public Member Functions inherited from cv::ml::StatModel

virtual float 	calcError (const Ptr< TrainData > &data, bool test, OutputArray resp) const
 	Computes the error on the training or test dataset.
 
virtual bool 	empty () const
 	Returns true if the Algorithm is empty (e.g. at the very beginning or after an unsuccessful read).
 
virtual int 	getVarCount () const =0
 	Returns the number of variables in training samples.
 
virtual bool 	isClassifier () const =0
 	Returns true if the model is a classifier.
 
virtual bool 	isTrained () const =0
 	Returns true if the model is trained.
 
virtual float 	predict (InputArray samples, OutputArray results=noArray(), int flags=0) const =0
 	Predicts response(s) for the provided sample(s).
 
virtual bool 	train (const Ptr< TrainData > &trainData, int flags=0)
 	Trains the statistical model.
 
virtual bool 	train (InputArray samples, int layout, InputArray responses)
 	Trains the statistical model.

Public Member Functions inherited from cv::Algorithm

 	Algorithm ()
 
virtual 	~Algorithm ()
 
virtual void 	clear ()
 	Clears the algorithm state.
 
virtual String 	getDefaultName () const
 
virtual void 	read (const FileNode &fn)
 	Reads algorithm parameters from a file storage.
 
virtual void 	save (const String &filename) const
 
virtual void 	write (FileStorage &fs) const
 	Stores algorithm parameters in a file storage.

Static Public Member Functions

static Ptr< ANN_MLP >  create ()
  Creates an empty model.

Static Public Member Functions inherited from cv::ml::StatModel

template<typename _Tp>
static Ptr< _Tp >  train (const Ptr< TrainData > &data, int flags=0)
  Creates and trains a model with default parameters.

Static Public Member Functions inherited from cv::Algorithm


template<typename _Tp>
static Ptr< _Tp >  load (const String &filename, const String &objname=String())
  Loads an algorithm from a file.
 
template<typename _Tp>
static Ptr< _Tp >  loadFromString (const String &strModel, const String &objname=String())
  Loads an algorithm from a String.
 
template<typename _Tp>
static Ptr< _Tp >  read (const FileNode &fn)
  Reads an algorithm from the file node.

Detailed Description

Artificial Neural Networks: Multi-Layer Perceptrons. Unlike many other models in ML, which are constructed and trained in a single step, in the MLP model these steps are separated. First, a network with the specified topology is created, using the non-default constructor or the method ANN_MLP::create; all the weights are set to zeros. Then, the network is trained with a set of input and output vectors. The training procedure can be repeated more than once; that is, the weights can be adjusted based on new training data. Additional flags for StatModel::train are available: see ANN_MLP::TrainFlags.
 See also: Neural Networks

Member Enumeration Documentation

enum cv::ml::ANN_MLP::ActivationFunctions

Possible activation functions

Enumerator
IDENTITY 

Identity function: f(x)=x

SIGMOID_SYM 

Symmetrical sigmoid: f(x) = β(1 − e^(−αx)) / (1 + e^(−αx))

Note
If you are using the default sigmoid activation function with the default parameter values fparam1=0 and fparam2=0 then the function used is y = 1.7159*tanh(2/3 * x), so the output will range from [-1.7159, 1.7159], instead of [0,1].
GAUSSIAN 

Gaussian function: f(x) = βe^(−αx²)


enum cv::ml::ANN_MLP::TrainFlags

Train options

Enumerator
UPDATE_WEIGHTS 

Update the network weights, rather than compute them from scratch. In the latter case the weights are initialized using the Nguyen-Widrow algorithm.

NO_INPUT_SCALE 

Do not normalize the input vectors. If this flag is not set, the training algorithm normalizes each input feature independently, shifting its mean value to 0 and making the standard deviation equal to 1. If the network is expected to be updated frequently, the new training data could be very different from the original data. In that case, you should take care of proper normalization yourself.

NO_OUTPUT_SCALE 

Do not normalize the output vectors. If the flag is not set, the training algorithm normalizes each output feature independently, by transforming it to the certain range depending on the used activation function.


enum cv::ml::ANN_MLP::TrainingMethods

Available training methods

Enumerator
BACKPROP 

The back-propagation algorithm.

RPROP 

The RPROP algorithm. See [101] for details.


Member Function Documentation

I'm worn out and can't keep translating. I'll pick this up again when I have a longer stretch of free time.

