Extreme Learning Machines and Support Vector Machines: Extreme Learning Machines I


Around 2005, a novel machine learning approach was introduced by Guang-Bin Huang and a team of researchers at Nanyang Technological University, Singapore.


This newly proposed learning algorithm tends to reach the smallest training error, obtain the smallest norm of weights, and achieve the best generalization performance, while running extremely fast. To differentiate it from other popular learning algorithms for single-hidden-layer feedforward networks (SLFNs), it is called the Extreme Learning Machine (ELM).

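To make the idea concrete, here is a minimal sketch of an ELM in NumPy. It is an illustration under stated assumptions, not the authors' reference implementation: the function names (`elm_fit`, `elm_predict`), the sigmoid activation, the hidden-layer size, and the toy sine-regression task are all choices made here for demonstration. The defining trait is visible in the code: the hidden-layer weights are random and never tuned, and only the output weights are solved for in closed form via the Moore-Penrose pseudoinverse.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    """Train a sketch ELM: hidden weights are drawn at random and frozen;
    only the output weights beta are computed, in one least-squares step."""
    n_features = X.shape[1]
    W = rng.normal(size=(n_features, n_hidden))  # random input weights (never tuned)
    b = rng.normal(size=n_hidden)                # random hidden biases (never tuned)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))       # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                 # minimum-norm least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy regression task: learn y = sin(x) on [0, 2*pi]
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_fit(X, y)
mse = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
```

Because training is a single pseudoinverse rather than an iterative loop, this captures why ELM training is fast, and the minimum-norm property of `pinv` connects to the "smallest norm of weights" claim above.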

This method mainly addresses the problem that neural networks train far more slowly than required, the main reason being that all the parameters of the network are tuned iteratively by the learning algorithm. These slow gradient-based learning algorithms are nevertheless extensively used to train neural networks.


Before going into how ELM works and why it performs so well, let's see how gradient-based neural networks are trained.


Demonstration of a gradient-based neural network
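As a counterpart to the closed-form ELM, here is a hedged, minimal sketch of gradient-based training for the same kind of single-hidden-layer network. All specifics (one sigmoid hidden layer, mean-squared-error loss, full-batch updates, the learning rate, and the sine toy task) are assumptions made for this demo. The point to notice is structural: every parameter (`W1`, `b1`, `W2`, `b2`) is updated iteratively, thousands of times, which is exactly the slowness the text describes.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: y = sin(x) on [0, 2*pi]
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X)

n_hidden = 20
W1 = rng.normal(scale=0.5, size=(1, n_hidden))  # input-to-hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))  # hidden-to-output weights
b2 = np.zeros(1)

lr = 0.3
for epoch in range(3000):          # every parameter is tuned, iteration by iteration
    H = sigmoid(X @ W1 + b1)       # forward pass
    y_hat = H @ W2 + b2
    err = y_hat - y
    # backward pass: gradients of the MSE with respect to all parameters
    dW2 = H.T @ err / len(X)
    db2 = err.mean(axis=0)
    dH = (err @ W2.T) * H * (1 - H)
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)
    # gradient-descent updates
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

mse = float(np.mean(err ** 2))
```

Contrast this loop of thousands of forward and backward passes with the single pseudoinverse step an ELM needs: that gap in training cost is the motivation for the method introduced above.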
