Particle Swarm Optimization (PSO): An Improved Algorithm

Particle Swarm Optimization (PSO) is a heuristic optimization method inspired by the swarming behavior of the birds and insects we see in nature. The main idea is that each particle follows the leading member (or leading group) of the swarm, i.e., the particles closest to the goal of the team (most probably the food). I will not give detailed information about PSO here; anyone can find plenty of information in the open literature, some of which is listed in the references at the end of this post. The only thing I should add is that the PSO given here is an improved one, with additional parameters whose coefficients were found with a Genetic Algorithm. This version outperforms the standard PSO on almost all of the universal test functions given here. More details follow in the upcoming sections.


The very first thing to do is to import the necessary modules into our Python environment. The pygame module is used to visualize what is happening through the iterations; it can be turned off if not needed.


Import the Necessary Modules First
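The original import block lives in the repo; a minimal sketch might look like the following. The exact list is an assumption, and pygame is guarded so the sketch still runs when visualization is off:

```python
import math      # the test functions below use sqrt, sin, cos
import random    # random particle placement and migration draws

# pygame is only needed when the "display" option is turned on, so the
# import is guarded here (an assumption; the repo may import it directly)
try:
    import pygame
except ImportError:
    pygame = None
```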

Swarm Class

The swarm class given below consists of the subroutines needed for PSO.


The init function is the main body of our class, where we define the basic features of the swarming particle objects. We pass in the function to be optimized, along with the obligatory lower and upper bounds. Some of the universal test functions ("egg" and "griewank") defined in the "Universal Test Functions" section below are predefined inside the init function together with their lower and upper boundaries; in real-world cases, however, these arguments should be supplied by the user along with the function itself.


The pygame screen, where the user can visualize what is happening inside the swarm, is in "off" mode by default but can be turned "on" by setting the "display" argument to True.


Migration is on by default but can be disabled by setting the "migrationexists" argument to False. The migration probability is set to 0.15 by default and can be changed if necessary. The default number of particles in the swarm is 50, which can also be altered.


The functions "coefficients", "evaluate_fitness", "distance", "migration" and "optimize" are where the main PSO algorithm runs; the user does not need to deal with them. The values given in the "coefficients" function were obtained from MoGenA (Multi Objective Genetic Algorithm), which I published in my GitHub repo a few months ago and will also write about on Medium in the near future. These values should be kept unchanged for the best performance, which is why a user calling the swarm class will not see them in normal usage. Migration happens with a probability of 15%, in which the worst 20% of the swarm is replaced with new particles. Some of these new particles are placed in the domain randomly, while the rest are placed according to a predefined algorithm that promotes the best particles. This migration algorithm increases the rate of finding the global optimum by decreasing the chance of getting stuck at a local minimum.

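As a rough, standalone sketch of that migration step (the function name, the split between random and best-promoting placement, and the "near the best" rule are my assumptions, not the repo's actual code):

```python
import random

def migrate(positions, fitnesses, best_position, bounds,
            probability=0.15, worst_fraction=0.20):
    """Replace the worst 20% of the swarm with probability 15% (minimization).

    Half of the replacements go to random spots in the domain; the other
    half land near the current best particle. The 'near the best' rule is a
    hypothetical stand-in for the repo's best-promoting placement algorithm.
    """
    if random.random() > probability:
        return False  # no migration at this iteration
    n = len(positions)
    by_worst = sorted(range(n), key=lambda i: fitnesses[i], reverse=True)
    lo, hi = bounds
    for k, i in enumerate(by_worst[: int(n * worst_fraction)]):
        if k % 2 == 0:
            # random placement anywhere in the domain
            positions[i] = [random.uniform(lo, hi) for _ in best_position]
        else:
            # placement in a small neighborhood of the best particle, clamped
            positions[i] = [max(lo, min(hi, b + random.uniform(-0.05, 0.05) * (hi - lo)))
                            for b in best_position]
    return True
```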

The iterations are run by calling the "update" function, where the default number of iterations is 50.


Swarm Class
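Since the full class lives in the repo, here is only a minimal, self-contained sketch of its structure. The argument names (numberofparticles, display, migrationexists) follow the description above but are assumptions about the actual signatures, and the update step is plain global-best PSO rather than the improved MoGenA-tuned coefficient scheme:

```python
import math
import random

class swarm:
    """Reduced sketch of the swarm class described in the text."""

    def __init__(self, function="egg", lowerbounds=None, upperbounds=None,
                 numberofparticles=50, display=False,
                 migrationexists=True, migrationprobability=0.15):
        if function == "egg":
            # Eggholder is predefined together with its obligatory bounds
            self.f = lambda x, y: (-(y + 47) * math.sin(math.sqrt(abs(x / 2 + y + 47)))
                                   - x * math.sin(math.sqrt(abs(x - (y + 47)))))
            self.lower, self.upper = [-512, -512], [512, 512]
        else:
            # user-supplied callable: bounds must be given explicitly
            self.f = function
            self.lower, self.upper = lowerbounds, upperbounds
        self.display = display
        self.migrationexists = migrationexists
        self.migrationprobability = migrationprobability
        # random initial positions inside the domain, zero initial velocities
        self.positions = [[random.uniform(l, u) for l, u in zip(self.lower, self.upper)]
                          for _ in range(numberofparticles)]
        self.velocities = [[0.0] * len(self.lower) for _ in range(numberofparticles)]

    def update(self, iterations=50):
        """Run the iterations; return (best_position_ever, best_value_ever)."""
        best = list(min(self.positions, key=lambda p: self.f(*p)))
        for _ in range(iterations):
            for i, p in enumerate(self.positions):
                v = self.velocities[i]
                for d in range(len(p)):
                    # simple inertia + attraction-to-best step, clamped to bounds
                    v[d] = 0.7 * v[d] + 1.5 * random.random() * (best[d] - p[d])
                    p[d] = min(self.upper[d], max(self.lower[d], p[d] + v[d]))
            cand = min(self.positions, key=lambda q: self.f(*q))
            if self.f(*cand) < self.f(*best):
                best = list(cand)
        return best, self.f(*best)
```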

Universal Test Functions

Universal test functions are used to evaluate and compare the performance of optimization methods. They are generally very tough functions to optimize, with many local minima that lie very close to the global minimum. The most commonly used ones are given at the address here, and some of them are given in the next section below.


Some of the Universal Test Functions
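For reference, the Griewank function, the other of the two functions predefined in the swarm class, can be written as follows (this is the standard textbook form, not copied from the repo):

```python
import math

def griewank(xs):
    """Griewank test function: global minimum f(0, ..., 0) = 0,
    surrounded by many regularly spaced local minima."""
    s = sum(x * x for x in xs) / 4000.0
    p = 1.0
    for i, x in enumerate(xs, start=1):
        p *= math.cos(x / math.sqrt(i))
    return 1.0 + s - p
```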

Generate Particles

The first thing to do is to generate the swarming particles randomly inside the boundaries of the function domain. The example below uses the "egg" function predefined inside the swarm class, with its lower boundary [-512, -512] and upper boundary [512, 512]. The number of particles is set to 100, which means the swarming group will contain 100 particles.


The default display mode is off, but since we want to visualize what is happening, the display mode is changed to True. There will be migration in the swarm, and its probability parameter is kept at the default value.


The “egg” function is given as follows.


The global optimum point is at (512,404.2319) with a value of -959.6407.


Eggholder Function
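In code, the Eggholder function and its known optimum look like this (standard benchmark definition):

```python
import math

def egg(x, y):
    """Eggholder function, defined on [-512, 512] x [-512, 512].
    Global minimum f(512, 404.2319) = -959.6407."""
    return (-(y + 47) * math.sin(math.sqrt(abs(x / 2 + y + 47)))
            - x * math.sin(math.sqrt(abs(x - (y + 47)))))
```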
Generating a Swarming Group with 100 Particles
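Placing the initial particles uniformly inside the bounds can be sketched as follows (a standalone helper, not the repo's exact call):

```python
import random

def generate_particles(n, lower, upper):
    """Place n particles uniformly at random inside the rectangular domain."""
    return [[random.uniform(l, u) for l, u in zip(lower, upper)] for _ in range(n)]

# 100 particles inside the Eggholder domain
particles = generate_particles(100, [-512, -512], [512, 512])
```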
Iterate

The results shown below are the last 10 of a total of 310 iterations. What we see during an iteration is a brief summary giving the best value and position found so far, as well as those found at that particular iteration.


iteration no : 301
migration : False
best_particle : 65
best_position : {'0': 484.55488893931, '1': 435.47531421515777}
best_value : -951.633848665597
best_particle_ever : 55
best_position_ever : {'0': 512, '1': 404.3716709393827}
best_value_ever : -959.6184029391127
best_value_obtained at iteration no: 101
--------------------------------------------------------
iteration no : 302
migration : False
best_particle : 86
best_position : {'0': 512, '1': 403.88478570058084}
best_value : -959.5040834781366
best_particle_ever : 55
best_position_ever : {'0': 512, '1': 404.3716709393827}
best_value_ever : -959.6184029391127
best_value_obtained at iteration no: 101
--------------------------------------------------------
iteration no : 303
migration : True
best_particle : 55
best_position : {'0': 512, '1': 404.3716709393827}
best_value : -955.6986659036597
best_particle_ever : 55
best_position_ever : {'0': 512, '1': 404.3716709393827}
best_value_ever : -959.6184029391127
best_value_obtained at iteration no: 101
--------------------------------------------------------
iteration no : 304
migration : False
best_particle : 12
best_position : {'0': 439.7348848078938, '1': 451.8887247292044}
best_value : -893.5687522656307
best_particle_ever : 55
best_position_ever : {'0': 512, '1': 404.3716709393827}
best_value_ever : -959.6184029391127
best_value_obtained at iteration no: 101
--------------------------------------------------------
iteration no : 305
migration : False
best_particle : 17
best_position : {'0': 512.0, '1': 405.4521743065799}
best_value : -957.9344736771322
best_particle_ever : 55
best_position_ever : {'0': 512, '1': 404.3716709393827}
best_value_ever : -959.6184029391127
best_value_obtained at iteration no: 101
--------------------------------------------------------
iteration no : 306
migration : True
best_particle : 78
best_position : {'0': 512, '1': 403.6452268849142}
best_value : -936.2364185513111
best_particle_ever : 55
best_position_ever : {'0': 512, '1': 404.3716709393827}
best_value_ever : -959.6184029391127
best_value_obtained at iteration no: 101
--------------------------------------------------------
iteration no : 307
migration : False
best_particle : 32
best_position : {'0': 475.53568091537545, '1': 426.0044490589951}
best_value : -937.0178639753331
best_particle_ever : 55
best_position_ever : {'0': 512, '1': 404.3716709393827}
best_value_ever : -959.6184029391127
best_value_obtained at iteration no: 101
--------------------------------------------------------
iteration no : 308
migration : True
best_particle : 29
best_position : {'0': 512, '1': 401.8206558721925}
best_value : -951.5497677541362
best_particle_ever : 55
best_position_ever : {'0': 512, '1': 404.3716709393827}
best_value_ever : -959.6184029391127
best_value_obtained at iteration no: 101
--------------------------------------------------------
iteration no : 309
migration : True
best_particle : 73
best_position : {'0': 512, '1': 403.94636462938774}
best_value : -926.525509762136
best_particle_ever : 55
best_position_ever : {'0': 512, '1': 404.3716709393827}
best_value_ever : -959.6184029391127
best_value_obtained at iteration no: 101
--------------------------------------------------------
iteration no : 310
migration : False
best_particle : 3
best_position : {'0': 512, '1': 404.9632235948374}
best_value : -957.9416518266312
best_particle_ever : 55
best_position_ever : {'0': 512, '1': 404.3716709393827}
best_value_ever : -959.6184029391127
best_value_obtained at iteration no: 101
--------------------------------------------------------

The plot below shows the global best value obtained over the total iterations. As can be seen, the convergence rate is quite fast, which means the global optimum was found at a very early stage of the iterations.


Optimization Plot
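A plot like this can be reproduced from the per-iteration global-best history. The sketch below fakes the history with random samples just to show the shape of the code; matplotlib is assumed for plotting and is guarded so the sketch runs without it:

```python
import random

# best_value_ever recorded at each iteration; monotonically non-increasing
random.seed(0)
history = []
gbest = float("inf")
for _ in range(310):
    gbest = min(gbest, random.uniform(-960, 1000))  # stand-in for one PSO step
    history.append(gbest)

try:
    import matplotlib.pyplot as plt
    plt.plot(history)
    plt.xlabel("iteration")
    plt.ylabel("global best value")
    plt.savefig("optimization_plot.png")
except ImportError:
    pass  # plotting is optional
```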

The Results

When the results are investigated, the following can be inferred:


  • The best result is obtained at iteration 101, which is not a meaningful improvement over the result obtained around iteration 30.
  • Although the total number of iterations is 310, a near-best result is obtained in just 30 iterations, which shows that the rate of finding global optima is quite high; this is crucial in engineering applications.

The detailed code can be found in my GitHub repo given below; anyone is welcome to use it without hesitation.


Source: https://medium.com/swlh/particle-swarm-optimization-pso-improved-algorithm-67bed2555307
