(2nd author, 2017, Michael Kirley) A Recursive Decomposition Method for Large Scale Continuous Optimization

Abstract

Problem:

However, the main challenge when using the cooperative co-evolution (CC) framework lies in problem decomposition: deciding how to allocate decision variables, especially interacting ones, to particular subproblems. Existing decomposition methods are typically computationally expensive.

Approach:

In this paper, we propose a new decomposition method, which we call recursive differential grouping (RDG), by considering the interaction between decision variables based on nonlinearity detection.

RDG recursively examines the interaction between a selected decision variable and the remaining variables, placing all interacting decision variables into the same subproblem.

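As a rough illustration of the nonlinearity check that this kind of grouping relies on, the sketch below tests whether two sets of decision variables interact by comparing the effect of perturbing one set at two different settings of the other. The function names, the perturbation scheme, and the threshold eps are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def interact(f, x_base, set1, set2, lb, ub, eps=1e-6):
    """Nonlinearity-based interaction check in the spirit of differential
    grouping: set1 and set2 are treated as interacting if the change in f
    caused by perturbing set1 depends on the values held by set2.
    The perturbation points and eps are assumptions for illustration."""
    x1 = x_base.copy()
    x1[set1] = ub[set1]                      # perturb the variables in set1
    delta1 = f(x1) - f(x_base)               # effect of set1 at the base point

    x2 = x_base.copy()
    x2[set2] = (lb[set2] + ub[set2]) / 2.0   # move set2 to different values
    x3 = x2.copy()
    x3[set1] = ub[set1]                      # same set1 perturbation as before
    delta2 = f(x3) - f(x2)                   # effect of set1 after moving set2

    return abs(delta1 - delta2) > eps        # nonlinear joint effect => interaction
```

Each such check costs four function evaluations; comparing a selected variable against all remaining variables at once is what makes the recursive splitting behind the O(n log(n)) bound in the conclusion possible.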

Results:

RDG greatly improved the efficiency of problem decomposition in terms of time complexity. Significantly, when RDG was embedded in a CC framework, the optimization results were better than results from seven other decomposition methods.

Keywords:

Continuous optimization problem, cooperative co-evolution (CC), decomposition method, large scale global optimization (LSGO).


Conclusion

Method:

A robust decomposition method, RDG, was proposed; it can decompose an n-dimensional problem using O(n log(n)) function evaluations (FEs), based on a measure of nonlinearity between decision variables.

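The O(n log(n)) bound comes from checking a selected variable (or group) against the remaining variables as one block and recursively halving that block whenever an interaction is detected. Below is a minimal sketch of that recursive splitting, reusing the interact() check sketched earlier; the helper names and the plain binary split are assumptions, not the paper's exact pseudocode.

```python
def find_interacting(f, x_base, group, remaining, lb, ub):
    """Recursively locate, within `remaining`, the variables that interact
    with `group`. If the whole block interacts, split it in half and recurse,
    so each interacting variable is isolated with a logarithmic number of
    interaction checks (a sketch of the idea behind RDG's O(n log(n)) FE
    cost, not its exact algorithm)."""
    if not interact(f, x_base, group, remaining, lb, ub):
        return []                              # no interaction with this block
    if len(remaining) == 1:
        return list(remaining)                 # isolated one interacting variable
    mid = len(remaining) // 2
    left = find_interacting(f, x_base, group, remaining[:mid], lb, ub)
    right = find_interacting(f, x_base, group, remaining[mid:], lb, ub)
    return left + right
```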

Experimental results:

Significantly, RDG outperformed seven other decomposition methods when embedded into the DECC/CMAESCC framework and tested across a suite of benchmark LSGO problems. When compared against two other state-of-the-art hybrid algorithms, the CMAESCC-RDG algorithm achieved statistically significantly better results.


Questions for myself

1 What decomposition strategies are there?

2 What is the underlying principle of the decomposition?
