Information Sciences
journal homepage: www.elsevier.com/locate/ins
Article history: Received 11 July 2015; Revised 26 December 2015; Accepted 3 January 2016; Available online xxx
Keywords: Adaptive differential evolution; Multi-objective optimization; Multiple subpopulation

Abstract
For evolutionary algorithms, the search data generated during evolution have attracted considerable attention, and many kinds of data mining methods have been proposed to derive useful information behind these data so as to guide the evolutionary search. However, these methods have mainly centered on single-objective optimization problems. In this paper, an adaptive differential evolution algorithm based on the analysis of search data is developed for multi-objective optimization problems. In this algorithm, useful information is first derived from the search data collected during the evolution process by clustering and statistical methods, and the derived information is then used to guide the generation of the new population and the local search. In addition, the proposed differential evolution algorithm adopts multiple subpopulations, each of which evolves according to an assigned crossover operator borrowed from genetic algorithms to generate perturbed vectors. During the evolution process, the size of each subpopulation is adaptively adjusted based on the information derived from its search results. The local search consists of two phases that focus on exploration and exploitation, respectively. Computational results on benchmark multi-objective problems show that the improvements from these strategies are positive and that the proposed differential evolution algorithm is competitive with or superior to some previous multi-objective evolutionary algorithms in the literature.
© 2016 Elsevier Inc. All rights reserved.
1. Introduction
In practical industry, most optimization problems need to deal with multiple objectives simultaneously, which often results in conflicts because an improvement in one objective will inevitably cause a deterioration in some other objectives. These problems are referred to as multi-objective optimization problems (MOPs), and multi-objective evolutionary algorithms (MOEAs) have shown very good performance on these problems [7,8,30]. Due to their good ability to obtain well-distributed solutions, Pareto-based MOEAs are widely adopted, e.g., NSGA-II [9], microGA [5], SPEA2 [47], MOEA/D [43], multi-objective particle swarm optimization (MOPSO) [6], multi-objective scatter search [21], and hybrid MOEA [34].
Differential evolution (DE) [32] is an evolutionary algorithm that has shown very good performance on single-objective problems (SOPs) [3,24,26,33,35,37,48]. Consequently, many researchers have attempted to extend DE to deal with multiple objectives. The first attempt was made by Abbass et al. [1], in which a Pareto DE (PDE) algorithm was developed to solve continuous MOPs. Another multi-objective DE (MODE) similar to PDE was proposed by Madavan [18] by incorporating the
* Correspondence to: Institute of Industrial Engineering & Logistics Optimization, Northeastern University, Shenyang 110819, PR China. E-mail address: qhjytlx@mail.neu.edu.cn (L. Tang).
http://dx.doi.org/10.1016/j.ins.2016.01.068
0020-0255/© 2016 Elsevier Inc. All rights reserved.
Please cite this article as: X. Wang, L. Tang, An adaptive multi-population differential evolution algorithm for continuous
multi-objective optimization, Information Sciences (2016), http://dx.doi.org/10.1016/j.ins.2016.01.068
JID: INS
ARTICLE IN PRESS [m3Gsc;February 18, 2016;20:51]
fast non-dominated sorting and ranking method of NSGA-II into DE. To achieve a balance between convergence and diversity, Robic and Filipic [28] developed the so-called DEMO algorithm, which adopts two mechanisms for convergence and diversity, respectively. Huang et al. [14] extended their self-adaptive DE (SaDE), originally designed for SOPs, to the multi-objective space and constructed a multi-objective SaDE (MOSaDE) that can adaptively select an appropriate mutation strategy. Santana-Quintero et al. [29] incorporated a local search based on rough set theory into MODE to solve constrained MOPs, and the computational results showed that the hybrid strategy succeeds in making MODE robust and efficient. Ali et al. [2] proposed a multi-objective DE algorithm (MODEA), in which the authors adopted an opposition-based learning strategy to generate the initial population, random localization in the mutation step, and a new selection strategy based on the PDEA [18] and DEMO [28]. Soliman et al. [31] presented a memetic coevolutionary MODE in which coevolution and local search are both incorporated. In this algorithm, two populations are maintained, one consisting of solutions and the other storing the search directions of solutions, to implement the coevolution. Kukkonen and Lampinen [16] developed a generalized DE (GDE3) by extending the selection operator of canonical DE so that it can handle constrained MOPs.
Although many MODEs have been successfully applied to solve different MOPs, there are still two issues that have not been taken into account by previous research.
Firstly, in previous MODEs only one population is maintained during the whole evolution process. The main disadvantage is that the diversity of the population may become very poor, because for MOPs with many local optima it is easy for the population to converge to one or several locally optimal areas. In the literature, some researchers have developed MOEAs with multiple subpopulations, and the computational results illustrate that the performance of MOEAs can be considerably improved. In the MOPSO proposed in [25], the population is divided into several subpopulations by a clustering technique, and the computational results reported by the authors showed that the distribution of the obtained solutions can be much improved. In another kind of MOPSO, proposed by Yen and Leong [39], a dynamic mechanism is designed to adaptively adjust the number of subpopulations. The computational results reported by the authors showed that this strategy can significantly improve the performance of the MOPSO. There are also some papers adopting multiple subpopulations in DE; however, most of them only deal with SOPs [17,40]. For MOPs, Zaharie and Petcu [41] developed a parallel adaptive Pareto DE (APDE) algorithm in which the population is divided into several subpopulations and the parallelization is implemented based on an island model and a random connection topology. Parsopoulos et al. [23] also presented a multi-population DE for MOPs, called vector evaluated DE, in which the parallelization is implemented by a ring topology. Although the multiple-subpopulation strategy was adopted in [23,41], the focus of these two algorithms was the parallelization of MODE. In addition, in the two parallel MODEs only one mutation strategy was used during the evolution process. Since different mutation strategies have different search behaviors and performance, how to combine the multi-population strategy with multiple mutation strategies still needs to be studied so as to further improve the performance of MODE.
Secondly, the information contained in the search data generated during evolution is neglected by most MODEs in the literature, even though such information is very valuable and has attracted considerable attention from researchers. Several kinds of data mining methods have been developed to derive useful information from the search data so as to guide the evolution toward promising regions [3,15,19,22,27,38,42,44]. However, these methods are mainly centered on SOPs [45]. For DE with data mining techniques, Qin et al. [26] used a statistical method to help DE adaptively select the most appropriate mutation strategies, and Huang et al. [14] then extended this strategy to MODE. An opposition-based learning method was adopted in Ali et al. [2] to generate a high-quality initial population. However, in the above three references the data mining techniques were focused only on mutation strategy selection and the generation of the initial population. The incorporation of data mining techniques into the multi-population strategy and the local search in MODE still needs to be studied.
Motivated by the above two main issues, in this paper we propose a hybrid DE algorithm for MOPs, namely an adaptive multi-population DE (AMPDE). Compared with previous MODEs in the literature, the proposed AMPDE has the following three main features.
• The AMPDE adopts multiple subpopulations for MOPs, each of which maintains a different evolution path, so as to improve the search robustness of traditional MODEs. Instead of the canonical mutation strategies used in MODEs, the AMPDE adopts the crossover operators of genetic algorithms to generate perturbed vectors.
• The AMPDE adopts data mining methods such as clustering and statistical methods to derive useful information from the search data collected during the evolution process. The derived information is then used to guide the generation of the new population and the local search. For example, the size of each subpopulation is adaptively adjusted based on the information derived from its previous search results.
• The AMPDE adopts a two-phase local search to improve its exploration and exploitation abilities: the first phase focuses on improving exploration through data analysis of the evolution process, and the second phase centers on improving exploitation through data analysis of the current non-dominated solutions.
The remainder of this paper is organized as follows. Section 2 describes some related definitions of multi-objective optimization and the DE algorithm. The details of the proposed AMPDE are provided in Section 3. In Section 4, the AMPDE is compared with some other state-of-the-art MOEAs on bi-objective and tri-objective benchmark MOPs. Finally, conclusions based on the present study are drawn in Section 5.
2. Background
A general MOP can be formulated as follows:
min F(X) = (f1(X), f2(X), ..., fk(X))  (1)
s.t. gi(X) ≥ 0  (i = 1, 2, ..., m)  (2)
hi(X) = 0  (i = 1, 2, ..., p)  (3)
X ∈ R^n  (4)
In the previous definition, X = (x1, x2, ..., xn) is the decision vector of n variables and F(X) is the objective vector consisting of k objectives f1(X), ..., fk(X) that should be optimized simultaneously. Constraints (2) and (3) are the inequality and equality constraints, respectively.
Given two decision vectors X = (x1, x2, ..., xn) and Y = (y1, y2, ..., yn), X is said to dominate Y if and only if fi(X) ≤ fi(Y) for every i ∈ {1, 2, ..., k} and fj(X) < fj(Y) for at least one objective j ∈ {1, 2, ..., k}. If there is no other X in the decision space that can dominate the decision vector X*, then X* is called a Pareto optimal vector. The set of objective vectors corresponding to all Pareto optimal vectors in the objective space (the space to which the objective vectors belong) is called the Pareto front. The task of a MOP is to obtain the set of all Pareto optimal vectors and the corresponding Pareto front.
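The dominance relation above can be sketched as a small predicate. This is a minimal illustration for a minimization problem; the function name is ours, not the paper's:

```python
def dominates(fx, fy):
    """Return True if objective vector fx Pareto-dominates fy (minimization):
    fx is no worse in every objective and strictly better in at least one."""
    no_worse = all(a <= b for a, b in zip(fx, fy))
    strictly_better = any(a < b for a, b in zip(fx, fy))
    return no_worse and strictly_better
```

Note that two mutually non-dominating vectors (e.g., (1, 2) and (2, 1)) are incomparable, which is what makes a whole front of Pareto optimal vectors possible.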
In the canonical DE algorithm, the evolution of the population at each generation G consists of three main steps, namely mutation, crossover, and selection.
For the ith solution Xi,G in the population, the mutation step (with the traditional DE/rand/1/bin strategy) first randomly selects three different solutions Xr1,G, Xr2,G, and Xr3,G from the population (the three solutions are also different from Xi,G), and then generates a new perturbed solution Vi,G by Vi,G = Xr1,G + F × (Xr2,G − Xr3,G), in which F is the control parameter.
Based on Vi,G = (v1,i,G, ..., vn,i,G), a trial solution Ui,G = (u1,i,G, ..., un,i,G) is generated in the crossover step by

uj,i,G = vj,i,G if randj ≤ Cr or j = jrand; otherwise uj,i,G = xj,i,G,

where randj is a random number in [0, 1] and Cr ∈ [0, 1] is the crossover probability. jrand is a random integer index in [1, n] used to ensure that the trial solution Ui,G is different from the solution Xi,G.
In the third step, the new solution Xi,G+1 in the next generation is determined based on the comparison between the trial solution Ui,G and the solution Xi,G. To ensure convergence, the selection rule is defined by

Xi,G+1 = Ui,G if f(Ui,G) ≤ f(Xi,G); otherwise Xi,G+1 = Xi,G.
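The three steps of canonical DE/rand/1/bin can be sketched as one generation of a single-objective minimizer. This is a minimal illustration, not the paper's implementation; the function name and parameter defaults are ours:

```python
import random

def de_rand_1_bin_step(pop, fitness, F=0.5, Cr=0.9):
    """One generation of canonical DE/rand/1/bin for single-objective
    minimization. `pop` is a list of real-valued vectors; `fitness`
    maps a vector to a scalar to be minimized."""
    n = len(pop[0])
    new_pop = []
    for i, x in enumerate(pop):
        # Mutation: three mutually distinct indices, all different from i.
        r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
        v = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(n)]
        # Crossover: binomial recombination with one forced component j_rand.
        j_rand = random.randrange(n)
        u = [v[j] if (random.random() <= Cr or j == j_rand) else x[j]
             for j in range(n)]
        # Selection: keep the trial vector only if it is no worse.
        new_pop.append(u if fitness(u) <= fitness(x) else x)
    return new_pop
```

Because the selection step is greedy, the best fitness in the population never deteriorates from one generation to the next.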
As mentioned in Section 1, the proposed AMPDE is designed with the motivation of incorporating a multi-population strategy and data mining techniques into DE for MOPs. The two key components of the AMPDE are the multi-population management strategy and the strategy for analyzing the search data obtained from the previous evolution process. The two strategies are not independent of each other, because the adjustment of the subpopulation sizes is based on the analysis results. In addition, the data analysis strategy also derives useful information from the evolution process and the non-dominated solutions to guide the local search. The main procedure of the AMPDE is provided in Algorithm 1, and its components are elaborated in the following sections.
To obtain an initial population with good diversity, we developed a diversification method that follows the main ideas of reference [21]. The procedure of this method is given in Algorithm 2, where NP denotes the size of the population and Xi = (xi,1, xi,2, ..., xi,n) is the ith solution in the population. Please note that each subpopulation has the same size at the initialization stage.
As the most widely studied evolutionary algorithm, the genetic algorithm has attracted considerable attention in recent years, and many kinds of crossover operators have been designed. Among these operators, the blend crossover (denoted as BLX-α) [13], the simulated binary crossover (denoted as SBX) [10], the simplex crossover (denoted as SPX) [36], and the
Algorithm 1
Main procedure of the proposed AMPDE
1: Initialization:
2: Set the termination criterion, and initialize the values of the parameters. Set the generation count G = 0.
3: Set the external archive (denoted as EXA) to be empty.
4: Generate an initial population with four subpopulations using the Initialize-Population() method in Section 3.1, and then assign a crossover operator to each subpopulation.
5: Evaluate each solution in each subpopulation.
6: Add the non-dominated solutions in all the subpopulations to the EXA.
7: while (the termination criterion is not reached) do
8: Set G = G + 1.
9: Improve the EXA by the two-phase local search method described in Section 3.3.
10: (1) Phase I: Improve the diversity of the non-dominated solutions of the EXA using the EXA-propagation-search() method in Section 3.3.1.
11: (2) Phase II: If G is a multiple of nCLS (the frequency of performing phase II), improve the EXA using the EXA-clustering-search() method in Section 3.3.2, and then repair the diversity of the EXA using the EXA-propagation-search() method.
12: Generate the new population using the adaptive generation method in Section 3.2.
13: (1) If G is a multiple of ntrim (the frequency of adjusting the subpopulations' sizes), determine the new size of each subpopulation using the New-size-calculation() method described in Section 3.2.1; otherwise, keep the sizes unchanged.
14: (2) Generate new solutions for each subpopulation with the given size according to the Subpopulation-adjustment() method described in Section 3.2.2.
15: Evaluate each solution in each subpopulation.
16: Update the EXA using the EXA-update-strategy() method described in Section 3.4.
17: end while
Algorithm 2
Initialize-Population()
parent centric crossover (denoted as PCX) [11] have proved to be very effective for continuous problems. The different characteristics of these operators determine their different search powers on different problems. Therefore, it is reasonable to use these operators concurrently by assigning an operator to each subpopulation and designing an appropriate management strategy that adaptively adjusts the size of each subpopulation by analyzing and learning from the search data collected during the evolution process. Such an adaptive subpopulation generation method can make EAs more robust on different kinds of MOPs.
In this paper, the above four crossover operators are used, and correspondingly the population is divided into four parts. Each subpopulation is assigned a crossover operator. The evolution procedure follows the idea of canonical DE; the only difference is that in the AMPDE the perturbed vector is generated by applying the assigned crossover operator. That is, the mutation operation is performed through the assigned operator. For a solution Xi in a subpopulation, one non-dominated solution will be randomly selected from the EXA if this subpopulation is assigned BLX-α or SBX, while two non-dominated solutions will be randomly selected from the EXA if this subpopulation is assigned SPX or PCX. The operator is applied to the current solution and the selected non-dominated solutions to generate the perturbed vector. After this mutation operation, the canonical crossover operation of DE is performed to generate the trial solution, and finally the selection operation is applied to select the solution for the next generation. Therefore, we only use the four crossover operators of genetic algorithms to act as the mutation strategy of canonical DE, and consequently our algorithm is still a DE algorithm.
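The mutation step described above can be sketched for a BLX-α subpopulation: the current solution is crossed with one randomly selected archive member to produce the perturbed vector. This is our illustrative sketch, with assumed function names; the paper does not give this code:

```python
import random

def blx_alpha(x, y, alpha=0.5):
    """BLX-α crossover: each child component is sampled uniformly from the
    parents' interval extended by alpha times its width on both sides."""
    child = []
    for a, b in zip(x, y):
        lo, hi = min(a, b), max(a, b)
        d = hi - lo
        child.append(random.uniform(lo - alpha * d, hi + alpha * d))
    return child

def perturbed_vector(x, archive, alpha=0.5):
    """Sketch of the AMPDE mutation step for a BLX-α subpopulation:
    cross the current solution x with one randomly chosen non-dominated
    solution from the external archive."""
    mate = random.choice(archive)
    return blx_alpha(x, mate, alpha)
```

For SPX or PCX subpopulations the same idea applies with two archive members instead of one; the perturbed vector then enters the usual DE crossover and selection steps.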
To dynamically adapt the size of each subpopulation, we propose the following adaptive subpopulation generation method based on the contribution of each subpopulation to the external archive.
flag is added to each non-dominated solution in the EXA to record by which subpopulation it was obtained. Once the subpopulation adjustment procedure is called, the new size of each subpopulation is determined as follows.
Step 1. Determine the number of non-dominated solutions provided by each subpopulation j (denoted as bj). Note that if a solution is found by more than one subpopulation, the flag contains the indexes of all these subpopulations.
Step 2. The new size of each subpopulation j (denoted as SubPj) is then calculated as SubPj = NP × bj / (b1 + b2 + b3 + b4).
To ensure that each subpopulation always has a chance to evolve, a minimum size (i.e., 5) is set for each subpopulation.
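The two steps above can be sketched as a small resizing function. This is a minimal sketch under our own assumptions (rounding rule and even split when no contributions are recorded are not specified in the paper):

```python
def new_subpop_sizes(contributions, NP, min_size=5):
    """Sketch of the New-size-calculation step: resize each subpopulation
    in proportion to the number of archive solutions it contributed (b_j),
    with a floor of `min_size` so every subpopulation keeps evolving."""
    total = sum(contributions)
    if total == 0:
        # Assumption: with no recorded contributions, split NP evenly.
        sizes = [NP // len(contributions)] * len(contributions)
    else:
        sizes = [round(NP * b / total) for b in contributions]
    # Enforce the minimum size; note the resulting sizes may then sum to
    # slightly more than NP, and the paper does not specify a tie-break.
    return [max(s, min_size) for s in sizes]
```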
• In the case of a size increase for a certain subpopulation j with crossover operator k, we first select a random solution from subpopulation j, then select a random solution (or two solutions if operator k is SPX or PCX) from the EXA, and finally generate a new solution by applying operator k to the selected solutions. This process continues until the size of subpopulation j reaches its new size.
• On the contrary, in the case of a size decrease for a certain subpopulation j, the worst solution (or the most crowded one if all the solutions in this subpopulation are non-dominated) is repeatedly deleted until the new size is reached.
Besides the above two cases, for the other solutions that remain in a certain subpopulation j with crossover operator k, we use the following DE-based update method: first, for each solution we select a random solution (or two solutions if operator k is SPX or PCX) from the EXA; second, we generate a perturbed solution Vi,G by applying operator k to this solution and the selected non-dominated solution(s); and finally we perform the traditional crossover and selection steps of DE based on this perturbed solution. In the selection step, the trial solution is selected only if it dominates the target solution.
Since the solutions in the EXA are involved in the generation of new solutions in our algorithm, the quality and diversity of the EXA are very important to the overall performance of our algorithm. Therefore, we develop a two-phase local search to improve the EXA. The first phase focuses on improving exploration (or diversity) through data analysis of the evolution process by a statistical method, and the second phase centers on improving exploitation (or quality) through data analysis of the non-dominated solutions from the EXA by a clustering method. We prefer this ordering because the second phase needs the many diversified solutions obtained by the first phase in order to derive useful information.
Algorithm 3
EXA-propagation-search()
1: if |EXA| ≤ 2 then
2: for k := 1 to nEXA do
3: Generate a new solution Sk by applying perturbation() to a randomly selected solution from the EXA.
4: Update the EXA with the new solution Sk.
5: end for
6: else
7: for k := 1 to nEXA do
8: for each operator i := 1 to 4 do
9: Determine the contribution of subpopulation i as ci = bi/|EXA|, where bi is the number of non-dominated solutions provided by subpopulation i to the EXA (note that the contribution of subpopulation i can also be seen as the contribution of operator i).
10: end for
11: Use the roulette-wheel method to select an operator based on the contribution of each subpopulation.
12: Apply the selected operator to randomly selected solutions from the EXA to generate a new solution Sk (the operator used to generate this solution is memorized by Sk).
13: Update the EXA with the new solution Sk.
14: end for
15: end if
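The roulette-wheel selection in line 11 of Algorithm 3 can be sketched as follows. This is a standard fitness-proportionate selection routine, not code from the paper; the uniform fallback for an all-zero contribution vector is our assumption:

```python
import random

def roulette_select(contributions):
    """Roulette-wheel selection: operator i is picked with probability
    proportional to its contribution c_i."""
    total = sum(contributions)
    if total == 0:
        # Assumption: no contributions yet, so pick uniformly at random.
        return random.randrange(len(contributions))
    r = random.uniform(0, total)
    acc = 0.0
    for i, c in enumerate(contributions):
        acc += c
        if r <= acc:
            return i
    return len(contributions) - 1  # guard against float round-off
```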
Algorithm 4
Clustering
Algorithm 5
Generating new solutions for each cluster
the two nearest clusters, we need to calculate the distance of (|EXA| − ncluster)(|EXA| − ncluster − 1)/2 cluster pairs. Since ncluster is generally much smaller than |EXA| and |EXA| is generally equal to nEXA during the evolution, the complexity of the clustering process is O(nEXA³). The update of the EXA generally needs k × nEXA comparisons and nEXA new solutions will be generated, so the complexity of Algorithm 5 is at most O(k × nEXA²). Since the number of objectives is at most 3, the complexity of the two-phase local search is O(nEXA³).
In addition, as shown in Algorithm 1, phase II is not performed at every generation like phase I. This phase is applied only if the generation counter is a multiple of nCLS (i.e., the frequency of performing phase II). We prefer this strategy because phase II is based on clustering and learning from the EXA, and the EXA generally changes little over two consecutive generations. If we performed it at every generation, the clustering and learning results might be the same while a great deal of computational effort would be wasted. It is therefore reasonable to apply phase II only when the EXA has changed sufficiently. The value of nCLS and its impact on the performance of the AMPDE are analyzed through the experiments in Section 4.4.1.3.
For a given non-dominated solution i in the current population at generation G, this paper uses the update procedure of reference [4], which can be described as follows.
Step 1. If solution i is dominated by any solution in the EXA, then solution i is discarded.
Step 2. If no solution in the EXA dominates solution i, add it to the EXA, and then delete all solutions in the EXA that are dominated by this solution.
Step 3. If the number of non-dominated solutions exceeds nEXA, delete the most crowded one.
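The three update steps can be sketched as one function. This is a minimal illustration under our own naming (the crowding measure is left abstract, since the paper defers its definition to [4]):

```python
def dominates(fx, fy):
    """Pareto dominance for minimization."""
    return (all(a <= b for a, b in zip(fx, fy))
            and any(a < b for a, b in zip(fx, fy)))

def update_archive(archive, f_new, n_exa, crowding_key):
    """Sketch of the EXA update rule: discard a dominated candidate,
    otherwise insert it and purge archive members it dominates; if the
    archive then overflows, drop the most crowded member (crowding_key
    returns a crowding measure; larger = more crowded)."""
    if any(dominates(f, f_new) for f in archive):
        return archive                                   # Step 1
    archive = [f for f in archive if not dominates(f_new, f)]  # Step 2
    archive.append(f_new)
    if len(archive) > n_exa:                             # Step 3
        archive.remove(max(archive, key=crowding_key))
    return archive
```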
For MOPs with constraints, we use the constraint-handling approach proposed in [4] and [21] to compare solutions. That is, solution Xi is said to dominate Xj if any of the following three conditions is satisfied: (1) both solutions are feasible and solution Xi dominates Xj; (2) solution Xi is feasible and solution Xj is infeasible; (3) both solutions are infeasible but solution Xi has a smaller overall violation of the constraints.
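The three-condition comparison can be sketched as a single predicate. As an illustrative assumption, each solution is represented here as a pair (objective vector, overall constraint violation), with violation 0 meaning feasible:

```python
def constrained_dominates(xi, xj):
    """Sketch of the constraint-handling comparison: xi and xj are pairs
    (objectives, violation), where violation is the overall constraint
    violation and 0 means feasible."""
    (f_i, v_i), (f_j, v_j) = xi, xj
    if v_i == 0 and v_j == 0:
        # (1) Both feasible: ordinary Pareto dominance (minimization).
        return (all(a <= b for a, b in zip(f_i, f_j))
                and any(a < b for a, b in zip(f_i, f_j)))
    if v_i == 0:
        return True          # (2) feasible beats infeasible
    if v_j == 0:
        return False
    return v_i < v_j         # (3) both infeasible: smaller violation wins
```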
To test the performance of our AMPDE algorithm, experiments based on benchmark test problems are carried out. The AMPDE algorithm is implemented in C++, and all the experiments are performed on a PC with an Intel Core 9550 CPU (2.83 GHz for a single core) and the Windows 7 operating system.
Twelve bi-objective and tri-objective benchmark problems from the literature are chosen as the test problems; their definitions are given in Appendix A. These problems are often used in published papers on evolutionary algorithms for MOPs.
(1) The bi-objective problems are the ZDT series: ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6 in [46].
(2) The tri-objective problems are the DTLZ family of scalable test problems: DTLZ1, DTLZ2, DTLZ3, DTLZ4, DTLZ5, DTLZ6, and DTLZ7 in [12].
For the selected test problems, the true Pareto fronts are known. In this paper, we adopt two performance metrics that are often used in the research literature: the generational distance (GD) and the hypervolume (IH). The descriptions of these performance metrics are provided in Appendix B.
For the parameters contained in the four operators, we adopt the settings suggested in the original literature: α = 0.5 for the BLX-α operator according to [13]; η = 20 for the SBX operator according to [9] and [10]; ε = 1 for the SPX operator according to [40]; and ση = σε = 0.1 for the PCX operator according to [11]. The value of the crossover probability Cr in the crossover step is set by a normal distribution with mean value Crk and standard deviation 0.05, denoted by N(Crk, 0.05), where Crk is the median value of Cr over the solutions in subpopulation k. The value of Crk for each subpopulation k is initially set to 0.1.
For the other parameters contained in the proposed AMPDE, we adopt the following settings based on the computational results: npop = 100, nEXA = 100, nCLS = 60, ntrim = 20, and ncluster = 20. In Section 4.4.1, the impact of the value of each parameter on the performance of the AMPDE is provided and analyzed.
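Sampling Cr from N(Crk, 0.05) can be sketched as follows. The clipping to [0, 1] is our assumption, made because the paper only states that Cr ∈ [0, 1]:

```python
import random

def sample_cr(cr_k, sd=0.05):
    """Sketch: draw the crossover probability Cr from N(cr_k, sd) and
    clip it to [0, 1] (the clipping rule is our assumption)."""
    return min(1.0, max(0.0, random.gauss(cr_k, sd)))
```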
In this section, we first analyze the impact of the parameters on the AMPDE, and then carry out experiments to show the efficiency of the proposed improvement strategies, i.e., the multiple subpopulations, the dynamic management of the subpopulation sizes, and the two-phase local search. Finally, the proposed AMPDE is compared with several state-of-the-art MOEAs: NSGA-II [9], AbYSS [21], GDE3 [16], and SMPSO [20].
In these experiments, all the tested algorithms use the same stopping criterion of 50,000 function evaluations. We made 100 independent runs for each problem, and collected the median (xm) and interquartile range (IQR) for each problem as measures of location (or central tendency) and statistical dispersion.
Table 1
Comparison results for different values of ncluster .
GD (each cell: median, followed by IQR)
Problems ncluster = 5 ncluster = 10 ncluster = 20 ncluster = 30 ncluster = 40
ZDT1 1.439e−04 3.432e −05 1.362e−04 3.479e −05 1.381e−04 3.794e −05 1.428e−04 3.496e −05 1.399e−04 2.980e −05
ZDT2 4.735e−05 4.698e −06 4.701e−05 3.415e −06 4.693e−05 3.287e −06 4.764e−05 4.177e −06 4.717e−05 3.575e −06
ZDT3 3.433e−05 2.875e −06 3.411e−05 3.546e −06 3.367e−05 3.440e −06 3.400e−05 2.908e −06 3.388e−05 3.364e −06
ZDT4 1.419e−04 3.719e −05 1.338e−04 3.573e −05 1.368e−04 3.160e −05 1.434e−04 3.539e −05 1.404e−04 3.759e −05
ZDT6 8.638e−04 4.679e −04 8.817e−04 5.344e −04 1.127e−03 1.735e −03 9.447e−04 5.989e −04 8.438e−04 6.657e −04
DTLZ1 5.729e−04 2.795e −05 5.709e−04 3.139e −05 6.007e−04 4.172e −05 5.668e−04 4.085e −05 5.685e−04 2.346e −05
DTLZ2 6.340e−04 3.892e −05 6.358e−04 4.093e −05 6.356e−04 3.288e −05 6.363e−04 3.339e −05 6.381e−04 4.896e −05
DTLZ3 2.056e−03 6.866e −04 2.070e−03 5.459e −04 2.022e−03 8.089e −04 2.059e−03 6.581e −04 2.124e−03 1.397e −03
DTLZ4 4.660e−03 3.383e −04 4.595e−03 4.014e −04 4.678e−03 1.948e −04 4.643e−03 2.692e −04 4.671e−03 2.638e −04
DTLZ5 2.505e−04 3.608e −05 2.473e−04 3.201e −05 2.551e−04 3.043e −05 2.473e−04 3.284e −05 2.496e−04 3.493e −05
DTLZ6 5.772e−04 1.773e −03 5.737e−04 1.878e −03 5.685e−04 3.130e −05 5.750e−04 5.593e −05 5.773e−04 1.542e −03
DTLZ7 1.983e−03 5.608e −04 1.968e−03 5.993e −04 1.868e−03 8.754e −04 1.916e−03 5.630e −04 1.932e−03 6.528e −04
IH (each cell: median, followed by IQR)
Problems ncluster = 5 ncluster = 10 ncluster = 20 ncluster = 30 ncluster = 40
ZDT1 6.619e−01 5.234e −05 6.619e−01 4.909e −05 6.619e−01 3.757e −05 6.619e−01 3.767e −05 6.619e−01 4.013e −05
ZDT2 3.287e−01 4.116e −05 3.287e−01 3.720e −05 3.286e−01 3.911e −05 3.287e−01 4.097e −05 3.287e−01 3.788e −05
ZDT3 5.876e−01 1.820e −05 5.876e−01 2.244e −05 5.876e−01 1.781e −05 5.876e−01 2.268e −05 5.876e−01 1.875e −05
ZDT4 6.620e−01 4.735e −05 6.620e−01 3.273e −05 6.619e−01 3.741e −05 6.620e−01 3.430e −05 6.620e−01 3.505e −05
ZDT6 4.013e−01 5.687e −05 4.013e−01 6.856e −05 4.012e−01 1.182e −04 4.013e−01 5.871e −05 4.013e−01 6.014e −05
DTLZ1 7.659e−01 8.117e −03 7.657e−01 7.811e −03 7.698e−01 7.439e −03 7.662e−01 6.333e −03 7.672e−01 7.127e −03
DTLZ2 3.880e−01 6.302e −03 3.873e−01 6.313e −03 3.889e−01 6.776e −03 3.869e−01 5.953e −03 3.882e−01 5.553e −03
DTLZ3 3.192e−01 3.485e −02 3.193e−01 2.913e −02 3.222e−01 3.954e −02 3.209e−01 3.222e −02 3.210e−01 3.721e −02
DTLZ4 3.828e−01 8.561e −03 3.832e−01 7.714e −03 3.853e−01 8.114e −03 3.824e−01 7.322e −03 3.842e−01 7.074e −03
DTLZ5 9.401e−02 4.535e −05 9.400e−02 4.952e −05 9.397e−02 3.836e −05 9.401e−02 4.526e −05 9.401e−02 4.924e −05
DTLZ6 9.494e−02 4.124e −05 9.494e−02 4.299e −05 9.491e−02 4.830e −05 9.493e−02 3.894e −05 9.493e−02 3.583e −05
DTLZ7 2.944e−01 3.794e −03 2.940e−01 3.524e −03 2.930e−01 3.346e −02 2.952e−01 3.831e −03 2.942e−01 3.508e −03
Table 2
Comparison results for different values of ntrim.
GD
Problems 5 10 20 30 40
ZDT1 1.359e−04 3.125e−05 1.384e−04 3.427e−05 1.306e−04 3.492e−05 1.387e−04 3.386e−05 1.377e−04 3.638e−05
ZDT2 4.691e−05 3.258e−06 4.684e−05 3.536e−06 4.659e−05 3.355e−06 4.652e−05 2.580e−06 4.652e−05 3.103e−06
ZDT3 3.425e−05 2.689e−06 3.438e−05 2.644e−06 3.466e−05 3.069e−06 3.404e−05 2.709e−06 3.424e−05 2.420e−06
ZDT4 1.391e−04 3.835e−05 1.372e−04 3.517e−05 1.369e−04 3.892e−05 1.342e−04 4.017e−05 1.325e−04 3.356e−05
ZDT6 1.019e−03 1.752e−03 1.241e−03 2.034e−03 1.081e−03 1.811e−03 1.092e−03 2.111e−03 1.033e−03 1.745e−03
DTLZ1 5.945e−04 5.585e−05 5.943e−04 4.049e−05 5.947e−04 5.171e−05 5.980e−04 9.144e−05 5.914e−04 4.946e−05
DTLZ2 6.452e−04 4.224e−05 6.420e−04 3.807e−05 6.351e−04 3.430e−05 6.386e−04 3.606e−05 6.449e−04 3.379e−05
DTLZ3 1.901e−03 5.716e−04 2.148e−03 7.600e−04 1.998e−03 6.149e−04 2.121e−03 7.458e−04 2.028e−03 5.013e−04
DTLZ4 4.665e−03 2.732e−04 4.666e−03 2.823e−04 4.701e−03 2.622e−04 4.627e−03 2.856e−04 4.730e−03 2.818e−04
DTLZ5 2.478e−04 2.765e−05 2.480e−04 2.926e−05 2.554e−04 2.957e−05 2.527e−04 3.118e−05 2.511e−04 3.166e−05
DTLZ6 5.719e−04 3.441e−05 5.710e−04 3.274e−05 5.610e−04 3.810e−05 5.619e−04 4.208e−05 5.664e−04 3.131e−05
DTLZ7 2.132e−03 1.061e−03 2.224e−03 6.736e−04 2.137e−03 6.839e−04 2.131e−03 6.913e−04 2.230e−03 9.274e−04
IH
Problems 5 10 20 30 40
ZDT1 6.619e−01 2.928e−05 6.619e−01 2.998e−05 6.619e−01 4.320e−05 6.619e−01 3.141e−05 6.619e−01 3.964e−05
ZDT2 3.286e−01 3.033e−05 3.286e−01 3.055e−05 3.286e−01 3.765e−05 3.286e−01 3.704e−05 3.286e−01 4.018e−05
ZDT3 5.876e−01 2.133e−05 5.876e−01 1.894e−05 5.876e−01 1.885e−05 5.876e−01 2.077e−05 5.876e−01 2.149e−05
ZDT4 6.619e−01 3.521e−05 6.619e−01 3.256e−05 6.619e−01 3.152e−05 6.619e−01 3.426e−05 6.619e−01 3.631e−05
ZDT6 4.012e−01 1.123e−04 4.012e−01 1.312e−04 4.012e−01 1.327e−04 4.012e−01 1.324e−04 4.012e−01 1.131e−04
DTLZ1 7.691e−01 5.816e−03 7.698e−01 9.827e−03 7.702e−01 7.515e−03 7.692e−01 7.961e−03 7.696e−01 6.679e−03
DTLZ2 3.882e−01 8.127e−03 3.885e−01 5.872e−03 3.888e−01 6.784e−03 3.879e−01 5.634e−03 3.882e−01 5.621e−03
DTLZ3 3.290e−01 3.560e−02 3.232e−01 3.001e−02 3.161e−01 3.020e−02 3.183e−01 3.714e−02 3.163e−01 3.500e−02
DTLZ4 3.850e−01 5.455e−03 3.843e−01 6.692e−03 3.856e−01 6.814e−03 3.848e−01 6.266e−03 3.849e−01 7.587e−03
DTLZ5 9.396e−02 5.187e−05 9.396e−02 5.098e−05 9.397e−02 4.856e−05 9.397e−02 5.226e−05 9.397e−02 5.046e−05
DTLZ6 9.491e−02 4.417e−05 9.491e−02 3.676e−05 9.491e−02 3.898e−05 9.491e−02 3.644e−05 9.491e−02 4.030e−05
DTLZ7 2.916e−01 3.370e−02 2.936e−01 4.152e−03 2.936e−01 5.247e−03 2.946e−01 4.503e−03 2.942e−01 5.575e−03
95% based on the analysis of variance (ANOVA). The reason behind such results can be analyzed as follows. When ncluster takes a small value such as 5, the number of solutions in each cluster is large and these solutions are widely distributed, so many of the new solutions generated by learning from the solutions in a cluster may be dominated by solutions already in that cluster, especially when the solutions selected for crossover are far away from each other. On the contrary, when ncluster takes a large value such as 40, the number of solutions in each cluster decreases significantly and the solutions within a cluster consequently become too similar to each other. That is, the probability of finding better new solutions becomes small, and the algorithm's performance deteriorates.
For the IH metric, the results in Fig. 1 show that a larger value of ncluster tends to yield a better IH. This is because the more clusters we form, the better the distribution of solutions in the EXA, and thus the better the IH metric. However, the ANOVA results show that the different values of ncluster do not have a significant impact on the algorithm's performance, especially for the values 20 and 40. Therefore, it is preferable to set ncluster to 20.
4.4.1.2. Frequency of adjusting subpopulations' sizes – ntrim. The frequency of adjusting the subpopulations' sizes may affect the performance of the AMPDE. If ntrim is small, the adjustment operation is performed often, which has two disadvantages: (1) the frequent adjustment consumes a large amount of computational effort; and (2) each subpopulation cannot evolve for a sufficient number of generations with a given set of initial solutions. On the contrary, if ntrim is large, some subpopulations with inappropriate crossover operators may already be trapped in locally optimal areas, and their further evolution wastes computational effort because they cannot find more promising solutions even if more generations are allocated to them. Therefore, the value of ntrim involves a trade-off: each subpopulation should have sufficient generations to evolve while, at the same time, more computational effort is devoted to promising subpopulations.
To support the above analysis, we carried out an experiment in which ntrim takes five levels {5, 10, 20, 30, 40}. The experimental results are given in Table 2, and the mean plots of the GD and IH metrics obtained for each value are given in Fig. 2. For many problem instances, the algorithm's performance on the GD and IH metrics first improves and then deteriorates as ntrim increases. Based on these results, the value 20 can be taken as a trade-off. In addition, the ANOVA results show that the different values of ntrim do not have a significant impact on the algorithm's performance for the GD and IH metrics.
4.4.1.3. Frequency of performing phase II of the local search – nCLS. As analyzed in Section 3.3.2, phase II of the local search is not performed at every generation. A small value of nCLS is harmful because the frequent application of phase II consumes a large amount of computational effort while the learning results change little, since the external archive may not change significantly within nCLS generations. In addition, a small value of nCLS makes the algorithm focus its search on exploitation, so the number of solutions in the EXA may be small. Since the update of new subpopulations is based on the EXA, the overall performance then decreases. On the contrary, if nCLS is large, the algorithm focuses its search on exploration, and thus the distribution of solutions in the EXA improves (that is, the IH metric becomes better), but the GD metric may suffer.
The computational results for nCLS with five levels {5, 20, 40, 60, 80} are shown in Table 3 and Fig. 3, which support the above analysis: the small value 5 gives the worst performance, while the large value 80 yields a worse GD metric (though its IH metric is the best). In addition, the computational results show that the average size of the EXA for many problems is about 30 when nCLS = 5, which is much less than its maximum of 100. Based on these results, nCLS is set to 60 to balance the GD and IH metrics.
Table 3
Comparison results for different values of nCLS.
GD
Problems 5 20 40 60 80
ZDT1 1.085e−04 4.396e−05 1.297e−04 3.733e−05 1.381e−04 3.794e−05 1.289e−04 4.222e−05 1.359e−04 3.322e−05
ZDT2 6.529e−05 3.249e−05 4.651e−05 3.591e−06 4.693e−05 3.287e−06 4.623e−05 3.111e−06 4.677e−05 3.520e−06
ZDT3 5.822e−05 2.260e−05 3.399e−05 3.294e−06 3.367e−05 3.440e−06 3.420e−05 3.267e−06 3.384e−05 3.141e−06
ZDT4 1.117e−04 5.063e−05 1.173e−04 4.348e−05 1.368e−04 3.160e−05 1.350e−04 2.908e−05 1.333e−04 4.043e−05
ZDT6 4.876e−04 2.045e−03 8.902e−04 1.945e−03 1.127e−03 1.735e−03 1.438e−03 2.804e−03 1.173e−03 1.942e−03
DTLZ1 1.671e−03 2.046e−04 1.069e−03 4.251e−04 6.007e−04 4.172e−05 6.001e−04 5.263e−05 5.856e−04 4.080e−05
DTLZ2 7.630e−04 1.350e−04 6.464e−04 2.934e−05 6.356e−04 3.288e−05 6.461e−04 3.050e−05 6.392e−04 3.758e−05
DTLZ3 2.608e−03 5.584e−04 2.445e−03 7.435e−04 2.022e−03 8.089e−04 1.087e−03 9.913e−05 1.106e−03 4.191e−04
DTLZ4 1.001e−02 3.245e−03 4.668e−03 2.515e−04 4.678e−03 1.948e−04 4.647e−03 2.422e−04 4.653e−03 3.130e−04
DTLZ5 3.671e−04 6.808e−05 2.465e−04 3.362e−05 2.551e−04 3.043e−05 2.502e−04 2.694e−05 2.501e−04 2.692e−05
DTLZ6 5.859e−04 7.978e−05 5.733e−04 3.280e−05 5.685e−04 3.130e−05 5.627e−04 3.123e−05 5.666e−04 2.911e−05
DTLZ7 3.118e−03 2.653e−03 2.096e−03 1.079e−03 1.868e−03 8.754e−04 2.063e−03 8.619e−04 1.981e−03 7.805e−04
IH
Problems 5 20 40 60 80
ZDT1 6.534e−01 5.197e−03 6.619e−01 3.988e−05 6.619e−01 3.957e−05 6.619e−01 3.235e−05 6.619e−01 3.612e−05
ZDT2 3.226e−01 9.845e−03 3.286e−01 3.658e−05 3.286e−01 3.911e−05 3.286e−01 3.518e−05 3.286e−01 2.979e−05
ZDT3 5.841e−01 3.908e−03 5.876e−01 1.921e−05 5.876e−01 1.820e−05 5.876e−01 1.549e−05 5.876e−01 2.151e−05
ZDT4 6.564e−01 8.518e−03 6.619e−01 2.986e−03 6.619e−01 3.741e−05 6.619e−01 3.893e−05 6.619e−01 2.995e−05
ZDT6 3.957e−01 1.018e−02 4.012e−01 1.184e−04 4.012e−01 1.182e−04 4.012e−01 1.351e−04 4.012e−01 1.149e−04
DTLZ1 6.834e−01 1.900e−02 7.289e−01 3.057e−02 7.698e−01 7.439e−03 7.719e−01 6.939e−03 7.703e−01 6.586e−03
DTLZ2 3.717e−01 1.186e−02 3.885e−01 5.640e−03 3.889e−01 6.776e−03 3.873e−01 7.513e−03 3.875e−01 7.053e−03
DTLZ3 2.856e−01 1.516e−02 2.991e−01 2.433e−02 3.222e−01 3.954e−02 3.882e−01 6.784e−03 3.866e−01 8.864e−03
DTLZ4 3.226e−01 2.820e−02 3.857e−01 7.371e−03 3.853e−01 8.114e−03 3.848e−01 6.737e−03 3.848e−01 6.717e−03
DTLZ5 9.109e−02 1.282e−03 9.397e−02 4.765e−05 9.397e−02 3.836e−05 9.397e−02 4.968e−05 9.397e−02 4.276e−05
DTLZ6 9.486e−02 6.004e−04 9.491e−02 3.714e−05 9.491e−02 4.830e−05 9.491e−02 4.137e−05 9.491e−02 4.281e−05
DTLZ7 2.739e−01 3.439e−02 2.930e−01 6.200e−03 2.930e−01 3.346e−02 2.939e−01 8.259e−03 2.926e−01 3.406e−02
gle operator is used in the generation of new solutions and in the local search. In addition, the subpopulation procedure is not applied.
The comparison results between our AMPDE and the four ASPDE are given in Table 4, where the symbol "+" denotes that the performance difference between our AMPDE and the best of the others is significant, and the symbol "–" denotes that the difference is not significant at a confidence level of 95% (these symbols are used in the following tables as well). Based on the results, it can be seen that the four crossover operators have different advantages on different problems, and that the multiple-subpopulation strategy obtains the best GD metric for 8 out of the 12 problems (significantly for 6 problems) and the best IH metric for 10 out of the 12 problems (significantly for 10 problems). Therefore, it can be concluded that this strategy combines the advantages of the four operators and thus makes the proposed AMPDE more effective and robust across different problems.
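The operator-per-subpopulation scheme can be sketched as follows. This is a minimal illustration under assumptions: BLX-α and whole-arithmetic crossover stand in for the four GA-borrowed operators (whose identities are specified elsewhere in the paper), and all names are hypothetical.

```python
import random

rng = random.Random(7)

def blx_alpha(p1, p2, alpha=0.5):
    """BLX-alpha crossover: sample each gene from an interval around the parents."""
    child = []
    for a, b in zip(p1, p2):
        lo, hi = min(a, b), max(a, b)
        d = hi - lo
        child.append(rng.uniform(lo - alpha * d, hi + alpha * d))
    return child

def arithmetic(p1, p2):
    """Whole-arithmetic crossover with a random mixing weight."""
    w = rng.random()
    return [w * a + (1 - w) * b for a, b in zip(p1, p2)]

# Each subpopulation carries its own assigned crossover operator.
subpops = {
    "blx": ([[rng.random() for _ in range(4)] for _ in range(6)], blx_alpha),
    "arith": ([[rng.random() for _ in range(4)] for _ in range(6)], arithmetic),
}

trials = {}
for name, (pop, crossover) in subpops.items():
    p1, p2 = rng.sample(pop, 2)
    trials[name] = crossover(p1, p2)  # perturbed vector for this subpopulation

print(sorted(trials))  # ['arith', 'blx']
```

Because each subpopulation always applies its own operator, the search results of a subpopulation directly reflect how well that operator suits the problem, which is what the size-adjustment mechanism later exploits.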
4.4.2.2. Two-phase local search. In this section, the effect of the two-phase local search is tested by comparing the proposed AMPDE to a version of AMPDE in which the two-phase local search is not used (denoted as AMPDEnoLS).
The comparison results are presented in Table 5. From this table, the two-phase local search obtains better GD results for 10 problems (the improvement is significant for 6 problems) and better IH results for 8 problems (significant for 5 problems). Although the AMPDEnoLS obtains a significantly better GD for problem ZDT4, the distribution of the EXA it obtains is significantly worse than that obtained by the AMPDE. The AMPDEnoLS obtains better results for some problems because it can evolve for more generations than the AMPDE. In general, we can conclude that the two-phase local search benefits the AMPDE because it effectively improves both the convergence and the diversity of the obtained EXA.
4.4.2.3. Adaptive population generation method. The aim of this section is to test the effect of the adaptive population generation method, i.e., the dynamic size control of the subpopulations. In this experiment, we compared the proposed AMPDE with a version of AMPDE in which the size of each subpopulation is fixed at 25 (denoted as AMPDEfixed).
The comparison results between the AMPDE and the AMPDEfixed are given in Table 6. Based on these results, the AMPDE obtains better GD and IH results for 8 out of the 12 problems (the improvement is significant for 5 problems). It can then be concluded that the adoption of the adaptive population generation method can
Table 4
Comparison results between AMPDE and four ASPDE algorithms.
GD
ZDT1 1.289e−04 4.222e−05 1.117e−04 2.956e−05 1.216e−04 5.951e−05 2.577e−04 3.190e−04 6.124e−04 4.696e−04 +
ZDT2 4.623e−05 3.111e−06 8.463e−05 1.760e−05 7.940e−05 4.260e−05 1.101e−04 8.713e−04 2.608e−03 3.776e−03 +
ZDT3 3.420e−05 3.267e−06 6.980e−05 1.511e−05 5.567e−05 1.943e−05 2.129e−04 2.491e−04 5.248e−04 3.948e−04 +
ZDT4 1.350e−04 2.908e−05 3.657e−01 4.245e−01 1.261e−04 5.444e−05 5.893e−04 7.002e−04 2.338e−03 2.278e−03 –
ZDT6 1.438e−03 2.804e−03 1.170e−02 2.434e−02 9.292e−05 3.088e−05 4.550e−05 6.130e−06 1.177e−01 2.225e−01 +
DTLZ1 6.001e−04 5.263e−05 6.911e−01 1.478e+00 1.375e−03 5.053e−04 3.182e+01 1.682e+01 6.320e+00 1.872e+00 +
DTLZ2 6.461e−04 3.050e−05 7.501e−04 9.838e−05 6.339e−04 3.476e−05 1.426e−01 1.846e−02 3.183e−02 4.468e−03 –
DTLZ3 1.087e−03 9.913e−05 1.809e+01 1.029e+01 2.601e−03 3.125e−02 6.559e+01 2.265e+01 1.958e+01 4.755e+00 +
DTLZ4 4.647e−03 2.422e−04 4.862e−03 5.707e−04 4.774e−03 4.791e−04 1.853e−01 4.432e−02 3.136e−02 2.481e−02 +
DTLZ5 2.502e−04 2.694e−05 2.551e−04 3.140e−05 2.626e−04 4.688e−05 2.572e−01 5.803e−02 3.675e−02 1.058e−02 –
DTLZ6 5.627e−04 3.123e−05 9.393e−04 2.196e−04 1.004e−03 2.338e−03 4.672e−02 9.015e−02 9.047e−01 1.651e−01 +
DTLZ7 2.063e−03 8.619e−04 8.023e−03 4.570e−03 3.456e−03 2.339e−03 1.252e−03 1.017e−03 3.373e−03 1.750e−03 +
IH
ZDT1 6.619e−01 3.235e−05 6.468e−01 4.578e−03 6.490e−01 5.033e−03 6.488e−01 1.626e−02 1.791e−01 3.474e−02 +
ZDT2 3.286e−01 3.518e−05 3.151e−01 6.876e−03 3.214e−01 6.699e−03 2.684e−01 4.704e−02 2.415e−01 2.474e−01 +
ZDT3 5.876e−01 1.549e−05 5.811e−01 3.780e−03 5.845e−01 2.885e−02 5.445e−01 3.768e−02 1.316e−01 2.590e−02 +
ZDT4 6.619e−01 3.893e−05 3.128e−02 1.929e−01 6.612e−01 5.609e−03 6.081e−01 1.025e−01 4.075e−01 5.740e−01 +
ZDT6 4.012e−01 1.351e−04 3.953e−01 1.952e−02 3.854e−01 1.114e−02 4.013e−01 4.809e−04 2.438e−01 1.279e−01 +
DTLZ1 7.719e−01 6.939e−03 4.283e−02 3.648e−01 7.074e−01 3.443e−02 4.013e−01 0.000e+00 2.386e−01 0.000e+00 +
DTLZ2 3.873e−01 7.513e−03 3.866e−01 5.887e−03 3.868e−01 6.344e−03 7.726e−03 1.686e−02 7.922e−02 2.533e−02 +
DTLZ3 3.882e−01 6.784e−03 3.813e−01 0.000e+00 2.923e−01 2.095e−02 4.984e−03 0.000e+00 7.267e−02 0.000e+00 +
DTLZ4 3.848e−01 6.737e−03 3.776e−01 9.766e−03 3.764e−01 1.984e−02 4.984e−03 0.000e+00 1.122e−01 9.032e−02 +
DTLZ5 9.397e−02 4.968e−05 9.395e−02 5.532e−05 9.383e−02 5.252e−04 4.984e−03 0.000e+00 5.902e−02 1.010e−02 +
DTLZ6 9.491e−02 4.137e−05 9.024e−02 2.989e−03 9.109e−02 4.165e−03 8.154e−02 8.215e−03 9.296e−09 5.103e−06 +
DTLZ7 2.939e−01 8.259e−03 2.575e−01 1.647e−02 2.658e−01 2.642e−02 2.210e−01 3.958e−02 1.892e−01 1.139e−01 +
Table 5
Comparison results for the adoption of two-phase local search.
Problems GD IH
ZDT1 1.289e−04 4.222e−05 1.324e−04 3.643e−05 – 6.619e−01 3.235e−05 6.619e−01 2.813e−05 –
ZDT2 4.623e−05 3.111e−06 4.708e−05 3.761e−06 – 3.286e−01 3.518e−05 3.286e−01 4.748e−05 –
ZDT3 3.420e−05 3.267e−06 3.540e−05 5.152e−06 + 5.876e−01 1.549e−05 5.876e−01 7.419e−05 –
ZDT4 1.350e−04 2.908e−05 8.473e−06 1.228e−04 + 6.619e−01 3.893e−05 6.608e−01 4.460e−03 +
ZDT6 1.438e−03 2.804e−03 2.053e−03 2.008e−02 + 4.012e−01 1.351e−04 4.011e−01 1.431e−03 –
DTLZ1 6.001e−04 5.263e−05 1.351e−03 5.285e−04 + 7.719e−01 6.939e−03 7.064e−01 4.144e−02 +
DTLZ2 6.461e−04 3.050e−05 6.451e−04 3.994e−05 – 3.873e−01 7.513e−03 3.883e−01 7.858e−03 +
DTLZ3 1.087e−03 9.913e−05 2.361e−03 7.551e−04 + 3.882e−01 6.784e−03 2.992e−01 3.310e−02 +
DTLZ4 4.647e−03 2.422e−04 4.680e−03 3.889e−04 – 3.848e−01 6.737e−03 3.835e−01 6.832e−03 +
DTLZ5 2.502e−04 2.694e−05 2.540e−04 2.853e−05 – 9.397e−02 4.968e−05 9.399e−02 5.779e−05 –
DTLZ6 5.627e−04 3.123e−05 5.802e−04 5.189e−05 + 9.491e−02 4.137e−05 9.489e−02 2.454e−04 +
DTLZ7 2.063e−03 8.619e−04 2.292e−03 9.180e−04 + 2.939e−01 8.259e−03 2.950e−01 4.697e−03 +
adaptively select appropriate subpopulations, give them more chances to evolve, and thus make the AMPDE more robust for different MOPs.
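The size-adjustment idea can be sketched in a few lines (a minimal illustration with a hypothetical helper `trim_sizes`, not the paper's exact rule): every ntrim generations, a fixed total population budget is reallocated in proportion to each subpopulation's contribution to the external archive, with a small floor so no operator is discarded entirely.

```python
def trim_sizes(contributions, total, floor=2):
    """Reallocate a fixed total budget in proportion to each subpopulation's
    contribution to the external archive, keeping a floor size for each."""
    weight = sum(contributions)
    if weight == 0:                       # no contributions yet: split evenly
        sizes = [total // len(contributions)] * len(contributions)
    else:
        sizes = [max(floor, round(total * c / weight)) for c in contributions]
    # fix rounding drift so the sizes still sum to the budget
    while sum(sizes) > total:
        sizes[sizes.index(max(sizes))] -= 1
    while sum(sizes) < total:
        sizes[sizes.index(min(sizes))] += 1
    return sizes

# 4 subpopulations; archive contributions counted since the last adjustment
print(trim_sizes([12, 3, 0, 5], total=100))  # [58, 15, 2, 25]
```

The floor keeps a temporarily unproductive operator alive, so it can regain population share later if it starts contributing to the archive again.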
Table 6
Comparison results for the adoption of adaptive population generation method.
Problems GD IH
ZDT1 1.289e−04 4.222e−05 1.332e−04 3.901e−05 + 6.619e−01 3.235e−05 6.619e−01 3.681e−05 –
ZDT2 4.623e−05 3.111e−06 4.682e−05 3.209e−06 – 3.286e−01 3.518e−05 3.286e−01 4.095e−05 –
ZDT3 3.420e−05 3.267e−06 3.372e−05 2.407e−06 + 5.876e−01 1.549e−05 5.876e−01 2.051e−05 +
ZDT4 1.350e−04 2.908e−05 1.449e−04 3.319e−05 + 6.619e−01 3.893e−05 6.619e−01 3.183e−05 +
ZDT6 1.438e−03 2.804e−03 9.925e−04 1.397e−03 + 4.012e−01 1.351e−04 4.012e−01 8.094e−05 +
DTLZ1 6.001e−04 5.263e−05 5.801e−04 5.947e−05 + 7.719e−01 6.939e−03 7.673e−01 8.429e−03 +
DTLZ2 6.461e−04 3.050e−05 8.500e−04 1.630e−03 + 3.873e−01 7.513e−03 3.867e−01 7.130e−03 –
DTLZ3 1.087e−03 9.913e−05 2.318e−03 8.000e−01 + 3.882e−01 6.784e−03 3.149e−01 3.609e−02 +
DTLZ4 4.647e−03 2.422e−04 4.676e−03 2.688e−04 – 3.848e−01 6.737e−03 3.846e−01 7.714e−03 +
DTLZ5 2.502e−04 2.694e−05 2.481e−04 3.338e−05 – 9.397e−02 4.968e−05 9.400e−02 4.944e−05 +
DTLZ6 5.627e−04 3.123e−05 5.714e−04 4.008e−05 + 9.491e−02 4.137e−05 9.494e−02 3.240e−05 +
DTLZ7 2.063e−03 8.619e−04 2.346e−03 7.726e−04 – 2.939e−01 8.259e−03 2.935e−01 4.214e−03 +
Table 7
Comparison results between AMPDE and other MOEAs.
GD
ZDT1 1.289e−04 4.222e−05 4.586e−04 5.976e−05 1.525e−04 2.139e−05 1.299e−04 4.590e−05 1.088e−04 4.290e−05 +
ZDT2 4.623e−05 3.111e−06 5.225e−04 9.523e−05 5.223e−05 1.147e−05 4.661e−05 3.562e−06 4.740e−05 2.683e−06 –
ZDT3 3.420e−05 3.267e−06 1.799e−04 3.695e−05 3.393e−05 3.417e−06 3.305e−05 2.250e−06 3.609e−05 2.718e−06 –
ZDT4 1.350e−04 2.908e−05 1.606e−04 5.000e−05 2.316e−04 1.022e−04 1.614e−01 1.625e−01 1.212e−04 4.160e−05 +
ZDT6 1.438e−03 2.804e−03 9.627e−02 1.357e−02 7.475e−05 8.606e−06 4.344e−05 5.622e−06 5.031e−05 8.380e−03 +
DTLZ1 6.001e−04 5.263e−05 7.416e−04 2.683e−04 6.034e−04 8.943e−05 1.345e−03 3.854e−04 5.191e−03 1.378e−03 –
DTLZ2 6.461e−04 3.050e−05 1.366e−03 2.414e−04 6.841e−04 3.744e−05 1.192e−03 2.504e−04 3.393e−03 6.469e−04 +
DTLZ3 1.087e−03 9.913e−05 4.420e−03 1.156e−02 4.091e−03 1.015e−01 9.587e−01 1.203e+00 3.360e−03 1.156e−03 +
DTLZ4 4.647e−03 2.422e−04 5.137e−03 3.862e−04 4.881e−03 3.154e−04 4.925e−03 2.496e−04 5.745e−03 5.061e−04 +
DTLZ5 2.502e−04 2.694e−05 3.684e−04 7.521e−05 2.539e−04 3.135e−05 2.520e−04 3.071e−05 2.611e−04 3.347e−05 –
DTLZ6 5.627e−04 3.123e−05 6.314e−04 2.361e−03 9.373e−03 7.674e−03 5.760e−04 3.118e−05 5.664e−04 3.344e−05 –
DTLZ7 2.063e−03 8.619e−04 2.665e−03 7.560e−04 1.287e−03 1.057e−03 2.881e−03 5.106e−04 4.675e−03 1.072e−03 +
IH
ZDT1 6.619e−01 3.235e−05 6.551e−01 9.119e−04 6.620e−01 7.165e−05 6.620e−01 3.394e−05 6.620e−01 4.578e−05 –
ZDT2 3.286e−01 3.518e−05 3.206e−01 1.362e−03 3.287e−01 7.603e−05 3.287e−01 3.921e−05 3.287e−01 3.720e−05 –
ZDT3 5.876e−01 1.549e−05 5.835e−01 8.462e−04 5.876e−01 2.403e−05 5.876e−01 1.705e−05 5.875e−01 7.421e−05 –
ZDT4 6.619e−01 3.893e−05 6.595e−01 9.191e−04 6.599e−01 2.319e−03 1.864e−01 2.356e−01 6.618e−01 1.002e−04 –
ZDT6 4.012e−01 1.351e−04 6.587e−01 0.000e+00 4.006e−01 1.433e−04 4.013e−01 2.373e−05 4.013e−01 6.396e−05 +
DTLZ1 7.719e−01 6.939e−03 7.623e−01 7.801e−03 7.612e−01 8.167e−03 7.672e−01 5.962e−03 7.401e−01 9.501e−03 +
DTLZ2 3.873e−01 7.513e−03 3.754e−01 9.428e−03 3.847e−01 7.116e−03 3.771e−01 7.784e−03 3.548e−01 7.298e−03 –
DTLZ3 3.882e−01 6.784e−03 3.342e−01 3.843e−02 3.417e−01 4.182e−02 2.907e−01 1.006e−01 3.561e−01 1.448e−02 +
DTLZ4 3.848e−01 6.737e−03 3.759e−01 8.662e−03 3.867e−01 5.254e−03 3.764e−01 6.219e−03 3.632e−01 8.697e−03 +
DTLZ5 9.397e−02 4.968e−05 9.288e−02 2.400e−04 9.401e−02 2.998e−05 9.397e−02 5.044e−05 9.390e−02 6.704e−05 –
DTLZ6 9.491e−02 4.137e−05 9.355e−02 1.857e−02 4.883e−02 1.903e−02 9.487e−02 3.629e−05 9.488e−02 4.576e−05 –
DTLZ7 2.939e−01 8.259e−03 2.841e−01 7.348e−03 2.604e−01 4.918e−02 2.933e−01 3.471e−03 2.764e−01 6.196e−03 –
is significantly better than the others), the AbYSS obtains the best result for only one problem, the SMPSO achieves superior results for 2 problems (both significantly better than the others), and the NSGA-II fails to obtain the best result for any problem. More specifically, our AMPDE obtains the best result for only one bi-objective problem (ZDT2), but it shows an overwhelming advantage over the others on the three-objective problems. The pairwise comparison results between our AMPDE and its rivals are given in Table 8, in which the symbol "Y" denotes that our AMPDE obtains a better result than a certain rival while the symbol "N" denotes that the rival is superior. Based on Table 8, our AMPDE obtains 9 better GD results than AbYSS and SMPSO, 12 better GD results than NSGA-II, and 10 better GD results than GDE3.
For the IH metric, our AMPDE obtains the best results for 2 bi-objective problems and 5 three-objective problems. The AbYSS is superior to the other algorithms for 2 three-objective problems, the GDE3 algorithm also shows a
Table 8
Pair-wise comparison results between AMPDE and
other MOEAs for the GD metric.
ZDT1 Y Y Y N
ZDT2 Y Y Y Y
ZDT3 Y N N Y
ZDT4 Y Y Y N
ZDT6 Y N N N
DTLZ1 Y Y Y Y
DTLZ2 Y Y Y Y
DTLZ3 Y Y Y Y
DTLZ4 Y Y Y Y
DTLZ5 Y Y Y Y
DTLZ6 Y Y Y Y
DTLZ7 Y N Y Y
Total 12 9 10 9
Fig. 4. Pareto front for the best GD metric obtained by each algorithm on problem DTLZ7.
better performance than the others for 2 problems, and NSGA-II and SMPSO each obtain the best IH metric for only one problem.
To give a graphical overview of the behavior of these algorithms, we show the Pareto fronts obtained by each algorithm with the best GD metric for problem DTLZ7 in Fig. 4 (note that the results are for a single run). From this figure, it is clear that the Pareto front obtained by AbYSS is trapped in the first block of the true Pareto front, even though the GD result obtained by AbYSS is the best one. The SMPSO shows a slightly worse solution distribution than AMPDE, NSGA-II and GDE3. Although the three algorithms (AMPDE, NSGA-II and GDE3) have similar performance in Fig. 4, it should be pointed out that the maximum values of objective 3 obtained by GDE3 and NSGA-II are both larger than 6.000 (6.006 and 6.013, respectively), while the maximum value of objective 3 obtained by our AMPDE is exactly 6.000 (the value of the true Pareto front is 6.000). That is, the Pareto fronts obtained by NSGA-II and GDE3 are slightly farther from the true Pareto front than that obtained by our AMPDE.
Based on the comparison results between our AMPDE and the other MOEAs, it can be found that the performance of the AMPDE is good and robust because it obtains comparable or superior results for most of the test problems. The major reasons can be analyzed as follows.
Table 9
Comparison results between AMPDEnoLS and other MOEAs for GD metric.
GD
ZDT1 1.324e−04 3.643e−05 4.586e−04 5.976e−05 1.525e−04 2.139e−05 1.299e−04 4.590e−05 1.088e−04 4.290e−05 +
ZDT2 4.708e−05 3.761e−06 5.225e−04 9.523e−05 5.223e−05 1.147e−05 4.661e−05 3.562e−06 4.740e−05 2.683e−06 +
ZDT3 3.540e−05 5.152e−06 1.799e−04 3.695e−05 3.393e−05 3.417e−06 3.305e−05 2.250e−06 3.609e−05 2.718e−06 +
ZDT4 8.473e−06 1.228e−04 1.606e−04 5.000e−05 2.316e−04 1.022e−04 1.614e−01 1.625e−01 1.212e−04 4.160e−05 +
ZDT6 2.053e−03 2.008e−02 9.627e−02 1.357e−02 7.475e−05 8.606e−06 4.344e−05 5.622e−06 5.031e−05 8.380e−03 +
DTLZ1 1.351e−03 5.285e−04 7.416e−04 2.683e−04 6.034e−04 8.943e−05 1.345e−03 3.854e−04 5.191e−03 1.378e−03 –
DTLZ2 6.451e−04 3.994e−05 1.366e−03 2.414e−04 6.841e−04 3.744e−05 1.192e−03 2.504e−04 3.393e−03 6.469e−04 +
DTLZ3 2.361e−03 7.551e−04 4.420e−03 1.156e−02 4.091e−03 1.015e−01 9.587e−01 1.203e+00 3.360e−03 1.156e−03 +
DTLZ4 4.680e−03 3.889e−04 5.137e−03 3.862e−04 4.881e−03 3.154e−04 4.925e−03 2.496e−04 5.745e−03 5.061e−04 +
DTLZ5 2.540e−04 2.853e−05 3.684e−04 7.521e−05 2.539e−04 3.135e−05 2.520e−04 3.071e−05 2.611e−04 3.347e−05 –
DTLZ6 5.802e−04 5.189e−05 6.314e−04 2.361e−03 9.373e−03 7.674e−03 5.760e−04 3.118e−05 5.664e−04 3.344e−05 +
DTLZ7 2.292e−03 9.180e−04 2.665e−03 7.560e−04 1.287e−03 1.057e−03 2.881e−03 5.106e−04 4.675e−03 1.072e−03 +
Table 10
Pair-wise comparison results between AMPDEnoLS and
other MOEAs for the GD metric.
ZDT1 Y Y N N
ZDT2 Y Y N Y
ZDT3 Y N N Y
ZDT4 Y Y Y Y
ZDT6 Y N N N
DTLZ1 N N N Y
DTLZ2 Y Y Y Y
DTLZ3 Y Y Y Y
DTLZ4 Y Y Y Y
DTLZ5 Y N N Y
DTLZ6 Y Y N N
DTLZ7 Y N Y Y
Total 11 7 5 9
Firstly, the two-phase local search helps improve the non-dominated solutions contained in the EXA in both quality and diversity, and these solutions of good quality and diversity are then used to generate new solutions. This mechanism drives the new solutions to converge to the true Pareto front quickly and at the same time keeps a good spread of non-dominated solutions in the EXA. To illustrate the positive effect of the local search method, we further compared the AMPDE without local search (AMPDEnoLS) to the four state-of-the-art MOEAs on the GD metric in Tables 9 and 10. Based on the comparison results, it is clear that the AMPDEnoLS obtains the best results for only 4 problems. Although the AMPDEnoLS is still superior to the NSGA-II and SMPSO for most problems, the AbYSS algorithm becomes competitive with the AMPDEnoLS, while the GDE3 algorithm is superior to the AMPDEnoLS.
Secondly, the adaptive population generation method can self-adaptively change the size of each subpopulation and allocate more evolution chances to the subpopulations with more appropriate crossover operators. That is, this mechanism helps to guarantee a robust performance of the AMPDE on different kinds of multi-objective problems.
5. Conclusions
This paper proposes an adaptive multi-population DE algorithm for multi-objective optimization problems. One major feature of the proposed AMPDE algorithm is that it uses multiple subpopulations. Each subpopulation is assigned a certain crossover operator, and the generation of perturbed vectors in that subpopulation is based on this operator, which differs from the mutation step of the canonical DE algorithm. Another main feature is that the algorithm adopts clustering and statistical methods to derive useful information from the search process. The derived information is then used to adaptively adjust the size of each subpopulation according to its contribution to the external archive, and to guide the two-phase local search so as to improve both the quality and diversity of the external archive. Computational experiments on benchmark problems were carried out to test the performance of the proposed AMPDE, and the results show that the proposed strategies bring positive improvements and that the proposed AMPDE is competitive with or superior to some state-of-the-art MOEAs in the literature on the ZDT and DTLZ series of problems. Our future research will include the incorporation of more advanced clustering techniques into the AMPDE to assist subpopulation management, and the application of the AMPDE to complex MOPs in practical industries.
Acknowledgments
This research is partly supported by the National Natural Science Foundation of China (no. 61374203, no. 61573086), the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (no. 71321001), the National 863 High-Tech Research and Development Program of China (2013AA040704), and the State Key Laboratory of Synthetical Automation for Process Industries Fundamental Research Funds (grant no. 2013ZCX02).
• The GD metric indicates how far the obtained Pareto front is from the true Pareto optimal front. It is defined as GD = \sqrt{\sum_{i=1}^{|EXA|} d_i^2} / |EXA|, where |EXA| is the number of non-dominated solutions obtained by a certain algorithm, and d_i is the Euclidean distance (measured in objective space) between each solution in the obtained Pareto front and the nearest member of the true Pareto optimal front.
• The IH metric measures the hypervolume of the objective space dominated by a given Pareto set of points. In the case of two or three objectives, IH is the area or volume covered by a set of non-dominated solutions with respect to a reference point. As is common practice in the literature, we normalize the objective values of the non-dominated solutions obtained by the different tested algorithms into [0, 1]. That is, each objective of a solution x is normalized by \bar{f}_j(x) = (f_j(x) - f_j^{min}) / (f_j^{max} - f_j^{min}), where f_j(x) is the jth objective value of solution x and f_j^{min} (f_j^{max}) is the minimum (maximum) value of the jth objective among all solutions in the union of the non-dominated sets obtained by all tested algorithms. The reference point is then set to (1.0, 1.0), and consequently the maximum value of IH is 1. Note that when comparing two Pareto fronts, a higher value of IH indicates a better front. When calculating IH, if all
Please cite this article as: X. Wang, L. Tang, An adaptive multi-population differential evolution algorithm for continuous
multi-objective optimization, Information Sciences (2016), http://dx.doi.org/10.1016/j.ins.2016.01.068
JID: INS
ARTICLE IN PRESS [m3Gsc;February 18, 2016;20:51]
395 solutions obtained by a certain algorithm are dominated by the reference point, then the hypervolume of this algorithm
396 is equal to 0.