A Cellular Genetic Algorithm for Multiobjective Optimization
A. J. Nebro, J. J. Durillo, F. Luna, B. Dorronsoro, and E. Alba
Departamento de Lenguajes y Ciencias de la Computación
E.T.S. Ingeniería Informática
Campus de Teatinos, 29071 Málaga (Spain)
antonio,durillo,flv,bernabe,[email protected]
Abstract. This paper introduces a new cellular genetic algorithm for solving multiobjective continuous optimization problems. Our approach is characterized by an external archive used to store non-dominated solutions and a feedback mechanism in which solutions from this archive randomly replace existing individuals in the population after each iteration. The result is a simple and elitist algorithm called MOCell. Our proposal has been evaluated with both constrained and unconstrained problems and compared against NSGA-II and SPEA2, two state-of-the-art evolutionary multiobjective optimizers. On the benchmark used, preliminary experiments indicate that MOCell obtains competitive results in terms of convergence, and it clearly outperforms the other two algorithms concerning the diversity of solutions along the Pareto front.
1 Introduction
Most optimization problems in the real world involve the minimization and/or maximization of more than one function. Generally speaking, multiobjective optimization is not restricted to finding a single solution of a given multiobjective optimization problem (MOP), but a set of solutions called non-dominated solutions. Each solution in this set is said to be a Pareto optimum, and when they are plotted in the objective space they are collectively known as the Pareto front. Obtaining the Pareto front of a given MOP is the main goal of multiobjective optimization. In general, the search spaces of MOPs tend to be very large, and evaluating the objective functions can require a significant amount of time. These features make it difficult to apply deterministic techniques and, therefore, stochastic techniques have been widely proposed within this domain. Among them, evolutionary algorithms (EAs) have been investigated by many researchers, and some of the most well-known algorithms for solving MOPs belong to this class (e.g., NSGA-II [10], PAES [12], and SPEA2 [17]).
EAs are especially well-suited for tackling MOPs because of their ability to find multiple trade-off solutions in a single run. Well-accepted subclasses of EAs are Genetic Algorithms (GAs), Genetic Programming (GP), Evolutionary Programming (EP), and Evolution Strategies (ES). These algorithms work over a set (population) of potential solutions (individuals) which undergoes stochastic operators in order to search for better solutions. Most EAs use a single population (panmixia) of individuals and apply the operators to it as a whole (see Fig. 1a). Conversely, there exist so-called structured EAs, in which the population is somehow decentralized. Among the many types of structured EAs, distributed and cellular models are two popular variants [4,6] (see Fig. 1b and Fig. 1c). In many cases, these decentralized algorithms provide a better sampling of the search space,
Fig. 1. Panmictic (a), distributed (b), and cellular (c) GAs
resulting in an improved numerical behavior with respect to an equivalent algorithm in
panmixia.
In this work, we focus on the cellular model of GAs (cGAs). In cGAs, the concept of a (small) neighborhood is used intensively; this means that an individual may only interact with its nearby neighbors in the breeding loop [14]. The overlapped small neighborhoods of cGAs help in exploring the search space because the induced slow diffusion of solutions through the population provides a kind of exploration (diversification), while exploitation (intensification) takes place inside each neighborhood through genetic operations. cGAs were initially designed for massively parallel machines, although the model itself has also been adopted for single-processor machines, with no relation to parallelism at all. Besides, the neighborhood is defined among tentative solutions in the algorithm, with no relation to any geographical neighborhood definition in the problem space.
cGAs have proven to be very effective for solving a diverse set of single-objective optimization problems from both classical and real-world settings [1,2], but little attention has been paid to their use in the multiobjective optimization field. In [13], a multiobjective evolution strategy following a predator-prey model is presented. This model is similar to a cGA, because solutions (preys) are placed on the vertices of an undirected connected graph, thus defining neighborhoods, where they are caught by predators. Murata and Gen presented in [15] an algorithm in which, for an n-objective MOP, the population is structured in an n-dimensional weight space, and the location of individuals (called cells) depends on their weight vector. Thus, the information given by the weight vectors of individuals is used for guiding the search. A metapopulation evolutionary algorithm (called MEA) is presented in [11]. This algorithm is a cellular model with the peculiarity that disasters can occasionally happen in the population, killing all the individuals located in the disaster area (extinction). Additionally, these empty areas can also be reoccupied by individuals (colonization). Thus, this model allows a flexible population size, combining the ideas of cellular and spatially distributed populations. Finally, Alba et al. proposed in [3] cMOGA, to the best of our knowledge the only cellular multiobjective algorithm based on the canonical cGA model prior to this work. In that work, cMOGA was used for optimizing a broadcasting strategy specifically designed for mobile ad hoc networks.
Our proposal, called MOCell, is, like [3], an adaptation of a canonical cGA to the multiobjective field. MOCell uses an external archive to store the non-dominated solutions found during the execution of the algorithm, as many other multiobjective evolutionary algorithms do (e.g., PAES, SPEA2, or cMOGA). However, the main feature characterizing MOCell with respect to these algorithms is that a number of solutions are moved back into the population from the archive after each iteration, replacing randomly selected existing individuals. The contributions of our work can be summarized as follows:
- We propose a cGA for solving continuous MOPs. The algorithm uses an external archive and a feedback of solutions from the archive to the population.
- The algorithm is evaluated using a benchmark of constrained and unconstrained MOPs.
- MOCell is compared against NSGA-II and SPEA2, two state-of-the-art GAs for solving MOPs.
The rest of the paper is organized as follows. In Section 2, we present several basic concepts of multiobjective optimization. In Section 3, we describe MOCell, our proposal for tackling MOPs. Our results are presented and discussed in Section 4. Finally, in Section 5 we give our main conclusions and suggest some future research lines.
2 Multiobjective Optimization Fundamentals
In this section, we include some background on multiobjective optimization. Concretely, we define the concepts of MOP, Pareto optimality, Pareto dominance, Pareto optimal set, and Pareto front. In these definitions we assume, without loss of generality, the minimization of all the objectives. A general multiobjective optimization problem (MOP) can be formally defined as follows:
Definition 1 (MOP). Find a vector $x^* = [x_1^*, x_2^*, \ldots, x_n^*]$ which satisfies the $m$ inequality constraints $g_i(x) \geq 0$, $i = 1, 2, \ldots, m$, the $p$ equality constraints $h_i(x) = 0$, $i = 1, 2, \ldots, p$, and minimizes the vector function $f(x) = [f_1(x), f_2(x), \ldots, f_k(x)]^T$, where $x = [x_1, x_2, \ldots, x_n]^T$ is the vector of decision variables.
The set of all values satisfying the constraints defines the feasible region $\Omega$, and any point $x \in \Omega$ is a feasible solution. As mentioned before, we seek the Pareto optima. Their formal definition follows:
Definition 2 (Pareto Optimality). A point $x^* \in \Omega$ is Pareto optimal if for every $x \in \Omega$ and $I = \{1, 2, \ldots, k\}$ either $\forall_{i \in I}\, (f_i(x) = f_i(x^*))$ or there is at least one $i \in I$ such that $f_i(x) > f_i(x^*)$.

This definition states that $x^*$ is Pareto optimal if no feasible vector $x$ exists which would improve some criterion without causing a simultaneous worsening in at least one other criterion.

Definition 3 (Pareto Dominance). A vector $u = (u_1, \ldots, u_k)$ is said to dominate $v = (v_1, \ldots, v_k)$ (denoted by $u \preceq v$) if and only if $u$ is partially less than $v$, i.e., $\forall i \in \{1, \ldots, k\},\ u_i \leq v_i \ \wedge\ \exists i \in \{1, \ldots, k\} : u_i < v_i$.

Definition 4 (Pareto Optimal Set). For a given MOP $f(x)$, the Pareto optimal set is defined as $\mathcal{P}^* = \{x \in \Omega \mid \neg\exists\, x' \in \Omega,\ f(x') \preceq f(x)\}$.

Definition 5 (Pareto Front). For a given MOP $f(x)$ and its Pareto optimal set $\mathcal{P}^*$, the Pareto front is defined as $\mathcal{PF}^* = \{f(x),\ x \in \mathcal{P}^*\}$.
Obtaining the Pareto front of a MOP is the main goal of multiobjective optimization. However, given that a Pareto front can contain a large number of points, a good solution must contain a limited number of them, which should be as close as possible to the exact Pareto front and uniformly spread along it; otherwise, they would not be very useful to the decision maker.
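The dominance relation behind Definitions 3 and 4 is straightforward to implement. A minimal sketch (minimization assumed; the function names are illustrative, not from the paper):

```python
def dominates(u, v):
    """True if objective vector u Pareto-dominates v (minimization):
    u is no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(u, v))
            and any(a < b for a, b in zip(u, v)))

def non_dominated(points):
    """Filter a list of objective vectors down to its non-dominated subset."""
    return [p for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]
```

For example, among the bi-objective points (1, 2), (2, 1), (2, 2), and (3, 3), only the first two are non-dominated.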
Algorithm 1 Pseudocode for a Canonical cGA
1: proc Steps_Up(cga) // Algorithm parameters in 'cga'
2: while not Termination_Condition() do
3:   for individual ← 1 to cga.popSize do
4:     n_list ← Get_Neighborhood(cga, position(individual));
5:     parents ← Selection(n_list);
6:     offspring ← Recombination(cga.Pc, parents);
7:     offspring ← Mutation(cga.Pm, offspring);
8:     Evaluate_Fitness(offspring);
9:     Insert(position(individual), offspring, cga, aux_pop);
10:  end for
11:  cga.pop ← aux_pop;
12: end while
13: end proc Steps_Up;
3 The Algorithm
In this section we first describe a canonical cGA; then, we describe the MOCell algorithm.
3.1 Cellular Genetic Algorithms
A canonical cGA follows the pseudo-code included in Algorithm 1. In this basic cGA, the population is usually structured in a regular grid of d dimensions (d = 1, 2, 3), and a neighborhood is defined on it. The algorithm iteratively considers as current each individual in the grid (line 3). An individual may only interact with individuals belonging to its neighborhood (line 4), so its parents are chosen among its neighbors (line 5) with a given criterion. Crossover and mutation operators are applied to the individuals in lines 6 and 7, with probabilities $P_c$ and $P_m$, respectively. Afterwards, the algorithm computes the fitness value of the new offspring individual (or individuals) (line 8), and inserts it (or one of them) into the equivalent place of the current individual in the new (auxiliary) population (line 9), following a given replacement policy.
After applying this reproductive cycle to all the individuals in the population, the newly generated auxiliary population becomes the population for the next generation (line 11). This loop is repeated until a termination condition is met (line 2). The most common termination conditions are reaching the optimal value, performing a maximum number of fitness function evaluations, or a combination of both.
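To make the loop of Algorithm 1 concrete, the following is a toy single-objective instance in Python. The operator choices here (per-gene uniform crossover, Gaussian mutation with σ = 0.1, replace-if-better) are assumptions for illustration, not the configuration MOCell uses later:

```python
import random

def cga(fitness, grid_w=10, grid_h=10, n_vars=3, p_m=0.1,
        max_evals=20000, lo=-5.0, hi=5.0):
    """Minimal canonical cGA: toroidal grid, Moore neighborhood,
    binary tournament selection, synchronous replacement."""
    size = grid_w * grid_h
    pop = [[random.uniform(lo, hi) for _ in range(n_vars)] for _ in range(size)]
    fit = [fitness(ind) for ind in pop]
    evals = size

    def neighbors(i):
        # Moore neighborhood: the 8 surrounding cells on a toroidal grid
        x, y = i % grid_w, i // grid_w
        return [((x + dx) % grid_w) + ((y + dy) % grid_h) * grid_w
                for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

    def tournament(n_list):
        a, b = random.sample(n_list, 2)
        return a if fit[a] <= fit[b] else b

    while evals < max_evals:
        aux_pop, aux_fit = list(pop), list(fit)   # synchronous update
        for i in range(size):
            n_list = neighbors(i)
            p1, p2 = tournament(n_list), tournament(n_list)
            # per-gene uniform crossover of the two selected parents
            child = [random.choice((pop[p1][k], pop[p2][k])) for k in range(n_vars)]
            # Gaussian mutation with probability p_m per gene, clipped to bounds
            child = [min(hi, max(lo, g + random.gauss(0.0, 0.1)))
                     if random.random() < p_m else g for g in child]
            f = fitness(child)
            evals += 1
            if f <= fit[i]:                       # replace-if-better policy
                aux_pop[i], aux_fit[i] = child, f
        pop, fit = aux_pop, aux_fit
    best = min(range(size), key=lambda i: fit[i])
    return pop[best], fit[best]
```

Running it on the sphere function, `cga(lambda x: sum(g * g for g in x))`, drives the best fitness toward zero; the synchronous two-population update mirrors lines 9 and 11 of Algorithm 1.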
3.2 A Multiobjective cGA: MOCell
In this section we present MOCell, a multiobjective algorithm based on the cGA model. Its pseudo-code is given in Algorithm 2. We can observe that Algorithms 1 and 2 are very similar. One of the main differences between the two algorithms is the existence of a Pareto front (Definition 5) in the multiobjective case. The Pareto front is simply an additional population (the external archive) of bounded maximum size that holds a number of the non-dominated solutions found. To manage the insertion of solutions into the Pareto front with the goal of obtaining a diverse set, a density estimator based on the crowding distance (proposed for NSGA-II [10]) is used. This measure is also used to remove solutions from the archive when it becomes full.
Algorithm 2 Pseudocode of MOCell
1: proc Steps_Up(mocell) // Algorithm parameters in 'mocell'
2: Pareto_front = Create_Front() // Creates an empty Pareto front
3: while not Termination_Condition() do
4:   for individual ← 1 to mocell.popSize do
5:     n_list ← Get_Neighborhood(mocell, position(individual));
6:     parents ← Selection(n_list);
7:     offspring ← Recombination(mocell.Pc, parents);
8:     offspring ← Mutation(mocell.Pm, offspring);
9:     Evaluate_Fitness(offspring);
10:    Insert(position(individual), offspring, mocell, aux_pop);
11:    Insert_Pareto_Front(individual);
12:  end for
13:  mocell.pop ← aux_pop;
14:  mocell.pop ← Feedback(mocell, Pareto_front);
15: end while
16: end proc Steps_Up;
MOCell starts by creating an empty Pareto front (line 2 in Algorithm 2). Individuals are arranged in a 2-dimensional toroidal grid, and the genetic operators are successively applied to them (lines 7 and 8) until the termination condition is met (line 3). Hence, for each individual, the algorithm selects two parents from its neighborhood, recombines them to obtain an offspring, mutates it, evaluates the resulting individual, and inserts it into both the auxiliary population (if it is not dominated by the current individual) and the Pareto front. Finally, after each generation, the old population is replaced by the auxiliary one, and a feedback procedure is invoked to replace a fixed number of randomly chosen individuals of the population with solutions from the archive.
We have incorporated a constraint-handling mechanism in MOCell to deal with constrained problems. The mechanism is the same one used by NSGA-II [10].
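The NSGA-II constraint-handling rule can be expressed as a constrained-dominance test. A sketch, where each solution is represented as a pair of its objective vector and an aggregated constraint-violation value (how violations are aggregated is our assumption for illustration):

```python
def constrained_dominates(a, b):
    """NSGA-II constrained-dominance. a and b are (objectives, violation)
    pairs; violation is the total constraint violation (0 means feasible)."""
    (fa, va), (fb, vb) = a, b
    if va == 0 and vb > 0:
        return True                  # a feasible solution beats an infeasible one
    if va > 0 and vb > 0:
        return va < vb               # between infeasible ones, less violation wins
    if va > 0 and vb == 0:
        return False
    # both feasible: ordinary Pareto dominance (minimization)
    return (all(x <= y for x, y in zip(fa, fb))
            and any(x < y for x, y in zip(fa, fb)))
```

Note that a feasible solution dominates any infeasible one regardless of how good the infeasible solution's objective values are.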
4 Computational Results
This section is devoted to the evaluation of MOCell. For that purpose, we have chosen several test problems from the specialized literature, and, in order to assess how competitive MOCell is, we compare it against two algorithms representative of the state of the art: NSGA-II and SPEA2. Next, we briefly comment on the main features of these algorithms, including the parameter settings used in the subsequent experiments.
The NSGA-II algorithm was proposed by Deb et al. [10]. It is characterized by a Pareto ranking of the individuals and the use of the crowding distance as a density estimator. We have used Deb's NSGA-II implementation¹. Specifically, we used the real-coded version of the algorithm and the parameter settings proposed in [10]: a crossover probability of $p_c = 0.9$ and a mutation probability of $p_m = 1/n$ (where $n$ is the number of decision variables). The operators for crossover and mutation are SBX and polynomial mutation [9], with distribution indexes of $\eta_c = 20$ and $\eta_m = 20$, respectively. The population and archive sizes are 100 individuals. The algorithm stops after 25000 function evaluations.
In Table 1 we show the parameters used by MOCell. A square toroidal grid of 100 individuals has been chosen for structuring the population. The neighborhood is composed of nine individuals: the considered individual plus those located at its North, East, West,
¹ The implementation of NSGA-II is available for download at: http://www.iitk.ac.in/kangal/soft.htm
Table 1. Parameterization used in MOCell

Population size       100 individuals (10 × 10)
Stopping condition    25000 function evaluations
Neighborhood          1-hop neighbors (8 surrounding solutions)
Selection of parents  binary tournament + binary tournament
Recombination         simulated binary (SBX), $p_c = 1.0$
Mutation              polynomial, $p_m = 1.0/L$ (L = individual length)
Replacement           replace if better individual (NSGA-II crowding)
Archive size          100 individuals
Density estimator     crowding distance
Feedback              20 individuals
South, NorthWest, SouthWest, NorthEast, and SouthEast positions (see Fig. 1c). We have also used SBX and polynomial mutation with the same distribution indexes as in NSGA-II and SPEA2. The crossover and mutation rates are $p_c = 1.0$ and $p_m = 1/L$, respectively.
The resulting offspring replaces the individual at the current position if the offspring is better but, as is usual in multiobjective optimization, we need to define the concept of best individual. Our approach is to replace the current individual if it is dominated by the offspring, or if both are non-dominated and the current individual has the worst crowding distance (as defined in NSGA-II) in a population composed of the neighborhood plus the offspring. For inserting individuals into the Pareto front, the solutions in the archive are also ordered according to the crowding distance; therefore, when inserting a non-dominated solution, if the Pareto front is already full, the solution with the worst crowding distance value is removed. Finally, after each iteration, 20 randomly chosen individuals of the population are replaced by the 20 best solutions from the external archive according to the crowding distance (feedback mechanism).
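The crowding-distance bookkeeping used for both replacement and archive pruning can be sketched as follows (the `prune` helper and its name are our illustration of the archive-full case, not code from the paper):

```python
def crowding_distance(front):
    """Crowding distance (Deb et al., NSGA-II) for a list of objective
    vectors: boundary points get infinity; interior points accumulate the
    normalized span of their neighbors along each objective."""
    n = len(front)
    if n == 0:
        return []
    m = len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float('inf')
        if hi == lo:
            continue
        for r in range(1, n - 1):
            dist[order[r]] += (front[order[r + 1]][k]
                               - front[order[r - 1]][k]) / (hi - lo)
    return dist

def prune(archive, max_size):
    """Drop the most crowded (lowest-distance) solutions until the archive fits."""
    while len(archive) > max_size:
        d = crowding_distance(archive)
        archive.pop(d.index(min(d)))
    return archive
```

Preferring solutions with larger crowding distance is what pushes both the archive and the replacement policy toward an evenly spread front.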
4.1 Test Problems
We have selected for our tests both constrained and unconstrained problems that have been used in most studies in this area. Since they are widely known, we do not include their full details here due to space constraints; they can be found in the cited references and also in books such as [7] and [8].
SPEA2 was proposed by Zitzler et al. in [17]. In this algorithm, each individual is assigned a fitness value that is the sum of its strength raw fitness and a density estimation based on the distance to the k-th nearest neighbor. As in the case of NSGA-II, we have used the authors' implementation of SPEA2². The algorithm is implemented within the PISA framework [5]. However, this implementation of SPEA2 does not include constraint handling, so we modified it to include the same constraint mechanism used in NSGA-II and MOCell. We have used the following parameter values: both the population and the archive have a size of 100 individuals, and the crossover and mutation operators are the same as those used in NSGA-II, with the same application probabilities and distribution indexes. As in NSGA-II, the stopping condition is to compute 25000 function evaluations.
² The implementation of SPEA2 is available at: http://www.tik.ee.ethz.ch/pisa/selectors/spea2/spea2.html
Table 2. Unconstrained test functions

Schaffer (n = 1):
  $f_1(x) = x^2$, $f_2(x) = (x - 2)^2$
  Bounds: $-10^5 \le x \le 10^5$

Fonseca (n = 3):
  $f_1(x) = 1 - e^{-\sum_{i=1}^{n}(x_i - 1/\sqrt{n})^2}$, $f_2(x) = 1 - e^{-\sum_{i=1}^{n}(x_i + 1/\sqrt{n})^2}$
  Bounds: $-4 \le x_i \le 4$

Kursawe (n = 3):
  $f_1(x) = \sum_{i=1}^{n-1}\left(-10\,e^{-0.2\sqrt{x_i^2 + x_{i+1}^2}}\right)$, $f_2(x) = \sum_{i=1}^{n}\left(|x_i|^a + 5 \sin x_i^b\right)$, with $a = 0.8$, $b = 3$
  Bounds: $-5 \le x_i \le 5$

ZDT1 (n = 30):
  $f_1(x) = x_1$, $f_2(x) = g(x)\left[1 - \sqrt{x_1/g(x)}\right]$, $g(x) = 1 + 9\left(\sum_{i=2}^{n} x_i\right)/(n-1)$
  Bounds: $0 \le x_i \le 1$

ZDT2 (n = 30):
  $f_1(x) = x_1$, $f_2(x) = g(x)\left[1 - (x_1/g(x))^2\right]$, $g(x) = 1 + 9\left(\sum_{i=2}^{n} x_i\right)/(n-1)$
  Bounds: $0 \le x_i \le 1$

ZDT3 (n = 30):
  $f_1(x) = x_1$, $f_2(x) = g(x)\left[1 - \sqrt{x_1/g(x)} - (x_1/g(x)) \sin(10\pi x_1)\right]$, $g(x) = 1 + 9\left(\sum_{i=2}^{n} x_i\right)/(n-1)$
  Bounds: $0 \le x_i \le 1$

ZDT4 (n = 10):
  $f_1(x) = x_1$, $f_2(x) = g(x)\left[1 - (x_1/g(x))^2\right]$, $g(x) = 1 + 10(n-1) + \sum_{i=2}^{n}\left[x_i^2 - 10 \cos(4\pi x_i)\right]$
  Bounds: $0 \le x_1 \le 1$, $-5 \le x_i \le 5$ for $i = 2, \ldots, n$

ZDT6 (n = 10):
  $f_1(x) = 1 - e^{-4x_1} \sin^6(6\pi x_1)$, $f_2(x) = g(x)\left[1 - (f_1(x)/g(x))^2\right]$, $g(x) = 1 + 9\left[\left(\sum_{i=2}^{n} x_i\right)/(n-1)\right]^{0.25}$
  Bounds: $0 \le x_i \le 1$
Table 3. Constrained test functions

Osyczka2 (n = 6):
  $f_1(x) = -\left(25(x_1-2)^2 + (x_2-2)^2 + (x_3-1)^2 + (x_4-4)^2 + (x_5-1)^2\right)$
  $f_2(x) = x_1^2 + x_2^2 + x_3^2 + x_4^2 + x_5^2 + x_6^2$
  Constraints: $g_1(x): 0 \le x_1 + x_2 - 2$; $g_2(x): 0 \le 6 - x_1 - x_2$; $g_3(x): 0 \le 2 - x_2 + x_1$; $g_4(x): 0 \le 2 - x_1 + 3x_2$; $g_5(x): 0 \le 4 - (x_3-3)^2 - x_4$; $g_6(x): 0 \le (x_5-3)^3 + x_6 - 4$
  Bounds: $0 \le x_1, x_2 \le 10$; $1 \le x_3, x_5 \le 5$; $0 \le x_4 \le 6$; $0 \le x_6 \le 10$

Tanaka (n = 2):
  $f_1(x) = x_1$, $f_2(x) = x_2$
  Constraints: $g_1(x): -x_1^2 - x_2^2 + 1 + 0.1 \cos(16 \arctan(x_1/x_2)) \le 0$; $g_2(x): (x_1-0.5)^2 + (x_2-0.5)^2 \le 0.5$
  Bounds: $-\pi \le x_i \le \pi$

ConstrEx (n = 2):
  $f_1(x) = x_1$, $f_2(x) = (1 + x_2)/x_1$
  Constraints: $g_1(x): x_2 + 9x_1 \ge 6$; $g_2(x): -x_2 + 9x_1 \ge 1$
  Bounds: $0.1 \le x_1 \le 1.0$, $0 \le x_2 \le 5$

Srinivas (n = 2):
  $f_1(x) = (x_1-2)^2 + (x_2-1)^2 + 2$, $f_2(x) = 9x_1 - (x_2-1)^2$
  Constraints: $g_1(x): x_1^2 + x_2^2 \le 225$; $g_2(x): x_1 - 3x_2 \le -10$
  Bounds: $-20 \le x_i \le 20$
The selected unconstrained problems include the classical studies of Schaffer, Fonseca, and Kursawe, as well as the problems ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6. Their formulation is provided in Table 2. The constrained problems are Osyczka2, Tanaka, Srinivas, and ConstrEx. They are described in Table 3.
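As an example of how these benchmark functions look in code, ZDT1 from Table 2 can be written directly from its definition (a sketch; any point with $g(x) = 1$, i.e., $x_2 = \cdots = x_n = 0$, lies on the Pareto-optimal front):

```python
import math

def zdt1(x):
    """ZDT1 (n = 30): f1 = x1, f2 = g * (1 - sqrt(x1 / g)),
    with g = 1 + 9 * sum(x[1:]) / (n - 1)."""
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    f1 = x[0]
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return (f1, f2)
```

For instance, the point $x = (0.25, 0, \ldots, 0)$ maps to $(0.25, 0.5)$ on the front, since $g = 1$ and $f_2 = 1 - \sqrt{0.25}$.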
4.2 Performance Metrics
For assessing the performance of the algorithms on the test problems, two different issues are normally taken into account: (i) minimizing the distance of the Pareto front generated by the proposed algorithm to the exact Pareto front, and (ii) maximizing the spread of the solutions found, so that the distribution of vectors is as smooth and uniform as possible. Determining the first issue usually requires knowing the exact location of the true Pareto front; in this work we have obtained these fronts using an enumerative search strategy, and they are publicly available at http://neo.lcc.uma.es/software/esam (the exception is the ZDT problem family, whose fronts can be easily computed because their solutions are known).
Generational Distance. This metric was introduced by Van Veldhuizen and Lamont [16] for measuring how far the elements in the set of non-dominated vectors found so far are from those in the Pareto optimal set. It is defined as:

$$GD = \frac{\sqrt{\sum_{i=1}^{n} d_i^2}}{n}, \quad (1)$$

where $n$ is the number of vectors in the set of non-dominated solutions, and $d_i$ is the Euclidean distance (measured in objective space) between each of these solutions and the nearest member of the Pareto optimal set. Clearly, a value of $GD = 0$ means that all the generated elements are in the Pareto optimal set. In order to get reliable results, the non-dominated sets are normalized before calculating this distance measure.
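Equation (1) translates directly to code; a sketch (normalization of the fronts, mentioned above, is assumed to have been done beforehand):

```python
import math

def generational_distance(approx, true_front):
    """GD = sqrt(sum_i d_i^2) / n, where d_i is the Euclidean distance
    (in objective space) from each approximation point to its nearest
    point on the true Pareto front."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    d2 = [min(dist2(a, t) for t in true_front) for a in approx]
    return math.sqrt(sum(d2)) / len(approx)
```

If every approximation point lies on the true front, GD is exactly zero.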
Spread. The Spread metric $\Delta$ [10] is a diversity metric that measures the extent of spread achieved among the obtained solutions. It is defined as:

$$\Delta = \frac{d_f + d_l + \sum_{i=1}^{N-1} |d_i - \bar{d}|}{d_f + d_l + (N-1)\bar{d}}, \quad (2)$$

where $d_i$ is the Euclidean distance between consecutive solutions, $\bar{d}$ is the mean of these distances, and $d_f$ and $d_l$ are the Euclidean distances to the extreme (bounding) solutions of the exact Pareto front in the objective space (see [10] for details). This metric takes the value zero for an ideal distribution, indicating a perfect spread of the solutions along the Pareto front. We apply this metric after a normalization of the objective function values.
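Equation (2) can be sketched for the bi-objective case, where ordering consecutive solutions along the first objective is well defined (the extreme points of the exact front are passed in as arguments, an assumption about how they are obtained):

```python
import math

def spread(front, extreme_f, extreme_l):
    """Deb's Delta metric for a bi-objective front. front: list of
    non-dominated objective vectors; extreme_f / extreme_l: the two
    extreme points of the exact Pareto front."""
    pts = sorted(front)                        # order solutions along f1
    d = [math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
    d_mean = sum(d) / len(d)
    d_f = math.dist(extreme_f, pts[0])         # gap to first extreme
    d_l = math.dist(extreme_l, pts[-1])        # gap to last extreme
    num = d_f + d_l + sum(abs(x - d_mean) for x in d)
    den = d_f + d_l + (len(pts) - 1) * d_mean
    return num / den
```

A perfectly uniform front whose endpoints coincide with the true extremes yields $\Delta = 0$; bunched-up solutions or missing extremes push it toward (and beyond) one.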
4.3 Discussion of the Results
The results are summarized in Tables 4 (GD) and 5 ($\Delta$); the best result for each problem has a grey colored background. For each problem, we carried out 100 independent runs, and the tables include the mean, $\bar{x}$, and the standard deviation, $\sigma_n$, of the results. Since we deal with stochastic algorithms, a statistical analysis of the results has been carried out. It consists of the following steps. First, a Kolmogorov-Smirnov test is performed in order to check whether the values of the results follow a normal distribution. If so, an ANOVA test is done; otherwise, we perform a Kruskal-Wallis test. We always consider a 95% confidence level in the statistical tests. The symbol + in Tables 4 and 5 means that the differences among the values of the three algorithms for a given problem are statistically significant (p-value below 0.05).
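The testing procedure just described can be sketched with SciPy (assuming SciPy is available; standardizing each sample before the Kolmogorov-Smirnov normality check is our simplification):

```python
from scipy import stats

def compare_samples(*samples, alpha=0.05):
    """Normality check on each sample via Kolmogorov-Smirnov against a
    standard normal; one-way ANOVA if all samples look normal,
    Kruskal-Wallis otherwise. Returns (test_name, p_value)."""
    normal = all(
        stats.kstest(stats.zscore(s), 'norm').pvalue > alpha for s in samples
    )
    if normal:
        return 'ANOVA', stats.f_oneway(*samples).pvalue
    return 'Kruskal-Wallis', stats.kruskal(*samples).pvalue
```

A returned p-value below 0.05 corresponds to the + symbol in Tables 4 and 5.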
We consider first the metric GD (Table 4). We can observe that each of the three compared algorithms provides the best result for four out of the 12 studied problems. According to these results we cannot declare a winner with respect to convergence, although they allow us to conclude that MOCell is competitive with NSGA-II and SPEA2. Indeed, if we consider only the constrained problems, MOCell behaves better than the other algorithms, since it reports the best
Table 4. Mean and standard deviation of the convergence metric GD

Problem    MOCell x̄ (σn)        NSGA-II x̄ (σn)       SPEA2 x̄ (σn)
Schaffer   2.408e-4 (1.79e-5)   2.328e-4 (1.19e-5)   2.365e-4 (1.06e-5)   +
Fonseca    1.983e-4 (1.60e-5)   4.683e-4 (3.95e-5)   2.251e-4 (2.37e-5)   +
Kursawe    1.435e-4 (1.01e-5)   2.073e-4 (2.22e-5)   1.623e-4 (1.51e-5)   +
ZDT1       4.057e-4 (6.57e-5)   2.168e-4 (3.57e-5)   1.992e-4 (1.34e-5)   +
ZDT2       2.432e-4 (9.29e-5)   1.714e-4 (3.80e-5)   1.095e-4 (5.37e-5)   +
ZDT3       2.540e-4 (2.78e-5)   2.199e-4 (3.72e-5)   2.336e-4 (1.34e-5)   +
ZDT4       8.273e-4 (1.85e-3)   4.888e-4 (2.59e-4)   6.203e-2 (3.94e-2)   +
ZDT6       2.106e-3 (3.33e-4)   1.001e-3 (8.66e-5)   8.252e-4 (5.15e-5)   +
ConstrEx   1.968e-4 (2.29e-5)   2.903e-4 (3.19e-5)   2.069e-4 (1.78e-5)   +
Srinivas   5.147e-5 (1.51e-5)   1.892e-4 (3.01e-5)   1.139e-4 (1.98e-5)   +
Osyczka2   2.678e-3 (5.30e-3)   1.071e-3 (1.33e-4)   6.149e-3 (1.14e-2)   +
Tanaka     7.494e-4 (7.09e-5)   1.214e-3 (7.95e-5)   7.163e-4 (7.13e-5)   +
results for ConstrEx and Srinivas, while it is the second best approach in the other two
problems.
Regarding the spread metric (Table 5), the results indicate that MOCell clearly outperforms the other two algorithms concerning the diversity of the obtained Pareto fronts, since it yields the best values in 9 out of the 12 problems. Additionally, MOCell reports the best results for all the constrained problems studied. It is remarkable that NSGA-II does not obtain the best value of the spread metric for any problem.
To illustrate our results graphically, we plot in Fig. 2 three fronts obtained by MOCell, NSGA-II, and SPEA2, together with the optimal Pareto front obtained by the enumerative algorithm, for problem ConstrEx. The selected fronts are those having the best diversity (lowest value of $\Delta$) among the 100 fronts produced by each technique for that problem. We can observe that the non-dominated set of solutions generated by MOCell achieves an almost perfect spread and convergence. Notice that SPEA2 obtains a very good diversity for values of $f_1(x)$ lower than 0.66 (similar to the solution of MOCell), but it finds only 10 solutions when $f_1(x) \in [0.66, 1.0]$. The front of NSGA-II does not suffer from that problem, but its poorer diversity with respect to MOCell is apparent at a glance.
Table 5. Mean and standard deviation of the diversity metric Δ

Problem    MOCell x̄ (σn)        NSGA-II x̄ (σn)       SPEA2 x̄ (σn)
Schaffer   2.473e-1 (3.11e-2)   4.448e-1 (3.62e-2)   1.469e-1 (1.14e-2)   +
Fonseca    9.695e-2 (1.08e-2)   3.596e-1 (2.83e-2)   1.445e-1 (1.28e-2)   +
Kursawe    4.121e-1 (4.32e-3)   5.460e-1 (2.41e-2)   4.390e-1 (8.94e-3)   +
ZDT1       1.152e-1 (1.40e-2)   3.645e-1 (2.91e-2)   1.684e-1 (1.29e-2)   +
ZDT2       1.120e-1 (1.61e-2)   3.644e-1 (3.03e-2)   1.403e-1 (6.71e-2)   +
ZDT3       6.998e-1 (3.25e-2)   7.416e-1 (2.25e-2)   7.040e-1 (1.78e-2)   +
ZDT4       1.581e-1 (6.14e-2)   3.651e-1 (3.32e-2)   1.049e-1 (1.71e-1)   +
ZDT6       1.859e-1 (2.33e-2)   2.988e-1 (2.48e-2)   1.728e-1 (1.16e-2)   +
ConstrEx   1.323e-1 (1.32e-2)   4.212e-1 (3.52e-2)   5.204e-1 (1.58e-2)   +
Srinivas   6.191e-2 (8.63e-3)   3.680e-1 (3.02e-2)   1.628e-1 (1.25e-2)   +
Osyczka2   2.237e-1 (3.50e-2)   4.603e-1 (5.58e-2)   3.145e-1 (1.35e-1)   +
Tanaka     6.629e-1 (2.76e-2)   7.154e-1 (2.35e-2)   6.655e-1 (2.74e-2)   +
Fig. 2. MOCell finds a better convergence and spread of solutions than NSGA-II and SPEA2 on problem ConstrEx
We also want to remark that, concerning diversity, MOCell is not only the best of the three analyzed algorithms, but the differences in the spread values with respect to the other algorithms are in general considerable.
5 Conclusions and Future Work
We have proposed MOCell, a cellular genetic algorithm for solving multiobjective optimization problems. The algorithm uses an external archive to store the non-dominated individuals found during the search. The most salient feature of MOCell with respect to other cellular approaches for multiobjective optimization is the feedback of individuals from the archive to the population. MOCell was validated using a standard methodology currently in use within the evolutionary multiobjective optimization community. The algorithm was compared against two state-of-the-art multiobjective optimizers, NSGA-II and SPEA2; for that purpose, twelve test problems, including unconstrained and constrained ones, were chosen, and two metrics were used to assess the performance of the algorithms. The results reveal that MOCell is competitive with respect to the convergence metric, and it clearly outperforms the other two proposals on the considered test problems according to the spread metric.
Finally, the evaluation of MOCell with other benchmarks and its application to real-world problems are matters for future work.
Acknowledgments
This work has been funded by FEDER and the Spanish MCYT under contract TIN2005-
08818-C04-01 (the OPLINK project, http://oplink.lcc.uma.es).
References
1. E. Alba and B. Dorronsoro. The exploration/exploitation trade-off in dynamic cellular evolutionary algorithms. IEEE TEC, 9(2):126–142, April 2005.
2. E. Alba, B. Dorronsoro, M. Giacobini, and M. Tomassini. Decentralized Cellular Evolutionary Algorithms. In Handbook of Bioinspired Algorithms and Applications, chapter 7, pages 103–120. CRC Press, 2006.
3. E. Alba, B. Dorronsoro, F. Luna, A. J. Nebro, P. Bouvry, and L. Hogie. A Cellular Multi-Objective Genetic Algorithm for Optimal Broadcasting Strategy in Metropolitan MANETs. Computer Communications, to appear, 2006.
4. E. Alba and M. Tomassini. Parallelism and Evolutionary Algorithms. IEEE Trans. on Evolutionary Computation, 6(5):443–462, October 2002.
5. S. Bleuler, M. Laumanns, L. Thiele, and E. Zitzler. PISA - A Platform and Programming Language Independent Interface for Search Algorithms. In EMO 2003, pages 494–508, 2003.
6. E. Cantú-Paz. Efficient and Accurate Parallel Genetic Algorithms. Kluwer Academic Publishers, 2000.
7. C. A. Coello, D. A. Van Veldhuizen, and G. B. Lamont. Evolutionary Algorithms for Solving Multi-Objective Problems. Genetic Algorithms and Evolutionary Computation. Kluwer Academic Publishers, 2002.
8. K. Deb. Multi-Objective Optimization using Evolutionary Algorithms. John Wiley & Sons, 2001.
9. K. Deb and R. B. Agrawal. Simulated Binary Crossover for Continuous Search Space. Complex Systems, 9:115–148, 1995.
10. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. IEEE TEC, 6(2):182–197, 2002.
11. M. Kirley. MEA: A metapopulation evolutionary algorithm for multi-objective optimisation problems. In CEC 2001, pages 949–956. IEEE Press, 2001.
12. J. Knowles and D. Corne. The Pareto Archived Evolution Strategy: A New Baseline Algorithm for Multiobjective Optimization. In Proceedings of the 1999 Congress on Evolutionary Computation, pages 98–105, Piscataway, NJ, 1999. IEEE Press.
13. M. Laumanns, G. Rudolph, and H.-P. Schwefel. A Spatial Predator-Prey Approach to Multi-Objective Optimization: A Preliminary Study. In PPSN V, pages 241–249, 1998.
14. B. Manderick and P. Spiessens. Fine-grained parallel genetic algorithms. In Proc. of the Third Int. Conf. on Genetic Algorithms (ICGA), pages 428–433, 1989.
15. T. Murata and M. Gen. Cellular Genetic Algorithm for Multi-Objective Optimization. In Proc. of the 4th Asian Fuzzy System Symposium, pages 538–542, 2002.
16. D. A. Van Veldhuizen and G. B. Lamont. Multiobjective Evolutionary Algorithm Research: A History and Analysis. Technical Report TR-98-03, Dept. Elec. Comput. Eng., Air Force Inst. Technol., Wright-Patterson AFB, OH, 1998.
17. E. Zitzler, M. Laumanns, and L. Thiele. SPEA2: Improving the strength Pareto evolutionary algorithm. Technical Report 103, Computer Engineering and Networks Laboratory (TIK), Swiss Federal Institute of Technology (ETH), 2001.