
Expert Systems with Applications 39 (2012) 2956–2964


Solving multiobjective problems using cat swarm optimization


Pyari Mohan Pradhan *, Ganapati Panda
School of Electrical Sciences, Indian Institute of Technology Bhubaneswar, India

* Corresponding author. Tel.: +91 8895621359. E-mail addresses: pyarimohan.pradhan@gmail.com (P.M. Pradhan), ganapati.panda@gmail.com (G. Panda).

Keywords: Multiobjective problems; Evolutionary algorithm; Swarm optimization; Cat swarm optimization; Multiobjective cat swarm optimization; Pareto dominance

Abstract: This paper proposes a new multiobjective evolutionary algorithm (MOEA) by extending the existing cat swarm optimization (CSO). It finds the nondominated solutions along the search process using the concept of Pareto dominance and uses an external archive for storing them. The performance of the proposed approach is demonstrated using standard test functions. A quantitative assessment of the proposed approach and a sensitivity test of its parameters are carried out using several performance metrics. The simulation results reveal that the proposed approach is a promising candidate for solving multiobjective problems (MOPs).

© 2011 Elsevier Ltd. All rights reserved.

1. Introduction

Most real-world problems have multiple conflicting objectives. In a single-objective optimization problem the optimal solution is clearly defined, whereas a MOP admits a set of trade-offs giving rise to numerous solutions. Each solution represents a particular performance trade-off between the objectives and can be considered optimal. Some basic concepts of multiobjective optimization (MOO) are briefly reviewed below.

1.1. Multiobjective optimization problem

A MOP may be stated as the minimization of the M components of a vector function z with respect to a vector variable x = (x_1, ..., x_n) in a universe U, i.e.,

$\min\, z(x) = [z_1(x), z_2(x), z_3(x), \ldots, z_M(x)]$   (1)

A solution u dominates v if u performs at least as well as v across all the objectives and performs better than v in at least one objective.

1.2. Pareto dominance

Given two candidate solutions u and v from U, vector z(u) is said to dominate vector z(v) (denoted by $z(u) \preceq z(v)$) if and only if

$z_i(u) \le z_i(v), \quad \forall i \in \{1, \ldots, M\}$   (2)

$z_i(u) < z_i(v), \quad \exists i \in \{1, \ldots, M\}$   (3)

If solution u is not dominated by any other solution, then u is declared a nondominated or Pareto optimal solution. No solution to the problem is superior to u, although there may be other equally good solutions.

1.3. Pareto optimality

The candidate solution u ∈ U is Pareto optimal if and only if

$z(v) \preceq z(u), \quad \nexists\, v \in U$   (4)

The set of solutions that satisfy (4) is known as the Pareto optimal set, and the fitness values corresponding to these solutions form the Pareto front or trade-off surface in objective space.
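To make the definitions in (2)–(4) concrete, the following is a minimal Python/NumPy sketch of the dominance test and the nondominated filtering it induces. The paper's experiments were carried out in MATLAB, so this sketch is illustrative only and the function names are ours.

```python
import numpy as np

def dominates(zu, zv):
    # Eqs. (2)-(3): zu is no worse in every objective (minimization)
    # and strictly better in at least one.
    zu, zv = np.asarray(zu), np.asarray(zv)
    return bool(np.all(zu <= zv) and np.any(zu < zv))

def nondominated(front):
    # Eq. (4): keep only the vectors not dominated by any other member.
    return [z for i, z in enumerate(front)
            if not any(dominates(w, z) for j, w in enumerate(front) if j != i)]

# (1, 2) and (2, 1) are mutually nondominated; (2, 3) is dominated by (1, 2).
print(nondominated([(1, 2), (2, 1), (2, 3)]))  # -> [(1, 2), (2, 1)]
```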
Although traditional gradient-based optimization techniques can be used to obtain Pareto optimal solutions, they suffer from many drawbacks. Most of them fail if the shape of the Pareto front is concave or disconnected. They require differentiability of the objective functions as well as of the constraints, and they need the multiple objectives to be aggregated into a single objective. Most of them provide a single solution from each run. In addition to the heavy computational cost, the inherent difficulty in finding an appropriate aggregation of the objectives necessitates the development of more efficient methods. Evolutionary algorithms are used for MOPs because they provide a set of solutions in a single run and do not require the objectives to be aggregated. Moreover, the performance of evolutionary algorithms is not affected by the shape of the Pareto front.

CSO is a recently developed evolutionary algorithm which imitates the natural behaviour of cats and belongs to the family of swarm intelligence methods. CSO is therefore a natural candidate for extension to MOO. In this paper, a new MOEA, called multiobjective cat swarm optimization (MOCSO), is proposed. The concept of an external archive and Pareto dominance is incorporated into MOCSO for dealing with the nondominated solutions.
2. Literature review

The literature provides a large number of classical techniques for solving MOPs (Ehrgott, 2005; Goicoechea, Hansen, & Duckstein, 1982; Miettinen, 1999; Steuer, 1986). These techniques can be applied to a MOP if the objective functions and the constraints are differentiable. Since they give a single solution from each run, several runs using different initial values are required to obtain the solution set.

In the mid-1980s, several evolutionary algorithms were proposed for MOO. In these algorithms, a part of the population is selected according to each individual objective. This method gives a set of solutions which dominate in one objective but are inferior in the other objectives, so an important part of the solution space remains undiscovered. Schaffer (1985) proposed the vector evaluated genetic algorithm (VEGA), in which the concept of dominance is implemented for the evaluation and selection of individuals. Evolutionary multiobjective (EMO) schemes based on weighted sums have also been proposed, in which the overall performance of an individual is calculated as a weighted sum of its performance in each of the objectives. Haleja and Lin (1992) included the weight vector in the solution genotype and allowed multiple weight combinations to propagate through the population during evolution.

Unlike these early attempts, the majority of modern EMO approaches are based on the concept of Pareto dominance. The concept of the Pareto optimum was originally introduced by Edgeworth (1881) and later generalized by Pareto (1896). Goldberg (1989) incorporated the concept of Pareto optimality into an evolutionary algorithm. He suggested that the use of nondominated ranking and selection moves the individuals in a population towards the Pareto front. These ideas of Goldberg have helped researchers in developing new MOEAs. Some of these MOEAs are briefly described next.

- Multiobjective genetic algorithm (MOGA) (Fonseca & Fleming, 1993): In this algorithm each individual in the population is assigned a rank equal to the number of individuals dominating it. The fitness is assigned to each individual using an interpolation between the best and the worst rank. The fitness of all individuals having the same rank is averaged, and this value is assigned to each one of them.
- Niched Pareto genetic algorithm (NPGA) (Horn, Nafpliotis, & Goldberg, 1994): It uses a tournament selection technique based on Pareto dominance. Two individuals are randomly chosen from the population and compared against a subset of the entire population. The nondominated individual is selected for reproduction.
- Non-dominated sorting genetic algorithm (NSGA) (Srinivas & Deb, 1995): This algorithm classifies individuals according to dominance in a ranking scheme. Each individual in the population is assigned a rank on the basis of nondomination, and each dominance class is assigned a dummy fitness value proportional to the population size. Since the process of Pareto ranking has to be repeated, NSGA is not very efficient. An improved version of this technique, called NSGA-II, is proposed by Deb, Agrawal, Pratab, and Meyarivan (2000b).
- Strength Pareto evolutionary algorithm (SPEA) (Zitzler & Thiele, 1999): This algorithm integrates three different features of MOEAs: the use of dominance for evaluation and selection, the use of an additional population for storing nondominated solutions, and the use of a niching or clustering technique. The fitness of each individual is computed using the strengths of the external nondominated solutions that dominate it. An improved version of this technique, called SPEA2, is presented in Zitzler, Laumanns, and Thiele (2001).
- Pareto-archived evolution strategy (PAES) (Knowles & Corne, 2000): This algorithm starts with a randomly initialized solution. In each iteration one solution is generated using mutation. An external archive is maintained to collect nondominated solutions, and each mutated individual is compared with the elements of the archive. An adaptive grid that divides the objective space in a recursive manner is used to maintain diversity of solutions.
- Multiobjective particle swarm optimization (MOPSO) (Coello & Lechuga, 2002): This algorithm uses an external archive to store the nondominated solutions. A special mutation operator is also incorporated to enhance the exploration capability of the particles. MOPSO is able to cover the full range of the Pareto front uniformly with exceptionally low computational cost.

In the recent past, a large number of variants of these MOEAs have been proposed. Adra and Fleming (2009) proposed a diversity management operator for MOEAs. The fuzzy clustering concept is introduced in Agrawal, Panigrahi, and Tiwari (2008). Daneshyari and Yen (2008) proposed a cultural framework to adapt the parameters of MOPSO.

Section 3 provides an outline of the CSO algorithm. The proposed MOCSO algorithm is developed in detail in Section 4. The performance metrics used for the analysis of results are discussed in Section 5. Simulation results of MOCSO in comparison to NSGA-II (Deb et al., 2000b) and MOPSO (Coello & Lechuga, 2002) are presented in Section 6. The comparison of MOCSO with more recent variants of MOPSO and NSGA-II is not included in this paper. Section 7 provides the sensitivity analysis of the MOCSO algorithm. Finally, some concluding remarks and future work are listed in Section 8.

3. Cat swarm optimization

Chu and Tsai (2007) proposed a new evolutionary algorithm, cat swarm optimization, which imitates the natural behavior of cats. Cats always remain alert while moving very slowly; this behavior is represented as the seeking mode. When the presence of a prey is sensed, cats chase it very quickly; this behavior, i.e. chasing with high speed, is represented as the tracing mode. These two modes have been mathematically modeled for solving optimization problems. The positions of the cats represent the solution set. Every cat has a position and a velocity for each dimension, and a fitness value. In addition, a flag is used to identify whether the cat is in seeking mode or tracing mode. Chu and Tsai (2007) and Panda, Pradhan, and Majhi (2011) have shown that CSO performs better than PSO with respect to convergence speed and residual mean square error, but it requires higher computation time.

4. Proposed approach

In order to extend CSO for solving MOPs, the Pareto ranking scheme is incorporated. The nondominated solutions obtained by the cats are stored in an external archive (Coello & Lechuga, 2002). The seeking mode corresponds to a global search process whereas the tracing mode corresponds to a local search process. An important property of CSO is that it provides local as well as global search capability simultaneously. This feature of CSO, combined with an external archive of nondominated solutions, provides faster convergence and better quality of nondominated solutions for MOCSO.
4.1. Algorithm

The CSO algorithm reaches its optimal solution using two groups of cats, i.e. one group containing cats in seeking mode and the other group containing cats in tracing mode. The two groups combine to solve the optimization problem. A mixture ratio (MR) defines the ratio of the number of cats in tracing mode to the number of cats in seeking mode. The flowchart of the MOCSO algorithm is shown in Fig. 1 and the steps of the algorithm are outlined below.

[Fig. 1. Flowchart for the MOCSO algorithm: N cats are created and initialized; their fitness is evaluated and the nondominated solutions are stored in the archive; according to the value of MR the cats are randomly distributed into seeking mode and tracing mode; after each mode update the fitness is evaluated, compared with the nondominated solutions in the archive, and the archive contents are updated; when the maximum number of generations is reached, the contents of the archive are the solutions.]

1. Randomly initialize the position of the cats in the D-dimensional space, i.e. X_id representing the position of the ith cat in the dth dimension.
2. Randomly initialize the velocity of the cats, i.e. V_id.
3. According to MR, cats are randomly picked from the population and their flag is set to seeking mode; for the others the flag is set to tracing mode.
4. Evaluate the fitness of each cat.
5. Store the positions of the cats representing nondominated solutions in the archive.
6. If the ith cat is in seeking mode, apply the cat to the seeking mode process; otherwise apply it to the tracing mode process.
7. Check the termination condition; if satisfied, terminate the program. Otherwise repeat steps 3 to 6.

A sketch of the archive update used in steps 5 and 6 is given below.
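The paper gives no pseudocode for the archive update, so the following Python sketch is our reading of it: candidate positions enter the archive only if they are nondominated, dominated members are evicted, and a random member is dropped when the fixed capacity is exceeded. The random pruning rule is our assumption; `dominates` is the helper sketched in Section 1.

```python
import random

def update_archive(archive, candidates, capacity=100):
    # archive and candidates are lists of (position, objective_vector) pairs.
    merged = archive + list(candidates)
    keep = [(x, z) for (x, z) in merged
            if not any(dominates(z2, z) for (_, z2) in merged if z2 is not z)]
    while len(keep) > capacity:                    # enforce the fixed archive size;
        keep.pop(random.randrange(len(keep)))      # random pruning is an assumption
    return keep
```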
4.2. Seeking mode

The seeking mode corresponds to a global search through the search space of the optimization problem. A term used in this mode is the seeking memory pool (SMP), which is the number of copies of a cat produced in seeking mode. The steps involved in this mode are:

1. Create T (= SMP) copies of the jth cat, i.e. Y_kd where 1 ≤ k ≤ T and 1 ≤ d ≤ D. D is the total number of dimensions.
2. Apply a mutation operator to each copy Y_k.
3. Evaluate the fitness of all mutated copies.
4. Update the contents of the archive with the positions of those mutated copies which represent nondominated solutions.
5. Pick a candidate randomly from the T copies and place it at the position of the jth cat.

A sketch of this procedure follows.
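Written out in the same Python style, the seeking-mode steps might look as follows. The paper does not specify the mutation operator, so the Gaussian perturbation and its `rate`/`scale` parameters below are placeholders rather than the authors' choice; `update_archive` is the sketch from Section 4.1.

```python
import numpy as np

def seeking_mode(cat, archive, evaluate, lo, hi, smp=10, rate=0.2, scale=0.1):
    # Step 1: create T = SMP copies of the cat.
    copies = np.repeat(cat[None, :], smp, axis=0)
    # Step 2: mutate each copy (Gaussian perturbation -- an assumption).
    mask = np.random.rand(*copies.shape) < rate
    copies = np.clip(copies + mask * scale * (hi - lo) * np.random.randn(*copies.shape),
                     lo, hi)
    # Steps 3-4: evaluate the copies and push the nondominated ones
    # into the archive (update_archive filters out dominated entries).
    evaluated = [(c, evaluate(c)) for c in copies]
    archive = update_archive(archive, evaluated)
    # Step 5: a randomly picked copy replaces the cat.
    return copies[np.random.randint(smp)], archive
```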
4.3. Tracing mode

The tracing mode corresponds to a local search for the optimization problem. In this mode the cat traces the target while spending high energy. The rapid chase of the cat is mathematically modeled as a large change in its position. Define the position and velocity of the ith cat in the D-dimensional space as X_i = (X_i1, X_i2, ..., X_iD) and V_i = (V_i1, V_i2, ..., V_iD), where d (1 ≤ d ≤ D) represents the dimension. The global best position of the cat swarm is represented as X_g = (X_g1, X_g2, ..., X_gD). The steps involved in tracing mode are:

1. Compute the new velocity of the ith cat using (5),

$V_{id} = w \cdot V_{id} + c \cdot r \cdot (X_{gd} - X_{id})$   (5)

where w is the inertia weight, c is the acceleration constant and r is a random number uniformly distributed in the range [0, 1]. The global best X_g is selected randomly from the external archive.
2. Compute the new position of the ith cat using (6).

$X_{id} = X_{id} + V_{id}$   (6)

3. If the new position of the ith cat in any dimension goes beyond the search space, the corresponding boundary value is assigned to that dimension and the velocity in that dimension is multiplied by −1 to continue the search in the opposite direction.
4. Evaluate the fitness of the cats.
5. Update the contents of the archive with the positions of those cats which represent nondominated vectors.

A sketch of the tracing-mode update, and of the main loop that ties both modes together, is given below.
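The following Python sketch implements (5) and (6) with the boundary reflection of step 3, and then assembles the loop of Section 4.1 from the pieces sketched so far (`update_archive`, `seeking_mode`). Treating MR as the probability that a cat is placed in tracing mode, and the parameter defaults (w = 0.4, c = 2, SMP = 10, MR = 0.5, archive size 100, taken from Section 6), are our reading of the paper; this is not the authors' code.

```python
import numpy as np

def tracing_mode(pos, vel, archive, lo, hi, w=0.4, c=2.0):
    xg = archive[np.random.randint(len(archive))][0]   # random global best (step 1)
    r = np.random.rand(pos.size)
    vel = w * vel + c * r * (xg - pos)                 # Eq. (5)
    pos = pos + vel                                    # Eq. (6)
    out = (pos < lo) | (pos > hi)                      # step 3: clamp to the bounds
    pos = np.clip(pos, lo, hi)
    vel[out] *= -1.0                                   # ... and reverse the velocity
    return pos, vel

def mocso(evaluate, lo, hi, n_cats=100, mr=0.5, smp=10, generations=40):
    dim = lo.size
    pos = lo + np.random.rand(n_cats, dim) * (hi - lo)   # steps 1-2: initialize
    vel = np.zeros((n_cats, dim))
    archive = update_archive([], [(p.copy(), evaluate(p)) for p in pos])
    for _ in range(generations):
        in_tracing = np.random.rand(n_cats) < mr         # step 3: assign the flags
        for i in range(n_cats):
            if in_tracing[i]:                            # step 6: tracing mode
                pos[i], vel[i] = tracing_mode(pos[i], vel[i], archive, lo, hi)
                archive = update_archive(archive, [(pos[i].copy(), evaluate(pos[i]))])
            else:                                        # step 6: seeking mode
                pos[i], archive = seeking_mode(pos[i], archive, evaluate, lo, hi, smp)
    return archive   # the archive members are the reported solution set
```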

5. Performance metrics

With the existence of different MOEAs, it is necessary to quantify the performance of MOCSO on a number of test problems. Some performance metrics proposed by Deb (2001) are discussed below.

1. Set coverage metric (SCM): This metric, suggested by Zitzler (1999), measures the relative spread of solutions between the obtained nondominated solution set and the Pareto-optimal set. The set coverage metric C(P*, Q) calculates the proportion of solutions in Q which are weakly dominated by solutions of P*, where P* is the Pareto-optimal set and Q is the obtained nondominated solution set.
2. Generational distance (GD): This metric is proposed by Veldhuizen (1999). It finds the average distance of the nondominated solutions found so far from the Pareto optimal set. It is defined as

$GD = \frac{\left( \sum_{i=1}^{N} d_i^2 \right)^{1/2}}{N}$   (7)

where N is the number of nondominated solutions found so far and d_i is the Euclidean distance between the ith solution and the nearest member of the Pareto optimal set.

3. Maximum Pareto-optimal front error (MFE): Veldhuizen (1999) also suggested another metric which measures the maximum value of d_i, where d_i is the Euclidean distance between the ith solution of the nondominated front and the member of the Pareto optimal set nearest to it. This metric gives a broad idea of the closeness to the Pareto optimal front.

4. Spacing (S): This metric is proposed by Schott (1995). It is calculated with a relative distance measure between consecutive solutions in the obtained nondominated front,

$S = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \left( d_i - \bar{d} \right)^2 }$   (8)

where N is the number of nondominated solutions found so far and $d_i = \min_k \sum_{m=1}^{M} \left| f_m^i - f_m^k \right|$ with $k = 1, \ldots, N$ and $i \ne k$. M is the total number of objectives to be optimized and $\bar{d}$ is the mean of all d_i. An algorithm having a smaller S is better because its solutions are closer to uniformly spread.

5. Spread (Δ): Deb, Agrawal, Pratab, and Meyarivan (2000a) suggested a metric which measures the extent of spread in the obtained solution set,

$\Delta = \frac{ \sum_{m=1}^{M} d_m^e + \sum_{i=1}^{N} \left| d_i - \bar{d} \right| }{ \sum_{m=1}^{M} d_m^e + N \bar{d} }$   (9)

where N is the number of nondominated solutions found so far, M is the total number of objectives to be optimized, d_i is the distance between neighboring solutions and $\bar{d}$ is the mean of all d_i. The parameter $d_m^e$ is the distance between the extreme solutions of the Pareto optimal set and the obtained nondominated solution set corresponding to the mth objective function.
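As an illustration, minimal Python implementations of the set coverage metric, (7), the MFE and (8) could read as follows — sketches written from the definitions above, not reference implementations:

```python
import numpy as np

def set_coverage(p_star, q):
    # C(P*, Q): fraction of Q weakly dominated by some member of P*.
    def weakly_dominates(a, b):
        return all(ai <= bi for ai, bi in zip(a, b))
    return sum(any(weakly_dominates(p, s) for p in p_star) for s in q) / len(q)

def generational_distance(front, pareto_set):
    # Eq. (7): d_i is the Euclidean distance to the nearest Pareto-set member.
    front, pareto_set = np.asarray(front, float), np.asarray(pareto_set, float)
    d = np.array([np.linalg.norm(pareto_set - f, axis=1).min() for f in front])
    return np.sqrt((d ** 2).sum()) / len(front)

def max_front_error(front, pareto_set):
    # MFE: the largest of the distances d_i used in Eq. (7).
    front, pareto_set = np.asarray(front, float), np.asarray(pareto_set, float)
    return max(np.linalg.norm(pareto_set - f, axis=1).min() for f in front)

def spacing(front):
    # Eq. (8): d_i is the minimum L1 distance from solution i to any other one.
    front = np.asarray(front, float)
    dist = np.abs(front[:, None, :] - front[None, :, :]).sum(axis=2)
    np.fill_diagonal(dist, np.inf)
    d = dist.min(axis=1)
    return np.sqrt(((d - d.mean()) ** 2).mean())
```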
Apart from these metrics, the computation time is also measured, to show the trade-off between the quality of solution and computation time for the proposed approach.

6. Simulation results and analysis

A simulation study is carried out in MATLAB to demonstrate the potential of the MOCSO algorithm for the function approximation task. To validate the performance of our approach, the results are compared with those of two initial versions of MOEAs, i.e. MOPSO and NSGA-II. After several experiments with different simulation parameters, the following were found suitable for providing the best performance. The initial population chosen for all three algorithms is 100. The simulation parameters used for NSGA-II are: real coded GA, simulated binary crossover with distribution index 20, polynomial mutation with distribution index 20, and binary tournament selection with pool size 50 and tour size 2. The simulation parameters used for MOPSO are: archive size 100, inertia weight 0.4, both acceleration constants equal to 2, and random numbers chosen in the range [0, 1]. The parameters for MOCSO are: SMP = 10, MR = 0.5, c = 2, w = 0.4, archive size 100, and r in the range [0, 1]. These parameters are kept constant for all simulation exercises. The total number of fitness evaluations is varied for the different test functions. The performance of MOCSO in comparison to MOPSO and NSGA-II is validated using the results obtained with four standard test functions. In each case, the results obtained from 30 independent experiments are reported and the best average result with respect to each metric is shown in bold font. The Pareto fronts shown for each function correspond to the average result obtained with respect to generational distance.

6.1. Test function 1

This function was proposed by Deb (2002):

$F: \quad \min f_1(x_1, x_2) = x_1, \qquad \min f_2(x_1, x_2) = g(x_1, x_2) \cdot h(x_1, x_2)$   (10)

where

$g(x_1, x_2) = 11 + x_2^2 - 10 \cos(2 \pi x_2)$

$h(x_1, x_2) = \begin{cases} 1 - \sqrt{ f_1(x_1, x_2) / g(x_1, x_2) }, & \text{if } f_1(x_1, x_2) \le g(x_1, x_2) \\ 0, & \text{otherwise} \end{cases}$

and 0 ≤ x_1 ≤ 1, −30 ≤ x_2 ≤ 30.

In this example, the total number of fitness evaluations carried out is 4000. The Pareto fronts produced by MOCSO, MOPSO and NSGA-II for the first test function are shown in Figs. 2–4. MOCSO and MOPSO are able to cover the entire Pareto front whereas NSGA-II fails. The true Pareto front for the first test function is shown as a continuous line in each figure.

[Fig. 2. Pareto front produced by MOCSO for the first test function (f_1 vs. f_2; the continuous line is the true Pareto front).]
[Fig. 3. Pareto front produced by MOPSO for the first test function.]

[Fig. 4. Pareto front produced by NSGA-II for the first test function.]

Table 1
Performance metrics for the first test function.

Algorithm                              MOCSO        MOPSO        NSGA-II
Set coverage metric
  Average                              0.138        0.231        0.4297
  Std. dev.                            0.0276       0.0192       0.3641
Generational distance
  Average                              0.0007692    0.001        0.0265
  Std. dev.                            0.00005743   0.0001905    0.0000407
Maximum Pareto-optimal front error
  Average                              0.0401       0.0466       0.543
  Std. dev.                            0.0048       0.0031       0.8068
Spacing
  Average                              0.009        0.0089       0.009
  Std. dev.                            0.0007       0.0034       0.0087
Spread
  Average                              0.6077       0.72         0.6594
  Std. dev.                            0.0366       0.0967       0.189
Computation time
  Average                              1.7489       1.3141       3.8998
  Std. dev.                            0.0908       0.0642       0.3742

Table 1 shows the comparison of the different performance metrics among the three algorithms. It can be seen that MOCSO performs better with respect to all metrics except spacing, in which MOPSO performs slightly better but with a higher standard deviation. It also shows that MOCSO requires slightly higher computation time than MOPSO for the first test function. This limitation of MOCSO is attributed to the inherently higher computation time requirement of the CSO algorithm. In PSO the complete swarm of particles starts its flight with a global search and finishes with a local search in the last iteration, and all the particles undergo a similar velocity and position update in each iteration. In CSO, by contrast, a part of the population performs a global search (seeking mode) in each iteration while the remaining part performs a local search (tracing mode). In the last iteration, most of the cats reach the final solution whereas a part of the population continues the global search expecting a better solution. Since local and global search are carried out independently in each iteration, CSO requires a higher computation time in comparison to PSO.

Another limitation of the MOCSO algorithm is that the cats get copied in the seeking mode during the optimization process, so MOCSO handles a larger population in each iteration than the NSGA-II and MOPSO algorithms. This also increases the computation time requirement of the MOCSO algorithm.

6.2. Test function 2

This function was proposed by Kita, Yabumoto, Mori, and Nishikawa (1996) and is used as a test function for constrained optimization. In this case multiple objectives are optimized in the presence of some linear or nonlinear constraints. The presence of constraints makes it difficult for MOEAs to converge to the true Pareto-optimal front and also causes difficulty in obtaining a uniform distribution of solutions along the nondominated front. Hence the performance of a MOEA is mostly determined by its constraint handling ability.

This function is defined as

$F: \quad \max f_1(x_1, x_2) = -x_1^2 + x_2, \qquad \max f_2(x_1, x_2) = \tfrac{1}{2} x_1 + x_2 + 1$   (11)

subject to

$\tfrac{1}{6} x_1 + x_2 - \tfrac{13}{2} \le 0$

$\tfrac{1}{2} x_1 + x_2 - \tfrac{15}{2} \le 0$

$5 x_1 + x_2 - 30 \le 0$

and x_1, x_2 ≥ 0. The range adopted for our simulation exercise is 0 ≤ x_1, x_2 ≤ 7. A code sketch of this function follows.
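As with the first function, (11) and its constraints transcribe directly into Python. This sketch is ours; the sign of the $x_1^2$ term follows Kita et al. (1996), since the minus sign was lost in the source text.

```python
def test_function_2(x1, x2):
    # Eq. (11): both objectives are maximized (a minimizing solver
    # would negate them); 0 <= x1, x2 <= 7.
    f1 = -x1 ** 2 + x2
    f2 = 0.5 * x1 + x2 + 1.0
    feasible = (x1 / 6.0 + x2 - 13.0 / 2.0 <= 0.0 and
                x1 / 2.0 + x2 - 15.0 / 2.0 <= 0.0 and
                5.0 * x1 + x2 - 30.0 <= 0.0 and
                x1 >= 0.0 and x2 >= 0.0)
    return (f1, f2), feasible
```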
In this example, the total number of fitness evaluations carried out is 5000. Figs. 5–7 show the Pareto fronts produced by MOCSO, MOPSO and NSGA-II for the second test function; in each figure the true Pareto front is shown as a continuous line. It can be seen that MOCSO and MOPSO cover the complete Pareto front whereas NSGA-II is not able to cover some parts of the true Pareto front. MOCSO is able to obtain nondominated solutions having f_1 < −4 whereas MOPSO is not able to go beyond this limit.

[Fig. 5. Pareto front produced by MOCSO for the second test function.]

[Fig. 6. Pareto front produced by MOPSO for the second test function.]

[Fig. 7. Pareto front produced by NSGA-II for the second test function.]

Table 2
Performance metrics for the second test function.

Algorithm                              MOCSO     MOPSO     NSGA-II
Set coverage metric
  Average                              0.037     0.0414    0.0641
  Std. dev.                            0.0222    0.029     0.0287
Generational distance
  Average                              0.0274    0.0467    0.04
  Std. dev.                            0.0324    0.0535    0.044
Maximum Pareto-optimal front error
  Average                              2.1071    2.4138    3.0051
  Std. dev.                            2.2187    3.6838    3.4251
Spacing
  Average                              0.1592    0.3184    0.1462
  Std. dev.                            0.2338    0.4778    0.1518
Spread
  Average                              1.0169    0.9925    0.7863
  Std. dev.                            0.1147    0.1176    0.1951
Computation time
  Average                              3.6067    1.6886    9.8249
  Std. dev.                            0.3799    0.2247    0.6076

Table 2 shows the comparison of the different performance metrics among the three algorithms. It can be seen that MOCSO performs best with respect to the set coverage metric, generational distance and maximum Pareto-optimal front error, whereas NSGA-II gives a better performance with respect to spacing and spread. It also shows that MOCSO requires higher computation time in comparison to MOPSO; the reason is explained in Section 6.1.
7. Sensitivity analysis

In this section the impact of different parameters on the performance of the MOCSO algorithm is studied. Four different experiments are carried out to demonstrate the effects of the parameters.

7.1. Experiment 1: Effect of variation in population size

The number of cats and the number of iterations are varied together so that the total number of fitness evaluations is maintained. The variation in performance is measured with respect to the different metrics discussed in Section 5. Runs using 5, 25, 50, 75 and 100 cats are carried out. The results are discussed below in detail.

7.1.1. Test function 1
Table 3 shows that a population size of 100 cats provided the best average results with respect to most of the metrics. With respect to the set coverage metric, the use of 25 cats provided the best average result, but with a higher standard deviation in comparison to the result obtained using 100 cats.

7.1.2. Test function 2
A population size of 75 cats provided the best average results with respect to generational distance, maximum Pareto-optimal front error, spacing and spread, as shown in Table 4. The use of 100 cats provided the best average result with respect to the set coverage metric.

7.1.3. Observation
The use of 100 cats performed best with respect to most of the metrics for the test functions 1, 2 and 3, while a population size of 75 cats provided the best average results for the second test function. Hence a population size of 100 cats is a reasonable choice for any optimization problem.
Table 3
Results of experiment 1 for the first test function.

Cats                                   5          25         50         75         100
Set coverage metric
  Average                              0.1143     0.1023     0.108      0.132      0.138
  Std. dev.                            0.0387     0.0284     0.0316     0.0365     0.0276
Generational distance
  Average                              0.000834   0.00081    0.000824   0.000792   0.000769
  Std. dev.                            0.000055   0.000065   0.000071   0.00008    0.000057
Maximum Pareto-optimal front error
  Average                              0.0437     0.0426     0.0441     0.0424     0.0401
  Std. dev.                            0.004      0.0053     0.0049     0.006      0.0048
Spacing
  Average                              0.0092     0.0092     0.0095     0.0094     0.009
  Std. dev.                            0.00093    0.000718   0.000679   0.000963   0.0007
Spread
  Average                              0.6641     0.6545     0.6465     0.654      0.6077
  Std. dev.                            0.0402     0.0476     0.0478     0.0435     0.0366
Table 4
Results of experiment 1 for the second test function.

Cats                                   5         25        50        75        100
Set coverage metric
  Average                              0.038     0.045     0.043     0.0443    0.037
  Std. dev.                            0.0225    0.0236    0.0249    0.0281    0.0222
Generational distance
  Average                              0.0327    0.0306    0.0338    0.0204    0.0274
  Std. dev.                            0.042     0.0379    0.0394    0.0279    0.0324
Maximum Pareto-optimal front error
  Average                              2.9918    2.5704    2.74      1.5388    2.1071
  Std. dev.                            4.0121    3.5559    3.0286    1.8489    2.2187
Spacing
  Average                              0.283     0.2068    0.1962    0.1249    0.1592
  Std. dev.                            0.3556    0.3092    0.176     0.0681    0.2338
Spread
  Average                              1.0527    1.0229    1.0379    0.9811    1.0169
  Std. dev.                            0.1191    0.1091    0.0899    0.0748    0.1147

7.2. Experiment 2: Effect of variation in archive size

In this experiment the size of the archive is varied while all other parameters are kept constant. Runs with archive sizes of 100, 150, 200, 250 and 300 are carried out.

7.2.1. Test function 1
Table 5 shows that the best average results with respect to maximum Pareto-optimal front error and spread are obtained using an archive size of 100. A value of 300 provided better average results with respect to the set coverage metric, generational distance and spacing.

7.2.2. Test function 2
As we can see in Table 6, a mixed set of results is obtained. An archive size of 100 performed best with respect to spread, while an archive size of 250 performed best with respect to the set coverage metric, generational distance, maximum Pareto-optimal front error and spacing.

7.2.3. Observation
It is seen that archive sizes of 100 and 300 provided the best average results for different performance metrics. With an increase in the size of the archive, more nondominated solutions enter the archive. In each iteration the nondominated vectors are compared with all members of the archive, so with an increase in the archive size the number of comparisons carried out for each nondominated vector increases. Hence the search effort and complexity involved in the selection of a new member of the archive increase with the size of the archive. Keeping this in mind, a value of 100 is adopted for our simulation exercise.

7.3. Experiment 3: Effect of variation in SMP

In this experiment the value of SMP is varied while all other parameters are kept unchanged. Runs using 10, 30, 50, 70 and 100 as SMP are carried out.

7.3.1. Test function 1
Table 7 shows that a value of 10 provided the best average results with respect to all metrics.

7.3.2. Test function 2
In this case a mixed set of results is obtained. As we can see in Table 8, a value of 10 performed best with respect to the set coverage metric and spread. A value of 50 provided the best average result with respect to spacing, and a value of 100 performed best with respect to generational distance and maximum Pareto-optimal front error. We note that for those metrics where a value of 100 performed best, a value of 10 provided competitive results.

7.3.3. Observation
It is seen that a value of 10 for SMP provided the best average results for most of the performance metrics in test functions 1 and 2, although a set of mixed results is obtained for the second test function. With an increase in the value of SMP, the size of the population increases and hence the computation time also increases. Hence a value of 10 for SMP is a reasonable choice for any problem.
crease in the archive size, the number of comparisons carried out sults for most of the performance metrics in test functions 1 and

Table 5
Results of experiment 2 for the first test function.

Archive size                           100        150        200        250        300
Set coverage metric
  Average                              0.138      0.1325     0.1322     0.1204     0.1086
  Std. dev.                            0.0276     0.0314     0.035      0.031      0.0237
Generational distance
  Average                              0.00077    0.00062    0.00049    0.00044    0.00039
  Std. dev.                            0.00006    0.00007    0.00007    0.00006    0.00005
Maximum Pareto-optimal front error
  Average                              0.0401     0.0438     0.0407     0.0434     0.0439
  Std. dev.                            0.0048     0.0047     0.0096     0.0048     0.0043
Spacing
  Average                              0.009      0.0063     0.005      0.0043     0.0037
  Std. dev.                            0.0007     0.00038    0.00044    0.0003     0.00024
Spread
  Average                              0.6077     0.6873     0.7161     0.7668     0.7843
  Std. dev.                            0.0366     0.0408     0.0349     0.0349     0.036

Table 6
Results of experiment 2 for the second test function.

Archive size                           100       150       200       250       300
Set coverage metric
  Average                              0.037     0.0418    0.0343    0.032     0.0321
  Std. dev.                            0.0222    0.0231    0.0247    0.0186    0.0169
Generational distance
  Average                              0.0274    0.0193    0.0261    0.0179    0.0213
  Std. dev.                            0.0324    0.0196    0.0343    0.0205    0.0241
Maximum Pareto-optimal front error
  Average                              2.1071    2.5364    3.0608    2.0204    2.8509
  Std. dev.                            2.2187    2.698     4.0836    1.9703    3.3824
Spacing
  Average                              0.1592    0.1682    0.1461    0.0765    0.129
  Std. dev.                            0.2338    0.169     0.2227    0.0481    0.1591
Spread
  Average                              1.0169    1.1102    1.1433    1.1387    1.1469
  Std. dev.                            0.1147    0.082     0.1053    0.0591    0.0796

Table 7
Results of experiment 3 for the first test function.

SMP                                    10         30         50         70         100
Set coverage metric
  Average                              0.138      0.1637     0.162      0.161      0.1557
  Std. dev.                            0.0276     0.0438     0.0402     0.0372     0.0415
Generational distance
  Average                              0.000769   0.000811   0.000789   0.000824   0.000832
  Std. dev.                            0.000057   0.000117   0.000094   0.00007    0.000058
Maximum Pareto-optimal front error
  Average                              0.0401     0.0413     0.0426     0.0442     0.0447
  Std. dev.                            0.0048     0.0087     0.0068     0.0047     0.0038
Spacing
  Average                              0.009      0.0098     0.0094     0.0095     0.0095
  Std. dev.                            0.0007     0.000862   0.000701   0.000934   0.000932
Spread
  Average                              0.6077     0.6359     0.6703     0.6496     0.6677
  Std. dev.                            0.0366     0.0349     0.0449     0.0442     0.044
Table 8
Results of experiment 3 for the second test function.

SMP                                    10        30        50        70        100
Set coverage metric
  Average                              0.037     0.0537    0.058     0.0553    0.0697
  Std. dev.                            0.0222    0.0353    0.0335    0.0261    0.0416
Generational distance
  Average                              0.0274    0.0423    0.0318    0.0296    0.0264
  Std. dev.                            0.0324    0.0552    0.0513    0.0404    0.0407
Maximum Pareto-optimal front error
  Average                              2.1071    3.1567    2.3482    2.4434    2.0649
  Std. dev.                            2.2187    4.0586    3.314     3.3579    2.9548
Spacing
  Average                              0.1592    0.1855    0.1469    0.1934    0.1579
  Std. dev.                            0.2338    0.235     0.1422    0.2511    0.1634
Spread
  Average                              1.0169    1.0773    1.0665    1.0452    1.0885
  Std. dev.                            0.1147    0.1051    0.0977    0.0887    0.097
8. Conclusion

Several papers in the literature have shown the importance of MOEAs and the potential applications that are emerging with the development of this area of research. The development of low-complexity MOEAs still remains a challenge for researchers.

In this paper, a new MOEA is developed by extending CSO to handle MOPs. It uses local and global search processes simultaneously, which provides a high convergence rate as shown in Chu and Tsai (2007). To optimize multiple objectives simultaneously, the proposed algorithm maintains an external archive and uses the archive members to dynamically lead the cat swarm in searching for more and better nondominated solutions. An extensive performance assessment of MOCSO is carried out using the results of different metrics for several standard test functions. In order to create a common platform for comparison, the results of MOCSO are compared with the results obtained with the initial versions of NSGA-II in Deb et al. (2000b) and MOPSO in Coello and Lechuga (2002). The simulation results clearly show that the proposed algorithm is able to cover the full Pareto front and provides a superior quality of solution in comparison to the initial versions of MOPSO and NSGA-II. The average values of the different metrics for the proposed MOCSO algorithm suggest that it maintains a better spread of solutions and converges better to the true Pareto optimal fronts. With respect to computation time, MOCSO performs much better than NSGA-II but poorer than MOPSO; the higher computation time requirement of MOCSO is explained in Section 6.1. The influence of the population size, archive size and seeking memory pool on the performance of MOCSO has also been studied. The efficiency of MOCSO compared to the MOPSO algorithm should become apparent in more realistic problems such as sensor networks and cognitive radio, which will be dealt with in future work.

In future, a crowding operator can be used to obtain a uniform distribution of solutions along the Pareto front (Deb, Agrawal, Pratab, & Meyarivan, 2002). In order to enhance the performance, different variants of MOCSO can be constructed by incorporating features like dynamic swarms, a cultural framework, fuzzy clustering, a diversity management operator or substitute distance assignment. A more detailed study of the different parameters is required to reduce the computation time of the MOCSO algorithm. The proposed study can also be extended to perform under a dynamic environment (Branke, 2002). Investigations of the convergence properties of MOEAs have given analysts and decision makers more confidence in the ability of an MOEA to solve MOPs; hence the convergence analysis of the MOCSO algorithm is also an interesting problem which needs investigation. Some recent studies have suggested that Pareto ranking schemes perform poorly in the presence of many (more than two) objectives (Purshouse, 2003). The study of the performance of MOCSO in solving many-objective problems is also challenging and needs investigation.

References

Adra, S. F., & Fleming, P. F. (2009). A diversity management operator for evolutionary many-objective optimisation. In M. Ehrgott, C. M. Fonseca, X. Gandibleux, J.-K. Hao, & M. Sevaux (Eds.), Evolutionary multi-criterion optimization. 5th international conference, EMO 2009. Lecture notes in computer science (Vol. 5467, pp. 81–94). Nantes, France: Springer.
Agrawal, S., Panigrahi, B., & Tiwari, M. K. (2008). Multiobjective particle swarm algorithm with fuzzy clustering for electrical power dispatch. IEEE Transactions on Evolutionary Computation, 12, 529–541.
Branke, J. (2002). Evolutionary optimization in dynamic environments. Norwell, MA: Kluwer.
Chu, S.-C., & Tsai, P.-W. (2007). Computational intelligence based on the behavior of cats. International Journal of Innovative Computing, Information and Control, 3, 163–173.
Coello, C. A. C., & Lechuga, M. S. (2002). MOPSO: A proposal for multiple objective particle swarm optimization. In Proceedings of the congress on evolutionary computation (CEC'2002), Honolulu, HI (Vol. 1, pp. 1051–1056).
Daneshyari, M., & Yen, G. G. (2008). Cultural MOPSO: A cultural framework to adapt parameters of multiobjective particle swarm optimization. In 2008 congress on evolutionary computation (CEC'2008) (pp. 1325–1332). Hong Kong: IEEE Service Center.
Deb, K. (2001). Multi-objective optimization using evolutionary algorithms. Wiley.
Deb, K. (2002). Multi-objective genetic algorithms: Problem difficulties and construction of test problems. Evolutionary Computation, 7, 205–230.
Deb, K., Agrawal, S., Pratab, A., & Meyarivan, T. (2000a). A fast and elitist multiobjective genetic algorithm: NSGA-II. Technical Report 200001.
Deb, K., Agrawal, S., Pratab, A., & Meyarivan, T. (2000b). A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II. In Proceedings of the parallel problem solving from nature VI conference. Lecture notes in computer science (Vol. 1917, pp. 849–858). Paris, France: Springer.
Deb, K., Agrawal, S., Pratab, A., & Meyarivan, T. (2002). A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6, 182–197.
Edgeworth, F. Y. (1881). Mathematical physics. London, England: P. Keagan.
Ehrgott, M. (2005). Multicriteria optimization (2nd ed.). Springer.
Fonseca, C. M., & Fleming, P. J. (1993). Genetic algorithms for multiobjective optimization: Formulation, discussion and generalization. In Proceedings of the fifth international conference on genetic algorithms, San Mateo, USA (pp. 416–423).
Goicoechea, A., Hansen, D. R., & Duckstein, L. (1982). Multiobjective decision analysis with engineering and business applications. Wiley.
Goldberg, D. E. (1989). Genetic algorithms in search, optimization and machine learning. Reading, MA: Addison-Wesley.
Haleja, P., & Lin, C. Y. (1992). Genetic search strategies in multicriterion optimal design. Structural Optimization, 4.
Horn, J., Nafpliotis, N., & Goldberg, D. E. (1994). A niched Pareto genetic algorithm for multiobjective optimization. In Proceedings of the IEEE conference on evolutionary computation, IEEE world congress on computational intelligence, Piscataway, USA (pp. 82–87).
Kita, H., Yabumoto, Y., Mori, N., & Nishikawa, Y. (1996). Multi-objective optimization by means of the thermodynamical genetic algorithm. Parallel Problem Solving from Nature (PPSN), 4, 504–512.
Knowles, J., & Corne, D. C. (2000). Approximating the nondominated front using the Pareto archived evolution strategy. Evolutionary Computation, 8, 149–172.
Miettinen, K. M. (1999). Nonlinear multiobjective optimization. Boston: Kluwer Academic Publishers.
Panda, G., Pradhan, P. M., & Majhi, B. (2011). IIR system identification using cat swarm optimization. Expert Systems with Applications, 38, 12671–12683.
Pareto, V. (1896). Cours d'économie politique. London, England: P. Keagan.
Purshouse, R. C. (2003). On the evolutionary optimisation of many objectives. Ph.D. thesis.
Schaffer, J. D. (1985). Multiple objective optimization with vector evaluated genetic algorithms. In Proceedings of the first international conference on genetic algorithms (pp. 93–100). Mahwah, NJ: Lawrence Erlbaum Associates.
Schott, J. R. (1995). Fault tolerant design using single and multi-criteria genetic algorithms. Master's thesis.
Srinivas, N., & Deb, K. (1995). Multiobjective optimization using nondominated sorting in genetic algorithms. Evolutionary Computation, 2, 221–248.
Steuer, R. E. (1986). Multiple criteria optimization: Theory, computation and application. Wiley.
Veldhuizen, D. V. (1999). Multiobjective evolutionary algorithms: Classifications, analyses, and new innovations. Ph.D. thesis.
Zitzler, E. (1999). Evolutionary algorithms for multiobjective optimization. Ph.D. thesis.
Zitzler, E., Laumanns, M., & Thiele, L. (2001). SPEA2: Improving the strength Pareto evolutionary algorithm for multiobjective optimization. In Proceedings of EUROGEN 2001 – evolutionary methods for design, optimisation and control with applications to industrial problems, Barcelona, Spain.
Zitzler, E., & Thiele, L. (1999). Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Transactions on Evolutionary Computation, 3, 257–271.
