
Available online at www.sciencedirect.com

Applied Mathematics and Computation 198 (2008) 643–656

www.elsevier.com/locate/amc

Global-best harmony search


Mahamed G.H. Omran a,*, Mehrdad Mahdavi b

a Department of Computer Science, Gulf University for Science and Technology, Kuwait
b Department of Computer Engineering, Sharif University of Technology, Tehran, Iran

Abstract

Harmony search (HS) is a new meta-heuristic optimization method imitating the music improvisation process where musicians improvise their instruments' pitches searching for a perfect state of harmony. A new variant of HS, called global-best harmony search (GHS), is proposed in this paper, where concepts from swarm intelligence are borrowed to enhance the performance of HS. The performance of the GHS is investigated and compared with HS and a recently developed variation of HS. The experiments conducted show that the GHS generally outperformed the other approaches when applied to ten benchmark problems. The effect of noise on the performance of the three HS variants is investigated and a scalability study is conducted. The effect of the GHS parameters is analyzed. Finally, the three HS variants are compared on several Integer Programming test problems. The results show that the three approaches seem to be efficient alternatives for solving Integer Programming problems.
© 2007 Elsevier Inc. All rights reserved.

Keywords: Harmony search; Meta-heuristics; Evolutionary algorithms; Optimization

1. Introduction

Evolutionary algorithms (EAs) are general-purpose stochastic search methods simulating natural selection
and biological evolution. EAs differ from other optimization methods, such as Hill-Climbing [18] and Simulated Annealing [20], in that EAs maintain a population of potential (or candidate) solutions to a problem, rather than just one solution.
Generally, all EAs work as follows: a population of individuals is randomly initialized where each individual represents a potential solution to the problem at hand. The quality of each solution is evaluated using a fitness function. A selection process is applied during each iteration of an EA in order to form a new population. The selection process is biased toward the fitter individuals to increase their chances of being included in the new population. Individuals are altered using unary transformation (mutation) and higher-order transformation (crossover). This procedure is repeated until convergence is reached. The best solution found is expected to be a near-optimum solution.

* Corresponding author.
E-mail addresses: [email protected] (M.G.H. Omran), [email protected] (M. Mahdavi).

0096-3003/$ - see front matter © 2007 Elsevier Inc. All rights reserved.
doi:10.1016/j.amc.2007.09.004

The main evolutionary computation algorithms are: Genetic Programming (GP) [12,13], Evolutionary Programming (EP) [3], Evolutionary Strategies (ES) [1], Genetic Algorithms (GA) [9] and Differential Evolution (DE) [19].
EAs have been successfully applied to a wide range of optimization problems, for example, image processing, pattern recognition, scheduling and engineering design, amongst others [9].
Recently, a new EA, called harmony search (HS), imitating the improvisation process of musicians, was proposed by Geem et al. [6]. The HS has been successfully applied to many optimization problems [11,7,15,8,16,5].
This paper proposes a new version of HS where concepts from swarm intelligence are used to enhance the performance of HS. The new version is called global-best harmony search (GHS). The results of the experiments conducted are shown and compared with the version of HS proposed by Lee and Geem [16] and a new variant of HS proposed by Mahdavi et al. [17]. Furthermore, the performance of the three approaches when applied to noisy problems is investigated. A scalability study is also conducted. The effect of the GHS parameters is studied. Finally, the three HS versions are used to tackle the Integer Programming problem.
The remainder of the paper is organized as follows: Section 2 provides an overview of HS. The improved harmony search (IHS) is summarized in Section 3. The proposed approach is presented in Section 4. Results of the experiments are presented and discussed in Section 5. In Section 6, the three HS variants are applied to the Integer Programming problem. Finally, Section 7 concludes the paper.

2. The harmony search algorithm

Harmony search (HS) [16] is a new meta-heuristic optimization method imitating the music improvisation
process where musicians improvise their instruments’ pitches searching for a perfect state of harmony. The HS
works as follows:

Step 1: Initialize the problem and HS parameters: The optimization problem is defined as minimize (or maximize) $f(x)$ such that $LB_i \le x_i \le UB_i$, where $f(x)$ is the objective function, $x$ is a candidate solution consisting of $N$ decision variables $(x_i)$, and $LB_i$ and $UB_i$ are the lower and upper bounds for each decision variable, respectively. In addition, the parameters of the HS are specified in this step. These parameters are the harmony memory size (HMS), harmony memory considering rate (HMCR), pitch adjusting rate (PAR) and the number of improvisations (NI).
Step 2: Initialize the harmony memory: The initial harmony memory (HM) is generated from a uniform distribution in the ranges $[LB_i, UB_i]$, where $1 \le i \le N$. This is done as follows: $x_i^j = LB_i + r \times (UB_i - LB_i)$, $j = 1, 2, \ldots, HMS$, where $r \sim U(0, 1)$.
Step 3: Improvise a new harmony: Generating a new harmony is called improvisation. The new harmony vector, $x' = (x'_1, x'_2, \ldots, x'_N)$, is generated using the following rules: memory consideration, pitch adjustment and random selection. The procedure works as follows:
for each $i \in [1, N]$ do
  if $U(0,1) \le$ HMCR then /* memory consideration */
    $x'_i = x_i^j$, where $j \sim U(1, \ldots, HMS)$
    if $U(0,1) \le$ PAR then /* pitch adjustment */
      $x'_i = x'_i \pm r \times bw$, where $r \sim U(0,1)$ and $bw$ is an arbitrary distance bandwidth
    endif
  else /* random selection */
    $x'_i = LB_i + r \times (UB_i - LB_i)$
  endif
done

Step 4: Update harmony memory: The generated harmony vector, $x' = (x'_1, x'_2, \ldots, x'_N)$, replaces the worst harmony in the HM, only if its fitness (measured in terms of the objective function) is better than that of the worst harmony.
Step 5: Check the stopping criterion: Terminate when the maximum number of improvisations is reached.

The HMCR and PAR parameters of the HS help the method in searching for globally and locally improved solutions, respectively. PAR and $bw$ have a profound effect on the performance of the HS, so fine-tuning these two parameters is very important. Of the two, $bw$ is more difficult to tune because it can take any value from $(0, \infty)$.
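To make the procedure concrete, the following is a minimal Python sketch of Steps 1–5 for a minimization problem. The function and variable names (harmony_search, f, lb, ub) are ours, and the default parameter values follow the settings used later in Section 5; it is an illustration of the steps above, not the authors' code.

```python
import random

def harmony_search(f, lb, ub, n, hms=5, hmcr=0.9, par=0.3, bw=0.01, ni=50000):
    # Step 2: initialize the harmony memory (HM) from a uniform distribution.
    hm = [[random.uniform(lb[i], ub[i]) for i in range(n)] for _ in range(hms)]
    fit = [f(x) for x in hm]
    for _ in range(ni):
        # Step 3: improvise a new harmony.
        x_new = []
        for i in range(n):
            if random.random() <= hmcr:        # memory consideration
                xi = hm[random.randrange(hms)][i]
                if random.random() <= par:     # pitch adjustment: x'_i +/- r*bw
                    xi += random.uniform(-1.0, 1.0) * bw
            else:                              # random selection
                xi = random.uniform(lb[i], ub[i])
            x_new.append(xi)
        # Step 4: replace the worst harmony if the new one is better.
        worst = max(range(hms), key=lambda j: fit[j])
        f_new = f(x_new)
        if f_new < fit[worst]:
            hm[worst], fit[worst] = x_new, f_new
    # Step 5 is the loop bound NI; return the best harmony found.
    best = min(range(hms), key=lambda j: fit[j])
    return hm[best], fit[best]
```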

3. The improved harmony search algorithm

To address the shortcomings of the HS, Mahdavi et al. [17] proposed a new variant of the HS, called the improved harmony search (IHS). The IHS dynamically updates PAR according to the following equation:

$PAR(t) = PAR_{min} + \dfrac{PAR_{max} - PAR_{min}}{NI} \times t$   (1)

where $PAR(t)$ is the pitch adjusting rate for generation $t$, $PAR_{min}$ is the minimum adjusting rate, $PAR_{max}$ is the maximum adjusting rate and $t$ is the generation number.
In addition, $bw$ is dynamically updated as follows:

$bw(t) = bw_{max} \exp\!\left(\dfrac{\ln(bw_{min}/bw_{max})}{NI} \times t\right)$   (2)

where $bw(t)$ is the bandwidth for generation $t$, $bw_{min}$ is the minimum bandwidth and $bw_{max}$ is the maximum bandwidth.
A major drawback of the IHS is that the user needs to specify the values of $bw_{min}$ and $bw_{max}$, which are difficult to guess and problem dependent.
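As a sketch, Eqs. (1) and (2) translate directly into the following Python helpers (the names are ours); they can be dropped into the HS sketch of Section 2 by recomputing par and bw at every improvisation t.

```python
import math

def par_t(t, ni, par_min=0.01, par_max=0.99):
    # Eq. (1): PAR grows linearly from PAR_min to PAR_max over the NI improvisations.
    return par_min + (par_max - par_min) * t / ni

def bw_t(t, ni, bw_min, bw_max):
    # Eq. (2): bw decays exponentially from bw_max at t = 0 to bw_min at t = NI.
    return bw_max * math.exp(math.log(bw_min / bw_max) * t / ni)
```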

4. The global-best harmony search

Inspired by the concept of swarm intelligence as used in Particle Swarm Optimization (PSO) [2,10], a new variation of HS is proposed in this paper. In a global-best PSO system, a swarm of individuals (called particles) flies through the search space. Each particle represents a candidate solution to the optimization problem. The position of a particle is influenced by the best position visited by itself (i.e. its own experience) and the position of the best particle in the swarm (i.e. the experience of the swarm).
The new approach, called global-best harmony search (GHS), modifies the pitch-adjustment step of the HS such that the new harmony can mimic the best harmony in the HM, thus replacing the $bw$ parameter altogether and adding a social dimension to the HS. Intuitively, this modification allows the GHS to work efficiently on both continuous and discrete problems.
The GHS has exactly the same steps as the IHS with the exception that Step 3 is modified as follows:

for each $i \in [1, N]$ do
  if $U(0,1) \le$ HMCR then /* memory consideration */
    $x'_i = x_i^j$, where $j \sim U(1, \ldots, HMS)$
    if $U(0,1) \le PAR(t)$ then /* pitch adjustment */
      $x'_i = x_k^{best}$, where $best$ is the index of the best harmony in the HM and $k \sim U(1, N)$
    endif
  else /* random selection */
    $x'_i = LB_i + r \times (UB_i - LB_i)$
  endif
done
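As a sketch, the modified improvisation step looks as follows in Python, continuing the (illustrative) names of the HS sketch in Section 2; par_t is the scheduled PAR of Eq. (1).

```python
import random

def ghs_improvise(hm, fit, lb, ub, n, hmcr, par_t):
    best = min(range(len(hm)), key=lambda j: fit[j])  # index of the best harmony
    x_new = []
    for i in range(n):
        if random.random() <= hmcr:            # memory consideration
            xi = hm[random.randrange(len(hm))][i]
            if random.random() <= par_t:       # pitch adjustment: copy a randomly
                k = random.randrange(n)        # chosen dimension k of the best
                xi = hm[best][k]               # harmony instead of adding r * bw
        else:                                  # random selection
            xi = random.uniform(lb[i], ub[i])
        x_new.append(xi)
    return x_new
```

Because the copied value comes from a (possibly different) dimension $k$ of the best harmony, no bandwidth parameter is needed, which is what makes the step equally applicable to discrete variables.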

4.1. Example

To further illustrate the HS, IHS and GHS algorithms, the Rosenbrock function (defined in Section 5) is considered to show the algorithms' behavior over consecutive generations. The number of decision variables is set to 3 with values bounded between -600 and 600. Other parameters were set as defined in Section 5. All of the algorithms start with the same initialization of the HM. The solution vectors in the HM are sorted according to the values of the objective function. The state of the HM in different iterations for the HS, IHS and GHS algorithms is shown in Tables 1–3, respectively. All of the algorithms improvised a near-optimal solution, but the result obtained by the GHS is better than the results of the other two algorithms.

Table 1
HM state in different iterations for the Rosenbrock function using the HS algorithm
Rank    x1    x2    x3    f(x)
Initial HM
1 206.909180 241.845703 102.795410 29.141686
2 99.938965 332.336426 397.961426 70.088302
3 381.262207 392.358398 87.634277 77.971790
4 381.262207 472.924805 87.634277 95.243494
5 470.910645 54.345703 430.700684 104.179736
Subsequent HM
1* 206.909180 54.337620 87.634277 13.718178
2 206.909180 241.845703 102.795410 29.141686
3 99.938965 332.336426 397.961426 70.088302
4 381.262207 392.358398 87.634277 77.971790
5 381.262207 472.924805 87.634277 95.243494
HM after 10 iterations
1 206.909180 54.337620 87.634277 13.718178
2 206.909180 54.341545 102.795321 15.721512
3 206.909180 241.845703 102.795410 29.141686
4 206.909180 316.369629 87.634277 39.325780
5 438.061523 54.345703 102.795436 52.221392
HM after 100 iterations
1 15.234375 54.332356 56.637901 2.787318
2 109.016680 54.332356 56.643102 5.635628
3 109.020996 54.338577 56.643102 5.636281
4 109.020996 54.332356 56.643102 5.636595
5 109.020996 54.341623 56.637901 5.637259
HM after 500 iterations
1 15.243489 18.162745 27.026367 0.467774
2 15.303073 18.135317 50.484959 1.206920
3 15.307083 18.135317 50.488030 1.207263
4 15.296501 18.134462 50.484959 1.208392
5 15.296501 18.134526 50.484959 1.208400
HM after 5000 iterations
1 3.140023 0.000003 5.433249 0.009857
2 3.140023 0.000003 5.433249 0.009857
3 3.140023 0.000003 5.433249 0.009857
4 3.140023 0.000003 5.433249 0.009857
5 3.140023 0.000003 5.433249 0.009857
* New good solution improvised in the first iteration.

Table 2
HM state in different iterations for the Rosenbrock function using the IHS algorithm
Rank    x1    x2    x3    f(x)
Initial HM
1 206.909180 241.845703 102.795410 29.141686
2 99.938965 332.336426 397.961426 70.088302
3 381.262207 392.358398 87.634277 77.971790
4 381.262207 472.924805 87.634277 95.243494
5 470.910645 54.345703 430.700684 104.179736
Subsequent HM
1 206.909180 241.845703 102.795410 29.141686
2* 206.909180 472.924805 0.439453 67.467811
3 99.938965 332.336426 397.961426 70.088302
4 381.262207 392.358398 87.634277 77.971790
5 470.910645 54.345703 430.700684 104.179736
HM after 10 iterations
1 206.909180 54.345703 0.439453 11.786883
2 206.909180 241.845703 0.439453 26.145702
3 206.909180 241.845703 0.439453 26.145702
4 206.909180 241.845703 0.439453 26.145702
5 206.909180 241.845703 102.795410 29.141686
HM after 100 iterations
1 11.682129 54.345703 0.439453 1.314937
2 11.682129 54.345703 0.439453 1.314937
3 11.682129 54.345703 0.439453 1.314937
4 11.682129 54.345703 0.439453 1.314937
5 11.682129 54.345703 0.439453 1.314937
HM after 500 iterations
1 11.682129 18.127369 0.439416 0.522051
2 11.682129 18.127369 0.439429 0.522052
3 11.682129 18.127380 0.439416 0.522052
4 11.682129 18.127380 0.439416 0.522052
5 11.682129 18.127380 0.439416 0.522052
HM after 5000 iterations
1 0.000184 0.090208 0.087013 0.003297
2 0.000323 0.090182 0.087113 0.003298
3 0.000270 0.090184 0.087107 0.003298
4 0.000361 0.090143 0.087155 0.003298
5 0.000323 0.090166 0.087176 0.003300
* New good solution improvised in the first iteration.

5. Experimental results

This section compares the performance of the global-best harmony search (GHS) with that of the harmony
search (HS) and the improved harmony search (IHS) algorithms. For the GHS, HMS = 5, HMCR = 0.9,
PARmin ¼ 0:01 and PARmax ¼ 0:99. For the HS algorithm, HMS = 5, HMCR = 0.9, PAR = 0.3 and
bw = 0.01 (the values of the last three parameters were suggested by Dr. Zong Geem in a private communi-
cation). For the IHS algorithm, HMS = 5, HMCR = 0.9, PARmin ¼ 0:01, PARmax ¼ 0:99, bwmin ¼ 0:0001
and bwmax ¼ 1=ð20  ðUB  LBÞÞ. All functions were implemented in 30 dimensions except for the
two-dimensional Camel-Back function. Unless otherwise specified, these values were used as defaults for all
experiments which use static control parameters. The initial harmony memory was generated from a uniform
distribution in the ranges specified below.
The following functions have been used to compare the performance of the different methods. These bench-
mark functions provide a balance of unimodal and multimodal functions.

Table 3
HM state in different iterations for the Rosenbrock function using the GHS
Rank    x1    x2    x3    f(x)
Initial HM
1 206.909180 241.845703 102.795410 29.141686
2 99.938965 332.336426 397.961426 70.088302
3 381.262207 392.358398 87.634277 77.971790
4 381.262207 472.924805 87.634277 95.243494
5 470.910645 54.345703 430.700684 104.179736
Subsequent HM
1* 191.381836 54.345703 87.634277 13.497674
2 206.909180 241.845703 102.795410 29.141686
3 99.938965 332.336426 397.961426 70.088302
4 381.262207 392.358398 87.634277 77.971790
5 470.910645 54.345703 430.700684 104.179736
HM after 10 iterations
1 206.909180 54.345703 87.634277 13.721652
2 206.909180 54.345703 87.634277 13.721652
3 206.909180 136.926270 102.795410 18.311658
4 206.909180 136.926270 87.634277 19.032912
5 206.909180 136.926270 87.634277 19.032912
HM after 100 iterations
1 18.859863 24.719238 45.629883 1.692257
2 18.859863 45.593262 45.629883 1.890251
3 18.859863 45.593262 45.629883 1.890251
4 18.859863 45.593262 45.629883 1.890251
5 18.859863 45.593262 45.629883 1.890251
HM after 500 iterations
1 18.859863 1.171875 22.082520 0.546616
2 18.859863 1.171875 22.082520 0.546616
3 18.859863 1.171875 22.082520 0.546616
4 18.859863 1.171875 22.082520 0.546616
5 18.859863 1.171875 22.082520 0.546616
HM after 5000 iterations
1 0.036621 0.073242 0.036621 0.002235
2 0.036621 0.073242 0.036621 0.002235
3 0.036621 0.073242 0.036621 0.002235
4 0.036621 0.073242 0.036621 0.002235
5 0.036621 0.073242 0.036621 0.002235
* New good solution improvised in the first iteration.

For each of these functions, the goal is to find the global minimizer, formally defined as follows: given $f: \mathbb{R}^{N_d} \to \mathbb{R}$, find $x^* \in \mathbb{R}^{N_d}$ such that $f(x^*) \le f(x)$, $\forall x \in \mathbb{R}^{N_d}$.
The following functions were used:

A. Sphere function, defined as
$f(x) = \sum_{i=1}^{N_d} x_i^2$,
where $x^* = 0$ and $f(x^*) = 0$ for $-100 \le x_i \le 100$.



B. Schwefel's Problem 2.22 [21], defined as
$f(x) = \sum_{i=1}^{N_d} |x_i| + \prod_{i=1}^{N_d} |x_i|$,
where $x^* = 0$ and $f(x^*) = 0$ for $-10 \le x_i \le 10$.


C. Step function, defined as
$f(x) = \sum_{i=1}^{N_d} (\lfloor x_i + 0.5 \rfloor)^2$,
where $x^* = 0$ and $f(x^*) = 0$ for $-100 \le x_i \le 100$.




D. Rosenbrock function, defined as
$f(x) = \sum_{i=1}^{N_d - 1} \left( 100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right)$,
where $x^* = (1, 1, \ldots, 1)$ and $f(x^*) = 0$ for $-30 \le x_i \le 30$.




E. Rotated hyper-ellipsoid function, defined as
$f(x) = \sum_{i=1}^{N_d} \left( \sum_{j=1}^{i} x_j \right)^2$,
where $x^* = 0$ and $f(x^*) = 0$ for $-100 \le x_i \le 100$.


F. Generalized Schwefel's Problem 2.26 [21], defined as
$f(x) = -\sum_{i=1}^{N_d} x_i \sin\!\left(\sqrt{|x_i|}\right)$,
where $x^* = (420.9687, \ldots, 420.9687)$ and $f(x^*) = -12569.5$ for $-500 \le x_i \le 500$.




G. Rastrigin function, defined as
$f(x) = \sum_{i=1}^{N_d} (x_i^2 - 10\cos(2\pi x_i) + 10)$,
where $x^* = 0$ and $f(x^*) = 0$ for $-5.12 \le x_i \le 5.12$.




H. Ackley's function, defined as
$f(x) = -20\exp\!\left(-0.2\sqrt{\frac{1}{30}\sum_{i=1}^{N_d} x_i^2}\right) - \exp\!\left(\frac{1}{30}\sum_{i=1}^{N_d} \cos(2\pi x_i)\right) + 20 + e$,
where $x^* = 0$ and $f(x^*) = 0$ for $-32 \le x_i \le 32$.


I. Griewank function, defined as
$f(x) = \frac{1}{4000}\sum_{i=1}^{N_d} x_i^2 - \prod_{i=1}^{N_d} \cos\!\left(\frac{x_i}{\sqrt{i}}\right) + 1$,
where $x^* = 0$ and $f(x^*) = 0$ for $-600 \le x_i \le 600$.
J. Six-Hump Camel-Back function, defined as
$f(x) = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$,
where $x^* = (0.08983, -0.7126), (-0.08983, 0.7126)$ and $f(x^*) = -1.0316285$ for $-5 \le x_i \le 5$.
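For illustration, two of these benchmarks written out in Python (a sketch; the remaining functions follow the same pattern):

```python
import math

def sphere(x):        # function A: minimum f(0) = 0
    return sum(xi ** 2 for xi in x)

def rastrigin(x):     # function G: minimum f(0) = 0
    return sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)
```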

Sphere, Schwefel's Problem 2.22, Rosenbrock and the rotated hyper-ellipsoid are unimodal functions, while the Step function is a discontinuous unimodal function. Schwefel's Problem 2.26, Rastrigin, Ackley and Griewank are difficult multimodal functions where the number of local optima increases exponentially with the problem dimension. The Camel-Back function is a low-dimensional function with only a few local optima.
The results reported in this section are averages and standard deviations over 30 simulations. Each simulation was allowed to run for 50,000 evaluations of the objective function. The statistically significant best solutions are shown in bold (using the z-test with $\alpha = 0.05$).
Table 4 summarizes the results obtained by applying the three approaches to the benchmark functions. The results show that the GHS outperformed HS and IHS on all the functions except for the rotated hyper-ellipsoid function (where there was no statistically significant difference between the three methods) and the Camel-Back function (where HS and IHS performed better than the GHS). For the Camel-Back function, the GHS performed relatively worse than HS and IHS because the dimension of the problem is very low (i.e. 2). In the case of low dimensionality (i.e. a small number of decision variables), the GHS's pitch-adjustment step suffers. However, when the dimensionality increases, the GHS takes the lead and outperforms the other methods.

(1) Effect of noise on performance

This section investigates the effect of noise on the performance of the three approaches. The noisy versions of the benchmark functions are defined as
$f_{noisy}(x) = f(x) + N(0, 1)$,
where $N(0, 1)$ represents the normal distribution with zero mean and a standard deviation of one.
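In code, the noisy variants amount to a one-line wrapper around any of the benchmark functions (a sketch; random.gauss draws the $N(0, 1)$ term):

```python
import random

def noisy(f):
    # Add zero-mean, unit-variance Gaussian noise to every objective evaluation.
    return lambda x: f(x) + random.gauss(0.0, 1.0)
```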
Table 5 summarizes the results obtained for the noisy versions of the benchmark functions. In general, noise adversely affects the performance of all the examined strategies. Table 5, however, shows that the GHS retained its position as the best performer even in the presence of noise. The exceptions are the noisy rotated hyper-ellipsoid function, where the HS performed better than the GHS, and the noisy Camel-Back function, where HS and IHS outperformed the GHS.

Table 4
Mean and standard deviation (SD) of the benchmark function optimization results
HS IHS GHS
Sphere 0.000187 0.000712 0.000010
(0.000032) (0.000644) (0.000022)
Schwefel Problem 2.22 0.171524 1.097325 0.072815
(0.072851) (0.181253) (0.114464)
Rosenbrock 340.297100 624.323216 49.669203
(266.691353) (559.847363) (59.161192)
Step 4.233333 3.333333 0(0)
(3.029668) (2.195956)
Rotated hyper-ellipsoid 4297.816457 4313.653320 5146.176259
(1362.148438) (1062.106222) (6348.792556)
Schwefel Problem 2.26 -12539.237786 -12534.968625 -12569.458343
(11.960017) (10.400177) (0.050361)
Rastrigin 1.390625 3.499144 0.008629
(0.824244) (1.182907) (0.015277)
Ackley 1.130004 1.893394 0.020909
(0.407044) (0.314610) (0.021686)
Griewank 1.119266 1.120992 0.102407
(0.041207) (0.040887) (0.175640)
Camel-Back -1.031628 -1.031628 -1.031600
(0.000000) (0.000000) (0.000018)

Table 5
Mean and standard deviation (SD) of the noisy benchmark function optimization results
HS IHS GHS
Sphere 0.071377 0.099251 0.001031
(0.169067) (0.222471) (0.001213)
Schwefel Problem 2.22 6.654909 6.723778 0.043880
(1.652772) (1.315942) (0.183376)
Rosenbrock 534.883721 504.485368 65.166415
(643.935556) (252.597506) (64.582174)
Step 10.054982 7.452871 0.000989
(4.526308) (3.914539) (0.000911)
Rotated hyper-ellipsoid 4596.546169 4412.324728 5574.541228
(1282.763999) (1511.752121) (6247.474005)
Schwefel Problem 2.26 -12532.687992 -12530.442804 -12567.97599
(10.749470) (13.111425) (6.541833)
Rastrigin 11.530824 14.840561 0.019196
(3.719364) (2.901781) (0.055387)
Ackley 16.048601 16.068966 7.970873
(1.418158) (0.905531) (7.450370)
Griewank 3.549717 3.054119 0.012079
(1.135888) (1.518994) (0.019128)
Camel-Back -4.759560 -4.867695 -5.039987
(0.532310) (0.270855) (0.380634)

(2) Scalability study

When the dimension of the functions increases from 30 to 100, the performance of all the methods degrades, as shown in Table 6. The results show that the GHS is still the best performer. In general, the IHS performed comparably to the HS.

Table 6
Mean and standard deviation (SD) of the benchmark function optimization results (N = 100)
HS IHS GHS
Sphere 8.683062 8.840449 2.230721
(0.775134) (0.762496) (0.565271)
Schwefel Problem 2.22 82.926284 82.548978 19.020813
(6.717904) (6.341707) (5.093733)
Rosenbrock 16675172.184717 17277654.059718 2598652.617273
(3182464.488466) (2945544.275052) (915937.797217)
Step 20280.200000 20827.733333 5219.933333
(2003.829956) (2175.284501) (1134.876027)
Rotated hyper-ellipsoid 215052.904398 213812.584732 321780.353575
(28276.375538) (28305.249583) (39589.041160)
Schwefel Problem 2.26 -33937.364505 -33596.899217 -40627.345524
(572.390489) (731.191869) (395.457330)
Rastrigin 343.497796 343.232044 80.657677
(27.245380) (25.149464) (30.368471)
Ackley 13.857189 13.801383 8.767846
(0.284945) (0.530388) (0.880066)
Griewank 195.592577 204.291518 54.252289
(24.808359) (19.157177) (18.600195)

(3) Effect of HMCR, HMS and PAR

In this subsection, the effect of HMCR, HMS and PAR on the performance of the GHS is investigated.
Table 7 summarizes the results obtained using different values for HMCR. The results show that increasing the HMCR value improves the performance of the GHS for all the functions except for the Camel-Back, where the opposite is true. Using a small value for HMCR increases the diversity and, hence, prevents the GHS from converging (i.e. it results in (inefficient) random search). Thus, it is generally better to use a large value for the HMCR (i.e. $\ge 0.9$). However, for problems with very low dimensionality (e.g. Camel-Back), using a small value of HMCR is beneficial since it improves the exploration capability of the GHS.
Table 8 summarizes the results obtained using different values for HMS. Each simulation was allowed to run for 50,000 evaluations of the objective function. The results show that no single choice is superior to the others, indicating insensitivity to the value of the HMS. In general, using a small HM seems to be a good and logical choice, with the added advantage of reducing space requirements. Indeed, since the HM resembles the short-term memory of a musician, and since human short-term memory is known to be small, it is logical to use a small HM.
Until now, Eq. (1) has been used to dynamically adjust the GHS's PAR. Herein, the effect of using constant values for PAR on the performance of the GHS is investigated. Table 9 shows the results of using different values for PAR. In general, the results show that no single choice is superior to the others. However, it seems that using a relatively small value of PAR (i.e. $\le 0.5$) improves the performance of the GHS. In addition, the GHS using a small constant value for PAR generally performed better than the GHS using Eq. (1).

6. The integer programming problem

Many real-world applications (e.g. production scheduling, resource allocation, VLSI circuit design, etc.) require the variables to be integers. These problems are called Integer Programming problems.

Table 7
The effect of HMCR
HMCR = 0.5 HMCR = 0.7 HMCR = 0.9 HMCR = 0.95
Sphere 5.295242 0.745164 0.000010 0.000001
(0.713208) (0.265503) (0.000022) (0.000001)
Schwefel Problem 2.22 36.952315 9.603271 0.072815 0.017029
(3.935531) (2.859507) (0.114464) (0.020551)
Rosenbrock 13090605.360898 376181.410765 49.669203 47.293368
(4323029.791459) (296509.690094) (59.161192) (100.109620)
Step 12222.833333 1761.633333 0(0) 0(0)
(1771.547235) (705.998654)
Rotated hyper-ellipsoid 30128.835193 17933.990434 5146.176259 3632.546963
(4745.705757) (5561.440021) (6348.792556) (5014.897019)
Schwefel Problem 2.26 -8926.586935 -11970.385304 -12569.458343 -12569.477436
(375.856862) (226.448517) (0.050361) (0.017952)
Rastrigin 153.349949 41.488430 0.008629 0.001606
(16.322363) (12.259544) (0.015277) (0.002871)
Ackley 16.262370 8.976493 0.020909 0.010069
(0.632526) (1.402425) (0.021686) (0.006675)
Griewank 117.145910 16.522998 0.136015 0.011433
(16.780988) (6.666186) (0.228799) (0.027773)
Camel-Back -1.031625 -1.031622 -1.031600 -1.031608
(0.000003) (0.000014) (0.000018) (0.000015)

Table 8
The effect of HMS
HMS = 5 HMS = 10 HMS = 20 HMS = 50
Sphere 0.000010 0.000014 0.000018 0.000019
(0.000022) (0.000024) (0.000047) (0.000030)
Schwefel Problem 2.22 0.072815 0.055379 0.051270 0.034180
(0.114464) (0.061338) (0.044451) (0.024898)
Rosenbrock 49.669203 71.871738 49.010352 65.821680
(59.161192) (86.611858) (51.127082) (64.172156)
Step 0(0) 0(0) 0(0) 0(0)
Rotated hyper-ellipsoid 5146.176259 2840.550927 4211.308058 2279.716176
(6348.792556) (3485.583318) (5741.749864) (2969.907898)
Schwefel Problem 2.26 -12569.458343 -12569.452670 -12569.40515 -12569.454553
(0.050361) (0.072130) (0.143585) (0.042580)
Rastrigin 0.008629 0.004154 0.011628 0.015724
(0.015277) (0.006119) (0.018224) (0.029582)
Ackley 0.020909 0.036708 0.030355 0.042470
(0.021686) (0.049579) (0.023022) (0.029947)
Griewank 0.136015 0.113143 0.100169 0.177135
(0.228799) (0.204220) (0.164517) (0.205283)
Camel-Back -1.031600 -1.031618 -1.03147 -1.031567
(0.000018) (0.000009) (0.000160) (0.000054)

Table 9
The effect of PAR
PAR = 0.1 PAR = 0.3 PAR = 0.5 PAR = 0.7 PAR = 0.9
Sphere 0.000004 0.000005 0.000004 0.000011 0.000010
(0.000012) (0.000016) (0.000006) (0.000017) (0.000015)
Schwefel Problem 2.22 0.056458 0.047404 0.046387 0.046997 0.068705
(0.043551) (0.032014) (0.029801) (0.039182) (0.056855)
Rosenbrock 37.375855 34.694156 48.580795 41.744516 38.110374
(36.379778) (40.258490) (56.871491) (49.501602) (37.536347)
Step 0(0) 0(0) 0(0) 0(0) 0(0)
Rotated hyper-ellipsoid 2933.680647 2199.433261 2662.057366 2159.448105 4279.831064
(1677.980140) (2974.500178) (3615.948333) (2360.188117) (6700.051680)
Schwefel Problem 2.26 -12569.46259 -12569.42179 -12569.383768 -12569.458178 -12569.42452
(0.024735) (0.108268) (0.197043) (0.059009) (0.235659)
Rastrigin 0.012112 0.013436 0.004066 0.015272 0.008142
(0.020371) (0.018171) (0.005308) (0.030422) (0.013092)
Ackley 0.023014 0.022487 0.024789 0.018062 0.027083
(0.021537) (0.014631) (0.017931) (0.018143) (0.024494)
Griewank 0.051157 0.091768 0.072257 0.202081 0.116950
(0.093015) (0.197166) (0.141671) (0.267918) (0.230767)
Camel-Back -1.031554 -1.031611 -1.031578 -1.031568 -1.029316
(0.000072) (0.000012) (0.000040) (0.000062) (0.001898)

Table 10
Mean and standard deviation (SD) of the integer programming problems results
Function Method Mean (SD)
F1 (N = 5) HS 0(0)
IHS 0(0)
GHS 0(0)
F1 (N = 15) HS 0(0)
IHS 0(0)
GHS 0(0)
F1 (N = 30) HS 0(0)
IHS 0(0)
GHS 0(0)
F2 HS 0(0)
IHS 0(0)
GHS 0(0)
F3 HS 0(0)
IHS 0(0)
GHS 0(0)
F4 HS -4.866667(0.991071)
IHS -4.066667(0.359011)
GHS -5(1)
F5 HS -3833.12(0)
IHS -3833.12(0)
GHS -3833.12(0)
F6 (N = 5) HS 0(0)
IHS 0(0)
GHS 0(0)

Optimization methods developed for real search spaces can be used to solve Integer Programming problems by rounding off the real optimum values to the nearest integers [14].
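As a sketch of this rounding-off approach, the HS variants can be left untouched and the objective simply wrapped so that candidates are rounded to the nearest integers before evaluation (the wrapper name is ours):

```python
def rounded(f):
    # Evaluate f on the component-wise nearest-integer version of x.
    return lambda x: f([round(xi) for xi in x])
```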
The unconstrained Integer Programming problem can be defined as
$\min f(x), \quad x \in S \subseteq \mathbb{Z}^{N_d}$,
where $\mathbb{Z}^{N_d}$ is an $N_d$-dimensional discrete space of integers, and $S$ represents a feasible region that is not necessarily a bounded set. Integer Programming problems encompass both maximization and minimization problems. Any maximization problem can be converted into a minimization problem and vice versa. The problems tackled in this section are minimization problems, so the remainder of the discussion focuses on minimization.
Six commonly used Integer programming benchmark problems [14] were chosen to investigate the perfor-
mance of the HS variants.

Test Problem 1:
$F_1(x) = \sum_{i=1}^{N_d} |x_i|$,
where $x^* = 0$ and $F_1(x^*) = 0$. This problem was considered in dimensions 5, 15 and 30.

Test Problem 2:
$F_2(x) = (9x_1^2 + 2x_2^2 - 11)^2 + (3x_1 + 4x_2^2 - 7)^2$,
where $x^* = (1, 1)^T$ and $F_2(x^*) = 0$.

Test Problem 3:
$F_3(x) = (x_1 + 10x_2)^2 + 5(x_3 - x_4)^2 + (x_2 - 2x_3)^4 + 10(x_1 - x_4)^4$,
where $x^* = 0$ and $F_3(x^*) = 0$.

Test Problem 4:
$F_4(x) = 2x_1^2 + 3x_2^2 + 4x_1 x_2 - 6x_1 - 3x_2$,
where $x^* = (2, -1)^T$ (among several integer minimizers) and $F_4(x^*) = -6$.

Test Problem 5:
$F_5(x) = -3803.84 - 138.08x_1 - 232.92x_2 + 123.08x_1^2 + 203.64x_2^2 + 182.25x_1 x_2$,
where $x^* = (0, 1)^T$ and $F_5(x^*) = -3833.12$.

Test Problem 6:
$F_6(x) = x^T x$,
where $x^* = 0$ and $F_6(x^*) = 0$. This problem was considered in dimension 5, as in Laskari et al. [14].

For all the above problems, $x_i \in [-100, 100] \cap \mathbb{Z}$.
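For illustration, two of these test problems in Python (a sketch); combined with the rounded() wrapper above, they can be passed to any of the HS variants:

```python
def f1(x):    # Test Problem 1: sum of absolute values, minimum 0 at x = 0
    return sum(abs(xi) for xi in x)

def f2(x):    # Test Problem 2: minimum 0 at x = (1, 1)
    x1, x2 = x
    return (9 * x1**2 + 2 * x2**2 - 11)**2 + (3 * x1 + 4 * x2**2 - 7)**2
```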


The three versions of HS were applied to the above test problems and the results are shown in Table 10. All the parameters were set as in Section 5. The results show that the three approaches performed comparably: they found the global optimum solution for all the test problems except for $F_4$. However, when HMCR was set to 0.1, the three methods found the global optimum of $F_4$ as well.

7. Conclusions

This paper proposed a new version of harmony search, the global-best harmony search (GHS). The approach modifies the pitch-adjustment step of the HS such that a new harmony is influenced by the best harmony in the harmony memory. This modification alleviates the problem of tuning the HS's $bw$ parameter, which is difficult to specify a priori. In addition, the modification allows the GHS to work efficiently on both continuous and discrete problems. The approach was tested on ten benchmark functions, where it generally outperformed the other approaches. This paper also investigated the effect of noise on the performance of the HS variations. Empirical results show that, in general, the GHS provided the best results when applied to high-dimensional problems. Moreover, the effect of two parameters of the GHS (i.e. HMCR and HMS) was investigated. The results show that using a large value for HMCR (e.g. 0.95) generally improves the performance of the GHS, except in the case of problems with very low dimensionality, where a small value of HMCR is recommended. In addition, using a small value for HMS seems to be a good choice. Replacing the dynamically adjusted PAR with a constant value was also investigated; the experiments show that using a relatively small constant value for PAR seems to improve the performance of the GHS. These observations confirm the observations of Geem [4]. Finally, the performance of the HS variants in solving Integer Programming problems was studied. Experimental results on six commonly used benchmark problems show that the three approaches performed comparably.

Acknowledgements

We are grateful to Dr. Zong Geem and Dr. Kang Lee for their valuable help and suggestions.

References

[1] T. Bäck, F. Hoffmeister, H.-P. Schwefel, A survey of evolution strategies, in: Proceedings of the Fourth International Conference on Genetic Algorithms and their Applications, 1991, pp. 2–9.

[2] R. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Proceedings of the Sixth International Symposium on Micro Machine and Human Science, 1995, pp. 39–43.
[3] L. Fogel, Evolutionary programming in perspective: the top-down view, in: J.M. Zurada, R. Marks II, C. Robinson (Eds.), Computational Intelligence: Imitating Life, IEEE Press, Piscataway, NJ, USA, 1994.
[4] Z. Geem, Optimal design of water distribution networks using harmony search, PhD Thesis, Korea University, Seoul, Korea, 2000.
[5] Z. Geem, Optimal cost design of water distribution networks using harmony search, Engineering Optimization 38 (3) (2006) 259–280.
[6] Z. Geem, J. Kim, G. Loganathan, A new heuristic optimization algorithm: harmony search, Simulation 76 (2) (2001) 60–68.
[7] Z. Geem, J. Kim, G. Loganathan, Harmony search optimization: application to pipe network design, International Journal of Modelling and Simulation 22 (2) (2002) 125–133.
[8] Z. Geem, C. Tseng, Y. Park, Harmony search for generalized orienteering problem: best touring in China, Lecture Notes in Computer Science 3412 (2005) 741–750.
[9] D. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, 1989.
[10] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Joint Conference on Neural Networks, IEEE Press, 1995, pp. 1942–1948.
[11] J. Kim, Z. Geem, E. Kim, Parameter estimation of the nonlinear Muskingum model using harmony search, Journal of the American Water Resources Association 37 (5) (2001) 1131–1138.
[12] J. Koza, Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press, Cambridge, MA, 1992.
[13] J. Koza, R. Poli, Genetic programming, in: E. Burke, G. Kendall (Eds.), Introductory Tutorials in Optimization, Decision Support and Search Methodology, Kluwer Press, 2005, pp. 127–164 (Chapter 5).
[14] E. Laskari, K. Parsopoulos, M. Vrahatis, Particle swarm optimization for integer programming, in: Proceedings of the 2002 Congress on Evolutionary Computation, vol. 2, 2002, pp. 1582–1587.
[15] K. Lee, Z. Geem, A new structural optimization method based on the harmony search algorithm, Computers and Structures 82 (9–10) (2004) 781–798.
[16] K. Lee, Z. Geem, A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice, Computer Methods in Applied Mechanics and Engineering 194 (2005) 3902–3933.
[17] M. Mahdavi, M. Fesanghary, E. Damangir, An improved harmony search algorithm for solving optimization problems, Applied Mathematics and Computation 188 (2007) 1567–1579.
[18] Z. Michalewicz, D. Fogel, How to Solve It: Modern Heuristics, Springer-Verlag, Berlin, 2000.
[19] R. Storn, K. Price, Differential evolution – a simple and efficient adaptive scheme for global optimization over continuous spaces, Technical Report TR-95-012, International Computer Science Institute, Berkeley, CA, 1995.
[20] P. Van Laarhoven, E. Aarts, Simulated Annealing: Theory and Applications, Kluwer Academic Publishers, 1987.
[21] X. Yao, Y. Liu, G. Lin, Evolutionary programming made faster, IEEE Transactions on Evolutionary Computation 3 (2) (1999) 82–102.
