
An Information Guided Framework for Simulated Annealing

Mrinal Kumar

Abstract— A framework for information-guided simulated annealing is presented in this paper. Information gathered during randomized exploration of the optimization domain is used as feedback with progressively increasing gain to
drive the optimization procedure. Modeling of information
can be performed in a variety of ways, with the ultimate
objective of keeping track of the performance of the stochastic
search procedure. A guided-annealing temperature is defined
that incorporates information into the cooling schedule. The
resulting algorithm has two phases: phase I performs (nearly)
unrestricted exploration as a reconnaissance to survey the
optimization domain, while phase II re-heats the optimization
procedure and exploits information gathered during phase I.
Phase I flows seamlessly into phase II via an information effectiveness parameter without need for user input. The algorithm
presented in this paper improves the performance and success
rate of the existing simulated annealing algorithms significantly.
Results are presented for a problem that is traditionally used in the literature to illustrate the shortcomings of simulated annealing, and significant improvement is demonstrated.

I. INTRODUCTION
Stochastic optimization has received much attention in
the past few decades due to the increasing complexity of
problems in engineering, science, medicine, economics and
several other fields. An optimization algorithm is called
stochastic if either (i) the cost function, L(x) or its gradient
g(x) is corrupted by noise, and/or (ii) the algorithm used for
optimization is randomized, i.e. selection of the search direction is made via randomized decision making. In Ref.[1],
Spall provides an excellent introduction to existing stochastic
search algorithms. The worthiness of any randomized algorithm can be judged by its ability to efficiently explore the
solution domain, undaunted by underlying dimensionality
(Refs. [2], [3]) and avoiding traps such as locally optimal
solutions. When randomization is coupled with Markov chain
theory (Refs.[4], [5], [6], [7]), truly powerful methods can
be developed for a wide variety of complex problems.
The current paper is concerned with one such method,
namely simulated annealing (SA), which was originally proposed by Kirkpatrick et al. [8] for combinatorial optimization
problems in their seminal 1983 Science paper. SA draws
inspiration from the physics of cooling - the fact that when a
sufficiently heated substance (solid or liquid) is taken through
a process of almost reversible (gradual) cooling, it settles down to its minimum-energy state, called the crystalline form. This is in contrast to an amorphous, i.e. polycrystalline, state, which may result if the cooling process is not slow enough (Ref.[1]). Carried over to the context of minimizing cost functions, the lowest-energy crystalline form can be thought of as a global minimum, while a polycrystalline form can be envisioned as a local minimum.

This work was not supported by any organization.
M. Kumar is with the Department of Mechanical and Aerospace Engineering, 306 MAE-A, University of Florida, Gainesville, FL 32611-6250, USA. Email: [email protected]
The cooling schedule is equivalent to the policy by which
the probability of accepting a randomized jump of large
magnitude is reduced. The essence and success of SA lies
in its ability to accept an increase in the cost function (with
probability < 1) to avoid getting trapped in local troughs.
The leniency of this acceptance reduces over time and
this cooling schedule governs how well SA performs in
practice.
In the simple form of SA, random jumps are sampled
from a Gaussian distribution and the temperature cooling
schedule must be very slow for acceptable convergence [9].
In Ref.[10], Szu et al. developed a fast version of SA (FSA)
by sampling random jumps from a Cauchy distribution. Since
Cauchy distributions have heavy tails, large perturbations are
more frequently sampled. This leads to quicker exploration
of the search space, thus achieving the same results with a
faster cooling schedule. Similarly, many other varieties of SA
have been proposed, both in terms of the method of sampling
random perturbations (e.g. Refs.[11], [12]), and the selection
of annealing schedule (e.g. Refs.[8], [12], [13]).
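The effect of heavy-tailed proposals can be illustrated with a short sketch (an illustration of the FSA idea, not code from the paper): sampling jumps from a Cauchy distribution produces large perturbations far more often than a Gaussian of comparable scale.

```python
import math
import random

def gaussian_jump(rng, scale=1.0):
    # Standard-SA-style perturbation: light tails, large jumps are rare.
    return rng.gauss(0.0, scale)

def cauchy_jump(rng, scale=1.0):
    # FSA-style perturbation (Szu et al.): heavy tails, occasional
    # very large jumps; sampled via the inverse-CDF method.
    return scale * math.tan(math.pi * (rng.random() - 0.5))

rng = random.Random(0)
n = 100_000
big_gauss = sum(abs(gaussian_jump(rng)) > 5.0 for _ in range(n))
big_cauchy = sum(abs(cauchy_jump(rng)) > 5.0 for _ in range(n))
# For unit scale, P(|Cauchy| > 5) is about 12.6%, while
# P(|Gaussian| > 5) is below 1e-6, so big_cauchy >> big_gauss.
print(big_gauss, big_cauchy)
```

This is why a Cauchy proposal permits a faster cooling schedule: the chain can still escape distant traps even after the temperature has dropped.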
While SA has been highly successful in a wide variety of
applications, theoretical results for its convergence characteristics are scarce [1]. In Ref.[13], it was shown that for a
discrete domain of optimization, SA converges in probability
if the annealing schedule is proportional to 1/log k (k = iteration number). Results for more general conditions, and in particular for continuous optimization domains, are not available.
Several simulation oriented and review papers have thus been
published debating the true worth of SA (for example, see
Refs.[9], [14], [15], [16], [17]) and the general consensus
seems to be that it is a well established and widely applicable
optimization methodology albeit with limitations and several
open questions regarding theoretical convergence. In the
current paper, we consider the following two research issues
related to SA:
R1: Can information collected during randomized exploration of SA be made useful? Note that SA can jump out of a trough containing the global minimum just like it can jump out of a local minimum (of course, with a lower probability). In other words, the very forte of SA (its ability to escape local minima by temporarily accepting a higher cost) can also become its weakness, especially when numerous local minima are present. Is there a way by which information collected during this open-loop randomization can be utilized as feedback to prevent such occurrences? More importantly, can this be done without impairing the desirable exploratory traits of SA?
R2: Indeed, if the cooling schedule of SA is sufficiently
slow, the above described problem is less likely to occur.
In implementation, however, it is always desirable to reach the global minimum with the least effort (e.g. the fast simulated annealing of Szu et al., Ref.[10]). Could
information be used to reduce algorithm run time and
more importantly, minimize sensitivity to the cooling
schedule?
The current paper attempts to answer the above questions
by developing an information guided framework for simulated annealing. Information gathered during randomized SA
exploration is used as feedback with progressively increasing
gain to drive the optimization procedure. In this paper,
the term information is meant to be any function that
keeps track of the performance of the stochastic search
procedure. We introduce a guided-annealing temperature
that incorporates information into the cooling schedule. The
resulting algorithm has two phases: phase I performs (nearly)
unrestricted exploration as a reconnaissance to survey the
optimization domain, while phase II exploits the information
gathered during phase I. Phase I flows seamlessly into phase
II via an information effectiveness parameter without need
of user input.
The rest of this paper is arranged as follows: Sec.II defines the stochastic optimization problem and Sec.III briefly recapitulates the existing SA algorithm. Various aspects of the proposed information-guided SA algorithm are presented in Sec.IV and results are shown in Sec.V. Finally, conclusions
and directions for future research are outlined in Sec.VI.
II. PROBLEM STATEMENT
The basic problem considered in this paper can be formally
stated as (Ref. [1]):
Determine x* = arg min_{x ∈ Ω} L(x) = {x* ∈ Ω : L(x*) ≤ L(x) ∀ x ∈ Ω}     (1)

where L(·) is the cost function to be minimized and Ω ⊆ R^N is the search space. In the present work, L(·) is assumed to be noise-free. Most deterministic numerical optimization methods suffer from the curse of dimensionality because the volume of the search space grows exponentially with its dimension, N.
III. SIMULATED ANNEALING
A simple version of the simulated annealing algorithm is
presented below (Ref.[1]). Random perturbations are introduced around the current state of the Markov chain using a
Gaussian distribution and the cooling schedule is assumed
to be geometric.
Algorithm A: (Simple SA)
1) (Initialization): Set an initial temperature T = Tmax and a starting point for the search, x0 = xcurr. Determine L(xcurr).

2) (Random Jump Proposal): Sample xnew from the proposal density q(x) = N(xcurr, P), where N(xcurr, P) is an N-dimensional Gaussian distribution with mean xcurr and covariance matrix P. Determine L(xnew).
3) (Acceptance): Define Δ ≜ L(xnew) − L(xcurr). Then there are two cases:
a) (Cost Reduction): If Δ < 0, accept xnew and update xcurr ← xnew and L(xcurr) ← L(xnew).
b) (Cost Increase): If Δ ≥ 0, accept xnew if u ≤ exp(−Δ/kT), where u ~ U[0, 1]; i.e., accept the proposed new state with probability exp(−Δ/kT). This is the Metropolis acceptance step. If xnew was accepted, update xcurr ← xnew and L(xcurr) ← L(xnew). Else the current state does not move.
4) (End Chain for Current T ): Continue to sample from
the proposal q(x) and construct the chain according to
step (3) until the computational resource for the current
value of T is exhausted.
5) (Cooling): Reduce T according to the prescribed cooling schedule, e.g. Tnew = λTcurr with λ ∈ (0, 1), and start building a new chain (steps 2–4).
6) (End): Terminate the optimization process if the temperature is reduced below a threshold value or if
the total allocated computational resources has been
exhausted. The final answer is the most recent current
state, i.e. x = xcurr .
In the above steps, U[0, 1] denotes a uniform random variable on [0, 1]. The covariance matrix P of the proposal density N(xcurr, P) must be chosen by the user, as must be the constant k in step (3.b) and the cooling rate λ in step (5).
The above algorithm works for both discrete and continuous
optimization problems, by modifying the jump proposal (step
2). Some important points:
The key distinguishing feature of SA is its ability to
accept an increase in the cost function (step (3.b)) to
avoid getting trapped in local minima.
The acceptance of a new point with higher cost depends on two measures: (i) the actual positive increment in the cost function (Δ) and (ii) the current annealing temperature (T).
From the Metropolis acceptance criterion it is clear that
for a fixed annealing temperature T , the probability of
acceptance of a large positive cost increment is low.
On the other hand, for a fixed positive Δ, a higher
annealing temperature causes probability of acceptance
to be higher. In other words, the algorithm accepts
large positive cost jumps with higher probability in
its initial stages. As the temperature cools, high cost
increments are rejected more frequently. This point will
be important in the sequel.
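As a concrete reference, Algorithm A can be sketched in a few lines of Python. This is an illustrative implementation under stated assumptions (a geometric cooling rate lam, a per-coordinate proposal standard deviation sigma standing in for the covariance P, and a fixed chain length per temperature), not the paper's code.

```python
import math
import random

def simple_sa(L, x0, T_max=10.0, lam=0.95, k=1.0, sigma=1.0,
              steps_per_T=50, T_min=1e-3, seed=0):
    """Simple simulated annealing (Algorithm A): Gaussian jump proposal,
    Metropolis acceptance, geometric cooling T_new = lam * T_curr."""
    rng = random.Random(seed)
    x_curr = list(x0)
    L_curr = L(x_curr)
    T = T_max
    while T > T_min:                                   # step 6: threshold
        for _ in range(steps_per_T):                   # step 4: chain at T
            # Step 2: random jump proposal around the current state.
            x_new = [xi + rng.gauss(0.0, sigma) for xi in x_curr]
            L_new = L(x_new)
            delta = L_new - L_curr
            # Step 3: accept improvements always; accept increases
            # with probability exp(-delta / (k * T)).
            if delta < 0 or rng.random() <= math.exp(-delta / (k * T)):
                x_curr, L_curr = x_new, L_new
        T *= lam                                       # step 5: cooling
    return x_curr, L_curr

# Usage on a convex bowl, where plain SA is reliable:
x_end, L_end = simple_sa(lambda x: sum(xi**2 for xi in x), [4.0, -3.0])
```

On a convex cost any schedule works; the difficulties discussed next arise only in multi-modal landscapes.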
IV. INFORMATION-GUIDED SA
In this section, we present a modified version of the
simulated annealing algorithm via an information feedback

term. The basic idea is very simple: make the information collected during unrestricted exploration of the optimization domain useful in reaching the global minimum.
A. Guided Annealing Temperature
In the new algorithm, the Metropolis acceptance probability of step (3.b) in algorithm A is computed using a guided annealing temperature, Tg, as opposed to the standard annealing temperature, T. The guided annealing temperature is defined as follows:

Tg ≜ γT + (1 − γ)d     (2)
In the above equation, T is simply the annealing temperature used in algorithm A and is cooled according to the same schedule as in standard SA. The term d is an information feedback function that tracks the performance of the randomized algorithm during its run. It can be modeled in a number of ways, but in this work, we define the information feedback term as:

d ≜ ‖xe − x*curr‖     (3)

where x*curr is the current estimate of the global minimum (the best answer found so far), and xe is the ending position of the algorithm for the previous value of the guided annealing temperature Tg; in other words, xe is the end point of the Markov chain constructed for the previous value of Tg. Therefore, the proposed information function keeps track of how far the algorithm has wandered from its own current estimate of the global minimum. The contribution of the information feedback term d to the guided annealing temperature is controlled by the information-effectiveness parameter γ ∈ [0, 1], which is a function of T. The desired qualitative nature of γ(T) is shown in Fig.1, and can be modeled as follows:
"
( 
2 )#
T
(4)
(T ) = 1 exp c
Tmax

Note that the feedback gain of the information function is given by K = (1 − γ) and can be controlled via the parameter c in Eq.4. At the start of the new algorithm, γ(T = Tmax) ≈ 1, which is the case of pure simulated annealing, and there is no information feedback. As T reduces due to the cooling schedule of SA, γ gradually decreases and information feedback becomes active. However, the effectiveness of feedback is still very low because γ ≈ 1 for much of the cooling schedule of SA. When the ratio T/Tmax reaches a small value (= ε in Fig.1), information feedback starts dominating the exploration procedure and the Metropolis acceptance probability. If it is desired that the information feedback gain increase from 0 to K* as T cools to εTmax, we set c = −log(K*)/ε² (from Eq.4); for example, for K* = 0.1 and ε = 0.1, c = log(10)/0.01 ≈ 230.
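The interaction of Eqs. 2–4 can be sketched as follows. This is a minimal illustration: the value c = 230 corresponds to the choice K* = 0.1, ε = 0.1 under the natural-log convention, and d is given a hypothetical value.

```python
import math

def gamma(T, T_max, c=230.0):
    # Information-effectiveness parameter (Eq. 4):
    # gamma ~ 1 near T_max (pure SA), gamma -> 0 as T cools.
    return 1.0 - math.exp(-c * (T / T_max) ** 2)

def guided_temperature(T, d, T_max, c=230.0):
    # Guided annealing temperature (Eq. 2): Tg = gamma*T + (1 - gamma)*d,
    # where d = ||x_e - x*_curr|| is the information feedback term (Eq. 3).
    g = gamma(T, T_max, c)
    return g * T + (1.0 - g) * d

T_max = 10.0
d = 5.0  # hypothetical: the last chain ended far from the best estimate
print(guided_temperature(T_max, d, T_max))  # ~ T_max: feedback inactive (phase I)
print(guided_temperature(0.01, d, T_max))   # ~ d: feedback dominates ("re-heating")
```

When T has nearly cooled but d is still large, Tg ≈ d, i.e. the search is re-heated in proportion to how far it ended from its own best estimate.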
B. Two Phases of the Modified Algorithm
The information effectiveness parameter (T ) divides the
modified SA algorithm into two phases: (a) Phase I, which is
unrestricted randomized exploration as in standard SA and,
(b) Phase II, in which the standard annealing temperature
T has sufficiently cooled down, and the optimization procedure is primarily driven by feedback from the information
function, d .
In phase I, γ(T) ≈ 1 and hence Tg ≈ T. During this phase, the algorithm keeps track of its current estimate of the global minimum, x*curr, but does not allow it to affect the exploration procedure by keeping the feedback gain (K = 1 − γ) small. This is important because premature feedback can potentially lead to premature (and potentially erroneous) convergence of the algorithm to a local minimum.
When the standard simulated annealing temperature cools down (T/Tmax < ε), the information feedback quickly starts to dominate the guided annealing temperature, i.e. Tg ≈ d, and hence also the sample acceptance law (i.e. the Metropolis acceptance criterion: u ≤ exp(−Δ/kTg), u ~ U[0, 1]).
C. Algorithm B: Information Guided SA
In this section, we present the new algorithm step by step:
Algorithm B:

Fig. 1. Guided Annealing Temperature: unguided random exploration in phase I (≈ pure simulated annealing) flows into information-guided random exploration in phase II as the information-effectiveness parameter γ decays and the information feedback function d takes over.

1) (Initialization): Set an initial standard annealing temperature T = Tmax and a starting point for the search, x0 = xcurr (current state of the algorithm). Also initialize x*curr = x0 and xe = x0. Determine L(xcurr), set L*curr = L(x*curr) and d = ‖xe − x*curr‖.
2) (Determination of Information Effectiveness): Compute γ(T) and the guided annealing temperature, Tg.
3) (Random Jump Proposal): Based on the effectiveness of information, generate a sample:
a) (Information ineffective): If γ ≥ γ*, i.e. K = (1 − γ) ≤ (1 − γ*) = K*, draw a sample xnew from the proposal density q(x) = N(xcurr, P).
b) (Information effective): If K > K*, draw a sample xnew from the proposal density q(x) = N(xcurr, dP).
Determine L(xnew).
4) (Acceptance): Define Δ ≜ L(xnew) − L(xcurr). Then there are two cases:
a) (Cost Reduction): If Δ < 0, accept xnew and update xcurr ← xnew and L(xcurr) ← L(xnew). Also, check to see if a new global minimum was found:
i) (New Global Minimum Found): If L(xnew) < L*curr, update x*curr ← xnew, L*curr ← L(xnew), and the information function: d = ‖xe − xnew‖.
ii) (Global Minimum Unchanged): Continue without any updates.
b) (Cost Increase): If Δ ≥ 0, accept xnew if u ≤ exp(−Δ/kTg), where u ~ U[0, 1]. If xnew was accepted, update xcurr ← xnew and L(xcurr) ← L(xnew). Else the current state does not move.
5) (End Chain for Current Tg): Continue to sample and construct the chain according to steps (3)–(4) until the computational resource for the current value of Tg is exhausted. Set xe to the end point of this chain and update d = ‖xe − x*curr‖.
6) (Cooling): Reduce T, e.g. Tnew = λTcurr, and start building a new chain (steps 2–5).
7) (End): Terminate the optimization process if the guided annealing temperature Tg falls below a threshold value T̄g or if the upper limit on total available computational resources is reached. Answer: x* = x*curr.
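Algorithm B can be sketched compactly in Python. This is an illustration under stated assumptions (geometric cooling rate lam, per-coordinate proposal sigma, and a chain-count budget max_chains standing in for the computational-resource limit), not the paper's implementation; for brevity it keeps a single Gaussian proposal throughout and applies the information feedback only through Tg in the Metropolis step.

```python
import math
import random

def guided_sa(L, x0, T_max=10.0, lam=0.95, k=1.0, sigma=1.0, c=230.0,
              steps_per_T=50, Tg_min=1e-3, max_chains=200, seed=1):
    """Sketch of information-guided SA: standard SA whose Metropolis step
    uses the guided temperature Tg = gamma*T + (1 - gamma)*d (Eq. 2)."""
    rng = random.Random(seed)
    x_curr = list(x0)
    L_curr = L(x_curr)
    x_best, L_best = list(x_curr), L_curr      # x*_curr, L*_curr
    x_end = list(x_curr)                       # x_e: end of previous chain
    T = T_max
    for _ in range(max_chains):                # resource limit (step 7)
        g = 1.0 - math.exp(-c * (T / T_max) ** 2)        # Eq. 4
        d = math.dist(x_end, x_best)                     # Eq. 3
        Tg = g * T + (1.0 - g) * d                       # Eq. 2
        if Tg < Tg_min:
            break        # converged: chain ended near the best estimate
        for _ in range(steps_per_T):
            x_new = [xi + rng.gauss(0.0, sigma) for xi in x_curr]
            L_new = L(x_new)
            delta = L_new - L_curr
            if delta < 0 or rng.random() <= math.exp(-delta / (k * Tg)):
                x_curr, L_curr = x_new, L_new
                if L_curr < L_best:            # new best estimate found
                    x_best, L_best = list(x_curr), L_curr
        x_end = list(x_curr)     # end point of the chain built for this Tg
        T *= lam                 # cool the standard temperature (step 6)
    return x_best, L_best

# Usage on a convex bowl:
x_b, L_b = guided_sa(lambda x: sum(xi**2 for xi in x), [4.0, -3.0])
```

Note how the stopping test on Tg, rather than on T, enforces the key property discussed above: the sketch cannot terminate while its chain keeps ending far from its best estimate.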

A flowchart of the above algorithm is shown in Fig.2. A few comments are in order at this point:
Once the user sets a value for c in Eq.4, the algorithm
automatically flows from phase I to phase II and no
additional user input is required. New sample generation
as well as sample acceptance laws are different for the
two phases.
Phase I serves as a period of reconnaissance because
it surveys the optimization domain. Information is collected and later used by phase II via feedback.
Phase II begins when the simulated annealing temperature has cooled down sufficiently. Note that the guided annealing temperature also decreases along with the standard annealing temperature in phase I. However, if at the end of phase I the current ending state of the algorithm (xe) is far from the current estimate of the global minimum (x*curr), information feedback kicks in and permits larger perturbations about the current sample to be accepted with high probability. This is equivalent to re-heating the exploration procedure. In the standard SA algorithm, the optimization would cool off at this point and the final answer would be far from the true global minimum.
As phase II re-heats the exploration procedure, two outcomes may result: (i) a new global minimum may be discovered, and the algorithm would converge (perturbations would die down) arbitrarily close to it; or, (ii) the algorithm would converge arbitrarily close to its estimate of the global minimum at the end of phase I.
Note that outcome (ii) is as important as outcome (i) because it effectively resolves research issue R1 mentioned in Sec.I. Information feedback automatically ensures that the current algorithm converges only if it is arbitrarily close to its best estimate of the global minimum. In addition, by tuning the constant c in the effectiveness parameter γ(T), the information could be made influential at an earlier stage of the algorithm, which could likely resolve research issue R2.
It is important to mention that Fig.1 is not an indication of the relative amount of time the algorithm spends in phases I and II. Depending on the performance of phase I, the algorithm may spend almost no time in phase II, or more time in phase II than in phase I.
V. RESULTS
We now present results using the above described algorithm. Consider the minimization of the following cost
function:

L(x) = 0.05 Σ_{i=1}^{N} x_i² − 40 Π_{i=1}^{N} cos(x_i),  x = {x_1, . . . , x_N} ∈ R^N     (5)

Fig. 2. Flowchart: Information Guided Simulated Annealing
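The cost function can be written down and checked directly. The product-of-cosines form below is a reconstruction consistent with the values quoted in the text for N = 2 (L(0) = −40, with troughs near −39.01 and −38.03):

```python
import math

def L(x, a=0.05, b=40.0):
    # Multi-modal test cost (Eq. 5): a quadratic bowl minus a
    # scaled product of cosines; unique global minimum at the origin.
    quad = a * sum(xi ** 2 for xi in x)
    prod = 1.0
    for xi in x:
        prod *= math.cos(xi)
    return quad - b * prod

pi = math.pi
print(round(L([0.0, 0.0]), 2))     # -40.0 : the unique global minimum
print(round(L([pi, pi]), 2))       # -39.01: center of one of the four closest troughs
print(round(L([2 * pi, 0.0]), 2))  # -38.03: center of one of the next ring of troughs
```

The quadratic term separates the otherwise identical cosine wells by only a fraction of a percent of the well depth, which is what makes the landscape deceptive for an unguided search.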

The above problem is typically used to demonstrate the shortcomings of simulated annealing (see Refs.[1], [9]) and, although similar, is more difficult than the problem considered in Maryak et al. (Ref.[17]).

Fig. 3. Cost Function of Eq.5 (the marker denotes the actual global minimum)

The surface plot for
N = 2 shown in Fig.3 immediately elicits the underlying
difficulty: there are multiple local minima very close to
the unique global minimum: L(0) = −40. Moreover, several local minima are very close in magnitude to the global minimum: for example, the closest ring of four local troughs all hit their minimum near −39.01 (see Fig.3). Clearly, an unguided
approach could easily get trapped in any one of these troughs,
which is what happens most of the time with standard SA. An
example is shown in Fig.4(a) and the corresponding history
of the cost function is shown in Fig.4(b). In Fig.4(a), the
white circles show various locations of xe that the algorithm
passed through. Also, the smaller light blue circles depict
the various best estimates of the global minimum (x*curr)
the algorithm found during its run. Of course, none of this
information was used since we are considering the simple
SA in this figure. As a result, the SA algorithm ended its
run trapped in a trough containing the local minimum of value −38.03.
On the other hand, the information guided approach presented in this paper performs very well for this problem
and converges near the global minimum. A sample run
is shown in Fig.5(a). The cost function history shown in
Fig.5(b) gives re-heating a physical meaning: as unguided
exploration (phase I) cools off, the algorithm determines
through information feedback that it is far from its best
estimate of the global minimum. It thus initiates phase II,
re-heating the guided annealing temperature (see Fig.6) and
converges only when Tg ≈ 0, i.e. the algorithm is near its
best estimate of the global minimum.
For a more detailed comparison, both standard SA and information guided SA were run 25 times and the results are shown in Table I.

TABLE I
A COMPARISON OF SA AND INFORMATION GUIDED SA FOR COST FUNCTION IN EQ.5 WITH N = 2

Method            # Runs   E[L(x*)]   # Runs Trapped in Local-Min Trough
SA                  25     −38.72                15/25
Info. Guided SA     25     −39.85                 1/25
Fig. 4. Results of SA Optimization for Eq.5: (a) standard SA gets trapped in a local trough near termination; (b) time history of the cost function, showing SA jumping out of the trough containing the global minimum.

It is important to first explain column 3 of Table I: the average results appear almost similar. However, it is important to realize that even this apparently small difference is significant because of the difficulty of the underlying
problem (see Fig.3). There exist multiple local minima very
close to the global minimum. Therefore, even E[L(x*)]SA = −38.72 represents a poor result because it suggests that, on average, the algorithm gets trapped in a local trough. Fig.4(a)
illustrates one such run. Indeed, column 4 of Table I confirms
this suspicion, because it was discovered that SA got trapped
in a local minimum trough for 60% of all runs, which is very
significant.
On the other hand, the information guided SA returns an
average very close to the true global minimum, indicating
that the algorithm finishes in the correct location most of the
time. Indeed, only 1 out of 25 (4%) runs ended in the wrong
trough. This represents a significant improvement over the
standard SA algorithm.


VI. CONCLUSIONS AND FURTHER RESEARCH


A novel information guided framework for simulated annealing was presented in this paper. Information was used as feedback with progressively increasing gain to develop a two-phase stochastic search. The first phase performs pure

Guided Annealing Temperature:

Information Guided SA

: Actual Global Minimum

Re-heating the Guided Annealing


Temperature by Information Feedback

Final
Convergence

Start of Phase II

: Algorithm End Position

(a) Information Feedback Causes Modified Algorithm to Rediscover the Global Minimum

history

Cost History
Global Min.

Information feedback
re-heats exploration
to find global min

Fig. 6.
Guided Temperature History for the Information Guided SA
Algorithm

way of doing so. Further research is required to develop


more sophisticated means of embedding information into the
optimization procedure for improved performance and faster
convergence.
R EFERENCES

(b) Time History of Cost Function for Information-Guided SA


Fig. 5.

Results of Information Guided SA Optimization for Eq.5

simulated annealing, in the manner of a reconnaissance


of the optimization domain and collects information. The
second phase exploits collected information via feedback
and re-heats the randomized search such that convergence
occurs only when the search is arbitrarily close to its best
estimate of the global minimum. The transition of phase I
into phase II is controlled automatically via an information
effectiveness parameter. Usefulness of the described algorithm was demonstrated on a complex optimization problem
with multiple minima, which is typically used to illustrate the
shortcomings of standard simulated annealing. While much
promise is shown by the proposed technique, a lot remains
unexplored. For example, more work will be required to
develop techniques for incorporating complex constraints
into the optimization procedure. Many different methods
need to be researched, such as enforcing hard versus soft
constraints, use of penalty functions and the possible use of
Markov chain theory. A different research direction would
explore the extension of the proposed framework for solving
nonlinear stochastic optimal control problems. Finally, while
this paper describes one method for using information as
feedback, it is not claimed to be the only or the best

[1] J. C. Spall, Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control, 1st ed. Hoboken, NJ: Wiley-Interscience, 2003.
[2] R. E. Bellman, Dynamic Programming. Princeton, NJ: Princeton University Press, 1957.
[3] J. Rust, "Using randomization to break the curse of dimensionality," Econometrica, vol. 65, no. 3, pp. 487–516, 1997.
[4] N. Metropolis, A. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, "Equation of state calculations by fast computing machines," Journal of Chemical Physics, vol. 21, pp. 1087–1092, 1953.
[5] W. K. Hastings, "Monte Carlo sampling methods using Markov chains and their applications," Biometrika, vol. 57, no. 1, pp. 97–109, 1970.
[6] W. R. Gilks, S. Richardson, and D. J. Spiegelhalter, Markov Chain Monte Carlo in Practice. London: Chapman and Hall, 1996.
[7] S. P. Meyn and R. L. Tweedie, Markov Chains and Stochastic Stability. Communications and Control Engineering Series. London: Springer-Verlag London Ltd., 1993.
[8] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598, pp. 671–680, 1983.
[9] M. A. Styblinski and T.-S. Tang, "Experiments in nonconvex optimization: Stochastic approximation with function smoothing and simulated annealing," Neural Networks, vol. 3, pp. 467–483, 1990.
[10] H. Szu and R. Hartley, "Fast simulated annealing," Physics Letters A, vol. 122, no. 3–4, pp. 157–162, 1987.
[11] I. O. Bohachevsky, M. E. Johnson, and M. L. Stein, "Generalized simulated annealing for function optimization," Technometrics, vol. 28, pp. 209–217, 1986.
[12] S. P. Brooks and B. J. T. Morgan, "Optimization using simulated annealing," Journal of the Royal Statistical Society, Series D (The Statistician), vol. 44, no. 2, pp. 241–257, 1995.
[13] B. Hajek, "Cooling schedules for optimal annealing," Mathematics of Operations Research, vol. 13, pp. 311–329, 1988.
[14] V. Fabian, "Simulated annealing simulated," Computers & Mathematics with Applications, vol. 33, no. 1–2, pp. 81–94, 1997.
[15] R. A. Rutenbar, "Simulated annealing algorithms: An overview," IEEE Circuits and Devices Magazine, vol. 5, no. 1, pp. 19–26, 1989.
[16] D. Bertsimas and J. Tsitsiklis, "Simulated annealing," Statistical Science, vol. 8, no. 1, pp. 10–15, 1993.
[17] J. L. Maryak and D. C. Chin, "Global random optimization by simultaneous perturbation stochastic approximation," in Proc. of the American Control Conference, Arlington, VA, Jun. 25–27, 2001, pp. 756–762.
