Metaheuristic Computation
with MATLAB®

Erik Cuevas
Alma Rodríguez
MATLAB ® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not warrant the
accuracy of the text or exercises in this book. This book’s use or discussion of MATLAB ® software or related products
does not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular
use of the MATLAB ® software.

First edition published 2021


by CRC Press
6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742
and by CRC Press

2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN


© 2021 Taylor & Francis Group, LLC

CRC Press is an imprint of Taylor & Francis Group, LLC

Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot
assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers
have attempted to trace the copyright holders of all material reproduced in this publication and apologize to
copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been
acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or
utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including
photocopying, microfilming, and recording, or in any information storage or retrieval system, without written
permission from the publishers.

For permission to photocopy or use material electronically from this work, access www.copyright.com or contact the
Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that are
not available on CCC please contact [email protected]

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for
identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data


Names: Cuevas, Erik, author. | Rodríguez, Alma (Computer science scholar), author.
Title: Metaheuristic computation with MATLAB® / Erik Cuevas,
Alma Rodríguez.
Description: First edition. | Boca Raton : CRC Press, 2020. | Includes
bibliographical references and index.
Identifiers: LCCN 2020014306 | ISBN 9780367438869 (hardback) |
ISBN 9781003006312 (ebook)
Subjects: LCSH: MATLAB. | Metaheuristics. | Mathematical optimization.
Classification: LCC QA76.9.A43 C84 2020 | DDC 519.6--dc23
LC record available at https://lccn.loc.gov/2020014306

ISBN: 978-0-367-43886-9 (hbk)


ISBN: 978-1-003-00631-2 (ebk)

Typeset in Minion
by codeMantra
Contents

Preface, xi
Acknowledgments, xvii
Authors, xix

Chapter 1 ◾ Introduction and Main Concepts 1


1.1 INTRODUCTION 1
1.2 CLASSICAL OPTIMIZATION METHODS 3
1.2.1 The Gradient Descent Method 3
1.2.2 Gradient Computation 4
1.2.3 Computational Example in MATLAB 4
1.3 METAHEURISTIC METHODS 7
1.3.1 The Generic Procedure of a Metaheuristic Algorithm 11
1.4 EXPLOITATION AND EXPLORATION 12
1.5 PROBABILISTIC DECISION AND SELECTION 12
1.5.1 Probabilistic Decision 12
1.5.2 Probabilistic Selection 13
1.6 RANDOM SEARCH 14
1.6.1 Computational Implementation in MATLAB 15
1.7 SIMULATED ANNEALING 19
1.7.1 Computational Example in MATLAB 22
EXERCISES 25
REFERENCES 28

Chapter 2 ◾ Genetic Algorithms (GA) 29


2.1 INTRODUCTION 29
2.2 BINARY GA 31
2.2.1 Selection Operator 33

2.2.2 Binary Crossover Operator 35


2.2.3 Binary Mutation 36
2.2.4 Computational Procedure 37
2.3 GA WITH REAL PARAMETERS 43
2.3.1 Real-Parameter Crossover Operator 43
2.3.2 Real-Parameter Mutation Operator 53
2.3.3 Computational Procedure 57
REFERENCES 63

Chapter 3 ◾ Evolutionary Strategies (ES) 65


3.1 INTRODUCTION 65
3.2 THE (1 + 1) ES 66
3.2.1 Initialization 66
3.2.2 Mutation 66
3.2.3 Selection 67
3.3 COMPUTATIONAL PROCEDURE OF THE (1 + 1) ES 68
3.3.1 Description of the Algorithm (1 + 1) ES 68
3.4 MATLAB IMPLEMENTATION OF ALGORITHM (1 + 1) ES 70
3.5 ES VARIANTS 73
3.5.1 Adaptive (1 + 1) ES 73
3.5.2 (µ + 1) ES 80
3.5.3 (µ + λ) ES 90
3.5.4 (µ, λ) ES 94
3.5.5 (µ, α, λ, β) ES 100
3.5.6 Adaptive (µ + λ) ES and (µ, λ) ES 100
REFERENCES 112

Chapter 4 ◾ Moth–Flame Optimization (MFO) Algorithm 113


4.1 MFO METAPHOR 113
4.2 MFO SEARCH STRATEGY 114
4.2.1 Initialization 114
4.2.2 Cross Orientation 115
4.2.3 Other Mechanisms for the Balance of Exploration–Exploitation 116
4.2.4 MFO Variants 118
4.3 MFO COMPUTATION PROCEDURE 118
4.3.1 Algorithm Description 119

4.4 IMPLEMENTATION OF MFO IN MATLAB 123


4.5 APPLICATIONS OF MFO 126
4.5.1 Application of the MFO to Unconstrained Problems 127
4.5.2 Application of the MFO to Problems with Constraints 131
REFERENCES 137

Chapter 5 ◾ Differential Evolution (DE) 139


5.1 INTRODUCTION 139
5.2 DE SEARCH STRATEGY 140
5.2.1 Population Structure 141
5.2.2 Initialization 142
5.2.3 Mutation 142
5.2.4 Crossover 145
5.2.5 Selection 146
5.3 COMPUTATIONAL PROCESS OF DE 147
5.3.1 Implementation of the DE Scheme 147
5.3.2 The General Process of DE 148
5.4 MATLAB IMPLEMENTATION OF DE 149
5.5 SPRING DESIGN USING THE DE ALGORITHM 153
REFERENCES 157

Chapter 6 ◾ Particle Swarm Optimization (PSO) Algorithm 159


6.1 INTRODUCTION 159
6.2 PSO SEARCH STRATEGY 160
6.2.1 Initialization 160
6.2.2 Particle Velocity 161
6.2.3 Particle Movement 162
6.2.4 PSO Analysis 163
6.2.5 Inertia Weighting 163
6.3 COMPUTING PROCEDURE OF PSO 163
6.3.1 Algorithm Description 164
6.4 MATLAB IMPLEMENTATION OF THE PSO ALGORITHM 168
6.5 APPLICATIONS OF THE PSO METHOD 171
6.5.1 Application of PSO without Constraints 171
6.5.2 Application of the PSO to Problems with Constraints 175

REFERENCES 181

Chapter 7 ◾ Artificial Bee Colony (ABC) Algorithm 183


7.1 INTRODUCTION 183
7.2 ARTIFICIAL BEE COLONY 185
7.2.1 Initialization of the Population 185
7.2.2 Sending Worker Bees 185
7.2.3 Selecting Food Sources by Onlooker Bees 186
7.2.4 Determining the Exploring Bees 186
7.2.5 Computational Process ABC 186
7.2.6 Computational Example in MATLAB 187
7.3 RECENT APPLICATIONS OF THE ABC ALGORITHM IN
IMAGE PROCESSING 195
7.3.1 Applications in the Area of Image Processing 195
7.3.1.1 Image Enhancement 195
7.3.1.2 Image Compression 196
7.3.1.3 Border Detection 197
7.3.1.4 Clustering 197
7.3.1.5 Image Classification 197
7.3.1.6 Fusion in Images 198
7.3.1.7 Scene Analysis 198
7.3.1.8 Pattern Recognition 198
7.3.1.9 Object Detection 199
REFERENCES 199

Chapter 8 ◾ Cuckoo Search (CS) Algorithm 201


8.1 INTRODUCTION 201
8.2 CS STRATEGY 203
8.2.1 Lévy Flight (A) 204
8.2.2 Replace Some Nests by Constructing New Solutions (B) 205
8.2.3 Elitist Selection Strategy (C) 205
8.2.4 Complete CS Algorithm 205
8.3 CS COMPUTATIONAL PROCEDURE 206
8.4 THE MULTIMODAL CUCKOO SEARCH (MCS) 209
8.4.1 Memory Mechanism (D) 210
8.4.1.1 Initialization Phase 211

8.4.1.2 Capture Phase 211


8.4.1.3 Significant Fitness Value Rule 211
8.4.1.4 Non-Significant Fitness Value Rule 213
8.4.2 New Selection Strategy (E) 214
8.4.3 Depuration Procedure (F) 215
8.4.4 Complete MCS Algorithm 218
8.5 ANALYSIS OF CS 218
8.5.1 Experimental Methodology 218
8.5.2 Comparing MCS Performance for Functions f1 − f7 222
8.5.3 Comparing MCS Performance for Functions f8 − f14 224
REFERENCES 226

Chapter 9 ◾ Metaheuristic Multimodal Optimization 229


9.1 INTRODUCTION 229
9.2 DIVERSITY THROUGH MUTATION 230
9.3 PRESELECTION 231
9.4 CROWDING MODEL 231
9.5 SHARING FUNCTION MODEL 231
9.5.1 Numerical Example for Sharing Function Calculation 234
9.5.2 Computational Example in MATLAB 236
9.5.3 Genetic Algorithm without Multimodal Capacities 237
9.5.4 Genetic Algorithm with Multimodal Capacities 242
9.6 FIREFLY ALGORITHM 246
9.6.1 Computational Example in MATLAB 248
EXERCISES 252
REFERENCES 255

INDEX, 257
Taylor & Francis
Taylor & Francis Group
http://taylorandfrancis.com
Preface

Optimization applications are countless. Almost all processes of practical interest can be
optimized to improve their performance. Currently, there is hardly a company that does not consider the
solution of optimization problems within its activities. In general terms, many processes
in science and industry can be formulated as optimization problems. Optimization
occurs in the minimization of the time spent for the execution of a task, the cost of a prod-
uct, and the risk in an investment or the maximization of profits, the quality of a product,
and the efficiency of a device.
The vast majority of the optimization problems with practical implications in science,
engineering, economics, and business are very complex and difficult to resolve. Such
problems cannot be solved accurately by using classical optimization methods. Under
these circumstances, metaheuristic computation methods have emerged as an alternative
solution.
Metaheuristic algorithms are considered generic optimization tools that can solve
very complex problems characterized by having very large search spaces. Metaheuristic
methods reduce the effective size of the search space through the use of effective search
strategies. In general, these methods allow solving problems faster and more robustly than
classical schemes. In comparison to other heuristic algorithms, metaheuristic techniques
are simpler to design and implement.
Metaheuristic methods represent an important area in artificial intelligence and applied
mathematics. During the last 10 years, a large number of metaheuristic approaches have
appeared, drawing on the intersection of different disciplines including artificial intelli-
gence, biology, social studies, and mathematics. Most of the metaheuristic methods use as
inspiration existing biological or social phenomena which at a certain level of abstraction
can be regarded as models of optimization.
Recently, metaheuristic algorithms have become popular in science and industry. An
indicator of this situation is the large number of specialized journals, sessions, and confer-
ences in this area. In practice, metaheuristic schemes have attracted great interest, since
they have proved to be efficient tools for the solution of a wide range of problems in domains
such as logistics, bio-informatics, structural design, data mining, and finance.
The main purpose of this book is to provide a unified view of the most popular meta-
heuristic methods. Under this perspective, the fundamental design principles as well as the
operators of metaheuristic approaches which are considered essential are presented. In the
explanation, not only the design aspects but also their implementation have been considered

using the popular software MATLAB®. The idea with this combination is to motivate the
reader with the acquired knowledge of each method to reuse the existing code, configuring
it to his/her specific problems. All the MATLAB codes contained in the book, as well as
additional material, can be downloaded from www.crcpress.com/9780367438869.
This book provides the necessary concepts that enable the reader to implement and
modify the already known metaheuristic methods to obtain the desired performance for
the specific needs of each problem. For this reason, the book contains numerous examples
of problems and solutions that demonstrate the power of these methods of optimization.
The material has been written from a teaching perspective. For this reason, the book is
primarily intended for undergraduate and postgraduate students of Artificial Intelligence,
Metaheuristic Methods, and/or Evolutionary Computation. It can also be appropriate for
courses such as Optimization and Computational Mathematics. Likewise, the material can
be useful for researchers from metaheuristic and engineering communities. The objective
is to bridge the gap between metaheuristic techniques and complex optimization problems
that profit from the convenient properties of metaheuristic approaches. Therefore, students
and practitioners, who are not metaheuristic computation researchers, will appreciate that
the techniques discussed are beyond simple theoretical tools since they have been adapted
to solve significant problems that commonly arise in such areas.
Due to its content and structure, the book is suitable to fulfill the requirements of sev-
eral university subjects in the area of computing sciences, artificial intelligence, operations
research, applied mathematics, and some other disciplines. Similarly, many engineers and
professionals that work in the industry may find the content of this book interesting. In
this case, the simple explanations and the provided code can assist practitioners in finding
the solution of optimization problems which normally arise in various industrial areas.
Our original premise has been that metaheuristic methods can be easily exposed to
readers with limited mathematical skills. Consequently, we try to write a book in which the
contents are not only applicable but also understandable for any undergraduate student.
Although some concepts can be complex themselves, we try to expose them clearly without
trying to hide their implicit difficulty.
The book is structured so that the reader can clearly identify from the beginning the
objectives of each chapter and finally strengthen the knowledge acquired through the
implementation of several MATLAB programs. The book has been conceived for an intro-
ductory course. The material can be covered in a semester. The book consists of nine chap-
ters, and the details in the contents of each chapter are described below.
Chapter 1 introduces the main concepts that are involved in an optimization pro-
cess. In this way, once the optimization problem is generically formulated, the methods
used for its solution are then classified. Considering that the book focuses on the study of
metaheuristic techniques, traditional gradient-based algorithms will be only marginally
treated. Another important objective in this chapter is to explain the main characteris-
tics of the evolutionary algorithms introducing the dilemma of exploration and exploi-
tation. Furthermore, the acceptance and probabilistic selection are also analyzed. They
are two main operations used in most metaheuristic methods. Finally, three of the first
evolutionary methods are presented, which have been considered as the basis for the creation of
new algorithms. The idea with this treatment is to introduce the concepts of metaheuristic
methods through implementing techniques that are easy to understand.
In Chapter 2, the metaheuristic techniques known as Genetic Algorithms (GAs) are
introduced. They implement optimization schemes that emulate evolutionary principles
found in nature. GAs represent one of the most important search approaches in several
problem domains, such as the sciences, industry, and engineering. The main reasons for
their extensive use are their flexibility, ease of implementation, and global context. Among
different GAs, we will examine in detail binary-coded and real-parameter GAs. In this
chapter, several MATLAB implementations will be discussed and explained.
Chapter 3 describes the operation of Evolutionary Strategies (ES). The evolution pro-
cess that the ES method implements to solve optimization problems is also discussed.
Throughout this chapter, the operators used by the ES are defined along with their different
variants and computational implementation in the MATLAB environment.
Chapter 4 describes the inspiration of the Moth–Flame Optimization (MFO) algorithm
as well as the search strategy it implements to solve optimization problems. Throughout
this chapter, the operators used by the MFO are defined with the objective of analyzing the
theoretical concepts involved that allow the computational implementation of the algo-
rithm in the MATLAB environment. Then, the algorithm is used to solve optimization
problems. The examples illustrate the use of MFO for solving problems with and without
constraints.
Chapter 5 analyzes the Differential Evolution (DE) scheme. This approach is a popu-
lation algorithm that implements a direct and simple search strategy. Under its oper-
ation, DE considers the generation of parameter vectors based on the addition of the
weighted difference between two members of the population. In this chapter, the opera-
tive details of the DE algorithm are discussed. The implementation of DE in MATLAB is
also described. The objective of this chapter is to provide to the reader the mathematical
description of the DE operators and the capacity to apply this algorithm in the solution
of optimization problems. To do this, in the subsequent sections, the use of the DE algo-
rithm is considered in two aspects: the first is the resolution of an optimization problem
minimizing a mathematical benchmark function, and the second, the solution of engi-
neering problems that require an optimal design in their parameters considering some
design restrictions.
Chapter 6 presents the Particle Swarm Optimization (PSO) method. This scheme is
based on the collective behavior that some animals present when they interact in groups.
Such behaviors are found in several animal groups such as a school of fish or a flock of
birds. With these interactions, individuals reach a higher level of survival by collaborating
together, generating a kind of collective intelligence. This chapter describes the main char-
acteristics of the PSO algorithm, as well as its search strategy, considering also the solution
of optimization problems. In the chapter, the operators used by the PSO are defined with
the objective of analyzing their theoretical concepts involved that allow the computational
implementation of the algorithm in the MATLAB environment. Then, the algorithm is
used to solve real-world applications. The examples illustrate the use of PSO for solving
problems with and without restrictions.
In Chapter 7, the Artificial Bee Colony (ABC) algorithm is analyzed. In this chapter, the
parameters of the ABC algorithm, as well as the information necessary to implement it,
will be discussed in detail. In Section 7.1, a semblance of the ABC algorithm, as well as its
most relevant characteristics, will be discussed. In Section 7.2, the complete algorithm is
presented, reserving a sub-section for each component of the algorithm. However, a special
emphasis is placed on Section 7.2.6, where an optimization example of a two-dimensional
function using MATLAB is presented. Then, the results obtained will be discussed. Finally,
in Section 7.3, a summary of recent applications of the ABC algorithm in the area of image
processing is presented.
In Chapter 8, the main characteristics of the Cuckoo Search (CS) scheme are discussed.
Due to its importance, a multimodal version of the CS method is also reviewed. CS is a
simple and effective global optimization algorithm that is inspired by the breeding behavior
of some cuckoo species. One of the most powerful features of CS is the use of Lévy flights
to generate new candidate solutions. Under this approach, candidate solutions are modi-
fied by employing many small changes and occasionally large jumps. As a result, CS can
substantially improve the relationship between exploration–exploitation, still enhancing
its search capabilities. Despite such characteristics, the CS method still fails in providing
multiple solutions in a single execution. In order to overcome such inconvenience, a mul-
timodal optimization algorithm called the multimodal CS (MCS) is also presented. Under
MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorpo-
ration of a memory mechanism to efficiently register potential local optima according to
their fitness value and the distance to other potential solutions, (2) the modification of the
original CS individual selection strategy to accelerate the detection process of new local
minima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated
memory elements.
In Chapter 9, the most common techniques used by metaheuristic methods to opti-
mize multimodal problems are analyzed. Since the shared function scheme is the most
popular, this procedure will be treated in detail in this chapter. Additionally, in the
end, we will discuss the algorithm of fireflies. Such a method inspired by the attraction
behavior of these insects incorporates special operators that maintain interesting mul-
timodal capabilities.
Considering that writing this book has been a very enjoyable experience for the authors
and that the overall topic of metaheuristic computation has become a fruitful subject, it
has been tempting to introduce a large amount of new material and novel evolutionary
methods. However, the usefulness and potential adoption of the book seems to be founded
over a compact and appropriate presentation of successful algorithms, which in turn has
driven the overall organization of the book that we hope may provide the clearest picture
to the reader’s eyes.

MATLAB® is a registered trademark of The MathWorks, Inc. For product information,


please contact:
The MathWorks, Inc.
3 Apple Hill Drive
Natick, MA 01760-2098 USA
Tel: 508-647-7000
Fax: 508-647-7001
E-mail: [email protected]
Web: www.mathworks.com
Taylor & Francis
Taylor & Francis Group
http://taylorandfrancis.com
Acknowledgments

There are many people who are somehow involved in the writing process of this book.
We thank the complete metaheuristic group at the Universidad de Guadalajara in Mexico
for supporting us in this project. We express our gratitude to Randi Cohen, who warmly
supported this project. Acknowledgments also go to Talitha Duncan-Todd, who kindly
helped in the editing process.

Erik Cuevas
Alma Rodríguez
Guadalajara, Mexico

Authors

Erik Cuevas is currently working as a Professor in the Department of Electronics at the


University of Guadalajara, Mexico. He completed his B.E in Electronics and Communication
Engineering from the University of Guadalajara in 1996, and his postgraduate degree
M.E in Applied Electronics from ITESO, Guadalajara, Mexico, in 1998. He received his
PhD in Artificial Intelligence from Freie Universität Berlin, Germany, in 2007. Dr. Cuevas
currently serves as an editorial board member or associate editor in Applied Soft
Computing, Applied Mathematical Modelling, Artificial Intelligence Review, International
Journal of Machine Learning and Cybernetics, ISA Transactions, Neural Processing Letters
and Mathematics and Computers in Simulation. His research interests include metaheuris-
tics and evolutionary computation in a wide range of applications such as image processing
and machine learning.

Alma Rodríguez is a PhD candidate currently working as a Professor in the Department


of Computer Science at the University of Guadalajara, Mexico. She is also working as
Professor at the Guadalajara campus of Universidad Panamericana, Mexico. In 2011, she
graduated with a degree in mechatronics engineering from Centro de Enseñanza Técnica
Industrial. In 2017, she completed her postgraduate degree M.E in Computer Science and
Electronics from Universidad de Guadalajara. Her research interests include metaheuristic
and evolutionary computation in a wide range of applications such as image processing
and machine learning.

Chapter 1

Introduction and Main Concepts

OBJECTIVE
The objective of this chapter is to introduce the main concepts that involve an optimi-
zation process. In this way, once the optimization problem is generically formulated,
the methods used for its solution are then classified. Considering that the book focuses
on the study of metaheuristic techniques, traditional gradient-based algorithms will be
only marginally treated. Another important objective of this chapter is to explain the
main characteristics of evolutionary algorithms, introducing the dilemma of exploration
and exploitation. Furthermore, acceptance and probabilistic selection are also analyzed.
These are the two main operations used in most metaheuristic methods. Finally, three of
the first evolutionary methods which have been considered as the basis for the creation
of new algorithms are presented. The idea with this treatment is to introduce the
concepts of metaheuristic methods, through implementing techniques that are easy to
understand.

1.1 INTRODUCTION
Optimization has become an essential part of all disciplines. One reason for this consid-
eration is the motivation to produce quality products or services at competitive prices.
In general, optimization is the process of finding the “best solution” to a problem among a
large set of possible solutions (Baldick, 2006).
An optimization problem can be formulated as a process in which it is desired to find the
optimum value x* that minimizes or maximizes an objective function f(x), such that

$$
\begin{aligned}
&\text{Minimize/Maximize } f(\mathbf{x}), \quad \mathbf{x} = (x_1, \ldots, x_d) \in \mathbb{R}^d \\
&\text{Subject to: } \mathbf{x} \in \mathbf{X},
\end{aligned} \tag{1.1}
$$


where x represents the vector of decision variables, while d specifies its dimension. X symbolizes
the set of candidate solutions, also known as the solution search space. On many occasions, the
bounds of the search space are defined by the lower (l_i) and upper (u_i) limits of each decision
variable such that $\mathbf{X} = \{\mathbf{x} \in \mathbb{R}^d \mid l_i \le x_i \le u_i,\ i = 1,\ldots,d\}$.
Sometimes it is necessary to minimize f ( x ), but in other scenarios it is necessary to
maximize. These two types of problems are easily converted from one to another through
the following relationship:

$$
\begin{aligned}
\min_{\mathbf{x}^*} f(\mathbf{x}) &\Leftrightarrow \max_{\mathbf{x}^*} \left[ -1 \cdot f(\mathbf{x}) \right] \\
\max_{\mathbf{x}^*} f(\mathbf{x}) &\Leftrightarrow \min_{\mathbf{x}^*} \left[ -1 \cdot f(\mathbf{x}) \right]
\end{aligned} \tag{1.2}
$$
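As a brief illustration (not part of the original text), the equivalence in Eq. 1.2 means that a routine written for minimization can maximize a function simply by negating it. The example function below is an arbitrary choice, and fminsearch is used only as a convenient standard MATLAB minimizer.

% Illustrative only: maximizing f by minimizing its negation (Eq. 1.2).
f = @(x) -(x - 2).^2 + 3;      % example function with a maximum at x = 2
fneg = @(x) -1*f(x);           % negated objective
xstar = fminsearch(fneg, 0);   % minimizing -f(x) maximizes f(x); xstar is near 2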

To clarify these concepts, the following minimization problem is presented as an example:

$$
\begin{aligned}
&\text{Minimize } f(x) = x^4 + 5x^3 + 4x^2 - 4x + 1 \\
&\text{Subject to: } x \in [-4, 1]
\end{aligned} \tag{1.3}
$$

In this formulation, the minimization of a function with a single decision variable


(d = 1) is presented. The search space X for this problem is integrated by the interval
from −4 to 1. Under these circumstances, the idea is to find the value of x for which f ( x )
presents its minimum value, considering all the set of possible solutions defined within
the range [−4,1]. The x element that solves this formulation is called the optimum value
and is represented by x ∗. Figure 1.1 shows a graphical representation of the function to
minimize.
Figure 1.1 shows the solutions A and B which correspond to two different minima
obtained from f ( x ). This type of function is known as multimodal since it contains several
prominent minima. The minimum represented by point A is the optimal (global) solution
of f(x), while B is only a local minimum of f(x).

FIGURE 1.1 Graphical representation of the optimization problem formulated in Eq. 1.3.
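As a quick illustration (not taken from the book's programs), the function of Eq. 1.3 can be evaluated over its search space in MATLAB to visualize both minima; the grid resolution below is an arbitrary choice.

% Illustrative only: evaluate f(x) of Eq. 1.3 over [-4,1] and locate its minima.
f = @(x) x.^4 + 5*x.^3 + 4*x.^2 - 4*x + 1;   % objective function of Eq. 1.3
x = linspace(-4, 1, 1000);                   % discretized search space
y = f(x);
plot(x, y); xlabel('x'); ylabel('f(x)');
[fbest, idx] = min(y);                       % coarse estimate of the global minimum A
fprintf('Approximate optimum: x* = %.3f, f(x*) = %.3f\n', x(idx), fbest);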

1.2 CLASSICAL OPTIMIZATION METHODS


In general, the function f ( x ) could be nonlinear with regard to its decision variables x. Due
to this complexity, optimization methods implement iterative processes for the efficient
exploration of the search space.
There are two kinds of algorithms used to solve these problems (Yang, 2010): classical
techniques and metaheuristic methods. Traditional schemes are based on the use of the
gradient of f ( x ) for the generation of new candidate solutions. In the case of metaheuristic
methods, they do not require functional information of the derivative in f ( x ) to perform a
search strategy that minimizes (or maximizes) a specific objective function. Instead of this,
a set of heuristic rules are implemented to conduct the search process. Some of these rules
are based on the reproduction of phenomena present in nature or society.
Classical optimization algorithms require that the objective
function fulfills two fundamental requirements: f(x) must be twice differ-
entiable, and f ( x ) must have only a single optimum (Venkataraman, 2009).
This section considers a basic review of gradient-based optimization techniques. Since
the book discusses in-depth metaheuristic methods, the analysis of classical techniques
is only marginally treated. For detailed information, the reader is referred
to more specialized books.

1.2.1 The Gradient Descent Method


The gradient descent technique is one of the first techniques for the minimization of multi-
dimensional objective functions (Dennis, 1978). This method represents the basis on which
several other more sophisticated optimization algorithms are founded. Despite its slow
convergence, the gradient descent method is one of the most frequently used for the optimization
of nonlinear functions. This is due to its simplicity and easy implementation.
Under this method, starting from an initial point x^0, the candidate solution is modified
iteratively during a number of Niter iterations so that it may eventually reach the optimal solution
x*. Such iterative modification is determined by the following expression (Hocking, 1991):

$$\mathbf{x}^{k+1} = \mathbf{x}^{k} - \alpha \cdot g\!\left(f(\mathbf{x})\right), \tag{1.4}$$

where k represents the current iteration and α symbolizes the size of the search step.
In Eq. 1.4, the term g(f(x)) represents the gradient of the function f(x). The gradient g
of a function f(x) at the point x expresses the direction in which the function presents its
maximum growth. Thus, in the case of a minimization problem, the descent direction can
be obtained (multiplying by −1) by considering the opposite direction to g. Under this rule, it
is guaranteed that f(x^{k+1}) < f(x^k), which means that the newly generated solution is better
than the previous one.
In general, although the formulation of an optimization problem involves the definition
of an objective function f(x), an explicit expression is available only for educational and
demonstration purposes. In practice, its definition is not known deterministically; its values
are known only at the points sampled by the optimization algorithm. Under these
circumstances, the gradient g is calculated using numerical methods.

FIGURE 1.2 Graphical representation of the numerical calculation process of the gradient.

1.2.2 Gradient Computation
The gradient of a multidimensional function f(x), x = (x_1, …, x_d) ∈ ℝ^d, represents the way
in which the function changes with respect to each of its d dimensions. Therefore, the
gradient g_x1 expresses the magnitude in which f(x) varies with respect to x_1. This gradient
g_x1 is defined as

$$g_{x_1} = \frac{\partial f(\mathbf{x})}{\partial x_1} \tag{1.5}$$

To numerically calculate the gradient g_xi, the following procedure (Mathews & Fink, 2000)
is conducted:

1. A new solution x̃_i is generated. This solution x̃_i is the same as x in all the decision
   variables except in x_i. This value will be replaced by x_i + h, where h is a very small
   value. Under these conditions, the new vector x̃_i is defined as

   $$\tilde{\mathbf{x}}_i = (x_1, x_2, \ldots, x_i + h, \ldots, x_d) \tag{1.6}$$

2. Then, the gradient g_xi is computed through the following model:

   $$g_{x_i} \approx \frac{f(\tilde{\mathbf{x}}_i) - f(\mathbf{x})}{h} \tag{1.7}$$

Figure 1.2 shows a graphical representation of the numerical calculation process of the gradient, considering a simple example where two dimensions (x1, x2) of f(x) are contemplated.
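As an illustration of Eqs. 1.6 and 1.7 (written for this summary, not taken from the book's code), a forward-difference approximation of the gradient can be implemented as the following MATLAB function, saved for example as numgrad.m. Here the objective is assumed to accept a single row vector, unlike Program 1.1, which passes the coordinates separately.

function g = numgrad(f, x, h)
% NUMGRAD Forward-difference approximation of the gradient (Eqs. 1.6-1.7).
%   f : handle to the objective function, called as f(x) with x a row vector
%   x : point at which the gradient is approximated
%   h : small perturbation step, e.g., 1e-3
d = numel(x);
g = zeros(1, d);
fx = f(x);                     % f(x), evaluated once
for i = 1:d
    xt = x;
    xt(i) = xt(i) + h;         % x~_i = (x1, ..., xi + h, ..., xd), Eq. 1.6
    g(i) = (f(xt) - fx) / h;   % Eq. 1.7
end
end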

1.2.3 Computational Example in MATLAB


To illustrate the operation of the gradient descent algorithm and its practical implementa-
tion in MATLAB®, consider solving the following minimization problem:

$$
\begin{aligned}
&\text{Minimize } f(x_1, x_2) = 10 - e^{-\left(x_1^2 + 3x_2^2\right)} \\
&\text{Subject to: } -1 \le x_1 \le 1, \quad -1 \le x_2 \le 1
\end{aligned} \tag{1.8}
$$

Under this formulation, a two-dimensional function f(x_1, x_2) is considered within a search
space defined on the interval [−1,1] for each of the decision variables x_1 and x_2. Figure 1.3
shows a representation of the objective function f(x_1, x_2).
From Eq. 1.8, it is clear that the function f(x_1, x_2) is twice differentiable and also
unimodal (it has only one minimum). For such reasons, it fulfills the requirements to apply
the gradient descent method in order to obtain the optimal value.
The minimization process of f ( x1 , x 2 ) through the gradient descent method is presented
in the form of pseudocode in Algorithm 1.1. The procedure starts by randomly selecting a
candidate solution x^k (k = 0) within the search space defined on the interval [−1,1] for
each of the variables x1 and x 2. Then, gradients g x1 and g x 2 are calculated at the point x k .
With this information, a new solution x k+1 is obtained as a result of applying Eq. 1.4. Since
f ( x1 , x 2 ) involves two decision variables, the new solution can be built by Eq. 1.4 consider-
ing each variable separately. Therefore, the new solution is updated as follows:

$$
\begin{aligned}
x_1^{k+1} &= x_1^{k} - \alpha \cdot g_{x_1} \\
x_2^{k+1} &= x_2^{k} - \alpha \cdot g_{x_2}
\end{aligned} \tag{1.9}
$$

This process is repeated iteratively until a maximum number of iterations Niter has been
reached.

FIGURE 1.3 Graphical representation of the function $f(x_1, x_2) = 10 - e^{-\left(x_1^2 + 3x_2^2\right)}$.

FIGURE 1.4 Solution trajectory produced during the execution of Program 1.1.

Program 1.1 shows the implementation of Algorithm 1.1 in MATLAB. In the operation
of the program, first the function f(x_1, x_2) is plotted in order to appreciate its main
characteristics. Then, in an iterative process, a set of solutions is produced from the initial
point x^0 toward the optimal value. The trajectory experienced by the solutions during the
optimization process is illustrated in Figure 1.4.

Algorithm 1.1 Gradient Descent Method to Solve the Formulation of Eq. 1.8

1. k ← 0
2. x_1^k ← Random[−1,1], x_2^k ← Random[−1,1]
3. while (k < Niter) {
4.    g_x1 ← ( f(x_1^k + h, x_2^k) − f(x_1^k, x_2^k) ) / h,   g_x2 ← ( f(x_1^k, x_2^k + h) − f(x_1^k, x_2^k) ) / h
5.    x_1^{k+1} ← x_1^k − α·g_x1,   x_2^{k+1} ← x_2^k − α·g_x2
6.    k ← k + 1 }

Program 1.1 Implementation of the Gradient Descent Method in MATLAB®
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Gradient descent example
% Erik Cuevas, Alma Rodríguez
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Clean memory
clear all
% Formulation of the optimization problem Eq. 1.8
funstr='10-(exp(-1*(x^2+3*y^2)))';
f=vectorize(inline(funstr));
range=[-1 1 -1 1];
% Draw the function
Ndiv=50;
dx=(range(2)-range(1))/Ndiv; dy=(range(4)-range(3))/Ndiv;
[x,y] =meshgrid(range(1):dx:range(2),range(3):dy:range(4));
z=(f(x,y));
figure(1); surfc(x,y,z);
% Define the number of iterations
k=0;
niter=200;
% Gradient step size h definition
hstep = 0.001;
% Step size of the Gradient descent method
alfa=0.05;
%Initial point selection
xrange=range(2)-range(1);
yrange=range(4)-range(3);
x1=rand*xrange+range(1);
x2=rand*yrange+range(3);
% Optimization process
while (k<niter)
    % Function evaluation
    zn=f(x1,x2);
    % Computation of gradients gx1 and gx2
    vx1=x1+hstep;
    vx2=x2+hstep;
    gx1=(f(vx1,x2)-zn)/hstep;
    gx2=(f(x1,vx2)-zn)/hstep;
    % Draw the current position
    figure(2)
    contour(x,y,z,15); hold on;
    plot(x1,x2,'.','markersize',10,'markerfacecolor','g');
    hold on;
    % Computation of the new solution
    x1=x1-alfa*gx1;
    x2=x2-alfa*gx2;
    k=k+1;
end

1.3 METAHEURISTIC METHODS


Classical gradient-based methods represent the fastest algorithms for nonlinear optimiza-
tion (Reinhardt, Hoffmann, & Gerlach, 2013). An advantage of these techniques is that
they guarantee to obtain the optimal solution to the optimization problem. However, their
application is clearly restricted, since the objective function is required to maintain
the conditions of being twice differentiable and unimodal (Bartholomew-Biggs, 2008).


Real optimization problems usually produce objective functions that do not meet any of
these conditions. Many of these functions present a multimodal behavior (several min-
ima). Under these conditions, the use of gradient-based methods would converge to a
local minimum, without the possibility to explore the search space. Figure 1.5a shows an
example of a multimodal function. There exist many scenarios in which an objective func-
tion could not be differentiable. One simple example is to consider the application of a
rounding operation over the original function. As a result, a differentiable function can
turn into a non-differentiable one. Figure 1.5b shows the rounded version of the function
$f(x_1, x_2) = 10 - e^{-\left(x_1^2 + 3x_2^2\right)}$ presented in Figure 1.3.
Metaheuristic schemes (Simon, 2013) do not use the gradient information of an objec-
tive function. This fact makes it possible for metaheuristic methods to optimize objective
functions as complex as required by the application. In some cases, the objective function
may even contain simulations or experimental models.
Metaheuristic algorithms do not need functional information of the derivative to
generate its search strategy that minimizes (or maximizes) a particular objective function.
Instead, such methods use heuristic rules for the construction of search patterns. These
rules, in many cases, are based on the simulation of different natural and social processes.
Since metaheuristic methods do not consider the use of derivatives, they do not possess rel-
evant information about the objective function. Under these conditions, these techniques
result in slower approaches in comparison to the gradient-based methods.
Metaheuristic methods are stochastic, which means that they use random processes to
determine the directions of the search strategy. Because of this, it is difficult to conduct
analytical techniques for the analysis of such methods. Therefore, most of their properties
have been discovered experimentally.
Although some metaheuristic algorithms have been designed to simulate the behavior of
biological phenomena, like the natural selection process in the case of genetic algorithms,
in its overall conception, such methods are considered as optimization schemes.
Some authors use the term population-based optimization algorithms to refer to meta-
heuristic techniques. Such nomination emphasizes the concept that metaheuristic schemes
usually consist of a population of candidate solutions to optimize a particular problem.

FIGURE 1.5 Objective function types (a) multimodal or (b) not differentiable.

Each of these candidate solutions behaves as a search agent that leads the search strat-
egy. The idea is that as time passes, the population evolves until it eventually reaches the
optimal solution. However, this concept is not completely appropriate, since there are many
metaheuristic approaches which consist of a single candidate solution (simulated anneal-
ing or evolutionary strategies) so that on each iteration, only this solution is updated. This
reasoning considers that metaheuristic methods are more general than simple population
techniques.
Sometimes the term computational intelligence is used to refer to metaheuristic tech-
niques. Under this concept, the idea is to differentiate metaheuristic methods from expert
systems, which are considered a traditional discipline of artificial intelligence. Expert
systems model deductive reasoning, while metaheuristic algorithms model inductive rea-
soning. Computational intelligence is a more general area than metaheuristic methods
and includes other approaches such as neural networks and fuzzy systems. Thus, the use of
such approaches is not restricted to the field of optimization.
Several academics use the term bio-inspired algorithms to refer to metaheuristic meth-
ods. However, this conception is not correct, since various metaheuristic techniques such
as differential evolution and the imperialist algorithm are not inspired by nature. There
are some other approaches, such as evolutionary strategies and opposition-based learning,
that have a very weak connection with biological processes. Under these conditions, it is
clear that the metaheuristic methods constitute a concept that is more general than the
bio-inspired algorithms.
Some authors often replace the term metaheuristic computation by heuristic algo-
rithms. Heuristic, which comes from the Greek, means find or discover. Heuristic
algorithms are methods that use intuitive rules based on common sense to solve problems.
Such algorithms do not expect to find the best solution, but any sufficiently acceptable solu-
tion. The term metaheuristic is used to describe a generic family of heuristic algorithms.
Therefore, most, if not all, of the algorithms discussed in this book can be considered as
metaheuristics.
Many researchers separate metaheuristic methods from techniques based on swarm
principles. Swarm algorithms consider the collective intelligence shown by the behavior of
groups of animals or insects. Two prominent algorithms in this category are Ant Colony
Optimization and Particle Swarm Optimization. Since the mechanism for implementing
swarm algorithms is similar to metaheuristic methods, in this book, the swarm algorithms
are considered as methods of metaheuristic computation.
From the previous discussions, it can be concluded that the terminology with which
metaheuristic methods have been defined is vague and dependent on a particular context.
In this book, and in order to amalgamate all points of view, metaheuristic algorithms are
defined as algorithms that do not consider gradient information to modify one or more
candidate solutions during the optimization process. In these methods, the search strategy
is determined by the combination of stochastic processes and deterministic models.
Metaheuristics are currently one of the most prolific areas in sciences and engineer-
ing. A reflection of this popularity is the large number of specialized journals and conferences
available in the subject. The number of proposed algorithms in the literature that fall
into the category of metaheuristics is very large so that a review of all the algorithms in
a single document is virtually impossible. Due to restrictions of space and coverage, this
book describes in detail those metaheuristic methods that, according to the literature,
are the most popular. With this in mind, it has been decided to divide the methods into
two classes. The first class corresponds to those techniques that are considered the first
approaches in using the concepts of metaheuristic computation. These techniques have
been the basis of many other algorithms. For this reason, such methods have been included
in the book. However, according to recent literature, they are not considered popular any-
more. Since these techniques are treated as a reference, their discussion is not very detailed
in this book. Therefore, they are addressed in this chapter. The second class of algorithms
involves metaheuristic methods that, according to the literature, are the most popular.
Such popularity means that they are the most applied, modified, combined, and analyzed
between the entire set of methods of metaheuristic computation. As these algorithms in
the opinion of the author are the most important, its description is deeper so that they are
treated throughout the book in separate chapters. Table 1.1 describes the methods and the
chapter in which they will be discussed.
This book has been written from a teaching perspective, in such a way that the reader
can implement and use the algorithms for the solution of his/her optimization problems.
The presented material, the discussed methods, and implemented programs are not avail-
able in any other book that considers metaheuristic methods as subject.
The book has two unique features. The first characteristic is that each method is
explained considering a detailed level. Therefore, it is possible to calibrate and change the
parameters of the methods in question. Under the perspective of this book, each meta-
heuristic technique is addressed through the use of simple and intuitive examples so that
the reader gradually gets a clear idea of the functioning of each method.
The second characteristic is the implementation of each metaheuristic method in
MATLAB. Most of the texts of metaheuristic computation explain the algorithms, differ-
ing in degree of detail and coverage of each method. However, many texts fail to provide
implementation information. This problem, from the point of view of the authors of this
book, is not a minor one, since most readers understand the methods completely only when the
theoretical concepts are compared with the lines of code provided in their implementation.

TABLE 1.1 Metaheuristic Methods and Their Distribution through the Book

Class 1                              Class 2
Chapter  Algorithm                   Chapter  Algorithm
1        Random Search               2        Genetic Algorithms (GA)
1        Simulated Annealing         3        Evolutionary Strategies (ES)
                                     4        Moth–Flame Optimization (MFO)
                                     5        Differential Evolution (DE)
                                     6        Particle Swarm Optimization (PSO)
                                     7        Artificial Bee Colony (ABC)
                                     8        Cuckoo Search (CS)
                                     9        Metaheuristic Multimodal Optimization

1.3.1 The Generic Procedure of a Metaheuristic Algorithm


In this section, the optimization process and its main concepts are discussed from the
point of view of the metaheuristic computation paradigm. The idea is to clearly define the
nomenclature used by these methods.
Most of the metaheuristic methods have been designed to find the optimal solution to
an optimization problem formulated in the following way:

$$
\begin{aligned}
&\text{Minimize/Maximize } f(\mathbf{x}), \quad \mathbf{x} = (x_1, \ldots, x_d) \in \mathbb{R}^d \\
&\text{Subject to: } \mathbf{x} \in \mathbf{X},
\end{aligned} \tag{1.10}
$$

where x represents the vector of decision variables (candidate solution), while d specifies the
number of dimensions. X symbolizes the set of possible candidate solutions, also known as
the search space. In many scenarios, the search space is bounded by the lower (l_i) and upper
(u_i) limits of each decision variable so that $\mathbf{X} = \{\mathbf{x} \in \mathbb{R}^d \mid l_i \le x_i \le u_i,\ i = 1,\ldots,d\}$.
To solve the problem formulated in Eq. 1.10, a metaheuristic algorithm maintains a single
solution x^k or a population of N candidate solutions P^k (x_1^k, x_2^k, …, x_N^k), which evolve
(change their values) during a determined number of iterations (Niter), from an initial
state to the end. In the initial state, the algorithm initializes the set of candidate solutions
with random values within the limits of the search space X. In every generation, a set of
metaheuristic operators is applied to the candidate solutions P^k to build a new population P^{k+1}.
The quality of each candidate solution x_i^k is then evaluated through an objective function
f(x_i^k) (i ∈ [1, 2, …, N]) that describes the optimization problem. Usually, during the evolution
process, the best solution m from all candidate solutions receives special consideration.
The idea is that at the end of the evolution process, this solution represents the best
possible solution. Figure 1.6 shows a graphical representation of the optimization process
from the point of view of the metaheuristic computation paradigm. In the nomenclature of
metaheuristic algorithms, a generation is known as an iteration, while the objective function
value f(x_i^k) produced by the candidate solution x_i^k is known as the “fitness” value of x_i^k.

[Flowchart: k ← 0; P^k ← Random[X]; repeat P^{k+1} ← Operators(P^k), k ← k + 1 while k < Niter; when k ≥ Niter, return the best solution m.]

FIGURE 1.6 Optimization process from the point of view of the metaheuristic computation paradigm.
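To make the loop of Figure 1.6 concrete, a minimal MATLAB sketch is given below. It is not one of the book's programs: the operator used (a small random perturbation with greedy replacement) and all parameter values are placeholder assumptions chosen only to show the generic structure.

% Sketch of the generic metaheuristic loop of Figure 1.6 (illustrative only).
f = @(x) 10 - exp(-(x(1)^2 + 3*x(2)^2));   % objective function (Eq. 1.8)
l = [-1 -1]; u = [1 1];                    % lower and upper bounds of X
N = 20; d = 2; Niter = 100;                % population size, dimension, iterations
P = repmat(l, N, 1) + rand(N, d).*repmat(u - l, N, 1);   % random initial population
fit = zeros(N, 1);
for i = 1:N, fit(i) = f(P(i, :)); end      % evaluate the initial fitness values
for k = 1:Niter
    for i = 1:N
        xnew = P(i, :) + 0.1*randn(1, d);  % placeholder operator: small random perturbation
        xnew = min(max(xnew, l), u);       % keep the new solution inside X
        fnew = f(xnew);
        if fnew < fit(i)                   % greedy replacement (minimization)
            P(i, :) = xnew; fit(i) = fnew;
        end
    end
end
[fbest, ibest] = min(fit);                 % best solution m at the end of the process
m = P(ibest, :);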

1.4 EXPLOITATION AND EXPLORATION


Exploration is defined as the process of searching for new solutions. On the other hand,
exploitation refers to the refinement of existing solutions, which in the past have proven
to be successful in similar conditions. Exploration is risky since the quality of the solution
is uncertain. However, exploration can be very profitable, since incidentally, it is possible
to find the best solution. Exploitation is a more conservative process since the effects of
the candidate solution are already known. Both concepts, exploration and exploitation,
are conflicting. Promoting one means to decrease the other. Under such conditions, an
agent or search strategy must combine both features to find competitively the optimal
solution for a specific problem. In its operation, a metaheuristic approach should conduct
the exploration and exploitation of the search space. In the context of metaheuristic meth-
ods, exploration refers to the process of searching for new solutions in the search space.
Exploitation is the mechanism to refine locally the best solutions previously found in the
search process, with the aim of improving them. Pure exploration degrades the accuracy of
the produced solutions during the optimization process. On the other hand, pure exploita-
tion allows to improve already existing solutions, but adversely it limits the search strategy
so that solutions are easily trapped in local optima. Under these conditions, the ability of a
metaheuristic method to find the optimal solution depends on its capacity to balance the
exploration and exploitation processes in the search space. So far, the problem of balancing
exploration–exploitation has not been solved convincingly. Therefore, each metaheuristic
scheme implements a particular solution for its handling (Deb, 2001).
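As a schematic illustration (not taken from the book), both behaviors can be combined by executing, with a given probability, either a global random move (exploration) or a small local refinement (exploitation). The probability value, bounds, and step sizes below are arbitrary assumptions.

% Sketch: one search step that balances exploration and exploitation.
p_explore = 0.2;              % probability of exploring (arbitrary assumption)
l = -1; u = 1;                % bounds of the search space (illustrative)
x = 0.3;                      % current solution (illustrative)
if rand <= p_explore
    xnew = l + (u - l)*rand;          % exploration: sample a new point anywhere in X
else
    xnew = x + 0.05*randn;            % exploitation: small local refinement of x
    xnew = min(max(xnew, l), u);      % keep the refined solution inside X
end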

1.5 PROBABILISTIC DECISION AND SELECTION


The probabilistic decision and selection are two operations recurrently used by many
metaheuristic methods. The probabilistic decision refers to the case of executing an action
conditioned on a certain probability. On the other hand, the probabilistic selection
considers the process of selecting an element of a set, so that the items with the best quality
have a greater chance of being selected, compared with those that have lesser quality.

1.5.1 Probabilistic Decision
The probabilistic decision is an operation used frequently by metaheuristic methods to
condition the execution of different operators for searching for new solutions. The opera-
tion of the probabilistic decision can be formulated as how to execute an action A condi-
tioned to a probability PA . Since the probability dictates the frequency in which an action
will be executed, its value must be a valid probability ( PA ∈[0,1]). Under these conditions,
the process of acceptance or rejection of the action A is as follows: first, a random
number rA is generated under a uniform distribution U[0,1]. If the value rA is less than or equal to PA,
the action A is performed; otherwise, action A will have no effect.

Program 1.2 Probabilistic Decision Implementation


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Probabilistic decision example
% Erik Cuevas, Alma Rodríguez
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Clear memory
clear all
% Variable initialization
ActionA=0;
NoActionA=0;
% Begin the iterative process
for i=1:10000
    % Random number generation U[0,1]
    rA=rand;
    % Probabilistic decision
    if (rA<=0.7)
        ActionA=ActionA+1;
    else
        NoActionA=NoActionA+1;
    end
end

As metaheuristic algorithms are iterative, the probability PA expresses the frequency with
which an action A is executed. Therefore, if the value PA is near zero, the action A will
rarely be executed, while if the value of PA is close to one, action A will be executed on
practically all occasions. From the point of view of implementation, a random number
uniformly distributed (U [0,1]) is generated by the rand function.
To show the implementation of a probabilistic decision operation, the program
illustrated in Program 1.2 has been developed. The program considers 10,000 itera-
tions, of which 70% ( PA = 0.7 ) will perform the action to A, while the remaining 30%
do not. Under these conditions, after running the program, the action ActionA will
be executed approximately 7,000 times, while NoActionA will be executed around
3,000 times.
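As a quick check of this behavior, the empirical frequency produced by Program 1.2 can be
compared against the nominal probability. The following lines are a minimal sketch of such
a verification, assuming that the counters ActionA and NoActionA are still available in the
workspace after running Program 1.2:

% Empirical frequency with which action A was executed
empiricalPA=ActionA/(ActionA+NoActionA);
% Compare against the nominal probability PA=0.7
fprintf('Empirical frequency of A: %.3f (nominal 0.700)\n',empiricalPA);

For 10,000 iterations, the empirical frequency typically differs from 0.7 by less than 0.01.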

1.5.2 Probabilistic Selection
The probabilistic selection considers the process of selecting an element of a set, so that the
items with the best quality (according to a certain objective function) have a greater chance
of being chosen, compared with those that have lesser quality.
Methods of metaheuristic computation frequently have to choose a solution x_e^k from a
population of elements P^k = {x_1^k, x_2^k, ..., x_N^k}, where e ∈ {1, 2, ..., N}. The selection
must consider the fitness quality of the solutions f(x_1^k), f(x_2^k), ..., f(x_N^k), so that better
solutions are more likely to be chosen. Under this selection process, the probability of
selecting the solution x_e^k among the other N − 1 solutions is defined as

P_e = \frac{f(x_e^k)}{\sum_{i=1}^{N} f(x_i^k)}   (1.11)

TABLE 1.2 Characteristics of Each Solution from the Numerical Example of Probabilistic Selection

Solution    f(·)              P_e           P_e^A
x_1^k       f(x_1^k) = 25     P_1 = 0.25    P_1^A = 0.25
x_2^k       f(x_2^k) = 5      P_2 = 0.05    P_2^A = 0.30
x_3^k       f(x_3^k) = 40     P_3 = 0.40    P_3^A = 0.70
x_4^k       f(x_4^k) = 10     P_4 = 0.10    P_4^A = 0.80
x_5^k       f(x_5^k) = 20     P_5 = 0.20    P_5^A = 1.00

Another important concept associated with the solution x_e^k is the cumulative probability
P_e^A. This cumulative probability is defined as

P_e^A = \sum_{i=1}^{e} P_i   (1.12)

Under these conditions, the cumulative probability P_N^A of the last solution of the
population P^k is equal to one.
Once the probabilities {P_1, P_2, ..., P_N} and the cumulative probabilities
{P_1^A, P_2^A, ..., P_N^A} of all the solutions contained in the population P^k have been
calculated, the selection process can be explained as follows: first, a random number r_S is
generated considering a uniform distribution U[0,1]. Then, iteratively, a test process is
conducted. Starting with the first solution, it is checked whether P_1^A > r_S. If this condition
is not met, the second solution is considered. This process continues, testing each solution e,
until the condition P_e^A > r_S is reached. As a result of this procedure, the solution x_e^k is
selected.
In order to clarify this process, a numerical example is developed. Assume a population P^k
with five elements (x_1^k, x_2^k, x_3^k, x_4^k, x_5^k), whose qualities (fitness values), probabilities,
and cumulative probabilities are shown in Table 1.2. Given these values, the selection process
is the following: a uniformly distributed random value r_S (U[0,1]) is generated. Considering
that the produced value is r_S = 0.51, the first element is tested against the condition
P_1^A > r_S. As this condition does not hold, the second solution is tested under the same
condition, P_2^A > r_S. As the condition is still not satisfied, the third solution is tested,
P_3^A > r_S. As the condition is fulfilled, the selected element is x_3^k.
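To complement the numerical example, the same selection mechanism can be expressed
directly in MATLAB. The following lines are a minimal sketch of the probabilistic
(roulette-wheel) selection described above; the fitness vector reproduces the values of
Table 1.2, while the variable names fitness, P, PA, rS, and e are illustrative choices rather
than part of the original formulation:

% Fitness values of the five solutions (Table 1.2)
fitness=[25 5 40 10 20];
% Selection probabilities (Eq. 1.11)
P=fitness/sum(fitness);
% Cumulative probabilities (Eq. 1.12)
PA=cumsum(P);
% Uniformly distributed random number U[0,1]
rS=rand;
% Index of the first solution whose cumulative probability exceeds rS
e=find(PA>rS,1);

For rS = 0.51, this sketch returns e = 3, which agrees with the element x_3^k selected in the
example above.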

1.6 RANDOM SEARCH
The random search method (Matyas, 1965) was the first method to base its optimization
strategy on a fully stochastic process. Under this method, only one candidate solution x^k is
maintained during the process of evolution. In each iteration, the candidate solution x^k is
modified by adding a random vector Δx. Therefore, the new candidate solution is modeled
using the following expression:

x^{k+1} = x^k + \Delta x   (1.13)

Assuming that the candidate solution x^k has d dimensions (x_1^k, x_2^k, ..., x_d^k), each
coordinate is modified (Δx = {Δx_1, Δx_2, ..., Δx_d}) through a random disturbance Δx_i
(i ∈ {1, 2, ..., d}) modeled by a Gaussian probability distribution defined as

p(\Delta x_i) = \frac{1}{\sigma_i \sqrt{2\pi}} \exp\left(-0.5 \cdot \frac{(\Delta x_i - \mu_i)^2}{\sigma_i^2}\right) = N(\mu_i, \sigma_i),   (1.14)

where σ_i and μ_i symbolize the standard deviation and the mean value, respectively, for the
dimension i. As the value Δx_i represents a local modification around x_i^k, the mean value is
assumed to be zero (μ_i = 0).
Once x^{k+1} has been computed, it is tested whether the new position improves the quality
of the previous candidate solution x^k. Therefore, if the quality of x^{k+1} is better than that
of x^k, the value of x^{k+1} is accepted as the new candidate solution; otherwise, the solution
x^k remains unchanged. For the case of a minimization problem, this process can be
defined as

x^{k+1} = \begin{cases} x^{k+1} & \text{if } f(x^{k+1}) < f(x^k) \\ x^k & \text{if } f(x^{k+1}) \geq f(x^k) \end{cases}   (1.15)

This replacement criterion of accepting only changes that improve the quality of candidate
solutions is known as "greedy." In random search, the perturbation Δx imposed on x^k could
cause the new value x^{k+1} to fall outside the search space X. Outside the search space X,
the objective function f(x) is not defined. To avoid this problem, the algorithm must protect
the evolution of the candidate solution x^k, so that if x^{k+1} falls out of the search space X,
it is assigned a very poor quality (represented by a very large penalty value). That is,
f(x^{k+1}) = ∞ for the minimization case or f(x^{k+1}) = −∞ for the maximization case.
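The greedy criterion of Eq. 1.15, combined with this penalization of infeasible points, can be
sketched in MATLAB as follows. This is only a minimal illustration for a minimization
problem; it assumes a generic objective function handle f, a current solution x, lower and
upper bound vectors lb and ub, and a standard deviation sigma, none of which belong to the
original formulation:

% One random search iteration (minimization) with bound handling
dx=sigma*randn(size(x));       % Gaussian perturbation with zero mean
xnew=x+dx;                     % Candidate solution x(k+1) (Eq. 1.13)
if any(xnew<lb) || any(xnew>ub)
    fnew=inf;                  % Point outside X receives a very poor quality
else
    fnew=f(xnew);              % Otherwise, evaluate the objective function
end
if fnew<f(x)                   % Greedy replacement criterion (Eq. 1.15)
    x=xnew;
end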

1.6.1 Computational Implementation in MATLAB


To illustrate the operation of the random search algorithm and its practical implementa-
tion in MATLAB, in this section, the random search method is used to solve the following
maximization problem:

Maximize   f(x_1, x_2) = 3(1 - x_1)^2 e^{-x_1^2 - (x_2+1)^2} + 10\left(\frac{x_1}{5} - x_1^3 - x_2^5\right) e^{-x_1^2 - x_2^2} - \frac{1}{3} e^{-(x_1+1)^2 - x_2^2}

Subject to:   -3 \leq x_1 \leq 3,   -3 \leq x_2 \leq 3   (1.16)

Under this formulation, a two-dimensional function f(x_1, x_2) with a search space defined
on the interval [−3,3] for each decision variable x_1 and x_2 is considered. Figure 1.7 shows a
representation of the objective function f(x_1, x_2).

FIGURE 1.7 Graphical representation of the function f(x_1, x_2).

As can be seen, the function f(x_1, x_2) is twice differentiable. However, it is multimodal
(it presents several local optima). For this reason, the function f(x_1, x_2) does not fulfill the
requirements to be optimized by the gradient descent method. Under these circumstances,
it is maximized by the random search method.
The operation of the random search method for the maximization of f(x_1, x_2) is presented
in the form of a pseudocode in Algorithm 1.2. The procedure starts by randomly selecting a
candidate solution x^k (k = 0) within the search space, defined on the interval [−3,3] for each
variable x_1 and x_2. Then, a new candidate solution x^{k+1} is calculated through the
inclusion of the random disturbance Δx. Once x^{k+1} is obtained, it is verified whether it
belongs to the search space. If it is not included in X, x^{k+1} is assigned a very low quality
value (f(x^{k+1}) = −∞), so that points outside the search space are never promoted. Finally,
it is decided whether the new point x^{k+1} presents an improvement compared with its
predecessor x^k. Therefore, if the quality of x^{k+1} is better than that of x^k, the value of
x^{k+1} is accepted as the new candidate; otherwise, the solution x^k remains unchanged.
This process is repeated iteratively until a maximum number of iterations Niter has been
reached.
From the point of view of implementation, the most critical part is the computation of
Δx = (Δx_1, Δx_2). These values are obtained from a Gaussian probability distribution. In
MATLAB, a random sample r drawn from a Gaussian distribution is calculated as follows:

r = mu + sigma*randn;

where mu and sigma represent the mean value and the standard deviation of the distribution,
respectively.

Algorithm 1.2 Random Search Method

1. k ← 0
2. x_1^k ← Random[−3,3], x_2^k ← Random[−3,3]
3. while (k < Niter) {
4.   Δx_1 = N(0,1), Δx_2 = N(0,1)
5.   x_1^{k+1} = x_1^k + Δx_1, x_2^{k+1} = x_2^k + Δx_2
6.   x^k = (x_1^k, x_2^k), x^{k+1} = (x_1^{k+1}, x_2^{k+1})
7.   If (x^{k+1} ∉ X) { f(x^{k+1}) = −∞ }
8.   If (f(x^{k+1}) < f(x^k)) { x^{k+1} = x^k }
9.   k ← k + 1 }

Program 1.3 shows the implementation of Algorithm 1.2 in MATLAB. In the operation of
Program 1.3, the function f(x_1, x_2) is first plotted in order to visualize its characteristics.
Then, iteratively, the set of candidate solutions generated from the initial value x^0 is shown
until the optimum value x* is found. Figure 1.8 shows an example of the set of solutions
produced during the optimization process.

FIGURE 1.8 Solution map drawn on (a) the function contours and (b) the grayscale regions.

Program 1.3 Random Search Method


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Random search algorithm
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Erik Cuevas, Alma Rodríguez
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Clear memory
clear all
% Definition of the objective function
funstr='3*(1-x).^2.*exp(-(x.^2)-(y+1).^2)-10*(x/5-x.^3-y.^5).*exp(-x.^2-y.^2)-1/3*exp(-(x+1).^2-y.^2)';
f=vectorize(inline(funstr));
range=[-3 3 -3 3];
% Draw the objective function
Ndiv=50;
dx=(range(2)-range(1))/Ndiv; dy=(range(4)-range(3))/Ndiv;
[x,y] =meshgrid(range(1):dx:range(2),range(3):dy:range(4));
z=f(x,y);
figure(1); surfc(x,y,z);
% Definition of the number of iterations
Niter=3000;
k=0;
% Initialization of the candidate solution
xrange=range(2)-range(1);
yrange=range(4)-range(3);
xn=rand*xrange+range(1);
yn=rand*yrange+range(3);
figure
% Starting point of the Optimization process
while(k<Niter)
% It is tested if the solution falls inside the search space
if ((xn>=range(1))&(xn<=range(2))&(yn>=range(3))&(yn<=range(4)))
%If yes, it is evaluated
zn1=f(xn,yn);
else
% If not, it is assigned a low quality
zn1=-1000;
end
% The produced solution is drawn
contour(x,y,z,15); hold on;
plot(xn,yn,'.','markersize',10,'markerfacecolor','g');
drawnow;
hold on;
% A new solution is produced
xnc=xn+randn*1;
ync=yn+randn*1;
% It is tested if the solution falls inside the search space
if ((xnc>=range(1))&(xnc<=range(2))&(ync>=range(3))&(ync<=range(4)))
% If yes, it is evaluated
zn2=f(xnc,ync);
else
% If not, it is assigned a low quality
zn2=-1000;
end
% It is analyzed if the new solution is accepted
if (zn2>zn1)
xn=xnc;
yn=ync;
end
k=k+1;
end

1.7 SIMULATED ANNEALING
Simulated annealing (Kirkpatrick, Gelatt, & Vecchi, 1983) is an optimization technique
that emulates the annealing process of metallic materials. The idea of this process is to cool
a metallic material in a controlled way so that its crystal structures can orient themselves and
defects in the metal structure are avoided.
The use of this process as an inspiration for the formulation of optimization algorithms
was first proposed by Kirkpatrick et al. (1983). Since then, several studies and applications
have been suggested to analyze the scope of this method. Different from gradient-based
algorithms, which have the disadvantage of getting stuck in local minima, the simulated
annealing method presents a great ability to avoid this difficulty.
In simulated annealing, the objective function to optimize is analogous to the energy
of a thermodynamic system. At high temperatures, the algorithm allows the exploration
of very distant points within the search space. Under these circumstances, the probability
with which bad-quality solutions are accepted is very large.
On the other hand, at low temperatures, the algorithm allows the generation of points
in neighbor locations. In this stage, the probability of accepting bad-quality solutions
is also reduced. Therefore, only new solutions that enhance their previous value will be
considered.
The simulated annealing maintains only one candidate solution ( x k ) during its opera-
tion. This solution is modified in each iteration using a procedure similar to the random
search method, where each point is updated through the generation of a random vector
∆x. The simulated annealing algorithm does not only accept changes that improve the
objective function. It also incorporates a probabilistic mechanism that allows accepting
solutions with lower quality (worse solutions). The idea with this mechanism is to accept
bad solutions in order to avoid getting trapped in local minima.

Under these circumstances, assuming as an example a maximization problem, a new
solution x^{k+1} will be accepted considering two different alternatives. In the first option, if
the quality of x^{k+1} is superior to that of x^k (f(x^{k+1}) > f(x^k)), the new solution is
accepted directly. In the second option, although the quality of x^{k+1} is not superior to that
of x^k (f(x^{k+1}) < f(x^k)), the new solution x^{k+1} will be accepted according to an
acceptance probability p_a defined as

p_a = e^{\Delta f / T},   (1.17)

where T represents the temperature that controls the cooling process, while Δf symbolizes
the energy difference between the points x^{k+1} and x^k, which is defined as

\Delta f = f(x^{k+1}) - f(x^k)   (1.18)

Therefore, the acceptance or not of a new position x k+1 is performed under the following
procedure. First, a random number r1 uniformly distributed between [0,1] is produced.
Then, if r1 < pa , the point x k+1 is accepted as the new solution.
For a given energy difference ∆f , if T is large, then pa →1, which means that all the
suggested values of x k+1 will be accepted regardless of their quality in comparison with
the previous candidate solution x k . If T is very small, then pa → 0, which means that only
the values of x k+1 that improve the quality of x k will be accepted. When this happens, the
search strategy of simulated annealing is similar to the random search method.
Thus, if T is large, the algorithm simulates a system with high thermal energy. Under
these conditions, the search space X is explored extensively. On the other hand, if T is very
small, the system only allows refining locally around positions that are already known.
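The acceptance rule of Eqs. 1.17 and 1.18 can be written compactly in MATLAB. The
following lines are a minimal sketch for a maximization problem, assuming an objective
function handle f, a current solution x, a candidate solution xnew, and a current temperature
T; these variable names are illustrative only:

% Energy difference between the candidate and the current solution (Eq. 1.18)
deltaf=f(xnew)-f(x);
if deltaf>0
    x=xnew;                    % The candidate improves the solution: accept it
elseif rand<exp(deltaf/T)      % Acceptance probability pa (Eq. 1.17)
    x=xnew;                    % A worse candidate is accepted probabilistically
end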
From the parameter description, it is clear that the most important element in simulated
annealing is the cooling control. This factor specifies the process by which the temperature
is varied from high to low. Since this process depends on the specific application, it requires
a calibration stage performed by trial and error. There are several ways to control the cooling
process from an initial temperature T_ini to a final temperature T_fin. For this task, two
methods, the linear and the geometric, are commonly used.
In the linear scheme, the temperature reduction is modeled using the following
formulation:

T(k) = T_ini - \beta \cdot k,   (1.19)

where β is the cooling rate. It should be chosen so that T → 0 when k → Niter (the maximum
number of iterations). This means that β = (T_ini − T_fin)/Niter. On the other hand, in the
geometric scheme, the temperature is decremented by the use of a cooling factor η defined
on the interval [0,1]. Therefore, the geometric cooling strategy is modeled using the following
expression:

T(k) = T_ini \cdot \eta^k,   (1.20)

The advantage of the geometric model is that T → 0 when k → ∞. In practice, η ∈ [0.7, 0.95].
Algorithm 1.3 shows the full computational procedure of the simulated annealing method in
the form of a pseudocode.
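Before examining the pseudocode, both cooling strategies can be generated and compared
with a few lines of MATLAB. The following sketch computes the temperature profiles of
Eqs. 1.19 and 1.20 for illustrative values of T_ini, T_fin, Niter, and η (the numbers are only
examples, not prescriptions of the method):

% Illustrative configuration of the cooling process
Tini=1; Tfin=1e-10; Niter=150; eta=0.95;
k=0:Niter;
beta=(Tini-Tfin)/Niter;        % Cooling rate of the linear scheme
Tlin=Tini-beta*k;              % Linear cooling (Eq. 1.19)
Tgeo=Tini*eta.^k;              % Geometric cooling (Eq. 1.20)
plot(k,Tlin,k,Tgeo);           % Compare both temperature profiles
legend('linear','geometric'); xlabel('k'); ylabel('T');

The geometric profile falls quickly at the beginning and then decreases slowly, which is the
behavior exploited by Program 1.4.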
The simulated annealing method generally begins with a configuration of its parameters
T_ini, T_fin, β, and Niter. Then, an initial point within the search space X is randomly
generated. Afterward, the evolution process begins. This process continues either until the
number of iterations has been reached or until the temperature has achieved its projected
final value T_fin. During the process, a new candidate solution x^{k+1} is produced by
applying a perturbation Δx to the original solution x^k. Such a modification Δx is used in a
similar way to the random search algorithm described in Section 1.6. Once x^{k+1} is
generated, it is analyzed whether this new solution will be accepted or not. Two possibilities
are considered for this purpose. (1) If the new solution x^{k+1} improves the previous one,
x^{k+1} is accepted. (2) Otherwise, if there is no improvement, the acceptance of x^{k+1} is
tested with the probability p_a. If the test is positive, x^{k+1} is accepted; otherwise, it is
discarded and x^k is maintained. Additionally, in each iteration, the temperature is decreased
by the factor β in order to reduce the degree of acceptance of new solutions that do not
present better quality than their predecessors.
For the generation of new candidate solutions x k+1, the simulated annealing algorithm
considers a random modification ∆x . Because of this, many new solutions could fall out-
side the search space X. The random search algorithm avoids this problem by assigning to
the new solution x k+1 a very bad quality, with the objective that x k+1 can never be selected.
As the simulated annealing scheme allows a probabilistic acceptance of solutions that
have poor quality, solutions outside the search space X could also be accepted. To avoid
this problem, the simulated annealing algorithm implements a solution-generation
mechanism that does not consider solutions outside the limits of the search space X. This
mechanism is implemented through a while cycle that runs continually until a feasible
solution belonging to the search space X has been produced (see lines 5–7 of Algorithm 1.3).
An improvement often implemented in the simulated annealing method is the gradual
reduction of the random perturbations Δx. The idea is that in the first iterations, large
random jumps are permitted in order to explore the search space extensively. However, as
the algorithm evolves, this ability is reduced so that in the last iterations only local
perturbations around a solution are performed (with very small values of Δx). The
disturbances Δx are calculated by using a Gaussian distribution such as Δx ← N(0, σ·T).
Therefore, this behavior is easily implemented by multiplying the standard deviation by the
current value of the temperature T. Under these conditions, as the temperature decreases
(T → 0) during the optimization process, the perturbation magnitude also diminishes
(Δx → 0).
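This temperature-dependent perturbation can be sketched in MATLAB as follows. The
snippet assumes a base standard deviation sigma, a geometric cooling factor eta, and a
current solution x, in the spirit of Program 1.4; the names are illustrative and the dots mark
the evaluation and acceptance steps that are omitted here:

sigma=2.5; Tini=1; eta=0.95;   % Illustrative perturbation and cooling values
T=Tini;
for k=1:150
    dx=randn*sigma*T;          % Disturbance scaled by the current temperature
    xnew=x+dx;                 % Large jumps at high T, local moves at low T
    % ... evaluation and probabilistic acceptance of xnew go here ...
    T=eta*T;                   % The jump size shrinks as the system cools
end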

TABLE 1.3 Parameter Values Used by Simulated Annealing in Program 1.4

Parameter    Value
T_ini        1
T_fin        1 × 10^−10
β            0.95
Niter        150
σ            2.5

Algorithm 1.3 Simulated Annealing Algorithm

1. T_ini, T_fin, β, Niter, σ ← configuration
2. k ← 0, T = T_ini
3. x^k ← Random[X]
4. while (k < Niter and T > T_fin) {
5.   do {
6.     Δx ← N(0, σ·T), x^{k+1} = x^k + Δx
7.   } while (x^{k+1} ∉ X)
8.   Δf = f(x^{k+1}) − f(x^k)
9.   If (Δf > 0) { x^k = x^{k+1} }
10.  p_a ← e^{Δf/T}
11.  r_1 ← Random[0,1]; If (Δf < 0 and r_1 < p_a) { x^k = x^{k+1} }
12.  T ← β·T
13.  k ← k + 1 }

1.7.1 Computational Example in MATLAB


To illustrate the operation of the simulated annealing algorithm and its practical imple-
mentation in MATLAB, in this section, the simulated annealing method is used to solve
the maximization problem formulated in Eq. 1.16. Program 1.4 shows the implementation
of Algorithm 1.3 in MATLAB. In its operation, the parameters of the simulated annealing
approach have been configured as shown in Table 1.3.
The simulated annealing method significantly reduces the number of iterations needed to
find the optimal solution x* in comparison with the random search algorithm. In
metaheuristic schemes, there is always the problem of finding the appropriate number of
iterations to solve a particular optimization task. If too few iterations are performed, there is
a risk that the global optimum is never found. On the other hand, if the number of iterations
is exaggerated, the solution converges to the optimal value, but at a high computational
expense.

Program 1.4 Simulated Annealing Algorithm


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Simulated annealing
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Erik Cuevas, Alma Rodríguez
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Clear memory
clear all
% Definition of the objective function
funstr='3*(1-x).^2.*exp(-(x.^2)-(y+1).^2)-10*(x/5-x.^3-y.^5).*exp(-x.^2-y.^2)-1/3*exp(-(x+1).^2-y.^2)';
f=vectorize(inline(funstr));
range=[-3 3 -3 3];
% Draw the Function as reference
Ndiv=50;
dx=(range(2)-range(1))/Ndiv; dy=(range(4)-range(3))/Ndiv;
[x,y] =meshgrid(range(1):dx:range(2),range(3):dy:range(4));
z=f(x,y);
figure(1); surfc(x,y,z);
% Definition of the number of iterations
k=1;
valid=0;
Niter=150;
% Define initial temperature
T_init =1.0;
% Define final temperature
T_fin = 1e-10;
% Cooling rate
beta=0.95;
% Initialization of the candidate solution
xrange=range(2)-range(1);
yrange=range(4)-range(3);
xn=rand*xrange+range(1);
yn=rand*yrange+range(3);
% Temperature initialization
T = T_init;
% Starting point of the Optimization process
while (k<Niter)
% The produced solution is drawn
contour(x,y,z); hold on;
plot(xn,yn,'.','markersize',10,'markerfacecolor','g');
drawnow;
hold off;
% It is evaluated the fitness of x(k)
E_old = f(xn,yn);
% A new solution x(k+1) is produced
% The magnitude of the perturbation is modified by the temperature
while (valid==0)
xnc=xn+randn*2.5*T;
ync=yn+randn*2.5*T;
% It is tested if the solution falls inside the search space
if ((xnc>=range(1))&(xnc<=range(2))&(ync>=range(3))&(ync<=range(4)))
valid=1;
end
end
valid=0;
% The current temperature is stored
data(k)=T;
% It is evaluated the fitness of x(k+1)
E_new=f(xnc,ync);
% The fitness difference is computed
DeltaE=E_new-E_old;
% Acceptance for fitness quality
if (DeltaE>0)
xn=xnc;
yn=ync;
end
% Probabilistic Acceptance
if (DeltaE<0 & exp(DeltaE/T)>rand);
xn=xnc;
yn=ync;
end
% Update Temperature
T = beta*T;
if (T<T_fin)
T=T_fin;
end
k=k+1;
end

In the operation of Program 1.4, first, the objective function f(x_1, x_2) is plotted in order to
appreciate its main characteristics. Then, iteratively, the set of solutions generated from the
initial value x^0 to the optimum value x* is shown. Figure 1.9a presents an example of the
set of solutions produced during the evolution process when Program 1.4 is executed.
Figure 1.9b shows the cooling process experienced during the evolution.

FIGURE 1.9 (a) Produced solution map over the function contour and (b) cooling process
experienced during the optimization process.

EXERCISES

f(x) = \sum_{i=1}^{n} -x_i \sin(x_i)

Determine
a. which are the decision variables,
b. the number of dimensions.

Minimize f(x) = x^4 - 15x^2 + 10x + 24
Subject to: x ∈ [−4, 3]

Minimize f_1(x_1, x_2) = x_1 \cdot e^{-x_1^2 - x_2^2}
Subject to: −2 ≤ x_1 ≤ 2, −2 ≤ x_2 ≤ 2

Then, analyze the performance of the gradient descent method considering that the
parameter α assumes the following values: 0.05, 0.1, and 0.5.

Minimize f_2(x_1, x_2) = \sin(x_1) \cdot \sin(x_2)
Subject to: 0 ≤ x_1 ≤ 6, 0 ≤ x_2 ≤ 6

a. Determine the number of local minima.


b. Analyze the performance of the gradient descent method considering that the
parameter α assumes the following values: 0.05, 0.1, and 0.5.
c. Explain the behavior of the algorithm in the presence of more than one global
minimum.

Minimize f_3(x_1, x_2) = \mathrm{floor}\left(10 \cdot \left(10 - e^{-(x_1^2 + 3x_2^2)}\right)\right)
Subject to: −1 ≤ x_1 ≤ 1, −1 ≤ x_2 ≤ 1

b. Discuss the characteristics of this problem that do not allow the operation of the
gradient descent method.

Solution    f(·)
x_1^k       f(x_1^k) = 1
x_2^k       f(x_2^k) = 2
x_3^k       f(x_3^k) = 5
x_4^k       f(x_4^k) = 2

Considering the probabilistic selection method, determine the probability of choosing the
element x_3^k over the others.

Maximize f_4(x_1, x_2) = e^{-\left((x_1 - \pi)^2 + (x_2 - \pi)^2\right)/30}
Subject to: −100 ≤ x_1 ≤ 100, −100 ≤ x_2 ≤ 100

a. Determine the global maximum.


b. Determine which characteristics of f_4 make the performance of the search method
difficult.

Maximize f_5(x_1, x_2) = 20 \cdot e^{-0.2\sqrt{0.5(x_1^2 + x_2^2)}} + e^{0.5(\cos(2\pi x_1) + \cos(2\pi x_2))} - 20
Subject to: −5 ≤ x_1 ≤ 5, −5 ≤ x_2 ≤ 5

a. Determine the global maximum.


b. Analyze the multimodal characteristics of the function f5.

REFERENCES
Baldick, R. (2006). Applied optimization: Formulation and algorithms for engineering systems.
Cambridge: Cambridge University Press.
Bartholomew-Biggs, M. (2008). Nonlinear optimization with engineering applications. US: Springer.
Deb, K. (2001). Multi-objective optimization using evolutionary algorithms. John Wiley & Sons, Inc.
Dennis, J. E. (1978) A brief introduction to quasi-Newton methods. In G. H. Golub & J. Öliger
(Eds.), Numerical Analysis: Proceedings of Symposia in Applied Mathematics (pp. 19–52).
Providence, RI: American Mathematical Society.
Hocking, L. (1991). Optimal control: An introduction to the theory with applications. US: Oxford
University Press.
Kirkpatrick, S., Gelatt, C., & Vecchi, P. (1983). Optimization by simulated annealing. Science,
220(4598), 671–680.
Mathews, J., & Fink, K. (2000). Métodos numéricos con MATLAB. Madrid: Prentice Hall.
Matyas, J. (1965). Random optimization. Automation and Remote Control, 26, 246–253.
Reinhardt, R., Hoffmann, A., & Gerlach, T. (2013). Nichtlineare Optimierung. Heidelberg, Berlin:
Springer.
Simon, D. (2013). Evolutionary optimization algorithms, biologically inspired and population-based
approaches to computer intelligence. Hoboken, New Jersey: John Wiley & Sons, Inc.
Venkataraman, P. (2009). Applied optimization with MATLAB programming (2nd ed.). Hoboken,
New Jersey: John Wiley & Sons, Inc.
Yang, X.-S. (2010). Engineering optimization. Hoboken, New Jersey: John Wiley & Sons, Inc.
Chapter 2

Genetic Algorithms (GA)

2.1 INTRODUCTION
The initial concept of a genetic algorithm (GA) was first introduced by John Holland at
the University of Michigan (Holland, 1975). Nowadays, there exist several GA variants
where the original structure has been changed or combined with other computational
schemes. Such new GA approaches can be found in recent publications contained in
specialized journals such as Evolutionary Computation, Transactions on Evolutionary
Computation, Swarm and Evolutionary Computation, Applied Soft Computing, Soft
Computing, Evolutionary Intelligence, Memetic Computing, and Neural Computing and
Applications, to name a few.
GAs use natural genetics as a computational principle (Bäck, Fogel, & Michalewicz, 1997).
In this chapter, we will discuss the main characteristics of the operation of GAs.
GAs represent search strategies that have been extracted from the principles of natural
selection. Therefore, the central concepts of genetics are adopted and adapted artificially to
produce search schemes. These schemes maintain high robustness and minimal information
requirements.
GAs define three important operators (Vose, 1999):

• Selection
• Crossover
• Mutation

In its operation, a GA starts its strategy by producing a set of random solutions, instead of
considering only one candidate solution. Once the initial population of solutions is pro-
duced, each of them is evaluated in terms of the optimization problem symbolized by a
cost function. The value of this cost function is known in the metaheuristic context as a
fitness value assigned to each candidate solution. The evaluation of a solution corresponds
to combining its cost function value and its respective constraint violation. The result of

the lower part of their bodies and no more.
In summer the rivers in which it lives often dry up altogether, and
the mud at the bottom is baked as hard as a brick by the rays of the
sun. So, as soon as the water begins to get shallow, the animal
burrows deep down into the mud, curls itself up like a fried whiting,
and falls fast asleep for several months, just as hedgehogs and
dormice do during the winter in cold countries. Then, when the rainy
season comes and the rivers fill up again, it comes out from its
retreat and swims about as before. It is from this habit that it gets
its name of mud-fish.
Now we come to the true fishes; and perhaps our best plan will
be to read about some of the fresh-water fishes first, and afterward
about some of those which live in the sea.

Sticklebacks
Let us begin with a little fish which is very common in almost
every pond, but is nevertheless very curious and very interesting.
When fully grown, the stickleback is about three inches long, and
you can tell it at once by the sharp spines on its back, which it can
raise and lower at will. It uses these spines in fighting. For the male
sticklebacks, at any rate, are most quarrelsome little creatures, and
for several weeks during the early part of the summer they are
constantly engaged in battle.
At this season of the year they are really beautiful little fishes, for
the upper parts of their bodies are bright blue and the lower parts
rich crimson, while their heads become pale drab, and their eyes
bright green! And apparently they are very jealous of one another,
for two male sticklebacks in their summer dress never seem able to
meet without fighting. Raising their spines, they dash at one another
over and over again with the utmost fury, each doing his best to
swim underneath the other and cut his body open. When one of
them is beaten he evidently feels quite ashamed of himself, for he
goes and hides in some dark corner where nobody can see him.
And, strange to say, as soon as he loses the battle his beautiful
colors begin to fade, and in a very few hours they disappear
altogether.
About the beginning of June, all the male sticklebacks which
have not been beaten set to work to build nests. These nests are
shaped like little tubs with no tops or bottoms, and they are made of
tiny scraps of grass and cut reed and dead leaf, neatly woven
together. As soon as they are finished the female sticklebacks lay
their eggs in them. Then the males get inside, and watch over the
eggs until they hatch.
NORTH AMERICAN FOOD AND GAME FISHES

Perches
Another very handsome fresh-water fish is the perch, which is
plentiful in almost every river and lake in the warmer parts of the
whole world. In color it is rich greenish brown above and yellowish
white below, with from five to seven upright dark bands on either
side of its body, while the upper fins are brown and the lower ones
and the tail bright red.
The front fin on the back of the perch, which can be raised or
lowered at will, is really a very formidable weapon, for it consists of
a row of very sharp spines projecting for some little distance beyond
the membrane which joins them together. Even the pike is afraid of
these spines, and it is said that although he will seize any other
fresh-water fish without a moment's hesitation, he will never venture
to attack a perch.
Early in the month of May the mother perch lays her eggs, which
she fastens in long bands to the leaves of water-plants. Their
number is very great, over 280,000 having been taken from quite a
small perch of only about half a pound in weight!
The climbing perch of India, notwithstanding its name, is not a
true perch, but belongs to quite a different family. It is famous for its
power of leaving the water and traveling for a considerable distance
over dry land. It does this in the hot season if the stream in which it
is living dries up; and if you were to live in certain parts of India you
might perhaps meet quite a number of these fishes shuffling across
the road by means of their lower fins, and making their way as fast
as possible toward the nearest river!
But how do they manage to remain out of the water for so long?
Well, the fact is that fishes can live for a long time out of the
water if their gills are kept moist. In some fishes, such as the
herring, this is not possible, because their gills are made in such a
way that they become dry almost immediately. But the climbing
perch has a kind of cistern in its head, just above the gill-chambers,
which contains quite a quantity of water. And while the fish is
traveling over land this water passes down, drop by drop, to the
gills, and keeps them constantly damp.
When this fish has been kept in an earthenware vessel, without
any water at all, it has been known to live for nearly a week!

The Carp
Another fish which will live for quite a long time out of the water
is the carp, which has often been conveyed for long distances
packed in wet moss.
This fine fish is a native of the Old World, where it is found both
in rivers and lakes, but prefers still waters with a soft muddy bottom,
in which it can grovel with its snout in search of food. During the
winter, too, it often buries itself completely in the mud, and there
hibernates, remaining perfectly torpid until the return of warmer
weather. It is not at all an easy fish to catch, for it is so wary that it
will refuse to touch any bait in which it thinks that a hook may be
concealed. And if the stream in which it is living is dragged with a
net, it just burrows down into the mud at the bottom and allows the
net to pass over it.
Owing to this crafty and cunning nature, the carp has often been
called the fresh-water fox.
The carp is a very handsome fish, being olive brown above, with
a tinge of gold, while the lower parts are yellowish white. It
sometimes weighs as much as twenty-five pounds, and has been
known to lay more than 700,000 eggs! It is domesticated in many
parts of North America and other countries.

The Barbel
Found in many Old World rivers, the barbel may be known at
once by the four long fleshy organs which hang down from the nose
and the corners of the mouth. These organs are called barbules, and
may possibly be of some help to the fish when it is grubbing in the
soft mud in search of the small creatures upon which it feeds. It
spends hours in doing this, and a hungry barbel is sometimes so
much occupied in its task that a swimmer has dived down to the
bottom of the river and caught it with his hands. From this curious
way of feeding, and its great greediness, the barbel has sometimes
been called the fresh-water pig.
In color this fish is greenish brown above, yellowish green on the
sides of the body, and white underneath. When fully grown it weighs
from ten to twelve pounds.

The Roach
This is one of the prettiest of the European fresh-water fishes,
which is found in many lakes and streams. The upper part of the
head and back are grayish green, with a kind of blue gloss, which
gradually becomes paler on the sides till it passes into the silvery
white of the lower surface. The fins and the tail are bright red.
The roach does not grow to a very great size, for it seldom
weighs more than two pounds. It lives in large shoals, and in clear
water several hundred may often be seen swimming about together.

The Pike
One of the largest and quite the fiercest of the British fresh-water
fishes is the pike, which is found both in lakes and rivers. In America
we have no pike proper, but some of the great western lakes hold a
very large relative of similar habits, known as the maskinonge; and
our pickerels are only small pikes. Wonderful tales are told of the
ferocity of the pike. He does not seem to know what fear is, and his
muscular power is so great, and the rows of teeth with which his
jaws are furnished are so sharp and strong, that he is really a most
formidable foe. All other fresh-water fishes are afraid of him, while
he gobbles up water-birds of all kinds, and water-mice, and frogs,
and even worms and insects. And no matter how much food he eats,
he never seems to be satisfied.
When the pike is hungry, he generally hides under an
overhanging bank, or among weeds, and there waits for his victims
to pass by.
The young pike is generally known as the jack, and when only
five inches long has been known to catch and devour a gudgeon
almost as big as itself. With such a voracious appetite, it is not
surprising that the fish grows very fast, and for a long time it
increases in weight at the rate of about four pounds in every year.
How long it continues to grow nobody quite knows; but pike of
thirty-five or forty pounds have often been taken, and there have
been records of examples even larger still.
In color the pike is olive brown, marked with green and yellow.

Trout
Perhaps the greatest favorite of all anglers is the trout, which, in
one or more of its various species, is to be caught in almost every
swift stream and highland lake throughout the temperate zone,
except where the race has been destroyed by too persistent fishing.
This happens everywhere near civilization, unless protective laws
regulate the times and places where fishing may be done. Similar
laws are required to save many other kinds of fishes from quick
destruction at the hands of the thoughtless and selfish, and they
should be honestly obeyed and supported in spite of their
occasionally interfering with amusement.
Trout are graceful in form and richly colored, most of them
having arrangements of bright spots and gaily tinted fins. The
common trouts of Europe and the eastern half of the United States
and Canada are much alike; but in the Rocky and other mountains of
the western shore of our continent others quite different are
scattered from the Plains to the Pacific. One of the most interesting
and beautiful of these, the rainbow-trout, has been brought into the
East, and has made itself at home in many lakes and rivers of the
Northern States and Canada.
The trout is an extremely active fish, and when it is hooked it
tries its very hardest to break away, dashing to and fro, leaping,
twisting, and fighting, and often giving the angler a great deal of
trouble before he can bring it in. In small streams it seldom grows to
any great size, but in some of the Scottish lochs and lakes of Maine
trout weighing fifteen or even twenty pounds are often taken. It is
sometimes considered, however, that these belong to a different
species.

The Salmon
More famous even than the trout is the salmon, the largest and
finest of all our fresh-water fishes, which often reaches a weight of
forty-five or fifty pounds, and sometimes grows to still greater size.
It is hardly correct, however, to speak of it as a fresh-water fish,
for although salmon are nearly always caught in rivers, they spend a
considerable part of their lives in the sea.
Salmon are of two kinds—the Atlantic and the Pacific species;
and the life-history of each is a very curious one.
During the winter the parent fishes of the Atlantic salmon, which
used to be exceedingly numerous in all our northern rivers emptying
into the Atlantic, and still haunt the rivers of Northeastern Canada,
and of Scotland, make their way as far up a clear and gravelly river
as they possibly can, till they find a suitable place in which to lay
their eggs. The mother then scoops a hole at the bottom of the
stream, in which she deposits her eggs in batches, carefully covering
up each batch as she does so. At this time both parents are in very
poor condition, and are known to anglers as "kelts." For a
time they remain in the river, feeding ravenously. Then in March or
April they travel down the river and pass into the sea, where they
stay for three or four months, after which they ascend the river
again, as before.
Meanwhile the eggs remain buried in the gravel for about four
months. At the end of that time the little fishes hatch out, and
immediately hide themselves for about a fortnight under a rock or a
large stone. You would never know what they were if you were to
see them, for they look much more like tadpoles than fishes; and
each has a little bag of nourishment underneath its body on which it
lives. When this is exhausted they leave their retreat and feed upon
small insects, growing very rapidly, until in about a month's time
they are four inches long. They are now called parr and have a row
of dark stripes upon their sides, and in this condition they remain for
at least a year. Their color then changes, the stripes disappearing,
and the whole body becoming covered with bright silvery scales.
The little fishes are now known as smolts, and, like their parents,
they make their way down the river and pass into the sea. There
they remain until the autumn, when they ascend the river again. By
this time they have grown considerably, weighing perhaps five or six
pounds, and are called grilse. And it is not until they have visited the
sea again in the following year that they are termed salmon.
When salmon are ascending a river and come to a waterfall, they
climb it by leaping into the air and so springing into the stream
above the fall, trying over and over again until they succeed. When
the fall is too high to be climbed in this way, the owners of the river
often make a kind of water staircase by the side of it, so that the
fishes can leap up one stair at a time. This is called a salmon-ladder.

North Pacific Salmon


Now this description would not at all fit the case of the salmon
which live in the North Pacific and ascend the rivers of California,
British Columbia, and Alaska, and of Siberia and Japan on the other
side of the ocean. These are the salmon which supply the whole
country, and many other countries, with their pink flesh, boiled, and
sealed in cans, so that it may be sent long distances and kept many
months without spoiling. Every spring and summer, at different times
according to the locality and the species—there are five kinds of
importance, caught for the trade—vast numbers of them enter the
mouths of the rivers and begin to make their way up-stream in their
effort to reach the shallow head waters of each river, and of every
one of its tributaries. It is at this time that they are caught by
spearing, netting, and various contrivances; but laws prevent any
general obstruction which would altogether stop the advance of the
host, so that while tens of thousands are taken great numbers
escape and pass on, as it is necessary they should do in order to lay
eggs and so keep up the race.
This takes place far up at the heads of the streams in the
foothills of the mountains; and having deposited the spawn, late in
summer, the spent fish begin to drift down stream again. But all this
time they have been eating nothing, they are worn with the long
struggle against the rapids, often wounded by sharp rocks, and are
good for nothing to catch or eat. In fact, so fagged out and weak
are they that all of them die before any reach the mouth of the river.
It is a strange fact that of all the vast host of salmon which each
summer climb the rivers not a single one gets back to the sea.
A year later, however, the young hatched from the eggs which
were left behind them at the heads of the streams swim down the
rivers and enter the ocean. There they remain, probably not very far
from land, for two or three years, feeding and growing until they are
of full size and strength; and each season a class of them, having
reached the right age and condition to spawn, force their way up to
the spawning-grounds, to leave their eggs and then die, as did their
parents before them.

Eels
The only other fresh-water fishes which we can notice are the
eels, which look more like snakes than fishes, for they have long
slender bodies, with a pair of tiny fins just behind the head, a long
one running along the back and tail, like a crest, and another,
equally long, under the body. And they are clothed with a smooth,
slimy skin instead of with scales.
These curious creatures live in ponds and even in ditches as well
as in rivers, and are very plentiful in all parts of the northern
hemisphere. During the daytime, although they will sometimes bask
at the surface in the warm sunshine, they generally lie buried in the
mud at the bottom of the water, coming out soon after sunset to
feed. And when the weather is damp, so that their gills are kept
moist as they wriggle through the herbage, they will often leave the
water and travel for some little distance overland.
They frequently do this when they are traveling toward the sea.
For it is a strange fact that, although they are fresh-water fishes,
eels both begin and end their lives in the sea.
In the first place, the eggs are laid in the sea—generally quite
close to the mouth of a river. When the little elvers, as the young
eels are called, hatch out, they make their way up the river in
immense shoals. In the English river Severn, for instance, several
tons of elvers are often caught in a single day; and about thirty
million elvers go to the ton! After being pressed into cakes and fried,
these little creatures are used for food; but they are so rich that one
cannot eat very many at once.
When they have traveled far enough up the river, most of the
elvers which have escaped capture make their way to different
streams and pools and ditches, and there remain until their growth
is completed. They then begin to journey back to the sea, and when
they reach it they lay eggs in their turn. After this, apparently, they
die.
In the rivers of South America a most wonderful eel is found
which has the power of killing its victims by means of an electric
shock, wherefore it is called the electric eel. The electricity is
produced and stored up in two large organs inside the body, but
how it is discharged nobody knows. If the fish is touched it merely
gives a slight shudder. But the shock is so severe that quite a large
fish can be killed by it, while a man's arm would be numbed for a
moment right up to the shoulder.

Lampreys
The lamprey, which is found plentifully in many northern rivers, is
very much like an eel in appearance. But it has no side fins, and
instead of possessing jaws, it has a round mouth used for sucking,
and resembling that of a leech; and on either side of its neck it has a
row of seven round holes, through which water passes to the
breathing-organs.
Lampreys seem to spend the greater part of their lives in the sea,
but always come up the rivers to spawn. They lay their eggs in a
hollow in the bed of the stream, which they make by dragging away
stone after stone till the hole is sufficiently deep. Very often a large
number of lampreys combine for this purpose, and make quite a big
hole, in which they all lay their eggs together.
The length of the lamprey is generally from fifteen to eighteen
inches, and its color is olive brown.