Taylor & Francis
Taylor & Francis Group
http://taylorandfrancis.com
Metaheuristic Computation
with MATLAB®
Erik Cuevas
Alma Rodríguez
MATLAB® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not warrant the accuracy of the text or exercises in this book. This book's use or discussion of MATLAB® software or related products does not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular use of the MATLAB® software.
Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot
assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers
have attempted to trace the copyright holders of all material reproduced in this publication and apologize to
copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been
acknowledged please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or
utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including
photocopying, microfilming, and recording, or in any information storage or retrieval system, without written
permission from the publishers.
For permission to photocopy or use material electronically from this work, access www.copyright.com or contact the
Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that are
not available on CCC please contact [email protected]
Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for
identification and explanation without intent to infringe.
Typeset in Minion
by codeMantra
Contents
Preface, xi
Acknowledgments, xvii
Authors, xix
References, 181
Index, 257
Preface
Optimization applications are countless. Almost all processes of practical interest can be
optimized to improve their performance. Currently, there is hardly a company that does not
consider the solution of optimization problems within its activities. In general terms, many
processes in science and industry can be formulated as optimization problems. Optimization
occurs in the minimization of the time spent on the execution of a task, the cost of a product,
or the risk in an investment, as well as in the maximization of profits, the quality of a
product, or the efficiency of a device.
The vast majority of optimization problems with practical implications in science,
engineering, economics, and business are very complex and difficult to solve. Such
problems cannot be solved accurately by classical optimization methods. Under
these circumstances, metaheuristic computation methods have emerged as an alternative
solution.
Metaheuristic algorithms are generic optimization tools that can solve
very complex problems characterized by very large search spaces. Metaheuristic
methods reduce the effective size of the search space through the use of effective search
strategies. In general, these methods solve problems faster and more robustly than
classical schemes. In comparison to other heuristic algorithms, metaheuristic techniques
are simpler to design and implement.
Metaheuristic methods represent an important area of artificial intelligence and applied
mathematics. During the last ten years, several metaheuristic approaches have
appeared that lie at the intersection of different disciplines, including artificial
intelligence, biology, social studies, and mathematics. Most metaheuristic methods draw
their inspiration from existing biological or social phenomena which, at a certain level of
abstraction, can be regarded as models of optimization.
Recently, metaheuristic algorithms have become popular in science and industry. An
indicator of this situation is the large number of specialized journals, sessions, and confer-
ences in this area. In practice, metaheuristic schemes have attracted great interest, since
they have proved to be efficient tools for the solution of a wide range of problems in domains
such as logistics, bio-informatics, structural design, data mining, and finance.
The main purpose of this book is to provide a unified view of the most popular meta-
heuristic methods. From this perspective, the fundamental design principles, as well as the
operators of metaheuristic approaches that are considered essential, are presented. In the
explanation, not only the design aspects but also their implementation have been considered
using the popular software MATLAB®. The idea behind this combination is to motivate the
reader, with the knowledge acquired about each method, to reuse the existing code by
configuring it for his/her specific problems. All the MATLAB codes contained in the book,
as well as additional material, can be downloaded from www.crcpress.com/9780367438869.
This book provides the necessary concepts that enable the reader to implement and
modify the already known metaheuristic methods to obtain the desired performance for
the specific needs of each problem. For this reason, the book contains numerous examples
of problems and solutions that demonstrate the power of these methods of optimization.
The material has been written from a teaching perspective. For this reason, the book is
primarily intended for undergraduate and postgraduate students of Artificial Intelligence,
Metaheuristic Methods, and/or Evolutionary Computation. It can also be appropriate for
courses such as Optimization and Computational Mathematics. Likewise, the material can
be useful for researchers from metaheuristic and engineering communities. The objective
is to bridge the gap between metaheuristic techniques and complex optimization problems
that profit from the convenient properties of metaheuristic approaches. Therefore, students
and practitioners who are not metaheuristic computation researchers will appreciate that
the techniques discussed go beyond simple theoretical tools, since they have been adapted
to solve significant problems that commonly arise in such areas.
Due to its content and structure, the book is suitable for fulfilling the requirements of
several university subjects in the areas of computing sciences, artificial intelligence,
operations research, applied mathematics, and some other disciplines. Similarly, many
engineers and professionals who work in industry may find the content of this book
interesting. In this case, the simple explanations and the provided code can assist
practitioners in finding solutions to optimization problems that normally arise in various
industrial areas.
Our original premise has been that metaheuristic methods can be easily exposed to
readers with limited mathematical skills. Consequently, we have tried to write a book whose
contents are not only applicable but also understandable for any undergraduate student.
Although some concepts can be complex in themselves, we try to expose them clearly,
without hiding their implicit difficulty.
The book is structured so that the reader can clearly identify from the beginning the
objectives of each chapter and finally strengthen the knowledge acquired through the
implementation of several MATLAB programs. The book has been conceived for an intro-
ductory course. The material can be covered in a semester. The book consists of nine chap-
ters, and the details in the contents of each chapter are described below.
Chapter 1 introduces the main concepts that are involved in an optimization pro-
cess. In this way, once the optimization problem is generically formulated, the methods
used for its solution are then classified. Considering that the book focuses on the study of
metaheuristic techniques, traditional gradient-based algorithms will be only marginally
treated. Another important objective in this chapter is to explain the main characteris-
tics of the evolutionary algorithms introducing the dilemma of exploration and exploi-
tation. Furthermore, the acceptance and probabilistic selection are also analyzed. They
are two main operations used in most metaheuristic methods. Finally, three of the first
evolutionary methods are presented, which have been considered the basis for the creation of
new algorithms. The idea behind this treatment is to introduce the concepts of metaheuristic
methods through implementing techniques that are easy to understand.
In Chapter 2, the metaheuristic techniques known as Genetic Algorithms (GAs) are
introduced. They implement optimization schemes that emulate evolutionary principles
found in nature. GAs represent one of the most important search approaches in several
problem domains, such as the sciences, industry, and engineering. The main reasons for
their extensive use are their flexibility, ease of implementation, and global context. Among
different GAs, we will examine in detail binary-coded and real-parameter GAs. In this
chapter, several MATLAB implementations will be discussed and explained.
Chapter 3 describes the operation of Evolutionary Strategies (ES). The evolution pro-
cess that the ES method implements to solve optimization problems is also discussed.
Throughout this chapter, the operators used by the ES are defined along with their different
variants and computational implementation in the MATLAB environment.
Chapter 4 describes the inspiration of the Moth–Flame Optimization (MFO) algorithm
as well as the search strategy it implements to solve optimization problems. Throughout
this chapter, the operators used by the MFO are defined with the objective of analyzing the
theoretical concepts involved that allow the computational implementation of the algo-
rithm in the MATLAB environment. Then, the algorithm is used to solve optimization
problems. The examples illustrate the use of MFO for solving problems with and without
constraints.
Chapter 5 analyzes the Differential Evolution (DE) scheme. This approach is a popu-
lation algorithm that implements a direct and simple search strategy. Under its oper-
ation, DE considers the generation of parameter vectors based on the addition of the
weighted difference between two members of the population. In this chapter, the opera-
tive details of the DE algorithm are discussed. The implementation of DE in MATLAB is
also described. The objective of this chapter is to provide the reader with the mathematical
description of the DE operators and the capacity to apply this algorithm to the solution
of optimization problems. To do this, in the subsequent sections, the use of the DE algo-
rithm is considered in two aspects: the first is the resolution of an optimization problem
by minimizing a mathematical benchmark function, and the second is the solution of engi-
neering problems that require an optimal design of their parameters considering some
design restrictions.
Chapter 6 presents the Particle Swarm Optimization (PSO) method. This scheme is
based on the collective behavior that some animals present when they interact in groups.
Such behaviors are found in several animal groups such as a school of fish or a flock of
birds. Through these interactions, individuals reach a higher level of survival by collaborating,
generating a kind of collective intelligence. This chapter describes the main charac-
teristics of the PSO algorithm, as well as its search strategy, also considering the solution
of optimization problems. In the chapter, the operators used by the PSO are defined with
the objective of analyzing the theoretical concepts that allow the computational
implementation of the algorithm in the MATLAB environment. Then, the algorithm is
used to solve real-world applications. The examples illustrate the use of PSO for solving
problems with and without restrictions.
In Chapter 7, the Artificial Bee Colony (ABC) algorithm is analyzed. In this chapter, the
parameters of the ABC algorithm, as well as the information necessary to implement it,
will be discussed in detail. In Section 7.1, an overview of the ABC algorithm, as well as its
most relevant characteristics, is given. In Section 7.2, the complete algorithm is
presented, reserving a sub-section for each of its components. A special
emphasis is placed on Section 7.2.6, where an optimization example of a two-dimensional
function using MATLAB is presented, and the results obtained are discussed. Finally,
in Section 7.3, a summary of recent applications of the ABC algorithm in the area of image
processing is presented.
In Chapter 8, the main characteristics of the Cuckoo Search (CS) scheme are discussed.
Due to its importance, a multimodal version of the CS method is also reviewed. CS is a
simple and effective global optimization algorithm that is inspired by the breeding behavior
of some cuckoo species. One of the most powerful features of CS is the use of Lévy flights
to generate new candidate solutions. Under this approach, candidate solutions are modi-
fied by employing many small changes and occasionally large jumps. As a result, CS can
substantially improve the balance between exploration and exploitation, thus enhancing
its search capabilities. Despite such characteristics, the CS method still fails to provide
multiple solutions in a single execution. In order to overcome this inconvenience, a mul-
timodal optimization algorithm called the multimodal CS (MCS) is also presented. Under
MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorpo-
ration of a memory mechanism to efficiently register potential local optima according to
their fitness value and the distance to other potential solutions, (2) the modification of the
original CS individual selection strategy to accelerate the detection process of new local
minima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated
memory elements.
In Chapter 9, the most common techniques used by metaheuristic methods to opti-
mize multimodal problems are analyzed. Since the sharing function scheme is the most
popular, this procedure is treated in detail in this chapter. Additionally, at the
end, we discuss the firefly algorithm. This method, inspired by the attraction
behavior of these insects, incorporates special operators that maintain interesting mul-
timodal capabilities.
Considering that writing this book has been a very enjoyable experience for the authors
and that the overall topic of metaheuristic computation has become a fruitful subject, it
has been tempting to introduce a large amount of new material and novel evolutionary
methods. However, the usefulness and potential adoption of the book seem to be founded
on a compact and appropriate presentation of successful algorithms, which in turn has
driven the overall organization of the book, which we hope provides the clearest picture
to the reader.
There are many people who have somehow been involved in the writing process of this book.
We thank the complete metaheuristic group at the Universidad de Guadalajara in Mexico
for supporting us in this project. We express our gratitude to Randi Cohen, who warmly
supported this project. Acknowledgments also go to Talitha Duncan-Todd, who kindly
helped in the editing process.
Erik Cuevas
Alma Rodríguez
Guadalajara, Mexico
Authors
Chapter 1
Introduction and
Main Concepts
OBJECTIVE
The objective of this chapter is to introduce the main concepts that involve an optimi-
zation process. In this way, once the optimization problem is generically formulated,
the methods used for its solution are then classified. Considering that the book focuses
on the study of metaheuristic techniques, traditional gradient-based algorithms will be
only marginally treated. Another important objective of this chapter is to explain the
main characteristics of evolutionary algorithms, introducing the dilemma of exploration
and exploitation. Furthermore, acceptance and probabilistic selection are also analyzed.
These are the two main operations used in most metaheuristic methods. Finally, three of
the first evolutionary methods, which have been considered the basis for the creation
of new algorithms, are presented. The idea behind this treatment is to introduce the
concepts of metaheuristic methods through implementing techniques that are easy to
understand.
1.1 INTRODUCTION
Optimization has become an essential part of all disciplines. One reason for this is the
motivation to produce products or quality services at competitive prices.
In general, optimization is the process of finding the “best solution” to a problem among a
large set of possible solutions (Baldick, 2006).
An optimization problem can be formulated as a process in which it is desired to find the
optimum value x* that minimizes or maximizes an objective function f(x), such that

Minimize/Maximize f(x),  x = (x1, …, xd) ∈ ℝ^d
Subject to: x ∈ X   (1.1)
where x represents the vector of decision variables, while d specifies its dimension. X sym-
bolizes the set of candidate solutions, also known as the solution search space. On many
occasions, the bounds of the search space are given by the lower (li) and upper (ui) limits
of each decision variable, such that X = {x ∈ ℝ^d | li ≤ xi ≤ ui, i = 1, …, d}.
Sometimes it is necessary to minimize f(x), but in other scenarios it is necessary to
maximize it. These two types of problems are easily converted from one to another through
the following relationship:

max f(x) = −min(−f(x))   (1.2)

As an example, consider the following minimization problem:

Minimize f(x) = x^4 + 5x^3 + 4x^2 − 4x + 1
Subject to: x ∈ [−4, 1]   (1.3)
FIGURE 1.1 Graphical representation of the optimization problem formulated in Eq. 1.3.
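The one-dimensional problem of Eq. 1.3 can also be explored numerically. The book's implementations use MATLAB; the following Python sketch, whose grid step of 0.001 is an illustrative choice rather than a value from the book, scans the feasible interval [−4, 1] for the point with the smallest objective value:

```python
# Grid scan of Eq. 1.3: minimize f(x) = x^4 + 5x^3 + 4x^2 - 4x + 1 on [-4, 1].
# The step size (0.001) is an illustrative assumption, not from the book.

def f(x):
    return x ** 4 + 5 * x ** 3 + 4 * x ** 2 - 4 * x + 1

step = 0.001
grid = [-4 + i * step for i in range(int(5 / step) + 1)]  # points in [-4, 1]
x_best = min(grid, key=f)
# x_best lies near the global minimum of the function inside [-4, 1],
# the deeper of the two valleys (points A and B) shown in Figure 1.1.
```

Such an exhaustive scan is only practical in one dimension; it motivates the gradient-based and metaheuristic strategies discussed next.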
Gradient-based methods refine a candidate solution iteratively through the rule

x^(k+1) = x^k − α·g(f(x)),   (1.4)

where k represents the current iteration and α symbolizes the size of the search step.
In Eq. 1.4, the term g(f(x)) represents the gradient of the function f(x). The gradient g
of a function f(x) at the point x expresses the direction in which the function presents its
maximum growth. Thus, in the case of a minimization problem, the descent direction can
be obtained (multiplying by −1) by considering the opposite direction to g. Under this rule,
it is guaranteed that f(x^(k+1)) < f(x^k), which means that the newly generated solution
is better than the previous one.
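The update rule of Eq. 1.4 can be sketched in a few lines of code. The book's programs are written in MATLAB; here is an equivalent minimal Python version for illustration, where the objective f(x) = x², its analytic gradient, and the step size α = 0.1 are all illustrative assumptions:

```python
# Minimal gradient-descent sketch of Eq. 1.4: x^(k+1) = x^k - alpha * g(f(x)).
# Illustrative assumptions: f(x) = x**2, whose gradient is 2x, and alpha = 0.1.

def gradient_descent(grad, x0, alpha=0.1, niter=100):
    x = x0
    for _ in range(niter):
        x = x - alpha * grad(x)  # move opposite to the gradient (descent direction)
    return x

f = lambda x: x ** 2
grad = lambda x: 2 * x           # analytic gradient of the assumed f

x_star = gradient_descent(grad, x0=3.0)
# x_star approaches the minimizer x = 0, and f decreases at every step.
```

With a suitably small step size, each iteration produces a solution with a lower objective value, as the rule intends.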
In general, although the formulation of an optimization problem involves the definition
of an objective function f(x), an explicit expression is available only for educational and
demonstration purposes. In practice, the objective function is not known deterministically;
its values are known only at the points sampled by the optimization algorithm. Under these
circumstances, the gradient g is calculated using numerical methods.
FIGURE 1.2 Graphical representation of the numerical calculation process of the gradient.
1.2.2 Gradient Computation
The gradient of a multidimensional function f(x) (x = (x1, …, xd) ∈ ℝ^d) represents the way
in which the function changes with respect to each of its d dimensions. Therefore, the
gradient g_x1 expresses the magnitude in which f(x) varies with respect to x1. This gradient
g_x1 is defined as

g_x1 = ∂f(x)/∂x1   (1.5)

To numerically calculate the gradient g_xi, the following procedure (Mathews & Fink, 2000)
is conducted:

1. A new solution x̃_i is generated. This solution x̃_i is the same as x in all the decision
   variables except in xi, whose value is replaced by xi + h, where h is a very small
   value. Under these conditions, the new vector x̃_i is defined as

   x̃_i = (x1, …, xi + h, …, xd)   (1.6)

2. With x̃_i, the gradient g_xi is approximated by the forward difference

   g_xi ≈ (f(x̃_i) − f(x)) / h   (1.7)
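The forward-difference approximation of Eq. 1.7 translates directly into code. The book works in MATLAB; the following Python sketch illustrates the same procedure, where the test function and the perturbation h = 1e-3 are illustrative assumptions:

```python
# Forward-difference gradient (Eq. 1.7): g_xi ~ (f(x_i~) - f(x)) / h,
# where x_i~ equals x except that its i-th entry is perturbed by a small h.

def numerical_gradient(f, x, h=1e-3):
    g = []
    fx = f(x)
    for i in range(len(x)):
        x_pert = list(x)       # copy of x (the vector x_i~ of Eq. 1.6)
        x_pert[i] += h         # perturb only the i-th decision variable
        g.append((f(x_pert) - fx) / h)
    return g

# Illustrative test function: f(x1, x2) = x1^2 + 3*x2^2,
# whose analytic gradient at (1, 1) is (2, 6).
f = lambda x: x[0] ** 2 + 3 * x[1] ** 2

g = numerical_gradient(f, [1.0, 1.0])
```

The approximation error shrinks with h, at the cost of numerical round-off when h becomes too small.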
Consider the following minimization problem:

Minimize f(x1, x2) = 10 − e^(−(x1² + 3·x2²))
Subject to: −1 ≤ x1 ≤ 1, −1 ≤ x2 ≤ 1   (1.8)

Under the gradient-descent method, the decision variables are updated as

x1^(k+1) = x1^k − α·g_x1
x2^(k+1) = x2^k − α·g_x2   (1.9)

This process is repeated iteratively until a maximum number of iterations Niter has been
reached.
FIGURE 1.4 Solution trajectory produced during the execution of Program 1.1.
The program shows the implementation of Algorithm 1.1 in MATLAB. In the operation
of the program, the function f(x1, x2) is first plotted in order to appreciate its main
characteristics. Then, in an iterative process, a set of solutions is produced from the initial
point x0 to the optimal value. The trajectory followed by the solutions during the
optimization process is illustrated in Figure 1.4.
Algorithm 1.1

1. k ← 0
2. x1^k ← Random[−1, 1], x2^k ← Random[−1, 1]
3. while (k < Niter) {
4.     g_x1 ← (f(x1^k + h, x2^k) − f(x1^k, x2^k))/h,  g_x2 ← (f(x1^k, x2^k + h) − f(x1^k, x2^k))/h
5.     x1^(k+1) ← x1^k − α·g_x1,  x2^(k+1) ← x2^k − α·g_x2
6.     k ← k + 1 }
funstr='10-(exp(-1*(x^2+3*y^2)))';
f=vectorize(inline(funstr));
range=[-1 1 -1 1];
% Draw the function
Ndiv=50;
dx=(range(2)-range(1))/Ndiv; dy=(range(4)-range(3))/Ndiv;
[x,y]=meshgrid(range(1):dx:range(2),range(3):dy:range(4));
z=(f(x,y));
figure(1); surfc(x,y,z);
% Define the number of iterations
k=0;
niter=200;
% Gradient step size h definition
hstep = 0.001;
% Step size of the Gradient descent method
alfa=0.05;
%Initial point selection
xrange=range(2)-range(1);
yrange=range(4)-range(3);
x1=rand*xrange+range(1);
x2=rand*yrange+range(3);
% Optimization process
while (k<niter)
% Function evaluation
zn=f(x1,x2);
% Computation of gradients gx1 and gx2
vx1=x1+hstep;
vx2=x2+hstep;
gx1=(f(vx1,x2)-zn)/hstep;
gx2=(f(x1,vx2)-zn)/hstep;
% Draw the current position
figure(2)
contour(x,y,z,15); hold on;
plot(x1,x2,'.','markersize',10,'markerfacecolor','g');
hold on;
% Computation of the new solution
x1=x1-alfa*gx1;
x2=x2-alfa*gx2;
k=k+1;
end
FIGURE 1.5 Objective function types (a) multimodal or (b) not differentiable.
Each of these candidate solutions behaves as a search agent that leads the search strat-
egy. The idea is that as time passes, the population evolves until it eventually reaches the
optimal solution. However, this concept is not completely appropriate, since there are many
metaheuristic approaches which consist of a single candidate solution (simulated anneal-
ing or evolutionary strategies) so that on each iteration, only this solution is updated. This
reasoning considers that metaheuristic methods are more general than simple population
techniques.
Sometimes the term computational intelligence is used to refer to metaheuristic tech-
niques. Under this concept, the idea is to differentiate metaheuristic methods from expert
systems, which are considered a traditional discipline of artificial intelligence. Expert
systems model deductive reasoning, while metaheuristic algorithms model inductive rea-
soning. Computational intelligence is a more general area than metaheuristic methods
and includes other approaches such as neural networks and fuzzy systems. Thus, the use of
such approaches is not restricted to the field of optimization.
Several academics use the term bio-inspired algorithms to refer to metaheuristic meth-
ods. However, this conception is not correct, since various metaheuristic techniques such
as differential evolution and the imperialist algorithm are not inspired by nature. There
are some other approaches, such as evolutionary strategies and opposition-based learning,
that have a very weak connection with biological processes. Under these conditions, it is
clear that metaheuristic methods constitute a concept that is more general than
bio-inspired algorithms.
Some authors replace the term metaheuristic computation with heuristic algo-
rithms. Heuristic, which comes from the Greek heuriskein, means to find or discover.
Heuristic algorithms are methods that use intuitive rules based on common sense to solve
problems. Such algorithms do not expect to find the best solution, but any sufficiently
acceptable solution. The term metaheuristic describes a generic family of heuristic
algorithms. Therefore, most, if not all, of the algorithms discussed in this book can be
considered metaheuristics.
Many researchers separate metaheuristic methods from techniques based on swarm
principles. Swarm algorithms consider the collective intelligence shown by the behavior of
groups of animals or insects. Two prominent algorithms in this category are Ant Colony
Optimization and Particle Swarm Optimization. Since the mechanism for implementing
swarm algorithms is similar to metaheuristic methods, in this book, the swarm algorithms
are considered as methods of metaheuristic computation.
From the previous discussions, it can be concluded that the terminology with which
metaheuristic methods have been defined is vague and dependent on a particular context.
In this book, and in order to amalgamate all points of view, metaheuristic algorithms are
defined as algorithms that do not consider gradient information to modify one or more
candidate solutions during the optimization process. In these methods, the search strategy
is determined by the combination of stochastic processes and deterministic models.
Metaheuristics are currently one of the most prolific areas in science and engineer-
ing. A reflection of their popularity is the large number of specialized journals and
conferences available on the subject. The number of proposed algorithms in the literature that fall
into the category of metaheuristics is very large, so a review of all the algorithms in
a single document is virtually impossible. Due to restrictions of space and coverage, this
book describes in detail those metaheuristic methods that, according to the literature,
are the most popular. With this in mind, it has been decided to divide the methods into
two classes. The first class corresponds to those techniques that are considered the first
approaches to using the concepts of metaheuristic computation. These techniques have
been the basis of many other algorithms. For this reason, such methods have been included
in the book. However, according to recent literature, they are no longer considered popular.
Since these techniques are treated as a reference, their discussion is not very detailed;
they are addressed in this chapter. The second class involves metaheuristic methods
that, according to the literature, are the most popular. Such popularity means that they
are the most applied, modified, combined, and analyzed among the entire set of methods of
metaheuristic computation. As these algorithms, in the opinion of the authors, are the most
important, their description is deeper, and they are treated throughout the book in separate
chapters. Table 1.1 describes the methods and the chapter in which they will be discussed.
This book has been written from a teaching perspective, in such a way that the reader
can implement and use the algorithms for the solution of his/her optimization problems.
The presented material, the discussed methods, and the implemented programs are not
available in any other book that considers metaheuristic methods as its subject.

The book has two unique features. The first is that each method is explained at a
detailed level. Therefore, it is possible to calibrate and change the parameters of the
methods in question. Under the perspective of this book, each metaheuristic technique is
addressed through the use of simple and intuitive examples so that the reader gradually
gets a clear idea of the functioning of each method.

The second is the implementation of each metaheuristic method in MATLAB. Most texts
on metaheuristic computation explain the algorithms, differing in the degree of detail and
coverage of each method. However, many texts fail to provide implementation information.
From the point of view of the authors of this book, this problem is not minor, since most
readers fully understand the methods only when the theoretical concepts are compared with
the provided lines of code of their implementation.
TABLE 1.1 Metaheuristic Methods and Their Distribution through the Book

Class 1:
  Chapter 1: Random Search
  Chapter 1: Simulated Annealing

Class 2:
  Chapter 2: Genetic Algorithms (GA)
  Chapter 3: Evolutionary Strategies (ES)
  Chapter 4: Moth–Flame Optimization (MFO)
  Chapter 5: Differential Evolution (DE)
  Chapter 6: Particle Swarm Optimization (PSO)
  Chapter 7: Artificial Bee Colony (ABC)
  Chapter 8: Cuckoo Search (CS)
  Chapter 9: Metaheuristic Multimodal Optimization
In its generic form, the optimization problem addressed by metaheuristic methods is
formulated as

Minimize/Maximize f(x),  x = (x1, …, xd) ∈ ℝ^d
Subject to: x ∈ X   (1.10)

where x represents the vector of decision variables (a candidate solution), while d specifies
the number of dimensions. X symbolizes the set of possible candidate solutions, also known
as the search space. In many scenarios, the search space is bounded by the lower (li) and
upper (ui) limits of each decision variable, so that X = {x ∈ ℝ^d | li ≤ xi ≤ ui, i = 1, …, d}.
To solve the problem formulated in Eq. 1.10, a metaheuristic algorithm maintains a single
solution x^k or a population of N candidate solutions P^k = (x1^k, x2^k, …, xN^k), which evolve
(change their values) during a determined number of iterations (Niter), from an initial
state to the end. In the initial state, the algorithm initializes the set of candidate solutions
with random values within the limits of the search space X. In every generation, a set of
metaheuristic operators is applied to the candidate solutions P^k to build a new population P^(k+1).
The quality of each candidate solution xi^k is then evaluated through an objective function
f(xi^k) (i ∈ [1, 2, …, N]) that describes the optimization problem. Usually, during the evolution
process, the best solution m among all candidate solutions receives special consideration.
The idea is that at the end of the evolution process, this solution represents the best
possible solution. Figure 1.6 shows a graphical representation of the optimization process
from the point of view of the metaheuristic computation paradigm. In the nomenclature of
metaheuristic algorithms, a generation is known as an iteration, while the objective function
value f(xi^k) produced by the candidate solution xi^k is known as the "fitness" value of xi^k.
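The initialization step described above is easy to express in code. The book's programs are written in MATLAB; the following is an equivalent sketch in Python, where the population size and the bounds are illustrative values rather than anything prescribed by the text:

```python
import random

def init_population(N, lower, upper):
    """Initialize N candidate solutions uniformly at random inside
    the search space X = {x | l_i <= x_i <= u_i}."""
    d = len(lower)
    return [[random.uniform(lower[i], upper[i]) for i in range(d)]
            for _ in range(N)]

# Example: 5 candidate solutions in a 2-dimensional box [-3, 3] x [-3, 3]
P = init_population(5, [-3.0, -3.0], [3.0, 3.0])
```

Each candidate solution is simply a point drawn uniformly inside the box defined by the lower and upper limits; the metaheuristic operators then act on this initial population.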
[Figure 1.6 flowchart: k ← 0; P^k ← Random[X]; repeat { P^(k+1) ← Operators(P^k); k ← k+1 } while k < Niter; output the best solution m.]
FIGURE 1.6 Optimization process from the point of view of metaheuristic computation paradigm.
12 ◾ Metaheuristic Computation with MATLAB®
1.5.1 Probabilistic Decision
The probabilistic decision is an operation frequently used by metaheuristic methods to
condition the execution of different operators for searching for new solutions. A
probabilistic decision can be formulated as executing an action A conditioned on a
probability PA. Since this probability dictates the frequency with which the action will
be executed, its value must be a valid probability (PA ∈ [0,1]). Under these conditions,
the process of acceptance or rejection of the action A is as follows: first, a random
number rA is generated under a uniform distribution U[0,1]. If the value rA is less than
or equal to PA, the action A is performed; otherwise, action A has no effect.
As metaheuristic algorithms are iterative, the probability PA expresses the frequency with
which an action A is executed. Therefore, if the value of PA is near zero, the action A will
rarely be executed, whereas if the value of PA is close to one, action A will be executed
on practically all occasions. From the point of view of implementation, a uniformly
distributed random number (U[0,1]) is generated by the rand function.
To show the implementation of a probabilistic decision operation, the program
illustrated in Program 1.2 has been developed. The program considers 10,000 iterations,
of which 70% (PA = 0.7) perform the action A, while the remaining 30%
do not. Under these conditions, after running the program, the action ActionA will
be executed approximately 7,000 times, while NoActionA will be executed around
3,000 times.
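The behavior of Program 1.2 (whose MATLAB listing is not reproduced here) can be sketched in a few lines. The following is an assumed Python equivalent that runs the same experiment with PA = 0.7 over 10,000 iterations:

```python
import random

def probabilistic_decision(p_a):
    """Execute the action when a uniform random number r_A <= P_A."""
    return random.random() <= p_a

random.seed(1)  # fixed seed so the run is repeatable
count_a = sum(probabilistic_decision(0.7) for _ in range(10_000))
count_no_a = 10_000 - count_a
# count_a lands close to 7,000 and count_no_a close to 3,000
```

Because the decision is made independently on every iteration, the observed frequency of the action converges to PA as the number of iterations grows.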
1.5.2 Probabilistic Selection
Probabilistic selection is the process of selecting an element from a set so that the
items of higher quality (according to a certain objective function) have a greater chance
of being chosen than those of lower quality.
Methods of metaheuristic computation frequently have to choose a solution xe^k from
a population of elements P^k = (x1^k, x2^k, …, xN^k), where e ∈ [1, 2, …, N]. The selection must
consider the fitness quality of the solutions (f(x1^k), f(x2^k), …, f(xN^k)), so that better solutions
are more likely to be chosen. Under this selection process, the probability of selecting
the solution xe^k among the other N − 1 solutions is defined as
Pe = f(xe^k) / Σ_{i=1}^{N} f(xi^k)    (1.11)
Another important concept associated with the solution xe^k is the cumulative probability
Pe^A. This cumulative probability is defined as

Pe^A = Σ_{i=1}^{e} Pi    (1.12)

Under these conditions, the cumulative probability PN^A of the last solution of the population
P^k is equal to one.
Once the probabilities {P1, P2, …, PN} and the cumulative probabilities
{P1^A, P2^A, …, PN^A} of all the solutions contained in the population P^k have been
calculated, the selection process proceeds as follows: first, a random number rS is
generated considering a uniform distribution U[0,1]. Then, a test process is conducted
iteratively. Starting with the first solution, it is checked whether P1^A > rS. If this
condition is not met, the second solution is considered. The process continues testing
each solution e until the condition Pe^A > rS is reached. As a result of this procedure,
the solution xe^k is selected.
In order to clarify this process, a numerical example is developed. Assume a population
P^k with five elements (x1^k, x2^k, x3^k, x4^k, x5^k), whose qualities (fitness values), probabilities, and
cumulative probabilities are shown in Table 1.2. Given these values, the selection process is
the following: a uniformly distributed random value rS (U[0,1]) is generated. Considering
that the produced value is rS = 0.51, the first element is tested against the condition P1^A > rS. As this
condition does not hold, the second solution is tested against the same condition, P2^A > rS. As
the condition is still not satisfied, the third solution is tested, P3^A > rS. As the condition is
fulfilled, the selected element is x3^k.
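The selection mechanism of Eqs. 1.11 and 1.12 can be sketched as follows. This is a Python sketch (the book works in MATLAB), and the fitness values are assumed for illustration only (Table 1.2 itself is not reproduced here); they are chosen so that, as in the worked example, rS = 0.51 selects the third element:

```python
def roulette_select(fitness, r_s):
    """Return the index e of the first solution whose cumulative
    probability P_e^A exceeds the random number r_s (Eq. 1.12)."""
    total = sum(fitness)
    cumulative = 0.0
    for e, f in enumerate(fitness, start=1):
        cumulative += f / total          # P_e^A = sum of P_i for i <= e
        if cumulative > r_s:
            return e
    return len(fitness)                  # guard against rounding error

# Assumed fitness values for five solutions (illustrative only)
fitness = [1.0, 2.0, 3.0, 2.0, 2.0]     # probabilities 0.1, 0.2, 0.3, 0.2, 0.2
selected = roulette_select(fitness, 0.51)
# cumulative probabilities 0.1, 0.3, 0.6, 0.8, 1.0 -> element 3 is selected
```

In practice rS is drawn from U[0,1] on every call, so each solution is chosen with a frequency proportional to its share of the total fitness.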
1.6 RANDOM SEARCH
The random search method (Matyas, 1965) was the first method to base its optimization
strategy on a fully stochastic process. Under this method, only one candidate solution x^k is
maintained during the evolution process. In each iteration, the candidate solution x^k is
modified by adding a random vector Δx. Therefore, the new candidate solution is modeled
using the following expression:

x^(k+1) = x^k + Δx    (1.13)
Assuming that the candidate solution x^k has d dimensions (x1^k, x2^k, …, xd^k), each coordinate
is modified (Δx = {Δx1, Δx2, …, Δxd}) through a random disturbance Δxi (i ∈ [1, 2, …, d])
modeled by a Gaussian probability distribution defined as

p(Δxi) = (1/(σi·√(2π))) · exp(−0.5·(Δxi − μi)²/σi²) = N(μi, σi),    (1.14)

where σi and μi symbolize the standard deviation and the mean value, respectively, for the
dimension i. As the value Δxi represents a local modification around xi^k, the mean value
is assumed to be zero (μi = 0).
Once x^(k+1) is computed, it is tested whether the new position improves the quality of the
previous candidate solution x^k. Therefore, if the quality of x^(k+1) is better than that of x^k,
the value of x^(k+1) is accepted as the new candidate solution; otherwise, the solution x^k
remains unchanged. This process can be defined for the case of a minimization problem as

x^(k+1) = { x^(k+1)   if f(x^(k+1)) < f(x^k)
          { x^k       if f(x^(k+1)) ≥ f(x^k)    (1.15)
This replacement criterion of accepting only changes that improve the quality of candidate
solutions is known as "greedy." In random search, the perturbation Δx imposed on x^k could
cause the new value x^(k+1) to fall outside the search space X, where the objective function
f(x) is not defined. To avoid this problem, the algorithm must protect the evolution of the
candidate solution x^k, so that if x^(k+1) falls outside the search space X, it is assigned a
very poor quality (represented by a very large value); that is, f(x^(k+1)) = ∞ for the case of
minimization or f(x^(k+1)) = −∞ for the maximization case.
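The whole random search loop (Eqs. 1.13–1.15, plus the out-of-bounds penalty just described) can be sketched as follows. This is a Python sketch of the general idea, not the book's Program 1.3 (which is MATLAB code): the sphere objective, step size, and iteration count are assumed for illustration, and the sketch minimizes rather than maximizes.

```python
import math
import random

def random_search(f, lower, upper, n_iter=2000, sigma=0.5, seed=7):
    """Greedy random search: perturb x with Gaussian noise and keep
    the new point only if it improves f; out-of-bounds points get
    infinite (worst) quality, as described in the text."""
    rng = random.Random(seed)
    d = len(lower)
    x = [rng.uniform(lower[i], upper[i]) for i in range(d)]
    fx = f(x)
    for _ in range(n_iter):
        cand = [xi + rng.gauss(0.0, sigma) for xi in x]   # Eq. 1.13/1.14
        in_bounds = all(lower[i] <= cand[i] <= upper[i] for i in range(d))
        f_cand = f(cand) if in_bounds else math.inf       # penalize escapes
        if f_cand < fx:                                   # greedy rule, Eq. 1.15
            x, fx = cand, f_cand
    return x, fx

sphere = lambda x: sum(xi * xi for xi in x)   # assumed test function, minimum 0
best, best_f = random_search(sphere, [-3.0, -3.0], [3.0, 3.0])
```

Because acceptance is greedy, the trajectory never worsens; the cost is that the method can stall in a local minimum, which motivates simulated annealing in Section 1.7.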
Maximize f(x1, x2) = 3·(1 − x1)²·e^(−x1² − (x2+1)²) + 10·(x1/5 − x1³ − x2⁵)·e^(−x1² − x2²) − (1/3)·e^(−(x1+1)² − x2²)

Subject to: −3 ≤ x1 ≤ 3, −3 ≤ x2 ≤ 3    (1.16)
[Surface plot of f(x1, x2) over the search space, with function values ranging from about −10 to 10.]
r = µ + σ ⋅randn;

where µ and σ represent the mean value and standard deviation of the Gaussian
distribution, respectively.
1. k ← 0
2. x1^k ← Random[−3, 3], x2^k ← Random[−3, 3]
3. while (k < Niter) {
4.   Δx1 = N(0, 1), Δx2 = N(0, 1)
5.   x1^(k+1) = x1^k + Δx1, x2^(k+1) = x2^k + Δx2
6.   x^k = (x1^k, x2^k), x^(k+1) = (x1^(k+1), x2^(k+1))
7.   If (x^(k+1) ∉ X) { f(x^(k+1)) = −∞ }
8.   If (f(x^(k+1)) < f(x^k)) { x^(k+1) = x^k }
9.   k ← k + 1 }
Program 1.3 shows the implementation of Algorithm 1.2 in MATLAB. In the operation of
Program 1.3, first, the function f(x1, x2) is plotted in order to visualize its characteristics.
Then, the set of candidate solutions generated iteratively from the initial value x0 until
the optimum value x* is found is displayed. Figure 1.8 shows an example of the set of
solutions produced during the optimization process.
FIGURE 1.8 Solution map drawn on (a) the function contours and (b) the grayscale regions.
ync=yn+randn*1;
% It is tested if the solution falls inside the search space
if ((xnc>=range(1))&(xnc<=range(2))&(ync>=range(3))&(ync<=range(4)))
    % If yes, it is evaluated
    zn2=f(xnc,ync);
else
    % If not, it is assigned a low quality
    zn2=-1000;
end
% It is analyzed if the new solution is accepted
if (zn2>zn1)
    xn=xnc;
    yn=ync;
end
k=k+1;
end
1.7 SIMULATED ANNEALING
Simulated annealing (Kirkpatrick, Gelatt, & Vecchi, 1983) is an optimization technique
that emulates the annealing process of metallic materials. The idea behind this process is
to cool a metallic material in a controlled way so that its crystal structures can orient
themselves and defects in the metal structure are avoided.

The use of this process as an inspiration for the formulation of optimization algorithms
was first proposed by Kirkpatrick et al. (1983). Since then, several studies and applications
have been suggested to analyze the scope of this method. Unlike gradient-based
algorithms, which have the disadvantage of getting stuck in local minima, the simulated
annealing method presents a great ability to avoid this difficulty.
In simulated annealing, the objective function to optimize is analogous to the energy
of a thermodynamic system. At high temperatures, the algorithm allows the exploration
of very distant points within the search space. Under these circumstances, the probability
with which bad-quality solutions are accepted is very large.
On the other hand, at low temperatures, the algorithm allows the generation of points
in neighbor locations. In this stage, the probability of accepting bad-quality solutions
is also reduced. Therefore, only new solutions that enhance their previous value will be
considered.
Simulated annealing maintains only one candidate solution (x^k) during its operation.
This solution is modified in each iteration using a procedure similar to that of the random
search method, where each point is updated through the generation of a random vector
Δx. However, the simulated annealing algorithm does not accept only changes that improve
the objective function; it also incorporates a probabilistic mechanism that allows accepting
solutions of lower quality (worse solutions). The idea behind this mechanism is to accept
bad solutions in order to avoid getting trapped in local minima.
In the second option, although the quality of x^(k+1) is not superior to that of x^k
(f(x^(k+1)) ≥ f(x^k), assuming minimization), the new solution x^(k+1) can still be
accepted according to an acceptance probability pa defined as

pa = e^(−Δf/T),    (1.17)

where T represents the temperature that controls the cooling process, while Δf symbolizes
the energy difference between the points x^(k+1) and x^k, which is defined as

Δf = f(x^(k+1)) − f(x^k)    (1.18)
Therefore, the acceptance or rejection of a new position x^(k+1) is performed under the
following procedure. First, a random number r1 uniformly distributed in [0,1] is produced.
Then, if r1 < pa, the point x^(k+1) is accepted as the new solution.
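The acceptance test just described can be sketched directly from Eqs. 1.17 and 1.18. This is a Python sketch (the book works in MATLAB), and the temperature values below are illustrative:

```python
import math
import random

def accept_worse(delta_f, T, rng=random):
    """Accept a worsening move with probability p_a = exp(-delta_f / T),
    where delta_f > 0 is the energy increase (Eqs. 1.17 and 1.18)."""
    p_a = math.exp(-delta_f / T)
    return rng.random() < p_a

# At high temperature almost any worsening move passes;
# at low temperature almost none does.
random.seed(3)
high_T = sum(accept_worse(1.0, 100.0) for _ in range(1000))
low_T = sum(accept_worse(1.0, 0.01) for _ in range(1000))
```

With T = 100 the acceptance probability is e^(−0.01) ≈ 0.99, so nearly all of the 1000 trials pass, whereas with T = 0.01 it is e^(−100), which is effectively zero.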
For a given energy difference ∆f , if T is large, then pa →1, which means that all the
suggested values of x k+1 will be accepted regardless of their quality in comparison with
the previous candidate solution x k . If T is very small, then pa → 0, which means that only
the values of x k+1 that improve the quality of x k will be accepted. When this happens, the
search strategy of simulated annealing is similar to the random search method.
Thus, if T is large, the algorithm simulates a system with high thermal energy. Under
these conditions, the search space X is explored extensively. On the other hand, if T is very
small, the system refines the search locally around positions that are already known.
From the parameter description, it is clear that the most important element in simulated
annealing is the cooling control. This factor specifies the process by which the temperature
is varied from high to low. Since this process depends on the specific application, it
requires a calibration stage performed by trial and error. There are several ways to control
the cooling process from an initial temperature Tini to a final temperature Tfin. For this task,
two schemes are commonly used: the linear and the geometric.
In the linear scheme, the temperature reduction is modeled using the following
formulation:
T(k) = Tini − β·k,    (1.19)

where β is the cooling rate. It should be chosen so that T → 0 when k → Niter (the maximum
number of iterations); this means that β = (Tini − Tfin)/Niter. On the other hand, in the
geometric scheme, the temperature is decremented by the use of a cooling factor η defined
on the interval [0,1]. Therefore, the geometric cooling strategy is modeled using the
following expression:

T(k) = Tini·η^k,    (1.20)
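Both cooling schedules are straightforward to compute side by side. This is a Python sketch; Tini, Tfin, Niter, and η are illustrative values, not taken from the text:

```python
def linear_cooling(T_ini, T_fin, n_iter, k):
    """T(k) = T_ini - beta * k with beta = (T_ini - T_fin) / n_iter (Eq. 1.19)."""
    beta = (T_ini - T_fin) / n_iter
    return T_ini - beta * k

def geometric_cooling(T_ini, eta, k):
    """T(k) = T_ini * eta**k with cooling factor eta in (0, 1) (Eq. 1.20)."""
    return T_ini * eta ** k

# Example: start at T = 1.0 and cool over 150 iterations
T_lin = [linear_cooling(1.0, 0.0, 150, k) for k in range(151)]
T_geo = [geometric_cooling(1.0, 0.97, k) for k in range(151)]
```

The linear schedule reaches Tfin exactly at k = Niter, while the geometric schedule decays fast at first and then flattens, which spends proportionally more iterations at low temperature.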
[Algorithm excerpt: 2. k ← 0, T ← Tini; 3. x^k ← Random[X]; …; 10. pa ← e^(−Δf/T)]
In the operation of Program 1.4, first, the objective function f(x1, x2) is plotted in order to
appreciate its main characteristics. Then, the set of solutions generated iteratively from the
initial value x0 to the optimum value x* is shown. Figure 1.9a presents an example of the set
of solutions produced during the evolution process when Program 1.4 is executed. Figure
1.9b shows the cooling process observed during the evolution process.
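Putting the pieces together, a compact simulated annealing loop in the spirit of Program 1.4 might look as follows. This Python sketch is not the book's MATLAB code: it minimizes an assumed sphere function, uses geometric cooling, and all parameter values are illustrative.

```python
import math
import random

def simulated_annealing(f, lower, upper, T_ini=1.0, eta=0.97,
                        n_iter=500, sigma=0.3, seed=5):
    """Minimize f with geometric cooling T(k) = T_ini * eta**k.
    Worse moves are accepted with probability exp(-delta_f / T)."""
    rng = random.Random(seed)
    d = len(lower)
    x = [rng.uniform(lower[i], upper[i]) for i in range(d)]
    fx = f(x)
    best, best_f = list(x), fx
    T = T_ini
    for _ in range(n_iter):
        cand = [xi + rng.gauss(0.0, sigma) for xi in x]
        if all(lower[i] <= cand[i] <= upper[i] for i in range(d)):
            f_cand = f(cand)
            delta = f_cand - fx
            # accept improvements always, worse moves probabilistically;
            # out-of-bounds candidates are rejected outright (equivalent
            # to assigning them infinite cost)
            if delta < 0 or rng.random() < math.exp(-delta / T):
                x, fx = cand, f_cand
                if fx < best_f:
                    best, best_f = list(x), fx
        T *= eta
    return best, best_f

sphere = lambda x: sum(xi * xi for xi in x)   # assumed test function
best, best_f = simulated_annealing(sphere, [-3.0, -3.0], [3.0, 3.0])
```

Tracking the best solution separately matters because the current point is allowed to wander uphill while the temperature is high.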
FIGURE 1.9 (a) Produced solution map over the function contour and (b) cooling process
observed during the optimization process.
EXERCISES

f(x) = Σ_{i=1} −xi·sin(xi)

Determine
a. which are the decision variables,
b. the number of dimensions.
Minimize f(x) = x⁴ − 15x² + 10x + 24

Subject to: x ∈ [−4, 3]
Minimize f1(x1, x2) = x1·e^(−x1² − x2²)

Subject to: −2 ≤ x1 ≤ 2, −2 ≤ x2 ≤ 2
Then, analyze the performance of the gradient descent method considering that the
parameter α assumes the following values: 0.05, 0.1, and 0.5.
Minimize f3(x1, x2) = floor(10·(10 − e^(−(x1² + 3x2²))))

Subject to: −1 ≤ x1 ≤ 1, −1 ≤ x2 ≤ 1

b. Discuss the characteristics of this problem that do not allow the operation of the
gradient descent method.
Solution   f(·)
x1^k       f(x1^k) = 1
x2^k       f(x2^k) = 2
x3^k       f(x3^k) = 5
x4^k       f(x4^k) = 2
Maximize f4(x1, x2) = e^(−((x1 − π)² + (x2 − π)²)/30)

Subject to: −100 ≤ x1 ≤ 100, −100 ≤ x2 ≤ 100
Maximize f5(x1, x2) = 20·e^(−0.2·√(0.5·(x1² + x2²))) + e^(0.5·(cos(2πx1) + cos(2πx2))) − 20

Subject to: −5 ≤ x1 ≤ 5, −5 ≤ x2 ≤ 5
REFERENCES
Baldick, R. (2006). Applied optimization: Formulation and algorithms for engineering systems.
Cambridge: Cambridge University Press.
Bartholomew-Biggs, M. (2008). Nonlinear optimization with engineering applications. US: Springer.
Deb, K. (2001). Multi-objective optimization using evolutionary algorithms. John Wiley & Sons, Inc.
Dennis, J. E. (1978). A brief introduction to quasi-Newton methods. In G. H. Golub & J. Oliger
(Eds.), Numerical Analysis: Proceedings of Symposia in Applied Mathematics (pp. 19–52).
Providence, RI: American Mathematical Society.
Hocking, L. (1991). Optimal control: An introduction to the theory with applications. US: Oxford
University Press.
Kirkpatrick, S., Gelatt, C. D., & Vecchi, M. P. (1983). Optimization by simulated annealing. Science,
220(4598), 671–680.
Mathews, J., & Fink, K. (2000). Métodos numéricos con MATLAB. Madrid: Prentice Hall.
Matyas, J. (1965). Random optimization. Automation and Remote Control, 26, 246–253.
Reinhardt, R., Hoffmann, A., & Gerlach, T. (2013). Nichtlineare Optimierung. Heidelberg, Berlin:
Springer.
Simon, D. (2013). Evolutionary optimization algorithms, biologically inspired and population-based
approaches to computer intelligence. Hoboken, New Jersey: John Wiley & Sons, Inc.
Venkataraman, P. (2009). Applied optimization with MATLAB programming (2nd ed.). Hoboken,
New Jersey: John Wiley & Sons, Inc.
Yang, X.-S. (2010). Engineering optimization. Hoboken, New Jersey: John Wiley & Sons, Inc.
Chapter 2
2.1 INTRODUCTION
The initial concept of a genetic algorithm (GA) was first introduced by John Holland at
the University of Michigan (Holland, 1975). Nowadays, there exist several GA variants
where the original structure has been changed or combined with other computational
schemes. Such new GA approaches can be found in recent publications contained in
specialized journals such as Evolutionary Computation, Transactions on Evolutionary
Computation, Swarm and Evolutionary Computation, Applied Soft Computing, Soft
Computing, Evolutionary Intelligence, Memetic Computing, and Neural Computing and
Applications, to name a few.
GAs use natural genetics as a computational principle (Bäck, Fogel, & Michalewicz,
1997). In this chapter, we will discuss the characteristics of the operation of GAs.
GAs represent search strategies that have been extracted from the principles of natural
selection. Therefore, the central concepts of genetics are adopted and adapted artificially to
produce search schemes. These schemes maintain high robustness and minimal informa-
tion requirements.
GAs define three important operators (Vose, 1999):
• Selection
• Crossover
• Mutation
In its operation, a GA starts its strategy by producing a set of random solutions, instead of
considering only one candidate solution. Once the initial population of solutions is pro-
duced, each of them is evaluated in terms of the optimization problem symbolized by a
cost function. The value of this cost function is known in the metaheuristic context as a
fitness value assigned to each candidate solution. The evaluation of a solution corresponds
to combining its cost function value and its respective constraint violation. The result of
Southwest, and in the arid mountains of Utah and California, are
numerous enough to be troublesome. The cutting away of forests,
draining of swamps, and cultivation of prairies, soon destroy these
pests in thickly settled regions; but where rocky hills occur they
linger for a long time, because the breaks and little caves among the
ledges offer them secure retreats, winter homes where they sleep in
safety, and proper nurseries for the young, which are not produced
from eggs, as in the coluber family, but are born alive.
The rattles from which these serpents take their name are a
number of hollow, horny, button-like structures at the tip of the tail,
which rattle together, with a peculiar humming sound, when the
creature shakes its tail, as it is sure to do when disturbed or angry. It
thus gives a warning to the man who might not have noticed the
sluggish creature in his path in time to jump aside. Not all of the
tribe have a rattle, however; and one of the reasons why our water-
moccasin and copperhead are so much dreaded is that they possess
no rattle, and therefore sound no "keep-off" warning.
All our American venomous snakes are too heavy and slow to
climb trees. They get their prey—mice, gophers, snakes, etc.—by
going to a place where it is likely to be running about, and then
patiently waiting until something comes within striking distance.
CHAPTER XXIX
AMPHIBIANS
Toads
In some ways toads are like frogs; but you can tell them at once
by their rough, dry skins, which are covered with wart-like glands.
And they crawl over the ground, instead of leaping as frogs do. They
are very common almost everywhere, and you may often find them
hiding under logs or large stones during the daytime.
Toads do not lay their eggs in great masses, as frogs do, but
arrange them in strings about four feet long and an eighth of an inch
wide. Each of these strings consists of two rows of eggs fastened
side by side together. The tadpoles are very much like those of the
frog, the chief difference being that they are rather smaller and
blacker.
Newts
All through their lives newts keep their tails, instead of losing
them when they cease to be tadpoles.
You can find newts in plenty all through spring and summer by
fishing with a small net in any weedy pond; but you will find that
they are not all alike. Some have wavy crests running all along their
backs; others have none; and some are brightly colored while others
are plain olive green all over. Often in the woods in certain parts of
the United States you will meet with little newts traveling about on
the damp old leaves; and they are very conspicuous because of their
brilliant vermilion color. These are young green newts which come
out of the water, live ashore for a year or so in the red suit, and then
go back to the water and a green coat.
Newts lay their eggs in a very curious manner. They do not
fasten them together in great batches, like the frog, or in long,
narrow strings, like the toad. They lay them one by one. And the
mother newt takes each egg as she lays it, places it in the middle of
the narrow leaf of some water-plant, and then twists the leaf neatly
round it with her little fore feet, so as to wrap it up in a kind of
parcel! The tadpole which hatches out of this egg is very much like
that of a toad or a frog; but the front legs are the first to appear,
instead of the hind legs, while the tail, of course, does not pass back
into the substance of the body.
Newts swim with their tails, and very pretty and graceful they
look as they move through the water. When they cease to be
tadpoles, of course, they breathe air, just as toads and frogs do, and
have to come up to the surface every two or three minutes to obtain
it. And as long as they live in the pond they feed upon grubs and
worms and tiny water-insects.
Salamanders
The curious creatures known as salamanders are related to the
newts, and begin their lives in just the same way. But after they
have ceased to be tadpoles they only visit the water for two or three
weeks in the spring.
The most celebrated member of this group is the spotted
salamander, which is found in Central and Southern Europe, and also
in Algeria and Syria. When fully grown it is about eight inches long,
and may be known at once by the two rows of large yellow blotches
which run down from the back of its head, right along its body, to
the very tip of its tail.
In days of old it was thought that the salamander had the power
of walking through fire without being burnt! And it was also
supposed, if it were attacked, to spring upon its enemy, bite out a
piece of his flesh, and then spit fire into the wound! As a matter of
fact it is almost harmless, and may be picked up and handled
without the slightest danger. But the glands on its skin, like those on
the toad's head and back, contain a rather poisonous fluid, which is
squirted out if they are squeezed. So that if a dog were to pick up a
salamander he would be quite sure to drop it again very quickly, and
would most likely foam at the mouth for some little time.
Salamanders are very slow and timid creatures, and generally
spend the whole of the day concealed in some crevice, or in the
hollow trunk of a tree, or perhaps under a large stone. They feed
upon slugs and small insects.
There are several kinds in North America, some of which, as the
hellbender, are a foot or more in length.
The giant salamander, which is sometimes nearly a yard long, is
found in the rivers of China and Japan, and spends the whole of its
life in the water. It feeds chiefly upon fishes.
The Axolotl
This is one of the most singular of all the amphibians. It is found
in North America. Sometimes it develops into its perfect form, and
sometimes it remains a tadpole all its life, and yet lays eggs just as
though it were adult!
In the lakes of the southern Rocky Mountains the life of this
creature is just like that of any other batrachian. That is, it is
hatched out of the egg as a tadpole, grows first one pair of legs and
then another, loses its gills by degrees, and at last appears in a
lizard-like form, leaving the water and living upon dry land. But in
the lake which surrounds the city of Mexico it never becomes
anything more than a big tadpole, keeps its gills throughout its life,
and does not leave the water at all.
The Olm
The olm, or proteus, is found only in the underground lakes of
Carniola and one or two other parts of Central Europe. It is about a
foot long when fully grown, and has a slender, snake-like body, with
a pair of tiny legs just behind the head, and another pair at the base
of the tail. It is perfectly blind, the eyes being hidden under the skin,
and yet cannot bear light. For if it is kept in captivity it will always
hide in the darkest corner that it can find. And it has been known to
live in confinement for five years without once taking any food.
What the habits of this extraordinary animal are in nature no one
knows, as it has never been found except in these underground
lakes.
In color the olm is pinkish gray, with bright-red gills, and there
are from twenty-four to twenty-seven grooves upon either side of its
body.
FISHES
CHAPTER XXX
FRESH-WATER FISHES
The Mud-Fish
One of these is the odd mud-fish of the African rivers. In general
appearance this animal looks something like an eel, and it grows to
a length of about three feet. Its four long ray-like limbs seem to be
quite useless to it, and it swims by means of its tail, along the upper
part of which runs a narrow fin. It is a creature of prey, feeding upon
other fishes, and when food is plentiful, it just takes one bite out of
the lower part of their bodies and no more.
In summer the rivers in which it lives often dry up altogether, and
the mud at the bottom is baked as hard as a brick by the rays of the
sun. So, as soon as the water begins to get shallow, the animal
burrows deep down into the mud, curls itself up like a fried whiting,
and falls fast asleep for several months, just as hedgehogs and
dormice do during the winter in cold countries. Then, when the rainy
season comes and the rivers fill up again, it comes out from its
retreat and swims about as before. It is from this habit that it gets
its name of mud-fish.
Now we come to the true fishes; and perhaps our best plan will
be to read about some of the fresh-water fishes first, and afterward
about some of those which live in the sea.
Sticklebacks
Let us begin with a little fish which is very common in almost
every pond, but is nevertheless very curious and very interesting.
When fully grown, the stickleback is about three inches long, and
you can tell it at once by the sharp spines on its back, which it can
raise and lower at will. It uses these spines in fighting. For the male
sticklebacks, at any rate, are most quarrelsome little creatures, and
for several weeks during the early part of the summer they are
constantly engaged in battle.
At this season of the year they are really beautiful little fishes, for
the upper parts of their bodies are bright blue and the lower part
rich crimson, while their heads become pale drab, and their eyes
bright green! And apparently they are very jealous of one another,
for two male sticklebacks in their summer dress never seem able to
meet without fighting. Raising their spines, they dash at one another
over and over again with the utmost fury, each doing his best to
swim underneath the other and cut his body open. When one of
them is beaten he evidently feels quite ashamed of himself, for he
goes and hides in some dark corner where nobody can see him.
And, strange to say, as soon as he loses the battle his beautiful
colors begin to fade, and in a very few hours they disappear
altogether.
About the beginning of June, all the male sticklebacks which
have not been beaten set to work to build nests. These nests are
shaped like little tubs with no tops or bottoms, and they are made of
tiny scraps of grass and cut reed and dead leaf, neatly woven
together. As soon as they are finished the female sticklebacks lay
their eggs in them. Then the males get inside, and watch over the
eggs until they hatch.
NORTH AMERICAN FOOD AND
GAME FISHES
Perches
Another very handsome fresh-water fish is the perch, which is
plentiful in almost every river and lake in the warmer parts of the
whole world. In color it is rich greenish brown above and yellowish
white below, with from five to seven upright dark bands on either
side of its body, while the upper fins are brown and the lower ones
and the tail bright red.
The front fin on the back of the perch, which can be raised or
lowered at will, is really a very formidable weapon, for it consists of
a row of very sharp spines projecting for some little distance beyond
the membrane which joins them together. Even the pike is afraid of
these spines, and it is said that although he will seize any other
fresh-water fish without a moment's hesitation, he will never venture
to attack a perch.
Early in the month of May the mother perch lays her eggs, which
she fastens in long bands to the leaves of water-plants. Their
number is very great, over 280,000 having been taken from quite a
small perch of only about half a pound in weight!
The climbing perch of India, notwithstanding its name, is not a
true perch, but belongs to quite a different family. It is famous for its
power of leaving the water and traveling for a considerable distance
over dry land. It does this in the hot season if the stream in which it
is living dries up; and if you were to live in certain parts of India you
might perhaps meet quite a number of these fishes shuffling across
the road by means of their lower fins, and making their way as fast
as possible toward the nearest river!
But how do they manage to remain out of the water for so long?
Well, the fact is that fishes can live for a long time out of the
water if their gills are kept moist. In some fishes, such as the
herring, this is not possible, because their gills are made in such a
way that they become dry almost immediately. But the climbing
perch has a kind of cistern in its head, just above the gill-chambers,
which contains quite a quantity of water. And while the fish is
traveling over land this water passes down, drop by drop, to the
gills, and keeps them constantly damp.
When this fish has been kept in an earthenware vessel, without
any water at all, it has been known to live for nearly a week!
The Carp
Another fish which will live for quite a long time out of the water
is the carp, which has often been conveyed for long distances
packed in wet moss.
This fine fish is a native of the Old World, where it is found both
in rivers and lakes, but prefers still waters with a soft muddy bottom,
in which it can grovel with its snout in search of food. During the
winter, too, it often buries itself completely in the mud, and there
hibernates, remaining perfectly torpid until the return of warmer
weather. It is not at all an easy fish to catch, for it is so wary that it
will refuse to touch any bait in which it thinks that a hook may be
concealed. And if the stream in which it is living is dragged with a
net, it just burrows down into the mud at the bottom and allows the
net to pass over it.
Owing to this crafty and cunning nature, the carp has often been
called the fresh-water fox.
The carp is a very handsome fish, being olive brown above, with
a tinge of gold, while the lower parts are yellowish white. It
sometimes weighs as much as twenty-five pounds, and has been
known to lay more than 700,000 eggs! It is domesticated in many
parts of North America and other countries.
The Barbel
Found in many Old World rivers, the barbel may be known at
once by the four long fleshy organs which hang down from the nose
and the corners of the mouth. These organs are called barbules, and
may possibly be of some help to the fish when it is grubbing in the
soft mud in search of the small creatures upon which it feeds. It
spends hours in doing this, and a hungry barbel is sometimes so
much occupied in its task that a swimmer has dived down to the
bottom of the river and caught it with his hands. From this curious
way of feeding, and its great greediness, the barbel has sometimes
been called the fresh-water pig.
In color this fish is greenish brown above, yellowish green on the
sides of the body, and white underneath. When fully grown it weighs
from ten to twelve pounds.
The Roach
This is one of the prettiest of the European fresh-water fishes,
which is found in many lakes and streams. The upper part of the
head and back are grayish green, with a kind of blue gloss, which
gradually becomes paler on the sides till it passes into the silvery
white of the lower surface. The fins and the tail are bright red.
The roach does not grow to a very great size, for it seldom
weighs more than two pounds. It lives in large shoals, and in clear
water several hundred may often be seen swimming about together.
The Pike
One of the largest and quite the fiercest of the British fresh-water
fishes is the pike, which is found both in lakes and rivers. In America
we have no pike proper, but in some of the great western lakes a
very large relative of similar habits known as the maskinonge; and
our pickerels are only small pikes. Wonderful tales are told of the
ferocity of the pike. He does not seem to know what fear is, and his
muscular power is so great, and the rows of teeth with which his
jaws are furnished are so sharp and strong, that he is really a most
formidable foe. All other fresh-water fishes are afraid of him, while
he gobbles up water-birds of all kinds, and water-mice, and frogs,
and even worms and insects. And no matter how much food he eats,
he never seems to be satisfied.
When the pike is hungry, he generally hides under an
overhanging bank, or among weeds, and there waits for his victims
to pass by.
The young pike is generally known as the jack, and when only
five inches long has been known to catch and devour a gudgeon
almost as big as itself. With such a voracious appetite, it is not
surprising that the fish grows very fast, and for a long time it
increases in weight at the rate of about four pounds in every year.
How long it continues to grow nobody quite knows; but pike of
thirty-five or forty pounds have often been taken, and there have
been records of examples even larger still.
In color the pike is olive brown, marked with green and yellow.
Trout
Perhaps the greatest favorite of all anglers is the trout, which, in
one or more of its various species, is to be caught in almost every
swift stream and highland lake throughout the temperate zone,
except where the race has been destroyed by too persistent fishing.
This happens everywhere near civilization, unless protective laws
regulate the times and places where fishing may be done. Similar
laws are required to save many other kinds of fishes from quick
destruction at the hands of the thoughtless and selfish, and they
should be honestly obeyed and supported in spite of their
occasionally interfering with amusement.
Trout are graceful in form and richly colored, most of them
having arrangements of bright spots and gaily tinted fins. The
common trouts of Europe and the eastern half of the United States
and Canada are much alike; but in the Rocky and other mountains of
the western shore of our continent others quite different are
scattered from the Plains to the Pacific. One of the most interesting
and beautiful of these, the rainbow-trout, has been brought into the
East, and has made itself at home in many lakes and rivers of the
Northern States and Canada.
The trout is an extremely active fish, and when it is hooked it
tries its very hardest to break away, dashing to and fro, leaping,
twisting, and fighting, and often giving the angler a great deal of
trouble before he can bring it in. In small streams it seldom grows to
any great size, but in some of the Scottish lochs and lakes of Maine
trout weighing fifteen or even twenty pounds are often taken. It is
sometimes considered, however, that these belong to a different
species.
The Salmon
More famous even than the trout is the salmon, the largest and
finest of all our fresh-water fishes, which often reaches a weight of
forty-five or fifty pounds, and sometimes grows to still greater size.
It is hardly correct, however, to speak of it as a fresh-water fish,
for although salmon are nearly always caught in rivers, they spend a
considerable part of their lives in the sea.
Salmon are of two kinds—the Atlantic and the Pacific species;
and the life-history of each is a very curious one.
During the winter the parent fishes of the Atlantic salmon, which
used to be exceedingly numerous in all our northern rivers emptying
into the Atlantic, and still haunt the rivers of Northeastern Canada,
and of Scotland, make their way as far up a clear and gravelly river
as they possibly can, till they find a suitable place in which to lay
their eggs. The mother then scoops a hole at the bottom of the
stream, in which she deposits her eggs in batches, carefully covering
up each batch as she does so. At this time both parents are in very
poor condition, and the males are known to anglers as "kelts." For a
time they remain in the river, feeding ravenously. Then in March or
April they travel down the river and pass into the sea, where they
stay for three or four months, after which they ascend the river
again, as before.
Meanwhile the eggs remain buried in the gravel for about four
months. At the end of that time the little fishes hatch out, and
immediately hide themselves for about a fortnight under a rock or a
large stone. You would never know what they were if you were to
see them, for they look much more like tadpoles than fishes; and
each has a little bag of nourishment underneath its body on which it
lives. When this is exhausted they leave their retreat and feed upon
small insects, growing very rapidly, until in about a month's time
they are four inches long. They are now called parr and have a row
of dark stripes upon their sides, and in this condition they remain for
at least a year. Their color then changes, the stripes disappearing,
and the whole body becoming covered with bright silvery scales.
The little fishes are now known as smolts, and, like their parents,
they make their way down the river and pass into the sea. There
they remain until the autumn, when they ascend the river again. By
this time they have grown considerably, weighing perhaps five or six
pounds, and are called grilse. And it is not until they have visited the
sea again in the following year that they are termed salmon.
When salmon are ascending a river and come to a waterfall, they
climb it by leaping into the air and so springing into the stream
above the fall, trying over and over again until they succeed. When
the fall is too high to be climbed in this way, the owners of the river
often make a kind of water staircase by the side of it, so that the
fishes can leap up one stair at a time. This is called a salmon-ladder.
Eels
The only other fresh-water fishes which we can notice are the
eels, which look more like snakes than fishes, for they have long
slender bodies, with a pair of tiny fins just behind the head, a long
one running along the back and tail, like a crest, and another,
equally long, under the body. And they are clothed with a smooth,
slimy skin instead of with scales.
These curious creatures live in ponds and even in ditches as well
as in rivers, and are very plentiful in all parts of the northern
hemisphere. During the daytime, although they will sometimes bask
at the surface in the warm sunshine, they generally lie buried in the
mud at the bottom of the water, coming out soon after sunset to
feed. And when the weather is damp, so that their gills are kept
moist as they wriggle through the herbage, they will often leave the
water and travel for some little distance overland.
They frequently do this when they are traveling toward the sea.
For it is a strange fact that, although they are fresh-water fishes,
eels both begin and end their lives in the sea.
In the first place, the eggs are laid in the sea—generally quite
close to the mouth of a river. When the little elvers, as the young
eels are called, hatch out, they make their way up the river in
immense shoals. In the English river Severn, for instance, several
tons of elvers are often caught in a single day; and about thirty
million elvers go to the ton! After being pressed into cakes and fried,
these little creatures are used for food; but they are so rich that one
cannot eat very many at once.
When they have traveled far enough up the river, most of the
elvers which have escaped capture make their way to different
streams and pools and ditches, and there remain until their growth
is completed. They then begin to journey back to the sea, and when
they reach it they lay eggs in their turn. After this, apparently, they
die.
In the rivers of South America a most wonderful eel is found
which has the power of killing its victims by means of an electric
shock, wherefore it is called the electric eel. The electricity is
produced and stored up in two large organs inside the body, but
how it is discharged nobody knows. If the fish is touched it merely
gives a slight shudder. But the shock is so severe that quite a large
fish can be killed by it, while a man's arm would be numbed for a
moment right up to the shoulder.
Lampreys
The lamprey, which is found plentifully in many northern rivers, is
very much like an eel in appearance. But it has no side fins, and
instead of possessing jaws, it has a round mouth used for sucking,
and resembling that of a leech; and on either side of its neck it has a
row of seven round holes, through which water passes to the
breathing-organs.
Lampreys seem to spend the greater part of their lives in the sea,
but always come up the rivers to spawn. They lay their eggs in a
hollow in the bed of the stream, which they make by dragging away
stone after stone till the hole is sufficiently deep. Very often a large
number of lampreys combine for this purpose, and make quite a big
hole, in which they all lay their eggs together.
The length of the lamprey is generally from fifteen to eighteen
inches, and its color is olive brown.