Unit 5 Soft Computing

Unit V discusses genetic algorithms (GAs). GAs are adaptive heuristic search algorithms inspired by Darwin's theory of evolution and natural selection. GAs use techniques like crossover, mutation, and selection to evolve solutions to problems over multiple generations. GAs work by maintaining a population of potential solutions and advancing it to subsequent generations by selecting the fittest individuals for reproduction and mutation. The goal is to optimize a problem by finding the best solution in the search space.


Unit V (Genetic Algorithms)

Lecture # 1 An Overview
What are GAs?
Genetic Algorithms (GAs) are adaptive heuristic search algorithms based on the evolutionary
ideas of natural selection and genetics.

Genetic algorithms (GAs) are a part of Evolutionary Computing, a rapidly growing area of
artificial intelligence. GAs are inspired by Darwin's theory of evolution: "survival of the
fittest".
GAs represent an intelligent exploitation of random search used to solve optimization
problems.
GAs, although randomized, exploit historical information to direct the search into the region of
better performance within the search space.
In nature, competition among individuals for scanty resources results in the fittest individuals
dominating over the weaker ones.

1. Introduction to Genetic Algorithms


Solving a problem means looking for the solution that is best among others.

Finding the solution to a problem is often thought of:


In computer science and AI, as a process of search through the space of possible solutions. The
set of possible solutions defines the search space (also called state space) for a given problem.
Solutions or partial solutions are viewed as points in the search space.

In engineering and mathematics, as a process of optimization. The problems are first formulated
as mathematical models expressed in terms of functions and then to find a solution, discover the
parameters that optimize the model or the function components that provide optimal system
performance.
Why Genetic Algorithms?
GAs are better than conventional AI search; they are more robust.
Unlike older AI systems, GAs do not break easily when the inputs change slightly or in
the presence of reasonable noise.
While performing search in a large state space, a multi-modal state space, or an n-dimensional
surface, genetic algorithms offer significant benefits over many other typical search
optimization techniques such as linear programming, heuristic search, depth-first and breadth-first search.
"Genetic Algorithms are good at taking large, potentially huge search spaces and navigating
them, looking for optimal combinations of things, the solutions one might not otherwise find
in a lifetime."
Optimization
Optimization is a process that finds a best, or optimal, solution for a problem. The Optimization
problems are centered around three factors:
1. An objective function which is to be minimized or maximized;
Examples
 In manufacturing, we want to maximize the profit or minimize the cost.
 In designing an automobile panel, we want to maximize the strength.
2. A set of unknowns or variables that affect the objective function,
Examples
 In manufacturing, the variables are amount of resources used or the time spent.
 In panel design problem, the variables are shape and dimensions of the panel.

3. A set of constraints that allow the unknowns to take on certain values but exclude others.
Examples

 In manufacturing, one constraint is that all "time" variables be non-negative.

 In the panel design, we want to limit the weight and constrain the shape.

An optimization problem is defined as: Finding values of the variables that minimize or
maximize the objective function while satisfying the constraints.
Search Optimization Algorithms
Fig. 1.1 below shows different types of Search Optimization algorithms.
Search Optimization Techniques
|
|-- Calculus-Based Techniques (e.g., Newton, Fibonacci methods)
|     |-- Direct Search
|     |-- Indirect Search
|
|-- Guided Random Search Techniques
|     |-- Tabu Search
|     |-- Hill Climbing
|     |-- Simulated Annealing
|     |-- Evolutionary Methods
|           |-- Genetic Algorithms
|           |-- Genetic Programming
|
|-- Enumerative Techniques
      |-- Uninformed Search
      |-- Informed Search

Fig. 1.1 Taxonomy of Search Optimization techniques

We are interested in evolutionary search algorithms.


Our main concern is to understand the evolutionary algorithms:
 how to describe the process of search,
 how to implement and carry out search,
 what are the elements required to carry out search, and
 the different search strategies
The Evolutionary Algorithms include:
- Genetic Algorithms and
- Genetic Programming
Evolutionary Algorithm (EAs)
Evolutionary Algorithm (EA) is a subset of Evolutionary Computation (EC) which is a subfield
of Artificial Intelligence (AI).
Evolutionary Computation (EC) is a general term for several computational techniques.
Evolutionary Computation represents a powerful search and optimization paradigm influenced by
the biological mechanisms of evolution: natural selection and genetics.
Evolutionary Algorithms (EAs) refer to Evolutionary Computational models using
randomness and genetically inspired operations. EAs involve selection, recombination, random
variation and competition among the individuals in a population of adequately represented potential
solutions. The candidate solutions are referred to as chromosomes or individuals.

Genetic Algorithms (GAs) represent the main paradigm of Evolutionary Computation.


 GAs simulate natural evolution, mimicking processes nature uses: Selection,
Crossover, Mutation and Acceptance.
 GAs simulate the survival of the fittest among individuals over consecutive generations
for solving a problem.

Development History
EC = GP + ES + EP + GA

EC   Evolutionary Computing     Rechenberg   1960
GP   Genetic Programming        Koza         1992
ES   Evolution Strategies       Rechenberg   1965
EP   Evolutionary Programming   Fogel        1962
GA   Genetic Algorithms         Holland      1970
Lecture # 2 Basic Concepts
Genetic algorithms (GAs) are the main paradigm of evolutionary computing. GAs are inspired by
Darwin's theory of evolution – the "survival of the fittest". In nature, competition among
individuals for scanty resources results in the fittest individuals dominating over the weaker
ones.

 GAs are ways of solving problems by mimicking processes nature uses, i.e., Selection,
Crossover, Mutation and Acceptance, to evolve a solution to a problem.
 GAs are adaptive heuristic search algorithms based on the evolutionary ideas of natural selection and
genetics.
 GAs are an intelligent exploitation of random search used in optimization problems.
 GAs, although randomized, exploit historical information to direct the search into the region
of better performance within the search space.

The biological background (basic genetics)


 Every organism has a set of rules describing how that organism is built. All living
organisms consist of cells.
 In each cell there is the same set of chromosomes. Chromosomes are strings of DNA and
serve as a model for the whole organism.
 A chromosome consists of genes, blocks of DNA.
 Each gene encodes a particular protein that represents a trait (feature), e.g., color of eyes.
 Possible settings for a trait (e.g. blue, brown) are called alleles.
 Each gene has its own position in the chromosome, called its locus.
 The complete set of genetic material (all chromosomes) is called the genome.
 A particular set of genes in a genome is called the genotype.
 The physical expression of the genotype (the organism itself after birth) is called the
phenotype – its physical and mental characteristics, such as eye color, intelligence etc.
Creation of Offsprings
When two organisms mate they share their genes; the resulting offspring may end up having half
the genes from one parent and half from the other. This process is called recombination
(crossover). The newly created offspring can then be mutated. Mutation means that the elements of
DNA are changed a bit. These changes are mainly caused by errors in copying genes from
parents. The fitness of an organism is measured by the success of the organism in its life (survival).
Search Space
In solving problems, some solution will be the best among others. The space of all feasible
solutions (among which the desired solution resides) is called search space (also called state
space).
 Each point in the search space represents one possible solution.
 Each possible solution can be "marked" by its value (or fitness) for the problem.
 The GA looks for the best solution among a number of possible solutions represented by
one point in the search space.
 Looking for a solution is then equal to looking for some extreme value (minimum or
maximum) in the search space.
 At times the search space may be well defined, but usually only a few points in the search
space are known. In using a GA, the process of finding solutions generates other points
(possible solutions) as evolution proceeds.

Figure 2.1 Search Space

Working Principles
Before getting into GAs, it is necessary to explain a few terms.
 Chromosome: a set of genes; a chromosome contains the solution in the form of genes.
 Gene: a part of a chromosome; a gene contains a part of the solution. It determines the
solution. E.g., 16743 is a chromosome and 1, 6, 7, 4 and 3 are its genes.
 Individual: same as chromosome.
 Population: the number of individuals present, all with the same length of chromosome.
 Fitness: the value assigned to an individual based on how far or close the individual is from
the solution; the greater the fitness value, the better the solution it contains.
 Fitness function: a function that assigns a fitness value to an individual. It is problem
specific.
 Breeding: taking two fit individuals and intermingling their chromosomes to create
two new individuals.
 Mutation: changing a random gene in an individual.
 Selection: selecting individuals for creating the next generation.

 Genetic algorithm begins with a set of solutions (represented by chromosomes) called the
population.
 Solutions from one population are taken and used to form a new population. This is
motivated by the possibility that the new population will be better than the old one.
 Solutions are selected according to their fitness to form new solutions (offspring); the more
suitable they are, the more chances they have to reproduce.
 This is repeated until some condition (e.g. number of generations or improvement of the
best solution) is satisfied.
Lecture #3 Procedures for GA
Fig 3.1 illustrates the working of Genetic Algorithms.

Initialization → Population
Population → (Parent Selection) → Parents
Parents → Recombination → Mutation → Offspring
Offspring → (Survivor Selection) → Population
Population → Termination (when the stop condition is met)

Fig 3.1. General scheme of the evolutionary process

Outline of the Basic Genetic Algorithm

1. [Start] Generate random population of n chromosomes (i.e. suitable solutions for the
problem).
2. [Fitness] Evaluate the fitness f(x) of each chromosome x in the population.
3. [New population] Create a new population by repeating following steps until the new
population is complete.

a. [Selection] Select two parent chromosomes from the population according to their
fitness (the better the fitness, the bigger the chance to be selected).
b. [Crossover] With a crossover probability, cross over the parents to form new offspring
(children). If no crossover is performed, the offspring are exact copies of the parents.
c. [Mutation] With a mutation probability, mutate the new offspring at each locus (position in
the chromosome).
d. [Accepting] Place the new offspring in the new population.
4. [Replace] Use new generated population for a further run of the algorithm
5. [Test] If the end condition is satisfied, stop, and return the best solution in current population
6. [Loop] Go to step 2
Note: The genetic algorithm's performance is largely influenced by two operators called
crossover and mutation. These two operators are the most important parts of GA.
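The outline above can be sketched as a short program. The following is a minimal, illustrative Python sketch, not the text's algorithm verbatim: the fitness function (OneMax, counting 1-bits), the chromosome length, and all parameter values are assumptions chosen for demonstration.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=20, p_cross=0.8,
                      p_mut=0.01, generations=50):
    # [Start] Generate a random population of chromosomes (bit lists).
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # [Fitness] Evaluate the fitness f(x) of each chromosome x.
        scores = [fitness(c) for c in pop]
        new_pop = []
        # [New population] Repeat until the new population is complete.
        while len(new_pop) < pop_size:
            # [Selection] Fitness-proportionate choice of two parents.
            p1, p2 = random.choices(pop, weights=scores, k=2)
            c1, c2 = p1[:], p2[:]
            # [Crossover] With probability p_cross, one-point crossover.
            if random.random() < p_cross:
                pt = random.randint(1, n_bits - 1)
                c1, c2 = p1[:pt] + p2[pt:], p2[:pt] + p1[pt:]
            # [Mutation] Flip each locus with probability p_mut.
            for child in (c1, c2):
                for i in range(n_bits):
                    if random.random() < p_mut:
                        child[i] = 1 - child[i]
                new_pop.append(child)   # [Accepting]
        pop = new_pop[:pop_size]        # [Replace]
    # [Test] Return the best solution in the current population.
    return max(pop, key=fitness)

# Toy problem: maximize the number of 1-bits ("OneMax").
best = genetic_algorithm(sum)
```

Note that crossover and mutation, the two operators highlighted above, each fire according to their own probability, which is what gives the search both exploitation and exploration.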
Flowchart of GA

START
  1. Create initial population (Genesis).
  2. Assign a fitness value to each individual.
  3. Natural selection: select two individuals, Parent 1 and Parent 2.
  4. Reproduction / Recombination: use the crossover operator to produce offspring;
     assign fitness values to the offspring. Repeat until crossover is finished.
  5. Natural selection: select one offspring.
  6. Mutation: apply the mutation operator to produce mutated offspring;
     assign fitness values to the offspring. Repeat until mutation is finished.
  7. Survival of the fittest: apply the replacement operator to incorporate the new
     individuals into the population.
  8. If the termination condition is not satisfied, go back to step 3; otherwise END.

Figure 3.2 Flowchart of the Genetic Algorithm


Lecture # 4 Genetic Representations (Encoding)

Before a genetic algorithm can be put to work on any problem, a method is needed to encode
potential solutions to that problem in a form that a computer can process.
One common approach is to encode solutions as binary strings: sequences of 1's and 0's, where
the digit at each position represents the value of some aspect of the solution.
Example:
A gene represents some data (eye color, hair color, sight, etc.).
A chromosome is an array of genes. In binary form a gene looks like: (11100010)
A chromosome looks like: Gene1 Gene2 Gene3 Gene4
(11000010, 00001110, 00111010, 10100011)
A chromosome should in some way contain information about the solution which it represents; it
thus requires encoding. The most popular way of encoding is a binary string like:
Chromosome 1: 1101100100110110
Chromosome 2: 1101111000011110
Each bit in the string represents some characteristic of the solution.
 There are many other ways of encoding, e.g., encoding values as integers or real numbers,
or as permutations, and so on.
 The suitability of an encoding method depends on the problem to be worked on.

Binary Encoding
Binary encoding is the most common way to represent the information contained in a chromosome.
In genetic algorithms, it was used first because of its relative simplicity.
 In binary encoding, every chromosome is a string of bits: 0 or 1, like
Chromosome 1: 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 0 1 1 1 0 0 1 0 1
Chromosome 2: 1 1 1 1 1 1 1 0 0 0 0 0 1 1 0 0 0 0 0 1 1 1 1 1
 Binary encoding gives many possible chromosomes even with a small number of alleles,
i.e. possible settings for a trait (feature).
 This encoding is often not natural for many problems, and sometimes corrections must be
made after crossover and/or mutation.
Example 1:

A one-variable function taking numeric values 0 to 15, each represented by a 4-bit binary string.

Numeric Value  4-bit String    Numeric Value  4-bit String    Numeric Value  4-bit String
      0          0000                6          0110               12          1100
      1          0001                7          0111               13          1101
      2          0010                8          1000               14          1110
      3          0011                9          1001               15          1111
      4          0100               10          1010
      5          0101               11          1011

Example 2:

A two-variable function is represented by a 4-bit string for each variable.

Let the two variables X1, X2 be coded as (1011 0110).
Every variable has both a lower and an upper limit: Xi(L) ≤ Xi ≤ Xi(U).
Because a 4-bit string can represent integers from 0 to 15, (0000 0000) and (1111 1111)
represent the points (X1(L), X2(L)) and (X1(U), X2(U)) for X1, X2 respectively.
Thus, an n-bit string can represent integers from 0 to 2^n − 1, i.e. 2^n integers.

Decoded binary substring: let Xi be coded as a substring Si of length ni. The decoded value
of a binary substring (s(n-1) ... s2 s1 s0) is

    Σ (j = 0 to n−1) 2^j × sj

where each sj can be 0 or 1, and the string S is written as s(n-1) ... s3 s2 s1 s0.

Example: converting decimal 10 to binary by repeated division by 2 gives 1010; decoding
the substring 1010 back gives 0 × 2^0 + 1 × 2^1 + 0 × 2^2 + 1 × 2^3 = 10.

Example: Decoding value

Consider a 4-bit string (0111); the decoded value is equal to
    2^3 × 0 + 2^2 × 1 + 2^1 × 1 + 2^0 × 1 = 7

Knowing Xi(L) and Xi(U), corresponding to (0000) and (1111) respectively,
the equivalent value for any 4-bit string Si can be obtained as

    Xi = Xi(L) + [ (Xi(U) − Xi(L)) / (2^ni − 1) ] × (decoded value of Si)

For example, for a variable Xi, let Xi(L) = 2 and Xi(U) = 17; find what value the
4-bit string Si = (1010) would represent.
First get the decoded value for Si = 1010 = 2^3 × 1 + 2^2 × 0 + 2^1 × 1 + 2^0 × 0 = 10, then
    Xi = 2 + [(17 − 2) / (2^4 − 1)] × 10 = 2 + 1 × 10 = 12

The accuracy obtained with a 4-bit code is 1/16 of the search space.

By increasing the string length by 1 bit, the accuracy increases to 1/32.
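The decoding and mapping rules above can be sketched in Python (the function names are illustrative):

```python
def decode(bits):
    """Decoded value of a binary substring s(n-1) ... s1 s0 (MSB first)."""
    return sum(int(b) * 2 ** i for i, b in enumerate(reversed(bits)))

def to_real(bits, lower, upper):
    """Map an n-bit string onto [lower, upper] using the formula above."""
    n = len(bits)
    return lower + (upper - lower) / (2 ** n - 1) * decode(bits)

print(decode("0111"))          # 7
print(to_real("1010", 2, 17))  # 12.0
```

The two printed values reproduce the worked examples above: (0111) decodes to 7, and (1010) on the interval [2, 17] represents 12.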
(The remaining part of this lecture is not necessary.)

Value Encoding
The Value encoding can be used in problems where values such as real numbers are used. Use of
binary encoding for this type of problem would be difficult.
1. In value encoding, every chromosome is a sequence of some values.
2. The values can be anything connected to the problem, such as real numbers, characters or
objects.
Examples:
Chromosome A 1.2324 5.3243 0.4556 2.3293 .4545
Chromosome B ABDJEIFJDHDIERJFDLDFLFEGT
Chromosome C (back), (back), (right), (forward), (left)
3. Value encoding is often necessary to develop some new types of crossovers and mutations
specific for the problem.
Permutation Encoding

Permutation encoding can be used in ordering problems, such as traveling salesman problem or
task ordering problem.
1. In permutation encoding, every chromosome is a string of numbers that represent a position in
a sequence.
Chromosome A 1 5 3 2 6 4 7 9 8
Chromosome B 8 5 6 7 2 3 1 4 9
2. Permutation encoding is useful for ordering problems. For some problems, crossover and
mutation corrections must be made to leave the chromosome consistent.
Examples:
1. The Traveling Salesman problem:
There are cities with given distances between them. The traveling salesman has to visit all of them,
but he does not want to travel more than necessary. Find a sequence of cities with minimal
traveled distance. Here, encoded chromosomes describe the order of cities the salesman visits.
2. The Eight Queens problem: There are eight queens. Find a way to place them on a chess
board so that no two queens attack each other. Here, encoding describes the position of a queen
on each row.
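For a permutation-encoded Traveling Salesman chromosome, fitness is derived from the total tour length. A minimal sketch, where the 4-city distance matrix is a made-up example:

```python
def tour_length(order, dist):
    """Total length of a closed tour visiting cities in the given order."""
    return sum(dist[order[i]][order[(i + 1) % len(order)]]
               for i in range(len(order)))

# Hypothetical symmetric distance matrix for 4 cities.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]

chromosome = [0, 1, 3, 2]             # a permutation of the city indices
print(tour_length(chromosome, dist))  # 2 + 4 + 3 + 9 = 18
```

A GA would minimize this length, so a fitness function could simply negate it; crossover and mutation must keep the chromosome a valid permutation, as the text notes.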
Tree Encoding
Tree encoding is used mainly for evolving programs or expressions. For genetic programming:
 In tree encoding, every chromosome is a tree of some objects, such as functions or
commands in a programming language.
 Tree encoding is useful for evolving programs or any other structures that can be encoded
in trees.
 Crossover and mutation can be done in a relatively easy way.
Example:

Chromosome A: an expression tree with + at the root and a division node
with children 5 and y; in LISP form: ( + ( / 5 y ) )

Chromosome B: a command tree: ( do_until step wall )

Fig 4.1 Example of chromosomes with tree encoding

Note: Tree encoding is good for evolving programs. The programming language LISP is often
used. Programs in LISP can be easily parsed as a tree, so crossover and mutation are relatively
easy.
Lecture # 5 Genetic Operators
Genetic operators used in genetic algorithms maintain genetic diversity. Genetic diversity or
variation is a necessity for the process of evolution. Genetic operators are analogous to those
which occur in the natural world:

 Reproduction (or Selection);


 Crossover (or Recombination); and
 Mutation.
In addition to these operators, there are some parameters of GA. One important parameter is
Population size.
 Population size says how many chromosomes are in the population (in one generation).
 If there are only a few chromosomes, then the GA has few possibilities to perform
crossover and only a small part of the search space is explored.
 If there are many chromosomes, then the GA slows down.
 Research shows that beyond some limit it is not useful to increase the population size, because
it does not help in solving the problem faster. The appropriate population size depends on the type of
encoding and on the problem.
Reproduction or Selection
Reproduction is usually the first operator applied on population. From the population, the
chromosomes are selected to be parents to crossover and produce offspring.

The problem is how to select these chromosomes. According to Darwin's evolution
theory of "survival of the fittest", the best ones should survive and create new offspring.
 The Reproduction operators are also called Selection operators.
 Selection means extracting a subset of genes from an existing population, according to some
definition of quality. Every gene has a meaning, so one can derive from the gene a kind
of quality measurement called the fitness function. Following this quality (fitness value),
selection can be performed.
 The fitness function quantifies the optimality of a solution (chromosome) so that a particular
solution may be ranked against all the other solutions. The function depicts the closeness of a
given solution to the desired result.
Many reproduction operators exist and they all essentially do the same thing: they pick from
the current population the strings of above-average fitness and insert multiple copies of them
in the mating pool in a probabilistic manner. The most commonly used methods
of selecting chromosomes as parents for crossover are:
 Roulette wheel selection
 Rank selection
 Boltzmann selection
 Steady state selection
 Tournament selection

The Roulette wheel and Boltzmann selection methods are illustrated next.

Example of Selection
The task is to maximize the function f(x) = x^2 with x in the integer interval
[0, 31], i.e., x = 0, 1, ..., 30, 31.
1. The first step is encoding of chromosomes; use binary representation for integers; 5 bits are
used to represent integers up to 31.
2. Assume that the population size is 4.
3. Generate the initial population at random. They are chromosomes or genotypes;
e.g., 01101, 11000, 01000, 10011.
4. Calculate the fitness value for each individual.
(a) Decode the individual into an integer (called the phenotype):
01101 → 13; 11000 → 24; 01000 → 8; 10011 → 19.
(b) Evaluate the fitness according to
f(x) = x^2: 13 → 169; 24 → 576; 8 → 64; 19 → 361.
5. Select parents (two individuals) for crossover based on their fitness values. Out of the many
methods for selecting the best chromosomes, if
Roulette-wheel selection is used, then the probability of the i-th string in the population being selected is

    pi = Fi / (Σ j=1..n Fj)

where
Fi is the fitness of string i in the population, expressed as f(x),
pi is the probability of string i being selected,
n is the number of individuals in the population, i.e. the population size, n = 4,
n × pi is the expected count.
String No   Initial Population   X value   Fitness f(x)=x^2   pi     Expected count n × pi
1           01101                13        169                0.14   0.58
2           11000                24        576                0.49   1.97
3           01000                 8         64                0.06   0.22
4           10011                19        361                0.31   1.23
SUM                                        1170               1.00   4.00
AVERAGE                                     293               0.25   1.00
MAX                                         576               0.49   1.97

String no. 2 has the maximum chance of being selected.


Roulette wheel selection (Fitness-Proportionate Selection)
Roulette-wheel selection, also known as Fitness-Proportionate Selection, is a genetic operator
used for selecting potentially useful solutions for recombination.
In fitness-proportionate selection:
 The chance of an individual being selected is proportional to its fitness relative to
its competitors' fitness.
 Conceptually, this can be thought of as a game of Roulette, as shown in Fig. 5.1.

Individual:   1     2     3     4     5     6     7     8
Slice size:   5%    9%    13%   17%   20%   8%    8%    20%

Fig. 5.1 Roulette wheel showing 8 individuals, with slice sizes proportional to fitness
 The Roulette wheel simulates the 8 individuals with fitness values Fi marked on its
circumference.
 The 5th individual has a higher fitness than most others, so the wheel would choose the 5th
individual more often than most other individuals.

 The wheel is spun n = 8 times, each time selecting one instance of a string,
chosen by the wheel pointer.

 The probability of the i-th string being selected is

    pi = Fi / (Σ j=1..n Fj)

where
n = number of individuals, called the population size; pi = probability of the i-th string being
selected; Fi = fitness of the i-th string in the population. Because the circumference of the wheel
is marked according to a string's fitness, the Roulette-wheel mechanism is expected to make

    n × pi

copies of the i-th string. Average fitness = (Σ j Fj) / n; Expected count = (n = 8) × pi;

Cumulative probability of the i-th string = Σ j=1..i pj
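Roulette-wheel selection can be sketched using the cumulative sums described above. A minimal Python sketch, applied to the f(x) = x^2 example population:

```python
import random

def roulette_select(population, fitnesses):
    """Pick one individual with probability proportional to its fitness."""
    total = sum(fitnesses)
    spin = random.uniform(0, total)      # where the wheel pointer stops
    cumulative = 0.0
    for individual, f in zip(population, fitnesses):
        cumulative += f
        if spin <= cumulative:
            return individual
    return population[-1]                # guard against rounding error

# The f(x) = x^2 example population and fitness values from above.
pop = ["01101", "11000", "01000", "10011"]
fit = [169, 576, 64, 361]
picks = [roulette_select(pop, fit) for _ in range(1000)]
# "11000" (p ~ 0.49) should be picked far more often than "01000" (p ~ 0.06).
```

Over many spins, the observed selection frequencies should approach the pi column of the table above.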

Boltzmann Selection
Simulated annealing is a method used to minimize or maximize a function.
 This method simulates the process of slow cooling of molten metal to achieve the
minimum function value in a minimization problem.
 The cooling phenomena are simulated by controlling a temperature like parameter
introduced with the concept of Boltzmann probability distribution.
 A system in thermal equilibrium at a temperature T has its energy distributed according to
the probability defined by P(E) = exp(−E / kT), where k is the Boltzmann constant.
 This expression suggests that a system at a higher temperature has almost uniform
probability at any energy state, but at lower temperature it has a small probability of
being at a higher energy state.
 Thus, by controlling the temperature T and assuming that the search process follows
Boltzmann probability distribution, the convergence of the algorithm is controlled.
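One common way to use the Boltzmann distribution in selection is as an acceptance test, as in simulated annealing. The sketch below is an illustrative acceptance rule only, not a complete selection scheme; the temperature values are assumptions, and the Boltzmann constant is folded into T:

```python
import math
import random

def boltzmann_accept(f_current, f_candidate, T):
    """Accept a candidate solution with Boltzmann probability (maximization):
    a fitter candidate is always accepted; a worse one survives with
    probability exp(-deltaE / T), where deltaE is the fitness loss."""
    if f_candidate >= f_current:
        return True
    return random.random() < math.exp(-(f_current - f_candidate) / T)

# At a high temperature almost any candidate is accepted;
# as T is lowered, only improvements tend to survive.
accepted_hot = boltzmann_accept(576, 169, T=1000.0)
accepted_cold = boltzmann_accept(576, 169, T=1e-6)
```

Lowering T over the generations controls convergence, exactly as the text describes: broad exploration early, strict selection late.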
Crossover
Crossover is a genetic operator that combines (mates) two chromosomes (parents) to produce a
new chromosome (offspring). The idea behind crossover is that the new chromosome may be
better than both of the parents if it takes the best characteristics from each of the parents.
Crossover occurs during evolution according to a user-definable crossover probability. Crossover
selects genes from parent chromosomes and creates a new offspring. The Crossover operators
are of many types.
 One-Point Crossover
 Two-Point Crossover
 Uniform Crossover
 Arithmetic Crossover
 Heuristic Crossover
The operators are selected based on the way chromosomes are encoded.
One-Point Crossover
The One-Point crossover operator randomly selects one crossover point, then copies everything
before this point from the first parent and everything after the crossover point from the
second parent. The crossover would then look as shown below.
Consider the two parents selected for crossover.
Parent 1 11011|00100110110
Parent 2 11011|11000011110
Interchanging the parents chromosomes after the crossover points - The Offspring produced are:
Offspring 1 11011|11000011110
Offspring 2 11011|00100110110
Note: The symbol, a vertical line, | is the chosen crossover point.
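One-point crossover can be sketched as:

```python
import random

def one_point_crossover(parent1, parent2, point=None):
    """Swap the tails of two equal-length chromosomes at one point."""
    if point is None:
        point = random.randint(1, len(parent1) - 1)
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

# The example above, with the crossover point after bit 5.
o1, o2 = one_point_crossover("1101100100110110", "1101111000011110", point=5)
print(o1)  # 1101111000011110
print(o2)  # 1101100100110110
```

Because the two parents happen to share their first five bits, the offspring here equal the opposite parents, just as in the worked example.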
Two-Point Crossover
Two-Point crossover operator randomly selects two crossover points within a chromosome then
interchanges the two parent chromosomes between these points to produce two new offspring.
Consider the two parents selected for crossover :
Parent 1 11011|0010011|0110
Parent 2 11011|1100001|1110
Interchanging the parents' chromosomes between the crossover points, the offspring produced are:

Offspring 1 11011|1100001|0110
Offspring 2 11011|0010011|1110
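Two-point crossover can be sketched as follows, with the crossover points of the example assumed to fall after bits 5 and 12:

```python
def two_point_crossover(parent1, parent2, p1, p2):
    """Exchange the segment between positions p1 and p2 (p1 < p2)."""
    child1 = parent1[:p1] + parent2[p1:p2] + parent1[p2:]
    child2 = parent2[:p1] + parent1[p1:p2] + parent2[p2:]
    return child1, child2

# The example above: middle segments are swapped, outer segments kept.
o1, o2 = two_point_crossover("1101100100110110", "1101111000011110", 5, 12)
```

Each offspring keeps its own parent's outer segments and inherits the middle segment from the other parent.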
Uniform Crossover
The Uniform crossover operator decides (with some probability, known as the mixing ratio) which
parent will contribute each of the gene values in the offspring chromosomes. This crossover operator
allows the parent chromosomes to be mixed at the gene level rather than at the segment level (as
with one- and two-point crossover).
Consider the two parents selected for crossover.

Parent 1 1101100100110110
Parent 2 1101111000011110

If the mixing ratio is 0.5 approximately, then half of the genes in the offspring will come from
parent 1 and other half will come from parent 2.
The possible set of offspring after uniform crossover would be:

Offspring 1 11 12 02 11 11 12 12 02 01 01 02 11 12 11 11 02
Offspring 2 12 11 01 12 12 01 01 11 02 02 11 12 01 12 12 01

Note: The subscripts indicate which parent the gene came from.
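Uniform crossover can be sketched as (a mixing ratio of 0.5 is assumed):

```python
import random

def uniform_crossover(parent1, parent2, mixing_ratio=0.5):
    """For each gene position, child1 inherits from parent1 with probability
    mixing_ratio; child2 always takes the gene from the other parent."""
    child1, child2 = [], []
    for g1, g2 in zip(parent1, parent2):
        if random.random() < mixing_ratio:
            child1.append(g1)
            child2.append(g2)
        else:
            child1.append(g2)
            child2.append(g1)
    return "".join(child1), "".join(child2)

o1, o2 = uniform_crossover("1101100100110110", "1101111000011110")
```

At every position the two offspring together hold exactly the two parental genes, mirroring the subscript notation in the example above.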

Arithmetic Crossover
Arithmetic crossover operator linearly combines two parent chromosome vectors to produce
two new offspring according to the equations:
Offspring1 = a * Parent1 + (1- a) * Parent2
Offspring2 = (1 – a) * Parent1 + a * Parent2
where a is a random weighting factor chosen before each crossover operation.
Consider two parents (each of 4 float genes) selected for crossover:

Parent 1 (0.3) (1.4) (0.2) (7.4)


Parent 2 (0.5) (4.5) (0.1) (5.6)

Applying the above two equations with the weighting factor a = 0.7, we get two resulting
offspring. The possible set of offspring after arithmetic crossover would be:

Offspring 1 (0.36) (2.33) (0.17) (6.86)

Offspring 2 (0.44) (3.57) (0.13) (6.14)
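Arithmetic crossover can be sketched as:

```python
def arithmetic_crossover(parent1, parent2, a):
    """Linearly combine two real-valued parent vectors with weight a."""
    child1 = [a * x + (1 - a) * y for x, y in zip(parent1, parent2)]
    child2 = [(1 - a) * x + a * y for x, y in zip(parent1, parent2)]
    return child1, child2

p1 = [0.3, 1.4, 0.2, 7.4]
p2 = [0.5, 4.5, 0.1, 5.6]
o1, o2 = arithmetic_crossover(p1, p2, a=0.7)
# o1 ~ [0.36, 2.33, 0.17, 6.86], o2 ~ [0.44, 3.57, 0.13, 6.14]
# (up to floating-point rounding)
```

With a fixed a the offspring lie on the line segment between the two parents, so this operator only makes sense for real-valued (value-encoded) chromosomes.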
Heuristic Crossover
Heuristic crossover operator uses the fitness values of the two parent chromosomes to determine
the direction of the search.
The offspring are created according to the equations:
Offspring1 = BestParent + r * (BestParent − WorstParent)
Offspring2 = BestParent
Where r is a random number between 0 and 1.
It is possible that Offspring1 will not be feasible. This can happen if r is chosen such that one or
more of its genes fall outside the allowable upper or lower bounds. For this reason, heuristic
crossover has a user-defined parameter n for the number of times to try to find an r that
results in a feasible chromosome. If a feasible chromosome is not produced after n tries, the
worst parent is returned as Offspring1.
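Heuristic crossover, including the retry loop, can be sketched as follows; the bounds, parent vectors, and retry count n are illustrative assumptions:

```python
import random

def heuristic_crossover(best, worst, lower, upper, n=10):
    """Offspring1 = best + r * (best - worst); retry up to n times to keep
    every gene within [lower, upper], else fall back to the worst parent."""
    for _ in range(n):
        r = random.random()
        child = [b + r * (b - w) for b, w in zip(best, worst)]
        if all(lower <= g <= upper for g in child):
            return child, list(best)     # Offspring1, Offspring2 = best
    return list(worst), list(best)       # no feasible child after n tries

best_parent  = [0.5, 4.5, 0.1, 5.6]      # higher-fitness parent (assumed)
worst_parent = [0.3, 1.4, 0.2, 7.4]
o1, o2 = heuristic_crossover(best_parent, worst_parent, lower=0.0, upper=10.0)
```

The search direction points from the worst parent toward (and beyond) the best one, which is what makes this operator fitness-directed rather than random.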
Mutation
After a crossover is performed, mutation takes place. Mutation is a genetic operator used to
maintain genetic diversity from one generation of a population of chromosomes to the next.
Mutation occurs during evolution according to a user-definable mutation probability, usually set
to a fairly low value; 0.01 is a good first choice. Mutation alters one or more gene values in a
chromosome from its initial state. This can result in entirely new gene values being added to
the gene pool. With the new gene values, the genetic algorithm may be able to arrive at a better
solution than was previously possible. Mutation is an important part of the genetic search; it helps
to prevent the population from stagnating at any local optimum. Mutation is intended to prevent
the search from falling into a local optimum of the state space.
The Mutation operators are of many types:
Flip Bit, Boundary, Non-Uniform, Uniform, and Gaussian.
Flip Bit Mutation
The Flip Bit mutation operator simply inverts the value of the chosen gene, i.e. 0 goes to 1 and
1 goes to 0. This mutation operator can only be used for binary genes. Consider the two original
offspring selected for mutation.
Original offspring 1 1101111000011110
Original offspring 2 1101100100110110
Inverting the value of the chosen genes (0 to 1 and 1 to 0), the mutated offspring produced are:
Mutated offspring 1 1100111000011110
Mutated offspring 2 1101101100110100
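Flip-bit mutation can be sketched as (the per-gene mutation probability is a parameter, as described above):

```python
import random

def flip_bit_mutation(chromosome, p_mut=0.01):
    """Invert each binary gene independently with probability p_mut."""
    return "".join(str(1 - int(b)) if random.random() < p_mut else b
                   for b in chromosome)

original = "1101111000011110"
mutated = flip_bit_mutation(original, p_mut=0.1)
```

With the usual low p_mut, most offspring pass through unchanged and only the occasional gene is flipped, which is exactly the diversity-preserving role mutation plays.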
