Unit 5 Soft Computing
Lecture # An Overview
What are GAs?
Genetic Algorithms (GAs) are adaptive heuristic search algorithms based on the evolutionary
ideas of natural selection and genetics.
Genetic algorithms are a part of evolutionary computing, a rapidly growing area of
artificial intelligence. GAs are inspired by Darwin's theory of evolution: "survival of the
fittest".
GAs represent an intelligent exploitation of random search used to solve optimization
problems.
GAs, although randomized, exploit historical information to direct the search into the region of
better performance within the search space.
In nature, competition among individuals for scanty resources results in the fittest individuals
dominating over the weaker ones.
In engineering and mathematics, GAs serve as a process of optimization: the problems are first
formulated as mathematical models expressed in terms of functions; then, to find a solution,
one discovers the parameters that optimize the model or the function components that provide
optimal system performance.
Why Genetic Algorithms?
GAs are better than conventional AI in that they are more robust: unlike older AI systems,
GAs do not break easily when the inputs change slightly or in the presence of reasonable noise.
When searching a large state space, a multi-modal state space, or an n-dimensional
surface, genetic algorithms offer significant benefits over many other typical search
optimization techniques such as linear programming, heuristic search, depth-first search,
and breadth-first search.
"Genetic Algorithms are good at taking large, potentially huge search spaces and navigating
them, looking for optimal combinations of things, the solutions one might not otherwise find
in a lifetime.”
Optimization
Optimization is a process that finds a best, or optimal, solution for a problem. The Optimization
problems are centered around three factors:
1. An objective function which is to be minimized or maximized;
Examples
In manufacturing, we want to maximize the profit or minimize the cost.
In designing an automobile panel, we want to maximize the strength.
2. A set of unknowns or variables that affect the objective function,
Examples
In manufacturing, the variables are amount of resources used or the time spent.
In panel design problem, the variables are shape and dimensions of the panel.
3. A set of constraints that allow the unknowns to take on certain values but exclude others;
Examples
In manufacturing, the resources used cannot exceed those available.
In the panel design problem, the shape and dimensions must lie within allowable limits.
An optimization problem is defined as: Finding values of the variables that minimize or
maximize the objective function while satisfying the constraints.
Search Optimization Algorithms
Fig. 1.1 below shows different types of Search Optimization algorithms.
Fig. 1.1 Taxonomy of Search Optimization techniques. The branches shown include Newton and
Fibonacci methods; Tabu Search, Hill Climbing, Simulated Annealing, and Evolutionary Methods;
the Evolutionary Methods in turn include Genetic Algorithms and Genetic Programming.
Development History
EC = GP + ES + EP + GA
where EC stands for Evolutionary Computing, GP for Genetic Programming, ES for Evolution
Strategies, EP for Evolutionary Programming, and GA for Genetic Algorithms.
GAs are ways of solving problems by mimicking processes nature uses, i.e., Selection,
Crossover, Mutation and Accepting, to evolve a solution to a problem.
GAs are adaptive heuristic search based on the evolutionary ideas of natural selection and
genetics.
GAs are an intelligent exploitation of random search used in optimization problems.
GAs, although randomized, exploit historical information to direct the search into the region
of better performance within the search space.
Working Principles
Before getting into GAs, it is necessary to explain a few terms.
Chromosome: a set of genes; a chromosome contains the solution in form of genes.
Gene: a part of a chromosome; a gene contains a part of the solution.
E.g., 16743 is a chromosome and 1, 6, 7, 4 and 3 are its genes.
Individual: same as chromosome.
Population: the number of individuals present, all with chromosomes of the same length.
Fitness: the value assigned to an individual based on how far or close the individual is from
the solution; the greater the fitness value, the better the solution it contains.
Fitness function: a function that assigns fitness value to the individual. It is problem
specific.
Breeding: taking two fit individuals and intermingling their chromosomes to create
two new individuals.
Mutation: changing a random gene in an individual.
Selection: selecting individuals for creating the next generation.
Genetic algorithm begins with a set of solutions (represented by chromosomes) called the
population.
Solutions from one population are taken and used to form a new population. This is
motivated by the possibility that the new population will be better than the old one.
Solutions are selected according to their fitness to form new solutions (offspring); the more
suitable they are, the more chances they have to reproduce.
This is repeated until some condition (e.g., number of generations or improvement of the
best solution) is satisfied.
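The loop described above can be sketched as a minimal GA in Python. This is an illustrative sketch, not the lecture's reference code: the fitness function f(x) = x^2 on 5-bit strings, the population size, and the mutation rate are all assumptions chosen for the demonstration.

```python
import random

def fitness(chrom):
    # Decode the 5-bit string to an integer x and return f(x) = x * x
    x = int(chrom, 2)
    return x * x

def select(pop):
    # Roulette-wheel selection: pick with probability proportional to fitness
    total = sum(fitness(c) for c in pop)
    if total == 0:                      # all-zero population: pick at random
        return random.choice(pop)
    r = random.uniform(0, total)
    acc = 0.0
    for c in pop:
        acc += fitness(c)
        if acc >= r:
            return c
    return pop[-1]

def crossover(p1, p2):
    # One-point crossover at a random interior position
    pt = random.randint(1, len(p1) - 1)
    return p1[:pt] + p2[pt:], p2[:pt] + p1[pt:]

def mutate(chrom, rate=0.05):
    # Flip each bit independently with probability `rate`
    return "".join("10"[int(b)] if random.random() < rate else b
                   for b in chrom)

def run_ga(pop_size=4, length=5, generations=50):
    # [Start] a random population, then repeat fitness / selection /
    # crossover / mutation until the termination condition (generation count)
    pop = ["".join(random.choice("01") for _ in range(length))
           for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            c1, c2 = crossover(select(pop), select(pop))
            new_pop += [mutate(c1), mutate(c2)]
        pop = new_pop[:pop_size]
    return max(pop, key=fitness)

random.seed(1)
best = run_ga()
```

Because selection favors high f(x), the surviving strings drift toward larger decoded values of x.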
Lecture #3 Procedures for GA
3.1 Illustration of the working of Genetic Algorithms
Fig. 3.1 The GA cycle: Initialization produces the Population; Selection chooses Parents;
Recombination and Mutation create Offspring; Survivor selection updates the Population;
the cycle repeats until Termination.
1. [Start] Generate random population of n chromosomes (i.e. suitable solutions for the
problem).
2. [Fitness] Evaluate the fitness f(x) of each chromosome x in the population.
3. [New population] Create a new population by repeating following steps until the new
population is complete.
The loop is often drawn as a flowchart: START, then Natural Selection selects offspring;
Crossover is applied; fitness values are assigned to the offspring; survival of the fittest
applies the replacement operator to incorporate the new individual into the population;
if finished, END, otherwise select one offspring and repeat.
Before a genetic algorithm can be put to work on any problem, a method is needed to encode
potential solutions to that problem in a form that a computer can process.
One common approach is to encode solutions as binary strings: sequences of 1's and 0's, where
the digit at each position represents the value of some aspect of the solution.
Example:
A Gene represents some data (eye color, hair color, sight, etc.).
A chromosome is an array of genes. In binary form a Gene looks like: (11100010)
A Chromosome looks like: Gene1 Gene2 Gene3 Gene4
(11000010, 00001110, 00111010, 10100011)
A chromosome should in some way contain information about the solution it represents; it
thus requires encoding. The most popular way of encoding is a binary string, like:
Chromosome 1: 1101100100110110
Chromosome 2: 1101111000011110
Each bit in the string represents some characteristics of the solution.
There are many other ways of encoding, e.g., encoding values as integers or real numbers,
or as permutations, and so on.
The suitability of an encoding method depends on the problem being worked on.
Binary Encoding
Binary encoding is the most common way to represent the information contained in a chromosome.
It was the first encoding used in genetic algorithms because of its relative simplicity.
In binary encoding, every chromosome is a string of bits: 0 or 1, like
Chromosome 1: 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 0 1 1 1 0 0 1 0 1
Chromosome 2: 1 1 1 1 1 1 1 0 0 0 0 0 1 1 0 0 0 0 0 1 1 1 1 1
Binary encoding gives many possible chromosomes even with a small number of alleles,
i.e., possible settings for a trait (feature).
This encoding is often not natural for many problems and sometimes corrections must be
made after crossover and/or mutation.
Example 1:
A one-variable function, say over the numbers 0 to 15, with each numeric value represented by a 4-bit binary string.
Example 2:
Knowing X_min and X_max, corresponding to (0000) and (1111) respectively,
the equivalent value for any 4-bit string can be obtained as
X = X_min + [(X_max - X_min) / (2^4 - 1)] x (decoded value of the string)
For example, for a variable Xi, let X_min = 2 and X_max = 17; find what value the
4-bit string Si = (1010) would represent.
First get the decoded value for Si: 1010 = 2^3 x 1 + 2^2 x 0 + 2^1 x 1 + 2^0 x 0 = 10, then
Xi = 2 + [(17 - 2) / 15] x 10 = 12
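The decoding rule in this example can be written as a small helper function; `decode` is a hypothetical name, and the sketch simply implements the linear mapping shown above.

```python
def decode(bits, x_min, x_max):
    """Map a binary string to a real value in [x_min, x_max]."""
    value = int(bits, 2)                # decoded integer, e.g. '1010' -> 10
    max_value = 2 ** len(bits) - 1      # '1111' -> 15 for 4 bits
    return x_min + (x_max - x_min) * value / max_value

print(decode("1010", 2, 17))   # -> 12.0, matching the worked example
```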
Value Encoding
Value encoding can be used in problems where values such as real numbers are used.
Using binary encoding for this type of problem would be difficult.
1. In value encoding, every chromosome is a sequence of some values.
2. The values can be anything connected to the problem, such as real numbers, characters or
objects.
Examples:
Chromosome A 1.2324 5.3243 0.4556 2.3293 .4545
Chromosome B ABDJEIFJDHDIERJFDLDFLFEGT
Chromosome C (back), (back), (right), (forward), (left)
3. For value encoding, it is often necessary to develop new types of crossovers and mutations
specific to the problem.
Permutation Encoding
Permutation encoding can be used in ordering problems, such as traveling salesman problem or
task ordering problem.
1. In permutation encoding, every chromosome is a string of numbers that represent a position in
a sequence.
Chromosome A 1 5 3 2 6 4 7 9 8
Chromosome B 8 5 6 7 2 3 1 4 9
2. Permutation encoding is useful for ordering problems. For some problems, crossover and
mutation corrections must be made to leave the chromosome consistent.
Examples:
1. The Traveling Salesman problem:
There are cities with given distances between them. The traveling salesman has to visit all of
them, but he does not want to travel more than necessary. Find a sequence of cities with
minimal traveled distance. Here, encoded chromosomes describe the order of cities the
salesman visits.
2. The Eight Queens problem: There are eight queens. Find a way to place them on a chess
board so that no two queens attack each other. Here, encoding describes the position of a queen
on each row.
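As a sketch of permutation encoding for the Traveling Salesman problem, a chromosome can be a list of city indices, and its quality is measured by the total tour length; the city coordinates below are hypothetical values chosen only for illustration.

```python
import math

# Hypothetical city coordinates; a chromosome is a permutation of city indices
cities = [(0, 0), (1, 5), (4, 3), (6, 1), (3, 0)]

def tour_length(perm):
    """Total distance of visiting cities in the given order, returning home."""
    dist = 0.0
    for i in range(len(perm)):
        x1, y1 = cities[perm[i]]
        x2, y2 = cities[perm[(i + 1) % len(perm)]]   # wrap back to the start
        dist += math.hypot(x2 - x1, y2 - y1)
    return dist

chromosome = [0, 1, 2, 3, 4]   # one permutation-encoded candidate tour
```

A GA minimizing `tour_length` would need permutation-preserving crossover and mutation, as the text notes, so that every offspring remains a valid tour.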
Tree Encoding
Tree encoding is used mainly for evolving programs or expressions. For genetic programming:
In tree encoding, every chromosome is a tree of some objects, such as functions or
commands in programming language.
Tree encoding is useful for evolving programs or any other structures that can be encoded
in trees.
Crossover and mutation can be done in a relatively easy way.
Example:
Fig. Tree encoding example. Chromosome A is an arithmetic expression tree built from the
nodes +, /, 5 and y; Chromosome B is a command tree: do until (step, wall).
Note: Tree encoding is good for evolving programs. The programming language LISP is often
used: programs in LISP can be parsed directly as trees, so crossover and mutation are
relatively easy.
Lecture # 5 Genetic Operators
Genetic operators used in genetic algorithms maintain genetic diversity. Genetic diversity or
variation is a necessity for the process of evolution. Genetic operators are analogous to those
which occur in the natural world:
The Roulette-wheel and Boltzmann selection methods are illustrated next.
Example of Selection
The problem for the Evolutionary Algorithm is to maximize the function f(x) = x^2 with x in
the integer interval [0, 31], i.e., x = 0, 1, ..., 30, 31.
1. The first step is encoding of chromosomes; use binary representation for integers; 5-bits are
used to represent integers up to 31.
2. Assume that the population size is 4.
3. Generate initial population at random. They are chromosomes or genotypes;
e.g., 01101, 11000, 01000, 10011.
4. Calculate fitness value for each individual.
(a) Decode the individual into an integer (called phenotypes),
01101 → 13; 11000 → 24; 01000 → 8; 10011 → 19;
(b) Evaluate the fitness according to
f(x) = x^2: 13 → 169; 24 → 576; 8 → 64; 19 → 361.
5. Select two parents for crossover based on their fitness. Out of the many methods for
selecting the best chromosomes, if Roulette-wheel selection is used, then the probability of
the i-th string in the population being selected is
pi = Fi / (F1 + F2 + ... + Fn)
where
Fi is the fitness of string i in the population, expressed as f(x),
pi is the probability of string i being selected,
n is the number of individuals in the population, i.e., the population size; here n = 4,
n * pi is the expected count.
String No | Initial Population | X value | Fitness f(x) = x^2 | pi   | Expected count n * pi
1         | 01101              | 13      | 169                | 0.14 | 0.58
2         | 11000              | 24      | 576                | 0.49 | 1.97
3         | 01000              | 8       | 64                 | 0.06 | 0.22
4         | 10011              | 19      | 361                | 0.31 | 1.23
SUM       |                    |         | 1170               | 1.00 | 4.00
AVERAGE   |                    |         | 293                | 0.25 | 1.00
MAX       |                    |         | 576                | 0.49 | 1.97
Fig. Roulette-wheel showing 8 individuals with fitness-proportional segments:
1: 5%, 2: 9%, 3: 13%, 4: 17%, 5: 20%, 6: 8%, 7: 8%, 8: 20%.
The Roulette-wheel simulates 8 individuals with fitness values Fi marked at its
circumference; e.g., the 5th individual has a higher fitness than most others, so the wheel
would choose the 5th individual more often than the others. The wheel is spun n = 8 times,
each time selecting one instance of a string chosen by the wheel pointer.
where n is the number of individuals (the population size), pi is the probability of the i-th
string being selected, and Fi is the fitness of the i-th string in the population. Because the
circumference of the wheel is marked according to a string's fitness, the Roulette-wheel
mechanism is expected to make Fi / F_avg copies of the i-th string, where F_avg is the
average fitness of the population. The cumulative probability up to the i-th string is
Pi = p1 + p2 + ... + pi.
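A minimal sketch of Roulette-wheel selection via cumulative probabilities, in Python. The fitness values are taken from the f(x) = x^2 worked example above; the function name and the 10,000-spin repeat count are arbitrary choices for the demonstration.

```python
import bisect
import random

def roulette_select(fitnesses):
    # pi = Fi / sum(F); the wheel is implemented via cumulative probabilities
    total = sum(fitnesses)
    cumulative, acc = [], 0.0
    for f in fitnesses:
        acc += f / total
        cumulative.append(acc)
    i = bisect.bisect_left(cumulative, random.random())
    return min(i, len(fitnesses) - 1)   # guard against float round-off

# Fitness values from the worked example: 169, 576, 64, 361
fitnesses = [169, 576, 64, 361]
random.seed(0)
counts = [0, 0, 0, 0]
for _ in range(10000):
    counts[roulette_select(fitnesses)] += 1
# String 2 (fitness 576, pi ~ 0.49) is selected most often
```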
Boltzmann Selection
Simulated annealing is a method used to minimize or maximize a function.
This method simulates the process of slow cooling of molten metal to achieve the
minimum function value in a minimization problem.
The cooling phenomenon is simulated by controlling a temperature-like parameter
introduced with the concept of the Boltzmann probability distribution.
A system in thermal equilibrium at a temperature T has its energy distributed based on
the probability defined by P(E) = exp(-E / kT), where k is the Boltzmann constant.
This expression suggests that a system at a high temperature has an almost uniform
probability of being at any energy state, but at a low temperature it has only a small
probability of being at a high energy state.
Thus, by controlling the temperature T and assuming that the search process follows
Boltzmann probability distribution, the convergence of the algorithm is controlled.
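A minimal sketch of the Boltzmann acceptance rule used in simulated annealing. It assumes the Boltzmann constant k is folded into the temperature scale; the function name and the example temperatures are illustrative assumptions, not from the lecture.

```python
import math
import random

def boltzmann_accept(delta_e, temperature):
    # Accept an energy increase delta_e with probability exp(-delta_e / T);
    # improving moves (delta_e <= 0) are always accepted
    if delta_e <= 0:
        return True
    return random.random() < math.exp(-delta_e / temperature)

# At a high temperature almost any state is acceptable; at a low one,
# higher-energy states become very unlikely:
p_high_t = math.exp(-1.0 / 100.0)   # close to 1
p_low_t = math.exp(-1.0 / 0.1)      # close to 0
```

Lowering `temperature` over the course of the search thus controls the convergence of the algorithm, as described above.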
Crossover
Crossover is a genetic operator that combines (mates) two chromosomes (parents) to produce a
new chromosome (offspring). The idea behind crossover is that the new chromosome may be
better than both of the parents if it takes the best characteristics from each of the parents.
Crossover occurs during evolution according to a user-definable crossover probability. Crossover
selects genes from parent chromosomes and creates a new offspring. The Crossover operators
are of many types.
One-Point Crossover
Two-Point Crossover
Uniform Crossover
Arithmetic Crossover
Heuristic Crossover
The operators are selected based on the way chromosomes are encoded.
One-Point Crossover
The One-Point crossover operator randomly selects one crossover point, then copies everything
before this point from the first parent and everything after the crossover point from the
second parent. The crossover would then look as shown below.
Consider the two parents selected for crossover.
Parent 1 11011|00100110110
Parent 2 11011|11000011110
Interchanging the parents' chromosomes after the crossover point, the offspring produced are:
Offspring 1 11011|11000011110
Offspring 2 11011|00100110110
Note: The symbol, a vertical line, | is the chosen crossover point.
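A minimal one-point crossover sketch in Python, assuming chromosomes are bit strings as above; the random seed is an arbitrary choice for reproducibility.

```python
import random

def one_point_crossover(parent1, parent2):
    # Choose a random interior crossover point and swap the tails
    point = random.randint(1, len(parent1) - 1)
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

random.seed(0)
p1 = "1101100100110110"
p2 = "1101111000011110"
c1, c2 = one_point_crossover(p1, p2)
```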
Two-Point Crossover
Two-Point crossover operator randomly selects two crossover points within a chromosome then
interchanges the two parent chromosomes between these points to produce two new offspring.
Consider the two parents selected for crossover :
Parent 1 11011|0010011|0110
Parent 2 11011|1100001|1110
Interchanging the parents' chromosomes between the crossover points, the offspring produced are:
Offspring 1 11011|1100001|0110
Offspring 2 11011|0010011|1110
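The two-point interchange can be sketched as follows, assuming the crossover points are supplied explicitly; positions 5 and 12 match the vertical bars in the parent strings shown above.

```python
def two_point_crossover(parent1, parent2, a, b):
    # Swap the segment between crossover points a and b (with a < b)
    child1 = parent1[:a] + parent2[a:b] + parent1[b:]
    child2 = parent2[:a] + parent1[a:b] + parent2[b:]
    return child1, child2

c1, c2 = two_point_crossover("1101100100110110",
                             "1101111000011110", 5, 12)
# c1 == "1101111000010110", c2 == "1101100100111110"
```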
Uniform Crossover
The Uniform crossover operator decides (with some probability, known as the mixing ratio)
which parent will contribute each of the gene values in the offspring chromosomes. This
crossover operator allows the parent chromosomes to be mixed at the gene level rather than at
the segment level (as with one- and two-point crossover).
Consider the two parents selected for crossover.
Parent 1 1101100100110110
Parent 2 1101111000011110
If the mixing ratio is 0.5 approximately, then half of the genes in the offspring will come from
parent 1 and other half will come from parent 2.
The possible set of offspring after uniform crossover would be:
Offspring 1 11 12 02 11 11 12 12 02 01 01 02 11 12 11 11 02
Offspring 2 12 11 01 12 12 01 01 11 02 02 11 12 01 12 12 01
Note: The subscripts indicate which parent the gene came from.
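A sketch of uniform crossover with a 0.5 mixing ratio, assuming bit-string chromosomes as above; at each position the two children take their genes from opposite parents, so together they always preserve the parents' gene values.

```python
import random

def uniform_crossover(parent1, parent2, mixing_ratio=0.5):
    # For each gene, child1 takes parent1's value with probability
    # mixing_ratio, otherwise parent2's; child2 gets the other value
    child1, child2 = [], []
    for g1, g2 in zip(parent1, parent2):
        if random.random() < mixing_ratio:
            child1.append(g1)
            child2.append(g2)
        else:
            child1.append(g2)
            child2.append(g1)
    return "".join(child1), "".join(child2)

random.seed(3)
p1 = "1101100100110110"
p2 = "1101111000011110"
c1, c2 = uniform_crossover(p1, p2)
```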
Arithmetic Crossover
Arithmetic crossover operator linearly combines two parent chromosome vectors to produce
two new offspring according to the equations:
Offspring1 = a * Parent1 + (1- a) * Parent2
Offspring2 = (1 – a) * Parent1 + a * Parent2
where a is a random weighting factor chosen before each crossover operation.
Consider two parents (each of 4 float genes) selected for crossover.
Applying the above two equations with the weighting factor a = 0.7, we get two resulting
offspring.
The possible set of offspring after arithmetic crossover would be:
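Since the parent values are not specified here, the sketch below uses hypothetical 4-gene float parents to show the arithmetic; only the two blending equations come from the text above.

```python
def arithmetic_crossover(parent1, parent2, a=0.7):
    # Offspring1 = a * Parent1 + (1 - a) * Parent2
    # Offspring2 = (1 - a) * Parent1 + a * Parent2
    child1 = [a * x + (1 - a) * y for x, y in zip(parent1, parent2)]
    child2 = [(1 - a) * x + a * y for x, y in zip(parent1, parent2)]
    return child1, child2

# Hypothetical 4-gene float parents (the lecture's values are not given)
p1 = [0.3, 1.4, 0.2, 7.4]
p2 = [0.5, 4.5, 0.1, 5.6]
c1, c2 = arithmetic_crossover(p1, p2, a=0.7)
# c1 = [0.36, 2.33, 0.17, 6.86] up to float rounding (0.7*p1 + 0.3*p2)
```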