Lecture 4

The document provides an overview of artificial intelligence techniques, specifically focusing on Genetic Algorithms, Particle Swarm Optimization, and Ant Colony Optimization. It covers the history, applications, components, and algorithms associated with each technique, detailing how they solve optimization problems. The document also includes examples and further readings for deeper understanding.

A General Introduction to

Artificial Intelligence
Outline

 Genetic Algorithms.
 Particle Swarm Optimization.
 Ant Colony Optimization.
Genetic Algorithms

 History of GAs.
 Applications of GAs.
 Components of GAs.
 General GA evolutionary loop.
 Examples.
Genetic Algorithms
 Invented by John Holland (1975).
 Use a population-based approach.
 Based on natural selection ("survival of the
fittest").
 Usually use a fixed-length, linear
representation of chromosomes.
 Genetic operators for modification:
crossover and mutation.
Genetic Algorithms -
Applications
Domain                Application Types
Control               gas pipeline, pole balancing, missile evasion, pursuit
Design                semiconductor layout, aircraft design, keyboard
                      configuration, communication networks
Scheduling            manufacturing, facility scheduling, resource allocation
Robotics              trajectory planning
Machine Learning      designing neural networks, improving classification
                      algorithms, classifier systems
Signal Processing     filter design
Game Playing          poker, checkers, prisoner's dilemma
Combinatorial         set covering, travelling salesman, routing, bin packing,
Optimization          graph colouring and partitioning
Genetic Algorithms -
Components
 A problem to solve, and ...
 Encoding technique (gene, chromosome)
 Initialization procedure (creation)
 Evaluation function (environment)
 Selection of parents (reproduction)
 Genetic operators (mutation, recombination)
 Parameter settings (practice and art)
Genetic Algorithms -
General Algorithm
1. Initialize a population at random.
2. Are stopping criteria satisfied? If YES: stop and report.
3. Select individuals based on fitness.
4. Perform genetic operators (GOs) on the selected individual(s).
5. Replace the old population with the new one and go to step 2.
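As an illustration, a minimal sketch of this loop in Python, assuming a bit-string representation and the operators discussed later in these slides (binary tournament selection, one-point crossover, bit-flip mutation); all names and parameter values here are illustrative:

import random

def evolve(fitness, n_bits=20, pop_size=50, generations=100,
           p_cross=0.9, p_mut=0.05):
    # Step 1: initialize a population of random bit strings.
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):        # step 2: stop after a generation budget
        scored = [(fitness(ind), ind) for ind in pop]
        new_pop = []
        while len(new_pop) < pop_size:
            # Step 3: fitness-based selection (binary tournament).
            p1 = max(random.sample(scored, 2))[1]
            p2 = max(random.sample(scored, 2))[1]
            # Step 4: genetic operators (one-point crossover, bit-flip mutation).
            if random.random() < p_cross:
                cut = random.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            child = [b ^ 1 if random.random() < p_mut else b for b in child]
            new_pop.append(child)
        pop = new_pop                   # step 5: replace the old population
    return max(pop, key=fitness)

# Example: maximize the number of 1-bits ("one-max").
best = evolve(fitness=sum)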


Genetic Algorithms -
Individual Representation

 Chromosomes could be:

 Bit strings (0101 ... 1100)
 Real numbers (43.2 -33.1 ... 0.0 89.2)
 Permutations of elements (E11 E3 E7 ... E1 E15)
 Lists of rules (R1 R2 R3 ... R22 R23)
 Program elements (genetic programming)
 ... any data structure ...
Genetic Algorithms -
Examples
 Problem 1:
Finding the global optimum of a function f.
 Problem 2:
The satisfiability (SAT) problem.
 Problem 3:
The travelling salesman problem (TSP).
 Problem 4:
The n-queens problem.
Genetic Algorithms -
Individual Representation
 Binary representation for problems 1 and 2:
00010110100111010010
 Permutation representation for problem 3:
(2 5 7 6 8 1 3 4)
 Integer representation for problem 4:
(1 3 2 2 1 5 6 7)
Genetic Algorithms -
Fitness-Based Selection
 Problem 1:
Fitness function: the function value at the candidate point.
 Problem 2:
Fitness function: the number of satisfied clauses
(the bigger the better).
 Problem 3:
Fitness function: the cost of the tour (the smaller the
better).
 Problem 4:
Fitness function: the number of pairs of attacking queens
(the smaller the better).
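For example, a sketch of the problem 4 fitness under the integer representation above, where board[i] holds the row of the queen in column i (the function name is illustrative):

def n_queens_conflicts(board):
    # Count pairs of queens attacking each other; 0 means a valid placement.
    conflicts = 0
    n = len(board)
    for i in range(n):
        for j in range(i + 1, n):
            same_row = board[i] == board[j]
            same_diagonal = abs(board[i] - board[j]) == j - i
            conflicts += same_row or same_diagonal
    return conflicts

print(n_queens_conflicts([1, 3, 2, 2, 1, 5, 6, 7]))  # the slide's example vector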
Genetic Algorithms -
Fitness-Based Selection
Selection mechanisms:
 Roulette-wheel selection.
 Tournament selection.
 Ranking-based selection.
 Truncated selection.
 Annealing selection.
 (μ, λ) selection.
 (μ + λ) selection.
 ...
(Pie chart: roulette-wheel slices proportional to fitness,
e.g. 0.50, 0.25, 0.17, 0.08.)
Genetic Algorithms -
Crossover and Mutation
 Problems 1, 2 (bit strings):
Crossover (one-point): the tails of two parents are exchanged
after a random cut point, e.g.
01011|011110 x 11010|100010 -> 01011|100010, 11010|011110
Mutation: a randomly chosen bit is flipped, e.g.
01011011110 -> 01011001110
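A minimal sketch of these two operators on bit lists (helper names are illustrative):

import random

def one_point_crossover(p1, p2):
    # Swap the tails of two equal-length parents after a random cut point.
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def bit_flip_mutation(chrom, p_mut=0.05):
    # Flip each bit independently with probability p_mut.
    return [b ^ 1 if random.random() < p_mut else b for b in chrom]

c1, c2 = one_point_crossover([0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0],
                             [1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0])
print(bit_flip_mutation(c1))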
Genetic Algorithms -
Crossover and Mutation
 Problem 3 (permutations):
Crossover: exchanging segments between two tours can
duplicate cities, so invalid intermediate offspring are repaired
by substituting the missing cities, e.g.
12435 x 43521 -> 13535, 42421 (invalid) -> 13542, 32451
Mutation: two positions are swapped, e.g.
43521 -> 41523
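A sketch of the repair step and the swap mutation (illustrative names; tours are lists of city labels):

import random

def repair(child, cities):
    # Replace duplicate entries with the cities missing from the tour.
    missing = [c for c in cities if c not in child]
    seen, fixed = set(), []
    for c in child:
        if c in seen:
            c = missing.pop(0)       # substitute a missing city
        seen.add(c)
        fixed.append(c)
    return fixed

def swap_mutation(tour):
    # Swap two randomly chosen positions.
    i, j = random.sample(range(len(tour)), 2)
    tour = tour[:]
    tour[i], tour[j] = tour[j], tour[i]
    return tour

print(repair([1, 3, 5, 3, 5], cities=[1, 2, 3, 4, 5]))  # -> [1, 3, 5, 2, 4]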
Genetic Algorithms -
Crossover and Mutation
 Problem 4 (integer vectors):
Crossover: segments are exchanged between two parent
vectors, as in the binary case.
Mutation: a randomly chosen gene is replaced by a random
value from the allowed range.
Genetic Algorithms -
Parameters

 Population size (30-200)
 Number of generations (100-500)
 Crossover probability (0.8-0.9)
 Mutation probability (0.01-0.1)
Genetic Algorithms -
Further Readings
 M. Mitchell, An Introduction to Genetic Algorithms,
MIT Press, 1998.
 R. L. Haupt and S. E. Haupt, Practical Genetic
Algorithms, Second Edition, John Wiley and Sons,
2004.
 D. E. Goldberg, Genetic Algorithms in Search,
Optimization and Machine Learning, Addison-Wesley,
1989.
 Z. Michalewicz, Genetic Algorithms + Data
Structures = Evolution Programs, Third Edition,
Springer-Verlag, 1998.
Particle Swarm Optimization

 History of PSO.
 The full PSO model.
 Aspects of the basic PSO.
 Performance measures.
INVENTORS
 Proposed in 1995 by Eberhart and Kennedy.
 Used for solving optimization problems of
continuous nonlinear functions.

Russell Eberhart, an electrical engineer, and
James Kennedy, a social psychologist.
EXPERIMENTS OF HEPPNER

 PSO emerged from earlier experiments on bird-flock
behavior by a biologist, Frank Heppner:
 Birds learn from their own success.
 Birds learn from the success of their neighbors.
PSO Search Scheme

 A number of particles, i.e., "birds", constitute a
swarm moving around in the search space looking
for the best solution.
 Each particle is treated as a point in an N-dimensional
space which adjusts its "flying" according to its own
flying experience as well as the flying experience of
other particles.
Particle Flying Model
 pbest: the best solution achieved so far by that
particle.
 gbest: the best solution obtained so far by any
particle in the neighborhood of that particle.

 The basic concept of PSO lies in accelerating each
particle toward its pbest and gbest locations, with a
random weighted acceleration at each time step.
Particle Flying Model
(Vector diagram: the next position s^{k+1} results from the
current position s^k, the current velocity v^k, and accelerations
toward pbest^k and gbest^k, weighted by w1 = c1·rand() and
w2 = c2·rand().)
Particle Flying Model

 Each particle modifies its position using the
following information:
 its current position,
 its current velocity,
 the distance between the current position and pbest,
 the distance between the current position and gbest.
Particle Flying Model

 Velocity update:

  v_i^{k+1} = v_i^k + Δv_i^k

  Δv_i^k = c1·rand()·(pbest_i^k − s_i^k) + c2·rand()·(gbest^k − s_i^k)

 Position update:

  s_i^{k+1} = s_i^k + v_i^{k+1}
PSO ALGORITHM

For each particle
    Initialize the particle
End

Do
    For each particle
        Calculate the fitness value
        If the fitness value is better than the best fitness value
        (pbest) in history
            Set the current value as the new pbest
    End

    Choose the particle with the best fitness value of all the
    particles as the gbest

    For each particle
        Calculate the particle velocity according to equation (*)
        Update the particle position according to equation (**)
    End
While the stopping condition is not met

(*)  v_i^{k+1} = v_i^k + c1·rand()·(pbest_i^k − s_i^k) + c2·rand()·(gbest^k − s_i^k)
(**) s_i^{k+1} = s_i^k + v_i^{k+1}
GBEST PSO AND LBEST PSO

 Gbest PSO:
 The neighborhood of each particle is the entire
swarm.

 Lbest PSO:
 The neighborhood of a particle is a set of particles
that are near to it.
INITIALISING PSO

 The position:
 Should be initialized to uniformly cover the
search space:

  x_ij(0) = x_min,j + r_ij·(x_max,j − x_min,j),
  r_ij ~ U(0, 1),  j = 1, ..., n_x,  i = 1, ..., n_s

 The velocity:
 Can be initialized to zero: v_ij(0) = 0
STOPPING CONDITIONS

 Two guiding principles:
 The stopping condition should not cause PSO to
converge prematurely.
 The stopping condition should not require an
excessive number of fitness evaluations.
STOPPING CONDITIONS
 Common ways to terminate a PSO algorithm:
 Terminate when a maximum number of iterations
has been exceeded.
 Terminate when an acceptable solution has been
found.
 Terminate when no improvement is observed over a
number of iterations.
 Terminate when the normalized swarm radius is
close to zero.
 Terminate when the slope of the objective function
is approximately zero.
BASIC PSO PARAMETERS

 Swarm size:
 typically from 10 to 30 particles.

 Neighborhood size:
 Smaller neighborhoods are less susceptible to
local optima.
 A common strategy is to start with a small
neighborhood and increase its size over time.

 Learning factors:
 c1 = c2 = 2.0
PERFORMANCE MEASURES
 Accuracy: accuracy(S, t) = | f(ŷ(t)) − f(x*) |,
where ŷ(t) is the swarm's global best position at time step t
and x* is the optimum.

 Reliability: reliability(S(t), ε) = (n / N) · 100,
the percentage of the N runs that reach accuracy ε.

 Robustness: the spread of the accuracy over independent
runs (an interval around the mean performance).

 Efficiency: the number of iterations needed to find a
solution with a specified accuracy.

 Diversity: the average distance of the particles from the
swarm centre x̄(t):

  diversity(S(t)) = (1/n_s) · Σ_{i=1..n_s} √( Σ_{j=1..n_x} (x_ij(t) − x̄_j(t))² )
PSO - Further Readings

 A. P. Engelbrecht, Fundamentals of
Computational Swarm Intelligence, John
Wiley & Sons, 2006.
 J. Kennedy, R. C. Eberhart, and Y. Shi, Swarm
Intelligence, Morgan Kaufmann, 2001.
Ant Colony Optimization
(ACO)
 ACO history.
 ACO applications.
 Implementation steps.
 ACO basic algorithms.
ACO-History
 Goss et al. (1989) and
Deneubourg et al. (1990):
experiments with Argentine
ants.
 Dorigo et al. (1991):
application to shortest-path
problems.
 Now an established method
for various optimization
problems.
ACO-Applications
Efficiently solves NP-hard problems:
 Routing
 TSP (Travelling Salesman Problem)
 Vehicle Routing
 Sequential Ordering
 Assignment
 QAP (Quadratic Assignment Problem)
 Graph Coloring
 Generalized Assignment
 Frequency Assignment
 University Course Time Scheduling
ACO-Applications
 Scheduling
 Job Shop
 Open Shop
 Flow Shop
 Total tardiness (weighted/non-weighted)
 Project Scheduling
 Group Shop
 Subset
 Multi-Knapsack
 Max Independent Set
 Redundancy Allocation
 Set Covering
 Weight-Constrained Graph Tree Partition
 Arc-Weighted l-Cardinality Tree
 Maximum Clique
ACO-Applications
 Other
 Shortest Common Sequence
 Constraint Satisfaction
 2D-HP protein folding
 Bin Packing
 Machine Learning
 Classification Rules
 Bayesian networks
 Fuzzy systems
 Network Routing
 Connection oriented network routing
 Connectionless network routing
 Optical network routing
Problem Solving by ACO

1. Represent the problem in the form of sets of
components and transitions, or by a set of
weighted graphs, on which ants can build
solutions.
2. Define the meaning of the pheromone trails.
3. Define the heuristic preference for the ants while
constructing a solution.
4. If possible, implement an efficient local search
algorithm for the problem to be solved.
5. Choose a specific ACO algorithm and apply it to
the problem being solved.
6. Tune the parameters of the ACO algorithm.
General Ant Colony
Heuristic
For a discrete optimization problem with:
 Set of nodes S = {1, ..., N}.
 Edges between nodes, with time-dependent costs.
 States = lists of nodes.
 Neighbourhood of states given by the edges.
 Constraints for feasible states and solutions.
 Costs for solutions given by the edge costs.
General Ant Colony
Heuristic

The metaheuristic interleaves three sets of activities:
 Ants generation and activity.
 Pheromone evaporation.
 Daemon activities.
General Ant Colony
Heuristic
Ants generation and activity:
 While resources are available: create an ant.
 For each ant:
1. Initialize the current state.
2. Let the ant run until a solution is found: repeatedly
compute the transition probabilities, make a decision
and move, evaluate the move, and update the current
state.
3. Possibly: update the pheromone and the routing table.
Ant System (AS)
 Used to solve the TSP.
 The transition from city i to j depends on:
1. Tabu list: the cities already visited may not be
chosen again (only unvisited cities are candidates).
2. Visibility η_ij = 1/d_ij: represents local information, the
heuristic desirability of visiting city j when in city i.
3. Pheromone trail τ_ij(t) for each edge: represents the
learned desirability of visiting city j when in city i.

 Generally, several ants search the solution space in
parallel.
Ant System (AS)
 Transition rule
 Probability of ant k going from city i to j ∈ J_i^k:

  p_ij^k(t) = [τ_ij(t)]^α · [η_ij]^β / Σ_{l ∈ J_i^k} [τ_il(t)]^α · [η_il]^β

 where J_i^k is the set of cities ant k may still visit.
 α (alpha) and β (beta) are adjustable parameters.
Ant System (AS)
 In the transition rule above:
 α = 0 represents a greedy approach (only the
visibility 1/d_ij matters).
 β = 0 represents rapid selection of tours that may
not be optimal (only the pheromone matters).
 Thus, a tradeoff between α and β is necessary.
Ant System (AS)
 Pheromone update:

  Δτ_ij^k(t) = Q / L_k(t)  if (i, j) ∈ T_k(t),  else 0

 T_k is the tour done at time t by ant k, L_k is its
length, and Q is a heuristic parameter.
 Pheromone decay:

  τ_ij(t) ← (1 − ρ)·τ_ij(t) + Δτ_ij(t),
  where Δτ_ij(t) = Σ_k Δτ_ij^k(t)
Ant Colony System (ACS)

 Modifications to AS.
 New transition rule (pseudo-random proportional):

  j = argmax_{u ∈ J_i^k} { τ_iu(t)·[η_iu]^β }  if q ≤ q0,
  j = J                                        otherwise

 q0 is a parameter that can be tweaked; tuning it is
similar to tuning the temperature in SA.
 J is a city randomly selected according to the
probability calculated previously (the AS rule).
 This helps ACS to improve on the best solutions.
Ant Colony System (ACS)

 Pheromone update rule (new):

  τ_ij(t) ← (1 − ρ)·τ_ij(t) + ρ·Δτ_ij(t)

 However, it is only applied to the edges of the best
ant's tour.
 The change in the pheromone concentration is
Δτ_ij(t) = 1/L+, where L+ is the length of the best tour.
 Local updates are done as follows:

  τ_ij(t) ← (1 − ρ)·τ_ij(t) + ρ·τ_0
ACO - Further Readings
 A. P. Engelbrecht, Fundamentals of
Computational Swarm Intelligence, John
Wiley & Sons, 2006.
 E. Bonabeau, M. Dorigo, G. Theraulaz,
Swarm Intelligence: From Natural to
Artificial Systems, Oxford University Press,
1999.
 ACO tutorials by Marco Dorigo.
