Swarm Intelligence Slides
© Alex Wild (http://www.myrmecos.net)
Topics (continued):
◮ Self-organization by:
⋆ Direct communication: visual or chemical contact
⋆ Indirect communication: Stigmergy (Grassé, 1959)
© Ralf Müller; by courtesy of www.aerospaceweb.org
© Su Neko
Result: Complex tasks/behaviors can be accomplished/exhibited in cooperation
Course on Metaheuristics and Hybrids
© C. Blum
Current state: Cemetery formation
Note: Models for cemetery formation (brood tending) are used for clustering
© National Academy of Sciences (PNAS)
Current state: distributed optimization/control and robotics
Examples:
◮ Cemetery formation (ants)
◮ Division of labour / Task allocation (ants + bees)
◮ Self-synchronization of fireflies
◮ Nest construction (termites + ants)
◮ Animal-robot interaction
◮ Flocking (birds + fish)
◮ Foraging behavior of ants

Current state: Division of Labour / Task Allocation (1)
◮ Problem: in any colony (ants, bees, etc.) there are a number of tasks to fulfill
◮ Examples: brood tending, foraging for resources, maintaining the nest
◮ Requires: dynamic allocation of individuals to tasks
◮ Depends on: state of the environment, needs of the colony
◮ Requires: global assessment of the colony's current state
However: Individuals are unable (as individuals) to make a global assessment
Solution: Response threshold models
Division of Labour / Task Allocation (2)
Assume that:
◮ We have m tasks to fulfill
◮ We have n individuals in the colony
◮ Each individual i has a response threshold δij for each task j
◮ Let sj ≥ 0 be the stimulus of task j
◮ An individual engages in task j with probability

    pij = sj² / (sj² + δij²)

This means:
◮ If sj ≪ δij: pij is close to 0
◮ If sj ≫ δij: pij is close to 1

Division of Labour / Task Allocation (3)
This means (continued):
◮ If sj = δij: pij = 0.5
◮ An individual i with a low δij is likely to respond to a lower stimulus sj

Additional feature: response thresholds are dynamic
◮ Let ∆t be a duration of time
◮ Let xij be the fraction of time spent by i on task j within ∆t
◮ Then: (1 − xij)∆t is the time spent by i on other tasks

Response threshold update:

    δij → δij − ξ xij ∆t + ρ (1 − xij) ∆t
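The threshold rule and the threshold update can be sketched in a few lines of Python; the function names and all numeric values below are illustrative assumptions, not values from the slides.

```python
def engage_prob(s, delta):
    """Response threshold rule: p_ij = s_j^2 / (s_j^2 + delta_ij^2)."""
    return s**2 / (s**2 + delta**2)

def update_threshold(delta, x, dt, xi, rho):
    """delta_ij -> delta_ij - xi*x_ij*dt + rho*(1 - x_ij)*dt:
    the threshold drops while working on task j (learning, coefficient xi)
    and rises while working on other tasks (forgetting, coefficient rho)."""
    return delta - xi * x * dt + rho * (1 - x) * dt

# s_j = delta_ij gives p = 0.5; specialists (low delta) react to weak stimuli.
assert engage_prob(2.0, 2.0) == 0.5
assert engage_prob(1.0, 10.0) < 0.05 and engage_prob(10.0, 1.0) > 0.95
```

The dynamic thresholds create the division of labour: an individual that keeps working on a task becomes ever more sensitive to its stimulus, while the others become less so.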
Division of Labour / Task Allocation (4)
where ξ is a learning coefficient (δij decreases while i works on task j) and ρ is a forgetting coefficient (δij increases while i works on other tasks)
Current state: Self-synchronization of fireflies (1)
By courtesy of www.learner.org
Current state: Self-synchronization of fireflies (2), (3)
© A. Tyrrell and G. Auer
Current state: Nest construction (1)
Termite mound / Ant hill
Current state: Nest construction (2)
Automated construction / Automated shape forming
Current state: Animal-robot interaction (1)
Robot fish
Current state: Animal-robot interaction (2)
Influencing the fish school
Reasons for controlling animal swarms:
◮ S. Gade, A. A. Paranjape, and S.-J. Chung. Herding a Flock of Birds Approaching an Airport Using an Unmanned Aerial Vehicle. AIAA Guidance, Navigation, and Control Conference, 2015.
Definition: The collective motion of a large number of self-propelled entities
Note:
◮ Commonly used as a demonstration of emergence and self-organization
◮ Modelled/simulated for the first time by Craig Reynolds (Boids, 1986)
Model: Basic rules
References:
◮ J. Kennedy and R. Eberhart. Particle swarm optimization, Proceedings of the IEEE International Conference on Neural Networks, pages 1942–1948, 1995.
◮ G. Folino, A. Forestiero and G. Spezzano. An adaptive flocking algorithm for performing approximate clustering, Information Sciences, 179(18):3059–3078, 2009.
◮ X. Cui, J. Gao, and E. Potok. A flocking based algorithm for document clustering analysis, Journal of Systems Architecture, 52:505–515, 2006.
Communication strategies:
◮ Indirect communication: via chemical pheromone trails
Basic behaviour:
Topic 1: Ant Colony Optimization
Inspiration: Foraging behavior of ant colonies
References:
◮ M. Dorigo and T. Stützle. Ant Colony Optimization, MIT Press, 2004.
◮ C. Blum. Ant colony optimization: introduction and recent trends, Physics of Life Reviews, 2(4):353–373, 2005.
◮ P. Korosec, J. Silc and B. Filipic. The differential ant-stigmergy algorithm, Information Sciences, 192:82–97, 2012.
The ant colony optimization metaheuristic
Outline (ACO part):
◮ Simulation of the foraging behaviour
◮ The ACO metaheuristic
◮ Example: traveling salesman problem (TSP)
◮ Example: assembly line balancing
◮ A closer look at algorithm components
◮ ACO for continuous optimization

Simulation of the foraging behaviour (1)
Technical simulation: two edges connect the nest (node a) with the food source (node b): e1 with length l1 = 1 and e2 with length l2 = 2
1. We introduce artificial pheromone parameters: τ1 for e1 and τ2 for e2
2. We initialize the pheromone values: τ1 = τ2 = c > 0
Simulation of the foraging behaviour (2)
Iterate:
4. Each ant leaves pheromone on its traversed edge ei:

    τi ← τi + 1/li

Simulation of the foraging behaviour (3)
[Plots: % of ants using the short path; colony size 10 ants vs. colony size 100 ants]
Observation: Optimization capability is due to co-operation
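The double-bridge simulation can be sketched compactly. The edge-choice rule pi = τi/(τ1 + τ2) is an assumption here, since the corresponding step of the loop did not survive extraction; the deposit rule τi ← τi + 1/li is from the slides.

```python
import random

def simulate(n_ants, iterations, lengths=(1.0, 2.0), c=1.0, seed=0):
    """Two edges between nest and food; each ant picks edge e_i with
    probability tau_i / (tau_1 + tau_2) and deposits 1/l_i on it."""
    rng = random.Random(seed)
    tau = [c, c]
    for _ in range(iterations):
        deposit = [0.0, 0.0]
        for _ in range(n_ants):
            i = 0 if rng.random() < tau[0] / (tau[0] + tau[1]) else 1
            deposit[i] += 1.0 / lengths[i]
        tau = [t + d for t, d in zip(tau, deposit)]
    return tau[0] / (tau[0] + tau[1])  # pheromone share of the short edge

# The short edge accumulates pheromone faster, so the colony converges on it.
```

Running this with a larger colony makes the positive-feedback effect, and hence the convergence on the short path, much more pronounced.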
Simulation of the foraging behaviour (4)
◮ Pheromone laying: while moving → after the trip
◮ Solution evaluation: implicit → explicit quality measure

Metaheuristics:
◮ Simulated Annealing (SA) [Kirkpatrick, 1983]
◮ Tabu Search (TS) [Glover, 1986]
◮ Genetic and Evolutionary Computation (EC) [Goldberg, 1989]
◮ Ant Colony Optimization (ACO) [Dorigo, 1992]
◮ Greedy Randomized Adaptive Search Procedure (GRASP) [Resende, 1995]
◮ Particle Swarm Optimization (PSO) [Kennedy, 1995]
◮ Guided Local Search (GLS) [Voudouris, 1997]
◮ Iterated Local Search (ILS) [Stützle, 1999]
◮ Variable Neighborhood Search (VNS) [Mladenović, 1999]

The ant colony optimization metaheuristic
Outline (ACO part):
◮ Simulation of the foraging behaviour
◮ The ACO metaheuristic
◮ Example: traveling salesman problem (TSP)
◮ Example: assembly line balancing
◮ A closer look at algorithm components
◮ ACO for continuous optimization
[Figures: TSP example graphs with solution components ci,j and pheromone values τi,j on the edges]
where
◮ evaporation rate ρ ∈ (0, 1]
◮ Siter is the set of solutions generated in the current iteration
◮ quality function F : S → IR+; we use F(·) = 1/f(·)
The ant colony optimization metaheuristic
At each iteration:
◮ j∗: The current work station to be filled
◮ T: The set of tasks
1. that are not yet assigned to a work station
2. whose predecessors are all assigned to work stations
3. whose time requirement is such that it fits into j∗
[Figures: example situations 1 and 2 — task precedence graphs with processing times]
The ant colony optimization metaheuristic
At each iteration: How to choose a task from T?

    p(ci,j∗) = τi,j∗ / Σk∈T τk,j∗   ∀ i ∈ T

Possible solution: The summation rule [Merkle et al., 2000]

    p(ci,j∗) = Σh=1..j∗ τi,h / (Σk∈T Σh=1..j∗ τk,h)   ∀ i ∈ T
The ant colony optimization metaheuristic
Pheromone update: For example with the iteration-best (IB) update rule

    τi,j ← (1 − ρ) · τi,j
    τi,j ← τi,j + ρ · F(sib)   ∀ ci,j ∈ sib

where
◮ evaporation rate ρ ∈ (0, 1]
◮ sib is the best solution constructed in the current iteration
◮ quality function F : S → IR+; we use F(·) = 1/f(·)

Outline (ACO part):
◮ The ACO metaheuristic
◮ Example: traveling salesman problem (TSP)
◮ Example: assembly line balancing
◮ A closer look at algorithm components
◮ ACO for continuous optimization
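The iteration-best (IB) update rule can be sketched as follows; keying the components with a dictionary is an implementation choice of this sketch, not prescribed by the slides.

```python
def ib_update(tau, s_ib, f_ib, rho=0.1):
    """Iteration-best update: tau <- (1 - rho)*tau for all components,
    then tau_c <- tau_c + rho*F(s_ib) for components c in s_ib,
    with quality F(s) = 1/f(s)."""
    new_tau = {c: (1.0 - rho) * t for c, t in tau.items()}
    for c in s_ib:
        new_tau[c] += rho * (1.0 / f_ib)
    return new_tau

tau = {("a", "b"): 0.5, ("b", "c"): 0.5}
tau = ib_update(tau, s_ib=[("a", "b")], f_ib=2.0, rho=0.1)
# ("a", "b") is reinforced (0.45 + 0.05 = 0.5); ("b", "c") decays to 0.45
```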
The ant colony optimization metaheuristic
Definition of solution components and pheromone model
[Diagram: CO problem → pheromone model → probabilistic solution construction ↔ pheromone value update, with initialization of pheromone values]
Note: Different pheromone models can be used to solve a problem!
Result: Different pheromone models ⇒ different algorithm performance
[Plot: average iteration quality for two pheromone models, (2) and (4)]

Solution construction: A closer look
◮ sp = ⟨⟩
◮ Determine N(sp)
◮ while N(sp) ≠ ∅
⋆ c ← ChooseFrom(N(sp))
⋆ sp ← extend sp by adding solution component c
⋆ Determine N(sp)
◮ end while

Problem: How to implement function ChooseFrom(N(sp))?

◮ Greedy algorithms:

    c∗ = argmax{ci,j ∈ N(sp)} η(ci,j),

where η : C → IR+ is a greedy function

Examples for greedy functions:
◮ TSP: Inverse distance between nodes (i.e., cities)
◮ SALB: ti/C
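A common probabilistic implementation of ChooseFrom combines pheromone and greedy information as τ^α · η^β; the exponents α and β and the roulette-wheel sampling below are the conventional ACO instantiation, assumed here rather than taken from the slides.

```python
import random

def choose_from(candidates, tau, eta, alpha=1.0, beta=2.0, rng=random):
    """Draw component c with probability proportional to
    tau[c]**alpha * eta[c]**beta (roulette-wheel selection)."""
    weights = [tau[c] ** alpha * eta[c] ** beta for c in candidates]
    r = rng.random() * sum(weights)
    acc = 0.0
    for c, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return c
    return candidates[-1]
```

For the TSP, eta[c] would be the inverse distance of edge c, i.e., exactly the greedy function mentioned above.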
Possibilities for implementing ChooseFrom(N(sp)):

Pheromone update: A closer look
The ant colony optimization metaheuristic
◮ MAX–MIN Ant System (MMAS) [Stützle, Hoos, 2000]
⋆ Mix of IB-update and BS-update depending on a convergence measure
◮ Ant Colony System (ACS) [Gambardella, Dorigo, 1996]
⋆ Evaporation of pheromone during the construction of solution s

Characteristic properties: Limits the pheromone values to the interval [0, 1] by using the following update:

    τi,j ← (1 − ρ) · τi,j + ρ · Σ{s ∈ Supd | ci,j ∈ s} F(s) / Σ{s′ ∈ Supd} F(s′)

In vector form:

    ~τ ← ~τ + ρ · (~m − ~τ),

where ~m is a |C|-dimensional vector with

    ~m = Σ{s ∈ Supd} γs · ~s   and   γs = F(s) / Σ{s′ ∈ Supd} F(s′)
Theoretical studies of ant colony optimization
◮ Negative bias:
1. Modelling of the problem
[Figure: hypercube with corners (0, 0, 1), (1, 0, 1), …]
Theoretical studies of ant colony optimization
Implicit assumptions in ACO:
Assumption 1:
Good solutions are composed of good solution components.
(A solution component is regarded to be good, if the average quality of the solutions that contain it is high.)
Assumption 2:
The pheromone update is such that good solution components on average are stronger reinforced than others.
Theoretical studies of ant colony optimization
Example: 2-cardinality tree problem
[Figure: path graph v1–v2–v3–v4–v5 with edges e1, e2, e3, e4 and edge weights 1, 2, 2, 1]
3 different solutions:
s1: edges e1, e2 — f(s1) = 3
s2: edges e2, e3 — f(s2) = 4
s3: edges e3, e4 — f(s3) = 3
[Plots: average iteration quality of Ant System, ρ = 0.01, for na = 10 and na = 1000 ants]
Theoretical studies of ant colony optimization
Benchmark instances: Ant System applied to an Internet-like instance
[Figures: Internet-like benchmark graph on 65 nodes with instance statistics; plot of the average iteration quality over 2000 iterations]
Theoretical studies of ant colony optimization
Theoretical studies of ant colony optimization
What do we know?
1. In case an ACO algorithm applied to a problem instance is NOT a competition-balanced system → possibility of negative search bias
2. Existing theoretical result: The Ant System algorithm applied to unconstrained problems does not suffer from negative search bias
Open questions:
1. Can it be shown that a competition-balanced system does not suffer from negative search bias?

The ant colony optimization metaheuristic
Outline (ACO part):
◮ Simulation of the foraging behaviour
◮ The ACO metaheuristic
◮ Example: traveling salesman problem (TSP)
◮ Example: assembly line balancing
◮ A closer look at algorithm components
◮ ACO for continuous optimization
Ant colony optimization for continuous optimization
Continuous optimization
Given:
1. Function f : IRn → IR
2. Constraints such as, for example, xi ∈ [li, ui]
Goal: Find

    ~X∗ = (x∗1, . . . , x∗n) ∈ IRn

such that
◮ ~X∗ fulfills all constraints
◮ f(~X∗) ≤ f(~Y), ∀ ~Y ∈ IRn

Different approaches:
◮ N. Monmarché, G. Venturini and M. Slimane. On how Pachycondyla apicalis ants suggest a new search algorithm, Future Generation Computer Systems, 16:937–946, 2000.
◮ K. Socha and M. Dorigo. Ant colony optimization for continuous domains, European Journal of Operational Research, 185(3):1155–1173, 2008.
◮ X. M. Hu, J. Zhang and Y. Li. Orthogonal methods based ant colony search for solving continuous optimization problems, Journal of Computer Science & Technology, 23:2–18, 2008.
◮ P. Korosec, J. Silc and B. Filipic. The differential ant-stigmergy algorithm, Information Sciences, 192:82–97, 2012.
◮ T. Liao et al. A unified ant colony optimization algorithm for continuous optimization, European Journal of Operational Research, 234(3):597–609, 2014.
ACO vs. ACO for continuous problems
[Diagram, ACO: CO problem → solution components → pheromone model; probabilistic solution construction, pheromone value update, initialization of pheromone values]
[Diagram, continuous case: continuous problem → population of solutions; probabilistic solution construction, population update, initialization of the population]
Main conceptual difference: Population instead of pheromone model
Continuous ACO: Probabilistic solution construction
A solution construction: Choose a value xi ∈ IR for each variable Xi, i = 1, . . . , n
A Gaussian kernel PDF:

    Gi(x) = Σj=1..k ωj · (1 / (σj √(2π))) · e^(−(x − µj)² / (2σj²))

[Plot: a Gaussian kernel PDF on [−4, 4]]
Continuous ACO: Probabilistic solution construction
Hereby: a kernel j∗ is first chosen according to the weights ωj; then a value is sampled from it.

The j∗-th Gaussian kernel

    (1 / (σj∗ √(2π))) · e^(−(x − µj∗)² / (2σj∗²))

is determined by:
1. the mean

    µj∗ = xi^j∗,

where xi^j∗ is the value of the i-th decision variable of solution j∗,
2. and the standard deviation

    σj∗ = ρ · √( Σl=1..k (xi^l − xi^j∗)² / k )
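Sampling one decision variable from the j∗-th kernel can be sketched as follows. Picking j∗ as the best solution in the population is a simplification of this sketch (the weight-based choice of j∗ is not reproduced here), and the function name is mine.

```python
import math
import random

def sample_variable(values, qualities, rho=0.85, rng=random):
    """One construction step for a single decision variable: pick solution
    j* (here simply the best in the population), centre a Gaussian on its
    value, and derive the standard deviation from the population spread."""
    j = max(range(len(values)), key=lambda l: qualities[l])
    mu = values[j]                       # mean = value of solution j*
    k = len(values)
    sigma = rho * math.sqrt(sum((x - mu) ** 2 for x in values) / k)
    return rng.gauss(mu, sigma)
```

Larger ρ widens the kernels and keeps exploration alive; smaller ρ makes the population converge faster.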
Example: f(x) = x², population size 5, 3 ants, ρ = 2.0
[Plot: Iteration 1 — function values f(x), g(x), h(x), q(x), p(x) on [−10, 10]]
Example: f(x) = x², population size 5, 3 ants, ρ = 2.0
[Plots: Iterations 2–5 — function values f(x), g(x), h(x), q(x), p(x) on [−10, 10]]
Examples:
◮ Bird flocking
◮ Fish schooling
◮ Animal herding
◮ The initial algorithm is for continuous optimization
◮ Each particle i has (resp., belongs to) a neighborhood N(i) ⊆ {1, . . . , m}
Basic division:
© X. Li
© University of Málaga
◮ The gbest PSO converges fast, but might miss good solutions
2. Recommendation from literature: Start with w = 0.9 and decrease to w = 0.4
⋆ Particles are only influenced by their current father
Discrete PSO: binary problems (1)
Changes with respect to the standard PSO:
◮ The position vectors xi are binary
◮ The position update xi := xi + vi is re-interpreted:

    if (r < S(vid)) then xid = 1, otherwise xid = 0

where S() is a sigmoidal function, mapping all vid to [0, 1]
Note: The velocity update can now be seen as changing the probability that bit xi will be 1, i = 1, . . . , n

Discrete PSO: binary problems (2)
References:
◮ L. Y. Chuang, H. W. Chang, C. J. Tu, et al. Improved binary PSO for feature selection using gene expression data. Computational Biology and Chemistry, 32:29–37, 2008.
◮ Y. Zhang et al. Binary PSO with mutation operator for feature selection using decision tree applied to spam detection, Knowledge-Based Systems, 64:22–31, 2014.
◮ J. C.-W. Lin et al. A binary PSO approach to mine high-utility itemsets, Soft Computing, 2016, in press.
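The binary position update can be sketched directly from the rule above; using the logistic function for S() is the usual choice, assumed here.

```python
import math
import random

def sigmoid(v):
    """Logistic function mapping any velocity component into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-v))

def update_position(velocity, rng=random):
    """Binary PSO position update: bit d is set to 1 with probability
    S(v_d), where v_d is the d-th velocity component."""
    return [1 if rng.random() < sigmoid(v) else 0 for v in velocity]

# Strongly positive velocities give (almost) sure 1-bits,
# strongly negative ones (almost) sure 0-bits.
```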
Note: In order to apply PSO to any optimization problem we need to define ...
◮ Position of a particle
◮ Velocity of a particle
◮ Addition of position and velocity. Result: Position
◮ Subtraction of positions. Result: Velocity
◮ Addition of velocities. Result: Velocity
◮ Multiplying a velocity with a real number. Result: Velocity

Example:
[Figure: TSP tour — position (1, 2, 4, 3, 1), shown graphically on a 4-node graph with edge weights]
Definition:
Example: x = (1, 2, 4, 3, 1), v = ((1, 4), (3, 4))
First swap: (1, 2, 4, 3, 1) → (4, 2, 1, 3, 4)
Second swap: (4, 2, 1, 3, 4) → (3, 2, 1, 4, 3)
Note:
◮ Null-velocity: empty list
◮ Opposite velocity: reversed list. Example: ((1, 4), (3, 4)) → ((3, 4), (1, 4))
TSP example: subtraction and addition
◮ Given: Two positions x1 and x2. We want x1 − x2
◮ Resulting velocity v: sequence of swaps that transforms x1 into x2
◮ Given: Two velocities v1 and v2. We want v = v1 + v2
◮ Example: v1 = ((1, 3), (2, 4)), v2 = ((1, 4), (2, 4)) → v = ((1, 3), (2, 4), (1, 4))

TSP example: multiplication with a real number
◮ Given: A velocity v and a real number r ∈ [0, 1]
◮ Result: Reduce v to the first 100·r % of the swaps
◮ This is only an example. This algorithm will not work very well.
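The swap-sequence arithmetic can be sketched in Python; the helper names are mine, and the code reproduces the swap example from the slides.

```python
def apply_velocity(x, v):
    """Apply a swap sequence to a tour: each pair (a, b) exchanges the
    cities a and b wherever they occur in the tour."""
    x = list(x)
    for a, b in v:
        x = [b if city == a else a if city == b else city for city in x]
    return tuple(x)

def multiply(v, r):
    """r * v for r in [0, 1]: keep the first 100*r % of the swaps."""
    return v[:int(round(r * len(v)))]

# Reproduces the slide example:
assert apply_velocity((1, 2, 4, 3, 1), ((1, 4),)) == (4, 2, 1, 3, 4)
assert apply_velocity((1, 2, 4, 3, 1), ((1, 4), (3, 4))) == (3, 2, 1, 4, 3)
```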
Self-synchronized activity phases in ant colonies (1)
Biologists discovered:
◮ Colonies of ants show synchronized activity patterns
◮ Synchronization is achieved in a self-organized way: self-synchronization
◮ Synchronized activity ...
1. ... provides a mechanism for information propagation
2. ... facilitates the sampling of information from other individuals
Model of self-synchronization:

Self-synchronized activity phases in ant colonies (2)
◮ Each ant is modelled as an automaton
◮ The state of an automaton i is described by a continuous state variable: Si(t) ∈ R, where t is the time step
◮ Each automaton i can move on an L×L grid with periodic boundary conditions
◮ At time step t, each automaton i is either active or inactive:

    ai(t) = Φ(Si(t) − θact), where

⋆ θact: activation threshold
⋆ Φ(x) = 1 if x ≥ 0, and Φ(x) = 0 otherwise
Self-synchronized activity phases in ant colonies (3)
1. Activity calculation:
◮ Calculate ai(t)
◮ If ai(t) = 0: Spontaneously activate i with probability pa (activity level Sa)
2. Move: Each active automaton i moves (if possible) to one of the free places in its 8-neighborhood

Self-synchronized activity phases in ant colonies (4)
Mean activity:

    A(t) = (1/N) Σi=1..N ai(t)

where N is the number of automata
[Plot: A(t) over time steps 3700–4300, showing oscillating activity phases]
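The activation function and the mean activity A(t) can be sketched directly from the definitions above; the threshold value θact = 0.5 is an illustrative assumption.

```python
def phi(x):
    """Step function: Phi(x) = 1 if x >= 0, else 0."""
    return 1 if x >= 0 else 0

def activity(S, theta_act=0.5):
    """a_i(t) = Phi(S_i(t) - theta_act): an automaton is active iff its
    state variable reaches the activation threshold."""
    return phi(S - theta_act)

def mean_activity(states, theta_act=0.5):
    """A(t) = (1/N) * sum_i a_i(t): the fraction of active automata."""
    return sum(activity(S, theta_act) for S in states) / len(states)

assert mean_activity([0.6, 0.4, 0.5, 0.1]) == 0.5  # exactly two are active
```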
Design of a duty-cycling protocol: energy harvesting
Characteristics:
◮ Sensor nodes that are equipped with solar panels
◮ Each sensor i harvests a certain amount of energy per time step
Daily sun intensity:
[Plot: sun intensity over time steps]

Design of a duty-cycling protocol: duty-cycling events
1: Calculate ai
2: if ai = 0 then
3:   Draw a random number p ∈ [0, 1]
4:   if p ≤ pa then Si := Sa and ai := 1 endif
5: end if
6: Determine transmission power level Ti
7: Compute new value for state variable Si
8: Send duty-cycling message m (containing value Si) with transmission power Ti
Choice of the transmission power level
Ideal transmission power level:

Experimental Results (1): Network simulator Shawn
Parameters: 120 static sensor nodes, no packet loss
Experimental Results (2): Packet Loss
Parameters: 120 static sensor nodes, different packet loss rates
[Plot: mean system activity vs. packet loss rate]
Note: System is very robust up to a packet loss rate of about 0.3

Experimental Results (3): Restricted Energy Harvesting
Parameters: 120 static sensor nodes, different cloud densities
[Plot: mean system activity vs. cloud density]
Note: linear relationship between cloud density and system activity
Some Papers:
◮ H. Hernández, C. Blum, M. Middendorf, K. Ramsch and A. Scheidler. Self-synchronized duty-cycling for mobile sensor networks with energy harvesting capabilities: A swarm intelligence study. Proceedings of SIS 2009, pages 153–159, IEEE Press, 2009.

Topic 4: Distributed Graph Coloring
Inspiration: Self-desynchronization of Japanese tree frogs
De-synchronization in Japanese Tree Frogs (1)
◮ Male Japanese Tree Frogs de-couple their calls
◮ WHY?
⋆ The purpose of the calls is to attract females
⋆ Female frogs cannot distinguish between calls that are too close together
⋆ Result: females cannot determine the correct direction
Mathematical model:
I. Aihara, H. Kitahata, K. Yoshikawa and K. Aihara. Mathematical modeling of frogs' calling behavior and its possible applications to artificial life and robotics. Artificial Life and Robotics, 12(1):29–32, 2008.

De-synchronization in Japanese Tree Frogs (2)
◮ A set of pulse-coupled oscillators
◮ Some oscillators are coupled, others are independent of each other
◮ Each oscillator i has a phase θi ∈ [0, 1) which changes over time
[Figure: example networks of coupled oscillators]
De-synchronization in Japanese Tree Frogs (3)
[Figures: an example topology, a suboptimal de-synchronization, and an optimal de-synchronization]

Distributed graph coloring: organization of the algorithm
Implementation:
◮ In static wireless ad-hoc networks (such as sensor networks)
◮ Algorithm works with communication rounds (length: 1 time unit)
Distributed graph coloring: graph coloring event
1: PHASE I
2: Recalculate θi
3: Choose a (new) color ci
4: Send a graph coloring event message m to the neighbors
5: …
6: PHASE II
7: Execute a kind of distributed local search

Messages:
◮ Graph coloring event messages are collected in a separate message queue Mi
◮ Each message m contains two values:
⋆ The θ-value θm of the sender node

Distributed graph coloring: re-calculating θi

    θi := θi + Σm∈Mi inc(θm − θi)

Function inc():
[Plot: inc() on [−1, 1]]
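The phase recalculation can be sketched as follows. The slides define inc() only by its plot, so the linear repulsive shape below (wrap-around of phase differences, strength α) is an illustrative assumption, as are the function names.

```python
def inc(d, alpha=0.1):
    """Hypothetical coupling function: wrap the phase difference into
    (-0.5, 0.5] and push theta_i away from the neighbour's phase."""
    if d > 0.5:
        d -= 1.0
    elif d <= -0.5:
        d += 1.0
    return -alpha * d

def recalc_theta(theta_i, messages, alpha=0.1):
    """theta_i := theta_i + sum over queued theta-values of
    inc(theta_m - theta_i), kept in [0, 1) as a phase."""
    return (theta_i + sum(inc(m - theta_i, alpha) for m in messages)) % 1.0
```

With a repulsive inc(), neighbouring nodes drift apart in phase, which is exactly the de-synchronization the frog model provides.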
Distributed graph coloring: choosing a new color ci

Experiments: quality of the coloring over time
[Plot: coloring quality over 0–80 communication rounds]
Extension: Finding Large Independent Sets
Idea: Use Existing FroSim Algorithm

Indication for Convergence to Good Solutions (1), (2)
[Histograms: average number of times each color index is used — sparse graph on 1000 nodes vs. dense graph on 1000 nodes]
Presented Topics:
◮ Ant colony optimization
◮ Particle swarm optimization
◮ Self-synchronized duty-cycling in sensor networks
◮ Distributed graph coloring in wireless ad-hoc networks

◮ Problem: Swarm intelligence has attracted too many people
◮ As a consequence:
1. Experienced researchers were overwhelmed with reviewing
2. People who should have never been asked to do so did reviewing work
◮ Therefore: nowadays we find numerous papers in the literature that are either
1. Non-sense, or
2. Re-inventing the wheel
Outlook
Questions?