
Swarm Optimization

The document discusses particle swarm optimization (PSO), which is an optimization technique inspired by animal behavior. PSO initializes a population of particles randomly in the search space and updates their positions and velocities iteratively based on their personal best positions and the global best position until an optimal solution is found.


GROUP-4

PARTICLE SWARM
OPTIMIZATION

- DRISHTI VASHISTH (2k22/MSCMAT/13)
- HARSHITA GUPTA (2k22/MSCMAT/16)
- ISHIKA VERMA (2k22/MSCMAT/20)
- PREETI (2k22/MSCMAT/31)
- SNEHAL (2k22/MSCMAT/43)
- ANUJA (2k22/MSCMAT/55)
- ARYA (2k22/MSCMAT/56)
INTRODUCTION
PARTICLE SWARM OPTIMIZATION:

PSO is an iterative optimization technique inspired by the behavior of social animals such as birds or fish. It was introduced by Eberhart and Kennedy (1995) in "A new optimizer using particle swarm theory". It involves a group of particles, or agents, that move through a search space and try to find the optimal solution to a given problem. Each particle is guided by its own experience and the experience of the other particles in the group, and the movement of the particles is determined by a set of rules based on the particle's current position, its velocity, and the best position it has encountered so far.

PSO simulates the behavior of a group of individuals, called particles, as they search for the optimal solution within a multidimensional search space. Each particle represents a potential solution to the optimization problem and moves through the search space by adjusting its position and velocity over successive iterations.
• PSO was developed using two methodologies:

>Artificial life: mimicking bird flocking, fish schooling, and swarming theory

>Evolutionary computation

• PSO is computationally inexpensive in both memory and speed, and can be easily implemented in a computer program.
HOW PSO WORKS

PSO starts by initializing the population randomly, similar to GA.

Unlike GA, which applies genetic operators, PSO assigns each solution a random velocity to explore the search space.

Each solution in PSO is referred to as a particle.

In GA (genetic algorithm) a solution is often also called a point, so we sometimes say points and sometimes solutions; in PSO, to distinguish it from other algorithms, the point or solution is referred to as a particle.
THREE DISTINCT FEATURES OF PSO

1. Best fitness of each particle, pbest(i): the best solution achieved so far by particle i. When comparing the fitness value of a particle with its previous positions, we keep the column vector of decision variables corresponding to the position that gives the best fitness.

2. Best fitness of the swarm, gbest: the best solution (fitness) achieved so far by any particle in the swarm. Again, to find gbest we have to look at the fitness of the solutions.

3. Velocity and position update of each particle: for exploring and exploiting the search space to locate the optimal solution.
POSITION UPDATE

The position update behaves like a variation operator that moves particle (i) from its current position to its new position. The position of particle (i) is adjusted as

x_i^(t+1) = x_i^(t) + v_i^(t+1)

where x_i^(t+1) is the new position, which depends on the current position x_i^(t) and the velocity v_i^(t+1).
VELOCITY COMPONENTS

VELOCITY EQUATION:
Velocity of the particle (i) is updated as follows:

v_i^(t+1) = w·v_i^(t) + c1·r1·(p_(i,lb)^(t) - x_i^(t)) + c2·r2·(p_gb^(t) - x_i^(t))

The parameters used in the above equation are:

• i is the i-th particle.
• t is the generation counter.
• v_i^(0) is the initial velocity, which we set randomly.
• w is the inertia weight of the particle.
• c1 and c2 are the acceleration coefficients.
• r1 and r2 are random numbers in [0, 1].
• p_(i,lb)^(t) is the local best of the i-th particle.
• p_gb^(t) is the global best calculated with respect to the swarm.
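The velocity equation above translates directly into code. A minimal sketch (the function and variable names are illustrative, not from the slides):

```python
# Minimal sketch of the velocity update above; w, c1, c2, r1, r2 are scalars,
# and the position/velocity vectors are plain Python lists.
def update_velocity(v, x, p_lb, p_gb, w, c1, c2, r1, r2):
    # v_i^(t+1) = w*v + c1*r1*(p_lb - x) + c2*r2*(p_gb - x), per dimension
    return [w * vj + c1 * r1 * (pj - xj) + c2 * r2 * (gj - xj)
            for vj, xj, pj, gj in zip(v, x, p_lb, p_gb)]
```

In a full PSO run, r1 and r2 would be drawn fresh from U[0, 1] at every update.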
VELOCITY COMPONENTS
v_i^(t+1) = w·v_i^(t) + c1·r1·(p_(i,lb)^(t) - x_i^(t)) + c2·r2·(p_gb^(t) - x_i^(t))

• Momentum part, w·v_i^(t): The first component is called the momentum part because it involves the velocity: the weight w multiplied by the velocity of the particle at iteration t is called the inertia component. This part carries a memory of the previous flight direction; its purpose is to prevent the particle from changing direction drastically.

• Cognitive part, c1·r1·(p_(i,lb)^(t) - x_i^(t)): This component is referred to as the cognitive component and involves the local best of each particle. It quantifies the performance of the particle relative to its past performance; it is the memory of the previous best position. The difference term is the subtraction of two vectors, i.e., it represents how far the current position of particle i is from its local best position. This component is also referred to as nostalgia.
v_i^(t+1) = w·v_i^(t) + c1·r1·(p_(i,lb)^(t) - x_i^(t)) + c2·r2·(p_gb^(t) - x_i^(t))

• Social part, c2·r2·(p_gb^(t) - x_i^(t)): This part involves the global best. It is called the social part because, following swarm theory, each particle not only saves its own best position so far but also changes its movement pattern based on the best fitness of any particle in the swarm. It quantifies the performance of the particle relative to its neighbors. This component is also referred to as envy.
GRAPHICAL REPRESENTATION OF VELOCITY COMPONENTS

Momentum part: w·v_i^(t)
Cognitive part: c1·r1·(p_(i,lb)^(t) - x_i^(t))
Social part: c2·r2·(p_gb^(t) - x_i^(t))

Starting from the momentum part, we add the cognitive part (which is in the direction parallel to the local best) and the social part by vector addition to get the new position x_i^(t+1).
LOCAL AND GLOBAL BEST POSITIONS

In Particle Swarm Optimization (PSO), each particle in the swarm maintains two positions:

Local best position (pbest): This refers to the best position that a particular particle has achieved so far in its
own search history. It represents the best solution found by that particle individually.

Global best position (gbest): This refers to the best position found by any particle in the entire swarm. It
represents the best solution found by any particle in the entire population.

During each iteration of the PSO algorithm, particles adjust their positions and velocities based on their own
experience (pbest) and the experience of the entire swarm (gbest). By continuously updating these positions,
particles aim to converge towards better solutions within the search space. The gbest position guides the entire
swarm towards promising regions of the search space, while each particle's pbest allows it to exploit local improvements.
LOCAL AND GLOBAL BEST POSITIONS

▪ p_(i,lb)^(t) is the personal best position of the i-th particle at generation t. Assume a minimization problem. Then

  p_(i,lb)^(t+1) = x_i^(t+1),     if f(x_i^(t+1)) < f(p_(i,lb)^(t))
  p_(i,lb)^(t+1) = p_(i,lb)^(t),  otherwise

▪ p_gb^(t) is the global best position at generation t, which is calculated as

  p_gb^(t) ∈ {p_(1,lb)^(t), ..., p_(N,lb)^(t)} | f(p_gb^(t)) = min{f(p_(1,lb)^(t)), ..., f(p_(N,lb)^(t))}

or,

  p_gb^(t) ∈ {x_1^(t), ..., x_N^(t)} | f(p_gb^(t)) = min{f(x_1^(t)), ..., f(x_N^(t))}

where N is the number of particles in the swarm.
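The bookkeeping above can be sketched in a few lines of code (minimization; the function and variable names are our own, not from the slides):

```python
# Sketch of the local/global best update rules above for minimization.
# f is the objective; xs and pbests are lists of position tuples.
def update_bests(f, xs, pbests):
    new_pbests = [x if f(x) < f(p) else p  # keep the better of x_i^(t+1) and p_(i,lb)^(t)
                  for x, p in zip(xs, pbests)]
    gbest = min(new_pbests, key=f)         # best personal best across the swarm
    return new_pbests, gbest
```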
FLOWCHART OF PSO

[Flowchart] P(t): random initial population → evaluate the population and assign fitness → while t <= T: update p_(i,lb)^(t) of each particle (i) and find p_gb^(t); then for each particle i = 1, ..., N: update velocity (v_i^(t+1)), update position (x_i^(t+1)), evaluate x_i^(t+1) and assign fitness; set t := t + 1 and repeat. When t > T, terminate.
WORKING PRINCIPLE THROUGH AN EXAMPLE

ROSENBROCK FUNCTION:
Minimize f(x1, x2) = 100(x2 - x1^2)^2 + (1 - x1)^2,
Bounds: -5 <= x1 <= 5 and -5 <= x2 <= 5

The optimal solution is x* = (1, 1)^T with f(x*) = 0.
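The function and its stated optimum are easy to check in code:

```python
# The Rosenbrock function used in the worked example.
def rosenbrock(x1, x2):
    return 100.0 * (x2 - x1 ** 2) ** 2 + (1.0 - x1) ** 2

print(rosenbrock(1.0, 1.0))  # 0.0 at the optimum x* = (1, 1)
```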
GENERALISED FRAMEWORK OF EC TECHNIQUES

1. Solution representation
2. Input: t := 1 (the generation counter), maximum allowed generations = T
3. Initialize random swarm (P(t));
4. Evaluate (P(t));
5. while t <= T do
6.   Update p_(i,lb)^(t) of each particle (i) and find p_gb^(t);
7.   for (i = 1; i <= N; i++) do
8.     Update velocity (v_i^(t+1));
9.     Update position (x_i^(t+1));
10.    Evaluate (x_i^(t+1)) and include it in P(t+1);
11.  end for
12.  t := t + 1;
13. end while
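Putting the framework together, here is a runnable sketch on the Rosenbrock function. The parameter values (w = 0.75, c1 = 1.5, c2 = 2.0, N = 8) follow the worked example below; the helper names, the random seed, and the choice to clip positions onto the bounds are our own assumptions:

```python
import random

def rosenbrock(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def pso(f, lo=-5.0, hi=5.0, dim=2, N=8, T=500, w=0.75, c1=1.5, c2=2.0, seed=1):
    rng = random.Random(seed)
    # Steps 3-4: random initial swarm with zero velocities, then evaluate
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(N)]
    vs = [[0.0] * dim for _ in range(N)]
    pbests = [x[:] for x in xs]
    gbest = min(pbests, key=f)
    for _ in range(T):                      # step 5: generation loop
        for i in range(N):                  # step 7: loop over particles
            r1, r2 = rng.random(), rng.random()
            for j in range(dim):
                # step 8: velocity update (momentum + cognitive + social)
                vs[i][j] = (w * vs[i][j]
                            + c1 * r1 * (pbests[i][j] - xs[i][j])
                            + c2 * r2 * (gbest[j] - xs[i][j]))
                # step 9: position update, kept on the bounds as in the example
                xs[i][j] = min(max(xs[i][j] + vs[i][j], lo), hi)
            # step 6 (merged into the sweep): update the personal best of i
            if f(xs[i]) < f(pbests[i]):
                pbests[i] = xs[i][:]
        gbest = min(pbests, key=f)          # global best of the swarm
    return gbest, f(gbest)

best, fbest = pso(rosenbrock)
```

Because personal bests are only ever replaced by better positions, f(gbest) is non-increasing over generations.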
INITIAL SWARM

Let the population size be N = 8 and t = 1.

Index (i) | x_i^(1)            | v_i^(1)
1         | (2.212, 3.009)^T   | (0.0, 0.0)^T
2         | (-2.289, -2.396)^T | (0.0, 0.0)^T
3         | (-2.393, -4.790)^T | (0.0, 0.0)^T
4         | (-0.639, 1.692)^T  | (0.0, 0.0)^T
5         | (-3.168, 0.706)^T  | (0.0, 0.0)^T
6         | (0.215, -2.350)^T  | (0.0, 0.0)^T
7         | (-0.742, 1.934)^T  | (0.0, 0.0)^T
8         | (-4.563, 4.791)^T  | (0.0, 0.0)^T
EVALUATE POPULATION

Here, we calculate the objective function f(x1, x2) = 100(x2 - x1^2)^2 + (1 - x1)^2 for each solution.

Index (i) | x_i^(1)            | f(x_i^(1))
1         | (2.212, 3.009)^T   | 357.154
2         | (-2.289, -2.396)^T | 5843.569
3         | (-2.393, -4.790)^T | 11066.800
4         | (-0.639, 1.692)^T  | 167.414
5         | (-3.168, 0.706)^T  | 8718.166
6         | (0.215, -2.350)^T  | 574.796
7         | (-0.742, 1.934)^T  | 194.618
8         | (-4.563, 4.791)^T  | 25731.235
LOCAL BEST OF EACH PARTICLE

This is the first generation, so the local best of each particle is itself.

Index (i) | x_i^(1)            | f(x_i^(1)) | p_(i,lb)^(1)
1         | (2.212, 3.009)^T   | 357.154    | (2.212, 3.009)^T
2         | (-2.289, -2.396)^T | 5843.569   | (-2.289, -2.396)^T
3         | (-2.393, -4.790)^T | 11066.800  | (-2.393, -4.790)^T
4         | (-0.639, 1.692)^T  | 167.414    | (-0.639, 1.692)^T
5         | (-3.168, 0.706)^T  | 8718.166   | (-3.168, 0.706)^T
6         | (0.215, -2.350)^T  | 574.796    | (0.215, -2.350)^T
7         | (-0.742, 1.934)^T  | 194.618    | (-0.742, 1.934)^T
8         | (-4.563, 4.791)^T  | 25731.235  | (-4.563, 4.791)^T
GLOBAL BEST OF SWARM

The global best of the swarm is

p_gb^(t) ∈ {x_1^(t), ..., x_N^(t)} | f(p_gb^(t)) = min{f(x_1^(t)), ..., f(x_N^(t))}

Index (i) | x_i^(1)            | f(x_i^(1)) | p_(i,lb)^(1)       | p_gb^(1)
1         | (2.212, 3.009)^T   | 357.154    | (2.212, 3.009)^T   | (-0.639, 1.692)^T
2         | (-2.289, -2.396)^T | 5843.569   | (-2.289, -2.396)^T | (-0.639, 1.692)^T
3         | (-2.393, -4.790)^T | 11066.800  | (-2.393, -4.790)^T | (-0.639, 1.692)^T
4         | (-0.639, 1.692)^T  | 167.414    | (-0.639, 1.692)^T  | (-0.639, 1.692)^T
5         | (-3.168, 0.706)^T  | 8718.166   | (-3.168, 0.706)^T  | (-0.639, 1.692)^T
6         | (0.215, -2.350)^T  | 574.796    | (0.215, -2.350)^T  | (-0.639, 1.692)^T
7         | (-0.742, 1.934)^T  | 194.618    | (-0.742, 1.934)^T  | (-0.639, 1.692)^T
8         | (-4.563, 4.791)^T  | 25731.235  | (-4.563, 4.791)^T  | (-0.639, 1.692)^T
VELOCITY UPDATE

The velocity of each particle is updated using

v_i^(t+1) = w·v_i^(t) + c1·r1·(p_(i,lb)^(t) - x_i^(t)) + c2·r2·(p_gb^(t) - x_i^(t))

Assume w = 0.75, c1 = 1.5, c2 = 2.0.

Let the random numbers for each particle be

Particle | r1    | r2
1        | 0.661 | 0.312
2        | 0.919 | 0.271
3        | 0.782 | 0.824
4        | 0.299 | 0.055
5        | 0.874 | 0.595
6        | 0.133 | 0.582
7        | 0.031 | 0.736
8        | 0.366 | 0.954
For particle 1, we know

x_1^(1) = (2.212, 3.009)^T, v_1^(1) = (0.0, 0.0)^T
p_(1,lb)^(1) = (2.212, 3.009)^T, p_gb^(1) = (-0.639, 1.692)^T, r1 = 0.661 and r2 = 0.312

v_(1,1)^(2) = 0.75 × 0 + 1.5 × 0.661 × (2.212 - 2.212) + 2.0 × 0.312 × (-0.639 - 2.212) = -1.779
v_(1,2)^(2) = 0.75 × 0 + 1.5 × 0.661 × (3.009 - 3.009) + 2.0 × 0.312 × (1.692 - 3.009) = -0.822

For particle 2, we know

x_2^(1) = (-2.289, -2.396)^T, v_2^(1) = (0.0, 0.0)^T
p_(2,lb)^(1) = (-2.289, -2.396)^T, p_gb^(1) = (-0.639, 1.692)^T, r1 = 0.919 and r2 = 0.271

v_(2,1)^(2) = 0.75 × 0 + 1.5 × 0.919 × (-2.289 - (-2.289)) + 2.0 × 0.271 × (-0.639 - (-2.289)) = 0.893
v_(2,2)^(2) = 0.75 × 0 + 1.5 × 0.919 × (-2.396 - (-2.396)) + 2.0 × 0.271 × (1.692 - (-2.396)) = 2.212
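The hand calculation for particle 1 can be checked numerically, using the values above:

```python
# Re-computing v_1^(2) for particle 1 with the values from the slides.
w, c1, c2 = 0.75, 1.5, 2.0
r1, r2 = 0.661, 0.312
x = (2.212, 3.009)          # x_1^(1)
v = (0.0, 0.0)              # v_1^(1)
p_lb = (2.212, 3.009)       # p_(1,lb)^(1)
p_gb = (-0.639, 1.692)      # p_gb^(1)

v_new = tuple(w * vj + c1 * r1 * (pj - xj) + c2 * r2 * (gj - xj)
              for vj, xj, pj, gj in zip(v, x, p_lb, p_gb))
print(tuple(round(c, 3) for c in v_new))  # (-1.779, -0.822)
```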
Updated velocity of each particle

Particle | v_(i,1)^(2) | v_(i,2)^(2)
1        | -1.779      | -0.822
2        | 0.893       | 2.212
3        | 2.890       | 10.683
4        | 0.000       | 0.000
5        | 3.010       | 1.174
6        | -0.994      | 4.704
7        | 0.151       | -0.357
8        | 7.491       | -5.916
POSITION UPDATE

The position of each particle is updated as: x_i^(t+1) = x_i^(t) + v_i^(t+1)

Particle | x_i^(1)            | v_i^(2)            | x_i^(2)            | x_i^(2) (after bound handling)
1        | (2.212, 3.009)^T   | (-1.779, -0.822)^T | (0.433, 2.187)^T   | (0.433, 2.187)^T
2        | (-2.289, -2.396)^T | (0.893, 2.212)^T   | (-1.396, -0.184)^T | (-1.396, -0.184)^T
3        | (-2.393, -4.790)^T | (2.890, 10.683)^T  | (0.498, 5.893)^T   | (0.498, 5.000)^T
4        | (-0.639, 1.692)^T  | (0.000, 0.000)^T   | (-0.639, 1.692)^T  | (-0.639, 1.692)^T
5        | (-3.168, 0.706)^T  | (3.010, 1.174)^T   | (-0.157, 1.879)^T  | (-0.157, 1.879)^T
6        | (0.215, -2.350)^T  | (-0.994, 4.704)^T  | (-0.779, 2.354)^T  | (-0.779, 2.354)^T
7        | (-0.742, 1.934)^T  | (0.151, -0.357)^T  | (-0.590, 1.577)^T  | (-0.590, 1.577)^T
8        | (-4.563, 4.791)^T  | (7.491, -5.916)^T  | (2.928, -1.125)^T  | (2.928, -1.125)^T

The limit on x2 is [-5, 5]. Therefore, we keep x2 of particle 3 on the bound.
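The bound handling described above (setting an out-of-range component back onto the bound) can be written as a small helper; a sketch with an illustrative name:

```python
# Keep each component of a position inside [lo, hi], as done for particle 3.
def clip(x, lo=-5.0, hi=5.0):
    return tuple(min(max(c, lo), hi) for c in x)

print(clip((0.498, 5.893)))  # (0.498, 5.0): x2 is set back onto the bound
```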


EVALUATE SWARM

Fitness of particles after the position update

Index (i) | x_i^(2)            | f(x_i^(2))
1         | (0.433, 2.187)^T   | 399.984
2         | (-1.396, -0.184)^T | 460.648
3         | (0.498, 5.000)^T   | 2258.514
4         | (-0.639, 1.692)^T  | 167.414
5         | (-0.157, 1.879)^T  | 345.375
6         | (-0.779, 2.354)^T  | 308.580
7         | (-0.590, 1.577)^T  | 153.484
8         | (2.928, -1.125)^T  | 9406.994

Increase the generation counter by 1, i.e., 𝑡 = 𝑡 + 1 = 2


VELOCITY UPDATE: 2nd GENERATION

The velocity of each particle is updated using

v_i^(t+1) = w·v_i^(t) + c1·r1·(p_(i,lb)^(t) - x_i^(t)) + c2·r2·(p_gb^(t) - x_i^(t))

Assume w = 0.75, c1 = 1.5, c2 = 2.0.

Let the random numbers for each particle be

Particle | r1    | r2
1        | 0.127 | 0.531
2        | 0.653 | 0.225
3        | 0.533 | 0.472
4        | 0.739 | 0.048
5        | 0.309 | 0.837
6        | 0.148 | 0.057
7        | 0.110 | 0.308
8        | 0.343 | 0.320
For particle 1, we know

x_1^(1) = (2.212, 3.009)^T,
v_1^(2) = (-1.779, -0.822)^T,
r1 = 0.127 and r2 = 0.531

For the local best of particle 1:
f(x_1^(1)) < f(x_1^(2))
357.154 < 399.984
Therefore, p_(1,lb)^(2) = (2.212, 3.009)^T.

For the global best of the swarm, we need to find the local best of each particle:

Particle | f(x_i^(1)) | f(x_i^(2)) | p_(i,lb)^(2)
1        | 357.154    | 399.984    | (2.212, 3.009)^T
2        | 5843.569   | 460.648    | (-1.396, -0.184)^T
3        | 11066.800  | 2258.514   | (0.498, 5.000)^T
4        | 167.414    | 167.414    | (-0.639, 1.692)^T
5        | 8718.166   | 345.375    | (-0.157, 1.879)^T
6        | 574.796    | 308.580    | (-0.779, 2.354)^T
7        | 194.618    | 153.484    | (-0.590, 1.577)^T
8        | 25731.235  | 9406.994   | (2.928, -1.125)^T

The global best particle of the swarm is particle 7, i.e., p_gb^(2) = (-0.590, 1.577)^T.
Updated velocity of each particle

Particle | v_(i,1)^(3) | v_(i,2)^(3)
1        | -2.083      | -1.107
2        | 1.033       | 2.452
3        | 1.140       | 3.934
4        | 0.005       | -0.011
5        | 1.533       | 0.374
6        | -0.724      | 3.439
7        | 0.114       | -0.268
8        | 3.364       | -2.705
POSITION UPDATE: 2nd GENERATION

Particle | x_i^(2)            | v_i^(3)            | x_i^(3)           | x_i^(3) (after bound handling)
1        | (0.433, 2.187)^T   | (-2.083, -1.107)^T | (-1.649, 1.079)^T | (-1.649, 1.079)^T
2        | (-1.396, -0.184)^T | (1.033, 2.452)^T   | (-0.363, 2.268)^T | (-0.363, 2.268)^T
3        | (0.498, 5.000)^T   | (1.140, 3.934)^T   | (1.638, 9.827)^T  | (1.638, 5.000)^T
4        | (-0.639, 1.692)^T  | (0.005, -0.011)^T  | (-0.634, 1.681)^T | (-0.634, 1.681)^T
5        | (-0.157, 1.879)^T  | (1.533, 0.374)^T   | (1.376, 2.254)^T  | (1.376, 2.254)^T
6        | (-0.779, 2.354)^T  | (-0.724, 3.439)^T  | (-1.503, 5.793)^T | (-1.503, 5.000)^T
7        | (-0.590, 1.577)^T  | (0.114, -0.268)^T  | (-0.477, 1.309)^T | (-0.477, 1.309)^T
8        | (2.928, -1.125)^T  | (3.364, -2.705)^T  | (6.292, -3.830)^T | (5.000, -3.830)^T
EVALUATE SWARM: 2nd GENERATION

Fitness of particles after the 2nd-generation position update

Index (i) | x_i^(3)           | f(x_i^(3))
1         | (-1.649, 1.079)^T | 276.367
2         | (-0.363, 2.268)^T | 458.327
3         | (1.638, 5.000)^T  | 537.876
4         | (-0.634, 1.681)^T | 166.098
5         | (1.376, 2.254)^T  | 13.222
6         | (-1.503, 5.000)^T | 575.777
7         | (-0.477, 1.309)^T | 119.231
8         | (5.000, -3.830)^T | 83134.582
GRAPHICAL EXAMPLE:

[Figures: plots on the x1-x2 plane showing the positions of the swarm over successive generations; not reproduced here.]
CONVERGENCE ANALYSIS IN PSO

1. Velocity and Position Updates: Analyzing how the velocity and position of particles change over iterations is fundamental to convergence analysis. Understanding how particles move towards better solutions or explore the search space is key to assessing convergence behavior.

2. Global and Local Best Solutions: Monitoring the evolution of the global best solution (gbest) and the local best solutions (lbest) is essential. Convergence occurs when these solutions approach the global optimum, indicating that the swarm has collectively found a promising solution.

3. Convergence Criteria: Defining termination criteria is crucial for convergence analysis. These criteria could be based on reaching a maximum number of iterations, achieving a specific target fitness value, or observing negligible improvements in the best solution over successive iterations.
4. Convergence Rate: Analyzing the rate at which the algorithm converges towards the optimal solution provides insights into its efficiency. A faster convergence rate implies quicker attainment of good solutions, while a slower rate may indicate issues such as premature convergence or stagnation.

5. Diversity Maintenance: Assessing how well PSO maintains diversity within the swarm is vital for convergence analysis. Maintaining diversity helps prevent premature convergence by ensuring that the swarm explores different regions of the search space.

6. Stability Analysis: Investigating the stability of the convergence process involves analyzing how sensitive the algorithm is to changes in parameters, problem characteristics, or initial conditions. Stable convergence indicates robustness in finding solutions across different scenarios.

7. Convergence Bounds: Establishing theoretical bounds on the convergence behavior of PSO provides insights into its performance guarantees under certain assumptions. This involves proving convergence properties under conditions such as continuity of the objective function and boundedness of the search space.

8. Empirical Analysis: Conducting empirical experiments on benchmark problems or real-world applications helps validate convergence properties observed in theoretical analyses. Analyzing convergence curves, convergence rates, and convergence variability across multiple runs provides empirical evidence of PSO's convergence behavior.
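The termination criteria from point 3 (iteration cap, target fitness, stagnation) can be combined into a single check. A sketch with illustrative names and thresholds:

```python
# history: best-so-far fitness per generation; t: current generation; T: cap.
def should_terminate(history, t, T, target=None, tol=1e-8, patience=20):
    if t >= T:                                        # iteration budget spent
        return True
    if target is not None and history and history[-1] <= target:
        return True                                   # target fitness reached
    if len(history) > patience and history[-patience - 1] - history[-1] < tol:
        return True                                   # negligible improvement
    return False
```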
o Introduction of PSO
o Distinct features of PSO
  • Global best, local best, velocity and position updates
o Velocity update
  • Velocity components: momentum, cognitive and social parts
  • Graphical illustration
o Position update
o Flowchart of PSO
o PSO on the generalized framework
o Working principle of PSO through the Rosenbrock function
o Graphical example
CLOSURE
THANK YOU
