8.1 Introduction
In the realm of optimization, many algorithms display significant promise; however, they require greater efficiency and robustness to solve complex problems effectively and obtain better results. The LPB algorithm shows such potential, yet there is still room to improve its exploitation and exploration and, with them, its overall performance. Metaheuristics are sophisticated approaches created to address a wide variety of optimization problems. These algorithms have demonstrated efficacy in managing intricate and otherwise intractable scenarios, since metaheuristic algorithms are capable of making intelligent choices among a wide range of potential solutions [1]. Natural systems are the source of inspiration for some popular metaheuristic algorithms, such as the GA and PSO. The LPB algorithm was recently introduced by Rahman and Rashid (2021) [2] and is intended to handle both single- and multi-objective situations. Rahman and Rashid's study illustrates how to apply LPB in a methodical manner to obtain optimal results, and researchers interested in creating, improving, or hybridizing the method can use their work as a foundation.
The primary contribution of this chapter is an improvement of the LPB algorithm that addresses its limitations. To do this, we introduce LPBSA, which integrates the strengths of LPB with the powerful techniques of SA to enhance its performance. By striking a balance between exploring new possibilities and exploiting known ones, LPBSA helps the algorithm converge more efficiently to the optimal solution. The proposed algorithm is an improvement on LPB that is compared against well-known algorithms such as GAs and PSO. By applying crossover and mutation operations, splitting the main population into Good and Bad populations, and using SA with a MAC, the LPBSA technique selects only the best candidates for the subsequent iteration. By improving the population, this technique raises the algorithm's efficiency and improves its performance.
Among these algorithms, LPB has obtained impressive results by drawing inspiration from the academic progress of students, especially students who are moving from high school to university. As new students entering college, they have chosen a new field in which they may find it difficult to improve their performance, so they need to share information with other students in order to improve their behavior. To this end, the LPB algorithm uses its methods and techniques to improve the performance of students. Nevertheless, LPB still has room to improve and to address its limitations. One of the main objectives of developing the LPB algorithm is to enhance students' behavior with the goal of improving their overall experience across academic disciplines.
Metaheuristic algorithms iteratively refine their search processes, leading to more efficient solutions. To strengthen LPB, simulated annealing was chosen because it is a popular metaheuristic algorithm inspired by the metallurgical annealing process, in which materials are gradually cooled to obtain an optimal crystalline structure. Similarly, SA helps the algorithm escape local optima, making it extremely effective in improving the exploration phase of the optimization process. SA also increases the possibility of discovering the global optimum because it accepts occasional worse solutions during the search.
Optimization algorithms play an important role in solving complex problems across a wide range of fields. Their application in education, in particular, has garnered pivotal attention, as these algorithms provide potential solutions for improving learning outcomes, supporting decision-making, and personalizing learning experiences. Among these optimization approaches, the GA stands out due to its capability to search large solution spaces effectively, inspired by the principles of natural selection. GA has proven to be essential in enhancing several tasks, from resource allocation to curriculum design, highlighting its versatility and importance in educational contexts. This algorithm has also been applied to resource allocation and scheduling optimization in educational settings. For instance, GAs have been used to build adaptive learning systems, which tailor educational content to the needs of a specific learner [3]. PSO is another optimization algorithm, inspired by both "fish schooling" and the "social behavior of flocking birds", and it has been utilized for optimization tasks in numerous fields, including education. It is also particularly useful
in e-learning systems, especially for significant facilities related to these systems, including improving collaborative learning areas and developing intelligent tutoring systems. One of the aspects that makes PSO suitable for "real-time educational applications" is its ability to converge quickly to optimal solutions [4]. Moreover, [5] presents differential evolution as another simple and effective optimization algorithm that has been utilized to solve many problems in the education field, particularly for relevant tasks such as managing resources, scheduling exams, and designing curricula. Simplicity and robustness are the two essential aspects that make this algorithm a popular choice for educational tasks.
Approaches such as "data-driven analytics", "machine learning models", and "educational data mining" aim to identify trends in student data in order to improve student involvement. [6] states that one of the key factors that improves a student's achievement and engagement is personalized learning, which addresses student learning styles, preferences, and wants. In addition, by using real-time data, "adaptive learning systems" can significantly alter the content and learning environment to meet each learner's needs. Moreover, optimization algorithms are employed by these systems to modify feedback, assessment, and instructional resources dynamically. Furthermore, systems built on these algorithms can lead to greater student fulfillment and improved learning outcomes [7]. Next, the focus turns to the gaps that motivate this study.
We highlight some significant gaps that this study focuses on, especially regarding personalized learning and optimization algorithms. One crucial gap that has not been addressed properly is the challenge of admitting students into university programs for which they may lack foundational knowledge. This problem highlights the need for more effective algorithms that assist students in improving their skills (performance) by targeting their specific learning needs. By refining these algorithms, better support will be available for students in overcoming knowledge gaps and enhancing their academic progress. One of the most important open issues for previous algorithms is scalability: many optimization algorithms struggle when applied to large educational datasets. Another gap that should be in focus is the need, with existing algorithms, to provide actionable and interpretable insights for educators. A further gap is that, so far, there has been little focus on integrating optimization algorithms into real-world educational systems and practices. LPBSA, as a novel contribution, provides a scalable, interpretable, and integrative approach that addresses these gaps to optimize learner performance. LPBSA is inspired by the process of admitting students to colleges or universities, where they are encouraged to adapt and enhance their study skills and behaviors. This process aligns with the essential principles of personalized and adaptive learning, which focus on tailoring education to meet student needs. By incorporating SA, which strengthens the LPB results, LPBSA becomes more reliable and, more importantly, it is designed to be adaptable and responsive for further use, making it valuable for optimizing processes and improving outcomes in fields such as healthcare and education.
8.3. LPB
The phases involved in learner acceptance are mirrored in the LPB algorithm, which draws inspiration from the university admissions process for recent high school graduates. These procedures include grouping students according to their cumulative averages and subsequently enhancing their behavior and performance after admission. These techniques also help to improve individuals' conduct and performance once they are admitted to a particular department. According to [8] [9], learners need to adopt new study habits as they transition from high school to college. To model this, the algorithm first chooses a subset of the population. These individuals are then separated into smaller groups, from which the most qualified applicants are selected in accordance with their qualifications. Then, through cooperative efforts that resemble teamwork and facilitate information exchange during study sessions, their conduct and performance are further improved (referred to as crossover). Furthermore, this approach introduces mutation, or unpredictable impacts, on their behavior. LPB incorporates mutation and crossover strategies similar to those used in GA.
The next paragraph describes how the LPB algorithm works. Initialization is the first step in the process, in which a group of people (learners) is separated into two subgroups based on their fitness. Subsequently, the work targets enhancing behavior by promoting teamwork and information transfer within the subgroups. This approach, along with the development of effective study habits, is aimed at improving individual performance and learning outcomes. Next, the best individuals are selected from the subgroups and combined through crossover to create new solutions; this is similar to exchanging information between parents to produce offspring. When crossover is complete, the "mutation" operation starts, which introduces random changes to some individuals to explore new potential solutions, and the fitness of the new individuals is then assessed to determine their performance. The process is repeated to select the next generation until the termination condition is met. Fig 8. 1 shows the LPB flowchart.
Fig 8. 1: LPB Flowchart
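To make the flow above concrete, the following Python sketch illustrates the core LPB-style step of sorting a population by fitness and dividing it into Good and Bad groups. The objective function, the function names, and the 50/50 split ratio are illustrative assumptions rather than the exact implementation from [2].

```python
import random

def fitness(individual):
    # Illustrative objective (assumed): the worked example in this chapter maximizes x1^2 + x2^2.
    return sum(x * x for x in individual)

def split_good_bad(population, good_fraction=0.5):
    """Sort learners by fitness and divide them into Good and Bad groups.

    `good_fraction` is an assumed parameter; LPB's original split may differ.
    """
    ranked = sorted(population, key=fitness, reverse=True)  # highest fitness first (maximization)
    cut = int(len(ranked) * good_fraction)
    good, bad = ranked[:cut], ranked[cut:]
    return good, bad

# Minimal usage example with a random population of 2-D learners.
population = [[random.randint(0, 9000) for _ in range(2)] for _ in range(16)]
good_group, bad_group = split_good_bad(population)
print(len(good_group), "Good learners,", len(bad_group), "Bad learners")
```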
8.4. SA
The SA algorithm was introduced in 1983 by Kirkpatrick et al. [10] to solve the Traveling Salesman Problem and similar optimization problems across a broad search space [11]. It is an optimization technique that maintains constant upper and lower control limits while systematically modifying the variables. SA is inspired by the annealing process in metallurgy, which involves rapidly heating a metal followed by gradual cooling [12]. At high temperatures, atoms within the
metal exhibit fast movement, but as the temperature decreases, their kinetic energy diminishes.
Ultimately, this process leads to a more ordered atomic arrangement, resulting in a material that is
more ductile and easier to manipulate [12]. Numerous optimization issues, including TSP, protein
folding, graph partitioning, and job-shop scheduling, have been effectively solved with SA [13]
[14]. The capacity of SA to break free from local minima and converge to a global minimum is its
primary benefit [11]. Furthermore, SA does not require prior knowledge of the search space and
is quite simple to implement. Starting with an initial solution, the SA method iteratively improves
the existing solution by randomly perturbing it and accepting the perturbation with a predetermined
probability [15]. As the iterations progress, the probability of accepting a worse solution gradually drops from its initially high level [15]. Fig 8. 2 offers a thorough flowchart that illustrates the SA process, which involves heating a material and then cooling it slowly to remove defects, analogous to exploring the solution space broadly at first and then settling into a good solution.
Fig 8. 2: Simulated Annealing Algorithm Flowchart
8.4.1 Key Concepts Behind SA
Several key concepts play a significant role in the SA algorithm. "Energy states and cost functions" is one of them: in optimization, the energy of a system is analogous to the cost function, which represents the value to be minimized or maximized. Each acceptable solution to the optimization problem corresponds to a state of the system, and the quality of that solution is evaluated based on the cost function. Temperature and the cooling schedule are two further key concepts. The temperature acts as a controlling parameter that determines the likelihood of accepting less optimal solutions as the algorithm progresses, allowing the exploration of a broader solution space: a higher temperature gives a higher probability of accepting worse solutions, promoting exploration. Based on a predefined cooling schedule, the temperature is regularly reduced; as the temperature decreases, the probability of accepting worse solutions is reduced, guiding the search toward convergence.
The Metropolis Acceptance Criterion is another concept and an important key to SA's effectiveness: it allows the acceptance of worse solutions based on a probability that decreases with temperature, assisting the algorithm in escaping local minima. LPBSA combines the robust exploration capabilities of SA with the structured approach of LPB, so by incorporating the Metropolis Acceptance Criterion, LPBSA enhances its capacity to avoid local optima.
In the context of the algorithm's work process, here is a step-by-step presentation of how it operates.
Initialization
Start the algorithm with an initial candidate solution, a high initial temperature, and a cooling rate.
Iterative Process
The following steps are repeated until the specified number of iterations is reached or the stopping condition is met.
✓ Generate a new solution: The current solution is randomly perturbed to produce a candidate solution, and then the cost of the new solution is evaluated.
✓ Evaluate the new solution: The difference between the costs of the new solution and the current solution is computed.
✓ Apply the Metropolis Acceptance Criterion: The MAC determines the likelihood of accepting the new solution, even when it is worse than the current one.
✓ Update the temperature: According to the cooling schedule, reduce the temperature.
Convergence
As the temperature decreases, the algorithm concentrates on refining the solution discovered so far and becomes less inclined to accept worse solutions. When the temperature meets the stopping condition or gets close to zero, the algorithm converges to an optimum solution.
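As a concrete illustration of the loop just described, here is a minimal simulated annealing sketch in Python. The neighbour move, the geometric cooling factor of 0.95, and the stopping temperature are illustrative assumptions rather than the exact settings used in this chapter.

```python
import math
import random

def cost(x):
    # Illustrative objective (assumed): minimize a simple quadratic.
    return (x - 3.0) ** 2

def simulated_annealing(x0, temp=100.0, cooling=0.95, t_min=1e-3):
    current, current_cost = x0, cost(x0)
    best, best_cost = current, current_cost
    while temp > t_min:
        # Generate a neighbour by a small random perturbation of the current solution.
        candidate = current + random.uniform(-1.0, 1.0)
        delta = cost(candidate) - current_cost
        # Metropolis Acceptance Criterion: always accept improvements,
        # accept worse moves with probability exp(-delta / temp).
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current, current_cost = candidate, cost(candidate)
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        temp *= cooling  # cooling schedule: geometric temperature decay
    return best, best_cost

print(simulated_annealing(x0=random.uniform(-10, 10)))
```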
8.5 LPBSA
LPBSA is a hybrid optimization technique that blends SA and LPB; more precisely, it is an improvement approach that focuses on optimizing the LPB algorithm itself and seeks to improve on established LPB approaches. When LPBSA is compared with LPB and other well-known algorithms, such as PSO and GA, it performs better in assessments utilizing benchmark test functions. LPBSA plays a crucial role in improving the efficiency of the search and in improving overall performance. The algorithm is especially helpful in dynamic settings where the optimal solution may change over time; because of its adaptive qualities, LPBSA can modify its search as conditions change.
Optimizing Learner Performance
LPBSA holds essential potential for optimizing learner performance in diverse educational settings. In this section, the algorithm's capabilities are considered in real-world environments. Essentially, for the purpose of personalized learning pathways, the algorithm analyzes individual performance data, identifies where students are struggling, and tailors learning activities to their precise needs. Identifying such areas has a great effect in terms of improving learning experiences; this targeted approach improves learning skills and helps enhance overall academic outcomes. One of the most prominent implementation considerations is "data collection": it is very important to collect comprehensive data on student performance, such as grades, attendance, and other relevant metrics. Optimized resource sharing, adaptive learning systems, and the identification of students who are currently at risk can all benefit from LPBSA, which is crucial because it can adjust the difficulty of lessons based on real-time feedback from the algorithm, dynamically ensuring that each student is challenged appropriately.
The primary objectives of this modified algorithm are to improve convergence, adaptability, and robustness. By integrating LPB with SA, the approach improves global exploration, helping to avoid local optima. It introduces temperature-controlled evolution that allows the acceptance of less optimal solutions when beneficial, and it combines SA's exploration abilities with adaptive behavior across several problem domains. The following section explains how LPBSA works.
LPBSA starts by initializing a population of solutions and calculating the fitness of each
individual. It then selects a subset of individuals, sorts them into Good and Bad groups based on
fitness, and assigns each individual to a group. Next, it selects individuals for crossover, ensuring
the ideal group is not empty, and performs crossover to create new individuals. Mutation is applied
to introduce random changes. The algorithm uses the Metropolis Acceptance Criterion to accept
or reject new individuals based on a random number and temperature rate. The population is
updated with the accepted individuals and the process is repeated for a specified number of
iterations or until a termination condition is met. The optimal solution is returned at the end. Fig 8. 3 shows the LPBSA flowchart.
Fig 8. 3: LPBSA Flowchart
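The following Python sketch mirrors the steps just described (initialize, split a subpopulation into Good and Bad groups, crossover, mutation, Metropolis acceptance). The population size, the simplified crossover and mutation moves, and the cooling settings are illustrative assumptions, not the reference implementation of LPBSA.

```python
import math
import random

def fitness(ind):
    # Illustrative objective (assumed): maximize f(x) = x1^2 + x2^2, as in the worked example.
    return ind[0] ** 2 + ind[1] ** 2

def crossover(p1, p2):
    # Simplified variable-swap crossover between two parents.
    return [p1[0], p2[1]], [p2[0], p1[1]]

def mutate(ind, low=0, high=9000):
    # Replace one randomly chosen variable with a new random value.
    child = list(ind)
    child[random.randrange(len(child))] = random.randint(low, high)
    return child

def lpbsa(pop_size=16, iterations=50, temp=100.0, cooling=0.95):
    population = [[random.randint(0, 9000), random.randint(0, 9000)] for _ in range(pop_size)]
    for _ in range(iterations):
        # Split a random subpopulation into Good (upper half) and Bad (lower half) by fitness.
        subset = sorted(random.sample(population, 8), key=fitness, reverse=True)
        good, bad = subset[:4], subset[4:]
        # Pair up parents from the Good group and create mutated children.
        children = []
        for p1, p2 in zip(good[0::2], good[1::2]):
            c1, c2 = crossover(p1, p2)
            children += [mutate(c1), mutate(c2)]
        # Metropolis Acceptance Criterion: accept each child against the current best.
        best = max(population, key=fitness)
        for child in children:
            delta = fitness(best) - fitness(child)   # positive when the child is worse
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                population.remove(min(population, key=fitness))  # replace the worst individual
                population.append(child)
        temp *= cooling
    return max(population, key=fitness)

print(lpbsa())
```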
In LPBSA, the Metropolis Acceptance Criterion is applied during the mutation process, before proceeding to the next iteration. The criterion is defined in Equation 8. 1. By calculating the probability of accepting less optimal solutions, this equation helps prevent premature convergence and maintain diversity within the population.
P(accept worse) = exp(-(Cost(new) - Cost(current)) / temperature)        Equation 8. 1
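A small Python helper corresponding to Equation 8.1 is sketched below; the example cost and temperature values are hypothetical and chosen only to show how the acceptance decision is made.

```python
import math
import random

def metropolis_accept(cost_new, cost_current, temperature):
    """Return True if the new solution should be accepted (Equation 8.1)."""
    if cost_new <= cost_current:
        return True                        # improvements are always accepted
    p_accept = math.exp(-(cost_new - cost_current) / temperature)
    return random.random() < p_accept      # worse solutions accepted with probability p_accept

# Hypothetical example: a slightly worse solution evaluated at temperature 100.
print(metropolis_accept(cost_new=140.0, cost_current=100.0, temperature=100.0))
```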
There are several reasons why the crossover and mutation operators are used in LPBSA, such as "exploration and exploitation". Crossover mainly enables exploration by sharing information from diverse regions in the search space. On the other hand, mutation facilitates exploitation by implementing small random modifications, which fine-tune solutions. Hence, these two operators together balance exploitation and exploration, which is very significant for effective optimization. [16] states that properly balancing the two modes of exploitation and exploration increases the chances of achieving robust results. Diversity maintenance is also important: by leveraging both crossover and mutation, the algorithm preserves genetic diversity within the population and reduces the risk of getting trapped in local optima. This diversity is crucial for reaching the global optimum, which is the ultimate goal of the optimization process. The algorithm's capability to fine-tune solutions within the fitness landscape relies on these two operators.
8.6.1 Crossover Operator
In LPBSA, the crossover operator is inspired by the process of integrating the behaviors of individuals to deliver a new individual with possibly superior characteristics. The crossover operator combines information from two solutions to create a new candidate solution, helping to explore the search space more effectively. Two parents are selected for crossover:
i. The decimal values of the two selected parents are converted into binary numbers.
ii. The left half of the binary number of the first parent is merged with the right half of the binary number of the second parent, and the right half of the binary number of the first parent is combined with the left half of the binary number of the second parent. A sketch of these steps is given below.
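The following minimal sketch illustrates the half-swap binary crossover of steps i-ii; the helper names are hypothetical, and each parent's binary string is split at its own midpoint since the document's binary numbers are unpadded.

```python
def to_bits(value):
    """Convert a non-negative integer to its binary string (no padding)."""
    return format(value, "b")

def half_swap_crossover(parent1, parent2):
    """Swap the left and right halves of the parents' binary strings (step ii)."""
    b1, b2 = to_bits(parent1), to_bits(parent2)
    m1, m2 = len(b1) // 2, len(b2) // 2
    child1 = b1[:m1] + b2[m2:]   # left half of parent 1 + right half of parent 2
    child2 = b2[:m2] + b1[m1:]   # left half of parent 2 + right half of parent 1
    return int(child1, 2), int(child2, 2)

# Hypothetical usage with two parent values from the worked example's range.
print(half_swap_crossover(7008, 2416))
```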
8.6.2 Mutation Operator
In LPBSA, the mutation operator applies a random change to an individual solution, which further improves selected individuals. This promotes variety within the population and helps avoid premature convergence. The mutation process is performed simply as presented below.
iii. One bit of the binary number is randomly changed, for example one 0 bit changed to 1.
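A corresponding sketch of step iii follows; flipping a randomly chosen 0 bit to 1 matches the maximization example in this chapter, and the helper name is hypothetical.

```python
import random

def flip_zero_bit(value):
    """Randomly change one 0 bit of the value's binary string to 1 (step iii)."""
    bits = list(format(value, "b"))
    zero_positions = [i for i, b in enumerate(bits) if b == "0"]
    if not zero_positions:           # nothing to flip if the number is all ones
        return value
    bits[random.choice(zero_positions)] = "1"
    return int("".join(bits), 2)

print(flip_zero_bit(2416))  # e.g. 100101110000 -> a larger value with one extra 1 bit
```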
Consider the following function: f(x) = x1² + x2², for integer x, 0 ≤ x1 ≤ 9000 and 0 ≤ x2 ≤ 9000.
Step 1: Create a random population (Bi) of 16 individuals with values between 0 and 9000, then evaluate the fitness of each individual and compute the population average, as shown in Table 8. 1.
Table 8. 1: Main population (B)
Bi x1 x2 x1² x2² Fitness
B13 5201 4989 27050401 24890121 51940522
B14 2188 3477 4787344 12089529 16876873
B15 3409 1877 11621281 3523129 15144410
B16 4560 2776 20793600 7706176 28499776
Average: 28889053
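A short sketch of Step 1 under the stated bounds is given below; the population size of 16 matches the worked example, while random seeding is left to the reader.

```python
import random

def fitness(x1, x2):
    # Objective of the worked example: f(x) = x1**2 + x2**2 (to be maximized).
    return x1 ** 2 + x2 ** 2

# Step 1: create a random population B of 16 individuals with 0 <= x1, x2 <= 9000.
population = [(random.randint(0, 9000), random.randint(0, 9000)) for _ in range(16)]
fitness_values = [fitness(x1, x2) for x1, x2 in population]
average = sum(fitness_values) / len(fitness_values)
print(f"average fitness of the initial population: {average:.0f}")
```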
Step 2: Create a subpopulation (K) randomly from the main population by selecting eight individuals and putting them in descending order. Divide those eight individuals into two groups, Good and Bad, from highest to lowest fitness, as shown in Table 8. 2.
Table 8. 2: K Population
Bi Ki Fitness Groups
Good
B3 K1 36307681
B8 K2 34829561
B11 K3 32198401
B1 K4 28396800
Bad
B5 K5 23632317
B12 K6 19351701
B15 K7 15144410
B6 K8 11751625
Step 3: Compare the main population (Bi) individuals to the highest fitness of the Good and Bad groups (Ki1 and Ki5, respectively):
• If the fitness value of Bi <= the highest fitness of the Bad group, move Bi to the Bad group;
• else if the fitness value of Bi <= the highest fitness of the Good group, move Bi to the Good group;
• else if the fitness value of Bi > the highest fitness of the Good group, move Bi to the Ideal group.
The results are shown in Table 8. 3.
Table 8. 3: Compare Bi individuals with Ki highest Good and Bad fitness
Bi Fitness Fit (Bi)<= Fit (Ki5) Fit (Bi)<= Fit(Ki1) Ideal group
B1 28396800 Good
B2 21977818 Bad
B3 36307681 Good
B4 46110125 Ideal
B5 23632317 Bad
B6 11751625 Bad
B7 29340378 Good
B8 34829561 Good
B9 34046500 Good
B10 31820365 Good
B11 32198401 Good
B12 19351701 Bad
B13 51940522 Ideal
B14 16876873 Bad
B15 15144410 Bad
B16 28499776 Good
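The grouping rule of Step 3 can be expressed compactly as below; `k1_best` and `k5_best` stand for the highest fitness values of the Good and Bad subgroups, and the function name is hypothetical.

```python
def classify(fitness_bi, k5_best, k1_best):
    """Assign a main-population individual to the Bad, Good, or Ideal group (Step 3)."""
    if fitness_bi <= k5_best:      # no better than the best of the Bad group
        return "Bad"
    elif fitness_bi <= k1_best:    # no better than the best of the Good group
        return "Good"
    else:                          # better than the best of the Good group
        return "Ideal"

# Values taken from Table 8.2: Ki1 = 36307681 (Good), Ki5 = 23632317 (Bad).
print(classify(51940522, 23632317, 36307681))  # B13 -> "Ideal"
```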
Step 4: Select four individuals, ensuring that the Ideal group is not empty; if it is, choose individuals from the Good group. If the Good group is also empty, select individuals from the Bad group. Table 8. 4 displays the individuals selected from Table 8. 3; these four individuals are used for the next phase (the crossover stage). Ensure that the number of selected individuals matches the specified requirement N = 4, as stated in the algorithm.
Table 8. 4: Selected individuals
B3 36,307,681 Good
B8 34,829,561 Good
Step 5: Table 8. 5 presents the crossover applied between the individuals selected from Table 8. 4.
Step 6: For the new individuals (children), apply mutation and convert one 0 bit to 1 for each individual at random in order to maximize the function. This process is shown in Table 8. 6.
C7 7008 2416 1101101100000 100101110000
C8 1028 6189 10000000100 1100000101101
Step 7: Before passing to the next iteration, apply the MAC formula shown in Equation 8. 1. In this chapter, a random number of 0.6 and a temperature rate of 100 are used.
For each cost, compare the acceptance probability with the random number to determine
acceptance or rejection:
For C1: Acceptance probability = exp(-(7276 - 5228)/100) ≈ 0.6703. Since 0.6 < 0.6703,
C1 is accepted.
For C2: Acceptance probability = exp(-(7121 - 5073)/100) ≈ 0.6775. Since 0.6 < 0.6775,
C2 is accepted.
For C3: Acceptance probability = exp(-(3204 - 2180)/100) ≈ 0.7408. Since 0.6 < 0.7408,
C3 is accepted.
For C4: Acceptance probability = exp(-(7939 - 6915)/100) ≈ 0.6525. Since 0.6 < 0.6525,
C4 is accepted.
For C5: Acceptance probability = exp(-(3543 - 2519)/100) ≈ 0.7288. Since 0.6 < 0.7288,
C5 is accepted.
For C6: Acceptance probability = exp(-(7788 - 6764)/100) ≈ 0.6742. Since 0.6 < 0.6742,
C6 is accepted.
For C7: Acceptance probability = exp(-(7024 - 7008)/100) ≈ 0.5391. Since 0.6 > 0.5391,
C7 is rejected.
For C8: Acceptance probability = exp(-(1540 - 1028)/100) ≈ 0.6428. Since 0.6 < 0.6428,
C8 is accepted.
Therefore, based on the comparison with the random number (0.6), C7 is rejected, whereas all
other costs are accepted. Table 8. 8 shows the individuals that are accepted and used for the next
iteration.
C8 1540 7213 2371600 52027369 54398969
Step 9: Calculate the average of the accepted individuals in Table 8. 9, the individuals in Table 8. 4, and the Good-group individuals in Table 8. 2. The results are shown in Table 8. 10.
Average: 52649382
Steps 2–9 are repeated until the desired number of iterations or the termination condition is
satisfied, at which point the optimal solution is provided. Let’s follow Steps 2–9 for the next
iteration.
Step 2: From the main population B, create a new subpopulation K, as illustrated in Table 8. 11.
Bi K Fitness Groups
Good
B4 K1 46110125
B9 K2 34046500
B10 K3 31820365
B1 K4 28396800
Bad
B2 K5 21977818
B12 K6 19351701
B14 K7 16876873
B6 K8 11751625
Step 3: Compare the main population (Bi) individuals to the highest fitness of the Good and Bad groups:
• If the fitness values of Bi <= highest fitness of the Bad group, move Bi to the Bad
group;
• else if the fitness value of Bi <= highest fitness of the Good group, move Bi to the
Good group;
• else if the fitness value of Bi > highest fitness of the Good group, move Bi to the
Ideal group.
Table 8. 12: Compare Bi individuals with Ki highest Good and Bad fitness
Bi Fitness Fit (Bi)<= Fit (Ki5) Fit (Bi)<= Fit(Ki1) Ideal group
B1 28396800 Good
B2 21977818 Good
B3 36307681 Good
B4 46110125 Good
B5 23632317 Good
B6 11751625 Bad
B7 29340378 Good
B8 34829561 Good
B9 34046500 Good
B10 31820365 Good
B11 32198401 Good
B12 19351701 Bad
B13 51940522 Ideal
B14 16876873 Bad
B15 15144410 Bad
B16 28499776 Good
Step 4: In this step, select four individuals from the main population, ensuring that the Ideal group is not empty; if it is, choose individuals from the Good group. If the Good group is also empty, select individuals from the Bad group. The selected individuals are used in the next phase (the crossover stage). Ensure that the number of selected individuals matches the specified requirement N = 4, as stated in the algorithm.
C5 New 5630 3229 1010111111110 110010011101
C6 New 4102 3721 1000000000110 111010001001
E7 B8 3460 4781 110110000100 1001010101101
E8 K4 4320 3120 1000011100000 110000110000
C7 New 3488 2416 110110100000 100101110000
C8 New 4292 6189 1000011000100 1100000101101
Step 6: For the new individuals (children), apply mutation and convert one 0 bit to 1 for each individual at random in order to maximize the function. This process is shown in Table 8. 15.
Step 7: Before passing to the next iteration, apply the MAC formula shown in Equation 8. 1. As before, a random number of 0.6 and a temperature rate of 100 are used. The results are as follows:
For C1: Acceptance probability = exp(-(7171 - 5123)/100) ≈ 0.6703. Since 0.6 < 0.6703,
C1 is accepted.
For C2: Acceptance probability = exp(-(6481 - 4433)/100) ≈ 0.6703. Since 0.6 < 0.6703,
C2 is accepted.
For C3: Acceptance probability = exp(-(1600 - 1088)/100) ≈ 0.6065. Since 0.6 > 0.6065,
C3 is rejected.
For C4: Acceptance probability = exp(-(7939 - 7683)/100) ≈ 0.6311. Since 0.6 > 0.6311,
C4 is rejected.
For C5: Acceptance probability = exp(-(7678 - 5630)/100) ≈ 0.7061. Since 0.6 < 0.7061,
C5 is accepted.
For C6: Acceptance probability = exp(-(6150 - 4102)/100) ≈ 0.6703. Since 0.6 < 0.6703,
C6 is accepted.
For C7: Acceptance probability = exp(-(4000 - 3488)/100) ≈ 0.6055. Since 0.6 > 0.6055,
C7 is rejected.
For C8: Acceptance probability = exp(-(6340 - 4292)/100) ≈ 0.6703. Since 0.6 < 0.6703,
C8 is accepted.
Based on the comparison with the random number (0.6), C3, C4, and C7 are rejected, whereas all the other individuals are accepted.
Step 8: Calculate the new accepted individuals after applying simulated annealing. The results, together with the selected individuals from Table 8. 13, are displayed in Table 8. 19.
Average: 55693299
Therefore, it is evident from the outcomes that the population has been enhanced, leading to
increased efficiency among individuals. Table 8. 20 presents a comparison of the average for each
iteration.
Table 8. 20: Comparison of the average for each iteration
Iteration Average
Iteration 0 28889053
Iteration 1 52649382
Iteration 2 55693299
Fig 8. 4 shows the average results for each iteration and how they increased from iteration 0 to 2.
To evaluate LPBSA further, a detailed comparison analysis utilizing particular benchmarks was carried out. In this section, the focus is on the comparative analysis, the rationale behind the selected benchmarks, and the detailed outcomes that were obtained.
The metrics discussed in this section play a major role in determining the performance of LPBSA. The average value is one of those metrics: it represents the average cost function value obtained over multiple runs, signifying the algorithm's capability to deliver good solutions consistently. To reflect the algorithm's reliability and stability, and to measure the variability of the cost function value, the standard deviation is used as another metric. Additionally, the best value represents the lowest cost function value, assessing the algorithm's capability to find the optimal solution. The worst value is used as a further metric, giving the highest cost function value to display the worst case of the algorithm's performance.
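These four statistics can be computed directly from the costs recorded over repeated runs, as in the short sketch below; the sample values are hypothetical.

```python
import statistics

# Hypothetical final cost values from several independent runs of an optimizer.
run_costs = [3.9e-4, 4.1e-4, 3.2e-4, 5.0e-4, 2.8e-4]

print("average:", statistics.mean(run_costs))
print("std dev:", statistics.stdev(run_costs))   # sample standard deviation
print("best (lowest cost):", min(run_costs))
print("worst (highest cost):", max(run_costs))
```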
To evaluate the performance of LPBSA, 19 benchmark test functions were utilized. These
functions are commonly used to measure the efficiency of optimization algorithms due to their
potential characteristics, such as dimensionality, separability, and multimodality. For example, test
function one, which is a sphere function, is a simple unimodal function with a single global
minimum, while test function two is a Rastrigin function, a multimodal function with many local minima. Different aspects of the algorithms are tested via these functions, such as their ability to converge
to the global minima, avoid the local minima, and maintain stability. Table 8. 21 shows a summary
of evaluating LPBSA with other algorithms. The table entries represent the average and standard
deviation of the cost function value over 30 runs for each entry.
Table 8. 21: Result of 19 benchmark test functions to compare LPBSA with other algorithms (each pair of columns gives the average and standard deviation of the cost function over 30 runs; the first pair is LPBSA)
TF16  -1.031600000  6.77522E-16  -1.03163   2.46E-06  458.2962  165.3724  263.0948  187.1352  383.9184  36.6053
TF17   2.705400000  1.35504E-15   0.397888  3.16E-06  596.6629  171.0631  466.5429  180.9493  503.0485  35.7940
TF18   3.000000000  0.00000E+00   3.000142  0.000283  229.9515  184.6095  136.1759  160.0187  118.438   51.0018
TF19  -3.862800     3.16177E-15  -3.86278   9.61E-07  679.588   199.4014  741.6341  206.7296  544.1018  13.3016
The results in Table 8. 21 show that LPBSA consistently achieved better performance than the other algorithms, such as PSO, GA, and the original LPB, across the majority of benchmark functions. In addition, LPBSA achieved a lower standard deviation and a superior average solution, showing its ability to deliver better solutions more consistently. Table 8. 21 presents a complete map of all the tests; the best results are highlighted, and the table shows the best outcomes of LPBSA. Some test function results are illustrated in the following sentences. For example, in TF1, LPBSA recorded an average of 3.87E-04 and a standard deviation of 7.25E-04, outperforming LPB, which achieved an average of 1.88E-03 and a standard deviation of 2.09E-03. PSO and GA provided lower results: PSO achieved 4.20E-18, but with a high standard deviation, and GA recorded an average of 748.60.
By recording these results, LPBSA outperformed all the other algorithms in this test function. Moreover, LPBSA recorded the highest average and standard deviation compared to the other algorithms in TF9, TF14, TF15, TF16, TF18, and TF19. Even in TF17, LPBSA provided the highest standard deviation of 1.35504E-15 and achieved the highest average by recording 2.705400000 compared to the other algorithms, except LPB, which recorded an average of 1.35504E-15. These results show the precision, consistency, robustness, and superior performance of LPBSA, showcasing the algorithm's enhanced capability to find the optimal solution with great stability.
8.10 Conclusion
By combining the strengths of SA and LPB, LPBSA effectively balances exploration and exploitation, allowing for the discovery of high-quality solutions. LPBSA's ability to adapt to changing environments and its efficient search capabilities make it a valuable tool for solving complex optimization problems. In addition, LPBSA outperformed the LPB algorithm, indicating a notable improvement in LPB optimization. A total of 19 benchmark test functions were utilized to compare the proposed algorithms, and LPBSA beat well-known algorithms like PSO and GAs on most of them. The proposed technique, which includes mutation, crossover, population division, and SA, successfully improved the population and boosted overall efficiency. Thus, these outcomes highlight LPBSA's potential as a powerful optimization tool.
References
[1] K. Rajwar, K. Deep, and S. Das, “An exhaustive review of the metaheuristic algorithms for search and
optimization: taxonomy, applications, and open challenges,” Artif Intell Rev, vol. 56, no. 11, pp. 13187–
13257, Nov. 2023, doi: 10.1007/s10462-023-10470-y.
[2] S. Ahmed and T. A. Rashid, “Learner Performance-based Behavior Optimization Algorithm: A Functional
Case Study,” https://www.researchgate.net/publication/360906762_Learner_Performance-
based_Behavior_Optimization_Algorithm_A_Functional_Case_Study. [Online]. Available:
https://doi.org/10.21203/rs.3.rs-1688246/v2
[3] J. H. Holland, Adaptation in Natural and Artificial Systems. Mit Press, 1992.
[4] J. Kennedy and R. Eberhart, "Particle Swarm Optimization."
[5] R. Storn and K. Price, “Differential Evolution-A Simple and Efficient Heuristic for Global Optimization
over Continuous Spaces,” Kluwer Academic Publishers, 1997. doi: 10.1023/A:1008202821328.
[6] J. F. Pane, E. D. Steiner, M. D. Baird, and L. S. Hamilton, “Continued Progress: Promising Evidence on
Personalized Learning,” 2015. [Online]. Available: www.rand.org/t/RR1365z2.
[7] J. M. Keller, “Development and Use of the ARCS Model of Instructional Design.”
[8] S. Wang, F. Wang, Z. Zhu, J. Wang, T. Tran, and Z. Du, “Artificial intelligence in education: A systematic
literature review,” Oct. 15, 2024, Elsevier Ltd. doi: 10.1016/j.eswa.2024.124167.
[9] L. Cao, “College Students’ Metacognitive Awareness of Difficulties in Learning the Class Content Does
Not Automatically Lead to Adjustment of Study Strategies 1,” Australian Journal of Educational &
Developmental Psychology, vol. 7, pp. 31–46, 2007.
[10] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, “Optimization by Simulated Annealing,” 1983. [Online].
Available: www.sciencemag.org
[11] D. Delahaye, S. Chaimatanan, and M. Mongeau, “Simulated annealing: From basics to applications,” vol.
272, p. 978, 2019, doi: 10.1007/978-3-319-91086-4_1.
[12] C. Venkateswaran, M. Ramachandran, R. Kurinjimalar, P. Vidhya, and G. Mathivanan, “Application of
Simulated Annealing in Various Field,” Materials and its Characterization, vol. 1, no. 1, pp. 01–08, Feb.
2022, doi: 10.46632/mc/1/1/1.
[13] L. Hernández-Ramírez, J. Frausto-Solis, G. Castilla-Valdez, J. Javier González-Barbosa, D. Terán-
Villanueva, and M. L. Morales-Rodríguez, “A Hybrid Simulated Annealing for Job Shop Scheduling
Problem,” International Journal of Combinatorial Optimization Problems and Informatics, vol. 10, no. 1,
pp. 6–15.
[14] F. P. Agostini, D. D. O. Soares-Pinto, M. A. Moret, C. Osthoff, and P. G. Pascutti, “Generalized simulated
annealing applied to protein folding studies,” J Comput Chem, vol. 27, no. 11, pp. 1142–1155, Aug. 2006,
doi: 10.1002/jcc.20428.
[15] G. Rajeswarappa and S. Vasundra, “Red Deer and Simulation Annealing Optimization Algorithm-Based
Energy Efficient Clustering Protocol for Improved Lifetime Expectancy in Wireless Sensor Networks,”
Wirel Pers Commun, vol. 121, no. 3, pp. 2029–2056, Dec. 2021, doi: 10.1007/s11277-021-08808-2.
[16] A. M. Ahmed et al., “Balancing exploration and exploitation phases in whale optimization algorithm: an
insightful and empirical analysis.”