Seyedali Mirjalili
PII: S0950-7051(15)00258-0
DOI: http://dx.doi.org/10.1016/j.knosys.2015.07.006
Reference: KNOSYS 3207
Please cite this article as: S. Mirjalili, Moth-Flame Optimization Algorithm: A Novel Nature-inspired Heuristic
Paradigm, Knowledge-Based Systems (2015), doi: http://dx.doi.org/10.1016/j.knosys.2015.07.006
Moth-Flame Optimization Algorithm: A Novel Nature-inspired
Heuristic Paradigm
Seyedali Mirjalili
Abstract
1. Introduction
Optimization refers to the process of finding the best possible solution(s) for a particular problem. As the complexity of problems has increased over the last few decades, the need for new optimization techniques has become more evident than ever. Mathematical optimization techniques used to be the only tools for optimizing problems before the proposal of heuristic optimization techniques. Mathematical optimization methods are mostly deterministic and suffer from one major problem: local optima entrapment. Some of them, such as gradient-based algorithms, also require the derivatives of the search space. This makes them highly inefficient in solving real problems.
The so-called Genetic Algorithm (GA) [1], which is undoubtedly the most popular stochastic optimization algorithm, was proposed to alleviate the aforementioned drawbacks of deterministic algorithms. The success of the GA algorithm mostly relies on its stochastic components. Selection, reproduction, and mutation all have stochastic behaviours that assist GA in avoiding local optima much more efficiently than mathematical optimization algorithms. Since the probability of selecting and reproducing the best individuals is higher than that of the worst individuals, the overall average fitness of the population improves over the course of generations. These two simple concepts are the key reasons for the success of GA in solving optimization problems. Another fact about this algorithm is that there is no need for gradient information of the problem, since GA only evaluates individuals based on their fitness. This makes the algorithm highly suitable for solving real problems with unknown search spaces. Nowadays, applications of the GA algorithm can be found in a wide range of fields.
The years after the proposal of GA were accompanied by significant attention to such algorithms, which resulted in the proposal of well-known algorithms such as Particle Swarm Optimization (PSO) [2], Ant Colony Optimization (ACO) [3], Differential Evolution (DE) [4], Evolution Strategy (ES) [5], and Evolutionary Programming (EP) [6, 7]. The applications of these algorithms can be found in science and industry as well. Despite the merits of these optimizers, a fundamental question here is whether there is an optimizer for solving all optimization problems. According to the No-Free-Lunch (NFL) theorem [8], there is no algorithm for solving all optimization problems. This means that an optimizer may perform well on one set of problems and fail on a different set of problems. In other words, the average performance of optimizers is equal when considering all optimization problems. Therefore, there are still problems that can be solved by new optimizers better than by current optimizers.
This is the motivation of this work, in which a novel nature-inspired algorithm is proposed to compete with the current optimization algorithms. The main inspiration of the proposed algorithm is the navigation mechanism of moths in nature called transverse orientation. The paper first proposes a mathematical model of the spiral flight path of moths around artificial lights (flames). An optimization algorithm is then built on this mathematical model to solve optimization problems in different fields. The rest of the paper is organized as follows:
Section 2 reviews the literature of stochastic and heuristic optimization algorithms. Section 3 presents the inspiration of this work and proposes the MFO algorithm. The experimental setup, results, discussion, and analysis are provided in Section 4. Section 5 investigates the effectiveness of the proposed MFO algorithm in solving nine constrained engineering design problems: the welded beam, gear train, three-bar truss, pressure vessel, cantilever beam, I-beam, tension/compression spring, 15-bar truss, and 52-bar truss design problems. In addition, Section 6 demonstrates the application of MFO in the field of marine propeller design. Finally, Section 7 concludes the work and suggests several directions for future studies.
2. Literature review
This section first reviews the state of the art and then discusses the motivations of this work. Stochastic optimization algorithms can be divided into two main families: individual-based and population-based algorithms. There are several advantages and disadvantages for each of these families. Individual-based algorithms need less computational cost and fewer function evaluations but suffer from premature convergence. Premature convergence refers to the stagnation of an optimization technique in local optima, which prevents it from converging toward the global optimum. In contrast, population-based algorithms have a high ability to avoid local optima since a set of solutions is involved during optimization. In addition, information can be exchanged between the candidate solutions, which assists them in overcoming different difficulties of search spaces. However, high computational cost and the need for more function evaluations are two major drawbacks of population-based algorithms.
The well-known algorithms in the individual-based family are Tabu Search (TS) [6, 9], hill climbing [10], Iterated Local Search (ILS) [11], and Simulated Annealing (SA) [12]. TS is an improved local search technique that utilizes short-term, intermediate-term, and long-term memories to ban and truncate unpromising/repeated solutions. Hill climbing is another local search, individual-based technique that starts optimization from a single solution. This algorithm then iteratively attempts to improve the solution by changing its variables. ILS is an improved hill climbing algorithm that decreases the probability of entrapment in local optima. In this algorithm, the optimum obtained at the end of each run is perturbed and considered as the starting point of the next run. Finally, the SA algorithm tends to accept worse solutions with a probability governed by a variable called the cooling factor. This assists SA in promoting exploration of the search space and consequently avoiding local optima.
Population-based algorithms divide the search process into two milestones: exploration and exploitation. In the exploration milestone, candidate solutions are changed abruptly so that the algorithm discovers promising regions of the search space. In contrast, the exploitation milestone aims to improve the quality of solutions by searching locally around the promising solutions obtained in the exploration milestone. In this milestone, candidate solutions are obliged to change less suddenly and to search locally. In PSO, for instance, a low inertia weight causes low exploration and a high tendency toward the best personal/global solutions obtained so far. Therefore, the particles converge toward the best points instead of roaming around the search space. The mechanism that provides GA with exploitation is the mutation operator. Mutation causes slight random changes in the individuals and local search around the candidate solutions.
Exploration and exploitation are two conflicting milestones, where promoting one results in degrading the other [14]. A right balance between these two milestones can guarantee a very accurate approximation of the global optimum using population-based algorithms. On one hand, mere exploration of the search space prevents an algorithm from finding an accurate approximation of the global optimum. On the other hand, mere exploitation results in local optima stagnation and, again, a low-quality approximation of the optimum. In addition, due to the unknown shape of the search space of optimization problems, there is no clear, accurate timing for the transition between these two milestones. Therefore, population-based algorithms balance the exploration and exploitation milestones to first find a rough approximation of the global optimum and then improve its accuracy.
The general framework of population-based algorithms is almost identical. The first step is to generate a set of random initial solutions $\{X_1, X_2, \ldots, X_n\}$. Each of these solutions is considered as a candidate solution for a given problem, assessed by the objective function, and assigned an objective value $\{O_1, O_2, \ldots, O_n\}$. The algorithm then combines/moves/updates the candidate solutions based on their fitness values with the hope of improving them. The created solutions are again assessed by the objective function and assigned their relevant fitness values. This process is iterated until the satisfaction of an end condition. At the end of this process, the best solution obtained is reported as the best approximation of the global optimum.
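To make this generic framework concrete, the following Python sketch shows only the loop structure; the sphere objective and the move-toward-the-best update rule are placeholders and do not correspond to any specific published algorithm:

```python
import numpy as np

def sphere(x):
    # Example objective: minimize the sum of squares
    return float(np.sum(x ** 2))

def generic_population_optimizer(obj, n=30, d=10, lb=-10.0, ub=10.0, max_iter=100):
    # Step 1: generate a set of random initial solutions
    X = np.random.uniform(lb, ub, size=(n, d))
    best_x, best_f = None, np.inf
    for _ in range(max_iter):
        # Assess every candidate solution with the objective function
        fitness = np.apply_along_axis(obj, 1, X)
        # Keep track of the best solution obtained so far
        i = int(np.argmin(fitness))
        if fitness[i] < best_f:
            best_f, best_x = fitness[i], X[i].copy()
        # Placeholder update: move each candidate slightly toward the current best
        X = X + 0.5 * np.random.rand(n, 1) * (best_x - X)
        X = np.clip(X, lb, ub)
    return best_x, best_f

best_x, best_f = generic_population_optimizer(sphere)
print(best_f)
```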
Recently, many population-based algorithms have been proposed. They can be classified into three main categories based on the source of inspiration: evolution, physics, or swarm. Evolutionary algorithms are those that mimic evolutionary processes in nature. Some of the recently proposed evolutionary algorithms are the Biogeography-Based Optimization (BBO) algorithm [15], the evolutionary membrane algorithm [16], the human evolutionary model [17], and Asexual Reproduction Optimization (ARO) [18].
The number of recently proposed swarm-based algorithms is larger than that of evolutionary algorithms. Some of the most recent ones are Glowworm Swarm Optimization (GSO) [19], the Bees Algorithm (BA) [20], the Artificial Bee Colony (ABC) algorithm [21], the Bat Algorithm (BA) [22], the Firefly Algorithm (FA) [23], the Cuckoo Search (CS) algorithm [24], the Cuckoo Optimization Algorithm (COA) [25], the Grey Wolf Optimizer (GWO) [26], Dolphin Echolocation (DE) [27], Hunting Search (HS) [28], and the Fruit Fly Optimization Algorithm (FFOA) [29].
The third class of algorithms is inspired by physical phenomena in nature. The most recent algorithms in this category are the Gravitational Search Algorithm (GSA) [30], Chemical Reaction Optimization (CRO) [31], the Artificial Chemical Reaction Optimization Algorithm (ACROA) [32], the Charged System Search (CSS) algorithm [33], Ray Optimization (RO) [34], the Black Hole (BH) algorithm [35], Central Force Optimization (CFO) [36], the Kinetic Gas Molecules Optimization (KGMO) algorithm [37], and Gases Brownian Motion Optimization (GBMO) [38].
In addition to the above-mentioned algorithms, there are also other population-based algorithms with different sources of inspiration. The most recent ones are the Harmony Search (HS) optimization algorithm [39], the Mine Blast Algorithm (MBA) [40], Symbiotic Organisms Search (SOS) [41], the Soccer League Competition (SLC) algorithm [42], the Seeker Optimization Algorithm (SOA) [43], the Coral Reefs Optimization (CRO) algorithm [44], the Flower Pollination Algorithm (FPA) [45], and States of Matter Search (SMS) [46].
As the above paragraphs show, there are many algorithms in this field, which indicates the popularity of these techniques in the literature. If we also consider the hybrid, multi-objective, discrete, and constrained variants, the number of publications increases dramatically. The reputation of these algorithms is due to several reasons. Firstly, simplicity is the main advantage of population-based algorithms: the majority of algorithms in this field follow a simple framework and have been inspired by simple concepts. Secondly, these algorithms consider problems as black boxes, so they do not need derivative information of the search space, in contrast to mathematical optimization algorithms. Thirdly, the local optima avoidance of population-based stochastic optimization algorithms is very high, making them suitable for practical applications. Lastly, population-based algorithms are highly flexible, meaning that they are readily applicable to different optimization problems without structural modifications. In fact, the problem representation becomes more important than the optimizer when using population-based algorithms.
Despite the high number of new algorithms and their applications in science and industry, the question arises as to whether we need more algorithms in this field. The answer to this question is positive according to the No-Free-Lunch (NFL) theorem [8] for optimization. This theorem logically proves that there is no optimization algorithm for solving all optimization problems. This means that an algorithm can be useful for one set of problems but useless for other types of problems. In other words, all algorithms perform similarly on average over all possible optimization problems. This theorem motivates the proposal of new algorithms with the hope of solving a wider range of problems or specific types of unsolved problems. This is also the motivation of this study, in which we take inspiration from the navigation of moths in nature to design an optimization algorithm.
3. Moth-flame optimiser
3.1. Inspiration
Moths are fancy insects, highly similar to the family of butterflies. There are over 160,000 species of this insect in nature. They have two main milestones in their lifetime: larvae and adult. The larvae are converted to moths in cocoons.
The most interesting fact about moths is their special navigation method at night. They have evolved to fly at night using moonlight, utilizing a mechanism called transverse orientation for navigation. In this method, a moth flies by maintaining a fixed angle with respect to the moon, a very effective mechanism for travelling long distances in a straight path [47, 48]. Fig. 1 shows a conceptual model of transverse orientation. Since the moon is far away from the moth, this mechanism guarantees flying in a straight line. The same navigation method can be used by humans. Suppose that the moon is in the southern sky and a human wants to go east. If he keeps the moon on his left side when walking, he will be able to move toward the east in a straight line.
Despite the effectiveness of transverse orientation, we usually observe moths flying spirally around lights. In fact, moths are tricked by artificial lights and show such behaviour. This is due to a limitation of transverse orientation: it is only helpful for moving in a straight line when the light source is very far away. When moths see a human-made artificial light, they try to maintain a similar angle with the light to fly in a straight line. Since such a light is extremely close compared to the moon, however, maintaining a similar angle to the light source causes a useless or deadly spiral flight path for moths [48]. A conceptual model of this behaviour is illustrated in Fig. 2.
It may be observed in Fig. 2 that the moth eventually converges toward the light. We mathematically model this behaviour and propose an optimizer called the Moth-Flame Optimization (MFO) algorithm in the following subsection.
3.2. MFO algorithm
In the proposed MFO algorithm, we assume that the candidate solutions are moths and the problem's variables are the positions of moths in the space. Therefore, the moths can fly in 1-D, 2-D, 3-D, or hyper-dimensional space by changing their position vectors. Since the MFO algorithm is a population-based algorithm, we represent the set of moths in a matrix as follows:
$M = \begin{bmatrix} m_{1,1} & m_{1,2} & \cdots & m_{1,d} \\ m_{2,1} & m_{2,2} & \cdots & m_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ m_{n,1} & m_{n,2} & \cdots & m_{n,d} \end{bmatrix}$   (3.1)
where n is the number of moths and d is the number of variables (dimension).
For all the moths, we also assume that there is an array for storing the corresponding fitness
values as follows:
$OM = \begin{bmatrix} OM_1 \\ OM_2 \\ \vdots \\ OM_n \end{bmatrix}$   (3.2)
Note that the fitness value is the return value of the fitness (objective) function for each moth. The position vector (the first row in the matrix M, for instance) of each moth is passed to the fitness function, and the output of the fitness function is assigned to the corresponding moth as its fitness value (OM1 in the matrix OM, for instance).
Another key component in the proposed algorithm is the flames. We consider a matrix similar to the moth matrix as follows:
$F = \begin{bmatrix} F_{1,1} & F_{1,2} & \cdots & F_{1,d} \\ F_{2,1} & F_{2,2} & \cdots & F_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ F_{n,1} & F_{n,2} & \cdots & F_{n,d} \end{bmatrix}$   (3.3)
It may be seen in Equation (3.3) that the dimensions of the M and F arrays are equal. For the flames, we also assume that there is an array for storing the corresponding fitness values as follows:
$OF = \begin{bmatrix} OF_1 \\ OF_2 \\ \vdots \\ OF_n \end{bmatrix}$   (3.4)
It should be noted here that moths and flames are both solutions. The difference between them is the way we treat and update them in each iteration. The moths are the actual search agents that move around the search space, whereas the flames are the best positions of moths obtained so far. In other words, flames can be considered as flags or pins that are dropped by the moths when searching the search space. Therefore, each moth searches around a flag (flame) and updates it in case of finding a better solution. With this mechanism, a moth never loses its best solution.
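To make the moth/flame bookkeeping concrete, one reasonable implementation (a sketch under the assumption that M/OM hold the current moths and their fitness values and F/OF hold the flames, all stored as NumPy arrays) keeps the flames equal to the n best solutions found so far:

```python
import numpy as np

def update_flames(M, OM, F=None, OF=None):
    """Keep the flames equal to the n best solutions found so far.

    M, OM: current moth positions (n x d) and their fitness values (n,)
    F, OF: previous flames and their fitness values (None on the first iteration)
    """
    if F is None:
        pool, pool_fit = M, OM
    else:
        # Merge previous flames with the current moths ...
        pool = np.vstack([F, M])
        pool_fit = np.concatenate([OF, OM])
    # ... and keep the n best (smallest fitness for minimization), sorted best-first
    order = np.argsort(pool_fit)[:M.shape[0]]
    return pool[order].copy(), pool_fit[order].copy()
```

Sorting the merged pool also yields the sorted flame list used later for assigning moths to flames.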
The MFO algorithm is a three-tuple that approximates the global optimum of optimization problems and is defined as follows:
$MFO = (I, P, T)$   (3.5)
I is a function that generates a random population of moths and the corresponding fitness values. The methodical model of this function is as follows:
$I: \emptyset \rightarrow \{M, OM\}$   (3.6)
The P function, which is the main function, moves the moths around the search space. This function receives the matrix M and eventually returns its updated version:
$P: M \rightarrow M$   (3.7)
The T function returns true if the termination criterion is satisfied and false if it is not satisfied:
$T: M \rightarrow \{true, false\}$   (3.8)
With I, P, and T, the general framework of the MFO algorithm is defined as follows:
M=I();
while T(M) is equal to false
M=P(M);
end
The function I has to generate initial solutions and calculate the objective function values. Any
random distribution can be used in this function. What we implement is as follows:
for i = 1 : n
    for j = 1 : d
        M(i,j) = (ub(j) - lb(j)) * rand() + lb(j);   % random value within the j-th variable's bounds
    end
end
OM = FitnessFunction(M);
As can be seen, there are two other arrays called ub and lb. These arrays define the upper and lower bounds of the variables as follows:
$ub = [ub_1, ub_2, \ldots, ub_d]$   (3.9)
$lb = [lb_1, lb_2, \ldots, lb_d]$   (3.10)
where $ub_j$ and $lb_j$ indicate the upper and lower bound of the j-th variable, respectively.
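An equivalent vectorized initialization in Python/NumPy might look as follows (a sketch; the sphere objective in the usage example is only a stand-in for a real fitness function):

```python
import numpy as np

def initialize_moths(n, d, lb, ub, fitness):
    """I function: random moth positions within [lb, ub] plus their fitness values."""
    lb = np.asarray(lb, dtype=float)   # lower bound of each variable, shape (d,)
    ub = np.asarray(ub, dtype=float)   # upper bound of each variable, shape (d,)
    M = np.random.rand(n, d) * (ub - lb) + lb
    OM = np.apply_along_axis(fitness, 1, M)
    return M, OM

# Example usage with a stand-in objective
sphere = lambda x: float(np.sum(x ** 2))
M, OM = initialize_moths(n=30, d=5, lb=[-10] * 5, ub=[10] * 5, fitness=sphere)
```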
After the initialization, the P function is iteratively run until the T function returns true. The P function is the main function that moves the moths around the search space. As mentioned above, the inspiration of this algorithm is transverse orientation. In order to mathematically model this behaviour, we update the position of each moth with respect to a flame using the following equation:
$M_i = S(M_i, F_j)$   (3.11)
where $M_i$ indicates the i-th moth, $F_j$ indicates the j-th flame, and S is the spiral function.
We chose a logarithmic spiral as the main update mechanism of moths in this paper. However, any type of spiral can be utilized here subject to the following conditions:
• the spiral's initial point should start from the moth;
• the spiral's final point should be the position of the flame;
• the fluctuation range of the spiral should not exceed the search space.
Considering these points, we define a logarithmic spiral for the MFO algorithm as follows:
$S(M_i, F_j) = D_i \cdot e^{bt} \cdot \cos(2\pi t) + F_j$   (3.12)
where $D_i$ indicates the distance of the i-th moth from the j-th flame, b is a constant for defining the shape of the logarithmic spiral, and t is a random number in [-1, 1].
D is calculated as follows:
$D_i = |F_j - M_i|$   (3.13)
where $M_i$ indicates the i-th moth, $F_j$ indicates the j-th flame, and $D_i$ indicates the distance of the i-th moth from the j-th flame.
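A direct transcription of Eqs. (3.12) and (3.13) could look like the sketch below; b = 1 is used here only as an assumed default for the shape constant:

```python
import numpy as np

def spiral_update(moth, flame, t, b=1.0):
    """Logarithmic spiral move of one moth around one flame (Eqs. 3.12-3.13).

    moth, flame: position vectors of the i-th moth and its assigned flame
    t: random number in [-1, 1]; t = -1 is closest to the flame, t = 1 farthest
    b: constant defining the shape of the logarithmic spiral
    """
    D = np.abs(flame - moth)                                   # distance to the flame, Eq. (3.13)
    return D * np.exp(b * t) * np.cos(2 * np.pi * t) + flame   # Eq. (3.12)

# Example: move a moth at the origin around a flame at (1, 2)
moth = np.zeros(2)
flame = np.array([1.0, 2.0])
t = np.random.uniform(-1, 1)
new_position = spiral_update(moth, flame, t)
```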
Equation (3.12) is where the spiral flying path of moths is simulated. As may be seen in this
equation, the next position of a moth is defined with respect to a flame. The t parameter in the
spiral equation defines how much the next position of the moth should be close to the flame (t =
-1 is the closest position to the flame, while t = 1 shows the farthest). Therefore, a hyper ellipse
can be assumed around the flame in all directions and the next position of the moth would be
within this space. Spiral movement is the main component of the proposed method because it
dictates how the moths update their positions around flames. The spiral equation allows a moth
to fly “around” a flame and not necessarily in the space between them. Therefore, the
exploration and exploitation of the search space can be guaranteed. The logarithmic spiral, space
around the flame, and the position considering different t on the curve are illustrated in Fig. 3.
Figure 3. Logarithmic spiral, space around a flame, and the position with respect to t
Fig. 4 shows a conceptual model of position updating of a moth around a flame. Note that the vertical axis shows only one dimension (one variable/parameter of a given problem), but the proposed method can be utilised for changing all the variables of the problem. The possible positions (dashed black lines) that can be chosen as the next position of the moth (blue horizontal line) around the flame (green horizontal line) in Fig. 4 clearly show that a moth can explore and exploit the search space around the flame in one dimension. Exploration occurs when the next position is outside the space between the moth and the flame, as can be seen in the arrows labelled 1, 3, and 4. Exploitation happens when the next position lies inside the space between the moth and the flame, as can be observed in the arrow labelled 2. There are some interesting observations for this model as follows:
• A moth can converge to any point in the neighbourhood of the flame by changing t.
• The lower t, the closer the distance to the flame.
• The frequency of position updating on both sides of the flame increases as the moth gets closer to the flame.
Figure 4. Some of the possible positions that can be reached by a moth with respect to a flame using the logarithmic spiral
The proposed position updating procedure can guarantee exploitation around the flames. In order to improve the probability of finding better solutions, we consider the best solutions obtained so far as the flames. So, the matrix F in Equation (3.3) always includes the n best solutions obtained so far. The moths are required to update their positions with respect to this matrix during optimization. In order to further emphasize exploitation, we assume that t is a random number in [r, 1], where r is linearly decreased from -1 to -2 over the course of iterations. Note that we name r the convergence constant. With this method, moths tend to exploit their corresponding flames more accurately as the number of iterations increases.
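One straightforward reading of this schedule in code (a sketch; the linear decrease of r and the uniform draw of t follow the description above):

```python
import numpy as np

def draw_t(iteration, max_iter):
    """Draw t in [r, 1], where the convergence constant r decreases linearly from -1 to -2."""
    r = -1.0 - iteration * (1.0 / max_iter)      # r goes from -1 (first iteration) to -2 (last)
    return (1.0 - r) * np.random.rand() + r      # uniform sample in [r, 1]
```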
A question that may arise here is that the position updating in Equation (3.12) only requires the moths to move towards a flame; if all moths were attracted to a single flame, this would cause the MFO algorithm to be trapped in local optima quickly. In order to prevent this, each moth is obliged to update its position using only one of the flames in Equation (3.12). In each iteration, and after updating the list of flames, the flames are sorted based on their fitness values. The moths then update their positions with respect to their corresponding flames. The first moth always updates its position with respect to the best flame, whereas the last moth updates its position with respect to the worst flame in the list. Fig. 5 shows how each moth is assigned to a flame in the list of flames.
It should be noted that this assumption is made for designing the MFO algorithm, and it is possibly not the actual behaviour of moths in nature. However, transverse orientation is still performed by the artificial moths. The reason why a specific flame is assigned to each moth is to prevent local optimum stagnation. If all of the moths were attracted to a single flame, all of them would converge to a point in the search space because they can only fly towards a flame and not outwards. Requiring them to move around different flames, however, causes higher exploration of the search space and a lower probability of local optima stagnation.
Figure 5. Assignment of each moth to one flame in the sorted list of flames
Therefore, the exploration of the search space around the best locations obtained so far is
guaranteed with this method due to the following reasons:
• Moths update their positions in hyper spheres around the best solutions obtained so far.
• The sequence of flames is changed based on the best solutions in each iteration, and the moths are required to update their positions with respect to the updated flames. Therefore, the position updating of moths may occur around different flames, a mechanism that causes sudden movements of moths in the search space and promotes exploration.
Another concern here is that the position updating of moths with respect to n different locations in the search space may degrade the exploitation of the best promising solutions. To resolve this concern, we propose an adaptive mechanism for the number of flames. Fig. 6 shows how the number of flames is adaptively decreased over the course of iterations. We use the following formula in this regard:
$\text{flame no} = \mathrm{round}\left(N - l \cdot \frac{N - 1}{T}\right)$   (3.14)
where l is the current iteration number, N is the maximum number of flames, and T indicates the maximum number of iterations.
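In code, Eq. (3.14) is a one-line helper (sketch):

```python
def flame_count(l, N, T):
    """Number of flames at iteration l, decreasing from N to 1 over T iterations (Eq. 3.14)."""
    return round(N - l * (N - 1) / T)
```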
Figure 6. Adaptive decrease of the number of flames (from N down to 1) over the course of iterations (from 0 to T)
Fig. 6 shows that there are N flames in the initial steps of the iterations. However, the moths update their positions only with respect to the best flame in the final steps of the iterations. The gradual decrement in the number of flames balances exploration and exploitation of the search space. After all, the general steps of the P function are as follows.
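Since the step-by-step listing of the P function is not reproduced here, the following self-contained Python sketch summarizes the whole loop as described above (initialization, adaptive flame number, convergence constant r, spiral moves, and flame update); it is an illustrative reading of the text rather than the author's reference implementation, and the sphere objective and parameter defaults are assumptions:

```python
import numpy as np

def mfo(fitness, d, lb, ub, n=30, max_iter=1000, b=1.0):
    """Illustrative MFO loop: I (initialization), then P repeated until the iteration budget (T) is met."""
    lb, ub = float(lb), float(ub)
    M = np.random.rand(n, d) * (ub - lb) + lb            # I: random moth positions
    OM = np.apply_along_axis(fitness, 1, M)
    order = np.argsort(OM)
    F, OF = M[order].copy(), OM[order].copy()            # initial flames = sorted first population

    for l in range(max_iter):
        flame_no = round(n - l * (n - 1) / max_iter)     # adaptive number of flames, Eq. (3.14)
        r = -1.0 - l / max_iter                          # convergence constant: -1 -> -2

        for i in range(n):
            j = min(i, flame_no - 1)                     # moths beyond flame_no follow the last flame
            t = (1.0 - r) * np.random.rand(d) + r        # t in [r, 1], drawn per dimension
            D = np.abs(F[j] - M[i])                      # distance to the assigned flame, Eq. (3.13)
            M[i] = D * np.exp(b * t) * np.cos(2 * np.pi * t) + F[j]   # spiral move, Eq. (3.12)

        M = np.clip(M, lb, ub)                           # keep moths inside the variable bounds
        OM = np.apply_along_axis(fitness, 1, M)

        # Flame update: keep the n best solutions among previous flames and current moths
        pool = np.vstack([F, M])
        pool_fit = np.concatenate([OF, OM])
        best = np.argsort(pool_fit)[:n]
        F, OF = pool[best].copy(), pool_fit[best].copy()

    return F[0], OF[0]                                   # best flame found so far

# Example usage on a 5-dimensional sphere function
best_x, best_f = mfo(lambda x: float(np.sum(x ** 2)), d=5, lb=-10, ub=10, n=30, max_iter=200)
print(best_f)
```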
As discussed above, the P function is executed until the T function returns true. After termination of the P function, the best moth is returned as the best obtained approximation of the optimum.
The computational complexity of an algorithm is a key metric for evaluating its run time, which can be defined based on the structure and implementation of the algorithm. The computational complexity of the MFO algorithm depends on the number of moths, the number of variables, the maximum number of iterations, and the sorting mechanism of flames in each iteration. Since we utilize the Quicksort algorithm, the sort is of O(n log n) and O(n^2) complexity in the best and worst case, respectively. Considering the P function, therefore, the overall computational complexity is defined as follows:
$O(MFO) = O(t(O(\text{Quicksort}) + O(\text{position update}))) = O(tn^2 + tnd)$
where n is the number of moths, t is the maximum number of iterations, and d is the number of variables.
To see how the MFO algorithm can theoretically be effective in solving optimization problems, some observations are:
• Moths always update their positions with respect to the best solutions obtained so far (the flames).
• Assigning a different flame to each moth and re-sorting the flames in every iteration promote exploration of the search space.
• The adaptive number of flames and the convergence constant r emphasize exploitation and convergence in the final iterations.
These observations make the MFO algorithm theoretically able to improve the initial random solutions and converge to a better point in the search space. The next section investigates the effectiveness of MFO in practice.
4. Results and discussion
The mathematical formulations of the employed test functions are presented in Table 1, Table 2, and Table 3. Since the original versions of the unimodal and multi-modal test functions are too simple, we rotate the test functions using the rotation matrix proposed by Lorio and Li [52] and shift their optima at every run to increase the difficulty of these functions. The shifted positions of the global optima are provided in Table 1 and Table 2 as well. We also consider 100 variables for the unimodal and multi-modal test functions to further increase their difficulty. Note that the composite test functions are taken from the CEC 2005 special session [53, 54].
Table 2. Multimodal benchmark functions
Function                          Dim    Range          Shift position       fmin
F8  (Schwefel)                    100    [-500, 500]    [-300, ..., -300]    -418.9829 x 5
F10 (Ackley)                      100    [-32, 32]      —                    0
F11 (Griewank)                    100    [-600, 600]    [-400, ..., -400]    0
F12, F13 (Generalized penalized)  100    [-50, 50]      [-100, ..., -100]    0
Since heuristic algorithms are stochastic optimization techniques, they have to be run more than 10 times to generate meaningful statistical results. It is common practice to run an algorithm on a problem m times and calculate the average, standard deviation, and/or median of the best solutions obtained in the last iteration as the metrics of performance. We follow the same method and report the results over 30 independent runs. However, average and standard deviation only compare the overall performance of algorithms. In addition to average and standard deviation, statistical tests should be conducted to confirm the significance of the results based on every single run [55]. With a statistical test, we can make sure that the results are not generated by chance. We conduct the non-parametric Wilcoxon statistical test and report the calculated p-values as metrics of the significance of the results as well.
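For illustration, the statistics and the rank-sum test can be computed as follows (a sketch using SciPy and hypothetical run data):

```python
import numpy as np
from scipy.stats import ranksums

# Hypothetical best-objective values of two algorithms over 30 independent runs
mfo_runs = np.random.lognormal(mean=-2.0, sigma=0.5, size=30)
other_runs = np.random.lognormal(mean=-1.0, sigma=0.5, size=30)

print("MFO:   avg = %.4e, std = %.4e" % (mfo_runs.mean(), mfo_runs.std(ddof=1)))
print("Other: avg = %.4e, std = %.4e" % (other_runs.mean(), other_runs.std(ddof=1)))

# Non-parametric Wilcoxon rank-sum test; p < 0.05 indicates a statistically significant difference
stat, p_value = ranksums(mfo_runs, other_runs)
print("Wilcoxon rank-sum p-value = %.4f" % p_value)
```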
In order to verify the performance of the proposed MFO algorithm, the latest versions of some of the well-known and recent algorithms in the literature are chosen for comparison: PSO [56], GSA [30], BA [22], FPA [45], SMS [46], FA [23], and GA [57]. Note that we utilize 30 search agents and 1000 iterations for each of the algorithms. It should be noted that the selection of the number of moths (or other candidate solutions in other algorithms) should be done experimentally. The more artificial moths, the higher the probability of determining the global optimum. However, we observed that 30 is a reasonable number of moths for solving optimization problems. For expensive problems, this number can be reduced to 20 or 10.
As Table 4 shows, the MFO algorithm provides the best results on four of the test functions. These results are followed by the FPA, PSO, and SMS algorithms. The p-values in Table 5, in addition, show that the superiority of the MFO algorithm is statistically significant. The MFO algorithm also provides very competitive results compared to GSA on F3, F4, and F7. The reason why the MFO algorithm does not provide superior results on three of the unimodal test functions is the selection of different flames for updating the positions of moths. This mechanism mostly promotes exploration, so the search agents spend a large number of iterations exploring the search space and avoiding local solutions. Since there are no local solutions in unimodal test functions, this mechanism slows down the exploitation of the MFO algorithm and prevents the algorithm from finding a very accurate approximation of the global optimum. Since the MFO algorithm shows the best results on 4 out of 7 unimodal test functions, however, it seems that this behaviour is not a major concern. It is evident that requiring moths to update their positions with respect to only the best flame would accelerate convergence and improve the accuracy of the results, but it would also have negative impacts on the exploration of the algorithm, which is a very important mechanism for avoiding local solutions. As discussed above, unimodal functions are suitable for benchmarking the exploitation of algorithms. Therefore, these results evidence the high exploitation capability of the MFO algorithm.
Table 4. Results of unimodal benchmark functions
Table 5. P-values of the Wilcoxon ranksum test over all runs (p>=0.05 have been underlined)
The statistical results of the algorithms on the multimodal test functions are presented in Table 6. It may be seen that the MFO algorithm highly outperforms the other algorithms on F8, F10, F11, and F13. Table 7 suggests that this behaviour is statistically significant. The MFO algorithm failed to show the best results on F9 and F12. As per the p-values in Table 7, the MFO algorithm is the only algorithm that provides a p-value greater than 0.05 on F12, which means that the superiority of the GSA algorithm is not statistically significant there. In other words, MFO and GSA perform very similarly and can both be considered the best algorithms when solving F12. On F9, however, the superiority of the GSA algorithm is statistically significant. It should be noted that the MFO algorithm outperforms all other algorithms except GSA on this test function. Due to the low discrepancy between the results of MFO and GSA, it is evident that both algorithms located the optimum, but it seems that MFO failed to exploit the obtained optimum and improve its accuracy. This is again due to the higher exploration of the MFO algorithm, which prevents it from finding an accurate approximation of F9's global optimum. Despite this behaviour on this test function, the results on the other multi-modal test functions strongly suggest that the high exploration of the MFO algorithm is a suitable mechanism for avoiding local solutions. Since the multi-modal functions have an exponential number of local solutions, these results show that the MFO algorithm is able to explore the search space extensively and find its promising regions. In addition, the high local optima avoidance of this algorithm is another finding that can be inferred from these results.
Table 6. Results of multimodal benchmark functions
Table 7. P-values of the Wilcoxon ranksum test over all runs (p>=0.05 have been underlined)
The rest of the results, which belong to F14 to F19, are provided in Table 8 and Table 9. The results are consistent with those of the other test functions, in that MFO shows very competitive results compared to the other algorithms. The p-values also show that the superiority of MFO is occasionally statistically significant. Although the MFO algorithm does not provide better results on half of the composite test functions (F15, F17, and F19), the p-values in Table 9 show that the results of this algorithm are still very competitive. The composite functions have very difficult search spaces, so an accurate approximation of their global optima requires high exploration and exploitation combined. The results evidence that the MFO algorithm properly balances these two conflicting milestones.
Table 9. P-values of the Wilcoxon ranksum test over all runs (p>=0.05 have been underlined)
So far, we have discussed the results in terms of exploration and exploitation. Although these results indirectly show that the MFO algorithm converges to a point in the search space and improves the initial solutions, we further investigate the convergence of the MFO algorithm in the following paragraphs. To confirm the convergence of the MFO algorithm, we calculate and discuss four metrics:
• Search history
• Trajectory of the first moth in its first dimension
• Average fitness of all moths
• Convergence rate
The experiments are re-done on some of the test functions with 2 variables, using 5 moths over 100 iterations. The results are provided in Fig. 8.
Figure 8. Search history, trajectory in first dimension, average fitness of all moths, and convergence rate
The first metric is a qualitative metric that shows the history of sampled points over the course of iterations. We illustrate the points sampled during optimization as black points in Fig. 8. It seems that the MFO algorithm follows a similar pattern on all of the test functions, in which the moths tend to explore promising regions of the search space and exploit very accurately around the global optima. These observations show that the MFO algorithm can be very effective in approximating the global optimum of optimization problems.
The second metric, which is another qualitative metric, shows the changes in the first dimension of the first moth during optimization. This metric assists us in observing whether the first moth (as a representative of all moths) faces abrupt movements in the initial iterations and gradual changes in the final iterations. According to Berg et al. [58], this behaviour can guarantee that a population-based algorithm eventually converges to a point and searches locally in the search space. The trajectories in Fig. 8 show that the first moth starts the optimization with sudden changes (more than 50% of the search space). This behaviour can guarantee the exploration of the search space. It may also be observed that the fluctuations gradually decrease over the course of iterations, a behaviour that guarantees the transition between exploration and exploitation. Eventually, the movement of the moth becomes very gradual, which causes exploitation of the search space.
The third metric is a quantitative measure that averages the fitness of all moths in each iteration. If an algorithm improves its candidate solutions, obviously, the average fitness should improve over the course of iterations. As the average fitness curves in Fig. 8 suggest, the MFO algorithm shows decreasing (i.e., improving) average fitness on all of the test functions. Another fact worth mentioning here is the accelerated decrease of the average fitness curves, which shows that the improvement of the candidate solutions becomes faster over the course of iterations. This is due to the fact that the MFO algorithm adaptively decreases the number of flames, so the moths tend to converge and fly around fewer flames as the iterations increase. In addition, the adaptive convergence constant (r) promotes this behaviour.
The last quantitative comparison metric here is the convergence rate of the MFO algorithm. We save the fitness of the best flame in each iteration and draw the convergence curves in Fig. 8. The reduction of fitness over the iterations demonstrates the convergence of the MFO algorithm. It is also interesting that the accelerated decrease can be observed in the convergence curves as well, for the reason discussed above.
In summary, the results of this section experimentally show that the MFO algorithm is able to provide very competitive results and occasionally outperforms other well-known algorithms on the test functions. In addition, the convergence of the MFO algorithm was examined by two qualitative and two quantitative measures. Therefore, it can be expected that the MFO algorithm is effective in solving real problems as well. Since constraints are one of the major challenges in solving real problems and the main objective of designing the MFO algorithm is to solve real problems, we employ nine constrained engineering design problems in the next section to further investigate the performance of the MFO algorithm and provide a comprehensive study.
5. Constrained engineering design problems
Constraint handling refers to the process of considering both inequality and equality constraints during optimization. Constraints divide the candidate solutions of heuristic algorithms into two groups: feasible and infeasible. According to Coello Coello [59], there are different methods of handling constraints: penalty functions, special operators, repair algorithms, separation of objectives and constraints, and hybrid methods. Among these techniques, the most straightforward are penalty functions. Such methods penalize infeasible candidate solutions and convert a constrained optimization problem into an unconstrained one. There are different types of penalty functions as well [59]: static, dynamic, annealing, adaptive, co-evolutionary, and death penalty. The last penalty function, the death penalty, is the simplest method, which assigns a large objective value to infeasible solutions (in case of minimization). This process automatically causes the heuristic algorithm to discard infeasible solutions during optimization. The advantages of this method are simplicity and low computational cost. However, this method does not utilize the information of infeasible solutions, which might be helpful when solving problems whose search spaces are dominated by infeasible regions. For the sake of simplicity, we equip the MFO algorithm with a death penalty function in this section to handle constraints. Therefore, a moth is assigned a large objective value if it violates any of the constraints.
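A minimal death-penalty wrapper, under the assumption that each constraint is expressed as a function g(x) that must satisfy g(x) ≤ 0, could look like this (a sketch, not the exact scheme used in the experiments):

```python
import numpy as np

def death_penalty(objective, constraints, big=1e20):
    """Wrap an objective so that any constraint violation returns a huge value.

    constraints: list of functions g(x) that must satisfy g(x) <= 0 for feasibility
    """
    def penalized(x):
        if any(g(x) > 0 for g in constraints):
            return big                      # infeasible moth: assign a very large objective
        return objective(x)
    return penalized

# Example: minimize x0 + x1 subject to x0^2 + x1^2 >= 1 (written as 1 - x0^2 - x1^2 <= 0)
f = death_penalty(lambda x: x[0] + x[1], [lambda x: 1.0 - x[0] ** 2 - x[1] ** 2])
print(f(np.array([0.0, 0.0])), f(np.array([1.0, 0.0])))
```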
5.1. Welded beam design problem
This problem is a well-known problem in the field of structural optimization [60], in which the fabrication cost of a welded beam should be minimized. As can be seen in Fig. 9 and the Appendix, there are four parameters and seven constraints for this problem.
Figure 9. Design parameters of the welded beam design problem
We solve this problem using the MFO algorithm and compare it to GSA, GA [61-63], CPSO [64], HS [65], Richardson's random method, the Simplex method, Davidon-Fletcher-Powell, and Griffith and Stewart's successive linear approximation [66]. Table 10 shows the best obtained results.
The results in Table 10 show that the MFO algorithm is able to find the best optimal design compared to the other algorithms. The results of MFO are closely followed by the CPSO algorithm.
5.2. Gear train design problem
This is a mechanical engineering problem, in which the objective is to minimize the gear ratio (the ratio between the angular velocities of the output and input shafts) for a given train of four gears [70, 71]. The parameters are the numbers of teeth of the gears, so there is a total of 4 variables for this problem. There are no functional constraints in this problem, but we consider the ranges of the variables as constraints. The overall schematic of the system is illustrated in Fig. 10.
Figure 10. Schematic of the gear train design problem (gears A, B, C, and D)
We solve this problem with MFO and compare the results to ABC, MBA, GA, CS, and ISA in
Table 11.
Table 11. Comparison results of the gear train design problem
Algorithm                nA    nB    nC    nD    f
MFO                      43    19    16    49    2.7009e-012
ABC [40]                 49    16    19    43    2.7009e-012
MBA [40]                 43    16    19    49    2.7009e-012
GA [72]                  49    16    19    43    2.7019e-012
CS [73]                  43    16    19    49    2.7009e-012
ISA [70]                 N/A   N/A   N/A   N/A   2.7009e-012
Kannan and Kramer [74]   33    15    13    41    2.1469e-08
The gear train is a discrete problem, so we round the position of moths in each iteration for
solving this problem. Table 11 shows that the MFO algorithm finds the same optimal gear ratio
value compared to ABC, MBA, CS, and ISA. This proves that MFO can be effective in solving
discrete problems as well. It is also worth noticing here that although the gear ratio is equal, the
obtained optimal design parameters are different. So, MFO finds a new optimal design for this
problem.
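Because the gear train problem (and the truss problems later in this section) has discrete variables, a simple way to adapt the continuous moth positions is to snap each variable to the nearest allowed value before evaluation; the sketch below assumes the allowed values are listed explicitly, and the teeth range used in the example is an assumption:

```python
import numpy as np

def snap_to_discrete(position, allowed_values):
    """Round each continuous variable to the nearest value in its discrete set."""
    allowed = np.asarray(allowed_values, dtype=float)
    pos = np.asarray(position, dtype=float)
    idx = np.abs(allowed[None, :] - pos[:, None]).argmin(axis=1)
    return allowed[idx]

# Example: numbers of gear teeth restricted to integers between 12 and 60
teeth_values = np.arange(12, 61)
print(snap_to_discrete([43.4, 18.7, 16.2, 48.9], teeth_values))   # -> [43. 19. 16. 49.]
```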
5.3. Three-bar truss design problem
The three-bar truss design problem is another structural optimization problem in the field of civil engineering, in which two parameters should be manipulated in order to achieve the least weight subject to stress, deflection, and buckling constraints. This problem has been widely utilized because of its difficult constrained search space [40, 73]. The different components of this problem can be seen in Fig. 11. The formulation of this problem is also available in the Appendix.
Figure 11. Three-bar truss design problem
We again solve this problem using MFO and compare its results to DEDS, PSO-DE, MBA, Tsa,
and CS algorithms in the literature. Table 12 includes the optimal values for the variables and the
optimal weights obtained.
Table 12. Comparison results of the three-bar truss design problem
Algorithm            x1                    x2                    Optimal weight
MFO 0.788244770931922 0.409466905784741 263.895979682
DEDS [75] 0.78867513 0.40824828 263.8958434
PSO-DE [76] 0.7886751 0.4082482 263.8958433
MBA [40] 0.7885650 0.4085597 263.8958522
Ray and Sain [77] 0.795 0.395 264.3
Tsa [78] 0.788 0.408 263.68
CS [73] 0.78867 0.40902 263.9716
The results of the algorithms in three-bar truss design problem show that MFO outperforms
three of the algorithms.
5.4. Pressure vessel design problem
This problem, which is very popular in the literature, has four parameters and four constraints. The objective is to obtain a design for a pressure vessel with the least fabrication cost. Fig. 12 shows the pressure vessel and the parameters involved in the design [73, 79].
We optimize the structure of the pressure vessel with MFO and compare the results to those of GSA, PSO [67], GA [68, 80, 81], ES [82], DE [83], ACO [84], the augmented Lagrangian multiplier method [85], and branch-and-bound [86] in Table 13.
Table 13. Comparison results for pressure vessel design problem
Algorithm                      Ts        Th        R            L            Optimum cost
MFO 0.8125 0.4375 42.098445 176.636596 6059.7143
GSA 1.1250 0.6250 55.988659 84.4542025 8538.8359
PSO [67] 0.8125 0.4375 42.091266 176.746500 6061.0777
GA [80] 0.8125 0.4345 40.323900 200.000000 6288.7445
GA [68] 0.8125 0.4375 42.097398 176.654050 6059.9463
GA [81] 0.9375 0.5000 48.329000 112.679000 6410.3811
ES [82] 0.8125 0.4375 42.098087 176.640518 6059.7456
DE [83] 0.8125 0.4375 42.098411 176.637690 6059.7340
ACO [84] 0.8125 0.4375 42.103624 176.572656 6059.0888
Lagrangian Multiplier [85] 1.1250 0.6250 58.291000 43.6900000 7198.0428
Branch-bound [86] 1.1250 0.6250 47.700000 117.701000 8129.1036
This table shows that the MFO algorithm finds the second lowest-cost design. The problem formulation in the Appendix shows that this problem is highly constrained, so the results evidence the merits of MFO in solving such problems.
5.5. Cantilever beam design problem
The cantilever beam consists of five hollow blocks. Fig. 13 and the problem formulation in the Appendix show that the blocks have square cross-sections, so the number of parameters is five. There is also one constraint that should not be violated by the final optimal design. The comparison results between MFO and MMA, GCA_I, GCA_II, CS, and SOS are provided in Table 14.
The problem formulation for this case study in the appendix shows that the constraints are only
applied to the variables ranges. This problem is different from other employed problems in this
section, so it can mimic another characteristic of real problems. The results in Table 14 testify
that the MFO algorithm is able to solve these types of problems efficiently as well. The results
evidence that the design with minimum weight belongs to the proposed algorithm.
5.6. I-beam design problem
Another structural optimization problem employed in this section is the I-beam design problem. This problem deals with the design of an I-shaped beam (as shown in Fig. 14) to achieve minimal vertical deflection. The length, height, and two thicknesses are the structural parameters of this problem. The formulation of this problem in the Appendix shows that it has a constraint as well.
The results of MFO on this problem are compared to those of adaptive response surface
method (ARSM), Improved ARSM (IARSM), CS, and SOS in the literature. Table 15 shows the
experimental results on this problem.
This table shows that the MFO algorithm is able to find a design with minimal vertical deflection
compared to other algorithms. It is worth mentioning here that the improved vertical deflection
is very significant in this case study.
5.7. Tension/compression spring design
Another classical engineering test problem is the tension/compression spring design problem. The objective is again the minimization of the fabrication cost of a spring with three structural parameters [68, 89, 90]: wire diameter (d), mean coil diameter (D), and the number of active coils (N). Fig. 15 shows the spring and its parameters.
There are several solutions to this problem in the literature. This problem has been solved using meta-heuristics such as PSO [67], ES [82], GA [80], HS [79], and DE [83]. The mathematical approaches are a numerical optimization technique (constraints correction at constant cost) [89] and a mathematical optimization technique [90]. We compare the best results of MFO with those of all the above-mentioned methods in Table 16. Note that we use a similar penalty function for MFO to perform a fair comparison [91].
Table 16 shows that MFO performs very effectively when solving this problem and provides the best design. The results of MFO are very close to those of HS, DE, and PSO.
5.8. 15-bar truss design problem
This problem is a structural design problem, in which the objective is to minimize the weight of a 15-bar truss. The final optimal design for this problem should satisfy 46 constraints: 15 tension, 15 compression, and 16 displacement constraints. There are 8 nodes and 15 bars, as shown in Fig. 16, so there are 15 variables in total. It may also be seen in this figure that three loads are applied to the nodes P1, P2, and P3. The other assumptions for this problem are as follows:
• ρ = 7800 kg/m³
• E = 200 GPa
• Stress limitation = ±120 MPa
• Displacement limitation in both directions = ±10 mm
• Design variable set (mm²): {113.2, 143.2, 145.9, 174.9, 185.9, 235.9, 265.9, 297.1, 308.6, 334.3, 338.2, 497.8, 507.6, 736.7, 791.2, 1063.7}
This problem has been widely solved in the literature considering loads of P1 = 35 kN, P2 = 35 kN, and P3 = 35 kN [92, 93]. We solve this problem using 30 search agents over 500 iterations and present the results in Table 17. Since this problem is a discrete problem, the search agents of MFO were simply rounded to the nearest integer during optimization.
Figure 16. Structure of the 15-bar truss design problem (8 nodes, 15 bars, and loads P1, P2, and P3)
Table 17. Comparison of MFO optimization results with literature for the 15-bar truss design problem
Variables (mm2) GA [30] PSO [31] PSOPC [31] HPSO [31] MBA [93] SOS [41] MFO
A1 308.6 185.9 113.2 113.2 113.2 113.2 113.2
A2 174.9 113.2 113.2 113.2 113.2 113.2 113.2
A3 338.2 143.2 113.2 113.2 113.2 113.2 113.2
A4 143.2 113.2 113.2 113.2 113.2 113.2 113.2
A5 736.7 736.7 736.7 736.7 736.7 736.7 736.7
A6 185.9 143.2 113.2 113.2 113.2 113.2 113.2
A7 265.9 113.2 113.2 113.2 113.2 113.2 113.2
A8 507.6 736.7 736.7 736.7 736.7 736.7 736.7
A9 143.2 113.2 113.2 113.2 113.2 113.2 113.2
A10 507.6 113.2 113.2 113.2 113.2 113.2 113.2
A11 279.1 113.2 113.2 113.2 113.2 113.2 113.2
A12 174.9 113.2 113.2 113.2 113.2 113.2 113.2
A13 297.1 113.2 185.9 113.2 113.2 113.2 113.2
A14 235.9 334.3 334.3 334.3 334.3 334.3 334.3
A15 265.9 334.3 334.3 334.3 334.3 334.3 334.3
Optimal Weight (kg) 142.117 108.84 108.96 105.735 105.735 105.735 105.735
Table 17 shows that the MFO algorithm is able to find a similar structure compared to those of
HPSO, SOS, and MBA. This is the best obtained optimum so far in the literature for this
problem. Therefore, these results show that MFO is able to provide very competitive results in
solving this problem as well.
5.9. 52-bar truss design problem
This problem is another popular truss design problem. As may be seen in Fig. 17, there are 52 bars and 20 nodes, of which four are fixed. The 52 bars are classified into the following 12 groups:
• Group 1: A1, A2, A3, A4
• Group 2: A5, A6, A7, A8, A9, A10
• Group 3: A11, A12, A13
• Group 4: A14, A15, A16, A17
• Group 5: A18, A19, A20, A21, A22, A23
• Group 6: A24, A25, A26
• Group 7: A27, A28, A29, A30
• Group 8: A31, A32, A33, A34, A35, A36
• Group 9: A37, A38, A39
• Group 10: A40, A41, A42, A43
• Group 11: A44, A45, A46, A47, A48, A49
• Group 12: A50, A51, A52
Therefore, this problem has 12 parameters to be optimized. The other assumptions for this problem are as follows:
• ρ = 7860 kg/m³
• E = 207 GPa
• Stress limitation = ±180 MPa
• Design variables are chosen from the cross-section areas listed in Table 18
• Loads: 100 kN and 200 kN
Figure 17. Structure of the 52-bar truss design problem (A = 3 m, B = 2 m)
Table 18. Available cross-section areas of the AISC norm (valid values for the parameters)
No. in.2 mm2 No. in.2 mm2
1 0.111 71.613 33 3.84 2477.414
2 0.141 90.968 34 3.87 2496.769
3 0.196 126.451 35 3.88 2503.221
4 0.25 161.29 36 4.18 2696.769
5 0.307 198.064 37 4.22 2722.575
6 0.391 252.258 38 4.49 2896.768
7 0.442 285.161 39 4.59 2961.284
8 0.563 363.225 40 4.8 3096.768
9 0.602 388.386 41 4.97 3206.445
10 0.766 494.193 42 5.12 3303.219
11 0.785 506.451 43 5.74 3703.218
12 0.994 641.289 44 7.22 4658.055
13 1 645.16 45 7.97 5141.925
14 1.228 792.256 46 8.53 5503.215
15 1.266 816.773 47 9.3 5999.988
16 1.457 939.998 48 10.85 6999.986
17 1.563 1008.385 49 11.5 7419.34
18 1.62 1045.159 50 13.5 8709.66
19 1.8 1161.288 51 13.9 8967.724
20 1.99 1283.868 52 14.2 9161.272
21 2.13 1374.191 53 15.5 9999.98
22 2.38 1535.481 54 16 10322.56
23 2.62 1690.319 55 16.9 10903.2
24 2.63 1696.771 56 18.8 12129.01
25 2.88 1858.061 57 19.9 12838.68
26 2.93 1890.319 58 22 14193.52
27 3.09 1993.544 59 22.9 14774.16
28 3.13 2019.351 60 24.5 15806.42
29 3.38 2180.641 61 26.5 17096.74
30 3.47 2238.705 62 28 18064.48
31 3.55 2290.318 63 30 19354.8
32 3.63 2341.931 64 33.5 21612.86
Available cross-section areas of the AISC norm for this problem are available in Table 18. We
again employ 30 search agents over 500 iterations for solving this problem. Similarly to 15-bar
truss design, the search agents of MFO were simply rounded to the nearest integer number
during optimization since this problem is a discrete problem. The results are presented and
compared to several algorithms in the literature in Table 19.
Table 19. Comparison of MFO optimization results with literature for the 52-bar truss design problem
Variables (mm2) PSO [92] PSOPC [92] HPSO [92] DHPSACO [94] MBA [93] SOS [41] MFO
A1 - A4 4658.055 5999.988 4658.055 4658.055 4658.055 4658.055 4658.055
A5 - A10 1374.19 1008.38 1161.288 1161.288 1161.288 1161.288 1161.288
A11 - A13 1858.06 2696.77 363.225 494.193 494.193 494.193 494.193
A14 - A17 3206.44 3206.44 3303.219 3303.219 3303.219 3303.219 3303.219
A18 - A23 1283.87 1161.29 940 1008.385 940 940 940
A24 - A26 252.26 729.03 494.193 285.161 494.193 494.193 494.193
A27 - A30 3303.22 2238.71 2238.705 2290.318 2238.705 2238.705 2238.705
A31 - A36 1045.16 1008.38 1008.385 1008.385 1008.385 1008.385 1008.385
A37 - A39 126.45 494.19 388.386 388.386 494.193 494.193 494.193
A40 - A43 2341.93 1283.87 1283.868 1283.868 1283.868 1283.868 1283.868
A44 - A49 1008.38 1161.29 1161.288 1161.288 1161.288 1161.288 1161.288
A50 - A52 1045.16 494.19 792.256 506.451 494.193 494.193 494.193
Optimal weight (kg) 2230.16 2146.63 1905.495 1904.83 1902.605 1902.605 1902.605
As per the results in Table 19, the best optimal weight obtained by MFO is 1902.605, which is
identical to the optimal weights found by SOS and MBA. It is evident from the results that
MFO, SOS, and MBA significantly outperformed PSO, PSOPC, HPSO, and DHPSACO.
In summary, the results of this section show that MFO outperforms other algorithms in the majority of the case studies. Since the search spaces of these problems are unknown, these results are strong evidence of the applicability of MFO to solving real problems. Due to the constrained nature of the case studies, in addition, it can be stated that the MFO algorithm is able to optimize search spaces with infeasible regions as well. This is due to the update mechanism of moths, in which they are required to update their positions with respect to the best recent feasible flames. Therefore, this mechanism promotes exploration around promising feasible regions and is the main reason for the superiority of the MFO algorithm.
To further demonstrate the performance of the proposed MFO algorithm, the next section is devoted to a real application of MFO in the field of Computational Fluid Dynamics (CFD) problems.
6. Marine propeller design using MFO
In this section, we optimize the shape of a marine propeller. The selected propeller is a ship propeller with a diameter of two meters. The shape of the propeller is illustrated in Fig. 18.
Figure 18. Fixed pitch ship propeller with 4 blades and 2 meter diameter
Generally speaking, the efficiency of a marine propeller is a critical performance metric because of the density of water. The ultimate goal when designing a marine propeller is to convert the rotational power of the motor to thrust with the least loss. Note that there is always a 1 to 5 percent intrinsic loss due to swirl for marine propellers. The efficiency of marine propellers is calculated as follows [95]:
$\eta(x) = \frac{V\,K_T(x)}{2\pi n D\,K_Q(x)}$   (6.1)
where V is the axial velocity, D is the diameter of the propeller, n is the rotation speed of the propeller, KT indicates the thrust coefficient, and KQ indicates the torque coefficient.
KT is calculated as follows:
$K_T(x) = \frac{T}{\rho n^2 D^4}$   (6.2)
where ρ denotes the fluid density, T is the thrust, n indicates the rotation speed of the propeller, and D is the diameter.
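For illustration, Eq. (6.1) can be evaluated directly once the coefficients are known; the operating point in this sketch is made up and not the optimized propeller of this section:

```python
import math

def propeller_efficiency(V, n, D, KT, KQ):
    """Open-water efficiency of a propeller, Eq. (6.1): eta = V*KT / (2*pi*n*D*KQ)."""
    return V * KT / (2.0 * math.pi * n * D * KQ)

# Hypothetical operating point: 2 m propeller, axial velocity 6 m/s, 3.4 rev/s
print(propeller_efficiency(V=6.0, n=3.4, D=2.0, KT=0.30, KQ=0.061))
```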
In order to mathematically model the shape of the blades, Bézier curves can be chosen. In this method, a set of control points defines the shape of the airfoils along the blades. Another method of designing a propeller is to select and define the type and shape of the airfoils along the blades. For simplicity, we utilize the second method. In the employed propeller, the blade's airfoils are determined by the NACA a=0.8 meanline and NACA 65A010 thickness sections. The shape of the airfoils across the blade defines the final shape of the propeller.
As shown in Fig. 19, in this work we divide the blades into 10 cross sections, and each cross section has two parameters: chord length and thickness. Therefore, there are 20 parameters for this problem.
Figure 19. Airfoils along the blade define the shape of the propeller (each cross section is described by its chord length and maximum thickness) [96]
After all, the problem of propeller design is formulated as a maximization problem in which the efficiency in Eq. (6.1) is the objective function. CFD problems are usually very challenging, with search spaces dominated by infeasible regions. Therefore, this problem is a very hard test bed for the proposed MFO algorithm. It is also worth mentioning here that propeller design is an expensive problem because each function evaluation takes around 2 to 4 minutes. Note that we utilize a freeware called OpenProp as the simulator for calculating the efficiency [97]. The constant parameters of the propeller during optimization are as follows:
We employ 60 moths over 500 iterations to solve this problem. The best obtained design
parameters are presented in Table 20.
Table 20. Best design parameters obtained by MFO
Chord1  = 0.13        Thickness1  = 0.14
Chord2  = 0.160036    Thickness2  = 0.175114
Chord3  = 0.184335    Thickness3  = 0.185283
Chord4  = 0.174       Thickness4  = 0.144001
Chord5  = 0.11        Thickness5  = 0.0008
Chord6  = 0.03998     Thickness6  = 0.031706
Chord7  = 0.018       Thickness7  = 0.016645
Chord8  = 0.013051    Thickness8  = 0.010043
Chord9  = 0.007164    Thickness9  = 0.006201
Chord10 = 0.003       Thickness10 = 1.00E-05
The best efficiency obtained by the MFO algorithm was 0.6942. Other characteristics of the
obtained optimal design are presented in Table 21.
Table 21. Performance of the obtained optimal design
Name Value
J 0.88235
KT 0.30382
KQ 0.06146
Effy 0.6942
AdEffy 0.82919
CT 0.99374
CQ 0.40205
CP 1.4315
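As a quick consistency check of Table 21 against Eq. (6.1), note that with the standard advance ratio J = V/(nD) (assumed here, since J is not defined explicitly in this section), the efficiency can be rewritten as \eta = \frac{J}{2\pi}\cdot\frac{K_T}{K_Q}; substituting the tabulated values gives

\eta = \frac{0.88235}{2\pi}\times\frac{0.30382}{0.06146} \approx 0.694,

which agrees with the reported Effy = 0.6942 up to rounding.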
The convergence of MFO when solving this problem is illustrated in Fig. 20. Note that we only
show the convergence between iterations 214 and 500 since there were no feasible solutions during
iterations 1 to 213.
Figure 20. Convergence of the MFO algorithm when solving the propeller design problem
The 2D images of the obtained optimized airfoils along five cross sections of the blade are
illustrated in Fig. 21.
Figure 21. 2D blade image of the obtained optimal design using MFO
To see how the MFO algorithm found the optimal shape for the propeller, Fig. 22 illustrates the
initial infeasible random design created in the first iteration and the final feasible optimal design
obtained in the last iteration. It may be observed how the proposed algorithm effectively
and efficiently found a smooth shape for the blades to maximize the overall efficiency of the
propeller.
Figure 22. Improved design from initial infeasible design to feasible optimal design
As discussed above, propeller design is a CFD problem with a search space dominated by infeasible regions.
The results of this section clearly demonstrate the applicability of the proposed algorithm to
challenging real problems with unknown and constrained search spaces. Therefore, it can
be stated that the MFO algorithm has merits in solving such problems.
7- Conclusion
In this paper, the transverse orientation of moths was modelled to propose a new stochastic
population-based algorithm. In fact, the spiral convergence toward artificial lights was the main
inspiration of the MFO algorithm. The algorithm was equipped with several operators to explore
and exploit the search space. In order to benchmark the performance of MFO, three phases of
tests were conducted: test functions, classical engineering problems, and a CFD problem. In
addition, the results were compared to those of PSO, GA, and GSA for verification. In the first test phase,
19 test functions were employed to benchmark the performance of MFO from different
perspectives. It was observed that the MFO algorithm is able to show high and competitive
exploration in multi-modal functions and exploitation in unimodal functions. Moreover, the
results of the composite test functions prove that MFO balances exploration and exploitation
properly. The first test phase also considered the observation and investigation of MFO's
convergence behaviour.
In the second test phase, seven classical engineering test problems were employed to further
investigate the effectiveness of MFO in practice. The problems were welded beam design, gear
train design, three-bar truss design, pressure vessel design, cantilever design, I-beam design, and
tension/compression spring design. The results proved that the MFO algorithm can also be
effective in solving challenging problems with unknown search spaces. The results of this
algorithm were compared to those of a variety of algorithms in the literature. The second test phase also
considered constrained and discrete problems to observe the performance of the MFO
algorithm in solving problems with different characteristics. Eventually, the last test phase
demonstrated the application of the MFO algorithm in the field of propeller design. The
employed problem was a highly constrained and expensive problem. However, the MFO
algorithm readily optimized the structure of the employed propeller and improved its efficiency.
According to this comprehensive comparative study, the following concluding remarks can be
made:
For future work, several research directions can be recommended. Firstly, the effect of different
spirals on improving the performance of MFO is worth researching. Secondly, a binary
version of the MFO algorithm can be another interesting future work. Last but not least, the
proposal of specific operators to solve multi-objective problems is recommended.
References
[1] J. H. Holland, "Genetic algorithms," Scientific american, vol. 267, pp. 66-72, 1992.
[2] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of
the sixth international symposium on micro machine and human science, 1995, pp. 39-43.
[3] A. Colorni, M. Dorigo, and V. Maniezzo, "Distributed optimization by ant colonies," in
Proceedings of the first European conference on artificial life, 1991, pp. 134-142.
[4] R. Storn and K. Price, "Differential evolution–a simple and efficient heuristic for global
optimization over continuous spaces," Journal of global optimization, vol. 11, pp. 341-359, 1997.
[5] I. Rechenberg, "Evolution Strategy: Optimization of Technical systems by means of biological
evolution," Fromman-Holzboog, Stuttgart, vol. 104, 1973.
[6] L. J. Fogel, A. J. Owens, and M. J. Walsh, "Artificial intelligence through simulated evolution,"
1966.
[7] X. Yao, Y. Liu, and G. Lin, "Evolutionary programming made faster," Evolutionary Computation,
IEEE Transactions on, vol. 3, pp. 82-102, 1999.
[8] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," Evolutionary
Computation, IEEE Transactions on, vol. 1, pp. 67-82, 1997.
[9] F. Glover, "Tabu search-part I," ORSA Journal on computing, vol. 1, pp. 190-206, 1989.
[10] L. Davis, "Bit-Climbing, Representational Bias, and Test Suite Design," in ICGA, 1991, pp. 18-
23.
[11] H. R. Lourenço, O. C. Martin, and T. Stutzle, "Iterated local search," arXiv preprint math/0102188,
2001.
[12] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, "Optimization by simmulated annealing," science,
vol. 220, pp. 671-680, 1983.
[13] A. E. Eiben and C. Schippers, "On evolutionary exploration and exploitation," Fundamenta
Informaticae, vol. 35, pp. 35-50, 1998.
[14] E. Alba and B. Dorronsoro, "The exploration/exploitation tradeoff in dynamic cellular genetic
algorithms," Evolutionary Computation, IEEE Transactions on, vol. 9, pp. 126-142, 2005.
[15] D. Simon, "Biogeography-based optimization," Evolutionary Computation, IEEE Transactions on, vol.
12, pp. 702-713, 2008.
[16] C. Liu, M. Han, and X. Wang, "A novel evolutionary membrane algorithm for global numerical
optimization," in Intelligent Control and Information Processing (ICICIP), 2012 Third International
Conference on, 2012, pp. 727-732.
[17] O. Montiel, O. Castillo, P. Melin, A. R. Díaz, and R. Sepúlveda, "Human evolutionary model: A
new approach to optimization," Information Sciences, vol. 177, pp. 2075-2098, 2007.
[18] A. Farasat, M. B. Menhaj, T. Mansouri, and M. R. S. Moghadam, "ARO: A new model-free
optimization algorithm inspired from asexual reproduction," Applied Soft Computing, vol. 10, pp.
1284-1292, 2010.
[19] K. Krishnanand and D. Ghose, "Glowworm swarm optimisation: a new method for optimising
multi-modal functions," International Journal of Computational Intelligence Studies, vol. 1, pp. 93-119,
2009.
[20] D. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, and M. Zaidi, "The bees algorithm-a
novel tool for complex optimisation problems," in Proceedings of the 2nd Virtual International
Conference on Intelligent Production Machines and Systems (IPROMS 2006), 2006, pp. 454-459.
[21] D. Karaboga and B. Basturk, "A powerful and efficient algorithm for numerical function
optimization: artificial bee colony (ABC) algorithm," Journal of global optimization, vol. 39, pp. 459-
471, 2007.
[22] X.-S. Yang, "A new metaheuristic bat-inspired algorithm," in Nature inspired cooperative strategies for
optimization (NICSO 2010), ed: Springer, 2010, pp. 65-74.
[23] X. S. Yang, "Firefly algorithm," Engineering Optimization, pp. 221-230, 2010.
[24] X.-S. Yang and S. Deb, "Cuckoo search via Lévy flights," in Nature & Biologically Inspired
Computing, 2009. NaBIC 2009. World Congress on, 2009, pp. 210-214.
[25] R. Rajabioun, "Cuckoo Optimization Algorithm," Applied Soft Computing, vol. 11, pp. 5508-5518,
2011.
[26] S. Mirjalili, S. M. Mirjalili, and A. Lewis, "Grey wolf optimizer," Advances in Engineering Software,
vol. 69, pp. 46-61, 2014.
[27] A. Kaveh and N. Farhoudi, "A new optimization method: Dolphin echolocation," Advances in
Engineering Software, vol. 59, pp. 53-70, 2013.
[28] R. Oftadeh, M. J. Mahjoob, and M. Shariatpanahi, "A novel meta-heuristic optimization
algorithm inspired by group hunting of animals: Hunting search," Computers & Mathematics with
Applications, vol. 60, pp. 2087-2098, 2010.
[29] W.-T. Pan, "A new fruit fly optimization algorithm: taking the financial distress model as an
example," Knowledge-Based Systems, vol. 26, pp. 69-74, 2012.
[30] E. Rashedi, H. Nezamabadi-Pour, and S. Saryazdi, "GSA: a gravitational search algorithm,"
Information Sciences, vol. 179, pp. 2232-2248, 2009.
[31] A. Y. Lam and V. O. Li, "Chemical-reaction-inspired metaheuristic for optimization," Evolutionary
Computation, IEEE Transactions on, vol. 14, pp. 381-399, 2010.
[32] B. Alatas, "A novel chemistry based metaheuristic optimization method for mining of
classification rules," Expert Systems with Applications, vol. 39, pp. 11080-11088, 2012.
[33] A. Kaveh and S. Talatahari, "A novel heuristic optimization method: charged system search,"
Acta Mechanica, vol. 213, pp. 267-289, 2010/09/01 2010.
[34] A. Kaveh and M. Khayatazad, "A new meta-heuristic method: Ray Optimization," Computers &
Structures, vol. 112–113, pp. 283-294, 2012.
[35] A. Hatamlou, "Black hole: A new heuristic optimization approach for data clustering," Information
Sciences, vol. 222, pp. 175-184, 2013.
[36] R. A. Formato, "Central force optimization: a new nature inspired computational framework for
multidimensional search and optimization," in Nature Inspired Cooperative Strategies for Optimization
(NICSO 2007), ed: Springer, 2008, pp. 221-238.
[37] S. Moein and R. Logeswaran, "KGMO: A swarm optimization algorithm based on the kinetic
energy of gas molecules," Information Sciences, vol. 275, pp. 127-144, 2014.
[38] M. Abdechiri, M. R. Meybodi, and H. Bahrami, "Gases Brownian Motion Optimization: an
Algorithm for Optimization (GBMO)," Applied Soft Computing, vol. 13, pp. 2932-2946, 2013.
[39] Z. W. Geem, J. H. Kim, and G. Loganathan, "A new heuristic optimization algorithm: harmony
search," Simulation, vol. 76, pp. 60-68, 2001.
[40] A. Sadollah, A. Bahreininejad, H. Eskandar, and M. Hamdi, "Mine blast algorithm: A new
population based algorithm for solving constrained engineering optimization problems," Applied
Soft Computing, vol. 13, pp. 2592-2612, 2013.
[41] M.-Y. Cheng and D. Prayogo, "Symbiotic Organisms Search: A new metaheuristic optimization
algorithm," Computers & Structures, vol. 139, pp. 98-112, 2014.
[42] N. Moosavian and B. Kasaee Roodsari, "Soccer league competition algorithm: A novel meta-
heuristic algorithm for optimal design of water distribution networks," Swarm and Evolutionary
Computation, vol. 17, pp. 14-24, 2014.
[43] C. Dai, Y. Zhu, and W. Chen, "Seeker optimization algorithm," in Computational Intelligence and
Security, ed: Springer, 2007, pp. 167-176.
[44] S. Salcedo-Sanz, A. Pastor-Sánchez, D. Gallo-Marazuela, and A. Portilla-Figueras, "A Novel
Coral Reefs Optimization Algorithm for Multi-objective Problems," in Intelligent Data Engineering
and Automated Learning – IDEAL 2013. vol. 8206, H. Yin, K. Tang, Y. Gao, F. Klawonn, M. Lee,
T. Weise, et al., Eds., ed: Springer Berlin Heidelberg, 2013, pp. 326-333.
[45] X.-S. Yang, "Flower pollination algorithm for global optimization," in Unconventional Computation
and Natural Computation, ed: Springer, 2012, pp. 240-249.
[46] E. Cuevas, A. Echavarría, and M. A. Ramírez-Ortegón, "An optimization algorithm inspired by
the States of Matter that improves the balance between exploration and exploitation," Applied
Intelligence, vol. 40, pp. 256-272, 2014.
[47] K. J. Gaston, J. Bennie, T. W. Davies, and J. Hopkins, "The ecological impacts of nighttime light
pollution: a mechanistic appraisal," Biological reviews, vol. 88, pp. 912-927, 2013.
[48] K. D. Frank, C. Rich, and T. Longcore, "Effects of artificial night lighting on moths," Ecological
consequences of artificial night lighting, pp. 305-344, 2006.
[49] J. Digalakis and K. Margaritis, "On benchmarking functions for genetic algorithms," International
journal of computer mathematics, vol. 77, pp. 481-506, 2001.
[50] M. Molga and C. Smutnicki, "Test functions for optimization needs," Test functions for optimization
needs, 2005.
[51] X.-S. Yang, "Test problems in optimization," arXiv preprint arXiv:1008.0549, 2010.
[52] A. W. Iorio and X. Li, "Rotated test problems for assessing the performance of multi-objective
optimization algorithms," in Proceedings of the 8th annual conference on Genetic and evolutionary
computation, 2006, pp. 683-690.
[53] J. Liang, P. Suganthan, and K. Deb, "Novel composition test functions for numerical global
optimization," in Swarm Intelligence Symposium, 2005. SIS 2005. Proceedings 2005 IEEE, 2005, pp.
68-75.
[54] P. N. Suganthan, N. Hansen, J. J. Liang, K. Deb, Y.-P. Chen, A. Auger, et al., "Problem
definitions and evaluation criteria for the CEC 2005 special session on real-parameter
optimization," KanGAL Report, vol. 2005005, 2005.
[55] J. Derrac, S. García, D. Molina, and F. Herrera, "A practical tutorial on the use of nonparametric
statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms,"
Swarm and Evolutionary Computation, vol. 1, pp. 3-18, 2011.
[56] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Neural Networks, 1995. Proceedings.,
IEEE International Conference on, 1995, pp. 1942-1948.
[57] J. H. Holland, Adaptation in natural and artificial systems, MIT Press, Cambridge, MA,
1992.
[58] F. van den Bergh and A. Engelbrecht, "A study of particle swarm optimization particle
trajectories," Information sciences, vol. 176, pp. 937-971, 2006.
[59] C. A. Coello Coello, "Theoretical and numerical constraint-handling techniques used with
evolutionary algorithms: a survey of the state of the art," Computer methods in applied mechanics and
engineering, vol. 191, pp. 1245-1287, 2002.
[60] G.-G. Wang, L. Guo, A. H. Gandomi, G.-S. Hao, and H. Wang, "Chaotic Krill Herd algorithm,"
Information Sciences, 2014.
[61] A. Carlos and C. COELLO, "Constraint-handling using an evolutionary multiobjective
optimization technique," Civil Engineering Systems, vol. 17, pp. 319-346, 2000.
[62] K. Deb, "Optimal design of a welded beam via genetic algorithms," AIAA journal, vol. 29, pp.
2013-2015, 1991.
[63] K. Deb, "An efficient constraint handling method for genetic algorithms," Computer methods in
applied mechanics and engineering, vol. 186, pp. 311-338, 2000.
[64] R. A. Krohling and L. dos Santos Coelho, "Coevolutionary particle swarm optimization using
Gaussian distribution for solving constrained optimization problems," Systems, Man, and
Cybernetics, Part B: Cybernetics, IEEE Transactions on, vol. 36, pp. 1407-1416, 2006.
[65] K. S. Lee and Z. W. Geem, "A new meta-heuristic algorithm for continuous engineering
optimization: harmony search theory and practice," Computer methods in applied mechanics and
engineering, vol. 194, pp. 3902-3933, 2005.
[66] K. Ragsdell and D. Phillips, "Optimal design of a class of welded structures using geometric
programming," ASME Journal of Engineering for Industries, vol. 98, pp. 1021-1025, 1976.
[67] Q. He and L. Wang, "An effective co-evolutionary particle swarm optimization for constrained
engineering design problems," Engineering Applications of Artificial Intelligence, vol. 20, pp. 89-99,
2007.
[68] C. A. Coello Coello and E. Mezura Montes, "Constraint-handling in genetic algorithms through
the use of dominance-based tournament selection," Advanced Engineering Informatics, vol. 16, pp.
193-203, 2002.
[69] J. N. Siddall, Analytical decision-making in engineering design: Prentice-Hall Englewood Cliffs, NJ,
1972.
[70] A. H. Gandomi, "Interior search algorithm (ISA): A novel approach for global optimization,"
ISA transactions, 2014.
[71] E. Sandgren, "Nonlinear integer and discrete programming in mechanical design optimization,"
Journal of Mechanical Design, vol. 112, pp. 223-229, 1990.
[72] K. Deb and M. Goyal, "A combined genetic adaptive search (GeneAS) for engineering design,"
Computer Science and Informatics, vol. 26, pp. 30-45, 1996.
[73] A. H. Gandomi, X.-S. Yang, and A. H. Alavi, "Cuckoo search algorithm: a metaheuristic
approach to solve structural optimization problems," Engineering with Computers, vol. 29, pp. 17-35,
2013.
[74] B. Kannan and S. N. Kramer, "An augmented Lagrange multiplier based method for mixed
integer discrete continuous optimization and its applications to mechanical design," Journal of
Mechanical Design, vol. 116, pp. 405-411, 1994.
[75] M. Zhang, W. Luo, and X. Wang, "Differential evolution with dynamic stochastic selection for
constrained optimization," Information Sciences, vol. 178, pp. 3043-3074, 2008.
[76] H. Liu, Z. Cai, and Y. Wang, "Hybridizing particle swarm optimization with differential evolution
for constrained numerical and engineering optimization," Applied Soft Computing, vol. 10, pp. 629-
640, 2010.
[77] T. Ray and P. Saini, "Engineering design optimization using a swarm with an intelligent
information sharing among individuals," Engineering Optimization, vol. 33, pp. 735-748, 2001.
[78] J.-F. Tsai, "Global optimization of nonlinear fractional programming problems in engineering
design," Engineering Optimization, vol. 37, pp. 399-409, 2005.
[79] M. Mahdavi, M. Fesanghary, and E. Damangir, "An improved harmony search algorithm for
solving optimization problems," Applied Mathematics and Computation, vol. 188, pp. 1567-1579,
2007.
[80] C. A. Coello Coello, "Use of a self-adaptive penalty approach for engineering optimization
problems," Computers in Industry, vol. 41, pp. 113-127, 2000.
[81] K. Deb and A. S. Gene, "A robust optimal design technique for mechanical component design,"
presented at the D. Dasgupta, Z. Michalewicz (Eds.), Evolutionary Algorithms in Engineering
Applications, Berlin, 1997.
[82] E. Mezura-Montes and C. A. C. Coello, "An empirical study about the usefulness of evolution
strategies to solve constrained optimization problems," International Journal of General Systems, vol.
37, pp. 443-473, 2008.
[83] L. Li, Z. Huang, F. Liu, and Q. Wu, "A heuristic particle swarm optimizer for optimization of pin
connected structures," Computers & structures, vol. 85, pp. 340-349, 2007.
[84] A. Kaveh and S. Talatahari, "An improved ant colony optimization for constrained engineering
design problems," Engineering Computations: Int J for Computer-Aided Engineering, vol. 27, pp. 155-182,
2010.
[85] B. Kannan and S. N. Kramer, "An augmented Lagrange multiplier based method for mixed
integer discrete continuous optimization and its applications to mechanical design," Journal of
mechanical design, vol. 116, p. 405, 1994.
[86] E. Sandgren, "Nonlinear integer and discrete programming in mechanical design," 1988, pp. 95-
105.
[87] H. Chickermane and H. Gea, "Structural optimization using a new local approximation method,"
International journal for numerical methods in engineering, vol. 39, pp. 829-846, 1996.
[88] G. G. Wang, "Adaptive response surface method using inherited latin hypercube design points,"
Journal of Mechanical Design, vol. 125, pp. 210-220, 2003.
[89] J. S. Arora, Introduction to optimum design: Academic Press, 2004.
[90] A. D. Belegundu, "Study of mathematical programming methods for structural optimization,"
Dissertation Abstracts International Part B: Science and Engineering[DISS. ABST. INT. PT. B- SCI. &
ENG.], vol. 43, p. 1983, 1983.
[91] X. S. Yang, Nature-inspired metaheuristic algorithms: Luniver Press, 2011.
[92] L. Li, Z. Huang, and F. Liu, "A heuristic particle swarm optimization method for truss structures
with discrete variables," Computers & structures, vol. 87, pp. 435-443, 2009.
[93] A. Sadollah, A. Bahreininejad, H. Eskandar, and M. Hamdi, "Mine blast algorithm for
optimization of truss structures with discrete variables," Computers & structures, vol. 102, pp. 49-
63, 2012.
[94] A. Kaveh and S. Talatahari, "A particle swarm ant colony optimization for truss structures with
discrete variables," Journal of Constructional Steel Research, vol. 65, pp. 1558-1568, 2009.
[95] G. Xie, "Optimal Preliminary Propeller Design Based on Multi-objective Optimization
Approach," Procedia Engineering, vol. 16, pp. 278-283, 2011.
[96] Y.-C. Kim, T.-W. Kim, S. Pyo, and J.-C. Suh, "Design of propeller geometry using streamline-
adapted blade sections," Journal of marine science and technology, vol. 14, pp. 161-170, 2009.
[97] B. Epps, J. Chalfant, R. Kimball, A. Techet, K. Flood, and C. Chryssostomidis, "OpenProp: An
open-source parametric design and analysis tool for propellers," in Proceedings of the 2009 Grand
Challenges in Modeling & Simulation Conference, 2009, pp. 104-111.
Appendix
I. Welded beam design:

R(\vec{x}) = \sqrt{\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2}

J(\vec{x}) = 2\left\{\sqrt{2}\,x_1 x_2\left[\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2\right]\right\}

\sigma(\vec{x}) = \frac{6PL}{x_4 x_3^2}, \qquad \delta(\vec{x}) = \frac{6PL^3}{E x_3^2 x_4}

P_c(\vec{x}) = \frac{4.013\,E\sqrt{x_3^2 x_4^6/36}}{L^2}\left(1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right)

P = 6000 \text{ lb}, \quad L = 14 \text{ in}, \quad E = 30\times 10^6 \text{ psi}, \quad G = 12\times 10^6 \text{ psi}

\tau_{\max} = 13600 \text{ psi}, \quad \sigma_{\max} = 30000 \text{ psi}, \quad \delta_{\max} = 0.25 \text{ in}
III. Three-bar truss design:

Consider \vec{x} = [x_1 \ x_2] = [A_1 \ A_2]

Minimize f(\vec{x}) = (2\sqrt{2}\,x_1 + x_2)\times l

Subject to
g_1(\vec{x}) = \frac{\sqrt{2}\,x_1 + x_2}{\sqrt{2}\,x_1^2 + 2x_1 x_2}P - \sigma \le 0
g_2(\vec{x}) = \frac{x_2}{\sqrt{2}\,x_1^2 + 2x_1 x_2}P - \sigma \le 0
g_3(\vec{x}) = \frac{1}{\sqrt{2}\,x_2 + x_1}P - \sigma \le 0

Variable range: 0 \le x_1, x_2 \le 1

where l = 100 \text{ cm}, \quad P = 2 \text{ kN/cm}^2, \quad \sigma = 2 \text{ kN/cm}^2
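For completeness, the following is a minimal Python transcription of the three-bar truss formulation above; it simply reports the objective and constraint values (g_i <= 0 when satisfied), and how an optimizer handles the constraints is a separate design choice.

import math

L_TRUSS = 100.0   # l, in cm
P = 2.0           # load, in kN/cm^2
SIGMA = 2.0       # allowable stress, in kN/cm^2

def three_bar_truss(x1, x2):
    # Objective and constraints of the three-bar truss problem,
    # with 0 <= x1, x2 <= 1 (x1 = 0 leaves g1 and g2 undefined).
    f = (2.0 * math.sqrt(2.0) * x1 + x2) * L_TRUSS
    denom = math.sqrt(2.0) * x1 ** 2 + 2.0 * x1 * x2
    g1 = (math.sqrt(2.0) * x1 + x2) / denom * P - SIGMA
    g2 = x2 / denom * P - SIGMA
    g3 = 1.0 / (math.sqrt(2.0) * x2 + x1) * P - SIGMA
    return f, (g1, g2, g3)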
V. Cantilever design:
VII. Tension/compression spring design
g_3(\vec{x}) = 1 - \frac{140.45\,x_1}{x_2^2 x_3} \le 0
g_4(\vec{x}) = \frac{x_1 + x_2}{1.5} - 1 \le 0

Variable range: 0.05 \le x_1 \le 2.00, \quad 0.25 \le x_2 \le 1.30, \quad 2.00 \le x_3 \le 15.0