
Pervasive and Mobile Computing 19 (2015) 141–155


Embedded intelligence for fast QoS-based vertical handoff in heterogeneous wireless access networks
Maria D. Jaraiz-Simon, Juan A. Gomez-Pulido∗, Miguel A. Vega-Rodriguez
Department of Technologies of Computers and Communications, University of Extremadura, Polytechnic School, Campus Universitario
s/n, 10003 Caceres, Spain

Article history:
Received 25 July 2013
Received in revised form 8 January 2014
Accepted 27 January 2014
Available online 10 February 2014

Keywords:
Wireless networks
QoS
Evolutionary algorithms
Embedded processors

Abstract

One of the most important aspects of modern communications is the access to wireless networks by mobile devices, looking for a good quality of service under the user's preferences. A mobile terminal can discover more than one network of different technologies along its trajectory in heterogeneous scenarios, and it is capable of connecting to other wireless access points according to their quality of service values. This is the case of the vertical handoff decision phase, present in many scenarios such as 3G-LTE access networks. In this context, efficient resource management of the different networks (a good selection of weights for their quality of service parameters) constitutes an optimization problem, where several heuristic methods using simple rules try to find the best available network. Nevertheless, the characteristics of current mobile devices advise using fast and efficient algorithms that provide solutions in near real time. These constraints have moved us to develop intelligent algorithms that avoid the slow and massive computations associated with direct search techniques, thus reducing the computation time. In this paper we propose an evolutionary algorithm capable of running rapidly on embedded processors, improving on the performance of other algorithms designed to solve this optimization problem.

© 2014 Elsevier B.V. All rights reserved.

1. Introduction

Nowadays, wireless mobile devices (tablets, smartphones) want to have Internet access at all times. Let us suppose a terminal is connected to a base station that supplies Internet access; along its trajectory, the terminal can discover different base stations belonging to heterogeneous wireless networks (UMTS, WiMax, WLAN, etc.) that can also provide Internet access to the terminal (we do not consider here tethering techniques where the terminal acts as a router for third devices).

A network is characterized by the values of its Quality of Service (QoS) parameters. When a wireless terminal can link to more than one router of the same network, the well-known procedure of Horizontal Handoff (HH) (also known as intrasystem handoff/handover) happens [1]; this procedure changes the link to other access points using the same technology. Nevertheless, it is necessary to keep in mind two important facts: on the one hand, when a wireless terminal moves quickly, more than one network can appear; on the other hand, the increasing complexity of wireless networks and their associated technologies forces us to consider heterogeneous scenarios. In these cases, the terminal could disconnect from the current network and connect to other wireless access points of the same or different networks and technologies according to their QoS values. This process is named Vertical Handoff (VH), and it consists of three phases: discovery, decision and execution. Our interest lies in the VH decision phase, which is driven by algorithms.

∗ Corresponding author. Tel.: +34 927257264.


E-mail addresses: [email protected] (M.D. Jaraiz-Simon), [email protected] (J.A. Gomez-Pulido), [email protected] (M.A. Vega-Rodriguez).

http://dx.doi.org/10.1016/j.pmcj.2014.01.009
1574-1192/© 2014 Elsevier B.V. All rights reserved.

The QoS parameters play a key role in the VH decision phase, so we can consider the VH process as a QoS-based procedure. In traditional VH processes, only channel availability and signal strength were considered as QoS parameters; nowadays, the new generation networks consider other important parameters [2,3], such as monetary cost, bandwidth, response time, latency, packet loss, bit error rate, battery level, security level, etc. Typically, delay and available bandwidth are known to be the usual QoS parameters; nevertheless, the diversity of possible user profiles (conversational, streaming, secure transactions, downloads, etc.), among other reasons, could require taking additional QoS parameters into account.

Many challenges are present in the VH decision phase. On the one hand, the terminal is sometimes moving quickly along its trajectory, so the algorithms that support the VH decision phase must be fast and able to give a solution in near real time in such dynamic scenarios (the mobility aspect is a key driver for the future Internet, within the field of mobility and ubiquitous access to networks [4]). On the other hand, some decision algorithms handle many parameters that involve quite a lot of floating-point arithmetic, and the computational effort increases with the required precision for the solutions and with the number of QoS parameters or available networks discovered during the movement of the terminal. A high computational effort conflicts with the low response time restriction, especially taking into account the low-performance processors embedded in many mobile devices.

We have considered this limitation in developing a new algorithm proposal to handle the VH decision phase. This proposal is based on evolutionary features instead of the adaptive or direct-search approaches previously explored, which gave worse results in precision and computation time.

2. Related work

There are many algorithmic proposals for the VH decision phase in heterogeneous wireless networks. These algorithms cover a wide spectrum of features, which often makes a homogeneous comparison among them hard. For example, [5] presents a VH decision algorithm that seeks to balance the overall load among the base stations and access points and to maximize the collective battery lifetime of mobile nodes. Also, [6] proposes a simple two-step algorithm that looks for the robustness required by battery-powered mobile nodes, taking into account the amount of resources they provide.
These and other algorithmic proposals offer good behavior under certain conditions and with particular aspects of wireless communications in mind. Nevertheless, our research interest is specifically focused on the QoS features of the communications. In this sense, there are many algorithms for the VH decision phase that base their behavior on the QoS parameters provided by the heterogeneous wireless networks. Thus, [7] presents a comprehensive survey of the VH decision algorithms designed to provide the required QoS to a wide range of applications. These algorithms are categorized into four groups based on the decision criteria used (received signal strength [8–10], bandwidth [11–13], cost function [14–16], or a combination [17–19]). Also, [20] presents an overview of VH decision algorithms, grouping them as traditional, function-based, user-centric, multiple attribute decision making, fuzzy logic and neural networks, and context-aware algorithms. Another interesting approach exploits the Markov Decision Process (MDP) to formulate decision algorithms. Thus, [21] proposes an MDP-based algorithm with the objective of maximizing the expected total reward of a connection taking the QoS into account, and [22] proposes two improved MDP-based algorithms (MDP-SAW and MDP-TOPSIS) in order to get the best available network in bandwidth terms.
Nevertheless, these VH decision algorithms do not handle many QoS parameters; they usually consider the received signal strength together with a few other parameters. For example, a decision algorithm based on a utility function and using Shannon's capacity is proposed in [23], but it considers only one QoS parameter: throughput. Moreover, in [24] a fuzzy logic-based proposal considers user profiles, application requirements and network conditions, but limits the number of parameters to three (data rate, received signal strength, and mobile speed) in order to pay attention to the interference conditions. Four QoS parameters (available bandwidth, end-to-end delay, jitter, and bit error rate) are considered in [25], where a QoS-aware fuzzy rule-based algorithm makes a multi-criteria-based decision.
Our interest lies in methodologies able to handle a large number of QoS parameters without increasing the computation time too much. In this area, some algorithms consider a set of weights for the QoS parameters as the basis for the VH decision. These algorithms look for a good selection of QoS weights that minimizes a fitness function giving the goodness degree of each available network at a determined time. One family of decision algorithms is Multiple Criteria Decision-Making (MCDM), with well-known options such as the Analytic Hierarchy Process (AHP) [26], Simple Additive Weighting (SAW) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) [27]. These algorithms have low complexity and use simple rules to determine a good combination of QoS weights. Intelligent approaches have also been tried, like a fuzzy multiple attribute decision making algorithm (FMADM) that selects up to nine QoS parameters [28].
The algorithms we present in this paper were designed and programmed to handle many QoS parameters, and they implement intelligent techniques to select the QoS weights efficiently. For these reasons, and conditioned by the few experimental instances available in published articles, we have carried out a performance comparison with other MCDM-based algorithmic proposals.

3. Problem definition

A given set of weights produces certain quality or merit degrees for each network; these merit values change if we consider another set of weights. The goal is to obtain the best merit value, which will correspond to the network selected in the VH decision phase. Accordingly, the more combinations of weights we consider, the more possibilities we have to get better merit values. It is important to obtain an optimal value of this merit metric for wireless networks because many advanced applications need high QoS [14].

The amount of possible combinations of weights can be high enough to discard trivial searches, due to the computational effort required, as we will see later; hence the nature of the present optimization problem: to find a combination of weights that supplies the best quality value without having to generate all the possible combinations.

3.1. Fitness function

Choosing the best wireless network to establish the connection requires a metric able to give the goodness degree of each available network at a determined time. The measure of the quality of a network is calculated from a set of weights assigned to each QoS parameter, where this allocation can be based on the user's preferences [29,2]. This measure is given by the fitness function (F), which is computed in the VH decision phase.
Eq. (1) formulates the fitness function, where n identifies the available network, E(n) is an elimination factor that reflects whether the current network conditions are suitable for the mobile node's requested services, s identifies the communication service (data, voice, video, etc.), i represents a determined QoS parameter, w_{s,i} is the weight assigned to the QoS parameter i for performing the service s, and N is a normalization function for the cost p_{s,i}^{(n)} applied to the parameter i for performing the service s.

$$ F(n) = E(n) \sum_{s}\sum_{i} w_{s,i}\, N(p_{s,i}^{(n)}) \;:\; \sum_{i} w_{i} = 1. \qquad (1) $$

The condition on the sum of weights in (1) is a key question, because it strongly determines the methodology for solving the optimization problem.

Since our purpose is to focus on algorithmic research for the VH decision phase, we have simplified the research framework by considering monoservice networks (the summation over s disappears), removing the elimination factor (we assume that the conditions of the available networks are suitable for the service considered), and taking the natural logarithm as normalization function [2,29]. This way the fitness function (2) is easier to compute, where $p_i^{\prime(n)} = p_i^{(n)}$ if a higher p means a higher fitness value (the fitness gets worse, as when we consider delay or economic cost), or $p_i^{\prime(n)} = 1/p_i^{(n)}$ if a higher p means a lower fitness value (the fitness gets better, as when we consider bandwidth).

$$ F(n) = \sum_{i} w_{i} \ln(p_i^{\prime(n)}) \;:\; \sum_{i} w_{i} = 1. \qquad (2) $$
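As an illustration of how cheap the evaluation of (2) is, the following C sketch computes the fitness of one network. It is not the published code of our algorithms, only a minimal example under our own naming assumptions; the higher_is_better flags mark parameters such as bandwidth whose cost must be inverted before taking the logarithm.

#include <math.h>

/* Fitness of one network (Eq. (2)): F = sum_i w[i] * ln(p'[i]),
 * where p'[i] = p[i] for "higher is worse" parameters (delay, cost)
 * and p'[i] = 1/p[i] for "higher is better" parameters (bandwidth).
 * Assumes the weights already satisfy sum_i w[i] = 1. */
double fitness(const double *w, const double *p,
               const int *higher_is_better, int nq)
{
    double f = 0.0;
    for (int i = 0; i < nq; i++) {
        double pp = higher_is_better[i] ? 1.0 / p[i] : p[i];
        f += w[i] * log(pp);
    }
    return f;   /* the network with the lowest value wins */
}

The network returning the lowest value for a given weight combination is the candidate for the VH decision phase.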

3.2. Weight adjustment

The values of the QoS parameters characterize a given network, whereas a combination of weights assigned to them that satisfies the constraint given in (2) gives a measure of the network quality by means of the fitness function. Each weight has a value between 0 and 1, and only the combinations that satisfy that constraint are considered valid combinations, or solutions, for the optimization problem (there are many more combinations than solutions). Each solution is evaluated for the available networks, obtaining different fitness values because each network has its own values for the QoS parameters. This way, the network that offers the lowest fitness for the same solution is considered the best network for the VH decision phase. Nevertheless, there are many solutions, so other sets of weights could give lower fitness, even for other networks. Therefore, the optimization problem consists of searching for the optimal solution that, applied to the available networks, returns the lowest fitness value.

The space of possible solutions can be huge: it depends on the number of QoS parameters NQoS and the precision required to generate the weights. Therefore, and due to the low response time requirement, direct search algorithms are discarded, so we must use fast non-exhaustive search heuristics. It is important to take into account that many mobile terminals perform the VH algorithms on low-performance embedded microprocessors due to low power and reduced size requirements, so the challenge of this optimization problem is to design a small and fast heuristic; hence the need for efficient optimization algorithms for the weight adjustment.

3.3. A direct search algorithm as basis

We base our intelligent heuristic proposals on SEFI (from ''Weight Combinations SEarch by Fixed Intervals''), an algorithm that we designed to search for solutions at a given precision [30]. SEFI is a non-exhaustive direct search algorithm that finds all the possible solutions for a given search precision. This algorithm has been used with a double purpose: on the one hand, to determine the computation time and the size of the space of solutions (both are related to the search precision and the number of QoS parameters considered); on the other hand, to help our heuristic proposals to get better solutions in near real time. We designed SEFI for this optimization problem considering a particular scenario of current interest [31]: a mobile wireless sensor moving along heterogeneous wireless sensor networks [32].
SEFI explores the space of solutions looking for combinations uniformly distributed according to a given interval h, named the search precision (with the limit h > 10^-9). This way, if h decreases, the number of combinations of weights (and therefore the number of solutions) found increases. The uniform search avoids leaving unexplored areas of the space of solutions. SEFI generates all the possible combinations for a given h, analyzes how many of them satisfy the constraint given in (2), computes the fitness of the solutions for the available networks, and finally reports the optimal network, which matches the solution with the minimum fitness found.
We programmed SEFI in the C language using recursive loops for the uniform generation of all the possible combinations. The code was successfully tested on the embedded low-power Microblaze microprocessor [30]. This processor is based on reconfigurable hardware [33] and FPGA devices [34], and has features similar to those of some low-performance processors in small mobile terminals.
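A minimal sketch of this recursive generation is shown below. It is only illustrative (the SEFI code published in [30] is not reproduced here): weights are enumerated on a grid of step h, the constraint of (2) is checked with a small tolerance to absorb floating-point error, and every valid solution is evaluated against all the discovered networks using the fitness sketch given above.

#include <math.h>

#define MAX_NQ 10

static double best_fitness = 1e30;   /* best (lowest) fitness found so far */
static int    best_net     = -1;     /* network that produced it */

/* fitness() evaluates Eq. (2) for one network; see the earlier sketch. */
extern double fitness(const double *w, const double *p,
                      const int *higher_is_better, int nq);

/* Recursively assign w[level], w[level+1], ... on a grid of step h.
 * At the deepest level, keep only combinations whose weights sum to 1
 * (the valid solutions) and evaluate them against every network. */
static void sefi(double *w, int level, int nq, double h,
                 double nets[][MAX_NQ], const int *higher_is_better, int nn)
{
    if (level == nq) {
        double sum = 0.0;
        for (int i = 0; i < nq; i++) sum += w[i];
        if (fabs(sum - 1.0) > 1e-9) return;          /* not a solution */
        for (int n = 0; n < nn; n++) {
            double f = fitness(w, nets[n], higher_is_better, nq);
            if (f < best_fitness) { best_fitness = f; best_net = n; }
        }
        return;
    }
    int steps = (int)(1.0 / h + 0.5);                /* grid points 0..steps */
    for (int k = 0; k <= steps; k++) {
        w[level] = k * h;
        sefi(w, level + 1, nq, h, nets, higher_is_better, nn);
    }
}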
The experiments done with SEFI give us information about the computing time and the number of generated combinations and possible solutions, giving an idea of the computational effort of the optimization problem. For example, for five QoS parameters, h = 0.01, a data set of four networks and the general user profile, the results were: 10 s, 100,000,000 generated combinations and 8,000,000 solutions. After performing many different experiments, we have reached the following conclusions:
• The computing time is due to generating all the possible combinations, evaluating the constraint given in (2) and calculating the fitness of the solutions. This time increases if we consider more QoS parameters or higher search precisions. We fix the maximum time for obtaining an optimal solution in dynamic scenarios at 1 s. This constraint limits the h value depending on NQoS.
• The fitness always improves with higher search precision, for a given number of QoS parameters.
• The optimal network depends on both NQoS and h, so other optimal networks could be found considering additional QoS parameters or a different h; hence the need to design heuristics that search for the optimal network efficiently. Nevertheless, the analysis of the SEFI results indicates that it is more important to consider more QoS parameters than to increase the search precision.

4. Embedded intelligence for solving the problem

We have developed two algorithms based on computational intelligence in order to obtain optimal solutions. On the one hand, computational intelligence avoids generating and evaluating all the possible solutions for a given precision and number of QoS parameters, as SEFI does; this way, the requirement of a reduced computing time is addressed. On the other hand, these algorithms can be programmed with small-size codes, as needed for embedded applications. It is important to take this fact into consideration because nowadays mobile devices are expected to give the same performance level at a lower price, where their low-performance processors also reduce the power consumption, thus improving mobility. Developing optimized software to be run on these devices is therefore a current trend [35].

Next we present the two algorithms: the first one is based on an adaptive approach and the second one has an evolutionary nature. For each of them, the main features are described after a brief introduction; then, the operation is explained with pseudocode and figure support. Finally, we point out that our final proposal to solve the VH optimization problem relies on the second algorithm, due to its better performance, as we will see later.

4.1. An adaptive algorithm proposal: SEFISA

The key to finding the optimal network is increasing the number of QoS parameters, but this implies reducing the search precision to keep the low computing-time constraint. This apparent contradiction (low search precision is not good for finding optimal solutions) underlies the design of a heuristic able to find optimal solutions in less time using low search precisions.

4.1.1. Features
SEFISA is the name of a heuristic proposal that, starting from SEFI, is based on the Simulated Annealing (SA) algorithm. The SA algorithm [36,37] is inspired by the cooling process of a metal, where a final structure of minimum energy is sought. The final structure is reached after successive stages (generations) where progressively cooler structures are found. In its original formulation, SA starts looking for an optimal solution within a well-defined space of solutions; once found, the following generation reduces this space and centers it on the optimum found before, starting the search again, this time with higher precision. The magnitude of the successive reductions is defined by a reduction factor red.

4.1.2. Operation
The SA algorithm is very useful for many optimization problems. In our case, we use its adaptation feature to define SEFISA. The operation of SEFISA is shown in the pseudo-code given in Algorithm 1 and in Fig. 1, and it is as follows: Starting from a determined data set and profile (line #1 in Algorithm 1), SEFISA starts by performing SEFI (generation #0), where the search spaces of sizes D_i for the QoS weights w_i are constrained by the limits u_min_i and u_max_i (line #2). These limits are imposed by the application profile or the user's preferences. In this first generation, we establish the reduction factor red and a constant div, named the division factor, which determines into how many samples the smallest search space D_min will be divided (line #3). The precision h for SEFI (only for this generation, line #4) is calculated by dividing D_min (it corresponds to w_1 in Fig. 1) by div, so some D_i will have a number of samples greater than or equal to the division factor for generating combinations. Therefore, h is different in each generation (a generation goes from line #5 to #13) and it depends on D_min (line #11) and div.

Algorithm 1 SEFISA pseudo-code.

1: Select data set and profile
2: Determine limits u_min_i, u_max_i ⇒ D_i ⇒ D_min
3: Select red and div ⇒ h
4: IdGeneration = 0
5: while stop criterion not reached do
6:   Run SEFI(h) ⇒ obtain optimal ŵ_i
7:   D_i = D_i / 2 and centered on ŵ_i ⇒ determine v_min_i and v_max_i
8:   if limits exceeded or other causes then
9:     Take correcting actions on the search spaces
10:  end if
11:  Determine D_min ⇒ h
12:  IdGeneration++
13: end while

Once the minimum fitness is found in the initial generation (line #6), the corresponding set of weights (optimal solution) is used to center the new and smaller search spaces D'_i for the next generation (see generation #1 in Fig. 1), where all D_i are reduced by red, usually equal to 2 (successive reductions by half). The new D'_i are used to calculate the new search limits v_min_i and v_max_i (line #7), keeping in mind that, for the first generation, the search spaces were determined by u_min_i and u_max_i (3).

$$ D'_i = \frac{D_i}{red} \;:\; (D_i = v_{max_i} - v_{min_i}) \wedge (D_{i,0} = u_{max_i} - u_{min_i}). \qquad (3) $$
The calculation of the new limits from D'_i must consider some possible special situations (lines #8 to #10). The most usual circumstance appears when the limits imposed by u_min_i and u_max_i are exceeded. Let us suppose a generation #J where $D_i^{(J)}$, defined by the interval $\{v_{min_i}^{(J)}, v_{max_i}^{(J)}\}$, is inside the interval $\{u_{min_i}, u_{max_i}\}$ (which does not depend on the generation). The search space is reduced by half in the following generation #J+1, $D_i^{(J+1)} = D_i^{(J)}/2$, so $v_{min_i}^{(J+1)} = \hat{w}_i^{(J)} - D_i^{(J+1)}/2$ and $v_{max_i}^{(J+1)} = \hat{w}_i^{(J)} + D_i^{(J+1)}/2$, where $\hat{w}_i^{(J)}$ is the ith weight of the best solution found in generation #J. In order to avoid these new limits falling below the minimum or above the maximum possible values $u_{min_i}$ and $u_{max_i}$ respectively, we apply these adjustment actions: if $v_{min_i}^{(J+1)} \le u_{min_i}$ then $v_{min_i}^{(J+1)} = u_{min_i}$, and if $v_{max_i}^{(J+1)} \ge u_{max_i}$ then $v_{max_i}^{(J+1)} = u_{max_i}$. Other cases could appear (precision issues are usual after many generations), making it necessary to take correcting actions.
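The reduction, centering and clamping of the search spaces in one generation can be expressed compactly. The following C sketch is our own reading of lines #7 to #11 of Algorithm 1, not the actual SEFISA code; the function and variable names are assumptions.

/* One SEFISA generation step: reduce each search space D_i by the factor
 * red (Eq. (3)), centre it on the best weight found, and clamp the new
 * limits v_min/v_max to the user-imposed bounds u_min/u_max. */
void reduce_and_center(int nq, double red, const double *w_best,
                       const double *u_min, const double *u_max,
                       double *v_min, double *v_max, double *D)
{
    for (int i = 0; i < nq; i++) {
        D[i] = D[i] / red;                               /* Eq. (3) */
        v_min[i] = w_best[i] - D[i] / 2.0;
        v_max[i] = w_best[i] + D[i] / 2.0;
        if (v_min[i] <= u_min[i]) v_min[i] = u_min[i];   /* adjustment */
        if (v_max[i] >= u_max[i]) v_max[i] = u_max[i];   /* actions    */
    }
}

/* Precision for the next generation: h = D_min / div (line #11). */
double next_h(int nq, const double *v_min, const double *v_max, double divi)
{
    double d_min = v_max[0] - v_min[0];
    for (int i = 1; i < nq; i++) {
        double d = v_max[i] - v_min[i];
        if (d < d_min) d_min = d;
    }
    return d_min / divi;
}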
Once the new search spaces, D_min and h are determined (line #11), SEFI generates all the possible combinations N_C and supplies the valid solutions N_S (N_S ⊆ N_C), that is, those satisfying the constraint now formulated in (4), where N_Q is the number of QoS parameters.

$$ \sum_{i=0}^{N_Q - 1} w_i = 1 \;:\; (v_{min_i} \le w_i \le v_{max_i}) \wedge (0 \le u_{min_i} \le v_{min_i} \le v_{max_i} \le u_{max_i} \le 1). \qquad (4) $$

Finally, the new optimal solution found will determine the positions and sizes of the search spaces for the following generation (line #12).

We have limited SEFISA to four generations in general, considering the low computing time constraint. Nevertheless, in some cases SEFISA can stop and finish the execution earlier for several reasons, which we name stop criteria. The stop criteria for SEFISA are:
• Predefined: when a predefined value of the elapsed time or the search precision has been reached.
• Compulsory: this stop criterion is triggered when, for example, no solution is found in a generation, so we cannot center the new reduced search spaces on the optimal weights for the next generation. The absence of solutions can often be detected quickly, reinitializing SEFISA with other settings that can offer better performance.

4.2. A genetic algorithm proposal: GAVH

We have developed an algorithm based on Genetic Algorithms (GAs) [38,39] in order to obtain better performance than SEFISA and overcome the limitations exposed before, mainly the limited number of generations, the appearance of compulsory stop criteria, the overflow of search space limits, the stagnation of optimal solutions along generations, and so on. This new algorithm is named GAVH, and it fits the main problem constraints: small size and low response time.

Fig. 1. SEFISA runs SEFI in successive generations, reducing and centering the search spaces for the QoS weights.

The reason for choosing GAs is that they have proven to be efficient and robust in search processes that produce near-optimal solutions, making them appropriate for solving large combinatorial optimization problems. Unlike the adaptive feature of the SA algorithm, GAs are biologically inspired algorithms for conducting random search and optimization guided by the principles of natural evolution and genetics.

4.2.1. Features
The individuals of a population are, as in SEFISA, possible solutions of the optimization problem: vectors of QoS weights. This common feature allows comparing the performance of both algorithms in a realistic way.

Fig. 2 shows the problem approach. The population is initially generated with an even number of individuals. Each individual is identified by w_indiv[ip][iq], where ip is the index of the individual in the population and iq is an index that corresponds to a QoS parameter. In turn, each individual has a set of fitness values, because there are different available networks. This way, the fitness of an individual is identified by F[ip][in], where in identifies the wireless network. Finally, we consider a constant population size along generations, popsize.

For generating the initial population, GAVH needs to fix values for the following parameters: NQ (number of QoS parameters), NN (number of discovered wireless networks), h (interval for generating the weights, between 0 and 1), and popsize (size of the population).
The initial population is obtained using SEFI. If the population size is not too big, generating the initial population with SEFI is fast, even for a high number of QoS parameters. It is very useful to apply SEFI for generating the initial population instead of other classic procedures such as the random generation of individuals, because SEFI guarantees a uniform selection of combinations that covers the entire space of solutions, according to the fixed search interval h. This way, GAVH can start from an initial population where the individuals represent all the areas of the space of solutions, allowing a high degree of diversity for the successive evolutions of the population.

Fig. 2. A mobile terminal discovers different heterogeneous wireless networks acting as access points. A combination of QoS weights (individual of the
population) is associated with as many fitness values as existing networks.

4.2.2. Operation
The operation of GAVH is shown in Algorithm 2. Starting from a determined data set, user profile, number of networks and QoS parameters, precision level and population size (line #1), an initial population is built using SEFI (line #2), forming the first generation (SEFI is only used for this purpose and is never used again). Three sequential phases with the genetic operations are applied to the population in a generation, and they are repeated in a closed loop (lines #6 to #12) along the successive generations until a stop criterion is reached, making up an experiment. At the beginning of each generation, a new initial population is used (line #5) as a result of the genetic operations performed, and an optimal solution is obtained at the end of each generation (line #10). The global optimal solution is obtained at the end of the experiment (line #13). The genetic operators are Selection (line #7), Crossover (line #8) and Mutation (line #9).

Selection. This phase consists of three successive steps: first, the individuals are classified according to their fitness values (better fitness means lower value). Next, the half of the population (popsize/2) with the best fitness values is selected and named parents A. These individuals will survive this generation. Finally, the individuals of the other half (with the worst fitness values) are named parents B and they will not survive the current generation.

There is a hard condition for GAVH: the initial population must have an even number of individuals. This way, the sets of parents A and B have the same size and the crossover operation can be performed correctly.
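A minimal sketch of this selection phase is given below, assuming each individual stores its best (lowest) fitness over the available networks in a field named fit; the struct layout and the use of qsort are our own illustration, not the GAVH source code.

#include <stdlib.h>

#define MAX_NQ 10

typedef struct {
    double w[MAX_NQ];   /* QoS weights (genes) */
    double fit;         /* best (lowest) fitness over the available networks */
} individual;

static int by_fitness(const void *a, const void *b)
{
    double fa = ((const individual *)a)->fit;
    double fb = ((const individual *)b)->fit;
    return (fa > fb) - (fa < fb);   /* ascending: lower (better) fitness first */
}

/* After sorting, pop[0 .. popsize/2 - 1] are parents A (survivors) and
 * pop[popsize/2 .. popsize - 1] are parents B, to be replaced by the sons. */
void select_parents(individual *pop, int popsize)
{
    qsort(pop, (size_t)popsize, sizeof(individual), by_fitness);
}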

Crossover. The crossover operator (Fig. 3) starts by matching each parent A (wA) with a parent B (wB), making up a unique pair whose members will not be used for other pairs. Next, the genes of the son wS of each pair are generated by calculating the average of the corresponding genes of the parents, according to (5). Nevertheless, from a certain gene position onwards this average value may not be applicable to the son, in order to observe the hard constraint on the solutions given in (2); in this case, the remaining genes take a fair value (6) to guarantee that the final sum is 1.

$$ \frac{w_{A_i} + w_{B_i}}{2} + \sum_{j=0}^{i-1} \frac{w_{A_j} + w_{B_j}}{2} < 1 \;\rightarrow\; w_{S_i} = \frac{w_{A_i} + w_{B_i}}{2} \qquad (5) $$

Algorithm 2 GAVH pseudo-code.

1: Initialize: data set, profile, NN, NQ, h, popsize
2: SEFI generates the initial population
3: Set strategy and percentage for mutation
4: while number of runs not reached do
5:   Set initial population
6:   while stop criterion not reached do
7:     Phase #1: Parents selection {wA, wB} according to F values
8:     Phase #2: Parents crossover: generate sons wS and discard wB
9:     Phase #3: Mutation of some individuals
10:    Obtain optimal solution in the generation
11:    IdGeneration++
12:  end while
13:  Obtain global optimal solution
14: end while
15: Obtain average global optimal solution
15: Obtain average global optimal solution

Fig. 3. Parents wA and wB generate son wS, where wS_i is the ith gene or QoS weight of the individual, according to (5) and (6).

$$ \frac{w_{A_i} + w_{B_i}}{2} + \sum_{j=0}^{i-1} \frac{w_{A_j} + w_{B_j}}{2} \ge 1 \;\rightarrow\; w_{S_k} = R \;:\; R = \frac{1 - \sum_{j=0}^{i-1} \frac{w_{A_j} + w_{B_j}}{2}}{popsize - 1}, \quad \forall k = \{i, \ldots, popsize - 1\}. \qquad (6) $$
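The crossover of (5) and (6) can be sketched in a few lines of C. This is our own reading, not the GAVH source code: each gene of the son is the average of the parents' genes while the running sum stays below 1; from the first position where the sum would reach 1, the leftover mass is shared equally among the remaining genes (the fair value R of (6), here divided by the number of remaining gene positions), so the son always satisfies the constraint of (2).

/* Generate son wS from parents wA and wB (Eqs. (5)-(6)); nq genes per individual. */
void crossover(const double *wA, const double *wB, double *wS, int nq)
{
    double used = 0.0;                       /* running sum of son genes */
    for (int i = 0; i < nq; i++) {
        double avg = (wA[i] + wB[i]) / 2.0;
        if (used + avg < 1.0) {              /* Eq. (5): keep the average */
            wS[i] = avg;
            used += avg;
        } else {                             /* Eq. (6): fair value R for */
            double R = (1.0 - used) / (double)(nq - i); /* remaining genes */
            for (int k = i; k < nq; k++)
                wS[k] = R;
            return;                          /* son now sums exactly to 1 */
        }
    }
    /* Floating-point safety net: if rounding left the sum slightly below 1,
     * give the slack to the last gene so the constraint of Eq. (2) holds. */
    wS[nq - 1] += 1.0 - used;
}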

Mutation. The previous crossover phase uses exact operations; in other words, it is a randomness-free phase. This means that several runs of the same experiment (understanding an experiment as a different configuration of GAVH) would always give the same result. It is a common feature of GAs and many other evolutionary algorithms [40] to have mutation strategies in order to avoid this situation and introduce a certain degree of dynamic diversity in the population along the generations. The random nature of the mutation strategies requires performing several runs of the same experiment (lines #4 to #14) to statistically analyze the obtained results. Accordingly, an average global optimal solution (line #15) is obtained.
We consider two possible mutation policies:
• Shift mutation. The genes of the individual w are shifted qmut positions to the right, as Fig. 4 shows. Obviously, if qmut = 0 there is no mutation, since the shift is cancelled. The value of qmut is fixed at the beginning of the experiment, and its maximum value is NQ − 1.
• Exchange mutation. Two genes of an individual are exchanged according to two predefined positions, as Fig. 5 shows.
There are three ways to define these positions, according to the parameter qmut:
– qmut = 0: the two positions are randomly chosen at the beginning of each run of the experiment, being the same for
all the individuals in any generation, but different in each GAVH run.
– qmut = 1: the two positions are randomly chosen at the beginning of each generation, being the same for all the
individuals in the same generation, but different in other generations and runs.
– qmut = 2: the two positions are randomly chosen for each individual at the beginning of each generation, so they
could be different for each individual, generation and run.
The individuals to be mutated are randomly selected in each generation. The percentage pmut of these individuals is fixed in each experiment, being the same for all its runs (line #3). We have found it is better to select low values for pmut, between 5% and 10%. Both mutation policies are sketched in code form below.
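The following C sketch is illustrative only and does not reproduce the GAVH code. Note that both policies merely permute genes, so the constraint of (2) is preserved by construction.

#include <stdlib.h>

#define MAX_NQ 10   /* as in the earlier sketches */

/* Shift mutation: rotate the nq genes of w qmut positions to the right. */
void mutate_shift(double *w, int nq, int qmut)
{
    double tmp[MAX_NQ];
    for (int i = 0; i < nq; i++)
        tmp[(i + qmut) % nq] = w[i];
    for (int i = 0; i < nq; i++)
        w[i] = tmp[i];
}

/* Exchange mutation: swap the genes at positions a and b; how a and b are
 * drawn (per run, per generation or per individual) depends on qmut. */
void mutate_exchange(double *w, int a, int b)
{
    double t = w[a];
    w[a] = w[b];
    w[b] = t;
}

/* Example of the qmut = 2 policy: draw two distinct positions per individual. */
void mutate_exchange_per_individual(double *w, int nq)
{
    int a = rand() % nq;
    int b = rand() % nq;
    while (b == a) b = rand() % nq;
    mutate_exchange(w, a, b);
}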

5. Experimentation

This section shows several aspects of the experimentation with the algorithms: the test bed considered (data sets and
profiles for user’s preferences), some implementation details and the parameter tuning for GAVH.

Fig. 4. Shift mutation. The genes of the individual are shifted some positions to the right (in this figure, two positions).

Fig. 5. Exchange mutation. Two genes of the individual are exchanged; their positions can be fixed in the same run of the algorithm (but different for other
runs), in the same generation (but different for other generations), or change randomly for each individual in each generation and run.

5.1. Test bed

The optimization problem needs a test bed to perform the experiments that validate the heuristics developed. We have
considered three data sets and five profiles for the user’s preferences. This way, it is possible to replicate any experiment,
analyze the results obtained from our heuristics, and compare performances with other algorithms from the literature.

5.1.1. Data sets


The data sets are scenarios consisting of networks of different technologies characterized by the values of their QoS parameters. Table 1 shows the three data sets considered, named DS1, DS2 and DS3.
• DS1. This data set [26] consists of three WLAN networks and one UMTS network, and has been used in a simulated scenario where a terminal moves while transferring data files. The interest of DS1 resides in the high number of QoS parameters (up to 10), which imposes a high computational effort on the optimization algorithms when we consider more than 5 parameters.
• DS2. This data set [27] considers two services, for conversational and streaming applications (the most important QoS parameters for these services are defined in [41]). The terminal moves in a scenario made up of 6 heterogeneous networks characterized by 5 QoS parameters. The security level QoS parameter goes from 0 (non-secure) to 5 (high security network). The bandwidth values for IEEE 802.11b (Wi-Fi), WiMax and UMTS networks are given in [42,43,27] respectively.
• DS3. Data set extracted from [28], where three networks (UMTS, WLAN-1 and WLAN-2), 9 QoS parameters and two user profiles (downloading files and videoconference) were considered. The terminal evaluates the quality of WLAN-1 and WLAN-2 to establish a connection from UMTS. DS3 was used to compare GAVH with FMADM.

5.1.2. Profiles for user’s preferences


The user can establish preferences for the different QoS parameters depending on the service required. The experimentation is more realistic when considering different profiles for the user's preferences, where the more important QoS parameters have higher weight values (Fig. 6):
• Profile P1 (general). This is the most general possible profile, where the user does not specify any constraint or interval for the QoS parameters. The following interval has been established: ''Any QoS parameter can have any weight between 0 and 1''.
• Profile P2 (conversational). The most important parameters are delay and cost, because a conversation must be processed in real time and be cheap. We consider this interval: ''Delay: weight between 0.5 and 0.7''.
• Profile P3 (streaming). This is a typical profile for multimedia applications, where delay is less important than bandwidth (whose high values permit transmitting much data per second). The QoS parameters are limited by the following intervals: ''Bandwidth: weight between 0.5 and 0.7; Delay: weight between 0.1 and 0.3''.
• Profile P4 (downloading files). This profile [28] considers that the user has just started to download some multimedia files using the UMTS network and wishes to use a cheaper, high-bandwidth access network to complete downloading the files. The bandwidth is more important than all the other QoS parameters, while service cost is the next parameter in importance. We have considered two possibilities for the weight ranges of these parameters in order to contrast with more experimental results: profiles P4A (''Bandwidth: weight between 0.6 and 0.8; Cost: weight between 0.1 and 0.2'') and P4B (''Bandwidth: weight between 0.5 and 0.7; Cost: weight between 0.2 and 0.4'').

Table 1
DS1, DS2 and DS3 data sets consist of several networks characterized by the following QoS parameters: B = bandwidth (kbps), E = BER (dB), D = delay
(ms), S = security level, C = cost (eur/MB), L = network latency (ms), J = jitter (ms), R = burst error, A = average retransmissions/packet, P = packet
loss (%), G = received signal strength indication RSSI (dBm), N = network coverage area (km), T = reliability, W = battery power requirement (W), and
V = mobile terminal velocity (m/s).

DS1:
Net Type B E D S C L J R A P
0 UMTS 1700 0.001 19 8 0.9 9 6 0.5 0.4 0.07
1 WLAN 2500 10E−5 30 7 0.1 30 10 0.2 0.2 0.05
2 WLAN 2000 10E−5 45 6.5 0.2 28 10 0.25 0.3 0.04
3 WLAN 2500 10E−6 50 6 0.5 30 10 0.2 0.2 0.04
DS2:
Net Type B E D S C
0 Wi-Fi 5100 0.01 70 2 0.2
1 Wi-Fi 5100 0.01 65 1 0.2
2 WiMax 256 0.01 85 3 0.3
3 Wi-Fi 5100 0.01 75 3 0.2
4 Wi-Fi 5100 0.01 55 3 0.2
5 UMTS 384 0.03 80 5 0.2
DS3:
Net Type G B N L T S W V C
0 UMTS 2.46 1.105 1.822 0.549 2.460 2.460 0.449 2.46 0.549
1 WLAN-1 2.46 2.014 1.221 0.449 2.226 1.916 0.549 1.01 0.407
2 WLAN-2 2.46 2.46 1.105 0.427 2.226 1.822 0.549 1.01 0.407

Fig. 6. Weight limits for the user’s profiles.

• Profile P5 (videoconference). This profile [28] corresponds to a video call where the order of importance is: bandwidth, service cost, and network latency. Thus, we have limited the QoS parameters by the following intervals: ''Bandwidth: weight between 0.4 and 0.6; Cost: weight between 0.2 and 0.3; Network latency: weight between 0.1 and 0.2''.

Profiles P4 and P5 are only considered for DS3. Note that these profiles are not quantified in [28], so we have established the numerical limits for P4A, P4B and P5 in order to perform experiments with GAVH and SEFI.
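For reference, the profile intervals above map directly onto the u_min/u_max bounds used by SEFISA and GAVH in (4). The following C sketch encodes profile P3 for the DS2 parameter ordering {B, E, D, S, C} as an example; the struct layout is our own assumption, and unconstrained parameters keep the full [0, 1] range.

#define MAX_NQ 10

typedef struct {
    double u_min[MAX_NQ];   /* lower weight bound per QoS parameter */
    double u_max[MAX_NQ];   /* upper weight bound per QoS parameter */
} profile;

/* Profile P3 (streaming) for the DS2 ordering {B, E, D, S, C}:
 * bandwidth weight in [0.5, 0.7], delay weight in [0.1, 0.3],
 * remaining parameters unconstrained in [0, 1]. */
static const profile P3 = {
    .u_min = { 0.5, 0.0, 0.1, 0.0, 0.0 },
    .u_max = { 0.7, 1.0, 0.3, 1.0, 1.0 }
};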

Table 2
Tuned values for GAVH.
Parameter Value Parameter Value Parameter Value

NG 50 h 0.005 smut 2
popsize 1000 pmut 5 qmut 2

5.2. Implementation details

GAVH has been programmed in ANSI C without any external library, with the purpose of allowing its portability to any mobile-device processing environment. In this sense, it has been successfully tested on an embedded processor based on a small and low-power FPGA device.

We have used the same data sets for GAVH, SEFI and SEFISA, in order to make realistic performance comparisons among them.

Many experiments have been performed. They are different configurations of the algorithm (population size, number of generations, mutation characteristics) as well as of the problem (data set, user profile, number of available wireless networks and considered QoS parameters), in order to tune the main parameters (that is to say, to obtain the best GAVH configuration), make precision comparisons of the results and evaluate computational efforts.

In addition, it is necessary to perform several runs of the same GAVH configuration to obtain reliable conclusions, because the degree of uncertainty due to the random nature of the mutation operator calls for a statistical analysis. For this purpose, we have considered 50 runs for each experiment. The statistical information obtained in each experiment is: minimum, maximum and average values of the fitness, variance and standard deviation. For the parameter tuning, we consider the average fitness as the solution given by GAVH, although the minimum fitness found is a better value.

5.3. Parameter tuning

The parameter tuning affects the mutation strategy, population size and number of generations. The common configuration was: general purpose user profile, data set DS1, four wireless networks, and four QoS parameters.

The parameter tuning has involved 21,000 runs: 4 experimental rounds (combinations of h = {0.01, 0.005} and pmut = {5, 20}) × 15 experiment types (combinations of NG = {50, 500, 2000} and popsize = {50, 100, 500, 1000, 2500}) × 7 experiments (smut = 0, plus combinations of qmut = {1, 2, 3} for smut = 1 and qmut = {0, 1, 2} for smut = 2) × 50 runs.

The experimental rounds shed some light on the setting of the main GAVH parameters. A detailed and in-depth analysis of the results provides some interesting conclusions: lower population sizes produce worse fitness values, the best mutation strategy is exchange at the third gene position, and there is a slight stagnation of the optimal solution when more than 50 generations are performed. The selected values for the main GAVH parameters take the computing time constraint into account, that is to say, we select well-balanced values that avoid increasing the computing time and provide good results. On the other hand, we have chosen h = 0.005 and a 5% mutation rate after observing that GAVH behaves better for these values (the dispersion of the results of the different runs, given by the standard deviations, is lower) under any experimental configuration. All these considerations have led us to set the values shown in Table 2. Moreover, additional experiments considering NQ = 5 have led to the same conclusions.

6. Performance comparison

We have carried out performance comparisons between our heuristic proposals (SEFI, SEFISA and GAVH) and other
algorithms found in the literature (AHP [26] and FMADM [28]). The algorithms AHP and FMADM are well-proven proposals
and have been applied in environments characterized by well-described data sets, allowing us to use the same experimental
framework for this optimization problem in order to make realistic comparisons.

6.1. GAVH vs AHP

The common experimental framework for the comparison of our heuristics with AHP consists of data set DS2, six wireless
networks, five QoS parameters and two user’s profiles: conversational (P2) and streaming (P3).
Table 3 shows the optimal solutions found by the algorithms. Each fitness value of AHP has been calculated using (2) for the combination of weights given in [27]. Each fitness value of GAVH is the average value of the 50 runs of the same experiment, so there is no single corresponding combination of weights; nevertheless, we have reported the combination of weights that matches the minimum fitness found in these 50 runs. Finally, we have also considered different search precisions in order to extend the comparison.

As we can see in Table 3, the solutions found by GAVH have lower fitness than those of AHP, SEFI and SEFISA in all cases, even with low precision, thus proving the better performance of GAVH in precision terms.
In addition, the performance of GAVH also improves in time terms, giving lower computing times than SEFI and SEFISA when the precision increases, as we have checked using an Intel Core2Duo E6750 2.6 GHz test CPU (the computing time of AHP was not reported in [27]).

Table 3
Solutions (weights and their corresponding fitness values) given by AHP, SEFI, SEFISA and GAVH for the same experimental framework: DS2, NN =
4, NQoS = 5, P2 and P3, considering different search precisions. Computing times are also shown.
Weight: w0 w1 w2 w3 w4 Fitness Time
QoS: B E D S C (s)

P2:
AHP 0.065 0.065 0.614 0.128 0.128 1.26 N/A
SEFI (h = 0.01) 0.47 0.01 0.5 0.01 0.1 −2.08 0.55
SEFI (h = 0.005) 0.485 0.005 0.5 0.005 0.005 −2.17 9.43
SEFISA (div = 3) 0.34 0.09 0.5 0.03 0.04 −1.41 0.05
SEFISA (div = 10) 0.44 0.02 0.5 0.02 0.02 −1.89 3.06
GAVH (h = 0.01) 0.54 0.36 0.02 0.07 0.01 −5.77 0.84
GAVH (h = 0.005) 0.68 0.23 0.015 0.07 0.005 −5.82 0.84
P3:
AHP 0.545 0.035 0.178 0.121 0.121 −4.43 N/A
SEFI (h = 0.01) 0.6 0.28 0.1 0.01 0.01 −6.04 0.33
SEFI (h = 0.005) 0.6 0.29 0.1 0.005 0.005 −6.07 3.67
SEFISA (div = 3) 0.7 0.1 0.1 0.05 0.05 −6.17 0.03
SEFISA (div = 10) 0.7 0.16 0.1 0.02 0.02 −6.36 2.00
GAVH (h = 0.01) 0.69 0.13 0.01 0.06 0.11 −6.48 0.84
GAVH (h = 0.005) 0.685 0.19 0.005 0.02 0.1 −6.55 0.84

Table 4
Optimal solutions given by FMADM, GAVH and SEFI for the same experimental framework: DS3, NN = 3 and NQoS = 9, for profiles P4 (P4A and P4B are only applied when we consider GAVH or SEFI) and P5. All these solutions correspond to the chosen optimal network, WLAN-2.
Weight: w0 w1 w2 w3 w4 w5 w6 w7 w8
QoS: G B N L T S W V C

FMADM (P4) 0.047 0.459 0.047 0.047 0.047 0.047 0.047 0.047 0.21
FMADM (P5) 0.040 0.453 0.040 0.098 0.040 0.040 0.040 0.040 0.21
GAVH (P4A) 0.06 0.64 0.02 0.02 0.02 0.02 0.02 0.02 0.18
GAVH (P4B) 0.02 0.5 0.02 0.02 0.02 0.02 0.02 0.02 0.36
GAVH (P5) 0.14 0.46 0.02 0.02 0.02 0.02 0.02 0.02 0.28
SEFI (P4A) 0.02 0.66 0.02 0.02 0.02 0.02 0.02 0.02 0.2
SEFI (P4B) 0.02 0.56 0.02 0.02 0.02 0.02 0.02 0.02 0.3
SEFI (P5) 0.02 0.48 0.02 0.1 0.02 0.02 0.02 0.02 0.3

For other microprocessors with lower performance, we also observe the imposed maximum computing-time restriction of 1 s, in order to allow using GAVH in mobile devices, adjusting the GAVH configuration properly if necessary.

6.2. GAVH vs FMADM

FMADM is an intelligent algorithm of the Multiple Criteria Decision-Making type that uses fuzzy logic to select an optimal combination of QoS weights according to the user profile considered. The common experimental framework for the comparison of our heuristics with FMADM consists of data set DS3, three wireless networks, nine QoS parameters and two user profiles: downloading files (P4A and P4B) and videoconference (P5).

We have adjusted GAVH according to Table 2 but with h = 0.02, because the very high number of QoS parameters in DS3 increases the computing time too much, above the real-time requirements, if h = 0.005 is used. Nevertheless, even with this lower search precision, we have found better results with GAVH and SEFI than with FMADM, maintaining computing times near 0.2 s for our algorithmic proposals.

Table 4 shows the optimal solutions found by GAVH and SEFI and the two solutions found by FMADM, while Table 5 shows their fitness values, which belong to WLAN-2 in all cases. Hence we can extract two conclusions: on the one hand, SEFI and GAVH always improve on the FMADM results; on the other hand, GAVH improves on (P5) or equals (P4) the SEFI results.

All the experiments done, involving different scenarios, user profiles and precision levels, lead us to recommend GAVH over the other heuristics (SEFI and SEFISA, developed by us; AHP and FMADM, proposed by other authors) to solve the optimization problem in practice.

6.3. Hardware evaluation of GAVH

Tables 3 and 5 are representative of many other comparisons with AHP and FMADM done for other experimental
frameworks, where different numbers of networks, QoS parameters, data sets and user’s profiles have been configured.
This way, GAVH emerges as a very suitable algorithm in precision and computing time to solve the optimization problem,
giving the best network for the QoS-based VH decision phase.

Table 5
Fitness values corresponding to the optimal solutions given in Table 4. For GAVH and SEFI, statistics are also supplied: number of generated combinations, evaluated solutions, variance and standard deviation.
Fitness Combinations Solutions Variance Standard deviation

FMADM (P4) −0.784
FMADM (P5) −0.799
GAVH (P4A) −0.851 24,255 12,825 7.1E−7 8.4E−4
GAVH (P4B) −0.851 24,310 12,870 7.1E−7 8.4E−4
GAVH (P5) −0.851 48,180 23,980 3.6E−6 1.9E−3
SEFI (P4A) −0.851 24,255 12,825
SEFI (P4B) −0.851 24,310 12,870
SEFI (P5) −0.847 48,180 23,980

Fig. 7. Platform with a low-power Spartan3 FPGA, having features similar to those of some mobile devices.

As GAVH is our bet for embedded intelligence in the VH decision phase in mobile terminals, we have tested it in hardware, just as we previously did with SEFI [30]. The test was done by means of the Microblaze embedded low-power and low-performance FPGA microprocessor [44], because it can be considered a first approach to the low-power embedded microprocessors present in many mobile devices, where a few hundred MHz and limited processing resources are often available. Nevertheless, modern mobile devices have better features than Microblaze, so if GAVH can run satisfactorily on it, we can be confident of its behavior on modern terminals. The prototyped processor was designed as a single-processor system with a system clock frequency of 125 MHz, a floating point unit, a standalone operating system, and 256 KB of local memory that stores the algorithm. It was implemented on a XUPV5-LX110T evaluation platform that uses a Xilinx Virtex 5 XC5VLX110T-FF1136 device just for testing and comparison purposes, given that the main goal of our work is the heuristic.
The limited performance of this soft processor forces us to adjust some of the GAVH parameters shown in Table 2 (population size and number of generations) in order to keep the response time below 1 s (using previously stored initial populations to avoid spending more time generating them). Reducing the population size and/or the number of generations may imply a certain precision loss in the solution found, but even so the result is still better than those reported by AHP and the remaining algorithms. For example, if we consider the case DS2, NN = 4, NQoS = 5, P2, and h = 0.01 (Table 3), where the fitness was −5.77 for GAVH considering the values of Table 2, we can now obtain a fitness of −4.36 by reducing both the population size and the number of generations to 20. Note that this result is not only better than AHP (1.26), but also better than SEFI (−2.08) and SEFISA (−1.89).
Pursuing the goal of reducing the power consumption, we have implemented the same processor design using a cheaper and lower-power FPGA, the Xilinx Spartan3E XC3S500E-FG320 device (Fig. 7), which has some similarities with current microprocessors in certain mobile devices. We have done a power analysis of the Microblaze processor on this device, obtaining a total on-chip power of 96.98 mW, which is a very low value in comparison with general purpose CPUs, whose idle values are a few watts.
Finally, it is important to know the energy impact of running GAVH. As a first approximation, we have measured the CPU power consumption in watts in two states of a battery-powered architecture, idle and running GAVH, expressing the difference as a percentage. Although the measurements were done for a particular processor (Intel Core2Duo E6750 2.6 GHz CPU), the obtained rate could be extrapolated to any CPU-based battery-powered architecture. The results were obtained with the Powerstat tool under Linux Ubuntu 13, which measures the power consumption of a laptop using the ACPI battery information. Considering the same configuration given in Table 2, the measurements were: CPU idle, 32 watts; CPU running GAVH, 48 watts. Therefore, the power impact of GAVH is about 50% of the idle CPU consumption.

7. Conclusions and future work

We have developed some novel algorithmic proposals, rooted in intelligent computing, that solve the optimization problem of finding the best combination of weights for the quality of service parameters of heterogeneous wireless networks for a mobile terminal. The solution found allows the mobile terminal to decide on the best network to establish a connection in a Vertical Handoff process, according to the user's preferences. Among these algorithms, the one based on Genetic Algorithms has demonstrated the best performance, in terms of precision and computing time, even against other heuristics satisfactorily applied to the same optimization problem.

Our proposal tries to improve the overall performance of a mobile terminal by efficiently managing the quality of service resources of the wireless networks discovered along its trajectory, emphasizing key functionalities such as security level, bandwidth, delay, and so on, for the different services requested from the networks. This way, we have developed an algorithm proposal that is fast enough to give solutions in near real time, and light enough to be implemented in low-power embedded microprocessors, taking into account the type of electronic devices involved in such heterogeneous wireless communication scenarios.
Finally, we want to tackle a real-world validation as a future research line. On the one hand, we plan to develop GAVH code optimized for Android OS in order to run it on smartphones and tablets in real wireless and mobile environments. On the other hand, we want to develop a tool capable of integrating simulated dynamic environments (obtained from real data, randomly generated values, or other known tools) where a mobile terminal discovers different wireless access networks with their corresponding QoS values and constantly applies GAVH to evaluate its performance and behavior, among many other possibilities.

References

[1] M. Kassar, B. Kervella, G. Pujolle, An overview of vertical handover decision strategies in heterogeneous wireless networks, Comput. Commun. 31
(2008) 2607–2620.
[2] J. McNair, F. Zhu, Vertical handoffs in fourth-generation multinetwork environments, IEEE Wirel. Commun. 11 (3) (2004) 8–15.
[3] C. Chiasserini, F. Cuomo, L. Piacentini, M. Rossi, I. Tinirello, F. Vacirca, Architectures and protocols for mobile computing applications: a reconfigurable
approach, Comput. Netw. 44 (4) (2004) 545–567.
[4] J. Pan, S. Paul, R. Jain, A survey of the research on future internet architectures, IEEE Commun. Mag. 49 (7) (2011) 26–36.
[5] S. Lee, K. Sriram, K. Kim, Y. Kim, N. Golmie, Vertical handoff decision algorithms for providing optimized performance in heterogeneous wireless
networks, IEEE Trans. Veh. Technol. 58 (2) (2009) 865–881.
[6] D. He, C. Chi, S. Chan, C. Chen, J. Bu, M. Yin, A simple and robust vertical handoff algorithm for heterogeneous wireless mobile networks, Wirel. Pers.
Commun. 59 (2) (2011) 361–373.
[7] X. Yan, Y. Sekercioglu, S. Narayanan, A survey of vertical handover decision algorithms in fourth generation heterogeneous wireless networks, Comput.
Netw. 54 (11) (2010) 1848–1863.
[8] A. Zahran, B. Liang, Performance evaluation framework for vertical handoff algorithms in heterogeneous networks, in: 2005 IEEE International
Conference on Communications, ICC’05, 2005, pp. 173–178.
[9] S. Mohanty, I. Akyildiz, A cross-layer (layer 2 + 3) handoff management protocol for next-generation wireless systems, IEEE Trans. Mob. Comput. 5
(10) (2006) 1347–1360.
[10] X. Yan, N. Mani, Y. Sekercioglu, A traveling distance prediction based method to minimize unnecessary handovers from cellular networks to WLANs,
IEEE Commun. Lett. 12 (1) (2008) 14–16.
[11] C. Lee, L. Chen, M. Chen, Y. Sun, A framework of handoffs in wireless overlay networks based on mobile IPv6, IEEE J. Sel. Areas Commun. 23 (11) (2005)
2118–2128.
[12] K. Yang, I. Gondal, B. Qiu, L. Dooley, Combined SINR based vertical handoff algorithm for next generation heterogeneous wireless networks, in: 2007
IEEE Global Telecommunications Conference, GLOBECOM’07, 2007, pp. 4483–4487.
[13] C. Chi, X. Cai, R. Hao, F. Liu, Modeling and analysis of handover algorithms, in: 2007 IEEE Global Telecommunications Conference, GLOBECOM’07, 2007,
pp. 4473–4477.
[14] F. Zhu, J. McNair, Optimizations for vertical handoff decision algorithms, in: 2004 IEEE Wireless Communications and Networking Conference,
WCNC’04, 2004, pp. 867–872.
[15] A. Hasswa, N. Nasser, H. Hassanein, Tramcar: a context-aware crosslayer architecture for next generation heterogeneous wireless networks, in: 2006
IEEE International Conference on Communications, ICC’06, 2006. pp. 240–245.
[16] R. Tawil, G. Pujolle, O. Salazar, A vertical handoff decision scheme in heterogeneous wireless systems, in: 67th Vehicular Technology Conference,
VTC’08, 2008, pp. 2626–2630.
[17] N. Nasser, S. Guizani, E. Al-Masri, Middleware vertical handoff manager: a neural network-based solution, in: 2007 IEEE International Conference on
Communications, ICC’07, 2007, pp. 5671–5676.
[18] K. Pahlavan, P. Krishnamurthy, A. Hatami, M. Ylianttila, J. Makela, R. Pichna, J. Vallstron, Handoff in hybrid mobile data networks, IEEE Pers. Commun.
7 (2) (2000) 34–47.
[19] L. Xia, L. Jiang, C. He, A novel fuzzy logic vertical handoff algorithm with aid of differential prediction and pre-decision method, in: 2007 IEEE
International Conference on Communications, ICC’07, 2007, pp. 5665–5670.
[20] A. Bhuvaneswari, G. Raj, An overview of vertical handoff decision making algorithms, Int. J. Comput. Netw. Inf. Secur. 9 (2012) 55–62.
[21] E. Stevens-Navarro, Y. Lin, V. Wong, An MDP-based vertical handoff decision algorithm for heterogeneous wireless networks, IEEE Trans. Veh. Technol.
57 (2008) 1243–1254.
[22] S. Sharna, M. Murshed, Performance improvement of vertical handoff algorithms for QoS support over heterogeneous wireless networks, in: Thirty-
Fourth Australasian Computer Science Conference, ACSC ’11, 2011, pp. 17–24.
[23] D. Lee, Y. Han, J. Hwang, QoS-based vertical handoff decision algorithm in heterogeneous systems, in: 17th IEEE International Symposium on Personal,
Indoor and Mobile Radio Communications, 2006, pp. 11–14.
[24] C. Ceken, S. Yarkan, H. Arslan, Interference aware vertical handoff decision algorithm for quality of service support in wireless heterogeneous networks,
Comput. Netw. 54 (2010) 726–740.
[25] K. Vasu, S. Maheshwari, S. Mahapatra, C. Kumar, QoS-aware fuzzy rule-based vertical handoff decision algorithm incorporating a new evaluation
model for wireless heterogeneous networks, EURASIP J. Wirel. Commun. Netw. (2012) 3–22.
[26] Q. Song, A. Jamalipour, A network selection mechanism for next generation networks, in: IEEE International Conference on Communications, ICC 2005,
2005, pp. 1418–1422.
[27] I. Lassoued, J. Bonnin, Z. Hamouda, A. Belghith, A methodology for evaluating vertical handoff decision mechanisms, in: Seventh International
Conference on Networking, ICN 2008, 2008, pp. 377–384.

[28] Y. Nkansah-Gyekye, J. Agbinya, Vertical handoff decision algorithm for UMTS-WLAN, in: The 2nd International Conference on Wireless Broadband
and Ultra Wideband Communications, AusWireless 2007, 2007, pp. 1–6.
[29] Q. Song, A. Jamalipour, A quality of service negotiation-based vertical handoff decision scheme in heterogeneous wireless systems, European J. Oper.
Res. 191 (3) (2008) 1059–1074.
[30] M. Jaraiz, J. Gomez-Pulido, M. Vega-Rodriguez, J. Sanchez-Perez, Fast decision algorithms in low-power embedded processors for quality-of-service
based connectivity of mobile sensors in heterogeneous wireless sensor networks, Sensors 12 (2) (2012) 1612–1624.
[31] N. Labraoui, M. Gueroui, M. Aliouat, Secure DV-Hop localization scheme against wormhole attacks in wireless sensor networks, Trans. Emerg.
Telecommun. Technol. 23 (4) (2012) 303–316.
[32] J. Yick, B. Mukherjee, D. Ghosal, Wireless sensor network survey, Comput. Netw. 52 (12) (2008) 2292–2330.
[33] D. Buell, T. El-Ghazawi, K. Gaj, V. Kindratenko, High-performance reconfigurable computing, Computer 40 (2007) 23–27.
[34] S. Hauck, A. DeHon, Reconfigurable Computing, The Theory and Practice of FPGA-Based Computation, Morgan Kaufmann, 2008.
[35] M. Duranton, et al., The HIPEAC vision, in: HiPEAC Network of Excellence on High Performance and Embedded Architecture and Compilation, 2011.
Available online: http://www.hipeac.net/system/files/hipeacvision.pdf (accessed on 02.02.13).
[36] S. Kirkpatrick, D. Gelatt, M. Vecchi, Optimization by simulated annealing, Science 220 (1983) 671–680.
[37] V. Cerny, A thermodynamical approach to the travelling salesman problem: an efficient simulation algorithm, J. Optim. Theory Appl. 45 (1) (1985)
41–51.
[38] D. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, 1989.
[39] C. Reeves, J. Rowe, Genetic Algorithms — Principles and perspectives. A guide to GA Theory, Kluwer, 2003.
[40] C.W. Ahn, Advances in Evolutionary Algorithms, Springer, 2006.
[41] European Telecommunications Standards Institute. 2008. Quality of Service (QoS) concept and architecture. 3rd Generation Partnership Project
(3GPP), TS 23.107 V8.0.0.
[42] J. Chen, J. Gilbert, Measured Performance of 5-GHz 802.11a Wireless LAN Systems, Atheros Communications, Inc., White Paper, 2001.
[43] L. Betancur, R. Hincapie, R. Bustamante, WiMAX channel — PHY model in network simulator 2. in: 2006 Workshop on ns-2: the IP network simulator,
WNS2’06, 2006.
[44] Xilinx Inc. MicroBlaze Processor Reference Guide. UG081 (v13.4), Xilinx Inc., 2012
