
Comparative Analysis of Task Offloading Strategies in Mobile Edge Computing Systems

Hamda Arfan

Abstract -- Task offloading plays a crucial role in edge computing and the Internet of Things (IoT). It involves transferring computation-intensive tasks to more powerful remote devices. This process has many benefits, including prolonged battery life, reduced latency, and enhanced application performance. Most importantly, a task offloading method determines whether certain segments of an application should be executed locally or remotely. This decision-making process is influenced by factors such as the characteristics of the application, network conditions, hardware capabilities, and mobility, which collectively shape the operational environment of the offloading system. This paper examines the approaches currently used for task offloading and resource allocation in mobile edge computing (MEC) environments. We analyze machine learning-based algorithms for task offloading and resource allocation, examining Meta Reinforcement Learning (MRL) and Deep Reinforcement Learning (DRL) algorithms such as MRLCO, DRL-based offloading, QoS-driven offloading, and SARSA-based offloading. Their efficacy is assessed in reducing latency, optimizing resource utilization, and enhancing overall system performance. Finally, we highlight the strengths and weaknesses of each approach, offering valuable insights for future research in Mobile Edge Computing (MEC) task offloading.

Keywords -- Task offloading; machine learning; algorithm; SARSA-based offloading; Mobile Edge Computing (MEC); Deep Reinforcement Learning (DRL); Meta Reinforcement Learning (MRL)

1. INTRODUCTION

The rapid growth of the Internet of Things (IoT) has established seamless connections between devices and sensors worldwide, leading to the continuous generation of large data streams. This interconnected network of IoT devices has significantly influenced many aspects of modern life, for instance the automation of household tasks in smart homes, the transformation of healthcare, and the enhancement of communication systems. Despite the evident advantages brought by these technologies, they usually have strict requirements for memory, energy usage, and computing capacity. These requirements stem from the desire to maximize the efficiency of devices while still ensuring that they can operate for long periods. However, even with continuous advancements in device technology, certain IoT tasks remain highly resource-demanding, which poses a challenge.

Leveraging the strong infrastructure provided by data centers, the cloud offers a flexible solution. It allows applications to be migrated from mobile devices with limited computational capacity to remote cloud servers with heavy-duty computing resources. This migration not only overcomes the restrictions inherent to mobile devices but also makes the large amounts of computing power and storage space of cloud servers available. Furthermore, the cloud environment offers abundant automation tools and features that greatly simplify scaling toward end devices. Provisioning and managing computational resources would otherwise be a complex infrastructure-management burden for end users; automation eases this task and relieves them of it.

Therefore, offloading computation-intensive processing from IoT devices to a centralized cloud has emerged as the most widely used solution in the IoT sector. This approach helps to deal with limited resources and increases the processing capabilities of IoT-based electronics. It also opens up numerous opportunities for accomplishing more tasks by scaling up innovations across IoT deployments. Applications can harness cloud computing to obtain better results, save energy, and grow according to particular needs without affecting other IoT-sector parameters, which in turn promotes their success in different areas [1].

While leveraging remote cloud infrastructure brings numerous benefits, it also has drawbacks, especially for applications that require high throughput and low-latency communication. The challenges associated with remote cloud infrastructures arise from the considerable physical distance between end devices and the cloud, unreliable transport networks, substantial network costs, and increased security risks. Consequently, it is important to explore alternative solutions that address these challenges effectively [1].

Mobile Edge Computing (MEC) presents a revolutionary model that solves the challenges associated with distant cloud infrastructure by deploying edge servers close to end users. These edge servers make it easier for devices to connect to the cloud and cooperate by forming a network of computers, greatly reducing the physical distance between consumer devices and computing resources. These infrastructure components are designed to provide additional resources between consumer devices and the distant cloud, creating a distributed computing environment. By using edge servers located closer to end users, MEC reduces bandwidth usage and communication delay, thereby improving the responsiveness of IoT applications. This proximity also enables real-time applications such as augmented reality, video analytics, and industrial automation, which require immediate data processing and decision-making. In addition, by processing sensitive information locally, MEC reduces the need for data transmission over public or long-distance networks, improving data privacy and security [2].

The main technical hurdle in task offloading within MEC is guaranteeing delay-bounded Quality of Service (QoS) for applications that use this approach. The challenge has two primary dimensions. First, the experience of mobile users can suffer significantly when numerous users offload tasks at the same time, contending for the limited communication and computation resources available at the edge servers. The increased demand strains these resources, which can delay task execution and data processing and directly impact the responsiveness and reliability of the offloaded applications [3].

Second, moving tasks to edge servers introduces an additional data-transfer layer. This additional requirement comes from the need to transfer data between mobile devices and edge servers, as well as the processing time required to complete the work. The increased load can cause additional delays in completing tasks and raise energy consumption, since mobile devices use more power to communicate with edge servers. The trade-off between offloading computation to reduce device energy consumption and the associated communication overhead is therefore critical for efficient and effective task offloading in MEC environments [3].

This paper offers a comprehensive analysis of task offloading in mobile edge computing (MEC), focusing on offloading strategies, algorithms, and factors influencing offloading decisions. The contributions of this article include:

 A thorough evaluation of the current strategies for offloading and allocation of resources within the MEC framework, including an extensive examination of existing strategies, algorithms, and influencing factors. By providing a comprehensive overview, it facilitates a more thorough understanding of how tasks are offloaded within MEC environments.

2. MOTIVATIONS

Mobile Edge Computing (MEC) systems play a critical role in optimizing task offloading strategies. The growing demand for low-latency services and real-time applications means that task offloading techniques must be effective to satisfy changing customer demands. However, determining the best offloading technique is difficult in MEC environments due to the variety of applications, network conditions, and edge resources. Through a thorough comparative analysis of task offloading mechanisms in MEC systems, this study aims to address this difficulty. The goal of this paper is to provide insight into the advantages, disadvantages, and applicability of different modern task offloading strategies by comparing and contrasting them.

It is essential for MEC stakeholders, including network operators, application developers, and end users, to make well-informed decisions about offloading strategies that best suit their unique needs and goals. Effective task offloading is critical for an application's ability to analyze data in real time and communicate with low latency.
Furthermore, there is a necessity for strong and flexible offloading techniques. We seek to determine the best methods for optimizing latency, resource usage, and operating expenses across various MEC deployment situations by performing a comparative analysis.

The goal of this research is to draw on recent advances in reinforcement learning, particularly deep reinforcement learning and meta-reinforcement learning. The effectiveness and adaptability of task-offloading methods in MEC systems could potentially be improved by integrating these approaches, which improve responsiveness and performance by allowing the system to learn from past experience and make intelligent choices promptly. We investigate how these techniques can be used to optimize task offloading, which will ultimately advance MEC technology and improve user experiences in latency-sensitive applications.

3. LITERATURE REVIEW

A. Deep Reinforcement Learning for Task Offloading in Mobile Edge Computing Systems

This paper describes a Deep Reinforcement Learning (DRL) algorithm for task offloading in mobile edge computing systems.

1. Explanation of algorithm
The paper presents a novel approach for task offloading in mobile edge computing systems that makes use of deep reinforcement learning. The technique uses non-divisible tasks with queuing systems, which better mimic real-world circumstances. It differs from prior efforts that concentrated on divisible or delay-tolerant tasks, and it increases realism by enabling jobs to span several time windows.

2. Key Components: It has the following three key components:
a) Deep Reinforcement Learning (DRL): Each mobile device may independently decide which tasks to offload without having to know the task models or the decisions made by other devices, thanks to the model-free DRL approach used by the algorithm.
b) Long Short-Term Memory (LSTM): LSTM is used to improve the estimation of long-term cost by capturing long-term dependencies in the data. This is essential when considering the dynamic nature of mobile edge computing systems.
c) Dueling Deep Q-Network (DQN): To increase the effectiveness and stability of the learning process, the method combines dueling-DQN and double-DQN approaches. Double-DQN lessens the possibility of overestimation bias, whereas the dueling DQN aids in estimating the value of taking various actions in various states.

Results of algorithm
 The algorithm works efficiently in cases where tasks are delay-sensitive or edge nodes have a high workload. It is also scalable as the number of connected devices or the application complexity increases. This ability to adapt to changing workloads and network conditions shows its potential for the future.
 Hence it enhances overall system performance and responsiveness, whether it is handling time-sensitive activities or situations with high edge-node utilization.
 In simulations with 50 mobile devices and five edge nodes, the approach greatly decreases the ratio of dropped tasks by 86.4% to 95.4% and the average task latency by 18.0% to 30.1% compared with many current algorithms.

B. QoS Driven Task Offloading with Statistical Guarantee in Mobile Edge Computing

The paper uses a statistical QoS-driven task offloading approach.

1. Explanation of algorithm
The goal of this technique is to establish a relationship between task offloading strategies in Mobile Edge Computing (MEC) and statistical Quality of Service (QoS) requirements.

It has two main pillars: the Gibbs sampling method and convex optimization theory. These offer a strong basis for creating an algorithm that successfully assures QoS.
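As a rough, self-contained sketch of how Gibbs-style sampling can explore joint offloading decisions, consider the toy model below. The cost values, contention term, and temperature are illustrative assumptions, not the surveyed paper's formulation:

```python
import math
import random

# Hypothetical per-device costs (latency units): each device either runs its
# task locally or offloads it; the edge cost grows with the number of devices
# that offload simultaneously (contention), which makes decisions interdependent.
LOCAL_COST = [4.0, 6.0, 5.0]          # cost if device i executes locally
EDGE_BASE, EDGE_PER_USER = 1.0, 2.0   # offload cost = base + per-user * #offloaders

def cost(i, decisions):
    """Cost paid by device i under the joint decision vector (0=local, 1=offload)."""
    if decisions[i] == 0:
        return LOCAL_COST[i]
    return EDGE_BASE + EDGE_PER_USER * sum(decisions)

def gibbs_offloading(temperature=0.5, sweeps=200, seed=0):
    """Resample one device's decision at a time, conditioned on the others,
    favouring low-cost choices with probability ~ exp(-cost / T)."""
    rng = random.Random(seed)
    decisions = [rng.randint(0, 1) for _ in LOCAL_COST]
    for _ in range(sweeps):
        for i in range(len(decisions)):
            weights = []
            for choice in (0, 1):
                trial = decisions[:]
                trial[i] = choice
                weights.append(math.exp(-cost(i, trial) / temperature))
            decisions[i] = 0 if rng.random() < weights[0] / sum(weights) else 1
    return decisions

result = gibbs_offloading()
print(result)
```

At low temperature the sampler concentrates on low-cost joint decisions; in the surveyed approach, the sampled objective would instead encode the statistical QoS constraint.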
A Monte Carlo process, the Gibbs sampling method is used to create samples from a probability distribution. In machine learning and statistics, it is frequently used for tasks like model fitting and parameter estimation. Here, the Gibbs sampling technique likely helps the algorithm discover the best task offloading options under statistical QoS criteria and explore the solution space effectively.

Convex optimization theory deals with mathematical optimization problems that have both convex objective functions and convex constraints. It provides strong methods for determining optimal solutions to a variety of problems, such as resource allocation and job offloading in MEC systems.

2. Effectiveness of algorithm in optimizing task offloading strategies: To verify the efficiency of the suggested method, a thorough experimental process is conducted, using simulations or real-world testing to evaluate the algorithm's performance in various scenarios. Even in an environment with limited resources, the algorithm effectively achieves task offloading under QoS requirements. The literature shows that the method leads to a considerable improvement in energy efficiency and also provides reliable convergence. This is critical in MEC because of its distributed architecture and resource limitations: MEC systems face significant energy consumption challenges. Compared with the baseline algorithm, this algorithm is superior in providing statistical QoS guarantees because it consistently delivers performance levels beyond what conventional methods achieve [3].

3. Contributions to the Field: The algorithm significantly improves the understanding of task offloading strategies with a statistical QoS guarantee. Using convex optimization theory and the Gibbs sampling method, it outperforms alternatives in terms of QoS assurance, meaning tasks are completed before their deadline with a probability above a given threshold, while aiming to enhance energy efficiency [3].

C. Task Offloading and Resource Allocation for Mobile Edge Computing by Deep Reinforcement Learning Based on SARSA

This paper uses the SARSA (State-Action-Reward-State-Action) algorithm. The literature surveys computation offloading within mobile edge computing networks (MECNs), emphasizing their integration into cyber-physical-social systems (CPSS). The paper presents a new SARSA technique based on reinforcement learning that is intended to optimize offloading choices in MEC networks. The Bellman equation is used to update the Q values in the mathematical formulation that measures the overall cost of the MEC offloading system.

The paper highlights two important aspects.

1. Introduction of Algorithm
SARSA stands for State-Action-Reward-State-Action. It is a reinforcement learning algorithm, here specialized for optimizing offloading decisions within mobile edge computing networks (MECNs). The algorithm manages resources on edge servers and selects the most effective task offloading decision. By intelligently choosing when and where to offload activities, OD-SARSA aims to decrease energy consumption and computation time delay. Based on the state-action-reward-state-action architecture, the algorithm learns from past experience to select actions that lead to the desired results.

In today's MEC landscape, where cyber-physical-social systems (CPSS) play an integral role, efficient resource management is very important for enhancing system performance and minimizing operational costs. SARSA addresses these challenges by using reinforcement learning to make intelligent decisions about task offloading, which ultimately enhances the overall efficiency and effectiveness of MECNs.

The paper performs a comparative analysis between OD-SARSA and RL-QL. This analysis shows the superiority of SARSA in solving the resource management challenges that are common on MEC edge servers.

2. Empirical Validation and Practical Applicability
The paper provides valuable insights into the empirical evaluation of the SARSA algorithm through rigorous experimentation.
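The SARSA update at the core of such schemes can be sketched in a few lines. The toy state (edge-queue length), costs, and dynamics below are illustrative assumptions rather than the paper's system model:

```python
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
N_STATES = 5          # state: current queue length at the edge server (0-4)
ACTIONS = (0, 1)      # 0 = execute locally, 1 = offload to the edge

def step(state, action):
    """Toy dynamics: offloading incurs a congestion-dependent cost and
    lengthens the edge queue; local execution has a fixed cost and lets
    the queue drain. All numbers are illustrative assumptions."""
    if action == 1:
        reward = -(1.0 + 0.5 * state)          # transmission + queuing delay
        next_state = min(state + 1, N_STATES - 1)
    else:
        reward = -2.0                          # fixed local computation cost
        next_state = max(state - 1, 0)
    return reward, next_state

def epsilon_greedy(Q, s, rng):
    if rng.random() < EPSILON:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[s][a])

rng = random.Random(1)
Q = [[0.0, 0.0] for _ in range(N_STATES)]
s = 0
a = epsilon_greedy(Q, s, rng)
for _ in range(20000):
    r, s_next = step(s, a)
    a_next = epsilon_greedy(Q, s_next, rng)    # on-policy choice of a'
    # SARSA update: Q(s,a) <- Q(s,a) + alpha * (r + gamma * Q(s',a') - Q(s,a))
    Q[s][a] += ALPHA * (r + GAMMA * Q[s_next][a_next] - Q[s][a])
    s, a = s_next, a_next
```

Because SARSA is on-policy, it updates toward the action actually taken next (a'), evaluating the exploratory policy it follows rather than a purely greedy one; in this toy model the agent learns to offload when the edge queue is short and compute locally when it is congested.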
These experiments involve a wide range of tasks and a large number of users, and they highlight the superior performance of the algorithm compared with traditional learning algorithms. This not only confirms the efficiency of SARSA but also makes it practically applicable in real-world settings, highlighting its potential to solve the challenges of task offloading in mobile edge computing more efficiently than existing methods.

D. Time-Minimized Offloading for Mobile Edge Computing Systems

This paper proposes the Cost-Evaluated Reassignment Offloading (ERO) algorithm for task offloading.

1. Discussion of algorithm: The primary goal of the cost-evaluated offloading algorithm is to minimize application completion time while respecting energy and time constraints. Initially, tasks are assigned to devices based on their urgency to reduce offloading cost, which ultimately reduces the time needed to complete tasks.

The algorithm also introduces the concept of relative remaining costs for task reassignment, which is crucial for further optimizing completion time within system constraints. Task assignment is the process of selecting devices to achieve the shortest completion time and prioritizing tasks based on their urgency for efficient task offloading.

Because there are multiple devices to choose from when reassigning tasks between devices, the complexity of the algorithm is O((N - 1) N^2 j^2 i^2).

2. Empirical Validation and Practical Applicability
Comprehensive simulations show that the algorithm outperforms existing algorithms such as MCO and ITAGS. ERO has also shown resilience across various system configurations, such as changing energy thresholds, fluctuating numbers of mobile terminals, and diverse task dependencies, which highlights its relevance to real-world MEC systems.

An essential feature of the algorithm is that it considers task interdependencies within applications. By using directed acyclic graphs (DAGs) to model these interdependencies, the algorithm has proven efficient in task assignment and in minimizing completion time under application-specific requirements and constraints.

The paper also highlights the heterogeneity of computation resources in MEC. Every device type, including mobile terminals, edge hosts, and cloud servers, has its own computational capabilities and communication interfaces. These differences are important to understand when formulating effective offloading strategies that leverage the strengths of each device type while mitigating their limitations.

E. Ultra-Low Latency Multi-Task Offloading in Mobile Edge Computing

The paper uses a Deep Deterministic Policy Gradient algorithm for task offloading.

1. Deep Reinforcement Learning (DRL) and MEC Integration:
The study optimizes workload offloading techniques by combining deep reinforcement learning with a Mobile Edge Computing (MEC) architecture. The paper suggests a parallel offloading paradigm to handle issues such as inefficient server usage and frequent SMD mobility in a multi-task, multi-server, multi-Smart-Mobile-Device (SMD) MEC system.

To reduce task completion time, a multi-constraint optimization problem is constructed, and a Markov decision technique is then used to turn it into a deep reinforcement learning-based offloading scheme. To determine the offloading strategy efficiently, a Deep Deterministic Policy Gradient approach is provided.

2. Parallel Offloading Model:
The algorithm introduces a model in which tasks can be offloaded simultaneously in a small area, helping to meet the need for very fast response times in densely populated cellular networks. The model focuses on finishing all tasks as quickly as possible by using deep reinforcement learning to decide how to offload tasks effectively.

3. Markov Decision Process (MDP):
A Markov decision process is used to handle the fact that the MEC system is constantly changing. It supports offloading decisions based on current conditions, considering factors such as how busy the servers are and how well the SMDs are communicating, in order to determine the best way to handle task offloading.

4. Empirical Validation and Practical Applicability
In the algorithm used in the paper, the offloading policy is validated through experiments. These experiments show a significant improvement in long-term performance compared with traditional algorithms like Offload-MBS, Offload-Nearby, and Offload-Local in a multi-server, multi-SMD, multi-task scenario. The approach outperforms them by at least 19% in ultra-low latency, server usage efficiency, and handling of SMD mobility, demonstrating the practical applicability and efficiency of the offloading approach in real-world scenarios.

F. Fast Adaptive Task Offloading in Edge Computing Based on Meta Reinforcement Learning

This paper uses the Meta Reinforcement Learning for Task Offloading (MRLCO) algorithm.

1. Introduction to Meta Reinforcement Learning (MRL):
Earlier research uses deep reinforcement learning (DRL) techniques for offloading policies in MEC environments. However, because of the low sample efficiency of DRL, these techniques adapt poorly to new environments [5]. The research proposes a new approach based on meta-reinforcement learning (MRL), which overcomes the shortcomings of DRL approaches by enabling quick adaptation to novel contexts with few gradient updates and samples. The approach models mobile applications as directed acyclic graphs (DAGs) and uses a sequence-to-sequence (seq2seq) neural network to express the offloading policy. Hence it offers a thorough framework for addressing the task offloading problem in MEC contexts.

2. Efficiency of MRLCO Method:
The paper proposes the MRLCO approach, which shows remarkable sample efficiency on new learning tasks. Because of this efficiency, user equipment (UE) may train with its own data even when its computational resources are limited, which shows the usefulness of the suggested method [5].

This literature shows the evolution from conventional MEC and DRL techniques to the novel MRL-based strategy. It makes a significant contribution to the development of MEC applications by highlighting flexibility and efficiency in task offloading, and it shows the potential for gains in network optimization and latency reduction [5].

4. ANALYSIS

In this section, we perform a comparative analysis of five methodologies used for task offloading in mobile edge computing (MEC) systems. Each methodology uses distinct algorithms and approaches to address the challenges associated with offloading decisions, aiming to optimize system performance, resource utilization, and Quality of Service (QoS) guarantees. Through this analysis, we aim to provide insights into the strengths and weaknesses of each approach and identify potential avenues for further research.

A. MRLCO

Strengths:
 On several testing datasets, MRLCO outperforms heuristic baseline methods in terms of average latency, cutting latency by up to 25% compared with the baseline methodologies.
 The hyperparameters of the algorithm are well defined and implemented using TensorFlow.

Limitations:
 The performance of the algorithm depends on the quality and diversity of the training data; if the dataset does not adequately represent real-world task offloading scenarios, results may be undesirable.
 The time complexity of the algorithm is proportional to the square of the number of tasks, so it can become very high for large mobile applications with many tasks.

B. Deep Reinforcement Learning-based Algorithms

The algorithm has both strengths and limitations, which we highlight in turn.

Strengths:
 It has a proven ability to optimize offloading techniques even in scenarios where resources are constrained.
 It provides promising results in controlled experimental environments.
 It maintains superior performance even with highly delay-sensitive tasks or increased workloads.

Limitations:
 The algorithm may not take into account every constraint on the edge nodes, which could restrict its application. The location, available resources, and network connection of individual nodes can affect their capabilities; the algorithm does not account for this, which could lead to suboptimal performance or limited capability in certain scenarios.
 The suggested distributed method might not be able to handle the significant signaling cost completely. Although different task densities affect the algorithm's performance, the precise thresholds are not well examined.
 Researchers should concentrate on scaling up deep learning models to manage the complexity of MEC settings without compromising performance.

C. Statistical QoS Guarantee Approach

Strengths:
 It provides a method to achieve statistical QoS demands in offloading strategies.
 It significantly increases energy efficiency.

Limitations:
 QoS guarantees may come at the cost of energy efficiency. The QoS algorithm prioritizes task completion within specified time bounds, which leads to increased energy consumption, so it is important to balance QoS and energy efficiency in order to ensure the overall sustainability and cost-effectiveness of MEC systems.
 Another problem is the scalability limitation caused by the time complexity of the task offloading strategy. Especially when all mobile devices are coordinated, the time complexity increases exponentially, which can lead to computational overhead and hinders scalability.
 The main focus of the research is the energy efficiency of User Equipment (UEs) in the context of task offloading; it overlooks the energy consumption of the edge server. Because the edge server's energy consumption may have a major influence on the overall energy efficiency of MEC systems, this oversight ignores a crucial aspect of the total energy consumption inside these systems.

D. SARSA-based Offloading Algorithms

Strengths:
 The SARSA algorithm uses reinforcement learning to make optimal offloading decisions on the edge servers of Mobile Edge Computing Networks (MECNs). This approach helps minimize system costs related to energy consumption and computing time delays.
 The algorithm outperforms RL-QL in terms of resource management and offloading decisions.
 SARSA considers both computing time delay and power consumption in its system model and formulates an optimization problem for reduced system costs.
 The reinforcement learning nature of the algorithm allows it to perform well under changing network conditions and task requirements, making it suitable for dynamic edge computing environments.
 SARSA is able to select safe paths, which is crucial in critical decision-making situations. This stability in decision-making enhances the algorithm's reliability when offloading intensive tasks to edge clouds.
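The kind of weighted delay-energy objective such formulations minimize can be sketched as follows; all parameter values (CPU frequencies, uplink rate, transmit power, effective-capacitance constant) are illustrative assumptions, not values from the surveyed paper:

```python
# Weighted delay-energy system cost for one task: execute locally or offload.
# A task of `cycles` CPU cycles and `bits` input bits runs on a device CPU of
# f_local Hz or, after uplink transfer at rate_bps, on an edge CPU of f_edge Hz.
def system_cost(cycles, bits, offload, w_delay=0.5, w_energy=0.5,
                f_local=1e9, f_edge=10e9, rate_bps=20e6,
                p_tx=0.5, kappa=1e-27):
    """Weighted sum of completion delay (s) and device energy (J), the typical
    form of the cost that SARSA-style offloading schemes minimize."""
    if offload:
        t_up = bits / rate_bps                  # uplink transmission delay
        delay = t_up + cycles / f_edge          # transmit, then edge compute
        energy = p_tx * t_up                    # device only pays radio energy
    else:
        delay = cycles / f_local                # local computation delay
        energy = kappa * cycles * f_local**2    # dynamic CPU energy model
    return w_delay * delay + w_energy * energy

task = dict(cycles=2e9, bits=1e6)
costs = {choice: system_cost(**task, offload=choice) for choice in (False, True)}
best = min(costs, key=costs.get)
print(f"local={costs[False]:.3f}, offload={costs[True]:.3f}, offload_best={best}")
```

With these numbers offloading wins because the edge CPU is fast enough to amortize the uplink delay; shrinking `rate_bps` or the edge speed tips the decision back toward local execution.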
algorithm's reliability in offloading contexts, this efficiency plays a
intensive tasks to edge clouds. critical role in achieving
application requirements and
7. LIMITATIONS: maximizing the consumption of
 One drawback of the reinforcement resources.
learning technique is its need for  Using the specified state and
high computing resources. And parameters, the DDPG method
devices or networks with limited guarantees that the output action is
resources may not be able to fulfill deterministic. Because of its
it. deterministic character, the
 The performance of the algorithm offloading technique guarantees
depends on the accuracy of the accuracy.
system model as well as the reward
function. So it can be challenging
to define these in real-world
8. Limitations:
scenarios. [6]
 Dynamic characteristics of edge
networks and varying task  In complex environments with
requirements might influence the high-dimensional state and action
efficacy of OD-SARSA. These spaces DDPG can be
could lead to less ideal offloading computationally expensive and
choices in certain situations. [6] time-consuming to train.
 The learning rate parameter  The performance of the algorithm
selection of the method is critical to is highly sensitive to
its effectiveness, and selecting an hyperparameters such as learning
inappropriate value could impact rates, exploration noise, and
both the rate of convergence and network architectures. This
the overall quality of the sensitivity can make it challenging
optimization. [6] to tune effectively.
 It may suffer from sample
inefficiency as it requires a large
number of samples to learn an
effective policy.
 Algorithm involves training both
. actor and critic networks
simultaneously so ensuring stability
M. in training networks can be a
N. Deep Q-Network (DQN) Algorithm challenge. This can introduce
instability during learning.
c) Strengths:

 Integrating experience replay


buffer and target networks lowers
O. Cost-Evaluated Reassignment Offloading
data correlation and enhances
stability during training. Because of (ERO) algorithm
this feature, the algorithm performs d) Strengths:
more reliably and consistently and  As it intelligently assigns jobs to
avoids becoming trapped in local the right devices. It minimizes the
optima. overall completion time.
 The optimization of effective  It can perform well in a range of
offloading techniques is achieved system configurations.
through the use of DDPG-based  It makes informed decisions and to
offloading. It outperforms do so it considers task
traditional tactics in ultra-low dependencies within applications.
latency scenarios, frequent mobility  Considers task dependencies within
of Smart Mobile Devices (SMDs), applications for more informed
and efficient server utilization by at offloading decisions.
least 19% over the long run. In
e) Limitations of study
Mobile Edge Computing (MEC)

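Stepping back to the RL-based strategies above: the key difference between SARSA (on-policy) and the Q-learning rule that DQN approximates with a neural network (off-policy) is how the update target is computed. The following tabular sketch is purely illustrative — a hypothetical offloading setting with made-up states and parameters, not code from any of the surveyed papers:

```python
import random

# Hypothetical tabular setting: a state could encode (queue length, channel
# quality); actions are 0 = execute locally, 1 = offload to the edge server.
ACTIONS = [0, 1]
ALPHA, GAMMA = 0.1, 0.9  # assumed learning rate and discount factor

def epsilon_greedy(Q, state, eps=0.1):
    """Pick a random action with probability eps, else the greedy one."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def sarsa_update(Q, s, a, reward, s_next, a_next):
    """On-policy: the target uses the action the agent will actually take
    next (a_next), which is why SARSA tends to favour 'safe' paths."""
    target = reward + GAMMA * Q.get((s_next, a_next), 0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + ALPHA * (target - Q.get((s, a), 0.0))

def q_learning_update(Q, s, a, reward, s_next):
    """Off-policy (the rule DQN approximates): the target uses the best
    next action regardless of which action the policy will actually take."""
    best_next = max(Q.get((s_next, b), 0.0) for b in ACTIONS)
    target = reward + GAMMA * best_next
    Q[(s, a)] = Q.get((s, a), 0.0) + ALPHA * (target - Q.get((s, a), 0.0))
```

DDPG, by contrast, replaces the table and the discrete argmax with an actor network that maps the state directly to one continuous action — the deterministic-output property noted in the strengths above.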
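The cost-evaluated reassignment idea behind ERO can likewise be illustrated with a small greedy sketch: each task is placed on whichever device finishes it earliest, given the load already queued there and any offloading transfer delay. All names and the cost model here are hypothetical simplifications, not the authors' implementation (in particular, it ignores the task dependencies ERO accounts for):

```python
# Greedy cost-evaluated assignment sketch (illustrative only).
# tasks:   list of {"cycles": CPU cycles needed, "data": bytes to transfer}
# devices: list of {"speed": cycles/s, "bandwidth": bytes/s, or None if local}

def assign_tasks(tasks, devices):
    """Assign each task to the device with the earliest finish time.
    Returns (per-task device indices, overall completion time)."""
    finish = [0.0] * len(devices)  # time at which each device becomes free
    assignment = []
    for task in tasks:
        best_dev, best_done = None, float("inf")
        for i, dev in enumerate(devices):
            # Local devices (bandwidth=None) pay no transfer delay.
            transfer = task["data"] / dev["bandwidth"] if dev["bandwidth"] else 0.0
            done = finish[i] + transfer + task["cycles"] / dev["speed"]
            if done < best_done:
                best_dev, best_done = i, done
        assignment.append(best_dev)
        finish[best_dev] = best_done
    return assignment, max(finish)
```

For example, with one slow local host and one fast edge server, both tasks below are offloaded because the transfer delay is small relative to the compute saving: `assign_tasks([{"cycles": 4.0, "data": 1.0}] * 2, [{"speed": 1.0, "bandwidth": None}, {"speed": 4.0, "bandwidth": 10.0}])` assigns both to the edge server.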
e) Limitations of study:
• The optimal solution is not always guaranteed, due to the NP-hard nature of task offloading.
• The study relies on simplified models, such as local hosts with infinite capacity and negligible communication delay between local processors, which do not fully represent real-world scenarios.
• It focuses on minimizing completion time under energy and time constraints but overlooks other factors such as cost efficiency and resource utilization.

P. Overall Assessment and Future Directions
Although the above methodologies offer substantial advances, there is still much room for further research to solve current problems and uncover new possibilities. In particular, future work should concentrate on augmenting scalability, strengthening adaptability, and tackling the complexities of dynamic MEC environments.

Improving Scalability: In large-scale MEC deployments, the scalability of offloading algorithms is one area in need of further investigation. With the expansion of edge computing ecosystems, offloading techniques that can effectively manage the growing variety of application needs have become crucially important. Research is needed to find innovative ways of augmenting current algorithms, whether by using networked computing strategies or by crafting lightweight offloading models tailored to resource-constrained edge environments.

Improving Adaptability: Current algorithms must also be made more responsive to dynamic environmental conditions. MEC environments are dynamic by nature, with varying network conditions, fluctuating workload demands, and evolving resource availability. The development of adaptive offloading policies that can autonomously adjust to changing circumstances in real time is therefore essential.

9. Conclusion
In conclusion, we studied various methodologies for task offloading in mobile edge computing (MEC) systems. Each methodology offers different techniques for optimizing system performance, resource utilization, and Quality of Service (QoS) guarantees. Recent deep reinforcement learning algorithms such as DQN and SARSA show promising results in offloading decision optimization, especially in resource-constrained and dynamic MEC settings. These algorithms demonstrate notable flexibility in responding to different workloads and network conditions, and they deliver significant improvements in completion time and resource consumption while satisfying application-specific needs.
Moreover, the advent of innovative methods such as ERO and MRLCO highlights the value of meta-reinforcement learning procedures for achieving effective offloading strategies. These techniques show robustness across a range of system configurations and offer important new perspectives on handling the scalability and flexibility challenges of MEC systems.
Even though each approach has its own advantages and disadvantages, together they advance task offloading policies in MEC systems. Future studies should concentrate on improving scalability and on adapting to dynamic environments.
REFERENCES
[1] X. Qiu, L. Zhai, and H. Wang, "Time-Minimized Offloading for Mobile Edge Computing Systems," IEEE Access, 2019.
[2] M. Tang and V. W. S. Wong, "Deep Reinforcement Learning for Task Offloading in Mobile Edge Computing Systems," 2020.
[3] Q. Li, S. Wang, A. Zhou, and X. Ma, "QoS Driven Task Offloading with Statistical Guarantee in Mobile Edge Computing," IEEE Transactions on Mobile Computing, 2020.
[4] H. Zhang et al., "Ultra-Low Latency Multi-Task Offloading in Mobile Edge Computing," IEEE Access, 2021.
[5] J. Wang, J. Hu, G. Min, A. Y. Zomaya, and N. Georgalas, "Fast Adaptive Task Offloading in Edge Computing Based on Meta Reinforcement Learning," 2020.
[6] T. Alfakih et al., "Task Offloading and Resource Allocation for Mobile Edge Computing by Deep Reinforcement Learning Based on SARSA," IEEE Access, 2020.