
EDGE CLOUD ON RESOURCE ALLOCATION MANAGEMENT AND DYNAMIC RESOURCE-SHARING COLLABORATORS: COMPUTATION OFFLOADING TOWARDS CLOUD

Naga Lakshmi Somu 1, Dr Prasadu Peddi 2

1 Research Scholar, Department of Computer Science, Shri Jagdishprasad Jhabarmal Tibrewala University, Rajasthan.
2 Associate Professor, Department of CSE & IT, Shri Jagdishprasad Jhabarmal Tibrewala University, Rajasthan.
[email protected]

Abstract:
Given the variety of computing resources at the network edge and the diversity of communication technologies, both the edge cloud and peer devices (collaborators) can be scavenged to provide compute resources to resource-limited devices. This study proposes a novel cooperative computing model that fully utilises the computing power of the device itself, opportunistically idle collaborators, and dedicated edge clouds. Collaborators assist with computation when idle and with offloading when busy. We investigate stochastic offloading control for a device, determining how much of the compute load is processed locally, transferred to the edge cloud, and opportunistically distributed across collaborators. The problem is formulated as an infinite-horizon Markov decision process with the objective of minimising the expected total energy consumption of the device and its collaborators while meeting a hard computation deadline. The optimal offloading policy, derived using stochastic optimisation theory, shows that cooperative computing can cut energy consumption substantially. Larger computation energy deficits among collaborators and better wireless channel conditions both yield greater energy savings. Simulation results confirm the optimality of the policy and the efficiency of cooperative computing between the edge cloud and end devices in comparison with many other offloading techniques.
Keywords: edge computing, computation offloading, latency minimisation, energy consumption

1 INTRODUCTION
Real-time simulation of military equipment systems could be supported by high-quality cloud computing centres that integrate the capabilities of the edge computing ecosystem (edge cloud). The heterogeneity, variability, and instability of edge cloud assets, together with the differences in resources and capabilities between cloud computing centre nodes and edge cloud computing nodes, make it challenging to apply the scheduling and resource management processes of conventional cloud computing centres. The "cloud computing/edge computing" paradigm instead provides real-time demand adaptation via multi-granularity simulation services [1-3].

Thus, building on analyses of cloud-edge computing, big data analytics technology, topology analysis of resources, and machine learning theoretic strategies developed in these areas, unified administration, control, and scheduling of computing resources is needed to enable simulation within the "cloud computing/edge computing" paradigm. To realise environment-aware adaptive task planning and resource integration, we model and establish a cloud-edge cooperative resource-distribution system based on the perception of service demand, together with an active resource-optimisation framework based on complex resource balancing. The focus is on cloud operations and edge computing inside the computing ecosystem. The simulation of service requests and of optimal scheduling problems is used to examine real-time planning of computing resources. A simulation load-forecasting system based on big data load analysis is created by using data feature mining and analysis to anticipate service load. A cloud-edge cooperative shared dynamic configuration technique called "sharing mode", together with a real-time approach that creates service replicas in an unstable cloud-edge environment, ensures that the simulated service quality meets the needs of the job [10-11].

2 LITERATURE SURVEY
Collocating virtual machines may lead to resource contention, so it is important to recognise it and take steps to reduce it. Multiple co-tenant virtual machines in cloud data centres may generate resource conflicts. Many methods to address memory and cache contention have been discussed [11]. The causes of contention may be determined either after contention has occurred, with remedies developed afterwards, or contention may be anticipated using predictive techniques and prevented before it happens. The second strategy is significantly superior to the first, since contention can be stopped before it starts. The following subsections examine some of the prediction, placement, and monitoring methods currently in use for cloud resource management.

A continuous Markov chain prediction technique has been used to select the destination host so that VM migration does not create a security risk. Prior to the transfer of VMs, the destination server is examined for security concerns, resulting in dynamic and proactive network protection.

Elasticity has been a key component of cloud computing, since it allows resources to be provisioned during peak demand without prior preparation [13]. Elastic virtual machines (VMs) are offered for auction at spot prices lower than their fixed prices. Spot prices change based on the availability of supply and current user demand. Early spot-price forecasting significantly lowers the rental cost and helps in choosing the best resources. The differing demands of cloud computing and the varying degrees of spot-price volatility have been analysed separately using a Hidden Markov Model and Expectation-based Prediction Method (HMM-E).

By minimising the number of active physical machines and lowering resource wastage, a VM placement technique based on multi-objective integer linear programming [14] has decreased the operating expenses of data centres. The three objectives considered in this study [15] are the amount of resource wastage, the number of hosted virtual machines, and the number of utilised and active physical machines. The results showed that the Multi-Objective Integer Linear Programming (MOILP) model performed better than conventional ILP.
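To make the idea of a weighted multi-objective placement model concrete, the following is a minimal sketch in Python using the PuLP library. The weights, capacities, and demands are illustrative assumptions for demonstration only; this is not the exact MOILP formulation of [14-15].

```python
# Illustrative weighted-sum ILP for VM placement (a sketch, not the MOILP of [14-15]).
import pulp

vms = {"vm1": 1000, "vm2": 2500, "vm3": 500}   # assumed CPU demand (MIPS)
hosts = {"h1": 4000, "h2": 4000}               # assumed CPU capacity (MIPS)

prob = pulp.LpProblem("vm_placement", pulp.LpMinimize)

# x[v][h] = 1 if VM v is placed on host h; y[h] = 1 if host h is active.
x = pulp.LpVariable.dicts("x", (vms, hosts), cat="Binary")
y = pulp.LpVariable.dicts("y", hosts, cat="Binary")

# Weighted sum of two of the objectives described above:
# number of active hosts, and resource wastage (unused capacity on active hosts).
waste = pulp.lpSum(hosts[h] * y[h] for h in hosts) - \
        pulp.lpSum(vms[v] * x[v][h] for v in vms for h in hosts)
prob += 1.0 * pulp.lpSum(y[h] for h in hosts) + 0.001 * waste

for v in vms:  # every VM must be placed exactly once
    prob += pulp.lpSum(x[v][h] for h in hosts) == 1
for h in hosts:  # capacity is only available on active hosts
    prob += pulp.lpSum(vms[v] * x[v][h] for v in vms) <= hosts[h] * y[h]

prob.solve()
for v in vms:
    for h in hosts:
        if pulp.value(x[v][h]) == 1:
            print(v, "->", h)
```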
3 MANAGING RESOURCE CONTENTION IN EDGE COMPUTING
As an extension of cloud computing, edge computing makes use of networked devices to manage and analyse data. Edge computing will not replace cloud computing, but the rise of Internet-of-Things (IoT) devices and microelectronics has made it necessary to examine resource management at the edge. For instance, a mobile device running mobile apps offloads computationally heavy tasks to a nearby edge server. As the mobile user travels between home and work and back, the closest edge server shifts with the phone's location. Depending on the user's location, the VMs used for computation must be transparently relocated from one edge server to another. Therefore, predicting resource utilisation in an edge computing environment helps prevent resource contention and preserve the application's required performance.

Because they involve more mobile and IoT devices than standard cloud workloads, edge computing workloads have also become more dynamic. When evaluating resources for edge clouds, it is important to take into account the frequent location changes made by mobile users. Similarly, certain events produced by IoT workloads also call for dynamic provisioning of these resources.

3.1 Various resource allocation techniques in edge computing

Edge devices and edge clouds run resource-intensive applications that demand high bandwidth, low latency, and high processing power, such as virtual reality, augmented reality, and high-speed video and audio transmission. To schedule applications on edge devices and achieve the necessary QoS, an effective resource allocation mechanism is crucial.

Resource Allocation based on Computational Offloading

Techniques that offload from the edge to the cloud are required for optimal resource allocation in edge computing and for the seamless operation of mobile devices. Surveys cover the main concerns, strategies, and state-of-the-art initiatives pertaining to the offloading problem. The offloading decision proposed in [13] is made based on the overall cost of local computation at mobile devices versus the edge server. A covariance-based Analytic Hierarchy Process (Cov-AHP), a multi-objective decision-making method, is used to choose the edge servers.
4 PROPOSED METHOD: SECOND ORDER MARKOV MODEL FOR PREDICTION OF RESOURCE CONTENTION (SOMRCP)
Most resource contention arises when shared resources must be used, and it is more pronounced when numerous virtual machines (VMs) run on a single host. Under resource congestion, cloud-based devices are unable to maintain their intended functionality. The virtual computing environment experienced by users suffers as a result, which in turn degrades QoS. To preserve the dependability of cloud operations, resource contention must be avoided so that the required resources remain continuously available to cloud devices.

In this section, the SOMRCP model is proposed with the goal of forecasting host resource contention inside a data centre in a cloud context. Additionally, we propose a placement strategy for VMs migrated away from a host that is predicted to suffer resource contention in the future. Reducing the number of VM migrations improves performance. The method predicts how the hosts will behave in the near future. The contention manager builds an underlying transition probability model for every VM running on the host, using a second-order Markov model to predict the host's future state. From the second-order probability matrix it computes, for every virtual machine, the probability of entering the overload state or of remaining in the overload condition (POO). This is carried out for each of the host's VMs, and the future overload-state probabilities of all VMs on that host are averaged. This average indicates the likelihood that the host will be overloaded in the future. The host's future state is classified as overloaded if this average likelihood (Average POO) reaches an upper threshold, as underloaded if it falls below a lower threshold, and as normally loaded if it lies between the lower and upper thresholds.
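To illustrate the per-VM model, the sketch below estimates a second-order transition matrix from an observed sequence of discretised CPU states and reads off the probability of the VM moving into (or staying in) the overload state. The two-state encoding (0 = normal, 1 = overloaded) and the function names are assumptions made for this example, not the paper's implementation.

```python
import numpy as np

def second_order_matrix(states, n=2):
    """Estimate P[(s_{t-1}, s_t) -> s_{t+1}] from a state sequence.

    'states' is a list of discretised CPU states per monitoring interval,
    e.g. 0 = normal, 1 = overloaded (an assumed two-state encoding).
    """
    counts = np.zeros((n, n, n))
    for prev, cur, nxt in zip(states, states[1:], states[2:]):
        counts[prev, cur, nxt] += 1
    totals = counts.sum(axis=2, keepdims=True)
    totals[totals == 0] = 1  # avoid division by zero for unseen state pairs
    return counts / totals

def prob_overload(matrix, prev, cur, overload_state=1):
    """Probability that the VM enters (or stays in) the overload state next."""
    return matrix[prev, cur, overload_state]

# Example: one VM's recent CPU states sampled by the workload monitor.
vm_states = [0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1]
P = second_order_matrix(vm_states)
poo = prob_overload(P, vm_states[-2], vm_states[-1])
print(f"POO for this VM: {poo:.2f}")
```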

Algorithm: Future host state prediction
Input: Workload_monitor
Output: Predicted host state
for each VM on the host do
    Construct the second-order transition probability matrix
    Extract the overload probability POO
end for
Average_POO = average of POO over all VMs on the host
if Average_POO ≥ Threshold_Upper then
    Host is predicted to be overloaded
    Migration = False (the host cannot accept migrated VMs)
    Call the placement algorithm again
else if Average_POO ≤ Threshold_Lower then
    Host is predicted to be underloaded
    Migration = True (the host can accept migrated VMs)
end if
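The classification step of the algorithm can be rendered as a short Python sketch, assuming the per-VM POO values have already been computed as above. The default thresholds mirror the 0.8/0.1 setting used in Section 5; treating a normally loaded host as able to accept migrated VMs is an assumption made for the example.

```python
def classify_host(vm_poo_values, threshold_upper=0.8, threshold_lower=0.1):
    """Classify a host's future state from its VMs' overload probabilities.

    vm_poo_values: per-VM probabilities of the overload state (POO),
    e.g. produced by the second-order model sketched earlier.
    Defaults mirror the 0.8/0.1 thresholds used in the experiments.
    """
    average_poo = sum(vm_poo_values) / len(vm_poo_values)
    if average_poo >= threshold_upper:
        return "overloaded", False   # host should not receive migrated VMs
    if average_poo <= threshold_lower:
        return "underloaded", True   # host is a candidate migration target
    return "normally loaded", True   # assumption: normal hosts may accept VMs

state, can_accept = classify_host([0.9, 0.75, 0.85])
print(state, can_accept)  # -> overloaded False
```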

5 EXPERIMENTAL SETUP
The CloudSim simulator is used to assess how well the proposed work performs. Cloud infrastructure can be seamlessly modelled using CloudSim, which also provides established algorithms that can be modified and applied as needed. It is economical and offers a structure for repeatable experiments.
Workload Data: Two forms of workload data were employed in our simulation studies in CloudSim. The features of the workload data used in our experiments are detailed in Table 1. We used a PlanetLab workload with real CPU usage traces, and a random workload whose future load is based on historical values.

Table 1: Workload Data

Workload    Number of Hosts    Number of VMs    Time Period
Real        3                  3                Day 1 to Day 9
Random      3                  30               24 hrs

Each host has a storage capacity of 1 TB and a bandwidth of 1 GB/s. Four VM types resembling Amazon EC2 instances are used. The VM configuration comprises high-CPU instances with 2500 MIPS and 0.85 GB of memory, large instances with 2500 MIPS and 3.75 GB, small instances with 1000 MIPS and 1.7 GB, and micro instances with 500 MIPS and 0.633 GB. Each VM type has a 100 MB bandwidth and a 2.5 GB storage capacity.
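For reference, the four simulated VM types can be captured in a simple data structure. This is only a sketch of the configuration listed above; the actual experiments configure these instances through CloudSim rather than through this hypothetical Python code.

```python
from dataclasses import dataclass

@dataclass
class VmType:
    name: str
    mips: int               # CPU capacity in MIPS
    ram_gb: float           # memory in GB
    bw_mb: int = 100        # bandwidth in MB (same for all types)
    storage_gb: float = 2.5 # storage in GB (same for all types)

# The four EC2-like VM types used in the simulation (values from the text).
VM_TYPES = [
    VmType("high-cpu", 2500, 0.85),
    VmType("large",    2500, 3.75),
    VmType("small",    1000, 1.70),
    VmType("micro",     500, 0.633),
]
```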

We simulated 3 hosts with a total of 30 VMs, using various workloads in the experiments. The functions carried out by VMs on hosts are CPU- and resource-intensive. VMs are migrated to other hosts to fulfil their CPU demand and guarantee that work executes continuously without violating SLAs. Occasionally a recently relocated VM no longer needs additional CPU, yet its arrival may leave the host's existing VMs short of resources. Since VM migrations affect application performance, they should only be performed when necessary. Compared with the first-order Markov model, the second-order Markov model yields 90% fewer migrations. For certain workloads on day 6, the number of migrations in the second-order Markov model equals the number of migration events in the first-order Markov model; nonetheless, for 75% of the workloads the second-order model produces fewer migration events. On days 7 and 8 the migration counts of the first and second order are the same for 1% of the load, while for 95% of the load the second order produces fewer migration events. Under the second-order Markov model there are no migrations on day 9. For 95% of the work completed on day 10, the second-order Markov model produced fewer migrations than the first order.

Figure 1: PlanetLab workload: average number of VM migrations at 82% CPU utilisation

Figure 2: PlanetLab workload: average number of VM migrations at 72% CPU utilisation

Using the second-order Markov model, it has been shown that when the thresholds are set at 0.8 and 0.1 for both the PlanetLab and random workloads, the average number of VM migrations drops by more than 60%. Similarly, using the second-order Markov model with thresholds of 0.7 and 0.2 for random workloads reduces the number of VM migrations by around 80%. With thresholds of 0.7 and 0.2, the number of migrations in the PlanetLab workloads drops by 40% when using the second-order Markov model.
6 CONCLUSION:
This paper offers a method to anticipate and reduce resource contention across hosts in the data centre of a cloud computing architecture. The design, methods, and outcomes of the experiments conducted with the proposed SOMRCP model were presented in detail. The first-order Markov model incurs more VM migrations than the proposed SOMRCP model. We will continue to develop methods for anticipating and reducing resource contention in upcoming architectures, such as edge computing, since resource contention may also occur there.

7 REFERENCES:
1. N. Fernando, S. W. Loke, and W. Rahayu, "Mobile cloud computing: a survey," Future Generation Computer Systems, vol. 29, no. 1, pp. 84–106, 2013.
2. G. H. Forman and J. Zahorjan, "The challenges of mobile computing," Communications of the ACM, vol. 36, no. 7, pp. 75–84, 1993.
3. Y. C. Hu, M. Patel, D. Sabella, N. Sprecher, and V. Young, "Mobile edge computing—a key technology towards 5G," ETSI White Paper, vol. 11, no. 11, pp. 1–16, 2015.
4. A. C. Baktir, A. Ozgovde, and C. Ersoy, "How can edge computing benefit from software-defined networking: a survey, use cases & future directions," IEEE Communications Surveys & Tutorials, vol. 19, 2017.
5. F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, "Fog computing and its role in the internet of things," in Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing, Helsinki, Finland, pp. 13–16, August 2012.
6. D. Mazza, D. Tarchi, and G. E. Corazza, "A cluster based computation offloading technique for mobile cloud computing in smart cities," in Proceedings of the IEEE International Conference on Communications (ICC), Kuala Lumpur, Malaysia, pp. 1–6, 2016.
7. S. Ranadheera, S. Maghsudi, and E. Hossain, "Computation offloading and activation of mobile edge computing servers: a minority game," IEEE Wireless Communications Letters, vol. 7, no. 5, pp. 688–691, 2018.
8. K. Habak, M. Ammar, K. A. Harras, and E. Zegura, "Femto clouds: leveraging mobile devices to provide cloud service at the edge," in Proceedings of the IEEE 8th International Conference on Cloud Computing (CLOUD), Nice, France, pp. 9–16, 2015.
9. S. Mu, Z. Zhong, D. Zhao, and M. Ni, "Joint job partitioning and collaborative computation offloading for Internet of Things," IEEE Internet of Things Journal, vol. 6, no. 1, pp. 1046–1059, 2019.
10. Y. He, J. Ren, G. Yu, and Y. Cai, "D2D communications meet mobile edge computing for enhanced computation capacity in cellular networks," IEEE Transactions on Wireless Communications, vol. 18, no. 3, pp. 1750–1763, 2019.
11. G. Hu, Y. Jia, and Z. Chen, "Multi-user computation offloading with D2D for mobile edge computing," in Proceedings of IEEE GLOBECOM, pp. 1–6, 2018.
12. C. You and K. Huang, "Exploiting non-causal CPU-state information for energy-efficient mobile cooperative computing," IEEE Transactions on Wireless Communications, vol. 17, no. 6, pp. 4104–4117, 2018.
13. Y. Tao, C. You, P. Zhang, and K. Huang, "Stochastic control of computation offloading to a helper with a dynamically loaded CPU," IEEE Transactions on Wireless Communications, vol. 18, no. 2, pp. 1247–1262, 2019.
14. A. S. Prasad, M. Arumaithurai, D. Koll, and X. Fu, "RAERA: a robust auctioning approach for edge resource allocation," in Proceedings of the Workshop on Mobile Edge Communications, pp. 49–54, 2017.
15. W. Zhang, Y. Wen, K. Guan, D. Kilper, H. Luo, and D. O. Wu, "Energy-optimal mobile cloud computing under stochastic wireless channel," IEEE Transactions on Wireless Communications, vol. 12, no. 9, pp. 4569–4581, 2013.
16. N. L. Somu and P. Peddi, "An analysis of edge-cloud computing networks for computation offloading," Webology (ISSN: 1735-188X), vol. 18, no. 6, pp. 7983–7994, 2021.
