
Preprint · May 2025 · DOI: 10.13140/RG.2.2.14067.64800
Source: https://www.researchgate.net/publication/391768446
Uploaded by Mohamed Almoudane on 15 May 2025.


A Comprehensive Study on Virtualization
Techniques and Their Role in Cloud Computing
Almoudane Mohamed

Department of Computer Science, Nanjing University of Information Science and Technology


Email: [email protected]

Abstract—Virtualization is one of the key technologies of today's cloud computing, enabling effective resource usage, scalability, and workload isolation. This paper gives a detailed account of the most significant virtualization methods, comparing their architectures, strengths and weaknesses, performance profiles, and security features. We emphasize three predominant methods: hypervisor-based virtualization, container-based virtualization, and hardware-assisted virtualization. Hypervisor-based VMs provide strong isolation and heterogeneous OS support, but at the cost of higher resource utilization. Container-based virtualization supports lightweight deployment with minimal overhead, but faces particular security challenges due to the shared kernel. Hardware-assisted virtualization, such as Intel VT-x and AMD-V, further enhances hypervisor performance and protection by exploiting architectural support at the CPU level.

We compare the theoretical underpinnings and practical use of these approaches in cloud computing, looking at their performance, scalability, and impact on security. We consider significant security concerns such as isolation guarantees, attack surfaces, and countermeasures. We also look at upcoming trends such as serverless computing, unikernels, hybrid models (e.g., VM-container hybrids), confidential computing, and edge virtualization. The objective is to provide a clear, comparative explanation of how these virtualization technologies complement each other and underpin the dynamic cloud infrastructure environment.

Index Terms—Virtualization, Cloud Computing, Hypervisor, Containers, Hardware-assisted Virtualization, Performance, Security

I. INTRODUCTION

Cloud computing is a significant application of virtualization, facilitating on-demand, flexible computing resources. Virtualization technology allows many different computing environments to run on a shared physical host by abstracting the hardware resources into virtual instances. This capability gives cloud providers high utilization, multi-tenancy, and immediate service provisioning. Virtualization largely decouples operating systems and applications from the underlying physical hardware, yielding scalability, efficiency, and manageability advantages. Several virtualization techniques have emerged over the years, with varied architectures and trade-offs.

Hypervisor-based virtualization (also called virtual machine virtualization) has underpinned cloud infrastructure since the late 2000s. It introduces a software layer called a virtual machine monitor (VMM), or hypervisor, that builds and operates virtual machines, each running a guest operating system. Container-based virtualization, alternatively called operating system-level virtualization, is increasingly prominent today, particularly for cloud-native applications. Containers provide process-level isolation through kernel capabilities, offering a lightweight and efficient substitute for virtual machines. In addition, the evolution of modern CPU architectures has brought hardware-assisted virtualization, in which processors supply native capabilities to enhance the performance and security of virtualized environments.

All of these approaches are central to cloud computing, and understanding what distinguishes them matters to system designers and researchers alike. This paper provides a comprehensive review of hypervisor-based, container-based, and hardware-assisted virtualization. We contrast their architectures, examine their performance overheads, consider security aspects, and then discuss how they are applied in current cloud setups. Future trends and directions in virtualization are covered as well.

The remainder of this paper is structured as follows. Section II gives background information on virtualization concepts and types. Section III describes the three virtualization approaches and their architectures and working principles in detail. Section IV provides a performance comparison of the approaches in cloud environments. Section V discusses security concerns and isolation guarantees. Section VI enumerates upcoming trends and evolving virtualization technologies. Section VII concludes the paper with final remarks.

II. BACKGROUND

A. Virtualization Concepts and Evolution

Virtualization is the process of supplying a virtual computing environment, called a virtual machine (VM), that behaves like an actual, physical computer. One of the earliest and most widespread definitions, offered by Popek and Goldberg, specified a virtual machine as "an efficient and isolated duplicate of the real machine" [1]. What this means in practice is that software run in a virtual machine needs to come very close to performing as well as it would on real hardware (efficiency), while still being securely isolated from other virtual machines running on the same physical machine (isolation). These ideas started taking shape in the 1960s and 1970s with the arrival of mainframe computers that could run multiple operating systems at the same time [1].
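The Popek and Goldberg criterion can be illustrated with a toy check: classical trap-and-emulate virtualization is possible only when every sensitive instruction (one that reads or alters privileged machine state) is also privileged (i.e., traps when executed in user mode). The sketch below is illustrative; the instruction sets are hypothetical stand-ins, not a complete ISA model, though POPF and SGDT are real examples of sensitive-but-unprivileged instructions on classic x86.

```python
# Toy version of the Popek-Goldberg virtualizability test: an ISA is
# classically virtualizable with trap-and-emulate only if every sensitive
# instruction is also privileged. Instruction names are illustrative.

def is_trap_and_emulate_virtualizable(sensitive: set, privileged: set) -> bool:
    """True iff all sensitive instructions trap in user mode."""
    return sensitive <= privileged

# A hypothetical "clean" ISA where all sensitive instructions trap.
clean_isa_sensitive = {"load_cr3", "halt"}
clean_isa_privileged = {"load_cr3", "halt", "invlpg"}

# Classic (pre-VT-x) x86 had sensitive instructions, such as POPF and SGDT,
# that silently misbehaved in user mode instead of trapping.
x86_sensitive = {"popf", "sgdt", "load_cr3"}
x86_privileged = {"load_cr3"}

print(is_trap_and_emulate_virtualizable(clean_isa_sensitive, clean_isa_privileged))  # True
print(is_trap_and_emulate_virtualizable(x86_sensitive, x86_privileged))              # False
```

The failing second case is exactly the gap that binary translation, paravirtualization, and later Intel VT-x and AMD-V (discussed below) were designed to close.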
Fast-forward to today, and virtualization is a cornerstone of modern cloud computing. In modern platforms, virtualization can be applied at different levels of the system stack. Two commonly recognized types are (1) system virtualization and (2) process or application virtualization. In system virtualization, entire virtual machines are created that include a full operating system and virtualized hardware. This is the strategy used by hypervisors like VMware and KVM. Process virtualization, on the other hand, encapsulates individual applications in their own user-space environments; that is what we have in container technologies like Docker and LXC. Our concern in this paper is with system-level virtualization (using hypervisors) and operating system-level virtualization (using containers), because these are most relevant to the delivery of cloud services today.

One of the significant distinctions in system virtualization is between two categories of hypervisors: Type-1 and Type-2. Type-1, or bare-metal, hypervisors run directly on the underlying hardware. They manage the guest operating systems with high performance and are typically used in enterprise data centers and cloud platforms. Xen, VMware ESXi, and Microsoft Hyper-V are a few examples. Type-2 hypervisors, on the other hand, run as application software on a host operating system. Examples include VMware Workstation and Oracle VirtualBox. They are easier to work with and are widely used in development and test environments, yet they lack the level of performance and isolation guarantees available from Type-1 hypervisors.

Virtualization on x86 hardware was not easy to begin with. Some CPU instructions could not be intercepted or "trapped," which prevented complete virtualization. Early implementations, such as VMware's, worked around this with binary translation, in which certain instructions were translated at runtime. The second workaround was paravirtualization, in which the guest OS was redesigned to address the hypervisor directly using special calls ("hypercalls"). Both solutions had their trade-offs: binary translation was expensive, while paravirtualization required source code changes in the OS. All this changed in 2005-2006, when Intel and AMD introduced hardware support for virtualization (Intel VT-x and AMD-V) [8]. These technologies allow CPUs to run guest operating systems in a special mode where sensitive operations are handled by the hardware itself, without the need for clever software tricks. Later additions, such as Second Level Address Translation (SLAT) in the form of Intel's EPT and AMD's Nested Paging, kept improving memory virtualization by speeding up address translation. These hardware features made full virtualization faster and simpler, closing most of the performance gap with paravirtualization [4], [8].

Virtualization is pervasive in the cloud today. In Infrastructure-as-a-Service (IaaS) deployments, providers like Amazon EC2, Microsoft Azure, and Google Cloud employ hypervisor-based virtual machines to provide isolated, secure computing environments to customers. At the same time, container technologies like Docker and Kubernetes have taken over the Platform-as-a-Service (PaaS) and Container-as-a-Service (CaaS) spaces, where light and quick deployment is a must. This transition from virtualization of hardware (with VMs) to operating system virtualization (with containers) is symptomatic of the wider shift toward more application-centered and more efficient cloud architecture [6]. Both solutions are evolving rapidly, driven by new hardware features and software innovation.

B. Architectural Overview

Figure 1 gives a simple, top-level view of hypervisor-based and container-based virtualization system architectures. Both make it possible to run several workloads on a single physical host, but they accomplish this differently. Hardware-assisted virtualization is not depicted as a separate architecture, since it is best understood as a feature underlying modern hypervisors: it accelerates and strengthens virtual machine operation in the background.

In a hypervisor-based configuration, the hypervisor executes directly on the host hardware (Type-1, or bare metal) or on a host operating system (Type-2, or hosted). It offers each guest virtual machine (VM) an independent virtualized platform on which to execute a complete operating system and applications, exactly as on a physical machine. Each VM is completely isolated, with its own kernel, memory address space, and file system.

Container-based virtualization, in contrast, does not emulate a complete machine. Containers run directly on top of the host OS kernel. All containers share one OS; each holds only the program and the dependencies it requires, such as libraries and configuration files, but not a second kernel. Containers derive isolation from kernel features such as namespaces (which partition resources such as process IDs and network interfaces) and control groups (cgroups), which track and limit each container's usage of resources.

Fig. 1. Comparison of (a) hypervisor-based virtualization vs. (b) container-based virtualization architecture.

One of the biggest differences between the two approaches is resource use. Since containers leverage the host OS, they are much lighter in terms of memory and storage than virtual machines. But that efficiency comes at a price: the containers depend heavily on the host OS kernel for security and isolation. If the kernel were compromised, all the containers on that host would be vulnerable.
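The cgroup mechanism mentioned above can be pictured as per-group resource accounting against a limit. The Python sketch below is a toy simulation of that idea, not the kernel interface (real cgroups are configured through files under /sys/fs/cgroup, e.g., memory.max in cgroup v2):

```python
# Minimal simulation of cgroup-style memory accounting (illustrative only;
# actual cgroups are a Linux kernel feature, not a user-space class).

class MemoryCgroup:
    def __init__(self, limit_bytes: int):
        self.limit = limit_bytes
        self.usage = 0

    def charge(self, nbytes: int) -> bool:
        """Charge an allocation against the group; refuse it if the limit
        would be exceeded (the kernel instead triggers reclaim or the OOM
        killer when memory.max is hit)."""
        if self.usage + nbytes > self.limit:
            return False
        self.usage += nbytes
        return True

group = MemoryCgroup(limit_bytes=100 * 1024 * 1024)  # 100 MiB limit
print(group.charge(60 * 1024 * 1024))  # True: 60 MiB fits
print(group.charge(60 * 1024 * 1024))  # False: would exceed the limit
print(group.usage)                     # 62914560 (only the first charge counted)
```

Namespaces play the complementary role: cgroups bound how much a container may consume, while namespaces bound what it can see.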
Hypervisor-based virtual machines, on the other hand, require more system resources, because each virtual machine has its own OS. However, they offer stronger isolation between workloads, since each virtual machine is isolated at the OS level. This means that a fault in one virtual machine is far less likely to affect the others. The use of hardware-assisted virtualization, subsequently explained in Section III-C, has also reduced the performance overhead normally associated with VMs, and it is now feasible to execute many of them on a single modern server with hardly any degradation in performance.

Throughout the rest of this article, we take a closer look at each of these methods of virtualization. We analyze how they operate, their strengths and limitations, and how they fit into today's cloud computing environment.

III. VIRTUALIZATION TECHNIQUES

A. Hypervisor-Based Virtualization

Hypervisor-based virtualization is the most popular method of running a set of VMs on a single physical host. The key concept is a software abstraction layer called a hypervisor, or Virtual Machine Monitor (VMM). It allocates and manages virtual machines, each of which simulates a single, isolated computer.

The hypervisor presents a virtual emulation of each piece of hardware, such as CPU, RAM, disk, and network card, to each VM. Because of this, the guest OS (the VM's operating system) generally has no idea that it is running in a virtualized setup. On full-virtualization-capable platforms, the guest OS runs exactly as if it were installed on its own separate physical hardware; the operation is transparent to the software running inside.

There are two broad hypervisor categories: Type-1 and Type-2. Type-1, or bare-metal, hypervisors run directly on the host machine hardware. This gives them full control of system resources and results in superior performance as well as VM isolation. These are the hypervisors used in cloud and enterprise data centers; examples include Xen [2], VMware ESXi, and KVM (Kernel-based Virtual Machine, which is natively part of the Linux kernel).

Type-2 hypervisors, by contrast, run as application software on top of an installed operating system, like an ordinary program. Such an arrangement depends on the host OS to provide hardware interactions such as device drivers and I/O operations. While this makes installation and usage easier, especially for development and testing, it comes at the expense of performance overhead. VirtualBox, QEMU, and VMware Workstation are examples of Type-2 hypervisors.

In short, hypervisor-based virtualization is a mature and solid workload partitioning technology for running more than one operating system on shared hardware. Type-1 hypervisors are meant for production environments where performance and scalability matter, while Type-2 hypervisors are ideal for desktop and test use.

Architecture: A hypervisor-based virtualization system typically includes the following components:
• Host Machine: The physical machine whose hardware (CPU, memory, storage, and network interfaces) the virtual machines run on.
• Hypervisor or VMM (Virtual Machine Monitor): The lowest software layer, which performs the virtualization. In some systems, like Xen, the hypervisor is a thin, privileged layer that works together with a privileged control domain (Dom0) responsible for hardware driver management and VM management. In others, like KVM, hypervisor tasks are performed by the host operating system.
• Guest Virtual Machines (VMs): The isolated environments supplied by the hypervisor. A VM is assigned virtual hardware and runs a complete guest operating system (e.g., Windows, Linux). The guest OS runs the same as it would on a dedicated physical machine and is typically unaware that it is being virtualized.

In the early days of x86 virtualization, hypervisors needed to circumvent architectural limits on intercepting or emulating specific CPU instructions safely. One common technique was binary translation, which dynamically translated sensitive code, as VMware's first products did. The second was paravirtualization, used in systems like early Xen, where the guest operating system was specifically built to talk to the hypervisor directly through special function calls referred to as "hypercalls."

Today, new CPUs have hardware virtualization support such as Intel VT-x and AMD-V, which greatly simplifies and speeds up virtualization. With it, hypervisors such as VMware and KVM can run unmodified guest operating systems with very little overhead, as the CPU itself traps and handles privileged instructions efficiently [8].

Advantages: Robust isolation is one of the biggest benefits of hypervisor-based virtualization. Each VM runs its own operating system, so a problem in one VM, like a crash or even a security breach, will not affect others or compromise the host (barring hypervisor vulnerabilities). This model also allows different operating systems or kernel versions to run concurrently, which comes in handy for testing as well as heterogeneous cloud environments.

Hypervisor-based virtualization is an established and mature technology. It supports strong management features such as live migration of running VMs [10], system snapshotting, automatic backups, and load balancing. In cloud computing, especially in Infrastructure-as-a-Service (IaaS) deployments, VMs remain the default unit of deployment. They offer end-users full control of their operating system environment and are supported by a rich ecosystem of monitoring, scaling, and automation tools.

Disadvantages: Though potent, hypervisor-based virtualization does have its shortcomings. Its main flaw is resource overhead. As each virtual machine must run an entire operating system, it occupies more memory space and more hard drive
space than leaner alternatives like containers. This duplication amounts to greater total resource use across the system. Performance is also affected by CPU overhead from activities like trap-and-emulate loops and hypervisor-VM context switching. Input/output (I/O) operations suffer as well, especially if virtual drivers or hardware device emulation are involved. Boot time for a VM is also longer, since it boots a full operating system, which takes tens of seconds.

These limits constrain how many VMs can run at one time on a given host machine, reducing density and potentially influencing performance. Though hardware support and software performance tuning have helped keep these limitations to a minimum, they still exist. Another issue is operational complexity: running, monitoring, and controlling several different OS instances is enormously time-consuming as well as labor-intensive, though automation tools do help cushion this impact.

B. Container-Based Virtualization

Container virtualization, or OS-level virtualization, offers a more lightweight and efficient alternative. Instead of emulating hardware, as VMs do, containers isolate applications by leveraging host OS features. LXC, Docker, and others do this using namespaces (to isolate things like process IDs, networking, and file systems) and cgroups (to limit and report resource consumption).

With this setup, the host OS (most likely Linux) leads a dual life. It is the native OS on the hardware and also the origin of the isolated worlds in which containers run. From the perspective of a containerized application, it appears to be running on its own machine: it has its own file system view, process list, and network devices. All the containers, though, use a single kernel, and it is the kernel that erects the walls between them.

Efficiency is one of the biggest advantages this approach provides. Because containers don't need to boot an entire operating system, they start almost immediately and consume far fewer resources. There is no hypervisor layer, so there is even less overhead and even better performance.

Architecture: A typical container-based system is composed of the following elements:
• Host Operating System with a Container Engine: The host OS, usually a Linux distribution, provides kernel-level functionality to isolate and manage containers. A container engine or runtime, such as Docker, CRI-O, or containerd, is responsible for creating and managing containers. It defines namespace mappings for isolation, employs control groups (cgroups) to place resource limits, and packages applications into portable units.
• Containers: A container is a group of one or more processes separated from the rest of the system. Containers are typically instantiated from images: self-contained packages comprising the application code, necessary libraries, binaries, and configuration files. This makes containers easily reproducible, deployable, and transferable across environments.

Since all the containers on a host share the same operating system kernel, they must be compatible with it. You can't, for example, run a Windows container natively on a Linux host kernel, and the reverse is also not possible. Containers may vary in user-space parts, however. One might carry Ubuntu libraries and tools, for example, and another CentOS, on the same underlying Linux kernel.

Container virtualization gained enormous momentum with the emergence of Docker around 2013-2014 [3]. Docker mainstreamed containerization by bundling important but complex Linux features, namely namespaces, cgroups, and union file systems, into a simple-to-use developer toolchain. This triggered a flood of container adoption across the industry.

As containers gained popularity, the need to manage them at scale drove orchestration platforms such as Kubernetes. With these platforms, developers and administrators can deploy and scale hundreds to thousands of containers across machine clusters, managing multiple workloads and automating resiliency.

Advantages: The biggest benefit of containers is that they are incredibly light. As they don't need a full operating system for each application, containers have very low memory and storage needs. They can boot in virtually no time, usually in less than a second, because starting a container is closer to starting a regular process than booting an OS.

This lightness allows for high density on one host server; that is, many more containers can be run than traditional virtual machines. This is particularly useful with microservices patterns and wherever scale-out needs to be accomplished quickly with minimal overhead. Another major benefit is portability: containers pack an app and all of its dependencies into one bundle, so it will run the same in any environment: dev, test, or prod [3]. This consistent packaging also plays nicely into modern continuous integration and continuous deployment (CI/CD) practice.

Disadvantages: The most fundamental trade-off containers make is in the area of security and isolation. Since containers share a single host kernel, a vulnerability in that kernel or a misconfigured container can allow an attacking application to escape its confined area and wreak havoc on other containers or the host OS altogether. Virtual machines provide stronger isolation, since each runs its own operating system kernel.

For this reason, public cloud providers in most cases avoid relying on containers alone to run workloads of multiple clients. They typically add a level of security by running containers on top of virtual machines, taking advantage of containers' lightness as well as the stronger VM isolation.

An additional limitation is that containers need to run under the same kernel as the host. For example, one cannot natively run a Windows container on a Linux host, and vice versa, unless additional virtualization techniques or compatibility layers are used. Containers also pose new operational issues, such as the
need to monitor, manage, and secure potentially large numbers of containers. Container image management is also an issue; many publicly available images contain outdated libraries or known bugs. In fact, a study in 2015 found that more than 30% of official images in public repositories contained high-priority security vulnerabilities [12].

Notwithstanding such concerns, the container space has evolved rapidly. New technology and best practices, such as improved user namespace support, mandatory access control capabilities like AppArmor and SELinux, and sandboxing initiatives like gVisor and Kata Containers, are improving containers' security even for multi-tenant and untrusted environments.

C. Hardware-Assisted Virtualization

Hardware-assisted virtualization refers to the collection of features in modern processors that make virtualization faster, more efficient, and more secure. Intel and AMD each developed specific technologies, such as Intel VT-x and AMD-V for CPU virtualization and Intel VT-d or AMD-Vi for device virtualization, that accelerate and simplify how virtual machines run on a host system. These features have, over the past decade, enhanced the performance and scalability of hypervisor-based virtualization.

Before hardware support was available, hypervisors had to resort to sophisticated software workarounds to deal with privileged operations from guest operating systems. These included techniques such as binary translation, which recompiled CPU instructions at runtime, and paravirtualization, which required porting the guest OS to explicitly invoke the hypervisor. Hardware-assisted virtualization made these workarounds unnecessary by enabling the CPU itself to offer direct support:

• CPU virtualization: Modern CPUs have a hardware execution mode for guest virtual machines ("guest mode" or a similarly named mode), with the hypervisor in a more privileged "root mode." This allows most instructions to run natively in guest OSes, with the CPU trapping only occasionally (e.g., for hardware access or control register modification).
• Memory virtualization: Second-Level Address Translation (SLAT) is implemented in the Memory Management Unit (MMU) as Intel's Extended Page Tables (EPT) or AMD's Nested Page Tables. It enables the CPU to translate addresses more efficiently by adding a second page table level, with considerably lower overhead than conventional shadow paging.
• Device virtualization: Storage controllers or network interface cards are securely and directly mapped to VMs through features like VT-d and SR-IOV. This delivers near-native I/O performance and also offers DMA isolation between VMs.

Though introduced as early as 2005-2008, hardware-assisted virtualization initially incurred performance penalties of its own. Adams and Agesen [8], for example, showed that early implementations were slower than highly optimized software-based alternatives, since MMU virtualization was not yet finished. Nevertheless, once extensions like EPT and nested paging arrived on the scene, hardware-assisted virtualization overtook the conventional alternatives and became a widely used industry standard.

All the leading hypervisors now, VMware, KVM, Hyper-V, and Xen (in HVM mode), take advantage of these hardware extensions. With this support, the guest OS runs at nearly native system speed. For the most part, the remaining overhead comes from context switching and certain I/O operations, and in most workloads, especially CPU- or memory-bound applications, performance overhead is only a few percent compared to bare metal [4]. Such efficiency may be the biggest factor behind virtualization being deployed at massive scale in cloud computing today.

Besides performance, hardware-assisted virtualization also improves security. Intel CPUs, for example, use a special privilege level (usually called ring -1) that keeps the hypervisor out of reach of interference by the guest OS. Certain more recent CPUs go even further: technologies such as AMD's Secure Encrypted Virtualization (SEV) can even encrypt the RAM of a VM so that even the hypervisor or cloud administrator cannot access it. That comes in particularly handy in the public cloud arena, where data privacy matters most.

In short, hardware-assisted virtualization isn't a virtualization method by itself; it is a faster, more secure, and more scalable extension of hypervisor-based virtualization. It is the foundation for much of the rich functionality we've come to expect in cloud infrastructure today, including support for high-density VM deployment and live migration at low performance cost.

IV. PERFORMANCE COMPARISON

One of the key factors to consider when selecting a virtualization method is the performance overhead it introduces. Cloud computing environments aim to deliver the highest possible performance while maintaining adequate isolation between workloads. In this section, we compare the performance of hypervisor-based virtual machines (VMs) and containers, and explore how hardware-assisted virtualization influences both approaches.

A. Resource Overhead and Efficiency

CPU and Memory: Modern hypervisors that take advantage of hardware-assisted virtualization impose low performance overhead for most compute-intensive workloads. With appropriate tuning, studies have concluded that the CPU cost for virtual machines is typically within 2-5% overhead of native (bare-metal) performance [4]. In a similar vein, memory virtualization overhead has been significantly reduced owing to features like Second-Level Address Translation (SLAT). Today, memory access in VMs is nearly as fast as it is on the host system, with only minor pauses when some privileged
operations are called most of which are now addressed well same host might be able to run several hundred containers
by hardware. based on each container having a few tens of megabytes.
Containers, conversely, have essentially zero extra CPU or memory overhead. As containerized processes are ordinary processes managed by the host OS (but isolated through namespaces and cgroups), they are scheduled by the kernel exactly like any other process; trap-and-emulate loops and hypervisor mediation are unnecessary. As a result, in CPU-intensive and memory-intensive tasks, containers run almost as fast as native processes.
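The point that a container is "just a process" can be observed directly on a Linux host: the kernel exposes each process's namespace memberships under /proc/&lt;pid&gt;/ns, and a containerized process simply references different namespace objects than host processes do. A minimal, Linux-only sketch (it degrades to an empty list on systems without /proc):

```python
import os

def process_namespaces(pid: str = "self") -> list:
    # Entries such as pid, net, mnt, uts, and ipc are symlinks to
    # kernel namespace objects; processes in the same container share
    # the same objects, while host processes point elsewhere.
    path = "/proc/%s/ns" % pid
    return sorted(os.listdir(path)) if os.path.isdir(path) else []

print(process_namespaces())
```

Comparing this listing for a shell on the host and a shell inside a container shows identical entry names but different underlying namespace identifiers.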
An experiment by Felter et al. at IBM found that Docker containers delivered equal or better performance than KVM-based virtual machines across a suite of benchmark tests [4]. An independent study by Morabito et al. reached the same conclusion: containers use fewer CPU cycles than VMs to execute similar work, primarily because they avoid the overhead of the hypervisor [5]. That study reported that containers can use 30–50% less CPU and memory than comparable VMs while achieving similar performance levels [5].
I/O Performance: Historically, I/O activity (e.g., disk and network I/O) has been harder to virtualize, since it involves emulating hardware devices. Hypervisors responded to this challenge by employing para-virtualized drivers (e.g., virtio) that allow guest operating systems to communicate with host I/O subsystems more efficiently. Additional hardware capabilities, including Single Root I/O Virtualization (SR-IOV), give virtual machines near-native access to storage and networking devices by exposing virtual functions directly.

Even with these improvements, VMs still experience slightly higher I/O latency and lower throughput than native execution on the host, since there are extra layers of scheduling: first in the guest OS, then in the hypervisor. Containers avoid this overhead entirely, as they use the host's I/O stack directly, and can therefore deliver near-native I/O performance. However, containers are still prone to performance degradation when numerous containers compete for shared I/O resources, much like the situation with highly dense VMs.

A major advantage of containers is their boot time and scalability. Starting a new container usually takes a fraction of a second, because it amounts to starting a new process. By contrast, booting a virtual machine involves loading an entire operating system, which can take far longer. This makes containers particularly attractive for applications that must scale very fast, such as serverless computing environments, where services have to respond almost instantly to demand variations.
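That startup gap compounds during a scale-out event. The figures below are illustrative assumptions rather than measurements (about half a second to start a container versus tens of seconds to boot a full VM), but they show why near-instant startup matters for bursty, serverless-style demand:

```python
import math

def scale_out_seconds(instances: int, startup_s: float, parallelism: int) -> float:
    # Time to bring `instances` new replicas online when at most
    # `parallelism` can be provisioned concurrently (sequential waves).
    waves = math.ceil(instances / parallelism)
    return waves * startup_s

# Illustrative: absorb a spike needing 100 new replicas, 10 at a time.
print(scale_out_seconds(100, 0.5, 10))   # containers: 5.0 s
print(scale_out_seconds(100, 30.0, 10))  # full VMs: 300.0 s
```

Under these assumed numbers the container fleet absorbs the spike in seconds, while the VM fleet takes minutes, which is exactly the window in which demand goes unserved.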
Density and Scalability: One of the biggest advantages of containers is that they can be run in huge numbers on a single host because of their low resource overhead. Since containers do not require a full operating system, they use much less memory and CPU per instance. For example, where each virtual machine takes a few hundred megabytes of RAM for its operating system plus a virtual CPU, a server with 16 GB of memory and 8 CPU cores might support only 10–20 VMs depending on workload. In comparison, the same host might be able to run several hundred containers if each container needs only a few tens of megabytes. This density makes containers highly appealing for cloud setups with batch-processing workloads, stateless apps, or microservices, where resource efficiency is critical. Yet hypervisor-based virtualization has also been optimized: features like Kernel Same-page Merging (KSM) on KVM can deduplicate VM memory, and cloud providers judiciously overcommit physical resources to maximize utilization.
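The density claim is simple memory arithmetic. The sketch below uses the paper's illustrative sizes (a few hundred megabytes per VM versus a few tens of megabytes per container); the host reservation and the exact per-instance sizes are assumptions, and real capacity also depends on CPU, I/O, and overcommit policy:

```python
def max_instances(host_mem_mb: int, per_instance_mb: int, host_reserved_mb: int = 1024) -> int:
    # Memory-bound instance count, leaving room for the host OS itself.
    return (host_mem_mb - host_reserved_mb) // per_instance_mb

HOST_MB = 16 * 1024  # the 16 GB server from the example above
print(max_instances(HOST_MB, 768))  # VMs at ~768 MB each   -> 20
print(max_instances(HOST_MB, 48))   # containers at ~48 MB  -> 320
```

The order-of-magnitude difference, tens of VMs versus hundreds of containers, follows directly from the per-instance footprint.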
Lastly, the most suitable choice depends on the use case. Where an application requires full OS-level isolation, multiple operating systems, or stronger security guarantees, virtual machines are the better fit. But where services are stateless, low-footprint, and designed for fast scaling within a single trust boundary, containers offer better packaging and flexibility.

B. Use Case Performance Considerations

In practice, IaaS clouds have typically depended on virtual machines to give customers full control of the operating system and robust isolation from other tenants. Thanks to hardware virtualization and hypervisor optimizations, however, the performance gap between applications running in VMs and natively on physical hardware has fallen significantly.

For SaaS and PaaS styles, cloud consumers usually have the runtime environment managed on their behalf. In these scenarios, containers are usually the preferred option because of their scalability. For example, Google uses container-based scheduling of workloads inside its internal infrastructure, such as Borg and its open-source descendant Kubernetes, to run massive numbers of jobs with minimal overhead.

Interestingly, the performance gap between VMs and containers is not as large as one might expect; in many real-world workloads it is negligible. For instance, Felter et al. [4] observed virtually identical networking and storage performance when comparing Docker containers to KVM-based virtual machines in optimized configurations. What is important is that the container-based approach achieved this performance without requiring special drivers or low-level hypervisor configuration.

This is a significant finding: in controlled environments where resource efficiency and rapid scaling are the highest priorities, containers hold an advantage. For applications that require multiple operating systems, higher security isolation, or total administrative control, however, virtual machines remain the choice.

Briefly, containers typically provide raw performance and efficiency benefits through their lower overhead, whereas hypervisor-based VMs provide strong isolation and operational flexibility. Thanks to hardware-supported virtualization, the performance trade-off of VMs is small enough that they are still widely used in most cloud implementations.

V. SECURITY ASPECTS

Security and isolation are critical concerns in cloud computing environments, where multiple tenants typically share the same physical infrastructure. Virtualization is charged with providing this isolation, but different virtualization techniques provide different levels of security and present different attack surfaces.

Fig. 2. Comparison of (a) hypervisor-based virtualization vs. (b) container-based virtualization architecture.
A. Isolation and Attack Surface

Hypervisor-based virtualization isolates workloads by running each virtual machine with its own guest operating system on emulated hardware. The hypervisor is the only layer that interacts with the underlying physical resources. In well-designed Type-1 hypervisors, the codebase is deliberately kept small, similar to a microkernel architecture. For example, the Xen hypervisor delegates device management to a privileged control domain (Dom0) and restricts the hypervisor's exposure to potential attacks.

This small trusted computing base offers fewer potential vulnerabilities for attackers to exploit. If a virtual machine were compromised, the attacker would typically need to find a vulnerability in the hypervisor or hardware itself to impact other VMs or the host. While rare, such attacks are not entirely theoretical: the well-known "VENOM" vulnerability (CVE-2015-3456) in QEMU's virtual floppy disk controller allowed guest code execution on the host. However, the actual frequency of hypervisor escapes in major public clouds is extremely low, due primarily to stringent patching and hardening practices by cloud providers.

Containers operate on a different security model. All containers share the same host operating system kernel, which presents a bigger attack surface. If the host kernel is compromised, all containers on the system could be affected. Container isolation is rooted in Linux kernel features such as namespaces and control groups (cgroups), which restrict the container's view of the system but do not provide the same degree of hardware-level isolation as VMs.

A compromised user in a container could attempt to escalate privileges and exploit kernel or container runtime vulnerabilities. Instances of such breakouts have occurred, typically due to insecure system call interfaces or Docker daemon misconfiguration. While properly configured containers offer more isolation than running multiple applications directly on the host, they are generally considered somewhat less secure than virtual machines. As summarized in [11], container security is "a bit more secure than hosting many apps over one OS, yet a bit less secure than KVM virtual machines." That is because containers are more separated from each other than regular processes, but they share a common kernel, as opposed to VMs, which each run their own OS kernel.

To improve container security, several measures have been put in place:
• Mandatory Access Control (MAC): Linux security modules such as AppArmor and SELinux can be used to enforce strict limits on which resources the container gets to use and which system calls are permitted.
• Seccomp (Secure Computing Mode): A kernel capability that lets containers filter and constrain the system calls they are permitted to make, reducing the exposed kernel surface area.
• Rootless Containers: These allow containers to execute without root privileges on the host. Even if a process escapes the container, it will not have administrative rights on the host.
• Small Kernel Attack Surface: gVisor (from Google) and others place a user-space kernel between a container and the host OS kernel, intercepting system calls under tight controls, at some performance cost. Kata Containers is another robust alternative that runs each container in a lightweight VM, combining VM-grade isolation with the lightweight nature of containers.
Hardware virtualization can also enhance security, for example in a multi-tenant cloud where confidentiality is critical. Capabilities like AMD-V's Secure Encrypted Virtualization (SEV) enable VM memory to be encrypted at the hardware level. With SEV, a virtual machine's memory is encrypted so that even if malware or a malicious administrator gains access to the host, they cannot inspect the contents of the VM's memory. This capability responds to growing concerns over data confidentiality within multi-tenant infrastructure and is an integral part of emerging confidential computing initiatives.

Similarly, Intel's Trust Domain Extensions (TDX), an upcoming technology, aims to protect virtual machines from possibly compromised hypervisors. These hardware-based technologies represent a new generation of trusted execution environments that aim to strengthen the isolation and integrity of VMs, even against privileged attacks at the hypervisor level.

A second class of security threats in virtualization involves side-channel attacks. Such attacks leverage shared hardware resources, such as CPU caches, memory buses, or branch predictors, to infer sensitive data across isolation boundaries. Side-channel attacks differ from typical software vulnerabilities because they do not break isolation by direct access, but by observing how shared components behave under different execution contexts.

The Spectre and Meltdown exploits, disclosed in 2018, demonstrated that the speculative execution capabilities of modern CPUs could be misused to leak memory contents beyond virtual machine boundaries. These vulnerabilities established that even strong logical separation at the software level was susceptible to being broken by subtle hardware behavior. Consequently, cloud providers deployed a range of mitigations, including software patches, microcode updates, and, for highly secure VMs, dedicated physical cores to reduce exposure to shared resources.

Containers, which share the same host kernel, were also vulnerable to some side-channel vectors and required the same mitigations. Side-channel attacks are not restricted to virtualization and may occur in any setting with shared resources, but the use of virtualization broadens the set of circumstances under which such an attack may occur, especially in multi-tenant settings.

Side-channel threats remain a continuing research area. Both software developers and hardware vendors are studying novel architectures, enhanced isolation mechanisms, and more robust runtime monitoring to prevent data leakage via covert channels.

B. Multi-Tenancy and Trust Models

In public clouds, multiple customers, referred to as tenants, share a common physical server. To achieve proper isolation between tenants, the norm is to rely on hypervisor-based virtualization, which provides robust security boundaries at the hardware abstraction level. For instance, each Amazon EC2 instance traditionally ran as a virtual machine on the Xen hypervisor, and more recently the AWS Nitro system, built on KVM, continues this trend of strong VM-based separation.

Containers, on the other hand, are typically run inside a single tenant's environment to encapsulate separate parts of an application, say inside Kubernetes clusters. When cloud providers offer containers as a direct customer offering, as with AWS Fargate or Azure Container Instances, such containers often end up running inside virtual machines behind the scenes. This hybrid guarantees that strong VM isolation holds even when presenting container-level abstractions to consumers.

This is a valuable point: containers and virtual machines are not competing technologies; they are often used together. A company will run numerous Docker containers within a VM, and that VM will sit on top of a hypervisor. In such a scenario, the VM provides a secure wall of isolation between tenants, while inside the VM, containers provide very lightweight isolation between the services or microservices of the same tenant. This multi-layered strategy balances performance, security, and operational flexibility [11].

Clearly, this tactic introduces some overhead and complexity due to the redundancy of isolation layers. Efforts such as Kata Containers aim to alleviate this by providing lightweight virtual machines with the isolation benefit of VMs but a developer experience akin to containers.

From a security management perspective, hypervisor-based virtualization requires reducing the hypervisor software to a minimum and keeping it regularly patched to restrict the attack surface. Containers require hardening of the host operating system and stringent control of container permissions and capabilities. Both techniques are complemented by a defense-in-depth approach employing network segmentation, firewalls, access controls, and intrusion detection to introduce multiple layers of protection against compromise.

Briefly, hypervisor-based VMs remain the go-to solution in highly sensitive, multi-tenant scenarios due to their excellent isolation. Containers, with weaker isolation guarantees, are sufficient for most scenarios, especially where they are used by a single trusted tenant or supplemented with other security mechanisms. As container security matures and hardware-level capabilities evolve to offer improved isolation, the gap between the two models is diminishing over time.

VI. EMERGING TRENDS AND FUTURE DIRECTIONS

Virtualization technologies continue evolving, with various new trends reshaping how computing resources are provisioned and managed in the cloud. These technologies are developed with the goal of delivering better performance, scalability, security, and operational ease.
Serverless Computing: Serverless platforms, also known as Function-as-a-Service (FaaS), such as AWS Lambda and Azure Functions, completely abstract away the infrastructure. Developers merely publish individual functions or small pieces of code without needing to care about servers, virtual machines, or containers. Behind the scenes, however, cloud vendors still rely on virtualization to run such functions dynamically on demand, with both isolation and scalability.

This paradigm shift pushes virtualization to an even finer level of granularity. Instead of provisioning an entire VM or container per user, the platform schedules lightweight function instances, often realized as containers or microVMs, across its infrastructure in a very short time. This model demands ultra-low startup latency and the ability to support extremely high density; techniques like microVMs and minimalist runtimes are being employed to meet these requirements. Serverless computing is thus a novel level of virtualization abstraction: event-driven, short-lived, and focused on running code automatically upon triggers, leaving resource provisioning solely in the hands of the cloud provider.
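From the developer's side, the unit of deployment in this model is just a function with a platform-defined signature. The sketch below follows the AWS Lambda Python convention of a handler receiving an event and a context object; the event shape is an illustrative assumption, and the same function can be invoked locally for testing:

```python
import json

def handler(event, context=None):
    # FaaS-style entry point: the platform provisions an isolated
    # execution environment (Lambda uses Firecracker microVMs) and
    # calls this function once per triggering event.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "hello, " + name}),
    }

# Local invocation; in production the platform performs the call.
print(handler({"name": "cloud"})["statusCode"])  # 200
```

Everything below this function, the OS image, the sandbox, the scaling from zero to thousands of instances, is the provider's virtualization layer at work.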
Unikernels: A unikernel is a highly specialized, single-address-space binary that links an application with just those bare kernel components required for it to execute [7]. Rather than using a general-purpose OS, unikernels compile only the pertinent kernel services (for example, networking and file systems) as libraries linked directly against the application code.

This creates highly lean virtual machine images with rapid boot times and minimal memory usage, generally a few megabytes. Additionally, because unikernels expose to the external world only the services they explicitly offer, they have significantly fewer attack points, a possible security advantage. Projects like MirageOS [7] and OSv have demonstrated that unikernels can boot in a matter of milliseconds, making them compelling choices where fast response times must be combined with tight resource constraints.

In cloud computing, unikernels offer the potential to run a function or application inside a self-contained environment of its own, with performance levels nearly on par with bare-metal hardware. Nevertheless, they are not yet widely used. One of the biggest hindrances is that developers have to port or recompile applications for unikernel environments, which do not typically offer traditional POSIX interfaces and system libraries. Despite these present-day limitations, unikernels chart a future path toward cloud-native, high-performance, safe, and low-footprint workloads that combine VM-quality isolation with container-like performance.
Lightweight VMMs and Hybrid Approaches: One of the new directions for cloud computing is the combination of virtual machines and containers to harness the strengths of both paradigms. Lightweight VMMs like AWS Firecracker [9], for instance, have been designed to host multi-tenant workloads such as serverless functions and containers with minimal overhead. Firecracker is a lightweight KVM-based hypervisor that can boot microVMs within about 125 milliseconds while using as little as 5 MB of memory per instance. This enables cloud providers to boot thousands of microVMs on a single host, each securely isolating an individual function or container [9].

Similarly, Kata Containers is an open-source initiative that runs every container in a lightweight VM out of the box. The hybrid approach allows developers to preserve the fast, agile development processes of containers while also benefiting from the stronger isolation traditionally associated with VMs.

These hybrid approaches are gaining popularity in usage patterns where both security and agility are of prime importance, such as zero-trust networks and multi-tenant serverless environments. We can look forward to further innovation here, including the adoption of sandboxing technology such as gVisor (which implements a user-space kernel for containers) and future hardware isolation capabilities, to enable finer-grained process-level isolation.
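Firecracker itself is driven through a small REST API served over a Unix socket: a client sizes the machine, points it at a kernel and root filesystem, then issues InstanceStart. The sketch below only builds that request sequence rather than talking to a live Firecracker process; the file paths and sizes are illustrative assumptions, while the endpoints and JSON fields follow Firecracker's published API:

```python
import json

def microvm_boot_sequence(vcpus: int, mem_mib: int, kernel: str, rootfs: str):
    # Ordered (method, path, body) calls for one microVM, mirroring the
    # Firecracker API: size the VM, set the boot source and root drive,
    # then start the instance.
    return [
        ("PUT", "/machine-config", {"vcpu_count": vcpus, "mem_size_mib": mem_mib}),
        ("PUT", "/boot-source", {"kernel_image_path": kernel,
                                 "boot_args": "console=ttyS0 reboot=k panic=1"}),
        ("PUT", "/drives/rootfs", {"drive_id": "rootfs", "path_on_host": rootfs,
                                   "is_root_device": True, "is_read_only": False}),
        ("PUT", "/actions", {"action_type": "InstanceStart"}),
    ]

for method, path, body in microvm_boot_sequence(1, 128, "vmlinux.bin", "rootfs.ext4"):
    print(method, path, json.dumps(body))
```

Four small requests are the entire provisioning path, which is part of why a microVM can be configured and booted in low hundreds of milliseconds.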
Confidential Computing: As noted in the security section, vendors are now introducing capabilities to protect information not just while data is in transit or at rest, but also while it is being processed. This paradigm shift is known as confidential computing, and it is giving rise to a new generation of secure virtual machines known as confidential VMs.

Technologies like AMD's Secure Encrypted Virtualization (SEV) and SEV-SNP allow a VM's memory to be encrypted using keys that are inaccessible to the hypervisor and host operating system. Intel's upcoming Trust Domain Extensions (TDX) is likewise designed to provide similar guarantees by isolating VM execution from privileged software components.

The key idea of confidential computing is to ensure that even if the cloud provider's software stack or a malicious insider is compromised, data processed within a virtual machine remains confidential. This shift of trust from software to hardware could reshape virtualization designs: hypervisors in the future may play a smaller role in enforcing security and instead function as performance managers, with the hardware itself ensuring the isolation guarantee.

Major cloud providers already offer confidential VMs built on these technologies. This will be important in confidentiality-sensitive industries such as government, health care, and finance, where cloud adoption was previously constrained by security considerations.

Edge Computing and IoT Virtualization: The emergence of edge computing, which places compute resources nearer to end-users or data sources, e.g., in IoT gateways or 5G networks, poses new challenges and opportunities for virtualization. Resource-constrained edge devices must be able to host virtualized functions (e.g., IoT applications or network services) with low latency and high reliability.

Lightweight virtualization is necessary here. Containers are particularly well-suited to most edge computing applications due to their low overhead and fast boot times. For instance, an edge gateway can support multiple containerized services such as data aggregation, filtering, and analytics for multiple clients simultaneously. The efficiency of containers enables responsive and scalable deployment even on low-end hardware.

However, where workload isolation is crucial, especially in multi-tenant or untrusted edge environments, lightweight hypervisors are applicable. They provide more rigorous security guarantees than containers, so that a compromised function will not affect others. More recent endeavors are also exploring enclave-based or partitioning hypervisors tailored specifically to real-time and safety-critical edge applications with high isolation and predictability requirements.

The focus in edge virtualization is on low overhead, efficient provisioning, and at times real-time responsiveness. As a result, current techniques are being re-purposed: partitioning a multicore edge device into static VMs, or placing microVMs on IoT devices that previously lacked the resources to support traditional virtualization.

Also, the majority of edge devices use non-x86 architectures, notably ARM processors. ARM virtualization has evolved significantly in recent years to enable efficient VM execution on a broad range of low-power devices, allowing consistent virtualization strategies across heterogeneous edge platforms.

Summary: In the future, virtualization in cloud computing will be even more diversified and hybridized. Virtual machines (VMs) will continue to provide strong isolation for conventional workloads. Containers will remain at the forefront of cloud-native application deployment and development. At the same time, emerging technologies such as partitioned execution environments, unikernels, and microVMs will carve out niche positions for high-security, high-density, or real-time application use cases.

The boundaries between these models are getting thinner. For example, modern container orchestration software already works with VMs, and hypervisors are being optimized to execute single-process workloads with low overhead. This broader set of tools offers cloud consumers as well as cloud providers the flexibility to choose the best virtualization strategy for specific workload requirements, whether that is achieving maximum performance, maximum isolation, maximum scaling, or maximum resource utilization.
VII. CONCLUSION

Virtualization technologies have been the prime facilitators of the cloud computing revolution through their efficient sharing of physical resources with workload isolation. In this paper, we presented a comprehensive investigation of the most prevalent forms of virtualization: hypervisor-based virtual machines, container-based virtualization, and hardware-assisted optimizations. We presented their architectures, ranging from full hardware emulation and guest OS isolation in hypervisors, to kernel-level process isolation in containers, to the CPU features that make virtualization more seamless.

Our comparison indicates that no single virtualization technique is better than all others; rather, each has strengths well-suited to particular circumstances. Hypervisor-based VMs offer strong security isolation and support for heterogeneous operating systems, which remains valuable for multi-tenant public clouds and applications needing high degrees of trust separation. Container-based virtualization provides excellent performance and efficiency and is ideally suited for high densities of application instances with minimal overhead in cloud-native deployments. Hardware-assisted virtualization has bridged most of the performance gap between these approaches, enabling hypervisors to run workloads with minimal slowdown and enabling cloud platforms to use VMs widely without sacrificing efficiency.

Performance analysis shows that containers achieve nearly native performance thanks to the absence of a hypervisor layer, while hypervisors introduce some overhead that is significantly reduced by modern optimizations and hardware assistance. Security analysis shows that hypervisors have a smaller attack surface, at the cost of increased resources per VM, whereas containers must be controlled strictly to prevent kernel-level attacks; novel solutions are bringing container security increasingly close to VM levels.

We also surveyed the emerging trends shaping the use of virtualization. Serverless computing takes virtualization to an even finer granularity and demands ultra-fast, transient instances. Unikernels and microVMs suggest merging the isolation of VMs with the performance of containers. Confidential computing is introducing new models of trust, in which even the hypervisor is not necessarily fully trusted and hardware is leveraged for isolation. Edge computing takes virtualization outside the centralized data center, with an emphasis on light footprints and real-time requirements.

The virtualization landscape within cloud computing keeps evolving. With new use cases and new hardware (more cores, more memory, and special accelerators), virtualization techniques will continue to develop. We see a future where cloud infrastructure seamlessly mixes different virtualization technologies: for example, one cloud application can use VMs for one component, containers for another, and functions (serverless) for yet another, all orchestrated together via orchestration platforms. Both cloud users and operators need in-depth knowledge of these virtualization options to make correct trade-offs between agility, performance, security, and cost.

Lastly, virtualization remains a crucial component of cloud computing. Its forms may change and proliferate, but its intrinsic value, the efficient, adaptable, and isolated use of computing resources, will remain the pillar upon which cloud expansion and development rest.

REFERENCES

[1] G. J. Popek and R. P. Goldberg, “Formal Requirements for Virtualizable Third Generation Architectures,” Communications of the ACM, vol. 17, no. 7, pp. 412–421, 1974.
[2] P. Barham et al., “Xen and the Art of Virtualization,” in Proc. 19th ACM Symposium on Operating Systems Principles (SOSP), Bolton Landing, NY, 2003, pp. 164–177.
[3] D. Merkel, “Docker: Lightweight Linux Containers for Consistent Development and Deployment,” Linux Journal, vol. 2014, no. 239, Article 2, Mar. 2014.
[4] W. Felter, A. Ferreira, R. Rajamony, and J. Rubio, “An Updated Performance Comparison of Virtual Machines and Linux Containers,” in Proc. IEEE Int. Symp. Performance Analysis of Systems and Software (ISPASS), 2015, pp. 171–172.
[5] R. Morabito, J. Kjällman, and M. Komu, “Hypervisors vs. Lightweight Virtualization: A Performance Comparison,” in Proc. IEEE Int. Conf. Cloud Engineering (IC2E), 2015, pp. 386–393.
[6] A. Bhardwaj and C. R. Krishna, “Virtualization in Cloud Computing: Moving from Hypervisor to Containerization—A Survey,” Arabian Journal for Science and Engineering, vol. 46, no. 9, pp. 8585–8601, 2021.
[7] A. Madhavapeddy et al., “Unikernels: Library Operating Systems for the Cloud,” in Proc. 18th Int. Conf. Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2013, pp. 461–472.
[8] K. Adams and O. Agesen, “A Comparison of Software and Hardware Techniques for x86 Virtualization,” in Proc. 12th Int. Conf. Architectural Support for Programming Languages and Operating Systems (ASPLOS XII), 2006, pp. 2–13.
[9] A. Agache et al., “Firecracker: Lightweight Virtualization for Serverless Applications,” in Proc. 17th USENIX Symp. Networked Systems Design and Implementation (NSDI), 2020, pp. 419–434.
[10] C. Clark et al., “Live Migration of Virtual Machines,” in Proc. 2nd USENIX Symp. Networked Systems Design and Implementation (NSDI), 2005, pp. 273–286.
[11] M. Eder, “Hypervisor- vs. Container-based Virtualization,” Seminar Paper, Technische Universität München, Jul. 2016.
[12] J. Gummaraju, T. Sun, and Y. Turner, “Over 30% of Official Images in Docker Hub Contain High Priority Security Vulnerabilities,” BanyanOps Blog, 2015. [Online]. Available: http://www.banyanops.com/blog/analyzing-docker-hub/
[14] S. Soltesz, H. Pötzl, M. E. Fiuczynski, A. Bavier, and L. Peterson, “Container-Based Operating System Virtualization: A Scalable, High-Performance Alternative to Hypervisors,” ACM SIGOPS Operating Systems Review, vol. 41, no. 3, pp. 275–287, 2007.
[15] A. Varghese and P. Chandramohan, “A Survey on Virtualization Techniques,” International Journal of Computer Applications, vol. 116, no. 18, pp. 15–19, 2015.
[16] F. Zhang, J. Chen, H. Chen, and B. Zang, “CloudVisor: Retrofitting Protection of Virtual Machines in Multi-Tenant Cloud with Nested Virtualization,” in Proc. 23rd ACM Symposium on Operating Systems Principles (SOSP), 2011, pp. 203–216.
[17] D. J. Bernstein, “Containers and Cloud: From LXC to Docker to Kubernetes,” IEEE Cloud Computing, vol. 1, no. 3, pp. 81–84, 2014.
[18] Y. Fu and Z. Lin, “Space Traveling Across VM: Automatically Bridging the Semantic Gap in Virtual Machine Introspection via Online Kernel Data Redirection,” in Proc. IEEE Symposium on Security and Privacy, 2012, pp. 586–600.
[19] M. Satyanarayanan, “The Emergence of Edge Computing,” Computer, vol. 50, no. 1, pp. 30–39, Jan. 2017.
