
Performance Evaluation of Server Virtualization in Data Center

Sabir Mohammed Mahmmoud Mohammed (1), Dr. Mohammed Al-Ghazali Hamza Khalil (2)

The Future University,
College of Postgraduate Studies,
Master of Data Communication and Network Engineering,
Khartoum, Sudan
[email protected]

Received: 00 December 00
Accepted: 00 February 00
Abstract
Virtualization is the technology that creates virtual environments based on existing physical resources. Server virtualization is the masking of server resources, including the number and identity of individual physical servers, processors, and operating systems, from the server's users. The server administrator uses a software application to divide one physical server into multiple isolated virtual environments. Virtualization is not a new concept to computer scientists. This thesis addresses several problems with the one-application-per-server model: dedicating one physical server to a single application does not take advantage of modern server computers' processing power. Moreover, most servers use only a small fraction of their processing capabilities, so as a computer network grows larger and more complex the servers begin to take up a lot of physical space, and the data center may eventually become overcrowded with racks of servers consuming a lot of power and generating heat. The aim of this thesis is to evaluate the efficiency of server virtualization compared to a physical server in a network in terms of power cost and quality of service parameters, including delay time, Central Processing Unit (CPU) usage, processing time, and average processing time per task. The evaluation results for the different criteria confirm that the server virtualization technique achieves high throughput and CPU usage as well as a performance enhancement with noticeable agility. The experimental results obtained demonstrate that server virtualization can significantly reduce resource consumption while improving system performance. The results were produced using two different methods to obtain accurate outputs: the first method is Dynamic Voltage and Frequency Scaling (DVFS) and the second is a threshold algorithm. The results of the two methods certify that server virtualization can be a strategic decision for saving investment and increasing service performance.

Keywords: CloudSim, Load Balance, Energy, Green Cloud, Virtualization

i. Introduction
Server computers are machines that host files and applications on computer networks, and they have to be powerful. Some have Central Processing Units (CPUs) with multiple processors that give these servers the ability to run complex tasks smoothly. Computer network administrators usually dedicate each server to a specific application or task. Many of these tasks don't play well with others; each needs its own dedicated machine. One application per server also makes it easier to track down problems as they arise [1]. It's a simple way to streamline a computer network from a technical standpoint. There are a couple of problems with this approach, though. Server virtualization attempts to address both of these issues in one fell swoop. By using specially designed software, an administrator can convert one physical server into multiple virtual machines. Each virtual server acts like a unique physical device, capable of running its own Operating System (OS) [2]. In theory, you could create enough virtual servers to use all of a machine's processing power, though in practice that's not always the best idea. Virtualization isn't a new concept: computer scientists have been creating virtual machines on supercomputers for decades. But it's only been a few years since virtualization has become feasible for servers. In the world of Information Technology (IT), server virtualization is a hot topic. It's still a young technology, and several companies offer different approaches [2].

ii. Background of the problem
Many companies provide their servers online by building a core network with routing and links between the network elements; these elements are costly to implement, which becomes a problem facing small and medium companies. Server hardware is a powerful processing resource, but with one application per server most of that power goes unused; one server per application means high power consumption with unused resources [3]. Many data centers provide their services through server hardware, which brings limitations: the utilization of the equipment is lower than the actual capability of the server, and deploying a single solution per physical server increases the cost across all sectors.

iii. Problem Statement
This thesis addresses several problems. One application per server makes it easier to track down problems as they arise, but one physical server per application does not take advantage of modern server computers' processing power. Moreover, most servers use only a small fraction of their overall processing capabilities, so as a computer network grows larger and more complex the servers begin to take up a lot of physical space, and the data center may eventually become overcrowded with racks of servers consuming a lot of power and generating heat.
iv. Objectives
The aim of this research is to evaluate the efficiency of server virtualization compared to a physical server in a network in terms of cost and QoS parameters, including delay time, CPU usage, processing time, average processing time per task, and power consumption.
• To study and analyse the computer network's physical hardware and the virtual servers' requirements.
• To simulate the physical server and the virtual servers and evaluate their efficiency and performance under specific traffic loads.
• To compare the results of server virtualization and the hardware server.

v. Methodology
First of all, a simulation program is installed on a Windows operating system. The selected simulation program, CloudSim, uses the Java platform to test the system and allows developers to test their networks and network elements. Two scenarios were run to evaluate the system, one using physical servers and one using server virtualization, and the simulation covers the QoS parameters used to evaluate the performance.
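As an illustration only (not the exact code used in this thesis), a scenario of this kind can be wired together with the public CloudSim 3.x API roughly as follows; the host size, the number of VMs, and the cloudlet lengths are assumed values, and the physical-server scenario would simply use one large VM in place of several small ones:

import java.util.ArrayList;
import java.util.Calendar;
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

public class ServerVirtualizationSketch {

    public static void main(String[] args) throws Exception {
        CloudSim.init(1, Calendar.getInstance(), false);   // one broker, no event trace

        Datacenter dc = createDatacenter("Datacenter_0");  // one physical server
        DatacenterBroker broker = new DatacenterBroker("Broker");
        int brokerId = broker.getId();

        // Virtualized scenario: split the host into several small VMs.
        // For the physical-server scenario this loop would create a single large VM instead.
        List<Vm> vms = new ArrayList<Vm>();
        for (int id = 0; id < 4; id++) {
            vms.add(new Vm(id, brokerId, 1000, 1, 512, 1000, 10000,
                           "Xen", new CloudletSchedulerTimeShared()));
        }

        // Cloudlets stand in for the traffic applied to the servers.
        List<Cloudlet> cloudlets = new ArrayList<Cloudlet>();
        UtilizationModel full = new UtilizationModelFull();
        for (int id = 0; id < 8; id++) {
            Cloudlet c = new Cloudlet(id, 40000, 1, 300, 300, full, full, full);
            c.setUserId(brokerId);
            cloudlets.add(c);
        }

        broker.submitVmList(vms);
        broker.submitCloudletList(cloudlets);

        CloudSim.startSimulation();
        CloudSim.stopSimulation();

        // QoS output: per-cloudlet processing time, from which delay and CPU usage are read off.
        List<Cloudlet> finished = broker.getCloudletReceivedList();
        for (Cloudlet c : finished) {
            System.out.printf("cloudlet %d on VM %d: %.2f s%n",
                    c.getCloudletId(), c.getVmId(), c.getActualCPUTime());
        }
    }

    private static Datacenter createDatacenter(String name) throws Exception {
        List<Pe> peList = new ArrayList<Pe>();
        for (int i = 0; i < 4; i++) {
            peList.add(new Pe(i, new PeProvisionerSimple(1000)));   // 4 cores x 1000 MIPS (assumed)
        }
        Host host = new Host(0, new RamProvisionerSimple(4096), new BwProvisionerSimple(10000),
                1000000, peList, new VmSchedulerTimeShared(peList));
        DatacenterCharacteristics ch = new DatacenterCharacteristics("x86", "Linux", "Xen",
                Collections.singletonList(host), 10.0, 3.0, 0.05, 0.001, 0.0);
        return new Datacenter(name, ch, new VmAllocationPolicySimple(Collections.singletonList(host)),
                new LinkedList<Storage>(), 0);
    }
}

Under such a sketch, comparing the physical and virtualized runs reduces to comparing the printed completion times and resource usage of the two VM layouts.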
vi. Literature Review
Performance Evaluation of Hypervisors and the Effect of Virtual CPU on Performance: Organizations are adopting virtualization technology to reduce cost while maximizing productivity, flexibility, responsiveness, and efficiency. There are a variety of vendors for virtualization environments, and all of them claim that their virtualization hypervisor is the best in terms of performance. Furthermore, when a system administrator or a researcher wants to deploy a virtual machine in a cloud environment, it is not obvious which CPU-VM configuration gives the best performance. In this paper the author, prior to evaluating the latest versions of hypervisors (commercial and open source), analyses for each hypervisor the best virtual CPU to virtual machine (vCPU-VM) configuration as well as the effect of virtual CPUs on performance. The author uses the Phoronix Test Suite (PTS) benchmarking tool as a traffic generator and analyzer. The results have shown that commercial and open source hypervisors have similar performance. As per our observation, the performance of a system would degrade by improper allocation of vCPUs to VMs, or when there is a massive over-allocation of vCPUs [p-1]. The authors in [p-2] provide a performance comparison between KVM and Xen. They conducted several experiments to examine the energy consumption of the two platforms considering different network traffic patterns and CPU affinity; OpenVZ is among the technologies under evaluation, which can be found in [2]. The authors discover that adaptive packet buffering in KVM can reduce the energy consumption caused by network transactions. Jin et al. [3] evaluate the impact of server virtualization in terms of energy efficiency by using several configurations and two different hypervisors. They observe that the energy overhead depends on the type of hypervisor used and on the particular configuration chosen. Joulemeter is a solution in which, without using auxiliary hardware equipment or any software integration, the authors propose different power models to infer power consumption from resource usage at runtime and identify the challenges that arise when applying such models for VM power metering [4]. Finally, a recent paper proposes real-time power estimation of software processes running on any level of virtualization [5] by using an application-agnostic power model. Except for this last work, none of the related work includes container-based platforms in their analysis and, even more importantly, Docker. Compared with these studies, our work includes a wider range of parameters in the network performance analysis [2].

vii. Design and Simulation
This chapter includes a mathematical representation of the evaluation metrics along with the computer model, the simulation scenarios, and a description of the hardware and software used in the simulation.

viii. Simulation Environment
CloudSim is an extensible simulation framework that enables smooth modelling of clouds, running simulations, and experimenting with ease in order to analyse cloud computing infrastructures and application services. CloudSim helps researchers and industry-based developers focus on the specific system design issues requiring investigation without getting caught up in the low-level details of cloud-based infrastructures and services.
VM’s to resume state from pause state, Emig: cloud computing. Performance analysis of
is the energy consumed during migration of cloud under different Virtual Machine (VM)
VM’s. The algorithm for scheduling the capacity is investigated by varying the VM
The algorithm for scheduling the incoming tasks to the virtual machines is shown in Figure 2. The algorithm is iterative and allows running the steps repeatedly for all the virtual machines. Initially all VMs carry no load, i.e. all are free, and cloudlets are allocated to the VMs on an FCFS basis. After the first cycle, the load of each VM is calculated using the following formula:

PW(k) = PW(k) + CPU(ri) * size(ri) / CPU(nk) ……(2)

where
• CPU - number of processing elements
• Size - size of the cloudlets
• Energy - present capacity
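A condensed sketch of this first scheduling cycle is shown below (plain Java rather than the thesis' simulation code; the VM processing-element counts and cloudlet sizes are assumed):

import java.util.ArrayDeque;
import java.util.Queue;

// First-cycle scheduling sketch: cloudlets are handed out FCFS to initially idle VMs,
// then each VM's load is updated as PW(k) += CPU(ri) * size(ri) / CPU(nk), i.e. eq. (2).
public class FcfsLoadSketch {
    public static void main(String[] args) {
        double[] vmPes = {1, 2, 3, 1};              // CPU(nk): processing elements per VM (assumed)
        double[] load  = new double[vmPes.length];  // PW(k); all zero, so all VMs start free

        Queue<double[]> cloudlets = new ArrayDeque<>();
        cloudlets.add(new double[]{1, 40000});      // {CPU(ri) required, size(ri) in MI} - assumed
        cloudlets.add(new double[]{1, 20000});
        cloudlets.add(new double[]{2, 60000});

        int next = 0;                               // FCFS: hand out to VMs in arrival order
        while (!cloudlets.isEmpty()) {
            double[] c = cloudlets.poll();
            int k = next++ % vmPes.length;          // allocate to the next free VM
            load[k] += c[0] * c[1] / vmPes[k];      // eq. (2) load update
        }
        for (int k = 0; k < load.length; k++) {
            System.out.printf("VM %d load PW = %.1f%n", k, load[k]);
        }
    }
}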
b) Transmission Delay
The transmission delay is calculated as

D = Number of bits × t ……(5)

c) Data Rate
The data rate is calculated based on the modulation technique as

R = BW × log2(M) ……(6)
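The two link formulas can be evaluated as in the sketch below; the assumption made here is that the per-bit time t in equation (5) equals 1/R, and the channel bandwidth and modulation order are example values only:

// Sketch of equations (5) and (6). Assumption: t, the time to send one bit, is taken as 1/R,
// so the delay of a frame is its length in bits times t. All inputs are example values.
public class LinkModel {
    static double dataRate(double bandwidthHz, int modulationLevels) {
        return bandwidthHz * (Math.log(modulationLevels) / Math.log(2));  // eq. (6): R = BW * log2(M)
    }

    static double transmissionDelay(long numberOfBits, double dataRateBps) {
        return numberOfBits * (1.0 / dataRateBps);                        // eq. (5): D = bits * t
    }

    public static void main(String[] args) {
        double r = dataRate(1_000_000, 16);        // 1 MHz channel, 16-level modulation (assumed)
        double d = transmissionDelay(12_000, r);   // a 1500-byte frame
        System.out.printf("R = %.0f bit/s, D = %.6f s%n", r, d);
    }
}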
x. Results and Discussion
This chapter presents the results and discussion, covering various network conditions and configurations based on a random-runner Java file that applies different loads on the network in order to examine network performance.

xi. Simulation Program "CloudSim"
CloudSim is used for simulating the various scenarios to study the performance of the cloud. The performance of the cloud under different Virtual Machine (VM) capacities is investigated by varying VM parameters such as RAM and the number of processors. The internal memory (RAM) is set to 512 Megabytes and 1024 Megabytes, and the number of CPUs is varied from one to three. The Million Instructions Per Second (MIPS) rating and the bandwidth are kept constant at 1000 each. Two datacenters and 20 VMs are created; the total number of VMs used is 40. The simulations are conducted for four different combinations of RAM and CPU, as follows:
1. 512 MB RAM, 1 CPU
2. 512 MB RAM, 2 CPUs
3. 512 MB RAM, 3 CPUs
4. 1024 MB RAM, 1 CPU
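The four combinations can be organized as a simple sweep, as in the sketch below; runScenario is a hypothetical driver standing in for the CloudSim setup shown earlier:

// The four RAM/CPU combinations above, expressed as a sweep that a scenario driver could iterate over.
public class ScenarioSweep {
    public static void main(String[] args) {
        int[][] configs = { {512, 1}, {512, 2}, {512, 3}, {1024, 1} };  // {RAM in MB, CPUs}
        int totalVms = 40;                                              // as in the simulation setup
        for (int[] cfg : configs) {
            System.out.printf("scenario: %d VMs, %d MB RAM, %d vCPU(s), 1000 MIPS, 1000 bw%n",
                    totalVms, cfg[0], cfg[1]);
            // runScenario(totalVms, cfg[0], cfg[1]);  // hypothetical driver, see sketch above
        }
    }
}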
xii. Execution Time for VMs
Figure 4.1 shows the time taken, from the CloudSim output, for the 40 VMs to be executed in the different scenarios.

Figure 4.1: VM Execution time in sec

xiii. Creation of VMs in Different Data Centers
Table 4.1 and Figure 4.2 show the number of VMs created and the number of cloudlets executed in each datacenter.

Table 4.1: Number of VMs created and the number of cloudlets executed
RAM, CPU | VMs created (Datacenter 2) | Cloudlets executed (Datacenter 2) | VMs created (Datacenter 3) | Cloudlets executed (Datacenter 3)
512, 1   | 6 | 22 | 6 | 18
512, 2   | 3 | 21 | 3 | 19
512, 3   | 1 | 20 | 1 | 20
1024, 1  | 4 | 20 | 1 | 20

xiv. Executed VMs in Different Datacenters
Figure 4.2: Number of VMs created and the number of VMs executed in each data center
xv. Debt
The debt is calculated according to the following formula:

Debt = RAM of VM × CostPerRam + Size of VM × CostPerStorage [24]
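For illustration, the formula can be evaluated as in the sketch below; the cost constants and VM sizes are placeholders and are not the values behind Table 4.2 (in CloudSim they would come from the datacenter's costPerMem and costPerStorage characteristics):

// Sketch of the debt formula: Debt = RAM_of_VM * costPerRam + size_of_VM * costPerStorage.
// The cost constants and VM sizes are illustrative only.
public class DebtSketch {
    static double debt(int ramMb, long imageSizeMb, double costPerRam, double costPerStorage) {
        return ramMb * costPerRam + imageSizeMb * costPerStorage;
    }

    public static void main(String[] args) {
        double costPerRam = 0.05, costPerStorage = 0.001;   // assumed datacenter prices
        System.out.println("512 MB VM:  " + debt(512, 10000, costPerRam, costPerStorage));
        System.out.println("1024 MB VM: " + debt(1024, 10000, costPerRam, costPerStorage));
    }
}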


Table 4.2: Debt incurred for different scenarios
RAM, CPU | Debt
512, 1   | 6153.6
512, 2   | 3076.8
512, 3   | 1025.6
1024, 1  | 4204.8

xvi. Debt Analysis in Different Scenarios
Figure 4.3: Debt incurred for different scenarios

It is observed from the figures and tables that varying the VM characteristics affects the time taken for cloudlet execution and the debt incurred. Further investigations are required to study the impact of VMs in cloud computing.

Table 4.3: Hardware Analysis
Cloudlets | Shared Memory | Bandwidth | Delay
10 | 43658.52 | 196.66 | 600.01
50 | 204255.54 | 920.07 | 2807.13
100 | 409965.18 | 1846.69 | 5634.25
500 | 2035666.74 | 9169.67 | 2796.66
1000 | 4072787.58 | 18345.89 | 55973.31
1500 | 6104730.27 | 27498.78 | 83898.79
2000 | 8140372.59 | 36668.34 | 111875.12
2500 | 10177512.3 | 45844.65 | 139872.03
3000 | 12209454.99 | 54997.54 | 167797.51
4500 | 18314136.42 | 82496.11 | 251695.63
5000 | 20349778.74 | 91665.67 | 279671.96
10000 | 40701323.49 | 183339.3 | 559368.19
15000 | 61047872.13 | 274990.42 | 838995.76
20000 | 81398119.29 | 366658.2 | 1118674.15
25000 | 101749843.9 | 458332.63 | 1398372.85
30000 | 122096439.1 | 549983.96 | 1678001.06
35000 | 142446932.7 | 641652.85 | 1957682.85
40000 | 162798903.7 | 733328.4 | 2237384.93
50000 | 203496190.1 | 916649.51 | 2796697.64
70000 | 284897424.1 | 1283321.73 | 3915414.6
80000 | 325594712.7 | 1466642.85 | 4474727.34
90000 | 366293476.5 | 1649970.62 | 5034060.35
100000 | 406995942.2 | 1833315.06 | 5593444.23
150000 | 610490517.3 | 2749957.28 | 8390119.68
200000 | 813988790.9 | 3666616.17 | 11186845.95
250000 | 1017488544 | 4583281.73 | 13983592.56
300000 | 1220983119 | 5499923.96 | 16780268
350000 | 1424481393 | 6416582.85 | 19576994.28
400000 | 1627981144 | 7333248.39 | 22373740.85

Tables 4.2 and 4.3 above show that as the number of VMs increases the execution time also increases; as the VM workload (the number of instruction lines) increases, the execution time also increases. The throughput remains the same as the number of processes completed per second, and the average execution time is approximately the same every time. This is the initial stage of the proposed algorithm, in which it checks that if the demands of the users increase the execution time also increases, which means the number of VMs is directly proportional to the execution time.
Table 4.4: Virtual Machine
Cloudlets | Shared Memory | Bandwidth | Delay
10 | 22186.24 | 133.33 | 444.39
50 | 116087.04 | 697.64 | 2325.25
100 | 233416.96 | 1402.75 | 4675.39
500 | 1172048.64 | 7043.56 | 23476.37
1000 | 2345340.16 | 14094.59 | 46977.63
1500 | 3518630.4 | 21145.62 | 70478.87
2000 | 4691920.64 | 28196.64 | 93980.11
2500 | 5865212.16 | 35247.67 | 117481.37
3000 | 7038502.4 | 42298.69 | 140982.61
4500 | 10558374.4 | 63451.77 | 211486.35
5000 | 11731664.64 | 70502.79 | 234987.58
10000 | 23464611.84 | 141013.29 | 470000.86
15000 | 35197731.84 | 211524.83 | 705017.59
20000 | 46930851.84 | 282036.37 | 940034.33
25000 | 58663971.84 | 352547.91 | 1175051.06
30000 | 70397091.84 | 423059.45 | 1410067.8
35000 | 82130211.84 | 493570.98 | 1645084.53
40000 | 93863331.84 | 564082.52 | 1880101.27
50000 | 117329571.8 | 705105.6 | 2350134.74
70000 | 164262051.8 | 987151.75 | 3290201.68
80000 | 187728291.8 | 1128174.83 | 3760235.15
90000 | 211194531.8 | 1269197.91 | 4230268.62
100000 | 234660771.8 | 1410220.98 | 4700302.09
150000 | 351991971.8 | 2115336.37 | 7050469.45
200000 | 469323171.8 | 2820451.75 | 9400636.8
250000 | 610490517.3 | 2115351.75 | 7050520.74
300000 | 703985571.8 | 4230682.52 | 14100971.51
350000 | 821316771.8 | 4935797.91 | 16451138.86
400000 | 938647971.8 | 5640913.29 | 18801306.21

xx. Cloudlets vs. Shared Memory
From the following graph it was found that the shared memory usage increases with the number of cloudlets, but it is also found that the shared memory used by the VM is less than that of the hardware server.

Figure 4.4: Number of cloudlets vs. required shared memory

xvii. Cloudlets vs. Bandwidth
From the following graph it was found that the bandwidth usage increases with the number of cloudlets, but it is also found that the bandwidth of the VM is less than that of the hardware server.

Figure 4.5: Number of cloudlets vs. required bandwidth

xviii. Cloudlets vs. Delay
From the following graph it was found that the delay increases with the number of cloudlets, but it is also found that the delay of the VM is less than that of the hardware server.

Figure 4.6: Number of cloudlets vs. required delay time

xxii. Conclusion
Server virtualization is a business-related infrastructure which is capable of eradicating the need for high-priced computing hardware and maintenance. In the virtualization environment, the computing power is supplied by many data centers, installed with hundreds to thousands of servers. CloudSim simulates various scenarios to study server virtualization performance. The performance of server virtualization under different capacities is investigated by varying VM parameters such as RAM and the number of processors. Simulation results demonstrated that varying the VM characteristics affects the time taken for cloudlet execution and the debt incurred. Further investigations are required to study the impact of VMs on network performance. After simulating and comparing the results, the virtual machine shows decreased delay, bandwidth, and shared memory usage. For future work it is recommended to evaluate server virtualization with other measurement algorithms and to compare the output results with the results of this thesis for continuous performance evaluation.

References
1. Tickoo, Omesh; Iyer, Ravi; Illikkal, Ramesh; Newell, Don; Modeling Virtual Machine Performance: Challenges and Approaches, ACM SIGMETRICS Performance Evaluation Review, Volume 37, Issue 3, December 2009.
2. Ardagna, Danilo; Tanelli, Mara; Lovera, Marco; Zhang, Li; Black-box Performance Models for Virtualized Web Service Applications, WOSP/SIPEW '10: Proceedings of the First Joint WOSP/SIPEW International Conference on Performance Engineering, ACM, 2010, http://doi.acm.org/10.1145/1712605.1712630. Suggests Linear Parameter Varying (LPV) models for performance analysis of web service applications in virtualized environments.
3. Calheiros, Rodrigo N.; Ranjan, Rajiv; De Rose, Cesar A. F.; Buyya, Rajkumar; CloudSim: A Novel Framework for Modeling and Simulation of Cloud Computing Infrastructures and Services, ICPP 2009, http://www.gridbus.org/reports/CloudSim-ICPP2009.pdf. Introduces and emphasizes the benefits of CloudSim, a new customizable modeling and simulation tool developed specifically for cloud computing.
4. Watson, Brian J.; Marwah, Manish; Gmach, Daniel; Chen, Yuan; Arlitt, Martin; Wang, Zhikui; Probabilistic Performance Modeling of Virtualized Resource Allocation, ICAC '10: Proceedings of the 7th International Conference on Autonomic Computing, pp. 99-108, ACM, 2010, http://doi.acm.org/10.1145/1809049.1809067. Proposes a model for application performance in a virtualized system based on the probability distributions of performance metrics.
5. Kundu, Sajib; Rangaswami, Raju; Dutta, Kaushik; Zhao, Ming; Application Performance Modeling in a Virtualized Environment, High Performance Computer Architecture (HPCA), 2010 IEEE 16th International Symposium on, pp. 1-14, Jan. 2010, http://dx.doi.org/10.1109/HPCA.2010.5463058. Implemented a particular execution of an artificial neural network (ANN) model to predict the performance of applications running on virtualized systems.
6. Apparao, Padma; Iyer, Ravi; Newell, Don; Towards Modeling & Analysis of Consolidated CMP Servers, ACM SIGARCH Computer Architecture News, Volume 36, Issue 2, May 2008, pp. 38-45, http://doi.acm.org/10.1145/1399972.1399980. Presents a consolidation performance model for the performance analysis of consolidated servers, and utilizes the benchmark vConsolidate in a case study illustrating this model's potential.
7. Jang, Jiyong; Han, Saeyoung; Kim, Jinseok; Park, Sungyong; Bae, Seungjo; Choon Woo, Young; A Performance Evaluation Methodology in Virtual Environments, 7th IEEE International Conference on Computer and Information Technology, pp. 351-358, 2007, http://www.computer.org/portal/web/csdl/doi/10.1109/CIT.2007.179. Defines four performance models representing different virtualized systems, introduces a new performance metric, and uses one model and the performance metric M to evaluate the performance of virtualized versus non-virtualized environments.
8. VMware ESXi Cloud Simplified, Hostway UK, http://www.hostway.co.uk/small-business/dedicated-hosting/cloud/vmware-esxi.php. Comprehensive explanation of the features and benefits of the VMware ESXi hypervisor.
9. "Guest OS Install Guide", VMware Community Page, http://blogs.vmware.com/guestosguide/2009/09/vmi-retirement.html. Web page announcing that VMware will no longer support their Virtual Machine Interface (VMI) technology.
10. Huber, Nikolaus; Von Quast, Marcel; Brosig, Fabian; Kounev, Samuel; Analysis of the Performance-Influencing Factors of Virtualization Platforms, On the Move to Meaningful Internet Systems, OTM 2010, Springer-Verlag Berlin, 2010, pp. 811-828, http://dx.doi.org/10.1007/978-3-642-16949-6_10. Offers a benchmark-based approach to predict the performance of a Xen virtualized environment.
11. Xu, Jing; Zhao, Ming; Fortes, Jose; Carpenter, Robert; Yousif, Mazin; On the Use of Fuzzy Modeling in Virtualized Data Center Management, Proceedings of the Fourth International Conference on Autonomic Computing (ICAC), IEEE Computer Society, p. 25, June 2007, http://portal.acm.org/citation.cfm?id=1270385.1270747. Proposes a local and global system based on fuzzy logic for the management of virtualized resources.
12. Lu, Jie; Makhlis, Lev; Chen, Jianjiun; Measuring and Modeling the Performance of the XEN VMM, International CMG Conference 2006, pp. 621-628, http://svn.assembla.com/svn/biSTgsRbOr3y0wab7jnrAJ/trunk/artikels/Measuring_and_Modeling_the_Performance_of_XEN_VMM.pdf. Presents Xen as a server virtualization option, discusses why traditional modeling methods will not work with a virtualized system, and suggests new modeling techniques.
13. Iyer, Ravi; Illikkal, Ramesh; Tickoo, Omesh; Zhao, Li; Apparao, Padma; Newell, Don; VM3: Measuring, Modeling and Managing VM Shared Resources, Computer Networks: The International Journal of Computer and Telecommunications Networking, Volume 53, Issue 17, December 2009, http://dx.doi.org/10.1016/j.comnet.2009.04.015. Models virtual machine performance on a consolidated chip-multiprocessor (CMP) platform, measuring the effects of server consolidation with the benchmark vConsolidate, and discusses methods to manage shared resources.
14. Jun, Hai; Cao, Wenzhi; Yuan, Pingpeng; Xie, Xia; VSCBenchmark: Benchmark for Dynamic Server Performance of Virtualization Technology, IFMT: Proceedings of the First International Forum on Next-generation Multicore/Manycore Technologies, ACM, 2008, pp. 5:1-5:8, http://doi.acm.org/10.1145/1463768.1463775. Discusses the VSCBenchmark for analyzing server consolidation and compares this benchmark with the vConsolidate and VMmark benchmarks.
15. Features of VMmark, Virtualization Benchmark, VMware Product Page, http://www.vmware.com/products/vmmark/features.html. Web page listing the features and benefits of VMmark, a tile-based benchmark.
16. Deshane, Todd; Shepherd, Zachary; Matthews, Jeanna N.; Ben-Yehuda, Muli; Rao, Balaji; Shah, Amit; Quantitative Comparison of Xen and KVM, Xen Summit, Boston, MA, June 23, 2008, http://www.todddeshane.net/research/Xen_versus_KVM_20080623.pdf. Developed benchvm, a virtualization benchmarking suite, and used this benchmark to compare two hypervisors, Xen and KVM.
17. McDougall, Richard; Anderson, Jennifer; Virtualization Performance: Perspectives and Challenges Ahead, ACM SIGOPS Operating Systems Review, Volume 44, Issue 4, December 2010, pp. 40-56, http://doi.acm.org/10.1145/1899928.1899933. Discusses the performance issues of virtualized systems with an emphasis on the VMware vSphere virtualization platform.
18. Benevenuto, Fabrício; Fernandes; Santos, Matheus; Almeida, Virgílio; Almeida, Jussara; Janakiraman, G. (John); Santos, José Renato; Performance Models for Virtualized Applications, Lecture Notes in Computer Science, Volume 4331, Frontiers of High Performance Computing and Networking ISPA 2006 Workshops, pp. 427-439, 2006, http://dx.doi.org/10.1007/11942634_45. Discusses methodology for building models for performance prediction of applications migrated from a non-virtualized system to a virtualized environment.
19. Cloud Computing Services with VMware Virtualization Cloud Infrastructure, http://www.vmware.com/solutions/cloud-computing/index.html. Provides an overview of vCloud, VMware's approach to cloud computing.
20. Understanding Memory Resource Management in VMware ESX 4.1, VMware, Inc. Performance Study, 2010, http://www.vmware.com/files/pdf/techpaper/vsp_41_perf_memory_mgmt.pdf. This published performance study describes the basic memory management concepts in ESX and the configuration options available, and provides results to show the performance impact of these options.
21. Virtualization Overview White Paper, VMware, 2006, http://www.vmware.com/pdf/virtualization.pdf. Overview and definitions of virtualization concepts and terms, including para-virtualization.
22. Cloud Computing, Wikipedia, http://en.wikipedia.org/wiki/Cloud_computing. Overview of cloud computing concepts.
23. What is Grid-Computing? Definition and Meaning, Business Dictionary, http://www.businessdictionary.com/definition/grid-computing.html. Provides a definition of grid computing.
24. Quiroz, H. Kim, M. Parashar, N. Gnanasambandam, and N. Sharma; Towards Autonomic Workload Provisioning for Enterprise Grids and Clouds, Proceedings of the 10th IEEE/ACM International Conference on Grid Computing (Grid 2009), Banff, Alberta, Canada, October 13-15, 2009, IEEE Computer Society Press.
