Cloud Computing 5-13 Cloud Technologies and Advancements
5.5 Google App Engine
Google App Engine (GAE) is a Platform-as-a-Service (PaaS) cloud computing model that supports many programming languages. GAE is a scalable runtime environment mostly devoted to executing web applications. In fact, it allows developers to integrate third-party frameworks and libraries with the infrastructure still being managed by Google. It allows developers to use a ready-made platform to develop and deploy web applications using development tools, runtime engine, databases and middleware solutions. It supports languages like Java, Python, .NET, PHP, Ruby, Node.js and Go in which developers can write their code and deploy it on the available Google infrastructure with the help of a Software Development Kit (SDK). In GAE, SDKs are required to set up your computer for developing, deploying, and managing your apps in App Engine. GAE enables users to run their applications on a large number of data centers associated with Google's search engine operations. Presently, Google App Engine is a fully managed, serverless platform that allows developers to choose from several popular languages, libraries, and frameworks to develop user applications; App Engine then takes care of provisioning servers and scaling application instances based on demand. The functional architecture of the Google cloud platform for App Engine is shown in Fig. 5.5.1.
[Figure: client requests pass through a load balancer and scheduler into the Google cloud infrastructure, which comprises the GFS master, compute nodes, Bigtable, MapReduce and Chubby]
Fig. 5.5.1 : Functional architecture of the Google cloud platform for App Engine
TECHNICAL PUBLICATIONS® - An up thrust for knowledge
The infrastructure for Google cloud is managed inside datacenters. All the cloud services and applications on Google run through servers inside datacenters. Inside each data center, there are thousands of servers forming different clusters. Each cluster can run multipurpose servers. The infrastructure for GAE is composed of four main components, namely Google File System (GFS), MapReduce, BigTable, and Chubby. GFS is used for storing large amounts of data on Google storage clusters. MapReduce is used for application program development with data processing on large clusters. Chubby is used as a distributed application locking service, while BigTable offers a storage service for accessing structured as well as unstructured data. In this architecture, users can interact with Google applications via the web interface provided by each application.
The GAE platform comprises five main components :
* Application runtime environment that offers a platform with a built-in execution engine for scalable web programming and execution.
* Software Development Kit (SDK) for local application development and deployment over the Google cloud platform.
* Datastore to provision object-oriented, distributed, structured data storage for application data. It also provides secure data management operations based on BigTable techniques.
* Admin console used for easy management of user application development and resource management.
* GAE web service for providing APIs and interfaces.
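To make the runtime environment and SDK concrete, here is a minimal sketch of the kind of WSGI application App Engine's Python standard environment runs. The handler name `app` and the `app.yaml` fragment in the comment are illustrative assumptions for this sketch, not details taken from this text.

```python
# A minimal WSGI application of the kind App Engine's Python standard
# environment serves. The accompanying app.yaml (shown as a comment) is
# the deployment descriptor; App Engine provisions servers and scales
# instances automatically, as described above.
#
# app.yaml:
#   runtime: python312

def app(environ, start_response):
    """Answer every request with a plain-text greeting."""
    body = b"Hello from App Engine"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

With the SDK installed, deployment is then a single `gcloud app deploy` from the directory holding `app.yaml`; no server provisioning is done by the developer.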
5.6 Programming Environment for Google App Engine
Google provides programming support for its cloud environment, that is, Google App Engine, through Google File System (GFS), BigTable, and Chubby. The following sections provide a brief description of GFS, BigTable, Chubby and Google APIs.
5.6.1 The Google File System (GFS)
Google has designed a distributed file system, named GFS, for meeting its exacting demands of processing a large amount of data. Most of the objectives of designing the GFS are similar to those of earlier distributed file systems. Some of the objectives include availability, performance, reliability, and scalability of systems. GFS has also been designed with certain challenging assumptions that also provide opportunities for developers and researchers to achieve these objectives. Some of the assumptions are listed as follows :
a) Automatic recovery from component failure on a routine basis.
b) Efficient storage support for large-sized files, as a huge amount of data to be processed is stored in these files. Storage support is provided for small-sized files without requiring any optimization for them.
c) With workloads that mainly consist of two kinds of reads, namely large streaming reads and small random reads, the system should be performance conscious so that the small reads are made steady rather than going back and forth, by batching and sorting while advancing through the file.
d) The system supports small writes without being inefficient, along with the usual large and sequential writes through which data is appended to files.
e) Semantics that are well defined are implemented.
f) Atomicity is maintained with the least overhead due to synchronization.
g) Provision for sustained bandwidth is given priority over reduced latency.
Google takes the aforementioned assumptions into consideration and supports its cloud platform, Google App Engine, through GFS. Fig. 5.6.1 shows the architecture of the GFS clusters.
[Figure: GFS clients exchange control flow with the single GFS Master, while data chunks move directly between the clients and the chunk servers]
Fig. 5.6.1 : Architecture of GFS clusters
GFS provides a file system interface and different APIs for supporting different file operations such as create to create a new file instance, delete to delete a file instance, open to open a named file and return a handle, close to close a given file specified by a handle, read to read data from a specified file and write to write data to a specified file.
It can be seen from Fig. 5.6.1 that a single GFS Master and three chunk servers serving two clients comprise a GFS cluster. These clients and servers, as well as the Master, are Linux machines, each running a server process at the user level. These processes are known as user-level server processes.
In GFS, the metadata is managed by the GFS Master, which takes care of all the communication between the clients and the chunk servers. Chunks are small blocks of data that are created from the system files. Their usual size is 64 MB. The clients interact directly with chunk servers for transferring chunks of data. For better reliability, these chunks are replicated across three machines so that whenever the data is required, it can be obtained in its complete form from at least one machine. By default, GFS stores three replicas of the chunks of data. However, users can designate any level of replication. Chunks are created by dividing the files into fixed-sized blocks. A unique immutable handle (of 64 bits) is assigned to each chunk by the GFS Master at the time of its creation. The data in the chunks, the selection of which is specified by the unique handles, is read or written on local disks by the chunk servers. GFS has all the familiar system interfaces. It also has additional interfaces in the form of snapshot and append operations. These two features are responsible for creating a copy of a file or folder structure at low cost and for permitting a guaranteed atomic data-append operation to be performed by multiple clients on the same file concurrently.
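A back-of-the-envelope sketch of this chunking scheme follows. The 64 MB chunk size and three-way replication are from the text; the function name and the use of random 64-bit values as stand-ins for the Master's immutable handle assignment are our assumptions.

```python
import secrets

CHUNK_SIZE = 64 * 1024 * 1024   # 64 MB fixed-size chunks, as in the text
REPLICAS = 3                    # default replication level

def plan_chunks(file_size):
    """Return (handle, chunk_index) pairs for a file of file_size bytes.

    Each handle is a random 64-bit value standing in for the immutable
    handle the GFS Master assigns at chunk-creation time.
    """
    count = max(1, -(-file_size // CHUNK_SIZE))   # ceiling division
    return [(secrets.randbits(64), i) for i in range(count)]

# A 200 MB file divides into ceil(200/64) = 4 chunks, and with the
# default replication each chunk lives on 3 chunk servers.
```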
Applications contain a specific file system Application Programming Interface (API) that is executed by the code written for the GFS client. Further, communication with the GFS Master and chunk servers is established for performing the read and write operations on behalf of the application. The clients interact with the Master only for metadata operations. However, data-bearing communications are forwarded directly to chunk servers. The POSIX API, a feature that is common to most of the popular file systems, is not included in GFS, and therefore, a Linux vnode layer hook-in is not required. Clients and servers do not perform caching of file data. Due to the presence of streamed workloads, caching does not benefit clients, whereas caching by servers has the least consequence, as a buffer cache already maintains a record for frequently requested files locally.
The GFS provides the following features :
* Large-scale data processing and storage support
* Normal treatment for components that stop responding
* Optimization for large-sized files (mostly appended concurrently and read sequentially)
* Fault tolerance by constant monitoring, data replication, and automatic recovery
* Data corruption detection at the disk or Integrated Drive Electronics (IDE) subsystem level through the checksum method
* High throughput for concurrent readers and writers
* Simple design of the Master that is centralized and not a bottleneck
GFS relies on its lean design rather than caching for the performance and scalability of the file system, and provides logging for debugging and performance analysis.
5.6.2 Big Table
Google's Big Table is a distributed storage system that allows storing huge volumes of structured as well as unstructured data on storage mediums. Google created Big Table with the aim of developing a fast, reliable, efficient and scalable storage system that can process concurrent requests at a high speed. Millions of users access billions of web pages and many hundred TBs of satellite images. A lot of semi-structured data is generated from Google or web access by users. This data needs to be stored, managed, and processed to retrieve insights. This requires data management systems to have very high scalability.
Google's aim behind developing Big Table was to provide a highly efficient system for managing a huge amount of data so that it can help cloud storage services. It is required for concurrent processes that can update various data pieces so that the most recent data can be accessed easily at a fast speed. The design requirements of Big Table are as follows :
1. High speed
2. Reliability
3. Scalability
4. Efficiency
5. High performance
6. Examination of changes that take place in data over a period of time.
Big Table is a popular, distributed data storage system that is highly scalable and self-managed. It involves thousands of servers, terabytes of data storage for in-memory operations, millions of read/write requests by users in a second and petabytes of data stored on disks. Its self-managing services help in the dynamic addition and removal of servers that are capable of adjusting the load imbalance by themselves.
It has gained extreme popularity at Google as it stores almost all kinds of data, such as web indexes, personalized searches, Google Earth, Google Analytics, and Google Finance. The data it contains from the Web is referred to as a Web table. The generalized architecture of Big table is shown in Fig. 5.6.2.
[Figure: clients submit query execution requests to the Big table master, which performs control and data operations and tablet allocation on the tablet servers; the tablet servers in turn provide row access to clients]
Fig. 5.6.2 : Generalized architecture of Big table
It is composed of three entities, namely Client, Big table master and Tablet servers. Big tables are implemented over one or more clusters that are similar to GFS clusters. The client application uses libraries to execute Big table queries on the master server. A Big table is initially broken up into one or more slave servers called tablets for the execution of secondary tasks. Each tablet is 100 to 200 MB in size.
The master server is responsible for allocating tablets to tasks, clearing garbage collections and monitoring the performance of tablet servers. The master server splits tasks and executes them over tablet servers. The master server is also responsible for maintaining a centralized view of the system to support optimal placement and load-balancing decisions. It performs separate control and data operations strictly with tablet servers. Upon granting the tasks, tablet servers provide row access to clients. Fig. 5.6.3 shows the structure of Big table :
[Figure: a Big table is organized as rows and columns, with the columns grouped into column families]
Fig. 5.6.3 : Structure of Big table
Big Table is arranged as a sorted map that is spread in multiple dimensions and is sparse, distributed, and persistent. The Big Table data model primarily combines three dimensions, namely row, column, and timestamp. The first two dimensions are string types, whereas the time dimension is taken as a 64-bit integer. The value resulting from the combination of these dimensions is a string type.
Each row in Big table has an associated row key that is an arbitrary string of up to 64 KB in size. In Big Table, a row name is a string, and the rows are ordered in lexicographic order. Although Big Table rows do not support the relational model, they offer atomic access to the data, which means you can access only one record at a time. The rows contain a large amount of data about a given entity such as a web page. The row keys represent URLs that contain information about the resources that are referenced by the URLs.
The naming conventions that are used for columns are more structured than those of rows. Columns are organized into a number of column families that logically group data of the same type under one family. Individual columns are designated by qualifiers within families. In other words, a given column is referred to using the syntax column_family:optional_qualifier, where column_family is a printable string and qualifier is an arbitrary string. It is necessary to provide a name to the first level, which is known as the column family, but it is not mandatory to give a name to a qualifier. The column family contains information about the data type and is actually the unit of access control.
Qualifiers are used for assigning columns in each row. The number of columns that can be assigned in a row is not restricted.
The other important dimension that is assigned to Big Table is the timestamp. In Big table, the multiple versions of data in a given cell are indexed by timestamp. The timestamp is either related to real time or can be an arbitrary value that is assigned by a programmer. It is used for storing various data versions in a cell. By default, any new data that is inserted into Big Table is taken as current, but you can explicitly set the timestamp for any new write operation in Big Table. Timestamps provide the Big Table lookup option that returns the specified number of the most recent values. They can be used for marking the attributes of the column families. The attributes either retain a specified number of the most recent values or keep the values for a particular time duration.
Big Table supports APIs that can be used by developers to perform a wide range of operations such as metadata operations, read/write operations, or modify/update operations. The commonly used API operations are as follows :
* Creation and deletion of tables
* Creation and deletion of column families within tables
* Writing or deleting cell values
* Accessing data from rows
* Associating metadata such as access control information with tables and column families
The functions that are used for atomic write operations are as follows :
* Set() is used for writing cells in a row.
* DeleteCells() is used for deleting cells from a row.
* DeleteRow() is used for deleting the entire row, i.e., all the cells in a row are deleted.
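As a toy illustration, the (row, column family:qualifier, timestamp) map and the three atomic write functions can be sketched in memory. The storage layout and method bodies are our own; only the dimensions and the Set()/DeleteCells()/DeleteRow() names (mirrored here as set/delete_cells/delete_row) come from the text.

```python
import time

# A toy in-memory sketch of the Big Table data model: a map from
# (row key, "family:qualifier") to {timestamp: value}, with lookups
# returning the most recent versions first, as described above.
class ToyBigTable:
    def __init__(self):
        self.cells = {}   # (row, column) -> {timestamp: value}

    def set(self, row, column, value, timestamp=None):
        # New data is taken as current unless a timestamp is given.
        ts = timestamp if timestamp is not None else time.time_ns()
        self.cells.setdefault((row, column), {})[ts] = value

    def lookup(self, row, column, n=1):
        """Return the n most recent versions of a cell, newest first."""
        versions = self.cells.get((row, column), {})
        return [versions[t] for t in sorted(versions, reverse=True)[:n]]

    def delete_cells(self, row, column):
        self.cells.pop((row, column), None)

    def delete_row(self, row):
        for key in [k for k in self.cells if k[0] == row]:
            del self.cells[key]
```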
It is clear that Big Table is a highly reliable, efficient, and fast system that can be used for storing different types of semi-structured or unstructured data by users.
5.6.3 Chubby
Chubby is a crucial service in the Google infrastructure that offers storage and coordination for other infrastructure services such as GFS and Bigtable. It is a coarse-grained distributed locking service that is used for synchronizing distributed activities in an asynchronous environment on a large scale. It is used as a name service within Google and provides reliable storage for file systems along with the election of a coordinator for multiple replicas. The Chubby interface is similar to the interfaces that are provided by distributed systems with advisory locks. However, the aim of designing Chubby is to provide reliable storage with consistent availability. It is designed for use with loosely coupled distributed systems that are connected over a high-speed network and contain several small-sized machines. The lock service enables the synchronization of the activities of clients and permits the clients to reach a consensus about the environment in which they are placed. Chubby's main aim is to efficiently handle a large set of clients by providing them a highly reliable and available system. Its other important characteristics, such as throughput and storage capacity, are secondary. Fig. 5.6.4 shows the typical structure of a Chubby system :
[Figure: client processes link the Chubby client library and communicate with a Chubby cell of five servers; each server holds a local database with logs and snapshots, and one replica acts as the current master]
Fig. 5.6.4 : Structure of a Chubby system
The Chubby architecture involves two primary components, namely the server and the client library. Both components communicate through Remote Procedure Calls (RPCs). However, the library has a special purpose, i.e., linking the clients against the Chubby cell.
A Chubby cell contains a small set of servers. The servers are also called replicas, and usually, five servers are used in every cell. The Master is elected from the five replicas through a distributed consensus protocol. Most of the replicas must vote for the Master, with the assurance that no other Master will be elected by replicas that have once voted for one Master for a duration. This duration is termed a Master lease.
Chubby supports a file system similar to that of UNIX. However, the Chubby file system is simpler. The files and directories, known as nodes, are contained in the Chubby namespace. Each node is associated with different types of metadata. Nodes are opened to obtain UNIX-like file descriptors known as handles. The specifiers for handles include check digits for preventing clients from guessing handles, handle sequence numbers, and mode information for recreating the lock state when the Master changes.
Reader and writer locks are implemented by Chubby using files and directories. While exclusive permission for a lock in the writer mode can be obtained by a single client, there can be any number of clients who share a lock in the reader mode. The nature of locks is advisory, and a conflict occurs only when the same lock is requested again for acquisition. The distributed locking mode is complex; on the one hand, its use is costly, and on the other hand, it only permits numbering the interactions that already use locks. The status of locks after they are acquired can be described using specific descriptor strings called sequencers. The sequencers are requested by lock holders and passed by clients to servers in order to proceed with protection.
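The advisory reader/writer locks and sequencer strings described above can be sketched as a toy state machine. The class, the conflict rules' encoding and the sequencer format (lock name, mode, generation) are our illustrative assumptions, not Chubby's actual implementation.

```python
# A toy sketch of Chubby-style advisory locks: one writer excludes all
# others, any number of readers may share, and a successful acquire
# returns a sequencer string a server can later validate.
class ToyLock:
    def __init__(self, name):
        self.name = name
        self.mode = None       # None, "reader", or "writer"
        self.holders = set()
        self.generation = 0    # bumped at the start of each lock epoch

    def acquire(self, client, mode):
        """Advisory acquire; returns a sequencer or None on conflict."""
        if mode == "writer" and self.holders:
            return None                      # writers need exclusivity
        if mode == "reader" and self.mode == "writer":
            return None                      # readers conflict with a writer
        if not self.holders:
            self.generation += 1             # a fresh epoch begins
        self.mode = mode
        self.holders.add(client)
        return f"{self.name}:{mode}:{self.generation}"

    def release(self, client):
        self.holders.discard(client)
        if not self.holders:
            self.mode = None

    def check(self, sequencer):
        """A server validates a client-passed sequencer before acting."""
        return sequencer == f"{self.name}:{self.mode}:{self.generation}"
```

A stale sequencer (from a lock epoch that has since ended) fails the check, which is exactly the protection the text describes.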
Another important term that is used with Chubby is an event, which can be subscribed to by clients after the creation of handles. An event is delivered when the action that corresponds to it is completed. An event can be :
a. Modification in the contents of a file
b. Addition, removal, or modification of a child node
c. Failing over of the Chubby Master
d. Invalidity of a handle
e. Acquisition of a lock by others
f. Request for a conflicting lock from another client
In Chubby, caching is done by a client that stores file data and metadata to reduce the traffic for the reader lock. Although there is a possibility of caching handles and file locks, the Master maintains a list of clients that may be caching. The clients, due to caching, find data to be consistent. If this is not the case, an error is flagged. Chubby maintains sessions between clients and servers with the help of a keep-alive message, which is required every few seconds to remind the system that the session is still active. Handles that are held by clients are released by the server in case the session is overdue for any reason. If the Master responds late to a keep-alive message, as may happen at times, a client has its own timeout (which is longer than the server timeout) for the detection of server failure.
If the server failure has indeed occurred, the Master does not respond to a client about the keep-alive message within the local lease timeout. This incident puts the session in jeopardy. It can be recovered in the manner explained in the following points :
* The cache needs to be cleared.
* The client needs to wait for a grace period, which is about 45 seconds.
* Another attempt is made to contact the Master.
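The client-side timeline can be sketched as a small state classifier. The 45-second grace period and the 12-60 second lease timeout range come from this text; the function shape and the choice of 12 s as the baseline lease are our illustrative assumptions.

```python
# A toy classification of a Chubby client session: silence within the
# local lease timeout is fine; beyond it the session is in jeopardy and
# the client clears its cache and retries the Master for a grace period;
# if the Master is still unreachable after that, the session is lost.
LEASE_TIMEOUT = 12.0    # seconds (the text says 12-60 s under load)
GRACE_PERIOD = 45.0     # seconds, per the text

def session_state(last_keepalive, now, master_reachable):
    """Classify a session as 'active', 'jeopardy', or 'lost'."""
    silence = now - last_keepalive
    if silence <= LEASE_TIMEOUT:
        return "active"
    if master_reachable:
        return "active"        # contact succeeded, session resumes
    if silence <= LEASE_TIMEOUT + GRACE_PERIOD:
        return "jeopardy"      # still inside the grace period
    return "lost"
```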
If the attempt to contact the Master is successful, the session resumes and its jeopardy is over. However, if this attempt fails, the client assumes that the session is lost. Fig. 5.6.5
shows the case of the failure of the Master :
Old master dies | No master al master elected
peeeenenese weer seen ,
r 1 LeaseM2) fT Hl
Lease M1 H Lease M3 ‘New master
!
Client
'
Jeopardy Safe
Fig. 5.6.5 : Case of failure of Master server
Chubby offers a decent level of scalability, which means that there can be any (unspecified) number of Chubby cells. If these cells are fed with heavy loads, the lease timeout increases. This increment can be anything between 12 seconds and 60 seconds. The data is fed in small packages and held in Random-Access Memory (RAM) only. The Chubby system also uses partitioning mechanisms to divide data into smaller packages. All of its excellent services and applications included, Chubby has proved to be a great innovation when it comes to storage, locking, and program support services.
Chubby is implemented using the following APIs :
1. Creation of handles using the open() method
2. Destruction of handles using the close() method
The other important methods include GetContentsAndStat(), GetStat(), ReadDir(), SetContents(), SetACL(), Delete(), Acquire(), TryAcquire(), Release(), GetSequencer(), SetSequencer(), and CheckSequencer(). The commonly used APIs in Chubby are listed in Table 5.6.1 :
Table 5.6.1 : APIs in Chubby
5.6.4 Google APIs
Google developed a set of Application Programming Interfaces (APIs) that can be used to communicate with Google services. This set of APIs is referred to as the Google APIs. They also help in integrating Google services with other services. Google App Engine helps in deploying an API for an app without the developer being aware of its infrastructure. Google App Engine also hosts the endpoint APIs created by Google Cloud Endpoints. Google Cloud Endpoints is a set of libraries, tools, and capabilities that can be used to generate client libraries and APIs from an App Engine application. It eases data accessibility for client applications. We can also save the time of writing network communication code by using Google Cloud Endpoints, which can also generate client libraries for accessing the backend API.

5.3 MapReduce
MapReduce is a programming model, provided by Hadoop, that allows expressing distributed computations on huge amounts of data. It provides easy scaling of data processing over multiple computational nodes or clusters. In the MapReduce model, the data processing primitives used are called the mapper and the reducer. Every MapReduce program must have at least one mapper and one reducer subroutine. The mapper has a map method that transforms an input key-value pair into any number of intermediate key-value pairs, while the reducer has a reduce method that transforms the intermediate key-value pairs, aggregated by key, into any number of output key-value pairs.
MapReduce keeps all processing operations separate for parallel execution, where a complex problem of extremely large size is decomposed into subtasks. These subtasks are executed independently of each other. After that, the results of all independent executions are combined together to get the complete output.
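The mapper/reducer pair described above can be written as two small generator functions, shown here for word counting; the function names are ours, and real Hadoop jobs would express the same shape through its Java API or Hadoop Streaming.

```python
# A minimal mapper and reducer in the shape described above: the map
# method turns one input (key, value) pair into intermediate pairs, and
# the reduce method aggregates all values for one intermediate key.
def map_fn(key, value):
    """Emit an intermediate (word, 1) pair for every word in the line."""
    for word in value.split():
        yield (word, 1)

def reduce_fn(key, values):
    """Aggregate all intermediate values for one key into an output pair."""
    yield (key, sum(values))
```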
Features of MapReduce
The different features provided by MapReduce are explained as follows :
* Synchronization : MapReduce supports the execution of concurrent tasks. When concurrent tasks are executed, they need synchronization. The synchronization is provided by reading the state of each MapReduce operation during the execution and by using shared variables for it.
* Data locality : Although the data resides on different clusters, it appears local to the users' application. To obtain the best result, the code and data of the application should reside on the same machine.
* Fault tolerance : The MapReduce engine provides different fault tolerance mechanisms in case of failure. When tasks are running on different cluster nodes and any failure occurs, the MapReduce engine finds those incomplete tasks and reschedules them for execution on different nodes.
* Scheduling : MapReduce involves map and reduce operations that divide large problems into smaller chunks, which are run in parallel by different machines, so there is a need to schedule different tasks on computational nodes on a priority basis, which is taken care of by the MapReduce engine.
5.3.1 Working of MapReduce Framework
The unit of work in MapReduce is a job. During the map phase, the input data is divided into input splits for analysis, where each split is an independent task. These tasks run in parallel across Hadoop clusters. The reducer phase uses the result obtained from the mapper as an input to generate the final result.
MapReduce takes a set of input <key, value> pairs and produces a set of output <key, value> pairs by supplying data through the map and reduce functions. The typical MapReduce operations are shown in Fig. 5.3.1.
[Figure: the input is split into [k1, v1] pairs, passed through map, then sorted by k1 and merged into (k1, [v1, v2, v3, ...]) lists for reduce]
Fig. 5.3.1 : MapReduce operations
Every MapReduce program undergoes different phases of execution. Each phase has its own significance in the MapReduce framework. The different phases of execution in MapReduce are shown in Fig. 5.3.2 and explained as follows.
[Figure: a file loaded from HDFS flows through the input, split, map, combine, shuffle and sort, and reduce phases before the output is written back to HDFS]
Fig. 5.3.2 : Different phases of execution in MapReduce
In the input phase, the large data set in the form of <key, value> pairs is provided as standard input for the MapReduce program. The input files used by MapReduce are kept on the HDFS (Hadoop Distributed File System) store, which has a standard InputFormat specified by the user.
Once the input file is selected, the split phase reads the input data and divides it into smaller chunks. The split chunks are then given to the mapper. The map operations extract the relevant data and generate intermediate key-value pairs. The mapper reads input data from the split using a record reader and generates intermediate results. It is used to transform the input key-value list into an output key-value list, which is then passed to the combiner.
The combiner is used with both the mapper and the reducer to reduce the volume of data transfer. It is also known as a semi-reducer, which accepts input from the mapper and passes the output key-value pairs to the reducer. The shuffle and sort are components of the reducer. Shuffling is the process of partitioning and moving the mapped output to the reducers, where intermediate keys are assigned to a reducer. Each partition is called a subset, and each subset becomes input to a reducer. In general, the shuffle phase ensures that the partitioned splits reach the appropriate reducers, where each reducer uses the HTTP protocol to retrieve its own partition from the mapper.
The sort phase is responsible for sorting the intermediate keys on a single node before they are presented to the reducer. The shuffle and sort phases occur simultaneously. Finally, the output of the MapReduce program is generated as key-value pairs written to an output file, which is written back to the HDFS store. An example of the word count process using MapReduce, with all phases of execution, is illustrated in Fig. 5.3.3.
[Figure: an input of three lines such as "Deer Bear River" and "Car Car River" flows through the splitting, mapping, shuffling, and reducing stages to produce the final word counts]
Fig. 5.3.3 : Word count process using MapReduce
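The whole split, map, shuffle/sort, reduce pipeline of the word count example can be sketched in plain Python; here `itertools.groupby` over a sorted list plays the shuffle-and-sort role that Hadoop performs between the phases, and the three-line input is the classic word count example (the exact figure input is our assumption).

```python
from itertools import groupby
from operator import itemgetter

def word_count(lines):
    """Run the MapReduce word count phases sequentially in one process."""
    # Map phase: each split (line) yields intermediate (word, 1) pairs.
    intermediate = [(w, 1) for line in lines for w in line.split()]
    # Shuffle and sort: order by key so all pairs sharing a word are
    # grouped together, as if routed to one reducer.
    intermediate.sort(key=itemgetter(0))
    # Reduce phase: aggregate each group into a final (word, count) pair.
    return {k: sum(v for _, v in grp)
            for k, grp in groupby(intermediate, key=itemgetter(0))}

# word_count(["Deer Bear River", "Car Car River", "Deer Car Bear"])
# -> {'Bear': 2, 'Car': 3, 'Deer': 2, 'River': 2}
```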
5.4 Virtual Box
VirtualBox (formerly Sun VirtualBox and presently called Oracle VM VirtualBox) is an x86 virtualization software package, created by the software company Innotek GmbH, purchased by Sun Microsystems, and now taken over by Oracle Corporation as part of its family of virtualization products. It is cross-platform virtualization software that allows users to extend their existing computer to run multiple operating systems at the same time. VirtualBox runs on Microsoft Windows, Mac OS, Linux, and Solaris systems. It is ideal for testing, developing, demonstrating, and deploying solutions across multiple platforms on a single machine.
It is a Type 2 (hosted) hypervisor that can be installed on an existing host operating system as an application. This hosted application allows running additional operating systems inside it, each known as a Guest OS. Each guest OS can be loaded and run within its own virtual environment. VirtualBox allows you to run guest operating systems using its own
TECHNICAL PUBLICATIONS® - An up thrust for knowledgeCloud Technologies and Advancemey
a
5-12
Cloud Computing,
virtual hardware. Each instance of a guest OS is called a "virtual machine". The functional architecture of the VirtualBox hypervisor is shown in Fig. 5.4.1.
[Figure: guest operating systems run as virtual machines on top of the VirtualBox hypervisor, which itself runs as an application on the host operating system]
Fig. 5.4.1 : Functional architecture of VirtualBox hypervisor
It has a lightweight, extremely fast and powerful virtualization engine. The guest system will run in its VM environment just as if it were installed on a real computer, with the resources you have specified. All software that you choose to run in the guest system will operate just as it would on a physical computer. Each VM runs over its own independent virtualized hardware.
The latest version of VirtualBox simplifies cloud deployment by allowing developers to create multiplatform environments and to develop applications for container and virtualization technologies within Oracle VM VirtualBox on a single machine. VirtualBox also supports VMware virtual machine disk (.vmdk) and Virtual hard disk (.vhd) images made using VMware Workstation or Microsoft Virtual PC; thus it can flawlessly run and integrate guest machines which were configured via VMware Workstation or other hypervisors.
The VirtualBox provides the following main features :
* It supports a fully paravirtualized environment along with hardware virtualization.
* It provides device drivers from its driver stack, which improve the performance of virtualized input/output devices.
* It provides shared folder support to copy data from the host OS to a guest OS and vice versa.
* It has the latest virtual USB controller support.
* It facilitates a broad range of virtual network drivers along with host and bridge network support.
* It supports Remote Desktop Protocol to connect to a Windows virtual machine (guest OS) remotely on a thin, thick or mobile client seamlessly.
* It has support for virtual disk formats which are used by both VMware and Microsoft Virtual PC hypervisors.