Cloud Computing Unit-2

The document discusses virtualization concepts and technologies used in cloud computing. It describes different types of virtualization including hardware, operating system, server, storage, application, and network virtualization. It also covers benefits, characteristics, and differences between file level and block level storage.

Cloud Computing

UNIT-2
Virtualization Concepts and Technologies
Virtualization in Cloud Computing
• Virtualization is a technique for separating a service from the underlying
physical delivery of that service.
• It is a cost-effective, hardware-reducing, and energy-saving technique used by
cloud providers.
• It allows sharing of a single physical instance of a resource or an application
among multiple customers and organizations at one time.
• Virtualization is the "creation of a virtual (rather than actual) version of
something, such as a server, a desktop, a storage device, an operating system or
network resources".
• In other words, virtualization is a technique that allows a single
physical instance of a resource or an application to be shared among multiple customers and
organizations. It does this by assigning a logical name to a physical resource and
providing a pointer to that physical resource on demand.
• The machine on which the virtual machine is created is known
as the Host Machine, and the virtual machine itself is referred to as the Guest Machine.
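The logical-name indirection just described can be sketched in Python. This is a toy model for illustration only; the class and names are hypothetical, not any real product's API:

```python
class VirtualizationLayer:
    """Toy model: maps logical resource names to physical resources."""

    def __init__(self):
        self._mapping = {}  # logical name -> physical resource identifier

    def assign(self, logical_name, physical_resource):
        # Assign a logical name to a physical resource.
        self._mapping[logical_name] = physical_resource

    def resolve(self, logical_name):
        # Provide a pointer to the physical resource on demand.
        return self._mapping[logical_name]

# Two customers share the same physical disk under different logical names.
layer = VirtualizationLayer()
layer.assign("customer-a:/data", "/dev/physical_disk0")
layer.assign("customer-b:/data", "/dev/physical_disk0")
print(layer.resolve("customer-a:/data"))  # /dev/physical_disk0
```

Both customers see their own logical name, yet both resolve to the same physical instance — the essence of sharing one resource among many tenants.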
Benefits of Virtualization
• More flexible and efficient allocation of resources.
• Enhance development productivity.
• It lowers the cost of IT infrastructure.
• Remote access and rapid scalability.
• High availability and disaster recovery.
• Pay-per-use of the IT infrastructure on demand.
• Enables running multiple operating systems.
Types of Virtualization:
1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.
5. Application Virtualization
6. Network Virtualization
• 1) Hardware Virtualization:
• When the virtual machine software or virtual machine manager (VMM) is
installed directly on the hardware system, it is known as hardware virtualization.
• The main job of the hypervisor is to control and monitor the processor,
memory, and other hardware resources.
• After virtualizing the hardware system, we can install different operating
systems on it and run different applications on those operating systems.
• Usage:
• Hardware virtualization is mainly done for server platforms, because
controlling virtual machines is much easier than controlling a physical server.
2) Operating System Virtualization:
• When the virtual machine software or virtual machine manager (VMM) is installed
on the host operating system instead of directly on the hardware system, it is known as
operating system virtualization.
• Usage:
• Operating System Virtualization is mainly used for testing the applications on
different platforms of OS.
3) Server Virtualization:
• When the virtual machine software or virtual machine manager (VMM) is installed
directly on the server system, it is known as server virtualization.
• Usage:
• Server virtualization is done because a single physical server can be divided into
multiple servers on demand and for load balancing.
4) Storage Virtualization:
Storage virtualization provides a way for many users or applications to access storage without being
concerned with where or how that storage is physically located or managed.
• It is the process of grouping the physical storage from multiple network storage devices so that it looks
like a single storage device.
• Storage virtualization is also implemented by using software applications.
For example, a single large disk may be partitioned into smaller, logical disks that each user can
access as though they were a single network drive or a number of disks may be aggregated to
present a single storage interface to end-users and applications.
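Both directions in the example above — partitioning one large disk into smaller logical disks, and aggregating many disks behind one interface — can be sketched with a toy model (the function names and sizes are illustrative):

```python
def partition(disk_size_gb, part_size_gb):
    """Split one large physical disk into equal smaller logical disks."""
    return [part_size_gb] * (disk_size_gb // part_size_gb)

def aggregate(disk_sizes_gb):
    """Present many physical disks as one logical storage pool."""
    return sum(disk_sizes_gb)

print(partition(1000, 250))         # four 250 GB logical disks
print(aggregate([500, 500, 1000]))  # one 2000 GB pool
```

Either way, the user sees a logical view that hides how the physical storage is laid out.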
Advantages :
Provides very fast and reliable storage for computing and data processing
• Advanced data protection features
• Allows the administrator of the storage system greater flexibility in how they manage storage for
end-users
• Provides opportunities to optimize storage use and server consolidation and to perform non-
disruptive file migration
• Allows each virtual server to run its own operating system, and each virtual server can be
rebooted independently of one another
• Reduces cost because less hardware is required
• Utilizes resources to save operational costs (e.g. using a lower number of physical servers reduces
hardware maintenance)
• Usage:
• Storage virtualization is mainly done for back-up and recovery purposes.
File Level Storage Vs Block Level Storage
• File storage and block storage are two of the most common and popular ways to store and access data in on
premises virtual and cloud servers.
File Level Storage:
File level storage is used for unstructured data.
It is commonly deployed in Network Attached Storage(NAS) systems.
It is a network attached file level data storage.
You can store, access and share data in the network attached storage appliance via a configured network using NAS
storage protocols: NFS/CIFS and SMB.
The network attached storage device is the best option for simplified and effective storage of unstructured big data.
It is used to store videos, files, backups, snapshots, emails etc.
These systems are less costly than block storage.
It is dually scalable (scale up or scale out).
Use Cases:
1. Shared storage location for multiple user groups, departments and teams accessible via local area network
2. Local storage systems for data archiving, retention and compliance.
3. Target storage for databases, applications, physical and virtual servers, backup software etc.
Block Level Storage:
• Block level storage is used for structured data.
• It is commonly deployed in Storage Area Network systems(SAN).
• It uses Internet Small Computer Systems Interface(iSCSI) and Fibre Channel(FC) protocols.
• The data is stored without any metadata e.g. data format, type, ownership etc.
• It uses blocks, which are a set sequence of bytes, to store structured workloads.
• Each block is assigned a unique hash value which functions as an address.
• Because it is built to facilitate larger workloads and enhance input/output operations per second
(IOPS), block storage systems tend to be more expensive than file storage systems.
• Block storage appliances can scale up.
• Use Cases:
• Email servers with applications such as Microsoft Exchange
• Storage for industry standard hypervisors such as VMware, Microsoft Hyper-V, KVM, Citrix(formerly
XenServer)
• Storage for structured workloads such as Oracle, MySQL, NoSQL databases and applications like SAP etc.
• Block storage with RAID (Redundant Array of Inexpensive Disks) improves fault tolerance,
enhances performance, and ensures high availability (HA).
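The hash-addressed blocks described above can be sketched as follows. This is a toy block store following the description in the bullets; note that real SAN arrays usually address blocks by logical block address, and hash addressing as shown here is closer to content-addressed storage. All names and the block size are illustrative:

```python
import hashlib

BLOCK_SIZE = 4  # bytes; real systems use e.g. 512 B or 4 KiB blocks

def split_into_blocks(data, block_size=BLOCK_SIZE):
    """Split raw bytes into fixed-size blocks (no metadata is kept)."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def store(blocks):
    """Address each block by the hash of its contents."""
    return {hashlib.sha256(b).hexdigest(): b for b in blocks}

blocks = split_into_blocks(b"structured workload")
store_map = store(blocks)

# Reading back: look a block up by its hash address.
addr = hashlib.sha256(blocks[0]).hexdigest()
print(store_map[addr])  # b'stru'
```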
SAN(Storage Area Network) Vs NAS(Network Attached Storage):
• The SAN approach deploys specialized hardware and software to
transform disk drives into a data storage solution.
• It transfers data on its own high-performance network.
• Corporate data must be available 24/7 and needs to be conveniently
managed.
• The price of this approach is very high.
• It offers scalability and reduced downtime.
• It is not directly attached to any server or network, which makes
sharing possible.
• Provides long-distance connectivity with Fibre Channel.
• It is truly versatile.
Network Attached Storage(NAS)
• NAS is the first step of storage virtualization.
• It provides a single source of data and facilitates data backup.
• It avoids the problem of having to access multiple servers for data
located in different locations.
• Users can share files across different types of machines and operating
systems.
• A lower administration overhead is required.
• It is a centralized storage.
• It is easy and cheaper to maintain, administer and backup.
• A fast response time for users, though slower than a local disk.
• It is a shared storage, so it is vulnerable.
• Heavy use of NAS can block up the shared LAN.
5) Application Virtualization: Application virtualization gives a user
remote access to an application from a server. The server stores all
personal information and other characteristics of the application, but the
application can still run on a local workstation through the internet.
• An example of this would be a user who needs to run two different
versions of the same software. Technologies that use application
virtualization are hosted applications and packaged applications.
• Network Virtualization: The ability to run multiple virtual networks, each with a separate
control and data plane. They co-exist on top of one physical network and can be managed by
individual parties that are potentially confidential to each other. Network virtualization provides a
facility to create and provision virtual networks, logical switches, routers, firewalls, load
balancers, Virtual Private Networks (VPN), and workload security within days or even weeks.
• Saves money by reducing hardware costs
• Reduces the overall electricity consumption
• Confers the ability to quickly recover from hardware failure
• Automatically and instantaneously makes the transfer from failing host to another host so
that the downtime is eliminated
• Enables full disaster recovery
Characteristics of Virtualization
• Increased Security: The ability to control the execution of a guest
program in a completely transparent manner opens new possibilities for
delivering a secure, controlled execution environment. All the operations
of the guest programs are generally performed against the virtual
machine, which then translates and applies them to the host programs.
• Managed Execution: In particular, sharing, aggregation, emulation, and
isolation are the most relevant features.
• Sharing: Virtualization allows the creation of a separate computing
environment within the same host.
• Aggregation: It is possible to share physical resources among several
guests, but virtualization also allows aggregation, the opposite process:
a group of separate hosts can be tied together and represented as a single virtual host.
Elasticity and Scalability:
• Elasticity refers to the ability of a cloud to automatically expand or compress the
infrastructural resources on a sudden up and down in the requirement so that the
workload can be managed efficiently.
• This elasticity helps to minimize infrastructural costs.
• It is helpful to address only those scenarios where the resource requirements
fluctuate up and down suddenly for a specific time interval.
• It is not quite practical to use where persistent resource infrastructure is required
to handle the heavy workload.
• Example: Consider an online shopping site whose transaction workload
increases during festive seasons like Christmas. For this specific period of time,
the resources need to be scaled up. To handle this kind of situation, we can go
for a Cloud Elasticity service rather than Cloud Scalability. As soon as the season
is over, the deployed resources can be requested for withdrawal.
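The festive-season example can be reduced to a trivial elasticity rule. The thresholds and function name below are made up purely for illustration:

```python
def elastic_resize(current_instances, load_per_instance,
                   scale_up_at=80, scale_down_at=20):
    """Return the new instance count under a simple elasticity rule."""
    if load_per_instance > scale_up_at:
        return current_instances + 1      # demand spike: expand
    if load_per_instance < scale_down_at and current_instances > 1:
        return current_instances - 1      # season over: withdraw resources
    return current_instances

print(elastic_resize(4, 95))  # 5 : festive-season spike, scale out
print(elastic_resize(5, 10))  # 4 : demand drops back, scale in
```

Real autoscalers add cooldown periods and averaging so brief blips do not trigger resizes, but the expand-and-contract principle is the same.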
• Cloud Scalability: Cloud scalability is used to handle the growing
workload where good performance is also needed to work efficiently
with software or applications. Scalability is commonly used where the
persistent deployment of resources is required to handle the workload
statically.
• Example: Consider that you are the owner of a company whose database
size was small in the early days, but as time passed your business grew
and the size of your database increased. In this case you
just need to request your cloud service vendor to scale up your
database capacity to handle the heavier workload.
1. Vertical Scalability (Scale-up): In this type of scalability, we increase the power of the
existing resources in the working environment in an upward direction.
2. Horizontal Scalability (Scale-out): In this kind of scaling, the resources are added in a
horizontal row.
3. Diagonal Scalability: It is a mixture of both horizontal and vertical scalability, where the
resources are added both vertically and horizontally.
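The three scaling directions can be illustrated with a toy capacity model (arbitrary capacity units; the function names are made up):

```python
def scale_vertical(servers, extra_power):
    """Scale up: add power to each existing server."""
    return [s + extra_power for s in servers]

def scale_horizontal(servers, new_server_power, count=1):
    """Scale out: add more servers of a given power."""
    return servers + [new_server_power] * count

def scale_diagonal(servers, extra_power, new_server_power):
    """Diagonal: scale up the existing servers and scale out at once."""
    return scale_horizontal(scale_vertical(servers, extra_power),
                            new_server_power)

cluster = [10, 10]                     # two servers, 10 units each
print(scale_vertical(cluster, 5))      # [15, 15]
print(scale_horizontal(cluster, 10))   # [10, 10, 10]
print(scale_diagonal(cluster, 5, 10))  # [15, 15, 10]
```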
Difference Between Cloud Elasticity and Scalability

1. Elasticity is used just to meet a sudden up and down in the workload for a small period of
time, whereas scalability is used to meet a static increase in the workload.

2. Elasticity is used to meet dynamic changes, where the resource need can increase or
decrease, whereas scalability is always used to address an increase in workload in an
organization.

3. Elasticity is commonly used by small companies whose workload and demand increase only
for a specific period of time, whereas scalability is used by giant companies whose customer
circle grows persistently, in order to perform operations efficiently.

4. Elasticity is short-term planning, adopted just to deal with an unexpected increase in demand
or seasonal demand, whereas scalability is long-term planning, adopted to deal with an
expected increase in demand.
Data Replication
• Data replication is the process by which data residing on a physical/virtual
server(s) or cloud instance (primary instance) is continuously replicated or
copied to a secondary server(s) or cloud instance (standby instance).
Organizations replicate data to support high availability, backup, and/or
disaster recovery.
• Replication creates multiple instances of the same resource; it enables data from
one resource to be replicated to one or more other resources.
• For example, if you need to recover from a system failure, your standby instance should be on
your local area network (LAN). For critical database applications, you can then replicate data
synchronously from the primary instance across the LAN to the secondary instance. This makes
your standby instance “hot” and in sync with your active instance, so it is ready to take over
immediately in the event of a failure. This is referred to as high availability (HA).
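The synchronous-replication behaviour in the LAN example can be sketched as follows. This is a toy in-memory model with illustrative names; real products ship each change over the network and wait for the standby's acknowledgement:

```python
class Instance:
    """A toy database instance: just a named key-value store."""
    def __init__(self, name):
        self.name = name
        self.data = {}

def replicated_write(primary, standby, key, value):
    """Synchronous replication: the write is applied to the standby
    before it is acknowledged, keeping the standby 'hot' and in sync."""
    primary.data[key] = value
    standby.data[key] = value   # standby confirms before we return
    return "ack"

primary, standby = Instance("primary"), Instance("standby")
replicated_write(primary, standby, "order:1", "paid")

# Failover: the standby already holds the data and can take over at once.
assert standby.data == primary.data
```

The cost of this guarantee is that every write waits for the standby, which is why synchronous replication is typically confined to a low-latency LAN.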
Importance of Replication:

1. While a major disaster, such as a fire, flood, storm, etc., can devastate your primary
instance, your secondary instance is safe in the cloud and can be used to recover the
data and applications impacted by the disaster.
2. Cloud data replication is less expensive than replicating data to your own data center.
3. For smaller businesses, replicating data to the cloud can be more secure, especially if
you do not have security expertise on staff. Both the physical and network security
provided by cloud providers are unmatched.
4. Replicating data to the cloud provides on-demand scalability. As your business grows
or contracts, you do not need to invest in additional hardware to support your
secondary instance or have that hardware sit idle if business slows down. You also
have no long-term contracts.
5. When replicating data to the cloud, you have many geographic choices, including
having a cloud instance in the next city, across the country, or in another country as
your business dictates.
Hypervisor Management Software
• Virtualization is achieved by means of a hypervisor or virtual machine manager (VMM).
• The term hypervisor was first introduced in 1965 by IBM to refer to a software program distributed with the IBM
RPQ for the IBM 360/65.
• The basis of virtualization on a server is the hypervisor; it enables hardware to be divided into multiple logical
partitions and ensures isolation among them.
• Hypervisor takes control as soon as the system is powered on and gathers information about memory, CPU,
I/O, and other resources that are available to the system.
• Hypervisor owns and controls all the resources that are global to the system.
• The hypervisor is installed on the server and controls the guest operating system running on the host
machine.
• Its main job is to provision the needs of the guest operating system and effectively manage it such that the
instances of multiple operating systems do not interrupt one another.
• Hypervisors are directly responsible for hosting and managing virtual machines on the host or server.
• The host is another name for the physical server
• The virtual machines that run on the host are termed guest VMs or guest operating systems. The
hypervisor allows guest VMs to operate on hardware from different vendors.
• There are two types of hypervisors known as “Type 1” and “Type 2” hypervisors.
• Type 1 is a hypervisor which is installed directly on the hardware and is also called a “bare-metal”
hypervisor.
• Type 2 is a hypervisor which is installed on top of an operating system and is also called a “hosted”
hypervisor.

Virtualization Characteristics
• Virtualization is using computer resources to imitate other computer resources or an entire computer
system. It separates resources and services from the underlying physical environment. Virtualization
has three major characteristics that make it ideal for cloud computing:
• i. Partitioning: In virtualization, many operating systems and applications are supported on a single
physical system by partitioning (separating) the available resources.
• ii. Encapsulation: A virtual machine can be represented (and even stored) as a single file, so it can be
easily identified based on the service it provides. An encapsulated process could be a business service
and can be presented to an application as a complete entity. Therefore, encapsulation protects each
application so that it doesn’t interfere with another application.
• iii. Isolation: The virtual machine is isolated from its host physical system and other virtualized
machines. Because of this isolation, if one virtual instance crashes, it won't affect the other virtual
machines. Also, due to isolation, data is not shared among different virtual containers.
Virtualization Types
Two kinds of virtualization approaches are available according to virtual machine monitor (VMM).

• Hosted approach: When VMM runs on an operating system, it is installed and run as an application.
This approach relies on the host OS for device support and physical resource management.
• Bare-metal approach: In this approach, the VMM runs directly on top of the hardware. There is no
need for a host OS as a medium; the VMM is installed directly on the hardware.
VIRTUALIZATION BENEFITS
• There are number of virtualization benefits and some of these are stated below:
• i. Availability and reliability: Other virtual machines are not affected by a software failure happening
in one virtual machine.
• ii. Security: Splitting up environments with different security requirements in different virtual
machines, one can select the guest operating
system and the tools that are more apt for each environment. A security attack on one virtual machine
does not compromise the others because of their isolation.
• iii. Cost: It is possible to achieve cost reduction by consolidating smaller servers into more powerful
servers. Cost reductions can be achieved in hardware costs and in operations costs in terms of
personnel, floor space, and software licenses.
• iv. Adaptability to workload variations: Changes in workload intensity levels can be easily taken care
of by relocating resources and priority allocations among virtual machines. Autonomic
computing-based resource allocation techniques, such as dynamically moving processors from one virtual machine
to another, help in adapting to workload variations.
• v. Load balancing: It is relatively easy to migrate virtual machines to other platforms as the software
state of an entire virtual machine is completely encapsulated by the VMM. Hence this helps to improve
performance through better load balancing.
Features of Hypervisor
• The common features of hypervisor are “High Availability (HA),” “Fault Tolerance (FT),” and “Live
migration (LM).”
• The prime goal of High Availability is to minimize the impact of downtime and to continuously monitor
all virtual machines running in the virtual resource pool.
• The virtual resource pool is a set of resources or physical servers which run virtual machines (VM).
When a physical server fails the VM is automatically restarted on another server.
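The HA behaviour described — restart a failed server's VMs on the surviving servers in the pool — can be sketched as follows (server and VM names are hypothetical, matching the server A/B/C example below):

```python
def restart_failed_vms(pool, failed_server):
    """Restart a failed server's VMs on surviving servers, round-robin.
    pool: dict of server name -> list of VM names running on it."""
    orphans = pool.pop(failed_server, [])
    survivors = list(pool)
    for i, vm in enumerate(orphans):
        # VM images live on shared storage, so any server can restart them.
        pool[survivors[i % len(survivors)]].append(vm)
    return pool

pool = {"A": ["A1"], "B": ["B1", "B2"], "C": ["C1"]}
print(restart_failed_vms(pool, "B"))
# B1 restarts on server A and B2 on server C
```

Note this is a restart, not failover: whatever was in the failed VMs' memory at the moment of the crash is lost, which is the gap fault tolerance closes.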
Fault Tolerance
• There are three physical servers.
• When there is a failure in server B, the virtual machines B1 and B2 are restarted on server A and server
C.
• This can be done because images of the virtual machines are stored in the storage system, which the
servers are connected to.
• However, a hardware failure can lead to data loss. This problem is solved with fault tolerance (FT). With
fault tolerance it is possible to run an identical copy of the VM on another server. As a result, there will
be no data loss or downtime.
Live Migration
• Fault tolerance is used for virtual machines B1 and B2. With fault tolerance copies of B1 and B2 will be
maintained and run on a separate host or physical server in real-time. Every instruction of the primary
VM will also be executed on the secondary VM. If server B fails, B1 and B2 will continue on server A and
C without any downtime. The technology to move virtual machines across different hosts or physical
servers is called live migration. An example of live migration is when virtual
machines are migrated from one host to another.
• The reasons for live migration can be an increase in the server workload and also for server
maintenance purposes.
• As a virtual machine (VM) is hardware (configuration) independent, it is not dedicated to a single
physical server or hardware configuration .
• It can be moved from one server to another even when it is in operation.
• This makes it possible to balance capacity across servers ensuring that each virtual machine has access
to appropriate resources on time.
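Live migration for load balancing can be sketched as moving a VM from the busiest host to the least-busy one. This is a toy placement model with made-up names; a real hypervisor would also copy the VM's memory pages over while it keeps running:

```python
def migrate_for_balance(hosts):
    """Move one VM from the most-loaded host to the least-loaded host.
    hosts: dict of host name -> list of VM names."""
    busiest = max(hosts, key=lambda h: len(hosts[h]))
    idlest = min(hosts, key=lambda h: len(hosts[h]))
    if len(hosts[busiest]) - len(hosts[idlest]) > 1:
        vm = hosts[busiest].pop()
        hosts[idlest].append(vm)  # VM keeps running; only its host changes
        return vm, busiest, idlest
    return None  # already balanced, nothing to migrate

hosts = {"host1": ["vm1", "vm2", "vm3"], "host2": ["vm4"]}
print(migrate_for_balance(hosts))  # ('vm3', 'host1', 'host2')
print(hosts)
```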
Advantages of Hypervisor-based Systems

• Following are some of the advantages of Hypervisor-based system:


• i. The hypervisor controls the hardware; this capability allows hypervisor-based virtualization to
have a secure infrastructure. The hypervisor prevents unauthorized users from compromising the
hardware infrastructure, and so it acts as a firewall.
• ii. The hypervisor is implemented below the guest OS, which means that if an attack bypasses the
security systems in the guest OS, the hypervisor can detect it.
• iii. The hypervisor acts as a layer of abstraction to isolate the virtual environment from the
hardware underneath.
• iv. Hypervisor-level virtualization controls all access between the guest OSs and the
shared hardware underneath. Therefore, the hypervisor simplifies the transaction monitoring
process in the cloud environment.
Hypervisors Classification
The hypervisors that are frequently used in the market are VMware, Xen, and Microsoft Virtual Server.
Hypervisors are classified into two types:

• i. Bare-metal/native hypervisors (type-1): Type 1 hypervisors are positioned between the hardware
and the virtual machines. These are software systems that run directly on the host's hardware to
control it. A type-1 hypervisor is a type of client hypervisor that interacts directly with the hardware that
is being virtualized.

• It is independent of the OS, and boots before the operating system (OS). Currently, type-1 hypervisors
are being used by all the major professionals in the desktop virtualization space such as VMware,
Microsoft, and Citrix.
• ii. Embedded/host hypervisor (type-2): Type-2 hypervisors are software applications that run
within a conventional operating system environment. In contrast to type 1, the hypervisor is placed
above the operating system and not below the operating system or virtual machines.
• Hosted hypervisors can be used to run a different type of operating system on top of another operating
system. Considering the hypervisor layer being a distinct software layer, the guest operating system
runs at the third level above the hardware.
• A type-2 hypervisor is a type of client hypervisor that sits on top of an operating system.
• It cannot boot until the operating system is already up and running. If for any reason the operating
system crashes, all the end-users are affected. This is a big drawback of type-2 hypervisors, as they are
only as secure as the operating system on which they rely.

VM(Virtual Machine) Provisioning:
• Provisioning refers to the process of setting up IT infrastructure and
providing access to the various resources that are part of the
infrastructure.
• Virtual machine provisioning or virtual server provisioning is a systems
management process that creates a new virtual machine on a physical host
server and allocates computing resources to support the VM.
• Historically, when there was a need to install a new server for a certain workload to
provide a particular service for a client, a lot of effort was exerted by the IT
administrator and much time was spent to install and provision the new
server:
• check the inventory for a new machine, get one, format it, install the required OS
and services; a server is needed along with lots of security patches
and appliances.
• With the emergence of virtualization technology and the cloud
computing IaaS model, it takes just minutes to achieve the same task.
• All you need is to provision a virtual server through a self-service
interface, with a few small steps to get what you desire with the required
specifications, by either:
1. provisioning this machine in a public cloud like Amazon Elastic
Compute Cloud(EC2), or
2. using a virtualization management software package or a private
cloud management solution installed at your data center in order to
provision the virtual machine inside the organization and within the
private cloud set up.
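Under the hood, either route boils down to stating the required specifications and letting the management layer pick a host with capacity. A minimal sketch, with entirely illustrative host names and specs (on Amazon EC2 the equivalent step is a RunInstances API call):

```python
def provision_vm(inventory, cpu_cores, memory_gb, os_image):
    """Place a new VM on the first host with enough free capacity."""
    for host in inventory:
        if host["free_cores"] >= cpu_cores and host["free_mem_gb"] >= memory_gb:
            host["free_cores"] -= cpu_cores      # allocate computing resources
            host["free_mem_gb"] -= memory_gb
            return {"host": host["name"], "cores": cpu_cores,
                    "mem_gb": memory_gb, "image": os_image}
    raise RuntimeError("no host has enough free capacity")

inventory = [{"name": "host1", "free_cores": 2, "free_mem_gb": 4},
             {"name": "host2", "free_cores": 16, "free_mem_gb": 64}]
vm = provision_vm(inventory, cpu_cores=4, memory_gb=8, os_image="ubuntu-22.04")
print(vm["host"])  # host2 -- host1 lacks the capacity
```

Compare this with the historical workflow above: the inventory check, allocation, and OS image selection are the same steps, just automated.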
