UNIT III
Types of Virtualization
1. Explain virtualization ranging from hardware to applications in five abstraction levels.
Based on the functionality of virtualized applications, there are five basic types of
virtualization which are explained as follows.
Desktop Virtualization
The processing of multiple virtual desktops occurs on one or a few physical servers, typically at
the centralized data center. The copy of the OS and applications that each end user utilizes will
typically be cached in memory as one image on the physical server.
Desktop virtualization provides a virtual desktop environment in which clients can access system
resources remotely through the network.
The ultimate goal of desktop virtualization is to make a computer's operating system accessible
from anywhere over the network. A virtual desktop environment does not require specific
system or hardware resources on the client side; it requires just a network connection.
The user can work with a customized, personalized desktop from a remote location through the
network connection. Desktop virtualization is sometimes referred to as Virtual Desktop
Infrastructure (VDI), where operating systems such as Windows or Linux are installed as
virtual machines on a physical server in one place and delivered remotely through remote
desktop protocols such as RDP (on Windows) or VNC (on Linux).
Currently, VMware Horizon and Citrix XenDesktop are the two most popular VDI solutions on the
market. Although the desktop operating system provided by VDI is virtual, it appears just like a
physical desktop operating system. The virtual desktop can run all the types of applications that
are supported on a physical computer; the only difference is that the processing happens on a
remote server rather than on the local machine.
III IT VIRTUALIZATION INFRASTRUCTURE AND DOCKER (CCS335-Cloud Computing)
Network Virtualization
Network virtualization is the ability to create virtual networks that are decoupled from the
underlying network hardware. This ensures the network can better integrate with and support
increasingly virtual environments. It has the capability to combine multiple physical networks
into one virtual network, or to divide one physical network into separate, independent virtual
networks.
Network virtualization can also combine an entire network into a single manageable unit and
allocate its bandwidth, channels, and other resources based on workload.
Network virtualization is similar to server virtualization: instead of dividing up a physical
server among several virtual machines, physical network resources are divided up among
multiple virtual networks. The physical network resources, such as switches and routers, are
pooled and made accessible to any user via a centralized management system. The benefits of
network virtualization are:
It consolidates the physical hardware of a network into a single virtual network, which reduces
the management overhead of network resources.
It gives better scalability and flexibility in network operations.
It provides automated provisioning and management of network resources.
It reduces hardware requirements, with a corresponding reduction in power consumption.
It is cost-effective, as it reduces the number of physical devices required.
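The divide-one-network-into-many idea above can be sketched in a few lines of Python; the class and method names are illustrative (in the spirit of VLAN tagging), not a real SDN API:

```python
# Toy model of network virtualization: one physical network is divided
# into isolated virtual networks by tagging each host with a virtual
# network ID. Hosts can communicate only within the same virtual network.

class VirtualizedNetwork:
    def __init__(self):
        self.membership = {}   # host -> virtual network ID

    def attach(self, host, vnet_id):
        """Place a host into a virtual network."""
        self.membership[host] = vnet_id

    def can_communicate(self, host_a, host_b):
        """Hosts can talk only inside the same virtual network."""
        return (host_a in self.membership and
                self.membership[host_a] == self.membership.get(host_b))

net = VirtualizedNetwork()
net.attach("web-1", vnet_id=10)
net.attach("web-2", vnet_id=10)
net.attach("db-1", vnet_id=20)

print(net.can_communicate("web-1", "web-2"))  # True: same virtual network
print(net.can_communicate("web-1", "db-1"))   # False: isolated virtual networks
```

The same membership table, read in the other direction, captures the "combine" case: many physical segments appear as one virtual network simply by sharing a virtual network ID.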
Storage Virtualization
Storage virtualization is the process of grouping multiple physical storage devices so that,
through software, they appear as a single storage device in virtual form.
It pools the physical storage from different network storage devices and makes it appear to be
a single storage unit that is handled from a single console. Storage virtualization helps to address
the storage and data management issues by facilitating easy backup, archiving and recovery tasks
in less time.
It aggregates storage functions and hides the actual complexity of the storage area network.
Storage virtualization can be implemented with data storage technologies such as snapshots and
RAID, which take physical disks and present them in a virtual format. These features add
redundancy to the storage and give optimum performance by presenting the storage to the host as
a single volume.
Virtualizing storage separates the storage management software from the underlying
hardware infrastructure in order to provide more flexibility and scalable pools of storage
resources. The benefits provided by storage virtualization are:
Automated management of storage media with reduced downtime.
Enhanced storage management in heterogeneous IT environments.
Better storage availability and optimum storage utilization.
Scalability and redundancy in storage.
Advanced features such as disaster recovery, high availability, consistency, replication, and
de-duplication of data.
Backup and recovery are easier and more efficient with storage virtualization.
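As a rough sketch of the pooling idea, several physical disks can be hidden behind one logical address space; the class name, disk sizes, and translation scheme here are invented for illustration:

```python
# Minimal sketch of storage virtualization: several physical "disks"
# are pooled and presented as one logical volume; logical block
# addresses are translated to (disk, offset) pairs behind the scenes.

class StoragePool:
    def __init__(self, disk_sizes):
        self.disk_sizes = disk_sizes          # capacity of each physical disk

    @property
    def capacity(self):
        """The pool appears as one device of combined capacity."""
        return sum(self.disk_sizes)

    def translate(self, logical_block):
        """Map a logical block number to (disk index, physical block)."""
        for disk, size in enumerate(self.disk_sizes):
            if logical_block < size:
                return disk, logical_block
            logical_block -= size
        raise IndexError("logical block beyond pool capacity")

pool = StoragePool([100, 200, 50])   # three physical disks
print(pool.capacity)                  # 350: host sees one virtual device
print(pool.translate(150))            # (1, 50): lands on the second disk
```

The host only ever sees the single 350-block volume; which physical disk actually serves a block is entirely the pool's concern, which is what lets features like RAID or snapshots be slotted in underneath.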
Server Virtualization
Server virtualization is the process of dividing a physical server into multiple unique and
isolated virtual servers by means of software. It partitions a single physical server into
multiple virtual servers; each virtual server can run its own operating system and applications
independently. A virtual server is also termed a virtual machine (VM). This consolidation allows
many virtual machines to run on a single physical server.
Each virtual machine shares the hardware resources of the physical server, which leads to better
utilization of the physical server's resources. The resources utilized by a virtual machine
include CPU, memory, storage, and networking. The hypervisor is the operating system or software
that runs on the physical machine to perform server virtualization.
The hypervisor running on physical server is responsible for providing the resources to the
virtual machines. Each virtual machine runs independently of the other virtual machines on the
same box with different operating systems that are isolated from each other.
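A minimal sketch of the hypervisor's resource bookkeeping described above; the class, VM names, and capacities are illustrative, not a real hypervisor API:

```python
# Hedged sketch of a hypervisor carving a physical server's CPU and
# memory up among VMs; an allocation is refused when it would
# oversubscribe the host.

class Hypervisor:
    def __init__(self, cpus, memory_gb):
        self.free_cpus = cpus
        self.free_mem = memory_gb
        self.vms = {}

    def create_vm(self, name, cpus, memory_gb):
        """Reserve resources for a VM; fail if the host is exhausted."""
        if cpus > self.free_cpus or memory_gb > self.free_mem:
            return False            # not enough physical resources left
        self.free_cpus -= cpus
        self.free_mem -= memory_gb
        self.vms[name] = (cpus, memory_gb)
        return True

hv = Hypervisor(cpus=16, memory_gb=64)
print(hv.create_vm("web", cpus=4, memory_gb=16))   # True
print(hv.create_vm("db", cpus=8, memory_gb=32))    # True
print(hv.create_vm("big", cpus=8, memory_gb=32))   # False: only 4 CPUs left
```

Real hypervisors also support oversubscription and dynamic scheduling; this sketch only captures the basic idea that every VM's share comes out of one physical pool.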
Popular server virtualization software includes VMware vSphere, Citrix XenServer,
Microsoft Hyper-V, and Red Hat Enterprise Virtualization.
The benefits of server virtualization are:
It consolidates many virtual machines onto a single physical server, giving better utilization
of hardware resources.
Each virtual machine runs its own operating system and applications in isolation, so a fault in
one does not affect the others.
It reduces hardware, power, and maintenance costs.
Application Virtualization
Application virtualization is a technology that encapsulates an application from the underlying
operating system on which it is executed. It enables access to an application without needing to
install it on the local or target device. From the user’s perspective, the application works and
interacts like it’s native on the device.
It allows the use of any cloud client that supports BYOD, such as thin clients, thick clients,
mobile clients, PDAs, and so on.
Application virtualization uses software to bundle an application into a single executable,
run-anywhere package. The software application is isolated from the operating system
and runs in an environment called a "sandbox".
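The "sandbox" idea can be illustrated with a toy Python wrapper that gives a program its own private working directory and environment. This is only a sketch of the isolation concept, not a real application-virtualization runtime, and `run_sandboxed` is an invented helper name:

```python
# Toy model of a sandbox: the "packaged" app runs with a throwaway
# working directory and a minimal private environment, so it never
# touches the host's real settings.

import os
import subprocess
import sys
import tempfile

def run_sandboxed(command):
    """Run a command in a throwaway directory with a minimal environment."""
    with tempfile.TemporaryDirectory() as sandbox_dir:
        private_env = {"PATH": os.environ.get("PATH", ""),
                       "HOME": sandbox_dir}          # app sees its own HOME
        return subprocess.run(command, cwd=sandbox_dir, env=private_env,
                              capture_output=True, text=True)

result = run_sandboxed([sys.executable, "-c",
                        "import os; print(os.environ['HOME'])"])
print(result.stdout.strip() != os.environ.get("HOME"))  # True: isolated HOME
```

Real sandboxes go much further (intercepting filesystem and registry access), but the principle is the same: the application believes it owns its environment while the host stays untouched.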
There are two types of application virtualization: remote applications and streamed
applications. In the first type, the remote application runs on a server, and the client uses
some kind of remote display protocol to communicate back. It is fairly simple to set up a remote
display protocol for applications, even for a large number of administrators and users.
In the second type, one copy of the streaming application runs on the server, and client
desktops then access and run the streamed application locally. With application streaming, the
upgrade process is simpler: you simply set up a new stream with the upgraded version and have
the end users point to the new version of the application.
Some of the popular application virtualization software products on the market are VMware
ThinApp, Citrix XenApp, Novell ZENworks Application Virtualization, and so on.
Some of the prominent benefits of application virtualization are:
Applications can be used without being installed on the local device.
The application runs in an isolated sandbox, so it cannot interfere with the host operating
system or other applications.
Upgrades are simpler, since only the server-side copy or stream needs to be updated.
This technique alleviates the burden and inefficiency of managing hardware resources by
software. It is located under the ISA and remains unmodified by the operating system or VMM
(hypervisor). The figure below illustrates the technique of a software-visible VCPU moving from
one core to another and temporarily suspending execution of a VCPU when there are no appropriate
cores on which to schedule it.
Multicore virtualization
Virtual Hierarchy
The emerging many-core chip multiprocessors (CMPs) provide a new computing landscape.
Instead of supporting time-sharing jobs on one or a few cores, the abundant cores are used in a
space-sharing manner, where single-threaded or multithreaded jobs are simultaneously assigned to
separate groups of cores for long time intervals.
To optimize for space-shared workloads, virtual hierarchies have been proposed to overlay a
coherence and caching hierarchy onto a physical processor. Unlike a fixed physical hierarchy, a
virtual hierarchy can adapt to fit how the work is space-shared, for improved performance and
performance isolation.
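Space-sharing itself can be sketched as follows; the job names, core counts, and first-fit assignment policy are invented purely for illustration:

```python
# Toy illustration of space-sharing on a many-core CMP: each job is
# given a disjoint group of cores for a long interval, instead of
# time-sharing a few cores.

def space_share(total_cores, jobs):
    """Assign each job a disjoint block of cores; return job -> core list."""
    assignment, next_core = {}, 0
    for name, cores_wanted in jobs:
        if next_core + cores_wanted > total_cores:
            break                          # no free core group left
        assignment[name] = list(range(next_core, next_core + cores_wanted))
        next_core += cores_wanted
    return assignment

plan = space_share(16, [("render", 8), ("db", 4), ("web", 4)])
print(plan["render"])                                  # cores 0-7, reserved
print(sorted(set(plan["render"]) & set(plan["db"])))   # []: groups are disjoint
```

Because the core groups never overlap, each job enjoys performance isolation from its neighbors, which is exactly the property a virtual hierarchy aims to preserve in the cache and coherence layers.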
Virtual clusters are built with VMs installed at distributed servers from one or more
physical clusters. The VMs in a virtual cluster are interconnected logically by a virtual
network across several physical networks. The figure below illustrates the concepts of virtual
clusters and physical clusters.
Each virtual cluster is formed with physical machines or a VM hosted by multiple physical
clusters. The virtual cluster boundaries are shown as distinct boundaries.
Virtual clusters offer the following advantages over physical clusters.
The system should have the capability of fast deployment. Here, deployment means two things:
1. To construct and distribute software stacks (OS, libraries, applications) to a physical node
inside clusters as fast as possible.
2. To quickly switch runtime environments from one user's virtual cluster to another user's
virtual cluster.
It is important to efficiently manage the disk space occupied by template software packages.
Some storage architecture designs can be applied to reduce duplicated blocks in a distributed
file system of virtual clusters; hash values are used to compare the contents of data blocks.
Users have their own profiles which store the identification of the data blocks for the
corresponding VMs in a user-specific virtual cluster. Basically, there are four steps to deploy
a group of VMs onto a target cluster: preparing the disk image, configuring the VMs, choosing
the destination nodes, and executing the VM deployment command on every host.
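The hash-based de-duplication of template blocks can be sketched like this; the block contents and class name are illustrative:

```python
# Sketch of hash-based de-duplication: data blocks from VM templates
# are stored once, keyed by the hash of their contents, and each user
# profile just records the block hashes for its VMs.

import hashlib

class BlockStore:
    def __init__(self):
        self.blocks = {}                      # content hash -> block data

    def put(self, data):
        """Store a block (once) and return its content hash."""
        digest = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(digest, data)  # duplicate blocks stored once
        return digest

store = BlockStore()
profile_vm1 = [store.put(b"kernel"), store.put(b"libs"), store.put(b"app-a")]
profile_vm2 = [store.put(b"kernel"), store.put(b"libs"), store.put(b"app-b")]

print(len(store.blocks))                 # 4 unique blocks stored, not 6
print(profile_vm1[0] == profile_vm2[0])  # True: shared template block
```

Two VMs built from the same template share the "kernel" and "libs" blocks on disk; only the blocks that differ consume additional space.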
3. Explain virtualization for the Linux and Windows NT platforms, and describe the process of
live migration of a VM from one host to another.
Steps 0 and 1: Start migration. This step makes preparations for the migration, including
determining the migrating VM and the destination host. Although users could manually make a VM
migrate to an appointed host, in most circumstances, the migration is automatically started by
strategies such as load balancing and server consolidation.
Step 2: Transfer memory. Since the whole execution state of the VM is stored in memory, sending
the VM’s memory to the destination node ensures continuity of the service provided by the VM. All of
the memory data is transferred in the first round, and then the migration controller recopies the
memory data which is changed in the last round. These steps keep iterating until the dirty portion of
the memory is small enough to handle the final copy. Although precopying memory is performed
iteratively, the execution of programs is not obviously interrupted.
Step 3: Suspend the VM and copy the last portion of the data. The migrating VM’s execution is
suspended when the last round’s memory data is transferred. Other nonmemory data such as CPU
and network states should be sent as well. During this step, the VM is stopped and its applications
will no longer run. This “service unavailable” time is called the “downtime” of migration, which
should be as short as possible so that it can be negligible to users.
Steps 4 and 5: Commit and activate the new host. After all the needed data is copied, on the
destination host, the VM reloads the states and recovers the execution of programs in it, and the
service provided by this VM continues. Then the network connection is redirected to the new VM
and the dependency to the source host is cleared.
The whole migration process finishes by removing the original VM from the source host.
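The iterative pre-copy of Steps 2 and 3 can be simulated in a few lines; the page counts, dirty rate, and threshold below are made-up numbers, not measurements:

```python
# Simplified simulation of iterative pre-copy: all memory pages are
# sent first, then each round resends only the pages dirtied during the
# previous round, until the dirty set is small enough to suspend the VM
# and send the final copy.

def live_migrate(total_pages, dirty_fraction, stop_threshold):
    to_send, rounds = total_pages, []
    while to_send > stop_threshold:
        rounds.append(to_send)                    # copied while VM keeps running
        to_send = int(to_send * dirty_fraction)   # pages dirtied meanwhile
    return rounds, to_send                        # final copy = brief downtime

rounds, final_copy = live_migrate(100_000, dirty_fraction=0.3,
                                  stop_threshold=3_000)
print(rounds)       # [100000, 30000, 9000]: shrinking pre-copy rounds
print(final_copy)   # 2700 pages copied while the VM is briefly suspended
```

The shrinking rounds are why downtime can be kept to milliseconds: by the time the VM is suspended, only a small dirty remainder is left to transfer.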
The diagram below shows the effect on the data transmission rate (Mbit/second) of live migration
of a VM from one host to another. Before copying the VM with 512 KB files for 100 clients, the
data throughput was 870 MB/second.
The first precopy takes 63 seconds, during which the rate is reduced to 765 MB/second. Then the
data rate reduces to 694 MB/second in 9.8 seconds for more iterations of the copying process. The
system experiences only 165 ms of downtime, before the VM is restored at the destination host. This
experimental result shows a very small migration overhead in live transfer of a VM between host
nodes.
Effect on data transmission rate of a VM migrated from one failing web server to another.
c) Migration of Memory, Files, and Network
Memory Migration is one of the most important aspects of VM migration. Moving the memory
instance of a VM from one physical host to another can be approached in any number of ways.
Memory migration can be in a range of hundreds of megabytes to a few gigabytes in a typical
system today, and it needs to be done in an efficient manner.
The Internet Suspend-Resume (ISR) technique exploits temporal locality, as memory states are
likely to have considerable overlap in the suspended and the resumed instances of a VM.
Temporal locality refers to the fact that the memory states differ only by the amount of work
done since a VM was last suspended before being initiated for migration.
Network Migration: A migrating VM should maintain all open network connections without relying
on forwarding mechanisms on the original host or on support from mobility or redirection
mechanisms.
To enable remote systems to locate and communicate with a VM, each VM must be assigned a
virtual IP address known to other entities.
This address can be distinct from the IP address of the host machine where the VM is currently
located. Each VM can also have its own distinct virtual MAC address.
The VMM maintains a mapping of the virtual IP and MAC addresses to their corresponding VMs.
In general, a migrating VM includes all the protocol states and carries its IP address with it.
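The VMM's address bookkeeping during migration can be sketched as follows; the class, host names, and addresses are illustrative only:

```python
# Sketch of the VMM mapping described above: each VM has its own
# virtual IP and MAC address, distinct from the host's, and the VMM
# maps them to VMs. After migration, the same addresses simply map to
# the VM on the new host.

class VMM:
    def __init__(self, host_name):
        self.host = host_name
        self.by_ip = {}        # virtual IP  -> VM name
        self.by_mac = {}       # virtual MAC -> VM name

    def register(self, vm, ip, mac):
        self.by_ip[ip] = vm
        self.by_mac[mac] = vm

    def deregister(self, vm):
        self.by_ip = {k: v for k, v in self.by_ip.items() if v != vm}
        self.by_mac = {k: v for k, v in self.by_mac.items() if v != vm}

source, dest = VMM("host-a"), VMM("host-b")
source.register("vm1", "10.0.0.5", "02:42:ac:11:00:02")

# Live migration: the VM carries its IP address and protocol state along.
dest.register("vm1", "10.0.0.5", "02:42:ac:11:00:02")
source.deregister("vm1")

print(dest.by_ip["10.0.0.5"])        # vm1: reachable at the same IP
print("10.0.0.5" in source.by_ip)    # False: source mapping is cleared
```

Because peers address the virtual IP rather than the host's IP, open connections survive the move without any forwarding on the original host.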
The idea is illustrated by the prototype implementation of the Cluster-on-Demand (COD) system
shown in the figure. COD partitions a physical cluster into multiple virtual clusters
(vClusters). vCluster owners specify the operating systems and software for their clusters
through an XML-RPC interface. The vClusters run a batch schedule from Sun's GridEngine on a web
server cluster. The COD system can respond to load changes by restructuring the virtual clusters
dynamically.
The term “storage virtualization” was widely used before the renaissance of system
virtualization. Yet the term has a different meaning in a system virtualization environment.
Previously, storage virtualization was largely used to describe the aggregation and repartitioning of
disks at very coarse time scales for use by physical machines. In system virtualization, virtual storage
includes the storage managed by VMMs and guest OSes. Generally, the data stored in this
environment can be classified into two categories:
i. VM images and
ii. Application Data.
The VM images are special to the virtual environment, while application data includes all other
data, which is the same as in traditional OS environments. The most important aspects of system
virtualization are encapsulation and isolation; these are the major requirements that virtual
storage management must provide.
This procedure complicates storage operations. On the one hand, storage management in the guest
OS behaves as though it is operating on a real hard disk, while the guest OSes cannot access the
hard disk directly. On the other hand, many guest OSes contend for the hard disk when many VMs
are running on a single physical machine. Therefore, storage management in the underlying VMM is
much more complex than that of a traditional operating system.
The architecture of Parallax is scalable and especially suitable for use in cluster-based
environments. The figure below shows a high-level view of the structure of a Parallax-based
cluster. A cluster-wide administrative domain manages all storage appliance VMs, which makes
storage management easy. This mechanism enables advanced storage features such as snapshot
facilities to be implemented in software and delivered above commodity network storage targets.
Docker:
4. Explain in detail about Docker and its components with an example.
Introduction to Docker
What is Docker?
Docker is an open-source centralized platform designed to create, deploy, and run applications.
Docker uses containers on the host's operating system to run applications. It allows
applications to use the same Linux kernel as the host system, rather than creating a whole
virtual operating system. Containers ensure that our application works in any environment, such
as development, test, or production. Docker includes components such as the Docker client,
Docker server, Docker Machine, Docker Hub, Docker Compose, etc.
Why Docker?
Docker is designed to benefit both the Developer and System Administrator. There are the
following reasons to use Docker -
o Docker allows us to easily install and run software without worrying about setup or
dependencies.
o Developers use Docker to eliminate machine-specific problems, i.e. "but the code worked on my
laptop", when working on code together with co-workers.
o Operators use Docker to run and manage apps in isolated containers for better compute
density.
o Enterprises use Docker to build secure, agile software delivery pipelines to ship new
application features faster.
o Since Docker is not only used for deployment but is also a great platform for development, it
can efficiently increase customer satisfaction.
Advantages of Docker
o Docker allows you to use a remote repository to share your container with others.
Disadvantages of Docker
o Some features, such as container self-registration, container self-inspection, and copying
files from the host to the container, are missing in Docker.
o Docker is not a good solution for applications that require a rich graphical interface.
Docker Engine
Docker Engine is a client-server application with these major components:
o A server, which is a long-running daemon process (dockerd).
o A REST API, which is used to specify interfaces that programs can use to talk to the daemon
and instruct it what to do.
o A command-line interface (CLI) client, the docker command.
Docker components
• Docker Images
• Registries
• Docker Containers
Docker is a client-server application. The Docker client talks to the Docker server
or daemon, which, in turn, does all the work. Docker ships with a command line client binary, docker,
as well as a full RESTful API. You can run the Docker daemon and client on the same host or connect
your local Docker client to a remote daemon running on another host. You can see Docker's
architecture depicted here:
Docker Architecture
Docker images
Images are the building blocks of the Docker world. You launch your containers
from images. Images are the "build" part of Docker's life cycle. They are in a layered
format, using union file systems, and are built step-by-step using a series of instructions,
for example:
• Add a file.
• Run a command.
• Open a port.
You can consider images to be the "source code" for your containers. They are
highly portable and can be shared, stored, and updated.
Registries:
Docker stores the images you build in registries. There are two types of registries: public and
private. Docker, Inc., operates the public registry for images, called the Docker Hub. You can
create an account on the Docker Hub and use it to share and store your own images.
The Docker Hub also contains, at last count, over 10,000 images that other people have built
and shared. Want a Docker image for an Nginx web server, the Asterisk open source PABX
system, or a MySQL database? All of these are available, along with a whole lot more.
You can also store images that you want to keep private on the Docker Hub. These images
might include source code or other proprietary information you want to keep secure or only
share with other members of your team or organization.
Containers
Docker helps you build and deploy containers inside of which you can package your applications and
services. As we've just learnt, containers are launched from images and can contain one or more
running processes. You can think about images as the building or packing aspect of Docker and the
containers as the running or execution aspect of Docker. A Docker container combines:
• An image format.
• A set of standard operations.
• An execution environment.
Docker borrows the concept of the standard shipping container, used to transport goods
globally, as a model for its containers. But instead of shipping goods, Docker containers ship
software.
Each container contains a software image -- its 'cargo' -- and, like its physical counterpart,
allows a set of operations to be performed. For example, it can be created, started, stopped, restarted,
and destroyed. Like a shipping container, Docker doesn't care about the contents of the container
when performing these actions; for example, whether a container is a web server, a database, or an
application server. Each container is loaded the same as any other container.
Docker also doesn't care where you ship your container: you can build on your laptop, upload to a
registry, then download to a physical or virtual server, test, deploy to a cluster of a dozen Amazon
EC2 hosts, and run. Like a normal shipping container, it is interchangeable, stackable, portable, and
as generic as possible.
Docker can be run on any x64 host running a modern Linux kernel; kernel version 3.8 or later is
recommended. It has low overhead and can be used on servers, desktops, or laptops. It includes:
• A native Linux container format that Docker calls libcontainer, as well as the popular
container platform lxc. The libcontainer format is now the default.
• Linux kernel namespaces, which provide isolation for file systems, processes, and networks.
• Resource isolation and grouping: resources like CPU and memory are allocated individually to
each container using the cgroups kernel feature.
• Logging: STDOUT, STDERR, and STDIN from the container are collected, logged, and made
available for analysis or troubleshooting.
A Docker image is made up of filesystems layered over each other. At the base is a boot filesystem,
bootfs, which resembles the typical Linux/Unix boot filesystem. A Docker user will probably never
interact with the boot filesystem. Indeed, when a container has booted, it is moved into memory,
and the boot filesystem is unmounted to free up the RAM used by the initrd disk image.
Docker calls each of these filesystems images. Images can be layered on top of one another. The
image below is called the parent image and you can traverse each layer until you reach the
bottom of the image stack where the final image is called the base image. Finally, when a container is
launched from an image, Docker mounts a read-write filesystem on top of any layers below. This is
where whatever processes we want our Docker container to run will execute.
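The layering just described can be modeled with Python's `ChainMap`: lookups fall through from the topmost layer down to the base image, and writes land only in the top layer. This is a toy illustration of union mounting, not a real filesystem:

```python
# Toy model of Docker's layered images: each layer maps paths to file
# contents; the container's read-write layer sits on top of read-only
# image layers.

from collections import ChainMap

base_image = {"/bin/sh": "shell", "/etc/os-release": "ubuntu"}
app_layer = {"/app/server.py": "code"}         # layer built on top of the base
writable = {}                                   # container's read-write layer

container_fs = ChainMap(writable, app_layer, base_image)

writable["/tmp/log"] = "runtime data"           # writes land in the top layer
print(container_fs["/bin/sh"])                  # shell: read through to base
print(container_fs["/tmp/log"])                 # runtime data: top layer
print("/tmp/log" in base_image)                 # False: lower layers untouched
```

Because the lower layers are never modified, many containers can share the same image layers on disk, which is what makes launching containers so cheap.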
Let's get started with Docker images by looking at what images are available to us on our Docker
host. We can do this using the docker images command.
That image was downloaded from a repository. Images live inside repositories, and repositories
live on registries. The default registry is the public registry managed by Docker, Inc., the
Docker Hub.
Each repository can contain multiple images (e.g., the ubuntu repository contains images
for Ubuntu 12.04, 12.10, 13.04, 13.10, and 14.04). Let's get the rest of the images in the
ubuntu repository now.
$ sudo docker pull ubuntu
. . .
Here we've used the docker pull command to pull down the entire contents of the
ubuntu repository.
Let's see what our docker images command reveals now.
You can see we've now got a series of ubuntu images. We can see that the ubuntu
image is actually a series of images collected under a single repository.
We can refer to a specific image inside a repository by suffixing the repository name with
a colon and a tag name, for example:
Running a tagged Docker image:
$ sudo docker run -t -i ubuntu:12.04 /bin/bash
root@79e36bff89b4:/#
This launches a container from the ubuntu:12.04 image, which is an Ubuntu 12.04
operating system. We can also see that some images with the same ID (see image ID
74fe38d11401) are tagged more than once. Image ID 74fe38d11401 is actually
tagged both 12.04 and precise: the version number and code name for that Ubuntu
release, respectively.
Pulling images
When we run a container from an image with the docker run command, if the image isn't
present locally already then Docker will download it from the Docker Hub. By default, if
you don't specify a specific tag, Docker will download the latest tag; for example:
$ sudo docker run -t -i ubuntu /bin/bash
will download the ubuntu:latest image if it isn't already present on the host.
Alternatively, we can use the docker pull command to pull images down ourselves.
Using docker pull saves us some time when launching a container from a new image. Let's see
that now by pulling down the fedora base image:
$ sudo docker pull fedora
Let's see this new image on our Docker host using the docker images command. This
time, however, let's narrow our review of the images to only the fedora images. To do
so, we can specify the image name after the docker images command.
We can see that the fedora image contains the development Rawhide release as well
as Fedora 20. We can also see that the Fedora 20 release is tagged in three ways -- 20,
heisenbug, and latest -- but it is the same image (we can see all three entries have an
ID of b7de3133ff98). If we wanted the Fedora 20 image, therefore, we could use any
of the following:
• fedora:20
• fedora:heisenbug
• fedora:latest
We could have also just downloaded one tagged image using the docker pull
command.
$ sudo docker pull fedora:20
We can also search all of the publicly available images on Docker Hub using the
docker search command:
wfarr/puppet-module...
jamtur01/puppetmaster
. . .
Here, we've searched the Docker Hub for the term puppet. It'll search images and return:
• Repository names
• Image descriptions
$ sudo docker pull jamtur01/puppetmaster
This will pull down the jamtur01/puppetmaster image (which, by the way, contains
a pre-installed Puppet master).
We can then use this image to build a new container. Let's do that now using the
docker run command again.
You can see we've launched a new container from our jamtur01/puppetmaster im- age.
We've launched the container interactively and told the container to run the Bash shell. Once
inside the container's shell, we've run Facter (Puppet's inventory application), which was
pre-installed on our image. From inside the container, we've also run the puppet binary
to confirm it is installed.
Docker Hub:
Docker Hub is a repository service and it is a cloud-based service where people push their Docker
Container Images and also pull the Docker Container Images from the Docker Hub anytime or
anywhere via the internet. It provides features such as you can push your images as private or
public. Mainly DevOps team uses the Docker Hub. It is an open-source tool and freely available for
all operating systems. It is like storage where we store the images and pull the images when it is
required. When a person wants to push/pull images from the Docker Hub they must have a basic
knowledge of Docker. Let us discuss the requirements of the Docker tool.
Docker is a tool that enterprises are adopting rapidly, day by day. When a development team
wants to share a project with all its dependencies for testing, the developers can push the code
to Docker Hub with all its dependencies: first create the image, then push the image to Docker
Hub. After that, the testing team pulls the same image from the Docker Hub, eliminating the need
for any extra files, software, or plugins to run the image, because the development team has
shared the image with all dependencies.
Docker Hub plays a very important role in industry, as it becomes more popular day by day and
acts as a bridge between the development team and the testing team.
If a person wants to share code, software, or any type of file for public use, they can simply
make the image public on the Docker Hub.
1. Push Command
This command, as the name suggests, is used to push a Docker image to the Docker Hub.
Implementation
Follow this example to get an idea of the push command:
Step 1: List the images on your system using the below command:
# docker images
The above command will list all the images on your system.
Step 3: Log in to the Docker Hub using the below command:
# docker login
Step 4: Then give your credentials: type in your Docker Hub username and password.
username
password
Step 5: After you hit the Enter key, you will see a login success message on your screen.
Step 7: Then tag the image with your Docker Hub username and the name it should appear under on
the Docker Hub, using the below command:
# docker tag geeksforgeek mdahtisham/geeksimage
geeksimage - the name under which the image will appear on the Docker Hub.
Step 8: Now push your image using the below command:
# docker push mdahtisham/geeksimage
Note: Below you can see the Docker image successfully pushed to the Docker Hub:
mdahtisham/geeksimage
2. Pull Command
The pull command is used to get an image from the Docker Hub.
Implementation:
Follow this example to get an overview of the pull command in Docker:
Step 1: Search for the image using the below command:
# docker search imagename
You will see all the available images with this name on your screen. One can also pull an image
directly if one knows its exact name.
Step 2: Pull the image using the below command:
# docker pull mdahtisham/geeksimage
Step 3: Now check for the pulled image using the below command as follows:
# docker images