QUESTION BANK
III YEAR – 05TH SEMESTER
DEPARTMENT OF ARTIFICIAL INTELLIGENCE & DATA SCIENCE
CCS335 CLOUD COMPUTING
TABLE OF CONTENTS
CCS335 CLOUD COMPUTING
Syllabus
I CLOUD ARCHITECTURE MODELS AND INFRASTRUCTURE
II VIRTUALIZATION BASICS
III VIRTUALIZATION INFRASTRUCTURE AND DOCKER
IV CLOUD DEPLOYMENT ENVIRONMENT
V CLOUD SECURITY
CCS335 CLOUD COMPUTING L T P C 2 0 2 3
COURSE OBJECTIVES:
To understand the principles of cloud architecture, models and infrastructure.
To understand the concepts of virtualization and virtual machines.
To gain knowledge about virtualization Infrastructure.
To explore and experiment with various Cloud deployment environments.
To learn about the security issues in the cloud environment.
UNIT I CLOUD ARCHITECTURE MODELS AND INFRASTRUCTURE 6
Cloud Architecture: System Models for Distributed and Cloud Computing – NIST Cloud
Computing Reference Architecture – Cloud deployment models – Cloud service models; Cloud
Infrastructure: Architectural Design of Compute and Storage Clouds – Design Challenges
UNIT V CLOUD SECURITY 5
formed by mapping each physical machine with its ID, logically, through a virtual mapping as
shown in Figure 1.17. When a new peer joins the system, its peer ID is added as a node in the
overlay network. When an existing peer leaves the system, its peer ID is removed from the
overlay network automatically. Therefore, it is the P2P overlay network that characterizes the
logical connectivity among the peers.
There are two types of overlay networks: unstructured and structured. An unstructured
overlay network is characterized by a random graph. There is no fixed route to send messages or
files among the nodes. Often, flooding is applied to send a query to all nodes in an unstructured
overlay, thus resulting in heavy network traffic and nondeterministic search results. Structured
overlay networks follow certain connectivity topology and rules for inserting and removing
nodes (peer IDs) from the overlay graph. Routing mechanisms are developed to take advantage
of the structured overlays.
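The routing contrast between the two overlay types can be illustrated with a short sketch. The following Python fragment is only an illustration, assuming a toy DHT-style overlay with a successor rule; the class and function names are hypothetical and do not belong to any real P2P system.

```python
import hashlib
from bisect import bisect_right

def node_id(name: str, bits: int = 16) -> int:
    """Hash a peer or file name onto a small identifier ring (illustrative only)."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** bits)

class StructuredOverlay:
    """Toy DHT-style overlay: each key is stored on the first peer whose ID
    follows the key on the identifier ring (successor rule)."""
    def __init__(self):
        self.ring = []                      # sorted list of peer IDs

    def join(self, peer: str):
        self.ring.append(node_id(peer))     # new peer ID becomes a node in the overlay
        self.ring.sort()

    def leave(self, peer: str):
        self.ring.remove(node_id(peer))     # departing peer ID is removed automatically

    def lookup(self, key: str) -> int:
        """Deterministic routing target: no flooding is needed."""
        k = node_id(key)
        idx = bisect_right(self.ring, k) % len(self.ring)
        return self.ring[idx]

overlay = StructuredOverlay()
for p in ["peerA", "peerB", "peerC"]:
    overlay.join(p)
print(overlay.lookup("song.mp3"))           # always resolves to the same peer ID
```

In an unstructured overlay, the same lookup would have to be flooded to every peer, which is why search traffic is heavy and results are nondeterministic.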
3.3 P2P Application Families
Based on application, P2P networks are classified into four groups, as shown in Table
1.5. The first family is for distributed file sharing of digital contents (music, videos, etc.) on the
P2P network. This includes many popular P2P networks such as Gnutella, Napster, and
BitTorrent, among others. Collaboration P2P networks include MSN or Skype chatting, instant
messaging, and collaborative design, among others. The third family is for distributed P2P
computing in specific applications. For example, SETI@home provides 25 Tflops of distributed
computing power, collectively, over 3 million Internet host machines. Other P2P platforms,
such as JXTA,
.NET, and FightingAID@home, support naming, discovery, communication, security, and
resource aggregation in some P2P applications. We will discuss these topics in more detail in
Chapters 8 and 9.
3.4 P2P Computing Challenges
P2P computing faces three types of heterogeneity problems in hardware, software, and
network requirements. There are too many hardware models and architectures to select from;
incompatibility exists between software and the OS; and different network connections and
protocols
make it too complex to apply in real applications. We need system scalability as the workload
increases. System scaling is directly related to performance and bandwidth. P2P networks do
have these properties. Data location also affects collective performance. Data locality,
network proximity, and interoperability are three design objectives in distributed P2P
applications.
P2P performance is affected by routing efficiency and self-organization by participating peers.
Fault tolerance, failure management, and load balancing are other important issues in using
overlay networks. Lack of trust among peers poses another problem. Peers are strangers to one
another. Security, privacy, and copyright violations are major worries by those in the industry in
terms of applying P2P technology in business applications [35]. In a P2P network, all clients
provide resources including computing power, storage space, and I/O bandwidth. The distributed
nature of P2P networks also increases robustness, because limited peer failures do not form a
single point of failure.
By replicating data across multiple peers, data on failed nodes can easily be recovered. On the other
hand, disadvantages of P2P networks do exist. Because the system is not centralized, managing it
is difficult. In addition, the system lacks security. Anyone can log on to the system and cause
damage or abuse. Further, all client computers connected to a P2P network cannot be considered
reliable or virus-free. In summary, P2P networks are reliable for a small number of peer nodes.
They are only useful for applications that require a low level of security and have no concern for
data sensitivity. We will discuss P2P networks in Chapter 8, and extending P2P technology to
social networking in Chapter 9.
4. Cloud Computing over the Internet
Gordon Bell, Jim Gray, and Alex Szalay [5] have advocated: “Computational science is
changing
to be data-intensive. Supercomputers must be balanced systems, not just CPU farms but also
petascale I/O and networking arrays.” In the future, working with large data sets will typically
mean sending the computations (programs) to the data, rather than copying the data to the
workstations. This reflects the trend in IT of moving computing and data from desktops to large
data centers, where there is on-demand provision of software, hardware, and data as a service.
This data explosion has promoted the idea of cloud computing.
Cloud computing has been defined differently by many users and designers. For example,
IBM, a major player in cloud computing, has defined it as follows: “A cloud is a pool of
virtualized computer resources. A cloud can host a variety of different workloads, including
batch-style backend jobs and interactive and user-facing applications.” Based on this definition, a
cloud allows workloads to be deployed and scaled out quickly through rapid provisioning of
virtual or physical machines. The cloud supports redundant, self-recovering, highly scalable
programming models that allow workloads to recover from many unavoidable hardware/software
failures. Finally, the cloud system should be able to monitor resource use in real time to enable
rebalancing of allocations when needed.
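As a rough illustration of the real-time monitoring and rebalancing mentioned in this definition, the sketch below flags over- and under-utilized VMs using simple thresholds; the function name and the load figures are hypothetical, not a production autoscaler.

```python
def rebalance(vm_load: dict, low: float = 0.2, high: float = 0.8) -> dict:
    """Toy rebalancer: scale out busy VMs, reclaim idle ones.
    vm_load maps a VM name to its measured utilization in [0, 1]."""
    actions = {}
    for vm, load in vm_load.items():
        if load > high:
            actions[vm] = "provision additional instance"
        elif load < low:
            actions[vm] = "reclaim instance"
        else:
            actions[vm] = "no change"
    return actions

print(rebalance({"web-1": 0.93, "batch-1": 0.05, "db-1": 0.55}))
```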
4.1 Internet Clouds
Cloud computing applies a virtualized platform with elastic resources on demand by
provisioning hardware, software, and data sets dynamically (see Figure 1.18). The idea is to
move desktop computing to a service-oriented platform using server clusters and huge databases
at data centers. Cloud computing leverages its low cost and simplicity to benefit both users and
providers. Machine virtualization has enabled such cost-effectiveness. Cloud computing intends
to satisfy many user
Service applications in this layer include daily office management work such as information
retrieval, document processing and calendar and authentication services.
• The application layer is also heavily used by enterprises in business marketing and sales, consumer relationship management (CRM), financial transactions and supply chain management.
From the provider's perspective, the services at various layers demand different amounts of functionality support and resource management by providers.
In general, SaaS demands the most work from the provider, PaaS is in the middle, and IaaS demands the least. For example, Amazon EC2 provides not only virtualized CPU resources to users but also management of these provisioned resources.
Services at the application layer demand more work from providers.
• The best example of this is the Salesforce.com CRM service, in which the provider supplies not only the hardware at the bottom layer and the software at the top layer but also the platform and software tools for user application development and monitoring.
• In Market Oriented Cloud Architecture, as consumers rely on cloud providers to meet more of
their computing needs, they will require a specific level of QoS to be maintained by their
providers,
in order to meet their objectives and sustain their operations. Market-oriented resource
management
is necessary to regulate the supply and demand of cloud resources to achieve market equilibrium
between supply and demand.
• This cloud is basically built with the following entities:
o Users or brokers acting on a user's behalf submit service requests from anywhere in the world to the data center and cloud to be processed.
o The Request Examiner ensures that there is no overloading of resources whereby many service requests cannot be fulfilled successfully due to limited resources.
o The Pricing mechanism decides how service requests are charged. For instance, requests can be charged based on submission time (peak/off-peak), pricing rates (fixed/changing) or availability of resources (supply/demand).
• The VM Monitor mechanism keeps track of the availability of VMs and their resource
entitlements.
The Accounting mechanism maintains the actual usage of resources by requests so that the final
cost can be computed and charged to users.
In addition, the maintained historical usage information can be utilized by the Service Request
Examiner and Admission Control mechanism to improve resource allocation decisions.
The Dispatcher mechanism starts the execution of accepted service requests on allocated VMs.
The Service Request Monitor mechanism keeps track of the execution progress of service requests.
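A minimal sketch of how the Pricing and Accounting mechanisms could fit together is given below; the rates, class name and request fields are hypothetical, shown only to illustrate peak/off-peak charging and per-user usage aggregation.

```python
from dataclasses import dataclass

PEAK_RATE = 0.12        # hypothetical price per CPU-hour at peak time
OFF_PEAK_RATE = 0.07    # hypothetical price per CPU-hour off-peak

@dataclass
class ServiceRequest:
    user: str
    cpu_hours: float
    submitted_at_peak: bool

def price(req: ServiceRequest) -> float:
    """Pricing mechanism: charge by submission time (peak/off-peak)."""
    rate = PEAK_RATE if req.submitted_at_peak else OFF_PEAK_RATE
    return round(req.cpu_hours * rate, 2)

def account(requests: list) -> dict:
    """Accounting mechanism: aggregate actual usage so the final cost
    can be computed and charged per user."""
    bill = {}
    for r in requests:
        bill[r.user] = round(bill.get(r.user, 0.0) + price(r), 2)
    return bill

print(account([ServiceRequest("alice", 10, True), ServiceRequest("alice", 4, False)]))
```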
4. Explain in detail about the architectural design challenges of
(i) Service Availability and Data Lock-in Problem
(ii) Data Privacy and Security Concerns? BTL4
(Concept Explanation (i): 7 marks, Concept Explanation (ii): 6 marks)
Challenge 1: Service Availability and Data Lock-in Problem
The management of a cloud service by a single company is often the source of single points
of failure.
• To achieve HA, one can consider using multiple cloud providers. Even if a company has
multiple data centers located in different geographic regions, it may have common software
infrastructure and
accounting systems.
• Therefore, using multiple cloud providers may provide more protection from failures.
• Another availability obstacle is distributed denial of service (DDoS) attacks.
• Criminals threaten to cut off the incomes of SaaS providers by making their services unavailable. Some utility computing services offer SaaS providers the opportunity to defend against DDoS attacks by using quick scale-ups.
• Software stacks have improved interoperability among different cloud platforms, but the APIs themselves are still proprietary. Thus, customers cannot easily extract their data and programs from one site to run on another.
• The obvious solution is to standardize the APIs so that a SaaS developer can deploy services and data across multiple cloud providers.
• This will prevent the loss of all data due to the failure of a single company. In addition to mitigating data lock-in concerns, standardization of APIs enables a new usage model in which the same software infrastructure can be used in both public and private clouds.
Such an option could enable surge computing, in which the public cloud is used to capture the
extra tasks that cannot be easily run in the data center of a private cloud.
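The idea of a standardized API that works across providers can be sketched as follows; the ObjectStore interface and the provider classes are hypothetical placeholders for real provider SDKs, shown only to illustrate how a common interface mitigates data lock-in.

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """A provider-neutral storage interface: applications code against this,
    so data and programs can move between clouds without rewriting."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class ProviderAStore(ObjectStore):
    def __init__(self):
        self._blobs = {}                 # stands in for provider A's real API
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]

class ProviderBStore(ObjectStore):
    def __init__(self):
        self._blobs = {}                 # stands in for provider B's real API
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]

def backup(primary: ObjectStore, secondary: ObjectStore, key: str, data: bytes):
    """Writing to two providers guards against the failure of a single company."""
    primary.put(key, data)
    secondary.put(key, data)

backup(ProviderAStore(), ProviderBStore(), "report.csv", b"quarterly data")
```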
Challenge 2: Data Privacy and Security Concerns
Current cloud offerings are essentially public (rather than private) networks, exposing the system
to more attacks.
Many obstacles can be overcome immediately with well understood technologies such as
encrypted storage, virtual LANs, and network middle boxes (e.g., firewalls, packet filters).
• For example, the end user could encrypt data before placing it in a cloud. Many nations have laws requiring SaaS providers to keep customer data and copyrighted material within national boundaries.
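A minimal sketch of end-user encryption before upload is shown below, assuming the third-party Python cryptography package is installed; the actual cloud upload/download steps are omitted.

```python
from cryptography.fernet import Fernet   # third-party package: pip install cryptography

# The key stays with the end user; only ciphertext is handed to the cloud provider.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"customer data that must not be readable by the provider"
ciphertext = cipher.encrypt(record)       # this is what gets uploaded to cloud storage

# Later, after downloading the object back from the cloud:
assert cipher.decrypt(ciphertext) == record
```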
• Traditional network attacks include buffer overflows, DoS attacks, spyware, malware, rootkits, Trojan horses, and worms.
• In a cloud environment, newer attacks may result from hypervisor malware, guest hopping and hijacking, or VM rootkits.
Another type of attack is the man-in-the-middle attack for VM migrations.
In general, passive attacks steal sensitive data or passwords. On the other hand, active attacks may manipulate kernel data structures, which will cause major damage to cloud servers.
5. Explain in detail about the architectural design challenges of
(i) Unpredictable Performance and Bottlenecks
(ii) Distributed Storage and Widespread Software Bugs? BTL4
(Concept Explanation (i): 7 marks, Concept Explanation (ii): 6 marks)
Challenge 3: Unpredictable Performance and Bottlenecks
• Multiple VMs can share CPUs and main memory in cloud computing, but I/O sharing is problematic.
• For example, to run 75 EC2 instances with the STREAM benchmark requires a mean bandwidth of 1,355 MB/second. However, for each of the 75 EC2 instances to write 1 GB files to the local disk requires a mean disk write bandwidth of only 55 MB/second.
• This demonstrates the problem of I/O interference between VMs.
One solution is to improve I/O architectures and operating systems to efficiently virtualize
interrupts and I/O channels.
• Internet applications continue to become more data-intensive. If we assume applications to be pulled apart across the boundaries of clouds, this may complicate data placement and transport.
• Cloud users and providers have to think about the implications of placement and traffic at every level of the system if they want to minimize costs.
• This kind of reasoning can be seen in Amazon's development of its new CloudFront service.
• Therefore, data transfer bottlenecks must be removed, bottleneck links must be widened, and weak servers should be removed.
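A quick back-of-the-envelope calculation shows why such bottlenecks matter; the link speed and data size below are illustrative assumptions, not measured figures.

```python
def transfer_hours(data_gb: float, link_mbps: float) -> float:
    """Time to move a data set over a network link (ignoring protocol overhead)."""
    bits = data_gb * 8 * 10**9            # decimal gigabytes to bits
    seconds = bits / (link_mbps * 10**6)
    return seconds / 3600

# Moving 1 TB across a 100 Mbps WAN link takes roughly a day,
# which is why computations are shipped to the data instead of the reverse.
print(round(transfer_hours(1000, 100), 1))   # ~22.2 hours
```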
Challenge 4: Distributed Storage and Widespread Software Bugs
The database is always growing in cloud applications.
• The opportunity is to create a storage system that will not only meet this growth but also combine it with the cloud advantage of scaling arbitrarily up and down on demand.
• This demands the design of efficient distributed SANs.
• Data centers must meet programmers' expectations in terms of scalability, data durability and HA.
Data consistency checking in SAN-connected data centers is a major challenge in cloud computing. Large-scale distributed bugs cannot be reproduced, so the debugging must occur at scale in the production data centers. No data center will provide such a convenience.
One solution may be a reliance on using VMs in cloud computing. The level of virtualization may
make it possible to capture valuable information in ways that are impossible without using VMs.
• Debugging over simulators is another approach to attacking the problem, if the simulator is well designed.
6. Explain in detail about the architectural design challenges of
(i) Cloud Scalability, Interoperability and Standardization
(ii) Software Licensing and Reputation Sharing? BTL4
(Concept Explanation (i): 8 marks, Concept Explanation (ii): 5 marks)
Challenge 5: Cloud Scalability, Interoperability, Standardization
• The pay-as-you-go model applies to storage and network bandwidth; both are counted in terms of the number of bytes used.
• Computation is different depending on the virtualization level.
• GAE automatically scales in response to load increases or decreases, and the users are charged by the cycles used.
• AWS charges by the hour for the number of VM instances used, even if the machine is idle. The opportunity here is to scale quickly up and down in response to load variation, in order to save money, but without violating SLAs.
• Open Virtualization Format (OVF) describes an open, secure, portable, efficient and extensible format for the packaging and distribution of VMs.
• It also defines a format for distributing software to be deployed in VMs.
• This VM format does not rely on the use of a specific host platform, virtualization platform or guest operating system.
• The approach is to address virtual platform-agnostic packaging with certification and integrity of packaged software. The package supports virtual appliances to span more than one VM.
• OVF also defines a transport mechanism for VM templates, and the format can apply to different virtualization platforms with different levels of virtualization.
• In terms of cloud standardization, virtual appliances need to be able to run on any virtual platform. Users also need to enable VMs to run on heterogeneous hardware platform hypervisors.
• This requires hypervisor-agnostic VMs. Users also need to realize cross-platform live migration between x86 Intel and AMD technologies and support legacy hardware for load balancing.
• All these issues are wide open for further research.
Challenge 6: Software Licensing and Reputation Sharing
Many cloud computing providers originally relied on open source software because the licensing model for commercial software is not ideal for utility computing.
• The primary opportunity is either for open source to remain popular or simply for commercial software companies to change their licensing structure to better fit cloud computing.
• One can consider using both pay-for-use and bulk-use licensing schemes to widen the business coverage.
PART C
15 Marks
1. Explain in detail about the Models of Cloud Computing? BTL4
(Definition: 2 marks, Diagram: 3 marks, Concept Explanation: 6 marks, Advantages: 2 marks, Disadvantages: 2 marks)
Cloud Computing helps in rendering several services according to roles, companies, etc.
Cloud computing models are explained below.
Infrastructure as a service (IaaS)
Platform as a service (PaaS)
Software as a service (SaaS)
1. Infrastructure as a service (IaaS)
Infrastructure as a Service (IaaS) helps in delivering computer infrastructure on an external basis
for supporting operations. Generally, IaaS provides services to networking equipment, devices,
databases, and web servers.
Infrastructure as a Service (IaaS) helps large organizations, and large enterprises in managing
and building their IT platforms. This infrastructure is flexible according to the needs of the
client.
Advantages of IaaS
IaaS is cost-effective as it eliminates capital expenses.
IaaS cloud provider provides better security than any other software.
IaaS provides remote access.
Disadvantages of IaaS
In IaaS, users have to secure their own data and applications.
Cloud computing is not accessible in some regions of the World.
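As an illustration of IaaS, the sketch below requests a virtual machine programmatically; it assumes the boto3 SDK and valid AWS credentials are available, and the AMI ID is a placeholder rather than a real image.

```python
# A minimal IaaS sketch, assuming boto3 is installed and credentials are configured.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical machine image ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print("Provisioned instance:", instances[0].id)
```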
2. Platform as a service (PaaS)
Platform as a Service (PaaS) is a type of cloud computing that helps developers to build applications and services over the Internet by providing them with a platform.
PaaS helps in maintaining control over their business applications.
Advantages of PaaS
PaaS is simple and very much convenient for the user as it can be accessed via a web
browser.
PaaS has the capabilities to efficiently manage the lifecycle.
Disadvantages of PaaS
PaaS has limited control over infrastructure as they have less control over the
environment and are not able to make some customizations.
PaaS has a high dependence on the provider.
3. Software as a service (SaaS)
Software as a Service (SaaS) is a type of cloud computing model that delivers services and applications over the Internet. The SaaS applications are called Web-Based Software or Hosted Software.
SaaS has around 60 percent of cloud solutions and due to this, it is mostly preferred by
companies.
Advantages of SaaS
SaaS can access app data from anywhere on the Internet.
SaaS provides easy access to features and services.
Disadvantages of SaaS
SaaS solutions have limited customization, which means they have some restrictions
within the platform.
SaaS has little control over the data of the user.
SaaS solutions are generally cloud-based, so they require a stable internet connection to work properly.
Cloud infrastructure
Cloud computing, one of the most in-demand technologies of the current scenario, has proved to be a revolutionary technology trend for businesses of all sizes. It manages a broad and complex infrastructure setup to provide cloud services and resources to customers. Cloud infrastructure, which comes under the backend part of cloud architecture, represents the hardware and software components such as servers, storage, networking, management software, deployment software and virtualization software. In the backend, cloud infrastructure enables the complete cloud computing system.
Why Cloud Computing Infrastructure:
Cloud computing refers to providing on-demand services to the customer anywhere and anytime, and the cloud infrastructure is what enables the complete cloud computing system. Cloud infrastructure is capable of providing the same services as physical infrastructure to customers. It is available for private cloud, public cloud, and hybrid cloud systems with low cost, greater flexibility and scalability.
Cloud infrastructure components:
Different components of cloud infrastructure support the computing requirements of a cloud computing model. Cloud infrastructure has a number of key components, not limited to servers, software, network and storage devices. Still, cloud infrastructure is generally categorized into three parts:
1. Computing
2. Networking
3. Storage
The most important point is that cloud infrastructure should satisfy some basic infrastructural constraints like transparency, scalability, security and intelligent monitoring.
The below figure represents components of cloud infrastructure
Components of Cloud Infrastructure
1. Hypervisor:
A hypervisor is firmware or a low-level program that is key to enabling virtualization. It is used to divide and allocate cloud resources between several customers. Because it monitors and manages cloud services/resources, the hypervisor is also called a VMM (Virtual Machine Monitor or Virtual Machine Manager).
2. Management Software :
Management software helps in maintaining and configuring the infrastructure. Cloud
management software monitors and optimizes resources, data, applications and services.
3. Deployment Software :
Deployment software helps in deploying and integrating the application on the cloud. So,
typically it helps in building a virtual computing environment.
4. Network:
The network is one of the key components of cloud infrastructure and is responsible for connecting cloud services over the internet. A network is required for the transmission of data and resources externally and internally.
5. Server:
The server, which represents the computing portion of the cloud infrastructure, is responsible for managing and delivering cloud services to various services and partners, maintaining security, etc.
6. Storage:
Storage represents the storage facility provided to different organizations for storing and managing data. It provides the facility of switching to another resource if one of the resources fails, as it keeps many copies of the data.
Along with this, virtualization is also considered one of the important components of cloud infrastructure, because it abstracts the available data storage and computing power away from the actual hardware, and users interact with their cloud infrastructure through a GUI (Graphical User Interface).
2. Explain about the NIST reference architecture? BTL4
(Definition: 2 marks, Diagram: 4 marks, Explanation: 9 marks)
NIST stands for National Institute of Standards and Technology
The goal is to achieve effective and secure cloud computing to reduce cost and improve services
• NIST formed the following major workgroups specific to cloud computing:
o Cloud computing target business use cases work group
o Cloud computing reference architecture and taxonomy work group
o Cloud computing standards roadmap work group
o Cloud computing SAJACC (Standards Acceleration to Jumpstart Adoption of Cloud Computing) work group
o Cloud computing security work group
• Objectives of the NIST Cloud Computing reference architecture:
o Illustrate and understand the various levels of services
o Provide a technical reference
o Categorize and compare services of cloud computing
o Analyze security, interoperability and portability
• In general, NIST generates reports for future reference which include surveys and analysis of existing cloud computing reference models, vendors and federal agencies.
The conceptual reference architecture shown in Figure 1.4 involves five actors. Each actor is an entity that participates in cloud computing.
• Cloud consumer: A person or an organization that maintains a business relationship with, and uses services from, cloud providers.
• Cloud provider: A person, organization or entity responsible for making a service available to interested parties.
• Cloud auditor: A party that conducts independent assessment of cloud services, information system operation, performance and security of the cloud implementation.
• Cloud broker: An entity that manages the performance and delivery of cloud services and negotiates relationships between cloud providers and consumers.
• Cloud carrier: An intermediary that provides connectivity and transport of cloud services from cloud providers to consumers.
Figure 1.5 illustrates the common interactions between the cloud consumer and provider, where the broker is used to provide service to the consumer and the auditor collects the audit information. The interaction between the actors may lead to different use case scenarios.
Figure 1.6 shows one kind of scenario in which the cloud consumer may request service from a cloud broker instead of contacting the service provider directly. In this case, a cloud broker can create a new service by combining multiple services.
Figure 1.7 illustrates the usage of different kind of Service Level Agreement (SLA) between
consumer, provider and carrier.
The cloud consumer is a principal stakeholder for the cloud computing service and requires service level agreements to specify the performance requirements to be fulfilled by a cloud provider.
• The service level agreement covers Quality of Service and security aspects.
Consumers have limited rights to access the software applications.
There are three kinds of cloud consumers: SaaS consumers, PaaS Consumers and IaaS
consumers.
• SaaS consumers are members who directly access the software application. For example, document management, content management, social networks, financial billing and so on.
PaaS consumers are used to deploy, test, develop and manage applications hosted in cloud
environment. Database application deployment, development and testing is an example for these
kind of consumer.
• IaaS consumers can access virtual computers, storage and network infrastructure. For example, usage of an Amazon EC2 instance to deploy a web application.
On the other hand, Cloud Providers have complete rights to access software applications. In
Software as a Service model, cloud provider is allowed to configure, maintain and update the
operations of software application.
• Management process is done by Integrated Development environment and Software
Development
Kit in Platform as a Service model.
Infrastructure as a Service model covers Operating System and Networks.
• Normally, the service layer defines the interfaces for cloud consumers to access the computing services.
• The resource abstraction and control layer contains the system components that cloud providers use to provide and manage access to the physical computing resources through software abstraction.
• Resource abstraction covers virtual machine management and virtual storage management. The control layer focuses on resource allocation, access control and usage monitoring.
• Physical resource layer includes physical computing resources such as CPU, Memory, Router,
Switch, Firewalls and Hard Disk Drive.
Service orchestration describes the automated arrangement, coordination and management of complex computing systems.
• In cloud service management, business support entails the set of business related services
dealing
with consumer and supporting services which includes content management, contract
management,
inventory management, accounting service, reporting service and rating service.
• Provisioning of equipment, wiring and transmission is mandatory to set up a new service that provides a specific application to a cloud consumer. Those details are described in Provisioning and Configuration management.
• Portability refers to the ability to work in more than one computing environment without major effort. Similarly, interoperability means the ability of the system to work with other systems.
• The security factor is applicable to enterprises and government, and it may include privacy. Privacy applies to a cloud consumer's right to safeguard his information from other consumers or parties.
3. Explain in detail about Cloud Deployment Models? BTL4
(Diagram: 3 marks, Explanation: 6 marks, Advantages: 2 marks, Disadvantages: 2 marks, Tabular column: 2 marks)
In cloud computing, we have access to a shared pool of computer resources (servers, storage,
programs, and so on) in the cloud. You simply need to request additional resources when you
require them. Getting resources up and running quickly is a breeze thanks to the clouds. It is
possible to release resources that are no longer necessary. This method allows you to just pay for
what you use. Your cloud provider is in charge of all upkeep.
Cloud Deployment Model
Cloud Deployment Model functions as a virtual computing environment with a deployment
architecture that varies depending on the amount of data you want to store and who has access to
the infrastructure
Types of Cloud Computing Deployment Models
The cloud deployment model identifies the specific type of cloud environment based on
ownership, scale, and access, as well as the cloud’s nature and purpose. The location of the
servers you’re utilizing and who controls them are defined by a cloud deployment model. It
specifies how your cloud infrastructure will look, what you can change, and whether you will be
given services or will have to create everything yourself. Relationships between the
infrastructure and your users are also defined by cloud deployment types. Different types of
cloud computing deployment models are described below.
Public Cloud
Private Cloud
Hybrid Cloud
Community Cloud
Multi-Cloud
Public Cloud
The public cloud makes it possible for anybody to access systems and services. The public cloud
may be less secure as it is open to everyone. The public cloud is one in which cloud
infrastructure services are provided over the internet to the general people or major industry
groups. The infrastructure in this cloud model is owned by the entity that delivers the cloud
services, not by the consumer. It is a type of cloud hosting that allows customers and users to
easily access systems and services. This form of cloud computing is an excellent example of
cloud hosting, in which service providers supply services to a variety of customers. In this
arrangement, storage backup and retrieval services are given for free, as a subscription, or on a
per-user basis. For example, Google App Engine etc.
Public Cloud
Community Cloud
Multi-Cloud
We’re talking about employing multiple cloud providers at the same time under this
paradigm, as the name implies. It’s similar to the hybrid cloud deployment approach, which
combines public and private cloud resources. Instead of merging private and public clouds,
multi- cloud uses many public clouds. Although public cloud providers provide numerous tools
to improve the reliability of their services, mishaps still occur. It’s quite rare that two distinct
clouds would have an incident at the same moment. As a result, multi-cloud deployment
improves the high availability of your services even more.
Multi-Cloud
Advantages of the Multi-Cloud Model
You can mix and match the best features of each cloud provider’s services to suit the
demands of your apps, workloads, and business by choosing different cloud
providers.
Reduced Latency: To reduce latency and improve user experience, you can choose
cloud regions and zones that are close to your clients.
High availability of service: It’s quite rare that two distinct clouds would have an
incident at the same moment. So, the multi-cloud deployment improves the high
availability of your services.
Type 1 hypervisors run directly on the system hardware. They are often referred to as "native", "bare metal" or "embedded" hypervisors in vendor literature.
Type 2 hypervisors run on a host operating system.
13. What is a Virtualized Infrastructure Manager (VIM)? BTL1
The virtualized infrastructure manager (VIM) in a Network Functions Virtualization
(NFV) implementation manages the hardware and software resources that the service provider
uses to create service chains and deliver network services to customers.
14. Differentiate between system VM and Process VM?BTL2
A Process virtual machine, sometimes called an application virtual machine, runs as a
normal application inside a host OS and supports a single process. It is created when that process
is started and destroyed when it exits. Its purpose is to provide a platform-independent
programming environment that abstracts away details of the underlying hardware or operating system, and allows a program to execute in the same way on any platform.
A System virtual machine provides a complete system platform which supports the execution of a complete operating system (OS); VirtualBox is one example.
15. Mention the significance of Network Virtualization? BTL1
Network virtualization helps organizations achieve major advances in speed, agility, and security by automating and simplifying many of the processes that go into running a data center network and managing networking and security in the cloud. Key benefits of network virtualization:
Reduce network provisioning time from weeks to minutes
Achieve greater operational efficiency by automating manual processes
Place and move workloads independently of physical topology
Improve network security within the data center
16. List the implementation levels of virtualization [R]? BTL1
Instruction set architecture (ISA) level
Hardware abstraction layer (HAL) level
Operating system level
Library (user-level API) level
Application level
17. Explain hypervisor architecture? BTL1
A hypervisor or virtual machine monitor (VMM) is a piece of computer software, firmware or hardware that creates and runs virtual machines.
18. Define para-virtualization?BTL1
Para-virtualization is a virtualization technique that presents a software interface to virtual
machines that is similar, but not identical to that of the underlying hardware.
19. What are the two types of hypervisor? BTL1
Micro-kernel architecture
Monolithic hypervisor architecture
20. Define Application virtualization?BTL1
Application-level virtualization is a technique allowing applications to be run in runtime
environments that do not natively support all the features required by such applications. These
techniques are mostly concerned with partial file systems, libraries, and operating system component emulation.
21. Define server virtualization?BTL1
Server virtualization is the process of dividing a physical server into multiple unique and
isolated virtual servers by means of a software application. Each virtual server can run its own operating system independently.
PART B
13 Marks
1. Explain in detail about Virtualization in Cloud Computing and its Types? BTL4
(Definition: 2 marks, Diagram: 3 marks, Explanation: 8 marks)
Virtualization is a technique how to separate a service from the underlying physical delivery of
that service. It is the process of creating a virtual version of something like computer hardware. It
was initially developed during the mainframe era. It involves using specialized software to create
a
virtual or software-created version of a computing resource rather than the actual version of the
same resource. With the help of Virtualization, multiple operating systems and applications can
run on the same machine and its same hardware at the same time, increasing the utilization and
flexibility of hardware. In other words, one of the main cost-effective, hardware-reducing,
and energy-saving techniques used by cloud providers is Virtualization. Virtualization allows
sharing of a single physical instance of a resource or an application among multiple customers
and
organizations at one time. It does this by assigning a logical name to physical storage and
providing a pointer to that physical resource on demand. The term virtualization is often
synonymous with hardware virtualization, which plays a fundamental role in efficiently
delivering
Infrastructure-as- a-Service (IaaS) solutions for cloud computing. Moreover, virtualization
technologies provide a virtual environment for not only executing applications but also for
storage, memory, and networking.
Virtualization
Host Machine: The machine on which the virtual machine is going to be built is
known as Host Machine.
Guest Machine: The virtual machine is referred to as a Guest Machine.
Work of Virtualization in Cloud Computing
Virtualization has a prominent impact on Cloud Computing. In the case of cloud computing, with the help of Virtualization, users have the extra benefit of sharing the infrastructure.
Cloud Vendors take care of the required physical resources, but these cloud providers charge a
huge amount for these services which impacts every user or organization. Virtualization helps
Users or Organisations in maintaining those services which are required by a company through
external (third-party) people, which helps in reducing costs to the company. This is the way
through which Virtualization works in Cloud Computing.
Benefits of Virtualization
More flexible and efficient allocation of resources.
Enhance development productivity.
It lowers the cost of IT infrastructure.
Remote access and rapid scalability.
High availability and disaster recovery.
Pay peruse of the IT infrastructure on demand.
Enables running multiple operating systems.
Drawbacks of Virtualization
High Initial Investment: Clouds have a very high initial investment, but it is also true
that it will help in reducing the cost of companies.
Learning New Infrastructure: As the companies shifted from Servers to Cloud, it
requires highly skilled staff who have skills to work with the cloud easily, and for this,
you have to hire new staff or provide training to current staff.
Risk of Data: Hosting data on third-party resources can lead to putting the data at
risk, it has the chance of getting attacked by any hacker or cracker very easily.
For more benefits and drawbacks, you can refer to the Pros and Cons of Virtualization.
Characteristics of Virtualization
Increased Security: The ability to control the execution of a guest program in a
completely transparent manner opens new possibilities for delivering a secure,
controlled execution environment. All the operations of the guest programs are
generally performed against the virtual machine, which then translates and applies them
to the host programs.
Managed Execution: In particular, sharing, aggregation, emulation, and isolation are
the most relevant features.
Sharing: Virtualization allows the creation of a separate computing environment
within the same host.
Aggregation: It is possible to share physical resources among several guests, but
virtualization also allows aggregation, which is the opposite process.
For more characteristics, you can refer to Characteristics of Virtualization.
Types of Virtualization
1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data virtualization
Types of Virtualization
Network Virtualization
3. Desktop Virtualization: Desktop virtualization allows the users’ OS to be remotely stored on
a server in the data center. It allows the user to access their desktop virtually, from any location
by a different machine. Users who want specific operating systems other than Windows Server
will need to have a virtual desktop. The main benefits of desktop virtualization are user mobility,
portability, and easy management of software installation, updates, and patches.
4. Storage Virtualization: Storage virtualization is an array of servers that are managed by a
virtual storage system. The servers aren’t aware of exactly where their data is stored and instead
function more like worker bees in a hive. It makes managing storage from multiple sources be
managed and utilized as a single repository. storage virtualization software maintains smooth
operations, consistent performance, and a continuous suite of advanced functions despite
changes, breaks down, and differences in the underlying equipment.
5. Server Virtualization: This is a kind of virtualization in which the masking of server
resources takes place. Here, the central server (physical server) is divided into multiple different
virtual servers by changing the identity number, and processors. So, each system can operate its
operating systems in an isolated manner. Where each sub-server knows the identity of the central
server. It causes an increase in performance and reduces the operating cost by the deployment of
main server resources into a sub-server resource. It’s beneficial in virtual migration, reducing
energy consumption, reducing infrastructural costs, etc.
6. Data Virtualization: This is the kind of virtualization in which the data is collected from
various sources and managed at a single place without knowing more about the technical
information like how data is collected, stored & formatted then arranged that data logically so
that its virtual view can be accessed by its interested people and stakeholders, and users through
the various cloud services remotely. Many big companies provide such services, like Oracle, IBM, AtScale, CData, etc.
Uses of Virtualization
Data-integration
Business-integration
Service-oriented architecture data-services
Searching organizational data
2. What are the differences between Cloud Computing and Virtualization? BTL2
(Comparison: 13 marks)
S.NO | Cloud Computing | Virtualization
1. | Cloud computing is used to provide pools of automated resources that can be accessed on demand. | Virtualization is used to make various simulated environments through a physical hardware system.
2. | Cloud computing setup is tedious and complicated. | Virtualization setup is simple compared to cloud computing.
3. | Cloud computing is highly scalable. | Virtualization is less scalable compared to cloud computing.
4. | Cloud computing is very flexible. | Virtualization is less flexible than cloud computing.
5. | In the condition of disaster recovery, cloud computing relies on multiple machines. | Virtualization relies on a single peripheral device.
6. | In cloud computing, the workload is stateless. | In virtualization, the workload is stateful.
7. | The total cost of cloud computing is higher than virtualization. | The total cost of virtualization is lower than cloud computing.
8. | Cloud computing requires a lot of dedicated hardware. | A single piece of dedicated hardware can do a great job in virtualization.
9. | Cloud computing provides unlimited storage space. | Storage space depends on physical server capacity in virtualization.
10. | Cloud computing is of two types: public cloud and private cloud. | Virtualization is of two types: hardware virtualization and application virtualization.
11. | In cloud computing, configuration is image based. | In virtualization, configuration is template based.
12. | In cloud computing, we utilize the entire server capacity and the servers are consolidated. | In virtualization, the entire servers are on-demand.
3. Explain in detail about the hypervisor and its types? BTL4
(Definition: 2 marks, Concept Explanation: 8 marks, Diagram: 3 marks)
Hypervisor
• Hardware-level virtualization is a virtualization technique that provides an abstract execution environment, in terms of computer hardware, on top of which a guest operating system can be run.
• In this model, the guest is represented by the operating system, the host by the physical computer hardware, the virtual machine by its emulation, and the virtual machine manager by the hypervisor.
• The hypervisor is generally a program, or a combination of software and hardware, that allows the abstraction of the underlying physical hardware.
Hardware level virtualization is also called system virtualization, since it provides ISA to virtual
machines, which is the representation of the hardware interface of a system.
This is to differentiate it from process virtual machines, which expose ABI to virtual machines.
• A fundamental element of hardware virtualization is the hypervisor, or virtual machine manager (VMM).
• It recreates a hardware environment in which guest operating systems are installed.
• There are two major types of hypervisor: Type I and Type II. Figure 2.3 shows the different types of hypervisors.
o Type I hypervisors run directly on top of the hardware.
■ Type I hypervisors take the place of the operating system and interact directly with the ISA interface exposed by the underlying hardware; they emulate this interface in order to allow the management of guest operating systems.
■ This type of hypervisor is also called a native virtual machine since it runs natively on hardware.
o Type II hypervisors require the support of an operating system to provide virtualization services. This means that they are programs managed by the operating system, which interact with it through the ABI and emulate the ISA of virtual hardware for guest operating systems.
■ This type of hypervisor is also called a hosted virtual machine since it is hosted within an operating system.
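Either type of hypervisor is usually driven through a management API rather than directly. The sketch below assumes the libvirt-python bindings are installed and a KVM/QEMU hypervisor is reachable at the given URI; it simply lists the guest operating systems (domains) that the VMM knows about.

```python
# A minimal sketch, assuming libvirt-python and a locally running hypervisor.
import libvirt

conn = libvirt.open("qemu:///system")      # connect to the virtual machine manager
for dom in conn.listAllDomains():
    print(dom.name(), "running" if dom.isActive() else "shut off")
conn.close()
```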
At the bottom layer, the model for the hardware is expressed in terms of the Instruction Set
Architecture (ISA), which defines the instruction set for the processor, registers, memory and an
interrupt management.
• ISA is the interface between hardware and software.
• ISA is important to the operating system (OS) developer (System ISA) and to developers of applications that directly manage the underlying hardware (User ISA).
• The application binary interface (ABI) separates the operating system layer from the applications and libraries, which are managed by the OS.
• ABI covers details such as low-level data types, alignment, and call conventions, and defines a format for executable programs.
• System calls are defined at this level. This interface allows portability of applications and libraries across operating systems that implement the same ABI.
• The highest level of abstraction is represented by the application programming interface (API), which interfaces applications to libraries and the underlying operating system.
• For this purpose, the instruction set exposed by the hardware has been divided into different security classes that define who can operate with them. The first distinction can be made between privileged and nonprivileged instructions.
o Non privileged instructions are those instructions that can be used without interfering with
other tasks because they do not access shared resources.
This category contains all the floating, fixed-point, and arithmetic instructions.
• Privileged instructions are those that are executed under specific restrictions and are mostly
used for sensitive operations, which expose (behavior-sensitive) or modify (control-sensitive) the
privileged state.
• Some types of architecture feature more than one class of privileged instructions and implement a finer control of how these instructions can be accessed.
For instance, a possible implementation features a hierarchy of privileges, illustrated in Figure 2.2 in the form of ring-based security: Ring 0, Ring 1, Ring 2, and Ring 3.
Ring 0 is in the most privileged level and Ring 3 in the least privileged level.
Ring 0 is used by the kernel of the OS, rings 1 and 2 are used by the OS level services, and Ring
3 is used by the user.
Recent systems support only two levels, with Ring 0 for
supervisor mode and Ring 3 for user mode.
2.CPU Virtualization
A VM is a duplicate of an existing computer system in which a majority of the VM
instructions are executed on the host processor in native mode. Thus, unprivileged instructions of
VMs run directly on the host machine for higher efficiency. Other critical instructions should be
handled carefully for correctness and stability. The critical instructions are divided into three
categories: privileged instructions, control-sensitive instructions, and behavior-sensitive
instructions. Privileged instructions execute in a privileged mode and will be trapped if executed
outside this mode. Control-sensitive instructions attempt to change the configuration of resources
used. Behavior-sensitive instructions have different behaviors depending on the configuration of
resources, including the load and store operations over the virtual memory.
A CPU architecture is virtualizable if it supports the ability to run the VM’s privileged and
unprivileged instructions in the CPU’s user mode while the VMM runs in supervisor mode.
When the privileged instructions including control- and behavior-sensitive instructions of a VM
are executed, they are trapped in the VMM. In this case, the VMM acts as a unified mediator
for hardware access from different VMs to guarantee the correctness and stability of the whole
system. However, not all CPU architectures are virtualizable. RISC CPU architectures can be
naturally virtualized because all control- and behavior-sensitive instructions are privileged
instructions. On the contrary, x86 CPU architectures are not primarily designed to support
virtualization. This is because about 10 sensitive instructions, such as SGDT and SMSW, are not
privileged instructions. When these instructions execute in virtualization, they cannot be trapped
in the VMM.
On a native UNIX-like system, a system call triggers the 80h interrupt and passes control to
the OS kernel. The interrupt handler in the kernel is then invoked to process the system call. On a
para-virtualization system such as Xen, a system call in the guest OS first triggers the 80h
interrupt nor-mally. Almost at the same time, the 82h interrupt in the hypervisor is triggered.
Incidentally, control is passed on to the hypervisor as well. When the hypervisor completes its
task for the guest OS system call, it passes control back to the guest OS kernel. Certainly, the
guest OS kernel may also invoke the hypercall while it’s running. Although paravirtualization of
a CPU lets unmodified applications run in the VM, it causes a small performance penalty.
2.1 Hardware-Assisted CPU Virtualization
This technique attempts to simplify virtualization because full or paravirtualization is
complicated. Intel and AMD add an additional mode called privilege mode level (some people
call it Ring-1) to x86 processors. Therefore, operating systems can still run at Ring 0 and the
hypervisor can run at Ring -1. All the privileged and sensitive instructions are trapped in the
hypervisor automatically. This technique removes the difficulty of implementing binary
translation of full virtualization. It also lets the operating system run in VMs without
modification.
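On Linux, hardware-assisted CPU virtualization support can be checked from the CPU flags; the small sketch below assumes a Linux host that exposes /proc/cpuinfo.

```python
# Intel VT-x advertises the "vmx" CPU flag, AMD-V advertises "svm".
with open("/proc/cpuinfo") as f:
    flags = f.read()

if "vmx" in flags:
    print("Intel VT-x available: hypervisor can run below Ring 0 (VMX root mode)")
elif "svm" in flags:
    print("AMD-V available")
else:
    print("No hardware-assisted virtualization detected")
```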
Example 3.5 Intel Hardware-Assisted CPU Virtualization
Although x86 processors are not primarily virtualizable, great effort has been taken to virtualize them. They are widely used in comparison with RISC processors, and the bulk of x86-based legacy systems cannot be discarded easily. Virtualization of x86 processors is detailed in the following sections.
Intel's VT-x technology is an example of hardware-assisted virtualization, as shown in Figure 3.11. Intel calls the privilege level of x86 processors the VMX Root Mode. In order to control the
the
start and stop of a VM and allocate a memory page to maintain the
CPU state for VMs, a set of additional instructions is added. At the time of this writing, Xen,
VMware, and the Microsoft Virtual PC all implement their hypervisors by using the VT-x
technology.
Generally, hardware-assisted virtualization should have high efficiency. However, since the
transition from the hypervisor to the guest OS incurs high overhead switches between processor
modes, it sometimes cannot outperform binary translation. Hence, virtualization systems such as
VMware now use a hybrid approach, in which a few tasks are offloaded to the hardware but the
rest is still done in software. In addition, para-virtualization and hardware-assisted virtualization
can be combined to improve the performance further.
3. Memory Virtualization
Virtual memory virtualization is similar to the virtual memory support provided by modern
operating systems. In a traditional execution environment, the operating system maintains
mappings of virtual memory to machine memory using page tables, which is a one-stage
mapping from virtual memory to machine memory. All modern x86 CPUs include a memory
management unit (MMU) and a translation lookaside buffer (TLB) to optimize virtual memory
performance. However, in a virtual execution environment, virtual memory virtualization
involves sharing the physical system memory in RAM and dynamically allocating it to the
physical memory of the VMs.
That means a two-stage mapping process should be maintained by the guest OS and the VMM,
respectively: virtual memory to physical memory and physical memory to machine memory.
Furthermore, MMU virtualization should be supported, which is transparent to the guest OS. The
guest OS continues to control the mapping of virtual addresses to the physical memory addresses
of VMs. But the guest OS cannot directly access the actual machine memory. The VMM is
responsible for mapping the guest physical memory to the actual machine memory. Figure 3.12
shows the two-level memory mapping procedure.
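The two-stage mapping can be made concrete with a toy lookup; the page-table contents below are invented purely to illustrate guest-virtual to guest-physical to machine address translation.

```python
# Toy two-stage address translation: the guest OS maps virtual pages to guest
# "physical" pages, and the VMM maps guest physical pages to machine pages.
guest_page_table = {0x1000: 0x2000}    # guest virtual -> guest physical (guest OS)
vmm_page_table   = {0x2000: 0x9000}    # guest physical -> machine (VMM, shadow/EPT)

def translate(gva: int) -> int:
    gpa = guest_page_table[gva]         # stage 1: controlled by the guest OS
    hpa = vmm_page_table[gpa]           # stage 2: controlled by the VMM
    return hpa

print(hex(translate(0x1000)))           # 0x9000
```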
Since each page table of the guest OSes has a separate page table in the VMM corresponding
to it, the VMM page table is called the shadow page table. Nested page tables add another layer
of indirection to virtual memory. The MMU already handles virtual-to-physical translations as
defined by the OS. Then the physical memory addresses are translated to machine addresses
using another set of page tables defined by the hypervisor. Since modern operating systems
maintain a set of page tables for every process, the shadow page tables will get flooded.
Consequently, the performance overhead and cost of memory will be very high.
VMware uses shadow page tables to perform virtual-memory-to-machine-memory address
translation. Processors use TLB hardware to map the virtual memory directly to the machine
memory to avoid the two levels of translation on every access. When the guest OS changes the
virtual memory to a physical memory mapping, the VMM updates the shadow page tables to
enable a direct lookup. The AMD Barcelona processor has featured hardware-assisted memory
virtualization since 2007. It provides hardware assistance to the two-stage address translation in a
virtual execution environment by using a technology called nested paging.
Example 3.6 Extended Page Table by Intel for Memory Virtualization
Since the efficiency of the software shadow page table technique was too low, Intel developed a
hardware-based EPT technique to improve it, as illustrated in Figure 3.13. In addition, Intel
offers a Virtual Processor ID (VPID) to improve use of the TLB. Therefore, the performance of
memory virtualization is greatly improved. In Figure 3.13, the page tables of the guest OS and
EPT are all four-level.
When a virtual address needs to be translated, the CPU will first look for the L4 page table
pointed to by Guest CR3. Since the address in Guest CR3 is a physical address in the guest OS,
the CPU needs to convert the Guest CR3 GPA to the host physical address (HPA) using EPT. In
this procedure, the CPU will check the EPT TLB to see if the translation is there. If there is no
required translation in the EPT TLB, the CPU will look for it in the EPT. If the CPU cannot find
the translation in the EPT, an EPT violation exception will be raised.
When the GPA of the L4 page table is obtained, the CPU will calculate the GPA of the L3 page
table by using the GVA and the content of the L4 page table. If the entry corresponding to the
GVA in the L4
page table is marked not present (a page fault), the CPU will generate a page fault interrupt and let the guest OS kernel handle it. When the GPA of the L3 page table is obtained, the CPU will look in the EPT to get the HPA of the L3 page table, as described earlier. To get the HPA corresponding to a GVA, the CPU needs to walk the EPT five times, and each walk requires four memory accesses. Therefore, there are 20 memory accesses in the worst case, which is still very slow. To overcome this shortcoming, Intel increased the size of the EPT TLB to decrease the number of memory accesses.
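The worst-case figure quoted above follows from simple counting: with four-level guest page tables, the CPU makes five guest-physical references (the four page-table levels plus the final data address), and each of them needs its own four-level EPT walk. A small sketch of that arithmetic, assuming the four-level layout of Figure 3.13:

guest_levels = 4                               # four-level page tables maintained by the guest OS
ept_levels = 4                                 # four-level Extended Page Table
guest_physical_references = guest_levels + 1   # 4 table lookups + the final guest physical address
worst_case_memory_accesses = guest_physical_references * ept_levels
print(worst_case_memory_accesses)              # 20, matching the worst case described in the text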
4. I/O Virtualization
I/O virtualization involves managing the routing of I/O requests between virtual devices and the
shared physical hardware. At the time of this writing, there are three ways to implement I/O
virtualization: full device emulation, para-virtualization, and direct I/O. Full device emulation is
the first approach for I/O virtualization. Generally, this approach emulates well-known, real-
world devices.
All the functions of a device or bus infrastructure, such as device enumeration, identification,
interrupts, and DMA, are replicated in software. This software is located in the VMM and acts as
a virtual device. The I/O access requests of the guest OS are trapped in the VMM which interacts
with the I/O devices. The full device emulation approach is shown in Figure 3.14.
A single hardware device can be shared by multiple VMs that run concurrently. However,
software emulation runs much slower than the hardware it emulates [10,15]. The para-
virtualization method of I/O virtualization is typically used in Xen. It is also known as the split
driver model consisting of a frontend driver and a backend driver. The frontend driver is running
in Domain U and the backend driver is running in Domain 0. They interact with each other via a
block of shared memory. The frontend driver manages the I/O requests of the guest OSes and the
backend driver is responsible for managing the real I/O devices and multiplexing the I/O data of
different VMs. Although para-I/O-virtualization achieves better device performance than full
device emulation, it comes with a higher CPU overhead.
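The split driver model can be pictured as two halves exchanging requests through a shared buffer. The sketch below is only illustrative (a Python deque stands in for the shared-memory ring that Xen actually uses; all names are invented):

from collections import deque

shared_ring = deque()              # stands in for the shared memory between Domain U and Domain 0

class FrontendDriver:              # runs in Domain U (the guest)
    def submit_io(self, request):
        shared_ring.append(request)        # queue the guest's I/O request

class BackendDriver:               # runs in Domain 0
    def service(self):
        while shared_ring:
            request = shared_ring.popleft()
            # multiplex requests from different VMs onto the real device
            print("issuing to physical device:", request)

FrontendDriver().submit_io(("read", "block 42"))
BackendDriver().service()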
Direct I/O virtualization lets the VM access devices directly. It can achieve close-to-native
performance without high CPU costs. However, current direct I/O virtualization implementations
focus on networking for mainframes. There are a lot of challenges for commodity hardware
devices. For example, when a physical device is reclaimed (required by workload migration) for
later reassignment, it may have been set to an arbitrary state (e.g., DMA to some arbitrary memory locations) that can cause it to function incorrectly or even crash the whole system. Since software-
based I/O virtualization requires a very high overhead of device emulation, hardware-assisted
I/O
virtualization is critical. Intel VT-d supports the remapping of I/O DMA transfers and device-
generated interrupts. The architecture of VT-d provides the flexibility to support multiple usage
models that may run unmodified, special-purpose, or “virtualization-aware” guest OSes.
Another way to help I/O virtualization is via self-virtualized I/O (SV-IO) [47]. The key idea of
SV-IO is to harness the rich resources of a multicore processor. All tasks associated with
virtualizing an I/O device are encapsulated in SV-IO. It provides virtual devices and an
associated access API to VMs and a management API to the VMM. SV-IO defines one virtual
interface (VIF) for every kind of virtualized I/O device, such as virtual network interfaces, virtual block devices (disk), virtual camera devices, and others. The guest OS interacts with the VIFs via VIF device drivers. Each VIF consists of two message queues. One is for outgoing
messages to the devices and the other is for incoming messages from the devices. In addition,
each VIF has a unique ID for identifying it in SV-IO.
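A VIF as described above can be pictured as an object carrying a unique ID and two message queues. A minimal sketch with invented names, just to make the structure concrete:

from collections import deque

class VirtualInterface:                 # one VIF per kind of virtualized I/O device
    def __init__(self, vif_id, kind):
        self.vif_id = vif_id            # unique ID identifying the VIF inside SV-IO
        self.kind = kind                # e.g. "network", "block", "camera"
        self.outgoing = deque()         # messages from the guest to the device
        self.incoming = deque()         # messages from the device to the guest

net_vif = VirtualInterface(vif_id=1, kind="network")
net_vif.outgoing.append("transmit packet")
net_vif.incoming.append("packet received")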
3. Explain in detail about Hypervisor and Xen architecture?BTL 4
(Definition:2 marks,Diagram:3 marks,Concept explanation:10 marks)
• The hypervisor supports hardware-level virtualization on bare-metal devices like CPU, memory, disk and network interfaces.
• The hypervisor software sits directly between the physical hardware and its OS. This virtualization layer is referred to as either the VMM or the hypervisor.
• The hypervisor provides hypercalls for the guest OSes and applications. Depending on the functionality, a hypervisor can assume a microkernel architecture like the Microsoft Hyper-V.
• It can also assume a monolithic hypervisor architecture like the VMware ESX for server virtualization.
• A microkernel hypervisor includes only the basic and unchanging functions (such as physical memory management and processor scheduling).
• The device drivers and other changeable components are outside the hypervisor.
A monolithic hypervisor implements all the aforementioned functions, including those of the
device drivers. Therefore, the size of the hypervisor code of a micro-kernel hypervisor is smaller
than that of a monolithic hypervisor.
Essentially, a hypervisor must be able to convert physical devices into
virtual resources dedicated for the deployed VM to use.
Xen architecture
• Xen is an open source hypervisor program developed by Cambridge University.
• Xen is a microkernel hypervisor, which separates the policy from the mechanism.
• The Xen hypervisor implements all the mechanisms, leaving the policy to be handled by Domain 0. Figure 2.4 shows the architecture of the Xen hypervisor.
Xen does not include any device drivers natively. It just provides a mechanism by which a guest
OS
can have direct access to the physical devices.
• As a result, the size of the Xen hypervisor is kept rather small.
• Xen provides a virtual environment located between the hardware and the OS.
The Docker daemon is responsible for managing various Docker services and
communicates with other daemons to do so. Using Docker's API requests, the daemon
manages Docker objects such as images, containers, networks, and volumes.
Docker Client
The Docker client allows users to interact with Docker and utilize its functionalities. It
communicates with the Docker daemon using the Docker API.
The Docker client has the capability to communicate with multiple daemons. When a
user runs a Docker command on the terminal, the instructions are sent to the daemon.
The Docker daemon receives these instructions in the form of commands and REST
API requests from the Docker client.
The primary purpose of the Docker client is to facilitate actions such as pulling
images from the Docker registry and running them on the Docker host.
Commonly used commands by Docker clients include docker build, docker pull, and
docker run.
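For illustration, the same operations can also be driven programmatically through Docker's Python SDK, which talks to the daemon over the same API the CLI uses. This sketch assumes the docker Python package is installed and a local daemon is running; the image names are examples only:

import docker

client = docker.from_env()                       # connect to the local Docker daemon

client.images.pull("alpine:latest")              # equivalent of: docker pull alpine:latest

output = client.containers.run(                  # equivalent of: docker run --rm alpine echo hello
    "alpine:latest", ["echo", "hello"], remove=True)
print(output.decode().strip())

# equivalent of: docker build -t myapp .   (assumes a Dockerfile in the current directory)
image, build_logs = client.images.build(path=".", tag="myapp")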
Docker Host
A Docker host is a machine that is capable of running multiple containers and is
equipped with the Docker daemon, Images, Containers, Networks, and Storage to
enable containerization.
Docker Registry
Docker images are stored in the Docker registry, which can either be a public registry
like Docker Hub, or a private registry that can be set up.
To obtain required images from a configured registry, the 'docker run' or 'docker pull'
commands can be used. Conversely, to push images into a configured registry, the
'docker push' command can be used.
Docker Objects
When working with Docker, various objects such as images, containers, volumes, and
networks are created and utilized.
Docker Images
A docker image is a set of instructions used to create a container, serving as a read-
only template that can store and transport applications.
Images play a critical role in the Docker ecosystem by enabling collaboration among
developers in ways that were previously impossible
Docker Storage
Docker storage is responsible for storing data within the writable layer of the container, and this function is carried out by a storage driver. The storage driver is responsible for managing and controlling the images and containers on the Docker host. There are several types of Docker storage:
o Data Volumes, which can be mounted directly into the container's filesystem, are essentially directories or files on the Docker Host filesystem.
o Volume Containers are used to maintain the state of the data produced by a running container; Docker volume file systems are mounted on Docker containers. These volumes are stored on the host, making it easy for users to exchange file systems among containers and back up data.
o Directory Mounts, where a host directory is mounted as a volume in the container, can also be specified.
o Finally, Docker volume plugins allow integration with external volumes, such as Amazon EBS, to maintain the state of the container.
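As a concrete illustration, a named volume can be created and mounted into a container with Docker's Python SDK (a sketch only; it assumes a local daemon and uses placeholder names):

import docker

client = docker.from_env()

client.volumes.create(name="app-data")           # named volume stored on the Docker host

# Mount the volume at /data inside the container; anything written there
# outlives the container itself.
client.containers.run(
    "alpine:latest",
    ["sh", "-c", "echo persisted > /data/state.txt"],
    volumes={"app-data": {"bind": "/data", "mode": "rw"}},
    remove=True,
)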
Docker networking
Docker networking provides complete isolation for containers, allowing users to link
them to multiple networks with minimal OS instances required to run workloads.
There are different types of Docker networks available, including:
o Bridge: This is the default network driver and is suitable for containers that need to communicate on the same Docker host.
o Host: This network is used when there is no need for isolation between the container and the host.
o Overlay: This network allows containers running on different Docker hosts to communicate with each other.
o None: This network disables all networking.
o Macvlan: This assigns a Media Access Control (MAC) address to containers, which makes a container look like a physical device on the network.
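A short sketch of creating a user-defined bridge network and attaching containers to it with the Python SDK (illustrative names only; assumes a local daemon):

import docker

client = docker.from_env()

net = client.networks.create("app-net", driver="bridge")   # user-defined bridge network

# First container joins the network at start-up.
c1 = client.containers.run("alpine:latest", ["sleep", "60"],
                           detach=True, network="app-net")

# Second container is attached afterwards, so the two can reach each other by name.
c2 = client.containers.run("alpine:latest", ["sleep", "60"], detach=True)
net.connect(c2)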
9. Explain the Docker Containers?BTL4
(Definition:2 marks,Concept explanation:11 marks)
Containers can be connected to one or multiple networks, storage can be attached,
and a new image can even be created based on its current state.
By default, a container is isolated from other containers and its host machine. It is
possible to control the level of isolation for a container's network, storage or other
underlying subsystems from other containers or from the host machine.
A container is defined by its image and configuration options provided during
creation or start-up.
Any changes made to a container's state that are not stored in persistent storage will
be lost once the container is removed.
Advantages of Docker Containers
Docker provides a consistent environment for running applications from design and
development to production and maintenance, which eliminates production issues and
allows developers to focus on introducing quality features instead of debugging errors
and resolving configuration/compatibility issues.
Docker also allows for instant creation and deployment of containers for every
process, without needing to boot the OS, which saves time and increases agility.
Creating, destroying, stopping or starting a container can be done with ease, and
YAML configuration files can be used to automate deployment and scale the
infrastructure.
In multi-cloud environments with different configurations, policies and processes,
Docker containers can be easily moved across any environment, providing efficient
management. However, it is important to remember that data inside the container is
permanently destroyed once the container is destroyed.
Docker environments are highly secure, as applications running in Docker containers
are isolated from each other and possess their own resources without interacting with
other containers. This allows for better control over traffic flow and easy removal of
applications.
Docker enables significant infrastructure cost reduction, with minimal costs for
running applications when compared with VMs and other technologies. This can lead
to increased ROI and operational cost savings with smaller engineering teams.
PART C
15 Marks
1. What are the other types of virtualization?BTL1
(Definition:2 marks,Diagram:5 marks,Concept explanation:8 marks)
Other than execution virtualization, other types of virtualization provide an abstract
environment to interact with.
1. Programming language-level virtualization
Programming language-level virtualization is mostly used to achieve ease of deployment of applications, managed execution, and portability across different platforms and operating systems.
• It consists of a virtual machine executing the byte code of a program, which is the result of the compilation process.
• Compilers implemented and used this technology to produce a binary format representing the machine code for an abstract architecture.
• The characteristics of this architecture vary from implementation to implementation.
• Generally, these virtual machines constitute a simplification of the underlying hardware instruction set and provide some high-level instructions that map some of the features of the languages compiled for them.
• At runtime, the byte code can be either interpreted or compiled on the fly against the underlying hardware instruction set.
• Programming language-level virtualization has a long trail in computer science history; it was originally used in 1966 for the implementation of the Basic Combined Programming Language (BCPL), a language for writing compilers and one of the ancestors of the C programming language.
• Other important examples of the use of this technology have been UCSD Pascal and Smalltalk.
• Virtual machine programming languages became popular again with Sun's introduction of the Java platform in 1996. The Java virtual machine was originally designed for the execution of programs written in the Java language, but other languages such as Python, Pascal, Groovy and Ruby were later made available.
• The ability to support multiple programming languages has been one of the key elements of the Common Language Infrastructure (CLI), which is the specification behind the .NET Framework.
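Python itself is an everyday example of this kind of virtualization: source code is compiled to byte code for an abstract stack machine, which the interpreter then executes. The standard dis module makes that byte code visible:

import dis

def add(a, b):
    return a + b

# Show the byte code the CPython virtual machine executes for add();
# the instructions target an abstract stack machine, not real hardware.
dis.dis(add)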
2.Application server virtualization
Application server virtualization abstracts a collection of application servers that
provide the same services as a single virtual application server by using load
balancing strategies and providing a high availability infrastructure for the services
hosted in the application server.
This is a particular form of virtualization and serves the same purpose as storage virtualization: providing a better quality of service rather than emulating a different environment.
Virtualization Support and Disaster Recovery
• One very distinguishing feature of cloud computing infrastructure is the use of system virtualization and the modification to provisioning tools.
• Virtualization of servers on a shared cluster can consolidate web services.
• In cloud computing, virtualization also means the resources and fundamental infrastructure are virtualized.
• The user will not care about the computing resources that are used for providing the services.
• Cloud users do not need to know, and have no way to discover, the physical resources that are involved while processing a service request. In addition, application developers do not care about some infrastructure issues such as scalability and fault tolerance; they focus on service logic.
• In many cloud computing systems, virtualization software is used to virtualize the hardware. System virtualization software is a special kind of software which simulates the execution of hardware and runs even unmodified operating systems.
• Cloud computing systems use virtualization software as the running environment for legacy software such as old operating systems and unusual applications.
3. Hardware Virtualization
Virtualization software is also used as the platform for developing new cloud
applications that enable developers to use any operating systems and programming
environments they like.
The development environment and deployment environment can now be the same,
which eliminates some runtime problems.
VMs provide flexible runtime services to free users from worrying about the system
environment.
• Using VMs in a cloud computing platform ensures extreme flexibility for users.
As the computing resources are shared by many users, a method is required
to maximize the user's privileges and still keep them separated safely.
Traditional sharing of cluster resources depends on the user and group mechanism on a system.
Such sharing is not flexible.
o Users cannot customize the system for their special purposes.
o Operating systems cannot be changed.
o The separation is not complete.
An environment that meets one user's requirements often cannot satisfy another user
Virtualization allows us to have full privileges while keeping them separate.
Users have full access to their own VMs, which are completely separate from other
user's VMs.
• Multiple VMs can be mounted on the same physical server. Different VMs may run with different OSes.
The virtualized resources form a resource pool.
The virtualization is carried out by special servers dedicated to generating the
virtualized resource pool. The virtualized infrastructure (black box in the middle) is
built with many virtualizing integration managers.
These managers handle loads, resources, security, data, and provisioning functions.
Figure 3.2 shows two VM platforms.
• Each platform carries out a virtual solution to a user job. All cloud services are managed in the boxes at the top.
4. Virtualization Support in Public Clouds
AWS provides extreme flexibility (VMs) for users to execute their own applications.
GAE provides limited application level virtualization for users to build applications
only based on the services that are created by Google.
Microsoft provides programming level virtualization (.NET virtualization) for users to
build their applications.
The VMware tools apply to workstations, servers, and virtual infrastructure.
• The Microsoft tools are used on PCs and some special servers.
• The XenEnterprise tool applies only to Xen-based servers.
5. Virtualization for IaaS
VM technology has increased in ubiquity.
This has enabled users to create customized environments atop physical infrastructure
for cloud computing.
Use of VMs in clouds has the following distinct benefits:
o System administrators can consolidate the workloads of underutilized servers onto fewer servers.
o VMs have the ability to run legacy code without interfering with other APIs.
o VMs can be used to improve security through the creation of sandboxes for running applications with questionable reliability.
o Virtualized cloud platforms can apply performance isolation, letting providers offer some guarantees and better QoS to customer applications.
2. Explain in detail about Containers with advantages and disadvantages?BTL1
(Definition:2marks,Concept Explanation:7 marks,Diagram:2 marks,Advantages:2
marks,Disadvantages:2 marks)
Containers are software packages that are lightweight and self- contained, and they
comprise all the necessary dependencies to run an application.
The dependencies include external third-party code packages, system libraries, and
other operating system-level applications.
These dependencies are organized in stack levels that are higher than the operating
system.
Advantages:
One advantage of using containers is their fast iteration speed. Due to their
lightweight nature and focus on high-level software, containers can be quickly
modified and updated.
o Additionally, container runtime systems often provide a robust ecosystem, including a hosted public repository of pre-made containers.
o This repository offers popular software applications such as databases and messaging systems that can be easily downloaded and executed, saving valuable time for development teams.
Disadvantages:
o As containers share the same hardware system beneath the operating system layer, any vulnerability in one container can potentially affect the underlying hardware and break out of the container.
o Although many container runtimes offer public repositories of pre-built containers, there is a security risk associated with using these containers, as they may contain exploits or be susceptible to hijacking by malicious actors.
Examples:
o Docker is the most widely used container runtime that offers Docker Hub, a public repository of containerized applications that can be easily deployed to a local Docker runtime.
o RKT, pronounced "Rocket," is a container system focused on security, ensuring that insecure container functionality is not allowed by default.
o Linux Containers (LXC) is an open-source container runtime system that isolates system-level processes from one another and is utilized by Docker in the background.
o CRI-O, on the other hand, is a lightweight alternative to using Docker as the runtime for Kubernetes, implementing the Kubernetes Container Runtime Interface (CRI) to support Open Container Initiative (OCI)-compatible runtimes.
Virtual Machines
Virtual machines are software packages that contain a complete emulation of low-
level hardware devices, such as CPU, disk, and networking devices. They may also
include a complementary software stack that can run on the emulated hardware.
Together, these hardware and software packages create a functional snapshot of a
computational system.
Advantages:
o Virtual machines provide full isolation security since they operate as standalone systems, which means that they are protected from any interference or exploits from other virtual machines on the same host.
o Though a virtual machine can still be hijacked by an exploit, the affected virtual machine will be isolated and cannot contaminate other adjacent virtual machines.
o On the other hand, virtual machines can be interactively developed, unlike containers, which are usually static definitions of the required dependencies and configuration to run the container.
o After defining the basic hardware specifications for a virtual machine, it can be treated as a bare-bones computer.
o One can manually install software to the virtual machine and snapshot the virtual machine to capture the present configuration state.
o The virtual machine snapshots can then be utilized to restore the virtual machine to that particular point in time or to create additional virtual machines with that configuration.
Disadvantages:
o Virtual machines are known for their slow iteration speed due to the fact that they involve a complete system stack.
o Any changes made to a virtual machine snapshot can take a considerable amount of time to rebuild and validate that they function correctly.
o Another issue with virtual machines is that they can occupy a significant amount of storage space, often several gigabytes in size.
o This can lead to disk space constraints on the host machine where the virtual machines are stored.
Examples:
o VirtualBox is an open source emulation system that emulates x86 architecture and is owned by Oracle. It is widely used and has a set of additional tools to help develop and distribute virtual machine images.
o VMware is a publicly traded company that provides a hypervisor along with its virtual machine platform, which allows deployment and management of multiple virtual machines. VMware offers a robust UI for managing virtual machines and is a popular enterprise virtual machine solution with support.
o QEMU is a powerful virtual machine option that can emulate any generic hardware architecture. However, it lacks a graphical user interface for configuration or execution and is a command-line-only utility; it is also one of the fastest virtual machine options available.
3.Explain Docker Repositories with its features?BTL1
(Definition:2 marks,Concept explanation:13 marks)
The Docker Hub is a cloud-based repository service where users can push their
Docker Container Images and access them from anywhere via the internet. It offers the option to push images as private or public and is primarily used by DevOps teams.
The Docker Hub is an open-source tool that is available for all operating systems. It
functions as a storage system for Docker images and allows users to pull the required
images when needed. However, it is necessary to have a basic knowledge of Docker to
push or pull images from the Docker Hub. If a developer team wants to share a
project along with its dependencies for testing, they can push the code to Docker Hub.
To do this, the developer must create images and push them to Docker Hub. The
testing team can then pull the same image from Docker Hub without needing any
files, software, or plugins, as the developer has already shared the image with all
dependencies.
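The push/pull workflow can be sketched with the Docker Python SDK as well (repository name and tag below are placeholders, and pushing requires prior authentication, e.g. via client.login or docker login):

import docker

client = docker.from_env()

# Developer side: tag a locally built image and push it to Docker Hub.
image = client.images.get("myapp")                 # assumes the image was built earlier
image.tag("exampleuser/myapp", tag="v1")           # "exampleuser/myapp:v1" is a placeholder
client.images.push("exampleuser/myapp", tag="v1")

# Testing side: pull the same image anywhere, with all dependencies included.
client.images.pull("exampleuser/myapp", tag="v1")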
Features of Docker Hub
Docker Hub simplifies the storage, management, and sharing of images with others. It
provides security checks for images and generates comprehensive reports on any
security issues.
Additionally, Docker Hub can automate processes like Continuous Deployment and
Continuous Testing by triggering webhooks when a new image is uploaded.
Through Docker Hub, users can manage permissions for teams, users, and
organizations.
Moreover, Docker Hub can be integrated with tools like GitHub and Jenkins,
streamlining workflows.
Advantages of Docker Hub
Docker Container Images have a lightweight design, which enables us to push images
in a matter of minutes using a simple command.
This method is secure and offers the option of pushing private or public images.
Docker Hub is a critical component of industry workflows as its popularity grows,
serving as a bridge between developer and testing teams.
Making code, software or any type of file available to the public can be done easily by
publishing the images on the Docker Hub as public.
UNIT IV
CLOUD DEPLOYMENT ENVIRONMENT
SYLLABUS: Google App Engine – Amazon AWS – Microsoft Azure; Cloud
Software Environments – Eucalyptus – OpenStack.
PART A
2 Marks
1. Describe about GAE?BTL1
Google's App Engine (GAE) offers a PaaS platform supporting various cloud and web applications. This platform specializes in supporting scalable (elastic) web applications. GAE enables users to run their applications on the large number of data centers associated with Google's search engine operations.
2. Mention the components maintained in a node of Google cloud
platform?BTL1
GFS is used for storing large amounts of data.
MapReduce is for use in application program development.
Chubby is used for distributed application lock services.
BigTable offers a storage service for accessing structured data.
3. List the functional modules of GAE?BTL1
Datastore
Application runtime environment
Software development kit (SDK)
Administration console
GAE web service infrastructure
4. List some of the storage tools in Azure?BTL1
Blob, Queue, File, and Disk Storage, Data Lake Store, Backup, and Site Recovery.
5. List the applications of GAE?BTL1
Well-known GAE applications include the Google Search Engine, Google Docs, Google Earth, and Gmail. These applications can support large numbers of users simultaneously. Users can interact with Google applications via the web interface provided by each application. Third-party application providers can use GAE to build cloud applications for providing services.
6. Mention the goals for design and implementation of the BigTable
system?BTL1
The applications want asynchronous processes to be continuously updating different pieces of data and want access to the most current data at all times. The database needs to support very high read/write rates, and the scale might be millions of operations per second. The application may need to examine data changes over time.
7. Describe about Openstack?BTL1
The OpenStack project is an open source cloud computing platform for all types of clouds, which aims to be simple to implement, massively scalable, and feature rich. Developers and cloud computing technologists from around the world create the OpenStack project. OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of interrelated services.
8. List the key services of OpenStack?BTL1
The OpenStack system consists of several key services that are separately installed.
Compute, Identity, Networking, Image, Block Storage, Object Storage, Telemetry,
Orchestration and Database services.
9. Describe about Eucalyptus?BTL1
Eucalyptus is an open-source cloud computing software architecture based on Linux
that offers Infrastructure as a Service (IaaS) and a storage platform.
It delivers fast and effective computing services and is designed to be compatible with
Amazon's EC2 cloud and Simple Storage Service (S3).
Eucalyptus Command Line Interfaces (CLIs) have the capability to manage both Amazon Web Services and private instances.
10. List different types of computing environment?BTL1
Mainframe
Client-Server
Cloud Computing
Mobile Computing
Grid Computing
11. Write short note on Amazon EC2?BTL1
Amazon Elastic Compute Cloud (Amazon EC2) is a cloud-based web service that offers secure and scalable computing capacity. It allows organizations to customize virtual compute capacity in the cloud, with the flexibility to choose from a range of operating systems and resource configurations such as CPU, memory, and storage. With Amazon EC2, capacity can be increased or decreased within minutes, and it supports the use of hundreds or thousands of server instances simultaneously. This is all managed through web service APIs, enabling applications to scale themselves up or down as needed.
12. Mention the advantages of Dynamo DB?BTL1
Amazon DynamoDB is a NoSQL database service that offers fast and flexible storage for applications requiring consistent, low-latency access at any scale. It is fully managed and supports both document and key-value data models.
13. What is Microsoft Azure?BTL1
Azure is a cloud platform developed by Microsoft, similar to Google Cloud and Amazon Web Services (AWS). It provides access to Microsoft's resources, such as virtual machines, analytical and monitoring tools, and fast data processing. Azure is a cost-effective platform with simple pricing based on the "Pay As You Go" model, which means users only pay for the resources they use.
14. List the three modes of network component in Eucalyptus?BTL1
Static mode, which allocates IP addresses to instances.
System mode, which assigns a MAC address and connects the instance's network interface to the physical network via the NC.
Managed mode, which creates a local network of instances.
15. Mention the disadvantages of AWS?BTL1
AWS can present a challenge due to its vast array of services and functionalities, which may be hard to comprehend and utilize, particularly for inexperienced users. The cost of AWS can be high, particularly for high-traffic applications or when operating multiple services.
PART B
13 Marks
1. What is Google App Engine and explain its architecture?BTL1
(Definition:2 marks,Concept explanation:8,Diagram:3 marks)
Google has the world's largest search engine facilities. The company has extensive
experience in massive data processing that has led to new
insights into data-center design and novel programming models that scale to
incredible sizes.
The Google platform is based on its search engine expertise. Google has hundreds of data centers and has installed more than 460,000 servers worldwide.
For example, 200 Google data centers are used at one time for a number of cloud
applications.
Data items are stored in text, images, and video and are replicated to tolerate faults or
failures.
Google's App Engine (GAE) offers a PaaS platform supporting various cloud and web applications. Google has pioneered cloud development by leveraging the large number of data centers it operates. For example, Google pioneered cloud services in Gmail, Google Docs, and Google Earth, among other applications. These applications can support a large number of users simultaneously with high availability (HA). Notable technology achievements include the Google File System (GFS), MapReduce, BigTable, and Chubby. In 2008, Google announced the GAE web application platform, which is becoming a common platform for many small cloud service providers. This platform specializes in supporting scalable (elastic) web applications. GAE enables users to run their applications on the large number of data centers associated with Google's search engine operations.
1.1 GAE Architecture
GFS is used for storing large amounts of data.
MapReduce is for use in application program development. Chubby is used for distributed application lock services. BigTable offers a storage service for accessing structured data.
Users can interact with Google applications via the web interface provided by each
application.
Third-party application providers can use GAE to build cloud applications for
providing services.
The applications all run in data centers under tight management by Google engineers.
Inside each data center, there are thousands of servers forming different clusters.
The Node Controller manages the lifecycle of instances and interacts with the
operating system, hypervisor, and Cluster Controller. On the other hand, the Cluster Controller manages multiple Node Controllers and the Cloud Controller, which acts as the front-end for the entire architecture.
The Storage Controller, also known as Walrus, allows the creation of snapshots of
volumes and persistent block storage over VM instances.
Eucalyptus operates in different modes, each with its own set of features. In Managed Mode, users are assigned security groups that are isolated by VLAN between the Cluster Controller and Node Controller. In Managed (No VLAN) mode,
however, the root user on the virtual machine can snoop into other virtual machines
running on the same network layer.The System Mode is the simplest mode with the
least number of features, where a MAC address is assigned to a virtual machine
instance and attached to the Node Controller's bridge Ethernet device. Finally, the
Static Mode is similar to System Mode but provides more control over the assignment
of IP addresses, as a MAC address/IP address pair is mapped to a static entry within
the DHCP server.
Features of Eucalyptus
Eucalyptus offers various components to manage and operate cloud infrastructure. The Eucalyptus Machine Image is an example of an image, which is software packaged and uploaded to the cloud; when it is run, it becomes an instance.
The networking component can be divided into three modes: Static mode, which allocates IP addresses to instances; System mode, which assigns a MAC address and connects the instance's network interface to the physical network via the NC; and Managed mode, which creates a local network of instances.
Access control is used to limit user permissions. Elastic Block Storage provides block-level storage volumes that can be attached to instances. Auto-scaling and load balancing are used to create or remove instances or services based on demand.
Advantages of Eucalyptus
Eucalyptus is a versatile solution that can be used for both private and public cloud
computing.
Users can easily run Amazon or Eucalyptus machine images on either type of cloud.
Additionally, its API is fully compatible with all Amazon Web Services, making it
easy to integrate with other tools like Chef and Puppet for DevOps.
Although it is not as widely known as other cloud computing solutions like OpenStack and CloudStack, Eucalyptus has the potential to become a viable alternative. It enables hybrid cloud computing, allowing users to combine public and private clouds for their needs. With Eucalyptus, users can easily transform their data centers into private clouds and extend their services to other organizations.
PART C
15 Marks
1. Explain in detail about Amazon AWS and its services?BTL4
(Definition:2 marks,Diagram:3 marks,Concept explanation:10 marks)
Amazon Web Services (AWS), a subsidiary of Amazon.com, has invested significant
resources in IT infrastructure distributed globally, which is shared among all AWS
account holders worldwide.
These accounts are isolated from each other, and on-demand IT resources are provided to account holders on a pay-as-you-go pricing model with no upfront costs. AWS provides flexibility by allowing users to pay only for the services they need, helping enterprises reduce their capital expenditure on building private IT infrastructure. AWS has a physical fiber network that connects availability zones, regions, and edge locations, with maintenance costs borne by AWS. While security of the cloud is AWS's responsibility, security in the cloud is the responsibility of the customer. Performance efficiency in the cloud has four main areas: selection, review, monitoring, and tradeoff.
Advantages of AWS
AWS provides the convenience of easily adjusting resource usage based on your
changing needs, resulting in cost savings and ensuring that your application always
has sufficient resources.
With multiple data centers and a commitment to 99.99% availability for many of its services, AWS offers a reliable and secure infrastructure. Its flexible platform includes a variety of services and tools that can be combined to build and deploy various applications. Additionally, AWS's pay-as-you-go pricing model means users only pay for the resources they use, eliminating upfront costs and long-term commitments.
Disadvantages:
AWS can present a challenge due to its vast array of services and functionalities, which may be hard to comprehend and utilize, particularly for inexperienced users. The cost of AWS can be high, particularly for high-traffic applications or when operating multiple services. Furthermore, service expenses can escalate over time, necessitating frequent expense monitoring. AWS's management of various infrastructure elements may limit authority over certain parts of your environment and application.
Global infrastructure
The AWS infrastructure spans across the globe and consists of geographical regions,
each with multiple availability zones that are physically isolated from each other.
When selecting a region, factors such as latency optimization, cost reduction, and
government regulations are considered. In case of a failure in one zone, the infrastructure in other availability zones remains operational, ensuring business continuity. AWS's largest region, Northern Virginia, has six availability zones that are connected by high-speed fiber-optic networking.
To further optimize content delivery, AWS has over 100 edge locations worldwide that support the CloudFront content delivery network. This network caches frequently accessed content, such as images and videos, at these edge locations and distributes them globally for faster delivery and lower latency for end-users. Additionally, CloudFront offers protection against DDoS attacks.
AWS Service model
AWS provides three main types of cloud computing services:
Infrastructure as a Service (IaaS): This service gives developers access to basic
building blocks such as data storage space, networking features, and virtual or
dedicated computer hardware. It provides a high degree of flexibility and management
control over IT resources. Examples of IaaS services on AWS include VPC, EC2, and EBS.
Platform as a Service (PaaS): In this service model, AWS manages the underlying
infrastructure, including the operating system and hardware. This allows developers to
be more efficient and focus on deploying and managing applications rather than
managing infrastructure. Examples of PaaS services on AWS include RDS, EMR, and
ElasticSearch.
Software as a Service (SaaS): This service model provides complete end-user
applications that typically run on a browser. The service provider runs and manages
the software, so end-users only need to worry about using the software that suits their
needs. Examples of SaaS applications on AWS include Salesforce.com, web-based
email, and Office 365.
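As an illustration of the IaaS building blocks mentioned above, an EC2 instance can be launched programmatically with the boto3 SDK. This is a sketch only: the AMI ID is a placeholder, and valid AWS credentials and permissions are assumed:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")    # Northern Virginia region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",                  # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])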
2.Explain in detail about OpenStack?BTL4
(Definition:2 marks,Diagram:4 marks,Concept explanation: 9 marks)
The OpenStack project is an open source cloud computing platform for all types of clouds, which aims to be simple to implement, massively scalable, and feature rich. Developers and cloud computing technologists from around the world create the OpenStack project. OpenStack provides an Infrastructure as a Service (IaaS) solution through a set of interrelated services. Each service offers an application programming interface (API) that facilitates this integration. Depending on their needs, administrators can install some or all services.
OpenStack began in 2010 as a joint project of Rackspace Hosting and NASA. As of 2012, it is managed by the OpenStack Foundation, a non-profit corporate entity established in September 2012 to promote OpenStack software and its community. Now, more than 500 companies have joined the project. The OpenStack system consists of several key services that are separately installed.
These services work together depending on the user's cloud needs and include the
Compute, Identity, Networking, Image, Block Storage, Object Storage, Telemetry,
Orchestration, and Database services.
The administrator can install any of these projects separately and configure them
standalone or as connected entities.
Figure 4.4 shows the relationships among the OpenStack services:
The controller node runs the Identity service, Image service, Placement service, the management portions of Compute and Networking, various Networking agents, and the Dashboard. It also includes supporting services such as an SQL database, message queue, and NTP.
Optionally, the controller node runs portions of the Block Storage, Object Storage, Orchestration, and Telemetry services. The controller node requires a minimum of two network interfaces.
The compute node runs the hypervisor portion of Compute that operates instances. By default, Compute uses the KVM hypervisor. The compute node also runs a Networking service agent that connects instances to virtual networks and provides firewalling services to instances via security groups. Administrators can deploy more than one compute node. Each node requires a minimum of two network interfaces.
The optional Block Storage node contains the disks that the Block Storage and Shared File System services provision for instances. For simplicity, service traffic between compute nodes and this node uses the management network. Production environments should implement a separate storage network to increase performance and security. Administrators can deploy more than one block storage node. Each node requires a minimum of one network interface.
The optional Object Storage node contains the disks that the Object Storage service uses for storing accounts, containers, and objects. For simplicity, service traffic between compute nodes and this node uses the management network. Production environments should implement a separate storage network to increase performance and security. This service requires two nodes. Each node requires a minimum of one network interface. Administrators can deploy more than two object storage nodes.
The provider networks option deploys the OpenStack Networking service in the simplest way possible, with primarily layer-2 (bridging/switching) services and VLAN segmentation of networks. Essentially, it bridges virtual networks to physical networks and relies on physical network infrastructure for layer-3 (routing) services. Additionally, a DHCP service provides IP address information to instances.
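For illustration, the Compute and Networking services described above can be driven through the openstacksdk library. This is a sketch only; the cloud name comes from a clouds.yaml entry, and the image, flavor, and network names are placeholders:

import openstack

conn = openstack.connect(cloud="mycloud")        # credentials taken from clouds.yaml

image = conn.compute.find_image("cirros")        # placeholder image name
flavor = conn.compute.find_flavor("m1.tiny")     # placeholder flavor name
network = conn.network.find_network("provider")  # placeholder network name

server = conn.compute.create_server(
    name="demo-instance",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)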
UNIT V
CLOUD SECURITY
SYLLABUS:Virtualization System-Specific Attacks: Guest hopping – VM migration
attack – hyperjacking. Data Security and Storage; Identity and Access Management
(IAM) - IAM Challenges - IAM Architecture and Practice.
PART A
2 Marks
1.What is a virtualization attack?BTL1
One of the top cloud computing threats involves one of its core enabling technologies: virtualization. In virtual environments, the attacker can take control of the virtual machines installed by compromising the lower-layer hypervisor.
2.What are the different types of VM attacks?BTL1
Virtualization introduces serious threats to service delivery, such as Denial of Service (DoS) attacks, Cross-VM Cache Side Channel attacks, Hypervisor Escape, and Hyper-jacking. One of the most sophisticated forms of attack is the cross-VM cache side channel attack, which exploits shared cache memory between VMs.
3.What is guesthopping?BTL1
Guest-hopping attack: in this type of attack, an attacker tries to gain access to one virtual machine by penetrating another virtual machine hosted on the same hardware. One possible mitigation of a guest-hopping attack is to use forensics and VM debugging tools to observe the security of the cloud.
4.What is a hyperjacking attack?BTL1
Hyperjacking is an attack in which a hacker takes malicious control over the
hypervisor that creates the virtual environment within a virtual machine (VM) host.
5.How does a hyperjacking attack work?BTL1
In a hyperjacking attack, the adversary installs a rogue hypervisor beneath, or takes malicious control of, the legitimate hypervisor that creates the virtual environment within a virtual machine (VM) host. Because the hypervisor runs below the guest operating systems, the compromise is very difficult for security tools inside the guests to detect.
6.What is data security and storage in cloud computing?BTL1
Cloud data security is the practice of protecting data and other digital information
assets from security threats, human error, and insider threats. It leverages technology,
policies, and processes to keep your data confidential and still accessible to those who
need it in cloud-based environments.
7.What are the 5 components of data security in cloud computing?BTL1
Visibility.
Exposure Management.
Prevention Controls.
Detection.
Response
8.What is cloud storage and its types?BTL1
There are three main types of cloud storage: object storage, file storage, and block storage. Each offers its own advantages and has its own use cases.
9.What are the four principles of data security?BTL1
There are many basic principles to protect data in information security. The primary
principles are confidentiality, integrity, accountability, availability, least privilege,
separation of privilege, and least common mechanism. The most commonly cited security principle is the CIA triad, together with accountability.
10. What is the definition if IAM?BTL1
Identity and access management (IAM) ensures that the right people and job roles in
your organization (identities) can access the tools they need to do their jobs. Identity
management and access systems enable your organization to manage employee apps
without logging into each app as an administrator.
11. What are the challenges of IAM?BTL1
Lack of centralized view
Difficulties in User Lifecycle Management
Keeping Application Integrations Updated
Compliance Visibility into Third Party SaaS Tools
12. What is the principle of IAM?BTL1
A principal is a human user or workload that can make a request for an action or
operation on an AWS resource. After authentication, the principal can be granted
either permanent or temporary credentials to make requests to AWS, depending on
the principal type.
13. What is IAM tools?BTL1
Identity access management (IAM), or simply identity management, is a category of software tools that allows businesses of all sizes to manage the identities and access rights of all their employees.
14. How many types of IAM are there?BTL1
IAM roles are of four types, primarily differentiated by who or what can assume the role: Service Role, Service-Linked Role, Role for Cross-Account Access, and Role for Identity Provider Access (federation).
15. What are IAM requirements?BTL1
IAM requirements are organized into four categories: Account Provisioning & De-provisioning, Authentication, Authorization & Role Management, and Session Management.
The maturity model takes into account the dynamic nature of IAM users, systems, and applications in the cloud and addresses the four key components of the IAM automation process:
• User Management, New Users
• User Management, User Modifications
• Authentication Management
• Authorization Management
Table 5-3 defines the maturity levels as they relate to the four key components.
By matching the model’s descriptions of various maturity levels with the cloud
services delivery model’s (SaaS, PaaS, IaaS) current state of IAM, a clear picture
emerges of IAM maturity across the four IAM components. If, for example, the
service delivery model (SPI) is “immature” in one area but “capable” or “aware” in all
others, the IAM maturity model can help focus attention on the area most in need of
attention.
Although the principles and purported benefits of established enterprise IAM
practices and processes are applicable to cloud services, they need to be adjusted to
the cloud environment. Broadly speaking, user management functions in the cloud
can be categorized as follows:
• Cloud identity administration
• Federation or SSO
• Authorization management
• Compliance management
We will now discuss each of the aforementioned practices in detail.
Cloud Identity Administration
Cloud identity administrative functions should focus on life cycle management of user identities in the cloud: provisioning, deprovisioning, identity federation, SSO, password or credentials management, profile management, and administrative management. Organizations that are not capable of supporting federation should explore cloud-based identity management services. This new breed of services usually synchronizes an organization's internal directories with its directory (usually multitenant) and acts as a proxy IdP for the organization.
By federating identities using either an internal Internet-facing IdP or a cloud identity management service provider, organizations can avoid duplicating identities and attributes and storing them with the CSP. Given the inconsistent and sparse support for identity standards among CSPs, customers may have to devise custom methods to address user management functions in the cloud. Provisioning users when federation is not supported can be complex and laborious. It is not unusual for organizations to employ manual processes, web-based administration, outsourced (delegated) administration that involves uploading of spreadsheets, and execution of custom scripts at both the customer and CSP locations. The latter model is not desirable as it is not scalable across multiple CSPs and will be costly to manage in the long run.
Federated Identity (SSO)
Organizations planning to implement identity federation that enables SSO for users can take one of the following two paths (architectures):
• Implement an enterprise IdP within the organization's perimeter.
• Integrate with a trusted cloud-based identity management service provider.
Both architectures have pros and cons.
Enterprise identity provider
In this architecture, cloud services will delegate authentication to an organization’s
IdP. In this delegated authentication architecture, the organization federates identities
within a trusted circle of CSP domains. A circle of trust can be created with all the
domains that are authorized to delegate authentication to the IdP. In this deployment
architecture, where the organization will provide and support an IdP, greater control
can be exercised over user identities, attributes, credentials, and policies for
authenticating and authorizing users to a cloud service. Figure 5-7 illustrates the IdP
deployment architecture.
Here are the specific pros and cons of this approach:
Pros
Organizations can leverage the existing investment in their IAM infrastructure and extend the practices to the cloud. For example, organizations that have implemented SSO for applications within their data center exhibit the following benefits:
• They are consistent with internal policies, processes, and access management frameworks.
• They have direct oversight of the service-level agreement (SLA) and security of the IdP.
• They have an incremental investment in enhancing the existing identity architecture to support federation.
Cons
By not changing the infrastructure to support federation, new inefficiencies can result due to the addition of life cycle management for non-employees such as customers. Most organizations will likely continue to manage employee and long-term contractor identities using organically developed IAM infrastructures and practices. But they seem to prefer to outsource the management of partner and consumer identities to a trusted cloud-based identity provider as a service partner.
Identity management-as-a-service
In this architecture, cloud services can delegate authentication to an identity management-as-a-service (IDaaS) provider. In this model, organizations outsource the federated identity management technology and user management processes to a third-party service provider, such as Ping Identity, TriCipher's Myonelogin.com, or Symplified.com. When federating identities to the cloud, organizations may need to manage the identity life cycle using their IAM system and processes. However, the organization might benefit from an outsourced multiprotocol federation gateway (identity federation service) if it has to interface with many different partners and cloud service federation schemes. For example, as of this writing, Salesforce.com supports SAML 1.1 and Google Apps supports SAML 2.0. Enterprises accessing Google Apps and Salesforce.com may benefit from a multiprotocol federation gateway hosted by an identity management CSP such as Symplified or TriCipher. In cases where credentialing is difficult and costly, an enterprise might also outsource credential issuance (and background investigations) to a service provider, such as the GSA Managed Service Organization (MSO), which issues personal identity verification (PIV) cards and, optionally, the certificates on the cards. The GSA MSO is offering the USAccess management end-to-end solution as a shared service to federal civilian agencies. In essence, this is a SaaS model for identity management, where the SaaS IdP stores identities in a "trusted identity store" and acts as a proxy for the organization's users accessing cloud services, as illustrated in Figure 5-8.
organization's users accessing cloud services, as illustrated in Figure 5-8.
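As a rough illustration of the multiprotocol federation gateway idea described above, the sketch below shows a hypothetical in-house gateway that authenticates a corporate user once and then issues an assertion in whichever SAML version each relying party expects (SAML 1.1 for a Salesforce.com-style endpoint, SAML 2.0 for a Google Apps-style endpoint), building the browser-SSO redirect URL. The service registry, endpoint URLs, and helper names are assumptions for illustration only; a real gateway would produce schema-valid, digitally signed assertions.

# Hypothetical multiprotocol federation gateway sketch (illustrative only).
# It routes an already-authenticated user to the SAML dialect each cloud
# service expects, hiding the version difference from the organization.
import base64
import datetime
import urllib.parse

# Assumed service registry: which SAML version each relying party accepts.
SERVICE_REGISTRY = {
    "salesforce": {"saml_version": "1.1", "acs_url": "https://login.example-crm.com/sso"},
    "google_apps": {"saml_version": "2.0", "acs_url": "https://www.google.com/a/example.com/acs"},
}

def build_assertion(user_id: str, version: str) -> str:
    """Return a heavily simplified, unsigned SAML assertion for user_id.

    Only the version-specific branching is shown; real gateways emit
    complete, signed XML.
    """
    issued = datetime.datetime.utcnow().isoformat() + "Z"
    if version == "2.0":
        xml = f'<saml2:Assertion IssueInstant="{issued}"><saml2:Subject>{user_id}</saml2:Subject></saml2:Assertion>'
    else:  # SAML 1.1
        xml = f'<saml1:Assertion IssueInstant="{issued}"><saml1:Subject>{user_id}</saml1:Subject></saml1:Assertion>'
    return base64.b64encode(xml.encode()).decode()

def sso_redirect(user_id: str, service: str) -> str:
    """Build the browser-SSO redirect URL carrying the assertion to the service."""
    entry = SERVICE_REGISTRY[service]
    token = build_assertion(user_id, entry["saml_version"])
    query = urllib.parse.urlencode({"SAMLResponse": token})
    return f"{entry['acs_url']}?{query}"

if __name__ == "__main__":
    # The same corporate identity is federated to two services that speak
    # different SAML dialects.
    print(sso_redirect("alice@corp.example.com", "salesforce"))
    print(sso_redirect("alice@corp.example.com", "google_apps"))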
The identity store in the cloud is kept in sync with the corporate directory through a provider-proprietary scheme, for example, agents running on the customer's premises that synchronize a subset of the organization's identity store to the identity store in the cloud over SSL VPNs. Once the IdP is established in the cloud, the organization should work with the CSP to delegate authentication to the cloud identity service provider. The cloud IdP then authenticates cloud users before they access any cloud services, using browser SSO techniques based on standard HTTP redirection. Here are the specific pros and cons of this approach:
Pros
Delegating certain authentication use cases to the cloud identity management service hides the complexity of integrating with various CSPs that support different federation standards. Case in point: Salesforce.com and Google both support delegated authentication using SAML but, as of this writing, with two different versions: Google Apps supports only SAML 2.0, and Salesforce.com supports only SAML 1.1. Cloud-based identity management services that support both SAML standards (multiprotocol federation gateways) can hide this integration complexity from organizations adopting cloud services. Another benefit is that little architectural change is needed to support this model. Once identity synchronization between the organization's directory (or trusted system of record) and the identity service directory in the cloud is set up, users can sign on to cloud services using their corporate identity, credentials (both static and dynamic), and authentication policies.
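A minimal sketch of the identity synchronization just described, assuming a hypothetical cloud identity-store REST endpoint and a small whitelist of directory attributes; the URL, token handling, and attribute names are illustrative and do not correspond to any provider's actual interface.

# Illustrative one-way sync of a subset of corporate directory entries to a
# hypothetical cloud identity store (not any specific vendor's API).
import json
import urllib.request

# Only these attributes are allowed to leave the organization's trust boundary.
SYNCED_ATTRIBUTES = ("uid", "mail", "displayName", "department")

CLOUD_IDENTITY_STORE = "https://idp.example-idaas.com/api/v1/users"  # assumed endpoint
API_TOKEN = "provisioned-out-of-band"  # placeholder credential

def extract_subset(directory_entry: dict) -> dict:
    """Keep only the attributes the organization has agreed to share."""
    return {k: directory_entry[k] for k in SYNCED_ATTRIBUTES if k in directory_entry}

def push_user(directory_entry: dict) -> int:
    """Upsert one user record in the cloud identity store; returns the HTTP status."""
    payload = json.dumps(extract_subset(directory_entry)).encode()
    req = urllib.request.Request(
        CLOUD_IDENTITY_STORE,
        data=payload,
        method="PUT",
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:  # runs over TLS (or an SSL VPN tunnel)
        return resp.status

if __name__ == "__main__":
    sample = {"uid": "alice", "mail": "alice@corp.example.com",
              "displayName": "Alice Doe", "department": "Finance",
              "employeeSSN": "never-synchronized"}  # filtered out below
    # A real on-premises sync agent would call push_user(sample) on a schedule;
    # here we only show which attributes would be released.
    print(extract_subset(sample))

Running such an agent on the customer's premises keeps the authoritative directory inside the trust boundary and exposes only the whitelisted subset to the cloud IdP.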
Cons
When you rely on a third party for an identity management service, you may have less visibility into the service, including its implementation and architecture details. Hence, the availability and authentication performance of cloud applications hinge on the identity management service provider's SLA, performance management, and availability. It is therefore important to understand the provider's service levels, architecture, service redundancy, and performance guarantees. Another drawback is that the provider may not be able to generate the custom reports needed to meet internal compliance requirements. In addition, identity attribute management can become complex when identity attributes are not properly defined and associated with identities (e.g., definitions of mandatory and optional attributes). New governance processes may be required to authorize operations (add, modify, remove) on user attributes that move outside the organization's trust boundary, since identity attributes change over the life cycle of the identity itself and can get out of sync (see the sketch below). Although both approaches enable the identification and authentication of users to cloud services, various features and integration nuances are specific to the service delivery model (SaaS, PaaS, and IaaS), as we will discuss in the next section.
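To make the attribute-governance concern above concrete, here is a minimal sketch, under assumed attribute names, of checking an identity record against a declared set of mandatory and optional attributes before it is released outside the organization's trust boundary.

# Minimal attribute-governance check (illustrative; attribute names are assumptions).
MANDATORY = {"uid", "mail", "displayName"}
OPTIONAL = {"department", "telephoneNumber"}

def validate_identity(record: dict) -> list[str]:
    """Return a list of governance violations for one identity record."""
    problems = []
    missing = MANDATORY - record.keys()
    if missing:
        problems.append(f"missing mandatory attributes: {sorted(missing)}")
    unknown = set(record) - MANDATORY - OPTIONAL
    if unknown:
        problems.append(f"attributes not approved for release: {sorted(unknown)}")
    return problems

if __name__ == "__main__":
    # Reports that the mandatory displayName attribute is missing.
    print(validate_identity({"uid": "alice", "mail": "alice@corp.example.com"}))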