
East West Institute of Technology

(Affiliated to Visvesvaraya Technological University, Belagavi)


Bengaluru-91

Dept. of CSE-IOT

Academic Year: 2024-25 / EVEN


Course Title: Cloud Computing & Security Course Code: BIC613B Sem: 6 CSE-IOT
Date: 21/03/2025 Max Marks: 25
Name of the Course Instructor: Prof. Vishaka Rani
BIS613D.1  1. Describe various cloud computing platforms and service providers.
BIS613D.2  2. Illustrate the significance of various types of virtualization.
BIS613D.3  3. Identify the architecture, delivery models and industrial platforms for cloud computing based applications.
BIS613D.4  4. Analyze the role of security aspects in cloud computing.
BIS613D.5  5. Demonstrate cloud applications in various fields using suitable cloud platforms.
SCHEME OF EVALUATION

Sl No. | Answers | Marks | CO | Bloom's Level
1a) [6 Marks, CO1, L1]

Cloud computing, characterized by on-demand access to shared resources
over the internet, offers benefits like cost savings, scalability, and
flexibility, making it a valuable tool for businesses and individuals alike.
Here's a more detailed look at the characteristics and benefits of cloud
computing:
Characteristics of Cloud Computing:
On-demand self-service:
Users can provision computing resources (like servers, storage, and
applications) on demand, without requiring human intervention
from the service provider.
Broad network access:
Users can access cloud resources from anywhere with an internet
connection, using various devices.
Resource pooling:
Cloud providers pool computing resources (like servers, storage,
and networks) and dynamically allocate them to multiple users.
Rapid elasticity:
Cloud resources can be scaled up or down quickly and easily to
meet changing demands.
Measured service:
Cloud providers track and measure resource usage, allowing users
to pay only for the resources they consume.
Multi-tenancy:
Cloud providers can support multiple users (tenants) on the same
infrastructure, while maintaining data privacy and security.
Virtualization:
Cloud providers use virtualization technology to abstract
underlying hardware resources and present them as logical
resources to users.

Benefits of Cloud Computing:


Cost savings:
Cloud computing can reduce IT infrastructure costs by eliminating
the need for physical hardware, software licensing, and IT staff.
Scalability:
Cloud resources can be scaled up or down quickly and easily to
meet changing demands, allowing businesses to adapt to fluctuating
workloads.
Flexibility:
Cloud computing offers flexibility in accessing and managing
resources, allowing users to work from anywhere and collaborate
more effectively.
Accessibility:
Users can access cloud resources from anywhere with an internet
connection, using various devices.
Collaboration:
Cloud-based applications and tools facilitate real-time collaboration
and information sharing among team members.
Disaster recovery:
Cloud providers offer robust disaster recovery solutions, ensuring
business continuity in case of outages or disasters.
Innovation:
Cloud computing enables faster innovation by providing access to a
wide range of technologies and resources.
Reduced environmental impact:
Cloud computing can reduce the environmental impact of IT
infrastructure by consolidating resources and improving energy
efficiency.

1b) [6 Marks, CO1, L1]
Traditional Computing
Traditional Computing uses physical data centers for various data assets and runs complete networking systems for day-to-day operations. Access to data, software, and storage is limited to authorized users and devices connected to the official network, so users can access data only from the system that stores it.

Cloud Computing
Cloud Computing combines configurable components: system resources and advanced services that deliver tasks over internet connections. It runs tasks on third-party servers and enables access to data from multiple locations, making it a cost-efficient and user-friendly solution. Above all, it offers more storage space, servers, and computing power to help applications run efficiently and smoothly; all it requires is a fast and stable internet connection.

Grid Computing
Grid Computing is a process in which computers and devices from various locations work on a single problem, with clusters jointly executing the given tasks. It applies resources from multiple computers and nodes; the environment thus utilizes several scattered resources to provide a functioning platform for executing a single task.

Distributed Computing
Distributed Computing takes place when multiple computers and devices are connected by a common network but physically separated. A single task is performed by functional units on different, distributed nodes, and different programs of an application run on separate nodes simultaneously. The nodes therefore communicate over the network to execute the task.

Cluster Computing
In this type of computing environment, clusters execute tasks. Cluster Computing allows a set of loosely or tightly connected computers to work together; the cluster is viewed as a single system and executes tasks in parallel, making it similar to a parallel computing environment. Consequently, cluster computing environments favor cluster-aware applications.

Personal Computing
A Personal Computing Environment consists of a single machine on which complete programs are stored and executed. Machines like laptops, mobiles, and printers are part of the Personal Computing Environment, which serves a single user running tasks at home or in the office.

Time-Sharing Computing
A Time-Sharing Computing Environment enables multiple users to share a system concurrently by allotting separate time slots to different users and processes; the processor switches rapidly between users according to their slots. For example, Windows 95 and later versions, Unix, iOS, and Linux all support time-sharing.

Client-Server Computing
Client-Server Computing is an environment that incorporates two machines: a client machine and a server machine (sometimes the same machine serves as both). A client requests a resource or service and a server provides it; a single server can serve multiple clients simultaneously, with communication taking place over a computer network.
Client-Server Computing Environments are categorized into two types:
● Compute Server: provides an interface through which clients communicate requests to execute tasks; the server performs each task and responds with the outcome.
● File Server: provides a file-system interface, allowing clients to create, update, read, and delete files.

Peer-to-Peer Computing
Peer-to-Peer Computing is similar to a Distributed Computing Environment, except that there is no distinction between clients and servers. P2P offers an advantage over traditional client-server environments: it provides services using many nodes spread throughout the network.

Mobile Computing
Mobile Computing refers to environments that run tasks on smartphones and tablets, i.e., computing on portable, lightweight devices. Compared to other devices, mobile systems are limited in screen size, memory capacity, and other traditional capabilities, yet they provide remote access to multiple services. Today, mobile computing environments offer a wide range of functions and deliver services comparable to any traditional device. The two main operating systems that dominate this market are Apple iOS and Google Android.

2a) [6 Marks, CO1, L1]

Load balancing is an essential technique used in cloud computing to optimize resource utilization and ensure that no single resource is overburdened with traffic. It is a process of distributing workloads across multiple computing resources, such as servers, virtual machines, or containers, to achieve better performance, availability, and scalability.
In cloud computing, load balancing can be implemented at various levels, including the network layer, application layer, and database layer. The most common load balancing techniques used in cloud computing are:
1. Network Load Balancing: balances network traffic across multiple servers or instances. It is implemented at the network layer and ensures that incoming traffic is distributed evenly across the available servers.
2. Application Load Balancing: balances the workload across multiple instances of an application. It is implemented at the application layer and ensures that each instance receives an equal share of the incoming requests.
3. Database Load Balancing: balances the workload across multiple database servers. It is implemented at the database layer and ensures that incoming queries are distributed evenly across the available database servers.
Load balancing helps to improve the overall performance and reliability of
cloud-based applications by ensuring that resources are used efficiently
and that there is no single point of failure. It also helps to scale applications
on demand and provides high availability and fault tolerance to handle
spikes in traffic or server failures.
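To make the even-distribution idea concrete, below is a minimal round-robin dispatcher sketch in Python. It is purely illustrative: the RoundRobinBalancer class and the server names are invented here, not part of any cloud provider's API.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Distribute incoming requests across a pool of servers in turn."""

    def __init__(self, servers):
        self._pool = cycle(servers)      # endless rotation over the server list

    def route(self, request):
        server = next(self._pool)        # pick the next server in the rotation
        return server, request

balancer = RoundRobinBalancer(["app-1", "app-2", "app-3"])
for i in range(6):
    server, _ = balancer.route(f"request-{i}")
    print(server)                        # app-1, app-2, app-3, app-1, ...
```

Real load balancers add health checks and weighting on top of a rotation policy like this, but the core goal is the same: no single resource receives a disproportionate share of traffic.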

2b) [6 Marks, CO1, L2]

In cloud computing, a peer-to-peer (P2P) architecture operates at two abstraction levels: a physical network of peers and an overlay network that facilitates communication and resource sharing.
Here's a more detailed explanation:
Physical Network (Peers):
● This level consists of individual computers or devices, each
acting as both a client and a server (a "peer").
● Peers voluntarily join or leave the network, forming a
dynamic and decentralized structure.
● Each peer contributes resources (like storage, processing
power, or bandwidth) and consumes resources from other
peers.
● There is no centralized authority or server; all peers are equal and interact directly.
Overlay Network:
● This level builds upon the physical network, creating a
logical network structure that enables efficient
communication and resource discovery.
● It uses protocols and algorithms to help peers find each
other, exchange information, and share resources.
● Examples of overlay network technologies include DHT
(Distributed Hash Tables) or structured P2P networks.
● The overlay network allows peers to interact with each
other even if they are geographically distributed.
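Since the answer cites DHTs as the key overlay mechanism, here is a toy sketch of hash-based key placement on an identifier ring, in the spirit of consistent hashing. It is a simplification made for illustration: the SimpleDHT class, the 16-bit ring, and the peer names are invented, and real DHTs such as Chord use much larger identifier spaces and distributed routing tables.

```python
import hashlib
from bisect import bisect_right

def node_id(name: str) -> int:
    """Map a peer name or key onto a small identifier ring (0 .. 2**16 - 1)."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % 2**16

class SimpleDHT:
    """Toy DHT: each key is stored on the first peer clockwise from its hash."""

    def __init__(self, peers):
        self.ring = sorted((node_id(p), p) for p in peers)

    def lookup(self, key: str) -> str:
        h = node_id(key)
        ids = [i for i, _ in self.ring]
        idx = bisect_right(ids, h) % len(self.ring)   # wrap around the ring
        return self.ring[idx][1]

dht = SimpleDHT(["peer-A", "peer-B", "peer-C"])
print(dht.lookup("file.txt"))   # the peer responsible for this key
```

The point of the hashing scheme is that any peer can compute which peer owns a key without asking a central server, which is exactly the decentralized resource discovery the overlay level provides.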

3a) [6 Marks, CO2, L2]

This approach was implemented by VMware and many other software
companies. As shown in Figure, VMware puts the VMM at Ring 0 and the
guest OS at Ring 1. The VMM scans the instruction stream and identifies
the privileged, control- and behavior-sensitive instructions. When these
instructions are identified, they are trapped into the VMM, which emulates
the behavior of these instructions. The method used in this emulation is
called binary translation. Therefore, full virtualization combines binary
translation and direct execution. The guest OS is completely decoupled
from the underlying hardware. Consequently, the guest OS is unaware that
it is being virtualized.
The performance of full virtualization may not be ideal, because it involves
binary translation which is rather time-consuming. In particular, the full
virtualization of I/O-intensive applications is a really big challenge. Binary
translation employs a code cache to store translated hot instructions to
improve performance, but it increases the cost of memory usage. At the
time of this writing, the performance of full virtualization on the x86
architecture is typically 80 percent to 97 percent that of the host machine.
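The trap-and-translate flow with a code cache can be sketched as follows; this is only a conceptual illustration, as the instruction strings, the sensitive flag, and the translate() rule are invented here, and a real VMM translates basic blocks of x86 machine code rather than text.

```python
# Toy sketch of binary translation with a code cache.
translation_cache = {}   # "hot" translated instructions, keyed by guest code

def translate(guest_instr: str) -> str:
    """Stand-in for rewriting a sensitive guest instruction into safe host code."""
    return f"emulated({guest_instr})"

def execute(guest_instr: str, sensitive: bool) -> str:
    if not sensitive:
        return guest_instr                       # direct execution on the host CPU
    if guest_instr not in translation_cache:     # slow path: translate and cache
        translation_cache[guest_instr] = translate(guest_instr)
    return translation_cache[guest_instr]        # fast path: reuse cached block

print(execute("mov eax, ebx", sensitive=False))  # runs directly
print(execute("cli", sensitive=True))            # privileged: trapped and emulated
print(execute("cli", sensitive=True))            # second time served from the cache
```

The cache is what the answer's last paragraph refers to: it speeds up repeated hot instructions at the cost of extra memory.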

3b) [7 Marks, CO2, L2]

IMPLEMENTATION LEVELS OF VIRTUALIZATION


Virtualization is a computer architecture technology by which multiple
virtual machines (VMs) are multiplexed in the same hardware machine.
The purpose of a VM is to enhance resource sharing by many users and
improve computer performance in terms of resource utilization and
application flexibility. Hardware resources (CPU, memory, I/O devices,
etc.) or software resources (operating system and software libraries) can be
virtualized in various functional layers. This virtualization technology has
been revitalized as the demand for distributed and cloud computing
increased sharply in recent years.
The idea is to separate the hardware from the software to yield better
system efficiency. Virtualization techniques can be applied to enhance the
use of compute engines, networks, and storage. In this chapter we will
discuss VMs and their applications for building distributed systems.
According to a 2009 Gartner Report, virtualization was the top strategic
technology poised to change the computer industry. With sufficient
storage, any computer platform can be installed in another host computer,
even if they use processors with different instruction sets and run with
distinct operating systems on the same hardware.

Levels of Virtualization Implementation


A traditional computer runs with a host operating system specially tailored
for its hardware architecture, as shown in Figure. After virtualization,
different user applications managed by their own operating systems (guest
OS) can run on the same hardware, independent of the host OS. This
is often done by adding additional software, called a virtualization layer as
shown in Figure. This virtualization layer is known as the hypervisor or virtual
machine monitor (VMM). The VMs are shown in the upper boxes, where
applications run with their own guest OS over the virtualized CPU,
memory, and I/O resources. The main function of the software layer for
virtualization is to virtualize the physical hardware of a host machine into
virtual resources to be used by the VMs, exclusively. This can be
implemented at various operational levels, as we will discuss shortly. The
virtualization software creates the abstraction of VMs by interposing a
virtualization layer at various levels of a computer system. Common
virtualization layers include the instruction set architecture (ISA) level,
hardware level, operating system level, library support level, and
application level.

1 Instruction Set Architecture Level


At the ISA level, virtualization is performed by emulating a given ISA by
the ISA of the host machine. For example, MIPS binary code can run on an
x86-based host machine with the help of ISA emulation. With this
approach, it is possible to run a large amount of legacy binary code written
for various processors on any given new hardware host machine.
Instruction set emulation leads to virtual ISAs created on any hardware
machine.
The basic emulation method is through code interpretation. An interpreter
program interprets the source instructions to target instructions one by one.
One source instruction may require tens or hundreds of native target
instructions to perform its function. Obviously, this process is relatively
slow. For better performance, dynamic binary translation is desired. This
approach translates basic blocks of dynamic source instructions to target
instructions. The basic blocks can also be extended to program traces or
super blocks to increase translation efficiency. Instruction set emulation
requires binary translation and optimization. A virtual instruction set
architecture (V-ISA) thus requires adding a processor-specific software
translation layer to the compiler.
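As a small illustration of the interpretation method described above, the sketch below executes a toy source ISA instruction by instruction on the host. The two-instruction ISA and register names are invented for this example; a real emulator would decode binary opcodes, not tuples.

```python
# Minimal sketch of ISA emulation by interpretation.
def interpret(source_program, regs):
    """Execute each source instruction via equivalent host operations."""
    for op, *args in source_program:
        if op == "LOAD":            # LOAD rd, imm  -> one host assignment
            regs[args[0]] = args[1]
        elif op == "ADD":           # ADD rd, rs    -> one host addition
            regs[args[0]] += regs[args[1]]
        else:
            raise ValueError(f"unsupported instruction: {op}")
    return regs

program = [("LOAD", "r1", 5), ("LOAD", "r2", 7), ("ADD", "r1", "r2")]
print(interpret(program, {"r1": 0, "r2": 0}))   # {'r1': 12, 'r2': 7}
```

Each source instruction costs several host operations in this loop, which is why the text calls interpretation slow and why dynamic binary translation of whole basic blocks is preferred for performance.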

2 Hardware Abstraction Level


Hardware-level virtualization is performed right on top of the bare
hardware. On the one hand, this approach generates a virtual hardware
environment for a VM. On the other hand, the process manages the
underlying hardware through virtualization. The idea is to virtualize a
computer’s resources, such as its processors, memory, and I/O devices.
The intention is to upgrade the hardware utilization rate by multiple users
concurrently. The idea was implemented in the IBM VM/370 in the 1960s.
More recently, the Xen hypervisor has been applied to virtualize x86-based
machines to run Linux or other guest OS applications.

3 Operating System Level


This refers to an abstraction layer between traditional OS and user
applications. OS-level virtualization creates isolated containers on a single
physical server and the OS instances to utilize the hardware and software
in data centers. The containers behave like real servers. OS-level
virtualization is commonly used in creating virtual hosting environments to
allocate hardware resources among a large number of mutually distrusting
users. It is also used, to a lesser extent, in consolidating server
hardware by moving services on separate hosts into containers or VMs on
one server.
4 Library Support Level
Most applications use APIs exported by user-level libraries rather than
using lengthy system calls by the OS. Since most systems provide well-
documented APIs, such an interface becomes another candidate for
virtualization. Virtualization with library interfaces is possible by
controlling the communication link between applications and the rest of a
system through API hooks. The software tool WINE has implemented this
approach to support Windows applications on top of UNIX hosts.
Another example is the vCUDA which allows applications executing
within VMs to leverage GPU hardware acceleration.
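The API-hook idea can be illustrated with a monkey-patch in Python; this is only an analogy, since WINE and vCUDA interpose at the native library ABI rather than inside an interpreted runtime, and the hooked_sleep function here is invented for the demonstration.

```python
import time

real_sleep = time.sleep                # keep a reference to the real API call

def hooked_sleep(seconds):
    """Interpose on the library call: observe it, then forward it."""
    print(f"[hook] intercepted sleep({seconds})")
    real_sleep(0)                      # forward (shortened so the demo is fast)

time.sleep = hooked_sleep              # install the hook on the library interface
time.sleep(2.0)                        # caller is unchanged, yet the call is intercepted
```

The essential property is the same as in library-level virtualization: the application is unmodified, and the virtualization layer controls what happens behind the API it already uses.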
5 User-Application Level
Virtualization at the application level virtualizes an application as a VM. On a traditional OS, an application often runs as a process, so this approach is also known as process-level virtualization. The most popular approach is to deploy high-level language (HLL) VMs. In this scenario, the virtualization layer sits as an application program on top of the operating system, and the layer exports an abstraction of a VM that can run programs written and compiled to a particular abstract machine definition. Any program written in the HLL and compiled for this VM will be able to run on it. The Microsoft .NET CLR and the Java Virtual Machine (JVM) are two good examples of this class of VM.
Other forms of application-level virtualization are known as application isolation, application sandboxing, or application streaming. The process involves wrapping the application in a layer that is isolated from the host OS and other applications. The result is an application that is much easier to distribute and remove from user workstations. An example is the LANDesk application virtualization platform, which deploys software applications as self-contained, executable files in an isolated environment without requiring installation, system modifications, or elevated security privileges.

4a) [6 Marks, CO2, L2]

Since the efficiency of the software shadow page table technique was too
low, Intel developed a hardware-based EPT technique to improve it, as
illustrated in Figure. In addition, Intel offers a Virtual Processor ID (VPID)
to improve use of the TLB. Therefore, the performance of memory
virtualization is greatly improved. In Figure, the page tables of the guest
OS and EPT are all four-level. When a virtual address needs to be
translated, the CPU will first look for the L4 page table pointed to by Guest
CR3. Since the address in Guest CR3 is a physical address in the guest OS,
the CPU needs to convert the Guest CR3 GPA to the host physical address
(HPA) using EPT. In this procedure, the CPU will check the EPT TLB to
see if the translation is there. If there is no required translation in the EPT
TLB, the CPU will look for it in the EPT. If the CPU cannot find the
translation in the EPT, an EPT violation exception will be raised.

When the GPA of the L4 page table is obtained, the CPU will calculate the GPA of the L3 page table by using the GVA and the content of the L4 page table. If the entry corresponding to the GVA in the L4 page table is a page fault, the CPU will generate a page fault interrupt and let the guest OS kernel handle the interrupt. When the GPA of the L3 page table is obtained, the CPU will look in the EPT to get the HPA of the L3 page table, as described earlier. To get the HPA corresponding to a GVA, the CPU needs to walk the EPT five times, and each walk requires four memory accesses. Therefore, there are 20 memory accesses in the worst case, which is still very slow. To overcome this shortcoming, Intel increased the size of the EPT TLB to decrease the number of memory accesses.

[Figure: Two-level memory mapping procedure: per-process virtual addresses (VA) in each VM map to guest physical addresses (PA), which in turn map to machine addresses (MA). (Courtesy of R. Rblig, et al.)]
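The worst-case figure quoted above can be checked with quick arithmetic; this sketch assumes the 4-level guest page tables and 4-level EPT described in the answer.

```python
# Check of the worst-case translation cost stated above (assumption: 4-level
# guest page tables and a 4-level EPT, as in the answer and figure).
guest_levels = 4                  # levels in the guest OS page table
ept_levels = 4                    # levels in the Extended Page Table

# Guest CR3 plus each of the 4 guest page-table levels yields a guest physical
# address, and resolving each one to a host physical address is one EPT walk.
ept_walks = guest_levels + 1      # 5 EPT walks per GVA translation
memory_accesses = ept_walks * ept_levels
print(memory_accesses)            # 20 memory accesses in the worst case
```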

4b) [7 Marks, CO2, L3]

Unlike the full virtualization architecture, which intercepts and emulates
privileged and sensitive instructions at runtime, para-virtualization handles
these instructions at compile time. The guest OS kernel is modified to
replace the privileged and sensitive instructions with hypercalls to the
hypervisor or VMM. Xen assumes such a para-virtualization architecture.
The guest OS running in a guest domain may run at Ring 1 instead of at
Ring 0. This implies that the guest OS may not be able to execute some
privileged and sensitive instructions. The privileged instructions are
implemented by hypercalls to the hypervisor. After replacing the
instructions with hypercalls, the modified guest OS emulates the behavior
of the original guest OS. On a UNIX system, a system call involves an
interrupt or service routine; in Xen, the hypercalls are likewise handled
by a dedicated service routine.
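A conceptual sketch of the compile-time hypercall substitution follows; the hypervisor_hypercall dispatcher and the disable_interrupts wrapper are invented names used only to mirror the rewrite the answer describes, and Xen's real hypercall ABI is register-based and quite different.

```python
# Conceptual sketch of para-virtualization: a privileged instruction in the
# guest kernel is replaced at compile time by a call into the hypervisor.
def hypervisor_hypercall(name, *args):
    """Stand-in for the VMM's service routine that does the privileged work."""
    print(f"[VMM] executing privileged operation {name}{args}")

# Original guest kernel code (would fault when running at Ring 1):
#     disable_interrupts()        # privileged instruction, e.g. CLI
# Modified (para-virtualized) guest kernel issues a hypercall instead:
def disable_interrupts():
    hypervisor_hypercall("disable_interrupts")

disable_interrupts()              # guest kernel call, handled safely by the VMM
```

Because the substitution happens when the guest kernel is built, there is no runtime trapping or binary translation, which is the performance advantage para-virtualization claims over full virtualization.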

Example VMware ESX Server for Para-Virtualization


VMware pioneered the software market for virtualization. The company
has developed virtualization tools for desktop systems and servers as well
as virtual infrastructure for large data centers. ESX is a VMM or a
hypervisor for bare-metal x86 symmetric multiprocessing (SMP) servers. It
accesses hardware resources such as I/O directly and has complete
resource management control. An ESX-enabled server consists of four
components: a virtualization layer, a resource manager, hardware interface
components, and a service console, as shown in Figure. To improve
performance, the ESX server employs a para-virtualization architecture in
which the VM kernel interacts directly with the hardware without
involving the host OS.
The VMM layer virtualizes the physical hardware resources such as CPU,
memory, network and disk controllers, and human interface devices. Every
VM has its own set of virtual hardware resources. The resource manager
allocates CPU, memory disk, and network bandwidth and maps them to the
virtual hardware resource set of each VM created. Hardware interface
components are the device drivers and the VMware ESX Server File
System. The service console is responsible for booting the system,
initiating the execution of the VMM and resource manager, and
relinquishing control to those layers. It also facilitates the process for
system administrators.

Course Instructor Chief Course Instructor Head of Department

Prof. Vishaka Rani          Prof. Shubha C          Dr. Venkatesh G
