
Department of Computer Engineering

Name: SAHIL PRAVIN NAZARE

Roll No: 60

Subject code: CSL605

Subject Name: Cloud Computing Lab (CCL)

Class/Sem: T.E. / SEM-VI

Year: 2021-2022
COMPUTER ENGINEERING DEPARTMENT

Subject: Cloud Computing Lab (CCL) Class/Sem: TE/VI

Name of the Laboratory: SDC Lab Year: 2021-2022

LIST OF EXPERIMENTS

Expt. No.  Date        Name of the Experiment (Page No.)
1          24/01/2022  Study of Cloud Computing and its Architecture. (01-10)
2          02/02/2022  To study and implement Hosted Virtualization using VirtualBox and KVM. (11-18)
3          08/02/2022  To study and implement Bare-metal Virtualization using Xen, Hyper-V or VMware ESXi. (19-28)
4          15/02/2022  To study and implement Infrastructure as a Service using AWS / Microsoft Azure. (29-40)
5          22/02/2022  To study and implement Platform as a Service using AWS Elastic Beanstalk / Microsoft Azure App Service. (41-51)
6          08/03/2022  To implement Storage as a Service using S3 and S3 Glacier. (52-70)
7          15/03/2022  To study and implement Database as a Service on SQL/NoSQL databases like AWS RDS, Azure SQL, MongoDB Lab or Firebase. (71-80)
8          25/03/2022  To study and implement Security as a Service on AWS/Azure. (81-92)
9          08/04/2022  To study and implement Identity and Access Management (IAM) practices on AWS/Azure cloud. (93-95)
10         13/04/2022  To study and implement Containerization using Docker. (96-99)
1          25/01/2022  Assignment No. 1 (100-103)
2          15/03/2022  Assignment No. 2 (104-108)

H/W Requirement P I and above, RAM 128MB, Printer, Cartridges.


S/W Requirement VirtualBox Machine, Ubuntu, AWS/Azure account, Docker.

Prof. Bhagyashri Sonawale Prof. Sujata Bhairnallykar


Subject In-charge HOD
EXPERIMENT NO. 1

AIM: Study of Cloud Computing and its Architecture.

THEORY:
What is Cloud Computing?
Cloud computing is the delivery of computing services—including servers, storage, databases, networking,
software, analytics, and intelligence—over the Internet (“the cloud”) to offer faster innovation, flexible
resources, and economies of scale. You typically pay only for the cloud services you use, helping you lower your operating costs, run your infrastructure more efficiently, and scale as your business needs change.
Cloud computing is a virtualization-based technology that allows us to create, configure, and customize
applications via an internet connection. The cloud technology includes a development platform, hard disk,
software application, and database. The term cloud refers to a network or the internet. It is a technology that
uses remote servers on the internet to store, manage, and access data online rather than local drives. The data
can be anything such as files, images, documents, audio, video, and more.

Architecture of Cloud Computing:


Cloud computing architecture is a combination of service-oriented architecture and event-driven architecture. It is divided into the following two parts:
• Front End.
• Back End.
The diagram below shows the architecture of cloud computing.

Front End:-
The front end is used by the client. It contains the client-side interfaces and applications that are required to access the cloud computing platforms. The front end includes web browsers (such as Chrome, Firefox, and Internet Explorer), thin and fat clients, tablets, and mobile devices.
Back End:-
The back end is used by the service provider. It manages all the resources that are required to provide cloud
computing services. It includes a huge amount of data storage, security mechanism, virtual machines,
deploying models, servers, traffic control mechanisms, etc.

Components of Cloud Computing Architecture:-


There are following components of cloud computing architecture,
1. Client Infrastructure:
Client Infrastructure is a Frontend component. It provides GUI (Graphical User Interface) to interact
with the cloud.
2. Application:
The application may be any software or platform that a client wants to access.
3. Services:
Cloud Services manage which type of service you access, according to the client's requirement.
Cloud computing offers the following three types of services:-
1. Software as a Service (SaaS):- It is also known as cloud application services. Most SaaS applications run directly in the web browser, which means we do not need to download and install them. Some important examples of SaaS are given below.
Eg.- Google Apps, Salesforce, Dropbox, Slack, HubSpot, Cisco WebEx.
2. Platform as a Service (PaaS):- It is also known as cloud platform services. It is quite similar to SaaS, but the difference is that PaaS provides a platform for software creation, whereas with SaaS we can access software over the internet without the need for any platform.
Eg.- Windows Azure, Force.com, Magento Commerce Cloud, OpenShift.
3. Infrastructure as a Service (IaaS):- It is also known as cloud infrastructure services. The provider manages the underlying infrastructure, while the user remains responsible for managing application data, middleware and runtime environments.
Eg.- Amazon Web Services (AWS) EC2, Google Compute Engine (GCE), Cisco Metapod.
4. Runtime Cloud:- Runtime Cloud provides the execution and runtime environment to the virtual machines.
5. Storage:- Storage is one of the most important components of cloud computing. It provides a huge amount
of storage capacity in the cloud to store and manage data.
6. Infrastructure:- It provides services on the host level, application level, and network level. Cloud
infrastructure includes hardware and software components such as servers, storage, network devices,
virtualization software, and other storage resources that are needed to support the cloud computing model.
7. Management:- Management is used to manage components such as the application, service, runtime cloud, storage, infrastructure, and other security matters in the back end, and to establish coordination between them.
8. Security:- Security is an in-built backend component of cloud computing. It implements a security
mechanism in the back end.
9. Internet:- The Internet is the medium through which the front end and back end can interact and communicate with each other.

MODELS OF CLOUD COMPUTING:-


1. Deployment Models:-
The cloud deployment model identifies the specific type of cloud environment based on ownership, scale,
and access, as well as the cloud’s nature and purpose. The location of the servers you’re utilizing and who
controls them are defined by a cloud deployment model. It specifies how your cloud infrastructure will look,
what you can change, and whether you will be given services or will have to create everything yourself.
Relationships between the infrastructure and your users are also defined by cloud deployment types.
Different types of cloud computing deployment models are:
I. Public Cloud:
The public cloud makes it possible for anybody to access systems and services. It may be less secure, as it is open to everyone. In this model, cloud infrastructure services are provided over the internet to the general public or major industry groups, and the infrastructure is owned by the entity that delivers the cloud services, not by the consumer. It is a type of cloud hosting that allows customers and users to easily access systems and services; service providers supply services to a variety of customers. In this arrangement, storage, backup, and retrieval services are given for free, as a subscription, or on a per-use basis. Example: Google App Engine.
II. Private Cloud:
The private cloud deployment model is the exact opposite of the public cloud deployment model. It’s a one-
on-one environment for a single user (customer). There is no need to share your hardware with anyone else.
The distinction between private and public cloud is in how you handle all of the hardware. It is also called
the “internal cloud” & it refers to the ability to access systems and services within a given border or
organization. The cloud platform is implemented in a cloud-based secure environment that is protected by
powerful firewalls and under the supervision of an organization's IT department. The private cloud gives greater flexibility and control over cloud resources.
III. Hybrid Cloud:
By bridging the public and private worlds with a layer of proprietary software, hybrid cloud computing
gives the best of both worlds. With a hybrid solution, you may host the app in a safe environment while
taking advantage of the public cloud’s cost savings. Organizations can move data and applications between
different clouds using a combination of two or more cloud deployment methods, depending on their needs.
IV. Community Cloud:
It allows systems and services to be accessible by a group of organizations. It is a distributed system that is
created by integrating the services of different clouds to address the specific needs of a community, industry,
or business. The infrastructure of the community could be shared between the organization which has shared
concerns or tasks. It is generally managed by a third party or by the combination of one or more
organizations in the community.
V. Multi-Cloud:
We’re talking about employing multiple cloud providers at the same time under this paradigm, as the name
implies. It’s similar to the hybrid cloud deployment approach, which combines public and private cloud
resources. Instead of merging private and public clouds, multi-cloud uses many public clouds. Although
public cloud providers provide numerous tools to improve the reliability of their services, mishaps still
occur. It’s quite rare that two distinct clouds would have an incident at the same moment. As a result, multi-
cloud deployment improves the high availability of your services even more.

2. Service Models:-
There are the following three types of cloud service models -
1. Infrastructure as a Service (IaaS)
2. Platform as a Service (PaaS)
3. Software as a Service (SaaS)
1. Infrastructure as a Service (IaaS):
IaaS is also known as Hardware as a Service (HaaS). It is a computing infrastructure managed over the
internet. The main advantage of using IaaS is that it helps users to avoid the cost and complexity of
purchasing and managing the physical servers.

IaaS provider provides the following services –


• Compute:- Computing as a Service includes virtual central processing units and virtual main memory for the VMs that are provisioned to the end users.
• Storage:- The IaaS provider provides back-end storage for storing files.
• Network:- Network as a Service (NaaS) provides networking components such as routers, switches and bridges for the VMs.
• Load balancers:- It provides load balancing capability at the infrastructure layer.
Characteristics of IaaS:
These are the following characteristics of IaaS:-
• Resources are available as a service.
• Services are highly scalable.
• Dynamic and flexible.
• GUI and API-based access.
• Automated administrative tasks.
Eg.- DigitalOcean, Linode, Amazon Web Services (AWS), Microsoft Azure, Google Compute Engine
(GCE), Rackspace, and Cisco Metacloud.

2. Platform as a Service (PaaS):-


PaaS cloud computing platform is created for the programmer to develop, test, run, and manage the
applications.
PaaS providers provide programming languages, application frameworks, databases, and other tools:

• Programming Languages:- PaaS provides various programming languages for the developers to
develop the applications. Some popular programming languages provided by PaaS providers are
Java, PHP, Ruby, Perl and Go.
• Application Framework:- PaaS providers provide application frameworks to make application development easier. Some popular application frameworks provided by PaaS providers are Node.js, Drupal, Joomla, WordPress, Spring, Play, Rack and Zend.
• Databases: PaaS providers provide various databases such as ClearDB, PostgreSQL, MongoDB and
Redis to communicate with the applications.
Characteristics of PaaS:
There are the following characteristics of PaaS:
• Accessible to various users via the same development application.
• Integrates with web services and databases.
• Builds on virtualization technology, so resources can easily be scaled up or down as per the
organization’s need.
• Support multiple languages and frameworks.
• Provides an ability to “Auto-scale”.
Eg.- AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google App Engine, Apache Stratos,
Magento Commerce Cloud, and OpenShift.

3. Software as a Service (SaaS):-


SaaS is also known as "on-demand software". It is software in which the applications are hosted by a cloud service provider. Users can access these applications with the help of an internet connection and a web browser.

These are the following services provided by SaaS providers:


• Business Services: SaaS providers provide various business services to start up a business. The SaaS business services include ERP (Enterprise Resource Planning), CRM (Customer Relationship Management), billing and sales.
• Document Management: SaaS document management is a software application offered by a third party (the SaaS provider) to create, manage and track electronic documents.
• Social Networks: As we all know, social networking sites are used by the general public, so social networking service providers use SaaS for their convenience and to handle the general public's information.
• Mail Services: To handle the unpredictable number of users and the load on e-mail services, many e-mail providers offer their services using SaaS.
Characteristics of SaaS:
These are the following characteristics of SaaS,
• Managed from a central location
• Hosted on a remote server.
• Accessible over the internet.
• Users are not responsible for hardware and software updates. Updates are applied automatically.
Eg.- BigCommerce, Google Apps, Salesforce, Dropbox, ZenDesk, Cisco WebEx, Slack, and GoToMeeting.

ADVANTAGES AND DISADVANTAGES OF CLOUD COMPUTING:-


ADVANTAGES:
1) Cost Savings:
Cost saving is one of the biggest Cloud Computing benefits. It helps you to save substantial capital cost as it
does not need any physical hardware investments. Also, you do not need trained personnel to maintain the
hardware. The buying and managing of equipment is done by the cloud service provider.
2) Strategic edge:
Cloud computing offers a competitive edge over your competitors. It is one of the best advantages of Cloud
services that helps you to access the latest applications any time without spending your time and money on
installations.
3) High Speed:
Cloud computing allows you to deploy your service quickly in a few clicks. This faster deployment allows you to get the resources required for your system within minutes.
4) Back-up and restore data:
Once the data is stored in the cloud, it is easier to back it up and restore it, which is otherwise a very time-consuming process on-premises.
5) Automatic Software Integration:
In the cloud, software integration is something that occurs automatically. Therefore, you don’t need to take
additional efforts to customize and integrate your applications as per your preferences.
6) Reliability:
Reliability is one of the biggest benefits of Cloud hosting. You can always get instantly updated about the
changes.
7) Mobility:
Employees who are working on the premises or at remote locations can easily access all the cloud services. All they need is Internet connectivity.
8) Unlimited storage capacity:
The cloud offers almost limitless storage capacity. At any time you can quickly expand your storage
capacity with very nominal monthly fees.
9) Collaboration:
The cloud computing platform helps employees who are located in different geographies to collaborate in a
highly convenient and secure manner.
10) Quick Deployment:
Last but not least, cloud computing gives you the advantage of rapid deployment. So, when you decide to
use the cloud, your entire system can be fully functional in a few minutes. However, the amount of time taken depends on what kind of technologies are used in your business.

DISADVANTAGES:
1) Performance Can Vary:
When you are working in a cloud environment, your application is running on a server which simultaneously provides resources to other businesses. Any greedy behaviour by, or DDoS attack on, another tenant could affect the performance of your shared resource.
2) Technical Issues:
Cloud technology is always prone to outages and other technical issues. Even the best cloud service provider companies may face this type of trouble despite maintaining high standards of maintenance.
3) Security Threat in the Cloud:
Another drawback of working with cloud computing services is security risk. Before adopting cloud technology, you should be well aware of the fact that you will be sharing all your company's sensitive information with a third-party cloud computing service provider. Hackers might access this information.
4) Downtime:
Downtime should also be considered while working with cloud computing. That’s because your cloud
provider may face power loss, low internet connectivity, service maintenance, etc.
5) Internet Connectivity:
Good Internet connectivity is a must in cloud computing. You can’t access cloud without an internet
connection. Moreover, you don’t have any other way to gather data from the cloud.
6) Lower Bandwidth:
Many cloud storage service providers limit the bandwidth usage of their users. So, if your organization surpasses the given allowance, the additional charges can be significant.
7) Lack of Support:
Cloud computing companies often fail to provide proper support to customers. Moreover, they expect their users to depend on FAQs or online help, which can be tedious for non-technical persons.

REAL TIME APPLICATIONS OF CLOUD COMPUTING:-


Backup and Recovery:
Cloud vendors provide security from their side by storing data safely as well as providing a backup facility for the data. They offer various recovery applications for retrieving lost data. In the traditional approach, backing up data is a very complex problem, and it is very difficult, sometimes impossible, to recover lost data. But cloud computing has made backup and recovery applications very easy, as there is no fear of running out of backup media or of losing data.
The cloud has become one of the hottest topics of conversation in the world lately. Thanks to its plethora of
advantages, it has become an essential part of the data storage market for organizations of different sizes in a
variety of industries. When one talks about data storage, data backup and data recovery also become integral
parts of the conversation as well.
Given the increasing number of recent data breaches and cyber attacks, data security has become a key issue
for businesses. And while the importance of data backup and recovery can't be overlooked, it is important to first understand what a company's data security needs are before implementing a data backup and recovery solution within the world of cloud computing.
1) Cloud cost:
In most cases, just about any digital file can be stored in the cloud. However, this isn’t always the case as the
usage and the storage space rented are important elements that need to be taken into account before choosing
a disaster recovery plan. Some data plans can include the option of backing up and recovering important files when necessary. They can also include options on how files are retrieved, where their storage location is, what the usage of the servers looks like, and more. These elements might seem trivial in the beginning, but they may prove to be important later on during the disaster recovery process. Different cloud vendors provide server space to businesses according to their usage, and organizations need to be clear about
what they are storing in the cloud, as well as what pricing tier plan they would like.
2) Backup speed and frequency:
Data recovery is not the only concern when considering data backup within the cloud. Some cloud providers transfer up to 5 TB of data within a span of 12 hours. However, some services might be slower, as it all depends on the server speed, the number of files being transferred, and the server space available.
Determining and negotiating this price is an important point to consider in the long run.
3) Availability for backups:
During the disaster recovery process, in order to keep a business firing on all cylinders, it is important to understand the timelines for recovering the backed-up data. Backups should be available as soon as possible to
avoid any roadblocks that may negatively impact the business. The cloud vendor can inform you of the
recovery timelines and how soon backed up data can be restored during a disaster situation.
4) Data security:
The security of stored data and backups needs to meet certain security guidelines in order to prevent cyber
criminals from exploiting any vulnerabilities. The cloud vendor needs to ensure that all backed-up data is secured with appropriate security measures such as firewalls and encryption tools.
5) Ease of use:
Cloud-based storage comes with its own set of servers, which should be available from the business location
and any other locations as needed. If the cloud server is not available remotely as well as from the business
location, it won’t serve the purpose it is needed for. User experience should be an important factor in the
backup process. If the procedure for data recovery and backup is not convenient, then it might become more
of a hassle.
Data recovery is an integral part of the cloud computing world and it needs to be taken seriously with a great
degree of planning from all ends.

CONCLUSION:
Hence, we successfully studied Cloud Computing, its architecture, the models of cloud computing, its advantages and disadvantages, and one real-time application.
EXPERIMENT NO. 2

AIM: To study and implement Hosted Virtualization using VirtualBox and KVM.
RESOURCES REQUIRED: VirtualBox Machine, Ubuntu.
THEORY:
VirtualBox is a general-purpose full virtualizer for x86 hardware, targeted at server, desktop and embedded
use.
Ubuntu is a Linux distribution based on Debian and composed mostly of free and open-source software.
Ubuntu is officially released in three editions: Desktop, Server, and Core for Internet of Things devices and robots. All the editions can run on a computer alone, or in a virtual machine.
IMPLEMENTATION:
1. Hosted Virtualization on Oracle VirtualBox Hypervisor:
Step 1:- Download Oracle Virtual Box from https://www.virtualbox.org/wiki/Downloads.

Step 2:- Install it in Windows. Once the installation is done, open it.
Step 3:- Create Virtual Machine by clicking on new.

Step 4:- Specify RAM size, HDD size and Network Configuration and finish the set up wizard.

Step 5:- Give the path of the Ubuntu ISO file.


Step 6:- Complete the installation and start using it.

2. Hosted Virtualization on KVM Hypervisor:


Step 1:- Check whether CPU has hardware virtualization support.
KVM only works if your CPU has hardware virtualization support – either Intel VT-x or AMD V.
To determine whether your CPU includes these features, run the following command:
#sudo grep -c "svm\|vmx" /proc/cpuinfo

A 0 indicates that your CPU doesn’t support hardware virtualization, while a 1 or more indicates that it does.
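Alternatively, the cpu-checker package provides the kvm-ok utility, which reports whether KVM acceleration can be used. A minimal sketch, assuming an Ubuntu host with sudo access:

# Install the checker and verify hardware virtualization / KVM support
sudo apt-get install cpu-checker
sudo kvm-ok
# Typical output on a supported CPU:
#   INFO: /dev/kvm exists
#   KVM acceleration can be used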

Step 2:- Install KVM and supporting packages.


Virt-Manager is a graphical application for managing your virtual machines. You can use the kvm command
directly, but libvirt and Virt-Manager simplify the process.
#sudo apt-get install qemu-kvm libvirt-bin bridge-utils virt-manager

Step 3:- Create user.


Only the root user and users in the libvirtd group have permission to use KVM virtual machines.
Run the following command to add your user account to the libvirtd group:
#sudo adduser tsec

#sudo adduser tsec libvirtd

After running this command, log out and log back in as tsec.

Step 4:- Check whether everything is working correctly.


Run following command after logging back in as tsec and you should see an empty list of virtual machines.
This indicates that everything is working correctly.
#virsh -c qemu:///system list

Step 5:- Open Virtual Machine Manager application and create virtual machine.
#virt-manager

Step 6:- Create and run virtual machines.
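Besides the virt-manager GUI, a virtual machine can also be created from the command line with virt-install. The following is only a rough sketch; the VM name, memory size, disk size, ISO path and OS variant are placeholders and it assumes the virtinst package is installed:

# Install the command-line provisioning tool
sudo apt-get install virtinst
# Create a VM with 2 vCPUs, 2 GB RAM and a 20 GB disk, booting from an Ubuntu ISO
sudo virt-install --name ubuntu-vm --vcpus 2 --memory 2048 \
  --disk size=20 --cdrom /path/to/ubuntu.iso --os-variant ubuntu20.04
# Confirm the new VM appears in the list
virsh -c qemu:///system list --all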



CONCLUSION:
Hence, we successfully implemented Virtualization using VirtualBox and KVM.
EXPERIMENT NO. 3

AIM: To study and implement Bare-metal Virtualization using Xen, Hyper-V or VMware ESXi.
RESOURCES REQUIRED: Xen Server.
THEORY:
Technology: Xen / VMware ESXi
• Hosted Virtualization on Oracle Virtual Box Hypervisor
In computing, virtualization refers to the act of creating a virtual (rather than actual) version of something,
including virtual computer hardware platforms, operating systems, storage devices, and computer network
resources.
Why is virtualization useful?
The techniques and features that VirtualBox provides are useful for several scenarios:
• Running multiple operating systems simultaneously. VirtualBox allows you to run more than one
operating system at a time. Since you can configure what kinds of "virtual" hardware should be
presented to each such operating system, you can install an old operating system such as DOS or
OS/2 even if your real computer's hardware is no longer supported by that operating system.
• Easier software installations. Software vendors can use virtual machines to ship entire software
configurations. For example, installing a complete mail server solution on a real machine can be a
tedious task. With VirtualBox, such a complex setup (then often called an "appliance") can be
packed into a virtual machine. Installing and running a mail server becomes as easy as importing
such an appliance into VirtualBox.
• Testing and disaster recovery. Once installed, a virtual machine and its virtual hard disks can be
considered a "container" that can be arbitrarily frozen, woken up, copied, backed up, and transported
between hosts. On top of that, with the use of another VirtualBox feature called "snapshots", one can
save a particular state of a virtual machine and revert back to that state, if necessary. This way, one
can freely experiment with a computing environment. If something goes wrong (e.g. after installing
misbehaving software or infecting the guest with a virus), one can easily switch back to a previous
snapshot and avoid the need of frequent backups and restores. Any number of snapshots can be
created, allowing you to travel back and forward in virtual machine time. You can delete snapshots
while a VM is running to reclaim disk space.
• Infrastructure consolidation. Virtualization can significantly reduce hardware and electricity costs.
Most of the time, computers today only use a fraction of their potential power and run with low
average system loads. A lot of hardware resources as well as electricity is thereby wasted. So, instead
of running many such physical computers that are only partially used, one can pack many virtual
machines onto a few powerful hosts and balance the loads between them.
Hypervisor
A hypervisor or virtual machine monitor (VMM) is a piece of computer software, firmware or hardware that
creates and runs virtual machines. It allows multiple operating systems to share a single hardware host. Each
operating system appears to have the host's processor, memory, and other resources all to itself. However,
the hypervisor is actually controlling the host processor and resources, allocating what is needed to each
operating system in turn and making sure that the guest operating systems (called virtual machines) cannot
disrupt each other.
There are two types of hypervisors: Type 1 and Type 2.
• Type 1 hypervisor (also called a bare metal hypervisor) is installed directly on physical host server
hardware just like an operating system. Type 1 hypervisors run on dedicated hardware. They require
a management console and are used in data centers. Examples include Oracle OVM for SPARC, VMware ESXi, Microsoft Hyper-V and KVM.

• Type 2 hypervisors support guest virtual machines by coordinating calls for CPU, memory, disk,
network and other resources through the physical host's operating system. This makes it easy for an
end user to run a virtual machine on a personal computing device. Examples include VMware
Fusion, Oracle Virtual Box, Oracle VM for x86, Solaris Zones, Parallels and VMware Workstation.

Terminology
• Host operating system (host OS).
This is the operating system of the physical computer on which VirtualBox was installed. There are versions
of VirtualBox for Windows, Mac OS X, Linux and Solaris hosts.
• Guest operating system (guest OS).
This is the operating system that is running inside the virtual machine. Theoretically, VirtualBox can run any
operating system (DOS, Windows, OS/2, FreeBSD, OpenBSD).
• Virtual machine (VM).
This is the special environment that VirtualBox creates for your guest operating system while it is running. In other words, you run your guest operating system "in" a VM.
• Guest Additions.
This refers to special software packages which are shipped with VirtualBox but designed to be installed
inside a VM to improve performance of the guest OS and to add extra features.
IMPLEMENTATION:
Step 1: Install Xen Server

Step i: Insert the bootable XenServer CD into the CD-ROM drive and make the CD-ROM the first boot device in the BIOS.
Step ii: Press F2 to see the advanced options; otherwise press Enter to start the installation.
Step iii: Select the keyboard layout.
Step iv: Press Enter to load the device drivers.
Step v: Press Enter to accept the End User License Agreement.
Step vi: Select the appropriate disk on which you want to install XenServer.
Step vii: Select the appropriate installation media (local media).
Step viii: Select additional packages for installation.
Step ix: Specify the root password.
Step x: Specify an IP address for the XenServer.
Step xi: Select the time zone.
Step xii: Specify the NTP server's address or use manual time entry, then start the installation. Once the installation is done, you will see the final screen shown below.
Step 2: Connect Xen Server to Xen Center
First, download XenCenter, a management utility, from the XenServer by opening the XenServer's IP address as a URL in a browser. Once XenCenter is downloaded, install it. Open XenCenter from the Start menu of Windows.

To connect to the XenServer host you configured earlier, click Add a server.

Enter the IP address you noted earlier. Also enter the password you assigned to your root account. Click Add.

One of the first things you want to make sure of when adding a new XenServer to XenCenter is that the server connection state is saved and restored on startup. Check the box that will do just that.
Once you do that, you will be allowed to configure a master password for all the XenServers you’ll be
associating with this XenCenter. Click the Require a master password checkbox if that’s what you want to
do, and then enter your desired master password in the fields provided.

After you click OK, you’ll be brought back to the main screen, where you’ll see your XenServer already
added to XenCenter.

Step 3: Create Storage Repository and Installing VM


Before creating a VM, we first have to create a Storage Repository (SR), which is nothing but a shared directory on XenCenter that holds all the ISO files required to install an operating system on the XenServer. The steps are as follows. Right-click on the XenServer icon in XenCenter and click on New SR.

Now Select Windows CIFS library



Specify Storage Repository Name

Now specify the path of the shared folder on the client side, which holds all the ISO files of the OS or VM that we are going to install on the XenServer.

At the end, click on Finish to create the SR. To check the ISO files, click on the CIFS library and select Storage; this will show you all the ISO files.
Installation of Ubuntu Server on XenServer
Step 1: Right-click on the XenServer icon in XenCenter and select New VM.

Now select the operating system to be installed; here, select Ubuntu Lucid Lynx and click Next.
Now specify the instance name as ubuntu server.
Select the ISO file of Ubuntu Server 10.10 to be installed.
Now select the hardware for the VM, i.e. the number of CPUs and the memory.
Select local storage.
Select the network.
Click on Finish.
Now go to the Console tab to install Ubuntu and follow the installation steps. A rough command-line equivalent using the xe CLI is sketched below.
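The same VM creation can also be performed from the XenServer host's command line using the xe CLI. This is only a sketch; the template name, ISO name and VM name are placeholders and must match the templates and ISO library actually available on your host:

# List the available templates and storage repositories
xe template-list
xe sr-list
# Create a VM from a template, attach the installation ISO and start it
xe vm-install template="Ubuntu Lucid Lynx 10.04 (64-bit)" new-name-label="ubuntu server"
xe vm-cd-add vm="ubuntu server" cd-name="ubuntu-10.10-server-amd64.iso" device=3
xe vm-start vm="ubuntu server"
# Check the state of all VMs on the host
xe vm-list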

Xen Orchestra provides web-based functionality similar to XenCenter. It provides access to all the VMs installed on the XenServer, along with their lifecycle management, as shown in the figure of the Xen Orchestra (XOA) Portal.

The Windows XP image running on Xen Orchestra in the Google Chrome web browser is shown in the following screenshot.

CONCLUSION:
Hence, we successfully implemented Bare-metal Virtualization using Xen, Hyper-V or VMware ESXi.
EXPERIMENT NO. 4

AIM: To study and implement Infrastructure as a Service using AWS / Microsoft Azure.
RESOURCES REQUIRED: AWS / Azure account.
THEORY:
WHAT IS AWS?
The full form of AWS is Amazon Web Services. It is a platform that offers flexible, reliable, scalable, easy-to-use and cost-effective cloud computing solutions. AWS (Amazon Web Services) is a comprehensive, evolving cloud computing platform provided by Amazon that includes a mixture of infrastructure as a service (IaaS), platform as a service (PaaS) and packaged software as a service (SaaS) offerings. AWS services can offer an organization tools such as compute power, database storage and content delivery services.
Amazon Web Services offers a wide range of global cloud-based products for different business purposes. The products include storage, databases, analytics, networking, mobile, development tools and enterprise applications, with a pay-as-you-go pricing model.

AWS SERVICES:-
AWS COMPUTE SERVICES
Here, are Cloud Compute Services offered by Amazon:-
1. EC2 (Elastic Compute Cloud): EC2 is a virtual machine in the cloud on which you have OS level control.
You can run this cloud server whenever you want.
2. LightSail: This cloud computing tool automatically deploys and manages the computer, storage, and
networking capabilities required to run your application.
3. Elastic Beanstalk: The tool offers automated deployment and provisioning of resources like a highly
scalable production website.
4. EKS (Elastic Container Service for Kubernetes): The tool allows you to run Kubernetes on the Amazon cloud environment without installation.
5. AWS Lambda: This AWS service allows you to run functions in the cloud. The tool is a big cost saver for you, as you have to pay only when your functions execute.
MIGRATION:
Migration services are used to transfer data physically between your data centre and AWS.
1. DMS (Database Migration Service): The DMS service can be used to migrate on-site databases to AWS. It helps you to migrate from one type of database to another, for example, Oracle to MySQL.
2. SMS (Server Migration Service): SMS migration services allow you to migrate on-site servers to AWS easily and quickly.
3. Snowball: Snowball is a small appliance which allows you to transfer terabytes of data into and out of the AWS environment.
STORAGE:
1. Amazon Glacier: It is an extremely low-cost storage service. It offers secure and fast storage for data
archiving and backup.
2. Amazon Elastic Block Store (EBS): It provides block-level storage to use with Amazon EC2 instances.
Amazon Elastic Block Store volumes are network-attached and remain independent from the life of an
instance.
3. AWS Storage Gateway: This AWS service connects on-premises software applications with cloud-based storage. It offers secure integration between the company's on-premises environment and AWS's storage infrastructure.
SECURITY SERVICES:
1. IAM (Identity and Access Management): IAM is a secure cloud security service which helps you to
manage users, assign policies, form groups to manage multiple users.
2. Inspector: It is an agent that you can install on your virtual machines, which reports any security
vulnerabilities.
3. Certificate Manager: The service offers free SSL certificates for your domains that are managed by
Route53.
4. WAF (Web Application Firewall): WAF security service offers application-level protection and allows
you to block SQL injection and helps you to block cross-site scripting attacks.
5. Cloud Directory: This service allows you to create flexible, cloud-native directories for managing
hierarchies of data along multiple dimensions.
DATABASE SERVICES:
1. Amazon RDS: This Database AWS service is easy to set up, operate, and scale a relational database in the
cloud.
2. Amazon DynamoDB: It is a fast, fully managed NoSQL database service. It is a simple service which allows cost-effective storage and retrieval of data. It also allows you to serve any level of request traffic.
3. Amazon ElastiCache: It is a web service which makes it easy to deploy, operate, and scale an in-memory
cache in the cloud.
4. Neptune: It is a fast, reliable and scalable graph database service.
5. Amazon RedShift: It is Amazon’s data warehousing solution which you can use to perform complex
OLAP queries.
ANALYTICS:
1. Athena: This analytics service allows you to perform SQL queries on your S3 bucket to find files.
2. CloudSearch: You should use this AWS service to create a fully managed search engine for your website.
3. ElasticSearch: It is similar to CloudSearch. However, it offers more features like application monitoring.
4. Kinesis: This AWS analytics service helps you to stream and analyze real-time data at massive scale.
5. QuickSight: It is a business analytics tool. It helps you to create visualizations in a dashboard for data in
Amazon Web Services. For example, S3, DynamoDB, etc.
6. EMR (Elastic MapReduce): This AWS analytics service is mainly used for big data processing like Spark, Splunk, Hadoop, etc.
MANAGEMENT SERVICES:
1. CloudWatch: CloudWatch helps you to monitor AWS environments like EC2 and RDS instances and CPU utilization. It also triggers alarms depending on various metrics.
2. AWS Auto Scaling: The service allows you to automatically scale your resources up and down based on
given CloudWatch metrics.
3. Systems Manager: This AWS service allows you to group your resources. It allows you to identify issues
and act on them.
INTERNET OF THINGS:
1. IoT Core: It is a managed cloud AWS service. The service allows connected devices, like cars, light bulbs and sensor grids, to securely interact with cloud applications and other devices.
2. IoT Device Management: It allows you to manage your IoT devices at any scale.
3. IoT Analytics: This AWS IOT service is helpful to perform analysis on data collected by your IoT
devices.
4. Amazon FreeRTOS: This real-time operating system for microcontrollers helps you to connect IoT
devices in the local server or into the cloud.
APPLICATION SERVICES:
1. Step Functions: It is a way of visualizing what's going on inside your application and what different microservices it is using.
2. SWF (Simple Workflow Service): The service helps you to coordinate both automated tasks and human-
led tasks.
3. SNS (Simple Notification Service): You can use this service to send you notifications in the form of email
and SMS based on given AWS services.
4. SQS (Simple Queue Service): Use this AWS service to decouple your applications. It is a pull-based
service.
5. Elastic Transcoder: This AWS service helps you to change a video's format and resolution to support various devices like tablets, smartphones and laptops of different resolutions.
DEPLOYMENT AND MANAGEMENT:
1. AWS CloudTrail: This service records AWS API calls and delivers log files to you.
2. Amazon CloudWatch: This tool monitors AWS resources like Amazon EC2 and Amazon RDS DB instances. It also allows you to monitor custom metrics created by users' applications and services.
3. AWS CloudHSM: This AWS service helps you meet corporate, regulatory and contractual compliance requirements for maintaining data security by using Hardware Security Module (HSM) appliances inside the AWS environment.
DEVELOPER TOOLS:
1. CodeStar: Codestar is a cloud-based service for creating, managing, and working with various software
development projects on AWS.
2. CodeCommit: It is AWS’s version control service which allows you to store your code and other assets
privately in the cloud.
3. CodeBuild: This Amazon developer service helps you to automate the process of building and compiling your code.
4. CodeDeploy: It is a way of deploying your code in EC2 instances automatically.
5. CodePipeline: It helps you create a deployment pipeline with stages like building, testing, authentication and deployment on development and production environments.
MOBILE SERVICES:
1. Mobile Hub: Allows you to add, configure and design features for mobile apps.
2. Cognito: Allows users to sign up using their social identity.
3. Device Farm: Device farm helps you to improve the quality of apps by quickly testing hundreds of mobile
devices.
4. AWS AppSync: It is a fully managed GraphQL service that offers real-time data synchronization and
offline programming features.
DESKTOP AND APP STREAMING:
1. WorkSpaces: Workspace is a VDI (Virtual Desktop Infrastructure). It allows you to use remote desktops
in the cloud.
2. AppStream: A way of streaming desktop applications to your users in the web browser. For example,
using MS Word in Google Chrome.
ARTIFICIAL INTELLIGENCE:
1. Lex: Lex tool helps you to build chatbots quickly.
2. Polly: It is AWS's text-to-speech service that allows you to create audio versions of your notes.
3. Rekognition: It is AWS's face recognition service. This AWS service helps you to recognize faces and objects in images and videos.
4. SageMaker: Sagemaker allows you to build, train, and deploy machine learning models at any scale.
AR AND VR:
1. Sumerian: Sumerian is a set of tools for offering high-quality virtual reality (VR) experiences on the web.
The service allows you to create interactive 3D scenes and publish it as a website for users to access.
CUSTOMER ENGAGEMENT:
1. Amazon Connect: Amazon Connect allows you to create your customer care center in the cloud.
2. Pinpoint: Pinpoint helps you to understand your users and engage with them.
3. SES (Simple Email Service): Helps you to send bulk emails to your customers at a relatively cost-effective price.
GAME DEVELOPMENT:
1. GameLift: It is a service managed by AWS. You can use this service to host dedicated game servers. It allows you to scale seamlessly without taking your game offline.

APPLICATION:
Amazon Web services are widely used for various computing purposes like:
• Web site hosting
• Application hosting/SaaS hosting
• Media Sharing (Image/ Video)
• Mobile and Social Applications
• Content delivery and Media Distribution
• Storage, backup, and disaster recovery
• Development and test environments
• Academic Computing
• Search Engines
• Social Networking

COMPANIES USING AWS:-


• Instagram
• Netflix
• LinkedIn
• Facebook
• Pinterest
• Dropbox

WHAT IS EC2?
An EC2 instance is nothing but a virtual server in Amazon Web services terminology. It stands for Elastic
Compute Cloud. It is a web service where an AWS subscriber can request and provision a compute server in
AWS cloud.
An on-demand EC2 instance is an offering from AWS where the subscriber/user can rent the virtual server
per hour and use it to deploy his/her own applications.
The instance will be charged per hour with different rates based on the type of the instance chosen. AWS
provides multiple instance types for the respective business needs of the user.
Thus, you can rent an instance based on your own CPU and memory requirements and use it as long as you want. You can terminate the instance when it is no longer needed and save on costs.

IMPLEMENTATION:
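The implementation was performed through the AWS Management Console. As a rough command-line equivalent, an EC2 instance can be provisioned with the AWS CLI. This is only a sketch; the AMI ID, security group ID, key-pair name and instance ID are placeholders, and it assumes the AWS CLI is configured with valid credentials:

# Create a key pair and save the private key locally
aws ec2 create-key-pair --key-name ccl-demo-key --query 'KeyMaterial' --output text > ccl-demo-key.pem
# Launch a single t2.micro instance from a chosen AMI (placeholder AMI and security group IDs)
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t2.micro \
  --key-name ccl-demo-key --security-group-ids sg-0123456789abcdef0 --count 1
# Check the instance state and public IP address
aws ec2 describe-instances --query 'Reservations[].Instances[].[InstanceId,State.Name,PublicIpAddress]'
# Terminate the instance when it is no longer needed (placeholder instance ID)
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0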

CONCLUSION:
Hence, we successfully studied and implemented Infrastructure as a Service using AWS /
Microsoft Azure.
EXPERIMENT NO. 5

AIM: To study and implement Platform as a Service using AWS Elastic Beanstalk / Microsoft Azure App
Service.
RESOURCES REQUIRED: AWS / Azure Account.
THEORY:
Amazon Web Services (AWS) comprises over one hundred services, each of which exposes an area of
functionality. While the variety of services offers flexibility for how you want to manage your AWS
infrastructure, it can be challenging to figure out which services to use and how to provision them.
With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without having
to learn about the infrastructure that runs those applications. Elastic Beanstalk reduces management
complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk
automatically handles the details of capacity provisioning, load balancing, scaling, and application health
monitoring. Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and
Ruby. When you deploy your application, Elastic Beanstalk builds the selected supported platform version
and provisions one or more AWS resources, such as Amazon EC2 instances, to run your application.
You can interact with Elastic Beanstalk by using the Elastic Beanstalk console, the AWS Command Line Interface (AWS CLI), or eb, a high-level CLI designed specifically for Elastic Beanstalk. To learn more about how
to deploy a sample web application using Elastic Beanstalk, see Getting Started with AWS: Deploying a
Web App. You can also perform most deployment tasks, such as changing the size of your fleet of Amazon
EC2 instances or monitoring your application, directly from the Elastic Beanstalk web interface (console).
To use Elastic Beanstalk, you create an application, upload an application version in the form of an
application source bundle (for example, a Java .war file) to Elastic Beanstalk, and then provide some
information about the application. Elastic Beanstalk automatically launches an environment and creates and
configures the AWS resources needed to run your code. After your environment is launched, you can then
manage your environment and deploy new application versions. The following diagram illustrates the
workflow of Elastic Beanstalk.

After you create and deploy your application, information about the application—including metrics, events,
and environment status—is available through the Elastic Beanstalk console, APIs, or Command Line
Interfaces, including the unified AWS CLI. AWS Elastic Beanstalk enables you to manage all of the
resources that run your application as environments. Here are some key Elastic Beanstalk concepts.
APPLICATION:-
An Elastic Beanstalk application is a logical collection of Elastic Beanstalk components, including
environments, versions, and environment configurations. In Elastic Beanstalk an application is conceptually
similar to a folder.
APPLICATION VERSION:-
In Elastic Beanstalk, an application version refers to a specific, labeled iteration of deployable code for a
web application. An application version points to an Amazon Simple Storage Service (Amazon S3) object
that contains the deployable code, such as a Java WAR file. An application version is part of an application.
Applications can have many versions and each application version is unique. In a running environment, you
can deploy any application version you already uploaded to the application, or you can upload and
immediately deploy a new application version. You might upload multiple application versions to test
differences between one version of your web application and another.
ENVIRONMENT:-
An environment is a collection of AWS resources running an application version. Each environment runs
only one application version at a time, however, you can run the same application version or different
application versions in many environments simultaneously. When you create an environment, Elastic
Beanstalk provisions the resources needed to run the application version you specified.
ENVIRONMENT TIER:-
When you launch an Elastic Beanstalk environment, you first choose an environment tier. The environment
tier designates the type of application that the environment runs, and determines what resources Elastic
Beanstalk provisions to support it. An application that serves HTTP requests runs in a web server
environment tier. A backend environment that pulls tasks from an Amazon Simple Queue Service (Amazon
SQS) queue runs in a worker environment tier.
ENVIRONMENT CONFIGURATION:-
An environment configuration identifies a collection of parameters and settings that define how an
environment and its associated resources behave. When you update an environment’s configuration settings,
Elastic Beanstalk automatically applies the changes to existing resources or deletes and deploys new
resources (depending on the type of change).
SAVED CONFIGURATION:-
A saved configuration is a template that you can use as a starting point for creating unique environment
configurations. You can create and modify saved configurations, and apply them to environments, using the
Elastic Beanstalk console, EB CLI, AWS CLI, or API. The API and the AWS CLI refer to saved
configurations as configuration templates.
PLATFORM:-
A platform is a combination of an operating system, programming language runtime, web server, application
server, and Elastic Beanstalk components. You design and target your web application to a platform. Elastic
Beanstalk provides a variety of platforms on which you can build your applications.
AWS Elastic Beanstalk for Node.js makes it easy to deploy, manage, and scale your Node.js web
applications using Amazon Web Services. Elastic Beanstalk for Node.js is available to anyone developing or
hosting a web application using Node.js. This chapter provides step-by-step instructions for deploying your
Node.js web application to Elastic Beanstalk using the Elastic Beanstalk management console, and provides
walkthroughs for common tasks such as database integration and working with the Express framework.
After you deploy your Elastic Beanstalk application, you can continue to use EB CLI to manage your
application and environment, or you can use the Elastic Beanstalk console, AWS CLI, or the APIs.

IMPLEMENTATION:
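The implementation was performed through the AWS console. A roughly equivalent workflow using the EB CLI for a Node.js application is sketched below; the application and environment names are placeholders, and it assumes the EB CLI is installed and the application code is in the current directory:

# Initialise an Elastic Beanstalk application on the Node.js platform
eb init my-eb-app --platform node.js --region us-east-1
# Create an environment; Elastic Beanstalk provisions the EC2 instances, load balancing and scaling
eb create my-eb-env
# Deploy the current application version and open the site in a browser
eb deploy
eb open
# Check environment health and recent events
eb status
eb events
# Remove the environment when finished
eb terminate my-eb-env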

CONCLUSION:
Hence, we successfully studied and implemented Platform as a Service using AWS Elastic
Beanstalk / Microsoft Azure App Service.
EXPERIMENT NO. 6

AIM: To implement Storage as a Service using S3 and S3 Glacier.


RESOURCES REQUIRED: AWS Account.
THEORY:
1. TO IMPLEMENT STORAGE AS A SERVICE USING S3:-
WHAT IS AWS S3?
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading
scalability, data availability, security, and performance. Customers of all sizes and industries can use
Amazon S3 to store and protect any amount of data for a range of use cases, such as data lakes, websites,
mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics.
Amazon S3 provides management features so that you can optimize, organize, and configure access to your
data to meet your specific business, organizational, and compliance requirements.

FEATURES OF S3:-
STORAGE CLASSES:
Amazon S3 offers a range of storage classes designed for different use cases. For example, you can store
mission-critical production data in S3 Standard for frequent access, save costs by storing infrequently
accessed data in S3 Standard-IA or S3 One Zone-IA, and archive data at the lowest costs in S3 Glacier
Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive.
STORAGE MANAGEMENT:
Amazon S3 has storage management features that you can use to manage costs, meet regulatory
requirements, reduce latency, and save multiple distinct copies of your data for compliance requirements.
• S3 Lifecycle: Configure a lifecycle policy to manage your objects and store them cost effectively
throughout their lifecycle. You can transition objects to other S3 storage classes or expire objects that
reach the end of their lifetimes.
• S3 Object Lock: Prevent Amazon S3 objects from being deleted or overwritten for a fixed amount of
time or indefinitely. You can use Object Lock to help meet regulatory requirements that require
write-once-read-many (WORM) storage or to simply add another layer of protection against object
changes and deletions.
• S3 Replication: Replicate objects and their respective metadata and object tags to one or more
destination buckets in the same or different AWS Regions for reduced latency, compliance, security,
and other use cases.
• S3 Batch Operations: Manage billions of objects at scale with a single S3 API request or a few clicks
in the Amazon S3 console. You can use Batch Operations to perform operations such as copy,
invoke AWS Lambda function, and restore on millions or billions of objects.
ACCESS MANAGEMENT:
Amazon S3 provides features for auditing and managing access to your buckets and objects. By default, S3
buckets and the objects in them are private. You have access only to the S3 resources that you create. To
grant granular resource permissions that support your specific use case or to audit the permissions of your
Amazon S3 resources, you can use the following features.
• S3 Block Public Access: Block public access to S3 buckets and objects. By default, Block Public Access settings are turned on at the account and bucket level (a CLI sketch is shown after this list).
• AWS Identity and Access Management (IAM): Create IAM users for your AWS account to manage
access to your Amazon S3 resources. For example, you can use IAM with Amazon S3 to control the
type of access a user or group of users has to an S3 bucket that your AWS account owns.
• Bucket Policies: Use IAM-based policy language to configure resource-based permissions for your
S3 buckets and the objects in them.
• Amazon S3 access points: Configure named network endpoints with dedicated access policies to
manage data access at scale for shared datasets in Amazon S3.
• Access control lists (ACLs): Grant read and write permissions for individual buckets and objects to
authorized users. As a general rule, we recommend using S3 resource-based policies (bucket policies
and access point policies) or IAM policies for access control instead of ACLs. ACLs are an access
control mechanism that predates resource-based policies and IAM. For more information about when
you'd use ACLs instead of resource-based policies or IAM policies, see Access policy guidelines.
• S3 Object Ownership: Disable ACLs and take ownership of every object in your bucket, simplifying
access management for data stored in Amazon S3. You, as the bucket owner, automatically own and
have full control over every object in your bucket, and access control for your data is based on
policies.
• Access Analyzer for S3: Evaluate and monitor your S3 bucket access policies, ensuring that the
policies provide only the intended access to your S3 resources.
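As an illustration of the S3 Block Public Access feature listed above, the bucket-level settings can be applied from the AWS CLI. A minimal sketch with a placeholder bucket name:

# Turn on all four Block Public Access settings for a bucket
aws s3api put-public-access-block --bucket my-example-bucket \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
# Verify the current settings
aws s3api get-public-access-block --bucket my-example-bucket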
STORAGE LOGGING AND MONITORING:
Amazon S3 provides logging and monitoring tools that you can use to monitor and control how your
Amazon S3 resources are being used.
STRONG CONSISTENCY:
Amazon S3 provides strong read-after-write consistency for PUT and DELETE requests of objects in your
Amazon S3 bucket in all AWS Regions. This behaviour applies to both writes of new objects as well as PUT
requests that overwrite existing objects and DELETE requests.

HOW AMAZON S3 WORKS:


Amazon S3 is an object storage service that stores data as objects within buckets. An object is a file and any
metadata that describes the file. A bucket is a container for objects.
To store your data in Amazon S3, you first create a bucket and specify a bucket name and AWS Region.
Then, you upload your data to that bucket as objects in Amazon S3. Each object has a key (or key name),
which is the unique identifier for the object within the bucket.
S3 provides features that you can configure to support your specific use case. For example, you can use S3
Versioning to keep multiple versions of an object in the same bucket, which allows you to restore objects
that are accidentally deleted or overwritten.
Buckets and the objects in them are private and can be accessed only if you explicitly grant access
permissions. You can use bucket policies, AWS Identity and Access Management (IAM) policies, access
control lists (ACLs), and S3 Access Points to manage access.
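As a concrete illustration of buckets, keys, and objects, the same create/upload/download cycle can be driven from the AWS CLI. This is only a sketch: the bucket name (my-ccl-demo-bucket), the Region (ap-south-1), and the file names are assumed placeholders, not values from this experiment.
# Create a bucket in a chosen Region (bucket names must be globally unique).
aws s3api create-bucket --bucket my-ccl-demo-bucket --region ap-south-1 --create-bucket-configuration LocationConstraint=ap-south-1
# Upload a local file; "notes/hello.txt" becomes the object key inside the bucket.
aws s3api put-object --bucket my-ccl-demo-bucket --key notes/hello.txt --body hello.txt
# Download the object back by naming the same bucket and key.
aws s3api get-object --bucket my-ccl-demo-bucket --key notes/hello.txt hello-copy.txt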
BUCKETS:
A bucket is a container for objects stored in Amazon S3. You can store any number of objects in a bucket
and can have up to 100 buckets in your account.
Buckets also:
• Organize the Amazon S3 namespace at the highest level.
• Identify the account responsible for storage and data transfer charges.
• Provide access control options, such as bucket policies, access control lists (ACLs), and S3 Access
Points, that you can use to manage access to your Amazon S3 resources.
• Serve as the unit of aggregation for usage reporting.
OBJECTS:
Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and metadata. The
metadata is a set of name-value pairs that describe the object. These pairs include some default metadata,
such as the date last modified, and standard HTTP metadata, such as Content-Type. An object is uniquely
identified within a bucket by a key (name) and a version ID (if S3 Versioning is enabled on the bucket).
KEYS:
An object key is the unique identifier for an object within a bucket. Every object in a bucket has exactly one
key. The combination of a bucket, object key, and optionally, version ID (if S3 Versioning is enabled for the
bucket) uniquely identify each object.
S3 VERSIONING:
S3 Versioning is used to keep multiple variants of an object in the same bucket. With S3 Versioning, we can
preserve, retrieve, and restore every version of every object stored in our buckets.
VERSION ID:
When you enable S3 Versioning in a bucket, Amazon S3 generates a unique version ID for each object
added to the bucket. Objects that already existed in the bucket at the time that you enable versioning have a
version ID of null. If you modify these (or any other) objects with other operations, such as CopyObject and
PutObject, the new objects get a unique version ID.
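A hedged CLI sketch of this behaviour (using the same assumed bucket name as above): enabling versioning and then overwriting a key shows each upload receiving its own version ID.
# Turn on S3 Versioning for the bucket.
aws s3api put-bucket-versioning --bucket my-ccl-demo-bucket --versioning-configuration Status=Enabled
# Overwrite the existing key, then list all versions of it.
aws s3api put-object --bucket my-ccl-demo-bucket --key notes/hello.txt --body hello-v2.txt
aws s3api list-object-versions --bucket my-ccl-demo-bucket --prefix notes/hello.txt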
BUCKET POLICY:
A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy that you can use
to grant access permissions to your bucket and the objects in it. Only the bucket owner can associate a policy
with a bucket. The permissions attached to the bucket apply to all of the objects in the bucket that are owned
by the bucket owner. Bucket policies are limited to 20 KB in size.
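For illustration, a minimal bucket policy that grants a single IAM user read-only access could be attached as shown below; the account ID, user name, and bucket name are placeholders, not values from this experiment.
# policy.json - resource-based permissions for the bucket and its objects.
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:user/example-user"},
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::my-ccl-demo-bucket", "arn:aws:s3:::my-ccl-demo-bucket/*"]
  }]
}
# Attach the policy to the bucket (only the bucket owner can do this).
aws s3api put-bucket-policy --bucket my-ccl-demo-bucket --policy file://policy.json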
S3 ACCESS POINTS:
Amazon S3 Access Points are named network endpoints with dedicated access policies that describe how
data can be accessed using that endpoint. Access Points are attached to buckets that you can use to perform
S3 object operations, such as GetObject and PutObject. Access Points simplify managing data access at
scale for shared datasets in Amazon S3.
Each access point has its own access point policy. You can configure Block Public Access settings for each
access point. To restrict Amazon S3 data access to a private network, you can also configure any access
point to accept requests only from a virtual private cloud (VPC).
ACCESS CONTROL LISTS (ACLs):
ACLs are used to grant read and write permissions to authorized users for individual buckets and objects.
Each bucket and object has an ACL attached to it as a subresource. The ACL defines which AWS accounts
or groups are granted access and the type of access.
REGIONS:
You can choose the geographical AWS Region where Amazon S3 stores the buckets that you create. You
might choose a Region to optimize latency, minimize costs, or address regulatory requirements. Objects
stored in an AWS Region never leave the Region unless you explicitly transfer or replicate them to another
Region. For example, objects stored in the Europe (Ireland) Region never leave it.

IMPLEMENTATION:
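The bucket-and-object workflow described above can also be reproduced from the AWS CLI instead of the console. A minimal sketch, with assumed bucket and file names:
# Make a bucket, copy a local file into it, and list the bucket contents.
aws s3 mb s3://my-ccl-demo-bucket --region ap-south-1
aws s3 cp report.pdf s3://my-ccl-demo-bucket/report.pdf
aws s3 ls s3://my-ccl-demo-bucket
# Generate a time-limited (1 hour) pre-signed URL for sharing the object.
aws s3 presign s3://my-ccl-demo-bucket/report.pdf --expires-in 3600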
2. TO IMPLEMENT STORAGE AS A SERVICE USING S3 GLACIER:-
Amazon S3 Glacier is a secure, durable, and extremely low-cost Amazon S3 storage class for data archiving
and long-term backup. With S3 Glacier, customers can store their data cost effectively for months, years, or
even decades. S3 Glacier enables customers to offload the administrative burdens of operating and scaling
storage to AWS, so they don't have to worry about capacity planning, hardware provisioning, data
replication, hardware failure detection and recovery, or time-consuming hardware migrations. S3 Glacier is
one of the many different storage classes for Amazon S3.
HOW IT WORKS:-
The Amazon S3 Glacier storage classes are purpose-built for data archiving, providing you with the highest
performance, most retrieval flexibility, and the lowest cost archive storage in the cloud. You can now choose
from three archive storage classes optimized for different access patterns and storage duration.

IMPLEMENTATION:
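The Glacier storage classes can also be exercised from the AWS CLI by uploading an object directly into an archive class and later requesting a temporary restore. This is a hedged sketch; the bucket, key, retention days, and retrieval tier are assumed placeholders.
# Upload an object straight into the S3 Glacier Flexible Retrieval (GLACIER) storage class.
aws s3 cp backup.zip s3://my-ccl-demo-bucket/archive/backup.zip --storage-class GLACIER
# Ask S3 to restore a temporary copy for 1 day using the Standard retrieval tier.
aws s3api restore-object --bucket my-ccl-demo-bucket --key archive/backup.zip --restore-request '{"Days":1,"GlacierJobParameters":{"Tier":"Standard"}}'
# Check the restore status reported in the object's metadata.
aws s3api head-object --bucket my-ccl-demo-bucket --key archive/backup.zip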

CONCLUSION:
Hence, we have successfully studied and implemented Storage as a Service using S3 and S3 Glacier.
EXPERIMENT NO. 7

AIM: To study and Implement Database as a Service on SQL/NOSQL databases like AWS RDS, AZURE
SQL/ MongoDB Lab/ Firebase.
RESOURCES REQUIRED: AWS account, MySQL.
THEORY:
WHAT IS AMAZON RDS?
Amazon Relational Database Service (RDS) is a managed SQL database service provided by Amazon Web
Services (AWS). Amazon RDS supports an array of database engines to store and organize data. It also
helps with relational database management tasks, such as data migration, backup, recovery and patching.
Amazon RDS facilitates the deployment and maintenance of relational databases in the cloud. A cloud
administrator uses Amazon RDS to set up, operate, manage and scale a relational instance of a cloud
database. Amazon RDS is not itself a database; it is a service used to manage relational databases.

HOW DOES AMAZON RDS WORK?


Databases are used to store large quantities of data that applications can draw on to help them perform
various functions. A relational database uses tables to store data. It is called relational because it organizes
data points with defined relationships.
Administrators control Amazon RDS with the AWS Management Console, Amazon RDS API calls or the
AWS Command Line Interface. They use these interfaces to deploy database instances to which users can
apply specific settings. Amazon provides several instance types with different combinations of resources,
such as CPU, memory, storage options and networking capacity. Each type comes in a variety of sizes to
suit the needs of different workloads.
RDS users can use AWS Identity and Access Management to define and set permissions for who can access
an RDS database.

AMAZON RDS FEATURES:


REPLICATION:
RDS uses the replication feature to create read replicas. These are read-only copies of database instances
that applications use without altering the original production database. Administrators can also enable
automatic failover across multiple availability zones through RDS Multi-AZ deployment and with
synchronous data replication.
STORAGE:
RDS provides three types of storage:
1. General-purpose solid-state drive (SSD): Amazon recommends this storage as the default choice.
2. Provisioned input-output operations per second (IOPS): SSD storage for I/O-intensive workloads.
3. Magnetic: A lower cost option.
MONITORING:
The Amazon CloudWatch service enables managed monitoring. It lets users view capacity and I/O metrics.
PATCHING:
RDS provides patches for whichever database engine the user chooses.
BACKUPS:
Another feature is failure detection and recovery. RDS provides managed instance backups with transaction
logs to enable point-in-time recovery. Users pick a retention period and restore databases to any time during
that period. They also can manually take snapshots of instances that remain until they are manually deleted.
RDS lets users specify the time and duration of the backup processes. They also can choose how long to
retain backups and snapshots.
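As a sketch of manual snapshots from the AWS CLI (the instance and snapshot identifiers below are assumed example names):
# Take a manual snapshot of an existing DB instance; it is retained until deleted.
aws rds create-db-snapshot --db-instance-identifier ccl-mysql-db --db-snapshot-identifier ccl-mysql-db-snap-1
# Restore that snapshot into a brand-new DB instance.
aws rds restore-db-instance-from-db-snapshot --db-instance-identifier ccl-mysql-db-restored --db-snapshot-identifier ccl-mysql-db-snap-1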
INCREMENTAL BILLING:
Users pay a monthly fee for the instances they launch.
ENCRYPTION:
RDS uses public key encryption to secure automated backups, read replicas, data snapshots and other data
stored at rest.

WHAT ARE THE BENEFITS AND DRAWBACKS OF AMAZON RDS?


BENEFITS: The main benefit of Amazon RDS is that it helps organizations deal with the complexity of
managing large relational databases. Other benefits include the following:
• Ease of use: Admins don't need to learn specific database management tools. They also can manage
multiple database instances using the management console. RDS is compatible with database engines
that users may already be familiar with, such as MySQL and Oracle, and it automates manual backup and recovery processes.
• Cost-effectiveness: According to AWS, customers only pay for what they use. Also, the time spent
maintaining instances is reduced, because maintenance tasks, such as backups and patching, are
automated.
• The use of read replicas routes read-heavy traffic away from the main database instance, reducing the
workload on that one instance.
DRAWBACKS: Some downsides of using Amazon RDS include the following:
• Lack of root access: Because it is a managed service, users do not have root access to the server
running RDS. RDS restricts access for certain procedures to those with advanced privileges.
• Downtime: Systems must go offline for some patching and scaling procedures. The timing of these processes varies; scaling compute resources, for example, typically requires a few minutes of downtime.

AMAZON RDS DATABASE INSTANCES:


A database administrator can create, configure, manage and delete an Amazon RDS instance, along with the
resources it uses. An Amazon RDS instance is a cloud database environment. Admins can also spin up many
databases or schemas; how many depends on the database used.
Amazon RDS limits each customer to a total of 40 database instances per account. AWS imposes further
limitations for Oracle and SQL Server instances. With those database instances, a user generally can only
have up to 10.
AMAZON RDS DATABASE ENGINES:
An AWS customer can spin up six types of database engines within Amazon RDS:
1. Amazon Aurora is a proprietary AWS relational database engine. Amazon Aurora is compatible with
MySQL and PostgreSQL.
2. RDS for MariaDB is compatible with MariaDB, an open source relational database management
system (RDBMS) that's an offshoot of MySQL.
3. RDS for MySQL is compatible with the MySQL open source RDBMS.
4. RDS for Oracle Database is compatible with several editions of Oracle Database, including bring-
your-own-license and license-included versions.
5. RDS for PostgreSQL is compatible with PostgreSQL open source object-RDBMS.
6. RDS for SQL Server is compatible with Microsoft SQL Server, an RDBMS.

AMAZON RDS USE CASES:


Amazon RDS' scalability, security and availability make it useful for a variety of applications. Some
possible uses include the following:
• Online retailing: These applications manage complex databases that track inventories, transactions
and pricing.
• Mobile and online gaming: RDS supports developers that need to continuously update these
applications and users who need high availability.
• Travel applications: Applications like Airbnb take advantage of RDS' ability to simplify time-
consuming database administration tasks and automate database replication.
• Streaming applications: Applications like Netflix take advantage of the storage scalability and availability of Amazon RDS, which allows them to handle high daily demand.
• Finance applications: These applications, like other mobile applications, can use RDS to simplify
administrative database tasks and save time and money.

IMPLEMENTATION:
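The MySQL instance used in this experiment can equally be provisioned from the AWS CLI instead of the console. This is only a sketch; the identifier, instance class, credentials, and storage size are placeholders.
# Launch a small MySQL instance.
aws rds create-db-instance --db-instance-identifier ccl-mysql-db --db-instance-class db.t3.micro --engine mysql --master-username admin --master-user-password 'ChangeMe123!' --allocated-storage 20
# Wait until the instance is available, then read its endpoint address.
aws rds wait db-instance-available --db-instance-identifier ccl-mysql-db
aws rds describe-db-instances --db-instance-identifier ccl-mysql-db --query 'DBInstances[0].Endpoint.Address'
# Connect with the standard MySQL client using that endpoint.
mysql -h <endpoint-address> -u admin -p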

CONCLUSION:
Hence, we successfully studied and implemented Database as a Service on a MySQL database using AWS RDS.
EXPERIMENT NO. 8

AIM: To study and Implement Security as a Service on AWS/Azure.


RESOURCES REQUIRED: AWS/Azure account.
THEORY:
We know that security is job one in the cloud and how important it is that you find accurate and timely
information about Azure security. One of the best reasons to use Azure for your applications and services is
to take advantage of its wide array of security tools and capabilities. These tools and capabilities help make
it possible to create secure solutions on the secure Azure platform. Microsoft Azure provides confidentiality,
integrity, and availability of customer data, while also enabling transparent accountability.
GENERAL AZURE SECURITY:
• Microsoft Defender for Cloud: A cloud workload protection solution that provides security management and advanced threat protection across hybrid cloud workloads.
• Azure Key Vault: A secure secrets store for the passwords, connection strings, and other information you need to keep your apps working.
• Azure Monitor logs: A monitoring service that collects telemetry and other data, and provides a query language and analytics engine to deliver operational insights for your apps and resources. Can be used alone or with other services such as Defender for Cloud.
• Azure Dev/Test Labs: A service that helps developers and testers quickly create environments in Azure while minimizing waste and controlling cost.
STORAGE SECURITY:
• Azure Storage Service Encryption: A security feature that automatically encrypts your data in Azure storage.
• StorSimple Encrypted Hybrid Storage: An integrated storage solution that manages storage tasks between on-premises devices and Azure cloud storage.
• Azure Client-Side Encryption: A client-side encryption solution that encrypts data inside client applications before uploading to Azure Storage; it also decrypts the data while downloading.
• Azure Storage Shared Access Signatures: A shared access signature provides delegated access to resources in your storage account.
• Azure Storage Account Keys: An access control method for Azure storage that is used for authentication when the storage account is accessed.
• Azure File shares with SMB 3.0 Encryption: A network security technology that enables automatic network encryption for the Server Message Block (SMB) file sharing protocol.
• Azure Storage Analytics: A logging and metrics-generating technology for data in your storage account.
DATABASE SECURITY:
• Azure SQL Firewall: A network access control feature that protects against network-based attacks on the database.
• Azure SQL Cell Level Encryption: A database security technology that provides encryption at a granular level.
• Azure SQL Connection Encryption: To provide security, SQL Database controls access with firewall rules limiting connectivity by IP address, authentication mechanisms requiring users to prove their identity, and authorization mechanisms limiting users to specific actions and data.
• Azure SQL Always Encrypted: Protects sensitive data, such as credit card numbers or national identification numbers (for example, U.S. social security numbers), stored in Azure SQL Database or SQL Server databases.
• Azure SQL Transparent Data Encryption: A database security feature that encrypts the storage of an entire database.
• Azure SQL Database Auditing: A database auditing feature that tracks database events and writes them to an audit log in your Azure storage account.
IDENTITY AND ACCESS MANAGEMENT:
• Azure role-based access control: An access control feature designed to allow users to access only the resources they are required to access based on their roles within the organization.
• Azure Active Directory: A cloud-based authentication repository that supports a multitenant, cloud-based directory and multiple identity management services within Azure.
• Azure Active Directory B2C: An identity management service that enables control over how customers sign up, sign in, and manage their profiles when using Azure-based applications.
• Azure Active Directory Domain Services: A cloud-based and managed version of Active Directory Domain Services.
• Azure AD Multi-Factor Authentication: A security provision that employs several different forms of authentication and verification before allowing access to secured information.
BACKUP AND DISASTER RECOVERY:
• Azure Backup: An Azure-based service used to back up and restore data in the Azure cloud.
• Azure Site Recovery: An online service that replicates workloads running on physical and virtual machines (VMs) from a primary site to a secondary location to enable recovery of services after a failure.
NETWORKING:
• Network Security Groups: A network-based access control feature using a 5-tuple to make allow or deny decisions.
• Azure VPN Gateway: A network device used as a VPN endpoint to allow cross-premises access to Azure Virtual Networks.
• Azure Application Gateway: An advanced web application load balancer that can route based on URL and perform SSL-offloading.
• Web application firewall (WAF): A feature of Application Gateway that provides centralized protection of your web applications from common exploits and vulnerabilities.
• Azure Load Balancer: A TCP/UDP application network load balancer.
• Azure ExpressRoute: A dedicated WAN link between on-premises networks and Azure Virtual Networks.
• Azure Traffic Manager: A global DNS load balancer.
• Azure Application Proxy: An authenticating front-end used to secure remote access for web applications hosted on-premises.
• Azure Firewall: A managed, cloud-based network security service that protects your Azure Virtual Network resources.
• Azure DDoS protection: Combined with application design best practices, provides defence against DDoS attacks.
• Virtual Network service endpoints: Extends your virtual network private address space and the identity of your VNet to the Azure services, over a direct connection.

IMPLEMENTATION:
DATABASE FIREWALL PROTECTION:
Database:

Adding Firewall Security:
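The same server-level firewall rule can be added with the Azure CLI; this is only a sketch, and the resource group, server name, rule name, and IP range are placeholders.
# Allow a single client IP address through the Azure SQL server firewall.
az sql server firewall-rule create --resource-group ccl-rg --server ccl-sql-server --name AllowMyClientIP --start-ip-address 203.0.113.25 --end-ip-address 203.0.113.25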

ACCESS CONTROL (IAM):



DDOS:
Creating DDOS plan:
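A hedged Azure CLI equivalent for creating the protection plan is sketched below; the resource group, plan name, and location are placeholders, and the exact syntax may vary with the CLI version.
# Create a DDoS protection plan that virtual networks can later be associated with.
az network ddos-protection create --resource-group ccl-rg --name ccl-ddos-plan --location eastus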

Creating Network with DDOS plan:



Adding DDOS plan as security:

Network with active DDOS plan:



MICROSOFT DEFENDER SECURITY:


Creating Microsoft defender resource:

Adding resources:

Checking active plan:



Enabling Integration:

Enable Logging:

Setting Email for notification:



Work flow automation:

Continuous export:

Security Policy:

CONCLUSION:
Hence, we successfully implemented Security as a Service on Azure.
EXPERIMENT NO. 9

AIM: To study and implement Identity and Access Management (IAM) practices on AWS/Azure cloud.
RESOURCES REQUIRED: AWS/Azure account.
THEORY:
Microsoft Azure IAM, also known as Access Control (IAM), is the product provided in Azure for RBAC
and governance of users and roles. Identity management is a crucial part of cloud operations due to security
risks that can come from misapplied permissions. Whenever you have a new identity (a user, group, or
service principal) or a new resource (such as a virtual machine, database, or storage blob), you should
provide proper access with as limited a scope as possible. Here are some of the questions you should ask
yourself to maintain maximum security:
1. Who needs access?
Granting access to an identity includes both human users and programmatic access from applications and
scripts. If you are utilizing Azure Active Directory, then you likely want to use those managed identities for
role assignments. Consider using an existing group of users or making a new group to apply similar
permissions across a set of users, as you can then remove a user from that group in the future to revoke those
permissions.
Programmatic access is typically granted through Azure service principals. Since it’s not a user logging in,
the application or script will use the App Registration credentials to connect and run any commands.
2. What role do they need?
Azure IAM uses roles to give specific permissions to identities. Azure has a number of built-in roles based
on a few common functions:
• Owner – Full management access, including granting access to others
• Contributor – Management access to perform all actions except granting access to others
• User Access Administrator – Specific access to grant access to others
• Reader – View-only access
These built-in roles can be more specific, such as “Virtual Machine Contributor” or “Log Analytics Reader”.
However, even with these specific pre-defined roles, the principle of least privilege shows that you’re almost
always giving more access than is truly needed.
For even more granular permissions, you can create Azure custom roles and list specific commands that can
be run.
3. Where do they need access?
The final piece of an Azure IAM permission set is deciding the specific resource that the identity should be
able to access. This should be at the most granular level possible to maintain maximum security. For
example, a Cloud Operations Manager may need access at the management group or subscription level,
while a SQL Server utility may just need access to specific database resources. When creating or assigning
the role, this is typically referred to as the “scope” in Azure.
A good rule for choosing scope is to always think twice before using the subscription or management group as a scope.
The scale of your subscription is going to come into consideration, as organizations with many smaller
subscriptions that have very focused purposes may be able to use the subscription-level scope more
frequently. On the flip side, some companies have broader subscriptions, then use resource groups or tags to
limit access, which means the scope is often smaller than a whole subscription.

IMPLEMENTATION:
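Role assignments of this kind can also be created from the Azure CLI. A minimal sketch, where the user, role, and scope are placeholders:
# Grant a user read-only access scoped to a single resource group.
az role assignment create --assignee someuser@example.com --role "Reader" --scope /subscriptions/<subscription-id>/resourceGroups/ccl-rg
# Review what has been assigned to that user.
az role assignment list --assignee someuser@example.com --output table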

CONCLUSION:
Hence, we successfully implemented Identity and Access Management (IAM) practices on
Azure cloud.
EXPERIMENT NO. 10

AIM: To study and implement Containerization using Docker.


RESOURCES REQUIRED: Docker, OwnCloud.
THEORY:
What is Docker?
Docker is an open source containerization platform. It enables developers to package applications into
containers—standardized executable components combining application source code with the operating
system (OS) libraries and dependencies required to run that code in any environment. Containers simplify
delivery of distributed applications, and have become increasingly popular as organizations shift to cloud-
native development and hybrid multicloud environments.
Developers can create containers without Docker, but the platform makes it easier, simpler, and safer to
build, deploy and manage containers. Docker is essentially a toolkit that enables developers to build, deploy,
run, update, and stop containers using simple commands and work-saving automation through a single API.
Docker also refers to Docker, Inc., the company that sells the commercial version of Docker, and to the Docker open source project, to which Docker, Inc. and many other organizations and individuals contribute.

How containers work, and why they're so popular?


Containers are made possible by process isolation and virtualization capabilities built into the Linux kernel.
These capabilities - such as control groups (Cgroups) for allocating resources among processes, and
namespaces for restricting a process's access to, or visibility into, other resources or areas of the system - enable
multiple application components to share the resources of a single instance of the host operating system in
much the same way that a hypervisor enables multiple virtual machines (VMs) to share the CPU, memory
and other resources of a single hardware server.
As a result, container technology offers all the functionality and benefits of VMs - including application
isolation, cost-effective scalability, and disposability - plus important additional advantages:
• Lightweight - Unlike VMs, containers don't carry the payload of an entire OS instance and hypervisor; they include only the OS processes and dependencies necessary to execute the code. Container sizes are measured in megabytes (vs. gigabytes for some VMs), so containers make better use of hardware capacity and have faster startup times.
• Greater Resource Efficiency - With containers, you can run several times as many copies of an
application on the same hardware as you can using VMs. This can reduce your cloud spending.
• Improved Developer Productivity - Compared to VMs, containers are faster and easier to deploy,
provision and restart. This makes them ideal for use in continuous integration and continuous
delivery (CI/CD) pipelines and a better fit for development teams adopting Agile and DevOps
practices.

STEPS:
1. Control Panel -> Programs -> Turn Windows features on or off -> check the Hyper-V option and also check Windows Subsystem for Linux, then click OK and reboot your PC.

2. Go to www.docker.com -> Get Started -> Docker Desktop -> Download for Windows.

3. Right-click the downloaded file (Docker Desktop Installer) and select Run as administrator.
4. Open cmd (Run as administrator) and paste the given command –
docker run --rm --name oc-eval -d -e OWNCLOUD_DOMAIN=localhost:8080 -p8080:8080
owncloud/server
5. Again, right-click on Docker Desktop and select Run as administrator.
6. Finally, go to the Containers option on the right side of the Docker Desktop window -> there is an oc-eval entry -> click Open in browser.

7. Now log in with admin/admin (username/password).


8. Explore the options available in the ownCloud dashboard.
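Optionally, a few Docker commands can be used to verify and clean up the container started in step 4; the container name oc-eval comes from the docker run command above.
# List running containers and confirm oc-eval is up.
docker ps
# Follow the ownCloud server logs.
docker logs -f oc-eval
# Stop the container; because it was started with --rm, it is removed automatically.
docker stop oc-eval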

CONCLUSION:
Hence, we successfully implemented Containerization using Docker.

ASSIGNMENT NO.: 01

Q1. Recent trends in cloud computing and related technologies.


Ans: -
IMPORTANCE OF CLOUD COMPUTING:-
Companies had to keep all of their information and software on their own hard disks and servers prior to
cloud computing. The larger the business, the more storage was required. This method of handling data is
not scalable in terms of speed. For example, if news of your firm spread and you unexpectedly received a
large number of online orders, your computers would most likely collapse. For the IT division, good
business meant a lot of hard labour.
Cloud computing is beneficial to individuals as well as corporations. Individually, the cloud has changed our
life as well. Daily, many of us utilize cloud services. When we use cloud-hosted apps to update our profile
on social media, binge a new subscription series, or check our bank balances, we’re most certainly utilizing
cloud-hosted applications. Instead of being placed on our hard drives or devices, these applications are
accessible over the internet.
Cloud technology allows businesses to expand and adapt quickly, accelerating innovation, driving business
agility, streamlining operations, and lowering costs. This can not only help firms get through the present
situation, but it can also contribute to enhanced, long-term growth.

TRENDS IN CLOUD COMPUTING:-


1. EDGE COMPUTING:
One weak point of current cloud computing is that it’s handled by a limited number of providers who
dominate the space. These large, centralized data-processing centers tie your computing and storage ability
to the proximity, bandwidth and resources provided. With 127 new IoT devices connecting to the Internet
every second, issues of latency, bandwidth, and security are inevitable. Intelligent technologies like AI and
robotics require greater speed and processing power and edge computing is the answer to capitalizing on
these advancements and shaping them in the years ahead. Edge computing is an emerging cloud trend that
involves building localized data centers for computation and storage at or near where they are needed. This
offsets the load on the cloud and improves the deployment and running of a wide array of applications.
Instead of relying on centralized networks, computing and management are handled locally.
2. AS A SERVICE:
For organizations, one of the easiest points of entry into cloud use is the ‘as a service’ model. With the ease,
flexibility, and choice of applications, employing the cloud as a service can greatly impact your organization
and its goals. By enabling companies to provide new services and create applications faster, the cloud as a
service helps you keep up with customer demand. In the aftermath of the pandemic, we’ve seen growth in
key ‘as a service’ offerings as well as the emergence of some new applications. As business continues to
adjust to hybrid work environments in 2021, these applications will only expand.
Platform as a Service (PaaS):
Platform as a Service (PaaS) is also known as cloud application infrastructure services and includes
hardware and software tools. Its use has been steadily rising as organizations invest in modernizing their
‘old school’ applications with cloud-native capabilities. The PaaS market is expected to grow 26.6 percent in
2021, Gartner forecasts, stating that the growth is driven by remote workers needing access to ‘to high
performing, content-rich and scalable infrastructure to perform their duties’. This area is only expected to
grow as more organizations migrate their IT functions to the cloud in response to COVID-19. Enterprise
adoption of platforms like Azure or Google Drive has skyrocketed as teams look for solutions to storing
information and collaborating from a distance. In fact, 59 percent of enterprises expect cloud usage to
exceed prior plans due to COVID-19.
Software as a Service (SaaS):
Software as a Service (SaaS) is one of the first and most successful ‘as a service’ cloud offerings. It includes
all of the services and software offered through a third party on the internet, trading subscriptions for
licensing fees. As one of the biggest cloud application services, SaaS now contributes $20 billion to the
quarterly revenues of software vendors. The number is expected to grow by 32 percent each year.
Competition between SaaS companies has led to a wide array of inexpensive solutions that ensure public
cloud services will dominate the market for years to come. The next generation of SaaS offerings will also
include machine learning as part of their services. While some applications may be better than others, rest
assured, in the near future you’ll be hard-pressed to find a SaaS product that is not labeled ‘intelligent’.
Infrastructure as a Service (IaaS):
Infrastructure as a Service (IaaS) has been around since the beginning of cloud services, but its potential is
yet to be fully actualized. Organizations have been slow to adopt this technology, owing to a reported skill
gap in the cloud migration process. However, thanks to an uptick in cloud education and understanding
borne out of necessity, this up-and-coming cloud solution is expected to eventually outgrow SaaS in
revenue. IaaS refers to pay-as-you-go services that organizations use for storage, networking, and
virtualization. Many companies have taken the path of least resistance by adopting the ‘lift and shift’
approach to cloud migration, not adapting their workflows to get the most out of the cloud. In order to
compete, organizations have discovered they must take a different approach – modernizing processes,
investing in cloud-native development, and refactoring apps to achieve true cloud optimization.
3. CLOUD SECURITY:
Between January and April of 2020, cybercrime saw a sharp increase by 630 percent as new ways of
working created new vulnerabilities to exploit. Spreading workloads between various cloud providers
presents organizations with a considerable issue of governance. No surprise that we found that 65 percent of
senior IT executives believe security and compliance risk are the greatest barriers to realizing the benefits of
cloud. Generating and acting on insights across platforms requires a proactive approach equipped with
sensitivity to potential blind spots. This explains why 28 percent of enterprises consider security to be the
most important criterion when picking a cloud vendor. Although the cloud’s efficiency in terms of time and
money is its most popular feature, organizations are realizing that cutting corners on the cloud can render
their organizational processes opaque; opening a plethora of discreet entry-points for cybercriminals.
As interactions between the cloud and enterprises proliferate, a more organizational understanding of cloud
capabilities is being developed. The lack of this expertise has been one of the biggest drivers of public cloud
adoption as it was easier for businesses to outsource the services they could not manage or develop
themselves. However, as this wealth of knowledge grows in abundance, more organizations will opt for their
own private clouds to maintain greater control over their processes without trading in future flexibility.
Although the private cloud industry saw no significant growth in 2020, we attribute this to the heightened
ability of public providers to navigate organizations through the novel demands of the year. Moving
forward, the power dynamic between the public and private clouds is likely to equalize to some extent. This
will create a more democratic cloud industry guided by organizational needs rather than industrial fixtures.
4. MULTICLOUD:

While most organizations do not make the jump from on-premises to multi-vendor deployments in one go,
93 percent of enterprises have built up to a multicloud strategy. As more workloads are migrated to the
cloud, the industry is becoming more sensitive to the unique requirements of different processes. An average
of 3.4 public clouds and 3.9 private clouds are being deployed or tested per organization, allowing them to
tailor their cloud capabilities to their cloud requirements. Moving forward, more organizations will develop
entirely cloud-native applications with little to no architectural dependence on a specific cloud provider.
Cultivating a firmer understanding of their cloud needs and the cloud industry will teach organizations to
develop with clearer intent than before. However, this paradigm shift is also dependent on the evolution of
cloud capabilities, as time-to-market is steeply improving and the ability to integrate changing workloads
enables organizations to take advantage of even the smallest trends.
5. HYBRID CLOUD:
While a multi-cloud approach leverages the differing allowances of different providers—regardless of public
or private cloud, a hybrid cloud approach categorically focuses on taking advantage of both, the private and
the public cloud. A well-integrated and balanced hybrid strategy gives businesses the best of both worlds.
They can scale further and faster at the behest of the public cloud’s innovative and flexible services without
losing out on the higher cost efficiency, reaction speed and regulatory compliance that go hand in hand with
the capabilities of the private cloud.
6. PUBLIC CLOUD:
Our research finds that migrating areas of your business to the public cloud can cut your Total Cost of
Ownership (TCO) by as much as 40 percent. That number will only increase as top public cloud providers
AWS, Azure and Google improve their services and prices to strengthen their competitive posture.
However, harsher competition could be detrimental to the trend of interoperability, as cloud providers might
look to create an edge for themselves by driving their customers to commit fully to their services. This
would force businesses to compromise on certain capabilities by picking the provider that fits their key
operations the best. Alternatively, public cloud providers could strengthen their existing capabilities and
allow a greater range of choice to promote customer loyalty.
7. CLOUD REALITY:
Despite the tremendous potential of virtual and augmented realities, their dependency on source computing
devices has limited their penetration into the market. In combination with 5G networks, the cloud can bypass
the hardware requirements of AR/VR to allow applications to be rendered, executed and distributed through
the cloud to a larger audience. High capacity, low latency broadband networks will be the key to unlocking
real-time displays, renders, feedback and delivery, maximizing the potential of both, cloud and AR/VR
solutions.
8. CLOUD MONITORING:
Trends like cloud coalitions, machine learning, and data fabrics enable the cloud industry to hone one of its
key components: monitoring. Facing pressure to quickly migrate workloads to the cloud, companies are now
challenged by the task of consolidating metrics on their various cloud servers to generate monetizable
insights. This pursuit is expected to grow the cloud monitoring industry annually by 22.7 percent between
2020 and 2026, when it will be valued at approximately $4.5 billion. End-user services are leveraging
available technologies to develop facilities that monitor and manage applications across cloud platforms. As
new regulations are enforced over the management of information and clouds shift to HTML5, existing
monitoring services will also have to display ingenuity and flexibility.
9. CLOUD-NATIVE APPLICATIONS:

Cloud-native applications are applications born in the cloud, not just reworked to be compatible. These
applications run on cloud infrastructure, as opposed to being installed on an OS or server. This means that
instead of demanding compatibility, cloud-native applications can dictate their environment by interacting
directly via APIs. Such independently linked applications are more resilient and manageable, enabling
organizations to build and scale quickly and efficiently. It’s becoming clear that cloud-native architecture is
the future of application development in an increasingly fast-paced and dynamic time. The use of cloud-
native projects in production continues to grow, with many projects reaching more than 50 percent use in
production.
10. APPLICATION MOBILITY:
For organizations focused on agility and transformation, the runtime environment for apps is liable to
change constantly as technology rapidly matures. This has created a greater need for application mobility—
freeing apps from any one data center or infrastructure and enabling organizations to select the best platform
for their needs. By decoupling applications from their runtime environment, IT teams can migrate between
hypervisors, public cloud, and container-based environments without losing data or risking excessive
downtime.
11. DISTRIBUTED CLOUD:
The cloud derives its name from its omnipresence and lack of physicality. Most CIOs have observed issues
due to its lack of presence, either from the server or in the speed of transmission. However, as the cloud
solidifies its position in enterprise operations, the consequences of latency issues grow. At any one point,
your website is just a 2-second delay away from racking up a 100 percent bounce rate. A component of edge
computing, the distributed cloud has origins in the public and hybrid cloud environments. Public cloud
providers have the opportunity to package their hybrid services and distribute them to different locations,
easing the tension from their central servers and enabling them to better serve high-value clients. Operating
physically closer to clients with large workloads resolves most latency issues and mitigates the risk of total
server failure. The widening of compute zones could also democratize cloud services as smaller businesses
close to the distributed locations could avail the services without incurring traditional server costs.
12. OPEN-SOURCE CLOUD:
Open-source applications employ source code that is publicly accessible to inspect, edit and improve. ‘As a
Service’ cloud solutions built with such code can be dispersed across private, public, and hybrid cloud
environments. There are several reasons why 77 percent of IT leaders intend to utilize open-source code
with greater frequency. Royalty-free source codes will be a significant deterrent to vendor lock-in as data
transferability and open data platforms will allow for services and analytics to be interoperable. Reusing
software stacks, libraries and components will also create more common ground between applications for
interoperability.

ASSIGNMENT NO.: 02

Q1. Comparative study of different computing technologies (Parallel, Distributed, Cluster, Grid, Quantum).
Ans: -
PARALLEL COMPUTING:-
Parallel computing refers to the process of breaking down larger problems into smaller, independent, often
similar parts that can be executed simultaneously by multiple processors communicating via shared memory,
the results of which are combined upon completion as part of an overall algorithm. The primary goal of
parallel computing is to increase available computation power for faster application processing and problem
solving.
Parallel computing infrastructure is typically housed within a single datacentre where several processors are
installed in a server rack; computation requests are distributed in small chunks by the application server that
are then executed simultaneously on each server.
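As a toy illustration of this idea, the following shell sketch fans eight independent chunks of work out to four parallel worker processes; the echo/sleep command stands in for real computation.
# Process chunks 1..8, at most 4 at a time, each in its own process.
seq 1 8 | xargs -P 4 -I{} sh -c 'echo "processing chunk {}"; sleep 1'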
There are generally four types of parallel computing, available from both proprietary and open source parallel computing vendors -- bit-level parallelism, instruction-level parallelism, task parallelism, and superword-level parallelism:
• Bit-level parallelism: increases processor word size, which reduces the quantity of instructions the
processor must execute in order to perform an operation on variables greater than the length of the
word.
• Instruction-level parallelism: the hardware approach works upon dynamic parallelism, in which the
processor decides at run-time which instructions to execute in parallel; the software approach works
upon static parallelism, in which the compiler decides which instructions to execute in parallel
• Task parallelism: a form of parallelization of computer code across multiple processors that runs
several different tasks at the same time on the same data
• Superword-level parallelism: a vectorization technique that can exploit parallelism of inline code

DISTRIBUTED COMPUTING:
Distributed computing is a much broader technology that has been around for more than three decades now.
Simply stated, distributed computing is computing over distributed autonomous computers that
communicate only over a network. Distributed computing systems are usually treated differently from
parallel computing systems or shared-memory systems, where multiple computers share a common memory
pool that is used for communication between the processors. Distributed memory systems use multiple
computers to solve a common problem, with computation distributed among the connected computers
(nodes) and using message-passing to communicate between the nodes. For example, grid computing, discussed later in this assignment, is a form of distributed computing where the nodes may belong to different administrative domains. Another example is network-based storage virtualization, which uses distributed computing between data and metadata servers.

CLUSTER COMPUTING:-
Cluster computing refers to many computers connected on a network that perform like a single entity. Each computer that is connected to the network is called a node. Cluster computing offers solutions to complicated problems by providing faster computational speed and enhanced data integrity. The connected computers execute operations all together, thus creating the impression of a single system (virtual machine). This is termed the transparency of the system. This networking technology performs its operations based on the principle of distributed systems, and here the LAN is the connection unit. Cluster computing has the following features:
• All the connected computers are the same kind of machines.
• They are tightly connected through dedicated network connections.
• All the computers share a common home directory.
A cluster's hardware configuration differs based on the selected networking technologies. Clusters are categorized as open and closed clusters: in open clusters, all the nodes need IPs and are accessed only through the internet or web, which raises additional security concerns; in closed clusters, the nodes are concealed behind a gateway node, which offers increased protection.

Types of Cluster Computing


Clusters are utilized according to the complexity of the information, the content to be managed, and the anticipated operating speed. Many applications that expect high availability with minimal downtime employ cluster computing. The types of cluster computing are:
• Load balancing clusters
• High-availability clusters
• High-performance clusters
Load balancing clusters:
Load balancing clusters are employed in situations of increased network and internet utilization, where they act as the fundamental building block. This type of clustering technique offers the benefits of increased network capacity and enhanced performance. The nodes stay cohesive, and every node is aware of the requests present in the network. The nodes do not operate as a single process; instead, they redirect incoming requests individually as they arrive, depending on the scheduling algorithm. The other crucial element of the load balancing technique is scalability, which is accomplished when every server is fully employed.
High Availability Clusters:
These are also termed failover clusters. Computers often face failures, and high availability matters because of the growing dependency on computers, which hold crucial responsibilities in many organizations and applications. In this approach, redundant computer systems are utilized in the event of any component malfunction. So, even when there is a single-point malfunction, the system remains reliable because the network has redundant cluster elements. Through the implementation of high availability clusters, systems gain extended functionality and provide consistent computing services such as complicated databases, business activities, and customer services like e-commerce websites and network file distribution.
High-Performance Clusters:
This networking approach utilizes supercomputing resources to resolve complex computational problems. Along with the management of IO-intensive applications like web services, high-performance clusters are employed in computational models of climate and vehicle breakdowns. More tightly coupled computer clusters are developed for work that approaches "supercomputing".
High Availability + Load Balancing:
This integrated solution provides extended performance without complicated breakdowns. The combined use of the two clustering techniques provides an ideal solution for network applications and for ISPs too. A few of the features of this integrated technique are as follows:
• Enhanced levels of service quality for conventional network activities
• An extensively scalable architecture
• Transparent integration of stand-alone and non-clustered functionalities into a single virtual system

GRID COMPUTING:-
The use of a widely distributed system strategy to accomplish a common objective is called grid computing. A computational grid can be thought of as a distributed system with non-interactive workloads that involve many files. Grid computing differs from traditional high-performance computing platforms like cluster computing in that each node is dedicated to a certain function or task. Grid computers are also more heterogeneous and geographically dispersed than cluster machines and are not physically coupled. However, a particular grid may be dedicated to a single application, and grids are frequently used for a variety of purposes. Grids are often built with general-purpose grid middleware software libraries, and the size of a grid can be extremely large.
A grid is a form of distributed computing in which a "super virtual computer" is made up of many loosely coupled devices that work together to perform very large tasks. Distributed or grid computing is a form of parallel processing that uses complete computers (with onboard CPUs, storage, power supplies, network interfaces, and so on) connected to a network (private or public) by a conventional network interface, such as Ethernet, for specific applications. This contrasts with the traditional notion of a supercomputer, in which many processors are connected by a local high-speed bus. The technique has been used in enterprises for applications ranging from drug discovery, market analysis, and seismic analysis to back-office data processing in support of e-commerce and web services. It has also been applied to computationally demanding scientific, mathematical, and academic problems through volunteer computing.
Grid computing brings together machines from multiple administrative domains to achieve a common goal, such as completing a single task, and the grid may then disappear just as quickly. Grids can range from a group of computer workstations within a firm to open collaborations involving multiple organizations and networks. A small grid can also be referred to as intra-nodes cooperation, while a bigger, broader grid can be referred to as inter-nodes cooperation.
Managing grid applications can be difficult, particularly when coordinating the flow of data among distributed computational resources. A grid workflow system is a specialized form of workflow automation software built expressly for composing and executing a sequence of computational or data-manipulation steps, or a workflow, in a grid setting.

QUANTUM COMPUTING:-
Quantum computing is the process of using quantum mechanics to solve complex and massive computations quickly and efficiently. As classical computers are used for performing classical computations,
similarly, a Quantum computer is used for performing Quantum computations. Quantum Computations are
too complex to solve that it becomes almost impossible to solve them with classical computers. The word
'Quantum' is derived from the concept of Quantum Mechanics in Physics that describes the physical
properties of the nature of electrons and photons. Quantum is the fundamental framework for deeply
describing and understanding nature. Thus, it is the reason that quantum calculations deal with complexity.
Quantum Computing is a subfield of Quantum Information Science. It describes the best way of dealing
with a complicated computation. Quantum mechanics is based on the phenomena of superposition and entanglement, which are used to perform quantum computations.
Quantum computing deals with the smallest particles found in nature, such as electrons and photons, which are known as quantum particles. Here, superposition refers to the ability of a quantum system to be in multiple states at the same time.
There are the following applications of Quantum Computing:

• Cybersecurity – Personal information is stored in computers in the current era of digitization. So, we
need a very strong system of cybersecurity to protect data from stealing. Classical computers are
good enough for cybersecurity, but the vulnerable threats and attacks weaken it. Scientists are
working with quantum computers in this field. It is also found that it is possible to develop several
techniques to deal with such cybersecurity threats via machine learning.
• Cryptography – It is also a field of security where quantum computers are helping to develop
encryption methods to deliver the packets onto the network safely. Such creation of encryption
methods is known as Quantum Cryptography.
• Weather Forecasting – Sometimes, the process of analyzing becomes too long to forecast the weather
using classical computers. On the other hand, Quantum Computers have enhanced power to analyze,
recognize patterns, and forecast the weather in a short period and with better accuracy. Quantum systems are even able to produce more detailed climate models in good time.
• AI and Machine Learning – AI has become an emerging area of digitization. Many tools, apps and
features have been developed via AI and ML. As the days are passing by, numerous applications are
being developed. As a result, classical systems are challenged to match the required accuracy and speed. Quantum computers can help process such complex problems in far less time, problems that a classical computer would take hundreds of years to solve.

You might also like