60 SahilNazare CCL
Roll No: 60
Year: 2021-2022
COMPUTER ENGINEERING DEPARTMENT
LIST OF EXPERIMENTS
EXPERIMENT NO. 1
AIM: To study Cloud Computing, its architecture, service models, advantages and disadvantages.
THEORY:
What is Cloud Computing?
Cloud computing is the delivery of computing services—including servers, storage, databases, networking,
software, analytics, and intelligence—over the Internet (“the cloud”) to offer faster innovation, flexible
resources, and economies of scale. You typically pay only for cloud services you use, helping lower your
operating costs, run your infrastructure more efficiently and scale as your business needs change.
Cloud computing is a virtualization-based technology that allows us to create, configure, and customize
applications via an internet connection. The cloud technology includes a development platform, hard disk,
software application, and database. The term cloud refers to a network or the internet. It is a technology that
uses remote servers on the internet to store, manage, and access data online rather than local drives. The data
can be anything such as files, images, documents, audio, video, and more.
1. Architecture:-
Front End:-
The front end is used by the client. It contains the client-side interfaces and applications that are required to
access cloud computing platforms. The front end includes web browsers (such as Chrome, Firefox,
Internet Explorer, etc.), thin and fat clients, tablets, and mobile devices.
Back End:-
The back end is used by the service provider. It manages all the resources that are required to provide cloud
computing services. It includes a huge amount of data storage, security mechanism, virtual machines,
deploying models, servers, traffic control mechanisms, etc.
2. Service Models:-
There are three types of cloud service models:
1. Infrastructure as a Service (IaaS)
2. Platform as a Service (PaaS)
3. Software as a Service (SaaS)
1. Infrastructure as a Service (IaaS):
IaaS is also known as Hardware as a Service (HaaS). It is a computing infrastructure managed over the
internet. The main advantage of using IaaS is that it helps users to avoid the cost and complexity of
purchasing and managing the physical servers.
2. Platform as a Service (PaaS):
PaaS provides a development and deployment environment in the cloud. Some of the services offered by
PaaS providers are:
• Programming Languages:- PaaS provides various programming languages for developers to build
applications. Some popular programming languages provided by PaaS providers are Java, PHP, Ruby,
Perl and Go.
• Application Framework:- PaaS providers offer application frameworks to simplify application
development. Some popular application frameworks provided by PaaS providers are Node.js, Drupal,
Joomla, WordPress, Spring, Play, Rack and Zend.
• Databases: PaaS providers offer various databases such as ClearDB, PostgreSQL, MongoDB and Redis
for applications to work with.
Characteristics of PaaS:
PaaS has the following characteristics:
• Accessible to various users via the same development application.
• Integrates with web services and databases.
• Builds on virtualization technology, so resources can easily be scaled up or down as per the
organization’s need.
• Supports multiple languages and frameworks.
• Provides the ability to auto-scale.
Examples: AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google App Engine, Apache Stratos,
Magento Commerce Cloud, and OpenShift.
DISADVANTAGES:
1) Performance Can Vary:
When you are working in a cloud environment, your application runs on a server that simultaneously
provides resources to other businesses. Any greedy behaviour by, or DDoS attack on, another tenant could
affect the performance of your shared resources.
2) Technical Issues:
Cloud technology is always prone to outages and other technical issues. Even the best cloud service
providers may face this type of trouble despite maintaining high standards of maintenance.
3) Security Threat in the Cloud:
Another drawback of working with cloud computing services is security risk. Before adopting cloud
technology, you should be well aware that you will be sharing all your company's sensitive
information with a third-party cloud computing service provider. Hackers might access this information.
4) Downtime:
Downtime should also be considered while working with cloud computing. That’s because your cloud
provider may face power loss, low internet connectivity, service maintenance, etc.
5) Internet Connectivity:
Good internet connectivity is a must in cloud computing. You can't access the cloud without an internet
connection, and there is no other way to retrieve your data from the cloud.
6) Lower Bandwidth:
Many cloud storage service providers limit the bandwidth usage of their users. So, if your organization
surpasses the given allowance, the additional charges can be significantly costly.
7) Lack of Support:
Cloud computing companies often fail to provide proper support to customers. Moreover, they expect their
users to depend on FAQs or online help, which can be tedious for non-technical persons.
CONCLUSION:
Hence, we successfully learned Cloud Computing, its architecture, models of cloud
computing, advantages and disadvantages and one real-time application.
EXPERIMENT NO. 2
AIM: To study and implement Hosted Virtualization using VirtualBox and KVM.
RESOURCES REQUIRED: Oracle VirtualBox, Ubuntu.
THEORY:
VirtualBox is a general-purpose full virtualizer for x86 hardware, targeted at server, desktop and embedded
use.
Ubuntu is a Linux distribution based on Debian and composed mostly of free and open-source software.
Ubuntu is officially released in three editions: Desktop, Server, and Core for Internet of things devices and
robots. All the editions can run on the computer alone, or in a virtual machine.
IMPLEMENTATION:
1. Hosted Virtualization on Oracle VirtualBox Hypervisor:
Step 1:- Download Oracle Virtual Box from https://www.virtualbox.org/wiki/Downloads.
Step 2:- Install it in Windows. Once the installation is done, open it.
Step 3:- Create Virtual Machine by clicking on new.
Step 4:- Specify RAM size, HDD size and Network Configuration and finish the set up wizard.
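The same virtual machine can also be created from the command line with VBoxManage, which ships with VirtualBox. The sketch below only illustrates steps 3 and 4; the VM name, OS type, memory size and disk size are placeholder values:

# Create and register an empty VM definition
VBoxManage createvm --name "ubuntu-vm" --ostype Ubuntu_64 --register
# Assign RAM, CPUs and a NAT network adapter
VBoxManage modifyvm "ubuntu-vm" --memory 2048 --cpus 2 --nic1 nat
# Create a 20 GB virtual disk and attach it through a SATA controller
VBoxManage createmedium disk --filename ubuntu-vm.vdi --size 20480
VBoxManage storagectl "ubuntu-vm" --name "SATA" --add sata
VBoxManage storageattach "ubuntu-vm" --storagectl "SATA" --port 0 --device 0 --type hdd --medium ubuntu-vm.vdi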
2. Hosted Virtualization on KVM Hypervisor:
A 0 returned by the CPU virtualization check command indicates that your CPU doesn't support hardware
virtualization, while a 1 or more indicates that it does. After running this command, log out and log back in as tsec.
Step 5:- Open Virtual Machine Manager application and create virtual machine.
#virt-manager
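On a typical Ubuntu host, the CPU check and the KVM installation behind these steps look roughly like the following sketch; the package and group names are the usual ones on recent Ubuntu releases and may differ on older versions, and tsec is the lab user mentioned above:

# Count the CPU virtualization flags; 0 means no VT-x/AMD-V support
egrep -c '(vmx|svm)' /proc/cpuinfo
# Install KVM, libvirt and the graphical manager
sudo apt-get install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager
# Allow the lab user to manage VMs (the group is named libvirtd on older releases), then log out and back in
sudo adduser tsec libvirt
# Launch Virtual Machine Manager
virt-manager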
CONCLUSION:
Hence, we successfully implemented Virtualization using VirtualBox and KVM.
EXPERIMENT NO. 3
AIM: To study and implement Bare-metal Virtualization using Xen, Hyper-V or VMware ESXi.
RESOURCES REQUIRED: Xen Server.
THEORY:
Technology: Xen / VMware ESXi
• Hosted Virtualization on Oracle VirtualBox Hypervisor
In computing, virtualization refers to the act of creating a virtual (rather than actual) version of something,
including virtual computer hardware platforms, operating systems, storage devices, and computer network
resources.
Why is virtualization useful?
The techniques and features that VirtualBox provides are useful for several scenarios:
• Running multiple operating systems simultaneously. VirtualBox allows you to run more than one
operating system at a time. Since you can configure what kinds of "virtual" hardware should be
presented to each such operating system, you can install an old operating system such as DOS or
OS/2 even if your real computer's hardware is no longer supported by that operating system.
• Easier software installations. Software vendors can use virtual machines to ship entire software
configurations. For example, installing a complete mail server solution on a real machine can be a
tedious task. With VirtualBox, such a complex setup (then often called an "appliance") can be
packed into a virtual machine. Installing and running a mail server becomes as easy as importing
such an appliance into VirtualBox.
• Testing and disaster recovery. Once installed, a virtual machine and its virtual hard disks can be
considered a "container" that can be arbitrarily frozen, woken up, copied, backed up, and transported
between hosts. On top of that, with the use of another VirtualBox feature called "snapshots", one can
save a particular state of a virtual machine and revert back to that state, if necessary. This way, one
can freely experiment with a computing environment. If something goes wrong (e.g. after installing
misbehaving software or infecting the guest with a virus), one can easily switch back to a previous
snapshot and avoid the need of frequent backups and restores. Any number of snapshots can be
created, allowing you to travel back and forward in virtual machine time. You can delete snapshots
while a VM is running to reclaim disk space.
• Infrastructure consolidation. Virtualization can significantly reduce hardware and electricity costs.
Most of the time, computers today only use a fraction of their potential power and run with low
average system loads. A lot of hardware resources as well as electricity is thereby wasted. So, instead
of running many such physical computers that are only partially used, one can pack many virtual
machines onto a few powerful hosts and balance the loads between them.
Hypervisor
A hypervisor or virtual machine monitor (VMM) is a piece of computer software, firmware or hardware that
creates and runs virtual machines. It allows multiple operating systems to share a single hardware host. Each
operating system appears to have the host's processor, memory, and other resources all to itself. However,
the hypervisor is actually controlling the host processor and resources, allocating what is needed to each
operating system in turn and making sure that the guest operating systems (called virtual machines) cannot
disrupt each other.
There are two types of hypervisors: Type 1 and Type 2.
• Type 1 hypervisor (also called a bare metal hypervisor) is installed directly on physical host server
hardware just like an operating system. Type 1 hypervisors run on dedicated hardware. They require
a management console and are used in data centers. Examples include Oracle OVM for SPARC,
ESXi, Hyper-V and KVM.
• Type 2 hypervisors support guest virtual machines by coordinating calls for CPU, memory, disk,
network and other resources through the physical host's operating system. This makes it easy for an
end user to run a virtual machine on a personal computing device. Examples include VMware
Fusion, Oracle VirtualBox, Oracle VM for x86, Solaris Zones, Parallels and VMware Workstation.
Terminology
• Host operating system (host OS).
This is the operating system of the physical computer on which VirtualBox was installed. There are versions
of VirtualBox for Windows, Mac OS X, Linux and Solaris hosts.
• Guest operating system (guest OS).
This is the operating system that is running inside the virtual machine. Theoretically, VirtualBox can run any
operating system (DOS, Windows, OS/2, FreeBSD, OpenBSD).
• Virtual machine (VM).
This is the special environment that VirtualBox creates for your guest operating system while it is running.
• Guest Additions.
This refers to special software packages which are shipped with VirtualBox but designed to be installed
inside a VM to improve performance of the guest OS and to add extra features.
IMPLEMENTATION:
Step 1: Install Xen Server
Step i-: Insert the bootable Xen Server CD into the CD-ROM drive and set the first boot device to CD-ROM
in the BIOS.
Step ii-: Press F2 to see the advanced options; otherwise press Enter to start the installation.
Step iii-: Select the keyboard layout.
Step iv-: Press Enter to load the device drivers.
Step v-: Press Enter to accept the End User License Agreement.
Step vi-: Select the appropriate disk on which you want to install Xen Server.
Step vii-: Select the appropriate installation media (local media).
Step viii-: Select additional packages for installation.
Step ix-: Specify the root password.
Step x-: Specify an IP address for the Xen Server.
Step xi-: Select the time zone.
Step xii-: Specify the NTP server address or use manual time entry, then start the installation. Once the
installation is done you will see the final screen shown below.
Step 2: Connect Xen Server to Xen Center
Firstly, download XenCenter, a management utility, from the Xen Server by opening the Xen Server's IP address
as a URL in a browser. Once XenCenter is downloaded, install it. Open XenCenter from the Start menu of
Windows.
To connect to the XenServer host you configured earlier, click Add a server.
Enter the IP address you noted earlier. Also enter the password you assigned for your root
account. Click Add.
One of the first things you want to do as you're adding a new XenServer to XenCenter is to save and
restore the server connection state on startup. Check the box that does just that.
Once you do that, you will be allowed to configure a master password for all the XenServers you’ll be
associating with this XenCenter. Click the Require a master password checkbox if that’s what you want to
do, and then enter your desired master password in the fields provided.
After you click OK, you’ll be brought back to the main screen, where you’ll see your XenServer already
added to XenCenter.
Now specify the path of the shared folder on the client side which holds all the ISO files of the operating
systems or VMs that we are going to install on the Xen Server.
At the end, click Finish to create the SR. To check the ISO files, click on the CIFS library and select the
storage; this will show you all the ISO files.
Installation of UBUNTU Server on Xen Server
Step 1-: Right-click on the XenServer icon in XenCenter and select New VM.
Now select an operating system to be installed; here select Ubuntu Lucid Lynx and click Next.
Now specify the instance name as ubuntu server.
Select the ISO file of the Ubuntu Server 10.10 image to be installed.
Now select the hardware for the VM, i.e. the number of vCPUs and the memory.
Select local storage.
Select the network.
And click on Finish.
Now go to the Console tab to install Ubuntu and follow the installation steps.
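Once the wizard finishes, the guest can also be checked and started from the XenServer host console with the xe CLI; a minimal sketch, assuming the name-label used in the wizard, is:

# List all VMs known to this XenServer host
xe vm-list
# Start the newly created guest by its name-label
xe vm-start vm="ubuntu server"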
Xen Orchestra provides web-based functionality equivalent to XenCenter. It provides access to all the VMs
installed on the Xen Server, along with their lifecycle management, as shown in the Xen Orchestra (XOA)
portal.
The Windows XP image running on Xen Orchestra in the Google Chrome web browser is shown in the
following screenshot.
CONCLUSION:
Hence, we successfully implemented Bare-metal Virtualization using Xen, Hyper-V or
VMware ESXi.
EXPERIMENT NO. 4
AIM: To study and implement Infrastructure as a Service using AWS / Microsoft Azure.
RESOURCES REQUIRED: AWS / Azure account.
THEORY:
WHAT IS AWS?
The full form of AWS is Amazon Web Services. It is a platform that offers flexible, reliable, scalable, easy-
to-use and, cost-effective cloud computing solutions. AWS (Amazon Web Services) is a comprehensive,
evolving cloud computing platform provided by Amazon that includes a mixture of infrastructure as a
service (IaaS), platform as a service (PaaS) and packaged software as a service (SaaS) offerings. AWS
services can offer an organization tools such as compute power, database storage and content delivery
services.
Amazon Web Services offers a wide range of global cloud-based products for different business purposes. The
products include storage, databases, analytics, networking, mobile, development tools, and enterprise
applications, with a pay-as-you-go pricing model.
AWS SERVICES:-
AWS COMPUTE SERVICES
Here are the cloud compute services offered by Amazon:
1. EC2 (Elastic Compute Cloud): EC2 is a virtual machine in the cloud on which you have OS level control.
You can run this cloud server whenever you want.
2. LightSail: This cloud computing tool automatically deploys and manages the compute, storage, and
networking capabilities required to run your applications.
3. Elastic Beanstalk: The tool offers automated deployment and provisioning of resources like a highly
scalable production website.
4. EKS (Elastic Container Service for Kubernetes): The tool allows you to run Kubernetes on the Amazon cloud
environment without installation.
5. AWS Lambda: This AWS service allows you to run functions in the cloud. The tool is a big cost saver
for you, as you pay only when your functions execute.
MIGRATION:
Migration services are used to transfer data between your data centre and AWS.
1. DMS (Database Migration Service): DMS service can be used to migrate on-site databases to AWS. It
helps you to migrate from one type of database to another. For example, Oracle to MySQL.
2. SMS (Server Migration Service): The SMS migration service allows you to migrate on-site servers to AWS
easily and quickly.
3. Snowball: Snowball is a physical data transport appliance which allows you to transfer terabytes of data
into and out of the AWS environment.
STORAGE:
1. Amazon Glacier: It is an extremely low-cost storage service. It offers secure and fast storage for data
archiving and backup.
2. Amazon Elastic Block Store (EBS): It provides block-level storage to use with Amazon EC2 instances.
Amazon Elastic Block Store volumes are network-attached and remain independent from the life of an
instance.
3. AWS Storage Gateway: This AWS service connects on-premises software applications with cloud-
based storage. It offers secure integration between the company’s on-premises and AWS’s storage
infrastructure.
SECURITY SERVICES:
1. IAM (Identity and Access Management): IAM is a secure cloud security service which helps you to
manage users, assign policies, and form groups to manage multiple users.
2. Inspector: It is an agent that you can install on your virtual machines, which reports any security
vulnerabilities.
3. Certificate Manager: The service offers free SSL certificates for your domains that are managed by
Route53.
4. WAF (Web Application Firewall): WAF security service offers application-level protection and allows
you to block SQL injection and helps you to block cross-site scripting attacks.
5. Cloud Directory: This service allows you to create flexible, cloud-native directories for managing
hierarchies of data along multiple dimensions.
DATABASE SERVICES:
1. Amazon RDS: This Database AWS service is easy to set up, operate, and scale a relational database in the
cloud.
2. Amazon DynamoDB: It is a fast, fully managed NoSQL database service. It is a simple service which
allows cost-effective storage and retrieval of data. It also allows you to serve any level of request traffic.
3. Amazon ElastiCache: It is a web service which makes it easy to deploy, operate, and scale an in-memory
cache in the cloud.
4. Neptune: It is a fast, reliable and scalable graph database service.
5. Amazon RedShift: It is Amazon’s data warehousing solution which you can use to perform complex
OLAP queries.
ANALYTICS:
1. Athena: This analytics service allows you to run SQL queries on your S3 bucket to find files.
2. CloudSearch: You should use this AWS service to create a fully managed search engine for your website.
3. ElasticSearch: It is similar to CloudSearch. However, it offers more features like application monitoring.
4. Kinesis: This AWS analytics service helps you to stream and analyze real-time data at massive scale.
5. QuickSight: It is a business analytics tool. It helps you to create visualizations in a dashboard for data in
Amazon Web Services. For example, S3, DynamoDB, etc.
6. EMR (Elastic MapReduce): This AWS analytics service is mainly used for big data processing with
frameworks like Spark, Splunk, Hadoop, etc.
MANAGEMENT SERVICES:
1. CloudWatch: CloudWatch helps you to monitor AWS environments like EC2 and RDS instances and CPU
utilization. It also triggers alarms depending on various metrics.
2. AWS Auto Scaling: The service allows you to automatically scale your resources up and down based on
given CloudWatch metrics.
3. Systems Manager: This AWS service allows you to group your resources. It allows you to identify issues
and act on them.
INTERNET OF THINGS:
1. IoT Core: It is a managed cloud AWS service. The service allows connected devices, like cars, light
bulbs and sensor grids, to securely interact with cloud applications and other devices.
2. IoT Device Management: It allows you to manage your IoT devices at any scale.
3. IoT Analytics: This AWS IOT service is helpful to perform analysis on data collected by your IoT
devices.
4. Amazon FreeRTOS: This real-time operating system for microcontrollers helps you to connect IoT
devices in the local server or into the cloud.
APPLICATION SERVICES:
1. Step Functions: It is a way of visualizing what's going on inside your application and which different
microservices it is using.
2. SWF (Simple Workflow Service): The service helps you to coordinate both automated tasks and human-
led tasks.
3. SNS (Simple Notification Service): You can use this service to send you notifications in the form of email
and SMS based on given AWS services.
4. SQS (Simple Queue Service): Use this AWS service to decouple your applications. It is a pull-based
service.
5. Elastic Transcoder: This AWS service tool helps you to change a video's format and resolution to
support various devices like tablets, smartphones, and laptops of different resolutions.
DEPLOYMENT AND MANAGEMENT:
1. AWS CloudTrail: The service records AWS API calls and sends log files back to you.
2. Amazon CloudWatch: The tool monitors AWS resources like Amazon EC2 and Amazon RDS DB
Instances. It also allows you to monitor custom metrics created by user’s applications and services.
3. AWS CloudHSM: This AWS service helps you meet corporate, regulatory, and contractual compliance
requirements for maintaining data security by using Hardware Security Module (HSM) appliances inside
the AWS environment.
DEVELOPER TOOLS:
1. CodeStar: Codestar is a cloud-based service for creating, managing, and working with various software
development projects on AWS.
2. CodeCommit: It is AWS’s version control service which allows you to store your code and other assets
privately in the cloud.
3. CodeBuild: This Amazon developer service helps you to automate the process of building and compiling
your code.
4. CodeDeploy: It is a way of deploying your code in EC2 instances automatically.
5. CodePipeline: It helps you create a deployment pipeline with stages like building, testing, authentication,
and deployment to development and production environments.
MOBILE SERVICES:
1. Mobile Hub: Allows you to add, configure and design features for mobile apps.
2. Cognito: Allows users to sign up using their social identity.
3. Device Farm: Device farm helps you to improve the quality of apps by quickly testing hundreds of mobile
devices.
4. AWS AppSync: It is a fully managed GraphQL service that offers real-time data synchronization and
offline programming features.
DESKTOP AND APP STREAMING:
1. WorkSpaces: Workspace is a VDI (Virtual Desktop Infrastructure). It allows you to use remote desktops
in the cloud.
2. AppStream: A way of streaming desktop applications to your users in the web browser. For example,
using MS Word in Google Chrome.
ARTIFICIAL INTELLIGENCE:
1. Lex: Lex tool helps you to build chatbots quickly.
2. Polly: It is AWS's text-to-speech service that allows you to create audio versions of your notes.
3. Rekognition: It is AWS's face recognition service. This AWS service helps you to recognize faces and
objects in images and videos.
4. SageMaker: Sagemaker allows you to build, train, and deploy machine learning models at any scale.
AR AND VR:
1. Sumerian: Sumerian is a set of tools for offering high-quality virtual reality (VR) experiences on the web.
The service allows you to create interactive 3D scenes and publish them as a website for users to access.
CUSTOMER ENGAGEMENT:
1. Amazon Connect: Amazon Connect allows you to create your customer care center in the cloud.
2. Pinpoint: Pinpoint helps you to understand your users and engage with them.
3. SES (Simple Email Service): Helps you to send bulk emails to your customers at a relatively cost-
effective price.
GAME DEVELOPMENT:
1. GameLift: It is a service which is managed by AWS. You can use this service to host dedicated game
servers. It allows you to scale seamlessly without taking your game offline.
APPLICATION:
Amazon Web services are widely used for various computing purposes like:
• Web site hosting
• Application hosting/SaaS hosting
• Media Sharing (Image/ Video)
• Mobile and Social Applications
• Content delivery and Media Distribution
• Storage, backup, and disaster recovery
• Development and test environments
• Academic Computing
• Search Engines
• Social Networking
WHAT IS EC2?
An EC2 instance is nothing but a virtual server in Amazon Web Services terminology. It stands for Elastic
Compute Cloud. It is a web service where an AWS subscriber can request and provision a compute server in
AWS cloud.
An on-demand EC2 instance is an offering from AWS where the subscriber/user can rent the virtual server
per hour and use it to deploy his/her own applications.
The instance will be charged per hour with different rates based on the type of the instance chosen. AWS
provides multiple instance types for the respective business needs of the user.
Thus, you can rent an instance based on your own CPU and memory requirements and use it as long as you
want. You can terminate the instance when it is no longer needed and save on costs.
IMPLEMENTATION:
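As a rough command-line equivalent of the console steps, launching, inspecting and terminating an on-demand instance with the AWS CLI might look like the sketch below (the AMI ID, key pair, security group and instance ID are placeholders):

# Launch a single t2.micro instance from a chosen AMI
aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t2.micro --key-name my-key-pair --security-group-ids sg-0123456789abcdef0 --count 1
# Check the instance state and public IP address
aws ec2 describe-instances --query "Reservations[].Instances[].[InstanceId,State.Name,PublicIpAddress]"
# Terminate the instance when it is no longer needed to stop billing
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0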
CONCLUSION:
Hence, we successfully studied and implemented Infrastructure as a Service using AWS /
Microsoft Azure.
EXPERIMENT NO. 5
AIM: To study and implement Platform as a Service using AWS Elastic Beanstalk / Microsoft Azure App
Service.
RESOURCES REQUIRED: AWS / Azure Account.
THEORY:
Amazon Web Services (AWS) comprises over one hundred services, each of which exposes an area of
functionality. While the variety of services offers flexibility for how you want to manage your AWS
infrastructure, it can be challenging to figure out which services to use and how to provision them.
With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without having
to learn about the infrastructure that runs those applications. Elastic Beanstalk reduces management
complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk
automatically handles the details of capacity provisioning, load balancing, scaling, and application health
monitoring. Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and
Ruby. When you deploy your application, Elastic Beanstalk builds the selected supported platform version
and provisions one or more AWS resources, such as Amazon EC2 instances, to run your application.
You can interact with Elastic Beanstalk by using the Elastic Beanstalk console, the AWS Command Line
Interface (AWS CLI), or eb, a high-level CLI designed specifically for Elastic Beanstalk. To learn more about how
to deploy a sample web application using Elastic Beanstalk, see Getting Started with AWS: Deploying a
Web App. You can also perform most deployment tasks, such as changing the size of your fleet of Amazon
EC2 instances or monitoring your application, directly from the Elastic Beanstalk web interface (console).
To use Elastic Beanstalk, you create an application, upload an application version in the form of an
application source bundle (for example, a Java .war file) to Elastic Beanstalk, and then provide some
information about the application. Elastic Beanstalk automatically launches an environment and creates and
configures the AWS resources needed to run your code. After your environment is launched, you can then
manage your environment and deploy new application versions. The following diagram illustrates the
workflow of Elastic Beanstalk.
After you create and deploy your application, information about the application—including metrics, events,
and environment status—is available through the Elastic Beanstalk console, APIs, or Command Line
Interfaces, including the unified AWS CLI. AWS Elastic Beanstalk enables you to manage all of the
resources that run your application as environments. Here are some key Elastic Beanstalk concepts.
APPLICATION:-
An Elastic Beanstalk application is a logical collection of Elastic Beanstalk components, including
environments, versions, and environment configurations. In Elastic Beanstalk an application is conceptually
similar to a folder.
APPLICATION VERSION:-
In Elastic Beanstalk, an application version refers to a specific, labeled iteration of deployable code for a
web application. An application version points to an Amazon Simple Storage Service (Amazon S3) object
that contains the deployable code, such as a Java WAR file. An application version is part of an application.
Applications can have many versions and each application version is unique. In a running environment, you
can deploy any application version you already uploaded to the application, or you can upload and
immediately deploy a new application version. You might upload multiple application versions to test
differences between one version of your web application and another.
ENVIRONMENT:-
An environment is a collection of AWS resources running an application version. Each environment runs
only one application version at a time, however, you can run the same application version or different
application versions in many environments simultaneously. When you create an environment, Elastic
Beanstalk provisions the resources needed to run the application version you specified.
ENVIRONMENT TIER:-
When you launch an Elastic Beanstalk environment, you first choose an environment tier. The environment
tier designates the type of application that the environment runs, and determines what resources Elastic
Beanstalk provisions to support it. An application that serves HTTP requests runs in a web server
environment tier. A backend environment that pulls tasks from an Amazon Simple Queue Service (Amazon
SQS) queue runs in a worker environment tier.
ENVIRONMENT CONFIGURATION:-
An environment configuration identifies a collection of parameters and settings that define how an
environment and its associated resources behave. When you update an environment’s configuration settings,
Elastic Beanstalk automatically applies the changes to existing resources or deletes and deploys new
resources (depending on the type of change).
SAVED CONFIGURATION:-
A saved configuration is a template that you can use as a starting point for creating unique environment
configurations. You can create and modify saved configurations, and apply them to environments, using the
Elastic Beanstalk console, EB CLI, AWS CLI, or API. The API and the AWS CLI refer to saved
configurations as configuration templates.
PLATFORM:-
A platform is a combination of an operating system, programming language runtime, web server, application
server, and Elastic Beanstalk components. You design and target your web application to a platform. Elastic
Beanstalk provides a variety of platforms on which you can build your applications.
AWS Elastic Beanstalk for Node.js makes it easy to deploy, manage, and scale your Node.js web
applications using Amazon Web Services. Elastic Beanstalk for Node.js is available to anyone developing or
hosting a web application using Node.js. This chapter provides step-by-step instructions for deploying your
Node.js web application to Elastic Beanstalk using the Elastic Beanstalk management console, and provides
walkthroughs for common tasks such as database integration and working with the Express framework.
After you deploy your Elastic Beanstalk application, you can continue to use EB CLI to manage your
application and environment, or you can use the Elastic Beanstalk console, AWS CLI, or the APIs.
IMPLEMENTATION:
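With the EB CLI described above, deploying a small application typically follows a sequence like the sketch below; the application name, environment name, platform and region here are placeholders chosen for illustration:

# Initialise the project directory for Elastic Beanstalk with a Node.js platform
eb init my-app --platform node.js --region us-east-1
# Create an environment; Elastic Beanstalk provisions EC2 instances, load balancing and scaling
eb create my-app-env
# Open the running application in a browser, deploy new versions, and clean up when done
eb open
eb deploy
eb terminate my-app-env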
CONCLUSION:
Hence, we successfully studied and implemented Platform as a Service using AWS Elastic
Beanstalk / Microsoft Azure App Service.
EXPERIMENT NO. 6
AIM: To study and implement Storage as a Service using AWS S3 and S3 Glacier.
THEORY:
FEATURES OF S3:-
STORAGE CLASSES:
Amazon S3 offers a range of storage classes designed for different use cases. For example, you can store
mission-critical production data in S3 Standard for frequent access, save costs by storing infrequently
accessed data in S3 Standard-IA or S3 One Zone-IA, and archive data at the lowest costs in S3 Glacier
Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive.
STORAGE MANAGEMENT:
Amazon S3 has storage management features that you can use to manage costs, meet regulatory
requirements, reduce latency, and save multiple distinct copies of your data for compliance requirements.
• S3 Lifecycle: Configure a lifecycle policy to manage your objects and store them cost effectively
throughout their lifecycle. You can transition objects to other S3 storage classes or expire objects that
reach the end of their lifetimes.
• S3 Object Lock: Prevent Amazon S3 objects from being deleted or overwritten for a fixed amount of
time or indefinitely. You can use Object Lock to help meet regulatory requirements that require
write-once-read-many (WORM) storage or to simply add another layer of protection against object
changes and deletions.
• S3 Replication: Replicate objects and their respective metadata and object tags to one or more
destination buckets in the same or different AWS Regions for reduced latency, compliance, security,
and other use cases.
• S3 Batch Operations: Manage billions of objects at scale with a single S3 API request or a few clicks
in the Amazon S3 console. You can use Batch Operations to perform operations such as copy,
invoke AWS Lambda function, and restore on millions or billions of objects.
ACCESS MANAGEMENT:
Amazon S3 provides features for auditing and managing access to your buckets and objects. By default, S3
buckets and the objects in them are private. You have access only to the S3 resources that you create. To
grant granular resource permissions that support your specific use case or to audit the permissions of your
Amazon S3 resources, you can use the following features.
• S3 Block Public Access: Block public access to S3 buckets and objects. By default, Block Public
Access settings are turned on at the account and bucket level.
• AWS Identity and Access Management (IAM): Create IAM users for your AWS account to manage
access to your Amazon S3 resources. For example, you can use IAM with Amazon S3 to control the
type of access a user or group of users has to an S3 bucket that your AWS account owns.
• Bucket Policies: Use IAM-based policy language to configure resource-based permissions for your
S3 buckets and the objects in them.
• Amazon S3 access points: Configure named network endpoints with dedicated access policies to
manage data access at scale for shared datasets in Amazon S3.
• Access control lists (ACLs): Grant read and write permissions for individual buckets and objects to
authorized users. As a general rule, we recommend using S3 resource-based policies (bucket policies
and access point policies) or IAM policies for access control instead of ACLs. ACLs are an access
control mechanism that predates resource-based policies and IAM. For more information about when
you'd use ACLs instead of resource-based policies or IAM policies, see Access policy guidelines.
• S3 Object Ownership: Disable ACLs and take ownership of every object in your bucket, simplifying
access management for data stored in Amazon S3. You, as the bucket owner, automatically own and
have full control over every object in your bucket, and access control for your data is based on
policies.
• Access Analyzer for S3: Evaluate and monitor your S3 bucket access policies, ensuring that the
policies provide only the intended access to your S3 resources.
STORAGE LOGGING AND MONITORING:
Amazon S3 provides logging and monitoring tools that you can use to monitor and control how your
Amazon S3 resources are being used.
STRONG CONSISTENCY:
Amazon S3 provides strong read-after-write consistency for PUT and DELETE requests of objects in your
Amazon S3 bucket in all AWS Regions. This behaviour applies to writes of new objects, to PUT requests that
overwrite existing objects, and to DELETE requests.
IMPLEMENTATION:
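A rough AWS CLI sketch of the same operations is shown below; the bucket name, file name and lifecycle rule are placeholders (bucket names must be globally unique):

# Create a bucket
aws s3 mb s3://my-example-bucket-60
# Upload a file and list the bucket contents
aws s3 cp notes.txt s3://my-example-bucket-60/
aws s3 ls s3://my-example-bucket-60/
# Add a lifecycle rule that moves objects under logs/ to the Glacier storage class after 30 days
aws s3api put-bucket-lifecycle-configuration --bucket my-example-bucket-60 --lifecycle-configuration '{"Rules":[{"ID":"archive-logs","Status":"Enabled","Filter":{"Prefix":"logs/"},"Transitions":[{"Days":30,"StorageClass":"GLACIER"}]}]}'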
2. TO IMPLEMENT STORAGE AS A SERVICE USING S3 GLACIER:-
Amazon S3 Glacier is a secure, durable, and extremely low-cost Amazon S3 storage class for data archiving
and long-term backup. With S3 Glacier, customers can store their data cost effectively for months, years, or
even decades. S3 Glacier enables customers to offload the administrative burdens of operating and scaling
storage to AWS, so they don't have to worry about capacity planning, hardware provisioning, data
replication, hardware failure detection and recovery, or time-consuming hardware migrations. S3 Glacier is
one of the many different storage classes for Amazon S3.
HOW IT WORKS:-
The Amazon S3 Glacier storage classes are purpose-built for data archiving, providing you with the highest
performance, most retrieval flexibility, and the lowest cost archive storage in the cloud. You can now choose
from three archive storage classes optimized for different access patterns and storage duration.
IMPLEMENTATION:
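The vault operations can also be sketched with the AWS CLI as below; the vault name and archive file are placeholders, and "--account-id -" simply refers to the current account:

# Create a vault in the current account and region
aws glacier create-vault --account-id - --vault-name lab-archive
# Upload an archive; S3 Glacier returns an archive ID that is needed for later retrieval
aws glacier upload-archive --account-id - --vault-name lab-archive --body backup.zip
# List the vaults to confirm
aws glacier list-vaults --account-id -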
CONCLUSION:
Hence, we have successfully studied and implemented Storage as a Service using S3 and S3 Glacier.
EXPERIMENT NO. 7
AIM: To study and Implement Database as a Service on SQL/NOSQL databases like AWS RDS, AZURE
SQL/ MongoDB Lab/ Firebase.
RESOURCES REQUIRED: AWS account, MySQL.
THEORY:
WHAT IS AMAZON RDS?
Amazon Relational Database Service (RDS) is a managed SQL database service provided by Amazon Web
Services (AWS). Amazon RDS supports an array of database engines to store and organize data. It also
helps with relational database management tasks, such as data migration, backup, recovery and patching.
Amazon RDS facilitates the deployment and maintenance of relational databases in the cloud. A cloud
administrator uses Amazon RDS to set up, operate, manage and scale a relational instance of a cloud
database. Amazon RDS is not itself a database; it is a service used to manage relational databases.
IMPLEMENTATION:
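As a rough CLI equivalent of the console steps, creating a small MySQL instance and connecting to it might look like the sketch below (the identifier, credentials, storage size and endpoint are placeholders):

# Create a small MySQL instance
aws rds create-db-instance --db-instance-identifier ccl-mysql --db-instance-class db.t3.micro --engine mysql --master-username admin --master-user-password 'ChangeMe123!' --allocated-storage 20
# After the instance becomes available, fetch its endpoint
aws rds describe-db-instances --db-instance-identifier ccl-mysql --query "DBInstances[0].Endpoint.Address"
# Connect with the MySQL client using the endpoint printed above
mysql -h <endpoint> -u admin -p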
CONCLUSION:
Hence, we successfully studied and implemented database as a service on MYSQL
database using AWS RDS.
EXPERIMENT NO. 8
AIM: To study and implement Security as a Service on Azure.
IMPLEMENTATION:
DATABASE FIREWALL PROTECTION:
Database:
DDOS:
Creating DDOS plan:
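A rough Azure CLI sketch of this step is shown below; the resource group, plan and virtual network names are placeholders, and the virtual network flags are assumed from the az network vnet options:

# Create a DDoS protection plan in an existing resource group
az network ddos-protection create --resource-group ccl-rg --name ccl-ddos-plan
# Associate an existing virtual network with the plan
az network vnet update --resource-group ccl-rg --name ccl-vnet --ddos-protection true --ddos-protection-plan ccl-ddos-plan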
Adding resources:
Enabling Integration:
Enable Logging:
Continuous export:
Security Policy:
CONCLUSION:
Hence, we successfully implemented Security as a Service on Azure.
EXPERIMENT NO. 9
AIM: To study and implement Identity and Access Management (IAM) practices on AWS/Azure cloud.
RESOURCES REQUIRED: AWS/Azure account.
THEORY:
Microsoft Azure IAM, also known as Access Control (IAM), is the product provided in Azure for RBAC
and governance of users and roles. Identity management is a crucial part of cloud operations due to security
risks that can come from misapplied permissions. Whenever you have a new identity (a user, group, or
service principal) or a new resource (such as a virtual machine, database, or storage blob), you should
provide proper access with as limited of a scope as possible. Here are some of the questions you should ask
yourself to maintain maximum security:
1. Who needs access?
Granting access to an identity includes both human users and programmatic access from applications and
scripts. If you are utilizing Azure Active Directory, then you likely want to use those managed identities for
role assignments. Consider using an existing group of users or making a new group to apply similar
permissions across a set of users, as you can then remove a user from that group in the future to revoke those
permissions.
Programmatic access is typically granted through Azure service principals. Since it’s not a user logging in,
the application or script will use the App Registration credentials to connect and run any commands.
2. What role do they need?
Azure IAM uses roles to give specific permissions to identities. Azure has a number of built-in roles based
on a few common functions:
• Owner – Full management access, including granting access to others
• Contributor – Management access to perform all actions except granting access to others
• User Access Administrator – Specific access to grant access to others
• Reader – View-only access
These built-in roles can be more specific, such as “Virtual Machine Contributor” or “Log Analytics Reader”.
However, even with these specific pre-defined roles, the principle of least privilege shows that you’re almost
always giving more access than is truly needed.
For even more granular permissions, you can create Azure custom roles and list specific commands that can
be run.
3. Where do they need access?
The final piece of an Azure IAM permission set is deciding the specific resource that the identity should be
able to access. This should be at the most granular level possible to maintain maximum security. For
example, a Cloud Operations Manager may need access at the management group or subscription level,
while a SQL Server utility may just need access to specific database resources. When creating or assigning
the role, this is typically referred to as the “scope” in Azure.
When deciding the scope of a role, always think twice before using the subscription or management group as
the scope. The scale of your subscription is going to come into consideration, as organizations with many smaller
subscriptions that have very focused purposes may be able to use the subscription-level scope more
frequently. On the flip side, some companies have broader subscriptions, then use resource groups or tags to
limit access, which means the scope is often smaller than a whole subscription.
IMPLEMENTATION:
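A minimal Azure CLI sketch of a role assignment following the least-privilege questions above is shown below; the user, subscription ID and resource group are placeholders:

# Grant a user read-only access scoped to a single resource group
az role assignment create --assignee user@example.com --role "Reader" --scope "/subscriptions/<subscription-id>/resourceGroups/ccl-rg"
# Review the assignments that exist for that user
az role assignment list --assignee user@example.com --output table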
CONCLUSION:
Hence, we successfully implemented Identity and Access Management (IAM) practices on
Azure cloud.
EXPERIMENT NO. 10
AIM: To study and implement containerization using Docker.
STEPS:
1. Control Panel -> Programs -> Turn Windows features on or off -> check the Hyper-V option and also check
Windows Subsystem for Linux, then click OK and reboot your PC.
2. Download Docker Desktop for Windows from the Docker website.
3. Right-click the downloaded file (Docker Desktop Installer) and select Run as administrator.
4. Open cmd as administrator and paste the given command:
docker run --rm --name oc-eval -d -e OWNCLOUD_DOMAIN=localhost:8080 -p8080:8080
owncloud/server
5. Again, right-click on Docker Desktop and select Run as administrator.
6. Finally, go to the Containers option on the right side of the Docker Desktop window; the oc-eval container
is listed there. Click on the option to open it in the browser.
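Before opening the browser, the container state can be verified from the same administrator prompt; the container name oc-eval comes from the docker run command above:

# Show running containers; oc-eval should be listed with port 8080 published
docker ps
# Follow the ownCloud server logs until start-up completes
docker logs -f oc-eval

ownCloud should then be reachable at http://localhost:8080, matching the OWNCLOUD_DOMAIN value passed to the container.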
CONCLUSION:
Hence, we successfully implemented Containerization using Docker.
ASSIGNMENT NO.: 01
2021, Gartner forecasts, stating that the growth is driven by remote workers needing access to ‘high
performing, content-rich and scalable infrastructure to perform their duties’. This area is only expected to
grow as more organizations migrate their IT functions to the cloud in response to COVID-19. Enterprise
adoption of platforms like Azure or Google Drive has skyrocketed as teams look for solutions to storing
information and collaborating from a distance. In fact, 59 percent of enterprises expect cloud usage to
exceed prior plans due to COVID-19.
Software as a Service (SaaS):
Software as a Service (SaaS) is one of the first and most successful ‘as a service’ cloud offerings. It includes
all of the services and software offered through a third party on the internet, trading subscriptions for
licensing fees. As one of the biggest cloud application services, SaaS now contributes $20 billion to the
quarterly revenues of software vendors. The number is expected to grow by 32 percent each year.
Competition between SaaS companies has led to a wide array of inexpensive solutions that ensure public
cloud services will dominate the market for years to come. The next generation of SaaS offerings will also
include machine learning as part of their services. While some applications may be better than others, rest
assured, in the near future you’ll be hard-pressed to find a SaaS product that is not labeled ‘intelligent’.
Infrastructure as a Service (IaaS):
Infrastructure as a Service (IaaS) has been around since the beginning of cloud services, but its potential is
yet to be fully actualized. Organizations have been slow to adopt this technology, owing to a reported skill
gap in the cloud migration process. However, thanks to an uptick in cloud education and understanding
borne out of necessity, this up-and-coming cloud solution is expected to eventually outgrow SaaS in
revenue. IaaS refers to pay-as-you-go services that organizations use for storage, networking, and
virtualization. Many companies have taken the path of least resistance by adopting the ‘lift and shift’
approach to cloud migration, not adapting their workflows to get the most out of the cloud. In order to
compete, organizations have discovered they must take a different approach – modernizing processes,
investing in cloud-native development, and refactoring apps to achieve true cloud optimization.
3. CLOUD SECURITY:
Between January and April of 2020, cybercrime saw a sharp increase by 630 percent as new ways of
working created new vulnerabilities to exploit. Spreading workloads between various cloud providers
presents organizations with a considerable issue of governance. No surprise that we found that 65 percent of
senior IT executives believe security and compliance risk are the greatest barriers to realizing the benefits of
cloud. Generating and acting on insights across platforms requires a proactive approach equipped with
sensitivity to potential blind spots. This explains why 28 percent of enterprises consider security to be the
most important criterion when picking a cloud vendor. Although the cloud’s efficiency in terms of time and
money is its most popular feature, organizations are realizing that cutting corners on the cloud can render
their organizational processes opaque, opening a plethora of discrete entry points for cybercriminals.
As interactions between the cloud and enterprises proliferate, a more organizational understanding of cloud
capabilities is being developed. The lack of this expertise has been one of the biggest drivers of public cloud
adoption as it was easier for businesses to outsource the services they could not manage or develop
themselves. However, as this wealth of knowledge grows in abundance, more organizations will opt for their
own private clouds to maintain greater control over their processes without trading in future flexibility.
Although the private cloud industry saw no significant growth in 2020, we attribute this to the heightened
ability of public providers to navigate organizations through the novel demands of the year. Moving
forward, the power dynamic between the public and private clouds is likely to equalize to some extent. This
will create a more democratic cloud industry guided by organizational needs rather than industrial fixtures.
4. MULTICLOUD:
While most organizations do not make the jump from on-premises to multi-vendor deployments in one go,
93 percent of enterprises have built up to a multicloud strategy. As more workloads are migrated to the
cloud, the industry is becoming more sensitive to the unique requirements of different processes. An average
of 3.4 public clouds and 3.9 private clouds are being deployed or tested per organization, allowing them to
tailor their cloud capabilities to their cloud requirements. Moving forward, more organizations will develop
entirely cloud-native applications with little to no architectural dependence on a specific cloud provider.
Cultivating a firmer understanding of their cloud needs and the cloud industry will teach organizations to
develop with clearer intent than before. However, this paradigm shift is also dependent on the evolution of
cloud capabilities, as time-to-market is steeply improving and the ability to integrate changing workloads
enables organizations to take advantage of even the smallest trends.
5. HYBRID CLOUD:
While a multi-cloud approach leverages the differing allowances of different providers—regardless of public
or private cloud, a hybrid cloud approach categorically focuses on taking advantage of both, the private and
the public cloud. A well-integrated and balanced hybrid strategy gives businesses the best of both worlds.
They can scale further and faster at the behest of the public cloud’s innovative and flexible services without
losing out on the higher cost efficiency, reaction speed and regulatory compliance that go hand in hand with
the capabilities of the private cloud.
6. PUBLIC CLOUD:
Our research finds that migrating areas of your business to the public cloud can cut your Total Cost of
Ownership (TCO) by as much as 40 percent. That number will only increase as top public cloud providers
AWS, Azure and Google improve their services and prices to strengthen their competitive posture.
However, harsher competition could be detrimental to the trend of interoperability, as cloud providers might
look to create an edge for themselves by driving their customers to commit fully to their services. This
would force businesses to compromise on certain capabilities by picking the provider that fits their key
operations the best. Alternatively, public cloud providers could strengthen their existing capabilities and
allow a greater range of choice to promote customer loyalty.
7. CLOUD REALITY:
Despite the tremendous potential of virtual and augmented realities, their dependency on source computing
devices has limited their penetration into the market. In combination with 5G networks, the cloud can bypass
the hardware requirements of AR/VR to allow applications to be rendered, executed and distributed through
the cloud to a larger audience. High capacity, low latency broadband networks will be the key to unlocking
real-time displays, renders, feedback and delivery, maximizing the potential of both, cloud and AR/VR
solutions.
8. CLOUD MONITORING:
Trends like cloud coalitions, machine learning, and data fabrics enable the cloud industry to hone one of its
key components: monitoring. Facing pressure to quickly migrate workloads to the cloud, companies are now
challenged by the task of consolidating metrics on their various cloud servers to generate monetizable
insights. This pursuit is expected to grow the cloud monitoring industry annually by 22.7 percent between
2020 and 2026, when it will be valued at approximately $4.5 billion. End-user services are leveraging
available technologies to develop facilities that monitor and manage applications across cloud platforms. As
new regulations are enforced over the management of information and clouds shift to HTML5, existing
monitoring services will also have to display ingenuity and flexibility.
9. CLOUD-NATIVE APPLICATIONS:
Cloud-native applications are applications born in the cloud, not just reworked to be compatible. These
applications run on cloud infrastructure, as opposed to being installed on an OS or server. This means that
instead of demanding compatibility, cloud-native applications can dictate their environment by interacting
directly via APIs. Such independently linked applications are more resilient and manageable, enabling
organizations to build and scale quickly and efficiently. It’s becoming clear that cloud-native architecture is
the future of application development in an increasingly fast-paced and dynamic time. The use of cloud-
native projects in production continues to grow, with many projects reaching more than 50 percent use in
production.
10. APPLICATION MOBILITY:
For organizations focused on agility and transformation, the runtime environment for apps is liable to
change constantly as technology rapidly matures. This has created a greater need for application mobility—
freeing apps from any one data center or infrastructure and enabling organizations to select the best platform
for their needs. By decoupling applications from their runtime environment, IT teams can migrate between
hypervisors, public cloud, and container-based environments without losing data or risking excessive
downtime.
11. DISTRIBUTED CLOUD:
The cloud derives its name from its omnipresence and lack of physicality. Most CIOs have observed issues
due to its lack of presence, either from the server or in the speed of transmission. However, as the cloud
solidifies its position in enterprise operations, the consequences of latency issues grow. At any one point,
your website is just a 2-second delay away from racking up a 100 percent bounce rate. A component of edge
computing, the distributed cloud has origins in the public and hybrid cloud environments. Public cloud
providers have the opportunity to package their hybrid services and distribute them to different locations,
easing the tension from their central servers and enabling them to better serve high-value clients. Operating
physically closer to clients with large workloads resolves most latency issues and mitigates the risk of total
server failure. The widening of compute zones could also democratize cloud services as smaller businesses
close to the distributed locations could avail the services without incurring traditional server costs.
12. OPEN-SOURCE CLOUD:
Open-source applications employ source code that is publicly accessible to inspect, edit and improve. ‘As a
Service’ cloud solutions built with such code can be dispersed across private, public, and hybrid cloud
environments. There are several reasons why 77 percent of IT leaders intend to utilize open-source code
with greater frequency. Royalty-free source codes will be a significant deterrent to vendor lock-in as data
transferability and open data platforms will allow for services and analytics to be interoperable. Reusing
software stacks, libraries and components will also create more common ground between applications for
interoperability.
ASSIGNMENT NO.: 02
Q1. Comparative study of different computing technologies (Parallel, Distributed, Cluster, Grid, Quantum).
Ans: -
PARALLEL COMPUTING:-
Parallel computing refers to the process of breaking down larger problems into smaller, independent, often
similar parts that can be executed simultaneously by multiple processors communicating via shared memory,
the results of which are combined upon completion as part of an overall algorithm. The primary goal of
parallel computing is to increase available computation power for faster application processing and problem
solving.
Parallel computing infrastructure is typically housed within a single datacentre where several processors are
installed in a server rack; computation requests are distributed in small chunks by the application server that
are then executed simultaneously on each server.
There are generally four types of parallel computing, available from both proprietary and open source
parallel computing vendors -- bit-level parallelism, instruction-level parallelism, task parallelism, or
superword-level parallelism:
• Bit-level parallelism: increases processor word size, which reduces the quantity of instructions the
processor must execute in order to perform an operation on variables greater than the length of the
word.
• Instruction-level parallelism: the hardware approach works upon dynamic parallelism, in which the
processor decides at run-time which instructions to execute in parallel; the software approach works
upon static parallelism, in which the compiler decides which instructions to execute in parallel
• Task parallelism: a form of parallelization of computer code across multiple processors that runs
several different tasks at the same time on the same data
• Superword-level parallelism: a vectorization technique that can exploit parallelism of inline code
DISTRIBUTED COMPUTING:
Distributed computing is a much broader technology that has been around for more than three decades now.
Simply stated, distributed computing is computing over distributed autonomous computers that
communicate only over a network. Distributed computing systems are usually treated differently from
parallel or shared-memory systems, in which multiple processors share a common memory pool that is used
for communication. Distributed-memory systems instead use multiple computers to solve a common problem,
with computation distributed among the connected computers (nodes) and message passing used to
communicate between the nodes. For example, grid computing, discussed later in this assignment, is a form
of distributed computing in which the nodes may belong to different administrative domains. Another
example is network-based storage virtualization, which uses distributed computing between data and
metadata servers.
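As a toy illustration of message passing between nodes (a sketch under the assumption that separate
processes on one machine stand in for networked computers; it is not the document's own example), the
snippet below has a worker "node" that communicates with the coordinator only through message queues,
never through shared memory.

from multiprocessing import Process, Queue

def worker_node(inbox, outbox):
    # Receive a chunk of work as a message, compute on it, and reply.
    chunk = inbox.get()
    outbox.put(sum(chunk))

if __name__ == "__main__":
    to_worker, from_worker = Queue(), Queue()
    node = Process(target=worker_node, args=(to_worker, from_worker))
    node.start()
    to_worker.put(list(range(100)))               # "send" work to the node
    print("partial result:", from_worker.get())   # "receive" its reply
    node.join()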
CLUSTER COMPUTING:-
Cluster computing means that many computers connected on a network perform like a single entity. Each
computer connected to the network is called a node. Cluster computing offers solutions to complicated
problems by providing faster computational speed and enhanced data integrity. The connected computers
execute operations together, creating the impression of a single system (a virtual machine); this property is
termed the transparency of the system. This networking technology operates on the principles of distributed
systems, with a LAN as the connection unit. Cluster computing has the following features:
• All the connected computers are the same kind of machines.
• They are tightly connected through dedicated network connections.
• All the computers share a common home directory.
A cluster's hardware configuration differs based on the selected networking technologies. Clusters are
categorized as open and closed: in an open cluster, all nodes need IP addresses and are accessed through the
internet or web, which raises security concerns; in a closed cluster, the nodes are concealed behind a
gateway node, which offers increased protection.
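A minimal cluster-style sketch, assuming the mpi4py package and an MPI runtime are installed on every
node (an assumption, not something stated in the assignment): each node computes a partial sum and the
results are combined on node 0, so the cluster behaves like a single system.

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this node's id within the cluster
size = comm.Get_size()   # total number of nodes launched

# Each node sums its own slice of the numbers 0..999.
local = sum(range(rank, 1000, size))

# The partial sums are combined on node 0 with a reduction.
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print("cluster-wide sum:", total)

It would be launched with something like "mpirun -n 4 python cluster_sum.py", where cluster_sum.py is a
hypothetical file name.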
GRID COMPUTING:-
The use of a widely dispersed system strategy to accomplish a common objective is called grid computing.
A computational grid can be conceived as a decentralized network of interrelated files and non-interactive
workloads. Grid computing differs from traditional high-performance platforms such as cluster computing
in that each node is dedicated to a particular function or activity. Grid computers are also more
heterogeneous and geographically scattered than cluster machines and are not physically connected. A
particular grid may, however, be dedicated to a single platform, and grids are frequently used for a variety
of purposes. Grids are often built with general-purpose grid middleware packages, and a grid can be
extremely large.
Grids are a form of decentralized network computing in which a "super virtual computer" is made up of many
loosely coupled devices that work together to accomplish massive operations. Distributed or grid computing
is a sort of parallel processing that uses entire devices (with onboard CPUs, storage, power supply, network
connectivity, and so on) linked over a traditional private or public network connection, such as Ethernet, for
specific applications. This contrasts with the traditional notion of a supercomputer, in which many
processors are linked locally by a high-speed bus. The technique has been used in enterprises for
applications ranging from drug development, market analysis, and seismic analysis to back-end data
management in support of e-commerce and online services. It has also been applied to computationally
demanding scientific, numerical, and academic problems via volunteer computing.
Grid computing brings together machines from numerous administrative domains to achieve a common aim,
such as completing a single job, and can disband just as rapidly. Grids can be limited to a group of computer
workstations within a firm, or they can be open collaborations involving multiple organizations and networks.
"A limited grid can also be referred to as intra-node collaboration, while a bigger, broader grid can be
referred to as inter-node cooperation."
Managing grid applications can be difficult, particularly when dealing with the data flows among distant
computational resources. A grid workflow (sequencing) system is workflow automation software built
expressly for composing and executing a sequence of computational or data-manipulation steps in a grid
setting.
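As a rough sketch of what such a workflow system does (my own toy example, not a real grid scheduler),
the snippet below describes a short sequence of data-manipulation steps as a dependency graph and executes
each step once its inputs are ready; the step names are illustrative.

# Each step maps a name to (function, list of dependencies).
steps = {
    "fetch":     (lambda _: list(range(10)), []),
    "transform": (lambda inputs: [x * x for x in inputs[0]], ["fetch"]),
    "summarize": (lambda inputs: sum(inputs[0]), ["transform"]),
}

results = {}
remaining = dict(steps)
while remaining:
    for name, (func, deps) in list(remaining.items()):
        if all(d in results for d in deps):           # inputs ready?
            results[name] = func([results[d] for d in deps])
            del remaining[name]

print("workflow result:", results["summarize"])       # prints 285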
QUANTUM COMPUTING:-
Quantum computing is the process of using quantum mechanics to solve complex and massive computations
quickly and efficiently. Just as classical computers perform classical computations, a quantum computer
performs quantum computations. Some computations are so complex that it becomes almost impossible to
solve them with classical computers. The word 'quantum' comes from quantum mechanics, the branch of
physics that describes the physical properties of nature at the scale of electrons and photons. Quantum
mechanics is the fundamental framework for describing and understanding nature at this scale, which is
why quantum computation is suited to such complexity. Quantum computing is a subfield of quantum
information science and describes ways of dealing with computations that are otherwise intractable.
Quantum mechanics rests on the phenomena of superposition and entanglement, which are used to perform
quantum computations.
A quantum computer deals with the smallest particles found in nature, such as electrons and photons, which
are known as quantum particles. Superposition is the ability of a quantum system to be present in multiple
states at the same time.
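The superposition idea can be illustrated with plain numpy (a simulation on a classical machine, not a real
quantum device): applying a Hadamard gate to a qubit that starts in |0> puts it into an equal superposition of
|0> and |1>, and repeated measurements then come out roughly 50/50.

import numpy as np

ket0 = np.array([1.0, 0.0])                     # qubit state |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate

state = H @ ket0                                # (|0> + |1>) / sqrt(2)
probs = np.abs(state) ** 2                      # Born rule: |amplitude|^2

rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=1000, p=probs)
print("P(0), P(1):", probs)                     # [0.5 0.5]
print("ones measured:", samples.sum(), "out of 1000")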
There are the following applications of Quantum Computing:
• Cybersecurity – Personal information is stored on computers in the current era of digitization, so we
need very strong cybersecurity to protect data from theft. Classical computers handle most of today's
cybersecurity, but increasingly sophisticated threats and attacks are weakening it. Scientists are
working with quantum computers in this field, and it may also be possible to develop techniques
that deal with such cybersecurity threats via machine learning.
• Cryptography – This is another field of security where quantum computers are helping to develop
encryption methods that deliver packets onto the network safely. The creation of such encryption
methods is known as quantum cryptography (a toy key-distribution sketch follows this list).
• Weather Forecasting – Sometimes the analysis takes too long for classical computers to forecast the
weather in time. Quantum computers, on the other hand, have enhanced power to analyze data,
recognize patterns, and forecast the weather in a shorter period and with better accuracy. Quantum
systems may even enable more detailed and timely climate models.
• AI and Machine Learning – AI has become an emerging area of digitization, and many tools, apps,
and features have been developed using AI and ML. As more and more applications are developed,
classical systems are challenged to match the required accuracy and speed. Quantum computers
could help process such complex problems in far less time than the hundreds of years a classical
computer might need to solve them.
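As promised after the cryptography item, here is a toy sketch of the BB84 quantum key distribution idea,
simulated classically with numpy (an illustration under simplifying assumptions, with no eavesdropper and
no real photons): Alice and Bob keep only the bits where their randomly chosen bases happened to agree.

import numpy as np

rng = np.random.default_rng(1)
n = 32                                   # number of raw bits (illustrative)

alice_bits  = rng.integers(0, 2, n)      # Alice's random bits
alice_bases = rng.integers(0, 2, n)      # 0 = rectilinear, 1 = diagonal
bob_bases   = rng.integers(0, 2, n)      # Bob measures in random bases

# If the bases match, Bob reads Alice's bit; otherwise his result is random.
bob_bits = np.where(bob_bases == alice_bases,
                    alice_bits,
                    rng.integers(0, 2, n))

# Public basis comparison (sifting): keep only positions where bases agreed.
keep = alice_bases == bob_bases
sifted_key = alice_bits[keep]
print("sifted key bits:", sifted_key)
print("Alice and Bob agree:", np.array_equal(sifted_key, bob_bits[keep]))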