Cloud Computing Compressed
Subject: Cloud Computing
Vol. 01
Empowering Youth!
Course name: Cloud Computing
Course Id:
Candidate Eligibility: Diploma / Graduate
Course Duration: 550 hours
This student guide contains modules that will help you acquire the relevant knowledge and skills (both generic and domain-specific) for the 'Cloud Architect' job role. Make sure the knowledge in each module is clearly understood before you move on to the next module. Comprehensible diagrams and images from the world of work have been included to add visual appeal and to make the text lively and interactive. You can also try to create your own illustrations using your imagination or with the help of your trainer.
Let us now see what the sections in the modules have for you.
Section 1: Learning Outcomes
This section introduces you to the learning objectives and knowledge criteria covered in the module. It also tells you what you will learn through the various topics covered in the module.
Section 2: Relevant Knowledge
This section provides you with the knowledge needed to achieve the skill and proficiency required to perform the tasks of a Cloud Architect. The knowledge developed through the module will enable you to perform activities relevant to the job market. Read through the textual information to develop an understanding of the various aspects of the module before you complete the exercise(s).
Section 3: Exercises
Each module has exercises, which you should practice on completing the learning sessions of the module. You will perform the activities in the classroom, at home, or at the workplace. The activities included in this section will help you develop the knowledge, skills, and attitude that you need to become competent in performing the tasks at the workplace. The activities should be done under the supervision of your trainer, who will guide you in completing the tasks and also provide feedback to improve your performance.
The review questions included in this section will help you check your progress. You must be able to answer all the questions before you proceed to the next module.
Ideal resources: videos, files, applications, music, and eBooks.
Providers of IT services achieve lower operational costs; hardware and software infrastructures are
built to provide multiple solutions and serve many users, thus increasing efficiency and ultimately
leading to faster return on investment (ROI) as well as lower total cost of ownership (TCO).
The mainframe era collapsed with the advent of fast and inexpensive microprocessors, and IT data centres moved to collections of commodity servers. Apart from its clear advantages, this new model inevitably led to isolation of workloads into dedicated servers, mainly due to incompatibilities between software stacks and operating systems.
These facts reveal the potential of delivering computing services with the speed and reliability that
businesses enjoy with their local machines. The benefits of economies of scale and high utilization
allow providers to offer computing services for a fraction of what it costs for a typical company that
generates its own computing power.
This includes virtualized computers with guaranteed processing power and reserved bandwidth for
storage and Internet access.
Data-Storage-as-a-Service (dSaaS) provides the storage that the consumer uses, including the bandwidth requirements for that storage.
Cloud development environments are built on top of infrastructure services to offer application
development and deployment capabilities; in this level, various programming models, libraries, APIs,
and mashup editors enable the creation of a range of business, Web, and scientific applications.
Once deployed in the cloud, these applications can be consumed by end users.
Enabling Technologies
Key technologies that enabled cloud computing are described in this section; they include
virtualization, Web service and service-oriented architecture, service flows and workflows, and Web
2.0 and mashup.
▪ Many service providers, such as Amazon, del.icio.us, Facebook, and Google, make their service APIs publicly accessible using standard protocols such as SOAP and REST (see the sketch after this list).
▪ In the Software as a Service (SaaS) domain, cloud applications can be built as compositions of
other services from the same or different providers.
▪ Services such as user authentication, e-mail, payroll management, and calendars are examples of building blocks that can be reused and combined in a business solution when a single, ready-made system does not provide all those features. Many building blocks and solutions are now available in public marketplaces.
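As a minimal illustration of consuming such an API, the Python sketch below issues a REST call over HTTP; the endpoint URL is hypothetical, and the third-party requests library is assumed to be installed.

# pip install requests
import requests

# Hypothetical REST endpoint; real providers publish their own URLs and auth schemes.
url = "https://api.example.com/v1/users/42"
response = requests.get(url, headers={"Accept": "application/json"}, timeout=10)
response.raise_for_status()   # raise an error on a non-2xx HTTP status
print(response.json())        # REST services commonly return JSON

SOAP services are consumed similarly, but exchange XML envelopes defined by a WSDL contract instead of plain HTTP verbs on resource URLs.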
Grid Computing
Grid computing enables the aggregation of distributed resources and transparent access to them.
Most production grids such as TeraGrid and EGEE seek to share compute and storage resources
distributed across different administrative domains, with their main focus being speeding up a broad
range of scientific applications, such as climate modelling, drug design, and protein analysis.
▪ The Open Grid Services Architecture (OGSA) addresses this need for standardization by
defining a set of core capabilities and behaviors that address key concerns in grid systems.
Utility Computing
▪ In utility computing environments, users assign a "utility" value to their jobs, where utility is a fixed or time-varying valuation that captures various QoS constraints (deadline, importance, satisfaction).
▪ The valuation is the amount they are willing to pay a service provider to satisfy their demands.
The service providers then attempt to maximize their own utility, where said utility may directly
correlate with their profit.
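As an illustration only, the sketch below models a time-varying utility valuation that decays linearly once a job misses its deadline; the function shape and all numbers are invented for the example.

def utility(finish_time, deadline, max_value, decay_rate):
    """Amount the user is willing to pay, decaying linearly past the deadline."""
    if finish_time <= deadline:
        return max_value
    return max(0.0, max_value - decay_rate * (finish_time - deadline))

# Job worth $100 by its deadline (hour 10); each late hour costs $20 of utility.
print(utility(finish_time=12, deadline=10, max_value=100.0, decay_rate=20.0))  # 60.0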
Autonomic Computing
▪ The increasing complexity of computing systems has motivated research on autonomic
computing, which seeks to improve systems by decreasing human involvement in their operation.
In other words, systems should manage themselves, with high-level guidance from humans.
▪ Autonomic, or self-managing, systems rely on monitoring probes and gauges (sensors), on an
adaptation engine (autonomic manager) for computing optimizations based on monitoring data,
and on effectors to carry out changes on the system.
▪ IBM's Autonomic Computing Initiative has helped define the four properties of autonomic systems: self-configuration, self-optimization, self-healing, and self-protection.
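To make the sensor/manager/effector loop concrete, here is a toy Python sketch of a self-optimizing controller; the metric source and the scaling effectors are placeholders, not any vendor's API.

import random
import time

def read_cpu_load():
    # Sensor (placeholder): in a real system this would query a monitoring probe.
    return random.uniform(0.0, 1.0)

def add_server():
    print("effector: scaling out (adding a server)")    # placeholder effector

def remove_server():
    print("effector: scaling in (removing a server)")   # placeholder effector

def autonomic_manager(cycles=5):
    """Monitor-analyze-plan-execute loop with human-set thresholds as high-level guidance."""
    for _ in range(cycles):
        load = read_cpu_load()          # monitor
        if load > 0.8:                  # analyze and plan
            add_server()                # execute
        elif load < 0.2:
            remove_server()
        time.sleep(1)

autonomic_manager()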
Hybrid Cloud
▪ A cloud computing environment that uses a mix of on-premises private cloud and third-party public cloud services.
▪ It helps you leverage the best of both worlds.
▪ User-centric interface
Cloud interfaces are location-independent and can be accessed through well-established interfaces such as Web services and Internet browsers.
▪ Pricing
Cloud computing does not require up-front investment. No capital expenditure is required. Users pay for services and capacity as they need them.
Cloud Providers
Section 3: Exercises
Exercise 1: Mark the things managed by you and by the vendor in the layered architecture of cloud computing below.
3. Applications and services that run on a distributed network using virtualized resources are known as:
a. Parallel computing
b. Soft computing
c. Distributed computing
d. Cloud computing
----------End of Module----------
Elasticity
▪ Cloud computing gives the illusion of infinite computing resources available on demand.
Therefore, users expect clouds to rapidly provide resources in any quantity at any time.
▪ In particular, it is expected that the additional resources can be
(a) provisioned, possibly automatically, when an application load increases
(b) released when load decreases (scale up and down)
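A minimal sketch of rules (a) and (b), assuming an illustrative per-instance capacity: the number of provisioned instances simply tracks the offered load up and down.

import math

CAPACITY_PER_INSTANCE = 100  # requests/second one instance can serve (invented figure)

def instances_needed(requests_per_second):
    """Scale out when load rises and scale in when it falls, never below one instance."""
    return max(1, math.ceil(requests_per_second / CAPACITY_PER_INSTANCE))

for load in [50, 250, 900, 120]:  # load increases, then decreases
    print(f"{load} req/s -> {instances_needed(load)} instance(s)")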
Cloud computing is the distributed computing model that provides computing facilities and
resources to users in an on-demand, pay-as-you-go model.
Fair Comparison
▪ One of the objectives of service provisioning is fair comparison among the available services offered by cloud service providers (CSPs). Generally, users compare different cloud offerings according to their priorities and along several dimensions to select whatever is appropriate to their needs.
▪ It is a difficult task to perform an unbiased comparison and evaluation of all services. Several
challenges must be addressed to develop an evaluation model that precisely measures the
service level of each cloud provider. This study aims to provide a comparable service analysis
for the cloud user to choose among desired services.
Prediction
▪ Prediction is important in cloud service provisioning.
▪ A service user should be ensured of the elasticity and scalability of the services, even during
peak hours or when the user suddenly makes an unusually high demand on the resources.
▪ In this situation, one of the objectives of the service provisioning selection is that the request
should be instantly fulfilled by the service provider.
▪ Therefore, the user should be assured that the required resources are available on demand, with predictably elastic and scalable services.
Rank
▪ Selecting the best and most appropriate service is a vital factor for the cloud service user.
▪ Selecting services depends on comparing and ranking them suitably.
▪ A reasonable and acceptable ranking system helps the cloud customer to make decisions
about service selection.
▪ Therefore, the cloud service ranking system is an important aspect of a fair cloud service
comparison and selection process.
▪ However, there is a lack of comparison of services across providers due to a lack of common
comparable criteria or attributes.
▪ SaaS delivers simple software programs and applications, as well as customer interfaces, to end users. In the application layer, this type of service is therefore called software as a service (SaaS).
▪ By using the client software or browser, the user can connect to services from providers via
the Internet and pay fees according to the services consumed, in a pay-as-you-go model.
▪ Customer relationship management (CRM) from Salesforce is one of the early SaaS
applications. Among other services, Google provides online office tools such as
documentation, presentations, and spreadsheets, which are all part of SaaS.
There are several types of service provisioning from which we can make need-based selections, as
discussed below.
▪ Agility & Availability
▪ Pricing
▪ Security & Trust
▪ Quality of Service
Survey of Storage
▪ Companies such as Google, Amazon and Microsoft have been building massive data centers
over the past few years.
▪ Spanning geographic and administrative domains, these data centers tend to be built out of commodity desktops, with the total number of computers managed by these companies being on the order of millions.
▪ Additionally, the use of virtualization allows a physical node to be presented as a set of virtual
nodes resulting in a seemingly inexhaustible set of computational resources.
▪ By leveraging economies of scale, these data centers can provision CPU, networking, and storage at substantially reduced prices, which in turn underpins the move by many institutions to host their services in the cloud.
▪ Let's look at the most dominant storage strategies currently used in cloud computing settings. There are several unifying themes that underlie these systems.
➢ Firstly, it is necessary to protect the data during upload into the data center, to ensure that the data do not get hijacked on the way into the database.
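As one concrete (but illustrative) way to protect an upload, the sketch below sends a file to Amazon S3 with boto3, which uses TLS in transit by default, and requests server-side encryption at rest; the bucket and key names are hypothetical.

# pip install boto3  (credentials come from the standard AWS config/environment)
import boto3

s3 = boto3.client("s3")  # boto3 talks to S3 over HTTPS (TLS) by default
with open("q1.csv", "rb") as payload:
    s3.put_object(
        Bucket="my-example-bucket",        # hypothetical bucket
        Key="reports/2024/q1.csv",         # hypothetical object key
        Body=payload,
        ServerSideEncryption="AES256",     # encrypt the object once it lands
    )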
----------End of Module----------
Salesforce.com, which relies on the SaaS model, offers business productivity applications (CRM)
that reside completely on their servers, allowing customers to customize and access applications
on demand.
3. Competitive Landscape
For SaaS companies, competitive analysis is an essential part of the overall product strategy. Competitive analysis can be done relatively quickly for a horizontal SaaS company. There are standard
tools that most companies use, such as Google Alerts and SEMRush. These tools allow you to track
competitors’ rankings, keywords, and traffic volume over time.
▪ Horizontal SaaS companies need to monitor the traditional factors that impact their industry:
product features, pricing, brand value, and customer reviews.
▪ Vertical SaaS companies need to analyze the competitive landscape of their niche market. For example, if you're in real estate software, you'll want to understand the search queries used by potential buyers and sellers. You'll also want to understand the advertising options available on relevant feeds, e.g., Facebook and Instagram.
4. Marketing
When it comes to marketing strategies, horizontal SaaS is focused on user acquisition, and vertical
SaaS concentrates on customer retention.
▪ The goal of horizontal SaaS is to get as many users as possible using the software, so that the company gains high market share and can eventually charge more for its product once established. These companies often offer their product for free or at a low cost and then charge a premium for the extra features users need.
▪ They rely heavily on user feedback and use it to adjust the features they provide and how they
market their product. Their key metric is user adoption and how often users opt-in to use the
software.
▪ Vertical SaaS products are built for a specific industry or group of people (e.g., real estate agents, hospitals). So the goal is not necessarily to attract as many new users as possible but rather to establish trust with existing customers so that those customers stay loyal to the service over time. To keep those customers loyal, vertical SaaS companies will sometimes offer free trials, thereby allowing potential customers to try out the software before buying it.
APIs are Insufficient: Many SaaS providers have responded to the integration challenge by
developing application programming interfaces (APIs). Unfortunately, accessing and managing data
via an API requires a significant amount of coding as well as maintenance due to frequent API
modifications and updates.
Data Transmission Security: SaaS providers go to great lengths to ensure that customer data is secure within the hosted environment. However, the need to exchange data between on-premises systems or applications behind the firewall and SaaS applications hosted outside of the client's data center poses new challenges that need to be addressed by the integration solution of choice.
Business Services - SaaS providers offer various business services to help start up a business. SaaS business services include ERP (Enterprise Resource Planning), CRM (Customer Relationship Management), billing, and sales.
Social Networks - Since social networking sites are used by the general public, social networking service providers use SaaS for convenience and to handle the general public's information.
Google Apps
▪ Google Apps (2010) is a typical SaaS implementation.
▪ It provides several Web applications with similar functionality to traditional office software (word
processing, spreadsheets etc.), but also enables users to communicate, create and collaborate
easily and efficiently.
▪ Since all the applications are kept online and are accessed through a web browser, users can
access their accounts from any internet-connected computer, and there is no need to install
anything extra locally.
OpenSaaS
2) One to Many
▪ SaaS services are offered on a one-to-many model, meaning a single instance of the application is shared by multiple users.
6) Multidevice support
▪ SaaS services can be accessed from any device, such as desktops, laptops, tablets, phones, and thin clients.
7) API Integration
▪ SaaS services easily integrate with other software or services through standard APIs.
8) No client-side installation
▪ SaaS services are accessed directly from the service provider over an internet connection, so no client-side software installation is required.
Disadvantages of SaaS
1) Security
Because data is stored in the cloud, security may be an issue for some users; cloud deployment is not necessarily more secure than in-house deployment.
2) Latency issue
Since data and applications are stored in the cloud at a variable distance from the end user, there is a possibility of greater latency when interacting with the application compared to local deployment.
Applicability of SaaS
▪ Enterprise software applications
➢ Sharing of data between internal and external users, e.g., the Salesforce CRM application
▪ Single-user software applications
➢ Run on a single user's computer and serve one user at a time, e.g., Microsoft Office
▪ Business Utility SaaS - Applications like Salesforce automation are used by businesses and
individuals for managing and collecting data, streamlining collaborative processes and providing
actionable analysis. Popular use cases are Customer Relationship Management (CRM),
Human Resources and Accounting.
▪ Social Networking SaaS - Applications like Facebook are used by individuals for networking
and sharing information, photos, videos, etc.
SaaS-Adoption challenges
Some limitations slow down the acceptance of SaaS and prohibit it from being used in some cases:
▪ Because data is stored on the vendor's servers, data security becomes an issue.
▪ SaaS applications are hosted in the cloud, far away from the application users. This introduces
latency into the environment; for example, the SaaS model is not suitable for applications that
demand response times in milliseconds (OLTP).
1. Define SaaS.
2. What are the Benefits of SaaS?
3. What are the Different Types of SaaS?
4. Give 2-3 Examples of SaaS.
5. What Is a B2B SaaS Product?
6. What Is a B2C SaaS Product?
7. What are the Characteristics of SaaS?
8. List a Few SaaS Providers.
9. List a Few Limitations of SaaS.
10. What are the Disadvantages of SaaS?
----------End of Module----------
PaaS Architecture
▪ PaaS enables developers to develop, test, and deploy in the
same environment. A typical PaaS architecture consists of the
following categories:
➢ Integration and Middleware: The software that offers runtime services.
➢ API: The Application Programming Interface, which acts as the communication layer between client and server, offering abstraction (handling details in the background) and core connectivity.
➢ Hardware: All the hardware required to handle the resources.
▪ This allows users to build and run applications without the complexity of constructing and maintaining the infrastructure, as the PaaS architecture covers those requirements.
Machine Learning
▪ If you genuinely want to take advantage of your data, it’s not enough to just store it in the cloud.
The data is still just sitting around, only in a new location.
▪ You need to set up algorithms to sift through your data and find meaningful insights and actionable steps.
▪ With cloud-based machine learning platforms, you can easily create models (from templates), apply them to your databases, and scale your computing power as needed.
Public PaaS
▪ Public Platform as a Service runs on the public cloud, so the user only has to focus on building the application.
▪ It helps developers to be more agile, which helps them to develop and deliver faster. And the
vendor manages and maintains the infrastructure.
Hybrid PaaS
▪ Hybrid Platform as a Service offers the flexibility to choose how much of the infrastructure remains under the user's control.
▪ A hybrid is a combination of private and public PaaS, pairing the control of the private portion with the scalability of the public portion.
▪ These platforms reduce the time taken to develop and deploy, increase flexibility, help users
achieve performance and better results, and maintain control over the cost.
Serverless vs PaaS
▪ Both serverless and PaaS provide the same facilities, as they both are backend architectures
that hide the backend from the developers.
▪ They only differ in scalability, timing, start-up time and tools, and deployment process.
▪ Differences are:
➢ Serverless pricing is precise, as it charges developers only for the time the application actually runs. PaaS pricing is less precise, as PaaS vendors typically charge a monthly fee for the services offered.
➢ PaaS provides more control over the deployment environment, while on the other hand,
serverless provides less control over the environment.
➢ Serverless applications spin up only when needed. PaaS applications can also be up and running quickly, but they are not as lightweight as serverless. The agility serverless provides to its applications makes it more suitable for web applications.
▪ This does not mean that serverless services are always more affordable.
▪ It depends on the type of application we are developing and the facilities and services we
require.
▪ We have to choose between PaaS and serverless according to the project requirements.
Development framework
▪ PaaS provides a framework that developers can build upon to develop or customise cloud-based
applications.
▪ Similar to the way you create an Excel macro, PaaS lets developers create applications using
built-in software components.
▪ Cloud features such as scalability, high-availability and multi-tenant capability are included,
reducing the amount of coding that developers must do.
PaaS Features
▪ Programming Models, Languages, and Frameworks
Programming models made available by PaaS providers define how users can express their applications using higher levels of abstraction and efficiently run them on the cloud platform.
▪ Persistence Options
A persistence layer is essential to allow applications to record their state and recover it in case of
crashes, as well as to store user data.
IBM Cloud
▪ An early innovator in computing, IBM has put a lot of money and effort into developing its cloud
services suite.
▪ IBM first launched its PaaS services as IBM Bluemix in 2014.
Google Cloud
▪ Google isn’t just a search engine. It’s also one of the leading SaaS companies, with Google
Docs, Drive, Gmail, and the entire Google Workspace.
▪ Google also lets you rent the infrastructure and platforms that make it possible to handle billions
of visitors every month.
▪ Launched in 2008, Google Cloud was the second major player to enter the market. Its extensive
list of products shows why it’s still one of the market leaders.
Microsoft Azure
▪ Microsoft isn’t just responsible for the operating systems on most desktop and laptop computers
around the world.
▪ It also has one of the largest public cloud services collections, including Office 365, Microsoft
Teams (SaaS), and Azure (IaaS & PaaS).
▪ The Azure cloud platform includes a range of services from AI and machine learning to analytics,
development tools, data processing, and more.
▪ This represents a large paradigm shift away from typical hosting arrangements that were
prevalent in the past, where average customers were locked into hosting contracts (with set
monthly/yearly fees and excess data charges) on shared hosting services like DreamHost.
▪ Larger enterprise customers typically utilized pervasive and high-performing Content Delivery
Networks (CDNs), who operate extensive networks of “edge” servers that deliver content across
the globe.
▪ Movement between providers is restricted by the effort the user is willing to invest in porting the capabilities to another environment, which in most cases implies reprogramming the applications accordingly.
▪ This makes the user dependent not only on the provider's decisions, but also on the provider's failures: as the Google outage of May 14, 2009 showed, relying too heavily on a specific provider can lead to serious problems with service consumption.
Price
▪ PaaS is more cost-effective than leveraging IaaS in many cases. Overhead is reduced because
PaaS customers don't need to manage and provision virtual machines.
▪ In addition, some providers have a pay-as-you-go pricing structure, in which the vendor only
charges for the computing resources used by the application, usually saving customers money.
However, each vendor has a slightly different pricing structure, and some platform providers
charge a flat fee per month.
Database Monitoring
▪ Because most cloud applications rely on databases, this
technique reviews processes, queries, availability, and
consumption of cloud database resources.
▪ This technique can also track queries and data integrity,
monitoring connections to show real-time usage data.
▪ For security purposes, access requests can be tracked as well.
For example, an uptime detector can alert if there’s database
instability and can help improve resolution response time from the precise moment that a
database goes down.
▪ Cloud testing tools offer optimized test environments with all requisite software-hardware
configuration in place.
▪ With platforms like BrowserStack, testers can be assured that every device on the real device
cloud is pristine. Every device offered is calibrated to factory settings. Once a test is complete,
every last bit of data is destroyed.
▪ With automated testing and parallel testing, testing in the cloud allows QAs to accelerate test
execution and results significantly. Faster results can also be achieved by virtue of features that
allow for improved collaboration and project management.
▪ Leading cloud testing platforms like BrowserStack offer 99% uptime. That means testers can
access real desktop and mobile devices for testing anytime, from anywhere.
▪ Cloud-based testing on platforms like BrowserStack offers integrations with numerous tools that
assist with implementing DevOps and CI/CD workflows. This allows for a more streamlined,
result-oriented software development pipeline.
Providers: Salesforce.com, Windows Azure, AppFog, OpenShift
Exercise 2: Write down all the parts of a Content Delivery Network in the diagram below.
1. What is PaaS?
2. Who is the End Customer of PaaS?
3. What are the Ways to Deliver PaaS?
4. What Services Does PaaS Include?
5. What is Public PaaS?
6. What is Hybrid PaaS?
7. What are the Common PaaS Scenarios?
8. What are the Features of PaaS?
9. What are Popular PaaS Providers?
10. List a Few Content Delivery Networks That Use the Cloud.
----------End of Module----------
High-Performance Computing
▪ High-performance computing (HPC) can help solve large, complex problems with millions of variables and calculations by running them on supercomputers or large clusters of computers.
CaaS Overview
▪ The exposure of a cluster via a Web service is intricate and comprises several services running on top of a physical cluster. The figure shows the complete CaaS technology.
▪ A typical cluster is comprised of three elements:
➢ Nodes
➢ Data storage
➢ Middleware
▪ The middleware virtualizes the cluster into a single system image; thus, resources such as the
CPU can be used without knowing the organization of the cluster.
▪ As time progresses, the amount of free memory, disk space, and CPU usage of each cluster
node changes. Information about how quickly the scheduler can take a job and start it on the
cluster also is vital in choosing a cluster.
Job Submission
▪ After selecting a required cluster, all executables and data files have to be transferred to the
cluster and the job submitted to the scheduler for execution.
Job Monitoring
▪ During execution, clients should be able to view the execution progress of their jobs. Even though the cluster is not owned by the client, the job is.
▪ It is the right of the client to see how the job is progressing and (if the client decides) terminate
the job and remove it from the cluster.
Result Collection
▪ The final role of the CaaS service is addressing jobs that have terminated with an error or completed their execution successfully.
▪ In both cases, error or data files need to be transferred to the client. The figure presents the workflow and CaaS service modules used to retrieve error or result files from the cluster.
2. Storage Virtualization
3. Network Virtualization
Disadvantages
▪ No software or hardware solution is perfect, and that is certainly the case with private cloud
virtualization. Before building and deploying one, organizations have to consider its
disadvantages:
➢ Integration with other in-house systems can be an issue.
➢ Managing and supporting virtualization will often require dedicated IT staff, and that may drive costs up if there is not already a good-sized department. This is the primary reason why smaller businesses opt for external cloud services.
➢ Scaling and security will require specific expertise.
▪ Retail or e-tail holiday seasonal demand, in which demand increases dramatically from Black Friday shopping specials until the end of the holiday season in early January
▪ School district registration, which spikes in demand during the spring and wanes after the school term begins
▪ Businesses that see a sudden spike in demand due to a popular product introduction or social
media boost, such as a streaming service like Netflix adding VMs and storage to meet demand
for a new release or positive review.
▪ Disaster Recovery and Business Continuity (DR/BC). Organizations can leverage public cloud
capabilities to provide off-site snapshots or backups of critical data and applications, and spin up
VMs in the cloud if on-premises infrastructure suffers an outage or loss.
Agility
By eliminating the need to purchase, configure, and install new infrastructure when demand changes, Cloud Elasticity removes the need to plan for unexpected demand spikes and enables organizations to meet any unexpected demand, whether due to a seasonal spike, a mention on Reddit, or selection by Oprah's book club.
Pay-as-needed pricing
▪ Rather than paying for infrastructure whether or not it is being used, Cloud Elasticity enables organizations to pay only for the resources that are in use at any given point in time, closely matching IT expenditure to actual demand in real time.
▪ In this way, although spending may fluctuate, organizations can 'right-size' their infrastructure as elasticity automatically allocates or deallocates resources on the basis of real-time demand.
▪ Amazon has stated that organizations that adopt its instance scheduler with their EC2 cloud
service can achieve savings of over 60 percent versus organizations that do not.
High Availability
▪ Cloud elasticity facilitates both high availability and fault tolerance, since VMs or containers can
be replicated if they appear to be failing, helping to ensure that business services are
uninterrupted and that users do not experience downtime.
▪ This helps ensure that users perceive a consistent and predictable experience, even as
resources are provisioned or deprovisioned automatically and without impact on operations.
[Figure: benefits of cloud elasticity - Agility, Time to Market, Pay-as-needed, Efficiency, High Availability]
Speed/Time-to-market
Organizations have access to capacity in minutes instead of the weeks or months it may take
through a traditional procurement process.
AWS EBS
▪ Amazon Elastic Block Store (Amazon EBS) is a block-level storage service for use with Amazon
EC2 instances.
▪ When mounted on an Amazon EC2 instance, you can use Amazon EBS volumes like any other
raw block storage device.
▪ It can be formatted and mirrored for specific file systems, host operating systems, and
applications.
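A minimal boto3 sketch of that workflow: create a volume and attach it to an instance as a raw block device. The region, availability zone, and instance ID are hypothetical.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a 10 GiB general-purpose SSD volume in a single availability zone.
volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=10, VolumeType="gp3")

# Wait until the volume is ready, then attach it to an existing EC2 instance.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",   # hypothetical instance ID
    Device="/dev/sdf",                  # appears to the OS as a raw block device
)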
AWS Lambda
▪ AWS Lambda is a serverless, on-demand IT service that provides developers with a fully
managed, event-driven cloud system that executes code.
▪ AWS Lambda uses Lambda functions (anonymous functions that are not associated with identifiers), enabling users to package any code into a function and run it independently of other infrastructure.
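The sketch below shows the standard Python handler signature that Lambda invokes for each event; the event payload shown is illustrative.

def lambda_handler(event, context):
    """Entry point AWS Lambda calls per event (e.g., an HTTP request or S3 upload)."""
    name = event.get("name", "world")   # illustrative field in the event payload
    return {"statusCode": 200, "body": f"Hello, {name}!"}

You package this function, upload it, and Lambda runs it on demand; there are no servers for you to provision or manage.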
Scalability
▪ The traditional information technology (IT) provisioning model requires organizations to make large investments in their on-premises infrastructure, which demands extensive preparation and up-front capital expenditure.
Speed
Organizations' developers can quickly spin up several workloads on demand, so companies no longer require IT administrators to provision and manage computing resources.
Cost Savings
While traditional on-premises technology requires large upfront investments, many cloud service
providers let their customers pay for only what they consume. But the attractive economics of cloud
services presents challenges, too, which may require organizations to develop a cloud management
strategy.
Policy enforcement
▪ User cloud provisioning helps streamline requests and manage resources, but it requires strict rules to make sure unnecessary resources are not provisioned. That is time-consuming, since different users require varying levels of access and frequency.
▪ Setting up rules governing who can provision which resources, for how long, and with what budgetary controls can be difficult.
Repurchasing
▪ In most cases, repurchasing is as easy as moving from an on-premises application to a SaaS platform.
▪ Typical examples are switching from internal CRM to Salesforce.com, or switching from internal
email server to Google’s G Suite.
▪ It is a simple license change, which can reduce labor, maintenance, and storage costs for the
organization.
Retire
▪ When planning a move to the cloud, it often turns out that part of the company's IT product
portfolio is no longer useful and can be decommissioned.
▪ Removing old applications allows you to focus time and budget on high priority applications and
improve overall productivity.
Retain
▪ Moving to the cloud doesn't make sense for all applications. You need a strong business model
to justify migration costs and downtime.
▪ Additionally, some industries require strict compliance with laws that prevent data migration to
the cloud. Some on-premises solutions should be kept on-premises, and can be supported in a
hybrid cloud migration model.
AWS CloudTrail
▪ CloudTrail is a service that you can use to track events across your account.
AWS CloudWatch
▪ CloudWatch is a service you can use to aggregate, visualize, and respond to service metrics.
▪ CloudWatch has two main components: alarms, which create alerts according to thresholds for
single metrics, and events, which can automate responses to metric values or system changes.
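For example, the boto3 sketch below creates a CloudWatch alarm on a single metric, alerting when an instance's average CPU stays above 80% for two consecutive five-minute periods; the instance ID is hypothetical.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-example",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                # seconds per evaluation period
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
)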
Section 3: Exercises
----------End of Module----------
Infrastructure Case
▪ Usually, there is some compelling event to initiate a move to the cloud, such as a compute
upgrade due to increased demands from users or applications, end-of-life of data centre facility
assets or a facility move where everything needs to be built again.
▪ The initial and most significant savings are usually found when abandoning infrastructure in
favour of the cloud, with infrastructure savings being the most significant part of the business
case in terms of cost savings.
▪ The reason is that in-house IT is typically under-utilised: when infrastructure purchases are being considered, not all applications that will be deployed on it are known, so a margin is added for this uncertain capacity requirement.
▪ Additionally, over-deployment is a result of companies configuring infrastructure for peak loads.
▪ We add to these the other fixed and variable costs we identified previously: cost of specialised
data centre assets to house, power and cool servers, the cost of real estate which includes
carrying finance charges, lease costs and other terms, the skilled staffing costs of maintaining
the data centre and the systems within it.
▪ Other costs include back-ups, redundancy at a second facility, certifications, security and
decommissioning costs when moving to the cloud.
Applications case
▪ With the three main types of cloud there are a number of options of what to do with applications,
but this requires a look at what the main drivers are.
Compute
▪ The costs for compute depend entirely on what you’re going to do with it. How much processing
power is required for your computing projects? A common way to deal with computing capability
has been to purchase more than you need. Better to have too much than too little, as they say.
But the scalability of cloud computing means that you can take a much different approach.
▪ If you have a temporary project that is entirely contingent on the response of website visitors, you can use cloud computing to scale up automatically and scale back down just as quickly. The computing systems that you set up with cloud providers can be purchased for on-demand usage or for a fixed period of time.
▪ The parameters involved in selecting a CPU include the operating system and the expected
usage (in percent). Cloud providers will then calculate the cost of CPU based on their cost
per gigabyte (GB) of virtual RAM.
Network
▪ This area is generally measured in GB of data transfer. But bandwidth is also calculated
in terabytes (TB) or petabytes (PB).
▪ While these are the main cost centers, not everything that a cloud provider offers will fit neatly in
these three categories.
▪ Each provider packages their offerings in different ways. It might even seem like comparing apples to oranges when putting one service up against another.
Four consumption strategies have been identified, where the differences in objectives, conditions, and actions reflect an organization's decision to trade off hosting costs, controllability, and resource elasticity of IT resources for software and data. These are discussed in the following:
2. Storage Provision. This strategy is relevant when the elasticity requirements are high for data and low for software, while the controllability of software is more critical than that of data. This can be the case for data-intensive applications, where the results from processing in the application are more critical and sensitive than the data itself.
3. Solution Provision. This strategy is relevant when the elasticity and cost reduction requirements
are high for software and data, but the controllability requirements can be entrusted to the CDC.
4. Redundancy Services. This strategy can be considered as a hybrid enterprise cloud strategy,
where the organization switches between traditional, software, storage or solution management
based on changes in its operational conditions and business demands.
Just-in-Time Infrastructure
▪ In the past, if your application became popular and your systems or your infrastructure did not
scale, you became a victim of your own success.
▪ By deploying applications in-the-cloud with just-in-time self-provisioning, you do not have to
worry about pre-procuring capacity for large-scale systems.
▪ This increases agility, lowers risk, and lowers operational cost because you scale only as you
grow and only pay for what you use.
Usage-Based Costing
▪ With utility-style pricing, you are billed only for the infrastructure that has been used. You are not paying for allocated but unused infrastructure. This adds a new dimension to cost savings.
▪ You can see immediate cost savings (sometimes as early as your next month's bill) when you deploy an optimization patch to update your cloud application.
▪ For example, if a caching layer can reduce your data requests by 70%, the savings begin to
accrue immediately and you see the reward right in the next bill. Moreover, if you are building
platforms on the top of the cloud, you can pass on the same flexible, variable usage-based cost
structure to your own customers.
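Back-of-the-envelope version of that example, with an invented baseline bill:

baseline_data_cost = 1000.00   # $/month billed for data requests (invented figure)
cache_hit_ratio = 0.70         # the caching layer now serves 70% of requests

new_cost = baseline_data_cost * (1 - cache_hit_ratio)
print(f"monthly data cost: ${baseline_data_cost:.2f} -> ${new_cost:.2f}")
# monthly data cost: $1000.00 -> $300.00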
Proactive Scaling
Scale your application up and down to meet your anticipated demand with proper planning and an understanding of your traffic patterns, so that you keep your costs low while scaling.
Improved Testability
Never run out of hardware for testing. Inject and automate testing at every stage during the development process. You can spin up an "instant test lab" with preconfigured environments for only the duration of the testing phase.
Porter’s five forces market model (adjusted for the cloud market)
Types of SLA
Service-level agreement provides a framework within which both seller and buyer of a service can
pursue a profitable service business relationship. It outlines the broad understanding between the
service provider and the service consumer for conducting business and forms the basis for
maintaining a mutually beneficial relationship.
There are two types of SLAs from the perspective of application hosting. These are described in
detail here.
Infrastructure SLA. The infrastructure provider manages and offers guarantees on availability of
the infrastructure, namely, server machine, power, network connectivity, and so on.
Application SLA. In the application co-location hosting model, the server capacity is made available to the applications based solely on their resource demands. Therefore, the service provider offers guarantees on the performance of the application itself, not just the underlying infrastructure.
Negotiation
Once the customer has discovered a service provider who can meet their application hosting needs, the SLA terms and conditions need to be mutually agreed upon before signing the agreement for hosting the application.
Operationalization
SLA operation consists of SLA monitoring, SLA accounting, and SLA enforcement. SLA monitoring involves measuring parameter values, calculating the metrics defined as part of the SLA, and determining deviations.
De-commissioning
SLA decommissioning involves termination of all activities performed under a particular SLA when the hosting relationship between the service provider and the service consumer has ended.
Feasibility Analysis
The MSP conducts a feasibility study of hosting an application on their cloud platforms. This study involves three kinds of feasibility:
(1) Technical Feasibility
(2) Infrastructure Feasibility
(3) Financial Feasibility
On-Boarding of Application
▪ Once the customer and the MSP agree in principle to host the application based on the findings
of the feasibility study, the application is moved from the customer servers to the hosting
platform.
▪ The application is accessible to its end users only after the on-boarding activity is completed.
Preproduction
▪ Once the determination of policies is completed as discussed in previous phase, the application
is hosted in a simulated production environment.
▪ Once both parties agree on the cost and the terms and conditions of the SLA, the customer sign-off is obtained. On successful completion of this phase, the MSP allows the application to go live.
Production
▪ In this phase, the application is made accessible to its end users under the agreed SLA.
▪ In the case of the former, the on-boarding activity is repeated to analyse the application and its policies with respect to SLA fulfilment. In the case of the latter, a new set of policies is formulated to meet the fresh terms and conditions of the SLA.
Termination
When the customer wishes to withdraw the hosted application and does not wish to continue to
avail the services of the MSP for managing the hosting of its application, the termination activity is
initiated.
Technical disasters: Perhaps the most obvious of the three, technical disasters encompass
anything that could go wrong with the cloud technology. This could include power failures or a loss
of network connectivity.
Human disasters:
▪ Human failures are a common occurrence and are usually accidents that happen whilst using the
cloud services. These could include inadvertent misconfiguration or even malicious third-party
access to the cloud service.
▪ The cloud providers are responsible for everything they have direct control over. This includes
the resiliency of the general infrastructure such as the hardware, software, network and facilities.
You, the customer, are usually responsible for areas such as the cloud configuration, secure data
backups, the workload architecture and the availability.
OVHCloud
A data centre run by OVHCloud was destroyed by fire in early 2021. All four data centres on the site had been built too close together, and it took firefighters at the scene over six hours to put out the blaze. This severely affected the cloud services run by OVHCloud and spelt disaster for companies whose entire assets were hosted on those servers.
AWS
In June 2016, storms in Sydney battered the electrical infrastructure and caused an extensive
power outage. This led to the failure of a number of Elastic Compute Cloud instances and
Elastic Block Store volumes which hosted critical workloads for a number of large companies.
This meant that some heavily trafficked websites and the online presence of some of the biggest brands were knocked out for over ten hours on a weekend, severely affecting business.
2. If you haven't done so already, define the recovery time objective (RTO) and recovery point objective (RPO) for your disaster recovery. These form the basis of your disaster recovery plan and, in turn, determine the kinds of disaster recovery services you'll need.
Exercise 1: Write down the names of all the enterprise cloud consumption strategies in the diagram below.
----------End of Module----------
Unexpected Costs
▪ When you migrate an application, you could face unexpected costs resulting from the complexity
of the migration process.
▪ For example, you may have to train staff in using the new system or toolset, requiring extra hours
and expenses. For your migration to be successful, you need to assess the expected costs
realistically, considering potential complications.
Maintaining Privacy
▪ It is essential to protect the privacy of your business operations and data when migrating to a
third-party system, such as a cloud server. Whenever you work with a third-party vendor, you
need to carefully oversee the migration process and ensure the proper SLAs are in place.
Maintaining Compliance
▪ You need to ensure that the new environment is compliant with regulations such as HIPAA.
▪ It is important to have a compliance strategy in place before you begin the migration process to
find suitable vendors and solutions.
Migration Blueprint
▪ In a complete blueprint service offer, your vendor helps you define your migration objectives and
strategy by recognizing your users’ needs and your organizational requirements.
▪ They also collect details about your environment and applications, developing a complete action
plan for the migration process.
Migration Deployment
▪ If you select a managed deployment, your vendor helps you strategize and plan your migration.
▪ They also help you manage the migration and any related troubleshooting and testing. This
method is typically a turn-key option that features full-scale and end-to-end support.
Application Modernization
▪ Application modernization services provide custom development services.
▪ They can help you prepare legacy applications for utilization in the cloud, by adapting them to run
in virtualized environments or containers.
Cost
▪ Because cloud providers take over maintenance and upgrades, companies migrating to the cloud can spend significantly less on IT operations.
Performance
Migrating to the cloud can improve performance and end-user experience. Applications and
websites hosted in the cloud can easily scale to serve more users or higher throughput, and can run
in geographical locations near to end-users, to reduce network latency.
Digital experience
Users can access cloud services and data from anywhere, whether they are employees or
customers. This contributes to digital transformation, enables an improved experience for
customers, and provides employees with modern, flexible tools.
Lack of Strategy
▪ Many organizations start migrating to the cloud without devoting sufficient time and attention to
their strategy.
▪ Successful cloud adoption and implementation requires rigorous end-to-end cloud migration
planning.
▪ Each application and dataset may have different requirements and considerations, and may
require a different approach to cloud migration.
▪ The organization must have a clear business case for each workload it migrates to the cloud.
Vendor Lock-In
▪ Vendor lock-in is a common problem for adopters of cloud technology. Cloud providers offer a
large variety of services, but many of them cannot be extended to other cloud platforms.
▪ Migrating workloads from one cloud to another is a lengthy and costly process. Many
organizations start using cloud services, and later find it difficult to switch providers if the current
provider doesn't suit their requirements.
▪ The promise of cloud computing has raised the IT expectations of small and medium enterprises
beyond measure. Large companies are deeply debating it.
▪ Cloud computing is a disruptive model of IT whose innovation is part technology and part business model; in short, a disruptive techno-commercial model of IT.
▪ We propose the following definition of cloud computing: "It is a techno-business disruptive model of using distributed large-scale data centers, either private or public or hybrid, offering customers a scalable virtualized infrastructure or an abstracted set of services qualified by service-level agreements (SLAs) and charged only by the abstracted IT resources consumed."
▪ Several small and medium business enterprises, however, leveraged the cloud much beyond the
cautious user. Many start-ups opened their IT departments exclusively using cloud services very
successfully and with high ROI. Having observed these successes, several large enterprises
have started successfully running pilots for leveraging the cloud.
▪ Many large enterprises run SAP to manage their operations. SAP itself is experimenting with
running its suite of products: SAP Business One as well as SAP Netweaver on Amazon cloud
offerings.
There are many more considerations to address in your cloud governance model, depending on
your industry and business needs. Be sure to keep your documentation flexible to allow for change
and optimization after the migration process is completed and your workloads settle in the new
cloud environment.
Here are several tools and techniques you can use to manage your cloud costs:
Tag your resources - to manage costs, you need visibility into cloud resource consumption. You
can set this up by tagging resources and monitoring them. Be sure to use standard tags and keep
this organized.
▪ Cloud resources are highly scalable and this can make manual tagging and monitoring incredibly
time consuming.
▪ Use policies to standardize the process and automation to enforce these rules.
▪ You can leverage either third-party or first-party tools for tagging.
▪ There are also tools dedicated to cost management and optimization and monitoring. In addition,
you can set up role-based access control (RBAC) to ensure resources are properly used by
authorized users, and set up several resource groups.
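A minimal boto3 sketch of standardized tagging, assuming AWS EC2; the instance ID and tag values are hypothetical.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Apply standard cost-allocation tags so consumption can be grouped and monitored.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],  # hypothetical instance ID
    Tags=[
        {"Key": "CostCenter", "Value": "marketing"},
        {"Key": "Environment", "Value": "production"},
    ],
)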
▪ Moving Data
Here are several aspects to consider when migrating to Google Cloud:
➢ Move your data first - and then move the rest of the application. This is recommended by
Google.
➢ Choose the relevant storage - Google Cloud offers several tiers for hot and warm storage,
as well as several archiving options. You can also leverage SSDs and hard disks, or choose a
cloud-based database service, such as Bigtable, Datastore, and Google Cloud SQL.
➢ Plan the data transfer process - determine and define how to physically move your data.
You can, for example, send your offline disk to a Google data center or opt to stream to
persistent disks.
▪ Moving Applications
There are several ways to migrate applications, depending on the application’s suitability to the
cloud. In some cases, you might need to re-architect the entire application before it can be moved to
the cloud. In other cases, you might need to do light modification before the migration. Ideally, when
possible, your application can be lifted and shifted to the cloud.
A lift and shift migration means you do not need to make any changes to your application. You can lift it and move it directly to the new cloud environment. For example, you can create a local VM within your on-premises data center and then import it as a Google VM. Alternatively, you can back up your application to GCP - this option lets you automatically create a cloud copy.
▪ Optimize
After the migration process is complete and your application is safely hosted in the cloud, you need
to set up measures that help you continuously optimize your cloud environment. Here are several
tools offered by Google:
▪ Google Cloud Pub/Sub - helps you set up communication between independent applications. You can use Pub/Sub to rapidly scale, decouple applications, and improve performance (see the sketch after this list).
▪ Google Cloud Deployment Manager - lets you automate the configuration of your applications.
You specify the requirements and Deployment Manager automatically initiates the deployments.
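A minimal publisher sketch using the google-cloud-pubsub client library; the project and topic names are hypothetical.

# pip install google-cloud-pubsub
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "orders")   # hypothetical names

# Publish a message; decoupled subscriber applications consume it independently.
future = publisher.publish(topic_path, data=b"order #42 created")
print("published message id:", future.result())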
Less overhead: a cloud-first strategy lowers or eliminates the overhead associated with equipment and maintenance costs incurred when using on-premises server solutions.
More resources: cloud vendors provide access to additional services, which typically require lower or no initial investment.
Cost-effective upgrades: cloud vendors offer various pricing options you can leverage to reduce the costs of on-demand upgrades.
Support: cloud service providers offer expert support for their services.
Quick release: working directly in the cloud can help you achieve a faster speed of delivery for repairs, improvements, and updates.
Vendor Lock-In
▪ Even when organizations gain effective control over cloud deployments, there are hidden costs
of vendor lock-in. Enterprise-grade commercial agreements with cloud providers are rigid and
difficult to change over time, as an organization’s requirements change.
▪ While the market is heading in a good direction, customer protections in cloud agreements are
not comparable to those offered by other IT outsourcing contracts. Without good commercial
protection, organizations can unknowingly give away future flexibility.
Define success
Ask yourself what will make your cloud migration a success. Are you aiming to shut down the on-premises data center or move all new development to the cloud? Define the organization's ultimate goal with specific metrics to measure migration success.
According to research by Gartner, the ideal cloud migration roadmap consists of five steps.
Align Objectives
▪ Organizations should create a cloud migration value proposition for business and IT early in the
cloud migration roadmap. Start by conducting a survey to understand the use cases for cloud
adoption, aligning cloud strategy with IT goals, and defining action steps to achieve your goals.
▪ Another important aspect is to define migration principles based on application and team
readiness, business priorities, and vendor capabilities. Use data available in the organization to
define the metrics and key performance indicators (KPIs) for a successful migration.
Collaborate
Cloud migration processes can only succeed by achieving cooperation between cross-departmental
teams. The following roles should be included in your roadmap and in relevant planning stages:
▪ CIO—provides strategic and planning guidance and can help define the goals of cloud migration.
The CIO can help communicate progress to other stakeholders.
▪ Development leaders and teams—provide technical advice and can help establish a vision.
They can work with other IT leaders to define specific cloud migration plans using up-to-date
progress and planning information.
▪ Operations leaders and teams—provide insight into the infrastructure and operations
requirements of cloud migration and determine activities required to implement the strategy.
They will typically manage the operational mechanisms needed to enable the migration.
▪ Cloud experts—any cloud migration program will benefit from a team of cloud experts, either in-
house or outsourced, who can provide architectural and process plans for the project. They can
help evaluate and select the best tools and processes for migrating and refactoring systems and
help build the required skills among other teams.
Refactor
Refactoring, or ‘lift, tinker, and shift,’ is when you tweak and optimize your applications for the
cloud. In this case, a platform-as-a-service (PaaS) model is employed. The core architecture of
the applications remains unchanged, but adjustments are made to enable the better use of
cloud-based tools.
Revise
Revising builds upon the previous strategies, requiring more significant changes to the
architecture and code of the systems being moved to the cloud. This is done to enable
applications to take full advantage of the services available in the cloud, which may require
introducing major code changes. This strategy requires careful forward planning and advanced knowledge.
Rebuild
Rebuilding takes the Revise approach even further by discarding the existing code base and replacing it with a new one. This process takes a lot of time and is only considered when companies decide that their existing solutions don't meet current business needs.
Replace
Replacing is another solution to the challenges that inform the Rebuild approach. The difference
here is that the company doesn’t redevelop its own native application from scratch. This involves
migrating to a third-party, prebuilt application provided by the vendor. The only thing that you
migrate from your existing application is the data, while everything else about the system is new.
At the same time, be prepared to address several common challenges during a cloud migration:
➢ Interoperability
➢ Data and application portability
➢ Data integrity and security
➢ Business continuity
Without proper planning, a migration could degrade workload performance and lead to higher IT
costs, thereby negating some of the main benefits of cloud computing.
4. Ongoing Upkeep
▪ Once that data has been migrated to the cloud, it is important to ensure that it is optimized,
secure, and easily retrievable moving forward. It also helps to monitor for real-time changes to
critical infrastructure and predict workload contentions.
▪ Apart from real-time monitoring, you should also assess the security of the data at rest to ensure
that working in your new environment meets regulatory compliance laws such as HIPAA and
GDPR.
▪ Another consideration to keep in mind is meeting ongoing performance and availability
benchmarks so that your recovery point objective (RPO) and recovery time objective (RTO)
continue to be met, even if they change.
Exercise 1: Write down the purpose of use, in a single word, for each of the cloud services in the
diagram below.
Architecture:
▪ For any computing model, architecture is an important consideration, as it helps you
comprehend the application’s design.
▪ Cloud-based applications are built directly on cloud infrastructure.
▪ They are designed around automation and the user interface, while traditional applications are
built on three essential tiers known as:
➢ App logic tier
➢ Presentation tier
➢ Database tier
Security:
▪ Security is one of the crucial requirements for running a business properly.
▪ Both traditional computing and cloud computing have distinguishing features when it comes to security.
▪ You can have numerous layers of security while utilizing cloud-based computing.
▪ Cyberattacks are less likely to succeed against cloud-based computing because workloads are
distributed across multiple hosts.
▪ If you intend to start a business and grow it, service-oriented cloud computing can be your best
support network; traditional computing, being built statically, provides only a single security
layer for business-related data.
Availability:
▪ Cloud-based computing is different. With this model, you receive regular updates and improved
features, so maintaining your business becomes substantially more manageable. Even when
security loopholes surface, the provider’s IT team works dedicatedly to remove them quickly.
▪ This manageability helps your business run at a good pace and earn a decent profit, whereas
with traditional computing, IT teams release application updates over long cycles, often several
weeks or months.
▪ This happens because traditional applications need manual scripting and cannot be released
until all parts of the code are finished.
Private Cloud
Benefits
▪ Complete control of the entire stack
▪ Security – in a few cases, organizations may need to keep all or some of their applications and
data in house.
Public Cloud
Benefits
▪ Variable Expense instead of capital expense
▪ Economies of Scale
▪ Massive Elasticity
Hybrid Cloud
Benefits
▪ Allows companies to keep the critical applications and sensitive data in a traditional data center
environment or private cloud.
▪ Take advantage of the public cloud resources like SaaS, for the latest applications and IaaS for
elastic virtual resources.
▪ Facilitates portability of data, apps, and services, and offers more choices for deployment models.
Multi-cloud
Use Cases
Advantages of AWS
Easy to use
AWS is designed to allow application providers, ISVs, and vendors to quickly and securely host your
applications – whether an existing application or a new SaaS-based application.
Flexible
AWS enables you to select the operating system, programming language, web application platform,
database, and other services you need.
Cost-Effective
You pay only for the compute power, storage, and other resources you use, with no long-term
contracts or up-front commitments.
Reliable
With AWS, you take advantage of a scalable, reliable, and secure global computing infrastructure,
the virtual backbone of Amazon.com’s multi-billion-dollar online business that has been honed for
over a decade.
Secure
AWS utilizes an end-to-end approach to secure and harden our infrastructure, including physical,
operational, and software measures.
AWS Architecture
Amazon infrastructure is divided into the following categories:
▪ Regions
▪ Availability Zones
Shared Responsibility Model
▪ AWS responsibility “Security of the Cloud” - AWS is responsible for protecting the infrastructure
that runs all of the services offered in the AWS Cloud.
▪ This infrastructure is composed of the hardware, software, networking, and facilities that run
AWS Cloud services.
▪ A shared responsibility model is a cloud security framework that dictates the security obligations
of a cloud computing provider and its users to ensure accountability.
6. Go global in minutes
IAM Groups
IAM Roles
▪ An IAM role is very similar to a user, in that it is an identity with permission policies that
determine what the identity can and cannot do in AWS.
▪ However, a role does not have any credentials (password or access keys) associated with it.
Instead of being uniquely associated with one person, a role is intended to be assumable by
anyone who needs it.
▪ An IAM user can assume a role to temporarily take on different permissions for a specific task.
▪ A role can be assigned to a federated user who signs in by using an external identity provider
instead of IAM.
▪ AWS uses details passed by the identity provider to determine which role is mapped to the
federated user.
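To make this concrete, here is a minimal sketch (Python with the boto3 library) of a user assuming a role to obtain temporary credentials for a specific task. The role ARN and session name are hypothetical placeholders, not values from this course.

    import boto3

    # Request temporary credentials by assuming a role.
    # The role ARN below is a hypothetical placeholder.
    sts = boto3.client("sts")
    response = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/ExampleReadOnlyRole",
        RoleSessionName="temporary-task-session",
    )
    creds = response["Credentials"]  # expire automatically after a set duration

    # Use the temporary credentials for a specific task, e.g. listing S3 buckets.
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    print(s3.list_buckets()["Buckets"])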
IAM Authentication
▪ Authentication occurs whenever a user attempts to access your organization's network and
downstream resources.
▪ For security, the user must verify their identity before being granted access.
Multi-Factor Authentication
▪ Multi-factor authentication (MFA) in AWS is a simple best practice that adds an extra layer of
protection on top of your username and password.
▪ As a Security Best Practice, we should always require IAM Users to have Multi-Factor
Authentication (MFA) enabled when accessing the AWS Console.
▪ A hardware device that generates a six-digit numeric code based upon a time-synchronized
one-time password algorithm. The user must type a valid code from the device on a second
webpage during sign-in. Each MFA device assigned to a user must be unique.
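The six-digit code mentioned above typically comes from a time-synchronized one-time password algorithm (TOTP, RFC 6238). The following Python sketch illustrates the idea only; it is not AWS's implementation, and the Base32 secret shown is a made-up placeholder.

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, digits=6, period=30):
        """Time-based one-time password (RFC 6238) using HMAC-SHA1."""
        key = base64.b32decode(secret_b32)
        counter = int(time.time() // period)            # time-synchronized counter
        msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret; prints a 6-digit code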
Service Control Policies (SCPs)
▪ SCPs aren't available if your organization has enabled only the consolidated billing features.
▪ SCPs alone are not sufficient for granting permissions to the accounts in your organization.
▪ No permissions are granted by an SCP.
▪ An SCP defines a guardrail, or sets limits, on the actions that the account's administrator can
delegate to the IAM users and roles in the affected accounts.
▪ The administrator must still attach identity-based or resource-based policies to IAM users or
roles, or to the resources in your accounts to actually grant permissions.
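As an illustration of a guardrail, the sketch below (Python/boto3) creates a hypothetical SCP that denies EC2 actions outside one region. The policy name, region, and account setup are assumptions, and the caller needs AWS Organizations permissions for this to work.

    import json
    import boto3

    # A guardrail: deny EC2 actions outside ap-south-1 (hypothetical example).
    scp_document = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "ec2:*",
            "Resource": "*",
            "Condition": {"StringNotEquals": {"aws:RequestedRegion": "ap-south-1"}},
        }],
    }

    org = boto3.client("organizations")
    org.create_policy(
        Name="DenyEC2OutsideApSouth1",       # hypothetical name
        Description="Guardrail: EC2 only in ap-south-1",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp_document),
    )
    # Remember: this only limits what can be delegated; it grants no permissions itself.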
Measurements:
▪ CPU is measured in Gigahertz (GHz)
▪ RAM is measured in Gigabyte (GB)
▪ HDD is measured in Gigabyte (GB)
▪ NIC is measured in Megabits per second (Mbps) or Gigabits per second (Gbps)
Amazon Lightsail
▪ Low cost and ideal for users with less technical expertise
▪ Compute, storage, and network
▪ Preconfigured virtual servers
▪ Virtual servers, databases, and load balancers
▪ SSH and RDP access
▪ Can access Amazon VPC
Docker Containers
▪ Docker is a software platform that allows you to build, test, and deploy applications quickly.
Monolithic Application
▪ A monolithic application is built as a single unit. Enterprise applications are typically built in
three parts:
➢ A database, consisting of many tables, usually in a relational database management system
➢ A client-side user interface, consisting of HTML pages and JavaScript running in a browser
➢ A server-side application that handles requests and executes business logic
▪ They’re typically complex applications that encompass several tightly coupled functions.
▪ For example, consider a monolithic ecommerce SaaS application. It might contain a web server,
a load balancer, a catalogue service that serves up product images, an ordering system, a
payment function, and a shipping component.
Microservices Application
▪ Microservices are an architectural and organizational approach to software development where
software is composed of small independent services that communicate over well-defined APIs.
▪ These services are owned by small, self-contained teams.
▪ Microservices architectures make applications easier to scale and faster to develop, enabling
innovation and accelerating time-to-market for new features.
Characteristics of Microservices
Autonomous
▪ Each component service in a microservices architecture can be developed, deployed, operated,
and scaled without affecting the functioning of other services.
▪ Services do not need to share any of their code or implementation with other services.
▪ Any communication between individual components happens via well-defined APIs.
Specialized
▪ Each service is designed for a set of capabilities and focuses on solving a specific problem.
▪ If developers contribute more code to a service over time and the service becomes complex, it
can be broken into smaller services.
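To illustrate the idea of a small, specialized service behind a well-defined API, here is a minimal sketch of a standalone catalogue service in Python using Flask. The service name, route, and data are invented for illustration; a real microservice would have its own datastore and deployment pipeline.

    from flask import Flask, jsonify

    app = Flask(__name__)

    # In-memory stand-in for the service's own datastore (illustration only).
    CATALOG = {"sku-1": {"name": "Widget", "price": 9.99}}

    @app.route("/products/<sku>")
    def get_product(sku):
        # The only way other services interact with this one: a well-defined API.
        product = CATALOG.get(sku)
        if product is None:
            return jsonify(error="product not found"), 404
        return jsonify(product)

    if __name__ == "__main__":
        app.run(port=5001)  # deployable and scalable independently of other services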
Block Storage
▪ Enterprise applications like databases or ERP systems often require dedicated, low-latency
storage for each host.
▪ This is analogous to direct-attached storage (DAS) or a Storage Area Network (SAN).
▪ Block-based cloud storage solutions like Amazon Elastic Block Store (EBS) are provisioned with
each virtual server and offer the ultra-low latency required for high performance workloads.
File Storage
▪ Some applications need to access shared files and require a file system.
▪ This type of storage is often supported with a Network Attached Storage (NAS) server.
▪ File storage solutions like Amazon Elastic File System (EFS) are ideal for use cases like large
content repositories, development environments, media stores, or user home directories.
Object Storage
▪ Applications developed in the cloud often take advantage of object storage's vast scalability and
metadata characteristics.
▪ Object storage solutions like Amazon Simple Storage Service (S3) are ideal for building modern
applications from scratch that require scale and flexibility, and can also be used to import existing
data stores for analytics, backup, or archive.
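A minimal sketch of object storage in practice, using Python and boto3 against Amazon S3. The bucket and key names are hypothetical, and the bucket is assumed to already exist.

    import boto3

    s3 = boto3.client("s3")

    # Store an object: the key is a flat name, not a file-system path.
    s3.put_object(
        Bucket="example-analytics-bucket",   # hypothetical, must already exist
        Key="logs/2024/app.log",
        Body=b"sample log line\n",
        Metadata={"source": "web-tier"},     # custom metadata travels with the object
    )

    # Retrieve the object and read its contents.
    obj = s3.get_object(Bucket="example-analytics-bucket", Key="logs/2024/app.log")
    print(obj["Body"].read().decode())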
Elements
The following are the key elements of Amazon Data Lifecycle Manager.
▪ Snapshots
▪ EBS-backed AMIs
▪ Target resource tags
▪ Amazon Data Lifecycle Manager tags
▪ Lifecycle policies
▪ Policy schedules
Additional Features
Amazon S3 Replication
Amazon S3 Glacier
▪ Extremely low cost; you pay only for what you need, with no upfront fees or commitments
▪ Two classes: Glacier and Glacier Deep Archive
▪ Three options for access to archives: Expedited, Standard, and Bulk retrievals
Amazon Route 53
▪ Amazon Route 53 connects user requests to internet applications running on AWS or on-
premises.
▪ You can use Amazon Route 53 as the DNS service for your domain, such as example.com.
▪ When Route 53 is your DNS service, it routes internet traffic to your website by translating
friendly domain names like www.example.com into numeric IP addresses, like 192.0.2.1, that
computers use to connect to each other.
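That translation step can be observed from any machine using Python's standard library; this sketch simply asks the configured DNS resolver (which could be Route 53, if it serves your domain) for a name's IP address.

    import socket

    # Resolve a friendly domain name to the numeric IP address computers use.
    ip_address = socket.gethostbyname("www.example.com")
    print(ip_address)  # the exact address returned depends on the resolver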
Fault Tolerance
▪ Fault tolerance is the ability of a workload to remain operational with zero downtime or data loss
in the event of a disruption.
High Availability
▪ It is the ability of a workload to remain operational, with minimal downtime, in the event of a
disruption. Disruptions include hardware failure, networking problems or security events, such as
DDoS attacks.
▪ In a highly available system, workloads are spread across a cluster of servers. If one server fails,
the workloads running on it automatically move to other servers.
Management:
▪ Fault-tolerant workloads are more challenging to set up and administer.
▪ To ensure fault tolerance, admins must keep two or more workload instances in sync.
▪ This means that changes in one instance are implemented in the other instance instantaneously.
▪ In contrast, high-availability workloads are less complex to set up and manage.
Scaling Policies
▪ Target Tracking - Attempts to keep a chosen metric for the group at, or close to, a specified target value
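For example, a target-tracking policy can keep the average CPU utilization of an Auto Scaling group near a chosen value. A minimal Python/boto3 sketch follows, assuming a group named 'web-asg' already exists and a 50% CPU target.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Keep the group's average CPU utilization at (or close to) 50%.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",        # hypothetical, must already exist
        PolicyName="cpu-target-50",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,
        },
    )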
Serverless Services
Amazon VPC
▪ Amazon VPC enables you to build a virtual network in the AWS cloud - no VPNs, hardware, or
physical datacenters required.
▪ You can define your own network space, and control how your network and the Amazon EC2
resources inside your network are exposed to the Internet.
▪ When you create a VPC, you must specify a range of IPv4 addresses for the VPC in the form of
a Classless Inter-Domain Routing (CIDR) block, for example 10.0.0.0/16 (see the sketch after this list)
▪ A VPC spans all the Availability Zones in the region
▪ You have full control over who has access to the AWS resources inside your VPC
▪ By default, you can create up to 5 VPCs per region
▪ A default VPC is created in each region with a subnet in each AZ
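To see what a CIDR block like 10.0.0.0/16 actually provides, Python's standard ipaddress module can do the arithmetic. This sketch is purely illustrative; note that AWS additionally reserves five addresses in each subnet.

    import ipaddress

    vpc = ipaddress.ip_network("10.0.0.0/16")
    print(vpc.num_addresses)          # 65536 addresses in the VPC range

    # Carve the VPC range into /24 subnets (e.g. one per Availability Zone).
    subnets = list(vpc.subnets(new_prefix=24))
    print(len(subnets))               # 256 possible /24 subnets
    print(subnets[0])                 # 10.0.0.0/24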
NAT Gateways
A NAT gateway is a Network Address Translation (NAT) service. You can use a NAT gateway so
that instances in a private subnet can connect to services outside your VPC but external services
cannot initiate a connection with those instances.
AWS CloudFormation
▪ AWS CloudFormation is an infrastructure as code (IaC) service that allows you to easily model,
provision, and manage AWS and third-party resources.
▪ It gives developers and businesses an easy way to create a collection of related AWS and third-
party resources, and provision and manage them in an orderly and predictable fashion.
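As a sketch of infrastructure as code, the snippet below defines a one-resource template inline and asks CloudFormation to provision it via boto3. The stack name is a hypothetical placeholder, and real templates normally live in version-controlled files.

    import json
    import boto3

    # A minimal template describing one resource: an S3 bucket.
    template = json.dumps({
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "DemoBucket": {"Type": "AWS::S3::Bucket"}
        },
    })

    cloudformation = boto3.client("cloudformation")
    cloudformation.create_stack(
        StackName="demo-stack",      # hypothetical name
        TemplateBody=template,
    )
    # CloudFormation now creates (and can later update or delete) the bucket
    # in an orderly, predictable fashion.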
AWS CodeStar
▪ AWS CodeStar provides the tools you need to quickly develop, build, and deploy applications
on AWS.
▪ With AWS CodeStar, you can set up your entire continuous delivery toolchain in minutes,
allowing you to start releasing code faster.
▪ AWS CodeStar makes it easy for your whole team to work together securely, with built-in role-
based policies that allow you to easily manage access and add owners, contributors, and
viewers to your projects.
▪ With the AWS CodeStar project dashboard, you can easily track your entire software
development process, from a backlog work item to production code deployment.
AWS X-Ray
▪ AWS X-Ray helps developers analyze and debug production, distributed applications, such as
those built using a microservices architecture.
▪ AWS X-Ray supports applications running on:
➢ Amazon EC2
➢ Amazon ECS
➢ AWS Lambda
➢ AWS Elastic Beanstalk
▪ You need to integrate the X-Ray SDK with your application and install the X-Ray agent.
AWS OpsWorks
▪ AWS OpsWorks is a configuration management service that provides managed instances of
Chef and Puppet.
▪ OpsWorks manages operational tasks such as patching, updating, backup, configuration, and compliance management.
Document
▪ A document database stores data in JSON, BSON, or XML documents (not Word documents or
Google Docs, of course).
▪ In a document database, documents can be nested. Particular elements can be indexed for
faster querying.
Amazon RedShift
▪ Amazon Redshift is a data warehouse product which forms part of the larger cloud-computing
platform Amazon Web Services.
▪ It is built on top of technology from the massively parallel processing (MPP) data warehouse
company ParAccel, to handle large-scale data sets and database migrations.
▪ Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud.
▪ You can start with just a few hundred gigabytes of data and scale to a petabyte or more. This
enables you to use your data to acquire new insights for your business and customers.
Amazon ElastiCache
▪ ElastiCache nodes run on Amazon EC2 instances, so you must choose an instance family/type
Section 3: Exercises
Transfer
Requester Pays
Events
We have tried to design the labs in such a way that they make use of the free tier and the
participant incurs no charge. It is still possible that some fees may be incurred, for example
because participants do extra work on their own, do not terminate instances, do not release
storage, or because of a change in Amazon pricing policy or some other factor. We are not
responsible for any charge that is levied by Amazon for usage as part of this lab.
▪ The first screen that will appear when the ‘Launch Instance’ button is pressed is the AMI
selection screen.
▪ This is where you need to select an AMI of your choice.
▪ In our case we will choose the Amazon Linux AMI, which is free.
▪ In ‘Tag Instance’ screen you can assign any key-value combination that you want so that you can
identify the instance easily.
▪ For our exercise let the Key be ‘Name’ and the Value can be ‘App Server’ or ‘Web Server’
▪ Click on ‘Next: Configure Security Group’
▪ The ‘Review Instance Launch’ page provides all the options we have chosen in a single page.
We can review the options and then click ‘Launch’
Note: Not all options are visible in the screenshot. In the actual console you need to scroll to see all
the options
▪ This will take you to the instance dashboard and you will see the list of all instances (in our case
there must be only one instance)
▪ Click on the ‘Browse’ button. From the file menu, choose the .ppk file that you saved earlier
▪ Click ‘Open’ and click ‘Yes’ against the PuTTY security warning pop-up
▪ This will lead you to the Login screen
▪ The local directories are listed on the left and the Amazon Instance files are listed on the right.
▪ You can now drag and drop files into the Amazon Instance
▪ To terminate an instance, select the instance you want to terminate and click on
‘Actions’→’Instance State’→’Terminate’
▪ (Before pressing the ‘Create Snapshot’ button make sure you note down the name of the volume
for which you want to create the snapshot.) Press the ‘Create Snapshot’ button.
▪ Since we are entering RDS service for the first time, we will see this screen. Press on ‘Get
Started Now’
▪ The next screen asks us if we are going to use this Database in Production.
▪ If you are going to use the database in a production environment, you need to select a Multi-
Availability-Zone setup so that your uptime is high.
▪ You should also select Provisioned IOPS for high performance.
▪ In our case, as we are not going to use this in a production environment, select ‘No’ and click on
‘Next Step’.
▪ In the Backup and Maintenance segment (not shown here) leave the defaults as they are and
click ‘Launch Database’
▪ After the Database is launched, click on ‘View your DB Instance’.
▪ This will show all your DB instances. Click on the DB Instance to get the details.
▪ Next, ensure that your security groups allow access on the MySQL port, which is 3306.
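Opening the MySQL port can be done in the console or scripted; here is a sketch in Python/boto3 that adds an inbound rule for port 3306 to a hypothetical security group. In practice you would restrict the source CIDR to your application servers only.

    import boto3

    ec2 = boto3.client("ec2")

    # Allow inbound MySQL (TCP 3306). Group ID and CIDR are placeholders.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "app servers"}],
        }],
    )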
6. What type of computing technology refers to services and applications that typically run
on a distributed network through virtualized resources?
1. Distributed Computing
2. Cloud Computing
3. Soft Computing
4. Parallel Computing
12. Which of the following is the most important area of concern in cloud computing?
1. Security
2. Storage
3. Scalability
4. All of the mentioned
14. Which one of the following cloud concepts is related to sharing and pooling the
resources?
1. Polymorphism
2. Virtualization
3. Abstraction
4. None of the mentioned
15. All cloud computing applications suffer from the inherent _______ that is intrinsic in their WAN
connectivity.
1. propagation
2. latency
3. noise
4. None of the mentioned
16. What August event was widely seen as an example of the risky nature of cloud
computing?
1. Spread of Conficker virus
2. Gmail outage for more than an hour
3. Theft of identities over the Internet
4. Power outages in the Midwest
17. Which one of the following, delivered as a utility, is a dream that dates from the
beginning of the computing industry itself?
1. Computing
28. This is a software distribution model in which applications are hosted by a vendor or
service provider and made available to customers over a network, typically the Internet.
1. Platform as a Service (PaaS)
2. Infrastructure as a Service (IaaS)
3. Software as a Service (SaaS)
30. In the Planning Phase, which of the following is the correct step for performing the
analysis?
1. Cloud Computing Value Proposition
2. Cloud Computing Strategy Planning
3. Both A and B
4. Business Architecture Development
34. In which one of the following is a strategy record or document created, with respect to the
events and conditions a user may face while applying the cloud computing model?
1. Cloud Computing Value Proposition
2. Cloud Computing Strategy Planning
3. Planning Phase
4. Business Architecture Development
35. What second programming language did Google add for App Engine development?
1. C++
2. Flash
3. Java
4. Visual Basic
38. What facet of cloud computing helps to guard against downtime and determines costs?
1. Service-level agreements
2. Application programming interfaces
3. Virtual private networks
4. Bandwidth fees
39. Which one of the following refers to non-functional requirements like disaster
recovery, security, reliability, etc.?
1. Service Development
2. Quality of service
3. Plan Development
4. Technical Service
41. Which one of the following is related to the services provided by Cloud?
1. Sourcing
2. Ownership
3. Reliability
4. PaaS
42. SaaS supports multiple users and provides a shared data model through the _______ model.
1. single-tenancy
2. multi-tenancy
3. multiple-instance
4. all of the mentioned
45. Which one of the following refers to the user’s part of the Cloud Computing system?
1. Back End
2. Management
3. Infrastructure
4. Front End
47. _______ provides virtual machines, virtual storage, virtual infrastructure, and other hardware
assets.
1. IaaS
2. SaaS
3. PaaS
4. All of the mentioned
48. Which one of the following can be considered as the example of the Front-end?
1. Web Browser
2. Google Compute Engine
3. Cisco Metapod
4. Amazon Web Services
50. Cloud computing is also a good option when the cost of infrastructure and management
is _______
1. Low
2. High
3. Moderate
4. None of the mentioned
54. Amazon Web Services falls into which of the following cloud-computing category?
A. Platform as a Service
B. Software as a Service
C. Infrastructure as a Service
D. Back-end as a Service
64. Which of the following is a message queue or transaction system for distributed Internet-
based applications?
A. Amazon Simple Notification Service
B. Amazon Elastic Compute Cloud
C. Amazon Simple Queue Service
D. Amazon Simple Storage System
67. Which service performs the function whereby, when an instance is unhealthy, it is terminated
and replaced with a new one?
A. Sticky Sessions
B. Fault Tolerance
C. Connection Draining
D. None of the above
70. A virtual CloudFront user is called an OAI. This stands for what?
A. Origin Archive Initiative
B. Origin Access Identity
C. Original Archive Identity
D. Original Accessible Initiative
78. Applications and services that run on a distributed network using virtualized resources is
known as ___________
a) Parallel computing
b) Soft computing
c) Distributed computing
d) Cloud computing
81. Which of the following is the correct statement about cloud computing?
a) Cloud computing abstracts systems by pooling and sharing resources
b) Cloud computing is nothing more than the Internet
c) The use of the word “cloud” makes reference to the two essential concepts
d) All of the mentioned
83. Which of the following model attempts to categorize a cloud network based on four
dimensional factors?
a) Cloud Cube
b) Cloud Square
c) Cloud Service
d) All of the mentioned
84. Which of the following is the correct statement about cloud types?
a) Cloud Square Model is meant to show is that the traditional notion of a network boundary being
the network’s firewall no longer applies in cloud computing
b) A deployment model defines the purpose of the cloud and the nature of how the cloud is located
c) Service model defines the purpose of the cloud and the nature of how the cloud is located
86. All cloud computing applications suffer from the inherent _______ that is intrinsic in their
WAN connectivity.
a) noise
b) propagation
c) latency
d) all of the mentioned
87. Which of the following architectural standards is working with the cloud computing
industry?
a) Web-application frameworks
b) Service-oriented architecture
c) Standardized Web services
d) All of the mentioned
89. What is the correct formula to calculate the cost of a cloud computing deployment?
a) Cost_CLOUD = Σ(UnitCost_CLOUD / (Revenue + Cost_CLOUD))
b) Cost_CLOUD = Σ(UnitCost_CLOUD / (Revenue – Cost_CLOUD))
c) Cost_CLOUD = Σ(UnitCost_CLOUD × (Revenue – Cost_CLOUD))
d) None of the mentioned
90. Which of the following is the wrong statement about cloud computing?
a) Private cloud doesn’t employ the same level of virtualization
b) Data center operates under average loads
c) Private cloud doesn’t offer the pooling of resources that a cloud computing provider can achieve
d) Abstraction enables the key benefit of cloud computing: shared, ubiquitous access
95. Cloud computing is a concept that involves pooling physical resources and offering them
as which sort of resource?
a) cloud
b) real
c) virtual
d) none of the mentioned
97. Into which expenditures does cloud computing shift capital expenditures?
a) local
b) operating
c) service
d) none of the mentioned
99. Which of the following is the most essential element in cloud computing by CSA?
a) Virtualization
100. Which of the following monitors the performance of the major cloud-based services in
real time in Cloud Commons?
a) CloudWatch
b) CloudSensor
c) CloudMetrics
d) All of the mentioned
101. Which of the following models consists of the services that you can access on a cloud
computing platform?
a) Deployment
b) Service
c) Application
d) None of the mentioned
102. Which of the following is the most important area of concern in cloud computing?
a) Scalability
b) Storage
c) Security
d) All of the mentioned
103. Which of the following is the most refined and restrictive cloud service model?
a) PaaS
b) IaaS
c) SaaS
d) CaaS
108. In which of the following service models is the hardware virtualized in the cloud?
a) NaaS
b) PaaS
c) CaaS
d) IaaS
110. Which of the following is a workflow control and policy-based automation service by
CA?
a) CA Cloud Compose
b) CA Cloud Insight
c) CA Cloud Optimize
d) CA Cloud Orchestrate
empower yourself