Oracle Cloud Infrastructure Foundations


Hi, everyone. Welcome to this course on Oracle Cloud Infrastructure fundamentals.

This is the
first lecture in the fundamentals series focused on core cloud concepts. My name is Rohit Rahi,
and I'm part of the Oracle Cloud Infrastructure team.

So in this particular lecture, let's look at what it means to use the cloud. Cloud is very different from an on-premises infrastructure and environment, so we'll look into that. What does it mean to have different service models in the cloud? We'll look into that.

What are some of the common cloud terms, like elasticity and fault tolerance? We'll go through those. And then what does it mean, from a business sense, to use the cloud? We'll talk about capital expenditure and operational expenditure. So let's get started.

First, the best place to find a definition of cloud computing is the National Institute of Standards and Technology. I have given a link at the bottom so you can find it. In 2011, they came up with a standard definition of cloud computing, and even though it is a bit dated, going back to 2011, the definition is still very relevant.

And just to close on that, they have updated those standards in the years since 2011, and you can always find the latest on the web. But let's go over their definition. It actually closely resembles most public cloud environments.

So the first thing a cloud needs to have is this concept of on-demand self-service. You should be able to provision computing capabilities as needed, without requiring human interaction with the service provider. So you don't have to go to a service desk or a help desk and request a virtual machine, which takes a couple of days for IT to provision for you. It should all be on-demand and self-service.

Second, the cloud service should be available through broad network access, using standard mechanisms -- web standards. This, again, says that you should be able to access these services through a self-service mechanism, and it should be as simple as using, let's say, a web console.

In a cloud provider environment, the resources need to be pooled. What this means is that resources are pooled to serve multiple customers, using a model called multi-tenancy -- and we'll look into that in subsequent lectures -- with different resources dynamically assigned and reassigned according to demand.

Why do you do this resource pooling? Well, you pool these resources and make them available to customers, so you can save money as a provider.

And then if a customer needs more resources, you can give them more. And if some customers need fewer resources, you can take resources away. So this dynamic pooling is a core essence of cloud.

The fourth one is rapid elasticity, which ties closely with the pooling concept. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward depending on your demand. So if your demand increases, you can get more resources -- you have elasticity. If the demand is not that much, you can release those resources. So this, again, is tied closely with the resource pooling concept.
And then finally, the services can be monitored, controlled, and reported, providing transparency for both the provider and the consumer. This is the concept of pay-as-you-go, or consumption-based pricing: whatever you consume, you pay only for those resources. And you have complete visibility into what you're using and what you're paying for. So these five broad areas hold pretty true for a public cloud environment.

All right, so let's look at some of the service models which are prevalent. The one that you are familiar with is traditional IT. In this case, your IT organization manages everything end-to-end. Whether it's your core infrastructure, or some of the higher-level things like your middleware and runtime, everything is managed by your IT organization.

So the first model, which we'll talk about in a lot more detail subsequently, is infrastructure-as-a-service -- sometimes also referred to as IaaS. In this particular model, the cloud provider manages the core infrastructure. What that means is the cloud provider has data centers, and the physical infrastructure -- the servers, the networks, the storage machines, even the virtualization layer -- is all managed by the cloud provider and delivered as a service.

And you, as a customer, are responsible for things like your operating system, your middleware, your apps, and your data. Those are all your responsibility. A good example: you get a virtual machine in the cloud, and you have complete control over it. You can install whatever operating system you want, whatever application you want. So that's a good example of an infrastructure-as-a-service offering.

The second model in the cloud is platform-as-a-service. In this case, the provider is responsible for managing a little bit more than in infrastructure-as-a-service. The provider is also managing things like your runtime.

So let's say you have a Java runtime, or a Node.js runtime. What you do as a customer is write your applications, and you run them on the platform without worrying about the underlying virtual machine. You don't get a VM. You just get a runtime environment.

A classic example here is serverless offerings. We at Oracle have an offering called Oracle Functions. It gives you various runtimes, you can run your code on those runtimes, and we take care of things like scalability, high availability, etc.

Now, one of the big differences between this and the infrastructure model is that it takes pay-as-you-go pricing to the next level. You have consumption-based pricing: your functions run only for a certain period, and you pay only for the invocations. It's not like a virtual machine running on an hourly billing model, where even if you don't use it, you're still paying for it. So Oracle Functions -- serverless in general -- is a good example of platform-as-a-service.
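To make that billing difference concrete, here is a small Python sketch. The prices are made up for illustration only -- they are not actual Oracle list prices.

```python
# Hypothetical prices for illustration only -- not actual Oracle list prices.
VM_HOURLY_RATE = 0.05             # $/hour for an always-on virtual machine
PRICE_PER_INVOCATION = 0.0000002  # $/invocation for a serverless function

def monthly_vm_cost(hours=730):
    """An always-on VM is billed for every hour, used or idle."""
    return hours * VM_HOURLY_RATE

def monthly_function_cost(invocations):
    """A serverless function is billed only for actual invocations."""
    return invocations * PRICE_PER_INVOCATION

print(round(monthly_vm_cost(), 2))                 # cost of an idle or busy VM
print(round(monthly_function_cost(1_000_000), 2))  # cost of 1M invocations
```

With these example numbers, even a million invocations in a month costs a small fraction of keeping a VM running the whole month, which is exactly the consumption-based pricing point.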

And then finally, we have software-as-a-service, where the vendor delivers everything -- all the components, all the layers -- as a service. A good example here would be an ERP offering, a supply chain management offering in the cloud, or a human capital management SaaS offering in the cloud. Oracle has a bunch of these SaaS offerings, as other providers do.

So this is again an area where there are lots of offerings, where you, as a user, are just interacting with that particular SaaS offering. You're not getting the VMs. You're not writing your own code for applications.

All right, so let's look at another key construct. Now we are getting to some of the cloud
terminology, which is around high availability. This is a core concept you really need to
understand.

Computing environments configured to provide nearly full-time availability are known as high-availability systems. That's basically what it means: these systems have redundant hardware and software that keep the system available despite any kind of failure.

Well-designed high-availability systems avoid having single points of failure. And we'll talk more about this in our next module. But the idea is, because you have redundancy built into the system, you don't have a single place of dependency where things can fail.

So a classic example is using a load balancer. Let's say you have two web servers, and you put a load balancer in front of them. If web traffic is coming in -- let's say you are running some kind of website -- the fact that you have a load balancer right in front of the web servers gives you some redundancy.

Now, when a failure occurs in these high-availability systems, a failover process moves the processing performed by the failed component to a backup component. The more transparent the failover is to end users, the higher the availability of the system. So in this case, let's say one particular web server goes down.

The other web server is still running, and your website, whatever you're running, is still up and running. If you had only one server running and that server had gone down, the site would go down. So this is a very common, basic example of making a system highly available.
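The failover idea above can be sketched in a few lines of Python. The server names are hypothetical, and a real load balancer would use automated health checks rather than a manually set flag.

```python
# Minimal sketch of load-balancer failover; server names are hypothetical.
class LoadBalancer:
    def __init__(self, servers):
        self.servers = servers          # maps server name -> healthy flag
        self._order = list(servers)
        self._next = 0

    def route(self):
        """Round-robin across servers, skipping any that are down."""
        for _ in range(len(self._order)):
            name = self._order[self._next % len(self._order)]
            self._next += 1
            if self.servers[name]:
                return name
        raise RuntimeError("all backends are down -- the site is unavailable")

lb = LoadBalancer({"web-1": True, "web-2": True})
print(lb.route())             # web-1 serves the first request
lb.servers["web-1"] = False   # simulate web-1 failing
print(lb.route())             # web-2 -- traffic fails over, the site stays up
```

Note that if both backends fail, `route` raises: with a single web server, one failure takes the whole site down, which is exactly the single-point-of-failure problem the second server avoids.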
The next concept you need to understand in cloud is disaster recovery. It involves a set of policies, tools, and procedures to enable the recovery or continuation of your technology infrastructure and systems. Now, the two key terms that come up all the time are recovery point objective (RPO) and recovery time objective (RTO). So what do these mean?

At a very high level, recovery time objective basically means how much downtime your business can tolerate. Recovery point objective basically means how much data loss or transaction loss your business can tolerate. Let me give you an example.

Let's say your RTO is 24 hours, and your RPO is the same -- 24 hours. Now, they mean very different things based on what they actually imply. So let's look at that.

Let's say your RTO is 24 hours. This means that if a disaster happens, you are OK with having a downtime of up to 24 hours. So let's say your IT organization has processes in place that let you recover within, say, eight hours. Your recovery happens within eight hours, and you are OK, because you can tolerate a 24-hour downtime.

Now, instead of eight hours, if the recovery takes 48 hours, that is not OK, because your RTO is less than that. You cannot tolerate a 48-hour downtime. Let's look at the case of RPO. Most businesses do some kind of backup.

So let's say you do a backup at 12:00 AM, at midnight, and then a disaster happens at 8:00 AM. What happens in this case is your last good backup was done eight hours earlier. So eight hours is the amount of data you have lost, because your last backup happened at midnight. In this case, if your RPO is 24 hours, you are still OK, because you have only lost eight hours' worth of data.

Now imagine your RPO was, instead of 24 hours, four hours. In that case, this eight-hour loss is not good, because you have lost more data than your business can tolerate. So again, understanding these core concepts around RTO and RPO is relevant when you talk about cloud.
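The RTO and RPO checks in this example boil down to two simple comparisons. Here is a minimal sketch using the numbers from the lecture:

```python
def meets_rto(recovery_hours, rto_hours):
    """Downtime must not exceed what the business can tolerate (RTO)."""
    return recovery_hours <= rto_hours

def meets_rpo(hours_since_last_backup, rpo_hours):
    """Data loss (time since the last good backup) must not exceed the RPO."""
    return hours_since_last_backup <= rpo_hours

# The lecture's example: RTO = RPO = 24 hours.
print(meets_rto(8, 24))    # True  -- recovering in 8 hours is fine
print(meets_rto(48, 24))   # False -- a 48-hour outage exceeds the RTO
print(meets_rpo(8, 24))    # True  -- losing 8 hours of data is tolerable
print(meets_rpo(8, 4))     # False -- with a 4-hour RPO, 8 hours lost is too much
```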

Let's look at some of the other cloud terminology. The first one here is fault tolerance. Fault tolerance describes how a cloud vendor ensures minimal downtime for their own services. For example, we looked at the load balancer example before, and we said to avoid single points of failure. The fact that you have a second web server ensures that the web tier doesn't have a single point of failure.
But what about the load balancer itself? It is a single point of failure, because if the load balancer dies, the website dies. So what we do at Oracle, and what other cloud providers do, is manage a standby copy of the load balancer. If there is an issue with the primary load balancer, we can switch the traffic to the standby load balancer. And in subsequent lectures, we will look into how we do fault tolerance for every service, whether it's storage services, database services, compute services, etc.

Now, another concept which is relevant in cloud is scalability. There are two kinds of scalability. One is called horizontal scaling, also called scaling out.

What it means is, you have a server, and you can add more servers. That is scaling out. And along with that, if you don't need those extra servers, you can remove them. That is the concept of scaling in.

The other kind of scaling is called vertical scaling. Basically, it means that if you have a machine running, you can always get a bigger machine, and a bigger machine, and so on and so forth. And if you don't need a bigger machine, you can always go to a smaller size, a smaller shape. That is scaling up or scaling down.
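A toy Python model of the two scaling styles; the shape names here are made up for illustration:

```python
# Toy model of scaling; "VM.small" etc. are made-up shape names.

# Horizontal scaling (scale out / scale in): change the NUMBER of servers.
fleet = ["VM.small"]                 # start with one server
fleet += ["VM.small", "VM.small"]    # scale out to three servers
fleet.pop()                          # scale back in to two servers
print(len(fleet))                    # 2

# Vertical scaling (scale up / scale down): change the SIZE of one server.
shapes = ["VM.small", "VM.medium", "VM.large"]
current = shapes.index("VM.small")
current += 1                         # scale up to the next bigger shape
print(shapes[current])               # VM.medium
```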

The next concept is elasticity. Similar to scalability, it's the ability to quickly increase or decrease resources. It's not just limited to virtual machines: it can be your storage, your database, your load balancer, etc. The idea is, if you throw a lot more traffic at my load balancer, my load balancer should be able to scale seamlessly.
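Here is a minimal sketch of an elastic scaling decision. The CPU-utilization thresholds are hypothetical; a real autoscaler would act on actual monitoring metrics.

```python
# Hypothetical thresholds; a real autoscaler would use monitoring metrics.
def desired_capacity(current_servers, cpu_utilization):
    """Grow the fleet under load, shrink it when demand drops."""
    if cpu_utilization > 0.80:
        return current_servers + 1    # elastic scale-out under heavy load
    if cpu_utilization < 0.20 and current_servers > 1:
        return current_servers - 1    # elastic scale-in when demand drops
    return current_servers            # steady state: no change

print(desired_capacity(2, 0.95))   # 3 -- demand spiked, add a server
print(desired_capacity(3, 0.10))   # 2 -- demand dropped, release a server
print(desired_capacity(2, 0.50))   # 2 -- demand is steady, do nothing
```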

Now, the final topic in this lecture is cloud economics. I'm not getting deep into every aspect of how the costs change between on-premises and cloud. But the things to note here are CAPEX and OPEX, the most common cloud economics terms.

Capital expenditure, or CAPEX, is the money an organization or corporate entity spends to buy, maintain, or improve its fixed assets, such as buildings, vehicles, equipment, or land. A good example of CAPEX would be your data center. You need to spend money to build a data center, and you fill that data center with equipment. So there is a large capital expense involved.

An example of operational expenditure, or OPEX, is an ongoing cost. For example, if you run a data center, you are getting a power bill, a utility bill. That's an example of OPEX. Labor cost is another example of OPEX.

And what makes cloud so interesting is that it lets you trade CAPEX for OPEX. Instead of having to invest heavily in data centers and infrastructure, in the cloud you pay only when you consume resources. So you don't have to do a massive, capital-intensive buildout, and you pay only for how much you consume -- the pay-as-you-go, consumption-based pricing. It's a game-changer. That's one of the reasons, on the business side, cloud is so attractive.
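A back-of-the-envelope sketch of that CAPEX-versus-OPEX trade, with illustrative numbers only (not real data-center or cloud prices):

```python
# Illustrative numbers only -- not real data-center or cloud prices.
DATA_CENTER_BUILDOUT = 2_000_000    # CAPEX: paid upfront, used or not
CLOUD_RATE_PER_HOUR = 1.50          # OPEX: paid only while resources run

def on_prem_cost(hours_used):
    # The buildout cost is sunk regardless of how much you actually use it.
    return DATA_CENTER_BUILDOUT

def cloud_cost(hours_used):
    # Pay-as-you-go: the bill tracks consumption.
    return hours_used * CLOUD_RATE_PER_HOUR

# A workload that runs 1,000 hours in its first year:
print(on_prem_cost(1_000))   # the full upfront capital expense
print(cloud_cost(1_000))     # only the hours actually consumed
```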

All right, we covered a lot of ground in this lecture. Let's recap some of the core concepts we learned. For cloud computing, we talked about the defining characteristics: self-service, broad network access, the ability to pool resources, rapid elasticity, and the ability to measure what you are using. For service models, we looked into infrastructure-as-a-service, platform-as-a-service, and software-as-a-service, and we looked at specific examples.

We looked at some of the core cloud terminology: high availability, disaster recovery, fault tolerance, scalability, and elasticity. Of course, there are more terms, but these are some of the common ones you really need to understand. And then finally, we quickly touched on cloud economics -- what CAPEX is, what OPEX is, and why the cloud model is so attractive from a business perspective: because it lets you trade CAPEX for OPEX.

Thank you for joining this lecture. In the next lecture, we'll talk about the core high-availability design architectures in OCI. Thank you.

Copyright © 2020 Oracle University. All rights reserved.
