Cloud Native Development
Introduction Video)
Hello and welcome to Oracle University's explorer learning path on Cloud Native. I'm Nikita Abraham, and I will be showing you how Cloud Native enables faster software development and gives you the ability to build applications that are resilient, manageable, and dynamically scalable.
Cloud Native technologies require developers to adopt a new set of frameworks. This learning path will equip you with the essential skills required to adopt and efficiently leverage Oracle Cloud Native services that run on OCI.
In this learning path, you will learn about the Cloud Native architecture and its building blocks; Oracle Container Engine for Kubernetes as a deployment platform for containerized applications; Oracle Cloud Infrastructure Registry, which is an Oracle-managed registry to simplify the development-to-production workflow; and Oracle Functions, a fully managed serverless platform that helps you just focus on writing code.
This learning path will be useful for a cloud app developer or any developer looking to transition to Cloud Native. Complete this learning-path training to earn the Oracle Cloud Native Development Explorer Badge. Good luck.
Welcome. Our final topic in this module is cloud-native building blocks. Cloud native refers
less to where an application resides and more to how it is built and deployed. In this topic, I
will take you through the core components of cloud-native applications.
Microservices are loosely coupled services organized around business capability. They are smaller code bases that are managed by independent teams. Each one performs a single, well-defined task and is independently deployable.
Let's look at some of the benefits of microservices. Microservices allow faster verification, deployment, and releases. With microservices, it's easier to deliver new value to your customers.
Microservices use the best tools, frameworks, and languages, and they make it easier to measure and observe individual services and specific functionality. The challenges of microservice applications, like performance and network overhead along with logging and monitoring, are addressed by the way they are deployed.
Welcome to the world of containers. Containers encapsulate discrete components of application logic provisioned only with the minimal resources needed to do their job, addressing performance, logging, and portability.
To summarize, in cloud native, microservices act as building blocks and are often packaged in containers. That's our last topic for this module.
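As a sketch of how a microservice might be packaged and run as a container, consider the following Docker commands. The image name, tag, port, and resource limits here are hypothetical examples, not values from this course:

```shell
# Build a container image for a single microservice from its Dockerfile.
# "orders-service" is an example service name.
docker build -t orders-service:1.0 .

# Run the container, mapping the service port to the host and capping
# the minimal resources it is provisioned with (as described above).
docker run -d --name orders \
  --memory 256m --cpus 0.5 \
  -p 8080:8080 \
  orders-service:1.0
```

Because the container carries its own dependencies, the same image runs unchanged on a laptop or in the cloud, which is what gives containers their portability.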
Module – Lesson 2) Create OKE Cluster
Welcome to our next topic on the ways to run Kubernetes on OCI. A Kubernetes cluster is used to deploy a containerized application in the cloud. There is more than one way to run Kubernetes in Oracle Cloud. In this topic, I will walk you through the available methods.
There are basically three methods in which you can run Kubernetes on Oracle Cloud. In the first model, you make use of the OCI components to create a Kubernetes cluster and then deploy a container runtime like Docker along with Kubernetes yourself. It is a do-it-yourself model for highly customized setups where the integration and design of the infrastructure is manual.
The Quickstart Experience is an automated model that uses Terraform to build the Kubernetes cluster. This is the kind of approach where you need to perform very little customization, and near-instant creation is possible, but you'd have very limited integration tools.
The most popular method is Oracle Container Engine for Kubernetes, abbreviated as OKE. OKE is a managed service in OCI used for deploying a Kubernetes cluster within a few steps. This model provides you with a hybrid experience of integration, easy implementation, and connectivity to Oracle tools without compromising on time.
That's it for this topic.
Lesson 2 Video 3) Oracle Container Engine for Kubernetes
Welcome to this topic on Oracle Container Engine for Kubernetes. OKE is a developer-friendly, container-native, enterprise-ready managed Kubernetes service for running highly available clusters.
In this topic, I will take you through the features and benefits of OKE. OKE is a highly available managed service in Oracle Cloud Infrastructure. It enables both horizontal and vertical scaling.
OKE is used to deploy cloud-native applications on OCI. You can build an application using Docker containers and then deploy those containers on OCI using Kubernetes.
You can manage OKE either through the console or through the API. Use the OCI console as it provides you with well-defined console services without the burden of setting up the environment to access the cluster.
OKE provides a quick-start mechanism that enables developers like you to get started and deploy containers quickly. It gives DevOps teams visibility and control for Kubernetes management.
OKE provides an enriched experience by combining the production-grade container orchestration of open-source Kubernetes with the control, security, and highly predictable performance of Oracle's next-generation cloud infrastructure. It provides easy integration and has the tools to let you create, scale, manage, and control your own Kubernetes clusters instantly. It also provides you with the tools for monitoring.
OKE is a customizable, managed Kubernetes container service for deploying and running your own container-based apps.
That's it for this topic.
Lesson 2 Video 4) Examining the Prerequisites for an OKE Cluster
Welcome to this topic on examining the prerequisites for an OKE cluster. Before you can create an Oracle Container Engine cluster in OCI, there are a few prerequisites that you need to take care of. In order to create a cluster, you need access to core cloud resources such as a cloud account and a compartment. The compartment should have the appropriate policies applied.
Network resources such as a VCN, subnets, and security lists must be appropriately configured in the region in which you want to create and deploy clusters.
In order to access your cluster outside the OCI console, an SSH utility like PuTTY would also be required.
Your tenancy must have sufficient quota for the different types of resources, such as compute-instance quota, block-volume quota, and load-balancer quota.
To create an OKE cluster, the required policy in the root compartment of your tenancy is Allow service OKE to manage all-resources in tenancy.
To manage a cluster family, you must either be part of the admin group or part of a group to which a policy grants the appropriate permissions. For example, to enable nonadmin users in a group named Dev Team to perform any operation on the cluster, create the policy Allow group Dev Team to manage cluster-family in tenancy.
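Written out as OCI IAM policy statements, the two policies mentioned in this topic would look like the following (the group name Dev Team comes from the example above; adapt it to your own group):

```
Allow service OKE to manage all-resources in tenancy
Allow group Dev Team to manage cluster-family in tenancy
```

The first statement lets the OKE service itself create resources on your behalf; the second grants your developer group rights over clusters.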
You also need appropriate permissions for networking, such as a VCN, prior to OKE creation. Based on whether it's a public or private cluster, an appropriate subnet would be required.
That's it for this topic.
Lesson 2 Video 5) Creating an OKE Cluster
Welcome to this topic on creating an OKE cluster. In this topic, I will take you through the steps you need to follow to create a cluster. The process has now been made easy with the introduction of wizards, and you are required to provide very few details, which makes the process easier and faster.
Let's look at the five steps involved in setting up an OKE cluster through the quick-start option. The first step is to choose the name for your cluster and the Kubernetes version. The version determines the API resource objects available in the cluster.
In the next step, you need to define the required shape and the network to be used. A public cluster requires a public subnet. This step also defines the number of nodes in your node pool.
In step three, you can customize SSH keys and provide tags for your cluster by clicking Advanced Options. Then review the objects being created for your cluster, such as the virtual cloud network, node pools, and so on. You can also review the security lists and subnets.
When all the required nodes and services have been created, your cluster is ready, and you can access it.
That's it for this topic.
Welcome to this topic on the kubectl command line tool. In this topic, I will provide you with an overview of kubectl as a command-line utility. There are different ways in which you can access your Kubernetes cluster. YAML is a text format used to specify data related to configuration. You can create objects with YAML in Kubernetes and interact with the cluster.
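As a small illustration of creating an object with YAML, the following sketch feeds an inline pod definition to kubectl apply. It assumes a configured cluster, and the pod and image names are examples, not values from the course:

```shell
# Create a pod from an inline YAML object definition (example names).
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
  - name: web
    image: nginx:stable
    ports:
    - containerPort: 80
EOF
```

In practice you would usually keep the YAML in a version-controlled file and run kubectl apply -f pod.yaml instead.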
kubectl is the CLI to manage the cluster. We will talk more about this in just a minute. kubectl handles locating and authenticating to the API server. If you want to access the REST API directly with an HTTP client like cURL, there are several ways to authenticate. kubectl is the Kubernetes CLI.
This tool allows you to run commands against Kubernetes clusters. For configuration, kubectl looks for a file named config in the .kube directory under $HOME. kubectl communicates with the Kubernetes API server over HTTPS. You can also use this tool to inspect and manage cluster resources and view logs. That's it for this topic.
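A few everyday kubectl invocations illustrating the points above. These assume an already configured cluster, and the pod name in the last command is hypothetical:

```shell
# kubectl reads its configuration from $HOME/.kube/config by default.
ls "$HOME/.kube/config"

# Show the address of the API server for the current cluster.
kubectl cluster-info

# Inspect cluster resources.
kubectl get nodes
kubectl get pods --all-namespaces

# View logs from a pod ("hello-pod" is an example name).
kubectl logs hello-pod
```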
Lesson 2 Video 7) Accessing OKE Clusters Using kubectl
Welcome. Our final topic in this module is accessing OKE clusters using kubectl. This topic describes how you can configure and access an OKE cluster through kubectl. There are two ways in which you can access an OKE cluster. One is through the OCI Console, and the other is by using SSH with PuTTY. However, the recommended option is to use the OCI Console because it helps you set up the infrastructure faster. The first step is to set up the .kube folder; kubectl requires the .kube folder to be present in your home directory. Then configure kubectl.
To configure kubectl, run the oci ce cluster create-kubeconfig command. This generates the kubeconfig file that connects kubectl to the cluster in your OCI compartment. After completing step 2, you test the configuration using kubectl cluster-info. This command displays your cluster details. With this, your setup of kubectl is complete. That's it for this topic.
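The configuration step above can be sketched as follows. The cluster OCID and region are placeholders you would replace with values from your own tenancy:

```shell
# Step 2: generate the kubeconfig file for your cluster.
# Replace the cluster OCID and region with your own values.
oci ce cluster create-kubeconfig \
  --cluster-id ocid1.cluster.oc1..exampleuniqueID \
  --file "$HOME/.kube/config" \
  --region us-ashburn-1 \
  --token-version 2.0.0

# Test the configuration: this should print the API server details.
kubectl cluster-info
```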
Lesson 2 Video 8) Demonstration
In this demonstration, I will show you how to connect to an Oracle Container Engine for Kubernetes cluster using kubectl via Cloud Shell. We will connect to the Oracle Cloud Infrastructure Cloud Shell, which is a web-browser-based terminal, configure the Kubernetes command line tool, view cluster details, and inspect the worker nodes. This demo will help you acquire the necessary skills to configure kubectl and view the cluster details.
The assumptions for this demonstration are that you have a tenancy, username, and password. You should also have access to an OKE cluster.
So let's get started. Let me log into my Oracle Cloud account.
On the top frame of the OCI console, click the Cloud Shell icon. This will open the OCI Cloud Shell. We are now connected to the Cloud Shell.
From the main menu of the OCI console, choose Developer Services and then Container Clusters. Choose your compartment on the left pane. Click your cluster name, which will open the Cluster Details page.
Click Node Pools under Resources to see the total number of worker nodes. Clicking this displays all the worker nodes in the cluster. In this case, there are three worker nodes, and each of them has an IP address assigned.
Let's go back to the Cluster Details page. Click the Access Cluster button. As you know, you can access the cluster either from the Cloud Shell or through local access. However, in this demo, we will use the Cloud Shell.
Copy the second command and paste it in your Cloud Shell. This creates the config file in the .kube directory. Let us list the files in this directory. Now if I check my present working directory, I am in my user folder. Run the ls -a command, and you can see that a .kube directory exists.
To check the cluster details, let's run the kubectl cluster-info command. The cluster details along with the master node are displayed.
Now if I run kubectl get nodes, the worker-node details are displayed. As we observed on the Cluster Details page, the cluster had three worker nodes, and the same are being fetched here.
So to summarize, in this demo, you have seen how to access a cluster using kubectl from your Cloud Shell.
Lesson 3 Video 1) Introducing OCI Registry Service
Oracle Cloud Infrastructure Registry makes it easy for you as a developer to store, share, and manage development artifacts like Docker images. In this topic, I will introduce you to the OCIR service and discuss its benefits to OKE. So why should you use OCIR?
Without a registry, development teams will find it hard to maintain a consistent set of Docker images for their containerized applications. It is difficult to find the right images and have them available in the region of deployment. And without a managed registry, it is also tough to enforce access rights and security policies for images.
OCIR is a repository of Docker images. It is a highly available, Docker Registry V2-compliant container registry service. It helps you store Docker images in private or public repositories. It runs as a fully managed service on Oracle Cloud Infrastructure, and it provides you with a secure environment where you can share repositories across users if needed.
OCIR offers you full integration with Container Engine for Kubernetes. Registries are private by default but can be made public by an admin. OCIR is co-located regionally with Container Engine for low-latency Docker image deploys, and OCIR lets you leverage OCI for high performance, low latency, and high availability.
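Pushing an image to OCIR follows the standard Docker workflow. In this sketch the region key, tenancy namespace, username, and image name are placeholders, not values from the course:

```shell
# Log in to the regional OCIR endpoint ("phx" is the Phoenix region key).
# The password prompt expects an OCI auth token, not your console password.
docker login phx.ocir.io -u '<tenancy-namespace>/<username>'

# Tag a local image with the full OCIR path, then push it.
docker tag orders-service:1.0 phx.ocir.io/<tenancy-namespace>/orders-service:1.0
docker push phx.ocir.io/<tenancy-namespace>/orders-service:1.0
```

Once pushed, the same image path can be referenced from a Kubernetes deployment running on OKE in the same region for low-latency pulls.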
Lesson 3 Video 2) Working with OKE and OCIR on OCI
Welcome to our next topic on working with OKE and OCIR on OCI. In this topic, I'll briefly take you through the functions that Oracle manages for its customers and the functions that customers manage themselves. The first two columns, OCIR and Container Engine, showcase the functions that Oracle manages for its customers. This includes an integrated registry with image storage, and Container Engine, which provides managed Kubernetes. Oracle also manages the etcd and master nodes of the Kubernetes instance in your high-availability setup. Customers manage the clusters of worker nodes that are set up by the managed service for that instance in their own OCI tenancy. They bring their own OCI account to create clusters for the managed Kubernetes cloud service and pay for any infrastructure usage incurred from clusters of worker nodes. That's it for this topic.
Lesson 4 Video 1) Serverless Computing
Serverless computing refers to an execution model in which the cloud provider dynamically allocates resources whenever a piece of code is executed and only charges for the amount of resources used to run that code. Welcome to this module on deploying and invoking code using serverless functions. In this topic, I will take you through what serverless computing is in the context of cloud computing.
Serverless computing is growing in popularity among developers because it allows them to focus on what matters most, which is writing code, without worrying about the underlying infrastructure. Here you can see how computing in the cloud has moved from bulky hardware infrastructure dependency to VM workloads to further lightweight processes, such as containers, and finally tapering into functions.
Serverless is a category of cloud services that raises the abstraction level so that developers don't need to think about servers, VMs, or other IaaS components. Let's look at the features of serverless architecture. In serverless, your code can auto-scale as needed. It is elastic in its compute utilization, which means you use compute resources on demand.
One of the many valuable features of a serverless architecture is that you'll pay only for execution time. Serverless computing is gaining popularity because it allows developers to focus on the functionality of their code and not be worried about the target deployment environment.
You could say it takes a lot of the ops out of DevOps. In serverless computing, abstractions are used to hide the implementation details of the lower levels. Typically, with each new abstraction, less domain-specific knowledge is required to make use of the underlying system. That's it for this topic.
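On OCI, serverless functions are built with the open-source Fn Project CLI. A minimal sketch of the workflow might look like the following; the application and function names are examples, and an OCI Functions application is assumed to already exist:

```shell
# Scaffold a function with a Python runtime ("hello" is an example name).
fn init --runtime python hello
cd hello

# Build and deploy the function into an existing application.
fn deploy --app demo-app

# Invoke it; with serverless, you are billed only for execution time.
fn invoke demo-app hello
```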
Serverless Architecture
Lesson 5)
Having completed the Oracle Cloud Native Development Explorer learning path, you are familiar with the core principles of the cloud native methodology and the success that implementing it can bring. You can now continue your professional learning for a career in cloud native technologies characterized by the use of containers, microservices, and serverless functions.
An OCI developer prepares for the future of application development by learning cloud native technologies and is responsible for the end-to-end development and deployment of cloud native applications. If you aspire to be an OCI developer, Oracle University's Application Development Cloud Learning subscription has just the learning path for you, titled Build Containerized Applications for Cloud.
Now that you're ready to start your journey with cloud native, you can head straight to the Build Containerized Applications for Cloud learning path. This learning path is organized to teach you the skills that you will need to adopt and efficiently leverage Oracle Cloud Native services that run on OCI. This learning path contains courses that are taught by a cloud native expert.
You can explore the integrated developer experience provided on OCI through microservices, OCIR, OKE, and Functions. In addition, you can schedule hands-on labs, which will allow you to practice configuration tasks in a live Oracle Cloud environment. The skill checks will help you test your understanding of the concepts covered.
Click here to get started to become an OCI developer. We're excited to have you join us in this training and hope that you will enjoy your Oracle University learning experience. Thank you for watching.