
Cloud Native Development Explorer

Introduction Video

Hello and welcome to Oracle University's explorer learning path on Cloud Native. I'm Nikita Abraham, and I will be showing you how Cloud Native enables faster software development and gives you the ability to build applications that are resilient, manageable, and dynamically scalable.
Cloud Native technologies require developers to adopt a new set of frameworks. This learning path will equip you with the essential skills required to adopt and efficiently leverage Oracle Cloud Native services that run on OCI.
In this learning path, you will learn about the Cloud Native architecture and its building blocks; Oracle Container Engine for Kubernetes as a deployment platform for containerized applications; Oracle Cloud Infrastructure Registry, which is an Oracle-managed registry to simplify the development-to-production workflow; and Oracle Functions, a fully managed serverless platform that helps you focus on just writing code.
This learning path will be useful for a cloud app developer or any developer looking to transition to Cloud Native. Complete this learning-path training to earn the Oracle Cloud Native Development Explorer Badge. Good luck.

Lesson 1 Video 1) Getting Started with Cloud Native

Welcome to this module on getting started with cloud native. Cloud native is a term used to describe container-based environments. Does the prospect of using cloud-native technologies excite you? Well, then let's get started.
In this topic, you will get a general idea of what the cloud-native framework is. Cloud-native architecture is the design or plan for applications and services built specifically to exist in the cloud. This framework is a collection of small, independent, and loosely coupled services. It natively utilizes services and infrastructure from cloud-computing providers. It describes container-based environments deployed as microservices and managed on elastic infrastructure, and it builds, runs, and improves applications based on well-known techniques and technologies for cloud computing.
Here are some of the benefits of this framework. The cloud-native framework helps you enhance business-critical applications that are not yet able or ready to move to the cloud. It helps you enjoy the benefits of cloud-native tools and capabilities even for workloads that remain on premises.
Are you looking to speed up time to market? This framework helps your development process match the speed and innovation demanded by today's business environment. It also allows enterprises to embrace open source, hybrid cloud, and multicloud options. That's it for this topic.

Lesson 1 Video 2) Patterns of Building Cloud-Native Applications

Welcome to our next topic on patterns of building cloud-native applications. Cloud native promises highly resilient and scalable applications. But how do you get there? That's done by adopting apt principles and patterns.
Let's look at what drives cloud native. Development should focus on expecting failures and dealing with them, rather than assuming they can be avoided entirely. This gives birth to microservices-based applications. You must build every cloud-native app with resiliency, agility, operability, and observability.
Think of containers as packages of isolated code that can run independently as applications. Containers are lightweight deployments best suited for the cloud and developed for microservices needs.
Stateless doesn't mean the software doesn't store any information at all, because it does. It's just that the way it stores information is different from stateful services. Stateless services use a replicating process to take snapshots of the process and store the information in persistent stores.
One way of making microservices communicate with each other is through a request-response protocol. The orchestration of these microservices will enable event-driven systems. So instead of being limited to just client-side requests, your code can be executed by a number of events or triggers.
A distributed system is a network that stores data on more than one node at the same time. All cloud applications are indeed distributed systems.
The CAP theorem states that a distributed system can deliver only two of the following three characteristics at any one time: consistency, availability, and partition tolerance. Consistency means that all clients see the same data at the same time, no matter which node they connect to. Availability means that any client making a request for data gets a response, even if one or more nodes are down. And partition tolerance means that the cluster must continue to work despite any number of communication breakdowns between nodes in the system.
That's it for this topic.
Lesson 1 Video 3) Cloud-Native Building Blocks

Welcome. Our final topic in this module is cloud-native building blocks. Cloud native refers less to where an application resides and more to how it is built and deployed. In this topic, I will take you through the core components of cloud-native applications.
Microservices are loosely coupled services organized around business capability. They are smaller code bases that are managed by independent teams. They consist of a single well-defined task and are independently deployable.
Let's look at some of the benefits of microservices. Microservices allow faster verification, deployment, and releases. With microservices, it's easier to deliver new value to your customers.
Microservices use the best tools, frameworks, and languages, and they make it easier to measure and observe individual services and specific functionality. The challenges of microservice applications, like performance and network overhead along with logging and monitoring, are addressed by the way they are deployed.
Welcome to the world of containers. Containers encapsulate discrete components of application logic provisioned only with the minimal resources needed to do their job, addressing performance, logging, and portability.
To summarize, in cloud native, microservices act as building blocks and are often packaged in containers. That's our last topic for this module.
Module – Lesson 2) Create an OKE Cluster

Lesson 2 Video 1) Creating an Oracle Container Engine for Kubernetes Cluster

Welcome to this module on creating an Oracle Container Engine for Kubernetes cluster for containerized deployment. The containerization technique brings virtualization to the operating-system level.
The most common application containerization technology is Docker, so let's discuss Docker and Kubernetes. Docker is an open-source project and a containerization platform. Docker containers behave like very lightweight virtual machines. Docker standardizes the packaging of applications and their dependencies into containers.
Kubernetes is an open-source and portable platform. It links containers running on multiple hosts and enables orchestration. Containers communicate with each other via Kubernetes. Kubernetes allows the upgrading of an application without service interruption and also monitors the health of the application.
Let's discuss the steps involved in developing a cloud-native application. Design your application as microservices. This enables arranging an application as a collection of loosely coupled services.
Containerization is the approach of running applications on an OS so that the application is isolated from the rest of the system. Docker is what enables you to create, run, and manage containers on a single OS. Kubernetes allows you to automate container provisioning and networking. A collection of nodes that are managed by a single Kubernetes instance is referred to as a Kubernetes cluster.
That's it for this topic.
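The containerization workflow described in this topic can be sketched with a few Docker commands. The image name, container name, and port mapping below are illustrative choices, not from the course:

```shell
# Pull a public Apache httpd image from Docker Hub
docker pull httpd:latest

# Run it as an isolated container, mapping host port 8080 to the container's port 80
docker run -d --name web -p 8080:80 httpd:latest

# The containerized server answers on the mapped port
curl http://localhost:8080/

# List running containers; stop and remove the container when done
docker ps
docker stop web && docker rm web
```

This runs the application isolated from the rest of the host system, which is exactly the property the topic attributes to containerization.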
Lesson 2 Video 2) The Ways to Run Kubernetes on OCI

Welcome to our next topic on the ways to run Kubernetes on OCI. A Kubernetes cluster is used to deploy a containerized application in the cloud. There is more than one way to run Kubernetes in Oracle Cloud. In this topic, I will walk you through the available methods.
There are basically three methods by which you can run Kubernetes on Oracle Cloud. In the first model, you use OCI components to create a Kubernetes cluster yourself and then deploy a container runtime such as Docker and Kubernetes on it. This is a do-it-yourself model for highly customized setups, where the integration and design of the infrastructure is manual.
The Quickstart Experience is an automated model that uses Terraform to build the configuration of the Kubernetes cluster. This is the kind of approach where you need to perform very little customization, and instant creation is possible, but you'd have very limited integration tools.
The most popular method is Oracle Container Engine for Kubernetes, abbreviated as OKE. OKE is a managed service in OCI used for deploying a Kubernetes cluster within a few steps. This model provides you with a hybrid experience of integration, easy implementation, and connectivity to Oracle tools without compromising on time.
That's it for this topic.
Lesson 2 Video 3) Oracle Container Engine for Kubernetes

Welcome to this topic on Oracle Container Engine for Kubernetes. OKE is a developer-friendly, container-native, enterprise-ready managed Kubernetes service for running highly available clusters.
In this topic, I will take you through the features and benefits of OKE. OKE is a highly available managed service in Oracle Cloud Infrastructure. It enables both horizontal and vertical scaling.
OKE is used to deploy cloud-native applications on OCI. You can create an application by using Docker containers and then deploy the containers on OCI using Kubernetes.
You can manage OKE either through the console or through the API. Use the OCI console, as it provides you with well-defined console services without the burden of setting up an environment to access the cluster.
OKE provides a quick-start mechanism that enables developers like you to get started and deploy containers quickly. It gives DevOps teams visibility and control for Kubernetes management.
OKE provides an enriched experience by combining the production-grade container orchestration of open Kubernetes with the control, security, and highly predictable performance of Oracle's next-generation cloud infrastructure. It provides easy integration and has the tools to let you create, scale, manage, and control your own Kubernetes clusters instantly. It also provides you with the tools for monitoring.
OKE is a customizable, managed Kubernetes container service for deploying and running your own container-based apps.
That's it for this topic.
Lesson 2 Video 4) Examining the Prerequisites for an OKE Cluster

Welcome to this topic on examining the prerequisites for an OKE cluster. Before you can create an Oracle Container Engine cluster in OCI, there are a few prerequisites that you need to take care of. In order to create a cluster, you need access to core cloud resources, such as a cloud account and a compartment. The compartment should have the appropriate policies applied.
Network resources such as a VCN, subnets, and security lists must be appropriately configured in the region in which you want to create and deploy clusters.
In order to access your cluster outside the OCI console, an SSH utility like PuTTY would also be required.
Your tenancy must have sufficient quota for the different types of resources, such as compute-instance quota, block-volume quota, and load-balancer quota.
To create an OKE cluster, the required policy in the root compartment of your tenancy is "allow service OKE to manage all resources in tenancy".
To manage a cluster family, you must either be part of the admin group or part of a group to which a policy grants the appropriate permissions. For example, to enable nonadmin users in a group named Dev Team to perform any operation on the cluster, create the policy "allow group Dev Team to manage cluster family in tenancy".
You also need appropriate permissions for networking resources, such as a VCN, prior to OKE creation. Based on whether it's a public or private cluster, an appropriate subnet would be required.
That's it for this topic.
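In OCI IAM policy syntax, the two policies described in this topic are typically written with hyphenated resource-type names, roughly as follows (the group name DevTeam is illustrative):

```
Allow service OKE to manage all-resources in tenancy
Allow group DevTeam to manage cluster-family in tenancy
```

The first statement belongs in the root compartment of the tenancy; the second grants nonadmin users in the named group full control over clusters.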
Lesson 2 Video 5) Creating an OKE Cluster

Welcome to this topic on creating an OKE cluster. In this topic, I will take you through the steps you need to follow to create a cluster. The process has now been made easy with the introduction of wizards, and you are required to provide very few details, which makes the process easier and faster.
Let's look at the steps involved in setting up an OKE cluster through the quick-start option. The first step is to choose the name for your cluster and the Kubernetes version. The version determines the API resource objects available in the cluster.
In the next step, you define the required shape and the network to be used. A public cluster requires a public subnet. This step also defines the number of nodes in your node pool.
In step three, you customize SSH keys and provide tags for your cluster by clicking Advanced Options. Then review the objects being created for your cluster, such as the virtual cloud network, node pools, and so on. You can also review the security lists and subnets.
When all the required nodes and services are complete, your cluster is ready, and you can access it.
That's it for this topic.

Lesson 2 Video 6) Introducing the kubectl Command-Line Tool

Welcome to this topic on the kubectl command-line tool. In this topic, I will provide you with an overview of kubectl as a command-line utility. There are different ways in which you can access your Kubernetes cluster. YAML is a text format used to specify configuration data. You can create objects with YAML in Kubernetes and interact with the cluster.
kubectl is the CLI used to manage the cluster. We will talk more about this in just a minute. kubectl handles locating and authenticating to the API server. If you want to access the REST API directly with an HTTP client like cURL, there are several ways to authenticate. kubectl is the Kubernetes CLI.
This tool allows you to run commands against Kubernetes clusters. For configuration, kubectl looks for a file named config in the .kube directory under $HOME. kubectl authenticates to the Kubernetes API over HTTP(S). You can also use this tool to inspect and manage cluster resources and view logs. That's it for this topic.
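A few kubectl commands illustrate the points above. They assume a kubeconfig already exists at $HOME/.kube/config and that you have access to a cluster; the pod name is a placeholder:

```shell
# kubectl reads $HOME/.kube/config by default; show the effective configuration
kubectl config view

# Verify that kubectl can locate and authenticate to the API server
kubectl cluster-info

# Inspect cluster resources
kubectl get nodes

# View logs from a running pod (<pod-name> is a placeholder)
kubectl logs <pod-name>
```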
Lesson 2 Video 7) Accessing OKE Clusters Using kubectl

Welcome. Our final topic in this module is accessing OKE clusters using kubectl. This topic describes how you can configure and access an OKE cluster through kubectl. There are two ways in which you can access an OKE cluster: one is through the OCI Console, and the other is by using SSH with PuTTY.
However, the recommended option is to use the OCI Console because it helps you set up the infrastructure faster. The first step is to set up the .kube folder; kubectl requires the .kube folder to be present in your home directory. Then configure kubectl.
To configure this, run the "oci ce cluster create-kubeconfig" command. This connects the kubectl resource to your OCI compartment. After completing this step, you test the configuration using "kubectl cluster-info". This command displays your cluster details. With this, your setup of kubectl is complete. That's it for this topic.
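The configuration step described above looks roughly like this in Cloud Shell. The cluster OCID and region below are placeholders, and exact flags may vary by OCI CLI version:

```shell
# Ensure the .kube folder exists in your home directory
mkdir -p $HOME/.kube

# Generate a kubeconfig for your OKE cluster (placeholder OCID and region)
oci ce cluster create-kubeconfig \
  --cluster-id ocid1.cluster.oc1.iad.exampleuniqueID \
  --file $HOME/.kube/config \
  --region us-ashburn-1 \
  --token-version 2.0.0

# Test the configuration: this should display the cluster details
kubectl cluster-info
```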
Lesson 2 Video 8) Demonstration

In this demonstration, I will show you how to connect to an Oracle Container Engine for Kubernetes cluster using kubectl via Cloud Shell. We will connect to the Oracle Cloud Infrastructure Cloud Shell, which is a web-browser-based terminal, configure the Kubernetes command-line tool, view cluster details, and inspect the worker nodes. This demo will help you acquire the necessary skills to configure kubectl and view the cluster details.
The assumptions for this demonstration are that you have a tenancy, username, and password. You should also have access to an OKE cluster.
So let's get started. Let me log into my Oracle Cloud account.
On the top frame of the OCI console, click the Cloud Shell icon. This will open the OCI Cloud Shell. We are now connected to the Cloud Shell.
From the main menu of the OCI console, choose Developer Services and then Container Clusters. Choose your compartment on the left pane. Click your cluster name, which will open the Cluster Details page.
Click Node Pools under Resources and see the total number of worker nodes. Clicking this displays all the worker nodes in the cluster. In this case, there are three worker nodes, and each of them has an IP assigned.
Let's go back to the Cluster Details page. Click the Access Cluster button. As you know, you can access the cluster either from the Cloud Shell or through local access. However, in this demo, we will use the Cloud Shell.
Copy the second command and paste it into your Cloud Shell. This creates the config file in the .kube directory. Let us list the files in this directory.
Now if I check my present working directory, I am in my user folder. Run the ls -a command, and you can see that a .kube directory exists.
To check the cluster details, let's run the "kubectl cluster-info" command. The cluster details, along with the master node, are displayed.
Now if I run "kubectl get nodes", the worker-node details are displayed. As we observed on the Cluster Details page, the cluster had three worker nodes, and the same are being fetched here.
So to summarize, in this demo, you have seen how to access a cluster using kubectl from your Cloud Shell.
Lesson 3 Video 1) Introducing the OCI Registry Service

Oracle Cloud Infrastructure Registry makes it easy for you as a developer to store, share, and manage development artifacts like Docker images. In this topic, I will introduce you to the OCIR service and discuss its benefits to OKE. So why should you use OCIR?
Without a registry, development teams will find it hard to maintain a consistent set of Docker images for their containerized applications. It is difficult to find the right images and have them available in the region of deployment. And without a managed registry, it is also tough to enforce access rights and security policies for images.
OCIR is a repository of Docker images. It is a highly available, Docker Registry v2 container registry service. It helps you store Docker images in private or public repositories. It runs as a fully managed service on Oracle Cloud Infrastructure, and it provides you with a secure environment where you can share repositories across users if needed.
OCIR offers you full integration with Container Engine for Kubernetes. Registries are private by default but can be made public by an admin. OCIR is co-located regionally with Container Engine for low-latency Docker image deploys, and OCIR lets you leverage OCI for high performance, low latency, and high availability.
Lesson 3 Video 2) Working with OKE and OCIR on OCI

Welcome to our next topic on working with OKE and OCIR on OCI. In this topic, I'll briefly take you through the functions that Oracle manages for its customers and the functions that customers manage themselves. The first two columns, OCIR and Container Engine, showcase the functions that Oracle manages for its customers.
This includes an integrated registry with image storage, and the container engine that provides managed Kubernetes. Oracle also manages the etcd and master nodes of the Kubernetes instance in your high-availability setup. Customers manage the clusters, or worker nodes, that are set up by the managed service for that instance in their own OCI tenancy. They bring their own OCI account to create clusters for the managed Kubernetes cloud service and pay for any infrastructure usage incurred by clusters of worker nodes. That's it for this topic.

Lesson 3 Video 3) Creating Repos in Oracle Cloud Infrastructure Registry

Welcome to our next topic on creating repos in Oracle Cloud Infrastructure Registry. Related images in OCIR can be grouped into meaningfully named repositories for your convenience. So how do you create a repository in OCIR?
Let's first take a look at the prerequisites. To use the Registry service, you should either be part of the admin group or part of a group to which a policy grants the appropriate permissions. The policy that allows you to see a list of all the repositories belonging to the tenancy in OCIR is "allow group <your group name> to inspect repos in tenancy".
The policy that allows you to perform any operation on any repository in OCIR is "allow group <your group name> to manage repos in tenancy". You need to have an OCI username and auth token before being able to pull an image. Repositories can be private or public. Anyone with internet access and knowledge of the appropriate URL can pull images from a public repository in OCIR.
Here's how you can create a registry. First log into the cloud, go to Developer Services, and choose OCIR. Then go to Create Registry. Remember that registry names should be in lowercase. Specify the scope of the registry, that is, whether it is private or public. Then the registry is created. Now you can push images to this registry via Docker. That's it for this topic.
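In OCI IAM policy syntax, the two Registry policies above are written roughly as follows (Acme-Devs is an illustrative group name):

```
Allow group Acme-Devs to inspect repos in tenancy
Allow group Acme-Devs to manage repos in tenancy
```

The first grants read-only listing of repositories; the second grants full control over them.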
Lesson 3 Video 4) Pushing Images to the Registry

Welcome to our next topic on pushing images to the registry. Having created a new repository, you can push an image to the repository using the Docker CLI. So how do you push images to OCI Registry? First, you need to generate an auth token in OCI. This will serve as a credential to connect to the repository.
So you will create an auth token and copy it, log into OCIR from the Docker CLI, and pull the image from Docker Hub. The chosen image should be prepared and tagged for pushing. Now the chosen image can be pushed into the repository. The pushed image will generate a digest and be saved in the repository.
To verify the pushed image, log into the OCI Console with your credentials. Visit your repository by going to the Main Menu, Developer Services, and OCIR. Review the repository name. You can add a readme by visiting the image. The image metadata and information will be displayed when you select the image. That's it for this topic.
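The push workflow above can be sketched as follows. The region key (iad, for US Ashburn), tenancy namespace, and repo name are placeholders you must replace with your own values:

```shell
# Log in to OCIR; when prompted, the password is the auth token
docker login iad.ocir.io -u '<tenancy-namespace>/<username>'

# Pull an image from Docker Hub, then tag it for OCIR using the format
# <region-key>.ocir.io/<tenancy-namespace>/<repo-name>/<image>:<tag>
docker pull httpd:latest
docker tag httpd:latest iad.ocir.io/<tenancy-namespace>/demo-repo/httpd:latest

# Push the tagged image; the registry responds with a digest on success
docker push iad.ocir.io/<tenancy-namespace>/demo-repo/httpd:latest
```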
Lesson 3 Video 5) Pulling Images from the Registry for OKE Deployment

Welcome. Our last topic in this module is pulling images from the registry for OKE deployment. During the deployment of an application to a Kubernetes cluster, you will typically want one or more images to be pulled from a Docker registry.
In this topic, I will take you through the steps to pull images from OCIR for an OKE deployment. First, generate an auth token from OCI; this serves as the password. Through Cloud Shell, create a pod in Kubernetes via the kubectl utility. Create a secret object to connect to the registry, then verify the secret object created in the cluster.
Using kubectl, create a pod that uses the image in the OCI Registry. If the access secret is successfully validated, the object should be created, and you can verify the status of the object by viewing it. That's our last topic in this module.
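The steps above can be sketched like this. The registry endpoint, secret name, image path, and credentials are illustrative placeholders:

```shell
# Create a secret holding the OCIR credentials; the password is the auth token
kubectl create secret docker-registry ocirsecret \
  --docker-server=iad.ocir.io \
  --docker-username='<tenancy-namespace>/<username>' \
  --docker-password='<auth-token>' \
  --docker-email='you@example.com'

# Verify the secret object created in the cluster
kubectl get secrets

# Create a pod that pulls its image from OCIR using the secret
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: httpd-pod
  labels:
    app: httpd
spec:
  containers:
  - name: httpd
    image: iad.ocir.io/<tenancy-namespace>/demo-repo/httpd:latest
    ports:
    - containerPort: 80
  imagePullSecrets:
  - name: ocirsecret
EOF

# Verify the status of the pod
kubectl get pods
```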
Lesson 3 Video 6) Demo 1

In this demo, I will show you how to push an image to the OCIR registry and then verify it in the OCI Console. We'll look at generating an authentication token for a user to connect to OCI Registry, downloading an Apache image from Docker Hub, tagging the downloaded image, and then pushing it to OCI Registry. This demo will help you acquire the necessary skills to create an authentication token and use it to push an image to OCI Registry. So let's get started.
Let me log in with my username and password. We will first generate an authentication token, or auth token, to connect to OCIR. To generate an auth token, click User Settings. Scroll down and click Auth Tokens on the left. Click Generate Token.
I already have an auth token generated here, and I've made note of it in my editor for future reference. The auth token is required to log in to OCI Registry from our Cloud Shell. Let's click the Cloud Shell icon here at the top. As you can see, I'm now connected to the Cloud Shell. Now let's log in to the Docker CLI.
Here, IAD represents the region code, that is, US Ashburn. The username is the tenancy name, slash, username, and the password is the auth token. Let me paste the auth token here that I'd copied before. We have now successfully logged in. Let's pull an httpd image from Docker Hub by running the docker pull command.
As you can see here, this pulls the Apache image from hub.docker.com. The download is now complete. The next step is to tag the image before pushing it to OCIR. The syntax here is docker tag, downloaded image name, region, slash, tenancy, slash, repo name, and then the image name.
Let us now run the docker images command. You can see our images present in the list here. Let me copy this. Now let's push this image to OCIR using the docker push command. As you can see, the different layers of the image are getting pushed to OCI Registry. Be patient until the process completes.
The image is now successfully pushed to OCIR. Let's verify the image in OCIR. In the Console, go to Developer Services and click Registry. Click the Refresh icon and scroll down to view your repository. To add a readme text, click Edit. Select Plaintext and add the appropriate content.
When you're done, click Save. The repository metadata, such as the date of creation, the size, and the image pushed, are all displayed. So to summarize, in this demo, you have seen how to push a Docker image to OCIR and verify it.
Lesson 3 Video 7) Demo 2

Welcome to this demonstration on pulling images from OCIR for an OKE deployment. In this demo, I will show you how to pull an image from OCIR and deploy it on an OKE cluster. We will look at creating a Docker container using the image from OCI Registry, generating a kubectl secret, creating a YAML file for deployment, deploying a pod on the cluster with kubectl using the image from OCI Registry, and verifying the external IP of the service.
This demo will help you acquire the necessary skills to deploy an image present in OCI Registry to an OKE cluster using a kubectl secret. So let's get started. Let's begin by logging into the cloud account.
Click the Cloud Shell icon on the top right. We are now connected. When I click my username and navigate to Auth Tokens, you will see that I already have an auth token generated and copied in my editor.
Let me go back to my Cloud Shell and log into the Docker CLI using the docker login command. The username is tenancy name, slash, username, and the password is the auth token. The login is successful. This Docker login should have created a .docker folder in the user profile directory, as shown here. It should also have created a config.json file.
Let's run the docker ps command to check for any running containers. There aren't any now. In the earlier demo, we uploaded the new httpd image into OCIR. Let's now create a Docker container using this image.
Let us also run docker images. We can see that our images are here in the list. If we verify the containers, we can see that there is a container running. This curl command will verify that the service is available via the exposed port 8000. We see that it now reflects the index.html of the Apache server.
Now let's extend this to OKE to create a pod. Let's create a secret object so that the kubectl command has access to pull images from OCIR. Let us customize the secret. Here the name of the secret is access, dockerconfigjson is the name of the data item, and then we give the path to the config.json file.
Set the type to kubernetes.io/dockerconfigjson. As you can see, the secret is successfully created. If we run kubectl get secrets, we can see our secret here.
Let us create a YAML file to use the image from OCIR and then deploy the pod. Here I have the file already created, so let me just change the name of the secret to the one I created, and then save the file. In order to create the pod, let's run the kubectl create command using the YAML file we have. The pod is successfully created. To get a list of all the pods, run the kubectl get pods command.
Before we expose the pod, let's first label it. This is now done. Expose the pod as a load balancer. This will generate an external IP address for the service.
The error is because the service name already exists. Let me delete the existing one and try again. The service is now created. Run the kubectl get svc command and note the service. The external IP address is not yet available. Let's wait for a couple of minutes. Now if we try the command again, we can get the external IP. Let me copy this. Run the command curl -kL with the IP address. You can see that this is working.
So to summarize, in this demo, we learned to pull an image from OCIR and then deploy it to the cluster.
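The final steps of the demo, labeling the pod and exposing it through a load balancer, might look like this. The pod and service names are illustrative placeholders, not the ones used on screen:

```shell
# Label the pod, then expose it as a LoadBalancer service
kubectl label pod httpd-pod app=httpd --overwrite
kubectl expose pod httpd-pod --type=LoadBalancer \
  --port=80 --target-port=80 --name=httpd-svc

# Watch for the external IP to be provisioned (this may take a few minutes)
kubectl get svc httpd-svc

# Once EXTERNAL-IP is populated, verify the service
curl -kL http://<external-ip>/
```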
LESSON 4) Deploy and Invoke Code Using Serverless Functions

Lesson 4 Video 1)

Serverless computing refers to an execution model in which the cloud provider dynamically allocates resources whenever a piece of code is executed and only charges for the amount of resources used to run that code. Welcome to this module on deploying and invoking code using serverless functions. In this topic, I will take you through what serverless computing is in the context of cloud computing.
Serverless computing is growing in popularity among developers because it allows them to focus on what matters most, which is writing code, without worrying about the underlying infrastructure. Here you can see how computing in the cloud has moved from bulky hardware infrastructure dependency to VM workloads, to further lightweight processes such as containers, and finally tapering into functions.
Serverless is a category of cloud services that raises the abstraction level so that developers don't need to think about servers, VMs, or other IaaS components. Let's look at the features of serverless architecture. In serverless, your code can auto-scale as needed. It is elastic in its compute utilization, which means you use compute resources on demand.
One of the many valuable features of a serverless architecture is that you pay only for execution time. Serverless computing is gaining popularity because it allows developers to focus on the functionality of their code and not worry about the target deployment environment.
You could say it takes a lot of the ops out of DevOps. In serverless computing, abstractions are used to hide the implementation details of the lower levels. Typically, with each new abstraction, less domain-specific knowledge is required to make use of the underlying system.
That's it for this topic.
Serverless Architecture

Lesson 4 Video 2) Introducing Oracle Functions

Welcome. Our next topic is introducing Oracle Functions. Oracle Functions is a fully managed, highly scalable, on-demand service platform. In this topic, I will take you through Oracle Functions in the context of serverless computing.
Let's get an overview of Oracle Functions. Oracle Functions is built on enterprise-grade Oracle Cloud Infrastructure and is powered by the Fn Project open-source engine. It is a truly serverless functions-as-a-service platform. Although Oracle Functions is intended to run in cloud environments, it is not tied to a specific cloud vendor. The platform itself can be hosted on any cloud environment that supports Docker.
Because the only requirement to run Functions is a Docker engine, you can also run the platform on your local development system, provided you have Docker installed on that system. You can test your functions on your local system, and if they work there, they will run on any system. This avoids lock-in to a specific cloud vendor. As long as your function is not using any cloud-specific APIs, you can move it from one cloud to another, or you can still run it in an on-premises environment if you want to. That's it for this topic.

Lesson 4 Video 3) The Key Features of Oracle Functions


Welcome. Our next topic is on the key features of Oracle Functions. In this topic, I will take you through the main features of Oracle Functions that make it a much sought-after serverless offering. Oracle Functions is built on the Apache 2.0-licensed open source Fn Project.
It runs container-native applications and integrates with the other Oracle Cloud services. It is a multi-tenant, highly scalable, Functions-as-a-Service platform. It ensures your app is highly available, scalable, secure, and monitored. And it eliminates compute infrastructure and Fn cluster management, making it a truly serverless platform.
The serverless and elastic architecture of Oracle Functions means there's no infrastructure or software administration for you to perform. You don't provision or manage compute instances or operating systems; software patches and upgrades are applied automatically. With Oracle Functions, you can write code in Java, Python, Node.js, Go, and Ruby, and leverage various SDKs. That's it for this topic.

Lesson 4 Video 4) Applications and Functions


Welcome. Our next topic is on applications and functions. Before implementing Oracle Functions, it is important that you understand the core concepts of this platform. In this topic, I will take you through the concepts of applications and functions.
Applications and functions are the building blocks of serverless architecture. In Oracle Functions, an application is a logical grouping of functions. Functions are small but powerful blocks of code that generally do one simple thing. They are stored as Docker images in a specified Docker registry. They're invoked in response to a CLI command or a signed HTTP request. And they're grouped under applications.
When you define an application in Oracle Functions, you specify the subnets in which to run the functions in the application. You can use a common context to store configuration variables that are available to all functions in the application. And when functions from different applications are invoked simultaneously, Oracle Functions ensures these executions are isolated from each other.
A definition of the function is stored as metadata on the Oracle Functions server. This includes the maximum length of time the function is allowed to execute for and the maximum amount of memory the function is allowed to consume. Oracle Functions shows functions, and the applications into which they are grouped, in the console. That's it for this topic.
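The per-function metadata described above can be pictured with a func.yaml sketch. This is an illustrative example, not a file from the course; the field names follow the Fn Project convention, and the values are assumptions:

```yaml
# Illustrative func.yaml; values are assumptions, not from the course.
schema_version: 20180708
name: demoappnew
version: 0.0.1
runtime: node
entrypoint: node func.js
memory: 256    # maximum memory (in MB) the function may consume
timeout: 30    # maximum time (in seconds) the function may execute
```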

Lesson 4 Video 5) Invocations and Triggers


Welcome. Our next topic is on invocations and triggers. Now that we know about applications and functions, let's move on and learn the core concepts of invocations and triggers.
In Oracle Functions, a function's code is executed when the function is invoked. You can invoke a function that you've deployed to Oracle Functions from the Fn Project CLI, the Oracle Cloud Infrastructure SDKs, a signed HTTP request to the function's invoke endpoint, and other Oracle Cloud services.
When you invoke a function for the first time, Oracle Functions pulls the function's Docker image from the specified Docker registry, runs it as a Docker container, and executes the function. If there are subsequent requests to the same function, Oracle Functions directs those requests to the same container. After a period of being idle, the Docker container is removed. Oracle Functions shows information about function invocations in metric charts.
A trigger is the result of an action elsewhere in the system that sends a request to invoke a function in Oracle Functions. An event in the Events service might cause a trigger to send a request to Oracle Functions to invoke a function. Alternatively, a trigger might send regular requests to invoke a function on a defined time-based schedule. A function might not be associated with any triggers, or it can be associated with one or multiple triggers. That's it for this topic.

Lesson 4 Video 6) Deploying a Function


Welcome. Our next topic is on deploying a function. After you have written the code for a function, it's ready for deployment. So let's look at the high-level steps you have to follow to deploy your function.
You can create applications for deploying functions in Oracle Functions. These applications can be created using the console, the Fn Project CLI, or the API.
There are three steps to deploying a function. In the first step, you create an application by defining its metadata. After the application is defined, you define the function by using Fn Project CLI commands or the Oracle Cloud Console, and configure memory, time, and network resources for the application; you can consider using the default values. Then, so that the image can be pushed to the Docker registry that's configured for Oracle Functions, set the registry in your Fn CLI context for Oracle Functions. Finally, deploy the function to an application in Oracle Functions. That's it for this topic.
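As a rough sketch, the three steps map to Fn CLI commands like the following. The names are illustrative, and the commands are echoed rather than executed so the sketch runs without an OCI tenancy:

```shell
APP="fnappdemo"      # hypothetical application name
FUNC="demoappnew"    # hypothetical function name

# Step 1: create the application (also possible via the console or API)
echo "fn create app $APP --annotation oracle.com/oci/subnetIds='[\"<subnet-ocid>\"]'"
# Step 2: point the Fn context at the Docker registry configured for Oracle Functions
echo "fn update context registry <region-key>.ocir.io/<tenancy-namespace>/<repo-name>"
# Step 3: deploy the function into the application
echo "fn deploy --app $APP"
```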

Lesson 4 Video 7) Invoking a Function


Welcome. Our final topic in this module is invoking a function. You can invoke a function that you've deployed to Oracle Functions in different ways. In this topic, I will take you through the steps of invoking a function.
First, you need some policies set for accessing functions, such as one allowing your group to manage the functions family in your compartment and one allowing it to read metrics in that compartment. Then you need a network policy allowing the FaaS service to use the virtual network family in your compartment. And, finally, you need a registry policy allowing the FaaS service to read repositories in the tenancy.
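Written out, the policy statements described above look like the following; the angle-bracket placeholders stand in for your own group and compartment names:

```
allow group <group-name> to manage functions-family in compartment <compartment-name>
allow group <group-name> to read metrics in compartment <compartment-name>
allow service FaaS to use virtual-network-family in compartment <compartment-name>
allow service FaaS to read repos in tenancy
```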
Let's now look at the steps to invoke a function. You can invoke a function that you've deployed to Oracle Functions in different ways: by using the Fn Project CLI, the OCI CLI, or the OCI SDKs, and by making a signed HTTP request to the function's invoke endpoint. Every function has an invoke endpoint. Switch to your environment by opening a Cloud Shell and authenticate as appropriate.
You can invoke a function by using the fn invoke command or by using the REST API endpoint associated with the function. You can also invoke a function by using the oci fn command, and you can call the invoke endpoint as a REST endpoint with the oci-curl script as well. That's our last topic in this module.
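The invocation options from this topic can be sketched as follows. The application and function names are illustrative, and the commands are echoed rather than executed so the sketch runs without a tenancy:

```shell
APP="fnappdemo"     # hypothetical names, matching the demo later in this module
FUNC="demoappnew"

# Fn Project CLI invocation
echo "fn invoke $APP $FUNC"
# OCI CLI invocation (the function OCID placeholder is left unfilled)
echo "oci fn function invoke --function-id <function-ocid> --file - --body ''"
```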

Lesson 4 Video 8) Demo 1 – Creating, Deploying, and Invoking a Function


Welcome to this demonstration on creating, deploying, and invoking a function. In this demo, we will create a VCN, which is a prerequisite, and an application to hold your functions. We will configure the Fn CLI on Cloud Shell by setting up the context. And we will initialize, deploy, and invoke a function. This demo will help you acquire the necessary skills to start off on Oracle Functions by using the Fn CLI to deploy and invoke functions. So let's get started.
I'm logging into the OCI console of my tenancy by using my credentials. A virtual cloud network is required to set up an application in Oracle Functions, so let's set one up. In the OCI console, go to Networking and click Virtual Cloud Networks. Click Start VCN Wizard. Select VCN with Internet Connectivity, and then click Start VCN Wizard.
Specify a name for the VCN. Specify your compartment, and then click Next. Review the VCN parameters and values, and when you're done, click Create. Wait for the resource creation to be completed. Click View Virtual Cloud Network to go to your VCN details page. You can view the subnets associated with the VCN here.
Upon successful creation of the VCN, we will go ahead and create an application for functions. In the OCI console, navigate to Developer Services and Functions. An application is a logical collection of functions. A little later in the demo, we will be deploying Oracle Functions within the context of this application.
Click Create Application, with the name fnappdemo. Choose the VCN and the corresponding subnets created in the previous steps, and then click Create. The application is created, but with no functions yet.
Let's move on and create an Fn CLI context to connect to OCI. To configure the Fn CLI to connect to your OCI tenancy, you have to create a new context. The context specifies the Oracle Functions endpoints, your compartment OCID, and the Docker registry to push and pull images.
Switch to your Cloud Shell to create a new Fn CLI context. The command is fn create context, followed by your context name, with the provider oracle-cs.
You will notice that the context is successfully created. Use the new context by entering the fn use context command with your context name. Configure the new context with the OCID of the compartment that will own the deployed functions: copy the compartment OCID that you fetched earlier and use the fn update context command with the oracle.compartment-id parameter.
Configure the new context with the API URL endpoint to use when calling the API, using the fn update context api-url command with your API endpoint. The API endpoint is in the format functions.<region-identifier>.oci.oraclecloud.com. The region identifier in this case is us-ashburn-1.
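The endpoint format from this step can be sketched in shell. us-ashburn-1 is the region used in the demo; the https scheme is an assumption on my part:

```shell
# Build the Functions API endpoint from a region identifier.
REGION="us-ashburn-1"
API_URL="https://functions.${REGION}.oci.oraclecloud.com"
echo "$API_URL"
```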
Update the context with the location of the OCI Registry you want to use. The repository referenced here was created earlier. Use the fn update context registry iad.ocir.io/tenancyName/repoName command.
List the available contexts using the fn list contexts command, and verify the current context that is in use. The function has to be initialized with the Node runtime to generate a simple Node.js application. The name of the function is demoappnew. There will be three files created: func.js, func.yaml, and package.json for the dependencies.
Generate a demoappnew boilerplate Node function using the fn init --runtime node demoappnew command. Prior to deployment, verify the existence of function applications in your compartment by using the command fn list apps. This application was created via the console in the previous steps.
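The generated func.js wraps a handler with the Fn FDK. Stripped of the FDK wiring, the handler's shape is roughly the following; this is a runnable sketch, not the exact generated file:

```javascript
// Minimal stand-in for the boilerplate handler in func.js.
// The real file wraps a function like this with fdk.handle(...);
// here we call it directly so the sketch runs with plain Node.js.
function handle(input) {
  const name = input && input.name ? input.name : "World";
  return { message: `Hello ${name}` };
}

// Simulate an invocation with a JSON body.
console.log(JSON.stringify(handle({ name: "OCI" })));
```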
Now, with all the files in place, deploy your function to the application fnappdemo using the fn deploy command. The fn deploy command will deploy the current Node application to the app fnappdemo. The -v flag is verbose mode, to provide visibility of the execution. Wait for the deployment to complete.
After successful execution, fn list will display the list of functions that are part of the application. Now that the deployment is successful, invoke the function using the fn invoke command with the app name and function name. You will see the JSON output. The function is now successfully deployed and invoked.
Minimize your Cloud Shell window and switch to the OCI console. Click your Function application. You should be able to see the deployed function, demoappnew.
So to summarize, in this demonstration, you learned to create an application for your function and successfully set up the Fn CLI on Cloud Shell to create, deploy, and invoke your function.

Lesson 5)
Having completed the Oracle Cloud Native Development Explorer learning path, you are familiar with the core principles of the cloud native methodology and the success that its implementation brings. You can now continue your professional learning for a career in cloud native technologies, characterized by the use of containers, microservices, and serverless functions.
An OCI developer prepares for the future of application development by learning cloud native technologies and is responsible for the ongoing development and deployment of cloud native applications. If you aspire to be an OCI developer, Oracle University's Application Development Cloud Learning subscription has just the learning path for you, titled Build Containerized Applications for Cloud.
Now that you're ready to start your journey with cloud native, you can head straight to the Build Containerized Applications for Cloud learning path. This learning path is organized to teach you the skills that you will need to adopt and efficiently leverage Oracle Cloud Native services that run on OCI. This learning path contains courses that are taught by a cloud native expert.
You can explore the integrated developer experience provided on OCI through microservices, OCIR, OKE, and Functions. In addition, you can schedule hands-on labs, which will allow you to practice configuration tasks in a live Oracle Cloud environment. The skill checks will help you test your understanding of the concepts covered.
Click here to get started on becoming an OCI developer. We're excited to have you join us in this training and hope that you will enjoy your Oracle University learning experience. Thank you for watching.
