CCS Module 5

OpenStack is an open-source cloud computing platform that provides Infrastructure-as-a-Service (IaaS) through a modular architecture, allowing users to manage computing, storage, and networking resources. It supports various deployment modes including public, private, hybrid, community, multi-cloud, and edge computing, while also facilitating serverless computing through event-driven execution and auto-scaling. Mobile Cloud Computing (MCC) enhances mobile applications by offloading processing to the cloud, improving performance and storage capacity, but faces challenges such as low bandwidth and security concerns.


OpenStack Cloud Platform & Serverless Computing

It is a free, open-standard cloud computing platform that first came into existence on July 21, 2010.
It was a joint project of Rackspace Hosting and NASA, intended to make cloud computing more
ubiquitous. It is deployed as Infrastructure-as-a-Service (IaaS) in both public and private clouds,
where virtual resources are made available to users.

The software platform consists of interrelated components that control multi-vendor hardware pools
of processing, storage, and networking resources throughout a data center. In OpenStack, the tools
used to build this platform are referred to as "projects".

These projects handle a large number of services, including compute, networking, and storage.
Unlike virtualization, in which resources such as RAM and CPU are abstracted from the hardware
using hypervisors, OpenStack uses a set of APIs to abstract those resources so that users and
administrators can interact directly with the cloud services.
Components of OpenStack
Networking Service
Telemetry Service
Block Storage Service
Image Storage Service
Compute Service
Object Storage Service

OpenStack Architecture


Apart from the various projects which constitute the OpenStack platform, there are nine major
services, namely Nova, Neutron, Swift, Cinder, Glance, Keystone, Horizon, Ceilometer, and Heat.
Here is a basic definition of each of these components.

Nova (compute service): It manages compute resources, such as creating, deleting, and scheduling
instances. It can be seen as a program dedicated to automating the resources responsible for the
virtualization of services and for high-performance computing.

Neutron (networking service): It is responsible for connecting all the networks across OpenStack. It is
an API-driven service that manages all networks and IP addresses.
Swift (object storage): It is an object storage service with high fault tolerance, used to store and
retrieve unstructured data objects through a RESTful API. Being a distributed platform, it is also
used to provide redundant storage within servers that are clustered together. It can successfully
manage petabytes of data.

Cinder (block storage): It is responsible for providing persistent block storage that is made accessible
through a self-service API. Consequently, it allows users to define and manage the amount of cloud
storage they require.
Keystone (identity service provider): It is responsible for all types of authentication and
authorization in the OpenStack services. It is a directory-based service that uses a central repository
to map the correct services to the correct user.

Glance (image service provider): It is responsible for registering, storing, and retrieving virtual disk
images across the network. These images can be stored in a wide range of back-end systems.
Horizon (dashboard): It is responsible for providing a web-based interface for OpenStack services. It
is used to manage, provision, and monitor cloud resources.

Ceilometer (telemetry): It is responsible for metering and billing of the services used. It is also used
to generate alarms when a certain threshold is exceeded.
Heat (orchestration): It is used for on-demand service provisioning with auto-scaling of cloud
resources. It works in coordination with Ceilometer.

Features of OpenStack
Modular architecture: OpenStack is designed with a modular architecture that enables users to
deploy only the components they need. This makes it easier to customize and scale the platform to
meet specific business requirements.
Multi-tenancy support: OpenStack provides multi-tenancy support, which enables multiple users to
access the same cloud infrastructure while maintaining security and isolation between them. This is
particularly important for cloud service providers who need to offer services to multiple customers.

Open-source software: OpenStack is an open-source software platform that is free to use and
modify. This enables users to customize the platform to meet their specific requirements, without
the need for expensive proprietary software licenses.
Distributed architecture: OpenStack is designed with a distributed architecture that enables users to
scale their cloud infrastructure horizontally across multiple physical servers. This makes it easier to
handle large workloads and improve system performance.

API-driven: OpenStack is API-driven, which means that all components can be accessed and
controlled through a set of APIs. This makes it easier to automate and integrate with other tools and
services.
Comprehensive dashboard: OpenStack provides a comprehensive dashboard that enables users to
manage their cloud infrastructure and resources through a user-friendly web interface. This makes it
easier to monitor and manage cloud resources without the need for specialized technical skills.

Resource pooling: OpenStack enables users to pool computing, storage, and networking resources,
which can be dynamically allocated and de-allocated based on demand. This enables users to
optimize resource utilization and reduce waste.
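The pooling behavior described above can be sketched in a few lines of Python. This is a toy illustration of dynamic allocation and de-allocation, not OpenStack's actual scheduler: a fixed pool of vCPUs is handed out to tenants on demand and reclaimed when released.

```python
# Toy sketch of dynamic resource pooling (illustrative only; not
# OpenStack's real placement logic). A shared pool of vCPUs is
# allocated to tenants on demand and returned when released.

class ResourcePool:
    def __init__(self, total_vcpus):
        self.total = total_vcpus
        self.allocations = {}          # tenant -> vCPUs currently held

    def available(self):
        return self.total - sum(self.allocations.values())

    def allocate(self, tenant, vcpus):
        if vcpus > self.available():
            raise RuntimeError("pool exhausted")
        self.allocations[tenant] = self.allocations.get(tenant, 0) + vcpus

    def release(self, tenant):
        # De-allocate everything the tenant holds; idle capacity
        # returns to the shared pool.
        return self.allocations.pop(tenant, 0)

pool = ResourcePool(total_vcpus=16)
pool.allocate("tenant-a", 4)
pool.allocate("tenant-b", 8)
print(pool.available())   # 4
pool.release("tenant-a")
print(pool.available())   # 8
```

The point of the sketch is that capacity freed by one tenant is immediately reusable by another, which is what lets pooled infrastructure reduce waste.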

Advantages of using OpenStack


It enables rapid provisioning of resources, which makes orchestration and scaling resources up and
down easy.
Deploying applications with OpenStack does not consume a large amount of time.
Since resources are scalable, they are used more wisely and efficiently.
The regulatory compliance requirements associated with its usage are manageable.

Disadvantages of using OpenStack


OpenStack is not very robust where orchestration is concerned.
Even today, the APIs provided and supported by OpenStack are not compatible with many hybrid
cloud providers, so integrating solutions becomes difficult.
Like all cloud platforms, OpenStack services come with the risk of security breaches.

Modes of Operation of OpenStack


OpenStack operates in different modes depending on the deployment strategy, use case, and
infrastructure requirements. Here are the primary modes of operation:

Public Cloud Mode

OpenStack is used as the backend for a public cloud service.
Multiple tenants (users/organizations) share the cloud infrastructure.
Resources are available on a pay-as-you-go basis.
Example: City Cloud, OVHcloud, and other OpenStack-based public cloud providers.

Private Cloud Mode


OpenStack is deployed within an organization’s own data center.
Used exclusively by a single organization for internal operations.
Provides security, control, and compliance with corporate policies.
Example: Enterprises and government organizations running OpenStack for internal use.

Hybrid Cloud Mode


Combines both private and public clouds to provide flexibility.
Organizations can move workloads between OpenStack private clouds and public clouds.
Commonly used for disaster recovery, burst capacity, and regulatory compliance.
Example: A company using OpenStack in-house while leveraging AWS or Azure for overflow capacity.

Community Cloud Mode


Shared infrastructure among organizations with common goals or regulatory requirements.
Typically used by industries like healthcare, research institutions, or government agencies.
Provides cost efficiency while maintaining security and compliance.
Example: A consortium of universities using OpenStack for collaborative research.

Multi-Cloud Mode
OpenStack is part of a larger cloud strategy that includes multiple cloud providers.
Resources are managed across different cloud environments (OpenStack, AWS, Google Cloud, etc.).
Helps avoid vendor lock-in and improves redundancy.
Example: A global enterprise distributing workloads across OpenStack and other cloud providers.

Edge Computing Mode


OpenStack is deployed at edge locations (close to end users/devices) instead of centralized data
centers.
Reduces latency and improves performance for IoT, AI, and 5G applications.
Example: OpenStack running on telco edge nodes for 5G networks.

Mobile Cloud Computing


MCC stands for Mobile Cloud Computing, which is defined as a combination of mobile computing,
cloud computing, and wireless networks that come together to provide rich computational
resources to mobile users, network operators, and cloud computing providers.

Mobile Cloud Computing is meant to make it possible for rich mobile applications to be executed on
a wide range of mobile devices. In this technology, data processing and data storage happen
outside of the mobile device.

Architecture Overview
MCC architecture consists of three main layers.

Mobile Device Layer (Front-End)
Includes smartphones, tablets, IoT devices, and wearable technology.
Devices request services from the cloud, process minimal data, and display results.
Communicates with the cloud via wireless networks (Wi-Fi, 4G/5G, LTE).
Handles user interfaces and applications (e.g., mobile apps, web browsers).

Network Layer (Communication Layer)
Facilitates communication between mobile devices and cloud servers.
Uses wireless technologies:
Cellular Networks: 4G, 5G, LTE
Wi-Fi: public and private networks
Satellite Networks: for remote access
Manages data transmission, bandwidth allocation, and network security.
May include intermediate nodes (edge computing) to improve performance.

Cloud Computing Layer (Back-End)
Comprises cloud data centers, servers, and virtual machines.
Processes and stores data offloaded by mobile devices.
Provides services like:
Infrastructure as a Service (IaaS) – virtual machines, storage
Platform as a Service (PaaS) – application development environments
Software as a Service (SaaS) – cloud-hosted mobile applications
Uses virtualization, load balancing, and distributed computing for efficiency.
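The offloading decision at the core of MCC can be illustrated with a small Python sketch. All speeds, sizes, and bandwidths below are hypothetical numbers chosen for illustration: a task is worth offloading when the transfer time plus the cloud compute time beats the local compute time.

```python
# Illustrative sketch of the MCC offloading decision (all numbers
# are made up): run a task on the device, or ship its input to the
# cloud, where CPUs are faster but the network adds transfer delay?

def local_time(cycles, device_speed_hz):
    # Seconds to run the task on the mobile device itself.
    return cycles / device_speed_hz

def offload_time(cycles, cloud_speed_hz, data_bits, bandwidth_bps):
    # Seconds to upload the input plus run the task in the cloud.
    return data_bits / bandwidth_bps + cycles / cloud_speed_hz

def should_offload(cycles, data_bits, device_speed_hz=1e9,
                   cloud_speed_hz=10e9, bandwidth_bps=5e6):
    return offload_time(cycles, cloud_speed_hz, data_bits, bandwidth_bps) \
           < local_time(cycles, device_speed_hz)

# Heavy computation, small input: offloading wins.
print(should_offload(cycles=5e9, data_bits=1e6))    # True
# Light computation, large input: low bandwidth makes local cheaper.
print(should_offload(cycles=1e8, data_bits=1e9))    # False
```

The second case shows why low bandwidth (discussed under Challenges below) is so central to MCC: a slow link can erase the benefit of faster cloud CPUs.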
Characteristics Of Mobile Cloud Computing Application
Cloud infrastructure: Cloud infrastructure is a specific form of information architecture that is used
to store data.
Data cache: Data can be cached locally on the device.
User Accommodation: Mobile cloud computing offers scope for accommodating different user
requirements in cloud app development.
Easy Access: It is easily accessed from desktop or mobile devices alike.
Cloud apps facilitate access to a whole new range of services.

Benefits of Mobile Cloud Computing


Increased Storage Capacity:
Mobile devices often have limited storage space. Mobile cloud computing allows users to store data
and applications in the cloud, expanding storage capabilities and freeing up device space.
Improved Application Performance:
By offloading resource-intensive tasks to the cloud, mobile devices can run applications more
efficiently, leading to faster response times and improved user experience.
Greater Flexibility and Accessibility:
Mobile cloud computing enables users to access their data and applications from any device with an
internet connection, regardless of location.
Cost Savings:
Mobile cloud computing can reduce the need for costly hardware upgrades and maintenance, and
users only pay for the resources they consume.
Faster Development and Deployment:
Cloud-based development tools and infrastructure enable developers to build and deploy mobile
applications more quickly and efficiently.
Improved Scalability:
Cloud infrastructure allows mobile applications to scale resources based on demand, ensuring
optimal performance and adaptability.
Enhanced Security:
Cloud providers often implement robust security measures to protect user data and applications.
Better Reliability:
Cloud infrastructure can provide higher uptime and availability than traditional applications.
Ease of Integration:
Mobile cloud applications can be easily integrated with other systems and platforms.
Data Synchronization and Collaboration:
Mobile cloud computing enables real-time data synchronization and collaboration among users and
devices.

Challenges of Mobile Cloud Computing


Low bandwidth: This is one of the big issues in mobile cloud computing. Mobile clouds use radio
waves, which offer limited bandwidth compared to wired networks, and the available spectrum is
shared among many mobile devices. As a result, access speeds can be around three times slower
than on a wired network.

Security and Privacy: It is more difficult to identify and manage threats on mobile devices than on
desktop devices, because wireless networks are more prone to information loss and interception.

Service Availability: Users often face complaints such as network breakdowns, congestion, and
out-of-coverage areas. Sometimes customers get a low-frequency signal, which affects the access
speed and storage facility.

Alteration of Networks: Mobile cloud computing is used on platforms driven by different operating
systems, such as Apple iOS, Android, and Windows Phone, so it has to be compatible with different
platforms. The performance of networks across different mobile platforms is managed by the IRNA
(Intelligent Radio Network Access) technique.

Serverless Computing

Serverless computing is a method of providing backend services on an as-used basis. A serverless
provider allows users to write and deploy code without the hassle of worrying about the underlying
infrastructure.

A company that gets backend services from a serverless vendor is charged based on its computation
and does not have to reserve and pay for a fixed amount of bandwidth or number of servers, as the
service auto-scales. Note that despite the name "serverless", physical servers are still used, but
developers do not need to be aware of them.

Serverless computing is an application development and execution model that enables developers to
build and run application code without provisioning or managing servers or back-end infrastructure.

Serverless does not mean "no servers." The name notwithstanding, servers in serverless computing
are managed by a cloud service provider (CSP). Serverless describes the developer's experience with
those servers—they are invisible to the developer, who doesn't see them, manage them or interact
with them in any way.

Developers can focus on writing the best front-end application code and business logic with
serverless computing. All they need to do is write their application code and deploy it to containers
managed by a CSP.

The cloud provider handles the rest—provisioning the cloud infrastructure required to run the code
and scaling the infrastructure up and down on demand as needed—and is also responsible for all
routine infrastructure management and maintenance, such as operating system updates and
patches, security management, capacity planning, system monitoring and more.

Moreover, developers never pay for idle capacity with serverless. The cloud provider spins up and
provisions the required computing resources on demand when the code executes and spins them
back down again—called ''scaling to zero''—when execution stops. The billing starts when execution
starts and ends when execution stops; typically, pricing is based on execution time and resources
required.
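The pay-per-execution billing described above can be sketched as follows. The price and the 100-millisecond billing increment below are illustrative, not any provider's actual rates:

```python
import math

# Hedged sketch of pay-per-execution billing: charge only for the
# time each invocation runs, rounded up to a billing increment.
# The rate and increment here are made up for illustration.

def invocation_cost(duration_ms, price_per_100ms=0.0001):
    increments = math.ceil(duration_ms / 100)   # round up to 100 ms
    return increments * price_per_100ms

# Three short invocations; idle time between them costs nothing
# because the platform has scaled to zero.
total = sum(invocation_cost(ms) for ms in [42, 250, 730])
print(round(total, 4))   # 0.0012
```

Contrast this with a reserved server, which would accrue cost continuously whether or not any of the three invocations ever ran.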

Along with infrastructure as a service (IaaS), platform as a service (PaaS), function as a service (FaaS)
and software as a service (SaaS), serverless has become a leading cloud service offering.

Working of Serverless Computing


Serverless computing is a cloud computing execution model where developers build and run
applications without managing servers; the cloud provider handles infrastructure provisioning,
scaling, and maintenance, allowing developers to focus on code and pay only for the resources used.

Serverless computing, also known as Function as a Service (FaaS), is a cloud-native development
model where developers write and deploy code, and a cloud provider manages the underlying
infrastructure.

Developers write code that is triggered by events (e.g., HTTP requests, data uploads). The cloud
provider dynamically allocates resources to execute the code, and then deallocates them when the
code is no longer running.

The advantages of serverless computing include:
1. Cost-effectiveness, as users are only charged for the time their code runs.
2. Easy deployment of apps, in hours or days instead of weeks or months.
3. Auto-scaling, as providers spin resources up or down based on demand.
4. Increased productivity, as developers can spend their time developing apps instead of dealing
with servers.

Characteristics of Serverless Computing


Event-Driven Execution:
Serverless functions are triggered by events (e.g., HTTP requests, database changes, message queue
events) rather than running continuously.
Auto-Scaling:
The underlying infrastructure automatically scales up or down based on demand, ensuring optimal
resource utilization.
Pay-Per-Use Billing:
You only pay for the compute time consumed by your functions, making it a cost-effective solution.

Server Management Abstraction:
Developers are relieved of the responsibility of managing servers, including provisioning, patching,
and scaling.
Concurrency Management:
Serverless platforms handle the concurrency of function executions, ensuring efficient resource
allocation and execution.
API Gateway Integration:
Serverless functions are often triggered by external event sources, and an API gateway is used to
manage and expose these functions as APIs.
Stateless Functions:
Serverless functions are designed to be stateless, meaning they don't store any data between
invocations, which simplifies management and scaling.
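Event-driven execution and statelessness can be illustrated with a toy dispatcher in Python. This is a sketch of the model, not a real FaaS platform: handler functions are registered per event type and run only when a matching event arrives, keeping no state between calls.

```python
# Toy event-driven dispatcher (a sketch of the FaaS model, not a
# real platform): stateless handlers are registered per event type
# and execute only when a matching event arrives.

handlers = {}

def on(event_type):
    """Register a handler for an event type."""
    def register(fn):
        handlers[event_type] = fn
        return fn
    return register

def dispatch(event):
    fn = handlers[event["type"]]
    return fn(event)        # each call is independent: no shared state

@on("http.request")
def handle_request(event):
    return {"status": 200, "body": f"hello {event['user']}"}

@on("storage.upload")
def handle_upload(event):
    return {"thumbnail": event["key"] + ".thumb.jpg"}

print(dispatch({"type": "http.request", "user": "alice"}))
print(dispatch({"type": "storage.upload", "key": "cat.png"}))
```

Because each handler receives everything it needs in the event and returns a result without remembering anything, the platform is free to run any number of copies in parallel, which is what makes auto-scaling straightforward.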
Advantages of Serverless Computing
No server management is necessary
Although 'serverless' computing does actually take place on servers, developers never have to deal
with the servers. They are managed by the vendor. This can reduce the investment necessary in
DevOps, which lowers expenses, and it also frees up developers to create and expand their
applications without being constrained by server capacity.

Developers are only charged for the server space they use, reducing cost
As in a 'pay-as-you-go' phone plan, developers are only charged for what they use. Code only runs
when backend functions are needed by the serverless application, and the code automatically scales
up as needed. Provisioning is dynamic, precise, and real-time. Some services are so exact that they
break their charges down into 100-millisecond increments. In contrast, in a traditional 'server-full'
architecture, developers have to project in advance how much server capacity they will need and
then purchase that capacity, whether they end up using it or not.
Serverless architectures are inherently scalable
Applications built with a serverless infrastructure will scale automatically as the user base grows or
usage increases. If a function needs to be run in multiple instances, the vendor's servers will start up,
run, and end them as they are needed, often using containers. (The function will start up more
quickly if it has been run recently – see 'Performance may be affected' below.)

As a result, a serverless application will be able to handle an unusually high number of requests just
as well as it can process a single request from a single user. A traditionally structured application
with a fixed amount of server space can be overwhelmed by a sudden increase in usage.

Quick deployments and updates are possible


Using a serverless infrastructure, there is no need to upload code to servers or do any backend
configuration in order to release a working version of an application. Developers can very quickly
upload bits of code and release a new product. They can upload code all at once or one function at a
time, since the application is not a single monolithic stack but rather a collection of functions
provisioned by the vendor.

Code can run closer to the end user, decreasing latency


Because the application is not hosted on an origin server, its code can be run from anywhere. It is
therefore possible, depending on the vendor used, to run application functions on servers that are
close to the end user. This reduces latency because requests from the user no longer have to travel
all the way to an origin server. Cloudflare Workers enables this kind of serverless latency reduction.

What are the disadvantages of serverless computing?

Testing and debugging become more challenging


It is difficult to replicate the serverless environment in order to see how code will actually perform
once deployed. Debugging is more complicated because developers do not have visibility into
backend processes, and because the application is broken up into separate, smaller functions.
Serverless computing introduces new security concerns
When vendors run the entire backend, it may not be possible to fully vet their security, which can
especially be a problem for applications that handle personal or sensitive data.

Because companies are not assigned their own discrete physical servers, serverless providers will
often be running code from several of their customers on a single server at any given time. This issue
of sharing machinery with other parties is known as 'multitenancy' – think of several companies
trying to lease and work in a single office at the same time. Multitenancy can affect application
performance and, if the multi-tenant servers are not configured properly, could result in data
exposure.

Multitenancy has little to no impact for networks that sandbox functions correctly and have
powerful enough infrastructure. For instance, Cloudflare runs a 15-Tbps network with enough excess
capacity to mitigate service degradation, and all serverless functions hosted by Cloudflare run in
their own sandbox (via the Chrome V8 engine).

Serverless architectures are not built for long-running processes


This limits the kinds of applications that can cost-effectively run in a serverless architecture. Because
serverless providers charge for the amount of time code is running, it may cost more to run an
application with long-running processes in a serverless infrastructure compared to a traditional one.

Performance may be affected


Because it's not constantly running, serverless code may need to 'boot up' when it is used. This
startup time may degrade performance. However, if a piece of code is used regularly, the serverless
provider will keep it ready to be activated – a request for this ready-to-go code is called a 'warm
start.' A request for code that hasn't been used in a while is called a 'cold start.'
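Cold versus warm starts can be simulated with a toy Python model (the boot and run times below are made up): the first invocation of a function pays a one-time boot cost, while later invocations hit the still-warm cached instance.

```python
# Toy simulation of cold vs. warm starts (timings are invented):
# the first request must "boot" a function instance; while the
# instance stays cached, later requests skip the boot cost.

COLD_BOOT_MS = 200   # hypothetical startup penalty
RUN_MS = 5           # hypothetical execution time
warm_instances = set()

def invoke(fn_name):
    cold = fn_name not in warm_instances
    warm_instances.add(fn_name)   # instance is kept warm afterwards
    latency = (COLD_BOOT_MS if cold else 0) + RUN_MS
    return ("cold" if cold else "warm", latency)

print(invoke("resize-image"))   # ('cold', 205)
print(invoke("resize-image"))   # ('warm', 5)
```

Real platforms eventually evict idle instances, at which point the next request is cold again; the model above omits eviction for brevity.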

Vendor lock-in is a risk


Allowing a vendor to provide all backend services for an application inevitably increases reliance on
that vendor. Setting up a serverless architecture with one vendor can make it difficult to switch
vendors if necessary, especially since each vendor offers slightly different features and workflows.

Use Cases of Serverless Computing


Serverless computing excels in applications with variable workloads, supporting microservices,
handling IoT data, building APIs, and processing multimedia; it offers scalability and
cost-effectiveness by eliminating infrastructure management.

1. Building RESTful APIs:


Serverless functions can be used to create and manage APIs for web and mobile applications.
These APIs can handle various requests, process data, and interact with other services.
Serverless platforms provide built-in features for API management, such as authentication,
authorization, and scaling.

2. Data Processing Pipelines:


Serverless computing is well-suited for building data processing pipelines that run in the background
as microservices.
These pipelines can handle tasks like data migration, data analysis, and real-time data processing.
Serverless functions can be triggered by events, such as data uploads or database changes, and
process the data accordingly.

3. Event-Triggered Computing:
Serverless functions can be triggered by various events, such as user actions, data changes, or
scheduled tasks.
This allows for building event-driven applications where functions respond to specific events.
Examples include sending notifications, processing images, or updating databases based on events.

4. IoT Data Processing:


Serverless computing can be used to process data from IoT devices, such as sensors and smart
devices.
Serverless functions can handle data ingestion, processing, and storage.
This allows for building real-time applications that respond to data from IoT devices.

Serverless Event
An event is a change or occurrence detected by a system, which triggers a function to execute in a
serverless environment. These events allow applications to be event-driven, meaning they only run
when needed.
Types of Events

HTTP & API Gateway Events
Triggered by HTTP requests via an API Gateway; commonly used for web applications and
microservices.
Example: A user requests data from an API.
Providers: AWS API Gateway, Azure API Management, Google Cloud Endpoints

Storage Events
Triggered when files are uploaded, modified, or deleted in a cloud storage system.
Example: Uploading an image to S3 triggers a function to generate a thumbnail.
Providers: AWS S3, Google Cloud Storage, Azure Blob Storage
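A storage-event handler can be sketched in Python. The event layout below follows the documented S3 notification shape (Records → s3 → bucket/object); the thumbnail path is a hypothetical example of what such a handler might produce.

```python
# Sketch of a storage-event handler: extract the bucket and object
# key from an S3-style event notification. The event structure
# follows the documented S3 shape; the thumbnail path is invented.

def handle_storage_event(event):
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    # A real function would fetch the image, generate a thumbnail,
    # and write it back to storage; here we just return the path.
    return f"{bucket}/thumbnails/{key}"

event = {"Records": [{"s3": {"bucket": {"name": "photos"},
                             "object": {"key": "cat.png"}}}]}
print(handle_storage_event(event))   # photos/thumbnails/cat.png
```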

Database Events
Triggered by changes in a database (insert, update, delete).
Example: A new user is added to a database, triggering a welcome email.
Providers: AWS DynamoDB Streams, Firebase Firestore Triggers, Azure Cosmos DB Triggers

Messaging & Queue Events
Triggered by messages sent to a queue or pub/sub system.
Example: A user submits a form, and the message is added to a queue for processing.
Providers: AWS SQS, Google Pub/Sub, Azure Event Grid

Scheduled & Cron Events
Triggered by time-based schedules (e.g., cron jobs).
Example: A function runs every day at midnight to generate reports.
Providers: AWS EventBridge, Google Cloud Scheduler, Azure Timer Triggers

IoT & Streaming Events
Triggered by data from IoT devices or real-time event streams.
Example: A smart sensor sends temperature data, triggering an alert if it exceeds a threshold.
Providers: AWS IoT Core, Google IoT Core, Azure IoT Hub, Kafka
Serverless Functions
A serverless function is a small, independent unit of execution that runs in response to an event.
These functions are stateless and managed by a cloud provider, eliminating the need for developers
to manage servers.

What is a Serverless Function?


A serverless function is:
Event-driven: It executes in response to triggers such as API requests, file uploads, or database
changes.
Stateless: It does not retain data between executions (though it can store data externally).
Scalable: It automatically scales up or down based on demand.

How Serverless Functions Work

An event occurs (e.g., an API request, database update, file upload).
The function is triggered by the event.
The function executes and processes the event.
The function returns a response (or triggers another action).
The function shuts down until the next trigger.

AWS Lambda
What are Lambda Functions?
AWS Lambda is a serverless compute service, fully managed by AWS, that lets developers run their
code without worrying about servers. AWS Lambda functions allow you to run code without
provisioning or managing servers.

Once you upload your source code to AWS Lambda as a ZIP file, AWS Lambda automatically runs the
code without you having to provision servers, and it automatically scales your functions up or down
based on demand. AWS Lambda is mostly used for event-driven applications, such as processing
data in Amazon S3 buckets or responding to HTTP requests.
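A minimal Lambda handler in Python looks like the following. The `(event, context)` signature is Lambda's standard Python interface; the event shape below mimics an API Gateway proxy request and is purely illustrative.

```python
import json

# A minimal AWS Lambda handler in Python. Lambda calls the handler
# with (event, context); the event here imitates an API Gateway
# proxy request and is an illustrative example.

def lambda_handler(event, context):
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }

# In production Lambda invokes the handler for us; locally we can
# exercise it by calling it directly with a sample event.
resp = lambda_handler({"queryStringParameters": {"name": "dev"}}, None)
print(resp["statusCode"], resp["body"])
```

Deployed behind API Gateway, each HTTP request would arrive as such an event and the returned dict would be translated into the HTTP response.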

Use Cases of AWS Lambda Functions


You can trigger Lambda in many ways, some of which are mentioned below.

File Processing: AWS Lambda can be triggered by the Simple Storage Service (S3). Whenever files
are added to S3, Lambda data processing is triggered.
Web Applications: You can combine web applications with AWS Lambda, which will scale up and
down automatically based on the incoming traffic.

IoT (Internet of Things) applications: You can trigger AWS Lambda based on certain conditions while
processing data from devices connected to IoT applications. It will analyze the data received from
the IoT application.
Stream Processing: Lambda functions can be integrated with Amazon Kinesis to process real-time
streaming data for application tracking, log filtering, and so on.

Features of AWS Lambda Functions


The following are some of the features provided by AWS (Amazon Web Services):
Auto-Scaling and High Availability: AWS Lambda makes sure that your application stays highly
available to end users when there is sudden incoming traffic. High availability is achieved by scaling
the application.
Serverless Execution: There is no need to provision servers manually in AWS. AWS Lambda
provisions the underlying infrastructure based on the triggers you define; whenever a new file is
uploaded to a particular bucket, AWS Lambda automatically triggers and takes care of the
infrastructure.

Pay-Per-Use Pricing: AWS charges you only for the time the compute engine was active; you are
billed based on the time taken to execute your code.
Supports Different Programming Languages: AWS Lambda functions support several programming
languages, so you can build functions in whichever language is most convenient for you.

Integrates with Other AWS Services: AWS Lambda can be integrated with different AWS services,
such as the following:
API Gateway
DynamoDB
S3
Step Functions
SNS
SQS

Versioning and Deployment: AWS Lambda maintains different versions of your code, so you can
switch between versions without any disruption based on application performance.
Security and Identity Management: AWS Lambda leverages AWS Identity and Access
Management (IAM) to control access to the functions built using Lambda. You can define
fine-grained permissions and policies to secure your functions and ensure that only authorized
entities can invoke them.

Working of AWS Lambda Functions


Start by uploading your code to AWS Lambda. From there, set up the code to trigger from other
AWS services, HTTP endpoints, or mobile apps. AWS Lambda runs the code only when it is triggered
and uses only the computing resources needed to run it. The user pays only for the compute time
used.

AWS Lambda Characteristics

1. Serverless Computing:
Lambda is a serverless compute service, meaning you don't need to manage or provision servers.
It handles the underlying infrastructure, including scaling, capacity provisioning, and server
maintenance.

2. Event-Driven:

Lambda functions are triggered by events, such as:

Object uploads to Amazon S3.

Requests to Amazon API Gateway.

Messages from Amazon Kinesis streams.

You can define triggers for your functions, and Lambda automatically manages the execution in response
to those events.

3. Scalability:

Lambda automatically scales to handle incoming traffic, allowing your application to seamlessly scale up
or down as needed.

You don't need to worry about manually provisioning or scaling resources.

4. Fault Tolerance:

Lambda is designed for high availability and fault tolerance.

It runs your code in multiple Availability Zones to ensure that it is available even if one zone experiences
an outage.

Lambda also includes features like versioning, retries, and dead-letter queues to help improve resilience.

5. Pay-as-you-go pricing:

You only pay for the compute time your Lambda functions consume.

There are no charges when your code is not running.

6. Cost Optimization:

Lambda's pay-as-you-go model helps optimize costs by only paying for the resources used.

You can further optimize costs by adjusting memory allocation and optimizing your code.

7. Languages and Libraries:

Lambda supports multiple languages through runtimes, including Node.js, Python, Java, and C#.

You can also use the Lambda Runtime API to write functions in other languages.

You can use third-party libraries and package code as Lambda Layers.

8. Monitoring and Logging:

Lambda provides built-in monitoring capabilities, including logs, metrics, and traces.
You can use these features to track the performance of your functions and troubleshoot issues.

Lambda also integrates with AWS CloudWatch and AWS X-Ray for more advanced monitoring and
observability.

9. Concurrency:

Lambda manages concurrency to control how many instances of your function can run concurrently.

This helps prevent overloading the system and ensures optimal performance.

10. State Management:

Lambda itself is inherently stateless.

You can use external services, like Amazon DynamoDB or Amazon DocumentDB, to manage stateful
data while using Lambda.
