
Title: Serverless Computing Security in 2020
Explanation: A Cloud Security Alliance working group initiative
Date: 2020

Reviewers/Visitors:
● If you have a Google Account, please log in before commenting. Otherwise, please note your name in the comment you leave to ensure we assign you proper credit for your efforts.
● Use the Comments or Suggesting features in Google Docs to leave your feedback on the document. Suggestions will be written in and identified by your Google Account. To use the Comments feature, highlight the phrase you would like to comment on, right-click, and select "Comment" (or Ctrl+Alt+M). Or, highlight the phrase, select "Insert" from the top menu, and select "Comment." All suggestions and comments will be reviewed by the editing committee.
● Focus all comments on the content of the document rather than syntax or grammar. CSA will have copy editors address syntax and grammar once the review period is complete.

For more information about Google's Comments feature, please refer to
http://support.google.com/docs/bin/answer.py?hl=en&answer=1216772&ctx=cb&src=cb&cbid=-rx63b0fx4x0v&cbrank=1

The permanent and official location for Cloud Security Alliance Serverless Computing research is https://cloudsecurityalliance.org/research/working-groups/serverless/

© 2020 Cloud Security Alliance – All Rights Reserved. You may download, store, display on your computer, view, print, and link to the Cloud Security Alliance at
https://cloudsecurityalliance.org subject to the following: (a) the draft may be used solely for
your personal, informational, non-commercial use; (b) the draft may not be modified or altered in
any way; (c) the draft may not be redistributed; and (d) the trademark, copyright or other notices
may not be removed. You may quote portions of the draft as permitted by the Fair Use
provisions of the United States Copyright Act, provided that you attribute the portions to the
Cloud Security Alliance.

Acknowledgments
Editor:

Team Leaders/Authors:

Authors:

CSA Staff:

Reviewers:

Special Thanks:
Copy Editor

0.1 Document Project Plan:
Meeting Dates | Key points regarding the deliverable

Contributors:
Contributor | Email

Aradhna Chetal | [email protected]
Amit Bendor | [email protected]
Anthousa Karkoglou | [email protected]
Brad Woodward | [email protected]
Eric Matlock | [email protected]
John Wrobel | [email protected]
Liz Vasquez | [email protected]
Madhav Chablani | [email protected]
Marina Bregkou | [email protected]
Michael Roza | [email protected]
Nikos Tsagkarakis | [email protected]
Peter Campbell | [email protected]
Raja Rajenderan | [email protected]
Ricardo Ferreira | [email protected]
Vrettos Moulos | [email protected]
Vimal Subramanian | [email protected]
Vishwas Manral | [email protected]

0.2 Team / Contributor Composition

Contributor | Areas of Contribution
Aradhna Chetal ([email protected]) | Goals and Objectives, Security Threats, Security Controls, Conclusion
Liz Vasquez ([email protected]) | Security Threat Model (Platform and Serverless Application Threats), Security Controls
Eric Matlock ([email protected]) | What is Serverless, Security Controls and Best Practices
Michael Roza ([email protected]) | Intro, Goals & Audience, Security Controls & Best Practice, Conclusion
Marina Bregkou ([email protected]) | Why Serverless, Serverless Characteristics
Ricardo Ferreira ([email protected]) | Security Threat Model, Future Vision
Vishwas Manral ([email protected]) | What is Serverless, Serverless for Containers
Brad Woodward ([email protected]) | Security Threat Model of Serverless, Security Controls and Best Practices, Use Cases
John Wrobel ([email protected], Eastern Time) | Why Serverless, Security Threat Model of Serverless
Amit Bendor ([email protected]) | Why Serverless
Madhav Chablani ([email protected]) | Security Controls
Vimal Subramanian ([email protected]) | Security Controls, Best Practices
Nikos Tsagkarakis ([email protected]) | Use Cases
Anthousa Karkoglou ([email protected]) | Use Cases
Peter Campbell ([email protected]) | Risk and Governance
Vrettos Moulos ([email protected]) | Use Cases
John Kinsella ([email protected]) | Security Controls, Risk and Governance

0.3 Team Breakout by area of interest:

Area | Lead
1. Intro | Vishwas Manral (Pacific Time)
2. What is Serverless | Vishwas Manral (Pacific Time)
3. Why Serverless | John Wrobel
4. Security Threat Model | Elizabeth Vasquez (Pacific Time)
5. Security Controls & Best Practice | Aradhna Chetal, Raja Rajenderan
6. Risk and Governance | Peter Campbell (GMT)
7. Use Cases | Brad Woodward
8. Other Considerations | All
9. Future Vision | Ricardo Ferreira
10. Conclusion | Aradhna Chetal (Pacific Time)

Additional team members (see 0.2 for each member's areas of contribution): Michael Roza (CET), Raja Rajenderan, Marina Bregkou (EET), Ricardo Ferreira (GMT), Brad Woodward, John Kinsella, Vrettos Moulos (EET), Eric Matlock, Amit Bendor (EET), Madhav Chablani, John Wrobel, Vimal Subramanian, Aradhna Chetal, Anthousa Karkoglou, Nikos Tsagkarakis

Table of Contents

Acknowledgments
Table of Contents
Executive summary
1. Introduction
   Purpose and Scope
   Audience
2. What is Serverless
3. Why Serverless (Lead: John Wrobel)
4. Security Threat Model of Serverless - Liz (lead)
   4.1 Relevant Threats to Serverless (Ricardo)
   4.2 Categories of Threats (John W)
   4.3 Weaknesses (Liz)
   4.4 Threat Model and Mitigations (John W, Ricardo and Liz)
5. Security controls and best practices (Aradhna & Raja)
6. Risk and Governance (Peter and J. Kinsella)
   6.1 Serverless Risk Themes
   6.2 Serverless Governance
7. Use cases and examples (Lead: Brad Woodward)
8. Other considerations
9. Futuristic Vision for serverless security (Lead: Ricardo Ferreira)
10. Conclusions
11. References
Appendix A: Acronyms

Executive summary
Serverless platforms enable developers to build and deploy faster, providing an easy way to move to cloud-native services without having to manage infrastructure such as container clusters or virtual machines. This paper covers security for serverless applications, focusing on best practices and recommendations.
From a software development perspective, organizations adopting serverless architectures can focus on core product functionality without having to concern themselves with the underlying operating system, application server, or software runtime environment.

Recommendations and best practices were developed through extensive collaboration among a
diverse group with strong knowledge and practical experience in information security,
operations, application containers, and microservices. The recommendations and best practices
contained herein are intended for Developer, Operator and Architect audiences.

1. Introduction

Purpose and Scope


The purpose of this document is to present best practices and recommendations for implementing a secure serverless solution. The scope of this document is limited to serverless implementations, including both Serverless Container-as-a-Service (SCaaS) and Function-as-a-Service (FaaS).

As many of the details of SCaaS are covered in other documents, this document focuses only on the aspects that change as a result of the serverless implementation, not on all CaaS-related details.

There are two parts to a Serverless platform:


Serverless Platform Provider
Serverless Application Owner

As part of this document we will focus mainly on the Serverless Application Owner and the
recommended security practices.

The primary goals of this paper are to present and promote serverless as a secure cloud-computing execution model and to help organizations looking to adopt serverless architectures. The paper then identifies applicable risks, threats, and vulnerabilities, followed by recommendations for the security controls and best practices needed to secure a serverless environment. It closes with a vision of serverless, including its forms, benefits, risks, and controls.

Audience
The intended audience of this document is application developers, application architects, security professionals, CISOs, risk management professionals, system and security administrators, security program managers, information system security officers, and others who have responsibilities for, or are otherwise interested in, the security of serverless computing.

The document assumes the readers have some knowledge of coding practices, along with some security and networking expertise, as well as application containers, microservices, functions, and agile application development. Because of the constantly changing nature of technologies in the serverless space, readers are encouraged to take advantage of other resources, including those listed in this document, for current and more detailed information.

Readers are also encouraged to follow industry-standard practices related to secure software design, development, and build.

2. What is Serverless
a. Definition of serverless

Serverless computing is a cloud-computing execution model in which the cloud provider takes on the runtime management of the compute and dynamically manages the allocation of machine resources, whether physical or virtual, including all aspects of compute, storage, and networking.

If there are no servers involved in the execution, how does it work? The name serverless actually describes only the behavior experienced by the end user of the service. Under the hood there are still servers that execute the code, but they are abstracted away from the developer and from serverless users.

Pricing for serverless is based on the actual amount of resources consumed by an application,
rather than on pre-purchased units of capacity.
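
To make the pay-per-use pricing concrete, the short calculation below estimates a monthly FaaS bill from invocation count, duration, and memory. The per-GB-second and per-request rates are illustrative assumptions for this sketch, not any particular provider's price list.

invocations_per_month = 3_000_000
avg_duration_seconds = 0.2
memory_gb = 0.5                      # 512 MB
rate_per_gb_second = 0.0000167       # assumed rate, illustrative only
rate_per_million_requests = 0.20     # assumed rate, illustrative only

compute_cost = invocations_per_month * avg_duration_seconds * memory_gb * rate_per_gb_second
request_cost = (invocations_per_month / 1_000_000) * rate_per_million_requests

# 3M invocations x 0.2 s x 0.5 GB = 300,000 GB-seconds -> about $5.01 compute + $0.60 requests
print(f"Estimated monthly cost: ${compute_cost + request_cost:.2f}")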
 
Function as a Service (FaaS) and Serverless Container as a Service (SCaaS) are two well-known models of serverless computing.
 
b. FaaS and SCaaS differences

The basic difference is that while in SCaaS the container is the basic unit of execution
that a customer provides, for FaaS it is code functions that the customer provides.

In SCaaS the application is packaged into a container by the customer. The provider uses Docker/containerd as the virtualization layer, and manages the scaling of containers and all aspects of the machine resources the container runs on.

In FaaS the application function code is provided by the customer, along with the event triggers that cause the function to be invoked. The provider takes the function provided, builds it, and creates the triggers. No infrastructure administration, provisioning, or management is required of the customer; these are the responsibility of the provider.
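
As a minimal sketch of the FaaS model, the function below uses an AWS Lambda-style Python handler signature (an event payload plus a context object). The event shape shown assumes an HTTP trigger through an API gateway and is illustrative only; other providers and trigger types deliver differently shaped events.

import json

def handler(event, context):
    # For an HTTP trigger, the provider delivers the request details in `event`;
    # the exact shape depends on the platform and on the trigger type.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")

    # The provider builds, deploys, scales, and invokes this code; the customer
    # supplies only the function and its trigger configuration.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }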

Attribute | Function as a Service (FaaS) | Serverless Container as a Service (SCaaS)
Execution Unit | A function's code with a single logic | A container image with multiple functional logics
Dependency | Application programming language specific | Can run applications independent of the language the code is written in, because the binary and dependencies are packaged
Application State | Generally stateless | State can be maintained by configuring storage volumes
Execution Time | Generally limited to a few minutes; even less time if multiple HTTP operations are involved within a single function | Not time limited
Scaling | Auto scale | Auto scale
Examples | AWS Lambda, Azure Functions, Google Cloud Functions | AWS Fargate, Azure Container Instances, Google Cloud Run
Runtime responsibility | Serverless Service Provider | Serverless User
Deployment Package Limit | Generally package sizes are restricted to a few megabytes | No restriction on package size

c. Serverless Security overview


Serverless security brings in a new paradigm in which the application owner is responsible only for the security of the application. All aspects of managing the server and its security, including bringing it up, patching, updating, and bringing it down, are managed by the serverless platform provider, thus freeing the application owner to focus on the application itself.

To better explain the shared responsibility model between the Platform Provider and Application Owner for the different models, we have created the diagram below.

d. Hybrid serverless architecture (private & public)

There are many serverless architectures. Some common infrastructure examples include (not a comprehensive list):
● Amazon: Lambda, Fargate, AWS Batch
● Google: Cloud Functions, Knative

● Azure: Azure Functions, Azure Container Instances
● Nimbella: OpenWhisk
● IBM: OpenWhisk

Forrester research: ​https://reprints.forrester.com/#/assets/2/108/RES155938/reports

3. Why Serverless (Lead: John Wrobel)
a. Characteristics of Serverless (Contributing: Marina B.) - joined with c, Shared Responsibility Model
b. Advantages/Benefits of Serverless architecture (Contributing: Amit B.)
● Cost
  ○ Infrastructure Cost
    ■ Priced (usually) on a per-request basis, which means you don't need to pay when you're not using the infrastructure
    ■ Cost efficient on burst workloads - you don't have to maintain servers at times they are not required
  ○ Operational Cost
    ■ Not having an infrastructure to manage can cut labor cost and the time spent on maintaining it
● Developer experience
  ○ Easy to deploy
    ■ Serverless services can be easily deployed with minimal configuration using CLI tools, from source control, or through a simple API
  ○ Easy to monitor
    ■ Most cloud providers offer out-of-the-box logging and monitoring solutions bundled with their serverless offering
  ○ No server management overhead
    ■ Serverless services abstract all server management tasks such as patching, provisioning, capacity management, and operating system maintenance
● Scale
  ○ Scalable by nature
    ■ Serverless auto-scales based on usage without having to set up any additional infrastructure
    ■ There is no need to configure policies for scaling up or down
    ■ When working on premises, scaling is limited to the available infrastructure
● Security? 
c. Shared responsibility model for serverless

Service | Application Owner | Serverless Platform Provider
Platform patching | - | CaaS, FaaS
Image patching | CaaS | FaaS
Secure coding practices | CaaS, FaaS | -

d. When is serverless appropriate

The serverless model is most appropriate in cases where there is a relatively large application
or set of applications, and several DevOps type teams available to support them. In such a
case, the application(s) can be broken down into smaller components called Microservices (​see
Microservices Best Practices whitepaper​), with each being supported by one or more teams and
running in a serverless environment. This allows for more effective use of development
resources by allowing them to focus on a single specific piece of functionality. This model also
allows for more agile development of each individual microservice when compared to a
monolithic application, because functionality for each part of the application can be moved into
production without as much concern for full integration and regression testing with the other
parts of the application.

With relatively small applications or teams, a serverless model can sometimes be less efficient
than having a traditional infrastructure to support the application (such as IaaS or PaaS
services). With a smaller application, there is typically less complexity, and the benefits of
breaking the application down into microservices are lost. In such a case microservices can end
up being so tightly coupled with other services that some benefits of microservices such as
reusability are lost. Also, with insufficient resources to support many microservices, teams may
have to stop work on one microservice to support another.

It is also important to note that in almost all cases serverless architectures will simplify the
deployment process. This is because in most cases deployment consists of simply uploading a
container image or set of code, without as much concern for resource provisioning and network
architecture as with a traditional application deployment. It is important for organizations to
perform a cost/benefit analysis when making a decision around using serverless architectures,
so that they can choose the solution that is most technically efficient and cost-effective for their
needs.

4. Security Threat Model of Serverless - ​Liz (lead)


4.1 Relevant Threats to Serverless ​(Ricardo)
In the last few years, new paradigms have emerged to increase agility, enable business growth, and reduce time to market.

Serverless is a good example, as it allows a beginner to deploy functions without any understanding of the underlying cloud infrastructure, saving time and resources and letting teams focus on real business value.
As such, serverless quickly gained adoption, bringing a paradigm shift in how organisations consume cloud services.
If we look from a historical point of view, we have been moving away from a reliance on perimeter security. In 2007 the Jericho Forum project [Jericho Forum-White Paper, 2007] identified the Collaboration-Oriented Architecture as a precursor to the massive changes that would be introduced by these new paradigms [Jericho Forum Commandments, 2007], such as:

● People and systems need to manage permissions of resources and users they don't
control.
● There must be a trust mechanism for organizations, which can authenticate individuals
or groups, thus eliminating the need to create separate identities.
● Systems must be able to pass on security credentials.
● Multiple areas of control must be supported.

These best practices grew out of the necessity of being secure in an increasingly de-perimeterized world, driven mostly by the increase in connectivity, the ability to inter-work securely over the Internet, and the proliferation of cheap IP devices (IoT). The diagram below highlights the evolution of de-perimeterization, driven by business value and the digital economy and pushed by cost, flexibility, and agility.
Today we have reached this point with serverless, event-driven architectures, and microservices.

Figure: The Jericho Forum and the de-perimeterization push (Jericho Forum White Paper, 2007)

Since the digital economy expands the corporate boundary, the existing networking model has faded and become less relevant as a security boundary as organisations have started to use these new computing paradigms.
The new paradigms changed the traditional risk picture in these environments (serverless, microservices, and IoT), where typical flows cross untrusted boundaries, such as getting data from a public location, putting data into customer premises, or calling functions in other locations.

Traditional 3-Tier Architecture

Traditionally, in an N-tier architecture such as the one above, we would build a walled garden and segment by function, responsibility, or risk.
As an example, in a traditional 3-tier architecture we would leverage firewalls, proxy servers, and Intrusion Prevention Systems in the first tier; reverse proxies and ACLs in conjunction with network segmentation in the second tier; and added segmentation to divide and reduce the risk surface in the third tier.
With a serverless environment this changes: by default, the network construct is removed, the choke points do not exist, and the traditional controls are replaced.
As such, it requires a more thoughtful security architecture to secure the serverless components. Below is a diagram of the serverless architecture:

Serverless Model

In this new model, a traditional network is not needed for the functions to operate, and their lifecycle is quite different; sometimes functions have a duration of milliseconds before they are destroyed. In this scenario of increased connectivity and integration, the de-perimeterization is quite evident.
As such, we see a risk evolution where traditional controls like firewalls, L7 firewalls, routing, IDS/IPS agents, and SIEMs are not yet able to deal efficiently with this new type of paradigm.
The threat landscape has also shifted, as the attack surface tied to managing the operating system, the programming language runtime, and the infrastructure is removed.

But now we have internal service calls that expand across the Cloud estate, requiring further
thought about our services forming part of our supply chain, as we architect with security best
practices in mind, for example below are some scenarios that highlight some of the risks:
● A function that now spans across the Cloud estate streaming data into several services,
crossing untrusted boundaries.
● Functions being triggered increase enormously, due to the nature of event driven
architectures, e.g. every time a file changes, several functions might be used.
● Chained functions being used, some of those pulling data from several trusted or
untrusted sources.

With these models the complexity is transferred to another part of the stack, requiring us to
architect differently for security, some of the below examples highlight the required changes:
● Configuring HTTPS, managing certificates
● Configuring and administering DNS

● Addressing all the security requirements in a serverless stack:
○ Functions
○ Storage
○ Message queuing
○ noSQL
○ Automation
○ Gateways
○ Authentication

As previously stated, despite the traditional risk surface decreasing due to the infrastructure abstraction, other security vectors now shift the attack surface, requiring the organisation to manage the cloud provider configuration, tighten IAM roles, and think about the security process in the application development lifecycle.
Aspects like logging and monitoring that have traditionally been coupled inside the application stack (e.g. Nginx, Apache) and abstracted from the user now require developers to write logging and monitoring events along with the code.
Some key areas for improving the security of such platforms have started to be researched to create a more robust approach: translating security policies from existing applications; offering security APIs for dynamic use in functions (e.g. a function may have to delegate security privileges to another function or cloud service); and access control mechanisms using cryptographically protected security contexts, which could also be a fit for this distributed security model [Berkeley paper, 2019]. We expand more on these topics in the futuristic chapter of this paper.
The following key areas require special focus during serverless adoption:
● Supply Chain security
● Data Flow Security
● Attack Detection
● Delivery Pipeline
The above areas are expanded and described in more detail below. For a comprehensive reference, we unified and collapsed several sources [sources 1, 2, 3] to cover the most common serverless threats:

Threat | Description

Data Injection
● At a high level, injection flaws occur when untrusted input is passed directly to an interpreter before being executed or evaluated.
● However, in serverless architectures, function event-data injections are not strictly limited to direct user input.
● Most serverless architectures provide a multitude of event sources, which can trigger the execution of a serverless function.

Broken Authentication
● Since serverless architectures promote a microservices-oriented system design, applications built for such architectures may contain dozens (or even hundreds) of distinct serverless functions, each with a specific purpose.
● These functions are weaved together and orchestrated to form the overall system logic. Some serverless functions may expose public web APIs, while others may serve as an "internal glue" between processes or other functions.
● Some functions may consume events of different source types, such as cloud storage events, NoSQL database events, IoT device telemetry signals or even SMS notifications.

Insecure Serverless Deployment Defaults
● Certain configuration parameters have critical implications for the overall security posture of applications and should be given attention.
● Settings provided by serverless architecture vendors may not be suitable for a developer's needs.

Broad and Generic Permissions
● Only the minimum necessary rights should be assigned to a subject that requests access to a resource, and they should be in effect for the shortest duration necessary.
● Permissions granted to a user beyond the scope of the necessary rights of an action can allow that user to obtain or change information in unwanted ways.
● Careful delegation of access rights can limit attackers from damaging a system.

Insufficient Logging
● Every cyber intrusion usually starts with a reconnaissance phase; it is the point at which attackers assess the application for weaknesses and potential vulnerabilities.
● Many successful attacks could have been prevented if organisations had efficient and adequate real-time security event monitoring and logging.

Insecure Management of Secrets
● One of the most frequently recurring mistakes related to application secrets storage is to simply store these secrets in a plain-text configuration file that is a part of the software project.
● The situation gets much worse if the project secrets are stored in a public repository (see the sketch following this table).

Supply Chain and Dependency Issues
● Generally, a serverless function should be a small piece of code that performs a single discrete task.
● Functions often depend on third-party software packages, open-source libraries and even the consumption of third-party remote web services through API calls to perform tasks.

Financial and Resource Exhaustion (DoW and DoS)
● In the past decade, denial-of-service (DoS) attacks have increased dramatically in frequency and volume.
● Such attacks have become one of the primary risks facing nearly every company with an online presence.

Error Handling and Verbose Error Messages
● Cloud-native debugging options for serverless-based applications are limited (and more complex) when compared to debugging capabilities for standard applications.
● This is especially true when a serverless function utilizes cloud-based services that are not available when debugging the code locally.

Data Persistency Across Executions
● In a scenario where the serverless execution environment is reused for subsequent invocations, which may belong to different end users or session contexts, it is possible that sensitive data will be left behind and might be exposed.
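
To illustrate the insecure management of secrets described above, the sketch below contrasts a hard-coded credential with a runtime lookup from a managed secrets store. It assumes AWS Secrets Manager via boto3 and an illustrative secret name (prod/db-password); other providers offer equivalent services.

import boto3

# Anti-pattern: a secret hard-coded next to the function code ends up in the
# deployment package and in source control.
# DB_PASSWORD = "plaintext-password"   # do not do this

def get_db_password(secret_id: str = "prod/db-password") -> str:
    # Preferred: fetch the secret at runtime from a managed secrets store so it
    # never lives in the repository or in the function package.
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    return response["SecretString"]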

The serverless paradigm uses functions that, correctly coupled with third-party services, allow an organisation to run end-to-end applications without needing to care about traditional infrastructure management. But as with any new technology, serverless patterns are still emerging and require a more thoughtful security approach; those patterns will mature over time with the effort of this and other papers in the area.

4.2 Categories of Threats (John W)

Threats to serverless can be broadly broken into two categories: threats to serverless platforms, and threats directly to the applications built on serverless platforms. Platform threats target the containerization and orchestration platforms that enable the serverless functionality, while application threats focus on the applications that developers build to run on top of serverless platforms. Aside from these, there are a few relevant threats that do not fit neatly into either category, which will also be addressed.

A. Platform Threats (Serverless Container as a Service Platforms)


Platform Threats target the underlying infrastructure that powers
containerized applications. In the context of Serverless Containers as a
Service, these platforms will generally be operated and maintained by the
cloud provider with some configuration options available to the customer.
Common threats to SCaaS platforms include:
1. Containerization and Orchestration Vulnerabilities
a. Orchestration and containerization tools (such as Kubernetes and Docker, which dominate the market at the time of writing) are themselves susceptible to vulnerabilities in their own code, such as improper error handling.
b. Containerization/Orchestration API abuse
i. Orchestration and Containerization tools rely on
APIs for management activities. By their nature,
these APIs are often exposed to some extent, and
with improper configuration they can be more
widely exposed than intended. This exposure,
combined with API-specific vulnerabilities can lead
to several varieties of attacks (DoS, leaked
sensitive information, application hijacking, injection
of malicious code, etc.) against the platform and
applications on the platform.
2. Unrestricted/admin access assigned inappropriately
a. Some applications or tools require more extensive
privileges than standard user access to operate properly.
Often, such applications or their users will request “root” or
“admin” access, instead of the specific enhanced privileges
they need to operate properly. Unnecessarily assigning
such a high level of access to applications or users results
in the creation of additional threat vectors towards the

platform.

3. Unauthorized access
a. Improper configuration of user access can result in some
users having technical access that exceeds the access
they are intended to have as dictated by policy. This can
result in users having access to configuration items, data,
or other secrets which they are not authorized to have
access to according to organizational policy.
4. Portal/console vulnerabilities
a. Most container platforms and SCaaS offerings have some
sort of portal or console which allows users to configure
and manage the platform. These portals/consoles are often
just a web application that grants access to manage the
platform in one way or another. Like any other web
application, these portals/consoles are susceptible to any
number of vulnerabilities, which in their case could result in
unauthorized access to the platform on multiple levels.
B. Serverless Application Threats
Applications built with serverless architectures are also exposed to many
of the same threats as applications built with other architectures, such as:
1. Environment misconfiguration
a. Even with FaaS and SCaaS environments, users are given
varying levels of control over the configuration of the
environment. Mismanagement of these configurations can
leave the platform and resident applications vulnerable.
b. Exposed ports
i. As with many types of environments, the open
ports associated with the environment and resident
applications need to be carefully managed to
prevent unintentionally leaving ports open that may
leak information, allow attackers to gain
unauthorized access to either the environment or
application, or launch attacks against it.
c. Disabled/default configuration
i. Many services for hosting serverless applications
are insecure by default. This may include default
credentials for administrative accounts, open ports,
or lack of authentication services.
d. Exposed credentials
i. Some services use a standard set of credentials for
administrative accounts that should be changed by
users during setup, or some combination of other
vulnerabilities may end up exposing credentials to
attackers.

2. Vulnerable dependencies
a. As with any other kind of applications, those that run in a
serverless environment are susceptible to vulnerabilities
introduced by software dependencies that are required for
the application to be run.
b. Vulnerable images
i. Unique to container-based serverless applications
is the threat of vulnerable images. Often,
developers use some sort of base image upon
which they will build their application. Depending on
the source of these images, they are susceptible to
many types of vulnerabilities, including
vulnerabilities in pre-installed dependencies, as
well as pre-installed malware in some cases.
3. “Embedded” malware
a. As mentioned above, container-based serverless
applications frequently use pre-made images as the
foundation of their applications. In some cases, these
images may have malware already installed on them
before the developer starts modifying the base image.
Malware seen in this capacity is widely varied, but some of
the more popular cases include keyloggers and
crypto-miners.
C. Other threats (Deployment, execution, operational etc.)
The serverless model also faces some other categories of threats, such
as:
1. Attacks against or through automated deployment tools
a. Serverless architectures frequently lend themselves
towards highly automated change integration and
deployment techniques. By their nature, these tools
operate in a “hands-off” capacity, and DevOps teams may
not frequently interact with their configurations. As a result,
some attackers may target these automation tools as a
way of incorporating malicious code into a target
application, or as a way to cause a denial of service with
regards to application updates.
2. Exploited code repositories
a. Similarly to automated deployment tools, shared (public or
private) code repositories present an enticing target for
attackers seeking to carry out a supply chain attack. If
code repositories are not properly secured, attackers may
attempt to incorporate malicious code into the application
by committing said code to the application’s code
repository.

3. Exploited image registries
a. Image registries can be exploited in a similar way to code
repositories if they are not properly secured. An attacker
may attempt to overwrite an existing image with a version
that has some sort of malicious tool/application/code
embedded within it. See “Embedded” malware above.


4.3 Weaknesses: ​(Liz)

A. Inherent Weaknesses vs Weaknesses Introduced by Design

Application Architects have to be cognizant of both inherent weaknesses present in


Serverless technology vs weaknesses they can introduce in their design. Understanding
these weaknesses will allow them to better understand why the threats we call out in this
paper are possible, and why the security controls we expound on in the next chapter
need to be included in their secure design.

But first, let's not discount the fact that serverless architecture has many benefits, even from a security perspective, that should be weighed as part of secure application design considerations. Some of those benefits include:

1. Stateless and ephemeral: Short-lived serverless functions process unencrypted data in memory for a short period of time and do not write to a local disk, which is why functions that need to persist state rely on external data stores. This reduces the likelihood of exploits by attacks designed for long-lived targets.

2. Each serverless function requires only a subset of your data to perform its micro-focused process. As long as each function is permissioned correctly to access only the data it requires, a successful exploit of a function is also limited in what data it can potentially exfiltrate.

3. Serverless applications run within containers managed by a CSP or within self-managed containers. Hence, they have some of the inherent security benefits of containers: they run on immutable container images, and because they do not require long-lived servers, they can easily be assigned to continuously patched container images and compute instances. This lessens concerns about running on vulnerable or unpatched underlying infrastructure.

B. What are some inherent weaknesses?

By now you have a greater understanding of what serverless is and how the security responsibility has shifted: you now delegate most of the underlying responsibility to a Platform Provider. Along with this shift of responsibility, you have also inherently lost a lot of visibility and manageability. Where are my functions actually running? What is the actual network exposure of my functions? Are my functions executed in an idle container already initialized by a previous execution (a "warm start") or in a newly instantiated container (a "cold start")? Is any of my previous data still available in cached memory?

So let’s start by understanding some of the inherent weaknesses that you need to
consider as you design your Serverless applications.

1. Functions inherently are enabled with public-facing egress.

Functions that run on containers managed by the CSP (FaaS) are configured with
open network policies by default. Those functions can therefore access any
endpoint on the internet. If a function with a privileged access control level allowing
retrieval of your confidential data is compromised, then that same function can be
used to exfiltrate that data to any internet accessible endpoint.

When you run serverless functions in your managed container environment (SCaaS),
then you are typically already defining strict network and service policies, including
allowable communication paths and the network exposure of your services.

Suggested Controls for FaaS:

i. Apply network policies to limit the end points that are reachable from that FaaS
by using the Virtual Private Cloud (VPC) network policy configurations that are
being made available by the various CSPs.

ii. Apply service or resource policies that allow you to limit the end points that can
access your data stores / services, and therefore you can reduce the exfiltration
paths for your data.

Examples include: AWS VPC endpoints, Azure VNet Service endpoints, and
Google VPC Service Controls
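
As one way to operationalize control (i), the sketch below uses boto3 to flag Lambda functions that have no VPC configuration attached. It is a detection aid under the assumption that your standard requires functions to run inside a VPC; it is not a general-purpose policy engine, and the region name is just an example.

import boto3

def functions_without_vpc(region: str = "us-east-1") -> list[str]:
    # Flag FaaS functions whose egress is not constrained by a VPC configuration.
    client = boto3.client("lambda", region_name=region)
    flagged = []
    for page in client.get_paginator("list_functions").paginate():
        for function in page["Functions"]:
            vpc = function.get("VpcConfig") or {}
            if not vpc.get("SubnetIds"):
                flagged.append(function["FunctionName"])
    return flagged

if __name__ == "__main__":
    print("Functions with unrestricted egress:", functions_without_vpc())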

2. Lack of visibility into the serverless application's deployed architecture.

Serverless applications are built upon a microservices architecture, so you build many logically focused functions to run either concurrently (to scale out your processing) or sequentially (with dependencies on state and expected outcomes). However, as you continue to grow the number of functions that you make available, whether for a single application or as distributed APIs for use by many larger functional applications, challenges arise in fully knowing how each function is executed.

Are all those functions executed within a VPC, especially if the criticality of the data requires that you minimize any public exposure? Are all the roles you created for each function tuned to allow the minimum required privilege? Did developers reuse already existing roles, so that functions that only need to read data are using the same role as functions that actually need to process and update your data values? If an application redesign now specifies that a function will be launched by an event queue instead of an HTTP event, has the HTTP event trigger been removed from the deployment options? Are all the functions that can be triggered by an HTTP event behind an API gateway?

Suggested Controls:

i. Logging: Ensure that you are using integrated logging that you can centralize to facilitate your overall application performance and security monitoring. Platform Provider logging will help collect statistics on the number, duration, and memory usage of your function's executions. Visibility of application errors is within your control: add logging statements as needed within your serverless functions (see the sketch after these controls). For instance, does your error logging enable you to identify whether a failure occurred in a process you defined, from unexpected data input, or as a result of a 3rd-party functional process?

ii. Monitoring: Use application and security monitoring tools that will help surface
that visibility of not only how often a function is executed, but the logical
execution path. You should be able to monitor and discover all the APIs or
endpoints that you are exposing and all your downstream or dependent APIs.
You should be able to monitor and discover which events and roles are executing
your functions, and any unexpected execution paths or methods.
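
A minimal sketch of the logging control referenced in (i) above: it assumes an AWS Lambda-style handler and emits structured JSON log lines (picked up by the platform's log service) so that failures can be attributed to your own logic, bad input, or a downstream dependency. The process function is a hypothetical placeholder for the real business logic.

import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def process(event):
    # Placeholder for the function's real work.
    if "payload" not in event:
        raise ValueError("missing payload")
    return {"status": "done"}

def handler(event, context):
    # Structured entries make it easier to tell our own failures apart from
    # unexpected input or third-party errors once logs are centralized.
    logger.info(json.dumps({"stage": "invocation", "source": event.get("source")}))
    try:
        result = process(event)
        logger.info(json.dumps({"stage": "processed", "status": "ok"}))
        return result
    except ValueError as exc:
        logger.warning(json.dumps({"stage": "validation", "error": str(exc)}))
        raise
    except Exception as exc:
        logger.error(json.dumps({"stage": "unhandled", "error": str(exc)}))
        raise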

3. Default Platform Provider configurations can impact security.

Serverless security depends on the configurations of not just the functions but on all
of the upstream and downstream services that compose your Serverless application.
Your Serverless application is more than likely using many Platform Provider PaaS
services, so it is important that you understand the default configurations of those
PaaS services and all of your security hardening options.

For instance, if you are using PaaS data services, are you still using any default configurations, and is that configuration still potentially exposing your data publicly? Are you using default Platform Provider IAM roles that allow your function to read and write data, when your function only needs to read and process that data?

Suggested Controls:

i. Do not overlook or rely on default platform provider configurations. Determine


how to securely configure each service, based on your application and data
security requirements and risk posture.

ii. Perform application security testing to ensure that your configurations meet all of
your security requirements. Application security testing should help you validate
the methods and roles that are able to access your functions or data. More
importantly application security testing should help reveal any configuration
weaknesses that still exist, not just that your application logic or input validation is
still exploitable by injection attacks.

4. Serverless FaaS and CaaS are executed on infrastructure that is not encrypted by default.

Are you designing a Serverless Application that will access and process confidential
data or highly regulated data? If so, then the security requirements for how that data
is protected increases, and you have to consider not only data exfiltration by external
attackers but malicious insiders as well.

Platform Providers are executing your Serverless FaaS and CaaS applications on
compute instances that are not encrypted by default. Furthermore, your Serverless
applications most likely rely on PaaS data services offered by Platform Providers that
are not encrypted by default.

Suggested Controls:

i. Ascertain if your Platform Provider offers the option of deploying your Serverless
code on either FaaS or CaaS that executes on a confidential compute instance.
This may help you decide between the use of Serverless FaaS or CaaS to
properly meet application and data security requirements and risk posture.

ii. Ascertain the default or managed encryption options for each PaaS Data Service
that you are incorporating into your Serverless application and will hold that
confidential or highly regulated data. Ensure that your use of and configuration of
the PaaS Data Service will meet your application and data security requirements.

5. Serverless application reliance on 3rd-party libraries.

Serverless application code and deployment vulnerability scanning has limited visibility into dependent 3rd-party libraries. Especially concerning are any vulnerabilities that may have been introduced into 3rd-party libraries that are managed outside of your source code repository.

Suggested Controls:

i. Incorporate Source Composition Analysis of any 3rd party libraries into your
deployments. This allows you to discover not only how extensive your
dependencies are but if you are introducing risk with already known vulnerable
components.

ii. Use Security monitoring solutions that can identify vulnerable 3rd party libraries
at run time, and also identify included libraries that are not being used. Remove
redundant libraries to reduce the risk of unnecessarily including vulnerabilities.
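
As a small aid to the controls above, the sketch below inventories the third-party packages bundled with a Python function using only the standard library, producing a list that can be fed into a Source Composition Analysis tool or checked against a vulnerability feed. It assumes the function's dependencies are installed in the environment where it runs.

from importlib.metadata import distributions

def dependency_inventory() -> dict[str, str]:
    # Enumerate installed distributions (name -> version) so they can be checked
    # against known-vulnerability databases and unused packages can be pruned.
    return {dist.metadata["Name"]: dist.version for dist in distributions()}

if __name__ == "__main__":
    for name, version in sorted(dependency_inventory().items()):
        print(f"{name}=={version}")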

C. What weaknesses can you introduce in your design?

After developing a better understanding of the inherent weaknesses in serverless platforms, we now shift the focus to the impact of your design decisions on your security posture.

1. Is your function susceptible to Denial of Service (DoS) or Denial of Wallet (DoW)


attacks?

Scenario 1: You decide to publish your serverless functions so that they are directly callable via HTTP requests.

Potential Result: An attacker launches a synchronous flood of requests against your


function, which in turn is continuously instantiated, processing each request and
leading to Denial of Service of legitimate requests along with financial resource
exhaustion, also known as Denial of Wallet.

Recommendation:​ Publish your Serverless functions behind an API Gateway, which


will allow you to set request limits to throttle all incoming requests to each function,
and where you can normally also set Input Request Filters to limit the risk of having
your function repeatedly launched to process completely invalid requests. API
Gateways can help by rejecting requests that fall outside of your thresholds and
filters and are more likely purposeful attacks designed to overwhelm and ultimately
deny the service to your functions.

Scenario 2: You do publish your serverless functions behind an API Gateway that limits the number of requests (such as setting an upper limit of 10,000) and applies a request filter (such as defining the JSON parameters expected with the request), but you are not enforcing authentication and authorization.

Potential Result: An attacker launches your function with requests that are formatted like legitimate requests, thus passing the API Gateway request filtering checks, and successfully instantiates the function until reaching the upper limit. This causes you to process a high number of unauthorized requests designed solely to force you to pay for services, leading to financial resource exhaustion, also known as Denial of Wallet.

Recommendation: Publish your serverless functions behind an API Gateway where you not only set request limits to throttle all incoming requests to each function and set input request filters, but where you also incorporate authentication and authorization for each of those functions.
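
The sketch below shows, in application code, the kind of gatekeeping the recommendations above delegate to an API gateway: authenticate the caller and reject malformed requests before the function does any billable work. The token check and request shape are illustrative assumptions, not a substitute for gateway-level throttling and authorization.

import json

EXPECTED_FIELDS = {"order_id", "quantity"}   # assumed request schema

def is_authorized(event) -> bool:
    # Illustrative placeholder; in practice the gateway validates tokens
    # (for example JWTs) before the function is ever invoked.
    token = (event.get("headers") or {}).get("authorization", "")
    return token.startswith("Bearer ")

def handler(event, context):
    if not is_authorized(event):
        return {"statusCode": 401, "body": "unauthorized"}

    body = json.loads(event.get("body") or "{}")
    if not EXPECTED_FIELDS.issubset(body):
        # Rejecting malformed requests early limits wasted (billed) execution.
        return {"statusCode": 400, "body": "invalid request"}

    return {"statusCode": 200, "body": json.dumps({"accepted": body["order_id"]})}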

4.4 Threat Model and Mitigations: ​ (John W, Ricardo and Liz)

Sr. No: 1
Threat: Vulnerable dependencies
Weakness: Included 3rd-party library dependencies with existing or newly discovered vulnerabilities.
Threat Description: Attackers are aware of or discover a vulnerability in a common 3rd-party library. They formulate attacks to exploit this vulnerability, knowing it will eventually be successful if a serverless app happens to have included that weak library.
Impact: A successful exploit may be used to carry out injection attacks, which can lead to anything from data exfiltration and data manipulation to denial of service and so on.
Mitigations to be considered: Implement Source Code Analysis to discover and identify all your 3rd-party library dependencies. Combine that with security solutions that will help identify which libraries have known vulnerabilities.
Scope: FaaS, CaaS

Note: Threat Description includes what attack vector is being exploited.
Note: Mitigations (reference controls).
Scope: FaaS vs CaaS.

Threat Model Diagrams:

1. FaaS Threat Model


2. Serverless on CaaS Threat Model

Related threats are also covered in the CSA publication Best Practices in Implementing a Secure Microservices Architecture (https://cloudsecurityalliance.org/artifacts/best-practices-in-implementing-a-secure-microservices-architecture/) and can be referenced from there.

Process flows and data flows are important inputs for correct threat modeling.

5. Security controls and best practices (Aradhna & Raja)
(Cloud Controls Matrix for serverless)

Overall security architecture pattern for CaaS and FaaS

Figure A : Function order of execution can lead to security vulnerabilities; instead, treat every function
as its own security perimeter: Ref : Serverless Security by Liran Tal


Control Category | Sub-Category | Control Description | FaaS | CaaS | Comments

Application & Interface Security
● Application Security
● Customer Access Requirements
● Data Integrity
● Data Security

Encryption & Key Management
● Entitlement
● Key Generation
● Sensitive Data Protection
● Storage and Access

Identity & Access Management
● Audit Tools Access
● Credential Lifecycle / Provision Management
● Diagnostic / Configuration Ports Access
● Policies and Procedures
● Segregation of Duties
● Source Code Access Restriction
● Third Party Access
● Trusted Sources
● User Access Authorization

Infrastructure & Virtualization Security
● Audit Logging / Intrusion Detection
● Change Detection
● Clock Synchronization
● Vulnerability Management
● Network Security
● Production / Non-Production Environments
● Segmentation

Platform Security
● Platform Configuration: (a) secure the triggering agent (input binding); (b) least privilege for output binding; (c) clean up temporary space for isolated and batch processing functions. Comments: patching vulnerabilities in open-source dependencies.
● Platform Component Security Configuration to address any vulnerabilities
● Platform mitigations against compromise
● Orchestration: detection; policy enforcement and management; response automation; resiliency; scalability; synchronous or asynchronous functions; error handling

Identity and Access Management (controls)
● Account Management
● Access Enforcement (RBAC/ABAC/NGAC)
● Information Flow Enforcement
● Least Privilege
● Time Stamps
● Audit Events

Configuration Management
● Baseline Configuration
● Configuration Change Control. Comments: images can be used to help manage change control for apps.
● Access restriction for changes (policy enforcement)
● Authentication (user, device, service, function)
● Credential management and identifier management
● Development process, tools, and technologies controls
● Dev security testing and evaluation in CI-CD pipelines
● Data-in-transit protection
● Data leak protection
● Integrity checks for software/APIs/resources etc. Comments: unauthorized changes to the contents of images can easily be detected and the altered image replaced with a known good copy.
● Vulnerability management. Comments: unauthorized changes to the contents of images can easily be detected and the altered image replaced with a known good copy.

Secure Coding and Validation
● Input validation
● Remote code execution
● Insecure logging
● Exception handling
● Vulnerability management of the application and its dependencies
● Etc.

Security Function Isolation
● Separating security functions from non-security functions can be accomplished in part by using containers or other virtualization technologies so that the functions are performed in different containers.
● Separating user functionality from administrator functionality can be accomplished in part by using containers or other virtualization technologies so that the functionality is performed in different containers.

The 12 Most Critical Risks for Serverless Applications 2019

SAS-1: Function Event-Data Injection ​(Raja)

Serverless functions and containers can be invoked both by a web-based HTTP call and by a cloud-based event. An event in a cloud platform indicates a change in a cloud resource. Event-data injection is one of the most abused attack vectors in serverless platforms to date, and it applies to both FaaS and SCaaS. The most common types of injection are OS command and SQL injection. Essentially, an event passes data to the backend serverless application as part of the trigger. The input carried in the event data varies by event source and can include content that the application developer never planned for or developed against, which allows attackers to pass strings that can be executed in the runtime.

Best Practices​:

● Input validation: Developers are always responsible for the application code and should take the utmost care to sanitize input from the event data. Extract only what is required for the application to execute.
● Adding input validation logic to the deployment package itself can result in poor performance and added cost from increased execution time. A better practice is to separate the input validation functionality from the deployment packages; the likes of AWS Lambda layers and Azure Functions input bindings can be used for this.
● Use a function wrapper (a function that proxies the original function). The wrapper should contain the logic to validate input and then pass the validated input on to the actual processing function; a minimal wrapper sketch follows this list. One input validation function can be reused for more than one app function if the logic is the same, but cost is a factor here.
● Several commercial runtime-protection solutions are available to inspect input data and prevent malicious code from being executed as part of the function.
● If functions are hosted as APIs, front them with a proxy such as an API gateway. Most cloud providers' gateway services can handle input validation and are handy, ready-to-use solutions.
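The following is a minimal sketch of the wrapper pattern described above, written for a Python FaaS handler. The event shape, the field name (order_id), and the validation rule are illustrative assumptions, not part of any provider SDK; adapt them to the actual event source.

```python
import json
import re

# Whitelist pattern for the only field this function is expected to receive.
ORDER_ID_PATTERN = re.compile(r"^[A-Za-z0-9-]{1,36}$")

def validate_event(event):
    """Wrapper-side validation: reject anything that is not explicitly expected."""
    body = json.loads(event.get("body") or "{}")
    order_id = body.get("order_id", "")
    if not ORDER_ID_PATTERN.match(order_id):
        raise ValueError("invalid order_id")
    return {"order_id": order_id}

def process_order(clean_input):
    # Business logic only ever sees validated, whitelisted fields.
    return {"status": "accepted", "order_id": clean_input["order_id"]}

def handler(event, context):
    """Entry point registered with the FaaS platform; proxies the real function."""
    try:
        clean_input = validate_event(event)
    except ValueError:
        return {"statusCode": 400, "body": json.dumps({"error": "bad input"})}
    return {"statusCode": 200, "body": json.dumps(process_order(clean_input))}
```

The same validation wrapper can front several processing functions when their input contracts are identical, which keeps the validation logic in one place.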

Authentication and Access Controls (SAS-2: Broken Authentication- remove this after)
(Aradhna)

In a serverless architecture, the functions need to be customized and configured in terms of:

a. The cloud platform resources and events that trigger a function's execution
b. The resources to which functions have access
c. The set of permissions that functions need in order to access those resources; the functions then need to be connected together to form business flows.

Functions therefore require managing per-function permissions and mapping fine-grained resource access. Because of the number of moving pieces and the amount of work involved, developers may take shortcuts and define global roles and permissions applied to all functions, or may not change insecure defaults that cloud providers set as an initial starting point.

Here are some key principles for managing access and permissions on FaaS platforms.

Principle of Least Privilege
Functions are meant to be small, which makes it possible to reduce permissions to a minimal scope so that each function can access only what it individually requires in order to do its work. This reduces the attack surface and the amount of damage a successful attack can cause to the overall integration of several functions. It also reduces the impact in case of abuse and minimizes the likelihood of privilege escalation. For example, if a function does not require access to a database, it should not be provisioned with permissions to connect to that database. This ensures that when attackers compromise a function, they are isolated to a minimal scope of resources, limiting the scope and impact of the attack. A sketch of a narrowly scoped, per-function policy follows at the end of this section.

Security requirements must be part of the design early in the development lifecycle. After functions are deployed, reducing permissions at a later time is difficult and error prone. Teams that lack visibility into each and every function's purpose often choose to avoid it because it can introduce breaking changes; avoiding it means that these permission sets expand but never contract. Over time, this significantly increases risk to the organization, because any vulnerability, be it in the source code, in a third party, or in an open source library used, might give an attacker access to your entire kingdom rather than a restricted view of a small set of resources.

In a serverless platform, functions are triggered by a variety of cloud-supported events: when a function subscribes to an event type, it can be triggered every time an event of that type occurs. Events can originate from multiple sources, internal to the platform as well as external, and some origins might be unknown and should therefore be perceived as untrusted. If this is not taken into consideration, event sources can mistakenly be regarded as internal to the platform, and application developers might not pay the attention required to handle them as untrusted inputs that require security screening. Examples include storage-related events, such as adding or saving a file in an S3 bucket, or a new message being delivered to a queue in Azure's Service Bus.
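As a hedged illustration of least privilege in practice, the sketch below creates an IAM policy scoped to a single action on a single resource for one function. The table name, account ID, and policy name are placeholders, and the exact actions a real function needs will differ.

```python
import json
import boto3

iam = boto3.client("iam")

# Grant only the one action on the one resource this particular function needs.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
        }
    ],
}

response = iam.create_policy(
    PolicyName="orders-reader-function-policy",
    PolicyDocument=json.dumps(policy_document),
)
print(response["Policy"]["Arn"])  # attach to that function's execution role only
```

Attaching a policy like this to a dedicated execution role per function, rather than a shared global role, keeps a compromise of one function from exposing resources it never needed.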

SAS-3: Insecure Serverless Deployment Configuration​ (Brad)

Serverless architectures deployed to cloud providers and/or other hosted platforms benefit immensely from the programmatic nature of their target environments. Modern concepts such as Infrastructure as Code, Continuous Integration/Continuous Deployment, and commoditization of components are all supported by using purpose-built tools or pipelines to deliver changes into production in a repeatable and consistent manner. However, automation to this extent typically carries many high-risk considerations that must be accounted for when architecting the security of a serverless environment.

Best Practices:
● Authorizations to create, modify, or destroy elements within the environment must necessarily be provided to the tool responsible for deployments. If insufficiently protected, this level of privilege could be exploited to create unauthorized infrastructure, effect inappropriate data access or infrastructure changes, or, in extreme cases, cause catastrophic destruction of production data or resources.
○ Deployment tooling will by necessity have more privilege than developers,
engineers, administrators, or other contributors should ever have individually. As
a result, access to deployment tooling and associated functions should be strictly
controlled to ensure that contributors cannot exploit the tooling to escalate their
own level of privilege within the environment.
○ Deployment pipelines, as well as the assets, artifacts, and configurations that
drive them, should be protected against unauthorized modification. An effective
mitigation against inappropriate access is the use of version control with commit
signing, as this will ensure that any modifications are visible to all contributors
while providing both integrity and non-repudiation.
● Resources within an environment that are not controlled by deployment automation are often completely ignored by deployment tooling. As a result, such artifacts can easily become undocumented, derelict, or may unwittingly become single points of failure within an otherwise resilient environment. Effective inventory management, specifically with regard to resources created without the use of automation, is critical for avoiding unmaintained or 'snowflake' assets within the environment; a small inventory-check sketch follows this list.
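One lightweight way to surface resources created outside the pipeline is to compare deployed functions against the tag your deployment tooling applies. The tag key "ManagedBy" and its expected value are assumptions here; substitute whatever marker your tooling actually sets.

```python
import boto3

lambda_client = boto3.client("lambda")

# Walk every function in the account/region and flag those missing the
# marker our deployment automation is expected to apply.
paginator = lambda_client.get_paginator("list_functions")
for page in paginator.paginate():
    for fn in page["Functions"]:
        tags = lambda_client.list_tags(Resource=fn["FunctionArn"]).get("Tags", {})
        if tags.get("ManagedBy") != "terraform":
            print(f"Possibly unmanaged function: {fn['FunctionName']}")
```

The same idea extends to queues, buckets, and other event sources; anything the report flags is a candidate for either adoption into the automation or decommissioning.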

SAS-4: Over-Privileged Function Permissions and Roles (Vimal)


SAS-5: Inadequate Function Monitoring and Logging (Vimal)
SAS-6: Insecure Third-Party Dependencies (Vimal)
SAS-7: Insecure Application Secrets Storage (Brad)

Different components within serverless environments will differ in which other components (internal or external) they interact with, as well as in the level of access required to effectively perform the tasks they are responsible for. Ensuring that these components have access to the secrets they need, without granting unnecessary access to secrets meant for other components, is an important part of a serverless security architecture. The following are examples of secrets where granular access control may be prudent:

● Data encryption keys


● Authentication token signing keys
● Database or storage credentials
● API keys, tokens, or credentials for external interactions
● Private keys for asymmetric cryptography

Best Practices:

Authentication of service principals can be a difficult challenge, and some cloud service providers have features that make this easier. Prefer retrieving secrets from a managed secrets store at runtime over embedding them in code or environment variables; a minimal sketch follows.
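The sketch below shows one possible pattern, assuming AWS Secrets Manager and a secret named "prod/orders/db" (both assumptions); the value is fetched at runtime and cached across warm invocations instead of being stored in code or environment variables.

```python
import json
import boto3

_secrets_client = boto3.client("secretsmanager")
_cached_secret = None  # reused across warm invocations to limit API calls


def get_db_credentials():
    """Fetch and cache database credentials from the managed secrets store."""
    global _cached_secret
    if _cached_secret is None:
        value = _secrets_client.get_secret_value(SecretId="prod/orders/db")
        _cached_secret = json.loads(value["SecretString"])
    return _cached_secret
```

Access to the secret itself can then be restricted so that only the functions that genuinely need that credential are allowed to read it.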

SAS-8: Denial of Service and Financial Resource Exhaustion (Raja)

Serverless functions and containers are invoked based on demand and mostly serve just that one request. If each request is handled separately, how applicable are DoS and DDoS attacks to serverless applications?

Link to the chapter which addresses serverless execution limits!

Most cloud providers have a fair use policy to control serverless usage and keep one customer from being affected by another. This is one of the default shields the provider uses so that a customer is not starved of needed resources while a different customer is under a DDoS attack and depleting what is available. For example, if the number of functions that can be executed concurrently is 1,000, any new requests beyond that are put on hold, causing throttling.

Note: these limits are mostly soft limits and can usually be increased by raising a request with the serverless provider.

Best Practices:
1. Limit the number of requests from a specific IP address, a range of IP addresses, or a geographic location using an API gateway.
2. Rate limiting helps prevent abuse of public APIs. There are scenarios, however, where paid subscribers need to access the API beyond the default rate limit; API keys and JWT tokens can support this through gateway-level policies and configurations.
3. Secure the cloud-based event that will trigger the function. Event-driven functions are generally not behind an API gateway, so rate limiting via an API gateway will not help; you must secure the event trigger itself.
4. If the event trigger is compromised and you still need to make sure requests do not receive an HTTP error, you can configure the event to send the message to a queue and have the queued message invoke the function in turn. Be aware that this adds latency.
5. Finally, establish trust between the API gateway and the function, making sure that only the API gateway can invoke the function at any given time. Capping per-function concurrency is another way to bound both resource exhaustion and cost; a minimal sketch follows this list.
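Assuming AWS Lambda, a reserved concurrency limit can be set per function so that a flood of events cannot consume the whole account-level concurrency pool or run up the bill; the function name and limit below are placeholders.

```python
import boto3

lambda_client = boto3.client("lambda")

# Cap how many copies of this function can run at once. Requests beyond the
# cap are throttled instead of silently scaling (and billing) without bound.
lambda_client.put_function_concurrency(
    FunctionName="orders-processor",
    ReservedConcurrentExecutions=50,
)
```

Similar knobs exist on other platforms (for example, maximum instance counts); the design point is the same: make the scaling ceiling an explicit, reviewed decision rather than an unbounded default.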

SAS-9: Serverless Business Logic Manipulation (Raja)


SAS-10: Improper Exception Handling and Verbose Error Messages (John K)
SAS-11: Legacy / Unused Functions & Cloud Resources (John K)
SAS-12: Cross-Execution Data Persistency (John K)

A. Platform Security
Platform Configuration
● Secure the triggering agent (SAS-1)
● Least privilege for output binding
● Cleanup temporary space for isolated and batch processing functions (see the sketch below)
Platform Component Security Configuration to address any vulnerabilities
Platform Mitigations against compromise
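As a hedged sketch of the temporary-space cleanup item above (which also relates to SAS-12, cross-execution data persistency): most FaaS runtimes expose a writable scratch directory, commonly /tmp, that can survive between warm invocations, so clearing it before and after each run keeps one execution's data from leaking into the next. The path and handler shape are assumptions; adjust for your provider.

```python
import os
import shutil

TMP_DIR = "/tmp"  # writable scratch space on most FaaS platforms


def wipe_tmp():
    """Remove everything under the scratch directory."""
    for entry in os.listdir(TMP_DIR):
        path = os.path.join(TMP_DIR, entry)
        if os.path.isdir(path):
            shutil.rmtree(path, ignore_errors=True)
        else:
            os.remove(path)


def handler(event, context):
    wipe_tmp()  # do not trust leftovers from a previous (warm) invocation
    try:
        # ... isolated or batch processing work that writes to TMP_DIR ...
        return {"status": "done"}
    finally:
        wipe_tmp()  # leave nothing behind for the next invocation
```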

B Functions Secure Coding and Validation - ​(Brad)


- ​Input validation [through API Gateway] 

- Remote code execution 
- Vulnerability management of application and dependencies 
- Secure data access 
- Insecure logging 
- Exception handling 
- Confused Deputy attacks 
a. Managed versus unmanaged Functions as a service (private CaaS) 
b. Auditability and compliance 
a. Data and encryption, key mgmt. (Storage Security and Interfaces) 
b. Identity and access mgmt 
c. Threat detection - 
d. Zero trust, Service mesh implementations and controls/Microsegmentation
g. Identity and access mgmt 
Service Principal vs Users 
H. Orchestration 
i. Detection 
j. Policy enforcement and management 
k. Response automation 
l. Resiliency  
m.Scalability 
n. Synchronous or asynchronous functions 
o. Error handling  

6. Risk and Governance ​(Peter and J. Kinsella)


When an organization leverages the cloud computing model, it is able not only to reduce investments in hardware, facilities, utilities, and data centers; in theory, transferring this risk to the CSP should also reduce overall risk. There is a broad assumption that the cloud provider is making ongoing investments in platform security and managing those areas of risk, but how do we know that? Ongoing and regular assessment of the cloud provider's performance and quality of service is an essential part of the organization's security assurance program. The organization's sourcing team should regularly analyze and assess the state of the cloud provider to ensure that contractual obligations are being met.

The focus of the security team should be directed at identifying risk themes related to their serverless applications. What risks are new to the organization because of the use of serverless computing? An analysis of the risks or risk themes related to serverless computing can then be used to develop a better understanding of overall risk. We can then form risk statements that can be used to determine the likely harm resulting from a particular risk and begin an assessment of what to do about the risk.

6.1 Serverless Risk Themes

Key risk themes include the security of serverless functions and serverless dependencies: code vulnerabilities, overly permissive permissions, and improper secrets management.

Serverless functions can receive data from multiple sources such as API gateways and message queues, as well as from other serverless functions. The risks associated with this expanded attack surface should all be clearly identified and documented in the risk register so that appropriate controls can be designed and implemented.

Table: Risk Themes for Serverless, etc. along with the appropriate controls.

Risk Type | Control
Datacenter outage | Vendor
3rd party dependency (database, message queue, API) outage | Vendor
Compromise of 3rd party dependencies (IAM, keystores, etc.) |
Vulnerabilities in dependent libraries (e.g. react, boto…) |
Failure of deployment processes (tools failing to deploy, APIs not taking commands to change versions, etc.) |

6.2 Serverless Governance


From a governance point of view - function/container inventory should be mentioned
somewhere in this doc, in that it’s something different to track that users have not had to do in
the past, and requires new tools/thoughts.

7. A. Use cases and examples - ​Brad Woodward,
lead
Identify 4 examples why an organization would pursue a Serverless architecture.
Add a paragraph of each of the use cases and then the specifics for each.

Use cases​:
- Low cost/low maintenance for lowest barrier to entry.
- Minimizing maintenance responsibilities through the shared responsibility model.
- <Vrettos to add two, with supporting paragraphs>

One use case for embracing serverless architectures is the ability to achieve immense scale without broad technical expertise. It is possible to build, deploy, manage, and scale a serverless application without any resources expended maintaining hardware, operating systems, or supporting software. Cost is typically extremely granular, allowing costs to scale linearly with use, which results in consistent economic viability. Naturally, this approach is extremely beneficial in cases where technical or financial resources are limited, which makes it very popular among startups.

When embracing serverless primarily for cost, the security of the architecture can be an
afterthought. Consideration for the following can lead to a stronger security posture and better
outcomes:
● Minimize the blast radius. Ensuring that each individual component has only the access required to accomplish its own task will help minimize the impact if that component is compromised. (For example, on AWS the Security Token Service (STS) generates temporary credentials (an access key, a secret key, and a session token) for the user of the application/use case. These keys are essential for the application in order to invoke Lambda; see the sketch after this list.)
● Secure the data from the ground up. Encryption at rest and in transit, strict access and authorization controls, least privilege, and immutable access logging form the foundation for a secure environment.
● Offload authorizations to the platform. Many hyperscale cloud providers allow for extreme granularity in authorization controls. Leverage these as much as possible, and only write custom authorizations when absolutely necessary. (In general, serverless frameworks do not offer a way to manage API keys, secrets, or credentials. On top of that, none of the current solutions can ensure that the keys in use were actually produced before deployment. The developer has the option of using environment variables to store the keys, but that method is not viable when more than two developers are working together. On the other hand, the old-fashioned tactic of putting them into a repository violates best practices. Finally, the same applies if developers use tools to encrypt with KMS: looked at holistically, this merely shifts the problem from a repository to S3 (or similar storage services).)
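To make the STS example above concrete, here is a hedged sketch of obtaining short-lived credentials scoped to one role and using them to call Lambda. The role ARN, session name, and function name are placeholders.

```python
import boto3

sts = boto3.client("sts")

# Exchange a long-lived identity for short-lived, role-scoped credentials.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/orders-invoker",
    RoleSessionName="mobile-client-session",
    DurationSeconds=900,  # keep the credential lifetime short
)["Credentials"]

# The temporary access key, secret key, and session token expire automatically,
# so a leaked credential has a bounded window of usefulness.
scoped_lambda = boto3.client(
    "lambda",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
response = scoped_lambda.invoke(FunctionName="orders-processor")
print(response["StatusCode"])
```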

Organizations with compliance objectives can be attracted to serverless architectures due to the
nature of the shared responsibility model with the service providers. Adherence to frameworks
such as HIPAA, PCI, NIST, and others can be a substantial administrative burden, and allowing
a service provider to lighten the load may be the ultimate justification for embracing serverless
technologies.

When embracing serverless to benefit from the shared responsibility model, consider the
following:
● Configuration matters. Whether a particular service is compliant with a particular framework is likely to depend heavily on how the service is configured. It is your responsibility to ensure that the configuration is appropriate for your requirements.
● Know how the responsibility is shared. Some services may have unintuitive boundaries
on where they draw the line of responsibility with various compliance frameworks, and
unfamiliarity with these boundaries may result in non-compliance. Take the time to read
the shared responsibility model for the services you are considering, and ideally in
specific relation to the security frameworks you’re pursuing.

Design Patterns and Anti-Patterns for Serverless (Vrettos)

Serverless architecture is based on the premise that deployment is a transparent process in which the developer is not aware of the cluster or server to which the stateless functions are deployed. Although the inability to specify where the functions should run seems to weaken the architecture, it can be turned into a significant advantage for the performance of the application. Because the architecture does not allow computation and data to be explicitly colocated, it gives up a major design goal that prior systems like Hadoop and Spark were built around. However, several studies [1] have shown that storage locality does not provide significant benefits to the final performance of the system, especially nowadays when most infrastructure deployments tend to emphasize network bandwidth rather than storage I/O bandwidth. A great example that benefits from this design is numpywren, a serverless linear algebra framework [2].

[1] G. Ananthanarayanan, A. Ghodsi, S. Shenker, I. Stoica. Disk-Locality in Datacenter Computing Considered Irrelevant. In Proc. HotOS (2011).

[2] https://arxiv.org/pdf/1810.09679.pdf

#check if needed elsewhere… B. Lifecycle and Inventory, Discovery of functions

#check if needed elsewhere… C. Portability of functions across multiple platforms or any limitations thereof
Interoperability

8. Other considerations
A holding place for anything we have missed or something that is relevant but doesn’t go in any
of the other sections.

9. Futuristic Vision for Serverless Security (Lead: Ricardo Ferreira)
As we have seen in this paper, serverless brings many benefits and also new challenges. Key areas that will be important in the coming years are the quality of the code being run on the platform, since the execution model is highly distributed; the ability to operate on encrypted data from a variety of sources; and methods to preserve privacy while the data is accessed.

As such, we will focus on what is on the horizon for application security, coupled with advances in programming language adoption, the secure development lifecycle, and formal methods in software development.

1. The road for serverless
a. FaaS evolution
 i. OpenWhisk/Knative integration and adoption by the CSPs
 ii. Language adoption in serverless
 iii. Lightweight infrastructure virtualization frameworks (brief informative paragraphs)
  1. Anykernels/unikernels, Firecracker/gVisor/Kata

2. Where is security heading with serverless: security considerations of moving to a code-first (AppSec-based) approach (Eric)

IAM
Zero trust
Supply chain

3. (Ricardo)
We also briefly touch on advances in cryptography, machine learning, and privacy methods that allow operations to be performed on data without revealing the underlying plaintext.
a. AppSec SDLC/IAM formal verification
b. Encryption advances (OPE, homomorphic encryption, ML methods, ABE, PIR, privacy-preserving indexing)

Seed Papers​:
Rise of Serverless Computing, Overview of Current State and Future Trends in Research and Industry
https://arxiv.org/ftp/arxiv/papers/1906/1906.02888.pdf
Go Serverless: Securing Cloud via Serverless Design Patterns
https://pdfs.semanticscholar.org/89cf/9e58b931bc4755ab00b7d100ba13e43d64d9.pdf?_ga=2.12874417.725391457.1585303684-1318777614.1585303684
Formal Foundations of Serverless Computing
https://arxiv.org/pdf/1902.05870.pdf
BeyondProd
https://cloud.google.com/security/beyondprod

10. Conclusions

11. References

[Jericho Forum-White Paper, 2007] Jericho Forum - White Paper. (2007). Business rationale for de-perimeterisation. https://collaboration.opengroup.org/jericho/Business_Case_for_DP_v1.0.pdf [Last accessed: 09/04/2020]

[Jericho Forum Commandments, 2007] Jericho Forum™ Commandments. The Jericho Forum commandments define both the areas and the principles that must be observed when planning for a de-perimeterized future.

https://serverless.com/blog/fantastic-serverless-security-risks-and-where-to-find-them

https://owasp.org/www-pdf-archive/OWASP-Top-10-Serverless-Interpretation-en.pdf

https://cloudsecurityalliance.org/blog/2019/02/11/critical-risks-serverless-applications

[Berkley serverless 2019] https://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-3.pdf

Appendix A: Acronyms

Selected acronyms and abbreviations used in this paper are defined below.

API Application Program Interface

CaaS Container as a Service

CSP Cloud Service Provider

FaaS Function as a Service

SCaaS Serverless Container as a Service

VNet Virtual Network

VPC Virtual Private Cloud
