Serverless White Paper PDF
Date: 2020
Reviewers/Visitors:
● If you have a Google Account, please login before commenting. Otherwise, please note
your name in the comment you leave to ensure we assign you proper credit for your
efforts.
● Use the Comments or Suggesting features on Google docs to leave your feedback on
the document. Suggestions will be written in and identified by your Google Account. To
use the comments feature, highlight the phrase you would like to comment on, right click
and select “Comment“ (or Ctrl+Alt+M). Or, highlight the phrase, select “Insert” from the
top menu, and select “Comment.” All suggestions and comments will be reviewed by the
editing committee.
● Focus all comments on the content of the document rather than syntax or grammar.
CSA will have copy editors address syntax and grammar once the review period is
complete.
© 2020 Cloud Security Alliance – All Rights Reserved. You may download, store, display on
your computer, view, print, and link to the Cloud Security Alliance at
https://cloudsecurityalliance.org subject to the following: (a) the draft may be used solely for
your personal, informational, non-commercial use; (b) the draft may not be modified or altered in
any way; (c) the draft may not be redistributed; and (d) the trademark, copyright or other notices
may not be removed. You may quote portions of the draft as permitted by the Fair Use
provisions of the United States Copyright Act, provided that you attribute the portions to the
Cloud Security Alliance.
Page 2 of 59
Acknowledgments
Editor:
Team Leaders/Authors:
Authors:
CSA Staff:
Reviewers:
Special Thanks:
Copy Editor
0.1 Document Project Plan:
Meeting Dates Key points regarding the deliverable
Contributors:
Contributor Email
0.2 Team / Contributor Composition
Contributors | Areas of Contribution
Aradhna Chetal ([email protected]) | Goals and Objectives, Security Threats, Security Controls, Conclusion
Liz Vasquez ([email protected]) | Security Threat Model (Platform and Serverless Application Threats), Security Controls
Michael Roza ([email protected]) | Intro, Goals & Audience, Security Controls & Best Practices, Conclusion
Brad Woodward ([email protected]) | Security Threat Model of Serverless, Security Controls and Best Practices, Use Cases
John Wrobel (Eastern Time) ([email protected]) | Why Serverless, Security Threat Model of Serverless
[email protected]
Meeting leads: Vishwas Manral (Pacific Time), John Wrobel, Elizabeth Vasquez (Pacific Time), Aradhna Chetal, Raja Rajenderan, Peter Campbell (GMT), Brad Woodward, Ricardo Ferreira, All, Aradhna Chetal (Pacific Time)
Table of Contents
Acknowledgments 4
Table of Contents 8
Executive summary 9
1. Introduction 10
Audience 10
2. What is Serverless 12
8. Other considerations 49
10. Conclusions 51
11. References 52
Appendix A: Acronyms 53
Executive summary
Serverless platforms enable developers to develop and deploy faster, providing an easy path to cloud-native services without having to manage infrastructure, including container clusters or virtual machines. This paper covers security for serverless applications, focusing on best practices and recommendations.
From a software development perspective, organizations adopting serverless architectures can
focus on core product functionality, without having to be bothered by the underlying operating
system, application server or software runtime environment.
Recommendations and best practices were developed through extensive collaboration among a
diverse group with strong knowledge and practical experience in information security,
operations, application containers, and microservices. The recommendations and best practices
contained herein are intended for Developer, Operator and Architect audiences.
1. Introduction
Because many details of Serverless Container as a Service (SCaaS) are covered in other documents, this document focuses only on the aspects that change as a result of a serverless implementation, rather than on all CaaS-related details.
As part of this document we will focus mainly on the Serverless Application Owner and the
recommended security practices.
The primary goal of this paper is to present and promote serverless as a secure cloud-computing execution model and to help organizations looking to adopt serverless architecture. It then identifies applicable risks, threats, and vulnerabilities, followed by recommendations for the security controls and best practices needed to secure a serverless environment. The paper closes with a vision of serverless, including its forms, benefits, risks, and controls.
Audience
The intended audience of this document is application developers, application architects,
security professionals, CISOs, risk management professionals, system and security administrators,
security program managers, information system security officers, and others who have
responsibilities for or are otherwise interested in the security of serverless computing.
The document assumes the readers have some knowledge of coding practices along with some
security and networking expertise, as well as application containers, microservices, functions
and agile application development. Because of the constantly changing nature of technologies in the serverless space, readers are encouraged to take advantage of other resources, including those listed in this document, for current and more detailed information.
Readers are encouraged to follow industry-standard practices related to secure software design, development, and build.
2. What is Serverless
a. Definition of serverless
Serverless computing is a cloud-computing execution model in which the cloud provider takes on the runtime management of compute and dynamically manages the allocation of machine resources, whether physical or virtual, including all aspects of compute, storage, and networking.
If there are no servers involved in the execution, how does it work? The name serverless actually describes only the behavior experienced by the end user of the service. Under the hood there still exist servers that execute the code, but they are abstracted away from developers and serverless users.
Pricing for serverless is based on the actual amount of resources consumed by an application,
rather than on pre-purchased units of capacity.
Function as a Service (FaaS) and Serverless Container as a Service (SCaaS) are two well
known models of serverless computing.
b. FaaS and SCaaS differences
The basic difference is that in SCaaS the basic unit of execution the customer provides is a container, while in FaaS it is function code.
In SCaaS the application is packaged into a container by the customer. The provider uses Docker/containerd as the virtualization layer and manages the scaling of containers and all aspects of the machine resources the containers run on.
In FaaS the customer provides the application function code, along with the event triggers that cause the function to be invoked. The provider takes the provided function, builds it, and creates the triggers. No infrastructure administration, provisioning, or management is required of the customer; these are the responsibility of the provider.
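To make the FaaS model concrete, the flow above (the customer supplies function code, the provider wires up the event triggers) can be sketched as a minimal, provider-agnostic handler. The event shape and handler signature here are illustrative assumptions, not any specific provider's API:

```python
import json

def handler(event, context=None):
    """Minimal FaaS-style handler: the provider invokes this function
    whenever a configured trigger (HTTP request, queue message, file
    change, ...) fires. The customer supplies only this code."""
    # The event payload is whatever the trigger delivers; a simple dict
    # with a "name" field is assumed here purely for illustration.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }

# In production a trigger would invoke this; we simulate one invocation.
response = handler({"name": "serverless"})
```

Everything outside the handler (building, scaling, routing the trigger to the function) is the provider's responsibility in this model.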
Dependency: FaaS is application-programming-language specific, while SCaaS can run applications independent of the language the code is written in, because the binary and dependencies are packaged in the container.
To better explain the shared responsibility model between the Platform Provider and Application
owner for the different models we have created the diagram below.
d. Hybrid serverless architecture (private & public)
There are many serverless architectures. Some common examples include (not a comprehensive list):
● Amazon: Lambda, Fargate, AWS Batch
● Google: Cloud Functions, Knative
● Azure: Azure Functions, Azure Container Instances
● Nimbella: OpenWhisk
● IBM: OpenWhisk
3. Why Serverless
a. Characteristics of Serverless
b. Advantages/Benefits of Serverless architecture
● Cost
○ Infrastructure Cost
■ Priced (usually) on a per-request basis, which means you
don’t need to pay when you’re not using the infrastructure
■ Cost-efficient for burst workloads - you don't have to maintain servers at times they are not required
○ Operational Cost
■ Not having an infrastructure to manage can cut labor cost
and time spent on maintaining it.
● Developer experience
○ Easy to deploy
■ Serverless services can be easily deployed with minimal
configuration with CLI tools, from source control or
through a simple API
○ Easy to monitor
■ Most cloud providers offer out-of-the-box logging and
monitoring solutions bundled with their serverless offering
○ No server management overhead
■ Serverless services abstract all server management tasks
such as patching, provisioning, capacity management,
operating system maintenance
● Scale
○ Scalable by nature
■ Serverless auto-scales based on usage without having to set up any additional infrastructure
■ There’s no need to configure policies for up or down
scaling
■ When working on premises, scaling is limited to the available infrastructure
c. Shared responsibility model for serverless
Service | Application owner | Serverless Platform Provider
Secure coding practices | CaaS, FaaS | —
d. When is serverless appropriate
The serverless model is most appropriate in cases where there is a relatively large application
or set of applications, and several DevOps type teams available to support them. In such a
case, the application(s) can be broken down into smaller components called Microservices (see
Microservices Best Practices whitepaper), with each being supported by one or more teams and
running in a serverless environment. This allows for more effective use of development
resources by allowing them to focus on a single specific piece of functionality. This model also
allows for more agile development of each individual microservice when compared to a
monolithic application, because functionality for each part of the application can be moved into
production without as much concern for full integration and regression testing with the other
parts of the application.
With relatively small applications or teams, a serverless model can sometimes be less efficient
than having a traditional infrastructure to support the application (such as IaaS or PaaS
services). With a smaller application, there is typically less complexity, and the benefits of
breaking the application down into microservices are lost. In such a case microservices can end
up being so tightly coupled with other services that some benefits of microservices such as
reusability are lost. Also, with insufficient resources to support many microservices, teams may
have to stop work on one microservice to support another.
It is also important to note that in almost all cases serverless architectures will simplify the
deployment process. This is because in most cases deployment consists of simply uploading a
container image or set of code, without as much concern for resource provisioning and network
architecture as with a traditional application deployment. It is important for organizations to
perform a cost/benefit analysis when making a decision around using serverless architectures,
so that they can choose the solution that is most technically efficient and cost-effective for their
needs.
Serverless is a good example as it allows a beginner to deploy functions without any
understanding of the underlying Cloud infrastructure, saving time and resources by being
focused on the real business value.
As such serverless quickly gained adoption, bringing a paradigm shift on how organisations
consume Cloud services.
Looking at it from a historical point of view, we have been moving away from a reliance on perimeter security. In 2007 the Jericho Forum project [Jericho Forum White Paper, 2007] identified the Collaboration-Oriented Architecture as a precursor to the massive changes that would be introduced by these new paradigms [Jericho Forum Commandments, 2007], such as:
● People and systems need to manage permissions of resources and users they don't
control.
● There must be a trust mechanism for organizations, which can authenticate individuals
or groups, thus eliminating the need to create separate identities.
● Systems must be able to pass on security credentials.
● Multiple areas of control must be supported.
These best practices were envisioned out of the necessity of being secure in an increasingly de-perimeterized world, driven mostly by the increase in connectivity, the ability to inter-work securely over the Internet, and the proliferation of cheap IP devices (IoT). The diagram below highlights the evolution of de-perimeterization, driven by business value and the digital economy and pushed by cost, flexibility, and agility.
We have now reached this point with serverless, event-driven architectures, and microservices.
Figure: Jericho Forum and the de-perimeterization push (Jericho Forum White Paper, 2007)
Since the digital economy expands the corporate boundary, the existing networking model has
faded and become less relevant for security boundaries, as organisations started to use new
computing paradigms.
This means the new paradigms changed the traditional risk in these new environments
(serverless, microservices, and IoT) where typical flows cross untrusted boundaries, such as
getting data from a public location, putting data into customer premises, calling functions in
other locations, etc.
Serverless Model
In this new model, a traditional network is not needed for functions to operate, and their lifecycle is quite different: some functions live for only milliseconds before they are destroyed.
In this scenario of increased connectivity and integration, the de-perimeterization is quite
evident.
As such, we see a risk evolution in which traditional controls such as firewalls, L7 firewalls, routing, IDS/IPS agents, and SIEMs are not yet able to deal efficiently with this new paradigm.
The threat landscape has now shifted as the attack surface tied to managing the Operating
System, the programming language runtime, and infrastructure is removed.
But now we have internal service calls that expand across the Cloud estate, requiring further thought about our services forming part of our supply chain as we architect with security best practices in mind. For example, the scenarios below highlight some of the risks:
● A function that now spans across the Cloud estate streaming data into several services,
crossing untrusted boundaries.
● The number of function invocations increases enormously, due to the nature of event-driven architectures; e.g., every time a file changes, several functions might be used.
● Chained functions being used, some of those pulling data from several trusted or
untrusted sources.
With these models the complexity is transferred to another part of the stack, requiring us to architect differently for security. The examples below highlight some of the required changes:
● Configuring HTTPS, managing certificates
● Configuring and administering DNS
● Addressing all the security requirements in a serverless stack:
○ Functions
○ Storage
○ Message queuing
○ noSQL
○ Automation
○ Gateways
○ Authentication
As previously stated, although the traditional risk surface decreases due to infrastructure abstraction, other security vectors now shift the attack surface, requiring the organisation to manage the Cloud provider configuration, tighten IAM roles, and think about the security process in the application development lifecycle.
Aspects like logging and monitoring have traditionally been handled inside the application stack (e.g., by Nginx or Apache), abstracted from the user; in this paradigm, developers must write logging and monitoring events along with the code.
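As a sketch of what this shift looks like in practice, a function can emit its own structured log events from within its code. The event names and fields below are hypothetical choices for illustration, not a prescribed schema:

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("order-function")  # hypothetical function name

def log_event(action, **fields):
    """Emit a structured (JSON) security/operations event from inside the
    function, since there is no web-server layer doing it for us."""
    record = {"ts": time.time(), "action": action, **fields}
    logger.info(json.dumps(record))
    return record  # returned so callers/tests can inspect it

def handle_order(event):
    log_event("invocation.start", source=event.get("source", "unknown"))
    if "order_id" not in event:
        # Rejected input is itself a security-relevant event worth logging.
        log_event("input.rejected", reason="missing order_id")
        return {"status": "rejected"}
    log_event("invocation.end", order_id=event["order_id"])
    return {"status": "ok"}

result = handle_order({"order_id": "42", "source": "queue"})
```

Centralizing events like these is what makes platform-level monitoring and alerting possible later.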
Research has begun on key areas to improve the security of such platforms and create a more robust approach: translating security policies from existing applications, and offering security APIs for dynamic use in functions (e.g., a function may have to delegate security privileges to another function or Cloud service). Access control mechanisms using cryptographically protected security contexts could also fit this distributed security model [Berkeley paper, 2019]. We expand on these topics in the futuristic chapter of this paper.
Following are key areas that require special focus during serverless adoption:
● Supply Chain security
● Data Flow Security
● Attack Detection
● Delivery Pipeline
The above areas are expanded and described in more detail below. For a comprehensive reference, we unified and collapsed several sources [sources 1, 2, 3] to cover the most common serverless threats:
Threat Description
a multitude of event sources, which
can trigger the execution of a
serverless function.
point in which attackers assess the
application for weaknesses and
potential vulnerabilities.
● Many successful attacks could have
been prevented if organisations had
efficient and adequate real-time
security event monitoring and logging.
session contexts, it is possible that
sensitive data will be left behind and
might be exposed.
The serverless paradigm uses functions that, correctly coupled with third-party services, allow an organisation to run end-to-end applications without having to care about traditional infrastructure management. As with any new technology, however, serverless patterns are still emerging and require a more thoughtful security approach; those patterns will mature over time, with the effort of this and other papers in the area.
4.2 Categories of Threats
There are a few categories of threats to serverless, most of which can be broadly broken
into two categories: threats to serverless platforms, and threats directly to the
applications built on serverless platforms. Platform threats target the containerization
and orchestration platforms that enable the serverless functionality, while application
threats focus on the applications that developers build to run on top of serverless
platforms. Aside from these, there are a few relevant threats that do not fit neatly into
either category, which will also be addressed.
platform.
3. Unauthorized access
a. Improper configuration of user access can result in some
users having technical access that exceeds the access
they are intended to have as dictated by policy. This can
result in users having access to configuration items, data,
or other secrets which they are not authorized to have
access to according to organizational policy.
4. Portal/console vulnerabilities
a. Most container platforms and SCaaS offerings have some
sort of portal or console which allows users to configure
and manage the platform. These portals/consoles are often
just a web application that grants access to manage the
platform in one way or another. Like any other web
application, these portals/consoles are susceptible to any
number of vulnerabilities, which in their case could result in
unauthorized access to the platform on multiple levels.
B. Serverless Application Threats
Applications built with serverless architectures are also exposed to many
of the same threats as applications built with other architectures, such as:
1. Environment misconfiguration
a. Even with FaaS and SCaaS environments, users are given
varying levels of control over the configuration of the
environment. Mismanagement of these configurations can
leave the platform and resident applications vulnerable.
b. Exposed ports
i. As with many types of environments, the open
ports associated with the environment and resident
applications need to be carefully managed to
prevent unintentionally leaving ports open that may
leak information, allow attackers to gain
unauthorized access to either the environment or
application, or launch attacks against it.
c. Disabled/default configuration
i. Many services for hosting serverless applications
are insecure by default. This may include default
credentials for administrative accounts, open ports,
or lack of authentication services.
d. Exposed credentials
i. Some services use a standard set of credentials for
administrative accounts that should be changed by
users during setup, or some combination of other
vulnerabilities may end up exposing credentials to
attackers.
2. Vulnerable dependencies
a. As with any other kind of applications, those that run in a
serverless environment are susceptible to vulnerabilities
introduced by software dependencies that are required for
the application to be run.
b. Vulnerable images
i. Unique to container-based serverless applications
is the threat of vulnerable images. Often,
developers use some sort of base image upon
which they will build their application. Depending on
the source of these images, they are susceptible to
many types of vulnerabilities, including
vulnerabilities in pre-installed dependencies, as
well as pre-installed malware in some cases.
3. “Embedded” malware
a. As mentioned above, container-based serverless
applications frequently use pre-made images as the
foundation of their applications. In some cases, these
images may have malware already installed on them
before the developer starts modifying the base image.
Malware seen in this capacity is widely varied, but some of
the more popular cases include keyloggers and
crypto-miners.
C. Other threats (Deployment, execution, operational etc.)
The serverless model also faces some other categories of threats, such
as:
1. Attacks against or through automated deployment tools
a. Serverless architectures frequently lend themselves
towards highly automated change integration and
deployment techniques. By their nature, these tools
operate in a “hands-off” capacity, and DevOps teams may
not frequently interact with their configurations. As a result,
some attackers may target these automation tools as a
way of incorporating malicious code into a target
application, or as a way to cause a denial of service with
regards to application updates.
2. Exploited code repositories
a. Similarly to automated deployment tools, shared (public or
private) code repositories present an enticing target for
attackers seeking to carry out a supply chain attack. If
code repositories are not properly secured, attackers may
attempt to incorporate malicious code into the application
by committing said code to the application’s code
repository.
3. Exploited image registries
a. Image registries can be exploited in a similar way to code
repositories if they are not properly secured. An attacker
may attempt to overwrite an existing image with a version
that has some sort of malicious tool/application/code
embedded within it. See “Embedded” malware above.
First, however, let's not discount the fact that serverless architecture has many benefits, even from a security perspective, that should be weighed as part of secure application design considerations. Some of those benefits include:
1. Stateless and Ephemeral: Short-lived serverless functions process unencrypted data in memory for only a short period of time and do not write to a local disk; functions that need to persist state therefore rely on external data stores. This reduces the likelihood of exploits by attacks designed for long-lived targets.
2. Each serverless function requires only a subset of your data to perform its narrowly focused process. As long as the function is permissioned correctly to access only the data it requires, a successful exploit of that function is also limited in the data it can potentially exfiltrate.
By now you have a greater understanding of what Serverless is and how the security responsibility has shifted: you now delegate most of the underlying responsibility to a Platform Provider. Along with this shift of responsibility you have also inherently lost a lot of visibility and manageability. Where are my functions actually running? What is the
actual network exposure of my functions? Are my functions executed in an idle
container already initialized by a previous execution as a “warm start” or a newly
instantiated container as a “cold start”? Is any of my previous data still available in
cached memory?
So let’s start by understanding some of the inherent weaknesses that you need to
consider as you design your Serverless applications.
Functions that run on containers managed by the CSP (FaaS) are configured with
open network policies by default. Those functions can therefore access any
endpoint on the internet. If a function with a privileged access control level allowing
retrieval of your confidential data is compromised, then that same function can be
used to exfiltrate that data to any internet accessible endpoint.
When you run serverless functions in your managed container environment (SCaaS),
then you are typically already defining strict network and service policies, including
allowable communication paths and the network exposure of your services.
i. Apply network policies to limit the endpoints that are reachable from the FaaS by using the Virtual Private Cloud (VPC) network policy configurations made available by the various CSPs.
ii. Apply service or resource policies that limit the endpoints that can access your data stores / services, thereby reducing the exfiltration paths for your data.
Examples include: AWS VPC endpoints, Azure VNet Service endpoints, and
Google VPC Service Controls
Serverless applications are built upon a microservices architecture, so you build many logically focused functions that run either concurrently, to scale out your processing, or sequentially, with dependencies on state and expected outcomes.
However as you continue to grow the number of functions that you make available
whether for a single application or as distributed APIs for use by many larger
functional applications, challenges arise to fully know how that function is executed.
Are all those functions executed within a VPC, especially if the criticality of the data
requires that you minimize any public exposure? Are all the roles you created for
each function tuned to allow the minimum required privilege? Did developers reuse
already existing roles so that functions that need to read data only are using the
same role as functions that actually need to process and update your data values? If
an application redesign now specifies that the function will be launched by an event queue instead of an HTTP event, has the HTTP event trigger been removed as part of the deployment options? Are all the functions that can be triggered by an HTTP event behind an API gateway?
Suggested Controls:
i. Logging: Ensure that you are using integrated logging that you can centralize to facilitate your overall application performance and security monitoring. Platform Provider logging will help collect statistics on the number, duration, and memory usage of your function's executions. Visibility of application errors is within your control by adding logging statements as needed within your serverless functions. For instance, does your error logging enable you to identify whether a failure occurred in a process you defined, from unexpected data input, or as a result of a third-party functional process?
ii. Monitoring: Use application and security monitoring tools that surface visibility not only into how often a function is executed, but also into its logical execution path. You should be able to monitor and discover all the APIs or
endpoints that you are exposing and all your downstream or dependent APIs.
You should be able to monitor and discover which events and roles are executing
your functions, and any unexpected execution paths or methods.
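The last point, discovering which events and roles execute your functions and flagging unexpected execution paths, can be sketched as a simple invocation recorder. The trigger and role names here are hypothetical:

```python
from collections import Counter

# Hypothetical set of (trigger, role) pairs expected to invoke this function.
EXPECTED = {("queue", "order-reader"), ("schedule", "order-reader")}

invocations = Counter()

def record_invocation(trigger, role):
    """Record who/what executed the function; return False when the
    execution path is not one of the expected ones, so it can be flagged."""
    invocations[(trigger, role)] += 1
    return (trigger, role) in EXPECTED

ok = record_invocation("queue", "order-reader")
suspicious = record_invocation("http", "admin-role")  # unexpected path
```

In a real deployment the counts and flags would feed a centralized monitoring pipeline rather than stay in process memory.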
Serverless security depends on the configurations of not just the functions but on all
of the upstream and downstream services that compose your Serverless application.
Your Serverless application is more than likely using many Platform Provider PaaS
services, so it is important that you understand the default configurations of those
PaaS services and all of your security hardening options.
For instance, if you are using PaaS data services, are you still using any default configurations, and is that configuration potentially exposing your data publicly? Are you using default Platform Provider IAM roles that allow your function to read and write data, when your function only needs to read and process that data?
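The role-reuse question above can be illustrated with a toy policy model. The policy shape and action names are invented for illustration and do not match any provider's actual IAM syntax:

```python
# Toy policy model; real IAM policies differ per provider.
READ_WRITE_ROLE = {"actions": {"data:Read", "data:Write"}}
READ_ONLY_ROLE = {"actions": {"data:Read"}}

def excess_privileges(role, needed):
    """Return the actions a role grants beyond what the function needs."""
    return role["actions"] - needed

# A function that only reads data:
needed = {"data:Read"}
excess_if_reused = excess_privileges(READ_WRITE_ROLE, needed)  # over-privileged
excess_if_scoped = excess_privileges(READ_ONLY_ROLE, needed)   # least privilege
```

Reusing the read/write role leaves an exploited read-only function able to modify data; a scoped role leaves nothing extra to abuse.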
Suggested Controls:
ii. Perform application security testing to ensure that your configurations meet all of
your security requirements. Application security testing should help you validate
the methods and roles that are able to access your functions or data. More importantly, application security testing should help reveal any configuration weaknesses that still exist, not just whether your application logic or input validation is exploitable by injection attacks.
Are you designing a Serverless Application that will access and process confidential
data or highly regulated data? If so, the security requirements for how that data is protected increase, and you have to consider not only data exfiltration by external attackers but malicious insiders as well.
Platform Providers are executing your Serverless FaaS and CaaS applications on
compute instances that are not encrypted by default. Furthermore, your Serverless
applications most likely rely on PaaS data services offered by Platform Providers that
are not encrypted by default.
Suggested Controls:
i. Ascertain if your Platform Provider offers the option of deploying your Serverless
code on either FaaS or CaaS that executes on a confidential compute instance.
This may help you decide between the use of Serverless FaaS or CaaS to
properly meet application and data security requirements and risk posture.
ii. Ascertain the default or managed encryption options for each PaaS Data Service
that you are incorporating into your Serverless application and will hold that
confidential or highly regulated data. Ensure that your use of and configuration of
the PaaS Data Service will meet your application and data security requirements.
Suggested Controls:
i. Incorporate Software Composition Analysis (SCA) of any third-party libraries into your deployments. This allows you to discover not only how extensive your dependencies are, but also whether you are introducing risk through already known vulnerable components.
ii. Use security monitoring solutions that can identify vulnerable third-party libraries at run time and also identify included libraries that are not being used. Remove redundant libraries to reduce the risk of unnecessarily including vulnerabilities.
C. What weaknesses can you introduce in your design?
Scenario 2: You publish your Serverless functions behind an API Gateway that
limits the number of requests (such as setting an upper limit of 10,000)
and applies a request filter (such as defining the JSON parameters
expected with the request), but you are not enforcing authentication and
authorization.
Potential Result: An attacker invokes your function with requests that are formatted
like legitimate ones, passing the API Gateway's request-filtering checks and
successfully instantiating functions until the upper limit is reached. This causes you
to process a high number of unauthorized requests designed solely to force you to
pay for services, leading to financial resource exhaustion, also known as
Denial of Wallet.
Suggested Controls:
i. Publish your Serverless functions behind an API Gateway where you not only
set input request filters but also incorporate authentication and
authorization for each of those functions.
We need to check
https://cloudsecurityalliance.org/artifacts/best-practices-in-implementing-a-secure-microservices-architecture/
and see which threats we can reference from there.
Talk about Process Flow or data flow and how important they are for correct modeling
5. Security controls and best practices (Aradhna & Raja)
(Cloud Controls Matrix for serverless)
Figure A: Function order of execution can lead to security vulnerabilities; instead, treat every function
as its own security perimeter. (Ref.: Serverless Security by Liran Tal)
We need a diagram here that shows the different layers of the stack and the shared responsibility
model, e.g., for CaaS, the layers that the provider controls.
Control Category | Sub-Category | Control Description | FaaS | CaaS | Comments
● Customer Access Requirements
● Data Integrity
● Data Security
● Key Generation
● Sensitive Data Protection
● Credential Lifecycle / Provision Management
● Diagnostic / Configuration Ports Access
● Policies and Procedures
● Segregation of Duties
● Trusted Sources
● User Access Authorization
● Change Detection
● Clock Synchronization
● Vulnerability Management
● Network Security
● Production / Non-Production Environments
● Segmentation
● Cleanup temporary space for isolated and batch processing functions
● Platform Component Security Configuration to address any vulnerabilities
● Platform Mitigations against compromise
● Orchestration
i. Detection
j. Policy enforcement and management
k. Response automation
l. Resiliency
m. Scalability
n. Synchronous or asynchronous functions
o. Error handling
● Access Enforcement (RBAC/ABAC/NGAC)
● Information Flow enforcement
● Least Privilege
● Time stamps
● Audit events
● changes (policy enforcement)
● Authentication (user, device, service, function)
● Credential management and identifier management
● Development process, tools, and technologies controls
● Data-in-transit protection
● Vulnerability management: Unauthorized changes to the contents of images can easily be detected and the altered image replaced with a known good copy.
handling
● Vulnerability management of applications and dependencies
● etc.
Separating user functionality from administrator functionality can be accomplished in part
by using containers or other virtualization technologies so that the functionality is
performed in different containers.
Serverless functions and containers can be invoked both by a web-based HTTP call and by
a cloud-based event. An event in a cloud platform indicates a change in a cloud resource.
Event-data injection is one of the most abused attack vectors in serverless platforms to date,
and it applies to both FaaS and CaaS. The most common types of injection are OS command
and SQL injection. Essentially, an event passes data to the backend serverless application as
part of the trigger. The input from the event data will vary based on the event source, and it
may include content that was not planned for by the application developer. This can allow
attackers to pass strings that are executed in the runtime.
Best Practices:
● Input Validation: Developers are always responsible for the application code and should
take the utmost care in sanitizing input from the event data. Extract only what is
required for the application to execute.
● Adding input validation to the deployment package itself can result in poor
performance and added cost from increased execution time. A best practice is to separate
the input validation functionality from the deployment package; mechanisms such as AWS
Lambda Layers or Azure Functions input bindings can be used.
● Use a function wrapper (a function that proxies the original function). The wrapper
should contain the logic to validate input and then pass the validated input on to the actual
processing function. One input validation function can serve more than one application
function if the logic is the same, but cost is a factor here.
● Several commercial runtime protection solutions are available that inspect input data
and prevent malicious code from being executed as part of the function.
● If functions are hosted as APIs, front them with a proxy such as an API Gateway.
Most cloud providers' gateway services can handle input validation and are convenient,
ready-to-use solutions.
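The function-wrapper pattern above can be sketched in a few lines. This is a minimal, provider-agnostic illustration; the event shape, the `order_id` field, and the validation rule are hypothetical assumptions, not any particular platform's API:

```python
def validated(handler):
    """Wrap a handler so event data is validated before the business logic
    ever sees it (the 'function wrapper' pattern described above)."""
    def wrapper(event):
        order_id = event.get("order_id", "")
        # Accept only the narrow input shape the handler actually needs.
        if not (isinstance(order_id, str) and order_id.isalnum() and len(order_id) <= 32):
            return {"status": 400, "error": "invalid order_id"}
        # Pass on only the validated fields, never the raw event.
        return handler({"order_id": order_id})
    return wrapper

@validated
def process_order(event):
    # Business logic runs only on sanitized input.
    return {"status": 200, "order_id": event["order_id"]}

ok = process_order({"order_id": "A123"})
bad = process_order({"order_id": "1; DROP TABLE orders--"})
```

The same wrapper can be reused in front of several functions when the validation logic is shared, as the text notes.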
Authentication and Access Controls (SAS-2: Broken Authentication- remove this after)
(Aradhna)
In a serverless architecture, the functions need to be customized and configured in terms of:
a. The cloud platform resources and events that trigger a function's execution
b. The resources to which functions have access
c. The set of permissions that functions need in order to access those resources. The functions
must all be connected to form business flows.
Here are some key principles for managing access and permissions on FaaS platforms.
Granting each function only the permissions it needs also reduces the impact in case of abuse
and minimizes the likelihood of privilege escalation. For example:
If a function does not require access to a database, it should not be provisioned with
permissions to connect to that database. This ensures that when attackers compromise a
function, they are isolated to a minimal scope of resources, limiting the scope and impact of the
attack.
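As a sketch of this least-privilege principle, the following builds an AWS-style policy document scoped to one specific table. The ARN, table name, and action names are illustrative assumptions following AWS conventions, not a definitive configuration:

```python
import json

def make_function_policy(table_arn: str) -> dict:
    """Build a least-privilege, AWS-style policy document granting a single
    function read access to exactly one table and nothing else."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["dynamodb:GetItem", "dynamodb:Query"],
                "Resource": table_arn,  # one named table, never "*"
            }
        ],
    }

policy = make_function_policy(
    "arn:aws:dynamodb:us-east-1:123456789012:table/orders"
)
print(json.dumps(policy, indent=2))
```

A function with no database dependency would simply receive a policy with no database statement at all, keeping a compromised function confined to its own resources.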
Security requirements must be part of the design early in the development lifecycle.
After functions are deployed, reducing permissions at a later time is difficult and
error-prone. Teams that lack visibility into each and every function's purpose often avoid
doing so because it can introduce breaking changes, which means that these
permission sets expand but never contract. Over time, this significantly increases risk to the
organization, because any vulnerability, be it in the source code, in a third party, or in an open
source library used, might give an attacker access to your entire kingdom rather than a
restricted view of a small set of resources.
Serverless architectures deployed to cloud providers and/or other hosted platforms benefit
immensely from the programmatic nature of their target environments. Modern concepts such
as Infrastructure as Code, Continuous Integration/Continuous Deployment, and
commoditization of components are all supported by using purpose-built tools or pipelines to
deliver changes into production in a repeatable and consistent manner. However, automation to
this extent typically carries many high-risk considerations that must be accounted for when
architecting the security of a serverless environment.
Best Practices:
● Authorizations to create, modify, or destroy elements within the environment must
necessarily be provided to the tool responsible for deployments. If insufficiently protected,
this level of privilege could be exploited to create unauthorized infrastructure, effect
inappropriate data access or infrastructure changes, or, in extreme cases, cause
catastrophic destruction of production data or resources.
○ Deployment tooling will by necessity have more privilege than developers,
engineers, administrators, or other contributors should ever have individually. As
a result, access to deployment tooling and associated functions should be strictly
controlled to ensure that contributors cannot exploit the tooling to escalate their
own level of privilege within the environment.
○ Deployment pipelines, as well as the assets, artifacts, and configurations that
drive them, should be protected against unauthorized modification. An effective
mitigation against inappropriate access is the use of version control with commit
signing, as this will ensure that any modifications are visible to all contributors
while providing both integrity and non-repudiation.
● Resources within an environment that are not controlled by deployment automation are
often completely ignored by deployment tooling. As a result, such artifacts can easily
become undocumented, derelict, or may unwittingly become single points of failure
within an otherwise resilient environment. Effective inventory management, specifically
with regard to resources created without the use of automation, is critical for avoiding
unmaintained or 'snowflake' assets within the environment.
Different components within serverless environments differ in which other components
(internal or external) they interact with, as well as in the level of access required to effectively
perform the tasks they are responsible for. Ensuring that these components have access to
the secrets they need, without granting unnecessary access to other components' secrets, can
be an important part of a serverless security architecture. The following are examples of secrets
where granular access control may be prudent:
Best Practices:
Authentication of service principals can be a difficult challenge, and some cloud service
providers have features that make this easier.
Serverless functions and containers are invoked on demand and mostly serve just that
one session. If each request is handled separately, how relevant are DoS and DDoS attacks
to serverless applications?
Most cloud providers have a fair-use policy that controls serverless usage to help prevent one
customer from being affected by another. This is one of the default shields the provider uses to
prevent a customer from lacking needed resources when a different customer is under a DDoS
attack and depleting the available resources. For example, if the number of functions that can be
executed concurrently is 1,000, any new requests are put on hold, causing throttling.
Note: These limits are usually soft limits and can be increased by raising a request with
the serverless provider.
Best Practices:
1. Limit the number of requests from a specific IP address, a range of IP addresses, or a
geographic location using an API Gateway.
2. Rate limiting helps prevent abuse of public APIs, but there are scenarios where paid
subscribers need to access the API beyond the default rate limit. API keys
and JWT tokens can help achieve this through gateway-level policies and configurations.
3. Secure the cloud-based event that will trigger the function. Event-driven functions are
generally not behind an API Gateway, so rate limiting via the API Gateway will not help; you
must secure the event trigger itself.
4. If the event trigger is compromised and you still need to ensure requests do not
receive an HTTP error, you can configure the event to send the message to a queue. The
message in the queue can in turn invoke the function. Be aware that this adds latency.
5. Finally, trust between the API Gateway and the function must be established, ensuring that
only the API Gateway can invoke the function at any given time.
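Practice 1 above (limiting requests per source) is normally configured in the gateway itself, but the underlying mechanism can be sketched as a per-client token bucket. The rate and burst values here are arbitrary illustrations, not recommended settings:

```python
import time

class TokenBucket:
    """Per-client token bucket: each client may make `rate` requests per
    second, with bursts of up to `capacity` requests. A sketch of what an
    API gateway's rate limiter does; real gateways implement this for you."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens: dict[str, float] = {}
        self.last: dict[str, float] = {}

    def allow(self, client_ip: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last.get(client_ip, now)
        self.last[client_ip] = now
        # Refill tokens earned since the last request, capped at burst capacity.
        tokens = min(self.capacity,
                     self.tokens.get(client_ip, self.capacity) + elapsed * self.rate)
        if tokens >= 1:
            self.tokens[client_ip] = tokens - 1
            return True
        self.tokens[client_ip] = tokens
        return False

bucket = TokenBucket(rate=10, capacity=5)
# Six back-to-back requests from one IP: the burst allows five, then throttles.
results = [bucket.allow("203.0.113.7") for _ in range(6)]
```

In practice you would rely on the gateway's built-in throttling rather than code like this inside a function, since the function is billed for every invocation it handles.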
A. Platform Security
Platform Configuration
● Secure the triggering agent (SAS-1)
● Least privilege for output binding
● Cleanup temporary space for isolated and batch processing functions
Platform Component Security Configuration to address any vulnerabilities
Platform Mitigations against compromise
- Remote code execution
- Vulnerability management of application and dependencies
- Secure data access
- Insecure logging
- Exception handling
- Confused Deputy attacks
a. Managed versus unmanaged Functions as a service (private CaaS)
b. Auditability and compliance
a. Data and encryption, key mgmt. (Storage Security and Interfaces)
b. Identity and access mgmt
c. Threat detection -
d. Zero trust, Service mesh implementations and controls/Microsegmentation
g. Identity and access mgmt
Service Principal vs Users
H. Orchestration
i. Detection
j. Policy enforcement and management
k. Response automation
l. Resiliency
m. Scalability
n. Synchronous or asynchronous functions
o. Error handling
The focus of the security team should be directed to identifying risk themes related to their
Serverless applications. What risks are new to the organization because of the use of
Serverless computing? An analysis of the risks or risk themes related to Serverless computing
can then be used to develop a better understanding of overall risk. We can then
form risk statements that can be used to determine the likely harm resulting from a particular
risk and begin an assessment of what to do about the risk.
Serverless functions can receive data from multiple sources such as API Gateways, message
queues, and other Serverless functions. The risks associated with this expanded attack
surface should all be clearly identified and documented in the risk register so that appropriate
controls can be designed and implemented.
Table: Risk themes for Serverless, along with the appropriate controls.
● Vulnerabilities in dependent libraries (e.g. react, boto…)
● Failure of deployment processes (tools failing to deploy, APIs not taking commands to change versions, etc.)
7. A. Use cases and examples - Brad Woodward, lead
Identify 4 examples of why an organization would pursue a Serverless architecture.
Add a paragraph on each of the use cases and then the specifics for each.
Use cases:
- Low cost/low maintenance for lowest barrier to entry.
- Minimizing maintenance responsibilities through the shared responsibility model.
- <Vrettos to add two, with supporting paragraphs>
One use case for embracing serverless architectures is the ability to achieve immense scale
without broad technical expertise. It is possible to build, deploy, manage, and scale a serverless
application without any resources expended on maintaining hardware, operating systems, or
supporting software. Cost is typically extremely granular, allowing costs to scale linearly with
use, which results in consistent economic viability. Naturally, this approach is extremely
beneficial in cases where technical or financial resources are limited, which makes it very
popular among startups.
When embracing serverless primarily for cost, the security of the architecture can become an
afterthought. Considering the following can lead to a stronger security posture and better
outcomes:
● Minimize the blast radius. Ensuring that each individual component has only the access
required to accomplish its own task will help minimize the impact if that component is
compromised. (For example, AWS, via the Security Token Service (STS), generates a
pair of temporary credentials (an access key and a secret key) for the user of the
application/use case. These keys are what the application needs in order to invoke
Lambda.)
● Secure the data from the ground-up. Encryption at-rest and in-transit, strict access and
authorization controls, least privilege, and immutable access logging form the foundation
for a secure environment.
● Offload authorization to the platform. Many hyperscale cloud providers allow for
extreme granularity in authorization controls. Leverage these as much as possible, and
only write custom authorization logic when absolutely necessary. (In general, Serverless
frameworks do not offer a way to manage API keys, secrets, or credentials. On top of that,
none of the current solutions can ensure that the keys in use were actually produced
before the deployment. The developer has the option of using environment variables
to store the keys, but that method is not viable in cases where more than two developers
are working together. On the other hand, the old-fashioned tactic of putting them into a
repository violates best practices. Finally, the same applies if developers use
tools to encrypt with KMS: looked at holistically, this shifts the problem from a repository to S3
(or similar storage services).)
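The environment-variable approach mentioned above, despite its limits, is easy to illustrate. The variable name and value below are purely hypothetical; in a real deployment the platform or a managed secrets service injects the value, and nothing secret lives in the code or the repository:

```python
import os

def get_secret(name: str) -> str:
    """Resolve a secret at runtime from the environment instead of
    hard-coding it or committing it to the repository. Failing loudly when
    the secret is absent beats silently running with an empty credential."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret {name!r} is not configured")
    return value

# Simulate the platform injecting the secret at deploy time.
os.environ["DB_PASSWORD"] = "example-only"
secret = get_secret("DB_PASSWORD")
```

As the text notes, this pattern breaks down for larger teams; a managed secrets service with per-component access control is the more robust choice.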
Organizations with compliance objectives can be attracted to serverless architectures due to the
nature of the shared responsibility model with the service providers. Adherence to frameworks
such as HIPAA, PCI, NIST, and others can be a substantial administrative burden, and allowing
a service provider to lighten the load may be the ultimate justification for embracing serverless
technologies.
When embracing serverless to benefit from the shared responsibility model, consider the
following:
● Configuration matters. Whether a particular service is compliant with a particular
framework is likely to depend heavily on how the service is configured. It is your
responsibility to ensure that the configuration is appropriate for your requirements.
● Know how the responsibility is shared. Some services may have unintuitive boundaries
on where they draw the line of responsibility with various compliance frameworks, and
unfamiliarity with these boundaries may result in non-compliance. Take the time to read
the shared responsibility model for the services you are considering, and ideally in
specific relation to the security frameworks you’re pursuing.
Serverless architecture is premised on deployment being a transparent process in which
the developer is not aware of the cluster or server on which the stateless functions are deployed.
Although the inability to specify where functions run may seem to weaken the
architecture, it can be turned into a significant advantage that benefits the performance
of the application. Although the architecture prevents explicitly colocating computation and
data (a major design goal that prior systems like Hadoop and Spark were
built around), several studies [1] have shown that storage locality does not
provide significant benefits to the final performance of the system, especially nowadays, when
most infrastructure deployments tend to emphasize network bandwidth rather than storage I/O
bandwidth. A great example that benefits from this design is numpywren, a serverless
linear algebra framework [2].
[2] https://arxiv.org/pdf/1810.09679.pdf
8. Other considerations
A holding place for anything we have missed or something that is relevant but doesn’t go in any
of the other sections.
As such, we will focus on what is on the horizon for application security, coupled with
advances in programming language adoption, the Secure Development Lifecycle, and formal
methods in software development, as well as:
● IAM
● Zero trust
● Supply chain
3-(Ricardo)
We also briefly touch on advances in cryptography, machine learning, and privacy methods that
allow operations to be performed on data without revealing it.
a. AppSec SDLC / IAM formal verification
b. Encryption advances (OPE, homomorphic encryption, ML methods, ABE, PIR,
privacy-preserving indexing)
Seed Papers:
Rise of Serverless Computing, Overview of Current State and Future Trends in Research and Industry
https://arxiv.org/ftp/arxiv/papers/1906/1906.02888.pdf
Go Serverless: Securing Cloud via Serverless Design Patterns
https://pdfs.semanticscholar.org/89cf/9e58b931bc4755ab00b7d100ba13e43d64d9.pdf?_ga=2.12874417.725391457.1585303684-1318777614.1585303684
Formal Foundations of Serverless Computing
https://arxiv.org/pdf/1902.05870.pdf
BeyondProd
https://cloud.google.com/security/beyondprod
10. Conclusions
11. References
[Jericho Forum, 2007] Jericho Forum - White Paper. (2007). Business rationale for de-perimeterisation.
https://collaboration.opengroup.org/jericho/Business_Case_for_DP_v1.0.pdf
[Last accessed: 09/04/2020]
https://owasp.org/www-pdf-archive/OWASP-Top-10-Serverless-Interpretation-en.pdf
https://cloudsecurityalliance.org/blog/2019/02/11/critical-risks-serverless-applications
Appendix A: Acronyms
Selected acronyms and abbreviations used in this paper are defined below.