CLOUD SECURITY
Architecture + Engineering
Author: Teri Radichel © 2019 2nd Sight Lab. Confidential
Copyright Notice
All Rights Reserved.
All course materials (the “Materials”) are protected by copyright under U.S. Copyright laws and are the property of 2nd Sight Lab. They
are provided pursuant to a royalty free, perpetual license to the course attendee (the "Attendee") to whom they were presented by 2nd
Sight Lab and are solely for the training and education of the Attendee. The Materials may not be copied, reproduced, distributed,
offered for sale, published, displayed, performed, modified, used to create derivative works, transmitted to others, or used or exploited
in any way, including, in whole or in part, as training materials by or for any third party.
ANY SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT
LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN
NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Content is provided in electronic format. We request that you abide by the terms of
the agreement and only use the content in the books and labs for your personal use.
If you like the class and want to share with others, we love referrals! You can ask
people to connect with Teri Radichel on LinkedIn or visit the 2nd Sight Lab website for
more information.
https://www.2ndsightlab.com
https://www.linkedin.com/in/teriradichel
Day 1: Cloud Security Strategy and Planning
Cloud Architectures and Cybersecurity
Introduction to Cloud Automation
Governance, Risk, and Compliance (GRC)
Costs and Budgeting
Malware and Cloud Threats
Welcome to Day 1 of Cloud Security Architecture and Engineering by 2nd Sight Lab.
On Day 1 we look at the fundamentals that drive security decisions in the cloud.
Developers and technical folks tend to think that security is all about the technical
implementation of devices and tools that defend networks and applications, but really
the picture is much bigger. Security is often more about risk calculations that drive
business decisions on a broader scale. In order to understand this fully we’ll take a
look at some of the traditional drivers that impact business cyber security decisions.
About this class
Assumes basic knowledge of cloud. See links in notes if needed.
Real world scenarios ~ personal experiences moving to the cloud.
Designed for anyone with some technology background.
Hands on labs ~ designed for different levels. Beginner and bonus labs.
Focused on public cloud and infrastructure as a service.
Some discussion of other clouds but not the focus.
This class assumes you have some idea of what the cloud is; however, here are a few
definitions if you want a refresher:
Amazon’s Definition:
“Cloud computing is the on-demand delivery of compute power, database storage,
applications, and other IT resources through a cloud services platform via the internet
with pay-as-you-go pricing.” https://aws.amazon.com/what-is-cloud-computing/
NIST:
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network
access to a shared pool of configurable computing resources (e.g., networks, servers,
storage, applications, and services) that can be rapidly provisioned and released with
minimal management effort or service provider interaction. This cloud model is
composed of five essential characteristics, three service models, and four deployment
models. https://csrc.nist.gov/publications/detail/sp/800-145/final
Setup to receive content and participate in labs
For documents ~ a gmail account https://gmail.com
Sign into the 2nd Sight Lab portal (We sent an email to your gmail account).
For labs ~ complete the setup instructions if you haven’t.
AWS account https://aws.amazon.com
Azure: https://portal.azure.com
Bitbucket account setup using gmail address https://bitbucket.org
Slack: We’ll set this up in the last lab.
You should have received an email from 2nd Sight Lab at your gmail address by now.
This email included instructions telling you how to log into the 2nd Sight Lab portal.
If you haven’t done this yet you’ll want to do it now to access the slides, and if you
want to do the labs, the lab content and tools.
Just let your instructor know if you have any questions or have problems accessing
the materials.
About the screenshots in documents...
One of the biggest challenges with cloud services is the rate of change.
The nature of cloud services is that providers can roll out changes at any time.
You generally won’t be notified about many of them…
The same is true when we write labs for this class
You may notice some of the screenshots don’t exactly match.
Welcome to life in the cloud!
As we were writing this class, new screens were appearing. During the first official
launch of this class, a new CloudFormation portal was released. Each year AWS,
Azure, and Google launch thousands of new features, services, and enhancements,
and unfortunately they do not ask us before making these changes. As you go
through the material, you might see that some of the screenshots don’t exactly match
what you see on screen. This will happen to you a lot in the cloud, so consider this
your introduction to life in the cloud: the only thing that is constant is change. One of
the tricky things for security teams is tracking and dealing with these changes when
they occur. This class will help you consider what you can do to manage this change
and still take advantage of the innovative new features cloud providers offer as they
appear.
Cloud account setup - Initial Best Practices
Use an email alias
Remove programmatic access for the global administrator / root user
Set up MFA - especially on the root account but better if on all accounts
Create a secondary user and only use the root account if required
Set a password policy
Turn on all logging (but can cost money)
If you are setting up a new cloud account, there are some initial best practices you’ll
want to consider as you get started. If you set up a cloud account for this class, we
recommend you do these things as well, if possible.
Use an email alias. When you set up your account, you can avoid someone guessing
your email and login if you use an alias. Additionally, if you are setting up a cloud
account for a company, it’s best to use an alias that gets forwarded to multiple people
rather than tying the account to an individual’s email address. What if that individual
leaves the company? Another tip - think about your naming convention in advance. If
you name your accounts consistently it will be easier to find all your accounts and
email addresses related to them. For example, maybe all your cloud accounts start
with cloud-[unique-name]@ or your AWS accounts start with aws-[unique-name]@.
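The naming-convention idea above can be enforced with a simple check. Here is a minimal sketch; the accepted prefixes and the domain pattern are assumptions you would adapt to your own convention, not a provider requirement:

```python
import re

# Hypothetical convention: cloud account aliases look like
# aws-prod@example.com or cloud-data-team@example.com
# (prefix-[unique-name]@domain). The prefix list is an assumption.
ACCOUNT_ALIAS = re.compile(r"^(aws|azure|gcp|cloud)-[a-z0-9-]+@[a-z0-9.-]+\.[a-z]{2,}$")

def follows_convention(email: str) -> bool:
    """Return True if the address matches the assumed alias convention."""
    return bool(ACCOUNT_ALIAS.match(email.lower()))

print(follows_convention("aws-prod@example.com"))  # True
print(follows_convention("teri@example.com"))      # False
```

A check like this could run in the pipeline that provisions new accounts, so inconsistently named accounts never get created in the first place.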
Set up MFA. Everywhere! Make it that much harder for someone to get into your
account by setting up MFA on all your accounts.
Create a secondary user and only use the root account if required. The root account
or owner is the user account that you used to create the account. It is an all-powerful
user that can do anything in your account - including delete it! Often people will create
this user, add MFA, and store the credentials in a safe or some other secure manner
in which your company typically stores these types of credentials.
Set up a password policy. Although this recommendation is in question in the latest
version of NIST and a lot of companies are starting to offer “passwordless” solutions,
the cloud providers still recommend a password policy. Whatever you do, try to avoid
short, simple passwords with common words such as the name of your company, the
time of year, or the local sports team!
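To make the "avoid guessable words" advice concrete, here is an illustrative screen for obviously weak passwords. The word list and minimum length are made-up examples; a real deployment would use the provider's password policy plus a breached-password list:

```python
# Hypothetical guessable-word list: company name, seasons, local sports team.
COMMON_WORDS = {"password", "acme", "winter", "summer", "seahawks"}

def is_weak(password: str, min_length: int = 14) -> bool:
    """Flag passwords that are short or built around guessable words."""
    lowered = password.lower()
    if len(password) < min_length:
        return True
    return any(word in lowered for word in COMMON_WORDS)

print(is_weak("Seahawks2019!!!"))      # True - built on the local sports team
print(is_weak("Tr4il-moss-kettle-9"))  # False
```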
Turn on all logging. In the case of a security incident, logs are required to determine
what happened. Look at the cost of the logging, but wherever possible turn on all the
logs. We have a lab that looks at different logging options in the cloud later in the
week.
Cloud security certifications
Certificate of Cloud Security Knowledge (CCSK) ~ Cloud Security Alliance (CSA)
Certified Cloud Security Professional (CCSP) ~ CSA and (ISC)2
AWS, Azure, and GCP certifications ~ from the cloud providers
ISACA ~ exams for auditors
SANS certification (under development)
CISSP ~ not so much about cloud but likely evolving
A lot of people ask if cloud certifications are helpful and if this class will help obtain a
certification. We’ve already had one student obtain a certification after taking this class;
however, as a general rule you have to understand the requirements for a particular
certification and focus on the recommended documents and reading.
An unscientific survey by the author reveals that some hiring managers value
certifications and others not so much. In general, hiring managers who hold certifications,
or have held them in the past, value them more than those who never obtained one.
Having a certification proves that a person more junior in their career put in the work in a
particular field to earn it. Over time, experience becomes more relevant. In any case, a
certification may help a candidate get past non-technical human resources staff and
recruiters; since they don’t have the technical knowledge to assess skills, certifications
can help them determine whether a person has the qualifications for a particular job.
The following are some certifications and links to more information if you are interested.
Certificate of Cloud Security Knowledge (CCSK) from the Cloud Security Alliance.
Open book. Governance, Risk, Compliance. Evaluating cloud providers.
https://cloudsecurityalliance.org/education/ccsk/#_overview
AWS: Implementations with AWS tools and services. https://aws.amazon.com/certification/
Azure: Implementations with Azure tools and services.
https://www.microsoft.com/en-us/learning/azure-exams.aspx
Google Cloud Platform (GCP): Implementations with GCP tools and services.
https://cloud.google.com/certification/
ISACA: Certifications for auditors. http://www.isaca.org/certification/pages/default.aspx
SANS Institute. SANS Institute is working on a certification for their cloud security class.
Broad, general, security knowledge applied to cloud. SANS has many other classes that go
deep into specific aspects of security such as reverse engineering malware, network
intrusion detection (packet analysis), forensics, and pentesting. They also have an
accredited masters program (which the author has taken). https://sans.org
CISSP. Although one of the most widely known security certifications, it is very broad and
deals with security at a high level, rather than things like packet and malware analysis. It
also includes things like physical security for data centers. So although it’s one of the most
well-known, it won’t be the most applicable for everyone. It is probably the most recognized by
human resources staff and recruiters. https://www.isc2.org/Certifications/CISSP
There are many other types of certificates by various organizations focusing on specific
aspects of IT or security as well. Many universities are now offering security
undergraduate and master’s programs. It’s important to look at the credentials of the
person running the program and the instructors.
Cloud architectures and impact on cybersecurity
What exactly is the cloud? Is it just someone else’s computer or is it more than that?
Let’s look at some architectures that are uniquely cloud and consider the
characteristics of a service or system that qualify it as a “cloud architecture.” Then we
can explore how these new architectures impact the security of networks, systems,
and applications.
The Golem Project ~ A true cloud architecture
Share your computer - get some cryptocurrency. https://golem.network/
The Golem Project is probably the truest form of cloud architecture. The architecture
consists of people who sign up for the network and contribute compute power in
exchange for cryptocurrency. Computer owners all over the world can sign up and
other people can use their computer’s resources when they are not in use.
Applications need to be written in such a way that they can operate correctly on this
“distributed architecture.” Distributed means the compute power is spread out over
many systems, often located in different locations, and generally not a fixed number of
systems. If one system fails, the application will keep running because other nodes
will seamlessly pick up the work. Additionally, if the system needs more compute
power, a distributed application will often automatically add additional nodes to help
process the data. Contrast this with a system that is designed to run on one
computer, or a cluster of a specific or limited number of nodes.
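The failover behavior described above can be sketched as a toy work pool; the node names and round-robin scheduling are invented for illustration, but the key property holds: work handed to a dead node is simply re-queued, so the job finishes as long as any node survives.

```python
def run_tasks(tasks, nodes, failed):
    """Hand tasks out round-robin; work sent to a failed node is re-queued."""
    if not [n for n in nodes if n not in failed]:
        raise RuntimeError("no healthy nodes available")
    results = {}
    queue = list(tasks)
    i = 0
    while queue:
        task = queue.pop(0)
        node = nodes[i % len(nodes)]
        i += 1
        if node in failed:
            queue.append(task)  # node died mid-job: put the work back
        else:
            results[task] = node
    return results

# Node "a" has failed, but all three tasks still finish on node "b".
print(run_tasks(["t1", "t2", "t3"], ["a", "b"], failed={"a"}))
```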
Private clouds and OpenStack
Joint project of Rackspace and NASA (2010)
Open source software for cloud computing
Host in your own data center
Many companies tried...and gave up
Complex. Limited by hardware resources
https://www.openstack.org/
So you want to build your own cloud? OpenStack offers a way for you to do this using
open source software in your own data center. OpenStack began in 2010 as a joint
project of RackSpace Hosting and NASA. In 2012, NASA withdrew to use AWS.
OpenStack is now managed by the OpenStack Foundation and includes support from
companies such as Oracle and Hewlett Packard.
Some companies reluctant to put data into the public cloud started with OpenStack,
building private clouds on premises and giving developers access to create resources
like compute, storage, and networking. The idea sounded great, but it turns out it’s
complicated to run a cloud platform efficiently and with the same usability of the public
clouds. Additionally, scalability is limited to the systems available in the private data
center, so private clouds cannot take advantage of the economies of scale of a public
cloud. In addition, private clouds are typically run by organizations that specialize in
specific business domains and don’t have a lot of expertise or staff that can maintain
the private cloud. It’s difficult to keep up with the features, functionality, scalability, and
usability of the public cloud platforms. In many cases, developers are unsatisfied with
internal clouds after using public clouds and push for access to the public clouds. The
author had such an experience at a large company that gave up on an attempt to
implement a company-wide private cloud and ultimately moved to the public
cloud instead. Many companies have had similar experiences, though some
organizations still do run private clouds and use OpenStack.
https://www.openstack.org/
Public cloud services
Third-party hosted cloud computing platforms that anyone can use.
Salesforce offers a hosted API and GUI for sales applications.
AWS is one of the most widely used and known infrastructure clouds.
Azure followed suit, though years later. Azure started as a PAAS platform.
Google offers gmail, Google Docs, and other hosted services.
iCloud...People call almost anything hosted by someone else “the cloud.”
Initially companies running applications on the Internet insisted on running all their
software on their own servers. They also insisted on owning all the code. For some
web-based businesses, this stemmed from the dot-com (later to become the
dot-bomb) era where companies would build websites and then either sell their
companies or “go public” (become publicly traded on the stock market). In order to
show the value of their companies they wanted to own all the intellectual property (IP)
associated with the systems that ran their businesses.
Eventually the cost of running and maintaining secure and scalable systems
outweighed businesses’ desire to own all their own code and infrastructure.
Additionally the rise of open source software changed the idea that companies had to
own all their own software. Organizations could get things done faster by using
software created, maintained, and in some cases hosted by other companies.
Initially organizations started moving from hosting their own servers to hosting them in
colocation facilities, where another company maintained the building and network but
the companies owned their own servers. Next companies started using managed
hosting services where they rented servers from companies that managed the
physical hardware, networks, and building.
GoDaddy launched in 1997 and offered a service that allowed customers to use a
database and create a website on a shared platform instead of hosting everything
themselves. Customers managed these systems through a dashboard. This was one
of the initial steps towards the cloud hosting model.
Salesforce started a revolutionary service in 1999. This service offered a hosted
platform for sales applications. Instead of owning and hosting all the software
companies could leverage this shared platform which offered a lot of features and
functionality without having to custom build a whole new system. Additionally the
system offered APIs and components developers could use to build systems more
quickly. Salesforce was one of the first major SAAS (Software-As-A-Service)
platforms.
https://www.computerworld.com.au/article/641778/brief-history-salesforce-com/
Amazon was one of the first companies to create a public cloud. The first AWS
service was SQS (Simple Queue Service), launched in beta in 2004. Amazon then
began offering virtualized infrastructure. Instead of renting a server, or using a software-only
service, companies could set up virtual and scalable infrastructure. The details of how
AWS came about vary in different reports but Jeff Barr, chief AWS evangelist,
published the following timeline on the official AWS blog:
https://aws.amazon.com/blogs/aws/aws-blog-the-first-five-years/
Now it seems like every company offers some sort of cloud service. All types of data,
infrastructure, and applications are hosted “in the cloud” - on someone else’s
computer.
Public versus private cloud
Another way to think about public versus private clouds is based on networking. A
private cloud is contained to your private network. A public cloud is accessible to and
from any network.
Another way to define a public versus a private cloud is based on what networks can
connect to it. A private cloud is typically only accessible from a private network or in
other words not from the Internet. A public cloud is typically accessible from the
Internet. Using those definitions a private cloud would only be accessible from a
specific private network belonging to a particular organization. A public cloud would be
accessible from any address on the Internet.
Hybrid clouds
A hybrid cloud typically consists of resources in a public and private cloud. It may
simply be a connection allowing data to pass between two clouds. Applications may
also be designed to scale from private to public clouds.
A hybrid cloud refers to a cloud that connects private and public clouds into a larger
cloud. This connection is typically created with a VPN (virtual private network) or
private connection between the organization’s data center and the public cloud to
secure the transmission of data instead of having it flow directly over the Internet.
Many organizations use hybrid clouds to connect private networks to public clouds
and vice versa. Some example use cases for a hybrid cloud:
A company wants to allow an application in the cloud to connect to a database hosted
in the company's data center.
A company wants to allow developers on the internal network to access the public
cloud over a secure connection.
A company may want to backup data to the cloud or vice versa over a private
connection.
An on-premises application may scale up into the public cloud when the demand is
greater than the on-premises data center systems can support.
Cloud services ~ typical characteristics
Not hosted by you
On-Demand
Scalable
Shared resources
Pay as you go
Log in, push a button, get a virtual machine
Not hosted by you: Typically when people think of cloud, they think of systems hosted by
someone else. Some companies do install software for cloud platforms they host internally, but
this class is focused on external cloud resources. Many companies that tried to set up private
clouds found it challenging and have opted to move to public cloud infrastructure because
third-party cloud providers specialize in this service and it is their business, whereas other
companies may be focused on a different line of business, such as banking, retail, hospitality,
health care, or real estate.
On-Demand: As shown in the picture, it’s possible to click a button to get a new virtual
computer in the cloud. Instead of waiting for the IT team in a company to purchase a server,
install it, and get the network team to set it up, developers can just run a new machine
instantly for a new project.
Scalable: Most cloud environments have architectures and services that can grow and shrink
automatically as you use resources. Instead of having to define how many servers you need
for a new big project in advance, storage, network, and compute capacity can be added on
demand as the need for additional resources arises. This can alleviate problems associated
with spending more money than is needed when capacity needs were overestimated, or
having systems go down due to underestimating requirements.
Shared resources: Cloud architectures are often delivered in what is called a multi-tenant
environment. That means the systems or data belonging to a single customer may be
deployed on the same physical hardware as other customers. In some cloud services,
customer data may be stored in the same database or on the same operating system as other
customers.
Pay-as-you-go: In theory, companies can save money in the cloud because they can reduce
capacity when they are not using it. In practice, the opposite sometimes happens when
companies do not manage resources carefully. Organizations need to understand
who is instantiating what resources and ensure systems are right-sized and terminated when
not in use.
Impact to cybersecurity: Not hosted by you
Loss of control of some aspects of configuration
Some logs might not be accessible - reliant on the cloud provider
Harder to capture network traffic in the cloud
Can’t pentest or audit certain environments as you normally would
Different implementations than a typical data center environment
Location of hosted resources may impact legal jurisdiction
What happens when you’re not the one hosting an application? Some parts of the system
that you controlled in the past are no longer accessible to or controlled by you.
Developers probably notice the impact of this less as they are typically running their code on
systems provided by other people. They may not be aware of the security implemented by the
teams that deploy servers and networks. For them, life is easier in a lot of ways. In some
cases the developers got to the cloud first and got things working. But is it secure? We’ll take a
look at that as the class proceeds.
In some ways, the jobs of the people managing security are harder. System configurations and
certain types of logs may no longer be accessible for review. Tasks like pentesting and
auditing are no longer possible to complete in the typical fashion - if at all for some types of
clouds. Security tools that work in an on-premises environment may not work well in the cloud,
which means the security team has to evaluate, purchase, and learn new types of security
tools. In other ways, security becomes easier, because the security team can offload some
work and liability (potentially) to the cloud provider. We’ll also talk about how the automation
capabilities of the cloud platform can help later.
Typically large organizations pentest and audit data center environments. With some cloud
providers the data centers will not be accessible. Validating vendor security requires new
methods of assessing and dealing with cybersecurity risks.
The location of the resources in the cloud may impact legal jurisdiction. If an incident occurs,
the legal jurisdiction where the data or system is hosted may apply and the company may end
up having to fight a court case far from where the company does business, incurring additional
costs and possibly being subject to new and different laws.
Impact to cybersecurity: On-demand
Perhaps the developers got there first - and it needs a security makeover.
Anyone with permissions can create resources.
Resources with higher privileges can be instantiated and abused.
Security policy enforcement can be easier - or harder. It depends.
Malware and stolen credentials can quickly lead to unauthorized resources.
Terminated resources - ephemeral logs may be gone for good.
Because the cloud is easy to access and use, developers may have gotten to the cloud before
the security and networking teams and started creating resources. In many companies this is
the case. Often developers are not trained in security or networking and some adjustments
may be needed as a result to bring the organization into compliance and align with company
security policies and standards.
In the cloud, anyone with permissions can create and access resources. One very important
aspect of cloud security is correct implementation of permissions in the cloud, otherwise
known as IAM (identity and access management). In the cloud, resources like virtual machines
that run applications can also be granted permissions. If not careful when implementing IAM
policies, a user could instantiate a virtual machine with elevated privileges and get access to
things the user would normally not have access to see or do.
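One common guardrail for the elevated-privileges problem described above is to restrict which roles a user may pass to a virtual machine. The sketch below builds an AWS-style IAM statement as a Python dict; the account ID and role name pattern are placeholders, and other providers express the same idea differently:

```python
import json

# Assumption for illustration: developers may pass only low-privilege
# "dev-app-*" roles to instances they launch, so they cannot boot a VM
# with an admin role and inherit its permissions.
pass_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "LimitRolesPassedToInstances",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::123456789012:role/dev-app-*",
        }
    ],
}

print(json.dumps(pass_role_policy, indent=2))
```

Combined with a policy that does not otherwise grant `iam:PassRole`, this keeps a user's effective privileges from silently expanding through the machines they create.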
Enforcing security policies in an on-demand world may be easier or harder. It depends on how
the permissions and deployment system is structured as we will discuss later.
Malware and stolen credentials can lead to unauthorized resources. We’ll talk about how
attackers are leveraging these credentials in the cloud in the section on cloud threats.
Resources in the cloud are “ephemeral” meaning they are not persisted or saved after they are
deleted or terminated. Just as a cloud resource can easily be created, it can easily be
destroyed with the correct permissions. When a resource is destroyed, any logs on that
resource will be gone as well. Security teams need to make sure logs are stored in a way that
keeps them around in case of an incident.
Impact to cybersecurity: Scalable
Auto-scaling: New servers and containers created to handle load.
Manual patching will not apply to auto-scaled resources.
IP addresses are dynamic - change when systems are restarted.
An IP address assigned to your system might later point to someone else’s
A two hour TTL is not a good idea…
Security appliances cannot depend on fixed IP addresses.
Resources scaled down - may lose ephemeral logs.
Auto-scaling is an important aspect of a cloud security architecture. The idea is that resources
are created when application load requires them and excess resources are terminated when no
longer needed. The issue is that auto-scaled resources are created from a base
configuration. Patches manually applied to resources running at a particular point in time
may not get applied to the new resources as they go up and down in the cloud. A new strategy
may be more effective.
IP addresses in the cloud are not static, with few exceptions. Most IP addresses change every
time a system is started, restarted, or redeployed. The cloud platform randomly assigns IP
addresses, sometimes within subnets you define, other times not. A security team and the
security appliances and services deployed in the cloud need to be able to handle the dynamic
nature of cloud IP addresses.
As you can imagine, a 2 hour TTL (time to live for DNS records) is not a good idea. If your
DNS record points to an IP address for two hours before it updates, the IP that was assigned
to your resource may suddenly be pointing to another company’s cloud server. Your traffic may
be going to the wrong place for two hours!
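The stale-DNS scenario above can be modeled with a toy resolver cache (names and IPs are made up): once an answer is cached, clients keep getting the old IP until the TTL expires, even after the cloud provider reassigns the address to someone else.

```python
class ToyResolver:
    """Minimal DNS cache: answers are reused until their TTL expires."""

    def __init__(self):
        self.cache = {}  # name -> (ip, expires_at)

    def resolve(self, name, authoritative_ip, ttl, now):
        cached = self.cache.get(name)
        if cached and now < cached[1]:
            return cached[0]  # possibly stale answer
        self.cache[name] = (authoritative_ip, now + ttl)
        return authoritative_ip

r = ToyResolver()
first = r.resolve("app.example.com", "10.0.0.5", ttl=7200, now=0)
# The instance restarts at t=60s and gets a new IP, but the cached
# record is served for the rest of the two-hour TTL:
second = r.resolve("app.example.com", "10.0.0.9", ttl=7200, now=60)
print(first, second)  # 10.0.0.5 10.0.0.5
```

A short TTL shrinks this window at the cost of more lookups, which is why dynamic cloud endpoints generally use TTLs of seconds to minutes, not hours.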
Resources that scale up and down based on demand or triggered by an event to run
temporarily will also have short-lived logs.
Impact to cybersecurity: Shared resources
Many virtual machines on the same physical hardware.
Many containers on the same virtual machine.
Sometimes well-defined trust boundaries do not exist
No VPNs and Federated IAM on some cloud services.
Can’t easily get your data out once you put it in.
No physical disk copy in the case of an incident.
Harder to get logs in some cases.
Shared resources create some of the economies of scale and potential cost savings
in the cloud. They also cause some security concerns. Who else is hosted on your
server besides you? Can the other virtual machines or containers on the same
physical hardware get to your virtual machine or container?
In some cases cloud services may not have well defined trust boundaries between
components, systems, or people. When AWS, Azure, and Google first launched, they
did not have the concept of virtual private networks. Every customer’s resources were
running in one big flat network. New networking services exist, but some cloud
providers and services still do not have well defined trust boundaries to separate
customers, or allow customers to create segregation according to best practices due
to the limitations of the particular service.
Some cloud providers do not allow access to cloud services over private networks or
using federated IAM. We’ll talk about why that is a problem later in class.
Some have concerns that once the data is in, it’s hard to get out. For example, AWS
allows you to load up data on a semi-truck and add it to AWS for free. How would you
get that data back out? When you send data into the AWS network it is free. When
you send data out there’s a charge. Also in some cloud services, data is co-mingled in
such a way that it is almost impossible to extract from other customer data.
The typical method of copying a physical disk for incident handling no longer applies
in the cloud. Security teams need to learn and practice new methods for capturing
incident data. In addition, some logs may not be accessible at all, may lack data, or
may be harder to capture because the log data is co-mingled with that of other
customers or is managed by the cloud provider.
Impact to cybersecurity: Pay as you go
The idea was that by turning off unused services, you save money.
In practice, people forget to turn things off.
Architectures are not designed correctly to realize savings.
Malware spins up new hosts and containers to run cryptominers.
Lift and shift deployments cost more than on-premises deployments.
Lack of management leads to waste and overspending.
Pay as you go is great...until it’s not. The idea in the cloud is that you can turn on a
resource when you’re using it and then turn it off when you’re done. You only pay for it
during the time period when it was running. You can also right-size your resources to
your application and take advantage of all sorts of methods for reducing costs by
aligning your architecture with your application needs.
In reality, developers come to the cloud, spin up instances larger than needed, and
forget to turn them off. Architectures are moved to the cloud in a lift-and-shift manner
that does not realize cost savings or performance optimizations. Malware gets into
the cloud and creates unauthorized resources running cryptominers and other
malware. Lack of management of this pay-as-you-go model leads to waste and
overspending, in contrast to the finite resources of a datacenter, where spending
happens when the server is purchased and resources are fixed.
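A back-of-the-envelope sketch of the "forgot to turn it off" problem, using an assumed hourly rate rather than real pricing:

```python
# Hypothetical hourly rate -- real prices vary by instance size and region.
HOURLY_RATE = 0.40

def monthly_cost(hours_running: float, rate: float = HOURLY_RATE) -> float:
    """Pay-as-you-go: you are billed only for the hours the instance runs."""
    return hours_running * rate

# Used 8 hours per weekday as planned, vs. forgotten and left running 24/7:
planned = monthly_cost(8 * 22)     # ~22 weekdays in a month
forgotten = monthly_cost(24 * 30)  # a full 30-day month
print(f"planned ${planned:.2f} vs. forgotten ${forgotten:.2f}")
```

At the assumed rate the forgotten instance costs roughly four times the planned usage, which is why monitoring and automated shutdown policies pay for themselves.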
Other concerns...
Everything is interconnected.
One web page calls numerous other APIs
What is all this stuff??
Application security is paramount
Loss of credentials can wreak havoc
Misconfigurations abound
Questions, trust...and contracts
Little Snitch (Mac firewall)
In addition to security concerns based on the architecture of a typical cloud
application, we have some other big picture concerns when looking at all things cloud
and cloud applications.
Everything is interconnected. Websites are calling so many different services and
APIs it’s hard to track what data is going where. The screen shot above shows some
web traffic for a few websites and all the different APIs and services being called. You
can get similar information by turning on developer tools in some web browsers. Do
you want your data going to all these different places when you visit one website? Is
there a better way to architect web applications so these APIs and third party
websites are not exposed to every visitor? Yes! In addition, all these dynamic
connections create challenges for traditional ways of implementing network security.
Application security becomes even more important when all these different APIs are
interconnected and websites are calling each other. We’ll talk about some of the
newer exploits occurring with incorrectly configured APIs and web servers.
Loss of credentials can wreak havoc in the cloud. If an attacker gets on premises
they may delete your data but they can’t delete your entire server! In the cloud loss of
credentials could mean deletion of everything in your account if credentials are not
handled properly.
Misconfigurations are one of the biggest threats in the cloud as we’ll discuss.
Security teams need visibility into cloud configurations and the ability to set and
maintain configuration policies. Often security teams were involved in setting standard
configurations for on-premises systems but in the cloud, they may not have been
involved when the initial systems were rolled out. Additionally, many new cloud
services need to be evaluated to determine the appropriate configuration for all the
available settings. Container configurations, database, and other storage service
configurations also need to be evaluated. We’ll discuss containers and virtual
machines later if you’re not familiar with those terms.
Regarding the things security teams no longer have access to in the cloud, the
security team needs to come up with a new approach for determining whether those
things are secure. As we'll discuss, this really comes down to asking questions,
deciding whether you trust the cloud provider's answers to those questions, and
contracts.
Geography and Jurisdiction
When you host data in a cloud service, do you know where it is located?
Location is critical from a legal standpoint.
The jurisdiction that applies in a court case may depend on data location.
Different laws apply in different locations.
Some organizations disallow data access by citizens of foreign countries.
In an ongoing case, Microsoft handed over data stored in the US related to a court
case, but refused to hand over data stored in Ireland, citing that it was a different
jurisdiction.
When you are using a particular cloud provider, do you know where your data is
located? Where is it backed up? Who has access to it, including support, security, and
operations staff at the cloud provider? Where is the authentication service to access
the data located? Where are the system logs?
Location is critical from a legal standpoint in the cloud. If an organization’s data is
hosted in another legal jurisdiction and a security incident occurs, the organization
may be required to show up in court in that jurisdiction. Additionally, the laws in that
jurisdiction may apply. Understand the laws that apply as data transfers between one
location and another.
Some organizations disallow access by foreign countries. Different laws may apply to
data if it includes citizens of other countries. We’ll talk about GDPR more later.
In a recent case, Microsoft was asked to give up data related to a legal matter.
Microsoft provided the data hosted in U.S. data centers but refused to provide data
hosted in Ireland, arguing that it was a different jurisdiction and different laws applied.
https://www.lawfareblog.com/microsoft-ireland-case-supreme-court-preface-congressional-debate
The Upside!
Possibly shift liability to a third-party via a contract.
Built-in inventory exists by nature of how the platforms work (CIS Controls).
Additional resources exist for your security team via cloud provider support.
IAAS clouds are huge configuration management platforms.
Automation can reduce human error.
Segregation of duties and networks may be easier.
New ways of doing things may be more efficient and reliable.
Shift Liability. One of the upsides of giving up control of certain aspects of security
may be the ability to shift liability to a third party via a contract. Businesses look at
risk and try to reduce or mitigate it. If you allow the cloud provider to handle a certain
aspect of your security and something goes wrong, who will be liable? Of course, the
impact to your brand must also be considered in this case. Choosing a shoddy cloud
provider may not sit well with customers if and when something goes wrong and it is
not handled properly.
Inventory. Cloud platforms such as AWS, Azure, and Google have built-in inventory
tracking. As we'll discuss, inventory tracking is one of the top recommendations of the
Critical Controls. You can simply run a query on most cloud platforms to get this data.
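As an illustration of the kind of summary such an inventory query enables, the sketch below aggregates a made-up resource listing in Python. The record fields (`service`, `region`, `tag_owner`) are assumptions for the example, not an actual cloud API response shape:

```python
from collections import Counter

# Illustrative records shaped like what a cloud inventory query might
# return -- the field names here are invented for the sketch.
inventory = [
    {"id": "vm-1", "service": "compute", "region": "us-east-1", "tag_owner": "teamA"},
    {"id": "vm-2", "service": "compute", "region": "us-east-1", "tag_owner": None},
    {"id": "db-1", "service": "database", "region": "eu-west-1", "tag_owner": "teamB"},
]

# CIS-style inventory check: count resources per service and flag
# anything without an owner tag, since unowned resources tend to
# become unpatched, unmonitored resources.
by_service = Counter(r["service"] for r in inventory)
untagged = [r["id"] for r in inventory if not r["tag_owner"]]

print(by_service)  # Counter({'compute': 2, 'database': 1})
print(untagged)    # ['vm-2']
```

The same pattern applies whether the records come from an AWS, Azure, or GCP listing API: pull the inventory, then ask questions of it programmatically.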
Support. Some cloud providers provide excellent support, especially for customers
with enterprise support plans. When a security incident occurs or when implementing
new security appliances and services, the cloud provider can provide additional
resources to help. Additionally, the platforms offer the ability to automate a lot of
error-prone tasks that can lead to security incidents.
Configuration management. By virtue of how the cloud platforms work, they provide
built in configuration management - if used properly. We’ll talk about how to leverage
this functionality effectively.
Automation. Studies cite human error as one of the primary reasons for security
incidents. In some cases this is due to phishing attacks but in other cases
misconfigurations or deployment mistakes can also contribute to the problem. By
automating as much as possible, repeatable methods can limit manual actions and
limit the chance of human error in the process.
Segregation. Most cloud services (but not all!) create immutable logs that can be
used to track incidents and know that the attacker or a person working on the platform
has not altered them. The major cloud platforms also offer the ability to segregate
resources via accounts and other constructs based on IAM policies, roles, and other
settings. Cloud networks also allow for very fine-grained network configurations to
ensure access to and from resources is limited to only what is required, down to the
virtual machine, container, and data store, for some services.
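Fine-grained network rules are only useful if they are actually restrictive. A minimal sketch of auditing a rule set for world-open access follows; the rule records, field names, and the allowed-port policy are all illustrative assumptions, not any provider's API shape:

```python
# Hypothetical firewall/security-group rules for the sketch.
rules = [
    {"name": "web", "port": 443, "source": "0.0.0.0/0"},     # public HTTPS: expected
    {"name": "ssh", "port": 22, "source": "0.0.0.0/0"},      # open to the world: flag it
    {"name": "db", "port": 5432, "source": "10.0.1.0/24"},   # restricted: fine
]

# Assumed policy: only web traffic may be exposed to the entire Internet.
ALLOWED_PUBLIC_PORTS = {80, 443}

def overly_open(rules):
    """Flag rules that expose non-web ports to the entire Internet."""
    return [r["name"] for r in rules
            if r["source"] == "0.0.0.0/0" and r["port"] not in ALLOWED_PUBLIC_PORTS]

print(overly_open(rules))  # ['ssh']
```

Automating a check like this against the real rule data the platform exposes is one way the cloud's configuration visibility becomes a security advantage.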
Efficient and Reliable Processes. By leveraging event driven automation, reaction
to repeated events can be quick and reliable, saving valuable time and preventing
mistakes.
Types of Clouds: IAAS, PAAS, and SAAS
Infrastructure-as-a-Service (IAAS)
A virtual data center on shared physical resources.
Platform-as-a-Service (PAAS)
A platform for building applications with developer components.
Software-as-a-Service (SAAS)
Software delivered to you over the Internet - you don’t have to install it.
You may have heard these terms before in relation to cloud providers:
Infrastructure-as-a-Service (IAAS) A virtual data center on shared physical
resources. In an IAAS environment you have more control over resources such as
virtual machines, where you are responsible for managing the operating system.
Platform-as-a-Service (PAAS) A platform for building applications with developer
components. In a PAAS environment the customer doesn’t manage the operating
system or database server. The customer has access to components at a higher layer
that can be leveraged to write code and build applications, without administering or
accessing the underlying infrastructure.
Software-as-a-Service (SAAS) Software delivered to you over the Internet - you
don’t have to install it. A SAAS application makes use of shared resources. That
means a SAAS application is typically not something you install or manage
on-premises or in your own cloud account. It’s typically software that you access via a
web console or an API (application programming interface). More on APIs later.
These categories have evolved to allow cloud providers to define the types of
functionality they deliver. The lines get blurry sometimes trying to determine which
category an application falls into, but does it really matter which category a service is
in? The main thing that concerns us, as security professionals, is that we need to
understand the features of any particular cloud service we are using, what risks are
present, and what we need to do about them.
That being said, it’s still good to understand the definitions of the different categories,
so that when talking to customers or cloud providers, we have a general
understanding of the terms and are talking about the same thing. It is also helpful
when trying to understand our security responsibilities in a general way, because the
amount of responsibility you have versus the cloud provider changes depending on
the type of cloud service you are using.
IAAS
https://www.rightscale.com/lp/state-of-the-cloud?campaign=7010g0000016JiA
The three IAAS clouds you’ve probably heard of by now: AWS, Azure, and Google
Cloud Platform.
Others exist, but the market share is overwhelmingly divided between AWS and
Azure, with Google Cloud Platform a distant third. Other infrastructure-as-a-service
cloud providers are barely on the map.
AWS has been around the longest, with a 10-year lead in the industry. Azure started
as a PAAS platform and Google as SAAS with Gmail, initially. Now all three are racing
to keep up with each other, building new and better IAAS services and features.
PAAS
A few notable PAAS providers:
Heroku, now purchased by Salesforce, offers a platform designed to make it easier
for developers to deploy applications using simpler components that work together.
RedHat/IBM Openshift - build and deploy containers in public, private, and hosted
cloud environments.
Cloud Foundry - components to build and deploy cloud applications.
SAAS
Sample SAAS Applications:
DocuSign - integrate document signing and storage into your applications.
SumoLogic - Operations, security, and business analytics based on your logs -
support for multiple clouds.
DropBox - store your documents on a third-party cloud.
Shared Responsibility Model
Concept created by Amazon to explain security responsibilities.
Explains what security is handled by the CSP and what customers need to do.
General rule: If you can see it and change it, it’s probably your responsibility
Make sure this is defined in your contract and assigns liability appropriately.
If a compliance violation and fine comes along...who’s responsible?
If there’s a data breach fine or lawsuit - who pays?
AWS came up with the shared responsibility model to explain to customers which
aspects of the infrastructure AWS is responsible for securing and which aspects are
the responsibility of the customer. The other IAAS clouds have followed in these
footsteps to provide guidance to customers as to where the responsibility lies for
different aspects of the cloud. Regardless of which cloud you are using you need to
understand what parts of the cloud service the provider will secure, how they will
secure them, and whether that meets your requirements. In addition, you need to
understand your own responsibilities and make sure you have secured your part.
In addition to cloud provider documentation and statements, you need to ensure your
contract clearly defines this responsibility. If something goes wrong, who will be liable
for any damages, fines, or other legal ramifications? When it comes to the courtroom,
the contract will be the most binding. What if a compliance fine results from some
misconfiguration? Will your organization need to pay it or the cloud provider? What if
there is a data breach that results in a lawsuit? Who will be liable? What about GDPR
requirements for deletion of customer data if an automated data deletion routine
provided by a cloud provider fails or data is leaked due to a cloud provider error?
For large organizations who could face hefty litigation or fines, it is important to
understand these things before signing an agreement with a cloud provider and
deploying systems to the cloud.
https://aws.amazon.com/compliance/shared-responsibility-model/
The AWS shared responsibility model shows the components the cloud provider is
responsible for and what the customer is responsible for. Amazon likes to say they are
responsible for the security of the cloud and the customer is responsible for security in
the cloud.
AWS provides a deep dive into their security processes in this whitepaper:
Amazon Web Services: Overview of Security Processes
https://d1.awsstatic.com/whitepapers/Security/AWS_Security_Whitepaper.pdf
https://docs.microsoft.com/en-us/azure/security/azure-security-infrastructure
Azure also has what they call Responsibility Zones. They break down these zones
by different types of cloud services - SAAS, PAAS, IAAS, and On-Prem. This means
that the customer needs to understand the responsibility for each type of cloud
service and what category each service falls into.
https://cloud.google.com/files/PCI_DSS_Shared_Responsibility_GCP_v32.pdf
This diagram shows the Google responsibility matrix for PCI compliance.
They have also published some information on shared responsibility in relation to
containers. If you are not familiar with containers and Kubernetes we will discuss
those more on day 3.
https://cloud.google.com/blog/products/containers-kubernetes/exploring-container-security-the-shared-responsibility-model-in-gke-container-security-shared-responsibility-model-gke
You can also check out a deep dive of Google’s security and responsibility model in
their whitepaper:
Google Infrastructure Security Design Overview
https://cloud.google.com/security/infrastructure/design/
Shared Responsibility Model Resources
AWS:
https://aws.amazon.com/compliance/shared-responsibility-model/
Azure:
https://gallery.technet.microsoft.com/Shared-Responsibilities-81d0ff91
Google (focused on PCI):
https://cloud.google.com/files/PCI_DSS_Shared_Responsibility_GCP_v32.pdf
A few more helpful links for people who are familiar with one cloud, but not the other:
Azure for AWS Professionals:
https://docs.microsoft.com/en-us/azure/architecture/aws-professional/
Google Cloud for AWS Professionals:
https://cloud.google.com/docs/compare/aws/
Google Cloud for Azure Professionals
https://cloud.google.com/docs/compare/azure/
SAAS and PAAS
For each type of cloud, determine which responsibility is yours or theirs.
Make sure your contract clearly defines that responsibility.
Involve a lawyer - language around data breaches, privacy, jurisdiction, liability, transfer of ownership.
No matter what cloud provider you use, you’ll want to understand who is assigned
responsibility for all the different aspects of security. The contract or agreement needs
to clearly define these responsibilities. You will probably want to involve a lawyer and
consider all the language around data breaches, privacy, jurisdiction, and transfer of
ownership, among other things. Transfer of ownership means when you sell your
business, do you want to have to wait for a cloud provider to approve the sale? Check
your contract.
SAAS and PAAS providers will have more responsibility for the underlying platform.
This slide shows a part of the document where Dropbox defines their responsibility
versus what the customer needs to do. You can read the full document below.
https://assets.dropbox.com/documents/en/trust/shared-responsibility-guide.pdf
What about all the other cloud providers your company uses? Do you understand
what the cloud provider is doing to secure their systems? Do you know what the legal
ramifications and steps you need to take will be during a security incident?
The issues: contract, risk, and trust
The issues with cloud security are largely legal and risk issues.
Your contract is key:
Who is responsible for carrying out which security activities?
What happens if something goes wrong?
Even if you have a great contract, do you trust them to do their part?
What risk does the organization face by moving data to the cloud?
Although engineers and DevOps teams like to focus on the technical
aspects of security, a large portion of the issues in cloud security are
legal and risk issues.
The contract is key to defining who is responsible for carrying out which
security activities. It also determines who is liable and who pays for
damages if something goes wrong.
Although you might have a great contract, do you trust the cloud provider
to do what they say they are going to do? What is the risk if they don’t?
Even if you are able to collect damages what harm might be done to the
organization’s reputation? Would loss of data put the organization out of
business? What other risks might the company face by moving to the
cloud?
Contract Considerations
Right to Audit: Security assessments, review logs, or ask for periodic reports.
Availability: Definition of downtime? Backups? Scheduled outages? Monitoring and alerts? BCP/DR?
Compliance: Can they meet your requirements?
Data Access: Encryption standards, key management, employee access, sharing, location
Data Breach: Notification, damages and liability, chain of custody, insurance, forensic evidence
E-Discovery and Legal Holds: Can they meet your requirements?
Insurance: Does their insurance cover your deductible?
Intellectual Property: Who owns what? Software, processes, reports, data, etc.
Termination and Disposal: What happens when you terminate the contract or delete a resource?
When negotiating a contract with a vendor it is important to include the security
requirements you establish during an assessment of the vendor. If the vendor claims
they do backups and you write that down on an assessment, it may not hold up in court
if the vendor fails to do so and loses all your data. Your legal team and security team
will want to coordinate to ensure important clauses are contained within the vendor
contract. If the vendor is meeting some type of compliance standard such as
ISO 27001, SOC 2, CSA STAR, PCI, GDPR, or HIPAA, you can reference
that compliance in the contract so the vendor will be obligated to maintain that
compliance over time.
Consider what happens in the case of a legal issue or data breach. What logs are
available to you? Can the vendor fulfill your obligations for legal holds? Will they be
able to provide data with proper chain of custody in case of a breach where litigation
ensues? Will they be able to provide logs that show you the exact scope of a breach
to avoid excessive fines and fees for data that was not actually exposed in a breach?
The following article provides some additional details about the items above and what
should be covered in your contract:
https://securityintelligence.com/posts/does-your-cloud-vendor-contract-include-these-crucial-security-requirements/
Legal and Risk Issues
This is not a contract class so we can’t cover all the legal aspects of cloud.
Ensure your lawyer is involved in reviewing the contract.
The Cloud Security Alliance has a legal working group that may be able to help
https://cloudsecurityalliance.org/working-groups/legal/#_overview
We will talk about risk assessments later in the class.
This is not a legal class, so we won't cover all aspects of contracts here. Your
instructor is not a lawyer - so get your lawyer involved to review your contract and
to help you understand any legal risks.
In addition to your own lawyer, or if you are seeking a lawyer, the Cloud Security
Alliance has a legal working group that may be able to help. Their website states:
Our mission is to provide unbiased information about the applicability of existing laws
and also identify laws that are being impacted by technology trends and may require
modification.
They also have some commonly asked legal questions on their website:
https://cloudsecurityalliance.org/working-groups/legal/#_overview
Security impact: Individual cloud services
IAAS clouds offer numerous different “services” you can use.
Compute, storage, networking, security, and other services.
Compute resources are used to execute code in the cloud.
You may have control over networking rules and configuration.
Storage is used to store files, data in a database, or other types of storage.
All these services can be combined to create cloud-hosted applications.
Cloud platforms consist of many different services that can all be used together to
create applications. Compute resources, storage, networking and other types of
services can all be combined in different ways to create new types of applications and
architectures. Each of these services will have different configuration options that
need to be set correctly for optimal security. It is important for developers and security
teams to understand the options available to them and set them appropriately.
Amazon Web “Services”
You’ll see a list of services grouped by categories like Compute, Storage, Database, etc. when you log in at https://aws.amazon.com
Amazon gives things funny names.
EC2 = a virtual machine
EBS Volume = a virtual hard drive
See more in the notes on this page
A few core AWS Services:
IAM or Identity and Access Management is the AWS service for creating and managing users,
roles, groups, and permissions in your cloud account. This service is used to validate users
who access your account and specify who can do what.
EC2 stands for Elastic Compute Cloud. Basically, when you create an “EC2 instance” you are
starting up a computer in the cloud. It’s called a “virtual machine” because it’s actually just
software running an operating system, as opposed to a physical piece of hardware running a
single operating system. One physical server can run many “virtual machines” in the cloud.
AMI means “Amazon Machine Image.” You select an “AMI” when you want to create a virtual
machine. It’s a template that specifies what type of machine you want to create. The AMI will
specify the operating system and software contained on the machine you instantiate.
EBS Volume is a virtual drive you can attach to an EC2 instance. Just like you have hard
drives on your laptop, you can associate a hard drive with a virtual machine. You can also
remove a hard drive from a virtual machine and associate it with a different virtual machine.
S3 is a service that enables storing “object” data. The objects look like files when you log into
AWS, but the way they are stored is technically not file-based. Object storage allows for
more scalable storage in the cloud. This service allows you to create a place to store files, but
without specifying how much you need up front. You just keep adding files and the service
grows and charges you based on the amount of data you add. This is different from old-school
models where you had to calculate and define how much data you needed in advance. It also
is a great way to get scalable log storage instead of being limited by the size of services in
your data center.
RDS is the AWS relational database service. Relational databases store data and are queried
using SQL (structured query language). These types of transactional databases are typically
used for things like financial applications that need strong data integrity and unquestionable
transactions. Instead of having a database administrator (DBA) handle backups, replication,
and other management tasks, this AWS service can do some of that automatically for you.
RDS offers different database platforms like SQL Server, MySQL, PostgreSQL, and Amazon
Aurora.
VPC or Virtual Private Cloud is the AWS service where you’ll define networking resources and
rules. Networking rules are defined to allow or deny access to cloud resources via networking
endpoints, protocols, routes, and rules.
Azure Services
Azure also shows a list of services you can use when you log in to https://portal.azure.com
Some of the services have recognizable names like “Virtual Machines”
Azure typically names things with familiar names.
Azure AD is like Active Directory on premises and used for IAM on Azure.
Virtual machines are...virtual machines.
Storage accounts are used to store data such as files.
SQL databases, as the name states are SQL databases.
Networking in Azure starts with Virtual Networks or VNETS.
Google Cloud Platform (GCP)
Google Cloud Platform has the same concept
Google tends to focus more on compute and IAM than on network controls
Google cloud also has a number of different services along the same lines. Many of
the Google services start with “Cloud”. In most cases the names are decipherable.
Cloud Identity is Google’s built in identity and access management service.
Compute Engine is Google’s virtual machine service.
Cloud Storage is object storage, similar to S3 on AWS or Azure’s storage accounts
used to store files used by applications, for example.
Cloud SQL is Google’s relational database service.
Virtual Private Cloud (VPC) is Google’s base networking service.
Security implications of each service
Can it run in a private network or does it require Internet access?
Can you encrypt the data and who has access to the encryption key?
Does it meet compliance requirements if you need it?
What settings can be used to secure the service? Who can change them?
What logs does the service offer and what do they contain?
What is the SLA for each service (may vary within a single cloud provider)?
See the notes for a few more.
Some security and risk-related questions you might want to ask about each service
used in your cloud account:
Can it run in a private network or does it require Internet access?
Can you encrypt the data and who has access to the encryption key?
Does it meet compliance requirements if you need it?
What settings can be used to secure the service? Who can change them?
What logs does the service offer and what do they contain?
What is the SLA for each service (they may vary within a single cloud provider)?
What actions can the service take in your account?
What data does it cache and where?
How does authentication work for the service?
How much does it cost?
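One way to operationalize these questions is a simple per-service checklist. The sketch below is a hypothetical example; the checklist keys, the sample answers, and the service itself are illustrative assumptions, not a real assessment:

```python
# Assumed checklist derived from the questions above; extend as needed.
CHECKLIST = [
    "private_network",            # can it run without Internet exposure?
    "customer_managed_keys",      # can you control the encryption keys?
    "meets_compliance",           # does it satisfy your compliance regime?
    "security_settings_lockable", # can settings be locked down and change-controlled?
    "adequate_logs",              # does it emit the logs you need?
    "acceptable_sla",             # is the SLA good enough for this workload?
]

def unmet_requirements(answers: dict) -> list:
    """Return checklist items the service fails or that were never answered."""
    return [item for item in CHECKLIST if not answers.get(item)]

# Hypothetical answers for one service under evaluation:
service_answers = {
    "private_network": True,
    "customer_managed_keys": False,  # provider holds the keys
    "meets_compliance": True,
    "security_settings_lockable": True,
    "adequate_logs": True,
    # "acceptable_sla" not yet reviewed
}
print(unmet_requirements(service_answers))
# -> ['customer_managed_keys', 'acceptable_sla']
```

Treating an unanswered question the same as a failed one keeps gaps in the evaluation from silently passing.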
Security architecture concerns ~ big picture
Cloud security architects need to look at cloud services holistically.
Many cloud services can be combined to create applications.
Look at where the data can flow - networking, APIs, cloud accounts.
Consider the entire attack surface as a whole vs. separate components.
Mashup of cloud services can create leaks and vulnerabilities.
Architect a solution for deployment, governance, and risk reporting.
Sometimes people look at individual components but not at the intersection of
components or the big picture as a whole. This is where security breaks down.
Security in your account is only as good as the weakest link. Look at the systems and
the account architecture as a whole to determine where attackers can get malware in,
and data out.
Sometimes an individual cloud service may be fine on its own, but when multiple
services are linked, problems can occur. Look at the attack surface of all the things
that are connected as a whole.
When architecting a cloud solution, one of the most important points, which we
discuss in a lot more detail later, is to gain visibility into deployments. You’ll want to
see what is being deployed in the cloud and ultimately create guardrails for those
deployments. You’ll also want to be able to manage governance and ensure systems
are compliant both before and after deployments. Finally, you'll want to ensure you
understand the risks that exist in your environment. In order to do that you need to
know what’s deployed and be able to understand what vulnerabilities and problems
exist. In some cases that information will only be available via the deployment system,
for ephemeral services like Lambda functions that only exist for a short amount of
time. You’ll want to review the code that is deployed, and ensure the logs are stored
for each invocation, even after a short-lived, ephemeral resource has terminated.
By capturing all this data, you can provide meaningful risk reports to decision makers
who prioritize implementation of vulnerability fixes.
Be aware of limits - hard and soft
Each different cloud service may impose limits.
If you hit a limit, systems may go down - so monitor overall use.
Some limits can be changed upon request.
Other limits are hard limits so you’ll need to work within those requirements.
Azure trial accounts limit the number of compute resources you can create.
AWS trial accounts may come with an initial limit that can be increased.
The cloud is scalable, but in some cases not unlimited. Be aware of limits and monitor
them. If you hit a limit while using a critical system, that system may go down. For
large companies your account manager may help you monitor these limits. Some of
the cloud providers offer tools for monitoring if and when you will hit limits. Just be
aware these limits exist and evaluate each new service to determine if your needs fall
within the limits of the system.
Many times the cloud provider can and will raise the limits if you ask. You can put in a
support request or ask your account manager. In other cases, the limits are hard limits
because increasing them would create excessive cost or performance problems for
you or the cloud provider if implemented.
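As a hedged sketch of what limit monitoring can look like on AWS (exact commands vary by provider, and these require configured AWS credentials, so treat them as illustrative):

```shell
# Show the EC2 instance limit for this account (long-standing account attribute).
aws ec2 describe-account-attributes --attribute-names max-instances

# List current EC2 quotas via the Service Quotas service.
aws service-quotas list-service-quotas --service-code ec2 \
  --query 'Quotas[].[QuotaName,Value]' --output table
```

Comparing these values against your current usage is one way to alert before a critical system hits a limit.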
Cloud Provider Limits
AWS: AWS Service Limits, AWS Limit Monitor, Trusted Advisor
Azure: Azure Limits
GCP: Google Cloud Quotas
Azure:
https://docs.microsoft.com/en-us/azure/azure-subscription-service-limits
AWS:
https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html
Google Quotas:
https://cloud.google.com/docs/quota
To Summarize
Cloud architectures are different - security needs to adapt.
Some aspects of security become the cloud provider’s responsibility.
The shared responsibility model defines your security responsibilities.
Make sure your contract clearly states cloud provider responsibilities.
You lose some control, but you gain some powerful new capabilities.
We’ll be covering ways to secure cloud architectures throughout the class.
In summary - the cloud is different! Security tools and practices used in the cloud
need to adapt to cloud architectures. Understanding the shared responsibility model
for each cloud provider is key. Make sure your contract clearly delineates
responsibility and liability. Ensure you are securing your part. Although you lose some
control in the cloud you may be able to shift some liability. You will also gain powerful
new tools.
Introduction to automation
and infrastructure as code
Infrastructure as code
Creating resources using code instead of clicking buttons.
This quick overview is for those who are not familiar with the term.
Also, we will discuss why this matters in the context of security.
The best way to explain infrastructure as code is via some examples.
The related lab will make sure everyone has a working environment.
We’ll test it out by running some code to create some cloud resources.
One of the first things you'll want to understand when thinking about security for IaaS
clouds (and any cloud where possible) is the concept of infrastructure as code. What
this means is that we’ll be writing code to create resources, instead of clicking
buttons.
Think about the first day you got a new laptop or computer. You probably had to login
and click a lot of buttons to get it set up just the way you want. It’s the same in the
cloud. You create a new virtual machine. You could login and manually deploy
software and click a lot of buttons. Instead, we want to write code to deploy and
configure that virtual machine. In fact, we can configure all the networking, storage,
databases and pretty much anything on a typical IaaS cloud using code instead of
button clicking. The following will demonstrate how this works.
Create an EC2 Instance by clicking buttons
Here’s how you create a new virtual machine on AWS:
Login to the AWS Console at https://aws.amazon.com
Type EC2 in the search box:
First let’s click some buttons. Log into AWS and type EC2 in the search box and
choose the EC2 service. As a reminder this is the AWS virtual machine service.
Remember to choose the region in which you want to create your resource. In our
labs we will always use us-west-2.
Launch an EC2 Instance
Click the Launch Instance button.
Click the blue button to launch an EC2 instance.
Choose an AMI (Amazon Machine Image)
You can just click the first blue button for now. We’ll talk about AMIs later.
Choose an AMI. Old school AWS'ers from Seattle might pronounce this "ah-mee,"
however a lot of people now pronounce it A-M-I. Some people, like @QuinnyPig on
Twitter, have an ongoing debate on this matter. Your instructor may say it one way or
the other but both are acceptable. The first Amazon Linux AMI in the list will work just
fine.
Choose a size (make sure it is “Free tier eligible”)
Next, choose a size. In this case we want to choose a size in the "free tier" so you
won't get charged, as long as you stay under the AWS time and usage limits of a free
trial account. The first free tier eligible size in the list will work just fine.
Configure details - use defaults
Uses the default networking.
Assigns a public IP.
Shared tenancy - dedicated will cost a lot more!
Leave the defaults on the configure instance page. Note that tenancy is shared. Don’t
choose dedicated unless you want to pay a lot of money! A dedicated host is an
anti-cloud pattern where you get a server all to yourself. It will cost a lot more. Most
organizations will want to disallow this option so someone doesn’t choose it by
mistake.
Use default storage
Note that you could add additional virtual drives (EBS Volumes)
You can also change drive settings and size.
This is the page where you can choose an “EBS Volume” which is just an Amazon
way of saying “virtual hard drive.” You can just use all the defaults.
Create a “Name” tag
The name tag is a special tag that will show up in resource lists
On the tags page, enter a special tag. In the Key field put "Name" - using this key will
cause the name to show up in the list of instances, as you'll see on an upcoming slide.
In the Value field, put whatever you want. In this case the value is "Lab 1".
Select an existing security group
The default security group is selected here, which doesn't allow any inbound traffic from outside the group
We’ll fix that in the upcoming lab.
In this slide we'll just choose the existing default security group. However, the default
security group will not allow any inbound traffic from outside the group (such as SSH
from your laptop). In the lab we'll create a new security group instead.
Review your settings and launch your instance
Review your settings and click Launch.
After you launch - choose a key pair
Since this is a new account,
select the option to “Create a
new key pair.”
Make sure you download your
key pair and put it in a safe
place. This SSH key is essentially
a password to log into this EC2
instance remotely. Anyone who
has it can use it. If you lose it,
you can’t log in.
After the launch button is clicked you can choose a key pair. An EC2 key pair is an
SSH key that allows you to log into an Amazon Linux instance. Choose the option to
create a new key pair. SSH private keys are effectively passwords and should be treated as such.
Also, if you lose this key, you won’t be able to login to this instance again - so put it
somewhere you’ll remember it! Click Launch Instance.
Launch your instance
The Launch Status on the screen has a link to the instance being launched
Note the value starting with i-xxxxxxxxxxxxx - that’s your instance id.
Click your instance id.
After you click the Launch Instance button you’ll see that your instance is launching.
You can click the link for the value in the format i-xxxxxxxxxxxx. This is your instance
ID. We’ll be using instance IDs in the lab. They uniquely identify an instance in the
cloud.
Monitor the instance status checks
Wait for the instance status to change from initializing to ready
When it’s ready the status will change to indicate status checks have passed.
Next you'll see the list of EC2 instances in your account. As explained earlier, by
adding the Name tag "Lab 1", this name now appears next to our instance ID in the
instance list. Notice that the status checks say "initializing" at the top. You'll need
to wait until the 2 status checks have passed before you can log into your instance.
Scroll down to view instance details
Here you’ll see the
instance details like:
Public IP address
(184.72.125.71)
Private IP address
(172.31.46.77)
Security groups
AMI ID
Key pair name
Scroll down to see the details of your instance. Take a look at the various properties.
They should match what you selected.
How does automation work in the cloud?
Almost everything you can do in the AWS console can be done by an API call.
Azure and GCP have some automation as well, though not as much.
The cloud providers offer tools to help with automation.
Instead of clicking buttons to create an EC2 instance, we can run code.
Code can be checked into source control to track changes and versions.
The code allows us to create a repeatable, automated process.
That was a lot of button clicking! What if we wanted to automate this in the cloud?
For every button we clicked, there’s an API (application programming interface) that
we can call instead using code to perform the same actions. AWS probably has the
most robust API support, but Azure and Google are constantly adding features to
catch up.
If we can write code to create our instance, then it can be checked into source control.
You’ll get to use a source control system called BitBucket in the labs. Source control
allows developers to store code, including different versions in case the code history
needs to be reviewed or code needs to be rolled back to a prior state. Source control
systems track who made what change.
By checking code into source control, deployments can be tested in advance, if the
code is written correctly. This reduces errors during deployments and ensures
deployments are more secure and repeatable.
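For example, the deployment script itself can live in source control. A minimal sketch using git (the directory, file, and commit names here are illustrative, not from the lab materials):

```shell
# Sketch: track an infrastructure script in git so every change is versioned.
mkdir -p infra-demo
echo 'aws ec2 describe-instances' > infra-demo/list-instances.sh
git init -q infra-demo
git -C infra-demo add list-instances.sh
git -C infra-demo -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "Add script to list EC2 instances"
git -C infra-demo log --oneline   # history shows who changed what, and when
```

Every later change to the script becomes another commit, which is what makes rollbacks and change audits possible.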
AWS Tools
AWS offers a command line
interface (CLI) which we will use in
the labs.
In addition, AWS offers SDKs in
many different programming
languages to call AWS APIs.
You’ll need the secret key and
access key id you created in the
setup instructions.
APIs can be called by many different tools on AWS. Software development kits
(SDKs) exist for many popular programming languages.
In class we’re going to use something called the AWS CLI (command line interface).
This tool allows you to run scripts at a command prompt to deploy resources in the
cloud. We pre-installed the AWS CLI on a cloud instance so you don’t have to install
anything on your own laptop or run our lab code on your laptop. The code is designed
to run on the instance and should only be run there for security reasons as well. We
can’t guarantee that all the code we included is secure so it’s best you restrict it to the
cloud instance.
However many developers run the CLI on their own machines. If you want to do that
you can download and install the CLI by following these instructions:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html
A simple CLI command to view EC2 instances
Run this command to see all the EC2 instances in your account.
You’ll see the one we just created with the same id. (i-xxxxxxxxxxx).
Here's a simple command to view the EC2 instances in your account using code
instead of clicking buttons.
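The original screenshot isn't reproduced here; a command along these lines would list your instances (assumes the AWS CLI is configured with credentials and a default region):

```shell
# List instance ID, state, and Name tag for every EC2 instance
# in the current region.
aws ec2 describe-instances \
  --query 'Reservations[].Instances[].[InstanceId,State.Name,Tags[?Key==`Name`]|[0].Value]' \
  --output table
```

The `--query` option uses JMESPath to trim the output down to just the fields of interest.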
Create an EC2 instance from the command line
Now write a command to create an instance, but save it in a file.
This script can be saved to a file and checked into source control.
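The screenshot isn't reproduced here, but a script along these lines would do the job; the AMI ID, key pair name, and security group ID below are placeholders you'd replace with values from your own account:

```shell
#!/bin/bash
# Sketch: launch a single free tier eligible instance with a Name tag.
# ami-xxxxxxxx, my-key-pair, and sg-xxxxxxxx are placeholders.
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t2.micro \
  --key-name my-key-pair \
  --security-group-ids sg-xxxxxxxx \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=Lab1}]'
```

Because the command lives in a file, it can be committed to source control and re-run identically in another account.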
Run a script to create an EC2 instance
Execute the script you created to create the instance.
The instance will be created and show up on the EC2 Dashboard
Now we can run the script instead of running the command directly.
AWS CLI Reference
Lists all CLI commands.
Drill down from AWS to EC2.
Then scroll down to run-instances.
Click it for details.
Scroll down to find parameters for the command.
https://docs.aws.amazon.com/cli/latest/reference/
For more information about all the commands available in the CLI check out this
reference page:
https://docs.aws.amazon.com/cli/latest/reference/
AWS CloudFormation Templates
Use templates to define resources
YAML or JSON
Separate definition from execution
Idempotent
Built-in dependency management
Deployment logging
Parameters and Outputs
Another option on AWS is to deploy something called CloudFormation Templates.
As the name implies, the template defines a type of resource and can be used to
deploy resources that match the settings in the template.
Templates can be written in YAML or JSON. These are simply defined file formats for
how to write a file that matches a particular specification. Once you understand the
rules they aren’t so bad but it takes a bit to get used to them. JSON was the initial
format but now a lot of developers prefer YAML because they find it simpler. The
template in this slide is written in the YAML format.
Some of the benefits of CloudFormation:
Separate definition from execution: The definition of something should not be altered
when executing the code to create it. By using a CloudFormation template, a deployment
system can be created which deploys, but does not alter, the template that defines
what to deploy. This is an important distinction for security reasons. An application
team may be responsible for a particular template. The team managing the
deployment systems should not be able to change the template.
Idempotent is a fancy way of saying templates are re-runnable. If a deployment fails
half-way through, you should be able to run the template again and get the expected
results.
Built-in dependency management - because AWS created the template language
and knows all the resource dependencies, it will manage most of the work to
determine when to wait before proceeding with the next resource in the template.
Unfortunately it won’t do this if you break your templates into multiple scripts. In that
case you need to manage some dependencies yourself.
Deployment logging - we’ll look at some of this later in the labs. You’ll see that
CloudFormation logs the deployment events, inputs, outputs, and the template that
was deployed. There’s also a new feature called Drift Detection which tells you if a
resource has been changed and is out of sync with the template that deployed it.
Parameters and Outputs - templates allow you to pass in parameters, so you can
use the same template in different places. For example, we give you one template in
class but everyone can use it because we pass in parameters when things need to be
different in each account. Outputs of templates can be passed in as parameters to
other templates. This is very useful for example, when you create networking that is
used by multiple application stacks. One networking template is deployed with an
output that is used by all the other CloudFormation stacks. (A stack is a set of
resources deployed by a template.)
Parameters and Outputs
Use parameters so you can use the same code in multiple places.
Pseudo parameters will detect details about your environment (region, etc.).
Use outputs to track information about things you have created.
Other templates can reference those outputs.
For example, create a security group in one template.
Pass the security group ID into another template.
Parameters are placeholders in templates that allow you to pass in values when the
template is used to deploy resources. This allows you to use the same code
(template) over and over because you abstract out the values that differ each time.
Then pass those in when you deploy the template.
Pseudo parameters exist that will automatically populate with values from your
current environment. For example, if you are deploying in the us-west-2 region a
region pseudo parameter will figure that out and pass it into your code.
Outputs are values generated after the template is deployed, such as a security
group ID or an instance ID.
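As a small illustration (the output name here is hypothetical), a template can reference the built-in AWS::Region pseudo parameter without declaring it in the Parameters section:

```yaml
Outputs:
  DeployedRegion:
    Description: Region this stack was deployed into
    Value: !Ref AWS::Region
```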
Sample template - Security Group
Notice one of the
outputs is a
reference to the
security group.
This allows us to
reference this
security group in
another template.
This slide shows an example of a template used to create a Security Group. A
Security Group is a set of network rules that can be applied to a resource to define
the traffic allowed in and out of that resource.
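The template image isn't reproduced here, but a minimal sketch of such a template might look like the following; the resource names, description, and CIDR are illustrative, not the exact values from the slide:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Creates a security group that allows inbound SSH from one IP
Parameters:
  VpcId:
    Type: AWS::EC2::VPC::Id
Resources:
  LabSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow SSH from a single admin IP
      VpcId: !Ref VpcId
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 203.0.113.10/32
Outputs:
  SecurityGroupId:
    Description: ID of the new security group, referenceable by other templates
    Value: !Ref LabSecurityGroup
```

The output at the bottom is what lets another template consume the security group ID without hard-coding it.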
Execute the code to create the security group
The following command creates a CloudFormation “stack.”
We specify the file that contains our CloudFormation template.
The output is the StackId for our CloudFormation stack.
A CloudFormation stack is a group of resources deployed by a template. The
command in this screenshot executes a CloudFormation template stored in the
securitygroup.yaml file. The output is the stack ID for our CloudFormation stack.
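A sketch of the command (the stack name is illustrative, and the template is assumed to be in the current directory):

```shell
# Create a stack from the securitygroup.yaml template.
# On success, the command prints the StackId of the new stack.
aws cloudformation create-stack \
  --stack-name lab-security-group \
  --template-body file://securitygroup.yaml
```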
View the CloudFormation stack
Log into the
console.
Choose
CloudFormation
Click your stack
The events tab will
show any errors.
In AWS when you navigate to the CloudFormation service, you can view the output
of the command to run the template.
View Outputs and the new Security Group
Click the outputs tab to view the outputs
Go to the EC2 Service. Click on Security Groups to see your new group.
Click on the different tabs to see information about the CloudFormation stack. Then
go to the EC2 service and click on Security Groups on the left to view your new
security group.
Why do we want to use code instead of buttons?
Writing code may seem more complicated than buttons at first.
However, this is a pay now or pay later option.
It will be faster to get up and running by clicking buttons.
However, when something goes wrong, you can’t quickly redeploy.
You’ll be reliant on the person who knows what button to click.
It might be harder to prevent unwanted actions and track who did what.
You can resolve issues and redeploy faster if you invested in automation.
Why is infrastructure as code a best practice in the cloud? It may seem easier to click
buttons, and at first it will be. A good approach would be to create a sandbox account
to try things out in a manual fashion before deploying to production-bound accounts.
However, this is a pay now or pay later scenario. You may quickly lift and shift
something into your cloud account and get it all working - and then your DevOps
person leaves the company. Who remembers how all these systems were deployed?
Perhaps you have a security incident and ransomware gets onto some of your
instances. How fast can you recover? In the case of any malware on your machines,
do you have to try to get it off, or can you simply click a button to redeploy everything?
What about tracking who made what changes on a system? How will you do that? If
all changes go through an approved deployment system and must come from source
control, the changes should be available in source control showing the different
versions of code, who made the change, and what got deployed when.
What about the next time you need to make a change that has been tested in a dev or
QA environment or push it to production? How do you ensure the changes made in
the test account are exactly the same as the changes made in the production
account? If your code is constructed properly, the same code used to deploy to
the development account will be used to deploy to the production account with no
changes. You can ensure what was tested is what exists in production.
Other benefits of code deployments
Spot unauthorized changes when resources don’t match desired state.
Separate the people from secrets and sensitive data.
Eliminate phishing; automated systems don’t click things.
Deployment systems can operate in a locked down network.
Prevent human error - one of the biggest causes of security problems.
Immutable infrastructure - once deployed it can’t change. Limits malware.
Some other benefits of automated deployments include the ability to more quickly
spot unauthorized changes. If changes are made outside the approved deployment
system, alerts can be triggered. This can be accomplished with services like AWS
Config, which we will discuss later in class. CloudFormation also has a feature called
Drift Detection which will tell you if a resource in an account differs from the template
used to deploy it.
With automation you can write code to deploy things in sensitive environments
instead of people. If set up in a completely automated fashion, people never need to
have access to the data or secrets when deploying a system. For example, your
automation could generate an SSH key that is used to log into a system from within
the cloud, and take an action using code. A person never needs to have access to the
SSH key used to perform the automated action in the account. You can have multiple
checks in your deployment process to ensure someone doesn’t change the code to
get access to that key.
Often credentials are stolen via phishing attacks. If you have automated your system
in such a way that no humans have access to the credentials and they do not exist on
anyone’s laptop in memory or in a text file, then there’s no human to click on a link in
an email and reveal those credentials.
You can operate in a completely locked down network if actions are taken via
automated systems. You can build the automation systems inside a closed network,
perform operations via code, and never have humans connect to systems in that
closed network. Of course, your build system security is also very important in this
case.
Using fully automated systems will help prevent human error such as accidentally
hitting the delete button on the wrong EC2 instance or pointing the database
deployment to the wrong server.
Finally, with automated deployments, you can deploy immutable infrastructure. We’ll
talk about immutable infrastructure more later in class but it basically means that once
deployed, a resource cannot change. If you want to change it, you have to destroy it
and create a new instance of that resource. Immutable infrastructure limits your attack
surface and the ways in which malware, vulnerabilities, or misconfigurations could be
deployed.
Let’s do it!
We created an AWS Linux AMI for you to use in class.
The AMI has all the tools you need on it, including the AWS CLI.
You will need to use this AMI to create an EC2 instance manually.
Then log into the EC2 Instance using SSH.
Download the lab code and run some automation commands.
If you have time, do the bonus labs to see how this works in other clouds.
If possible make sure to create the resource group in Azure.
In Lab 1.1 you’ll be able to try out creating resources in the cloud - both manually and
via automation.
Lab: Intro to
AWS automation
This lab is an introduction to deploying cloud resources using automated and manual
methods. It's also a chance to make sure accounts are set up correctly and lab tools
are working.
Governance, risk,
and compliance (GRC)
It's fun to think about bits and bytes and malware; however, at some point we need to
step back and look at the big picture to determine cybersecurity risks and responses,
compliance requirements, and how to maintain cybersecurity in an organization via
policies and enforcement, otherwise known as governance.
Governance
Governance is making sure people follow policies to reduce risk and loss.
An organization needs to consider risks it faces and what to do about them.
Based on this assessment the organization creates policies.
Then the organization needs to enforce the policies.
Reporting and auditing can help determine if policies are being followed.
If policies are not being followed, this typically indicates increased risk.
Governance means making sure people follow the rules. In a cybersecurity context,
rules are created in order to ensure systems are compliant with standards and
policies. Standards and policies are created to reduce risk by ensuring systems are
created according to best practices that limit cybersecurity risk and potential exposure
to vulnerabilities and malware.
In the on-premises world, policies are often written in documents that few people
actually read and see, in the author’s experience. In the cloud, these policies can be
transformed into technical controls and guardrails that can help ensure people are
following the rules at the time of deployment. Additionally, reports can be created to
review configurations automatically in the cloud to detect vulnerabilities and
misconfigurations and produce reports and alerts. In some cases, it may even be
possible to automate remediation of the problem. More details on how to do that are
provided in subsequent class modules.
If a company finds that a lot of systems are not in compliance, this can indicate
increased risk for the company. Systems that are out of compliance may cause the
company to fail security audits, resulting in fines or loss of business from customers
who rely on these audits to prove the company is following best cybersecurity
practices. Out of compliance systems may indicate vulnerabilities that expose
systems to potential attacks and malware.
Policies
Why do we need policies?
Policies define what is and is not allowed in order to maintain security.
Maintaining security minimizes risk.
Companies may need to follow policies for compliance or legal reasons.
Existing policies need to change to accommodate the new cloud environment.
New technologies, shared responsibility.
Policies explicitly define what is and is not allowed. By creating policies, companies
document the rules people deploying technology in the company are supposed to
follow. The policies are created to reduce risk, including cybersecurity risks and
risks related to costs or loss of business. In addition, policies may be created to
enforce configurations that meet compliance or legal standards.
Do policies matter? Documented policies are required in some cases for compliance,
to obtain insurance, or for other legal reasons. If a cybersecurity incident occurs and
the company is not following the policies they defined and documented, this could
lead to legal scrutiny. Law enforcement may question the company, or as is the case
in some large breaches, the CEO may be asked to testify in front of Congress as to
why policies were not followed or enforced. If a company documented a policy but
does not follow it and needs to put in an insurance claim, the insurance company may
not honor the claim if the policies were not enforced. Companies are sometimes also
required to have policies in place to obtain contracts with other companies. Not
having or implementing policies in a contract could lead to breach of contract.
When a company moves to the cloud, policies need to change. Things work differently
in the cloud. New services, tools, and software will be used in the cloud. The typical
way companies do incident handling changes. Scoping for penetration tests by
internal teams or external vendors will change, as well as what needs to be tested.
The way companies handle encryption keys will likely change. IAM implementation
will be different. After going through this class, likely those responsible for security
policies will come up with many more aspects of security policies that may need to be
adjusted.
An Example
Chris Farris @jcfarris
works on cloud
security at Turner
and published a blog
post about creating a
cloud security policy
for AWS, Azure, and
Google.
https://www.chrisfarris.com/post/cloud-security-standard/
Chris Farris handles security for Turner and published a blog post outlining how they
approach security policies at Turner Broadcasting. Each organization will create a
security policy unique to its needs, but this blog post may give some ideas to consider
when creating your own policy. In the examples shown here, clearly we need to think
about different things in the cloud, such as what domains and emails can be used to
create cloud accounts. The policy also takes into account concerns of the governance
and legal departments.
Standards and Procedures
Standards define how the company will implement the policies.
Procedures define the process by which standards will be implemented.
Security teams typically have standards and procedures in place. Example:
The company uses a standard OS configuration.
Procedures define who will create the base image and how to deploy it.
These too will need to be adapted to work in the cloud.
Security standards are created to define how various types of systems need to be
configured. For example, a company may have a standard that all Linux systems are
deployed with RedHat Linux in a specific configuration. A procedure may then define
how those operating systems are configured and deployed. That procedure will almost
certainly need to change in the cloud: the company is now dealing with virtual
machines deployed on a cloud platform instead of physical machines inside a data
center.
The standard itself may also come under pressure. Amazon Linux comes with AWS
cloud tools built in, which developers may want to use. It is hardened, and Amazon
releases patches very quickly. Do you want to revise your standard to allow
developers to use Amazon Linux? This is just one example.
Other procedural considerations include who will create the base images, and
how. How much will developers be allowed to change the base image? Can they
install new software? Can they create a new cloud image from that base image
that incorporates their software on top of that base?
These are the types of questions that will need to be considered that may cause
security policies and procedures to change.
What needs to change in the cloud?
Who approves which projects can go to the cloud?
What types of data can go to the cloud? PCI, PII, HIPAA, GDPR?
Who will manage cloud accounts? IAM? Networking? Encryption keys?
What is the review process for an application moving to production?
What on-premises data can cloud systems access and vice versa?
How will applications be deployed and by whom?
How will incidents be handled? How will you maintain chain-of-custody?
These are just a few of the questions to ask when you move to the cloud; you will
likely need to make changes for each of them. Those responsible for security
policies will think of many more as you go through this class and consider your
existing security policies. Organizations typically need a unique policy that is
relevant to their systems, compliance, and legal requirements.
What may or may not change
Which operating systems are allowed?
What is your patching strategy?
What you do for PCI compliance (e.g. antivirus, pentest)
What will be encrypted and how?
How will data be classified? Data loss prevention (DLP)?
What security products will you use?
Acceptable open source licenses.
These are some things that may or may not change. We'll explain your options, pros,
and cons for each of them throughout the class.
Operating systems: You may try to use the same standard operating systems you
use on-premises. Developers may instead want to use cloud-specific operating
systems that come with built-in tools that work with the native cloud platform.
These tools make it easier to deploy systems in the cloud, and the tools and agents
provided by the cloud provider may integrate seamlessly with the cloud platform.
You may opt to use the cloud provider's operating system and tools - just make sure
you understand their capabilities and what they can access.
Patching: Patch running systems or redeploy?
Compliance: Will you use the same or different tools and processes to handle
vulnerability scanning, antivirus, and cloud pentesting? Even if you use some of the
same tools you may want to install them from the cloud provider marketplace. Some
companies have used compensating controls in place of antivirus - it depends on your
auditor whether or not this will be approved.
Encryption: Will you classify data or simply encrypt everything?
DLP: Will you classify data and how? Will you deploy a DLP solution?
Security products: Will you use the cloud native security products or products that
you are used to?
Software Licenses: Do you have license restrictions on open source products? Do
those policies allow developers to use cloud software development tools that are
provided by the cloud provider and designed to work with their platform?
Cloud Patterns
Design Patterns - A software construct.
Well-designed patterns for common problems.
Create pre-approved patterns that developers can use.
If people use the pre-approved secure patterns, they
get to production quickly.
If they choose something else that’s ok - it will just take
longer as they need to go through an approval process.
Design patterns are a well-known software construct: well-designed, reusable
solutions to common problems.
The same concept can be applied to cloud infrastructure as code. Create
pre-approved patterns that developers can use. By providing pre-approved
templates, developers can deploy new systems without too much scrutiny or delay.
If they use the pre-approved patterns, they get to production quickly. If they
choose something else that's ok - it will just take longer.
Other questions to consider:
Who will define the cloud patterns?
How will they be managed, used, and monitored?
How will these patterns be implemented and deployed for each new project?
How and when will the patterns be adjusted to ensure the company can remain
innovative and move quickly?
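The pre-approved pattern idea can be sketched as a simple lookup that decides whether a deployment is fast-tracked or routed through review. The pattern names, versions, and function below are hypothetical illustrations, not part of any real tool:

```python
# Hypothetical sketch: route deployments based on whether the
# infrastructure-as-code template matches a pre-approved pattern.
# Pattern names, versions, and outcomes are illustrative only.

APPROVED_PATTERNS = {
    "three-tier-web": "v2",   # pattern name -> currently approved version
    "static-site": "v1",
    "batch-etl": "v3",
}

def deployment_path(pattern: str, version: str) -> str:
    """Return 'fast-track' for pre-approved patterns, else 'security-review'."""
    if APPROVED_PATTERNS.get(pattern) == version:
        return "fast-track"       # deploy with minimal additional scrutiny
    return "security-review"      # still allowed, but goes through approval

print(deployment_path("static-site", "v1"))   # fast-track
print(deployment_path("static-site", "v0"))   # security-review (outdated)
print(deployment_path("custom-thing", "v1"))  # security-review
```

The point of the design is that the secure path is also the fastest path, so developers have an incentive to choose it.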
Exceptions
Exceptions happen. Always. Be prepared to handle them.
An exception comes down to - a risk assessment.
What will your exception process look like for cloud deployments?
How will you document - and track - exceptions?
If an exception leads to a breach, will you know who approved it?
How will you monitor the risk incurred by exceptions?
Will they have time limits?
Exceptions will happen. Often DevOps or security teams want to set up stringent rules
and enforce that everyone must follow those rules. Unfortunately the day will come
when an exception needs to be made, for whatever reason, to allow a less than
secure scenario. The most important thing you can do is be ready for exceptions:
determine how they will be handled, who will approve them, and how you will
document and manage them going forward.
Exceptions should be tracked in such a way that they can feed into your overall risk
reporting. When an exception occurs, can you track who allowed that exception, so in
the case of a breach you can determine who was responsible? Can you give an
exception a time limit and track it in a way that you will be able to go back and
remember that it occurred and time is up? Who will communicate this and how will
you go about getting it prioritized and fixed - before that time is up?
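One way to make exceptions trackable in the way described above is to record the approver and a time limit with each one. A minimal sketch, with hypothetical field names and an invented example:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative sketch of tracking a policy exception so that the approver
# and the expiration date are recorded; all names below are hypothetical.

@dataclass
class PolicyException:
    system: str
    risk_accepted: str
    approved_by: str       # who accepted the risk, for later accountability
    granted: date
    days_valid: int        # every exception gets a time limit

    @property
    def expires(self) -> date:
        return self.granted + timedelta(days=self.days_valid)

    def is_expired(self, today: date) -> bool:
        return today >= self.expires

exc = PolicyException(
    system="legacy-billing-vm",
    risk_accepted="SSH open to office IP range instead of bastion host",
    approved_by="jdoe (CISO)",
    granted=date(2019, 1, 15),
    days_valid=90,
)
print(exc.expires)                       # 2019-04-15
print(exc.is_expired(date(2019, 5, 1)))  # True
```

Records like this can feed your overall risk reporting, so that expired or soon-to-expire exceptions surface automatically instead of being forgotten.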
Change
Change is the one constant in the cloud.
As soon as security creates a policy, the cloud provider will make a change.
How will you monitor for change?
What will you do when it occurs?
One of the most challenging times of year is right after AWS re:Invent
Developers want to try all the new shiny things...how will this be handled?
One thing is for sure in the cloud: things will change all the time. Security teams need
to be prepared for this and consider how they will deal with change in the cloud.
When the cloud provider offers a new service, how long will it take for developers to
be able to try it out? If it takes a long time people may get frustrated. Will you have a
sandbox account? Will you explain the ramifications of new services and potential
security implications to all developers? Will a select group of people be allowed to test
and evaluate the new service? Who will that be and how long will it take? Or will you
allow new services in the development account and monitor for anomalies related to
cost and security? It's probably best to consider these questions up front and
communicate them to developers so they understand the reason and the process for
evaluating new services - especially after big cloud conferences where a cloud
provider releases a bunch of awesome new tools that everyone wants to try out!
By the way, one good thing about the cloud is that anyone can create an account.
When developers could not use services at a particular company, the author
explained that they could easily create their own accounts to try out the services
while they were waiting - and most of the time there's minimal to no expense.
New cloud providers and services (besides IAAS)
Which cloud providers and services will developers be allowed to use?
How will this be enforced?
How will you monitor for new services and features (released frequently)?
Who decides if and when developers can use a new service?
How will new services be vetted?
Will you develop unique standards, policies, and procedures for use?
Besides the IAAS cloud platforms, developers may want to use a myriad of other
cloud providers: DropBox, Google Docs, Evernote, DocuSign, SumoLogic, Datadog,
Nagios, Loggly, SalesForce, OpenShift...where is your data going? How will you
manage requests and communicate policies to developers signing up for new
services? How will you monitor for service usage?
How will these services be vetted? We will look at some ways to vet new services in
the following sections.
Risk
Information security risk management (ISRM)
The process of managing risks associated with the use of information technology.
Considers Confidentiality, Integrity, and Availability (CIA) of Assets.
If an event associated with a risk occurs it can negatively impact the business.
Risk considers business losses should an event occur.
Security people need to present risks to executives accurately.
Risk is ultimately a business decision - and is the responsibility of top executives.
Why do businesses care about risk? A business looks at events that could occur and
considers the likelihood of each event and the losses it would generate. Losses
could be in the form of lost revenue due to downtime or loss of customers due to a
negatively impacted brand. Costs may be incurred, such as employee time spent
dealing with a breach instead of building the business, legal expenses due to
lawsuits, fines, and other negative consequences of a security incident. The
company's stock price may drop and insurance costs may rise.
When reviewing new technology, policies, procedures, and standards, a company is
really trying to determine the appropriate steps to minimize risk and, ultimately,
losses.
Risk as an opportunity
Fire Doesn’t Innovate - by Kip Boyle.
Former CISO sees risk as an opportunity.
If you manage risk you can avoid breaches.
While others deal with the consequences…
Your company can thrive!
Another way to look at risk is as an opportunity. By reducing risk, a company will
not waste time dealing with data breaches. While competitors deal with all the
negative consequences on the previous slide, your company can thrive by eliminating
or mitigating the risks that are causing losses for competitors.
Kip’s book is available on Amazon and may be free to students of this class - just ask!
https://www.amazon.com/Fire-Doesnt-Innovate-Executives-Practical/dp/1544513194
What is the risk of moving data into the cloud?
Some people believe moving to the cloud is a massive risk
Someone else might see your data!
Other people are managing the network and hardware.
Data on a shared host may be accessed by other customers on the host.
How are these risks managed in the cloud?
Technology, assessments, contracts, and monitoring.
Many companies fear moving to the cloud for security reasons. They believe that the
cloud poses a massive risk. Why is this? We have a section later on cloud threats but
some of the key concerns include the fact that the cloud provider might see your data.
Another issue is loss of control and the ability for internal employees to have access
to and manage the systems. Additionally data on a shared host may be accessed by
other customers.
These are all valid concerns; however, companies must weigh them against other
impacts to the business and other internal risks. Just as with the historical
examples that follow, these risks are managed via technology, assessments, and
contracts.
Consider history: Frame Relay
In the early 90s Frame Relay became a thing.
At first companies were skeptical.
Companies formerly used 100% dedicated physical lines.
They switched over to shared physical lines with logical separation.
Ultimately: Cheaper, economies of scale, trust the network provider.
Technology, a risk assessment, a business decision, and contracts.
For those who remember, Frame Relay came out in the early 90s. At the time, a lot
of businesses had dedicated leased lines between different locations to send data
back and forth. By using fixed lines, the companies could be sure no one else was
connected and viewing that data.
At some point, the large telecommunications companies started offering frame relay.
Instead of a dedicated line that only one company could use, the telecom companies
wanted to leverage economies of scale and have multiple companies share the same
lines for a lower cost. These shared lines would have data logically separated via
technology developed by the telecom companies.
Initially companies may have been skeptical, but over time they started using frame
relay because setting up a leased line between every location was cost prohibitive
and not always feasible. Although the risk might be higher than with a dedicated
fixed line, the cost savings, contracts, and trust in the network provider
outweighed the risk.
More history: E-commerce
In the late 90s: No way you were going to put your credit card in a web page.
Banks would never approve the transactions, people said.
A guy started selling books online…
Someone figured out how to encrypt those transactions.
Public Key Infrastructure was created to facilitate trust in third parties.
The business financial opportunities far outweighed the security risks.
Technology, a risk assessment, and a business decision.
Another technology that initially faced a lot of pushback was e-commerce. People
thought that banks would never approve transactions sent over the Internet because
the risk of fraud and loss of funds would ultimately be too great. Ironically, one of the
original e-commerce pioneers, Jeff Bezos, is also the founder of Amazon which runs
AWS, the first major IAAS cloud provider.
As everyone knows, e-commerce ultimately succeeded. New technology allowed
companies to encrypt transactions via certificates validated by a third party. Banks
determined the financial upside from accepting e-commerce transactions was greater
than the risk and potential loss. The solution was a combination of technology to
mitigate the risk, an assessment of the risk versus the potential business upside, and
a business decision to accept the risk.
Still skeptical? Questions to think about.
What is the risk of moving data to the cloud?
Is the risk less than or greater than risks faced on-premises?
What can be done to mitigate that risk?
Is the potential financial upside greater than the downside risk?
Have you performed a risk assessment?
Are your standards, policies, and procedures better than the CSP’s?
Does everyone in your organization follow your policies?
For those who are still concerned with cloud risk, it may be that the risk of moving to
the cloud is too great for your particular organization, or certain applications. However
before drawing this conclusion make sure you evaluate the following:
What is the actual risk of moving to the cloud? Is it the risk that another company
may see your data? Is that risk greater or less than the potential that your
internal systems may be breached?
Is the risk due to the fact that the cloud provider's employees may see your data?
Is there a reason why you trust your own employees more than the employees of
the cloud provider? Do you have more stringent hiring processes? (You might!)
However, how long did you know your current employees before you hired them?
How well do you know them now?
Have you reviewed the cloud provider’s policies and procedures? AWS has very
well documented cloud policies in their Overview of Security Processes whitepaper -
are your security processes better?
https://d1.awsstatic.com/whitepapers/Security/AWS_Security_Whitepaper.pdf
Even if your security processes on paper are better, does everyone actually
follow them? The author of this class has worked at many companies as a contractor,
an employee, and via her business, and has not worked at a company with policies
that match those of AWS and that were actually followed. Amazon has been
extensively audited, including by the US government, to prove that they actually
follow their published policies.
In the end, the potential upside of the business needs to be compared to the
potential losses. You can estimate potential losses based on hypothetical scenarios
as to what risk events may occur and also evaluate actual events to determine the
potential loss for the business. We’ll look at some cloud threats shortly.
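One common way to put a number on such hypothetical scenarios is annualized loss expectancy (ALE): the cost of a single event times its expected yearly frequency. This is a sketch of that general technique, not the course's own method, and every figure below is made up for illustration:

```python
# Illustrative risk math: annualized loss expectancy (ALE).
# ALE = SLE (cost of one event) x ARO (expected events per year).
# All figures below are hypothetical.

def ale(single_loss_expectancy: float, annual_rate: float) -> float:
    """Annualized loss expectancy for one risk scenario."""
    return single_loss_expectancy * annual_rate

# Hypothetical scenario: breach of a customer database.
breach_cost = 500_000   # SLE: downtime, legal fees, brand damage
breach_rate = 0.1       # ARO: expected roughly once every ten years

print(ale(breach_cost, breach_rate))      # 50000.0 per year

# A control costing 20,000/year that halves the likelihood would pay
# for itself: expected losses drop from 50,000 to 25,000.
print(ale(breach_cost, breach_rate / 2))  # 25000.0 per year
```

Comparing the annualized loss against the cost of a control gives executives a business-level basis for accepting, mitigating, or transferring the risk.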
How cybersecurity may benefit from cloud
Examples of benefits of using a public cloud like AWS:
Built-in inventory management and ability to enforce data classification.
The cloud is a huge configuration management platform - if used properly.
Built-in logging with scalable storage.
Automated deployments, security checks, failover, incident response.
Easier to implement segregation of duties.
App specific networking rules and just in time administrative access.
The cloud definitely brings some new risks, but it also offers security teams some
benefits - you'll need to understand and take advantage of them! The cloud has
built-in inventory management: you can simply run a query against the cloud
platform and get back a list of all your servers in the cloud. If you leverage
cloud automation and create a well-defined deployment process, you'll be able to
inventory the software and systems used throughout your organization.
All the cloud services have built-in logging that can be sent seamlessly to
cloud-native platforms. The storage is also scalable, so you don't have to decide
up front how big the servers that store the logs need to be.
So many things can be automated in the cloud. It takes time to learn and implement
the automation, but by investing in automation companies can reduce human error,
prevent incidents, auto-remediate problems, and spend less time on manual repetitive
tasks, instead focusing on things that help the company be more efficient and
profitable in the long run.
It’s easier to implement segregation of duties in the cloud by creating separate
accounts and fine-grained IAM rules. Segregation of duties can ensure two or more
people need to be involved before a risky action can take place.
The cloud makes it easier to create application-specific networking. Using
security groups, which exist in all major IAAS cloud providers, you can limit
which apps can communicate. In the case of a data breach, exposure can be limited.
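The built-in inventory query mentioned above can be sketched in code. The nested structure below mimics the shape of an EC2 DescribeInstances response (Reservations containing Instances); in practice the data would come from boto3's ec2.describe_instances(), but here a hard-coded sample stands in so nothing is called against a real account, and the function name is an illustration:

```python
# Sketch of "query the platform for inventory". The sample dict mimics
# the shape of an EC2 DescribeInstances response; in a real script it
# would come from boto3 (ec2.describe_instances()), not be hard-coded.

def flatten_inventory(response: dict) -> list:
    """Flatten a DescribeInstances-style response into an inventory list."""
    inventory = []
    for reservation in response.get("Reservations", []):
        for inst in reservation.get("Instances", []):
            inventory.append({
                "id": inst["InstanceId"],
                "type": inst["InstanceType"],
                "state": inst["State"]["Name"],
            })
    return inventory

sample = {
    "Reservations": [
        {"Instances": [
            {"InstanceId": "i-0abc", "InstanceType": "t3.micro",
             "State": {"Name": "running"}},
            {"InstanceId": "i-0def", "InstanceType": "m5.large",
             "State": {"Name": "stopped"}},
        ]}
    ]
}
for server in flatten_inventory(sample):
    print(server["id"], server["type"], server["state"])
```

Compare this with an on-premises environment, where building an equivalent inventory usually means deploying and maintaining agents on every machine.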
When the cloud won’t help cybersecurity.
Too much access - developers have full control.
No automation - button clicking - people have access to data.
No oversight to prevent security flaws; lack of standard assessments.
People untrained in network security implementing networks.
Deploying cloud services with no understanding of security controls.
No monitoring.
Monitoring, but no remediation.
Although some aspects of the cloud may benefit a company, rushing to the cloud
without proper assessments and controls creates excessive risk. Some companies
have let developers rule in the cloud. Developers are generally not trained in security
and networking and do not understand the risks posed by poor implementations. In
other cases oversight of the use of new technologies and features has been thrown
out the window in the name of speed, innovation and modern technology. The same
fundamental security principles apply inside the cloud that apply outside the cloud.
Companies that fail to grasp this may experience a nasty breach. The benefits of the
cloud are not realized by companies that do not leverage automation to help enforce
compliance, governance, and proper architectures to reduce risk. We’ll look at some
specific examples near the end of the day.
Risk assessments for cloud providers and services
Companies should establish a process for bringing on new cloud providers.
Each new cloud provider needs to meet the company’s security requirements.
Security and privacy requirements include legal and technical concerns.
To prevent shadow IT include financial and procurement teams.
Establish a standard risk assessment process.
Define roles and responsibilities.
Measure and track risk acceptance and exceptions.
When allowing people to use new cloud services, you'll want to have a process for
evaluating the security of each cloud provider. Make it easy for individuals to
request new services, and make the process and requirements clear. If people
understand security and the reasons behind your process and decisions, it will be
easier to enforce the policies. If your instructions are clear and straightforward,
people with good intentions will comply. Shadow IT is generally a result of poor
communication, unwieldy processes, and rules that are hard to follow. Shadow IT
also occurs when people do not understand or believe the risk exists. This is where
proper training will help.
Your process should involve finance, technical, and legal teams. Often people who
don't know about the process, or are subverting it, will submit a request to a
procurement department or include the purchase on an expense report. At this point,
the financial teams need to be aware of what is and is not a cloud service, or have
someone to ask if they are not sure. These teams can ensure that whoever is
purchasing the service has submitted a request to use it through the proper
channels.
Next have the legal team and security teams work together. The security team may
ask for information from the vendor, or ask the person who wants to use the service to
go get the information from the vendor so the security team can review it. Make it
clear what is required and why. If your process is consistent people will deem it to be
more fair than if you seem to have random requirements or say “it depends” a lot.
Make sure that whatever your security requirements are, they make it into the
contractual obligations for that vendor. If the vendor is supposed to back up the
data, then make sure that is in the contract or a related document. You may have a
standard list of requirements you can add as an addendum to contracts. If any
obligations fall back to you as a customer because they could not be negotiated
into the contract, make sure this is clear to the users of the system.
You may want to maintain a database of these assessments and the overall risk
associated with all of your cloud vendors as a whole, and be able to produce a
report showing that risk and any outstanding exceptions to your standard security
policies.
Cloud provider Audits
When you can't inspect the cloud provider yourself, you can look at third-party
audits.
A SANS survey found the following audits were most commonly requested and reviewed
when assessing cloud providers.
When assessing third-party cloud providers, it is not always possible to get into
their data center or run a penetration test against their systems directly.
Instead, you can ask for evidence that they follow security best practices by
evaluating third-party assessments, audits, and penetration tests performed on the
vendor's environment and systems. We'll talk more about penetration testing on day
5. In terms of audits, the most common types requested from cloud providers are
shown in the diagram on the slide, which comes from the 2019 SANS Cloud Security
Survey.
https://www.sans.org/reading-room/whitepapers/analyst/2019-cloud-security-survey-38940
In many cases the cloud providers already have this documentation and can provide it
quickly. Using a common framework to assess third-party vendors can help ensure
consistency when evaluating the controls and direct access is not possible. Audits are
not perfect but can provide some reassurance that the vendor understands and has
taken the time to implement security best practices.
Lab: Intro to Azure automation
This lab is an introduction to deploying cloud resources using automated and manual
methods. It's also a chance to make sure accounts are set up correctly and lab
tools are working.
Compliance
Compliance typically means adherence to some law or regulation.
Not doing so could result in fines or loss of business.
Required in some industries and jurisdictions.
PCI if processing credit cards
HIPAA if processing health care data
GDPR if storing data of European citizens
SAAS providers are getting SOC 2 compliance to attract customers.
Compliance exists because companies have experienced major problems in the form of
monetary losses, fraud, privacy issues, or data breaches. When too many problems
occur, the government or industry regulatory bodies step in and create rules
companies must follow; otherwise they will face fines or lose the ability to do
certain types of business.
In some cases compliance and audits exist to prove a company is following best
practices. By showing that a company follows best security practices, the organization
may win new business contracts. Many SAAS providers are now trying to obtain
SOC2 compliance for this reason.
Example: PCI compliance
The Payment Card Industry Data Security Standard (PCI DSS) is a way to evaluate
whether or not a company is following best practices in relation to accepting and
handling credit card data. This standard applies to any company that accepts credit
card payments.
https://www.pcisecuritystandards.org/
According to WorldPay, “Between 1988 and 1998, Visa and MasterCard lost $750
million due to credit card fraud.” The companies defined a security standard and a
method for evaluating companies to see if they met those standards. If a company
fails to meet these standards, they may be denied the ability to process credit card
transactions.
https://www.vantiv.com/vantage-point/safer-payments/history-of-pci-data-security-stan
dards
This is an example of some of the PCI requirements. As you can see, it is basically
a checklist companies need to follow if they want to process credit cards. These
same requirements apply to companies that want to process credit cards using
systems hosted in the cloud.
Any type of compliance a company must or hopes to meet outside of the cloud will
still apply when systems are moved to the cloud.
Compliance does not make a company secure
Compliance is a set of standards required by some regulatory body.
It is often a minimum requirement, and may only cover certain scope.
It may be concerned with particular aspects of data protection.
It contains some best practices, but may not be comprehensive.
Regulations can’t be updated fast enough to keep up with new threats.
However...without compliance, some companies would do nothing.
Many companies that achieved compliance requirements, including PCI and others,
have experienced data breaches. How can this be? Compliance requirements are a set
of best practices, but they are often a minimum. Compliance is good - unfortunately,
without it some companies would do nothing - but it is often not enough. The reason
compliance doesn't stop data breaches is that in many cases, compliance and
regulations can't be adjusted fast enough to keep up with evolving threats.
Additionally, compliance is often scoped to the subset of an organization's systems
that are related to the particular compliance being obtained.
Compliance is a shared responsibility in the cloud
Just because you move to the cloud doesn’t excuse you from compliance.
Compliance audits are still required; however, the CSP is partly responsible.
At an IAAS cloud provider some services may be compliant and not others.
Some SAAS providers specialize in compliant services, e.g. SRFax - HIPAA.
Separate accounts can be set up for compliance to limit the scope.
In some cases, auditors are allowing compensating controls.
Automation can help with governance and compliance.
So what happens when we move systems requiring compliance to the cloud?
Compliance still applies. Organizations still need to pass audits. However, in the
cloud, some systems will be the responsibility of the cloud provider. In this case,
the auditor will need to refer to the cloud provider's audit documentation for
those parts of the audit. The other portion will be the responsibility of the
company being audited.
When evaluating services to use at an IAAS provider, note that some individual
services may be compliant while others are not. Evaluate each service individually.
Some SAAS providers specialize in providing compliant services. This may be a good
option for some companies.
Moving systems that require compliance into separate accounts may help limit scope.
In some cases auditors are allowing compensating controls where compliance
requirements created prior to heavy use of cloud systems don’t make as much sense.
This is dependent on the particular auditor making the decision.
Automation can help with governance and compliance. By automatically reviewing
systems before they are deployed, non-compliant systems can generate alerts or be
completely rejected. After systems are deployed automated scans can determine if
systems are compliant.
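The pre-deployment review described above can be sketched as code that encodes a couple of rules and rejects resources that violate them. The rule set, resource format, and function name below are hypothetical illustrations, not tied to any specific compliance tool:

```python
# Hypothetical pre-deployment compliance check: flag or reject resources
# that violate simple, encodable rules. Rules and format are illustrative.

def check_compliance(resource: dict) -> list:
    """Return a list of violations; an empty list means the resource may deploy."""
    violations = []
    if not resource.get("encrypted", False):
        violations.append("storage must be encrypted at rest")
    for rule in resource.get("ingress", []):
        if rule["cidr"] == "0.0.0.0/0" and rule["port"] == 22:
            violations.append("SSH must not be open to the internet")
    return violations

# A proposed (non-compliant) resource definition:
proposed = {
    "name": "cardholder-db",
    "encrypted": False,
    "ingress": [{"cidr": "0.0.0.0/0", "port": 22}],
}
for violation in check_compliance(proposed):
    print("REJECTED:", violation)
```

Checks like this can run in a deployment pipeline before resources are created, and the same rules can be re-run as scans against already-deployed systems.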
AWS Artifact
AWS offers a service called Artifact where customers can access the cloud provider’s
compliance documents. Some will require permission and others can be downloaded
by anyone.
https://console.aws.amazon.com/artifact/home?#!/reports
AWS Compliance Center
AWS also offers a service which helps companies find compliance documents from
around the world. This service is called Atlas.
https://www.atlas.aws/
Azure Compliance Manager
Azure has a service that automatically scans and reports on system compliance. This
service is part of Azure Security Center.
https://docs.microsoft.com/en-us/office365/securitycompliance/meet-data-protection-a
nd-regulatory-reqs-using-microsoft-cloud
Google Compliance
Google has a compliance page where companies can view information about
Google’s compliance with various standards.
Check for compliance at the service level
The cloud providers will generally have pages for specific service requirements.
These screenshots show the list of HIPAA compliant services on AWS and Azure.
HIPAA compliance on AWS:
https://aws.amazon.com/compliance/hipaa-compliance/
HIPAA compliance on Azure:
https://www.microsoft.com/en-us/trustcenter/compliance/hipaa
“Security People Like Lists”
A statement by a developer who was frustrated with security people.
Security professionals like lists for multiple reasons:
Best practices based on the most common causes of data breaches.
Policies and procedures for legal purposes.
Compliance requirements in certain industries and jurisdictions.
Assessment of risk due to cybersecurity weaknesses.
Developers don’t want to just implement lists - they want to know why.
Security professionals get training on a myriad of threats, malware, and compliance
requirements. All of these details can be overwhelming, so the obvious solution is to
make lists of all the things that need to be done to create a secure configuration.
These lists are based on underlying research such as attacks that have occurred in
the past and how to prevent them.
From a developer's perspective, the lists can look like a huge roadblock that makes
no sense. Developers and software engineers are analytical types who need to know
why these lists exist. Providing security training to developers can help them
understand the reasons behind security requirements. Developers can also help
implement security requirements more efficiently.
Security lists exist for a number of reasons:
Past data breaches show the ways in which attackers have obtained access to
systems. Certain lists outline steps to take to prevent similar attacks.
Some security policies and procedures exist for legal or contractual reasons. These
requirements are driven by law and not optional.
Compliance drives certain security requirements. As mentioned, in order to process
credit cards, organizations must adhere to the rules for PCI compliance.
Organizations have created vulnerability lists that help companies find cybersecurity
weaknesses and in some cases include defined metrics for risk assessments. Using
an established standard helps security professionals assess risks in an industry
standard manner.
AWS Well-Architected Framework
Questions to ask about a system.
Covers architecture and security.
Aligns to AWS services.
Limited security questions.
The idea is to keep it simple initially.
Plans for adding more for compliance.
AWS offers the Well-Architected Framework to help companies assess their
architecture. This framework was created because many people were asking AWS
and partner companies for assessments. AWS wanted to create a service that
companies could use to do these assessments themselves.
https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.
pdf
AWS then created a service that companies can use to track answers to questions
and architecture status over time.
https://aws.amazon.com/well-architected-tool/
This questionnaire and its associated design principles cover more than just security.
We'll take a look at the security portions of this framework. As you will see, the
questions are pretty open-ended; this is just a starting point, and Amazon has plans
to add more detail over time. The framework is also designed to work in any cloud or
environment, not just on AWS.
Identity and Access Management
AWS Well-Architected Framework Identity and Access Management questions.
Detective Controls
Infrastructure Protection
Detective controls and Infrastructure Protection Questions.
Data Protection
Incident Response
Data Protection and Incident Response questions.
AWS Well-Architected Tool
This screenshot shows the AWS Well-Architected Tool in the AWS console. You can
get to this tool by logging into AWS and searching for the AWS Well-Architected Tool
in the list of AWS services.
AWS Well-Architected Tool - Improvement Plan
The AWS Well-Architected Tool allows you to track architecture risks over time.
Azure Scaffold (Cloud Adoption Framework)
Azure Scaffold offers best practices for Azure deployments
Azure Scaffold offers best practices for deployments in an Azure account. The Azure
Scaffold is more about best practices for account structure than for individual
applications like the AWS Well-Architected Framework. It also applies only to Enterprise accounts.
Account structure is covered in more detail in other parts of the class. However, you’ll
want to think about governance and account structure as early as possible and
structure your accounts and services so you can manage policies at the
organizational level if needed.
https://docs.microsoft.com/en-us/azure/architecture/cloud-adoption/appendix/azure-sc
affold
Center for Internet Security (CIS) Critical Controls
A prioritized set of actions organizations can take to protect against known cyber attack vectors.
Based on known attacks and data breaches.
Claims to stop 85% of attacks.
Center for Internet Security offers a set of controls derived from studying data
breaches and attack patterns. This well-known set of security controls claims to stop
85% of attacks.
https://www.cisecurity.org/controls/
You can download the latest set of CIS critical controls with more details here:
https://learn.cisecurity.org/cis-controls-download
CIS controls applied to the Target Breach
Do the CIS controls work?
This case study applies the critical controls to the Target breach.
It demonstrates how these controls may have prevented the data loss.
https://www.sans.org/reading-room/whitepapers/casestudies/case-study-criti
cal-controls-prevented-target-breach-35412
On day five we’ll look at how a similar system might be architected in the
cloud to prevent similar attacks.
Do the critical controls work? A prior version of the critical controls was applied to the
Target breach to see if and how they could have helped. Hypothetically, if the controls
had been applied, the breach would have been harder to accomplish and possibly
prevented.
One of the labs for this class involves taking a look at the Target architecture and
redesigning it to work in the cloud. You can apply the critical controls to your new
architecture and consider if they would help in the cloud the same way they would
help on premises.
CIS Benchmarks
Over 100 configuration guidelines.
Security best practices for configuring commonly used technology components.
In addition to the critical controls the Center for Internet Security offers CIS
benchmarks for different applications and products. These benchmarks help you
ensure your systems are configured according to best practices.
https://www.cisecurity.org/cis-benchmarks/
For example, we applied the CIS benchmarks to the AWS AMI created for this class.
You’ll get to see how we did that in an upcoming lab and try it out for yourself. Here
are some of the benchmarks that may be applicable to your cloud infrastructure and
applications:
Amazon Linux
Amazon Web Services
AWS Three Tier Web Architecture
CentOS Linux
Google Cloud Computing Platform
Kubernetes
Microsoft Azure
Microsoft Windows Server
Ubuntu Linux
VMWare
Docker
Vendor Baselines and Best Practices
Each vendor will publish baselines and best practices.
Each individual cloud service or product will have specific guidance.
This sounds obvious, but people don’t do it in my experience:
Read The Cloud Manual!
We’ll cover as much as possible in class, but it’s still a good idea to read up.
AWS Security best practices:
https://d1.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf
Azure Security best practices:
https://docs.microsoft.com/en-us/azure/security/fundamentals/best-practices-and-patt
erns
Google Security best practices:
https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations
Microsoft Office baseline:
https://blogs.technet.microsoft.com/secguide/2018/02/13/security-baseline-for-office-2
016-and-office-365-proplus-apps-final/
Proposed new baseline:
https://techcommunity.microsoft.com/t5/Microsoft-Security-Baselines/Security-baselin
e-for-Office-365-ProPlus-v1907-July-2019-DRAFT/ba-p/771308
OWASP Top 10
OWASP, or the Open Web Application Security Project, is an organization that
focuses on secure coding and application security best practices. The website
includes examples and testing methodologies.
OWASP is developing a Serverless Top 10, but at this time it's the same!
OWASP Top 10
https://www.owasp.org/images/7/72/OWASP_Top_10-2017_%28en%29.pdf
_______________________________________________________________________
________
A1:2017-Injection
Injection flaws occur when an attacker sends unexpected input that allows the
attacker to send commands to the underlying system and execute unauthorized
code.
Magento / Magecart (British Airways, Ticketmaster, Newegg, and more)
https://duo.com/decipher/critical-magento-flaw-puts-commerce-sites-at-risk
_______________________________________________________________________
________
A2:2017-Broken Authentication
Improperly implemented authentication may allow attackers to steal or manipulate keys,
credentials, passwords, and session tokens, etc. to gain access to systems.
Facebook access token breach, September 2018
https://www.theguardian.com/technology/2018/sep/28/facebook-50-million-user-accou
_______________________________________________________________________________
A3:2017-Sensitive Data Exposure
Failure to encrypt data in transit and at rest. Additionally, beware of data cached in
memory, output to log files, maintained in cookies, and other storage locations.
Facebook passwords unencrypted for years, reported March 2019
https://krebsonsecurity.com/2019/03/facebook-stored-hundreds-of-millions-of-user-pa
sswords-in-plain-text-for-years/
_______________________________________________________________________
________
A4:2017-XML External Entities (XXE)
Poorly designed XML processors allow for data exposure.
Wordpress
https://www.zdnet.com/article/wordpress-vulnerability-affects-a-third-of-most-popular-
websites-online/
_______________________________________________________________________
________
A5:2017-Broken Access Control
Improper data access restrictions allow attackers to access other people’s data and
accounts.
Salesforce
https://threatpost.com/salesforce-com-warns-marketing-customers-of-data-leakage-sn
afu/134703/
_______________________________________________________________________
________
A6:2017-Security Misconfiguration
Simply exposing data or creating vulnerabilities through improper configurations.
S3 bucket breaches (many cases - 2018)
https://businessinsights.bitdefender.com/worst-amazon-breaches
Database exposure (many cases - March 2019)
https://www.infosecurity-magazine.com/news/indian-mongodb-snafu-exposes-info-1/
https://www.bleepingcomputer.com/news/security/open-mongodb-databases-expose-
chinese-surveillance-data/
https://securitydiscovery.com/800-million-emails-leaked-online-by-email-verification-se
rvice/
_______________________________________________________________________________
A7:2017-Cross-Site Scripting (XSS)
XSS flaws allow attackers to inject and execute malicious scripts within an application.
Magecart skimmer software (British Airways, Newegg, and others):
https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/magecart-an-overvi
ew-and-defense-mechanisms/
_______________________________________________________________________________
A8:2017-Insecure Deserialization
Insecure deserialization of untrusted data can allow attackers to execute code remotely.
Equifax:
https://arstechnica.com/information-technology/2017/09/massive-equifax-breach-caus
ed-by-failure-to-patch-two-month-old-bug/
_______________________________________________________________________
________
A9:2017-Using Components with Known Vulnerabilities
Using components with known CVEs (common vulnerabilities).
Equifax:
https://arstechnica.com/information-technology/2017/09/massive-equifax-breach-caus
ed-by-failure-to-patch-two-month-old-bug/
_______________________________________________________________________
________
A10:2017-Insufficient Logging & Monitoring
Insufficient logging allows an attacker to infiltrate systems, stay there and continue to
pivot to other systems. Sometimes attackers remain in breached systems for years.
Marriott (Starwood hotels):
https://www.nytimes.com/2018/11/30/business/marriott-data-breach.html
_______________________________________________________________________
________
OWASP is also working on a Serverless Top 10 ~ but at this time it's the same.
https://github.com/OWASP/Serverless-Top-10-Project/
MITRE ATT&CK
The MITRE ATT&CK framework is a knowledge base of common tactics and
techniques based on real-world events. This slide only shows part of the list. The
framework applies mostly to traditional application layers and does not include many
cloud-specific attacks, but the same attacks apply to applications deployed in the
cloud using similar technologies.
https://attack.mitre.org/matrices/enterprise/
If you have time you can also contribute to the MITRE ATT&CK framework if you are
aware of new types of breaches and attacks:
https://attack.mitre.org/resources/contribute/
CAPEC ~ Common Attack Pattern Enumeration & Classification
CAPEC or the Common Attack Pattern Enumeration & Classification framework
organizes attacks by mechanisms of attack and domains of attack.
https://capec.mitre.org/index.html
NIST ~ National Institute of Standards & Technology (US)
6 step process - one government, one standard - reciprocity.
If an agency wants to use a system another audited, no need to re-audit.
NIST 800-145 Definition of Cloud Computing (a bit dated ~ 2011).
NIST 800-53 Security and Privacy Controls (controls to meet FISMA requirements).
FIPS - Federal Information Processing Standards (Cryptography).
Cybersecurity Framework (security best practices).
NIST or the National Institute of Standards & Technology is a US government
organization that publishes security best practices.
NIST 800-53 is a set of guidelines to help government agencies and contractors meet
FISMA (Federal Information Security Management Act) requirements.
Other countries have similar organizations that define the controls government
agencies need to follow. In the US many companies use the NIST guidelines even
outside of the federal government as a list of best practices.
Some other NIST documents:
NIST Definition of cloud computing ~ a bit dated but still referenced at times:
https://csrc.nist.gov/publications/detail/sp/800-145/final
800-30 Guide for Conducting Risk Assessments
800-37 Risk Management Framework (RMF)
800-39 Managing Information Security Risk
800-137 Continuous Monitoring
800-60 Data Categorization
800-171 Protecting Controlled Unclassified Information (CUI)
NIST - 6 steps still applicable to cloud systems
1. Document the system.
2. Define the controls and overlays.
3. Document how your system/application implements each control.
4. Assess security controls - Assessors look at and test controls.
5. Risk management step - Risk executive accepts or tells you to fix the risk.
6. Continuous monitoring - ensure systems meet the controls over time.
The NIST framework defines 6 steps organizations should follow to maintain secure
systems. These same 6 steps are still applicable to systems deployed to the cloud:
1. Document the system
2. Define the controls and overlays
3. Document how your system/application implements each control
4. Assess security controls - Assessors look at and test controls
5. Risk management step - Risk executive accepts the risk or tells you to fix it
6. Continuous monitoring - ensure systems meet the controls over time
Based on the CIA Triad
Confidentiality
What is the impact on your mission if this information got out?
Integrity
Would changing the data affect your mission?
Availability
Could your mission continue if the data was unavailable?
CIA stands for confidentiality, integrity, and availability. Organizations can use these
three characteristics of data security to evaluate risk and business impact if a system
is breached. Different organizations will place more importance on one or the other
depending on the impact of failure to maintain confidentiality, integrity, or availability.
Categorizing data
Categorize data to determine the risk level if different types of data are exposed,
changed inappropriately or manipulated in some way, or made unavailable.
Using the CIA triad you can categorize your data.
First, break down data into different data type categories. You could use the categories
shown above or other categories that make sense for your business.
Next, for each characteristic of the CIA triad, determine whether the risk for each
category of data is high, medium, or low.
Based on this information you can determine whether the fix for a particular
vulnerability in a certain system needs to be prioritized.
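The categorization steps above can be sketched in a few lines. The data types, ratings, and scoring scheme below are illustrative examples only, not a standard; substitute categories that fit your own business.

```python
# Illustrative CIA-triad data categorization: rate each data type on
# confidentiality, integrity, and availability, then derive a simple
# remediation priority score. All categories and ratings are examples.

LEVELS = {"low": 1, "medium": 2, "high": 3}

DATA_TYPES = {
    "payment-card-data": {
        "confidentiality": "high", "integrity": "high", "availability": "medium",
    },
    "public-marketing-site": {
        "confidentiality": "low", "integrity": "medium", "availability": "high",
    },
}

def priority(ratings: dict) -> int:
    """Higher score = fix vulnerabilities touching this data type first."""
    return sum(LEVELS[v] for v in ratings.values())

# Rank data types so the riskiest categories float to the top.
for name, ratings in sorted(DATA_TYPES.items(), key=lambda kv: -priority(kv[1])):
    print(f"{name}: priority {priority(ratings)}")
```

A real program would likely weight the three characteristics differently depending on which matters most to the mission.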
Overlays and common control providers
Overlays
Subset of all the controls that apply to a particular organization.
NIST, PCI, etc.
Common control providers
DNS server was already assessed then don’t re-assess.
If the cloud provider systems were already assessed, use that assessment.
When looking at which controls need to be audited consider overlays and common
control providers.
First, you need to see which sets of controls apply to your organization. Are you trying
to become SOC2 compliant? Do you host personal data for European citizens
(GDPR)? Do you process credit cards (PCI) or health data (HIPAA)?
Choose all the controls that apply to you based on the required frameworks for
auditing and monitoring compliance.
Next, determine if there are any common control providers. If a system has already
been assessed in one audit, it shouldn't need to be re-assessed in a second. In the
case of the cloud, any controls that are the responsibility of the cloud provider likely
fall into the category of a common control provider. You can show the auditor the
cloud provider's audit documentation to prove that those compliance control
requirements are satisfied.
Cybersecurity Framework
The NIST Cybersecurity framework has specific controls that are well
defined and numbered.
NIST Cybersecurity Framework controls include specific tests to determine
whether a control passes inspection. Compare this to the open-ended
questions of the AWS Well-Architected Framework, for example; these are
very different methods of evaluating cybersecurity.
The NIST Framework was created to bring consistency to the way systems
are audited.
https://www.nist.gov/cyberframework
Spreadsheet:
https://www.nist.gov/file/448306
Cloud Security Alliance (CSA)
Security Guidance for Critical Areas of Focus in Cloud Computing
GRC Stack (2010)
CSA Star
Certifications for cloud professionals.
International organization with local chapters.
The Cloud Security Alliance was established in 2008 to explore what steps should be
taken for best cybersecurity practices in a cloud environment. This organization now
has chapters around the world.
https://cloudsecurityalliance.org/chapters/global/
If you don’t have a Cloud Security Alliance chapter near you, then you can start one!
https://cloudsecurityalliance.org/chapters/
CSA Star
Cloud Security Alliance (CSA) STAR ~ certification for cloud providers.
Three levels of assurance - Self Assessment, 3rd party, continuous auditing.
Leverages the following documents:
Cloud Controls Matrix (CCM)
Consensus Assessments Initiative Questionnaire (CAIQ)
Code of Conduct for GDPR Compliance
The Cloud Security Alliance offers a certification called CSA STAR. This gives cloud
providers a way to demonstrate they are following security best practices in the cloud.
https://cloudsecurityalliance.org/star/#_overview
CSA Star Open Certification Framework
The framework provides incremental steps cloud providers can take to get certified.
Level 1 - Self Assessment + GDPR Code of Conduct
Submit one of the following:
A completed Consensus Assessments Initiative Questionnaire (CAIQ)
A report documenting compliance with Cloud Controls Matrix (CCM)
For GDPR, both of the following apply:
Code of Conduct Statement of Adherence
Self-assessment results based on the PLA Code of Practice (CoP) Template
Level 2 - Attestation and Certification
Attestation - CPAs conduct SOC 2 assessments using criteria from:
AICPA (Trust Service Principles, AT 101)
CSA Cloud Controls Matrix
Certification - Third-party independent assessment of the security of a CSP
Level 3 - Continuous Auditing
Currently under development
Enables automation of the current security practices of cloud providers
Providers publish their security practices according to CSA specifications
Customers and vendors can retrieve and use data in a variety of contexts
CCM
The CCM is the only meta-framework of cloud-specific security controls mapped to
leading standards, best practices, and regulations. The CCM provides organizations
with the structure, detail, and clarity they need relating to information security
tailored to cloud computing. It is currently considered a de facto standard for cloud
security assurance and compliance.
CAIQ
The CAIQ is based upon the CCM and provides a set of yes/no questions a cloud
consumer and cloud auditor may wish to ask a cloud provider to ascertain its
compliance with the Cloud Controls Matrix.
This is a very useful questionnaire because it covers a lot of different aspects of
security. In the questionnaire the questions are aligned with various compliance and
security frameworks such as PCI, HIPAA, and NIST. If your particular framework does
not exist in the questionnaire, you can add a column and map these questions to it.
Then you can search the CSA database to see if the cloud provider has already
filled out this questionnaire. This should save people who perform risk assessments,
and those who provide data for them, a lot of time.
Other Frameworks
Control Objectives for Information and Related Technology (COBIT) ~ ISACA
Information Technology Infrastructure Library (ITIL)
International Organization for Standardization (ISO)
Common Security Framework (CSF) ~ HITRUST
Australian Signals Directorate (ASD) Essential 8
NZISM Protective Security Requirements (PSR) Framework
Control Objectives for Information and Related Technology (COBIT)
COBIT is a framework focused on identifying and mitigating risk released in 1996 by
ISACA. Initially designed for governance, it has evolved into helping align business
and IT objectives. COBIT is mostly used in the financial industry to help comply with
standards like Sarbanes-Oxley.
ITIL
ITIL also attempts to align business and IT objectives, with the goal of delivering
services in a predictable manner. It was created in the 1990s by the UK Central
Computer and Telecommunications Agency (CCTA).
ISO
ISO was founded in 1947 by delegates from 25 countries and began with 67
technical committees. The group wanted to ensure products and services are safe,
reliable, and of good quality. ISO created widely used cybersecurity standards such
as 27001 and 27002 that demonstrate the quality of an organization's
cybersecurity program. ISO 27002 has the following sections:
1. Risk assessment
2. Security policy
3. Organization of information security
4. Asset management
5. Human resources security
6. Physical and environmental security
7. Communications and operations management
8. Access control
9. Information systems acquisition, development and maintenance
10. Information security incident management
11. Business continuity management
12. Compliance
Common Security Framework (CSF)
HITRUST (Health Information Trust Alliance) is a privately held company located in
the United States that has established a Common Security Framework (CSF) that can
be used by all organizations that create, access, store or exchange sensitive and/or
regulated data.
Telos Xacta ~ Product for risk assessments
Telos Corporation has an interesting risk management product and approach that was
used in the evaluation of AWS for the second US GovCloud region.
https://www.telos.com/cyber-risk-management/xacta/continuous-compliance-assessm
ent/
The first assessment of the AWS government cloud (GovCloud), an air-gapped cloud
not connected to the Internet, took two years. To get it done faster, the US
government asked another company to do the assessment; that company had only four months.
Here’s a video from the AWS Summit in Singapore where one of the people involved
is talking about the assessment:
https://www.youtube.com/watch?v=ru0-lL09aPc&index=13&list=PLhr1KZpdzukcqM9
wmBu9nZLbuOXlXuwK8
Track applicable controls
The Telos Xacta system lets you track security controls.
Select your control sets
Select the control sets that apply to your organization, such as NIST or PCI, to add
that information to the system.
Inheritance ~ What has AWS already covered?
In the case of AWS, for example, the system will tell you which controls are already
covered by AWS audits.
Security Assessment ~ Pass, Fail, Monitor
Assessors can go through the system and mark controls as pass or fail. The system
allows you to track audits and controls over time.
After a breach
- After a breach, check this system
- Figure out what mission that system was serving
- Figure out what controls were in place
- Were there any risks that were not mitigated properly?
- Is there a paper trail regarding any exceptions or failure to mitigate?
Lab: Intro to
GCP automation
This lab is an introduction to deploying cloud resources using automated and manual
methods. It's also a chance to make sure accounts are set up correctly and lab tools
are working.
Costs and Budgeting
Any company moving to the cloud will want to consider cloud costs and budgeting.
This includes security teams! There are multiple reasons why security teams need to
consider cloud costs, as we will discuss.
Costs and Budgeting

                             AWS                            Azure                           GCP
Budgets                      AWS Budgets                    Azure Budgets                   Google Budgets
Billing & Cost Management    Billing & Cost Management      Billing & Cost Management       Cost Management
Reports                      AWS Budget Reports             Azure Cost Reporting            Billing Reports
Billing APIs                 Cost Explorer API              Azure Cost Management APIs      Google Billing APIs
Right-sizing                 Right-sizing Recommendations   Optimization Recommendations    Sizing Recommendations
Advisor                      Trusted Advisor                Azure Advisor
Cost control - security?
Watching your costs might help you determine if you have a security problem.
Attackers spin up cryptominers that can increase cloud bills.
Charges for non-compliant and unauthorized services.
Need to evaluate the cost of security services and options.
Work with people in finance to find rogue cloud accounts.
To find the price of any service, search “[service name] pricing” in Google.
There are many reasons why security teams need to consider the cost of cloud
accounts.
Of course security teams will want to understand the cost of the systems they are
evaluating and determine if those systems are within the desired budget. Systems
may need to be architected to minimize or limit spending.
Another reason security teams want to be aware of costs is that an increase in
cost may indicate a security problem. Companies have racked up large bills due to
stolen account credentials. Attackers use the stolen credentials to create
unauthorized resources. In other cases, cryptominers are deployed on systems that
increase CPU usage and network traffic and may cause a company to incur additional
costs.
Finally, the security team may want to coordinate with members of the accounting
department who are paying the bills. Find out if employees are expensing or paying
for rogue cloud accounts that were created outside of approved channels.
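As a rough illustration of using cost as a security signal, here is a minimal anomaly check. The daily cost figures are made up; in practice you would pull daily spend from your provider's billing or cost APIs and tune the threshold to your own spending patterns.

```python
# Simple cost-anomaly check: flag any day whose spend is far above the
# average of the preceding days, which can be an early sign of
# cryptomining or unauthorized resources. Figures are hypothetical.

from statistics import mean

def anomalous_days(daily_costs: list, factor: float = 2.0) -> list:
    """Return indexes of days whose cost exceeds factor x the prior average."""
    flagged = []
    for i in range(1, len(daily_costs)):
        baseline = mean(daily_costs[:i])  # average of all earlier days
        if daily_costs[i] > factor * baseline:
            flagged.append(i)
    return flagged

costs = [102.0, 98.5, 110.0, 105.0, 430.0]  # day 4 spikes well above baseline
print(anomalous_days(costs))
```

A real monitor would also account for expected growth and weekly cycles; this sketch only shows the core idea of comparing spend to a recent baseline.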
How much will that cost?
Every service has a pricing formula.
You’ll need to understand your inputs to get your output (cost).
Tuning and tweaking applications can reduce cloud costs.
No matter how much you try to predict what your costs will be…
Beta test early to validate your cost estimates
Use calculators provided by cloud providers - or a spreadsheet.
Vendors with old licensing models do not align with pay-as-you-go services.
Cloud pricing is based on a formula for each cloud service. Different services will have
different formulas to determine the cost. For example, some services may charge
based on bandwidth. S3 buckets charge based on the number of gets and puts into a
bucket plus storage. Data transferred into AWS is free. Data transferred out or
between accounts has a cost. Each service will have its own unique pricing model
and metrics.
To come up with a price for an application you’ll need to understand the inputs. If the
cost is based on gets (when you request a file) and puts (when you upload a file) how
many files will you be adding and retrieving from the S3 bucket? What will the total file
storage size be? For an EC2 instance, how many hours will it run? Don’t forget about
the attached EBS volume which costs money even when the EC2 instance is
stopped.
Once you have the inputs you can plug those into the formula for a service to get your
cost. But no matter how hard you try to think of every aspect of the system that may
incur a fee, beta test and validate your assumptions as early as possible in case any
surprise costs drive an architectural change.
The cloud providers offer calculators as we’ll see that can help, or you can use a
spreadsheet.
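The pricing-formula idea can be sketched in a few lines of Python - the same arithmetic a spreadsheet would do. Every per-unit rate below is a placeholder for illustration, not a current AWS price; substitute the numbers from the provider's pricing pages.

```python
# Hypothetical monthly cost estimate for a small app: an EC2 instance,
# its attached EBS volume, and an S3 bucket. All rates are PLACEHOLDERS.

RATES = {
    "ec2_hour": 0.0116,       # on-demand instance, per hour
    "ebs_gb_month": 0.10,     # attached volume, per GB-month (billed even when stopped)
    "s3_gb_month": 0.023,     # storage, per GB-month
    "s3_put_per_1k": 0.005,   # PUT requests, per 1,000
    "s3_get_per_1k": 0.0004,  # GET requests, per 1,000
}

def estimate_monthly_cost(ec2_hours, ebs_gb, s3_gb, puts, gets, rates=RATES):
    """Plug the inputs into each service's pricing formula and sum."""
    return round(
        ec2_hours * rates["ec2_hour"]
        + ebs_gb * rates["ebs_gb_month"]
        + s3_gb * rates["s3_gb_month"]
        + (puts / 1000) * rates["s3_put_per_1k"]
        + (gets / 1000) * rates["s3_get_per_1k"],
        2,
    )

# 720 instance-hours, 30 GB EBS, 100 GB in S3, 50k PUTs, 2M GETs
print(estimate_monthly_cost(720, 30, 100, 50_000, 2_000_000))  # 14.7
```

Changing one input (say, doubling the GET count) immediately shows its effect on the total, which is exactly the kind of what-if analysis to do before beta testing validates the estimate.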
Note that vendors with per-machine licensing models don’t work well in the cloud
where architectures should be scalable. You want to only be paying for resources
while they are in use and only have as many resources deployed as required to
handle the load and maintain high availability. If you expect your system to scale to 5
instances and you have to buy 5 expensive licenses when most of the time you’re
only running two instances, that licensing model is not aligned with the cloud.
For a deeper dive on cloud pricing check out the appendix of this whitepaper:
https://www.sans.org/reading-room/whitepapers/detection/paper/37905
AWS also has a whitepaper on how AWS pricing works:
https://d1.awsstatic.com/whitepapers/aws_pricing_overview.pdf
AWS EC2 pricing ~ on demand
This is a screenshot of the AWS EC2 pricing page showing the cost of a Linux
instance charged at an hourly rate. Note that you aren’t charged for a full hour if you
only run the EC2 instance for 5 minutes. The cost is prorated.
Reserved instances (Pay Up Front)
If you pay up front for instances you can get a discount. Instances can be reserved for
a year or longer. If you reserve an instance and subsequently do not use it, you’ll still
have to pay for it. If you reserve a set of instances across multiple accounts, any
account will get the discounted rate for that pool of instances.
Spot Instance Pricing (Bid)
On AWS you can also bid on an EC2 instance to pay a lower rate. Note that if you bid and your price is accepted, but later the market price rises above your bid, your instance may be terminated. This model works well for batch processes that can tolerate interruption and be restarted.
Cost of Security Services
❏ Did the calculation of the cost of an application include security costs?
❏ How much will the log storage cost?
❏ Vulnerability scanner?
❏ Will you have a WAF (Web Application Firewall) in front of your website?
❏ Will a WAF front APIs exposed to the Internet?
❏ Do you need other security services with separate licensing costs?
❏ Will cloud encryption keys be used and how many?
❏ What security services will be enabled in each account?
❏ Has the overall cloud cost estimate included these costs?
❏ How are security costs handled by accounting in cloud environments?
Some questions to ask when determining the cost of security services in your cloud
accounts:
Did the calculation of the cost of an application include security costs?
How much will the log storage cost? Vulnerability scanner?
Will your website have a WAF (Web Application Firewall) associated with it?
Do you need other security services with separate licensing costs?
Have those in charge of estimating cloud costs overall included these costs?
How are security costs handled by accounting in cloud environments?
AWS GuardDuty ~ North Virginia
This is a different type of service - AWS GuardDuty. In this case you'll pay for the amount of logs processed. The pricing is tiered, so as the amount of logs processed increases, the per-unit price goes down.
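Tiered pricing like this can be sketched as a simple loop. The tier boundaries and per-GB prices below are hypothetical, not actual GuardDuty rates:

```python
# Tiered pricing sketch: later units are billed at cheaper rates.
# Tier sizes and prices are HYPOTHETICAL examples.

TIERS = [                   # (GB covered by this tier, price per GB)
    (500, 1.00),            # first 500 GB
    (4500, 0.50),           # next 4,500 GB
    (float("inf"), 0.25),   # everything beyond 5,000 GB
]

def tiered_cost(gb, tiers=TIERS):
    """Walk the tiers, billing each slice of usage at its tier's rate."""
    total, remaining = 0.0, gb
    for size, price in tiers:
        used = min(remaining, size)
        total += used * price
        remaining -= used
        if remaining <= 0:
            break
    return total

print(tiered_cost(600))    # 500*1.00 + 100*0.50 = 550.0
print(tiered_cost(6000))   # 500 + 2250 + 250 = 3000.0
```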
CloudWatch ~ Log almost anything you want
VPC Flow Logs may end up here, though they can also be sent to S3
AWS CloudWatch is another logging service. For this service you’ll pay a fee per GB
of data collected and stored.
AWS “simple” monthly calculator
AWS has a simple monthly calculator which can help you determine the cost of a
project. It has some of the pricing formulas in the tools so you can plug in numbers
and get a price. It may not have every service so you’ll have to make sure you’re not
missing something. Some people like it. Other people find it easier to use a
spreadsheet.
https://calculator.s3.amazonaws.com/index.html
AWS Total Cost of Ownership Calculator
AWS also has a total cost of ownership calculator which helps you compare the costs
of servers and virtual machines. It generates reports that can be used in executive
presentations.
https://aws.amazon.com/tco-calculator/
AWS Enterprise Agreements
AWS offers Enterprise agreements.
Claim to offer up to 75% discount on services.
If you have large spend or interesting products ~ ask…
Companies with large spend heavily influence new features ~ ask…
Usually a commitment to spend a certain amount (reserved pricing).
Don’t need an enterprise agreement to link accounts (Consolidated Billing).
AWS offers discounts for enterprise agreements. These usually come with a commitment to spend a certain amount of money.
https://aws.amazon.com/pricing/enterprise/
AWS Budgets
Get alerts when you pass or are forecasted to exceed the budget you set.
Set alerts for reserved instance (RI) utilization that drops below a threshold.
RI alerts for EC2, RDS, Redshift, and ElastiCache reservations.
Monthly, quarterly, or yearly with customizable start and end dates.
Track other dimensions like AWS services, linked accounts, tags.
Can be created in the UI or in an automated fashion.
AWS budgets allow you to set alerts if your spending exceeds a certain dollar amount.
We’ll look at creating alerts in the upcoming lab.
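The "automated fashion" bullet can be sketched as follows. This builds a request body of the shape the AWS Budgets CreateBudget API expects (usable with `aws budgets create-budget --cli-input-json`, with the account ID supplied separately); verify the field names against the current AWS documentation, and note the budget name and email address are made up.

```python
import json

def monthly_cost_budget(name, limit_usd, email):
    """Build a CreateBudget request body: a monthly cost budget with an
    email alert when actual spend passes 80% of the limit. Field names
    follow the AWS Budgets API as best understood -- verify before use."""
    return {
        "Budget": {
            "BudgetName": name,
            "BudgetLimit": {"Amount": str(limit_usd), "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        "NotificationsWithSubscribers": [
            {
                "Notification": {
                    "NotificationType": "ACTUAL",
                    "ComparisonOperator": "GREATER_THAN",
                    "Threshold": 80.0,          # percent of the limit
                    "ThresholdType": "PERCENTAGE",
                },
                "Subscribers": [
                    {"SubscriptionType": "EMAIL", "Address": email}
                ],
            }
        ],
    }

body = monthly_cost_budget("class-budget", 50, "security@example.com")
print(json.dumps(body, indent=2))
```

Generating budgets from code means every new account can get a spending alert automatically, rather than relying on someone remembering to click through the console.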
Azure VM pricing ~ on demand
Azure has a similar pricing model, though not laid out in as much detail as AWS on
their pricing page.
Azure VM Pricing ~ pay up front
You can also pay up front for a discount on Azure.
Azure MFA Pricing
(MFA is free on AWS)
Azure charges for MFA - AWS and Google do not.
Azure Calculator
Azure also has a calculator. Similar to AWS you can plug in values and get projected
costs.
https://azure.microsoft.com/en-us/pricing/calculator/
Azure Total Cost of Ownership Calculator
Just like AWS, Azure has a TCO calculator.
https://azure.microsoft.com/en-us/pricing/tco/calculator/
Azure Enterprise Agreements ~ or...
Azure licensing is complicated.
Some things fall under Office 365 ~ e.g. Office 365 Premium for MFA.
Companies can move existing data center licenses to Azure for discount.
Less than 1% of Azure customers get the Enterprise Agreement.
Requires upfront spend commitment minimum $1000 per month.
Typically manage through subscriptions, not accounts.
The Azure licensing model is more complicated than the AWS model due to related
products and services. For example, if you want MFA with Azure, you sign up for this
under Office 365. Additionally, companies can move existing licenses from their data
center to Azure for a discount.
Azure also has enterprise agreements. Less than 1% of Azure customers leverage
the Enterprise Agreement per a recent discussion with an Azure support
representative. The Enterprise agreement requires a minimum spending commitment
of $1000 per month for three years.
Typically customers manage different billing for different departments through
subscriptions, rather than accounts. However for companies with an Enterprise
agreement they can link accounts.
It used to be that you could only get an Enterprise agreement from a Microsoft
partner, but now anyone can get an enterprise agreement.
Azure Billing Management
Tenant = Azure Active Directory.
Subscriptions = Bills.
You can transfer payment of a subscription to another entity.
That’s how you can link different subscriptions.
Azure enterprise account setup cannot be automated at this time.
A few terms you’ll want to know on Azure:
Tenant = Azure Active Directory
Subscriptions = Bills
When structuring accounts and subscriptions consider who will pay the bill. If the bill
needs to be split between two departments you might want to create two separate
subscriptions to make it easier to track and handle in your accounting department.
Azure Budgets
Search for “Subscriptions”
Select a subscription
Click on Budgets
Budgets can also be created in PowerShell
Azure also has a budgets feature that allows you to set up a budget and get alerts if
you go over that budget.
https://docs.microsoft.com/en-us/azure/cost-management/tutorial-acm-create-budgets
https://azure.microsoft.com/en-us/resources/templates/create-budget/
Google Cloud Platform
Google doesn’t seem as well equipped to handle enterprise at this time.
However the new CEO of Google claims Google is focused on this goal.
Google has a budgeting feature and calculators.
Google groups billing by projects instead of subscriptions or accounts.
They claim to be cheaper than AWS for virtual machines.
Security services exist but are often more limited.
Google cloud budget notifications
https://cloud.google.com/billing/docs/how-to/budgets#manage-notifications
Google cloud calculator
https://cloud.google.com/products/calculator/
Tracking costs via account structure
When structuring cloud accounts consider the bills
In AWS a bill is associated with an account
In Azure the bill is associated with a subscription
In Google Cloud a bill is associated with a billing account and its projects
Use the constructs to determine who gets the bill for a set of resources
Try to structure cloud bills so you don’t have to track individual resources.
Each cloud provider has a different way of structuring accounts and billing. Dealing
with the bills and tracking resources is much easier if you consider who is going to
pay the bill when you set up your initial account structure.
On AWS you can set up multiple accounts. Each account gets its own root user and
bill. The bills can be linked via consolidated billing. Another service called
Organizations, which we will discuss on Day 4, can be used to create nested
accounts.
Azure has the concept of subscriptions. A single account is set up for an organization
and typically the organization’s domain is associated with it (like 2ndsightlab.com).
Then within that account the organization creates “subscriptions” and each one gets a
separate bill.
GCP has the concept of billing accounts. Each billing account is associated with a
billing profile that pays the bill. A billing profile can have one or more billing accounts
associated with it. Projects are created on GCP and associated with billing accounts.
When setting up cloud accounts, consider who is going to get and pay the bills. Trying
to track every individual resource in an account is difficult, error prone, and in some
cases simply can’t be done because there’s no way to identify a resource as
belonging to a particular entity. Instead think about how to structure accounts and
resources so that all the resources associated with an AWS account, Azure
subscription, or Google billing account are paid by the same entity. It’s also easier for
the accounting department to pay a bill and assign the costs to one cost center than
to have to split up and track bills that get paid by different departments.
Lab: Budgets
and Calculators
Malware and
Cloud Threats
How does malware work, and how is it different in the cloud? Is it different? What
types of new threats and attacks do we need to worry about in the cloud?
Cyber Kill Chain
Lockheed Martin defined the cyber kill chain to identify common attacker actions.
The cyber kill chain was developed by Lockheed Martin. It attempts to define what
adversaries need to do to complete a cyber attack. Malware follows common patterns
- by looking at and understanding those behavioral patterns, instead of a specific file
or signature, attacks can be spotted and blocked. The steps include:
Reconnaissance: Looking for information that can be used in the attack such as
email addresses for phishing, phone numbers for social engineering, system
information, and other types of data that will help the adversary get into company
systems.
Weaponization: Finding a vulnerability and using a backdoor or other method to
create an exploit for a system.
Delivery: Delivering a weaponized piece of code - meaning some sort of exploit has
to cross the network.
Exploitation: Executing the attack on the vulnerable system using the malware or
other exploit.
Installation: Installing the malware on the system. (Note that newer malware may
only load itself in memory.)
Command and Control (C2): Controlling the attacked system via a remote server.
Actions on Objectives: Carrying out the attacker's ultimate goal, such as exfiltrating
data.
Defenders can deploy tools at each step in the process to try to detect and prevent
the different steps attackers take during an attack.
Top Cloud Threats (According to 2nd Sight Lab)
Misconfigurations & Poor Architecture - S3 Buckets, broad permissions, etc.
Credentials - Stolen or “found”
Cryptominers and Ransomware.
Unpatched or exposed DevOps systems and tools (Jenkins, AWS CLI).
Lack of Network Security exposes Elasticsearch, Mongodb, etc.
Programming Flaws - OWASP top 10 and web related attacks still apply.
Escapes - containers, VMs access the host or the control plane is breached.
CSA publishes their top threats to cloud computing. It’s mentioned here but the
coverage seems to be mostly older breaches. It also doesn’t address some of the
most relevant threats for those using public cloud computing services as confirmed by
media reports, customers, and the cloud service providers.
https://cloudsecurityalliance.org/artifacts/top-threats-to-cloud-computing-deep-dive/
The list compiled for this class of top threats to your accounts includes the following:
Misconfigurations - S3 Buckets, broad permissions, etc.
Credentials - Stolen or “found”
Cryptominers and Ransomware.
Unpatched or exposed DevOps systems and tools (Jenkins, AWS CLI).
Lack of Network Security exposes Elasticsearch, Mongodb, etc.
Programming Flaws - OWASP top 10 and web related attacks still apply.
Escapes - containers, VMs access the host or the control plane is breached.
SANS Survey
A survey by SANS
Institute found account
and credential hijacking
topped the list. These
two items are typically
involved in many of the
other more specific
categories listed in the
survey.
A SANS survey takes a look at common attacks and categorizes them in different
ways. The chart shows what the people who responded to that particular survey reported.
https://www.sans.org/reading-room/whitepapers/analyst/2019-cloud-security-survey-3
8940
S3 Buckets (Cloud Misconfigurations)
Booz Allen Hamilton When: May 2017
Data Exposed: Battlefield imagery and administrator credentials to sensitive systems
U.S. Voter Records When: June 2017
Data Exposed: Personal data about 198 million American voters
Dow Jones & Co When: July 2017
Data Exposed: Personally identifiable information for 2.2 million people
WWE When: July 2017
Data Exposed: Personally identifiable information about over 3 million wrestling fans
Verizon Wireless When: July 2017 and September 2017
Data Exposed: PII about 6 million people and sensitive corporate information about IT systems, including login credentials
The following two slides list some of the biggest S3 bucket breaches in recent history.
As you can see, some very prominent organizations have experienced this cloud
snafu, and heaps of data have been exposed to the Internet as a result.
Source: https://businessinsights.bitdefender.com/worst-amazon-breaches
Initially people wanted to blame Amazon for these breaches, including the PR
company the author of this course was working with at the time. However as
explained earlier today, this responsibility lies squarely with the customer in the AWS
Shared Responsibility Model.
More S3 Buckets
Time Warner Cable When: September 2017
Data Exposed: PII about 4 million customers, proprietary code, and administrator credentials
Pentagon Exposures When: 3 leaks found in September and November
Data Exposed: Terabytes from spying archive, resume for intelligence positions
Alteryx When: December 2017
Data Exposed: Personal information about 123 million American households
Accenture When: October 2017
Data Exposed: The keys to the kingdom--master access keys for Accenture's account with AWS KMS Key
National Credit Federation When: December 2017
Data Exposed: 111GB of detailed financial information--including full credit reports--about 47,000 people
S3 bucket breaches...continued.
An analysis of S3 Buckets in the Alexa Top 10,000
Rhino Security Labs did
some analysis of the
Alexa top 10,000
websites.
They discovered which
sites use S3 and what
permissions were
applied to the buckets
they discovered.
During the S3-bucket craze, Rhino Security Labs published a report on all the S3
buckets exposed by domains in the Alexa Top 10,000 - a site that tracks the most
popular domain names.
https://rhinosecuritylabs.com/penetration-testing/penetration-testing-aws-storage/
They found a lot of faulty configurations when they scanned these buckets. There are
very rare use cases where a bucket should be exposed directly to the Internet.
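As a rough illustration of what this kind of misconfiguration check looks for, the sketch below flags "everyone" grants in a bucket ACL. The dict shape mirrors what boto3's `get_bucket_acl` call returns; fetching the ACL from AWS is omitted and the sample grants are invented.

```python
# Flag public grants in an S3 bucket ACL. AWS represents "everyone"
# and "any authenticated AWS user" as predefined group URIs.

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_permissions(acl):
    """Return the permissions granted to 'everyone' groups, if any."""
    return [
        g["Permission"]
        for g in acl.get("Grants", [])
        if g.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES
    ]

# Invented sample ACL: one owner grant, one world-readable grant.
acl = {"Grants": [
    {"Grantee": {"Type": "CanonicalUser", "ID": "abc"},
     "Permission": "FULL_CONTROL"},
    {"Grantee": {"Type": "Group",
                 "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
     "Permission": "READ"},
]}
print(public_permissions(acl))  # ['READ'] -> this bucket is world-readable
```

Any non-empty result is worth investigating, since legitimately public buckets are the rare exception.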
Not a public bucket but...
The Capital One breach did
not involve a public bucket.
In this case the attacker
leveraged a host with
excessive permissions.
A private source told me this
internal server had access to
ALL the S3 buckets in the
account. Not a good idea.
An attacker stole about 140 million documents from an S3 bucket in the Capital One
breach. In this case, the bucket was not public. It was in the private internal network,
likely protected with an S3 endpoint as will be explained on a later day. The attacker
bypassed some website protection controls to get onto a virtual machine in the AWS
account. From there the attacker used excessive permissions on that virtual machine
to exfiltrate all the files from the S3 bucket to the attacker.
For some reason the attacker posted information about the attack on Twitter and
stored files in GitHub, and so was almost immediately caught. The attacker formerly
worked at AWS but had been fired.
The exploit in this case was completely preventable. The permissions assigned to the
server that accessed the S3 bucket were excessive and the architecture was not
following some best practices that would have prevented this breach.
More: https://medium.com/cloud-security/whats-in-your-cloud-673c3b4497fd
Magecart Skimmers
Attackers are inserting code that
works as skimmers into websites.
Steal credit cards as people check out
on e-commerce websites.
Content loaded from a third-party
sites that isn’t validated.
Getting inserted into S3 buckets and
served up by AWS CloudFront (CDN).
Magecart skimmers are a common threat to websites inside and outside the cloud.
The attackers insert JavaScript or some other type of code into a legitimate website.
The malicious code steals credit cards or other information as users check out on the
website. One type of attack targets content management systems (CMS) like Drupal
or Wordpress. Vulnerabilities in these systems are used to insert the malicious code.
An alternative attack will insert code into open S3 buckets, replacing valid files with
malicious files. Alternatively the code could be loaded via any sort of third party script
or advertisement that developers load in addition to the code on the website itself.
These third party components pose great risk if not validated before exposing the
customer to the external code and files.
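One common mitigation for this class of attack is Subresource Integrity (SRI): the page pins a cryptographic hash of each third-party script, and the browser refuses to run the file if its contents change. A small sketch of computing the integrity value (the CDN URL in the comment is hypothetical):

```python
import base64
import hashlib

def sri_hash(script_bytes):
    """Compute a Subresource Integrity value for a script: the SHA-384
    digest of the file, base64-encoded, with an algorithm prefix."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode()

js = b"console.log('checkout');"
print(sri_hash(js))
# The value goes in the script tag, e.g.:
# <script src="https://cdn.example.com/checkout.js"
#         integrity="sha384-..." crossorigin="anonymous"></script>
```

If a skimmer replaces the hosted file, its hash no longer matches and the browser blocks it - though this only helps for scripts whose contents you can pin in advance.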
Subdomain Takeover
When companies leave
CNAMES pointing to
subdomains they are
not using but someone
else can, an attacker
who registers that
subdomain can
monitor requests or
post malicious content.
A domain name is used to point to a website like https://2ndsightlab.com that points to
a particular server hosting a web site, for example.
A subdomain adds some prefix to the domain and can point to some other location.
https://i.2ndsightlab.com hosts images for https://2ndsightlab.com.
A CNAME points one domain name to another. I could set up an S3 bucket named
2sl and then create a CNAME like http://mybucket.2ndsightlab.com that points to
http://2sl.s3.amazonaws.com.
The problem would be if I deleted my S3 bucket and stopped using it but left
http://mybucket.2ndsightlab.com pointing to http://2sl.s3.amazonaws.com. An
attacker could come along, create a new bucket with that name, and any traffic going
to http://mybucket.2ndsightlab.com would be directed to the bucket set up by the
attacker.
If you’re not familiar with S3 buckets and static web hosting in an AWS S3 bucket we’ll
be talking about that more later. The important thing to be aware of is the fact that
developers should not leave CNAMES set up pointing to content you no longer
control.
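A basic takeover audit reduces to comparing the CNAMEs you publish against the targets you still control. A toy sketch of that comparison, with made-up record data:

```python
# Toy dangling-CNAME audit: CNAME records we publish vs. targets the
# organization still controls. All names below are invented examples.

def dangling_cnames(records, controlled_targets):
    """Return CNAMEs pointing at targets the organization no longer
    controls -- candidates for subdomain takeover."""
    return {
        sub: target
        for sub, target in records.items()
        if target not in controlled_targets
    }

records = {
    "i.example.com": "assets.example.com",
    "promo.example.com": "old-promo.azurewebsites.net",  # service deleted
}
controlled = {"assets.example.com"}
print(dangling_cnames(records, controlled))
# {'promo.example.com': 'old-promo.azurewebsites.net'}
```

A real audit would pull the records from the DNS zone and check whether each target still resolves to something the organization owns, but the core question is the same.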
In the case of EA, it looks like they set up a domain where users could register for
something at ea-invite-reg.azurewebsites.net. They probably pointed to that
subdomain from something on their own site. Later they stopped using the domain
and let it lapse, but kept pointing to it from some other domain they still hosted. An
attacker could then set up ea-invite-reg.azurewebsites.net and put malicious content
in it. Users who were redirected there from some EA source would think they were on
the EA web site, but actually they would be putting content into the attacker’s
malicious site.
Stolen cloud credentials
Phishing emails
Social engineering
Posted to Github
Shared on Slack
Emailed to a coworker
Moves files to cloud account
The human factor
Humans. Sometimes they do things they shouldn’t. They don’t always do it on
purpose. Sometimes actions have good intentions or are simply due to curiosity or
misunderstanding, but in any case here are some problems caused by unwanted
human actions.
Credentials are stolen due to phishing emails, social engineering or posted to GitHub.
Developers may share credentials, keys, and secrets on Slack, Confluence, and other
internal social media, chat and communication platforms that offer an attack vector.
Credentials may be emailed to a coworker or shared outright (as was reportedly the
case with Edward Snowden, per a student in a class the author was taking: apparently
he was a nice guy and asked to borrow a coworker’s credentials to access sensitive
documents).
Additionally people have been known to steal company assets by moving them to
cloud storage accounts like Box, DropBox, Evernote, or Google Docs. They also use
these systems to bypass company systems designed to prevent data loss in order to
share information with vendors, partners, and customers when they are simply trying
to get their job done and security products are preventing the transmission of data.
Developers often create security problems when they are simply trying to make
something work. They may open up a network too broadly just to get systems
working, or create a broad CORS configuration rule because it allows their
microservices to work without browser warnings. Typically these things are not done
maliciously. They are done with the best of intentions! Developers want to make the
system work and get their jobs done. From a security perspective this may seem
ridiculous, but until you have worked as a software engineer - don’t judge. It’s not as
easy as you think.
Former employee steals data
Shared credentials may have allowed a former employee to access HIPAA data in the
cloud. When a person leaves the company it’s important to track and understand all
the systems they have access to in order to remove that access appropriately.
https://www.csoonline.com/article/3265109/former-employee-visits-cloud-and-steals-c
ompany-data.html
Fired employees steals credentials, kills servers
In this tale of terminated AWS servers, an employee loses his job, steals the
credentials of his coworker, “Speedy” Gonzales, and terminates a number of servers
at his former employer. “Speedy” was not using 2-Factor authentication.
https://nakedsecurity.sophos.com/2019/03/22/sacked-it-guy-annihilates-23-of-his-ex-e
mployers-aws-servers/
Attacks on Credentials and Access
Password Reuse - check https://haveibeenpwned.com
Password Spraying (recent Citrix breach)
Secrets Published to Github such as keys and passwords.
Extracted from memory (e.g. Mimikatz on Windows)
Stolen session tokens via CORS misconfigurations and other
Brute-force SSH and RDP logins on cloud servers
PHISHING and Social Engineering to obtain passwords and access systems
Credentials are one of the main ways attackers breach cloud accounts and install
malware. From money-wasting cryptominers to completely deleted accounts, malware
has been a source of problems for many companies in the cloud.
Some of the problems companies have experienced:
Password Reuse is not cloud specific. Reusing passwords in multiple places allows
attackers to use credentials stolen in one breach to access other systems that were
not actually breached. Troy Hunt is a security researcher who publishes a website that
tracks stolen credentials from data breaches. You can enter your email address to see
if your account or data has been breached and get alerts for future breaches at
https://haveibeenpwned.com
Password Spraying means using a potential password and trying it on many
different systems within an organization at the same time. The reason attackers do
this is that a system may have a lockout policy or rate-limiting feature that will block
or lock out the account if too many bad attempts are made. Instead of making a
number of attempts on one system, attackers make one attempt on many systems
and spread the attempts out to avoid these security features. The recent Citrix breach
was a result of password spraying. https://krebsonsecurity.com/tag/citrix/
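On the defensive side, spraying has a recognizable log signature: one source touching many different accounts with only a few attempts each, which per-account lockout policies never trip. A minimal detection sketch over hypothetical failed-login events:

```python
from collections import defaultdict

def spray_suspects(failed_logins, min_accounts=10):
    """Flag source IPs whose failed logins touch many DIFFERENT
    accounts -- the signature of password spraying.
    `failed_logins` is a list of (source_ip, username) tuples."""
    accounts = defaultdict(set)
    for ip, user in failed_logins:
        accounts[ip].add(user)
    return {ip for ip, users in accounts.items() if len(users) >= min_accounts}

# Invented events: one IP failing against 25 accounts, plus a user
# who simply mistyped their own password a few times.
events = [("203.0.113.9", f"user{i}") for i in range(25)]
events += [("198.51.100.7", "alice")] * 5
print(spray_suspects(events))  # {'203.0.113.9'}
```

The threshold and time window would need tuning in practice, but grouping failures by source rather than by account is the key shift that catches spraying.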
Publishing Secrets to GitHub has proven to have been a common problem for cloud
developers. Github is a source control system like BitBucket which we use in the
class labs. Developers sometimes share code publicly and have embedded in that
code user accounts and passwords, encryption keys, cloud credentials and other
secrets. Attackers scan the code, find and use these credentials for malicious activity.
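A minimal illustration of the kind of scanning both attackers and defenders run against repositories: the regex below matches the documented format of AWS access key IDs ("AKIA" followed by 16 uppercase letters or digits), using AWS's published example key. Real scanners such as git-secrets or truffleHog cover far more secret patterns and the full commit history.

```python
import re

# AWS access key IDs start with "AKIA" followed by 16 characters.
AWS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_aws_key_ids(text):
    """Return any strings in `text` shaped like AWS access key IDs."""
    return AWS_KEY_ID.findall(text)

# AWS's documented example key, as it might appear in committed code.
code = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"  # oops, committed'
print(find_aws_key_ids(code))  # ['AKIAIOSFODNN7EXAMPLE']
```

Running a check like this in a pre-commit hook catches the mistake before the secret ever reaches a public repository, which is far cheaper than rotating credentials after the fact.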
Malware can extract credentials from memory. One example is Mimikatz, a
software tool used by attackers to steal passwords from memory on Windows
systems.
Attackers can steal session tokens used to track logged in users and grant access
to system resources after initial authentication. These tokens are sometimes passed
around in systems unencrypted or stored in insecure cookies.
SSH and RDP credentials constantly face brute-force attacks when exposed to
the Internet. Many Linux systems in the cloud are accessed via SSH. RDP is used to
remotely access Windows systems.
Phishing and other forms of social engineering are used to trick users into giving up
their passwords or click links that pass their credentials to attackers.
Although not foolproof, one of the best ways to limit these attacks is MFA
(multi-factor authentication).
Privileged Credentials
In 2018 the CSA reported
that 74% of breaches involved
access to a privileged
account.
The 2019 SANS cloud
survey reported that 48.9%
of incidents involved
credential hijacking;
37.8% involved privileged
user abuse.
Credentials are a source of misery when it comes to incidents in the cloud. Attackers
are stealing credentials either because end users expose them publicly or because
they are stolen via other means as noted. This is your number one threat in the cloud.
It seems that credentials WILL be stolen, so we’ll look at strategies for minimizing the
resulting damage on Day 4.
CSA blog:
https://blog.cloudsecurityalliance.org/2019/05/10/cloud-workloads-privileged-access/
2019 SANS Survey:
https://www.sans.org/reading-room/whitepapers/analyst/2019-cloud-security-survey-3
8940
Code Spaces ~ The Company that got deleted
Code Spaces is the company that got deleted in the cloud. Attackers obtained the
credentials, used them to take over the account, demanded money and, when the
company didn’t pay, deleted everything in the account. Code Spaces was out of
business. Code Spaces hosted code and project data for other companies - all those
companies lost their data as well in this incident.
https://www.csoonline.com/article/2365062/code-spaces-forced-to-close-its-doors-afte
r-security-incident.html
Credentials in Github
Many organizations, including one the author worked at, have experienced developers
pushing credentials to GitHub. Attackers scan the site looking for secrets and
credentials they can use to attack systems. In the case of Uber, it was even worse.
Attackers were not only able to steal data from Uber, but the company tried to cover it
up by paying off the thieves. Eventually the news got out and the CISO at Uber lost
his job - and went to work at CloudFlare.
https://arstechnica.com/information-technology/2015/03/in-major-goof-uber-stored-sensitive-database-key-on-public-github-page/
https://www.cnet.com/news/uber-to-pay-148-million-for-failing-to-report-2016-hack/
https://www.cnbc.com/2018/05/16/fired-uber-cybersecurity-chief-joe-sullivan-joins-start-up-cloudflare.html
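To make the scanning concrete: finding leaked credentials in public repositories is largely pattern matching. Below is a minimal, hypothetical sketch in Python; the two patterns are only illustrative examples of credential formats (the access key shown is AWS's documented example key), and real scanners use far larger rule sets.

```python
import re

# Illustrative patterns resembling common credential formats. Real secret
# scanners use much larger, continuously updated rule sets.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return (rule_name, matched_string) pairs for anything that looks like a secret."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

# AKIAIOSFODNN7EXAMPLE is the example access key ID from AWS documentation.
print(scan_text('aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'))
```

The same idea, run at scale against every public commit, is how both attackers and defensive tools find pushed credentials within minutes.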
Social Engineering
184
Not exactly a cloud breach, but a common form of social engineering at this time:
companies are facing scams where a person impersonating a top executive tells a
lower-level person to wire money out of the company. Although this is not a cloud
breach, you can imagine how an order might come down from an “executive” in a
similar way to provide access to another person or delete a critical system, and a
lower-level employee might carry out those orders without question.
https://www.csoonline.com/article/2961066/supply-chain-management/ubiquiti-networks-victim-of-39-million-social-engineering-attack.html
Ransomware
185
A cloud hosting provider faced a ransomware attack on Christmas Eve 2018. Ransomware
can be installed on cloud systems the same way it can be installed on-premises.
https://krebsonsecurity.com/2019/01/cloud-hosting-provider-dataresolution-net-battling-christmas-eve-ransomware-attack/
RDP and SSH passwords - $10 on the Dark Web
186
Many stolen RDP and SSH passwords appear on the dark web. Attackers steal these
credentials through many means, including brute-force attacks that repeatedly
guess the passwords on cloud systems.
https://securingtomorrow.mcafee.com/other-blogs/mcafee-labs/organizations-leave-backdoors-open-to-cheap-remote-desktop-protocol-attacks/
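One way defenders spot this kind of brute forcing is by counting failed logins per source address in authentication logs. Here is a hypothetical sketch; the log lines are made up and simplified (real sshd formats differ), and the threshold is arbitrary.

```python
from collections import Counter

# Made-up, simplified auth-log lines; real sshd log formats carry more fields.
LOG = [
    "Failed password for root from 203.0.113.5",
    "Failed password for admin from 203.0.113.5",
    "Failed password for root from 203.0.113.5",
    "Accepted password for alice from 192.0.2.9",
]

def brute_force_suspects(lines, threshold=3):
    """Return source IPs with at least `threshold` failed login attempts."""
    failures = Counter(
        line.rsplit(" ", 1)[-1]           # last token is the source IP
        for line in lines
        if line.startswith("Failed password")
    )
    return [ip for ip, count in failures.items() if count >= threshold]

print(brute_force_suspects(LOG))
```

Cloud-native tools apply the same counting logic to flow logs and authentication events at much larger scale.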
CloudHopper targeting MSPs
187
Managed service providers (MSPs) and managed security service providers
(MSSPs) are companies that handle IT and security for other companies.
CloudHopper malware targets managed service providers. Advanced
Persistent Threat (APT) is a term for a group that persistently takes actions to break
into systems and companies. The attacks themselves may be simple, but the groups
carrying out the attacks are organized and very stealthy. In this particular attack,
sensitive data and systems at organizations are accessed by stealing credentials
at the MSPs that provide IT services for those organizations.
https://www.computing.co.uk/ctg/news/3070613/norways-visma-the-latest-cloud-computing-company-targeted-by-china-linked-apt10-hacking-group
Wipro Breach
Wipro is India’s third-largest outsourcing firm.
Phishing campaigns yielded stolen credentials that were used to access hundreds of computers.
Attackers pivoted from Wipro to customers’ systems, including Fortune 500 companies.
Other similar firms were targeted.
188
The Wipro breach is another example of attackers targeting vendors to leverage
their systems as a pivot point into other companies, or to steal data managed by
the vendor. Wipro is India’s third-largest outsourcing firm. Many of its clients are
Fortune 500 companies. Attackers breached Wipro with phishing emails and then
used stolen credentials to access other computers. Brian Krebs broke the story,
which Wipro originally disputed.
https://krebsonsecurity.com/2019/04/experts-breach-at-it-outsourcing-giant-wipro/
On an earnings call, Wipro stated that the facts of the story were incorrect. What
they didn’t know was that Brian Krebs was on the call, and he confronted them about
which facts were incorrect. You can hear the recording at the link below. A second
story describes how other vendors, such as Infosys and Cognizant, are also being
targeted. Companies at times give vendors a lot of access into their systems.
Consider how attackers can leverage the system access and data provided to
vendors to get into your company.
https://krebsonsecurity.com/tag/wipro-data-breach/
Nation State Attacks and APTs
189
Nation state attackers are backed by governments to perform cyber espionage,
attacks, or campaigns against other countries. These and other groups associated
with organized crime are called Advanced Persistent Threats (APTs) because they
will spend a lot of time and money, over years, attempting to get into systems. In
some cases governments have secret hacking groups which are no longer so secret.
The Chinese government is linked to groups like APT10 and APT17. Fancy Bear in
Russia is said to have ties to the Russian government. The U.S. also has elite
hackers in organizations like the NSA and CIA.
The U.S. was said to have launched the first cyber weapon, called Stuxnet,
against Iran. You can read the whole story in this book and learn about famous US
hackers like Mudge (@dotMudge on Twitter):
https://www.amazon.com/Countdown-Zero-Day-Stuxnet-Digital/dp/0770436196
MITRE ATT&CK covers some nation state attackers. For example, you can read more
about APT17 and other attackers located around the world on this page:
https://attack.mitre.org/groups/G0025/
Critical Infrastructure Attacks
Operation Ivy Bells, carried out by the US Navy, tapped into underwater Soviet cables.
NATO has expressed concerns over Russian submarines near critical underwater cables in Nordic waters.
190
In the 1970s, US divers scoured the depths of the ocean floor on a top secret mission
to find underwater Soviet communication channels. They found what they were
looking for and installed a 20-foot-long tap on the cable, which was then used to
record conversations. Eventually an employee of the NSA sold information about
the tap to the Soviets for $35,000. The Soviets retrieved the tap, and it is now on
display at the KGB museum in Moscow, according to this article:
https://www.military.com/history/operation-ivy-bells.html
Recently NATO raised concerns that Russian submarines are prowling around
undersea cables in Nordic waters. What are the implications of using cloud providers
and services that send data across the ocean floor if someone can tap into those
messages? How secure are our encryption algorithms that protect that data?
https://www.washingtonpost.com/world/europe/russian-submarines-are-prowling-around-vital-undersea-cables-its-making-nato-nervous/2017/12/22/d4c1f3da-e5d0-11e7-927a-e72eac1e73b6_story.html
In a possibly related event, at least 14 sailors on a Russian submersible were killed.
Many questions were raised about the mission of those sailors, since many of them
were captains; usually only one captain is on a submarine. The threats to cloud
infrastructure do not exist only in data centers, and a much larger picture is at stake.
https://www.washingtonpost.com/world/europe/fire-on-russian-submersible-vessel-kills-at-least-14-sailors/2019/07/02/d0e327da-9cd0-11e9-83e3-45fded8e8d2e_story.html
Cryptominers at Tesla
191
Cryptominers are prevalent in the cloud. In a conversation with employees from
Microsoft, the author learned they are constantly shutting down cryptominers.
Spotting cryptominers was one of the first detections in GuardDuty, a service AWS
presumably created in part to help thwart this problem. Cryptominers don’t require
GPUs to run. Newer algorithms and more anonymous cryptocurrencies like Monero
can run on CPUs and even IoT devices and mobile phones. Attackers steal
credentials, create cloud resources, and use them to install cryptominers. In this
case, attackers ran cryptominers in Tesla’s public cloud, in a Kubernetes cluster that
had been installed with no password protection.
https://www.wired.com/story/cryptojacking-tesla-amazon-cloud/
Cryptojacking honeypot
192
Sysdig set up a honeypot and did some interesting research to see what types of malware would affect an exposed virtual machine in the cloud.
The first attackers were attempting to perform cryptojacking.
Cryptojacking is a term for other people using your resources to perform
cryptomining. They install cryptominers on your resources, or use your account to
spin up new resources, in order to mine cryptocurrency. Running cryptominers is
becoming more and more expensive because they use a lot of electricity and
computing power to guess numbers. When they guess the right numbers they
“prove” that a transaction is valid and get paid, either in cryptocurrency or in
transaction fees from the person who is trying to use the cryptocurrency to buy or
sell something.
The whole concept of guessing numbers to prove transactions are valid is not
rocket science and doesn’t seem very intelligent to the author of this course, but for
some reason it has caught on. That is in part due to the ability to hide transactions
from governments, as in the case of people in China trying to send money outside
the country without the government knowing. This may be why some countries have
banned it. By hiding funds, people can potentially leave the country with their money
without the government knowing about it, or avoid paying taxes. In some cases the
transfer of funds is used to hide criminal activity and launder money.
In any case, when people perform cryptocurrency transactions, people known as
“miners” get paid when they validate those transactions, and they want to use the
resources that you are paying for to do it! They get onto your systems using malware
or via stolen credentials.
https://sysdig.com/blog/detecting-cryptojacking/
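The “guessing numbers” that miners do can be illustrated with a toy proof-of-work loop. This is only a conceptual sketch: real cryptocurrencies use different algorithms and vastly harder difficulty targets, but the brute-force search below is why mining consumes so much CPU time and electricity.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Toy proof of work: find a nonce so that sha256(block_data + nonce)
    starts with `difficulty` zero hex digits. The brute-force loop is the
    'guessing numbers' part that burns electricity and compute."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("some transactions", difficulty=4)
digest = hashlib.sha256(f"some transactions{nonce}".encode()).hexdigest()
print(nonce, digest)
```

Each extra zero of difficulty multiplies the expected work by 16, which is why cryptojackers would much rather run this loop on your cloud bill than on their own hardware.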
Unpatched DevOps Systems
193
Jenkins, software used to deploy cloud resources, is being attacked frequently because it is exposed and unpatched.
One student who performs penetration tests for companies told the author of this class,
“We always get the Jenkins server.” Jenkins servers are used to deploy systems in
the cloud. If an attacker can get onto a Jenkins server, then presumably the attacker
can also deploy systems in the cloud. This is happening according to some accounts,
such as the example on the page. In this case the attacker chose to install
cryptomining software to make money. Given that Jenkins servers typically have a lot
of power to wreak havoc in a cloud environment, it could have been worse!
https://www.csoonline.com/article/3256314/security/hackers-exploit-jenkins-servers-make-3-million-by-mining-monero.html
Jenkins allows developers to perform tasks using plugins. 100+ Jenkins plugins were
found to be vulnerable.
https://www.zdnet.com/article/security-flaws-in-100-jenkins-plugins-put-enterprise-networks-at-risk/
In yet another example, a Jenkins server used by GE was exposed to the Internet and
revealed passwords and source code.
https://threatpost.com/ge-aviation-passwords-jenkins-server/146302/
Elasticsearch databases exposed to Internet
194
In too many cases, developers are exposing records directly to the Internet via cloud databases like Elasticsearch.
As more people without security backgrounds are able to deploy systems in the cloud
and define networking as they see fit, more and more data stores are being exposed.
This happens repeatedly, and there are many examples. In this case, an open
Elasticsearch instance exposed 82 million records.
https://securityaffairs.co/wordpress/78643/data-breach/elasticsearch-instances-data-leak.html
Mongodb on Internet exposes 2 billion records
195
MongoDB is another database exposed all too frequently.
In this breach, a MongoDB database exposed to the Internet leaked the most records
ever: two billion records were exposed as a result of this misconfiguration. If you
signed up for https://haveibeenpwned.com, there’s a good chance you got an alert.
https://www.hackread.com/verifications-io-breach-database-with-2-billion-records-leaked/
DevOps tools vulnerabilities
Confluence allows teams to share information.
A security tester found a flaw and then used Google to search for all the systems exposed to the Internet that contained this flaw.
196
In this instance, a person who does bug bounty testing found a flaw in an
application called Confluence. He then used Google to find, in a short amount of
time, numerous systems with that flaw exposed to the Internet. Had those systems
not been exposed to the Internet, he would not have found what he was looking
for in Google. If these companies were using Confluence via a public SaaS solution,
they would have been vulnerable no matter what. Consider if and how you can lock
down your accounts, even those provided by a vendor, on a private network. We’ll talk
more about that on Day 2. Also be very careful with third-party components, plugins,
and widgets you include in your applications, as we go over in more detail on Day 3.
Email on Azure
The Deloitte breach involved an email system hosted on Azure.
The details of this breach are unknown, but somehow the attackers accessed administrative accounts.
Initially Deloitte reported only six clients were affected. Later reports indicate the breach was bigger.
197
The details of the Deloitte breach are unclear, but it seems that somehow attackers
got hold of cloud credentials and accessed a mail server with sensitive data. Some
reports indicate Deloitte was migrating mail servers to the cloud at the time.
VM and container escapes
CloudBurst ~ Blackhat 2009
Exploits a vulnerability in VMware Workstation via a specially crafted video file.
Container escape ~ CVE-2019-5736
A container escape allows taking over a host.
DNSMasq Vulnerability ~ CVE-2017-14491
Affected Kubernetes ~ could allow taking over a cluster
198
When the same hardware is hosting virtual machines for multiple customers, there is
always a chance that a programming error or system flaw allows unauthorized access
to systems and data. This could occur when an attacker in a virtual machine escapes
from the VM and is able to access the hypervisor, or code in a container is able to
escape the container and access the host. The issue also arises when the control
plane used to manage virtual machines or containers is accessed or breached.
Although this is a threat, at this time most breaches do not require such extensive
effort, because simple mistakes give attackers easy access to systems and data.
CloudBurst
https://searchcloudsecurity.techtarget.com/definition/Cloudburst-VM-escape
https://www.blackhat.com/presentations/bh-usa-09/KORTCHINSKY/BHUSA09-Kortchinsky-Cloudburst-SLIDES.pdf
Container Escape
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5736
https://www.zdnet.com/article/virtual-machine-exploit-lets-attackers-take-over-host/
DNSMasq
https://security.googleblog.com/2017/10/behind-masq-yet-more-dns-and-dhcp.html
CloudBleed
CloudFlare is a CDN, or content delivery network, that caches and hosts data closer to end users for many big organizations.
A flaw in CloudFlare caused a buffer overflow that exposed customer data.
199
Another type of “escape” entails systems with flaws that allow one customer to see
another customer’s data, or data escaping from the system through some sort of
vulnerability. In this example, the CloudFlare system leaked data for many customers
through a software bug called a buffer overflow, which exposed data in memory.
CloudFlare is a CDN, or content delivery network. Companies hire CloudFlare to host
their data closer to their end customers, or to front their websites to handle excessive
load or protect against DDoS attacks. This breach affected many different websites.
https://www.theregister.co.uk/2017/02/24/cloudbleed_buffer_overflow_bug_spaffs_personal_data/
200
These days it seems like every breach needs some branding: a logo, a website,
and a public relations campaign. The Spectre and Meltdown vulnerabilities
discovered by security researchers, including Google’s Project Zero, demonstrate
this very well. These attacks affected the underlying hardware and would allow a
malicious program to access secrets in memory. As you can imagine, hardware is
not easy to patch. One of the biggest concerns with these particular vulnerabilities
was the potential for attackers to leverage them on cloud systems to gain access to
the underlying host system and other customer virtual machines on the same host.
Rather than fix the underlying hardware, operating system makers found a way to
mitigate the vulnerabilities in software. The cloud providers had most of their
systems patched within about two days.
https://meltdownattack.com/
https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html
https://aws.amazon.com/security/security-bulletins/AWS-2018-013/
Common attacks still apply
If you host your web application in the cloud, how different is it really?
The OWASP top 10 and MITRE ATT&CK framework still apply.
If you deploy Internet-accessible vulnerable software, it’s still vulnerable.
Malware that can run on a server in your datacenter can run in the cloud.
Some things are not available to you, but they are still there under the hood: routers and other network equipment managed by the CSP.
Architectural differences may change attack vectors, but the threats still exist.
201
Some recent attacks related to the OWASP top 10 were listed in the notes of a
prior slide. All of those attacks still apply. Additionally, although you can’t access
the cloud hardware and systems, organizations still need a way to validate the
security of those systems through third-party audits, as discussed. Any system
that connects to the cloud may also prove to have a vulnerability that offers a
gateway into the cloud, or vice versa, on private networks.
Security Research and Malware
For new breaches, security researchers try to get a copy of the malware and then:
- Open the code in a disassembler
- Review the assembly code
- Run it in a segregated environment
- Determine how it works
- Extract indicators of compromise (IOCs)
These IOCs can be used to block the malware using various tools.
202
What happens when new malware is discovered? Security researchers around the
world are constantly evaluating and trying to stop malware. There are different tools
and services that will help evaluate malware. The first thing researchers will do when
they see new malware is try to get a copy. They will then potentially take any or all of
the following steps:
Open the malware in a disassembler, which shows them the actual machine code:
not a high-level language like Java or C++, but the code used to interact with the
hardware, called assembly language. One of the most well-known disassemblers
is IDA Pro, but it can be very expensive. The NSA recently released an open source
disassembler called Ghidra, though some researchers worried it might contain a
backdoor.
They may also review web software scripts, such as VBScript or JavaScript, or
software that is not packed or compiled and is in its natural state, to try to reverse
engineer it. Other tools can help as well, such as tools that inspect the details of
documents, or debuggers and other tools that reveal information about the
application at runtime.
Researchers may also run the software in a controlled environment to discover what it
is doing. They may run it in an air-gapped, segregated network and take steps to trick
the malware into thinking it is connected to its command and control server, to try to
reverse engineer its behavior. This can be risky and must be done carefully to make
sure the malware cannot infect computers around it. Some malware will detect when
it is running in a VM and shut down. Other malware will delay starting, to trick the
researcher into thinking it is benign.
By exploring the malware’s behavior, the researcher can determine indicators of
compromise, or IOCs, that companies can use to block the unwanted behavior. The
malware may reach out to a certain domain name, use a particular user agent, or do
something else unique that is dissimilar to the behavior of legitimate systems. This
behavior can be blocked to thwart the malware. Traditional virus scanners would
make a hash of the malware and block any executable with the same hash, but most
malware can bypass this type of check by simply changing one character in the
file, which changes the hash.
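The hash-evasion point is easy to demonstrate: flipping a single byte in a file produces a completely different digest, so a signature keyed to the old hash no longer matches. A minimal sketch (the byte strings are stand-ins, not real malware):

```python
import hashlib

original = b"MZ\x90\x00...pretend malware bytes..."  # stand-in for an executable
variant = original[:-1] + b"!"                       # same file with one byte changed

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(variant).hexdigest()

# The two digests share no useful similarity, so a blocklist containing h1
# will not catch the variant. This is why behavioral and IOC-based detections
# complement simple hash matching.
print(h1)
print(h2)
```

This avalanche property is exactly what makes cryptographic hashes good for integrity checking and, at the same time, trivial for malware authors to evade as a sole detection mechanism.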
Twitter is one of your best real time threat feeds!
Real time threat data, many researchers reporting...a few examples.
203
If you are not already on Twitter, you may be surprised to know that Twitter is one of
your best sources for real time information about security threats. Many prominent
researchers are on Twitter. They report on security breaches, comment on security
issues, and publish information to help you protect your systems.
For example, the author of this course worked on a security research team for a
company when WannaCry broke out overnight. Checking Twitter the next morning
revealed that a major security incident was shutting down hospitals in the UK and
other businesses. Top researchers were publishing detailed malware analysis, down
to the bytecodes, on Twitter. One particular researcher, Marcus Hutchins, registered
a domain that turned out to be a kill switch that stopped the malware. The author was
following these researchers and watched the events unfold almost in real time,
simply by following the right people on Twitter.
Marcus Hutchins, a security researcher for Kryptos Logic, was later arrested by the
FBI on his trip to DEF CON the next year, on unrelated charges for activities selling
malware in years past. The outcome of this case is still pending; the trial is set for
July 2019. Many in the security community believe he is innocent, as there’s a fine
line between security research to determine if systems are vulnerable and writing
malware that harms companies. Hutchins has since dropped off Twitter, citing abuse
by people online, so you won’t be able to follow him anymore.
Security Hype and Drama
Breaches with brand names.
Over-sensationalized headlines.
Speculation and questions instead of facts.
Security vendors rushing to put out a story.
Marketing people misunderstandings.
General rule: Wait two days.
The hype will fade. The facts will emerge.
204
While the author was teaching a security class in San Jose, a story emerged about a
chip embedded inside Apple and AWS servers that had supposedly existed for over
seven years. Bloomberg claimed an anonymous source had revealed this spy chip to
them. The story broke and was all over the news. Apple and AWS came out with
strong denials of the chip’s existence. Some in the security community were adamant
the chip story would prove to be true.
After following many breaches and malware outbreaks, the best advice is to follow
the news closely for two days before making any sort of judgement, because new
facts will likely arise after the initial announcement. In the US, we have a premise in
the court of law: innocent until proven guilty. In the author’s opinion, after watching
the story unfold for two days and being asked about it by many people, there is not
enough evidence to prove this chip actually existed. Additionally, it seems highly
unlikely that this chip could have existed for over seven years without the story
somehow being exposed by someone. That’s not to say it couldn’t be proven true
later, but at this time there is not enough evidence to prove there was a chip, and
more evidence, based on later stories and statements, that the chip never existed.
The underlying sources used for the story have never been fully validated or made
accessible to public scrutiny.
https://www.bloomberg.com/news/features/2018-10-04/the-big-hack-how-china-used-a-tiny-chip-to-infiltrate-america-s-top-companies
Threat lists for software tools
Security organizations and vendors publish
most prevalent malware.
This list comes from CIS (Center for Internet
Security).
Keep in mind that vendor threat lists are
dependent on what their devices catch.
Threat Lists of IP addresses or domain
names can be used in cloud security services
like GuardDuty.
205
Some organizations publish threat lists you can follow to learn about new security
problems, breaches, and malware outbreaks. The Center for Internet Security has a
threat list on its website. The cloud vendors incorporate threat lists into their security
tools.
https://www.cisecurity.org/cybersecurity-threats/
Bear in mind that threat lists are only as good as the tools that analyze the data.
When a company reports a huge increase in a certain type of malware, it could be
that its software only just started recognizing that particular type, while other types
of malware go unreported by that vendor. Use multiple sources!
The cloud vendors have an enormous amount of data that can be used to find
malware and security problems in the cloud. Using tools from the major IaaS vendors
is beneficial for this reason. We’ll talk more about tools that do this, such as
Amazon GuardDuty, later.
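Conceptually, using a threat list is just membership testing of observed connection endpoints against the list; services like GuardDuty do this (and much more) at scale. A hypothetical sketch, with made-up flow records and RFC 5737 documentation addresses standing in for a real feed:

```python
# Hypothetical threat list; real lists come from sources such as CIS or
# commercial feeds and are updated continuously.
THREAT_LIST = {"198.51.100.23", "203.0.113.77"}

def flag_connections(flow_log):
    """Return the flows whose remote address appears on the threat list."""
    return [flow for flow in flow_log if flow["remote_ip"] in THREAT_LIST]

flows = [
    {"remote_ip": "198.51.100.23", "port": 443},  # on the list
    {"remote_ip": "192.0.2.10", "port": 22},      # not on the list
]
print(flag_connections(flows))
```

The value of such a check depends entirely on the quality and freshness of the list, which is why combining multiple sources matters.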
Be aware
Knowing that threats exist is one step in the right direction.
Be aware of security incidents and breaches that are occurring.
Use those threats to analyze your own environment.
Prioritize security efforts where the risk is highest and most damaging.
Understanding the threats will help inform architecture decisions.
Consider threats against a system using threat modeling.
206
The most important takeaway from this section is to be aware. Understand the types
of threats that are prevalent in the cloud in general and in your specific industry. By
being aware you can take the appropriate steps to stop, block, and find malicious
behavior. Awareness needs to expand beyond the security team! Awareness needs to
include developers, project managers, product managers, legal teams, human
resources personnel, line of business owners, and most of all - executive leadership.
By understanding the threats to a business, companies can be more proactive and
smarter about stopping them!
Lab: Sharing secrets with GPG
207
Credentials are king in the cloud. Credentials are one of the primary ways that
attackers get into and take actions in cloud systems (and on-premises systems, for
that matter). When sharing credentials for systems, it is important to send them to
other people securely: email is not secure, posting them on Slack is not secure, and
leaving them in a plain text file in an S3 bucket with broad permissions is not secure.
One way to secure credentials when sending them in email is to use GPG. This lab
does two things: 1) it helps students understand asymmetric cryptography, and 2) it
gives students some experience with GPG. There are manual and more automated
methods of using GPG; this is a simple introduction.
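To preview the idea behind the lab: in asymmetric cryptography, anyone holding your public key can encrypt a message to you, but only your private key can decrypt it. Below is a deliberately tiny textbook-RSA toy for intuition only; the primes are far too small for real use, and this is not the GPG workflow itself, which the lab walks through.

```python
# Textbook RSA with tiny primes: for intuition only, never for real secrets.
p, q = 61, 53
n = p * q                # the modulus, shared by both keys
phi = (p - 1) * (q - 1)
e = 17                   # public exponent: (e, n) is the public key
d = pow(e, -1, phi)      # private exponent: (d, n) is the private key

def encrypt(m: int) -> int:
    return pow(m, e, n)  # anyone who has the public key can do this

def decrypt(c: int) -> int:
    return pow(c, d, n)  # only the holder of the private key can do this

message = 42
ciphertext = encrypt(message)
print(ciphertext, decrypt(ciphertext))  # decrypt recovers 42
```

Real GPG layers key management, padding, signing, and hybrid encryption on top of this core idea, which is why the lab uses the tool rather than raw math.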
Day 1: Cloud Security Strategy and Planning
Cloud Architectures and Cybersecurity
Introduction to Cloud Automation
Governance, Risk, and Compliance (GRC)
Costs and Budgeting
Malware and Cloud Threats
208

Day 1 - Cloud Security Strategy and Planning ~ 2nd Sight Lab ~ Cloud Security Class ~ 2020

About this class

Assumes basic knowledge of cloud. See links in notes if needed.
Real-world scenarios ~ personal experiences moving to the cloud.
Designed for anyone with some technology background.
Hands-on labs ~ designed for different levels. Beginner and bonus labs.
Focused on public cloud and infrastructure as a service.
Some discussion of other clouds, but not the focus.

This class assumes you have some idea what the cloud is; if you want a refresher, here are two definitions:

Amazon's definition: "Cloud computing is the on-demand delivery of compute power, database storage, applications, and other IT resources through a cloud services platform via the internet with pay-as-you-go pricing." https://aws.amazon.com/what-is-cloud-computing/

NIST: "Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models." https://csrc.nist.gov/publications/detail/sp/800-145/final
Setup to receive content and participate in labs

For documents ~ a gmail account: https://gmail.com
Sign into the 2nd Sight Lab portal (we sent an email to your gmail account).
For labs ~ complete the setup instructions if you haven't:
AWS account: https://aws.amazon.com
Azure: https://portal.azure.com
Bitbucket account set up using your gmail address: https://bitbucket.org
Slack: we'll set this up in the last lab.

You should have received an email from 2nd Sight Lab at your gmail address by now with instructions for logging into the 2nd Sight Lab portal. If you haven't done this yet, do it now to access the slides and, if you want to do the labs, the lab content and tools. Let your instructor know if you have any questions or problems accessing the materials.
About the screenshots in documents...

One of the biggest challenges with cloud services is the rate of change. The nature of cloud services is that providers can roll out changes at any time, and you generally won't be notified of many of them. The same is true when we write labs for this class: you may notice some of the screenshots don't exactly match. Welcome to life in the cloud!

As we were writing this class, new screens were appearing, and during the first official launch of this class, a new CloudFormation portal was released. Each year AWS, Azure, and Google launch thousands of new features, services, and enhancements. Unfortunately they do not ask us before making these changes, so as you go through the material, some screenshots may not exactly match what you see. This will happen to you a lot in the cloud, so consider this your introduction to life in the cloud: the only thing that is constant is change.

One of the tricky things for security teams is tracking and dealing with these changes when they occur. This class will help you consider what you can do to manage this change while still taking advantage of the innovative new features cloud providers offer as they appear.
Cloud account setup - Initial Best Practices

Use an email alias.
Remove programmatic access for the global administrator / root user.
Set up MFA - especially on the root account, but better on all accounts.
Create a secondary user and only use the root account if required.
Set a password policy.
Turn on all logging (but it can cost money).

If you are setting up a new cloud account, there are some initial best practices to consider as you get started. If you set up a cloud account for this class, we recommend you do these things as well, if possible.

Use an email alias. If you use an alias when you set up your account, you make it harder for someone to guess your email and login. Additionally, if you are setting up a cloud account for a company, it's best to use an alias that gets forwarded to multiple people rather than tying the account to an individual's email address. What if that individual leaves the company? Another tip: think about your naming convention in advance. If you name your accounts consistently, it will be easier to find all your accounts and the email addresses related to them. For example, maybe all your cloud accounts start with cloud-[unique-name]@ or your AWS accounts start with aws-[unique-name]@.

Set up MFA. Everywhere! Make it that much harder for someone to get into your account by setting up MFA on all your accounts.

Create a secondary user and only use the root account if required. The root account or owner is the user account you used to create the account. It is an all-powerful user that can do anything in your account - including delete it! Often people will create this user, add MFA, and store the credentials in a safe or in whatever other secure manner the company typically stores these types of credentials.

Set up a password policy.
Although this recommendation is in question in the latest NIST guidance, and many companies are starting to offer "passwordless" solutions, the cloud providers still recommend a password policy. Whatever you do, try to avoid short, simple passwords with common words such as the name of your company, the time of year, or the local sports team!

Turn on all logging. In the case of a security incident, logs are required to determine what happened. Look at the cost of the logging, but wherever possible turn on all the logs. We have a lab that looks at different logging options in the cloud later in the week.
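The password-policy advice above can be sketched as a simple validation routine. The minimum length and banned-word list below are illustrative assumptions, not any provider's defaults; cloud providers expose the real controls through their IAM settings (for example, AWS's account password policy, configurable via `aws iam update-account-password-policy`).

```python
import re

# Illustrative thresholds only -- real policies are configured in the cloud
# provider's IAM settings; these values are assumptions for the sketch.
MIN_LENGTH = 14
BANNED_WORDS = ("password", "company", "summer", "winter")  # e.g. season, company name

def violates_policy(password: str) -> list:
    """Return a list of reasons a candidate password fails the policy."""
    problems = []
    if len(password) < MIN_LENGTH:
        problems.append("too short")
    if not re.search(r"[A-Z]", password):
        problems.append("no uppercase letter")
    if not re.search(r"[a-z]", password):
        problems.append("no lowercase letter")
    if not re.search(r"[0-9]", password):
        problems.append("no digit")
    lowered = password.lower()
    for word in BANNED_WORDS:
        if word in lowered:
            problems.append(f"contains common word: {word}")
    return problems
```

A password like "Summer2019!" fails on both length and a common word, exactly the kind of guessable choice the slide warns against.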
Cloud security certifications

Certificate of Cloud Security Knowledge (CCSK) ~ Cloud Security Alliance (CSA)
Certified Cloud Security Professional (CCSP) ~ CSA and (ISC)2
AWS, Azure, and GCP certifications ~ from the cloud providers
ISACA ~ exams for auditors
SANS certification (under development)
CISSP ~ not so much about cloud, but likely evolving

A lot of people ask whether cloud certifications are helpful, and whether this class will help obtain one. We've already had one student obtain a certification after taking this class; however, as a general rule you have to understand the requirements for a particular certification and focus on the recommended documents and reading.

An unscientific survey by the author reveals that some hiring managers value certifications and others not so much. In general, hiring managers who have certifications, or have had them in the past, value them more than those who never obtained one. Having a certification proves that a person more junior in their career put in the work in a particular field to get it. Over time, experience becomes more relevant. In any case, a certification may help a candidate get past non-technical human resources staff and recruiters: since they don't have the technical knowledge to assess skills, certificates can help them determine that a person has qualifications for a particular job.

The following are some certifications and links to more information if you are interested.

Certificate of Cloud Security Knowledge (CCSK) from the Cloud Security Alliance. Open book. Governance, risk, compliance. Evaluating cloud providers.
https://cloudsecurityalliance.org/education/ccsk/#_overview

AWS. Implementations with AWS tools and services. https://aws.amazon.com/certification/

Azure. Implementation with Azure tools and services. https://www.microsoft.com/en-us/learning/azure-exams.aspx

Google Cloud Platform (GCP). Implementations with GCP tools and services. https://cloud.google.com/certification/

ISACA. Certifications for auditors. http://www.isaca.org/certification/pages/default.aspx

SANS Institute. SANS Institute is working on a certification for their cloud security class. Broad, general security knowledge applied to cloud. SANS has many other classes that go deep into specific aspects of security such as reverse engineering malware, network intrusion detection (packet analysis), forensics, and pentesting. They also have an accredited masters program (which the author has taken). https://sans.org

CISSP. Although one of the most widely known security certifications, it is very broad and deals with security at a high level rather than things like packet and malware analysis. It also includes topics like physical security for data centers, so although it's one of the most well known, it won't be the most applicable to some. It is probably the most recognized by human resources staff and recruiters. https://www.isc2.org/Certifications/CISSP

There are many other certificates from various organizations focusing on specific aspects of IT or security as well. Many universities now offer security undergrad and masters programs; it's important to look at the credentials of the person running the program and the instructors.
Cloud architectures and impact on cybersecurity

What exactly is the cloud? Is it just someone else's computer, or is it more than that? Let's look at some architectures that are uniquely cloud and consider the characteristics of a service or system that qualify it as a "cloud architecture." Then we can explore how these new architectures impact the security of networks, systems, and applications.
The Golem Project ~ a true cloud architecture

Share your computer - get some cryptocurrency. https://golem.network/

The Golem Project is probably the truest form of cloud architecture. People sign up for the network and contribute compute power in exchange for cryptocurrency: computer owners all over the world can sign up, and other people can use their computers' resources when they are not in use.

Applications need to be written in such a way that they can operate correctly on this "distributed architecture." Distributed means the compute power is spread out over many systems, often located in different locations, and generally not a fixed number of systems. If one system fails, the application keeps running because other nodes seamlessly pick up the work. Additionally, if the system needs more compute power, a distributed application will often automatically add nodes to help process the data. Contrast this with a system designed to run on one computer, or on a cluster of a specific or limited number of nodes.
Private clouds and OpenStack

Joint project of Rackspace and NASA (2010)
Open source software for cloud computing
Host in your own data center
Many companies tried... and gave up
Complex. Limited by hardware resources
https://www.openstack.org/

So you want to build your own cloud? OpenStack offers a way to do this using open source software in your own data center. OpenStack began in 2010 as a joint project of Rackspace Hosting and NASA. In 2012, NASA withdrew to use AWS. OpenStack is now managed by the OpenStack Foundation and includes support from companies such as Oracle and Hewlett Packard.

Some companies reluctant to put data into the public cloud started with OpenStack, building private clouds on premises and giving developers access to create resources like compute, storage, and networking. The idea sounded great, but it turns out it's complicated to run a cloud platform efficiently and with the same usability as the public clouds. Additionally, scalability is limited to the systems available in the private data center, so private clouds cannot take advantage of the economies of scale of a public cloud.

In addition, private clouds are typically run by organizations that specialize in specific business domains and don't have a lot of expertise or staff to maintain the private cloud. It's difficult to keep up with the features, functionality, scalability, and usability of the public cloud platforms. In many cases, developers are dissatisfied with internal clouds after using public clouds and push for access to the public clouds. The author had such an experience at a large company that gave up on an attempt to implement a company-wide private cloud and ultimately moved to public cloud instead. Many companies have had similar experiences, though some organizations still run private clouds and use OpenStack.
Public cloud services

Third-party hosted cloud computing platforms that anyone can use.
Salesforce offers a hosted API and GUI for sales applications.
AWS is one of the most widely used and known infrastructure clouds.
Azure followed suit, though years later. Azure started as a PAAS platform.
Google offers gmail, Google Docs, and other hosted services.
iCloud... People call almost anything hosted by someone else "the cloud."

Initially, companies running applications on the Internet insisted on running all their software on their own servers. They also insisted on owning all the code. For some web-based businesses, this stemmed from the dot-com (later to become the dot-bomb) era, when companies would build websites and then either sell their companies or "go public" (become publicly traded on the stock market). To show the value of their companies, they wanted to own all the intellectual property (IP) associated with the systems that ran their businesses.

Eventually, the cost of running and maintaining secure and scalable systems outweighed businesses' desire to own all their own code and infrastructure. Additionally, the rise of open source software changed the idea that companies had to own all their own software. Organizations could get things done faster by using software created, maintained, and in some cases hosted by other companies.

Initially, organizations moved from hosting their own servers to hosting them in colocation facilities, where another company maintained the building and network but the companies owned their own servers. Next, companies started using managed hosting services, renting servers from companies that managed the physical hardware, networks, and building. GoDaddy, founded in 1997, started a service that allowed customers to use a database and create a website on a shared platform instead of hosting everything themselves. Customers managed these systems through a dashboard. This was one of the initial steps toward the cloud hosting model.
Salesforce started a revolutionary service in 1999: a hosted platform for sales applications. Instead of owning and hosting all the software, companies could leverage this shared platform, which offered a lot of features and functionality without having to custom build a whole new system. Additionally, the system offered APIs and components developers could use to build systems more quickly. Salesforce was one of the first major SAAS (Software-as-a-Service) platforms. https://www.computerworld.com.au/article/641778/brief-history-salesforce-com/

Amazon was one of the first companies to create a public cloud. The first AWS service was SQS (Simple Queue Service), launched in 2004. Amazon started offering virtualized infrastructure: instead of renting a server or using a software-only service, companies could set up virtual and scalable infrastructure. The details of how AWS came about vary in different reports, but Jeff Barr, chief AWS evangelist, published a timeline on the official AWS blog: https://aws.amazon.com/blogs/aws/aws-blog-the-first-five-years/

Now it seems like every company offers some sort of cloud service. All types of data, infrastructure, and applications are hosted "in the cloud" - on someone else's computer.
Public versus private cloud

Another way to think about public versus private clouds is based on networking. A private cloud is contained to your private network. A public cloud is accessible to and from any network.

In other words, a public versus a private cloud can be defined based on which networks can connect to it. A private cloud is typically only accessible from a private network - that is, not from the Internet - while a public cloud is typically accessible from the Internet. Using those definitions, a private cloud would only be accessible from a specific private network belonging to a particular organization, and a public cloud would be accessible from any address on the Internet.
Hybrid clouds

A hybrid cloud typically consists of resources in a public and a private cloud.
It may simply be a connection allowing data to pass between two clouds.
Applications may also be designed to scale from private to public clouds.

A hybrid cloud refers to a cloud that connects private and public clouds into a larger cloud. This connection is typically created with a VPN (virtual private network) or a private connection between the organization's data center and the public cloud to secure the transmission of data instead of having it flow directly over the Internet. Many organizations use hybrid clouds to connect private networks to public clouds and vice versa.

Some example use cases for a hybrid cloud: A company wants to allow an application in the cloud to connect to a database hosted in the company's data center. A company wants to allow developers on the internal network to access the public cloud over a secure connection. A company may want to back up data to the cloud or vice versa over a private connection. An on-premises application may scale up into the public cloud when demand is greater than the on-premises data center systems can support.
Cloud services ~ typical characteristics

Not hosted by you
On-demand
Scalable
Shared resources
Pay as you go

Log in, push a button, get a virtual machine.

Not hosted by you: Typically when people think of cloud, they think of systems hosted by someone else. Some companies do install software for cloud platforms they host internally, but this class is focused on external cloud resources. Many companies that tried to set up private clouds found it challenging and have opted to move to public cloud infrastructure, because third-party cloud providers specialize in this service and it is their business, whereas other companies may be focused on a different line of business, such as banking, retail, hospitality, health care, or real estate.

On-demand: As shown in the picture, it's possible to click a button to get a new virtual computer in the cloud. Instead of waiting for the IT team in a company to purchase a server, install it, and get the network team to set it up, developers can just run a new machine instantly for a new project.

Scalable: Most cloud environments have architectures and services that can grow and shrink automatically as you use resources. Instead of having to define in advance how many servers you need for a big new project, storage, network, and compute capacity can be added on demand as the need for additional resources arises. This can alleviate problems associated with spending more money than needed when capacity was overestimated, or having systems go down due to underestimated requirements.

Shared resources: Cloud architectures are often delivered in what is called a multi-tenant environment. That means the systems or data belonging to a single customer may be deployed on the same physical hardware as other customers'. In some cloud services, customer data may be stored in the same database or on the same operating system as other customers' data.
Pay as you go: In theory, companies can save money in the cloud because they can reduce capacity when they are not using it. In practice, the opposite sometimes happens when companies do not manage resources carefully. Organizations need to understand who is instantiating what resources and ensure systems are right-sized and terminated when not in use.
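The trade-off above is simple arithmetic. The hourly rate below is a made-up placeholder, not real cloud pricing, but it illustrates how a forgotten always-on instance overwhelms the savings pay-as-you-go promises.

```python
# Assumed on-demand price per instance-hour -- a placeholder, not a real quote.
HOURLY_RATE = 0.10  # USD

def monthly_cost(hours_per_day: float, days: int, rate: float = HOURLY_RATE) -> float:
    """Cost of an instance that runs hours_per_day for the given number of days."""
    return round(hours_per_day * days * rate, 2)

# A dev box stopped outside business hours vs. one left running 24/7:
business_hours = monthly_cost(8, days=22)   # ~22 working days
always_on = monthly_cost(24, days=30)
```

At these assumed rates, stopping the box nights and weekends costs roughly a quarter of leaving it running, which is the savings that disappears when nobody remembers to turn things off.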
Impact to cybersecurity: Not hosted by you

Loss of control of some aspects of configuration
Some logs might not be accessible - reliant on the cloud provider
Harder to capture network traffic in the cloud
Can't pentest or audit certain environments as you normally would
Different implementations than a typical data center environment
Location of hosted resources may impact legal jurisdiction

What happens when you're not the one hosting an application? Some parts of the system that you controlled in the past are no longer accessible to or controlled by you. Developers probably notice the impact of this less, as they are typically running their code on systems provided by other people. They may not be aware of the security implemented by the teams that deploy servers and networks. For them, life is easier in a lot of ways. In some cases the developers got to the cloud first and got things working. But is it secure? We'll take a look at that as the class proceeds.

In some ways, the job of the people managing security is harder. System configurations and certain types of logs may no longer be accessible for review. Tasks like pentesting and auditing are no longer possible to complete in the typical fashion - if at all for some types of clouds. Security tools that work in an on-premises environment may not work well in the cloud, which means the security team has to evaluate, purchase, and learn new types of security tools. In other ways, security becomes easier, because the security team can offload some work and (potentially) liability to the cloud provider. We'll also talk later about how the automation capabilities of the cloud platform can help.

Typically, large organizations pentest and audit data center environments. With some cloud providers the data centers will not be accessible, so validating vendor security requires new methods of assessing and dealing with cybersecurity risks.
The location of the resources in the cloud may impact legal jurisdiction. If an incident occurs, the legal jurisdiction where the data or system is hosted may apply and the company may end up having to fight a court case far from where the company does business, incurring additional costs and possibly being subject to new and different laws.
Impact to cybersecurity: On-demand

Perhaps the developers got there first - and it needs a security makeover.
Anyone with permissions can create resources.
Resources with higher privileges can be instantiated and abused.
Security policy enforcement can be easier - or harder. It depends.
Malware and stolen credentials can quickly lead to unauthorized resources.
Terminated resources - ephemeral logs may be gone for good.

Because the cloud is easy to access and use, developers may have gotten to the cloud before the security and networking teams and started creating resources. In many companies this is the case. Often developers are not trained in security or networking, and some adjustments may be needed to bring the organization into compliance and align with company security policies and standards.

In the cloud, anyone with permissions can create and access resources. One very important aspect of cloud security is correct implementation of permissions, otherwise known as IAM (identity and access management). In the cloud, resources like virtual machines that run applications can also be granted permissions. If you are not careful when implementing IAM policies, a user could instantiate a virtual machine with elevated privileges and gain access to things the user would normally not be able to see or do.

Enforcing security policies in an on-demand world may be easier or harder; it depends on how the permissions and deployment systems are structured, as we will discuss later. Malware and stolen credentials can lead to unauthorized resources; we'll talk about how attackers are leveraging these credentials in the section on cloud threats.

Resources in the cloud are "ephemeral," meaning they are not persisted or saved after they are deleted or terminated. Just as a cloud resource can easily be created, it can easily be destroyed with the correct permissions.
When a resource is destroyed, any logs on that resource will be gone as well. Security teams need to make sure logs are stored in a way that keeps them around in case of an incident.
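One way to catch the over-privileged definitions discussed above before they are deployed is to scan policy documents for obvious red flags. The sketch below borrows the shape of AWS's JSON policy language (Statement / Effect / Action / Resource), but the single rule it checks - "Allow" on any action against any resource - is just one illustrative check, not a complete policy linter.

```python
def risky_statements(policy: dict) -> list:
    """Flag policy statements that allow any action on any resource.

    The structure mirrors an AWS-style JSON policy document; the wildcard
    check below is an illustrative example of a guardrail, not an
    exhaustive privilege-escalation analysis.
    """
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # A single action/resource may be a bare string rather than a list.
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if "*" in actions and "*" in resources:
            flagged.append(stmt)
    return flagged

# An admin-style policy that a guardrail like this would flag:
admin_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
}
```

Running a check like this in the deployment pipeline gives the security team a veto point before a wildcard policy ever reaches a running resource.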
Impact to cybersecurity: Scalable

Auto-scaling: new servers and containers created to handle load.
Manual patching will not apply to auto-scaled resources.
IP addresses are dynamic - they change when systems are restarted.
An IP address assigned to your system might later point to someone else's.
A two-hour TTL is not a good idea...
Security appliances cannot depend on fixed IP addresses.
Resources scaled down - may lose ephemeral logs.

Auto-scaling is an important aspect of a cloud security architecture. The idea is that resources are created when application load requires them and excess resources are terminated when no longer needed. The issue is that auto-scaled resources are created from a base configuration: patches manually applied to resources running at a particular point in time may not get applied to the new resources as they go up and down in the cloud. A new strategy may be more effective.

IP addresses in the cloud are not static, with few exceptions. Most IP addresses change every time a system is started, restarted, or redeployed. The cloud platform randomly assigns IP addresses, sometimes within subnets you define, other times not. A security team, and the security appliances and services deployed in the cloud, need to be able to handle the dynamic nature of cloud IP addresses.

As you can imagine, a two-hour TTL (time to live for DNS records) is not a good idea. If your DNS record points to an IP address for two hours before it updates, the IP that was assigned to your resource may suddenly be pointing to another company's cloud server, and your traffic may be going to the wrong place for two hours!

Resources that scale up and down based on demand, or that are triggered by an event to run temporarily, will also have short-lived logs.
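The two-hour TTL warning above is worth quantifying. A resolver that cached the record just before the IP changed can keep answering with the old address for up to one full TTL, so the worst-case misdirected traffic scales with TTL times request rate. The 50 requests/second figure below is an arbitrary example rate, not a measurement.

```python
def misdirected_requests(ttl_seconds: int, requests_per_second: float) -> float:
    """Upper bound on requests sent to a stale (possibly reassigned) IP:
    cached resolvers may serve the old record for up to one full TTL."""
    return ttl_seconds * requests_per_second

# A 2-hour TTL versus a 60-second TTL at an assumed 50 requests/second:
two_hour_exposure = misdirected_requests(2 * 60 * 60, 50)
one_minute_exposure = misdirected_requests(60, 50)
```

At these assumed rates, the two-hour TTL exposes over a hundred times more requests to a reassigned address than a sixty-second TTL, which is why short TTLs suit dynamic cloud IPs.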
Impact to cybersecurity: Shared resources

Many virtual machines on the same physical hardware.
Many containers on the same virtual machine.
Sometimes well-defined trust boundaries do not exist.
No VPNs and federated IAM on some cloud services.
Can't easily get your data out once you put it in.
No physical disk copy in the case of an incident.
Harder to get logs in some cases.

Shared resources create some of the economies of scale and potential cost savings in the cloud. They also cause some security concerns. Who else is hosted on your server besides you? Can the other virtual machines or containers on the same physical hardware get to your virtual machine or container?

In some cases, cloud services may not have well-defined trust boundaries between components, systems, or people. When AWS, Azure, and Google first launched, they did not have the concept of virtual private networks; every customer's resources were running in one big flat network. New networking services exist, but some cloud providers and services still do not have well-defined trust boundaries to separate customers, or do not allow customers to create segregation according to best practices due to the limitations of the particular service. Some cloud providers do not allow access to cloud services over private networks or using federated IAM. We'll talk about why that is a problem later in class.

Some have concerns that once the data is in, it's hard to get out. For example, AWS allows you to load up data on a semi-truck and add it to AWS for free. How would you get that data back out? When you send data into the AWS network it is free; when you send data out, there's a charge. Also, in some cloud services, data is co-mingled in such a way that it is almost impossible to extract from other customer data.

The typical method of copying a physical disk for incident handling no longer applies
in the cloud. Security teams need to learn and practice new methods for capturing incident data. In addition, some logs may not be accessible at all, may be lacking data, or may be harder to capture because the log data is co-mingled with other customers' data or managed by the cloud provider.
Impact to cybersecurity: Pay as you go

The idea was that by turning off unused services, you save money.
In practice, people forget to turn things off.
Architectures are not designed correctly to realize savings.
Malware spins up new hosts and containers to run cryptominers.
Lift-and-shift deployments cost more than on-premises deployments.
Lack of management leads to waste and overspending.

Pay as you go is great... until it's not. The idea in the cloud is that you can turn on a resource when you're using it and turn it off when you're done, paying only for the time period when it was running. You can also right-size your resources to your application and take advantage of all sorts of methods for reducing costs by aligning your architecture with your application needs.

In reality, developers come to the cloud, spin up instances larger than needed, and forget to turn them off. Architectures are moved to the cloud in a lift-and-shift manner that does not realize cost savings or performance optimizations. Malware gets into the cloud and creates unauthorized resources running cryptominers and other malware. Lack of management of this pay-as-you-go model leads to waste and overspending, versus finite resources in a data center, where the spending happens at the time the server is purchased and resources are fixed.
Other concerns...

Everything is interconnected. One web page calls numerous other APIs.
What is all this stuff??
Application security is paramount.
Loss of credentials can wreak havoc.
Misconfigurations abound.
Questions, trust... and contracts.

(Screenshot: Little Snitch, a Mac firewall.)

In addition to security concerns based on the architecture of a typical cloud application, we have some other big-picture concerns when looking at all things cloud. Everything is interconnected. Websites call so many different services and APIs that it's hard to track what data is going where. The screenshot above shows web traffic for a few websites and all the different APIs and services being called; you can get similar information by turning on developer tools in some web browsers. Do you want your data going to all these different places when you visit one website? Is there a better way to architect web applications so these APIs and third-party websites are not exposed to every visitor? Yes! In addition, all these dynamic connections create challenges for traditional ways of implementing network security.

Application security becomes even more important when all these different APIs are interconnected and websites are calling each other. We'll talk about some of the newer exploits occurring with incorrectly configured APIs and web servers.

Loss of credentials can wreak havoc in the cloud. If attackers get on premises they may delete your data, but they can't delete your entire server! In the cloud, loss of credentials could mean deletion of everything in your account if credentials are not handled properly.

Misconfigurations are one of the biggest threats in the cloud, as we'll discuss. Security teams need visibility into cloud configurations and the ability to set and
  • 27.
maintain configuration policies. Often security teams were involved in setting standard configurations for on-premises systems, but in the cloud they may not have been involved when the initial systems were rolled out. Additionally, many new cloud services need to be evaluated to determine the appropriate configuration for all the available settings. Container, database, and other storage service configurations also need to be evaluated. We’ll discuss containers and virtual machines later if you’re not familiar with those terms. As for the things security teams no longer have access to in the cloud, the security team needs a new approach for determining whether those things are secure. As we’ll discuss, this really comes down to asking questions, deciding whether you trust the cloud provider’s answers, and contracts.
  • 28.
Geography and Jurisdiction. When you host data in a cloud service, do you know where it is located? Location is critical from a legal standpoint. The jurisdiction that applies in a court case may depend on data location. Different laws apply in different locations. Some organizations disallow data access by citizens of foreign countries. In an ongoing case, Microsoft handed over data in the US related to a court case but refused for data in Ireland, citing that it was a different jurisdiction. 22 When you are using a particular cloud provider, do you know where your data is located? Where is it backed up? Who has access to it, including support, security, and operations staff at the cloud provider? Where is the authentication service used to access the data located? Where are the system logs? Location is critical from a legal standpoint in the cloud. If an organization’s data is hosted in another legal jurisdiction and a security incident occurs, the organization may be required to appear in court in that jurisdiction. Additionally, the laws of that jurisdiction may apply. Understand the laws that apply as data transfers from one location to another. Some organizations disallow access by foreign countries. Different laws may apply to data if it includes citizens of other countries. We’ll talk about GDPR more later. In a recent case, Microsoft was asked to give up data related to a legal matter. Microsoft provided the data hosted in U.S. data centers but refused to provide data hosted in Ireland, arguing that it was a different jurisdiction and different laws applied. https://www.lawfareblog.com/microsoft-ireland-case-supreme-court-preface-congressional-debate
  • 29.
The Upside! Possibly shift liability to a third party via a contract. Built-in inventory exists by nature of how the platforms work (CIS Controls). Additional resources exist for your security team via cloud provider support. IAAS clouds are huge configuration management platforms. Automation can reduce human error. Segregation of duties and networks may be easier. New ways of doing things may be more efficient and reliable. 29 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential Shift Liability. One of the upsides of giving up control of certain aspects of security may be the ability to shift liability to a third party via a contract. Businesses look at ways to reduce or mitigate risk. If you allow the cloud provider to handle a certain aspect of your security and something goes wrong, who will be liable? Of course, the impact to your brand must also be considered. Choosing a shoddy cloud provider may not sit well with customers if and when something goes wrong and it is not handled properly. Inventory. Cloud platforms such as AWS, Azure, and Google have built-in inventory tracking. As we’ll discuss, inventory tracking is one of the top recommendations of the Critical Controls. You can simply run a query on most cloud platforms to get this data. Support. Some cloud providers provide excellent support, especially for customers with enterprise support plans. When a security incident occurs or when implementing new security appliances and services, the cloud provider can provide additional resources to help. Additionally, the platforms offer the ability to automate a lot of error-prone tasks that can lead to security incidents. Configuration management. By virtue of how the cloud platforms work, they provide built-in configuration management - if used properly. We’ll talk about how to leverage this functionality effectively. Automation. Studies cite human error as one of the primary reasons for security
  • 30.
incidents. In some cases this is due to phishing attacks, but in other cases misconfigurations or deployment mistakes contribute to the problem. By automating as much as possible, repeatable methods can limit manual actions and reduce the chance of human error in the process. Segregation. Most cloud services (but not all!) create immutable logs that can be used to track incidents and to know that an attacker, or a person working on the platform, has not altered them. The major cloud platforms also offer the ability to segregate resources via accounts and other constructs based on IAM policies, roles, and other settings. Cloud networks also allow for very fine-grained network configurations to ensure access to and from resources is limited to only what is required - down to the virtual machine, container, and data store for some services. Efficient and Reliable Processes. By leveraging event-driven automation, reaction to repeated events can be quick and reliable, saving valuable time and preventing mistakes.
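To illustrate the built-in inventory idea above, the sketch below flattens the kind of response an IAAS "describe instances" API returns into a simple inventory list. The response shape here is a simplified stand-in, not an exact cloud provider payload:

```python
# Simplified stand-in for an IAAS "describe instances" API response --
# real provider payloads (e.g., EC2 DescribeInstances) carry many more fields.
sample_response = {
    "Reservations": [
        {"Instances": [
            {"InstanceId": "i-0001", "InstanceType": "t3.large",
             "State": {"Name": "running"},
             "Tags": [{"Key": "Owner", "Value": "dev-team"}]},
            {"InstanceId": "i-0002", "InstanceType": "m5.xlarge",
             "State": {"Name": "running"}, "Tags": []},
        ]},
    ]
}

def build_inventory(response: dict) -> list:
    """Flatten the API response into one inventory record per instance,
    flagging untagged resources (no Owner tag) for follow-up."""
    inventory = []
    for reservation in response.get("Reservations", []):
        for inst in reservation.get("Instances", []):
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            inventory.append({
                "id": inst["InstanceId"],
                "type": inst["InstanceType"],
                "state": inst["State"]["Name"],
                "owner": tags.get("Owner", "UNKNOWN"),
            })
    return inventory

for record in build_inventory(sample_response):
    print(record)
```

A periodic job running a query like this per account and region gives you the asset inventory the Critical Controls call for, with no agents to install.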
  • 31.
Types of Clouds: IAAS, PAAS, and SAAS. Infrastructure-as-a-Service (IAAS): a virtual data center on shared physical resources. Platform-as-a-Service (PAAS): a platform for building applications with developer components. Software-as-a-Service (SAAS): software delivered to you over the Internet - you don’t have to install it. 31 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential You may have heard these terms before in relation to cloud providers: Infrastructure-as-a-Service (IAAS) A virtual data center on shared physical resources. In an IAAS environment you have more control over resources such as virtual machines, where you are responsible for managing the operating system. Platform-as-a-Service (PAAS) A platform for building applications with developer components. In a PAAS environment the customer doesn’t manage the operating system or database server. The customer has access to components at a higher layer that can be leveraged to write code and build applications, without administering or accessing the underlying infrastructure. Software-as-a-Service (SAAS) Software delivered to you over the Internet - you don’t have to install it. A SAAS application makes use of shared resources. That means a SAAS application is typically not something you install or manage on-premises or in your own cloud account. It’s typically software that you access via a web console or an API (application programming interface). More on APIs later. These categories have evolved to allow cloud providers to define the types of functionality they deliver. The lines get blurry sometimes when trying to determine which category an application falls into, but does it really matter? The main thing that concerns us as security professionals is that we need to understand the features of any particular cloud service we are using, what risks are present, and what we need to do about them.
  • 32.
That being said, it’s still good to understand the definitions of the different categories, so that when talking to customers or cloud providers, we have a general understanding of the terms and are talking about the same thing. It is also helpful when trying to understand our security responsibilities in a general way, because the amount of responsibility you have versus the cloud provider changes depending on the type of cloud service you are using.
  • 33.
IAAS 25 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential https://www.rightscale.com/lp/state-of-the-cloud?campaign=7010g0000016JiA The three IAAS clouds you’ve probably heard of by now: AWS, Azure, and Google Cloud Platform. Others exist, but the market share is divided mostly between AWS and Azure, with Google Cloud Platform a distant third. Other infrastructure-as-a-service cloud providers are barely on the map. AWS has been around the longest, with a 10-year lead in the industry. Azure started as a PAAS platform and Google as SAAS with Gmail, initially. Now all three are racing to keep up with each other, building new and better IAAS services and features.
  • 34.
PAAS 26 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential A few notable PAAS providers: Heroku, now owned by Salesforce, offers a platform designed to make it easier for developers to deploy applications using simpler components that work together. Red Hat/IBM OpenShift - build and deploy containers in public, private, and hosted cloud environments. Cloud Foundry - components to build and deploy cloud applications.
  • 35.
SAAS 27 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential Sample SAAS applications: DocuSign - integrate document signing and storage into your applications. Sumo Logic - operations, security, and business analytics based on your logs, with support for multiple clouds. Dropbox - store your documents on a third-party cloud.
  • 36.
Shared Responsibility Model. Concept created by Amazon to explain security responsibilities. Explains what security is handled by the CSP and what customers need to do. General rule: if you can see it and change it, it’s probably your responsibility. Make sure this is defined in your contract and assigns liability appropriately. If a compliance violation and fine comes along...who’s responsible? If there’s a data breach fine or lawsuit - who pays? 28 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential AWS came up with the shared responsibility model to explain to customers which aspects of the infrastructure AWS was responsible for securing and which aspects are the responsibility of the customer. The other IAAS clouds have followed in these footsteps to provide guidance to customers as to where the responsibility lies for different aspects of the cloud. Regardless of which cloud you are using, you need to understand what parts of the cloud service the provider will secure, how they will secure them, and whether that meets your requirements. In addition, you need to understand your own responsibilities and make sure you have secured your part. Beyond cloud provider documentation and statements, you need to ensure your contract clearly defines this responsibility. If something goes wrong, who will be liable for any damages, fines, or other legal ramifications? When it comes to the courtroom, the contract will be the most binding. What if a compliance fine results from some misconfiguration? Will your organization need to pay it, or the cloud provider? What if there is a data breach that results in a lawsuit? Who will be liable? What about GDPR requirements for deletion of customer data if an automated data deletion routine provided by a cloud provider fails or data is leaked due to a cloud provider error?
For large organizations who could face hefty litigation or fines, it is important to understand these things before signing an agreement with a cloud provider and deploying systems to the cloud.
  • 37.
29 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential https://aws.amazon.com/compliance/shared-responsibility-model/ The AWS shared responsibility model shows the components the cloud provider is responsible for and those the customer is responsible for. Amazon likes to say they are responsible for the security "of" the cloud and the customer is responsible for security "in" the cloud. AWS provides a deep dive into their security processes in this whitepaper: Amazon Web Services: Overview of Security Processes https://d1.awsstatic.com/whitepapers/Security/AWS_Security_Whitepaper.pdf
  • 38.
Author: Teri Radichel © 2019 2nd Sight Lab. Confidential 30 https://docs.microsoft.com/en-us/azure/security/azure-security-infrastructure Azure also has what they call Responsibility Zones. They break these zones down by type of cloud service - SAAS, PAAS, IAAS, and on-prem. This means the customer needs to understand the responsibility for each type of cloud service and which category each service falls into.
  • 39.
31 https://cloud.google.com/files/PCI_DSS_Shared_Responsibility_GCP_v32.pdf Author: Teri Radichel © 2019 2nd Sight Lab. Confidential This diagram shows the Google responsibility matrix for PCI compliance. They have also published some information on shared responsibility in relation to containers. If you are not familiar with containers and Kubernetes, we will discuss those more on day 3. https://cloud.google.com/blog/products/containers-kubernetes/exploring-container-security-the-shared-responsibility-model-in-gke-container-security-shared-responsibility-model-gke You can also check out a deep dive of Google’s security and responsibility model in their whitepaper: Google Infrastructure Security Design Overview https://cloud.google.com/security/infrastructure/design/
  • 40.
Shared Responsibility Model Resources. AWS: https://aws.amazon.com/compliance/shared-responsibility-model/ Azure: https://gallery.technet.microsoft.com/Shared-Responsibilities-81d0ff91 Google (focused on PCI): https://cloud.google.com/files/PCI_DSS_Shared_Responsibility_GCP_v32.pdf 32 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential A few more helpful links for people who are familiar with one cloud but not another: Azure for AWS Professionals: https://docs.microsoft.com/en-us/azure/architecture/aws-professional/ Google Cloud for AWS Professionals: https://cloud.google.com/docs/compare/aws/ Google Cloud for Azure Professionals: https://cloud.google.com/docs/compare/azure/
  • 41.
SAAS and PAAS. For each type of cloud, determine which responsibility is yours and which is theirs. Make sure your contract clearly defines that responsibility. Involve a lawyer - language around data breaches, privacy, jurisdiction, liability, transfer of ownership. 33 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential No matter which cloud provider you use, you’ll want to understand who is assigned responsibility for all the different aspects of security. The contract or agreement needs to clearly define these responsibilities. You will probably want to involve a lawyer and consider all the language around data breaches, privacy, jurisdiction, and transfer of ownership, among other things. Transfer of ownership matters because when you sell your business, do you want to have to wait for a cloud provider to approve the sale? Check your contract. SAAS and PAAS providers will have more responsibility for the underlying platform. This slide shows part of the document where Dropbox defines their responsibility versus what the customer needs to do. You can read the full document below. https://assets.dropbox.com/documents/en/trust/shared-responsibility-guide.pdf What about all the other cloud providers your company uses? Do you understand what each cloud provider is doing to secure their systems? Do you know the legal ramifications and the steps you will need to take during a security incident?
  • 42.
The issues: contract, risk, and trust. The issues with cloud security are largely legal and risk issues. Your contract is key: who is responsible for carrying out which security activities? What happens if something goes wrong? Even if you have a great contract, do you trust them to do their part? What risk does the organization face by moving data to the cloud? 34 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential Although engineers and DevOps teams like to focus on the technical aspects of security, a large portion of cloud security issues come down to legal questions and risk assessments. The contract is key to defining who is responsible for carrying out which security activities. It also determines who is liable and who pays for damages if something goes wrong. Although you might have a great contract, do you trust the cloud provider to do what they say they are going to do? What is the risk if they don’t? Even if you are able to collect damages, what harm might be done to the organization’s reputation? Would loss of data put the organization out of business? What other risks might the company face by moving to the cloud?
  • 43.
Contract Considerations 35 Right to Audit: security assessments, review logs, or ask for periodic reports. Availability: definition of downtime? Backups? Scheduled outages? Monitoring and alerts? BCP/DR? Compliance: can they meet your requirements? Data Access: encryption standards, key management, employee access, sharing, location. Data Breach: notification, damages and liability, chain of custody, insurance, forensic evidence. E-Discovery and Legal Holds: can they meet your requirements? Insurance: does their insurance cover your deductible? Intellectual Property: who owns what? Software, processes, reports, data, etc. Termination and Disposal: what happens when you terminate the contract or delete a resource? Author: Teri Radichel © 2019 2nd Sight Lab. Confidential When negotiating a contract with a vendor, it is important to include the security requirements you establish during an assessment of the vendor. If the vendor claims they do backups and you only write that down on an assessment, it may not hold up in court if the vendor fails to do so and loses all your data. Your legal team and security team will want to coordinate to ensure important clauses are contained within the vendor contract. If the vendor meets some type of compliance standard such as ISO 27001, SOC 2, CSA STAR, PCI, GDPR, or HIPAA, you can reference that compliance in the contract so the vendor will be obligated to maintain it over time. Consider what happens in the case of a legal issue or data breach. What logs are available to you? Can the vendor fulfill your obligations for legal holds? Will they be able to provide data with proper chain of custody in case of a breach where litigation ensues? Will they be able to provide logs that show you the exact scope of a breach, to avoid excessive fines and fees for data that was not actually exposed?
The following article provides some additional details about the items above and what should be covered in your contract: https://securityintelligence.com/posts/does-your-cloud-vendor-contract-include-these- crucial-security-requirements/
  • 44.
Legal and Risk Issues. This is not a contract class, so we can’t cover all the legal aspects of cloud. Ensure your lawyer is involved in reviewing the contract. The Cloud Security Alliance has a legal working group that may be able to help: https://cloudsecurityalliance.org/working-groups/legal/#_overview We will talk about risk assessments later in the class. 36 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential This is not a legal class, so we won’t cover all aspects of contracts here. Your instructor is not a lawyer - so get your lawyer involved to review your contract and to help you understand any legal risks. In addition to your own lawyer, or if you are seeking a lawyer, the Cloud Security Alliance has a legal working group that may be able to help. Their website states: "Our mission is to provide unbiased information about the applicability of existing laws and also identify laws that are being impacted by technology trends and may require modification." They also have some commonly asked legal questions on their website: https://cloudsecurityalliance.org/working-groups/legal/#_overview
  • 45.
Security impact: Individual cloud services. IAAS clouds offer numerous different “services” you can use: compute, storage, networking, security, and other services. Compute resources are used to execute code in the cloud. You may have control over networking rules and configuration. Storage is used to store files, data in a database, or other types of data. All these services can be combined to create cloud-hosted applications. 37 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential Cloud platforms consist of many different services that can all be used together to create applications. Compute resources, storage, networking, and other types of services can all be combined in different ways to create new types of applications and architectures. Each of these services will have different configuration options that need to be set correctly for optimal security. It is important for developers and security teams to understand the options available to them and set them appropriately.
  • 46.
Amazon Web “Services.” You’ll see a list of services grouped by categories like Compute, Storage, Database, etc. when you log in at https://aws.amazon.com Amazon gives things funny names. EC2 = a virtual machine. EBS Volume = a virtual hard drive. See more in the notes on this page. 46 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential A few core AWS services: IAM, or Identity and Access Management, is the AWS service for creating and managing users, roles, groups, and permissions in your cloud account. This service is used to authenticate users who access your account and specify who can do what. EC2 stands for Elastic Compute Cloud. Basically, when you create an “EC2 instance” you are starting up a computer in the cloud. It’s called a “virtual machine” because it’s actually just software running an operating system, as opposed to a physical piece of hardware running a single operating system. One physical server can run many virtual machines in the cloud. AMI means “Amazon Machine Image.” You select an AMI when you want to create a virtual machine. It’s a template that specifies what type of machine you want to create: the operating system and software contained on the machine you instantiate. An EBS Volume is a virtual drive you can attach to an EC2 instance. Just as you have hard drives on your laptop, you can associate a hard drive with a virtual machine. You can also remove a hard drive from a virtual machine and associate it with a different virtual machine. S3 is a service for storing “object” data. The objects look like files when you log into AWS, but the way they are stored is technically not file-based. Object storage allows for more scalable storage in the cloud. This service allows you to create a place to store files without specifying how much space you need up front. You just keep adding files, and the service grows and charges you based on the amount of data you add.
This is different from old-school models where you had to calculate and define how much storage you needed in advance. It is also a great way to get scalable log storage, instead of being limited by the size of services in
  • 47.
your data center. RDS is the AWS relational database service. Relational databases store data that is queried using SQL (Structured Query Language). These types of transactional databases are typically used for things like financial applications that need strong data integrity and unquestionable transactions. Instead of having a database administrator (DBA) handle backups, replication, and other management tasks, this AWS service can do some of that automatically for you. RDS offers different database platforms such as SQL Server, MySQL, PostgreSQL, and Amazon Aurora. VPC, or Virtual Private Cloud, is the AWS service where you’ll define networking resources and rules. Networking rules are defined to allow or deny access to cloud resources via networking endpoints, protocols, routes, and rules.
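The IAM description above can be made concrete with a short sketch. The function below builds a read-only, least-privilege policy document; the bucket name is hypothetical, while the policy grammar (Version, Statement, Effect/Action/Resource) is the standard AWS IAM JSON format:

```python
import json

# Sketch: a least-privilege IAM policy expressed as code.
# The bucket name passed in is hypothetical; the document structure
# follows the standard AWS IAM policy JSON grammar.
def read_only_bucket_policy(bucket: str) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",       # the bucket itself
                    f"arn:aws:s3:::{bucket}/*",     # objects in the bucket
                ],
            }
        ],
    }

policy = read_only_bucket_policy("example-app-logs")  # hypothetical bucket
print(json.dumps(policy, indent=2))
```

Scoping Action and Resource this narrowly is the point: a role holding this policy can read one bucket and nothing else, which limits the damage if its credentials leak.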
  • 48.
Azure Services. Azure also shows a list of services you can use when you log in to https://portal.azure.com Some of the services have recognizable names like “Virtual Machines.” 39 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential Azure typically gives things familiar names. Azure AD is like Active Directory on premises and is used for IAM on Azure. Virtual machines are...virtual machines. Storage accounts are used to store data such as files. SQL databases, as the name states, are SQL databases. Networking in Azure starts with virtual networks, or VNETs.
  • 49.
Google Cloud Platform (GCP). Google Cloud Platform has the same concept. Google tends to focus more on compute and IAM than network controls. 40 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential Google Cloud also has a number of different services along the same lines. Many of the Google services start with “Cloud,” and in most cases the names are decipherable. Cloud Identity is Google’s built-in identity and access management service. Compute Engine is Google’s virtual machine service. Cloud Storage is object storage, similar to S3 on AWS or Azure’s storage accounts, used to store files for applications, for example. Cloud SQL is Google’s relational database service. The Virtual Private Cloud (VPC) network is Google’s base networking service.
  • 50.
Security implications of each service. Can it run in a private network, or does it require Internet access? Can you encrypt the data, and who has access to the encryption key? Does it meet compliance requirements if you need them? What settings can be used to secure the service? Who can change them? What logs does the service offer, and what do they contain? What is the SLA for each service (it may vary within a single cloud provider)? See the notes for a few more. 41 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential Some security and risk-related questions you might want to ask about each service used in your cloud account: Can it run in a private network or does it require Internet access? Can you encrypt the data and who has access to the encryption key? Does it meet compliance requirements if you need them? What settings can be used to secure the service? Who can change them? What logs does the service offer and what do they contain? What is the SLA for each service (they may vary within a single cloud provider)? What actions can the service take in your account? What data does it cache and where? How does authentication work for the service? How much does it cost?
  • 51.
Security architecture concerns ~ big picture. Cloud security architects need to look at cloud services holistically. Many cloud services can be combined to create applications. Look at where the data can flow - networking, APIs, cloud accounts. Consider the entire attack surface as a whole vs. separate components. A mashup of cloud services can create leaks and vulnerabilities. Architect a solution for deployment, governance, and risk reporting. 42 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential Sometimes people look at individual components but not at the intersection of components or the big picture as a whole. This is where security breaks down. Security in your account is only as good as the weakest link. Look at the systems and the account architecture as a whole to determine where attackers can get malware in and data out. Sometimes an individual cloud service may be fine on its own, but when multiple services are linked, problems can occur. Look at the attack surface of all the connected pieces as a whole. When architecting a cloud solution, one of the most important points, which we discuss in a lot more detail later, is to gain visibility into deployments. You’ll want to see what is being deployed in the cloud and ultimately create guardrails for those deployments. You’ll also want to be able to manage governance and ensure systems are compliant both before and after deployments. Finally, you’ll want to ensure you understand the risks that exist in your environment. In order to do that, you need to know what’s deployed and be able to understand what vulnerabilities and problems exist. In some cases that information will only be available via the deployment system - for ephemeral resources like Lambda functions that exist only for a short amount of time. You’ll want to review the code that is deployed and ensure the logs are stored for each invocation, even after a short-lived, ephemeral resource has terminated.
By capturing all this data, you can provide meaningful risk reports to decision makers who prioritize implementation of vulnerability fixes.
  • 52.
Be aware of limits - hard and soft. Each cloud service may impose limits. If you hit a limit, systems may go down - so monitor overall use. Some limits can be changed upon request. Other limits are hard limits, so you’ll need to work within those requirements. Azure trial accounts limit the number of compute resources you can create. AWS trial accounts may come with an initial limit that can be increased. 43 The cloud is scalable but in some cases not unlimited. Be aware of limits and monitor them. If you hit a limit while using a critical system, that system may go down. For large companies, your account manager may help you monitor these limits. Some of the cloud providers offer tools for monitoring if and when you will hit limits. Just be aware these limits exist and evaluate each new service to determine whether your needs fall within the limits of the system. Many times the cloud provider can and will raise the limits if you ask; you can put in a support request or ask your account manager. In other cases, the limits are hard limits, because increasing them would create excessive cost or performance problems for you or the cloud provider.
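The "monitor overall use" advice above can be sketched as a simple threshold check. The limit values here are hypothetical, not actual provider defaults:

```python
# Sketch: monitoring usage against service limits (quotas) so you get a
# warning before a launch fails. Limit values are hypothetical examples,
# not actual cloud provider defaults.
LIMITS = {"vcpus": 64, "vpcs": 5, "elastic_ips": 5}

def limit_warnings(usage: dict, threshold: float = 0.8) -> list:
    """Flag any quota whose usage is at or above `threshold` of its limit."""
    warnings = []
    for name, limit in LIMITS.items():
        used = usage.get(name, 0)
        if used >= threshold * limit:
            warnings.append(f"{name}: {used}/{limit} used - request an increase")
    return warnings

print(limit_warnings({"vcpus": 60, "vpcs": 2, "elastic_ips": 5}))
```

Alerting at 80% rather than at the limit itself buys time for a limit-increase request to be approved before a critical launch fails.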
  • 53.
Cloud Provider Limits 44 AWS: AWS Service Limits, AWS Limit Monitor, Trusted Advisor. Azure: Azure Limits. GCP: Google Cloud Quotas. Author: Teri Radichel © 2019 2nd Sight Lab. Confidential Azure: https://docs.microsoft.com/en-us/azure/azure-subscription-service-limits AWS: https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html Google Quotas: https://cloud.google.com/docs/quota
  • 54.
To Summarize. Cloud architectures are different - security needs to adapt. Some aspects of security become the cloud provider’s responsibility. The shared responsibility model defines your security responsibilities. Make sure your contract clearly states cloud provider responsibilities. You lose some control, but you gain some powerful new capabilities. We’ll be covering ways to secure cloud architectures throughout the class. 45 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential In summary - the cloud is different! Security tools and practices used in the cloud need to adapt to cloud architectures. Understanding the shared responsibility model for each cloud provider is key. Make sure your contract clearly delineates responsibility and liability, and ensure you are securing your part. Although you lose some control in the cloud, you may be able to shift some liability. You will also gain powerful new tools.
  • 55.
    Introduction to automation and infrastructure as code
  • 56.
    Infrastructure as code
    Creating resources using code instead of clicking buttons. This quick overview is for those who are not familiar with the term. We will also discuss why this matters in the context of security. The best way to explain infrastructure as code is via some examples. The related lab will make sure everyone's environment is working. We'll test it out by running some code to create some cloud resources.
    One of the first things you'll want to understand when thinking about security for IaaS clouds (and any cloud where possible) is the concept of infrastructure as code. What this means is that we'll be writing code to create resources instead of clicking buttons. Think about the first day you got a new laptop or computer. You probably had to log in and click a lot of buttons to get it set up just the way you want. It's the same in the cloud. You create a new virtual machine. You could log in, manually deploy software, and click a lot of buttons. Instead, we want to write code to deploy and configure that virtual machine. In fact, we can configure the networking, storage, databases, and pretty much anything on a typical IaaS cloud using code instead of button clicking. The following will demonstrate how this works.
  • 57.
    Create an EC2 Instance by clicking buttons
    Here's how you create a new virtual machine on AWS: log in to the AWS Console at https://aws.amazon.com and type EC2 in the search box.
    First let's click some buttons. Log into AWS, type EC2 in the search box, and choose the EC2 service. As a reminder, this is the AWS virtual machine service. Remember to choose the region in which you want to create your resource. In our labs we will always use us-west-2.
  • 58.
    Launch an EC2 Instance
    Click the Launch Instance button.
    Click the blue button to launch an EC2 instance.
  • 59.
    Choose an AMI (Amazon Machine Image)
    You can just click the first blue button for now. We'll talk about AMIs later.
    Choose an AMI. Old-school AWSers from Seattle might pronounce this "ah-mee," but a lot of people now pronounce it A-M-I. Some people, like @QuinnyPig on Twitter, have an ongoing debate on this matter. Your instructor may say it one way or the other, but both are acceptable. The first Amazon Linux AMI in the list will work just fine.
  • 60.
    Choose a size (make sure it is "Free tier eligible")
    Next choose a size. In this case we want to choose a size in the "free tier" so you won't get charged, as long as you stay under the AWS time and usage limits of a free trial account. The first free-tier-eligible instance type in the list will work just fine.
  • 61.
    Configure details - use defaults
    Uses the default networking. Assigns a public IP. Shared tenancy - dedicated will cost a lot more!
    Leave the defaults on the configure instance page. Note that tenancy is shared. Don't choose dedicated unless you want to pay a lot of money! A dedicated host is an anti-cloud pattern where you get a server all to yourself, and it will cost a lot more. Most organizations will want to disallow this option so someone doesn't choose it by mistake.
  • 62.
    Use default storage
    Note that you could add additional virtual drives (EBS Volumes). You can also change drive settings and size.
    This is the page where you can choose an "EBS Volume," which is just the Amazon way of saying "virtual hard drive." You can just use all the defaults.
  • 63.
    Create a "Name" tag
    The Name tag is a special tag that will show up in resource lists.
    On the tags page enter a special tag. In the Key field put "Name" - using this key will cause the name to show up in the list of instances, as you'll see on an upcoming slide. In the Value field put whatever you want. In this case the value is "Lab 1".
  • 64.
    Select an existing security group
    The default security group is selected here, which doesn't allow any inbound traffic from outside the group. We'll fix that in the upcoming lab.
    In this slide we'll just choose the existing default security group. However, the default security group will not allow any inbound traffic from outside the group. In the lab we'll create a new security group instead.
  • 65.
    Review your settings and launch your instance
    Review your settings and click Launch.
  • 66.
    After you launch - choose a key pair
    Since this is a new account, select the option to "Create a new key pair." Make sure you download your key pair and put it in a safe place. This SSH key is essentially a password to log into this EC2 instance remotely. Anyone who has it can use it. If you lose it, you can't log in.
    After the Launch button is clicked you can choose a key pair. An EC2 key pair is an SSH key that allows you to log into an Amazon Linux instance. Choose the option to create a new key pair. SSH keys are passwords and should be treated as such. Also, if you lose this key, you won't be able to log into this instance again - so put it somewhere you'll remember it! Click Launch Instance.
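As an illustration of what the downloaded key is used for (the key file name and IP address below are placeholders for your own values), logging in looks like this:

```shell
# Sketch: connect to the new instance using the downloaded key pair.
# lab-key-pair.pem and 203.0.113.10 are placeholders for your values.
cat > connect.sh <<'EOF'
#!/bin/bash
# SSH refuses to use private keys that other users can read
chmod 400 lab-key-pair.pem
# ec2-user is the default login name for Amazon Linux AMIs
ssh -i lab-key-pair.pem ec2-user@203.0.113.10
EOF
bash -n connect.sh && echo "syntax OK"   # check syntax without connecting
```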
  • 67.
    Launch your instance
    The Launch Status screen has a link to the instance being launched. Note the value starting with i-xxxxxxxxxxxxx - that's your instance ID. Click your instance ID.
    After you click the Launch Instance button you'll see that your instance is launching. You can click the link with the value in the format i-xxxxxxxxxxxx. This is your instance ID. We'll be using instance IDs in the lab; they uniquely identify an instance in the cloud.
  • 68.
    Monitor the instance status checks
    Wait for the instance status to change from initializing to ready. When it's ready, the status will change to indicate the status checks have passed.
    Next you'll see the list of EC2 instances in your account. As explained earlier, by adding the Name tag "Lab 1", this name now appears next to our instance ID in the instance list. Notice that the status checks say "initializing" at the top. You'll need to wait until the two status checks have passed before you can log into your instance.
  • 69.
    Scroll down to view instance details
    Here you'll see instance details like: public IP address (184.72.125.71), private IP address (172.31.46.77), security groups, AMI ID, and key pair name.
    Scroll down to see the details of your instance. Take a look at the various properties; they should match what you selected.
  • 70.
    How does automation work in the cloud?
    Almost everything you can do in the AWS console can be done via an API call. Azure and GCP have some automation as well, though not as much. The cloud providers offer tools to help with automation. Instead of clicking buttons to create an EC2 instance, we can run code. Code can be checked into source control to track changes and versions. The code gives us a repeatable, automated process.
    That was a lot of button clicking! What if we wanted to automate this in the cloud? For every button we clicked, there's an API (application programming interface) we can call instead, using code to perform the same actions. AWS probably has the most robust API support, but Azure and Google are constantly adding features to catch up. If we can write code to create our instance, then that code can be checked into source control. You'll get to use a source control system called Bitbucket in the labs. Source control allows developers to store code, including different versions, in case the code history needs to be reviewed or code needs to be rolled back to a prior state. Source control systems track who made what change. By checking code into source control, deployments can be tested in advance, if the code is written correctly. This reduces errors during deployments and makes deployments more secure and repeatable.
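As a small sketch of the console-to-API mapping (the instance ID is a placeholder): the Stop button in the console calls the EC2 StopInstances API, and the same API can be called from a script.

```shell
# Sketch: every console button maps to an API action. Stopping an instance
# in the console calls the same StopInstances API as this CLI command.
cat > stop-instance.sh <<'EOF'
#!/bin/bash
# i-0123456789abcdef0 is a placeholder instance ID
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
EOF
bash -n stop-instance.sh && echo "syntax OK"
```

Because the action lives in a script rather than a button click, it can be checked into source control and reviewed like any other code.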
  • 71.
    AWS Tools
    AWS offers a command line interface (CLI), which we will use in the labs. In addition, AWS offers SDKs in many different programming languages to call AWS APIs. You'll need the secret key and access key ID you created in the setup instructions.
    APIs can be called by many different tools on AWS. Software development kits (SDKs) exist for many popular programming languages. In class we're going to use the AWS CLI (command line interface). This tool allows you to run scripts at a command prompt to deploy resources in the cloud. We pre-installed the AWS CLI on a cloud instance so you don't have to install anything on your own laptop or run our lab code there. The code is designed to run on the instance and should only be run there, for security reasons as well; we can't guarantee that all the code we included is secure, so it's best to restrict it to the cloud instance. However, many developers run the CLI on their own machines. If you want to do that, you can download and install the CLI by following these instructions: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html
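As a sketch, pointing the CLI at your access key ID and secret key looks like this (the key values below are AWS's published documentation examples, not real credentials - never commit real keys to source control):

```shell
# Sketch: configure the AWS CLI non-interactively with placeholder keys.
cat > configure-cli.sh <<'EOF'
#!/bin/bash
aws configure set aws_access_key_id AKIAIOSFODNN7EXAMPLE
aws configure set aws_secret_access_key wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
aws configure set region us-west-2
EOF
bash -n configure-cli.sh && echo "syntax OK"
```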
  • 72.
    A simple CLI command to view EC2 instances
    Run this command to see all the EC2 instances in your account. You'll see the one we just created, with the same ID (i-xxxxxxxxxxx).
    Here's a simple command to view your EC2 instances using code instead of clicking buttons.
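A sketch of such a command (the `--query` JMESPath expression is one illustrative way to trim the JSON output):

```shell
# Sketch: list all EC2 instances in the account with a few useful fields.
cat > list-instances.sh <<'EOF'
#!/bin/bash
aws ec2 describe-instances \
  --query "Reservations[].Instances[].{Id:InstanceId,State:State.Name,IP:PublicIpAddress}" \
  --output table
EOF
bash -n list-instances.sh && echo "syntax OK"
```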
  • 73.
    Create an EC2 instance from the command line
    Now write a command to create an instance, but save it in a file.
    This script can be saved to a file and checked into source control.
  • 74.
    Run a script to create an EC2 instance
    Execute the script you created to create the instance. The instance will be created and show up on the EC2 Dashboard.
    Now we can run the script instead of running the command directly.
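A sketch of the kind of script the slides describe (the AMI ID and key pair name are placeholders for your own values):

```shell
# Sketch: save a run-instances command in a script file, the way the lab does.
# ami-0123456789abcdef0 and lab-key-pair are placeholders.
cat > create-instance.sh <<'EOF'
#!/bin/bash
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name lab-key-pair \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=Lab1}]'
EOF
chmod +x create-instance.sh
bash -n create-instance.sh && echo "syntax OK"   # validate without launching
```

Running `./create-instance.sh` in a configured account would launch the instance; the file itself is what gets checked into source control.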
  • 75.
    AWS CLI Reference
    Lists all CLI commands. Drill down from AWS to EC2, then scroll down to run-instances. Click it for details, then scroll down to find the parameters for the command. https://docs.aws.amazon.com/cli/latest/reference/
    For more information about all the commands available in the CLI, check out this reference page: https://docs.aws.amazon.com/cli/latest/reference/
  • 76.
    AWS CloudFormation Templates
    Use templates to define resources. YAML or JSON. Separate definition from execution. Idempotent. Built-in dependency management. Deployment logging. Parameters and Outputs.
    Another option on AWS is to deploy CloudFormation templates. As the name implies, a template defines resources and can be used to deploy resources that match the settings in the template. Templates can be written in YAML or JSON - defined file formats for writing a file that matches a particular specification. Once you understand the rules they aren't so bad, but it takes a bit to get used to them. JSON was the initial format, but now a lot of developers prefer YAML because they find it simpler. The template in this slide is written in YAML. Some of the benefits of CloudFormation: Separate definition from execution - the definition of something should not be altered when executing code to create it. By using a CloudFormation template, a deployment system can be created which deploys, but does not alter, the template that defines what to deploy. This is an important distinction for security reasons: an application team may be responsible for a particular template, and the team managing the deployment systems should not be able to change it. Idempotent is a fancy way of saying templates are re-runnable: if a deployment fails halfway through, you should be able to run the template again and get the expected results.
  • 77.
    Built-in dependency management - because AWS created the template language and knows all the resource dependencies, it will manage most of the work of determining when to wait before proceeding with the next resource in the template. Unfortunately it won't do this if you break your templates into multiple scripts; in that case you need to manage some dependencies yourself. Deployment logging - we'll look at some of this later in the labs. You'll see that CloudFormation logs the deployment events, inputs, outputs, and the template that was deployed. There's also a newer feature called Drift Detection which tells you if a resource has been changed and is out of sync with the template that deployed it. Parameters and Outputs - templates allow you to pass in parameters so you can use the same template in different places. For example, we give you one template in class, but everyone can use it because we pass in parameters when things need to differ in each account. Outputs of templates can be passed in as parameters to other templates. This is very useful, for example, when you create networking that is used by multiple application stacks: one networking template is deployed with an output that is used by all the other CloudFormation stacks. (A stack is a set of resources deployed by a template.)
  • 78.
    Parameters and Outputs
    Use parameters so you can use the same code in multiple places. Pseudo parameters will detect details about your environment (region, etc.). Use outputs to track information about things you have created; other templates can reference those outputs. For example, create a security group in one template and pass the security group ID into another template.
    Parameters are placeholders in templates that allow you to pass in values when the template is used to deploy resources. This lets you use the same code (template) over and over, because you abstract out the values that differ each time and pass those in when you deploy the template. Pseudo parameters automatically populate with values from your current environment. For example, if you are deploying in the us-west-2 region, a region pseudo parameter will figure that out and pass it into your code. Outputs are values generated after the template is deployed, such as a security group ID or an instance ID.
  • 79.
    Sample template - Security Group
    Notice one of the outputs is a reference to the security group. This allows us to reference this security group in another template.
    This slide shows an example of a template used to create a Security Group. A Security Group is a set of network rules that can be applied to a resource to define the traffic allowed in and out of that resource.
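A minimal sketch of such a template, written to a file the way the lab does (the resource name, CIDR range, and file name are illustrative, not the exact lab template):

```shell
# Sketch: a minimal CloudFormation template for a security group,
# with a parameter for the VPC and an output other stacks can reference.
cat > securitygroup.yaml <<'EOF'
AWSTemplateFormatVersion: '2010-09-09'
Description: Security group allowing inbound SSH (illustrative)
Parameters:
  VpcId:
    Type: AWS::EC2::VPC::Id
Resources:
  LabSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow inbound SSH from the lab network
      VpcId: !Ref VpcId
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 10.0.0.0/16
Outputs:
  SecurityGroupId:
    Description: ID of the new security group, for use by other templates
    Value: !Ref LabSecurityGroup
EOF
test -s securitygroup.yaml && echo "template written"
```

The output is what lets a second template consume the security group ID instead of hard-coding it.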
  • 80.
    Execute the code to create the security group
    The following command creates a CloudFormation "stack." We specify the file that contains our CloudFormation template. The output is the StackId for our CloudFormation stack.
    A CloudFormation stack is a group of resources deployed by a template. The command in this screenshot executes a CloudFormation template stored in the securitygroup.yaml file. The output is the stack ID for our CloudFormation stack.
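A sketch of that command (the stack name and the VpcId parameter value are placeholders):

```shell
# Sketch: deploy a template file as a CloudFormation stack.
# lab-security-group and the VPC ID are placeholder values.
cat > deploy-stack.sh <<'EOF'
#!/bin/bash
aws cloudformation create-stack \
  --stack-name lab-security-group \
  --template-body file://securitygroup.yaml \
  --parameters ParameterKey=VpcId,ParameterValue=vpc-0123456789abcdef0
EOF
bash -n deploy-stack.sh && echo "syntax OK"
```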
  • 81.
    View the CloudFormation stack
    Log into the console. Choose CloudFormation. Click your stack. The Events tab will show any errors.
    In AWS, when you navigate to the CloudFormation service, you can view the output of the command that ran the template.
  • 82.
    View Outputs and the new Security Group
    Click the Outputs tab to view the outputs. Go to the EC2 service and click on Security Groups to see your new group.
    Click on the different tabs to see information about the CloudFormation stack. Then go to the EC2 service and click on Security Groups on the left to view your new security group.
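The same outputs can be read from the CLI instead of the console - a sketch (stack name is a placeholder):

```shell
# Sketch: fetch a stack's outputs from the command line.
cat > stack-outputs.sh <<'EOF'
#!/bin/bash
aws cloudformation describe-stacks \
  --stack-name lab-security-group \
  --query "Stacks[0].Outputs"
EOF
bash -n stack-outputs.sh && echo "syntax OK"
```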
  • 83.
    Why do we want to use code instead of buttons?
    Writing code may seem more complicated than buttons at first. However, this is a pay-now-or-pay-later option. It will be faster to get up and running by clicking buttons. However, when something goes wrong, you can't quickly redeploy; you'll be reliant on the person who knows which button to click. It may also be harder to prevent unwanted actions and track who did what. You can resolve issues and redeploy faster if you invested in automation.
    Why is infrastructure as code a best practice in the cloud? It may seem easier to click buttons, and at first it will be. A good approach is to create a sandbox account to try things out manually before deploying to production-bound accounts. However, this is a pay-now-or-pay-later scenario. You may quickly lift and shift something into your cloud account and get it all working - and then your DevOps person leaves the company. Who remembers how all these systems were deployed? Perhaps you have a security incident and ransomware gets onto some of your instances. How fast can you recover? In the case of malware on your machines, do you have to try to remove it, or can you simply click a button to redeploy everything? What about tracking who made what changes on a system - how will you do that? If all changes go through an approved deployment system and must come from source control, the changes will be available in source control showing the different versions of code, who made each change, and what got deployed when. What about the next time you need to make a change that has been tested in a dev or QA environment and push it to production? How do you ensure the changes made in the test account are exactly the same as the changes made in the production account?
If your code is constructed properly, the same code used to deploy to the development account will be used to deploy to the production account with no changes. You can ensure that what was tested is what exists in production.
  • 84.
    Other benefits of code deployments
    Spot unauthorized changes when resources don't match desired state. Separate people from secrets and sensitive data. Eliminate phishing - automated systems don't click things. Deployment systems can operate in a locked-down network. Prevent human error - one of the biggest causes of security problems. Immutable infrastructure - once deployed it can't change, which limits malware.
    Some other benefits of automated deployments include the ability to more quickly spot unauthorized changes. If changes are made outside the approved deployment system, trigger alerts. This can be accomplished with services like AWS Config, which we will discuss later in class. CloudFormation also has a feature called Drift Detection which will tell you if a resource in an account differs from the template used to deploy it. With automation you can write code, instead of using people, to deploy things in sensitive environments. If set up in a completely automated fashion, people never need access to the data or secrets when deploying a system. For example, your automation could generate an SSH key that is used to log into a system from within the cloud and take an action using code. A person never needs access to the SSH key used to perform the automated action in the account, and you can have multiple checks in your deployment process to ensure someone doesn't change the code to get access to that key. Credentials are often stolen via phishing attacks. If you have automated your system in such a way that no humans have access to the credentials - and they do not exist on anyone's laptop, in memory, or in a text file - then there's no human to click on a link in an email and reveal those credentials. You can operate in a completely locked-down network if actions are taken by automated systems.
You can build the automation systems inside a closed network, perform operations via code, and never have humans connect to systems in that closed network.
  • 85.
    Of course, your build system security is also very important in this case. Using fully automated systems will help prevent human error, such as accidentally hitting the delete button on the wrong EC2 instance or pointing a database deployment at the wrong server. Finally, with automated deployments you can deploy immutable infrastructure. We'll talk about immutable infrastructure more later in class, but it basically means that once deployed, a resource cannot change. If you want to change it, you have to destroy it and create a new instance of that resource. Immutable infrastructure limits your attack surface and the ways in which malware, vulnerabilities, or misconfigurations could be deployed.
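Drift detection, mentioned above as a way to spot resources changed outside the deployment pipeline, can be triggered from the CLI - a sketch (stack name is a placeholder):

```shell
# Sketch: ask CloudFormation to compare deployed resources against the
# template that created them, then list any drifted resources.
cat > check-drift.sh <<'EOF'
#!/bin/bash
aws cloudformation detect-stack-drift --stack-name lab-security-group
aws cloudformation describe-stack-resource-drifts --stack-name lab-security-group
EOF
bash -n check-drift.sh && echo "syntax OK"
```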
  • 86.
    Let's do it!
    We created an AWS Linux AMI for you to use in class. The AMI has all the tools you need on it, including the AWS CLI. You will use this AMI to create an EC2 instance manually, then log into the EC2 instance using SSH, download the lab code, and run some automation commands. If you have time, do the bonus labs to see how this works in other clouds. If possible, make sure to create the resource group in Azure.
    In Lab 1.1 you'll be able to try out creating resources in the cloud - both manually and via automation.
  • 87.
    Lab: Intro to AWS automation
    This lab is an introduction to deploying cloud resources using automated and manual methods. It's also a chance to make sure accounts are set up correctly and lab tools are working.
  • 88.
    Governance, Risk, and Compliance (GRC)
    It's fun to think about bits and bytes and malware, but at some point we need to step back and look at the big picture: determining cybersecurity risks and responses, compliance requirements, and how to maintain cybersecurity in an organization via policies and enforcement - otherwise known as governance.
  • 89.
    Governance
    Governance is making sure people follow policies to reduce risk and loss. An organization needs to consider the risks it faces and what to do about them. Based on this assessment the organization creates policies. Then the organization needs to enforce the policies. Reporting and auditing can help determine whether policies are being followed. If policies are not being followed, this typically indicates increased risk.
    Governance means making sure people follow the rules. In a cybersecurity context, rules are created to ensure systems are compliant with standards and policies. Standards and policies are created to reduce risk by ensuring systems are built according to best practices that limit cybersecurity risk and potential exposure to vulnerabilities and malware. In the on-premises world, policies are often written in documents that few people actually read, in the author's experience. In the cloud, these policies can be transformed into technical controls and guardrails that help ensure people are following the rules at the time of deployment. Additionally, reports can be created to review configurations automatically in the cloud, detect vulnerabilities and misconfigurations, and produce reports and alerts. In some cases it may even be possible to automate remediation of the problem. More details on how to do that are provided in subsequent class modules. If a company finds that many systems are not in compliance, this can indicate increased risk for the company. Systems that are out of compliance may cause the company to fail security audits, resulting in fines or loss of business from customers who rely on those audits to prove the company is following cybersecurity best practices. Out-of-compliance systems may also indicate vulnerabilities that expose systems to potential attacks and malware.
  • 90.
    Policies
    Why do we need policies? Policies define what is and is not allowed in order to maintain security. Maintaining security minimizes risk. Companies may need to follow policies for compliance or legal reasons. Existing policies need to change to accommodate the new cloud environment: new technologies, shared responsibility.
    Policies explicitly define what is and is not allowed. By creating policies, companies document the rules that people deploying technology in the company are supposed to follow. Policies are created to reduce risk, including cybersecurity risks and risks related to costs or loss of business. In addition, policies may be created to enforce configurations that meet compliance or legal standards. Do policies matter? Documented policies are required in some cases for compliance, to obtain insurance, or for other legal reasons. If a cybersecurity incident occurs and the company is not following the policies it defined and documented, this could lead to legal scrutiny. Law enforcement may question the company, or, as in some large breaches, the CEO may be asked to testify before Congress as to why policies were not followed or enforced. If a company documented a policy but does not follow it and needs to put in an insurance claim, the insurance company may not honor the claim. Companies are sometimes also required to have policies in place to obtain contracts with other companies; not having or implementing policies required by a contract could lead to breach of contract. When a company moves to the cloud, policies need to change. Things work differently in the cloud. New services, tools, and software will be used. The typical way companies do incident handling changes. Scoping for penetration tests by internal teams or external vendors will change, as well as what needs to be tested.
The way companies handle encryption keys will likely change. IAM implementation will be different. After going through this class, those responsible for security policies will likely come up with many more aspects of security policies that may need to be updated.
  • 91.
  • 92.
    An Example
    Chris Farris (@jcfarris) works on cloud security at Turner and published a blog post about creating a cloud security policy for AWS, Azure, and Google: https://www.chrisfarris.com/post/cloud-security-standard/
    Each organization will create a security policy unique to its needs, but this blog post may give you ideas to consider when creating your own policy. In the examples shown here, we clearly need to think about different things in the cloud, such as which domains and emails can be used to create cloud accounts. The policy also takes into account concerns of the governance and legal departments.
  • 93.
    Standards and Procedures
    Standards define how the company will implement its policies. Procedures define the process by which standards will be implemented. Security teams typically have standards and procedures in place. Example: the company uses a standard OS configuration; procedures define who will create the base image and how to deploy it. These too will need to be adapted to work in the cloud.
    Security standards define how various types of systems must be configured. For example, a company may have a standard that it only uses the RedHat Linux operating system, with a specific configuration. A procedure may define how operating systems are configured and deployed. That procedure will almost certainly need to change in the cloud: the company is now dealing with virtual machines deployed on a cloud platform instead of physical machines inside a data center. Amazon Linux comes with AWS cloud tools built in, which developers may want to use; it is hardened, and Amazon releases patches very quickly. Do you want to revise your policy to allow developers to use Amazon Linux? This is just one example. Other procedural considerations include who will create the base images, and how. How much will developers be allowed to change the base image? Can they install new software? Can they create a new cloud image from that base image that incorporates their software on top of it? These are the types of questions that may cause security policies and procedures to change.
  • 94.
    What needs to change in the cloud?
    Who approves which projects can go to the cloud? What types of data can go to the cloud - PCI, PII, HIPAA, GDPR? Who will manage cloud accounts? IAM? Networking? Encryption keys? What is the review process for an application moving to production? What on-premises data can cloud systems access, and vice versa? How will applications be deployed, and by whom? How will incidents be handled? How will you maintain chain of custody?
    These are just a few questions to ask when you move to the cloud; you will likely need to make changes for each of them. Those responsible for security policies will think of many more as you go through this class and review your existing security policies. Organizations typically need a unique policy that is relevant to their systems, compliance, and legal requirements.
  • 95.
    What may ormay not change Which operating systems are allowed? What is your patching strategy? What you do for PCI compliance (e.g. antivirus, pentest) What will be encrypted and how? How will data be classified? Data loss prevention (DLP)? What security products will you use? Acceptable open source licenses. 95 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential These are some things that may or may not change - we’ll explain your options, pros, and cons for each of the following throughout class. Operating systems: You may try to use the same standard operating systems you use on-premises. Developers may want to use cloud specific operating systems instead that come with built in tools that work with the native cloud platform. These tools make it easier to deploy systems in the cloud. The tools and agents provided by the cloud provider may also seamlessly integrate with the cloud platform. You may opt to use the cloud provider operating system and tools - just make sure you understand their capabilities and what they can access. Patching: Patch running systems or redeploy? Compliance: Will you use the same or different tools and processes to handle vulnerability scanning, antivirus, and cloud pentesting? Even if you use some of the same tools you may want to install them from the cloud provider marketplace. Some companies have used compensating controls in place of antivirus - it depends on your auditor whether or not this will be approved. Encryption: Will you classify data or simply encrypt everything? DLP: Will you classify data and how? Will you deploy a DLP solution? Security products: Will you use the cloud native security products or products that
  • 96.
you are used to? Software licenses: Do you have license restrictions on open source products? Do those policies allow developers to use cloud software development tools that are provided by the cloud provider and designed to work with their platform?
  • 97.
Cloud Patterns Design Patterns - a software construct. Well-designed patterns for common problems. Create pre-approved patterns that developers can use. If people use the pre-approved secure patterns, they get to production quickly. If they choose something else that’s ok - it will just take longer as they need to go through an approval process. 84 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential Design Patterns is a well-known software construct for creating software that follows well-designed patterns for solving common problems. The same concept can be applied to cloud infrastructure as code. Create pre-approved patterns that developers can use. In fact, by providing pre-approved templates, developers can deploy new systems without too much scrutiny or delay. If they use the pre-approved patterns, they get to production quickly. If they choose something else that’s ok - it will just take longer. Other questions to consider: Who will define the cloud patterns? How will they be managed, used, and monitored? How will these patterns be implemented and deployed for each new project? How and when will the patterns be adjusted to ensure the company can remain innovative and move quickly?
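The fast-path/review-path decision described above can be automated in a deployment pipeline. The sketch below is purely illustrative - the template format, the `pattern_id` tag, and the pattern names are assumptions, not part of any cloud provider's API:

```python
# Illustrative gate for pre-approved infrastructure patterns.
# Assumption: templates are parsed into dicts and carry a "pattern_id" tag
# identifying which (hypothetical) pre-approved pattern they were built from.

APPROVED_PATTERNS = {
    "web-app-v2",       # e.g. hardened web tier behind a load balancer
    "batch-worker-v1",  # e.g. isolated batch-processing pattern
}

def needs_review(template):
    """Return True if the deployment must go through manual approval."""
    pattern_id = template.get("tags", {}).get("pattern_id")
    return pattern_id not in APPROVED_PATTERNS

# A deployment tagged with a pre-approved pattern goes straight to production:
fast_path = {"tags": {"pattern_id": "web-app-v2"}}
# Anything else is routed to the (slower) security approval process:
custom = {"tags": {"pattern_id": "my-custom-stack"}}
```

A check like this could run in CI before any deployment, so developers who stick to the approved patterns never wait on a human reviewer.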
  • 98.
Exceptions Exceptions happen. Always. Be prepared to handle them. An exception comes down to a risk assessment. What will your exception process look like for cloud deployments? How will you document - and track - exceptions? If an exception leads to a breach, will you know who approved it? How will you monitor the risk incurred by exceptions? Will they have time limits? 85 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential Exceptions will happen. Often DevOps or security teams want to set up stringent rules and enforce that everyone must follow those rules. Unfortunately, the day will come when an exception needs to be made, for whatever reason, to allow a less-than-secure scenario. The most important thing you can do is be ready for this exception and determine how it will be handled, who will approve it, and how you will document and manage it going forward. Exceptions should be tracked in such a way that they can feed into your overall risk reporting. When an exception occurs, can you track who allowed that exception, so in the case of a breach you can determine who was responsible? Can you give an exception a time limit and track it in a way that you will be able to go back and remember that it occurred and time is up? Who will communicate this and how will you go about getting it prioritized and fixed - before that time is up?
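One way to make exceptions with time limits actionable is a simple exception register that can be queried for overdue entries and fed into risk reporting. This is a minimal sketch - the field names and entries are made up for illustration:

```python
# Sketch of an exception register with time limits. Each entry records who
# approved the exception, why, and when it expires, so expired exceptions
# can be surfaced in risk reports. All field names and data are illustrative.
from datetime import date

def expired_exceptions(register, today):
    """Return exceptions whose time limit has passed as of 'today'."""
    return [e for e in register if e["expires"] < today]

register = [
    {"id": "EX-001", "approved_by": "ciso", "reason": "legacy TLS 1.0 endpoint",
     "expires": date(2019, 6, 30)},
    {"id": "EX-002", "approved_by": "risk-board", "reason": "shared admin account",
     "expires": date(2019, 12, 31)},
]

# Overdue exceptions as of a given reporting date:
overdue = expired_exceptions(register, today=date(2019, 9, 1))
```

Run against the register on a schedule, a query like this answers the slide's questions directly: who approved each exception, and which ones are past their time limit and need to be prioritized and fixed.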
  • 99.
Change Change is the one constant in the cloud. As soon as security creates a policy, the cloud provider will make a change. How will you monitor for change? What will you do when it occurs? One of the most challenging times of year is right after AWS re:Invent. Developers want to try all the new shiny things...how will this be handled? 86 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential One thing is for sure in the cloud: things will change all the time. Security teams need to be prepared for this and consider how they will deal with change in the cloud. When the cloud provider offers a new service, how long will it take for developers to be able to try it out? If it takes a long time, people may get frustrated. Will you have a sandbox account? Will you explain the ramifications of new services and potential security implications to all developers? Will a select group of people be allowed to test and evaluate the new service? Who will that be and how long will it take? Or will you allow new services in the development account and monitor for anomalies related to cost and security? It’s probably best to consider these questions up front and communicate them to developers so they understand the reason and the process for evaluating new services - especially after big cloud conferences where a cloud provider releases a bunch of awesome new tools that everyone wants to try out! By the way, the good thing about the cloud is that anyone can create an account. When developers could not use services at a particular company, the author explained to the developers that they could easily go out and create an account to try out the services while they were waiting - and most of the time there’s minimal to no expense just to try them out.
  • 100.
New cloud providers and services (besides IAAS) Which cloud providers and services will developers be allowed to use? How will this be enforced? How will you monitor for new services and features (they come frequently)? Who decides if and when developers can use a new service? How will new services be vetted? Will you develop unique standards, policies, and procedures for use? 87 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential Besides IAAS cloud platforms, developers may want to use a myriad of other cloud providers. DropBox, Google Docs, Evernote, DocuSign, SumoLogic, Datadog, Nagios, Loggly, Salesforce, OpenShift...where is your data going? How will you manage requests and communicate policies to developers signing up for new services? How will you monitor for service usage? How will these services be vetted? We will look at some ways to vet new services in the following sections.
  • 101.
Risk Information security risk management (ISRM) Process of managing risks associated with the use of information technology. Considers Confidentiality, Integrity, and Availability (CIA) of assets. If an event associated with a risk occurs, it can negatively impact the business. Risk considers business losses should an event occur. Security people need to present risks to executives accurately. Risk is ultimately a business decision - and is the responsibility of top executives. 88 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential Why do businesses care about risk? Businesses look at events that could happen to a business and consider the risk of that event occurring because it will impact the business by generating losses. Losses could be in the form of lost revenue due to downtime or loss of customers due to a negatively impacted brand. Costs may be incurred such as employee time spent dealing with a breach instead of building the business, legal expenses due to lawsuits, fines, and other negative consequences of a security incident. The company stock price may drop and insurance costs may rise. When reviewing new technology, policies, procedures, and standards, a company is really trying to determine the appropriate steps to minimize risk and, ultimately, losses.
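The "business losses should an event occur" can be quantified with the standard annualized loss expectancy (ALE) calculation used in information security risk management - this is the generic ISRM formula, not something specific to this course, and the figures below are made up for illustration:

```python
# Standard quantitative risk formulas from ISRM:
#   SLE (single loss expectancy) = asset value * exposure factor
#   ALE (annualized loss expectancy) = SLE * ARO (annual rate of occurrence)
# All dollar amounts and rates here are hypothetical.

def single_loss_expectancy(asset_value, exposure_factor):
    """Expected loss from one occurrence of the risk event."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, aro):
    """Expected loss per year, given how often the event occurs annually."""
    return sle * aro

# A $1M asset where one incident destroys 25% of its value:
sle = single_loss_expectancy(asset_value=1_000_000, exposure_factor=0.25)
# An event expected once every two years (ARO = 0.5):
ale = annualized_loss_expectancy(sle, aro=0.5)
```

A number like the ALE gives executives something concrete to weigh against the cost of a mitigation, which supports the point that risk is ultimately a business decision.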
  • 102.
Risk as an opportunity Fire Doesn’t Innovate - by Kip Boyle. Former CISO sees risk as an opportunity. If you manage risk you can avoid breaches. While others deal with the consequences… Your company can thrive! 89 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential Another way to look at risk is as an opportunity. By reducing risk, a company will not waste time dealing with data breaches. While competitors are dealing with all the negative consequences on the previous slide, companies can thrive by eliminating or mitigating risks that are causing losses for competitors. Kip’s book is available on Amazon and may be free to students of this class - just ask! https://www.amazon.com/Fire-Doesnt-Innovate-Executives-Practical/dp/1544513194
  • 103.
What is the risk of moving data into the cloud? Some people believe moving to the cloud is a massive risk. Someone else might see your data! Other people are managing the network and hardware. Data on a shared host may be accessed by other customers on the host. How are these risks managed in the cloud? Technology, assessments, contracts, and monitoring. 90 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential Many companies fear moving to the cloud for security reasons. They believe that the cloud poses a massive risk. Why is this? We have a section later on cloud threats, but some of the key concerns include the fact that the cloud provider might see your data. Another issue is loss of control and the ability for internal employees to have access to and manage the systems. Additionally, data on a shared host may be accessed by other customers. These are all valid concerns; however, companies must weigh these concerns against other impacts to the business and other internal risks. Just as with our historic examples, these risks are managed via technology, assessments, and contracts.
  • 104.
Consider history: Frame Relay In the early 90s, Frame Relay became a thing. At first companies were skeptical. Companies formerly used 100% dedicated physical lines. They switched over to shared physical lines with logical separation. Ultimately: cheaper, economies of scale, trust the network provider. Technology, a risk assessment, a business decision, and contracts. 91 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential For those who remember, Frame Relay came out in the early 90s. At the time, a lot of businesses had dedicated leased lines between different locations to send data back and forth. By using fixed lines, the companies could be sure no one else was connected and viewing that data. At some point, the large telecommunications companies started offering frame relay. Instead of a dedicated line that only one company could use, the telecom companies wanted to leverage economies of scale and have multiple companies share the same lines for a lower cost. These shared lines would have data logically separated via technology developed by the telecom companies. Initially companies may have been skeptical, but over time they started using frame relay because setting up a leased line between every location was cost prohibitive and not always feasible. Although the risk might be higher than using a dedicated fixed line, the cost savings, contracts, and trust in the network provider outweighed the risk.
  • 105.
More history: E-commerce In the late 90s: No way you were going to put your credit card in a web page. Banks would never approve the transactions, people said. A guy started selling books online… Someone figured out how to encrypt those transactions. Public Key Infrastructure was created to facilitate trust in third parties. The business financial opportunities far outweighed the security risks. Technology, a risk assessment, and a business decision. 92 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential Another technology that initially faced a lot of pushback was e-commerce. People thought that banks would never approve transactions sent over the Internet because the risk of fraud and loss of funds would ultimately be too great. Ironically, one of the original e-commerce pioneers, Jeff Bezos, is also the founder of Amazon, which runs AWS, the first major IAAS cloud provider. As everyone knows, e-commerce ultimately succeeded. New technology allowed companies to encrypt transactions via certificates validated by a third party. Banks determined the financial upside from accepting e-commerce transactions was greater than the risk and potential loss. The solution was a combination of technology to mitigate the risk, an assessment of the risk versus the potential business upside, and a business decision to accept the risk.
  • 106.
Still skeptical? Questions to think about. What is the risk of moving data to the cloud? Is the risk less than or greater than risks faced on-premises? What can be done to mitigate that risk? Is the potential financial upside greater than the downside risk? Have you performed a risk assessment? Are your standards, policies, and procedures better than the CSP’s? Does everyone in your organization follow your policies? 106 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential For those who are still concerned with cloud risk, it may be that the risk of moving to the cloud is too great for your particular organization, or for certain applications. However, before drawing this conclusion, make sure you evaluate the following: What is the actual risk of moving to the cloud? Is it the risk that another company may see your data? Is that risk greater or less than the potential that your internal system may be breached? Is the risk due to the fact that the cloud provider employees may see your data? Is there a reason why you trust your own employees more than the employees of the cloud provider? Do you have more stringent hiring processes? (You might!) But how long did you know your current employees before you hired them? How well do you know them now? Have you reviewed the cloud provider’s policies and procedures? AWS has very well documented cloud policies in their Overview of Security Processes whitepaper - are your security processes better? https://d1.awsstatic.com/whitepapers/Security/AWS_Security_Whitepaper.pdf Even if your security processes on paper are better, does everyone actually follow them? The author of this class has worked at many companies as a contractor, an employee, and via her business, and has not worked at a company whose policies matched those of AWS and were actually followed. Amazon has been extensively
  • 107.
audited, including by the US government, to prove that they actually follow their published policies. In the end, the potential upside for the business needs to be compared to the potential losses. You can estimate potential losses based on hypothetical scenarios as to what risk events may occur, and also evaluate actual events to determine the potential loss for the business. We’ll look at some cloud threats shortly.
  • 108.
How cybersecurity may benefit from cloud Examples of benefits of using a public cloud like AWS: Built-in inventory management and the ability to enforce data classification. The cloud is a huge configuration management platform - if used properly. Built-in logging with scalable storage. Automated deployments, security checks, failover, incident response. Easier to implement segregation of duties. App-specific networking rules and just-in-time administrative access. 94 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential The cloud definitely brings some new risks, but it also offers security teams some benefits. The thing is, you’ll need to understand and take advantage of them! The cloud has built-in inventory management. You can simply run a query against the cloud platform and get back a list of all your servers in the cloud. If you leverage the cloud automation and create a well-defined deployment process, you’ll be able to inventory the software and systems used throughout your organization. All the cloud services have built-in logging that can be seamlessly sent to cloud-native platforms. The storage is also scalable, so you don’t have to decide in advance how big the servers need to be to store the logs. So many things can be automated in the cloud. It takes time to learn and implement the automation, but by investing in automation companies can reduce human error, prevent incidents, auto-remediate problems, and spend less time on manual repetitive tasks, instead focusing on things that help the company be more efficient and profitable in the long run. It’s easier to implement segregation of duties in the cloud by creating separate accounts and fine-grained IAM rules. Segregation of duties can ensure two or more people need to be involved before a risky action can take place. The cloud makes it easier to create application-specific networking.
Using the concept of security groups, which exist in all major IAAS cloud providers, you can limit which apps can communicate. In the case of a data breach, exposure can be limited.
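The inventory query mentioned above might look like the following. The response shape mirrors what an EC2 DescribeInstances call returns (in practice you would fetch it with boto3's `ec2.describe_instances()`); the sample data here is fabricated so the parsing logic can run without cloud credentials:

```python
# Sketch of built-in cloud inventory: query the platform's API and flatten
# the response into a server list. The dict below mimics the structure of
# an EC2 DescribeInstances response; the instance IDs are made up.

def list_instance_ids(response):
    """Flatten DescribeInstances-style reservations into instance IDs."""
    return [
        inst["InstanceId"]
        for reservation in response.get("Reservations", [])
        for inst in reservation.get("Instances", [])
    ]

sample = {
    "Reservations": [
        {"Instances": [{"InstanceId": "i-0abc"}, {"InstanceId": "i-0def"}]},
        {"Instances": [{"InstanceId": "i-0123"}]},
    ]
}
```

Contrast this with on-premises environments, where producing an accurate server inventory often means agents, scanners, and spreadsheets rather than one API call.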
  • 109.
When the cloud won’t help cybersecurity. Too much access - developers have full control. No automation - button clicking - people have access to data. No oversight to prevent security flaws; lack of standard assessments. People untrained in network security implementing networks. Deploying cloud services with no understanding of security controls. No monitoring. Monitoring, but no remediation. 95 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential Although some aspects of the cloud may benefit a company, rushing to the cloud without proper assessments and controls creates excessive risk. Some companies have let developers rule in the cloud. Developers are generally not trained in security and networking and do not understand the risks posed by poor implementations. In other cases, oversight of the use of new technologies and features has been thrown out the window in the name of speed, innovation, and modern technology. The same fundamental security principles apply inside the cloud that apply outside the cloud. Companies that fail to grasp this may experience a nasty breach. The benefits of the cloud are not realized by companies that do not leverage automation to help enforce compliance, governance, and proper architectures to reduce risk. We’ll look at some specific examples near the end of the day.
  • 110.
Risk assessments for cloud providers and services Companies should establish a process for bringing on new cloud providers. Each new cloud provider needs to meet the company’s security requirements. Security and privacy requirements include legal and technical concerns. To prevent shadow IT, include financial and procurement teams. Establish a standard risk assessment process. Define roles and responsibilities. Measure and track risk acceptance and exceptions. 110 When allowing people to use new cloud services, you’ll want to have a process for evaluating the security of each cloud provider. Make it easy for individuals to request new services, and be clear about what the process and requirements are. If they understand security and the reasons behind your process and decisions, it will be easier to enforce the policies. If your instructions are clear and straightforward, people with good intentions will comply. Shadow IT is generally a result of a lack of communication, unwieldy processes, and rules that are hard to follow. Shadow IT also occurs when people do not understand or believe the risk exists. This is where proper training will help. Your process should involve finance, technical, and legal teams. Often people who don’t know about or are subverting the process will submit a request to a procurement department or include the purchase on an expense report. At this point, the financial teams need to be aware of what is and is not a cloud service, or have someone to ask if they are not sure. These teams can ensure that whoever is purchasing the service has submitted a request to use it through the proper channels. Next, have the legal team and security team work together. The security team may ask for information from the vendor, or ask the person who wants to use the service to get the information from the vendor so the security team can review it. Make it clear what is required and why.
If your process is consistent, people will deem it fairer than if you seem to have random requirements or say “it depends” a lot. Whatever your security requirements are, make sure they make it into the contractual
  • 111.
obligations for that vendor. If the vendor is supposed to back up the data, then make sure that is in the contract or a related document. You may have a standard list of requirements you can add as an addendum to contracts. If any obligations fall back to you, as a customer, because they could not be negotiated in the contract, make sure this is clear to the users of the system. You may want to maintain a database of these assessments and the overall associated risk of all your cloud vendors as a whole, and be able to produce a report showing that risk and any outstanding exceptions to your standard security policies.
  • 112.
Cloud Provider Audits When you can’t inspect the cloud provider yourself, you can look at third-party audits. A SANS survey found the following audits were most commonly requested and reviewed when assessing cloud providers. 97 When assessing third-party cloud providers, it is not always possible to get into their data center or do a penetration test against their systems directly. Instead, you can ask for evidence that they are following best security practices by evaluating third-party assessments, audits, and penetration tests performed on the vendor’s environment and systems. We’ll talk more about penetration testing on day 5. In terms of audits, the most common types of audits requested from cloud providers are shown in the diagram on the slide, which comes from the 2019 SANS Cloud Security Survey. https://www.sans.org/reading-room/whitepapers/analyst/2019-cloud-security-survey-38940 In many cases the cloud providers already have this documentation and can provide it quickly. Using a common framework to assess third-party vendors can help ensure consistency when evaluating the controls and direct access is not possible. Audits are not perfect but can provide some reassurance that the vendor understands and has taken the time to implement security best practices.
  • 113.
Lab: Intro to Azure automation 98 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential This lab is an introduction to deploying cloud resources using automated and manual methods. It’s also a chance to make sure accounts are set up correctly and lab tools are working.
  • 114.
Compliance Compliance typically means adherence to some law or regulation. Not doing so could result in fines or loss of business. Required in some industries and jurisdictions: PCI if processing credit cards, HIPAA if processing health care data, GDPR if storing data of European citizens. SAAS providers are getting SOC 2 compliance to attract customers. 99 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential Compliance exists because companies experienced major problems in terms of monetary losses, fraud, privacy issues, or data breaches. If too many problems occur, the government or industry regulatory bodies step in and create rules companies must follow; otherwise they will face fines or loss of the ability to do certain types of business. In some cases, compliance and audits exist to prove a company is following best practices. By showing that it follows best security practices, an organization may win new business contracts. Many SAAS providers are now trying to obtain SOC 2 compliance for this reason.
  • 115.
Example: PCI compliance 100 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential The Payment Card Industry Data Security Standard (PCI DSS) is a way to evaluate whether or not a company is following best practices in relation to accepting and handling credit card data. This standard applies to any company that accepts credit card payments. https://www.pcisecuritystandards.org/ According to WorldPay, “Between 1988 and 1998, Visa and MasterCard lost $750 million due to credit card fraud.” The companies defined a security standard and a method for evaluating companies to see if they met those standards. If a company fails to meet these standards, it may be denied the ability to process credit card transactions. https://www.vantiv.com/vantage-point/safer-payments/history-of-pci-data-security-standards
  • 116.
101 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential This is an example of some of the PCI requirements. As you can see, it is basically a checklist companies need to follow if they want to process credit cards. These same requirements apply for companies that want to process credit cards using systems hosted in the cloud. Any type of compliance a company must or hopes to meet outside of the cloud will still apply when systems are moved to the cloud.
  • 117.
Compliance does not make a company secure Compliance is a set of standards required by some regulatory body. It is often a minimum requirement, and may only cover a certain scope. It may be concerned with particular aspects of data protection. It contains some best practices, but may not be comprehensive. Regulations can’t be updated fast enough to keep up with new threats. However...without compliance, some companies would do nothing. 102 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential Many companies that achieved compliance requirements, including PCI and others, have experienced data breaches. How can this be? Compliance requirements are a set of best practices, but are often a minimum. Compliance is good, because unfortunately without compliance some companies would do nothing, but it is often not enough. The reason compliance doesn’t stop data breaches is that in many cases, compliance and regulations can’t be adjusted fast enough to keep up with evolving threats. Additionally, compliance is often scoped to a subset of an organization’s systems that are related to the particular compliance being obtained.
  • 118.
Compliance is a shared responsibility in the cloud Just because you move to the cloud doesn’t excuse you from compliance. Compliance audits are still required; however, the CSP is partly responsible. At an IAAS cloud provider some services may be compliant and not others. Some SAAS providers specialize in compliant services, e.g. SRFax - HIPAA. Separate accounts can be set up for compliance to limit the scope. In some cases, auditors are allowing compensating controls. Automation can help with governance and compliance. 103 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential So what happens when we move systems requiring compliance to the cloud? Compliance still applies. Organizations still need to pass audits. However, in the cloud, some systems will be the responsibility of the cloud provider. In this case, the audit will need to refer to audit documentation from the cloud provider for parts of the audit. The other portion of the audit will be the responsibility of the company being audited. When evaluating services to use at an IAAS provider, note that some of the individual services may be compliant and not others. Evaluate each individual service. Some SAAS providers specialize in providing compliant services. This may be a good option for some companies. Moving systems that require compliance into separate accounts may help limit scope. In some cases, auditors are allowing compensating controls where compliance requirements created prior to heavy use of cloud systems don’t make as much sense. This is dependent on the particular auditor making the decision. Automation can help with governance and compliance. By automatically reviewing systems before they are deployed, non-compliant systems can generate alerts or be completely rejected. After systems are deployed, automated scans can determine if systems are compliant.
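The pre-deployment review described above can be sketched as a simple compliance gate: parse the deployment template and reject it if any resource violates a rule (here, unencrypted storage). The template format and the rule are illustrative, not a real cloud provider API:

```python
# Hedged sketch of an automated pre-deployment compliance gate. A template
# is rejected (or generates an alert) if it declares storage that is not
# encrypted at rest. The "resources" schema below is an assumption.

def compliance_violations(template):
    """Return human-readable violations found in a parsed template."""
    violations = []
    for name, resource in template.get("resources", {}).items():
        if resource.get("type") == "storage" and not resource.get("encrypted", False):
            violations.append(f"{name}: storage must be encrypted at rest")
    return violations

template = {
    "resources": {
        "logs-bucket": {"type": "storage", "encrypted": True},
        "data-bucket": {"type": "storage", "encrypted": False},
    }
}
# compliance_violations(template) flags only "data-bucket".
```

The same check, run as a scheduled scan against deployed resources instead of templates, covers the post-deployment case the notes mention.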
  • 119.
AWS Artifact 104 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential AWS offers a service called Artifact where customers can access the cloud provider’s compliance documents. Some will require permission and others can be downloaded by anyone. https://console.aws.amazon.com/artifact/home?#!/reports
  • 120.
AWS Compliance Center 105 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential AWS also offers a service which helps companies find compliance documents from around the world. This service is called Atlas. https://www.atlas.aws/
  • 121.
Azure Compliance Manager 106 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential Azure has a service that automatically scans and reports on system compliance. This service is part of Azure Security Center. https://docs.microsoft.com/en-us/office365/securitycompliance/meet-data-protection-and-regulatory-reqs-using-microsoft-cloud
  • 122.
Google Compliance 107 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential Google has a compliance page where companies can view information about Google’s compliance with various standards.
  • 123.
Check for compliance at the service level 108 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential The cloud providers will generally have pages for specific service requirements. These screenshots show the list of HIPAA-compliant services on AWS and Azure. HIPAA compliance on AWS: https://aws.amazon.com/compliance/hipaa-compliance/ HIPAA compliance on Azure: https://www.microsoft.com/en-us/trustcenter/compliance/hipaa
  • 124.
“Security People Like Lists” A statement by a developer who was frustrated with security people. Security professionals like lists for multiple reasons: Best practices based on the most common causes of data breaches. Policies and procedures for legal purposes. Compliance requirements in certain industries and jurisdictions. Assessment of risk due to cybersecurity weaknesses. Developers don’t want to just implement lists - they want to know why. 124 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential Security professionals get training on a myriad of threats, malware, and compliance requirements. All of these details can be overwhelming, so the obvious solution is to make lists of all the things that need to be done to create a secure configuration. These lists are based on underlying research, such as attacks that have occurred in the past and how to prevent them. From a developer perspective, the lists just look like a huge roadblock that makes no sense. Developers and software engineers are analytical types who need to know why these lists exist. Providing security training to developers can help them understand the reasons behind security requirements. Additionally, developers can also help implement security requirements more efficiently. Security lists exist for a number of reasons: Past data breaches show the ways in which attackers have obtained access to systems. Certain lists outline steps to take to prevent similar attacks. Some security policies and procedures exist for legal or contractual reasons. These requirements are driven by law and are not optional. Compliance drives certain security requirements. As mentioned, in order to process credit cards, organizations must adhere to the rules for PCI compliance. Organizations have created vulnerability lists that help companies find cybersecurity
  • 125.
weaknesses and in some cases include defined metrics for risk assessments. Using an established standard helps security professionals assess risks in an industry-standard manner.
  • 126.
AWS Well-Architected Framework Questions to ask about a system. Covers architecture and security. Aligns to AWS services. Limited security questions. The idea is to keep it simple initially. Plans for adding more for compliance. 110 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential AWS offers the Well-Architected Framework to help companies assess their architecture. This framework was created because many people were asking AWS and partner companies for assessments. AWS wanted to create a service that companies could use to do these assessments themselves. https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf AWS then created a service that companies can use to track answers to questions and architecture status over time. https://aws.amazon.com/well-architected-tool/ This questionnaire and associated design principles cover more than just security. We’ll take a look at the security portions of this framework. As you will see, the questions are pretty open-ended. This is just a starting point. Amazon has plans to add more details over time. They also designed it to work in any cloud or environment, not just on AWS.
  • 127.
Identity and Access Management AWS Well-Architected Framework Identity and Access Management questions.
Detective Controls / Infrastructure Protection Detective Controls and Infrastructure Protection questions.
Data Protection / Incident Response Data Protection and Incident Response questions.
AWS Well-Architected Tool This screenshot shows the AWS Well-Architected Tool in the AWS console. You can get to this tool by logging into AWS and searching for the AWS Well-Architected Tool in the list of AWS services.
AWS Well-Architected Tool ~ Improvement Plan The AWS Well-Architected Tool allows you to track architecture risks over time.
Azure Scaffold (Cloud Adoption Framework) Azure Scaffold offers best practices for Azure deployments. Azure Scaffold offers best practices for deployments in an Azure account. The Azure Scaffold is more about best practices for account structure than for individual applications, which is the focus of the AWS Well-Architected Framework. It also only applies to Enterprise accounts. Account structure is covered in more detail in other parts of the class. However, you’ll want to think about governance and account structure as early as possible and structure your accounts and services so you can manage policies at the organizational level if needed. https://docs.microsoft.com/en-us/azure/architecture/cloud-adoption/appendix/azure-scaffold
Center for Internet Security (CIS) Critical Controls A prioritized set of actions organizations can take to protect against known cyber attack vectors. Based on known attacks and data breaches. Claims to stop 85% of attacks. The Center for Internet Security offers a set of controls derived from studying data breaches and attack patterns. This well-known set of security controls claims to stop 85% of attacks. https://www.cisecurity.org/controls/ You can download the latest set of CIS critical controls with more details here: https://learn.cisecurity.org/cis-controls-download
CIS controls applied to the Target breach Do the CIS controls work? This case study applies the critical controls to the Target breach. It demonstrates how these controls may have prevented the data loss. https://www.sans.org/reading-room/whitepapers/casestudies/case-study-critical-controls-prevented-target-breach-35412 On day five we’ll look at how a similar system might be architected in the cloud to prevent similar attacks. Do the critical controls work? A prior version of the critical controls was applied to the Target breach to see if and how they could have helped. Hypothetically, if the controls had been applied the breach would have been harder to accomplish and possibly prevented. One of the labs for this class involves taking a look at the Target architecture and redesigning it to work in the cloud. You can apply the critical controls to your new architecture and consider whether they would help in the cloud the same way they would help on premises.
CIS Benchmarks Over 100 configuration guidelines. Security best practices for configuring commonly used technology components. In addition to the critical controls, the Center for Internet Security offers CIS benchmarks for different applications and products. These benchmarks help you ensure your systems are configured according to best practices. https://www.cisecurity.org/cis-benchmarks/ For example, we applied the CIS benchmarks to the AWS AMI created for this class. You’ll get to see how we did that in an upcoming lab and try it out for yourself. Here are some of the benchmarks that may be applicable to your cloud infrastructure and applications: Amazon Linux, Amazon Web Services, AWS Three Tier Web Architecture, CentOS Linux, Google Cloud Computing Platform, Kubernetes, Microsoft Azure, Microsoft Windows Server, Ubuntu Linux, VMware, Docker
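To make the idea of a benchmark concrete, here is a minimal sketch of how an automated check might compare a configuration file against benchmark-style rules. The rules and sample config below are made-up illustrations, not actual CIS benchmark content or tooling:

```python
# Illustrative only: a miniature "benchmark" with three hypothetical
# SSH-hardening rules, not the real CIS benchmark or its assessor tools.
BENCHMARK = {
    "PermitRootLogin": "no",         # disallow direct root SSH logins
    "PasswordAuthentication": "no",  # require key-based authentication
    "X11Forwarding": "no",           # disable X11 forwarding
}

def check_sshd_config(config_text):
    """Return a list of (setting, expected, actual, passed) results."""
    actual = {}
    for line in config_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            parts = line.split(None, 1)
            if len(parts) == 2:
                actual[parts[0]] = parts[1]
    results = []
    for setting, expected in BENCHMARK.items():
        value = actual.get(setting, "<unset>")
        results.append((setting, expected, value, value == expected))
    return results

sample = """
PermitRootLogin no
PasswordAuthentication yes
"""
for setting, expected, value, passed in check_sshd_config(sample):
    print(f"{setting}: expected {expected}, got {value} -> "
          f"{'PASS' if passed else 'FAIL'}")
```

Real benchmark assessments work the same way at a much larger scale: a long list of expected settings, checked mechanically against the actual system.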
Vendor Baselines and Best Practices Each vendor will publish baselines and best practices. Each individual cloud service or product will have specific guidance. This sounds obvious, but people don’t do it in my experience: Read The Cloud Manual! We’ll cover as much as possible in class, but it’s still a good idea to read up. AWS security best practices: https://d1.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf Azure security best practices: https://docs.microsoft.com/en-us/azure/security/fundamentals/best-practices-and-patterns Google security best practices: https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations Microsoft Office baseline: https://blogs.technet.microsoft.com/secguide/2018/02/13/security-baseline-for-office-2016-and-office-365-proplus-apps-final/ Proposed new baseline: https://techcommunity.microsoft.com/t5/Microsoft-Security-Baselines/Security-baseline-for-Office-365-ProPlus-v1907-July-2019-DRAFT/ba-p/771308
OWASP Top 10 OWASP, or the Open Web Application Security Project, is an organization that focuses on secure coding and application security best practices. The website includes examples and testing methodologies. OWASP Top 10: https://www.owasp.org/images/7/72/OWASP_Top_10-2017_%28en%29.pdf.pdf
________
A1:2017-Injection Injection flaws occur when an attacker sends unexpected input to a program that allows the attacker to send commands to the underlying program and execute unauthorized code. Magento / Magecart (British Airways, Ticketmaster, Newegg, and more) https://duo.com/decipher/critical-magento-flaw-puts-commerce-sites-at-risk
________
A2:2017-Broken Authentication Improperly implemented authentication may allow attackers to steal or manipulate keys, credentials, passwords, session tokens, etc. to gain access to systems. Facebook access token breach, September 2018 https://www.theguardian.com/technology/2018/sep/28/facebook-50-million-user-accou
A3:2017-Sensitive Data Exposure Failure to encrypt data in transit and at rest. Additionally, beware of data cached in memory, output to log files, maintained in cookies, and other storage locations. Facebook passwords unencrypted for years, reported March 2019 https://krebsonsecurity.com/2019/03/facebook-stored-hundreds-of-millions-of-user-passwords-in-plain-text-for-years/
________
A4:2017-XML External Entities (XXE) Poorly designed XML processors allow for data exposure. WordPress https://www.zdnet.com/article/wordpress-vulnerability-affects-a-third-of-most-popular-websites-online/
________
A5:2017-Broken Access Control Improper data access restrictions allow attackers to access other people’s data and accounts. Salesforce https://threatpost.com/salesforce-com-warns-marketing-customers-of-data-leakage-snafu/134703/
________
A6:2017-Security Misconfiguration Simply exposing data or creating vulnerabilities through improper configurations. S3 bucket breaches (many cases - 2018) https://businessinsights.bitdefender.com/worst-amazon-breaches Database exposure (many cases - March 2019) https://www.infosecurity-magazine.com/news/indian-mongodb-snafu-exposes-info-1/ https://www.bleepingcomputer.com/news/security/open-mongodb-databases-expose-chinese-surveillance-data/ https://securitydiscovery.com/800-million-emails-leaked-online-by-email-verification-service/
within an application. Magecart skimmer software (British Airways, Newegg, and others): https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/magecart-an-overview-and-defense-mechanisms/ Equifax: https://arstechnica.com/information-technology/2017/09/massive-equifax-breach-caused-by-failure-to-patch-two-month-old-bug/
________
A9:2017-Using Components with Known Vulnerabilities Using components with known CVEs (Common Vulnerabilities and Exposures). Equifax: https://arstechnica.com/information-technology/2017/09/massive-equifax-breach-caused-by-failure-to-patch-two-month-old-bug/
________
A10:2017-Insufficient Logging & Monitoring Insufficient logging allows an attacker to infiltrate systems, stay there, and continue to pivot to other systems. Sometimes attackers remain in breached systems for years. Marriott (Starwood hotels): https://www.nytimes.com/2018/11/30/business/marriott-data-breach.html
________
OWASP is also working on a Serverless Top 10 ~ but at this time it’s currently the same. https://github.com/OWASP/Serverless-Top-10-Project/
MITRE ATT&CK The MITRE ATT&CK framework is a knowledge base of common tactics and techniques based on real-world events. This slide only shows part of the list. The framework applies mostly to traditional application layers and does not have many cloud-specific attacks, but these same attacks apply in the cloud for any applications using similar technologies deployed there. https://attack.mitre.org/matrices/enterprise/ If you have time, you can also contribute to the MITRE ATT&CK framework if you are aware of new types of breaches and attacks: https://attack.mitre.org/resources/contribute/
CAPEC ~ Common Attack Pattern Enumeration & Classification CAPEC, or the Common Attack Pattern Enumeration & Classification framework, organizes attacks by mechanisms of attack and domains of attack. https://capec.mitre.org/index.html
NIST ~ National Institute of Standards & Technology (US) 6-step process - one government, one standard - reciprocity. If an agency wants to use a system another agency audited, there is no need to re-audit. NIST 800-145 Definition of Cloud Computing (a bit dated ~ 2011). NIST 800-53 Security and Privacy Controls (controls to meet FISMA requirements). FIPS - Federal Information Processing Standards (cryptography). Cybersecurity Framework (security best practices). NIST, or the National Institute of Standards & Technology, is a US government organization that publishes security best practices. NIST 800-53 is a set of guidelines to help government agencies and contractors meet FISMA (Federal Information Security Management Act) requirements. Other countries have similar organizations that define the controls government agencies need to follow. In the US, many companies use the NIST guidelines even outside the federal government as a list of best practices. Some other NIST documents: NIST Definition of Cloud Computing ~ a bit dated but still referenced at times: https://csrc.nist.gov/publications/detail/sp/800-145/final 800-30 Guide for Conducting Risk Assessments 800-37 Risk Management Framework (RMF) 800-39 Managing Information Security Risk 800-137 Continuous Monitoring 800-60 Data Categorization
NIST - 6 steps still applicable to cloud systems 1. Document the system. 2. Define the controls and overlays. 3. Document how your system/application implements each control. 4. Assess security controls - assessors look at and test controls. 5. Risk management step - the risk executive accepts the risk or tells you to fix it. 6. Continuous monitoring - ensure systems meet the controls over time. The NIST framework defines six steps organizations should follow to maintain secure systems. These same six steps are still applicable to systems deployed to the cloud.
Based on CIA Triad Confidentiality: What is the impact on your mission if this information got out? Integrity: Would changing the data affect your mission? Availability: Could your mission continue if the data was unavailable? CIA stands for confidentiality, integrity, and availability. Organizations can use these three characteristics of data security to evaluate risk and business impact if a system is breached. Different organizations will place more importance on one or another depending on the impact of a failure to maintain confidentiality, integrity, or availability.
Categorizing data Categorize data to determine what the risk level is if different types of data are exposed, changed inappropriately or manipulated in some way, or made unavailable. Using the CIA triad you can categorize your data. First, break down data into different data type categories. You could use the categories shown above or some other categories that make sense for your business. Next, for each characteristic of the CIA triad, determine whether the risk for each category of data is high, medium, or low. Based on this information you can determine whether the fix for a particular vulnerability in a certain system needs to be prioritized.
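The categorization step above can be sketched in a few lines of code. The data categories and CIA ratings below are invented examples, not an official scheme; one simple (but common) convention is to treat the highest of the three ratings as the category's overall risk level:

```python
# A sketch of CIA-based data categorization. Categories and ratings are
# made-up examples; choose categories that make sense for your business.
LEVELS = {"low": 1, "medium": 2, "high": 3}

data_categories = {
    # category: (confidentiality, integrity, availability)
    "public marketing site": ("low", "high", "medium"),
    "customer PII":          ("high", "high", "medium"),
    "payment card data":     ("high", "high", "high"),
}

def overall_risk(cia_ratings):
    """Use the highest of the three CIA ratings as the category's risk."""
    return max(cia_ratings, key=lambda rating: LEVELS[rating])

# Rank categories so the riskiest data drives vulnerability-fix priority.
for category, cia in sorted(data_categories.items(),
                            key=lambda kv: -LEVELS[overall_risk(kv[1])]):
    print(f"{category}: C/I/A = {cia}, overall = {overall_risk(cia)}")
```

A vulnerability in a system holding "payment card data" would then outrank the same vulnerability in the marketing site.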
Overlays and common control providers Overlays: a subset of all the controls that apply to a particular organization. NIST, PCI, etc. Common control providers: if the DNS server was already assessed, don’t re-assess it. If the cloud provider's systems were already assessed, use that assessment. When looking at which controls need to be audited, consider overlays and common control providers. First, you need to see which sets of controls apply to your organization. Are you trying to become SOC 2 compliant? Do you host personal data for European citizens (GDPR)? Do you process credit cards (PCI) or health data (HIPAA)? Choose all the controls that apply to you based on the required frameworks for auditing and monitoring compliance. Next, determine whether there are any common control providers. If a system was already covered by one audit, it shouldn’t need to be re-audited by a second. In the case of the cloud, any controls that are the responsibility of the cloud provider likely fall into the category of a common control provider. You can show the auditor the audit documentation from the cloud provider to prove that the compliance control requirements are satisfied.
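The overlay and inheritance logic described above is essentially set arithmetic: the controls you must assess yourself are the union of your applicable overlays minus whatever a common control provider's audit already covers. A minimal sketch with made-up control identifiers:

```python
# Hypothetical control IDs used purely for illustration; real overlays
# come from the frameworks that apply to you (PCI, HIPAA, NIST, etc.).
pci_overlay   = {"AC-1", "AC-2", "SC-7", "SC-13"}
hipaa_overlay = {"AC-1", "AU-2", "SC-13"}

# The overlay for your organization: every control any applicable
# framework requires.
applicable = pci_overlay | hipaa_overlay

# Controls already covered by a common control provider's audit,
# e.g. physical security handled by the cloud provider.
inherited = {"SC-7", "SC-13"}

# What your own assessors still need to evaluate.
to_assess = applicable - inherited
print(sorted(to_assess))
```

This is also why cloud-provider audit documentation matters: every control in the `inherited` set is one you can prove with the provider's paperwork instead of your own assessment.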
Cybersecurity Framework The NIST Cybersecurity Framework has specific controls that are well defined and numbered. NIST Cybersecurity Framework controls include specific tests to determine whether a control passes inspection or not. Compare this to the open-ended questions of the AWS Well-Architected Framework, for example. These are very different methods of evaluating cybersecurity. The NIST framework was created to bring consistency to the way systems are audited. https://www.nist.gov/cyberframework Spreadsheet: https://www.nist.gov/file/448306
Cloud Security Alliance (CSA) Security Guidance for Critical Areas of Focus in Cloud Computing GRC Stack (2010) CSA STAR Certifications for cloud professionals. Global organization with local chapters. The Cloud Security Alliance was established in 2008 to explore what steps should be taken for best cybersecurity practices in a cloud environment. This organization now has chapters around the world. https://cloudsecurityalliance.org/chapters/global/ If you don’t have a Cloud Security Alliance chapter near you, you can start one! https://cloudsecurityalliance.org/chapters/
CSA STAR Cloud Security Alliance (CSA) STAR ~ certification for cloud providers. Three levels of assurance: self-assessment, third-party assessment, continuous auditing. Leverages the following documents: Cloud Controls Matrix (CCM) Consensus Assessments Initiative Questionnaire (CAIQ) Code of Conduct for GDPR Compliance The Cloud Security Alliance offers a certification called CSA STAR. This is a way for cloud providers to demonstrate they are following best security practices in the cloud. https://cloudsecurityalliance.org/star/#_overview
CSA STAR Open Certification Framework The framework provides incremental steps cloud providers can take to get certified. Level 1 - Self-Assessment + GDPR Code of Conduct Submit one of the following: A completed Consensus Assessments Initiative Questionnaire (CAIQ) A report documenting compliance with the Cloud Controls Matrix (CCM) For GDPR, both of the following apply: Code of Conduct Statement of Adherence Self-assessment results based on the PLA Code of Practice (CoP) Template Level 2 - Attestation and Certification Attestation - CPAs conduct SOC 2 assessments using criteria from: AICPA (Trust Service Principles, AT 101) CSA Cloud Controls Matrix Certification - third-party independent assessment of the security of a CSP
Level 3 - Continuous Auditing Currently under development. Enables automation of the current security practices of cloud providers. Providers publish their security practices according to CSA specifications. Customers and vendors can retrieve and use the data in a variety of contexts.
CCM The CCM is the only meta-framework of cloud-specific security controls, mapped to leading standards, best practices, and regulations. The CCM provides organizations with the needed structure, detail, and clarity relating to information security tailored to cloud computing. The CCM is currently considered a de facto standard for cloud security assurance and compliance.
CAIQ The CAIQ is based upon the CCM and provides a set of yes/no questions a cloud consumer and cloud auditor may wish to ask of a cloud provider to ascertain their compliance with the Cloud Controls Matrix. This is a very useful questionnaire because it covers a lot of different aspects of security. In the questionnaire, the questions are aligned with various compliance and security frameworks such as PCI, HIPAA, and NIST. If your particular framework does not exist in the questionnaire, you can add a column and map the questions to it. Then you can search the CSA database to see if the cloud provider exists there and whether they have already filled out this questionnaire. This should save people who are trying to perform and provide data for risk assessments a lot of time.
Other Frameworks Control Objectives for Information and Related Technology (COBIT) ~ ISACA Information Technology Infrastructure Library (ITIL) International Organization for Standardization (ISO) Common Security Framework (CSF) ~ HITRUST Australian Signals Directorate (ASD) Essential 8 NZISM Protective Security Requirements (PSR) Framework Control Objectives for Information and Related Technology (COBIT) COBIT is a framework focused on identifying and mitigating risk, released in 1996 by ISACA. Initially designed for governance, it has evolved into helping align business and IT objectives. COBIT is mostly used in the financial industry to help comply with standards like Sarbanes-Oxley. ITIL ITIL also attempts to align business and IT objectives, with the goal of delivering services in a predictable manner. It was created in the 1990s by the UK Central Computer and Telecommunications Agency (CCTA). ISO ISO was founded in 1947 by a group of delegates from 25 countries, who formed the 67 original technical committees. The group wanted to ensure products and services are safe, reliable, and of good quality. ISO created widely used cybersecurity standards, such as 27001 and 27002, to demonstrate the quality of an organization’s cybersecurity programs. ISO 27002 has the following sections: 1. Risk assessment
2. Security policy 3. Organization of information security 4. Asset management 5. Human resources security 6. Physical and environmental security 7. Communications and operations management 8. Access control 9. Information systems acquisition, development and maintenance 10. Information security incident management 11. Business continuity management 12. Compliance Common Security Framework (CSF) HITRUST (Health Information Trust Alliance) is a privately held company located in the United States that has established a Common Security Framework (CSF) that can be used by all organizations that create, access, store, or exchange sensitive and/or regulated data.
Telos Xacta ~ product for risk assessments Telos Corporation had an interesting risk management product and approach that was used in the evaluation of AWS for the second US GovCloud region. https://www.telos.com/cyber-risk-management/xacta/continuous-compliance-assessment/ The first AWS government cloud (GovCloud) assessment of an air-gapped cloud (not connected to the Internet) took two years. The US government asked another company to do the assessment to get it done faster; that company only had four months. Here’s a video from the AWS Summit in Singapore where one of the people involved talks about the assessment: https://www.youtube.com/watch?v=ru0-lL09aPc&index=13&list=PLhr1KZpdzukcqM9wmBu9nZLbuOXlXuwK8
Track applicable controls The Telos Xacta system lets you track security controls.
Select your control sets Select control sets such as NIST or PCI to add the information that applies to your organization to the system.
Inheritance ~ What has AWS already covered? In the case of AWS, for example, the system will tell you which controls are already covered by AWS audits.
Security Assessment ~ Pass, Fail, Monitor Assessors can go through the system and mark controls as pass or fail. The system allows you to track audits and controls over time. After a breach, check this system: Figure out what mission that system was serving. Figure out what controls were in place. Were there any risks that were not mitigated properly? Is there a paper trail regarding any exceptions or failures to mitigate?
Lab: Intro to GCP automation This lab is an introduction to deploying cloud resources using automated and manual methods. It’s also a chance to make sure accounts are set up correctly and lab tools are working.
Costs and Budgeting Any company moving to the cloud will want to consider cloud costs and budgeting. This includes security teams! There are multiple reasons why security teams need to consider cloud costs, as we will discuss.
Costs and Budgeting
Category | AWS | Azure | GCP
Budgets | AWS Budgets | Azure Budgets | Google Budgets
Billing & Cost Management | Billing & Cost Management | Billing & Cost Management | Billing & Cost Management
Cost Management Reports | AWS Budget Reports | Azure Cost Reporting | Billing Reports
Billing APIs | Cost Explorer API | Azure Cost Management APIs | Google Billing APIs
Right-sizing | Recommendations | Optimization Recommendations | Sizing Recommendations
Advisor | Trusted Advisor | Azure Advisor | -
Cost control - security? Watching your costs might help you determine whether you have a security problem. Attackers spin up cryptominers that can increase cloud bills. Charges for non-compliant and unauthorized services. Need to evaluate the cost of security services and options. Work with people in finance to find rogue cloud accounts. To find the price of any service, search “[service name] pricing” in Google. There are many reasons why security teams need to consider the cost of cloud accounts. Of course security teams will want to understand the cost of the systems they are evaluating and determine whether those systems are within the desired budget. Systems may need to be architected to minimize or limit spending. Another reason security teams want to be aware of costs is that an increase in cost may indicate a security problem. Companies have racked up large bills due to stolen account credentials. Attackers use the stolen credentials to create unauthorized resources. In other cases, cryptominers are deployed on systems; these increase CPU usage and network traffic and may cause a company to incur additional costs. Finally, the security team may want to coordinate with members of the accounting department who are paying the bills. Find out if employees are expensing or paying for rogue cloud accounts that were created outside of approved channels.
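The "cost spike as security signal" idea above can be automated with even a crude check. This is a minimal sketch with an arbitrary threshold (double the trailing average); real tooling such as provider budget alerts is more sophisticated, and the dollar figures are invented:

```python
# Flag any day whose spend is more than `factor` times the average of
# all preceding days. Threshold and warmup period are arbitrary choices.
def flag_cost_spikes(daily_costs, factor=2.0, warmup=3):
    flags = []
    for i in range(warmup, len(daily_costs)):
        baseline = sum(daily_costs[:i]) / i
        if daily_costs[i] > factor * baseline:
            flags.append(i)  # record the index of the anomalous day
    return flags

# Day 5 jumps from roughly $100/day to $900 - possibly a cryptominer
# or stolen credentials spinning up unauthorized resources.
costs = [100, 105, 98, 102, 110, 900]
print(flag_cost_spikes(costs))  # -> [5]
```

Feeding a check like this from daily billing exports gives the security team an early warning that something unauthorized may be running.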
How much will that cost? Every service has a pricing formula. You’ll need to understand your inputs to get your output (cost). Tuning and tweaking applications can reduce cloud costs. No matter how much you try to predict what your costs will be… beta test early to validate your cost estimates. Use calculators provided by cloud providers - or a spreadsheet. Vendors with old licensing models do not align with pay-as-you-go services. Cloud pricing is based on a formula for each cloud service. Different services will have different formulas to determine the cost. For example, some services may charge based on bandwidth. S3 buckets charge based on the number of gets and puts into a bucket plus storage. Data transferred into AWS is free. Data transferred out or between accounts has a cost. Each service will have its own unique pricing model and metrics. To come up with a price for an application you’ll need to understand the inputs. If the cost is based on gets (when you request a file) and puts (when you upload a file), how many files will you be adding and retrieving from the S3 bucket? What will the total file storage size be? For an EC2 instance, how many hours will it run? Don’t forget about the attached EBS volume, which costs money even when the EC2 instance is stopped. Once you have the inputs you can plug them into the formula for a service to get your cost. But no matter how hard you try to think of every aspect of the system that may incur a fee, beta test and validate your assumptions as early as possible in case any surprise costs drive an architectural change. The cloud providers offer calculators, as we’ll see, that can help, or you can use a spreadsheet. Note that vendors with per-machine licensing models don’t work well in the cloud, where architectures should be scalable. You want to only be paying for resources
while they are in use and only have as many resources deployed as required to handle the load and maintain high availability. If you expect your system to scale to 5 instances and you have to buy 5 expensive licenses when most of the time you’re only running two instances, that licensing model is not aligned with the cloud. For a deeper dive on cloud pricing check out the appendix of this whitepaper: https://www.sans.org/reading-room/whitepapers/detection/paper/37905 AWS also has a whitepaper on how AWS pricing works: https://d1.awsstatic.com/whitepapers/aws_pricing_overview.pdf
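The inputs-into-a-formula idea above can be sketched as code. The rates below are placeholders, not current AWS prices; only the shape of the formula (storage plus requests plus data transfer out, with transfer in free) follows the S3 example in the text:

```python
# Rates are PLACEHOLDERS for illustration - check the provider's
# pricing page for real numbers before estimating anything.
def estimate_s3_monthly_cost(gb_stored, put_requests, get_requests,
                             gb_transferred_out,
                             storage_rate=0.023,       # $/GB-month (assumed)
                             put_rate=0.005 / 1000,    # $ per PUT (assumed)
                             get_rate=0.0004 / 1000,   # $ per GET (assumed)
                             transfer_out_rate=0.09):  # $/GB out (assumed)
    return (gb_stored * storage_rate
            + put_requests * put_rate
            + get_requests * get_rate
            + gb_transferred_out * transfer_out_rate)  # transfer IN is free

cost = estimate_s3_monthly_cost(gb_stored=500, put_requests=100_000,
                                get_requests=1_000_000,
                                gb_transferred_out=50)
print(f"${cost:.2f}/month")
```

Once you have a function like this per service, validating a beta workload against it quickly shows which input you underestimated.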
AWS EC2 pricing ~ on demand This is a screenshot of the AWS EC2 pricing page showing the cost of a Linux instance charged at an hourly rate. Note that you aren’t charged for a full hour if you only run the EC2 instance for 5 minutes. The cost is prorated.
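The proration described above is just the hourly rate scaled by the fraction of an hour actually used. A small sketch; the 60-second minimum and the $0.096/hour rate are assumptions for illustration, and actual billing granularity varies by instance type and operating system:

```python
# Prorate an hourly rate down to the seconds actually used.
# The minimum-billable-seconds value is an assumed example.
def prorated_cost(hourly_rate, seconds_run, minimum_seconds=60):
    billable = max(seconds_run, minimum_seconds)
    return hourly_rate * billable / 3600

# 5 minutes of a $0.096/hour instance costs a twelfth of the hourly rate.
print(round(prorated_cost(0.096, 5 * 60), 4))  # -> 0.008
```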
Reserved instances (Pay Up Front) If you pay up front for instances you can get a discount. Instances can be reserved for a year or longer. If you reserve an instance and subsequently do not use it, you’ll still have to pay for it. If you reserve a set of instances across multiple accounts, any account will get the discounted rate for that pool of instances.
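Because you pay for a reservation whether or not you use it, the decision reduces to a break-even calculation against on-demand usage. All numbers below are hypothetical; use the provider's published on-demand and reserved rates for a real comparison:

```python
# Hypothetical rates for illustration only.
def breakeven_hours(on_demand_hourly, reserved_upfront_yearly):
    """Hours per year above which the reservation pays off."""
    return reserved_upfront_yearly / on_demand_hourly

def reserved_is_cheaper(on_demand_hourly, reserved_upfront_yearly,
                        hours_used_per_year):
    return reserved_upfront_yearly < on_demand_hourly * hours_used_per_year

# A $500/year reservation vs $0.10/hour on demand:
print(breakeven_hours(0.10, 500))            # worth it past 5000 hours/year
print(reserved_is_cheaper(0.10, 500, 8760))  # running 24x7 -> True
print(reserved_is_cheaper(0.10, 500, 4000))  # part-time -> False
```

This is the same reasoning as the licensing discussion earlier: commit to capacity only when utilization justifies it.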
Spot Instance Pricing (Bid) On AWS you can also bid on an EC2 instance to pay a lower rate. Note that if you bid and your price is accepted, but later the price goes above your bid, your instance may be terminated. This model works for batch processes that can be restarted without errors.
    Cost of SecurityServices ❏ Did the calculation of the cost of an application include security costs? ❏ How much will the log storage cost? ❏ Vulnerability scanner? ❏ Will you have a WAF (Web Application Firewall) in front of your website? ❏ Will a WAF front APIs exposed to the Internet? ❏ Do you need other security services with separate licensing costs? ❏ Will cloud encryption keys be used and how many? ❏ What security services will be enabled in each account? ❏ Have the overall cost of cloud included these costs? ❏ How are security costs handled by accounting in cloud environments? 149 Author: Teri Radichel © 2019 2nd Sight Lab. Confidential Some questions to ask when determining the cost of security services in your cloud accounts: Did the calculation of the cost of an application include security costs? How much will the log storage cost? Vulnerability scanner? Will your website have a WAF (Web Application Firewall) associated with it? Do you need other security services with separate licensing costs? Have those in charge of estimating cloud costs overall included these costs? How are security costs handled by accounting in cloud environments?
AWS GuardDuty ~ North Virginia: GuardDuty is a different type of service: you pay for the amount of logs processed. There are different tiers, so as the amount of logs processed increases, the per-unit price goes down.
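Tiered pricing of this kind is a marginal-rate calculation, like a tax schedule: each slice of usage is charged at its own tier's rate. The tier boundaries and rates below are invented for illustration and are not GuardDuty's actual price list:

```python
# Hypothetical tiers: (gigabytes in tier, price per GB); None = unlimited final tier.
TIERS = [(500, 1.00), (4500, 0.50), (None, 0.25)]

def tiered_cost(gb: float, tiers=TIERS) -> float:
    """Charge each slice of usage at its tier's rate; later GBs cost less per unit."""
    cost, remaining = 0.0, gb
    for size, rate in tiers:
        slice_gb = remaining if size is None else min(remaining, size)
        cost += slice_gb * rate
        remaining -= slice_gb
        if remaining <= 0:
            break
    return cost
```

So 6,000 GB at these made-up rates costs 500·1.00 + 4500·0.50 + 1000·0.25, and the average per-GB price falls as volume grows, which is the behavior the slide describes.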
CloudWatch ~ log almost anything you want. VPC Flow Logs may end up here, though they can also be sent to S3. AWS CloudWatch is another logging service; for this service you'll pay a fee per GB of data collected and stored.
AWS “simple” monthly calculator: AWS has a simple monthly calculator which can help you determine the cost of a project. It has some of the pricing formulas built in, so you can plug in numbers and get a price. It may not cover every service, so make sure you're not missing something. Some people like it; others find it easier to use a spreadsheet. https://calculator.s3.amazonaws.com/index.html
AWS Total Cost of Ownership Calculator: AWS also has a total cost of ownership calculator which helps you compare the costs of on-premises servers and cloud virtual machines. It generates reports that can be used in executive presentations. https://aws.amazon.com/tco-calculator/
AWS Enterprise Agreements: AWS offers enterprise agreements, which are claimed to offer up to a 75% discount on services. If you have large spend or interesting products, ask; companies with large spend heavily influence new features. An agreement usually comes with a commitment to spend a certain amount of money (reserved pricing). You don't need an enterprise agreement to link accounts (Consolidated Billing). https://aws.amazon.com/pricing/enterprise/
AWS Budgets: Get alerts when you pass, or are forecasted to exceed, the budget you set. Set alerts for reserved instance (RI) utilization that drops below a threshold; RI alerts cover EC2, RDS, Redshift, and ElastiCache reservations. Budgets can be monthly, quarterly, or yearly with customizable start and end dates, and can track other dimensions like AWS services, linked accounts, and tags. They can be created in the UI or in an automated fashion. We'll look at creating alerts in the upcoming lab.
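The "forecasted to exceed" alert is worth understanding: a budget can fire before you actually overspend, based on your run rate. A sketch of the idea using a naive linear forecast (this is an illustration of the concept, not how AWS computes its forecasts):

```python
def forecast_month_end(spend_to_date: float, day_of_month: int, days_in_month: int = 30) -> float:
    """Naive linear projection of month-end spend from the run rate so far."""
    return spend_to_date / day_of_month * days_in_month

def budget_alerts(spend_to_date: float, day_of_month: int, budget: float) -> list[str]:
    """Emit an alert if spend has already passed the budget, or is on pace to."""
    alerts = []
    if spend_to_date > budget:
        alerts.append("ACTUAL > budget")
    elif forecast_month_end(spend_to_date, day_of_month) > budget:
        alerts.append("FORECAST > budget")
    return alerts
```

Spending $50 of a $100 budget by day 10 projects to $150 by month end, so a forecast alert fires even though actual spend is still under budget.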
Azure VM pricing ~ on demand: Azure has a similar pricing model, though it is not laid out in as much detail as AWS's on the pricing page.
Azure VM pricing ~ pay up front: You can also pay up front for a discount on Azure.
Azure MFA Pricing (MFA is free on AWS): Azure charges for MFA; AWS and Google do not.
Azure Calculator: Azure also has a calculator. Similar to AWS, you can plug in values and get projected costs. https://azure.microsoft.com/en-us/pricing/calculator/
Azure Total Cost of Ownership Calculator: Just like AWS, Azure has a TCO calculator. https://azure.microsoft.com/en-us/pricing/tco/calculator/
Azure Enterprise Agreements ~ or... Azure licensing is complicated, due in part to related products and services. Some features fall under Office 365; for example, if you want MFA with Azure, you sign up for it under Office 365 (e.g. Office 365 Premium). Companies can also move existing data center licenses to Azure for a discount. Azure has enterprise agreements as well, but less than 1% of Azure customers leverage the Enterprise Agreement, per a recent discussion with an Azure support representative. The Enterprise Agreement requires a minimum spending commitment of $1,000 per month for three years. Customers typically manage billing for different departments through subscriptions rather than accounts; however, companies with an Enterprise Agreement can link accounts. It used to be that you could only get an Enterprise Agreement from a Microsoft partner, but now anyone can get one.
Azure Billing Management: A few terms you'll want to know on Azure: Tenant = Azure Active Directory; Subscriptions = bills. You can transfer payment of a subscription to another entity; that's how different subscriptions can be linked. Azure enterprise account setup cannot be automated at this time. When structuring accounts and subscriptions, consider who will pay the bill. If the bill needs to be split between two departments, you might want to create two separate subscriptions to make it easier to track and handle in your accounting department.
Azure Budgets: Search for “Subscriptions”, select a subscription, and click on Budgets. Budgets can also be created in PowerShell. The budgets feature allows you to set up a budget and get alerts if you go over it. https://docs.microsoft.com/en-us/azure/cost-management/tutorial-acm-create-budgets https://azure.microsoft.com/en-us/resources/templates/create-budget/
Google Cloud Platform: Google doesn't seem as well equipped to handle the enterprise at this time, though the new CEO claims Google is focused on this goal. Google has a budgeting feature and calculators, groups billing by projects instead of subscriptions or accounts, and claims to be cheaper than AWS for virtual machines. Security services exist but are often more limited. Budget notifications: https://cloud.google.com/billing/docs/how-to/budgets#manage-notifications Calculator: https://cloud.google.com/products/calculator/
Tracking costs via account structure: When structuring cloud accounts, consider the bills. In AWS a bill is associated with an account; in Azure, with a subscription; in Google Cloud, with a billing account and its projects. Use these constructs to determine who gets the bill for a set of resources, and try to structure cloud bills so you don't have to track individual resources. Each cloud provider has a different way of structuring accounts and billing, and dealing with bills and tracking resources is easier if you consider who is going to pay the bill when you set up your initial account structure. On AWS you can set up multiple accounts; each account gets its own root user and bill, and the bills can be linked via consolidated billing. Another service called Organizations, which we will discuss on Day 4, can be used to create nested accounts. Azure has the concept of subscriptions: a single account is set up for an organization, typically with the organization's domain associated with it (like 2ndsightlab.com), and within that account the organization creates “subscriptions” that each get a separate bill. GCP has the concept of billing accounts. Each billing account is associated with a billing profile that pays the bill, and a billing profile can have one or more billing accounts associated with it; projects are created on GCP and associated with billing accounts. When setting up cloud accounts, consider who is going to receive and pay the bills. Trying to track every individual resource in an account is difficult, error prone, and in some cases simply can't be done, because there's no way to identify a resource as belonging to a particular entity. Instead, think about how to structure accounts and resources so that all the resources associated with an AWS account, Azure subscription, or Google billing account are paid by the same entity. It's also easier for the accounting department to pay a bill and assign the costs to one cost center than to have to split up and track bills that get paid by different departments.
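When each account (or subscription, or billing account) maps to exactly one cost center, allocation reduces to a simple group-by over the bill's line items, with no per-resource tagging needed. A small sketch with hypothetical account names and costs:

```python
from collections import defaultdict

# Hypothetical mapping: one paying cost center per cloud account.
ACCOUNT_TO_COST_CENTER = {
    "prod-web": "ecommerce",
    "prod-data": "analytics",
    "sandbox": "engineering",
}

# Hypothetical monthly line items: (account, cost in USD).
LINE_ITEMS = [("prod-web", 1200.0), ("prod-data", 800.0), ("prod-web", 300.0), ("sandbox", 50.0)]

def allocate(line_items, account_map):
    """Roll every line item up to its account's cost center -- a pure group-by."""
    totals = defaultdict(float)
    for account, cost in line_items:
        totals[account_map[account]] += cost
    return dict(totals)
```

If a single account instead mixed resources from several departments, the only way to split the bill would be tagging every resource, which is exactly the error-prone tracking the slide warns against.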
Lab: Budgets and Calculators
Malware and Cloud Threats: How does malware work, and how is it different in the cloud? Is it different? What types of new threats and attacks do we need to worry about in the cloud?
Cyber Kill Chain: Lockheed Martin defined the cyber kill chain to identify common attack actions; it attempts to define what adversaries need to do to complete a cyber attack. Malware follows common patterns, and by looking at and understanding those behavioral patterns, instead of a specific file or signature, attacks can be spotted and blocked. The steps include: Reconnaissance: looking for information that can be used in the attack, such as email addresses for phishing, phone numbers for social engineering, system information, and other types of data that will help the adversary get into company systems. Weaponization: finding a vulnerability and using a backdoor or other method to create an exploit for a system. Delivery: delivering a weaponized piece of code, meaning some sort of exploit has to cross the network. Exploitation: executing the attack on the vulnerable system using the malware or other exploit. Installation: installing the malware on the system. (Note that newer malware may only load itself into memory.) Command and Control (C2): controlling the attacked system via a remote server.
Defenders can apply security tools at each step in this process to try to detect and prevent the different actions attackers take during an attack.
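Because the stages form an ordered sequence, blocking any single link prevents everything after it. A minimal sketch encoding the stages in order, paired with example defensive controls (the control choices are illustrative, not a prescribed mapping):

```python
# Lockheed Martin kill chain stages, in order, with illustrative controls.
KILL_CHAIN = [
    ("Reconnaissance", "limit public info, monitor for scanning"),
    ("Weaponization", "threat intelligence"),
    ("Delivery", "email and web filtering"),
    ("Exploitation", "patching, hardening"),
    ("Installation", "endpoint protection, allow-listing"),
    ("Command and Control", "egress filtering, DNS monitoring"),
]

def stages_reached(blocked_stage: str) -> list[str]:
    """An attack blocked at one stage never reaches any later stage."""
    names = [name for name, _ in KILL_CHAIN]
    return names[: names.index(blocked_stage)]
```

Blocking at Delivery, for instance, means the attacker only completed Reconnaissance and Weaponization, and Exploitation onward never happened.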
Top Cloud Threats (According to 2nd Sight Lab): Misconfigurations & poor architecture - S3 buckets, broad permissions, etc. Credentials - stolen or “found”. Cryptominers and ransomware. Unpatched or exposed DevOps systems and tools (Jenkins, AWS CLI). Lack of network security exposes Elasticsearch, MongoDB, etc. Programming flaws - the OWASP Top 10 and web-related attacks still apply. Escapes - containers or VMs access the host, or the control plane is breached. The CSA publishes its own list of top threats to cloud computing. It's mentioned here, but its coverage seems to be mostly older breaches, and it doesn't address some of the most relevant threats for those using public cloud computing services as confirmed by media reports, customers, and the cloud service providers. https://cloudsecurityalliance.org/artifacts/top-threats-to-cloud-computing-deep-dive/
SANS Survey: A survey by the SANS Institute found that account and credential hijacking topped the list. These two items are typically involved in many of the other, more specific categories listed in the survey. The survey looks at common attacks and categorizes them in different ways; the chart shows what respondents to that particular survey reported. https://www.sans.org/reading-room/whitepapers/analyst/2019-cloud-security-survey-38940
S3 Buckets (Cloud Misconfigurations): Booz Allen Hamilton - When: May 2017. Data exposed: battlefield imagery and administrator credentials to sensitive systems. U.S. Voter Records - When: June 2017. Data exposed: personal data about 198 million American voters. Dow Jones & Co - When: July 2017. Data exposed: personally identifiable information for 2.2 million people. WWE - When: July 2017. Data exposed: personally identifiable information about over 3 million wrestling fans. Verizon Wireless - When: July 2017 and September 2017. Data exposed: PII about 6 million people and sensitive corporate information about IT systems, including login credentials. This slide and the next list some of the biggest S3 bucket breaches in recent history. As you can see, some very prominent organizations have experienced this cloud snafu, and heaps of data have been exposed to the Internet as a result. Source: https://businessinsights.bitdefender.com/worst-amazon-breaches Initially people wanted to blame Amazon for these breaches, including the PR company the author of this course was working with at the time. However, as explained earlier today, this responsibility lies squarely with the customer in the AWS Shared Responsibility Model.
More S3 Buckets: Time Warner Cable - When: September 2017. Data exposed: PII about 4 million customers, proprietary code, and administrator credentials. Pentagon exposures - When: 3 leaks found in September and November. Data exposed: terabytes from a spying archive and resumes for intelligence positions. Alteryx - When: December 2017. Data exposed: personal information about 123 million American households. Accenture - When: October 2017. Data exposed: the keys to the kingdom: master access keys for Accenture's account with AWS KMS. National Credit Federation - When: December 2017. Data exposed: 111 GB of detailed financial information, including full credit reports, about 47,000 people. S3 bucket breaches...continued.
An analysis of S3 buckets in the Alexa Top 10,000: Rhino Security Labs analyzed the Alexa top 10,000 websites, discovering which sites use S3 and what permissions were applied to the buckets they found. During the S3-bucket craze, Rhino Security Labs published a report on the S3 buckets exposed by domains in the Alexa Top 10,000, a list of the most popular domain names. https://rhinosecuritylabs.com/penetration-testing/penetration-testing-aws-storage/ They found a lot of faulty configurations when they scanned these buckets. There are very rare use cases where a bucket should be exposed directly to the Internet.
Not a public bucket but... The Capital One breach did not involve a public bucket. In this case the attacker leveraged a host with excessive permissions; a private source told the author this internal server had access to ALL the S3 buckets in the account. Not a good idea. The attacker stole about 140 million documents from an S3 bucket. The bucket was not public: it was in the private internal network, likely protected with an S3 endpoint, as will be explained on a later day. The attacker bypassed some website protection controls to get onto a virtual machine in the AWS account, then used the excessive permissions on that virtual machine to exfiltrate all the files from the S3 bucket. For some reason the attacker posted information about the attack on Twitter and stored files in GitHub, so was almost immediately caught. The attacker formerly worked at AWS but had been fired. The exploit in this case was completely preventable: the permissions assigned to the server that accessed the S3 bucket were excessive, and the architecture was not following some best practices that would have prevented this breach. More: https://medium.com/cloud-security/whats-in-your-cloud-673c3b4497fd
Magecart Skimmers: Attackers are inserting code that works as a skimmer into websites, stealing credit cards as people check out on e-commerce sites. The code arrives via content loaded from third-party sites that isn't validated, and is getting inserted into S3 buckets and served up by AWS CloudFront (CDN). Magecart skimmers are a common threat to websites inside and outside the cloud. The attackers insert JavaScript or some other type of code into a legitimate website, and the malicious code steals credit cards or other information as users check out. One type of attack targets content management systems (CMS) like Drupal or WordPress: vulnerabilities in these systems are used to insert the malicious code. An alternative attack inserts code into open S3 buckets, replacing valid files with malicious files. The code could also be loaded via any sort of third-party script or advertisement that developers load in addition to the code on the website itself. These third-party components pose great risk if not validated before exposing the customer to the external code and files.
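One standard way to validate third-party scripts is Subresource Integrity (SRI): the page pins a hash of the script in the tag's integrity attribute, and a browser enforcing SRI refuses to run a modified copy. A sketch of how the integrity value is computed and checked; the script content is a stand-in:

```python
import base64
import hashlib

def sri_sha384(script_bytes: bytes) -> str:
    """Compute an SRI integrity value (sha384-<base64 digest>) for a script."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode()

def browser_would_run(script_bytes: bytes, pinned_integrity: str) -> bool:
    """A browser enforcing SRI only executes the script if the hash still matches."""
    return sri_sha384(script_bytes) == pinned_integrity

original = b"console.log('checkout');"          # the script as reviewed
pinned = sri_sha384(original)                    # value placed in the <script> tag
tampered = original + b"stealCardData();"        # a skimmer appended by an attacker
```

If an attacker swaps the file in an open S3 bucket or on a CDN, the hash changes and the pinned page no longer loads it, which blocks exactly the substitution attack described above.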
Subdomain Takeover: When companies leave CNAMEs pointing to subdomains they are no longer using but someone else can register, an attacker who registers that subdomain can monitor requests or post malicious content. A domain name points to a particular server hosting a web site, like https://2ndsightlab.com. A subdomain adds a prefix to the domain and can point to some other location; https://i.2ndsightlab.com hosts images for https://2ndsightlab.com. A CNAME points one domain to another. I could set up an S3 bucket called 2sl.s3.com and then create a CNAME like http://mybucket.2ndsightlab.com that redirects to http://2sl.s3.com. The problem would arise if I deleted my S3 bucket and stopped using it but left http://mybucket.2ndsightlab.com pointing to http://2sl.s3.com. An attacker could come along and create a new bucket with that name, and any traffic going to http://mybucket.2ndsightlab.com would be directed to the bucket set up by the attacker. If you're not familiar with S3 buckets and static web hosting in an AWS S3 bucket, we'll be talking about that more later. The important thing to be aware of is that developers should not leave CNAMEs pointing to content they no longer control. In the case of EA, it looks like they set up a domain where users could register for something at ea-invite-reg.azurewebsites.net, and probably pointed to that subdomain from something on their own site. Later they stopped using the domain and let it lapse, but kept pointing to it from some other domain they still hosted. An attacker could then set up ea-invite-reg.azurewebsites.net and put malicious content in it. Users redirected there from some EA source would think they were on the EA web site, but actually they would be putting content into the attacker's malicious site.
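The takeover condition reduces to: a CNAME you publish points at a name you no longer control. A toy checker built on that definition; the record set and the "targets we control" inventory are invented for illustration, and a real check would also query DNS and the provider's APIs:

```python
def dangling_cnames(cname_records: dict[str, str], targets_we_control: set[str]) -> list[str]:
    """Return subdomains whose CNAME target we no longer control -- takeover candidates."""
    return [sub for sub, target in cname_records.items() if target not in targets_we_control]

# Hypothetical DNS zone: subdomain -> CNAME target.
RECORDS = {
    "i.example.com": "images-bucket.s3.amazonaws.com",
    "reg.example.com": "old-event.azurewebsites.net",  # resource was deleted, CNAME left behind
}
# Inventory of cloud resources the company still owns.
OWNED = {"images-bucket.s3.amazonaws.com"}
```

Running this kind of inventory comparison regularly, and deleting CNAMEs before deleting the resources behind them, is the practical defense against the EA-style takeover described above.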
Stolen cloud credentials: phishing emails, social engineering, posted to GitHub, shared on Slack, emailed to a coworker, files moved to personal cloud accounts: the human factor. Humans sometimes do things they shouldn't. They don't always do it on purpose; sometimes actions have good intentions or are simply due to curiosity or misunderstanding, but in any case here are some problems caused by unwanted human actions. Credentials are stolen via phishing emails or social engineering, or posted to GitHub. Developers may share credentials, keys, and secrets on Slack, Confluence, and other internal social media, chat, and communication platforms that offer an attack vector. Credentials may be emailed to a coworker or shared outright (as was the case, per a student in one of the author's classes, with Edward Snowden: apparently he was a nice guy and asked to borrow a coworker's credentials to access sensitive documents). Additionally, people have been known to steal company assets by moving them to cloud storage accounts like Box, Dropbox, Evernote, or Google Docs. They also use these systems to bypass company systems designed to prevent data loss, in order to share information with vendors, partners, and customers when they are simply trying to get their job done and security products are preventing the transmission of data. Developers often create security problems when they are simply trying to make something work. They may open up a network too broadly just to get systems working, or create a broad CORS configuration rule because it allows their microservices to work without browser warnings. Typically these things are not done maliciously; they are done with the best of intentions! They want to make the system work and get their jobs done. From a security perspective this may seem ridiculous but, until you have worked as a software engineer, don't judge. It's not as easy as you think.
Former employee steals data: Shared credentials may have allowed a former employee to access HIPAA data in the cloud. When a person leaves the company, it's important to track and understand all the systems they have access to in order to remove that access appropriately. https://www.csoonline.com/article/3265109/former-employee-visits-cloud-and-steals-company-data.html
Fired employee steals credentials, kills servers: In this tale of terminated AWS servers, an employee loses his job, steals the credentials of his coworker, “Speedy” Gonzales, and terminates a number of servers at his former employer. “Speedy” was not using 2-factor authentication. https://nakedsecurity.sophos.com/2019/03/22/sacked-it-guy-annihilates-23-of-his-ex-employers-aws-servers/
Attacks on Credentials and Access: Password reuse - check https://haveibeenpwned.com. Password spraying (recent Citrix breach). Secrets such as keys and passwords published to GitHub. Credentials extracted from memory (e.g. Mimikatz on Windows). Session tokens stolen via CORS misconfigurations and other flaws. Brute-forced SSH and RDP logins on cloud servers. PHISHING and social engineering to obtain passwords and access systems. Credentials are one of the main ways attackers breach cloud accounts and install malware. From money-wasting cryptominers to completely deleted accounts, malware has been a source of problems in cloud security for many companies. Some of the problems companies have experienced: Password reuse is not cloud specific. Reusing passwords in multiple places allows attackers to use credentials stolen in one breach to access other systems that were not actually breached. Troy Hunt is a security researcher who publishes a website that tracks stolen credentials from data breaches; you can enter your email address to see if your account or data has been breached, and get alerts for future breaches, at https://haveibeenpwned.com Password spraying means taking a potential password and trying it on many different systems within an organization at the same time. Attackers do this because a system may have a lockout policy or rate-limiting feature that will block or lock out the account if too many bad attempts are made. Instead of making many attempts on one system, attackers make one attempt on many systems and spread the attempts out to avoid these security features. The recent Citrix breach was a result of password spraying. https://krebsonsecurity.com/tag/citrix/ Publishing secrets to GitHub has proven to be a common problem for cloud developers. GitHub is a source control system, like the BitBucket we use in the class labs. Developers sometimes share code publicly that has user accounts and passwords, encryption keys, cloud credentials, and other secrets embedded in it. Attackers scan the code, find these credentials, and use them for malicious activity. Malware can extract credentials from memory; one example is Mimikatz, a software tool used by attackers to steal passwords from memory on Windows systems. Attackers can also steal session tokens used to track logged-in users and grant access to system resources after initial authentication; these tokens are sometimes passed around unencrypted or stored in insecure cookies. SSH and RDP credentials constantly face brute-force attacks when exposed to the Internet; many Linux systems in the cloud are accessed via SSH, and RDP is used to remotely access Windows systems. Phishing and other forms of social engineering trick users into giving up their passwords or clicking links that pass their credentials to attackers. Although not foolproof, one of the best ways to limit these attacks is MFA (multi-factor authentication).
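Scanning code for leaked cloud credentials is largely pattern matching: AWS access key IDs, for example, are 20 characters beginning with a prefix like "AKIA". A minimal scanner sketch using the documented AWS example key (a fabricated, non-functional value):

```python
import re

# AWS access key IDs: a 4-char prefix such as "AKIA" plus 16 uppercase alphanumerics.
AWS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_leaked_key_ids(text: str) -> list[str]:
    """Return all strings in the text that look like AWS access key IDs."""
    return AWS_KEY_ID.findall(text)

sample = """
aws_access_key_id = AKIAIOSFODNN7EXAMPLE   # AWS's documented example (not a real key)
comment: nothing secret on this line
"""
hits = find_leaked_key_ids(sample)
```

Real scanners (and the attackers' versions of them) apply many such patterns, plus entropy checks for secret access keys, across every commit in a repository's history, so simply deleting a pushed secret is not enough: it must be rotated.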
Privileged Credentials: In 2018 the CSA reported that 74% of breaches involved access to a privileged account. The 2019 SANS cloud survey reported that 48.9% of incidents involved credential hijacking and 37.8% involved privileged user abuse. Credentials are a source of misery when it comes to incidents in the cloud. Attackers steal credentials either because end users expose them publicly or via the other means noted. This is your number one threat in the cloud. It seems that credentials WILL be stolen, so we'll look at strategies for minimizing the resulting damage on Day 4. CSA blog: https://blog.cloudsecurityalliance.org/2019/05/10/cloud-workloads-privileged-access/ 2019 SANS survey: https://www.sans.org/reading-room/whitepapers/analyst/2019-cloud-security-survey-38940
Code Spaces ~ the company that got deleted: Code Spaces is the company that got deleted in the cloud. Attackers obtained the credentials, used them to take over the account, demanded money and, when the company didn't pay, deleted everything in the account. Code Spaces was out of business. Code Spaces hosted code and project data for other companies; all those companies lost their data as well in this incident. https://www.csoonline.com/article/2365062/code-spaces-forced-to-close-its-doors-after-security-incident.html
Credentials in GitHub: Many organizations, including one the author worked at, have experienced developers pushing credentials to GitHub. Attackers scan the site looking for secrets and credentials they can use to attack systems. In the case of Uber, it's even worse: attackers were not only able to steal data from Uber, but the company tried to cover it up by paying off the thieves. Eventually the news got out, the CISO at Uber lost his job, and he went to work at Cloudflare. https://arstechnica.com/information-technology/2015/03/in-major-goof-uber-stored-sensitive-database-key-on-public-github-page/ https://www.cnet.com/news/uber-to-pay-148-million-for-failing-to-report-2016-hack/ https://www.cnbc.com/2018/05/16/fired-uber-cybersecurity-chief-joe-sullivan-joins-start-up-cloudflare.html
Social Engineering: Not exactly a cloud breach, but a common form of social engineering at this time: companies are facing scams where a person impersonating a top executive tells a lower-level person to wire money out of the company. Although this is not a cloud breach, you can imagine how a similar "order from an executive" to provide access to another person or delete a critical system might be carried out by a lower-level employee without question. https://www.csoonline.com/article/2961066/supply-chain-management/ubiquiti-networks-victim-of-39-million-social-engineering-attack.html
Ransomware: A cloud hosting provider faced ransomware on Christmas Eve 2018. Ransomware can be installed on cloud systems the same way it can be installed on-premises. https://krebsonsecurity.com/2019/01/cloud-hosting-provider-dataresolution-net-battling-christmas-eve-ransomware-attack/
RDP and SSH passwords ~ $10 on the Dark Web: Many stolen RDP and SSH passwords appear on the dark web. Attackers steal these credentials through many means, including brute-force attacks that repeatedly guess the passwords on cloud systems. https://securingtomorrow.mcafee.com/other-blogs/mcafee-labs/organizations-leave-backdoors-open-to-cheap-remote-desktop-protocol-attacks/
CloudHopper targeting MSPs: Managed service providers (MSPs) and managed security service providers (MSSPs) are companies that handle IT and security for other companies. CloudHopper malware is targeting managed service providers. An Advanced Persistent Threat (APT) is a group that persistently takes actions to break into systems and companies; the attacks themselves may be simple, but the groups carrying them out are organized and very stealthy. In this particular attack, sensitive data and systems at organizations are being accessed by stealing credentials at the MSPs that provide IT services for those organizations. https://www.computing.co.uk/ctg/news/3070613/norways-visma-the-latest-cloud-computing-company-targeted-by-china-linked-apt10-hacking-group
Wipro Breach: Wipro is India's third-largest outsourcing firm. Phishing campaigns yielded stolen credentials used to access hundreds of computers. Attackers pivoted from Wipro to customer systems, including Fortune 500 companies, and other similar firms were targeted. The Wipro breach is another example of attackers targeting vendors to leverage their systems as a pivot point into other companies, or to steal data managed by the vendor. Many of Wipro's clients are Fortune 500 companies. Attackers breached Wipro with phishing emails and then used stolen credentials to access other computers. Brian Krebs broke the story, which Wipro originally disputed. https://krebsonsecurity.com/2019/04/experts-breach-at-it-outsourcing-giant-wipro/ Wipro was surprised on an earnings call: while stating that the facts of the story were incorrect, they didn't know Brian Krebs was on the call, and he confronted them about which facts were incorrect. You can hear the recording at the link below. That second story also covers how other vendors, such as Infosys and Cognizant, are being targeted. Companies at times give vendors a lot of access into their systems; consider how attackers can leverage system access and data provided to vendors to get into your company. https://krebsonsecurity.com/tag/wipro-data-breach/
Nation State Attacks and APTs

Nation-state attackers are backed by governments to perform cyber espionage, attacks, or campaigns against other countries. These and other groups associated with organized crime are called Advanced Persistent Threats (APTs) because they will spend a lot of time and money, over years, attempting to get into systems. In some cases governments have secret hacking groups, which are no longer so secret. The Chinese government is linked to groups like APT10 and APT17. Fancy Bear in Russia is said to have ties to the Russian government. The U.S. also has elite hackers in organizations like the NSA and CIA. The U.S. is said to have launched the first cyber weapon, called Stuxnet, against Iran. You can read the whole story in this book and learn about famous U.S. hackers like Mudge (@dotMudge on Twitter): https://www.amazon.com/Countdown-Zero-Day-Stuxnet-Digital/dp/0770436196 MITRE ATT&CK covers some nation-state attackers. For example, you can read more about APT17 and other attackers located around the world on this page: https://attack.mitre.org/groups/G0025/
Critical Infrastructure Attacks

Operation Ivy Bells, carried out by the US Navy, tapped into underwater Soviet cables. NATO has expressed concerns over Russian submarines near critical underwater cables in Nordic waters.

In the 1970s, US divers scoured the ocean floor on a top-secret mission to find underwater Soviet communication cables. They found what they were looking for and installed a 20-foot-long tap on the cable, which was then used to record conversations. Eventually an NSA employee sold information about the tap to the Russians for $35,000. The Russians retrieved the tap, and it is now on display at the KGB museum in Moscow, according to this article: https://www.military.com/history/operation-ivy-bells.html Recently NATO raised concerns that Russian submarines are prowling around undersea cables in Nordic waters. What are the implications of using cloud providers and services that send data across the ocean floor if someone can tap into those messages? How secure are the encryption algorithms protecting that data? https://www.washingtonpost.com/world/europe/russian-submarines-are-prowling-around-vital-undersea-cables-its-making-nato-nervous/2017/12/22/d4c1f3da-e5d0-11e7-927a-e72eac1e73b6_story.html In a possibly related event, at least 14 sailors on a Russian submersible were killed. Many questions were raised about the mission of those sailors, since many of them were captains; usually only one captain is on a single submarine. The threats to cloud infrastructure do not exist only in data centers, and a much larger picture is at stake. https://www.washingtonpost.com/world/europe/fire-on-russian-submersible-vessel-kills-at-least-14-sailors/2019/07/02/d0e327da-9cd0-11e9-83e3-45fded8e8d2e_story.html
Cryptominers at Tesla

Cryptominers are prevalent in the cloud. In a conversation with Microsoft employees, they said they are constantly shutting down cryptominers. Cryptomining was also one of the first detections highlighted for AWS's GuardDuty service, presumably because the problem is so widespread. Cryptominers don't require GPUs to run; newer algorithms and more anonymous cryptocurrencies like Monero can run on CPUs and even IoT devices and mobile phones. Attackers are stealing credentials, creating cloud resources, and using them to install cryptominers. In this case, Tesla's public cloud was running cryptominers in a Kubernetes cluster whose console had been left exposed with no password protection. https://www.wired.com/story/cryptojacking-tesla-amazon-cloud/
Cryptojacking honeypot

Sysdig set up a honeypot and did some interesting research to see what types of malware would affect an exposed virtual machine in the cloud. The first attackers attempted to perform cryptojacking. Cryptojacking is a term for other people using your resources to perform cryptomining. They install cryptominers on your resources, or use your account to spin up new resources, in order to mine for cryptocurrency. Running cryptominers is becoming more and more expensive because they use a lot of electricity and computing power to guess numbers. When miners guess the right numbers, they "prove" that a transaction is valid and get paid, either in cryptocurrency or in transaction fees from the person using the cryptocurrency to buy or sell something. The whole concept of guessing numbers to prove transactions are valid is not rocket science and doesn't seem very intelligent to the author of this course, but for some reason it has caught on. It is in part due to the ability to hide transactions from governments, as in the case of people in China trying to send money outside the country without the government knowing. This may be why some countries have banned it. By hiding funds, people can potentially leave a country with their money without the government knowing, or avoid paying taxes. In some cases the transfer of funds is used to hide criminal activity and launder money. In any case, when people perform cryptocurrency transactions, people known as "miners" get paid when they validate those transactions, and they want to use resources that you are paying for to do it! They get onto your systems using malware or stolen credentials. https://sysdig.com/blog/detecting-cryptojacking/
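The "guessing numbers" described above is hash-based proof of work. As a rough sketch only (a toy difficulty target, not any real cryptocurrency's rules), a miner repeatedly hashes the transaction data with a changing nonce until the digest happens to start with enough zeros:

```python
import hashlib

def mine(block_data: str, difficulty: int = 2) -> int:
    """Guess nonces until sha256(block_data + nonce) starts with
    `difficulty` zero hex digits - a simplified proof of work."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # the "right number" that proves the work
        nonce += 1

# Each extra zero multiplies the expected work by 16 - this is why
# mining burns electricity, and why stolen cloud CPU time is attractive.
nonce = mine("alice pays bob 1 coin", difficulty=3)
print(nonce)
```

Real networks set the difficulty so high that finding a valid nonce takes enormous compute, which is exactly the cost attackers offload onto a victim's cloud account.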
Unpatched DevOps Systems

Jenkins, software used to deploy cloud resources, is attacked frequently because it is often exposed and unpatched. One student who performs penetration tests for companies told the author of this class, "We always get the Jenkins server." Jenkins servers are used to deploy systems in the cloud. If attackers can get onto a Jenkins server, then presumably they can also deploy systems in the cloud. This is happening, according to accounts such as the example on this page. In this case the attacker chose to install cryptomining software to make money. Given that Jenkins servers typically have a lot of power to wreak havoc in a cloud environment, it could have been worse! https://www.csoonline.com/article/3256314/security/hackers-exploit-jenkins-servers-make-3-million-by-mining-monero.html Jenkins allows developers to perform tasks using plugins. More than 100 Jenkins plugins were found to be vulnerable. https://www.zdnet.com/article/security-flaws-in-100-jenkins-plugins-put-enterprise-networks-at-risk/ In yet another example, a Jenkins server used by GE Aviation was exposed to the Internet and revealed passwords and source code. https://threatpost.com/ge-aviation-passwords-jenkins-server/146302/
Elasticsearch databases exposed to the Internet

In too many cases, developers are exposing records directly to the Internet via cloud databases like Elasticsearch. As more people without security backgrounds deploy systems in the cloud and define networking as they see fit, more and more data stores are being exposed. This happens repeatedly, and there are many examples. In this case, an open Elasticsearch instance exposed 82 million records. https://securityaffairs.co/wordpress/78643/data-breach/elasticsearch-instances-data-leak.html
MongoDB on the Internet exposes 2 billion records

MongoDB is another database exposed all too frequently. In this breach, a MongoDB database exposed to the Internet leaked the most records ever: 2 billion records were exposed as a result of the misconfiguration. If you signed up for https://haveibeenpwned.com, there's a good chance you got an alert. https://www.hackread.com/verifications-io-breach-database-with-2-billion-records-leaked/
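One way to catch this class of mistake in your own environment is to probe your public addresses for well-known database ports before attackers do. A minimal sketch in Python (the service-to-port map is illustrative, and you should only scan hosts you own):

```python
import socket

# Default ports for services that are frequently found exposed.
SERVICES = {"Elasticsearch": 9200, "MongoDB": 27017, "Redis": 6379}

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check localhost; replace with your own public IPs.
for name, port in SERVICES.items():
    print(name, is_port_open("127.0.0.1", port))
```

A reachable port is not proof of a breach, but an Internet-facing Elasticsearch or MongoDB port with no authentication is exactly the misconfiguration behind the breaches above.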
DevOps tools vulnerabilities

Confluence allows teams to share information. A security tester found a flaw and then used Google to search for all the systems exposed to the Internet that contained this flaw.

In this instance, a person who does bug bounty testing found a flaw in an application called Confluence. He then used Google to find, in a short amount of time, numerous systems with that flaw exposed to the Internet. Had these systems not been exposed to the Internet, he would not have found what he was looking for via Google. If these companies were using Confluence via a public SaaS solution, they would have been vulnerable no matter what. Consider if and how you can lock down your accounts, even those provided by a vendor, on a private network. We'll talk more about that on day 2. Also be very careful with third-party components, plugins, and widgets you include in your applications, as we cover in more detail on day 3.
Email on Azure

The Deloitte breach involved an email system hosted on Azure. The details of this breach are unknown, but somehow the attackers accessed administrative accounts. Initially Deloitte reported only six clients were affected; later reports indicate the breach was bigger.

The details of the Deloitte breach are unclear, but it seems that attackers somehow got hold of cloud credentials and accessed a mail server with sensitive data. According to some reports, Deloitte was migrating mail servers to the cloud at the time.
VM and container escapes

CloudBurst ~ Black Hat 2009: exploits a vulnerability in VMware Workstation via a specially crafted video file. Container escape ~ CVE-2019-5736: a container escape that allows taking over the host. DNSMasq vulnerability ~ CVE-2017-14491: affected Kubernetes and could allow taking over a cluster.

When the same hardware hosts virtual machines for multiple customers, there is always a chance that a programming error or system flaw allows unauthorized access to systems and data. This could occur when an attacker in a virtual machine escapes from the VM and is able to access the hypervisor, or when code in a container escapes the container and accesses the host. The issue also arises when the control plane used to manage virtual machines or containers is accessed or breached. Although this is a real threat, at this time most breaches do not require such extensive effort, because simple mistakes give attackers easy access to systems and data. CloudBurst: https://searchcloudsecurity.techtarget.com/definition/Cloudburst-VM-escape https://www.blackhat.com/presentations/bh-usa-09/KORTCHINSKY/BHUSA09-Kortchinsky-Cloudburst-SLIDES.pdf Container escape: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5736 https://www.zdnet.com/article/virtual-machine-exploit-lets-attackers-take-over-host/ DNSMasq: https://security.googleblog.com/2017/10/behind-masq-yet-more-dns-and-dhcp.html
CloudBleed

CloudFlare is a CDN, or content delivery network, that caches and hosts data closer to end users for many big organizations. A flaw in CloudFlare resulted in a buffer overflow that exposed customer data.

Another type of "escape" involves systems with flaws that allow one customer to see another customer's data, or data escaping from the system through some sort of vulnerability. In this example, the CloudFlare system leaked data for many customers through a software bug, a buffer overflow, which exposed data in memory. Companies hire CloudFlare to host their data closer to their end customers, or to front their websites to handle excessive load or protect against DDoS attacks. This breach affected many different websites. https://www.theregister.co.uk/2017/02/24/cloudbleed_buffer_overflow_bug_spaffs_personal_data/
These days it seems like every breach needs some branding: a logo, a website, and a public relations campaign. The Spectre and Meltdown vulnerabilities, discovered by security researchers at Google, demonstrate this very well. These attacks affect the underlying hardware and would allow a malicious program to access secrets in memory. As you can imagine, hardware is not easy to patch. One of the biggest concerns with these particular vulnerabilities was the potential for attackers to leverage them on cloud systems to gain access to the underlying host and to other customers' virtual machines on the same host. Rather than fix the underlying hardware, the companies that make operating systems found a way to patch the software to prevent exploitation of the vulnerability. The cloud providers had most of their systems patched within about two days. https://meltdownattack.com/ https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html https://aws.amazon.com/security/security-bulletins/AWS-2018-013/
Common attacks still apply

If you host your web application in the cloud, how different is it really? The OWASP Top 10 and the MITRE ATT&CK framework still apply. If you deploy Internet-accessible vulnerable software, it's still vulnerable. Malware that can run on a server in your datacenter can run in the cloud. Some things are not available to you, but they are still there under the hood: routers and other network equipment managed by the CSP. Architectural differences may change attack vectors, but the threats still exist.

Some recent attacks related to the OWASP Top 10 were listed in the notes of a prior slide. All of the same attacks still apply. Additionally, although you can't access the cloud hardware and systems, organizations still need a way to validate the security of those systems through third-party audits, as discussed. All the systems that connect to the cloud may also prove to have a vulnerability that offers a gateway into the cloud, or vice versa, on private networks.
Security Research and Malware

For new breaches, security researchers try to get a copy of the malware and then: open the code in a disassembler; review the assembly code; run it in a segregated environment; determine how it works; and extract indicators of compromise (IOCs). These IOCs can be used to block the malware using various tools.

What happens when new malware is discovered? Security researchers around the world are constantly evaluating and trying to stop malware. There are different tools and services that help evaluate malware. The first thing researchers do when they see new malware is try to get a copy. They then potentially take any or all of the following steps. They may open the malware in a disassembler, which shows them the actual machine code (not a high-level language like Java or C++, but the low-level code used to interact with the hardware, called assembly language). One of the most well-known disassemblers is IDA Pro, but it can be very expensive; the NSA recently released an open source disassembler called Ghidra, though some researchers are skeptical that it has a backdoor. They may also review web software scripts such as VBScript or JavaScript, or software that is not packed or compiled, in its natural state, to try to reverse engineer it. Other tools can also help, such as tools that look into the details of documents, or debuggers and other tools that reveal information about the application at runtime. Researchers may also run the software in a controlled environment to discover what it is doing. They may run it on an air-gapped, segregated network and take steps to trick the malware into thinking it is connected to its command-and-control server, in order to reverse engineer its behavior. This can be risky and must be done carefully to make sure the malware cannot infect computers around it. Some malware will detect when it is running in a VM and shut down. Other malware will delay starting to trick the researcher into thinking it is benign.
By exploring the malware's behavior, the researcher can determine indicators of compromise, or IOCs, that companies can use to block the unwanted behavior. The malware may reach out to a certain domain name, use a particular user agent, or do something else unique that differs from the behavior of legitimate systems. That behavior can then be blocked to thwart the malware. Traditional virus scanners would make a hash of the malware and block any executable with the same hash, but most malware can bypass these types of checks by simply changing one character in the file, which changes the hash.
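The hash-evasion point above is easy to demonstrate: flipping a single byte in a file produces a completely different SHA-256 digest, so a blocklist keyed on the old hash no longer matches. A small sketch, with made-up bytes standing in for a real binary:

```python
import hashlib

# Synthetic stand-in for a malicious executable (not real malware).
original = b"MZ\x90\x00 fake malware body ."
modified = original[:-1] + b"!"   # attacker changes one byte

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(modified).hexdigest()

print(h1 == h2)  # False: a hash-based blocklist misses the modified file
```

This is why behavioral IOCs (domains contacted, user agents, runtime behavior) tend to be more durable than file hashes.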
Twitter is one of your best real-time threat feeds!

Real-time threat data, many researchers reporting... a few examples.

If you are not already on Twitter, you may be surprised to learn that Twitter is one of your best sources of real-time information about security threats. Many prominent researchers are on Twitter. They report on security breaches, comment on security issues, and publish information to help you protect your systems. For example, the author of this course worked on a security research team for a company when WannaCry broke out overnight. Checking Twitter the next morning revealed that a major security breach was shutting down hospitals in the UK and other businesses. Top researchers were publishing detailed malware analysis, down to the bytecode, on Twitter. One particular researcher, Marcus Hutchins, registered a domain that turned out to be a kill switch that stopped the malware. The author was following these researchers and watched the events unfold almost in real time by following the right people on Twitter. Marcus Hutchins, a security researcher for Malwarebytes, was later arrested by the FBI on his trip to DefCon later that year, on unrelated charges for selling malware in years past. The outcome of this case is still pending. Many in the security community believe he is innocent, as there's a fine line between security research to determine whether systems are vulnerable and writing malware that harms companies. Hutchins has since dropped off Twitter, citing abuse by people online, so you won't be able to follow him anymore.
Security Hype and Drama

Breaches with brand names. Over-sensationalized headlines. Speculation and questions instead of facts. Security vendors rushing to put out a story. Misunderstandings by marketing people. General rule: wait two days. The hype will fade. The facts will emerge.

While the author was teaching a security class in San Jose, a story emerged about a chip embedded inside Apple and AWS servers that had supposedly existed for over seven years. Bloomberg claimed an anonymous source had revealed this spy chip to them. The story broke and was all over the news. Apple and AWS came out with strong denials of the chip's existence. Some in the security community were adamant the chip story would prove true. After following many breaches and malware outbreaks, the best advice is to follow the news closely for two days before making any sort of judgment, because new facts will likely arise after the initial announcement. In the US, we have a premise in the court of law: innocent until proven guilty. In the author's opinion, after watching the story unfold for two days and being asked about it by many people, there is not enough evidence to prove this chip actually existed. Additionally, it seems highly unlikely that the chip could have existed for over seven years without the story somehow being exposed by someone. That's not to say it couldn't be proven true later, but at this time there is not enough evidence that there was a chip, and more evidence, based on later stories and statements, that the chip never existed. The underlying sources used for the story have never been fully validated or made accessible to public scrutiny. https://www.bloomberg.com/news/features/2018-10-04/the-big-hack-how-china-used-a-tiny-chip-to-infiltrate-america-s-top-companies
Threat lists for software tools

Security organizations and vendors publish lists of the most prevalent malware. This list comes from CIS (Center for Internet Security). Keep in mind that vendor threat lists depend on what that vendor's devices catch. Threat lists of IP addresses or domain names can be used in cloud security services like GuardDuty.

Some organizations publish threat lists you can follow to learn about new security problems, breaches, and malware outbreaks. The Center for Internet Security has a threat list on its website. The cloud vendors incorporate threat lists into their security tools. https://www.cisecurity.org/cybersecurity-threats/ Bear in mind that threat lists are only as good as the tools that analyze the data. When a company reports a huge increase in a certain type of malware, it could be that its software only just started recognizing that particular type of malware, while other types go unreported by that vendor. Use multiple sources! The cloud vendors have an enormous amount of data that can be used to find malware and security problems in the cloud, which is one reason using tools from the major IaaS vendors is beneficial. We'll talk more about tools that do this, such as Amazon GuardDuty, later.
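Consuming a threat list usually comes down to parsing the published file and matching observed indicators against it. A minimal sketch, using a made-up feed rather than real CIS data:

```python
# Sketch: match observed connection IPs against a downloaded threat list.
# The feed contents here are documentation-range placeholders, not a real list.
threat_feed = """
# Comment lines and blanks are common in published lists.
203.0.113.7
198.51.100.23
"""

# Parse the feed into a set for fast membership checks.
blocked = {
    line.strip()
    for line in threat_feed.splitlines()
    if line.strip() and not line.strip().startswith("#")
}

# IPs observed in (for example) VPC flow logs.
observed = ["198.51.100.23", "192.0.2.10"]
hits = [ip for ip in observed if ip in blocked]
print(hits)  # ['198.51.100.23']
```

Services like GuardDuty do essentially this at scale, combining provider-curated lists with any custom threat lists you upload.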
Be aware

Knowing that threats exist is one step in the right direction. Be aware of security incidents and breaches that are occurring. Use those threats to analyze your own environment. Prioritize security efforts where the risk is highest and most damaging. Understanding the threats will help inform architecture decisions. Consider threats against a system using threat modeling.

The most important takeaway from this section is to be aware. Understand the types of threats that are prevalent in the cloud in general and in your specific industry. By being aware, you can take the appropriate steps to stop, block, and find malicious behavior. Awareness needs to expand beyond the security team! It needs to include developers, project managers, product managers, legal teams, human resources personnel, line-of-business owners, and most of all, executive leadership. By understanding the threats to a business, companies can be more proactive and smarter about stopping them!
Lab: Sharing secrets with GPG

Credentials are king in the cloud. Credentials are one of the primary ways attackers get into and take actions in cloud systems (and on-premises systems, for that matter). When sharing credentials for systems, it is important to send them to other people securely. Email is not secure. Posting them on Slack is not secure. Leaving them in a plain-text file in an S3 bucket with broad permissions is not secure. One way to secure credentials when sending them in email is to use GPG. This lab does two things: 1) it helps students understand asymmetric cryptography, and 2) it gives students some experience with GPG. There are manual and more automated methods of using GPG; this is a simple introduction.
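As a preview of the asymmetric idea behind the lab: anyone can encrypt with the recipient's public key, but only the holder of the matching private key can decrypt. A toy RSA example with deliberately tiny textbook numbers (real GPG keys are thousands of bits and use hybrid encryption; never use this for actual secrets):

```python
# Toy RSA to illustrate public/private key pairs. Tiny, insecure numbers.
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)   # used only to derive the private key
e = 17                    # public exponent: (e, n) is the PUBLIC key
d = pow(e, -1, phi)       # private exponent: (d, n) is the PRIVATE key

message = 65                         # a small number standing in for a secret
ciphertext = pow(message, e, n)      # sender encrypts with the public key
recovered = pow(ciphertext, d, n)    # only the private key recovers it
print(ciphertext, recovered)
```

GPG wraps this same public/private split in key management, identities, and hybrid encryption, which is what the lab exercises hands-on.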
Day 1: Cloud Security Strategy and Planning
Cloud Architectures and Cybersecurity
Introduction to Cloud Automation
Governance, Risk, and Compliance (GRC)
Costs and Budgeting
Malware and Cloud Threats