Page 1 of 9
Cyber Security for Critical Infrastructure
Mohit Rampal - MS Systems & Information, BITS Pilani - Regional Manager South Asia, Codenomicon Software India Pvt. Ltd.
Shubika Soni - Manager Security - North & West
ABSTRACT
Cyber security has gained huge importance in the last few years,
especially for industrial control systems (ICS) and critical
infrastructure. The air-gap mindset of not connecting ICS and
critical infrastructure to the internet at all does not guarantee
an organization that it is secure and that no cyber crime can be
committed against it. Stuxnet is a present, live example.
Cyberspace is gaining importance, and borders are no longer merely
physical: nations now contend for control over each other's cyber
borders. It is also believed that the first online cyber killing
could happen by the end of this year, as cyber criminals, APT
actors and state-sponsored cyber terrorists exploit internet
technologies to target victims. Europol said it expected "injury
and possible deaths" caused by computer attacks on critical
infrastructure.
With a global manufacturing base, rapidly changing technology and
organizations optimizing costs, back doors and unknown
vulnerabilities present in products, knowingly or unknowingly, are
a growing threat. This paper highlights processes built on modern
technologies that help organizations be proactive about security.
It covers how fuzzing can be used to find zero-day or unknown
vulnerabilities, such as those exploited by Stuxnet or exposed by
Heartbleed, and ways to create quick patches so as to mitigate
them and build a stronger, more robust network.
INDEX TERMS
Fuzzing, Infrastructure & Critical Services, Hacktivists, Power
System Security, SCADA Security, Security, Specification
based fuzzing, Zero Day Vulnerability, Unknown Vulnerability
INTRODUCTION
Industrial control systems (ICS) and Critical Infrastructures have
relied on security through obscurity and isolation. But with
increasing connectivity and the use of standard protocols and
off-the-shelf software and hardware, the risk of cyber attacks has
increased. ICS protocols are extremely vulnerable to attacks
because they were not designed with security in mind, and critical
infrastructure is not typically hardened against attacks.
Presently, organizations are migrating from traditional SCADA
systems to IP-based SCADA, which is far more advanced but also
brings new security issues. The transition from the mature IPv4 to
the new standard, IPv6, only increases this risk. Unknown
vulnerabilities expose networks and services to security, quality
and robustness issues, reducing their operational reliability.
Vulnerabilities are unpatched software flaws. Without
vulnerabilities there would be no attacks, because hackers must
find a vulnerability in a system in order to devise an attack
against it. However, vulnerabilities can also be triggered by
simple unexpected inputs during events such as heavier-than-normal
use or system maintenance. Unknown vulnerabilities differ from
known vulnerabilities in that their existence is unknown: there
are no ready patches or updates, and attacks against them can go
unnoticed. Unknown vulnerabilities are a typical entry vector for
malware and other harmful exploits into information networks. As a
reference, Stuxnet relied on four different unknown
vulnerabilities to function. Finding unknown vulnerabilities is a
critical hardening activity for networks and networked devices
where a high level of trust is required.
Robustness testing is based on the systematic creation of a very
large number of protocol messages (tens or hundreds of
thousands) that contain exceptional elements simulating
malicious attacks. This method provides a proactive way of
assessing software robustness, which, in turn, is defined as "the
ability of software to tolerate exceptional input and stressful
environment conditions". A piece of software which is not robust
fails when facing such circumstances. A malicious intruder can
take advantage of robustness shortcomings to compromise the
system running the software. In fact, a large portion of the
information security vulnerabilities reported publicly are caused
by robustness weaknesses. Robustness problems can be exploited,
for example, by intruders seeking to cause a denial-of-service
condition by feeding maliciously formatted inputs into the
vulnerable component. Certain types of robustness flaws (e.g.,
common buffer overflows) can also be exploited to run externally
supplied code on the vulnerable component.
The software vulnerabilities found in robustness testing are
primarily caused by implementation-time mistakes (i.e. mistakes
made during programming). Many of these mistakes are also
vulnerabilities from a security point of view. During testing, these
mistakes can manifest themselves in various ways:
 The component crashes and then possibly restarts.
 The component hangs in a busy loop, causing a
permanent Denial-of-Service situation.
 The component slows down momentarily, causing a
temporary Denial-of-Service situation.
 The component fails to provide useful services causing a
Denial-of-Service situation (i.e. new network
connections are refused).
At the programming-language level there are numerous types of
mistakes that can cause robustness problems: missing length
checks, pointer failures, index handling failures, memory
allocation problems, threading problems, etc. Not all of these
problems have a direct security impact, yet their removal always
promotes the reliability of the assessed software component.
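As an illustration of the most common of these mistakes, a missing length check, consider this toy type-length-value parser (the record format and function names are invented for illustration, not taken from any real protocol):

```python
import struct

def parse_tlv(data: bytes) -> bytes:
    """Parse a toy type-length-value record: 1-byte type, 2-byte
    big-endian length, then payload. The length field is trusted
    without validation -- the kind of mistake fuzzing exposes."""
    rec_type, length = struct.unpack_from("!BH", data, 0)
    return data[3:3 + length]  # BUG: length never checked against len(data)

def parse_tlv_robust(data: bytes) -> bytes:
    """The same parser with the missing length checks added."""
    if len(data) < 3:
        raise ValueError("record too short for header")
    rec_type, length = struct.unpack_from("!BH", data, 0)
    if len(data) - 3 < length:
        raise ValueError("declared length exceeds available payload")
    return data[3:3 + length]
```

A fuzzer that mutates the length field immediately exposes the difference: the first parser silently returns truncated data for a record whose declared length exceeds the payload, while the robust version rejects the malformed record. In a memory-unsafe language the same mistake would be an out-of-bounds read.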
Internet abuse refers to the misuse of the Internet to injure and
disturb other users. It is an umbrella term covering cyber crime,
hacktivism, attacks by nation-state sponsored adversaries and
hobbyist crackers. Different types of internet abuse include
unauthorized network access, data theft and corruption,
disruptions to normal traffic flow (e.g. DoS and DDoS attacks),
the propagation of malware, spamming, phishing and botnets.
APT refers to sophisticated Internet abuse performed by highly
motivated, dedicated and well-resourced groups of domain
specialists, such as organized cyber criminals, hostile nation
states and hacktivists. These attacks frequently utilize unknown,
zero-day vulnerabilities. Zero-day vulnerabilities pose the
greatest threat to network security because there are no defenses
against attacks that exploit them. Such attacks can go unnoticed,
and once discovered it takes time to locate the vulnerabilities
and to create patches for them. Advanced attacks like Stuxnet can
utilize multiple zero-days, making them extremely difficult to
defend against.
Presently most organizations have security teams that focus mostly
on known-vulnerability management, using tools for penetration
testing and application testing, and at times may write a few
negative test cases, which are not exhaustive enough to determine
unknown vulnerabilities.
Crackers need to find a vulnerability in a protocol implementation
in order to devise an attack against a target system. By
identifying potential zero-day vulnerabilities proactively, one
can make it significantly harder for crackers to devise attacks;
the best way to prevent zero-day attacks is to get rid of
exploitable vulnerabilities proactively. Fuzzing enables you to
find previously unknown, zero-day vulnerabilities by triggering
them with unexpected inputs.
With growing global economies, manufacturing has also become
global, to reduce costs and increase efficiency, but at a cost:
the cost of security. Both known and unknown vulnerabilities may
exist in these products, placed there knowingly or unknowingly.
Does that mean we stop procuring devices manufactured outside our
country and become country-specific? This may not be possible in
global economies and could cause other global issues. So does it
mean we stop progress and wait? The answer is no. We proceed, but
create stronger processes that help us mitigate these unknown
vulnerabilities and be proactive and prepared rather than wait to
be compromised.
It is believed that attacks against critical infrastructure will
increase, resulting in unavailability of services. In this paper
we discuss in detail how some technologies can be adopted to help
us be proactive and mitigate unknown vulnerabilities.
Some Useful Definitions
• Vulnerability – a weakness in software, a bug
• Threat/Attack – exploit/worm/virus against a specific
vulnerability
• Protocol Modeling – Technique for explaining interface
message sequences and message structures
• Fuzzing – process and technique for security testing
• Anomaly – abnormal or unexpected input
• Failure – crash, busy-loop, memory corruption, or other
indication of a bug in software
• Test cases - A test case is the basic unit of fuzzing. Each
anomalized message or file is a test case. With network
protocol testing, one test case may consist of a single
interaction, with one or more messages exchanged, with
an anomaly in one or more of them or in how the
messages are sent.
• Failures - Broadly speaking, target software fails when it
behaves in a way its creators did not intend. In specific
terms, the bugs uncovered by fuzzing cause various
symptoms, including the following: process crashes and
panics, process hangs (endless loops), resource shortages
(disk space shortage, handle shortage, etc.), assertion
failures, handled and unhandled exceptions, data
corruption (databases, log files, etc.), and any other
unexpected behavior
• FTMM- Fuzz Testing Maturity Model
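The test-case definition above can be captured in a small data structure; the field names below are illustrative, not any particular tool's schema:

```python
from dataclasses import dataclass

@dataclass
class FuzzTestCase:
    """One fuzzing test case: an interaction of one or more messages,
    with an anomaly in at least one of them."""
    case_id: int
    messages: list        # the wire messages, in send order
    anomaly_index: int    # which message carries the anomaly
    anomaly_kind: str     # e.g. "overlong-field", "bad-checksum"
    verdict: str = "untested"   # later set to "pass" or "fail"
```

For example, a case whose second message carries an overlong field would be `FuzzTestCase(1, [valid_msg, b"\xff" * 2048], 1, "overlong-field")`.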
Defining the Problem
Global economies mean global production and global purchase. As
discussed, it also means buying what you get. In short, the
problem is that we buy what is manufactured by global OEMs and,
with it, known and unknown threats. Known threats can be easily
mitigated with tools and next-generation security technologies
such as intrusion detection and prevention systems and firewalls,
but what about the unknown or zero-day vulnerabilities that may
have crept in unknowingly during the development process, or been
knowingly implanted to create back doors or trap doors? Stuxnet is
a live example, as is the more recent Heartbleed, which show how
ICS/SCADA devices can be compromised. A recent study of attacks
showed that five of the attacks surveyed utilized a root
compromise, four took advantage of a user compromise, four others
used a Trojan, three involved a misuse of resources, two used a
worm, one utilized a denial of service, one was a virus, and one
was a social engineering attack. The majority of impacts were
disrupted operations; three attacks disclosed data, two distorted
data, one destroyed data, and one had unknown impact.
Approach
Before we start, we will redefine the problem to be more focused,
ensuring we have identified it clearly enough to have a clear-cut
solution. Let us examine a typical SCADA system used for power
generation and distribution.
fig 1. Power SCADA Generation & Distribution
fig 2. Power SCADA Generation & Distribution
fig 3. A typical SCADA System is used to control a
geographically distributed process
If one looks at it holistically, one would not find any obvious
security loopholes, as IT would have enabled firewalls and other
security measures. SCADA systems are different, however, and
mostly do not integrate easily with mainstream IT, which results
in separate teams monitoring these networks. Installing a firewall
in front of such networks does give some protection, but not total
protection.
Looking at the network, one can see that it contains various IT
and SCADA devices that all intercommunicate via protocols. All
protocols are created per specifications, which define them and
help the software community integrate them into hardware and
software products. To determine how well these protocols are
implemented in products, one would use fuzzing to find
vulnerabilities.
Fuzzing: what is it, and what does it do? Fuzzing is a black-box
robustness testing technique used to reveal unknown, zero-day
vulnerabilities by triggering them with unexpected inputs.
Basically, unexpected data in the form of modified protocol
messages is fed to the inputs of a system, and the behavior of
the system is monitored. If the system fails, e.g., by crashing or
by failing built-in code assertions, then there is an exploitable
vulnerability in the software. While many security techniques
focus on finding known vulnerabilities or variations of them,
fuzzing reveals previously unknown, so-called zero-day
vulnerabilities. Zero-day vulnerabilities can be discovered by
modifying valid samples either randomly or based on the sample
structure; in generation-based fuzzing, the process of data
element identification is automated by using protocol models.
Figure 4 depicts the flow of fuzzing tests.
fig 4. The flow of Fuzzing
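The flow in figure 4 can be sketched as a simple loop; the generate/deliver/health-check callbacks below are hypothetical stand-ins for a real fuzzer's components:

```python
def fuzz_flow(generate_cases, deliver, is_healthy):
    """Minimal fuzzing flow: generate anomalous inputs, deliver each
    to the system under test, and check its health after every case,
    recording the cases that coincided with a failure."""
    failures = []
    for case in generate_cases():
        try:
            deliver(case)
        except Exception:
            pass  # the target rejecting an input is fine; dying is not
        if not is_healthy():
            failures.append(case)  # keep the case for later reproduction
    return failures
```

The returned list is what makes findings actionable: each entry is an exact input that can be replayed against the target to reproduce the failure.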
As mentioned above, fuzzing helps us determine unknown or zero-day
vulnerabilities, and there are many techniques in use. In the
industry, mutation-based, template-based, generation-based and
specification-based fuzzing are all used as methods to determine
unknown vulnerabilities.
Specification-based fuzzing is a form of generation-based
fuzzing, which uses protocol and file format specifications to
provide the fuzzer with protocol or file format specific
information, e.g., on the boundary limits of the data elements.
Specification-based test generation achieves excellent coverage
testing the protocol features included in the specification.
However, new features and proprietary features not included in
the specification are not covered. If no specification is
available, then the best fuzzing results can be achieved with
mutation-based fuzzers. Generation-based testing can also be
complemented with longer mutation-based fuzzing test runs. Some
vulnerabilities might only be triggered through more aggressive
input-space testing. Thus, the best test results are achieved by
combining testing techniques.
Let us go into the process of how modern technology can be used to
help us be more proactive and understand the attack landscape
better. We will now define the process for Unknown Vulnerability
Management and how the Fuzz Testing Maturity Model (FTMM) can
help. Before moving into testing for unknown vulnerabilities,
users are advised to test the products for known vulnerabilities
using the available known-vulnerability tools.
Another name for Unknown Vulnerability Management is Zero-
Day Vulnerability Management. It is the process for
• Detecting attack vectors
• Finding zero-day vulnerabilities
• Building defenses
• Performing patch verification
• Deployment in one big security push
It is best described in figure 5 below.
fig 5. UVM Phases
Identifying the target and attack surface is important. FTMM and
FTMM levels apply to a specific target. A target is a single piece
of software or a collection of software. The target is the thing that
is fuzz tested, often referred to as System Under Test (SUT),
Device Under Test (DUT), interface, firmware, system, service,
etc. Here are some example target types:
 A single executable file
 An operating system
 An industrial controller
 A network router
 A mobile phone
 A smart television
 An automobile
 A medical infusion pump
In Phase 1, one needs to look at the network holistically and
analyse the attack surface. In this phase the user should use
tools like port scanners, resource scanners and network analyzers
to examine the network. What needs to be tested has to be
identified. A typical power network would have thousands of
IP-based network devices, all different.
To better understand what to test and what not to, it is advisable
to record traffic at multiple points in your network and use tools
to automatically visualize the network. It is important that such
a tool helps you drill up and down from high-level visualizations
to the corresponding packet data, provides real-time analysis, and
also reveals hidden interfaces and possible exploits.
Once done the security team can identify and list all critical
elements which need to be tested.
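A minimal sketch of the Phase 1 port-discovery step is shown below; a real assessment would use a dedicated scanner such as nmap, and this connect-scan is only illustrative:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Minimal TCP connect scan: return the subset of `ports` that
    accept connections on `host`. connect_ex returns 0 on success
    instead of raising, which keeps the loop simple."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Each open port found this way is a candidate attack vector; the protocols spoken on those ports determine which fuzzers are needed in Phase 2.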
Phase 2 is testing the devices. Before starting the testing
process, it is advisable to list all the protocols the particular
device under test (DUT) supports and then identify the correct
fuzzing tools which support those protocols. One would need to use
a number of tools, not be limited to just one type of fuzzing, and
keep the fuzzing approach dynamic, bearing in mind the attack
vectors and the environment where the device will be used.
To start with, one should identify a good specification-based
fuzzing tool that covers all the specifications of the protocols
the DUT supports. A good fuzzer has the following characteristics:
1. A savvy test case engine creates the malformed inputs,
or test cases, that will be used to exercise the target.
Because fuzzing is an infinite space problem, the test
case engine must be smart about creating test cases that
are likely to trigger failures in the target software.
Experience counts—the developers who create the test
case engine should, ideally, have been testing and
breaking software for many years.
2. Creating high-quality test cases is not enough; a fuzzer
must also include automation for delivering the test
cases to the target. Depending on the complexity of the
protocol or file format being tested, a generational
fuzzer can easily create hundreds of thousands, even
millions of test cases.
3. As the test cases are delivered to the target, the fuzzer
uses instrumentation to monitor and detect if a failure
has occurred. This is one of the fundamental
mechanisms of fuzzing.
4. When outright failure or unusual behavior occurs in a
target, understanding what happened is critical. A great
fuzzer keeps detailed records of its interactions with the
target.
Phase 3 is reporting, which is critical. The tools should provide
the following:
1. Hand in hand with careful recordkeeping is the idea of
repeatability. If your fuzzer delivers a test case that
triggers a failure, delivering the same test case to
reproduce the same failure should be straightforward.
This is the key to effective remediation—when testers
locate a vulnerability with a fuzzer, developers should
be able to reproduce the same vulnerability, which
makes determining the root cause and fixing the bug
relatively easy.
2. A fuzzer should be easy to use. If the learning curve is
too steep, no one will want to use it and it will just
gather dust.
3. Management reports provide a high-level overview of
the test execution
4. Log files and spreadsheets help you to identify
troublesome tests and to minimize false negatives
5. Individual tests are documented by augmenting the already
extensive test case documentation with PCAP traffic
recordings
6. Remediation packages can be sent to third parties for
automated reproduction
Phase 4 is mitigation, and it is important. If the testing
organization is a user organization, remediation becomes
important, as one cannot wait indefinitely for a patch. The tool
should be able to help the user remediate without waiting for a
patch from the OEM. The tool should provide:
1. Mitigation tools quickly and easily reproduce
vulnerabilities, perform regression testing and verify
patches
2. The tools automatically generate reports, which contain
risk assessment and CWE values for the found
vulnerabilities and direct links to the test suites that
triggered the vulnerabilities
3. Identification of the test cases that triggered the
vulnerability is critical
4. The test case documentation can be used to create
tailored IDS rules to block possible zero-day attacks.
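As a sketch of point 4, a vulnerability-triggering payload from the test case documentation can be turned into a Snort-style signature. This is illustrative only; a production rule needs careful tuning to avoid false positives:

```python
def snort_rule_for_case(payload: bytes, port: int, sid: int) -> str:
    """Format a Snort-style rule that matches a distinctive prefix of
    the payload that triggered a vulnerability during fuzzing."""
    hex_match = " ".join("%02x" % b for b in payload[:16])
    return ('alert tcp any any -> any %d '
            '(msg:"fuzz-found zero-day, case %d"; '
            'content:"|%s|"; sid:%d;)' % (port, sid, hex_match, sid))
```

For example, a crash triggered on Modbus TCP (port 502) by a payload beginning `de ad be ef` yields a rule whose `content:"|de ad be ef|"` clause blocks that exact attack traffic until a vendor patch arrives.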
How much testing per protocol is enough depends on the criticality
of the user organization. It is important to understand that the
quantum and type of security testing can vary depending on the
environment. However, the Fuzz Testing Maturity Model (FTMM) does
provide a way forward. It is based on the ISO/IEC 15504 framework.
Testing Requirements Overview per Target Attack Vector.
In general, the overall maturity level of a target is the lowest
maturity level of any attack vector, with Level 0 representing
“no fuzzing has been done.” If a target includes 27 attack vectors
that have been fuzzed to Level 5, and 1 attack vector that has not
been fuzzed at all, the target is at FTMM Level 0. Security is only
as strong as its weakest link—the 1 attack vector that was not
fuzzed will likely make the target highly vulnerable, despite the
rigorous testing performed on the other attack vectors.
A prudent practice is to balance the risk associated with an attack
vector against an appropriate FTMM level. It makes sense to fuzz
remotely accessible attack vectors to a higher level than front-
panel input, for example.
Let us define the levels, from Level 0 through Level 5.
1. Level 0: Immature. Where no fuzzing has been performed
on any attack vector in a target, the target is at FTMM
Level 0. If minimal fuzzing has been done but does not
meet the Level 1 requirements, the target is still at
FTMM Level 0.
2. Level 1: Initial. Level 1 represents an initial exposure
to fuzz testing. Either generational or template fuzzing is
used on the known attack vectors of the target, although
a full attack surface analysis is not required. For each
tested attack vector, fuzzing should be performed for at
least 2 hours or 100,000 test cases, whichever comes
first. Assertion failures and transient failures are
acceptable but must be documented.
Level 1 is not comprehensive in any sense, but it is an
excellent first step for organizations that wish to
improve their security posture by becoming adept at
fuzzing. Some fuzzing is better than no fuzzing. If the
target has not been fuzzed previously, FTMM Level 1
can provide quick improvements in robustness and
security.
3. Level 2: Defined. The starting point for Level 2 is an
attack surface analysis of the target. For each attack
vector, a generational fuzzer should be used for 8 hours
or 1 million test cases, whichever comes first. If a
generational fuzzer is unavailable for an attack vector, a
template fuzzer can be used instead, for at least 8 hours
or 5 million test cases, whichever comes first.
Level 2 introduces a more defined approach for
performing fuzz testing. What matters in fuzzing is
catching failures. Instrumentation is the mechanism for
catching failures, so while Level 2 does not require
automated instrumentation, it is highly recommended.
4. Level 3: Managed. Both generational and template
fuzzing must be performed for each attack vector in
Level 3. The generational fuzzer must be run for 16
hours or 2 million test cases, whichever comes first,
while the template fuzzer must be run for 16 hours or 5
million test cases. Automated instrumentation must be
used. The baseline test configuration must be
documented.
Compared to previous levels, Level 3 emphasizes
completeness and documentation so that it is easy to
observe and improve the fuzzing process. This is an
excellent baseline for builders.
5. Level 4: Integrated. Level 4 increases the fuzzing time
per fuzzer type to one week. There is no longer a
minimum threshold for test cases—for each attack
vector, a generational fuzzer and a template fuzzer must
both be run until the minimum required time is reached.
In Level 4, fuzzing must be incorporated in the
organization’s automated testing. Level 4 also
introduces component analysis, a type of static analysis
in which a target binary is examined to understand its
internal components, such as third-party libraries. These
components might have known vulnerabilities, which
could be exposed through the target software. The
component analysis provides a comprehensive picture of
the components of a binary and their associated known
vulnerabilities.
It sets the stage for fuzzing—if vulnerable components
are present, they can be assessed and replaced to
eliminate or mitigate known vulnerabilities. Once the
known vulnerabilities have been addressed, fuzzing is
used to search for unknown vulnerabilities.
Level 4 is intended for systems with high reliability and
security requirements. Parallel execution can be used to
reduce the elapsed testing time while still meeting the
required total testing time. For example, if you can
perform the generational fuzzing for an attack vector on
eight identical targets, dividing the test cases evenly,
then the required one week of generational fuzzing can
be accomplished in 168 / 8 = 21 hours.
6. Level 5: Optimized. Level 5 increases testing time to 30
days for each fuzzing type, and requires the use of at
least two different fuzzers per fuzzing type. Because
fuzzing is an infinite space problem, and because
different fuzzers work differently, using two
generational and two template fuzzers increases the
probability of locating vulnerabilities.
Target software must be run with available developer
tools to detect and monitor subtle failure modes, and
code coverage and component analysis must also be
performed.
Again, parallel fuzzing can reduce the elapsed testing
time. For example, a Web browser target that could
be virtualized and replicated in the cloud could achieve
FTMM Level 5 for one of its required fuzzers by
executing 100 parallel test runs in fewer than 8 hours!
For a single attack vector, Level 5 requires the use of
two generational fuzzers and two template fuzzers, i.e.
four fuzzers run for 720 hours each. For a target with
two attack vectors, the required total testing time is
as follows:
2 x 4 x 720 hr = 5760 hr
Parallel testing shrinks this number to a manageable
size: 250 parallel runs bring the elapsed time to just
over 23 hours.
Be aware that Level 5 does not represent the ultimate in
fuzzing, or an endpoint in the quest for software
quality. More fuzzing can always be performed, but
Level 5 represents fuzzing that is appropriate for
systems with extremely high reliability and security
requirements.
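The Level 5 time arithmetic can be captured in a small helper (note that 2 attack vectors x 4 fuzzers x 720 hours is 5,760 hours in total):

```python
def ftmm_level5_hours(attack_vectors, parallel_runs=1):
    """Level 5 elapsed-time arithmetic: two generational plus two
    template fuzzers (4 in total) at 720 hours each, per attack
    vector, divided across parallel runs on identical targets."""
    return attack_vectors * 4 * 720 / parallel_runs
```

For two attack vectors this gives 5,760 total fuzzing hours, and 250 parallel runs bring the elapsed time down to roughly 23 hours.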
Figure 6 below shows the requirements for each level of
the maturity model. The columns and abbreviations are
fully explained in the subsequent sections.
fig 6. FTMM maturity levels
Explanation of the Chart:
1. The types of fuzzers used are the following:
In random fuzzing, test cases are generated using a
random or pseudo-random generator. Random fuzzers
are minimally effective because the inputs they generate
for target software are entirely implausible.
Template fuzzing, also known as block or mutational
fuzzing, generates test cases by introducing anomalies
into a valid message or file. Template fuzzers are more
effective than random fuzzers, but have some
important shortcomings. In particular, the effectiveness
of the fuzzing depends on the quality of the
template. Template fuzzing is only as good as the
template used to generate test cases. If the template or
templates used do not cover a specific functional
scenario, the corresponding part of the target code will
not be exercised and lurking vulnerabilities will remain
hidden.
In generational fuzzing, or model-based fuzzing, the
fuzzer itself is implemented from the specifications of
the protocol or file format being tested. The fuzzer
knows every possible message and field and fully
understands the protocol rules for interacting with a
target. The fuzzer will correctly handle checksums,
session IDs, and other stateful protocol features. A
generational fuzzer generates test cases by iterating
through its internal protocol model, creating test cases
for each field of each message. In general,
generational fuzzing finds more vulnerabilities in less
time than any other kind of fuzzing.
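To make the template/generational distinction concrete, here is a minimal sketch of both. The protocol model and field names are invented for illustration; real fuzzers model far richer state, including checksums and session handling:

```python
import random
import struct

def template_fuzz(template: bytes, count: int, seed: int = 0):
    """Template (mutational) fuzzing: start from a valid sample
    message and introduce anomalies, here single random bit flips.
    Coverage is bounded by the template, as noted above."""
    rng = random.Random(seed)   # seeded, so runs are reproducible
    for _ in range(count):
        data = bytearray(template)
        pos = rng.randrange(len(data))
        data[pos] ^= 1 << rng.randrange(8)   # flip one bit
        yield bytes(data)

# A toy protocol model: header fields with their boundary values.
# A generational fuzzer walks the model, emitting one test case per
# field per interesting value while keeping the other fields valid.
MODEL = [("version", [0, 1, 255]),
         ("length",  [0, 1, 0xFFFF]),
         ("flags",   [0, 0x80, 0xFF])]

def generational_fuzz(model=MODEL):
    defaults = {"version": 1, "length": 4, "flags": 0}
    for name, boundaries in model:
        for value in boundaries:
            fields = dict(defaults, **{name: value})
            yield name, struct.pack("!BHB", fields["version"],
                                    fields["length"], fields["flags"])
```

The template fuzzer only ever perturbs bytes that exist in its sample, while the generational fuzzer systematically targets every field the model knows about, which is why it tends to find more vulnerabilities in less time.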
2. Test cases: the numbers indicated in the Test cases
column are minimum amounts of test cases. Testing
must be performed until either the minimum test cases
are reached or the minimum time is reached, whichever
comes first.
The label infinite indicates that the fuzzer should be
placed in a mode where it generates test cases
indefinitely. In this case, testing should be performed for
at least the indicated time.
3. Time is the minimum time, in hours, for fuzzing.
Testing must be performed until either the minimum test
cases are reached or the minimum time is reached,
whichever comes first.
Care must be taken to ensure that the testing performed
covers as many features of the tested attack vector as
possible, resulting in testing as many code paths as
possible. This is especially important when limiting a
test run to a certain time.
Anyone willing to invest the time and resources to test at
these levels might as well strive to do as good testing
as possible. The ultimate goal is locating and fixing
vulnerabilities, not achieving a certain FTMM level.
Likewise, is a minimum testing time meaningful, given
that test cases might be delivered at very different
rates for different targets? Bear in mind that the same
difficulties encountered in fuzzing a very slow target
will also be encountered by anyone attacking it.
Attackers use fuzzing as a tool for locating
vulnerabilities; they will face the same challenges as
you in fuzzing the target.
4. Instrumentation is the method a fuzzer uses for
monitoring the target during testing and for collecting
telemetry. This maturity model defines the following
instrumentation methods:
Human observation uses human cognitive ability to
identify failures. While the fuzzer delivers test cases to
the target, a human tester observes the behavior of the
target. This can be accomplished by looking at log
files or console output for the target, looking at the front
panel (if present), or monitoring existing sessions
involving the target. Fundamentally, the tester is looking
for target behavior that is out of the ordinary. The
tester should be familiar with the functionality of the
target and be able to differentiate between normal
behavior and anomalous behavior. Typical facilities such
as log files and other management user interfaces should
be used in addition to any available developer tools.
Human observation can be effective, but automated
instrumentation is recommended, and is required for
higher FTMM levels.
In automated instrumentation, the fuzzer automatically
checks on the health of the target during testing,
usually after each test case is delivered. One simple and
effective method is valid case instrumentation, in
which every test case is followed by a valid message
from the fuzzer. If the target responds with a valid
response, the fuzzer considers the target healthy and
continues by sending the next test case.
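Valid case instrumentation as just described might look like this in outline; the callback names are illustrative:

```python
def valid_case_instrumentation(test_cases, deliver, probe_valid):
    """After every anomalous test case, send a known-good message via
    probe_valid() (which returns True if the target answered it
    correctly); if the probe fails, record the preceding test case
    as the suspect that broke the target."""
    suspects = []
    for case in test_cases:
        deliver(case)
        if not probe_valid():
            suspects.append(case)
    return suspects
```

Note that once a target has actually crashed, every subsequent probe also fails, so the first suspect in the list is usually the interesting one.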
5. Allowed failures: the types of failures that are
allowed to remain after testing. One of the
challenges of fuzzing is that software can fail in many
different ways:
o Crashes
o Kernel panics
o Unhandled exceptions
o Assertion failures
o Busy loops
o Resource consumption
Resource consumption usually refers to processing
power, available memory, and available persistent
storage, but the important resources are ultimately
determined by the target and its environment.
Monitoring resource consumption is a matter of defining
baseline and critical threshold values for resource
consumption, documenting these values in the test plan,
and then comparing the resource values during
testing to the defined thresholds. Resource monitoring
can range from a human observing the output of the
top utility on a Linux-based target to automated retrieval
of SNMP values for targets that support SNMP.
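The threshold comparison described here can be sketched as follows; the resource names, baseline values and critical multiplier are illustrative, and in practice come from the test plan:

```python
def check_resources(readings, baselines, critical_factor=2.0):
    """Compare observed resource values (e.g. memory in MB, CPU
    percent) against their documented baselines, and flag any
    resource that exceeds its critical threshold, here defined as
    a simple multiple of the baseline."""
    return [name for name, value in readings.items()
            if value > baselines[name] * critical_factor]
```

A fuzzer's monitoring loop would call this after each test case (or batch) and mark the run as failed if the returned list is non-empty.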
6. Test harness integration : Builder organizations will
initially run fuzz testing tools manually. Over time,
however, usage will naturally migrate to automatic fuzz
testing as part of an overall automated testing process.
This integration is a sign of maturity in an organization’s
use of fuzzing. Given the testing times required at higher
FTMM levels, test harness integration and test automation
are crucial. Such automation can ease the transition to
parallel testing that is likely necessary to achieve higher
FTMM levels. This column does not apply when fuzzing
is being used as a verification and validation tool.
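Wiring a fuzzing tool into an automated harness can be as simple as running it as one pipeline stage and mapping its exit status to a verdict. The sketch below assumes a command-line fuzzer invocable as a subprocess; the `fuzzer-cli` command name and its flags are hypothetical:

```python
import subprocess

def run_fuzz_stage(cmd, timeout=None):
    """Run a fuzzing tool as one stage of an automated test pipeline.
    Returns a (verdict, output) pair; the harness fails the build
    on any verdict other than 'pass'."""
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True,
                              timeout=timeout)
    except subprocess.TimeoutExpired:
        return "inconclusive", "fuzzing stage exceeded its time budget"
    verdict = "pass" if proc.returncode == 0 else "fail"
    return verdict, proc.stdout + proc.stderr

# In a CI harness, a non-'pass' verdict would fail the build, e.g.:
# verdict, log = run_fuzz_stage(["fuzzer-cli", "--suite", "modbus",
#                                "--target", "10.0.0.5"])
# assert verdict == "pass", log
```

The point of the integration is that fuzzing runs on every build without manual effort, which is what the higher FTMM levels implicitly require.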
7. Attack surface analysis: Fuzzing can be performed on any
available attack vector. Testers with a basic knowledge of the
target will know about at least some of the available
attack vectors. A comprehensive analysis of the attack
surface of the target, in its intended configuration, is
required for rigorous testing. The end result of
attack surface analysis is a list of all attack vectors
for the target. Note that the attack surface consists
of only those attack vectors that are active in the
used configuration. A target might have additional
capabilities that would expose additional attack
vectors, but if they are not enabled in the used
configuration, they do not need to be fuzzed to
achieve a specific FTMM level for the target in this
configuration.
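The end result of the analysis, a list of active attack vectors only, can be illustrated with a small filter over a service inventory. The inventory shape and service names below are hypothetical, chosen to match the rule that disabled capabilities are excluded from the attack surface:

```python
# Hedged sketch: derive the attack surface (active vectors only) from a
# hypothetical service inventory, per the attack surface analysis above.

def attack_surface(services):
    """Keep only vectors that are enabled in the used configuration."""
    return sorted(
        (svc["protocol"], svc["port"])
        for svc in services
        if svc.get("enabled", False)
    )

inventory = [
    {"protocol": "modbus/tcp", "port": 502, "enabled": True},
    {"protocol": "http",       "port": 80,  "enabled": True},
    {"protocol": "telnet",     "port": 23,  "enabled": False},  # disabled: not fuzzed
]
```

Here the telnet capability exists in the firmware but is off in the used configuration, so it does not count toward the FTMM level for this configuration.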
8. Documentation: A fuzzing report should include the
following information:
• A summary table providing an overview of testing,
including the following information for each attack vector:
o Fuzzing tool and version
o Test run verdict
o Instrumentation method
o Number of test cases
o Testing time
o Date of test run
o Notes
• For each attack vector, detailed results must be
submitted. These must be generated from the
fuzzing tool and include the following:
o Data for each test case delivered to the
target, such as test case verdict, time,
duration, and amount of output and input.
o The log of the fuzzer.
Documentation is a crucial component of effective, repeatable
fuzzing. The test plan can be adapted from the attack surface
analysis and should include information about the tools and
techniques that will be used for testing. The baseline test
configuration should include information about the test bed,
target configuration, and fuzzer configuration.
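Generating the summary table from per-attack-vector records is straightforward to automate. The sketch below emits CSV; the column names mirror the report items listed above, and the record fields are an assumed shape rather than any tool's actual output format:

```python
import csv
import io

# Column names follow the summary table items in the documentation list above.
COLUMNS = ["attack_vector", "fuzzer", "verdict", "instrumentation",
           "test_cases", "testing_time_h", "date", "notes"]

def summary_table(runs):
    """Render per-attack-vector fuzzing results as a CSV summary table."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    for run in runs:
        # Missing fields render as blank cells rather than raising.
        writer.writerow({col: run.get(col, "") for col in COLUMNS})
    return buf.getvalue()
```

Producing the table mechanically from the fuzzer's own records keeps the report consistent with the detailed per-test-case data, which is what makes the testing repeatable.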
CONCLUSION
All software contains vulnerabilities. Like death and taxes,
software vulnerabilities are inescapable. You can significantly
reduce your risk by creating stronger, dynamic processes that help
you build more robust networks and give you control over
the security of your network by making it harder and more time-
consuming for adversaries to find exploitable vulnerabilities.
Finding and fixing more vulnerabilities increases the overall
security and robustness of your target and reduces your risk
profile.
This process and maturity model give software builders and
buyers a standard scale for describing the fuzz testing performed
on target software and the associated risks.
References
1. "SDL Process: Verification." Microsoft, 27 Oct. 2013.
http://www.microsoft.com/security/sdl/process/verification.aspx
2. Knudsen, Jonathan. "Make Software Better with Fuzzing."
ISSA Journal, July 2013.
http://www.codenomicon.com/news/editorial/Make%20Software%20Better%20with%20Fuzzing.pdf
3. Knudsen, Jonathan. "That Warm, Fuzzy Feeling...and
How You Can Get It." Professional Tester, April 2012.
http://www.codenomicon.com/news/editorial/professional_tester_0412_that_warm_fuzzy_feeling.pdf
4. Knudsen, Jonathan, and Mikko Varpiola. "Fuzz Testing
Maturity Model." Codenomicon.
http://www.codenomicon.com/resources/Fuzz%20Testing%20Maturity%20Model.pdf
5. Takanen, Ari, et al. Fuzzing for Software Security
Testing and Quality Assurance. Artech House, 2008.
Disclaimer
The contents of this paper have been gleaned from
open literature, mostly the Internet. The material used
in this paper has been acknowledged through
references. However, if a reference has been omitted
anywhere, the omission is unintentional and is by no
means meant to violate the copyrights or intellectual
property rights of the original authors.
Cyber Security for Critical Infrastructure

  • 1.
    Page 1 of9 Cyber Security for Critical Infrastructure Mohit Rampal- MS Systems & Information BITS Pilani -Regional Manager South Asia Codenomicon Software India Pvt. Ltd. Shubika Soni - Manager Security- North & West ABSTRACT Cyber Security has gained a huge importance in the last few years specially towards Industrial control systems (ICS) and specially critical infrastructure. The Mindset of Air-GAP and not connecting to the internet at all for ICS and Critical Infrastructure does not guarantee to any organization that it is secured and that no cyber crime can be committed against the organization. Stuxnet a present live example.. Cyber space is gaining importance and it is a fact that borders are no longer physical but CYBER BORDERS with nations trying to take control over each other's Cyber Borders. It is also believed that the First Online Cyber killing could happen this year end as cyber criminals, APT Actors, state sponsored cyber terrorists exploit internet technologies to target victims. Europol said it expected ' injury and possible deaths" caused by computer attacks on critical infrastructure. With a global manufacturing base and where technology changes rapidly and organizations optimizing costs state sponsored or back doors for unknown vulnerabilities being present knowingly or unknowingly is a growing threat. This paper would highlight processes on using modern day technologies which would help organizations to be proactive in terms of security. It would cover aspects of how fuzzing can be used to determine Zero Day or Unknown Vulnerabilities like Stuxnet or Heartbleed and ways to create quick patches so as to be able to mitigate them and create a stronger robust network. 
INDEX TERMS Fuzzing, Infrastructure & Critical Services, Hacktivists, Power System Security, SCADA Security, Security, Specification based fuzzing, Zero Day Vulnerability, Unknown Vulnerability INTRODUCTION Industrial control systems (ICS) and Critical Infrastructures have relied on security through obscurity and isolation. But with increasing connectivity and the use of standard protocols and off- the-shelf software and hardware the risk of cyber attacks has increased. ICS protocols are extremely vulnerable to attacks, because they were not designed with security in mind and the critical Infrastructure are not typically hardened against attacks. Presently organizations are migrating from traditional SCADA systems to IP Based SCADA which are far more advanced but also bring along with them security issues. The transition from the matured IPv4 to the new standard, IPv6, only increases this risk. Unknown vulnerabilities expose networks and services to security, quality and robustness issues, reducing their operational reliability. Vulnerabilities are unpatched software flaws. Without vulnerabilities there would not be attacks, because hackers need to find vulnerabilities in a system, in order to devise an attack against it. However, vulnerabilities can also be triggered by simple unexpected inputs in events like heavier than normal use or system maintenance. Unknown vulnerabilities differ from known vulnerabilities in that their existence is unknown, thus there are no ready patches and updates and attacks against them can go unnoticed. Unknown Vulnerabilities are typical entry vector for Malware and other harmful exploits to access information networks. As a reference, Stuxnet relied on four different Unknown Vulnerabilities to function. Finding the Unknown Vulnerabilities is critical hardening activity for the networks and networked devices where high level of trust is required. 
Robustness testing is based on the systematic creation of a very large number of protocol messages (tens or hundreds of thousands) that contain exceptional elements simulating malicious attacks. This method provides a proactive way of assessing software robustness, which, in turn, is defined as "the ability of software to tolerate exceptional input and stressful environment conditions". A piece of software which is not robust fails when facing such circumstances. A malicious intruder can take advantage of robustness shortcomings to compromise the system running the software. In fact, a large portion of the information security vulnerabilities reported in public is caused by robustness weaknesses. Robustness problems can be exploited, for example, by intruders seeking to cause a denial-of- service condition by feeding maliciously formatted inputs into the vulnerable component. Certain types of robustness flaws (e.g.,
  • 2.
    Page 2 of9 common buffer overflows) can also be exploited to run externally supplied code on the vulnerable component. The software vulnerabilities found in robustness testing are primarily caused by implementation-time mistakes (i.e. mistakes made during programming). Many of these mistakes are also vulnerabilities from a security point of view. During testing, these mistakes can manifest themselves in various ways:  The component crashes and then possibly restarts.  The component hangs in a busy loop, causing a permanent Denial-of-Service situation.  The component slows down momentarily causing a temporary Denial-of- Service situation.  The component fails to provide useful services causing a Denial-of-Service situation (i.e. new network connections are refused). On the programming languages level, there are numerous possible types of mistakes, which can cause robustness problems: missing length checks, pointer failures, index handing failures, memory allocation problems, threading problems, etc. Not all problems have a direct security impact, yet their removal always promotes the reliability of the assessed software component. Internet abuse refers to the misuse of the Internet to injure and disturb other users. It is an umbrella term covering cyber crime, hacktivism, attacks by nation-state sponsored adversaries and hobbyist crackers. Different types of internet abuse include unauthorized network access, data theft and corruption, disruptions to normal traffic flow (e.g. DoS and DDoS attacks), the propagation of malware, spamming, phishing and botnets. APT refers to sophisticated Internet abuse performed by highly motivated and dedicated hackers who are Domain specialists and well-resourced groups, such as organized cyber criminals, hostile nation states and hacktivists. These attacks frequently utilize unknown, zero-day vulnerabilities. 
Zero-day vulnerabilities pose the greatest threat to network security, because there are no defenses for attacks against them. The attacks can go unnoticed and once discovered it takes time to locate the vulnerabilities and to create patches for them. Advanced attacks, like the Stuxnet, can utilize multiple zero-days making them extremely difficult to defend against. Presently most organizations have security teams which focus mostly on known vulnerability management using tools for penetration testing and application testing and at times may write a few negative test cases which are not exhaustive enough to determine unknown vulnerabilities. Crackers need to find a vulnerability in the protocol implementation in order to devise an attack against a target system. By identifying potential zero-day vulnerabilities proactively, one can make it significantly harder for crackers to devise attacks and the best way to prevent zero-day attacks is to get rid of exploitable vulnerabilities proactively. Fuzzing enables you to find previously unknown, zero-day vulnerabilities by triggering them with unexpected With growing global economies manufacturing has also become global to reduce costs and increase efficiency, but at a cost, which is cost of Security. There may be Vulnerabilities both Known and Unknown existing in these products present their knowingly or unknowingly. Does that mean we stop procuring devices manufactured outside our country and try to be Country Specific? This may not be possible in global economies and could cause other global issues. So does it mean stop progress and wait for something? Answer is NO. We don't wait but proceed and create stronger processes which help us to mitigate these unknown vulnerabilities and be proactive and prepared rather than wait to be compromised. It is believe that attacks against Critical Infrastructure would increase resulting in unavailability of services. 
In this paper we will discuss in detail and see how some technologies can be adapted to help us to be proactive and mitigate unknown vulnerabilities. Some Useful Definitions • Vulnerability – a weakness in software, a bug • Threat/Attack – exploit/worm/virus against a specific vulnerability • Protocol Modeling – Technique for explaining interface message sequences and message structures • Fuzzing – process and technique for security testing • Anomaly – abnormal or unexpected input • Failure – crash, busy-loop, memory corruption, or other indication of a bug in software • Test cases - A test case is the basic unit of fuzzing. Each anomalized message or file is a test case. With network protocol testing, one test case may consist of a single interaction, with one or more messages exchanged, with an anomaly in one or more of them or in how the messages are sent. • Failures - Broadly speaking, target software fails when it behaves in a way its creators did not intend. In specific terms, the bugs uncovered by fuzzing cause various symptoms, including the following: Process crashes and panics, Process hangs (endless loops), Resource , shortages (disk space shortage, handle shortage, etc.), Assertion failures, Handled and unhandled exceptions, Data corruption (databases, log files, etc.), Any other unexpected behavior • FTMM- Fuzz Testing Maturity Model Defining the Problem Global economies mean global production and global purchase. As discussed it also means buying what you get. In short the problem definition is to buy what is manufactured by global OEM's and buy with it known and unknown threats. Known threats can be easily mitigated with tools and the next generation security technologies like Intrusion detection and protection systems, firewalls etc. in place, but what about the Unknown or ZERO day vulnerabilities which may have crept in unknowingly
  • 3.
    Page 3 of9 during development process or knowingly implanted to create back doors or trap doors. Stuxnet a live example and recently Heartbleed which show how ICS SCADA devices can be compromised. A recent study of some attacks showed that five of the attacks surveyed utilized a Root Compromise, four took advantage of a User Compromise, four others used a Trojan, three of the attacks involved a Misuse of Resources, two attacks used a Worm, one utilized a Denial of Service, one was a Virus, and one was a Social Engineering attack. The Impact with majority being disrupted operations, three disclosed data, two distorted data, one destroyed data, and one had unknown impact. Approach Before we start we would redefine the problem to be more focused to ensure we have identified it so as to be able to clearly have a clear cut solution for the same. Let us see a examine a typical SCADA System used for power generation and distribution. fig 1. Power SCADA Generation & Distribution fig 2. Power SCADA Generation & Distribution fig 3. A typical SCADA System is used to control a geographically distributed process If one looks at it holistically one would not find any security loopholes etc. as IT would have enable firewalls and other security measures for security. SCADA systems are different and mostly do not easily integrate with mainstream IT which results is separate teams to monitor these networks. Installing a firewall before such networks does give some protection but not total protection. If one looks at the network one can see that the network has various network devices which are IT and SCADA and they all intercommunicate with each other via protocols. All protocols are created as per specifications which help define them and also help the software community to integrate in their hardware & software products. TO determine the implementation of these protocols in products one would use FUZZING to find out Vulnerabilities. Fuzzing, what is it what does it do. 
Fuzzing is a black-box robustness testing technique used to reveal unknown zero-day vulnerabilities by triggering them with unexpected inputs. Basically, unexpected data in the form of modified protocol messages are fed to the inputs of a system, and the behavior of the system is monitored. If the system fails, e.g., by crashing or by failing built-in code assertions, then there is an exploitable vulnerability in the software. While many security techniques focus on finding known vulnerabilities or variations of them, fuzzing reveals previously unknown vulnerabilities, so called zero-day vulnerabilities by triggering them with unexpected inputs . Figure 4 depicts the flow of fuzzing tests. Discovering Zero-Day Vulnerabilities modifying the samples either randomly or based on the sample structure In generation-based fuzzing, the process of data element identification is automated by using protocol models
  • 4.
    Page 4 of9 fig 4. The flow of Fuzzing As mentioned above Fuzzing helps us to determine unknown or zero day vulnerabilities and there are many technologies in use. In the industry there are Mutation based, Template based, Generation based and Specification based fuzzing as methods to determine unknown vulnerabilities. Specification-based fuzzing is a form of generation-based fuzzing, which uses protocol and file format specifications to provide the fuzzer with protocol or file format specific information, e.g., on the boundary limits of the data elements. Specification-based test generation achieves excellent coverage testing the protocol features included in the specification. However, new features and proprietary features not included in the specification are not covered. If no specification is available, then the best fuzzing results can be achieved with mutation- based fuzzers. Generation-based testing can also be complemented with longer mutation-based fuzzing test runs. Some vulnerabilities might only be triggered through more aggressive input space testing . Thus, the best test results are achieved by combining testing techniques. Let us go into the process of how modern day technology can be used to help us be more proactive and help us understand the attack landscape better. We will now define the process for Unknown Vulnerability Management and how Fuzz Testing Maturity Model (FTMM) can help us. But before we move into the process of testing for Unknown Vulnerabilities it is advised for used to test the products for known vulnerabilities using known vulnerability tools available. Another name for Unknown Vulnerability Management is Zero- Day Vulnerability Management. It is the process for • Detecting attack vectors • Finding zero-day vulnerabilities • Building defenses • Performing patch verification • Deployment in one big security push It is best described as below in figure 5. fig 5. UVM Phases Identifying the target and attack surface is important. 
FTMM and FTMM levels apply to a specific target. A target is a single piece of software or a collection of software. The target is the thing that is fuzz tested, often referred to as System Under Test (SUT), Device Under Test (DUT), interface, firmware, system, service, etc. Here are some example target types:  A single executable file  An operating system  An industrial controller  A network router  A mobile phone  A smart television  An automobile  A medical infusion pump In Phase 1 is one needs to look at the network holistically and analyse the attack surface. In this phase user should use tools like port scanners, resource scanners and network analyzers to analyse and examine the network. What needs to be tested has to be identified. Take a typical power network, it would have thousands of network devices which are IP Based and would be different. To understand it better what to test and what not too it would be advisable to Record traffic at multiple points in your network and use tools to automatically visualize the network. It would be important that such a tool helps you to drill up and down from looking at high-level visualizations to inspecting the corresponding packet data and provide a real time analysis and also reveal hidden interfaces and possible exploits. Once done the security team can identify and list all critical elements which need to be tested.
  • 5.
    Page 5 of9 Phase 2 which is testing of the devices. Before starting the process of testing it is advised to list down all the protocols the particular device under test (DUT) supports and then identify the correct fuzzing tool which supports the said protocol. One would need to use a number of tools and not be limited to just one type of fuzzing and the mechanism of fuzzing would need to be dynamic keeping in mind the attack vectors and the environment where the device would be used. To start with one should identify a good Specification based Fuzzing tool which covers all the specifications of the protocols the DUT supports. It should also 1. A savvy test case engine creates the malformed inputs, or test cases, that will be used to exercise the target. Because fuzzing is an infinite space problem, the test case engine must be smart about creating test cases that are likely to trigger failures in the target software. Experience counts—the developers who create the test case engine should, ideally, have been testing and breaking software for many years. 2. Creating high-quality test cases is not enough; a fuzzer must also include automation for delivering the test cases to the target. Depending on the complexity of the protocol or file format being tested, a generational fuzzer can easily create hundreds of thousands, even millions of test cases. 3. As the test cases are delivered to the target, the fuzzer uses instrumentation to monitor and detect if a failure has occurred. This is one of the fundamental mechanisms of fuzzing. 4. When outright failure or unusual behavior occurs in a target, understanding what happened is critical. A great fuzzer keeps detailed records of its interactions with the target. Phase 3 which is critical and in terms provides reports. The tools should provide the following 1. Hand in hand with careful recordkeeping is the idea of repeatability. 
If your fuzzer delivers a test case that triggers a failure, delivering the same test case to reproduce the same failure should be straightforward. This is the key to effective remediation—when testers locate a vulnerability with a fuzzer, developers should be able to reproduce the same vulnerability, which makes determining the root cause and fixing the bug relatively easy. 2. A fuzzer should be easy to use. If the learning curve is too steep, no one will want to use it and it will just gather dust. 3. Management reports provide an high-level overview of the test execution 4. Log files and spreadsheets help you to identify troublesome tests and to minimize false negatives 5. Individual tests by augmenting the already extensive test case documentation with PCAP traffic recordings 6. Remediation Packages can be send to third parties for automated reproduction Phase 4 which is Mitigation and is important. If the testing organization is a user organization remediating becomes important as one can't wait infinitely to wait for a patch. The tool should be able to help and assist the user to remediate without waiting for a patch from the OEM. The tool should provide 1. Mitigation tools quickly and easily reproduce vulnerabilities, perform regression testing and verify patches 2. The tools automatically generate reports, which contain risk assessment and CWE values for the found vulnerabilities and direct links to the test suites that triggered the vulnerabilities 3. Identification of the test cases that triggered the vulnerability is critical 4. The test case documentation can be used to create tailored IDS rules to block possible zero-day attacks. stn How much testing per protocol is enough depends on the criticality of the user organization. It is important to understand that depending on the environment the quantum and type of security testing could vary. However Fuzz Testing Maturity Model (FTMM) does provide us a way forward. 
It is based on ISO/IEC 15504 framework Testing Requirements Overview per Target Attack Vector. In general, the overall maturity level of a target is the lowest maturity level of any attack vector, with Level 0 representing “no fuzzing has been done.” If a target includes 27 attack vectors that have been fuzzed to Level 5, and 1 attack vector that has not been fuzzed at all, the target is at FTMM Level 0. Security is only as strong as its weakest link—the 1 attack vector that was not fuzzed will likely make the target highly vulnerable, despite the rigorous testing performed on the other attack vectors. A prudent practice is to balance the risk associated with an attack vector against an appropriate FTMM level. It makes sense to fuzz remotely accessible attack vectors to a higher level than front- panel input, for example. Let us define all the 5 levels. 1. Level 0: Immature where no fuzzing has been performed on any attack vector in a target, the target is at FTMM Level 0. If minimal fuzzing has been done, but does not meet the Level 1 requirements, then the target is still at FTMM Level 0. 2. Level 1: Initial . Level 1 represents an initial exposure to fuzz testing. Either generational or template fuzzing is used on the known attack vectors of the target, although a full attack surface analysis is not required. For each tested attack vector, fuzzing should be performed for at
  • 6.
    Page 6 of9 least 2 hours or 100,000 test cases, whichever comes first. Assertion failures and transient failures are acceptable but must be documented. Level 1 is not comprehensive in any sense, but it is an excellent first step for organizations that wish to improve their security posture by becoming adept at fuzzing. Some fuzzing is better than no fuzzing. If the target has not been fuzzed previously, FTMM Level 1 can provide quick improvements in robustness and security. 3. Level 2: Defined. The starting point for Level 2 is an attack surface analysis of the target. For each attack vector, a generational fuzzer should be used for 8 hours or 1 million test cases, whichever comes first. If a generational fuzzer is unavailable for an attack vector, a template fuzzer can be used instead, for at least 8 hours or 5 million test cases, whichever comes first. Level 2 introduces a more defined approach for performing fuzz testing. What matters in fuzzing is catching failures. Instrumentation is the mechanism for catching failures, so while Level 2 does not require automated instrumentation, it is highly recommended. 4. Level 3: Managed. Both generational and template fuzzing must be performed for each attack vector in Level 3. The generational fuzzer must be run for 16 hours or 2 million test cases, whichever comes first, while the template fuzzer must be run for 16 hours or 5 million test cases. Automated instrumentation must be used. The baseline test configuration must be documented. Compared to previous layers, Level 3 emphasizes completeness and documentation so that it is easy to observe and improve the fuzzing process. This is an excellent baseline for builders. 5. Level 4: Integrated. Level 4 increases the fuzzing time per fuzzer type to one week. There is no longer a minimum threshold for test cases—for each attack vector, a generational fuzzer and a template fuzzer must both be run until the minimum required time is reached. 
In Level 4, fuzzing must be incorporated in the organization’s automated testing. Level 4 also introduces component analysis, a type of static analysis in which a target binary is examined to understand its internal components, such as third-party libraries. These components might have known vulnerabilities, which could be exposed through the target software. The component analysis provides a comprehensive picture of the components of a binary and their associated known vulnerabilities. It sets the stage for fuzzing—if vulnerable components are present, they can be assessed and replaced to eliminate or mitigate known vulnerabilities. Once the known vulnerabilities have been addressed, fuzzing is used to search for unknown vulnerabilities. Level 4 is intended for systems with high reliability and security requirements Parallel execution can be used to reduce the elapsed testing time, while still meeting the required total testing time. For example, for an attack vector, if you can perform the generational fuzzing on eight identical targets, dividing the test cases evenly, then the required one week of generational fuzzing can be accomplished in 168 / 8 = 21 hours. 6. Level 5: Optimized. Level 5 increases testing time to 30 days for each fuzzing type, and requires the use of at least two different fuzzers per fuzzing type. Because fuzzing is an infinite space problem, and because different fuzzers work differently, using two generational and two template fuzzers increases the probability of locating vulnerabilities. Target software must be run with available developer tools to detect and monitor subtle failure modes, and code coverage and component analysis must also be performed. Again, parallel fuzzing can reduce the elapsed testing time. For example, a Web browser target that could be virtualized and replicated in the cloud could achieve FTMM Level 5 for one of its required fuzzers by executing 100 parallel test runs in fewer than 8 hours! 
For a single attack vector, Level 5 requires the use of two generational fuzzers and two template fuzzers, or four fuzzers run for 720 hours each. For a target with two attack vectors, the required total testing time is as follows: 2 x 4 x 720 hr = 3760 hr Parallel testing shrinks this number to manageable size. 250 parallel runs brings the elapsed time to just over 15 hours. Be aware that Level 5 does not represent the ultimate in fuzzing, or an endpoint in the quest for software quality. More fuzzing can always be performed, but Level 5 represents fuzzing that is appropriate for systems with extremely high reliability and security requirements. Figure 6 below shows the requirements for each level of the maturity model. The columns and abbreviations are fully explained in the subsequent sections.
  • 7.
    Page 7 of9 fig 6. FTMM maturity levels Explanation of the Chart: 1. The types of fuzzers used are In random fuzzing, test cases are generated using a random or pseudo-random generator. Random fuzzers are minimally effective because the inputs they generate for target software are entirely implausible. Template fuzzing, also known as block or mutational fuzzing, generates test cases by introducing anomalies into a valid message or file. Template fuzzers are more effective than random fuzzers, but have some important shortcomings. In particular, the effectiveness of the fuzzing depends on the quality of the template. Template fuzzing is only as good as the template used to generated test cases. If the template or templates used do not cover a specific functional scenario, the corresponding part of the target code will not be exercised and lurking vulnerabilities will remain hidden. In generational fuzzing, or model-based fuzzing, the fuzzer itself is implemented from the specifications of the protocol or file format being tested. The fuzzer knows every possible message and field and fully understands the protocol rules for interacting with a target. The fuzzer will correctly handle checksums, session IDs, and other stateful protocol features. A generational fuzzer generates test cases by iterating through its internal protocol model, creating test cases for each field of each message. In general, generational fuzzing finds more vulnerabilities in less time than any other kind of fuzzing. 2. Test Cases : is the numbers indicated in the Test cases column are minimum amounts of test cases. Testing must be performed until either the minimum test cases are reached or the minimum time is reached, whichever comes first. The label infinite indicates that the fuzzer should be placed in a mode where it generates test cases indefinitely. In this case, testing should be performed for at least the indicated time. 3. Time is the minimum time, in hours, for fuzzing. 
Testing must be performed until either the minimum number of test cases is reached or the minimum time is reached, whichever comes first. Care must be taken to ensure that the testing covers as many features of the tested attack vector as possible, thereby exercising as many code paths as possible. This is especially important when limiting a test run to a certain time: anyone willing to invest the time and resources for fuzzing might as well strive to do the best testing possible. The ultimate goal is locating and fixing vulnerabilities, not achieving a certain FTMM level. Likewise, one might ask whether a minimum testing time is meaningful, given that test cases might be delivered at very different rates to different targets. Bear in mind that the same difficulties encountered in fuzzing a very slow target will also be encountered by anyone attacking it. Attackers use fuzzing as a tool for locating vulnerabilities; they will face the same challenges as you in fuzzing the target.

4. Instrumentation. The method a fuzzer uses for monitoring the target during testing and for collecting telemetry. This maturity model defines the following instrumentation methods. Human observation uses human cognitive ability to identify failures: while the fuzzer delivers test cases to the target, a human tester observes the behavior of the target. This can be accomplished by looking at log files or console output for the target, looking at the front panel (if present), or monitoring existing sessions involving the target. Fundamentally, the tester is looking for target behavior that is out of the ordinary. The tester should be familiar with the functionality of the target and be able to differentiate between normal behavior and anomalous behavior. Typical facilities such as log files and other management user interfaces should be used in
addition to any available developer tools. Human observation can be effective, but automated instrumentation is recommended, and is required for higher FTMM levels. In automated instrumentation, the fuzzer automatically checks on the health of the target during testing, usually after each test case is delivered. One simple and effective method is valid case instrumentation, in which every test case is followed by a valid message from the fuzzer. If the target responds with a valid response, the fuzzer considers the target healthy and continues by sending the next test case.

5. Allowed failures. This column describes the types of failures that are allowed to remain after testing. One of the challenges of fuzzing is that software can fail in many different ways:
- Crashes
- Kernel panics
- Unhandled exceptions
- Assertion failures
- Busy loops
- Resource consumption

Resource consumption usually refers to processing power, available memory, and available persistent storage, but the important resources are ultimately determined by the target and its environment. Monitoring resource consumption is a matter of defining baseline and critical threshold values, documenting these values in the test plan, and then comparing the resource values observed during testing against the defined thresholds. Resource monitoring can range from a human observing the output of the top utility on a Linux-based target to automated retrieval of SNMP values for targets that support SNMP.

6. Test harness integration. Builder organizations will initially run fuzz testing tools manually. Over time, however, usage will naturally migrate to automatic fuzz testing as part of an overall automated testing process. This integration is a sign of maturity in an organization's use of fuzzing. Given the testing times required at higher FTMM levels, test harness integration and test automation are crucial.
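The valid case instrumentation described above can be sketched as follows (a minimal sketch; the `send` and `is_valid_response` callables are hypothetical stand-ins for the fuzzer's delivery mechanism and protocol logic):

```python
def fuzz_with_valid_case_instrumentation(test_cases, send, valid_message,
                                         is_valid_response):
    """Deliver test cases; after each one, probe the target with a valid
    message. Returns (index, test_case) pairs for which the target no
    longer answered the valid probe correctly, i.e. suspected failures."""
    failures = []
    for i, case in enumerate(test_cases):
        send(case)                        # deliver the anomalous test case
        reply = send(valid_message)       # follow up with a known-good message
        if not is_valid_response(reply):  # unhealthy target: record the case
            failures.append((i, case))
    return failures
```

In practice the fuzzer would also restart or reset an unhealthy target before continuing, so that later test cases are not blamed for an earlier crash.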
Such automation can also ease the transition to the parallel testing that is likely necessary to achieve higher FTMM levels. This column does not apply when fuzzing is being used as a verification and validation tool.

7. Processes. Fuzzing can be performed on any available attack vector. Testers with a basic knowledge of the target will know about at least some of the available attack vectors, but a comprehensive analysis of the attack surface of the target, in its intended configuration, is required for rigorous testing. The end result of attack surface analysis is a list of all attack vectors for the target. Note that the attack surface consists only of those attack vectors that are active in the used configuration. A target might have additional capabilities that would expose additional attack vectors, but if they are not enabled in the used configuration, they do not need to be fuzzed to achieve a specific FTMM level for the target in that configuration.

8. Documentation. A fuzzing report should include the following information:
- A summary table providing an overview of testing, including the following for each attack vector: fuzzing tool and version, test run verdict, instrumentation method, number of test cases, testing time, date of test run, and notes.
- Detailed results for each attack vector, generated from the fuzzing tool, including data for each test case delivered to the target (such as test case verdict, time, duration, and amount of output and input) and the log of the fuzzer.

Documentation is a crucial component of effective, repeatable fuzzing. The test plan can be adapted from the attack surface analysis and should include information about the tools and techniques that will be used for testing. The baseline test configuration should include information about the test bed, the target configuration, and the fuzzer configuration.

CONCLUSION

All software contains vulnerabilities.
Like death and taxes, software vulnerabilities are inescapable. You can, however, significantly reduce your risk by building a strong, dynamic security process that produces more robust networks and gives you control over their security, making it harder and more time-consuming for adversaries to find exploitable vulnerabilities. Finding and fixing more vulnerabilities increases the overall security and robustness of your target and reduces your risk profile. This process and maturity model gives software builders and buyers a standard scale for describing the fuzz testing performed on target software and the associated risks.
References

1. "SDL Process: Verification." Microsoft. 27 Oct. 2013. http://www.microsoft.com/security/sdl/process/verification.aspx
2. Knudsen, Jonathan. "Make Software Better with Fuzzing." ISSA Journal, July 2013. http://www.codenomicon.com/news/editorial/Make%20Software%20Better%20with%20Fuzzing.pdf
3. Knudsen, Jonathan. "That Warm, Fuzzy Feeling...and How You Can Get It." Professional Tester, April 2012. http://www.codenomicon.com/news/editorial/professional_tester_0412_that_warm_fuzzy_feeling.pdf
4. Knudsen, Jonathan, and Mikko Varpiola. "Fuzz Testing Maturity Model." Codenomicon. http://www.codenomicon.com/resources/Fuzz%20Testing%20Maturity%20Model.pdf
5. Takanen, Ari, et al. Fuzzing for Software Security Testing and Quality Assurance. Artech House, 2008.

Disclaimer

The contents of this paper have been gleaned from open literature, mostly the Internet. The material for this paper has been acknowledged through references. However, if any references have been omitted, the omissions are unintentional and are by no means meant to violate the copyrights or intellectual property rights of the original authors.