
LEARNER’S HANDBOOK
FOREWORD
The discipline of Cyber Security is one which continues to evolve rapidly in scope, complexity and needs. With the concerted push towards digitization and the rising sophistication of cybercrime, the need for a comprehensive plan to secure and safeguard the information infrastructure, personal identity and digital assets has never been more pronounced than it is today. The only way this massive ask gets addressed is by ensuring the availability of an adequate pool of cyber professionals who are well equipped with the core technical expertise that enables them to effectively take up the various roles that today's enterprises are creating as part of their elaborate strategies and initiatives for cyber preparedness, covering, among other things, networks, applications, devices and identities.
Thanks to the capacity building and skilling efforts that are underway across the country for this niche
domain, there is a substantial pool of Cyber Security professionals who have been working on furthering the
Cyber Security agenda of their respective organizations and in turn the country’s cyberspace. Having said
that, efforts have to be scaled up to bridge the gap between the demand and the supply side. Currently, the engineering graduates who are joining the workforce do not have adequate exposure to Cyber Security, and it is precisely this gap that needs to be bridged. DSCI and Sector Skills Council, NASSCOM, have been working with Industry in accordance with a job-role-focused framework, with the intent to step up industry readiness on cybersecurity, which will be needed by governments and businesses of all shapes and sizes.
There is another dimension to this whole scenario: that of having a workforce which is balanced from a diversity standpoint. At present, the situation is rather skewed, wherein very few women are part of the country's Cyber Security workforce, which could be due to a lack of awareness and opportunities in this area, as well as insufficient exposure to training courses.
Microsoft-DSCI CyberShikshaa is an endeavour that intends to enhance women's representation in the Cyber Security workforce and prime them for taking up various job roles in Industry as well as the government sector. This courseware has been prepared with the intent of providing a reference and a baseline for CyberShikshaa trainers as well as trainees. Given the massive scope of this realm, a single course guide may not be feasible. The courseware covers the fundamental aspects of different modules, viz. System Fundamentals, Introduction to Cyber Security, Network Security, Application Security, Security Auditing & Cyber Forensics. Good knowledge of security issues, continuous learning about emerging threats, and a proactive protect-detect-respond approach are three key hallmarks of any Cyber Security professional. The CyberShikshaa candidates will find this material useful from a guidance perspective; their learning should certainly be augmented by hands-on training, labs, a practical case-study-based approach and online content.

Happy learning to the CyberShikshaa students.

Keshav Dhakad
Asst. General Counsel, Microsoft India

Rama Vedashree
CEO, Data Security Council of India

Table of Contents
UNIT 1 INTRODUCTION TO CYBER SECURITY 1
1.1 FUNDAMENTALS OF INFORMATION SECURITY 3
1.2 THREATS, ATTACK CATEGORIES & HACKING 10
1.3 CYBER SECURITY CONTROLS 16
1.4 INTRODUCTION TO NETWORK SECURITY 25
1.5 INTRODUCTION TO IDAM 29
1.6 INTRODUCTION TO CYBER FORENSICS 33
1.7 INTRODUCTION TO APPLICATION SECURITY 37
1.8 INTRODUCTION TO DATA, DATA CENTRE AND CLOUD SECURITY 39

UNIT 2 CRYPTOGRAPHY 51
2.1 BASICS OF CRYPTOGRAPHY 53
2.2 DES AND AES 65
2.3 THE RSA ALGORITHM 70
2.4 HASH FUNCTION 71
2.5 SOME CRYPTOGRAPHIC TOOLS 74
2.6 APPLICATION OF CRYPTOGRAPHY 82

UNIT 3 NETWORK SECURITY 109


3.1 NETWORKS AND THEIR VULNERABILITIES 111
3.2 NETWORK SECURITY MEASURES 123
3.3 INTRUSION DETECTION & PREVENTION SYSTEM 139
3.4 IMPLEMENTATION OF A FIREWALL 142
3.5 SECURITY INFORMATION AND EVENT MANAGEMENT (SIEM) 150

UNIT 4 APPLICATION SECURITY 165


4.1 IDENTIFYING APPLICATION SECURITY RISKS 167
4.2 COUNTERMEASURES 190
4.3 OWASP TOP 10 195
UNIT 5 SECURITY AUDITING 205
5.1 AUDIT PLANNING (SCOPE, PRE-AUDIT PLANNING, DATA GATHERING, AUDIT RISK) 207
5.2 RISK ANALYSIS 209
5.3 PHASE APPROACH – RISK ASSESSMENT 220

UNIT 6 CYBER FORENSICS 225


6.1 INTRODUCTION TO CYBER FORENSICS 227
6.2 FIRST RESPONSE 237
6.3 FORENSIC DUPLICATION 244
6.4 STANDARD OPERATING PROCEDURES FOR DISK FORENSICS 254
6.5 MOBILE AND CDR FORENSICS 273
UNIT 1
INTRODUCTION TO CYBER
SECURITY

At the end of this unit you will be able to:

• Explain the relevance of cyber security in society
• Explain basic cyber security principles and concepts
• Describe various types of threats and attacks
• Describe the commonly used cyber security controls
• Provide a brief introduction to key domains of cyber security

1.1 FUNDAMENTALS OF INFORMATION SECURITY

1.1.1 Importance of Information Security


The world today is generating and using a massive amount of digital information, much of which is confidential. But with the increase in digital information there has been a corresponding increase in incidents of information theft, including cyber-attacks by hackers. This has happened both in governments and in private companies.

Cybercrime has become one of the fastest growing crimes in the digital environment as advanced technologies
continue to progress by offering high speed, ease of usage and connectivity. Cybercrime continues to diverge into
different paths as time passes. More and more criminals are exploiting the advantages provided by modern day
technology in order to perpetrate a diverse range of criminal activities using digital devices.

Due to this, the field of information/cyber security has seen significant growth in recent times. Incidents of information theft from large companies like Target, Sony and Citibank have shown the risks and challenges in this field and underscore the growing need for information/cyber security professionals. We are also witnessing rising levels of data leakage from governments, businesses and other organizations, families and individuals.

The key objective of Information/Cyber Security is the protection of information and its critical elements, including
the systems and hardware that are involved in the creation, use, storage, transmission and deletion of the information.
It is used to protect information from unauthorized access, use, disclosure, disruption, modification or destruction.

By selecting and applying appropriate safeguards, Information/Cyber Security helps organizations in protecting their
resources, reputation, legal position and other tangible and intangible assets.

Information/Cyber security tracks


Information/Cyber security comprises the following tracks:

1. Network Security
2. Application Security
3. Data Protection and Privacy
4. Identity & Access Management
5. Cyber Assurance / GRC
6. IT Forensics
7. Incident Management
8. BCM/DR
9. End Point Security
10. Security Operations
11. Industrial Control Security


A brief description is as follows:

1. Network Security: to protect networking components, connections, and contents from unauthorized access, misuse, malfunction, modification, destruction, or improper disclosure.

2. Application Security: to protect various applications or the underlying system (vulnerabilities) from external threats or flaws in the design, development, deployment, upgrade, or maintenance.

3. Data Protection and Privacy: to prevent unauthorized access to computers, databases and websites and protect data from corruption. It also includes protective digital privacy measures.

4. Identity & Access Management: to enable the right individuals to access the right resources at the right times for the right reasons by authentication and authorisation of identities and access.

5. Cyber Assurance / GRC: to develop and administer processes for Governance, Risk and Compliance.

6. IT Forensics: to collect, analyse and report on digital data in a way that is legally admissible. It can be used in the detection and prevention of crime and in any dispute where evidence is stored digitally.

7. Incident Management: to manage information security incidents and identify, analyze, and correct hazards to prevent a future re-occurrence.

8. BCM/DR: to develop and administer processes for creating systems of prevention and recovery to deal with potential threats to a company, thus protecting an organization from the effects of significant negative events.

9. End Point Security: to protect the corporate network when accessed via remote devices such as laptops or other wireless and mobile devices. Each device with a remote connection to the network creates a potential entry point for security threats.

10. Security Operations: to monitor, assess and defend enterprise information systems (web sites, applications, databases, data centers and servers, networks, desktops, etc.).

11. Industrial Control Security: to secure control systems used in industrial production, including supervisory control and data acquisition (SCADA) systems, distributed control systems (DCS), and other smaller control system configurations such as programmable logic controllers (PLC) often found in the industrial sectors and critical infrastructure.


The key concerns in information assets security are:

• theft
• fraud/ forgery
• unauthorized information access
• interception or modification of data and data
management systems

These concerns materialize in the event of a breach caused by the exploitation of a vulnerability.

Vulnerabilities
• A vulnerability is a weakness in an information system, system security procedures, internal controls or implementations that may be exploited or triggered by a threat source.
• ‘Threat agent or actor’ refers to the intent and method targeted at the intentional exploitation
of the vulnerability or a situation and method that may accidentally trigger the vulnerability.
• A ‘Threat vector’ is a path or a tool that a threat actor uses to attack the target.
• ‘Threat targets’ are anything of value to the threat actor such as PC, laptop, PDA, tablet,
mobile phone, online bank account or identity.

Information States

Information has three basic states: at any point in time, information is either being transmitted, stored or processed. This is so irrespective of the medium in which the information resides.

Fig 1.1: The three information states: Transmission, Processing and Storage

Information systems security concerns itself with the maintenance of three critical characteristics of information: confidentiality, integrity and availability. These characteristics represent all the security concerns in an automated environment. All organisations are concerned about these, irrespective of their outlook on sharing information.

5
Unit 1 - Introduction to Cyber Security

1.1.2 CIA Triad


Security concerning IT and information is normally categorized into three categories to facilitate the management of information:

• Confidentiality: prevention of unauthorized disclosure or use of information assets
• Integrity: prevention of unauthorized modification of information assets
• Availability: ensuring authorized access to information assets when required, for the duration required

Fig 1.2: The three characteristics of information

The triad shows the three goals of information security: confidentiality, integrity and availability. Information is
protected when all the three tenets are put together.
1. The first tenet of the information security triad is confidentiality.
Confidentiality is defined by ISO-17799 as “ensuring that information is accessible only to those authorized
to have access to it.”
This can be one of the most difficult tasks to ever undertake. To attain confidentiality, we have to keep
secret information secret. People from both inside and outside the organization will be threatening to
reveal the secret information.
2. The second tenet of the information security triad is integrity. Integrity is defined by ISO-17799 as “the
action of safeguarding the accuracy and completeness of information and processing methods.”
This means that when a user requests any type of information from the system, the information provided would be correct.
3. The last tenet of the information security triad is availability.
ISO-17799 defines availability as ensuring that authorized users have access to information and associated
assets when required. This means that when a user needs a file or system, the file or system is there to be
accessed. This seems simple enough, but there are so many factors working against system availability.
There are hardware failures, natural disasters, malicious users, and outside attackers all fighting to remove
the availability from systems. Some common mechanisms to fight against this downtime include fault-
tolerant systems, load balancing, and system failover.

1.1.3 Cyber Security Principles


Risk is the potential for a threat to be realized; the process of understanding and responding to factors that may lead to a failure in the confidentiality, integrity or availability of an information system constitutes risk management.

6
Unit 1 - Introduction to Cyber Security

Prevention vs. detection


Security efforts to assure confidentiality, integrity and availability can be divided into those oriented on prevention
and those focused on detection. The latter aims to rapidly discover and correct lapses that could not be (or at least
were not) prevented. The balance between prevention and detection depends on the circumstances and available
security technologies.

Basic information/cyber security concepts

Given below are some terms that relate to basic information/ cyber security concepts:

1. Identification
2. Authentication
3. Authorisation
4. Confidentiality
5. Integrity
6. Availability
7. Non-repudiation

• Identification is the first step in the ‘identify-authenticate-authorise’ sequence that is performed every day
countless times by humans and computers alike when access to information or information processing resources
are required. While the particulars of the identification systems differ depending on who or what is being identified,
some intrinsic properties of identification apply regardless of these in particular. Just three of these properties are
scope, locality and uniqueness of IDs.
Identification name spaces can be local or global in scope. To explain this concept, let's use the familiar notation
of email addresses. While many email accounts named Sameer may exist around the world, an email address
sameer@[Link] must refer exactly to one such user in the [Link] locality. If the company in question is a small
one, then maybe only one employee is named Sameer. His colleagues may refer to that certain person by only
using his first name. That would work because the colleagues are in the same locality and only one Sameer works
there. However, if Sameer was someone in another country or even from the other end of the town, to refer to
Sameer@[Link] as simply Sameer would make no sense because username Sameer is not globally unique and
would indicate different persons in different localities. This is one of the reasons why two user accounts should
never use the same name on the same system. This will ensure that access controls are not based on names that
can be misinterpreted and also that there is ease of establishing accountability for user actions.

• Authentication happens right after identification and before authorization. It verifies authenticity of the identity
declared at the identification stage. The three methods of authentication are what you know, what you have and
what you are. Regardless of the authentication method used, the aim is to obtain reasonable assurance that the
identity declared at the identification stage belongs to the party in communication.


Reasonable assurance could mean different degrees of assurance, depending on the environment and application.
Therefore, one may require different approaches towards authentication. Authentication requirements of a national
security system would be critical and would naturally differ from authentication requirements of a small company.
As different authentication methods have different costs and properties as well as different returns on investment,
the choice of authentication method for a system or organisation should be made after these factors have been
carefully considered.

• Authorisation
Authorisation is the process of ensuring that a user has sufficient rights to perform the requested operation. It
also is the process of preventing others, who do not have sufficient rights, from doing the same. For this, after the
users declare their identity at the identification stage and prove it at the authentication stage, they are assigned a set of authorizations (rights, privileges or permissions) that define what they can do on the system. These authorisations are usually defined by the system's security policy and are implemented by the security system administrator. These privileges may range from one extreme of "permit nothing" to the other extreme of "permit everything" and include anything in between.

• Confidentiality
It means that only persons authorised to do so can access, receive or use information, documents, etc. Unauthorised access to confidential information may have devastating consequences not only in national security applications, but
also in commerce and industry. Main mechanisms of protection of confidentiality in information systems are
cryptography and access controls. Examples of threats to confidentiality are malware, intruders, social engineering,
insecure networks and poorly administered systems.

• Integrity
It is concerned with trustworthiness, origin, completeness and correctness of information as well as prevention of
improper or unauthorised modification of information. Integrity in the information/cyber security context refers not only to the integrity of the information itself but also to origin integrity, i.e. the integrity of the source of the information.
Integrity protection mechanisms may be grouped into two broad types: preventive mechanisms, such as access
controls that prevent unauthorised modification of information and detective mechanisms, which are intended
to detect unauthorised modifications when preventive mechanisms have failed. Controls that protect integrity
include principles of least privilege, separation and rotation of duties.
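
As an illustration of a detective integrity mechanism, the short Python sketch below computes and later re-checks a cryptographic checksum; the file name is an illustrative assumption:

    import hashlib

    def sha256_of(path: str) -> str:
        # Stream the file in chunks so large files do not exhaust memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # Baseline pass: record known-good checksums (e.g. right after deployment).
    baseline = {"config.ini": sha256_of("config.ini")}

    # Later verification pass: any mismatch signals unauthorised modification.
    for path, good in baseline.items():
        if sha256_of(path) != good:
            print(f"ALERT: integrity violation detected in {path}")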

• Availability
Availability of information, although usually mentioned last, is not the least important pillar of information/cyber security. If the authorised users of the information cannot access and use it, then what is the use of having that information at all? Therefore, even though availability is the last item in the C-I-A triad, it is just as important and as necessary as confidentiality and integrity. Attacks against availability are known as Denial of Service (DoS) attacks. Natural and manmade disasters also affect availability. While natural disasters are infrequent, they have a severe impact. Human errors are frequent but usually not as severe. Business continuity and disaster recovery planning (which at the very least includes regular and reliable backups) are used to minimize the losses when availability is lost.

• Non-repudiation
In the information/cyber security context, it refers to one of the properties of cryptographic digital signatures that
offer the possibility of proving whether a message has been digitally signed by the holder of a digital signature’s
private key.

Non-repudiation is fast becoming very important due to the growth of electronic commerce. However, it can also be controversial: an owner of a digital signature may maliciously repudiate a legitimate transaction by claiming that his/her digital signature key was stolen.


The following types of non-repudiation services are defined in international standard ISO 14516:2002 (guidelines for
the use and management of trusted third party services).
• Approval: non-repudiation of approval provides proof of who is responsible for approval of the contents of a
message.
• Sending: non-repudiation of sending provides proof of who sent the message.
• Origin: non-repudiation of origin is a combination of approval and sending.
• Submission: non-repudiation of submission provides proof that a delivery agent has accepted the message for
transmission.
• Transport: non-repudiation of transport provides proof for the message originator that a delivery agent has
delivered the message to the intended recipient.
• Receipt: non-repudiation of receipt provides proof that the recipient received the message.
• Knowledge: non-repudiation of knowledge provides proof that the recipient recognized the content of the
received message.
• Delivery: non-repudiation of delivery is a combination of receipt and knowledge, as it provides proof that the
recipient received and recognized the content of the message.
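
Digital signatures are the mechanism behind several of these services. The sketch below, using the third-party Python 'cryptography' package, illustrates non-repudiation of origin: only the private-key holder can produce a signature, yet anyone with the public key can verify it. The message content is invented for illustration:

    # Requires the third-party 'cryptography' package (pip install cryptography).
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.exceptions import InvalidSignature

    # The signer's private key never leaves the signer; the public key is shared.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    message = b"Transfer Rs. 10,000 to account 1234"
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # Only the holder of the private key can produce this signature...
    signature = private_key.sign(message, pss, hashes.SHA256())

    # ...but anyone with the public key can verify it, letting a third party
    # attribute the message to the key holder (non-repudiation of origin).
    try:
        public_key.verify(signature, message, pss, hashes.SHA256())
        print("signature valid: origin is the private-key holder")
    except InvalidSignature:
        print("signature invalid: message forged or altered")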

What type of security is associated with each level of the OSI model?

We know that the OSI reference model for networking is designed around seven layers arranged in a stack. The OSI
security architecture reference model (ISO 7498-2) is also designed around seven layers, reflecting a high level
view of the different requirements within Information Security.

The seven security services shown in the model are authentication, access control, non-repudiation, data integrity, confidentiality, assurance/availability, and notarization/signature. Typical security measures at each group of OSI layers include:

• Application, Presentation and Session layers: user account management to control access, Host Intrusion Detection Systems, rule-based access control, digital certificates, encrypted (and safely stored) passwords, and timers to limit the number of attempts that may be made to establish a session
• Transport layer: limiting access to the transmission protocols and strong firewall protection
• Network layer: route and anti-spoofing filters in conjunction with strongly configured firewalls
• Data Link layer: filtering MAC addresses and ensuring all wireless applications have authentication and encryption built in
• Physical layer: biometric authentication, electromagnetic shielding, and advanced locking mechanisms

Fig 1.3: OSI Security Architecture: some security measures for each layer


1.2 THREATS, ATTACK CATEGORIES & HACKING

1.2.1 Vulnerability, Threat and Risk


We have been using the terms vulnerability, threat and risk a number of times in this book so far. Readers often do not understand the difference and may use them interchangeably. However, in the field of security these terms have distinct meanings and it is very important to understand each one. Security of assets is the main focus. The relationship between assets, vulnerabilities, threats and risks can be stated as follows:

Asset + Threat + Vulnerability = Risk


i.e. Risk is a function of threats exploiting vulnerabilities to obtain,
damage or destroy assets.

Accurately assessing threats and identifying vulnerabilities is critical to understanding the risk to assets. Understanding
the difference between threats, vulnerabilities, and risk is the first step.
Let us look at each of these terms separately.

Assets

Assets are the people, property and information that are valuable to us and that we are trying to protect.
• People may include employees and customers along with other invited persons such as contractors or guests.
• Property assets consist of both tangible and intangible items that can be assigned a value. Intangible assets
include reputation and proprietary information.
• Information may include databases, software code, critical company records, and many other intangible items.

Vulnerabilities

A vulnerability is a weakness or gap in our protection efforts. These weaknesses or gaps can be exploited by threats
to gain unauthorized access to an asset. These are security flaws in a system that allow an attack to be successful.
Vulnerabilities can be treated: weaknesses should be identified and proactive measures taken to correct the identified vulnerabilities.

Therefore, vulnerability testing is performed on an ongoing basis by the people responsible for resolving such
vulnerabilities. It helps to provide data used to identify unexpected dangers to security that need to be addressed.
Testing for vulnerabilities is useful for maintaining ongoing security, allowing the people responsible for the security
of one’s resources to respond effectively to new dangers as they arise.

Threats

Threats are anything that can exploit a vulnerability, intentionally or accidentally and obtain, damage, or destroy an
asset. A threat is what we’re trying to protect against. It refers to the source and means of a particular type of attack.
Threats generally cannot be controlled. One cannot stop the efforts of an international terrorist group, prevent a hurricane, or tame a tsunami in advance. However, threats need to be identified so that measures can be taken to protect the asset.


A threat assessment is performed to determine the best approaches to securing a system against a particular threat,
or class of threat. Penetration testing exercises are substantially focused on assessing threat profiles, to help one
develop effective countermeasures against the types of attacks represented by a given threat.
Analyzing threats can help one develop specific security policies to implement in line with policy priorities and
understand the specific implementation needs for securing one’s resources.

Risks
Risks are the potential for loss, damage or destruction of an asset as a result of a threat exploiting a vulnerability. Risk
is the intersection of assets, threats, and vulnerabilities. The term “risk” refers to the likelihood of being targeted by a
given attack, of an attack being successful, and general exposure to a given threat.

Risk can be mitigated. Risk can be managed to lower either the vulnerability or the overall impact on the business.
A risk assessment is performed to determine the most important potential security breaches to address now, rather
than later. One enumerates the most critical and most likely dangers, and evaluates their levels of risk relative to each
other as a function of the interaction between the cost of a breach and the probability of that breach.
Analyzing risk can help one determine appropriate security budgeting, for both time and money, and prioritize security policy implementations so that the most immediate challenges can be resolved most quickly.
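
The interaction between the cost of a breach and the probability of that breach can be made concrete with a small ranking sketch in Python. The scenarios, costs and probabilities below are invented for illustration:

    # Hypothetical findings: (scenario, estimated breach cost, annual probability).
    findings = [
        ("Unpatched web server exploited",  500_000, 0.30),
        ("Laptop with customer data lost",  200_000, 0.15),
        ("Data centre flooded",           2_000_000, 0.01),
    ]

    # Expected annual loss: cost of a breach x probability of that breach.
    ranked = sorted(findings, key=lambda f: f[1] * f[2], reverse=True)

    for scenario, cost, prob in ranked:
        print(f"{scenario:35s} expected loss/year: {cost * prob:10,.0f}")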

1.2.2 Types of Attacks

Microsoft has proposed a threat classification called STRIDE, from the initials of the threat categories:

• Spoofing of user identity
• Tampering
• Repudiation
• Information disclosure (privacy breach or data leak)
• Denial of Service (DoS)
• Elevation of privilege
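
Each STRIDE category can be read as the violation of one security property; the mapping below is the commonly used one, and a small lookup table of this kind is handy when classifying findings:

    # STRIDE category -> security property it violates (standard mapping).
    STRIDE = {
        "Spoofing":               "Authentication",
        "Tampering":              "Integrity",
        "Repudiation":            "Non-repudiation",
        "Information disclosure": "Confidentiality",
        "Denial of service":      "Availability",
        "Elevation of privilege": "Authorisation",
    }

    print(STRIDE["Tampering"])   # -> Integrity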

Threat agents (individuals and groups) can be classified as follows:

• Non-Target specific: Non-Target specific threat agents are computer viruses, worms, Trojans and logic
bombs.
• Employees: staff, contractors, operational/ maintenance personnel or security guards who are annoyed
with the company.
• Organized crime and criminals: criminals target information that is of value to them, such as bank
accounts, credit cards or intellectual property that can be converted into money. Criminals will often make
use of insiders to help them.
• Corporations: corporations are engaged in offensive information warfare or competitive intelligence.
Partners and competitors come under this category.
• Unintentional human error: accidents, carelessness etc.
• Intentional human error: insider, outsider etc.
• Natural: Flood, fire, lightning, meteor, earthquakes etc.


Attacks

What are the different types of attacks?

The common types of attacks can be classified as follows: Network Attacks, Application Attacks, Phishing Attacks and Malware.

Let us look at each one of these.

Network Attacks

• Watering hole attack - This is a more complex, staged variant of a phishing attack. Instead of the usual way of sending spoofed emails to end users in order to trick them into revealing confidential information, attackers use multiple staged approaches, such as compromising websites the targeted users are known to visit, to gain access to the targeted information.
• Eavesdropping - Network communications usually occur in an unsecured or “cleartext” format, which
allows an attacker who has gained access to data paths to “listen in” or interpret (read) the traffic. When an
attacker is eavesdropping it is referred to as sniffing or snooping. The ability of an eavesdropper to monitor
the network is generally the biggest security problem that administrators face in an enterprise. Without
strong encryption services that are based on cryptography, data can be read by others as it travels through
the network.
• Spoofing - It is a technique used to masquerade a person, program or an address as another by falsifying data, with the purpose of gaining unauthorized access.
• Network Sniffing (Packet Sniffing) - A process to capture the data packets travelling in the network.
Network sniffing can be used by IT professionals to analyse and monitor traffic, for example in order to find unexpected suspicious traffic, but also by perpetrators to collect data sent in clear text, which is easily readable with the use of network sniffers (protocol analysers). The best countermeasure against sniffing is the use of encrypted communication between the hosts.
• Data Modification - After an attacker has read the data, the next logical step is to alter it. An attacker can
modify the data in the packet without the knowledge of the sender or receiver. No-one wants any of their
messages to be modified in transit.
• Denial of Service attack - An attack designed to cause a disturbance or elimination of the services of a particular host/server by flooding it with large quantities of useless traffic or external communication requests. After a DoS attack succeeds, it becomes impossible for the server to answer even legitimate requests; this can be observed in a variety of ways: slow response of the server, slow network performance, unavailability of software or a web page, or inability to access data, a website or other resources. A Distributed Denial of Service attack (DDoS) occurs when many different infected systems (a botnet) flood a specific host with traffic simultaneously.
• Man-in-the-middle attack - The attack takes the form of active monitoring or eavesdropping on a victim's connections and the communication between victim hosts. This form of attack involves interaction between both victim parties, who carry on the communication, and the attacker. It is achieved by the attacker intercepting all or part of the communication, changing its content and sending it back as legitimate replies.
• Compromised-Key Attack - A key is a secret code or number necessary to interpret secured information.
Although obtaining a key is a difficult and resource-intensive process for an attacker, it is possible. After an
attacker obtains a key, that key is referred to as a compromised key. An attacker uses the compromised key
to gain access to a secured communication without the sender or receiver being aware of the attack. With
the compromised key, the attacker can also decrypt or modify data.


Application Attacks

• Injection - Injections let attackers modify a back-end statement or command through unsanitized user input. It is the most common type of Application Layer attack (a parameterised-query sketch that counters SQL injection follows this list).
• Cross-Site Scripting - Cross-site scripting is a type of vulnerability that lets attackers insert JavaScript into the pages of a trusted site. By doing so, they can completely alter the contents of the site to do their bidding.
• Buffer overflow attack - In this type of attack the victim host is provided with traffic/data that is out of the range of the processing specifications of the victim host, protocols or applications, overflowing the buffer and overwriting adjacent memory. One example is the well-known Ping of Death attack, where a malformed ICMP packet with a size exceeding the normal value can cause a buffer overflow.
• Trojan Horse - Trojan horses are fake programs which pretend to be original programs. Since they can replicate much of the application-level behavior of an application, a Trojan horse is one of the most common ways to launch application layer attacks.
• HTTP flood - HTTP flood is a type of layer 7 application attack hitting web servers, abusing the GET requests used to fetch information, as in URL data retrievals during SSL sessions. Attackers send GET or POST requests to a target web server that are specifically designed to consume considerable resources. Bots may then start from a given HTTP link and follow all links on the provided website in a recursive way.
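
As promised above, here is a minimal sketch of the parameterised-query countermeasure against injection, using Python's standard sqlite3 module. The table and values are invented for illustration:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('sameer', 's3cret')")

    user_input = "x' OR '1'='1"   # a classic injection payload

    # UNSAFE: string concatenation lets the payload rewrite the query logic.
    # query = "SELECT secret FROM users WHERE name = '" + user_input + "'"

    # SAFE: a parameterised query treats the input strictly as data.
    rows = conn.execute("SELECT secret FROM users WHERE name = ?",
                        (user_input,)).fetchall()
    print(rows)   # [] - the payload matches no user instead of dumping the table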

Phishing Attacks

• Phishing attack - This type of attack uses social engineering techniques to steal confidential information. Such attacks most commonly target the victim's banking account details and credentials. Phishing attacks tend to use schemes involving spoofed emails sent to users that lead them to malware-infected websites designed to appear as real online banking websites.
• Social phishing - In recent years, phishing techniques have evolved to include social media like Facebook or Twitter. This type of phishing is often called social phishing. The purpose remains the same: to obtain confidential information and gain access to personal files.
• Spear phishing attack - This type of phishing attack is targeted at specific individuals, groups of individuals or companies. Spear phishing attacks are performed mostly with the primary purpose of industrial espionage and the theft of sensitive information, while ordinary phishing attacks are directed against the wider public with the intent of financial fraud.
• Whaling – It is a type of phishing attack specifically targeted at senior executives or other high profile
targets within a company.
• Vishing (Voice Phishing or VoIP Phishing) - It is the use of social engineering techniques over the telephone system to gain access to confidential information from users. This phishing attack is often combined with caller ID spoofing, which masks the real source phone number and instead displays a number familiar to the phishing victim or known to belong to a real banking institution.

Malware

• Virus - A virus is a malicious program able to inject its code into other programs/applications or data files, and the targeted areas become "infected". Installation of a virus happens without the user's consent, and it spreads in the form of executable code transferred from one host to another. Types of viruses include resident, non-resident, boot sector, macro, file-infecting (file-infector), polymorphic, metamorphic, stealth, companion and cavity viruses.
• Worm - A worm is a category of malicious program that exploits operating system vulnerabilities to spread itself. In its design, a worm is quite similar to a virus, and is even considered its sub-class. Unlike viruses, though, worms can reproduce/duplicate and spread by themselves; during this process a worm does not need to attach itself to any existing program or executable. The types of worms, based on their method of spreading, are email worms, internet worms, network worms and multi-vector worms.


• Trojan - Computer Trojans or Trojan Horses are named after the mythological Trojan horse owing to the similarity in their operation strategy. Trojans are a type of malware that masquerades as a benign or even useful application but will actually damage the host computer after its installation. Unlike viruses, Trojans do not self-replicate; they rely on the end user to install them.

1.2.3 Hacking
Hacking is attempting to find security gaps and exploit a computer or network system to gain access and/or control
over the systems. Hackers are highly intelligent and skilled in computers, networks, programming and the use of hacking tools. They could hack systems and commit criminal acts such as privacy invasion, theft of corporate/personal
data, frauds, etc. Sometimes, organisations use the skills of hackers to help them improve the security of their systems
by identifying loopholes and weaknesses in their security systems.

There are various kinds of hackers as mentioned below:

White Hat Hackers


White hat hackers are the certified and authorised hackers, who are also called ethical hackers or penetration
testers. They are experts helping organisations and Governments in testing and identifying the gap and flaws
in their cybersecurity, so that the same can be plugged. They break into systems by creating algorithms and
performing multiple methodologies. By doing this they assist in strengthening the systems and protecting them
from malicious hackers.

Black Hat Hackers


Black Hat Hackers are unauthorized hackers who could be hacking with a malicious intent for unethical reasons.
They are also called Crackers. Their activities could involve cracking the security of systems, gaining unauthorised
access, damaging information, stealing private information, attacking web pages or entire servers, introducing
viruses into certain software, etc.
They usually have a good knowledge of hacking tools, computer networking, network protocols and system
administration of various operating systems. They would also possess good programming and scripting skills.

Grey Hat Hackers


A grey hat hacker is in between a white hat hacker and a black hat hacker. This means that they use their skills for legal as well as illegal acts, however, not for personal gain. They mostly do this to prove to themselves and others that they can accomplish the feat. If they aim to gain anything else for themselves, they become black hat hackers. A grey hat hacker may end up identifying flaws in software security systems and could share these with the authorities and be compensated for it; however, they still run the risk of punishment, as it was an unauthorised attempt.

Script Kiddies
Script Kiddies are amateur hackers, who may not be very skilled, or may be doing this just for the fun of it or to
impress their friends. They download off-the-shelf tools and codes and are not very concerned about learning the
science and the art of hacking. They are also quite dangerous because they do not fully understand the repercussions
of their actions and could end up doing a lot of damage just for fun.
A particular type of script kiddie is the Blue Hat Hacker, whose key agenda is to take revenge on anyone who makes them angry. Like script kiddies they do not want to learn, but use simple cyber attacks like flooding an IP address with overloaded packets, resulting in DoS attacks.


Green Hat Hacker


This type of hacker is one who is new to the world of hacking. A green hat hacker is usually responsible for no real activity but is easily recognizable by the intent to learn and understand how it all works. Green hat hackers are often part of large learning communities online, where they watch videos and tutorials on how to make it big. These are the newbie hackers. They are also amateurs in the world of hacking, but they are a bit different from script kiddies: they care about hacking and strive to become full-blown hackers. They are inspired by experienced hackers and ask them questions.

Hacktivist
Hacktivists are protestors on the internet who may have political intentions. Instead of carrying placards and marching in the streets to call attention to social causes, they deface websites and upload promotional material, so that viewers receive information about the cause they propagate, delivered anonymously. They use the same
knowledge, skills and tools of a black hat hacker but with the objective of getting public attention to a political
matter. They could also extract unauthorised information from government or organisational sources and make it
public, acting like a whistleblower.

Red Hat Hackers


Red Hat Hackers are similar to white hat hackers and are called eagle-eyed hackers. Their agenda is to stop black hat hackers, and they will do anything to stop them. Instead of merely identifying and reporting them, they go the extra mile of taking down the black hat hacker completely. They may aggressively launch a series of cyber attacks and malware on the hacker. Their objective is to destroy the effort of each malicious hacker type, bring down their entire infrastructure and take them out of business.


1.3 CYBER SECURITY CONTROLS

1.3.1 What are Cyber Security Controls?


Anyone who uses digital devices is vulnerable to security incidents and data breaches and needs to strengthen their
defenses to protect their critical data assets. Security controls are technical or administrative safeguards or countermeasures to avoid, counteract or minimize loss or unavailability due to threats acting on their matching vulnerability,
i.e., security risk.
The primary objective of security controls is to help users to manage their risk and protect their critical data assets
from intrusions, security incidents and data loss.

Fig 1.4: Security controls are countermeasures for managing risks

Security threats can affect an institution by the exploitation of numerous types of vulnerabilities. No single control or
security device can completely protect a system that is connected to a public network. Effective security will require
the establishment of layers of various types of controls, monitoring, and testing methods.


Types of Controls

Central to information/cyber security is the concept of controls, which may be categorized by the following:

By functionality: preventive, detective, corrective, deterrent, recovery and compensating.

By plane of application: physical, administrative and technical.

By functionality:

Preventive controls

Preventive controls are the first controls met by an adversary. These try to prevent security violations and enforce
access control. Like other controls, these may be physical, administrative or technical. Doors, security procedures
and authentication requirements are examples of physical, administrative and technical preventive controls
respectively.

Detective controls

Detective controls are in place to detect security violations and alert the defenders. They come into play when preventive controls have failed or have been circumvented, and are no less crucial than preventive controls. Detective controls include cryptographic checksums, file integrity checkers, audit trails and logs, and similar mechanisms.

Corrective controls

Corrective controls try to correct the situation after a security violation has occurred. Even though a violation has occurred, not all may be lost, so it makes sense to try and fix the situation. Corrective controls vary widely, depending on the area being targeted, and they may be technical or administrative in nature.

Deterrent controls

Deterrent controls are intended to discourage potential attackers. Examples of deterrent controls include notices
of monitoring and logging as well as the visible practice of sound information/cyber security management.

Recovery controls

Recovery controls are somewhat like corrective controls, but they are applied in more serious situations to
recover from security violations and restore information and information processing resources. Recovery controls
may include disaster recovery and business continuity mechanisms, backup systems and data, emergency key
management arrangements and similar controls.

Compensating controls

Compensating controls are intended to be alternative arrangements for other controls when the original controls have failed or cannot be used. When a second set of controls addresses the same threats that are addressed by another set of controls, it acts as a compensating control.


By plane of application:

Physical controls

Physical controls include doors, secure facilities, fire extinguishers, flood protection and air conditioning.

Administrative controls

Administrative controls are the organization’s policies, procedures and guidelines intended to facilitate information/
cyber security.

Technical or Logical controls

Technical or Logical controls are the various technical measures, such as firewalls, authentication systems, intrusion detection systems and file encryption, among others.

1.3.2 Logical Controls


Logical security controls help in protecting computing systems from unauthorized access and from destruction or alteration of software/application programs and data. They restrict the ability of users to access the system and also prevent unauthorized users from accessing it.
We may find logical security controls in operating systems, database management systems, application programs, or
all three. The number and type of controls used will vary with the type of operating system, database management
system, application, and telecommunication device.

They could include:

• user IDs,
• passwords with specific length, digit and character requirements,
• suspension of user IDs after successive failed sign-on attempts,
• directory and file access restrictions,
• time-of-day and day-of-week restrictions,
• specific terminal usage restrictions, etc.
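
As a small illustration of the password-requirement control in the list above, the Python sketch below validates a candidate password against an assumed policy; the actual values would come from the organisation's own security policy:

    import re

    def password_meets_policy(password: str) -> bool:
        # Illustrative policy: at least 8 characters, one digit, one upper-case
        # letter and one special character.
        return (len(password) >= 8
                and re.search(r"\d", password) is not None
                and re.search(r"[A-Z]", password) is not None
                and re.search(r"[^A-Za-z0-9]", password) is not None)

    print(password_meets_policy("secret"))      # False - too short, no digit
    print(password_meets_policy("S3cur3!pwd"))  # True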

Many systems are programmed with controls as per the degree of risk associated with the system. For example, a
high-risk money transfer processing system at a financial institution would have a lot more controls than a lower-risk
non-transactional record-keeping system at the same institution.
Yet, there are many high-risk systems that may not be programmed with adequate control features or the control
may not be implemented properly. In such cases programmers and/or process owners are not aware of one or more
of the risks faced by the organization.

1.3.3 Physical Controls


Computer hardware includes the CPU and all peripheral devices. In networked systems, these devices include all
bridges, routers, gateways, switches, modems, hubs, telecommunication media, and any other devices involved in
the physical transmission of data. These pieces of equipment must be adequately protected against physical damage
resulting from natural disasters, such as earthquakes, hurricanes, tornadoes, and floods, as well as other dangers, such
as bombings, fires, power surges, theft, vandalism, and unauthorized tampering.


Controls that protect against these threats are called physical security controls. Examples of physical security controls
include various types of locks (e.g., conventional keys, electronic access badges, biometric locks, cipher locks);
insurance coverage over hardware and the costs to re-create data; procedures to perform daily backups of system
software, application programs, and data; as well as off-site storage and rotation of the backup media (e.g., magnetic
tapes, disks, compact disks [CDs]) to a secure location; and current and tested disaster recovery programs.

Physical security controls pertain to the central processing unit and associated hardware and peripheral devices.

1.3.4 Tools and Techniques


Security Vulnerability Management

Security vulnerability management has evolved from the vulnerability assessment systems that began in the early 1990s
with the advent of network security scanner S.A.T.A.N. (Security Administrator’s Tool for Analyzing Networks). It was
followed by the first commercial vulnerability scanner from ISS. While the early tools mostly found vulnerabilities and produced reports, today's solutions deliver comprehensive discovery and support the entire security vulnerability
management lifecycle.
Vulnerabilities can exist anywhere in the IT environment. They could be the result of many different root causes. The
security vulnerability management solutions collect intelligence from the endpoint and network comprehensively
and then apply some advanced analytics to identify and even prioritise vulnerabilities that pose the maximum risks
to systems. This results in actionable data that helps the IT security teams to focus on tasks that will most quickly and
effectively reduce overall network risk with fewest possible resources.
Security vulnerability management works in a closed-loop workflow system that usually includes identifying
the networked systems and their associated applications, auditing or scanning the systems and applications for
vulnerabilities and remediating them. Any IT infrastructure component could present existing or new security
concerns and vulnerabilities. It may be a fault in the product/ component or it may be inadequate configuration.
Malicious code or unauthorised individuals may exploit these vulnerabilities to cause damage, such as disclosure of data to competitors or the use of passwords and user IDs to conduct fraud. Vulnerability management is the process of identifying those vulnerabilities and taking appropriate measures to mitigate risk.

Vulnerability assessment and management is an essential piece for managing overall IT risk because:

Persistent threats

Attacks exploiting security vulnerabilities for financial gain and criminal agendas continue to dominate headlines.

Regulation

Many government and industry regulations mandate rigorous vulnerability management practices.

Risk management

Mature organizations treat it as a key risk management component. Organizations that follow mature IT security
principles understand the importance of risk management.

Properly planned and implemented threat and vulnerability management programs represent a key element in
an organization’s information/cyber security program, providing an approach to risk and threat mitigation that is
proactive and business aligned, not just reactive and technology focused.

Vulnerability Assessment

This includes assessing the environment for known vulnerabilities and assessing IT components against the security configuration policies (by device role) that have been defined for the environment. This is accomplished through scheduled vulnerability and configuration assessments of the environment.


Network based vulnerability assessment (VA) has been the main method used in order to baseline networks, servers
and hosts. The strength of VA is its breadth of coverage.
A comprehensive and accurate vulnerability assessment can be done for managed systems by using credentialed
access. Unmanaged systems can be identified and a basic assessment can be done. It is also important to evaluate
databases and web applications for security weaknesses considering the increase in attacks that target these
components.
Database scanners are used to check database configuration and properties, and to verify whether they comply with
database security best practices. Web application scanners test an application’s logic for ‘abuse’ cases that can break
or exploit the application. There are more tools that can be used to perform more in-depth testing and analysis.
All these scanning technologies (whether it is for network, application or database) assess different types of security
weaknesses, and most organisations need to implement a combination.

Risk assessment
Larger issues should be expressed in the language of risk (e.g. ISO 27005), precisely expressing their impact in business terms. The business case for any remedial action should incorporate considerations relating to reduction of risk and compliance with policy. This forms the basis of the action to be agreed upon between the relevant line of business and the security team.

Risk analysis
‘Fixing’ the issue may involve acceptance of risk, shifting of risk to another party or reducing risk by applying remedial
action, which could be anything from a configuration change to implementing a new infrastructure (e.g., data loss
prevention, firewalls, host intrusion prevention software).
Elimination of the root cause of security weaknesses may require changes to user administration and system
provisioning processes. Many processes and often several teams may come into play (e.g., configuration management,
change management, patch management, etc.). Monitoring and incident management processes are also required
to maintain the environment.

Security Testing
Hackers or attackers are people who gain unauthorised access to an application. Their motive can range from
malicious or harmful to simple curiosity or wanting to brag/show off. There is another type of hacker, who is hired to
find out if the application can be breached. They are called 'ethical hackers'. Hackers who have malicious intent and wish to break into an application to steal data or cause damage are called 'crackers'.

Types of attacks
The most common types of attacks are:
• State sponsored attacks: State sponsored attacks are penetrations conducted by terrorist groups, foreign
governments and other outside entities.
• Advanced persistent threats: Advanced persistent threats are continuous attacks aimed at an organisation
often for political reasons.
• Ransomware: Ransomware locks data and requires the owner to pay a fee to have their data released.
• Denial of Service (DoS): Denial of Service makes an application inaccessible to its users.


How do attacks happen?


Some of the usual means by which hackers and crackers attack are:
• Cross-site scripting: cross-site scripting involves adding a JavaScript, ActiveX or HTML script into a website on the client side to obtain clients' confidential information.
• Brute force attacking: brute force attacking requires automation and is used to obtain unauthorised access
by trying large numbers and combinations of user identifications and passwords.
• Session hijacking: session hijacking is used to steal the session once a legitimate user has successfully logged
in.
• SQL injection: using SQL injection, an attacker manually edits the SQL queries that pass through URLs or text fields.
• URL manipulation: with URL manipulation, a hacker attempts to gain access by changing the URL.

What is security testing?


Security testing is validating that an application does not have code issues that could allow unauthorized access to
data and potential data destruction or loss. The goal of security testing is to identify these bugs, which are called
threats and vulnerabilities. Some of the most common types of security testing include:
• Vulnerability and security scanning: Vulnerability scanning is an automated test where the application code is
compared against known vulnerability signatures. Vulnerabilities are bugs in code that allow hackers to alter
the operation of the application to cause damage. Security scans find network and application weaknesses.
• Penetration testing: Penetration testing simulates an attack by a hacker.
• Security auditing: Security auditing is a code review designed to find security flaws.
• Ethical hacking: Ethical hacking involves attempting to break into the application to expose security flaws.

The challenges of security testing


Security testing requires a very different mindset. Rather than attempting to ensure the application works as designed,
security testing attempts to prove that the application does not have vulnerabilities. Security vulnerabilities are bugs
that are very difficult to find and to fix. Often, fixing a security vulnerability involves design changes, and, therefore, it
is important to consider security testing in the earliest possible phases of any project.
Security testing requires automation and specialised skills. However, there are some areas in which all testers can easily spot security vulnerabilities by incorporating security testing into their functional testing:
• Logins/passwords: ensuring passwords are encrypted, and validating that a user is locked out after three invalid password attempts (a sketch follows this list)
• Roles and entitlements: use of least privilege, i.e. application roles carry only the required privileges. Application roles define duties that entitle access only to the functions and data necessary for performing the defined tasks of that duty.
• Session timeouts: the user is timed out after the required number of minutes of inactivity
• Content uploads: limits on size and type, or compulsory scanning before upload
• Forward and backward navigation tests involving financial or private information
For comprehensive security testing, a network security consultant will need to start by learning to use security testing scanners and tools.
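
As a hedged illustration of the logins/passwords checks above, the Python sketch below shows one plausible way an application might store passwords (salted and hashed, never in plain text) and lock an account after three invalid attempts; the function names and iteration count are illustrative, not a prescribed design:

import hashlib, os

MAX_ATTEMPTS = 3  # lock the account after three invalid passwords

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 with a per-user salt; the plain-text password is never stored.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)
stored = hash_password("correct horse", salt)

failures = 0
for attempt in ["guess1", "guess2", "guess3"]:
    if hash_password(attempt, salt) != stored:
        failures += 1
        if failures >= MAX_ATTEMPTS:
            print("account locked after", failures, "invalid attempts")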


Remediation Planning

Prioritization
Vulnerability and security configuration assessments usually generate long remediation work lists, and this remediation work needs to be prioritized. When organizations implement vulnerability assessment and security configuration baselines for the first time, they may discover that many systems contain multiple vulnerabilities and security configuration errors. This can amount to a great deal of work; prioritization is therefore important.
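
One possible prioritization approach, sketched below in Python, ranks findings by combining a CVSS-style severity score with the criticality of the affected asset; the data, field names and weighting are illustrative assumptions rather than a standard:

# Rank remediation work: severity of the finding times criticality of the asset.
findings = [
    {"host": "web01",  "issue": "Outdated TLS",  "cvss": 5.3, "asset_criticality": 3},
    {"host": "db01",   "issue": "SQL injection", "cvss": 9.8, "asset_criticality": 5},
    {"host": "test07", "issue": "Weak password", "cvss": 7.5, "asset_criticality": 1},
]

for f in findings:
    f["priority"] = f["cvss"] * f["asset_criticality"]

# The highest combined risk is remediated first.
for f in sorted(findings, key=lambda f: f["priority"], reverse=True):
    print(f["priority"], f["host"], f["issue"])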

Root Cause Analysis (RCA)


It is important to analyze security and vulnerability assessments to determine the root cause. In many cases, the
root cause can be found in the provisioning, administration and maintenance processes of IT operations or in the
development or procurement processes of the applications. Elimination of the root cause of security weaknesses
could need changes in the user administration or changes in the system provisioning processes.

What makes a good RCA?


RCA is an analysis of a failure, for the purpose of determining the first (or root) failure that caused the condition in which the system finds itself. For example, if there is an application crash, RCA asks: why did it crash, and why in this way?
A forensic specialist’s job in performing an RCA is to keep asking "why" until there are no more questions, and then
they will be able to see the problem at the root of the situation.

Example: consider an application that had its database pilfered by hackers. The ultimate failure the forensic specialist may be investigating is the exfiltration of consumers’ private data, but the SQL injection itself is not what caused the failure.
Why did the SQL Injection happen?
Was the root of the problem that the developer responsible simply didn’t follow the corporate policy for
building SQL queries?
Or was the issue a failure to implement something like the OWASP ESAPI (ESAPI - The OWASP Enterprise
Security API is a free, open source web application security control library that makes it easier for programmers
to write lower-risk applications.) in the appropriate manner?
Or maybe the cause was a vulnerable open-source piece of code that was incorporated into the corporate
application without passing it through the full source code lifecycle process?

A forensic specialist’s job when performing an RCA is to figure this out. Root-cause analysis is critically important in the software security world. A number of automated solutions are also available for various types of RCA. For example, HP’s web application security testing technology can link XSS issues to a single line of code in the application input handler. Decision trees and algorithms may be used as tools for further detailed analysis.
Read more at: [Link] whitepapers/detection/decision-tree-analysis-intrusion-detection-how-to-guide-33678.


Access Control Models


Logical access control models are the abstract foundations upon which actual access control mechanisms and systems
are built. Access control is among the most important concepts in computer security. Access control models define
how computers enforce access of subjects (such as users, other computers, applications and so on) to objects (such
as computers, files, directories, applications, servers and devices).
Three main access control models exist:

• Discretionary Access Control model
• Mandatory Access Control model
• Role-Based Access Control model

Discretionary Access Control (DAC)

The Discretionary Access Control model is the most widely used of the three models. In this model, the owner or creator of the information (which could be a file or directory) can decide and set the access control restrictions on the file or directory that carries this information. The advantage of DAC is its flexibility: the users may decide who can access the information and what privileges to give, such as read, write, delete, rename, execute, etc.

Mandatory Access Control (MAC)

Mandatory access control takes a stricter approach to access control. Users of systems utilising MAC have little or no choice as to what access permissions they can set on their information. They have to abide by mandatory access controls specified in a system-wide security policy, which are enforced by the operating system and applied to all operations on the system.

Data classification levels (such as public, confidential, secret and top secret) are used in MAC-based systems. They also use security clearance labels corresponding to data classification levels. This helps to decide what access control restrictions to enforce in accordance with the security policy set by the system administrator. Apart from this, access control restrictions may be imposed per group and/or per domain, i.e. apart from having the required security clearance level, the users or applications must also belong to the appropriate group or domain. For example, a file that carries a 'confidential' label and belongs only to the research group cannot be accessed by a user from the marketing group even if that user has a security clearance level higher than confidential (such as secret or top secret). This concept is known as compartmentalization or ‘need to know’.

When used appropriately, MAC-based systems are usually more secure than DAC-based systems; however, they are also much more difficult to use and administer because of the additional restrictions and limitations imposed by the operating system. MAC-based systems are thus mostly used in government, military and financial institutions, where more than usual security is required and where the complexity and costs can be tolerated.

Role-Based Access Control (RBAC)

In the role-based access control model, rights and permissions are assigned to roles instead of individual users. This
added layer of abstraction permits easier and more flexible administration and enforcement of access controls. For
example, access to marketing files may be restricted only to the marketing managers, and users Ann, David, and
Joe may be assigned the role of a marketing manager. Later, when David moves from the marketing department
elsewhere, it is enough to revoke his role of marketing manager, and no other changes would be necessary.

When this approach is applied to an organisation with thousands of employees and hundreds of roles, the added
security and convenience of using RBAC can be seen. Solaris has supported RBAC since release 8.
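
A minimal Python sketch of the marketing-manager example above, assuming illustrative role and permission names, shows why revoking David’s role is a one-line change:

# Permissions hang off roles; users are only linked to roles.
ROLE_PERMISSIONS = {
    "marketing_manager": {"read_marketing_files", "edit_marketing_files"},
    "engineer": {"read_design_docs"},
}
USER_ROLES = {"Ann": {"marketing_manager"},
              "David": {"marketing_manager"},
              "Joe": {"marketing_manager"}}

def is_allowed(user: str, permission: str) -> bool:
    # A user may perform an action if any of their roles grants it.
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_allowed("David", "read_marketing_files"))  # True
USER_ROLES["David"] = set()   # David leaves marketing: revoke the role only
print(is_allowed("David", "read_marketing_files"))  # False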


Centralized vs. Decentralized Access Control

Further distinction should be made between centralized and decentralized (distributed) access control models. In
environments with centralized access control, a single, central entity makes access control decisions and manages the
access control system whereas in distributed access control environments, these decisions are made and enforced
in a decentralized manner. Both approaches have their pros and cons, and it is generally inappropriate to say that
one is better than the other. The selection of a specific access control approach should be made only after careful
consideration of an organisation’s requirements and associated risks.


1.4 INTRODUCTION TO NETWORK SECURITY

1.4.1 What is Network Security?


With the Internet and new networking technology, the world has become well connected. There is a large amount of information on networking infrastructures worldwide. Network security is gaining great importance because intellectual property can be easily acquired through the internet.
There are two different types of networks:
• Data networks
• Synchronous networks comprised of switches
The internet is considered a data network. Since the current data network consists of computer-based routers, information can be obtained by special programmes, such as “Trojan horses”, planted in the routers. E-business, mobile commerce and the need for wireless communication and internet applications also continue to grow, making network security more important.
System and network technology is a key technology for a wide variety of applications. Security is crucial to networks
and applications. Although network security is a critical requirement in emerging networks, there is a significant lack
of security methods that can be easily implemented.
Network design is a well-developed process that is based on the Open Systems Interconnection (OSI) model. The OSI model
has several advantages when designing networks. It offers modularity, flexibility, ease-of-use, and standardisation of
protocols.
Protocols of different layers can be easily combined to create stacks which allow modular development. Implementation
of individual layers can be changed later without making other adjustments, allowing flexibility in development. In
contrast to network design, secure network design is not a well-developed process. There isn’t a methodology to manage the complexity of security requirements. Secure network design does not offer the same advantages as network design. When considering network security, it must be emphasised that the whole network must be made secure.
Network security does not only concern the security in computers at each end of the communication chain. When
transmitting data, the communication channel should not be vulnerable to attack. A possible hacker could target
the communication channel, obtain data, decrypt it and reinsert a false message. Securing the network is just as
important as securing computers and encrypting the message.
When developing a secure network, the following needs to be considered:

ACCESS: Authorised users are provided the means to communicate to and from a particular network.

CONFIDENTIALITY: Information in the network remains private.

AUTHENTICATION: Ensures the identity of users on a network.

INTEGRITY: Ensures the safety of messages while in transit.

NON-REPUDIATION: Ensures a user cannot deny having used the network.


An effective network security plan is developed with the understanding of:


• business objectives and priorities,
• security issues,
• potential attackers,
• needed level of security, and
• factors that make a network vulnerable to attack

The internet architecture itself leads to vulnerabilities in the network. Understanding the security issues of internet
greatly assists in developing new security technologies and approaches for networks with internet access and internet
security itself. The types of attacks through internet also need to be studied to be able to detect and guard against
them.
There are many products available for ensuring network security. These tools are:
• encryption
• authentication mechanisms
• intrusion-detection
• security management and firewalls, etc.

Typical security currently exists on computers connected to the network. Security protocols usually appear as part of a single layer of the OSI network reference model. Current work is being performed on using a layered approach to secure network design. The layers of the security model correspond to the OSI model layers, as discussed later in this handbook.
Special security devices and technologies are also used to achieve the required network up-time of 99.999%, for instance:
• Firewalls
• Intrusion Detection and Prevention Systems (IDPS)
• Virtual Private Networks (VPN)
• Tunneling
• Network Access Control (NAC)
• Security Scanners
• Protocol Analysers
• Authorization, authentication and accounting (AAA)

The most used security device in networks, though, remains the firewall. There are various firewall types, such as:
• Hardware firewalls
• Server firewalls
• Personal firewalls

Some modern firewalls include:


• Intrusion detection
• Authentication
• Authorisation
• Vulnerability assessment systems


To select an appropriate network security solution, the following information has to be collated.
• Identifying Potential Risks for Network Security
• Asset Identification
• Vulnerability Assessment and Threat Identification
• Understanding the Network Model and Architecture
• Identification of User Productivity and Business Needs
• Identification of Legal and Regulatory Requirements
The network solution selected must keep all the above in mind.
These will be discussed in detail in the subsequent sections of this handbook.

1.4.3 Dynamics of Network Security

Network security refers to any activity designed to protect your network. Specifically, these activities protect the
usability, reliability, integrity and safety of network and data. Effective network security targets a variety of threats and
stops them from entering or spreading on the network.
No single solution protects from a variety of threats. Network security is accomplished through hardware and software.
Software must be constantly updated and managed for protection against emerging threats.
Wireless networks, which by their nature transmit over radio, are more vulnerable than wired networks; they need to encrypt communication to deal with sniffing, and to continuously check the identity of mobile nodes.
The mobility factor adds more challenges to security, namely the monitoring and maintenance of secure traffic transport for mobile nodes. This concerns both homogeneous and heterogeneous (inter-technology) mobility. The latter requires homogenisation of the security level of all networks visited by the mobile node.
From the terminal’s side, it is important to protect its resources (battery, disk, CPU) against misuse and ensure the confidentiality of its data. In an ad hoc or sensor network, it becomes essential to ensure the terminal’s integrity as it plays the dual role of router and terminal.
The difficulty of designing security solutions that address these challenges lies not only in ensuring robustness against potential attacks, or ensuring that security does not slow down communication, but also in optimising the use of resources in terms of bandwidth, memory, battery, etc.
More importantly, in this open context the wireless network must ensure anonymity and privacy while allowing traceability for legal reasons. Indeed, the growing need for traceability is necessary not only to fight criminal organisations and terrorists, but also to minimise the infringement of copyright. Networks therefore face a dilemma: providing support for the free exchange of information while controlling the content of communication to avoid harmful content. Actually, this concerns both wired and wireless networks. All these factors influence the selection and implementation of security tools, which are guided by a prior risk assessment and security policy.


1.4.4 Mitigation Techniques

A network security system usually consists of many components. Ideally, all components work together, which
minimises maintenance and improves security.
Network security components often include:
• Anti-virus and anti-spyware
• Firewall to block unauthorised access to network
• Intrusion Prevention Systems (IPS) to identify fast-spreading threats, such as zero-day or zero-hour attacks
• Virtual Private Networks (VPNs) to provide secure remote access
• Communication security
Any scheme that is developed for providing network security needs to be implemented at some layer in the protocol stack, as depicted in the table below:

LAYER                COMMUNICATION PROTOCOLS      SECURITY PROTOCOLS

Application Layer    HTTP, FTP, SMTP              PGP, S/MIME, HTTPS

Transport Layer      TCP/UDP                      SSL, TLS, SSH

Network Layer        IP                           IPsec
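
As a small taste of the transport-layer row in this table, the Python sketch below uses the standard library’s ssl module to open a TLS-protected connection; example.com is only a placeholder host:

import socket, ssl

# Open a TCP connection and wrap it in TLS; the default context verifies
# the server certificate and hostname against the system trust store.
context = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print("negotiated protocol:", tls.version())        # e.g. 'TLSv1.3'
        print("peer certificate subject:",
              tls.getpeercert().get("subject"))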

We shall learn more about them in a later Unit.


1.5 INTRODUCTION TO IDAM

1.5.1 What is Identity and Access Management (IDAM)?

Identity and Access Management (IDAM) is the process of managing who has access to what information over time.
In other words it is the security and business discipline that “enables the right individuals to access the right resources
at the right times and for the right reasons.”
This cross-functional activity involves the creation of distinct identities for individuals and systems, as well as the
association of system and application-level accounts to these identities.
Fundamentally, IDAM attempts to address three important questions:
1. Who has access to what information
(A robust identity and access management system will help a company not only to manage digital identities, but to
manage the access to resources, applications and information these identities require as well.)
2. Is the access appropriate for the job being performed?
(This element takes on two facets. First, is this access correct and defined appropriately to support a specific job
function? Second, does access to a particular resource conflict with other access rights, thus posing a potential
segregation of duties problem?)
3. Is the access and activity monitored, logged and reported appropriately?
(In addition to benefitting the user through efficiency gains, IDAM processes should be designed in a manner
that supports regulatory compliance. One of the larger regulatory realities is that access rights must be defined,
documented, monitored, logged and reported appropriately.)

IDAM processes are used to initiate, capture, record and manage user identities and the related access permissions to the organisation’s proprietary information. These users may extend beyond corporate employees; they could be:
• Employees
• Vendors
• Customers
• Floor Devices
• Generic administrator accounts


The means used by the organisation to facilitate the administration of user accounts and to implement proper
controls around data security form the foundation of IDAM. It addresses the need to ensure appropriate access to
resources across increasingly heterogeneous technology environments and to meet increasingly rigorous compliance
requirements.

1.5.2 What should an IDAM system include?

Identity and access management involves four basic functions:


1. Identity management: Creation, management and deletion of identities without regard to access or entitlements.
2. User access (logon): For example: a smart card and its associated data used by a customer to log on to a service
or services.
3. Privileged identity: Focuses solely on identity management for privileged accounts, the powerful accounts used by IT administrators, select business users and even some applications.
4. Identity federation: A system that relies on federated identity to authenticate a user without knowing his or her password.
IDAM solutions should automate the following activities with respect to user identities and their related access
permissions:
• Initiation
• Capturing
• Recording
• Management
The products should also include a centralized directory service that scales as a company expands. This central
directory prevents credentials from ending up recorded haphazardly in files and sticky notes as employees try to deal
with the burden of multiple passwords for different systems.

IDAM systems should facilitate the process of user provisioning and account setup. The product should lessen the
time required with a controlled workflow that reduces errors and the potential for abuse, while enabling automated
account fulfilment. An identity and access management system should also provide administrators with the ability to
instantly view and change access rights.

Also, it is quite important that, within the central directory, the access rights/privilege system matches employee job title, location and business unit ID, in order to manage access requests automatically. These small pieces of information help in classifying access requests relative to employees’ existing positions. Some rights may be inherent in a position and provisioned automatically for the employee, while others may be allowed upon request. Reviews may also be needed in some cases; except where an exemption applies, other requests may be denied or outright prohibited. All variations should be managed by the IDAM system automatically and appropriately.

In order to manage access requests, an IDAM system has to set up workflows, with the option of multiple stages of review and the requirement for approval of each request. This mechanism can facilitate setting risk-appropriate review processes for higher-level access, as well as reviews of already existing rights, in order to prevent privilege creep. A good IDAM system is essential for any organisation seeking to secure its resources.


Components of IDAM

IDAM is the task of handling information about users on computers. This can include information that authenticates a user’s identity, and information that describes the data and actions they are authorized to access and/or perform. It also includes the management of descriptive information about the user, along with how and by whom that information can be accessed and changed. Typically, managed entities include users, hardware and network resources and even applications. IDAM components can be classified into five major categories: authentication, authorization, administration, audit and the central user repository (enterprise directory).

Authentication
Authentication management and session management are covered in this area. It includes ensuring that the person who logs on to a system is who they say they are. Generally, this is done by having the user provide credentials, such as a username and password, which together give a degree of assurance of the authenticity of the person logging on, in order to gain initial access to a particular resource or application system.
After the authentication of a user, a session is created and referenced throughout the interaction between the user and the application system, until the session is logged off by the user or terminated by other means (e.g. timeout). When the user ID/password authentication method is used, the authentication module comes with a password service module. By maintaining the user’s session centrally, the authentication module provides a Single Sign-On service, so that the user is not required to log on again while accessing another system or application governed under the same IDAM framework.
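
A minimal Python sketch of the session handling described above, with an illustrative 15-minute inactivity timeout, might look like this:

import secrets, time

SESSION_TIMEOUT = 900          # 15 minutes of inactivity before termination
sessions = {}                  # session id -> (user, time of last activity)

def log_on(user: str) -> str:
    # After successful authentication, create a session with a random,
    # unguessable identifier that is referenced on every later request.
    sid = secrets.token_urlsafe(32)
    sessions[sid] = (user, time.time())
    return sid

def current_user(sid: str):
    # Each request checks the session and refreshes its activity time;
    # idle sessions are terminated (the timeout mentioned above).
    entry = sessions.get(sid)
    if entry is None:
        return None
    user, last_seen = entry
    if time.time() - last_seen > SESSION_TIMEOUT:
        del sessions[sid]      # timed out
        return None
    sessions[sid] = (user, time.time())
    return user

sid = log_on("alice")
print(current_user(sid))       # 'alice' while the session is active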

Authorization
Authorization is a module which helps in determining whether a user is given permission for an access to a specific
resource. It includes the parameters positioned around what a user is permitted to do after their authentication. The
concern of the authorization is not with who they are, but why they are logging on and what the user is permitted to
do. Authorization can be impacted by different variables. They include everything from file and application permission
and sharing, to very precisely defied access rules that are based on role, location and even circumstance.
Authorization is generally performed by checking the resource access request, generally as a URL in web-based
application, against the strategies of authorization that are put away in an IAM policy store. Authorization is the core
module applying access control based on role. Besides, the authorization model could give complex access controls
in view of data or information or policies which includes the attributes of user, roles / groups of user, actions taken
by user, access channels, time, the requested, external data and business rules.
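
The sketch below is a toy Python illustration of checking a URL request against policies in a policy store; the URLs and role names are hypothetical, and a real IAM product would evaluate far richer attributes:

# A toy policy store mapping URL prefixes to the roles allowed to reach them.
POLICY_STORE = {
    "/admin/": {"administrator"},
    "/reports/": {"administrator", "analyst"},
    "/home/": {"administrator", "analyst", "employee"},
}

def authorize(url: str, user_roles: set) -> bool:
    # Find the policy whose URL prefix matches the request and check
    # whether the user holds at least one permitted role.
    for prefix, allowed_roles in POLICY_STORE.items():
        if url.startswith(prefix):
            return bool(user_roles & allowed_roles)
    return False  # default deny: no policy means no access

print(authorize("/reports/q3.pdf", {"analyst"}))   # True
print(authorize("/admin/users", {"employee"}))     # False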

Administration
The administration area covers user management, password management, role/group management and user/group provisioning. The user management module defines the set of administrative functions, for example identity creation, propagation and upkeep of user identity, entitlements and privileges. One of its parts is user life cycle management, which enables an enterprise to manage the lifespan of a user account, from the initial stage of provisioning to the final stage of de-provisioning. User management needs a coordinated workflow capability to approve some user actions, such as user account provisioning and de-provisioning. Some user management functions must be centralized, while others can be delegated to end-users.
Delegated administration enables an enterprise to distribute workload directly to user departmental units, and can also improve the accuracy of system data by assigning the responsibility for updates to the persons closest to the situation and the information.
Self-service is another key concept within user management. Through self-profile management service an enterprise
can benefit from timely update and accurate maintenance of identity data. Another popular self-service function is
self-password reset, which significantly eases the help-desk workload to handle password reset requests.


Audit
Audit includes those activities that help “prove” that authentication, authorization and administration are performed at a sufficient level of security, measured against a set of standards. It may concern ensuring regulatory compliance, or satisfying a best-practice framework such as ITIL, or it may simply be designed to conform to internally developed security standards or policies.

Central User Repository


The Central User Repository stores and delivers identity information to other services, and provides a service to verify credentials submitted by clients. The Central User Repository presents an aggregate or logical view of the identities of an enterprise. Both meta-directories and virtual directories can be used to manage disparate identity data from the different user repositories of applications and systems. A meta-directory typically provides an aggregate set of identity data by merging data from different identity sources into a meta-set. Usually it comes with a two-way data synchronization service to keep the data in sync with other identity sources. A virtual directory delivers a unified view of consolidated identity information; behind the scenes, multiple databases containing different sets of users are combined in real time.
With constantly evolving global market trends, enterprises need to expand across the world into different geographies, which increases the challenge of managing identity while keeping resources available and networks secure. Because of widely spread workplace locations, enterprises are required to come up with effective identity management solutions to ensure the compliance of their data. A poorly managed IDAM system may lead to data theft and other security issues, which in turn may lead to financial losses. There have been instances of highly reputed enterprises suffering data theft and fraud that led to big losses.
With growing dependence on, and competition in, technological devices, enterprises are compelled to adopt modern trends like cloud computing, BYOD (Bring Your Own Device) and virtualization for their employees. This adds further security and maintenance challenges for their identity and access management systems.


1.6 INTRODUCTION TO CYBER FORENSICS

1.6.1 What is Cryptography?


We all know how vulnerable our data is over the communication channel. The data is prone to various types of attacks while being transmitted, which can result in information being stolen. Therefore, a strategy is needed to preserve the data being transmitted.
For this purpose, the data is converted into a secret code for secure transmission over the communication network. The act of converting data into a secret code is called cryptography. The original data is converted into a coded equivalent, referred to as ciphertext, with the help of an encryption algorithm.
Let us take an example to understand this concept further.
Suppose a person A wants to transmit a piece of information to a person B over a public network. When person A transmits the information from his end, the data is converted into what we call ciphertext through encryption. This ensures that the data stays protected and is not hacked or stolen by a third party. For person B to obtain the information in its original form, the encrypted data is then decrypted by applying a decryption algorithm. In this process, person A shares with person B what we call a “key”, which is kept private. This helps ensure that the data stays protected and is communicated only to the desired party. Similarly, the same process is followed when person B wants to transmit information to person A, and so on.
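
A hedged Python sketch of this exchange, using the third-party cryptography package’s Fernet recipe for the shared-key (symmetric) case, might look like this:

from cryptography.fernet import Fernet  # assumes: pip install cryptography

# Person A and person B share one secret key in advance (the symmetric
# case; the next section also introduces public-key encryption).
key = Fernet.generate_key()
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"Meet at the usual place at 6pm")  # A encrypts
print(ciphertext)               # unintelligible to anyone sniffing the wire

plaintext = cipher.decrypt(ciphertext)                          # B decrypts
print(plaintext.decode())       # original message recovered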

1.6.2 Encryption / Decryption Keys


Encryption is the process of transforming information so it is unintelligible to anyone but the intended recipient.
Decryption is the process of transforming encrypted information so that it is intelligible again. A cryptographic
algorithm, also called a cipher, is a mathematical function used for encryption or decryption. In most cases, two
related functions are employed, one for encryption and the other for decryption.
With most modern cryptography, the ability to keep encrypted information secret is based not on the cryptographic
algorithm, which is widely known, but on a number called a key that must be used with the algorithm to produce an
encrypted result or to decrypt previously encrypted information. Decryption with the correct key is simple. Decryption
without the correct key is very difficult, and in some cases impossible for all practical purposes.
The sections that follow introduce the use of keys for encryption and decryption.
• Symmetric-Key Encryption
• Public-Key Encryption
• Key Length and Encryption Strength
We will read more about these in the next chapter.

1.6.3 Basic Algorithms

An algorithm can be thought of as the link between the programming language and the application. An algorithm
is a fancy to-do list for a computer. Algorithms take in zero or more inputs and give back one or more outputs.


A recipe is a good example of an algorithm because it tells you what you need to do step by step. It takes inputs
(ingredients) and produces an output (the completed dish).
The words 'algorithm' and 'algorism' come from the name of a Persian mathematician, Al-Khwarizmi (c. 780–850).
Algorithm is a step-by-step procedure, which defines a set of instructions to be executed in a certain order to get
the desired output. Algorithms are generally created independent of underlying languages, i.e. an algorithm can be
implemented in more than one programming language.
From the data structure point of view, following are some important categories of algorithms −
• Search − Algorithm to search an item in a data structure.
• Sort − Algorithm to sort items in a certain order.
• Insert − Algorithm to insert item in a data structure.
• Update − Algorithm to update an existing item in a data structure.
• Delete − Algorithm to delete an existing item from a data structure.

Characteristics of an Algorithm
Not all procedures can be called an algorithm. An algorithm should have the following characteristics:
• Unambiguous − Algorithm should be clear and unambiguous. Each of its steps (or phases), and their inputs/
outputs should be clear and must lead to only one meaning.
• Input − An algorithm should have 0 or more well-defined inputs.
• Output − An algorithm should have 1 or more well-defined outputs, and should match the desired output.
• Finiteness − Algorithms must terminate after a finite number of steps.
• Feasibility − Should be feasible with the available resources.
• Independent − An algorithm should have step-by-step directions, which should be independent of any
programming code.

How to Write an Algorithm?


There are no well-defined standards for writing algorithms. Rather, it is problem and resource dependent. Algorithms
are never written to support a particular programming code.
As we know, all programming languages share basic code constructs like loops (do, for, while), flow control (if-else), etc. These common constructs can be used to write an algorithm.
We write algorithms in a step-by-step manner, but it is not always the case. Algorithm writing is a process and is
executed after the problem domain is well-defined. That is, we should know the problem domain, for which we are
designing a solution.
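
For example, the classic binary search algorithm below (written here in Python) exhibits the characteristics listed earlier: unambiguous steps, well-defined inputs and outputs, and guaranteed termination:

def binary_search(items, target):
    # Search a sorted list for target; return its index, or -1 if absent.
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1       # discard the lower half
        else:
            high = mid - 1      # discard the upper half
    return -1                   # finite: the search range shrinks each round

print(binary_search([2, 5, 8, 12, 16, 23], 16))  # 4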

1.6.4 AES, RSA and DES


AES-256 Encryption
Advanced Encryption Standard (AES) is one of the most frequently used and most secure encryption algorithms available
today. It is publicly accessible, and it is the cipher which the NSA uses for securing documents with the classification
"top secret". Its story of success started in 1997, when NIST (National Institute of Standards and Technology) started
officially looking for a successor to the aging encryption standard DES. An algorithm named "Rijndael", developed by
the Belgian cryptographers Daemen and Rijmen, excelled in security as well as in performance and flexibility.


It came out on top of several competitors and was officially announced as the new encryption standard AES in 2001. The algorithm is based on several substitutions, permutations and linear transformations, each executed on data blocks of 16 bytes – hence the term block cipher. Those operations are repeated several times, in so-called “rounds”. During each round, a unique round key is calculated from the encryption key and incorporated in the calculations. Owing to the block structure of AES, the change of a single bit, either in the key or in the plaintext block, results in a completely different ciphertext block – a clear advantage over traditional stream ciphers. The difference between AES-128, AES-192 and AES-256, finally, is the length of the key: 128, 192 or 256 bits – all drastic improvements compared to the 56-bit key of DES. By way of illustration: cracking a 128-bit AES key with a state-of-the-art supercomputer would take longer than the presumed age of the universe (and Boxcryptor even uses 256-bit keys). As of today, no practicable attack against AES exists. Therefore, AES remains the preferred encryption standard for governments, banks and high-security systems around the world.
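
As a hedged illustration, the sketch below encrypts and decrypts a message with AES-256 in GCM mode using the third-party Python cryptography package; the message and key handling are illustrative only:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

key = AESGCM.generate_key(bit_length=256)   # a 256-bit AES key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # must be unique per message

ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", None)
print(aesgcm.decrypt(nonce, ciphertext, None))   # b'attack at dawn'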

RSA Encryption
RSA is one of the most successful, asymmetric encryption systems today. Originally discovered in 1973 by the British
intelligence agency GCHQ, it received the classification “top secret”. We have to thank the cryptologists Rivest,
Shamir and Adleman for its civil rediscovery in 1977. They stumbled across it during an attempt to solve another
cryptographic problem.
As opposed to traditional, symmetric encryption systems, RSA works with two different keys: A public and a private
one. Both work complementary to each other, which means that a message encrypted with one of them can only be
decrypted by its counterpart. Since the private key cannot be calculated from the public key, the latter is generally
available to the public.
Those properties enable asymmetric cryptosystems to be used in a wide array of functions, such as digital signatures.
In the process of signing a document, a fingerprint encrypted with RSA, is attached to the file, and enables the
receiver to verify both the sender and the integrity of the document. The security of RSA itself is mainly based on
the mathematical problem of integer factorization. A message that is about to be encrypted is treated as one large
number. When encrypting the message, it is raised to the power of the key, and divided with remainder by a fixed
product of two primes. By repeating the process with the other key, the plaintext can be retrieved again. The best
currently known method to break the encryption requires factorizing the product used in the division. Currently, it is not feasible to calculate these factors for numbers significantly greater than 768 bits. That is why modern cryptosystems use a minimum key length of 3072 bits.
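
The toy Python walkthrough below, using the well-known small primes 61 and 53, illustrates the “raise to the power of the key, modulo a product of two primes” idea described above; real RSA uses far larger primes and padding schemes, so this is strictly illustrative:

# Toy RSA with tiny primes; never use numbers this small in practice.
p, q = 61, 53
n = p * q                      # 3233, the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: e*d = 1 (mod phi)

message = 65                   # the message treated as one large number
ciphertext = pow(message, e, n)    # encrypt with the public key
recovered = pow(ciphertext, d, n)  # decrypt with the private key
print(ciphertext, recovered)       # 2790 65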

Data Encryption Standard (DES):


DES is a symmetric block cipher (shared secret key), with a key length of 56-bits. Published as the Federal Information
Processing Standards (FIPS) 46 standard in 1977, DES was officially withdrawn in 2005 [although NIST has approved
Triple DES (3DES) through 2030 for sensitive government information].
The federal government originally developed DES encryption over 35 years ago to provide cryptographic security for
all government communications. The idea was to ensure government systems all used the same, secure standard to
facilitate interconnectivity.

1.6.5 Public Key Infrastructure (PKI)

Public key cryptography can play an important role in helping provide the needed security services, including
confidentiality, authentication, digital signatures, and integrity. Public key cryptography uses two electronic keys: a
public key and a private key. These keys are mathematically related, but the private key cannot be determined from
the public key. The public key can be known by anyone while the owner keeps the private key secret.


A Public Key Infrastructure (PKI) provides the means to bind public keys to their owners and helps in distribution of
reliable public keys in large heterogeneous networks. Public keys are bound to their owners by public key certificates.
These certificates contain information such as the owner's name and the associated public key and are issued by a
reliable certification authority (CA).
Let us look at each of these in greater detail in the next Chapter.


1.7 INTRODUCTION TO APPLICATION SECURITY

Applications are a type of software that allows people to perform specific tasks using various ICT devices.
• Applications could be for computers (desktops, laptops, etc.)
• Applications could be for mobile devices (smartphones, iPads, etc.)
• Applications could be for running on the internet (web applications)
• Applications could also be run on the cloud
Almost every application has vulnerabilities. Common software vulnerabilities in application security include SQL
injection, Cross-Site Request Forgery (CSRF) and Cross-Site Scripting (XSS). We will learn more about them in a later unit.
Organizations use Application security, or “AppSec,” to protect their critical data from external threats by ensuring the
security of all the software used to run the business. This software could be built internally, bought or downloaded.
Application security helps identify, fix and prevent security vulnerabilities in any kind of software application.
There are also many tools and technologies to address application security, yet it is very important to always start with
a strong strategy. At a high level, the strategy should address, and continuously improve, these basic steps:
• identification of vulnerabilities,
• assessment of risk,
• fixing flaws,
• learning from mistakes and better managing future development processes.
Countermeasures are actions taken to ensure application security:
• An ‘application firewall’ is the most basic software countermeasure; it limits the execution of files and the handling of data by specific installed programs.
• Using a router, the most common hardware countermeasure, can prevent the IP address of an individual computer from being directly visible on the Internet.
• Conventional firewalls, encryption/decryption programs, anti-virus programs, spyware detection/removal
programs and biometric authentication systems are some of the other countermeasures.
Application security can be enhanced by threat modelling, which involves rigorously following certain steps:
• defining enterprise assets,
• identifying what each application does (or will do) with respect to these assets,
• creating a security profile for each application,
• identifying and prioritizing potential threats and documenting adverse events and the actions taken in each
case.
In this context, a threat is any potential or actual adverse event that can compromise the assets of an enterprise,
including both malicious events, such as a denial-of-service (DoS) attack, and unplanned events, such as the failure
of a storage device.


Apart from that there are technologies available to assess applications for security vulnerabilities which include the
following:
• Static analysis (SAST), or “white-box” testing, analyzes applications without executing them.
• Dynamic analysis (DAST), or “black-box” testing, identifies vulnerabilities in running web applications.
• Interactive AST (IAST) technology combines elements of SAST and DAST and is implemented as an agent
within the test runtime.
• Mobile behavioral analysis discovers risky actions of mobile apps.
• Software composition analysis (SCA) analyzes open source and third party components.
• Manual penetration testing (or “pen testing”) uses the same methodology cybercriminals use to exploit
application weaknesses.
• Web application perimeter monitoring discovers all public-facing applications and the most exploitable
vulnerabilities.
• Runtime application self-protection (RASP) is built into an application and can detect and prevent real-time
application attacks.
There is a range of application security technologies available to secure applications, yet none of them is foolproof. It is important to use one’s skill and knowledge of multiple analysis techniques during the entire application lifetime to bring down application risk. We will learn more about this in a later unit on Application Security.


1.8 INTRODUCTION TO DATA, DATA CENTRE AND CLOUD SECURITY

1.8.1 Data Security


Digital technologies are entrenched in most aspects of our lives, be it work, education, entertainment, news, hobbies, communication, etc.
Every day each one of us is generating, storing and transferring a lot of digital data. Whether it is individuals or organisations generating the data, a substantial part of it is meant only for limited access, i.e. we do not want everyone to see it. Some of the data could also be secret, like our financial information, health-related information or passwords. Organisations keep information private or secret, like the organisation’s strategy, designs, client information, financial information, etc.
We have already seen how vulnerable digital technologies are, and how a skilled hacker could use tools not only to access our private and secret information, but also to damage, misuse or make public such information.
Apart from this, it is also important to protect data that is important to us from disasters and accidents that could damage or destroy it and lead to a lot of inconvenience and even financial, emotional or opportunity loss.
That is why Data security is so critical for most businesses and individuals. Data security is the practice of securing
data. It is also referred to as information security, IT Security or electronic information security.
Data privacy issues could be related to a wide range of information, including:
• Health care records
• Financial transactions and data
• Genetic material
• Criminal justice records
• Residence and personal location information
• Location-based services data
• Browsing history
• Personal communications
Best Practices for Data Protection and Security
Various hardware and software technologies are used for data security. Some common tools are:
• Antivirus
• Encryption
• Firewalls
• Two-factor authentication
• Software patches, updates
• Backups
• Data Masking and erasure, etc.

1.8.2 Data Centre Security


A data center is a facility that houses the IT equipment related to the computer network and data storage including


servers, storage subsystems, networking switches, routers and firewalls, as well as the cabling and physical racks used to
organize and interconnect the IT equipment. They also contain infrastructure for power distribution and supplemental
power, which includes electrical switching; uninterruptable power supplies; backup generators; ventilation and cooling
systems.
A business relies heavily on the services of the data centre for its day-to-day work. With the intensive use of technology and IT systems by businesses, the data centre has become a critical asset, and businesses cannot afford any downtime or inefficiency in its functioning.
Securing data centres from security and safety threats has become crucial.
The risks that threaten a data centre could be risks to the data as well as the equipment.
These risks would include disasters like floods and fire, as well as attacks by malicious third parties and even
unauthorized members of staff entering the secure area, who accidentally or deliberately tamper with the equipment.
By gaining physical or virtual access to the data centre, damage could be inflicted, leading to denial of service (DoS), theft of confidential information, data alteration, data loss, etc.
Data center security involves the formation of security policies, precautions and practices that have to be implemented
to disallow unauthorized access and manipulation of a data center's resources.
All physical access has to be controlled completely. Identities have to be confirmed via biometrics, access cards, etc.,
and all activities in and around that data centre can be recorded through CCTV.
Some measures commonly adopted for data centre security are as follows:
• Restriction of access to the data centre to selected people by maintaining up-to-date access lists and
using access control technologies like locked doors, turnstiles and fingerprint, RFID tagging, voice or DNA
authentication through biometric access control systems.
• Further, every data center must follow a “Zero Trust” logical security procedure that includes multi-factor
authentication. Every access point should require two or more forms of identification or authorization.
• Round-the-clock surveillance using interior and exterior high-resolution cameras.
• Presence of security personnel.
• The network and data must also be safe from attack using firewalls, anti-virus software, IP network information
security, intrusion detection, alerts to network events and real-time visibility into routing and traffic anomalies.
• For cloud customers, a cloud-based service such as Alert Logic can be used to detect security breaches.
• Data centres also use threat manager systems to automatically identify behaviour patterns missed by traditional network security products.
• Many data centre owners are now using smart monitoring features including Relentless Intrusion Detection
which quickly alerts if human attackers, network worms or bots are attacking the system.
There should be a comprehensive and co-ordinated plan that includes every aspect of a data center’s security,
working together. This is called a layered security system. The aim of such a system would be that a potential intruder
is faced with several layers of security, that they have to breach before they can reach valuable data or hardware
assets in the data centre. If one layer proves to be ineffective then the other layers will serve the purpose of protecting
the entire system.

1.8.3 Cloud Computing Security


Cloud computing is the delivery of IT services over the internet. More and more organizations are availing the benefits
of moving their systems to the cloud. This helps organizations to operate at larger scale, while reducing technology
costs and using agile systems that make them more competitive.


Since these services are being used by most businesses and individuals, the security of data, systems and applications
from data theft, leakage, corruption and deletion has become an important concern.
Cloud computing security, also called cloud security involves the procedures and technology that secure cloud
computing environments against both external and insider cybersecurity threats.
Most cloud providers attempt to create a secure cloud for customers, but they cannot control how users use the service. Users can weaken cloud security through their configurations, handling of sensitive data, and access policies.
In each public cloud service type, the cloud provider and cloud customer share different levels of responsibility for security. A key difference between SaaS, PaaS, and IaaS is the level of control (and responsibility) that the enterprise has in the cloud stack:
• Software-as-a-service (SaaS) — The cloud provider is typically responsible for providing security for the entire
technology stack from data center up to the application, whereas the customers are responsible for securing
their data and user access.
• Platform-as-a-service (PaaS) — The cloud service provider is often responsible for security for the technology stack from data center to runtime, while the customers are responsible for securing their data, user access, and applications.
• Infrastructure-as-a-service (IaaS) — The cloud provider manages the virtualization, servers, storage, networking,
and data center, while the customers are responsible for securing their data, user access, applications,
operating systems, and virtual network traffic.
Within all types of public cloud services, customers are responsible for securing their data and controlling who can
access that data.
Cloud security solutions
Cloud security solutions can consist of a set of policies, controls, procedures and technologies that work together to protect cloud-based systems, data, and infrastructure. These security measures are configured to protect cloud data, support regulatory compliance and protect customers' privacy, as well as setting authentication rules for individual users and devices. From authenticating access to filtering traffic, cloud security can be configured to the exact needs of the business.
Organizations seeking cloud security solutions should consider the following criteria to solve the primary cloud
security challenges of visibility and control over cloud data.
A complete view of cloud data requires direct access to the cloud service. Cloud security solutions accomplish this
through an application programming interface (API) connection to the cloud service. With an API connection it is
possible to view:
• What data is stored in the cloud.
• Who is using cloud data.
• The roles of users with access to cloud data.
• Who cloud users are sharing data with.
• Where cloud data is located.
• Where cloud data is being accessed and downloaded from, including from which device.
Once you have visibility into cloud data, apply the controls that best suit your organization. These controls include:
• Data classification — Classify data on multiple levels, such as sensitive, regulated, or public, as it is created in
the cloud. Once classified, data can be stopped from entering or leaving the cloud service.
• Data Loss Prevention (DLP) — Implement a cloud DLP solution to protect data from unauthorized access and
automatically disable access and transport of data when suspicious activity is detected.
• Collaboration controls — Manage controls within the cloud service, such as downgrading file and folder
permissions for specified users to editor or viewer, removing permissions, and revoking shared links.


• Encryption — Cloud data encryption can be used to prevent unauthorized access to data, even if that data is
exfiltrated or stolen.
As with in-house security, access control is a vital component of cloud security. Typical controls include:
• User access control — Implement system and application access controls that ensure only authorized users access cloud data and applications. A Cloud Access Security Broker (CASB) can be used to enforce access controls.
• Device access control — Block access when a personal, unauthorized device tries to access cloud data.
• Malicious behavior identification — Detect compromised accounts and insider threats with user behavior
analytics (UBA) so that malicious data exfiltration does not occur.
• Malware prevention — Prevent malware from entering cloud services using techniques such as file-scanning,
application whitelisting, machine learning-based malware detection, and network traffic analysis.
• Privileged access — Identify all possible forms of access that privileged accounts may have to your data and
applications and put in place controls to mitigate exposure.
Existing compliance requirements and practices should be augmented to include data and applications residing in
the cloud.
• Risk assessment — Review and update risk assessments to include cloud services. Identify and address risk
factors introduced by cloud environments and providers. Risk databases for cloud providers are available to
expedite the assessment process.
• Compliance Assessments — Review and update compliance assessments for PCI, HIPAA, Sarbanes-Oxley and
other application regulatory requirements.


SUMMARY

• Information/ Cyber security is the practice of defending information from unauthorized access, use, disclosure,
disruption, modification, perusal, inspection, recording or destruction.
• Information/Cyber security comprises Network security, Application security, Data protection and privacy, Identity and access management, Cyber assurance/GRC, IT Forensics, Incident management, BCM/DR, Endpoint security, Security operations and Industrial control security.
• At any given moment, information is being transmitted, stored or processed. The three states exist irrespective
of the media in which information resides.
• The information security triad shows the three primary goals of information security: confidentiality, integrity
and availability. When these three tenets are put together, information will be well protected.
• The cyber security concepts comprise Identification, Authentication, Authorisation, Confidentiality, Integrity, Availability and Non-repudiation.
• Risk is a function of threats exploiting vulnerabilities to obtain, damage or destroy assets.
• The types of threats can be categorised as STRIDE, based on the initials of the threat categories.
• There are four different types of attacks: network attacks, application attacks, phishing attacks and malware.
• Cyber security controls help users to manage their risk and protect their critical data assets from intrusions, security incidents and data loss.
• The types of control can be classified on the basis of Functionality and Plane of Application.
• Logical security controls are those that restrict the access capabilities of users of the system and prevent
unauthorized users from accessing the system. Logical security controls may exist within the operating system.
• Controls that protect against threats like Physical damage from natural disasters are called physical security
controls.
• Some tools and techniques for cyber security are: Security Vulnerability Management, Vulnerability Assessment, Security Testing, Remediation Planning and Access Control Models.
• Security vulnerability management is a closed-loop workflow that generally includes identifying networked
systems and associated applications, auditing (scanning) the systems and applications for vulnerabilities and
remediating them.
• The Vulnerability Assessment involves Risk assessment and Risk analysis.
• Security testing is validating that an application does not have code issues that could allow unauthorized
access to data and potential data destruction or loss
• The most common types of attacks are state-sponsored attacks, Advanced Persistent Threats, ransomware and Denial of Service.
• The various types of testing include 1) Vulnerability and security scanning: application code is compared against known vulnerability signatures; 2) Penetration testing: penetration testing simulates an attack by a hacker; 3) Security auditing: security auditing is a code review designed to find security flaws; and 4) Ethical hacking.
• The various steps of remediation planning involve prioritization and root cause analysis.
• Access control models include the Discretionary Access Control model, the Mandatory Access Control model and the Role-Based Access Control model.


• Access control models define how computers enforce access of subjects to objects
• An effective network security plan is developed with the understanding of business objectives and priorities,
security issues, potential attackers, Needed level of security, and factors that make a network vulnerable to
attack
• The Network Layer is Layer 3 of the Open Systems Interconnection (OSI) communications model. Its primary function is to move data into and through other networks.
• Layer 3 can provide various features such as quality of service management, load balancing and link management, security, and interrelation of different protocols and subnets with different schemas.
• Identity and Access Management (IDAM) is the process of managing who has access to what information over
time.
• IDAM attempts to address three important questions: 1. Who has access to what information 2. Is the access
appropriate for the job being performed? 3. Is the access and activity monitored, logged and reported
appropriately?
• Identity and access management involves four basic functions: 1) Identity management: creation, management and deletion of identities without regard to access; 2) User access (logon): for example, a smart card and its associated data used by a customer to log on to a service or services; 3) Privileged identity: focuses solely on identity management for privileged accounts, the powerful accounts used by IT administrators; 4) Identity federation: a system that relies on federated identity to authenticate a user without knowing his or her password.
• The various components of IDAM are classified into 5 main categories: 1) Authentication: authentication management and session management are covered in this area; 2) Authorization: a module which helps in determining whether a user is given permission to access a specific resource; 3) Administration: the zone of administration contains user management, password management, role/group management and user/group provisioning; 4) Audit: includes those activities that help “prove” that authentication, authorization and administration are operating as intended; 5) Central User Repository: stores and delivers identity information to other services, and provides a service to verify credentials submitted by clients.


KNOWLEDGE CHECK

Q.1. State the importance of cyber security to Government, Organisations and individuals.

Q.2. Match the following terms related to cyber-crimes and cyber security with their explanations.

TERMS EXPLANATION

A. VULNERABILITY This is a path or a tool that a threat actor uses to attack the target.

B. THREAT AGENT OR This is anything of value to the threat actor such as PC, laptop, PDA, tablet, mobile
ACTOR phone, online bank account or identity.
C. THREAT VECTOR This refers to the intent and method targeted at the intentional exploitation of the
vulnerability or a situation and method that may accidentally trigger the vulnerability.
D. THREAT TARGET This is a weakness in an information system, system security procedures, internal
controls or implementations that are exposed.
E. CONFIDENTIALITY Ensuring authorized access of information assets when required for the duration
required.
F. INTEGRITY The first step in the ‘identify-authenticate-authorise’ sequence that is performed
when access to information or information processing resources are required.
G. AVAILABILITY The process of ensuring that a user has sufficient rights to perform the requested
operation, and preventing those without sufficient rights from doing the same.
H. IDENTIFICATION Refers to one of the properties of cryptographic digital signatures that offer the
possibility of proving whether a message has been digitally signed by the holder of a
digital signature’s private key.
I. AUTHENTICATION Prevention of unauthorized disclosure or use of information assets.

J. AUTHORISATION Prevention of unauthorized modification of information assets

K. NON REPUDIATION Verifies the identity by ascertaining what you know, what you have and what you are.


Q.3. Select the right choice from the following multiple choice questions.
A. Which of the following are key concerns for the security of information assets?
i. Theft
ii. Fraud/ forgery
iii. Unauthorized information access
iv. Interception or modification of data and data management systems
v. All of the above
B. Information at any point of time can be present in 3 states. Which of the following options rightly depicts these states?
i. Confidentiality, Integrity and Availability
ii. Confidentiality, Integrity and Transmission
iii. Transmission, Processing and Storage
iv. Availability, Processing and Storage
v. None of the above
C. What is the primary objective of cyber security controls? Pick the most appropriate option.
i. To help control data and personnel that come into and go out of the organization.
ii. To help manage risk and protect critical data assets from intrusions, security incidents and data loss.
iii. To help keep a control on the cyber security solutions being implemented in order to secure the
data assets.
iv. To help the government ensure that organisations and individuals are following the national cyber
security policy.
D. Which of the following best states the relationship between assets, vulnerabilities, threats and risks:
i. Asset + Threat + Vulnerability = Risk
ii. Risk + Threat + Asset = Vulnerability
iii. Threat +Vulnerability + Risk = Asset
iv. Vulnerability + Asset + Risk = Threat

Q.4. Given below are some security controls. For each, mention which type(s) of control it falls under by functionality. It could be more than one.
A. Doors : _____________________________
B. Security procedures and authentication : _____________________________
C. Cryptographic checksums : _____________________________
D. File integrity checkers : _____________________________
E. Audit trails and logs : _____________________________
F. Notices of monitoring and logging : _____________________________
G. Visible practice of sound cyber security management : _____________________________
H. Disaster recovery and business continuity mechanisms : _____________________________
I. Backup systems and data : _____________________________


Q.5. Describe in brief the following Tools and Techniques of Cyber Security
A. Security Vulnerability Management

B. Vulnerability Assessment

C. Security Testing

D. Access Control Models


Q.6. State the various types of Cyber Security Controls by “Functionality” and by “Plane of Application”

FUNCTIONALITY PLANE OF APPLICATION

1. 1.
2. 2.
3. 3.
4.
5.
6.

Q.7. Complete the threat classification called STRIDE from the initials of threat categories:

S__________________________________
T__________________________________
R__________________________________
I __________________________________
D__________________________________
E___________________________________

Q.8. For each of the attacks mentioned below, identify if it is a Network Attack, Application Attack, Phishing Attack
or a Malware.
A. Cross-Site Scripting : _____________________________
B. Buffer overflow attack : _____________________________
C. Trojan Horse : _____________________________
D. HTTP flood : _____________________________
E. Watering hole attack : _____________________________
F. Social phishing : _____________________________
G. Worm : _____________________________
H. Spear phishing attack : _____________________________
I. Whaling : _____________________________
J. Virus : _____________________________
K. Vishing : _____________________________


L. Eavesdropping : _____________________________
M. Spoofing : _____________________________
N. Network Sniffing (Packet Sniffing) : _____________________________
O. Data Modification : _____________________________
P. Denial of Service attack : _____________________________
Q. Man-in-the-middle attack : _____________________________
R. Compromised-Key Attack : _____________________________
S. Injections : _____________________________

UNIT 2
CRYPTOGRAPHY

At the end of this unit you will be able to:
• Explain the importance of cryptography and the areas of implementation
• State the components of a cryptographic system and their functions
• State the key mechanisms used by cryptographers
• Explain the different types of encryption schemes and standards
• Identify the applications of cryptographic algorithms and biometric authentication
• Identify the exchange of keys and user verification while communicating with the server
• Compute keys using the Diffie-Hellman Key Exchange algorithm
• Perform computation in the RSA algorithm for encryption and decryption
• Use graphical and textual passwords for signing into websites
• Interpret the SHA algorithm from the RFC standards available on the IETF website
• Implement the steps involved in the SHA algorithm by taking a sample message
• Perform the various steps such as listing, generating, importing and exporting of keys

2.1 BASICS OF CRYPTOGRAPHY

2.1.1 Need for Cryptography


We all know how vulnerable our data is over the communication network. The data is prone to various types of attacks while being transmitted, which in turn can result in information hacking. The attack can take the form of information being hacked by a third party, modification of the contents of the information packets, loss of data, or information being transmitted from an unreliable source. These types of attacks have caused major concern among enterprises and organisations, resulting in a dire need for a well-planned, efficient and secure strategy to ensure faithful transmission of data over communication networks.

Do you know? The word cryptography has Greek origins: “Kryptos” means hidden and “Graphein” means to write.

In order to stay secure from information hacks, people thought it would be a good idea to convert the data into a secret code for transmission over the communication network. This prevents a foreign entity or a third party from accessing the contents of the information being sent. This act of converting the data into a secret code is called “cryptography”. The original data converted into a coded equivalent, which is difficult to hack or access, is referred to as ciphertext. The algorithm used to convert plaintext (raw information) into ciphertext (coded equivalent) is known as the encryption algorithm.

Fig 2.1: Security controls are countermeasures for managing risks

To make it simpler, let us take an example for a better understanding of the concept.

Suppose a person A wants to send a piece of information to a person B over a public network. When person A sends the information, the data is converted into what we call ciphertext using the encryption algorithm. This step is vital to ensure that the data is protected and cannot be hacked or stolen by a third party or a foreign entity. If person B wants to obtain the information in its original form, the encrypted data must be decrypted using the decryption algorithm. This process involves sharing what we call a “key” that is private to the communicating parties. Person A proposes a key and shares it with person B to enable access. The key helps in facilitating the authentication process and ensuring that the data stays protected. Since the key is private to both A and B, no other entity can access the information sent over the communication network. Similarly, person B will follow the same process of authentication for sharing information with person A, and so on. Hence, cryptography is a key contributor to ensuring data integrity and confidentiality.


2.1.2 Cryptography Fundamentals

Goals of Cryptography

• Confidentiality: To protect our confidential information from malicious actions during storage as well as
transfer. Example: Hiding of customers’ data by Banks, hiding sensitive military information.
• Integrity: Changes in the information can be done only through authorised entities and mechanisms. Example:
Money withdrawn and account updated in Banks.
• Availability: Correct information is available to the authorized entities whenever needed. Example: Accessing
bank account for transactions.

Cryptography Security Services

Cryptography makes it possible to offer the following data security services:


• Data Confidentiality: Provides protection of data (or the whole message) from unauthorised users by
preventing snooping and traffic analysis attack.
• Data Integrity: Ensures protection from any kind of modifications, changes, insertions from unauthorised
entities.
• Authentication: Ensures that the data is being transmitted between authenticated users and only these users
can access it.
• Non-repudiation: Provides protection against repudiation by either the sender or receiver by involving a
third party. Receiver can identify sender by using proof of origin. Sender can identify receiver by using proof
of delivery.
• Access Control: Prevention from unauthorised access to data such as snooping, reading, writing etc.

Components of a Cryptographic System
There are several components within a cryptographic system, such as:
• Plaintext: The original data that is sent by the actual source over the communication network is called as
plaintext. The plaintext is converted into ciphertext for secure communication.
• Ciphertext: When the plaintext undergoes encryption, it is converted into a secret code that is called ciphertext.
For the receiver to interpret this information, cipher-text is decrypted using cryptographic algorithms.
• Encryption: The process of conversion of plaintext into ciphertext is called encryption. Encryption ensures that
the information is protected so that data confidentiality and integrity is maintained.
• Decryption: The process of converting the ciphertext into plaintext is called decryption. Decryption is done to
retrieve the original information that has been sent by the actual source.
• Key: When the plaintext (message) gets encrypted, the sender uses a “key” for encryption. This key is also used
by the receiver to decrypt the information shared by the sender. In simple terms, the key used for encryption is
called encryption key and the key used for decryption is called decryption key respectively.
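To make these components concrete, here is a minimal Python sketch, assuming the third-party cryptography package (pip install cryptography) is available; its Fernet construction is one ready-made symmetric scheme:

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the shared secret key
cipher = Fernet(key)

plaintext = b"Transfer Rs 20000 to account 1234"   # illustrative message
ciphertext = cipher.encrypt(plaintext)   # encryption: plaintext -> ciphertext
recovered = cipher.decrypt(ciphertext)   # decryption: ciphertext -> plaintext
assert recovered == plaintext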
Security Mechanisms
The following security mechanisms are used by cryptographers:
• Encipherment: It is defined as the hiding or covering of data which provides confidentiality. Cryptography and
steganography are the two techniques that use this.
• Data Integrity: A check-value of the initial message sent is created which is transferred along with the initial
message. After receiving the message, a new check-value is created w.r.t. the message received. If both the
check-values (old & new) are the same, integrity of data is maintained.


• Digital Signature: Also known as electronic signature, the sender signs the document to be sent using his/her
private key and sends out a public key along with the document. The receiver uses the public key of the sender
to decrypt the document which proves that the document indeed is sent by him/her.
• Authentication Exchange: To prove that the two entities that are communicating are authentic, some secret
information can be used as key(which only the two of them know about).
• Traffic Padding: Insertion of bogus data into the main message to hide the pattern of the data being transferred.
• Routing Control: This mechanism helps in changing and selecting different routes during the transmission of
data to avoid attacks.
• Notarization: This mechanism involves a third party as a witness to the communication between the sender
and the receiver so that neither of them can later deny about the conversation.
• Access Control: Only authorised users have the access to data. This can be proved through PINs and passwords.

2.1.3 Types of Encryption

In the previous section we have come across the basic terminologies used in cryptography. We also understood
the importance of encryption in a communication network. In this section, we will discuss the types of encryption
techniques such as symmetric encryption and asymmetric encryption. Symmetric encryption is an encryption scheme
that utilises the same key for performing encryption and decryption. The other name given to symmetric encryption
is conventional encryption. Among the various attacks that exist, the two types of attacks that are quite common are
cryptanalysis and brute force. The former exploits the properties of encryption algorithm whereas the latter tries all
possible keys to enter into the communication system.
Private key encryption involves the sharing of a single key between the sender and the receiver. Since this type of encryption uses a single key, it is a relatively fast mode of communication. However, an attacker can get into the cryptographic system if the key is stolen or leaked to an unauthorised entity that is not involved in the communication process.
An alternative approach is Public Key Infrastructure, commonly referred to as PKI, where two keys are used: a private key and a public key.
o The public key is distributed and known to all.
o The private key is never shared with non-communicating entities.
To understand the role of PKI, let's take an example. When someone makes an online purchase, they use Secure Sockets Layer (SSL), a standard protocol for the secure transmission of documents, to encrypt the web session between their browser and the website. PKI is used to establish this type of communication. We will discuss SSL and PKI in greater detail later in this unit.
It is impossible to imagine our lives without the Internet, which we all know is prone to various types of attacks. In today's world, we are largely dependent on the Internet for various requirements: we love to do online shopping, send emails, be active on social media and so on. The question is, are we really aware of the threats that exist when we go online? There are chances that our accounts may be accessed by someone and the contents of our information stolen. So, how do we stay immune when communicating online? Cryptographic algorithms and protocols have been crucial in establishing faithful communication, and they help in preserving our information while we communicate over an untrusted network such as the Internet.

Let us now understand the various types of encryption.


Symmetric key encryption

The type of encryption where the same key is used for encrypting the plaintext and decrypting the ciphertext is called
Symmetric Encryption or Symmetric Key Cryptography. The study of symmetric cryptosystems is called symmetric
cryptography. This type of encryption technique is used in hiding streams of data of different sizes, which could be
files, messages, passwords or encryption keys.

Some examples of this technique include the Data Encryption Standard (DES), Triple-DES (3DES), IDEA, and BLOWFISH.

The features of cryptosystem based on symmetric key encryption are:


• In order to communicate using symmetric key encryption, one must share a common key before the information
exchange.
• It is advisable that the keys should be changed regularly to prevent the third party from intercepting the keys.
• This system can prove to be expensive and cumbersome since the keys are changed regularly. Hence, there is
a need for more robustness in symmetric encryption technique.
• Suppose there are ‘n’ people who wish to establish communication among themselves. The number of keys required will then be n(n-1)/2; for example, 10 people need 10*9/2 = 45 distinct keys.
• The length of the key used in symmetric encryption is smaller (in terms of bits) which makes the entire process
faster as compared to asymmetric encryption.
• The processing power of computer system required for symmetric encryption is less.
Apart from the given features, there are some challenges faced by Symmetric Encryption technique.
• Key establishment: Before starting the communication process, the sender and receiver must agree on a secret
symmetric key. This calls for a secure key establishment mechanism in place.
• Trust Issue: As the sender and the receiver are using the same symmetric key, both should trust each other.
There may be an instance wherein the receiver has lost the key to an attacker and the sender is not informed
about the incident.


Fig 2.2: Symmetric Key Cryptography


TRADITIONAL SYMMETRIC-KEY CIPHERS


A cipher is an algorithm used for encryption of plaintext and decryption of ciphertext. A symmetric key cipher uses
same key for encryption and decryption. Traditional ciphers are broadly divided into two categories:

TRADITIONAL
SYMMETRIC CIPHERS

Substitution Ciphers Transposition Ciphers

Substitution ciphers are those ciphers that substitute one alphabet with another alphabet. Substitution ciphers are further divided into the following:

1. Mono-alphabetic Cipher
In mono-alphabetic cipher, each symbol in plain-text is mapped to one cipher-text symbol. For example, if the
word in plaintext is ‘read’, then the word in ciphertext (as per mapping criteria) can be ‘tayd’. If a word contains
repeated alphabets, then the mapping will remain same for each repetition. For example, ‘balloon’ will be
encrypted as ‘gyeeuuk.’ Hence, mapping between plaintext and ciphertext is one-to-one.
Various Types of mono-alphabetic ciphers are:
Additive Cipher (Shift Cipher/Caesar Cipher)
As the name suggests, the key is added to the plaintext to obtain ciphertext and the same key is subtracted
from the ciphertext to obtain plaintext. This is the simplest form of mono-alphabetic substitution cipher.
For example:

Encryption key: 4 plaintext: READ


Decryption key: 22 ciphertext: VIEH

Plaintext:  A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Ciphertext: D E F G H I J K L M N O P Q R S T U V W X Y Z A B C
(The table above illustrates a shift of 3, the classic Caesar shift.)
Since the key space has only 26 possibilities, this cipher is not very secure and is prone to brute-force attack.
Variations of the Additive Cipher:
Caesar Cipher: This was used by Julius Caesar for his secret communication. The key is always 3.
Shift Cipher: Since the additive cipher shifts characters toward the end of the alphabet, it is also known as the shift cipher.
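As a small Python sketch, the additive cipher takes only a few lines; with key 4 it reproduces the READ to VIEH example above (the decryption key 22 is simply 26 - 4):

def additive_encrypt(text, key):
    # Shift each letter 'key' positions forward, wrapping past Z.
    return "".join(chr((ord(c) - 65 + key) % 26 + 65) for c in text)

def additive_decrypt(text, key):
    # Decrypting with key k is encrypting with the additive inverse 26 - k.
    return additive_encrypt(text, 26 - key)

print(additive_encrypt("READ", 4))   # VIEH
print(additive_decrypt("VIEH", 4))   # READ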


Multiplicative Cipher
In a multiplicative cipher, the key is multiplied with the plaintext to produce ciphertext. For obtaining plaintext, the multiplicative inverse of the key is multiplied with the ciphertext; the key must therefore be coprime with n so that this inverse exists.
Encryption: C = (P * k) mod n
Decryption: P = (C * k⁻¹) mod n
For example (with n = 26):

Encryption key: 3, plaintext: READ
Decryption key: 3⁻¹ mod 26 = 9, ciphertext: ZMAJ

0 A (0*3) mod 26 = 0 A        13 N (13*3) mod 26 = 13 N
1 B (1*3) mod 26 = 3 D        14 O (14*3) mod 26 = 16 Q
2 C (2*3) mod 26 = 6 G        15 P (15*3) mod 26 = 19 T
3 D (3*3) mod 26 = 9 J        16 Q (16*3) mod 26 = 22 W
4 E (4*3) mod 26 = 12 M       17 R (17*3) mod 26 = 25 Z
5 F (5*3) mod 26 = 15 P       18 S (18*3) mod 26 = 2 C
6 G (6*3) mod 26 = 18 S       19 T (19*3) mod 26 = 5 F
7 H (7*3) mod 26 = 21 V       20 U (20*3) mod 26 = 8 I
8 I (8*3) mod 26 = 24 Y       21 V (21*3) mod 26 = 11 L
9 J (9*3) mod 26 = 1 B        22 W (22*3) mod 26 = 14 O
10 K (10*3) mod 26 = 4 E      23 X (23*3) mod 26 = 17 R
11 L (11*3) mod 26 = 7 H      24 Y (24*3) mod 26 = 20 U
12 M (12*3) mod 26 = 10 K     25 Z (25*3) mod 26 = 23 X

Affine Cipher
The affine cipher uses two keys (a and b) simultaneously for encryption and decryption, combining the multiplicative and additive ciphers; the multiplicative key a must be coprime with 26. These keys are used as part of an equation.
For example:

Encryption: a = 3, b = 2, plaintext: READ
equation: ciphertext value = (plaintext value * a + b) mod 26, ciphertext: BOCL
Decryption: a = 3, b = 2, with a⁻¹ = 9 (the inverse of 3 mod 26)
equation: plaintext value = ((ciphertext value - b) * a⁻¹) mod 26

0 A (0*3+2) mod 26 = 2 C        13 N (13*3+2) mod 26 = 15 P
1 B (1*3+2) mod 26 = 5 F        14 O (14*3+2) mod 26 = 18 S
2 C (2*3+2) mod 26 = 8 I        15 P (15*3+2) mod 26 = 21 V
3 D (3*3+2) mod 26 = 11 L       16 Q (16*3+2) mod 26 = 24 Y
4 E (4*3+2) mod 26 = 14 O       17 R (17*3+2) mod 26 = 1 B
5 F (5*3+2) mod 26 = 17 R       18 S (18*3+2) mod 26 = 4 E
6 G (6*3+2) mod 26 = 20 U       19 T (19*3+2) mod 26 = 7 H
7 H (7*3+2) mod 26 = 23 X       20 U (20*3+2) mod 26 = 10 K
8 I (8*3+2) mod 26 = 0 A        21 V (21*3+2) mod 26 = 13 N
9 J (9*3+2) mod 26 = 3 D        22 W (22*3+2) mod 26 = 16 Q
10 K (10*3+2) mod 26 = 6 G      23 X (23*3+2) mod 26 = 19 T
11 L (11*3+2) mod 26 = 9 J      24 Y (24*3+2) mod 26 = 22 W
12 M (12*3+2) mod 26 = 12 M     25 Z (25*3+2) mod 26 = 25 Z
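A short Python sketch of the affine cipher with the keys used above (a = 3, b = 2); pow(a, -1, 26) computes the modular inverse and needs Python 3.8+:

A, B = 3, 2
A_INV = pow(A, -1, 26)   # modular inverse of a, here 9

def affine_encrypt(text):
    # C = (a*P + b) mod 26
    return "".join(chr(((ord(c) - 65) * A + B) % 26 + 65) for c in text)

def affine_decrypt(text):
    # P = ((C - b) * a^-1) mod 26
    return "".join(chr(((ord(c) - 65 - B) * A_INV) % 26 + 65) for c in text)

print(affine_encrypt("READ"))   # BOCL
print(affine_decrypt("BOCL"))   # READ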


2. Poly-alphabetic Cipher
In poly-alphabetic cipher, each occurrence of a character may have a different substitute. For example, ‘balloon’
in plaintext is written as ‘hwtyufo.’ Hence, mapping between plaintext and ciphertext is one-to-many.

Auto-key cipher
An autokey cipher incorporates the plaintext message into the key. The key is generated from the message in
some automated fashion, sometimes by selecting certain letters from the text or, more commonly, by adding
a short primer key to the front of the message.

Vigenere Cipher
The encryption and decryption of text is done using Vigenere square or Vigenere table. The table consists of
the alphabets written out 26 times in different rows, each alphabet shifted cyclically to the left compared to the
previous alphabet, corresponding to the 26 possible Caesar Ciphers.


Vigenere Table
The initial key stream is repeated to the length of the plaintext. For example, if the plaintext is “Beautiful day”
and the initial key is “pen” then the key stream generated will be “penpenpenpenp.”

Initial Key: PEN plaintext: READ


ciphertext: GINS

Encryption
The first alphabet of the plaintext, R is paired with P (which is the first alphabet of the key). So use row R
and column P of the Vigenère table. The outcome is G. Similarly, for the second alphabet of the plaintext, E,
the second alphabet of the key E is used, the alphabet at row E and column E is I. The rest of the plaintext is
enciphered in a similar fashion.

Decryption
Decryption is performed by going to the row in the table corresponding to the key, finding the position of the
ciphertext alphabet in this row, and then using the column’s label as the plaintext. For example, in top row P
(from PEN), the ciphertext G appears in the column against R, which is the first plaintext alphabet. Next we go to
row E (from PEN), find the ciphertext I which is found in column against E, thus E is the second plaintext alphabet.
This can also be expressed algebraically:
Encryption: Ci = (Pi + Ki) mod 26
Decryption: Pi = (Ci - Ki) mod 26
where Pi is the plaintext value, Ci the ciphertext value and Ki the key value at position i.
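A compact Python sketch of the Vigenère cipher, reproducing the READ/PEN to GINS example above:

def vigenere(text, key, decrypt=False):
    # Add (or subtract) the repeating key stream, letter by letter, mod 26.
    sign = -1 if decrypt else 1
    out = []
    for i, c in enumerate(text):
        k = ord(key[i % len(key)]) - 65
        out.append(chr((ord(c) - 65 + sign * k) % 26 + 65))
    return "".join(out)

print(vigenere("READ", "PEN"))                 # GINS
print(vigenere("GINS", "PEN", decrypt=True))   # READ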
Playfair Cipher
Playfair cipher was the first digraph cipher that is used practically. The key of this cipher is a 5x5 matrix
containing all 26 alphabets of English. Note that I and J share a single block in the matrix.
First, the keyword is written into the key matrix row wise; the remaining blocks of the matrix are then filled, in alphabetical order, with the remaining alphabets (those which have not yet occurred). Before encryption, the plaintext is broken into pairs of letters; if both letters of a pair are the same, some bogus data is inserted between them. For example, BALLOON is prepared as BALXLOON, i.e. BA LX LO ON (inserting X splits the LL pair).
Certain rules are followed for encryption/decryption using the Playfair cipher:
- Encryption/decryption is done by taking the alphabets in groups of two.
- If the two alphabets are in the same row, replace each with the alphabet to its immediate right (wrapping around to the start of the row).
- If the two alphabets are in the same column, replace each with the alphabet immediately below it (wrapping around to the top of the column).
- If the two alphabets are in neither the same row nor the same column, replace each with the alphabet in its own row and in the column of the other alphabet.

Playfair Cipher
L A R G E

S T B C D

F H I/J K M

N O P Q U

V W X Y Z

Keyword: LARGEST
Plain text: Mu st se ey ou
Cipher text: UZTBDLGZPN


Brute-force attack on this cipher is very difficult due to the large size of the key domain (26!).
Hill Cipher
This poly-alphabetic cipher is based on matrix multiplication (linear algebra). The matrix is the key that is used
for encryption and decryption. Also, the key matrix should have a multiplicative inverse.
Matrix Multiplication:
Encryption: The first step is to convert the text into a matrix so the key can be applied to it. Following the rules of matrix multiplication, P and the key are multiplied (mod 26) to generate the ciphertext.
E = [P] * [K] mod 26
Decryption: The ciphertext is first converted into a matrix. The inverse of the key (mod 26) is generated, which is then multiplied with the ciphertext matrix to produce the initial plaintext.
D = [C] * [K⁻¹] mod 26
Brute-force attack on this cipher is very difficult due to the large size of the key domain (all invertible m×m key matrices).
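A minimal Python sketch of a 2×2 Hill cipher; the key matrix below is a common textbook example chosen because its inverse mod 26 exists, and is an illustrative assumption rather than an example from this handbook:

KEY     = [[3, 3], [2, 5]]     # det = 9, which is invertible mod 26
KEY_INV = [[15, 17], [20, 9]]  # KEY * KEY_INV = identity (mod 26)

def hill(text, key):
    # Process digraphs as row vectors: [c0 c1] = [p0 p1] * key mod 26.
    out = []
    for i in range(0, len(text), 2):
        p0, p1 = ord(text[i]) - 65, ord(text[i + 1]) - 65
        out.append(chr((p0 * key[0][0] + p1 * key[1][0]) % 26 + 65))
        out.append(chr((p0 * key[0][1] + p1 * key[1][1]) % 26 + 65))
    return "".join(out)

ct = hill("HELP", KEY)
print(ct)                  # DPLE
print(hill(ct, KEY_INV))   # HELP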
One Time Pad
This poly-alphabetic cipher uses a technique which makes it immune to any cryptographic attack. The idea behind it is to choose a truly random key from the key domain for every character of the message: say, the first character of the text is encrypted using key 06, the second uses key 08 and so on (every time a new key).
Though it is a perfect cipher with full secrecy, it is impractical to implement, since the key must be truly random, as long as the message itself, and never reused.

Rotor Cipher
Rotor cipher uses a rotor machine that follows monoalphabetic substitution, but the mapping between plaintext and ciphertext changes after every rotation. The rotor machine is permanently wired and uses 26 letters. If the rotor is stationary, the cipher follows monoalphabetic substitution, but if the rotor is rotating, it follows polyalphabetic substitution.
The initial position of the rotor is secretly shared between the sender and receiver.
This cipher provided better practical use than the one-time pad cipher.

This method uses the concept of monoalphabetic ciphers but changes the mapping between plaintext and ciphertext for each character.
For example, using 6 characters, with the initial position (or key) decided by the sender and receiver:
Plaintext: cab
Ciphertext: FAC


3. Transposition Cipher

Transposition Ciphers can be further divided into the following:

Columnar Transposition Cipher


Columnar transposition involves writing the plaintext out in rows and then reading the ciphertext off column by column.
Encryption: The message is written row wise in a matrix, and the key is applied to permutate the columns. The permutated matrix is read column wise to generate the ciphertext.
Decryption: The ciphertext is written back into the matrix column wise. The inverse permutation is applied to restore the original column order, and the matrix is then read row wise to recover the plaintext message.

Rail Fence Cipher
In the rail fence cipher, the plaintext is written diagonally downwards on successive steps.
After reaching the bottom rail, we traverse upwards diagonally. After reaching the top, the direction is changed
again. Thus, the alphabets of the message are written in a zig-zag manner.
After the message has been written in a zig-zag manner, the individual rows are combined to obtain the
ciphertext.
Key defines the number of rails/steps to be traversed in a direction.
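A simple Python sketch of rail-fence encryption (the sample plaintext is an illustrative assumption):

def rail_fence_encrypt(text, rails):
    # Write the message in a zig-zag over 'rails' rows, then read row by row.
    rows = [[] for _ in range(rails)]
    row, step = 0, 1
    for c in text:
        rows[row].append(c)
        if row == 0:
            step = 1             # change direction at the top rail
        elif row == rails - 1:
            step = -1            # change direction at the bottom rail
        row += step
    return "".join("".join(r) for r in rows)

print(rail_fence_encrypt("WEAREDISCOVERED", 3))   # WECRERDSOEEAIVD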

Asymmetric key encryption


The cryptography technique that uses different keys for encryption and decryption of the information is called
Asymmetric Encryption or Asymmetric Key Cryptography. In this process, although the keys are different, they are
related through a mathematical function. This type of technique is used to conceal small blocks of data, such as
encryption keys and hash function values, which are used in digital signatures. Some examples of this technique are
RSA, Diffie-Hellman, ECC, El Gamal and DSA. The features of cryptosystem based on asymmetric key encryption are:
• Asymmetric encryption involves a pair of dissimilar keys such as the private key and public key. There is a
mathematical relation between the two keys which means when one key is used for encryption, the other
can decrypt the ciphertext back to the original plaintext.
• The public key is distributed and kept in a public repository, whereas the private key is kept secret and is known only to its owner. This scheme is also referred to as Public Key Encryption.
• The public key and the private key are mathematically related, but it is computationally infeasible to derive the private key from the public key. This is a key characteristic, since an attacker cannot deduce the private key from the public one.
• The sender accesses the public key from the repository to encrypt the data for proper transmission. The
receiver makes use of private key for the extraction of plaintext.
• The length of the keys (in terms of bits) is large making the entire process slower as compared to symmetric
key encryption.
• The amount of processing power required is higher for running the asymmetric algorithms.
As with symmetric encryption, this technique also faces some challenges. In public-key encryption, the user needs to trust that the public key being used for the communication is genuine and has not been spoofed by a malicious third party. This is achieved using a Public Key Infrastructure (PKI) that involves a trusted third party. The third party oversees the management and testing of the authenticity of public keys: whenever the third party provides a public key, the sender/receiver can verify its authenticity through a digitally signed certificate issued by that third party.


Fig 2.3: Asymmetric Key Cryptography

1. Data integrity algorithms: The algorithms used to assure that information and programs are changed only in a
specified and authorized manner are data integrity algorithms. These types of algorithms are used to protect
blocks of data, such as messages, from alteration.
2. Authentication protocols: Authentication protocols enable communicating parties to authenticate the identity
of entities and to exchange session keys using cryptographic algorithms. For example, Kerberos authentication
service is used in a distributed environment.

Cryptography Protocols
Cryptographic protocols are sets of rules or instructions that provide secure connections, allowing two parties to communicate with privacy and data integrity. Since cryptographic protocols and algorithms are very complex and require a high level of expertise to create, most people use protocols and algorithms that are commonly applied and accepted as secure.
Some such Protocols are:
• IPSec
• SSL (now superseded by TLS)
• SSH
• S/MIME
• OpenPGP/GnuPG/PGP
• Kerberos
Each of these protocols have their own benefits and challenges and even may overlap with respect to their functions.
We will read more about each of these later in this unit.

Cryptography Algorithms
Cryptographic algorithms are the sequences of processes, which are used for encrypting and decrypting messages in
a cryptographic system.
Cryptographic algorithms are of many types and most of them can be divided in the following categories.
Symmetric: Data Encryption Standard (DES) and Advanced Encryption Standard (AES) are the most popular examples
of symmetric cryptography algorithms.
Asymmetric: RSA is one of the most common examples of this algorithm.
Cryptographic algorithms are specified by the National Institute of Standards and Technology (NIST). They include
cryptographic algorithms for encryption, key exchange, digital signature, and hashing.


Cryptography Standards
There are many cryptography standards. The National Institute of Standards and Technology is an organization aimed at helping US economic and public welfare issues by providing leadership for the nation's measurement and standards infrastructure. That's basically a fancy way of saying they set the standards for things like encryption as it pertains to non-classified government information, both in transit and at rest.
Granted, there are a lot of standards, or FIPS (Federal Information Processing Standards); we're really only concerned with the ones that pertain to encrypted data in motion or, more specifically, as they relate to SSL. Keep in mind, these standards aren't binding, but they are suggested by the US Government for any and all non-classified data.
Some standards that are widely known by cryptographers are as follows:
Encryption standards
• Data Encryption Standard (DES, now obsolete)
• Advanced Encryption Standard (AES)
• RSA the original public key algorithm
Hash standards
• MD5 (obsolete)
• SHA-1 (obsolete)
• SHA-2
Digital signature standards
• Digital Signature Standard (DSS), based on the Digital Signature Algorithm (DSA)
• RSA
Public-key infrastructure (PKI) standards
• X.509 Public Key Certificates
We will read more about each of these later in this unit.


2.2 DES AND AES

2.2.1 The Data Encryption Standard (DES)


This is one of the most widely used encryption schemes. It is based on the Data Encryption Standard (DES) adopted in 1977 by the National Bureau of Standards, now the National Institute of Standards and Technology (NIST), as Federal Information Processing Standard 46 (FIPS PUB 46).
The DES technique is used in secured video teleconferencing, routers, remote access servers and for providing cryptographic protection for sensitive information.
In DES, the data is encrypted in 64-bit blocks using a 56-bit key. The supplied key is actually 64 bits long, but every eighth bit (bit positions 8, 16, 24, 32, 40, 48, 56 and 64) is discarded, producing a 56-bit key.
As in any encryption scheme, the two inputs are the plaintext to be encrypted and the key used for encryption.

Fig 2.4: Every 8th bit is discarded

DES is based upon two attributes of cryptography i.e. substitution and transposition. DES consists of 16 steps that are
referred to as rounds.
Let us understand the process of encryption in DES through the following steps:
1. Initially, the 64-bit plaintext block goes through the Initial Permutation (IP) function.
2. The initial permutation breaks the plaintext into two 32-bit halves: Left Plain Text (LPT) and Right Plain Text (RPT).
3. Each of LPT and RPT then goes through 16 rounds of the encryption process.
4. At last, LPT and RPT are re-joined and a Final Permutation (FP) is performed on the combined block.
5. The result is a 64-bit ciphertext.
The figure in the next page depicts the same.


Fig 2.5: General Depiction of DES Encryption Algorithm
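The full 16-round pipeline is usually consumed through a library rather than re-implemented. As a usage sketch, assuming the third-party pycryptodome package (pip install pycryptodome), DES can be exercised as follows; ECB mode is used here purely to show the 8-byte block size and is not recommended in practice:

from Crypto.Cipher import DES
from Crypto.Util.Padding import pad, unpad

key = b"8bytekey"                    # 64-bit key; 8 of these bits are parity
cipher = DES.new(key, DES.MODE_ECB)  # illustrative mode choice only

ciphertext = cipher.encrypt(pad(b"ATTACK AT DAWN", DES.block_size))
plaintext = unpad(DES.new(key, DES.MODE_ECB).decrypt(ciphertext), DES.block_size)
print(plaintext)                     # b'ATTACK AT DAWN'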


3DES

Introduced in 1998, the 3DES algorithm is adopted in finance, payment and other private industries for encrypting data in transmission or at rest. 3DES is a symmetric key block cipher that applies the DES cipher three times with three keys.
As already discussed, this encryption technique uses three different DES keys, namely K1, K2 and K3. This makes the effective 3DES key length 3 x 56 = 168 bits. Now, let us see how this mechanism takes place through a sequence of steps:
Step 1: The plaintext blocks are encrypted using single DES with key K1.
Step 2: The output of Step 1 is decrypted using single DES with key K2.
Step 3: The output of Step 2 is encrypted using single DES with key K3.
Step 4: The output received from Step 3 is the ciphertext.
Step 5: Decryption of the ciphertext follows the reverse procedure: first decrypt using K3, then encrypt with K2, and finally decrypt with K1.
3DES was extensively used in Microsoft products such as Microsoft Outlook 2007, Microsoft OneNote, Microsoft
System Center Configuration Manager 2012 for the protection of user configuration and user data.
Triple DES systems are significantly more secure than single DES, but encryption with them is clearly much slower than with single DES.

Fig 2.6: 3DES Technique


2.2.2 Advanced Encryption Standard (AES)


Advanced Encryption Standard or AES was published by the National Institute of Standards and Technology (NIST) in
2001. It is the most widely adopted symmetric encryption algorithm which is six times faster than triple DES.
AES is a symmetric block cipher capable of replacing DES for a wide range of applications. The structure of AES is a bit complex compared to other cryptographic algorithms. The AES-128 and AES-256 versions of the algorithm are implemented in online and internet banking.
The encryption phase of AES involves the three rounds i.e. initial round, main round and the final round.
Initial Round: AddRoundKey
Main Rounds: SubBytes, ShiftRows, MixColumns and AddRoundKey
Final Round: SubBytes, ShiftRows and AddRoundKey

General Structure
The cipher takes a plaintext block size of 128 bits, or 16 bytes. The key length can be 16, 24 or 32 bytes (128, 192, or 256 bits). This algorithm is referred to as AES-128, AES-192, or AES-256 according to the key length.
The number of rounds in AES is variable and depends upon the length of the key. The number of rounds can be
calculated as 10 rounds for 128-bit keys, 12 rounds for 192-bit keys and 14 rounds for 256-bit keys. AES relies on the
technique of substitution-permutation for the operations. The replacement of inputs by specific outputs is termed as
substitutions and the process of shuffling bits around is referred to as permutations.
The Encryption process of AES consists of four major steps:

Fig 2.7: AES Algorithm


1. Byte Substitution (SubBytes): The 16 input bytes are substituted by looking them up in a fixed substitution table (S-box), which gives us a matrix of four rows and four columns.
2. ShiftRows: All the four rows are shifted towards the left. The entries that ‘fall off ‘are re-inserted on the right
side of the row. The shift is done in the following way:
• First row is not shifted at all
• Second row is shifted by one position to the left
• Third row is shifted by two positions to the left
• Fourth row goes three positions to the left
• This gives us a new matrix with 16 bytes.
3. MixColumns: The column of four bytes undergoes transformation using a mathematical function. The four
bytes of one column are the inputs and four new bytes are the outputs that will replace the original column.
The outcome is a new matrix of 16 bytes.
4. AddRoundKey: The 16 bytes are treated as 128 bits and XORed with the 128 bits of the round key. If this is the last round, the output is the ciphertext; otherwise, the resulting 128 bits are interpreted as 16 bytes and another round begins.
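These four operations are packaged inside every AES library. As a usage sketch, assuming the third-party cryptography package is installed, the following encrypts and decrypts with AES-256 in GCM mode, where the 256-bit key implies 14 internal rounds:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit key -> 14 rounds inside
nonce = os.urandom(12)                      # must be unique per message
aes = AESGCM(key)

ciphertext = aes.encrypt(nonce, b"online banking session data", None)
print(aes.decrypt(nonce, ciphertext, None))  # b'online banking session data'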


2.3 THE RSA ALGORITHM

The RSA algorithm was developed in 1977 by Ron Rivest, Adi Shamir, and Len Adleman at MIT and first published in 1978. The Rivest-Shamir-Adleman (RSA) scheme has become the most widely accepted public-key encryption technique in the world. This technique can be used for both public-key encryption and digital signatures.
The algorithm makes use of the fact that there is no easy way to factor very large (100-200 digit) numbers.
The algorithm is explained as follows:
• The message is represented as an integer between 0 and (n-1). Large messages can be broken into a number of blocks, each of which can be represented by an integer in the same range.
• Perform encryption of the message by raising it to the eth power modulo n. The result will be ciphertext
message C.
• To perform decryption of the ciphertext message C, it will be raised to another power d modulo n.
The encryption key (e,n) is made public while the decryption key (d,n) is kept private by the user.
Let us see the approach to determine appropriate values for e,d and n.
• Choose two very large (100+ digit) prime numbers such as p and q.
• Set the value of n equal to p * q i.e. n = p * q.
• Select any large integer, d, such that GCD (d, ((p-1) * (q-1))) = 1 where GCD is the Greatest Common Divisor.
• Find the value of e such that e * d = 1 (mod ((p-1) * (q-1))).
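These steps can be traced end to end with deliberately small primes. The sketch below is a toy illustration only (real RSA uses primes hundreds of digits long); it fixes e first and derives d from it, which is equivalent to the procedure above:

# Toy RSA -- numbers far too small to be secure, used only to trace the math.
p, q = 61, 53
n = p * q                     # n = 3233
phi = (p - 1) * (q - 1)       # (p-1)*(q-1) = 3120
e = 17                        # chosen so that gcd(e, phi) = 1
d = pow(e, -1, phi)           # d = 2753, since e*d = 1 (mod phi); Python 3.8+

message = 65                  # an integer between 0 and n-1
ciphertext = pow(message, e, n)     # encryption: m^e mod n
recovered = pow(ciphertext, d, n)   # decryption: c^d mod n
print(ciphertext, recovered)        # 2790 65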

Security Aspect of RSA


• Brute force: This method involves trying all possible private keys for attacking the RSA algorithm.
• Mathematical attacks: These types of attacks are equivalent in effort to factoring the product of two primes.
• Timing attacks: These types of attacks depend on the running time of the decryption algorithm.
• Chosen ciphertext attacks: These attacks exploit properties of the RSA algorithm.


2.4 HASH FUNCTION

A cryptographic hash function represents a mathematical equation that helps in protecting source information. Hash functions cater to a multitude of applications such as blockchain technology, payments on e-commerce websites, etc.
A hash function can be simply expressed in the form of a mathematical equation: it accepts a variable-size block of data as input and gives a fixed-size hash value as the output. The main objective of a hash function is to achieve data integrity.

Properties of Hash functions:


• Computationally Efficient: Cryptographic hash functions must be able to perform the required mathematical work in a short period of time.
• Deterministic: For any given input, a hash function should always give the same result. It doesn't matter how many times you enter the same input; the function must produce the same output every time.
• Pre-Image Resistant: The output of a cryptographic hash function must not reveal the details of the input. This property is referred to as pre-image resistance. The input can be numbers, letters, words, or punctuation marks.
• Collision Resistant: It must be computationally infeasible to find two different inputs that produce the same output.
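The deterministic, fixed-size behaviour described in this list is easy to observe with Python's standard hashlib module, as sketched below:

import hashlib

# Deterministic: hashing the same input twice gives the identical digest.
print(hashlib.sha256(b"read").hexdigest() == hashlib.sha256(b"read").hexdigest())

# A one-letter change in the input yields a completely different,
# fixed-size (256-bit, 64 hex character) digest.
print(hashlib.sha256(b"read").hexdigest())
print(hashlib.sha256(b"reed").hexdigest())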

Applications of Cryptographic Hash Functions


• Message Authentication:
The process of verifying the integrity of a message is referred to as message authentication. This is to
ensure that the data is received in the same form in which it was sent (without any modification, insertion,
deletion, or replay) by the actual source. Whenever the hash function provides message authentication, the
corresponding hash function value is called message digest. The code which helps in achieving message
authentication is called message authentication code (MAC), also known as a keyed hash function.
• Digital Signatures:
There is a similarity between the operation of message authentication service and digital signature. Digital
signature is used to authenticate the contents of information by signing the information that has been shared
by a legitimate source. Here, the hash value of a message is encrypted using the user’s private key. The one
who knows the user’s public key can verify the integrity of the message associated with the digital signature.
• Password Protection:
Nowadays, almost all e-platforms ask for a username and password for logging in to a portal. From the organisational perspective, there should be a strong and secure database capable of storing multiple passwords. Alternatively, the database can store the hash of the password rather than the password itself. Thereafter, whenever the user logs in, the entered password is hashed and compared to the stored hash value in the organisational database. The user is successfully logged in once the authentication process is complete.
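A minimal sketch of this pattern with Python's standard hashlib; the salted, iterated PBKDF2 call shown here is a common hardening of the plain password hash described above (the password and iteration count are illustrative):

import hashlib, os

# Registration: store a salted hash of the password, never the password.
salt = os.urandom(16)
stored = hashlib.pbkdf2_hmac("sha256", b"user-password", salt, 100_000)

# Login: hash the entered password with the same salt and compare.
attempt = hashlib.pbkdf2_hmac("sha256", b"user-password", salt, 100_000)
print(attempt == stored)   # True -> authentication succeeds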

2.4.1 Message Digest Algorithm – MD5


MD5 came into existence in 1991 when the well-known cryptographer Ronald Rivest proposed this technique. The MD5 algorithm is useful for applying a fingerprint function to a file. Historically, MD5 was widely used to hash database passwords, and it also generates a file thumbprint to verify that a file is identical; note, however, that MD5 is now considered cryptographically broken and obsolete for security purposes.


MD5 is a cryptographic algorithm which accepts input of arbitrary length and in turn produces a message digest which is 128 bits long. This digest is called the “hash” or “fingerprint” of the input. MD5 is applied in situations where a long message needs to be processed and compared quickly, as seen in the creation and verification of digital signatures.

Working of MD5
• The initial step is division of the input into blocks of 512 bits each.
• 64 bits are inserted at the end of the last block.
• These 64 bits are used to record the length of the original input.
• If the last block is less than 512 bits, some extra bits are 'padded' to the end.
• Each block is divided into 16 words consisting of 32 bits each.
The process involves appending the padding bits, appending a representation of the original message length, initialising the message digest buffer, processing the message in 16-word blocks and finally outputting the result. On a 32-bit machine, Message Digest 5 is much faster than other message digest algorithms, and it is simple to implement when compared with similar digest algorithms.

INPUT:   "Sameer agrees to pay Rs 20000/month for rent"        "Sameer agrees to pay Rs 2,00,000/month for rent"
PROCESS: MD5 hash algorithm                                    MD5 hash algorithm
OUTPUT:  ac49e74434a64c247aa129bef83f204                       b68e2f019ef602668f8ebf4eb6e3a69b

Fig 2.8: MD5 Algorithm
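The comparison in Fig 2.8 can be reproduced with Python's standard hashlib; the two printed digests are full 32-hex-character MD5 values that differ completely even though the inputs differ only slightly:

import hashlib

print(hashlib.md5(b"Sameer agrees to pay Rs 20000/month for rent").hexdigest())
print(hashlib.md5(b"Sameer agrees to pay Rs 2,00,000/month for rent").hexdigest())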


2.4.2 Secure Hash Algorithm (SHA)

Secure Hash Algorithm (SHA) was developed by the National Institute of Standards and Technology (NIST) and published as a federal standard in the year 1993. The revised version of SHA was issued in FIPS 180-1 in 1995 and is called SHA-1. SHA-1 has been specified in RFC 3174.

Note: RFC 3174 is a standard under the IETF that makes the SHA-1 hash algorithm conveniently available on the internet.

The hash value produced by SHA-1 is 160 bits. In the year 2002, NIST revised the standard and defined three new versions having hash value lengths of 256, 384 and 512 bits, called SHA-256, SHA-384, and SHA-512 respectively. SHA-1 involves various types of modular arithmetic and logical binary operations. However, this technique has been considered insecure since 2005, and major tech giants like Microsoft, Google, Apple and Mozilla stopped accepting SHA-1 SSL certificates by 2017.

Fig 2.9: Comparison of SHA Parameters

Steps involved in SHA:


• Appending the padding bits: Padding is done even if the message is already of the desired length. The padding consists of a single 1 bit followed by the necessary number of 0 bits, and its length ranges from 1 to 1024 bits.
• Appending the length: Next, a block of 128 bits is appended to the message. The block is treated as a 128-bit integer (most significant byte first) and contains the length of the original message (before the padding).
• Initialising the hash buffer: In the case of SHA-512, a 512-bit buffer is used for holding intermediate and final results of the hash function. The buffer is represented in the form of eight registers (a, b, c, d, e, f, g and h). These registers are initialised to fixed 64-bit integers (hexadecimal values).
• Processing the message in 1024-bit (sixteen 64-bit word) blocks: The core module of the algorithm consists of 80 rounds. Each round takes the 512-bit buffer abcdefgh and updates its contents.


2.5 SOME CRYPTOGRAPHIC TOOLS

2.5.1 Public-Key Cryptography


Unlike the elementary tools of substitution and permutation, there are cryptosystems and algorithms that are based
on mathematical functions. These cryptosystems are referred to as Public-key cryptosystems and the technique is
called Public-key cryptography.
Public-key cryptography uses two separate keys unlike symmetric encryption that uses just a single key. This is said
to facilitate confidentiality, key distribution and authentication.
There are a few misconceptions that may arise. Let us have a look.
• There is a misconception that public-key encryption is more secure against cryptanalysis than symmetric encryption. This is not true. The security of any encryption scheme depends on the length of the key and the computational work involved in breaking a cipher. There are no grounds for regarding either symmetric or public-key encryption as superior in resisting cryptanalysis.
• Another misconception is that public-key encryption has replaced symmetric encryption. In fact, because of the considerable computation involved in public-key cryptography, there seems little chance of symmetric encryption being abandoned.
There are some terminologies associated with Asymmetric Encryption such as:
• Asymmetric Keys: The two keys namely public key and private key that perform complementary
operations such as encryption, decryption, signature generation and signature verification are the
asymmetric keys.
• Public Key Certificate: Public Key Certificate is a digital document issued and digitally signed by the
private key of a Certification Authority binding the name of a subscriber to a public key. The certificate is
for the identification of the subscriber and granting sole control and access to the private key.
• Public Key (Asymmetric) Cryptographic Algorithm: This refers to a cryptographic algorithm that utilises both keys for establishing communication, i.e. the public key and the private key. It is computationally infeasible to derive the private key from the public key.
• Public Key Infrastructure (PKI): Public Key Infrastructure or PKI, refers to the set of policies, processes,
server platforms, software and workstations used to administer certificates and public-private key pairs.
This includes ability to issue, maintain and revoke public key certificates.
Some important characteristics of public-key cryptosystems are:
• The decryption key cannot feasibly be computed even with knowledge of the cryptographic algorithm and the encryption key.
• Either of the two keys can be used for encryption, while the other is used for decryption.
A public-key encryption scheme involves the following essential steps. Let us understand this using an example.

The various steps involved in Public-key cryptography are as follows:


• A pair of keys is generated by the user for the encryption and decryption process.
• The public key is kept in a public register or other accessible file, whereas the other key is kept private.
• If Bob wants to send a confidential message to Alice, he will encrypt the message using Alice's
public key.
• Once Alice has received the message, she will use her private key for decryption. Since only Alice knows
the private key, the message cannot be decrypted by anyone else. A code sketch of this flow follows.
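
A minimal sketch of this flow in Python, using the third-party cryptography package; RSA with OAEP padding is one possible choice of public-key scheme, not the only one.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Alice generates her key pair; the public half goes in a public register.
alice_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
alice_public = alice_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = alice_public.encrypt(b"Meet at noon", oaep)   # Bob encrypts
print(alice_private.decrypt(ciphertext, oaep))             # only Alice can decrypt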


Fig 2.10: Encryption using public key

Fig 2.11: Encryption using private key


Conventional Encryption

Needed to Work:
1. Encryption and decryption are carried out using the same algorithm and the same key.
2. The sender and receiver must share the algorithm and the key.

Needed for Security:
1. The secrecy of the key must be maintained.
2. It must be impossible or at least impractical to decipher a message if no other information is available.
3. Knowledge of the algorithm plus samples of ciphertext must be insufficient to determine the key.

Public-Key Encryption

Needed to Work:
1. Encryption and decryption are carried out using the same algorithm with a pair of keys (one for encryption and one for decryption).
2. The sender and receiver must each have one of the matched pair of keys (not the same one).

Needed for Security:
1. One of the two keys must be kept secret.
2. It must be impossible or at least impractical to decipher a message if no other information is available.
3. Knowledge of the algorithm plus one of the keys plus samples of ciphertext must be insufficient to determine the other key.

Applications of Public-Key Cryptosystems


We all now know that public-key cryptosystems use two keys, a public key and a private key. Depending on the application, the
sender uses either the sender's private key or the receiver's public key to perform the cryptographic function. We can
classify the applications of public-key cryptosystems into three categories:
• Encryption/decryption: The sender encrypts the message using the recipient's public key.
• Digital signature: Here, the sender "signs" a message using its private key. The cryptographic algorithm is
applied to the message, or to a small block of data derived from it, to complete the signing.
• Key exchange: For exchanging a session key, there must be proper coordination between the two
parties. This can be achieved in several ways, involving the private key(s) of one or both parties.

2.5.2 Diffie-Hellman Key Exchange

Whitfield Diffie and Martin Hellman were key contributors in the field of cryptography. They achieved a major
breakthrough in 1976 and changed the overall framework for public-key cryptography by coming up with a
cryptographic algorithm that met the requirements for public-key systems. The algorithm was named after its two
discoverers: the Diffie-Hellman Key Exchange algorithm.
Using the Diffie-Hellman Key Exchange algorithm, two users can agree on a key that is then used for the
subsequent encryption of messages. The algorithm itself is limited to this exchange of secret values.

The Algorithm
Let there be two public numbers: a prime number q and an integer a that is a primitive root of q. Also, let there be two users
A and B who wish to exchange a key.


Fig 2.12: The Diffie-Hellman Key Exchange Algorithm

Then, user A will select a random integer XA < q and compute YA = a^XA mod q. In the same way, user B will select
a random integer XB < q and compute YB = a^XB mod q.
Each side keeps its value of X private, while the value of Y is made public. User A computes the key as
K = (YB)^XA mod q, and user B computes K = (YA)^XB mod q; both computations yield the same shared key.

Key exchange in Diffie-Hellman algorithm

As assumed previously, let there be two users A and B who wish to connect over a network.
User A generates a one-time private key XA, calculates YA, and sends YA to user B. Similarly,
user B responds by generating a private value XB, calculating YB, and sending YB to user A.
Now it is easy for both users to calculate the key. In the first instance, user A can pick the
values for q and a and transmit them along with the first piece of information.

Note: The adversary is the attacker or foreign entity who wants to hijack the information contents.

Although this algorithm is widely used and secure, it is also prone to
certain types of attacks. One such attack is called the Man-in-the-Middle Attack.
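
Before looking at that attack, here is a toy run of the exchange in Python with small illustrative parameters (the numbers are a common textbook example; real deployments use primes of 2048 bits or more).

# Toy Diffie-Hellman run; all values are tiny and for illustration only.
q, a = 353, 3                 # public values: prime q and primitive root a

x_a, x_b = 97, 233            # private values chosen by users A and B
y_a = pow(a, x_a, q)          # A's public value
y_b = pow(a, x_b, q)          # B's public value

k_a = pow(y_b, x_a, q)        # key computed by A
k_b = pow(y_a, x_b, q)        # key computed by B
print(k_a, k_b)               # both sides derive the same shared key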


Man-in-the-Middle Attack

Let there be two people A and B who want to connect over the network. There is a third person, D (the
adversary), who wants to hijack the communication channel and steal the information. Now, let us see how D
attacks the network.
• To prepare for the attack, D generates two random private keys XD1 and XD2 and computes the
corresponding public keys YD1 and YD2.
• A transmits YA to B.
• The adversary D intercepts YA and in turn transmits a false value, YD1, to B. Meanwhile, D calculates
K2 = (YA)^XD2 mod q.
• B receives the false value YD1 and calculates K1 = (YD1)^XB mod q.
• Then, B transmits YB to A.
• Again, D intercepts YB and in turn transmits a false value, YD2, to A. D also calculates
K1 = (YB)^XD1 mod q.
• A receives YD2 and calculates K2 = (YD2)^XA mod q.

This process shows how false keys are generated by the adversary D. Although A and B think they share a secret
key, in fact B and D share the secret key K1 while A and D share the secret key K2, as the sketch below reproduces
with the toy numbers used earlier.
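
A hedged numeric sketch of the attack, reusing the toy parameters from the earlier example (D's private values here are arbitrary illustrative choices).

# Man-in-the-middle against the toy exchange above.
q, a = 353, 3
x_a, x_b = 97, 233                 # A's and B's private values
x_d1, x_d2 = 71, 59                # D's two private values

y_a, y_b = pow(a, x_a, q), pow(a, x_b, q)
y_d1, y_d2 = pow(a, x_d1, q), pow(a, x_d2, q)   # sent in place of y_a and y_b

k1 = pow(y_d1, x_b, q)             # key B computes, believing it is shared with A
assert k1 == pow(y_b, x_d1, q)     # ...but D can compute it too
k2 = pow(y_d2, x_a, q)             # key A computes, believing it is shared with B
assert k2 == pow(y_a, x_d2, q)     # ...and D can compute this one as well
print(k1, k2)                      # D can now relay traffic, reading everything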
This algorithm is vulnerable since it does not authenticate the participants. This limitation can be overcome using
techniques such as digital signatures and public-key certificates.

2.5.3 Secure Email Implementation


In today's world, individuals and organisations engage in email conversations to exchange ideas, data files and
other important documents. Hence, there is a need for a secure strategy to protect email accounts
and confidential information from being compromised by a foreign entity. Email that travels through the communication
system is not encrypted by default and hence can be intercepted and read by the adversary (the attacker). Moreover, the
identity of the sender cannot be verified, and it is very easy to falsify the header information in a standard email. Let
us see how we can secure our email platforms and ensure data integrity. Public-key cryptography can be used
to send and receive email messages securely.

How to secure emails

It is possible for an attacker to intercept traffic, read emails, copy user credentials and even duplicate files. Therefore,
to ensure that no one intercepts the email messages, the connection between the computer and the email provider
must be encrypted. To achieve this, the email client should implement encryption software to protect the content
from being accessed by foreign entities. This is also called end-to-end encryption, meaning that no one except the sender
and receiver will be able to see and retrieve the messages.
There are standards and tools such as PGP, GNU Privacy Guard (GnuPG) and S/MIME which help carry out email
encryption, and creating an encryption key takes only seconds.
Historically, passwords and authentication have been used to send messages between two parties; encryption is
simply another advancement in the world of technology and communication.


2.5.4 GNU Privacy Guard

GNU Privacy Guard, or gpg, is free encryption software that is compliant with the OpenPGP (RFC 4880) standard. It
is a cryptographic tool helpful in managing public and private keys, and it performs multiple tasks such as encryption,
decryption, signing and verification.
One can download GnuPG from the official website, which provides download links for all
platforms as well as the source code. For Windows systems, you need the Gpg4win application; the installer is
available on the Windows GnuPG installer (Gpg4win) download page. All you need to do is run the installer, and
gpg will be available at the command prompt.

Do you know? A phishing attack stole almost 20,000 emails from the Democratic National Committee
during the 2016 US elections. The hacker was able to get into the DNC's unencrypted inbox.

Let us understand some basic functions in gpg.

Listing stored keys
To list all public keys stored in your keyring, use gpg --list-keys.
To list all private keys stored in your keyring, use gpg --list-secret-keys.


Generating a key
To generate a new key pair, use gpg --gen-key.
Generating a revocation certificate
To generate a revocation certificate, use gpg --gen-revoke.
Importing a key
To import a public key or a private key, use the --import switch.
$ gpg --import [Link] or $ echo THE_KEY_IN_ASCII | gpg --import [Link]

Exporting a key
To export a public key, use the --export switch.
$ gpg --export KEY_ID
To export a private key, use the --export-secret-keys switch.
$ gpg --export-secret-keys KEY_ID
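
Encrypting and decrypting a file
The commands above cover key management only; as a hedged sketch for completeness, the corresponding encryption commands look as follows (FILE and KEY_ID are placeholders).
To encrypt a file for a recipient, use the --encrypt switch together with --recipient.
$ gpg --encrypt --recipient KEY_ID FILE
To decrypt the resulting file, use the --decrypt switch.
$ gpg --decrypt FILE.gpg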

2.5.5 S/MIME
Secure/Multipurpose Internet Mail Extension (S/MIME) makes use of asymmetric cryptography and protects your
emails from being accessed by a third party. Using this technique, you can digitally sign your emails, thereby proving
that you are the legitimate sender of the message. This is effective in dealing with phishing attacks and in preventing
outsiders from interfering in the email process.
S/MIME is a security enhancement to the MIME Internet e-mail format standard and is based upon technology
from RSA Data Security. This technology is well suited to commercial and organisational applications. Furthermore,
to understand MIME we need to know about RFC 5322 (Internet Message Format).

Note: Base64 encoding converts binary data into a text format that can pass safely through the
communication channel; it is used in the email encryption process.

RFC 5322 has long been the standard commonly used for Internet-based text mail messages.
RFC 5322 views messages as a combination of envelope and contents. The envelope contains
the information required for transmission and delivery, whereas the content is the object to be
delivered to the recipient.
The message to be transmitted is composed of header lines (the header) followed by
unrestricted text (the body).
The key components of a header line are:
• A keyword (followed by a colon)
• The keyword's arguments; long lines are broken into smaller lines

To understand S/MIME, we must understand the e-mail format that is used i.e. MIME.

Date: January 29, 2011 [Link] PM EDT


From: Sameer K <sameer.k@[Link]>
Subject: Hello friend
To: Raja@[Link]
Cc: Khushi@[Link]
Hello friend. (The actual message starts from here and forms the message
body. It is delimited from the message header by a blank line.)
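
For context, a minimal sketch of composing such an RFC 5322 message with Python's standard email library (the addresses are illustrative placeholders).

from email.message import EmailMessage

# Header lines are keyword: argument pairs; the body follows a blank line.
msg = EmailMessage()
msg["From"] = "Sameer K <sameer.k@example.com>"
msg["To"] = "raja@example.com"
msg["Cc"] = "khushi@example.com"
msg["Subject"] = "Hello friend"
msg.set_content("Hello friend.")

print(msg.as_string())   # headers, then a blank line, then the body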


How does S/MIME work?

As we have already discussed, S/MIME is an asymmetric cryptography technique that uses two keys (a private and
a public key) for its operation. Even if the public key is known, it is practically impossible to derive the private key.
Emails are encrypted using the recipient's public key, while decryption takes place using the corresponding
private key possessed by the recipient. As long as the private key is not compromised, only the intended recipient
will be able to access the shared information in the emails.
S/MIME also allows you to sign your emails as a step to prove your identity, thus establishing you as the legitimate
sender in business communications. Every time you sign an email, you use your private key to apply a Digital
Signature to your message. To open the message, the recipient uses your public key to verify the signature. This
serves as a process of identity authentication and helps avoid phishing attacks.

Multipurpose Internet Mail Extensions


Multipurpose Internet Mail Extension (MIME) is an extension to the RFC 5322 framework, intended to resolve problems
with the use of the Simple Mail Transfer Protocol (SMTP).
MIME features:
• It has five new content header fields that may be included in an RFC 5322 header. These fields provide
information about the body of the message.
• Various types of content formats have been defined in MIME to support multimedia electronic mail.
• Transfer encodings have also been defined to enable the conversion of any content format into a protected form.
Functions of S/MIME:
• Enveloped data: This type of data is composed of encrypted content plus the encryption keys for that
content, for one or more recipients.
• Signed data: A digital signature involves two steps, i.e. taking the message digest of the content and
encrypting it with the private key of the signer. The content plus signature is then encoded using base64
encoding (for ASCII compatibility). Note that only a recipient with S/MIME capability can view a
signed-data message.
• Clear-signed data: As in the signed-data format, a digital signature is formed; however, only the digital
signature is encoded using base64 encoding. As a result, a recipient without S/MIME capability can view
the message content, although they cannot verify the signature.
• Signed and enveloped data: Here, signed-only and encrypted-only entities may be nested one inside the
other, so that encrypted data may be signed, and signed or clear-signed data may be encrypted.
Practical applications of S/MIME:
• Electronic data exchange such as digital signatures on contracts
• Financial messaging such as storing and transferring bank statements
• Content delivery such as in electronic bill payment
• Health care such as patient records and health claims


2.6 APPLICATION OF CRYPTOGRAPHY

2.6.1 IPSec and its Applications

IPSec
IPSec deals with three functional areas: authentication, confidentiality and key management.
• Authentication: The mechanism to ensure that a packet of information was in fact sent by the claimed
source, and that there was no alteration of the contents of the message during transmission.
• Confidentiality: It is the act of encrypting messages to prevent eavesdropping by any third party.
• Key Management: It is the process of secured exchange of keys between any two parties involved in
communication.

Applications of IPSec
IPsec ensures secure communication across a LAN, across private and public WANs, and across the Internet.
Several examples depict its use:
• Secured Connectivity: It is possible for a company to build a secure virtual private network over the Internet
or over a public WAN. This helps in enabling a business to rely heavily on the Internet and reduce its need for
private networks thereby saving costs and network management overhead.
• Secured remote access: An end user whose system is equipped with IP security protocols can make a local
call to an Internet Service Provider (ISP) and gain secure access to a company network. This helps in reducing
the cost of toll charges for traveling employees and telecommuters.
• Extranet and intranet connectivity: IPsec is very efficient in ensuring secured communication with other
organizations, ensuring authentication and confidentiality and providing a key exchange mechanism.
• Enhancing electronic commerce security: Even though some Web and electronic commerce applications
have built-in security protocols, the use of IPsec adds a further layer of security.
IPsec encrypts and authenticates all traffic designated by the network administrator thereby adding an additional
layer of security to whatever is provided at the application layer.

2.6.2 Attacks on Cryptosystems

We all know how crucial it has become to protect the information that is shared among individuals and organisations
over online networks. The information that is communicated is prone to various types of attacks and malicious
activities. A communication channel, or in other words a cryptosystem, is subject to attacks which lead to leakage of
information and data theft.

Generally, these attacks are of four types, namely: Interruption, Interception, Modification and Fabrication.


Interruption
This refers to the situation where an asset of the system is destroyed or becomes unavailable or unusable. Some
examples of this type of attack are the destruction of a piece of hardware, the disruption of a communication line,
or the disabling of a file management system.

Interception
This is when an unauthorized party attempts to access the information passing between two parties; it is an attack
on the confidentiality of information. The unauthorized party can be a person, a program or any remote
computer system around the world. Some examples of this attack are the tapping of wires to capture data and the
illicit copying of data files.

Fig 2.13: Interception Attack

Modification
In the previous section, we saw that information can be accessed by a third party, i.e. an unauthorised source. There
can be situations wherein the adversary (unauthorized source) tampers with a piece of information being shared
over the communication network; this constitutes an attack on the integrity of information. Modification of
contents can take place in several forms, such as changing the values in a given data file, altering a program, and
modifying the information content of messages being communicated over the network.

Fig 2.14: Modification Attack


Fabrication
There can be a situation where an adversary or an unauthorized source inserts counterfeit objects into the
communication network. This is an attack on authenticity, such as the insertion of a false message into a network or
the addition of records to a file.

Fig 2.15: Fabrication Attack

After learning about the general attacks that take place over a communication network, let us now understand the various
types of cryptographic attacks that exist.
Cryptographic attacks can be categorised into two types:
1. Passive attacks
2. Active attacks
Let us understand these attacks and their sub-categories one by one.


Passive Attacks
Passive attacks are the ones that do not affect system resources but make use of system information. These
attacks take the form of eavesdropping on, or monitoring of, transmissions. The primary motive of the adversary is
to obtain the information that is being transmitted over the communication network.


Types of Passive Attacks:


• Release of message contents
Communications such as a telephone conversation, an e-mail message or a transferred file may
contain sensitive or confidential information. The goal is to ensure that the adversary cannot access the
content of the information shared over the network.

• Traffic analysis
The attacker can observe the pattern of messages even if the messages are protected through encryption. The
location and identity of the communicating hosts, and factors such as the frequency and length of messages,
can be determined by the adversary. This type of information can be used to guess the nature of the
communication between the sender and the recipient.
Passive attacks are difficult to detect since they do not alter the data. However, there are techniques
to prevent these types of attacks from succeeding and affecting the communication process.

Fig 2.16: Passive Attacks - Release of contents


Fig 2.17: Passive Attacks - Traffic Analysis

Active Attacks
Active attacks result in the modification of information content or the creation of a false stream of messages. These
attacks can be classified into four types:

a. Masquerade b. Replay

c. Modification of Messages d. Denial of Services

a. Masquerade
Masquerade is a scenario where one entity pretends to be a different entity. For example, the adversary might
capture an authentication sequence and replay it, impersonating an entity that has legitimate access to
information.


b. Replay
Replay refers to the passive capture of information and its subsequent retransmission to produce an unauthorized
effect in the communication network.
c. Modification of messages
When the contents of a message are altered, or a legitimate message is delayed or reordered, the nature of the
information being transmitted is modified. This is an unauthorized effect in the communication process. For example,
the message "John Alan and Steve travelled to Paris" might, after the adversary has modified it, become "John Adam
and Steve travelled to Paris".
d. Denial of Service
As the name suggests, the adversary prevents the recipient or sender from accessing the communication facilities,
resulting in a form of attack known as denial of service. This can disrupt an entire network, either by disabling the
network or by overloading it with false messages so as to degrade its performance.
One countermeasure can be the physical protection of all communication facilities and transmission paths, which is
practically very difficult. Instead, the goal should be to detect the adversary before it tampers with the communication
network. This will help in recovering from any delay or disruption caused by the unauthorized source.

Fig 2.18: Masquerade


Fig 2.19: Replay

Fig 2.20: Modification of messages


Fig 2.21: Denial of service

2.6.3 Cryptography Issues


Cryptography, as we all know, has been widely used for data protection and for communicating effectively over online
networks. However, there are some issues in cryptography that have restricted its use in several countries, such as
Belarus, Kazakhstan, Mongolia, Pakistan, Singapore, Tunisia and Vietnam. These issues relate to the security
or legal aspects of cryptography. Let us understand what effects legal issues have on cryptographic systems.

Legal Issues
Cryptography has been widely used in military intelligence gathering. Criminals and terrorists have also used
cryptographic techniques to get into the security systems of defence agencies and steal confidential
information. Therefore, some governments have restricted the use of cryptography to a certain extent. There are
also patent issues, which have come up as a result of the complex mathematical nature of the algorithms involved:
the inventors of these algorithms have protected their property by patenting them, so that users must obtain a
license. We can divide the legal issues into three categories:
• Export Control Issues: The US government has treated cryptographic software and hardware as controlled
items and hence placed them under export control. For a commercial entity to export cryptographic
libraries and software, it is important to obtain an export license first. In recent years the export laws have
eased somewhat, and it has become feasible to export these cryptographic software packages; even so, a
more efficient export mechanism for cryptographic systems is still needed.


• Import Control Issues: Several countries have restricted the use of cryptography within their jurisdiction,
and the authorities there have to establish proper adherence to the law. There is a need to tie
cryptographic capabilities to jurisdiction policy files; such files allow "strong" but "limited" cryptography
by restricting key sizes and other parameters.
• Patent Related Issues: To avoid patent infringement, it is recommended to use algorithms that are not
patented, whose patents have expired, or that are free to use under their license policy. Alternatively, one
can use a patented cryptographic algorithm after obtaining a license.
We have discussed the broad guidelines to consider before deploying cryptographic solutions. Usually it is the vendor
who has to worry about these issues, but one cannot take chances. When using open-source software that is freely
available over the Internet, you have to establish legal compliance before its use. The laws regulating cryptography
are complex, jurisdiction dependent and subject to change. Hence, it is crucial to ensure legal compliance and abide
by the rules governing the use of cryptographic algorithms.

2.6.4 Strong Authentication


Customer demands have grown, calling in turn for enhanced security, compliance and room for growth. Leading
enterprises have been investing heavily in developing mechanisms for identity and access management (I&AM).
Password protection has become the most common mode of authentication and cannot be relied upon forever.
Hence arises the need for strong authentication, which is said to meet the core requirements of I&AM: a high
degree of certainty when verifying user identities, thereby enhancing online trust.
Enterprises have been re-evaluating their strategies for identity and access management due to the security
concerns surrounding it. Let us see some of the common issues with authentication.
• Identity theft and misuse of online identities
• Persistent security and privacy concerns of customers, thus deterring online business
• Companies being held responsible for safeguarding information and business processes, thus increasing
compliance pressure
• Increasing pressure to manage security-related infrastructure costs
• Adoption of new technologies for supply-chain relationships
The above challenges call for a more secure, efficient and flexible approach to managing user identities and access
privileges. Centralising user profiles can protect against ID theft, leverage data stores such as directories and
databases, and centralise control over the entire network. Let us discuss cryptography-based authentication methods
and their features.

Authentication Methods
For protecting the identity of a user and other information, cryptography involves various techniques. It offers
information security in the form of encryption, message digests and digital signatures. Cryptography caters to
multiple applications such as computer passwords, ATM cards and e-commerce, thereby promoting access control
and information confidentiality. Let us study the various types of authentication methods in cryptography.

• Password Authentication Protocol
• Authentication Token
• Symmetric-Key Authentication
• Biometric Authentication


Password Authentication Protocol (PAP)


Password Authentication Protocol, or PAP, is used to authenticate users before allowing
them access to information. The most common authentication service we come across in our
daily lives is the password. Password verification is when the user proves his or her identity by logging in with
values stored by the system. Passwords can be classified into two types: textual passwords and graphical passwords.
Graphical passwords are an important field of authentication in system access control and allow the selection of a
password as a series of images in a defined order, presented in a graphical user interface, or in other
words a GUI. Graphical passwords can also be called graphical user authentication (GUA). Graphical passwords
are more easily remembered than other passwords and offer an increased level of security over textual passwords.
This is because graphical passwords are created from selectable images in a series, defined as a
specific order of images.

Authentication Token
An authentication token is a portable device which helps in authenticating users and allowing authorized access into a
network system. An authentication technique that uses a portable device to carry embedded software is known as
a software token. Some examples of software tokens are RSA SecurID tokens, cryptocards, challenge-response tokens
and time-based tokens.

Symmetric-key Authentication
Symmetric-key authentication is the sharing of a single secret key with an authentication server, wherein the key is
embedded in a token. The user is authenticated by sending to the authentication server credentials encrypted with
the secret key. The user becomes an authenticated user only if the server can match the
received encrypted message using the shared secret key, as the sketch below illustrates.
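
A minimal sketch of one plausible shape for such an exchange, implemented as an HMAC-based challenge-response in Python; the protocol outline here is an assumption for illustration, not any specific product's design.

import hmac, hashlib, os

shared_key = os.urandom(32)   # secret key provisioned in the token and on the server

challenge = os.urandom(16)    # server issues a random challenge

# Token side: compute a response from the challenge using the shared secret key.
response = hmac.new(shared_key, challenge, hashlib.sha256).digest()

# Server side: recompute the expected value and compare in constant time.
expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
print(hmac.compare_digest(response, expected))   # True -> user authenticated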

Biometric Authentication
Biometric authentication is a technique that digitizes measurements of the physiological or behavioural characteristics
of an individual. There are various types of biometric authentication systems, such as face detection authentication
systems, fingerprint authentication systems, iris authentication systems and voice authentication systems.
• Fingerprint recognition: This type of recognition uses an electronic device to capture a digital image of
the fingerprint patterns. The image that is captured is called a live scan and digitally processed to create a
biometric template. The biometric features can be stored and used for matching later on.
• Voice biometric authentication: This type of biometric authentication uses voice patterns to recognise
the identity of a person. It is divided into five categories such as speaker dependent system, speaker
independent system, discrete speech recognition, continuous speech recognition, and natural language.
• Face detection: Face detection technology makes use of learning algorithms to locate human faces in
digital images. This type of technology focuses on the facial features and ignores everything else in the
digital image. Face recognition takes place after the face detection process and identifies the face by comparing
it with stored face images. Many neural network algorithms have been proposed for this
type of authentication.
• Iris Authentication: This is another authentication technique which is widely used at airports worldwide. The
recognition of iris is one of the finest ways for authentication in high risk situations. This technique is also
used in many types of industries.


2.6.5 Sign-on Solution for Authentication

Previously, we have studied that identity and access management is a key concern among enterprises, and there have
been continuous efforts to enhance security levels and user convenience. One such measure is single sign-on, or
SSO. Single sign-on is a centralized solution that focuses on strengthening password-based identity
management, and it has dramatically reduced the administrative burden associated with passwords. Centralized identity
management solutions are easy to implement and can automate and enforce secure password practices in a consistent
manner: creating strong passwords, changing passwords regularly and ensuring that passwords contain a mixture of
numerals and special characters. Basically, SSO allows users to sign on only once; their identity is then verified
automatically to each application and service that needs to be accessed.
However, centralized passwords that enable SSO can give a third party access to the entire information resource if
the single credential is compromised. This approach has become inefficient and insecure as applications and services
have grown exponentially worldwide. Also, most users rely on the same set of credentials for accessing multiple
applications; they find it cumbersome to change passwords again and again and hence become prone to attacks.
Although this platform has eliminated the need for users to repeatedly prove their identities, it is exposed to serious
security threats when users assign the same password credentials to all their accounts across various systems.
Alternatively, One Time Password (OTP) authentication can provide better security and render such attacks ineffective.
In an ideal situation, the user should be seamlessly authenticated to multiple user accounts once the identity of the
user has been verified. However, in many current situations, the user has to repeat the sign-on procedure for each
type of service using the same set of credentials, which are of course validated each time the user signs in.

2.6.6 Kerberos

Now we come to another type of authentication service, one designed for use in a distributed environment:
Kerberos. Kerberos was developed as part of Project Athena at MIT.
The main motive behind Kerberos was to address the problem of accessing servers distributed throughout
a network. With this authentication service, users at various workstations can access a distributed network of
servers once their requests for service are authenticated. Kerberos offers a trusted third-party authentication service,
enabling clients and servers to establish authenticated communication.
Kerberos has been a vast improvement over previous authorization schemes. Strong cryptography and
third-party authorization have made it extremely hard for cybercriminals to get into networks and access
information. However, this type of authentication service is not flawless, and there is a need to understand Kerberos
thoroughly before implementing it.
Kerberos has been effective in making the Internet more secure and has enabled users to achieve more online
without compromising safety. Yes, Kerberos can be hacked if the adversary takes advantage of limitations such
as vulnerabilities, weak passwords or malware, or a combination of all three. Due to this fact, Multi-Factor
Authentication (MFA) has become popular and more in demand. MFA asks for your password along with something
else, such as a randomized token, mobile phone, email, thumbprint, retina scan or facial recognition.


Kerberos is subject to some threats such as:


• A user may gain access to a particular workstation and pretend to be another user operating from that
workstation.
• A user may alter the network address of a workstation, creating the false impression that requests are
coming from the impersonated workstation.
• A user may eavesdrop on exchanges of information and use a replay attack to gain access or to disrupt
operations across the network.
Kerberos is a centralized platform that authenticates users to servers and servers to users within a communication
setup. It relies exclusively on symmetric encryption in the process.
Two versions of Kerberos are in common use: Version 4 and Version 5.

Kerberos Version 4
Kerberos version 4 makes use of DES (Data Encryption Standard) to provide authentication services. DES is
considered insecure for protecting the long-term keys used in communicating with the server. The server generates a
"ticket" which helps in authenticating the user; for this reason, the server involved in the authentication process is also
called the ticket-granting server.
• Kerberos v4 was released in the late 1980s
• Its ticket support is satisfactory
• Makes use of DES for providing the authentication service
• Uses the "receiver makes right" encoding system
• Uses the same key repeatedly for availing a service from a server
• Risky, since an attacker can replay messages from an old session to the client or server
• Supports only IP addresses; addresses for other network protocols are not supported
This version of Kerberos has serious protocol flaws that permit attacks requiring far less than an exhaustive key
search. Owing to this, Kerberos v4 authentication has become a security risk and raised serious questions about the
Kerberos protocol. This is why Kerberos version 5 was introduced.

Kerberos Version 5
Kerberos version 5 was implemented in both Windows 2000 and Windows XP and is used to provide a single
authentication service within a distributed network. It allows a single account database to authenticate users
on various computing platforms so that they can access the services within an environment. The ticket in Kerberos
is used to authenticate the user's identity, but additional authorization might be required for access control.
Identity-based authorization provides more interoperability for systems that support the Kerberos version 5 protocol
but do not support user authorization.
• Kerberos v5 was published in 1993
• Well-extended ticket support (forwarding, renewing and postdating tickets)
• Uses the Abstract Syntax Notation One (ASN.1) encoding system with Basic Encoding Rules (BER)
• Supports multiple network addresses, unlike Kerberos version 4
• Reasonable support for transitive cross-realm authentication


Fig 2.22: Overview of Kerberos

KEY POINTS
Kerberos realm: A Kerberos realm is a set of managed nodes that share the same Kerberos database.
The Kerberos database resides on the Kerberos master computer system.
Kerberos principal: A Kerberos principal is a service or a user that is known to the Kerberos master system.
A Kerberos principal is identified by its principal name, which consists of a service or user name, an
instance name and a realm name.
Authentication Server (AS): The server in Kerberos scheme which grants authentication to the user/client
for accessing the information available on the network is known as Authentication Server or simply AS.
Ticket Granting Service (TGS): The act of granting a ticket to the user for accessing the information
available on the server is referred to as the Ticket Granting Service or TGS.
Key Distribution Center (KDC): A Kerberos server or KDC, shares a secret key with the client and application
server to establish communication between both the parties. These secret keys and passwords are used
to prove the principal’s identity, and to establish an encrypted session between the KDC and the principal.
KDC consists of Authentication Server (AS) and Ticket Granting Service (TGS). The exchange through
Authentication Service takes place only once between a principal and the KDC. Thereafter, KDC delivers a
Ticket Granting Ticket (TGT) through the TGS that the client/user will use for obtaining additional tickets
for information access.


Fig 2.23: Request for Service in Another Realm


2.6.7 IPSec Policies


IPSec is an end-to-end security model that is used to authenticate and secure the traffic between clients and
servers. The IP address of the system is validated through an authentication process. This allows IPSec to be
deployed to any computer, domain, site, or any item within the Active Directory (AD).
IPSec is used for securing communications in local area networks (LANs), wide area networks (WANs) and remote
communications. This is achieved using IPSec policies that feature rules and filters. The choice of rules and filters
depends upon the information being secured and the amount of protection required. One has to be
familiar with the following options when using IPSec:

• Transport Mode
• Tunnel Mode
• IPSec Policy Rules

Let us discuss these options that are key for securing the information on the network.

Transport Mode
Transport mode is used for ensuring end-to-end security between a client/user and a server in a LAN. This mode
is the default mode for IPSec. Each packet of information undergoes encryption for protecting the integrity and
confidentiality of the data that is present. Also, IPSec can be used to establish an authentic source of communication
and ensure that the communication or piece of information has not been intercepted or tampered while being
transmitted. Depending upon client’s security needs, IPSec can be configured for one of the following:

a. Authentication Header (AH) transport mode
b. Encapsulating Security Payload (ESP) transport mode

a. Authentication Header (AH) Transport Mode: Authentication Header (AH) provides authentication,
integrity and anti-replay protection for each packet of information without encrypting the data. This
means that the data is readable but protected from any kind of modification. AH makes use of keyed
hash algorithms to sign the packet and ensure its integrity. This gives assurance that the packet did
originate from the actual source and has not undergone any modification in transit. This
is achieved by placing the AH header within each packet, between the IP header and the IP payload.
b. Encapsulating Security Payload (ESP) Transport Mode: In addition to everything that AH offers, Encapsulating
Security Payload (ESP) provides confidentiality for the packet during transit. In transport mode, the
entire packet is not encrypted or signed; rather, only the data in the IP payload is encrypted and signed.
The purpose of the authentication process is to ensure that the packet originated from the actual source; ESP
additionally encrypts the data so that it cannot be viewed or modified during transmission over the communication
network. This is accomplished by placing an ESP header before the IP payload and an ESP trailer after the IP payload,
encapsulating only the IP payload.
Tunnel Mode
IPSec tunnel mode encrypts both the IP header and the payload during transmission, thereby
protecting the entire packet of information. The first step is to encapsulate the entire IP packet with
an AH or ESP header, and then with an additional IP header. The additional IP header contains the source and

destination of the tunnel endpoints. The next step is to decapsulate the packet once it reaches the tunnel endpoint
and send it on to its final destination by reading the inner IP address. Through this double encapsulation, tunnel
mode proves to be a suitable method of protecting traffic between communicating networks, and it is used when
traffic travels through the Internet, an untrusted medium of communication. IPSec tunnel mode can be deployed
in the following configurations:
• Gateway to gateway
• Server to gateway
• Server to server
Tunnel mode can be used with AH or with ESP; the only difference from the corresponding transport mode is
that the packets are encapsulated twice.

2.6.8 Secure Socket Layer (SSL)


Almost all businesses, and even individuals, have their own websites in today's world. Businesses have been
expanding, as has the use of the Internet and web browsers. There is a sense of enthusiasm within businesses to
set up facilities on the web and promote e-commerce. But sadly, the Internet is prone to various types of cyber threats
and online attacks. As we saw in the previous section, one way of securing the online network is by using IPSec.
IPSec is advantageous since it is transparent to end users and applications. Another general-purpose solution is
the Secure Socket Layer (SSL).
Secure Socket Layer, or SSL, is used to provide security services between TCP and the applications that use TCP. The
Internet standard version of SSL is named Transport Layer Security (TLS). The purpose of SSL is to provide
confidentiality by using symmetric encryption, and message integrity by using a message authentication code.

SSL Architecture
SSL uses TCP to provide a reliable end-to-end secured solution. SSL consists of two layers of protocols. The SSL
Record Protocol provides basic security services to the higher layers of the protocol stack; in particular, HTTP
(Hypertext Transfer Protocol), which provides transfer services for web client/server interactions, can operate on
top of SSL. The three higher-layer protocols are:
1. Handshake Protocol
2. Change Cipher Spec Protocol
3. Alert Protocol

Fig 2.24: SSL Protocol Stack


SSL involves interaction between the client and the server. The process begins with the client contacting the server
and sending the first message. This message causes the client and server to exchange a few messages to negotiate
the encryption algorithm and choose an encryption key for that algorithm. Then the client data is shared with the
communicating server, after which the client and the server can exchange as much information as they want.
The communicating server must have an SSL certificate and a private key. The SSL certificate contains the public key
and identifies the encryption algorithm, such as RSA. The public key is sent to the client that wants to connect; the
client uses the public key for encryption when sending data to the communicating server.
SSL uses the public key for data encryption and data integrity. But how do we check whether the public key belongs
to the person or entity that claims it? The solution is to use a certificate. The certificate acts as a link between the
public key and the entity, and it has been verified and signed by a trusted third party.
SSL is used on the Internet for sending emails, for example in Gmail, and while doing online shopping, banking and
other e-commerce activities.
Let us see the steps involved in a web browser to web server connection using SSL (a minimal client-side sketch
follows the list).
• The browser connects to the server using SSL (https)
• The server responds with the certificate that contains the public key of the web server involved in the communication
process
• The browser verifies the certificate by checking the signature of the certificate authority
• The browser uses the public key to agree a session key with the server
• The web browser and the server encrypt the data using the session key over the communication channel.
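
A minimal client-side sketch of these steps in Python, using the standard ssl module; example.com is a placeholder host.

import socket, ssl

context = ssl.create_default_context()   # loads the trusted CA certificates

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        # At this point the handshake is done: the certificate has been
        # verified against the CA signatures and a session key agreed.
        print(tls.version())                 # e.g. TLSv1.3
        print(tls.getpeercert()["subject"])  # identity bound by the certificate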

Digital certificates, also called Digital IDs, are the electronic counterparts of driver licenses, passports or membership
cards. A digital certificate can be presented electronically to prove one's identity or the right to access information
or services online. Digital certificates are used not only to identify people, but also to identify websites (crucial to
e-business) and software being sent over the web.
Digital certificates bring trust and security when people are communicating or doing business on the Internet. A PKI
is often composed of many CAs linked by trust paths. The CAs may be linked in several ways: they may be arranged
hierarchically under a 'root CA' that issues certificates to subordinate CAs, or they can be arranged independently
in a network. This makes up the PKI architecture.

2.6.9 Transport Layer Security (TLS)

The Internet version of SSL is known as Transport Layer Security (TLS). The TLS Record Format is similar to the SSL
Record Format. Like SSL, TLS is a cryptographic protocol that is responsible for providing end-to-end communication
security over online networks. It is an IETF standard and prevents eavesdropping, tampering and message forgery.
Some of the applications that use TLS are web browsers, instant messaging, e-mail and voice over IP (VoIP).
There are several differences between SSL and TLS:
• TLS is more efficient and secure than its predecessor SSL, due to stronger message authentication, key material
generation and other encryption algorithms.
• Unlike SSL, TLS supports additional options such as pre-shared keys, Secure Remote Password (SRP) and
Kerberos.
• TLS and SSL are not interoperable, although TLS does support backward compatibility for older devices
that still use SSL.
TLS involves various steps for secure communication between the user and the online network: the
exchange of hello messages between the client and the server, the exchange of keys, the cipher message and the end
message. This is how TLS has proved flexible enough to be used in various types of applications. There are three main
components of TLS: encryption, authentication and integrity.


A TLS connection begins with a sequence known as a TLS handshake. The handshake starts the same way as a TCP
connection and then establishes a cipher suite for communication. A cipher suite is a set of algorithms specifying
details such as the type of encryption key to be used for a session. TLS uses public-key cryptography to set the
encryption keys over the unencrypted channel. The handshake process also involves the server proving its identity
to the client for the purpose of authentication.

Note: IETF stands for Internet Engineering Task Force, a global body concerned with evolving Internet
architecture and ensuring smooth operations over the Internet.

Once the data has been encrypted and authenticated, the next step is to sign it using a Message Authentication
Code (MAC). This can be understood through an example: suppose we buy a bottle of juice sealed with tamper-proof
foil. If the foil is intact, we have an assurance that the bottle is sealed and unused. This is what a MAC does in a
communication channel.
TLS 1.3 is the latest version of the protocol, developed by the Transport Layer Security Working Group of the IETF to
combat constantly increasing vulnerabilities. The new version is said to offer more privacy, reduced latency, better
performance and increased security in encrypted connections.

Digital Signatures and Certificates in SSL and TLS


• SSL Certificates are data files that digitally bind a cryptographic key to an organisation. When these certificates
are installed on a web server, the padlock and the https protocol are activated and secure connections are established
between the web server and the browser.
• SSL Certificates bind together a domain name, server name or hostname, along with the organisational identity
and the location of access.

• The padlock is activated, indicating that the connection to the server is now secure.
• The standard http is changed to https, indicating to the browser that the connection between the browser and
server must be secured using SSL.

Fig 2.25: SSL Certificate

A digital signature is formed when a representation of a message is encrypted. The encryption is done using the
private key of the signatory and operates on the message digest rather than the main body of the message.
The steps involved in the digital signature process are as follows:
• The sender computes a message digest and encrypts it using the sender's private key; this forms the digital signature.
• Next, the sender transmits the digital signature along with the message.

• Then, the receiver decrypts the digital signature using the sender's public key, thereby regenerating the
sender's message digest.
• Thereafter, the receiver computes a message digest from the message that has been received and
confirms whether the two digests are the same.
When the receiver has successfully verified the digital signature, two things are known to the receiver:
• The message has not been modified or tampered with by a foreign entity during transmission.
• The message has been sent by the actual source that claims to have sent it.
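
A minimal sketch of this sign/verify flow in Python, using the third-party cryptography package; RSA-PSS with SHA-256 is one reasonable choice of scheme, not the only one.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

signer_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"Signed agreement text"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Sender: sign (the library hashes the message and encrypts the digest).
signature = signer_private.sign(message, pss, hashes.SHA256())

# Receiver: verify with the sender's public key; this call raises
# InvalidSignature if the message or signature was tampered with.
signer_private.public_key().verify(signature, message, pss, hashes.SHA256())
print("signature verified")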

Fig 2.26: The Digital Signature Process

2.6.10 Pretty Good Privacy

A concept introduced by Phil Zimmermann, Pretty Good Privacy, or PGP, is another confidentiality and authentication
service used for electronic mail and file storage applications. Its use has been growing ever since its inception
for the following reasons:
• Pretty Good Privacy is available for free worldwide on a variety of platforms such as Windows, UNIX and Macintosh.
• It is based on algorithms that have been extensively reviewed and are considered extremely secure. The package
includes RSA, DSS and Diffie-Hellman for public-key encryption; CAST-128, IDEA and 3DES for symmetric encryption;
and SHA-1 for hash coding.
• It can be used in a variety of applications, from corporations wanting to select and enforce a standardized scheme
for encrypting files and messages, to individuals seeking secure communication over the network.
• It is neither developed nor controlled by any government or standards organisation, which makes it attractive to
those who are wary of such bodies.
• PGP is on the Internet standards track (RFC 3156, MIME Security with OpenPGP).

Operation of PGP
The actual operation involves four services such as authentication, confidentiality, compression and e-mail
compatibility. Let us understand each of the security services in detail.

Authentication
The sequence of steps involved in the authentication process are as follows:
• Initially, the sender creates a message to be sent
• SHA-1 algorithm is used to generate 160-bit hash code of the message


• Next, the hash code is encrypted with RSA using the sender's private key
• The result of the encryption is added to the beginning of the message
• The receiver decrypts and recovers the hash code using RSA with the sender's public key
• Thereafter, the receiver generates a new hash code for the message and compares it with the decrypted
hash code. If the two match, the message is accepted as authentic.
The combination of SHA-1 and RSA has proved to be an effective digital signature scheme. The strength of RSA
assures the recipient that the person having the private key is authorized to generate the signature. On the other
hand, the strength of SHA-1 assures the recipient that no third party can generate a new message matching the hash
code and the digital signature.

Confidentiality
PGP provides confidentiality by encrypting messages that are transmitted or stored locally as files. In both the
situations, symmetric encryption algorithm CAST-128 can be used. Also, techniques such as IDEA or 3DES can be
used for maintaining confidentiality.
Let us see the sequence of activities that take place within a communication process (a code sketch follows the list).
• The sender creates a message along with a random 128-bit number to be used as a session key.
• The message undergoes encryption using CAST-128 (or IDEA or 3DES) with the session key.
• The RSA algorithm is used to encrypt the session key with the recipient's public key, and the result is added
to the beginning of the message.
• The receiver uses RSA with its private key to decrypt and recover the session key.
• The session key is used to decrypt the message.
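
A minimal sketch of this hybrid scheme in Python, using the third-party cryptography package; AES-GCM stands in for CAST-128 here, since modern libraries rarely ship CAST-128.

import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: a random one-time session key encrypts the message...
session_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"confidential message", None)
# ...and the session key itself is encrypted with the recipient's public key.
wrapped_key = recipient_private.public_key().encrypt(session_key, oaep)

# Receiver: recover the session key with the private key, then the message.
recovered_key = recipient_private.decrypt(wrapped_key, oaep)
print(AESGCM(recovered_key).decrypt(nonce, ciphertext, None))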
Certain observations have been made regarding how PGP establishes confidentiality:
• To reduce encryption time, a combination of symmetric and public-key encryption is used in preference to
using RSA to encrypt the message directly. This is because CAST-128 and other symmetric algorithms
are much faster than RSA or ElGamal.
• The public-key algorithm solves the session-key distribution problem, since only the recipient is able to recover
the session key bound to the message.
• The use of one-time symmetric keys strengthens an already strong symmetric encryption approach.
PGP can provide both confidentiality and authentication for the same message as well. In this case, a signature is
generated for the plaintext message, and then both are encrypted using the CAST-128 scheme while the session key
is encrypted using RSA. This sequence is preferred over encrypting the message first and then generating a signature
for the encrypted message.
It is more convenient to store a signature along with the plaintext version of the message, and verification of a
received message then begins with checking the signature.

Compression
We all face common problems such as overloaded mailboxes and insufficient space for file storage. PGP compresses
the message after applying the signature and before encryption. This facilitates both e-mail transmission and file
storage, and the technique is also crucial in saving space on email platforms and other online networks.

Let us understand the significance of generating the signature before compression, for the following reasons:
• Signing an uncompressed message helps with future verification. If one signed a compressed document, the
individual would be forced either to store the compressed version of the message for later verification or to
recompress the message at the time of verification.
• Even if one wished to recompress the message dynamically for verification, PGP's compression algorithm would
present a difficulty. Since the algorithm is not deterministic, different implementations make different trade-offs
between running speed and compression ratio and therefore produce different compressed forms. These compression
algorithms remain interoperable, however, as any version of the algorithm can decompress the output of any other
version.

(a) Generic transmission diagram (from A) (b) Generic reception diagram (to B)

Fig 2.27: Transmission and Reception of PGP Messages

The notations used in the above figure are as follows:

KS = session key used in symmetric encryption scheme
PRa = private key of user A, used in public-key encryption scheme
PUa = public key of user A, used in public-key encryption scheme
EP = public-key encryption
DP = public-key decryption
EC = symmetric encryption
DC = symmetric decryption
H = hash function
|| = concatenation
Z = compression using ZIP algorithm
R64 = conversion to radix 64 ASCII format
After the message has been compressed, it is encrypted. Compressing before encryption strengthens cryptographic
security, because the compressed message has less redundancy than the original plaintext, making cryptanalysis
more difficult.

E-Mail Compatibility
When PGP is used, at least part of the block to be transmitted is encrypted. If only the signature service is used,
then the message digest is encrypted with the sender's private key. If the confidentiality service is used, the
message plus signature is encrypted using a one-time symmetric key. In either case, part or all of the resulting
block consists of a stream of arbitrary binary data, whereas many electronic mail systems only permit blocks of
ASCII text.
To accommodate this limitation, PGP uses an algorithm known as radix-64, which maps each 6 bits of binary data into
an 8-bit ASCII character. Radix-64 expands the message by 33%, but the earlier ZIP compression typically more than
compensates for this expansion.
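
Radix-64 is essentially the familiar Base64 encoding, so its effect is easy to demonstrate with Python's standard library; the sample bytes below are arbitrary.

import base64

# Every 3 binary bytes (24 bits) become four 6-bit groups, each rendered
# as a printable ASCII character -- safe for ASCII-only mail systems.
binary = bytes([0x14, 0xFB, 0x9C, 0x03, 0xD9])
ascii_armor = base64.b64encode(binary)
print(ascii_armor)                      # b'FPucA9k='
print(len(ascii_armor) / len(binary))   # 1.6 here; tends toward ~1.33 for long inputs
assert base64.b64decode(ascii_armor) == binary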
This chapter has provided insight into the various types of security offerings that facilitate communication over
online networks. Every technique has its own variations and characteristics that define its uniqueness. As businesses
keep expanding and the e-commerce industry booms, we will certainly witness more cryptographic algorithms and
authentication mechanisms to protect us from various types of cyber threats.


SUMMARY

• Ciphertext is the secret code produced when the original data, or plaintext, is converted using cryptographic
algorithms.
• The conversion of plaintext into ciphertext is called encryption, and retrieving the original data is called
decryption.
• For communicating over the online network, the sender shares a key with the receiver which can be a public
key or a private key.
• Encryption is of two types i.e. symmetric encryption and asymmetric encryption.
• Symmetric encryption involves the exchange of the same key, whereas asymmetric encryption is a system which
involves the exchange of dissimilar keys (a public key and a private key).
• The major functions of asymmetric encryption include data integrity and message authentication.
• In DES, the data is encrypted in 64-bit blocks using a 56-bit key.
• The encryption technique 3DES involves the use of three keys, with an effective key length of 168 bits.
• AES technique is available in many variants such as 128-bit keys (10 rounds), 192-bit keys (12 rounds) and 256-
bit keys (14 rounds).
• Public-key cryptography involves mathematical functions and computations for improving confidentiality,
key distribution and authentication. Some of the applications of public-key cryptography are encryption/
decryption, key exchange and digital signature.
• Diffie-Hellman Key exchange Algorithm enables key exchange between two users and is prone to Man-in-the-
Middle attack.
• RSA algorithm can be used for both public-key encryption as well as digital signatures. Some of the security
attacks on RSA are brute force, mathematical attacks, timing attacks and chosen ciphertext attacks.
• Hash function is expressed in the form of mathematical equation used for encryption in various applications
such as message authentication, digital signatures and password protection.
• Message digest algorithm (MD5) accepts input of arbitrary length and produces a message digest which is 128 bits
long.
• SHA or Secure Hash Algorithm involves modular arithmetic and logical binary operations for providing security
service.
• Emails can be secured using GNU Privacy Guard, PGP and S/MIME technologies.
• S/MIME has various functions such as enveloped data, signed data, clear-signed data and signed and enveloped
data.
• The key functions of IPSec are authentication, confidentiality and key management.
• The attacks on cryptosystems are active attacks and passive attacks. Passive attacks are release of message
contents and traffic analysis whereas active attacks can be masquerade, replay attack, modification of messages
and denial of service.
• The main application of strong authentication is in identity access management.
• Kerberos is a security service that involves granting of ticket and authentication of user for establishing
communication with the servers.


• IPSec provides two protocols, Authentication Header (AH) and Encapsulating Security Payload (ESP), each of which
can operate in Transport Mode or Tunnel Mode.
• SSL (Secure Sockets Layer) is used to authenticate and secure user communication on the Internet. The Internet
standard derived from SSL is called TLS (Transport Layer Security).
• The key functions of Pretty Good Privacy (PGP) are authentication, confidentiality, compression and e-mail
compatibility.


KNOWLEDGE CHECK

Q.1. Select the right choice from the following multiple choice questions.
A. The raw information that is converted into a secret code is known as:
i. Plaintext
ii. Ciphertext
iii. Message
iv. Base data

B. The process of converting raw information into a secret code is known as:
i. Decryption
ii. Authentication
iii. Encryption
iv. Verification

C. The sender exchanges a ___________ with the receiver to ensure a secured communication.
i. Fingerprint
ii. Signature
iii. Password
iv. Key

D. If ‘n’ number of people want to communicate with each other in symmetric key encryption, then the
number of keys required will be computed as:
i. N(N+1)/2
ii. (N+1)/2
iii. (N-1)/2
iv. N(N-1)/2

E. Which of these is not an Asymmetric encryption technique:


i. Diffie-Hellman
ii. RSA algorithm
iii. Data Encryption Standard (DES)
iv. El Gamal algorithm

F. Which of these attacks does not affect the security aspect of RSA:
i. Chosen ciphertext attacks
ii. Timing attacks
iii. Masquerade
iv. Brute force


G. Which of the following does not fall under the category of Active Attacks:
i. Denial of Service
ii. Replay
iii. Masquerade
iv. Release of message contents

H. Which of the given authentication services contains Authentication Server and a Ticket-Granting Server:
i. Strong Authentication
ii. Secure Socket Layer
iii. Kerberos
iv. Pretty Good Privacy

I. Which of these does not come under the category of Strong Authentication:
i. Password Authentication Protocol (PAP)
ii. Authentication Token
iii. Biometric Authentication
iv. Pretty Good Privacy

J. Which of these is not an application of cryptographic hash function:


i. Secured connection with the server
ii. Message Authentication
iii. Digital Signature
iv. Password Protection

Q.2. Describe the basic terminologies used in cryptography.

Q.3. Describe symmetric and asymmetric encryption technique.


Q.4. List the properties and applications of cryptographic hash functions.

Q.5. Write in brief the steps involved in the SHA algorithm.

Q.6. Give examples of the types of active and passive attacks on cryptosystems.

UNIT 3
NETWORK SECURITY

At the end of this unit you will be able to:
• Explain relevant network security concepts, devices and terminologies
• Describe the vulnerabilities and attacks concerned with an organisation’s network
• Describe common network security countermeasures and tools
• Distinguish between intrusion detection systems and intrusion prevention systems
• Implement a firewall
• Describe Security Information and Event Management (SIEM) function

3.1 NETWORKS AND THEIR VULNERABILITIES

3.1.1 Need for Network Security

As we are aware, a computer network connects computers and peripherals using networking devices such as switches
and routers. Switches and routers enable the devices that are connected to the network to communicate with each
other, as well as with other networks.

When multiple networks are connected, the result is called an internetwork. The Internet is such a connection of
multiple networks.

Fig 3.1: Various types of Transmission Media

Further, networks can be wired or wireless; that is, data is transmitted across the network either through wired
media (also called guided media) or through wireless (unguided) media.

Fig 3.2: Router, Switch, Hub and Bridge in a network


Unguided or wireless media is slowly gaining popularity, particularly with the advent of mobile computing devices
such as smartphones, tablets, laptops, etc.


Today the use of wired and wireless networks has grown exponentially. Almost all businesses use computer networks
for sharing information, and numerous business and personal transactions are conducted over the Internet every day.
This creates a huge risk of information theft and other attacks on the intellectual assets of businesses and
individuals.
These networks, ideally, should allow sharing of information and resources with authorized personnel only. However,
they are prone to unauthorized access if they are not properly secured. Organizations have networks of computer
systems that can be attacked by outsiders as well as from within the organization.
It is possible for attackers to take advantage of an unsecure hub/switch port to connect
their device to the network. By doing this:
• The attacker can steal important information by sniffing data packets.
• The attacker can also flood the network with spurious information, leading to denial of service for the
authorized personnel.
• The attacker can spoof the physical identities of the authorized personnel and then either steal their data or
secretly pass/alter the communications between two parties without their knowledge in the form of a
‘man-in-the-middle’ attack.
• There are times when malicious content or corrupt files are spread across the network to hack confidential
information.

Did you know?
FBI studies show that more than 80% of network security attacks could have been avoided if only the most basic
steps were taken.

It has been observed that wireless networks are more vulnerable than wired networks, because a wireless network can
be accessed without any physical connection.
Hence, there is a need to have an effective security mechanism in place to counter any threat that may occur. There
is also a need to keep updating systems over time, so that they do not become predictable to attackers.
Network security is a specialized field that protects the usability, reliability, integrity, and safety of the networking
infrastructure by dealing with the various network security risks.


3.1.2 Network Fundamentals

For anyone managing network security a good understanding of networking is important. This includes some common
terminology and protocols.
Let us review these in brief.

Network security glossary

• Connection: In networking, when pieces of related information are transferred through a network, we say that a
connection has occurred. This means that a connection is built before the data transfer and then it is deconstructed
at the end of the data transfer. A secured connection is very important for maintaining the effectiveness of
communication transfer over the network.
• Packet: Generally speaking, a packet is the basic unit transferred over a network. Packets are envelopes that
carry data (in pieces) from one endpoint to the other in order to communicate over a network. Packets have the
following components (see the sketch after this list):
- A header portion containing metadata and routing information, such as the IP addresses of the source and
destination.
- The main body, which contains the payload, i.e. the actual data being transferred.
- The trailer (also called the footer), which contains a few bits that tell the receiver it has reached the end of
the packet.
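
As a toy illustration of this header/payload/trailer layout, the following Python sketch packs and parses a simplified packet. The field sizes and the trailer marker are invented for the example and do not correspond to any real protocol header.

import struct

# Simplified layout: source address, destination address, payload length,
# then the payload itself, then a short end-of-packet trailer marker.
HEADER = struct.Struct("!4s4sH")   # src IP, dst IP, payload length
TRAILER = b"\xde\xad"              # hypothetical end-of-packet marker

def build_packet(src: bytes, dst: bytes, payload: bytes) -> bytes:
    return HEADER.pack(src, dst, len(payload)) + payload + TRAILER

def parse_packet(raw: bytes):
    src, dst, length = HEADER.unpack_from(raw)
    payload = raw[HEADER.size:HEADER.size + length]
    assert raw[HEADER.size + length:] == TRAILER, "corrupt trailer"
    return src, dst, payload

pkt = build_packet(b"\xc0\xa8\x01\x0a", b"\xc0\xa8\x01\x01", b"hello")
print(parse_packet(pkt))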
• Port: A port is an address on a network device that can be associated with a specific piece of software. It is
not a physical interface or a location, but it allows a server to communicate using more than one application.
• LAN (Local Area Network): It refers to a network or a part of a network that is not publicly accessible to the
greater internet. A home or office network is an example of LAN.
• WAN (Wide Area Network): A WAN is a more extensive network than a LAN. It is a term used for large, dispersed
networks. The internet, as a whole, can be called a WAN.
• VPN (Virtual Private Network): It is a means of connecting separate LANs through the internet, while maintaining
privacy. This is used as a means of connecting remote systems as if they were on a local network, often for security
reasons.
• Firewall: A firewall is a program that decides whether traffic coming into a server or going out should be allowed.
A firewall usually works by creating rules that decide which type of traffic is acceptable on which ports. Generally,
firewalls block ports that are not used by a specific application on a server.
• Password: Nowadays, almost all e-platforms ask for a username and password for logging in to a portal. From the
organizational perspective, there should be a strong, secured database capable of storing multiple passwords.
Better still, the database can store a hash of each password rather than the password itself. Then, whenever the
user logs in, the entered password is hashed and compared to the stored hash value in the organizational database;
the user is logged in once this authentication succeeds. A minimal sketch of this pattern follows.
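
Here is a small, self-contained sketch of salted password hashing with Python's standard library; the iteration count and the example passwords are illustrative choices, not prescriptions.

import hashlib, hmac, os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Store a random salt and the PBKDF2 hash, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Re-hash the login attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong guess", salt, stored)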
• IP (Internet Protocol) Addresses: In a network, it is very important for each entity to have an identification,
called an address. Each computer/device within the network has two types of addresses:
1. The logical address, also known as the IP (Internet Protocol) address. It is a virtual address that can be
viewed by the user and is used as a reference to the physical address.
2. The physical address, also known as the MAC (Media Access Control) address, which is the hardware address of
the network interface. The user cannot directly view the physical address; it is reached via its corresponding
logical address.
IP addresses are managed by the Internet Assigned Numbers Authority (IANA), which has overall responsibility for
the IP address pool, and by the Regional Internet Registries (RIRs), to which IANA distributes large blocks of
addresses.


• NAT (Network Address Translation): It is a way to translate requests that are incoming into a routing server to
the relevant devices or servers that it knows about in the LAN. This is usually implemented in physical LANs as a
way to route requests through one IP address to the necessary backend servers.
There are 3 ways to configure NAT:
Static NAT – A single unregistered (private) IP address is mapped to a legally registered (public) IP address, i.e.
a one-to-one mapping between local and global addresses. This is generally used for web hosting. It is not practical
for whole organisations, because every device that needs Internet access would need its own public IP address.
For example, if 3,000 devices need access to the Internet, the organisation would have to buy 3,000 public
addresses, which would be very costly.
Dynamic NAT – An unregistered IP address is translated into a registered (public) IP address drawn from a pool of
public IP addresses. If no address in the pool is free, the packet is dropped, since only a fixed number of private
IP addresses can be translated at a time.
For example, with a pool of 2 public IP addresses, only 2 private IP addresses can be translated at any given
moment; a third host trying to access the Internet will have its packets dropped. In this way, many private IP
addresses are mapped to a pool of public IP addresses. Dynamic NAT is used when the number of users who want to
access the Internet is fixed. It is also very costly, as the organisation has to buy many global IP addresses to
make up the pool.
Port Address Translation (PAT) – Also known as NAT overload. Many local (private) IP addresses are translated to a
single registered IP address, with port numbers used to distinguish which traffic belongs to which private address.
This is the most frequently used form, as it is cost effective: thousands of users can be connected to the Internet
using only one real global (public) IP address. (A toy model of a PAT table follows.)
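
The following Python sketch models a PAT translation table under simplified assumptions (the public IP, port range and hosts are made up); it shows how one public address can serve many private sockets by assigning each a unique translated port.

PUBLIC_IP = "203.0.113.5"          # hypothetical public address
nat_table: dict[int, tuple[str, int]] = {}   # public port -> private socket
next_port = 40000

def translate_outbound(private_ip: str, private_port: int) -> tuple[str, int]:
    """Map a private socket to (PUBLIC_IP, unique port) and remember it."""
    global next_port
    for pub_port, src in nat_table.items():
        if src == (private_ip, private_port):
            return PUBLIC_IP, pub_port          # reuse existing mapping
    nat_table[next_port] = (private_ip, private_port)
    next_port += 1
    return PUBLIC_IP, next_port - 1

def translate_inbound(public_port: int) -> tuple[str, int]:
    """Route a reply arriving on the public port back to the private host."""
    return nat_table[public_port]

print(translate_outbound("192.168.1.10", 51000))  # ('203.0.113.5', 40000)
print(translate_outbound("192.168.1.11", 51000))  # ('203.0.113.5', 40001)
print(translate_inbound(40001))                    # ('192.168.1.11', 51000)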
• Network interface: A network interface can be the interface between software or hardware. It could also be
between two pieces of equipment in a network or between protocol layers of a network. It usually has a network
ID and a node ID associated. Its function is to make a connection or disconnection and pass data. Interfaces
are networking communication points for a computer. Each interface is associated with a physical or virtual
networking device. Typically, a server will have one configurable network interface for each Ethernet or wireless
internet card. In addition, it will define a virtual network interface called the ‘loopback’ or localhost interface. This
is used as an interface to connect applications and processes on a single computer to other applications and
processes.
• Network Protocols and Standards: A protocol is a set of rules and standards that define a language that can be
used to communicate. A great number of protocols are used extensively in networking, and they are often
implemented in different layers. Some low-level protocols are TCP, UDP, IP, and ICMP. Some familiar examples
of application layer protocols, built on these lower protocols, are HTTP (for accessing web content), SSH, TLS/SSL,
and FTP.
Protocols and standards are vital to the implementation of data communications and networking. Protocols refer
to the rules; a standard is a protocol that has been adopted by vendors and manufacturers. Network models serve
to organize, unify, and control the hardware and software components of data communications and networking.
Although the term "network model" suggests a relationship to networking, the model also encompasses data
communications. The two dominant networking models are as follows:

1. The Open Systems Interconnection (OSI) model
2. The Transmission Control Protocol / Internet Protocol (TCP/IP) model

The first is a theoretical framework; the second is the actual model used in today's data communications.


3.1.3 TCP/ IP Vulnerabilities

TCP/IP Model
You may already know that the TCP/IP suite is the commonly used industry standard for connecting hosts, networks
and the Internet. TCP/IP focuses on building an interconnection of networks, called an internetwork, that is capable
of providing universal communication over heterogeneous physical networks. This facilitates communication between
hosts separated by large geographical distances.
TCP/IP acts as a communication link between the programming interface of a physical network and user applications.
The TCP/ IP model, more commonly known as the Internet protocol suite is a layering model that is simpler and has
been widely adopted. This layered structure is referred to as a protocol stack.
It defines four separate layers:
1. Application Layer: In this model, the application layer is responsible for creating and transmitting user data
between applications.
2. Transport Layer: The transport layer is responsible for data transfer between the application program running
on the client and the application program running on the server. This level of networking utilises ports to address
different services. It can build up unreliable or reliable connections depending on the type of protocol used.
3. Network (or Internetwork) Layer: The internet layer or internetwork layer is used to transport data from node
to node in a network. This layer is aware of the endpoints of connections but does not worry about the actual
connection needed to get from one place to another. IP addresses are defined in this layer as a way of reaching
remote systems in an addressable manner.
4. Network Interface/Link Layer: The network interface layer or data-link layer or simply link layer acts as the
interface to the actual network hardware. This layer implements the actual topology of a local network that allows
the internet layer to present an addressable interface. It establishes connections between neighbouring nodes to
send data and may not necessarily provide reliable delivery.

Fig 3.3: The TCP/IP protocol stack (peer layers on two communicating hosts exchange data via the corresponding
layer protocol)

In the case of TCP/IP layers, security controls have to be deployed at each layer. This is because if any one TCP layer
is attacked, none of the other layers will be aware and thus communication will be compromised. Hence, in order to
deal with the risks, one has to understand and address the security vulnerabilities and threats at each TCP /IP layer.


Application Layer Vulnerabilities


Caching
When a user visits web pages through a web browser, the browser may cache the web pages by temporarily saving the
data on the user’s machine. This makes it easier for the user to access the web pages again, since the files load
from the local hard drive. Passwords and usernames can also be cached. This poses a security risk, because an
attacker can use the cached data to access password-protected web pages from that computer.
That is why clearing the cache frequently, and disabling the browser feature that auto-saves user IDs and
passwords, is good practice.

Hijacking
HTTP (Hypertext Transfer Protocol), the application layer protocol of the TCP/IP suite on which the World Wide Web
is based, is used to transfer the files that make up web pages from web servers. When a user opens a website by
entering a URL, request messages are sent to the web server using HTTP for the web page the user wants. The web
server then responds by delivering the requested content.
Weak authentication between the client and the web server during session initialization is a common HTTP
vulnerability. It can lead to a session hijacking attack, where the attacker steals an HTTP session of a legitimate
user by capturing packets with a packet sniffer. A successful hijack gives the attacker full access to the HTTP
session.

Cookie Poisoning
Cookies are small files stored by certain websites in the computer of the user. They help in identifying the users,
providing them easy access to the particular website and even customizing the web pages for the user.
Cookie poisoning is when an attacker modifies or steals a cookie from the user’s computer to access the personal
information it contains, which could include a password or a user ID. The attacker can then use the cookie on their
own machine and access unauthorized information, because the website will not ask for any authentication due to the
presence of the cookie.
Web Application Firewalls (WAF) are used to detect and block cookie poisoning attacks.

Replay attack
In a replay attack, an attacker intercepts a user's data transmission and then re-sends that information for
his/her own benefit. It is a type of man-in-the-middle attack, and more than a simple hijack, because the re-sent
data can be modified to produce different results. The attacker could also spoof the client’s IP address and thus
use his/her own machine.
Replay attacks can be prevented, for example, if the web server keeps track of sessions or creates unique session
IDs.

Cross-Site Scripting
In this type of attack, the attacker identifies web applications or browsers that are vulnerable and injects a malicious
script in it. This script can conduct a session hijack and steal the information and cookies of legitimate users that visit
the website.

Domain Name System (DNS) Attacks


DNS is the database that maps the internet domain names people use to locate a website to the internet protocol
(IP) addresses that computers use to locate it. This functionality is used every time a user browses the internet
or types a URL into a web browser.
If an attacker modifies a DNS record, he/she can direct all traffic to an incorrect IP address. The attacker can do
this by either exploiting a DNS protocol vulnerability or attacking the DNS server.
There are three common DNS protocol attacks:
• DNS cache poisoning: An integrity attack where the attacker targets the caching name servers to control the
answers stored in the DNS cache, feeding them wrong information. This false information will map a domain name to a
wrong IP address and divert requests to another site, which could be a fraudulent site made to look similar to the
real web site. If the user remains unaware and enters a user id/password there, the attacker can steal it. (A toy
illustration of cache poisoning follows this list.)
• DNS spoofing: This refers to faking the IP address of a computer to match the DNS server’s IP address. Then
user requests are directed to the wrong machine. Here the hacker’s machine will impersonate the DNS server and
reply to all user requests and misdirect them.
• DNS ID Hijacking: The term DNS Hijacking and DNS Spoofing are used interchangeably. DNS hijacking tricks the
user into believing that they are connecting to a legitimate domain name.
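
To make the cache poisoning idea concrete, here is a toy Python model (all names and addresses are invented); it shows how a single forged cache entry silently misdirects every subsequent lookup.

# Toy resolver cache showing the effect of DNS cache poisoning.
dns_cache: dict[str, str] = {}

def resolve(name: str, authoritative: dict[str, str]) -> str:
    if name not in dns_cache:                  # cache miss -> ask upstream
        dns_cache[name] = authoritative[name]
    return dns_cache[name]

real_records = {"bank.example": "198.51.100.10"}
print(resolve("bank.example", real_records))   # 198.51.100.10 (correct)

# Attacker races the real reply and plants a forged record in the cache.
dns_cache["bank.example"] = "203.0.113.66"     # attacker's server
print(resolve("bank.example", real_records))   # 203.0.113.66 (poisoned)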

Dynamic Host Configuration Protocol (DHCP) starvation attack


A DHCP server assigns temporary IP addresses to user machines that log into an IP network. The server is configured
with a pool of IP addresses, which are leased to user machines on request.
A DHCP starvation attack is a type of denial of service attack. Here an attacker sends numerous DHCP requests using
spoofed MAC addresses. The DHCP server ends up leasing all its IP addresses until it has no more to give out. So,
when a genuine user sends a request, the server cannot provide an IP address and the user does not get access to
the network.

Transport Layer Vulnerabilities


Three way handshake security flaws
TCP is a connection oriented protocol, which means that a connection has to be established in order to send
information from a sender to a receiver. This is where the TCP uses a three way handshake procedure, where:
1. the user sends a SYN segment to the server requesting to establish a connection
2. the server replies with a SYN-ACK segment acknowledging the client’s request
3. the client then sends an ACK segment and there after the connection is established.

The security weakness in the three-way handshake is the possibility of predicting TCP sequence numbers. Prediction
is possible because the sequence number is incremented by a constant amount per second, and by half that amount
each time a connection is initiated. An attacker can connect to the server legitimately, then guess the next
sequence number and perform session hijacking and TCP injection attacks.
• TCP blind spoofing is another form of Hijacking that can be done, where an attacker is able to guess both the
port number and sequence number of the session that is in process and can carry out an injection attack.
• SYN Flood is another flaw in the three-way handshake, where multiple SYN packets are spoofed using a source
address that does not exist and sent to the target server. After receiving the fake SYN packets, the server replies
with SYN-ACK packets to the unreachable source addresses. This creates a large number of half-opened sessions,
because the expected ACK packets that would properly complete each session never arrive. The server can become
overloaded or eventually crash; it stops accepting further connections, legitimate user connection requests are
dropped, and the result is a denial of service attack. (A toy model of this backlog exhaustion follows.)
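
The sketch below is a deliberately simplified Python model of a connection backlog (the backlog size and addresses are arbitrary); it illustrates why unanswered SYN-ACKs leave half-open entries that eventually crowd out legitimate clients.

# Each spoofed SYN leaves a half-open entry until it times out or the
# backlog fills, at which point even legitimate clients are refused.
BACKLOG_SIZE = 5
half_open: dict[str, str] = {}   # source address -> connection state

def on_syn(src: str) -> str:
    if len(half_open) >= BACKLOG_SIZE:
        return "dropped"             # legitimate clients are refused too
    half_open[src] = "SYN_RECEIVED"  # server sent SYN-ACK, awaiting ACK
    return "syn-ack sent"

def on_ack(src: str) -> str:
    if half_open.pop(src, None):
        return "established"         # handshake completed normally
    return "ignored"

# Attacker floods with spoofed sources that will never send the final ACK.
for i in range(5):
    on_syn(f"198.51.100.{i}")
print(on_syn("10.0.0.7"))            # dropped -> denial of service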

UDP Flood attack


This is a denial of service attack, where numerous User Datagram Protocol (UDP) packets are sent to a targeted server,
so that it is overwhelmed with the number of requests and so is unable to process other requests from legitimate
users. Even a firewall protecting the targeted server can become exhausted due to the UDP flooding.

Network Layer Vulnerabilities


Internet Protocol (IP) is the main protocol in this layer, which is implemented in two versions; IPv4 and IPv6. Address
Resolution Protocol (ARP), Internet Control Message Protocol (ICMP) and Internet Group Multicast Protocol are other
protocols used in this layer. These protocols can cause major security vulnerabilities.


Devices on the network are uniquely identified by IP addresses and a subnet mask. An attacker can spoof an IP
address and carry out a man-in-the-middle attack. The attacker can even hijack a connection session. Given below
are some common network layer attacks.

Source route attack


IP source routing allows a packet to carry a list of the specific routers it should traverse to reach its
destination; this recorded path can be used by the recipient to send data back to the sender. In a source route
attack, the attacker modifies the source route option in the packet. This can lead to a loss of data
confidentiality, as the attacker is able to read the data packets.

RIP Security Attacks


Routing Information Protocol (RIP) is a dynamic routing protocol used for sending routing information on local
networks. As the receiver does not check the messages, an attacker can take advantage and send incorrect routing
information or forge the RIP messages. The attacker can impersonate a route to a particular host that is unused. The
packets can be sent to the attacker for sniffing or performing a man in the middle attack.

Attacks due to Internet Control Message Protocol (ICMP) vulnerabilities


ICMP is a basic network management protocol of TCP/IP networks. It is used to send error and control messages
regarding the status of networked devices, and it has vulnerabilities that can be abused. Some attacks that can
occur on a network due to ICMP vulnerabilities are:
• An attacker can carry out network reconnaissance to determine the network topology and paths into the network.
Through this, the attacker can ascertain all the host IP addresses alive in the network.
• Traceroute is an ICMP-based utility used to map a network by describing, in real time, the path from the client
to a remote host.
• An attacker can launch a denial of service attack using an ICMP vulnerability, by sending ICMP ping packets that
exceed 65,535 bytes to a computer. The impacted computer fails to handle such a packet properly, which can cause
the operating system to crash.

Ping Of Death Attack


In this attack, the attacker sends malformed IP packets that exceed 65,535 bytes to the target device. A correctly
formed ping packet is 56 bytes, or 64 bytes when the ICMP header is considered. The target device cannot process
the oversized packet properly, which can lead to an operating system crash (kernel panic). This, in turn, results
in a denial of service.

Teardrop attack
This attack is a type of denial-of-service (DoS) attack which works by sending a series of fragmented packets to a
target device. It overwhelms the target device with incomplete data until the device crashes. Other versions of the
teardrop attack are NewTear, Nestea, SynDrop and Bonk.

Data-Link and Physical Layer Vulnerabilities


The Link layer consists of the data link layer and the physical layer. Let us look at them one by one.
Some vulnerabilities of the data link layer are:
Eavesdropping via sniffing
Eavesdropping via sniffing is possible at the data link layer. Since broadcasts are sent out of all switch
interfaces except the originating port, broadcast traffic on a subnet reaches every network interface card attached
to that switch. This means packets can be analysed, or stored for later inspection, by an attacker. Tools such as
Wireshark are capable of capturing packets.
CAM flood or MAC flooding attack
Physical or MAC addressing is done at the data link layer. For the switching process to succeed, each packet that
requires delivery needs a physical address. Switches use a CAM (content-addressable memory) table to store
information such as the MAC addresses available on physical ports, with their associated VLAN parameters (Security,
CISCO Systems 2002). The table can only store a fixed amount of information. An attacker takes advantage of this
fixed memory size by filling the table with more entries than it can handle, causing it to overflow. This attack is
called a CAM flood or MAC flooding attack.
Address Resolution Protocol (ARP) Attack
ARP is used in the data link layer to convert IP addresses to their corresponding MAC addresses. The user sends a
broadcast ARP message requesting the MAC address for a given IP address. This message is broadcast by the switch to
all ports except the source port. The host with the intended destination IP address receives the ARP message and
replies with the corresponding MAC address; all other hosts on the switch drop the packet. Gratuitous ARP is a type
of ARP used by hosts to broadcast their IP address to the network in order to avoid duplication.
ARP Spoofing: An attacker can abuse Gratuitous ARP because there is no authentication of the ownership of either
the IP or the MAC address. An attacker could therefore spoof an ARP packet to broadcast the IP and MAC address of
an already existing host. This leads to an IP conflict, and the legitimate user is not allowed onto the network,
which is a denial of service.
ARP cache poisoning: ARP keeps its physical-to-logical bindings in an ARP cache. ARP cache poisoning occurs when an
attacker modifies this table with incorrect mappings. When the user’s machine tries to send data, it consults the
poisoned cache and sends the data to the attacker. (A toy illustration follows.)
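
Below is a toy Python model of an ARP cache (all addresses are fabricated); it shows how a single unauthenticated ARP update redirects traffic at the data link layer.

# Toy ARP cache: poisoning rewrites the IP -> MAC binding, so frames meant
# for the gateway are delivered to the attacker's interface instead.
arp_cache = {"192.168.1.1": "aa:bb:cc:dd:ee:01"}   # legitimate gateway MAC

def send_frame(dst_ip: str, payload: bytes) -> str:
    mac = arp_cache[dst_ip]          # the data link layer trusts the cache
    return f"frame to {mac}: {payload!r}"

print(send_frame("192.168.1.1", b"online banking session"))

# An unauthenticated (gratuitous) ARP reply lets an attacker poison the cache.
arp_cache["192.168.1.1"] = "de:ad:be:ef:00:66"      # attacker's MAC
print(send_frame("192.168.1.1", b"online banking session"))  # misdirected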

Physical Layer Vulnerabilities


This layer is open to attacks based on the communication media being used, i.e. wired or wireless media. If an
attacker is able to gain access to the media, he or she can easily cause a denial of service attack by making the
organisation's applications unavailable to users, or can simply sniff the actual media by tapping into the network.
Ethernet copper twisted pair cables are relatively easy to hack into. An attacker only needs knowledge of the
Ethernet cabling standards (568A or 568B); with this information, a cable can easily be tapped into without being
detected.
Another security vulnerability of twisted pair cables is that they emit electromagnetic energy that can be picked
up with sensitive equipment, without the need for physically tapping into the media. Optical fiber cables do not
emit electromagnetic waves and are hard to tap; they can be used in place of Ethernet cables. Physical theft of
data and hardware equipment is a possible result of poor physical security.
A simple act like removing or cutting a network cable can cause a lot of havoc on the network.
In a wireless networking environment, an attacker can easily eavesdrop. Wired Equivalent Privacy (WEP) is one of
the most widely used wireless authentication standards. However, it uses a very weak RC4 encryption algorithm, and
a determined hacker can easily crack it using dictionary attacks or brute force. Wi-Fi Protected Access (WPA)
overcomes the weaknesses of WEP: it offers a sophisticated key hierarchy that generates new encryption keys each
time a mobile device connects to the network (Computer Desktop Encyclopaedia).
Wireless access points can be spoofed. An attacker can set up a rogue access point, give it the same service set
identifier (SSID) as the genuine network, and configure the same wireless network authentication password. When
users log in to this network, the attacker has full access to their machines. Wireless media is also susceptible to
radio frequency interference: an attacker can jam the Wi-Fi radio frequency by placing a device that distorts the
wavelength and amplitude of the signals, making the network unusable.

3.1.4 OSI Model Vulnerabilities

OSI (Open Systems Interconnection) is a logical representation of how network systems send data and communicate
with each other, and it ensures the interoperability of diverse communication systems using standard protocols. The
“7 layers” of the OSI model are a logical representation of how network systems are supposed to communicate with
each other.


The 7 different layers in this model, and their relation to the TCP/IP Model, are as follows:

OSI Layers 7 (Application), 6 (Presentation) and 5 (Session) map to TCP/IP Layer 4 (Application).
OSI Layer 4 (Transport) maps to TCP/IP Layer 3 (Transport).
OSI Layer 3 (Network) maps to TCP/IP Layer 2 (Internet).
OSI Layers 2 (Data Link) and 1 (Physical) map to TCP/IP Layer 1 (Network Interface).

The common vulnerabilities at the various layers are as follows:

Layer One - Physical Layer
This layer is where the real transmission of data or bits takes place through a medium. It is responsible for
handling the actual physical devices that are used to make a connection. This layer also involves the bare software
which manages the physical connections and the hardware like Ethernet.
Physical Layer Vulnerabilities:
• Loss of Power
• Loss of Environmental Control
• Physical Theft of Data and Hardware
• Physical Damage or Destruction of Data and Hardware
• Unauthorized changes to the functional environment (data connections, removable media, adding/removing resources)
• Disconnection of Physical Data Links
• Undetectable Interception of Data
• Keystroke & Other Input Logging


Layer Two - Data Link Layer
The Data Link Layer is concerned with the logical elements of transmissions between two directly connected
stations. It deals with issues of local topology where many stations may share a common local media. This is the
layer where data packets are prepared for transmission by the physical layer.
Link Layer Vulnerability Examples:
• MAC Address Spoofing (station claims the identity of another)
• VLAN circumvention (station may force direct communication with other stations, bypassing logical controls such
as subnets and firewalls)
• Spanning Tree errors may be accidentally or purposefully introduced, causing the layer two environment to
transmit packets in infinite loops
• In wireless media situations, layer two protocols may allow free connection to the network by unauthorized
entities, or weak authentication and encryption may allow a false sense of security
• Switches may be forced to flood traffic to all VLAN ports rather than selectively forwarding to the appropriate
ports, allowing interception of data by any device connected to a VLAN

Layer Three - Network Layer
The Network Layer is concerned with the global topology of the internetwork; it is used to determine what path a
packet would need to take to reach a final destination over multiple possible data links and paths, across numerous
intermediate hosts. This layer typically uses constructs such as IP addresses to identify nodes, and routing tables
to identify overall paths through the network and the more immediate next hop that a packet may be forwarded to.
Network Layer Vulnerabilities:
• Route spoofing: propagation of false network topology
• IP Address Spoofing: false source addressing on malicious packets
• Identity & Resource ID Vulnerability: reliance on addressing to identify resources and peers can be brittle and
vulnerable

Layer Four - Transport Layer
The Transport Layer is concerned with the transmission of data streams into the lower layers of the model, taking
data streams from above and packaging them for transport, and with the reassembly and passing of incoming data
packets back into a coherent stream for the upper layers of the model.
Transport Layer Vulnerabilities:
• Mishandling of undefined, poorly defined, or “illegal” conditions
• Differences in transport protocol implementation allow “fingerprinting” and other enumeration of host information
• Overloading of transport-layer mechanisms such as port numbers limits the ability to effectively filter and
qualify traffic
• Transmission mechanisms can be subject to spoofing and attack based on crafted packets and the educated guessing
of flow and transmission values, allowing the disruption or seizure of control of communications


Layer Five - Session Layer
The Session Layer is concerned with the organization of data communications into logical flows. It takes the higher
layer requests to send data and organizes the initiation and cessation of communication with the far-end host. The
session layer then presents its data flows to the transport layer below, where actual transmission begins.
Session Layer Vulnerabilities:
• Weak or non-existent authentication mechanisms
• Passing of session credentials such as user ID and password in the clear, allowing intercept and unauthorized use
• Session identification may be subject to spoofing and hijack
• Leakage of information based on failed authentication attempts
• Unlimited failed sessions allow brute-force attacks on access credentials

Layer Six - Presentation Layer
The Presentation Layer deals with the organization of data passed from the application layer into the network. This
layer allows for the standardization of data and the communication of data between dissimilar hosts, such as
platforms with different binary number representation schemes or character sets (ASCII vs. UNICODE, for example).
Presentation Layer Vulnerabilities:
• Poor handling of unexpected input can lead to application crashes or surrender of control to execute arbitrary
instructions
• Unintentional or ill-advised use of externally supplied input in control contexts may allow remote manipulation
or information leakage
• Cryptographic flaws may be exploited to circumvent privacy protections

Layer Seven - Application Layer
The Application Layer deals with the high-level functions of programs that may utilize the network. The user
interface and primary program functions live at this layer; all functions not pertaining directly to network
operation occur here.
Application Layer Vulnerabilities:
• Open design issues allow free use of application resources by unintended parties
• Backdoors and application design flaws bypass standard security controls
• Inadequate security controls force an “all-or-nothing” approach, resulting in either excessive or insufficient
access
• Overly complex application security controls tend to be bypassed or poorly understood and implemented
• Program logic flaws may be accidentally or purposely used to crash programs or cause undesired behavior


3.2 NETWORK SECURITY MEASURES

3.2.1 Key Network Security Measures

Now that we have understood some of the common types of attacks that a network is vulnerable to, let us look at
some of the measures that can be taken in order to achieve network security.
The International Telecommunication Union (ITU) has provided recommendations on security architecture in X.800,
defining mechanisms to achieve network security and bring about standardization.
The “SECURITY ARCHITECTURE FOR OPEN SYSTEMS INTERCONNECTION FOR CCITT APPLICATIONS – Recommendation X.800” can be
downloaded from the following link:
[Link]
Some fundamental measures are given below based on which network security solutions can be customised.

Firewall
The term ‘firewall’ came into being in 1764 to describe the walls that separated the parts of a building most prone
to fire (such as the kitchen) from the rest of the structure. These physical barriers prevented fire from spreading
throughout a building, thereby saving lives and property. Before the introduction of firewalls, routers were used
in the 1980s for ensuring network security.
A firewall is a device that mediates communication between multiple networks, such as a private LAN and the public
internet, as per a defined security policy. Firewalls determine which services may be accessed, or attacked, from
the outside. It is crucial that the firewall decide which traffic is to be blocked and which permitted, thus acting
like a security guard for the user's network. A firewall also provides the network administrator with data about
the kind and amount of traffic that has passed through it, the number of attempts made to break into it, and much
more. These security mechanisms not only prevent unauthorised access, but also monitor sniffing activities and help
identify entities attempting to breach security.
The key functions of a firewall are:
• Blocking incoming data that might contain a hacker attack
• Hiding information about the network, making it seem that outbound traffic originates from the firewall rather
than the network. This is termed Network Address Translation (NAT).
• Screening outgoing traffic, to limit the use of the Internet and access to remote sites.
However, firewalls are no cure-all solution to network security woes. A firewall is only as good as its rule set,
and there are many ways an attacker can exploit common misconfigurations and errors in the rules. For example, if a
firewall blocks all traffic except traffic from port 53 (DNS) so that everyone can resolve names, an attacker can
use this rule to his/her advantage: by changing the source port of an attack or scan to port 53, the attacker gets
all of the traffic through, because the firewall assumes it is DNS traffic. Bypassing firewalls is a whole study in
itself, and one which is very interesting (especially to those with a passion for networking), because it normally
involves misusing the way TCP and IP are supposed to work. That said, firewalls today are becoming very
sophisticated, and a well-installed firewall can severely thwart a would-be attacker's plans. It is important to
remember that a firewall does not look into the data section of the packet. Thus, if one has a web server that is
vulnerable to a CGI exploit and the firewall is set to allow traffic to it, the firewall cannot stop an attacker
from attacking the web server, because it does not look at the data inside the packet. That would be the job of an
intrusion-detection system. (A toy rule-matching sketch follows.)
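
Here is a minimal Python sketch of the first-match, default-deny rule evaluation described above; the rules, addresses and ports are invented for illustration.

from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_port: int
    protocol: str

# Ordered rule set: the first matching rule wins; the default policy is DROP.
RULES = [
    ("ACCEPT", lambda p: p.protocol == "tcp" and p.dst_port == 443),
    ("ACCEPT", lambda p: p.protocol == "udp" and p.dst_port == 53),
    ("DROP",   lambda p: p.src_ip.startswith("10.")),
]

def filter_packet(packet: Packet) -> str:
    for action, match in RULES:
        if match(packet):
            return action
    return "DROP"   # default-deny policy

print(filter_packet(Packet("203.0.113.9", 443, "tcp")))  # ACCEPT (HTTPS)
print(filter_packet(Packet("203.0.113.9", 23,  "tcp")))  # DROP (telnet)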

Anti-virus
Desktop antivirus packages like Norton Antivirus and McAfee need no introduction. The way these operate is fairly
simple: when researchers find a new virus, they identify some unique characteristic it has (maybe a registry key it
creates or a file it replaces), and from this they write the virus ‘signature’.


The whole set of signatures for which the antivirus software scans is known as the virus ‘definitions’. This is why
keeping virus definitions up to date is very important. Many antivirus packages have an auto-update feature to
download the latest definitions; the scanning ability of the software is only as good as the date of its
definitions. In the enterprise, it is very common for administrators to install antivirus software on all machines
but have no policy for regular updates of the definitions. This is meaningless protection and serves only to
provide a false sense of security. (A toy signature scanner is sketched below.)
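
The following Python sketch reduces signature scanning to its simplest form, fingerprint matching against a definitions table; real products match far richer byte patterns and heuristics, and the 'sample' bytes here are invented.

import hashlib

# Toy signature scanner: compare file fingerprints against a definitions
# database built from known-bad samples.
def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

definitions = {fingerprint(b"MALWARE-SAMPLE-BYTES"): "Trojan.Example"}

def scan(data: bytes) -> str:
    return definitions.get(fingerprint(data), "clean")

print(scan(b"MALWARE-SAMPLE-BYTES"))   # Trojan.Example
print(scan(b"harmless document"))      # clean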
With the recent spread of email viruses, antivirus software at the mail server is becoming increasingly popular.
The mail server automatically scans any email it receives for viruses and quarantines the infections. The idea is
that since all mail passes through the mail server, this is the logical point to scan for viruses. Given that most
mail servers have a permanent connection to the internet, they can regularly download the latest definitions. On
the downside, such scanning can be evaded quite simply: if the attacker zips up the infected file or Trojan, or
encrypts it, the antivirus system may not be able to scan it.
End users must be taught how to respond to antivirus alerts. This is especially true in the enterprise -- an
attacker doesn't need to try and bypass the user's fortress-like firewall if all he has to do is email Trojans to a
lot of people in the company. It takes just one uninformed user opening the infected package to allow the hacker a
backdoor into the internal network.
It is advisable that the IT department gives a brief seminar on how to handle email from untrusted sources and how
to deal with attachments. These are very common attack vectors, simply because a user may harden a computer system
as much as he/she likes, but the weak point still remains the user who operates it. As crackers say, "The human is
the path of least resistance into the network."
Intrusion Detection System
There are basically two types of Intrusion-Detection Systems (IDS):
1. Host-based IDS
2. Network-based IDS

Host-Based IDS:
These systems are installed on a particular important machine (usually a server or some important target) and are
tasked with making sure that the system state matches a particular set baseline. For example, the popular file-integrity
checker Tripwire is run on the target machine just after it has been installed. It creates a database of file signatures
for the system and regularly checks the current system files against their known safe signatures. If a file has been
changed, the administrator is alerted. This works very well because most attackers will replace a common system file
with a Trojan version to give them a backdoor access.
Network-Based IDS:
These systems are more popular and quite easy to install. Basically, they consist of a normal network sniffer running
in promiscuous mode. (In this mode, the network card picks up all traffic even if it is not meant for it.) The sniffer is
attached to a database of known attack signatures, and the IDS analyses each packet that it picks up to check for
known attacks. For example, a common web attack might contain the string /system32/cmd.exe? in the URL. The IDS
will have a match for this in the database and will alert the administrator.
Newer versions of IDS support active prevention of attacks. Instead of just alerting an administrator, the IDS can
dynamically update the firewall rules to disallow traffic from attacking IP address for some amount of time. Or the
IDS can use ‘session sniping’ to fool both sides of the connection into closing down so that the attack cannot be
completed.
Unfortunately, IDS systems generate a lot of false positives. A false positive is basically a false alarm, where the IDS
sees legitimate traffic and for some reason matches it against an attack pattern.


This tempts a lot of administrators into turning them off or even worse -- not bothering to read the logs. This may
result in an actual attack being missed.
IDS evasion is also not all that difficult for an experienced attacker. The signature is based on some unique
feature of the attack, so an attacker can modify the attack until the signature no longer matches. For example,
the attack string /system32/cmd.exe? above could be rewritten in hexadecimal URL encoding to look something like:
%2f%73%79%73%74%65%6d%33%32%2f%63%6d%64%2e%65%78%65%3f
This might be totally missed by the IDS. Furthermore, an attacker could split the attack across many packets by
fragmenting them, so that each packet contains only a small part of the attack and the signature never matches.
Even if the IDS is able to reassemble fragmented packets, this creates a time overhead, and since the IDS has to
run at near real-time status, it tends to drop packets while processing. IDS evasion is a topic for a paper of its
own. (A toy demonstration of the encoding evasion follows.)
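
This short Python sketch demonstrates the percent-encoding evasion just described, and how normalizing input before matching defeats it; the one-entry signature database is, of course, a stand-in for a real rule set.

from urllib.parse import unquote

SIGNATURES = ["/system32/cmd.exe?"]   # simplified attack-pattern database

def ids_naive(url: str) -> bool:
    return any(sig in url for sig in SIGNATURES)

def ids_normalizing(url: str) -> bool:
    return any(sig in unquote(url) for sig in SIGNATURES)  # decode %xx first

attack  = "/system32/cmd.exe?/c+dir"
evasion = "/%73%79%73%74%65%6d%33%32/%63%6d%64%2e%65%78%65%3f/c+dir"

print(ids_naive(attack), ids_naive(evasion))              # True False
print(ids_normalizing(attack), ids_normalizing(evasion))  # True True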
The advantage of a network-based IDS is that it is very difficult for an attacker to detect. The IDS itself does not need
to generate any traffic and in fact, many of them have a broken TCP/IP stack so that they don't have an IP address.
Thus, the attacker does not know whether the network segment is being monitored or not.

Demilitarized Zones
In computer networking, a demilitarized zone (DMZ) is a special local network configuration designed to improve
security by segregating computers on each side of a firewall. Also known as a perimeter network or a screened
subnetwork, it is a physical or logical subnet that separates an internal local area network (LAN) from other
untrusted networks, usually the internet. External-facing servers, resources and services are located in the DMZ,
so they are accessible from the internet while the rest of the internal LAN remains unreachable. This provides an
additional layer of security to the LAN, as it restricts the ability of hackers to directly access internal servers
and data via the internet.
Any service provided to users on the public internet should be placed in the DMZ network. The most common of these
services include web servers and proxy servers, as well as servers for email, domain name system (DNS), File
Transfer Protocol (FTP) and voice over IP (VoIP).
The systems running these services in the DMZ are reachable by hackers and cybercriminals around the world and need
to be hardened to withstand constant attack. The term DMZ comes from the geographic buffer zone that was set up
between North Korea and South Korea at the end of the Korean War.

Fig 3.4: Demilitarized Zone (DMZ)


DNSSEC
Domain name system security extensions (DNSSEC) are a set of protocols that make the traditional domain name
system (DNS) more secure. As we know DNS resolves hostnames into IP addresses, but is vulnerable to attacks
because it works by using unencrypted data for DNS records. DNSSEC is a security system that has been developed
in the form of extensions that could be added to existing DNS protocols. The extensions can:
• authenticate the origin of data sent from a DNS server
• verify the integrity of data
• authenticate nonexistent DNS data.
However, DNSSEC cannot protect how the data is distributed and who can access the data.
A system of public keys and digital signatures is used by DNSSEC to verify data. Public keys can also be used by
security systems to encrypt data when it is sent through the Internet and to decrypt it when it is received.
However, DNSSEC itself does not include encryption algorithms, so it cannot protect the privacy or confidentiality
of data.
New types of records have to be created for the implementation of DNSSEC, such as:
• DS
• DNSKEY
• NSEC
• RRSIG
The RRSIG record is the digital signature; it stores the key information used to validate the accompanying data.
The key contained in the RRSIG record is matched against the public key in the DNSKEY record. The NSEC family of
records, including NSEC, NSEC3 and NSEC3PARAM, is then used as an additional reference to thwart DNS spoofing
attempts. The DS record is used to verify keys for subdomains.
The process used for a DNSSEC lookup varies as per the type of server used to send the request. For all processes
the verification of DNSSEC keys requires starting points called trust anchors. Trust anchors are included in operating
systems or other trusted software.
After a key is verified through the trust anchor, it must also be verified by the authoritative name server through the
authentication chain, which consists of a series of DS and DNSKEY records.
To enable DNSSEC, registrars must have this technology enabled not only in their domain name infrastructure, but on
the DNS server as well. ICANN has an updated list of domain registrars who support DNSSEC. This can be accessed
from the following link:
[Link]
One of the easiest and fastest ways to enable DNSSEC is by using Cloudflare. Cloudflare makes the complex DNSSEC
activation process really easy.
[Link]

Public Key Encryption


We have read about PKI in the previous section. As we know, public key cryptography uses two electronic keys:
• a public key
• a private key
Encryption is performed to ensure the safety and privacy of information sent from one party to another. “Keys” are
used to lock (encrypt) and unlock (decrypt) the data that’s transmitted, and if a single key is used for this purpose then
symmetric encryption is said to have occurred. This method only works when the key that’s used is kept absolutely
secure, and as a secret between the two communicating parties.


We have also learnt about digital certificates in the previous chapter. They are gaining importance with the growing use of online services and e-commerce, and the corresponding increase in electronic transactions. The use of PKI technology to support digital signatures can help increase confidence in electronic transactions. For example, a digital signature allows a seller to prove that goods or services were requested by a buyer, and therefore to demand payment. It allows parties without prior knowledge of each other to engage in verifiable transactions.
By verifying the validity of the certificate, the vendor ensures receipt of a valid public key for the buyer. By verifying the signature on the purchase order, the vendor ensures the order was not altered after the buyer issued it. Once the validity of the certificate and signature is established, the vendor can ship the requested goods to the buyer with the knowledge that the buyer ordered the goods. This transaction can occur without any prior business relationship between buyer and seller.
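As an illustration of this flow, here is a minimal sketch, assuming the third-party `cryptography` package (pip install cryptography); the key size, padding scheme and order text are illustrative choices, not requirements from this handbook.

# Sketch of the buyer/vendor digital-signature flow described above,
# using the `cryptography` package (assumption: pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Buyer's key pair; in practice the public key would be distributed
# inside a certificate issued by a certification authority.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

order = b"Purchase order: 10 units, deliver to warehouse A"  # illustrative

# Buyer signs the order with the private key.
signature = private_key.sign(
    order,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Vendor verifies with the buyer's public key; any alteration of the
# order after signing makes verification fail.
try:
    public_key.verify(
        signature,
        order,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("Order authentic and unmodified")
except InvalidSignature:
    print("Order altered or not signed by this buyer")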

Secure Sockets Layer


We have read about SSL as well in the previous chapter. It is a protocol that protects data that is sent between web
browsers and web servers. It ensures that the data was sourced from the website it is supposed to have originated
from and that it was not tampered with while it was being sent. A website address which starts with ‘https’ is SSL
enabled.
SSL provides security and privacy for the purpose of conducting secure transactions over the internet. SSL protocol
protects HTTP transmissions by adding a layer of encryption. This ensures that transactions are not subject to ‘sniffing’
by a third party.
SSL provides visitors to one's website with the confidence to communicate securely via an encrypted session. For companies wishing to conduct serious e-commerce, such as receiving credit card numbers or other sensitive information, SSL is a must. Web users can tell when they have reached an SSL-protected site by the 'https' designation at the start of the web page's address; the 's' added to the familiar HTTP, the Hypertext Transfer Protocol, stands for 'secure'.
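A small sketch using only the Python standard library shows what an SSL/TLS client connection involves; the hostname is an illustrative assumption, and `create_default_context()` performs the certificate and hostname verification a browser would.

# Sketch of an SSL/TLS client handshake using the standard library.
import socket
import ssl

hostname = "www.example.com"  # illustrative hostname
context = ssl.create_default_context()  # verifies certificate and hostname

with socket.create_connection((hostname, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("Protocol in use:", tls.version())  # e.g. TLSv1.3
        print("Certificate subject:", tls.getpeercert()["subject"])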

Smart cards
Smart cards are typically credit-card-style cards that contain a small amount of memory and sometimes a processor. Since smart cards contain more memory than a typical magnetic stripe and can process information, they are used in security situations where these features are a necessity. They can be used to hold system logon information, such as a user's private key, along with other personal information, including passwords. In a typical smart card logon environment, the user is required to insert his/her smart card into a reader device connected to the computer. The software then uses the information stored on the smart card for authentication. When paired with a password and/or a biometric identifier, the level of security is increased; for example, requiring the user to simply enter a password for logon is less secure than having them insert a smart card and enter a password. File encryption utilities that use the smart card as the key to the electronic lock are another security use of smart cards.

Secure code
Electronic software distribution over any network involves potential security problems. Software can contain malicious programmes, such as viruses and Trojan horses. To help address some of these problems, one can associate digital signatures with the files. A digital certificate is a means of establishing identity via public key cryptography. Code signed with a digital certificate verifies the identity of the publisher and ensures that the code has not been tampered with after it was signed. Certificates and object signing establish identity and let the user make decisions about the validity of a person's identity. When the user executes the code for the first time, a dialog box appears, providing information on the certificate and a link to the certificate authority. Microsoft developed the Microsoft Authenticode technology, which enables developers and programmers to digitally sign software. Before software is released to the public or internally within an organisation, developers can digitally sign the code. If the software is modified after it is digitally signed, the signature becomes invalid. In Internet Explorer, one can specify security settings that prevent users from downloading and running unsigned software from any security zone. Internet Explorer can also be configured to automatically trust certain software vendors and authorities so that their software and other information is automatically accepted.

Virtual Private Network (VPN) and Wide Area Network (WAN)


Many organisations have local area networks and information servers spread across multiple locations. When
organisation-wide access to information or other LAN based resources is required, leased lines are often used to
connect LANs into a Wide Area Network. Leased lines are relatively expensive to set up and maintain, making the
internet an attractive alternative for connecting physically separated LANs.
The major drawback of using internet for this purpose is the lack of confidentiality of the data flowing over the internet
between LANs, as well as the vulnerability to spoofing and other attacks. Virtual private networks use encryption to
provide the required security services. Typically, encryption is performed between firewalls, and secure connectivity is
limited to a small number of sites.
One important consideration when creating virtual private networks is that the security policies in use at each site must be equivalent. A VPN essentially creates one large network out of what were previously multiple independent networks. The security of a VPN will essentially fall to that of the lowest common denominator: if one LAN allows unprotected dial-up access, all resources on the VPN are potentially at risk.

Standby servers
It is possible to set up a standby server in case the production server fails. The standby server should mirror the
production server. One can use the standby server to replace the production server in the event of a failure or as
a read-only server. Create the standby server by loading the same operating system and applications as on the
production server. Make backups of data on the production server and restore these backups on the standby server.
This also helps to verify backups that are performed.
The standby server will have a different IP address and name if it is connected to the network. Name and IP address
of the standby server will have to be changed if the production server fails and the standby server needs to become
the production server. To maintain the standby server, regular backups and restorations need to be performed. For example, say a full backup is created on Mondays and incremental backups on every alternate day of the week: restore the full backup on the standby server, and thereafter restore the subsequent incremental backups on the days they are created.

Proxy Servers and Reverse Proxy Servers

Proxy server
A proxy server is a server, with its own IP address, that acts as a go‑between or intermediary between a user who
sends a web request through the internet and the web server or servers that have that information in the form of a
webpage.
The proxy server undertakes the web request on behalf of the user, collates the response from the target web server
or servers and then forwards web page data to the user so that the user can see the page or pages in his/her browser.
However, that is not all that a proxy server does. It can make certain changes to the data the user sends, which do not change the results but ensure that the target web servers are unable to locate the user. It can change the user's apparent IP address so that the web server cannot tell where the user is, it can encrypt the user's data to make it unreadable in transit, and it can also block access to certain web pages based on IP address.


Fig 3.5: Use of Proxy Server

Organizations and individuals use proxy server for the following:


• Proxy servers can change the identifying information that a web request contains and helps to keep the
personal information and browsing habits of the users private.
• The proxy server can also be configured to encrypt the web requests so that no-one can read the user’s
transactions and also prevent access to known malware sites.
• Organizations use proxy servers to control and monitor their employees’ use of the internet, while parents
use them to control and monitor the internet access of their children.
• Proxy servers also help organizations improve their network performance. The proxy servers can save a copy
of popular websites locally (using cache) and on receiving requests share the latest saved copy, thus saving
bandwidth for the organisation.
• Organizations can also join their proxy server with a Virtual Private Network (VPN), so remote users always
access the internet through the company proxy.
Although proxy servers help us in many ways, they can pose a few risks as well, so one must choose them carefully. For example:
• Some free proxy servers may not be very effective and could even be stealing private information.
• Since the proxy server has the user's actual IP address and web request information, it could misuse them. Hence it is important to check the provider's logging, retention and law-enforcement cooperation policies, and to use encryption while sending requests to the proxy server.
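As a small client-side illustration, the sketch below routes a web request through a proxy, assuming the third-party `requests` library; the proxy address is a placeholder assumption, not a real service.

# Sketch of sending web requests through a proxy with `requests`
# (assumption: pip install requests). The proxy address is a
# placeholder; substitute your organisation's proxy.
import requests

proxies = {
    "http": "http://proxy.example.internal:3128",
    "https": "http://proxy.example.internal:3128",
}

# The proxy undertakes the request on the user's behalf, so the target
# server sees the proxy's IP address rather than the client's.
response = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(response.text)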

Reverse Proxy Server

A reverse proxy server is also a type of proxy server, but it forwards requests coming in from the internet to the appropriate servers within a private network. The reverse proxy thus becomes a private network's "public face": the address advertised to clients is that of the proxy server. It typically sits behind the firewall in a private network and directs requests from web browsers and mobile apps to a backend server.

Fig 3.6: Use of Reverse Proxy Server


A reverse proxy provides additional control and security as well as increased scalability and flexibility.
Another benefit of a reverse proxy is web acceleration: it reduces the time taken to generate a response and return it to the client. It does this using techniques such as compressing server responses before returning them to the client, handling the encryption of traffic between clients and servers (SSL termination), and storing a local copy of the backend server's responses (caching).
Apache, IIS and Nginx are commonly used reverse proxy servers. NGINX ("engine x") is an HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server, originally written by Igor Sysoev.
To install, configure and learn more about it, visit the following websites:
[Link]
[Link]
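To make the idea concrete, here is a toy reverse proxy sketched in pure Python; production deployments would use Nginx, Apache or IIS as noted above, and the backend address and listening port are illustrative assumptions.

# A toy reverse proxy: one public-facing server relays requests to a
# private backend, so clients only ever see the proxy's address.
# Addresses and ports below are illustrative assumptions.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

BACKEND = "http://127.0.0.1:8080"  # private backend server (assumed)

class ReverseProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Relay the client's path to the backend and return its response.
        with urlopen(BACKEND + self.path) as upstream:
            body = upstream.read()
            status = upstream.status
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Listen on the advertised "public face" address.
    HTTPServer(("0.0.0.0", 8000), ReverseProxyHandler).serve_forever()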

Packet filtering gateways


Packet filtering firewalls use routers with packet filtering rules to grant or deny access based on source address, destination address, and port (a sketch of this matching logic follows the list below). They offer minimum security but at a very low cost, and can be an appropriate choice for a low-risk environment. They are fast, flexible, and transparent. Filtering rules are not always easy to maintain on a router, but there are tools available to simplify the tasks of creating and maintaining them.
Filtering gateways do have inherent risks, including:
• The source and destination addresses and ports contained in the IP packet header are the only information available to the router when deciding whether or not to permit traffic access to an internal network.
• They do not protect against IP or DNS address spoofing.
• An attacker will have direct access to any host on the internal network once access has been granted by the firewall.
• Strong user authentication isn't supported by some packet filtering gateways.
• They provide little or no useful logging.
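The sketch below illustrates, in plain Python, the header-only rule matching a packet filtering gateway performs; the rules and addresses are illustrative assumptions, and real filters run inside the router or firewall itself.

# Illustrative sketch of packet-filter rule matching: the decision uses
# only header fields (source, destination, port). Rules and addresses
# are assumptions for demonstration, not recommended policy.
from ipaddress import ip_address, ip_network

# Each rule: (action, source network, destination network, dest port or None)
RULES = [
    ("allow", "203.0.113.0/24", "10.0.0.0/24", 80),    # trusted net -> web
    ("deny",  "0.0.0.0/0",      "10.0.0.0/24", None),  # everything else
]

def filter_packet(src, dst, dport):
    """Return the action of the first rule matching the packet header."""
    for action, src_net, dst_net, port in RULES:
        if (ip_address(src) in ip_network(src_net)
                and ip_address(dst) in ip_network(dst_net)
                and (port is None or port == dport)):
            return action
    return "deny"  # default deny if no rule matches

print(filter_packet("203.0.113.5", "10.0.0.8", 80))   # allow
print(filter_packet("198.51.100.7", "10.0.0.8", 22))  # deny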

Application gateways
An application gateway uses server programmes (called proxies) that run on the firewall. These proxies take external requests, examine them, and forward legitimate requests to the internal host that provides the appropriate service. Application gateways can support functions such as user authentication and logging. Because an application gateway is considered the most secure type of firewall, this configuration provides a number of advantages to the medium- to high-risk site:
• The firewall can be configured as the only host address visible to the outside network, requiring all connections to and from the internal network to go through the firewall.
• The use of proxies for different services prevents direct access to services on the internal network, protecting the enterprise against insecure or badly configured internal hosts.
• Strong user authentication can be enforced with application gateways.
• Proxies can provide detailed logging at the application level.

Hybrid or complex gateways


Hybrid gateways combine two or more of the above firewall types and ideally implement them in series rather than in parallel. If they are connected in series, the overall security is enhanced; if they are connected in parallel, the network security perimeter will only be as secure as the least secure of the methods used. In medium- to high-risk environments, a hybrid gateway may be the ideal firewall implementation.


Secure Shell (SSH)


SSH is a cryptographic network protocol that enables secure network system administration and secure file transfers over an unsecured network. Its typical applications are remote command-line access, remote command execution and remote login, though any network service can be secured with SSH. It is popular in data centers and large enterprises.

Patching and updating


It is embarrassing and sad that this has to be listed as a security measure. Despite being one of the most effective
ways to stop an attack, there is a tremendously laid-back attitude to regularly patching systems. There is no excuse
for not doing this, and yet the level of patching remains woefully inadequate. Take, for example, the MSBlaster worm that wreaked havoc recently: the exploit was known almost a month in advance and a patch had been released, yet millions of users and businesses were still infected. While administrators know that having to patch 500 machines is a
laborious task, the way I look at it is that I would rather be updating my systems on a regular basis than waiting for
disaster to strike and then running around trying to patch and clean up those 500 systems.
In the enterprise, there is no "easy" way to patch large numbers of machines, but there are patch deployment
mechanisms that take a lot of the burden away. Frankly, it is part of an admin's job to do this, and when a network
is horribly fouled up by the latest worm, it just means that someone, somewhere didn't do his job well enough. Now
that we've concluded a brief introduction to the types of threats faced in the enterprise, it is time to have a look at
some of the tools that attackers use.
Keep in mind that a lot of these tools have legitimate purposes and are very useful to administrators as well. For
example, I can use a network sniffer to diagnose a low-level network problem or I can use it to collect your password.
It just depends which shade of hat I choose to wear.

Some Vulnerabilities and Their Control Measures


Organisations have to be smart in dealing with threats and attacks on network security. Let us look at some common controls for vulnerabilities at the various OSI layers:

Physical Layer Vulnerabilities
• Loss of Power
• Loss of Environmental Control
• Physical Theft of Data and Hardware
• Physical Damage or Destruction of Data and Hardware
• Unauthorized changes to the functional environment (data connections, removable media, adding/removing resources)
• Disconnection of Physical Data Links
• Undetectable Interception of Data
• Keystroke & Other Input Logging

Physical Layer Controls
• Locked perimeters and enclosures
• Electronic lock mechanisms for logging & detailed authorization
• Video & Audio Surveillance
• PIN & password secured locks
• Biometric authentication systems
• Data Storage Cryptography
• Electromagnetic Shielding

Network Layer Vulnerabilities
• Route spoofing - propagation of false network topology
• IP Address Spoofing - false source addressing on malicious packets
• Identity & Resource ID Vulnerability - reliance on addressing to identify resources and peers can be brittle and vulnerable

Network Layer Controls
• Route policy controls - use strict anti-spoofing and route filters at network edges
• Firewalls with strong filter & anti-spoof policy
• ARP/Broadcast monitoring software
• Implementations that minimize the ability to abuse protocol features such as broadcast


Link Layer Vulnerability Examples
• MAC Address Spoofing (station claims the identity of another)
• VLAN circumvention (station may force direct communication with other stations, bypassing logical controls such as subnets and firewalls)
• Spanning Tree errors may be accidentally or purposefully introduced, causing the layer two environment to transmit packets in infinite loops
• In wireless media situations, layer two protocols may allow free connection to the network by unauthorized entities, or weak authentication and encryption may allow a false sense of security
• Switches may be forced to flood traffic to all VLAN ports rather than selectively forwarding to the appropriate ports, allowing interception of data by any device connected to a VLAN

Link Layer Controls
• MAC Address Filtering - identifying stations by address and cross-referencing physical port or logical access
• Do not use VLANs to enforce secure designs; layers of trust should be physically isolated from one another, with policy engines such as firewalls between them
• Wireless applications must be carefully evaluated for unauthorized access exposure; built-in encryption, authentication, and MAC filtering may be applied to secure networks

Transport Layer Vulnerabilities
• Mishandling of undefined, poorly defined, or "illegal" conditions
• Differences in transport protocol implementation allow "fingerprinting" and other enumeration of host information
• Overloading of transport-layer mechanisms such as port numbers limits the ability to effectively filter and qualify traffic
• Transmission mechanisms can be subject to spoofing and attack based on crafted packets and the educated guessing of flow and transmission values, allowing the disruption or seizure of control of communications

Transport Layer Controls
• Strict firewall rules limiting access to specific transmission protocols and sub-protocol information such as TCP/UDP port number or ICMP type
• Stateful inspection at the firewall layer, preventing out-of-state packets, "illegal" flags, and other phony packet profiles from entering the perimeter
• Stronger transmission and session layer identification mechanisms to prevent the attack and takeover of communications

Session Layer Vulnerabilities
• Weak or non-existent authentication mechanisms
• Passing of session credentials such as user ID and password in the clear, allowing intercept and unauthorized use
• Session identification may be subject to spoofing and hijack
• Leakage of information based on failed authentication attempts
• Unlimited failed sessions allow brute-force attacks on access credentials

Session Layer Controls
• Encrypted password exchange and storage
• Accounts have specific expirations for credentials and authorization
• Protect session identification information via random/cryptographic means
• Limit failed session attempts via timing mechanism, not lockout


Presentation Layer Vulnerabilities
• Poor handling of unexpected input can lead to application crashes or surrender of control to execute arbitrary instructions
• Unintentional or ill-advised use of externally supplied input in control contexts may allow remote manipulation or information leakage
• Cryptographic flaws may be exploited to circumvent privacy protections

Presentation Layer Controls
• Careful specification and checking of input received into applications or library functions
• Separation of user input and program control functions - input should be sanitized and sanity-checked before being passed into functions that use it to control operation
• Careful and continuous review of cryptography solutions to ensure current security versus known and emerging threats
Application Layer Vulnerabilities
• Open design issues allow free use of application resources by unintended parties
• Backdoors and application design flaws bypass standard security controls
• Inadequate security controls force an "all-or-nothing" approach, resulting in either excessive or insufficient access
• Overly complex application security controls tend to be bypassed or poorly understood and implemented
• Program logic flaws may be accidentally or purposely used to crash programs or cause undesired behavior

Application Layer Controls
• Controls must be detailed and flexible, but also straightforward, to prevent complexity issues from masking policy and implementation weakness
• Standards, testing, and review of application code and functionality - a baseline is used to measure application implementation and recommend improvements
• IDS systems to monitor application inquiries and activity
• Some host-based firewall systems can regulate traffic by application, preventing unauthorized or covert use of the network

3.2.2 Network Security Tools


Various techniques exist for ensuring network security and protecting information. The key targets for an attacker are servers such as web servers, mail servers and DNS servers. Attackers use tools such as traceroute for mapping the network and ping for checking which hosts are alive, so one has to make sure that the firewall is able to block ping requests and traceroute packets.

1. Port Scanners
Systems that offer TCP or UDP services will have an open port for each such service; for example, a web server will have TCP port 80 open. The role of a port scanner is to scan a host or range of hosts to determine which ports are open and what kinds of services are running on them. From this, the attacker learns what kinds of systems can be attacked, what services are being offered, and what operating systems are in use.
To counter this threat, the defender should audit the network with a scanning solution of their own, one that can run on multiple operating systems and offers features such as OS fingerprinting, service version scanning and stealth scanning; the basic probe such tools perform is sketched below.
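The essence of a TCP connect scan fits in a few lines of standard-library Python; the target address (a TEST-NET placeholder) and port list are illustrative, and such probes should only ever be run against hosts one is authorised to test.

# Sketch of a TCP connect probe, the basic operation of a port scanner.
# 192.0.2.10 is a TEST-NET placeholder; the port list is illustrative.
import socket

target = "192.0.2.10"
for port in (21, 22, 25, 80, 443):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when the TCP handshake succeeds,
        # i.e. when a service is listening on that port.
        if s.connect_ex((target, port)) == 0:
            print(f"Port {port} is open")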

2. Network Sniffers
The role of a network sniffer is to capture all the traffic crossing the network. This type of attack uses the network interface card (NIC), or LAN card, installed in the system: the NIC is put into a mode in which it picks up all traffic, irrespective of whether it was meant for it or not. A sniffer is set up to capture network traffic and obtain logins and passwords that could provide an entry into the main system. Some examples of network sniffing tools are Ethereal, Snort and TCPdump.


In networks that operate in a switched environment, a conventional network sniffer is ineffective. These networks are attacked using a switched-network sniffer such as Ettercap, which helps the attacker collect passwords, hijack sessions, and modify or kill connections that are being established.
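A capture loop of the kind these tools implement can be sketched with the third-party Scapy library; running it requires administrative privileges, and the packet count is an arbitrary illustrative choice.

# Sketch of packet capture using the third-party Scapy library
# (assumption: pip install scapy; requires administrative privileges).
from scapy.all import sniff

def show(packet):
    # Print a one-line summary of each frame the NIC picks up,
    # whether or not it was addressed to this host.
    print(packet.summary())

# Capture 10 packets from the default interface and summarise them.
sniff(count=10, prn=show)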

3. Password Crackers
On getting access, the attacker goes after the password file on the main system. The attacker wants to access the database and hijack valuable information by logging in with the obtained passwords. The password cracker tries possible combinations to crack the code, and it is only a matter of time before the attacker is able to log in.

3.2.3 Security Policies and Controls

A company's security plan consists of security policies. Security policies give specific guidelines about areas of
responsibility and consist of plans that provide steps to take and rules to follow to implement the policies. Therefore,
in order to get a better picture of an organisation’s functioning, one must know about the policies of that organisation
and how they are implemented.
Policies should define what one considers valuable and should specify what steps should be taken to safeguard those
assets. Policies can be drafted in many ways. One example is a general policy of only a few pages that cover most
possibilities. Another example is a draft policy for different sets of assets, including email policies, password policies,
internet access policies, and remote access policies.
Two common problems with organisational policies are:
1. They are platitudes rather than decisions or directions, and are not really used by the organisation. Instead, the policy is a piece of paper to show to auditors, lawyers, other organisational components or customers, but it does not affect behaviour.
2. Security policies that are too stringent are often bypassed because people get tired of adhering to them (the
human factor), which creates vulnerabilities for security breaches and attacks. For example, specifying a restrictive
account lockout policy increases the potential for denial of service attacks.
A good risk assessment will determine whether good security policies and controls are implemented. Vulnerabilities
and weaknesses exist in security policies because of poor security policies and the human factor.
An example is implementing a security keypad on the server room door. Administrators may get tired of entering the security PIN and prop the door open with a book or broom, thereby bypassing the security control. Specifying a restrictive password policy can actually reduce the security of the network. For example, if one requires passwords longer than seven characters, most users have difficulty remembering them. They might write their passwords down and leave them where an intruder can find them.

Closing ports
The transport layer protocols, namely the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP), identify the applications communicating with each other by means of port numbers. It is considered good practice to identify and close unnecessary and unused ports, because attackers can use these openings as entry points when trying to access the main network.
To be effective, policy requires visibility. Visibility aids the implementation of policy by helping to ensure a policy is
fully communicated throughout the organisation. This is achieved through a plan of each policy that is a written set
of steps and rules. The plan defines when, how, and by whom the steps and rules are implemented. Management
presentations, videos, panel discussions, guest speakers, question/ answer forums, and newsletters increase visibility.
If an organisation has computer security training and awareness, it is possible to effectively notify users of new
policies. It also can be used to familiarise new employees with the organisation's policies.


Computer security policies should be introduced in a manner that ensures management's unqualified support,
especially in environments where employees feel inundated with policies, directives, guidelines, and procedures.
The organisation's policy is the vehicle for emphasising management's commitment to computer security and clarifying their expectations for employee performance, behaviour and accountability.

Fig 3.7: Relationship between a good risk assessment and good security policies and controls

Types of security policies


Policies can be defined for any area of security. It is up to the security administrator and IT manager to classify what
policies need to be defined and who should plan the policies. There can be policies for the entire company or policies
for various sections within the company. The various types of policies that can be included are:
• password policies
• administrative responsibilities
• user responsibilities
• email policies
• internet policies
• backup and restore policies

Password policies
The security provided by a password system depends on the passwords being kept secret at all times. Thus, a password is vulnerable to compromise whenever it is used, stored, or even known. In a password-based authentication mechanism implemented on a system, passwords are vulnerable to compromise because of several poor practices:
• A default password is initially assigned to a user when enrolled on the system; if hacked, it can provide the hacker access to a large number of systems. This is particularly true because many people do not change the default password.


• Employees use passwords that are commonly used by all or are previously compromised passwords.
• Passwords are shared among team members.
• Companies employ out-of-date password quality standards.
• Organizations don’t use other security measures to protect against the compromised passwords.
• Users are expected to remember their passwords. Because of this, they either choose simple passwords or use personal information that people can guess, while computer-generated passwords are difficult to remember.
Password policies can be set depending on the needs of an organisation. For example, it is possible to specify a minimum password length, prohibit blank passwords, and set maximum and minimum password ages. It is also possible to prevent users from reusing passwords and to require specific characters in passwords, making them more difficult to crack. This can be set through Windows 2000 account policies discussed later in the paper.
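The kind of checks such a policy implies can also be sketched programmatically; the thresholds and the tiny "common password" list below are illustrative assumptions, not values mandated by this handbook.

# Sketch of programmatic password-policy checks; the minimum length,
# required character classes and common-password list are illustrative.
import re

MIN_LENGTH = 8
COMMON_PASSWORDS = {"password", "123456", "qwerty"}  # tiny sample list

def password_problems(password):
    """Return a list of policy violations for the candidate password."""
    problems = []
    if len(password) < MIN_LENGTH:
        problems.append("shorter than %d characters" % MIN_LENGTH)
    if password.lower() in COMMON_PASSWORDS:
        problems.append("commonly used password")
    if not re.search(r"[A-Z]", password):
        problems.append("no uppercase letter")
    if not re.search(r"[0-9]", password):
        problems.append("no digit")
    if not re.search(r"[^A-Za-z0-9]", password):
        problems.append("no special character")
    return problems

print(password_problems("summer21"))  # reports the missing classes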

Administrative responsibilities
Many systems come from the vendor with a few standard user logins already enrolled in the system. Change passwords
for all standard user logins before allowing the general user population to access the system. For example, change the
administrator password when installing the system.
The administrator is responsible for generating and assigning the initial password for each user login, and the user must then be informed of this password. In some areas, it may be necessary to prevent exposure of the password to the administrator; in other cases, the user can easily nullify this exposure. To prevent exposure of a password, it is possible to use smart card encryption in conjunction with the user's username and password: even if the administrator knows the password, he/she will be unable to use it without the smart card. When a user's initial password must be exposed to the administrator, this exposure may be nullified by having the user immediately change the password by the normal procedure. Occasionally, a user will forget the password, or the administrator may determine that a user's password may have been compromised.
To be able to correct these problems, it is recommended that the administrator is permitted to change the password
of any user by generating a new one. The administrator should not have to know the user's password in order to do
this but should follow the same rules for distributing the new password that applies to initial password assignment.
Positive identification of the user by the administrator is required when a forgotten password must be replaced.

User responsibilities
Users should understand their responsibility to keep passwords private and to report changes in their user status,
suspected security violations and so forth. To assure security awareness among the user population, it is recommended
that each user is required to sign a statement to acknowledge understanding these responsibilities.
The simplest way to recover from a compromised password is to change it. Therefore, passwords should be changed
on a periodic basis to counter the possibility of undetected password compromise. They should be changed often
enough so that there is an acceptably low probability of compromise during a password's lifetime. To avoid needless
exposure of users' passwords to the administrator, users should be able to change their passwords without any intervention by the administrator.

Email policies
Email is increasingly critical to the normal conduct of business. Organisations need policies for email to help employees
use it properly, to reduce the risk of intentional or inadvertent misuse, and to assure that official records transferred
via email are properly handled. Similar to policies for the appropriate use of telephone, organisations need to define
appropriate use of email.
Organisational policies are needed to establish general guidance in areas such as:
• use of email to conduct official business
• use of email for personal business
• access control and confidential protection of messages
• management and retention of emails


It is easy to have email accidents. Email folders can grow until the email system crashes. Badly configured discussion group software can send messages to the wrong groups. Errors in email lists can flood subscribers with hundreds of error messages, and sometimes error messages will bounce back and forth between email servers. Some ways to prevent accidents are to:
• train users what to do when things go wrong, as well as how to do it right
• configure email software so that the default behaviour is the safest behaviour
• use software that follows internet email protocols and conventions religiously
Every time an online service gateway connects its proprietary email system to the internet, there are howls of
protest because of the flood of error messages that result from the online service's misbehaving email servers.
Using encryption algorithms to digitally sign email messages can prevent impersonation. Encrypting the contents of the message, or the channel it is transmitted over, can prevent eavesdropping. Email encryption is discussed later in this paper under 'Public key infrastructures'.
Using public locations like internet cafes and chat rooms to access email can lead to the user leaving valuable
information cached or downloaded on computers. Users need to clean up the computer after they use it, so no
important documents are left behind. This is often a problem in places like airport lounges.

Internet policies
The World Wide Web has a body of software and a set of protocols and conventions used to traverse and find
information over the internet. Through the use of hypertext and multimedia techniques, the web is easy for anyone
to roam, browse and contribute to.
Web clients, also known as web browsers, provide a user interface to navigate through information by pointing and clicking. Browsers also introduce vulnerabilities to an organisation, although generally less severe than the threat posed by servers. Various settings can be configured on Internet Explorer browsers by using Group Policy in Windows 2000.
Web servers can be attacked directly or used as jumping-off points to attack an organisation's internal networks. There are many areas of web servers to secure: the underlying operating system, the web server software, server scripts and other software, and so forth. Firewalls and proper configuration of routers and the IP stack can help to fend off denial of service attacks.

Backup and restore policies


Backups are worthwhile only if the information stored on the system is of value. They are important for a number of reasons:
• Computer hardware failure: In case certain hardware devices, such as hard drives or RAID systems fail
• Software failure: Some software applications could have flaws in them whereby information is interpreted
or stored incorrectly
• User error: Users often delete or modify files accidentally. Making regular backups can help restore deleted
or modified files
• Administrator error: Sometimes administrators also make mistakes, such as accidentally deleting active
user accounts
• Hacking and vandalism: Computer hackers sometimes alter or delete data
• Theft: Computers are expensive and easy to sell. Sometimes a thief will steal just the hardware inside the computer, such as hard drives, video cards and sound cards
• Natural disasters: Floods, earthquakes, fires, and hurricanes can cause disastrous effects on computer
systems. A building can be demolished or washed away.
• Other disasters: Unforeseeable accidents can cause damage. Some examples are, if a plane crashes into
buildings or if gas pipes leak and cause explosions.


When doing hardware and software upgrades:


• One must never upgrade without backing up data files
• Be sure to back up system information, such as registries, master boot records and the partition boot
sector in operating systems, such as Microsoft Windows 2000 and Microsoft Windows NT
• Make sure that an up-to-date emergency repair disk exists
Information that should be backed up includes:
• Important information that is sensitive to the organisation and for the continuity of operations, which
includes databases, mail servers and any user files
• System databases, such as registries and user account databases

Backup policies
The backup policies should include plans for:
• Regularly scheduled backups
• Types of backups – most backup systems support normal backups, incremental backups and differential backups
• Scheduled backups – the schedule should normally be during the night when a company has the least
numbers of users
• Information to be backed up
• Type of media used for backups – tapes, CD-ROMs, other hard drives and so forth
• Type of backup devices – tape devices, CD writers, other hard drives, swappable hard drives, or even a network share
Devices also come in various speeds, normally measured in megabytes backed up per minute. The time taken to perform backups depends on the system requirements.

Onsite and offsite storage of backups


• Onsite storage: Stores backups in a fireproof safe. Backups should not be stored in the drawer of the
table on which the computer sits. Secure storage protects against natural disaster, theft, and sabotage of
critical data. All software, including operating system software, service packs, and other critical application
software should also be safely stored.
• Offsite storage: Important data should also be stored offsite. Certain companies specialise in storing data; an alternative solution is a safe deposit box at a bank.

IP security policies
The Internet Protocol (IP) underlies the majority of corporate networks as well as the internet. It has worked well for
decades. It is powerful, highly efficient and cost-effective. Its strength lies in its flexibly routed packets, in which data
is broken up into manageable pieces for transmission over networks. And it can be used by any operating system.
In spite of its strengths, IP was never designed to be secure. Due to its method of routing packets, IP-based networks
are vulnerable to spoofing, sniffing, session hijacking and man-in-the-middle attacks — threats that were unheard of
when IP was first introduced.
The initial attempts to provide security over the internet have been application-level protocols and software, such as Secure Sockets Layer (SSL) for securing web traffic and Pretty Good Privacy (PGP) for securing email. These, however, are limited to specific applications.
By using IP security, it is possible to secure and encrypt all IP traffic. IP security policies in Windows 2000 can be used to control how, when and on whom IP security works. An IP security policy can define many rules, such as:
• which IP addresses to scan for
• how to encrypt packets
• filters that examine all IP traffic passing through the object on which the IP security policy is applied


3.3 INTRUSION DETECTION & PREVENTION SYSTEM

Intrusion Detection Systems (IDS)


Intrusion Detection Systems basically identify intrusion threats, attacks and malicious activities in a network, and generate alerts. The limitation of an IDS is that it cannot resolve network attacks: it sits passively in the network, watching traffic much like a packet sniffer. The IDS analyses copies of the packets on the network segment to detect attacks, or to determine whether an attack has already taken place, and alerts the network administrator to what is happening in the network.

Intrusion Prevention System (IPS)


Intrusion Prevention is the process of both detecting intrusion activities or threats and managing responsive actions to those detected intrusions and threats throughout the network. An IPS monitors packet traffic in real time for malicious activity, or for traffic that matches specific profiles, triggers the generation of alerts, and can drop or block that traffic as it passes through the network. The main IPS countermeasure is to stop an attack in progress.

IDS and IPS terms under network security


In network security, the firewall serves the main purpose of security, but it also has to allow network traffic on specified ports into or out of the network. A firewall cannot tell whether traffic sent on a permitted port is legitimate or part of an intrusion attempt. If, for example, one allows remote access to an internal web server by allowing inbound access on TCP port 80, then an invader could use this port to attack the web server. In this case, an IDS can distinguish between allowed connections to a web server and an attempted attack on it by comparing the signature of the traffic to a database of known attack signatures. The IDS will flag such an attack by generating an alert so that appropriate action can be taken; an IPS, on the other hand, will act on the detected attack connection and drop or close it.
Intrusion Detection and Intrusion Prevention Systems, IDS and IPS respectively, are network level defences deployed
in thousands of computer networks worldwide. The basic difference between these two technologies lies in how they
protect network environments in terms of detection versus prevention: an IDS generates only alerts or logs after threats or malicious activities have occurred, merely detecting likely intrusions and reporting them to the network administrators.
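The signature comparison described above can be reduced to a toy sketch; real IDS engines use far richer rule languages, and the byte patterns below are illustrative stand-ins for genuine attack signatures.

# Toy sketch of signature-based detection: each payload is compared
# against a database of known attack signatures. The patterns below
# are illustrative stand-ins, not a real signature database.
SIGNATURES = {
    b"/etc/passwd": "path traversal attempt",
    b"' OR '1'='1": "SQL injection attempt",
    b"<script>": "cross-site scripting attempt",
}

def inspect(payload):
    """Return alert names for every signature found in the payload."""
    return [name for pattern, name in SIGNATURES.items() if pattern in payload]

# An IDS would only raise the alert; an IPS could also drop the packet.
for alert in inspect(b"GET /item.php?id=' OR '1'='1 HTTP/1.1"):
    print("ALERT:", alert)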

Difference between IDS and IPS systems


IDS and IPS were originally developed to address requirements that most firewalls lack. IDS are basically used to detect threats or intrusions in a network segment, while IPS is focused on identifying those threats or intrusions and blocking or dropping their activity.
IDS and IPS share similar functions, such as packet inspection, stateful analysis, TCP segment reassembly, deep packet inspection, protocol validation and signature matching. A good analogy for the difference between IDS and IPS is a security gate: an IDS works like a patrol car within the border, monitoring activities and looking for abnormal situations, while an IPS operates like a security guard at the gate, allowing and denying access based on credentials and some predefined rule set or policy. No matter how strong the security at the gate is, the patrols continue to operate in a system that provides its own checks.
IDS: An IDS is software or an appliance that detects threats and unauthorised or malicious network traffic. IDS have their own predefined rule sets through which they can inspect the configuration of endpoints to determine whether they may be susceptible to attack (this is known as host-based IDS), and they can also record activities across a network and compare these to known attacks or attack patterns (this is called network-based IDS). The purpose of intrusion detection is to provide monitoring, auditing, forensics and reporting of malicious network activities.


An IDS is deployed to:
• detect network attacks
• identify intruders
• preserve logs in case the incident leads to criminal prosecution

IPS: An IPS not only detects the bad packets caused by malicious code, botnets, viruses and targeted attacks, but also takes action to prevent those network activities from causing damage to the network. The attacker's main motive is to take sensitive data or intellectual property, through which he/she can get customers' data, employee information, financial records, etc. The IPS is designed to provide protection for assets, resources, data and networks. An IPS is deployed to:
• stop the attack
• change the security environment
Technology has been developed to serve as both detection and prevention systems. Intrusion Detection and
Prevention Systems (IDPS) are primarily focused on identifying possible incidents. For example, an IDPS can detect
when an attacker has successfully compromised a system by exploiting a vulnerability in the system. The IDPS can
then report the incident to security administrators, who can quickly initiate incident response actions to minimise
damage caused by the incident. The IDPS could also log information that can be used by incident handlers. An IDPS
might be able to block reconnaissance and notify security administrators, who can take actions, if needed, to alter
other security controls to prevent related incidents.
In addition to identifying incidents and supporting incident response efforts, organisations have found other uses for
IDPSs, including the following:
• Identifying security policy problems
An IDPS can provide a degree of quality control for security policy implementation, such as duplicating firewall rulesets and alerting when it sees network traffic that should have been blocked by the firewall but was not because of a firewall configuration error.
• Documenting the existing threat to an organisation
IDPS log information about threats that they detect. Understanding the frequency and characteristics
of attacks against an organisation’s computing resources is helpful in identifying appropriate security
measures for protecting resources. The information can also be used to educate management about the
threats that an organisation faces.
• Deterring individuals from violating security policies
If individuals are aware that their actions are being monitored by IDPS technologies for security policy
violations, they may be less likely to commit such violations because of the risk of detection.
Because of the increasing dependence on information systems and the prevalence and potential impact of
intrusions against those systems, IDPS have become a necessary addition to the security infrastructure of
nearly every organisation.

Key functions of IDPS technologies


There are many types of IDPS technologies, which are differentiated primarily by the types of events they can recognise
and the methodologies that they use to identify incidents.
In addition to monitoring and analysing events to identify undesirable activity, all types of IDPS technologies typically
perform the following functions:
• Recording information related to observed events
Information is usually recorded locally, and might also be sent to separate systems such as centralised
logging servers, security information and event management (SIEM) solutions, and enterprise management
systems.


• Notifying security administrators of important observed events


This notification, known as an alert, occurs through any of several methods, including the following: emails,
pages, messages on the IDPS user interface, Simple Network Management Protocol (SNMP) traps, syslog
messages, and user-defined programmes and scripts. A notification message typically includes only basic
information regarding an event. Administrators need to access the IDPS for additional information.
• Producing reports
Reports summarise the monitored events or provide details on particular events of interest. Some IDPS
are also able to change their security profile when a new threat is detected. For example, IDPS might be
able to collect more detailed information for a particular session after malicious activity is detected within
that session. It might also alter the settings for when certain alerts are triggered or what priority should be
assigned to subsequent alerts after a particular threat is detected.


3.4 IMPLEMENTATION OF A FIREWALL

A firewall possesses the capability of screening both incoming and outgoing traffic, but it is the former that poses the greater threat to the network. This is why incoming traffic is screened more closely than outgoing traffic. There are three types of screening that a firewall can perform:
• blocking incoming data that was not requested by a user
• blocking any address that does not represent an authenticated user
• blocking communication contents that are not required
The screening process can be likened to a process of elimination. The first step is to determine whether the incoming transmission was requested by a user and is verified. Once it is allowed, it is checked more closely to ascertain that it comes from a trusted site. A firewall also checks the contents of the transmission.

Types of Attack
There is a need to understand the nature of the security threats that exist before choosing a specific type of firewall. The internet, being a large community, consists of both good and bad elements. Bad elements range from outsiders who damage the network unintentionally to malicious hackers who use the internet to mount deliberate assaults on companies. The attacks that can have an adverse effect on businesses are:
• Information theft: This involves stealing organisational information such as employee records, customer
records, or company intellectual property.
• Information sabotage: Herein, the attacker modifies the information to damage an individual or the
organisational reputation. This can be achieved by changing employee medical or educational records or
uploading derogatory content onto the web site.
• Denial of Service (DoS): By denial of services, one understands that the organisational network and servers
are brought down which stops the legitimate users from accessing the services. This directly interrupts the
normal operations.

Attempts to Gain Access


To gain access, the hacker starts by gathering information about the network, which is then used for stealing and destroying the data within that network. A hacker makes use of a port scanner to find out how the network is structured and what software is being run on it. A port scanner is a piece of software that is capable of mapping a network. The hacker can exploit software weaknesses and utilise hacking tools to access confidential files, and thereafter can penetrate the administrator's files and wipe an entire drive. A good password security policy is one countermeasure. Firewalls are usually immune to port scanning, and as more port scanners are developed, firewall vendors produce patches to maintain that immunity. Let us now venture into some of the firewall technologies that exist.

Firewall Technologies
Firewalls are available in a variety of shapes, sizes and prices. The selection of a firewall is decided by the business
requirements and the size of the network. Irrespective of the firewall chosen, there is a need to ensure it is secured
and certified by a trusted third party such as International Computer Security Association (ICSA). ICSA has classified
firewalls into three categories, namely packet filter firewalls, application-level proxy servers, and stateful packet inspection firewalls.


Packet Filter Firewalls


Each computer system within a network is recognised by a unique IP address. Packet filter firewalls check the address of incoming traffic and do not entertain anything that does not fall within the set of trusted IP addresses. These firewalls use rules to deny access based on the information located in each packet, i.e. the TCP/IP port number, source/destination IP address, or data type. Although an ordinary router can screen traffic by address, hackers can still use a trick known as source IP spoofing to make data appear to arrive from a trusted source. These firewalls have the major limitations of being prone to IP spoofing and difficult to configure; a mistake made in the configuration could leave the system wide open to attack.

Application-Level Proxy Server


An application-level proxy server examines the application for which incoming IP packets are intended, in order to verify their authenticity. Traffic from each application, such as HTTP (for the web), FTP (for file transfer) and SMTP/POP3 (for e-mail), requires the installation and configuration of its own application proxy. To support the proxy, administrators need to reconfigure network settings and applications (e.g. web browsers), which can prove to be a tedious task.

Stateful Packet Inspection Firewall


The latest generation of firewall technology, the stateful packet inspection firewall examines all parts of the IP packet to determine how to handle the communication, deciding whether to accept or reject each request. The firewall keeps track of all requests originating from the network, then scans incoming communication to check whether it was requested and rejects it if it was not. At the next level, the screening software carefully determines the state of every packet, which gives the technique its name: stateful packet inspection (a toy sketch of this state tracking follows below).
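The bookkeeping behind stateful inspection can be sketched as a connection table; the addresses and ports are illustrative, and real firewalls track far more state (flags, sequence numbers, timeouts).

# Toy sketch of stateful inspection: inbound packets are admitted only
# if they match a connection an inside host initiated. Addresses and
# ports are illustrative assumptions.
tracked = set()  # (src_ip, src_port, dst_ip, dst_port) of outbound flows

def record_outbound(src, sport, dst, dport):
    tracked.add((src, sport, dst, dport))

def inbound_allowed(src, sport, dst, dport):
    # A legitimate reply is the exact reverse of a tracked outbound flow.
    return (dst, dport, src, sport) in tracked

record_outbound("10.0.0.5", 51000, "203.0.113.7", 443)
print(inbound_allowed("203.0.113.7", 443, "10.0.0.5", 51000))   # True
print(inbound_allowed("198.51.100.9", 443, "10.0.0.5", 51000))  # False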
To counter security attacks, firewalls are being equipped with many new security features. Alongside the core security capabilities, support is provided for public web and e-mail servers (via a demilitarised zone), content filtering, virtual private networking (VPN) encryption, and antivirus protection. Let us now read about these additional features and functionalities of security firewalls.

Demilitarized Zone (DMZ) Firewalls


In the networking terminologies, a demilitarized zone is a special local network configuration designed to improve
security by segregating computers on each side of a firewall. DMZ is a physical or logical subnet that separates an
internal local area network (LAN) from other un-trusted networks, usually the internet. It consists of external-facing
servers, resources and related services. These services are accessible from the internet, but the rest of the internal LAN
remains unreachable. This provides an additional layer of security to the LAN as it restricts the ability of hackers to
directly access internal servers and data via the internet. Any service provided to users on the public internet should
be placed in the DMZ network. Some of the most common of these services include web servers and proxy servers,
as well as servers for email, domain name system (DNS), File Transfer Protocol (FTP) and voice over IP (VoIP). A DMZ
firewall functions by creating a protected or 'demilitarised' information area on the network. Outsiders can get to this area but cannot penetrate the mainstream of the communication network. This is key to allowing users to access the information the organisation wants to share, while preventing them from accessing confidential information.

Content Filtering
A content filter extends the firewall's capability by blocking access to certain websites. This add-on can be used to keep a check on the content that can be viewed on the internet, such as ensuring employees do not access unsuitable material in the office environment. Using this functionality, one can define the type of content to be filtered and obtain a list of websites that offer such content, then choose to either block those sites or ask for a log-in. Such a service should also keep its list of prohibited websites updated on a regular basis.


Virtual Private Networks (VPNs)


A VPN or Virtual Private Network utilises public network infrastructure i.e. the Internet. The purpose of a VPN is to
provide capabilities that are similar to a private leased line but at a lower cost. VPN enables secured sharing of data
resources by making use of encryption techniques thereby ensuring only authorised access. Various organisations
have adopted VPNs as a cost-effective strategy towards connecting branch offices, remote workers, and privileged
partners/customers to their private LANs. A large number of firewalls are equipped with VPN capabilities, either built in or offered as an extra option.
While implementing a VPN, there is a need to ensure that the devices at both ends support the same level of encryption. Triple DES (3DES, using a 168-bit key) was long regarded as the strongest widely deployed VPN encryption, although current deployments favour AES. It should be noted that the stronger the encryption, the more processing power the firewall requires. Some firewall vendors provide VPN hardware accelerators to improve VPN traffic performance.

Antivirus Protection
Desktop antivirus packages like Norton Antivirus and McAfee need no introduction. The way these operate is fairly simple -- when researchers find a new virus, they identify some unique characteristic it has (perhaps a registry key it creates or a file it replaces) and from this they write the virus ‘signature’. The whole collection of signatures for which the antivirus software scans is known as the virus ‘definitions’. This is why keeping virus definitions up to date is very important; many antivirus packages have an auto-update feature to download the latest definitions. The scanning ability of the software is only as good as its definitions are current. In the enterprise, it is very common for administrators to install antivirus software on all machines but to have no policy for regularly updating the definitions. This is meaningless protection and serves only to provide a false sense of security. With the recent spread of email viruses, antivirus software at the mail server is becoming increasingly popular. The mail server automatically scans any email it receives for viruses and quarantines infections. The idea is that since all mail passes through the mail server, this is the logical point to scan for viruses. Given that most mail servers have a permanent connection to the internet, they can regularly download the latest definitions. On the downside, these scanners can be evaded quite simply: if an attacker zips up or encrypts the infected file or Trojan, the antivirus system may not be able to scan it. End users must be taught how to respond to antivirus alerts. This is especially true in the enterprise -- an attacker doesn't need to bypass a fortress-like firewall if all he has to do is email Trojans to a lot of people in the company. It takes just one uninformed user opening the infected attachment to give the attacker a backdoor into the internal network. It is advisable that the IT department give a brief seminar on how to handle email from untrusted sources and how to deal with attachments. These are very common attack vectors, simply because a user may harden a computer system as much as he/she likes, but the weak point remains the person who operates it. As crackers say, "The human is the path of least resistance into the network."

Selecting a firewall
Data administrators can implement firewalls either as software or as an addition to an existing router/gateway. Firewalls have also seen a rise in popularity owing to their ease of use, improved performance and lower costs.
Router/Firmware Based Firewalls
Routers offering limited firewall capabilities can be augmented with additional software and firmware. As a precaution, administrators must ensure that the router does not get overburdened by running too many services. Extended functionalities such as VPN, DMZ, content filtering or antivirus protection may be too expensive or not available at all.
Software-Based Firewalls
Software-based firewalls can be understood as complex applications that run on dedicated UNIX or Windows NT servers. Once the additional costs of the software, the server operating system, the server hardware and continuous maintenance are accounted for, the total cost becomes considerably higher. Data administrators must constantly monitor for and install the latest OS and security patches to counter threats; in the absence of such patches, a software firewall is weakened and can be rendered useless.
Firewall Appliances


A large majority of the firewall appliances are dedicated, hardware-based systems. Since these appliances run on
embedded OS, they are less susceptible to various types of security weaknesses that are visible in Windows NT and
UNIX operating systems. These firewalls are designed in such a way that they meet the high throughput requirements
and processor-intensive requirements of stateful packet inspection firewalls. These firewall appliances are easier to install and configure than software firewall products: they offer plug-and-play installation, require minimal maintenance and constitute a complete solution. When compared to other firewalls, they prove to be extremely cost-effective.

NextGen Firewalls (NGFWs)


A typical next generation firewall (NGFW) includes features such as:
• Application access and user control
• Integrated intrusion protection
• Advanced malware detection using techniques such as sandboxing
• Leveraging threat intelligence feeds
In addition to the above-mentioned features, an NGFW offers services such as Network Address Translation (NAT), dynamic routing protocol support and high-availability capability. Enterprises that deploy next-generation firewalls at scale require a strong central management system, inspection of HTTPS tunnels, integration with third-party vendors and well-defined APIs for provisioning and policy making. NextGen Firewalls can be deployed in areas such as:
• on-premises at the edge of enterprises and branch offices
• on-premises at internal segment boundaries
• in public clouds, e.g. Amazon (AWS), Microsoft Azure, Google Cloud Platform
• in private clouds, e.g. VMware, Cisco ACI

Benefits of NextGen Firewalls


NGFWs enable the safe use of Internet applications, allowing users to be more productive while blocking undesirable applications. This is accomplished by using deep packet inspection techniques to identify and control applications regardless of the IP port being used. The basic security policy of an organization is focused on blocking inbound connections and allowing outbound connections, which may themselves be limited as needed. With this type of firewall, organizations gain more visibility into the applications being accessed by their employees and can establish proper control.

IPTables Commands
IPTables, a rule-based firewall, comes pre-installed on most Linux operating systems. IPTables succeeded the earlier ipchains and ipfwadm tools and was introduced with kernel 2.4. The firewall is a front-end tool for interacting with the kernel and defining which packets are to be filtered. Let us look at the practical iptables rules that are commonly used.
There are different versions of IPTables used in different protocols:
• iptables applies to IPv4
• ip6tables applies to IPv6.
• arptables applies to ARP.
• ebtables applies to Ethernet frames.


The main files in IPTables are:


• /etc/init.d/iptables – init script to start|stop|restart and save rulesets
• /etc/sysconfig/iptables – where Rulesets are saved
• /sbin/iptables – binary
At present, there are three main tables, namely Filter, NAT and Mangle. The chains are:
• INPUT : default chain for packets destined to the system.
• OUTPUT : default chain for packets generated by the system.
• FORWARD : default chain for packets routed through the system to another interface.
• RH-Firewall-1-INPUT : a user-defined custom chain.
Note that the above file locations may differ on the Ubuntu Linux platform. Let us now learn the commands used to perform basic operations with this firewall.

 For starting, stopping or restarting the firewall, type the following commands:

# /etc/init.d/iptables start
# /etc/init.d/iptables stop
# /etc/init.d/iptables restart

 For starting IPTables on system boot, type the given command.

#chkconfig --level 345 iptables on

 Use the following command to save the IPTables rules so that they are applied by default and can be restored in case they are flushed.

#service iptables save
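On distributions without the Red Hat service wrapper, the equivalent can be achieved with the standard iptables-save and iptables-restore utilities -- a minimal sketch (the rules-file path is an arbitrary choice):

# Dump the current ruleset to a file (iptables-save writes to stdout)
iptables-save > /etc/iptables.rules
# Re-apply the saved ruleset later, e.g. after a flush or a reboot
iptables-restore < /etc/iptables.rules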

 For checking the status of IPTables, use option ‘-L’ to list the ruleset, ‘-v’ for verbose output and ‘-n’ to display the results in numeric format.

[root@tecmint ~]# iptables -L -n -v


Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in  out source      destination
    6   396 ACCEPT all  --  *   *   0.0.0.0/0   0.0.0.0/0   state RELATED,ESTABLISHED
    0     0 ACCEPT icmp --  *   *   0.0.0.0/0   0.0.0.0/0
    0     0 ACCEPT all  --  lo  *   0.0.0.0/0   0.0.0.0/0
    0     0 ACCEPT tcp  --  *   *   0.0.0.0/0   0.0.0.0/0   state NEW tcp dpt:22
    0     0 REJECT all  --  *   *   0.0.0.0/0   0.0.0.0/0   reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in  out source      destination
    0     0 REJECT all  --  *   *   0.0.0.0/0   0.0.0.0/0   reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT 5 packets, 588 bytes)


pkts bytes target prot opt in out source destination


 For displaying IPTables rules with line numbers, use the given command. These line numbers can then be used as arguments when appending or removing rules.

[root@tecmint ~]# iptables -n -L -v --line-numbers

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
num  pkts bytes target prot opt in  out source      destination
1      51  4080 ACCEPT all  --  *   *   0.0.0.0/0   0.0.0.0/0   state RELATED,ESTABLISHED
2       0     0 ACCEPT icmp --  *   *   0.0.0.0/0   0.0.0.0/0
3       0     0 ACCEPT all  --  lo  *   0.0.0.0/0   0.0.0.0/0
4       0     0 ACCEPT tcp  --  *   *   0.0.0.0/0   0.0.0.0/0   state NEW tcp dpt:22
5       0     0 REJECT all  --  *   *   0.0.0.0/0   0.0.0.0/0   reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num  pkts bytes target prot opt in  out source      destination
1       0     0 REJECT all  --  *   *   0.0.0.0/0   0.0.0.0/0   reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT 45 packets, 5384 bytes)


num pkts bytes target prot opt in out source destination

 Use the given command to flush (delete) all IPTables rules.

[root@tecmint ~]# iptables -F

 The following commands display the rulesets in the INPUT and OUTPUT chains with rule numbers.

[root@tecmint ~]# iptables -L INPUT -n --line-numbers

Chain INPUT (policy ACCEPT)

num  target prot opt source      destination
1    ACCEPT all  --  0.0.0.0/0   0.0.0.0/0   state RELATED,ESTABLISHED
2    ACCEPT icmp --  0.0.0.0/0   0.0.0.0/0
3    ACCEPT all  --  0.0.0.0/0   0.0.0.0/0
4    ACCEPT tcp  --  0.0.0.0/0   0.0.0.0/0   state NEW tcp dpt:22
5    REJECT all  --  0.0.0.0/0   0.0.0.0/0   reject-with icmp-host-prohibited

[root@tecmint ~]# iptables -L OUTPUT -n --line-numbers


Chain OUTPUT (policy ACCEPT)

num target prot opt source destination


 If you want to delete a rule (say rule no.5) from the INPUT chain, use the given command.

[root@tecmint ~]# iptables -D INPUT 5

 For inserting a rule into the INPUT chain at position 5 (between existing rules 4 and 5), use the given command.

[root@tecmint ~]# iptables -I INPUT 5 -s ipaddress -j DROP
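For instance (using the RFC 5737 documentation range 203.0.113.0/24 purely as a stand-in for a real offending network), replacing the ipaddress placeholder might look like this:

# Insert a rule at position 5 of INPUT dropping all traffic
# from the 203.0.113.0/24 network
iptables -I INPUT 5 -s 203.0.113.0/24 -j DROP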

IPTables Control Scripts Configuration File


The behavior of the iptables initscripts is controlled by the /etc/sysconfig/iptables-config configuration file. The
following is a list of directives contained in this file:
1. IPTABLES_MODULES - Specifies a space-separated list of additional iptables modules to load when a firewall is
activated. These can include connection tracking and NAT helpers.
2. IPTABLES_MODULES_UNLOAD - Unloads modules on restart and stop. This directive accepts the following
values:
• yes — The default value. This option must be set to achieve a correct state for a firewall restart or stop.
• no — This option should only be set if there are problems unloading the netfilter modules.
3. IPTABLES_SAVE_ON_STOP - Saves current firewall rules to /etc/sysconfig/iptables when the firewall is stopped.
This directive accepts the following values:
• yes — Saves existing rules to /etc/sysconfig/iptables when the firewall is stopped, moving the previous version
to the /etc/sysconfig/[Link] file.
• no — The default value. Does not save existing rules when the firewall is stopped.
4. IPTABLES_SAVE_ON_RESTART - Saves current firewall rules when the firewall is restarted. This directive accepts
the following values:
• yes — Saves existing rules to /etc/sysconfig/iptables when the firewall is restarted, moving the previous version
to the /etc/sysconfig/[Link] file.
• no — The default value. Does not save existing rules when the firewall is restarted.
5. IPTABLES_SAVE_COUNTER - Saves and restores all packet and byte counters in all chains and rules. This directive
accepts the following values:
• yes — Saves the counter values.
• no — The default value. Does not save the counter values.
6. IPTABLES_STATUS_NUMERIC - Outputs IP addresses in numeric form instead of domain or hostnames. This
directive accepts the following values:
• yes — The default value. Returns only IP addresses within a status output.
• no — Returns domain or hostnames within a status output.

Designing Firewall Networks


After the data administrators are acquainted with the different types of firewalls in the market, they need to define a
firewall policy for the organisation. Some of the questions that can be considered while defining this policy are:
• Will the firewall deny all the services except the ones that help in connecting to the Internet?
• Is the firewall intended to provide access using metered and audited techniques in a non-threatening manner?


The next step is to decide the level of monitoring, redundancy and control required. The effort involves balancing needs analysis with risk assessment and sorting the requirements to determine what to implement. In the case of firewalls, security is a higher priority than connectivity: the best practice is to block everything by default and only allow the services that are required on a case-by-case basis.
Security breaches are recognised as a major threat to an organization, and it is crucial for organizations to be well aware of the damage caused by the various types of security attacks. Although firewalls alone do not constitute a complete data security system, they are a vital component of an organization’s immunity to cyber-attacks. Hence, organizations need to invest some time in evaluating the system best suited to their needs and to deploy these solutions swiftly to avoid data breaches.


3.5 SECURITY INFORMATION AND EVENT MANAGEMENT (SIEM)

3.5.1 Introduction to SIEM


“SIEM” is defined as a group of multifaceted technologies that together provide a centralized overall view into an infrastructure. Furthermore, it provides analysis, workflow, correlation, normalization, aggregation and reporting, as well as log management. Security Information and Event Management (SIEM) software, when correctly configured
as well as log management. Security Information and Event Management (SIEM) software, when correctly configured
and monitored, plays a significant role in identifying breaches.
Some of the major drivers behind the benefits of deploying SIEM technologies are:
• Log management and maintenance
• Continuous monitoring and incident response
• Case management or ticketing systems
• Compliance obligations (HIPAA, SOX, PII, NERC, COBIT 5, FISMA, PCI, etc.)
• Gaining and maintaining certifications (such as ISO 27000, ISO 27001, ISO 27002 and ISO 27003)
• Policy enforcement validation and policy violations

Primarily, SIEM has been implemented in response to governmental compliance requirements. Correspondingly, many organisations found it necessary to implement SIEM not only to protect sensitive data but also to provide proof that they are operating in compliance with the requirements.
• Correlation: Correlation involves both real-time and historical analysis of event data. Because a logging device
collects massive amounts of data, correlation is an important tool for identifying meaningful security events.
• Prioritization: Highlighting important security events over less critical ones is an important feature of SIEM.
Frequently, prioritization incorporates input from vulnerability scanning reports.
• Workflow: Real-time identification and notification of threats is an essential part of the SIEM workflow.
Comprehensive incident management allows analysts to document threat response, an important part of
regulatory compliance.
Security information and event management (SIEM) technology supports threat detection and security incident
response through the real-time collection and historical analysis of security events from a wide variety of event
and contextual data sources. It also supports compliance reporting and incident investigation through analysis of
historical data from these sources. The core capabilities of SIEM technology are a broad scope of event collection and
the ability to correlate and analyze events across disparate sources.
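As a crude stand-in for such a correlation rule (the log path /var/log/auth.log and the threshold of 5 are assumptions; adjust both for your environment), standard shell tools can already express the idea of aggregating events from a log source and alerting on a pattern:

# Count failed SSH logins per source address and alert on repeat offenders
THRESHOLD=5
grep "Failed password" /var/log/auth.log \
  | awk '{for (i = 1; i <= NF; i++) if ($i == "from") print $(i+1)}' \
  | sort | uniq -c | sort -rn \
  | awk -v t="$THRESHOLD" '$1 >= t {print "ALERT: " $1 " failed logins from " $2}'

A real SIEM performs the same aggregation continuously, across many sources at once, and enriches the result with context before raising an alert.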
Security information and event management (SIEM) is an approach to security management that seeks to provide a
holistic view of an organization’s information technology (IT) security.
The underlying principle of a SIEM system is that relevant data about an enterprise’s security is produced in multiple
locations and being able to look at all the data from a single point of view makes it easier to spot trends and see
patterns that are out of the ordinary. SIEM combines SIM (security information management) and SEM (security event
management) functions into one security management system.
An SEM system centralizes the storage and interpretation of logs and allows near real-time analysis which enables
security personnel to take defensive actions more quickly. A SIM system collects data into a central repository for
trend analysis and provides automated reporting for compliance and centralized reporting. By bringing these two
functions together, SIEM systems provide quicker identification, analysis and recovery of security events. They also
allow compliance managers to confirm they are fulfilling an organization's legal compliance requirements.


A SIEM system collects logs and other security-related documentation for analysis. Most SIEM systems work by
deploying multiple collection agents in a hierarchical manner to gather security-related events from end-user devices,
servers, network equipment and even specialized security equipment like firewalls, antivirus or intrusion prevention
systems. The collectors forward events to a centralized management console, which performs inspections and flags
anomalies. To allow the system to identify anomalous events, it’s important that the SIEM administrator first creates
a profile of the system under normal event conditions.
At the most basic level, a SIEM system can be rules-based or employ a statistical correlation engine to establish
relationships between event log entries. In some systems, pre-processing may happen at edge collectors, with only
certain events being passed through to a centralized management node. In this way, the volume of information being
communicated and stored can be reduced. The danger of this approach, however, is that relevant events may be
filtered out too soon.
SIEM systems are typically expensive to deploy and complex to operate and manage. While Payment Card Industry
Data Security Standard (PCI DSS) compliance has traditionally driven SIEM adoption in large enterprises, concerns
over advanced persistent threats (APTs) have led smaller organizations to look at the benefits a SIEM managed
security service provider (MSSP) can offer.
Security information and event management (SIEM) systems provide centralized logging capabilities for an enterprise
and can be used to analyze and/or report on the log entries it receives. Some SIEM systems, which can be either products
or services, can also be configured to stop certain attacks they detect, generally by directing the reconfiguration of
other enterprise security controls.
Traditionally, most organizations with SIEM services have used them either for security compliance efforts or for
incident detection and handling efforts. But increasingly, organizations use SIEMs for both purposes. This increases
the technology's potential value to the organization, but unfortunately, tends to complicate configuration and
management.

3.5.2 Different types of SIEM tools


Many SIEM services and products are available today to meet the needs of a wide variety of organizations. Taking every characteristic of every one of them into account is not feasible, so this section concentrates on the features of the most widely used SIEM services.

The architecture of SIEM services and products


SIEM services and products are made available through any one of several architectures, including the following:
software installed on an on-premises server, on-premises hardware appliance, on-premises virtual appliance and
public cloud-based service.
SIEM services and products serve two purposes: providing centralized security logging and reporting for an organization,
and aiding in the detection, analysis and mitigation of security incidents. Each of these SIEM architectures has its own
advantages and disadvantages, and no architecture is generally superior to the others.
Another important aspect of SIEM architecture is how log data is transferred from each log source to the SIEM. There
are two basic approaches: agent-based and agent-less. Agent-based means a software agent is installed on each
host that generates logs, and this agent is responsible for extracting, processing and transmitting the data to the
SIEM server. Agent-less means the log data transfer happens without an agent; the log-generating host could directly
transmit its logs to the SIEM, or there could be an intermediate logging server involved, such as a syslog server. Most
products offer agent-based and agent-less log transfers to accommodate the widest possible range of log sources.
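As a quick illustration of the agent-less path (the collector hostname siem.example.com is a placeholder), the util-linux logger utility can send a test message to a remote syslog server, confirming that a log source can reach the SIEM without any agent installed:

# Send a test event to a remote syslog collector (UDP port 514 by default)
logger -n siem.example.com -P 514 "test: agent-less syslog forwarding check"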

Typical environments suitable for SIEM systems


Early SIEM services and products had a reputation for being for large organizations with advanced security capabilities.
The main motivation behind these deployments was to duplicate network security logs in a centralized location so the
security administrators and analysts could view all the logs through a single console, and potentially correlate events
across log sources in support of incident detection and response efforts.


There are many SIEM systems available today, including "light" SIEM products designed for organizations that cannot
afford or do not feel they need a fully featured SIEM. It can be quite a challenge to figure out which products
to evaluate, let alone choose the one that's best for a particular organization or organizational unit. Part of the
SIEM evaluation process should involve creating a list of criteria to be used to highlight SIEM capabilities that are
particularly important to consider.

How much native support does the SIEM provide for relevant log sources?
A SIEM is of diminished value if it cannot receive and understand log data from all of the log-generating sources of
interest to the organization. Most obvious is the organization's enterprise security controls, such as firewalls, virtual
private networks, intrusion prevention systems, email and Web security gateways, and antimalware products. It is
reasonable to expect a SIEM to natively understand log files created by any major product or cloud-based service in
these categories.
In addition, a SIEM should provide native support for log files from the operating system brands and versions the
organization uses. An exception is mobile device operating systems, which often do not provide any security logging
capabilities. SIEMs should also natively support the organization's major database platforms, as well as any enterprise
applications that enable multiple users to interact with sensitive data. Native SIEM support for other software used
by an organization is generally nice to have but is not mandatory. If a SIEM does not natively support a log source,
then the organization generally can either develop customized code to provide the necessary support or use the SIEM
without the log source's data present.

Can the SIEM supplement existing logging capabilities?


Particular applications and other software in use by the organization may lack robust logging capabilities. Some SIEM
products and services can supplement these by performing their own monitoring on behalf of other software. In
essence, this extends the SIEM from being strictly a centralized log collection, analysis and reporting solution, to also
generating raw log data on behalf of other hosts.

How effectively can the SIEM make use of threat intelligence?


Most SIEMs are capable of ingesting threat intelligence feeds.
These feeds, which are often acquired via separate subscriptions to services, contain up-to-date information on threat
activity being observed all over the world, including which hosts are being used to stage or launch attacks, and what
the characteristics are of these attacks. The greatest value in using these feeds is the SIEM being able to identify
attacks more accurately and to make more informed decisions, often automatically, about which attacks need to be
stopped and what the best method is to stop them. Of course, the quality of threat intelligence varies among vendors.
Factors to consider when evaluating threat intelligence effectiveness include how often the threat intelligence is
updated, and how the threat intelligence vendor indicates its confidence in the malicious nature of each threat.

What features do SIEM products provide to assist in performing data analysis?


SIEM products that are used for incident detection and/or handling should provide features that help people to
review and analyze the log data for themselves, as well as the SIEM's own alerts and other findings. One reason
for this is that even a highly accurate SIEM will occasionally misinterpret events, so people need to have a way
to validate the SIEM's results. Another reason for this is that people who are investigating incidents need helpful
interfaces to facilitate these investigations. Examples of such interfaces include sophisticated search capabilities and
data visualization capabilities.

Some SIEM Solutions


Enterprise and SMB SIEM Solutions
Users should have a clear categorization of activities so that they can drill into the ones that are suspicious. Event normalization is critical to a powerful SIEM, and vendors such as SolarWinds note the emergence of threat intelligence feeds and their integration with SIEMs.


Here's a more detailed look at IBM Security QRadar, HP's ArcSight, LogRhythm, SolarWinds, and Splunk.

IBM Security QRadar


IBM Security QRadar is a security information and event management (SIEM) tool used for collecting and analyzing
security log data. It collects log data from an enterprise, its network devices, host assets and operating systems,
applications, vulnerabilities, and user activities and behaviours. IBM QRadar executes real-time analysis of the log
data and network flows to identify malicious activity so that it can be stopped quickly, preventing or minimizing
damage to the organization. It can be deployed as a hardware, software or virtual appliance-based product. The
product architecture includes event processors for collecting, storing and analyzing event data and event collectors
for capturing and forwarding data. The SIEM product also includes flow processors to collect Layer 4 network flows,
QFlow processors for performing deep packet inspection of Layer 7 application traffic, and centralized consoles for
Security Operations Centre (SOC) analysts to utilize when managing the SIEM.
[Link]

HP’s ArcSight
Hewlett-Packard's ArcSight is primarily an enterprise-class SIEM offering, although it can scale down for smaller enterprises. The ArcSight Express rack-mount appliance includes a vast array of built-in capabilities. In addition to its log management capabilities, the appliance can collect, store and analyze all security data from a single interface.
The software is capable of analyzing millions of security events from firewalls, intrusion protection systems, end-
point devices, and an array of other log- and data-producing devices. It boasts built-in security dashboards and audit reports that surface threats and compliance issues, and is able to protect against zero-day attacks, advanced persistent threats, breach attempts, insider attacks, malware and unauthorized user access.
ArcSight Enterprise Security Manager (ESM) is targeted at large-scale security event management applications. ArcSight Express should be considered for midsize SIEM deployments, while ESM is appropriate for larger deployments, as long as sufficient in-house support resources are available. ArcSight Logger can be used for log management capabilities in two-tier deployments. Optional modules provide advanced support for user activity monitoring, identity and access management integration, and fraud management. ArcSight pricing is based on a more traditional software model that is more complex than that of SolarWinds or Splunk.
[Link]/go/ArcSight

LogRhythm
LogRhythm All-In-One (XM) appliance and software are designed for midsized to large enterprises. It includes a
dedicated event manager, dedicated log manager, dedicated artificial intelligence engine, site log forwarder and a
network monitor. Each of the software components also is available in a stand-alone appliance as well. LogRhythm's
security intelligence platform collects forensics data from log data, flow data, event data, machine data and vulnerability
data. It also generates independent forensics data for the host and network. The system can produce real-time
processing, machine or forensics analytics in order to create output for risk-prioritized alerts, real-time dashboards or
reports. It also is used for incident response, including case management and workflow.
[Link]

SolarWinds
SolarWinds' Log & Event Manager is targeted at the SMB market but can scale to larger businesses. The offering has pre-packaged templates and an automated log management system. Among the features the company identifies as must-haves for a SIEM offering are the ability to collect data from network devices, machine data and cloud logs, as well as in-memory event correlation for real-time threat detection.
Additional must-have features include a flexible deployment option for scalable log collection and analysis, out-of-the-box reporting for security, compliance and operations, forensic analysis, and built-in active response for automated remediation.


Other features the company identifies as essential are the ability to do internal data loss protection, embedded file integrity monitoring for threat detection and compliance support, plus high compression and encryption for secure long-term archival and log management. SolarWinds uses node-based pricing.
[Link]

Splunk
Like other SIEM products, the core of Splunk Enterprise monitors and manages application logs, business process
logs, configuration files, web access and web proxy logs, Syslog data, database audit logs and tables, file system audit
logs, and operating system metrics, status and diagnostic commands. But at Splunk, the focus is on machine data, the
data generated by all of the systems in the data centre, the connected "internet of things," and other personal and
corporate devices that get connected to the corporate network.
Although the product has "enterprise" in its name, Splunk says the solution can be used by SMBs as well and has
been architected for use by non-SIEM experts. Non-SIEM engineers will be able to use the event pattern detection,
instant Pivot interface that enables users to discover relationships in data without mastering the search language, and
dashboards that can share pre-built panels that integrate multiple charts and views over time.
[Link]

3.5.3 SIEM Log Correlation and Event Triggering


A computer network, consisting of multiple switches, routers, security systems, servers, databases and applications, can generate incredible amounts of data every day in the form of system logs. Aggregating these logs is not just occasional good practice; formal requirements, for example forensic investigations, may also demand it.
A SIEM tool not only gathers millions of logs per day but also correlates them to detect security incidents and potentially risky sets of events. As a result, the IT department can easily incorporate into its everyday tasks a quick review of the administration panel, where tens of millions of SIEM events are translated into dozens of incidents and potentially dangerous related events.

Monitor traffic and logs


SIEM tools rely on two types of information: event logs and flows. Event logs are gathered for later review and for correlation to detect incidents. Flows (coherent sequences of data packets within a TCP or UDP transmission) are subjected to behavioural analysis, which is used to detect abnormalities in the normal distribution of network traffic (for example, DoS attacks). Traffic analysis and event correlation raise the level of infrastructure posture analysis and the level of detection of undesirable events in the network. All this information can be searched and reviewed at any time, and the user can generate the reports required to fulfil audit requirements.
Log management refers to the broad practice of collecting, aggregating and analyzing network data for a variety of purposes. Data logging devices collect incredible amounts of information on security, operational and application events; log management comprises the tools to search and parse this data for trends, anomalies and other relevant information. Ensure that the solution monitors all networks and host systems (such as clients and servers), potentially
through the use of Network and Host Intrusion Detection Systems (NIDS/HIDS) and Prevention Solutions (NIPS/HIPS).
These solutions should provide both signature-based capabilities to detect known attacks and heuristic capabilities to
detect potentially unknown attacks through new or unusual system behaviour.
The Security Operations Centre (SOC) monitors logs to detect anomalies, perform impact analyses, and proactively notify staff of a potential DoS attack. Security Information and Event Management (SIEM) software automates
log management and helps to mitigate internal threats, conduct log forensics analysis, meet regulatory compliance
requirements and more.


Events monitored must include at least the following:


• Unauthorised access attempts, such as:
  - Failed or rejected user logins or other actions
  - Critical notifications from network firewalls or gateways, such as dropped traffic on specific rules (e.g. firewall management rules)
• System alerts or failures, such as:
  - Console alerts or messages
  - System log exceptions
  - Network management alarms
  - Alarms raised by the access control system
  - System power alerts
• Key Performance Indicators
• Changes to, or attempts to change, system security settings or controls

Ensure that monitoring systems are adjusted appropriately only to collect logs, events, and alerts that are relevant
in the context of delivering the requirements of the monitoring policy. Inappropriate collection of monitoring
information could breach data protection and privacy legislation.
If the monitoring system generates too many alerts to follow up properly, this should be investigated in order either to remediate the monitoring system or to address the root cause of the events. The monitoring process and systems must be reviewed regularly to ensure that they are performing adequately and not suffering from too many false positives or false negatives. As with logging, monitoring controls must be documented in the logging policy for all systems designated as security-relevant.
The SOC monitors the edge routers to build a profile of normal network traffic and updates that profile as traffic patterns change over time. Drawing on that knowledge, SOC staff can immediately identify significant deviations from the profile as they occur, analyze anomalies, and raise alerts about any attack.
The inbound and outbound network traffic traversing network boundaries should be continuously monitored to
identify unusual activity or tendencies that could indicate attacks and the compromise of data. The transfer of
sensitive information, particularly large data transfers or unauthorised encrypted traffic should automatically generate
a security alert and prompt a follow-up investigation. The analysis of network traffic can be a key tool in preventing
the loss of data.
The following traffic flow types must always be logged:
• All authentication requests (successful and failed)
• All VPN session requests (successful and failed)
• All packets denied by specific rules and by the "clean-up" rule
• All successful packets whose destination is the firewall itself (firewall management traffic)
Any decision not to log other types of traffic must be documented and justified. In addition to the traffic logs, firewalls
must log all events mentioned under "Non-personal devices".

Collect logs from all types of ICT systems devices and applications
ICT (information and communications technology) is a term that describes the general processing and communication
of information through technology. The importance of ICTs lies less in the technology itself than in its ability to create
greater access to information and communication in unreached areas. Some of the examples of ICT tools are radios,
TVs, laptops, tablets, mobiles, smartphones, gaming devices, etc.
Monitoring the activity of Information and Communications Technology (ICT) devices and applications allows organizations to detect attacks and react to them appropriately, while providing a basis upon which lessons can be learned to improve the overall security of the organisation.


In addition, monitoring the use of ICT systems allows organizations to ensure that systems are being used appropriately in accordance with organisational policies. Monitoring is often a key capability needed to comply with security, legal and regulatory requirements.
Failure to monitor ICT systems and their use in specific organisational processes could lead to non-compliance with the corporate security policy and legal or regulatory requirements, or result in attacks going unnoticed.
Develop and deploy a centralised capability that can collect and analyse accounting logs and security alerts from ICT systems across the organisation, including user systems, servers, network devices, and security appliances, systems and applications. Given the volume of data involved, much of this should be automated to enable experts to swiftly identify and investigate irregularities. Ensure that the design and implementation of the centralised solution
do not provide an opportunity for attackers to bypass normal network security and access controls.

Monitoring multiple security technologies


Monitoring multiple security technologies and external data sources is essential to stay alert to what is happening outside the organisation's own data. Every day, numerous bugs, glitches, threats, risks and attacks occur that are detected, countered or reported by different external sources. Being aware of this activity helps an organisation anticipate possible occurrences and equip itself with the knowledge and tools to counter them. Some of the important external data sources are described below:
• Computer Network Defense: This is a set of processes and protective measures that use a computer network to detect, monitor, protect, analyze and defend against network infiltrations that result in service/network denial, degradation and disruption. CND enables a government or military institution/organization to defend against and retaliate to network attacks perpetrated by malicious or adversarial computer systems or networks.
• Computer Emergency Response Team (CERT): A computer emergency response team (CERT) is a group of experts who respond to cybersecurity incidents. These teams deal with the evolution of malware, viruses and other cyber attacks.
• Many aspects of group operations are targeted at traditional hacking methods like viruses and malware. New
kinds of cyber attacks are surfacing all the time, and security professionals need to stay ahead of these problems.
They need to do testing and simulations to anticipate security problems before they arise. They also need to
quickly do damage control on any problems that have not been anticipated. The work of a CERT involves a wide
spectrum of security activities aimed at preventing and minimizing cyber attacks from wherever they originate,
and it also involves working to reduce incidences of these problems in the future.
• SANS: SANS provides intensive, immersion training designed to help staff master the practical steps necessary for defending systems and networks against the most dangerous threats -- the ones being actively exploited. Its courses, developed through a consensus process involving hundreds of administrators, security managers and information security professionals, address both security fundamentals and awareness, and the in-depth technical aspects of the most crucial areas of IT security. SANS is the most trusted and by far the largest source for information security training and security certification in the world. Many valuable SANS resources are free to all; they include the very popular Internet Storm Center (the Internet's early warning system), the weekly news digest (NewsBites), the weekly vulnerability digest (@RISK), and more than 1,200 award-winning, original information security research papers. [Link]
• SecurityFocus: It is an online computer security news portal and source of information security services. Home to the well-known Bugtraq mailing list, SecurityFocus columnists and writers include cybercrime prosecutors, security analysts, hackers and others. SecurityFocus was acquired by Symantec in August 2002. [Link]
• Telemetry monitoring: Telemetry is a term for technologies that collect information in the form of measurements or statistical data and forward it to IT systems in a remote location. The term can be used in reference to many different types of systems, such as wireless systems using radio, or systems operating over telephone or computer networks; others may use different strategies such as SMS messaging.


In general, telemetry allows for the robust collection of data and its delivery to centralized systems where it can be
used effectively. Part of the development of telemetry involves the emergence of big data technologies and big
data strategies that take massive amounts of relatively unstructured data and aggregate it in centralized systems.
Normally, this type of information flows out of devices as streams of unstructured data. In any event, the information
needs to be collected, put into an appropriate structure for storage, perhaps combined with other data, and stored as
a transactional record. From there, the data can be further transferred to an analytics-oriented database, or analysed
in place. Challenges arise in deciding how to deal with that information. Obviously, data integration is critical to most telemetry operations: the information must be managed from point to point and then maintained within intermediate or analytics databases.
Telemetry Data Packet Capture: There are cases in which an organisation needs to go beyond collecting log messages and network flow information; an example is the need for deep forensic capabilities to meet strict regulatory requirements for capturing raw network packets. Network traffic can be captured and forwarded to an intrusion detection system
(IDS), a deep packet inspection engine (DPI), or simply to a repository where captured packets are stored for future
use. The choice of the packet capturing technology is influenced by the network and media type to monitor.

3.5.4 Log Analysis


Logs are usually stored in a storage unit such as a hard drive, or sent to an application such as a log collector. Log messages are application specific, and interpretation of the messages must be made in the context of the application or the system. The main reasons why log management is used in an organization are:
• Log analysis can provide necessary support for an existing or new data source. All log analysis tools link themselves
to the unstructured data such as system logs, CPU data, configuration files and application logs, and then analyze
these logs to provide valuable information.
• Log analysis components work alongside each other to identify root cause from the unstructured data.
• Regular log analysis helps in minimizing and evading the different risks associated with the enterprise. It gives
the evidence of what occurred, the factors that determined the cause and the impacts. It thus helps in building
countermeasures and models to reduce the risks.
• Log analysis increases security awareness; rapid detection of failed processes, network outages or protocol failures is possible through log analysis. The analysis of logs helps in determining trends, and the data stored in data archives by log analysis helps in improving search functionality and performance.
• Another advantage of log analysis is in enabling active data streaming, which is accessible across different remote sources. Log analysis is generally performed for security or audit compliance, forensics, security incident response or system troubleshooting.
The Log Monitoring service monitors, correlates and analyzes security logs and alerts across virtually any security technology and critical information asset in the environment. Continuous 24x7 event log monitoring identifies irregularities and helps to respond to threats in real time. Log analysis is geared towards narrowing down the events of interest: the analyst needs to focus on recent changes, failures, errors, status changes, access and administration events, and other events unusual for the environment.
Hence, it is important to minimize the noise by removing routine, repetitive log entries from the view after confirming that they are not harmful. The analyst then needs to correlate activities across different logs to get a comprehensive picture of the situation, as in the sketch below.
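A minimal sketch of this triage step with ordinary shell tools (the exclusion patterns and the log path are placeholders, to be replaced with entries confirmed benign in your environment):

# Hide known-benign, repetitive entries so unusual events stand out
grep -v -E "session opened for user backup|CRON\[" /var/log/auth.log | less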


SUMMARY

• The information that is transmitted over the communication channel is known as a packet. Packets contain two
portions i.e. header and footer.
• A protocol is a set of rules and standards that define a language used for communication. The examples can
be TCP, IP, UDP and ICMP.
• The application layer allows access to network resources to users and user applications.
• The presentation layer is responsible for mapping resources and creating context.
• The session layer is responsible for establishing, managing and terminating the sessions between two users.
• The transport layer performs tasks such as processing message delivery and error recovery.
• The network layer is responsible for moving packets from the source to the destination.
• The data link layer organizes bits into frames and ensures hop-to-hop delivery of data packets.
• The physical layer performs the transmission of data or bits through a medium.
• TCP/IP protocol is a communication link between the programming interface of a physical network and user
applications.
• The IP address identifies the host within a network and consists of a network number and a host number.
• Risk identification is defined as the process of determining risks that could prevent the program, enterprise or
investment from achieving its objectives.
• Port scanners, network sniffers and password crackers are some of the commonly used network security tools.
• A demilitarized zone is a special local network configuration designed to improve security by segregating
computers on each side of a firewall.
• Security Information and Event Management (SIEM) is a group of multifaceted technologies that together,
provide a centralized overall view into an infrastructure.


KNOWLEDGE CHECK

Q.1. Expand the following abbreviations:


VPN : _________________________________________________________
TCP/IP : _________________________________________________________
HTTP: _________________________________________________________
UDP : _________________________________________________________
ARP : _________________________________________________________
DNS : _________________________________________________________
FTP : _________________________________________________________
SSH : _________________________________________________________
DHCP : _________________________________________________________
IPS : _________________________________________________________
IDPS : _________________________________________________________

Q.2. Select the right choice from the following multiple choice questions:

A. In networking terminologies, when information is transmitted over the communication channel is referred
to as:
i. Connection
ii. Network Interface
iii. Packet
iv. Threads

B. The program that is responsible for deciding whether the traffic should enter the server or not is:
i. Protocol
ii. VPN
iii. NAT
iv. Firewall

C. Which of the following is an attack where the attacker steals important information from data packets?
i. Man-in the middle
ii. Sniffing
iii. Spoofing
iv. Denial of Service

D. In which of the following layers of the TCP/IP model is the IP addresses defined?
i. Application Layer
ii. Transport Layer
iii. Network Layer
iv. Link Layer


E. Which of the following layers of the TCP/IP model acts as the interface to the actual network hardware?
i. Application Layer
ii. Transport Layer
iii. Network Layer
iv. Link Layer

F. Which of the following is NOT an Application Layer vulnerability?


i. Cookie poisoning
ii. Caching
iii. UDP Flood attack
iv. DNS Attacks
v. Hijacking

G. Which of the following is NOT a Transport Layer vulnerability? (Can select more than one)
i. SYN Flood
ii. TCP blind spoofing
iii. UDP Flood attack
iv. DNS Attacks
v. Teardrop Attack

H. Which of the following is NOT a Network Layer vulnerability? (Can select more than one)
i. Ping of Death attack
ii. TCP blind spoofing
iii. Cookie poisoning
iv. Source route attack
v. MAC flooding attack

I. Which of the following is NOT a Link Layer vulnerability? (Can select more than one)
i. TCP blind spoofing
ii. ARP Spoofing
iii. Cookie poisoning
iv. Eavesdropping via sniffing
v. Teardrop Attack

J. Which of the following can a DNSSEC extension NOT do?


i. authenticate the origin of data sent from a DNS server
ii. verify the integrity of data
iii. authenticate non-existent DNS data.
iv. use encryption to provide the required security services


Q.3. Match the following Application Layer vulnerabilities with their explanations.

VULNERABILITY EXPLANATION
A. Hijacking i. An attacker modifies or steals small files stored by certain websites in the
computer of the user. Through this they can access personal information
of the user which could also be a password or a user id. They can then,
use these packets of information on their own machine and access
unauthorized information
B. Domain Name System (DNS) ii. Saving of data from web pages browsed by the user temporarily on the
Attacks user’s machine poses a security risk, because an attacker can use the
saved data to access password protected web pages from that computer.

C. Cookie poisoning iii. The attacker intercepts data transmission of a user and then uses that
information again for his/her own benefit. It is a type of man-in-the-
middle attack and more than a hijack.
D. Replay attack iv. The attacker injects a malicious script into a vulnerable web application or browser, which conducts a session hijack and steals the information and cookies of legitimate users of the website.
E. Dynamic Host Configuration v. HTTP vulnerability can lead to an attack where the attacker steals an
Protocol (DHCP) starvation attack HTTP session of the legitimate user by capturing the packets using a
packet sniffer.
F. Caching vi. The attacker modifies a record database where internet domain names
used by people to locate a website are located. By doing this the attacker
can direct all traffic to an incorrect IP address.
G. Cross-Site Scripting vii. The attacker sends numerous requests for IP addresses using spoofed MAC addresses. The server, which assigns temporary IP addresses to user machines that log into an IP network, ends up leasing all its IP addresses until it has no more to give out. Then, when a genuine user sends a request, the server is unable to provide an IP address and the user cannot get access to the network.

Q.4. Match the following Transport Layer and Network Layer vulnerabilities with their explanations.

VULNERABILITY EXPLANATION
A. SYN Flood i. It is another form of Hijacking that can be done, where an attacker is able
to guess both the port number and sequence number of the session that
is in process and can carry out an injection attack.
B. Source Route Attack ii. This is a denial of service attack, where numerous user datagram protocol
packets are sent to a targeted server, so that it is overwhelmed with the
number of requests and so is unable to process other requests from legitimate
users. Even a firewall protecting the targeted server can become exhausted.
C. TCP blind spoofing iii. This attack is a type of denial-of-service (DoS) attack which works slowly by sending a series of fragmented packets to a target device, overwhelming the target with incomplete data so that it crashes.
D. UDP Flood Attack iv. In this attack, the attacker sends malformed IP packets that exceed 65,535 bytes to the target device. A correctly formed ping packet is 56 bytes, or 64 bytes when the IP header is considered. The target device will naturally not be able to process such a packet properly, and this can lead to an operating system crash.


E. Teardrop Attack v. After receiving the fake SYN packets, the target server replies with a
packet to the source address that is unreachable. This situation creates
a lot of half-opened sessions which causes the server to be overloaded
and so the server is unable to allow any further connections, leading to
a denial of service attack.
F. RIP Security Attacks vi. The attacker can modify the option in the packet that lists the specific
routers taken by a packet to reach its destination. This can lead to a
loss of data confidentiality as the attacker will be able to read the data
packets.
G. Ping of Death Attack vii. The attacker can impersonate a route to a particular host that is unused.
The packets can be sent to the attacker for sniffing or performing a man
in the middle attack.

Q.5. Match the following tools with their functions.

TOOL FUNCTION
A. Port Scanners i. Captures all of the network traffic and obtains log-ins and passwords to
provide an entry into the main systems.

B. Network Sniffers ii. Scans a host or range of hosts in order to determine which ports are open and what kinds of services are running.
C. Password Crackers iii. Tries possible combinations for cracking code for password protected
files.

Q.6. State at least 2 effective countermeasures for the following vulnerabilities at the various OSI layers:

VULNERABILITY EXPLANATION

A. Physical Layer Vulnerabilities

B. Link Layer Vulnerability Examples


C. Network Layer Vulnerabilities

D. Transport Layer Vulnerabilities

E. Session Layer Vulnerabilities

F. Presentation Layer
Vulnerabilities

G. Application Layer Vulnerabilities


Q.7. State at least 4 password usage practices that leave the passwords vulnerable to compromise.
1. __________________________________________________________________________________________________________
2. __________________________________________________________________________________________________________
3. __________________________________________________________________________________________________________
4. __________________________________________________________________________________________________________

UNIT 4
APPLICATION SECURITY

At the end of this unit you will be able to:

• Explain what applications are
• State the key vulnerabilities to applications
• Explain the overall process of identification of these vulnerabilities
• Explain how hardware and software vulnerabilities can be identified and resolved
• Describe application security testing processes
• Describe application security countermeasures and their application
• Explain what OWASP is, and OWASP tools and methodologies

4.1 IDENTIFYING APPLICATION SECURITY RISKS

4.1.1 Application Security Risks

Applications – An Introduction

Applications are a type of software that allows people to perform specific tasks using various ICT devices.
• Applications could be for computers (desktops, laptops, etc.)
• Applications could be for mobile devices (smartphones, iPads, etc.)
• Some applications are also on the cloud
An application runs inside an operating system when opened, and continues running until it is closed. We can have
more than one application open at a time, and this is known as multitasking.
There are countless applications, and they fall into many different categories. Applications such as Microsoft Word are
full-featured, while gadgets are simple applications capable of accomplishing only one or two things.

Some examples of Applications


A few Computer or Desktop applications are:
• Word processors: A word processor allows you to write a letter, design a flyer, and create many other types of
documents. The most common word processor used in today's world is Microsoft Word.
• Web browsers: A web browser is a tool that is used to access the Internet. Most computers come with
a web browser pre-installed, but one can also download a different one according to preference. Some
commonly used browsers are Internet Explorer, Firefox, Google Chrome and Safari.
• Games: There are various different games that can be played on the computer. They range from card
games such as Solitaire to action games.
• Gadgets: Sometimes called widgets, these are simple applications that can be placed on the desktop (or
on the Dashboard if using a Mac). There are endless gadgets available to the users such as calendars,
calculators, maps, news headlines, etc.
A few Mobile applications are:
• Apps for chatting or calling via internet: e.g. WhatsApp, Hike, etc
• Apps for buying or selling online: e.g. OLX, Flipkart, Amazon, Snapdeal, etc
• Apps for information: e.g. cricbuzz, newshunt, NDTV, BBC, etc

How to Install Applications?


Installing an application on a computer is as simple as inserting the installation disc and following the instructions
on the screen. One can also download the software from the Internet and then can run the software and follow the
instructions on the screen. A lot of applications feature a readme file (e.g. [Link]) that includes installation and
other related information.
One has to be careful while downloading since unknown viruses and malware could also be downloaded along with the
desired file. If the computer has an antivirus program, then downloaded software can be scanned before installing it.


Files and applications


Each application will have a group of file types—or formats—which it is able to open. When one double-clicks on a
file, the computer will automatically use the correct application to open it, as long as the application is installed on
the computer.
If the correct application is not installed, the file may not open. However, in some cases, one can
open a file with a web application that runs in the browser. For example, if we don't have Microsoft Word, we can
open Word documents with Google Docs. These are cloud applications.
The file format can be ascertained by looking at the extension at the end of the file name (such as .docx, .txt, or .jpg).
On some computers, the extension may be hidden, and one may need to look at the icon to determine the file format.
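As a small illustration, the extension can also be read programmatically. The sketch below uses Python's standard pathlib module with a hypothetical file name:

    from pathlib import Path

    # "report.docx" is a hypothetical file name used only for illustration.
    file = Path("report.docx")
    print(file.suffix)  # prints ".docx", a format a word processor can open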

Application vulnerabilities (weakness and exposures)

Organisations use Application Security, or ‘AppSec’ to protect their critical data from external threats by ensuring the
security of all the software used to run the business. This software can be built internally, bought or downloaded.
Application security helps to identify, fix and prevent security vulnerabilities in any kind of software application.
A software ‘vulnerability’ is an unintended flaw, weakness or exposure to risks in the software that leads it to process
critical data in an insecure way. Cybercriminals can enter an organisation’s systems by exploiting these ‘holes’ in
applications and steal confidential data.
SQL injection, Cross-Site Request Forgery (CSRF) and Cross-Site Scripting (XSS) are some of the common software vulnerabilities
known in the field of application security.
• SQL injection exploits an application vulnerability that allows an attacker to submit a database SQL
command, exposing the back-end database where the attacker can create, read, update, alter or delete data.
(A short code sketch after this list illustrates the flaw and its standard fix.)
• Cross-Site Scripting (XSS) is an attack that occurs when ‘malicious scripts are injected into otherwise
benign and trusted websites’ (according to OWASP). XSS arises from weaknesses in the handling of client-side
content such as HTML and JavaScript.
• Cross-Site Request Forgery (CSRF) manipulates a web application vulnerability that allows an attacker
to trick the end user into performing unwanted actions. CSRF lets the attacker access functionalities in a
target web application using the already authenticated browser of the victim.
• Smurf attack – This works in the same way as a Ping Flood attack, with one major difference: the source
IP address of the attacking host is spoofed with the IP address of another legitimate, non-malicious computer.
Such an attack causes disruption both on the attacked host (receiving a large number of ICMP requests) as
well as on the spoofed victim host (receiving a large number of ICMP replies).
• Buffer overflow attack – in this type of attack the victim host is provided with traffic or data that is
out of range of the processing specifications of the victim host, protocols or applications, overflowing the buffer
and overwriting the adjacent memory. One example is the Ping of Death attack, where a malformed
ICMP packet with a size exceeding the normal value can cause a buffer overflow.
• Botnet – a collection of compromised computers that can be controlled by remote perpetrators to perform
various types of attacks on other computers or networks. A known example of botnet usage is within the
distributed denial of service attack, where multiple systems submit as many requests as possible to the
victim machine to overload it with incoming packets. Botnets can be otherwise used to send out spam,
spread viruses and spyware and steal personal and confidential information which afterwards is forwarded
to the botmaster.
• Man-in-the-middle attack – this attack takes the form of active monitoring or eavesdropping on the victims'
connections and on the communication between victim hosts. In this type of attack, the attacker places himself
between the victim parties of the communication process. This is achieved by the attacker intercepting
all parts of the communication, changing its content and sending it back as legitimate replies.


Both parties are unaware of the attacker's presence and believe the replies they receive are legitimate. For this
attack to succeed, the perpetrator must successfully impersonate at least one of the endpoints. Well-defined
protocols that ensure mutual authentication and encryption during the communication process
help in countering attacks of this type.
• Session hijacking attack – this attack exploits a valid computer session to gain
unauthorised access to information on a computer system.
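As a minimal illustration of the SQL injection flaw described in the list above, consider the following Python sketch. The table, data and hostile input are hypothetical, and sqlite3 stands in for any SQL database. The first query splices user input directly into the SQL command; the second uses a parameterised query, which is the standard countermeasure:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    user_input = "alice' OR '1'='1"  # hostile input supplied by an attacker

    # VULNERABLE: the input becomes part of the SQL command itself, so the
    # injected OR '1'='1' clause matches (and exposes) every row in the table.
    rows = conn.execute(
        "SELECT secret FROM users WHERE name = '" + user_input + "'"
    ).fetchall()
    print("vulnerable query returned:", rows)

    # SAFE: a parameterised query treats the input purely as data, so the
    # hostile string matches no user name and nothing is returned.
    rows = conn.execute(
        "SELECT secret FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print("parameterised query returned:", rows)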
Almost every application has vulnerabilities. There are also many tools and technologies to address application
security, yet it is very important to always start with a strong strategy. At a high level, the strategy should address,
and continuously improve, these basic steps:
• Identification of vulnerabilities (flaws, weaknesses or exposure to risks)
• Assessment of risk
• Fixing the flaw, weakness or exposure
• Learning from mistakes and better managing future development processes
Application security can be enhanced by Threat Modelling, which involves following certain steps rigorously, which are:
• Defining enterprise assets
• Identifying what each application does (or will do) with respect to these assets
• Creating a security profile for each application
• Identifying and prioritizing potential threats and documenting adverse events and the actions taken in
each case
A threat can be defined as a potential or an actual adverse event capable of compromising the valuable assets of an
enterprise. This could include malicious events such as denial-of-service (DoS) attack and unplanned events such as
failure of a storage device.
Apart from that, there are many types of technologies available to assess applications for security vulnerabilities
which include the following:
• Static analysis (SAST), or “white-box” testing, analyzes applications without executing them.
• Dynamic analysis (DAST), or “black-box” testing, identifies vulnerabilities in running web applications.
• Interactive AST (IAST) technology combines elements of SAST and DAST and is implemented as an agent
within the test runtime.
• Mobile behavioral analysis discovers risky actions of mobile apps.
• Software composition analysis (SCA) technologies analyze open source and third party
components.
• Manual penetration testing (or “pen testing”) technologies use the same methodology cybercriminals use
to exploit application weaknesses.
• Web application perimeter monitoring technologies help organisations discover their public-facing applications
and easily exploitable vulnerabilities.
• Runtime application self-protection technologies help in detecting and preventing real-time application attacks.
While a variety of application security technologies is available to help with this endeavour, none is foolproof.
One must combine the strengths of multiple analysis techniques across the entire application lifetime to bring down
the application risk.
It is crucial for organisations to develop a mature and robust application security program that can:
• Assess every application, whether built internally, bought or downloaded.
• Help developers find and fix vulnerabilities while coding.
• Incorporate security into the development process and scale the program with the help of automation
and cloud-based services.


Security has become an important aspect of the software design process of applications as well. Security measures,
along with a sound application security routine, help in minimising the likelihood of an attack by unauthorised
code. They help provide immunity against unpermitted access to, and stealing, modifying and deleting of, sensitive data
within an application.

Top 10 Web Application Security Risks By Open Web Application Security Project (OWASP)
OWASP is an online community that produces freely-available articles, methodologies, documentation, tools, and
technologies in the field of web application security. We will read more about it in a later section of this unit. Given
below is the list of the top 10 web application security risks identified by them.
1. Injection: Injection flaws, such as SQL, NoSQL, OS, and LDAP injection, occur when untrusted data
is sent to an interpreter as part of a command or query. The attacker’s hostile data can trick the interpreter into
executing unintended commands or accessing data without proper authorization.
• Threat Agents/Attack Vectors: Almost any source of data can be an injection vector: environment variables,
parameters, external and internal web services, and all types of users. Injection flaws occur when an attacker
can send hostile data to an interpreter.
• Security Weakness: Injection flaws are very prevalent, particularly in legacy code. Injection vulnerabilities are
often found in SQL, LDAP, XPath, or NoSQL queries, OS commands, XML parsers, SMTP headers, expression
languages, and ORM queries. Injection flaws are easy to discover when examining code. Scanners and
fuzzers can help attackers find injection flaws.
• Impacts: Injection can result in data loss, corruption, or disclosure to unauthorized parties, loss of
accountability, or denial of access. Injection can sometimes lead to complete host takeover. The business
impact depends on the needs of the application and data.
2. Broken Authentication: Application functions related to authentication and session management are often
implemented incorrectly, allowing attackers to compromise passwords, keys, or session tokens, or to exploit other
implementation flaws to assume other users’ identities temporarily or permanently.
• Threat Agents/Attack Vectors: Attackers have access to hundreds of millions of valid username and
password combinations for credential stuffing, default administrative account lists, automated brute force,
and dictionary attack tools. Session management attacks are well understood, particularly in relation to
unexpired session tokens.
• Security Weakness: The prevalence of broken authentication is widespread due to the design and
implementation of most identity and access controls. Session management is the bedrock of authentication
and access controls, and is present in all stateful applications. Attackers can detect broken authentication
using manual means and exploit them using automated tools with password lists and dictionary attacks.
• Impacts: Attackers have to gain access to only a few accounts, or just one admin account to compromise
the system. Depending on the domain of the application, this may allow money laundering, social security
fraud, and identity theft, or disclose legally protected highly sensitive information.
3. Sensitive Data Exposure: Many web applications and APIs do not properly protect sensitive data, such as financial,
healthcare, and Personally Identifiable Information (PII) data. Attackers may steal or modify such weakly protected data
to conduct credit card fraud, identity theft, or other crimes. Sensitive data may be compromised without extra
protection, such as encryption at rest or in transit, and requires special precautions when exchanged with the
browser.
• Threat Agents/Attack Vectors: Rather than directly attacking crypto, attackers steal keys, execute man-
in-the-middle attacks, or steal clear text data off the server, while in transit, or from the user’s client, e.g.
browser. A manual attack is generally required. Previously retrieved password databases could be brute
forced by Graphics Processing Units (GPUs).
• Security Weakness: Over the last few years, this has been the most common impactful attack. The most common
flaw is simply not encrypting sensitive data. When crypto is employed, weak key generation and management,
and weak algorithm, protocol and cipher usage is common, particularly for weak password hashing storage
techniques. For data in transit, server-side weaknesses are mainly easy to detect, but hard for data at rest.


• Impacts: Failure frequently compromises all data that should have been protected. Typically, this information
includes sensitive personal information (as in the case of PII) data such as health records, credentials,
personal data, and credit cards, which often require protection as defined by laws or regulations such as
the EU GDPR or local privacy laws.
4. XML External Entities (XXE): Many older or poorly configured XML processors evaluate external entity references
within XML documents. External entities can be used to disclose internal files using the file URI handler, internal
file shares, internal port scanning, remote code execution, and denial of service attacks.
• Threat Agents/Attack Vectors: Attackers can exploit vulnerable XML processors if they can upload XML or
include hostile content in an XML document, exploiting vulnerable code, dependencies or integrations.
• Security Weakness: By default, many older XML processors allow specification of an external entity, a URI
that is dereferenced and evaluated during XML processing. SAST tools can discover this issue by inspecting
dependencies and configuration. DAST tools require additional manual steps to detect and exploit this
issue. Manual testers need to be trained in how to test for XXE, as it was not commonly tested as of 2017.
• Impacts: These flaws can be used to extract data, execute a remote request from the server, scan internal
systems, perform a denial-of-service attack, as well as execute other attacks. The business impact depends
on the protection needs of all affected application and data.
5. Broken Access Control: Restrictions on what authenticated users are allowed to do are often not properly
enforced. Attackers can exploit these flaws to access unauthorized functionality and/or data, such as access other
users’ accounts, view sensitive files, modify other users’ data, change access rights, etc.
• Threat Agents/Attack Vectors: Exploitation of access control is a core skill of attackers. SAST and DAST
tools can detect the absence of access control but cannot verify if it is functional when it is present. Access
control is detectable using manual means, or possibly through automation for the absence of access
controls in certain frameworks.
• Security Weakness: Access control weaknesses are common due to the lack of automated detection, and
lack of effective functional testing by application developers. Access control detection is not typically
amenable to automated static or dynamic testing. Manual testing is the best way to detect missing or
ineffective access control, including HTTP method (GET vs PUT, etc), controller, direct object references, etc.
• Impacts: The technical impact is attackers acting as users or administrators, or users using privileged
functions, or creating, accessing, updating or deleting every record. The business impact depends on the
protection needs of the application and data.
6. Security Misconfiguration: Security misconfiguration is the most commonly seen issue. This is commonly a
result of insecure default configurations, incomplete or ad hoc configurations, open cloud storage, misconfigured
HTTP headers, and verbose error messages containing sensitive information. Not only must all operating systems,
frameworks, libraries, and applications be securely configured, but they must be patched/upgraded in a timely
fashion.
• Threat Agents/Attack Vectors: Attackers will often attempt to exploit unpatched flaws or access default
accounts, unused pages, unprotected files and directories, etc to gain unauthorized access or knowledge
of the system.
• Security Weakness: Security misconfiguration can happen at any level of an application stack, including
the network services, platform, web server, application server, database, frameworks, custom code,
and pre-installed virtual machines, containers, or storage. Automated scanners are useful for detecting
misconfigurations, use of default accounts or configurations, unnecessary services, legacy options, etc.
• Impacts: Such flaws frequently give attackers unauthorized access to some system data or functionality.
Occasionally, such flaws result in a complete system compromise. The business impact depends on the
protection needs of the application and data.
7. Cross-Site Scripting XSS: XSS flaws occur whenever an application includes untrusted data in a new web page
without proper validation or escaping, or updates an existing web page with user-supplied data using a browser
API that can create HTML or JavaScript. XSS allows attackers to execute scripts in the victim’s browser which can
hijack user sessions, deface web sites, or redirect the user to malicious sites.


• Threat Agents/Attack Vectors: Automated tools can detect and exploit all three forms of XSS, and there are
freely available exploitation frameworks.
• Security Weakness: XSS is the second most prevalent issue in the OWASP Top 10, and is found in around
two thirds of all applications. Automated tools can find some XSS problems automatically, particularly in
mature technologies such as PHP, J2EE / JSP, and [Link].
• Impacts: The impact of XSS is moderate for reflected and DOM XSS, and severe for stored XSS, with remote
code execution on the victim’s browser, such as stealing credentials, sessions, or delivering malware to the
victim.
8. Insecure Deserialization: Insecure deserialization often leads to remote code execution. Even if deserialization
flaws do not result in remote code execution, they can be used to perform attacks, including replay attacks,
injection attacks, and privilege escalation attacks.
• Threat Agents/Attack Vectors: Exploitation of deserialization is somewhat difficult, as off the shelf exploits
rarely work without changes or tweaks to the underlying exploit code.
• Security Weakness: This issue is included in the Top 10 based on an industry survey and not on quantifiable
data. Some tools can discover deserialization flaws, but human assistance is frequently needed to validate
the problem. It is expected that prevalence data for deserialization flaws will increase as tooling is developed
to help identify and address it.
• Impacts: The impact of deserialization flaws cannot be overstated. These flaws can lead to remote code
execution attacks, one of the most serious attacks possible. The business impact depends on the protection
needs of the application and data.
9. Using Components with Known Vulnerabilities: Components, such as libraries, frameworks, and other software
modules, run with the same privileges as the application. If a vulnerable component is exploited, such an attack can
facilitate serious data loss or server takeover. Applications and APIs using components with known vulnerabilities
may undermine application defenses and enable various attacks and impacts.
• Threat Agents/Attack Vectors: While it is easy to find already-written exploits for many known vulnerabilities,
other vulnerabilities require concentrated effort to develop a custom exploit.
• Security Weakness: Prevalence of this issue is very widespread. Component-heavy development patterns
can lead to development teams not even understanding which components they use in their application or
API, much less keeping them up to date. Some scanners such as [Link] help in detection, but determining
exploitability requires additional effort.
• Impacts: While some known vulnerabilities lead to only minor impacts, some of the largest breaches to
date have relied on exploiting known vulnerabilities in components. Depending on the assets you are
protecting, perhaps this risk should be at the top of the list.
10. Insufficient Logging & Monitoring: Insufficient logging and monitoring, coupled with missing or ineffective
integration with incident response, allows attackers to further attack systems, maintain persistence, pivot to more
systems, and tamper, extract, or destroy data. Most breach studies show time to detect a breach is over 200 days,
typically detected by external parties rather than internal processes or monitoring.
• Threat Agents/Attack Vectors: Exploitation of insufficient logging and monitoring is the bedrock of nearly
every major incident. Attackers rely on the lack of monitoring and timely response to achieve their goals
without being detected.
• Security Weakness: This issue is included in the Top 10 based on an industry survey. One strategy for
determining if you have sufficient monitoring is to examine the logs following penetration testing. The
testers’ actions should be recorded sufficiently to understand what damages they may have inflicted.
• Impacts: Most successful attacks start with vulnerability probing. Allowing such probes to continue can
raise the likelihood of successful exploit to nearly 100%. In 2016, identifying a breach took an average of
191 days – plenty of time for damage to be inflicted.
• To read more about these you can go to the following OWASP website:
[Link]
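As a small illustration of the escaping countermeasure for XSS (item 7 above), the following Python sketch uses the standard html module; the page fragment and the hostile comment are hypothetical:

    import html

    # Hostile, user-supplied input that would execute in the victim's browser
    # if echoed into a page without validation or escaping.
    user_comment = "<script>document.location='http://evil.example/steal'</script>"

    # VULNERABLE: untrusted data is placed into the page verbatim.
    unsafe_page = "<p>Latest comment: " + user_comment + "</p>"

    # SAFER: escaping converts the markup characters into harmless entities,
    # so the browser renders the script as plain text instead of executing it.
    safe_page = "<p>Latest comment: " + html.escape(user_comment) + "</p>"
    print(safe_page)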


Bluetooth related attacks

• Bluesnarfing – a Bluesnarfing attack allows the attacker or malicious user to get unauthorised access to
information on a particular device using Bluetooth connectivity.

• Bluejacking – this kind of attack allows the malicious user to send unsolicited (often spam) messages
to Bluetooth-enabled devices.

• Bluebugging – this is a hack attack on a Bluetooth-enabled device. Bluebugging enables the attacker
to initiate phone calls on the victim's phone, as well as read through the address book and messages and
eavesdrop on phone conversations.

4.1.2 Identifying the Application Security Risks

The Process of Identification and Analysis of Vulnerabilities


While analysing threats, it is important to understand the concept of risk in application security. A risk can be
understood as the probability of a threat agent exploiting a vulnerability, thereby causing an impact on the application.
Threat modeling is a strategic and systematic approach to identifying and enumerating threats to an application
environment. It is an important component of risk management and works towards the objective of minimising risks
and their associated impact.
Threat analysis deals with the identification of application threats and analyses each aspect of the application
(its functionality, architecture and design) to identify and classify the potential
weaknesses.
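One common scoring convention, shown in the sketch below, is to rate likelihood and impact on a simple scale and take their product as the risk score; the scale, threats and ratings are illustrative assumptions rather than a standard prescribed by this handbook:

    # Illustrative risk scoring: risk = likelihood x impact, each rated
    # 1 (low), 2 (medium) or 3 (high). The threats and ratings are hypothetical.
    threats = [
        {"name": "SQL injection on login form", "likelihood": 3, "impact": 3},
        {"name": "Verbose error messages",      "likelihood": 2, "impact": 1},
    ]

    for threat in threats:
        threat["risk"] = threat["likelihood"] * threat["impact"]

    # Address the highest-scoring threats first.
    for threat in sorted(threats, key=lambda t: t["risk"], reverse=True):
        print(threat["name"], "-> risk score", threat["risk"])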
The following process is followed to identify and analyse vulnerabilities:
• View the system as an adversary
• Characterize the system
• Model threat scenarios
• Perform application penetration testing
• Perform threat analysis
• Perform countermeasure identification
Let us look at some of these in detail.

View the System as an Adversary


In order to understand the adversary's intent, the following steps can prove helpful.

Dependency Determination
It is important to understand the entire architecture and dependencies of the application. This understanding provides
a better overview and focus.
One of the key objectives of this phase is to determine clear dependencies and to link them to the next phase. The
Figure shows the overall architecture of a web application.


Fig 4.1: Architecture for web application

The application has several dependencies:


• A database : In this example, the web application uses MS-SQL Server as the database running at the backend. This
interface must be examined when performing an application review.
• The platform and web server : The application runs on a web server with some underlying platform (say
.NET). This is helpful from two perspectives:
• in securing deployment, and
• in defining the source code type and language.
• Web resources and languages : Web applications use web resources for proper rendering, including images,
script files, and any user-created component libraries. Many languages are used for web development,
such as JavaScript, Java, Python, CSS, PHP, Ruby, C++, C, Shell, C#, Objective-C, R, VimL, Go and Perl. In the given
example, ASPX and ASMX are the web application and web service pages, written in the C# language.
These resources help to determine patterns during a code review.
• Authentication : The application authenticates users through a Lightweight Directory Access Protocol (LDAP)
server. The authentication code is a critical component and needs analysis.
• Firewall : The application layer firewall is in place and content filtering must be enabled.
• Third-party components : Any third-party components being consumed by the application along with the
integration code need analysis.
• Information access from the internet : Some other important applications that require security attention are
RSS (Rich Site Summary) feeds, emails and other information that an application may access from the Internet.
With this information in place, one can better understand the code. The next step is to identify the entry points to
the application.


Identify the Entry/Exit Points


The objective of this phase is to identify entry points to the web application. A web application can be accessed from
various sources (Figure 4.2), and it is important to evaluate every source, as each has an associated risk. Entry/exit points
are places where data enters or exits an application. When identifying entry/exit points, the following data should be
identified and collected:

Fig 4.2: Architecture for web application - entry points

These entry points provide information to an application. These values affect the databases, LDAP servers, processing
engines and other application components. If these values are not guarded, they can open up potential vulnerabilities
in the application. The relevant entry points are as follows:
• HTTP variables: The browser or end-client sends information to the application. This set of requests
comprises several entry points, such as form and query string data, cookies, and server variables.
• XML messages: The application is accessible by web services over XML (Extensible Markup Language)
messages. These messages are potential entry points to a web application.
• RSS and Atom feeds: Many new applications consume third-party XML-based feeds and present the output
in different formats to an end-user. RSS and Atom feeds have the potential to open up new vulnerabilities,
such as XSS or client-side script execution.
• XML files from servers: The applications can access the XML files from various partners over the Internet.
• Mail system: The application gets access to mails from the available mailing systems.
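A minimal sketch of these entry points, assuming the Flask web framework (the route and field names are hypothetical): every one of these values is attacker-controlled and must be validated before it reaches the database or LDAP server.

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/search", methods=["GET", "POST"])
    def search():
        term = request.args.get("q", "")            # query string data
        name = request.form.get("name", "")         # form data
        session_id = request.cookies.get("sid")     # cookie data
        agent = request.headers.get("User-Agent")   # server/header variables
        # Each value above is an entry point that must be traced and validated
        # before it is passed to the database or other back-end components.
        return "ok"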

These are the important entry points to the application in the case study. It is possible to grab certain key patterns in
the submitted data using regular expressions from multiple files to trace and analyse patterns.
An application analyst should capture these entry and exit points in the format below:
• Numerical ID - There is a need to ensure that an entry/exit point has a numerical ID so that it can be cross-
referenced with various threats and vulnerabilities.
• Name - Provide a name to the entry/exit point and identify the purpose of having it.
• Description - Provide a suitable description to entry/exit point thereby outlining the activity taking place.
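A minimal sketch of such a record (the field values are hypothetical); the same ID/name/description format recurs for assets and trust levels later in this section:

    from dataclasses import dataclass

    @dataclass
    class EntryPoint:
        """One entry/exit point record, in the ID/name/description format."""
        numerical_id: int  # cross-referenced with threats and vulnerabilities
        name: str
        description: str

    entry = EntryPoint(
        numerical_id=1,
        name="HTTP query string",
        description="Form and query string data sent by the browser; feeds "
                    "the database and LDAP lookups.",
    )
    print(entry)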


After locating these entry points to an application, one needs to trace them and search for vulnerabilities.

Identify the Assets


Assets are the reason threats exist. An adversary’s goal is to gain access to an asset. The security team needs to
identify which assets need to be protected from an unauthorised user. Assets can be categorised as physical or
abstract, such as employee safety, the company's reputation, etc. Assets can interact with other
assets, which makes them a pass-through point for an adversary.
When identifying assets, the following data should be identified and collected:
• Numerical ID - Every asset should be accompanied by a numerical ID assigned for cross-referencing with
various threats and vulnerabilities that exist.
• Name - Provide a suitable name to the asset.
• Description - Write a description that underlines the fact why the asset needs protection.

Identify the Trust Levels


Trust levels are assigned to entry/exit points to define the privileges an external entity must have to access and affect the
system. Trust levels are categorised per the privileges assigned or credentials supplied, and cross-referenced with the
entry/exit points and protected resources. When identifying trust levels, the following data should be identified and
collected:
• Numerical ID - Each trust level should have a numerical ID to cross-reference with entry/exit points and
assets.
• Name - Assign a name to the trust level.
• Description - Write a description that explains more detail about the trust level and its purpose.

How to Gather Information?


The following actions need to be performed in order to gather information:
• Gather preliminary information about the application through manual documentation review
• Evaluate the criticality of information by taking various factors into consideration
• Identify the application type/category by considering various factors
• Identify the dependencies the application has on in-house/ outsourced/ third party/ client applications
• Gather web-based information through the use of automated tools and techniques
• Establish the application functionality and connectivity, and understand how it works
• Review application design and architecture to ensure that appropriate security requirements are enforced
• Check the source code of a web application manually for security issues
Identify the basic objective of the application and the scope of its implementation. To do this, the individual should
adopt a method of reaching out to different stakeholders internally (the business, the infrastructure team and the
development team) and of conducting web-based research on similar applications.
The Analyst should follow an objective method as described below:
Capture the discussions and create a mapping of the application in a tracker. If there are results from internet sources
or subscriptions, align these results with the points captured as part of the discussions to identify any known
vulnerabilities. These vulnerabilities can be highlighted as triggers for future security threats or events.


For identifying the criticality of the application, set up discussions primarily with business stakeholders and users of
the applications for the tasks detailed in the table below:

SOURCE DESCRIPTION

1. Stakeholder discussions Hold discussion with business stakeholders to understand the scope of an
application, the underlying feature set, data flow related to the application and
type of data transaction.
Hold discussion with the infrastructure team to understand where the application is
hosted, underlying infrastructure supporting the applications, which include details
on server, hosting (in-house, outsourced data-centre, cloud, etc.), (supporting
technologies – virtualisation, etc), the set of protocols used and their objectives.
Hold discussion with the application development team (in case of in-house
development) and gather information on the type of programming language
being used and the type of interfaces, like Rich Internet Applications (RIA), which make
extensive use of Ajax, Flash, HTML5, etc.

2. Internet search Conduct a broad internet search to identify risks and vulnerabilities of
the application based on the technologies, hosting environment, programming
language and interfaces. Start with some of the major search engines, using
different keywords and word combinations. Narrow the results by searching within the
search results or by formulating a more advanced query, and follow link after link as
each lead is pursued.

3. Subscription Subscribe to white papers or data feeds from various sources such as OWASP,
white papers of product vendors, and research papers from analyst firms like Gartner,
Forrester, IDC, etc.

SOURCE DESCRIPTION

1. Business owners Hold discussion with business stakeholders to understand the business importance
of an application:
• Need for this application in the environment, is it business critical, is it internal
tracking application, is it for bringing operational efficiency for a task which
was done manually earlier?
• What are the input elements, processing elements and the expected outcomes?
• Is there an interdependency of the application with other processes, applications
or any infrastructure components?
• Who all are the typical users of the applications, what is their role and at what
stage of application processing?
Identify through business discussion if the application is utilised by different lines
of business in case of a corporate application or functional application. Determine
the role of different lines of business and usage of an application.
Identify different elements which characterise the criticality of the application. These
factors include, but are not limited to:
• Type of data used – Personally Identifiable Information (PII), Financial
Information, Protected Health Information (PHI)
• Volume of data used


• Business sensitivity of transactions on the application from the perspective of
timeliness, recovery time objective or recovery point objective
Understand any client specific requirements that business needs to adhere to
through the use of an application. This may be based on the dependency of the
application on any client processes or client applications.
Understand compliance requirements from the client or business perspective. These
may include data protection requirements, and regulatory and legal requirements.

2. Application development Understand the current security controls deployed across the application from the
teams or Infrastructure following perspective:
support teams • Hosting of the application and security controls related to physical security,
segmentation of zones, network controls around the application, open ports
and their requirements.
• Identity and access management controls – who has access to the application,
what kind of access (super user or user), type of access (read, write or execute),
type of authentication, segregation of duties, etc.
• Type of encryption controls for data at rest and during transactions.
• Any other specific information/ cyber security policy requirements.
Understand the compliance requirements from the corporate information security policy
or client-specific security controls. These may relate to meeting internal
certification requirements like ISO 27001 or PCI-DSS standards certification, or any
compliance standard that the organisation is obliged to follow.

Capture the discussions and create a mapping of the application in the tracker. Use this information to provide a
prioritization across the different applications. These discussions will help the analyst understand the importance and
business impact of the application in case of a breach vis-à-vis other application sets. This knowledge helps the
analyst to prioritize communication in the event of a possible security threat.
After that, the Analyst must understand the type of application and the dependencies it has on in-house/ outsourced/
third party/ client applications, and evaluate the application category by considering the following factors:
• type of application - for example, legacy applications, third party application, custom code, mobile application,
communication and integration APIs and packaged enterprise applications such as ERP and CRM
• type of environment -such as development, testing, staging, production
• externally provisioned systems - third party or client systems
• application programming interfaces
Also gather web-based information through the use of automated tools and techniques such as:
• search engine discovery
• web crawlers
• identification of application entry points
• mapping of execution paths through the application

Characterize the System


Background information about the system will be gathered to characterize it. This information will help the security
team to focus and identify the specific areas that need to be addressed.


There are five categories of background information:


• Use scenarios
• External dependencies
• External security notes
• Internal security notes
• Implementation assumptions

Use scenarios
Use scenarios describe how a system will be used or not used in terms of configuration or security goals and non-goals.
Use case scenarios can be defined both in a supported and unsupported configuration. Not addressing use scenarios
may result in a vulnerability. Use scenarios have the ability to limit the scope of analysis along with validating the
threat model. These use scenarios can be utilised by the testing team for conducting security testing and identifying
the possible attack paths. The architect and end users typically identify the use scenarios.
When defining use scenarios, the following data should be collected:
• Numerical ID - Each use scenario should have a unique identification number.
• Description - A description that defines the use scenario and whether it is supported or not.
External dependencies
External dependencies define a system’s dependence on outside resources and the security policy outside the system
being analysed. If the user does not treat the threat from an external dependency efficiently, it can result in a vulnerability.
The following data can be taken into consideration while defining external dependencies:
• Numerical ID - Every external dependency should be provided a unique identification number.
• Description - A description of the dependency.
• External security note reference - Within an application, the external security notes can be cross-referenced
on other components with external dependencies.
External security notes
External security notes are provided to inform the users of security and integration information for the system.
An external security note can be a warning against misuse or a form of guarantee made by the system to the
user. External security notes are used to validate external dependencies and can be used as mitigation to a threat.
However, this is not a good practice as it makes the end user responsible for security.
The following data can be collected for defining external security notes:
• Numerical ID - As a standard practice, every external security note must be provided a unique identification
number.
• Description - A description of the note.
Internal security notes
Internal security notes are used in defining a threat model. At the same time, these notes also explain the concessions
made in the design and implementation of a system's security.
In order to define internal security notes, following data can be collected:
• Numerical ID - Every internal security note should be identifiable with a unique identification number.
• Description - A description of the security concession and justification for the concession.
Implementation assumptions
Implementation assumptions are made during the design phase about features that will be developed later in the process.
In order to define implementation assumptions, the following data can be considered:
• Numerical ID - Each internal implementation assumption should have a unique identification number.
• Description - A description of the method of implementation.


Modelling the System


• Data flow diagrams show how data flows logically through the application (end to end), and allow
identification of affected components through critical points (i.e. data entering or leaving the system,
storage of data) and, finally, the control flow through these types of components.
• The trust boundaries provide the locations where the level of trust changes.
• The process components display the processing of data from network sources such as web servers,
application servers and database servers.
• As the name suggests, entry points are the points from where the data is input to the system such as input
fields, methods, etc. On the other hand, exit points are the points from where the data outputs from the
system. Both entry and exit points are elements of a trust boundary.

Application Penetration Testing


Application security should be considered as a process. For any process, a framework needs to be established, which
should include components, such as planning and designing, validation, review and analysis of the application threats
in the system.
A proper security framework should include continuous security training for all the developers, threat models for the
entire system, regular code reviews and frequent penetration testing. An application security analyst needs to analyse
a system’s architecture and ‘business model’ to uncover security weaknesses and needs. To do this, an individual
needs to scrutinize its constituent applications and application logic to uncover subtle but pervasive security and
privacy issues. The application analyst needs to analyse a system’s processes to assess security at an architectural level, as
dissimilar applications interact and are coupled.

Black box testing


The black box methodology relies only on information ordinarily available to two distinct classes of attackers: insiders
and outsiders. Any source code, object code or information concerning transmission protocols will be extracted by the
application analyst using whatever tools are necessary and appropriate for the task at hand.
In blackbox testing, the Application Under Test (AUT) is validated against its requirements considering the inputs and
expected outputs, regardless of how inputs are transformed into outputs. Testers are least concerned with internal
structure or code that implements the business logic of an application.
Primarily, there are four types of techniques for designing test cases for black box testing (a short sketch of the first technique follows the list):
• BVA (Boundary Value Analysis)
• EP (Equivalence Partitioning)
• Decision Tables
• State Transition (Tables and diagrams)
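As an illustration of Boundary Value Analysis, the sketch below exercises a hypothetical validate_pin function at and around the edges of its assumed valid range of 4 to 6 digits:

    # Boundary Value Analysis: test values at and just beyond the boundaries
    # of the valid input range. validate_pin and its rule are hypothetical.
    def validate_pin(pin: str) -> bool:
        return pin.isdigit() and 4 <= len(pin) <= 6

    cases = {
        "123":     False,  # length 3: just below the lower boundary
        "1234":    True,   # length 4: at the lower boundary
        "123456":  True,   # length 6: at the upper boundary
        "1234567": False,  # length 7: just above the upper boundary
    }

    for pin, expected in cases.items():
        assert validate_pin(pin) == expected, pin
    print("all boundary cases pass")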

White box testing


Primary focus of this methodology is to validate how the business logic of an application is implemented by
code. The techniques for testing the internal structure of an application are:
• Code Coverage
• Path Coverage

Grey box testing


This is a mixture of black box and white box. In this methodology, the application security analyst mainly tests the
application as in black box, but some business-critical or vulnerable modules of the application are tested as
white box.


White box code inspection is used in analysing static behaviour, whereas black box exploratory testing is used in
determining the dynamic behaviour of a system. The testing process helps in assessing the coupling between systems and the
interactions between the distributed systems.
The application analyst makes use of threat modeling techniques to understand the risk to a system from malicious
users or applications. Threat modeling allows anticipating attacks by understanding how an adversary chooses targets
(assets) and entry points and conducts an attack.
The threat models will profile how adversaries view the system and its applications, and how they will attempt exploitation. A set of
diagrammatic threat models are generally conceptualised and reviewed with key stakeholders. This is important not
only in identifying potential threats but also in understanding what application defenses must be defeated in order
for a threat or series of threats to be realised.
Once a threat model is reviewed and established as accurate, the process of test planning begins. In the test plan,
each threat path is refined into a general set of test cases that detail the tools, techniques and strategies for finding
vulnerabilities that will realise each threat. In some cases, the test cases are specific and detailed. In others, they are
more high-level direction for an application analyst in order to guide their exploratory testing of a feature or set of
features.
This test plan is also reviewed with key stakeholders, and any modifications will be mutually agreed upon. Once
the application analyst obtains sign-off on the test plan, test execution begins. In the event that fruitful
attack vectors are found during test execution, the test plan and threat model will be updated to reflect these new
approaches. During test execution, application analysts make parallel progress on the threat model and the test plan.
Daily updates are given to the point of contact, and if a vulnerability is identified, it is documented in
a manner that describes how to reproduce the problem and its exploitability, including the risk scenario,
severity, reproduction steps and remediation recommendations. In the event that testing of a particular feature does
not reveal vulnerabilities, application analysts will still document the testing that was performed in detail. This is
important because it is imperative to understand not only where an application fails but also where it is implemented
securely.
The following is a summary of the attacks that large systems are typically most susceptible to due to malicious
outsiders and insiders (users, processes and applications):
• Authentication/ authorization attacks: These attacks include brute-forcing passwords (both dictionary
attacks and common account/password strings) and credentials, exploiting insufficient and poorly implemented
protection and recovery of passwords, key material (and so forth) both in memory and at component boundaries.
This includes attempting to bypass authentication, predict/hijack an authorised session, session expiration
prevention, privilege escalation, data tampering and so forth.
• System dependency attacks: By carefully monitoring the environment of use of an application, crucial system
resources can be identified and targeted in an attempt to disrupt access to them.
A system must have the ability to securely process corrupt, missing and Trojan files, including cookies and registry
keys. Other known attacks against any reused third party components will also be catalogued.
• Input attacks: Large systems are often susceptible to input strings that tend to cause insecure behaviours.
Attacks in this class include long strings (buffer overruns), SQL injection, command injection, format strings, LDAP
injection, OS commanding, SSI injection, XPath injection, escape characters, and special/problematic character
sets. A variety of initial configurations and command line switches may also affect the system.
• Design attacks: Systemic design flaws often allow an application to be exploited. This includes unprotected
internal APIs, alternate routes through and around security checks, open ports, forcing loop conditions and faking
the source of data (content spoofing). Race conditions and attacks that take advantage of time discrepancies
(Time of Check/Time of Use) are of particular concern in this category.
• Information disclosure attacks: Applications can often be forced to disclose sensitive or useful data in any
number of ways. Error messages generated by the application often contain information useful to attackers.
Attacks of this type include directory indexing attacks, path traversal attacks and determination of whether the
application allocates resources from a predictable and accessible location. The intent with this set of attacks is to
isolate any and all cases of information leakage.


• Logic/ implementation (business model) attacks: The hardest attacks to apply are often the most lucrative
for an attacker. These include screening temporary files for sensitive information, attempts to abuse internal
functionality to expose secrets and cause insecure behaviour, checking for faulty process validation and testing
an application’s ability to be remote-controlled. Users may get in between the time-of-check and time-of-use of
sensitive data (‘man-in-the-middle’) and perform denial of service at the component level.
• Cryptographic attacks: One of the biggest issues in cryptography is improper implementation. While
cryptography is exceptionally well suited to protect data at rest (when stored) or in transit, several challenges arise
when implementing cryptography on data in use. There are often hidden cracks in cryptography implementation.

Application Test Plan


For any activity, some planning is always required and the same is true for application testing. Without a proper plan,
there is always a high risk of getting distracted during the testing.
A comprehensive test plan has the following components:
• Scope
i. Overview of AUT
ii. Features (or areas) to be tested
iii. Exclusions (features or areas not to be tested) with reason
iv. Dependencies of various testing activities on each other
• Objectives: Objectives define the goals of the testing activity. These can be validation of bug fixes, new features
being added, revamping of AUT, etc.
• Focus: This section emphasises the aspects of the application to be included in the testing process, such as security,
functionality, usability, reliability, performance, efficiency, etc.
• Approach: This section provides information about the testing methodology to be adopted as per the area of
AUT. For example, in the Single Touch Payroll (STP) solution of an ERP application, the approach section may
state that black box testing will be the approach for payroll, while for reports the
approach will be grey box testing.
• Schedule: This section describes who will be doing what and where on the AUT, when and how. The Schedule section
is, in effect, the ‘4 Ws and H’ of the test plan. Normally it is a simple table, but every organisation may have its own customised
format according to its needs. When the test plan is ready and the application is under development,
testers design and document the test cases.

Process of Penetration Testing


A penetration test, or pen test, evaluates the security of an IT infrastructure by exploiting its vulnerabilities. These
vulnerabilities could exist in operating systems, service and application flaws, improper configurations or risky
end-user behaviour.
Such assessments help in validating the efficacy of defensive mechanisms and the end-user adherence to
various security policies. Penetration tests can be carried out using manual or automated technologies which can
help in compromising servers, endpoints, web applications, wireless networks, network devices, mobile devices and
other exposure points. After the vulnerabilities have been exploited on a system, the testers attempt to use
the compromised system to launch more exploits at internal resources, to achieve higher levels
of security clearance and better access to electronic assets and information.
The information pertaining to security vulnerabilities that is exploited through penetration testing is aggregated and
presented to the IT and networking staff within an organisation. These professionals then make strategic conclusions
and prioritise the related remediation efforts.
The fundamental purpose of penetration testing is to measure the feasibility of system and end-user
compromise, and thereby to evaluate the effect of any such compromise on the overall process.


How often should penetration testing be performed?


Penetration testing is crucial towards ensuring consistent IT and network security management. This is done by
discovering new threats and emerging vulnerabilities that could potentially be exploited by the attackers. Hence,
penetration testing should be conducted on a regular basis.
Apart from the regularly scheduled analyses and assessments that are mandatory, tests should also be run whenever:
• New network infrastructure or related applications are being added
• Significant upgrades or modifications are applied to infrastructure or applications
• New office locations are established
• Security patches are applied
• End user policies are modified

Application penetration testing methodology


In this methodology, the tester knows nothing, or has very little information, about the application to be tested.
The testing model consists of:
• Tester: Who performs the testing activities
• Tools and methodology: The core of the testing guide
• Application: The black box to test
The set of active tests has been divided into 11 sub-categories, comprising 91 controls in total.

Information gathering
For the purpose of search engine discovery and reconnaissance, there are direct and indirect methods. Searching
the indexes and the associated content from caches falls under the direct method category. On the other hand, indirect
methods consist of sensitive design and configuration information obtained from forums, newsgroups, and
tendering websites.
As soon as the search engine bot completes the crawling process, it starts the process of indexing the webpages
based on tags and associated attributes, for example the <TITLE> tag, which returns a relevant search result.

How to test?
Use a search engine (see the example operators below) to search for:
• Network diagrams and configurations
• Archived posts and emails by administrators and other key staff
• Log on procedures and username formats
• Usernames and passwords
• Error message content
• Development, test, UAT and staging versions of the website
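For instance, most major search engines support operators that narrow such searches. The queries below are illustrative only, and the domain is a placeholder:

site:example.com filetype:pdf          (documents indexed under the target domain)
site:example.com inurl:admin           (administrative entry points)
site:example.com intitle:"index of"    (exposed directory listings)
cache:example.com                      (cached copies of pages since removed)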
Configuration and deployment management testing
Proper configuration of single elements that make up an application architecture is important to prevent mistakes
that might compromise the security of the whole architecture.
Configuration review and testing is a critical task in creating and maintaining an architecture. This is because many
different systems will be usually provided with generic configurations that might not be suited to the task they will
perform on the specific site where they are installed. While the typical web and application server installation contains a lot of functionality (like application examples, documentation and test pages), anything that is not essential should be removed before deployment to avoid post-install exploitation.


Sample and known files and directories


Servers and web applications often contain sample applications and files that help the developer and confirm that the server is working properly after installation. Several such samples have proved vulnerable in the past; examples include CVE-1999-0449 (denial of service in IIS when the ExAir sample site had been installed), CAN-2002-1744 (directory traversal vulnerability in [Link] in Microsoft IIS 5.0), CAN-2002-1630 (use of [Link] in Oracle 9iAS), and CAN-2003-1172 (directory traversal in the view-source sample in Apache's Cocoon).
CGI scanners include a detailed list of known files and directory samples that are provided by different web or
application servers, and might be a fast way to determine if these files are present.
However, the only way to be sure is to do a full review of the contents of the web server or application server and
determine whether they are related to the application itself or not.

Comment review
It is very common, and even recommended, for programmers to include detailed comments on their source code
to allow other programmers to better understand why a given decision was taken in coding a given function.
Programmers usually add comments while developing large web applications. However, comments included inline in HTML code could reveal internal information that an attacker should not know. Sometimes entire blocks of source code are commented out because the functionality is no longer required, yet this code is unintentionally leaked in the HTML pages returned to users.
To determine whether any information is being leaked through comments, a comment review should be done. A thorough review involves analysing the web server, static and dynamic content, and performing file searches. Browsing the site in an automatic or guided fashion and storing all the content retrieved is also useful; the retrieved content can then be searched to analyse the HTML comments present in the code.

System configuration
CIS-CAT (Center for Internet Security - Configuration Assessment Tool) helps security personnel by providing a fast and detailed assessment of a target system's conformance to CIS Benchmarks. CIS also provides recommended system configuration hardening guides covering databases, operating systems, web servers and virtualisation.

Identity management testing


It is common in modern enterprises to define the system roles to manage users and authorisation to system resources.
In most system implementations, it is expected that at least two roles exist: administrators and regular users. The first represents a role that permits access to privileged and sensitive functionality and information; the second represents a role that permits access to regular business functionality and information. Well-developed roles should be aligned with the business processes supported by an application.
Plain authorisation is not the only way of managing access to system objects. Where confidentiality is not too critical, softer controls like application workflow or audit logging can be used to support data integrity requirements, without restricting user access to functionality or creating complex, difficult-to-manage role structures.
One must keep the Goldilocks principle in mind during role engineering: defining roles that are too few and too broad (which would expose access to functionality that users don't need) is as bad as defining roles that are too many and too tightly scoped (which would restrict access to functionality that users do need).
How to test: Either with or without the help of system developers or administrators, develop a role versus permission
matrix. The matrix should enumerate all roles that can be provisioned and explore the permissions that are allowed
to be applied to the objects including any constraints.
If a matrix is provided with an application, it should be validated by the tester. If it doesn't exist, the tester should generate it and determine whether the matrix satisfies the desired access policy for the application (a minimal sketch of such a check follows).
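As an illustration, a role-versus-permission matrix can be represented and checked programmatically. The roles, objects and permissions in this Python sketch are hypothetical, not taken from any particular application:

# Hypothetical role-versus-permission matrix: each role maps to the
# operations it may perform on each object type.
ROLE_MATRIX = {
    "administrator": {"invoice": {"create", "read", "update", "delete"}},
    "regular_user":  {"invoice": {"create", "read"}},
}

def is_allowed(role: str, obj: str, operation: str) -> bool:
    """Return True if the role may perform the operation on the object."""
    return operation in ROLE_MATRIX.get(role, {}).get(obj, set())

# A tester can then assert the desired access policy explicitly:
assert is_allowed("administrator", "invoice", "delete")
assert not is_allowed("regular_user", "invoice", "delete")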


Session management testing


One of the core components of any web-based application is the mechanism by which it controls and maintains state for a user interacting with it. This is referred to as session management, and is defined as the set of all controls governing stateful interaction between a user and the web-based application. It broadly covers everything from how user authentication is performed to what happens when the user logs out.
How to test: By using an intercepting proxy or traffic intercepting browser plug-in, trap all responses where a cookie
is set by an application (using the set-cookie directive) and inspect the cookie for the following:
• Secure attribute - Whenever a cookie contains sensitive information or a session token, it should only be passed over an encrypted tunnel. For example, after a user logs in to an application, the application sets the session token using a cookie; the tester should verify that the token is tagged with the 'secure' flag. If it is not, the browser will agree to pass it over an unencrypted channel (such as HTTP), which might lead to users submitting their cookie over an insecure channel.
• HttpOnly attribute - This attribute should always be set, even though not every browser supports it. It helps secure the cookie from being accessed by client-side scripts; it does not eliminate cross-site scripting risks, but it does eliminate some exploitation vectors. Check to see if the ";HttpOnly" tag has been set.
• Domain attribute - Verify that the domain has not been set too loosely. It should match the server that is meant to receive the cookie. For example, if the application lives on the server app.[Link], it should be set to "; domain=[Link]" and NOT "; domain=.[Link]"; the looser setting would allow other, potentially vulnerable, servers to obtain the cookie.
• Path attribute - The path attribute, just like the domain attribute, should be verified to ensure it is not set loosely. Note that even if the domain attribute is tightly configured, if the path is set to the root directory "/" the cookie is vulnerable to less secure applications on the same server. For example, if an application resides at /myapp/, verify that the cookie's path is set to "; path=/myapp/" and NOT "; path=/".
• Expires attribute - If the expires attribute is set to a time in the future, verify that the cookie does not contain any sensitive information. For example, if a cookie is set to "; expires=Sun, 31-Jul-2016 [Link] GMT" and it is currently July 31st, 2014, the tester should inspect the cookie. If the cookie is a session token that is stored on the user's hard drive, an attacker or a local user (such as an admin) with access to this cookie can access the application by resubmitting the token until the expiration date passes.
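As a minimal sketch of such an inspection (assuming the Python requests library; the URL and credentials are placeholders for an application you are authorised to test):

# Trap a login response and inspect the attributes of each Set-Cookie header.
import requests

resp = requests.post("https://app.example.com/login",
                     data={"user": "tester", "password": "secret"})

for header in resp.raw.headers.getlist("Set-Cookie"):
    attrs = header.lower()
    print(header)
    print("  secure flag set:  ", "secure" in attrs)
    print("  httponly flag set:", "httponly" in attrs)
    print("  domain attribute: ", "domain=" in attrs)
    print("  path attribute:   ", "path=" in attrs)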
Input Validation testing: HTTP Verb Tampering tests the web application's response to different HTTP methods
accessing system objects. For every system object discovered during spidering, a tester should attempt accessing all
those objects with every HTTP method.
HTTP Parameter Pollution tests the application's response to receiving multiple HTTP parameters with the same name. For example, if the parameter username is included in the GET or POST parameters twice, which one is honoured, if any? (Both tests are sketched below.)
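A minimal sketch of both probes, assuming the Python requests library and a placeholder URL:

# Probe an endpoint with several HTTP verbs, then with a duplicated parameter.
import requests

url = "https://app.example.com/account"

for verb in ("GET", "POST", "PUT", "DELETE", "HEAD", "OPTIONS", "TRACE"):
    r = requests.request(verb, url)
    print(verb, r.status_code)   # does the server honour unexpected verbs?

# Parameter pollution: 'username' is sent twice; observe which value is honoured.
r = requests.get(url, params=[("username", "alice"), ("username", "bob")])
print(r.status_code, r.url)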
Error handling: An important aspect of secure application development is to prevent information leakage. Error
messages give an attacker great insight into the inner workings of an application. The purpose of reviewing error
handling code is to assure the application fails safely under all possible error conditions, expected and unexpected.
No sensitive information is presented to a user when an error occurs.
When error messages are properly scrubbed, it is difficult to pull off attacks such as SQL injection that rely on verbose error output. This lessens the attack footprint, and the attacker has to resort to "blind SQL injection", which is more difficult and time-consuming.

An efficient error/exception handling strategy is crucial for the following reasons:


• Good error handling does not provide the attacker any important information, which frustrates further attacks on the application.
• A proper centralised error strategy is easier to maintain, reducing the chance of uncaught errors that bubble up to the front end of the application.
• Leaked information can result in social engineering exploitation.
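A minimal sketch of such a centralised strategy in Python (the incident-id scheme is illustrative):

# Fail safely: log full details centrally, show the user only a generic message.
import logging
import uuid

log = logging.getLogger("app")

def handle_request(process):
    try:
        return process()
    except Exception:
        incident = uuid.uuid4().hex   # correlates the user report with the log entry
        log.exception("Unhandled error, incident id %s", incident)
        # No stack trace, query text or internal path reaches the user.
        return f"An internal error occurred (reference {incident})."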


4.1.3 Application Security Identification Tools

Vulnerability scanners, and more specifically web application scanners, otherwise known as penetration testing tools (i.e. ethical hacking tools), have historically been used by security organisations within corporations and by security consultants to automate the security testing of HTTP requests and responses. However, this is not a substitute for actual source code review.
Reviews of an application's source code can be accomplished either manually or in an automated fashion. Given the common size of individual programmes (often 500,000 lines of code or more), the human brain cannot execute the comprehensive data flow analysis needed to completely check all circuitous paths of an application programme for vulnerability points. The human brain is better suited to filtering, interpreting and reporting the output of commercially available automated source code analysis tools, rather than trying to trace every possible path through a compiled code base to find root-cause vulnerabilities.
The two types of automated tools associated with application vulnerability detection (application vulnerability
scanners) are Penetration Testing Tools (often categorised as Black Box Testing Tools) and static code analysis tools
(often categorised as White Box Testing Tools).
According to Gartner Research, "...next-generation modern Web and mobile applications require a combination of
SAST and DAST techniques, and new interactive application security testing (IAST) approaches have emerged that
combine static and dynamic techniques to improve testing...". Because IAST combines SAST and DAST techniques, the
results are highly actionable, and can be linked to the specific line of code and recorded for replay later for developers.
Industries such as banking and large e-commerce corporations have been the early-adopter customer profile for these tools. Both black box and white box testing tools are required for detecting application security flaws. Black box testing tools are ethical hacking tools that attack the application surface, thereby exposing vulnerabilities rooted in the source code.
The penetration testing tools are executed against an existing, deployed application. White box testing tools (meaning source code analysis tools) are used by either the application security groups or the application development groups.
Typically introduced into a company through the application security organisation, the white box tools complement the black box testing tools by giving specific visibility into the root vulnerabilities within the source code, in advance of the source code being deployed.
Vulnerabilities identified with White Box testing and Black Box testing are typically in accordance with the OWASP
taxonomy for software coding errors. White Box testing vendors have recently introduced dynamic versions of their
source code analysis methods which operate on deployed applications. Given that the White Box testing tools have
dynamic versions like the Black Box testing tools, both tools can be correlated in the same software error detection
pattern ensuring full application protection to the client company.
The advances in professional malware targeted at internet customers of online organisations have seen a change in
web application design requirements since 2007. It is generally assumed that a sizable percentage of internet users
will be compromised through malware and that any data coming from their infected host may be tainted. Therefore,
application security has begun to manifest more advanced anti-fraud and heuristic detection systems in the back-
office rather than within the client-side or web server code.

4.1.4 Application testing tools

There are at least 50 testing tools available in the market today, including both paid and open-source tools. Some are purpose-specific, offering services such as UI testing, functional testing, DB testing, load testing, performance testing, security testing or link validation; others are broader and capable of testing the major components of an application. At its core, the concept of 'application testing' is functional testing.


Here is a list of the most important and fundamental features provided by almost all 'functional testing' tools:
• Record and play
• Parameterise the values
• Script editor
• Run (the test or script, with debug and update modes)
• Report of run session
The vendors focus on specific features that make their products unique among the competitors. The features listed
above are common and are found in almost all functional testing tools.
Following is the list of few widely used Functional Testing tools.
• HP QTP (Quick Test Professional)
• Selenium
• IBM Rational Robot
• Test Complete
• Push to Test
• Telerik

SYN, Stealth, XMAS, NULL, IDLE and FIN Scans

NESSUS - Nessus is a popular vulnerability scanner developed by Tenable, Inc. It is used for scanning various technologies including operating systems, network devices, hypervisors, databases, web servers, and critical infrastructure. The vulnerabilities and exposures it can scan for include flaws that could allow unauthorised access to or control of sensitive data on a system; misconfigurations (e.g. open mail relay, missing patches, etc.); default passwords; and DoS vulnerabilities. To know more about Nessus and to install it, one can visit the website - [Link]
SYN - The SYN or stealth scan is known as a half-open scan because it does not complete the TCP three-way handshake. Initially, the attacker sends a SYN packet to the target. If a SYN/ACK frame is received back, it is assumed the target would complete the connection and the port is listening. If an RST is received from the target, it means the port is not active or is closed. The advantage of this scan is that fewer IDS systems log the activity as an attack or a connection attempt.
XMAS - With the XMAS scan method, one sends a packet with the FIN, URG, and PSH flags set. If the port is open, there is no response; if the port is closed, the target responds with an RST/ACK packet. These scans work against target systems that follow the RFC 793 implementation of TCP/IP and do not work against any version of Windows.
FIN - Similar to an XMAS scan, FIN scan sends a packet with only the FIN flag set. FIN scans receive the same type of
response and have the same type of limitations as XMAS scans.
NULL - A NULL scan sends a packet with no flag set. In terms of limitations and responses, it is the same as XMAS
and FIN type of scans.
IDLE - An IDLE scan uses a spoofed IP address to send a SYN packet to a target; the port is determined to be open or closed depending on the response. These scans monitor the IP header identification (IPID) sequence numbers of an idle 'zombie' host to determine the port scan response.
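For illustration, a single half-open (SYN) probe can be crafted with the Python scapy library. This is a sketch only: it assumes root privileges, and the address shown is a documentation placeholder standing in for a host you are explicitly authorised to test.

# Half-open (SYN) probe of one port using scapy.
from scapy.all import IP, TCP, sr1, send

target, port = "192.0.2.10", 80    # placeholder (TEST-NET-1 address)
reply = sr1(IP(dst=target)/TCP(dport=port, flags="S"), timeout=2, verbose=0)

if reply is None:
    print("filtered or no response")
elif reply.haslayer(TCP) and (int(reply[TCP].flags) & 0x12) == 0x12:   # SYN/ACK
    print("port open")
    # Tear down with RST so the handshake is never completed (stealth).
    send(IP(dst=target)/TCP(dport=port, flags="R"), verbose=0)
elif reply.haslayer(TCP) and (int(reply[TCP].flags) & 0x04):           # RST
    print("port closed")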
IPEye - IPEye, a command-line TCP port scanner, is capable of performing SYN, FIN, NULL, and XMAS scans. IPEye probes the ports on a target system and reports them as closed, reject, drop, or open. 'Closed' indicates that there is a computer on the other end, but it does not listen on the port. 'Reject' means that a firewall has rejected the connection to the port. 'Drop' means that a firewall drops everything sent to the port, or that there is no system at the other end. 'Open' indicates some kind of service listening on the port. These responses help an attacker identify the type of system that is responding.


IPSecScan - IPSecScan is a tool that can scan either a single IP address or a range of addresses, looking for systems with IPSec enabled.
NetScan Tools Pro, hping2, KingPing, icmpenum, and SNMP Scanner are scanning tools that can also be used to fingerprint the operating system.
Icmpenum uses not only ICMP Echo packets to probe networks, but also ICMP Timestamp and ICMP Information packets. It supports spoofing and sniffing for reply packets, and is great for scanning networks where the firewall blocks ICMP Echo packets but fails to block Timestamp and Information packets.
The hping2 tool offers a host of features: OS fingerprinting; TCP, User Datagram Protocol (UDP), ICMP, and raw-IP ping modes; a traceroute mode; and the ability to send files between the source and target systems.
SNMP Scanner offers the capability of scanning a range or list of hosts, performing ping, DNS, and Simple Network Management Protocol (SNMP) queries.

4.1.5 Threat Risk Modelling

Threats can be ranked from the perspective of risk factors. By determining the risk factor posed by the various identified threats, it is possible to create a prioritised list of threats to support a risk mitigation strategy, such as deciding which threats have to be mitigated first. Different risk factors can be used to determine whether a threat ranks as a High, Medium, or Low risk. In general, threat risk models use factors such as those shown in the figure below:

Fig 4.3: Use of different factors by threat risk models to model risks


Generic Risk Model

A generic risk model takes into account the likelihood (i.e. the probability of an attack) and the impact (i.e. the damage potential).
It can be defined as: Risk = Likelihood x Impact

Likelihood or probability reflects the ease of exploitation, which depends on the type of threat and the system characteristics, and the possibility of realising the threat, which is determined by the existence of an appropriate countermeasure.
The following considerations can be taken into account to determine the ease of exploitation:
• Will the attacker be able to exploit this remotely?
• Is there a need for the attacker to be authenticated?
• Can the exploits be automated?
The impact is decided by the damage potential and the extent of the impact, for example the number of components affected by a particular threat.
Some factors to be considered for determining damage potential are:
• Can the attacker completely take over and manipulate the system?
• Can the attacker gain administration access to the system?
• Is the attacker capable enough to make the system crash?
• Can the attacker access sensitive information such as secrets, PII, etc.?
The following can help in determining the number of components that may be affected by a particular threat:
• How many data sources and systems are impacted?
• How 'deep' into the infrastructure does the damage reach?
These considerations help an application specialist calculate the overall risk arising from these threats. Qualitative values such as High, Medium and Low can then be assigned to the various likelihoods and impacts, as sketched below.
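A minimal sketch of such a qualitative Risk = Likelihood x Impact calculation in Python; the numeric scale and thresholds are illustrative assumptions, not a standard:

# Map qualitative ratings onto numbers, multiply, and map back to a rating.
SCALE = {"Low": 1, "Medium": 2, "High": 3}   # assumed scale

def overall_risk(likelihood: str, impact: str) -> str:
    score = SCALE[likelihood] * SCALE[impact]
    if score >= 6:
        return "High"      # e.g. High x Medium and above
    if score >= 3:
        return "Medium"    # e.g. High x Low, Medium x Medium
    return "Low"

print(overall_risk("High", "Medium"))   # -> High
print(overall_risk("Low", "Medium"))    # -> Low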


4.2 COUNTERMEASURES

4.2.1 Application Security Countermeasures

The primary function of countermeasure identification is to determine whether protective measures, such as security controls and policy measures, are in place. These measures are aimed at protecting against the threats identified via threat analysis. A threat that has no countermeasure is termed a 'vulnerability'.
Countermeasures are actions taken to ensure application security:
• ‘Application firewall’ is the most basic software countermeasure that limits the execution of files and the
handling of data by specific installed programs.
• Using a router which is also the most common hardware countermeasure can prevent the IP address of an
individual computer from being directly visible on the Internet.
• Conventional firewalls, encryption/decryption programs, anti-virus programs, spyware detection/removal
programs and biometric authentication systems are some of the other countermeasures.
Application security can be enhanced by Threat Modelling, which involves the following rigorous steps:
• Defining enterprise assets
• Identifying what each application does (or will do) with respect to these assets
• Creating a security profile for each application
• Identifying and prioritising potential threats and documenting adverse events and actions taken in each case
In this context, a threat is any potential or actual adverse event that can compromise the assets of an enterprise,
including both malicious events, such as a denial-of-service (DoS) attack and unplanned events, such as failure of a
storage device.
Apart from that, there are technologies available to assess applications for security vulnerabilities which include the
following:
• Static analysis (SAST), or ‘white-box’ testing analyses applications without executing them.
• Dynamic analysis (DAST), or ‘black-box’ testing identifies vulnerabilities in running web applications.
• Interactive AST (IAST) technology combines elements of SAST and DAST, and is implemented as an agent
within the test runtime.
• Mobile behavioral analysis discovers risky actions of mobile apps.
• Software composition analysis (SCA) analyses open source and third-party components.
• Manual penetration testing (or pen testing) uses the same methodology that cybercriminals use to exploit
application weaknesses.
• Web application perimeter monitoring discovers all public-facing applications and the most exploitable
vulnerabilities.
• Runtime application self-protection (RASP) is built into an application and can detect and prevent real-time
application attacks.


While a variety of application security technologies are available to help with this endeavour, none of them is foolproof. One must use the strengths of multiple analytic techniques across the entire application lifetime to bring down application risk.
The end goal for any organisation should be a mature, robust application security programme that:
• Assesses every application, whether built internally, bought or downloaded
• Enables developers to find and fix vulnerabilities while they are coding
• Takes advantage of automation and cloud-based services to easily incorporate security into the development
process and scale the programme
Once an afterthought in software design, security is becoming an increasingly important concern during development
as applications become more frequently accessible over networks and are, thus, vulnerable to a wide variety of threats.
Security measures built into applications and a sound application security routine minimize the likelihood that
unauthorised code will be able to manipulate applications to access, steal, modify, or delete the sensitive data.

The following checklist maps threat types to their corresponding countermeasures.

Authentication
1. Credentials and authentication tokens are protected with encryption in storage and transit.
2. Protocols are resistant to brute force, dictionary, and replay attacks.
3. Strong password policies are enforced.
4. Trusted server authentication is used instead of SQL authentication.
5. Passwords are stored with salted hashes.
6. Password resets do not reveal password hints and valid usernames.
7. Account lockouts do not result in a denial of service attack.

Authorisation
1. Strong ACLs are used for enforcing authorised access to resources.
2. Role-based access controls are used to restrict access to specific operations.
3. The system follows the principle of least privilege for user and service accounts.
4. Privilege separation is correctly configured within the presentation, business and data access layers.

Configuration Management
1. Least privileged processes, and service accounts with no administration capability, are used.
2. Auditing and logging of all administration activities is enabled.
3. Access to configuration files and administrator interfaces is restricted to administrators.

Data Protection in Storage and Transit
1. Standard encryption algorithms and correct key sizes are being used.
2. Hashed message authentication codes (HMACs) are used to protect data integrity.
3. Secrets (e.g., keys, confidential data) are cryptographically protected both in transit and in storage.
4. Built-in secure storage is used for protecting keys.
5. No credentials and sensitive data are sent in clear text over the wire.

Data Validation / Parameter Validation
1. Data type, format, length, and range checks are enforced.
2. All data sent from the client is validated.
3. No security decision is based upon parameters (e.g., URL parameters) that can be manipulated.
4. Input filtering via white list validation is used.
5. Output encoding is used.

Error Handling and Exception Management
1. All exceptions are handled in a structured manner.
2. Privileges are restored to the appropriate level in case of errors and exceptions.
3. Error messages are scrubbed so that no sensitive information is revealed to the attacker.

User and Session Management
1. No sensitive information is stored in clear text in the cookie.
2. The contents of authentication cookies are encrypted.
3. Cookies are configured to expire.
4. Sessions are resistant to replay attacks.
5. Secure communication channels are used to protect authentication cookies.
6. The user is forced to re-authenticate when performing critical functions.
7. Sessions are expired at logout.

Auditing and Logging
1. Sensitive information (e.g., passwords, PII) is not logged.
2. Access controls (e.g., ACLs) are enforced on log files to prevent unauthorised access.
3. Integrity controls (e.g., signatures) are enforced on log files to provide non-repudiation.
4. Log files provide an audit trail for sensitive operations and logging of key events.
5. Auditing and logging are enabled across the tiers on multiple servers.

4.2.2 Product lifecycle of application security


Application security overcomes the gaps in the security policy of an application by encompassing the measures
taken throughout the code's lifecycle. Application security is a crucial aspect of overcoming the flaws in design,
development, deployment, upgrade, or maintenance of the application.

Fig 4.4: Application Security- Product Lifecycle


Applications control the resources granted to them and determine how those resources are used by application users; application security is concerned with protecting this arrangement. The following terms are central:
• Asset: A resource of value, such as the data in a database or on a file system, or a system resource.
• Threat: Anything that can exploit the vulnerability and obtain, damage, or destroy an asset.
• Vulnerability: A gap or weakness in a security programme that can be exploited by threats to gain
unauthorised access to an asset.
• Attack (or exploit): An action taken to harm an asset.
• Countermeasure: A defense that addresses a threat and mitigates risk.

4.2.3 Building Secure Web Applications - Secure Coding

Computer software programs enable the various processes in a network to communicate with each other and run applications. Securing this software is becoming more important than ever as the focus of attackers moves towards the application layer.
It is usually more convenient and cost-effective to build secure software than to correct security issues after the software package has been completed. It is safer as well, since it avoids a security breach in the first place.
The principle of secure coding was developed keeping this in mind. It helps software engineers and other developers
anticipate security challenges and prepare for these issues at the design stage.
Secure coding is the practice of writing source code, or a code base, that is compatible with the best security principles for a given system and interface.
To develop a secure application, developers must learn important secure coding principles and how they can be applied.
As the security community becomes aware of more and more hacking and cyber-attack strategies, it builds new security mechanisms to protect against them. Through this collective contribution of developers, a large collection of secure coding practices has evolved.
The SEI CERT Coding Standards offer a collection of recommended steps to take to ensure that a program is secure, sorted according to programming language – C, C++, Java, Perl, and Android.
One can access them from the following links:
[Link]
OWASP has compiled a list of secure coding practices for application security.
[Link]
Secure coding principles described in OWASP Secure Coding Guidelines are:
• Input Validation
• Output Encoding
• Authentication and Password Management (includes secure handling of credentials by external services/
scripts)
• Session Management
• Access Control
• Cryptographic Practices
• Error Handling and Logging
• Data Protection
• Communication Security


• System Configuration
• Database Security
• File Management
• Memory Management
• General Coding Practices
Compliance with this control is assessed through the Application Security Testing Program (required by MSSEI 6.2):
[Link]

4.2.4 Mitigation Strategies

The checklist given earlier indicates various threats and countermeasures. Note that the list is not exhaustive, and there are many more ways to counter various types of threats. Once threats and corresponding countermeasures are identified, it is possible to derive a threat profile with the following criteria:
it is possible to derive a threat profile with the following criteria:
• Non mitigated threats: Threats which have no countermeasures and represent vulnerabilities that can be
fully exploited and cause an impact
• Partially mitigated threats: Threats partially mitigated by one or more countermeasures, which represent
vulnerabilities that can only partially be exploited and cause a limited impact
• Fully mitigated threats: These threats have appropriate countermeasures in place, and so do not expose vulnerabilities or cause an impact.
The objective of risk management is to reduce the impact that the exploitation of a threat can have on the application.
This can be done by responding to a threat with a risk mitigation strategy. In general, there are six options for handling threats:
• Do nothing: for example, hoping for the best
• Inform about the risk: for example, warning user population about the risk
• Mitigate the risk: for example, by putting countermeasures in place
• Accept the risk: for example, after evaluating the impact of the exploitation (business impact)
• Transfer the risk: for example, through contractual agreements and insurance
• Terminate the risk: for example, shutdown, turn-off, unplug or decommission the asset

The decision of which strategy is most appropriate depends on the impact the exploitation of a threat can have, the likelihood of its occurrence, and the costs of transferring it (i.e. costs of insurance) or avoiding it (i.e. costs or losses due to redesign). That is, such a decision is based on the risk the threat poses to the system.
Therefore, the chosen strategy does not mitigate the threat itself but the risk it poses to the system. Ultimately the
overall risk has to take into account the business impact since this is a critical factor for the business risk management
strategy. One strategy could be to fix only the vulnerabilities for which the cost to fix is less than the potential business
impact derived by the exploitation of the vulnerability. Another strategy could be to accept the risk when the loss of
some security controls (e.g. Confidentiality, Integrity, and Availability) implies a small degradation of the service and
not a loss of a critical business function. In some cases, transfer of the risk to another service provider might also be
an option.


4.3 OWASP TOP 10

4.3.1 Open Web Application Security Project

Open Web Application Security Project (or OWASP) operates as a non-profit and is not affiliated with any
technology company, which means it is in a unique position to provide impartial, practical information about AppSec
to individuals, corporations, universities, government agencies and other organizations worldwide. Operating as a
community of like-minded professionals, OWASP issues software tools and knowledge-based documentation on
application security. All of its articles, methodologies and technologies are made available free of charge to the
public. OWASP maintains roughly 100 local chapters and counts thousands of members.
OWASP seeks to educate developers, designers, architects and business owners about the risks associated with the
most common Web application security vulnerabilities. OWASP, which supports both open source and commercial
security products, has become known as a forum in which information technology professionals can network and
build expertise. The organization publishes a popular Top Ten list that explains the most dangerous Web application
security flaws and provides recommendations for dealing with those flaws.

Name | Owner | License | Platforms
Acunetix WVS | Acunetix | Commercial / Free (Limited Capability) | Windows
AppScan | IBM | Commercial | Windows
App Scanner | Trustwave | Commercial | Windows
AVDS | Beyond Security | Commercial / Free (Limited Capability) | N/A
BugBlast | Buguroo Offensive Security | Commercial | SaaS or On-Premises
Burp Suite | PortSwigger | Commercial / Free (Limited Capability) | Most platforms supported
Contrast | Contrast Security | Commercial / Free (Limited Capability) | SaaS or On-Premises
GamaScan | GamaSec | Commercial | Windows
Grabber | Romain Gaucher | Open Source | Python 2.4, BeautifulSoup and PyXML
Grendel-Scan | David Byrne | Open Source | Windows, Linux and Macintosh
GoLismero | GoLismero Team | GPLv2.0 | Windows, Linux and Macintosh
IKare | ITrust | Commercial | N/A
IndusGuard Web | Indusface | Commercial | SaaS
N-Stealth | N-Stalker | Commercial | Windows
Netsparker | MavitunaSecurity | Commercial | Windows
Nexpose | Rapid7 | Commercial / Free (Limited Capability) | Windows/Linux
Nikto | CIRT | Open Source | Unix/Linux
AppSpider | Rapid7 | Commercial | Windows
ParosPro | MileSCAN | Commercial | Windows
[Link] | Websecurify | Commercial | Macintosh
QualysGuard | Qualys | Commercial | N/A
Retina | BeyondTrust | Commercial | Windows
Securus | Orvant, Inc | Commercial | N/A
Sentinel | WhiteHat Security | Commercial | N/A
Vega | Subgraph | Open Source | Windows, Linux and Macintosh
Wapiti | InformáticaGesfor | Open Source | Windows, Unix/Linux and Macintosh
WebApp360 | TripWire | Commercial | Windows
WebInspect | HP | Commercial | Windows
SOATest | Parasoft | Commercial | Windows / Linux / Solaris
Trustkeeper Scanner | Trustwave SpiderLabs | Commercial | SaaS
WebReaver | Websecurify | Commercial | Macintosh
WebScanService | German Web Security | Commercial | N/A
Websecurify Suite | Websecurify | Commercial / Free (Limited Capability) | Windows, Linux, Macintosh
Wikto | Sensepost | Open Source | Windows
w3af | [Link] | GPLv2.0 | Linux and Mac
Xenotix XSS Exploit Framework | OWASP | Open Source | Windows
Zed Attack Proxy | OWASP | Open Source | Windows, Unix/Linux and Macintosh


The OWASP tools, documents and code library projects are divided into three categories. The first comprises tools and documents used for finding security-related design and implementation flaws. The second comprises tools and documents used to guard against security-related design and implementation flaws. Finally, there are tools and documents used for adding security-related activities into application lifecycle management (ALM).

The OWASP Risk Rating Methodology


Discovering vulnerabilities is important, but being able to estimate the associated risk to the business is just as
important. Early in the life cycle, one may identify security concerns in the architecture or design by using threat
modeling. Later, one may find security issues using code review or penetration testing. Or problems may not be
discovered until the application is in production and is actually compromised.
By following the approach here, it is possible to estimate the severity of all of these risks to the business and make
an informed decision about what to do about those risks. Having a system in place for rating risks will save time and
eliminate arguing about priorities.
This system will help to ensure that the business doesn't get distracted by minor risks while ignoring more serious
risks that are less well understood.
Ideally, there would be a universal risk rating system that would accurately estimate all risks for all organizations.
But a vulnerability that is critical to one organization may not be very important to another. So a basic framework is
presented here that should be customized for the particular organization.
Step 1: Identifying a Risk
Step 2: Factors for Estimating Likelihood
Step 3: Factors for Estimating Impact
Step 4: Determining Severity of the Risk
Step 5: Deciding What to Fix
Step 6: Customizing Risk Rating Model

4.3.2 Preventive measures as per OWASP top ten project

1. Injection
Preventing injection requires keeping data separate from commands and queries.
• The preferred option is to use a safe API, which avoids the use of the interpreter entirely or provides a parameterized interface, or to migrate to Object Relational Mapping Tools (ORMs); a minimal sketch follows this list. Note: Even when parameterized, stored procedures can still introduce SQL injection if PL/SQL or T-SQL concatenates queries and data, or executes hostile data with EXECUTE IMMEDIATE or exec().
• Use positive or “whitelist” server-side input validation. This is not a complete defense as many applications
require special characters, such as text areas or APIs for mobile applications.
• For any residual dynamic queries, escape special characters using the specific escape syntax for that
interpreter. Note: SQL structure such as table names, column names, and so on cannot be escaped, and
thus user-supplied structure names are dangerous. This is a common issue in report-writing software.
• Use LIMIT and other SQL controls within queries to prevent mass disclosure of records in case of SQL
injection.
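A minimal sketch of a parameterized query, using Python's built-in sqlite3 module; the table and data are hypothetical:

# User input is bound as data, never concatenated into the SQL string,
# so it cannot alter the structure of the query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "a@example.com"))

user_supplied = "alice' OR '1'='1"   # a classic injection attempt
rows = conn.execute("SELECT email FROM users WHERE name = ?",
                    (user_supplied,)).fetchall()
print(rows)   # [] -- the payload is treated as a literal name, not as SQL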


2. Broken Authentication.
• Where possible, implement multi-factor authentication to prevent automated, credential stuffing, brute
force, and stolen credential re-use attacks.
• Do not ship or deploy with any default credentials, particularly for admin users.
• Implement weak-password checks, such as testing new or changed passwords against a list of the top
10000 worst passwords.
• Align password length, complexity and rotation policies with NIST 800-63 B’s guidelines in section 5.1.1 for
Memorized Secrets or other modern, evidence based password policies.
• Ensure registration, credential recovery, and API pathways are hardened against account enumeration
attacks by using the same messages for all outcomes.
• Limit or increasingly delay failed login attempts. Log all failures and alert administrators when credential
stuffing, brute force, or other attacks are detected.
• Use a server-side, secure, built-in session manager that generates a new random session ID with high entropy after login (a sketch of such ID generation follows). Session IDs should not be in the URL; they should be securely stored and invalidated after logout, idle, and absolute timeouts.
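For instance, a high-entropy session identifier can be generated with Python's standard secrets module; this is a sketch of ID generation only, not a full session manager:

# Generate a cryptographically strong, random session identifier after login.
import secrets

def new_session_id() -> str:
    # 32 bytes of randomness, URL-safe encoded (about 43 characters).
    return secrets.token_urlsafe(32)

print(new_session_id())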
3. Sensitive Data Exposure.
Do the following, at a minimum, and consult the references:
• Classify data processed, stored or transmitted by an application. Identify which data is sensitive according
to privacy laws, regulatory requirements, or business needs.
• Apply controls as per the classification.
• Don’t store sensitive data unnecessarily. Discard it as soon as possible or use PCI DSS compliant tokenization
or even truncation. Data that is not retained cannot be stolen.
• Make sure to encrypt all sensitive data at rest.
• Ensure up-to-date and strong standard algorithms, protocols, and keys are in place; use proper key
management.
• Encrypt all data in transit with secure protocols such as TLS with perfect forward secrecy (PFS) ciphers,
cipher prioritization by the server, and secure parameters. Enforce encryption using directives like HTTP
Strict Transport Security (HSTS).
• Disable caching for responses that contain sensitive data.
• Store passwords using strong adaptive and salted hashing functions with a work factor (delay factor), such as Argon2, scrypt, bcrypt or PBKDF2 (a minimal sketch follows this list).
• Verify independently the effectiveness of configuration and settings.
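A minimal sketch of the salted, adaptive hashing point, using Python's standard hashlib and hmac modules; the iteration count is an assumption to be tuned per deployment:

# Store passwords as salted PBKDF2 hashes, never in clear text.
import hashlib, hmac, os

ITERATIONS = 600_000   # work factor; an assumed value, tune for your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)   # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison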
4. XML External Entities (XXE)
Developer training is essential to identify and mitigate XXE. Besides that, preventing XXE requires:
• Whenever possible, use less complex data formats such as JSON, and avoid serialization of sensitive data.
• Patch or upgrade all XML processors and libraries in use by the application or on the underlying operating
system. Use dependency checkers. Update SOAP to SOAP 1.2 or higher.
• Disable XML external entity and DTD processing in all XML parsers in the application, as per the OWASP
Cheat Sheet ‘XXE Prevention’.
• Implement positive (“whitelisting”) server-side input validation, filtering, or sanitization to prevent hostile
data within XML documents, headers, or nodes.
• Verify that XML or XSL file upload functionality validates incoming XML using XSD validation or similar.
• SAST tools can help detect XXE in source code, although manual code review is the best alternative in large,
complex applications with many integrations.
If these controls are not possible, consider using virtual patching, API security gateways, or Web Application Firewalls (WAFs) to detect, monitor, and block XXE attacks. A parser-hardening sketch follows.
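For example, in Python the third-party defusedxml package offers drop-in XML parsers with external entity and DTD processing disabled; a minimal sketch, assuming the package is installed:

# Parse untrusted XML with entity expansion and DTD tricks rejected.
import defusedxml.ElementTree as ET

untrusted = "<?xml version='1.0'?><order><item>widget</item></order>"
root = ET.fromstring(untrusted)    # raises an exception on entity/DTD abuse
print(root.find("item").text)      # -> widget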


5. Broken Access Control.


Access control is only effective if enforced in trusted server-side code or server-less API, where the attacker
cannot modify the access control check or metadata.
• With the exception of public resources, deny by default.
• Implement access control mechanisms once and re-use them throughout the application, including
minimizing CORS usage.
• Model access controls should enforce record ownership, rather than accepting that the user can create,
read, update, or delete any record.
• Unique application business limit requirements should be enforced by domain models.
• Disable web server directory listing and ensure file metadata (e.g. .git) and backup files are not present
within web roots.
• Log access control failures, alert admins when appropriate (e.g. repeated failures).
• Rate limit API and controller access to minimize the harm from automated attack tooling.
• JWT tokens should be invalidated on the server after logout.
Developers and QA staff should include functional access control unit and integration tests.
6. Security Misconfiguration
Secure installation processes should be implemented, including:
• A repeatable hardening process that makes it fast and easy to deploy another environment that is properly
locked down. Development, QA, and production environments should all be configured identically, with
different credentials used in each environment. This process should be automated to minimize the effort
required to setup a new secure environment.
• A minimal platform without any unnecessary features, components, documentation, and samples. Remove
or do not install unused features and frameworks.
• A task to review and update the configurations appropriate to all security notes, updates and patches as
part of the patch management process (see A9:2017-Using Components with Known Vulnerabilities). In
particular, review cloud storage permissions (e.g. S3 bucket permissions).
• A segmented application architecture that provides effective, secure separation between components or
tenants, with segmentation, containerization, or cloud security groups (ACLs).
• Sending security directives to clients, e.g. security headers
• An automated process to verify the effectiveness of the configurations and settings in all environments.

7. Cross-Site Scripting XSS


Preventing XSS requires separation of untrusted data from active browser content. This can be achieved by:
• Using frameworks that automatically escape XSS by design, such as the latest Ruby on Rails, React JS. Learn
the limitations of each framework’s XSS protection and appropriately handle the use cases which are not
covered.
• Escaping untrusted HTTP request data based on the context in the HTML output (body, attribute, JavaScript,
CSS, or URL) will resolve Reflected and Stored XSS vulnerabilities. The OWASP Cheat Sheet ‘XSS Prevention’
has details on the required data escaping techniques.
• Applying context-sensitive encoding when modifying the browser document on the client side acts against
DOM XSS. When this cannot be avoided, similar context sensitive escaping techniques can be applied to
browser APIs as described in the OWASP Cheat Sheet ‘DOM based XSS Prevention’.
• Enabling a Content Security Policy (CSP) as a defense-in-depth mitigating control against XSS. It is effective
if no other vulnerabilities exist that would allow placing malicious code via local file includes (e.g. path
traversal overwrites or vulnerable libraries from permitted content delivery networks).
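As a minimal illustration of output escaping for the HTML body context, using Python's standard html module (frameworks normally do this automatically):

# Escape untrusted data before placing it in an HTML body context.
import html

user_input = "<script>alert('xss')</script>"
safe = html.escape(user_input)    # & < > " ' become HTML entities
print("<p>Hello, " + safe + "</p>")
# -> <p>Hello, &lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</p>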


8. Insecure Deserialization
The only safe architectural pattern is not to accept serialized objects from untrusted sources, or to use serialization mediums that only permit primitive data types. If that is not possible, consider one or more of the following (an integrity-check sketch follows this list):
• Implementing integrity checks such as digital signatures on any serialized objects to prevent hostile object
creation or data tampering.
• Enforcing strict type constraints during deserialization before object creation as the code typically expects
a definable set of classes. Bypasses to this technique have been demonstrated, so reliance solely on this is
not advisable.
• Isolating and running code that deserializes in low privilege environments when possible.
• Log deserialization exceptions and failures, such as where the incoming type is not the expected type, or
the deserialization throws exceptions.
• Restricting or monitoring incoming and outgoing network connectivity from containers or servers that
deserialize.
• Monitoring deserialization, alerting if a user deserializes constantly.
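A minimal sketch of the integrity-check idea using Python's standard hmac module; the key is a placeholder, and a real deployment would manage keys securely:

# Sign serialized payloads so tampered or attacker-crafted objects are
# rejected before deserialization.
import hashlib, hmac, json

KEY = b"server-side-secret-key"    # placeholder; store securely in practice

def serialize(obj) -> bytes:
    payload = json.dumps(obj).encode()
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    return tag + payload

def deserialize(blob: bytes):
    tag, payload = blob[:32], blob[32:]    # SHA-256 tag is 32 bytes
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed; refusing to deserialize")
    return json.loads(payload)

print(deserialize(serialize({"user": "alice"})))   # -> {'user': 'alice'}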

9. Using Components with Known Vulnerabilities


There should be a patch management process in place to:
• Remove unused dependencies, unnecessary features, components, files, and documentation.
• Continuously inventory the versions of both client-side and server-side components (e.g. frameworks, libraries)
and their dependencies using tools like versions, DependencyCheck, [Link], etc. Continuously monitor
sources like CVE and NVD for vulnerabilities in the components. Use software composition analysis tools to
automate the process. Subscribe to email alerts for security vulnerabilities related to components you use.
• Only obtain components from official sources over secure links. Prefer signed packages to reduce the
chance of including a modified, malicious component.
• Monitor for libraries and components that are unmaintained or do not create security patches for older
versions. If patching is not possible, consider deploying a virtual patch to monitor, detect, or protect against
the discovered issue.
Every organization must ensure that there is an ongoing plan for monitoring, triaging, and applying updates or configuration changes for the lifetime of the application or portfolio.

10. Insufficient Logging & Monitoring


As per the risk of the data stored or processed by the application:
• Ensure all login, access control failures, and server-side input validation failures can be logged with
sufficient user context to identify suspicious or malicious accounts, and held for sufficient time to allow
delayed forensic analysis.
• Ensure that logs are generated in a format that can be easily consumed by centralized log management solutions.
• Ensure high-value transactions have an audit trail with integrity controls to prevent tampering or deletion,
such as append-only database tables or similar.
• Establish effective monitoring and alerting such that suspicious activities are detected and responded to
in a timely fashion.
• Establish or adopt an incident response and recovery plan, such as NIST 800-61 rev 2 or later.
There are commercial and open source application protection frameworks such as OWASP AppSensor, web application firewalls such as ModSecurity with the OWASP ModSecurity Core Rule Set, and log correlation software with custom dashboards and alerting.


SUMMARY

• An application is a type of software that allows people to perform specific tasks using various ICT devices. Word
processors, web browsers are some of the commonly used applications.
• Google Docs is an example of a cloud application, since it provides the functionality of Microsoft Word.
• Some examples of software vulnerability include SQL injection, Cross-Site Request Forgery (CSRF) and Cross-
Site Scripting (XSS).
• Denial of Service attack causes an interruption or suspension of services of a specific host/ server by flooding
it with large quantities of useless traffic or external communication requests.
• Bluesnarfing, bluejacking and bluebugging are security attacks related to Bluetooth.
• White-box testing, black-box testing and grey box testing are a few examples of application penetration testing
techniques.
• The black box methodology relies only on information ordinarily available to two distinct classes of attackers:
insiders and outsiders.
• White-box testing validates how the business logic of an application is implemented by code.
• Penetration tests are usually conducted using manual or automated techniques to routinely compromise
servers, endpoints, web apps, wireless networks, network appliances, mobile devices and other possible
exposure points.
• The likelihood or probability is characterized by the ease of exploitation, which depends mainly on the type of threat and the characteristics of the system, and by the possibility of realizing the threat, which is determined by the presence of an effective countermeasure.
• Authentication/authorization attacks involve brute-forced passwords (both dictionary attacks and common account/password strings), credential theft, ineffective and poorly enforced password security and retrieval, and exposure of key material (and so on) in both memory and components.
• Attacks such as long strings (buffer overruns), SQL injection, command injection, format strings, LDAP injection,
OS commanding, SSI injection, XPath injection, escape characters, and special/problematic character sets fall
under the category of input attacks.
• Design attacks include unprotected internal APIs, alternate routes through and around security checks, open
ports, forcing loop conditions and faking the source of data (content spoofing).
• Examples of data disclosure attacks include directory indexing attacks, path traversal attacks, and determining whether the program allocates resources from a reliable and available location.
• Application firewall is the most basic software countermeasure that limits the execution of files and the handling
of data by specific installed programs.
• OWASP aims to inform developers, designers, architects and business owners about the risks associated with the most common Web application security vulnerabilities.


KNOWLEDGE CHECK

Q.1. Match the following threats to their corresponding countermeasures.

Threat Types:
1. Authentication
2. Authorization
3. Data Protection in Storage and Transit
4. Error Handling and Exception Management
5. User and Session Management
6. Data Validation / Parameter Validation
7. Configuration Management
8. Auditing and Logging

Countermeasures:
A. Strong ACLs are used for enforcing authorized access to resources.
B. Standard encryption algorithms and correct key sizes are being used.
C. Privileges are restored to the appropriate level in case of errors and exceptions.
D. The contents of the authentication cookies are encrypted.
E. Data type, format, length, and range checks are enforced.
F. Least privileged processes are used, and service accounts with no administration capability.
G. Protocols are resistant to brute force, dictionary, and replay attacks.
H. No sensitive information is stored in clear text in the cookie.

Q.2. OWASP stands for: O______________________


W_____________________
A______________________
S______________________
P______________________

Q.3. Select the right choice from the following multiple choice questions
A. Which software vulnerability involves an attack that occurs when malicious scripts are injected into otherwise benign and trusted websites?
i. SQL Injection
ii. Cross Site Scripting (XSS)
iii. Cross Site Request Forgery (CSRF)
iv. Smurf Attack
v. Buffer Overflow attack

B. In which type of attack is the victim host provided with traffic/data that is out of the range of the processing specs of the victim host, protocols or applications, overflowing the buffer and overwriting the adjacent memory?
i. SQL Injection
ii. Cross Site Scripting (XSS)
iii. Cross Site Request Forgery (CSRF)
iv. Botnet
v. Buffer Overflow attack


C. Which software vulnerability allows an attacker to submit a database SQL command, exposing the back-end database, where the attacker can create, read, update, alter or delete data?
i. SQL Injection
ii. Cross Site Scripting (XSS)
iii. Man-in-the-middle attack
iv. Smurf Attack
v. Buffer Overflow attack

Q.4. Define vulnerabilities in Application security.

Q.5. Explain what 'secure coding' is, and its benefits.

Q.6. What are black box testing and white box testing in application security? Which is more effective in finding holes in applications?


Q.7. List and explain briefly the steps followed to identify vulnerabilities in application security.

UNIT 5
SECURITY AUDITING

At the end of this unit you will be able to:
• State the importance of security audits
• List the various types of security audits
• Explain what risk based auditing is
• Describe the process and tools of risk analysis
• Describe the risk management process

5.1 AUDIT PLANNING (SCOPE, PRE-AUDIT PLANNING, DATA GATHERING, AUDIT RISK)

5.1.1 Audit Scope

An information security audit is important to evaluate the level of organizational security and immunity to various
threats. An audit also helps the organization avoid spending funds on damages that could otherwise result from
an attack.
The scope of an audit depends upon:
• Site business plan
• Type of data assets to be protected
• Importance of data and relative priority
• Previous security incidents
• Time available
• Auditor's experience and expertise

5.1.2 Audit Classification

Broadly, there are two types of Audit, internal and external.


• External audits are commonly conducted by independent, certified parties in an objective manner. They
are scoped in advance and are typically limited to identifying and reporting any implementation and control
gaps based on the stated policies and standards such as COBIT (Control Objectives for Information and
related Technology). In the end, the objective is to lead the client to a source of accepted principles,
sometimes correlated to current best practices.
• Internal audits usually are conducted by experts linked to an organisation, and it involves a feedback
process where the auditor may not only audit the system but also potentially provide advice in a limited
fashion. They differ from the external audit in allowing the auditor to discuss mitigation strategies with the
owner of the system that is being audited.
There is a large variety of audit types based on the standards followed. Some examples include SSAE 16 audits (Type I or
II), audits against ISO 9001, ISO/IEC 17799, ISO/IEC 27001 and the ISO 27018 cloud security standard, and audits of
industry-specific standards such as HIPAA controls.
Within the broad scope of auditing information security, there are multiple types of audits and multiple objectives
for different audits. Audits can range from a simple, opinion-based analysis of security architecture to a full-blown,
end-to-end audit against a security framework such as ISO 27001.
Auditing information security covers topics from auditing the physical security of data centers to auditing the logical
security of databases, and highlights key components to look for and different methods for auditing these areas. When
centered on the IT aspects of information security, it can be seen as a part of an information technology audit. It is
often then referred to as an information technology security audit or a computer security audit. However, information
security encompasses much more than IT.


5.1.3 Pre-audit planning

Pre-audit planning starts with developing the scope and objectives of the audit. The audit personnel then coordinate
with the organization on the level of support required, locations, duration and other related parameters. Thereafter, both
parties agree on the pricing or finances as per the scope of work. Once the pricing is mutually approved,
documentation such as confidentiality agreements, contracts and other required formal agreements are prepared. These
documents state the audit objectives, scope and protocol.
The audit personnel then conduct a preliminary review of the client's environment, mission, operations, policies and
practices, and perform risk assessments of the client's environment, data and technology resources. They complete
research into applicable regulations, industry standards, practices and issues. Further, they review current policies,
controls, operations and practices, and hold an entrance meeting to review the engagement memo, request items from
the client, schedule client resources and answer the client's questions. This also includes laying out a timeline and
the specific methods to be used for the various activities.

5.1.4 Data Gathering and Audit Risk

The data gathering stage involves accumulating and verifying relevant and useful evidence to confirm the audit
objectives and support audit findings and recommendations. While gathering data, the auditor conducts
interviews, observes procedures and practices, performs automated and manual tests, and carries out other important
tasks as required. Activities that require field visits may be carried out at the client's worksite or a remote location,
depending on the nature of the audit.


5.2 RISK ANALYSIS

5.2.1 Purpose of risk analysis

The basic purpose of risk analysis is to:


• Identify assets and their values
• Identify vulnerabilities and threats
• Quantify the probability and business impact of these potential threats
• Provide an economic balance between the impact of a threat and the cost of the countermeasure (a worked
example follows this list)
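
The last two points are often made concrete with simple arithmetic. One common approach multiplies an asset's
Single Loss Expectancy (SLE = asset value x exposure factor) by the Annualized Rate of Occurrence (ARO) to obtain
the Annualized Loss Expectancy (ALE); a countermeasure is economically justified when its annual cost is less than
the reduction in ALE it delivers. The following is a minimal sketch in Python, with all figures purely illustrative:

    # Minimal quantitative risk analysis sketch (all values illustrative).
    def annualized_loss_expectancy(asset_value, exposure_factor, aro):
        sle = asset_value * exposure_factor   # Single Loss Expectancy per incident
        return sle * aro                      # expected loss per year

    # Hypothetical server worth 5,000,000; 40% damaged per incident;
    # 0.5 incidents expected per year.
    ale_before = annualized_loss_expectancy(5_000_000, 0.40, 0.5)  # 1,000,000

    # A countermeasure costing 300,000 per year reduces the ARO to 0.1.
    ale_after = annualized_loss_expectancy(5_000_000, 0.40, 0.1)   # 200,000

    # A positive net benefit means the countermeasure is economically justified.
    net_benefit = ale_before - ale_after - 300_000                 # 500,000
    print(ale_before, ale_after, net_benefit)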

5.2.2 Risk based auditing

Risk based auditing focuses on the analysis and management of risk. Auditors start the audit process by equipping
themselves with knowledge of the nature of the business of the entity and its business environment. Auditors arm
themselves with sufficient information about a business and its environment so as to assess risk before making a
decision of either performing a compliance test or a substantive test.
Compliance test: This is the process of gathering evidence for the purpose of testing an organization’s compliance
with control procedures and processes in relation to external rules, legal requirements, and regulations.
Substantive test: This is the process of gathering evidence in order to evaluate the integrity of individual transactions,
processes, data, and other information.
Audit risk can be categorised as:
• Inherent risk
• Control risk
• Detection risk
• Overall risk
Risk based auditing is generally composed of five broad stages. There is no hard and fast rule about what constitutes
each stage, but the most important facets of those stages are covered in this section.
Five (5) stages of risk based audit:
1. Information gathering and planning stage
2. Mastery of internal control stage
3. Compliance test stage
4. Substantive test stage
5. Conclusion and production of report stage

5.2.3 Types of Control

Controls in information security are categorized based on the functionality such as preventive, detective, corrective,
deterrent, recovery and compensating. The categorization can also be done based on the plane of application such
as physical, administrative or technical. Let us understand these controls in brief.


• Preventive controls: Preventive controls are the first controls met by an adversary. They try to prevent
security violations and enforce access control. Like other controls, these may be physical, administrative,
or technical. Doors, security procedures and authentication requirements are examples of physical,
administrative and technical preventive controls respectively.
• Detective controls: Detective controls are in place to detect security violations and alert the defenders.
They come into play when preventive controls have failed or have been circumvented, and are no less
crucial than preventive controls. Detective controls include cryptographic checksums, file integrity checkers,
audit trails and logs, and similar mechanisms.
• Corrective controls: Corrective controls try to correct the situation after a security violation has occurred.
Even though a violation has occurred, the data may still be kept secure, so it makes sense to try to fix the
situation. Corrective controls vary widely, depending on the area being targeted, and they may be technical
or administrative in nature.
• Deterrent controls: Deterrent controls are intended to discourage potential attackers. Examples of
deterrent controls include notices of monitoring and logging as well as the visible practice of sound
information security management.
• Recovery controls: Recovery controls are somewhat like corrective controls, but they are applied in more
serious situations to recover from security violations and restore information and information processing
resources. Recovery controls may include disaster recovery and business continuity mechanisms, backup
systems and data, emergency key management arrangements and similar controls.
• Compensating controls: Compensating controls are intended to be alternative arrangements for other
controls when the original controls have failed or cannot be used. When a second set of controls addresses
the same threats, it acts as a compensating control.

By plane of application:
• Physical controls include doors, secure facilities, fire extinguishers, flood protection and air conditioning.
• Administrative controls are the organization's policies, procedures and guidelines intended to facilitate
information security.
• Technical controls are the various technical measures, such as firewalls, authentication systems, intrusion
detection systems and file encryption among others.
• Access Control Models are the abstract foundations upon which actual access control mechanisms and
systems are built. Access control is among the most important concepts in computer security. Access control
models define how computers enforce access of subjects (such as users, other computers, applications and
so on) to objects (such as computers, files, directories, applications, servers and devices).

Three main access control models that exist are:


• Discretionary Access Control model
• Mandatory Access Control model
• Role Based Access Control model

Discretionary Access Control (DAC)


The Discretionary Access Control model is the most widely used of the three models.
In the DAC model, the owner (creator) of information (file or directory) has the discretion to decide about and set
access control restrictions on the object in question, which may, for example, be a file or a directory. The advantage
of DAC is its flexibility. Users may decide who can access information and what they can do with it — read, write,
delete, rename, execute and so on. At the same time, this flexibility is also a disadvantage of DAC because users may
make wrong decisions regarding access control restrictions or maliciously set insecure or inappropriate permissions.
Nevertheless, the DAC model remains the model of choice for the absolute majority of operating systems today,
including Solaris.
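
A concrete instance of DAC is the classic Unix-style file permission model used by Solaris and Linux: the owner of a
file decides, at their own discretion, who may read, write or execute it. The sketch below, with a hypothetical file
name, shows both the flexibility and the risk:

    import os
    import stat

    # The owner creates a file and, at their discretion, restricts it.
    open("salary.txt", "w").close()

    # Owner-only read/write (mode 0o600): other users get no access.
    os.chmod("salary.txt", stat.S_IRUSR | stat.S_IWUSR)

    # The same discretion lets the owner (perhaps unwisely) relax access,
    # which is exactly the disadvantage of DAC described above.
    os.chmod("salary.txt", 0o644)   # now world-readable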


Mandatory Access Control (MAC)


Mandatory access control, as its name suggests, takes a stricter approach to access control. In systems utilizing MAC,
users have little or no discretion as to what access permissions they can set on their information. Instead, mandatory
access controls specified in a system-wide security policy are enforced by the operating system and applied to all
operations on that system.

Role-Based Access Control (RBAC)


In the role-based access control model, rights and permissions are assigned to roles instead of individual users.
This added layer of abstraction permits easier and more flexible administration and enforcement of access controls.
For example, access to marketing files may be restricted to the marketing manager role only, and users Ann, David,
and Joe may be assigned the role of marketing manager. Later, when David moves from the marketing department
elsewhere, it is enough to revoke his role of marketing manager, and no other changes are necessary. When
one applies this approach to an organization with thousands of employees and hundreds of roles, one can see the
added security and convenience of using RBAC.
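
The marketing manager example can be expressed as a small data structure. The sketch below is a hypothetical
illustration, not a production access control system; it shows why revoking one role assignment is all that is needed
when David changes departments:

    # Minimal RBAC sketch: permissions attach to roles, users attach to roles.
    ROLE_PERMISSIONS = {
        "marketing_manager": {"read_marketing_files", "write_marketing_files"},
    }

    USER_ROLES = {
        "ann":   {"marketing_manager"},
        "david": {"marketing_manager"},
        "joe":   {"marketing_manager"},
    }

    def is_allowed(user, permission):
        # A user is allowed if any of their roles grants the permission.
        return any(permission in ROLE_PERMISSIONS.get(role, set())
                   for role in USER_ROLES.get(user, set()))

    print(is_allowed("david", "read_marketing_files"))   # True

    # David leaves marketing: revoke one role, touch nothing else.
    USER_ROLES["david"].discard("marketing_manager")
    print(is_allowed("david", "read_marketing_files"))   # False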

5.2.4 Risk Assessment using SimpleRisk or Eramba (Open source Tools)

How to Perform an Internal Risk Assessment in SimpleRisk?

Introduction
SimpleRisk is an excellent way to perform a basic risk assessment for an organization.
The SimpleRisk tool includes a template for the CIS Critical Security Controls that contains 20 yes/no questions, the
answers to which provide valuable insight into an organization's risk posture.

Instruction
To begin the process, from the menu at the top of SimpleRisk, click on "Assessments" and then select "Critical
Security Controls" under the Available Assessments. One can leave the "Asset Name" field blank, or enter the name
of a specific application or business unit to which the answers will apply.
Below is a screen shot of the Critical Security Controls assessment.


From here, simply answer "Yes" or "No" to the 20 questions and click on "Submit". A risk will be created for each "No"
answer under the "Pending Risks" section found on the left. Click on "Add" to push the risks into SimpleRisk.

5.2.5. Risk management

Cyber security risk management is management of risk to information.


As per ISO27001:
• Management implies someone proactively identifying, assessing, evaluating and dealing with risks on
an ongoing basis along with related governance aspects, such as direction, control, authorisation and
resourcing of the process.
• Risks, in this context, are the possibilities of harm.
• Information is the valuable meaning or knowledge that is derived from data, in other words content of
computer files, paperwork, conversations, expertise, intellectual property and so forth.
The process diagram sums it up:

Fig 5.1: Risk Management process

The first stage of this process is to identify potential information risks. Several factors or information sources feed
into the identification step, including the following:
• Vulnerabilities are inherent weaknesses within facilities, technologies, processes (including information
risk management itself), people and relationships, some of which are not even recognised as such.
• Threats are actors (insiders and outsiders) and natural events that might cause incidents if they acted on
vulnerabilities causing impacts.


• Assets are defined as the valuable information content and the physical components such as storage
vessels, computer hardware, etc.
• Impacts are harmful effects of incidents and calamities affecting assets, damaging organisation and its
business interests, and often third parties.
• Incidents can range from minor, trivial or inconsequential events to calamities, disasters and outright
catastrophes, depending on the magnitude of their effect on the organization.
• Advisories, standards, etc. refer to relevant warnings and advice put out by myriad organisations such as
CERT, FBI, ISO/IEC, journalists, technology vendors, plus information risk and security professionals (social
network).
The evaluate risks stage includes considering/assessing all that information in order to determine the significance of
the various risks, which in turn drives priorities for the next stage. An organisation's appetite or tolerance for risk is a
major concern, reflecting corporate strategies and policies as well as broader cultural drivers and the personal attitudes
of people engaged in risk management activities.
Treat risks means avoiding, mitigating, sharing and/ or accepting risks. This stage involves both deciding what to do,
and doing it (implementing risk treatment decisions).
Handle changes might seem obvious but it is called out on the diagram due to its importance. Information risks are
constantly in flux, partly as a result of risk treatments and partly due to various other factors both within and outside
the organisation.

Risk treatment
Risk treatment is the process of selecting and implementing measures to modify risk. Risk treatment measures can
include:
• Avoiding
• Optimising
• Transferring
• Retaining risk

Identification of options
Having identified and evaluated risks, the next step involves:
• Identification of alternative actions for managing these risks
• Evaluation and assessment of their results or impact
• Specification and implementation of treatment plans
Since identified risks may have varying impacts on an organisation, not all risks carry the prospect of loss or damage.
Opportunities may also arise from the risk identification process, as types of risk with positive impacts or outcomes are
identified.
Management or treatment options for risks expected to have positive outcome include:
• starting or continuing an activity likely to create or maintain the positive outcome.
• modifying the likelihood of risk to increase possible beneficial outcomes.
• trying to manipulate possible consequences to increase the expected gains.
• sharing risk with other parties that may contribute by providing additional resources, which could increase
the likelihood of opportunity or expected gains and
• retaining residual risk.


Management options for risks having negative outcomes look similar to those for risks with positive ones, although
their interpretation and implications are completely different. Such options or alternatives might be to:
• avoid risk by deciding to stop, postpone, cancel, divert or continue with an activity that may be the cause
for that risk
• modify the likelihood of risk trying to reduce or eliminate the likelihood of negative outcomes
• try modifying the consequences in a way that will reduce losses
• share risk with other parties facing the same risk (insurance arrangements and organisational structures,
such as partnerships and joint ventures can be used to spread responsibility and liability)
• (of course one should always keep in mind that if a risk is shared in whole or in part, the organisation is
acquiring a new risk i.e. risk that the organisation to which the initial risk has been transferred may not
manage this risk effectively)
• retain risk or its residual risks
In general, the cost of managing a risk needs to be compared with the benefits obtained or expected. It is important
to consider all direct and indirect costs and benefits, whether tangible or intangible, and to measure them in
financial or other terms.
More than one option can be considered and adopted either separately or in combination. An example is the effective
use of support contracts and specific risk treatments followed by appropriate insurance and other means of risk
financing.
In the event that available resources (e.g., budget) for risk treatment are not sufficient, the risk management action
plan should set the necessary priorities, and clearly identify the order in which individual risk treatment actions should
be implemented.

Development of action plan


Treatment plans are necessary in order to describe how the chosen options will be implemented. The treatment
plans should be comprehensive and provide all necessary information about:
• proposed actions, priorities or time plans,
• resource requirements,
• roles and responsibilities of all parties involved in the proposed actions,
• performance measures, and
• reporting and monitoring requirements.
Action plans should be in line with the values and perceptions of all types of stakeholders (e.g., internal
organisational units, outsourcing partner, customers, etc.). The better the plans are communicated to various
stakeholders, the easier it will be to obtain the approval of proposed plans and a commitment to their
implementation.

Approval of action plan


As with all relevant management processes, initial approval is not sufficient to ensure effective implementation of
the process. Support of the top management team is critical throughout the entire process cycle. For this reason, it
is the responsibility of the head of the risk management function to keep the organisation's executive management
continuously and properly informed and updated through regular reporting.

Implementation of action plan


The risk management plan should define how risk management is to be conducted throughout an organisation.
It must be developed in a way that will ensure that risk management is embedded in all organisations’ important
practices and business processes so that it will become relevant, effective and efficient.


The risk management plan may include specific sections for particular functions, areas, projects, activities or
processes. These sections may be separate plans but in all cases, they should be consistent with organisation’s risk
management strategy (which includes specific RM policies per risk area or risk category).
The necessary awareness of and commitment to risk management at senior management levels throughout an
organisation is a critical mission and should receive close attention by:
• obtaining active ongoing support of an organisation’s directors and senior executives for risk management
and for development and implementation of risk management policy and plan
• appointing a senior manager to lead and sponsor the initiatives, and
• obtaining the involvement of all senior managers in the execution of the risk management plan.
The organisation’s board should define, document and approve its policy for managing risk, including objectives and
a statement of commitment to risk management. The policy may include:
• objectives and rationale for managing risk
• links between the policy and organisation’s strategic plans
• extent and types of risk an organisation will take and ways it will balance threats and opportunities
• processes to be used to manage risk
• accountabilities for managing particular risks
• details of the support and expertise available to assist those involved in managing risks
• a statement on how risk management performance will be measured and reported
• a commitment to the periodic review of the risk management system
• a statement of commitment to the policy by directors and organisation’s executive

The policy statement highlights an organization's internal and external environment, action taken by the board
members for risk management and the roles and accountability of the concerned individuals.
Ultimately, it's the responsibility of the directors and senior executives to ensure that the risks are well taken care of
to prevent any type of organizational damage.
This may be facilitated by:
• specifying those accountable for the management of particular risks, for implementing treatment strategies
and for maintenance of controls.
• establishing performance measurement and reporting processes, and
• ensuring appropriate levels of recognition, reward, approval and sanction.
These steps do not themselves implement security mechanisms on the IT platforms; the noteworthy points are the
actions to be performed to reduce the identified risks. The actions that are part of the technical
implementation process are taken within the Information Security Management System (ISMS), which is outside the
risk management process.
Last but not least, an important responsibility of the top management is to identify requirements and allocate necessary
resources for risk management. This should include people and skills, processes and procedures, information systems
and databases, money and other resources for specific risk treatment activities.
The risk management plan should also specify how risk management skills of managers and staff will be developed and
maintained. Integration of risk management process with other operational and product processes is fundamental.

Identification of residual risks


Residual risk is a risk that remains even after the identification of risk management options and implementation of
action plans. It also includes all initially unidentified risks as well as all risks previously identified and evaluated but
not designated for treatment at that time.


It is important for an organisation’s management and all other decision makers to be well-informed about the nature
and extent of the residual risk. For this purpose, residual risks should always be documented and subjected to regular
monitor and review procedures.
As per ISO 27001, residual literally means 'of the residue' or 'leftover'. So, residual risk is the leftover risk remaining
after all risk treatments have been applied.

Accepted risks are still risks


They don't cease to have the potential for causing impacts simply because management decides not to do anything
about them. Acceptance means management doesn't think they are worth reducing. Management may be wrong.
The risks may not be as they believe or they may change (e.g., if novel threats appear or new vulnerabilities are being
exploited).

Mitigated or controlled risks are still risks


A mitigated risk is reduced but not eliminated, and its controls may fail in action (e.g., antivirus software
that does not recognise 100% of all malware, or that someone accidentally disables one day).

Eliminated risks are probably no longer risks, but even then there remains the possibility that the risk analysis was
mistaken (e.g., perhaps only a part of the risk was eliminated, or perhaps the risk materially changed since it was
assessed and treated) or that the controls applied may not be as perfect as they appear (again, they may fail in action).

Avoided risks are probably no longer risks, but again there is a possibility that the risk analysis was wrong or that
they may not be completely avoided (e.g., in a large business, there may be small business units out of management's
line of vision, still facing the risk, or a business may later decide to get into risky activities it previously avoided).

Transferred risks are reduced but are still risks, since the transferral may not turn out well in practice (e.g., if an
insurance company declines a claim for some reason) and may not be adequate to completely negate the impacts
(e.g., the insurance 'excess' charge).
If a manager does not explicitly treat an identified risk, or arbitrarily accepts it without truly understanding it, they are
in effect saying, “I do not believe this risk is of concern”. This is the decision for which they can be held to account.
The overall point is that one should keep an eye on residual risks, review them from time to time, and where appropriate
improve/ change the treatments if the residuals are excessive.

Risk management feedback loops

Risk management is a comprehensive process that requires organisations to:


• frame risk (i.e. establish the context for risk based decisions);
• assess risk;
• respond to risk once determined; and
• monitor risk on an ongoing basis using effective organisational communication and a feedback
loop for continuous improvement in the risk related activities of organisations.

Risk management is carried out as a holistic, organisation-wide activity that addresses risk from the strategic level to
the tactical level, ensuring that risk based decision making is integrated into every aspect of the organisation.
The following sections briefly describe each of the four risk management components.
The first component of risk management addresses how organisations frame risk or establish a risk context i.e.
describing the environment in which risk based decisions are made.


The key purpose of the risk framing component is to produce a risk management strategy that addresses how
organizations intend to assess, respond to and monitor risk. This involves making explicit and transparent the risk
perceptions that organizations use in making investment and operational decisions. The risk frame
provides the groundwork for managing risk and defines the boundaries for risk-based decisions within
organizations.
Establishing a realistic and credible risk frame requires that organisations identify:
• risk assumptions (e.g., assumptions about threats, vulnerabilities, consequences/ impact, and likelihood of
occurrence that affect how risk is assessed, responded to, and monitored over time)
• risk constraints (e.g., constraints on risk assessment, response, and monitoring alternatives under
consideration)
• risk tolerance (e.g., levels of risk, types of risk, and degree of risk uncertainty that are acceptable), and
• priorities and trade-offs (e.g., relative importance of missions/ business functions, trade-offs among
different types of risk that organisations face, time frames in which organisations must address risk, and
any factors of uncertainty that organisations consider in risk responses).
The risk framing component and the associated risk management strategy also include any strategic-level decisions
on how risk to organizational operations and assets, individuals, other organisations, and the nation is to be managed
by senior leaders/ executives.
The second component of the risk management addresses how organisations assess risk within the context of the
organizational risk frame.
Purpose of the risk assessment component is to identify:
• threats to organisations (i.e. operations, assets, or individuals) or threats directed through organisations
against other organisations or the nation;
• vulnerabilities internal and external to organisations;
• harm (i.e. consequences/ impact) to organisations that may occur given the potential for threats exploiting
vulnerabilities; and
• likelihood that harm will occur. The end result is a determination of risk (i.e. the degree of harm and
likelihood of harm occurring).
To support the risk assessment component, organisations identify:
• tools, techniques, and methodologies that are used to assess risk;
• assumptions related to risk assessments;
• constraints that may affect risk assessments;
• roles and responsibilities;
• how risk assessment information is collected, processed, and communicated throughout organisations;
• how risk assessments are conducted within organisations;
• frequency of risk assessments; and
• how threat information is obtained (i.e. sources and methods).
The third component within the risk management focuses on how the organizations respond to risks once they have
been determined. These risks are determined based on the results of risk assessments. The risk response component
provides a consistent, organization centric response to risks as per organizational risk frame. This is achieved by:
• developing alternative courses of action for dealing with risks
• evaluating the alternative courses of action
• determining an appropriate course of action that is consistent with the risk tolerance
• implementing the risk response as per the selected course of action


In order to support the risk response component, organizations describe various types of risk responses such as
accepting, avoiding, mitigating, sharing or transferring the risks.
Organizations also emphasize the tools, techniques and methodologies used for developing courses of
action to provide the required response to a risk. They also focus on ways to evaluate the courses of action
and to communicate the risk response across organizations and to external entities such as external service providers,
supply chain partners, etc.
The next component of risk management is the way organizations monitor risks over time. The functions of the
risk monitoring component are:
• verifying that the required risk response measures are implemented and that the information security
requirements derived from organizational missions/business functions, federal legislation, directives, regulations,
policies, standards and guidelines are satisfied
• determining the effectiveness of the risk response measures after implementation
• identifying changes in organizational information systems and environments of operation that may affect risk.
Organizations support the risk monitoring component by verifying compliance and determining the
effectiveness of the risk response. This procedure is carried out using various tools, techniques and methodologies
that help determine the correctness of the risk response. There is also a need to ensure that the risk mitigation measures
are implemented correctly, operating as required and producing the desired outcome of keeping risks in check.
Organizations should also monitor changes that could impact the effectiveness of risk responses.

Risk monitoring

Risk monitoring provides organisations with the means to:


• verify compliance;
• determine ongoing effectiveness of risk response measures; and
• identify risk-impacting changes to organisational information systems and environments of
operation.

Monitoring provides organizations with awareness of the risks being incurred, highlights the need to revisit other
steps in the risk management process, and initiates activities that improve the process.
Organizations make use of various tools and techniques to increase awareness and to help senior leaders/
executives develop a better understanding of risks that can harm organizational operations, assets
and the individuals who are part of the work process.
Risk monitoring is done at the various tiers of risk management, keeping in mind the objectives and utility of the
information being produced. Tier 1 monitoring includes ongoing threat assessments and the way changes in the
threat landscape may affect activities taking place at Tier 2 and Tier 3, which feature enterprise architectures (with
embedded security architectures) and organizational information systems.
Tier 2 activities consist of analyzing new or current technologies that are either in use or being considered
for the future. This analysis helps organizations identify exploitable weaknesses and deficiencies in those
technologies that could affect the organization.
Tier 3 activities focus on information systems and include techniques such as automated monitoring of
standard configuration settings on information technology products, vulnerability
scanning, and ongoing assessments of security controls.


It is also crucial to ensure that the monitoring process is conducted smoothly. For this purpose, organizations should
plan how to conduct the monitoring process (for example, automated versus manual approaches) and the frequency of
monitoring activities. For example, monitoring may be triggered whenever deployed security controls change, or tied
to critical items on the plan of action and milestones.


5.3 PHASE APPROACH – RISK ASSESSMENT

An IT/IS audit is the process of examining and evaluating the organization's information technology infrastructure,
policies and operations. These audits are aimed at determining whether the IT controls protect corporate assets,
ensure data integrity and are well aligned with the set business goals. The responsibility of the audit personnel is to
examine not only the physical security controls but the overall business and financial controls as well.
Risk analysis involves conducting an accurate and thorough assessment of the potential risks as well as vulnerabilities
that could affect information systems. These risks can hamper the confidentiality, integrity and availability of
electronically protected information held by the entity. It is an effective tool for managing risks and in turn identifying
vulnerabilities and threats. Risk analysis is important for assessing the possible damages in order to determine the
areas for implementing security mechanisms.
Following are the steps that help in conducting risk analysis:
1. Identify the scope or the risky area to be analyzed
2. Gather data required for risk analysis
3. Identify threats and vulnerabilities and document them
4. Assess the existing security measures
5. Determine the probability of threat and its potential impact after occurrence
6. Determine the level of risk involved (see the scoring sketch after this list)
7. Document the security mechanisms
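
Steps 5 and 6 are often implemented with a simple likelihood-impact matrix. The sketch below uses illustrative
1-5 scales and hypothetical findings; the thresholds are an assumption, not prescribed by any standard:

    # Qualitative risk rating: risk score = likelihood x impact (1-5 scales).
    def risk_level(likelihood, impact):
        score = likelihood * impact
        if score >= 15:
            return "High"
        if score >= 8:
            return "Medium"
        return "Low"

    # Hypothetical threats documented in step 3, rated in steps 5 and 6.
    findings = [
        ("SQL injection in web portal", 4, 5),
        ("Unpatched mail server",       3, 3),
        ("Visitor tailgating",          2, 2),
    ]

    for threat, likelihood, impact in findings:
        print(f"{threat}: {risk_level(likelihood, impact)} "
              f"(score {likelihood * impact})")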

Risk assessment
Risk assessment is a term used to describe the overall process or method used to:
• Identify hazards and risk factors that have the potential to cause harm (hazard identification).
• Analyze and evaluate the risk associated with that hazard (risk analysis, and risk evaluation).
• Determine appropriate ways to eliminate the hazard, or control the risk when the hazard cannot be eliminated
(risk control).
A risk assessment is a thorough look at the workplace to identify things, situations, processes, etc. that may cause
harm, particularly to people. After identification is made, analyze and evaluate how likely and severe the risk is. Once
this determination is made, decide what measures should be in place to effectively eliminate or control
the harm.
• Risk assessment – the overall process of hazard identification, risk analysis and risk evaluation.
• Hazard identification – the process of finding, listing and characterizing hazards.
• Risk analysis – a process for comprehending the nature of hazards and determining the level of risk.
Notes:
(1) Risk analysis provides a basis for risk evaluation and decisions about risk control.
(2) Information can include current and historical data, theoretical analysis, informed opinions, and the concerns of
stakeholders.
(3) Risk analysis includes risk estimation.


Risk evaluation – the process of comparing an estimated risk against given risk criteria to determine the significance
of the risk.

Risk control – actions implementing risk evaluation decisions.


Note: Risk control can involve monitoring, re-evaluation and compliance with decisions.

Risk Mitigation
Risk mitigation defines the strategy for preparing to face a threat and protecting the data center from its effect.
Similar to risk reduction, risk mitigation is about reducing the negative effects of threats on the business continuity
(BC). These threats can have adverse effects such as cyber-attacks, and physical or virtual damage to the data center.
An element of risk management process, risk mitigation differs in the way it is implemented and depends upon the
type of organization. The principle is to prepare the business for all potential risks by having a plan in place that
weighs the impact of each risk. Risk mitigation is crucial in areas wherein the threat cannot be avoided fully. The steps
taken in the process are aligned towards reducing the adverse effects, and potentially long-term effects. Basically,
mitigation deals more with the aftermath of a disaster rather than the planning to avoid it.
Prioritization is an important aspect of risk mitigation which involves accepting the risk in one area to protect another.
One should focus on the key areas whose security cannot be compromised at any cost thereby protecting the
resources required for business continuity and sacrificing the ones which are less mission critical. This takes place at
times when dealing with the threat is beyond the control of the security experts.
In an ideal scenario, an organization is well prepared for any type of risk. In practice, if a well-defined risk mitigation
plan is in place, organizations can keep their businesses running with some level of damage and recover in the future.

Risk reassessment
Risk reassessment, as the name suggests, deals with identifying new types of risks and reassessing current ones. The
method also helps in closing risks that are outdated and can do no harm in the near future.
This project management tool is used to control risks by creating a schedule for risk reassessment. It involves
determining the kinds of risks which are present in any project thereby helping the project managers in identifying
and controlling the risks. The number of repetitions that are performed in the reassessment is dependent upon the
project progression defined by its objectives.
The actions usually taken in the reassessment process are identifying risks, analyzing their impact, developing
a risk response plan, and identifying risk triggers, which in turn help in developing a contingency plan.
To stay updated with the security threats and the way they affect the businesses, it is a good practice to maintain an
updated risk register.


SUMMARY

• The audit can be divided into two groups, i.e. internal audit and external audit.
• External assessments define and disclose any implementation and compliance deficiencies based on policies
and principles such as COBIT (Control Objectives for Information and related Technology).
• Internal audits require a consultation mechanism where the auditor may not only audit the program, but may
also offer recommendations in a limited way.
• Risk-based auditing focuses on the analysis and management of risk. It involves compliance test and substantive
test.
• Audit risk is categorized into inherent risk, control risk, detection risk and overall risk.
• In Discretionary Access Control (DAC) model, the owner (creator) of information (file or directory) has the
discretion to decide about and set access control restrictions on the object in question, which may, for example,
be a file or a directory.
• In Mandatory Access Control (MAC) model, the users have little or no discretion as to what access permissions
they can set on their information.
• In the Role-Based Access Control (RBAC), rights and permissions are assigned to roles instead of individual
users. The added layer of abstraction allows for simpler and more versatile management and compliance of
access controls.
• Risk monitoring provides an organization with the capacity to retain risk awareness, to highlight the need to review
other steps in the risk management process, and to implement process improvement activities as necessary.
• Hazard identification, risk analysis and risk evaluation are essential elements of risk assessment.
• Risk mitigation is a strategy to prepare for and lessen the effects of threats faced by a data center.


KNOWLEDGE CHECK

Q.1. Match the following terms to their corresponding descriptions.

Term                              Description

1. MAC                            A. Process of selecting and implementing measures to modify risk, through
                                     measures like avoiding, optimising, transferring.
2. Compensating controls         B. Feedback process where the auditor may not only audit the system but
                                     also potentially provide advice in a limited fashion.
3. Risk Treatment                 C. Users have little or no discretion as to what access permissions they
                                     can set on their information.
4. Risk Mitigation                D. Process of gathering evidence for the purpose of testing an
                                     organisation's compliance with control procedures and processes in
                                     relation to external rules and legal requirements.
5. Information Security Audit     E. Intended to be alternative arrangements for other controls when the
                                     original controls have failed or cannot be used.
6. Compliance Test                F. Actions implementing risk evaluation decisions.
7. Internal Audits                G. One of the best ways to determine the security of an organisation's
                                     information without incurring the cost and other associated damages
                                     of a security incident.
8. Risk Control                   H. Strategy to prepare for and lessen the effects of threats faced by a
                                     data center.

(Answer key: 1-C, 2-E, 3-A, 4-H, 5-G, 6-D, 7-B, 8-F)

Q.2. Explain briefly the various steps involved in the Risk Management Process.

Q.3 Which of the following is not the purpose of Risk Analysis?


i. Identify assets and their values
ii. Identify vulnerabilities and neglecting threats
iii. Quantify the probability and business impact of potential threats
iv. Provide an economic balance between the impact of threat and the cost of the countermeasure

UNIT 6
CYBER FORENSICS

At the end of this unit you will be able to:

• State the importance of Cyber Forensics
• State the various types of Cyber Forensics
• Describe first response processes
• Explain what is forensic duplication
• Describe the process and tools for forensic duplication
• Describe the process and tools for disk forensics
• Describe mobile and CDR forensics

6.1 INTRODUCTION TO CYBER FORENSICS

6.1.1 What is Computer Forensics?

It is a discipline which brings together computer science and elements of law for the collection and analysis of data from
computer systems, wireless communications, networks and storage devices in a way that is admissible as evidence
in a court of law.

Forensic science can be defined as an application of science to law. The prime goal of any forensic investigation is to
determine related evidence and the evidential value of the crime scene.

Cyber forensics is also known as 'Computer and Network forensics' or 'Digital forensics'. It is the science of obtaining,
preserving and documenting evidence from digital electronic storage devices such as mobile phones, computers, digital
cameras, PDAs and various memory storage devices. Everything must be designed to preserve the probative value of
the evidence and assure its admissibility in a legal proceeding.

Cyber forensics also involves the collection, identification, analysis and examination of data while preserving the integrity
of the information and maintaining the chain of custody of the data. Data consists of distinct pieces of digital information
formatted in a specific pattern. Organizations have a huge inflow of data from multiple sources. For example, data
can be stored or transferred via networking equipment, standard computer systems, personal digital assistants (PDAs),
computing peripherals, consumer electronic devices and various other types of media.

As criminals expand the use of technology in their illegal activities, this new field of science is
becoming increasingly important. The techniques used in computer forensics are not as mature as mainstream
forensic techniques used by law enforcement, such as ballistics, fingerprinting, blood typing and DNA testing. The
immaturity of this field is attributable to the fast-paced changes in computer technology and the multidisciplinary
nature of the subject, which involves complicated linkages between business management, the legal system,
information technology and law enforcement.

The Need for Cyber Forensics


Cyber forensics has become hugely important in these times because more and more of our lives are being recorded
in technology such as in our computers, cellular telephones, iPad, on the internet (through social networking sites
and other sites we access for shopping, obtaining information, communicating, entertainment, etc.) and on the cloud.
As computers, computing devices (or other devices with computing capability such as mobile phones or PDAs) and
networks become more widely used in general, the chance that crimes involving such networks and devices occur will
increase. With more and more valuable information available on the cyberspace, hacking and electronic crimes have
grown at an exponential rate in recent years in numbers and in sophistication.
According to recent reports, cyber-crime has surpassed the illegal drug trade. Unethical hackers, also known
as black hats, prey on the information systems of government, public, corporate and private networks and are
constantly testing organizations' security mechanisms to the limit.
With increasing number of cyber-crimes and litigations which involves large organizations, the need to employ cyber
forensic experts has increased to protect the organization from computer incidents or solve cases involving the use
of computers and related technologies. The staggering financial losses caused because of computer crimes have also
contributed to a renewed interest in computer forensics.


Organisations need cyber forensics to:


• Ensure the overall integrity and continued existence of an organization’s computer system and network
infrastructure.
• Help the organization capture important information if its computer systems or networks are
compromised, and assist in the prosecution of the case if the criminal is caught.
• Interpret, extract and process the evidence to prove the attacker's actions and to prove the innocence of the
organization in court.
• Terrorists and cyber criminals use the Internet as a medium of communication; thus, tracking down their IP
addresses becomes vital to determining their geographical position.
• Tracking complicated cases within the organization of fraud, cheating, harassment and illegal activities such as
leaking of confidential information, child pornography, sexual harassment, e-mail spamming, etc.
• Thereby saving the organization's reputation, money, goodwill and valuable time, and protecting the
organization from multiple legal problems.
The main objectives of computer forensics can be summarized as follows:
• To preserve, recover and analyze the computer and related materials in a way which can be presented as
evidence in a court.
• To identify the crime evidence in a short time, estimate the potential impact of the criminal activities on the
potential victim and assess the identity of the perpetrator.
Cyber forensics activities commonly include:
• Preservation: It is important for the forensic team to preserve the integrity of the original evidence. The
original evidence should not be modified or damaged. Hence an image or a copy of the original evidence
must be made first for the analysis to be performed on.
• Identification: Before starting the investigation, the forensic team must identify the evidence and its location.
For example, evidence may be contained in hard disks, removable media, or log files. They also need to identify the
type of evidence and the best method to extract data from it.
• Extraction: The forensic investigator should extract data from the copy made of the original evidence, never
from the original itself, so that the extracted data faithfully represents the original evidence.
• Interpretation: Interpreting the data that has been extracted is crucial to the investigation. The analysis and
inspection of the evidence must be interpreted in a lucid manner.
• Documentation: From the beginning of the investigation until the end, the forensic team must maintain
documentation relating to the evidence. The documentation comprises the chain of custody and documents
relating to the evidence analysis.
The evidence acquired from computers is fragile and can be easily erased or altered, and the seized computer can
be compromised if not handled using proper processes and methodologies which may differ depending upon the
procedures, resources and target company. Forensic tools enable the forensic examiner to recover deleted files,
hidden files and temporary data that the user may not locate.
Types of Cyber Forensics

Some types of cyber forensics that cyber security professionals must know about are as follows:
• Disk Forensics
• Memory Forensics
• Network Forensics
• Mobile Forensics
• Internet Forensics


Disk Forensics
Disk forensics is the science of extracting forensic information from the digital storage media like Hard disk, USB
devices, Firewire devices, CD, DVD, Flash drives, Floppy disks, etc. Hard drives are used for permanent storage. All the
data and files that are created or downloaded are saved with a name in a folder in the disk drive. All these files can
be accessed by disk forensics, however, that is just the tip of the iceberg.
A forensics expert knows about, and has the sophisticated tools to access, a complex network of files that ordinary
users may not know much about. Some of these are as follows:
• Files created in temporary storage without the user's knowledge, containing the content of deleted files.
• Backups of mobile devices and cell phones that happen automatically
• Temporary storage area in memory and on disk that holds the most recently downloaded Web pages
• Metadata created by many applications like Microsoft Word, Excel and PowerPoint that embeds information
(metadata) into the documents they create so users can identify documents, authors or systems that created
these documents, as well as how large they are and when they were last printed, last accessed, last modified
and date created, etc.
• Logs created by the operating system containing information such as the devices that were plugged into a
system, files copied, any other storage being used, cloud storage used, webmail accounts and other locations
and applications, etc.
• Files stored on the hard disk or solid state drive without the user knowing it, etc.
Various storage devices that can be sources of digital evidence are hard disks with IDE/SATA/SCSI interfaces, CD,
DVD, Floppy disk, Mobiles, PDAs, flash cards, SIM, USB/ Firewire disks, Magnetic Tapes, Zip drives, Jazz drives, etc.
For disk forensics, these storage media are seized from the crime scene, and a hash value of the storage media to be
seized is computed using an appropriate cyber forensics tool. A hash value is a unique signature generated by a
mathematical hashing algorithm based on the content of the storage media. After computing the hash value, the
storage media is securely sealed and taken for further processing.
An important part of disk forensics is creating an exact copy of the original evidence. This is for the protection of the
original evidence. The original storage media is write-protected and a bit-stream copy is made to ensure that the
complete data is copied into the destination media.
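
The hashing and verification steps can be sketched with Python's standard hashlib library; the file paths here are
hypothetical. The same digest is computed over the original media and over the bit-stream copy, and matching
values demonstrate that the copy is exact and the original has not been altered:

    import hashlib

    def sha256_of(path, chunk_size=1024 * 1024):
        # Hash a (potentially huge) disk image in chunks to limit memory use.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    original_hash = sha256_of("/evidence/seized_disk.dd")    # hypothetical paths
    copy_hash     = sha256_of("/evidence/working_copy.dd")

    # Identical digests show the working copy is a faithful duplicate.
    print("Integrity verified" if original_hash == copy_hash else "MISMATCH")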

Memory Forensics:
Many cyber attacks or malicious behaviours do not leave any indicators on the computer's hard drive. In such cases,
the memory (RAM, or Random Access Memory) has to be accessed and analyzed. This memory contains volatile
data, i.e. data which resides in a computer's short-term memory storage, including browsing history, chat messages,
clipboard contents, etc. This data exists only while the computer is running; when the
computer is powered off, it is lost.
Memory forensics provides insights about the runtime system activity, this could include the following:
• Open network connections
• Recently executed commands or processes
• Account credentials
• Chat messages
• Encryption keys
• Running processes
• Injected code fragments
• Internet history which is non-cacheable, etc.


All programs are loaded into memory in order to execute, and hence can be identified through memory forensics. As
attack methods become more and more sophisticated, memory forensics is in high demand among security professionals
today. Many network-based security solutions like firewalls and antivirus tools are unable to detect malware written
directly into a computer's physical memory or RAM. Security teams use memory forensics tools to protect invaluable
business intelligence and data from stealthy attacks such as fileless, in-memory malware or RAM scrapers.

Network Forensics
Network forensics is a sub-branch of cyber forensics. It records, captures and analyzes network events in order to
discover the source of security attacks or other problems.
A number of techniques and devices are used to intercept data, collect all data that moves through a network, identify
selected data packets for further investigation, etc. Computers with high storage volumes and rapid processing
speeds are required for accurate forensic analysis of a network.
Forensic analysts search for data that points towards human communication, manipulation of files, and the use of
certain keywords. They track communications and establish timelines based on network events logged by network
control systems, track down the source of hack attacks and other security-related incidents, collect information on
anomalies and network artefacts, and uncover incidents of unauthorised network access.
Network forensics systems can be one of two kinds:
1. A brute force "catch it as you can" method, which involves capturing all network traffic for analysis.
2. A more intelligent "stop, look, listen" method, which involves analysing each data packet flowing across the network
and capturing only what is deemed suspicious and worthy of extra analysis (sketched below).
Network forensics is used to dig out flaws in IT infrastructure and networks, thereby giving information security
officers and IT administrators the scope to shore up their defences to prevent future cyber attacks.
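
The "stop, look, listen" approach can be sketched with the Python scapy library (assuming scapy is installed and the
script runs with packet capture privileges). The detection rule here, flagging telnet traffic on port 23, is a
deliberately simple illustrative choice, not a recommended production rule:

    # "Stop, look, listen": inspect each packet, retain only suspicious ones.
    from scapy.all import sniff, wrpcap, TCP

    suspicious = []

    def inspect(pkt):
        # Illustrative rule: telnet (TCP port 23) has no place on a modern
        # network, so any such packet is kept for deeper analysis.
        if pkt.haslayer(TCP) and 23 in (pkt[TCP].sport, pkt[TCP].dport):
            suspicious.append(pkt)

    sniff(prn=inspect, store=False, count=1000)   # look at 1000 packets in flight
    if suspicious:
        wrpcap("suspicious.pcap", suspicious)     # hand the subset to the analyst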

Mobile Forensics
Mobile forensics is used to recover digital evidence or data from a mobile device, which could be a cell phone,
smartphone, PDA, GPS device or tablet computer. Mobile devices are used to save various types of
personal information such as photos, contacts, notes, SMS messages and more. Smartphones may contain video,
email, web browsing and location information, social media messages, contacts, etc. Other information that can be
accessed is as follows:
• Incoming, outgoing and missed call history
• Internet browsing history, content, cookies, search history, analytics information
• To-do lists, notes, calendar entries, ringtones
• Documents, spreadsheets, presentation files and other user-created data
• Passwords, passcodes, swipe codes, user account credentials
• Historical geo-location data, cell phone tower related location data, Wi-Fi connection information
• User dictionary content
• Data from various installed apps
• System files, usage logs, error messages
• Deleted data from all of the above, etc.
A wide variety of tools exist to extract evidence from mobile devices and no one tool or method can acquire all the
evidence from all devices as the cell phone technologies are varied and keep on changing rapidly.


Internet Forensics
Internet forensics consists of the extraction, analysis and identification of evidence related to a user's online activities. Internet-related evidence includes artifacts such as log files, history files, cookies and cached content, as well as any remnants of information left in the computer's volatile memory (RAM).
Criminals use the Internet as a means of communication, for example by making phone calls over it or by publishing offensive material on a web site. Some such activities are as follows:
• Spam, or unsolicited emails, many of which are sent with the goal of obtaining the financial details of the user.
• Phishing, or frauds involving fake web sites that look like those of banks or credit card companies and attempt to entice victims by appearing to come from a well-known, legitimate business like Citibank or eBay.
• Computer viruses, worms, spyware, etc.
Internet forensics examines the data to attempt to find the source of attacks such as these.
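As a small illustration of Internet-artifact extraction, the Python sketch below pulls URL strings out of a cached page, mail file or log; the input file name is a hypothetical example, and real examinations use far richer parsers:

# Extract URL artifacts from a cached page, mail file or log file.
import re

URL_PATTERN = re.compile(r"https?://[^\s\"'<>]+")

def extract_urls(path: str) -> list:
    with open(path, "r", errors="replace") as f:
        return URL_PATTERN.findall(f.read())

print(extract_urls("cached_page.html"))  # hypothetical input file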

6.1.2 Process of Computer Forensics

The main outcome of the forensic investigation process is the collection of potential digital evidence in the form of media, followed by identifying and extracting the data, analyzing it and transforming it into evidence, whether it is needed for law enforcement or for an organization's internal use. The following steps are part of the process of computer forensics:
• Readiness: Readiness means being fully prepared for the task to be undertaken. It involves obtaining authorization to search and seize. Activities include regular testing, appropriate training, validation of software and familiarity with legislation.
• Evaluation: Making an appropriate judgment based on the case to be investigated. This involves receiving instructions, clarifying those instructions if they are unclear or ambiguous, carrying out risk analysis and allocating roles and resources. Risk analysis for law enforcement includes an assessment of the likelihood of physical threat on entering a suspect's property and how best to counter it.
• Collection: The gathering is carried out on-site at the crime scene. Activities include identifying and securing the devices that may store evidence, documenting the scene, carrying out interviews or meetings with personnel who may hold information relevant to the examination, and bagging, tagging and safely transporting the equipment and electronic evidence to a forensic lab.
• Analysis: The analysis involves extracting the relevant information obtained and applying it to the case at hand. The resulting information should be accurate, thorough, impartial, recorded, repeatable, completed within the scheduled time and proportional to the resources allocated.
• Presentation: This includes preparing a thorough summary of the evidence in question, taking into account the conclusions and the events involved, structuring the material appropriately and presenting the relevant details that the investigator would like to see reviewed. The report must always be written with the end reader in mind, and the examiner should be able to explain the work in a way that is comprehensible to the respective audience.
• Review: This is an assessment of the whole procedure with the intention of instituting change in the future if necessary. It is mainly aimed at raising the level of quality by making future examinations more efficient and time-effective. Examples of review include analysis of what went wrong, what went well and what can be improved in the future. Feedback to the instructing party is necessary.


6.1.3 Need for a Forensics Investigator

A computer forensic investigator blends expertise in computer science with forensic skill to recover information from computers and storage devices. Investigators are responsible for helping law enforcement agencies with cybercrimes and for collecting evidence. Computer forensic investigators usually hold a bachelor's degree in computer science, often with a background in criminal justice.
The role of the investigator is to recover data such as documents, photos and e-mails from computer hard drives and other data storage devices, such as zip and flash drives, that have been deleted, damaged or otherwise manipulated. Investigators often work on cases involving offenses committed on the Internet ('cybercrime') and examine computers that may have been involved in other types of crime to find evidence of illegal activity. As an information security professional, a computer forensic investigator may also use this expertise in a corporate setting to protect computers from infiltration, determine how a computer was broken into, or recover lost files.

Duties of a Computer Forensic Investigator


Forensic computer analysts use forensic techniques and investigative methods to identify sensitive electronic records, including history of Internet usage, word processing documents, photographs, and other files. They use their technical skills to search for hidden, deleted or missing data and information. They help detectives and other officials analyze data and evaluate its relevance to the case under investigation. Investigators also transfer the evidence into a format that can be used for legal purposes (i.e. criminal trials) and often testify in court themselves.
Computer forensic investigators must be familiar with standard computer operating systems, networks and hardware, as well as security software and document-creation applications. Investigators must have expertise in hacking and intrusion techniques, and prior experience with security testing and computer system diagnostics. They are also expected to have excellent analytical skills, to be highly attentive to detail and to be able to multi-task efficiently.

6.1.4 Computer Forensics Involves

• Preservation: It is important for the forensic team to preserve the integrity of the original evidence. The
original evidence should not be modified or damaged. Hence an image or a copy of the original evidence
must be made first for the analysis to be performed on.
• Identification: Before starting the investigation, the forensic team must identify the evidence and its location. For example, evidence may be contained in hard disks, removable media or log files. They also need to identify the type of evidence and the best method to extract data from it.
• Extraction: The forensic expert must extract data from the evidence after it has been identified. As volatile data may be lost at any moment, this data must be retrieved by the forensic investigator from the copy made of the original evidence. The derived data must then be verified against the original evidence.
• Interpretation: Interpreting the data that has been extracted is crucial to the investigation. The analysis and
inspection of the evidence must be interpreted in a lucid manner.
• Documentation: From the beginning of the investigation until the end, the forensic team must maintain
documentation relating to the evidence. The report includes the nature of the chain of custody and records
related to the examination of facts.


6.1.5 Goals of Forensics Analysis

Forensic analysis is necessary when there is a belief that electronic data may have been deleted, misappropriated, or otherwise managed in an inappropriate manner.
The purpose of the forensic examination is to obtain adequate knowledge about the data or equipment, its use (or misuse) and the responsibility of the persons involved, and then to create as accurate an image as possible of what occurred, when it occurred and how it occurred. In other words, forensic examination makes it possible to go further, so as to make the case stronger.

6.1.6 Cyber forensics Procedures

Steps of incident response

• Preparation: In preparation, the team develops the formal incident response capability: it creates an incident response process defining the organizational structure with roles and responsibilities; creates procedures with clear instructions for responding to an incident; selects the right people with the correct skill set; defines the conditions for reporting an incident; and defines what will be reported and to whom. This step is crucial to ensure that response actions are known and coordinated. Good preparation reduces potential damage by enabling a fast and successful response.
What to do before the incident: Planning leads to successful incident response, and incident response is reactive in nature. During this phase, the organization needs to prepare both the organisation itself and the Computer Security Incident Response Team (CSIRT) members. Pre-incident planning covers the preventive steps that the CSIRT commits to take in order to safeguard the assets of the organization.
Incident response plan: An incident response plan sets out the step-by-step process to be followed in the event of an incident.
• Identification: This step is where the team verifies whether an incident has occurred and whether the reported events, observations, indicators and deviations from normal operations suggest a malicious act or incident. Protection mechanisms already in place can facilitate this identification. The incident handler team will use their experience to look at the signs and indicators. The observation might occur at the network, host or system level. This is where the team leverages alerts and logs from routers, firewalls, IDS, SIEM, AV gateways, operating systems, network flows, and more.
• Containment: This stage consists of limiting the damage and stopping the offenders/attackers. The team decides which strategy it will use to contain the incident, based on processes and procedures. It interacts with the business owners and judges whether to shut down the system, disconnect the network, or continue operations and monitor the activity. Everything depends on the scope, magnitude and impact of the incident.
• Eradication: After the successful containment of the incident, the next step involves eliminating the cause of the incident. In the case of a virus incident, for example, the requirement is to eradicate the virus. It is in this step that the team should determine how the attack was initially executed and apply the necessary measures to ensure it does not happen again.
• Recovery: In this phase, a backup is restored or the system is reimaged. After successful restoration, it is important to monitor the system for a certain time period, because the team wants to identify any signs of compromise that evaded detection.
• Lessons Learned: Follow-up activity is crucial. This is where the team reflects on and documents what happened; learns what failed and what worked; identifies improvements for the incident handling process and procedures; and writes the final report.


Goals of Incident Response Plan:


• Prevents a disjointed and non-cohesive response.
• Confirms or dispels whether an incident occurred.
• Promotes the collection of accurate information.
• Establishes controls for the proper retrieval and handling of evidence.
• Protects the privacy rights established by law and policy.
• Minimises disruption to business and network operations.
• Allows for criminal or civil action against the culprit.
• Provides accurate reports and useful recommendations.
• Provides rapid detection and containment.
• Minimises the exposure and compromise of proprietary data.
• Protects the organisation's reputation and assets.
• Educates senior administration.
• Promotes rapid detection and prevention of such incidents in the future.

6.1.7 Incident response team


The people involved in the incident response process should come from various disciplines, and resources from different operational units of an organization are usually required.
The Computer Security Incident Response Team (CSIRT) is the team whose members carry out the incident response process. In order to resolve an incident, the CSIRT works together as an interdisciplinary team with the appropriate legal, technical and other expertise. Its members decide whether to invoke incident response based on the seriousness of the incident.

6.1.8 Detecting Incidents


One of the most important features of incident response is detection of the incident. It is also one of the most disjointed phases, over which the incident response function has only slight control.
Suspected incidents may be detected in innumerable ways. When someone suspects that an unauthorized or unlawful event has occurred involving an organisation's computer network or data processing equipment, they raise the alarm of a potential computer security incident. Initially, the incident may be reported by a user, detected by a system administrator, identified by an IDS alert, or detected by some other means.
In most organisations, end users may report an incident through one of three avenues: their immediate supervisor, the corporate help desk, or an incident hotline managed by the information security function. Typically, employee-related issues are reported to a supervisor or directly to the local Human Resources department, while end users report technical issues to the help desk.
It is important to record all known details. To make sure all relevant facts are recorded, one should build an initial response checklist.
The CSIRT should be activated and the appropriate people contacted after completing the initial response checklist. The team will use the information from the checklist when conducting the subsequent phases.


6.1.9 Chain of custody


Chain of Custody is an essential first step in cyber forensics investigations. It is essentially the documentation of how the items acquired for investigation have been protected, transported and checked, showing that they have been preserved in an appropriate manner. Chain of custody demonstrates to the courts and to the client that the media was not tampered with. It is an audit trail of 'who did what' on a single piece of evidence and 'why it happened'.
Digital evidence is an integral element in the identification of motive, mode and process in computer-related crimes, and it is critical in many internal investigations. Digital evidence is typically acquired from a myriad of devices, including a vast number of IoT devices that store user information and data 'spores', digital video and images (which may store important metadata and obfuscated/hidden information), audio evidence, and other data stored on flash drives, hard disk drives, and other physical media.
The digital forensics process follows a structured path comprising four primary steps (a minimal custody-record sketch follows the list):
• Collection: The identification, marking, documentation and retrieval of data from possible relevant sources while maintaining the quality of the collected data and evidence. This is where the Chain of Custody cycle begins; the Chain of Custody runs through all four steps.
• Examination: A forensically sound method is used to process the collected data, both manually and with automated tools. Examiners may extract data of particular interest, which will be used in testimony that supports or refutes an assertion. During this step, not only are the results of the examination recorded, but the Chain of Custody documentation is also completed to note the disposition of any collected evidence used in the examination and how it was used.
• Analysis: The analysis builds on the results of the examination. Legally justifiable methods and techniques are used to derive useful information that addresses the questions posed in a particular case. Again, Chain of Custody reporting may be involved in this step.
• Reporting: This is the examination and review report. Reporting usually involves a statement about the Chain of Custody, an overview of the various instruments used, a description of the analysis of the different identified data sources, problems and vulnerabilities, and suggestions for additional forensic measures.
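Below is a minimal sketch of what a machine-readable custody record might look like. The field names and file name are illustrative assumptions, not a prescribed standard, and real investigations also keep signed paper forms:

# Minimal sketch of a machine-readable chain-of-custody record.
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone

def sha256_of(path: str) -> str:
    # Hash the evidence so later entries can prove it is unchanged.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest()

@dataclass
class CustodyEntry:
    evidence_id: str   # e.g. the tag number on the evidence bag
    action: str        # "collected", "imaged", "transferred", ...
    handler: str       # who performed the action
    location: str      # where the action took place
    sha256: str        # digest of the item at this point in time
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

log = [CustodyEntry("EV-001", "collected", "A. Examiner",
                    "suspect's office", sha256_of("evidence.img"))]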


6.1.10 Handling Evidence


The following procedures can be adhered to while handling evidence during an investigation:
• Record information about the computer system being examined, including the media currently installed in it, while examining the contents of the hard drive.
• Before media is duplicated, digital photographs of the original system should be taken.
• The evidence tag for the original media or for the forensic duplicate must be filled out.
• The media should be appropriately labeled with an evidence label.
• The best-evidence copy of the evidence media must be stored in the evidence safe.
• An evidence custodian records the best evidence in the evidence log, with a corresponding entry for each piece of best evidence.
• All examinations are performed on a working copy, i.e. a forensic copy of the best evidence.
• The evidence custodian checks whether backup copies of the best evidence have been created. Once the principal investigator for the case states that the data will no longer be needed in an expeditious manner, the evidence custodian creates the tape backups.
• It is the responsibility of the evidence custodian to make sure that all disposition dates are met. The principal investigator assigns the evidence disposition date.
• To ensure all best evidence is properly presented, stored and labeled, the evidence custodian performs a monthly audit.


6.2 FIRST RESPONSE

6.2.1 Key steps to first response


First Response to a Cyber Crime incident involves:
1. Obtaining authorizations and resources
2. Securing the crime scene
3. Initial survey of the area
4. Planning search & seizure

6.2.2 Obtaining Authorizations and Resources

• Authorizations
An investigator must seek permission to conduct a search at the site of a crime from the appropriate authority.
A search warrant may have to be secured, which is a written order issued by a judge that directs a law
enforcement officer to search for a certain piece of evidence at a specific location.

• Obtaining a Search Warrant


An investigator can carry out his or her investigation once the investigation plan has been formed. The prosecutor would first need to obtain a search warrant from a court. A search warrant is a legal order from a judge directing a law enforcement officer to search a specific location for a particular piece of evidence.
The following are the two types of valid search warrants:
~ Electronic storage device search warrant: This enables computer component search and seizure such as
the following:
• Hardware
• Software
• Storage devices
• Documentation
~ Search warrant for service providers: If the crime is committed over the Internet, the first responder needs information from the service provider about the victim's computer. A service provider search warrant allows the first responder to obtain this information. The first responder can get the following information from the service provider:
• Service records
• Billing records
• Subscriber information

• Resources
First Response Toolkit: The forensic specialist has to create a toolkit before a cybercrime event happens and prior to any potential evidence collection. Once a crime is reported, someone should be able to report to the site immediately, without wasting any time gathering materials.


Creating a first response toolkit includes the following procedures:


• Create a trusted forensic computer or test bed
• Document the details of the forensic computer
• Document the summary of collected tools
• Test the tools
Evidence-Collecting Tools and Equipment: The specialist should have general crime scene processing
tools, such as the following:
• Cameras
• Notepads
• Sketchpads
• Evidence forms
• Crime scene tape
• Markers
The following are some of the tools and equipment used to collect the evidence:
• Documentation tools:
Cable tags
Indelible felt-tip markers
Stick-on labels
• Disassembly and removal tools:
Flat-head and Phillips-head screwdrivers
Hex-nut drivers
Needle-nose pliers
Secure-bit drivers
Small tweezers
Specialized screwdrivers
Standard pliers
Star-type nut drivers
Wire cutters
• Package and transport supplies:
Antistatic bags
Antistatic bubble wrap
Cable ties
Evidence bags
Evidence tape
Packing materials
Sturdy boxes of various sizes
• Other tools:
Gloves
Hand truck
Magnifying glass
Printer paper
Seizure disk
Unused floppy disks


• Notebook computers:
Licensed software
Bootable CDs
External hard drives
Network cables
• Software tools:
DIBS Mobile Forensic Workstation
Access Data’s Ultimate Toolkit
Teel Technologies SIM Tools
• Hardware tools:
Paraben forensic hardware
Digital Intelligence forensic hardware
Tableau Hardware Accelerator
WiebeTech forensic hardware tools
Logicube forensic hardware tools

6.2.3 Securing the Crime Scene


Securing the crime scene ensures that all unauthorised personnel are removed from the crime scene area. At this point in the investigation, the states of any electronic devices must not be altered. The following steps have to be initiated:
• Arrange the search warrant for search and seizure
• Plan the search and seizure
• Conduct the initial search of the scene
• Take care of health and safety issues
Implement the following guidelines when securing and reviewing an electronic crime scene:
• Follow the legal authority's rules for securing the crime scene.
• Verify the type of incident involved.
• Ensure that responders in the area are safe.
• Isolate those present at the scene.
• Locate any suspects and victims present, and provide assistance where needed.
• Check all details relevant to the offence.
• Send additional flash messages to other responding units.
• Request additional scene assistance if necessary.
• Establish a security perimeter and assess whether the suspects are still present at the crime scene.
• Protect evidence that could easily be lost.
• Secure devices holding perishable data, such as pagers and caller ID boxes.
• Ensure devices containing perishable data are secured, documented and photographed.
• Identify the telephone lines connected to devices such as modems and caller ID boxes.
• Log, disconnect and label phone lines and network cables.
• Observe the current situation and report findings at the scene.
• Secure latent evidence, such as fingerprints or other physical traces, that could be found on keyboards, keys, diskettes and CDs.


6.2.4 Initial survey of the area


The next step is to try to identify any evidence. The forensic professional should carefully survey the scene, observe and assess the situation, and decide on the steps for proceeding further.
From the information gathered and based on a visual inspection of the scene of offence, the forensic professional should identify all potential evidence. This may include conventional physical evidence such as manuals, user guides or other items left behind, like passwords written on slips of paper, bank account numbers, etc. It is also important to note the location of the different equipment and objects at the scene of offence. For example, a mouse on the left side of the desktop may suggest that a left-handed user controls the computer. While identifying the digital evidence, one should make sure that potentially perishable evidence is identified and all precautions are put in place for its preservation.

Potential Data Sources apart from the Crime Scene Area


The forensic specialist should also think of possible data sources at the crime scene area as well as in other places. There are usually many sources of information within an organization regarding network activity and application usage. Information may also be recorded by other organisations, such as an Internet service provider's (ISP's) logs of network activity. Forensic professionals should be mindful of the owner of each data source and the effect this might have on collecting the data. For example, getting copies of ISP records typically requires a court order. Forensic professionals should also be aware of the organization's policies, as well as legal considerations, regarding externally owned property at the organization's facilities (for example, an employee's personal laptop or a contractor's laptop).

Preliminary Interviews at Scene of Offence


To clarify the following points, the forensic expert would also need to interview people at the crime scene:
• What measures were taken to mitigate the problem?
• Were any logs (system, access control, etc.) covering the issue present? Are there questionable entries in them?
• Has anyone been using the device since the problem happened?
• Have any warnings been set off by firewall/IDS/network security devices?
• What commands or processes were running on the affected machine or on the network after the problem occurred?
• Do they have similar systems in any of their branches/offices?
• How is the register of Internet users/other users maintained?

6.2.5 Planning Search & Seizure


Developing a plan is an important first step in most cases because there are multiple potential data sources. The specialist should create a plan that prioritizes the sources, establishing the order in which the data should be acquired. The important factors for prioritization are listed below. By considering these three factors for each potential data source, forensic professionals can make informed decisions about the order of data source acquisition, as well as determine which data sources to acquire.
• Likely Value
Based on the forensic professional's understanding of the situation and previous experience in similar
situations, he or she should be able to estimate the relative likely value of each potential data source.


• Volatility
Volatile data refers to data that is lost in a live system after a computer is shut down or due to the passage of
time. Many activities performed on the device can also result in the loss of volatile data. The suggested order
in which volatile data should be generally collected, from the beginning to the end, is:
1. Network connections
2. Login sessions
3. Contents of memory
4. Running processes
5. Open files
6. Network configuration
7. Operating system time.
• Amount of Effort Required
The amount of effort required to acquire different data sources may vary widely. The effort involves not only
the time spent by the forensic professionals and others within the organization (including legal advisors) but
also the cost of equipment and services (e.g., outside experts).

A search and seizure plan should contain the following details:


• Description of the incident
• Incident manager
• Case name or title for the incident
• Location of the incident
• Applicable jurisdiction and relevant legislation
• Location of the equipment to be seized:
• Structure’s type and size
• Where computers are located
• Who was present at the incident scene
• Whether the location is potentially dangerous
• Details of what is to be seized (make, model, location, ID, etc.):
• Types
• Serial numbers
• Whether the machines to be confiscated were running or switched off
• If the machines were networked, and if so, what sort of network, where data is stored on the network,
where backups are kept, whether the system administrator is cooperative, whether the server needs to be
taken down, and the business impact of this action.
• Other work to be performed at the scene (e.g., whether a full search is required and what evidence is needed):
• Search and seizure type (overt/covert)
• Local management involvement

6.2.6 Formulate/Execute Response Strategy

The political, technical, legal and business factors that surround the incident should be considered when formulating the response strategy. When selecting the strategy, the objectives and suggestions of the group or individual with responsibility for the final decision should be taken into account.


• Considering the totality of the circumstances: The response strategy will vary with the circumstances of the computer security incident. While deciding how many resources are needed to investigate an incident, whether to create a forensic duplicate of relevant systems, whether to make a criminal referral, whether to pursue civil litigation, and other aspects of the response strategy, the following factors should be considered:
A. How critical are the affected systems?
B. How sensitive is the compromised or stolen information?
C. Who are the potential perpetrators?
D. Is the incident known to the public?
E. What level of unauthorized access was attained by the attacker?
F. What is the attacker's apparent skill?
G. How much system and user downtime is involved?
H. What is the overall monetary loss?
• Considering appropriate responses: Armed with the circumstances of the attack and its capacity to respond, the organisation needs to arrive at a viable response strategy. Common situations can be mapped to response strategies and potential outcomes; the response strategy determines how to proceed from an incident to an outcome.
• Taking action: An organization may need to discipline an employee or to respond to a malicious act by an outsider. The action can be initiated with a criminal referral, a civil complaint, or an administrative reprimand or privilege revocation, as the incident warrants.
• Legal action: A computer security incident under investigation may be actionable, i.e. it could lead to a lawsuit or court proceeding. The two prospective legal choices are to file a civil complaint or to notify law enforcement. When deciding whether to include law enforcement in the incident response, the following should be considered:
A. Does the damage/cost of the incident merit a criminal referral?
B. Is it likely that the outcome desired by the organisation will be achieved by civil or criminal action? Can the damages be recovered, or can restitution be received from the offending party?
C. Has the cause of the incident been reasonably established? (Law enforcement officers are not computer security professionals.)
D. Does the organisation have proper documentation and an organized report that will be conducive to an effective investigation?
E. Can substantial investigative leads be provided to law enforcement officials for them to act on?
F. Does the organisation know and have a working relationship (prior liaison) with local or federal law enforcement officers?
G. Is the organization ready to risk public exposure?
H. Do the past actions of the individual merit legal action?
I. How will law enforcement involvement impact business operations?
• Administrative action: More common than initiating civil or criminal actions is disciplining or terminating employees via administrative measures. Some administrative actions that can be implemented to discipline internal employees include:
A. Letter of reprimand.
B. Immediate dismissal.
C. Leave of absence for a specific length of time.


D. Reassignment of job duties.
E. Temporary reduction in pay to compensate for losses/damage.
F. Public or private apology for the actions.
G. Withdrawal of certain privileges, such as network or web access.


6.3 FORENSIC DUPLICATION

6.3.1 What is a Forensic Duplicate?


A Forensic Duplicate is a file, in raw bitstream format, that contains every bit of the source information. A Qualified Forensic Duplicate is a file containing every bit of the source information, but stored in an altered format; for example, the file might contain hashes of sectors on the drive, or empty sectors might be compressed.
A Restored Image is a forensic duplicate or qualified forensic duplicate restored to another storage medium. Restoring is quite difficult: for example, if the sizes of the original hard drive and the drive to which it is restored do not match, then some metadata, such as the partition table, will no longer match. In order to preserve access to the data on the mirror, the partition tables must be rewritten, so the result is no longer a true duplicate.

6.3.2 How to create a Forensic Duplicate?

A Mirror Image is created with hardware that performs a bit-by-bit copy from one hard drive to another.
Here are some tools used to create forensic duplicates:
• Logical Backup vs. Bit-Stream Imaging:
A logical backup copies the directories and files of a logical volume. It does not capture other data that may be present on the media, such as deleted files or residual data stored in slack space. Bit-stream imaging, also known as disk imaging or cloning, produces a bit-for-bit replica of the original media, including free space and slack space. Bit-stream images require more storage space and take longer to perform than logical backups.
• Write Blocker:
A write blocker is a hardware- or software-based tool that prevents a computer from writing to the storage media connected to it. Hardware write blockers are physically connected between the computer and the storage media being processed to prevent any writes to that media. A wide variety of write-blocking devices are available, based on the type of interface, e.g., SATA/IDE/USB.
It is very important to ensure that the evidence collected is not changed in any way. Hence an exact copy of the data residing on the evidence hard disk (or other electronic storage device) is made, on which the forensic specialist may perform the various analyses. The reason for this measure is that if a search were conducted on the original evidence, it would create both the actual and perceived problem that the original had been corrupted or altered by the person performing the analysis, making it vulnerable to a disqualifying objection in court.
It is equally important that the image copy be exact, as all conclusions will be based on the data from the copy, which in the end must lead to the same conclusions as would arise from the data in the original.
To create forensic duplicates of the evidence data efficiently, the following activities need to be performed:
• Making the media forensically sterile
• Copying the image exactly
• Using a hash to save time


Making the media Forensically Sterile


The media that is used to make this copy must be forensically sterile, to ensure that the media itself does not contaminate the evidence. To make the media forensically sterile, all previous data must be removed from the copy media with a software tool that is proven to remove all data from the drive. Merely reformatting a hard drive does not completely remove all files from it.
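The sketch below shows the idea in Python; it is deliberately simplified (a single zero pass, hypothetical device path) and destructive, whereas proven wiping tools also verify and document the wipe:

# DESTRUCTIVE: zero-fill a copy medium so it is forensically sterile.
# Simplified single-pass sketch; the device path is a hypothetical example.
def zero_wipe(device: str, block_size: int = 1024 * 1024) -> None:
    zeros = b"\x00" * block_size
    with open(device, "wb") as dev:
        try:
            while True:
                dev.write(zeros)
        except OSError:
            pass  # raised once the end of the device is reached

# zero_wipe("/dev/sdX")  # run only against the intended copy medium!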

Copying the image exactly


As discussed, the copy of the image has to be exact because the search will be conducted on the copy, not on the
original.
To ensure this consistency, the computer forensics expert uses special techniques during the copying process to
maintain the integrity of the original media.

Using a hash to save time


Another consideration concerns time. Once the copy is made, the forensic examination is performed using tools that can dramatically cut the amount of time required.
A computer forensics specialist will run files through a hash algorithm, a one-way mathematical formula. One-way means that the original value cannot be determined from the hashed value. A hash algorithm computes a unique attribute, in a sense producing a unique digital fingerprint representing a particular file.
MD5 (Message Digest 5) and SHA-1 (Secure Hash Algorithm 1) are among today's most popular hash algorithms.
This hash process serves two functions:
1. It helps ensure the integrity of the file: if the file is altered, its hash value changes, so any tampering is quickly discovered.
2. Computing a hash for each file on the evidence drive and comparing it against a hash database of all identified commercial applications and operating system components helps the forensic specialist recognize which files can be safely overlooked, thereby saving time. This is done using specialized tools.
The hash acts as a signature that speeds up the comparison between the original and the duplicate, replacing a bit-by-bit comparison while still ensuring that the bit streams of the original and the duplicate are the same.
A signature is a small piece of data, usually between 4 and 22 bytes long, computed from the contents of a sector, a string, a file, or a whole hard drive. 32-bit cyclic redundancy codes (CRCs) are a popular signature option. Cryptographically secure signatures such as MD5 or SHA-1 use algorithms so complex that it is computationally infeasible (i.e. it simply takes too long) to generate a sector, block, track, or file that has the same signature as a given sector, block, track, or file. A good duplication tool will have some way of proving that the duplicate is true, typically by calculating such a signature.
After this has been done, the forensic specialist must analyze the evidence and produce the appropriate reports. In some situations files are not immediately accessible because they are password-protected; special methods must then be used to break the passwords.
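A minimal Python sketch of the duplicate-verification hashing described above follows; the image file names are placeholders:

# Verify a forensic duplicate by comparing MD5 and SHA-1 digests.
import hashlib

def file_digests(path: str) -> tuple:
    md5, sha1 = hashlib.md5(), hashlib.sha1()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            md5.update(block)
            sha1.update(block)
    return md5.hexdigest(), sha1.hexdigest()

# The duplicate is accepted only if both digests match the original's.
assert file_digests("original.img") == file_digests("duplicate.img")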

How to create an Image


To create an image, as discussed, one should first take pristine media; a bit-by-bit image must then be made using a toolkit available for the purpose. One example is the Linux-based toolkit Helix, which has the dd tool built in to assist in making the forensic image of the hard drive. Once the image is created, the next step is to ensure its integrity. It is good practice to record the time and the creation method, including the image hash, for future reference.


Image creation can take several hours to execute. It is a simple task, but it needs to be practiced to get it right.
Traditionally, imaging of hard drives is done by removing the hard drive from the impacted system and creating a forensic image using a write blocker. But there are times when this method is not practical. Other ways of making a forensic image of a hard drive are live acquisition, boot disk acquisition, or the use of remote/enterprise-grade tools. A live system acquisition might be useful when the affected drive is encrypted, when there is a RAID across multiple drives, or when it is not feasible to power down the machine. However, this method will only grab the logical part of the hard drive, i.e. partitions such as FAT, NTFS, EXT2, etc.
The other method is to use a bootable forensic disk such as Helix. For that, the system must be rebooted using the CD/USB. This allows one to create a bit-by-bit image of the physical drive; the evidence on the drive is not altered during the boot process, and an image of the hard drive can be created as an image file. This image file can then be used across different analysis tools and is easier to back up.
To understand more, a quick hands-on look at creating a forensic image using the bootable disk method from a compromised or suspicious system using dd is recommended.
'dd' is a simple and flexible tool that is launched from the command line and is available for Windows and Linux. In this case, dd is run on a Linux system. dd copies chunks of raw data from one input source to an output destination; it knows nothing about partitions or file systems. dd reads from the input source specified by the if= suffix in blocks (512 bytes of data by default). It then writes the data to the output destination specified by the of= suffix.
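The Python sketch below mirrors what a raw dd copy does, adding an on-the-fly hash; the device and file names are hypothetical, and the source must sit behind a write blocker:

# Equivalent in spirit to:  dd if=/dev/sdb of=evidence.img
# Device and file names are hypothetical; keep the source write-blocked.
import hashlib

def image_disk(source: str, dest: str, block_size: int = 512) -> str:
    # Bit-by-bit copy of `source` to `dest`; returns the SHA-1 of the data read.
    h = hashlib.sha1()
    with open(source, "rb") as src, open(dest, "wb") as dst:
        while True:
            block = src.read(block_size)  # 512 bytes mirrors dd's default
            if not block:
                break
            h.update(block)
            dst.write(block)
    return h.hexdigest()

print(image_disk("/dev/sdb", "evidence.img"))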

Regardless of whether an image file or a restored image is used in the examination, the data should be accessed
only as read-only to ensure that the data being examined is not modified and that it will provide consistent results
on successive runs.
Write-blockers can be used during this process to prevent writes to the restored image. After the backup has been restored (if needed), the analyst begins reviewing the collected data and evaluates the related files and data by finding all files, including deleted files, remnants of files in slack and free space, and hidden files. The analyst may then need to extract the data from some or all of the files, which could be complicated by measures such as encryption and password protection.

Hardware Mirroring
Hardware mirroring is achieved with hardware duplicators that take a hard drive and replicate it onto another hard disk. When the two hard drives are not identical, the mirroring process destroys data placement and hence must alter the metadata that the OS uses to access sectors, such as partition tables and master boot records.
Normally, such mirroring hardware is used to install the same disk image on many machines or to back up drives before repairing them. However, a handful of companies produce forensic hardware units that capture a suspect drive. These make exact copies of the original partitioning and boot sectors, and verify the accuracy of the capturing process using cryptographically secure hashes of the original and of the mirror.
One of their big advantages is speed and safety. For example, the Logicube unit places the capturing disk (the destination) within the enclosure and links the suspect drive outside, preventing the forensic examiner's most critical mistake: writing in the wrong direction and destroying the evidence. Some companies even offer hardware duplication models which cannot, under any circumstances, write to the suspect drive. Some models have special hardware that allows the capture of laptop disks through a PCMCIA or CardBus card, or the capture of a disk drive in situ through the USB port. (Firewire uses too high a level of abstraction to allow forensic capture of disk drives.)


Fig 6.1: Forensic Disk Drive Capturing with Logicube SF-5000

Software-Based Forensic Duplication


Several software products are available that create qualified forensic duplicates. For example:
UNIX dd: The UNIX dd utility can produce forensic duplicates. dd is a UNIX tool, so the original drive must be mounted under UNIX. Raw dd duplicates have to be verified with hashes (signatures), but advanced versions of dd, or scripts around it, include the verification.
EnCase: EnCase is a very expensive but very impressive Windows-focused forensics suite that includes the creation of qualified forensic duplicates. Being Windows-based makes EnCase easy to use, but it also raises questions about the OS detecting suspect drives and modifying their contents in the process. This does not, of course, mean that EnCase will ever alter user data. EnCase's strength lies in its seamless integration of all forensic investigation tasks. EnCase generates a qualified forensic duplicate.
Safeback: Safeback is a small software program that is installed on a DOS boot disk (usually a floppy, but that is changing as floppy drives die out). Safeback runs under DOS and offers options on the type of duplicate: a true forensic duplicate or a mirror.
After a logical backup or bit-stream imaging has been performed, the backup or image may have to be restored to other media before the data can be examined. This depends on the forensic tools that will be used to perform the analysis: some tools can analyze data directly from an image file, while others require the backup or image to be restored to a medium first.
Hardware-based Disk Imaging Commercial Tools
These are tools which help in evidence acquisition but require a specific hardware assembly. Forensic investigators can process computer hard drives with normal PC hardware, but the use of standalone, specialized hardware tools can improve and speed up the processing of data.


Fig 6.2: A non-exhaustive list of hardware-based forensic disk acquisition tools


Software-based Disk Imaging Tools


Forensic software tools are organized into command-line and GUI programs. Some tools specialize in performing a single function, for example SAFEBACK, a New Technologies, Inc. (NTI) command-line disk acquisition tool. Many tools are designed to perform a number of different tasks, for example Technology Pathways ProDiscover, X-Ways Forensics, and Guidance Software EnCase.

Write-Protection with USB Devices


The fundamental prerequisite for a successful forensic analysis of digital evidence is that the original evidence must not be changed, i.e. the analysis or retrieval of digital data from a confiscated computer's hard disks must be carried out in such a way that the disk contents are not changed. Hardware write blockers are therefore needed to avoid modification of the evidence.

Fig 6.3: A list of hardware write blockers


Software-based Disk Imagers

• AccessData FTK Imager Lite version 3.1.1 (Free; Windows): Developed by AccessData. A standalone disk imaging program that can be installed and executed from CD/DVD or USB media.
• AccessData Forensic Toolkit (FTK) version 5.2 (Proprietary; Windows): Developed by AccessData. A multi-purpose tool commonly used to index acquired media.
• EnCase Forensic Imager (Free; Windows): Part of the suite of digital forensics products by Guidance Software. Free to download and use; a standalone product that requires no installation and no EnCase Forensic license; acquires local drives.
• EnCase Portable (Proprietary; Windows): Delivered on a USB device; allows vital data to be collected quickly and easily in a forensically sound manner. Can collect memory from running computers as well as from computers that are turned off.
• EnCase Forensic v7.09 (Proprietary; Windows): A suite of digital forensics products by Guidance Software; rapidly acquires and analyzes data from the widest variety of devices.
• ProDiscover Basic Edition (Free; Windows): Developed by Technology Pathways. A complete GUI-based computer forensic software package that includes the ability to image, preserve, analyze and report on evidence found on a computer disk drive.
• The Sleuth Kit (TSK) (Open source; Windows, Linux, OS X and Unix systems): A library and collection of command-line tools for imaging and analyzing disk images.
• PTK Forensics (Proprietary; LAMP): A GUI for The Sleuth Kit.
• Autopsy (Free; Windows): A digital forensics platform and graphical interface to The Sleuth Kit and other digital forensics tools.
• MacQuisition (Proprietary; Mac): Developed by BlackBag Technologies, Inc. A forensic tool built for imaging Mac OS X systems.
• SANS Investigative Forensic Toolkit (SIFT) Workstation version 2.14 (Proprietary; Ubuntu): Developed by an international team of forensics experts led by SANS Faculty Fellow Rob Lee. A multi-purpose forensic operating system pre-configured with all the necessary tools to perform a detailed digital forensic examination.


Memory Imaging Tools


A great deal of information can be acquired from RAM analysis that is unavailable during a typical forensic acquisition and analysis. With the advent of BitLocker, and the increasing sophistication of malware, rootkits and other viruses, live memory analysis has become even more important to the field of computer forensics. One hardware solution for live memory acquisition uses a Firewire device. Firewire devices use Direct Memory Access (DMA), without having to go through the CPU. Memory mapping is performed in the hardware without going through the host operating system, which not only allows high-speed transfers but also bypasses the restriction in some versions of Windows that disallows memory access from user mode.
The UltraBlock Firewire is used to acquire data from a Firewire connection in a forensically sound, write-protected environment.

Fig 6.4: The Ultra-Block Firewire

• CaptureGUARD Physical Memory Acquisition Hardware, ExpressCard (Commercial; Windows XP, Windows Vista and Windows 7 with an ExpressCard-34 hot-plug slot): Capable of imaging the physical memory of the computer it is connected to. Creates dump files in the standard WinDD format that can be used with other WinDD-compatible dump analysis tools.
• CaptureGUARD Physical Memory Acquisition Hardware, PCIe add-on (Commercial; Windows XP, Windows Vista and Windows 7 with an available PCI-Express expansion slot): Add-on card capable of imaging the physical memory of the computer it is connected to. Creates dump files in the standard WinDD format.


There are many memory acquisition tools. Important tools are listed below:
• Belkasoft Live RAM Capturer (Free; Windows XP, Vista, Windows 7 and 8, Server 2003 and 2008): Developed by Belkasoft. Works in kernel mode, which allows it to bypass the proactive anti-debugging protection used by many modern applications such as online games and intrusion detection systems. Kernel-mode operation yields more reliable results than user-mode tools.
• DumpIt (Free; Windows): Developed by MoonSols Ltd. Generates a physical memory dump of Windows machines. Works with both x86 (32-bit) and x64 (64-bit) machines. Can be deployed on USB keys for quick incident response.
• OSForensics (Proprietary; Windows): Developed by PassMark Software. Acquires live memory on 32-bit and 64-bit systems; output can be a straight dump or a Microsoft crash dump file.
• Linux Memory Extractor (LiME) (GNU GPL v2; Linux and Linux-based devices, such as those powered by Android): Allows the acquisition of volatile memory from Linux and Linux-based devices. The tool supports dumping memory either to the file system of the device or over the network.
• Goldfish (Free; 32-bit versions of Mac OS X up to and including Mac OS X 10.5 Leopard): Developed by the Digital Forensics Investigation Research Laboratory. A Mac OS X live forensic tool whose main purpose is to provide an easy-to-use interface to dump the system RAM of a target machine via a Firewire connection.

6.3.3 Authenticate the Evidence


The Federal Rules of Evidence (FRE) and the laws of many state jurisdictions define data as 'written works' and 'records'. Before being introduced as evidence, documents and recorded material must be authenticated. The evidence collected by any person or investigator should be collected using authenticated methods and techniques, because during court proceedings it will become major evidence to prove the crime. In other words, for a piece of evidence to support testimony, it must be authenticated by a witness who has personal knowledge of its origin. For evidence to be admissible it must be authenticated; otherwise the information cannot be presented to the judging body. As a matter of record, the evidence collected by any person should meet the demands of authentication, and it must carry some form of internal documentation that records the manner in which the information was collected.


Investigation: The investigation stage includes defining the who, what, when, where, how and why surrounding an incident. One of the finest ways to streamline a technical investigation is to divide the evidence it collects into three categories:
Host-based evidence: Data usually collected from Windows or Unix machines, or from the device actually involved in the incident.
Network-based evidence: Evidence usually collected from sources not directly involved in an incident, such as routers, IDS, network monitors, or some other network node.
Other evidence: Testimonial data that contributes to the case, such as motive, intent, or other non-digital evidence.


6.4 STANDARD OPERATING PROCEDURES FOR DISK FORENSICS

6.4.1 The Initial Assessment


After evidence data/data sources have been collected, the next step for the forensic specialist is the examination and extraction of the relevant pieces of information from the collected data.
This step includes making a true copy of the data to operate on, so that the original data is not changed. It can also include bypassing or minimizing OS or device features that obscure data and code, such as mechanisms for data compression, encryption and access control.
A collected piece of evidence, such as a hard drive, can contain hundreds of thousands of data files; it can be a challenge to recognize the data files containing relevant information, including information hidden by file compression and access control. Besides that, interesting data files can contain extraneous information that should be filtered out. For example, yesterday's firewall log could contain millions of records, but only five of those records may be relevant to the case at hand.
To support the forensic specialist, various tools and techniques are available to reduce the amount of data that must be sifted through. Text and pattern searches can be used to identify pertinent data, such as finding documents that mention a particular subject or person, or identifying e-mail log entries for a particular e-mail address.
Another useful technique is to evaluate the content type of each data file, such as text, images, music, or a compressed set of files. There are also repositories of known-file information, which can be used to include or exclude files from further consideration (a minimal filtering sketch follows below).
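A hedged sketch of known-file filtering is shown below; the known-hash set would in practice come from a reference library such as an NSRL-style hash database, and the set here is a placeholder:

# Filter out files whose hashes appear in a known-file database,
# leaving only the files that merit a closer look.
import hashlib
from pathlib import Path

def md5_of(path: Path) -> str:
    h = hashlib.md5()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest()

def files_of_interest(root: str, known_hashes: set) -> list:
    return [p for p in Path(root).rglob("*")
            if p.is_file() and md5_of(p) not in known_hashes]

# Placeholder set: the MD5 of an empty file, standing in for a full database
print(files_of_interest("/mnt/evidence", {"d41d8cd98f00b204e9800998ecf8427e"}))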
The Forensic Specialist will need to be able to perform the following tasks:
• Obtain items relevant to forensic examinations in line with investigative procedures from authorized
channels
• Check forensic objects against documents and locate and correct any inaccuracies
• Locate and obtain the required resources that might be needed to retrieve relevant data or information
from the evidence
• Create an image or copy of the original storage system using clean storage media to back up
• Install write blocking software to avoid any alteration in computer.
• Identify data which is required to be extracted and like sources.
• Select the best method and tools for extraction as per the make and model of the device
• Locate the appropriate files and electronic data manually or use forensic software
• Display slack space contents with hex editors or special slack recovery software
• Search for secret, deleted or missing files and details
• Recognize the type of data contained in several files by looking at their file headers or simple histogram
• Identify the presence of encrypted data or the use of Steganography and the feasibility of decryption or
extracting embedded data
• Recognise the encryption process by looking at the file header, recognizing the encryption programs
installed on the device or finding encryption keys
• Extract the embedded data by finding the Stegano key, or by using brute force and cryptographic attacks
to determine a password
• Use various tools and techniques to crack, disable or bypass passwords imposed on individual files, as well
as OS passwords

254
Unit 6 - Cyber Forensics

• Discover, access and copy data from hidden, encrypted or damaged disks
• Un-compress files and read disk images
• Recover data and metadata from files using forensic toolkits
• Identify malicious behavior against OSs using security software, such as file integrity checkers and host
IDSs, etc.
• Perform string searches and pattern matching using Boolean search tools, fuzzy logic, synonyms and
definitions, stemming and other search methods
• Assess and recover network traffic data with the aim of determining what has happened and how the
organization's network systems have been affected
• Obtain relevant details from ISPs and cloud service providers
• Reveal (unlock) digital photos that have been altered to conceal a location or a person's identity
• Send the computer or original media for physical evidence inspection after the data has been extracted
• If the equipment is damaged, disassemble and reconstruct it to retrieve the missing data
• Carefully document the process followed in extraction as well as the data retrieved
• Identify and minimize any risks to safety linked to working with forensic items in line with health and safety
procedures
• Take measures to ensure the preservation of physical evidence such as fingerprints, DNA, etc. while handling
the evidence

6.4.2 Locating the Files and Extracting Data

Locating the files is the first step in the examination. A disk image can capture several gigabytes of slack space and
free space, which can include thousands of fragments of data and information. Extracting data manually from unused
space can be a time-consuming and difficult process since it requires knowledge of the underlying file system format.

Introduction to Registry
The Registry serves, in effect, as the Windows operating system's memory. It is a hierarchical database that stores the
configuration settings and options required to run applications and system components.
The following information is stored in the registry:
• When a user installs new software, an application, a hardware device or a system driver, its initial
configuration settings are stored in the registry as keys and values.
• Modifications made to such settings during the use of the program or hardware are likewise recorded
in the registry.
During their run-time, programs and device components retrieve their current registry configuration so as to operate
according to the current user's settings. The registry also acts as an index into kernel operation, revealing machine
run-time information.
This information, however, is dynamic and exists only while Windows is running. Because of its nesting pattern, the
Registry resembles the folder subtree displayed in Windows Explorer.
Depending on the version of the OS, four to six 'hives' or main registry keys are displayed, each containing a nesting
of keys, subkeys and values, with a set of supporting files containing backups of its data. Each is named according
to its handle (Handle Key = HK) as specified in the Win32 API and is used for settings governance.
The database configuration of the registry can be accessed via the Registry Editor, RegEdit. Although RegEdit displays
a single hierarchical list, the Windows Registry is not one large database file. The primary data structure is the hive,
of which many exist. Each hive is defined by a root key which gives access to all sub-keys of its tree, up to 512 levels deep.
There are four to six predefined root keys that are used to access all other keys or sub-keys (depending on the Windows
version used). In other words, the tree is traversed downwards from the base. New keys are introduced under these root
keys, and all existing keys must be reached via the root keys. One downside to this design is that a problem with a
higher-level key will prevent access to the keys below it. In practice, this rarely occurs.

The following table lists the root keys with the abbreviations:

Root Key Abbreviation | Root Key Name | Component data is stored for
HKCC | HKEY_CURRENT_CONFIG | Current hardware configuration
HKCR | HKEY_CLASSES_ROOT | Classes (types) of documents and registered applications
HKCU | HKEY_CURRENT_USER | Current logged-on user
HKLM | HKEY_LOCAL_MACHINE | The system hardware, software and security
HKPD | HKEY_PERFORMANCE_DATA | Performance data
HKU | HKEY_USERS | User profiles

Programs gain access to the registry by using the Registry Application Programming Interface (API) which provides a
standard set of functions for the Windows sub-systems and application programs to access and update the Registry.
This is how the Registry editor (RegEdit) and other utilities work.
When a program uses the API to access the registry, the Windows Object Manager returns a handle for the object
identified by a key; that is why the "HKEY" in the root key names means "handle key".
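As a simple illustration of the Registry API on a live system, the following sketch uses Python's standard winreg module (Windows only) to open a well-known key under HKEY_LOCAL_MACHINE and read a value through a handle, much as RegEdit does. Note that this queries the live registry of the machine it runs on; hives from evidence media require offline tools instead.

    import winreg

    # Open a handle to a well-known key under HKEY_LOCAL_MACHINE.
    key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                         r"SOFTWARE\Microsoft\Windows NT\CurrentVersion")

    # Read a single value through the handle, then release it.
    product_name, value_type = winreg.QueryValueEx(key, "ProductName")
    print("ProductName:", product_name)
    winreg.CloseKey(key)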
The following illustration is an example registry key structure as displayed by the Registry Editor.
Each of the trees under Computer is a key. The HKEY_LOCAL_MACHINE key has the following subkeys: HARDWARE,
SAM, SECURITY, SOFTWARE, and SYSTEM. Each of these keys, in turn, has subkeys. For example, the HARDWARE key
has the subkeys DESCRIPTION, DEVICEMAP and RESOURCEMAP; the DEVICEMAP key has several subkeys, including
VIDEO.
Each user's registry settings (HKEY_CURRENT_USER) are stored in the registry hive file NTUSER.DAT in the user's profile
directory. The NTUSER.DAT file is a copy of the data stored in the registry hive for a specific user. There are three or more
hive files named NTUSER.DAT: one relates to the network services account, one to the local services account, and the
rest to user accounts (each user account has its own NTUSER.DAT hive file).

Windows Artifacts and their Paths

The Windows registry contains a wealth of information in the form of artifacts: relics of past activity. Many data artifacts
are also found in the operating system support files, the Windows folder and its sub-folders, and the registry files.
Collectively, these components allow examiners to find evidence that establishes a digital history, connections, or a
legal position.
Windows artifacts are important in forensics for the following reasons:
• Around 90% of the traffic in the world comes from computers using Windows as their operating system.
• The Windows operating system stores different forms of information relating to computer system user operation.
• Windows artifacts can give access to system created data
• Some of the important artifacts are:
• Recycle Bin: This is one of the most significant items in a Windows forensic investigation. The Windows Recycle Bin
holds files that the user has deleted but that have not yet been physically overwritten by the system. Even if the user
completely removes a file from the system, it serves as an important source of evidence, because the examiner
can extract valuable information from the deleted files, such as the original file path and the time the file was sent
to the Recycle Bin.
• Sticky Notes: Windows Sticky Notes replaces the real-world habit of writing with pen and paper. These notes float
on the desktop with different options for colors, fonts, etc. In Windows 7 the Sticky Notes file is stored as
an OLE file, so a Python script can parse the OLE file to extract metadata from Sticky Notes.
• Windows registry files contain many valuable details for a forensic analyst and are like a treasure chest of
information. The registry is a hierarchical database containing information about the configuration of the operating
system, user behavior, software installation, etc. A Python script can, for example, read common baseline information
from the SYSTEM and SOFTWARE hives, as sketched below.
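A minimal sketch of that idea, assuming the third-party python-registry package is installed and that a SYSTEM hive has been exported from the evidence image. The hive path and the use of ControlSet001 are assumptions; on a real hive, the Select key indicates which control set is current.

    from Registry import Registry

    # Open an exported SYSTEM hive file (path is hypothetical).
    reg = Registry.Registry("SYSTEM")

    # Walk to the key that stores the machine's computer name.
    key = reg.open("ControlSet001\\Control\\ComputerName\\ComputerName")

    # Print every value under the key (name/value pairs).
    for value in key.values():
        print(value.name(), "=", value.value())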

Extracting Data
Fortunately, several tools are available that can automate the process of extracting
data from unused space and saving it to data files, as well as recovering deleted files
and files within the Recycle Bin. Analysts can also display the contents of slack space
with hex editors or special slack recovery tools.
The following processes are among those that an analyst should be able to perform
with a variety of tools:
Extracting data from slack space
When an operating system writes a file to disk, it allocates a certain number of sectors.
The allocated sectors and their location on the disk are recorded in a directory table
for later access. When the file is deleted, space originally allocated to it is simply
marked as unallocated. The actual data remains on the disk. Deleted files in this state
are easily recoverable by many disk utilities. When a new file is written to this same
space, the OS may allocate it the same sectors; however, the new file's content may not
completely fill each sector. The portion of a sector that still retains the content of the
deleted file is called slack space.
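A back-of-the-envelope sketch of the arithmetic involved, assuming a hypothetical 4,096-byte cluster size:

    CLUSTER_SIZE = 4096  # assumed cluster size in bytes

    def slack_bytes(file_size, cluster_size=CLUSTER_SIZE):
        """Bytes left unused in the file's last allocated cluster."""
        remainder = file_size % cluster_size
        return 0 if remainder == 0 else cluster_size - remainder

    # A 10,000-byte file occupies three 4 KB clusters (12,288 bytes);
    # the final 2,288 bytes are slack and may still hold older data.
    print(slack_bytes(10000))  # -> 2288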
Slack space is a significant source of evidence in a forensic investigation. Slack space may contain sensitive details
about a defendant that a lawyer can use in a courtroom. For example, if a user deleted files that filled an entire cluster
of the hard drive, and then saved new files that filled only half of that cluster, the latter half of the cluster would not
actually be empty: it could contain remaining information from the deleted files. This information can be extracted by
forensic investigators using special computer forensic tools.

Using File Viewers

Using viewers to display the contents of certain types of files, instead of the original source applications, is an
effective technique for scanning or previewing data, and is more productive because the analyst does not require the
native application for each type of file. Various tools are available to view popular types of files, and
specialized tools are also available to display graphics only. If the available file viewers do not support a file format,
the original source application should be used; if this is not available, then it may be necessary to research the file's
format and manually extract the data from the file.

Uncompressing Files
Compressed files can contain useful data files as well as other compressed files. It is therefore necessary for the
analyst to locate compressed files and extract their contents. Uncompressing files should be done early in the forensic
process to ensure that the contents of compressed files are included in searches and other actions. Analysts should note,
however, that compressed files may contain harmful content, such as compression bombs, which are files that have been
compressed repeatedly, usually dozens or hundreds of times. Compression bombs may cause screening tools to fail
or to consume significant resources; they may also contain malware and other malicious payloads. Although there is
no sure way of detecting a compression bomb before uncompressing a file, there are ways of minimizing its effect.
For example, the examination system should use up-to-date antivirus software and should be a stand-alone system, so
that any impact is limited to that system only. Additionally, an image of the examination system should be created so
that the system can be restored if necessary.
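One simple screening heuristic, sketched below with Python's standard zipfile module, is to compare the declared uncompressed size of each archive member with its compressed size before extracting anything. The ratio and size thresholds are assumptions, and a real tool would also handle nested archives.

    import zipfile

    MAX_RATIO = 100      # assumed: treat >100:1 expansion as suspicious
    MAX_TOTAL = 1 << 30  # assumed: refuse to expand more than 1 GiB

    def looks_like_bomb(path):
        total = 0
        with zipfile.ZipFile(path) as zf:
            for info in zf.infolist():
                total += info.file_size
                if info.compress_size and \
                        info.file_size / info.compress_size > MAX_RATIO:
                    return True
        return total > MAX_TOTAL

    print(looks_like_bomb("evidence_archive.zip"))  # hypothetical file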
Graphically Displaying Directory Structures: This technique makes it simpler and quicker for analysts to gather general
information about the contents of media, such as the type of software installed and the likely technical aptitude of the
user(s) who generated the data. Some products can display directory structures from Windows, Linux, and UNIX, while
other products are specific to Macintosh directory structures.

Identifying Known Files

The benefit of finding files of interest is clear, but it is also beneficial to eliminate unimportant files, such as known
good OS and application files, from consideration. Analysts can use validated hash sets, such as those generated by
NIST's National Software Reference Library (NSRL) project, or validated personal hash sets, as a basis for distinguishing
known benign and known malicious files. Hash sets typically use the SHA-1 and MD5 algorithms to establish message
digest values for each known file.
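The following sketch shows the underlying idea: hash every file in a working copy of the evidence and set aside those whose digests appear in a known-good set. The digest shown and the directory name are placeholders, not real NSRL entries.

    import hashlib
    from pathlib import Path

    known_good = {"d41d8cd98f00b204e9800998ecf8427e"}  # placeholder digests

    def md5_of(path):
        h = hashlib.md5()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Flag only the files that are not in the known-good set.
    for p in Path("evidence_copy").rglob("*"):  # hypothetical directory
        if p.is_file() and md5_of(p) not in known_good:
            print("needs review:", p)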

Performing String Searches and Pattern Matches

String searches help to locate keywords or strings within vast volumes of data. Many search tools are available that
can use Boolean logic, fuzzy logic, synonyms and concepts, stemming, and other search methods. Examples of common
searches include searching a set of files for several terms, and searching for misspelled versions of those terms.
Developing a concise set of search terms will help the analyst reduce the amount of information to be reviewed. The
following are some concerns or potential difficulties when running string searches:
Some proprietary file formats cannot be string-searched without specialized software. Furthermore, compressed,
encrypted, and password-protected files require additional preprocessing before a string search.
The use of multi-byte character sets that include foreign or Unicode characters may cause string search problems.
Many search applications attempt to solve this by including language translation features. Another potential problem
is the inherent limitation of the search function or its algorithm.
For example, if part of a search string resided in one cluster and the remainder of the string resided in a non-adjacent
cluster, a match might not be found. Similarly, if part of a search string resided in one cluster and the remainder
resided in another cluster that was not part of the same file as the first cluster, some search tools might report a
false match.

Accessing File Metadata

File metadata provides details about a given file. For example, metadata from a graphic file might include the
creation date of the graphic, copyright details, a description, and the identity of the creator. Metadata from digital
camera-generated graphics may include the make and model of the digital camera used to take the image, as well
as F-stop, flash, and aperture settings. For word processing files, metadata might indicate when and by whom edits
were last performed, user-defined comments, the author, and the organization to which the program is licensed.
Special utilities can extract this metadata from files.
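As one sketch of metadata extraction, assuming the third-party Pillow imaging library is installed, EXIF tags such as camera make and model can be read from a photograph; the file name here is hypothetical.

    from PIL import Image, ExifTags

    img = Image.open("photo.jpg")  # hypothetical evidence file
    exif = img.getexif()

    # Translate numeric EXIF tag IDs into readable names where known.
    for tag_id, value in exif.items():
        tag = ExifTags.TAGS.get(tag_id, tag_id)
        print(f"{tag}: {value}")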

Extracting the Data


The rest of the examination process involves extracting data from some or all files. To make sense of the contents of
a file, a forensic specialist needs to know what type of data the file contains.
The intended purpose of file extensions is to denote the nature of the file and its contents.
For example:
• a jpg extension indicates a graphic file
• an mp3 extension indicates a music file
However, users may assign any file extension to any type of file, such as giving a text file the name mysong.mp3, or
may omit the file extension altogether. Additionally, file extensions created on one OS may be hidden or unsupported
on another. An analyst should therefore not presume that file extensions are correct.
The forensic specialist can more accurately identify the type of data stored in many files by looking at their file
headers. A file header contains identifying information about a file and possibly metadata that provides information
about the file's contents.
The file header includes a file signature identifying the type of data found in that particular file. The example below in
the Figure has a file header beginning with FF D8, which indicates a JPEG file.
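A minimal sketch of signature checking: read the first few bytes of a file and compare them against a small table of well-known signatures. The table here is an illustrative subset, not a complete reference.

    # Illustrative subset of well-known file signatures (magic bytes).
    SIGNATURES = {
        b"\xff\xd8\xff": "JPEG image",
        b"\x89PNG\r\n\x1a\n": "PNG image",
        b"PK\x03\x04": "ZIP archive (also DOCX/XLSX, etc.)",
        b"%PDF": "PDF document",
    }

    def identify(path):
        with open(path, "rb") as f:
            header = f.read(8)
        for magic, name in SIGNATURES.items():
            if header.startswith(magic):
                return name
        return "unknown"

    # A text file renamed mysong.mp3 would come back as "unknown",
    # contradicting its extension.
    print(identify("mysong.mp3"))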

A file header could be in a file separate from the actual file data.
A simple histogram showing the distribution of ASCII values as a percentage of the total characters in a file is another
useful technique for determining the type of data in a file. A spike at the values 'space', 'a', and 'e', for example, usually
indicates a text file, while a flat, even distribution across the histogram indicates a compressed file.
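Byte-frequency analysis is easy to sketch; the entropy figure below is a common companion to the histogram, with plain text typically scoring around 4 to 5 bits per byte and compressed or encrypted data approaching 8. The file name is hypothetical.

    import math
    from collections import Counter

    def entropy_bits_per_byte(path):
        data = open(path, "rb").read()
        if not data:
            return 0.0
        counts = Counter(data)
        total = len(data)
        return -sum((n / total) * math.log2(n / total)
                    for n in counts.values())

    print(f"{entropy_bits_per_byte('sample.bin'):.2f} bits/byte")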

Handling Encryption
Encryption often presents challenges. Users might encrypt individual files, folders, volumes, or partitions so that
others cannot access their contents without a decryption key or passphrase. The encryption might be performed by
the OS or a third-party program. Although it is relatively easy to identify an encrypted file, it is usually not so easy to
decrypt it.
The forensic specialist might be able to identify the encryption method as follows:
• By examining the file header
• By identifying encryption programs installed on the system
• By finding encryption keys (which are often stored on another media).
If the encryption method is known, the analyst can better assess the feasibility of decrypting the file. In some cases,
decryption of a file is not possible because the encryption is strong and the authentication credential (e.g., passphrase)
used to decrypt it is not available. Although the use of encryption can usually be detected quite quickly, it is more
difficult to detect the use of steganography.

Handling Steganography
Steganography, also known as 'steg', is the embedding of data within other data. Examples of steganography are
digital watermarks and the hiding of words and details within images. Some techniques a forensic specialist can use
to locate stegged data include:
• Discovering several versions of the same image
• Identifying the presence of grayscale images
• Searching metadata and registries
• Using histograms
• Searching for recognized Steganography applications by using hash sets
Once it has been established that stegged data exists, it may be possible to extract the embedded
data by determining which software generated the data and then discovering the stego key, or by using brute force and
cryptographic attacks to determine the password.
However, such efforts are often unsuccessful and can be extremely time-consuming, particularly if the forensic
specialist does not find a known steganography application on the media being reviewed. In addition,
some software programs can analyze files and estimate the probability that the files were altered with steganography.

Handling Password protected files


The forensic specialist might also need access to non-stegged files that are password protected. Passwords are
mostly stored, in encoded or encrypted form, on the same system as the files they protect. Many
applications are available that can crack passwords placed on individual files, as well as OS passwords.
• Most cracking utilities can attempt to guess passwords, as well as perform brute force attacks that try
every possible password. The time needed for a brute force attack on an encoded or encrypted password
can vary greatly, depending on the type of encryption used and the strength of the password itself.
• Another approach is to bypass a password. For example, one might boot the device and disable the
screensaver password, or bypass the Basic Input/Output System (BIOS) password by resetting the BIOS
jumper on the system motherboard or using the manufacturer's backdoor password. Bypassing a
password, however, may mean rebooting the system, which may be undesirable.
• Instead, seek to capture the password by network or host-based controls (e.g. packet sniffer, keystroke
logger) with appropriate management and legal approval.
If a boot-up password has been set on a hard drive, it might be possible to guess it (e.g., a default password from a
vendor) or to circumvent it with specialized hardware and software.
6.4.3. Cyber Forensic Analysis

After the examination and data extraction has been completed, the next step is to perform analysis on the extracted
data. There are many tools available that can be helpful in the analysis of different types of data.
While using these tools, or performing manual reviews of data, the forensic specialist should be aware of the value
of system time and file times. Knowing when an event occurred, a file was created or changed, or an e-mail
was sent can be crucial to a forensic investigation; for example, this information can be used to reconstruct the timeline
of activities. Although this may seem like a simple task, it is often complicated by unintentional or intentional
discrepancies in time settings among systems. Knowing the time, date and time zone settings of a computer whose
data will be analysed can greatly assist the forensic specialist.

Timeframe analysis
Timeframe analysis can be helpful when assessing when events occurred on a computer, and can be used
to help associate usage of the machine with the individual(s) present at the time the events occurred. Two methods
that can be used are:
• Reviewing the time and date stamps found in the file system metadata (e.g. last modified, last accessed,
created, change of status) to link files of interest to the timeframes relevant to the investigation.
An example of this review would be using the last modified date and time to determine when the contents of a
file were last changed (see the sketch after this list).
• Reviewing the application and system logs that may be available. These can include error logs, installation logs,
connection logs, security logs, etc. For example, a security log review can show when a user name /
password combination was used to log in to a system.
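A minimal sketch of the first method, building a modification-time timeline from a directory of extracted files using file system metadata. The directory name is hypothetical, and a real examination would also record access and creation times.

    from datetime import datetime, timezone
    from pathlib import Path

    events = []
    for p in Path("evidence_copy").rglob("*"):  # hypothetical tree
        if p.is_file():
            st = p.stat()
            events.append((st.st_mtime, "modified", str(p)))

    # Print the events in chronological order, in UTC for consistency.
    for ts, action, path in sorted(events):
        when = datetime.fromtimestamp(ts, tz=timezone.utc)
        print(when.isoformat(), action, path)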

Network Time Protocol (NTP)


It is usually beneficial if an organization maintains its systems with accurate time stamping. The Network Time Protocol
(NTP) synchronizes the time on a computer with an atomic clock run by NIST or other organizations. Synchronization
helps ensure that each system maintains a reasonably accurate measurement of time.
NTP uses Coordinated Universal Time (UTC) to synchronize a computer's clock to within a millisecond, and even
to within a fraction of a millisecond. UTC time is distributed using a number of technologies, including radio and
satellite networks. Specialized receivers are available that obtain reference time from sources such as the Global
Positioning System (GPS) or government radio broadcasts in certain nations. It is not feasible or cost-effective, however,
to equip every device with one of these receivers. Instead, computers known as primary time servers are equipped with
receivers and use protocols such as NTP to synchronize the clock times of networked computers.
Degrees of separation from the UTC source are specified as strata. A radio clock (which receives true time from a
dedicated transmitter or satellite navigation system) is stratum 0; a computer that is directly connected to a radio
clock is stratum 1; a computer that receives its time from a stratum-1 computer is stratum 2, and so on.
The term NTP refers to both the protocol and the client / server programs that run on computers. The programs are
compiled as an NTP client, an NTP server, or both. In simple terms, an NTP client initiates an exchange of time
requests with a time server. From this exchange, the client can measure the link delay and its local
offset, and adjust its local clock to match the clock of the server.
As a rule, six exchanges over a span of about 5 to 10 minutes are needed to set the clock initially. Once synchronized,
the client updates the clock roughly every 10 minutes, normally with just a single exchange of messages. In addition to
client / server synchronization, NTP also supports broadcast synchronization of peer computer clocks. Unfortunately,
the NTP protocol can be abused for Denial of Service (DoS) attacks, because it will respond to a packet with a spoofed
source IP address and because at least one of its built-in commands sends a long reply to a short request.
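As an illustration of an NTP exchange, the sketch below, assuming the third-party ntplib package is installed, queries a public time server and reports the stratum and the local clock's offset; labs sometimes record such an offset when documenting an examination workstation's clock accuracy.

    import ntplib

    client = ntplib.NTPClient()
    response = client.request("pool.ntp.org", version=3)

    # Stratum of the server and the estimated local clock offset.
    print("stratum:", response.stratum)
    print(f"offset from server: {response.offset:+.3f} seconds")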
Forensic specialists can use special tools that generate forensic timelines based on event data. Such tools
typically give analysts a graphical interface for viewing and analysing sequences of events. A common feature of these
tools is the ability to group related events into meta-events, which helps analysts get a big-picture view of
events.
In many cases, forensic analysis involves not only data from files, but also the data from other sources, such as the OS
state, network traffic, or applications.

Data hiding analysis


Data can be concealed on a computer system. Data hiding analysis can be useful in detecting and recovering such
data and may indicate knowledge, ownership, or intent.
Methods that can be used include:
• Comparing file headers to the corresponding file extensions to detect mismatches. A mismatch
may mean that the user deliberately hid the data.
• Gaining access to all password-protected, encrypted, and compressed files; the use of such protection may
suggest an attempt to conceal the data from unauthorized users. The password itself may be as important as the
contents of the file.
• Steganography.
• Gaining access to a host-protected area (HPA). The presence of user-created data in an HPA may indicate
an attempt to conceal data.

Application and file analysis


Many of the programs and files found may contain information relevant to the investigation and provide insight into
the system's capabilities and the user's knowledge. The findings of this analysis may suggest additional steps to
be taken in the extraction and examination processes.
Some examples include:
• Reviewing file names for relevance and patterns.
• Examining file content.
• Identifying the number and type of operating system(s).
• Correlating the files to the installed applications.
• Considering relationships between files. For example, correlating Internet history to cache files and e-mail
files to e-mail attachments.
• Identifying unknown file types to determine their relevance to the investigation.
• Examining the user's default storage location(s) for the programs and the file structure of the drive to
decide whether the files have been placed in their default or alternative location(s).
• Examining user-configuration settings.

Ownership and possession


In some instances, it may be essential to identify the individual(s) who created, modified, or accessed a file. It may
also be important to determine ownership and knowledgeable possession of the questioned data. Elements of
knowledgeable possession may be based on the analysis described above, including one or more of the following
factors:
• Placing the subject at the computer at a date and time may help determine ownership and possession
(timeframe analysis).
• Files of interest may be in non-default locations (e.g., user-created directory named “child porn”) (application
and file analysis).
• The file name itself may have an evidential meaning and may indicate the contents of the file (application
and file analysis).
• Hidden data can suggest an intentional effort to escape detection (hidden data analysis).
• Once the passwords used to gain access to encrypted and password-protected files are retrieved, the
passwords themselves can suggest possession or ownership (hidden data analysis).
• The contents of a file can signify ownership or possession by providing user-specific details.

Analyzing OS Data
The following items describe the most common data sources used in network forensics:
• IDS Software. IDS data is often the starting point for the investigation of suspicious activity. IDSs not only
attempt to detect malicious network traffic at all TCP / IP layers, but also record many data fields
(and often raw packets) that can be useful in validating events and correlating them with other data sources.
Nonetheless, as noted above, IDS software does generate false positives, so IDS alerts should be
verified. The extent to which this can be done depends on the amount of data recorded about the
alert, the information available to the analyst, and the characteristics of the signature or anomaly detection
method that triggered the alert.
• SEM Software. Ideally, SEM software can be incredibly useful for forensics because it can automatically correlate
events among multiple data sources, then extract the relevant information and present it to the user. However, since
SEM software works by ingesting data from many other sources, the reliability of SEM depends on which data sources
are fed into it, how accurate each data source is, and how well the software normalizes the data and correlates events.
• NFAT Software. NFAT software is designed primarily to assist in the analysis of network traffic, so it is most useful if
an incident of interest has been captured. NFAT software typically provides features that support analysis, such as
traffic reconstruction and visualization.
• Firewalls, Routers, Proxy Servers, and Remote Access Servers. By itself, data from these sources is usually of
little value. Analyzing the data over time can indicate overall trends, such as an increase in blocked connection
attempts. However, since these sources usually record little detail about each event, the data offers little insight
into the nature of the events. Also, many events may be logged every day, and the sheer volume of data
can be daunting. The primary value of the data lies in correlating events reported by other sources. For
example, if a host is compromised and a network IDS sensor detected the attack, querying the firewall logs for events
involving the apparent attacking IP address may confirm that the attack penetrated the network and may indicate
other hosts that the attacker attempted to compromise. In addition, the address mappings (e.g., NAT)
performed by these devices are important for network forensics, because the apparent IP address of an attacker or
a victim may actually have been used by hundreds or thousands of hosts. Fortunately, analysts can typically consult
the logs to determine which internal address was in use.
• DHCP Servers. DHCP servers can usually be configured to log each IP address assignment and the associated MAC
address, along with a timestamp. This information can help analysts determine which host
performed an activity using a given IP address. Analysts should be mindful, however, of the possibility that perpetrators
inside an organization's network may have falsified their MAC addresses or IP addresses, a practice
known as spoofing.
• Packet Sniffers. Of all the network traffic data sources, packet sniffers can gather the most information about
network activity. However, sniffers can capture vast quantities of benign data, millions of packets or more,
and usually provide no indication of which packets may contain malicious activity. In most cases,
packet sniffers are best used to gather more detail on events that other devices or software have flagged
as potentially malicious. Some organizations record most or all packets for some period of time, so that when an incident
occurs, the raw network data is available for examination and analysis. Packet sniffer data is best reviewed with
a protocol analyzer, which interprets the data for the analyst based on knowledge of protocol specifications and
common implementations.
• Network Monitoring. Network monitoring software is helpful in identifying significant deviations from normal
traffic flows, such as those caused by DDoS attacks, during which hundreds or thousands of systems launch
simultaneous attacks against hosts or networks. Network monitoring software can document the impact of these
attacks on network bandwidth and availability, as well as providing information about the apparent targets.
Traffic flow data can also be helpful in investigating suspicious activity identified by other sources. For example,
it might indicate whether a communications pattern has occurred in the preceding days or weeks.
• ISP Records. Information from an ISP is primarily of value in tracing an attack back to its source, particularly when
the attack uses spoofed IP addresses.

Various tools and techniques can be used to support the examination process. Some of these tools were discussed
in the previous section on the extraction of data files; they can also be used for analysing the collected data files.
In addition, security applications, such as file integrity checkers and host IDSs, can be very helpful in identifying
malicious activity against OSs. For instance, file integrity checkers can be used to compute the message digests of
OS files and compare them against databases of known message digests to determine whether any files have been
compromised. If intrusion detection software is installed on the computer, it might contain logs that indicate the
actions performed against the OS.
Another issue analysts face is the examination of swap files and RAM dumps, which are large binary data files
containing unstructured data. Hex editors can be used to open these files and examine their contents; however,
manually trying to locate intelligible data in a large file with a hex editor can be a time-consuming process. Filtering
tools automate the examination of swap and RAM dump files by identifying text patterns and numerical values
that might represent phone numbers, people's names, e-mail addresses, Web addresses and other types of critical
information, as sketched below.
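Internally, such filtering amounts to pattern matching over raw bytes. A simplified sketch, scanning a dump for e-mail addresses and URLs; the dump file name is hypothetical, and a real tool would read the file in chunks rather than all at once.

    import re

    EMAIL = re.compile(rb"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
    URL = re.compile(rb"https?://[^\s\x00]{4,200}")

    with open("memory.dmp", "rb") as f:  # hypothetical RAM dump
        data = f.read()

    # Report each distinct hit once, decoded loosely for display.
    for pattern, label in ((EMAIL, "email"), (URL, "url")):
        for match in set(pattern.findall(data)):
            print(label, match.decode("ascii", errors="replace"))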
Analysts often want to gather additional information about a program running on a system, such as the process's
purpose and manufacturer. After obtaining a list of the processes currently running on a system, analysts can look
up each process name to obtain such information. However, users might change the names of programs to
conceal their functions, such as giving a Trojan program the name of a common benign system file.
Therefore, process name lookups should be performed only after verifying the identity of the process's files by
computing and comparing their message digests. Similar lookups can be performed on library files, such as DLLs on
Windows systems, to determine which libraries are loaded and what their typical purpose is.
The forensic specialist may collect many different types of OS data, including multiple file systems. Trying to sift
through each type of data to find relevant information can be a time-intensive process, hence it has been generally
found useful to identify a few data sources to review initially, and then find other likely sources of important
information based on that review. In addition, in many cases, analysis can involve data from other types of sources,
such as network traffic or applications.

Analyzing Network Traffic Data


When an event of interest has been identified, the forensic specialist assesses and extracts the relevant network traffic
data. Once that is done, the next step is to analyse the network traffic data with the goal of determining what has
happened and how the organization's systems and networks have been affected.
This process might be as simple as reviewing a few log entries on a single data source and determining that the event
was a false alarm, or as complex as sequentially examining and analysing dozens of sources (most of which might
contain no relevant data), manually correlating data among several sources, then analysing the collective data to
determine the probable intent and significance of the event. It is complicated and time-consuming work.
Although current tools (e.g., SEM software, NFAT software) can be helpful in gathering and presenting network
traffic data, such tools have rather limited analysis abilities and can be used effectively only by well-trained forensic
specialists.
For analysing network traffic data, the forensic specialist must be adept at the following:
• An in-depth understanding of the analysis tools
• A reasonably comprehensive knowledge of networking principles, common network and application protocols,
network and application security products, and network-based threats and attack methods
• Knowledge of the organization's environment, such as the network architecture and the IP addresses used
by critical assets (e.g., firewalls, publicly accessible servers)
• Knowledge of the applications and OSs used by the organization, and of the information that supports them
• An understanding of the organization's normal computing baseline, such as typical patterns of usage on systems
and networks across the enterprise
• An understanding of each of the network traffic data sources, as well as access to supporting materials, such as
intrusion detection signature documentation
• An understanding of the characteristics and relative value of each data source, so that the relevant data can be
located quickly
Most of these have been covered in other Units of this handbook. Let’s look at the Analysis Tools in greater detail.

Analysis Tools
Several open source and commercial tools exist for computer forensics investigation. A typical forensic analysis
includes a manual review of material on the media, e.g. reviewing the Windows registry for suspect information,
discovering and cracking passwords, performing keyword searches for topics related to the crime, and extracting
e-mail and pictures for review.

Fig.6.5: Analysis tools
Log File Analysis Tools

NAME | LICENSE | PLATFORM | DESCRIPTION
Splunk | Free; commercial enterprise edition | Windows | Indexes any machine data regardless of format or location: logs, clickstream data, configurations, sensor data, traps and alerts, change events. Data can be indexed from virtually any source, format or location.
log2timeline | Free | Windows (with ActiveState Perl installed), Linux and Mac OS X (10.5.7+ and 10.6+) | Framework for the automatic creation of a super timeline, and a tool to parse various log files and artefacts found on suspect systems (and supporting systems, such as network equipment) and produce a timeline that can be analyzed by forensic investigators/analysts.
Log Parser 2.2 | Free | Windows | A powerful, versatile tool that provides universal query access to text-based data such as log files, XML files and CSV files, as well as key data sources on the Windows operating system such as the Event Log, the Registry, the file system, and Active Directory.

Operating systems, such as Linux® and Windows®, generate log files to capture system events. Windows, for
example, provides application, security and system event logs.

Registry analysis tools

NAME | LICENSE | PLATFORM | DESCRIPTION
AccessData Registry Viewer | Proprietary | Windows | Allows an investigator to view the contents of Windows operating system registries.
Alien Registry Viewer | Proprietary | Windows | Similar to the RegEdit application included with Windows, but unlike RegEdit it works with standalone registry files. While RegEdit shows the contents of the live system registry, Alien Registry Viewer works with registry files copied from other computers.
Windows Registry Recovery | Free | Windows | Reads files containing Windows 9x, NT, 2K, XP and 2K3 registry hives. Extracts much useful information about the configuration and installation settings of the host machine. A registry hive can be exported into REGEDIT4 format, and any key's data can be saved to CSV.
Yet Another Registry Utility (yaru) | Free | Windows, Linux or Mac OS X | Shows allocated, but unused, key/value data space. Shows unallocated hive space. Report generation capability. Logging capability that records the user selections along with data values to a separate XML file for later review.
RegRipper | Open source | Windows and Linux | Written in Perl, RegRipper is a Windows Registry data extraction tool. It can be customized to the examiner's needs using the available plug-ins, or by writing plug-ins to suit specific needs.
RegLookup | Open source | Windows and Linux | Used to list the contents of a registry in a comma-separated format.
Registry Decoder | Open source | Windows and Linux | Tool for browsing, searching, analysis, and reporting of registry hive contents.

Web Browser analysis tools


From a forensic perspective, analyzing web browsing history, bookmarks, cached web pages and images, and stored
form values and passwords gives valuable insight into important evidence. By analyzing the browser history, suggested
sites, cookies, etc., a forensic analyst can readily determine the browsing activity of a suspect.

NAME | LICENSE | PLATFORM | DESCRIPTION
Internet Evidence Finder (IEF) Standard Edition | Proprietary | Windows | Can recover Internet-related evidence from a hard drive, live RAM, or files. Features include: social networking artefacts; instant messenger chat history; webmail; full web browser artefacts; P2P file sharing applications.
Pasco v1.0 | Free | Windows | Internet Explorer activity forensic analysis tool.
IEHistoryView | Free | Windows | Reads all information from the history file on a computer and displays the list of all URLs visited in the last few days. It also allows selecting one or more URL addresses and then removing them from the history file or saving them into a text, HTML or XML file. In addition, it allows viewing the visited URL list of other user profiles on a computer, and even accessing the visited URL list on a remote computer, as long as there is permission to access the history folder.
SkypeLogView v1.51 | Free | Windows | Reads the log files created by the Skype application and displays the details of incoming/outgoing calls, chat messages and file transfers made by the specified Skype account.
MozillaHistoryView v1.52 | Free | Windows | A small utility that reads the history data file (history.dat) of Firefox/Mozilla/Netscape web browsers and displays the list of all web pages visited in the last days. For each visited web page, the following information is displayed: URL, first visit date, last visit date, visit counter, referrer, title, and host name.
MyLastSearch | Free | Windows | Scans the cache and history files of four web browsers (IE, Firefox, Opera, and Chrome) and locates all search queries made with the most popular search engines (Google, Yahoo and MSN) and with popular social networking sites (Twitter, Facebook, MySpace).

File System analysis tools


File system forensic analysis focuses on the file system and the disk. The file system of a computer is where most files
are stored and where most evidence is found; it is also the most technically challenging part of the forensic analysis.
The following table lists file system analysis tools:

NAME | LICENSE | PLATFORM | DESCRIPTION
AccessData Forensic Toolkit (FTK) v5.2 | Proprietary | Windows | Developed by AccessData. Multi-purpose tool, commonly used to index acquired media.
EnCase Forensic v7.09 | Proprietary | Windows | A suite of digital forensics products by Guidance Software. Rapidly acquires and analyzes data from a wide variety of devices.
SANS Investigative Forensic Toolkit (SIFT) Workstation v2.14 | Free | Ubuntu | Developed by an international team of forensics experts, led by SANS Faculty Fellow Rob Lee. A multi-purpose forensic operating system that comes pre-configured with all the tools necessary to perform a detailed digital forensic examination. Cross-compatible between Linux and Windows.
Autopsy | Free | Windows | A digital forensics platform and graphical interface to The Sleuth Kit and other digital forensics tools.
mft2csv v2.0.0.13 | Open source | Windows | Takes an $MFT file as input, rips the information from all of its records and logs it to a CSV file.
analyzeMFT | Open source | Windows/Linux | Designed to fully parse the MFT file from an NTFS file system and present the results as accurately as possible in a format that allows further analysis with other tools.
The Sleuth Kit (TSK) | Open source | Windows, Linux, OS X, and Unix systems | A library and collection of command line tools that allow imaging and analysis of disk images.
Digital Forensics Framework (DFF) | Free and open source | Windows | Capabilities include: Windows and Linux OS forensics; recovery of hidden and deleted artefacts; reading standard digital forensics file formats; virtual machine disk reconstruction.
CAINE (Computer Aided INvestigative Environment) | Free | Linux | A GNU/Linux live distribution created as a digital forensics project. CAINE offers a complete forensic environment organized to integrate existing software tools as software modules and to provide a friendly graphical interface. The main design objectives that CAINE aims to guarantee are: an interoperable environment that supports the digital investigator during the four phases of the digital investigation; a user-friendly graphical interface; a semi-automated compilation of the final report.
X-Ways Forensics | Proprietary | Windows | Comprises all the general and specialist features known from WinHex, such as: disk cloning and imaging; the ability to read partitioning and file system structures inside raw (.dd) image files and ISO, VHD and VMDK images; complete access to disks, RAIDs, and images more than 2 TB in size (more than 2^32 sectors) with sector sizes up to 8 KB; built-in interpretation of JBOD, RAID 0, RAID 5, RAID 5EE, and RAID 6 systems (including Linux software RAIDs), Windows dynamic disks, and LVM2; automatic identification of lost/deleted partitions.
WinHex | Proprietary | Windows | A universal hexadecimal editor.
CyberCheck Suite | Proprietary | Windows | A cyber forensics tool for data recovery and analysis of digital evidence. CyberCheck uses images created by TrueBack, EnCase or raw images, with Indian language support. Recovers data from deleted files and from re-formatted or re-partitioned storage media. Recovers data from unallocated clusters, lost clusters, file/partition/disk/MBR slack and swap files. Supports FAT12/16/32, NTFS, EXT2FS, EXT3FS and UFS.

Memory Analysis Tools


With the advent of Microsoft's BitLocker, along with the increased sophistication of malware, viruses and rootkits,
volatile memory forensics has become more important to forensic analysts and incident responders.
Many important forensic artefacts can never be obtained without acquiring and analyzing volatile memory, such as:
running processes and services; unpacked/decrypted versions of protected programs; system information (e.g. time
elapsed since last reboot); logged-in users; registry hives in use by programs; open network connections and the ARP
cache; remnants of chats and communications on social networks; recent web browsing activity, including IE InPrivate
mode and similar privacy-oriented modes in other web browsers; recent communications via webmail systems; running
malware/Trojans; decryption keys for encrypted volumes mounted at the time of the capture; and information from
cloud services.
The following table lists Memory analysis tools:

NAME | LICENSE | PLATFORM | DESCRIPTION
Volatility Framework | Open source | Windows/Linux | The Volatility Framework is a completely open collection of tools, implemented in Python, that extracts digital artefacts from volatile memory (RAM) samples. Volatility comes with several standard plug-ins, which use various techniques to extract artefacts from RAM samples, including: running processes; open network sockets; open network connections; DLLs loaded for each process; open files for each process. Volatility also supports extracting artefacts from Windows hibernation files, Windows crash dump files, etc.
Belkasoft Evidence Centre 2014 | Proprietary | Windows | Makes it easy for an investigator to search, analyze, store and share digital evidence found on a hard drive or in the computer's volatile memory. The toolkit extracts digital evidence from multiple sources by analyzing hard drives, volatile memory dumps, etc.
pdgmail | Open source | Windows/Linux | Python script to extract Gmail artefacts from memory images. Works with any memory image.

Mobile Analysis Tools

NAME | LICENSE | DESCRIPTION
Teel Tech Chip Kit for Mobile Phone Chip-Off Forensics (Hardware) | Proprietary | A mobile forensic hardware kit for chip-off forensics: an advanced digital data extraction and analysis technique that involves physically removing the flash memory chip(s) from a subject device and then acquiring the raw data using specialized equipment.
Oxygen Forensic for Mobiles | Proprietary | A mobile forensic tool which helps examine data from mobile devices, cloud services, drones and IoT devices.
Paraben's Device Seizure Field Kit | Proprietary | A completely portable hand-held forensic solution. It helps forensic experts perform a comprehensive digital forensic analysis of over 2,200 models of cell phones, PDAs, and GPS devices, anywhere, anytime.
C-DAC's Mobile Forensic Tool | Proprietary | C-DAC's MobileCheck is a digital forensics solution for the acquisition and analysis of mobile phones, smartphones and Personal Digital Assistants (PDAs).
Mobilyze | Proprietary | Helps acquire, view and preserve the data held on any iOS or Android device, covering the billions of smart devices on the planet.
i9 – CDR Analysis Software from icube Solution | Proprietary | A tool used for the analysis of Mobile, IMEI and Tower Call Detail Records (CDRs) by trainees during cybercrime investigation training.
Visualization tools
These tools present security event data in a graphical format. This is most commonly used to visually represent
network traffic flows, and can be very helpful in troubleshooting operational problems and in detecting misuse.
For example, attackers might use covert channels, using protocols in unintended ways, to secretly communicate
information (e.g., setting certain values in network protocol headers or application payloads).
The use of covert channels is usually difficult to detect, but one useful approach is to recognize deviations from the
expected network traffic flows. Visualization tools are also part of NFAT programs.
• Some visualization software can perform traffic reconstruction: using timestamp and sequential data fields,
the software can evaluate the sequence of events and graphically show how packets traversed the organization's
networks.
• Some visualization tools can also be used to display other types of security event data. For example, the forensic
specialist could import intrusion detection records into a visualization tool, which would then display the data
according to a range of different characteristics, such as the source or destination IP address or port. One could
then suppress the display of known benign activity so that only unknown events are shown.
Importing data into the tool and viewing it is typically fairly straightforward, but learning how to use the tool effectively
to reduce large datasets to a few events of interest takes considerable effort. Traffic reconstruction can also be done
by protocol analyzers. While these tools typically lack visualization capabilities, they can convert individual packets
into data streams and provide a sequential context for activities.
6.5 MOBILE AND CDR FORENSICS

6.5.1 Understanding the Indian Scenario


India is a huge country with 1.3 billion people. As per the latest statistics, there are 0.8 billion mobile phone subscribers
in India. With the cost of calls as low as 1 paisa per second, the possession and usage of mobile phones have reached
an all-time high.
From an investigative perspective, this bodes well for the Investigating Officer (IO). Just about every criminal has at
least one cell phone, some have many. What this means is that cell phones act as silent witnesses to most crimes.
Most investigations today start out by looking for the proverbial needle in a haystack. CDR analysis provides a great
way to start narrowing down the case and identifying suspects for the crime.
All service providers produce large volumes of call detail records for the purposes of billing and other commercial
requirements. Most of this data is captured in an automated way and is available to the Law Enforcement subject to
their meeting the regulatory/ legal requirements for requesting the data.

Network Operator Data


• The Network Operators can provide detailed data on calls made/received, message traffic, data transferred and
connection location/timing
• The HLR can provide:
• Customer name and address
• Billing name and address (if other than customer)
• User name and address (if other than customer)
• Billing account details
• Telephone Number (MSISDN)
• IMSI
• SIM serial number (as printed on the SIM-card)
• PIN/PUK for the SIM
• Subscriber Services allowed
The service provider can also provide a detailed Cell ID chart specifying the latitude and longitude and address where
each tower is located.

The Call Data Records (CDR’s)


• Produced in the originating MSC and transferred to the OMC
• Generated for every call and every message
Each CDR may contain:
• Originating MSISDN
• Terminating MSISDN
• Originating and terminating IMEI
• Duration of call
• Type of Service
• Initial serving Base Station (BTS)


• Final serving Base Station (BTS)
• Originating and terminating IMSI

Type of Data from various CDRs


• Kind of Data Available from Single Number CDR’s
• Important callers of a number (by frequency and duration)

Fig 6.6: Screenshot of data from Single Number CDRs

• Details of those common callers


• Common locations of a number (Night halts, place of work etc.)
• Different IMEIs used by a number
• First and last calls of a day
• Locations between specific period
• Movements and route plotting on maps (Google Maps, MapPoint, ArcGIS, etc.)
• Plotting call locations on maps.
• Regularly called numbers and locations
• Timeline of calls made
• Make of device used to make / receive calls
• Important countries called
• Target numbers of IMEI
• Period of using target numbers in IMEI
• Common callers of target numbers in IMEI
• Common locations of IMEI
• Kind of Data Available from Multiple Number CDR’s


• Relation between multiple numbers
• Sandwich Call Analysis
• Frequency of calls of multiple numbers
• Location of calls of multiple numbers
• Mixed calls of multiple numbers
• Common numbers in multiple CDRs
• Common IMEI in multiple CDRs
• Common Location in multiple CDRs
• Identification of companion/gang member numbers

Kind of Data Available from Tower Dump CDR’s

Fig 6.7: Screenshot of Data from Tower Dump CDRs


• Relation of calls between different towers
• Cross calling between towers
• Numbers present in a tower at a given time
• Numbers common in different towers
• IMEI common in different towers
• Other party common in different towers
• Searching multiple numbers in tower
• Create tower groups on location or time basis for analysis
• Finding out common numbers
• Common other party
• Common time in groups of towers or individual towers


• Finding relation of calls between different tower groups
• Day-wise analysis of towers
• Time-wise analysis of towers
• Timeline analysis of towers
• Geo location of Cell Id’s

Fig 6.8: Sample Images of CDRs of different formats

Fig 6.9: CDR Sample No. 1
Fig 6.10: CDR Sample No. 2

Fig 6.11: CDR Sample No. 3.
Fig 6.12: CDR Sample No. 4.

Analysis Patterns used to analyze the CDR


Analysis patterns are simply the forensic techniques an experienced analyst applies to the CDRs stored
by a mobile network operator. CDRs can be analyzed to investigate several facts pertaining to the suspect, such as
the company, circle or associates of the suspect, the suspect's residential and work locations, the tracking of the
suspect's movements, and where the suspect may have been at a given time.
Listed below are proven and widely accepted techniques used to analyze CDRs.
Frequency of Calls: Using this technique, the closely associated numbers that are considered the base numbers for
analysis of the suspected number can be generated. The technique is based on the frequency of calls between the
suspected number and other numbers: the higher the frequency of calls between the suspected number and another
number, the more closely that other number is assumed to be related to the suspect. Apart from the frequency, the
duration of the calls is also taken into consideration while generating base numbers. A sketch of this kind of analysis
follows.
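A hedged sketch of frequency-of-calls analysis over a CDR exported as CSV. The column names (caller, callee, duration_sec) and the suspect's number are assumptions, since real CDR layouts vary between operators.

    import csv
    from collections import Counter

    SUSPECT = "9800000001"  # hypothetical suspect MSISDN
    freq, talk_time = Counter(), Counter()

    with open("suspect_cdr.csv", newline="") as f:  # hypothetical export
        for row in csv.DictReader(f):
            other = row["callee"] if row["caller"] == SUSPECT else row["caller"]
            freq[other] += 1
            talk_time[other] += int(row["duration_sec"])

    # Candidate base numbers: ranked by call count, with total duration
    # shown alongside as a secondary indicator.
    for number, count in freq.most_common(10):
        print(number, count, "calls,", talk_time[number], "seconds")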
Local, STD and ISD Calls: By observing the types of calls made from or received by the suspected number, the reach
of the suspect can be predicted. For example, if the suspect's CDR contains calls only from the city of residence or
neighboring places, it can be assumed that the suspect might be a small-time businessman, operative or criminal; if
the suspect's CDR also contains calls from other states, it suggests the suspect has a larger business or is involved in
organized crime; and if the CDR contains international calls, the suspect may be running a vast business or be part of
an international crime syndicate.
Residential Location: Studying the time stamps of the calls listed in the CDR can reveal the suspect's residential area. Calls made or received late at night or during the early morning hours are usually made from the residence, so studying the cell sites serving the calls between late night and early morning indicates where the suspect lives.
Working Location: Similarly, the working location can be found by studying the cell sites corresponding to calls made during typical office hours, i.e. between 9:00 am and 5:00 pm or 7:00 pm (a sketch of both location techniques follows).
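Both location techniques reduce to bucketing calls by hour of day and counting serving cells. A minimal sketch, assuming the CDR export has hypothetical timestamp and cell_id columns:

```python
import pandas as pd

# Assumed columns: timestamp (parseable) and cell_id of the serving tower.
cdr = pd.read_csv("suspect_cdr.csv", parse_dates=["timestamp"])
hour = cdr["timestamp"].dt.hour

# Late-night / early-morning calls are usually served by cells near home.
night = cdr[(hour >= 22) | (hour <= 6)]
print("Likely residential cells:")
print(night["cell_id"].value_counts().head(3))

# Office-hour calls point to the cells near the working location.
office = cdr[(hour >= 9) & (hour <= 19)]
print("Likely work cells:")
print(office["cell_id"].value_counts().head(3))
```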


Repeated or Typical Calls Pattern: This pattern has no standard procedure because it depends mostly on the case and the type of suspect. It is applied during analysis to identify characteristic call sequences. For example, the suspect may receive a call from a particular number and, immediately after receiving it, call another particular number or group of numbers; or the pattern may run in reverse, with the suspect initiating the sequence by first calling a particular number and then immediately receiving a call from a particular number or group of numbers. The IO can keep a watch on the numbers generated by this pattern to get further leads; a sketch of this search follows.
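This trigger-and-relay pattern can be searched for mechanically by pairing each incoming call with outgoing calls that follow within a short window. A sketch under assumed column names (direction holding 'IN'/'OUT', timestamp, other_party); the 5-minute window is an assumption to be tuned per case:

```python
import pandas as pd

cdr = pd.read_csv("suspect_cdr.csv", parse_dates=["timestamp"])
cdr = cdr.sort_values("timestamp")

WINDOW = pd.Timedelta(minutes=5)  # assumed threshold for "immediately after"

incoming = cdr[cdr["direction"] == "IN"]
outgoing = cdr[cdr["direction"] == "OUT"]

# Pair each incoming (trigger) call with outgoing calls inside the window;
# pairs that recur across many days are the leads worth watching.
for _, trig in incoming.iterrows():
    follow = outgoing[(outgoing["timestamp"] > trig["timestamp"]) &
                      (outgoing["timestamp"] <= trig["timestamp"] + WINDOW)]
    for _, out in follow.iterrows():
        print(f"{trig['other_party']} -> suspect -> {out['other_party']} "
              f"at {trig['timestamp']}")
```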
Identifying Groups: When a crime takes place at a certain location and there is no clue about the suspects, the tower data from all the towers covering the crime scene is collected. The first task is then to identify groups within that data. Such crimes are usually committed by a group of persons, and in the vast amounts of tower data there are often links or numbers that remain hidden and are not easily noticeable; such numbers frequently turn out to belong to the criminal or to be directly linked to the criminal. Once groups are formed, all other irrelevant numbers are eliminated and only the numbers from the identified groups and their related numbers remain, which enables the IO to spot hidden numbers or links that might otherwise have gone unnoticed.
Groups are identified by studying the repeated calls between certain numbers over a given period. Once the groups have been identified, the IO can eliminate all other irrelevant numbers from the tower data, concentrate the investigation on the numbers from the identified groups, and apply the previously mentioned techniques to those numbers to get closer to the criminal.
Because tower data comes in huge volumes, from several thousand to several crores of records, it is essential for the IO to be patient and spend a good amount of time identifying the groups from the tower data.
Linkage: Once the groups are identified, the next step is to work out the hierarchical structure of each group in order to infer the boss and the operatives. To build this hierarchy, the successive links between the numbers of a group must be identified based on the frequency and timing of the calls between them; a sketch of this group-and-hierarchy analysis follows.
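Group identification and linkage map naturally onto a call graph: numbers are nodes, repeated calls are weighted edges, connected components are candidate groups, and the most-connected number in a group is a candidate for its head. A minimal sketch using pandas and networkx; the column names and the repeat-call threshold of 3 are assumptions:

```python
import pandas as pd
import networkx as nx

tower = pd.read_csv("tower_dump.csv")  # assumed columns: caller, callee

# Count calls per pair and keep only repeated contacts; call direction is
# ignored for simplicity, and the threshold of 3 is a tuning knob.
pairs = tower.groupby(["caller", "callee"]).size().reset_index(name="n")
pairs = pairs[pairs["n"] >= 3]

G = nx.Graph()
for _, row in pairs.iterrows():
    G.add_edge(row["caller"], row["callee"], weight=row["n"])

# Each connected component is a candidate group; within a group, the
# number with the highest weighted degree is a candidate for the boss.
for i, members in enumerate(nx.connected_components(G), start=1):
    sub = G.subgraph(members)
    ranked = sorted(sub.degree(weight="weight"), key=lambda d: -d[1])
    print(f"Group {i}: {ranked}")
```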

Legalities
If the investigating officer needs to produce the analyzed CDRs as evidence in a court of law, he can do so under Section 65B (Admissibility of electronic records) of The Indian Evidence Act, 1872.
65B. Admissibility of electronic records: -
1. Notwithstanding anything contained in this Act, any information contained in an electronic record which is printed on paper, or stored, recorded or copied in optical or magnetic media produced by a computer (hereinafter referred to as the computer output) shall be deemed to be a document if the conditions stated in this section are satisfied in relation to the information and the computer.
2. The conditions referred to in sub-section (1) in respect of a computer output shall be the following: —
• The computer output containing the information was produced by the computer during the period over which the computer was used regularly to store or process information for the purposes of any activities regularly carried on over that period by the person having lawful control over the use of the computer.
• Throughout the material part of that period the computer was operating properly or, if not, any period during which it was not operating properly or was out of operation did not affect the electronic record or the accuracy of its contents, and
• The information contained in the electronic record reproduces or is derived from information fed into the computer in the ordinary course of those activities.
3. Where, over any period, the task of storing or processing information for the purposes of any activities regularly carried on over that period, as referred to in clause (a) of sub-section (2), was regularly performed by computers, whether—
• By a combination of computers operating over that period, or
• By different computers operating in succession over that period, or


• By different combinations of computers operating in succession over that period, or
• In any other manner involving, in any order, the successive operation of one or more computers and one or more combinations of computers over that period, all the computers used for that purpose over that period shall be treated for the purposes of this section as constituting a single computer, and references in this section to a computer shall be construed accordingly.
4. In any proceedings where it is desired to give a statement in evidence by virtue of this section, a certificate doing any of the following things —
• Identifying the electronic record containing the statement and describing the manner in which it was produced,
• Giving such particulars of any device involved in the production of that electronic record as may be appropriate to show that the electronic record was produced by a computer,
• Dealing with any of the matters to which the conditions in sub-section (2) relate,
and purporting to be signed by a person occupying a responsible official position in relation to the operation of the relevant device or the management of the relevant activities (whichever is appropriate), shall be evidence of any matter stated in the certificate; and for the purposes of this sub-section it shall be sufficient for a matter to be stated to the best of the knowledge and belief of the person stating it.
5. For the purposes of this section, —
• Information shall be taken to be supplied to a computer if it is supplied in any appropriate form, whether directly or (with or without human intervention) by means of any appropriate equipment.
• Where, in the course of activities carried on by any official, information is supplied with a view to its being stored or processed for the purposes of those activities by a computer operated otherwise than in the course of those activities, that information, if duly supplied to that computer, shall be taken to be supplied to it in the course of those activities.
• A computer output shall be taken to have been produced by a computer whether it was produced directly or (with or without human intervention) by means of any appropriate equipment.
Explanation — For the purposes of this section, any reference to information being derived from other information is a reference to its being derived by calculation, comparison or any other process.

Case Studies
Listed below are two case studies that demonstrate the effectiveness of CDR analysis in crime-busting.
Case Brief: A businessman belonging to a reputed family was murdered on 10th Oct 2008 around 7 pm, while returning to his house in his Maruti-800 car after closing his shop. His house was on the outskirts of the city, about 6 km from the shop. His wallet and other expensive items were recovered from the car, eliminating the possibility of murder for financial gain. There were no eye-witnesses and no evidence at the site of the incident. Despite hard efforts by the police, the murder remained a mystery for almost two weeks.
Methodology Followed: The police believed that the murder was planned and that the gang had divided itself into two groups: one group was present close to the shop and the other at the site of the incident. The group near the shop followed the victim and relayed his exact movements to the group at the site of the incident.
Tower data for the two sites, the shop (say A for reference) and the site of the incident (say B for reference), was requested; the records numbered in lakhs and the obtained data was imported into the C5 CDR Analyzer. Next, the data was processed as per the following pattern:
1. Frequently called numbers between the two sites A and B in the specified time frame were listed.
2. Numbers moving from site A to site B in the specified time frame were filtered (see the sketch after these steps).


3. Altogether, eighteen numbers were filtered out of these lakhs of records: numbers that were present at site A, frequently called numbers at site B, and showed movement to site B around the time of the murder.
4. The search was further refined by narrowing the specified time window and filtering on call duration, and a final list of 8 suspects was prepared.
5. In the meantime, the proclaimed offenders' database was checked for history-sheeters specializing in contract killing. Three persons were identified.
6. Ownership details of the eight suspect numbers were requested from the service providers; this eliminated 6 numbers from the list, leaving the police with two numbers whose registration details were fraudulent.
7. From the tower data, the IMEIs of the two numbers were identified, and it was checked whether any other SIM card was being used in these IMEIs (instruments).
8. Two other numbers were identified that had been used in these IMEIs.
9. One of those numbers matched one of the contract killers identified from the list of proclaimed offenders.
10. This clearly suggested that the suspect had procured a new SIM under an unidentified name and used it while committing the crime.
11. Field investigation confirmed the physical presence of the suspect in that area around the said time. The police subsequently arrested him, and further interrogation revealed the identity of his accomplice.
12. The murder mystery was solved: the suspect admitted to the offence and disclosed the involvement of another person. Further interrogation also yielded valuable information that proved handy in solving other important cases.
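The tower-dump filtering of steps 1-3 can be sketched with simple set operations. The column names, file names and time windows below are assumptions for illustration:

```python
import pandas as pd

site_a = pd.read_csv("tower_dump_site_a.csv", parse_dates=["timestamp"])
site_b = pd.read_csv("tower_dump_site_b.csv", parse_dates=["timestamp"])

murder = pd.Timestamp("2008-10-10 19:00")  # approximate time of the crime
window = pd.Timedelta(hours=2)             # assumed window around it

# Numbers active at site A before the murder that called numbers seen at B.
before_a = site_a[site_a["timestamp"].between(murder - window, murder)]
b_numbers = set(site_b["number"])
cross_callers = set(
    before_a[before_a["other_party"].isin(b_numbers)]["number"])

# Of those, keep the ones that themselves appear at site B around the time.
at_b = set(site_b[site_b["timestamp"].between(murder - window,
                                              murder + window)]["number"])
suspects = cross_callers & at_b  # moved from A to B and cross-called
print(sorted(suspects))
```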

Case Brief 2: A person named Raja was murdered and burnt in a remote area of Kotter taluka of Tucker district on 14/04/2009 between 1 and 4 p.m., when he had gone to Husker in his auto-rickshaw to buy a new SIM card for himself. On completing the crime scene investigation, the investigating team found no eye-witnesses or substantial evidence; the only thing recovered was an empty Vodafone SIM card box from Raja's auto-rickshaw. Further investigation revealed that the victim had enmity with a person known as Gopal Gowda, as did a friend known as Nagaraj; having a common enemy, the victim and Nagaraj had become good friends. On suspicion, Gopal Gowda was summoned for questioning, but this went in vain as there was no substantial evidence or proof at that point of time. The investigating team was thus left in the dark, with no eye-witnesses or proof of who had committed the crime; the only ray of hope was the empty Vodafone SIM card box recovered from the victim's auto-rickshaw. So the IO decided to go for CDR analysis of the victim, Gopal Gowda and Nagaraj.
Methodology Followed: The investigative team believed that the crime was committed by a gang, because the victim was first abducted from Husker and then brought to Kotter, where he was killed and burnt. The following pattern was adopted to process the required CDRs using the CDR Analyzer and find the missing links.
1. The CDR of Gopal Gowda was processed to check whether he was present at the location where the crime was committed, or had contacted anybody at the time the victim was murdered. The search yielded no result: at the time of the crime he was at a location far from the crime scene, and he had no contact with anybody at that time. His location at the time of the crime was identified using a provision in the software that finds the location from the Tower-ID.
2. The CDR of Nagaraj was processed to check whether the victim had called him after purchasing the new SIM card, as the two were good friends. Since the investigative team did not know what number the victim was using, the only option was to search for a new number in the CDR.
3. A new number was found in Nagaraj's CDR, from which a call had been made to Nagaraj at around 11 a.m. on the day the victim was murdered. Using the C5 CDR Analyzer's utility to check the service provider of a number, it was confirmed that the number belonged to Vodafone. From this the investigative team guessed that this was probably the new number the victim had purchased, but confirmation was needed to make sure it was the victim's number.


4. SDR details of the new number were requested from Vodafone, and the details received showed that the number was registered to the victim. This made it clear that the victim was using the cell phone when he was murdered; since no mobile phone had been found at the crime scene, the team now believed that the gang that committed the crime had taken the phone with them.
5. The IMEI number associated with the victim's mobile number was fetched using the software and kept under watch with Vodafone to check whether the victim's mobile phone was being used with any other number.
6. Details received from the service provider showed that the same mobile phone was being used by 4 or 5 Airtel numbers.
7. Using the Geo-Analysis module of the software, the location of those Airtel numbers was identified; it turned out to be a place known as Idebur near Madhikere taluka of Tucker district.
8. SDR details of those Airtel numbers were sought, and using the details received, the investigative team raided the addresses and apprehended five persons.
9. During interrogation the apprehended persons revealed that they had murdered and burnt Raja, and that Gopal Gowda had contracted them to commit the crime for Rs. 5000.

Tools Available in the Market for CDR Analysis


CDR analysis can be done using spreadsheets like MS Excel, which comes with Microsoft Office, but spreadsheet analysis must be performed manually, which takes a considerable amount of time and patience and leaves more room for human error; such errors slow down the investigation and can even be misleading. To avoid them and to perform swift analysis, it is advisable to use the automated CDR analysis tools readily available in the market; these tools can perform speedy and accurate CDR analysis and are also able to generate intelligent reports.

Challenges in CDR Analysis


There are many problems conventionally associated with CDR analysis:
• There is a huge variety of formats which keep changing on a regular basis and differ across service providers, circles and states.
• Each service provider uses different date formats, which can be a cause of confusion (a normalization sketch follows this list).
• Each service provider uses different codes for the different types of calls. At times these are not decipherable without the help of the service provider.
• The data is provided in flat-file format and can run to many GB in size, especially in the case of tower dumps in big cities.
• SDR databases can be many GB in size and may contain billions of records; searching for data in them can be very time-consuming.
• The usual method of analysis in MS Excel does not cope well with such large volumes of data.
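The date-format problem in particular is usually handled by a normalization pass before any analysis. A minimal sketch; the format list is an assumption and would be extended per provider:

```python
from datetime import datetime

# Candidate formats differ by provider and circle; this list is an
# assumption and would grow as new export formats are encountered.
FORMATS = ["%d/%m/%Y %H:%M:%S", "%Y-%m-%d %H:%M:%S",
           "%d-%b-%y %H.%M.%S", "%Y%m%d%H%M%S"]

def parse_cdr_timestamp(raw: str) -> datetime:
    """Try each known provider format until one fits."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt)
        except ValueError:
            continue
    raise ValueError(f"Unknown CDR timestamp format: {raw!r}")

print(parse_cdr_timestamp("10/10/2008 19:02:31"))
print(parse_cdr_timestamp("20081010190231"))
```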


SUMMARY

• Cyber Forensics is a discipline that incorporates legal and computer science elements to capture and interpret
data from operating systems, networks, wireless communications and storage devices in a manner that is
admissible in court as evidence.
• The science of collecting, storing and recording evidence from modern electronic storage devices, such as
computers, PDAs, digital cameras, cell phones and various memory storage devices, is also known as 'Digital
forensics' and/or 'Computer and network forensics'.
• Cyber forensics includes preservation of the integrity of evidence, identification of evidence, extraction of data,
interpreting the data and documentation related to evidence analysis.
• The different forms of cyber-forensic techniques are disk forensics, memory forensics, network forensics,
computer forensics, and internet forensics.
• Disk forensics is the science of extracting forensic information from digital storage media like Hard disk, USB
devices, Firewire devices, CD, DVD, Flash drives, Floppy disks, etc.
• Memory forensics provides insights about the runtime system activity such as account credentials, chat
messages, running processes, injected code fragments, open network connections, etc.
• Network forensics consists of observing, documenting and analyzing network activities to identify the origins of intrusions, breaches or other inappropriate incidents.
• Mobile forensics is used to recover digital evidence or data from a mobile device which could be a cell phone,
smartphone, PDA devices, GPS devices and tablet computers.
• Internet forensics consists of the extraction, analysis and identification of evidence related to the user’s online
activities.
• The key elements in the process of computer forensics are readiness for the tasks, evaluation for risk analysis, collection of evidence, analysis of relevant information, presentation of the evidence in accordance with the findings, and review of the situation.
• A Computer Forensic Investigator combines their computer science background with their forensic skills to
recover information from computers and storage devices.
• Forensic analysis is required in situations where there is a suspicion that electronic data may have been lost,
misappropriated or otherwise improperly handled.
• Cyber forensics procedure involves preparation, planning before the incident and developing an incident
response plan.
• The Chain of Custody essentially records the way in which the objects obtained for investigation are protected, transported and checked, confirming that they have been handled properly. The custody chain demonstrates 'trust' to the courts and the client that the media has not been tampered with.
• Digital evidence is an integral element in establishing motive, mode and procedure in computer-related crimes, and it is critical in many internal investigations where an entity seeks to reduce risk by reviewing internal processes.
• The process of digital forensics involves the collection of data, proper examination, and thorough analysis
using justifiable methods and reporting the significant findings.
• A search warrant is a written order provided by a judge directing the law enforcement officer to search for a particular piece of evidence at a specific location.


• A Qualified Forensic Duplicate is a file that contains every bit of source information in a raw bitstream format but may be stored in an altered form. For example, empty sectors might be compressed, or the file might contain embedded hashes of the drive.
• A hash algorithm is used by the forensic specialist to compute a fingerprint of the evidence using a one-way mathematical formula.
• Hardware mirroring is done by using hardware duplicators that take a hard drive and mirror it on another hard
drive.
• The investigation stage includes defining the who, what, when, where, how and why surrounding an incident.
• The evidence can be divided into three categories, i.e. host-based evidence, network-based evidence and other evidence.
• The Registry is the nucleus of the Windows OS. It is a hierarchical database which stores the configuration settings and options necessary for running applications and commands.
• NTP, or Network Time Protocol, uses Coordinated Universal Time (UTC) to synchronize device clocks to within a millisecond, and often to within a fraction of a millisecond.


KNOWLEDGE CHECK

Q.1. What are the types of digital data that can serve as digital evidence?

Q.2. List the classifications of Cyber Forensics by filling the following blanks.
i. D_ _ _ FORENSICS
ii. N_ _W_ _ _ FORENSICS
iii. W_ _ _ _ _ _S FORENSICS
iv. DATA_ _ _ _ FORENSICS
v. M_ _ _ _E DEVICE FORENSICS
vi. G_S FORENSICS
vii. E_ _ _L FORENSICS
viii. MEM_ _ _ FORENSICS

Q.3. State the importance of a good response toolkit. Describe briefly the procedures to create a first response
toolkit.


Q.4. Which of the following are true/false in the context of securing & evaluating an electronic crime scene?
The investigator should
a. Put together all the systems including the affected ones True/False
b. Establish a security perimeter to see if the offenders are still present True/False
c. Protect perishable data such as pagers & caller ID boxes True/False
d. Secure the telephone lines and allow them to operate normally True/False
e. Protect physical evidence or latent fingerprints that may be found on keyboards,
mice, diskettes and CDs True/False

Q.5. Complete the following sentences


a. Time period for which an organization’s systems are non-functional during an investigation is called __
_________________________________
b. Containment decision means ___________________________________________________________________________
c. Search warrant is a ______________________________________________________________________________________
d. Data on a live system that is lost after a computer is powered down or due to the passage of time is
called ___________________________________________________________________________________________________

Q.6. Match the following tables

1. Documentation Tools A. Evidence tape


Evidence bags
Antistatic bags
Antistatic bubble-wrap
2. Disassembly and removal tools B. Cable tags
Indelible felt-tip markers
Stick on labels
3. Package & Transport Supplies C. Paraben forensic hardware
Digital intelligence forensic hardware
Tableau hardware accelerator
WiebeTech forensic hardware tools
4. Software Tools D. Flat-head & Phillips-head screwdrivers
Hex-nut drivers
Secure-bit drivers
Standard pliers
5. Hardware Tools E. DIBS mobile forensics workstation
AccessData’s Ultimate Toolkit
Teel Technologies SIM tools

Q.7. What is ‘Hash’? Explain briefly, the functions of Hash.


Q.8. Complete the steps for creating an image of a disk using the dd tool:
Step1: Use dd to zeroize a 320 GB USB drive. This renders the drive sterile, in a pristine state.

Step2: __________________________________________________________________________________________________________________
__________________________________________________________________________________________________________________

Step3: __________________________________________________________________________________________________________________
__________________________________________________________________________________________________________________

Step4: To confirm that the drive has been zeroized dump the contents using xxd.

Step5: Boot the Helix CD on the target/compromised system and plug in the USB media. Then create an EXT2 file system
using fdisk and mke2fs.

Step6: __________________________________________________________________________________________________________________
__________________________________________________________________________________________________________________

Step7: __________________________________________________________________________________________________________________
__________________________________________________________________________________________________________________

Step8: After that, the bit-by-bit image creation can start. Start by creating a cryptographic fingerprint of the original
disk using MD5. Then run dd with the input source being /dev/sda and the output being a file named [Link].
Another useful option is conv=sync,noerror, which avoids stopping the image creation when an unreadable
sector is found. Finally, create the fingerprint of the created image, verify that both fingerprints match, and unmount
the drive.
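The fingerprint-and-verify part of Step 8 can be sketched in Python with hashlib; the image file name below is a placeholder for the output file mentioned above:

```python
import hashlib

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so multi-GB images fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

original = md5_of("/dev/sda")       # fingerprint of the source disk
image = md5_of("disk_image.dd")     # placeholder name for the dd output
assert original == image, "Image does not match the original disk!"
```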

Q.9. Match the duplication software with its function in the following table:

Duplication software Functions of the software

1. UNIX ‘dd’ A. Expensive Windows-based forensic suite which integrates all forensic investigation & generates a qualified forensic duplicate

2. EnCase B. Small software program which is mounted on a DOS boot disk & provides options on the type of duplicate, a true forensic duplicate or a mirror
3. Safeback C. Duplicates the image by mounting the drive in UNIX

ABBREVIATION FULL FORM ABBREVIATION FULL FORM

AAA Authentication, Authorization and Accounting EBCDIC Extended Binary Coded Decimal Interchange
ACLs Access Control Lists Code
AD Active Directory ECC Elliptic-Curve Cryptography
AES Advanced Encryption Standard EP Equivalence Partitioning
AH Authentication Header ERP Enterprise Resource Planning
ALM Application Lifecycle Management ESAPI Enterprise Security API
API Application Program Interface ESP Encapsulating Security Payload
AppSec Application Security EXT2 Second Extended File System
ARP Address Resolution Protocol FAT File Allocation Table
AS Authentication Server FCS Frame Check Sequence
ASCII American Standard Code for Information FDAS Fast Disk Acquisition System
Interchange FDDI Fiber Distributed Data Interface
ASMX Active Server Method File FIPS Federal Information Processing Standard
ASPX Active Server Page eXtended FTK Forensic ToolKit
AUT Application Under Test FTP File Transfer Protocol
BC Business Continuity GB Gigabyte
BCM Business Continuity Management Gbps Gigabits Per Second
BER Basic Encoding Rules GPG GNU Privacy Guard
BIOS Basic Input/Output System GPS Global Positioning System
BOOTP Bootstrap Protocol GRC Governance, Risk Management and Compliance
bps Bits Per Seconds GRE Generic Routing Encapsulation
BVA Boundary Value Analysis GUA Graphical User Authentication
BYOD Bring Your Own Device GUI Graphical User Interface
CAINE Computer Aided Investigative Environment HK Handle Key
CC HKEY_CURRENT_CONFIG HKCR HKEY_CLASSES_ROOT
CD Compact Disk HKCU HKEY_CURRENT_USER
CDMA Code Division Multiple Access HKLM HKEY_LOCAL_MACHINE
CDR Call Detail Records HKPD HKEY_PERFORMANCE_DATA
CERT Computer Emergency Response Team HKU HKEY_USERS
CGI Common Gateway Interface HLR Home Location Register
C-I-A Triad Confidentiality, Integrity and Availability HMACs Hashed Message Authentication Codes
CIS Center for Internet Security HPA Host Protected Area
CLI Command Line Interface HTML Hypertext Mark-up Language
COBIT Control Objectives for Information and Related HTTP Hyper Text Transfer Protocol
Technology I&AM Identity and Access Management
CPU Central Processing Unit I/O Input & Output
CRC Cyclic Redundancy Check IAST Interactive Application Security Testing
CSIRT Computer Security Incident Response Team ICMP Internet Control Message Protocol
CSMA/CD Carrier Sense Multiple Access/Collision ICT Information and Communications Technology
Detection IDAM Identity and Access Management
CSRF Cross-Site Request Forgery IDE Integrated Drive Electronics
DAC Discretionary Access Control IDEA International Data Encryption Algorithm
DAST Dynamic Analysis Security Testing IDPS Intrusion Detection and Prevention Systems
DCS Distributed Control System IDS Intrusion Detection System
DD Data Definition IEC International Electrotechnical Commission
DDoS Distributed Denial of Service IEF Internet Evidence Finder
DES Data Encryption Standard IETF Internet Engineering Task Force
DF Don't Fragment IGMP Internet Group Management Protocol
DFF Digital Forensics Framework IIS Internet Information Server
DHCP Dynamic Host Configuration Protocol IMEI International Mobile Equipment Identity
DIBS Digital Integrated Business Services IMSI International Mobile Subscriber Identity
DLL Dynamic Link Library IO Investigating Officer
DMA Direct Memory Access iOS iPhone Operating System
DNA Deoxyribonucleic Acid IoT Internet of Things
DNS Domain Name System IP Internet Protocol
DoD Department of Defence IPC Inter-Process Communication
DOS Disk Operating System Ipsec Internet Protocol Security
DoS Denial of Service IPV4 Internet Protocol Version 4
DR Disaster Recovery IR Internet Registry
DSA Digital Signature Algorithm iSCSI Internet Small Computer Systems Interface
DVD Digital versatile Disk ISD International Subscriber Dialling
ABBREVIATION FULL FORM ABBREVIATION FULL FORM

ISMS Information Security Management System RFCs Request for Comments


ISO International Organization for Standardization RIA Rich Internet Applications
ISP Internet Service Provider RIRs Regional Internet Registries
IT Information Technology RM Risk Management
ITIL Information Technology Infrastructure Library RPT Right Plain Text
JPG/JPEG Joint Photographic Experts Group RSA Rivest–Shamir–Adleman
KDC Key Distribution Center RSS RDF Site Summary
LAN Local-area network RTOS Real Time Operating System
LDAP Lightweight Directory Access Protocol S.A.T.A.N Security Administrator’s Tool for Analyzing
LIME Linux Memory Extractor Networks
LIRs Local Internet Registries S/MIME Secure/Multipurpose Internet Mail Extension
LLC Logical Link Control SAST Static Application Security Testing
LPT Left Plain Text SATA Serial Advanced Technology Attachment
MAC Mandatory Access Control SCA Software Composition Analysis
MAC Message Authentication Code SCADA Supervisory Control and Data Acquisition
MAC Media Access Control SCSI Small Computer System Interface
MAC OS Macintosh Operating System SEM Security Event Management
MAN Metropolitan Area Network SFD Start Frame Delimiter
MD5 Message Digest Algorithm SFTP SSH File Transfer Protocol
MF More Fragment SHA Secure Hash Algorithm
MFA Multi-Factor Authentication SID Security Identifier
MMS Multimedia Messaging Service SIEM Security Information and Event Management
MS - DOS Microsoft Disk Operating System SIFT SANS Investigative Forensic Toolkit
MSC Mobile Switching Centre SMS Short Message Service
MSIDN Mobile Station International Subscriber SMTP Simple Mail Transfer Protocol
Directory Number SNMP Simple Network Management Protocol
MTU Maximum Transmission Unit SQL Structured Query Language
MVS Multiple Virtual Storage SSH Secure Shell
NAC Network Access Control SSI Server - Side Include
NAT Network Address Translation SSL Secure Sockets Layer
NFAT Network Forensic Analysis Tool SSO Single Sign - on
NIC Network Interface Card STD Subscriber Trunk Dialling
NIRs National Internet Registries STP Shielded Twisted Pair
NIST National Institute of Standards and Technology TCP/IP Transmission Control Protocol/ Internet
NSRL National Software Reference Library Protocol
NTFS New Technology File System TDMA Time Division Multiple Access
NTI New Technologies Inc. TGS Ticket Granting Server
NTP Network Time Protocol TGT Ticket Granting Ticket
OLE Object Linking and Embedding TLS Transport Layer Security
OS Operating System ToS Type of Service
OSI Open Systems Interconnection TP Transport Protocol
OWASP Open Web Application Security Project TSK The Sleuth kit
PAN Personal Area Network TTL Time to Live
PAP Password Authentication Protocol UAT User Acceptance Testing
PASV Passive FTP UDP User Datagram Protocol
PC Personal Computer UI User Interface
PCMCIA Personal Computer Memory Card URL Uniform Resource Locator
International Association USB Universal Serial Bus
PDA Personal Digital Assistant UTC Coordinated Universal Time
PGP Pretty Good Privacy UTP Unshielded Twisted Pair
PKI Public Key Infrastructure VA Vulnerability Assessment
PLC Programable Logic Controllers VFS Virtual File System
POD Ping of Death VMS Virtual Memory System
PPP Point-to-Point Protocol VoIP Voice Over Internet Protocol
QoS Quality of Service VPN Virtual Private Network
RAID Redundant Array of Independent Disks WAN Wide-area Network
RAM Random Access Memory WAP Wireless Application Protocol
RARP Reverse Address Resolution Protocol WINS Windows Internet Naming Service
RASP Runtime Application Self-Protection WWW World Wide Web
RBAC Role-Based Access Control XML Extensible Markup Language
RCA Root Cause Analysis XSS Cross-Site Scripting
ANSWER KEY
UNIT 1 INTRODUCTION TO CYBER SECURITY

• ANSWER 2
A. Vulnerability- This is a weakness in an information system, system security procedures,
internal controls or implementations that are exposed.
B. Threat Agent or Actor- This refers to the intent and method targeted at the intentional
exploitation of the vulnerability or a situation and method that may accidentally trigger
the vulnerability.
C. Threat Vector- This is a path or a tool that a threat actor uses to attack the target.
D. Threat Target- This is anything of value to the threat actor such as PC, laptop, PDA,
tablet, mobile phone, online bank account or identity.
E. Confidentiality- Prevention of unauthorized disclosure or use of information assets.
F. Integrity- Prevention of unauthorized modification of information assets.
G. Availability- Ensuring authorized access to information assets when required, for the
duration required.
H. Identification- The first step in the ‘identify-authenticate-authorise’ sequence that is
performed when access to information or information processing resources are required.
I. Authentication- Verifies the identity by ascertaining what you know, what you have and
what you are.
J. Authorisation- The process of ensuring that a user has sufficient rights to perform the
requested operation and preventing those without sufficient rights from doing the same.
K. Non-Repudiation- Refers to one of the properties of cryptographic digital signatures
that offer the possibility of proving whether a message has been digitally signed by the
holder of a digital signature’s private key.

• ANSWER 3
A. v
B. iii
C. ii
D. ii
• ANSWER 4
A. Preventive Controls
B. Preventive Controls
C. Detective Controls
D. Detective Controls
E. Detective Controls
F. Deterrent Controls
G. Deterrent Controls
H. Recovery Controls
I. Recovery Controls

• ANSWER 6

FUNCTIONALITY PLANE OF APPLICATION

1. Preventive 1. Physical
2. Detective 2. Administrative
3. Corrective 3. Technical
4. Deterrent
5. Recovery
6. Compensating

• ANSWER 7
S- Spoofing of user Identity
T- Tampering
R- Repudiation
I- Information disclosure (privacy breach or data leak)
D- Denial of Service
E- Elevation of Privilege
• ANSWER 8
A- Application Attack K- Phishing Attack
B- Application Attack L- Network Attack
C- Malware M- Network Attack
D- Application Attack N- Network Attack
E- Network Attack O- Network Attack
F- Phishing Attack P- Network Attack
G- Malware Q- Network Attack
H- Phishing Attack R- Network Attack
I- Phishing Attack S- Application Attack
J- Malware

UNIT 2 CRYPTOGRAPHY

• ANSWER 1

A – (ii) F – (iii)
B – (iii) G – (iv)
C – (iv) H – (iii)
D – (iv) I – (iv)
E – (iii) J – (i)

UNIT 3 NETWORK SECURITY

• ANSWER 1
VPN : Virtual Private Network
TCP/IP : Transmission Control Protocol / Internet Protocol.
HTTP : Hyper Text Transfer Protocol,
UDP : User Datagram Protocol
ARP : Address Resolution Protocol
DNS : Domain Name System
FTP : File Transfer Protocol
SSH : Secure Shell
DHCP : Dynamic Host Configuration Protocol
IPS : Intrusion Prevention System
IDPS : Intrusion Detection and Prevention Systems

• ANSWER 2
A – (i) F – (iii)
B – (iv) G – (iv & v)
C – (ii) H – (ii & iii)
D – (iii) I – (i & iii)
E – (iv) J – (iv)

• ANSWER 3

A – (v) E – (vii)
B – (vi) F – (ii)
C – (i) G – (iv)
D – (iii)

• ANSWER 4
A – (v) E – (iii)
B – (vi) F – (vii)
C – (i) G - (iv)
D – (ii)

• ANSWER 5

A – (ii) C - (iii)
B – (i)
UNIT 4 APPLICATION SECURITY

• ANSWER 1

1–G 5 – D &H
2–A 6–E
3–B 7–F
4–C

• ANSWER 2
Open
Web
Application
Security
Project

• ANSWER 3
A – (ii)
B – (v)
C - (i)

UNIT 5 SECURITY AUDITING

• ANSWER 1

1–C 5-G
2- E 6-D
3-A 7-B
4-H 8-F

• ANSWER 3
(ii)
UNIT 6 CYBER FORENSICS
• ANSWER 2
i. DISK FORENSICS
ii. NETWORK FORENSICS
iii. WIRELESS FORENSICS
iv. DATA BASE FORENSICS
v. MOBILE DEVICE FORENSICS
vi. GIS FORENSICS
vii. EMAIL FORENSICS
viii. MEMORY FORENSICS

• ANSWER 4
a- False
b- True
c- True
d- False
e- True

• ANSWER 6
1- B
2- D
3- A
4- E
5- C

• ANSWER 9
1- C
2- A
3- B