Cybersecurity Guide
HANDBOOK
FOREWORD
The discipline of Cyber Security is one that continues to evolve rapidly in scope, complexity and needs. With a concerted focus on digitization and the rising sophistication of cybercrime, the need for a comprehensive plan to secure and safeguard information infrastructure, personal identity and digital assets has never been more pronounced than in the current times. The only way this massive ask gets addressed is by ensuring the availability of an adequate pool of cyber professionals who are well equipped with the core technical expertise that enables them to effectively take up the various roles that enterprises of today are creating as part of their elaborate strategies and initiatives for cyber preparedness, covering, but not limited to, networks, applications, devices and identities.
Thanks to the capacity building and skilling efforts that are underway across the country for this niche domain, there is a substantial pool of Cyber Security professionals who have been working on furthering the Cyber Security agenda of their respective organizations and, in turn, the country's cyberspace. Having said that, efforts have to be scaled up to bridge the gap between the demand and supply sides. Currently, the engineering graduates who are joining the workforce do not have adequate exposure to Cyber Security, and it is precisely this gap that needs to be bridged. DSCI and the Sector Skills Council, NASSCOM, have been working with industry on a job-role-focussed framework with the intent of stepping up industry readiness in cybersecurity, which will be needed by governments and businesses of all shapes and sizes.
There is another dimension to this whole scenario: that of having a workforce which is balanced from a diversity standpoint. At present, the situation is rather skewed, with very few women in the country's Cyber Security workforce, which could be due to a lack of awareness of opportunities in this area as well as insufficient exposure to training courses.
Microsoft-DSCI CyberShikshaa is an endeavour that intends to enhance women's representation in the Cyber Security workforce and prime them for taking up various job roles in industry as well as the government sector. This courseware has been prepared with the intent of providing a reference and a baseline for CyberShikshaa trainers as well as trainees. Given the massive scope of this realm, a single course guide cannot be exhaustive. The courseware covers the fundamental aspects of different modules, viz. System Fundamentals, Introduction to Cyber Security, Network Security, Application Security, and Security Auditing & Cyber Forensics. Good knowledge of security issues, continuous learning about emerging threats, and a proactive protect-detect-respond approach are three key hallmarks of any Cyber Security professional. CyberShikshaa candidates will find this material useful as guidance; their learning certainly needs to be augmented by hands-on training, labs, a practical case-study-based approach, and online content.
• Explain the relevance of cyber security in society
• Explain basic cyber security principles and concepts
• Describe various types of threats and attacks
• Describe the commonly used cyber security controls
• Provide a brief introduction to key domains of cyber security
Unit 1 - Introduction to Cyber Security
Cybercrime has become one of the fastest growing crimes in the digital environment, as advanced technologies continue to progress by offering high speed, ease of use and connectivity. Cybercrime continues to branch into different paths as time passes. More and more criminals are exploiting the advantages provided by modern-day technology in order to perpetrate a diverse range of criminal activities using digital devices.
Due to this, the field of information/cyber security has seen significant growth in recent times. Incidents of information theft from large companies like Target, Sony and Citibank have shown the risks and challenges of this field, underlining the growing need for information/cyber security professionals. We are also witnessing a rising level of data leakage from governments, businesses and other organizations, families and individuals.
The key objective of Information/Cyber Security is the protection of information and its critical elements, including
the systems and hardware that are involved in the creation, use, storage, transmission and deletion of the information.
It is used to protect information from unauthorized access, use, disclosure, disruption, modification or destruction.
By selecting and applying appropriate safeguards, Information/Cyber Security helps organizations in protecting their
resources, reputation, legal position and other tangible and intangible assets.
3. Data Protection and Privacy: to prevent unauthorized access to computers, databases and websites and to protect data from corruption. It also includes protective digital privacy measures.
4. Identity & Access Management: to enable the right individuals to access the right resources at the right times for the right reasons, by authentication and authorisation of identities and access.
5. Cyber Assurance / GRC: to develop and administer processes for Governance, Risk and Compliance.
6. IT Forensics: to collect, analyse and report on digital data in a way that is legally admissible. It can be used in the detection and prevention of crime and in any dispute where evidence is stored digitally.
7. Incident Management: to manage information security incidents and identify, analyze and correct hazards to prevent a future re-occurrence.
8. BCM/DR: to develop and administer processes for creating systems of prevention and recovery to deal with potential threats to a company, thus protecting an organization from the effects of significant negative events.
9. End Point Security: to protect the corporate network when accessed via remote devices such as laptops or other wireless and mobile devices. Each device with a remote connection to the network creates a potential entry point for security threats.
10. Security Operations: to monitor, assess and defend enterprise information systems (web sites, applications, databases, data centers and servers, networks, desktops, etc.).
Common security concerns include:
• theft
• fraud/forgery
• unauthorized information access
• interception or modification of data and data management systems
These concerns materialize in the event of a breach caused by the exploitation of a vulnerability.
Vulnerabilities
• A vulnerability is a weakness in an information system, system security procedures, internal controls or implementation that could be exploited or triggered by a threat source.
• A 'threat agent' or 'threat actor' refers to the intent and method targeted at the intentional exploitation of a vulnerability, or to a situation and method that may accidentally trigger a vulnerability.
• A 'threat vector' is a path or tool that a threat actor uses to attack the target.
• 'Threat targets' are anything of value to the threat actor, such as a PC, laptop, PDA, tablet, mobile phone, online bank account or identity.
Information States
Information has 3 basic states: at any point in time, information is being transmitted, stored or processed, irrespective of the media in which it resides.
Fig: Information states - transmission, processing and storage
Information systems security concerns itself with the maintenance of three critical characteristics of information: confidentiality, integrity and availability.
These characteristics of information represent all the security concerns in an automated environment. All organisations
are concerned about these irrespective of their outlook on sharing information.
The triad shows the three goals of information security: confidentiality, integrity and availability. Information is
protected when all the three tenets are put together.
1. The first tenet of the information security triad is confidentiality.
Confidentiality is defined by ISO-17799 as “ensuring that information is accessible only to those authorized
to have access to it.”
This can be one of the most difficult tasks to undertake: to attain confidentiality, we have to keep secret information secret, while people from both inside and outside the organization may attempt to reveal it.
2. The second tenet of the information security triad is integrity. Integrity is defined by ISO-17799 as “the
action of safeguarding the accuracy and completeness of information and processing methods.”
This means that when a user requests any type of information from the system, the information provided is correct.
3. The last tenet of the information security triad is availability.
ISO-17799 defines availability as ensuring that authorized users have access to information and associated
assets when required. This means that when a user needs a file or system, the file or system is there to be
accessed. This seems simple enough, but many factors work against system availability: hardware failures, natural disasters, malicious users and outside attackers can all remove availability from systems. Common mechanisms to fight such downtime include fault-tolerant systems, load balancing and system failover.
Given below are some terms that relate to basic information/ cyber security concepts:
Fig: Seven basic concepts - identification, authentication, authorisation, confidentiality, integrity, availability and non-repudiation
• Identification is the first step in the 'identify-authenticate-authorise' sequence that is performed countless times every day by humans and computers alike whenever access to information or information processing resources is required. While the particulars of identification systems differ depending on who or what is being identified, some intrinsic properties of identification apply regardless. Three of these properties are the scope, locality and uniqueness of IDs.
Identification name spaces can be local or global in scope. To explain this concept, let's use the familiar notation of email addresses. While many email accounts named Sameer may exist around the world, an email address sameer@[Link] must refer to exactly one such user in the [Link] locality. If the company in question is a small one, then maybe only one employee is named Sameer. His colleagues may refer to that person by his first name alone. That works because the colleagues are in the same locality and only one Sameer works there. However, if Sameer were someone in another country, or even from the other end of town, referring to Sameer@[Link] as simply Sameer would make no sense, because the username Sameer is not globally unique and would indicate different persons in different localities. This is one of the reasons why two user accounts should never use the same name on the same system: it ensures that access controls are not based on names that can be misinterpreted, and makes it easy to establish accountability for user actions.
• Authentication happens right after identification and before authorization. It verifies the authenticity of the identity declared at the identification stage. The three methods of authentication are what you know, what you have and what you are. Regardless of the authentication method used, the aim is to obtain reasonable assurance that the identity declared at the identification stage belongs to the party in communication.
Reasonable assurance could mean different degrees of assurance, depending on the environment and application.
Therefore, one may require different approaches towards authentication. Authentication requirements of a national
security system would be critical and would naturally differ from authentication requirements of a small company.
As different authentication methods have different costs and properties as well as different returns on investment,
the choice of authentication method for a system or organisation should be made after these factors have been
carefully considered.
• Authorisation
Authorisation is the process of ensuring that a user has sufficient rights to perform the requested operation, and of preventing others without sufficient rights from doing the same. After users declare their identity at the identification stage and prove it at the authentication stage, they are assigned a set of authorizations (rights, privileges or permissions) that define what they can do on the system. These authorisations are usually defined by the system's security policy and are implemented by the security system administrator. The privileges may range from one extreme of "permit nothing" to the other extreme of "permit everything" and include anything in between.
• Confidentiality
It means that information, documents, etc. are received or used only by persons authorised to have access to them. Unauthorised access to confidential information may have devastating consequences, not only in national security applications but also in commerce and industry. The main mechanisms for protecting confidentiality in information systems are cryptography and access controls. Examples of threats to confidentiality are malware, intruders, social engineering, insecure networks and poorly administered systems.
• Integrity
It is concerned with the trustworthiness, origin, completeness and correctness of information, as well as the prevention of improper or unauthorised modification of information. Integrity in the information/cyber security context refers not only to the integrity of the information itself but also to the integrity of its origin, i.e. the integrity of the source of the information. Integrity protection mechanisms may be grouped into two broad types: preventive mechanisms, such as access controls that prevent unauthorised modification of information, and detective mechanisms, which are intended to detect unauthorised modifications when preventive mechanisms have failed. Controls that protect integrity also include the principles of least privilege and the separation and rotation of duties.
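As a concrete illustration of a detective mechanism, the sketch below computes a SHA-256 checksum of a file and compares it against a recorded baseline. It is a minimal example (the file name is illustrative); real file-integrity checkers add baseline databases, signing and scheduled scans.

```python
import hashlib

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Record a baseline checksum, then compare later to detect tampering.
baseline = sha256_of("payroll.csv")
# ... time passes; the file may have been modified ...
if sha256_of("payroll.csv") != baseline:
    print("Integrity violation detected: payroll.csv has changed")
```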
• Availability
Availability of information, although usually mentioned last, is not the least important pillar of information/cyber security. If the authorised users of information cannot access and use it, what is the use of having that information at all? Therefore, even though availability is the last item in the C-I-A triad, it is just as important and as necessary as confidentiality and integrity. Attacks against availability are known as Denial of Service (DoS) attacks. Natural and man-made disasters also affect availability. While natural disasters are infrequent, they have a severe impact; human errors are frequent but usually not as severe. Business continuity and disaster recovery planning (which at the very least includes regular and reliable backups) are used to minimize losses when availability is compromised.
• Non-repudiation
In the information/cyber security context, it refers to a property of cryptographic digital signatures that offers the possibility of proving whether a message has been digitally signed by the holder of a digital signature's private key.
Non-repudiation is fast becoming very important due to the growth of electronic commerce. However, it can also be controversial: an owner of a digital signature may maliciously repudiate a legitimate transaction by claiming that his/her digital signature key was stolen.
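To make the signing property concrete, here is a hedged sketch using an Ed25519 key pair from the third-party Python 'cryptography' package (pip install cryptography); the message text is invented. It shows only the mechanics of signing and verifying, not the key management and trusted-third-party services that real non-repudiation schemes require.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # held only by the signer
public_key = private_key.public_key()        # distributed to verifiers

message = b"Transfer 100 INR to account 42"
signature = private_key.sign(message)

# Anyone holding the public key can check that the private-key holder
# signed exactly this message; altering either byte stream fails.
try:
    public_key.verify(signature, message)
    print("Signature valid: signed by the private-key holder")
except InvalidSignature:
    print("Signature invalid or message altered")
```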
The following types of non-repudiation services are defined in international standard ISO 14516:2002 (guidelines for
the use and management of trusted third party services).
• Approval: non-repudiation of approval provides proof of who is responsible for approval of the contents of a
message.
• Sending: non-repudiation of sending provides proof of who sent the message.
• Origin: non-repudiation of origin is a combination of approval and sending.
• Submission: non-repudiation of submission provides proof that a delivery agent has accepted the message for
transmission.
• Transport: non-repudiation of transport provides proof for the message originator that a delivery agent has
delivered the message to the intended recipient.
• Receipt: non-repudiation of receipt provides proof that the recipient received the message.
• Knowledge: non-repudiation of knowledge provides proof that the recipient recognized the content of the
received message.
• Delivery: non-repudiation of delivery is a combination of receipt and knowledge, as it provides proof that the
recipient received and recognized the content of the message.
What type of security is associated with each level of the OSI model?
We know that the OSI reference model for networking is designed around seven layers arranged in a stack. The OSI
security architecture reference model (ISO 7498-2) is also designed around seven layers, reflecting a high level
view of the different requirements within Information Security.
For three of the layers, the security measures used include user account management to control access, Host Intrusion Detection Systems, rules-based access control, digital certificates, encrypted passwords (safely stored), and timers to limit the number of attempts that may be made to establish a session. The security services shown for these layers include authentication, access control and non-repudiation.
Fig 1.3: OSI Security Architecture—Some security measures for each layer
Accurately assessing threats and identifying vulnerabilities is critical to understanding the risk to assets. Understanding
the difference between threats, vulnerabilities, and risk is the first step.
Let us look at each of these terms separately.
Assets
Assets are the people, property and information that are valuable and that we are trying to protect.
• People may include employees and customers along with other invited persons such as contractors or guests.
• Property assets consist of both tangible and intangible items that can be assigned a value. Intangible assets
include reputation and proprietary information.
• Information may include databases, software code, critical company records, and many other intangible items.
Vulnerabilities
A vulnerability is a weakness or gap in our protection efforts. These weaknesses or gaps can be exploited by threats
to gain unauthorized access to an asset. These are security flaws in a system that allow an attack to be successful.
Vulnerabilities can be treated: weaknesses should be identified, and proactive measures should be taken to correct them.
Therefore, vulnerability testing is performed on an ongoing basis by the people responsible for resolving such
vulnerabilities. It helps to provide data used to identify unexpected dangers to security that need to be addressed.
Testing for vulnerabilities is useful for maintaining ongoing security, allowing the people responsible for the security
of one’s resources to respond effectively to new dangers as they arise.
Threats
Threats are anything that can exploit a vulnerability, intentionally or accidentally, and obtain, damage or destroy an asset. A threat is what we're trying to protect against; it refers to the source and means of a particular type of attack.
Threats generally cannot be controlled. One can't stop the efforts of an international terrorist group, prevent
a hurricane, or tame a tsunami in advance. However, threats need to be identified so that measures can be taken to
protect the asset.
A threat assessment is performed to determine the best approaches to securing a system against a particular threat,
or class of threat. Penetration testing exercises are substantially focused on assessing threat profiles, to help one
develop effective countermeasures against the types of attacks represented by a given threat.
Analyzing threats can help one develop specific security policies to implement in line with policy priorities and
understand the specific implementation needs for securing one’s resources.
Risks
Risks are the potential for loss, damage or destruction of an asset as a result of a threat exploiting a vulnerability. Risk
is the intersection of assets, threats, and vulnerabilities. The term “risk” refers to the likelihood of being targeted by a
given attack, of an attack being successful, and general exposure to a given threat.
Risk can be mitigated: it can be managed to lower either the vulnerability or the overall impact on the business.
A risk assessment is performed to determine the most important potential security breaches to address now, rather
than later. One enumerates the most critical and most likely dangers, and evaluates their levels of risk relative to each
other as a function of the interaction between the cost of a breach and the probability of that breach.
Analyzing risk can help one determine appropriate security budgeting — for both time and money — and prioritize
security policy implementations so that the most immediate challenges can be resolved the most quickly.
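To illustrate the prioritization idea, the toy sketch below scores risk as likelihood × impact and ranks threats accordingly. The scale and the example entries are invented for illustration; real risk assessment methods (e.g. ISO 27005, mentioned later) are far richer.

```python
# Each entry: (description, likelihood 1-5, impact 1-5) - illustrative values.
threats = [
    ("SQL injection against customer portal", 4, 5),
    ("Laptop theft from office",              3, 3),
    ("Data-centre flood",                     1, 5),
]

# Score each threat and rank so the highest risks are addressed first.
ranked = sorted(threats, key=lambda t: t[1] * t[2], reverse=True)
for desc, likelihood, impact in ranked:
    print(f"risk={likelihood * impact:>2}  {desc}")
```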
Microsoft has proposed a threat classification called STRIDE, from the initials of the threat categories:
• Spoofing of user identity
• Tampering
• Repudiation
• Information disclosure (privacy breach or data leak)
• Denial of Service (DoS)
• Elevation of privilege
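Each STRIDE category is conventionally paired with the security property it violates; the short sketch below records that standard mapping as a lookup table.

```python
# Standard pairing of each STRIDE category with the property it violates.
STRIDE = {
    "Spoofing":               "Authentication",
    "Tampering":              "Integrity",
    "Repudiation":            "Non-repudiation",
    "Information disclosure": "Confidentiality",
    "Denial of Service":      "Availability",
    "Elevation of privilege": "Authorization",
}

for threat, violated in STRIDE.items():
    print(f"{threat:<24} violates {violated}")
```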
Common threat agents and threat sources include:
• Non-target specific: non-target-specific threat agents are computer viruses, worms, Trojans and logic bombs.
• Employees: staff, contractors, operational/maintenance personnel or security guards who are disgruntled with the company.
• Organized crime and criminals: criminals target information that is of value to them, such as bank
accounts, credit cards or intellectual property that can be converted into money. Criminals will often make
use of insiders to help them.
• Corporations: corporations are engaged in offensive information warfare or competitive intelligence.
Partners and competitors come under this category.
• Unintentional human error: accidents, carelessness etc.
• Intentional human error: insider, outsider etc.
• Natural: Flood, fire, lightning, meteor, earthquakes etc.
Attacks
Network Attacks
• Watering hole attack - This is a more complex type of phishing attack. Instead of the usual way of sending spoofed emails to end users to trick them into revealing confidential information, attackers use a multi-staged approach to gain access to the targeted information.
• Eavesdropping - Network communications usually occur in an unsecured or “cleartext” format, which
allows an attacker who has gained access to data paths to “listen in” or interpret (read) the traffic. When an
attacker is eavesdropping it is referred to as sniffing or snooping. The ability of an eavesdropper to monitor
the network is generally the biggest security problem that administrators face in an enterprise. Without
strong encryption services that are based on cryptography, data can be read by others as it travels through
the network.
• Spoofing - This is a technique used to masquerade as another person, program or address by falsifying data, with the purpose of gaining unauthorized access.
• Network Sniffing (Packet Sniffing) - A process of capturing the data packets travelling in the network. Network sniffing can be used by IT professionals to analyse and monitor traffic (for example, to find unexpected suspicious traffic), but also by perpetrators to collect data sent in clear text, which is easily readable with network sniffers (protocol analysers); a short sniffing sketch follows this list. The best countermeasure against sniffing is the use of encrypted communication between hosts.
• Data Modification - After an attacker has read the data, the next logical step is to alter it. An attacker can
modify the data in the packet without the knowledge of the sender or receiver. No-one wants any of their
messages to be modified in transit.
• Denial of Service attack - An attack designed to disrupt or eliminate the services of a particular host/server by flooding it with large quantities of useless traffic or external communication requests. After a DoS attack succeeds, the server cannot answer even legitimate requests; this can be observed in a variety of ways: slow server response, slow network performance, unavailability of software or web pages, or inability to access data, websites or other resources. A Distributed Denial of Service (DDoS) attack occurs when many infected systems (a botnet) flood a specific host with traffic simultaneously.
• Man-in-the-middle attack - This attack takes the form of active monitoring or eavesdropping on victims' connections and communication between victim hosts. It involves interaction between both victim parties and the attacker, achieved by the attacker intercepting all or part of the communication, changing its content, and sending it back as legitimate replies.
• Compromised-Key Attack - A key is a secret code or number necessary to interpret secured information.
Although obtaining a key is a difficult and resource-intensive process for an attacker, it is possible. After an
attacker obtains a key, that key is referred to as a compromised key. An attacker uses the compromised key
to gain access to a secured communication without the sender or receiver being aware of the attack. With
the compromised key, the attacker can also decrypt or modify data.
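As referenced in the sniffing item above, here is a hedged sketch of packet capture using the third-party 'scapy' package (pip install scapy). The filter and packet count are illustrative; capturing traffic normally requires elevated privileges, and one should only sniff networks one is authorised to monitor.

```python
from scapy.all import sniff

def show(packet):
    # Print a one-line summary of each captured packet
    # (source, destination, protocol).
    print(packet.summary())

# Capture ten TCP packets from the default interface, then stop.
sniff(filter="tcp", prn=show, count=10)
```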
Application Attacks
• Injection - Injections let attackers modify a back-end statement or command through unsanitised user input. It is the most common type of application layer attack; see the parameterised-query sketch after this list.
• Cross-Site Scripting - Cross-site scripting is a type of vulnerability that lets attackers insert JavaScript into the pages of a trusted site. By doing so, they can completely alter the contents of the site to do their bidding.
• Buffer overflow attack - In this type of attack the victim host is provided with traffic/data that is outside the range of the processing specifications of the victim host, protocols or applications, overflowing the buffer and overwriting adjacent memory. One example is the aforementioned Ping of Death attack, where a malformed ICMP packet exceeding the normal size can cause a buffer overflow.
• Trojan Horse - Trojan horses are fake programs that pretend to be original programs. Since they can replicate most of the application-level behaviour of an application, a Trojan horse is one of the most common ways to launch application layer attacks.
• HTTP flood - An HTTP flood is a type of layer-7 application attack hitting web servers, abusing the GET requests used to fetch information, as in URL data retrievals during SSL sessions. Hackers send GET or POST requests, specifically designed to consume considerable resources, to a target web server. Bots may then start from a given HTTP link and follow all links on the provided website recursively.
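As promised in the injection item above, here is a minimal sketch of the flaw and its standard fix, using Python's built-in sqlite3 module. The table, data and attacker input are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('sameer', 'top-secret')")

user_input = "' OR '1'='1"   # attacker-controlled value

# VULNERABLE: unsanitised input is concatenated into the statement,
# so the WHERE clause becomes always-true and leaks every row.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
print("injected query returned:", rows)

# SAFE: a parameterised query treats the input as data, not as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print("parameterised query returned:", rows)   # []
```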
Phishing Attacks
• Phishing attack - This type of attack uses social engineering techniques to steal confidential information. Such attacks most commonly target the victim's banking account details and credentials. Phishing attacks tend to use spoofed emails that lead users to malware-infected websites designed to look like real online banking websites.
• Social phishing - In recent years, phishing techniques have evolved to include social media like Facebook or Twitter; this type of phishing is often called social phishing. The purpose remains the same: to obtain confidential information and gain access to personal files.
• Spear phishing attack - This type of phishing attack is targeted at specific individuals, groups of individuals or companies. Spear phishing attacks are performed mostly with the primary purpose of industrial espionage and theft of sensitive information, while ordinary phishing attacks are directed at the wider public with the intent of financial fraud.
• Whaling – It is a type of phishing attack specifically targeted at senior executives or other high profile
targets within a company.
• Vishing (Voice Phishing or VoIP Phishing) - This is the use of social engineering techniques over the telephone system to gain access to confidential information from users. It is often combined with caller ID spoofing, which masks the real source phone number and instead displays a number familiar to the victim or known to belong to a real banking institution.
Malware
• Virus - A virus is a malicious program able to inject its code into other programs/applications or data files, leaving the targeted areas "infected". A virus installs itself without the user's consent and spreads in the form of executable code transferred from one host to another. Types of viruses include resident and non-resident viruses, boot sector viruses, macro viruses, file-infecting viruses (file infectors), polymorphic viruses, metamorphic viruses, stealth viruses, companion viruses and cavity viruses.
• Worm - A worm is a category of malicious program that exploits operating system vulnerabilities to spread itself. In design, a worm is quite similar to a virus and is even considered a sub-class of it. Unlike viruses, though, worms can reproduce/duplicate and spread by themselves, without needing to attach to any existing program or executable. Based on their method of spreading, worms include email worms, internet worms, network worms and multi-vector worms.
• Trojan - Computer Trojans or Trojan horses are named after the mythological Trojan horse owing to the similarity in their operation strategy. Trojans are a type of malware that masquerades as a non-malicious, even useful application, but actually damages the host computer after installation. Unlike viruses, Trojans do not self-replicate; they rely on the end user to install them.
1.2.3 Hacking
Hacking is attempting to find security gaps and exploit a computer or network system to gain access and/or control
over the systems. Hackers are highly intelligent and skilled in computers, network, programming and the use of
hacking tools. They could hack systems and commit criminal acts such as privacy invasion, theft of corporate/personal
data, frauds, etc. Sometimes, organisations use the skills of hackers to help them improve the security of their systems
by identifying loopholes and weaknesses in their security systems.
Script Kiddies
Script Kiddies are amateur hackers, who may not be very skilled, or may be doing this just for the fun of it or to
impress their friends. They download off-the-shelf tools and codes and are not very concerned about learning the
science and the art of hacking. They are also quite dangerous because they do not fully understand the repercussions
of their actions and could end up doing a lot of damage just for fun.
A particular type of script kiddie is the blue hat hacker, whose key agenda is to take revenge on anyone who makes them angry. Like script kiddies they do not want to learn, but use simple cyber attacks, such as flooding an IP address with packets, resulting in DoS attacks.
Hacktivist
Hacktivists are protestors on the internet who may have political intentions. Instead of carrying placards and marching in the streets to call attention to social causes, they deface websites and upload promotional material, so that viewers receive information about the cause they propagate, anonymously. They use the same knowledge, skills and tools as a black hat hacker, but with the objective of drawing public attention to a political matter. They may also extract unauthorised information from government or organisational sources and make it public, acting like a whistleblower.
Security threats can affect an institution by the exploitation of numerous types of vulnerabilities. No single control or
security device can completely protect a system that is connected to a public network. Effective security will require
the establishment of layers of various types of controls, monitoring, and testing methods.
Types of Controls
Central to information/cyber security is the concept of controls, which may be categorized by functionality:
• Preventive
• Detective
• Corrective
• Deterrent
• Recovery
• Compensating
or by plane of application:
• Physical
• Administrative
• Technical
By functionality:
Preventive controls
Preventive controls are the first controls met by an adversary. These try to prevent security violations and enforce
access control. Like other controls, these may be physical, administrative or technical. Doors, security procedures
and authentication requirements are examples of physical, administrative and technical preventive controls
respectively.
Detective controls
Detective controls are in place to detect security violations and alert the defenders. They come into play when preventive controls have failed or been circumvented, and are no less crucial than preventive controls. Detective controls include cryptographic checksums, file integrity checkers, audit trails, logs and similar mechanisms.
Corrective controls
Corrective controls try to correct the situation after a security violation has occurred. Even though a violation has occurred, it makes sense to try to fix the situation and limit the damage. Corrective controls vary widely depending on the area being targeted, and they may be technical or administrative in nature.
Deterrent controls
Deterrent controls are intended to discourage potential attackers. Examples of deterrent controls include notices
of monitoring and logging as well as the visible practice of sound information/cyber security management.
Recovery controls
Recovery controls are somewhat like corrective controls, but they are applied in more serious situations to
recover from security violations and restore information and information processing resources. Recovery controls
may include disaster recovery and business continuity mechanisms, backup systems and data, emergency key
management arrangements and similar controls.
Compensating controls
Compensating controls are intended to be alternative arrangements for other controls when the original controls have failed or cannot be used. When a second set of controls addresses the same threats as another set, it acts as a compensating control.
By plane of application:
Physical controls
Physical controls include doors, secure facilities, fire extinguishers, flood protection and air conditioning.
Administrative controls
Administrative controls are the organization’s policies, procedures and guidelines intended to facilitate information/
cyber security.
Technical controls
Technical or logical controls are the various technical measures, such as firewalls, authentication systems, intrusion detection systems and file encryption, among others.
Many systems are programmed with controls as per the degree of risk associated with the system. For example, a
high-risk money transfer processing system at a financial institution would have a lot more controls than a lower-risk
non-transactional record-keeping system at the same institution.
Yet, there are many high-risk systems that may not be programmed with adequate control features or the control
may not be implemented properly. In such cases programmers and/or process owners are not aware of one or more
of the risks faced by the organization.
Controls that protect against these threats are called physical security controls. Examples of physical security controls
include various types of locks (e.g., conventional keys, electronic access badges, biometric locks, cipher locks);
insurance coverage over hardware and the costs to re-create data; procedures to perform daily backups of system
software, application programs, and data; as well as off-site storage and rotation of the backup media (e.g., magnetic
tapes, disks, compact disks [CDs]) to a secure location; and current and tested disaster recovery programs.
Physical security controls pertain to the central processing unit and associated hardware and peripheral devices.
Security vulnerability management has evolved from the vulnerability assessment systems that began in the early 1990s
with the advent of network security scanner S.A.T.A.N. (Security Administrator’s Tool for Analyzing Networks). It was
followed by the first commercial vulnerability scanner from ISS. While the early tools mostly found the vulnerabilities,
and produced reports, today’s solutions deliver comprehensive discovery and support the entire security vulnerability
management lifecycle.
Vulnerabilities can exist anywhere in the IT environment and can result from many different root causes. Security vulnerability management solutions collect intelligence from endpoints and the network comprehensively, and then apply advanced analytics to identify and prioritise the vulnerabilities that pose the greatest risk to systems. This results in actionable data that helps IT security teams focus on the tasks that will most quickly and effectively reduce overall network risk with the fewest possible resources.
Security vulnerability management works in a closed-loop workflow system that usually includes identifying
the networked systems and their associated applications, auditing or scanning the systems and applications for
vulnerabilities and remediating them. Any IT infrastructure component could present existing or new security
concerns and vulnerabilities. It may be a fault in the product/ component or it may be inadequate configuration.
Malicious code or unauthorised individuals may exploit these vulnerabilities to cause damage, such as disclosure of
data to competition or using passwords and userids to conduct frauds. Vulnerability management is the process of
identifying those vulnerabilities and taking appropriate measures to mitigate risk.
Vulnerability assessment and management is an essential piece for managing overall IT risk because:
Persistent threats
Attacks exploiting security vulnerabilities for financial gain and criminal agendas continue to dominate headlines.
Regulation
Many government and industry regulations mandate rigorous vulnerability management practices.
Risk management
Mature organizations treat it as a key risk management component. Organizations that follow mature IT security
principles understand the importance of risk management.
Properly planned and implemented threat and vulnerability management programs represent a key element in
an organization’s information/cyber security program, providing an approach to risk and threat mitigation that is
proactive and business aligned, not just reactive and technology focused.
Vulnerability Assessment
This includes assessing the environment for known vulnerabilities and assessing IT components against security configuration policies (defined by device role) for the environment. It is accomplished through scheduled vulnerability and configuration assessments of the environment.
Network based vulnerability assessment (VA) has been the main method used in order to baseline networks, servers
and hosts. The strength of VA is its breadth of coverage.
A comprehensive and accurate vulnerability assessment can be done for managed systems by using credentialed
access. Unmanaged systems can be identified and a basic assessment can be done. It is also important to evaluate
databases and web applications for security weaknesses considering the increase in attacks that target these
components.
Database scanners are used to check database configuration and properties, and to verify whether they comply with
database security best practices. Web application scanners test an application’s logic for ‘abuse’ cases that can break
or exploit the application. There are more tools that can be used to perform more in-depth testing and analysis.
All these scanning technologies (whether it is for network, application or database) assess different types of security
weaknesses, and most organisations need to implement a combination.
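For intuition, the sketch below shows the simplest building block of network-based assessment: a TCP connect scan of a few ports using Python's standard socket module. The target host and port list are placeholders, and one should scan only systems one is authorised to assess; real scanners add service fingerprinting and vulnerability checks on top of this.

```python
import socket

target = "127.0.0.1"           # placeholder: a host you are authorised to scan
for port in (22, 80, 443, 3306):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when the TCP handshake succeeds.
        state = "open" if s.connect_ex((target, port)) == 0 else "closed"
        print(f"{target}:{port} {state}")
```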
Risk assessment
Larger issues should be expressed in the language of risk (e.g. ISO 27005), precisely expressing their influence in business terms. The business case for any remedial action should incorporate considerations relating to risk reduction and compliance with policy. This forms the basis of the action to be agreed between the relevant line of business and the security team.
Risk analysis
‘Fixing’ the issue may involve acceptance of risk, shifting of risk to another party or reducing risk by applying remedial
action, which could be anything from a configuration change to implementing a new infrastructure (e.g., data loss
prevention, firewalls, host intrusion prevention software).
Elimination of the root cause of security weaknesses may require changes to user administration and system
provisioning processes. Many processes and often several teams may come into play (e.g., configuration management,
change management, patch management, etc.). Monitoring and incident management processes are also required
to maintain the environment.
Security Testing
Hackers or attackers are people who gain unauthorised access to an application. Their motives can range from malicious or harmful intent to simple curiosity or wanting to brag/show off. There is another type of hacker, hired to find out whether an application can be breached; they are called 'ethical hackers'. Hackers who have malicious intent and break into an application to steal data or cause damage are called 'crackers'.
Types of attacks
The most common types of attacks are:
• State sponsored attacks: State sponsored attacks are penetrations conducted by terrorist groups, foreign
governments and other outside entities.
• Advanced persistent threats: Advanced persistent threats are continuous attacks aimed at an organisation
often for political reasons.
• Ransomware: Ransomware locks data and requires the owner to pay a fee to have their data released.
• Denial of Service (DoS): Denial of Service makes an application inaccessible to its users.
Remediation Planning
Prioritization
Vulnerability and security configuration assessments usually generate long remediation work lists. This remediation
work needs to be prioritized. When organizations implement vulnerability assessment and security configuration
baselines for the first time, they may discover that many systems contain multiple vulnerabilities and security
configuration errors. There is a lot of work and therefore, prioritization is important.
Example: consider an application that had its database pilfered by hackers via SQL injection. The ultimate failure the forensic specialist may be investigating is the exfiltration of consumer private data, but the SQL injection itself isn't the root cause of the failure.
Why did the SQL Injection happen?
Was the root of the problem that the developer responsible simply didn’t follow the corporate policy for
building SQL queries?
Or was the issue a failure to implement something like the OWASP ESAPI (ESAPI - The OWASP Enterprise
Security API is a free, open source web application security control library that makes it easier for programmers
to write lower-risk applications.) in the appropriate manner?
Or maybe the cause was a vulnerable open-source piece of code that was incorporated into the corporate
application without passing it through the full source code lifecycle process?
The Discretionary Access Control (DAC) model is the most widely used of the three models. In this model, the owner or creator of the information (which could be a file or directory) can decide and set the access control restrictions on the file or directory that carries this information. The advantage of DAC is its flexibility: users may decide who can access the information and what privileges to give, such as read, write, delete, rename or execute.
Mandatory Access Control (MAC) takes a stricter approach to access control. Users of systems utilising MAC have little or no choice as to what access permissions they can set on their information. They have to abide by mandatory access controls specified in a system-wide security policy, which are enforced by the operating system and applied to all operations on the system.
Data classification levels (such as public, confidential, secret and top secret) are used in MAC-based systems, along with security clearance labels corresponding to those levels. These help decide what access control restrictions to enforce in accordance with the security policy set by the system administrator. In addition, access control restrictions may be imposed per group and/or per domain, i.e. apart from having the required security clearance level, users or applications must also belong to the appropriate group or domain. For example, a file that carries a 'confidential' label and belongs only to the research group cannot be accessed by a user from the marketing group, even if that user has a security clearance level higher than confidential (such as secret or top secret). This concept is known as compartmentalization or 'need to know'.
When used appropriately, MAC-based systems are usually more secure than DAC-based systems; however, they are also much more difficult to use and administer because of the additional restrictions and limitations imposed by the operating system. MAC-based systems are thus mostly used in government, military and financial environments, where more than usual security is required and where the complexity and costs can be tolerated.
In the Role-Based Access Control (RBAC) model, rights and permissions are assigned to roles instead of individual users. This added layer of abstraction permits easier and more flexible administration and enforcement of access controls. For example, access to marketing files may be restricted to the marketing manager role, and users Ann, David and Joe may be assigned that role. Later, when David moves from the marketing department elsewhere, it is enough to revoke his marketing manager role; no other changes are necessary.
When this approach is applied to an organisation with thousands of employees and hundreds of roles, the added security and convenience of RBAC becomes evident. Solaris has supported RBAC since release 8.
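As a hedged illustration, here is a minimal RBAC sketch reusing the Ann/David/Joe example above; the role and permission names are invented for the example. Note how moving David between departments changes only the role assignment, not every individual permission.

```python
# Permissions attach to roles; users are assigned roles.
ROLE_PERMISSIONS = {
    "marketing_manager": {"read_marketing_files", "edit_marketing_files"},
    "engineer":          {"read_design_docs"},
}

USER_ROLES = {
    "ann":   {"marketing_manager"},
    "david": {"marketing_manager"},
    "joe":   {"marketing_manager"},
}

def can(user, permission):
    """A user holds a permission if any of their roles grants it."""
    return any(permission in ROLE_PERMISSIONS[role]
               for role in USER_ROLES.get(user, set()))

print(can("david", "read_marketing_files"))   # True
USER_ROLES["david"] = {"engineer"}            # David changes departments
print(can("david", "read_marketing_files"))   # False: one change sufficed
```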
Further distinction should be made between centralized and decentralized (distributed) access control models. In
environments with centralized access control, a single, central entity makes access control decisions and manages the
access control system whereas in distributed access control environments, these decisions are made and enforced
in a decentralized manner. Both approaches have their pros and cons, and it is generally inappropriate to say that
one is better than the other. The selection of a specific access control approach should be made only after careful
consideration of an organisation’s requirements and associated risks.
The internet architecture itself leads to vulnerabilities in the network. Understanding the security issues of the internet greatly assists in developing new security technologies and approaches for networks with internet access, and for internet security itself. The types of attacks carried out through the internet also need to be studied so that they can be detected and guarded against.
There are many products available for ensuring network security. These tools include:
• encryption
• authentication mechanisms
• intrusion detection
• security management and firewalls, etc.
Security is typically applied on the computers connected to the network, and security protocols usually appear as part of a single layer of the OSI network reference model. Current work applies a layered approach to secure network design, in which the layers of the security model correspond to the OSI model layers; this is discussed later in this handbook.
Special security devices and technologies are also used to achieve the required network up-time of 99.999%, such as:
• Firewalls
• Intrusion Detection and Prevention Systems (IDPS)
• Virtual Private Networks (VPN)
• Tunneling
• Network Access Control (NAC)
• Security Scanners
• Protocol Analysers
• Authorization, authentication and accounting (AAA)
The most commonly used security device in networks, though, remains the firewall. There are various firewall types, such as:
• Hardware firewalls
• Server firewalls
• Personal firewalls
To select an appropriate network security solution, the following information has to be collated.
• Identifying Potential Risks for Network Security
• Asset Identification
• Vulnerability Assessment and Threat Identification
• Understanding the Network Model and Architecture
• Identification of User Productivity and Business Needs
• Identification of Legal and Regulatory Requirements
The network solution selected must keep all the above in mind.
These will be discussed in detail in the subsequent sections of this handbook.
Network security refers to any activity designed to protect your network. Specifically, these activities protect the
usability, reliability, integrity and safety of network and data. Effective network security targets a variety of threats and
stops them from entering or spreading on the network.
No single solution protects against the full variety of threats. Network security is accomplished through both hardware and software, and the software must be constantly updated and managed for protection against emerging threats.
Wireless networks, which by their nature transmit over radio and are thus easier to access, are more vulnerable than wired networks; they need to encrypt communication to deal with sniffing and to continuously check the identity of mobile nodes.
The mobility factor adds further security challenges, namely the monitoring and maintenance of secure traffic transport for mobile nodes. This concerns both homogeneous and heterogeneous (inter-technology) mobility; the latter requires homogenisation of the security level across all networks visited by the mobile node.
From the terminal’s side, it is important to protect its resources (battery, disk, CPU) against misuse and ensure
confidentiality of its data. In an ad hoc or sensor network, it becomes essential to ensure terminal’s integrity as it
plays a dual role of router and terminal.
The difficulty of designing security solutions that could address these challenges is not only to ensure robustness
faced with potential attacks or to ensure that it does not slow down communication, but also to optimise the use of
resources in terms of bandwidth, memory, battery, etc.
More importantly, in this open context the wireless network must ensure anonymity and privacy while allowing traceability for legal reasons. Indeed, the growing need for traceability is driven not only by the fight against criminal organisations and terrorists, but also by the need to minimise copyright piracy. The network therefore faces a dilemma: supporting the free exchange of information while controlling the content of communication to avoid harmful content. This actually concerns both wired and wireless networks. All these factors influence the selection and implementation of security tools, which are guided by a prior risk assessment and security policy.
A network security system usually consists of many components. Ideally, all components work together, which
minimises maintenance and improves security.
Network security components often include:
• Anti-virus and anti-spyware
• Firewall to block unauthorised access to network
• Intrusion Prevention Systems (IPS) to identify fast-spreading threats, such as zero-day or zero- hour attacks
• Virtual Private Networks (VPNs) to provide secure remote access
• Communication security
Any scheme developed for providing network security needs to be implemented at some layer of the protocol stack.
Identity and Access Management (IDAM) is the process of managing who has access to what information over time.
In other words it is the security and business discipline that “enables the right individuals to access the right resources
at the right times and for the right reasons.”
This cross-functional activity involves the creation of distinct identities for individuals and systems, as well as the
association of system and application-level accounts to these identities.
Fundamentally, IDAM attempts to address three important questions:
1. Who has access to what information?
(A robust identity and access management system will help a company not only to manage digital identities, but to
manage the access to resources, applications and information these identities require as well.)
2. Is the access appropriate for the job being performed?
(This element takes on two facets. First, is this access correct and defined appropriately to support a specific job
function? Second, does access to a particular resource conflict with other access rights, thus posing a potential
segregation of duties problem?)
3. Is the access and activity monitored, logged and reported appropriately?
(In addition to benefitting the user through efficiency gains, IDAM processes should be designed in a manner
that supports regulatory compliance. One of the larger regulatory realities is that access rights must be defined,
documented, monitored, logged and reported appropriately.)
IDAM processes are used to initiate, capture, record and manage the user identities and related access permissions
to the organisation’s proprietary information. These users may extend beyond corporate employees; for example, they could be:
• Employees
• Vendors
• Customers
• Floor Devices
• Generic administrator accounts
The means used by the organisation to facilitate the administration of user accounts and to implement proper
controls around data security form the foundation of IDAM. It addresses the need to ensure appropriate access to
resources across increasingly heterogeneous technology environments and to meet increasingly rigorous compliance
requirements.
IDAM systems should facilitate the process of user provisioning and account setup. The product should lessen the
time required with a controlled workflow that reduces errors and the potential for abuse, while enabling automated
account fulfilment. An identity and access management system should also provide administrators with the ability to
instantly view and change access rights.
It is also quite important that, within the central directory, access rights and privileges automatically match employee job title, location and business unit ID, so that access requests can be managed automatically. These small pieces of information help classify access requests relative to employees' existing positions. Some rights may be inherent in a position and provisioned automatically, while others may be allowed upon request; some requests may require review, and others may be denied or outright prohibited. All variations should be managed by the IDAM system automatically and appropriately.
In order to manage access requests, an IDAM system has to define workflows, with the option of multiple stages of review and a requirement for approval of each request. This is the mechanism that makes it possible to set risk-appropriate review processes for higher-level access, as well as reviews of already existing rights, in order to prevent privilege creep. A good IDAM system is authoritative for any organisation in securing its resources.
Components of IDAM
IDAM is the task of handling information about users on computers. This includes information that authenticates a user's identity, and information that describes the data and actions the user is authorized to access and/or perform. It also includes the management of descriptive information about the user, along with how and by whom that information can be accessed and changed. Typically, managed entities include users, hardware and network resources, and even applications. IDAM components fall into five major categories: authentication, authorization, administration, audit and the central user repository (enterprise directory).
Authentication
This area covers authentication management and session management. Authentication means ensuring that the person who logs on to a system is who they say they are. Generally, this is done by having the user provide credentials, such as a username and password, which in combination give a certain assurance of the authenticity of the person logging on, in order to grant initial access to a particular resource or application system.
After the user is authenticated, a session is created and referenced throughout the interaction between the user and the application system, until the session is logged off by the user or terminated by other means (e.g. timeout).
user ID/ password authentication method is used, the authentication module comes with a password service module.
By maintaining the user's session centrally, the authentication module provides a Single Sign-On service, so that the user is not required to log on again while accessing another system or application governed under the same IDAM framework.
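As a minimal illustration of credential checking followed by session creation, consider the Python sketch below. It is only a sketch: the user_db structure, iteration count and token format are illustrative assumptions, not part of any particular IDAM product.

import hashlib
import hmac
import secrets

SESSIONS = {}  # token -> username; stands in for a central session store

def hash_password(password: str, salt: bytes) -> bytes:
    # Derive a password hash with PBKDF2-SHA256 (100,000 iterations is a common baseline)
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def authenticate(username: str, password: str, user_db: dict):
    # user_db maps username -> (salt, stored_hash); returns a session token on success
    record = user_db.get(username)
    if record is None:
        return None
    salt, stored_hash = record
    if hmac.compare_digest(hash_password(password, salt), stored_hash):
        token = secrets.token_hex(16)  # the session referenced on later requests
        SESSIONS[token] = username
        return token
    return None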
Authorization
Authorization is the module that determines whether a user is permitted to access a specific resource. It covers the parameters placed around what a user is allowed to do after authentication. Authorization is concerned not with who users are, but with what they are permitted to do once they have logged on. Authorization can be affected by many different variables, from file and application permissions and sharing to very precisely defined access rules based on role, location and even circumstance.
Authorization is generally performed by checking the resource access request, typically expressed as a URL in a web-based application, against the authorization policies stored in an IAM policy store. Authorization is the core module applying role-based access control. Beyond this, the authorization model can provide more complex access controls based on data or policies, including user attributes, user roles/groups, actions taken by the user, access channels, time, the requested resource, external data and business rules.
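A minimal sketch of a role-based authorization check follows; the role and permission names are hypothetical, chosen only to illustrate the lookup.

ROLE_PERMISSIONS = {
    "hr_manager": {"payroll:read", "payroll:write"},
    "auditor": {"payroll:read", "logs:read"},
}

def is_authorized(user_roles, requested_permission):
    # Grant access if any of the user's roles carries the requested permission
    return any(requested_permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)

print(is_authorized(["auditor"], "logs:read"))      # True
print(is_authorized(["auditor"], "payroll:write"))  # False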
Administration
The zone of administration covers user management, password management, role/group management and user/group provisioning. The user management module defines the set of administrative functions, such as identity creation, propagation and maintenance of user identities, and the management of benefits and privileges. One of its parts is user life cycle management, which enables an enterprise to manage the life span of a user account, from the initial stage of provisioning to the final stage of de-provisioning. User management needs a coordinated workflow capability to approve some user actions, such as user account provisioning and de-provisioning. Some of the user management functions must be centralized, while others can be delegated to end-users.
Delegated administration enables an enterprise to distribute workload directly to user departmental units, and can also improve the accuracy of system data by assigning the responsibility for updates to the persons closest to the situation and the information.
Self-service is another key concept within user management. Through self-profile management service an enterprise
can benefit from timely update and accurate maintenance of identity data. Another popular self-service function is
self-password reset, which significantly eases the help-desk workload to handle password reset requests.
Audit
Audit includes those activities that help “prove” that authentication, authorization and administration are performed at a sufficient level of security, measured against a set of standards. It may concern ensuring regulatory compliance, satisfying a best-practice framework such as ITIL, or simply conforming to internally developed security standards or policies.
An algorithm can be thought of as the link between the programming language and the application. An algorithm
is a fancy to-do list for a computer. Algorithms take in zero or more inputs and give back one or more outputs.
A recipe is a good example of an algorithm because it tells you what you need to do step by step. It takes inputs
(ingredients) and produces an output (the completed dish).
The words 'algorithm' and 'algorism' come from the name of a Persian mathematician called Al-Khwarizmi
(Persian: c. 780–850).
An algorithm is a step-by-step procedure that defines a set of instructions to be executed in a certain order to get the desired output. Algorithms are generally created independently of underlying languages, i.e. an algorithm can be implemented in more than one programming language.
From the data structure point of view, following are some important categories of algorithms −
• Search − Algorithm to search an item in a data structure.
• Sort − Algorithm to sort items in a certain order.
• Insert − Algorithm to insert item in a data structure.
• Update − Algorithm to update an existing item in a data structure.
• Delete − Algorithm to delete an existing item from a data structure.
Characteristics of an Algorithm
Not all procedures can be called an algorithm. An algorithm should have the following characteristics:
• Unambiguous − Algorithm should be clear and unambiguous. Each of its steps (or phases), and their inputs/
outputs should be clear and must lead to only one meaning.
• Input − An algorithm should have 0 or more well-defined inputs.
• Output − An algorithm should have 1 or more well-defined outputs, and should match the desired output.
• Finiteness − Algorithms must terminate after a finite number of steps.
• Feasibility − Should be feasible with the available resources.
• Independent − An algorithm should have step-by-step directions, which should be independent of any
programming code.
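For instance, the Search category above can be illustrated with a short Python sketch that also exhibits these characteristics: well-defined input and output, unambiguous steps and guaranteed termination. The function name and data are illustrative only.

def linear_search(items, target):
    # Return the index of target in items, or -1 if it is absent
    for index, value in enumerate(items):  # finite: one pass over the input
        if value == target:                # unambiguous: a single comparison per step
            return index
    return -1

print(linear_search([4, 8, 15, 16, 23, 42], 16))  # prints 3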
AES Encryption
The Rijndael algorithm came out on top of several competitors and was officially announced as the new encryption standard, AES, in 2001. The
algorithm is based on several substitutions, permutations and linear transformations, each executed on data blocks of
16 byte – therefore the term blockcipher. Those operations are repeated several times, called “rounds”. During each
round, a unique roundkey is calculated out of the encryption key, and incorporated in the calculations. Based on the
block structure of AES, the change of a single bit, either in the key, or in the plaintext block, results in a completely
different ciphertext block – a clear advantage over traditional stream ciphers. The difference between AES-128, AES-
192 and AES-256 finally is the length of the key: 128, 192 or 256 bit – all drastic improvements compared to the 56
bit key of DES. By way of illustration: Cracking a 128 bit AES key with a state-of-the-art supercomputer would take
longer than the presumed age of the universe. And Boxcryptor even uses 256 bit keys. As of today, no practicable
attack against AES exists. Therefore, AES remains the preferred encryption standard for governments, banks and high
security systems around the world.
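The avalanche behaviour described above can be observed directly. The sketch below assumes the third-party pyca/cryptography package is installed; ECB mode is used here only to isolate a single block transformation, never for protecting real data. It encrypts two one-block messages that differ in a single bit.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)          # a random AES-128 key
block = b"sixteen byte msg"   # exactly one 16-byte block

def encrypt_block(data: bytes) -> bytes:
    # Encrypt a single 16-byte block with AES
    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return encryptor.update(data) + encryptor.finalize()

flipped = bytes([block[0] ^ 0x01]) + block[1:]  # flip one bit of the plaintext
print(encrypt_block(block).hex())
print(encrypt_block(flipped).hex())  # a completely different ciphertext block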
RSA Encryption
RSA is one of the most successful, asymmetric encryption systems today. Originally discovered in 1973 by the British
intelligence agency GCHQ, it received the classification “top secret”. We have to thank the cryptologists Rivest,
Shamir and Adleman for its civil rediscovery in 1977. They stumbled across it during an attempt to solve another
cryptographic problem.
As opposed to traditional, symmetric encryption systems, RSA works with two different keys: A public and a private
one. Both work complementary to each other, which means that a message encrypted with one of them can only be
decrypted by its counterpart. Since the private key cannot be calculated from the public key, the latter is generally
available to the public.
Those properties enable asymmetric cryptosystems to be used in a wide array of functions, such as digital signatures.
In the process of signing a document, a fingerprint encrypted with RSA, is attached to the file, and enables the
receiver to verify both the sender and the integrity of the document. The security of RSA itself is mainly based on
the mathematical problem of integer factorization. A message that is about to be encrypted is treated as one large
number. When encrypting the message, it is raised to the power of the key, and divided with remainder by a fixed
product of two primes. By repeating the process with the other key, the plaintext can be retrieved again. The best
currently known method to break the encryption requires factorizing the product used in the division. Currently, it
is not possible to calculate these factors for numbers greater than 768 bits. That is why modern cryptosystems use a
minimum key length of 3072 bits.
Public key cryptography can play an important role in helping provide the needed security services, including
confidentiality, authentication, digital signatures, and integrity. Public key cryptography uses two electronic keys: a
public key and a private key. These keys are mathematically related, but the private key cannot be determined from
the public key. The public key can be known by anyone while the owner keeps the private key secret.
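The modular arithmetic described above can be demonstrated with a textbook-sized example in Python. The tiny primes below are for illustration only; real RSA keys use primes hundreds of digits long. (The three-argument pow for the modular inverse requires Python 3.8 or later.)

# Key generation with toy primes
p, q = 61, 53
n = p * q                 # modulus: 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, chosen coprime to phi
d = pow(e, -1, phi)       # private exponent: modular inverse of e (2753)

message = 65                        # the message treated as one large number < n
ciphertext = pow(message, e, n)     # raise to the power of the public key modulo n
recovered = pow(ciphertext, d, n)   # raise to the power of the private key modulo n
print(ciphertext, recovered)        # 2790 65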
A Public Key Infrastructure (PKI) provides the means to bind public keys to their owners and helps in distribution of
reliable public keys in large heterogeneous networks. Public keys are bound to their owners by public key certificates.
These certificates contain information such as the owner's name and the associated public key and are issued by a
reliable certification authority (CA).
Let us look at each of these in greater detail in the next Chapter.
Applications are a type of software that allows people to perform specific tasks using various ICT devices.
• Applications could be for computers (desktops, laptops, etc.)
• Applications could be for mobile devices (smartphones, iPads, etc.)
• Applications could be for running on the internet (web applications)
• Applications could also be run on the cloud
Almost every application has vulnerabilities. Common software vulnerabilities in application security include SQL
injection, Cross-Site Request Forgery (CSRF) and Cross-Site Scripting (XSS). We will learn more about them in a later unit.
Organizations use Application security, or “AppSec,” to protect their critical data from external threats by ensuring the
security of all the software used to run the business. This software could be built internally, bought or downloaded.
Application security helps identify, fix and prevent security vulnerabilities in any kind of software application.
There are also many tools and technologies to address application security, yet it is very important to always start with
a strong strategy. At a high level, the strategy should address, and continuously improve, these basic steps:
• identification of vulnerabilities,
• assessment of risk,
• fixing flaws,
• learning from mistakes and better managing future development processes.
Countermeasures are actions taken to ensure application security:
• An ‘application firewall’ is the most basic software countermeasure; it limits the execution of files and the handling of data by specific installed programs.
• A router, which is also the most common hardware countermeasure, can prevent the IP address of an individual computer from being directly visible on the Internet.
• Conventional firewalls, encryption/decryption programs, anti-virus programs, spyware detection/removal
programs and biometric authentication systems are some of the other countermeasures.
Application security can be enhanced by Threat Modelling, which involves following certain steps rigorously, which
are:
• defining enterprise assets,
• identifying what each application does (or will do) with respect to these assets,
• creating a security profile for each application,
• identifying and prioritizing potential threats and documenting adverse events and the actions taken in each
case.
In this context, a threat is any potential or actual adverse event that can compromise the assets of an enterprise,
including both malicious events, such as a denial-of-service (DoS) attack, and unplanned events, such as the failure
of a storage device.
Apart from these, there are technologies available to assess applications for security vulnerabilities, which include the following:
• Static analysis (SAST), or “white-box” testing, analyzes applications without executing them.
• Dynamic analysis (DAST), or “black-box” testing, identifies vulnerabilities in running web applications.
• Interactive AST (IAST) technology combines elements of SAST and DAST and is implemented as an agent
within the test runtime.
• Mobile behavioral analysis discovers risky actions of mobile apps.
• Software composition analysis (SCA) analyzes open source and third party components.
• Manual penetration testing (or “pen testing”) uses the same methodology cybercriminals use to exploit
application weaknesses.
• Web application perimeter monitoring discovers all public-facing applications and the most exploitable
vulnerabilities.
• Runtime application self-protection (RASP) is built into an application and can detect and prevent real-time
application attacks.
There are a range of application security technologies available to secure applications, yet none of them is foolproof. It is important to apply one's skill and knowledge of multiple analysis techniques throughout the application's lifetime to bring down application risk. We will learn more about this in a later unit on Application Security.
Data centres house an organisation's critical IT equipment: servers, storage subsystems, networking switches, routers and firewalls, as well as the cabling and physical racks used to organize and interconnect the IT equipment. They also contain infrastructure for power distribution and supplemental power, which includes electrical switching, uninterruptable power supplies and backup generators, as well as ventilation and cooling systems.
A business relies heavily on the services of the data center for its day to day work. With the intensive use of technology
and IT systems by businesses, the data centre has become a critical asset and businesses cannot afford any downtime
or inefficiency in its functioning.
Securing data centres from security and safety threats has therefore become crucial.
The risks that threaten a data centre could be risks to the data as well as the equipment.
These risks would include disasters like floods and fire, as well as attacks by malicious third parties and even
unauthorized members of staff entering the secure area, who accidentally or deliberately tamper with the equipment.
An attacker who gains physical or virtual access to the data centre could inflict damage leading to denial of service (DoS), theft of confidential information, data alteration, data loss, etc.
Data center security involves the formation of security policies, precautions and practices that have to be implemented
to disallow unauthorized access and manipulation of a data center's resources.
All physical access has to be controlled completely. Identities have to be confirmed via biometrics, access cards, etc.,
and all activities in and around that data centre can be recorded through CCTV.
Some measures commonly adopted for data centre security are as follows:
• Restriction of access to the data centre to selected people by maintaining up-to-date access lists and
using access control technologies like locked doors, turnstiles and fingerprint, RFID tagging, voice or DNA
authentication through biometric access control systems.
• Further, every data center must follow a “Zero Trust” logical security procedure that includes multi-factor
authentication. Every access point should require two or more forms of identification or authorization.
• Round-the-clock surveillance using interior and exterior high-resolution cameras.
• Presence of security personnel.
• The network and data must also be safe from attack using firewalls, anti-virus software, IP network information
security, intrusion detection, alerts to network events and real-time visibility into routing and traffic anomalies.
• For cloud customers, a cloud-based service such as Alert Logic can be used to detect security breaches.
• Data centres also use threat manager systems to automatically identify behaviour patterns missed by traditional network security products.
• Many data centre owners are now using smart monitoring features, including Relentless Intrusion Detection, which quickly alerts if human attackers, network worms or bots are attacking the system.
There should be a comprehensive and co-ordinated plan in which every aspect of a data center's security works together. This is called a layered security system. The aim of such a system is that a potential intruder is faced with several layers of security that they have to breach before they can reach valuable data or hardware assets in the data centre. If one layer proves to be ineffective, the other layers will still serve the purpose of protecting the entire system.
Cloud computing services are now used by most businesses and individuals, so the security of data, systems and applications against data theft, leakage, corruption and deletion has become an important concern.
Cloud computing security, also called cloud security involves the procedures and technology that secure cloud
computing environments against both external and insider cybersecurity threats.
Most cloud providers attempt to create a secure cloud for customers, but they cannot control how users use the service. Users can weaken cloud security through poor configuration, careless handling of sensitive data, and weak access policies.
In each public cloud service type, the cloud provider and cloud customer share different levels of responsibility for
security. A key difference between SaaS, PaaS, and IaaS is the level of control (and responsibility) that the enterprise
has in the cloud stack:
• Software-as-a-service (SaaS) — The cloud provider is typically responsible for providing security for the entire
technology stack from data center up to the application, whereas the customers are responsible for securing
their data and user access.
• Platform-as-a-service (PaaS) — The cloud service provider is often responsible for security for the technology
stack from data center to runtime, while the customers are responsible for securing their data, user access,
and applications.
• Infrastructure-as-a-service (IaaS) — The cloud provider manages the virtualization, servers, storage, networking,
and data center, while the customers are responsible for securing their data, user access, applications,
operating systems, and virtual network traffic.
Within all types of public cloud services, customers are responsible for securing their data and controlling who can
access that data.
Cloud security solutions
Cloud security solutions can consist of a set of policies, controls, procedures and technologies that work together to
protect cloud-based systems, data, and infrastructure. These security measures are configured to protect cloud data,
support regulatory compliance and protect customers' privacy as well as setting authentication rules for individual
users and devices. From authenticating access to filtering traffic, cloud security can be configured to the exact needs
of the business.
Organizations seeking cloud security solutions should consider the following criteria to solve the primary cloud
security challenges of visibility and control over cloud data.
A complete view of cloud data requires direct access to the cloud service. Cloud security solutions accomplish this
through an application programming interface (API) connection to the cloud service. With an API connection it is
possible to view:
• What data is stored in the cloud
• Who is using cloud data
• The roles of users with access to cloud data
• Who cloud users are sharing data with
• Where cloud data is located
• Where cloud data is being accessed and downloaded from, including from which device
Once you have visibility into cloud data, apply the controls that best suit your organization. These controls include:
• Data classification — Classify data on multiple levels, such as sensitive, regulated, or public, as it is created in
the cloud. Once classified, data can be stopped from entering or leaving the cloud service.
• Data Loss Prevention (DLP) — Implement a cloud DLP solution to protect data from unauthorized access and
automatically disable access and transport of data when suspicious activity is detected.
• Collaboration controls — Manage controls within the cloud service, such as downgrading file and folder
permissions for specified users to editor or viewer, removing permissions, and revoking shared links.
41
Unit 1 - Introduction to Cyber Security
• Encryption — Cloud data encryption can be used to prevent unauthorized access to data, even if that data is
exfiltrated or stolen.
As with in-house security, access control is a vital component of cloud security. Typical controls include:
• User access control — Implement system and application access controls that ensure only authorized users
access cloud data and applications. A Cloud Access Security Broker (CASB) can be used to enforce access
controls.
• Device access control — Block access when a personal, unauthorized device tries to access cloud data.
• Malicious behavior identification — Detect compromised accounts and insider threats with user behavior
analytics (UBA) so that malicious data exfiltration does not occur.
• Malware prevention — Prevent malware from entering cloud services using techniques such as file-scanning,
application whitelisting, machine learning-based malware detection, and network traffic analysis.
• Privileged access — Identify all possible forms of access that privileged accounts may have to your data and
applications and put in place controls to mitigate exposure.
Existing compliance requirements and practices should be augmented to include data and applications residing in
the cloud.
• Risk assessment — Review and update risk assessments to include cloud services. Identify and address risk
factors introduced by cloud environments and providers. Risk databases for cloud providers are available to
expedite the assessment process.
• Compliance Assessments — Review and update compliance assessments for PCI, HIPAA, Sarbanes-Oxley and
other application regulatory requirements.
SUMMARY
• Information/ Cyber security is the practice of defending information from unauthorized access, use, disclosure,
disruption, modification, perusal, inspection, recording or destruction.
• Information/ Cyber security comprises Network security, Application security, Data protection and privacy, Identity and access management, Cyber assurance/GRC, IT forensics, Incident management, BCM/DR, Endpoint security, Security operations and Industrial control security.
• At any given moment, information is being transmitted, stored or processed. The three states exist irrespective
of the media in which information resides.
• The information security triad shows the three primary goals of information security: confidentiality, integrity
and availability. When these three tenets are put together, information will be well protected.
• The cyber security concepts comprise Identification, Authentication, Authorisation, Confidentiality, Integrity, Availability and Non-Repudiation.
• Risk is a function of threats exploiting vulnerabilities to obtain, damage or destroy assets.
• The types of threats can be categorized as STRIDE, based on the initials of the threat categories.
• There are 4 different types of attacks: Network attacks, Application attacks, Phishing attacks and Malware.
• Cyber security controls help users to manage their risk and protect their critical data assets from intrusions, security incidents and data loss.
• The types of control can be classified on the basis of Functionality and Plane of Application.
• Logical security controls are those that restrict the access capabilities of users of the system and prevent
unauthorized users from accessing the system. Logical security controls may exist within the operating system.
• Controls that protect against threats like Physical damage from natural disasters are called physical security
controls.
• Some tools and techniques for cyber security are: security vulnerability management, vulnerability assessment, security testing, remediation planning and access control models.
• Security vulnerability management is a closed-loop workflow that generally includes identifying networked
systems and associated applications, auditing (scanning) the systems and applications for vulnerabilities and
remediating them.
• The Vulnerability Assessment involves Risk assessment and Risk analysis.
• Security testing validates that an application does not have code issues that could allow unauthorized access to data and potential data destruction or loss.
• The most common types of attacks are state-sponsored attacks, Advanced Persistent Threats, ransomware and denial of service.
• The various types of testing include: 1) Vulnerability and security scanning: application code is compared against known vulnerability signatures. 2) Penetration testing: penetration testing simulates an attack by a hacker. 3) Security auditing: security auditing is a code review designed to find security flaws. 4) Ethical hacking: authorized hacking carried out to uncover weaknesses before malicious attackers can exploit them.
• The various steps of remediation planning involve prioritization and root cause analysis.
• Access control models include the Discretionary Access Control model, the Mandatory Access Control model and the Role-Based Access Control model.
• Access control models define how computers enforce access of subjects to objects
• An effective network security plan is developed with an understanding of business objectives and priorities, security issues, potential attackers, the needed level of security, and the factors that make a network vulnerable to attack.
• The Network Layer is Layer 3 of the Open Systems Interconnection (OSI) communications model. Its primary function is to move data into and through other networks.
• Layer 3 can provide various features such as quality of service management, load balancing and link management, security, and the interrelation of different protocols and subnets with different schema.
• Identity and Access Management (IDAM) is the process of managing who has access to what information over
time.
• IDAM attempts to address three important questions: 1. Who has access to what information 2. Is the access
appropriate for the job being performed? 3. Is the access and activity monitored, logged and reported
appropriately?
• Identity and access management involves four basic functions – 1) Identity management: creation, management and deletion of identities without regard to access; 2) User access (logon): for example, a smart card and its associated data used by a customer to log on to a service or services; 3) Privileged identity: focuses solely on identity management for privileged accounts, the powerful accounts used by IT administrators; 4) Identity federation: a system that relies on federated identity to authenticate a user without knowing his or her password.
• The various components of IDAM are classified into 5 main categories – 1) Authentication: authentication management and session management are covered in this area; 2) Authorization: a module which helps in determining whether a user is given permission to access a specific resource; 3) Administration: contains user management, password management, role/group management and user/group provisioning; 4) Audit: includes those activities that help “prove” that authentication, authorization and administration are performed at a sufficient level of security; 5) Central User Repository: stores and delivers identity information to other services, and provides a service to verify credentials submitted from clients.
KNOWLEDGE CHECK
Q.1. State the importance of cyber security to Government, Organisations and individuals.
Q.2. Match the following terms related to cyber-crimes and cyber security with their explanations.
TERMS – EXPLANATIONS
A. VULNERABILITY: This is a path or a tool that a threat actor uses to attack the target.
B. THREAT AGENT OR ACTOR: This is anything of value to the threat actor such as PC, laptop, PDA, tablet, mobile phone, online bank account or identity.
C. THREAT VECTOR: This refers to the intent and method targeted at the intentional exploitation of the vulnerability or a situation and method that may accidentally trigger the vulnerability.
D. THREAT TARGET: This is a weakness in an information system, system security procedures, internal controls or implementations that are exposed.
E. CONFIDENTIALITY: Ensuring authorized access of information assets when required for the duration required.
F. INTEGRITY: The first step in the ‘identify-authenticate-authorise’ sequence that is performed when access to information or information processing resources is required.
G. AVAILABILITY: The process of ensuring that a user has sufficient rights to perform the requested operation, and preventing those without sufficient rights from doing the same.
H. IDENTIFICATION: Refers to one of the properties of cryptographic digital signatures that offer the possibility of proving whether a message has been digitally signed by the holder of a digital signature’s private key.
I. AUTHENTICATION: Prevention of unauthorized disclosure or use of information assets.
J. NON-REPUDIATION: Verifies the identity by ascertaining what you know, what you have and what you are.
Q.3. Select the right choice from the following multiple choice questions.
A. Which of the following are key concerns for the security of information assets?
i. Theft
ii. Fraud/ forgery
iii. Unauthorized information access
iv. Interception or modification of data and data management systems
v. All of the above
B. Information at any point of time can be present in 3 states. Which of the following options rightly depicts these states?
i. Confidentiality, Integrity and Availability
ii. Confidentiality, Integrity and Transmission
iii. Transmission, Processing and Storage
iv. Availability, Processing and Storage
v. None of the above
C. What is the primary objective of cyber security controls? Pick the most appropriate option.
i. To help control data and personnel that come into and go out of the organization.
ii. To help manage risk and protect critical data assets from intrusions, security incidents and data loss.
iii. To help keep a control on the cyber security solutions being implemented in order to secure the
data assets.
iv. To help the government ensure that organisations and individuals are following the national cyber
security policy.
D. Which of the following best states the relationship between assets, vulnerabilities, threats and risks:
i. Asset + Threat + Vulnerability = Risk
ii. Risk + Threat + Asset = Vulnerability
iii. Threat + Vulnerability + Risk = Asset
iv. Vulnerability + Asset + Risk = Threat
Q.4. Given below are some security controls. Mention against each, by functionality, which type of control it falls under. It could be more than one.
A. Doors : _____________________________
B. Security procedures and authentication : _____________________________
C. Cryptographic checksums : _____________________________
D. File integrity checkers : _____________________________
E. Audit trails and logs : _____________________________
F. Notices of monitoring and logging : _____________________________
G. Visible practice of sound cyber security management : _____________________________
H. Disaster recovery and business continuity mechanisms : _____________________________
I. Backup systems and data : _____________________________
Q.5. Describe in brief the following Tools and Techniques of Cyber Security
A. Security Vulnerability Management
B. Vulnerability Assessment
C. Security Testing
Q.6. State the various types of Cyber Security Controls by “Functionality” and by “Plane of Application”
By Functionality:
1. ______________________________
2. ______________________________
3. ______________________________
4. ______________________________
5. ______________________________
6. ______________________________
By Plane of Application:
1. ______________________________
2. ______________________________
3. ______________________________
Q.7. Complete the threat classification called STRIDE from the initials of threat categories:
S__________________________________
T__________________________________
R__________________________________
I __________________________________
D__________________________________
E___________________________________
Q.8. For each of the attacks mentioned below, identify if it is a Network Attack, Application Attack, Phishing Attack
or a Malware.
A. Cross-Site Scripting : _____________________________
B. Buffer overflow attack : _____________________________
C. Trojan Horse : _____________________________
D. HTTP flood : _____________________________
E. Watering hole attack : _____________________________
F. Social phishing : _____________________________
G. Worm : _____________________________
H. Spear phishing attack : _____________________________
I. Whaling : _____________________________
J. Virus : _____________________________
K. Vishing : _____________________________
L. Eavesdropping : _____________________________
M. Spoofing : _____________________________
N. Network Sniffing (Packet Sniffing) : _____________________________
O. Data Modification : _____________________________
P. Denial of Service attack : _____________________________
Q. Man-in-the-middle attack : _____________________________
R. Compromised-Key Attack : _____________________________
S. Injections : _____________________________
UNIT 2
CRYPTOGRAPHY
• Explain the importance of cryptography and the areas of implementation
• State the components of a cryptographic system and their functions
• State the key mechanisms used by cryptographers
• Explain the different types of encryption schemes and standards
• Identify the applications of cryptographic algorithms and biometric authentication
• Identify the exchange of keys and user verification while communicating with the server
• Compute the keys using Diffie-Hellman Key Exchange algorithm
• Perform computation in RSA algorithm for encryption and decryption
• Use graphical password and textual passwords for signing into websites
• Interpret SHA algorithm from RFC standards available on IETF website
• Implement the steps involved in the SHA algorithm by taking a sample message
• Perform the various steps such as list, generate, import and exporting of keys
To make it simpler, let us take an example for a better understanding of the concept.
Suppose a person A wants to send a piece of information to a person B over a public network. When A sends the information, the data is converted into what we call ciphertext using an encryption algorithm. This step is vital to ensure that the data is protected and cannot be hacked or stolen by a third party or a foreign entity. If person B wants to obtain the information in its original form, the encrypted data must be decrypted using the decryption algorithm. This process involves sharing what we call a “key” that is private to the communicating parties. Person A proposes a key and shares it with person B to enable access. The key helps in facilitating the authentication process and ensuring that the data stays protected. Since the key is private to both A and B, no other entity can access the information sent over the communication network. Similarly, person B will follow the same process of authentication when sharing information with person A, and so on. Hence, cryptography is a key contributor to ensuring data integrity and confidentiality.
Goals of Cryptography
• Confidentiality: To protect our confidential information from malicious actions during storage as well as
transfer. Example: Hiding of customers’ data by Banks, hiding sensitive military information.
• Integrity: Changes in the information can be done only through authorised entities and mechanisms. Example:
Money withdrawn and account updated in Banks.
• Availability: Correct information is available to the authorized entities whenever needed. Example: Accessing
bank account for transactions.
Security Mechanisms
There are several components within a cryptographic system such as:
• Plaintext: The original data that is sent by the actual source over the communication network is called plaintext. The plaintext is converted into ciphertext for secure communication.
• Ciphertext: When the plaintext undergoes encryption, it is converted into a secret code that is called ciphertext.
For the receiver to interpret this information, cipher-text is decrypted using cryptographic algorithms.
• Encryption: The process of conversion of plaintext into ciphertext is called encryption. Encryption ensures that
the information is protected so that data confidentiality and integrity is maintained.
• Decryption: The process of converting the ciphertext into plaintext is called decryption. Decryption is done to
retrieve the original information that has been sent by the actual source.
• Key: When the plaintext (message) gets encrypted, the sender uses a “key” for encryption. This key is also used
by the receiver to decrypt the information shared by the sender. In simple terms, the key used for encryption is
called encryption key and the key used for decryption is called decryption key respectively.
The following security mechanisms are used by cryptographers:
• Encipherment: It is defined as the hiding or covering of data which provides confidentiality. Cryptography and
steganography are the two techniques that use this.
• Data Integrity: A check-value of the original message is created and transferred along with the message. After receiving the message, a new check-value is computed for the received message. If both check-values (old and new) are the same, the integrity of the data is maintained. (A sketch of this idea follows this list.)
• Digital Signature: Also known as an electronic signature; the sender signs the document using his/her private key, and the corresponding public key is made available along with the document. The receiver uses the sender’s public key to verify the signature, which proves that the document was indeed sent by the sender.
• Authentication Exchange: To prove that the two communicating entities are authentic, some secret information (which only the two of them know) can be used as a key.
• Traffic Padding: Insertion of bogus data into the main message to hide the pattern of the data being transferred.
• Routing Control: This mechanism helps in changing and selecting different routes during the transmission of
data to avoid attacks.
• Notarization: This mechanism involves a third party as a witness to the communication between the sender and the receiver, so that neither of them can later deny the conversation.
• Access Control: Only authorised users have access to the data. This can be enforced through PINs and passwords.
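The check-value idea mentioned under Data Integrity can be sketched in a few lines of Python. This sketch uses a plain SHA-256 digest as the check-value; real systems typically use keyed variants such as HMAC so an attacker cannot recompute the tag.

import hashlib

def check_value(message: bytes) -> str:
    # The digest acts as the check-value transferred along with the message
    return hashlib.sha256(message).hexdigest()

sent = b"transfer 100 to account 42"
tag = check_value(sent)              # computed by the sender

received = b"transfer 100 to account 42"
print(check_value(received) == tag)  # True only if the message was not altered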
In the previous section we have come across the basic terminologies used in cryptography. We also understood
the importance of encryption in a communication network. In this section, we will discuss the types of encryption
techniques such as symmetric encryption and asymmetric encryption. Symmetric encryption is an encryption scheme
that utilises the same key for performing encryption and decryption. The other name given to symmetric encryption
is conventional encryption. Among the various attacks that exist, the two types of attacks that are quite common are
cryptanalysis and brute force. The former exploits the properties of encryption algorithm whereas the latter tries all
possible keys to enter into the communication system.
Private key encryption involves the sharing of a single key between the sender and the receiver. Since this type of
encryption uses a single key, it is a relatively fast mode of communication. However, an attacker can get into the cryptographic system if the key is stolen or leaked to an unauthorised entity that is not involved in the communication process.
An alternative approach is used in Public Key Infrastructure, commonly referred to as PKI, where two keys are used: a private key and a public key.
o The public key is distributed and known to all,
o The private key is never shared with non-communicating entities.
To understand the role of PKI, let’s take an example. When someone makes an online purchase, they use Secure Sockets Layer (SSL), a standard protocol for the secure transmission of documents, to encrypt the web session between their browser and the website. PKI is used to establish this type of communication. We will discuss SSL and PKI in greater detail later in this unit.
It is impossible to imagine our lives without the Internet which we all know is prone to various types of attacks. In
today’s world, we are largely dependent on the Internet for facilitating various requirements. We love to do online
shopping, send emails, be active on social media, and so on. The question is: are we really aware of the threats that exist when we go online? There is a chance that our accounts may be accessed by someone and the contents of our information exposed. So, how do we stay immune when communicating online? Cryptographic algorithms and protocols have been crucial in establishing trustworthy communication, and they help in preserving our information while we communicate over an untrusted network such as the Internet.
The type of encryption where the same key is used for encrypting the plaintext and decrypting the ciphertext is called
Symmetric Encryption or Symmetric Key Cryptography. The study of symmetric cryptosystems is called symmetric
cryptography. This type of encryption technique is used in hiding streams of data of different sizes, which could be
files, messages, passwords or encryption keys.
Some examples of this technique include the Data Encryption Standard (DES), Triple-DES (3DES), IDEA, and BLOWFISH.
TRADITIONAL SYMMETRIC CIPHERS
Substitution ciphers are those ciphers that substitute one alphabet of the plaintext with another alphabet. Substitution ciphers are further divided into the following:
1. Mono-alphabetic Cipher
In mono-alphabetic cipher, each symbol in plain-text is mapped to one cipher-text symbol. For example, if the
word in plaintext is ‘read’, then the word in ciphertext (as per mapping criteria) can be ‘tayd’. If a word contains
repeated alphabets, then the mapping will remain same for each repetition. For example, ‘balloon’ will be
encrypted as ‘gyeeuuk.’ Hence, mapping between plaintext and ciphertext is one-to-one.
Various Types of mono-alphabetic ciphers are:
Additive Cipher (Shift Cipher/Caesar Cipher)
As the name suggests, the key is added to the plaintext to obtain ciphertext and the same key is subtracted
from the ciphertext to obtain plaintext. This is the simplest form of mono-alphabetic substitution cipher.
For example:
Plaintext:  A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Ciphertext: D E F G H I J K L M N O P Q R S T U V W X Y Z A B C
Since the key space contains only 26 keys, this cipher is not very secure and is prone to brute-force attack.
Variations of the additive cipher:
Caesar Cipher: This was used by Julius Caesar for his secret communication. The key is always 3.
Shift Cipher: Since the additive cipher shifts characters towards the end of the alphabet, it is also known as the shift cipher.
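A short sketch of the additive cipher in Python follows (letters are numbered 0–25; non-letter characters are dropped for simplicity):

def additive_encrypt(plaintext: str, key: int) -> str:
    # Shift each letter forward by `key` positions, wrapping past Z
    return "".join(chr((ord(c) - ord("A") + key) % 26 + ord("A"))
                   for c in plaintext.upper() if c.isalpha())

def additive_decrypt(ciphertext: str, key: int) -> str:
    # Decryption is encryption with the key negated
    return additive_encrypt(ciphertext, -key)

print(additive_encrypt("ATTACK", 3))   # DWWDFN
print(additive_decrypt("DWWDFN", 3))   # ATTACK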
57
Unit 2 - Cryptography
Multiplicative Cipher
In multiplicative cipher, the key is multiplied with the plaintext to produce ciphertext. For obtaining plaintext,
the inverse of the key is multiplied with the ciphertext.
Encryption: C = (P * k) mod n
Decryption: P = (C * k^(-1)) mod n, where k^(-1) is the multiplicative inverse of k modulo n
For example:
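With k = 7 and n = 26 (an illustrative choice of key; letters A–Z are numbered 0–25): the letter H (7) encrypts to C = (7 * 7) mod 26 = 23, i.e. X. The multiplicative inverse of 7 modulo 26 is 15 (since 7 * 15 = 105 ≡ 1 mod 26), so decryption gives P = (23 * 15) mod 26 = 345 mod 26 = 7, i.e. H again. Note that k must be coprime with 26 for the inverse to exist.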
Affine Cipher
Affine cipher uses two different types of keys (for example, a and b) simultaneously for encryption and
decryption. These keys are used as part of an equation.
For example:
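With keys a = 5 and b = 8 (an illustrative choice): encryption is C = (a * P + b) mod 26, so A (0) becomes (5 * 0 + 8) mod 26 = 8, i.e. I. Decryption uses P = a^(-1) * (C - b) mod 26, where a^(-1) = 21 (since 5 * 21 = 105 ≡ 1 mod 26); applying this to I (8) gives 21 * (8 - 8) mod 26 = 0, i.e. A again.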
58
Unit 2 - Cryptography
2. Poly-alphabetic Cipher
In poly-alphabetic cipher, each occurrence of a character may have a different substitute. For example, ‘balloon’
in plaintext is written as ‘hwtyufo.’ Hence, mapping between plaintext and ciphertext is one-to-many.
Auto-key cipher
An autokey cipher incorporates the plaintext message into the key. The key is generated from the message in
some automated fashion, sometimes by selecting certain letters from the text or, more commonly, by adding
a short primer key to the front of the message.
For example:
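With primer key N and plaintext HELLO (an illustrative choice): the key stream is the primer followed by the plaintext itself, i.e. NHELL. Adding key to plaintext letter by letter mod 26 gives H+N = U, E+H = L, L+E = P, L+L = W and O+L = Z, so the ciphertext is ULPWZ.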
Vigenere Cipher
The encryption and decryption of text is done using Vigenere square or Vigenere table. The table consists of
the alphabets written out 26 times in different rows, each alphabet shifted cyclically to the left compared to the
previous alphabet, corresponding to the 26 possible Caesar Ciphers.
59
Unit 2 - Cryptography
Vigenere Table
The initial key is repeated to the length of the plaintext. For example, if the plaintext is “Beautiful day” and the initial key is “pen”, then the key stream generated will be “penpenpenpen.”
Encryption
The first alphabet of the plaintext, B, is paired with P (which is the first alphabet of the key). So use row B and column P of the Vigenère table. The outcome is Q. Similarly, for the second alphabet of the plaintext, E, the second alphabet of the key, E, is used; the alphabet at row E and column E is I. The rest of the plaintext is enciphered in a similar fashion.
Decryption
Decryption is performed by going to the row in the table corresponding to the key, finding the position of the ciphertext alphabet in this row, and then using the column’s label as the plaintext. For example, in row P (from PEN), the ciphertext Q appears in the column against B, which is the first plaintext alphabet. Next we go to row E (from PEN) and find the ciphertext I in the column against E; thus E is the second plaintext alphabet.
This can be done using algebra as well:
Ci = (Pi + Ki) mod 26 and Pi = (Ci - Ki) mod 26
where Pi is the plaintext value, Ki is the key value and Ci is the ciphertext value.
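The same scheme can be sketched in Python, directly implementing Ci = (Pi + Ki) mod 26 (non-letter characters are dropped for simplicity):

def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    # Add (or subtract) the repeating key stream letter by letter, mod 26
    out = []
    key = key.upper()
    letters = [c for c in text.upper() if c.isalpha()]
    for i, c in enumerate(letters):
        k = ord(key[i % len(key)]) - ord("A")
        if decrypt:
            k = -k
        out.append(chr((ord(c) - ord("A") + k) % 26 + ord("A")))
    return "".join(out)

ct = vigenere("Beautiful day", "pen")
print(ct)                         # QINJXVUYYSEL
print(vigenere(ct, "pen", True))  # BEAUTIFULDAY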
Playfair Cipher
Playfair cipher was the first digraph cipher to be used practically. The key of this cipher is a 5x5 matrix containing all 26 letters of English. Note that I and J share a single cell in the matrix.
First, the keyword is written row-wise into the matrix, then the remaining cells of the matrix are filled with the remaining letters (those which have not yet occurred) in alphabetical order. The plaintext is broken into pairs of letters; if the two letters of a pair are the same, a bogus letter is inserted between them. For example, BALLOON is grouped as BA LX LO ON (an X is inserted between the double L).
Certain rules are followed for encryption/decryption using the Playfair cipher. Encryption/decryption is done by taking the letters in groups of two:
- If the two letters are in the same row, replace each with the letter to its right (wrapping around to the start of the row).
- If the two letters are in the same column, replace each with the letter below it (wrapping around to the top of the column).
- If the two letters are in neither the same row nor the same column, replace each with the letter in its own row and in the column of the other letter of the pair.
Playfair Cipher
L A R G E
S T B C D
F H I/J K M
N O P Q U
V W X Y Z
Keyword: LARGEST
Plain text: Mu st se ey ou
Cipher text: UZTBDLGZPN
Brute-force attack on this cipher is very difficult due to the large size of the key domain (25!, since the 25 cells of the matrix can be arranged in any order).
Hill Cipher
This poly-alphabetic cipher is based on matrix multiplication (linear algebra). The matrix is the key that is used
for encryption and decryption. Also, the key matrix should have a multiplicative inverse.
Encryption: The first step is to convert the text into a matrix so the key can be applied to it. Following the rules
of matrix multiplication, P and Key are multiplied to generate ciphertext.
E=[P]*[K]
Decryption: The ciphertext is first converted into a matrix. The inverse of key is generated, which is then
multiplied with the matrix to produce the initial plaintext.
D = [C] * [K^(-1)]
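As an illustrative computation with the 2x2 key K = [[3, 3], [2, 5]]: the plaintext HI becomes the row vector P = [7, 8], and [P] * [K] mod 26 = [7*3 + 8*2, 7*3 + 8*5] mod 26 = [37, 61] mod 26 = [11, 9], i.e. the ciphertext LJ. The inverse key mod 26 is K^(-1) = [[15, 17], [20, 9]], and [11, 9] * [K^(-1)] mod 26 = [7, 8] recovers HI.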
Brute-force attack on this cipher is very difficult due to the large size of the key domain (the set of all invertible m x m key matrices modulo 26).
One Time Pad
This poly-alphabetic cipher uses a technique which makes it immune to cryptanalysis. The idea behind this is to choose a new random key from the key domain for every character of the message. Say, the first character of the text is encrypted using key 06, the second uses key 08 and so on (every time a new key).
Though it is a perfect cipher with full secrecy, it is impractical to implement, since the key must be truly random, as long as the message itself, and never reused.
Rotor Cipher
Rotor cipher uses a rotor machine that follows mono-alphabetic substitution, but the mapping between plaintext and ciphertext changes after every rotation. The rotor machine is permanently wired and uses 26 letters. If the rotor is stationary, the cipher follows mono-alphabetic substitution, but if the rotor is rotating, it follows poly-alphabetic substitution.
The initial position of the rotor is secretly shared between the sender and receiver.
This cipher provided better practical use than the one-time pad cipher.
3. Transposition Cipher
Rail Fence Cipher
In the rail fence cipher, the plaintext is written diagonally downwards on successive rails (rows).
After reaching the bottom rail, we traverse upwards diagonally. After reaching the top, the direction is changed
again. Thus, the alphabets of the message are written in a zig-zag manner.
After the message has been written in a zig-zag manner, the individual rows are combined to obtain the
ciphertext.
Key defines the number of rails/steps to be traversed in a direction.
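As an illustration with a key of 3 rails, the plaintext HELLOWORLD is written as:
H . . . O . . . L .
. E . L . W . R . D
. . L . . . O . . .
Reading the rows in order gives the ciphertext HOLELWRDLO.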
1. Data integrity algorithms: The algorithms used to assure that information and programs are changed only in a
specified and authorized manner are data integrity algorithms. These types of algorithms are used to protect
blocks of data, such as messages, from alteration.
2. Authentication protocols: Authentication protocols enable communicating parties to authenticate the identity
of entities and to exchange session keys using cryptographic algorithms. For example, Kerberos authentication
service is used in a distributed environment.
Cryptography Protocols
Cryptographic protocols are sets of rules or instructions that provide secure connections, allowing two parties to communicate with privacy and data integrity. Since cryptographic protocols and algorithms are very complex and require a high level of expertise to create, most people use protocols and algorithms that are commonly applied and accepted as secure.
Some such Protocols are:
• IPSec
• SSL (superseded by TLS)
• SSH
• S/MIME
• OpenPGP/GnuPG/PGP
• Kerberos
Each of these protocols has its own benefits and challenges, and they may even overlap with respect to their functions.
We will read more about each of these later in this unit.
Cryptography Algorithms
Cryptographic algorithms are the sequences of processes, which are used for encrypting and decrypting messages in
a cryptographic system.
Cryptographic algorithms are of many types and most of them can be divided in the following categories.
Symmetric: Data Encryption Standard (DES) and Advanced Encryption Standard (AES) are the most popular examples
of symmetric cryptography algorithms.
Asymmetric: RSA is one of the most common examples of this algorithm.
Cryptographic algorithms are specified by the National Institute of Standards and Technology (NIST). They include
cryptographic algorithms for encryption, key exchange, digital signature, and hashing.
Cryptography Standards
There are many cryptography standards. The National Institute of Standards and Technology (NIST) is an organization aimed at supporting US economic and public welfare by providing leadership for the nation’s measurement and standards infrastructure. That is basically a fancy way of saying it sets the standards for things like encryption as it pertains to non-classified government information, both in transit and at rest.
Although there are a lot of standards, or FIPS (Federal Information Processing Standards), we are really only concerned with the ones that pertain to encrypted data in motion or, more specifically, as they relate to SSL. Keep in mind, these standards are not binding, but they are suggested by the US Government for any and all non-classified data.
Some standards that are widely known by cryptographers are as follows:
Encryption standards
• Data Encryption Standard (DES, now obsolete)
• Advanced Encryption Standard (AES)
• RSA (the original public-key algorithm)
Hash standards
• MD5 (obsolete)
• SHA-1 (obsolete)
• SHA-2
Digital signature standards
• Digital Signature Standard (DSS), based on the Digital Signature Algorithm (DSA)
• RSA
Public-key infrastructure (PKI) standards
• X.509 Public Key Certificates
We will read more about each of these later in this unit.
DES is based upon two basic operations of cryptography, i.e. substitution and transposition, and consists of 16 steps that are referred to as rounds.
Let us understand the process of encryption in DES through the following steps:
1. The 64-bit plaintext block is passed to the Initial Permutation (IP) function.
2. The initial permutation rearranges the bits of the plaintext block.
3. The initial permutation then breaks the block into two parts of 32 bits each: the Left Plain Text (LPT) and the Right Plain Text (RPT).
4. The LPT and RPT then go through 16 rounds of the encryption process.
5. At last, the LPT and RPT are re-joined and a Final Permutation (FP) is performed on the combined block.
6. The result is a 64-bit ciphertext.
3DES
Introduced in 1998, the 3DES algorithm is adopted in finance, payments and other private industries for encrypting data in transit or at rest. 3DES is a symmetric key block cipher that applies the DES cipher three times, using three keys.
As already discussed, this encryption technique uses three different DES keys, namely K1, K2 and K3. This makes the total 3DES key length 3 x 56 = 168 bits. Now, let us see how this mechanism takes place through a sequence of steps.
The steps involved in this process are as follows:
Step 1: The plaintext blocks are encrypted using single DES with key K1.
Step 2: The output of Step 1 is decrypted using single DES with key K2.
Step 3: The output of Step 2 is encrypted using single DES with key K3.
Step 4: The output of Step 3 is the ciphertext, i.e. C = EK3(DK2(EK1(P))).
Step 5: For decrypting the ciphertext, one must follow the reverse procedure: first decrypt using K3, then encrypt with K2, and finally decrypt with K1.
3DES was extensively used in Microsoft products such as Microsoft Outlook 2007, Microsoft OneNote, Microsoft
System Center Configuration Manager 2012 for the protection of user configuration and user data.
Triple DES systems are significantly more secure than single DES, but these are clearly a much slower process than
encryption using single DES.
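The encrypt-decrypt-encrypt sequence above is handled internally by most cryptographic libraries. A minimal sketch, again assuming the third-party pycryptodome package:

# 3DES (EDE) sketch, assuming pycryptodome is installed.
from Crypto.Cipher import DES3
from Crypto.Random import get_random_bytes

# K1 || K2 || K3: 24 bytes, i.e. the 3 x 56 = 168-bit key described above
key = DES3.adjust_key_parity(get_random_bytes(24))
cipher = DES3.new(key, DES3.MODE_CBC)             # IV generated automatically
ciphertext = cipher.encrypt(b'sixteen byte msg')  # length multiple of 8 bytes
print(cipher.iv.hex(), ciphertext.hex())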
General Structure
The cipher takes a plaintext block of 128 bits, or 16 bytes. The key length can be 16, 24 or 32 bytes (128, 192, or 256 bits), and the algorithm is referred to as AES-128, AES-192 or AES-256 accordingly.
The number of rounds in AES is variable and depends upon the length of the key: 10 rounds for 128-bit keys, 12 rounds for 192-bit keys and 14 rounds for 256-bit keys. AES relies on the substitution-permutation technique for its operations. The replacement of inputs by specific outputs is termed substitution, and the shuffling of bits around is referred to as permutation.
The Encryption process of AES consists of four major steps:
1. Byte Substitution: The 16 input bytes are substituted in a fixed table which gives us a matrix of four rows and
four columns.
2. ShiftRows: All the four rows are shifted towards the left. The entries that ‘fall off ‘are re-inserted on the right
side of the row. The shift is done in the following way:
• First row is not shifted at all
• Second row is shifted by one position to the left
• Third row is shifted by two positions to the left
• Fourth row goes three positions to the left
• This gives us a new matrix with 16 bytes.
3. MixColumns: The column of four bytes undergoes transformation using a mathematical function. The four
bytes of one column are the inputs and four new bytes are the outputs that will replace the original column.
The outcome is a new matrix of 16 bytes.
4. AddRoundKey: The 16 bytes are treated as 128 bits and XORed with the 128 bits of the round key. If this is the last round, the output is the ciphertext. Otherwise, the resulting 128 bits are interpreted as 16 bytes and another round begins.
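In practice all four steps are performed internally by a library call. A minimal sketch using the third-party pycryptodome package (an assumed dependency), with AES-128 in an authenticated mode:

# AES sketch, assuming pycryptodome is installed.
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)           # 16/24/32 bytes => AES-128/192/256
cipher = AES.new(key, AES.MODE_GCM)  # GCM also authenticates the data
ciphertext, tag = cipher.encrypt_and_digest(b'attack at dawn')
print(cipher.nonce.hex(), ciphertext.hex(), tag.hex())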
The RSA algorithm was developed in 1977 by Ron Rivest, Adi Shamir, and Len Adleman at MIT and first published in 1978. The Rivest-Shamir-Adleman (RSA) scheme has since become the most widely accepted public-key encryption technique in the world. It can be used both for public-key encryption and for digital signatures.
The algorithm makes use of the fact that there is no known efficient way to factor very large (100-200 digit) numbers.
The algorithm is explained as follows:
• The message is represented as an integer M between 0 and (n-1). Large messages can be broken into a number of blocks, each represented by an integer in the same range.
• Encryption is performed by raising the message to the eth power modulo n. The result is the ciphertext C.
• To decrypt the ciphertext C, raise it to another power d modulo n.
The encryption key (e, n) is made public while the decryption key (d, n) is kept private by the user.
The approach to determining appropriate values for e, d and n is as follows (a toy numeric example follows the list):
• Choose two very large (100+ digit) prime numbers p and q.
• Set n = p * q.
• Select a large integer d such that GCD(d, (p-1) * (q-1)) = 1, where GCD is the Greatest Common Divisor.
• Find a value e such that e * d = 1 (mod (p-1) * (q-1)).
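The toy example below illustrates the scheme with deliberately tiny primes (real keys use primes hundreds of digits long). Here e is chosen first and d computed from it, which is equivalent to the ordering above; Python 3.8+ is assumed for the modular inverse.

# Toy RSA, for illustration only -- never use such small numbers in practice.
from math import gcd

p, q = 61, 53                 # two (toy) prime numbers
n = p * q                     # modulus n = p * q
phi = (p - 1) * (q - 1)
e = 17                        # public exponent
assert gcd(e, phi) == 1       # e must be coprime to (p-1)*(q-1)
d = pow(e, -1, phi)           # private exponent: e * d = 1 (mod phi)

m = 42                        # message as an integer in [0, n-1]
c = pow(m, e, n)              # encryption: C = M^e mod n
assert pow(c, d, n) == m      # decryption: M = C^d mod n recovers the message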
A cryptographic hash function is a mathematical function that condenses source information into a short, fixed-size fingerprint. Hash functions cater to a multitude of applications such as blockchain technology, payments on e-commerce websites, etc.
A hash function can be expressed simply in the form of a mathematical equation: it accepts a variable-size block of data as input and gives a fixed-size hash value as output. The main objective of a hash function is to achieve data integrity.
MD5 is a cryptographic algorithm which accepts input of arbitrary length and in turn produces a message digest that is 128 bits long. This digest is called the "hash" or "fingerprint" of the input. MD5 is applied in situations where long messages need to be processed and compared quickly, as in the creation and verification of digital signatures.
Working of MD5
• The input is first divided into blocks of 512 bits each.
• If the last block is not full, extra bits are 'padded' to the end so that its length is 64 bits short of a multiple of 512.
• 64 bits recording the length of the original input are then appended, completing the last block.
• Each block is divided into 16 words of 32 bits each.
The overall process thus involves appending padding bits, appending a representation of the original message's length, initializing the message-digest buffer, processing the message in 16-word blocks and finally outputting the result. On a 32-bit machine, Message Digest 5 is much faster than other message digest algorithms, and it is simple to implement when compared with similar digest algorithms.
[Figure: two sample inputs hashed by MD5 into distinct 128-bit digest outputs]
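Python's standard hashlib module exposes MD5 directly, making the fixed 128-bit output easy to observe:

import hashlib

digest = hashlib.md5(b'The quick brown fox').hexdigest()
print(digest)          # 32 hex characters = 128 bits, whatever the input size
# A one-character change in the input yields a completely different digest
print(hashlib.md5(b'The quick brown fox!').hexdigest())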
Secure Hash Algorithm (SHA) was developed by the National Institute of Standards and Technology (NIST) and published as a federal standard in the year 1993. The revised version was issued in FIPS 180-1 in 1995 and called SHA-1. SHA has been specified in RFC 3174.
The hash value produced by SHA-1 is 160 bits. In the year 2002, NIST revised the standard and defined three new versions having hash value lengths of 256, 384 and 512 bits, called SHA-256, SHA-384, and SHA-512 respectively. SHA-1 involves various types of modular arithmetic and logical binary operations. However, this technique has been considered insecure since 2005, and major tech giants like Microsoft, Google, Apple and Mozilla had stopped accepting SHA-1 SSL certificates by 2017.
Note: RFC 3174 is a standard under the IETF that makes the SHA-1 hash algorithm conveniently available on the internet.
Whitfield Diffie and Martin Hellman were key contributors in the field of cryptography. Diffie and Hellman achieved a major breakthrough in 1976 that established the framework for public-key cryptography: they came up with a cryptographic algorithm that met the requirements for public-key systems. The algorithm was named after its two discoverers, the Diffie-Hellman Key Exchange algorithm.
Using the Diffie-Hellman Key Exchange algorithm, two users can agree on a key that is then used for the subsequent encryption of messages. The algorithm limits its application to this exchange of secret values.
The Algorithm
Let there be two public numbers: a prime number q and an integer a which is a primitive root of q. Also, let there be two users A and B who wish to exchange a key.
Then, user A will select an integer XA < q and compute YA = a^XA mod q. In the same way, user B will select an integer XB < q and compute YB = a^XB mod q.
Each X value is kept private, while the corresponding Y value is made public. The key for user A is K = (YB)^XA mod q, and the key for user B is K = (YA)^XB mod q; the two values are identical.
As assumed previously, let there be two users A and B who wish to connect over a network. User A generates a one-time private key XA, calculates YA, and sends it to user B. Similarly, user B responds by generating a private value XB, calculating YB, and sending YB to user A. Now it is easy for both users to calculate the key. In the first instance, user A can pick the values of q and a and transmit them as the first piece of information.
Note: An adversary is the attacker or foreign entity who wants to hijack the information contents.
Although this algorithm is widely used and secure, it is also prone to certain types of attacks. One such attack is called the Man-in-the-Middle Attack.
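A toy version of the exchange, using the small prime q = 353 and primitive root a = 3 for readability (real deployments use primes of 2048 bits or more):

# Toy Diffie-Hellman key exchange, for illustration only.
import secrets

q = 353                            # public prime number
a = 3                              # public primitive root of q

xa = secrets.randbelow(q - 2) + 1  # A's private value, XA < q
xb = secrets.randbelow(q - 2) + 1  # B's private value, XB < q
ya = pow(a, xa, q)                 # A publishes YA = a^XA mod q
yb = pow(a, xb, q)                 # B publishes YB = a^XB mod q

k_a = pow(yb, xa, q)               # A computes K = (YB)^XA mod q
k_b = pow(ya, xb, q)               # B computes K = (YA)^XB mod q
assert k_a == k_b                  # both sides derive the same shared key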
Man-in-the-Middle Attack
Let there be two people A and B who want to communicate over the network. There is another person, D, the adversary, who wants to hijack the communication channel and steal the information. Now, let us see how D attacks the network.
• To prepare for the attack, D generates two random private keys XD1 and XD2 and computes the corresponding public keys YD1 and YD2.
• A transmits YA to B.
• The adversary D intercepts YA and in turn transmits a false message, YD1, to B. Meanwhile, D calculates K2 = (YA)^XD2 mod q.
• Next, B receives the false message from the adversary in the form of YD1 and calculates K1 = (YD1)^XB mod q.
• Then, B transmits YB to A.
• Again, D intercepts YB and in turn transmits a false message, YD2, to A. D also calculates K1 = (YB)^XD1 mod q.
• Finally, A receives YD2 and calculates K2 = (YD2)^XA mod q.
This process shows how false keys are generated by the adversary D. Although A and B think they share a secret key, in fact B and D share the secret key K1 while A and D share the secret key K2.
This algorithm is vulnerable since it does not authenticate the participants. This limitation can be overcome using
techniques such as digital signatures and public-key certificates.
It is possible to intercept traffic and read emails, copy user credentials and even duplicate files. Therefore, to be sure that no one intercepts the email messages, the connections between the computer and the email provider must be encrypted. To achieve this, the email client should implement encryption software to protect the content from being accessed by foreign entities. This is also called end-to-end encryption, meaning that no one except the sender and the receiver will be able to see and retrieve the messages.
There are tools such as PGP, GNU Privacy Guard (gpg) and S/MIME which help carry out email encryption, and it takes only seconds to create an encryption key.
Historically, passwords and authentication have been used to protect messages exchanged between two parties, and encryption is just another advancement in the world of technology and communication.
GNU Privacy Guard, or gpg, is free encryption software that is compliant with the OpenPGP (RFC 4880) standard. It is a cryptographic tool helpful in managing public and private keys, and it performs multiple tasks such as encryption, decryption, signing and verifying operations.
One can download GPG from the official website using the download links for all platforms and source codes. For Windows systems, you need the Gpg4win application. The installer is available at the Windows GnuPG installer (Gpg4win) download page. All you need to do is run the installer, and gpg will be available in the command prompt.
Do you know? In the past, a phishing attack stole almost 20,000 emails from the Democratic National Committee in the 2016 US elections. The hacker was able to get into the DNC's unencrypted inbox.
Let us understand some basic functions in gpg.
Listing stored keys
To list all public keys stored in your keyring, use gpg --list-keys.
To list all private keys stored in your keyring, use gpg --list-secret-keys.
Generating a key
To generate a new key pair, use gpg --gen-key.
Generating a revocation certificate
To generate a revocation certificate, use gpg --gen-revoke.
Importing a key
To import a public key or a private key, use the --import switch.
$ gpg --import <keyfile> or $ echo THE_KEY_IN_ASCII | gpg --import
Exporting a key
To export a public key, use the --export switch.
$ gpg --export KEY_ID
To export a private key, use the --export-secret-keys switch.
$ gpg --export-secret-keys KEY_ID
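The same operations can be scripted. A sketch using the third-party python-gnupg wrapper (an assumption: pip install python-gnupg, with the gpg binary installed on the system; KEY_ID is a placeholder for a real key ID from your keyring):

# Scripted keyring operations via the python-gnupg wrapper (assumed installed).
import gnupg

gpg = gnupg.GPG()                     # uses the default keyring
for key in gpg.list_keys():           # equivalent to `gpg --list-keys`
    print(key['keyid'], key['uids'])

armored = gpg.export_keys('KEY_ID')   # equivalent to `gpg --export KEY_ID`
result = gpg.import_keys(armored)     # equivalent to `gpg --import`
print(result.count, 'key(s) processed')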
2.5.5 S/MIME
Secure/Multipurpose Internet Mail Extension (S/MIME) makes use of asymmetric cryptography and protects your emails from being accessed by a third party. Using this technique, you can digitally sign your emails, thus proving that you are the legitimate sender of the message. This is effective in dealing with phishing attacks and preventing outsiders from interfering in the email process.
S/MIME is a security enhancement to the MIME Internet e-mail format standard and is based upon technology from RSA Data Security. This technology is well suited to commercial and organisational applications. To understand S/MIME, we must understand the e-mail format on which it builds, i.e. MIME, and to understand MIME we need to know about RFC 5322 (Internet Message Format).
RFC 5322 has long been the standard for Internet-based text mail messages. RFC 5322 views a message as a combination of envelope and contents. The envelope contains the information required for transmission and delivery, whereas the content is the object to be delivered to the recipient. The message to be transmitted is composed of header lines (the header) followed by unrestricted text (the body).
The key components of a header line are:
• A keyword (followed by a colon)
• The keyword's arguments; long lines are broken into smaller lines
Note: Base64 encoding converts binary data into a text format that can be passed safely through the communication channel; it is used in the email encryption process.
As we have already discussed, S/MIME is an asymmetric cryptography technique that uses two keys (a private and a public key) for its operation. Even if the public key is known, it is practically impossible to derive the private key.
Emails are encrypted using the recipient's public key, while decryption takes place using the corresponding private key possessed by the recipient. As long as the private key is not compromised, only the intended recipient will be able to access the shared information in the emails.
S/MIME also allows you to sign your emails as a step to prove your identity, thus promoting legitimacy in business. Every time you sign an email, your private key applies a Digital Signature to your message. When opening the message, the recipient uses your public key to verify the signature. This serves as a process of identity authentication and helps avoid phishing attacks.
IPSec
IPSec deals with three functional areas i.e. authentication, confidentiality and key management.
• Authentication: It is the mechanism to ensure that a packet of information was in fact sent by the claimed source, and that there was no alteration of the message contents during transmission.
• Confidentiality: It is the act of encrypting messages to prevent eavesdropping by any third party.
• Key Management: It is the process of secured exchange of keys between any two parties involved in
communication.
Applications of IPSec
IPsec enables secure communication across a LAN, across private and public WANs, and across the Internet. There are several examples of its use:
• Secured Connectivity: It is possible for a company to build a secure virtual private network over the Internet
or over a public WAN. This helps in enabling a business to rely heavily on the Internet and reduce its need for
private networks thereby saving costs and network management overhead.
• Secured remote access: An end user whose system is equipped with IP security protocols can make a local
call to an Internet Service Provider (ISP) and gain secure access to a company network. This helps in reducing
the cost of toll charges for traveling employees and telecommuters.
• Extranet and intranet connectivity: IPsec is very efficient in ensuring secured communication with other
organizations, ensuring authentication and confidentiality and providing a key exchange mechanism.
• Enhancing electronic commerce security: Even where Web and electronic commerce applications already have built-in security protocols, the use of IPsec adds a further layer of security.
IPsec encrypts and authenticates all traffic designated by the network administrator thereby adding an additional
layer of security to whatever is provided at the application layer.
We all know how crucial it has become to protect the information shared among individuals and organisations over online networks. The information that is communicated is prone to various types of attacks and malicious activities. Communication channels, and cryptosystems in general, are subject to attacks which lead to leakage of information and data theft.
Interruption
This refers to a situation where an asset of the system is destroyed or becomes unavailable or unusable. Some examples of this type of attack are the destruction of a piece of hardware, the disruption of a communication line, or a disabled file management system.
Interception
This is when an unauthorized party attempts to access the information exchanged between two parties; it is an attack on the confidentiality of information. The unauthorized party can be a person, a program or any remote computer system around the world. Some examples of this attack are the tapping of wires to capture data and the illicit copying of data files.
Modification
In the previous section, we saw that information can be accessed by a third party, i.e. an unauthorised source. There can also be situations wherein the adversary (the unauthorized source) tampers with a piece of information being shared over the communication network. This is an attack on the integrity of information. Modification of contents can take place in several forms, such as changing values in a data file, altering a program and modifying the information contents of messages being communicated over the network.
Fabrication
There can be a situation where an adversary or unauthorized source inserts counterfeit objects into the communication network. This is an attack on authenticity, such as the insertion of a false message into a network or the addition of records to a file.
After learning the general attacks that take place over a communication network, let us now understand the various types of cryptographic attacks that exist.
Cryptographic attacks can be categorized into two types: passive attacks and active attacks.
• Traffic analysis
The attacker can observe the pattern of the messages even if the messages are protected through encryption. The location and identity of the communicating hosts, and factors such as the frequency and length of messages, can be determined by the adversary. This type of information can be used to guess the nature of the communication between the sender and the recipient.
It is difficult to deal with passive attacks and to detect them, since they do not alter the data. However, there are techniques to prevent these types of attacks from succeeding and affecting the communication process.
Active Attacks
Active attacks result in the modification of the information content or the creation of a false stream of messages. These attacks can be classified into four types:
a. Masquerade b. Replay c. Modification of messages d. Denial of Service
a. Masquerade
Masquerade is a scenario where one entity pretends to be a different entity. For example, the adversary might capture and replay an authentication sequence in order to impersonate an entity with legitimate access to information.
b. Replay
Replay is the passive capture of information followed by its retransmission to produce an unauthorized effect in the communication network.
c. Modification of messages
When the contents of a message are altered, or the legitimate message is delayed or reordered, the nature of the information being transmitted is modified, producing an unauthorized effect in the communication process. For example, the message "John Alan and Steve travelled to Paris" might become, after the adversary has modified it, "John Adam and Steve travelled to Paris".
d. Denial of Service
As the name suggests, the adversary prevents the recipient or sender from accessing the communication facilities, resulting in the form of attack known as denial of service. This can mean disrupting the entire network, either by disabling it or by overloading it with false messages to degrade performance.
One countermeasure is the physical protection of all communication facilities and transmission paths, which is practically very difficult. Instead, the goal should be to detect the adversary before it tampers with the communication network; this helps in recovering from any delay or disruption caused by the unauthorized source.
Legal Issues
Cryptography has been widely used in military and intelligence-gathering processes. Criminals and terrorists have also used cryptographic techniques to get into the security systems of defence agencies and access confidential information. Therefore, some governments have restricted the use of cryptography to a certain extent. There are also patent issues, which have come up as a result of the complex mathematical nature of the algorithms involved: the inventors of these algorithms have protected their property by patenting them, so that users must obtain a license. We can divide the legal issues into three categories:
• Export Control Issues: The US government has treated cryptographic software and hardware as sensitive items and hence placed them under export control. For a commercial entity to export cryptographic libraries and software, it is important to get an export license first. In recent years the export laws have eased up a bit, and it has become more feasible to export these cryptographic software packages. Still, there needs to be a more efficient export mechanism in place for cryptographic systems.
• Import Control Issues: Several countries have restricted the use of cryptography within their borders, and within such jurisdictions the authorities have to establish proper adherence to the law. One approach is to tie cryptographic capabilities to jurisdiction policy files; such files allow "strong" but "limited" cryptography by restricting key sizes and other parameters.
• Patent Related Issues: To avoid patent infringement, it is recommended to use algorithms that are not patented, whose patents have expired, or that are free to use under their license policy. Alternatively, one can use a patented cryptographic algorithm after obtaining a license.
We have discussed the broad guidelines to consider before deploying cryptographic solutions. Usually, it is the vendor who has to worry about these issues, but one cannot take chances. While using open-source software that is freely available over the Internet, you have to establish legal compliance before its use. The laws regulating cryptography are complex, jurisdiction dependent and subject to change. Hence, it is crucial to ensure legal compliance and abide by the rules governing the use of cryptographic algorithms.
Authentication Methods
For protecting the identity of a user and other information, cryptography involves various techniques. It offers information security in the form of encryption, message digests and digital signatures. Cryptography caters to multiple applications such as computer passwords, ATM cards and e-commerce, thereby promoting access control and information confidentiality. The main authentication methods in cryptography are:
• Password Authentication Protocol
• Authentication Token
• Symmetric-Key Authentication
• Biometric Authentication
Authentication Token
An authentication token is a portable device which helps in authenticating users and allowing authorized access into a network system. An authentication technique that uses a portable device carrying embedded software is known as a software token. Some examples are RSA SecurID tokens, cryptocards, challenge-response tokens, and time-based tokens.
Symmetric-key Authentication
Symmetric-key Authentication is the sharing of a single, secret key with an authentication server wherein the key is
embedded in a token. The authentication of the user takes place by sending the user credentials to the authentication
server that is encrypted by the secret key. The user becomes an authenticated user only if the server matches the
received encrypted message using the shared secret key.
Biometric Authentication
Biometric Authentication is a technique for digitizing the measurements of physiological or behavioral characteristics
of an individual. There are various types of biometric authentication systems such as face detection authentication
system, fingerprint authentication system, Iris authentication system and voice authentication system.
• Fingerprint recognition: This type of recognition uses an electronic device to capture a digital image of
the fingerprint patterns. The image that is captured is called a live scan and digitally processed to create a
biometric template. The biometric features can be stored and used for matching later on.
• Voice biometric authentication: This type of biometric authentication uses voice patterns to recognise the identity of a person. It is divided into five categories: speaker-dependent systems, speaker-independent systems, discrete speech recognition, continuous speech recognition, and natural language recognition.
• Face detection: Face detection technology makes use of learning algorithms to locate human faces in digital images. This type of technology focuses on facial features and ignores everything else in the digital image. Face recognition takes place after the face detection process and identifies the face by comparing it with stored face images. Many neural network algorithms have been proposed for this type of authentication.
• Iris Authentication: This is another authentication technique which is widely used at airports worldwide. The
recognition of iris is one of the finest ways for authentication in high risk situations. This technique is also
used in many types of industries.
Previously, we studied that identity and access management is a key concern among enterprises, and there have been continuous efforts to enhance both security levels and user convenience. One such measure is single sign-on, or SSO. Single sign-on is a centralized solution that focuses on stronger passwords for identity management, and it has dramatically reduced the administrative burden associated with passwords. Centralized identity management solutions are fairly easy to implement, and they automate and enforce secure password practices in a consistent manner: creating strong passwords, changing passwords regularly and ensuring that each password contains a mixture of numerals and special characters. Basically, SSO allows users to sign on only once and automatically verifies their identities to each application and service that needs to be accessed.
On the other hand, centralized credentials that enable SSO can give a third party access to the entire information resource if that single credential is compromised. This approach has become less efficient and less secure as applications and services have grown exponentially worldwide. Also, most users rely on the same set of credentials for accessing multiple applications; they find it cumbersome to change passwords again and again, and hence become prone to attacks. So although this platform has eliminated the need for users to repeatedly prove their identities, it is exposed to serious security threats when users assign the same password credentials to all their accounts across various systems. Alternatively, there is One-Time Password (OTP) authentication for providing better security and rendering such attacks ineffective.
In an ideal situation, the user should seamlessly be authenticated to multiple user accounts once the identity of the user has been verified. However, in many current situations, the user has to repeat the sign-on procedure for each type of service using the same set of credentials, which are of course validated each time the user signs in.
2.6.6 Kerberos
Now we come to another type of authentication service, designed for use in a distributed environment: Kerberos. Kerberos was developed as part of Project Athena at MIT.
The main motive behind Kerberos was to address the problem of accessing servers distributed throughout a network. With this authentication service, users at various workstations can access a distributed network of servers once their requests for service have been authenticated. Kerberos offers a third-party authentication service, enabling clients and servers to establish authenticated communication.
Kerberos has been a vast improvement over previous authorization schemes. Its strong cryptography and third-party authorization have made it extremely hard for cybercriminals to get into networks and access information. This type of authentication service is not flawless, however, and there is a need to understand Kerberos thoroughly before implementing it.
Kerberos has been effective in making the internet more secure and has enabled users to achieve more online without compromising safety. Even so, Kerberos can be hacked if the adversary takes advantage of weaknesses such as a vulnerability, weak passwords or malware, or a combination of all three. Due to this fact, Multi-Factor Authentication (MFA) has become popular and more in demand. MFA asks for your password along with something else, such as a randomized token, mobile phone, email, thumbprint, retina scan or facial recognition.
Kerberos Version 4
Kerberos version 4 makes use of DES (Data Encryption Standard) for providing authentication services. DES has been found to be insecure for protecting the long-term keys used in communication with the server. The server generates a "ticket" which helps in authenticating the user; for this reason, the server involved in this part of the authentication process is also called the ticket-granting server.
• Kerberos v4 was released in the late 1980s
• Its ticket support is satisfactory
• Makes use of DES for providing the authentication service
• Uses the "receiver makes right" encoding system
• Uses the same key each time a service is requested from a server
• Risky, since an attacker can replay messages from an old session to the client or server
• Supports only IP addresses, not addresses for other network protocols
This version of Kerberos has serious protocol flaws that permit attacks requiring far less than an exhaustive search. Owing to this fact, Kerberos v4 authentication became a security risk and raised serious questions about the Kerberos protocol. This is why Kerberos version 5 was introduced.
Kerberos Version 5
Kerberos version 5 was implemented in both Windows 2000 and Windows XP and is used to provide a single authentication service within a distributed network. It allows a single account database for authenticating users on various computing platforms to access services within an environment. The ticket in Kerberos is used to authenticate the user's identity, but additional authorization might be required for access control. Identity-based authorization provides more interoperability for systems that support the Kerberos version 5 protocol but do not support user authorisation.
• Kerberos v5 was published in 1993
• Well-extended ticket support (forwarding, renewing and postdating tickets)
• Uses the Abstract Syntax Notation One (ASN.1) encoding system with Basic Encoding Rules (BER)
• Supports multiple network address types, unlike Kerberos version 4
• Reasonable support for transitive cross-realm authentication
KEY POINTS
Kerberos realm: A Kerberos realm is a set of managed nodes that share the same Kerberos database.
The Kerberos database resides on the Kerberos master computer system.
Kerberos principal: A Kerberos principal is a service or a user that is known to the Kerberos master system.
A Kerberos principal is identified by a principal name that consists of a service or user name, an instance name and a realm name.
Authentication Server (AS): The server in Kerberos scheme which grants authentication to the user/client
for accessing the information available on the network is known as Authentication Server or simply AS.
Ticket Granting Service (TGS): The act of granting a ticket to the user for accessing the information
available on the server is referred to as the Ticket Granting Service or TGS.
Key Distribution Center (KDC): A Kerberos server or KDC, shares a secret key with the client and application
server to establish communication between both the parties. These secret keys and passwords are used
to prove the principal’s identity, and to establish an encrypted session between the KDC and the principal.
KDC consists of Authentication Server (AS) and Ticket Granting Service (TGS). The exchange through
Authentication Service takes place only once between a principal and the KDC. Thereafter, KDC delivers a
Ticket Granting Ticket (TGT) through the TGS that the client/user will use for obtaining additional tickets
for information access.
Fig 2.23: Request for Service in Another Realm
IPSec can operate in two modes, transport mode and tunnel mode. Let us discuss these options, which are key to securing information on the network.
Transport Mode
Transport mode is used for ensuring end-to-end security between a client/user and a server in a LAN, and it is the default mode for IPSec. Each packet of information undergoes encryption to protect the integrity and confidentiality of the data it carries. IPSec can also be used to establish an authentic source of communication and ensure that the information has not been intercepted or tampered with while being transmitted. Depending upon the client's security needs, IPSec can be configured in one of the following ways:
a. Authentication Header (AH) Transport Mode: Authentication Header (AH) provides functions such as
authentication, integrity and anti-replay of each packet of information without encrypting the data. This
means that the data is readable, but it is protected from any kind of modification. AH makes use of keyed
hash algorithms for signing the packet and ensuring integrity. This gives an assurance that the packet did
originate from the actual source and has not undergone any type of modification while being transmitted. This
is achieved by placing the AH header within each packet between the IP header and IP payload.
b. Encapsulating Security Payload (ESP) Transport Mode: Apart from everything that AH offers, Encapsulating
Security Payload (ESP) provides for the confidentiality of the packet during transit. In the transport mode, the
entire packet is not encrypted or signed; rather, only the data in the IP payload is encrypted and signed.
The purpose of the authentication process is to ensure that the packet originated from the actual source, while encryption ensures that the data cannot be viewed or modified during transmission over the communication network. This is accomplished by placing an ESP header before the IP payload and an ESP trailer after it, encapsulating only the IP payload.
Tunnel Mode
IPSec tunnel mode encrypts both the IP header and the payload during transmission, thereby protecting the entire packet of information. The first step is to encapsulate the entire IP packet with an AH or ESP header and then with an additional IP header. The additional IP header contains the source and destination addresses of the tunnel endpoints. The next step is to decapsulate the packet after it reaches the tunnel endpoint and forward it to the final destination using the inner IP address. Through double encapsulation, tunnel mode proves to be a suitable method for protecting traffic between communicating networks. It is used when traffic travels through the Internet, which is an untrusted medium of communication. IPSec tunnel mode can be deployed in the following configurations:
configurations:
• Gateway to gateway
• Server to gateway
• Server to server
Tunnel mode can be used with either AH or ESP. The difference from the corresponding transport mode is that the packets are encapsulated twice.
SSL Architecture
SSL uses TCP to provide a reliable end-to-end secured solution. SSL consists of two layers of protocols. The SSL Record Protocol provides basic security services to the higher layers of the protocol stack; HTTP, or Hypertext Transfer Protocol, which provides transfer services for web client/server interactions, can operate on top of SSL. The three higher-layer SSL protocols are:
1. Handshake Protocol
2. Change Cipher Spec Protocol
3. Alert Protocol
SSL involves interaction between the client and the server. The process begins with the client contacting the server and sending the first message. This message causes the client and server to exchange a few messages to negotiate the encryption algorithm and choose an encryption key for it. Then the client data is shared with the communicating server, and thereafter the client and the server can exchange as much information as they want.
The communicating server must have an SSL certificate and a private key. The SSL certificate contains the public key and identifies the encryption algorithm, for example RSA. The public key is sent to the client when it connects, and the client uses it to encrypt the data it sends to the server.
SSL uses public-key techniques for data encryption and data integrity. But how can we check that the public key belongs to the person or entity who claims it? The solution is to use a certificate. The certificate acts as a link between the public key and the entity, and it is verified and signed by a trusted third party.
SSL is used on the Internet for sending emails in Gmail and while doing online shopping, banking and other e-commerce activities.
Let us see the steps involved in a web browser and web server connection using SSL (a client-side sketch follows the list):
• The browser connects to the server using SSL (https)
• The server responds with the certificate that contains the public key of the web server involved in the communication process
• The browser verifies the certificate by checking the signature of the certificate authority
• The browser uses the public key to agree on a session key with the server
• The web browser and the server encrypt the data flowing over the communication channel using the session key
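The same sequence can be observed from code. A client-side sketch using Python's standard ssl module, connecting to the hypothetical host example.com:

# TLS client sketch using only the Python standard library.
import socket
import ssl

context = ssl.create_default_context()   # loads trusted CAs, verifies the cert
with socket.create_connection(('example.com', 443)) as sock:
    with context.wrap_socket(sock, server_hostname='example.com') as tls:
        print(tls.version())                 # negotiated protocol, e.g. TLSv1.3
        print(tls.getpeercert()['subject'])  # identity from the certificate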
Digital certificates, also called Digital IDs, are the electronic counterparts of driver's licenses, passports and membership cards. A digital certificate can be presented electronically to prove one's identity or one's right to access information or services online. Digital certificates are used not only to identify people, but also to identify websites (crucial to e-business) and software that is being sent over the web.
Digital certificates bring trust and security when people are communicating or doing business on the internet. A PKI is often composed of many CAs linked by trust paths. The CAs may be linked in several ways: they may be arranged hierarchically under a "root CA" that issues certificates to subordinate CAs, or they can be arranged independently in a network. This makes up the PKI architecture.
The internet standard version of SSL is known as Transport Layer Security (TLS). The TLS record format is similar to the SSL record format. Like SSL, TLS is a cryptographic protocol responsible for providing end-to-end communication security over online networks. It is an IETF standard and prevents eavesdropping, tampering and message forgery. Some of the applications that use TLS are web browsers, instant messaging, e-mail and voice over IP (VoIP).
There are several differences between SSL and TLS:
• TLS is more efficient and secure than its predecessor SSL, due to stronger message authentication, key-material generation and other encryption algorithms.
• Unlike SSL, TLS supports additional key-exchange options such as pre-shared keys, secure remote passwords and Kerberos.
• TLS and SSL do not interoperate, although TLS does support backward compatibility for older devices that use SSL.
TLS involves various steps for secure communication between the user and the online network: the exchange of hello messages between the client and the server, the exchange of keys, the cipher message and the end message. This is how TLS has proved flexible enough to be used in various types of applications. There are three main components of TLS: encryption, authentication and integrity.
A TLS connection begins with a sequence known as a TLS handshake. The handshake process starts the same way as a TCP connection and then establishes a cipher suite for communication. A cipher suite is a set of algorithms specifying details such as the type of encryption key to be used for a session. TLS uses public-key cryptography to set up the encryption keys over the unencrypted channel. The handshake process also involves the server proving its identity to the client for the purpose of authentication.
Note: IETF stands for Internet Engineering Task Force, a global body concerned with evolving Internet architecture and ensuring smooth operations over the Internet.
Once the data has been encrypted and authenticated, the next step is to sign it using a Message Authentication Code (MAC). This can be understood through an example: suppose we buy a bottle of juice covered with tamper-proof foil. If the foil is intact, we have assurance that the bottle is sealed and unused. This is what a MAC provides in a communication channel.
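A MAC itself is simple to compute. A sketch using Python's standard hmac module, where the shared key stands in for the keys negotiated during the handshake:

import hashlib
import hmac

key = b'negotiated-shared-secret'
msg = b'encrypted application data'
tag = hmac.new(key, msg, hashlib.sha256).digest()  # the 'tamper-proof foil'

# The receiver recomputes the tag and compares in constant time
ok = hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest())
print(ok)   # True unless the message or the tag was altered in transit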
TLS 1.3 is the latest version of the protocol developed by the Transport Layer Security Working Group of the IETF to
combat the constantly increasing vulnerabilities. The new format is said to have more privacy, reduced latency, better
performance and increased security in the encrypted connections.
A digital signature is formed when a representation of a message is encrypted using the private key of the signatory; the encryption operates on the message digest rather than on the main body of the message.
The steps involved in the process of creating and verifying a digital signature are as follows (a sketch follows the list):
• The sender computes a message digest and encrypts it using the sender's private key; this forms the digital signature.
• Next, the sender transmits the digital signature along with the message.
• Then, the receiver decrypts the digital signature using the sender's public key, thereby regenerating the sender's message digest.
• Thereafter, the receiver computes a message digest from the message that has been received and confirms whether the two digests are the same.
When the receiver has successfully verified the digital signature, two things are known to the receiver:
• The message has not been modified or tampered with by a foreign entity during transmission.
• The message has been sent by the source that claims to have sent it.
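A minimal sign-and-verify sketch of these steps, using the third-party `cryptography` package (an assumption: pip install cryptography); the message text is of course only illustrative:

# Digital signature sketch with the `cryptography` package (assumed installed).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b'pay the bearer 100 rupees'
# Sender: digest the message and encrypt the digest with the private key
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Receiver: verify with the sender's public key; this raises
# InvalidSignature if the message or signature was tampered with in transit
public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
print('signature verified')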
Pretty Good Privacy, or PGP, introduced by Phil Zimmermann, is another confidentiality and authentication service used for electronic mail and file storage applications. Its use has been growing ever since its inception, for the following reasons:
• Pretty Good Privacy is available for free worldwide on a variety of platforms such as Windows, UNIX, Macintosh, etc.
• It is based on algorithms that have been extensively reviewed and are considered extremely secure. The package includes RSA, DSS and Diffie-Hellman for public-key encryption; CAST-128, IDEA and 3DES for symmetric encryption; and SHA-1 for hash coding.
• It can be used in a wide range of settings, from corporations wanting to select and enforce a standardized scheme for encrypting files and messages, to individuals seeking secure communication over networks.
• It is neither developed nor controlled by any government or standards organisation, which makes it attractive to those who are wary of "the establishment".
• PGP is on the Internet standards track, i.e. RFC 3156, MIME Security with OpenPGP.
Operation of PGP
The actual operation involves four services: authentication, confidentiality, compression and e-mail compatibility. Let us understand each of these security services in detail.
Authentication
The sequence of steps involved in the authentication process is as follows:
• Initially, the sender creates the message to be sent
• The SHA-1 algorithm is used to generate a 160-bit hash code of the message
• Next, the hash code is encrypted with RSA using the sender's private key
• The result of this encryption is prepended to the message
• The receiver decrypts and recovers the hash code using RSA with the sender's public key
• Thereafter, the receiver generates a new hash code for the message and compares it with the decrypted hash code. If the two match, the message is accepted as authentic.
The combination of SHA-1 and RSA has proved to be an effective digital signature scheme. The strength of RSA assures the recipient that only the possessor of the private key could have generated the signature, while the strength of SHA-1 assures the recipient that no third party can generate a new message matching the hash code and hence the digital signature.
Confidentiality
PGP provides confidentiality by encrypting messages that are transmitted or stored locally as files. In both situations, the symmetric encryption algorithm CAST-128 can be used; alternatively, IDEA or 3DES can be used for maintaining confidentiality.
Let us see the sequence of activities that take place within a communication process.
• The sender creates a message along with a random 128-bit number to be used as a one-time session key.
• The message is encrypted using CAST-128 (or IDEA or 3DES) with the session key.
• The RSA algorithm is used to encrypt the session key with the recipient's public key, and the result is prepended to the message.
• The receiver uses RSA with the private key to decrypt and recover the session key.
• The session key is used to decrypt the message.
Certain observations have been made regarding how PGP establishes confidentiality:
• To reduce encryption time, a combination of symmetric and public-key encryption is used in preference to encrypting the message directly with RSA. This is because CAST-128 and other symmetric algorithms are much faster than RSA or ElGamal.
• The public-key algorithm solves the session-key distribution problem, since only the recipient is able to recover the session key bound to the message.
• The use of one-time session keys strengthens the symmetric encryption, since each key encrypts only a single message.
PGP can also provide both confidentiality and authentication for the same message. In this case, a signature is generated for the plaintext message, and then the plaintext plus signature is encrypted using the CAST-128 scheme, with the session key itself encrypted using the RSA algorithm. This sequence is preferred over encrypting the message first and then generating a signature for the encrypted message: it is more convenient to store a signature with a plaintext version of a message, and verification can then be performed directly on the recovered plaintext.
Compression
We all face common problems such as overloaded mailboxes and insufficient space for file storage. PGP compresses the message after applying the signature but before encryption. This benefits both e-mail transmission and file storage, and the technique is also useful for saving space in email platforms and other online systems.
The significance of generating the signature before compression can be understood from the following reasons:
• Signing an uncompressed message makes future verification easier. If one signed a compressed document, one would be forced either to store the compressed version of the message for later verification or to recompress the message at the time of verification.
• If one wished to dynamically recompress the message for verification, PGP's compression algorithm would present a difficulty: the algorithm is not deterministic, as different implementations trade off running speed against compression ratio and consequently produce different compressed forms. Note, however, that these compression algorithms are interoperable, since any version of the algorithm can correctly decompress the output of any other version.
[Figure: (a) Generic transmission diagram (from A); (b) Generic reception diagram (to B)]
E-Mail Compatibility
When PGP is used, at least part of the block to be transmitted is encrypted. If only the signature service is used, then the message digest is encrypted with the sender's private key. If the confidentiality service is used, the message plus signature is encrypted with a one-time symmetric key. In either case, the result includes a sequence of arbitrary binary data, whereas many electronic mail systems only permit blocks of ASCII text and will not accept such binary sequences.
To accommodate this limitation, PGP uses an algorithm known as radix-64, which maps each 3 bytes of binary data into four 8-bit printable ASCII characters. This expands the message by 33%, but the compression applied earlier typically more than compensates for the expansion.
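Radix-64 is essentially the familiar Base64 encoding, and Python's standard base64 module shows both the 3-bytes-to-4-characters mapping and the 33% expansion:

import base64

binary = bytes(range(30))         # 30 bytes of arbitrary binary data
text = base64.b64encode(binary)   # every 3 binary bytes -> 4 ASCII characters
print(text)                       # 40 printable characters
print(len(text) / len(binary))    # 1.33..., i.e. about 33% expansion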
This chapter has provided insight into the various security offerings that facilitate communication over online networks. Every technique has its own variations and characteristics that define its uniqueness. As businesses keep expanding and the e-commerce industry booms, we will certainly witness more cryptographic algorithms and authentication mechanisms designed to protect us from various types of cyber threats.
SUMMARY
• Ciphertext is the secret code produced when the original data, or plaintext, is converted using cryptographic algorithms.
• The conversion of plaintext into ciphertext is called encryption and retrieving back the original data is called
decryption.
• For communicating over the online network, the sender shares a key with the receiver which can be a public
key or a private key.
• Encryption is of two types i.e. symmetric encryption and asymmetric encryption.
• Symmetric encryption involves the use of the same key on both sides, whereas asymmetric encryption is a system which involves a pair of dissimilar keys (a public key and a private key).
• The major function of asymmetric encryption is data integrity and message authentication.
• In DES, the data is encrypted in 64-bit blocks using a 56-bit key.
• The encryption technique 3DES involves the use of three keys and an actual key length of 168 bits.
• AES technique is available in many variants such as 128-bit keys (10 rounds), 192-bit keys (12 rounds) and 256-
bit keys (14 rounds).
• Public-key cryptography involves mathematical functions and computations for improving confidentiality,
key distribution and authentication. Some of the applications of public-key cryptography are encryption/
decryption, key exchange and digital signature.
• Diffie-Hellman Key exchange Algorithm enables key exchange between two users and is prone to Man-in-the-
Middle attack.
• RSA algorithm can be used for both public-key encryption as well as digital signatures. Some of the security
attacks on RSA are brute force, mathematical attacks, timing attacks and chosen ciphertext attacks.
• Hash function is expressed in the form of mathematical equation used for encryption in various applications
such as message authentication, digital signatures and password protection.
• The Message Digest algorithm (MD5) accepts input of arbitrary length and produces a message digest which is 128 bits long.
• SHA or Secure Hash Algorithm involves modular arithmetic and logical binary operations for providing security
service.
• Emails can be secured using GNU Privacy Guard, PGP and S/MIME technologies.
• S/MIME has various functions such as enveloped data, signed data, clear-signed data and signed and enveloped
data.
• The key functions of IPSec are authentication, confidentiality and key management.
• The attacks on cryptosystems are active attacks and passive attacks. Passive attacks are release of message
contents and traffic analysis whereas active attacks can be masquerade, replay attack, modification of messages
and denial of service.
• The main application of strong authentication is in identity access management.
• Kerberos is a security service that involves granting of ticket and authentication of user for establishing
communication with the servers.
• IPSec operates in various modes i.e. Authentication Header and Encapsulating Security Payload Transport
Modes and Tunnel Mode.
• SSL (Secure Socket Layer) is used for authenticating and securing user connections on the Internet. The Internet standard version of SSL is called TLS (Transport Layer Security).
• The key functions of Pretty Good Privacy (PGP) are authentication, confidentiality, compression and e-mail
compatibility.
KNOWLEDGE CHECK
Q.1. Select the right choice from the following multiple choice questions.
A. The raw information that is converted into a secret code is known as:
i. Plaintext
ii. Ciphertext
iii. Message
iv. Base data
B. The process of converting raw information into a secret code is known as:
i. Decryption
ii. Authentication
iii. Encryption
iv. Verification
C. The sender exchanges a ___________ with the receiver to ensure a secured communication.
i. Fingerprint
ii. Signature
iii. Password
iv. Key
D. If ‘n’ number of people want to communicate with each other in symmetric key encryption, then the
number of keys required will be computed as:
i. N(N+1)/2
ii. (N+1)/2
iii. (N-1)/2
iv. N(N-1)/2
F. Which of these attacks does not affect the security aspect of RSA:
i. Chosen ciphertext attacks
ii. Timing attacks
iii. Masquerade
iv. Brute force
G. Which of the following does not fall under the category of Active Attacks:
i. Denial of Service
ii. Replay
iii. Masquerade
iv. Release of message contents
H. Which of the given authentication services contains Authentication Server and a Ticket-Granting Server:
i. Strong Authentication
ii. Secure Socket Layer
iii. Kerberos
iv. Pretty Good Privacy
I. Which of these does not come under the category of Strong Authentication:
i. Password Authentication Protocol (PAP)
ii. Authentication Token
iii. Biometric Authentication
iv. Pretty Good Privacy
Q.6. Give examples of the types of active and passive attacks on cryptosystems.
UNIT 3
NETWORK SECURITY
• Explain relevant network security concepts, devices and terminologies
• Describe the vulnerabilities and attacks concerned with an organisation's network
• Describe common network security countermeasures and tools
• Distinguish between intrusion detection systems and intrusion prevention systems
• Implement a firewall
• Describe the Security Information and Event Management (SIEM) function
Further, networks can be wired or wireless, i.e. data is transmitted across the network either through wired media (also called guided media) or through wireless (unguided) media.
Today the use of wired and wireless networks has grown exponentially. Almost all businesses use computer networks for sharing information, and numerous business and personal transactions are conducted over the Internet every day. This creates a huge risk of information theft and other attacks on the intellectual assets of businesses and individuals.
These networks, ideally, should allow sharing of information and resources with authorized personnel only. However, they are prone to unauthorized access if they are not properly secured. Organizations have networks of computer systems that can be attacked from outside as well as from within the organization.
It is possible for attackers to take advantage of an unsecured hub/switch port to connect their device to the network. By doing this:
• The attacker can steal important information by sniffing data packets.
• The attacker can also flood the network with spurious information, leading to denial of service to the authorized personnel.
• The attacker can spoof the physical identities of the authorized personnel and then either steal their data or secretly pass/alter the communications between two parties without their knowledge, in the form of a 'man-in-the-middle' attack.
• There are times when malicious content or corrupt files are spread across the network to hack confidential information.
Did you know? FBI studies show that more than 80% of network security attacks could have been avoided if only the most basic steps were taken.
It has been observed that wireless networks are more vulnerable than wired networks, because a wireless network can be accessed without any physical connection.
Hence, there is a need to have an effective security mechanism in place to counter any threat that can occur. The systems also need to be updated over time so that they do not become predictable to attackers.
Network security is a specialized field that protects the usability, reliability, integrity, and safety of the networking
infrastructure by dealing with the various network security risks.
For anyone managing network security, a good understanding of networking is important. This includes some common terminology and protocols. Let us review these in brief.
• Connection: In networking, when pieces of related information are transferred through a network, we say that a
connection has occurred. This means that a connection is built before the data transfer and then it is deconstructed
at the end of the data transfer. A secured connection is very important for maintaining the effectiveness of
communication transfer over the network.
• Packet: Generally speaking, a packet is the basic unit transferred over a network. Packets are envelopes that carry data (in pieces) from one endpoint to the other in order to communicate over a network. Packets have the following components:
- A header portion containing metadata and routing information, such as the IP addresses of the source and destination.
- The main body, which contains the payload, i.e., the actual data being transferred.
- The trailer, also called the footer, which contains a few bits that tell the receiver it has reached the end of the packet.
• Port: A port is an address on a network device that can be associated to a specific piece of software. It is not
a physical interface or a location, but it allows the server to be able to communicate using more than one
application.
• LAN (Local Area Network): It refers to a network or a part of a network that is not publicly accessible to the
greater internet. A home or office network is an example of LAN.
• WAN (Wide Area Network): A WAN is a more extensive network than a LAN. It is the term used for large, dispersed networks. The internet, as a whole, can be called a WAN.
• VPN (Virtual Private Network): It is a means of connecting separate LANs through the internet, while maintaining
privacy. This is used as a means of connecting remote systems as if they were on a local network, often for security
reasons.
• Firewall: A firewall is a program that decides whether traffic coming into a server or going out should be allowed.
A firewall usually works by creating rules that decide which type of traffic is acceptable on which ports. Generally,
firewalls block ports that are not used by a specific application on a server.
• Password: Nowadays, almost all e-platforms ask for a username and password for logging in to a portal. From the organizational perspective, there should be a strong and secure database capable of storing multiple passwords. Alternatively, the database can store a hash of the password rather than the password itself. Thereafter, whenever the user logs in, the entered password is hashed and compared to the stored hash value in the organizational database. The user is successfully logged in once the authentication process is complete (a short sketch of this approach follows this list).
• IP (Internet Protocol) Addresses: In a network it is very important for each entity to have an identification, called an address. Each computer/device within the network has two types of addresses:
1. The logical address, also known as the IP address (Internet Protocol address). It is a virtual address that can be viewed by the user and is used as a reference to the physical address.
2. The physical address, which is the hardware address of the network interface, also known as the MAC address (Media Access Control). The user cannot directly view the physical address; it is accessed via its corresponding logical address.
IP addresses are managed by the Internet Assigned Numbers Authority (IANA), which has overall responsibility for the IP address pool, and by the Regional Internet Registries (RIRs), to which IANA distributes large blocks of addresses.
• NAT (Network Address Translation): It is a way to translate incoming requests at a routing server to the relevant devices or servers that it knows about in the LAN. This is usually implemented in physical LANs as a way to route requests through one IP address to the necessary backend servers.
There are three ways to configure NAT:
Static NAT – A single unregistered (private) IP address is mapped to a legally registered (public) IP address, i.e., a one-to-one mapping between local and global addresses. This is generally used for web hosting. It is not practical for whole organisations, because every device that needs Internet access would require its own public IP address. For example, if 3,000 devices need access to the Internet, the organisation would have to buy 3,000 public addresses, which would be very costly.
Dynamic NAT – An unregistered IP address is translated into a registered (public) IP address from a pool of public IP addresses. If no address in the pool is free, the packet is dropped, as only a fixed number of private IP addresses can be translated at a time. For example, with a pool of 2 public IP addresses, only 2 private IP addresses can be translated at a given time; if a third private host tries to access the Internet, its packets are dropped. Dynamic NAT therefore suits situations where the number of simultaneous Internet users is fixed, but it is also costly, because the organisation has to buy many global IP addresses to build the pool.
Port Address Translation (PAT) – Also known as NAT overload. Many local (private) IP addresses are translated to a single registered public IP address, with port numbers used to distinguish which traffic belongs to which internal host. This is the most frequently used option because it is cost effective: thousands of users can be connected to the Internet using only one real global (public) IP address (see the sketch after this list).
• Network interface: A network interface can be the interface between software or hardware. It could also be
between two pieces of equipment in a network or between protocol layers of a network. It usually has a network
ID and a node ID associated. Its function is to make a connection or disconnection and pass data. Interfaces
are networking communication points for a computer. Each interface is associated with a physical or virtual
networking device. Typically, a server will have one configurable network interface for each Ethernet or wireless
internet card. In addition, it will define a virtual network interface called the ‘loopback’ or localhost interface. This is used as an interface for applications and processes running on a single computer to communicate with one another.
• Network Protocols and Standards: A protocol is a set of rules and standards that define a language that can be
used to communicate. There are a great number of protocols used extensively in networking, and they are often
implemented in different layers. Some low-level protocols are TCP, UDP, IP, and ICMP. Some familiar examples of application layer protocols, built on these lower protocols, are HTTP (for accessing web content), SSH, TLS/SSL, and FTP.
Protocols and standards are vital to the implementation of data communications and networking. Protocols refer
to the rules; a standard is a protocol that has been adopted by vendors and manufacturers. Network models serve
to organize, unify, and control the hardware and software components of data communications and networking.
Although the term "network model" suggests a relationship to networking, the model also encompasses data
communications. The two dominant networking models are the OSI reference model and the TCP/IP model. The first is a theoretical framework; the second is the actual model used in today's data communications.
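To make the password-hashing approach described in the Password entry above concrete, here is a minimal sketch using only Python's standard library. The function names, salt size and iteration count are illustrative choices, not requirements from any particular product.

```python
# Minimal sketch of salted password hashing with the Python standard
# library. The database stores (salt, hash), never the password itself.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash) for storage in the credentials database."""
    salt = os.urandom(16)  # a fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Re-hash the entered password and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(digest, stored)

salt, stored = hash_password("s3cret!")
print(verify_password("s3cret!", salt, stored))  # True  -- login succeeds
print(verify_password("wrong", salt, stored))    # False -- login rejected
```

Similarly, the bookkeeping behind Port Address Translation (PAT) can be pictured as a translation table that maps each assigned public source port back to the internal host. This is a simplified teaching model, not a router implementation; the addresses and port range below are invented for the example.

```python
# Simplified sketch of a PAT (NAT overload) translation table: many
# private (ip, port) pairs share one public IP address, distinguished
# by the public source port the translator assigns.
import itertools

PUBLIC_IP = "203.0.113.10"           # the single public address (example)
_ports = itertools.count(40000)      # pool of public source ports

table: dict[int, tuple[str, int]] = {}  # public_port -> (private_ip, port)

def outbound(private_ip: str, private_port: int) -> tuple[str, int]:
    """Rewrite an outgoing packet's source address."""
    public_port = next(_ports)
    table[public_port] = (private_ip, private_port)
    return PUBLIC_IP, public_port

def inbound(public_port: int) -> tuple[str, int]:
    """Route a reply back to the right internal host."""
    return table[public_port]

src = outbound("192.168.1.25", 51000)   # e.g. ('203.0.113.10', 40000)
print(src, "->", inbound(src[1]))       # reply returns to 192.168.1.25:51000
```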
TCP/IP Model
You may already know that the TCP/IP suite is the commonly used industry standard for connecting hosts, networks and the Internet. TCP/IP focuses on building an interconnection of networks, called an internetwork, that is capable of providing universal communication over heterogeneous physical networks. This facilitates communication between hosts separated by a large geographical area.
TCP/IP acts as a communication link between the programming interface of a physical network and user applications. The TCP/IP model, more commonly known as the Internet protocol suite, is a simpler layering model that has been widely adopted. This layered structure is referred to as a protocol stack.
It defines four separate layers:
1. Application Layer: In this model, the application layer is responsible for creating and transmitting user data
between applications.
2. Transport Layer: The transport layer is responsible for data transfer between the application program running
on the client and the application program running on the server. This level of networking utilises ports to address
different services. It can build up unreliable or reliable connections depending on the type of protocol used.
3. Network (or Internetwork) Layer: The internet layer or internetwork layer is used to transport data from node
to node in a network. This layer is aware of the endpoints of connections but does not worry about the actual
connection needed to get from one place to another. IP addresses are defined in this layer as a way of reaching
remote systems in an addressable manner.
4. Network Interface/Link Layer: The network interface layer or data-link layer or simply link layer acts as the
interface to the actual network hardware. This layer implements the actual topology of a local network that allows
the internet layer to present an addressable interface. It establishes connections between neighbouring nodes to
send data and may not necessarily provide reliable delivery.
[Figure: TCP/IP layered communication between two hosts — the peer Application, Transport and Network layers on each host exchange data via the application protocol, TCP and IP respectively.]
In the case of the TCP/IP layers, security controls have to be deployed at each layer. This is because if any one layer is attacked, the other layers will not be aware of it, and communication will be compromised. Hence, in order to deal with the risks, one has to understand and address the security vulnerabilities and threats at each TCP/IP layer.
Hijacking
HTTP (Hypertext Transfer Protocol), the application layer protocol of the TCP/IP suite on which the World Wide Web (WWW) is built, is used to transfer the files that make up web pages from web servers. When a user opens a website by entering its URL, request messages are sent to the web server using HTTP for the webpage the user asked for. The web server then responds by delivering the requested content.
A common HTTP vulnerability is weak authentication between the client and the web server during session initialization. This vulnerability can lead to a session hijacking attack, in which the attacker steals the HTTP session of a legitimate user by capturing the packets with a packet sniffer. A successful hijack gives the attacker full access to the HTTP session.
Cookie Poisoning
Cookies are small files stored by certain websites in the computer of the user. They help in identifying the users,
providing them easy access to the particular website and even customizing the web pages for the user.
Cookie poisoning occurs when an attacker modifies or steals a cookie from the user’s computer to access the personal information it contains, which could include a password or a user ID. The attacker can then use the cookie on his/her own machine and access unauthorized information, because the website will not ask for any authentication due to the presence of the cookie.
Web Application Firewalls (WAF) are used to detect and block cookie poisoning attacks.
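One common server-side defence, complementary to a WAF, is to sign the cookie value so that tampering becomes detectable. The following is a minimal sketch using Python's standard library; the secret key and the "value|signature" cookie format are illustrative assumptions, not a specific product's scheme.

```python
# Sketch: detecting cookie tampering by appending an HMAC signature to
# the cookie value. If an attacker edits the value, the signature no
# longer matches. SECRET_KEY and the cookie format are illustrative.
import hashlib
import hmac

SECRET_KEY = b"server-side-secret"  # kept on the server, never sent out

def make_cookie(value: str) -> str:
    sig = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}|{sig}"

def read_cookie(cookie: str) -> str | None:
    value, _, sig = cookie.rpartition("|")
    expected = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(sig, expected) else None

cookie = make_cookie("user_id=42")
print(read_cookie(cookie))                   # 'user_id=42' -- accepted
print(read_cookie("user_id=1|" + "0" * 64))  # None -- poisoned cookie rejected
```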
Replay attack
In a replay attack, an attacker intercepts a user's data transmission and then re-uses that information for his/her own benefit. It is a type of man-in-the-middle attack that goes further than a simple hijack, because the resent data can be modified to produce different results. The attacker could also spoof the client's IP address and carry out the replay from his/her own machine.
There are ways to prevent this, such as having the web server keep track of sessions or issue unique, single-use session identifiers.
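The unique-session-identifier idea can be sketched as a single-use nonce: the server issues a random token and rejects any message whose token has already been consumed, so a replayed capture is refused. A minimal illustration, not tied to any specific framework:

```python
# Sketch: defeating replay with single-use nonces. A replayed message
# carries a nonce the server has already consumed, so it is rejected.
import secrets

issued: set[str] = set()
used: set[str] = set()

def issue_nonce() -> str:
    nonce = secrets.token_hex(16)
    issued.add(nonce)
    return nonce

def accept(message: str, nonce: str) -> bool:
    if nonce not in issued or nonce in used:
        return False  # unknown or already-used nonce: possible replay
    used.add(nonce)
    return True       # process the message exactly once

n = issue_nonce()
print(accept("transfer $10", n))  # True  -- the legitimate first use
print(accept("transfer $10", n))  # False -- the replayed copy is refused
```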
Cross-Site Scripting
In this type of attack, the attacker identifies web applications or browsers that are vulnerable and injects a malicious script into them. This script can conduct a session hijack and steal the information and cookies of legitimate users who visit the website.
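The standard first defence is to escape user-supplied input before reflecting it into a page, so that an injected script is rendered as inert text rather than executed. A minimal sketch with Python's standard library; the injected payload is a harmless stand-in.

```python
# Sketch: neutralising script injection by HTML-escaping user input
# before embedding it in a page. html.escape is from the standard library.
import html

user_input = "<script>steal(document.cookie)</script>"  # stand-in payload

unsafe_page = f"<p>Hello {user_input}</p>"             # script would execute
safe_page = f"<p>Hello {html.escape(user_input)}</p>"  # shown as plain text

print(safe_page)
# <p>Hello &lt;script&gt;steal(document.cookie)&lt;/script&gt;</p>
```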
• DNS cache poisoning: The attacker corrupts a DNS server's cache so that it maps a domain name to a wrong IP address and diverts the requests to another site, which could be a fraudulent site that looks similar to the real web site. If the user remains unaware and enters his/her user ID and password, the attacker can then steal them.
• DNS spoofing: This refers to faking the IP address of a computer to match the DNS server’s IP address. Then
user requests are directed to the wrong machine. Here the hacker’s machine will impersonate the DNS server and
reply to all user requests and misdirect them.
• DNS ID Hijacking: The terms DNS hijacking and DNS spoofing are often used interchangeably. DNS hijacking tricks the user into believing that they are connecting to a legitimate domain name.
The TCP three-way handshake also has a security weakness: TCP sequence numbers can be predicted. This is possible because the sequence number is incremented by a constant amount per second and by half that amount each time a connection is initiated. An attacker can connect to the server legitimately to observe a sequence number, guess the next one, and then perform a session hijack and TCP injection attacks.
• TCP blind spoofing is another form of hijacking, in which an attacker is able to guess both the port number and the sequence number of a session in progress and can carry out an injection attack.
• SYN Flood exploits another flaw in the three-way handshake: multiple SYN packets are spoofed using a source address that does not exist and sent to the target server. After receiving the fake SYN packets, the server replies with SYN-ACK packets to the unreachable source address. This creates a large number of half-open sessions, because the ACK packets the server expects in order to complete the handshake never arrive. The server can become overloaded or eventually crash; it will not allow any further connections to be established, and legitimate user connection requests will be dropped, leading to a denial of service attack.
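An administrator can spot a SYN flood in progress by counting half-open (SYN-RECV) sockets on the server. A hedged sketch for Linux, assuming the iproute2 `ss` utility is installed; the alert threshold of 100 is an arbitrary example value.

```python
# Sketch: counting half-open TCP sessions on Linux via the iproute2
# 'ss' tool, filtering for sockets in the SYN-RECV state. Assumes 'ss'
# is on the PATH; the threshold below is an arbitrary example.
import subprocess

def half_open_count() -> int:
    out = subprocess.run(
        ["ss", "-n", "state", "syn-recv"],
        capture_output=True, text=True, check=True,
    ).stdout
    return max(len(out.strip().splitlines()) - 1, 0)  # minus header line

count = half_open_count()
if count > 100:
    print(f"Possible SYN flood: {count} half-open connections")
else:
    print(f"{count} half-open connections (normal)")
```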
Devices on the network are uniquely identified by IP addresses and a subnet mask. An attacker can spoof an IP
address and carry out a man-in-the-middle attack. The attacker can even hijack a connection session. Given below
are some common network layer attacks.
Teardrop attack
This attack is a type of denial-of-service (DoS) attack which works by slowly sending a series of fragmented packets to a target device. It overwhelms the target device with incomplete data so that it crashes. Other versions of the teardrop attack are NewTear, Nestea, SynDrop and Bonk.
Switches maintain a Content Addressable Memory (CAM) table that maps the MAC addresses available on physical ports with their associated VLAN parameters (Security, CISCO Systems 2002). The table can only store a fixed amount of information. A hacker takes advantage of the fixed memory size by filling the table with more entries than it can handle, causing it to overflow. This attack is called a CAM flood or MAC flooding attack.
Address Resolution Protocol (ARP) Attack
ARP is used in the data link layer to convert IP addresses to their corresponding MAC addresses. The user sends a broadcast ARP message requesting the MAC address for a given IP address. This message is broadcast by the switch to all ports except the source port. The host with the intended destination IP address receives the ARP message and replies with the corresponding MAC address; all other hosts on the switch drop the packet. Gratuitous ARP is a type of ARP that is used by hosts to broadcast their IP address to the network in order to avoid duplication.
ARP Spoofing: An attacker can abuse Gratuitous ARP as there is no authentication in the ownership of either IP or
MAC address. Due to this, an attacker could spoof an ARP packet to broadcast an IP and MAC address of an already
existing host. This will lead to an IP conflict and the legitimate user is not allowed into the network, which is a denial
of service.
ARP cache poisoning: ARP keeps its physical to logical bindings in an ARP cache. ARP cache poisoning occurs when
an attacker modifies this table and gives incorrect mappings. When the user’s machine tries to send data, it checks in
the poisoned cache and sends the data to an attacker.
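ARP spoofing often leaves a telltale symptom: one MAC address answering for several IP addresses in the local ARP cache. A hedged sketch for Linux that parses the output of `ip neigh` (iproute2 assumed present; the field layout can vary between systems). Note that one MAC serving several IPs can also be legitimate (e.g. a router), so treat a match as a prompt for investigation, not proof of attack.

```python
# Sketch: flagging a possible ARP-spoofing symptom -- one MAC address
# bound to multiple IPs in the local ARP cache. Parses Linux 'ip neigh'
# output; assumes the usual iproute2 field layout.
import subprocess
from collections import defaultdict

out = subprocess.run(["ip", "neigh"], capture_output=True, text=True).stdout

macs = defaultdict(list)  # mac -> list of IPs claiming it
for line in out.splitlines():
    fields = line.split()
    if "lladdr" in fields:  # only entries that carry a MAC address
        ip = fields[0]
        mac = fields[fields.index("lladdr") + 1]
        macs[mac].append(ip)

for mac, ips in macs.items():
    if len(ips) > 1:
        print(f"Warning: {mac} answers for several IPs: {', '.join(ips)}")
```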
OSI (Open Systems Interconnection) is a logical representation of how network systems send data and communicate with each other; it ensures the interoperability of diverse communication systems using standard protocols. The “7 layers” of the OSI model describe how network systems are supposed to communicate with each other.
The 7 layers in this model and their relation to the TCP/IP model are as follows:
Layer 7 – Application, Layer 6 – Presentation and Layer 5 – Session correspond to the TCP/IP Application layer.
Layer 4 – Transport corresponds to the TCP/IP Transport layer.
Layer 3 – Network corresponds to the TCP/IP Internet (internetwork) layer.
Layer 2 – Data Link and Layer 1 – Physical correspond to the TCP/IP Network Interface/Link layer.
Now that we have understood some of the common types of attacks that a network is vulnerable to, let us look at some of the measures that can be taken in order to achieve network security.
The International Telecommunication Union (ITU) has provided recommendations on security architecture in X.800, defining mechanisms to achieve network security and bring about standardization.
The “SECURITY ARCHITECTURE OPEN SYSTEMS INTERCONNECTION FOR CCITT APPLICATIONS – Recommendation
X.800” can be downloaded from the following link:
[Link]
Some fundamental measures are given below based on which network security solutions can be customised.
Firewall
The term ‘firewall’ came into being in 1764, describing the walls that separated the parts of a building most prone to fire (such as the kitchen) from the rest of the structure. These physical barriers prevented fire from spreading throughout a building, thereby saving lives and property. Before the introduction of dedicated firewalls, routers were used in the 1980s for ensuring network security.
A firewall is a device that mediates communication between networks, such as a private LAN and the public internet, as per a defined security policy. Firewalls determine which services may be accessed (or attacked) from the outside. It is crucial for a firewall to decide which traffic to block and which to permit, thus acting like a security guard for the user's network. A firewall provides a network administrator with data about the kind and amount of traffic that passes through it, the number of attempts made to break into it, and much more. These security mechanisms not only prevent unauthorised access but also monitor sniffing activities and help identify the entities attempting to breach security.
The key functions of a firewall are:
• Blocking incoming data that might contain a hacker attack
• Hiding information about the network by making it seem that all outgoing traffic originates from the firewall rather than the network. This is termed Network Address Translation (NAT).
• Screening outgoing traffic to limit the use of the Internet and access to remote sites.
However, firewalls are no cure-all solution to network security woes. A firewall is only as good as its rule set, and there are many ways an attacker can find common misconfigurations and errors in the rules. For example, if a firewall blocks all traffic except traffic originating from port 53 (DNS) so that everyone can resolve names, the attacker can use this rule to his/her advantage: by changing the source port of the attack or scan to port 53, the attacker gets all of the traffic through, because the firewall assumes it is DNS traffic. Bypassing firewalls is a whole study in itself, and one which is very interesting (especially to those with a passion for networking) because it normally involves misusing the way TCP and IP are supposed to work. That said, firewalls today are becoming very sophisticated, and a well-installed firewall can severely thwart a would-be attacker's plans. It is important to remember that a basic packet-filtering firewall does not look into the data section of the packet. Thus, if one has a web server that is vulnerable to a CGI exploit and the firewall is set to allow traffic to it, there is no way the firewall can stop an attacker from attacking the web server, since it does not inspect the data inside the packet. That would be the job of an intrusion detection system.
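The rule-set logic described above can be illustrated with a first-match evaluation loop. This is a teaching sketch, not any vendor's engine; the rules, field names and default-deny behaviour are invented for the example.

```python
# Sketch: first-match packet filtering. Each rule names a direction,
# protocol, destination port and an action; the first matching rule
# decides the packet's fate. None acts as a wildcard. Example rules only.
RULES = [
    {"dir": "in", "proto": "tcp", "dport": 80,   "action": "allow"},  # web
    {"dir": "in", "proto": "tcp", "dport": 443,  "action": "allow"},  # https
    {"dir": "in", "proto": "udp", "dport": 53,   "action": "allow"},  # dns
    {"dir": "in", "proto": None,  "dport": None, "action": "deny"},   # default
]

def filter_packet(direction: str, proto: str, dport: int) -> str:
    for rule in RULES:
        if rule["dir"] != direction:
            continue
        if rule["proto"] not in (None, proto):
            continue
        if rule["dport"] not in (None, dport):
            continue
        return rule["action"]  # first matching rule wins
    return "deny"              # nothing matched: fail closed

print(filter_packet("in", "tcp", 443))  # allow
print(filter_packet("in", "tcp", 23))   # deny -- telnet blocked by default
```

Note that the sketch, like the packet filters it models, looks only at header fields; it never inspects the payload.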
Anti-virus
There is no introduction needed for a desktop version of antivirus packages like Norton Antivirus and McAfee. The
way these operate is fairly simple -- when researchers find a new virus, they figure out some unique characteristic it
has (maybe a registry key it creates or a file it replaces) and out of this they write the virus ‘signature’.
The whole load of signatures for which the antivirus software scans is known as the virus ‘definitions’. This is the
reason why keeping virus definitions up-to-date is very important. Many antivirus packages have an auto-update
feature to download the latest definitions. The scanning ability of the software is only as good as the currency of its definitions. In the enterprise, it is very common for administrators to install antivirus software on all machines but to have no policy for regular updates of the definitions. This is meaningless protection and serves only to provide a false sense of security.
With the recent spread of email viruses, antivirus software at the mail server is becoming increasingly popular. The
mail server will automatically scan any email it receives for viruses and quarantines the infections. The idea is that
since all mail passes through the mail server, this is the logical point to scan for viruses. Given that most mail servers
have a permanent connection to the internet, they can regularly download the latest definitions. On the downside, these scanners can be evaded quite simply: if the attacker zips up the infected file or Trojan, or encrypts it, the antivirus system may not be able to scan it.
End users must be taught how to respond to antivirus alerts. This is especially true in the enterprise -- an attacker
doesn't need to try and bypass the user’s fortress-like firewall if all he has to do is email Trojans to a lot of people in
the company. It takes just one uninformed user to open the infected package to allow the hacker a backdoor to the
internal network.
It is advisable that the IT department gives a brief seminar on how to handle email from untrusted sources and how
to deal with attachments. These are very common attack vectors, simply because the user may harden a computer
system as much as he/ she likes, but the weak point still remains the user who operates it. As crackers say, "The human
is the path of least resistance into the network."
Intrusion Detection System
There are basically two types of Intrusion-Detection Systems (IDS):
Host-Based IDS:
These systems are installed on a particularly important machine (usually a server or some other high-value target) and are tasked with making sure that the system state matches a particular baseline. For example, the popular file-integrity
checker Tripwire is run on the target machine just after it has been installed. It creates a database of file signatures
for the system and regularly checks the current system files against their known safe signatures. If a file has been
changed, the administrator is alerted. This works very well because most attackers will replace a common system file
with a Trojan version to give them a backdoor access.
Network-Based IDS:
These systems are more popular and quite easy to install. Basically, they consist of a normal network sniffer running
in promiscuous mode. (In this mode, the network card picks up all traffic even if it is not meant for it.) The sniffer is
attached to a database of known attack signatures, and the IDS analyses each packet that it picks up to check for
known attacks. For example, a common web attack might contain the string /system32/cmd.exe? in the URL. The IDS will have a match for this in the database and will alert the administrator.
Newer versions of IDS support active prevention of attacks. Instead of just alerting an administrator, the IDS can dynamically update the firewall rules to disallow traffic from the attacking IP address for some amount of time, or the IDS can use ‘session sniping’ to fool both sides of the connection into closing down so that the attack cannot be completed.
Unfortunately, IDS systems generate a lot of false positives. A false positive is basically a false alarm, where the IDS
sees legitimate traffic and for some reason matches it against an attack pattern.
This tempts a lot of administrators into turning them off or even worse -- not bothering to read the logs. This may
result in an actual attack being missed.
IDS evasion is also not all that difficult for an experienced attacker. The signature is based on some unique feature of the attack, and an attacker can modify the attack so that the signature is no longer matched. For example, the above attack string /system32/cmd.exe? could be rewritten in percent-encoded hexadecimal to look something like:
'%2f%73%79%73%74%65%6d%33%32%2f%63%6d%64%2e%65%78%65%3f'
This might be totally missed by the IDS. Furthermore, an attacker could split the attack across many packets by fragmenting them, so that each packet contains only a small part of the attack and the signature does not match. Even if the IDS is able to reassemble fragmented packets, this creates a time overhead, and since the IDS has to run at near real-time speed, it tends to drop packets while processing. IDS evasion is a topic for a paper on its own.
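Modern IDS engines counter the encoding trick above by normalising (decoding) traffic before matching signatures. A minimal sketch of the idea using the standard library's URL decoder; the signature string is the example used above.

```python
# Sketch: why an IDS must normalise traffic before signature matching.
# A naive substring match misses the percent-encoded attack; decoding
# the input first restores the match.
from urllib.parse import unquote

SIGNATURE = "/system32/cmd.exe?"
encoded = "%2f%73%79%73%74%65%6d%33%32%2f%63%6d%64%2e%65%78%65%3f"

print(SIGNATURE in encoded)           # False -- the raw match is evaded
print(SIGNATURE in unquote(encoded))  # True  -- match after normalising
```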
The advantage of a network-based IDS is that it is very difficult for an attacker to detect. The IDS itself does not need
to generate any traffic and in fact, many of them have a broken TCP/IP stack so that they don't have an IP address.
Thus, the attacker does not know whether the network segment is being monitored or not.
Demilitarized Zones
In computer networking, a demilitarized zone (DMZ) is a special local network configuration designed to improve security by segregating computers on each side of a firewall. Also known as a perimeter network or a screened subnetwork, it is a physical or logical subnet that separates an internal local area network (LAN) from other untrusted networks, usually the internet. External-facing servers, resources and services are located in the DMZ, so they are accessible from the internet while the rest of the internal LAN remains unreachable. This provides an additional layer of security to the LAN, as it restricts the ability of hackers to directly access internal servers and data via the internet.
Any service provided to users on the public internet should be placed in the DMZ network. Some of the most
common of these services include web servers and proxy servers, as well as servers for email, domain name system
(DNS), File Transfer Protocol (FTP) and voice over IP (VoIP).
The systems running these services in the DMZ are reachable by hackers and cybercriminals around the world and
need to be hardened to withstand constant attack. The term DMZ comes from the geographic buffer zone that was
set up between North Korea and South Korea at the end of the Korean War.
DNSSEC
Domain name system security extensions (DNSSEC) are a set of protocols that make the traditional domain name
system (DNS) more secure. As we know DNS resolves hostnames into IP addresses, but is vulnerable to attacks
because it works by using unencrypted data for DNS records. DNSSEC is a security system that has been developed
in the form of extensions that could be added to existing DNS protocols. The extensions can:
• authenticate the origin of data sent from a DNS server
• verify the integrity of data
• authenticate nonexistent DNS data.
However, DNSSEC cannot protect how the data is distributed and who can access the data.
DNSSEC uses a system of public keys and digital signatures to verify data. The public keys can also be used by security systems to encrypt data when it is sent through the Internet and to decrypt it when it is received. Note, however, that DNSSEC itself does not include encryption algorithms, so it cannot protect the privacy or confidentiality of data.
New types of records have to be created for the implementation of DNSSEC, such as:
• DS
• DNSKEY
• NSEC
• RRSIG
The RRSIG record is the digital signature; it stores the signature information used for validation of the accompanying data. The signature contained in the RRSIG record is verified against the public key in the DNSKEY record. The NSEC family of records, including NSEC, NSEC3 and NSEC3PARAM, is then used as an additional reference to thwart DNS spoofing attempts. The DS record is used to verify keys for subdomains.
The process used for a DNSSEC lookup varies as per the type of server used to send the request. For all processes
the verification of DNSSEC keys requires starting points called trust anchors. Trust anchors are included in operating
systems or other trusted software.
After a key is verified through the trust anchor, it must also be verified by the authoritative name server through the
authentication chain, which consists of a series of DS and DNSKEY records.
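To see these records in practice, one can query them directly. A hedged sketch using the third-party dnspython library (installed with `pip install dnspython`); it needs network access, and the example domain is assumed to be a DNSSEC-signed zone.

```python
# Sketch: fetching DNSSEC-related records with the third-party
# dnspython library (pip install dnspython). Requires network access;
# the domain below is assumed to be DNSSEC-signed.
import dns.resolver

domain = "example.com"

for rtype in ("DNSKEY", "DS", "RRSIG"):
    try:
        answer = dns.resolver.resolve(domain, rtype)
        print(f"{rtype}: {len(answer)} record(s) returned")
    except Exception as exc:
        print(f"{rtype}: lookup failed ({exc})")
```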
To enable DNSSEC, registrars must have this technology enabled not only in their domain name infrastructure, but on
the DNS server as well. ICANN has an updated list of domain registrars who support DNSSEC. This can be accessed
from the following link:
[Link]
One of the easiest and fastest ways to enable DNSSEC is by using Cloudflare. Cloudflare makes the complex DNSSEC
activation process really easy.
[Link]
We have also learnt about digital certificates in the previous chapter; these are gaining importance with the growing use of online services and e-commerce, and a corresponding increase in electronic transactions. The use of PKI technology to support digital signatures can help increase confidence in electronic transactions. For example, a digital signature allows a seller to provide assurance that goods or services were requested by a buyer, and therefore the seller can demand payment. It allows parties without prior knowledge of each other to engage in verifiable transactions.
By verifying the validity of the certificate, the vendor ensures receipt of a valid public key for the buyer. By verifying the signature on the purchase order, the vendor ensures the order was not altered after the buyer issued it. Once the validity of the certificate and signature is established, the vendor can ship the requested goods to the buyer with the knowledge that the buyer ordered them. This transaction can occur without any prior business relationship between buyer and seller.
Smart cards
Smart cards are typically credit-card-sized cards that contain a small amount of memory and sometimes a processor. Since smart cards contain more memory than a typical magnetic stripe and can process information, they are used in security situations where these features are a necessity. They can be used to hold system logon information, such as a user's private key, along with other personal information, including passwords. In a typical smart card logon environment, the user is required to insert his/her smart card into a reader device connected to the computer. The software then uses the information stored on the smart card for authentication. When paired with a password and/or a biometric identifier, the level of security is increased. For example, requiring the user to simply enter a password for logon is less secure than having them insert a smart card and enter a password. File encryption utilities that use the smart card as the key to the electronic lock are another security use of smart cards.
Secure code
Electronic software distribution over any network involves potential security problems. Software can contain malicious programmes, such as viruses and Trojan horses. To help address some of these problems, one can associate digital signatures with the files. A digital certificate is a means of establishing identity via public key cryptography. Code signed with a digital certificate verifies the identity of the publisher and ensures that the code has not been tampered with after it was signed. Certificates and object signing establish identity and let the user make decisions about the validity of a person's identity. When the user executes the code for the first time, a dialog box appears, providing information on the certificate and a link to the certificate authority. Microsoft developed the Authenticode technology, which enables developers and programmers to digitally sign software. Before software is released to the public or internally within an organisation, developers can digitally sign the code. If the software is modified after it is digitally signed, the signature becomes invalid. In Internet Explorer, one can specify security settings that prevent users from downloading and running unsigned software from any security zone. Internet Explorer can also be configured to automatically trust certain software vendors and authorities so that their software and other information is automatically accepted.
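The verify-before-trust step that code signing performs can be shown in miniature with the third-party `cryptography` package (`pip install cryptography`): sign a file's bytes with a private key, then verify them with the public key. This is a generic illustration of the principle, not the Authenticode format itself.

```python
# Sketch of code signing in miniature, using the third-party
# 'cryptography' package (pip install cryptography). Any modification
# of the signed bytes invalidates the signature.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

code = b"print('hello world')"  # the 'software' being shipped
signature = private_key.sign(code, padding.PKCS1v15(), hashes.SHA256())

def check(data: bytes) -> str:
    try:
        public_key.verify(signature, data, padding.PKCS1v15(), hashes.SHA256())
        return "signature valid"
    except InvalidSignature:
        return "signature INVALID -- code modified after signing"

print(check(code))                   # signature valid
print(check(code + b" # tampered"))  # signature INVALID -- code modified
```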
Standby servers
It is possible to set up a standby server in case the production server fails. The standby server should mirror the
production server. One can use the standby server to replace the production server in the event of a failure or as
a read-only server. Create the standby server by loading the same operating system and applications as on the
production server. Make backups of data on the production server and restore these backups on the standby server.
This also helps to verify backups that are performed.
The standby server will have a different IP address and name if it is connected to the network. The name and IP address of the standby server will have to be changed if the production server fails and the standby server needs to become the production server. To maintain the standby server, regular backups and restorations need to be performed. For example, say a full backup is created on Mondays and incremental backups on the other days of the week. Restore the full backup on the standby server, and thereafter restore each incremental backup on the day it is created.
Proxy server
A proxy server is a server, with its own IP address, that acts as a go‑between or intermediary between a user who
sends a web request through the internet and the web server or servers that have that information in the form of a
webpage.
The proxy server undertakes the web request on behalf of the user, collates the response from the target web server
or servers and then forwards web page data to the user so that the user can see the page or pages in his/her browser.
However, that is not all that a proxy server does. It makes certain changes to the data sent by the user which do not change the results but ensure that the target web servers cannot locate the user. It can change the IP address of the user so that the web server cannot know where the user is, it can encrypt the user's data to make it unreadable in transit, and it can also block access to certain web pages based on IP address.
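From the client's point of view, routing traffic through a forward proxy is usually a one-line configuration. A hedged sketch with the third-party requests library (`pip install requests`); the proxy address is a placeholder to be replaced with a real one.

```python
# Sketch: sending a web request through a forward proxy with the
# third-party 'requests' library (pip install requests). The proxy
# address below is a placeholder, not a real service.
import requests

proxies = {
    "http": "http://proxy.example.internal:3128",
    "https": "http://proxy.example.internal:3128",
}

# The proxy fetches the page on the client's behalf, so the target
# server sees the proxy's IP address rather than the client's.
response = requests.get("http://example.com/", proxies=proxies, timeout=10)
print(response.status_code)
```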
A reverse proxy provides additional control and security as well as increased scalability and flexibility.
Another benefit of a reverse proxy is that it helps reduce the time it takes to generate a response and return it to the client, also known as web acceleration. It does this using techniques like compressing server responses before returning them to the client, encrypting traffic between clients and servers (SSL termination), and storing a copy of the backend server's response locally, which is called caching.
Apache, IIS, and Nginx are commonly used reverse proxy servers.
NGINX [engine x] is an HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server, originally written by Igor Sysoev.
To install, configure and learn more about it, visit the following websites:
[Link]
[Link]
Application gateways
An application gateway uses server programmes (called proxies) that run on the firewall. These proxies take external requests, examine them, and forward legitimate requests to the internal host that provides the appropriate service. Application gateways can support functions such as user authentication and logging. Because an application gateway is considered the most secure type of firewall, this configuration provides a number of advantages to the medium- to high-risk site:
• The firewall can be configured as the only host address that is visible to the outside network, requiring all connections to and from the internal network to go through the firewall.
• The use of proxies for different services prevents direct access to services on the internal network, protecting the enterprise against insecure or badly configured internal hosts.
• Strong user authentication can be enforced with application gateways.
• Proxies can provide detailed logging at the application level.
1. Port Scanners
Systems that offer TCP or UDP services will have an open port for each such service; for example, a web server will have TCP port 80 open. The role of a port scanner is to scan a host or a range of hosts to determine which ports are open and what kinds of services are running on them. From this, the attacker learns what kinds of systems can be attacked, what type of services are being offered and what operating systems are in use (a minimal connect-scan sketch follows).
To counter this threat, administrators need the same visibility into their own networks: a scanning solution that runs on multiple operating systems and is versatile, with features such as OS fingerprinting, service version scanning and stealth scanning.
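A TCP connect scan is the simplest form of port scanning: attempt a full connection to each port and note which ones accept. A minimal sketch using only the standard library; scan only hosts you are authorised to test, which is why the example targets localhost.

```python
# Sketch: a minimal TCP connect scan with the standard library. A port
# that accepts the connection is open. Only scan hosts you are
# authorised to test; 'localhost' keeps the example harmless.
import socket

def scan(host: str, ports: range) -> list[int]:
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                open_ports.append(port)
    return open_ports

print(scan("localhost", range(1, 1025)))  # e.g. [22, 80] on a typical server
```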
2. Network Sniffers
The role of a network sniffer is to capture all the traffic crossing the network. It works through the network interface card (NIC) or LAN card installed in the system: placed in promiscuous mode, the NIC picks up all the traffic, irrespective of whether it was meant for it or not. The purpose of setting up a sniffer is to capture network traffic and obtain logins and passwords that could provide an entry into the main system. Some examples of network sniffing tools are Ethereal (now Wireshark), Snort and TCPdump.
In networks that operate in a switched environment, a conventional network sniffer is ineffective. Such networks are attacked using a switched-network sniffer such as Ettercap, which helps the attacker collect passwords, hijack sessions, and modify or kill connections that are being established.
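Under the hood, sniffers such as TCPdump put the interface into promiscuous mode and read raw frames. A hedged, Linux-only sketch (AF_PACKET raw sockets exist only on Linux and require root privileges):

```python
# Sketch: a bare-bones packet sniffer for Linux using an AF_PACKET raw
# socket. Must run as root; it receives every Ethernet frame the NIC
# sees. 0x0003 is ETH_P_ALL, i.e. all protocols.
import socket

sniffer = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003))

for _ in range(5):  # read five frames, then stop
    frame, _addr = sniffer.recvfrom(65535)
    dst, src = frame[0:6].hex(":"), frame[6:12].hex(":")
    ethertype = int.from_bytes(frame[12:14], "big")
    print(f"{src} -> {dst}  ethertype=0x{ethertype:04x}  {len(frame)} bytes")
```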
3. Password Crackers
On gaining access, the attacker goes after the password file on the main system. The attacker wants to access the database and steal valuable information by logging in with the obtained passwords. A password cracker tries possible combinations to recover the passwords, and it is only a matter of time before the attacker is able to log in.
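The core loop of a password cracker fits in a few lines: hash every candidate from a wordlist and compare against the stolen hash. In the self-contained sketch below, the 'stolen' hash and the wordlist are fabricated for the demonstration; it also shows why the salted, slow hashing sketched earlier in this unit makes such attacks far more expensive.

```python
# Sketch: the core of a dictionary attack. Each candidate word is
# hashed and compared with the stolen hash; a weak password falls fast.
# The 'stolen' hash and wordlist here are fabricated for the demo.
import hashlib

stolen_hash = hashlib.sha256(b"sunshine").hexdigest()  # pretend this leaked

wordlist = ["password", "123456", "qwerty", "sunshine", "letmein"]

for candidate in wordlist:
    if hashlib.sha256(candidate.encode()).hexdigest() == stolen_hash:
        print(f"Cracked: the password is '{candidate}'")
        break
else:
    print("Password not found in the wordlist")
```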
A company's security plan consists of security policies. Security policies give specific guidelines about areas of
responsibility and consist of plans that provide steps to take and rules to follow to implement the policies. Therefore,
in order to get a better picture of an organisation’s functioning, one must know about the policies of that organisation
and how they are implemented.
Policies should define what one considers valuable and should specify what steps should be taken to safeguard those
assets. Policies can be drafted in many ways. One example is a general policy of only a few pages that cover most
possibilities. Another example is a draft policy for different sets of assets, including email policies, password policies,
internet access policies, and remote access policies.
Two common problems with organisational policies are:
1. The policy is a platitude rather than a decision or direction, and is not really used by the organisation. Instead, the policy is a piece of paper to show to auditors, lawyers, other organisational components or customers, but it does not affect behaviour.
2. Security policies that are too stringent are often bypassed because people get tired of adhering to them (the
human factor), which creates vulnerabilities for security breaches and attacks. For example, specifying a restrictive
account lockout policy increases the potential for denial of service attacks.
A good risk assessment will determine whether good security policies and controls are implemented. Vulnerabilities and weaknesses persist because of poor security policies and the human factor.
An example is implementing a security keypad on the server room door. Administrators may get tired of entering the security PIN and prop the door open with a book or broom, thereby bypassing the security control. Specifying a restrictive password policy can actually reduce the security of the network: for example, if one requires passwords longer than seven characters, most users have difficulty remembering them. They might write their passwords down and leave them where an intruder can find them.
Closing ports
The transport layer protocols, namely the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP), identify the applications communicating with each other by means of port numbers. It is considered good practice to close unnecessary and unused ports, because attackers can use these openings as an entry point while trying to access the main network.
To be effective, policy requires visibility. Visibility aids the implementation of policy by helping to ensure a policy is
fully communicated throughout the organisation. This is achieved through a plan of each policy that is a written set
of steps and rules. The plan defines when, how, and by whom the steps and rules are implemented. Management
presentations, videos, panel discussions, guest speakers, question/ answer forums, and newsletters increase visibility.
If an organisation has computer security training and awareness, it is possible to effectively notify users of new
policies. It also can be used to familiarise new employees with the organisation's policies.
Computer security policies should be introduced in a manner that ensures management's unqualified support,
especially in environments where employees feel inundated with policies, directives, guidelines, and procedures.
The organisation's policy is the vehicle for emphasising management's commitment to computer security and clarifying their expectations for employee performance, behaviour and accountability.
A good risk assessment will determine good security controls and policies. Vulnerabilities exist because of poor security policies and human factors.
[Fig 3.7: Relationship between a good risk assessment and good security policies and controls — good security controls and policies can stop malicious and non-malicious techniques and methods, and natural disasters, from exploiting vulnerabilities in assets.]
Password policies
The security provided by a password system depends on the passwords being kept secret at all times. Thus, a password is vulnerable to compromise whenever it is used, stored, or even known. In a password-based authentication mechanism implemented on a system, passwords are vulnerable to compromise due to wrong practices such as the following:
• A default password is initially assigned to a user when enrolled on the system; if hacked, it can provide the hacker access to a large number of systems, particularly because many people never change the default password.
• Employees use passwords that are commonly used by all or are previously compromised passwords.
• Passwords are shared among team members.
• Companies employ out-of-date password quality standards.
• Organizations don’t use other security measures to protect against the compromised passwords.
• Users are expected to remember their passwords. Because of this, they either choose simple passwords or use personal information that others can guess, while computer-generated passwords are difficult to remember.
Password policies can be set depending on the needs of an organisation. For example, it is possible to specify a minimum password length, disallow blank passwords, and set maximum and minimum password age. It is also possible to prevent users from reusing passwords and to ensure that users include specific characters in their passwords, making passwords more difficult to crack. This can be configured through Windows 2000 account policies, discussed later in this paper (a simple policy-checking sketch follows this section).
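The settings above can be expressed as a simple validation routine. The sketch below encodes one illustrative policy (minimum length, required character classes, no reuse); the exact thresholds are example values, not an official standard.

```python
# Sketch: enforcing an illustrative password policy -- minimum length,
# required character classes, and no reuse of recent passwords. The
# thresholds are example values, not an official standard.
import re

def check_password(password: str, previous: list[str]) -> list[str]:
    problems = []
    if len(password) < 8:
        problems.append("shorter than 8 characters")
    if not re.search(r"[A-Z]", password):
        problems.append("no uppercase letter")
    if not re.search(r"\d", password):
        problems.append("no digit")
    if not re.search(r"[^A-Za-z0-9]", password):
        problems.append("no special character")
    if password in previous:
        problems.append("reuses a previous password")
    return problems  # an empty list means the password passes the policy

print(check_password("Spring2024!", previous=[]))  # []
print(check_password("password", previous=["password"]))
# ['no uppercase letter', 'no digit', 'no special character',
#  'reuses a previous password']
```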
Administrative responsibilities
Many systems come from the vendor with a few standard user logins already enrolled in the system. Change passwords
for all standard user logins before allowing the general user population to access the system. For example, change the
administrator password when installing the system.
The administrator is responsible for generating and assigning the initial password for each user login. The user must then be informed of this password. In some areas, it may be necessary to prevent exposure of the password to the administrator; in other cases, the user can easily nullify this exposure. To prevent exposure of a password, it is possible to use smart card encryption in conjunction with the user's username and password: even if the administrator knows the password, he/she will be unable to use it without the smart card. When a user's initial password must be exposed to the administrator, this exposure may be nullified by having the user immediately change the password by the normal procedure. Occasionally, a user will forget the password, or the administrator may determine that a user's password may have been compromised.
To correct these problems, it is recommended that the administrator be permitted to change the password of any user by generating a new one. The administrator should not have to know the user's old password in order to do this, but should follow the same rules for distributing the new password that apply to initial password assignment. Positive identification of the user by the administrator is required when a forgotten password must be replaced.
User responsibilities
Users should understand their responsibility to keep passwords private and to report changes in their user status,
suspected security violations, and so forth. To assure security awareness among the user population, it is recommended that each user be required to sign a statement acknowledging these responsibilities.
The simplest way to recover from a compromised password is to change it. Therefore, passwords should be changed
on a periodic basis to counter the possibility of undetected password compromise. They should be changed often
enough that there is an acceptably low probability of compromise during a password's lifetime. To avoid needless exposure of users' passwords to the administrator, users should be able to change their passwords without any intervention by the administrator.
Email policies
Email is increasingly critical to the normal conduct of business. Organisations need policies for email to help employees
use it properly, to reduce the risk of intentional or inadvertent misuse, and to assure that official records transferred
via email are properly handled. Similar to policies for the appropriate use of telephone, organisations need to define
appropriate use of email.
Organisational policies are needed to establish general guidance in areas such as:
• use of email to conduct official business
• use of email for personal business
• access control and confidential protection of messages
• management and retention of emails
It is easy to have email accidents. Email folders can grow until the email system crashes. Badly configured discussion group software can send messages to the wrong groups. Errors in email lists can flood subscribers with hundreds of error messages, and sometimes error messages will bounce back and forth between email servers. Some ways to prevent accidents are to:
• train users in what to do when things go wrong, as well as how to do it right
• configure email software so that the default behaviour is the safest behaviour
• use software that follows internet email protocols and conventions religiously
Every time an online service gateway connects its proprietary email system to the internet, there are howls of
protest because of the flood of error messages that result from the online service's misbehaving email servers.
Using encryption algorithms to digitally sign email messages can prevent impersonation. Encrypting the contents of the message, or the channel it is transmitted over, can prevent eavesdropping. Email encryption is discussed later in this paper under ‘Public key infrastructures’.
Using public locations like internet cafes and chat rooms to access email can lead to the user leaving valuable
information cached or downloaded on computers. Users need to clean up the computer after they use it, so no
important documents are left behind. This is often a problem in places like airport lounges.
Internet policies
The World Wide Web has a body of software and a set of protocols and conventions used to traverse and find
information over the internet. Through the use of hypertext and multimedia techniques, the web is easy for anyone
to roam, browse and contribute to.
Web clients, also known as web browsers, provide a user interface to navigate through information by pointing and
clicking. Browsers also introduce vulnerabilities to an organisation, although generally less severe than the threat posed by servers. Various settings can be set on Internet Explorer browsers by using Group Policy in Windows 2000.
Web servers can be attacked directly or used as jumping off points to attack an organisation's internal networks.
There are many areas of web servers to secure: the underlying operating system, the web server software, server
scripts and other software and so forth. Firewalls and proper configuration of routers and the IP can help to fend off
denial of service attacks.
Backup policies
The backup policies should include plans for:
• Regularly scheduled backups
• Types of backups – most backup systems support normal (full) backups, incremental backups and differential backups
• Scheduled backups – the schedule should normally run during the night, when a company has the fewest users
• Information to be backed up
• Type of media used for backups – tapes, CD-ROMs, other hard drives and so forth
• Type of backup devices – tape devices, CD writers, other hard drives, swappable hard drives, and maybe
to a network share
Devices also come in various speeds, normally measured in megabytes backed up per minute. Time taken to perform
backups depends on the system requirements.
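The difference between the backup types comes down to which files are selected. The sketch below shows the incremental case, copying only files modified since the previous run's timestamp; the directories and the timestamp mechanism are illustrative assumptions.

```python
# Sketch: selecting files for an incremental backup -- only files
# modified since the previous backup's timestamp are copied. The
# source/destination paths and the stamp file are illustrative.
import shutil
import time
from pathlib import Path

SOURCE = Path("/srv/data")             # directory to back up (example)
DEST = Path("/backup/incremental")
STAMP = DEST / ".last_backup"          # records when the last run finished

last_run = STAMP.stat().st_mtime if STAMP.exists() else 0.0
DEST.mkdir(parents=True, exist_ok=True)

copied = 0
for path in SOURCE.rglob("*"):
    if path.is_file() and path.stat().st_mtime > last_run:
        target = DEST / path.relative_to(SOURCE)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(path, target)     # copy the file, preserving metadata
        copied += 1

STAMP.touch()                          # update the timestamp for next time
print(f"Backed up {copied} changed file(s) since {time.ctime(last_run)}")
```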
IP security policies
The Internet Protocol (IP) underlies the majority of corporate networks as well as the internet. It has worked well for
decades. It is powerful, highly efficient and cost-effective. Its strength lies in its flexibly routed packets, in which data
is broken up into manageable pieces for transmission over networks. And it can be used by any operating system.
In spite of its strengths, IP was never designed to be secure. Due to its method of routing packets, IP-based networks
are vulnerable to spoofing, sniffing, session hijacking and man-in-the-middle attacks — threats that were unheard of
when IP was first introduced.
The initial attempts to provide security over the internet were application-level protocols and software, such as Secure Sockets Layer (SSL) for securing web traffic and Pretty Good Privacy (PGP) for securing email. These, however, are limited to specific applications.
By using IP security, it is possible to secure and encrypt all IP traffic. It is possible to make use of IP security policies
in Windows 2000 to control how, when and on whom IP security works.
The IP security policy can define many rules, such as:
• which IP addresses to scan for
• how to encrypt packets
• which filters to apply to the IP traffic passing through the object on which the IP security policy is applied
IPS: The IPS not only detects the bad packets caused by malicious code, botnets, viruses and targeted attacks, but also takes action to prevent those network activities from causing damage to the network. The attacker's main motive is to take sensitive data or intellectual property, through which he/she can get customers' data, like employee information, financial records, etc. The IPS is designed to provide protection for assets, resources, data, and networks. Typical responses of an IPS are to:
• stop the attack
• change the security environment
Technology has been developed to serve as both detection and prevention systems. Intrusion Detection and
Prevention Systems (IDPS) are primarily focused on identifying possible incidents. For example, an IDPS can detect
when an attacker has successfully compromised a system by exploiting a vulnerability in the system. The IDPS can
then report the incident to security administrators, who can quickly initiate incident response actions to minimise
damage caused by the incident. The IDPS could also log information that can be used by incident handlers. An IDPS
might be able to block reconnaissance and notify security administrators, who can take actions, if needed, to alter
other security controls to prevent related incidents.
In addition to identifying incidents and supporting incident response efforts, organisations have found other uses for
IDPSs, including the following:
• Identifying security policy problems
An IDPS can provide some degree of quality control for security policy implementation, such as duplicating firewall rulesets and alerting when it sees network traffic that should have been blocked by the firewall but was not because of a firewall configuration error.
• Documenting the existing threat to an organisation
IDPSs log information about the threats that they detect. Understanding the frequency and characteristics
of attacks against an organisation’s computing resources is helpful in identifying appropriate security
measures for protecting resources. The information can also be used to educate management about the
threats that an organisation faces.
• Deterring individuals from violating security policies
If individuals are aware that their actions are being monitored by IDPS technologies for security policy
violations, they may be less likely to commit such violations because of the risk of detection.
Because of the increasing dependence on information systems and the prevalence and potential impact of intrusions against those systems, IDPSs have become a necessary addition to the security infrastructure of nearly every organisation.
A firewall possesses the capability of screening both incoming and outgoing traffic, but it is the former that poses the greater threat to the network. This is why incoming traffic is screened more closely than outgoing traffic. There are three types of screening that a firewall can perform:
• Blocking the incoming data that is not required by the user
• Blocking any address that does not represent an authenticated user
• Blocking communication contents that are not required
The screening process works by elimination. The first step is to determine whether the incoming transmission was requested by a user and is verified. Once allowed, it is checked more closely to ascertain that it comes from a trusted site. Finally, a firewall also checks the contents of the transmission.
Types of Attack
There is a need to understand the nature of the security threats that exist before choosing a specific type of firewall. The Internet, being a large community, contains both good and bad elements, ranging from outsiders who damage a network unintentionally to malicious hackers who use the Internet to mount deliberate assaults on companies. The attacks that can have an adverse effect on businesses are:
• Information theft: This involves stealing organisational information such as employee records, customer
records, or company intellectual property.
• Information sabotage: Here, the attacker modifies information in order to damage an individual's or an organisation's reputation, for example by changing employee medical or educational records or uploading derogatory content onto a web site.
• Denial of Service (DoS): In a denial-of-service attack, the organisation's network and servers are brought down so that legitimate users can no longer access services, directly interrupting normal operations.
Firewall Technologies
Firewalls are available in a variety of shapes, sizes and prices. The selection of a firewall is driven by the business requirements and the size of the network. Irrespective of the firewall chosen, there is a need to ensure it is secured and certified by a trusted third party such as the International Computer Security Association (ICSA). ICSA classifies firewalls into three categories: packet filter firewalls, application-level proxy servers, and stateful packet inspection firewalls.
Content Filtering
A content filter is responsible for extending the firewall’s capability to block the access to certain web sites. This add-
on can be used to keep a check on the content that can be searched on the internet such as ensuring employees do
not access unsuitable material in the office environment. Using this functionality, one can define the type of content
to be displayed and gain access to the list of websites that offer such content. One can choose to either block those
sites or ask for a log in. Also, such a service should keep updating the list of websites that have prohibited access on
a regular basis.
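As a minimal, illustrative sketch on a Linux gateway running iptables (covered later in this unit), the rule below drops forwarded web traffic whose payload contains a prohibited host name; the domain shown is hypothetical, and a production content filter would normally use a dedicated proxy with maintained category lists instead:
# iptables -A FORWARD -p tcp --dport 80 -m string --string "blockedsite.example" --algo bm -j DROP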
Antivirus Protection
Desktop antivirus packages like Norton Antivirus and McAfee need no introduction. The way these operate is fairly simple: when researchers find a new virus, they identify some unique characteristic it has (maybe a registry key it creates or a file it replaces) and from this they write the virus 'signature'. The whole set of signatures that the antivirus software scans for is known as the virus 'definitions'. This is why keeping virus definitions up to date is very important, and many antivirus packages have an auto-update feature to download the latest definitions. The scanning ability of the software is only as good as the currency of its definitions. In the enterprise, it is very common for administrators to install antivirus software on all machines but to have no policy for regular updates of the definitions; this is meaningless protection and serves only to provide a false sense of security.
With the recent spread of email viruses, antivirus software at the mail server is becoming increasingly popular. The mail server automatically scans any email it receives for viruses and quarantines any infections. The idea is that since all mail passes through the mail server, this is the logical point to scan for viruses, and given that most mail servers have a permanent connection to the internet, they can regularly download the latest definitions. On the downside, these scanners can be evaded quite simply: if an attacker zips up or encrypts the infected file or Trojan, the antivirus system may not be able to scan it.
End users must be taught how to respond to antivirus alerts. This is especially true in the enterprise: an attacker does not need to bypass a fortress-like firewall if all he has to do is email Trojans to a lot of people in the company. It takes just one uninformed user opening the infected attachment to give the hacker a backdoor into the internal network. It is advisable that the IT department gives a brief seminar on how to handle email from untrusted sources and how to deal with attachments. These are very common attack vectors: a user may harden a computer system as much as he/she likes, but the weak point remains the user who operates it. As crackers say, "The human is the path of least resistance into the network."
Selecting a firewall
Data administrators can implement firewalls either as software or as an addition to the existing router/gateway. Firewalls have also seen a rise in popularity owing to their ease of use, improved performance and lower costs.
Router/Firmware Based Firewalls
Routers offering limited firewall capabilities can be augmented with additional software and firmware. As a precaution,
the administrators must ensure that the router does not get overburdened by running a greater number of services.
Extended functionalities such as VPN, DMZ, content filtering, or antivirus protection may be too expensive or not available at all.
Software-Based Firewalls
Software-based firewalls can be understood as complex applications that run on dedicated UNIX or Windows NT servers. Once the costs of the software, the server operating system, the server hardware and continuous maintenance are included, the overall cost becomes higher. Administrators must constantly monitor for and install the latest OS and security patches to counter emerging threats; in the absence of such patches, a software firewall is weakened and can be rendered useless.
Firewall Appliances
A large majority of firewall appliances are dedicated, hardware-based systems. Since these appliances run an embedded OS, they are less susceptible to the types of security weaknesses seen in Windows NT and UNIX operating systems. They are designed to meet the high-throughput, processor-intensive requirements of stateful packet inspection. Firewall appliances are easier to install and configure than software firewall products: they offer plug-and-play installation, require minimal maintenance and constitute a complete solution. Compared with other firewalls, they prove to be extremely cost-effective.
IPTables Commands
IPTables, a rule-based firewall, comes pre-installed in most Linux operating systems. It superseded the earlier ipchains and ipfwadm tools and was introduced with the 2.4 kernel. IPTables is a front-end tool for interacting with the kernel to decide which packets are to be filtered. Let us look at the practical iptables commands that are commonly used.
There are different versions of IPTables used in different protocols:
• iptables applies to IPv4.
• ip6tables applies to IPv6.
• arptables applies to ARP.
• ebtables applies to Ethernet frames.
For starting, stopping or restarting the firewall, type the following commands:
# /etc/init.d/iptables start
# /etc/init.d/iptables stop
# /etc/init.d/iptables restart
To start IPTables automatically on system boot, use the given commands.
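(The exact command depends on the distribution; as a sketch, the first line applies to SysV-init systems such as older Red Hat/CentOS releases, and the second to systemd-based systems with the iptables service installed.)
# chkconfig iptables on
# systemctl enable iptables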
Use the following commands to save the IPTables rules, apply them, and restore them in case they are flushed.
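(A minimal sketch; the /etc/sysconfig/iptables path follows the Red Hat convention, and other distributions store the rules elsewhere.)
# service iptables save
# iptables-save > /etc/sysconfig/iptables
# iptables-restore < /etc/sysconfig/iptables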
To check the status of IPTables, use the option '-L' to list the ruleset, '-v' for verbose output and '-n' to display the results in numeric format.
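For example, the following lists all chains with packet and byte counters, without resolving host or port names:
# iptables -L -v -n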
To display IPTables rules with line numbers, use the given commands. Rules can then be appended or removed by referring to these line numbers.
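For example:
# iptables -L --line-numbers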
The following commands display the rulesets in the INPUT and OUTPUT chains with rule numbers.
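# iptables -L INPUT -n --line-numbers
# iptables -L OUTPUT -n --line-numbers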
If you want to delete a rule (say rule no.5) from the INPUT chain, use the given command.
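For example:
# iptables -D INPUT 5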
To insert a rule into the INPUT chain between rules 4 and 5, use the given command.
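For example (the source address shown is purely illustrative; inserting at position 5 pushes the old rule 5 down, so the new rule lands between the former rules 4 and 5):
# iptables -I INPUT 5 -s 192.168.1.10 -j DROP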
The next step is to decide the level of monitoring, redundancy and control required. The effort involves juggling needs analysis and risk assessment while sorting the requirements that determine what to implement. In the case of firewalls, security is a higher priority than connectivity: the best practice is to block everything by default and only allow the services that are required, on a case-by-case basis.
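As a minimal iptables sketch of this default-deny posture, assuming SSH (port 22) is the only service that must remain reachable; a real ruleset would enumerate every required service:
# iptables -P INPUT DROP
# iptables -P FORWARD DROP
# iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# iptables -A INPUT -p tcp --dport 22 -j ACCEPT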
Security breaches are recognised as a major threat to any organization, and it is crucial for organizations to be well aware of the damage that various types of security attacks can cause. Although firewalls alone do not constitute a complete data security system, they are a vital component of an organization's immunity to cyber-attacks. Hence, organizations need to invest time in evaluating the system best suited to their needs and deploy it swiftly to reduce the risk of a data breach.
Primarily, SIEM has been implemented in response to governmental compliance requirements. Correspondingly, many organisations have found it necessary to implement SIEM not only to protect sensitive data but also to demonstrate that they operate in compliance with those requirements.
• Correlation: Correlation involves both real-time and historical analysis of event data. Because a logging device
collects massive amounts of data, correlation is an important tool for identifying meaningful security events.
• Prioritization: Highlighting important security events over less critical ones is an important feature of SIEM.
Frequently, prioritization incorporates input from vulnerability scanning reports.
• Workflow: Real-time identification and notification of threats is an essential part of the SIEM workflow.
Comprehensive incident management allows analysts to document threat response, an important part of
regulatory compliance.
Security information and event management (SIEM) technology supports threat detection and security incident
response through the real-time collection and historical analysis of security events from a wide variety of event
and contextual data sources. It also supports compliance reporting and incident investigation through analysis of
historical data from these sources. The core capabilities of SIEM technology are a broad scope of event collection and
the ability to correlate and analyze events across disparate sources.
Security information and event management (SIEM) is an approach to security management that seeks to provide a
holistic view of an organization’s information technology (IT) security.
The underlying principle of a SIEM system is that relevant data about an enterprise’s security is produced in multiple
locations and being able to look at all the data from a single point of view makes it easier to spot trends and see
patterns that are out of the ordinary. SIEM combines SIM (security information management) and SEM (security event
management) functions into one security management system.
An SEM system centralizes the storage and interpretation of logs and allows near real-time analysis which enables
security personnel to take defensive actions more quickly. A SIM system collects data into a central repository for
trend analysis and provides automated reporting for compliance and centralized reporting. By bringing these two
functions together, SIEM systems provide quicker identification, analysis and recovery of security events. They also
allow compliance managers to confirm they are fulfilling an organization's legal compliance requirements.
A SIEM system collects logs and other security-related documentation for analysis. Most SIEM systems work by
deploying multiple collection agents in a hierarchical manner to gather security-related events from end-user devices,
servers, network equipment and even specialized security equipment like firewalls, antivirus or intrusion prevention
systems. The collectors forward events to a centralized management console, which performs inspections and flags
anomalies. To allow the system to identify anomalous events, it’s important that the SIEM administrator first creates
a profile of the system under normal event conditions.
At the most basic level, a SIEM system can be rules-based or employ a statistical correlation engine to establish
relationships between event log entries. In some systems, pre-processing may happen at edge collectors, with only
certain events being passed through to a centralized management node. In this way, the volume of information being
communicated and stored can be reduced. The danger of this approach, however, is that relevant events may be
filtered out too soon.
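At its simplest, rules-based correlation can be approximated on a single host with standard shell tools. The sketch below assumes an OpenSSH server logging to /var/log/auth.log (the path and message format vary by distribution) and counts failed log-ins per source IP, exactly the kind of pattern a SIEM correlation rule would alert on:
# grep "Failed password" /var/log/auth.log | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn | head -5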
SIEM systems are typically expensive to deploy and complex to operate and manage. While Payment Card Industry
Data Security Standard (PCI DSS) compliance has traditionally driven SIEM adoption in large enterprises, concerns
over advanced persistent threats (APTs) have led smaller organizations to look at the benefits a SIEM managed
security service provider (MSSP) can offer.
Security information and event management (SIEM) systems provide centralized logging capabilities for an enterprise and can be used to analyze and/or report on the log entries they receive. Some SIEM systems, which can be either products
or services, can also be configured to stop certain attacks they detect, generally by directing the reconfiguration of
other enterprise security controls.
Traditionally, most organizations with SIEM services have used them either for security compliance efforts or for
incident detection and handling efforts. But increasingly, organizations use SIEMs for both purposes. This increases
the technology's potential value to the organization, but unfortunately, tends to complicate configuration and
management.
There are many SIEM systems available today, including "light" SIEM products designed for organizations that cannot
afford or do not feel they need a fully featured SIEM. It can be quite a challenge to figure out which products
to evaluate, let alone choose the one that's best for a particular organization or organizational unit. Part of the
SIEM evaluation process should involve creating a list of criteria to be used to highlight SIEM capabilities that are
particularly important to consider.
How much native support does the SIEM provide for relevant log sources?
A SIEM is of diminished value if it cannot receive and understand log data from all of the log-generating sources of
interest to the organization. Most obvious are the organization's enterprise security controls, such as firewalls, virtual
private networks, intrusion prevention systems, email and Web security gateways, and antimalware products. It is
reasonable to expect a SIEM to natively understand log files created by any major product or cloud-based service in
these categories.
In addition, a SIEM should provide native support for log files from the operating system brands and versions the
organization uses. An exception is mobile device operating systems, which often do not provide any security logging
capabilities. SIEMs should also natively support the organization's major database platforms, as well as any enterprise
applications that enable multiple users to interact with sensitive data. Native SIEM support for other software used
by an organization is generally nice to have but is not mandatory. If a SIEM does not natively support a log source,
then the organization generally can either develop customized code to provide the necessary support or use the SIEM
without the log source's data present.
Here's a more detailed look at IBM Security QRadar, HP's ArcSight, LogRhythm, SolarWinds, and Splunk.
HP’s ArcSight
Hewlett-Packard's ArcSight is primarily an enterprise-class SIEM offering, although it can scale down for smaller enterprises. The ArcSight Express rack-mount appliance includes a vast array of built-in capabilities. In addition to its log management capabilities, the appliance can collect, store and analyze all security data from a single interface.
The software is capable of analyzing millions of security events from firewalls, intrusion protection systems, end-point devices, and an array of other log- and data-producing devices. It offers built-in security dashboards and audit reports that provide visibility into threats and compliance, and is able to protect against zero-day attacks, advanced persistent threats, breach attempts, insider attacks, malware and unauthorized user access.
ArcSight Enterprise Security Manager (ESM) is targeted at large-scale security event management applications. ArcSight Express should be considered for midsize SIEM deployments, while ESM is appropriate for larger deployments, as long as sufficient in-house support resources are available. ArcSight Logger can provide log management capabilities in two-tier deployments. Optional modules add advanced support for user activity monitoring, identity and access management integration, and fraud management. ArcSight pricing is based on a more traditional software model and is more complex than that of SolarWinds or Splunk.
[Link]/go/ArcSight
LogRhythm
The LogRhythm All-In-One (XM) appliance and software are designed for midsized to large enterprises. The platform includes a dedicated event manager, a dedicated log manager, a dedicated artificial intelligence engine, a site log forwarder and a network monitor; each software component is also available as a stand-alone appliance. LogRhythm's security intelligence platform collects forensics data from log data, flow data, event data, machine data and vulnerability data, and also generates independent forensics data for the host and network. The system can perform real-time processing and machine or forensics analytics to produce risk-prioritized alerts, real-time dashboards or reports. It is also used for incident response, including case management and workflow.
[Link]
SolarWinds
SolarWinds' Log & Event Manager is targeted at the SMB market but can scale to larger businesses. The offering has pre-packaged templates and an automated log management system. Among the features the company identifies as must-haves for a SIEM offering are the ability to collect data from network devices, machine data and cloud logs, as well as in-memory event correlation for real-time threat detection.
Additional must-have features include flexible deployment options for scalable log collection and analysis, out-of-the-box reporting for security, compliance and operations, forensic analysis, and built-in active response for automated remediation.
Other features the company identifies as essential are internal data loss protection, embedded file integrity monitoring for threat detection and compliance support, plus high compression and encryption for secure long-term archival and log management. SolarWinds uses node-based pricing.
[Link]
Splunk
Like other SIEM products, the core of Splunk Enterprise monitors and manages application logs, business process
logs, configuration files, web access and web proxy logs, Syslog data, database audit logs and tables, file system audit
logs, and operating system metrics, status and diagnostic commands. But at Splunk, the focus is on machine data, the
data generated by all of the systems in the data centre, the connected "internet of things," and other personal and
corporate devices that get connected to the corporate network.
Although the product has "enterprise" in its name, Splunk says the solution can be used by SMBs as well and has been architected for use by non-SIEM experts. Non-SIEM engineers can use the event pattern detection, the instant Pivot interface, which enables users to discover relationships in data without mastering the search language, and dashboards that share pre-built panels integrating multiple charts and views over time.
[Link]
Ensure that monitoring systems are tuned to collect only the logs, events and alerts that are relevant to delivering the requirements of the monitoring policy; inappropriate collection of monitoring information could breach data protection and privacy legislation.
If the monitoring system raises more alerts than can be followed up properly, this should be investigated in order either to remediate the monitoring system or to address the root cause of the events. The monitoring process and systems must be reviewed regularly to ensure that they are performing adequately and are not suffering from too many false positives or false negatives. As with logging, monitoring controls must be documented in the logging policy for all systems designated explicitly for security purposes.
The SOC monitors the edge routers to build a profile of normal network traffic and updates that profile as traffic patterns change over time. Drawing on that knowledge, SOC staff can immediately identify significant deviations from the profile as they occur, analyse anomalies, and raise an alert for any attack.
The inbound and outbound network traffic traversing network boundaries should be continuously monitored to
identify unusual activity or tendencies that could indicate attacks and the compromise of data. The transfer of
sensitive information, particularly large data transfers or unauthorised encrypted traffic should automatically generate
a security alert and prompt a follow-up investigation. The analysis of network traffic can be a key tool in preventing
the loss of data.
The following traffic flow types must always be logged:
• All authentication requests (successful and failed)
• All VPN session requests (successful and failed)
• All packets denied by specific rules and by the "clean-up" rule (an example clean-up rule is shown below)
• All successful packets whose destination is the firewall itself (firewall management traffic)
Any decision not to log other types of traffic must be documented and justified. In addition to the traffic logs, firewalls
must log all events mentioned under "Non-personal devices".
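As an illustrative iptables sketch of such a "clean-up" rule, the chain below logs and then drops any packet not matched by an earlier rule; the log prefix is arbitrary, and the final line must be appended as the last rule of the INPUT chain:
# iptables -N LOG_DROP
# iptables -A LOG_DROP -j LOG --log-prefix "CLEANUP-DENY: "
# iptables -A LOG_DROP -j DROP
# iptables -A INPUT -j LOG_DROP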
Collect logs from all types of ICT systems devices and applications
ICT (information and communications technology) is a term that describes the general processing and communication
of information through technology. The importance of ICTs lies less in the technology itself than in its ability to create
greater access to information and communication in unreached areas. Some of the examples of ICT tools are radios,
TVs, laptops, tablets, mobiles, smartphones, gaming devices, etc.
Monitoring the activity of Information and Communications Technology (ICT) devices and applications allows organizations to detect attacks and react to them appropriately, while providing a basis upon which lessons can be learned to improve the overall security of the organisation.
In addition, monitoring the use of ICT systems allows organisations to ensure that systems are being used appropriately and in accordance with organisational policies. Monitoring is often a key capability needed to comply with security, legal and regulatory requirements.
Failure to monitor ICT systems and their use for specific organisation’s processes could lead to non-compliance with
the corporate security policy and legal or regulatory requirements or result in attacks going unnoticed.
Develop and deploy a centralised capability that can collect and analyse accounting logs and security alerts from ICT systems across the organisation, including user systems, servers, network devices, security appliances, systems and applications. Much of this should be automated, given the volume of data involved, so that experts can swiftly identify and investigate irregularities. Ensure that the design and implementation of the centralised solution do not provide an opportunity for attackers to bypass normal network security and access controls.
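As a minimal sketch of such a pipeline, assuming rsyslog on the log sources and a hypothetical collector host loghost.example.com listening on TCP port 514; the last command emits a test event that should appear on the collector:
# echo '*.* @@loghost.example.com:514' > /etc/rsyslog.d/60-forward.conf
# systemctl restart rsyslog
# logger -t siem-test "centralised logging connectivity check"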
In general, telemetry allows for the robust collection of data and its delivery to centralized systems where it can be
used effectively. Part of the development of telemetry involves the emergence of big data technologies and big
data strategies that take massive amounts of relatively unstructured data and aggregate it in centralized systems.
Normally, this type of information flows out of devices as streams of unstructured data. In any event, the information
needs to be collected, put into an appropriate structure for storage, perhaps combined with other data, and stored as
a transactional record. From there, the data can be further transferred to an analytics-oriented database, or analysed
in place. Glitches arise when it comes on how to deal with that information. Obviously, data integration is critical to
most telemetry operations. The information must be managed from point-to-point, and then continue within midway
or analytics databases.
Telemetry Data Packet Capture: There are cases in which an organisation needs to go beyond collecting log messages and network flow information, for example when deep forensic capabilities are needed to meet strict regulatory requirements for capturing raw network packets. Network traffic can be captured and forwarded to an intrusion detection system (IDS), a deep packet inspection (DPI) engine, or simply to a repository where captured packets are stored for future use. The choice of packet capturing technology is influenced by the network and media type to be monitored.
SUMMARY
• The information that is transmitted over the communication channel is known as a packet. A packet contains two portions, i.e. a header and a footer.
• A protocol is a set of rules and standards that define a language used for communication. The examples can
be TCP, IP, UDP and ICMP.
• The application layer allows access to network resources to users and user applications.
• The presentation layer is responsible for mapping resources and creating context.
• The session layer is responsible for establishing, managing and terminating the sessions between two users.
• The transport layer performs tasks such as processing message delivery and error recovery.
• The network layer is responsible for moving packets from the source to the destination.
• The data link layer organizes bits into frames and ensures hop-to-hop delivery of data packets.
• The physical layer performs the transmission of data or bits through a medium.
• TCP/IP protocol is a communication link between the programming interface of a physical network and user
applications.
• The IP address identifies the host within a network and consists of a network number and a host number.
• Risk identification is defined as the process of determining risks that could prevent the program, enterprise or
investment from achieving its objectives.
• Port scanners, network sniffers and password crackers are some of the commonly used network security tools.
• A demilitarized zone is a special local network configuration designed to improve security by segregating
computers on each side of a firewall.
• Security Information and Event Management (SIEM) is a group of multifaceted technologies that together,
provide a centralized overall view into an infrastructure.
KNOWLEDGE CHECK
Q.2. Select the right choice from the following multiple choice questions:
A. In networking terminology, the information transmitted over the communication channel is referred to as:
i. Connection
ii. Network Interface
iii. Packet
iv. Threads
B. The program that is responsible for deciding whether the traffic should enter the server or not is:
i. Protocol
ii. VPN
iii. NAT
iv. Firewall
C. Which of the following is an attack where the attacker steals important information from data packets?
i. Man-in-the-middle
ii. Sniffing
iii. Spoofing
iv. Denial of Service
D. In which of the following layers of the TCP/IP model are IP addresses defined?
i. Application Layer
ii. Transport Layer
iii. Network Layer
iv. Link Layer
E. Which of the following layers of the TCP/IP model acts as the interface to the actual network hardware?
i. Application Layer
ii. Transport Layer
iii. Network Layer
iv. Link Layer
G. Which of the following is NOT a Transport Layer vulnerability? (You can select more than one.)
i. SYN Flood
ii. TCP blind spoofing
iii. UDP Flood attack
iv. DNS Attacks
v. Teardrop Attack
H. Which of the following is NOT a Network Layer vulnerability? (You can select more than one.)
i. Ping of Death attack
ii. TCP blind spoofing
iii. Cookie poisoning
iv. Source route attack
v. MAC flooding attack
I. Which of the following is NOT a Link Layer vulnerability? (You can select more than one.)
i. TCP blind spoofing
ii. ARP Spoofing
iii. Cookie poisoning
iv. Eavesdropping via sniffing
v. Teardrop Attack
Q.3. Match the following Application Layer vulnerabilities with their explanations.
VULNERABILITY EXPLANATION
A. Hijacking i. An attacker modifies or steals small files stored by certain websites on the computer of the user. Through these files the attacker can access personal information of the user, such as a password or a user id, and can then use these packets of information on his/her own machine to access unauthorized information.
B. Domain Name System (DNS) Attacks ii. The temporary saving of data from web pages browsed by the user on the user's machine poses a security risk, because an attacker can use the saved data to access password-protected web pages from that computer.
C. Cookie poisoning iii. The attacker intercepts a user's data transmission and then reuses that information for his/her own benefit. It is a type of man-in-the-middle attack and is more than a hijack.
D. Replay attack iv. The attacker injects a malicious script into a vulnerable web application or browser, which conducts a session hijack and steals the information and cookies of legitimate users of the website.
E. Dynamic Host Configuration Protocol (DHCP) starvation attack v. An HTTP vulnerability can lead to an attack where the attacker steals an HTTP session of the legitimate user by capturing the packets using a packet sniffer.
F. Caching vi. The attacker modifies the record database in which the internet domain names used by people to locate a website are stored. By doing this the attacker can direct all traffic to an incorrect IP address.
G. Cross-Site Scripting vii. The attacker sends numerous requests for IP addresses using spoofed MAC addresses. The server, which assigns temporary IP addresses to user machines that log into the IP network, ends up leasing all its IP addresses until it has no more to give. Then, when a genuine user sends a request, the server will not be able to provide an IP address and the user will not get access to the network.
Q.4. Match the following Transport Layer and Network Layer vulnerabilities with their explanations.
VULNERABILITY EXPLANATION
A. SYN Flood i. It is another form of hijacking, where an attacker is able to guess both the port number and the sequence number of a session in progress and can carry out an injection attack.
B. Source Route Attack ii. This is a denial of service attack, where numerous user datagram protocol
packets are sent to a targeted server, so that it is overwhelmed with the
number of requests and so is unable to process other requests from legitimate
users. Even a firewall protecting the targeted server can become exhausted.
C. TCP blind spoofing iii. This attack is a type of denial-of-service (DoS) attack which works slowly by sending a series of fragmented packets to a target device. It overwhelms the target device with the incomplete data so that it crashes.
D. UDP Flood Attack iv. In this attack, the attacker sends malformed IP packets that exceed 65,535 bytes to the target device. A correctly formed ping packet is 56 bytes, or 64 bytes when the ICMP header is considered. The target device will naturally not be able to process such a packet properly, and this can lead to an operating system crash.
E. Teardrop Attack v. After receiving the fake SYN packets, the target server replies to a source address that is unreachable. This situation creates a lot of half-open sessions, which causes the server to be overloaded so that it is unable to accept any further connections, leading to a denial-of-service attack.
F. RIP Security Attacks vi. The attacker can modify the option in the packet that lists the specific
routers taken by a packet to reach its destination. This can lead to a
loss of data confidentiality as the attacker will be able to read the data
packets.
G. Ping of Death Attack vii. The attacker can impersonate a route to a particular host that is unused.
The packets can be sent to the attacker for sniffing or performing a man
in the middle attack.
Q.5. Match the following network security tools with their functions:
TOOL FUNCTION
A. Port Scanners i. Captures all of the network traffic and obtains log-ins and passwords to provide an entry into the main systems.
B. Network Sniffers ii. Scans a host or a range of hosts to determine which ports are open and what kind of services are running on them.
C. Password Crackers iii. Tries possible combinations to crack the code for password-protected files.
Q.6. State at least 2 effective countermeasures for the following vulnerabilities at the various OSI layers:
VULNERABILITY EXPLANATION
F. Presentation Layer
Vulnerabilities
Q.7. State at least 4 password usage practices that leave the passwords vulnerable to compromise.
1. __________________________________________________________________________________________________________
2. __________________________________________________________________________________________________________
3. __________________________________________________________________________________________________________
4. __________________________________________________________________________________________________________
UNIT 4
APPLICATION SECURITY
• Explain what applications are
• State the key vulnerabilities to applications
• Explain the overall process of identification of these vulnerabilities
• Explain how hardware and software vulnerabilities can be identified and resolved
• Describe application security testing processes
• Describe application security counter measures and their application
• Explain what OWASP is, and describe OWASP tools and methodologies
Applications – An Introduction
Applications are a type of software that allows people to perform specific tasks using various ICT devices.
• Applications could be for computers (desktops, laptops, etc.)
• Applications could be for mobile devices (smartphones, iPads, etc.)
• Some applications are also on the cloud
An application runs inside an operating system when opened and continues running until it is closed. We can have more than one application open at a time; this is known as multitasking. There are countless applications, and they fall into many different categories. Applications such as Microsoft Word are full-featured, while others, such as gadgets, accomplish only one or two things.
Organisations use Application Security, or ‘AppSec’ to protect their critical data from external threats by ensuring the
security of all the software used to run the business. This software can be built internally, bought or downloaded.
Application security helps to identify, fix and prevent security vulnerabilities in any kind of software application.
A software ‘vulnerability’ is an unintended flaw, weakness or exposure to risks in the software that leads it to process
critical data in an insecure way. Cybercriminals can enter an organisation’s systems by exploiting these ‘holes’ in
applications and steal confidential data.
SQL injection, Cross-Site Request Forgery (CSRF) and Cross-Site Scripting (XSS) are some of the common software vulnerabilities known in the field of application security.
• SQL injection exploits an application vulnerability that allows an attacker to submit a database SQL
command, exposing the back-end database where the attacker can create, read, update, alter or delete data.
• Cross-Site Scripting (XSS) is an attack that occurs when 'malicious scripts are injected into otherwise benign and trusted websites' (according to OWASP). XSS stems from security weaknesses in client-side code, such as HTML and JavaScript.
• Cross-Site Request Forgery (CSRF) manipulates a web application vulnerability that allows an attacker
to trick the end user into performing unwanted actions. CSRF lets the attacker access functionalities in a
target web application using the already authenticated browser of the victim.
• Smurf attack – This works in the same way as a Ping Flood attack, with one major difference: the source IP address of the attacking host is spoofed with the IP address of another, legitimate, non-malicious computer. Such an attack causes disruption both on the attacked host (which receives a large number of ICMP requests) and on the spoofed victim host (which receives a large number of ICMP replies).
• Buffer overflow attack – In this type of attack the victim host is supplied with traffic/data that is beyond the range of what the victim host, its protocols or its applications can process, overflowing the buffer and overwriting the adjacent memory. One example is the Ping of Death attack, where a malformed ICMP packet whose size exceeds the normal value can cause a buffer overflow.
• Botnet – a collection of compromised computers that can be controlled by remote perpetrators to perform
various types of attacks on other computers or networks. A known example of botnet usage is within the
distributed denial of service attack, where multiple systems submit as many requests as possible to the
victim machine to overload it with incoming packets. Botnets can be otherwise used to send out spam,
spread viruses and spyware and steal personal and confidential information which afterwards is forwarded
to the botmaster.
• Man-in-the-middle attack – This attack takes the form of active monitoring or eavesdropping on a victim's connections and the communication between victim hosts. In this type of attack, the attacker interposes himself/herself between the communicating parties, intercepting all parts of the communication, changing its content and sending it back as legitimate replies.
Both parties are unaware of the attacker's presence and believe the replies they receive are legitimate. For this attack to be successful, the perpetrator must successfully impersonate at least one of the endpoints. Well-defined protocols that ensure mutual authentication and encryption during the communication process help counter this type of attack.
• Session hijacking attack – This attack exploits a valid computer session to gain unauthorized access to information on a computer system.
Almost every application has vulnerabilities. There are also many tools and technologies to address application security, yet it is very important to always start with a strong strategy. At a high level, the strategy should address, and continuously improve on, these basic steps:
• Identification of vulnerabilities (flaws, weaknesses or exposure to risks)
• Assessment of risk
• Fixing the flaw, weakness or exposure
• Learning from mistakes and better managing future development processes
Application security can be enhanced by Threat Modelling, which involves following certain steps rigorously, which are:
• Defining enterprise assets
• Identifying what each application does (or will do) with respect to these assets
• Creating a security profile for each application
• Identifying and prioritizing potential threats and documenting adverse events and the actions taken in
each case
A threat can be defined as a potential or an actual adverse event capable of compromising the valuable assets of an
enterprise. This could include malicious events such as denial-of-service (DoS) attack and unplanned events such as
failure of a storage device.
Apart from that, there are many types of technologies available to assess applications for security vulnerabilities
which include the following:
• Static analysis (SAST), or “white-box” testing, analyzes applications without executing them.
• Dynamic analysis (DAST), or “black-box” testing, identifies vulnerabilities in running web applications.
• Interactive AST (IAST) technology combines elements of SAST and DAST and is implemented as an agent
within the test runtime.
• Mobile behavioral analysis discovers risky actions of mobile apps.
• Software composition analysis technologies (SCA) technologies analyze open source and third party
components.
• Manual penetration testing (or “pen testing”) technologies use the same methodology cybercriminals use
to exploit application weaknesses.
• Web application perimeter monitoring technologies help organisations discover their public-facing applications and easily exploitable vulnerabilities.
• Runtime application self-protection technologies help in detecting and preventing real-time application attacks.
While a variety of application security technologies is available to help with this endeavour, none is foolproof. One must use the strengths of multiple analysis techniques along the entire application lifetime to bring down the application risk.
It is crucial for organisations to develop a mature and robust application security program that can:
• Assess every application, whether built internally, bought or downloaded.
• Help developers find and fix vulnerabilities while coding.
• Incorporate security into the development process and scale the program with the help of automation and cloud-based services.
Security has become an important aspect of the software design process for applications as well. Security measures, along with a sound application security routine, help minimise the likelihood of an attack by unauthorised code. They provide immunity against unpermitted access to, and the theft, modification or deletion of, sensitive data within an application.
Top 10 Web Application Security Risks By Open Web Application Security Project (OWASP)
OWASP is an online community that produces freely-available articles, methodologies, documentation, tools, and
technologies in the field of web application security. We will read more about it in a later section of this unit. Given below is the list of the top 10 web application security risks identified by them.
1. Injection: Injection flaws, such as SQL, NoSQL, OS, and LDAP injection, occur when untrusted data is sent to an interpreter as part of a command or query. The attacker's hostile data can trick the interpreter into executing unintended commands or accessing data without proper authorization (a minimal probe is sketched after this item's bullets).
• Threat Agents/Attack Vectors: Almost any source of data can be an injection vector, environment variables,
parameters, external and internal web services, and all types of users. Injection flaws occur when an attacker
can send hostile data to an interpreter.
• Security Weakness: Injection flaws are very prevalent, particularly in legacy code. Injection vulnerabilities are
often found in SQL, LDAP, XPath, or NoSQL queries, OS commands, XML parsers, SMTP headers, expression
languages, and ORM queries. Injection flaws are easy to discover when examining code. Scanners and
fuzzers can help attackers find injection flaws.
• Impacts: Injection can result in data loss, corruption, or disclosure to unauthorized parties, loss of
accountability, or denial of access. Injection can sometimes lead to complete host takeover. The business
impact depends on the needs of the application and data.
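As a hedged illustration (the URL and parameter are hypothetical), the request below carries the classic URL-encoded tautology payload ' OR '1'='1; an application that concatenates the id parameter directly into a SQL string would return every row instead of one. Parameterised queries or prepared statements prevent the interpreter from treating such input as SQL:
# curl "http://app.example.test/items?id=1%27%20OR%20%271%27%3D%271"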
2. Broken Authentication: Application functions related to authentication and session management are often
implemented incorrectly, allowing attackers to compromise passwords, keys, or session tokens, or to exploit other
implementation flaws to assume other users’ identities temporarily or permanently.
• Threat Agents/Attack Vectors: Attackers have access to hundreds of millions of valid username and
password combinations for credential stuffing, default administrative account lists, automated brute force,
and dictionary attack tools. Session management attacks are well understood, particularly in relation to
unexpired session tokens.
• Security Weakness: The prevalence of broken authentication is widespread due to the design and
implementation of most identity and access controls. Session management is the bedrock of authentication
and access controls, and is present in all stateful applications. Attackers can detect broken authentication
using manual means and exploit them using automated tools with password lists and dictionary attacks.
• Impacts: Attackers have to gain access to only a few accounts, or just one admin account to compromise
the system. Depending on the domain of the application, this may allow money laundering, social security
fraud, and identity theft, or disclose legally protected highly sensitive information.
3. Sensitive Data Exposure: Many web applications and APIs do not properly protect sensitive data, such as financial, healthcare, and Personally Identifiable Information (PII) data. Attackers may steal or modify such weakly protected data
to conduct credit card fraud, identity theft, or other crimes. Sensitive data may be compromised without extra
protection, such as encryption at rest or in transit, and requires special precautions when exchanged with the
browser.
• Threat Agents/Attack Vectors: Rather than directly attacking crypto, attackers steal keys, execute man-
in-the-middle attacks, or steal clear text data off the server, while in transit, or from the user’s client, e.g.
browser. A manual attack is generally required. Previously retrieved password databases could be brute
forced by Graphics Processing Units (GPUs).
• Security Weakness: Over the last few years, this has been the most common impactful attack. The most common
flaw is simply not encrypting sensitive data. When crypto is employed, weak key generation and management,
and weak algorithm, protocol and cipher usage is common, particularly for weak password hashing storage
techniques. For data in transit, server-side weaknesses are mainly easy to detect, but hard for data at rest.
• Impacts: Failure frequently compromises all data that should have been protected. Typically, this information
includes sensitive personal information (as in the case of PII) data such as health records, credentials,
personal data, and credit cards, which often require protection as defined by laws or regulations such as
the EU GDPR or local privacy laws.
4. XML External Entities (XXE): Many older or poorly configured XML processors evaluate external entity references
within XML documents. External entities can be used to disclose internal files using the file URI handler, internal
file shares, internal port scanning, remote code execution, and denial of service attacks.
• Threat Agents/Attack Vectors: Attackers can exploit vulnerable XML processors if they can upload XML or
include hostile content in an XML document, exploiting vulnerable code, dependencies or integrations.
• Security Weakness: By default, many older XML processors allow specification of an external entity, a URI
that is dereferenced and evaluated during XML processing. SAST tools can discover this issue by inspecting
dependencies and configuration. DAST tools require additional manual steps to detect and exploit this issue. Manual testers need to be trained in how to test for XXE, as it was not commonly tested as of 2017.
• Impacts: These flaws can be used to extract data, execute a remote request from the server, scan internal
systems, perform a denial-of-service attack, as well as execute other attacks. The business impact depends
on the protection needs of all affected applications and data.
5. Broken Access Control: Restrictions on what authenticated users are allowed to do are often not properly
enforced. Attackers can exploit these flaws to access unauthorized functionality and/or data, such as access other
users’ accounts, view sensitive files, modify other users’ data, change access rights, etc.
• Threat Agents/Attack Vectors: Exploitation of access control is a core skill of attackers. SAST and DAST
tools can detect the absence of access control but cannot verify if it is functional when it is present. Access
control is detectable using manual means, or possibly through automation for the absence of access
controls in certain frameworks.
• Security Weakness: Access control weaknesses are common due to the lack of automated detection, and
lack of effective functional testing by application developers. Access control detection is not typically
amenable to automated static or dynamic testing. Manual testing is the best way to detect missing or
ineffective access control, including HTTP method (GET vs PUT, etc), controller, direct object references, etc.
• Impacts: The technical impact is attackers acting as users or administrators, or users using privileged
functions, or creating, accessing, updating or deleting every record. The business impact depends on the
protection needs of the application and data.
6. Security Misconfiguration: Security misconfiguration is the most commonly seen issue. This is commonly a
result of insecure default configurations, incomplete or ad hoc configurations, open cloud storage, misconfigured
HTTP headers, and verbose error messages containing sensitive information. Not only must all operating systems,
frameworks, libraries, and applications be securely configured, but they must be patched/upgraded in a timely
fashion.
• Threat Agents/Attack Vectors: Attackers will often attempt to exploit unpatched flaws or access default
accounts, unused pages, unprotected files and directories, etc to gain unauthorized access or knowledge
of the system.
• Security Weakness: Security misconfiguration can happen at any level of an application stack, including
the network services, platform, web server, application server, database, frameworks, custom code,
and pre-installed virtual machines, containers, or storage. Automated scanners are useful for detecting
misconfigurations, use of default accounts or configurations, unnecessary services, legacy options, etc.
• Impacts: Such flaws frequently give attackers unauthorized access to some system data or functionality.
Occasionally, such flaws result in a complete system compromise. The business impact depends on the
protection needs of the application and data.
7. Cross-Site Scripting XSS: XSS flaws occur whenever an application includes untrusted data in a new web page
without proper validation or escaping, or updates an existing web page with user-supplied data using a browser
API that can create HTML or JavaScript. XSS allows attackers to execute scripts in the victim’s browser which can
hijack user sessions, deface web sites, or redirect the user to malicious sites.
• Threat Agents/Attack Vectors: Automated tools can detect and exploit all three forms of XSS, and there are
freely available exploitation frameworks.
• Security Weakness: XSS is the second most prevalent issue in the OWASP Top 10, and is found in around
two thirds of all applications. Automated tools can find some XSS problems automatically, particularly in
mature technologies such as PHP, J2EE / JSP, and [Link].
• Impacts: The impact of XSS is moderate for reflected and DOM XSS, and severe for stored XSS, with remote
code execution on the victim’s browser, such as stealing credentials, sessions, or delivering malware to the
victim.
8. Insecure Deserialization: Insecure deserialization often leads to remote code execution. Even if deserialization
flaws do not result in remote code execution, they can be used to perform attacks, including replay attacks,
injection attacks, and privilege escalation attacks.
• Threat Agents/Attack Vectors: Exploitation of deserialization is somewhat difficult, as off the shelf exploits
rarely work without changes or tweaks to the underlying exploit code.
• Security Weakness: This issue is included in the Top 10 based on an industry survey and not on quantifiable
data. Some tools can discover deserialization flaws, but human assistance is frequently needed to validate
the problem. It is expected that prevalence data for deserialization flaws will increase as tooling is developed
to help identify and address it.
• Impacts: The impact of deserialization flaws cannot be overstated. These flaws can lead to remote code
execution attacks, one of the most serious attacks possible. The business impact depends on the protection
needs of the application and data.
9. Using Components with Known Vulnerabilities: Components, such as libraries, frameworks, and other software
modules, run with the same privileges as the application. If a vulnerable component is exploited, such an attack can
facilitate serious data loss or server takeover. Applications and APIs using components with known vulnerabilities
may undermine application defenses and enable various attacks and impacts.
• Threat Agents/Attack Vectors: While it is easy to find already-written exploits for many known vulnerabilities,
other vulnerabilities require concentrated effort to develop a custom exploit.
• Security Weakness: Prevalence of this issue is very widespread. Component-heavy development patterns
can lead to development teams not even understanding which components they use in their application or
API, much less keeping them up to date. Some scanners such as [Link] help in detection, but determining
exploitability requires additional effort.
• Impacts: While some known vulnerabilities lead to only minor impacts, some of the largest breaches to
date have relied on exploiting known vulnerabilities in components. Depending on the assets you are
protecting, perhaps this risk should be at the top of the list.
10. Insufficient Logging & Monitoring: Insufficient logging and monitoring, coupled with missing or ineffective
integration with incident response, allows attackers to further attack systems, maintain persistence, pivot to more
systems, and tamper, extract, or destroy data. Most breach studies show time to detect a breach is over 200 days,
typically detected by external parties rather than internal processes or monitoring.
• Threat Agents/Attack Vectors: Exploitation of insufficient logging and monitoring is the bedrock of nearly
every major incident. Attackers rely on the lack of monitoring and timely response to achieve their goals
without being detected.
• Security Weakness: This issue is included in the Top 10 based on an industry survey. One strategy for
determining if you have sufficient monitoring is to examine the logs following penetration testing. The
testers’ actions should be recorded sufficiently to understand what damages they may have inflicted.
• Impacts: Most successful attacks start with vulnerability probing. Allowing such probes to continue can
raise the likelihood of successful exploit to nearly 100%. In 2016, identifying a breach took an average of
191 days – plenty of time for damage to be inflicted.
• To read more about these you can go to the following OWASP website:
[Link]
• Bluesnarfing – A bluesnarfing attack allows the attacker or malicious user to gain unauthorised access to information on a particular device over a Bluetooth connection.
• Bluejacking – This kind of attack allows the malicious user to send unsolicited (often spam) messages to Bluetooth-enabled devices.
• Bluebugging – This is a hacking attack on Bluetooth-enabled devices. Bluebugging enables the attacker to initiate phone calls on the victim's phone as well as read through the address book, read messages and eavesdrop on phone conversations.
Dependency Determination
It is important to understand the entire architecture and dependencies of the application. This understanding provides
a better overview and focus.
One of the key objectives of this phase is to determine clear dependencies and to link them to the next phase. The
Figure shows the overall architecture of a web application.
These entry points provide information to an application. These values affect the databases, LDAP servers, processing
engines and other application components. If these values are not guarded, they can open up potential vulnerabilities
in the application. The relevant entry points are as follows:
• HTTP variables: The browser or end-client sends information to the application. This set of requests comprises several entry points such as form and query string data, cookies, and server variables.
• XML messages: The application is accessible by web services over XML (Extensible Markup Language)
messages. These messages are potential entry points to a web application.
• RSS and Atom feeds: Many new applications consume third-party XML-based feeds and present the output
in different formats to an end-user. RSS and Atom feeds have the potential to open up new vulnerabilities,
such as XSS or client-side script execution.
• XML files from servers: The applications can access the XML files from various partners over the Internet.
• Mail system: The application gets access to mails from the available mailing systems.
These are the important entry points to the application in the case study. Using regular expressions, it is possible to grab certain key patterns in the submitted data across multiple files in order to trace and analyse them.
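As a minimal sketch of this idea in Python (the log file name and the patterns are hypothetical, not taken from the case study), submitted data captured in a log can be scanned with regular expressions to flag suspicious input at known entry points:

    import re

    # Hypothetical patterns: SQL keywords and script tags in submitted values
    patterns = {
        "sql": re.compile(r"\b(UNION|SELECT|INSERT|DROP)\b", re.IGNORECASE),
        "xss": re.compile(r"<\s*script", re.IGNORECASE),
    }

    with open("access.log") as log:  # assumed file of captured request data
        for line_no, line in enumerate(log, 1):
            for name, pattern in patterns.items():
                if pattern.search(line):
                    print(f"line {line_no}: possible {name} pattern: {line.strip()}")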
An application analyst should capture these entries and exit points in the format below:
• Numerical ID - There is a need to ensure that an entry/exit point has a numerical ID so that it can be cross-
referenced with various threats and vulnerabilities.
• Name - Provide a name to the entry/exit point and identify the purpose of having it.
• Description - Provide a suitable description of the entry/exit point, outlining the activity taking place.
After locating these entry points to an application, one needs to trace them and search for vulnerabilities.
For identifying the criticality of the application, setup discussions primarily with business stakeholders and users of
the applications for the task detailed in table below:
Sources
1. Stakeholder discussions – Hold discussions with business stakeholders to understand the scope of an application, the underlying feature set, the data flow related to the application and the type of data transactions.
Hold discussions with the infrastructure team to understand where the application is hosted and the underlying infrastructure supporting it, including details on servers, the hosting model (in-house, outsourced data centre, cloud, etc.), supporting technologies (virtualisation, etc.) and the set of protocols used and their objectives.
Hold discussions with the application development team (in case of in-house development) and gather information on the type of programming language being used and the types of interfaces, such as Rich Internet Applications (RIA), which make extensive use of Ajax, Flash, HTML5, etc.
2. Internet search – Conduct a broad internet search to identify risks and vulnerabilities affecting the application, based on its technologies, hosting environment, programming language and interfaces. Start with some of the major search engines, using different keywords and word combinations. Narrow the results by searching within the search results or by formulating a more advanced query, and follow link after link as each lead is pursued.
3. Subscription – Subscribe to white papers or data feeds from various sources, such as OWASP, product vendors' white papers, and research papers from analyst firms like Gartner, Forrester, IDC, etc.
Sources
1. Business owners – Hold discussions with business stakeholders to understand the business importance of an application:
• Why is this application needed in the environment? Is it business critical? Is it an internal tracking application? Does it bring operational efficiency to a task that was done manually earlier?
• What are the input elements, processing elements and the expected outcomes?
• Is there an interdependency of the application with other processes, applications
or any infrastructure components?
• Who are the typical users of the application, what are their roles, and at what stage of application processing are they involved?
Identify through business discussion if the application is utilised by different lines
of business in case of a corporate application or functional application. Determine
the role of different lines of business and usage of an application.
Identify the different elements which characterise the criticality of the application. These factors include, but are not limited to:
• Type of data used – Personally Identifiable Information (PII), Financial
Information, Protected Health Information (PHI)
• Volume of data used
2. Application development teams or infrastructure support teams – Understand the current security controls deployed across the application from the following perspectives:
• Hosting of the application and security controls related to physical security, segmentation of zones, network controls around the application, open ports and their requirements.
• Identity and access management controls – who has access to the application, what kind of access (super user, user), type of access (read, write, execute), type of authentication, segregation of duties, etc.
• Type of encryption controls for data at rest and during transactions.
• Any other specific information/ cyber security policy requirements.
Understand the compliance requirement from corporate information security policy
or client specific security controls. This may be related to meeting any internal
certification requirements like ISO 27001 or PCI-DSS standards certification or any
compliance standard that an organisation is obliged under.
Capture the discussions and create a mapping of applications in the tracker. Use this information to prioritize across the different applications. These discussions will help the analyst understand the importance and business impact of an application in the event of a breach, vis-à-vis other application sets. This knowledge helps the analyst prioritize communication in the event of a possible security threat.
After that, the analyst must understand the type of application and the dependencies it has on in-house/outsourced/third-party/client applications, and evaluate the application category by considering the following factors:
• type of application – for example, legacy applications, third-party applications, custom code, mobile applications, communication and integration APIs, and packaged enterprise applications such as ERP and CRM
• type of environment – such as development, testing, staging, production
• externally provisioned systems – third-party or client systems
• application programming interfaces
Also gather web-based information through the use of automated tools and techniques, such as:
• search engine discovery
• web crawlers
• identifying application entry points
• mapping execution paths through the application
Use scenarios
Use scenarios describe how a system will be used or not used in terms of configuration or security goals and non-goals.
Use case scenarios can be defined both in a supported and unsupported configuration. Not addressing use scenarios
may result in a vulnerability. Use scenarios can limit the scope of analysis and also help validate the threat model. These use scenarios can be utilised by the testing team for conducting security testing and identifying possible attack paths. The architect and end users typically identify the use scenarios.
When defining use scenarios, the following data should be collected:
• Numerical ID - Each use scenario should have a unique identification number.
• Description - A description that defines the use scenario and whether it is supported or not.
External dependencies
External dependencies define a system's dependence on outside resources and the security policy outside the system being developed. If the threat from an external dependency is not treated efficiently, it can result in a vulnerability.
The following data can be taken into consideration while defining external dependencies:
• Numerical ID - Every external dependency should be provided a unique identification number.
• Description - A description of the dependency.
• External security note reference - Within an application, the external security notes can be cross-referenced
on other components with external dependencies.
External security notes
External security notes are provided to inform the users of security and integration information for the system.
External security can be an indication of a warning against misuse or a form of guarantee made by the system to the
user. External security notes are used to validate external dependencies and can be used as mitigation to a threat.
However, this is not a good practice as it makes the end user responsible for security.
The following data can be collected for defining external security notes:
• Numerical ID - As a standard practice, every external security note must be provided a unique identification
number.
• Description - A description of the note.
Internal security notes
Internal security notes are used in defining a threat model. At the same time, these notes also explain the concessions made in the design and implementation of a system's security.
In order to define internal security notes, following data can be collected:
• Numerical ID - Every internal security note should be identifiable with a unique identification number.
• Description - A description of the security concession and justification for the concession.
Implementation assumptions
Implementation assumptions contain features that are developed later in the process and are made during the design phase.
In order to define implementation assumptions, the following data can be considered:
• Numerical ID - Each internal implementation assumption should have a unique identification number.
• Description - A description of the method of implementation.
White box code inspection is used to analyse the static behaviour of a system, whereas black box exploratory testing is used to determine its dynamic behaviour. The testing process also helps in examining the coupling between systems and the interactions between distributed systems.
An application analyst makes use of threat modelling techniques to understand the risk to a system from malicious users or applications. Threat modelling allows attacks to be anticipated by understanding how an adversary chooses targets (assets) and entry points and conducts an attack.
The threat models will profile how adversaries view the system, its applications and attempt its exploitation. A set of
diagrammatic threat models are generally conceptualised and reviewed with key stakeholders. This is important not
only in identifying potential threats but also in understanding what application defenses must be defeated in order
for a threat or series of threats to be realised.
Once a threat model is reviewed and established as accurate, the process of test planning begins. In the test plan,
each threat path is refined into a general set of test cases that detail the tools, techniques and strategies for finding
vulnerabilities that will realise each threat. In some cases, the test cases are specific and detailed. In others, they are
more high-level direction for an application analyst in order to guide their exploratory testing of a feature or set of
features.
This test plan is also reviewed with key stakeholders, and any modifications are mutually agreed upon. Once an application analyst obtains sign-off on the test plan, test execution begins. However, in the event that fruitful
attack vectors are found during test execution, the test plan and threat model will be updated to reflect these new
approaches. During test execution, application analysts make parallel progress on the threat model and the test plan.
Daily updates are given to the point of contact and, if a vulnerability is identified, it is documented in a manner that describes how to reproduce the problem and its exploitability, including the risk scenario,
severity, reproduction steps and remediation recommendations. In the event that testing of a particular feature does
not reveal vulnerabilities, application analysts will still document the testing that was performed in detail. This is
important because it is imperative to understand not only where an application fails but also where it is implemented
securely.
The following is a summary of the attacks that large systems are typically most susceptible to due to malicious
outsiders and insiders (users, processes and applications):
• Authentication/ authorization attacks: These attacks include brute-forcing passwords (both dictionary
attacks and common account/password strings) and credentials, exploiting insufficient and poorly implemented
protection and recovery of passwords, key material (and so forth) both in memory and at component boundaries.
This includes attempting to bypass authentication, predicting/hijacking an authorised session, preventing session expiration, privilege escalation, data tampering and so forth.
• System dependency attacks: By carefully monitoring the environment of use of an application, crucial system
resources can be identified and targeted in an attempt to disrupt access to them.
A system must have the ability to securely process corrupt, missing and Trojan files, including cookies and registry
keys. Other known attacks against any reused third party components will also be catalogued.
• Input attacks: Large systems are often susceptible to input strings that tend to cause insecure behaviours.
Attacks in this class include long strings (buffer overruns), SQL injection, command injection, format strings, LDAP
injection, OS commanding, SSI injection, XPath injection, escape characters, and special/problematic character
sets. A variety of initial configurations and command line switches may also affect the system.
• Design attacks: Systemic design flaws often allow an application to be exploited. This includes unprotected
internal APIs, alternate routes through and around security checks, open ports, forcing loop conditions and faking
the source of data (content spoofing). Race conditions and attacks that take advantage of time discrepancies
(Time of Check/Time of Use) are of particular concern in this category.
• Information disclosure attacks: Applications can often be forced to disclose sensitive or useful data in any
number of ways. Error messages generated by the application often contain information useful to attackers.
Attacks of this type include directory indexing attacks, path traversal attacks and determination of whether the
application allocates resources from a predictable and accessible location. The intent with this set of attacks is to
isolate any and all cases of information leakage.
• Logic/ implementation (business model) attacks: The hardest attacks to apply are often the most lucrative
for an attacker. These include screening temporary files for sensitive information, attempts to abuse internal
functionality to expose secrets and cause insecure behaviour, checking for faulty process validation and testing
an application’s ability to be remote-controlled. Users may get in between the time-of-check and time-of-use of
sensitive data (‘man-in-the-middle’) and perform denial of service at the component level.
• Cryptographic attacks: One of the biggest issues in cryptography is improper implementation. While
cryptography is exceptionally well suited to protect data at rest (when stored) or in transit, several challenges arise
when implementing cryptography on data in use. There are often hidden cracks in cryptography implementation.
How to test?
Use a search engine to search for:
• Network diagrams and configurations
• Archived posts and emails by administrators and other key staff
• Log on procedures and username formats
• Usernames and passwords
• Error message content
• Development, test, UAT and staging versions of the website
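For instance, operator-based queries (often called 'dorks') supported by major search engines can narrow such searches; the domain below is a placeholder:

    site:example.com filetype:log
    site:example.com inurl:admin
    site:example.com intitle:"index of"
    site:example.com ext:bak OR ext:old OR ext:sql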
Configuration and deployment management testing
Proper configuration of single elements that make up an application architecture is important to prevent mistakes
that might compromise the security of the whole architecture.
Configuration review and testing is a critical task in creating and maintaining an architecture. This is because many different systems are usually provided with generic configurations that might not be suited to the task they will perform on the specific site where they are installed. While a typical web and application server installation will contain a lot of functionality (like application examples, documentation and test pages), whatever is not essential should be removed before deployment to avoid post-install exploitation.
Comment review
It is very common, and even recommended, for programmers to include detailed comments in their source code to allow other programmers to better understand why a given decision was taken in coding a given function. Programmers usually add comments while developing large web applications. However, comments included inline in HTML code can reveal internal information that an attacker should not know. Sometimes, even source code is commented out when its functionality is no longer required, and this commented code is unintentionally leaked into the HTML pages returned to users.
To determine if any information is being leaked through comments, a comment review should be done. To do this
review thoroughly, it should be done through an analysis of the web server, static and dynamic content and file
searches. Browsing the site either in an automatic or guided fashion and storing all the content retrieved can also be
useful. Then the retrieved content can be searched to analyse the HTML comments available in the code.
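As a simple sketch of such a search in Python (the file name is a placeholder for stored page content), HTML comments can be extracted with a regular expression:

    import re

    with open("page.html") as f:
        html = f.read()

    # HTML comments are delimited by <!-- and -->; DOTALL lets '.' span newlines
    for comment in re.findall(r"<!--(.*?)-->", html, re.DOTALL):
        print(comment.strip())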
System configuration
CIS-CAT (Center for Internet Security - Configuration Assessment Tool) helps security personnel by providing a fast and detailed assessment of target systems' conformance to CIS Benchmarks. CIS also provides recommended system configuration hardening guides covering databases, operating systems, web servers and virtualisation.
Vulnerability scanners, and more specifically web application scanners, otherwise known as penetration testing tools (i.e. ethical hacking tools), have historically been used by security organisations within corporations and by security consultants to automate the security testing of HTTP requests/responses; however, this is not a substitute for actual source code review.
Code reviews of an application's source code can be accomplished either manually or in an automated fashion. Given the common size of individual programmes (often 500,000 lines of code or more), the human brain cannot execute the comprehensive data flow analysis needed to completely check all circuitous paths of an application programme for vulnerability points. The human brain is better suited to filtering, interpreting and reporting the outputs of commercially available automated source code analysis tools than to tracing every possible path through a compiled code base to find root-cause vulnerabilities.
The two types of automated tools associated with application vulnerability detection (application vulnerability
scanners) are Penetration Testing Tools (often categorised as Black Box Testing Tools) and static code analysis tools
(often categorised as White Box Testing Tools).
According to Gartner Research, "...next-generation modern Web and mobile applications require a combination of
SAST and DAST techniques, and new interactive application security testing (IAST) approaches have emerged that
combine static and dynamic techniques to improve testing...". Because IAST combines SAST and DAST techniques, the
results are highly actionable, and can be linked to the specific line of code and recorded for replay later for developers.
Industries such as banking and large e-commerce corporations have been early adopters of these tools. Both black box and white box testing tools are required for effective application security testing. Black box testing tools are ethical hacking tools which attack the application surface, thereby exposing vulnerabilities present in the source code.
Penetration testing tools are executed against an existing, deployed application. White box testing tools (meaning source code analysis tools) are used either by application security groups or by application development groups. Typically introduced into a company through the application security organisation, white box tools complement black box testing tools by giving specific visibility into the root vulnerabilities within the source code before that code is deployed.
Vulnerabilities identified with white box testing and black box testing are typically in accordance with the OWASP taxonomy for software coding errors. White box testing vendors have recently introduced dynamic versions of their source code analysis methods which operate on deployed applications. Given that white box testing tools now have dynamic versions like the black box testing tools, the findings of both can be correlated in the same software error detection pattern, ensuring fuller application protection for the client company.
The advances in professional malware targeted at internet customers of online organisations have seen a change in
web application design requirements since 2007. It is generally assumed that a sizable percentage of internet users
will be compromised through malware and that any data coming from their infected host may be tainted. Therefore,
application security has begun to manifest more advanced anti-fraud and heuristic detection systems in the back-
office rather than within the client-side or web server code.
There are at least 50 testing tools available in the market today, including both paid and open-source tools. Some tools are purpose-specific, offering services such as UI testing, functional testing, database testing, load testing, performance testing, security testing and link validation; others are capable of testing most major components of an application. At its core, 'application testing' means functional testing.
Here is a list of some of the most important and fundamental features provided by almost all functional testing tools:
• Record and play
• Parameterise the values
• Script editor
• Run (the test or script, with debug and update modes)
• Report of run session
The vendors focus on specific features that make their products unique among the competitors. The features listed
above are common and are found in almost all functional testing tools.
Following is the list of few widely used Functional Testing tools.
• HP QTP (Quick Test Professional)
• Selenium
• IBM Rational Robot
• Test Complete
• Push to Test
• Telerik
NESSUS - Nessus is a popular vulnerability scanner developed by Tenable, Inc. It is used for scanning various
technologies including operating systems, network devices, hypervisors, databases, web servers, and critical
infrastructure. Some vulnerabilities and exposures that it can scan for include flaws that could allow unauthorized access to or control of sensitive data on a system; misconfigurations (e.g. open mail relay, missing patches, etc.); default passwords; and DoS vulnerabilities. To know more about Nessus and to install it, one can visit the website -
[Link]
SYN - A SYN or stealth scan is known as a half-open scan because it does not complete the TCP three-way handshake. Initially, the attacker sends a SYN packet to the target. If a SYN/ACK frame is received back, it is assumed that the target would complete the connection and that the port is listening. If an RST is received from the target, it means that the port is not active or has been closed. The advantage of this scan is that fewer IDS systems log the activity as an attack or a connection attempt.
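A minimal sketch of a single half-open probe using the Scapy library is shown below (the target address and port are placeholders; sending raw packets requires root privileges and authorisation to test the target):

    from scapy.all import IP, TCP, sr1

    probe = IP(dst="192.0.2.10") / TCP(dport=80, flags="S")  # SYN only
    reply = sr1(probe, timeout=2, verbose=0)

    if reply is None:
        print("No response (filtered or host down)")
    elif reply.haslayer(TCP):
        flags = int(reply[TCP].flags)
        if flags & 0x12 == 0x12:   # SYN/ACK means the port is listening
            print("Port appears open")
        elif flags & 0x04:         # RST means the port is closed
            print("Port closed")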
XMAS - With the XMAS scan method, one can send a packet with the FIN, URG, and PSH flags set. In case the port
is open, there is no response; but if the port has been closed, the target will respond with an RST/ACK packet. These types of scans work on target systems that follow the RFC 793 implementation of TCP/IP and do not work against any version of Windows.
FIN - Similar to an XMAS scan, a FIN scan sends a packet with only the FIN flag set. FIN scans receive the same types of responses and have the same limitations as XMAS scans.
NULL - A NULL scan sends a packet with no flag set. In terms of limitations and responses, it is the same as XMAS
and FIN type of scans.
IDLE - An IDLE scan makes use of a spoofed IP address to send a SYN packet to a target. Depending upon the response, the port can be determined to be either open or closed. IDLE scans determine the result by monitoring IP header sequence numbers.
IPEye - IPEye, a command-line TCP port scanner, is capable of performing SYN, FIN, NULL and XMAS scans. IPEye probes the ports on a target system and reports responses such as closed, reject, drop or open. A 'closed' response indicates that there is a computer on the other end, but that it does not listen on the port. 'Reject' means that a firewall has rejected the connection to the port. 'Drop' means that a firewall drops everything sent to the port, or that there is no system at the other end. 'Open' indicates some kind of service listening on the port. These responses are crucial in helping an attacker identify the type of system that is responding.
IPSecScan - IPSecScan is a tool that can scan either a single IP address or a range of addresses, looking for systems with IPSec enabled.
NetScan Tools Pro, hping2, KingPing, icmpenum and SNMP Scanner are scanning tools that can also be easily used to fingerprint the operating system.
Icmpenum uses not only ICMP Echo packets to probe networks, but also ICMP Timestamp and ICMP Information packets. It also supports spoofing and sniffing for reply packets, and is great for scanning networks where the firewall blocks ICMP Echo packets but fails to block Timestamp or Information packets.
The hping2 tool contains a host of features useful for OS fingerprinting: it supports TCP, User Datagram Protocol (UDP), ICMP and raw-IP ping protocols, provides a trace-route mode and, finally, offers the ability to send files between source and target systems.
SNMP Scanner offers the capability of scanning a range or list of hosts, performing ping, DNS and Simple Network Management Protocol (SNMP) queries.
Threats can be ranked from the perspective of risk factors. By determining the risk factor posed by the various
identified threats, it is possible to create a prioritized list of threats to support a risk mitigation strategy, such as
deciding on which threats have to be mitigated first. Different risk factors can be used to determine which threats can
be ranked as High, Medium, or as a Low risk. In general, threat risk models use different factors to model risks such
as those shown in the figure below:
Fig 4.3: Use of different factors by threat risk models to model risks
A generic risk model takes into account the likelihood (i.e. the probability of an attack) and its impact (i.e. the damage potential). It can be defined as:
Risk = Likelihood x Impact
Likelihood, or probability, reflects the ease of exploitation, which depends on the type of threat and the system characteristics, and the possibility of realizing the threat, which is determined by the existence of an appropriate countermeasure.
The following considerations can be taken into account to determine the ease of exploitation:
• Will the attacker be able to exploit this remotely?
• Is there a need for the attacker to be authenticated?
• Can the exploits be automated?
The impact is determined by the damage potential and the extent of the impact, for example the number of components affected by a particular threat.
Some factors to be considered for determining damage potential are:
• Can the attacker completely take over and manipulate the system?
• Can the attacker gain administration access to the system?
• Is the attacker capable enough to make the system crash?
• Can the attacker access sensitive information such as secrets, PII, etc.?
The following can help in determining the number of components that may be affected by a particular threat:
• How many data sources and systems are impacted?
• How 'deep' into the infrastructure has the damage gone?
These considerations can help an application specialist calculate the overall risk arising from these threats. The risks can be assigned qualitative values, such as High, Medium and Low, for the various likelihoods and impacts.
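The qualitative combination can be sketched as a simple lookup in Python, assuming three-level scales for both factors (the thresholds are illustrative, not a standard):

    # Qualitative risk rating from Risk = Likelihood x Impact
    LEVELS = {"Low": 1, "Medium": 2, "High": 3}

    def risk_rating(likelihood: str, impact: str) -> str:
        score = LEVELS[likelihood] * LEVELS[impact]
        if score >= 6:
            return "High"
        if score >= 3:
            return "Medium"
        return "Low"

    print(risk_rating("High", "Medium"))  # -> High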
4.2 COUNTERMEASURES
The primary function of countermeasure identification is to determine whether protective measures, such as security controls and policy measures, are in place. These measures are aimed at protecting against the threats identified through threat analysis. A threat for which no countermeasure exists represents a vulnerability.
Countermeasures are actions taken to ensure application security:
• ‘Application firewall’ is the most basic software countermeasure that limits the execution of files and the
handling of data by specific installed programs.
• Using a router which is also the most common hardware countermeasure can prevent the IP address of an
individual computer from being directly visible on the Internet.
• Conventional firewalls, encryption/decryption programs, anti-virus programs, spyware detection/removal
programs and biometric authentication systems are some of the other countermeasures.
Application security can be enhanced by Threat Modelling, which involves the following rigorous steps:
• Defining enterprise assets
• Identifying what each application does (or will do) with respect to these assets
• Creating a security profile for each application
• Identifying and prioritising potential threats and documenting adverse events and actions taken in each case
In this context, a threat is any potential or actual adverse event that can compromise the assets of an enterprise,
including both malicious events, such as a denial-of-service (DoS) attack and unplanned events, such as failure of a
storage device.
Apart from that, there are technologies available to assess applications for security vulnerabilities which include the
following:
• Static analysis (SAST), or ‘white-box’ testing analyses applications without executing them.
• Dynamic analysis (DAST), or ‘black-box’ testing identifies vulnerabilities in running web applications.
• Interactive AST (IAST) technology combines elements of SAST and DAST, and is implemented as an agent
within the test runtime.
• Mobile behavioral analysis discovers risky actions of mobile apps.
• Software composition analysis (SCA) analyses open source and third-party components.
• Manual penetration testing (or pen testing) uses the same methodology that cybercriminals use to exploit
application weaknesses.
• Web application perimeter monitoring discovers all public-facing applications and the most exploitable
vulnerabilities.
• Runtime application self-protection (RASP) is built into an application and can detect and prevent real-time
application attacks.
While there is a variety of application security technologies available to help with this endeavour, none of them is foolproof. One must use the strengths of multiple analytic techniques across the entire application lifetime to bring down the application risk.
The end goal for any organisation should be a mature, robust application security programme that:
• Assesses every application, whether built internally, bought or downloaded
• Enables developers to find and fix vulnerabilities while they are coding
• Takes advantage of automation and cloud-based services to easily incorporate security into the development
process and scale the programme
Once an afterthought in software design, security is becoming an increasingly important concern during development
as applications become more frequently accessible over networks and are, thus, vulnerable to a wide variety of threats.
Security measures built into applications, together with a sound application security routine, minimize the likelihood that unauthorised code will be able to manipulate applications to access, steal, modify or delete sensitive data.
Authentication
1. Credentials and authentication tokens are protected with encryption in storage and transit.
2. Protocols are resistant to brute force, dictionary, and replay attacks.
3. Strong password policies are enforced.
4. Trusted server authentication is used instead of SQL authentication.
5. Passwords are stored with salted hashes.
6. Password resets do not reveal password hints and valid usernames.
7. Account lockouts do not result in a denial of service attack.
Authorisation
1. Strong ACLs are used for enforcing authorised access to resources.
2. Role-based access controls are used to restrict access to specific operations.
3. The system follows the principle of least privilege for user and service accounts.
4. Privilege separation is correctly configured within the presentation, business and data access layers.
Configuration Management
1. Least privileged processes are used, and service accounts have no administration capability.
2. Auditing and logging of all administration activities is enabled.
3. Access to configuration files and administrator interfaces is restricted to administrators.
Data Protection in Storage and Transit
1. Standard encryption algorithms and correct key sizes are being used.
2. Hashed message authentication codes (HMACs) are used to protect data integrity.
3. Secrets (e.g., keys, confidential data) are cryptographically protected both in transit and in storage.
4. Built-in secure storage is used for protecting keys.
5. No credentials or sensitive data are sent in clear text over the wire.
Data Validation/Parameter Validation
1. Data type, format, length, and range checks are enforced.
2. All data sent from the client is validated.
3. No security decision is based upon parameters (e.g., URL parameters) that can be manipulated.
4. Input filtering via whitelist validation is used (see the sketch after this checklist).
5. Output encoding is used.
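A minimal Python sketch of the last two data validation items, whitelist input validation and output encoding (the field and pattern are illustrative):

    import html
    import re

    # Whitelist: accept only the characters the field legitimately needs
    USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,32}")

    def validate_username(value: str) -> str:
        if not USERNAME_RE.fullmatch(value):
            raise ValueError("invalid username")
        return value

    # Output encoding: escape user data before embedding it in HTML
    print(html.escape('<b>name</b> "quoted"'))  # &lt;b&gt;name&lt;/b&gt; &quot;quoted&quot;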
Applications are capable of controlling the kinds of resources granted to them, and they further determine how application users may use these resources, thereby helping to ensure application security.
• Asset: A resource of value, such as the data in a database or on a file system, or a system resource.
• Threat: Anything that can exploit the vulnerability and obtain, damage, or destroy an asset.
• Vulnerability: A gap or weakness in a security programme that can be exploited by threats to gain
unauthorised access to an asset.
• Attack (or exploit): An action taken to harm an asset.
• Countermeasure: A defense that addresses a threat and mitigates risk.
Computer software programs enable the various processes in a network to communicate with each other and run applications. Securing software is becoming more important than ever as the focus of attackers moves towards the application layer.
It is usually more convenient and cost-effective to build secure software than to correct security issues after the software package has been completed. It is also safer, since it avoids a security breach in the first place.
The principle of secure coding was developed keeping this in mind. It helps software engineers and other developers
anticipate security challenges and prepare for these issues at the design stage.
Secure coding is the practice of writing a source code or a code base that is compatible with the best security
principles for a given system and interface.
To develop a secure application, developers must learn important secure coding principles and how they can be applied.
As the security community becomes aware of more and more hacking and cyber-attack strategies, it builds new security mechanisms to protect against them. As developers contribute collectively, a large body of secure coding practices has evolved.
The SEI CERT Coding Standards offer a collection of recommended steps for ensuring that a program is secure, organised by programming language – C, C++, Java, Perl, and Android.
One can access them from the following links:
[Link]
OWASP has compiled a list of secure coding practices for application security.
[Link]
Secure coding principles described in OWASP Secure Coding Guidelines are:
• Input Validation
• Output Encoding
• Authentication and Password Management (includes secure handling of credentials by external services/
scripts)
• Session Management
• Access Control
• Cryptographic Practices
• Error Handling and Logging
• Data Protection
• Communication Security
193
Unit 4 - Application Security
• System Configuration
• Database Security
• File Management
• Memory Management
• General Coding Practices
Compliance with these controls can be assessed through an application security testing programme (as required, for example, by MSSEI 6.2):
[Link]
The given checklist indicates various threats and countermeasures. Note that the list is not exhaustive, and there are many more ways to counter various types of threats. Once threats and corresponding countermeasures are identified, it is possible to derive a threat profile with the following criteria:
• Non mitigated threats: Threats which have no countermeasures and represent vulnerabilities that can be
fully exploited and cause an impact
• Partially mitigated threats: Threats partially mitigated by one or more countermeasures, which represent
vulnerabilities that can only partially be exploited and cause a limited impact
• Fully mitigated threats: These threats have appropriate countermeasures in place and do not expose a vulnerability or cause an impact.
The objective of risk management is to reduce the impact that the exploitation of a threat can have on the application. This can be done by responding to a threat with a risk mitigation strategy. In general, there are six options to mitigate threats:
• Do nothing: for example, hoping for the best
• Inform about the risk: for example, warning user population about the risk
• Mitigate the risk: for example, by putting countermeasures in place
• Accept the risk: for example, after evaluating the impact of the exploitation (business impact)
• Transfer the risk: for example, through contractual agreements and insurance
• Terminate the risk: for example, shutdown, turn-off, unplug or decommission the asset
The decision of which strategy is most appropriate depends on the impact the exploitation of a threat can have, the likelihood of its occurrence, and the costs of transferring (i.e. costs of insurance) or avoiding (i.e. costs or losses due to redesign) it. That is, such a decision is based on the risk a threat poses to the system.
Therefore, the chosen strategy does not mitigate the threat itself but the risk it poses to the system. Ultimately the
overall risk has to take into account the business impact since this is a critical factor for the business risk management
strategy. One strategy could be to fix only the vulnerabilities for which the cost to fix is less than the potential business
impact derived by the exploitation of the vulnerability. Another strategy could be to accept the risk when the loss of
some security controls (e.g. Confidentiality, Integrity, and Availability) implies a small degradation of the service and
not a loss of a critical business function. In some cases, transfer of the risk to another service provider might also be
an option.
Open Web Application Security Project (or OWASP) operates as a non-profit and is not affiliated with any
technology company, which means it is in a unique position to provide impartial, practical information about AppSec
to individuals, corporations, universities, government agencies and other organizations worldwide. Operating as a
community of like-minded professionals, OWASP issues software tools and knowledge-based documentation on
application security. All of its articles, methodologies and technologies are made available free of charge to the
public. OWASP maintains roughly 100 local chapters and counts thousands of members.
OWASP seeks to educate developers, designers, architects and business owners about the risks associated with the
most common Web application security vulnerabilities. OWASP, which supports both open source and commercial
security products, has become known as a forum in which information technology professionals can network and
build expertise. The organization publishes a popular Top Ten list that explains the most dangerous Web application
security flaws and provides recommendations for dealing with those flaws.
The OWASP tools, documents and code library projects are divided into three categories. The first category consists of tools and documents used for finding security-related design and implementation flaws. The second category consists of tools and documents used to guard against security-related design and implementation flaws. Finally, there are tools and documents used for adding security-related activities into application lifecycle management (ALM).
1. Injection
Preventing injection requires keeping data separate from commands and queries.
• The preferred option is to use a safe API which avoids the use of the interpreter entirely or provides a parameterized interface, or to migrate to Object Relational Mapping Tools (ORMs); see the sketch after this list. Note: Even when parameterized, stored procedures can still introduce SQL injection if PL/SQL or T-SQL concatenates queries and data, or executes hostile data with EXECUTE IMMEDIATE or exec().
• Use positive or “whitelist” server-side input validation. This is not a complete defense as many applications
require special characters, such as text areas or APIs for mobile applications.
• For any residual dynamic queries, escape special characters using the specific escape syntax for that
interpreter. Note: SQL structure such as table names, column names, and so on cannot be escaped, and
thus user-supplied structure names are dangerous. This is a common issue in report-writing software.
• Use LIMIT and other SQL controls within queries to prevent mass disclosure of records in case of SQL
injection.
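A minimal sketch of the parameterized-interface option using Python's built-in sqlite3 module (the table and data are illustrative):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")

    hostile = "alice' OR '1'='1"  # injection attempt is treated as plain data
    rows = conn.execute(
        "SELECT id FROM users WHERE name = ?",  # placeholder, no concatenation
        (hostile,),
    ).fetchall()
    print(rows)  # [] - the injection matches nothing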
2. Broken Authentication.
• Where possible, implement multi-factor authentication to prevent automated, credential stuffing, brute
force, and stolen credential re-use attacks.
• Do not ship or deploy with any default credentials, particularly for admin users.
• Implement weak-password checks, such as testing new or changed passwords against a list of the top 10,000 worst passwords (see the sketch at the end of this list).
• Align password length, complexity and rotation policies with NIST 800-63 B’s guidelines in section 5.1.1 for
Memorized Secrets or other modern, evidence based password policies.
• Ensure registration, credential recovery, and API pathways are hardened against account enumeration
attacks by using the same messages for all outcomes.
• Limit or increasingly delay failed login attempts. Log all failures and alert administrators when credential
stuffing, brute force, or other attacks are detected.
• Use a server-side, secure, built-in session manager that generates a new random session ID with high
entropy after login. Session IDs should not be in the URL, be securely stored and invalidated after logout,
idle, and absolute timeouts.
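A minimal sketch of the weak-password check mentioned above, assuming a local file of known-bad passwords (the file name is a placeholder):

    # Reject any new or changed password found in a list of breached passwords
    with open("10k-worst-passwords.txt") as f:
        bad_passwords = {line.strip().lower() for line in f}

    def password_acceptable(candidate: str) -> bool:
        return len(candidate) >= 8 and candidate.lower() not in bad_passwords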
3. Sensitive Data Exposure.
Do the following, at a minimum, and consult the references:
• Classify data processed, stored or transmitted by an application. Identify which data is sensitive according
to privacy laws, regulatory requirements, or business needs.
• Apply controls as per the classification.
• Don’t store sensitive data unnecessarily. Discard it as soon as possible or use PCI DSS compliant tokenization
or even truncation. Data that is not retained cannot be stolen.
• Make sure to encrypt all sensitive data at rest.
• Ensure up-to-date and strong standard algorithms, protocols, and keys are in place; use proper key
management.
• Encrypt all data in transit with secure protocols such as TLS with perfect forward secrecy (PFS) ciphers,
cipher prioritization by the server, and secure parameters. Enforce encryption using directives like HTTP
Strict Transport Security (HSTS).
• Disable caching for responses that contain sensitive data.
• Store passwords using strong adaptive and salted hashing functions with a work factor (delay factor), such as Argon2, scrypt, bcrypt or PBKDF2 (see the sketch after this list).
• Verify independently the effectiveness of configuration and settings.
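A minimal sketch of salted adaptive hashing using PBKDF2 from Python's standard library (the iteration count is illustrative and should be tuned to your hardware):

    import hashlib
    import os

    def hash_password(password: str):
        salt = os.urandom(16)  # unique random salt per password
        digest = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), salt, 600_000  # work factor
        )
        return salt, digest

    salt, stored = hash_password("correct horse battery staple")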
4. XML External Entities (XXE)
Developer training is essential to identify and mitigate XXE. Besides that, preventing XXE requires:
• Whenever possible, use less complex data formats such as JSON, and avoiding serialization of sensitive data.
• Patch or upgrade all XML processors and libraries in use by the application or on the underlying operating
system. Use dependency checkers. Update SOAP to SOAP 1.2 or higher.
• Disable XML external entity and DTD processing in all XML parsers in the application, as per the OWASP
Cheat Sheet ‘XXE Prevention’.
• Implement positive (“whitelisting”) server-side input validation, filtering, or sanitization to prevent hostile
data within XML documents, headers, or nodes.
• Verify that XML or XSL file upload functionality validates incoming XML using XSD validation or similar.
• SAST tools can help detect XXE in source code, although manual code review is the best alternative in large,
complex applications with many integrations.
If these controls are not possible, consider using virtual patching, API security gateways, or Web Application Firewalls (WAFs) to detect, monitor, and block XXE attacks. A minimal parsing sketch follows.
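As a sketch, the defusedxml library (a hardened drop-in for Python's XML parsers) rejects entity declarations by default; the payload below is a classic XXE probe:

    # pip install defusedxml
    import defusedxml.ElementTree as ET
    from defusedxml import EntitiesForbidden

    payload = (
        '<?xml version="1.0"?>'
        '<!DOCTYPE r [<!ENTITY x SYSTEM "file:///etc/passwd">]>'
        '<r>&x;</r>'
    )

    try:
        ET.fromstring(payload)
    except EntitiesForbidden:
        print("Blocked: entity declaration rejected before parsing")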
8. Insecure Deserialization
The only safe architectural pattern is not to accept serialized objects from untrusted sources, or to use serialization mediums that only permit primitive data types. If that is not possible, consider one or more of the following:
• Implementing integrity checks such as digital signatures on any serialized objects to prevent hostile object
creation or data tampering.
• Enforcing strict type constraints during deserialization before object creation as the code typically expects
a definable set of classes. Bypasses to this technique have been demonstrated, so reliance solely on this is
not advisable.
• Isolating and running code that deserializes in low privilege environments when possible.
• Log deserialization exceptions and failures, such as where the incoming type is not the expected type, or
the deserialization throws exceptions.
• Restricting or monitoring incoming and outgoing network connectivity from containers or servers that
deserialize.
• Monitoring deserialization, and alerting if a user deserializes constantly. (A sketch of a serialization integrity check follows.)
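A minimal sketch of the integrity-check idea: append an HMAC tag to a JSON payload and verify it before deserializing (the key is a placeholder and would come from secure storage, not source code):

    import hashlib
    import hmac
    import json

    SECRET_KEY = b"replace-with-key-from-secure-storage"  # placeholder

    def serialize(obj) -> bytes:
        payload = json.dumps(obj).encode()  # JSON permits primitive types only
        tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
        return tag + payload

    def deserialize(blob: bytes):
        tag, payload = blob[:32], blob[32:]
        expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):  # constant-time comparison
            raise ValueError("integrity check failed; refusing to deserialize")
        return json.loads(payload)

    blob = serialize({"user": "alice", "role": "viewer"})
    print(deserialize(blob))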
SUMMARY
• An application is a type of software that allows people to perform specific tasks using various ICT devices. Word processors and web browsers are some commonly used applications.
• Google Docs is an example of a cloud application, since it provides the functionality of Microsoft Word.
• Some examples of software vulnerability include SQL injection, Cross-Site Request Forgery (CSRF) and Cross-
Site Scripting (XSS).
• Denial of Service attack causes an interruption or suspension of services of a specific host/ server by flooding
it with large quantities of useless traffic or external communication requests.
• Bluesnarfing, bluejacking and bluebugging are security attacks related to Bluetooth.
• White-box testing, black-box testing and grey box testing are a few examples of application penetration testing
techniques.
• The black box methodology relies only on information ordinarily available to two distinct classes of attackers:
insiders and outsiders.
• White-box testing validates how the business logic of an application is implemented by code.
• Penetration tests are usually conducted using manual or automated techniques to routinely compromise
servers, endpoints, web apps, wireless networks, network appliances, mobile devices and other possible
exposure points.
• The likelihood or probability is characterized by the ease of exploitation, which depends mainly on the type of threat and the characteristics of the system, and by the possibility of realizing the threat, which is determined by the presence of an effective countermeasure.
• Authentication/authorization attacks involve brute-forcing passwords (both dictionary attacks and common account/password strings) and credentials, and exploiting insufficient and poorly implemented protection and recovery of passwords and key material (and so on), both in memory and at component boundaries.
• Attacks such as long strings (buffer overruns), SQL injection, command injection, format strings, LDAP injection,
OS commanding, SSI injection, XPath injection, escape characters, and special/problematic character sets fall
under the category of input attacks.
• Design attacks include unprotected internal APIs, alternate routes through and around security checks, open
ports, forcing loop conditions and faking the source of data (content spoofing).
• Examples of information disclosure attacks include directory indexing attacks, path traversal attacks, and determining whether the application allocates resources from a predictable and accessible location.
• Application firewall is the most basic software countermeasure that limits the execution of files and the handling
of data by specific installed programs.
• OWASP aims to inform developers, designers, architects and business owners about the risks associated with the most common Web application security vulnerabilities.
KNOWLEDGE CHECK
Q.3. Select the right choice from the following multiple choice questions
A. Which software vulnerability is an attack that occurs when malicious scripts are injected into otherwise benign and trusted websites?
i. SQL Injection
ii. Cross Site Scripting (XSS)
iii. Cross Site Request Forgery (CSRF)
iv. Smurf Attack
v. Buffer Overflow attack
B. In which type of attack is the victim host provided with traffic/data that is out of range of the processing specs of the victim host, protocols or applications, overflowing the buffer and overwriting the adjacent memory?
i. SQL Injection
ii. Cross Site Scripting (XSS)
iii. Cross Site Request Forgery (CSRF)
iv. Botnet
v. Buffer Overflow attack
C. Which software vulnerability allows an attacker to submit a database SQL command, exposing the back-end database, where the attacker can create, read, update, alter or delete data?
i. SQL Injection
ii. Cross Site Scripting (XSS)
iii. Man-in-the-middle attack
iv. Smurf Attack
v. Buffer Overflow attack
Q.6. What is the Black Box Testing and White Box Testing in Application Security? What is more effective in finding
holes in the applications?
Q.7. List and explain briefly the steps followed to identify vulnerabilities in application security.
UNIT 5
SECURITY AUDITING
• State the importance of security audits
• List the various types of security audits
• Explain what risk-based auditing is
• Describe the process and tools of risk analysis
• Describe the risk management process
An information audit is important to evaluate the level of organizational security and immunity to various threats. An audit also saves the organization from spending unnecessary funds on damages that could otherwise occur due to an attack.
The scope of an audit depends upon:
• Site business plan
• Type of data assets to be protected
• Importance of data and relative priority
• Previous security incidents
• Time available
• Auditor's experience and expertise
Pre-audit planning starts with developing the scope and objectives of the audit. The audit personnel then coordinate with the organization on the level of support required, locations, duration and other related parameters. Thereafter, both parties agree on the pricing as per the scope of work. Once the pricing is mutually approved, documentation such as confidentiality, contracting and other required formal agreements is prepared. These documents state the audit objectives, scope and protocol.
The auditors conduct a preliminary review of the client's environment, mission, operations, policies and practices, and perform risk assessments of the client's environment, data and technology resources. The audit personnel then complete their research of the applicable regulations, industry standards, practices and issues. Further, they review current policies, controls, operations and practices, and hold an entrance meeting to review the engagement memo, request items from the client, schedule client resources and answer the client's questions. This also includes laying out a timeline and the specific methods to be used for the various activities.
The data gathering stage involves accumulating and verifying relevant and useful evidence to confirm the audit objectives and support the audit findings and recommendations. While gathering data, the auditor conducts interviews, observes procedures and practices, performs automated and manual tests, and carries out other important tasks as required. Activities that require field visits may be carried out at the client's worksite or at a remote location, depending on the nature of the audit.
Risk based auditing focuses on the analysis and management of risk. Auditors start the audit process by equipping
themselves with knowledge of the nature of the business of the entity and its business environment. Auditors arm
themselves with sufficient information about a business and its environment so as to assess risk before making a
decision of either performing a compliance test or a substantive test.
Compliance test: This is the process of gathering evidence for the purpose of testing an organization’s compliance
with control procedures and processes in relation to external rules, legal requirements, and regulations.
Substantive test: This is the process of gathering evidence in order to evaluate the integrity of individual transactions,
processes, data, and other information.
Audit risk can be categorised as:
• Inherent risk
• Control risk
• Detection risk
• Overall risk
Risk based auditing is generally composed of five broad stages. There is no hard and fast rule about what constitutes each stage, but the most important facets of those stages are covered in this section.
Five (5) stages of risk based audit:
1. Information gathering and planning stage
2. Mastery of internal control stage
3. Compliance test stage
4. Substantive test stage
5. Conclusion and production of report stage
Controls in information security are categorized based on the functionality such as preventive, detective, corrective,
deterrent, recovery and compensating. The categorization can also be done based on the plane of application such
as physical, administrative or technical. Let us understand these controls in brief.
• Preventive controls: Preventive controls are the first controls met by an adversary. They try to prevent security violations and enforce access control. Like other controls, they may be physical, administrative or technical; doors, security procedures and authentication requirements are examples of physical, administrative and technical preventive controls respectively.
• Detective controls: Detective controls are in place to detect security violations and alert the defenders. They come into play when preventive controls have failed or have been circumvented, and are no less crucial than preventive controls. Detective controls include cryptographic checksums, file integrity checkers, audit trails and logs, and similar mechanisms.
• Corrective controls: Corrective controls try to correct the situation after a security violation has occurred. Even though a violation has occurred, the data may still be secure, so it makes sense to try to fix the situation. Corrective controls vary widely, depending on the area being targeted, and they may be technical or administrative in nature.
• Deterrent controls: Deterrent controls are intended to discourage potential attackers. Examples of deterrent controls include notices of monitoring and logging as well as the visible practice of sound information security management.
• Recovery controls: Recovery controls are somewhat like corrective controls, but they are applied in more serious situations to recover from security violations and restore information and information processing resources. Recovery controls may include disaster recovery and business continuity mechanisms, backup systems and data, emergency key management arrangements and similar controls.
• Compensating controls: Compensating controls are intended to be alternative arrangements for other controls when the original controls have failed or cannot be used. When a second set of controls addresses the same threats, it acts as a compensating control.
By plane of application:
• Physical controls include doors, secure facilities, fire extinguishers, flood protection and air conditioning.
• Administrative controls are the organization's policies, procedures and guidelines intended to facilitate
information security.
• Technical controls are the various technical measures, such as firewalls, authentication systems, intrusion
detection systems and file encryption among others.
• Access Control Models are the abstract foundations upon which actual access control mechanisms and
systems are built. Access control is among the most important concepts in computer security. Access control
models define how computers enforce access of subjects (such as users, other computers, applications and
so on) to objects (such as computers, files, directories, applications, servers and devices).
Introduction
SimpleRisk is an excellent way to perform a basic risk assessment for an organization. The SimpleRisk tool includes a template for the CIS Critical Security Controls containing 20 yes/no questions, the answers to which provide valuable insight into an organization's risk posture.
Instruction
To begin the process, from the menu at the top of SimpleRisk, click on "Assessments" and then select "Critical Security Controls" under the Available Assessments. One can leave the "Asset Name" field blank, or enter the name of a specific application or business unit to which the answers will apply.
Below is a screenshot of the Critical Security Controls assessment.
From here, simply answer "Yes" or "No" to the 20 questions and click on "Submit". A risk will be created for each "No"
answer under the "Pending Risks" section found on the left. Click on "Add" to push the risks into SimpleRisk.
The first stage of this process is to identify potential information risks. Several factors or information sources feed into the identification step, including the following:
• Vulnerabilities are inherent weaknesses within facilities, technologies, processes (including information
risk management itself), people and relationships, some of which are not even recognised as such.
• Threats are actors (insiders and outsiders) and natural events that might cause incidents if they act on vulnerabilities, causing impacts.
• Assets are defined as the valuable information content and the physical components such as storage
vessels, computer hardware, etc.
• Impacts are harmful effects of incidents and calamities affecting assets, damaging the organisation and its business interests, and often those of third parties.
• Incidents can range from the minor, trivial or inconsequential to calamities, disasters and outright catastrophes, depending on the magnitude of the effect on the organization.
• Advisories, standards, etc. refer to relevant warnings and advice put out by myriad organisations such as
CERT, FBI, ISO/IEC, journalists, technology vendors, plus information risk and security professionals (social
network).
The evaluate risks stage involves considering and assessing all that information in order to determine the significance of the various risks, which in turn drives priorities for the next stage. An organisation's appetite or tolerance for risk is a major concern, reflecting corporate strategies and policies as well as broader cultural drivers and the personal attitudes of people engaged in risk management activities.
Treat risks means avoiding, mitigating, sharing and/ or accepting risks. This stage involves both deciding what to do,
and doing it (implementing risk treatment decisions).
Handling changes might seem obvious, but it is called out separately due to its importance. Information risks are constantly in flux, partly as a result of risk treatments and partly due to various other factors both within and outside the organisation.
Risk treatment
Risk treatment is the process of selecting and implementing measures to modify risk. Risk treatment measures can
include:
• Avoiding
• Optimising
• Transferring
• Retaining risk
Identification of options
Having identified and evaluated risks, the next step involves:
• Identification of alternative actions for managing these risks
• Evaluation and assessment of their results or impact
• Specification and implementation of treatment plans
Since identified risks may have varying impacts on an organisation, not all risks carry the prospect of loss or damage. Opportunities may also arise from the risk identification process, as types of risk with positive impacts or outcomes are identified.
Management or treatment options for risks expected to have positive outcome include:
• starting or continuing an activity likely to create or maintain the positive outcome.
• modifying the likelihood of risk to increase possible beneficial outcomes.
• trying to manipulate possible consequences to increase the expected gains.
• sharing risk with other parties that may contribute by providing additional resources, which could increase
the likelihood of opportunity or expected gains and
• retaining residual risk.
Management options for risks having negative outcomes look similar to those for risks with positive ones, although their interpretation and implications are completely different. Such options or alternatives might be to:
• avoid risk by deciding to stop, postpone, cancel, divert or continue with an activity that may be the cause
for that risk
• modify the likelihood of risk trying to reduce or eliminate the likelihood of negative outcomes
• try modifying the consequences in a way that will reduce losses
• share risk with other parties facing the same risk (insurance arrangements and organisational structures,
such as partnerships and joint ventures can be used to spread responsibility and liability)
• (of course, one should always keep in mind that if a risk is shared in whole or in part, the organisation acquires a new risk, i.e. the risk that the organisation to which the initial risk has been transferred may not manage it effectively)
• retain risk or its residual risks
In general, the cost of managing a risk needs to be compared with the benefits obtained or expected. It is important to consider all direct and indirect costs and benefits, whether tangible or intangible, and to measure them in financial or other terms.
More than one option can be considered and adopted either separately or in combination. An example is the effective
use of support contracts and specific risk treatments followed by appropriate insurance and other means of risk
financing.
In the event that available resources (e.g., budget) for risk treatment are not sufficient, the risk management action
plan should set the necessary priorities, and clearly identify the order in which individual risk treatment actions should
be implemented.
The risk management plan may include specific sections for particular functions, areas, projects, activities or processes. These sections may be separate plans, but in all cases they should be consistent with the organisation's risk management strategy (which includes specific RM policies per risk area or risk category).
The necessary awareness of and commitment to risk management at senior management levels throughout an
organisation is a critical mission and should receive close attention by:
• obtaining active ongoing support of an organisation’s directors and senior executives for risk management
and for development and implementation of risk management policy and plan
• appointing a senior manager to lead and sponsor the initiatives, and
• obtaining the involvement of all senior managers in the execution of the risk management plan.
The organisation’s board should define, document and approve its policy for managing risk, including objectives and
a statement of commitment to risk management. The policy may include:
• objectives and rationale for managing risk
• links between the policy and organisation’s strategic plans
• extent and types of risk an organisation will take and ways it will balance threats and opportunities
• processes to be used to manage risk
• accountabilities for managing particular risks
• details of the support and expertise available to assist those involved in managing risks
• a statement on how risk management performance will be measured and reported
• a commitment to the periodic review of the risk management system
• a statement of commitment to the policy by directors and organisation’s executive
The policy statement highlights an organization's internal and external environment, action taken by the board
members for risk management and the roles and accountability of the concerned individuals.
Ultimately, it's the responsibility of the directors and senior executives to ensure that the risks are well taken care of
to prevent any type of organizational damage.
This may be facilitated by:
• specifying those accountable for the management of particular risks, for implementing treatment strategies
and for maintenance of controls.
• establishing performance measurement and reporting processes, and
• ensuring appropriate levels of recognition, reward, approval and sanction.
These steps do not in themselves implement security mechanisms on the IT platforms; the noteworthy points are the actions to be performed to reduce the identified risks. The actions that are part of the technical implementation process are taken within the Information Security Management System (ISMS), which is outside the risk management process.
Last but not least, an important responsibility of the top management is to identify requirements and allocate necessary
resources for risk management. This should include people and skills, processes and procedures, information systems
and databases, money and other resources for specific risk treatment activities.
The risk management plan should also specify how risk management skills of managers and staff will be developed and
maintained. Integration of risk management process with other operational and product processes is fundamental.
It is important for an organisation's management and all other decision makers to be well informed about the nature and extent of the residual risk. For this purpose, residual risks should always be documented and subjected to regular monitoring and review procedures.
As per ISO 27001, residual literally means 'of the residue' or 'leftover'. So, residual risk is the risk remaining after all risk treatments have been applied.
Eliminated risks are probably no longer risks, but even then there remains the possibility that the risk analysis was mistaken (e.g., perhaps only a part of the risk was eliminated, or perhaps the risk materially changed since it was assessed and treated) or that the controls applied may not be as perfect as they appear (again, they may fail in action).
Avoided risks are probably no longer risks, but again there is a possibility that the risk analysis was wrong or that they were not completely avoided (e.g., in a large business, there may be small business units outside management's line of vision that still face the risk, or a business may later decide to get into risky activities it previously avoided).
Transferred risks are reduced but are still risks, since the transferral may not turn out well in practice (e.g., if an
insurance company declines a claim for some reason) and may not be adequate to completely negate the impacts
(e.g., the insurance 'excess' charge).
If a manager does not explicitly treat an identified risk, or arbitrarily accepts it without truly understanding it, they are
in effect saying, “I do not believe this risk is of concern”. This is the decision for which they can be held to account.
The overall point is that one should keep an eye on residual risks, review them from time to time, and where appropriate
improve/ change the treatments if the residuals are excessive.
Risk management is carried out as a holistic, organisation-wide activity that addresses risk from the strategic level to
the tactical level, ensuring that risk based decision making is integrated into every aspect of the organisation.
The following sections briefly describe each of the four risk management components.
The first component of risk management addresses how organisations frame risk or establish a risk context i.e.
describing the environment in which risk based decisions are made.
The key purpose of the risk framing component is to produce a risk management strategy that addresses how organizations intend to assess, respond to and monitor risk. This involves making explicit and transparent the risk perceptions that organizations use in making investment and operational decisions. The risk frame provides the groundwork for managing risk and defines the boundaries for risk based decisions within organizations.
Establishing a realistic and credible risk frame requires that organisations identify:
• risk assumptions (e.g., assumptions about threats, vulnerabilities, consequences/ impact, and likelihood of
occurrence that affect how risk is assessed, responded to, and monitored over time)
• risk constraints (e.g., constraints on risk assessment, response, and monitoring alternatives under
consideration)
• risk tolerance (e.g., levels of risk, types of risk, and degree of risk uncertainty that are acceptable), and
• priorities and trade-offs (e.g., relative importance of missions/ business functions, trade-offs among
different types of risk that organisations face, time frames in which organisations must address risk, and
any factors of uncertainty that organisations consider in risk responses).
The risk framing component and the associated risk management strategy also include any strategic-level decisions
on how risk to organizational operations and assets, individuals, other organisations, and the nation is to be managed
by senior leaders/ executives.
The second component of risk management addresses how organisations assess risk within the context of the organizational risk frame.
The purpose of the risk assessment component is to identify:
• threats to organisations (i.e. operations, assets, or individuals) or threats directed through organisations
against other organisations or the nation;
• vulnerabilities internal and external to organisations;
• harm (i.e. consequences/ impact) to organisations that may occur given the potential for threats exploiting
vulnerabilities; and
• likelihood that harm will occur. The end result is a determination of risk (i.e. the degree of harm and
likelihood of harm occurring).
To support the risk assessment component, organisations identify:
• tools, techniques, and methodologies that are used to assess risk;
• assumptions related to risk assessments;
• constraints that may affect risk assessments;
• roles and responsibilities;
• how risk assessment information is collected, processed, and communicated throughout organisations;
• how risk assessments are conducted within organisations;
• frequency of risk assessments; and
• how threat information is obtained (i.e. sources and methods).
The third component of risk management focuses on how organizations respond to risks once they have been determined, based on the results of risk assessments. The risk response component provides a consistent, organization-centric response to risks as per the organizational risk frame. This is achieved by:
• developing alternative courses of action for dealing with risks
• evaluating the alternative courses of action
• determining an appropriate course of action that is consistent with the risk tolerance
• implementing the risk response as per the selected course of action
In order to support the risk response component, organizations describe the various types of risk responses, such as accepting, avoiding, mitigating, sharing or transferring risk. Organizations also emphasize the tools, techniques and methodologies used for developing courses of action to provide the required response to a risk. They also focus on ways to evaluate the courses of action and to communicate the risk response across organizations and to external entities such as external service providers, supply chain partners, etc.
The next component of risk management is the way organizations monitor risk over time. The functions of the risk monitoring component are:
• verifying that planned risk response measures are implemented, and that the information security requirements derived from organizational missions/business functions, federal legislation, directives, regulations, policies, standards and guidelines are satisfied
• determining the effectiveness of the risk response measures after implementation
• identifying risk-impacting changes to organizational information systems and the environments in which those systems operate.
Organizations support the risk monitoring component by verifying compliance and determining the effectiveness of the risk response. This is carried out using various tools, techniques and methodologies that help determine the correctness of the risk response. There is also a need to ensure that risk mitigation measures are implemented correctly, operating as required and producing the desired outcome of keeping risks in check. Organizations should also monitor changes that could impact the effectiveness of risk responses.
Risk monitoring
Monitoring helps organizations maintain awareness of the risks being incurred, highlights the need to revisit other steps in risk management, and initiates activities that improve the process.
Organizations make use of various tools and techniques to increase awareness and to help senior leaders/executives develop a better understanding of risk that could harm organizational operations, assets and the individuals who are part of the work process.
Risk monitoring is done at the various tiers of risk management, keeping in mind the objectives and utility of the information being produced. Tier 1 includes ongoing threat assessments and the way changes in the threat landscape may affect activities taking place at Tier 2 and Tier 3, which feature enterprise architectures (with embedded security architectures) and organizational information systems.
Tier 2 activities, on the other hand, consist of analyzing new or current technologies that are either in use or being considered for the future, helping organizations identify exploitable weaknesses and deficiencies that could affect organizational growth.
Tier 3 activities emphasize information systems and include techniques such as automated monitoring of standard configuration settings. These activities help in managing information technology products, vulnerability scanning, and ongoing assessments of security controls.
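As a minimal sketch of the kind of automated configuration monitoring mentioned for Tier 3, the following Python snippet hashes a set of monitored configuration files and compares them against a recorded baseline. The file paths and baseline filename here are illustrative assumptions, not part of any standard.

import hashlib
import json
from pathlib import Path

# Hypothetical list of configuration files to monitor; adjust per environment.
MONITORED_FILES = ["/etc/ssh/sshd_config", "/etc/passwd"]
BASELINE_PATH = Path("config_baseline.json")

def file_sha256(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_baseline() -> None:
    """Hash every monitored file and store the result as the known-good baseline."""
    baseline = {path: file_sha256(path) for path in MONITORED_FILES}
    BASELINE_PATH.write_text(json.dumps(baseline, indent=2))

def check_drift() -> list:
    """Return the monitored files whose current hash differs from the baseline."""
    baseline = json.loads(BASELINE_PATH.read_text())
    return [p for p, h in baseline.items() if file_sha256(p) != h]

if __name__ == "__main__":
    if not BASELINE_PATH.exists():
        record_baseline()
    else:
        for changed in check_drift():
            print("Configuration drift detected:", changed)

In practice such checks would be scheduled and their output fed into the organization's monitoring and alerting pipeline.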
It is also crucial to ensure that the monitoring process is conducted smoothly. For this purpose, organizations should plan how to conduct the monitoring process (for example, automated versus physical approaches) and the frequency of monitoring activities; for example, monitoring may be triggered whenever deployed security controls change, or tied to critical items on the plan of action and milestones, etc.
IT/IS audit is the process of examining and evaluating the organization's information technology infrastructure,
policies and operations. These audits are aimed at determining whether the IT controls can protect corporate assets,
ensure data integrity and are well aligned with the set business goals. The responsibility of the audit personnel is to
not only examine the physical security controls but the overall business and financial controls as well.
Risk analysis involves conducting an accurate and thorough assessment of the potential risks as well as vulnerabilities
that could affect information systems. These risks can hamper the confidentiality, integrity and availability of
electronically protected information held by the entity. It is an effective tool for managing risks and in turn identifying
vulnerabilities and threats. Risk analysis is important for assessing the possible damages in order to determine the
areas for implementing security mechanisms.
Following are the steps that help in conducting risk analysis:
1. Identify the scope or the risky area to be analyzed
2. Gather data required for risk analysis
3. Identify threats and vulnerabilities and document them
4. Assess the existing security measures
5. Determine the probability of threat and its potential impact after occurrence
6. Determine the level of risk involved
7. Document the security mechanisms
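Steps 5 and 6 are often operationalized with a simple likelihood-times-impact scoring scheme. The following Python sketch is one illustrative way to do this; the threat names, 1-5 scales and level thresholds are hypothetical choices, not prescribed by any standard.

# Minimal risk scoring sketch: risk = likelihood x impact, both on 1-5 scales.
threats = [
    {"name": "Phishing against staff", "likelihood": 4, "impact": 3},
    {"name": "Data center flooding", "likelihood": 1, "impact": 5},
    {"name": "Unpatched web server", "likelihood": 3, "impact": 4},
]

def risk_level(score: int) -> str:
    """Map a numeric score (1-25) to a coarse risk level."""
    if score >= 15:
        return "High"
    if score >= 8:
        return "Medium"
    return "Low"

# Rank threats by score so the highest risks surface first.
for t in sorted(threats, key=lambda t: t["likelihood"] * t["impact"], reverse=True):
    score = t["likelihood"] * t["impact"]
    print(t["name"], "score:", score, "level:", risk_level(score))

The resulting ranking feeds into step 7, where the chosen security mechanisms for the highest-rated risks are documented.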
Risk assessment
Risk assessment is a term used to describe the overall process or method by which you:
• Identify hazards and risk factors that have the potential to cause harm (hazard identification).
• Analyze and evaluate the risk associated with each hazard (risk analysis and risk evaluation).
• Determine appropriate ways to eliminate the hazard, or to control the risk when the hazard cannot be eliminated (risk control).
A risk assessment is a thorough look at the workplace to identify things, situations, processes, etc. that may cause harm, particularly to people. After identification is made, you analyze and evaluate how likely and severe the risk is. When this determination is made, you can then decide what measures should be in place to effectively eliminate or control the harm from occurring.
• Risk assessment – the overall process of hazard identification, risk analysis and risk evaluation.
• Hazard identification – the process of finding, listing and characterizing hazards.
• Risk analysis – a process for comprehending the nature of hazards and determining the level of risk.
Notes:
(1) Risk analysis provides a basis for risk evaluation and decisions about risk control.
(2) Information can include current and historical data, theoretical analysis, informed opinions, and the concerns of
stakeholders.
(3) Risk analysis includes risk estimation.
Risk evaluation – the process of comparing an estimated risk against given risk criteria to determine the significance
of the risk.
Risk Mitigation
Risk mitigation defines the strategy for preparing to face a threat and protecting the data center from its effects. Similar to risk reduction, risk mitigation is about reducing the negative effects of threats on business continuity (BC). Such threats include cyber-attacks and physical or virtual damage to the data center.
An element of the risk management process, risk mitigation differs in the way it is implemented, depending upon the type of organization. The principle is to prepare the business for all potential risks by having a plan in place that weighs the impact of each risk. Risk mitigation is crucial in areas where the threat cannot be avoided fully. The steps taken in the process are aligned towards reducing the adverse effects, including potentially long-term effects. Basically, mitigation deals more with the aftermath of a disaster than with planning to avoid it.
Prioritization is an important aspect of risk mitigation which involves accepting the risk in one area to protect another.
One should focus on the key areas whose security cannot be compromised at any cost thereby protecting the
resources required for business continuity and sacrificing the ones which are less mission critical. This takes place at
times when dealing with the threat is beyond the control of the security experts.
In an ideal scenario, the organization is well prepared for any type of risk. With a well-defined risk mitigation plan in place, organizations can keep their businesses running with some level of damage and can work towards recovery in the future.
Risk reassessment
Risk reassessment, as the name suggests, deals with identifying new types of risk and reassessing current ones. The method also helps in closing risks that are outdated and cannot do any harm in the near future.
This project management tool is used to control risks by creating a schedule for risk reassessment. It involves
determining the kinds of risks which are present in any project thereby helping the project managers in identifying
and controlling the risks. The number of repetitions that are performed in the reassessment is dependent upon the
project progression defined by its objectives.
The actions that are usually taken in the reassessment process are identifying risks, analysis of the impact, developing
a risk response plan, identifying risk triggers which in turn would help in developing a contingency plan.
To stay updated with the security threats and the way they affect the businesses, it is a good practice to maintain an
updated risk register.
SUMMARY
• The audit can be divided into two groups, i.e. internal audit and external audit.
• External assessments define and disclose any implementation and compliance deficiencies based on policies and principles such as COBIT (Control Objectives for Information and Related Technologies).
• Internal audits require a consultation mechanism where the auditor may not only audit the program, but may
also offer recommendations in a limited way.
• Risk-based auditing focuses on the analysis and management of risk. It involves compliance test and substantive
test.
• Audit risk is categorized into inherent risk, control risk, detection risk and overall risk.
• In Discretionary Access Control (DAC) model, the owner (creator) of information (file or directory) has the
discretion to decide about and set access control restrictions on the object in question, which may, for example,
be a file or a directory.
• In Mandatory Access Control (MAC) model, the users have little or no discretion as to what access permissions
they can set on their information.
• In the Role-Based Access Control (RBAC), rights and permissions are assigned to roles instead of individual
users. The added layer of abstraction allows for simpler and more versatile management and compliance of
access controls.
• Risk control provides an organization with the capacity to retain risk awareness, to highlight the need to review
other steps in the risk management process, and to implement process improvement activities as necessary.
• Risk assessment, risk recognition and risk analysis are essential elements of risk management.
• Risk reduction is a technique to plan for and lessen the effects of risks to a data center.
KNOWLEDGE CHECK
7. Internal Audits G. One of the best ways to determine the security of an organisation's information without incurring the cost and other associated damages of a security incident - 5
8. Risk Control H. Strategy to prepare for and lessen the effects of threats faced by a data center - 4
Q.2. Explain briefly the various steps involved in the Risk Management Process.
UNIT 6
CYBER FORENSICS
• State the importance of Cyber Forensics
• State the various types of Cyber Forensics
• Describe first response processes
• Explain what is forensic duplication
• Describe the process and tools for forensic duplication
• Describe the process and tools for disk forensics
• Describe mobile and CDR forensics
Cyber forensics is a discipline which brings together computer science and elements of law for the collection and analysis of data from computer systems, wireless communications, networks and storage devices in a way that is admissible as evidence in a court of law.
Forensic science can be defined as an application of science to law. The prime goal of any forensic investigation is to
determine related evidence and the evidential value of the crime scene.
Cyber forensics is also known as 'Computer and Network forensics' or 'Digital forensics'. It is the science of obtaining, preserving and documenting evidence from digital electronic storage devices such as mobile phones, computers, digital cameras, PDAs and various memory storage devices. Everything must be designed to preserve the probative value of the evidence and assure its admissibility in a legal proceeding.
Cyber forensics also involves the collection, identification, analysis and examination of data while preserving the integrity of the information and maintaining the chain of custody of the data. Data is distinct pieces of digital information formatted in a specific pattern, and organizations have a huge inflow of data from multiple sources. For example, data can be stored or transferred by networking equipment, standard computer systems, personal digital assistants (PDAs), computing peripherals, consumer electronic devices and various types of media, among other sources.
As criminals expand the use of technology in their illegal activities, this new field of science is becoming increasingly important. The techniques used in computer forensics are not as mature as mainstream forensic techniques used by law enforcement, such as ballistics, fingerprinting, blood typing and DNA testing. The immaturity of this field is attributable to the fast-paced changes in computer technology and to the multidisciplinary nature of the subject, which involves complicated linkages between business management, the legal system, information technology and law enforcement.
Some types of cyber forensics that cyber security professionals must know about are as follows:
• Disk Forensics
• Memory Forensics
• Network Forensics
• Mobile Forensics
• Internet Forensics
Disk Forensics
Disk forensics is the science of extracting forensic information from digital storage media like hard disks, USB devices, Firewire devices, CDs, DVDs, flash drives, floppy disks, etc. Hard drives are used for permanent storage. All the data and files that are created or downloaded are saved with a name in a folder on the disk drive. All these files can be accessed by disk forensics; however, that is just the tip of the iceberg.
A forensics expert knows about, and has the sophisticated tools to access, a complex network of files that ordinary users may not know much about. Some of these are as follows:
• Files created in temporary storage without the user's knowledge, including the content of deleted files
• Backups of mobile devices and cell phones that happen automatically
• Temporary storage area in memory and on disk that holds the most recently downloaded Web pages
• Metadata embedded by many applications like Microsoft Word, Excel and PowerPoint into the documents they create, which can identify documents, authors or the systems that created them, as well as how large they are and when they were last printed, last accessed, last modified and created, etc.
• Logs created by the operating system containing information such as the devices that were plugged into a
system, files copied, any other storage being used, cloud storage used, webmail accounts and other locations
and applications, etc.
• Files stored on the hard disk or solid state drive without the user knowing it, etc.
Various storage devices that can be sources of digital evidence are hard disks with IDE/SATA/SCSI interfaces, CD,
DVD, Floppy disk, Mobiles, PDAs, flash cards, SIM, USB/ Firewire disks, Magnetic Tapes, Zip drives, Jazz drives, etc.
For disk forensics, these storage media are seized from the crime scene and a hash value of the storage media is computed using an appropriate cyber forensics tool. A hash value is a unique signature generated by a mathematical hashing algorithm based on the content of the storage media. After computing the hash value, the storage media is securely sealed and taken for further processing.
An important part of disk forensics is creating an exact copy of the original evidence, in order to protect the original. The original storage media is write-protected and a bit-stream copy is made to ensure that the complete data is copied onto the destination media.
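To illustrate the hashing step described above, the following Python sketch computes a SHA-256 digest of a media image in fixed-size chunks and compares the original against its forensic copy. The file paths are hypothetical; real investigations rely on validated forensic tools rather than ad hoc scripts.

import hashlib

def media_hash(path: str, chunk_size: int = 1024 * 1024) -> str:
    """Compute the SHA-256 digest of a (possibly very large) image file in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths to the original image and its forensic working copy.
original = media_hash("evidence/original_disk.img")
duplicate = media_hash("evidence/working_copy.img")

# Matching digests support the claim that the copy is bit-for-bit identical.
print("Original :", original)
print("Duplicate:", duplicate)
print("Match    :", original == duplicate)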
Memory Forensics:
Many cyber attacks or malicious behaviours do not leave any indicators on the computer's hard drive. In such cases, the memory (RAM, or Random Access Memory) has to be accessed and analyzed. This memory contains volatile data, i.e. data which resides in a computer's short-term memory storage, including browsing history, chat messages, clipboard contents, etc. This data is held temporarily while the computer is running; when the computer is powered off, it is lost.
Memory forensics provides insights about the runtime system activity, this could include the following:
• Open network connections
• Recently executed commands or processes
• Account credentials
• Chat messages
• Encryption keys
• Running processes
• Injected code fragments
• Internet history which is non-cacheable, etc.
All programs are loaded into memory in order to execute, and hence can be identified through memory forensics. As attack methods become more and more sophisticated, memory forensics is in high demand among security professionals today. Many network-based security solutions like firewalls and antivirus tools are unable to detect malware written directly into a computer's physical memory or RAM. Security teams use memory forensics tools to protect invaluable business intelligence and data from stealthy attacks such as fileless, in-memory malware or RAM scrapers.
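Full memory forensics is performed on captured RAM images with dedicated tooling. As a much lighter illustration of the kinds of volatile runtime data listed above, the following Python sketch uses the third-party psutil library (an assumed convenience, installable via pip) to snapshot running processes and open network connections on a live system.

# Requires the third-party psutil library: pip install psutil
import datetime
import psutil

# Snapshot running processes, one of the volatile artefacts listed above.
print("Running processes:")
for proc in psutil.process_iter(["pid", "name", "username"]):
    info = proc.info
    print(" ", info["pid"], info["username"], info["name"])

# Snapshot open network connections (may require elevated privileges).
print("Open network connections:")
for conn in psutil.net_connections(kind="inet"):
    if conn.raddr:  # only connections with a remote endpoint
        print(" ", conn.laddr.ip, conn.laddr.port, "->", conn.raddr.ip, conn.raddr.port, conn.status)

# System boot time helps anchor a timeline for the volatile evidence.
print("Booted:", datetime.datetime.fromtimestamp(psutil.boot_time()))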
Network Forensics
Network forensics is a sub-branch of cyber forensics. It records, captures and analyzes network events in order to discover the source of security attacks or other problems.
A number of techniques and devices are used to intercept data, collect all data that moves through a network, identify selected data packets for further investigation, and so on. Computers with high storage volumes and rapid processing speeds are required for accurate forensic analysis of a network.
Forensic analysts search for data that points towards human communication, manipulation of files and the use of certain keywords. They track communications and establish timelines based on network events logged by network control systems, track down the source of hacking attacks and other security-related incidents, collect information on anomalies and network artefacts, and uncover incidents of unauthorised network access.
Network forensics systems can be one of two kinds:
1. A brute force method of "catch it as you can" which involves capturing all network traffic for analysis.
2. A more intelligent "stop, look, listen" method which involves analysing each data packet flowing across the network and capturing only what is deemed suspicious and worthy of extra analysis.
Network forensics is used to dig out flaws in IT infrastructure and networks, thereby giving information security officers and IT administrators the scope to shore up their defences and prevent future cyber attacks.
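As a small illustration of the "catch it as you can" approach, the following Python sketch uses the third-party Scapy library (an assumption, installable via pip) to capture a batch of packets and write them to a pcap file for later analysis. Packet capture generally requires administrator privileges and, in any real engagement, proper legal authorization.

# Requires the third-party Scapy library: pip install scapy
from scapy.all import sniff, wrpcap

def summarize(packet):
    # Print a one-line summary of each captured packet as it arrives.
    print(packet.summary())

# "Catch it as you can": capture 100 TCP packets, then persist them to a
# pcap file that standard analysis tools such as Wireshark can open.
packets = sniff(filter="tcp", count=100, prn=summarize)
wrpcap("evidence_capture.pcap", packets)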
Mobile Forensics
Mobile forensics is used to recover digital evidence or data from mobile devices such as cell phones, smartphones, PDA devices, GPS devices and tablet computers. Mobile devices are used to save various types of personal information such as photos, contacts, notes and SMS messages. Smartphones may also contain video, email, web browsing and location information, social media messages, contacts, etc. Other information that can be accessed is as follows:
• Incoming, outgoing and missed call history
• Internet browsing history, content, cookies, search history, analytics information
• To-do lists, notes, calendar entries, ringtones
• Documents, spreadsheets, presentation files and other user-created data
• Passwords, passcodes, swipe codes, user account credentials
• Historical geo-location data, cell phone tower related location data, Wi-Fi connection information
• User dictionary content
• Data from various installed apps
• System files, usage logs, error messages
• Deleted data from all of the above, etc.
A wide variety of tools exists to extract evidence from mobile devices, and no single tool or method can acquire all the evidence from all devices, as cell phone technologies vary and change rapidly.
Internet forensics
Internet forensics consists of the extraction, analysis and identification of evidence related to the user’s online
activities. Internet-related evidence includes artifacts such as log files, history files, cookies, cached content, as well as
any remnants of information left in the computer’s volatile memory (RAM).
Criminals use the Internet as a means of communication, for example by making calls over it or by publishing offensive material on a web site. Some such activities are as follows:
• Spam, or unsolicited emails many of which are sent with a goal to obtain financial details of the user.
• Phishing or frauds involving fake web sites that look like those of banks or credit card companies and attempts
to entice victims by appearing to come from a well-known, legitimate business like Citibank or eBay.
• Computer viruses, worms and spyware, etc.
Internet forensics examines the data to attempt to find the source of attacks such as these.
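Browser history databases are a typical artifact in Internet forensics. As one hedged example, Chromium-based browsers keep history in an SQLite file (commonly named History) with a urls table; the query below follows that commonly documented schema, but column names and the timestamp epoch should be verified for the browser version at hand. The sketch reads a forensic copy of the database, never the live original, using Python's built-in sqlite3 module.

import datetime
import sqlite3

# Hypothetical path to a forensic copy of the browser's History SQLite file.
DB_COPY = "evidence/History_copy"

def webkit_to_datetime(us: int) -> datetime.datetime:
    """Chromium stores timestamps as microseconds since 1601-01-01 (the WebKit epoch)."""
    return datetime.datetime(1601, 1, 1) + datetime.timedelta(microseconds=us)

conn = sqlite3.connect(DB_COPY)
rows = conn.execute(
    "SELECT url, title, visit_count, last_visit_time "
    "FROM urls ORDER BY last_visit_time DESC LIMIT 20"
)
for url, title, visits, last_visit in rows:
    print(webkit_to_datetime(last_visit), "visits:", visits, title, url)
conn.close()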
The main outcome of the forensic investigation process is the collection of potential digital evidence in the form of media, followed by identifying and extracting the data, analyzing it and transforming it into evidence, whether for law enforcement or for an organization's internal use. The following steps are part of the process of computer forensics:
• Readiness: Readiness means being fully prepared for the task to be undertaken. It involves obtaining authorization to search and seize. Other activities include regular testing, appropriate training, validation of software and familiarity with legislation.
• Evaluation: Making appropriate judgments based on the case to be investigated. This involves receiving instructions, clarifying those instructions if they are unclear or ambiguous, carrying out risk analysis and allocating roles and resources. Risk analysis for law enforcement includes an assessment of the likelihood of a physical threat on entering a suspect's property and how best to counter it.
• Collection: The gathering is carried out on-site at the scene of the crime. Activities involved include identifying the devices which store evidence, securing and documenting the scene, carrying out interviews or meetings with personnel who may hold information relevant to the examination, and bagging, tagging and safely transporting the equipment and electronic evidence to a forensic lab.
• Analysis: The analysis involves extracting the relevant information obtained and applying it to the current situation. The information should be accurate, thorough, impartial, recorded, repeatable and completed within the scheduled time and in proportion to the resources allocated.
• Presentation: This includes preparing a thorough summary of the evidence in question, taking into account the conclusions and the events involved, structuring the material appropriately and presenting the relevant details that the investigator would like to see reviewed. The report must always be written with the end reader in mind, and the examiner should be able to explain the work in a way which is comprehensible to the respective persons.
• Review: This involves carrying out an assessment of the whole procedure with the intention of instituting change in the future if necessary. It is mainly aimed at raising the level of quality by making future examinations more efficient and time-effective. Examples of review include analysis of what went wrong, what went well and future improvements. Feedback to the instructing party is necessary.
A computer forensic investigator blends expertise in computer science with forensic ability to recover information from computers and storage devices. Investigators are responsible for helping law enforcement agencies with cybercrimes and for collecting evidence. Computer forensic investigators usually hold a bachelor's degree in computer science, often with a background in criminal justice.
The role of the Investigator is to recover data like documents, photos and e-mails from computer hard drives and
other data storage devices, such as zip and flash drives, that have been deleted, damaged or otherwise manipulated.
Investigators often work on cases involving offenses committed on the Internet ('cybercrime') and examine computers
that may have been involved in other types of crime to find evidence of illegal activity. As an Information Security
professional, a computer forensic investigator may also use their expertise in a corporate setting to protect computers
from infiltration, determine how a computer was broken into or recover lost files.
• Preservation: It is important for the forensic team to preserve the integrity of the original evidence. The
original evidence should not be modified or damaged. Hence an image or a copy of the original evidence
must be made first for the analysis to be performed on.
• Identification: Before starting the investigation, the forensic team must identify the evidence and its location. For example, evidence may be contained in hard disks, removable media or log files. They also need to identify the type of evidence and the best method to extract data from it.
• Extraction: The forensic expert must extract data from the evidence after it has been identified. As volatile data may be lost at any moment, this data must be retrieved by the forensic investigator from the copy made of the original evidence. This derived data needs to be measured and evaluated against the original evidence.
• Interpretation: Interpreting the data that has been extracted is crucial to the investigation. The analysis and
inspection of the evidence must be interpreted in a lucid manner.
• Documentation: From the beginning of the investigation until the end, the forensic team must maintain
documentation relating to the evidence. The report includes the nature of the chain of custody and records
related to the examination of facts.
Forensic analysis is necessary when there is a belief that electronic data may have been deleted, misappropriated, or otherwise handled in an inappropriate manner.
The purpose of the forensic examination is to obtain adequate knowledge about the data or equipment, its use (or misuse) and the responsibility of the persons involved, and then to create as accurate an image as possible of what occurred, when it occurred and how it occurred. In other words, forensic research makes it possible to go further, so as to make the case stronger.
• Preparation: In preparation, the team develops the formal incident response capability: they create an incident response process defining the organizational structure with roles and responsibilities; they create procedures with clear instructions to respond to an incident; the right people with the correct skill set are selected; the conditions for reporting an incident are defined; and the team defines what they are going to report and to whom they are going to communicate. This step is crucial to ensure that response actions are known and coordinated. Good preparation can reduce potential damage by ensuring a fast and successful response.
What to do before the incident: Planning leads to successful incident response. During this phase, the organization needs to prepare both the organisation itself and the Computer Security Incident Response Team (CSIRT) members. Incident response is reactive in nature, so pre-incident planning covers the preventive steps that the CSIRT commits to take to safeguard the property of the organization.
Incident response plan: An incident response plan sets out the step-by-step process to be followed in the event of an incident.
• Identification: This step is where the team verifies whether an incident has occurred and whether the observed events, indicators and deviations from normal operations suggest a malicious act or security incident. The protection mechanisms in place can facilitate the team's identification work. The incident handler team will use their experience to look at the signs and indicators. The observation might occur at the network, host or system level. This is where the team leverages the alerts and logs from routers, firewalls, IDS, SIEM, AV gateways, operating systems, network flows and more.
• Containment: This stage consists of limiting the damage and stopping the offenders/attackers. It is where the team decides which strategy it will use to contain the incident, based on processes and procedures. It is where the team interacts with the business owners and decides whether to shut down the system, disconnect it from the network, or continue operations and monitor the activity. All of this depends on the scope, magnitude and impact of the incident.
• Eradication: After the successful containment of the incident, the next steps involve eliminating the cause of the incident. In the case of a virus incident, for example, this means eradicating the virus. It is in this step that the team should determine how the attack was initially executed and apply the necessary measures to ensure it does not happen again.
• Recovery: In this phase, restoring a backup or reimaging of a system takes place. After successful restoration, it is important to monitor the system for a certain time period. Monitoring is important because the team wants to identify any remaining signs of compromise that evaded detection.
• Lessons Learned: Follow-up activity is crucial. It is where the team can reflect and document what happened; where they can learn what failed and what worked; where the team identifies improvements for the incident handling process and procedures; and where they write the final report.
Chain of custody
Chain of Custody is an essential first step in cyber forensics investigations. Chain of Custody is essentially documenting how the items acquired for investigation have been protected, transported and checked, so that they are shown to have been preserved in an appropriate manner. Chain of custody demonstrates 'trust' to the courts and to a client that the media was not tampered with. It is an audit trail of 'who did what' on a single piece of evidence and 'why it happened'.
Digital evidence is an integral element in the identification of motive, mode and process in computer-related crimes, and it is critical in many internal investigations. Digital evidence is typically acquired from a myriad of devices, including a vast number of IoT devices that store user information and data 'spores', digital video and images (which may store important metadata and obfuscated/hidden information), audio evidence, and other data stored on flash drives, hard disk drives and other physical media.
The process for digital forensics follows a structured path and comprises four primary steps:
• Collection: This is the identification, marking, documentation and retrieval of data from possible relevant sources, maintaining the quality of the collected data and evidence. This is where the cycle of the Chain of Custody begins; the Chain of Custody is maintained through all four steps.
• Examination: A forensically sound method, both manual and automated, is used to process the collected data. DF examiners may extract data of particular interest, which will be used in testimony that supports or refutes an assertion. Data protection is important, and safe methods of handling digital forensics investigations are discussed later. During this step, not only are the results of the examination recorded and noted, but the Chain of Custody documentation is also completed to note the disposition of any collected evidence used in the examination and how it was used.
• Analysis: The analysis is a result of the examination. We use legally justifiable methods and techniques
to derive useful information to address questions posed in a particular case. Again, the Chain of Custody
reporting ‘may’ be involved in this step.
• Reporting: This is the examination and review report. Reporting usually involves a statement about the Chain of Custody, an overview of the usage of the various instruments, a description of the analysis of the different identified data sources, problems and vulnerabilities, and suggestions for additional forensic measures.
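As a toy illustration of chain-of-custody record keeping, the following Python sketch appends timestamped, hash-stamped entries describing who did what to an evidence item. The field names and JSON log format are hypothetical; real cases use controlled, signed documentation rather than a plain file.

import datetime
import hashlib
import json

LOG_FILE = "chain_of_custody.json"  # hypothetical log location

def evidence_hash(path: str) -> str:
    """SHA-256 of the evidence image, recorded with every custody entry."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def log_custody_event(evidence_path: str, handler: str, action: str, reason: str) -> None:
    """Append a 'who did what and why' entry for a single piece of evidence."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "evidence": evidence_path,
        "sha256": evidence_hash(evidence_path),
        "handler": handler,
        "action": action,
        "reason": reason,
    }
    try:
        with open(LOG_FILE) as f:
            log = json.load(f)
    except FileNotFoundError:
        log = []
    log.append(entry)
    with open(LOG_FILE, "w") as f:
        json.dump(log, f, indent=2)

log_custody_event("evidence/original_disk.img", "A. Examiner",
                  "created working copy", "preserve original for court")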
• Authorizations
An investigator must seek permission to conduct a search at the site of a crime from the appropriate authority.
A search warrant may have to be secured, which is a written order issued by a judge that directs a law
enforcement officer to search for a certain piece of evidence at a specific location.
• Resources
First Response Toolkit: The forensic specialist has to create a toolkit before a cybercrime event happens and
prior to any potential evidence collection. Once a crime is reported, someone should immediately report to
the site and should not have to waste any time in gathering materials.
• Notebook computers:
Licensed software
Bootable CDs
External hard drives
Network cables
• Software tools:
DIBS Mobile Forensic Workstation
Access Data’s Ultimate Toolkit
Teel Technologies SIM Tools
• Hardware tools:
Paraben forensic hardware
Digital Intelligence forensic hardware
Tableau Hardware Accelerator
WiebeTech forensic hardware tools
Logicube forensic hardware tools
• Volatility
Volatile data refers to data in a live system that is lost after a computer is shut down or with the passage of time. Many activities performed on the device can also result in the loss of volatile data. The suggested order in which volatile data should generally be collected, from first to last, is:
1. Network connections
2. Login sessions
3. Contents of memory
4. Running processes
5. Open files
6. Network configuration
7. Operating system time.
• Amount of Effort Required
The amount of effort required to acquire different data sources may vary widely. The effort involves not only
the time spent by the forensic professionals and others within the organization (including legal advisors) but
also the cost of equipment and services (e.g., outside experts).
The political, technical, legal and business factors that surround the incident should be considered when formulating the response strategy. In selecting the strategy, the objectives and suggestions of the group or individual with responsibility, on which the final solution depends, should be taken into account.
• Considering the totality of the circumstances: Based on the circumstances of the computer security incident,
the response strategies will vary. While deciding how many resources are needed to investigate an incident,
whether to create a forensic duplication of relevant systems, whether to make a criminal referral, whether to
pursue civil litigation and other aspects of response strategy, the following factors are to be considered:
A. How critical are the affected systems?
B. How sensitive is the compromised or stolen information?
C. Who are the potential perpetrators?
D. Is the incident known to the public?
E. What is the level of unauthorized access attained by the attacker?
F. What is the attacker's apparent skill?
G. How much system and user downtime is involved?
H. What is the overall monetary loss?
• Considering appropriate responses: The organisation needs to arrive at a viable response strategy, armed with the circumstances of the attack and its capacity to respond. Common situations, together with response strategies and potential outcomes, show how the response strategy determines the path from an incident to an outcome.
• Taking action: An organization may need to discipline an employee or to respond to a malicious act by an
outsider. The action can be initiated with a criminal referral, a civil complaint, or administrative reprimand or
privilege revocation as per what the incident warrants.
• Legal action: It is common to investigate a computer security incident that is actionable, i.e. one that could lead to a lawsuit or court proceeding. The two prospective legal choices are to file a civil complaint or to notify law enforcement. When deciding whether to include law enforcement in the incident response, the following should be considered:
A. Does the damage/cost of the incident merit a criminal action?
B. Is it likely that the outcome desired by the organisation will be achieved by civil or criminal action? Can the
damages be recovered or can one receive restitution from the offending party?
C. Was the cause of the incident reasonably established? (Law enforcement officers are not computer security
professionals.)
D. Does the organisation have proper documentation and an organized report that will be conducive to an effective investigation?
E. Can substantial investigative leads be provided to law enforcement officials for them to act on?
F. Does the organisation know and have a working relationship (prior liaison) with the local or federal law
enforcement officers?
G. Will the organization be ready to risk public exposure?
H. Do the past performances of the individual merit any legal action?
I. How will the law enforcement involvement impact business operations?
• Administrative action: More common than initiating civil or criminal actions is disciplining or terminating employees via administrative measures. Some administrative actions that can be implemented to discipline internal employees include:
A. Letter of reproof.
B. Immediate discharge.
C. Leave of absence for a specific length of time.
A Mirror Image is created with hardware that does a bit-by-bit copy from one hard drive to another.
Here are some methods and tools used to create forensic duplicates:
• Logical Backup: A logical backup copies the folders and files of a logical volume. It does not capture other
data that may be present on the media, such as deleted files or residual data stored in slack space.
• Bit-Stream Imaging: Also known as disk imaging or cloning, bit-stream imaging produces a bit-for-bit replica
of the original media, including free space and slack space. Bit-stream images require more storage space
and take longer to create than logical backups.
• Write Blocker:
A write-blocker is a hardware- or software-based tool that prevents a computer from writing to computer storage
media connected to it. Hardware write-blockers are physically connected between the computer and the storage
media being processed to prevent any writes to that media. A wide variety of write-blocking devices is
available, based on the type of interface, e.g., SATA, IDE, USB, etc.
It is very important to ensure that the evidence collected is not changed in any way. Hence an exact copy of
the data residing on the evidence hard disk (or other electronic digital storage device) is made, which the
forensic specialist may examine and perform the various analyses on. The reason for this measure is that, if
a search were conducted on the original evidence data, it would create both the actual and the perceived
problem that the original had been corrupted or altered by the person performing the analysis, making it
vulnerable to a disqualifying objection in court.
It is equally important that the image copy be exact, as all conclusions will be based on the data from the
copy; these must in the end be the same conclusions that would arise from the data on the original.
To create forensic duplicates of the data from the evidence efficiently, the following activities need to be
performed:
• Making the media forensically sterile
• Copying the image exactly
• Using a hash to save time
Image creation can take several hours to execute. It is a simple task but needs to be practiced to get it right.
Traditionally, the image of a hard drive is created by removing the drive from the impacted system and creating a
forensic image using a write blocker. But there are times when this method is not practical. Other ways of making
a forensic image of the hard drive are live acquisition methods, boot disk acquisition, or using remote/enterprise-
grade tools. A live system acquisition might be useful in cases where the affected drive is encrypted, there is a RAID
across multiple drives, or it is not feasible to power down the machine. However, this method will only grab the
logical part of the hard drive, i.e. partitions such as FAT, NTFS, EXT2, etc.
The other method is to use a bootable forensic disk such as Helix. For that, the system must be rebooted using the CD/
USB. This allows one to create a bit-by-bit image of the physical drive; the evidence on the drive is not altered during
the boot process, and the contents of the hard drive can be captured into an image file. This image file can then be used
across different analysis tools and is easier to back up.
To understand more, a quick look at the hands-on scenario to create a forensic image using a bootable disk method
from a compromised or suspicious system using dd is recommended.
'dd' is a simple and flexible tool that is launched from the command line and is available for Windows and Linux.
In this case, dd is being run on a Linux system. 'dd' copies chunks of raw data from an input source to an output
destination; it does not know anything about partitions or file systems. dd reads from the input source specified by
the if= option in blocks (512 bytes of data by default), and writes the data to the output destination specified by
the of= option.
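To make the workflow concrete, a minimal sketch of the imaging and verification commands follows, assuming the
suspect drive appears as /dev/sda and a prepared collection drive is mounted at /mnt/evidence (device names and
paths are illustrative and will differ per system):

    # Fingerprint the original drive before imaging
    md5sum /dev/sda > /mnt/evidence/sda.md5

    # Create the bit-by-bit image; conv=sync,noerror pads unreadable
    # sectors and continues instead of aborting
    dd if=/dev/sda of=/mnt/evidence/sda.dd bs=512 conv=sync,noerror

    # Fingerprint the image and confirm both hashes match
    md5sum /mnt/evidence/sda.dd

Matching fingerprints before and after imaging support the claim in court that the duplicate is exact.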
Regardless of whether an image file or a restored image is used in the examination, the data should be accessed
read-only to ensure that the data being examined is not modified and that it provides consistent results on
successive runs.
Write-blockers can be used during this process to prevent any writing to the restored image. After the backup
has been restored (if needed), the analyst reviews the collected data and evaluates the related files and data,
finding all files, including deleted files, remnants of data in slack space and free space, and hidden files. The
analyst would then need to extract the data from any or all of the files, which could be complicated by measures
such as encryption and password protection.
Hardware Mirroring
Hardware mirroring is achieved with the use of hardware duplicators that take a hard drive and replicate it onto another
hard disk. Because the two hard drives are generally not identical, the mirroring process can disturb data placement
and hence must alter the metadata that the OS uses to access sectors, such as partition tables and master boot
records.
Normally, this mirroring hardware is used to install the same disk image on many machines or to back up drives
before repairing them. However, a handful of companies produce forensic hardware units that capture
a suspect drive. These make copies of the original partitioning and boot sectors, and they verify the accuracy of
the capturing process by comparing cryptographically secure hashes of the original and of the mirror.
Their big advantages are speed and safety. For example, the Logicube places the capturing disk (the
destination) within the enclosure and links the suspect drive outside, preventing the forensic examiner's most critical
mistake: writing in the wrong direction and losing the evidence. Some companies even offer hardware duplication
models which cannot, under any circumstances, write to the suspect drive. Some models have special hardware that
allows the capturing of laptop disks through a PCMCIA or CardBus card, or the capturing of a disk drive in situ through
the USB port. (FireWire uses too high a level of abstraction to allow forensic capturing of disk drives.)
Fig 6.2: Hardware-based forensic disk acquisition tools (a non-exhaustive list).
There are many memory acquisition tools. Important tools are listed below:
Investigation: The investigation stage includes defining the who, what, when, where, how and why surrounding
an incident. One of the finest means of streamlining a technical investigation is to divide the evidence and its
collection into three categories:
Host-based evidence: This data is usually collected from Windows or Unix machines, or from the device actually
involved in the incident.
Network-based evidence: This type of evidence is usually collected from devices not directly involved in an
incident, such as routers, IDSs, network monitors, or other network nodes.
Other evidence: Testimonial data that contributes to the case, such as motive, intent, or other non-digital
evidence, falls into this category.
• Discover, access and copy data from hidden, encrypted or damaged disks
• Uncompress files and read disk images
• Recover data and metadata from files using forensic toolkits
• Identify malicious behavior against OSs using security software, such as file integrity checkers, host
IDSs, etc.
• Perform string searches and pattern matching using Boolean search tools, fuzzy logic, synonyms and
definitions, stemming and other search methods
• Assess and recover network traffic data with the aim of determining what parts of the organization's
network systems have been affected, and how
• Obtain relevant details from ISPs and cloud service providers
• Reveal (unlock) digital photos that have been altered to conceal a location or a person's identity
• Send the computer or original media for physical evidence inspection after data has been extracted
• If equipment is destroyed, disassemble and reconstruct the machine to retrieve missing data
• Carefully document the extraction process followed as well as the data retrieved
• Identify and minimize any safety risks linked to working with forensic items, in line with health and safety
procedures
• Take measures to preserve physical evidence such as fingerprints, DNA, etc. while handling
the evidence
Locating the files is the first step in the examination. A disk image can capture several gigabytes of slack space and
free space, which can include thousands of fragments of data and information. Extracting data manually from unused
space can be a time-consuming and difficult process since it requires knowledge of the underlying file system format.
Introduction to Registry
The Registry is the nucleus of the Windows operating system. It is a hierarchical database that stores the
configuration settings and options required to run applications and commands.
The following information is stored in the registry:
• When a user installs new software, an application, a hardware device or a system driver, the initial
configuration settings are stored in the registry as keys and values.
• Modifications made to such settings during the use of the program or hardware are recorded in the
registry.
During run-time, program and device components retrieve their latest registry configuration to continue their
service as per the current user's settings. The registry also acts as an index to the kernel, revealing machine
run-time information.
This information, however, is dynamic and exists only while Windows is running. Because of its nesting pattern,
the Registry resembles the tree view of Windows Explorer.
Depending on the version of the OS, four to six 'hives' or main registry keys are displayed, each containing a
nesting of keys, subkeys and values, with a set of supporting files containing backups of its data. Each is named
according to its handle (Handle Key = HK) as specified in the Win32 API and is used for settings governance.
The registry database can be accessed via the Registry Editor, RegEdit. This displays one hierarchical list, but the
Windows Registry is not one large database file. The primary data structure is the hive, of which several exist.
Each hive is defined by a root key which gives access to all the subkeys of its tree, up to 512 levels deep.
There are four to six predefined root keys that are used to access all other keys or subkeys (depending on the
Windows version used). In other words, the binary tree is traversed downwards from its base. New keys are
introduced via these root keys, and all existing keys must be identified via the root keys. One downside to this
strategy is that a problem with a higher key will prevent access to the keys below it; in practice, this rarely occurs.
The following table lists the root keys with their abbreviations:

Abbreviation   Root Key Name             Data stored for
HKCC           HKEY_CURRENT_CONFIG       Current hardware configuration
HKCR           HKEY_CLASSES_ROOT         Classes (types) of documents and registered applications
HKCU           HKEY_CURRENT_USER         Currently logged-on user
HKLM           HKEY_LOCAL_MACHINE        System hardware, software and security
HKPD           HKEY_PERFORMANCE_DATA     Performance data
HKU            HKEY_USERS                User profiles
Programs gain access to the registry by using the Registry Application Programming Interface (API), which provides a
standard set of functions for the Windows sub-systems and application programs to access and update the Registry.
This is how the Registry Editor (RegEdit) and other utilities work.
When a program uses the API to access the registry, the Windows Object Manager returns a handle for the object
identified by a key. That is why the "HKEY" in the root key names means "handle key".
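As a small illustration of the Registry API in action, the sketch below uses Python's built-in winreg module (a thin
wrapper over the Win32 registry functions) to open a key under a root-key handle and read one value; the key path
and value name are common on Windows installations but are shown here only as an example:

    # Minimal sketch: read a value via the Registry API (Windows only)
    import winreg

    # OpenKey returns a handle to the subkey, which is why root keys
    # are named "HKEY_..." (handle keys)
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                        r"SOFTWARE\Microsoft\Windows NT\CurrentVersion") as key:
        product, value_type = winreg.QueryValueEx(key, "ProductName")
        print("Installed OS:", product)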
The following is an example registry key structure as displayed by the Registry Editor.
Each of the trees under Computer is a key. The HKEY_LOCAL_MACHINE key has the following subkeys: HARDWARE,
SAM, SECURITY, SOFTWARE, and SYSTEM. Each of these keys, in turn, has subkeys. For example, the HARDWARE key
has the subkeys DESCRIPTION, DEVICEMAP and RESOURCEMAP; the DEVICEMAP key has several subkeys, including
VIDEO.
All of a user's registry settings (HKEY_CURRENT_USER) are stored in the registry hive file NTUSER.DAT in the user's profile
directory. The NTUSER.DAT file is a copy of the data stored in the registry hive for a specific user. There are three or more
hive files with the name NTUSER.DAT: the first relates to the network service account, the second to the
local service account, and the rest to user accounts (each user account has its own NTUSER.DAT hive file).
Extracting Data
Fortunately, several tools are available that can automate the process of extracting data from unused space and
saving it to data files, as well as recovering deleted files and files within a recycle bin. Analysts can also display the
contents of slack space with hex editors or special slack recovery tools.
The following processes are among those that an analyst should be able to perform with a variety of tools:
Extracting data from slack space
When an operating system writes a file to disk, it allocates a certain number of sectors. The allocated sectors and
their location on the disk are recorded in a directory table for later access. When the file is deleted, the space
originally allocated to it is simply marked as unallocated; the actual data remains on the disk. Deleted files in this
state are easily recoverable by many disk utilities. When a new file is written to this same space, the OS may
allocate it to the same sectors; however, the new content will not completely fill them. The part of a sector that
still retains content of the deleted file is called slack space.
Slack space is a significant source of proof in a forensic investigation. Slack space may also contain sensitive details
about a defendant that a lawyer may use in a courtroom. For example, if a user deleted files that filled an entire
cluster of the hard drive, and then saved new files that filled only half of the cluster, the latter half would not
actually be empty: it could contain remaining information from the deleted files. This information can be extracted
by forensic investigators using special computer forensic tools.
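As a rough illustration of this kind of extraction, the following Python sketch scans a raw dump for runs of printable
ASCII, much as the Unix strings utility does; the input file name and minimum run length are illustrative assumptions:

    import re

    MIN_LEN = 6  # ignore printable runs shorter than this

    # Read the raw dump (for very large images, read in chunks instead)
    with open("slack_dump.bin", "rb") as f:
        raw = f.read()

    # Find runs of printable ASCII bytes (space through tilde)
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % MIN_LEN, raw):
        print(match.start(), match.group().decode("ascii"))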
Uncompressing Files
Compressed files can contain useful data files as well as other compressed files; therefore, it is necessary for the
analyst to locate compressed files and extract them. Uncompressing files should be done early in the forensic process
to ensure that the contents of compressed files are included in searches and other actions. Analysts should note,
however, that compressed files may contain harmful content, such as compression bombs: files that have been
compressed repeatedly, usually dozens or hundreds of times. Compression bombs may cause screening tools to fail
or consume significant resources; they may also contain malware and other malicious payloads. Although there is
no sure way of detecting a compression bomb before uncompressing a file, there are ways of minimizing its effects.
The examination system should, for example, run up-to-date antivirus software and be stand-alone, so that any
impact is limited to that system. Additionally, an image of the examination system should be created so that the
system can be restored if necessary.
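One way to limit the damage from a compression bomb, sketched below under the assumption that the archive is a
ZIP file, is to compare the sizes the archive declares before extracting anything; the file names and thresholds are
illustrative, and nested archives would still need recursive checks:

    import zipfile

    MAX_RATIO = 100      # refuse expansion ratios beyond this
    MAX_TOTAL = 1 << 30  # refuse archives claiming > 1 GiB of output

    with zipfile.ZipFile("suspect.zip") as zf:
        total = sum(info.file_size for info in zf.infolist())
        packed = sum(info.compress_size for info in zf.infolist()) or 1
        if total > MAX_TOTAL or total / packed > MAX_RATIO:
            print("Possible compression bomb - do not auto-extract")
        else:
            zf.extractall("extracted/")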
Graphically Displaying Directory Structures: This method makes it simpler and quicker for analysts to gather general
information about media content, such as the type of installed software and the possible technical skill of the user(s)
who generated the data. Some products can view directory structures in Windows, Linux, and UNIX, while other
products are unique to Macintosh directory structures.
For example, if part of a search string resided in one cluster and the remainder resided in a non-adjacent
cluster, a match might not be found for the string. Similarly, if part of a search string resided in one cluster and
the remainder resided in another cluster that was not part of the same file as the first cluster, some search tools
may report a false match.
A file header could also be located in a cluster separate from the actual file data.
A simple histogram showing the distribution of ASCII values as a percentage of total characters in a file is another
useful technique for determining the form of data in a file. A spike at 'space', 'a', and 'e', for example, usually
indicates a text file, while a flat distribution across the histogram indicates a compressed file.
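A minimal sketch of this histogram heuristic in Python follows; the input file name is illustrative, and real tools
would apply more robust statistics than a top-five listing:

    from collections import Counter

    with open("unknown.bin", "rb") as f:
        counts = Counter(f.read())  # tally of each byte value 0-255

    total = sum(counts.values())
    for byte, n in counts.most_common(5):
        label = chr(byte) if 32 <= byte < 127 else "."
        print(f"byte 0x{byte:02x} ({label}): {100 * n / total:.1f}%")

A text file typically shows clear spikes (space, 'e', 'a'), while compressed or encrypted data prints five nearly
equal percentages.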
Handling Encryption
Encryption often presents challenges. Users might encrypt individual files, folders, volumes, or partitions so that
others cannot access their contents without a decryption key or passphrase. The encryption might be performed by
the OS or a third-party program. Although it is relatively easy to identify an encrypted file, it is usually not so easy to
decrypt it.
The forensic specialist might be able to identify the encryption method as follows:
• By examining the file header
• By identifying encryption programs installed on the system
• By finding encryption keys (which are often stored on other media)
If the method of encryption is known, the analyst can better assess the feasibility of decrypting the file. In certain
cases, decryption of files is not possible because the encryption is strong and the authentication (e.g., passphrase)
needed to decrypt is not available. Although the use of encrypted data can be detected very quickly, it is more
difficult to detect the use of steganography.
Handling Steganography
Steganography, also known as steg, is the embedding of data within other data. Examples of steganography are
digital watermarks, and the hiding of words and details within images. Some techniques a forensic specialist can use
to locate stegged data include:
• Discovering several versions of the same image
• Identifying the presence of grayscale images
• Searching metadata and registries
• Using histograms
• Searching for recognized steganography applications by using hash sets
Once it has been established for certain that stegged data exists, it may be possible to extract the embedded
data by determining which software generated it and then discovering the stego key, or by using brute-force and
cryptographic attacks to determine the password.
However, such efforts are often unsuccessful and can be extremely time-consuming, particularly if the forensic
specialist does not find known steganography software on the media being reviewed. In addition, some software
programs can analyze files and estimate the probability that they were altered with steganography.
After the examination and data extraction have been completed, the next step is to perform analysis on the extracted
data. Many tools are available that can help in the analysis of different types of data.
While using these tools, or performing manual reviews of data, the forensic specialist should be aware of the value
of system time and file times. Knowing when an event occurred, a file was generated or changed, or an e-mail
was sent can be crucial to a forensic investigation; for example, this information can be used to recreate a timeline
of activities. Although this may seem like a simple task, it is often complicated by unintentional or intentional
discrepancies in time settings among systems. Knowing the time, date and time zone settings of a computer whose
data will be analysed can greatly assist the forensic specialist.
Timeframe analysis
Timeframe analysis can be helpful in assessing when events occurred on a computer device, and can be used
as part of associating usage of the machine with the individual(s) present at the time the events occurred. Two
methods that can be used are (see the sketch after this list):
• Checking the time and date stamps found in the file system metadata (e.g. last modified, last accessed,
created, change of status) to link the files of interest to the timeframes relevant to the investigation.
An example of this review would be using the last-modified date and time to determine when the contents
of a file were last changed.
• Analysing the program and event logs that might be available. These can include error logs, installation logs,
communication logs, security logs, etc. For example, a security log review can show when a user name/
password combination was used to log in to the system.
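A small sketch of the first method, collecting file system timestamps into a sorted timeline, is shown below in
Python; the mount point is illustrative, and the evidence should of course be mounted read-only so that walking it
does not itself update access times:

    import os
    from datetime import datetime, timezone

    events = []
    for root, _, files in os.walk("evidence_mount/"):
        for name in files:
            path = os.path.join(root, name)
            st = os.stat(path)
            events.append((st.st_mtime, "modified", path))
            events.append((st.st_atime, "accessed", path))

    # Print the combined events in chronological order
    for ts, kind, path in sorted(events):
        print(datetime.fromtimestamp(ts, timezone.utc), kind, path)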
The forensic specialists can use special tools that can generate forensic timelines based on the event data. Such tools
typically give analysts a graphical interface for viewing and analysing sequences of events. A common feature of these
tools is to permit analysts to group related events into meta-events. This helps analysts to get a big picture view of
events.
In many cases, forensic analysis involves not only data from files, but also the data from other sources, such as the OS
state, network traffic, or applications.
• The file name itself may have evidential meaning and may indicate the contents of the file (application
and file analysis).
• Hidden data can suggest an intentional effort to escape detection (hidden data analysis).
• If the passwords used to gain access to encrypted and password-protected files are recovered, the
passwords themselves can suggest possession or ownership (hidden data analysis).
• The contents of a file can signify ownership or possession by providing user-specific details.
Analyzing OS Data
The following items describe the most common data sources used in network forensics:
• IDS Software. IDS data is often the starting point for investigating suspicious activity. Not only do IDSs
typically attempt to detect malicious network traffic at all TCP/IP layers, but they also record many data fields
(and often raw packets) that can be useful in validating events and correlating them with other data sources.
Nonetheless, as noted above, IDS software does generate false positives, so IDS alerts should be
verified. The extent to which this can be done depends on the amount of data recorded relating to the
alert, the information available to the analyst, and the characteristics of the signature or anomaly detection
method that triggered the alert.
• SEM Software. Ideally, SEM software can be incredibly useful for forensics because it can automatically correlate
events among multiple data sources, then extract the relevant information and present it to the user. However,
since SEM software works by ingesting data from many other sources, the reliability of SEM depends on which
data sources are fed into it, how accurate each data source is, and how well the software can normalize data
and correlate events.
• NFAT Software. NFAT software is designed primarily to aid the analysis of network traffic, so it is useful if
an incident of concern has been captured. NFAT software typically provides features that support analysis, such
as traffic reconstruction and visualization.
• Firewalls, Routers, Proxy Servers, and Remote Access Servers. By itself, data from these sources is usually of
little value. Analyzing the data over time can indicate overall patterns, such as an increase in blocked connection
attempts. However, since these sources usually record little detail about each event, the data offers little insight
into the nature of the events. Additionally, many events could be recorded every day, and the sheer volume of
data could be daunting. The primary value of the data lies in correlating events recorded by other sources. For
example, if a host is compromised and a network IDS sensor has detected the attack, querying the firewall logs
for events involving the apparent attacking IP address can confirm that the attack penetrated the network and
may indicate other hosts that the attacker attempted to compromise. In addition, the address mappings (e.g., NAT)
performed by these devices are important for network forensics, because the apparent IP address of an attacker
or victim may actually have been used by hundreds or thousands of hosts. Fortunately, analysts can typically
check the logs of the device performing the mapping to determine which internal address was in use.
• DHCP Servers. DHCP servers can usually be configured to log each IP address assignment and the associated
MAC address, along with a timestamp. This knowledge can assist analysts in determining which host performed
an action using a particular IP address (see the sketch after this list). Analysts should be mindful of the
possibility that perpetrators inside an organization's networks could have falsified their MAC or IP addresses,
a practice known as spoofing.
• Packet Sniffers. Of all the network traffic data sources, packet sniffers can gather the most information about
network activity. However, sniffers can capture vast quantities of benign data, millions or billions of packets,
and usually provide no indication as to which packets may contain malicious activity. In most cases, packet
sniffers are used to gather more details on events that other devices or applications have flagged as potentially
harmful. Some organizations record most or all packets for some period, so that when an incident occurs, the
raw network data is available for examination and analysis. Packet sniffer data is best reviewed with a protocol
analyzer, which interprets the data for the analyst based on knowledge of protocol specifications and particular
implementations.
• Network Monitoring. Network monitoring software is helpful in identifying significant deviations from normal
traffic flows, such as those caused by DDoS attacks, during which hundreds or thousands of systems launch
simultaneous attacks against hosts or networks. Network monitoring software can document the impact of these
attacks on network bandwidth and availability, as well as provide information about the apparent targets.
Traffic flow data can also be helpful in investigating suspicious activity identified by other sources; for example,
it might indicate whether a particular communications pattern occurred in the preceding days or weeks.
• ISP Records. Information from an ISP is primarily of value in tracing an attack back to its source, particularly when
the attack uses spoofed IP addresses.
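To illustrate the DHCP correlation mentioned above, a small Python sketch follows; the lease records and their
(start time, IP, MAC) layout are a hypothetical simplification of what a real DHCP log would provide:

    from datetime import datetime

    # Hypothetical lease records: (lease_start, ip, mac)
    leases = [
        (datetime(2019, 4, 14, 9, 0),   "10.0.0.23", "aa:bb:cc:dd:ee:01"),
        (datetime(2019, 4, 14, 12, 30), "10.0.0.23", "aa:bb:cc:dd:ee:02"),
    ]

    def mac_at(ip, when):
        # Latest lease for this IP starting at or before the event time
        hits = [l for l in leases if l[1] == ip and l[0] <= when]
        return max(hits)[2] if hits else None

    print(mac_at("10.0.0.23", datetime(2019, 4, 14, 13, 0)))  # ...ee:02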
Various tools and techniques can be used to support the examination process. Some of the tools have been discussed
in the previous unit regarding extraction of data files. These can also be used for analysing collected data files.
In addition, security applications, such as file integrity checkers and host IDSs, can be very helpful in identifying
malicious activity against OSs. For instance, file integrity checkers can be used to compute the message digests of
OS files and compare them against databases of known message digests to determine whether any files have been
compromised. If intrusion detection software is installed on the computer, it might contain logs that indicate the
actions performed against the OS.
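A minimal sketch of one simple variant of such an integrity check in Python is given below; the baseline file format
(one "path,digest" pair per line, recorded while the system was still trusted) is an assumption made for illustration,
not the format of any particular product:

    import hashlib

    def digest(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Baseline of known-good digests
    with open("baseline.txt") as f:
        baseline = dict(line.strip().split(",", 1) for line in f if line.strip())

    for path, known in baseline.items():
        if digest(path) != known:
            print("MODIFIED:", path)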
Another issue that analysts face is the examination of swap files and RAM dumps, which are large binary files
containing unstructured data. Hex editors can be used to open these files and examine their contents; however, on
large files, manually trying to locate intelligible data with a hex editor can be a time-consuming process. Filtering
tools automate the process of examining swap and RAM dump files by identifying text patterns and numerical values
that might represent phone numbers, names of people, email addresses, Web addresses and other types of critical
information.
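The following Python sketch gives the flavor of what such filtering tools do, scanning raw dump bytes for e-mail-
address- and URL-shaped patterns; the patterns are deliberately simplified and the dump file name is illustrative:

    import re

    PATTERNS = {
        "email": rb"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}",
        "url":   rb"https?://[^\s\x00]{4,}",
    }

    with open("memory.dmp", "rb") as f:
        raw = f.read()

    for label, pattern in PATTERNS.items():
        for m in re.finditer(pattern, raw):
            print(label, m.group().decode("ascii", "replace"))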
Analysts often want to gather additional information about a program running on a system, such as the process's
purpose and manufacturer. After obtaining a list of the processes currently running on a system, analysts can look
up each process name to obtain such information. However, users might change the names of programs to
conceal their functions, such as giving a Trojan program the name of a benign system executable.
Therefore, process name lookups should be performed only after verifying the identity of the process's files by
computing and comparing their message digests. Similar lookups can be performed on library files, such as DLLs on
Windows systems, to determine which libraries are loaded and what their typical purposes are.
The forensic specialist may collect many different types of OS data, including multiple file systems. Trying to sift
through each type of data to find relevant information can be a time-intensive process, hence it has been generally
found useful to identify a few data sources to review initially, and then find other likely sources of important
information based on that review. In addition, in many cases, analysis can involve data from other types of sources,
such as network traffic or applications.
For analysing network traffic data, the forensic specialist must have the following:
• An in-depth understanding of the tools
• Reasonably comprehensive knowledge of networking principles, common network and application protocols,
network and application security products, and network-based threats and attack methods
• Knowledge of the organisation's environment, such as the network architecture and the IP addresses used
by critical assets (e.g., firewalls, publicly accessible servers)
• Knowledge of the applications and OSs used by the organization, and of the information supporting them
• An understanding of the organisation's normal computing baseline, such as typical patterns of usage on
systems and networks across the enterprise
• An understanding of each of the network traffic data sources, as well as access to supporting materials, such
as intrusion detection signature documentation
• An understanding of the characteristics and relative value of each data source, so that the relevant data can
be located quickly
Most of these have been covered in other Units of this handbook. Let’s look at the Analysis Tools in greater detail.
Analysis Tools
Several open-source and commercial tools exist for computer forensics investigation. A typical forensic analysis
includes a manual review of materials on the media, e.g., reviewing the Windows registry for suspect information,
discovering and cracking passwords, keyword searches for topics related to the crime, and extracting email and
pictures for review.
Operating systems, such as Linux® and Windows®, generate log files to capture system events. Windows, for
example, provides application, security and system event logs.
AccessData Registry Viewer (Proprietary, Windows): Allows an investigator to view the contents of Windows
operating system registries.
Alien Registry Viewer (Proprietary, Windows): Similar to the RegEdit application included with Windows but,
unlike RegEdit, it works with standalone registry files. While RegEdit shows the contents of the system registry,
Alien Registry Viewer works with registry files copied from other computers.
Internet Evidence Finder (IEF) Standard Edition (Proprietary, Windows): Recovers Internet-related evidence from a
hard drive, live RAM, or files. Features include:
• Social networking artefacts
• Instant messenger chat history
• Webmail
• Full Web browser artefacts
• P2P file-sharing applications
Pasco v1.0 (Free, Windows): An Internet Explorer activity forensic analysis tool.
IEHistoryView (Free, Windows): Reads all information from the history file on a computer and displays the list of all
URLs visited in the last few days. It also allows selecting one or more URL addresses and removing them from the
history file or saving them into a text, HTML or XML file. In addition, it can display the visited-URL list of other
user profiles on the computer, and even access the visited-URL list on a remote computer, as long as there is
permission to access the history folder.
SkypeLogView v1.51 (Free, Windows): Reads the log files created by the Skype application and displays the details
of incoming/outgoing calls, chat messages and file transfers made by the specified Skype account.
MozillaHistoryView v1.52 (Free, Windows): A small utility that reads the history data file (history.dat) of
Firefox/Mozilla/Netscape Web browsers and displays the list of all Web pages visited in the last days. For each
visited Web page, the following information is displayed: URL, first visit date, last visit date, visit counter,
referrer, title, and host name.
MyLastSearch (Free, Windows): Scans the cache and history files of four Web browsers (IE, Firefox, Opera, and
Chrome) and locates all search queries made with the most popular search engines (Google, Yahoo and MSN) and
with popular social networking sites (Twitter, Facebook, MySpace).
CyberCheck Suite (Proprietary, Windows): CyberCheck is a cyber forensics tool for data recovery and analysis of
digital evidence. It uses images created by TrueBack, EnCase or raw images, and offers Indian language support.
It recovers data from deleted files and from re-formatted or re-partitioned storage media, as well as from
unallocated clusters, lost clusters, file/partition/disk/MBR slack, and swap files. It supports FAT12/16/32, NTFS,
EXT2FS, EXT3FS and UFS.
Volatility Framework (Open source, Windows/Linux): The Volatility Framework is a completely open collection of
tools, implemented in Python, that extracts digital artefacts from volatile memory (RAM) samples. Volatility comes
with several standard plug-ins, which use various techniques to extract artefacts from memory samples, including:
• Running processes
• Open network sockets
• Open network connections
• DLLs loaded for each process
• Open files for each process
Volatility also supports extracting artefacts from Windows hibernation files, Windows crash dump files, etc.
Belkasoft Evidence Centre 2014 (Proprietary, Windows): Belkasoft Evidence Centre makes it easy for an investigator
to search, analyze, store and share digital evidence found on a hard drive or in the computer's volatile memory. The
toolkit extracts digital evidence from multiple sources by analyzing hard drives, volatile memory dumps, etc.
pdgmail (Open source, Windows/Linux): A Python script to extract Gmail artefacts from memory images. Works with
any memory image.
Teel Tech Chip Kit for Mobile Phone Chip-Off Forensics (Proprietary, Hardware): A mobile forensic hardware tool that
performs an advanced digital data extraction and analysis technique involving physically removing flash memory
chip(s) from a subject device and then acquiring the raw data using specialized equipment.
Oxygen Forensic for Mobiles (Proprietary): A mobile forensic tool which helps examine data from mobile devices,
cloud services, drones and IoT devices.
Paraben's Device Seizure Kit (Proprietary): Paraben's Device Seizure Field Kit is a completely portable hand-held
forensic solution. It helps forensic experts perform comprehensive digital forensic analysis of over 2,200 models of
cell phones, PDAs and GPS devices anywhere, anytime.
C-DAC's Mobile Forensic Tool (Proprietary): C-DAC's MobileCheck is a digital forensics solution for acquisition and
analysis of mobile phones, smartphones and personal digital assistants (PDAs).
Mobilyze (Proprietary): This tool helps to acquire, view and preserve the data held on any iOS or Android device;
over 4 billion such smart devices exist on the planet.
i9 CDR Analysis Software from iCube Solution (Proprietary): A tool used for analysis of mobile, IMEI and tower Call
Detail Records (CDRs) by trainees during cybercrime investigation training.
Visualization tools
These tools present security event data in a graphical format. They are most commonly used to visually represent
network traffic flows, and can be very helpful in troubleshooting operational problems and in detecting misuse.
For example, attackers might use covert channels, using protocols in unintended ways, to communicate information
secretly (e.g., setting certain values in network protocol headers or application payloads).
The use of covert channels is usually difficult to detect, but one useful approach is to recognize deviations from
expected network traffic flows. Visualization tools are also included in NFAT software.
• Some visualization software can perform traffic reconstruction: using timestamp and sequencing data fields,
the software can determine the sequence of events and graphically display how packets traversed the
organization's networks.
• Some visualization tools can also be used to display other types of security event data. For example, the forensic
specialist could import intrusion detection records into a visualization tool, which would then display the data
according to a range of different characteristics, such as the source or destination IP address or port. One could
then suppress the display of known benign activity so that only unknown events are shown.
Importing data into the tool and viewing it is typically fairly straightforward, but learning how to use the tool
effectively to reduce large datasets to a few events of interest can take considerable effort. Traffic reconstruction
can also be performed by protocol analyzers. While these tools typically lack visualization capabilities, they can
convert individual packets into data streams and provide a sequential context for activities.
Repeated or Typical Calls Pattern: This pattern has no standard procedure because it depends mostly on the type
of case and suspect. It is applied during analysis to identify typical call patterns. For example, it is possible that the
suspect receives a call from a particular number and, after receiving that call, immediately makes calls to a
particular number or group of numbers; or, in some cases, the pattern might be the reverse, where the suspect
initiates the sequence, i.e. the suspect first calls a particular number and then immediately receives a call from a
particular number or group of numbers. The IO (investigating officer) can thus keep a watch on the numbers
surfaced by this pattern to get further leads.
Identifying Groups: Whenever a crime takes place at a certain place and there is no clue about the suspects, the
tower data from all the towers located at the crime scene is collected. Once the tower data is collected, the first
necessary step is to identify groups within that data. Usually in such cases the crime is committed by a group of
persons, and owing to the vast amount of tower data there are often links or numbers that remain hidden and are
not easily noticeable; such numbers frequently turn out to belong to the criminal or to have a direct link to the
criminal. When groups are formed, all other irrelevant numbers are eliminated and only the numbers from the
identified groups and their related numbers remain, enabling the IO to identify hidden numbers or links that might
not have been found otherwise.
Group identification is done by studying the repeated calls between certain numbers over the given period. Once
the groups have been identified, the IO can eliminate all other irrelevant numbers from the tower data, concentrate
the investigation on the numbers from the identified groups, and apply the previously mentioned techniques to
these numbers to get closer to the criminal.
As tower data comes in huge volumes (records may vary from several thousand to several crore), it is essential for
the IO to be patient and spend a good amount of time identifying the groups from the tower data.
Linkage: Once the groups are identified, it is essential to determine the hierarchical structure of the groups; this is
done to infer who the boss and the operatives are. To build the hierarchical structure, the successive links between
the numbers of a group must be identified based on the frequency and timing of the calls between the numbers.
A simple illustration of this group-and-linkage idea appears below.
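The sketch below illustrates the idea in Python: count calls between number pairs, keep the pairs that call each
other repeatedly, and merge linked pairs into candidate groups. The record layout, numbers and threshold are all
hypothetical:

    from collections import Counter

    # Hypothetical CDR excerpts as (caller, callee) pairs
    calls = [("98001", "98002"), ("98001", "98002"), ("98002", "98003"),
             ("98002", "98003"), ("98004", "98005")]

    THRESHOLD = 2  # minimum repeated calls to treat a pair as linked
    pair_counts = Counter(tuple(sorted(c)) for c in calls)
    links = [p for p, n in pair_counts.items() if n >= THRESHOLD]

    # Merge linked pairs into groups (connected components)
    groups = []
    for a, b in links:
        touching = [g for g in groups if a in g or b in g]
        others = [g for g in groups if g not in touching]
        groups = others + [set().union({a, b}, *touching)]

    print(groups)  # [{'98001', '98002', '98003'}] - one candidate group

Call frequency and timing between the members of each group can then be examined to infer the hierarchy, as
described under Linkage.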
Legalities
If the investigating officer needs to produce the analyzed CDRs as evidence in a court of law, he can do so
under Section 65B (Admissibility of electronic records) of The Indian Evidence Act, 1872.
65B. Admissibility of electronic records:
1. Notwithstanding anything contained in this Act, any information contained in an electronic record which is printed
on paper, or stored, recorded or copied in optical or magnetic media produced by a computer (hereinafter referred
to as the computer output) shall be deemed to be a document, if the conditions stated in this section are satisfied
in relation to the information and the computer in question.
2. The conditions referred to in sub-section (1) in respect of a computer output shall be the following:
• The computer output containing the information was produced by the computer during the period over which
the computer was used regularly to store or process information for the purposes of any activities regularly
carried on by the person having lawful control over the use of the computer during that period.
• Throughout the material part of that period, the computer was operating properly or, if not, any period during
which it was not operating properly or was out of operation did not affect the electronic record or the
accuracy of its contents.
• The information contained in the electronic record reproduces or is derived from information fed into the
computer in the ordinary course of those activities.
3. Where, over any period, the function of storing or processing information for the purposes of any activities
regularly carried on over that period, as mentioned in clause (a) of sub-section (2), was regularly performed by
computers, whether:
• By a combination of computers operating over that period, or
• By different computers operating in succession over that period, or
• By different combinations of computers operating in succession over that period, or
• In any other manner involving the successive operation over that period, in whatever order, of one or more
computers and one or more combinations of computers,
all the computers used for that purpose during that period shall be treated for the purposes of this section as
constituting a single computer, and references in this section to a computer shall be construed accordingly.
4. In any proceedings where it is desired to give a statement in evidence by virtue of this section, a certificate
doing any of the following things:
• Identifying the electronic record containing the statement and describing the manner in which it was
produced;
• Giving particulars of any device involved in the production of the electronic record, as may be appropriate,
for the purpose of showing that the electronic record was produced by a computer;
• Dealing with any of the matters to which the conditions in sub-section (2) relate,
and purporting to be signed by a person occupying a responsible official position in relation to the operation of
the relevant device or the management of the relevant activities (whichever is appropriate) shall be evidence of
any matter stated in the certificate; and for the purposes of this sub-section it shall be sufficient for a matter to
be stated to the best of the knowledge and belief of the person stating it.
5. For the purposes of this section:
• Information shall be taken to be supplied to a computer if it is supplied in any appropriate form, whether it is
supplied directly or (with or without human intervention) by means of any appropriate equipment.
• Where, in the course of activities carried on by any official, information is supplied with a view to its being
stored or processed for the purposes of those activities by a computer operated otherwise than in the course
of those activities, that information, if duly supplied to that computer, shall be taken to be supplied to it in
the course of those activities.
• A computer output shall be taken to have been produced by a computer, whether it was produced directly or
(with or without human intervention) by means of any appropriate equipment.
Explanation: For the purposes of this section, any reference to information being derived from other information is
a reference to its being derived by calculation, comparison or any other process.
Case Studies
Listed below are two case studies that demonstrate the effectiveness of CDR analysis in crime-busting.
Case Brief 1: A businessman belonging to a reputed family was murdered on 10th October 2008 around 7 p.m., while
returning to his house in his Maruti-800 car after closing his shop. His house was on the outskirts of the city, and the
distance between his house and shop was around 6 km. His wallet and other expensive items were recovered from the
car, eliminating the possibility of murder for financial gain. There were no eye witnesses and no evidence from the
site of the incident. Despite hard efforts by the police, the murder remained a mystery for almost two weeks.
Methodology Followed: Police believed that the murder was planned and that the gang members had divided
themselves into two groups: one group was present close to the shop and another at the site of the incident.
The group close to the shop followed the victim and relayed his exact movements to the other group present at the
site of the incident.
Tower data of the two sites, i.e. the shop (site A, for reference) and the site of the incident (site B), was requested;
the records numbered in lakhs. The obtained data was imported into the C5 CDR Analyzer. Next, the data was
processed as per the following pattern:
1. Frequently called numbers between the two sites A and B in the specified time frame were identified.
2. Numbers moving from site A to site B in the specified time frame were filtered.
3. Altogether, eighteen numbers were filtered out of these lakhs of records; they were present at site A, frequently
called numbers at site B, and showed movement to site B around the time of the murder.
4. The search was further refined by narrowing down the specified time and by call duration, and a final list of
8 suspects was prepared.
5. In the meantime, the proclaimed offenders' database was checked to see if there were any history-sheeters
specializing in contract killing. Three persons were identified.
6. Ownership details of the eight suspects were requested from the service providers; this eliminated six numbers
from the list, leaving the police with two numbers whose furnished details were fraudulent.
7. From the tower data, the IMEIs of the two numbers were identified, and it was further checked whether any
other SIM cards were being used in these IMEIs (instruments).
8. Two other numbers were identified which had been used in these IMEIs.
9. One of the numbers matched that of one of the contract killers identified from the list of proclaimed
offenders.
10. This clearly suggested that the suspect had procured a new SIM under an unidentified name and used it while
committing the crime.
11. Field investigation was carried out and the physical presence of the suspect in that area around the said time
was confirmed. Subsequently, the police arrested him, and further interrogation revealed the identity of his
accomplice.
12. The murder mystery was solved; the suspect admitted to the offence and informed the police about the
involvement of another person. Further interrogation also yielded key information which proved handy in
solving other important cases.
Case Brief 2: A person named Raja was murdered and burnt in a remote area of Kotter taluka of Tucker district
on 14/04/2009 between 1 and 4 p.m., when he had gone to Husker in his auto-rickshaw to buy a new SIM card for
himself. On completion of the crime scene investigation, the investigating team did not find any eye-witnesses or
any substantial evidence; the only thing they could find was an empty Vodafone SIM card box recovered from the
auto-rickshaw belonging to Raja. Further investigation revealed that the victim had enmity with a person known as
Gopal Gowda, as did a friend known as Nagaraj; since the two of them had a common enemy, the victim and Nagaraj
had become good friends. Based on suspicion, Gopal Gowda was summoned for questioning, but it was in vain, as
there was no substantial evidence or proof at that point of time. The investigating team was thus left in the dark,
with no eye-witnesses and no proof of who had committed the crime; the only ray of hope was the empty Vodafone
SIM card box recovered from the victim's auto-rickshaw. So the IO decided to go for CDR analysis of the victim,
Gopal Gowda and Nagaraj.
Methodology Followed: The investigative team believed that the crime was committed by a gang, because the
victim had first been abducted from Husker and then brought to Kotter, where he was killed and burnt. The
following pattern was adopted to process the required CDRs using the CDR Analyzer to find the missing links.
1. The CDR of Gopal Gowda was processed to check whether he was present at the location where the crime was
committed, or whether he had contacted anybody at the time the victim was murdered. The search yielded no
result: at the time of the crime he was at a location far from the crime scene, and he had had no contact with
anybody at the time the crime was committed. His location at the time of the crime was identified using a
provision in the software to find the location from the Tower-ID.
2. The CDR of Nagaraj was processed to check whether the victim had called him after purchasing the new SIM
card, as the two were good friends. Since the investigative team did not know what number the victim was
using, the only option was to search for a new number in the CDR.
3. A new number was found in Nagaraj's CDR: a call had been made from it to Nagaraj at around 11 a.m. on the
day the victim was murdered. Using the C5 CDR Analyzer's utility to check the service provider of the number,
it was confirmed that the number belonged to Vodafone. From this revelation, the investigative team guessed
that this was probably the new number the victim had purchased, but they needed confirmation that it was
the victim's number.
4. SDR details of the new number were requested from Vodafone, and the details received showed that the
number was registered to the victim. This made it clear that the victim had been using the cell phone when
he was murdered; but since the team had found no mobile phone at the crime scene, they now believed that
the gang who committed the crime had taken the phone with them.
5. The IMEI number associated with the victim's mobile number was fetched using the software, and it was kept
under watch with Vodafone to check whether the victim's mobile phone was being used by any other number.
6. Details received from the service provider showed that the same mobile phone was being used by four or five
Airtel numbers.
7. Using the Geo-Analysis module of the software, the location of those Airtel numbers was identified; it turned
out to be a place known as Idebur near Madhikere taluka of Tucker district.
8. The SDR details of those Airtel numbers were sought, and by means of the details received, the investigative
team raided the addresses and apprehended five persons.
9. During the interrogation of the apprehended persons, it was revealed that they had murdered and burnt Raja,
and that Gopal Gowda had contracted them to commit the crime for Rs. 5,000.
SUMMARY
• Cyber Forensics is a discipline that incorporates legal and computer science elements to capture and interpret
data from operating systems, networks, wireless communications and storage devices in a manner that is
admissible in court as evidence.
• The science of collecting, storing and recording evidence from modern electronic storage devices, such as
computers, PDAs, digital cameras, cell phones and various memory storage devices, is also known as 'Digital
forensics' and/or 'Computer and network forensics'.
• Cyber forensics includes preservation of the integrity of evidence, identification of evidence, extraction of data,
interpreting the data and documentation related to evidence analysis.
• The different forms of cyber-forensic techniques are disk forensics, memory forensics, network forensics,
computer forensics, and internet forensics.
• Disk forensics is the science of extracting forensic information from digital storage media like Hard disk, USB
devices, Firewire devices, CD, DVD, Flash drives, Floppy disks, etc.
• Memory forensics provides insights about the runtime system activity such as account credentials, chat
messages, running processes, injected code fragments, open network connections, etc.
• Network forensics consists of observing, documenting and analyzing network activities to identify the origins
of intrusion breaches or other inappropriate incidents.
• Mobile forensics is used to recover digital evidence or data from a mobile device, such as a cell phone,
smartphone, PDA, GPS device or tablet computer.
• Internet forensics consists of the extraction, analysis and identification of evidence related to the user’s online
activities.
• The key elements in the process of computer forensics are readiness for the tasks, evaluation for risk analysis,
collection of evidence, analysis of relevant information, presentation of the evidence in accordance with the
findings, and review of the situation.
• A Computer Forensic Investigator combines their computer science background with their forensic skills to
recover information from computers and storage devices.
• Forensic analysis is required in situations where there is a suspicion that electronic data may have been lost,
misappropriated or otherwise improperly handled.
• Cyber forensics procedure involves preparation, planning before the incident and developing an incident
response plan.
• The Chain of Custody essentially records the way in which the objects obtained for investigation are protected,
transported and checked to confirm they have been held in a proper manner. The custody chain demonstrates
'trust' to the courts and the client that the media has not been tampered with.
• Digital evidence is an integral element in establishing motive, mode and procedure in computer-related
crimes, and it is critical in many internal investigations where an entity seeks risk reduction by resolving
issues in its internal processes.
• The process of digital forensics involves the collection of data, proper examination, and thorough analysis
using justifiable methods and reporting the significant findings.
• A search warrant is a written order provided by a judge directing the law enforcement officer to search a
particular piece of evidence at a specific location.
• A Qualified Forensic Duplicate is a file which contains every bit of information from the source in a raw
bitstream format, but may be stored in an altered form. For example, empty sectors might be compressed,
or the file could contain in-band hashes of the drive's contents.
• A hash algorithm is a one-way mathematical formula used by the forensic specialist to compute a unique
fingerprint of data.
• Hardware mirroring is done by using hardware duplicators that take a hard drive and mirror it on another hard
drive.
• The investigation stage includes defining the who, what, when, where, how and why surrounding an incident.
• The evidences can be divided into three categories i.e. host-based evidence, network-based evidence and
other evidence.
• The Registry is the nucleus of the Windows OS. It is a hierarchical database which stores the configuration
settings and options necessary for running applications and commands.
• NTP, or Network Time Protocol, uses Coordinated Universal Time (UTC) to synchronize device clocks to within a millisecond, and often to a fraction of a millisecond.
KNOWLEDGE CHECK
Q.1. What are the types of digital data that can serve as digital evidence?
Q.2. List the classifications of Cyber Forensics by filling the following blanks.
i. D_ _ _ FORENSICS
ii. N_ _W_ _ _ FORENSICS
iii. W_ _ _ _ _ _S FORENSICS
iv. DATA_ _ _ _ FORENSICS
v. M_ _ _ _E DEVICE FORENSICS
vi. G_S FORENSICS
vii. E_ _ _L FORENSICS
viii. MEM_ _ _ FORENSICS
Q.3. State the importance of a good response toolkit and briefly describe the procedure for creating a first-response
toolkit.
Q.4. Which of the following are true/false in the context of securing & evaluating an electronic crime scene?
The investigator should:
a. Put together all the systems including the affected ones True/False
b. Establish a security perimeter to see if the offenders are still present True/False
c. Protect perishable data on devices such as pagers & caller ID boxes True/False
d. Secure the telephone lines and allow them to operate normally True/False
e. Protect physical evidence or latent fingerprints that may be found on keyboards,
mice, diskettes and CDs True/False
Q.8. Complete the steps for creating an image of a disk using the dd tool:
Step 1: Use dd to zeroize a 320 GB USB drive. This will render the drive sterile, in a pristine state.
Step 2: __________________________________________________________________________________________________________________
__________________________________________________________________________________________________________________
Step 3: __________________________________________________________________________________________________________________
__________________________________________________________________________________________________________________
Step 4: To confirm that the drive has been zeroized, dump the contents using xxd.
Step 5: Boot the Helix CD on the target/compromised system and plug in the USB media. Then create an EXT2 file system
using fdisk and mke2fs.
Step 6: __________________________________________________________________________________________________________________
__________________________________________________________________________________________________________________
Step 7: __________________________________________________________________________________________________________________
__________________________________________________________________________________________________________________
Step 8: After that, the bit-by-bit image creation can start. Start by creating a cryptographic fingerprint of the original
disk using MD5. Then run dd with /dev/sda as the input source and a file named [Link] as the output file.
Another useful option is conv=sync,noerror, which keeps dd from stopping the image creation when it encounters an
unreadable sector. Finally, create the fingerprint of the image, verify that both fingerprints match, and unmount the
drive (a sketch of this logic follows below).
Q.9. Match the duplication software with its function in the following table:
ABBREVIATION : FULL FORM
AAA : Authentication, Authorization and Accounting
ACLs : Access Control Lists
AD : Active Directory
AES : Advanced Encryption Standard
AH : Authentication Header
ALM : Application Lifecycle Management
API : Application Program Interface
AppSec : Application Security
ARP : Address Resolution Protocol
AS : Authentication Server
ASCII : American Standard Code for Information Interchange
ASMX : Active Server Method File
ASPX : Active Server Page eXtended
AUT : Application Under Test
BC : Business Continuity
BCM : Business Continuity Management
BER : Basic Encoding Rules
BIOS : Basic Input/Output System
BOOTP : Bootstrap Protocol
bps : Bits Per Second
BVA : Boundary Value Analysis
BYOD : Bring Your Own Device
CAINE : Computer Aided Investigative Environment
CD : Compact Disk
CDMA : Code Division Multiple Access
CDR : Call Detail Records
CERT : Computer Emergency Response Team
CGI : Common Gateway Interface
C-I-A Triad : Confidentiality, Integrity and Availability
CIS : Center for Internet Security
CLI : Command Line Interface
COBIT : Control Objectives for Information and Related Technology
CPU : Central Processing Unit
CRC : Cyclic Redundancy Check
CSIRT : Computer Security Incident Response Team
CSMA/CD : Carrier Sense Multiple Access/Collision Detection
CSRF : Cross-Site Request Forgery
DAC : Discretionary Access Control
DAST : Dynamic Analysis Security Testing
DCS : Distributed Control System
DD : Data Definition
DDoS : Distributed Denial of Service
DES : Data Encryption Standard
DF : Don't Fragment
DFF : Digital Forensics Framework
DHCP : Dynamic Host Configuration Protocol
DIBS : Digital Integrated Business Services
DLL : Dynamic Link Library
DMA : Direct Memory Access
DNA : Deoxyribonucleic Acid
DNS : Domain Name System
DoD : Department of Defence
DOS : Disk Operating System
DoS : Denial of Service
DR : Disaster Recovery
DSA : Digital Signature Algorithm
DVD : Digital Versatile Disk
EBCDIC : Extended Binary Coded Decimal Interchange Code
ECC : Elliptic-Curve Cryptography
EP : Equivalence Partitioning
ERP : Enterprise Resource Planning
ESAPI : Enterprise Security API
ESP : Encapsulating Security Payload
EXT2 : Second Extended File System
FAT : File Allocation Table
FCS : Frame Check Sequence
FDAS : Fast Disk Acquisition System
FDDI : Fiber Distributed Data Interface
FIPS : Federal Information Processing Standard
FTK : Forensic ToolKit
FTP : File Transfer Protocol
GB : Gigabyte
Gbps : Gigabits Per Second
GPG : GNU Privacy Guard
GPS : Global Positioning System
GRC : Governance, Risk Management and Compliance
GRE : Generic Routing Encapsulation
GUA : Graphical User Authentication
GUI : Graphical User Interface
HK : Handle Key
HKCC : HKEY_CURRENT_CONFIG
HKCR : HKEY_CLASSES_ROOT
HKCU : HKEY_CURRENT_USER
HKLM : HKEY_LOCAL_MACHINE
HKPD : HKEY_PERFORMANCE_DATA
HKU : HKEY_USERS
HLR : Home Location Register
HMACs : Hashed Message Authentication Codes
HPA : Host Protected Area
HTML : Hypertext Mark-up Language
HTTP : Hypertext Transfer Protocol
I&AM : Identity and Access Management
I/O : Input & Output
IAST : Interactive Application Security Testing
ICMP : Internet Control Message Protocol
ICT : Information and Communications Technology
IDAM : Identity and Access Management
IDE : Integrated Drive Electronics
IDEA : International Data Encryption Algorithm
IDPS : Intrusion Detection and Prevention Systems
IDS : Intrusion Detection System
IEC : International Electrotechnical Commission
IEF : Internet Evidence Finder
IETF : Internet Engineering Task Force
IGMP : Internet Group Management Protocol
IIS : Internet Information Server
IMEI : International Mobile Equipment Identity
IMSI : International Mobile Subscriber Identity
IO : Investigating Officer
iOS : iPhone Operating System
IoT : Internet of Things
IP : Internet Protocol
IPC : Inter-Process Communication
IPsec : Internet Protocol Security
IPV4 : Internet Protocol Version 4
IR : Internet Registry
iSCSI : Internet Small Computer Systems Interface
ISD : International Subscriber Dialling
UNIT 1 INTRODUCTION TO CYBER SECURITY
• ANSWER 2
A. Vulnerability- This is a weakness in an information system, system security procedures,
internal controls or implementation that could be exploited.
B. Threat Agent or Actor- This refers to the intent and method targeted at the intentional
exploitation of the vulnerability or a situation and method that may accidentally trigger
the vulnerability.
C. Threat Vector- This is a path or a tool that a threat actor uses to attack the target.
D. Threat Target- This is anything of value to the threat actor such as PC, laptop, PDA,
tablet, mobile phone, online bank account or identity.
E. Confidentiality- Prevention of unauthorized disclosure or use of information assets.
F. Integrity- Prevention of unauthorized modification of information assets.
G. Availability- Ensuring authorized access to information assets when required, for the
duration required.
H. Identification- The first step in the ‘identify-authenticate-authorise’ sequence that is
performed when access to information or information processing resources are required.
I. Authentication- Verifies the identity by ascertaining what you know, what you have and
what you are.
J. Authorisation- The process of ensuring that a user has sufficient rights to perform the
requested operation and preventing those without sufficient rights from doing the same.
K. Non-Repudiation- Refers to one of the properties of cryptographic digital signatures
that offer the possibility of proving whether a message has been digitally signed by the
holder of a digital signature’s private key.
• ANSWER 3
A. v
B. iii
C. ii
D. ii
• ANSWER 4
A. Preventive Controls
B. Preventive Controls
C. Detective Controls
D. Detective Controls
E. Detective Controls
F. Deterrent Controls
G. Deterrent Controls
H. Recovery Controls
I. Recovery Controls
• ANSWER 6
1. Preventive   2. Detective   3. Corrective   4. Deterrent   5. Recovery   6. Compensating
1. Physical   2. Administrative   3. Technical
• ANSWER 7
S- Spoofing of user Identity
T- Tampering
R- Repudiation
I- Information disclosure (privacy breach or data leak)
D- Denial of Service
E- Elevation of Privilege
• ANSWER 8
A- Application Attack K- Phishing Attack
B- Application Attack L- Network Attack
C- Malware M- Network Attack
D- Application Attack N- Network Attack
E- Network Attack O- Network Attack
F- Phishing Attack P- Network Attack
G- Malware Q- Network Attack
H- Phishing Attack R- Network Attack
I- Phishing Attack S- Application Attack
J- Malware
UNIT 2 CRYPTOGRAPHY
• ANSWER 1
A – (ii) F – (iii)
B – (iii) G – (iv)
C – (iv) H – (iii)
D – (iv) I – (iv)
E – (iii) J – (i)
UNIT 3 NETWORK SECURITY
• ANSWER 1
VPN : Virtual Private Network
TCP/IP : Transmission Control Protocol / Internet Protocol
HTTP : Hypertext Transfer Protocol
UDP : User Datagram Protocol
ARP : Address Resolution Protocol
DNS : Domain Name System
FTP : File Transfer Protocol
SSH : Secure Shell
DHCP : Dynamic Host Configuration Protocol
IPS : Intrusion Prevention System
IDPS : Intrusion Detection and Prevention Systems
• ANSWER 2
A – (i) F – (iii)
B – (iv) G – (iv & v)
C – (ii) H – (ii & iii)
D – (iii) I – (i & iii)
E – (iv) J – (iv)
• ANSWER 3
A – (v) E – (vii)
B – (vi) F – (ii)
C – (i) G – (iv)
D – (iii)
• ANSWER 4
A – (v) E – (iii)
B – (vi) F – (vii)
C – (i) G - (iv)
D – (ii)
• ANSWER 5
A – (ii) C – (iii)
B – (i)
UNIT 4 APPLICATION SECURITY
• ANSWER 1
1 – G   5 – D & H
2 – A   6 – E
3 – B   7 – F
4 – C
• ANSWER 2
Open
Web
Application
Security
Project
• ANSWER 3
A – (ii)
B – (v)
C - (i)
UNIT 5 SECURITY AUDITING
• ANSWER 1
1 – C   5 – G
2 – E   6 – D
3 – A   7 – B
4 – H   8 – F
• ANSWER 3
(ii)
UNIT 6 CYBER FORENSICS
• ANSWER 2
i. DISK FORENSICS
ii. NETWORK FORENSICS
iii. WIRELESS FORENSICS
iv. DATABASE FORENSICS
v. MOBILE DEVICE FORENSICS
vi. GIS FORENSICS
vii. EMAIL FORENSICS
viii. MEMORY FORENSICS
• ANSWER 4
a- False
b- True
c- True
d- False
e- True
• ANSWER 6
1- B
2- D
3- A
4- E
5- C
• ANSWER 9
1- C
2- A
3- B
4