Domain 4
Understand Computer Networking
Objectives
What is Networking?
A network is simply two or more computers linked together to share data, information, or resources.
To properly establish secure data communications, it is important to explore all the technologies involved in computer communications. From hardware and software to protocols and encryption, you'll need to familiarize yourself with many details, standards, and procedures.
Types of Networks
LAN. A local area network (LAN) is a network typically spanning a single floor or building; that is, a limited geographical area.
WAN. Wide area network (WAN) is the term usually assigned to the long-distance connections between
geographically remote networks.
Network Devices
Hub
Hubs are used to connect multiple devices in a network. They’re less likely to be seen in business or corporate
networks than in home networks. Hubs are wired devices and are not as smart as switches or routers.
Switch
Rather than using a hub, you might consider using a switch, or what is also known as an intelligent hub. Switches
are wired devices that know the addresses of the devices connected to them and route traffic to that port/device
rather than retransmitting to all devices.
Offering greater efficiency for traffic delivery and improving the overall throughput of data, switches are smarter
than hubs, but not as smart as routers. Switches can also create separate broadcast domains when used to
create VLANs, which will be discussed later.
Router
Routers are used to control traffic flow on networks and are often used to connect similar networks and control
traffic flow between them. Routers can be wired or wireless and can connect multiple switches. Smarter than
hubs and switches, routers determine the most efficient “route” for the traffic to flow across the network.
Firewall
Firewalls are essential tools in managing and controlling network traffic and protecting the network. A firewall is a
network device used to filter traffic. It is typically deployed between a private network and the internet, but it can
also be deployed between departments (segmented networks) within an organization (overall network). Firewalls
filter traffic based on a defined set of rules, also called filters or access control lists.
Server
A server is a computer that provides information to other computers on a network. Some common servers are
web servers, email servers, print servers, database servers, and file servers. All of these are, by design, networked
and accessed in some way by a client computer. Servers are usually secured differently than workstations to
protect the information they contain.
Endpoint
Endpoints are the ends of a network communication link. One end is often at a server where a resource resides,
and the other end is often a client making a request to use a network resource. An endpoint can be another server,
desktop workstation, laptop, tablet, mobile phone, or any other end-user device.
Ethernet
Ethernet (IEEE 802.3) is a standard that defines wired connections of networked devices. This standard defines
the way data is formatted over the wire to ensure disparate devices can communicate over the same cables.
Device Addresses
Networking at a Glance
This diagram represents a small business network. The lines depict wired connections. Notice how all devices
behind the firewall connect via the network switch, and the firewall lies between the network switch and the
internet.
The network diagram below represents a typical home network. Notice the primary difference between the home
network and the business network is that the router, firewall, and network switch are often combined into one
device supplied by your internet provider and shown here as the wireless access point.
Many different models, architectures, and standards exist that provide ways to interconnect different hardware
and software systems with each other for the purposes of sharing information, coordinating their activities, and
accomplishing joint or shared tasks.
Computers and networks emerge from the integration of communication devices, storage devices, processing
devices, security devices, input devices, output devices, operating systems, software, services, data, and people.
Translating an organization’s security needs into safe, reliable, and effective network systems starts with a simple
premise. The purpose of all communications is to exchange information and ideas between people and
organizations so that they can get work done.
Those simple goals can be re-expressed in network (and security) terms.
In the most basic form, a network model has at least two layers:
The lower layer is often referred to as the media or transport layer and is responsible for receiving bits from
a physical connection and converting them into a frame. Frames are grouped into standardized sizes. Think
of frames as a bucket and the bits as water. If the buckets are sized similarly and the water is contained
within the buckets, the data can be transported in a controlled manner. Routing data is added to the frames of
data to create packets. In other words, a destination address is added to the bucket. Once the buckets are
sorted and ready to go, the host layer takes over.
The upper layer, known as the host or application layer, is responsible for managing the integrity of a
connection and controlling the session as well as establishing, maintaining, and terminating
communication sessions between two computers. It is also responsible for transforming data received
from the application layer into a format that any system can understand. And finally, it allows applications
to communicate and determines whether a remote communication partner is available and accessible.
The Open Systems Interconnection (OSI) protocol was developed to establish a common communication
structure or standard for all computer systems. The actual OSI protocol was never widely adopted, but the theory
behind the protocol, the OSI model, was readily accepted. The OSI model serves as an abstract framework, or
theoretical model, for how protocols should function in an ideal world on ideal hardware. Thus, the OSI model has
become a common reference point against which all protocols can be compared.
The OSI model divides networking tasks into seven distinct layers. Each layer is responsible for performing
specific tasks or operations with the goal of supporting data exchange (in other words, network communication)
between two computers. The layers are always numbered from bottom to top. They are referred to by either their
name or their layer number. For example, Layer 3 is also known as the Network Layer.
The conceptual layers are ordered and stacked to depict the flow of information from one device to another. As information traverses the model, each layer communicates directly with the layers above and below it. The layers of the sending and receiving systems correspond to the various states of information throughout the exchange. For example, Layer 3 communicates with both the Data Link (2) and Transport (4) Layers, and both systems leverage the Transport Layer (4) for the same purpose.
The Application, Presentation, and Session Layers (5-7) are commonly referred to as data. However, each layer has the potential to perform encapsulation. Encapsulation is the addition of header data, and possibly a footer (trailer), by the protocol used at that OSI layer. Encapsulation is particularly important when discussing the Transport, Network, and Data Link Layers (2-4), which all generally include some form of header. At the Physical Layer (1), the data unit is converted into binary—for example, 01010111—and sent across physical wires such as an Ethernet cable.
Let’s map some common networking terminology to the OSI Model:
Switches, bridges, and WAPs sending frames are activities that occur at the Data Link Layer (2).
Encapsulation occurs as data moves down the OSI model from Application to Physical. As data is encapsulated at
each descending layer, the previous layer’s header, payload and footer are all treated as the next layer’s payload.
The data unit size increases as it moves down the conceptual model and the contents continue to encapsulate.
The inverse action occurs as data moves up the OSI model layers from Physical to Application. This process is
known as de-encapsulation (or decapsulation). The header and footer are used to properly interpret the data
payload and are then discarded. As a data unit moves up the OSI model, it becomes smaller. The
encapsulation/de-encapsulation process is best depicted visually below:
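To make the layering concrete, here is a minimal Python sketch of encapsulation and de-encapsulation. The header and footer strings are simplified placeholders, not real protocol formats.

```python
# Minimal illustration of encapsulation/de-encapsulation.
# Header/footer values are simplified placeholders, not real protocol formats.

def encapsulate(payload: bytes) -> bytes:
    segment = b"TCP-HDR|" + payload             # Layer 4: add a transport header
    packet = b"IP-HDR|" + segment               # Layer 3: the segment becomes the payload
    frame = b"ETH-HDR|" + packet + b"|ETH-FTR"  # Layer 2: add header and footer (trailer)
    return frame                                # Layer 1 would transmit this as bits

def de_encapsulate(frame: bytes) -> bytes:
    packet = frame.removeprefix(b"ETH-HDR|").removesuffix(b"|ETH-FTR")
    segment = packet.removeprefix(b"IP-HDR|")
    return segment.removeprefix(b"TCP-HDR|")

frame = encapsulate(b"GET / HTTP/1.1")
print(len(frame))             # the data unit grows as it moves down the stack
print(de_encapsulate(frame))  # b'GET / HTTP/1.1' -- original payload recovered
```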
The OSI model wasn’t the first or only attempt to streamline networking protocols or establish a common communications standard. In fact, the most widely used protocol suite today, TCP/IP, was developed in the early 1970s. The OSI model was not developed until the late 1970s. The TCP/IP protocol stack focuses on the core functions of networking.
The most widely used protocol suite is TCP/IP, but it is not just a single protocol; rather, it is a protocol stack
comprising dozens of individual protocols. TCP/IP is a platform-independent protocol based on open standards.
However, this is both a benefit and a drawback. TCP/IP can be found in just about every available operating
system, but it consumes a significant amount of resources and is relatively easy to hack into because it was
designed for ease of use rather than for security.
At the Application Layer, TCP/IP protocols include Telnet, File Transfer Protocol (FTP), Simple Mail Transport Protocol (SMTP), and Domain Name Service (DNS).
The two primary Transport Layer protocols of TCP/IP are TCP and UDP. TCP is a full-duplex connection-oriented
protocol, whereas UDP is a simplex connectionless protocol.
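As a brief illustration of that difference, the following Python sketch contrasts the two. The address 192.0.2.10 is a documentation/test placeholder, not a live host; the TCP connect would fail unless something is actually listening there.

```python
import socket

# TCP: connection-oriented -- a connection (via the three-way handshake)
# must be established before data is exchanged, and delivery is acknowledged.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("192.0.2.10", 80))   # placeholder host; fails if nothing is listening
tcp.sendall(b"hello over TCP")
tcp.close()

# UDP: connectionless -- datagrams are sent with no handshake and no delivery
# guarantee; sendto() succeeds locally even if no one is listening.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello over UDP", ("192.0.2.10", 53))
udp.close()
```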
In the Internet Layer, Internet Control Message Protocol (ICMP) is used to determine the health of a network or a
specific link. ICMP is utilized by ping, traceroute, and other network management tools. The ping utility employs
ICMP echo packets and bounces them off remote systems. Thus, you can use ping to determine whether the
remote system is online, whether the remote system is responding promptly, whether the intermediary systems
are supporting communications, and the level of performance efficiency at which the intermediary systems are
communicating.
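For example, a script can wrap the operating system's ping utility to check whether a host answers ICMP echo requests. This is a sketch assuming a Linux/macOS-style ping; the host shown is a placeholder.

```python
import subprocess

# Wraps the operating system's ping utility, which sends ICMP echo
# requests. The -c (count) flag is the Linux/macOS form; Windows uses -n.
def host_is_up(host: str) -> bool:
    try:
        result = subprocess.run(
            ["ping", "-c", "1", host],
            capture_output=True,
            timeout=5,
        )
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0  # 0 means an echo reply was received

print(host_is_up("192.0.2.10"))  # placeholder host; expect False here
```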
As a security practitioner, it is important to understand the basic components of Internet Protocol (IP) addressing.
The figure below shows the essential components of an IPv4 address which will be covered in more detail below.
This section will highlight some important historical aspects of IP, differentiate IPv4 and IPv6, and introduce the
concepts of public and private addressing. At the end of this section, you should be able to determine if an
address is IPv4 or IPv6 and if the address is public or private.
Internet protocols are currently deployed and used worldwide in two major versions. IPv4 provides a 32-bit
address space, which by the late 1980s was projected to be exhausted. IPv6 was introduced in December 1995
and provides a 128-bit address space along with several other important features.
Each IP host/device is associated with a unique logical address. An IPv4 address is expressed as four octets
separated by a dot (.), for example, 216.12.146.140. Each octet may have a value between 0 and 255. However, 0
is the network itself (not a device on that network), and 255 is generally reserved for broadcast purposes. Each
address is subdivided into two parts: the network number and the host. The network number assigned by an
external organization, such as the Internet Corporation for Assigned Names and Numbers (ICANN), represents the
organization’s network. The host represents the network interface within the network.
To ease network administration, networks are typically divided into subnets. Because subnets cannot be
distinguished with the addressing scheme discussed so far, a separate mechanism, the subnet mask, is used to
define the part of the address used for the subnet. The mask is usually expressed in dotted-decimal notation, such as 255.255.255.0.
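Python's standard ipaddress module can illustrate how a mask splits an address into network and host portions; the address and mask below are taken from the examples above.

```python
import ipaddress

# The mask splits an address into a network portion and a host portion.
iface = ipaddress.ip_interface("216.12.146.140/255.255.255.0")
print(iface.network)           # 216.12.146.0/24 -- the network number
print(iface.network.netmask)   # 255.255.255.0
print(iface.ip)                # 216.12.146.140  -- this host on that network
```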
With the ever-increasing number of computers and networked devices, IPv4 does not provide enough addresses
for our needs. To overcome this shortcoming, IPv4 was subdivided into public and private address ranges. Public addresses are limited in IPv4; private addresses, however, can be reused by anyone, and it is highly likely that everyone on your street is using the same address scheme.
The nature of the addressing scheme established by IPv4 meant that network designers had to start thinking in
terms of IP address reuse. IPv4 facilitated this in several ways, such as its creation of the private address groups;
this allows every LAN in every SOHO (small office, home office) situation to use addresses such as 192.168.2.xxx
for its internal network addresses, without fear that some other system can intercept traffic on their LAN.
This table shows the private IPv4 addresses available for anyone to use:
Range
10.0.0.0 to 10.255.255.254
172.16.0.0 to 172.31.255.254
192.168.0.0 to 192.168.255.254
The first octet of 127 is reserved for a computer’s loopback address. Usually, the address 127.0.0.1 is used. The
loopback address is used to provide a mechanism for self-diagnosis and troubleshooting at the machine level.
This mechanism allows a network administrator to treat a local machine as if it were a remote machine and ping
the network interface to establish whether it is operational.
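A short sketch using Python's ipaddress module can confirm whether a given address falls in the private or loopback ranges described above.

```python
import ipaddress

# is_private is True for the RFC 1918 ranges in the table above (and for
# other reserved blocks, including loopback); is_loopback covers 127.0.0.0/8.
for text in ("10.0.0.5", "172.16.1.1", "192.168.2.25", "127.0.0.1", "216.12.146.140"):
    ip = ipaddress.ip_address(text)
    print(f"{text:>15}  private={ip.is_private}  loopback={ip.is_loopback}")
```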
IPv6 is a modernization of IPv4, which addressed several weaknesses in the IPv4 environment:
A much larger address field. IPv6 addresses are 128 bits, which supports 2^128 or 340,282,366,920,938,463,463,374,607,431,768,211,456 hosts. This ensures that addresses will not run out.
Improved security. IPsec is an optional part of IPv4 networks, but a mandatory component of IPv6 networks.
This will help ensure the integrity and confidentiality of IP packets and allow communicating partners to
authenticate with each other.
Improved quality of service (QoS). This will help services obtain an appropriate share of a network’s
bandwidth.
An IPv6 address is shown as eight groups of four digits. Instead of numeric (0-9) digits like IPv4, IPv6 addresses
use the hexadecimal range (0000-ffff) and are separated by colons (:) rather than periods (.).
An example IPv6 address is 2001:0db8:0000:0000:0000:ffff:0000:0001. To make it easier for humans to read and type, it can be shortened by removing the leading zeros at the beginning of each field and substituting two colons (::) for the longest consecutive run of zero fields. All fields must retain at least one digit. After shortening, the example address becomes 2001:db8::ffff:0:1.
::1 is the local loopback address, used the same as 127.0.0.1 in IPv4.
The range 2001:db8:: to 2001:db8:ffff:ffff:ffff:ffff:ffff:ffff is reserved for documentation use, as in the
examples above.
fc00:: to fdff:ffff:ffff:ffff:ffff:ffff:ffff:ffff are addresses reserved for internal network use and are not routable
on the internet.
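Python's ipaddress module applies these same shortening rules, which makes it a handy way to check your work; a brief sketch:

```python
import ipaddress

# The module applies the shortening rules described above: leading zeros
# are dropped and the longest run of zero fields becomes "::".
addr = ipaddress.ip_address("2001:0db8:0000:0000:0000:ffff:0000:0001")
print(addr.compressed)   # 2001:db8::ffff:0:1
print(addr.exploded)     # 2001:0db8:0000:0000:0000:ffff:0000:0001
print(ipaddress.ip_address("::1").is_loopback)   # True
```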
What is Wi-Fi?
Wireless networking (Wi-Fi) is a popular method of connecting corporate and home systems because of the ease
of deployment and relatively low cost. It has made networking more versatile than ever. Workstations and portable
systems are no longer tied to a cable but can roam freely within the signal range of the deployed wireless access
points. However, with this freedom comes additional vulnerabilities.
Wi-Fi range generally accommodates most homes or small offices. Range extenders or multiple access points may be placed strategically to extend the signal for larger areas such as a college campus. The Wi-Fi standard has evolved since its inception; typically, each iteration offers greater throughput and faster speeds.
To exploit a LAN, threat actors need to enter the physical space or immediate vicinity of the physical network such
as open ports at wall outlets or on a network device. For example, given physical access, threat actors may place
sniffer taps onto cables or plug in rogue USB devices. By contrast, wireless media offers threat actors
opportunities which do not require circumventing physical access controls. Exploitation of Wi-Fi weaknesses can occur from outside the perimeter of physical access controls by capitalizing on technological weaknesses or misconfigurations.
Figure 4.7: Wi-Fi
TCP/IP’s vulnerabilities are numerous. Improperly implemented TCP/IP stacks in various operating systems are vulnerable to various attacks, including denial-of-service (DoS), distributed denial-of-service (DDoS), fragment, oversized packet, spoofing, and man-in-the-middle attacks, several of which are described below.
TCP/IP (as well as most protocols) is also subject to passive attacks via monitoring or sniffing. Network
monitoring, or sniffing, is the act of monitoring traffic patterns to obtain information about a network. Observe the
simplified network diagram below. Think about the various types of network traffic (web browsing, email, authentications, alerts) traversing the busy information system highway that is an organizational network.
Imagine if a threat actor was able to observe the traffic at various points on the diagram. What type of information
might an adversary glean?
Fragment Attacks
In a fragment attack, an attacker fragments traffic in such a way that a system is unable to put data packets
back together.
Oversized Packet Attacks
Purposely sending a network packet that is larger than expected or larger than can be handled by the
receiving system, causing the receiving system to fail unexpectedly.
Spoofing Attacks
Faking the sending address of a transmission to gain illegal entry into a secure system. Source: CNSSI 4009-
2015
Man-in-the-Middle Attacks
An attack where adversaries position themselves in between the user and the system so that they can
intercept and alter data traveling between them. Source: NISTIR 7711
On a network, there are physical ports that you connect wires to and logical ports that determine where the
data/traffic goes.
Physical Ports
Physical ports are the ports on the routers, switches, servers, computers, etc., to which you connect wires (e.g.,
fiber optic cables, Cat5 cables) to create a network.
Logical Ports
When a communication connection is established between two systems, it is done using ports. A logical port
(also called a socket) is little more than an address number that both ends of the communication link agree to use
when transferring data. Ports allow a single IP address to be able to support multiple simultaneous
communications, each using a different port number. In the Application Layer of the TCP/IP model (which
includes the Session, Presentation, and Application Layers of the OSI model) resides numerous application or
service-specific protocols. Data types are mapped using port numbers associated with services. For example, web
traffic (or HTTP) is port 80. Secure web traffic (or HTTPS) is port 443. The table below highlights some of these
protocols and their customary or assigned ports. Note that in several cases, a service or protocol may have two
ports assigned, one secure and one insecure. When in doubt, systems should be implemented using the most
secure version of a protocol and its services.
Well-known ports (0–1023). These ports are related to the common protocols at the core of the Transmission Control Protocol/Internet Protocol (TCP/IP) model, such as Domain Name Service (DNS) and Simple Mail Transfer Protocol (SMTP).
Registered ports (1024–49151). These ports are often associated with proprietary applications from
vendors and developers. While they are officially approved by the Internet Assigned Numbers Authority
(IANA), in practice many vendors simply implement a port of their choosing. Examples include Remote
Authentication Dial-In User Service (RADIUS) authentication (1812), Microsoft SQL Server (1433/1434), and
the Docker REST API (2375/2376).
Dynamic or private ports (49152–65535). Whenever a service is requested that is associated with well-
known or registered ports, those services will respond with a dynamic port that is used for that session and
then released.
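As a quick illustration, the port ranges above can be expressed as a small classification function (a sketch; the sample ports are drawn from the examples in this section):

```python
def port_category(port: int) -> str:
    # Ranges defined by IANA, as listed above.
    if 0 <= port <= 1023:
        return "well-known"
    if 1024 <= port <= 49151:
        return "registered"
    if 49152 <= port <= 65535:
        return "dynamic/private"
    raise ValueError("port must be between 0 and 65535")

for port in (53, 443, 1812, 51234):
    print(port, port_category(port))  # DNS and HTTPS are well-known; RADIUS is registered
```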
Secure Ports
Some network protocols transmit information in clear text, meaning it is not encrypted; these protocols should not be used.
Clear text information is subject to network sniffing. This tactic uses software to inspect packets of data as they
travel across the network and extract text such as usernames and passwords. Network sniffing could also reveal
the content of documents and other files if they are sent via insecure protocols.
The table below shows some of the insecure protocols along with recommended secure alternatives.
Insecure Port    Protocol                                 Secure Port     Secure Alternative
21 – FTP         File Transfer Protocol                   22* – SFTP      Secure File Transfer Protocol
23 – Telnet      Telnet                                   22* – SSH       Secure Shell
25 – SMTP        Simple Mail Transfer Protocol            587 – SMTP      SMTP with TLS
37 – Time        Time Protocol                            123 – NTP       Network Time Protocol
53 – DNS         Domain Name Service                      853 – DoT       DNS over TLS (DoT)
80 – HTTP        HyperText Transfer Protocol              443 – HTTPS     HyperText Transfer Protocol (SSL/TLS)
143 – IMAP       Internet Message Access Protocol         993 – IMAP      IMAP for SSL/TLS
161/162 – SNMP   Simple Network Management Protocol       161/162 – SNMP  SNMPv3
445 – SMB        Server Message Block                     2049 – NFS      Network File System
389 – LDAP       Lightweight Directory Access Protocol    636 – LDAPS     Lightweight Directory Access Protocol Secure
Port 21. File Transfer Protocol (FTP) sends the username and password using plaintext from the client to
the server. This could be intercepted by an attacker and later used to retrieve confidential information from
the server. The secure alternative, SFTP, on port 22 uses encryption to protect the user credentials and
packets of data being transferred.
Port 23. Telnet is used by many Linux systems and many other systems as a basic text-based terminal. All
information to and from the host on a telnet connection is sent in plaintext and can be intercepted by an
attacker. This includes username and password as well as all information presented on the screen, as this
interface is all text. Secure Shell (SSH) on port 22 uses encryption to ensure that traffic between the host
and terminal is not sent in a plaintext format.
Port 25. Simple Mail Transfer Protocol (SMTP) is the default unencrypted port for sending email messages.
Since it is unencrypted, data contained within the emails could be discovered by network sniffing. The
secure alternative is to use port 587 for SMTP using Transport Layer Security (TLS), which will encrypt the
data between the mail client and the mail server.
Port 37. Time Protocol may be used by legacy equipment and has mostly been replaced by using port 123
for Network Time Protocol (NTP). NTP on port 123 offers better error-handling capabilities, which reduces
the likelihood of unexpected errors.
Port 53. Domain Name Service (DNS) is still used widely. However, using DNS over TLS (DoT) on port 853
protects DNS information from being modified in transit.
Port 80. HyperText Transfer Protocol (HTTP) is the basis of nearly all web browser traffic on the internet.
Information sent via HTTP is not encrypted and is susceptible to sniffing attacks. HTTPS using TLS
encryption is preferred, as it protects the data in transit between the server and the browser. Note that this
is often notated as SSL/TLS. Secure Sockets Layer (SSL) has been compromised and is no longer
considered secure. It is now recommended that web servers and clients use Transport Layer Security
(TLS) 1.3 or higher for the best protection.
Port 143. Internet Message Access Protocol (IMAP) is a protocol used for retrieving emails. IMAP traffic on
port 143 is not encrypted and is susceptible to network sniffing. The secure alternative is to use port 993 for
IMAP, which adds SSL/TLS security to encrypt the data between the mail client and the mail server.
Ports 161 and 162. Simple Network Management Protocol is commonly used to send and receive data for
managing infrastructure devices. Because sensitive information is often included in these messages, the
use of SNMP version 2 or 3 (abbreviated SNMPv2 or SNMPv3) is recommended to include encryption and
additional security features. Unlike many others discussed here, all versions of SNMP use the same ports,
so there is not a definitive secure and insecure pairing. Additional context will be needed to determine if
information on ports 161 and 162 is secured or not.
Port 445. Server Message Block (SMB) is used by many versions of Windows for accessing files over the
network. Files are transmitted unencrypted, and many vulnerabilities are well known. Therefore, it is
recommended that traffic on port 445 should not be allowed to pass through a firewall at the network
perimeter. A more secure alternative is port 2049, Network File System (NFS). Although NFS can use
encryption, it is recommended that NFS also not be allowed through firewalls.
Port 389. Lightweight Directory Access Protocol (LDAP) is used to communicate directory information
from servers to clients. This can be an address book for email or usernames for logins. The LDAP protocol
also allows records in the directory to be updated, introducing additional risk. Since LDAP is not encrypted,
it is susceptible to sniffing and manipulation attacks. Lightweight Directory Access Protocol Secure (LDAPS) on port 636 adds SSL/TLS encryption to protect directory information in transit.
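Returning to the TLS recommendation under Port 80 above, here is a minimal client-side sketch in Python that refuses anything older than TLS 1.3; the host name is a placeholder for a server that supports TLS 1.3.

```python
import socket
import ssl

# Client-side sketch: enforce a TLS 1.3 floor when connecting to a
# (hypothetical) web server on port 443.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3

with socket.create_connection(("www.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
        print(tls.version())  # e.g. 'TLSv1.3'
```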
Between the client and the server, there is a system for synchronizing and acknowledging any request. It is known
as a three-way handshake. This handshake is used to establish a TCP connection between two devices.
To establish communications with a web server, the client sends a synchronization (SYN) packet to the web server’s
port 80 or 443. This is a request to establish a connection. The web server replies to the SYN packet with an
acknowledgement known as a SYN/ACK. Finally, the client acknowledges the connection with an
acknowledgement (ACK). At this point, the basic connection is established, and the client and host will further
negotiate secure communications over that connection.
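In most programming languages, this handshake happens inside the socket connect call. A minimal Python sketch (the host is a placeholder):

```python
import socket

# socket.connect() performs the three-way handshake described above: the
# OS sends SYN, waits for SYN/ACK, then replies with ACK. If the handshake
# completes, connect() returns; otherwise it raises an error.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(3)
try:
    sock.connect(("www.example.com", 443))  # hypothetical web server
    print("handshake completed, connection established")
finally:
    sock.close()
```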
Understand Network (Cyber) Threats and Attacks
Objectives
Types of Threats
There are many types of cyberthreats to organizations. Below are several of the most common.
Spoofing
This is an attack with the goal of gaining access to a target system using a falsified identity. Spoofing can be used
against IP addresses, MAC addresses, usernames, system names, wireless network SSIDs, email addresses, and
many other types of logical identification.
Phishing
This is an attack that attempts to misdirect legitimate users to malicious websites through the abuse of URLs or hyperlinks in emails.
DOS/DDOS
A denial-of-service (DoS) attack is a network resource consumption attack that has the primary goal of preventing
legitimate activity on a victimized system. Attacks involving numerous unsuspecting secondary victim systems are known as distributed denial-of-service (DDoS) attacks.
Virus
The computer virus is perhaps the earliest form of malicious code to plague security administrators. As with
biological viruses, computer viruses have two main functions— propagation and destruction. A virus is a self-
replicating piece of code that spreads without the consent of a user, but frequently with their assistance (a user
must click on a link or open a file).
Worm
Worms pose a significant risk to network security. They contain the same destructive potential as other malicious
code objects with an added twist—they propagate themselves without requiring any human intervention.
Trojan
Named after the ancient story of the Trojan horse, the Trojan is a software program that appears benevolent but
carries a malicious, behind-the-scenes payload that has the potential to wreak havoc on a system or network. For
example, ransomware often uses a Trojan to infect a target machine and then uses encryption technology to
encrypt documents, spreadsheets, and other files stored on the system with a key known only to the malware
creator.
On-Path Attack
In an on-path attack, attackers place themselves between two devices, often between a web browser and a web
server, to intercept or modify information that is intended for one or both endpoints. On-path attacks are also
known as man-in-the-middle (MITM) attacks.
Side-Channel
A side-channel attack is a passive, noninvasive attack to observe the operation of a device. Methods include
power monitoring, timing, and fault analysis attacks.
Advanced Persistent Threat
Advanced persistent threat (APT) refers to threats that demonstrate an unusually high level of technical and
operational sophistication, spanning months or even years. APT attacks are often conducted by highly organized
groups of attackers.
Insider Threat
Insider threats are threats that arise from individuals who are trusted by the organization. These could be
disgruntled employees or employees involved in espionage. Insider threats are not always willing participants. A
trusted user who falls victim to a scam could be an unwilling insider threat.
Malware
A program that is inserted into a system, usually covertly, with the intent of compromising the confidentiality,
integrity, or availability of the victim’s data, applications or operating system or otherwise annoying or disrupting
the victim.
Ransomware
Malware used for the purpose of facilitating a ransom attack. Ransomware attacks often use cryptography to lock
the files on an affected computer and require the payment of a ransom fee in return for the unlock code.
So far in this chapter, you have explored how a TCP/IP network operates, and you have reviewed examples of how
threat actors can exploit inherent vulnerabilities. The remainder of this unit will discuss the various ways these
network threats can be detected and even prevented.
While there is no single step to protect against all attacks, there are some basic steps that help to protect against
many types of attacks.
Here are some examples of steps that can be taken to protect networks.
Firewalls can prevent many different types of attacks. Network-based firewalls protect entire networks, and
host-based firewalls protect individual systems.
Table 4.3 lists tools used to identify threats that can help to protect against many types of attacks, like viruses, malware, denial of service, spoofing, on-path, and side-channel attacks. From monitoring activity on a single computer (as with a HIDS), to gathering log data (as with security information and event management, or SIEM, solutions), to filtering network traffic with methods such as firewalls, these tools help protect entire networks and individual systems. These tools, which will be covered in more depth, all help to identify potential threats, while anti-malware, firewall, and intrusion prevention system tools also have the added ability to prevent threats.
An intrusion occurs when an attacker can bypass or thwart security mechanisms and gain access to an
organization’s resources. Intrusion detection is a specific form of monitoring of recorded information and real-time
events to detect abnormal activity that might indicate a potential incident or intrusion.
An intrusion detection system (IDS) automates the inspection of logs and real-time system events to detect
intrusion attempts and system failures. An IDS is part of a defense-in-depth security plan. It works with, and
complements, other security mechanisms such as firewalls, but it does not replace them.
IDSs can recognize attacks that come from external connections, such as an attack from the internet, and attacks
that spread internally, such as a malicious worm. Once the IDS detects a suspicious event, it responds by sending
alerts or raising alarms. The primary goal of an IDS is to provide a means for a timely and accurate response to
intrusions.
Intrusion detection and prevention refers to capabilities that are part of isolating and protecting a more secure or
trusted domain or zone from one that is less trusted or less secure. These are natural functions to expect of a
firewall, for example.
IDS types are commonly classified as host-based and network-based. A host-based IDS (HIDS) monitors a single
computer or host. A network-based IDS (NIDS) monitors a network by observing network traffic patterns. A benefit of HIDSs over NIDSs is that HIDSs can detect anomalies on the host system that NIDSs cannot detect. For
example, a HIDS can detect infections where an intruder has infiltrated a system and is controlling it remotely.
HIDSs are more costly to manage than NIDSs because they require administrative attention on each system,
whereas NIDSs usually support centralized administration. A HIDS cannot detect network attacks on other
systems.
A NIDS monitors and evaluates network activity to detect attacks or event anomalies. It cannot monitor the
content of encrypted traffic but can monitor other packet details. A single NIDS can monitor a large network by
using remote sensors to collect data at key network locations that send data to a central management console.
These sensors can monitor traffic at routers, firewalls, network switches that support port mirroring, and other
types of network taps. A NIDS has little negative effect on the overall network performance, and when it is
deployed on a single-purpose system, it doesn’t adversely affect performance on any other computer. A NIDS is
usually able to detect the initiation of an attack or ongoing attacks, but it can’t always provide information about
the success of an attack. It won’t know if an attack affected specific systems, user accounts, files, or applications.
Security management involves the use of tools that collect information about the IT environment from many
disparate sources to better examine the overall security of the organization and streamline security efforts. The
general idea of a SIEM solution is to gather log data from various sources across the enterprise to better
understand potential security concerns and apportion resources accordingly.
SIEM systems can be used along with other components (e.g., defense-in-depth) as part of an overall information
security program.
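As a toy illustration of the SIEM idea, the sketch below aggregates log lines from multiple sources and applies a simple correlation rule; the log format and threshold are invented for this example.

```python
import re
from collections import Counter

# Toy SIEM: aggregate log lines from several sources and flag sources
# with repeated authentication failures. Format/threshold are invented.
logs = [
    "2024-05-01T10:00:01 fw01 DENY tcp 203.0.113.7:52100 -> 10.0.0.5:22",
    "2024-05-01T10:00:02 srv01 auth failure for admin from 203.0.113.7",
    "2024-05-01T10:00:03 srv01 auth failure for admin from 203.0.113.7",
    "2024-05-01T10:00:04 srv02 auth failure for root from 203.0.113.7",
]

failures = Counter(
    m.group(1)
    for line in logs
    if (m := re.search(r"auth failure .* from (\S+)", line))
)

for source, count in failures.items():
    if count >= 3:  # correlation rule: 3+ failures from one source
        print(f"ALERT: {count} authentication failures from {source}")
```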
Preventing Threats
While there is no single step to protect against all threats, there are some basic steps that help reduce the risk of
many types of threats.
Keep systems and applications up to date. Vendors regularly release patches to correct bugs and security
flaws, but these only help when they are applied. Patch management ensures that systems and
applications are kept up to date with relevant patches.
Remove or disable unneeded services and protocols. Imagine a web server running every available service
and protocol. Obviously, it is vulnerable to potential threats on any of these services and protocols.
Use intrusion detection and prevention systems. As discussed, intrusion detection and prevention systems
observe activity, attempt to detect threats and provide alerts. They can often block or stop a threat.
Use up-to-date anti-malware software. We have already covered the various types of malicious code such as
viruses and worms. A primary countermeasure is anti-malware software.
Use firewalls. Firewalls can prevent many different types of threats. Network-based firewalls protect entire
networks, and host-based firewalls protect individual systems. Later in this chapter is a section describing
how firewalls can prevent attacks.
Antivirus
Using antivirus products is strongly encouraged as a security best practice and is a requirement for compliance
with the Payment Card Industry Data Security Standard (PCI DSS). There are several antivirus products available, and many can be deployed as part of an enterprise solution that integrates with several other security products.
Antivirus systems try to identify malware based on the signature of known malware or by detecting abnormal
activity on a system. This identification is done with various types of scanners, pattern recognition, and advanced
machine learning algorithms.
Anti-malware, a term often used synonymously with antivirus, now goes beyond just virus protection, as modern solutions provide a more holistic approach, detecting rootkits, ransomware, and spyware. Malware is often used as an overarching identifier for malicious applications or software, whereas a virus is a more specific type of malicious code requiring human interaction to replicate to additional computer systems.
Many endpoint solutions include several malware protection mechanisms including software firewalls and IDS or
IPS systems.
Scans
Regular vulnerability and port scans are a good way to evaluate the effectiveness of security controls used within
an organization. These scans may reveal areas where patches or security settings are insufficient, where new
vulnerabilities have developed or become exposed, and where security policies are either ineffective or not being
followed. This is important because attackers can exploit any of these vulnerabilities.
Here is an example scan from Zenmap showing open ports on a host.
Figure 4.11: Scans
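To show the underlying idea, here is a minimal TCP connect scan in Python. It is a sketch, not a replacement for a tool like Zenmap/Nmap, and should only be run against hosts you are authorized to scan; the address shown is a placeholder.

```python
import socket

# Minimal TCP "connect" scan: a completed handshake means the port is open.
def scan(host: str, ports: range) -> list[int]:
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(0.5)
            if sock.connect_ex((host, port)) == 0:  # 0 means connection succeeded
                open_ports.append(port)
    return open_ports

print(scan("192.0.2.10", range(20, 1025)))  # placeholder in-scope host
```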
Firewalls
In building construction or vehicle design, a firewall is a specially built physical barrier that prevents the spread of
fire from one area of the structure to another, or from one compartment of a vehicle to another. Early computer
security engineers borrowed that term for the devices and services that isolate network segments from each other
as a security measure. As a result, firewalling refers to the process of designing, using, or operating different
processes in ways that isolate high-risk activities from lower-risk ones.
Firewalls enforce policies by filtering network traffic based on a set of rules. While a firewall should always be
placed at internet gateways, other internal network considerations and conditions determine where a firewall
would be employed, such as network zoning or segregation of different levels of sensitivity.
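To illustrate how rule-based filtering works, here is a toy packet filter in Python. The rule fields and syntax are invented for this example; real firewalls use far richer rule sets.

```python
# Toy packet filter: rules are evaluated top-down and the first match wins,
# mirroring how firewall access control lists are processed.
RULES = [
    {"action": "allow", "dst_port": 443,   "src": "any"},
    {"action": "allow", "dst_port": 80,    "src": "any"},
    {"action": "deny",  "dst_port": "any", "src": "any"},  # default deny
]

def filter_packet(src: str, dst_port: int) -> str:
    for rule in RULES:
        if rule["src"] in ("any", src) and rule["dst_port"] in ("any", dst_port):
            return rule["action"]
    return "deny"

print(filter_packet("203.0.113.7", 443))  # allow
print(filter_packet("203.0.113.7", 23))   # deny -- telnet blocked by default
```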
Firewalls have rapidly evolved to provide enhanced security capabilities. This growth in capabilities can be seen in Figure 4.12, which contrasts a simplified view of traditional and next-generation firewalls. The next-generation firewall integrates a variety of threat management capabilities into a single framework, including proxy services, intrusion prevention services (IPS), and tight integration with the identity and access management (IAM) environment to ensure only authorized users pass traffic across the infrastructure. While firewalls can manage traffic at Layers 2 (MAC addresses), 3 (IP ranges), and 7 (application programming interface [API] and application firewalls), the traditional implementation has been to control traffic at Layer 4.
An intrusion prevention system (IPS) is a special type of active IDS that automatically attempts to detect and
block attacks before they reach target systems. A distinguishing difference between an IDS and an IPS is that the
IPS is placed in line with the traffic as shown in the figure. In other words, all traffic must pass through the IPS and
the IPS can choose which traffic to forward and which to block after analyzing it. This allows the IPS to prevent an
attack from reaching a target. Since IPS systems are most effective at preventing network-based attacks, it is
common to see the IPS function integrated into firewalls. Just like IDS, there are Network-based IPS (NIPS) and
Host-based IPS (HIPS).
Figure 4.13: IPS
Understand Network Security Infrastructure
Objectives
When it comes to data centers, there are two primary options: Organizations can outsource the data center or own the data center. If the data center is owned, it will likely be built on premises. A place, such as a building, basement, or closet, to house the data center is needed, along with power, HVAC, fire suppression, and redundancy.
Power
Data centers and information systems in general consume a tremendous amount of electrical power, which needs
to be delivered both constantly and consistently. Wide fluctuations in the quality of power affect system lifespan,
while disruptions in supply completely stop system operations.
Power at the site is always an integral part of data center operations. Regardless of fuel source, backup
generators must be sized to provide for the critical load (the computing resources) and the supporting
infrastructure. Similarly, battery backups must be properly sized to carry the critical load until generators start and
stabilize. As with data backups, testing is necessary to ensure the failover to alternate power works properly.
Data Center/Closets
The facility wiring infrastructure is integral to overall information system security and reliability. Protecting access
to the physical layer of the network is important in minimizing intentional or unintentional damage. Proper
protection of the physical site must address these sorts of security challenges, which apply to data centers and wiring closets alike.
High-density equipment and equipment within enclosed spaces require adequate cooling and airflow. Well-established standards for the operation of computer equipment exist, and equipment is tested against these standards. For example, the recommended range for optimized maximum uptime and hardware life is from 64° to 81°F (18° to 27°C), and it is recommended that a server rack (a metal cabinet often implemented to house several servers in a vertical, space-saving layout within a server room) have three temperature sensors, positioned at the top, middle, and bottom of the rack, to measure the actual operating temperature of the environment. Proper management of data center temperatures, including cooling, is essential.
Cooling is not the only issue with airflow: Contaminants like dust and noxious fumes require appropriate controls
to minimize their impact on equipment. Monitoring for water or gas leaks, sewer overflow, or HVAC failure should
be integrated into the building control environment, with appropriate alarms to signal potential problems to
organizational staff. Contingency planning to respond to the warnings should prioritize the systems in the
building, so the impact of a major system failure on people, operations, or other infrastructure can be minimized.
Fire Suppression
For server rooms, appropriate fire detection/suppression must be considered based on the size of the room,
typical human occupation, egress routes, and risk of damage to equipment. For example, water used for fire suppression would itself harm servers and other electronic components. Gas-based fire suppression systems are friendlier to electronics but can be toxic to humans.
Now that we have looked at some of the primary components that must be considered when building an on-
premises data center, we should take a deeper dive into some of the components.
First, we consider the data center’s air conditioning requirements. Servers and other equipment generate a lot of
heat that must be handled appropriately. This is not just to make it comfortable when humans are present, but to
ensure the equipment is kept within its operating parameters. When equipment gets too hot, it can lead to quicker
failure or a voided warranty. Most equipment is programmed to automatically shut down when a certain
temperature threshold is met, which helps to protect the equipment, but a system that is shut down is not
available to users. An abnormal system shutdown also can lead to the loss or corruption of data.
Another consideration for the on-premises data center is fire suppression systems. In the United States, most
commercial buildings are required to have sprinkler systems that are activated in a fire. These sprinklers minimize
the amount of damage caused to the building and keep the fire from spreading to adjacent areas, but they can be
detrimental to electronic equipment, as water and electricity don’t mix.
Another hazard is having water overhead in a data center. Eventually, water pipes will fail and may leak on
equipment. This risk can be reduced somewhat by using a dry-pipe system that keeps the water out of the pipes
over the data center. These systems have a valve outside the data center that is only opened when a sensor
indicates a fire is present. Since water is not held in the pipes above the data center, the risk of leaks is reduced.
Redundancy
The concept of redundancy is to design systems with duplicate components so that if a failure were to occur,
there would be a backup. This can apply to the data center as well. Risk assessments pertaining to the data center
should identify when multiple separate utility service entrances are necessary for redundant communication
channels and/or mechanisms.
If the organization requires full redundancy, devices should have two power supplies connected to diverse power
sources. Those power sources should be backed up by batteries and generators. In a high-availability
environment, even generators can be redundant and fed by different fuel types.
Example of Redundancy
Figure 4.14 illustrates how, in addition to keeping redundant backups of information, a redundant source of power, such as an uninterruptible power supply (UPS), is also a best practice. Transfer switches or transformers may also be involved. And if power is interrupted by weather or blackouts, a backup generator is essential. Often there will be two generators connected by two different transfer switches.
generators might be powered by diesel or gasoline, by another fuel such as propane, or even by solar panels. A
hospital or essential government agency might contract with more than one power company and be on two
different grids in case one goes out.
Some organizations seeking to minimize downtime and enhance business continuity and disaster recovery
capabilities will create agreements with other, similar organizations. They agree that if one of the parties
experiences an emergency and cannot operate within their own facility, the other party will share its resources and
let them operate within theirs to maintain critical functions. These agreements may even include competitors,
because their facilities and resources meet the industry’s needs.
For example, Hospital A and Hospital B are competitors in the same city. The hospitals create an agreement with
each other: If something bad happens to Hospital A (e.g., fire, flood, bomb threat, loss of power), Hospital A can
temporarily send personnel and systems to work inside Hospital B to stay in business during the interruption (and
Hospital B can relocate to Hospital A, if Hospital B has a similar problem). The hospitals have decided that they
are not going to compete based on safety and security—they are going to compete on service, price, and customer
loyalty. This way, they protect themselves and the healthcare industry.
These agreements are called joint operating agreements (JOA) or memoranda of understanding (MOU) or
memoranda of agreement (MOA). Sometimes these agreements are mandated by regulatory requirements, or
they might just be part of the administrative safeguards instituted by an entity within industry guidelines.
Cloud
Cloud computing is usually associated with an internet-based set of computing resources, and typically sold as a service provided by a cloud service provider (CSP).
Cloud computing is very similar to the electrical power grid. Power is provisioned in a geographic location and generated by means that are not necessarily obvious to the consumer. But when you want electricity,
it’s available to you via a common standard interface, and you pay only for what you use. Cloud computing is very
similar. It is a very scalable, elastic and easy-to-use “utility” for the provisioning and deployment of information
technology (IT) services.
There are various definitions of cloud computing. A globally accepted definition for the term, drawn from the leading standard, is:
“A model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable
computing resources (such as networks, servers, storage, applications, and services) that can be rapidly
provisioned and released with minimal management effort or service provider interaction.” (NIST SP 800-145)
Image 4.15 depicts cloud computing characteristics, services, and deployment models, all of which will be
covered in this section.
Cloud Redundancy
Many organizations have moved from hard-wired server rooms to operations that are run by cloud-based facilities,
because they provide both security and flexibility. Cloud service providers have different availability zones, so if
one goes down, activities can shift to another. When you use cloud-based services, you don’t have to maintain a
whole on-premises data center with all the required redundancies because the cloud service provider should do
that for you.
Cloud Characteristics
Cloud-based assets include any resources that an organization accesses using cloud computing.
Cloud computing refers to on-demand access to computing resources available from almost anywhere and easily
scalable. Organizations typically lease cloud-based resources from third parties. Among the benefits of cloud
computing are:
Usage. Most commonly, usage is metered and priced according to units (or instances) consumed. Usage
also can be billed back to specific departments or functions.
Reduced cost of ownership. There is no need to purchase assets to support the function, no loss of asset
value over time, and a reduction of other related costs of maintenance and support.
Reduced energy and cooling costs. In addition to savings, there is the “green IT” environment effect with
optimum use of IT resources and systems.
Scale up. Allows an enterprise to scale up new software or data-based services/solutions quickly and
without having to install massive hardware locally.
Service Models
Some cloud-based services only provide data storage and access. When storing data in the cloud, organizations
must ensure that security controls are in place with the cloud services provider to prevent unauthorized access to
the data.
There are varying levels of responsibility for assets depending on the service model. This includes maintaining the
assets, ensuring they remain functional, and keeping the systems and applications up to date with current
patches. In some cases, the cloud service provider is responsible for these steps. In other cases, the consumer is
responsible for these steps.
Types of cloud computing service models include Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS).
Platform as a Service (PaaS): A cloud provides an environment for customers to use to build and operate their
own software.
PaaS is a way for customers to rent hardware, operating systems, storage and network capacity over the internet
from a cloud service provider. The service delivery model allows customers to rent virtualized servers and
associated services for running existing applications or developing and testing new ones.
The consumer does not manage or control the underlying cloud infrastructure, including network, servers,
operating systems, or storage, but has control over the deployed applications and possibly application-hosting
environment configurations.
A PaaS cloud provides a toolkit for conveniently developing, deploying, and administering application software
that is structured to support large numbers of consumers, process large quantities of data, and potentially be
accessed from any point on the internet.
PaaS clouds will typically provide a set of software building blocks and a set of development tools, such as
programming languages and supporting run-time environments that facilitate the construction of high-quality,
scalable applications.
Additionally, PaaS clouds will typically provide tools that assist with the deployment of new applications. Often,
deploying a new software application in a PaaS cloud is not much more difficult than uploading a file to a web
server. PaaS clouds generally will provide and maintain the computing resources (e.g., processing, storage, and
networking) that consumer applications need to operate.
PaaS clouds provide many benefits for developers, including that the operating system can be changed and
upgraded frequently, along with associated features and system services.
Infrastructure as a Service (IaaS): A cloud provides network access to traditional computing resources such as
processing power and storage.
IaaS models provide basic computing resources to consumers. This includes servers, storage, and in some cases,
networking resources. Consumers install operating systems and applications and perform all required
maintenance on the operating systems and applications.
Although the consumer has use of the related equipment, the cloud service provider retains ownership and is
ultimately responsible for hosting, running, and maintenance of the hardware. IaaS is also referred to as hardware
as a service by some customers and providers.
IaaS has several benefits for organizations, which include but are not limited to:
Ability to scale up and down infrastructure services based on actual usage. This is particularly useful and
beneficial where there are significant spikes and dips within the infrastructure usage curve.
Retaining system control at the operating system level.
Deployment Models
There are four cloud deployment models. Your cloud deployment model affects the breakdown of responsibilities
for your cloud-based assets. The four cloud models available are public, private, hybrid, and community.
Public
Public clouds are commonly referred to as services that are commercially available for the public to purchase. It is
easy to get access to a public cloud. There is no real mechanism, other than applying for and paying for the cloud
service. It is open to the public and is therefore a shared resource that many people use as part of a resource
pool.
A public cloud deployment model includes assets available for any consumers to rent or lease and is hosted by an
external cloud service provider (CSP). Service level agreements can be effective at ensuring the CSP provides
cloud-based services at a level acceptable to the organization.
Private Cloud
Private clouds begin with the same technical concept as public clouds, except that instead of being shared with
the public, they are generally developed and deployed for a private organization that builds its own cloud.
Organizations can create and host private clouds using their own resources. Therefore, this deployment model
includes cloud-based assets for a single organization. As such, the organization is responsible for all
maintenance. However, an organization can also rent resources from a third party and split maintenance
requirements based on the service model (SaaS, PaaS, or IaaS).
Private clouds provide organizations and their departments private access to the computing, storage, networking
and software assets that are available in the private cloud.
Some learners find it difficult to distinguish between public and private clouds. In short, the difference between
public and private cloud is based on who is allowed to consume the cloud resources and if the resources are
shared. Public clouds make resources available to several organizations simultaneously to create cost
efficiencies through economies of scale. This means that different organizations will share the same underlying
resources for physical hardware such as data storage devices, memory, and CPU. Conversely, with private clouds, the expense is typically higher per unit of compute, but the abovementioned resource examples would not be
shared across organizations (i.e., resources are private or per organization).
Hybrid Cloud
A hybrid cloud deployment model is created by combining two forms of cloud computing deployment models,
typically a public and private cloud. Hybrid cloud computing is gaining popularity with organizations by providing
them with the ability to retain control of their IT environments, conveniently allowing them to use public cloud
service to fulfill non-mission-critical workloads, and taking advantage of flexibility, scalability and cost savings.
Important drivers or benefits of hybrid cloud deployments include:
Retaining ownership and oversight of critical tasks and processes related to technology.
Reusing the organization’s previous investments in technology.
Retaining control over most critical business components and systems.
Using a cost-effective means of fulfilling noncritical business functions via public cloud components.
Community Cloud
Community clouds can be either public or private. What makes them unique is that they are generally developed
for a particular community. An example could be a public community cloud focused primarily on organic food, or
maybe a community cloud focused specifically on financial services. The idea behind the community cloud is
that people of like minds or similar interests can get together, share IT capabilities and services, and use them in a
way that is beneficial for the interests that they share.
Managed Service Provider (MSP)
A managed service provider (MSP) is a company that manages information technology assets for another
company. Small- and medium-sized businesses commonly outsource part or all of their information technology
functions to an MSP to manage day-to-day operations or to provide expertise in areas the company does not have.
Organizations also may use an MSP to provide network and security monitoring and patching services.
Today, many MSPs offer cloud-based services, augmenting SaaS solutions with active incident investigation and
response activities. One such example is a managed detection and response (MDR) service, where a vendor
monitors firewalls and other security tools to provide expertise in triaging events.
Service-Level Agreement (SLA)
The SLA is an agreement between a cloud service provider and a cloud service customer, based on a taxonomy of
cloud computing–specific terms, that sets the quality of the cloud services delivered. It characterizes that quality
in terms of a set of measurable properties specific to cloud computing (business and technical) and a given set of
cloud computing roles (cloud service customer, cloud service provider, and related sub-roles).
Don’t underestimate or downplay the importance of a service level agreement. In it, the minimum level of service,
availability, security, controls, processes, communications, support, and other crucial business elements are
stated and agreed to by both parties.
The purpose of an SLA is to document specific parameters, minimum service levels, and remedies for any failure
to meet the specified requirements.
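As a concrete example of a "measurable property," the Python sketch below converts an agreed uptime percentage into a monthly downtime budget and checks a measured value against it; the 99.9 percent figure is an assumed example, not a number from this course:

    # Hedged sketch: evaluating one measurable SLA property (availability).
    MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

    def allowed_downtime_minutes(sla_uptime_pct: float) -> float:
        """Downtime budget implied by the agreed uptime percentage."""
        return MINUTES_PER_MONTH * (1 - sla_uptime_pct / 100)

    def meets_sla(measured_uptime_pct: float, sla_uptime_pct: float) -> bool:
        return measured_uptime_pct >= sla_uptime_pct

    print(round(allowed_downtime_minutes(99.9), 1))  # 43.2 minutes per month
    print(meets_sla(99.95, 99.9))                    # True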
Network Design
The objective of network design is to satisfy data communication requirements while achieving efficient overall
performance. Several elements are considered when planning for security in a network, including the following.
Network Segmentation
Network segmentation involves controlling traffic among networked devices. Complete or physical network
segmentation occurs when a network is isolated from all outside communications, so transactions can only occur
between devices within the segmented network.
A DMZ is a network area that is designed to be accessed by outside visitors but is still isolated from the
organization’s private network. The DMZ is often the host of public web, email, file, and other resource servers.
VLANs are created by switches to logically segment a network without altering its physical topology.
A virtual private network (VPN) is a communication tunnel that provides point-to-point transmission of both
authentication and data traffic over an untrusted network.
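These segmentation concepts can be pictured as an ordered set of filtering rules. The Python sketch below is a simplified assumption of how a firewall between segments might evaluate traffic; the segment names, port numbers, and first-match semantics are illustrative, not the syntax of any real product:

    # Hedged sketch: an access control list between network segments.
    # Segment names, ports, and rules are hypothetical examples.
    RULES = [
        # (source segment, destination segment, destination port, action)
        ("internet", "dmz",      443,  "allow"),  # public HTTPS to DMZ web server
        ("dmz",      "internal", 5432, "allow"),  # web tier to internal database
        ("internet", "internal", None, "deny"),   # no direct path from outside
    ]

    def evaluate(src: str, dst: str, port: int) -> str:
        """First matching rule wins; default-deny if nothing matches."""
        for rule_src, rule_dst, rule_port, action in RULES:
            if rule_src == src and rule_dst == dst and rule_port in (None, port):
                return action
        return "deny"

    print(evaluate("internet", "dmz", 443))       # allow
    print(evaluate("internet", "internal", 443))  # deny (default)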
Defense in Depth
Defense in depth uses multiple types of access controls in literal or theoretical layers to help an organization
avoid a monolithic security stance. A related concept, network access control (NAC), is the practice of controlling
access to an environment through strict adherence to and implementation of security policy; it is covered later in
this section.
Defense in depth uses a layered approach when designing the security posture of an organization. Think about a
castle that holds the crown jewels. The jewels will be placed in a vaulted chamber in a central location guarded by
security personnel. The castle is built around the vault with additional layers of security—guards, walls, a moat.
The same approach is true when designing the logical security of a facility or system. Using layers of security will
deter many attackers and encourage them to focus on other, easier targets.
Defense in depth provides a starting point for considering all types of controls—administrative, technological, and
physical—that empower insiders and operators to work together to protect their organization and its systems.
The following examples further explain defense in depth (a brief sketch of the layered idea follows the list):
Data. Controls that protect data with technologies such as encryption, data leak prevention, identity and
access management, and data controls.
Application. Controls that protect an application with technologies such as data leak prevention, application
firewalls, and database monitors.
Host. Every control that is placed at the endpoint level, such as antivirus, endpoint firewall, configuration,
and patch management.
Internal network. Controls that are in place to protect uncontrolled data flow and user access across the
organizational network. Relevant technologies include intrusion detection systems, intrusion prevention
systems, internal firewalls, and network access controls.
Perimeter. Controls that protect against unauthorized access to the network. This level includes the use of
technologies such as gateway firewalls, honeypots, malware analysis, and secure demilitarized zones
(DMZs).
Physical. Controls that provide a physical barrier, such as locks, walls or access control.
Policies, procedures, and awareness. Administrative controls that reduce insider threats (intentional and
unintentional) and identify risks as soon as they appear.
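As a minimal sketch of the idea behind these layers (the layer names mirror the list above; the pass/fail logic is an invented simplification), an attack reaches the data only if every layer's control fails:

    # Hedged sketch: with defense in depth, a breach succeeds only if every
    # layer fails; a single failed control does not equal compromise.
    LAYERS = [
        "policies", "physical", "perimeter", "internal network",
        "host", "application", "data",
    ]

    def breach_succeeds(failed_controls: set) -> bool:
        """True only when controls at every layer have failed."""
        return all(layer in failed_controls for layer in LAYERS)

    print(breach_succeeds({"perimeter"}))  # False: inner layers held
    print(breach_succeeds(set(LAYERS)))    # True: all layers defeated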
Zero Trust
Zero trust networks are often microsegmented networks, with firewalls at nearly every connecting point. Zero
trust encapsulates information assets, the services that apply to them, and their security properties. This concept
recognizes that once inside a trust-but-verify environment, a user may have unlimited capabilities to roam around,
identify assets and systems, and potentially find exploitable vulnerabilities. Placing a greater number of firewalls
or other security boundary control devices throughout the network increases the number of opportunities to
detect a troublemaker before harm is done. Many enterprise architectures are pushing this to the extreme of
microsegmenting their internal networks, which enforces frequent re-authentication of a user ID, as depicted in
Figure 4.17.
Figure 4.17: Zero Trust
Consider a rock music concert. With traditional perimeter controls, such as firewalls, you show your ticket at the
gate and then have free access to the venue, including backstage where the rock stars are. In a zero-trust environment,
additional checkpoints are added. Your identity (ticket) is validated to access the floor level seats, and again to
access the backstage area. Your credentials must be valid at all three levels to meet any rock stars.
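The analogy can be sketched as repeated validation at each boundary. In the hypothetical Python sketch below, the same credential must validate at every checkpoint on the path, not just at the outer gate:

    # Hedged sketch of the concert analogy: zero trust re-validates the
    # credential at every checkpoint instead of once at the outer gate.
    CHECKPOINTS = ["gate", "floor seats", "backstage"]

    def can_reach(ticket_valid_at: set, destination: str) -> bool:
        """The ticket must validate at each checkpoint up to the goal."""
        path = CHECKPOINTS[: CHECKPOINTS.index(destination) + 1]
        return all(checkpoint in ticket_valid_at for checkpoint in path)

    print(can_reach({"gate"}, "backstage"))                  # False
    print(can_reach({"gate", "floor seats", "backstage"},
                    "backstage"))                            # True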
Zero trust is an evolving design approach which recognizes that even the most robust access control systems
have weaknesses. It adds defenses at the user, asset and data levels rather than relying on perimeter defense. In
the extreme, it insists that every process or action a user attempts to take must be authenticated and authorized;
the window of trust becomes vanishingly small.
While microsegmentation adds internal perimeters, zero trust focuses on assets, or data, rather than the
perimeter. Zero trust builds more effective gates to protect assets directly rather than building additional or higher
walls.
Network Access Control (NAC)
An organization’s network is perhaps one of its most critical assets. As such, it is vital that we both know and
control access to it, both from insiders (e.g., employees, contractors) and outsiders (e.g., customers, corporate
partners, vendors). We need to be able to see who and what is attempting to make a network connection.
At one time, network access was limited to internal devices. Gradually, that was extended to remote connections,
although initially those were the exceptions rather than the norm. This started to change with the concepts of
Bring Your Own Device (BYOD) and the Internet of Things (IoT).
Considering IoT for a moment, it’s important to understand the range of devices that might be found within an
organization. They include heating, ventilation, and air conditioning (HVAC) systems that monitor the ambient
temperature and adjust heating or cooling levels automatically. Other IoT devices may include air monitoring
systems, security systems, sensors, cameras, and vending and coffee machines. And businesses with remote
workers need to think even more broadly about connected devices—baby monitors, sprinkler systems, home
personal assistants, thermostats, and more. These will be discussed more later in this section.
How can a NAC solution help? It starts with a policy. Once an organization establishes access control policies and
associated security policies, they can be enforced via the NAC device(s).
A NAC device will provide necessary network visibility for access security and may be used for incident
response. Aside from identifying connections, NAC tools should be able to isolate noncompliant devices within a
quarantined network and provide a mechanism to “fix” the noncompliant elements, such as turning on endpoint
protection. In short, the goal is to ensure that all devices wishing to join the network do so only when they comply
with the requirements laid out in the organization policies. These policies should not only apply to internal users,
but also any temporary users such as guests or contractors, and any related devices they may bring into the
organization.
Let’s consider some possible use cases for NAC deployment:
Medical devices
IoT devices
BYOD/mobile devices (laptops, tablets, smartphones)
Guest users and contractors
As has been established, it is critically important that all mobile devices, regardless of their owner, go through an
onboarding process, ideally each time a network connection is made, and that the device is identified and
interrogated to ensure the organization’s policies are being met.
In its simplest form, Network Access Control is a way to prevent unwanted devices from connecting to a network.
Some NAC systems allow for the installation of required software on the end user’s device to enforce device
compliance to policy prior to connecting.
A high-level example of a NAC system is hotel internet access. Typically, a user connecting to the hotel network is
required to acknowledge the acceptable use policy before being allowed to access the internet. After the user
clicks the acknowledge button, the device is connected to a network that enables internet access. Some hotels
add an additional layer requiring the guest to enter a special password or a room number and guest name before
access is granted. This prevents abuse by someone who is not a hotel guest and may even help to track network
abuse to a particular user.
A slightly more complex scenario is a business that separates employee BYOD devices from corporate-owned
devices on the network. If the BYOD device is preapproved and allowed to connect to the corporate network, the
NAC system can validate the device using a hardware address or installed software, and even check to make sure
the antivirus software and operating system software are up to date before connecting it to the network.
Alternatively, if it is a personal device not allowed to connect to the corporate network, it can be redirected to the
guest network for internet access without access to internal corporate resources.
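The scenario above maps naturally onto a posture check. In the Python sketch below, the device attributes, the approved hardware-address list, and the three outcomes are all hypothetical examples of the logic a NAC system might apply:

    # Hedged sketch of a NAC admission decision for the BYOD scenario.
    # The approved-address list and device fields are hypothetical.
    APPROVED_MACS = {"00:1a:2b:3c:4d:5e"}

    def admit(device: dict) -> str:
        if device["mac"] not in APPROVED_MACS:
            return "guest network"       # personal device: internet only
        if device["antivirus_current"] and device["os_patched"]:
            return "corporate network"   # compliant, preapproved device
        return "quarantine network"      # known device, posture not met

    laptop = {"mac": "00:1a:2b:3c:4d:5e",
              "antivirus_current": True, "os_patched": False}
    print(admit(laptop))  # quarantine network, until the OS is patched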
Network segmentation is also an effective way to achieve defense in depth for distributed or multi-tiered
applications. The use of a demilitarized zone (DMZ), for example, is a common practice in security architecture.
With a DMZ, host systems that are accessible through the firewall are physically separated from the internal
network by means of secured switches or by using an additional firewall to control traffic between the web server
and the internal network. Application DMZs (or semi-trusted networks) are frequently used today to limit access
to application servers to those networks or systems that have a legitimate need to connect. A web front-end
server might be in the DMZ, but it might retrieve data from a database server that is on the other side of the
firewall.
For example, you may have a network where you manage clients’ personal information. Even if the data is
encrypted or otherwise obfuscated by cryptography, you need to make sure this PII network is completely
segregated from the rest of the network, using secure switches that only authorized individuals can access. Only
authorized personnel should control the firewall settings and the traffic between the web server and the internal network.
For example, in a hospital or a doctor’s office, there would be a segregated network for patient information and
billing, with the electronic medical records on the other side. If a web-based application is used for medical record
services, it would sit in a demilitarized zone or segmented area, and perhaps, even behind the firewall, on its own
dedicated server to keep critical information protected and segregated.
It is worth noting at this point that, while this course will not explore specifics, some networks use a web
application firewall (WAF) rather than a DMZ network. The WAF has an internal and an external connection like a
traditional firewall, with the external traffic being filtered by the traditional or next generation firewall first. It
monitors all traffic, encrypted or not, from the outside for malicious behavior before passing commands to a web
server that may be internal to the network.
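As a highly simplified sketch of that idea (the two patterns below are toy indicators, nothing like a production rule set), a WAF inspects each request for malicious content before passing it to the web server:

    # Hedged toy sketch of WAF filtering. Real WAFs use large, regularly
    # updated rule sets; these two patterns are illustrative only.
    import re

    BLOCK_PATTERNS = [
        re.compile(r"(?i)<script"),         # naive XSS indicator
        re.compile(r"(?i)union\s+select"),  # naive SQL injection indicator
    ]

    def inspect(request_body: str) -> str:
        for pattern in BLOCK_PATTERNS:
            if pattern.search(request_body):
                return "blocked"
        return "forwarded to web server"

    print(inspect("name=alice"))                             # forwarded
    print(inspect("q=1 UNION SELECT password FROM users"))   # blocked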
An embedded system is a computer implemented as part of a larger system. The embedded system is typically
designed around a limited set of specific functions in relation to the larger product of which it is a component.
Examples of embedded systems include network-attached printers, smart TVs, HVAC controls, smart appliances,
smart thermostats, and medical devices.
Network-enabled devices are any type of portable or nonportable device that has native network capabilities. This
generally assumes the network in question is a wireless type of network, typically provided by a mobile
telecommunications company. Network-enabled devices include smartphones, mobile phones, tablets, smart TVs
or streaming media players (such as a Roku Player, Amazon Fire TV, or Google Android TV/Chromecast), network-
attached printers, game systems, and much more.
The Internet of Things (IoT) is the collection of devices that can communicate over the internet with one another
or with a control console to affect and monitor the real world. IoT devices might be labeled as smart devices or
smart-home equipment. Many of the ideas of industrial environmental control found in office buildings are finding
their way into more consumer-available solutions for small offices or personal homes.
Embedded systems and network-enabled devices that communicate with the internet are considered IoT devices
and need special attention to ensure that communication is not used in a malicious manner. Because an
embedded system is often in control of a mechanism in the physical world, a security breach could cause harm to
people and property. Since many of these devices have multiple access routes, such as ethernet, wireless,
Bluetooth, etc., special care should be taken to isolate them from other devices on the network. You can impose
logical network segmentation with switches using VLANs, or through other traffic-control means, including MAC
addresses, IP addresses, physical ports, protocols, or application filtering, routing, and access control
management. Network segmentation can be used to isolate IoT environments.
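One way to picture this logical segmentation is a switch assigning devices to VLANs by type; the VLAN IDs and the device-type mapping in the Python sketch below are invented for illustration:

    # Hedged sketch: VLAN-based isolation of IoT and embedded devices.
    # VLAN numbers and the type-to-VLAN mapping are hypothetical.
    VLAN_BY_TYPE = {
        "workstation": 10,
        "server": 20,
        "iot": 30,        # cameras, HVAC controls, smart appliances
        "embedded": 30,   # grouped with IoT, away from corporate assets
    }

    def assign_vlan(device_type: str) -> int:
        # Unknown device types land in the most restricted segment.
        return VLAN_BY_TYPE.get(device_type, 30)

    def same_segment(a: str, b: str) -> bool:
        return assign_vlan(a) == assign_vlan(b)

    print(same_segment("iot", "server"))    # False: must cross a firewall
    print(same_segment("iot", "embedded"))  # True: shared restricted segment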
The characteristics that make embedded systems operate efficiently are also a security risk. Embedded systems
are often used to control something physical, such as a valve for water, steam, or oil. These devices have a limited
instruction set and are often hardcoded or permanently written to a memory chip. For ease of operating the
mechanical parts, the embedded system is often connected to a corporate network and may operate using
the TCP/IP protocol—yes, the same protocol that runs all over the internet. Therefore, it is feasible for anyone
anywhere on the internet to control the opening and closing of a valve when the networks are fully connected. This
is the primary reason for segmentation of these systems on a network. If these are segmented properly, a
compromised corporate network will not be able to access the physical controls on the embedded systems.
Another issue with embedded systems, which also applies to IoT devices, is the general lack of system updates
when a new vulnerability is found. In the case of most embedded systems with the programming directly on the
chips, physical replacement of the chip would be required to patch the vulnerability. For many systems, it may not
be cost-effective to have someone visit each one to replace a chip, or manually connect to the chip to reprogram
it.
We buy all these internet-connected things because of the convenience. Cameras, light bulbs, speakers,
refrigerators—they all bring convenience to our lives, but they also introduce risk. While the reputable mainstream
brands will likely provide updates to their devices when a new vulnerability is discovered, many of the smaller
companies simply don’t plan to do that as they seek to control the costs of a device. These devices, when
connected to a corporate network, can be an easy internet-connected doorway for a cybercriminal to access a
corporate network. If these devices are properly segmented, or separated, on the network from corporate servers
and other corporate networking, a compromise on an IoT device or a compromised embedded system will not be
able to access those corporate data and systems. The figure below depicts one example of segmenting IoT
devices from other network infrastructure.
Figure 4.18: Segmentation for Embedded Systems and IoT
Microsegmentation
The toolsets of current adversaries are polymorphic in nature and allow threats to bypass static security controls.
Modern cyberattacks take advantage of traditional security models to move easily between systems within a data
center.
Microsegmentation aids in protecting against these threats. A fundamental design requirement of
microsegmentation is to understand the protection requirements for traffic within a data center as well as for
traffic to and from the internet.
When organizations avoid infrastructure-centric design paradigms, they are likely to become more efficient at
service delivery in the data center and more adept at detecting and preventing advanced persistent threats.
Some key points about microsegmentation follow; a rule-evaluation sketch appears after the list:
Microsegmentation allows for extremely granular restrictions within the IT environment, to the point where
rules can be applied to individual machines and/or users, and these rules can be as detailed and complex
as desired. For instance, it can limit which IP addresses can communicate to a given machine, at which
time of day, with which credentials, and which services those connections can use.
Microsegmentation uses logical rules, not physical rules, and does not require additional hardware or
manual interaction with the device (that is, the administrator can apply the rules to various machines
without having to physically touch each device or the cables connecting it to the networked environment).
Microsegmentation is the ultimate end state of the defense-in-depth philosophy; no single point of access
within the IT environment can lead to broader compromise.
Microsegmentation is crucial in shared environments, such as the cloud, where more than one customer’s
data and functionality might reside on the same device(s), and where third-party personnel
(administrators/technicians who work for the cloud provider, not the customer) might have physical access
to the devices.
Microsegmentation allows the organization to limit which business functions, units, offices, or departments
can communicate with others, to enforce the concept of least privilege. For instance, the Human Resources
office probably has employee data that no other business unit should have access to, such as employee
home address, salary, and medical records. Microsegmentation, like VLANs, can make HR its own distinct
IT enclave, so that sensitive data is not available to other business units, thus reducing the risk of exposure.
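To make that granularity concrete, here is a minimal Python sketch of a single microsegmentation rule; every field value (source address, time window, credential, service) is a hypothetical example:

    # Hedged sketch: one microsegmentation rule with the granularity
    # described above. All field values are invented examples.
    from datetime import time

    RULE = {
        "source_ip": "10.0.5.12",
        "window": (time(8, 0), time(18, 0)),  # business hours only
        "credential": "svc-reporting",
        "service": "https",
    }

    def permitted(src_ip, now, credential, service) -> bool:
        start, end = RULE["window"]
        return (src_ip == RULE["source_ip"]
                and start <= now <= end
                and credential == RULE["credential"]
                and service == RULE["service"])

    print(permitted("10.0.5.12", time(9, 30), "svc-reporting", "https"))  # True
    print(permitted("10.0.5.12", time(22, 0), "svc-reporting", "https"))  # False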
Virtual Local Area Network (VLAN)
Since VLANs act as discrete networks, communications between VLANs must be enabled. Broadcast traffic is
limited to the VLAN, reducing congestion and reducing the effectiveness of some attacks. Administration of the
environment is simplified, as the VLANs can be reconfigured when individuals change their physical location or
need access to different services. VLANs can be configured based on switch port, IP subnet, MAC address, and
protocols.
VLANs do not guarantee a network’s security. At first glance, it may seem that traffic cannot be intercepted
because communication within a VLAN is restricted to member devices. However, there are attacks that allow a
malicious user to see traffic from other VLANs (so-called VLAN hopping). The VLAN technology is only one tool
that can improve the overall security of the network environment.
There are a few common uses of VLANs in corporate networks. The first is to separate Voice over IP (VoIP)
telephones from the corporate network. This is most often done to more effectively manage the network traffic
generated by voice communications by isolating it from the rest of the network.
Another common use of VLANs in a corporate network is to separate the data center from all other network
traffic. This makes it easier to keep the server-to-server traffic contained to the data center network while allowing
certain traffic from workstations or the web to access the servers. As briefly discussed earlier, VLANs can also be
used to segment networks. For example, a VLAN can separate the payroll workstations from the rest of the
workstations in the network. Routing rules can also be used to only allow devices within this Payroll VLAN to
access the servers containing payroll information.
Earlier, we also discussed Network Access Control (NAC). These systems use VLANs to control whether devices
connect to the corporate network or to a guest network. Even though a wireless access controller may attach to a
single port on a physical network switch, the VLAN associated with the device connection on the wireless access
controller determines the VLAN that the device operates on and to which networks it is allowed to connect.
Finally, in large corporate networks, VLANs can be used to limit the amount of broadcast traffic within a network.
This is most common in networks of more than 1,000 devices and may be separated by department, location or
building, or any other criteria as needed.
The most important thing to remember is that although VLANs are logically separated, access between them can
be either permitted or denied by configuration.
Virtual Private Network (VPN)
A virtual private network (VPN) is not necessarily an encrypted tunnel. It is simply a point-to-point connection
between two hosts that allows them to communicate. Secure communications can, of course, be provided by the
VPN, but only if the security protocols have been selected and correctly configured to provide a trusted path over
an untrusted network, such as the internet. Remote users employ VPNs to access their organization’s network,
and depending on the VPN’s implementation, they may have most of the same resources available to them as if
they were physically at the office. As an alternative to expensive dedicated point-to-point connections,
organizations use gateway-to-gateway VPNs to securely transmit information over the internet between sites or
even with business partners.
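The point that a VPN is not inherently encrypted can be sketched as follows. The Python example below uses the third-party cryptography package's Fernet cipher purely as a stand-in for a properly configured security protocol such as IPsec or TLS; it is not how real VPNs are implemented:

    # Hedged sketch: a tunnel is just encapsulation; confidentiality exists
    # only if an encryption layer is configured. Requires the third-party
    # "cryptography" package; Fernet stands in for a real VPN protocol.
    from cryptography.fernet import Fernet

    def encapsulate(payload: bytes, key: bytes = None) -> bytes:
        """Wrap a payload for point-to-point delivery; encrypt only if a
        key was configured."""
        if key is not None:
            payload = Fernet(key).encrypt(payload)
        return b"TUNNEL:" + payload

    key = Fernet.generate_key()
    plain = encapsulate(b"payroll data")        # tunnel without encryption
    secure = encapsulate(b"payroll data", key)  # tunnel with encryption

    print(b"payroll" in plain)   # True: the bare tunnel exposes the data
    print(b"payroll" in secure)  # False: opaque without the key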
Chapter Summary
This chapter covered computer networking and securing the network. A network is simply two or more computers
linked together to share data, information, or resources. There are many types of networks, such as LAN, WAN,
WLAN, and VPN. Some of the devices found on a network include hubs, switches, routers, firewalls, servers, and
endpoints (e.g., desktop computers, laptops, tablets, mobile phones, VoIP phones). Other network terms include
ports, protocols, Ethernet, Wi-Fi, IP address, and MAC address.
The chapter also reviewed two network models, OSI and TCP/IP, in figures 4.4 and 4.5. The OSI model has seven
layers and the TCP/IP four. They both utilize binary code from the physical or network interface layer, where the
cables or Wi-Fi connect, to the Application Layer, where users interact with the data. The data traverses the
network as packets, with headers or footers being added and removed accordingly as they get passed from layer
to layer. This helps route the data and ensures packets are not lost and remain together. IPv4 is slowly being
replaced by IPv6 to improve security, improve quality of service, and support more devices.
Unsurprisingly, the chapter points out that Wi-Fi has replaced many of our wired networks, and with its ease of
use, it also brings security issues. Securing Wi-Fi is important.
Attacks on networks such as DoS/DDoS, fragment, oversized packet, spoofing, and man-in-the-middle attacks
were introduced. Also discussed were the ports and protocols that connect networks and network services, from
physical ports (e.g., LAN ports) that connect the wires, to logical ports (e.g., 80 or 443) that connect the
protocols/services.
Some possible threats to networks were examined, including spoofing, DoS/DDoS, virus, worm, Trojan, on-path
(man-in-the-middle) attack, and side-channel attacks. This chapter also explored how to identify threats by using
intrusion detection systems, network intrusion detection systems, host-based intrusion detection systems, or
security information and event management, as well as how to prevent threats using antivirus, scans, firewalls, or
the systems mentioned above.
This chapter discussed on-premises data centers and their requirements (e.g., power, heating, ventilation and air
conditioning, fire suppression, redundancy, and memorandums of understanding/memorandums of agreement),
as well as important aspects and characteristics of the cloud, including service models, Software as a Service,
Infrastructure as a Service, and Platform as a Service, and public, private, community, and hybrid deployment
models. The important roles of a managed service provider and service level agreements were also discussed.
Terminology for network design was covered, including network segmentation such as microsegmentation,
demilitarized zones (DMZs), virtual local area networks (VLAN), virtual private networks (VPN), defense in depth,
zero trust, and network access control.