Management Development Program on Network Administration Skills
Regional Institute for Co-operative Management
Jelam J Davda
COMPUTER NETWORK
A computer network, often simply referred to as a network, is a group of computers and devices
interconnected by communications channels that facilitate communications among users and allow
users to share resources. Networks may be classified according to a wide variety of characteristics.
Introduction
A computer network allows sharing of resources and information among interconnected devices.
In the 1960s, the Advanced Research Projects Agency (ARPA) started funding the design of the
Advanced Research Projects Agency Network (ARPANET) for the United States Department of
Defense. It was the first computer network in the world.[1] Development of the network began in
1969, based on designs developed during the 1960s.
Network classification
The following list presents categories used for classifying networks.
Connection method
Computer networks can be classified according to the hardware and software technology that is
used to interconnect the individual devices in the network, such as optical fiber, Ethernet,
wireless LAN, HomePNA, power line communication or G.hn.
Ethernet uses physical wiring to connect devices. Frequently deployed devices include hubs,
switches, bridges, or routers. Wireless LAN technology is designed to connect devices without
wiring. These devices use radio waves or infrared signals as a transmission medium. ITU-T G.hn
technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a
high-speed (up to 1 Gigabit/s) local area network.
Wired technologies
Twisted pair wire is the most widely used medium for telecommunication. Twisted-pair
wires are ordinary telephone wires which consist of two insulated copper wires twisted
into pairs and are used for both voice and data transmission. The use of two wires twisted
together helps to reduce crosstalk and electromagnetic induction. The transmission speed
ranges from 2 million bits per second to 100 million bits per second.
Coaxial cable is widely used for cable television systems, office buildings, and other
worksites for local area networks. The cables consist of copper or aluminum wire
wrapped with insulating layer typically of a flexible material with a high dielectric
constant, all of which are surrounded by a conductive layer. The layers of insulation help
minimize interference and distortion. Transmission speeds range from 200 million to more
than 500 million bits per second.
Optical fiber cable consists of one or more filaments of glass fiber wrapped in protective
layers. It transmits light which can travel over extended distances. Fiber-optic cables are
not affected by electromagnetic radiation. Transmission speed may reach trillions of bits
per second. The transmission speed of fiber optics is hundreds of times faster than that of
coaxial cable and thousands of times faster than that of twisted-pair wire.
Wireless technologies
Cellular and PCS systems – Use several radio communications technologies. The systems
are divided into different geographic areas. Each area has a low-power transmitter or radio
relay antenna device to relay calls from one area to the next area.
Wireless LANs – Wireless local area networks use a high-frequency radio technology
similar to digital cellular. Wireless LANs use spread spectrum technology to enable
communication between multiple devices in a limited area. An example of open-standards
wireless radio-wave technology is IEEE 802.11 (Wi-Fi).
Infrared communication – Can transmit signals between devices over small distances of not
more than 10 meters, peer to peer (face to face), with nothing in the line of transmission.
Scale
Networks are often classified as local area network (LAN), wide area network (WAN),
metropolitan area network (MAN), personal area network (PAN), virtual private network (VPN),
campus area network (CAN), storage area network (SAN), and others, depending on their scale,
scope, and purpose. Usage, trust level, and access rights often differ between these types of
networks; for example, LANs tend to be designed for internal use by an organization's internal
systems and employees in individual physical locations, such as a building, while WANs may
connect physically separate parts of an organization and may include connections to third parties.
Computer networks may be classified according to the functional relationships which exist
among the elements of the network, e.g., active networking, client–server and peer-to-peer
(workgroup) architecture.
Network topology
Computer networks may be classified according to the network topology upon which the
network is based, such as bus network, star network, ring network, or mesh network. Network
topology is the arrangement by which devices in the network relate logically to one another,
independent of their physical placement. Even if networked computers are physically placed in a
linear arrangement, if they are connected to a hub the network has a star topology, rather than a
bus topology. In this regard the visual and operational characteristics of a network are distinct.
Networks may also be classified by the method used to convey the data; these include digital
and analog networks.
[Figure: A typical library network, in a branching tree topology with controlled access to resources.]
All interconnected devices must understand the network layer (layer 3), because they are
handling multiple subnets (shown as different colors in the figure). The devices inside the
library, which have only 10/100 Mbit/s Ethernet connections to the user device and a Gigabit
Ethernet connection to the central router, could be called "layer 3 switches" because they only
have Ethernet interfaces and must understand IP. It would be more correct to call them access
routers, where the router at the top is a distribution router that connects to the Internet and to
academic networks' customer access routers.
The defining characteristics of LANs, in contrast to WANs (wide area networks), include their
higher data transfer rates, smaller geographic range, and no need for leased telecommunication
lines. Current Ethernet and other IEEE 802.3 LAN technologies operate at data transfer rates up
to 10 Gbit/s, and IEEE has projects investigating the standardization of 40 and 100 Gbit/s.[3]
A personal area network (PAN) is a computer network used for communication among computers
and other information technology devices close to one person. Some examples of devices
that are used in a PAN are personal computers, printers, fax machines, telephones, PDAs,
scanners, and even video game consoles. A PAN may include wired and wireless devices. The
reach of a PAN typically extends to 10 meters.[4] A wired PAN is usually constructed with USB
and Firewire connections while technologies such as Bluetooth and infrared communication
typically form a wireless PAN.
A wide area network (WAN) is a computer network that covers a large geographic area, such as
a city or country, or that even spans intercontinental distances, using a communications channel
that combines many types of media such as telephone lines, cables, and air waves. A WAN often
uses transmission facilities provided by common carriers, such as telephone companies. WAN
technologies generally function at the lower three layers of the OSI reference model: the physical
layer, the data link layer, and the network layer.
Campus network
In the case of a university campus network, the network is likely to link a variety of campus
buildings, including academic departments, the university library, and student
residence halls.
A Metropolitan area network is a large computer network that usually spans a city or a large
campus.
[Figure: A sample enterprise private network (EPN) made of Frame Relay WAN connections and dial-up remote access.]
A virtual private network (VPN) is a computer network in which some of the links between
nodes are carried by open connections or virtual circuits in some larger network (e.g., the
Internet) instead of by physical wires. The data link layer protocols of the virtual network are
said to be tunneled through the larger network when this is the case. One common application is
secure communications through the public Internet, but a VPN need not have explicit security
features, such as authentication or content encryption. VPNs, for example, can be used to
separate the traffic of different user communities over an underlying network with strong
security features.
A VPN may have best-effort performance, or may have a defined service level agreement (SLA)
between the VPN customer and the VPN service provider. Generally, a VPN has a topology
more complex than point-to-point.
Internetwork
An internetwork is the connection of two or more private computer networks via a common
routing technology (OSI Layer 3) using routers. The Internet is an aggregation of many
internetworks; hence its name, a shortened form of internetwork.
A global area network (GAN) is a network used for supporting mobile communications across an
arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile
communications is handing off the user communications from one local coverage area to the
next. In IEEE Project 802, this involves a succession of terrestrial wireless LANs.[5]
Internet
The Internet is a global system of interconnected governmental, academic, corporate, public, and
private computer networks. It is based on the networking technologies of the Internet Protocol
Suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET)
developed by DARPA of the United States Department of Defense. The Internet is also the
communications backbone underlying the World Wide Web (WWW).
Participants in the Internet use a diverse array of several hundred documented, and often
standardized, protocols compatible with the Internet Protocol Suite and an addressing
system (IP addresses) administered by the Internet Assigned Numbers Authority and address
registries. Service providers and large enterprises exchange information about the reachability of
their address spaces through the Border Gateway Protocol (BGP), forming a redundant
worldwide mesh of transmission paths.
Intranets and extranets are parts or extensions of a computer network, usually a local area
network.
An intranet is a set of networks, using the Internet Protocol and IP-based tools such as web
browsers and file transfer applications, that is under the control of a single administrative entity.
That administrative entity closes the intranet to all but specific, authorized users. Most
commonly, an intranet is the internal network of an organization. A large intranet will typically
have at least one web server to provide users with organizational information.
An extranet is a network that is limited in scope to a single organization or entity and also has
limited connections to the networks of one or more other, usually but not necessarily trusted,
organizations or entities—a company's customers may be given access to some part of its
intranet—while at the same time the customers may not be considered trusted from a security
standpoint. Technically, an extranet may also be categorized as a CAN, MAN, WAN, or other
type of network, although an extranet cannot consist of a single LAN; it must have at least one
connection with an external network.
Overlay network
An overlay network is a virtual computer network that is built on top of another network. Nodes
in the overlay are connected by virtual or logical links, each of which corresponds to a path,
perhaps through many physical links, in the underlying network.
For example, many peer-to-peer networks are overlay networks because they are organized as
nodes of a virtual system of links that run on top of the Internet. The Internet itself was initially
built as an overlay on the telephone network.[6]
Overlay networks have existed since the invention of networking, when computer systems
were connected over telephone lines using modems, before any data network existed.
Nowadays the Internet is the basis for many overlaid networks that can be constructed to permit
routing of messages to destinations not specified by an IP address. For example, distributed hash
tables can be used to route messages to a node having a specific logical address, whose IP
address is not known in advance.
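To make the idea concrete, here is a toy sketch in Python of DHT-style routing. The node names and the SHA-1 hash ring are invented for illustration: keys and nodes are hashed onto a ring, and a message for a logical address is handed to the node whose position follows the key's hash, without the sender ever knowing that node's IP address.

import hashlib
from bisect import bisect_right

def h(value: str) -> int:
    # Hash a name or key to an integer position on the ring.
    return int(hashlib.sha1(value.encode()).hexdigest(), 16)

nodes = ["node-a", "node-b", "node-c", "node-d"]
ring = sorted((h(n), n) for n in nodes)   # the hash ring

def responsible_node(logical_address: str) -> str:
    # The node whose ring position follows the key's hash handles the key.
    position = bisect_right(ring, (h(logical_address), ""))
    return ring[position % len(ring)][1]  # wrap around the end of the ring

print(responsible_node("user:alice"))     # deterministic, no IP needed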
Overlay networks have also been proposed as a way to improve Internet routing, such as through
quality of service guarantees to achieve higher-quality streaming media. Previous proposals such
as IntServ, DiffServ, and IP Multicast have not seen wide acceptance largely because they
require modification of all routers in the network.[citation needed] On the other hand, an overlay
network can be incrementally deployed on end-hosts running the overlay protocol software,
without cooperation from Internet service providers. The overlay has no control over how
packets are routed in the underlying network between two overlay nodes, but it can control, for
example, the sequence of overlay nodes a message traverses before reaching its destination.
For example, Akamai Technologies manages an overlay network that provides reliable, efficient
content delivery (a kind of multicast). Academic research includes End System Multicast and
Overcast for multicast; RON (Resilient Overlay Network) for resilient routing; and OverQoS for
quality of service guarantees, among others.
Network interface cards
A network card, network adapter, or NIC (network interface card) is a piece of computer
hardware designed to allow computers to communicate over a computer network. It provides
physical access to a networking medium and often provides a low-level addressing system
through the use of MAC addresses.
Repeaters
A repeater is an electronic device that receives a signal, cleans it of unnecessary noise,
regenerates it, and retransmits it at a higher power level, so that the signal can cover longer
distances without degradation.
Hubs
A network hub contains multiple ports. When a packet arrives at one port, it is copied
unmodified to all ports of the hub for transmission. The destination address in the frame is not
changed to a broadcast address.[7] It works on the Physical Layer of the OSI model.
Bridges
A network bridge connects multiple network segments at the data link layer (layer 2) of the OSI
model. Bridges broadcast to all ports except the port on which the broadcast was received.
However, bridges do not promiscuously copy traffic to all ports, as hubs do, but learn which
MAC addresses are reachable through specific ports. Once the bridge associates a port and an
address, it will send traffic for that address to that port only.
Bridges learn the association of ports and addresses by examining the source address of frames
that they see on various ports. Once a frame arrives through a port, its source address is stored and
the bridge assumes that MAC address is associated with that port. The first time that a previously
unknown destination address is seen, the bridge will forward the frame to all ports other than the
one on which the frame arrived.
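The learning behaviour described above can be captured in a few lines. The Python sketch below is illustrative, not any vendor's API: the bridge records the source MAC of every arriving frame, forwards on the learned port when the destination is known, and floods otherwise.

class LearningBridge:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port number

    def receive(self, frame_src, frame_dst, in_port):
        # Learn: associate the source MAC with the arrival port.
        self.mac_table[frame_src] = in_port
        # Forward: use the table if the destination is known,
        # otherwise flood to every port except the arrival port.
        if frame_dst in self.mac_table:
            return [self.mac_table[frame_dst]]
        return [p for p in range(self.num_ports) if p != in_port]

bridge = LearningBridge(num_ports=4)
print(bridge.receive("aa:aa", "bb:bb", in_port=0))  # unknown dst: flood -> [1, 2, 3]
print(bridge.receive("bb:bb", "aa:aa", in_port=2))  # learned aa:aa -> [0]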
Switches
A network switch is a device that forwards and filters OSI layer 2 frames (chunks of data)
between ports (connected cables) based on the MAC addresses in the frames.[8]
A switch is distinct from a hub in that it only forwards the frames to the ports involved in the
communication rather than all ports connected. A switch breaks the collision domain but
represents itself as a broadcast domain. Switches make forwarding decisions of frames on the
basis of MAC addresses. A switch normally has numerous ports, facilitating a star topology for
devices, and cascading additional switches.[9] Some switches are capable of routing based on
Layer 3 addressing or additional logical levels; these are called multi-layer switches. The term
switch is used loosely in marketing to encompass devices including routers and bridges, as well
as devices that may distribute traffic on load or by application content (e.g., a Web URL
identifier).
Routers
A router is an internetworking device that forwards data packets between networks by
processing the addressing information contained in each packet (OSI layer 3).
NETWORK TOPOLOGY
Topology can be considered as the virtual shape or structure of a network. This shape does not
correspond to the actual physical design of the devices on the computer network. The computers
on a home network can be arranged in a circle, but that does not necessarily mean they form a
ring topology.
Any particular network topology is determined only by the graphical mapping of the
configuration of physical and/or logical connections between nodes. The study of network
topology uses graph theory. Distances between nodes, physical interconnections, transmission
rates, and/or signal types may differ in two networks and yet their topologies may be identical.
A local area network (LAN) is one example of a network that exhibits both a physical topology
and a logical topology. Any given node in the LAN has one or more links to one or more nodes
in the network and the mapping of these links and nodes in a graph results in a geometric shape
that may be used to describe the physical topology of the network. Likewise, the mapping of the
data flow between the nodes in the network determines the logical topology of the network. The
physical and logical topologies may or may not be identical in any particular network.
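Because a topology is just the graph of links, simple graph properties can distinguish the common shapes. The Python sketch below classifies a small network by its degree sequence; the node names and the classify_topology helper are assumptions for illustration, not a standard routine.

def classify_topology(adjacency):
    # adjacency: node -> set of directly linked neighbour nodes
    degrees = sorted(len(neighbors) for neighbors in adjacency.values())
    n = len(adjacency)
    if degrees == [1] * (n - 1) + [n - 1]:
        return "star"                      # one hub, all others are leaves
    if degrees == [2] * n:
        return "ring"                      # every node has exactly two links
    if degrees == [1, 1] + [2] * (n - 2):
        return "linear bus (daisy chain)"  # two endpoints, a chain between
    if degrees == [n - 1] * n:
        return "fully connected mesh"      # every node linked to every other
    return "other / hybrid"

star = {"hub": {"a", "b", "c"}, "a": {"hub"}, "b": {"hub"}, "c": {"hub"}}
ring = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c", "a"}}
print(classify_topology(star))  # star
print(classify_topology(ring))  # ring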
Bus topology
Star topology
Ring topology
Tree topology
Mesh topology
Hybrid topology
Physical topologies
Signal topologies
Logical topologies
The terms signal topology and logical topology are often used interchangeably, though there
may be a subtle or substantial difference between the two. For example, IEEE 802.3 1BASE-5
was a logical bus, although the signal topology was a tree structure.
Physical topologies
The mapping of the nodes of a network and the physical connections between them, i.e., the
layout of wiring, cables, the locations of nodes, and the interconnections between the nodes and
the cabling or wiring system.[1]
Point-to-point
The simplest topology is a permanent link between two endpoints. Switched point-to-point
topologies are the basic model of conventional telephony. The value of a permanent
point-to-point network is guaranteed, or nearly guaranteed, communication between the two
endpoints. The value of an on-demand point-to-point connection is proportional to the number
of potential pairs of subscribers, and has been expressed as Metcalfe's Law.
Permanent (dedicated)
Switched
Bus
In local area networks where bus topology is used, each machine is connected to a single cable.
Each computer or server is connected to the single bus cable through some kind of connector. A
terminator is required at each end of the bus cable to prevent the signal from bouncing back
and forth on the bus cable. A signal from the source travels in both directions to all machines
connected on the bus cable until it finds the MAC address or IP address on the network that is
the intended recipient. If the machine address does not match the intended address for the
data, the machine ignores the data. Alternatively, if the data does match the machine address,
the data is accepted. Since the bus topology consists of only one wire, it is rather inexpensive to
implement when compared to other topologies. However, the low cost of implementing the
technology is offset by the high cost of managing the network. Additionally, since only one cable
is utilized, it can be the single point of failure. If the network cable breaks, the entire network
will be down.
Linear bus
The type of network topology in which all of the nodes of the network are connected to a
common transmission medium which has exactly two endpoints (this is the 'bus', which is also
commonly referred to as the backbone, or trunk) – all data that is transmitted between nodes in
the network is transmitted over this common transmission medium and is able to be received by
all nodes in the network virtually simultaneously (disregarding propagation delays).[1]
Note: The two endpoints of the common transmission medium are normally terminated with a
device called a terminator that exhibits the characteristic impedance of the transmission
medium and which dissipates or absorbs the energy that remains in the signal to prevent the
signal from being reflected or propagated back onto the transmission medium in the opposite
direction, which would cause interference with and degradation of the signals on the
transmission medium (See Electrical termination).
Distributed bus
The type of network topology in which all of the nodes of the network are connected to a
common transmission medium which has more than two endpoints that are created by adding
branches to the main section of the transmission medium – the physical distributed bus
topology functions in exactly the same fashion as the physical linear bus topology (i.e., all nodes
share a common transmission medium).
Notes:
1.) All of the endpoints of the common transmission medium are normally terminated with a
device called a 'terminator' (see the note under linear bus).
2.) The physical linear bus topology is sometimes considered to be a special case of the physical
distributed bus topology – i.e., a distributed bus with no branching segments.
3.) The physical distributed bus topology is sometimes incorrectly referred to as a physical tree
topology – however, although the physical distributed bus topology resembles the physical tree
topology, it differs from the physical tree topology in that there is no central node to which any
other nodes are connected, since this hierarchical functionality is replaced by the common bus.
Star
In local area networks with a star topology, each network host is connected to a central hub. In
contrast to the bus topology, the star topology connects each node to the hub with a point-to-
point connection. All traffic that traverses the network passes through the central hub. The hub
acts as a signal booster or repeater. The star topology is considered the easiest topology to design
and implement. An advantage of the star topology is the simplicity of adding additional nodes.
The primary disadvantage of the star topology is that the hub represents a single point of failure.
Notes
After the special case of the point-to-point link, the next simplest type of network that is based
upon the physical star topology would consist of one central node – the 'hub' – with two separate
point-to-point links to two peripheral nodes – the 'spokes'.
Although most networks based upon the physical star topology are implemented with a special
device such as a hub or switch as the central node, it is also possible to implement such a
network using a computer or even a simple common connection point as the 'hub' or central
node. Because many illustrations of the physical star topology depict the central node as one of
these special devices, the misconception can arise that a physical star network requires the
central node to be such a device; this is not true, because a simple network of three computers
connected in a star also has the topology of the physical star.
Star networks may also be described as either broadcast multi-access or nonbroadcast multi-
access (NBMA), depending on whether the technology of the network automatically propagates
a signal at the hub to all spokes, or only addresses individual spokes with each communication.
Extended star
A type of network topology in which a network based upon the physical star topology has one or
more repeaters between the central node (the 'hub' of the star) and the peripheral or 'spoke'
nodes. The repeaters are used to extend the maximum transmission distance of the point-to-point
links between the central node and the peripheral nodes beyond that which is supported by the
transmitter power of the central node, or beyond that which is supported by the standard upon
which the physical layer of the physical star network is based.
If the repeaters in a network that is based upon the physical extended star topology are replaced
with hubs or switches, then a hybrid network topology is created that is referred to as a physical
hierarchical star topology, although some texts make no distinction between the two topologies.
Distributed star
A type of network topology that is composed of individual networks that are based upon the
physical star topology connected together in a linear fashion – i.e., 'daisy-chained' – with no
central or top level connection point (e.g., two or more 'stacked' hubs, along with their associated
star connected nodes or 'spokes').
Ring
In local area networks where the ring topology is used, each computer is connected to the
network in a closed loop or ring. Each machine or computer has a unique address that is used
for identification purposes. The signal passes through each machine or computer connected to
the ring in one direction. Ring topologies typically utilize a token passing scheme, used to control
access to the network. By utilizing this scheme, only one machine can transmit on the network
at a time. The machines or computers connected to the ring act as signal boosters or repeaters
which strengthen the signals that traverse the network. The primary disadvantage of the ring
topology is that the failure of one machine will cause the entire network to fail.
Mesh
The value of a fully meshed network is proportional to the exponent of the number of
subscribers, because communicating groups can form between any set of two or more endpoints,
up to and including all the endpoints; this is approximated by Reed's Law.
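A short worked comparison makes the difference between the two laws concrete: Metcalfe's Law counts potential point-to-point pairs, n(n-1)/2, while Reed's Law counts the possible communicating groups of two or more endpoints, 2^n - n - 1. The Python below simply evaluates both.

def metcalfe_pairs(n):
    # Metcalfe's Law: number of distinct point-to-point pairs.
    return n * (n - 1) // 2

def reed_groups(n):
    # Reed's Law: subsets of size >= 2, i.e. all subsets minus the
    # empty set and the n singletons.
    return 2 ** n - n - 1

for n in (5, 10, 20):
    print(n, metcalfe_pairs(n), reed_groups(n))
# 5 -> 10 pairs, 26 groups; 10 -> 45 pairs, 1013 groups;
# 20 -> 190 pairs, 1048555 groups: group value dwarfs pair value.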
[Figure: Fully connected mesh topology.]
Fully connected
Note: The physical fully connected mesh topology is generally too costly and complex for
practical networks, although the topology is used when there are only a small number of nodes
to be interconnected.
Partially connected
The type of network topology in which some of the nodes of the network are connected to more
than one other node in the network with a point-to-point link – this makes it possible to take
advantage of some of the redundancy that is provided by a physical fully connected mesh
topology without the expense and complexity required for a connection between every node in
the network.
Note: In most practical networks that are based upon the physical partially connected mesh
topology, all of the data that is transmitted between nodes in the network takes the shortest
path (or an approximation of the shortest path) between nodes, except in the case of a failure
or break in one of the links, in which case the data takes an alternative path to the destination.
This requires that the nodes of the network possess some type of logical 'routing' algorithm to
determine the correct path to use at any particular time.
Tree
The type of network topology in which a central 'root' node (the top level of the hierarchy) is
connected to one or more other nodes that are one level lower in the hierarchy (i.e., the second
level), with a point-to-point link between each of the second-level nodes and the top-level central
'root' node. Each of the second-level nodes will in turn have one or more other nodes, one level
lower in the hierarchy (i.e., the third level), connected to it, also with a point-to-point link. The
top-level central 'root' node is the only node that has no other node above it in the hierarchy, and
the hierarchy of the tree is symmetrical. Each node in the network has a specific fixed number of
nodes connected to it at the next lower level in the hierarchy; this number is referred to as the
'branching factor' of the hierarchical tree. This tree has individual peripheral nodes.
Notes:
1.) A network that is based upon the physical hierarchical topology must have at least three
levels in the hierarchy of the tree, since a network with a central 'root' node and only one
hierarchical level below it would exhibit the physical topology of a star.
2.) A network that is based upon the physical hierarchical topology and with a branching factor
of 1 would be classified as a physical linear topology.
3.) The branching factor, f, is independent of the total number of nodes in the network.
Therefore, if the nodes in the network require ports for connection to other nodes, the total
number of ports per node may be kept low even though the total number of nodes is large; the
cost of adding ports to each node depends only on the branching factor and may be kept as low
as required without limiting the total number of nodes that are possible.
4.) The total number of point-to-point links in a network that is based upon the physical
hierarchical topology will be one less than the total number of nodes in the network.
5.) If the nodes in a network that is based upon the physical hierarchical topology are required
to perform any processing upon the data that is transmitted between nodes in the network, the
nodes that are at higher levels in the hierarchy will be required to perform more processing
operations on behalf of other nodes than the nodes that are lower in the hierarchy.
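Notes 2 and 4 above can be checked with a little arithmetic. The Python sketch below counts nodes and links in a perfect hierarchical tree with branching factor f; it is an illustration of the stated relationships, not part of any standard.

def tree_node_count(f, levels):
    # Geometric series 1 + f + f**2 + ... over the given number of levels.
    if f == 1:
        return levels  # note 2: branching factor 1 degenerates to a line
    return (f ** levels - 1) // (f - 1)

def tree_link_count(f, levels):
    # Note 4: a tree always has one link fewer than it has nodes.
    return tree_node_count(f, levels) - 1

print(tree_node_count(2, 3), tree_link_count(2, 3))  # 7 nodes, 6 links
print(tree_node_count(3, 3), tree_link_count(3, 3))  # 13 nodes, 12 links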
Signal topology
The mapping of the actual connections between the nodes of a network, as evidenced by the path
that the signals take when propagating between the nodes.
Note: The term 'signal topology' is often used synonymously with the term 'logical topology',
however, some confusion may result from this practice in certain situations since, by definition,
the term 'logical topology' refers to the apparent path that the data takes between nodes in a
network while the term 'signal topology' generally refers to the actual path that the signals (e.g.,
optical, electrical, electromagnetic, etc.) take when propagating between nodes.
Logical topology
The logical topology, in contrast to the "physical", is the way that the signals act on the network
media, or the way that the data passes through the network from one device to the next without
regard to the physical interconnection of the devices. A network's logical topology is not
necessarily the same as its physical topology. For example, twisted pair Ethernet is a logical bus
topology in a physical star topology layout. While IBM's Token Ring is a logical ring topology,
it is physically set up in a star topology.
The logical classification of network topologies generally follows the same classifications as
those in the physical classifications of network topologies, the path that the data takes between
nodes being used to determine the topology as opposed to the actual physical connections being
used to determine the topology.
Notes:
1.) Logical topologies are often closely associated with media access control (MAC) methods and
protocols.
2.) The logical topologies are generally determined by network protocols as opposed to being
determined by the physical layout of cables, wires, and network devices or by the flow of the
electrical signals, although in many cases the paths that the electrical signals take between
nodes may closely match the logical flow of data, hence the convention of using the terms
'logical topology' and 'signal topology' interchangeably.
3.) Logical topologies are able to be dynamically reconfigured by special types of equipment
such as routers and switches.
Daisy chains
Except for star-based networks, the easiest way to add more computers into a network is by
daisy-chaining, or connecting each computer in series to the next. If a message is intended for a
computer partway down the line, each system bounces it along in sequence until it reaches the
destination. A daisy-chained network can take two basic forms: linear and ring.
A linear topology puts a two-way link between one computer and the next. However, this was
expensive in the early days of computing, since each computer (except for the ones at each end)
required two receivers and two transmitters.
By connecting the computers at each end, a ring topology can be formed. An advantage of the
ring is that the number of transmitters and receivers can be cut in half, since a message will
eventually loop all of the way around. When a node sends a message, the message is processed
by each computer in the ring. If a computer is not the destination node, it will pass the message
to the next node, until the message arrives at its destination. If the message is not accepted by
any node on the network, it will travel around the entire ring and return to the sender. This
potentially results in a doubling of travel time for data.
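The hop-by-hop behaviour just described is easy to simulate. The following toy Python (node names invented for illustration) passes a message around a one-directional ring until the destination accepts it, or until it returns to the sender unclaimed.

def send_on_ring(ring, source, destination):
    hops = 0
    index = ring.index(source)
    while True:
        index = (index + 1) % len(ring)   # pass to the next node in the loop
        hops += 1
        if ring[index] == destination:
            return f"delivered after {hops} hop(s)"
        if ring[index] == source:
            return f"returned to sender after {hops} hop(s)"

ring = ["A", "B", "C", "D"]
print(send_on_ring(ring, "A", "C"))  # delivered after 2 hop(s)
print(send_on_ring(ring, "A", "X"))  # returned to sender after 4 hop(s)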
Centralization
The star topology reduces the probability of a network failure by connecting all of the peripheral
nodes (computers, etc.) to a central node. When the physical star topology is applied to a logical
bus network such as Ethernet, this central node (traditionally a hub) rebroadcasts all
transmissions received from any peripheral node to all peripheral nodes on the network,
sometimes including the originating node. All peripheral nodes may thus communicate with all
others by transmitting to, and receiving from, the central node only. The failure of a transmission
line linking any peripheral node to the central node will result in the isolation of that peripheral
node from all others, but the remaining peripheral nodes will be unaffected. However, the
disadvantage is that the failure of the central node will cause the failure of all of the peripheral
nodes also.
If the central node is passive, the originating node must be able to tolerate the reception of an
echo of its own transmission, delayed by the two-way round trip transmission time (i.e. to and
from the central node) plus any delay generated in the central node. An active star network has
an active central node that usually has the means to prevent echo-related problems.
A tree topology (a.k.a. hierarchical topology) can be viewed as a collection of star networks
arranged in a hierarchy. This tree has individual peripheral nodes (e.g. leaves) which are required
to transmit to and receive from one other node only and are not required to act as repeaters or
regenerators. Unlike the star network, the functionality of the central node may be distributed.
As in the conventional star network, individual nodes may thus still be isolated from the network
by a single-point failure of a transmission path to the node. If a link connecting a leaf fails, that
leaf is isolated; if a connection to a non-leaf node fails, an entire section of the network becomes
isolated from the rest.
In order to alleviate the amount of network traffic that comes from broadcasting all signals to all
nodes, more advanced central nodes were developed that are able to keep track of the identities
of the nodes that are connected to the network. These network switches will "learn" the layout of
the network by "listening" on each port during normal data transmission, examining the data
packets and recording the address/identifier of each connected node and which port it's
connected to in a lookup table held in memory. This lookup table then allows future
transmissions to be forwarded to the intended destination only.
Decentralization
In a mesh topology (i.e., a partially connected mesh topology), there are at least two nodes with
two or more paths between them to provide redundant paths to be used in case the link providing
one of the paths fails. This decentralization is often used to advantage to compensate for the
single-point-failure disadvantage that is present when using a single device as a central node
(e.g., in star and tree networks). A special kind of mesh, limiting the number of hops between
two nodes, is a hypercube. The number of arbitrary forks in mesh networks makes them more
difficult to design and implement, but their decentralized nature makes them very useful. This is
similar in some ways to a grid network, where a linear or ring topology is used to connect
systems in multiple directions. A multi-dimensional ring has a toroidal topology, for instance.
A fully connected network, complete topology or full mesh topology is a network topology in
which there is a direct link between all pairs of nodes. In a fully connected network with n nodes,
there are n(n-1)/2 direct links. Networks designed with this topology are usually very expensive
to set up, but provide a high degree of reliability due to the multiple paths for data that are
provided by the large number of redundant links between nodes. This topology is mostly seen in
military applications. However, it can also be seen in the file sharing protocol BitTorrent in
which users connect to other users in the "swarm" by allowing each user sharing the file to
connect to other users also involved. Often in actual usage of BitTorrent any given individual
node is rarely connected to every single other node as in a true fully connected network but the
protocol does allow for the possibility for any one node to connect to any other node when
sharing files.
Hybrids
Hybrid networks use a combination of any two or more topologies in such a way that the
resulting network does not exhibit one of the standard topologies (e.g., bus, star, ring, etc.). For
example, a tree network connected to a tree network is still a tree network, but two star networks
connected together exhibit a hybrid network topology. A hybrid topology is always produced
when two different basic network topologies are connected. Two common examples of hybrid
networks are the star-ring network and the star-bus network:
A star-ring network consists of two or more star topologies connected using a multistation
access unit (MAU) as a centralized hub.
A star-bus network consists of two or more star topologies connected using a bus trunk (the bus
trunk serves as the network's backbone).
While grid networks have found popularity in high-performance computing applications, some
systems have used genetic algorithms to design custom networks that have the fewest possible
hops in between different nodes. Some of the resulting layouts are nearly incomprehensible,
although they function quite well.
INFORMATION SECURITY
Information security means protecting information and information systems from unauthorized
access, use, disclosure, disruption, modification or destruction.[1]
The terms information security, computer security and information assurance are frequently, but
incorrectly, used interchangeably. These fields are often interrelated and share the common goals
of protecting the confidentiality, integrity and availability of information; however, there are
some subtle differences between them.
These differences lie primarily in the approach to the subject, the methodologies used, and the
areas of concentration. Information security is concerned with the confidentiality, integrity and
availability of data regardless of the form the data may take: electronic, print, or other forms.
Computer security can focus on ensuring the availability and correct operation of a computer
system without concern for the information stored or processed by the computer.
Should confidential information about a business' customers or finances or new product line fall
into the hands of a competitor, such a breach of security could lead to lost business, law suits or
even bankruptcy of the business. Protecting confidential information is a business requirement,
and in many cases also an ethical and legal requirement.
For the individual, information security has a significant effect on privacy, which is viewed very
differently in different cultures.
The field of information security has grown and evolved significantly in recent years. There are
many ways of gaining entry into the field as a career. It offers many areas for specialization
including: securing network(s) and allied infrastructure, securing applications and databases,
security testing, information systems auditing, business continuity planning and digital forensics
science, etc.
Basic principles
Key concepts
For over twenty years, information security has held confidentiality, integrity and availability
(known as the CIA triad) to be its core principles.
There is continuous debate about extending this classic trio. Other principles, such as
accountability, have sometimes been proposed for addition; it has been pointed out that issues
such as non-repudiation do not fit well within the three core concepts, and as regulation of
computer systems has increased (particularly amongst the Western nations), legality is becoming
a key consideration for practical security installations.
In 2002, Donn Parker proposed an alternative model for the classic CIA triad that he called the
six atomic elements of information. The elements are confidentiality, possession, integrity,
authenticity, availability, and utility. The merits of the Parkerian hexad are a subject of debate
amongst security professionals.
Confidentiality
Confidentiality is the property of preventing disclosure of information to unauthorized
individuals or systems.
Breaches of confidentiality take many forms. Permitting someone to look over your shoulder at
your computer screen while you have confidential data displayed on it could be a breach of
confidentiality. If a laptop computer containing sensitive information about a company's
employees is stolen or sold, it could result in a breach of confidentiality. Giving out confidential
information over the telephone is a breach of confidentiality if the caller is not authorized to have
the information.
Confidentiality is necessary (but not sufficient) for maintaining the privacy of the people whose
personal information a system holds.
Integrity
In information security, integrity means that data cannot be modified without authorization. This
is not the same thing as referential integrity in databases. Integrity is violated when an employee
accidentally or with malicious intent deletes important data files, when a computer virus infects a
computer, when an employee is able to modify his own salary in a payroll database, when an
unauthorized user vandalizes a web site, when someone is able to cast a very large number of
votes in an online poll, and so on.
There are many ways in which integrity could be violated without malicious intent. In the
simplest case, a user on a system could mis-type someone's address. On a larger scale, if an
automated process is not written and tested correctly, bulk updates to a database could alter data
in an incorrect way, leaving the integrity of the data compromised. Information security
professionals are tasked with finding ways to implement controls that prevent errors of integrity.
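A common technical control for detecting integrity violations is a cryptographic digest: record a hash of the data, and any later modification, accidental or malicious, changes the hash. A minimal sketch using Python's standard hashlib; the file contents are invented for illustration.

import hashlib

def sha256_of(data: bytes) -> str:
    # Compute the hex SHA-256 digest of the given bytes.
    return hashlib.sha256(data).hexdigest()

original = b"employee,salary\nalice,50000\n"
baseline = sha256_of(original)           # recorded when the data is trusted

tampered = b"employee,salary\nalice,90000\n"
print(sha256_of(original) == baseline)   # True: integrity intact
print(sha256_of(tampered) == baseline)   # False: modification detected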
Availability
For any information system to serve its purpose, the information must be available when it is
needed. This means that the computing systems used to store and process the information, the
security controls used to protect it, and the communication channels used to access it must be
functioning correctly. High availability systems aim to remain available at all times, preventing
service disruptions due to power outages, hardware failures, and system upgrades. Ensuring
availability also involves preventing denial-of-service attacks.
Authenticity
In computing, e-Business and information security it is necessary to ensure that the data,
transactions, communications or documents (electronic or physical) are genuine. It is also
important for authenticity to validate that both parties involved are who they claim they are.
Non-repudiation
In law, non-repudiation implies one's intention to fulfill their obligations to a contract. It also
implies that one party of a transaction cannot deny having received a transaction nor can the
other party deny having sent a transaction.
Electronic commerce uses technology such as digital signatures and encryption to establish
authenticity and non-repudiation.
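As a sketch of how a digital signature supports authenticity and non-repudiation, the Python below uses the third-party cryptography package (an assumption; any signature scheme illustrates the same idea): only the holder of the private key can produce a signature that the public key verifies, so the signer cannot plausibly deny having signed.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # known only to the signer
public_key = private_key.public_key()        # distributed to verifiers

message = b"I agree to pay 100 units on delivery."
signature = private_key.sign(message)

try:
    # Raises InvalidSignature if the message or signature was altered.
    public_key.verify(signature, message)
    print("signature valid: authentic and non-repudiable")
except InvalidSignature:
    print("signature invalid")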
Risk management
A comprehensive treatment of the topic of risk management is beyond the scope of this section.
However, a useful definition of risk management will be provided as well as some basic
terminology and a commonly used process for risk management.
The CISA Review Manual 2006 provides the following definition of risk management: "Risk
management is the process of identifying vulnerabilities and threats to the information resources
used by an organization in achieving business objectives, and deciding what countermeasures, if
any, to take in reducing risk to an acceptable level, based on the value of the information
resource to the organization."[2]
There are two things in this definition that may need some clarification. First, the process of risk
management is an ongoing iterative process. It must be repeated indefinitely. The business
environment is constantly changing and new threats and vulnerabilities emerge every day. Second,
the choice of countermeasures (controls) used to manage risks must strike a balance between
productivity, cost, effectiveness of the countermeasure, and the value of the informational asset
being protected.
Risk is the likelihood that something bad will happen that causes harm to an informational asset
(or the loss of the asset). A vulnerability is a weakness that could be used to endanger or cause
harm to an informational asset. A threat is anything (man-made or an act of nature) that has the
potential to cause harm.
The likelihood that a threat will use a vulnerability to cause harm creates a risk. When a threat
does use a vulnerability to inflict harm, it has an impact. In the context of information security,
the impact is a loss of availability, integrity, and confidentiality, and possibly other losses (lost
income, loss of life, loss of real property). It should be pointed out that it is not possible to
identify all risks, nor is it possible to eliminate all risk. The remaining risk is called residual risk.
A risk assessment is carried out by a team of people who have knowledge of specific areas of the
business. Membership of the team may vary over time as different parts of the business are
assessed. The assessment may use a subjective qualitative analysis based on informed opinion,
or, where reliable dollar figures and historical information are available, the analysis may use
quantitative analysis.
The ISO/IEC 27002:2005 Code of practice for information security management recommends
the following be examined during a risk assessment:
security policy,
organization of information security,
asset management,
human resources security,
physical and environmental security,
communications and operations management,
access control,
information systems acquisition, development and maintenance,
information security incident management,
business continuity management, and
regulatory compliance.
In broad terms, the risk management process consists of the following steps:
1. Identification of assets and estimating their value. Include: people, buildings, hardware,
software, data (electronic, print, other), supplies.
2. Conduct a threat assessment. Include: Acts of nature, acts of war, accidents, malicious
acts originating from inside or outside the organization.
3. Conduct a vulnerability assessment, and for each vulnerability, calculate the probability
that it will be exploited. Evaluate policies, procedures, standards, training, physical
security, quality control, technical security.
4. Calculate the impact that each threat would have on each asset. Use qualitative analysis
or quantitative analysis.
5. Identify, select and implement appropriate controls. Provide a proportional response.
Consider productivity, cost effectiveness, and value of the asset.
6. Evaluate the effectiveness of the control measures. Ensure the controls provide the
required cost effective protection without discernible loss of productivity.
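For step 4, a common quantitative technique is the annualized loss expectancy (ALE) calculation: the single loss expectancy (asset value times exposure factor) multiplied by the annualized rate of occurrence. The Python below is a sketch with made-up figures, not real actuarial data.

def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate):
    # SLE: fraction of the asset's value lost in a single incident.
    single_loss_expectancy = asset_value * exposure_factor
    # ALE = SLE * ARO (expected incidents per year).
    return single_loss_expectancy * annual_rate

# e.g., a 200,000 database, 40% damaged per incident, 0.5 incidents/year
ale = annualized_loss_expectancy(200_000, 0.40, 0.5)
print(ale)  # 40000.0: a control costing less than this per year is justifiable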
For any given risk, Executive Management can choose to accept the risk based upon the relative
low value of the asset, the relative low frequency of occurrence, and the relative low impact on
the business. Or, leadership may choose to mitigate the risk by selecting and implementing
appropriate control measures to reduce the risk. In some cases, the risk can be transferred to
another business by buying insurance or out-sourcing to another business. The reality of some
risks may be disputed. In such cases leadership may choose to deny the risk. This is itself a
potential risk.
Controls
When Management chooses to mitigate a risk, they will do so by implementing one or more of
three different types of controls.
Administrative
Administrative controls (also called procedural controls) consist of approved written policies,
procedures, standards and guidelines. Administrative controls form the framework for running
the business and managing people. They inform people on how the business is to be run and how
day to day operations are to be conducted. Laws and regulations created by government bodies
are also a type of administrative control because they inform the business. Some industry sectors
have policies, procedures, standards and guidelines that must be followed; the Payment Card
Industry Data Security Standard (PCI DSS) required by Visa and MasterCard is one example.
Other examples of administrative controls include the corporate security policy, password policy,
hiring policies, and disciplinary policies.
Administrative controls form the basis for the selection and implementation of logical and
physical controls. Logical and physical controls are manifestations of administrative controls.
Administrative controls are of paramount importance.
Logical
Logical controls (also called technical controls) use software and data to monitor and control
access to information and computing systems. For example: passwords, network and host based
firewalls, network intrusion detection systems, access control lists, and data encryption are
logical controls.
An important logical control that is frequently overlooked is the principle of least privilege. The
principle of least privilege requires that an individual, program or system process is not granted
any more access privileges than are necessary to perform the task. A blatant example of the
failure to adhere to the principle of least privilege is logging into Windows as user Administrator
to read Email and surf the Web. Violations of this principle can also occur when an individual
collects additional access privileges over time. This happens when employees' job duties change,
or they are promoted to a new position, or they transfer to another department. The access
privileges required by their new duties are frequently added onto their already existing access
privileges which may no longer be necessary or appropriate.
Physical
Physical controls monitor and control the environment of the work place and computing
facilities. They also monitor and control access to and from such facilities. For example: doors,
locks, heating and air conditioning, smoke and fire alarms, fire suppression systems, cameras,
barricades, fencing, security guards, cable locks, etc. Separating the network and work place into
functional areas is also a physical control.
Information classification
An important aspect of information security and risk management is recognizing the value of
information and defining appropriate procedures and protection requirements for the
information. Not all information is equal and so not all information requires the same degree of
protection. This requires information to be assigned a security classification.
The first step in information classification is to identify a member of senior management as the
owner of the particular information to be classified. Next, develop a classification policy. The
policy should describe the different classification labels, define the criteria for information to be
assigned a particular label, and list the required security controls for each classification.
Some factors that influence which classification information should be assigned include how
much value that information has to the organization, how old the information is and whether or
not the information has become obsolete. Laws and other regulatory requirements are also
important considerations when classifying information.
The type of information security classification labels selected and used will depend on the nature
of the organization, with examples being:
In the business sector, labels such as: Public, Sensitive, Private, Confidential.
In the government sector, labels such as: Unclassified, Sensitive But Unclassified,
Restricted, Confidential, Secret, Top Secret and their non-English equivalents.
In cross-sectoral formations, the Traffic Light Protocol, which consists of: White, Green,
Amber and Red.
All employees in the organization, as well as business partners, must be trained on the
classification schema and understand the required security controls and handling procedures for
each classification. The classification a particular information asset has been assigned should be
reviewed periodically to ensure the classification is still appropriate for the information and to
ensure the security controls required by the classification are in place.
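A classification policy can usefully be represented as data that tools can enforce. The sketch below maps the business-sector labels from the list above to illustrative handling controls; the controls shown are assumptions for the example, not a published standard.

# Each label maps to assumed, illustrative handling requirements.
HANDLING = {
    "Public":       {"encrypt_at_rest": False, "access": "anyone"},
    "Sensitive":    {"encrypt_at_rest": False, "access": "employees"},
    "Private":      {"encrypt_at_rest": True,  "access": "employees"},
    "Confidential": {"encrypt_at_rest": True,  "access": "named individuals"},
}

def required_controls(label):
    # Look up the handling procedures an asset's label demands.
    return HANDLING[label]

print(required_controls("Confidential"))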
Access control
Access to protected information must be restricted to people who are authorized to access the
information. The computer programs, and in many cases the computers that process the
information, must also be authorized. This requires that mechanisms be in place to control the
access to protected information. The sophistication of the access control mechanisms should be
in parity with the value of the information being protected: the more sensitive or valuable the
information, the stronger the control mechanisms need to be. The foundation on which access
control mechanisms are built starts with identification and authentication.
Identification is an assertion of who someone is or what something is. If a person makes the
statement "Hello, my name is John Doe." they are making a claim of who they are. However,
their claim may or may not be true. Before John Doe can be granted access to protected
information it will be necessary to verify that the person claiming to be John Doe really is John
Doe.
Authentication is the act of verifying a claim of identity. When John Doe goes into a bank to
make a withdrawal, he tells the bank teller he is John Doe (a claim of identity). The bank teller
asks to see a photo ID, so he hands the teller his driver's license. The bank teller checks the
license to make sure it has John Doe printed on it and compares the photograph on the license
against the person claiming to be John Doe. If the photo and name match the person, then the
teller has authenticated that John Doe is who he claimed to be.
There are three different types of information that can be used for authentication: something you
know, something you have, or something you are. Examples of something you know include
such things as a PIN, a password, or your mother's maiden name. Examples of something you
have include a driver's license or a magnetic swipe card. Something you are refers to biometrics.
Examples of biometrics include palm prints, finger prints, voice prints and retina (eye) scans.
Strong authentication requires providing information from two of the three different types of
authentication information. For example, something you know plus something you have. This is
called two factor authentication.
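Here is a sketch of two-factor authentication in Python using only the standard library: the first factor compares a hash of something you know (a real system should use a salted key-derivation function rather than a bare hash), and the second checks something you have, a time-based one-time password generated per RFC 6238 by a hardware token or phone app. The secret and password are invented for illustration.

import base64, hashlib, hmac, struct, time

def totp(secret_b32, when=None, digits=6, step=30):
    # RFC 6238: HMAC-SHA1 over the 30-second time counter, then truncate.
    key = base64.b32decode(secret_b32)
    counter = int((when or time.time()) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def login(password, otp, stored_hash, secret_b32):
    # Factor 1: something you know (simplified; use a salted KDF in practice).
    knows = hmac.compare_digest(hashlib.sha256(password.encode()).hexdigest(), stored_hash)
    # Factor 2: something you have (the device holding the shared secret).
    has = hmac.compare_digest(otp, totp(secret_b32))
    return knows and has   # both factors must succeed

secret = "JBSWY3DPEHPK3PXP"
stored = hashlib.sha256(b"correct horse").hexdigest()
print(login("correct horse", totp(secret), stored, secret))  # True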
On computer systems in use today, the Username is the most common form of identification and
the Password is the most common form of authentication. Usernames and passwords have served
their purpose but in our modern world they are no longer adequate. Usernames and passwords
are slowly being replaced with more sophisticated authentication mechanisms.
After a person, program or computer has successfully been identified and authenticated then it
must be determined what informational resources they are permitted to access and what actions
they will be allowed to perform (run, view, create, delete, or change). This is called
authorization.
Authorization to access information and other computing services begins with administrative
policies and procedures. The policies prescribe what information and computing services can be
accessed, by whom, and under what conditions. The access control mechanisms are then
configured to enforce these policies.
Different computing systems are equipped with different kinds of access control mechanisms; some may even offer a choice of different access control mechanisms. The access control mechanism a system offers will be based upon one of three approaches to access control (discretionary, mandatory, or non-discretionary, i.e. role-based), or it may be derived from a combination of the three approaches.
Examples of common access control mechanisms in use today include role-based access control, available in many advanced database management systems; simple file permissions provided in the UNIX and Windows operating systems; Group Policy Objects provided in Windows network systems; Kerberos; RADIUS; TACACS; and the simple access lists used in many firewalls and routers.
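As a concrete illustration, role-based access control can be sketched in a few lines: permissions are attached to roles, and users receive permissions only through role membership. The role and permission names below are hypothetical.

    # Hypothetical RBAC tables: permissions belong to roles, users hold roles.
    ROLE_PERMISSIONS = {
        "teller":  {"account:view", "account:withdraw"},
        "auditor": {"account:view", "audit_log:view"},
    }
    USER_ROLES = {"jdoe": {"teller"}}

    def is_authorized(user, permission):
        # A user is authorized if any of their roles grants the permission.
        return any(permission in ROLE_PERMISSIONS.get(role, set())
                   for role in USER_ROLES.get(user, set()))

    assert is_authorized("jdoe", "account:view")
    assert not is_authorized("jdoe", "audit_log:view")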
To be effective, policies and other security controls must be enforceable and upheld. Effective
policies ensure that people are held accountable for their actions. All failed and successful
authentication attempts must be logged, and all access to information must leave some type of
audit trail.
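A minimal sketch of such an audit trail, using Python's standard logging module (the log file name and record fields are assumptions):

    import logging

    # Append every authentication attempt, failed or successful, to an audit log.
    audit = logging.getLogger("audit")
    audit.addHandler(logging.FileHandler("auth_audit.log"))
    audit.setLevel(logging.INFO)

    def record_auth_attempt(username, success, source_ip):
        audit.info("auth attempt user=%s success=%s from=%s",
                   username, success, source_ip)

    record_auth_attempt("jdoe", True, "192.0.2.10")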
Cryptography
Information security uses cryptography to transform usable information into a form that renders
it unusable by anyone other than an authorized user; this process is called encryption.
Information that has been encrypted (rendered unusable) can be transformed back into its
original usable form by an authorized user, who possesses the cryptographic key, through the
process of decryption. Cryptography is used in information security to protect information from
unauthorized or accidental disclosure while the information is in transit (either electronically or
physically) and while information is in storage.
Cryptography provides information security with other useful applications as well, including improved authentication methods, message digests, digital signatures, non-repudiation, and encrypted network communications. Older, less secure applications such as Telnet and FTP are slowly being replaced with more secure applications such as SSH that use encrypted network communications. Wireless communications can be encrypted using protocols such as WPA/WPA2 or the older (and less secure) WEP. Wired communications (such as ITU-T G.hn) are secured using AES for encryption and X.1035 for authentication and key exchange. Software applications such as GnuPG or PGP can be used to encrypt data files and email.
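The encryption/decryption round trip described above can be sketched with the third-party Python cryptography package (an assumption for this example; any comparable library would serve):

    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()    # the cryptographic key an authorized user holds
    cipher = Fernet(key)

    token = cipher.encrypt(b"wire transfer details")   # rendered unusable (encryption)
    plaintext = cipher.decrypt(token)                  # recovered by the key holder (decryption)
    assert plaintext == b"wire transfer details"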
Information security must protect information throughout its life span, from the initial creation of the information on through to its final disposal. The information must be protected while in motion and while at rest. During its lifetime, information may pass through many different information processing systems, and through many different parts of information processing systems. There are many different ways the information and information systems can be threatened. To fully protect the information during its lifetime, each component of the information processing system must have its own protection mechanisms. The building up, layering on, and overlapping of security measures is called defense in depth. The strength of any system is no greater than its weakest link. Using a defense-in-depth strategy, should one defensive measure fail, there are other defensive measures in place that continue to provide protection.
Recall the earlier discussion about administrative controls, logical controls, and physical controls. The three types of controls can be used to form the basis upon which to build a defense-in-depth strategy. With this approach, defense-in-depth can be conceptualized as three distinct layers or planes laid one on top of the other. Additional insight into defense-in-depth can be gained by thinking of it as forming the layers of an onion, with data at the core of the onion, people as the outer layer, and network security, host-based security, and application security forming the inner layers. Both perspectives are equally valid, and each provides valuable insight into the implementation of a good defense-in-depth strategy.
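A toy sketch of the layering idea: a request must pass several independent checks, so the failure of any single layer does not by itself expose the data. The layer names and checks are invented for illustration.

    # Each layer is an independent defensive measure; all must agree.
    LAYERS = [
        ("network",     lambda req: req["source_ip"].startswith("10.")),  # perimeter filter
        ("host",        lambda req: req["user"] in {"jdoe"}),             # host authentication
        ("application", lambda req: req["action"] in {"view"}),          # app authorization
    ]

    def allow(request):
        return all(check(request) for _name, check in LAYERS)

    print(allow({"source_ip": "10.0.0.7", "user": "jdoe", "action": "view"}))  # True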
Process
The terms reasonable and prudent person, due care and due diligence have been used in the
fields of Finance, Securities, and Law for many years. In recent years these terms have found
their way into the fields of computing and information security. U.S.A. Federal Sentencing
Guidelines now make it possible to hold corporate officers liable for failing to exercise due care
and due diligence in the management of their information systems.
In the business world, stockholders, customers, business partners and governments have the
expectation that corporate officers will run the business in accordance with accepted business
practices and in compliance with laws and other regulatory requirements. This is often described
as the "reasonable and prudent person" rule. A prudent person takes due care to ensure that
everything necessary is done to operate the business by sound business principles and in a legal
ethical manner. A prudent person is also diligent (mindful, attentive, and ongoing) in their due
care of the business.
In the field of Information Security, Harris[4] offers the following definitions of due care and due
diligence:
"Due care are steps that are taken to show that a company has taken responsibility for the
activities that take place within the corporation and has taken the necessary steps to help protect
the company, its resources, and employees." And, [Due diligence are the] "continual activities
that make sure the protection mechanisms are continually maintained and operational."
Attention should be paid to two important points in these definitions. First, in due care, steps are taken to show; this means that the steps can be verified and measured, or even produce tangible artifacts. Second, in due diligence, there are continual activities; this means that people are actually doing things to monitor and maintain the protection mechanisms, and these activities are ongoing.
Good information security governance is commonly characterized as follows:
An enterprise-wide issue
Leaders are accountable
Viewed as a business requirement
Risk-based
Roles, responsibilities, and segregation of duties defined
Addressed and enforced in policy
Adequate resources committed
Staff aware and trained
A development life cycle requirement
Planned, managed, measurable, and measured
Reviewed and audited
Business Continuity (BC) and Disaster Recovery (DR) are the watchwords of businesses in the Information Technology (IT) world. The predominant role of Wide Area Networks (WANs) in almost all major fields of business has made it imperative for IT and network managers across the globe to strengthen their network infrastructure and to devise workable BC/DR plans.
The primary objective of a Disaster Recovery plan (also called a Business Continuity plan) is to describe how an organization is to deal with potential natural or human-induced disasters. The disaster recovery plan steps that an enterprise incorporates as part of business management include the guidelines and procedures to be undertaken to respond effectively to, and recover from, disaster scenarios that adversely impact information systems and business operations. Plan steps that are well constructed and implemented will enable organizations to minimize the effects of a disaster and resume mission-critical functions quickly.
An enterprise appoints a Disaster Recovery team within the organization, which is actively involved in creating the plan steps and in implementing and maintaining the plan. As a priority, business organizations create DRP templates as a basis for developing Disaster Recovery plans for the organization. The following steps are taken in creating an efficient disaster recovery or business continuity plan:
Objective
The statement of the objective, including project details, onsite/offsite data, resources, and business type.
Contingency Procedures
The routine to be established when operating in contingency mode should be determined and documented. It should include an inventory of systems and equipment at the location; descriptions of processes, equipment, and software; minimum processing requirements; the location of vital records, with categories; descriptions of data and communication networks; and customer/vendor details.
A resource plan should be developed for operating in emergency mode. The essential procedures to restore normalcy and business continuity must be listed, including the plan steps for recovering lost data and restoring the normal operating mode.
Testing and Maintenance
The dates of testing, the disaster recovery scenarios, and the plans for each scenario should be documented. Maintenance involves a record of scheduled reviews on a daily, weekly, monthly, quarterly, or yearly basis; reviews of plans, teams, activities, and tasks accomplished; and complete documentation review and update.
The disaster recovery plan developed thereby should be tested for effectiveness. To aid in that function, a test strategy and corresponding test plan should be developed and administered. The results obtained should be recorded and analyzed, and the plan modified as required. Organizations realize the importance of business continuity plans that keep their business operations running without hindrance. Disaster recovery planning is a crucial component of today's network-based organizations, one that determines productivity and business continuity.
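Purely as an illustration of what a DRP template might capture, the skeleton below models the plan sections described above as a simple data structure; every field name is an assumption, not a standard.

    from dataclasses import dataclass, field

    @dataclass
    class DisasterRecoveryPlan:
        # Objective: project details, onsite/offsite data, resources, business type
        objective: str
        # Contingency procedures: inventories, minimum requirements, contacts
        system_inventory: list = field(default_factory=list)
        contingency_procedures: list = field(default_factory=list)
        vital_record_locations: dict = field(default_factory=dict)
        # Testing and maintenance: scheduled reviews and recorded test results
        review_schedule: str = "quarterly"
        test_results: list = field(default_factory=list)

    plan = DisasterRecoveryPlan(objective="Resume order processing within 4 hours")
    plan.contingency_procedures.append("Switch DNS to the standby site")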
Disasters, unpredictable by nature, can strike anywhere at any time with little or no warning. Recovering from one can be stressful, expensive, and time-consuming, particularly for those who have not taken the time to think ahead and prepare for such possibilities. However, when disaster strikes, those who have prepared and made recovery plans survive with comparatively minimal loss and/or disruption of productivity.
Disasters can take several different forms. Some primarily impact individuals (e.g., hard drive meltdowns) while others have a larger, collective impact. Disasters can include power outages, floods, fires, storms, equipment failure, sabotage, terrorism, or even epidemic illness. Each of these can at the very least cause short-term disruptions in normal business operation, but recovering from the impact of many of the aforementioned disasters can take much longer, especially if organizations have not made preparations in advance.
Most of us recognize these potential problems as possibilities. Unfortunately, the randomness of some of these disasters lulls some organizations into a false sense of security: "that's not likely to happen here." However, if proper preparations have been made, the disaster recovery process does not have to be exceedingly stressful. Instead, the process can be streamlined, but this facilitation of recovery will only happen where preparations have been made. Organizations that take the time to implement disaster recovery plans ahead of time often ride out catastrophes with minimal or no loss of data, hardware, or business revenue. This in turn allows them to maintain the faith and confidence of their customers and investors.
Disaster Recovery Planning is the factor that makes the critical difference between the organizations that can successfully manage crises with minimal cost and effort and maximum speed, and those that are left picking up the pieces for untold lengths of time, at whatever cost providers decide to charge, forced to make decisions out of desperation.
Detailed disaster recovery plans can prevent many of the heartaches and headaches experienced by an organization in times of disaster. By having practiced plans, not only for equipment and network recovery but also plans that precisely outline the steps each person involved in recovery efforts should undertake, an organization can improve its recovery time and minimize the time that its normal business functions are disrupted. Thus it is vitally important that disaster recovery plans be carefully laid out and regularly updated. Organizations need to put systems in place to regularly train their network engineers and managers. Special attention should also be paid to training any new employees who will have a critical role in the disaster recovery process.
There are several options available for organizations to use once they decide to begin creating their disaster recovery plan. The first and often most accessible resource a business can draw on is to have experienced managers within the organization apply their knowledge and experience to craft a plan that fits the recovery needs of their unique organization. For organizations that do not have this type of expertise in house, there are a number of outside options that can be called on, such as trained consultants and specially designed software.
One of the most common practices used by responsible organizations is a disaster recovery plan
template. While templates might not cover every need specific to every organization, they are a
great place from which to start one's preparation. Templates help make the preparation process
simpler and more straightforward. They provide guidance and can even reveal aspects of disaster
recovery that might otherwise be forgotten.
The primary goal of any disaster recovery plan is to help the organization maintain its business
continuity, minimize damage, and prevent loss. Thus the most important question to ask when
evaluating your disaster recovery plan is, "Will my plan work?" The best way to ensure
reliability of one's plan is to practice it regularly. Have the appropriate people actually practice
what they would do to help recover business functions should a disaster occur. Regular reviews and updates of recovery plans should also be scheduled. Some organizations find it helpful to do this on a monthly basis so that the plan stays current and reflects the needs the organization has today, not just the data, software, etc., it had six months ago.
Business Continuity
Not many years ago, when a business wanted to find ways to prepare itself against disaster and ensure business continuity should catastrophe strike, the bulk of the organization's time, money, and effort would be spent on ways that disasters could (hopefully) be avoided altogether. Often the outcome of an organization's search for ways to protect its most critical business applications (in order to shore up its business continuity if disaster hit) was the discovery that it could potentially avoid harm through the use of redundant data lines. As news of this spread, it did not take long before the words "disaster" and "recovery" were replaced by "continuity" and "resumption."
While a small percentage of corporate entities were still dedicated to disaster recovery as one
way of maintaining business continuity, the bulk of the focus was placed on disaster avoidance.
Over the last several years, however, that paradigm has shifted, and a new kind of disaster preparation has replaced that type of thinking. Avoidance is a great idea in theory, but it cannot always be achieved in real life.
The horrific events of 9/11 brought into sharp focus the shortcomings and inadequacies of avoidance plans as preparation. The urgent need to regain business continuity after the disaster, and the inability of many businesses to gain access to their normal critical business functions, were a wake-up call for corporations everywhere to reevaluate the plans they had previously put in place to mitigate such events. 9/11 made many organizations realize the vast inadequacy of their current plans as they saw the heavy price paid by many organizations for their unwitting vulnerability. Attempting to avoid disaster was a good place to start, but now organizations realized they must also prepare for unavoidable circumstances.
One of the most common areas of vulnerability for organizations when a disaster strikes is the
loss of their WAN connectivity. Earthquakes, floods, and acts of war can certainly disrupt the
use of an organization's data lines. But loss of WAN connectivity can happen even without a
major catastrophe. Much simpler threats such as the accidental cutting of data lines or equipment
failure can have the same devastating net result on connectivity. Whether the cause is a construction mishap from the new building next door, or the effects of a far more serious event like a flood, fire, or terrorist attack, if an organization loses its connectivity, its business continuity is often lost as well, and it is functionally in a state of disaster.
The loss of WAN connectivity can have serious consequences for an organization's daily
business activities. Emails, financial transactions, ERP/CRM systems, order placement and
processing, are just a few of the critical operations affected by WAN connectivity. If
connectivity is lost these activities can be severely slowed or halted altogether until data lines
can be recovered. Thus, having a functioning WAN system is critical for productive business
operation and should be an integral part of any disaster recovery plan.
There are several methods available for organizations that want to ensure high availability of WAN connectivity as part of their disaster recovery plan. The earliest techniques used to back up data lines were complex and cumbersome. They used multiple data lines connected to a
programmable router. Complex programming allowed data to be passed over multiple
connections which helped reduce vulnerability to a single line and helped protect against
backbone failure. This technique, though far from streamlined, was better than no back-up
system at all and did help maintain at least some business continuity.
Since that time, the technology has evolved and a more elegant technique is available. This technique involves the use of intelligent devices that can handle multiple data lines of different speeds from multiple providers simultaneously. These devices, called Router Clustering Devices, intelligently detect whether a line, component, or service is failing and then switch the flow of data to other available, working lines. These advancements provide better protection for an organization's data flow. They reduce the potential mess of disaster recovery, and in turn increase business continuity when disasters do happen, without the complexity and awkwardness of the old system.
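To make the failover idea concrete, here is a deliberately simplified sketch; real router clustering devices do this at the network layer, and the probe addresses and health check below are assumptions for illustration.

    import socket

    # Candidate WAN links in order of preference; each is probed by attempting
    # a TCP connection to a known endpoint reachable via that link.
    LINKS = [
        ("provider-a", "198.51.100.1"),
        ("provider-b", "203.0.113.1"),
    ]

    def link_is_healthy(probe_ip, port=443, timeout=2.0):
        try:
            with socket.create_connection((probe_ip, port), timeout=timeout):
                return True
        except OSError:
            return False

    def select_active_link():
        # Fail over to the first healthy line; None signals a true outage.
        for name, probe_ip in LINKS:
            if link_is_healthy(probe_ip):
                return name
        return None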
Business Continuity planning is an essential part of running any modern organization that takes
its business and its clients seriously. With so many potential business disasters looming that can
befall an organization at any time, it seems unwise not to take actions to prepare for and try to
prevent the devastating impact of such catastrophes.
There is a multiplicity of benefits in planning for Business Continuity within your organization.
Not only will your data, hardware, software, etc., be better protected, but the people that
compose your organization will be better safeguarded should a disaster occur. In addition,
employees will be informed and rehearsed as to what actions to take to immediately start the
recovery process and ensure business continuity if disaster strikes.
Without this type of preparation, any unexpected event can severely disrupt the operation, continuity, and effectiveness of your business. Disabling events can come in all shapes and varieties. They can vary from the more common calamities, like hard drive corruption, building fires, or flooding, to the rarer yet more severe and often longer-lasting disruptions that can occur on a city-wide or even national basis: events such as disruptions in transport (oil crises, metro shut-downs, transport worker strikes, etc.), infrastructure weakening from terrorist attacks, or even severe loss of staff due to illness like a pandemic flu. All of these strike a blow at an organization's struggle for business continuity.
For smaller companies, the impact of the above-mentioned and even lesser disasters can hit much harder. For example, the unexpected non-availability of key workers alone could be catastrophic, potentially causing as much disruption to business continuity as technological hardship, especially if it occurs during the height of the company's busy season. If only one person is trained to do particular and/or essential tasks, their unexpected absence can severely disrupt productivity.
Thus, putting business continuity plans into practice in your organization now can prepare your business for almost any potential disaster, help ensure that you will be able to maintain continuity of your business practices, and reduce or even possibly remove the effect such calamities could have on your organization.
In addition to the above mentioned benefits, the following are also advantages of business
continuity planning:
If it is not already, your organization may soon be required to incorporate some type of Business Continuity Management planning into its policies by either corporate governance or government legislation.
With an effective and practiced Business Continuity plan, your insurance company may well
view you more favorably should some sort of disaster ever require you to call upon their
services.
In creating a Business Continuity plan, the process of evaluating potential weaknesses and planning how to deal with what could possibly go wrong often offers management the chance to gain a better understanding of the minutiae of their business, and ultimately helps an organization identify ways to strengthen any shortcomings. Frequently, the greatest and most immediate value of the Business Continuity planning process is the awareness one gains of the details of the business, not necessarily the streamlining of how to handle disaster as an organization. Business Continuity planning can often create awareness of useful ways to improve an organization, sometimes even in areas that had previously gone unconsidered.
Business Continuity planning will make your organization more robust. It can strengthen your organization not only against large-scale problems; through detailed planning, it can also render moot smaller problems that might otherwise have caused continuity interruptions.
A Business Continuity plan will show your investors that you take business seriously, that you are prepared, and that you intend to maintain productivity regardless of difficulty. This preparation will also show your staff that you have their employment and personal well-being in mind. It will show that you care.
Informing your customers that you have a Business Continuity plan, and that you have taken steps to ensure the continuity of your productivity so that you can keep your commitments to them, lets them know that you consider the provision of quality service a high priority, which in turn instills confidence in your business.
A Business Continuity plan helps protect your organization's image, brand, and reputation. Being
known as a reliable company is always good for business.
And finally, a Business Continuity plan can significantly reduce your losses if you are ever hit by disaster.