ESTABLISHING A MANAGEMENT FRAMEWORK FOR SECURITY AND CONTROL
When it comes to information system security and control, technology is not the main problem. Even
the best technology can be readily defeated in the absence of sound management practices, which
provide the foundation on which the technology rests. For instance, experts have estimated that more
than 90 percent of successful cyberattacks could have been stopped by the technology available at the
time; the attacks succeed so frequently because of insufficient human attention.
A strong security policy and set of controls are necessary for protecting information resources.
ISO 17799, an international set of standards for security and control, offers helpful guidelines. It
outlines best practices for information system security and control, covering security policy, business
continuity planning, physical security, access control, compliance, and the creation of a security
function inside the organization.
Types of Information Systems Controls
• General controls: General controls apply to all computerized applications and consist of a
combination of hardware, software, and manual procedures that create an overall control
environment. They include software controls, physical hardware controls, computer operations
controls, data security controls, implementation controls, and administrative controls.
– Software – Monitor the use of system software and prevent unauthorized access to software
programs and computer programs. System software is an especially important control area
because it performs overall control functions for the programs that directly process data and
data files.
– Hardware – Ensure that computer hardware is physically secure, and check for equipment
malfunction. Computer equipment should also be specially protected against fire and extremes
of temperature and humidity. Organizations that depend on their computers must make
provisions for backup or continued operation to maintain uninterrupted service.
– Computer operations – Oversee the work of the computer department to ensure that
programmed procedures are consistently and correctly applied to the processing and storage of
data. These controls include the setup of computer processing jobs, computer operations, and
backup and recovery procedures for processing that ends abnormally.
– Data security – Ensure that valuable business data files maintained on disk or tape are not
subject to unauthorized access, change, or destruction while they are in use or in storage.
– Implementation – Audit the systems development process at various points to ensure that it is
properly controlled and managed. The systems development audit examines whether formal
reviews by users and management were conducted at various stages of development, the level
of user involvement at each stage of implementation, and whether a formal cost-benefit
methodology was used to establish system feasibility. The audit should also look for the use of
quality assurance techniques and controls during program development, conversion, and
testing, and for complete and accurate system, user, and operations documentation.
– Administrative – Establish formal standards, regulations, practices, and control
disciplines to guarantee the effective implementation and enforcement of the
organization's general and application controls.
• Application controls: Specific controls unique to each computerized application, such as order
processing or payroll. They consist of controls applied from the business functional area of a
particular system and from programmed procedures. Application controls, which include both
automated and manual procedures, ensure that only authorized data are completely and
accurately processed by that application.
– Input – Input controls check data for accuracy and completeness when they enter the system.
There are specific input controls for input authorization, data conversion, data editing, and
error handling.
– Processing – Processing controls establish that data are complete and accurate during
updating. Run control totals, computer matching, and programmed edit checks are used as
processing controls.
– Output – Output controls ensure that the results of computer processing are accurate,
complete, and properly distributed.
Not all of the application controls discussed here are used in every information system. Some systems
require more of these controls than others, depending on the importance of the data and the nature of
the application.
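To make the input, processing, and output controls above concrete, here is a minimal Python sketch. The record fields, validation rules, and function names are hypothetical illustrations, not taken from the text:

```python
# Illustrative sketch of application controls; field names are hypothetical.

def validate_order(record):
    """Input control: check an order record for completeness and accuracy."""
    errors = []
    for field in ("order_id", "customer_id", "amount"):
        if field not in record:                          # completeness check
            errors.append(f"missing field: {field}")
    if "amount" in record and not (0 < record["amount"] < 1_000_000):
        errors.append("amount out of reasonable range")  # programmed edit check
    return errors

def run_control_total(records):
    """Processing control: record count and amount total, recomputed each run
    and compared against the totals established when the batch was created."""
    return (len(records), sum(r["amount"] for r in records))

batch = [
    {"order_id": 1, "customer_id": "C10", "amount": 250.0},
    {"order_id": 2, "customer_id": "C11", "amount": 99.5},
]
assert all(validate_order(r) == [] for r in batch)   # input controls pass
count, total = run_control_total(batch)
assert (count, total) == (2, 349.5)                  # matches the batch totals
```

An output control would apply the same idea at the end of the cycle, for example comparing the number of reports printed and distributed against the number expected.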
Risk Assessment
• Determines the level of risk to the firm if a specific activity or process is not properly controlled
• Before committing resources to controls, an organization must know which assets require
protection and how vulnerable those assets are. A risk assessment answers these questions and
helps the firm determine the most cost-effective set of controls for protecting its assets.
Working with information systems specialists, business managers can determine the value of
information assets, points of vulnerability, the likely frequency of a problem, and the potential
for damage.
• One problem with risk assessment and other methods for quantifying security costs and
benefits is that firms cannot always accurately estimate the likelihood that attacks on their
information systems will actually occur. Nevertheless, management will appreciate some effort
to anticipate, budget for, and control direct and indirect security costs.
• The risk assessment process produces a plan to minimize overall cost and maximize defenses.
To decide which controls to use, information systems builders must weigh various control
techniques against one another and against their relative cost-effectiveness. A control
weakness at one point may be compensated for by a strong control at another. It may not be
cost-effective to build tight controls at every point in the processing cycle if the areas of
greatest risk are secure or if compensating controls exist elsewhere. The combination of all the
controls developed for a particular application determines its overall level of control.
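One common way to quantify the risk described above, though the text does not name it, is an expected annual loss calculation: the likelihood that a problem occurs in a given year multiplied by the damage if it does. A minimal sketch with hypothetical figures:

```python
def expected_annual_loss(prob_per_year, loss_if_occurs):
    """Expected annual loss for one exposure: likelihood x potential damage."""
    return prob_per_year * loss_if_occurs

# Hypothetical exposure: a power failure with a 30% annual likelihood
# and an average loss of $75,000 per occurrence.
ale = expected_annual_loss(0.30, 75_000)
assert ale == 22500.0
```

Ranking exposures by this figure lets management see which controls pay for themselves and which cost more than the losses they prevent.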
Security Policy
• Policy ranking information risks, identifying acceptable security goals, and identifying the
mechanisms for achieving these goals
– Acceptable Use Policy (AUP) – The allowed uses of the company's computing resources
and hardware, including desktop and laptop computers, wireless devices, telephones,
and the Internet, are outlined in an acceptable use policy (AUP). The company's privacy,
user accountability, and personal usage of its tools and networks should all be made
clear in the policy. A good AUP outlines behaviors that are acceptable and unacceptable
for each user as well as the consequences for noncompliance.
– Authorization policies – For various tiers of users, different levels of access to
information assets are determined by authorization policies. Systems for managing
authorizations determine when and where a user is allowed to access particular areas of
a website or a company database. Such systems restrict each user's access to a system
to the areas that person is authorized to use, according to data defined by a set of
access rules.
• Companies need to create a cogent corporate policy that considers the risks' nature, the
information assets that need to be safeguarded, the processes and technology needed to handle
the risks, as well as implementation and auditing systems.
• Statements classifying information threats, setting reasonable security objectives, and specifying
the methods for accomplishing these objectives make up a security policy. What informational
resources are the most crucial for the company? Who in the company creates and maintains this
data? What security measures are currently in place to safeguard the data? For each of these
assets, what amount of risk is management ready to accept? For instance, is it willing to lose
client credit information once every ten years? Or will it construct a credit card data security
structure that can endure the catastrophe that occurs once every hundred years? To attain this
degree of acceptable risk, management must project the cost.
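The authorization policies described above come down to access rules checked on every request: each user's role determines which resources and actions are permitted. A minimal Python sketch with a hypothetical role-based rule table:

```python
# Hypothetical access rules keyed by user role; real systems would load
# these from an authorization management database.
ACCESS_RULES = {
    "clerk":   {"customer_db": {"read"}},
    "manager": {"customer_db": {"read", "update"}, "payroll": {"read"}},
}

def is_authorized(role, resource, action):
    """Check an access request against the rules defined for the user's role."""
    return action in ACCESS_RULES.get(role, {}).get(resource, set())

assert is_authorized("manager", "payroll", "read")
assert not is_authorized("clerk", "customer_db", "update")   # beyond clerk's tier
```

Unknown roles or resources fall through to an empty rule set, so anything not explicitly granted is denied.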
Ensuring Business Continuity
• Downtime: Period of time in which a system is not operational
– As companies come to depend more and more on digital networks for their revenue and
operations, they need to take additional steps to ensure that their systems and applications are
always available. Many factors can degrade website performance, including denial-of-service
attacks, network failure, heavy Internet traffic, and exhausted server resources. Computer
failures, interruptions, and downtime can translate into disgruntled customers, millions of
dollars in lost sales, and the inability to perform critical internal transactions.
• Fault-tolerant computer systems: Redundant hardware, software, and power supply
components to provide continuous, uninterrupted service
– Fault-tolerant computers contain extra memory chips, processors, and disk storage devices to
back up a system and keep it running in the event of failure. They use special software routines
or self-checking logic built into their circuitry to detect hardware failures and automatically
switch to a backup device. Parts from these computers can be removed and repaired without
disruption to the computer system.
• High-availability computing: Designing to maximize application and system availability
– For firms that process a large volume of electronic transactions or depend on digital networks
for their internal operations, high-availability computing is a minimum requirement. High-
availability computing requires an assortment of tools and technologies to ensure maximum
performance of computer systems and networks.
– High-availability computing should be distinguished from fault tolerance. Both are designed to
maximize application and system availability, and both use backup hardware resources.
However, fault tolerance promises continuous availability and the complete elimination of
recovery time, whereas high-availability computing helps firms recover quickly from a crash.
• Load balancing: Distributes access requests across multiple servers
– Large quantities of access requests are split across several servers using load balancing.
So that no single device is overloaded, queries are sent to the server that is most readily
accessible. Requests are sent to a more capable server if one server begins to get
overloaded.
• Mirroring: Backup server that duplicates processes on primary server
– Mirroring uses a backup server that duplicates all the processes and transactions of the
primary server. If the primary server fails, the backup server can take over immediately without
any interruption in service. However, server mirroring is very expensive because each server
must be mirrored by an identical server whose only purpose is to be available in the event of a
failure.
• Recovery-oriented computing: Designing computing systems to recover more rapidly from
mishaps
– Recovery-oriented computing is a method that researchers are looking at as a means to
help computers recover even more quickly from errors. This work entails developing
skills and tools to assist operators in locating the causes of defects in multicomponent
systems and promptly correcting their errors. It also involves creating systems that can
recover fast.
• Disaster recovery planning: Plans for restoration of computing and communications disrupted
by an event such as an earthquake, flood, or terrorist attack
– Disaster recovery plans focus primarily on the technical issues involved in keeping systems up
and running, such as which files to back up and the maintenance of backup computer systems
or disaster recovery services.
• Business continuity planning: Plans for handling mission-critical functions if systems go down
– Planning for business continuity focuses on how the organization can resume operations
following a calamity. The business continuity plan identifies key business processes and
establishes strategies for addressing mission-critical operations in the event of system
failure. In order to identify the systems and business processes that are most important
to the organization, business managers and information technology professionals must
collaborate on both types of plans. To determine the company's most crucial systems
and the effects a system outage will have on the business, they must undertake a
business impact study. Management must decide which areas of the business need to
be restored first and how long the firm can operate without its systems.
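The load-balancing technique in the list above can be sketched in a few lines: route each new request to whichever server currently has the fewest active connections. The server names and load figures here are hypothetical:

```python
def least_loaded(servers):
    """Route the next request to the server with the fewest active connections."""
    return min(servers, key=servers.get)

# Hypothetical load snapshot: active connections per server.
load = {"web1": 120, "web2": 45, "web3": 87}
assert least_loaded(load) == "web2"   # no single device is overwhelmed
```

Production load balancers use richer signals (response time, server capacity, health checks), but the principle of steering traffic away from swamped devices is the same.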
Auditing
• MIS audit: Identifies all of the controls that govern individual information systems and assesses
their effectiveness
– An MIS audit lists every control that oversees a specific information system and
evaluates its efficacy. The auditor has to have a solid grasp of operations, physical
facilities, telecommunications, security systems, security objectives, organizational
structure, personnel, manual procedures, and specific applications to do this. The
auditor often conducts interviews with important users and operators of a particular
information system to learn about their practices. Examinations are conducted of
security, application controls, overall integrity controls, and control disciplines. Using
automated audit tools, if necessary, the auditor should run tests and follow a sample
transaction's path through the system.
• Security audits: Review technologies, procedures, documentation, training, and personnel
– Technologies, practices, documentation, training, and employees should all be
examined during security audits. To assess how the technology, information systems
personnel, and company workers would react in the event of an attack or disaster, a
particularly comprehensive audit may even mimic one.
TECHNOLOGIES AND TOOLS FOR SECURITY AND CONTROL
Businesses can use a variety of techniques and technology to detect or guard against infiltration.
They consist of encryption, firewalls, intrusion detection systems, and authentication technologies.
Additionally, methods and tools are available to help businesses improve the reliability of their
software.
Access Control – Consists of all the policies and procedures a company uses to prevent improper access
to systems by unauthorized insiders and outsiders
• Access control refers to all of a company's rules and practices used to prevent unauthorized
insiders and outsiders from improperly accessing systems. A user has to be approved and
authenticated in order to acquire access. The capacity to verify that a person is who they say
they are is referred to as authentication. Software for access control is created to restrict access
to systems and data to those who have been given permission to do so.
• Authentication:
– Passwords – Authentication is often established by using passwords known only to authorized
users. An end user uses a password to log on to a computer system and may also use
passwords for accessing specific systems and files. However, users often share passwords,
forget them, or choose poor passwords that are easy to guess, which compromises security.
Passwords can also be "sniffed" if transmitted over a network or stolen through social
engineering.
– Tokens, smart cards – Systems sometimes use smart cards and other tokens for access control.
A token is a physical device, similar to an identification card, that is designed to prove the
identity of a single user.
– Biometric authentication – Biometric authentication offers a promising new way to
authenticate system users that can overcome some of the limitations of passwords. It is based
on the measurement of a physical or behavioral trait that makes each individual unique. The
system compares a person's unique characteristics, such as a fingerprint, face, or retinal image,
against a stored profile of those characteristics to determine whether there are any differences
between them and the stored profile. If the profiles match, access is granted. The technology is
expensive, and fingerprint and facial recognition systems have only recently begun to be used
for security applications.
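A standard safeguard implied by the password discussion above, though not spelled out in the text, is to store a salted hash of each password rather than the password itself, so a stolen credential file cannot be read directly. A minimal sketch using Python's standard library:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted PBKDF2 hash; only the salt and hash are stored."""
    salt = salt or os.urandom(16)          # random salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Recompute the hash and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("guess123", salt, stored)
```

The iteration count (100,000 here) deliberately slows each guess, which blunts the dictionary attacks that weak, easily guessed passwords invite.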
Firewalls, Intrusion Detection Systems, and Antivirus Software
• Firewalls: Hardware and software controlling flow of incoming and outgoing network traffic
– Firewalls are becoming more and more important as organizations increasingly expose their
networks to Internet traffic. A firewall is a combination of hardware and software that controls
the flow of incoming and outgoing network traffic. Firewalls are generally placed between the
organization's private internal networks and untrusted external networks such as the Internet,
although they can also be used to protect one part of a company's network from the rest of the
network.
– The firewall acts like a gatekeeper, examining each user's credentials before access to a
network is granted. It identifies names, Internet Protocol (IP) addresses, applications, and other
characteristics of incoming traffic and checks this information against the access rules that the
network administrator has programmed into the system. The firewall prevents unauthorized
communication into and out of the network, allowing the organization to enforce a security
policy on traffic flowing between its network and other untrusted networks, including the
Internet.
– To create a good firewall, an administrator must carefully write and maintain internal rules
identifying the people, applications, and IP addresses that are allowed or rejected. Firewalls can
deter, but not completely prevent, network penetration by outsiders and should be viewed as
one element in an overall security plan. To deal effectively with Internet security, broader
corporate policies, user responsibilities, and security awareness training may be required.
• Intrusion detection systems: Full-time monitoring tools placed at the most vulnerable points of
corporate networks to detect and deter intruders
– In addition to firewalls, commercial security vendors now provide intrusion detection tools and
services to protect against suspicious network traffic and attempts to access files and
databases. Intrusion detection systems feature full-time monitoring tools placed at the most
vulnerable points, or "hot spots," of corporate networks to detect and deter intruders
continually. The system generates an alarm if it finds a suspicious or anomalous event. Scanning
software looks for patterns indicative of known methods of computer attack, such as bad
passwords, checks to see whether important files have been removed or modified, and sends
warnings of vandalism or system administration errors. Monitoring software examines events
as they happen to discover security attacks in progress. The intrusion detection tool can also be
customized to shut down a particularly sensitive part of a network if it receives unauthorized
traffic.
• Antivirus software: Software that checks computer systems and drives for the presence of
computer viruses and can eliminate the virus from the infected area
– Defensive technology plans for both individuals and businesses must include antivirus
protection for every computer. Antivirus software checks computer systems and drives for the
presence of computer viruses and often eliminates the virus from the infected area. However,
most antivirus software is effective only against viruses already known when the software was
written. To remain effective, the antivirus software must be continually updated.
• Wi-Fi Protected Access (WPA): Specification from the Wi-Fi Alliance that strengthens wireless
security with continually changing encryption keys and stronger user authentication
– Despite its flaws, WEP provides some margin of security if Wi-Fi users remember to activate it.
Corporations can further improve WEP security by using it in conjunction with virtual private
network (VPN) technology when accessing internal corporate data over a wireless network.
– Wi-Fi equipment vendors have also developed stronger security standards. The Wi-Fi Alliance
industry trade group issued the Wi-Fi Protected Access (WPA) specification, which can be used
to upgrade 802.11b-compatible equipment and works with future wireless LAN products. WPA
improves data encryption by replacing WEP's static encryption keys, which are easier to crack,
with longer, 128-bit keys that change continually. To strengthen user authentication, WPA
provides a mechanism based on the Extensible Authentication Protocol (EAP) that works with a
central authentication server to authenticate each user on the network before the user can join
it. WPA also employs mutual authentication so that wireless users are not lured into rogue
networks that might steal their network credentials, and data packets can be checked to
ensure they are part of a current network session and have not been replayed by hackers to
deceive network users.
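The firewall's rule-checking behavior described earlier in this section can be sketched as a simple packet filter: each arriving packet is compared against an ordered rule list, and anything not explicitly allowed is denied. The rule set below is hypothetical:

```python
import ipaddress

# Hypothetical rule set: (source network, destination port, action),
# evaluated top to bottom, like a network administrator's access rules.
RULES = [
    (ipaddress.ip_network("10.0.0.0/8"), 22, "allow"),   # SSH from internal hosts
    (ipaddress.ip_network("0.0.0.0/0"), 443, "allow"),   # HTTPS from anywhere
]

def filter_packet(src_ip, dst_port):
    """Return the action for a packet; default is to deny unmatched traffic."""
    for network, port, action in RULES:
        if ipaddress.ip_address(src_ip) in network and dst_port == port:
            return action
    return "deny"

assert filter_packet("10.1.2.3", 22) == "allow"       # internal SSH permitted
assert filter_packet("203.0.113.9", 22) == "deny"     # external SSH blocked
assert filter_packet("203.0.113.9", 443) == "allow"   # public HTTPS permitted
```

The default-deny final line is the key design choice: the firewall enforces policy by blocking everything the administrator has not explicitly authorized.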
Encryption and Public Key Infrastructure
For the protection of sensitive data sent over networks such as the Internet, many companies rely on
encryption. Encryption is the coding and scrambling of messages to prevent unauthorized access to, or
understanding of, the data being transmitted. A message can be encrypted by applying a secret
numerical code, called an encryption key, so that it is transmitted as a scrambled set of characters. (The
key consists of a large group of letters, numbers, and symbols.) To be read, the message must be
decrypted (unscrambled) with a matching key.
• Public key encryption: Uses two different keys, one private and one public. The keys are
mathematically related so that data encrypted with one key can be decrypted using only the
other key
– A public key encryption system uses a pair of keys: one to encrypt the data during transmission
and one to unlock it upon receipt. The sender locates the recipient's public key in a directory
and uses it to encrypt a message. The message is sent in encrypted form over the Internet or a
private network. When the encrypted message arrives, the recipient uses his or her private key
to decrypt the data and read the message.
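The mechanism above can be illustrated with toy RSA numbers. Real keys are hundreds of digits long, so this sketch demonstrates only the mathematical relationship between the two keys, not a secure implementation:

```python
# Toy RSA demonstration with tiny textbook numbers (no security whatsoever).
p, q = 61, 53
n = p * q                 # modulus 3233, part of both the public and private key
e = 17                    # public exponent: the recipient publishes (e, n)
d = 2753                  # private exponent, kept secret; e*d = 1 mod (p-1)(q-1)

message = 65                              # a message encoded as a number < n
ciphertext = pow(message, e, n)           # sender encrypts with the PUBLIC key
decrypted = pow(ciphertext, d, n)         # recipient decrypts with the PRIVATE key

assert decrypted == message               # only the private key recovers it
assert ciphertext != message              # transmitted form is scrambled
```

Because the keys are mathematically related, data encrypted with one key can be decrypted only with the other, which is exactly the property the definition above states.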
• Message integrity: The ability to be certain that the message being sent arrives at the proper
destination without being copied or changed
– Because public networks are less secure than private networks, encryption is especially
helpful for securing communications via the Internet and other public networks. In
addition to addressing the issues of message integrity and authentication, encryption
helps safeguard the transfer of payment data, such as credit card information. The
capacity to ensure that a message is transmitted and that it reaches its intended
recipient without being duplicated or altered is known as message integrity.
• Digital signature: A digital code attached to an electronically transmitted message that is used
to verify the origin and contents of a message
– Authentication is aided by digital signatures and certificates. Digital signatures now have
the same legal standing as handwritten ones thanks to the Electronic Signatures in
Global and National Commerce Act of 2000. A digital signature is a code added to an
electronic communication that is used to confirm the message's origin and contents. It
offers a means of connecting a communication with a sender, serving a purpose
comparable to a written signature. Someone must be able to confirm that the signature
genuinely belongs to the person who provided the data and that the data were not
changed after being digitally signed for an electronic signature to be recognized as valid
in court.
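A digital signature reverses the roles of the two keys: the sender signs a digest of the message with the private key, and anyone holding the public key can verify both origin and contents. A toy sketch with small RSA numbers (demonstration only, not secure):

```python
import hashlib

# Toy RSA parameters reused as a signing key pair (demonstration only).
n, e, d = 3233, 17, 2753

def digest(msg):
    """Reduce a SHA-256 message hash to a number below the modulus."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

msg = b"Pay supplier $500"
signature = pow(digest(msg), d, n)     # signer applies the PRIVATE key

# Anyone can verify the signature with the signer's PUBLIC key:
assert pow(signature, e, n) == digest(msg)
# If the message were altered after signing, its digest would change and
# verification would fail, which is what gives the signature legal force.
```

Tying the signature to a hash of the contents is what lets a court confirm both that the signature belongs to the sender and that the data were not changed after signing.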
• Digital certificates: Data files used to establish the identity of users and electronic assets for
protection of online transactions
– Digital certificate systems rely on a trusted third party known as a certificate authority (CA) to
validate a user's identity. The CA system can be run as a function inside an organization or by
an outside company such as VeriSign. The CA verifies a digital certificate user's identity offline.
This information is put into a CA server, which generates an encrypted digital certificate
containing owner identification information and a copy of the owner's public key. The
certificate authenticates that the public key belongs to the designated owner. The CA makes its
own public key available publicly, perhaps online.
An encrypted message's recipient can access the sender's public key and identity details
by decrypting the digital certificate that was attached to the message using the CA's
public key, confirming that it was really issued by the CA. The recipient can send an
encrypted response using this data. A credit card user and a merchant, for instance,
might use the digital certificate system to verify that their digital certificates were issued
by a reputable and approved third party before exchanging data.
• Public Key Infrastructure (PKI): Use of public key cryptography working with a certificate
authority
– Public key infrastructure (PKI), the use of public key cryptography working with a certificate
authority, is now increasingly used to provide secure authentication of identity online.
• Secure Sockets Layer (SSL) and its successor Transport Layer Security (TLS): Protocols for
secure information transfer over the Internet; enable client and server computer encryption and
decryption activities as they communicate during a secure Web session.
– The protocols Secure Sockets Layer (SSL) and Transport Layer Security (TLS), which
succeeded it, are used to send information securely over the Internet. During a secure
Web connection, they allow client and server computers to control encryption and
decryption operations while they interact with one another.
• Secure Hypertext Transfer Protocol (S-HTTP): Used for encrypting data flowing over the
Internet; limited to Web documents, whereas SSL and TLS encrypt all data being passed
between client and server.
– While SSL and TLS encrypt all data being sent between client and server, Secure
Hypertext Transfer Protocol (S-HTTP) is another protocol used to encrypt data moving
over the Internet but is only applicable to Web content.
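In Python's standard library, the `ssl` module provides the client side of the TLS sessions described above. A minimal sketch of setting up a secure context, with the connection itself left as a commented illustration:

```python
import ssl

# ssl.create_default_context() selects secure protocol versions and enables
# certificate verification against the system's trusted certificate authorities,
# tying TLS back to the digital certificate machinery described earlier.
context = ssl.create_default_context()

assert context.verify_mode == ssl.CERT_REQUIRED   # server certificate must validate
assert context.check_hostname                     # hostname must match the certificate

# A client would then wrap an ordinary socket to start the encrypted session:
#   with socket.create_connection(("example.com", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="example.com") as tls:
#           ...client and server encrypt and decrypt as they communicate...
```

The TLS handshake negotiated by `wrap_socket` is where the client and server agree on keys and begin the encryption and decryption activities the definition above refers to.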