Management Information and Control System
Study Material

First Edition: March, 2011
Price: Rs. 450.00

Designed & Printed at:
3D Printers and Publishers
Balkot, Bhaktapur
Tel: 5211358, 5211064

Print and Art Service
Bagbazar, Kathmandu
Tel: 4244419, 4239154
Preface
This study material on the subject of “MICS” has been exclusively designed and developed for the students of
Chartered Accountancy Professional [CAP]-III Level. It aims to develop candidates' capability in performing
and reporting on audit and assurance engagements so as to increase the reliability of financial and non-financial
information; in identifying significant risks and applying risk assessment tools to the engagement; in identifying,
gathering and documenting evidence and assessing its sufficiency and appropriateness for an audit engagement;
and in providing comprehensive audit and business assurance services, by testing their ability to integrate and
apply their knowledge of auditing to realistic problems.
It broadly covers the chapters of Legal Compliance, Practice Management, Audit Process, Audit Strategy
and Planning, Audit Techniques and Procedures, Audit Reporting, Special Audits, Corporate Governance and
Audit Committee, and Audit under Computerized Environment. Practical problems are included at the end of
each chapter; students can use them for self-assessment of their progress after thoroughly reading the material.
Students are requested to acquaint themselves with the syllabus of the subject and read each topic thoroughly
to gain a sound understanding of each chapter. We believe this material will be of great help to the students of
CAP-III. However, they are advised not to rely solely on this material; they should keep themselves updated and
refer to the recommended textbooks given in the CA Education Scheme and Syllabus, along with other relevant
materials on the subject.
Last but not least, we acknowledge the efforts of CA. Ramesh Dhital and CA. Anima Pokharel, who have
meticulously assisted in preparing and updating this study material. Likewise, we thank CA. Chandra Kanta
Bhandari, who reviewed this study material and helped bring it into this comprehensive shape.
Due care has been taken to make every chapter simple, comprehensive and relevant for the students. In case
students need any clarification, or have feedback or suggestions for further improvement of the material, these
may be forwarded to [email protected] of the Institute.
CHAPTER 2 17-58
2.0 Different Types of Information System Case Study
2.1 Types of Information System according to Organizational Hierarchy 19
2.1.1 Different Kinds of Information Systems 19
2.2 Types of Information System to Support the Organization 21
2.2.1 Transaction Processing System 22
2.2.2 Knowledge Work and Office System 26
2.2.3 Management Information System 27
2.2.4 Decision Support System 30
2.2.5 Executive Support Systems or Executive Information Systems 33
2.2.6 Expert Support System 35
2.3 Sales and marketing information systems 53
2.4 Manufacturing and Production Information Systems 54
2.5 Finance and Accounting Information Systems 56
2.6 Human Resources Information Systems 57
CHAPTER 3 59-172
3.0 Information Technology Strategy and Trends
3.1 Enterprise, Strategy and Vision 61
3.1.1 Internal and External Business Issues 63
3.1.2 Factors Influencing IT 72
3.2 Assess Current and Future IT Environments 73
3.2.1 Current Status of IT 73
3.2.2 IT Risk and Opportunity 150
3.3 IT Strategy Planning 156
CHAPTER 4 173-218
4.0 System Development Life Cycle
4.1 Definition, Stages of System Development 175
4.2 Underlying Principles of System Development 184
4.3 Phases of System Development 189
4.4 Computer Aided System Engineering (CASE) 190
4.5 Models of System Development 192
4.6 Integration and System Testing 198
4.7 System Maintenance 206
4.8 Project Management Tools 211
CHAPTER 5 219–266
5.0 System Analysis and Design, Case study
5.1 Strategies for System Analysis and Problem Solving 221
5.2 Concept of Data and Process Modeling 228
5.3 Strategies for System Design 245
5.4 Input Design 250
5.5 Output design 265
CHAPTER 6 267-306
6.0 E-Commerce and Case Study of Inter Organizational Systems
6.1 Introduction to E-Commerce 269
6.2 Features of E-commerce 275
6.3 Categories of e-Commerce 279
6.4 Electronic Payment Processes 281
6.5 Emerging Technologies in IT Business Environment 283
CHAPTER 7 307-318
7.0 E-business Enabling Software Packages Case Study
7.1 Enterprise Resource Planning (ERP) 309
7.2 Introduction to Supply Chain Management (SCM) 313
7.3 Sales Force Automation 314
7.4 Customer Relationship Management 315
CHAPTER 8 319-380
8.0 Information System Security, Protection and Control
8.1 System Vulnerability and Abuse 321
8.2 System Quality Problems 336
8.3 Creating a Control Environment 339
8.4 Protection of digital network 346
8.5 Evaluation of IS 374
8.6 Development of Control Structure 376
CHAPTER 9 381-398
9. Disaster Recovery and Business Continuity Planning
9.1 Disaster Recovery Planning 383
9.2 Data backup and recovery 385
9.3 High availability planning of servers 394
9.4 IT Outsourcing 395
CHAPTER 10 399-428
10. Auditing and Information System
10.1 IT audit strategies 405
10.2 Review of DRP/BCP 412
10.3 Evaluation of IS 413
10.4 Standards for IS Audit 417
CHAPTER 11 429-450
11. Ethical and Legal Issues in Information Technology
11.1 Patents, Trademark and Copyright 431
11.2 Significance of IT Law 433
11.3 Digital Signature and authentication of digitized information 434
11.4 Digital Signature and Verification 440
11.5 Introduction to Digital Data Exchange and digital reporting standard-XML and XBRL 445
11.6 Brief Description of COSO, COBIT, CMM, ITIL, ISO/IEC27001 448
CHAPTER 12 451-471
12. Electronic Transaction Act 2063
12.1 Electronic record and Digital Signature 453
12.2 Dispatch, Receipt and Acknowledgement of Electronic Records 454
12.3 Provisions Relating to Controller and Certifying Authority 456
12.5 Provisions Relating to Digital Signature and Certificates 460
12.6 Functions, Duties and Rights of Subscriber 461
12.7 Electronic Record and Government use of Digital Signature 462
12.8 Provisions Relating to Network Service 463
12.9 Offence Relating To Computer 464
12.10 Provisions Relating to Information Technology Tribunal 467
12.11 Provisions Relating to Information Technology Appellate Tribunal 469
12.12 Miscellaneous 470
Chapter 1: Organizational Management and Information System
1.6 IT Governance:
IT governance refers to the framework and processes organizations establish to ensure effective
management, decision-making, and control of their information technology (IT) resources. One widely
recognized standard for IT governance is Control Objectives for Information and Related
Technologies (COBIT), developed by the Information Systems Audit and Control Association
(ISACA). Simply put, IT governance is putting structure around how
organizations align IT strategy with business strategy, ensuring that companies stay on track to achieve
their strategies and goals, and implementing good ways to measure IT's performance. It makes sure
that all stakeholders' interests are taken into account and that processes provide measurable results. An
IT governance framework should answer some key questions, such as how the IT department is
functioning overall, what key metrics management needs and what return IT is giving back to the business
from the investment it's making.
Every organization, large and small, public and private, needs a way to ensure that the IT function sustains
the organization's strategies and objectives. The level of sophistication you apply to IT governance,
however, may vary according to size, industry or applicable regulations. In general, the larger and more
regulated the organization, the more detailed the IT governance structure should be.
b. Establish Governance Framework: Select a recognized governance framework that aligns with your
organization's goals and industry best practices. Frameworks like COBIT, ISO/IEC 38500, or ITIL
provide guidelines, processes, and controls to structure your governance activities. Customize the
framework to suit your organization's unique needs and context.
c. Assign Roles and Responsibilities: Clearly define the roles and responsibilities of key stakeholders
involved in IT governance. This includes establishing a governance board or committee, appointing
executive sponsors, and assigning specific responsibilities to individuals or teams. Ensure that the
governance structure includes representation from both business and IT functions.
d. Develop Policies and Procedures: Create and document IT governance policies and procedures that
outline the desired behaviors, practices, and controls within your organization. This may include
policies related to information security, risk management, project prioritization, resource allocation,
and performance measurement. Ensure that these policies align with industry standards and regulatory
requirements.
e. Communicate and Educate: Effective communication and education are crucial for successful
implementation. Educate stakeholders about the importance of IT governance, its benefits, and their
roles and responsibilities. Conduct training sessions, workshops, and awareness programs to ensure
everyone understands the governance framework, policies, and procedures.
f. Implement Controls and Processes: Put in place the necessary controls and processes to support IT
governance. This includes establishing mechanisms for strategic planning, project portfolio
management, risk assessment, performance measurement, and resource management. Implement tools
and technologies to automate and streamline governance processes where possible.
g. Monitor and Evaluate: Continuously monitor and evaluate the effectiveness of your IT governance
practices. Regularly review governance policies, processes, and controls to ensure they remain relevant
and aligned with organizational goals. Monitor key performance indicators (KPIs) and metrics to assess
the impact and effectiveness of governance activities. Use this feedback to make improvements and
adjustments as needed.
h. Evolve and Improve: IT governance is an ongoing process that should evolve and adapt with the
changing needs of the organization. Regularly review and update your governance framework, policies,
and procedures.
Chapter 2
This figure provides examples of TPS, DSS, MIS, and ESS, showing the level of the organization
and business function that each supports.
It should be noted that each of the different systems may have components that are used by organizational
levels and groups other than its main constituencies. A secretary may find information on an MIS, or a
middle manager may need to extract data from a TPS.
Fig 2-5 Knowledge management can be viewed as three levels of techniques, technologies, and
systems that promote the collection, organization, access, sharing and use of workplace and
enterprise knowledge
Fig 2-6 How management information systems obtain their data from the organization's TPS
In the system illustrated by this diagram, three TPS supply summarized transaction data to the MIS
reporting system at the end of the time period. Managers gain access to the organizational data through
the MIS, which provides them with the appropriate reports.
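To make the flow in Fig 2-6 concrete, the following short Java sketch shows the kind of period-end summarization an MIS performs on TPS data. The system names, product codes, and amounts are invented for illustration; a real MIS would read these records from the organization's transaction files or databases.

```java
import java.util.*;
import java.util.stream.*;

// Hypothetical illustration of Fig 2-6: several TPS feed detailed
// transaction data that the MIS summarizes at the end of the period.
public class MisReportSketch {

    // A minimal transaction record as a TPS might capture it.
    record Transaction(String sourceSystem, String productCode, double amount) {}

    public static void main(String[] args) {
        // Sample transactions from three hypothetical TPS feeds.
        List<Transaction> tpsFeed = List.of(
                new Transaction("ORDER_PROCESSING", "P-100", 1200.00),
                new Transaction("ORDER_PROCESSING", "P-200", 450.00),
                new Transaction("MATERIALS_PLANNING", "P-100", 300.00),
                new Transaction("GENERAL_LEDGER", "P-200", 175.50));

        // The "MIS" step: summarize detailed transactions by product so
        // managers see period totals rather than individual records.
        Map<String, Double> totalsByProduct = tpsFeed.stream()
                .collect(Collectors.groupingBy(Transaction::productCode,
                        Collectors.summingDouble(Transaction::amount)));

        System.out.println("Period-end summary report:");
        totalsByProduct.forEach((product, total) ->
                System.out.printf("  %s : %.2f%n", product, total));
    }
}
```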
The benefits of a Decision Support System include improved decision quality, increased efficiency,
reduced uncertainty, better resource allocation, and enhanced strategic planning. By providing timely and
relevant insights, DSS empowers decision-makers to make more informed and effective decisions.
A Decision Support System is a valuable tool that leverages data, models, and analytical techniques to assist
decision-makers in solving complex problems, evaluating alternatives, and making informed decisions.
It enhances decision-making capabilities, fosters collaboration, and drives organizational success.
A DSS for data analysis empowers decision-makers with the tools and capabilities to analyze data, uncover
patterns and trends, and make informed decisions. By leveraging data analysis techniques, statistical
methods, and advanced visualization, a DSS for data analysis enables organizations to harness the power
of data to drive strategic decision-making and achieve better business outcomes.
Fig 2-10 Some of the attributes of intelligent behavior. AI is attempting to duplicate these
capabilities in computer-based systems.
Debate has raged about artificial intelligence since serious work in the field began and several
technological, moral and philosophical questions about the possibility of development of intelligent,
thinking machines have been raised. For example, the British AI pioneer Alan Turing proposed a test in
1950 to determine whether machines could think. According to the Turing test, a computer
could demonstrate intelligence if a human interviewer, conversing with an unseen human and an unseen
computer, could not tell which was which. Although much work has been done in many of the subgroups
that fall under the AI umbrella, critics believe that no computer can truly pass the Turing test. They
claim that it is just not possible to develop intelligence to impart true humanlike capabilities to computers,
but progress continues. Only time will tell whether we will achieve the ambitious goals of artificial
intelligence and equal the popular images found in science fiction.
One derivative of the Turing test is the Reverse Turing Test, also known as the "AI Judge" or "AI
Evaluator" test. In this variation, instead of determining if a machine can convincingly mimic human
intelligence, the goal is to identify if a human evaluator can correctly distinguish between interactions with
a machine and interactions with another human. The Reverse Turing Test aims to assess the machine's
ability to exhibit intelligent behavior to the extent that it becomes indistinguishable from a human
counterpart.
Knowledge Base- The knowledge base of an expert system contains (1) facts about a specific
subject area (e.g., John is an analyst) and (2) heuristics (rules of thumb) that express the reasoning
procedures of an expert on the subject (e.g., IF John is an analyst, THEN he needs a workstation).
There are many ways that such knowledge is represented in expert systems. Examples are rule-based,
frame-based, object-based, and case-based methods of knowledge representation.
Software Resources- An expert system software package contains an inference engine and other
programs for refining knowledge and communicating with users. The inference engine program
processes the knowledge (such as rules and facts) related to a specific problem. It then makes
associations and inferences resulting in recommended courses of action for a user. User interface
programs for communicating with end users are also needed, including an explanation program
to explain the reasoning process to a user if requested.
Knowledge acquisition: Knowledge acquisition programs are not part of an expert system but are
software tools for knowledge base development, as are expert system shells, which are used for
developing expert systems. The process of building an expert system involves acquiring and codifying
knowledge from human experts. Knowledge acquisition techniques include interviews, observations,
documentation review, and knowledge elicitation methods. The acquired knowledge is then
organized and represented in a structured format within the knowledge base.
Limitations: Expert systems have certain limitations. They typically excel in well-defined domains
with explicit knowledge and rule-based reasoning. However, they may struggle with complex,
ambiguous, or ill-defined problems that require contextual understanding or creative thinking.
Additionally, expert systems require regular updates and maintenance to keep the knowledge base
up-to-date with advancements in the domain.
Applications: Expert systems find applications in various domains such as medicine, finance,
engineering, troubleshooting, customer support, and decision-making tasks. For example, in
healthcare, expert systems can assist in diagnosing diseases based on symptoms and medical history.
In finance, they can provide recommendations for investment strategies based on market trends and
risk profiles.
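Bringing together the knowledge base and inference engine described above, the sketch below shows, in Java, a minimal rule-based knowledge base and a forward-chaining inference loop. The facts and the first rule follow the hypothetical example in the text; the follow-on rule is an invented addition, and a production expert system shell would offer far richer knowledge representation and explanation facilities.

```java
import java.util.*;

// A minimal forward-chaining sketch of an expert system's knowledge base
// and inference engine. Facts and rules here are hypothetical examples.
public class ExpertSystemSketch {

    // A rule of thumb: IF the premise is a known fact THEN add the conclusion.
    record Rule(String premise, String conclusion) {}

    public static void main(String[] args) {
        Set<String> facts = new HashSet<>(List.of("John is an analyst"));

        List<Rule> knowledgeBase = List.of(
                new Rule("John is an analyst", "John needs a workstation"),
                new Rule("John needs a workstation", "Order a workstation for John"));

        // Inference engine: repeatedly fire rules whose premises are known
        // facts until no new conclusions can be derived (forward chaining).
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Rule r : knowledgeBase) {
                if (facts.contains(r.premise()) && facts.add(r.conclusion())) {
                    System.out.println("Inferred: " + r.conclusion());
                    changed = true;
                }
            }
        }
    }
}
```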
Fig 2-13 A summary of four ways that knowledge can be represented in an expert system's knowledge base
Fig 2-14 Criteria for applications that are suitable for expert systems development
Knowledge Engineering
Knowledge engineering is a discipline within artificial intelligence (AI) that involves acquiring,
representing, organizing, and utilizing knowledge to build intelligent systems, such as expert systems. It
focuses on capturing and formalizing human expertise and domain knowledge in a format that can be
effectively utilized by computer systems.
A knowledge engineer is a professional who works with experts to capture the knowledge (facts and rules
of thumb) they possess. The knowledge engineer then builds the knowledge base (and the rest of the expert
system if necessary), using an iterative prototyping process until the expert system is acceptable. Thus,
knowledge engineers perform a role similar to that of systems analysts in conventional information
systems development.
Once the decision is made to develop an expert system, a team of one or more domain experts and a
knowledge engineer may be formed. Experts skilled in the use of expert system shells could also
develop their own expert systems. If a shell is used, facts and rules of thumb about a specific domain can be
defined and entered into a knowledge base with the help of a rule editor or other knowledge acquisition
tool. A limited working prototype of the knowledge base is then constructed, tested and evaluated using
the inference engine and user interface programs of the shell. The knowledge engineer and domain
experts can modify the knowledge base, then retest the system and evaluate the results. This process
is repeated until the knowledge base and the shell result in an acceptable expert system.
The process of knowledge engineering typically involves the following steps:
Knowledge Acquisition: This step involves gathering domain-specific knowledge from human experts.
Knowledge engineers interact with subject matter experts through interviews, workshops, observations, or
by studying existing documentation and resources. The goal is to extract relevant knowledge, problem-
solving strategies, rules, and heuristics used by experts in the domain.
Fig 2-16 An example of fuzzy logic rules and a fuzzy logic SQL query in a credit risk analysis application
Fuzzy Logic in Business
Examples of applications of fuzzy logic are numerous in Japan but rare in the United States. The United
States has preferred to use AI solutions like expert systems or neural networks, but Japan has implemented
many fuzzy logic applications, especially the use of special-purpose fuzzy logic microprocessor
chips, called fuzzy process controllers. Thus, the Japanese ride on subway trains, use elevators, and
drive cars that are guided or supported by fuzzy process controllers made by Hitachi and Toshiba.
Many models of Japanese-made products also feature fuzzy logic microprocessors. The list is growing
and includes autofocus cameras, auto-stabilizing camcorders, energy-efficient air conditioners, self-
adjusting washing machines, and automatic transmissions. Fuzzy logic can be used in credit scoring
models to assess the creditworthiness of individuals based on a combination of factors with varying
degrees of importance and uncertainty. It can evaluate risk factors, such as market volatility, economic
conditions, or operational uncertainties, and provide a more nuanced assessment of potential risks. It can
aid in customer segmentation, which involves categorizing customers into distinct groups based on their
characteristics or behavior. Also, it can be used in quality control processes to handle imprecise
measurements and variability in product attributes and can help identify and manage deviations from
desired standards.
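As a rough illustration of the credit-scoring idea, the following Java sketch expresses two fuzzy memberships ("high income" and "low debt") and combines them with a fuzzy AND, taken as the minimum of the two degrees. The membership ramps, thresholds, and figures are invented for the example and are not an actual scoring model.

```java
// A small illustrative sketch of fuzzy-style credit scoring. Membership
// functions and the numbers below are invented, not a real model.
public class FuzzyCreditSketch {

    // Degree (0..1) to which an income qualifies as "high".
    static double highIncome(double income) {
        if (income <= 20000) return 0.0;
        if (income >= 80000) return 1.0;
        return (income - 20000) / 60000.0;   // linear ramp in between
    }

    // Degree (0..1) to which a debt ratio qualifies as "low".
    static double lowDebt(double debtRatio) {
        if (debtRatio <= 0.1) return 1.0;
        if (debtRatio >= 0.6) return 0.0;
        return (0.6 - debtRatio) / 0.5;
    }

    public static void main(String[] args) {
        double income = 55000, debtRatio = 0.35;

        // Rule: IF income is high AND debt is low THEN credit risk is low.
        // The fuzzy AND is the minimum of the two membership degrees.
        double ruleStrength = Math.min(highIncome(income), lowDebt(debtRatio));

        System.out.printf("Degree of 'low credit risk': %.2f%n", ruleStrength);
    }
}
```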
GENETIC ALGORITHMS
Genetic algorithms (GAs) are computational search and optimization techniques inspired by the principles
of natural evolution and genetics. They are used to solve complex problems by mimicking the process of
natural selection, reproduction, and genetic variation. Genetic algorithms operate on a population of
potential solutions and iteratively evolve the population to find the best solution or approximate solutions
to a given problem.
The use of genetic algorithms is a growing application of artificial intelligence. Genetic algorithm
software uses Darwinian (survival of the fittest), randomizing, and other mathematical functions to
simulate an evolutionary process that can yield increasingly better solutions to a problem. Genetic
algorithms were first used to simulate millions of years in biological, geological, and ecosystem
evolution in just a few minutes on a computer. Genetic algorithm software is being used to model a variety
of scientific, technical, and business processes.
Genetic algorithms are especially useful for situations in which thousands of solutions are possible
and must be evaluated to produce an optimal solution. Genetic algorithm software uses sets of
mathematical process rules (algorithms) that specify how combinations of process components or steps
are to be formed. This process may involve trying random process combinations (mutation), combining
parts of several good processes (crossover), and selecting good sets of processes and discarding poor ones
(selection) to generate increasingly better solutions.
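The sketch below shows these three operators (selection, crossover, and mutation) in a deliberately simple Java genetic algorithm whose fitness function just counts the 1-bits in a bit string. The population size, generation count, and mutation rate are arbitrary illustrative choices.

```java
import java.util.*;

// A compact genetic algorithm sketch: evolve bit strings toward the
// (deliberately simple) goal of containing as many 1s as possible.
public class GeneticAlgorithmSketch {
    static final int GENES = 20, POP = 30, GENERATIONS = 50;
    static final Random rnd = new Random();

    static int fitness(boolean[] genome) {        // count of 1-bits
        int f = 0;
        for (boolean b : genome) if (b) f++;
        return f;
    }

    static boolean[] randomGenome() {
        boolean[] g = new boolean[GENES];
        for (int i = 0; i < GENES; i++) g[i] = rnd.nextBoolean();
        return g;
    }

    // Tournament selection: keep the fitter of two random individuals.
    static boolean[] select(List<boolean[]> pop) {
        boolean[] a = pop.get(rnd.nextInt(POP)), b = pop.get(rnd.nextInt(POP));
        return fitness(a) >= fitness(b) ? a : b;
    }

    public static void main(String[] args) {
        List<boolean[]> pop = new ArrayList<>();
        for (int i = 0; i < POP; i++) pop.add(randomGenome());

        for (int gen = 0; gen < GENERATIONS; gen++) {
            List<boolean[]> next = new ArrayList<>();
            while (next.size() < POP) {
                boolean[] p1 = select(pop), p2 = select(pop);
                boolean[] child = new boolean[GENES];
                int cut = rnd.nextInt(GENES);          // one-point crossover
                for (int i = 0; i < GENES; i++)
                    child[i] = i < cut ? p1[i] : p2[i];
                if (rnd.nextDouble() < 0.05)           // occasional mutation
                    child[rnd.nextInt(GENES)] ^= true;
                next.add(child);
            }
            pop = next;
        }

        boolean[] best = pop.get(0);
        for (boolean[] g : pop) if (fitness(g) > fitness(best)) best = g;
        System.out.println("Best fitness after evolution: " + fitness(best));
    }
}
```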
Genetic algorithms have been successfully applied to a wide range of optimization and search problems,
including parameter optimization, scheduling, routing, machine learning, and resource allocation, among
others. They offer advantages in handling complex, non-linear, and multi-objective problems where
traditional optimization techniques may struggle. By imitating the principles of natural evolution, genetic
algorithms provide an efficient and effective approach for finding near-optimal or approximate solutions
in various domains.
Interrelationships among Systems
In any organization or system, there are interrelationships among different systems that collectively
contribute to the functioning and performance of the overall entity. These interrelationships can be
complex and interconnected, and understanding them is crucial for effective management and decision-
making. Here are some common interrelationships among systems:
Transaction processing systems (TPS) are typically a major source of data for other systems, whereas
Executive Support Systems (ESS) are primarily a recipient of data from lower- level systems. The other
types of systems may exchange data with each other as well. Data may also be exchanged among systems
serving different functional areas. For example, an order captured by a sales system may be transmitted
to a manufacturing system as a transaction for producing or delivering the product specified in the order
or to a management information system (MIS) for financial reporting.
The various types of systems in the organization have interdependencies. TPS are major producers
of information that is required by the other systems, which, in turn, produce information for other
systems. These different types of systems have been loosely coupled in most organizations.
It is definitely advantageous to integrate these systems so that information can flow easily between
different parts of the organization and provide management with an enterprise-wide view of how the
organization is performing as a whole. But integration costs money, and integrating many different
systems is extremely time consuming and complex. This is a major challenge for large organizations,
which are typically saddled with hundreds, even thousands of different applications serving different
levels and business functions. Each organization must weigh its needs for integrating systems against the
difficulties of mounting a large-scale systems integration effort.
This system provides information about the number of items available in inventory to support
manufacturing and production activities.
Product life cycle management (PLM) systems are one type of manufacturing and production system that
has become increasingly valuable in the automotive, aerospace, and consumer products industries. PLM
systems are based on a data repository that organizes every piece of information that goes into making a
particular product, such as formula cards, packaging information, shipping specifications, and patent data.
Once all these data are available, companies can select and combine the data they need to serve specific
functions. For example, designers and engineers can use the data to determine which parts are needed for
a new design, whereas retailers can use them to determine shelf height and how materials should be stored
in warehouses.
For many years, engineering-intensive industries have used computer-aided design (CAD) systems
to automate the modeling and design of their products. The software enables users to create a digital model
of a part, a product, or a structure and make changes to the design on the computer without having to
build physical prototypes. PLM software goes beyond CAD software to include not only automated
modeling and design capabilities but also tools to help companies manage and automate materials
sourcing, engineering change orders, and product documentation, such as test results, product packaging,
and post sales data. The Window on Organizations describes how these systems are providing new sources
of value.
This system maintains data on the firm's employees to support the human resources function.
Chapter 3
Fig 3-2 The firm value chain and the industry value chain
Illustrated are various examples of strategic information systems for the primary and support activities of
a firm and of its value partners that would add a margin of value to a firm's products or services.
Digitally enabled networks can be used not only to purchase supplies but also to closely
coordinate production of many independent firms. For instance, the Italian casual wear company Benetton
uses subcontractors and independent firms for labor-intensive production processes, such as tailoring,
finishing, and ironing, while maintaining control of design, procurement, marketing, and distribution.
Benetton uses computer networks to provide independent businesses and foreign production centers
Fig 3-4 Stockless inventory compared to traditional and just-in-time supply methods
The just-in-time supply method reduces inventory requirements of the customer, whereas stockless
inventory enables the customer to eliminate inventories entirely. Deliveries are made daily, sometimes
directly to the departments that need the supplies.
Supply chain management and efficient customer response systems are two examples of how emerging
digital firms engage in business strategies not available to traditional firms. Both types of systems require
network-based information technology infrastructure investment and software competence to make
customer and supply chain data flow seamlessly among different organizations. Both types of strategies
have greatly enhanced the efficiency of individual firms and the U.S. economy as a whole by moving
toward a demand-pull production system and away from the traditional supply-push economic system
in which factories were managed on the basis of 12-month official plans rather than on near-
instantaneous customer purchase information. Figure 3-5 illustrates the relationships between supply
chain management, efficient customer response, and the various business-level strategies.
Fig 3-8 Primary storage in the computer. Primary storage can be visualized as a matrix. Each byte
represents a mailbox with a unique address.
Figure 3-8 shows that primary memory is divided into storage locations called bytes. Each location
contains a set of eight binary switches or devices, each of which can store one bit of information. The
set of eight bits found in each storage location is sufficient to store one letter, one digit, or one special
symbol (such as $) using either EBCDIC or ASCII. Each byte has a unique address, similar to a mailbox,
indicating where it is located in RAM. The computer can remember where the data in all of the bytes
are located simply by keeping track of these addresses. Most of the information used by a
computer application is stored on secondary storage devices such as disks and tapes, located outside
of the primary storage area. In order for the computer to work on information, the information must first be
transferred into primary storage.
Fig 3-9 The various steps in the machine cycle. The machine cycle has two main stages of
operation; the instruction cycle (I-cycle) and the execution cycle (E-cycle). There are several steps
within each cycle required to process a single machine instruction in the CPU.
During the instruction cycle, the control unit retrieves one program instruction from primary storage
and decodes it. It places the part of the instruction telling the ALU what to do next in a special instruction
register and places the part specifying the address of the data to be used in the operation into an address
register. (A register is a special temporary storage location in the ALU or control unit that acts like a
high-speed staging area for program instructions or data being transferred from primary storage to the
CPU for processing.)
During the execution cycle, the control unit locates the required data in primary storage, places it in a
storage register, instructs the ALU to perform the desired operation, temporarily stores the result of the
operation in an accumulator, and finally places the result in primary memory. As each instruction is
completed, the control unit advances to and reads the next instruction of the program.
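A toy Java interpretation of the instruction and execution cycles described above is sketched below. The three-instruction "machine", its memory layout, and its LOAD/ADD/STORE format are invented solely to make the fetch-decode-execute idea visible; real CPUs implement these steps in hardware.

```java
// A toy illustration of the instruction cycle (fetch and decode) and the
// execution cycle. The machine, registers, and instruction format are
// hypothetical and exist only for illustration.
public class MachineCycleSketch {
    public static void main(String[] args) {
        int[] memory = {5, 7, 0};          // primary storage: two operands, one result slot
        String[] program = {"LOAD 0", "ADD 1", "STORE 2"};

        int accumulator = 0;               // temporary result register
        for (String instruction : program) {
            // Instruction cycle: fetch the instruction and decode it into an
            // operation (an "instruction register" string here) and an
            // operand address (an "address register" integer here).
            String[] decoded = instruction.split(" ");
            String opcode = decoded[0];
            int address = Integer.parseInt(decoded[1]);

            // Execution cycle: locate the data, perform the operation, and
            // keep the intermediate result in the accumulator.
            switch (opcode) {
                case "LOAD"  -> accumulator = memory[address];
                case "ADD"   -> accumulator += memory[address];
                case "STORE" -> memory[address] = accumulator;
            }
        }
        System.out.println("Result stored in memory[2]: " + memory[2]);  // prints 12
    }
}
```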
Computers and Computer Processing
All computers represent and process data in essentially the same way, but they fall into different classifications. We can
use size and processing speed to categorize contemporary computers as mainframes, minicomputers,
PCs, workstations, and supercomputers.
Fig 3-10 Sequential and parallel processing. During sequential processing, each task is assigned to
one CPU that processes one instruction at a time. In parallel processing, multiple tasks are assigned
to multiple processing units to expedite the result.
Some supercomputers can now perform more than a trillion mathematical calculations each second, a
rate known as a teraflop. The term teraflop comes from the Greek teras, which for mathematicians means one trillion,
and flop, an acronym for floating point operations per second. (A floating-point operation is a basic
computer arithmetic operation, such as addition, on numbers that include a decimal point.) Work is
under way to build supercomputers capable of 10 teraflops.
Microprocessor and processing Power
Computers' processing power depends in part on the speed and performance of their microprocessors.
You will often see chips labeled as 8-bit, 16-bit, or 32-bit devices. These labels refer to the word length,
or the number of bits that can be processed at one time by the machine. An 8-bit chip can process 8
bits, or 1 byte, of information in a single machine cycle. A 32-bit chip can process 32 bits, or 4 bytes,
in a single cycle. The larger the word length, the greater the speed of the computer.
A second factor affecting chip speed is cycle speed. Every event in a computer must be
sequenced so that one step logically follows another. The control unit sets a beat to the chip. This beat
is established by an internal clock and is measured in megahertz (abbreviated MHz, which stands for
millions of cycles per second). The Intel 8088 chip, for instance, originally had a clock speed of 4.77
megahertz, whereas the Intel Pentium II chip has a clock speed that ranges from 233 to 450 megahertz.
A third factor affecting speed is the data bus width. The data bus acts as a highway between the CPU,
primary storage, and other devices, determining how much data can be moved at one time. The 8088
chip used in the original IBM personal computer, for example, had a 16-bit word length but only
an 8-bit data bus width. This meant that data were processed within the CPU chip itself in 16-bit chunks
but could only be moved 8 bits at a time between the CPU, primary storage, and external devices.
On the other hand, the Alpha chip has both a 64-bit word length and a 64-bit data bus width. To have
a computer execute more instructions per second and work through programs or handle users
expeditiously, it is necessary to increase the word length of the processor, the data bus width, or the
cycle speed-or all three.
Microprocessors can be made faster by using reduced instruction set computing (RISC) in their design.
Some instructions that a computer uses to process data are actually embedded in the chip circuitry.
Conventional chips, based on complex instruction set computing, have several hundred or more
instructions hard-wired into their circuitry, and they may take several clock cycles to execute a single
instruction. In many instances, only 20 percent of these instructions are needed for 80 percent of the
computer's tasks. If the little-used instructions are eliminated, the remaining instructions can execute
much faster.
Reduced instruction set (RISC) computers have only the most frequently used instructions
embedded in them. A RISC CPU can execute most instructions in a single machine cycle and sometimes
multiple instructions at the same time. RISC is most appropriate for scientific and workstation
computing, where there are repetitive arithmetic and logical operations on data or applications calling
for three-dimensional image rendering.
On the other hand, software written for conventional processors cannot be automatically transferred to
RISC machines; new software is required. Many RISC suppliers are adding more instructions to appeal
to a greater number of customers, and designers of conventional microprocessors are streamlining their
chips to execute instructions more rapidly.
Microprocessors optimized for multimedia and graphics have been developed to improve processing of
visually intensive applications. Intel's MMX (Multimedia extension) microprocessor is a Pentium
chip that has been modified to increase performance in many applications featuring graphics and sound.
Multimedia applications such as games and video will be able to run more smoothly, with more colors,
and be able to perform more tasks simultaneously. For example, multiple channels of audio, high quality
video or animation, and Internet communication could all be running in the same application.
Computer Network and Client/Server Computing
In the modern digital era, stand-alone computers have largely been supplanted by networked systems for
most processing tasks. This practice of leveraging multiple computers connected by a communications
network for processing is referred to as distributed processing. This approach contrasts with centralized
processing, where a single, large central computer performs all processing tasks. Distributed processing,
on the other hand, divides the processing workload among various devices such as PCs, minicomputers,
and mainframes, all interconnected.
A prominent example of distributed processing is the client/server model of computing. This model
divides processing between "clients" and "servers," each performing tasks they're best suited for. Both
entities are part of the network.
The client, typically a desktop computer, workstation, or laptop, serves as the user's point of entry for a
specific function. Users generally interact directly with the client portion of an application, which could
involve data input or retrieval for further analysis.
The server, on the other hand, provides services to the client. It can range from a supercomputer or
mainframe to another desktop computer. Servers store and process shared data and perform backend
functions, often unseen by users, such as managing network activities.
This client/server model is foundational to internet computing, forming the backbone of web
applications, cloud services, and more. With the rise of cloud computing and edge computing, these
concepts have evolved further, enabling even more efficient and scalable distributed processing models
for today's increasingly interconnected world.
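A minimal Java sketch of the client/server division of labour is shown below: the server thread performs a shared back-end service while the client acts as the user's point of entry. The request text and the use of a plain TCP socket on an arbitrary free port are illustrative assumptions, not a prescribed protocol.

```java
import java.io.*;
import java.net.*;

// A minimal client/server sketch: the "server" performs back-end processing
// (upper-casing a request) while the "client" is the user's point of entry.
public class ClientServerSketch {
    public static void main(String[] args) throws Exception {
        // Server side: bind first so the client can always connect.
        ServerSocket listener = new ServerSocket(0);   // 0 = any free port
        int port = listener.getLocalPort();

        Thread server = new Thread(() -> {
            try (Socket conn = listener.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(conn.getInputStream()));
                 PrintWriter out = new PrintWriter(conn.getOutputStream(), true)) {
                String request = in.readLine();
                out.println(request.toUpperCase());    // the shared back-end service
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        server.start();

        // Client side: send a request to the server and display the reply.
        try (Socket socket = new Socket("localhost", port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("quarterly sales report");
            System.out.println("Server replied: " + in.readLine());
        }
        server.join();
        listener.close();
    }
}
```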
Figure 3-11 illustrates five different ways that the components of an application could be partitioned
between the client and the server. The interface component is essentially the application interface, that is, how the
application appears visually to the user. The application logic component consists of the processing logic,
which is shaped by the organization's business rules. (An example might be that a salaried employee is
only to be paid monthly.) The data management component consists of the storage and management of
the data used by the application.
Fig 3-12 Disk pack storage. Large systems often rely on disk packs, which provide reliable
storage for large amounts of data with quick access and retrieval. A typical removable disk-pack
system contains 11 two-sided disks.
Read/write heads move horizontally over the spinning disks to any of 200 positions, called cylinders.
At any one of these cylinders, the read/write heads can read or write information to any of 20
different concentric circles on the disk surface areas called tracks. (Each track contains several records.)
The cylinder represents the circular tracks on the same vertical line within the disk pack. Read/write
heads are directed to a specific record using an address consisting of the cylinder number, the recording
surface number, and the data record number.
The entire disk pack is housed in a disk drive or disk unit. Large mainframe or minicomputer systems
have multiple disk drives because they require immense disk storage capacity.
Disk drive performance can be further enhanced by using a disk technology called RAID
(Redundant Array of Inexpensive Disks). RAID devices package more than a hundred 5.25-inch disk
drives, a controller chip, and specialized software into a single large unit. Traditional disk drives deliver
data from the disk drive along a single path, but RAID delivers data over multiple paths simultaneously,
accelerating disk access time. Small RAID systems provide 10 to 20 gigabytes of storage capacity,
whereas larger systems provide more than 10 terabytes. RAID is potentially more reliable than standard
disk drives because other drives are available to deliver data if one drive fails.
PCs usually contain hard disks, which can store more than 500 gigabytes (currently the most common
size). PCs also use floppy disks, which are flat, 3.5-inch disks
of polyester film with a magnetic coating (5.25-inch floppy disks are becoming obsolete). These disks
Fig 3-13 The sector method of storing data. Each track of a disk can be divided into sectors. Disk
storage location can be identified by sector and data record number.
Magnetic disks on both large and small computers permit direct access to individual records. Each
record can be given a precise physical address in terms of cylinders and tracks or sectors, and the
read/write head can be directed to go directly to that address and access the information. This means that
the computer system does not have to search the entire file, as in a sequential tape file, in order to find
the record. Disk storage is often referred to as a direct access storage device (DASD).
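As a small worked illustration of direct-access addressing, the Java sketch below maps a sequential record number to a cylinder, surface, and record position. The 20 recording surfaces per cylinder follow the text; the records-per-track figure is an assumed value for the example.

```java
// A sketch of direct-access addressing on a disk pack: given a sequential
// record number, compute the cylinder, recording surface, and record
// position it maps to. RECORDS_PER_TRACK is assumed for illustration.
public class DiskAddressSketch {
    static final int SURFACES = 20;          // recording surfaces per cylinder (from the text)
    static final int RECORDS_PER_TRACK = 10; // assumed for the example
    static final int RECORDS_PER_CYLINDER = SURFACES * RECORDS_PER_TRACK;

    public static void main(String[] args) {
        int recordNumber = 4321;             // logical record to locate

        int cylinder = recordNumber / RECORDS_PER_CYLINDER;
        int surface  = (recordNumber % RECORDS_PER_CYLINDER) / RECORDS_PER_TRACK;
        int record   = recordNumber % RECORDS_PER_TRACK;

        // The read/write heads can be sent straight to this address rather
        // than scanning the whole file sequentially.
        System.out.printf("Record %d -> cylinder %d, surface %d, record %d%n",
                recordNumber, cylinder, surface, record);
    }
}
```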
For on-line systems requiring direct access, disk technology provides the only practical means of storage
today. DASD is, however, more expensive than magnetic tape. Updating information stored on a disk
destroys the old information because the old data on the disk are written over if changes are made. The
disk drives themselves are susceptible to environmental disturbances. Even smoke particles can disrupt
the movement of read/write heads over the disk surface, which is why disk drives are sealed from the
environment.
Optical Disks
Optical disks, including compact disks and laser optical disks, can store data at far greater densities than
magnetic disks. They're compatible with both PCs and larger computer systems. Data is recorded on
optical disks when a laser device etches microscopic pits in the reflective layer of a spiral track. These
Fig 3.14 Storage media cost, speed, and capacity trade-offs. Note how cost increases with faster
access speeds but decreases with the increased capacity of storage media.
However, all storage media, especially memory chips and magnetic disks, continue to increase in speed
and capacity and decrease in cost. Developments like automated high-speed cartridge assemblies have
given faster access times to magnetic tape, and the speed of optical disk drives continues to increase.
Note in Figure 3.14 that semiconductor memories are used mainly for primary storage, although they are
sometimes used as high-speed secondary storage devices. Magnetic disk and tape and optical disk
devices, in contrast, are used as secondary storage devices to enlarge the storage capacity of computer
systems. Also, because most primary storage circuits use RAM (random-access memory) chips, which
lose their contents when electrical power is interrupted, secondary storage devices provide a more
permanent type of storage media.
Computer Storage Fundamentals
Data are processed and stored in a computer system through the presence or absence of electronic
or magnetic signals in the computer's circuitry or in the media it uses. This character is called "two-state"
or binary representation of data because the computer and the media can exhibit only two possible
states or conditions, similar to a common light switch: "on" or "off." For example, transistors and other
semiconductor circuits are in either a conducting or a non- conducting state. Media such as magnetic
disks and tapes indicate these two states by having magnetized spots whose magnetic fields have one of
two different directions, or polarities. This binary characteristic of computer circuitry and media is what
makes the binary number system the basis for representing data in computers. Thus, for electronic
circuits, the conducting ("on") state represents the number 1, whereas the nonconducting ("off") state
represents the number 0.
For magnetic media, the magnetic field of a magnetized spot in one direction represents a 1, while
magnetism in the other direction represents a 0.
The smallest element of data is called a bit, short for binary digit, which can have a value of either
0 or 1. The capacity of memory chips is usually expressed in terms of bits. A byte is a basic grouping
of bits that the computer operates as a single unit. Typically, it consists of eight bits and represents one
character of data in most computer coding schemes. Thus, the capacity of a computer's memory and
secondary storage devices is usually expressed in terms of bytes. Computer codes such as ASCII
(American Standard Code for Information Interchange) use various arrangements of bits to form bytes
that represent the numbers 0 through 9, the letters of the alphabet, and many other characters. See Figure
3.15
Fig 3.15 An example of the ASCII computer code that computers use to represent numbers and the letters of
the alphabet
Since childhood, we have learned to do our computations using the numbers 0 through 9, the digits of the
decimal number system. Although it is fine for us to use 10 digits for our computations, computers do
not have this luxury. Every computer processor is made of millions of tiny switches that can be turned
off or on. Because these switches have only two states, it makes sense for a computer to perform its
computations with a number system that only has two digits: the binary number system. These digits (0
and 1) correspond to the off/ on positions of the switches in the computer processor. With only these two
digits, a computer can perform all the arithmetic that we can with 10 digits.
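The short Java sketch below makes the byte/bit relationship visible: each character of a small string is shown with its decimal ASCII code and its eight-bit pattern. The sample string is an arbitrary choice.

```java
import java.nio.charset.StandardCharsets;

// A short sketch of the two-state (binary) representation described above:
// each character is stored as one byte, i.e. eight on/off bits.
public class BinaryRepresentationSketch {
    public static void main(String[] args) {
        String text = "Hi5";
        for (byte b : text.getBytes(StandardCharsets.US_ASCII)) {
            // Mask to 8 bits and print the bit pattern alongside the decimal code.
            String bits = String.format("%8s", Integer.toBinaryString(b & 0xFF))
                                .replace(' ', '0');
            System.out.printf("'%c' -> ASCII %3d -> %s%n", (char) b, b, bits);
        }
    }
}
```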
Fig 3-21 Assembly language. This sample assembly language command adds the contents of
register 3 to register 5 and stores the result in register 5.
Fig 3-22 This sample FORTRAN program code is part of a program to compute sales figures for a
particular item.
COBOL
COBOL (COmmon Business Oriented Language) (Figure 3-23) came into use in the early
1960s. It was developed by a committee representing both government and industry. Rear
Admiral Grace M. Hopper was a key committee member who played a major role in COBOL
development. COBOL was designed with business administration in mind, for processing large data
files with alphanumeric characters (mixed alphabetic and numeric data), and for performing repetitive
tasks such as payroll. It is poor at complex mathematical calculations. Also, there are many versions of
COBOL, and not all are compatible with each other. Today, more efficient programming languages have
largely superseded COBOL, but much COBOL code is still in operation due to the high cost involved in
replacing existing systems.
Fig 3-23 COBOL. This sample COBOL program code is part of a routine to compute total sales
figures for a particular item.
Fig 3-25 Class, subclasses, inheritance, and overriding. This figure illustrates how a message's
method can come from the class itself or an ancestor class. Class variables and methods are shaded
when they are inherited from above.
Java
Java is a programming language named after the many cups of coffee its Sun Microsystems developers
drank along the way. It is an object-oriented language, combining data with the functions for processing
the data, and it is platform-independent. Java software is designed to run on any computer or computing
device, regardless of the specific microprocessor or operating system it uses. An Apple Macintosh, an IBM
personal computer running Windows, a DEC computer running UNIX, and even a smart cellular phone
or personal digital assistant can share the same Java application. Java can be used to create miniature
programs called "applets" designed to reside on centralized network servers. The network delivers only
the applets required for a specific function. With Java applets residing on a network, a user can download
only the software functions and data that he or she needs to perform a particular task, such as analyzing
the revenue from one sales territory. The user does not need to maintain large software programs or data
files on his or her desktop machine. When the user is finished with processing, the data can be saved
through the network. Java can be used with network computers because it enables all processing software
and data to be stored on a network server, downloaded via a network as needed, and then placed back on
the network server.
Java is also a very robust language that can handle text, data, graphics, sound, and video, all within
one program if needed. Java applets often are used to provide interactive capabilities for Web pages. For
example, Java applets can be used to create animated cartoons or real-time news tickers for a Web site,
or to add a capability to a Web page to calculate a loan payment schedule on-line in response to financial
data input by the user. (Microsoft's ActiveX sometimes is used as an alternative to Java for creating
interactivity on a Web page. ActiveX is a set of controls that enables programs or other objects such as
charts, tables, or animations to be embedded within a Web page. However, ActiveX lacks Java's machine
independence and was designed for a Windows environment.)
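As a concrete, self-contained taste of the kind of calculation the text says a Java applet might perform for a Web page, the sketch below computes a simple loan payment schedule using the standard annuity formula. The loan amount, interest rate, and term are assumed example figures; a real applet would take them from user input.

```java
// A self-contained Java sketch of an on-line loan payment schedule.
// The loan figures below are assumed purely for illustration.
public class LoanPaymentSketch {
    public static void main(String[] args) {
        double principal = 500000;     // loan amount
        double annualRate = 0.09;      // 9% nominal annual interest
        int months = 24;               // repayment period

        double r = annualRate / 12;    // monthly interest rate
        // Standard annuity formula for the fixed monthly installment.
        double payment = principal * r / (1 - Math.pow(1 + r, -months));

        double balance = principal;
        for (int m = 1; m <= months; m++) {
            double interest = balance * r;
            balance -= (payment - interest);
            System.out.printf("Month %2d: payment %10.2f, remaining balance %12.2f%n",
                    m, payment, Math.max(balance, 0));
        }
    }
}
```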
Java also can be used to create more extensive applications that can run over the Internet or over a
company's private network. Java can let PC users manipulate data on networked systems using Web
browsers, reducing the need to write specialized software. For example, Sprint PCS, the mobile-phone
partnership, is using Java for an application that allows its employees to use Web browsers to analyze
business data and send reports to colleagues via e-mail on an internal network. The system it replaces
required specialized desktop software to accomplish these tasks and restricted these reports to a smaller
number of employees (Clark, 1998).
To run Java software, a computer needs an operating system containing a Java Virtual Machine (JVM).
(A JVM is incorporated into Web browser software such as Netscape Navigator or Microsoft Internet
Explorer.) The Java Virtual Machine is a compact program that enables the computer to run Java
applications. The JVM lets the computer simulate an ideal standardized Java computer, complete with
its own representation of a CPU and its own instruction set. The Virtual Machine executes Java programs
by interpreting their commands one by one and commanding the underlying computer to perform all the
tasks specified by each command.
Management and Organizational Benefits of Java
Companies are starting to develop more applications in Java because such applications can potentially
run in Windows, UNIX, IBM mainframe, Macintosh, and other environments without having to
be rewritten for each computing platform. Sun Microsystems terms this phenomenon "write once,
run anywhere." Java also could allow more software to be distributed and used through networks.
Functionality could be stored with data on the network and downloaded only as needed. Companies might
not need to purchase thousands of copies of commercial software to run on individual computers; instead
users could download applets over a network and use network computers.
Java is similar to C++ but considered easier to use. Java program code can be written more quickly than
with other languages. Sun claims that no Java program can penetrate the user's computer, making it safe
from viruses and other types of damage that might occur when downloading more conventional programs
off a network.
Despite these benefits, Java has not yet fulfilled its early promise to revolutionize software development
and use. Programs written in current versions of Java tend to run slower than "native" programs,
which are written for a particular operating system, because they must be interpreted by the Java Virtual
Machine. Vendors such as Microsoft are supporting alternative versions of Java that include subtle
differences in their Virtual Machines that affect Java's performance in different pieces of hardware
and operating systems. Without a standard version of Java, true platform independence cannot be
achieved. The Window on Management explores the management issues posed by Java as companies
consider whether to use this programming language.
Hypertext markup language (HTML)
Hypertext markup language (HTML) is a page description language for creating hypertext or hypermedia
documents such as Web pages. HTML uses instructions called tags to specify how text, graphics, video,
and sound are placed on a document and to create dynamic links to other documents and objects stored
in the same or remote computers. Using these links, a user need only point at a highlighted key word
or graphic, click on it, and immediately be transported to another document.
HTML programs can be custom-written, but they also can be created by using the HTML authoring
capabilities of Web browsers or of popular word-processing, spreadsheet, data management, and
presentation graphics software packages. HTML editors such as Claris Home Page and Adobe PageMill
are more powerful HTML authoring tools for creating Web pages.
Low-Code/No-Code Development Platforms
These platforms enable users to build applications with minimal coding knowledge. They provide visual
interfaces and pre-built components to simplify the development process.
DevOps and CI/CD
DevOps (Development and Operations) focuses on streamlining collaboration between development and
IT operations teams. Continuous Integration/Continuous Deployment (CI/CD) pipelines automate the
building, testing, and deployment of software, ensuring rapid and reliable delivery.
Containerization and Orchestration
Containerization tools like Docker allow applications and their dependencies to be packaged into
lightweight, portable containers. Orchestration frameworks like Kubernetes manage and automate the
deployment, scaling, and management of containers.
Machine Learning and AI Libraries
Libraries such as TensorFlow and PyTorch provide developers with powerful tools to build and train
machine learning models. They offer extensive support for tasks like computer vision, natural language
processing, and data analysis.
Serverless Computing
Serverless platforms, like AWS Lambda and Azure Functions, abstract away the infrastructure
management. Developers can focus on writing code in the form of functions, which automatically
scale based on demand and only incur costs for actual usage.
Microservices Architecture
Microservices involve building applications as a collection of small, loosely coupled services that can be
developed, deployed, and scaled independently. This approach enables flexibility, scalability, and easier
maintenance.
Agile and Scrum Methodologies
Agile and Scrum methodologies prioritize iterative development, frequent collaboration, and adapting to
change. They focus on delivering incremental value and fostering teamwork.
Data Analytics and Visualization
Tools like Tableau, Power BI, and Apache Superset help users analyze and visualize complex data sets,
making it easier to draw insights and communicate findings.
A single human resources database serves multiple applications and also enables a corporation to easily
draw together all the information for various applications. The database management system acts as the
interface between the application programs and the data.
Database Management Systems
A database management system (DBMS) is simply the software that permits an organization to centralize
data, manage them efficiently, and provide access to the stored data by application programs. The DBMS
acts as an interface between application programs and the physical data files. When the application
program calls for a data item, such as gross pay, the DBMS finds this item in the database and presents
it to the application program. Using traditional data files, the programmer would have to specify the size
and format of each data element used in the program and then tell the computer where they were located.
A DBMS eliminates most of the data definition statements found in traditional programs.
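The Java sketch below illustrates, in JDBC terms, the idea that the application simply asks for a named data item (gross pay) and lets the DBMS locate it. It assumes an embedded H2 in-memory database on the classpath, and the table, column, and employee names are invented for the example.

```java
import java.sql.*;

// A minimal sketch of an application program asking the DBMS for a data
// item (gross pay) instead of reading a physical file itself.
// Assumes the H2 database driver is on the classpath; names are illustrative.
public class GrossPaySketch {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:h2:mem:hrdb";          // assumed in-memory database

        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            // Set up a tiny payroll table so the example runs end to end.
            stmt.execute("CREATE TABLE payroll (employee VARCHAR(50), gross_pay DECIMAL(10,2))");
            stmt.execute("INSERT INTO payroll VALUES ('John', 4500.00)");

            // The application names the item it wants; the DBMS locates it.
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT gross_pay FROM payroll WHERE employee = ?")) {
                ps.setString(1, "John");
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        System.out.println("Gross pay: " + rs.getBigDecimal("gross_pay"));
                    }
                }
            }
        }
    }
}
```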
This diagram shows the relationships between the entities ORDER, ORDERED_PART, PART, and
SUPPLIER that might be used to model the database in Figure 3-33.
Distributing Databases
Database design also considers how the data are to be distributed. Information systems can be designed
with a centralized database that is used by a single central processor or by multiple processors in a
client/server network. Alternatively, the database can be distributed. A distributed database is
one that is stored in more than one physical location.
There are two main methods of distributing a database (see Figure 3-35). In a partitioned database,
parts of the database are stored and maintained physically in one location and other parts are stored
and maintained in other locations (see Figure 3-35a) so that each remote processor has the necessary
data to serve its local area. Changes in local files can be reconciled with the central database on a batch
basis, often at night. Another strategy is to replicate (that is, duplicate in its entirety) the central database
(Figure 3-35b) at all remote locations. For example, Lufthansa Airlines replaced its centralized
mainframe database with a replicated database to make information more immediately available to
flight dispatchers. Any change made to Lufthansa's Frankfurt DBMS is automatically replicated in New
York and Hong Kong. This strategy also requires updating the central database during off-hours.
However, the costs and challenges of developing a DBMS often appear to be as great as the benefits. It may take time for the database to provide value.
Solution Guidelines
The critical elements for creating a database environment are (1) data administration, (2) data- planning
and modeling methodology, (3) database technology and management, and (4) users. This environment
is depicted in Figure 3-36.
• Avoid the risk. This may take several forms, such as discussing with the customer to reduce the scope of the work, or giving incentives to engineers so as to reduce the risk of manpower turnover.
• Transfer the risk. This involves distributing the risk among the multiple parties or stakeholders involved in the endeavor, with the aim of mitigating the potential negative impacts and uncertainties by spreading the risk burden among multiple participants. Typical examples are getting the risky component developed by a third party or buying insurance cover.
• Reduce the risk. This involves planning ways to contain the damage due to a risk. For example, if there is a risk that some key personnel might leave, new recruitment may be planned.
• Accept the risk. When the cost incurred to reduce the risk is higher than the cost that would be incurred if the risk materializes, it may be better simply to accept the risk.
To choose between the different strategies of handling a risk, the project manager must consider the cost of handling the risk and the corresponding reduction in risk. For this, the project manager may compute the risk leverage of the different risks. Risk leverage is the difference in risk exposure divided by the cost of reducing the risk. More formally:

Risk leverage = (risk exposure before reduction - risk exposure after reduction) / cost of risk reduction
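A small worked example in Python may make this concrete. The probabilities, loss, and mitigation cost below are hypothetical, and risk exposure is taken, as is usual, to be the probability of the risk occurring multiplied by the loss if it occurs.

# Hypothetical figures for a single risk.
prob_before, loss = 0.40, 500_000      # 40% chance of a Rs. 500,000 loss
prob_after = 0.10                      # probability after the mitigation step
cost_of_reduction = 50_000             # cost of carrying out the mitigation

exposure_before = prob_before * loss   # 200,000
exposure_after = prob_after * loss     # 50,000
risk_leverage = (exposure_before - exposure_after) / cost_of_reduction
print(risk_leverage)                   # 3.0 -> the mitigation returns three times its cost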
Even though we have identified four broad ways to handle any risk, risk handling requires a lot of ingenuity on the part of the project manager. As an example, let us consider the options available to contain an important type of risk that occurs in many software projects: that of schedule slippage. Risks relating to schedule slippage arise primarily due to the intangible nature of software.
Therefore, these risks can be dealt with by increasing the visibility of the software product. Producing
relevant documents during the development process wherever meaningful, and getting these documents
reviewed by an appropriate team can increase visibility of a software product. Milestones should be
placed at regular intervals through a software engineering process in order to provide a manager with
regular indication of progress.
Completion of the phases of the development process being followed need not be the only milestones. Every phase can be broken down into reasonable-sized tasks, and milestones can be scheduled for these tasks too. A milestone is reached once the documentation produced as part of a software engineering task is produced and reviewed.
Market Forces
It is very difficult to predict the market forces such as the demand and supply, the trend of the market
growth, the consumer behavior and the choices, the emergence of new products and the new product
concepts. The ability of the organization to predict these forces and plan the strategies is limited for the
various reasons. The market forces affect the sales, the growth and the profitability. With the problems
arising out of market forces, it is difficult to reorient the organization quickly to meet the eventualities
adversely affecting the business unless the business is managed through a proper business plan.
Technological Change
There are a number of illustrative cases throughout the world of technological breakthroughs and changes which have threatened current businesses while creating new business opportunities. The
emergence of the microchip, plastic, laser technology, fiber optics technology, nuclear energy,
wireless communication, audio-visual transmission, turbo engines, thermal conductivity and many more,
are the examples which have made some products obsolete, threatening the current business, but at the
same time, have created new business opportunities. The technological changes have affected not
only the business prospects but the managerial and operational styles of the organizations.
These strategies are applicable to all the types of business and industries.
This strategy considers a very long-term business perspective, deals with the overall strength of the entire company, and evolves those policies of the business which will dominate the course of the business movement. It is the most productive strategy if chosen correctly and fatal if chosen wrongly. The other strategies act under the overall company strategy. To illustrate the overall company strategy, the following examples are given:
1. A two wheeler manufacturing company will have a strategy of mass production and an
aggressive marketing.
2. A computer manufacturer will have a strategy of adding new products every two or three years.
3. A consumer goods manufacturer will have a strategy of maximum reach to the consumer and
exposure by way of a wide distribution network.
4. A company can have a strategy of remaining in the low price range and catering to the masses.
5. Another company can have a strategy of expanding very fast to capture the market.
6. A third company can have a strategy of creating a corporate brand image to build brand loyalty, e.g., Escorts, Kirloskar, Godrej, Tata, Bajaj, BHEL, MTNL.
The overall company strategy is broad-based, having a far-reaching effect on the different facets of the business, and forming the basis for generating strategies in the other areas of business.
Growth Strategy
An organization may grow in two different ways. Growth may either mean the growth of the existing
business turnover, year after year, or it may mean the expansion and diversification of the business.
A two wheeler manufacturing company's growth was very rapid on a single product for more than two decades; then it brought out new models, then came a range of products, and finally the company had manufacturing units at multiple locations. This is an example of the growth of the existing business structure.
Another major example is the organization “AMAZON”. Originally an online bookstore founded in 1994, it has since expanded into other industries and has today become one of the world's largest e-commerce and technology companies. Its policy, therefore, is to grow with diversification.
Similarly, “Netflix”, which we all know, is one of the top-rated companies today. Originally launched as a DVD-by-mail service in 1997, it has now evolved globally into a streaming entertainment platform.
Growth strategy means the selection of a product with a very fast growth potential. It means choice of
industries such as electronics, communication, transport, textile, plastic, and so on where the growth
potential exists for expansion, diversification and integration. The growth strategy means acquisition of
business of the other firms and opening new market segments.
Growth strategies are adopted to establish, consolidate, and maintain a leadership and acquire a
competitive edge in the business and industry. It has a direct, positive impact on the profitability.
Product Strategy
A growth strategy, where the company chooses a certain product with particular characteristics, becomes
a product strategy. A product strategy means choice of a product which can expand as a family of products
and provide the basis for adding associated products. It can be positioned into the expanding markets by
way of model, type, and price.
The product strategy can be innovated continuously for new markets. Some examples are as follows:
1. A company producing pressure cookers enters the business of making ovens, boilers, washing
machines and mixers-the products for home market.
2. A company producing a low-priced detergent powder enters the business of washing soap and bath soap.
A specific example of product strategy is Apple Inc. Apple is now renowned for its innovative and iconic products. It achieved this success with a product strategy in which it first introduced Macintosh computers in the 1980s and then emphasized design, aesthetics, and user experience, attracting a loyal customer base. It soon evolved into multiple products such as the iPod and iTunes, the Apple Watch and wearables, and services such as Apple Music, Apple TV, and Apple Fitness, creating an ecosystem with diversified revenue streams and a seamless, integrated user experience across its devices and services.
When a consumer need exists, has a potential of expanding in several dimensions, and a product can be conceived satisfying that need, it becomes the product strategy.
Market Strategy
The product and the marketing strategies are closely related. The marketing strategies deal with the
distribution, services, market research, pricing, advertising, packaging and the choice of market itself. A few
examples of marketing strategies are as follows.
1. Many companies adopt the strategy of providing after sales service of the highest order.
The marketing strategies act as an expediting and activating force for the product and the growth strategy and as a force which accelerates business development. They are generated to create loyalty and preference, to hold market share, and to communicate consumer needs and explain how the product satisfies them. Marketing strategies are generally centered around one of the factors such as quality, price, service, and availability.
The corporate management formulates the strategies and implements them. The choice of strategy and
the method of implementation affect the corporate success. Development of a strategy is a difficult
task and it is an exercise in multidisciplinary fields. It can be developed by the business analyst under
the directions of the management. The attitude and philosophy of the management will be reflected
in the strategy formulation. There are no ready-made formulae or procedures to ensure the selection of a
correct strategy, only the results can prove its worth.
The last but not the least point is the business policy evolved by the top management. All the strategies
are governed by the business policy. The policies mirror the management's bias, preference,
attitude, strength and weakness. Business policy is the frame within which the strategies are
sketched.
Business policies provide the necessary guidelines to decide and act across the company and they
generally remain effective for a long time. The business policies inform people in the organization about
the intentions of the management to conduct the business in a particular direction and in a particular
manner. The policies should be clearly stated as they would be used by the people in the organization
without recourse to consultation. This is also true for strategy formulation.
Short-Range Planning
Short-range planning deals with the targets and the objectives of the organization. Based on the goals and
the objectives, a short-range plan provides the scheme for implementation of the long-range plan.
A budget gives details of the resources required to achieve the targets. The budgets are prepared first in terms of physical units and then converted into financial units. Companies prepare budgets for sales, production, expenses, capital expenditure, raw materials, advertising and cash, and use them for decision-making and control.
The budgets are used as a control mechanism. The person responsible for the budget is informed regularly whether the performance is below or above the budget and whether his expense budgets and performance show adverse variances.
The budgets act as self-motivating tools for achieving the operational performance. They induce action on the part of the manager if his performance is below the budget. Though the budgets are made at the 'responsibility centers' of the organization, budgeting is not an exercise in isolation. All the budgets, when computed in monetary terms, result in financial budgets. The diagram in Figure 3-38 shows the relationship of the various budgets.
Enhancing the value chain essentially involves improving communication across all levels, reducing costs
in every aspect of business, decreasing transaction or operation cycle times, monitoring and meeting
customer expectations, assessing competitors' moves, and improving customer service and relations.
The value chain is made up of all entities (process, tasks, individual, company) that participate in the
production of product or service. Each entity adds value to product or service. It encompasses all
processes, tasks and stages between suppliers and customers. IS and IT have helped to develop solutions such as the following:
ERP (Enterprise Resource Planning) systems that facilitate integrated resource management, thus
improving business operations.
SCM (Supply Chain Management) systems that streamline and optimize the supply chain, reducing the
overall costs of business operations.
CRM (Customer Relationship Management) systems that enhance customer relationships, resulting in
increased loyalty and repeat business.
PLM (Product Lifecycle Management) systems that manage the entire lifecycle of a product, enabling
continuous product improvements and efficient maintenance.
These solutions are often powered by enabling technologies such as the Internet, wireless connectivity,
Electronic Data Interchange (EDI), digital technologies, and CAD/CAM (Computer-Aided
Design/Computer-Aided Manufacturing) systems. These technologies add further value, accelerate
processes, and bolster the overall efficacy of the solutions.
The role of IS & IT in shaping business strategy and development can be summarized briefly as under:
Beyond Cost Savings: IS and IT offer benefits that extend beyond traditional cost savings. They
foster innovation and facilitate the development of new business models, products, and services.
Building Entry Barriers: Through the use of technology solutions, businesses can create entry
barriers across the entire supply chain. These integrated solutions bind customers and suppliers
together, thereby enhancing business performance measurements in terms of cost, quality, and
service.
Facilitating Paradigm Shift: IS and IT enable a shift from traditional 'make and sell' business models
to 'sense and respond' models. This transformation allows businesses to be more adaptive and
responsive to market needs and customer demands.
Applying such a model requires a specific strategic analysis for each business scenario to determine
appropriate design, development, and implementation strategies. However, Porter's Five Forces and
strategic options analysis can still provide a valuable starting point for this analysis.
The five forces and the five strategic options are applicable to every business. However, specific strategic analysis is required in each case to determine the various strategic options for design, development and implementation. Figure 3.40 shows the strategic analysis model.
Chapter 4
While this problem-solving approach comes in many flavors, it usually incorporates the
following general problem-solving steps (see Figure 4-1);
1. Planning-identify the scope and boundary of the problem, and plan the development strategy and
goals.
2. Analysis-study and analyze the problems, causes, and effects. Then, identify and analyze the
requirements that must be fulfilled by any successful solution.
3. Design- If necessary, design the solution-not all solutions require design.
4. Implementation- Implement the solution.
5. Support-analyze the implemented solution, refine the design, and implement improvements to
the solution. Different support situations can thread back into the previous steps.
The term cycle in systems development life cycle refers to the natural tendency for systems to
cycle through these activities, as was shown in Figure 4-1.
Fig 4-2 Feasibility Check Point in the System Development Life Cycle
Systems Analysis-A Definition Phase Checkpoint The next checkpoint occurs after the definition of
user requirements for the new system. These requirements frequently prove more extensive than
originally stated. For this reason, the analyst must frequently revise cost estimates for design and
implementation. Once again, feasibility is reassessed. If feasibility is in question, scope, schedule, and
costs must be rejustified. (Again, Module A offers guidelines for adjusting project expectations.)
How Do the End-Users and Managers Feel about the Problem (Solution)? It is important not only to evaluate whether a system can work but also to evaluate whether a system will work. A workable solution might fail because of end-user or management resistance. The following questions address this concern:
• Does management support the system?
• How do the end-users feel about their role in the new system?
• What end-users or managers may resist or not use the system? People tend to resist change.
Can this problem be overcome? If so, how?
• How will the working environment of the end-users change? Can or will end-users and
management adapt to the change?
Essentially, these questions address the political acceptability of solving the problem or the solution.
Usability Analysis When determining operational feasibility in the later stages of the development
life cycle, usability analysis is often performed with a working prototype of the proposed system. This
is a test of the system's user interfaces and is measured in how easy they are to learn and to use and
how they support the desired productivity levels of the users. Many large corporations, software
consultant agencies, and software development companies employ user interface specialists for
designing and testing system user interfaces. They have special rooms equipped with video
cameras, tape recorders, microphones, and two-way mirrors to observe and record a user working
with the system. Their goal is to identify the areas of the system where the users are prone to
make mistakes and processes that may be confusing or too complicated. They also observe the
reactions of the users and assess their productivity.
How do you determine if a system's user interface is usable? There are certain goals or criteria that experts agree help measure the usability of an interface. They are as follows:
• Ease of learning-How long it takes to train someone to perform at a desired level.
• Ease of use-You are able to perform your activity quickly and accurately. If you are a first-time or infrequent user, the interface is easy to understand. If you are a frequent user, your level of productivity and efficiency is increased.
• Satisfaction-You, the user, are favorably pleased with the interface and prefer it over types you
are familiar with.
Technical feasibility can be evaluated only after those phases during which technical issues are resolved, namely, after the evaluation and design phases of our life cycle have been completed.
These generic models are not definitive descriptions of software processes. Rather, they are useful
abstractions, which can be used to explain different approaches to software development. For many
large systems, of course, there is no single software process that is used. Different processes are used
to develop different parts of the system.
1) The waterfall model - This model represents the software development process as a sequential
flow of phases, where each phase follows the completion of the previous one. The phases
typically include requirements specification, software design, implementation, testing, and
maintenance. It emphasizes a linear and structured approach to development.
2) Evolutionary development- This approach involves iterative and incremental development.
It starts with the rapid development of an initial system based on abstract specifications. The
system is then refined with continuous customer input to meet their evolving needs. This model
allows for flexibility and adaptation during the development process.
3) Formal systems development- This model is based on producing a formal mathematical system
specification and using mathematical methods to transform it into a program. Verification of
system components is carried out through mathematical arguments that demonstrate their
conformity to the specification. This approach is less common but can be used in projects where
high assurance and correctness are critical.
4) Reuse-based development- This approach focuses on integrating reusable components into a
system rather than building everything from scratch. It assumes the availability of a significant
number of reusable components, and the development process centers around their selection,
adaptation, and integration. Reuse-based development can significantly speed up software
development by leveraging existing components.
Processes based on the waterfall model and evolutionary developments are widely used for practical
systems development. Formal system development has been successfully used in a number of projects
but processes based on this model are still only used in a few organizations. Informal reuse is common in many processes, but most organizations do not explicitly orient their software development processes towards reuse.
The generic process for agile development is given below. Note, however, that this process may vary slightly depending on the specific agile methodology (such as Scrum or Kanban) being used.
Step 1: Project Planning
In this initial stage, the overall project scope, objectives, and potential team members are identified. The
outcome of this stage typically includes a high-level project timeline, a list of team members, and a
rough estimate of the resources required.
Step 2: Product Roadmap Creation
The team, along with stakeholders, identifies key product features and groups them into a product
backlog. These features are usually described in terms of user stories, which define what each feature
will do for the end user. These are then prioritized based on their importance and value to the project.
Step 3: Release Planning
The team decides which features from the product backlog will be included in each release. This is
typically based on the priority of the features, the team's capacity, and the overall project timeline.
Step 4: Sprint Planning
The team plans "sprints," which are short, time-boxed iterations (typically lasting between one to four
weeks) during which a set of features are developed. The team selects features from the top of the
product backlog to include in the sprint, based on the sprint's duration and the team's velocity (the
amount of work they can complete in a sprint).
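The selection of stories against the team's velocity can be sketched as below; the backlog items, story points, and velocity figure are purely illustrative.

# Backlog is already ordered by priority; points estimate the size of each story.
backlog = [("User login", 5), ("Password reset", 3),
           ("Order history page", 8), ("Export to CSV", 5)]
velocity = 13                          # story points the team completes per sprint

sprint, used = [], 0
for story, points in backlog:
    if used + points <= velocity:      # take a story only if it still fits
        sprint.append(story)
        used += points

print(sprint, used)   # ['User login', 'Password reset', 'Export to CSV'] 13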
Step 5: Daily Stand-Up or Scrum
During the sprint, the team holds a daily meeting (also known as a stand-up or scrum) to discuss their
progress, any blockers they are facing, and the plan for the next 24 hours.
If you are dealing with a large application or program, various test cases may need to be created to test separate sections of the program. These test cases are normally gathered together into what is referred to as a test suite, which is simply a set of test cases.
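As a sketch, Python's standard unittest module groups test cases into a suite in exactly this way; the add function being tested is hypothetical.

import unittest

def add(a, b):               # the (hypothetical) unit under test
    return a + b

class TestAdd(unittest.TestCase):            # one test case class with two tests
    def test_positive(self):
        self.assertEqual(add(2, 3), 5)
    def test_negative(self):
        self.assertEqual(add(-2, -3), -5)

suite = unittest.TestSuite()                 # a test suite is a set of test cases
suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd))

if __name__ == "__main__":
    unittest.TextTestRunner().run(suite)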
While these different types of maintenance are generally recognised, different people sometimes give
them different names. Corrective maintenance is universally used to refer to maintenance for fault
repair. However, adaptive maintenance sometimes means adapting to a new environment and
sometimes means adapting the software to new requirements. Perfective maintenance sometimes means perfecting the software by implementing new requirements and, in other cases, maintaining the functionality of the system while improving its structure and its performance.
It is difficult to find up-to-date figures for the relative effort devoted to the different types of maintenance. A rather old survey by Lientz and Swanson (1980) discovered that about 65 per cent of maintenance was concerned with implementing new requirements, 18 per cent with changing the system to adapt it to a new operating environment, and 17 per cent with correcting system faults. Similar figures were reported by Nosek and Palvia (1990) 10 years later. For custom systems, this distribution of costs is still roughly correct.
From these figures we can see that repairing system faults is not the most expensive maintenance activity. Rather, evolving the system to cope with new environments and new or changed requirements consumes most maintenance effort.
Maintenance is therefore a natural continuation of the system development process, with associated specification, design, implementation and testing activities. A spiral model, such as that shown in Figure 27.3, is therefore a better representation of the software process than representations such as the waterfall model (see Figure 3.1), where maintenance is represented as a separate process activity.
The costs of system maintenance represent a large proportion of the budget of most organizations that use
software systems. In the 1980s, Lientz and Swanson found that large organizations devoted at least 50 per
cent of their total programming effort to evolving existing systems.
McKee (1984) found a similar distribution of maintenance effort across the different types of maintenance
but suggests that the amount of effort spent on maintenance is between 65 and 75 per cent of total
available effort. As organizations have replaced old systems with off-the-shelf systems, such as enterprise
resource planning systems, this figure may not have come down. Although the details may be uncertain,
we do know that software change remains a major cost for all organizations.
Maintenance costs as a proportion of development costs vary from one application domain to another.
For business application systems, a study by Guimaraes (1983) showed that maintenance costs
were broadly comparable with system development costs. For embedded real-time systems,
maintenance costs may be up to four times higher than development costs. The high reliability and
performance requirements of these systems may require modules to be tightly linked and hence difficult
to change.
It is usually cost-effective to invest effort when designing and implementing a system in order to reduce maintenance costs. It is more expensive to add functionality after delivery because of the need to understand the existing system and analyse the impact of system changes. Therefore, any work done during development to reduce the cost of this analysis is likely to reduce maintenance costs. Good software engineering techniques such as precise specification, the use of object-oriented development and configuration management all contribute to maintenance cost reduction.
Figure 27.4 shows how overall lifetime costs may decrease as more effort is expended during system development to produce a maintainable system. Because of the potential reduction in the costs of understanding, analysis and testing, there is a significant multiplier effect when the system is developed for maintainability. For System 1, extra development costs of $25,000 are invested in making the system more maintainable. This results in a saving of $100,000 in maintenance costs over the lifetime of the system.
This assumes that a percentage increase in development costs results in a comparable percentage decrease in overall system costs. One important reason why maintenance costs are high is that it is more expensive to add functionality after a system is in operation than it is to implement the same functionality during development. The key factors that distinguish development and maintenance, and which lead to higher maintenance costs, are:
1. Team stability After the delivery of a system, the development team is often disbanded, and new
individuals or teams are assigned to system maintenance. These new members may lack
understanding of the system and the design decisions made during development. Consequently, a
significant portion of the maintenance effort is dedicated to comprehending the existing system
before implementing changes.
2. Contractual responsibility Maintenance contracts are typically separate from system development
contracts and may be assigned to different companies. This separation, combined with the lack of
team stability, means that there is often no incentive for the development team to prioritize writing
the software in a way that facilitates easy changes. Cutting corners during development to save
effort may increase maintenance costs in the long run.
3. Staff skills Maintenance staff members are often relatively inexperienced and may be unfamiliar
with the specific application domain. Maintenance is sometimes considered a less skilled process
than system development and is frequently assigned to junior staff members. Additionally, legacy
systems may be written in outdated programming languages, requiring maintenance staff to learn
these languages to maintain the system.
4. Program age and structure As programs age, their structure tends to degrade due to multiple
changes, making them more challenging to understand and modify. Many legacy systems were
developed without modern software engineering techniques and may lack proper structure.
Furthermore, these systems were often optimized for efficiency rather than understandability,
adding complexity to maintenance efforts.
The first three of these problems stem from the fact that many organisations still make a
distinction between system development and maintenance. Maintenance is seen as a second-class
activity and there is no incentive to spend money during development to reduce the costs of system
change. The only long-term solution to this problem is to accept that systems rarely have a defined
lifetime but continue in use, in some form, for an indefinite period.
Rather than developing systems, maintaining them until further maintenance is impossible, and then replacing them, we have to adopt the notion of evolutionary systems. Evolutionary systems are systems that are designed to evolve and change in response to new demands. They can be created from existing legacy systems by improving their structure through re-engineering.
The last issue in the list above, namely the problem of degraded system structure, is in some ways the easiest problem to address. Re-engineering techniques may be applied to improve the system structure and understandability. If appropriate, architectural transformation (discussed later in this chapter) can adapt the system to new hardware. Preventative maintenance work (essentially incremental re-engineering) can be undertaken to improve the system and make it easier to change.
A Context Diagram (and a DFD for that matter) provides no information about the timing, sequencing,
or synchronization of processes such as which processes occur in sequence or in parallel. Therefore it
should not be confused with a flowchart or process flow which can show these things.
Some of the benefits of a Context Diagram are:
• Shows the scope and boundaries of a system at a glance including the other systems that
interface with it
• No technical knowledge is assumed or required to understand the diagram
• Easy to draw and amend due to its limited notation
• Easy to expand by adding different levels of DFDs
• Can benefit a wide audience including stakeholders, business analysts, data analysts, and developers
Work Break Down Structure:
A work breakdown structure (WBS) is a chart in which the critical work elements, called tasks, of a
project are illustrated to portray their relationships to each other and to the project as a whole. The
graphical nature of the WBS can help a project manager predict outcomes based on various scenarios,
which can ensure that optimum decisions are made about whether or not to adopt suggested procedures
or changes.
When creating a WBS, the project manager defines the key objectives first and then identifies the
tasks required to reach those goals. A WBS takes the form of a tree diagram with the "trunk" at the top
and the "branches" below. The primary requirement or objective is shown at the top, with increasingly
specific details shown as the observer reads down.
At Event 3, we have to evaluate two predecessor activities, Activity 1-3 and Activity 2-3. Activity 1-3 gives us an Earliest Start of 3 weeks at Event 3. However, Activity 2-3 also has to be completed before Event 3 can begin. Along this route, the Earliest Start would be
4+0=4. The rule is to take the longer (bigger) of the two Earliest Starts. So the Earliest Start at event 3
is 4.
Similarly, at Event 4, we find we have to evaluate two predecessor activities - Activity 2-4 and Activity
3-4. Along Activity 2-4, the Earliest Start at Event 4 would be 10 wks, but along Activity 3-4, the
Earliest Start at Event 4 would be 11 wks. Since 11 wks is larger than 10 wks, we select it as the Earliest
Start at Event 4. We have now found the longest path through the network. It will take 11 weeks along
activities 1-2, 2-3 and 3-4. This is the Critical Path.
At Event 3 there is only one activity, Activity 3-4 in the backward pass, and we find that the value
is 11-7 = 4 weeks. However at Event 2 we have to evaluate 2 activities, 2-3 and 2-4. We find that the
backward pass through 2-4 gives us a value of 11-6 = 5 while 2-3 gives us 4-0 = 4. We take the smaller
value of 4 on the backward pass.
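The forward and backward passes described above can be reproduced with a short Python sketch. The activity durations below are taken from this example network (1-2: 4 weeks, 1-3: 3, 2-3: 0, 2-4: 6, 3-4: 7).

# Activities keyed by (from_event, to_event), value = duration in weeks.
activities = {(1, 2): 4, (1, 3): 3, (2, 3): 0, (2, 4): 6, (3, 4): 7}
events = [1, 2, 3, 4]

# Forward pass: earliest time at an event = the larger value over incoming activities.
earliest = {1: 0}
for e in events[1:]:
    earliest[e] = max(earliest[i] + d for (i, j), d in activities.items() if j == e)
# earliest -> {1: 0, 2: 4, 3: 4, 4: 11}; the longest path takes 11 weeks.

# Backward pass: latest time at an event = the smaller value over outgoing activities.
latest = {4: earliest[4]}
for e in reversed(events[:-1]):
    latest[e] = min(latest[j] - d for (i, j), d in activities.items() if i == e)
# latest -> {1: 0, 2: 4, 3: 4, 4: 11}; events where earliest == latest lie on the critical path.
print(earliest, latest)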
Chapter 5
Today, many organizations have evolved from a structured analysis approach to an information
engineering approach.
Information engineering is a data-centered, but process-sensitive, technique that is applied to the organization as a whole, or to a significant part of it such as a division, rather than on an ad-hoc, project-by-project basis as in structured analysis.
The basic concept of information engineering is that information systems should be engineered like other
products. Information engineering books typically use a pyramid framework to depict information
systems building blocks and system development phases. The phases are:
1 Information Strategy Planning (ISP) is a systems analysis approach that focuses on examining the
entire business organization to develop an overarching plan and architecture for future information
systems development. The primary goal of ISP is not to create actual information systems or computer
applications but to create a strategic plan that aligns information systems with the organization's business
objectives.
In ISP, the project team analyzes the business mission and goals and formulates an information systems
architecture and plan that optimally supports the organization in achieving its business goals. This strategic
plan guides the identification and prioritization of specific business areas. A business area represents a
collection of cross-organizational business processes that require high integration to support the
information strategy plan and fulfill the business mission.
However, for JAD to be successful, it requires a skilled facilitator who can effectively manage group
dynamics, encourage participation from all members, and mediate any conflicts that arise.
One of the most interesting contemporary applications of systems analysis methods is business process
redesign.
Business process redesign (BPR) also called business process reengineering is the application of
systems analysis (and design) methods to the goal of dramatically changing and improving the
fundamental business processes of an organization, independent of information technology. The
motivation behind BPR arose from the realization that many existing information systems and
applications merely automated inefficient business processes. Automating outdated processes does not
add value to the business and may even subtract value from it. BPR is one of several projects influenced
by the total quality management (TQM) trend.
BPR projects primarily focus on non-computer processes within the organization. Each process undergoes
careful analysis to identify bottlenecks, assess value contribution, and identify opportunities for
elimination or streamlining. After redesigning the business processes, BPR projects often explore how
information technology can be effectively applied to support the improved processes. This may lead to the
initiation of new application development projects, which can be addressed using other techniques
discussed in this section.
Object Oriented Analysis
Object-Oriented Analysis (OOA) is a pivotal technique in systems development that strives to harmonize the
traditionally separate concerns of data and processes. In OOA, data and the processes that act upon that data are combined into single constructs called objects.
Fig 5-7. Example of a Level 1 DFD Showing the Data Flow and Data Store Associated
With a SubProcess "Digital Sound Wizard."
Fig 5-8. A Valid DFD Example Illustrating Data Flows, Data Store, Processes, and Entities.
One-to-many
A one-to-many relationship refers to a situation where a single item in one entity can be connected to multiple items in another entity. It means that one item in the first entity can be associated with many items in the second entity, but each item in the second entity is associated with only one item in the first.
Using the previous scenario as an illustration, when an instructor teaches multiple courses within a year,
it establishes a one-to-many relationship. It is important to note that merging entities should be avoided if
there is a potential for transforming a one-to-one relationship into a one-to-many relationship.
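The sketch below expresses such a one-to-many relationship in SQL, using Python's built-in sqlite3 module; the table and column names loosely mirror the instructor/course example and are otherwise illustrative.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE instructor (
    instructor_number INTEGER PRIMARY KEY,
    instructor_name   TEXT
);
CREATE TABLE course (
    class_number      INTEGER PRIMARY KEY,
    class_name        TEXT,
    instructor_number INTEGER REFERENCES instructor(instructor_number)
);
""")
conn.execute("INSERT INTO instructor VALUES (1, 'Dr. Rai')")
conn.execute("INSERT INTO course VALUES (101, 'Auditing', 1)")
conn.execute("INSERT INTO course VALUES (102, 'MICS', 1)")   # one instructor, many courses
print(conn.execute("SELECT class_name FROM course WHERE instructor_number = 1").fetchall())
conn.close()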
Exercise
Consider the Order Tracking system.
Customers place orders for universal products. Orders are filled in the Order Processing department
by order processing clerks. In the Order Processing department, an Order number is assigned to each
order for identification and an invoice with the cost of the products for the order is produced. When the
invoice is sent to the customer, a shipment is also made to the customer by the Shipping department.
After the DFDs are drawn, the following data entities are established: SHIPMENT, CUSTOMER, ORDER, INVOICE and PRODUCT.
Establish the possible relationships between each of these data entities.
Solution to Exercise
The following diagram represents a one-to-one relationship.
This top-down approach is applied to the construction of a data model by way of E-R diagram. The
above steps are illustrated as a flowchart in Figure 5-13.
Fig 5-14 Final E-R diagram representing relationship among students, instructors and courses
offered.
Next we define and group the attributes for each data entities, as shown below:
COURSES OFFERED = Class-Number
Class-Name
Class-Credits
Class-Room
Class-Time
Class-Instructor
Class-Enrollment
Class-Maximum-Limit
INSTRUCTOR = Instructor-Number
lnstructor-Name
Instructor-Department
Instructor-schedule (for all classes taught)
{Class-Number}
{Class-Name}
{Class-Credits}
Chapter 6
Fig 7-1 This E-Commerce Process Architecture Highlights Nine Essential Categories of
E- Commerce Processes
Access Control and Security
In e-commerce processes, it is crucial to establish mutual trust and ensure secure access between the parties
involved in a transaction. This is accomplished through various measures such as user authentication, access
authorization, and the implementation of security features. For instance, these processes verify the identity of a
customer and an e-commerce site using methods like user names and passwords, encryption keys, or digital
certificates and signatures. Once authenticated, the e-commerce site grants access only to specific parts of the
site that are relevant to the individual user's transactions. Typically, users are granted access to all resources on
an e-commerce site, except for restricted areas such as other users' accounts, confidential company data, and
webmaster administration sections.
In the case of B2B e-commerce, companies may rely on secure industry exchanges or web trading portals that
restrict access to registered customers, ensuring that only authorized individuals can access trading information
and applications. Additional security processes are implemented to safeguard e-commerce resources from
various threats, including hacker attacks, password or credit card number theft, and system failures. These
measures are put in place to maintain the integrity and security of e-commerce sites and protect both the
businesses and their customers.
Profiling and Personalizing
Once you have gained access to an e-commerce site, profiling processes can occur that gather data on
you and your Web site behavior and choices, as well as build electronic profiles of your characteristics
and preferences. User profiles are developed using profiling tools such as user registration, cookie files,
website behavior tracking software, and user feedback. These profiles are then used to recognize you as
an individual user and provide you with a personalized view of the contents of the site, as well as product
Fig 7-2 The role of content management and workflow management in a web-based
procurement process: The MS Market System Used By Microsoft Corp
Event Notification
Event notification is a crucial component of modern e-commerce applications. These systems are
predominantly event-driven, responding to a diverse range of events throughout the entire e-commerce
process. From a new customer's initial website access to payment and delivery processes, as well as
various customer relationship and supply chain management activities, event notification plays a vital
role in keeping stakeholders informed of relevant updates and changes that may affect their transactions.
To facilitate event notification, e-commerce systems utilize event notification software, which works
in conjunction with workflow management software. This combination enables continuous monitoring
of all e-commerce processes, capturing essential events, including unexpected changes and problem
situations. Subsequently, the event notification software collaborates with user-profiling software to
automatically notify all involved stakeholders through their preferred electronic messaging methods,
Fig 7-4 an example of a secure electronic payment system with many payment alternatives
Electronic Funds Transfer
Electronic funds transfer (EFT) systems continue to be a vital and pervasive element in modern banking
and retail industries, facilitating swift and secure money and credit transfers between financial institutions,
businesses, and their customers. The landscape of EFT has evolved significantly with the advent of
advanced information technologies, offering a plethora of efficient electronic payment methods and
services.
In the banking sector, robust and interconnected networks support teller terminals at bank branches,
ensuring smooth in-person transactions for customers. Additionally, the proliferation of automated teller
machines (ATMs) has revolutionized access to funds, allowing users to withdraw cash, make deposits,
and perform various banking tasks across the globe.
Moreover, the rise of web-based payment services has transformed the way consumers manage their
finances. Popular platforms like PayPal, BillPoint, and others offer secure cash transfers over the internet,
empowering users to conduct online transactions with ease. Services like CheckFree and Paytrust have
simplified bill payment processes, enabling customers to settle their bills automatically through online
platforms.
In the retail industry, EFT systems have become indispensable, offering seamless and instantaneous
payment options for customers. Point-of-sale (POS) terminals at retail outlets are now connected to bank
EFT systems, facilitating transactions through credit cards or debit cards for purchases such as groceries,
gas, and other goods.
Software as a service
SaaS, or Software as a Service, describes any cloud service where consumers are able to access software
applications over the internet. The applications are hosted in "the cloud" and can be used for a wide
range of tasks for both individuals and organisations. Google, Twitter, Facebook and Flickr are all
examples of SaaS, with users able to access the services via any internet enabled device. Enterprise
users are able to use applications for a range of needs, including accounting and invoicing, tracking
sales, planning, performance monitoring and communications (including webmail and instant
messaging).
SaaS is often referred to as software-on-demand and utilising it is akin to renting software rather than
buying it. With traditional software applications you would purchase the software upfront as a package
and then install it onto your computer. The software's licence may also limit the number of users
Cost-Efficient: SaaS eliminates the need for additional hardware costs as the processing power
is provided by the cloud provider.
Easy Setup: There are no initial setup costs; applications are ready for use once subscribed to.
Flexible Payment Model: Users pay for what they use, often on a monthly basis, making it cost-
effective for short-term needs.
Scalability: Users can easily access more storage or additional services on demand without
installing new software or hardware.
Automated Updates: Updates are automatically available online to existing customers, usually
free of charge.
Cross-Device Compatibility: SaaS applications can be accessed from any internet-enabled
device, providing flexibility for users.
Remote Accessibility: Applications can be accessed from any location with an internet-enabled
device, freeing users from installation restrictions.
Customization: Some SaaS applications offer customization options, allowing businesses to
tailor the software to their specific needs and branding.
Office software extensively utilizes SaaS, providing a range of solutions for accounting, invoicing,
sales, and planning. Businesses can subscribe to the required software and access it online from any
office computer using a username and password. The flexibility of SaaS allows easy switching between
software based on changing needs. Additionally, businesses can set up multiple users with varying
levels of access to the software, accommodating different team sizes and requirements.
The SaaS industry has experienced significant growth and diversification since its inception. Many new
players have emerged, offering specialized SaaS solutions for various industries and use cases.
Additionally, advancements in cloud technology have further enhanced the performance, security, and
accessibility of SaaS applications. Furthermore, many SaaS providers now focus on integrating artificial
intelligence and machine learning capabilities into their software, enabling more intelligent and
personalized user experiences. This continuous innovation in the SaaS space ensures that businesses and
individuals have access to cutting-edge software solutions for their needs.
Data Exchange:
Data exchange is the process of converting data from a source schema to a target schema, ensuring that
the target data accurately represents the source data. It involves restructuring the data, which can lead
to some content loss, making it distinct from data integration. During data exchange, instances may face
constraints that make transformation impossible. Conversely, there might be multiple ways to transform
an instance, requiring the identification and justification of the "best" solution among them. This process
plays a crucial role in data management and ensures seamless data sharing and compatibility between
different systems and databases.
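A minimal Python sketch of such a schema-to-schema transformation is shown below; the field names are invented, and the dropped middle_name field illustrates why data exchange can involve some content loss.

source_records = [
    {"first_name": "Sita", "middle_name": "K.", "surname": "Shrestha", "dob": "1990-04-12"},
]

def to_target(rec):
    # Restructure one source record into the target schema.
    return {
        "full_name": f"{rec['first_name']} {rec['surname']}",  # merged field
        "date_of_birth": rec["dob"],                           # renamed field
        # 'middle_name' is deliberately not carried over (content loss).
    }

target_records = [to_target(r) for r in source_records]
print(target_records)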
Chapter 7
As SFA technologies continue to evolve, organizations need to address challenges associated with
implementation and user perceptions to maximize their benefits. A deeper understanding of user needs,
seamless integration with existing systems, and continuous updates to align with changing market
demands will be crucial for successful SFA adoption.
Chapter 8
Information System Security, Protection of Information Assets and Control
Security features
Security features, while not providing an absolute guarantee of a secure system, are the essential building
blocks required to construct a robust security infrastructure. These features can be categorized into four
primary areas:
Authentication: This feature validates a user's identity, affirming that users are indeed who they claim
to be. In a contemporary context, think of Two-Factor Authentication (2FA) or Biometric
authentication used in online banking or social media platforms. These techniques add an extra layer
of security, ensuring that you're the only one able to log into your accounts, even if someone else
knows your password.
Authorization: Once your identity is confirmed, authorization controls what you can and cannot do
within a system. It defines permissions and privileges for users, determining what resources you can
access and what actions you can perform. For instance, in a project management tool like Trello or
Asana, while a team member might be able to view and edit certain project tasks, they may not have
the authority to delete or add new tasks - that may be reserved for the project manager.
Encryption: Encryption plays a crucial role in data privacy and security. It transforms readable data
(plaintext) into an encoded version (ciphertext) that can only be deciphered using a decryption key.
Consider messaging apps like Signal or WhatsApp that offer end-to-end encryption, meaning only
the sender and receiver can read the messages, ensuring that even if a malicious actor intercepts the
communication, they would not be able to understand it.
Auditing: Auditing involves recording system activities for detection and investigation of security
breaches or incidents. It helps track user activities, system changes, and data access, providing a
valuable trail for forensic investigations and compliance purposes. For instance, e-commerce
platforms maintain audit logs to document user transactions, which can be used to resolve disputes
over whether a particular item was purchased or not.
These features work synergistically to create a comprehensive security framework, providing multi-
layered protection for systems and data in an increasingly complex digital environment.
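Two of these building blocks, authentication and auditing, can be sketched with Python's standard library alone, as below; the password, log entry, and key handling are simplified assumptions, and a production system would normally rely on a vetted security framework instead.

import hashlib, hmac, os

# Authentication: store a salted hash of the password, never the password itself.
salt = os.urandom(16)
stored = hashlib.pbkdf2_hmac("sha256", b"user-password", salt, 100_000)

def verify(attempt: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", attempt, salt, 100_000)
    return hmac.compare_digest(candidate, stored)   # constant-time comparison

# Auditing: attach an integrity tag to each log entry so tampering is detectable.
audit_key = os.urandom(32)
entry = b"2024-01-15 user=42 action=transfer amount=5000"
tag = hmac.new(audit_key, entry, hashlib.sha256).hexdigest()

print(verify(b"user-password"), tag[:16])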
Attacks against e-Commerce Web sites are so alarming that they follow right after violent crimes in the news. Practically every month, there is an announcement of an attack on a major Web site where sensitive information is obtained.
Compared to robbing a bank, the tools necessary to perform an attack on the Internet are fairly cheap. The criminal only needs access to a computer and an Internet connection. On the other hand, a bank robbery may require firearms, a getaway car, and tools to crack a safe, and these may still not be enough. Hence, the low cost of entry to an e-Commerce site attracts the broader criminal population.
The payoff of a successful attack is unimaginable. If you were to take a penny from every account
at any one of the major banks, it easily amounts to several million dollars. The local bank robber
optimistically expects a windfall in the tens of thousands of dollars. Bank branches do not keep a lot of
cash on hand. The majority is represented in bits and bytes sitting on a hard disk or zipping through a
network.
While the local bank robber is restricted to the several branches in his region, his online
counterpart can choose from the thousands of banks with an online operation. The online bank robber
can rob a bank in another country, taking advantage of non-existent extradition rules between the country
where the attack originated, and the country where the attack is destined.
An attack on a bank branch requires careful planning and precautions to ensure that the criminal does not
leave a trail. He ensures the getaway car is not easily identifiable after the robbery. He cannot leave
fingerprints or have his face captured on the surveillance cameras. If he performs his actions on the
Internet, he can easily make himself anonymous and the source of the attack untraceable.
The local bank robber obtains detailed building maps and city maps of his target. His online counterpart
easily and freely finds information on hacking and cracking. He uses different sets of tools and techniques
everyday to target an online bank.
As mentioned, the vulnerability of a system exists at the entry and exit points within the system. Figure
9-6 shows an e-Commerce system with several points that the attacker can target:
• Shopper
• Shopper's computer
• Network connection between shopper and Web site's server
• Web site's server
• Software vendor
These target points and their exploits are explored later in this section.
Attacks
This section describes potential security attack methods from an attacker or hacker.
Tricking the shopper
Some of the easiest and most profitable attacks are based on tricking the shopper, also known as social
engineering techniques. These attacks involve surveillance of the shopper's behavior, gathering
information to use against the shopper. For example, a mother's maiden name is a common challenge
question used by numerous sites. If one of these sites is tricked into giving away a password once the
challenge question is provided, then not only has this site been compromised, but it is also likely that
the shopper used the same logon ID and password on other sites.
A common scenario is that the attacker calls the shopper, pretending to be a representative from a site
visited, and extracts information. The attacker then calls a customer service representative at the site,
posing as the shopper and providing personal information. The attacker then asks for the password to
be reset to a specific value.
Another common form of social engineering attack is the phishing scheme. Typo pirates play on the names of famous sites to collect authentication and registration information. For example,
http://www.ibm.com/shop is registered by the attacker as www.ibn.com/shop. A shopper mistypes
and enters the illegitimate site and provides confidential information. Alternatively, the attacker sends
emails spoofed to look like they came from legitimate sites. The link inside the email maps to a rogue
site that collects the information.
Fig 9-7 Attacker sniffing the network between client and server
TLS provides encryption and authentication mechanisms to protect the confidentiality and integrity of data
transmitted between a client (such as a web browser) and a server. When establishing a TLS connection,
both the client and server undergo a handshake process to negotiate encryption algorithms and exchange
encryption keys. This handshake verifies the authenticity of the server and establishes a secure channel
for data transmission.
The main features and benefits of TLS include:
Encryption: TLS encrypts data to prevent unauthorized access and ensure that information remains
confidential during transmission. Encryption algorithms used in TLS include symmetric encryption (for
bulk data) and asymmetric encryption (for exchanging encryption keys).
Data Integrity: TLS employs cryptographic hash functions to ensure that data transmitted between the
client and server is not tampered with during transit. This provides assurance that the data received at the
destination is the same as the data sent by the sender.
Authentication: TLS supports various authentication mechanisms, including digital certificates issued by
trusted Certificate Authorities (CAs). These certificates verify the identity of the server and sometimes the
client, enabling users to trust the authenticity of the entities they are communicating with.
Forward Secrecy: TLS supports forward secrecy, which means that even if an attacker compromises the
private key of a server, they cannot decrypt past communications that were secured with different session
keys. This enhances the long-term security of the communication.
Interoperability: TLS is a widely adopted standard and is supported by most modern web browsers, email
clients, and network devices. It ensures interoperability between different systems and allows for secure
communication across diverse platforms.
TLS has undergone several versions and improvements over time. The current versions include TLS 1.2
and TLS 1.3, with the latter being the most secure and efficient. TLS is continuously updated to address
known vulnerabilities and adapt to evolving security requirements.
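As a brief sketch, Python's standard ssl module can open a TLS-protected connection as follows; the host name is only an example.

import socket, ssl

context = ssl.create_default_context()            # trusted CA certificates, sensible defaults
with socket.create_connection(("www.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
        print(tls.version())                      # e.g. 'TLSv1.3' after the handshake
        print(tls.getpeercert()["subject"])       # server identity verified against the CA chain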
Secure Socket Layer (SSL) - previously used
Secure Socket Layer (SSL) is a protocol that encrypts data between the shopper's computer and the site's
server. When an SSL-protected page is requested, the browser identifies the server as a trusted entity and
initiates a handshake to pass encryption key information back and forth. Now, on subsequent requests to
the server, the information flowing back and forth is encrypted so that a hacker sniffing the network
cannot read the contents.
The SSL certificate is issued to the server by a trusted certificate authority (CA).
When a request is made from the shopper's browser to the site's server using https://..., the shopper's
browser checks if this site has a certificate it can recognize. If the site is not recognized by a trusted
certificate authority, then the browser issues a warning.
As an end-user, you can determine whether you are using SSL by checking your browser. For example,
in Mozilla® Firefox, the secure (padlock) icon appears in the URL entry field, as shown in Figure 9-10.
Server firewalls
Server firewalls are essential components of network security that help protect servers from unauthorized
access and potential attacks. Similar to a moat surrounding a castle, a firewall acts as a barrier between the
server and the external network, controlling incoming and outgoing network traffic based on predefined
rules.
A common configuration for server firewalls involves the use of a demilitarized zone (DMZ), which is
created using two firewalls. The outer firewall allows incoming and outgoing HTTP requests, enabling
communication between client browsers and the server. The inner firewall, located behind the e-
Commerce servers, provides a higher level of security. It only permits requests from trusted servers on
specific ports to enter the server environment. Intrusion detection software is often employed on both
firewalls to detect any unauthorized access attempts and potential threats.
In addition to a DMZ, another technique often used is the implementation of a honey pot server. A honey
pot is a deceptive resource, such as a fake payment server, intentionally placed in the DMZ to lure and
deceive potential attackers. These servers are closely monitored, and any access or interaction by an
attacker is promptly detected. The honey pot serves as a distraction and can provide valuable insight into
the attacker's methods and intentions, enabling security teams to strengthen defenses and respond
effectively.
Server firewalls, along with other security measures, play a crucial role in safeguarding the integrity and
availability of servers in an e-Commerce environment. By carefully controlling and monitoring network
traffic, organizations can mitigate the risk of unauthorized access, data breaches, and other malicious
activities. Regular updates and configuration reviews are necessary to ensure that firewalls remain
effective against evolving threats and vulnerabilities.
Another such advancement is the advent of Web Application Firewalls (WAFs). Unlike regular firewalls
that filter traffic based on ports and protocols, WAFs operate at the application layer (Layer 7 of the OSI
model) and are specifically designed to inspect HTTP/HTTPS traffic. They can identify and block attacks
such as cross-site scripting (XSS), SQL injection, and other common web-based threats that can
compromise servers. WAFs are particularly effective for e-Commerce servers that often host complex web
applications with multiple potential vulnerabilities.
Another rising trend in server security is the use of machine learning and artificial intelligence. AI-
enhanced firewalls analyze patterns and behaviors in network traffic, learning over time to identify
suspicious activity better. These intelligent firewalls can adapt and respond to threats faster and more
accurately than traditional firewalls, often detecting and mitigating issues before they become significant
problems.
Moreover, the implementation of a Zero Trust security model has gained traction. This model operates on
the principle of "trust nothing, verify everything," irrespective of whether the request originates from
within or outside the network. In the context of firewalls, this means robust identity verification and strict
access controls, ensuring that only verified users or systems can access server resources.
Microsegmentation is another strategy increasingly being employed. It involves breaking down the
security perimeters into small zones to maintain separate access for separate parts of the network. This can
help to limit an attacker’s ability to traverse across the network even if they breach the initial firewall.
These advancements, combined with traditional firewall systems, help create a layered defense strategy,
reducing the chances of successful server attacks and enhancing overall network security.
Minimum Password Length: Passwords must be at least 12 characters in length.
Account Lockout: After 5 unsuccessful login attempts, the account is locked for 15 minutes.
Password Strength Education: Users are educated on creating strong passwords and regularly reminded to update their passwords.
Password Usage Monitoring: Password usage and activity are monitored for potential security breaches.
You may choose to have different policies for shoppers versus your internal users. For example, you may
choose to lock out an administrator after 2 failed login attempts instead of 5. These password policies
protect against attacks that attempt to guess the user's password. They ensure that passwords are
sufficiently strong that they cannot be easily guessed. The account lockout capability ensures
that an automated scheme cannot make more than a few guesses before the account is locked.
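The lockout rule above can be sketched in a few lines of Python. This is an illustrative fragment only, assuming the thresholds in the policy table (5 attempts, 15 minutes); a real system would persist this state, hash passwords, and log events.

# Minimal sketch of the account-lockout policy described above.
import time

MAX_ATTEMPTS = 5
LOCKOUT_SECONDS = 15 * 60

failed_attempts = {}   # user -> (failure count, time of last failure)

def is_locked(user: str) -> bool:
    count, last_failure = failed_attempts.get(user, (0, 0.0))
    return count >= MAX_ATTEMPTS and (time.time() - last_failure) < LOCKOUT_SECONDS

def record_login(user: str, success: bool) -> None:
    if success:
        failed_attempts.pop(user, None)          # reset the counter on success
    else:
        count, _ = failed_attempts.get(user, (0, 0.0))
        failed_attempts[user] = (count + 1, time.time())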
Intrusion detection and audits of security logs
One of the cornerstones of an effective security strategy is not only to prevent attacks but also to detect
potential attackers. Intrusion detection and regular audits of security logs help in understanding the nature
of the system's traffic and can serve as a starting point for litigation against the attackers.
Secure Connection: Always ensure that the website is using a secure connection, denoted by HTTPS
in the URL, particularly when entering sensitive information like your credit card details or
passwords.
Valid SSL Certificate: Don't shop at sites where your browser shows warnings about the SSL
certificate. This could indicate that the site is not secure and the information you send could be
intercepted by others.
Strong Passwords: Use strong, unique passwords for each online account. Consider using a password
manager to help generate and remember these passwords.
Two-Factor Authentication: If available, enable two-factor authentication for added security. This
typically involves receiving a text or using an app to receive a code that you input when logging in.
Log Out After Shopping: Always log out of your account after you're done shopping, especially on
public or shared computers.
Use Credit Cards Instead of Debit Cards: Credit cards usually offer better protection against
fraudulent charges than debit cards.
Privacy: Information must be protected from unauthorized access. Encryption is used to achieve
privacy. In public key infrastructure (PKI), a message is encrypted using the recipient's public key,
which can only be decrypted using their private key.
Integrity: Messages must not be altered or tampered with during transmission. Techniques like
message authentication codes (MACs) or digital signatures can be used to ensure message
integrity.
Authentication: Both the sender and recipient need to prove their identities to each other.
Authentication can be accomplished using digital certificates, passwords, biometrics, or other
methods that verify identity.
Non-repudiation: There should be proof of the origin and delivery of the message. Techniques like
digital signatures provide non-repudiation by ensuring that the sender cannot deny having sent the
message.
Public Key Infrastructure (PKI) forms the cornerstone of modern encryption techniques, and it's used
extensively to secure both online and offline communications. In PKI, data is encrypted with a public
key and decrypted with a corresponding private key. The public key is widely distributed and
accessible to anyone, while the private key is kept secret by the recipient.
In the realm of authentication, a process that verifies the sender's identity, PKI utilizes a system akin
to digital signatures. Here, a hash of the original message is encrypted using the sender's private key.
This encrypted hash, or signature, can then be decrypted by the recipient using the sender's public
key and compared against their own hash of the message to ensure the integrity and authenticity of
the data.
However, it's worth noting that PKI, due to the computational overhead of asymmetric encryption,
isn't typically efficient for the transmission of large amounts of data. Therefore, it's often used as an
initial step to establish a secure channel. During this phase, both parties can securely agree on a
symmetric key for further communications. This process, known as key agreement, often employs
algorithms such as Diffie-Hellman key exchange.
This symmetric key, which is identical for both parties, is then used to encrypt and decrypt the
exchanged data efficiently. It's essential to understand that this symmetric key must be protected
during its distribution to prevent unauthorized access to the communication.
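The hybrid pattern described above can be sketched as follows in Python, assuming the third-party cryptography package; the in-memory RSA key pair stands in for the recipient's PKI keys and a Fernet key plays the role of the agreed symmetric session key.

# Illustrative hybrid-encryption sketch: wrap a symmetric session key with RSA-OAEP,
# then use the symmetric key for the bulk data.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

session_key = Fernet.generate_key()                      # symmetric key for bulk data
wrapped_key = recipient_public.encrypt(                  # protected with the recipient's public key
    session_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)

ciphertext = Fernet(session_key).encrypt(b"order details: card ending 1234")

# Recipient side: unwrap the session key with the private key, then decrypt the data.
unwrapped = recipient_private.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)
plaintext = Fernet(unwrapped).decrypt(ciphertext)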
Maintaining the secrecy of private keys is crucial in this security model, but it's not the only potential
security lapse. Vulnerabilities can arise anywhere in the system, from software weaknesses to user
behavior, thus emphasizing the need for a holistic approach to security.
In summary, PKI systems often leverage a combination of asymmetric encryption, like RSA, for
authentication and secure key exchange, and symmetric encryption for the efficient exchange of data.
The security of these systems hinges on various factors, including safe private key handling, the
robustness of the encryption algorithms, and the overall security of the system they're implemented
in.
Upon receiving the message, the recipient uses the sender's public key to decrypt the digital signature,
extracting the original message digest. The recipient also runs the received plaintext message through
the same hash function to produce a new message digest. If the decrypted message digest matches the
newly generated one, it verifies that the message has not been tampered with during transmission. To
further strengthen the security and enforce non-repudiation, a third-party timestamping service is often
employed to validate the time and date at which the message was sent.
Authentication, on the other hand, can be ensured using digital certificates. How does a customer, for
instance, know that a website collecting sensitive information is not a fraudulent setup mimicking a
legitimate e-merchant? They can verify the site's digital certificate. This is a digital document issued by
a trusted Certification Authority (CA) like Verisign, Thawte, etc., that vouches for the website's
authenticity. It uniquely identifies the website or merchant, confirming their identity and legitimacy.
Digital certificates are not only issued for e-commerce sites and web servers but are also used to
authenticate emails and other online services.
First, digital signatures. Imagine you're writing a secret note. You want to make sure that when your
friend reads it, they know it was you who wrote it, and that nobody else has changed it while it was on
its way. To do this, you create a 'message digest' - a unique representation of your message, sort of like
a digital fingerprint, made using something called a hash function.
When your friend gets your note, they use your public key to 'unlock' your digital signature and get the
original message digest. They also create a new message digest by running your note through the same
hash function you used. If these two digests match, your friend knows that the note really is from you
and that nobody messed with it.
Sometimes, to make extra sure that the note really was sent when you say you sent it, a third party might
add a timestamp to your note, sort of like a digital postmark.
Next, let's talk about digital certificates. Imagine you're shopping online. How do you know that the
website you're buying from is the real deal and not some imposter trying to steal your information? This
is where digital certificates come in.
A digital certificate is like a digital ID card for a website or email service, issued by a trusted
organization called a Certification Authority (CA). This CA is like a digital notary, confirming that the
website or service is who they say they are. When you visit a site, you can check its digital certificate
to make sure it's not an imposter.
So, in short, digital signatures make sure a message is genuine and hasn't been tampered with, and
digital certificates prove that a website or online service is the real deal. Both of these help keep our
online world safe and secure.
Hash Function: The original message is processed through a hash function to generate a unique
value called the message digest.
Encryption: The message digest is encrypted using the sender's private key, creating the digital
signature. The digital signature is appended to the message.
Verification: The recipient uses the sender's public key to decrypt the digital signature, obtaining
the message digest. They independently run the received message through the same hash function
to generate a new digest.
Integrity Check: The recipient compares the received message digest with the newly generated
one. If they match, it indicates that the message has not been tampered with during transmission.
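A minimal Python sketch of these signing and verification steps, assuming the third-party cryptography package, is shown below. The key pair, message, and padding choices are illustrative only; the hash-then-encrypt step is handled internally by sign().

# Sketch of the hash, encrypt, verification and integrity-check steps listed above.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

sender_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sender_public = sender_private.public_key()

message = b"Pay NPR 5,000 to supplier X"

# Hash the message and encrypt the digest with the sender's private key.
signature = sender_private.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Verification: raises InvalidSignature if the message or signature was altered.
sender_public.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("Signature verified: message is authentic and unaltered.")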
Digital Certificates:
Digital certificates serve as a means of authenticating the identity of the website or entity receiving
sensitive information. Here's how it works:
Certification Authority (CA): Trusted entities like Verisign or Thawte issue digital certificates.
CAs verify the identity of the certificate holder before issuing the certificate.
Certificate Content: A digital certificate contains information about the certificate holder,
including their public key and other identifying details. The certificate is digitally signed by the CA
to ensure its authenticity.
By combining digital signatures and digital certificates, the integrity, authenticity, and non-repudiation
of electronic communications can be ensured. Digital signatures verify the integrity of the message,
while digital certificates authenticate the identity of the receiving entity. Together, they provide a secure
framework for transmitting sensitive information and establishing trust in e-commerce transactions.
TLS operates at the transport layer of the networking stack, sitting on top of the reliable transmission
protocol (usually TCP). It uses a combination of symmetric and asymmetric encryption algorithms, digital
certificates, and cryptographic protocols to establish a secure connection between two parties.
Handshake: The TLS handshake process begins when a client connects to a server over a secure
connection (usually initiated by the client accessing a website with "https" in the URL). During the
handshake, the client and server negotiate the security parameters for the session.
Encryption Setup: Once the handshake is complete, the client and server establish a shared session
key using asymmetric encryption (public-key cryptography). This session key is then used for
symmetric encryption (faster and more efficient) of the actual data transmitted between the client and
server.
Data Exchange: With the secure connection established, the client and server can securely exchange
data. The data is encrypted using the session key, ensuring confidentiality.
Integrity and Authentication: TLS also provides mechanisms for data integrity and server
authentication. Message Authentication Codes (MACs) are used to verify that the data has not been
tampered with during transmission. Digital certificates issued by trusted Certificate Authorities (CAs)
are used to authenticate the identity of the server, ensuring that the client is communicating with the
intended server and not an imposter.
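The MAC idea can be illustrated with Python's standard hmac module; the shared key below is merely a placeholder for the session key negotiated during the handshake.

# Minimal sketch of a message authentication code for integrity checking.
import hmac
import hashlib

shared_session_key = b"negotiated-during-the-handshake"   # placeholder value
message = b"GET /cart HTTP/1.1"

tag = hmac.new(shared_session_key, message, hashlib.sha256).digest()

# The receiver recomputes the tag over the received message and compares in constant time.
received_tag = tag
print("Integrity verified:", hmac.compare_digest(tag, received_tag))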
TLS has evolved over the years, with different versions such as TLS 1.0, TLS 1.1, TLS 1.2, and the most
recent version, TLS 1.3. Newer versions often address security vulnerabilities found in older versions and
introduce improvements in performance and security.
TLS is widely used to secure various network protocols, including HTTPS (secure web browsing), FTPS
(secure file transfer), and secure email protocols like SMTPS and IMAPS. Its widespread adoption has
significantly contributed to the secure transmission of sensitive information over the internet.
When a client (like your web browser) connects to an SSL-secured server (like a shopping website),
it asks the server to identify itself.
The server sends back a copy of its SSL certificate, which includes the server's public key.
The client checks the server's certificate against a list of trusted Certification Authorities (CAs). If the
certificate is valid and trusted, the client creates, encrypts, and sends back a symmetric session key
using the server's public key.
The server decrypts the symmetric session key using its private key. Now, both the server and client
have the same session key for that specific session.
The server sends back an acknowledgment, encrypted with the session key, to start the encrypted
session.
Server and client now encrypt all transmitted data with the session key.
By using this process, SSL provides a way to securely transmit sensitive information, like credit card
numbers or login credentials, over the internet. The use of both PKI and symmetric encryption helps ensure
both the integrity and confidentiality of the transmitted data.
SET (Secure Electronic Transaction): SET is a protocol developed by Visa and Mastercard to enhance
the security of electronic payment transactions. It uses public-key cryptography and digital certificates to
ensure the privacy and integrity of transaction data. SET allows for secure communication between the
merchant, customer, and bank, protecting sensitive information during online transactions. While SET was
once widely used, it has been largely replaced by more modern payment security protocols such as EMV
and tokenization.
Firewalls: Firewalls are security mechanisms, either hardware or software-based, that control the
incoming and outgoing network traffic to protect a system or network from unauthorized access and
malicious activity. Firewalls monitor and filter network traffic based on predefined security rules, allowing
only authorized connections and blocking suspicious or potentially harmful traffic. They act as a barrier
between trusted internal systems and untrusted external networks.
Kerberos: Kerberos is a network authentication protocol that provides secure authentication between
clients and servers in a distributed computing environment. It uses symmetric key cryptography to verify
the identities of users and services, allowing them to securely communicate over a potentially untrusted
network. Kerberos eliminates the need to transmit passwords over the network by using tickets to
authenticate users. It is commonly used in enterprise environments to control access to resources and
protect against unauthorized access.
Securely transmitting credit card details can be achieved with SSL, but once those details are stored on a
server, they become vulnerable to potential hacking attempts. To protect sensitive data, the Payment Card
Industry Data Security Standard (PCI DSS) was developed. This is a set of comprehensive requirements
for enhancing payment account data security and it's globally recognized and adopted.
Firewalls, either software or hardware-based, serve as a primary line of defense, protecting servers,
networks, and individual PCs from outside threats like viruses and hacker attacks.
For internal security and to ensure that only authorized employees have access to certain information,
many companies use authentication protocols like Kerberos. This system uses symmetric key
cryptography to confirm the identities of individuals on a network, helping to maintain the integrity and
confidentiality of information.
Please note that while technologies like firewalls and Kerberos contribute to security, they form just part
of a broader security strategy. It's also essential for organizations to adopt good security practices, like
regularly updating and patching systems, and educating employees about potential security threats.
Transactions
Transactions involving sensitive information, such as credit card details, require robust security measures
to ensure the protection of the data. Let's examine the key stages of these transactions:
Credit card details supplied by the customer: When a customer provides their credit card details to the
merchant or a payment gateway, it is crucial to ensure the secure transmission and storage of this
information. The server's SSL (Secure Sockets Layer) technology plays a significant role in encrypting
the data during transmission between the customer's browser and the server. Additionally, the merchant
or server's digital certificates verify the authenticity and integrity of the communication, assuring the
customer that they are interacting with a trusted entity.
Credit card details passed to the bank for processing: After the merchant or payment gateway receives
the customer's credit card details, they need to securely transmit this information to the bank or payment
processor for processing. The payment gateway employs a range of sophisticated security measures to
protect this data during transit. These measures may include encryption, tokenization, and secure
communication protocols to ensure the confidentiality and integrity of the transaction data.
Exposure, generally speaking, is the maximum amount of damage that will be suffered if some event
occurs. All other things being equal, the risk associated with that event increases as the exposure
increases. For example, a lender is exposed to the risk that a borrower will default. Some exposures
can be pinned down to a specific number, while others are more qualitative - for example, reputational
risks. Exposure can be controlled in a number of ways: for example, it might be reduced by
transferring the risk to another company (such as an insurer), financed (cushioned by capital) or simply
retained.
Volatility, loosely meaning the variability of potential outcomes, is a good proxy for the word "risk" in many
of its applications. This is particularly true for risks that are predominantly dependent on market factors, such
as options pricing. In other applications, it is an important component of the overall risk. Generally, the
greater the volatility, the higher the risk. For example, the number of loans that turn bad is proportionately
higher, on average, in the credit card business than in commercial real estate. Nonetheless, it is real estate
lending that is widely considered to be riskier, because the loss rate is much more volatile - and therefore
harder to cost and manage.
Like exposure, volatility has a specific technical meaning in some areas of risk management. In market
risk, for example, it is synonymous with the standard deviation of returns and can be estimated in a number
of ways.
Probability How likely is it that some risky event will actually occur? The more likely the event is to
occur-in other words, the higher the probability-the greater the risk. The assignment of probabilities to
potential outcomes has been a major contribution to the science of risk management. Certain events,
such as interest rate movements or credit card defaults, are so likely that they need to be planned
for as a matter of course and mitigation strategies should be an integral part of the business' regular
operations. Others, such as a fire at a computer center, are highly improbable, but can have a devastating
impact.
Severity How bad might it get? Whereas exposure is typically defined in terms of the worst that could
possibly happen, severity is the amount of damage that is, in some defined sense, likely to be suffered.
The greater the severity, the higher the risk. Severity is the partner to probability: if we know how likely
an event is to happen, and how much we are likely to suffer as a consequence, we have a pretty good
idea of the risk we are running. But severity is often a function of our other risk factors, such as volatility
- the higher a price might go, the more a company might lose.
Time horizon The longer the duration of an exposure, the higher the risk. For example, extending a 10-
year loan to the same borrower has a much greater probability of default than a one-year loan. Hiring the
same technology company for a five-year outsourcing contract is much riskier than a six-month consulting
project - though not necessarily ten times as risky. The time horizon can also be thought of as a measure
of how long it takes to reverse the effects of a decision or event. The key issue for financial risk exposures
is the liquidity of the positions affected by the decision or event. Positions in highly liquid instruments,
such as US Treasury bonds, can usually be eliminated in a short period of time, while positions in, say,
real estate, are illiquid and take much longer to sell down.
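To make the interplay of probability and severity concrete, the short Python sketch below computes an expected-loss figure for two hypothetical risks; the probabilities and amounts are invented purely for illustration and are not drawn from this text.

# Illustrative only: a simple expected-loss comparison combining probability and severity.
risks = [
    {"name": "Credit card defaults", "probability": 0.05, "severity": 200_000},
    {"name": "Data centre fire", "probability": 0.001, "severity": 5_000_000},
]

for risk in risks:
    expected_loss = risk["probability"] * risk["severity"]
    print(f'{risk["name"]}: expected loss = {expected_loss:,.0f}')

A figure like this only captures probability and severity; exposure, volatility and time horizon still need to be considered qualitatively alongside it.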
There are a few frameworks that should be considered when evaluating information security risk:
COBIT (Control Objectives for Information and Related Technologies): This framework, developed by
ISACA, provides a set of best practices for IT management and IT governance. It helps organizations
align their IT goals with their business goals, while helping to manage the risks associated with IT and
information systems.
ISO 27001: This is an international standard that provides a framework for establishing, implementing,
maintaining, and continually improving an Information Security Management System (ISMS). It helps
organizations identify, manage, and reduce the range of threats to which their information systems are
exposed.
NIST Cybersecurity Framework: Developed by the National Institute of Standards and Technology, this
framework provides a policy for managing cybersecurity risk. It's widely adopted and provides best
practices for identifying, protecting, detecting, responding to, and recovering from cybersecurity
incidents.
Risk IT Framework: Also developed by ISACA, the Risk IT framework complements COBIT from a risk
management perspective. It provides an end-to-end, comprehensive view of all risks related to the use of
IT and a similarly complete treatment of risk management, from the tone and culture at the top, to
operational issues.
FAIR (Factor Analysis of Information Risk): FAIR is a quantitative risk analysis methodology for
cybersecurity and operational risk. It helps organizations understand, analyze, and quantify information
risk in financial terms.
Incorporating these frameworks into an organization's risk management strategy can significantly
improve their ability to evaluate, manage, and mitigate risks associated with their Information Systems.
It's also worth noting that many of these frameworks complement each other and can be used together to
provide a holistic approach to IT and information systems risk management.
Computer Assisted Audit Techniques (CAATs)
CAATs, or Computer Assisted Audit Tools and Techniques, are an evolving field within the audit
profession that involves leveraging technology to automate or enhance the audit process. This can
involve the use of various software packages like SAS, Excel, Access, Crystal Reports, Cognos, and
Business Objects, among others.
At its core, CAATs involve the use of these technological tools to test and analyze large volumes of data,
which can provide auditors with a deeper understanding of an organization's financial situation,
operational efficiency, and internal controls. In more detail, CAATs can be used for several audit tasks:
1. Cost of computer, peripherals and software. It could be either a capital cost for buying a
computer or the cost of renting one.
2. Cost of space such as rent, furniture, etc. In a place like Bombay the cost of space occupied by a
system analyst (5 sq. metres) in prime location could be Rs. 5000 per month!
3. Cost of systems analysts and programmers (salary during the period of assignment).
4. Cost of materials such as stationery, floppy disks, toner, ribbon, etc.
5. Cost of designing and printing new forms, user manuals, documentation, etc.
6. Cost of secretarial services, travel, telephone, etc. An estimate is sometimes made of indirect cost
if it is very high and added to the direct cost.
7. Cost of training analysts and users.
Benefits can be broadly classified as tangible benefits and intangible benefits. Tangible benefits are
directly measurable. These are:
1. Direct savings made by reducing (a) inventories, (b) delays in collecting outstanding
payments, (c) wastage, and (d) cost of production, as well as by increasing the volume and speed of production.
2. Savings due to reduction in human resources or increasing volume of work with the same human
resources.
Intangible benefits are:
1. Better service to customers
2. Superior quality of products
3. Accurate, reliable and up to date strategic, tactical and operational information which ensures
better management and thereby more profits.
The sum of all costs (direct and indirect) is compared with the sum of all savings (tangible and
intangible). It is not always easy to assign a monetary value to intangible benefits; such a value is
usually arrived at through discussion amongst the users of the information system.
If the project is a high cost one, extending over a period of time, then it is necessary to estimate costs
during various phases of development of the system so that they can be budgeted by the management.
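A very simple Python sketch of the cost-benefit comparison described above is given below; every figure is hypothetical, and intangible benefits are deliberately left out because, as noted, they must be valued through discussion with users.

# Hypothetical cost-benefit comparison; all amounts are invented for illustration.
direct_costs = {
    "hardware_and_software": 800_000,
    "space_and_furniture": 120_000,
    "analysts_and_programmers": 600_000,
    "materials_forms_training": 180_000,
}
tangible_benefits = {
    "inventory_reduction": 500_000,
    "faster_collections": 300_000,
    "staff_savings": 450_000,
}

total_cost = sum(direct_costs.values())
total_benefit = sum(tangible_benefits.values())
print("Net tangible benefit:", total_benefit - total_cost)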
Project failures often lead to substantial financial and reputational damage for organizations. For
instance, poorly executed projects can erode shareholder value almost instantly, and multiple studies
confirm persistently high failure rates of IT projects worldwide. Therefore, addressing the reasons for
project failures and strategizing effective management controls is essential.
There is a consensus that one major factor contributing to project failures is the tendency for
management to ignore early warning signs. To counter this issue, several solutions have been proposed,
many of which focus on improving oversight and control mechanisms:
Early Warning System: Establishing a system to detect and alert about potential issues at the
earliest stages can help prevent bigger problems down the line.
Exit Champion: Recognizing the role of individuals who advocate for project termination when
necessary can save resources and shift focus to more viable projects.
Decision Quality: Focusing on the quality of decisions, rather than solely on the outcome, promotes
a more strategic and sustainable approach to project management.
Independent Reviews: Regular and independent reviews of every major project can provide
objective insights and early detection of issues.
Fail-safe Options: Providing for fail-safe options ensures there's a contingency plan in case the
project does not proceed as expected.
Extreme project management is another successful technique. In this model, project managers focus on
dealing with external stakeholders and managing the project, while technical teams handle technology
discussions and solution development. Auditors also play a crucial role in project success. They provide
an independent assessment of the project’s adherence to established plans and controls, and their
recommendations can greatly improve project outcomes.
Thus, auditing is a key component of the control process in IT development projects, contributing to their
overall success by ensuring compliance, promoting transparency, and validating performance against
objectives.
Evaluating the design and implementation of internal controls within business processes.
Verifying the accuracy and completeness of data inputs and outputs.
Assessing the segregation of duties and authorization controls.
Reviewing controls over financial transactions, inventory management, and procurement.
Assessing compliance with regulatory requirements and industry standards.
Assessing the adequacy of controls within system development life cycle processes.
Evaluating the design and implementation of logical and physical security controls.
Reviewing change management and configuration control processes.
Verifying the effectiveness of system testing and quality assurance procedures.
Assessing the adequacy of system documentation and user training.
In addition, in the context of high-risk projects, it is vital for the auditor to pay special attention to the
monitoring controls as discussed in the previous sessions.
Chapter 9
Disaster Recovery and Business Continuity Planning
9.4 IT Outsourcing:
The financial services industry has changed rapidly and dramatically. Advances in technology enable
institutions to provide customers with an array of products, services, and delivery channels. One result
of these changes is that financial institutions increasingly rely on external service providers for a variety
of technology-related services. Generally, the term "outsourcing" is used to describe these types of
arrangements.
The ability to contract for technology services typically enables an institution to offer its customers
enhanced services without the various expenses involved in owning the required technology or
maintaining the human capital required to deploy and operate it. In many situations, outsourcing offers
the institution a cost-effective alternative to in-house capabilities.
Outsourcing, however, does not reduce the fundamental risks associated with information technology or
the business lines that use it. Risks such as loss of funds, loss of competitive advantage, damaged
reputation, improper disclosure of information, and regulatory action remain. Because the functions are
performed by an organization outside the financial institution, the risks may be realized in a different
manner than if the functions were inside the financial institution resulting in the need for controls designed
to monitor such risks.
Financial institutions can outsource many areas of operations, including all or part of any service,
process, or system operation. Examples of information technology (IT) operations frequently outsourced
by institutions and addressed in this booklet include: the origination, processing, and settlement of
payments and financial transactions; information processing related to customer account creation and
maintenance; as well as other information and transaction processing activities that support critical banking
functions, such as loan processing, deposit processing, fiduciary and trading activities; security monitoring
and testing; system development and maintenance; network operations; help desk operations; and call centers.
Management may choose to outsource operations for various reasons. These include:
Gain operational or financial efficiency.
Increase management focus on core business functions.
Refocus limited internal resources on core functions.
Obtain specialized expertise.
Increase availability of services.
Chapter 10
Auditing And Information System:
Rights and obligations: Assets and liabilities included in the financial statements belong to, or are obligations of, the entity.
Valuation or allocation: Assets, liabilities, equity and reserves have been recorded at the correct amounts.
Presentation and disclosure: All items of the financial statements have been properly classified, described and disclosed.
After auditors obtain an understanding of internal controls, they must determine control risk in relation to
each assertion.
1. If auditors assess control risk at less than the maximum level, they go to the next step and test the
controls to evaluate whether they are operating effectively.
2. If auditors assess control risk at the maximum level, they will not test controls at all, and will
carry out detailed substantive procedures instead.
5. Test of controls:
In this step the auditors will test controls to ascertain whether they are operating effectively or
not. Auditors will carry out testing of both application and management controls. This phase
usually begins by focusing on management controls. If testing shows that, contrary to
expectations, management controls are not operating reliably, there may be little point in testing
application controls; in such a case auditors may qualify their opinion or carry out detailed
substantive tests.
6. Reassess controls:
After auditors have completed tests of controls, they again assess the control risk. In light of test
results, they might revise the anticipated control risk upward or downward. In other words, the
auditor may conclude that internal controls are stronger or weaker than anticipated. They may
also conclude that it is worthwhile to perform more tests to further reduce substantive testing.
7. Completion of audit:
In the final phase of the audit, audit procedures are developed based on the auditor's understanding of
the organization and its environment. A substantive audit approach may be used when auditing an
organization's information system. Once audit procedures have been performed and the results have
been evaluated, the auditor will issue either an unqualified or a qualified audit report based on
their findings.
10.3 Evaluation of IS
All around the world there is a huge amount of money invested in IT (e.g. Seddon, 2001). It is therefore
important to evaluate the return on the investment. Evaluation is complicated and consequently there are
a lot of proposals for how to evaluate IT-systems.
Much of the literature on evaluation takes a formal-rational stand and sees evaluation as a largely quantitative
process of calculating the likely cost/benefit on the basis of defined criteria (Walsham,
1993). These approaches are often developed from a management perspective and rely mainly on hard
economic measures. One common criticism of the formal-rational view is
that such evaluation concentrates on technical and economical aspects rather than human and social aspects
(Hirschheim & Smithson, 1988). Further Hirschheim & Smithson maintain that this can have major negative
consequences in terms of decreased user satisfaction but also broader organizational consequences in terms of
system value.
There are also other evaluation approaches such as interpretative (e.g. Remenyi, 1999; Walsham,
1993) and criteria-based. Interpretative approaches often view IT-systems as social systems that have
information technology embedded into them (Goldkuhl & Lyytinen, 1982). Criteria-based approaches
are concerned with identifying and assessing the worth of programme outcomes in the light of initially
specified success criteria (Walsham, 1993). The criteria used are often derived from one specific
perspective or theory.
The evaluation processes described here are based on six generic types of evaluation
(cf. Cronholm & Goldkuhl, 2003, for a fuller description of the six generic evaluation types). These types
are derived from two strategies: how to evaluate and what to evaluate.
Concerning how to evaluate, we distinguish between three types of strategy:
• Goal-based evaluation
• Goal-free evaluation
• Criteria-based evaluation
The differentiation is made in relation to what drives the evaluation. Goal-based evaluation means
that explicit goals from the organisational context drive the evaluation of the IT-system. The basic
strategy of this approach is to measure if predefined goals are fulfilled or not, to what extent and in
what ways. The approach is deductive. What is measured depends on the character of the goals and a
quantitative approach as well as qualitative approach could be used.
The goal-free evaluation means that no such explicit goals are used. Goal-free evaluation is an inductive
and situationally driven strategy. This approach is a more interpretative approach (e.g. Remenyi,
1999; Walsham, 1993). The aim of interpretive evaluation is to gain a deeper understanding of the
IT-system in its use context.
Concerning what to evaluate, one option is to evaluate the "IT-system as such"; the other strategy is
"IT-systems in use". Evaluating IT-systems in use means to
study a use situation where a user interacts with an IT-system. This analysis situation is more complex
than the situation "IT-systems as such" since it also includes a user, but it also has the ability to give a
richer picture.
The data sources for this situation could be interviews with the users and their perceptions and
understanding of the IT-system's quality, observations of users interacting with IT-systems, the IT-
system itself and the possible documentation of the IT-system (see Figure 2). Compared to the
strategy "IT-systems as such" this strategy offers more possible data sources. When high requirements
are placed on data quality the evaluator can choose to combine all the data sources in order to achieve
a high degree of triangulation. If there are fewer resources to hand the evaluator can choose one or two
of the possible data sources.
There are a tremendous number of resources available to educate the end user in applying the above
techniques and in demonstrating the additional ways in which they can be used.
ISACA's Information Systems (IS) Audit and Assurance Standards are principles-based standards that are
mandatory requirements for ISACA members and Certified Information Systems Auditors (CISAs). These
standards provide the minimum level of acceptable performance needed to meet the professional
responsibilities set out in the ISACA Code of Professional Ethics for IS auditors.
ISACA also provides supporting guidance that helps ISACA members and CISAs understand how to
implement the standards. This guidance, which includes guidelines and tools and techniques, is not
mandatory but is highly recommended for effective professional practice.
ISACA regularly updates its body of knowledge to reflect the evolving field of IT governance, risk, audit,
and cybersecurity. This includes periodic updates to the CISA exam, the COBIT framework, and other
professional resources.
The contents of an audit charter typically include the purpose, outlining the fundamental reason for
its existence and the objectives it seeks to achieve. The charter also establishes the responsibilities
of the audit function, clearly defining the tasks and roles it is expected to perform. Moreover, it
describes the authority bestowed upon the audit function, detailing its right to access information,
personnel, and resources necessary for the audit. The charter further clarifies the function's
accountability, stipulating to whom and for what the audit function is answerable.
The audit charter also outlines the exclusions, or areas that are outside the audit's purview, ensuring
clarity and avoiding potential conflicts. Importantly, the charter also sets the standard for the effective
performance of the audit function.
The next step in planning is setting the audit scope and objectives, which define what the audit will
cover and what it aims to achieve. This is followed by developing the audit approach or strategy,
which lays out how the audit will be conducted. The allocation of personnel resources is then
considered, ensuring that individuals with the right skills and experience are assigned to relevant
tasks within the audit.
Finally, the planning phase also addresses engagement logistics, including the timing of audit
activities, required resources, and communication protocols.
In summary, audit planning sets the foundation for a successful and efficient audit, guiding the
entire process and ensuring that all potential areas of risk are thoroughly evaluated.
4. Audit Staffing
The process of audit staffing requires careful consideration and effective management skills, as it
involves aligning the technical and auditing skill requirements with the competencies of the
available staff and the developmental goals of team members. It is crucial to ensure that the audit
team possesses a well-rounded understanding of the audit process and the necessary technical
acumen to handle the unique demands of each audit.
The primary auditor in charge, often referred to as the lead auditor, has a significant role in directing
the individual audit. This individual must possess a comprehensive understanding of the
technology, risks, and auditing techniques unique to the subject matter of the audit. Not only is this
expertise vital to conducting the audit effectively, but it also serves as a resource for guidance and
developmental assistance for staff auditors contributing to the fieldwork.
The lead auditor's role extends beyond just possessing knowledge; they should be adept at
imparting this knowledge to their team. They guide staff auditors in applying audit techniques, help
them understand the unique risks associated with the technology in use, and assist them in
navigating the complexities of the audit process.
In essence, effective audit staffing is about more than just filling roles. It involves strategically
deploying the right personnel with the appropriate skills and providing a learning environment for
continuous development. This approach ensures the successful execution of the audit and
contributes to the ongoing professional growth of the team.
5. Using work of other Experts
IS auditor should consider using the work of other experts in the audit when there are constraints
that could impair the audit work to be performed or potential gains in the quality of the audit.
Examples of these are the knowledge required by the technical nature of the tasks to be
performed, scarce audit resources and limited knowledge of specific areas of audit.
6. Audit Schedule
1. Schedules of individual audits, resources, the start and finish deadlines, and possible overlap
of each audit all must be reconciled when developing a master information system audit
schedule for the information system audit plan.
2. Time allocation for an individual audit should include time for planning, fieldwork, review,
report writing, and post-audit follow-up.
3. Communication of audit plan
1. These divisions are based on the expertise required, geographical divisions, managerial
responsibility divisions, or some method that worked well in prior audit approaches.
2. Evidence of approval by the audit management with their assessment of risks and planned
scope and objectives should be well documented in this section.
3. It is a series of audit steps designed to meet the audit objectives by identifying the process-related
risks, determining the controls that are in place to mitigate those risks, and testing those controls.
Risk Ranking
Risk ranking is a critical part of risk management, typically conducted by evaluating risks based on their
impact and likelihood of occurrence. The impact of a risk should be measured in terms that reflect the
organization's objectives, potentially encompassing diverse areas such as financial implications, people-
related consequences, or reputational damage.
Areas identified as low risk, often denoted as 'Green Areas', are considered to pose minimal threat from
both a business and audit perspective. Given their low risk nature, it is not imperative to review the controls
over these areas in detail annually or on a rotational basis. Nonetheless, the choice not to conduct rotational
reviews is a management decision, made in the context of the organization's overall risk strategy and
available resources.
Medium-risk areas, or 'Orange/Yellow Areas', represent a more substantial risk, but not to an extent that
is likely to result in significant loss or reputational damage should the required controls fail. Given their
increased risk status, the controls over these areas should ideally be reviewed every two to three years on
a rotational basis. This helps ensure that the controls remain effective in managing the risks identified.
High-risk areas, also known as 'Red Areas', are considered to be inherently high risk from both a business
and audit standpoint. These areas bear the potential to cause significant financial loss or reputational
damage.
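A possible way to operationalise this ranking is sketched below in Python; the five-point scales and score thresholds are assumptions made for illustration, not a prescribed methodology.

# Sketch of a simple impact x likelihood ranking mapped to the Green / Yellow / Red areas above.
def risk_rating(impact: int, likelihood: int) -> str:
    """impact and likelihood are scored from 1 (low) to 5 (high)."""
    score = impact * likelihood
    if score >= 15:
        return "Red - review annually"
    if score >= 6:
        return "Yellow - review every two to three years"
    return "Green - rotational review at management's discretion"

print(risk_rating(impact=5, likelihood=4))   # Red - review annually
print(risk_rating(impact=2, likelihood=2))   # Green - rotational review at management's discretion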
Data Analytics
Data analytics is the systematic examination of voluminous datasets to uncover underlying patterns,
discernible trends, and insightful correlations, ultimately driving informed decision-making and strategic
business initiatives. By deploying various techniques and utilizing sophisticated tools, valuable
information can be derived from the raw data, providing meaningful insights that aid in steering business
strategies.
An IS auditor can use data analytics for the following purposes:
1. Determination of the operational effectiveness of the current control environment
2. Determination of the effectiveness of antifraud procedures and controls
3. Identification of business process errors
4. Identification of business process improvements and inefficiencies in the control environment
5. Identification of exceptions or unusual business rules
6. Identification of fraud
7. Identification of areas where poor data quality exists
8. Performance of risk assessment at the planning phase of an audit
The process of collecting and analyzing data involves several key stages. The first is setting the scope,
which includes determining the objectives of the audit or review, defining data needs, and identifying
reliable data sources. Next, the data is identified and obtained, which may involve requesting data from
responsible sources, testing a data sample, and extracting data for usage.
Following data acquisition, it's crucial to validate the data to determine its sufficiency and reliability for
audit tests. This could involve independent validation of balances, reconciliation of detailed data to report
control totals, and validation of various data fields such as numeric, character, and date fields.
Additionally, the time period of the dataset is verified to ensure it aligns with the scope and purpose of the
audit, and all necessary fields are confirmed to be included in the acquired dataset.
Upon validation, tests are executed, often involving the running of scripts and other analytical tests. The
results of these tests are then meticulously documented, including the purpose of testing, data sources, and
conclusions drawn. Finally, these results are reviewed to ensure the testing procedures have been
adequately performed and have undergone review by a qualified individual.
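As a small illustration of the kind of exception testing described above, the Python sketch below uses the third-party pandas package on a hypothetical payments extract; the column names and the authorisation threshold are assumptions for the example.

# Sketch of two exception tests an IS auditor might script over a payments dataset.
import pandas as pd

payments = pd.DataFrame({
    "invoice_no": ["A101", "A102", "A102", "A103"],
    "vendor": ["X", "Y", "Y", "Z"],
    "amount": [12_000, 48_000, 48_000, 260_000],
})

# Exception 1: potential duplicate payments (same invoice, vendor and amount).
duplicates = payments[payments.duplicated(["invoice_no", "vendor", "amount"], keep=False)]

# Exception 2: payments above an assumed authorisation threshold.
over_threshold = payments[payments["amount"] > 250_000]

print(duplicates)
print(over_threshold)

The results of such scripts, together with the data sources and the purpose of each test, would be documented and reviewed as described in the stages above.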
Chapter 11
Ethical and Legal Issues in Information Technology
In Figure 5 above, a certificate validation step is added to what is shown in Figure 3. Only the
fields required for the validation of a certificate are displayed.
Alice wants to make sure that the PuKB included in CertB belongs to Bob and is still valid.
• She checks the Id field and finds BobId, which is Bob's identity. In fact, the only thing she really
knows is that this certificate appears to belong to Bob.
• She then checks the validity fields and finds that the current date and time is within the validity
period. So far the certificate seems to belong to Bob and to be valid.
• The ultimate verification takes place by verifying CertB's signature using the CA's public key
(PuKCA, found in CertCA). If CertB's signature is valid, this means that:
a) Bob's certificate has been signed by the CA in which Alice and Bob have put their trust.
b) Bob's certificate integrity is proven and has not been altered in any way.
c) Bob's identity is assured and the public-key included in the certificate is still valid and belongs
to Bob. Therefore, Alice can encrypt the message and be assured that only Bob will be able to
read it.
Similar steps will be performed by Bob on Alice's certificate before verifying Alice's signature.
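The three checks Alice performs can be sketched in Python using the third-party cryptography package; the certificate file names are hypothetical and an RSA-signed certificate is assumed.

# Sketch of the identity, validity-period and signature checks on CertB.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.x509.oid import NameOID

cert_b = x509.load_pem_x509_certificate(open("bob_cert.pem", "rb").read())    # hypothetical CertB
cert_ca = x509.load_pem_x509_certificate(open("ca_cert.pem", "rb").read())    # hypothetical CertCA

# 1. Check the identity field.
subject = cert_b.subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
print("Certificate issued to:", subject)

# 2. Check the validity period against the current date and time.
now = datetime.datetime.utcnow()
assert cert_b.not_valid_before <= now <= cert_b.not_valid_after, "certificate not currently valid"

# 3. Verify CertB's signature with the CA's public key (raises an exception if it does not match).
cert_ca.public_key().verify(
    cert_b.signature,
    cert_b.tbs_certificate_bytes,
    padding.PKCS1v15(),                 # assumes an RSA-signed certificate
    cert_b.signature_hash_algorithm,
)
print("CertB is signed by the trusted CA and has not been altered.")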
Beyond the mechanics
So far, this section has covered in some detail the public-key mechanics associated with encryption and
digital signatures. The notion of a Certification Authority has been introduced above; the CA is the heart
of a Public-Key Infrastructure (PKI).
CHAPTER 12
Electronic Transaction Act 2063