
Study Material

Chartered Accountancy Professional (CAP)-III

Management Information and Control System

THE INSTITUTE OF CHARTERED ACCOUNTANTS OF NEPAL


Publisher : The Institute of Chartered Accountants of Nepal
ICAN Marg, Satdobato, Lalitpur
P. O. Box: 5289, Kathmandu
Tel: 977-1-5530832, 5530730, Fax: 977-1-5550774
E-mail: [email protected], Website: www.ican.org.np

© The Institute of Chartered Accountants of Nepal

This study material has been prepared by the Institute of Chartered Accountants of Nepal. Permission of the Council of the Institute is essential for reproduction of any portion of this paper.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form, or by any means, electronic, mechanical, photocopying, recording, or otherwise, without prior permission, in writing, from the publisher.

Price : Rs. 450.00

First edition : October, 2011
Second Edition : December, 2015
Third Edition : September, 2019

Designed & Printed at :
3D Printers and Publishers, Balkot, Bhaktapur, Tel: 5211358, 5211064
Print and Art Service, Bagbazar, Kathmandu, Tel: 4244419, 4239154
Preface
This study material on the subject of “MICS” has been exclusively designed and developed for the students of Chartered Accountancy Professional [CAP]-III Level. It aims to develop candidates’ capability in performing and reporting on audit and assurance engagements to increase the reliability of financial and non-financial information; in identifying significant risks and applying risk assessment tools to the engagement; in identifying, gathering and documenting evidence and assessing its sufficiency and appropriateness for an audit engagement; and in providing comprehensive audit and business assurance services, by testing their ability to integrate and apply their knowledge of auditing to realistic problems.

It broadly covers the chapters of Legal Compliance, Practice Management, Audit Process, Audit Strategy and Planning, Audit Techniques and Procedures, Audit Reporting, Special Audits, Corporate Governance and Audit Committee, and Audit under a Computerized Environment. Practical problems are included at the end of each chapter, which students can use for self-assessment of their progress after thoroughly reading the material.

Students are requested to acquaint themselves with the syllabus of the subject and read each topic thoroughly for a clear understanding of the chapter. We believe this material will be of great help to the students of CAP-III. However, they are advised not to rely solely on this material. They should keep themselves updated and refer to the recommended text-books given in the CA Education Scheme and Syllabus along with other relevant materials on the subject.
Last but not least, we acknowledge the efforts of CA. Ramesh Dhital, who has meticulously assisted in preparing and updating this study material, and of CA. Chandra Kanta Bhandari, who has reviewed this study material to bring it into this comprehensive shape.

Due care has been taken to make every chapter simple, comprehensive and relevant for the students. In case students need any clarification, or have feedback or suggestions for further improvement of the material, these may be forwarded to [email protected] of the Institute.

The Institute of Chartered Accountants of Nepal


TABLE OF CONTENTS
CHAPTER 1 1-16
1.0 Organizational Management and Information System
1.1 Introduction to management Information System 1
1.2 Revision of organization and management level 2
1.3 Computer based management information system 5
1.4 Business perspective of information system 5
1.5 IT and Information Security: 6
1.5.1 How do I protect my information? 7
1.5.2 How is the security of the average company setup? 10
1.5.3 How do most security breaches happen? 11
1.6 IT Governance: 12

CHAPTER 2 17-58
2.0 Different Types of Information System Case Study
2.1 Types of Information System according to Organizational Hierarchy 19
2.1.1 Different Kinds of Information Systems 19
2.2 Types of Information System to Support the Organization 21
2.2.1 Transaction Processing System 22
2.2.2 Knowledge Work and Office System 26
2.2.3 Management Information System 27
2.2.4 Decision Support System 30
2.2.5 Executive Support Systems or Executive Information Systems 33
2.2.6 Expert Support System 35
2.3 Sales and marketing information systems 53
2.4 Manufacturing and Production Information Systems 54
2.5 Finance and Accounting Information Systems 56
2.6 Human Resources Information Systems 57

CHAPTER 3 59-172
3.0 Information Technology Strategy and Trends
3.1 Enterprise, Strategy and Vision 61
3.1.1 Internal and External Business Issues 63
3.1.2 Factors Influencing IT 72
3.2 Assess Current and Future IT Environments 73
3.2.1 Current Status of IT 73
3.2.2 IT Risk and Opportunity 150
3.3 IT Strategy Planning 156
CHAPTER 4 173-218
4.0 System Development Life Cycle
4.1 Definition, Stages of System Development 175
4.2 Underlying Principles of System Development 184
4.3 Phases of System Development 189
4.4 Computer Aided System Engineering (CASE) 190
4.5 Models of System Development 192
4.6 Integration and System Testing 198
4.7 System Maintenance: 206
4.8 Project Management Tools: 211

CHAPTER 5 219-266
5.0 System Analysis and Design, Case study
5.1 Strategies for System Analysis and Problem Solving 221
5.2 Concept of Data and Process Modeling 228
5.3 Strategies for System Design 245
5.4 Input Design 250
5.5 Output design 265

CHAPTER 6 267-306
6.0 E-Commerce and Case Study of Inter Organizational Systems
6.1 Introduction to E-Commerce 269
6.2 Features of E-commerce 275
6.3 Categories of e-Commerce 279
6.4 Electronic Payment Processes 281
6.5 Emerging Technologies in IT Business Environment 283

CHAPTER 7 307-318
7.0 E-business Enabling Software Packages Case Study
7.1 Enterprises Resource Planning (ERP) 309
7.2 Supply Chain Management Introduction (SCM): 313
7.3 Sales Force Automation 314
7.4 Customer Relationship Management: 315

CHAPTER 8 319-380
8.0 Information System Security, Protection and Control
8.1 System Vulnerability and Abuse 321
8.2 System Quality Problems: 336
8.3 Creating a Control Environment 339
8.4 Protection of digital network 346
8.5 Evaluation of IS 374
8.6 Development of Control Structure 376
CHAPTER 9 381-398
9. Disaster Recovery and Business Continuity Planning
9.1 Disasters Recovery Planning 383
9.2 Data backup and recovery 385
9.3 High availability planning of servers 394
9.4 IT Outsourcing: 395

CHAPTER 10 399-428
10. Auditing and Information System
10.1 IT audit strategies 405
10.2 Review of DRP/BCP 412
10.3 Evaluation of IS 413
10.4 Standards for IS Audit 417

CHAPTER 11 429-450
11. Ethical and Legal Issues in Information Technology
11.1 Patents, Trademark and Copyright 431
11.2 Significance of IT Law: 433
11.3 Digital Signature and authentication of digitized information 434
11.4 Digital Signature and Verification 440
11.5 Introduction to Digital Data Exchange and digital reporting standard-XML and XBRL 445
11.6 Brief Description of COSO, COBIT, CMM, ITIL, ISO/IEC27001 448

CHAPTER 12 451-471
12. Electronic Transaction Act 2063
12.1 Electronic record and Digital Signature 453
12.2 Dispatch, Receipt and Acknowledgement of 454
12.3 Provisions Relating to Controller and 456
12.5 Provisions Relating to Digital Signature and Certificates 460
12.6 Functions, Duties and Rights of Subscriber 461
12.7 Electronic Record and Government use of Digital Signature 462
12.8 Provisions Relating to Network Service 463
12.9 Offence Relating To Computer 464
12.10 Provisions Relating to Information 467
12.11 Provisions Relating to Information 469
12.12 Miscellaneous 470

Chapter 1

Organizational Management and Information System

1.1 Introduction to Management Information System
Management Information System (MIS) is a strategic and organized approach that combines
technology, people, and processes to provide relevant and timely information to support managerial
decision-making within an organization. It encompasses the collection, storage, processing, and
dissemination of data and information to assist managers in planning, organizing, and controlling
operations effectively.
MIS serves as a bridge between different functional areas of an organization, integrating data from
various sources and transforming it into meaningful information for decision-makers at different
levels. It involves the use of computer systems, software applications, and databases to capture,
analyze, and present data in a structured format that aids in decision-making processes. By leveraging
MIS, organizations can streamline operations, enhance productivity, optimize resource allocation, and
gain insights into market trends and performance metrics. It facilitates the generation of reports,
dashboards, and analytics that support managers in evaluating performance, identifying opportunities,
and addressing challenges in a timely manner.
Ultimately, a well-designed and implemented MIS empowers managers with accurate, reliable, and
up-to-date information to make informed decisions, align strategies with organizational goals, and
achieve competitive advantage in a dynamic business environment.

1.2 Revision of organization and management level


A store owner, for example, besides using different types of information, also performs a set of distinct functions. He works as a purchaser, storekeeper, accountant, and salesman. As long as the business is small, it may be possible for one person to do all the above functions. As the business grows, it becomes essential to delegate responsibility to specialists in each area and make them accountable for their efficient functioning.
In Fig. 1-1 we give the typical functions of managers of each category. The manager of each function is known as a middle-level manager. These middle-level managers report to the chief executive, who is in overall charge of the whole organization. The middle-level managers will, in turn, have many assistants who are responsible for specific day-to-day operations. They are known as line managers.
The management structure is thus a pyramid, as shown in Fig. 1-2. In this pyramid the Chief Executive, being in overall charge of policy, will require strategic information. The middle-level managers require tactical information to perform their functions, and the line managers, being responsible for day-to-day operations, require operational information. Thus, we see that as a manager rises higher in the hierarchy, he receives summarized information that is also less structured. More complex analysis of basic data is required as we go up in the hierarchy.


Fig 1-1 Functional responsibilities in a management system

Fig 1-2 Types of information needed to manage an organization


1.3 Computer based management information system
As organizations continue to grow in size and pursue broader operational goals, the limitations of manual
information systems become increasingly apparent. In today's rapidly evolving landscape of industry and
commerce, computer-based information systems have become indispensable for the efficient functioning
of organizations. Several significant trends highlight the necessity for this transition:
Firstly, the expanding scale of organizations, particularly evident in countries experiencing population
growth and rapid industrial development, calls for the adoption of computer-based systems. These systems
empower managers to process data in various ways, enabling them to analyze organizational performance
from diverse perspectives.
Secondly, the escalating volume of data and the critical need for timely and varied information make
computer-based information processing essential for effective organizational management. With data
becoming more abundant and the requirement to stay competitive, computer-based systems enable
efficient data management and analysis.
Moreover, the dispersion of organizations into multiple branches, along with intensifying market
competition, demands international competitiveness and a favorable balance of payments. To achieve
these objectives, organizations must rely on computer-based information systems to streamline operations
and enhance decision-making.
The broader socio-economic environment also plays a significant role, as societal changes occur at an
unprecedented pace. Increasingly complex governmental regulations and the need to engage with diverse
stakeholders, such as consumer groups, environmental protection organizations, and financial institutions,
require organizations to rely on up-to-date, well-analyzed, and effectively presented information. In this
dynamic context, decision-making guided by reliable information supersedes outdated rules of thumb and hunches.
The convergence of these developments underscores the imperative for organizations to transition from
manual information systems to computer-based solutions. Embracing this shift enables them to efficiently
manage operations, make informed decisions, and adapt to the evolving demands of a rapidly changing
business environment. As organizations continue to evolve, relying solely on manual information systems
proves inadequate. The advent of computer-based information systems has become imperative for efficient
organizational management, aligning decision-making processes with the demands of the modern era.

1.4 Business perspective of information system


In today's business landscape, information systems have seamlessly integrated into our daily operations,
playing a vital role in various functions such as accounting, finance, operations management, marketing,
and human resource management. These systems have become essential components that underpin the
success of businesses and organizations, making them nothing short of business imperatives. As a result,
the study of information systems has become an indispensable field in business administration and
management, warranting its inclusion as a core course in most business majors.
Whether one aspires to be a manager, entrepreneur, or business professional, having a fundamental
understanding of information systems is as crucial as comprehending any other functional area in business.
Information technologies, including Internet-based information systems, have assumed pivotal and
expanding roles in modern businesses. They possess the potential to significantly enhance the efficiency,
effectiveness, and competitive positioning of organizations within rapidly changing marketplaces.
Whether applied to support product development teams, customer support processes, e-commerce
transactions, or any other business activity, information technology empowers businesses to optimize their
processes, strengthen managerial decision-making, and foster collaborative workgroups.
In today's dynamic global environment, businesses must leverage information technologies and systems
to not only survive but also thrive and achieve success. These systems are an indispensable ingredient that
enables organizations to adapt to evolving market conditions, seize opportunities, and sustain a
competitive edge. By embracing information systems, businesses can harness the power of technological
advancements, capitalize on efficient business processes, and enhance their overall performance. Simply
put, information technologies and systems are vital components for achieving business success in today's
dynamic global environment. They empower businesses to navigate and thrive amidst the ever-evolving
market conditions, providing a competitive edge that drives growth and profitability. Recognizing their
significance and incorporating them strategically is crucial for businesses aiming to stay ahead in the
dynamic and interconnected world of modern commerce.

1.5 IT and Information Security:


What is Information Security (IS) about?
IT, or Information Technology, can be defined as the area of knowledge and practice associated with the use, development, and management of computer systems, networks, software, and electronic information. It basically refers to the use of technology to facilitate the storage, retrieval, transmission, and manipulation of data for various purposes.
Information Security is the practice of protecting information by mitigating information risks. It involves the preservation of confidentiality, integrity, and availability of information, as well as other properties such as authenticity, accountability, non-repudiation, and reliability. As per ISO 27000:2018, "Information security management provides a framework for implementing controls that preserve information assets and give confidence to interested parties."
Information security has three fundamental components, known as the security (CIA) triad:
• Confidentiality
Confidentiality is the assurance that information is accessible only to authorized individuals or entities. This component ensures that sensitive data is well protected from unauthorized access, disclosure or exposure. When we shop and pay bills with a credit or debit card, we worry about providing sensitive bank information to unknown vendors. Connecting to such software requires going through an internet browser that has been written by someone unknown. Furthermore, the worldwide reach of the internet, our connectivity to it through our devices, and current cloud computing arrangements add further risk in terms of confidentiality. Therefore, it is important to consider the security issues concerning the confidentiality of the individual or entity.

In order to ensure confidentiality, you have several tools at your disposal, depending on the nature of the information. Encryption is the most commonly thought-of method used to promote confidentiality, but other methods include Access Control Lists (ACLs) that keep people from having access to information, using smart cards plus PIN numbers to prevent unauthorized people from entering your building and looking around, or even explaining to your employees what information about the company they can and cannot disclose over the phone. The secure storage of data is equally important.
• Integrity
Integrity refers to the accuracy, completeness, and trustworthiness of information and systems. It ensures that data remains unaltered and maintains its intended state throughout its lifecycle. Protecting the integrity of information involves implementing mechanisms, such as access controls and digital signatures, to detect and prevent unauthorized modifications. Data at rest, in transit or in processing is, in all forms, susceptible to unauthorized alteration or deletion, and hence the integrity of information needs to be secured.
• Availability
Availability is about being able to access the information system when needed. It ensures that authorized users have timely access to information and services without disruption or delay. Ensuring availability involves implementing measures such as redundancy, fault tolerance, backup systems and disaster recovery plans to mitigate the impact of potential disruptions, including hardware failures, network outages, or natural disasters. Suppose an e-commerce company's primary requirement is availability 24 hours a day, seven days a week; here availability is highly important because any fluctuation in it will directly impact the customers. It is mostly about system uptime for them, but it can also cover subjects such as accidentally denying a user access to a resource they should have, having a user locked out of the front door because the biometric system does not recognize his or her fingerprints (a false negative), or even major issues such as natural disasters and how the company should recover in case of one.
The above-mentioned CIA triad is referred to as the fundamental component of information security, dedicated to protecting information assets. The triad is not tied to any specific cyber security framework, but rather forms part of many information security frameworks such as ISO 27001:2022 (Information Security Management System, ISMS) and NIST SP 800-53 (Security and Privacy Controls for Federal Information Systems and Organizations). Based on the CIA triad and its fundamental context of protecting information assets, a framework is designed to suit the organization in order to create a robust security posture.
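To make the triad concrete, the following short Python sketch illustrates how a confidentiality control (symmetric encryption) and an integrity control (a SHA-256 digest) might be applied to a single record. It is only an illustration under stated assumptions: the third-party "cryptography" package is assumed to be installed, and the record and key handling are purely hypothetical.

import hashlib
from cryptography.fernet import Fernet   # assumed third-party package

# Confidentiality: only holders of the key can read the record
key = Fernet.generate_key()              # in practice, keys live in a key vault, not in code
cipher = Fernet(key)
record = b"Customer: H. Baral, card ending 4242"   # hypothetical sensitive record
ciphertext = cipher.encrypt(record)                 # unreadable without the key

# Integrity: detect unauthorized alteration of the stored ciphertext
digest_at_rest = hashlib.sha256(ciphertext).hexdigest()

# ... later, before the data is used again ...
if hashlib.sha256(ciphertext).hexdigest() != digest_at_rest:
    raise ValueError("Integrity check failed: data was altered")
print(cipher.decrypt(ciphertext).decode())           # access restored for the authorized holder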
1.5.1 How do I protect my information?
Now that you know the goals of security, you may ask: "How do I apply them?" Well, first, you must decide what needs to be protected. In other words, you need to audit all of your assets, from information stored on servers to physical items such as documents, if your duties call for it. This topic will be covered in more depth later, but right now, we'll keep it simple. Since most people reading this are applying the principles here just to information security, we will first focus on information classifications. There are many different ways of classifying information, but many of them follow the same basic principles. According to Microsoft's view of information, there are four types of information:
• Public
• Internal/proprietary
• Confidential
• Secret/ (Top secret)
While it may not be as cool as remembering CIA, the word PICS should help you remember these four data types. But remember, while Microsoft and others use these classifications of data, not all groups follow this as a standard. In other words, it is just not as widespread as the CIA model, and some companies may use their own models.
With some types of data, security is compromised just by exposing the information to others. With other types of data, however, damage is only done if the data is altered or unavailable. Here is a more in-depth explanation of the four major data types:
Public/Unclassified data: This type of data/information, and the underlying information assets associated with it, is generally meant to be used by anonymous individuals or systems that have a credible interest in communicating with the organization. This type of information is disclosed freely to the public. It includes information such as marketing materials (brochures, newsletters), information on public web servers, such as the website that can be accessed by the general public, data shared in public records such as NEPSE disclosures and company registrar filings, and any other data deemed unclassified.
Although these data are publicly available, the risks associated with them are many. This information can be used for malicious purposes, leading to reputational damage, financial loss, and potential legal consequences. Attackers can use it to manipulate individuals within the company by impersonating someone with access to public information. Identity theft is also a common risk, as attackers can impersonate an individual to commit fraudulent activities, while competitors may leverage the released public information to gain insights into the company's operations, strategies, or intellectual property.
Although it may seem that exposure of public data causes no harm, in reality it can have a major impact and can sometimes be disastrous.
Internal data: These are the data and information, and the underlying information assets associated with them, that are meant to be circulated within the organization by internal employees and are therefore prohibited from being circulated outside the organization. This includes data such as organizational policies (including the information security policy and human resource policy), standard operating procedures, instruction manuals, financial records and any other data deemed proprietary by the organization. They are also called proprietary or private data. These data are generally confidential in the sense of organization-specific information that has been generated and stored within the organization's own internal systems. The data range from financial records to customer databases, research and development findings and sensitive business analysis; basically, data that are not publicly accessible. This also includes exclusive proprietary information owned by the company, such as patents, trade secrets, trademarks or confidential knowledge that sets the company apart from its competitors.
This internal information needs to be safeguarded, along with the associated assets, in order to maintain privacy, comply with legal requirements and preserve the confidentiality and integrity of the organization.

Practices such as access control, employee awareness, regular backup of data, data loss prevention, non-disclosure agreements and physical security are some of the many ways to protect internal data. With the advancement of technology, the concept of a dynamic data defense system (DDDS) is an innovative approach that combines artificial intelligence, blockchain technology and biometric authentication to safeguard internal information. Through the use of AI, it analyzes patterns, anomalies and user behavior within the organization's ecosystem. A baseline of normal data usage patterns is established, and any deviation from those patterns triggers an alert for further investigation.
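To illustrate the baseline-and-deviation idea described above, here is a minimal, hypothetical Python sketch (not an implementation of any particular product); the access counts and threshold are assumed values.

import statistics

# Baseline: number of internal documents a user accessed per day over recent weeks
baseline_daily_access = [12, 9, 15, 11, 10, 13, 12, 14, 11, 10]

mean = statistics.mean(baseline_daily_access)
stdev = statistics.stdev(baseline_daily_access)

def is_anomalous(todays_access_count: int, threshold: float = 3.0) -> bool:
    """Flag activity more than `threshold` standard deviations above the baseline mean."""
    return todays_access_count > mean + threshold * stdev

print(is_anomalous(14))    # False: within normal usage
print(is_anomalous(240))   # True: trigger an alert for further investigation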
Confidential data: Those data, information and underlying assets associated with them which are intended to be viewed and/or utilized by selected employees only are regarded as confidential. They are kept private within a close circle of people, and their leakage can lead to anything from minor to major loss for the company. This category of information requires a high level of protection from unauthorized parties in order to ensure its confidentiality, integrity and availability. Imagine you have a personal diary where you write your thoughts, secrets and private information. This diary is your confidential data. You would not want anyone else to read it because it contains personal and sensitive details that are meant for you only. Similarly, in the digital world, confidential data may include things like your social security number, bank account details, passwords, medical records or any other information that, if accessed by the wrong person, could cause harm or be misused. Protecting these data requires strong authentication (such as unique passwords), avoiding clicking on suspicious links and downloading unknown files, staying away from illegal sites, and remaining proactive in safeguarding your confidential data.
In international information security frameworks, Personally Identifiable Information (PII) has been identified as personal information. It means any information a customer provides to obtain a financial product or service, information about a consumer resulting from any transaction involving a financial product or service, or information otherwise obtained about a consumer in connection with providing a financial product or service. It includes:
- Full name
- National identification information such as passport, citizenship, National identity card, voting
information, or other data used at national identity
- Local and national information like driver’s license, vehicle permit documents
- Digital identifiers like username, password etc.
- Facial recognition, fingerprints, iris or other biometric details
- Date and place of birth
- Medical records and data associated with it
- Criminal records
- Financial and accounting records such as banking, mortgage, credit, debit card information
- Professional and occupational information such as salary
- Professional license, certification, designation
- Any other information deemed PII, not listed above.
These are part of confidential information, and all acts related to data privacy strictly govern the proper storage and disposal of PII data. To protect confidential information, we first need to identify and classify the data that falls into this category, and then adopt the necessary control measures such as access control, encryption, data handling policies, employee awareness training, secure communication, regular information system audits and other measures. The protection of confidential information is not a one-time exercise; rather, it is a continuous process that requires evaluation, adaptation, and vigilance. By implementing the above measures and fostering a culture of data security, the risk of unauthorized disclosure of information can be significantly reduced.
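As a minimal, hypothetical sketch of the identify-and-classify step mentioned above, the following Python fragment scans a record for crude PII patterns and marks it confidential when one matches; the patterns and example records are illustrative only, not a complete PII detector.

import re

PII_PATTERNS = {
    "email":       re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # crude payment-card pattern
    "mobile":      re.compile(r"\b9\d{9}\b"),               # e.g. a local mobile-number format
}

def classify(record: str) -> str:
    """Return 'confidential' if any PII pattern matches, otherwise 'internal'."""
    return "confidential" if any(p.search(record) for p in PII_PATTERNS.values()) else "internal"

print(classify("Invoice approved by the procurement team"))               # internal
print(classify("Refund to himal@example.com, card 4111 1111 1111 1111"))  # confidential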
Secret data:
Secret data refers to highly sensitive information that is intentionally kept hidden or undisclosed from unauthorized individuals or entities. If such information were revealed to or accessed by the wrong people, it could cause significant harm, compromise security, or violate confidentiality. It is again intended to be viewed by a selected category of employees only, and it requires an extremely high level of protection from unauthorized parties to ensure its confidentiality, integrity, and availability. This classification supersedes the lower classification levels. Secret data may vary with the context, but it generally includes business trade secrets, undisclosed formulas or algorithms, confidential research findings, or any other information that holds high value and needs to be protected.
Imagine you know the top-secret recipe of a popular food item. That recipe contains unique ingredients and cooking techniques that give it a competitive advantage. The recipe is considered secret data because it provides that competitive advantage: if others find it out, your product can be replicated, which may potentially harm your business. Likewise, in the digital realm, secret data may include encryption keys, access codes, passwords, or other sensitive information such as social security numbers or health records. Unauthorized access to such data can lead to identity theft, financial fraud, or privacy breaches.
To protect secret data, stringent security measures need to be deployed, such as robust encryption techniques, secure storage systems, access controls, and authentication mechanisms. Secret data can be compartmentalized into different layers of access so that only specific individuals, on a need-to-know basis, are granted access to certain segments of the secret data. Using a virtual private network (VPN) for communication and encrypted email services also helps. Proper storage and disposal of data, such as shredding physical documents and using secure data erasure methods that completely and irreversibly destroy digital data, will prevent unauthorized retrieval. A non-disclosure agreement with the person holding such secret data is equally important to bind them to the confidentiality terms. Protecting this data requires a combination of technical, organizational, and legal measures to maintain confidentiality, prevent unauthorized access, and preserve the competitive advantage or security of the information.
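As a minimal sketch of the need-to-know compartmentalization described above (the roles, compartments and data are hypothetical), the following Python fragment grants a user access to a secret segment only if the user holds the matching clearance.

from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    clearances: set[str] = field(default_factory=set)   # compartments the user may access

SECRET_SEGMENTS = {
    "recipe_formula":  "ingredient ratios and cooking technique",
    "encryption_keys": "master key material for the payment system",
}

def read_segment(user: User, segment: str) -> str:
    """Return the segment only if the user holds the matching clearance."""
    if segment not in user.clearances:
        raise PermissionError(f"{user.name} has no need-to-know for '{segment}'")
    return SECRET_SEGMENTS[segment]

chef = User("head_chef", clearances={"recipe_formula"})
print(read_segment(chef, "recipe_formula"))      # allowed
# read_segment(chef, "encryption_keys")          # would raise PermissionError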

1.5.2 How is the security of the average company setup?


It is not surprising that news of security breaches has become commonplace, considering the crude design of most businesses' network security. Many businesses, as well as home users, have a security model that resembles an eggshell: hardened on the outside to keep intruders out, but once breached, there are no internal measures to prevent or limit access. And unlike a chicken's eggshell, companies struggle to create a completely impermeable security infrastructure.
struggle to create a completely impermeable security infrastructure. Whether it's online with networked
computers or offline with physical access, there are always potential vulnerabilities that can be exploited.
The attack surface, which refers to the area of the shell that can be targeted, is an important concept in
security. While having a smaller attack surface is ideal, it's crucial to understand that this is just a small
part of overall security. Many individuals and companies mistakenly believe that the attack surface is the

sole determinant of security. However, relying solely on the strength of the outer shell leaves everything
within the network vulnerable once a breach occurs. It only takes one hole in the shell for an attacker to
gain access, and it's impossible to prevent all vulnerabilities.
The primary goal of IT security is not to prevent damage entirely, but rather to limit it. While preventing
damage is desirable, it's impossible to achieve 100% prevention. Instead, the focus should be on making
it more difficult to cause damage and reducing the frequency of such incidents. In assessing the damage,
it's important to assign a monetary value to the losses. For example, if a home computer is destroyed in a
fire, the damage includes not only the cost of the computer itself but also the value of the data lost.
However, if there is an up-to-date backup of the data that survives the fire, the damage is reduced to the
cost of a new computer and the effort required to restore the backups. While this doesn't lower the
likelihood of such events occurring, it minimizes the resulting damage.
The eggshell rule, although applicable to most companies, also extends to home users. Consider that you
may be browsing the internet from your home network, which includes internal servers with sensitive data,
while being logged in as an administrator. Additionally, you might have programs running as servers with
full administrative rights and no damage control measures.
It's important to clarify that while having a strong outer shell and firewall is essential for security,
additional security measures must be implemented. The focus should not solely be on the outer layer of
protection. It's necessary to establish secondary layers of defense, implement access controls, and ensure
that systems do not blindly trust other systems or subsystems. One of the biggest challenges faced by
companies is ensuring that security procedures are followed consistently. While a great security setup may
have been designed, it's crucial to evaluate whether administrators in remote offices adhere to these
protocols. Do the administrators in your data room know how to respond during a crisis? If you were to
randomly unplug a server in your server room, how quickly would your network recover? These are
essential considerations that need to be addressed. A firewall, for instance, cannot physically reconnect a
server that has been unplugged.
In summary, while a strong outer shell and firewall are important components of a comprehensive security
strategy, they are just the beginning. It's necessary to implement additional layers of security, access
controls, and procedures, and ensure consistent adherence to security protocols. By taking these steps,
organizations can strengthen their overall security posture and mitigate potential risks effectively.

1.5.3 How do most security breaches happen?


Imagine you are a student named Himal who regularly uses a school computer to access the student portal and submit homework and other assignments. One day, you receive an email that appears to be from your school's IT department. The email claims that there is an issue with your account and requests you to click on a link to verify your information.
Unaware of the risk, you click on the link and are redirected to a website that looks identical to your school's login page. Without thinking twice, you enter your username and password. However, what you don't realize is that the email was a clever phishing attempt by a malicious individual. In this scenario, a security breach occurs through a common technique called phishing, one of the most common forms of attack. The attacker pretends to be from the school and creates urgency by claiming an issue with the student's account, prompting immediate action. This kind of email includes a link that redirects the student to a fake website designed to look identical to the school's page.
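As one small defensive habit against the scenario above, the following Python sketch (with hypothetical domain names) checks whether the host in a link actually belongs to the expected school domain before any credentials are entered.

from urllib.parse import urlparse

TRUSTED_DOMAIN = "myschool.edu.np"   # hypothetical legitimate portal domain

def looks_legitimate(url: str) -> bool:
    """True only if the link's host is the trusted domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    return host == TRUSTED_DOMAIN or host.endswith("." + TRUSTED_DOMAIN)

print(looks_legitimate("https://portal.myschool.edu.np/login"))      # True
print(looks_legitimate("https://myschool-edu-np.verify-login.com"))  # False: lookalike host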
Security breaches occur through various avenues, exploiting vulnerabilities in systems, processes or human behavior. Weak or stolen credentials are a common entry point, where attackers guess passwords, employ brute-force techniques, or trick individuals into revealing their login information through methods like phishing or social engineering. Malware and ransomware are another significant threat, as malicious software can infect systems through infected email attachments, downloads, or compromised websites. Unpatched or outdated software leaves systems open to known vulnerabilities that attackers can exploit.
Insider threats pose risks as well, with authorized individuals misusing their privileges or accidentally compromising security controls. Physical security lapses, such as unauthorized access or theft of devices, can also lead to breaches. Third-party risks exist when vendors or partners with access to systems have weak security practices. Social engineering involves manipulating individuals into revealing sensitive information. Distributed Denial of Service (DDoS) attacks overwhelm systems, causing service disruptions and exposing vulnerabilities. Maintaining a strong security posture requires regular updates, employee awareness training, strict access controls, robust authentication, encryption, monitoring systems, and incident response planning.
Also keep in mind that damage is not always caused by external sources. Internally caused damage is a major issue. In companies, most security breaches are internal, caused by users. This applies to things you may not take into consideration. It could be that the VP of the marketing department meant to send an email to a friend, talking about how ugly the new male hire is, only to accidentally send it to the whole office he works in. It could be that the newest security patch for program X accidentally changes some system files, preventing the OS from booting. Even hardware failures, from power outages to blown CPUs, fall under security because of the disruption they cause to operations.

1.6 IT Governance:
IT governance refers to the framework and processes organizations establish to ensure effective
management, decision-making, and control of their information technology (IT) resources. One widely
recognized standard for IT governance is the Control Objectives for Information and Related Technologies (COBIT), developed by the Information Systems Audit and Control Association (ISACA). Simply put, it's putting structure around how
organizations align IT strategy with business strategy, ensuring that companies stay on track to achieve
their strategies and goals, and implementing good ways to measure IT's performance. It makes sure
that all stakeholders' interests are taken into account and that processes provide measurable results. An
IT governance framework should answer some key questions, such as how the IT department is
functioning overall, what key metrics management needs and what return IT is giving back to the business
from the investment it's making.
Every organization, large and small, public and private, needs a way to ensure that the IT function sustains the organization's strategies and objectives. The level of sophistication you apply to IT governance,
however, may vary according to size, industry or applicable regulations. In general, the larger and more
regulated the organization, the more detailed the IT governance structure should be.

Organizations today are subject to many regulations governing data retention, confidential information,
financial accountability and recovery from disasters. COBIT provides a comprehensive framework
that helps organizations align their IT goals with business objectives, while also ensuring the proper
utilization, security, and governance of IT resources. It outlines a set of principles, best practices, and
controls that enable organizations to establish a robust IT governance structure.
According to COBIT, IT governance encompasses several key focus areas, which include:
• Strategic alignment: This means ensuring that the organization's IT initiatives and investments are closely aligned with its strategic objectives and priorities. It requires a thorough knowledge and understanding of the organization's goals and mission, and the ability to convert these into IT strategies. The reason behind this alignment is that IT governance aligned with business objectives ensures that technology is leveraged effectively to drive business growth, innovation, and competitive advantage.
• Value delivery: This is all about maximizing the value derived from IT investments. It involves efficient resource management, effective project execution, and service delivery. By properly managing IT resources, such as infrastructure, applications, data, and human resources, organizations can optimize their performance and deliver value to stakeholders. It means making sure that the IT department does what is necessary to deliver the benefits promised at the beginning of a project or investment. The best way to get a handle on everything is by developing a process to ensure that certain functions are accelerated when the value proposition is growing, and eliminated when the value decreases.
• Resource management: This is about managing IT resources effectively and efficiently. These resources may include the infrastructure, applications, data, and human resources needed to meet business requirements. Proper resource management involves strategic planning, capacity management, talent acquisition and development, and effective utilization of IT assets. By optimizing resource allocation and ensuring resource availability, the organization can enhance operational efficiency and support business objectives.
• Risk management: This means instituting a formal risk framework that puts some rigor around how IT measures, accepts and manages risk that could impact the organization's operations, reputation, or compliance with laws and regulations. This includes identifying and assessing risks, implementing controls to mitigate them, and monitoring their effectiveness. A robust risk management framework ensures that the organization operates in a secure and compliant manner.
• Performance measures: Another crucial aspect of IT governance is performance measurement, which enables organizations to assess the effectiveness and efficiency of their IT processes, services and controls. It involves establishing performance metrics, defining key performance indicators (KPIs), and implementing monitoring mechanisms to measure and track IT performance. It provides insight into the organization's IT capabilities, identifies areas for improvement, and enables informed decision-making for continuous improvement, using both qualitative and quantitative measures; a small illustrative sketch of two such measures follows this list.

How do you actually implement everything involved in IT governance?
Implementing IT governance involves a structured and comprehensive approach. The steps to consider when implementing IT governance are:
a. Define Governance Objectives: Start by clearly defining the governance objectives specific to your
organization. Identify what you aim to achieve through IT governance, such as strategic alignment, risk
mitigation, resource optimization, or performance improvement. These objectives will serve as a
guiding framework throughout the implementation process.

b. Establish Governance Framework: Select a recognized governance framework that aligns with your
organization's goals and industry best practices. Frameworks like COBIT, ISO/IEC 38500, or ITIL
provide guidelines, processes, and controls to structure your governance activities. Customize the
framework to suit your organization's unique needs and context.

c. Assign Roles and Responsibilities: Clearly define the roles and responsibilities of key stakeholders
involved in IT governance. This includes establishing a governance board or committee, appointing
executive sponsors, and assigning specific responsibilities to individuals or teams. Ensure that the
governance structure includes representation from both business and IT functions.

d. Develop Policies and Procedures: Create and document IT governance policies and procedures that
outline the desired behaviors, practices, and controls within your organization. This may include
policies related to information security, risk management, project prioritization, resource allocation,
and performance measurement. Ensure that these policies align with industry standards and regulatory
requirements.

e. Communicate and Educate: Effective communication and education are crucial for successful
implementation. Educate stakeholders about the importance of IT governance, its benefits, and their
roles and responsibilities. Conduct training sessions, workshops, and awareness programs to ensure
everyone understands the governance framework, policies, and procedures.

f. Implement Controls and Processes: Put in place the necessary controls and processes to support IT
governance. This includes establishing mechanisms for strategic planning, project portfolio
management, risk assessment, performance measurement, and resource management. Implement tools
and technologies to automate and streamline governance processes where possible.

g. Monitor and Evaluate: Continuously monitor and evaluate the effectiveness of your IT governance
practices. Regularly review governance policies, processes, and controls to ensure they remain relevant
and aligned with organizational goals. Monitor key performance indicators (KPIs) and metrics to assess
the impact and effectiveness of governance activities. Use this feedback to make improvements and
adjustments as needed.

h. Evolve and Improve: IT governance is an ongoing process that should evolve and adapt with the
changing needs of the organization. Regularly review and update your governance framework, policies,

and practices to address emerging risks, technological advancements, and evolving business
requirements. Foster a culture of continuous improvement and adaptability within the organization.
Here is a quick rundown of the main choices of IT framework:
CoBIT: COBIT, which stands for Control Objectives for Information and Related Technologies, is a framework developed by the Information Systems Audit and Control Association (ISACA) for the governance and management of enterprise IT. It provides a set of guidelines, best practices, control objectives and a supporting toolset for IT governance that is accepted worldwide and is used by auditors and companies as a way of integrating technology to implement controls and meet specific business objectives. The main goal of COBIT is to align business objectives with IT goals and ensure that IT resources are used efficiently and effectively to achieve those objectives. It helps organizations establish a governance structure and define processes to control and manage their IT activities, risks and resources. It is regarded as a valuable tool for IT governance, risk management and compliance. The latest version, released in 2019, is COBIT 2019, which presents six principles: meet stakeholder needs; holistic approach; dynamic governance system; governance distinct from management; tailored to enterprise needs; and end-to-end governance system. CoBIT is well suited to organizations focused on risk management and mitigation.
ITIL: The Information Technology Infrastructure Library (ITIL) is a widely adopted framework for IT service management (ITSM). It provides a set of best practices and guidelines for managing IT services and aligning them with the needs of the business. It offers a systematic approach to managing IT services throughout their lifecycle, from strategy and design to transition, operation, and continual improvement. Its key concepts are organized into sets of management practices covering service strategy, service design, service transition, service operation and continual service improvement. ITIL helps establish proven practices for managing services by improving service quality, reducing costs, increasing customer satisfaction, and enhancing the overall effectiveness and efficiency of IT operations.
COSO: This model for evaluating internal controls comes from the Committee of Sponsoring Organizations of the Treadway Commission, a private-sector initiative established in the United States in 1985. It is dedicated to providing guidance on enterprise risk management, internal control, and fraud deterrence. The primary objective of COSO is to enhance the quality of financial reporting and promote effective internal controls within organizations. The framework provides a comprehensive approach to designing, implementing, and assessing internal controls, which are essential mechanisms for mitigating risks and achieving organizational objectives. The COSO framework is applicable to organizations of all types and sizes, across various industries.
CMMI: The Capability Maturity Model Integration method, created by a group from government, industry and Carnegie Mellon's Software Engineering Institute, is a process improvement framework that helps organizations enhance their capability and maturity in delivering high-quality products and services. The primary objective of CMMI is to provide organizations with a structured approach to improving their processes and achieving predictable and consistent outcomes. It focuses on key areas such as project management, process management, engineering, and support functions. It is based on the concept of maturity levels, which represent the degree of process improvement and organizational capability. There are five maturity levels in CMMI, ranging from Initial (Level 1) to Optimizing (Level 5). Each level builds upon the previous one, with Level 5 representing the highest level of process maturity. CMMI is applicable to various industries and is not limited to software development. It has been widely adopted by organizations globally, in both the private and public sectors, to drive continuous improvement and optimize their processes. Overall, it provides organizations with a systematic approach to improving their processes and achieving higher levels of maturity. It helps organizations enhance their capabilities, streamline operations, and deliver products and services of higher quality, ultimately leading to increased customer satisfaction and competitive advantage.
While each framework has its own specific focus, they can be combined and used in a complementary
manner. For instance, organizations can adopt COBIT’s IT governance principles to establish an
overarching governance structure. They can then utilize ITIL’s best practices to manage IT services
effectively and efficiently. COSO’s guidance can be integrated into the overall governance and risk
management processes to ensure strong internal controls. Finally, CMMI can be used to enhance process capability and maturity. By combining these frameworks, organizations can establish a comprehensive approach to governance, risk management, IT service management, and process improvement. However, it is essential to customize the adoption of these frameworks to fit the specific needs and goals of the organization. Proper integration and alignment of the frameworks can lead to enhanced operational efficiency, better risk management practices, and improved overall organizational performance. In fact,
combining frameworks is fairly common; the PricewaterhouseCoopers study found that in 65 percent of
cases, companies use CoBIT and ITIL together or with lesser-known frameworks. But most importantly,
use a framework that fits your corporate culture and that your stakeholders are familiar with. If the
company is using one of these frameworks and can leverage it to be its IT governance framework, all the
better.


Chapter 2

Different Types of Information System Case Study

2.1 Types of Information System according to Organizational Hierarchy
Because there are different interests, specialties, and levels in an organization, there are different kinds
of systems. No single system can provide all the information an organization needs. Figure 2-1
illustrates one way to depict the kinds of systems found in an organization. In the illustration, the
organization is divided into strategic, management, and operational levels and then is further divided
into functional areas, such as sales and marketing, manufacturing and production, finance and accounting,
and human resources. Information systems are built to serve these different organizational interests.

Fig 2-1 Types of information systems


Organizations can be divided into strategic, management, and operational levels and into four major
functional areas: sales and marketing, manufacturing and production, finance and accounting, and human
resources. Information systems serve each of these levels and functions.
2.1.1 Different Kinds of Information Systems
Three main categories of information systems serve different organizational levels: operational-level
systems, management-level systems, and strategic-level systems.
Operational-level systems support operational managers by keeping track of the elementary activities
and transactions of the organization, such as sales, receipts, cash deposits, payroll, credit decisions, and
the flow of materials in a factory. The principal purpose of systems at this level is to answer
routine questions and to track the flow of transactions through the organization. How many parts are in
inventory? What happened to Mr. William's payment? To answer these kinds of questions, information
generally must be easily available, correct, and accurate. Examples of operational-level systems include

a system to record bank deposits from automatic teller machines or one that tracks the number of hours
worked each day by employees on a factory floor.
Management-level systems mainly refer to the category of information systems that support managerial
decision-making and control within an organization. These systems provide managers at different levels
with the necessary information and tools to plan, organize, and control various aspects of business
operations. They serve the monitoring, controlling, decision-making, and administrative activities of
middle managers. They provide managers with regular reports and summaries of essential information from
various departments and levels of the organization. These systems collect, process, and present data in a
structured format, enabling managers to monitor performance, make informed decisions, and allocate
resources effectively. The principal question addressed by such systems is this: Are things working well?
Management-level systems typically provide periodic reports rather than instant information on
operations. An example is an inventory management system used by a retail company.
Some management-level systems support non-routine decision-making. They tend to focus on less-
structured decisions for which information requirements are not always clear. These systems often answer
"what-if" questions: What would be the impact on production schedules if we were to double sales in the
month of December? What would happen to our return on investment if a factory schedule were delayed
for six months? Answers to these questions frequently require new data from outside the organization,
as well as data from inside that cannot be easily drawn from existing operational-level systems.
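To make this concrete, the short Python sketch below models one such what-if question: what happens to production requirements if December sales double? All figures and the simple production rule are invented for illustration only.

    # Hypothetical what-if sketch: effect of doubling December sales on production runs.
    monthly_sales_forecast = {"Oct": 1200, "Nov": 1500, "Dec": 1800}   # units (assumed figures)
    UNITS_PER_PRODUCTION_RUN = 500

    def production_runs_needed(sales_forecast):
        """Return the number of whole production runs required to meet forecast sales."""
        total_units = sum(sales_forecast.values())
        return -(-total_units // UNITS_PER_PRODUCTION_RUN)   # ceiling division

    baseline = production_runs_needed(monthly_sales_forecast)
    scenario = dict(monthly_sales_forecast, Dec=monthly_sales_forecast["Dec"] * 2)

    print("Baseline production runs:", baseline)
    print("Runs if December sales double:", production_runs_needed(scenario))

A manager could change the scenario figures and rerun the calculation, which is exactly the interactive, assumption-changing style of use such systems support.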
Strategic-level systems help senior management tackle and address strategic issues and long-term
trends, both in the firm and in the external environment. They are dedicated to supporting strategic
decision-making at the highest level of an organization. These systems provide top-level executives and
senior management with the necessary information and tools to formulate long-term strategies, set
organizational goals, and allocate resources effectively. They focus on the broader picture and help shape
the direction of the organization. Their principal concern is matching changes in the external environment
with existing organizational capability. What will employment levels be in the next five years? What are
the long-term industry cost trends, and where does our firm fit in? What products should we be making
in the next five years? An example of a strategic-level system is a Balanced Scorecard (BSC), a
framework that translates an organization's strategy into a set of performance measures across different
perspectives, including financial, customer, internal processes, and learning and growth. It provides a
comprehensive view of the organization's performance and aligns it with strategic goals. Other examples
are scenario analysis tools, market intelligence systems, and Enterprise Resource Planning (ERP) systems with
strategic modules that support long-term planning and decision-making at the top level of an organization.
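As a purely illustrative sketch, the following Python fragment shows how Balanced Scorecard measures might be organized by perspective and compared with targets. The perspective names follow the framework above, but the individual measures, values, and targets are assumptions made for this example only.

    # Hypothetical Balanced Scorecard sketch: each measure is (actual, target, higher_is_better).
    scorecard = {
        "Financial":         {"Return on investment (%)": (12.0, 15.0, True)},
        "Customer":          {"Customer satisfaction (%)": (88.0, 90.0, True)},
        "Internal process":  {"Order fulfilment time (days)": (3.2, 3.0, False)},
        "Learning & growth": {"Training hours per employee": (22.0, 20.0, True)},
    }

    for perspective, measures in scorecard.items():
        for name, (actual, target, higher_is_better) in measures.items():
            met = actual >= target if higher_is_better else actual <= target
            status = "on target" if met else "below target"
            print(f"{perspective}: {name} = {actual} (target {target}) -> {status}")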
Information systems also serve the major business functions, such as sales and marketing,
manufacturing and production, finance and accounting, and human resources. A typical organization has
operational, management, and strategic-level systems for each functional area. For example, the sales
function generally has a sales system on the operational level to record daily sales figures and to
process orders. A management-level system tracks monthly sales figures by sales territory and
reports on territories where sales exceed or fall below the anticipated levels. A system to forecast
sales trends over a five-year period serves the strategic level. We first describe the specific categories of
systems serving each organizational level and their value to the organization. Then we show how
organizations use these systems for each major business function.

There are various other types of systems utilized by organizations to handle information processing,
storage, and distribution. Here is an overview:
Executive Information Systems (EIS): EIS cater to the needs of top-level executives, offering high-level
summaries and key performance indicators (KPIs) for strategic decision-making.
Enterprise Resource Planning (ERP) Systems: ERP systems integrate various business functions and
processes into a unified system. They typically include modules for finance, human resources, supply
chain management, and customer relationship management. ERP systems streamline operations and
improve coordination between departments.
Knowledge Management Systems (KMS): KMS capture, organize, and distribute an organization's
knowledge assets. They facilitate knowledge sharing, collaboration, and learning.
Customer Relationship Management (CRM) Systems: CRM systems manage an organization's
interactions with customers and maintain customer-related data. They support sales, marketing, and
customer service processes, aiming to enhance customer satisfaction and retention.
Geographic Information Systems (GIS): GIS systems capture, store, analyze, and present geographical
data. They are utilized for mapping, spatial analysis, and decision-making in fields like urban planning,
environmental management, and logistics.
Expert Systems: Expert systems replicate the decision-making abilities of human experts in specific
domains. They use rule-based logic and knowledge bases to provide problem-solving advice and solutions.
Collaboration Systems: Collaboration systems facilitate teamwork, communication, and information
sharing in a digital environment. They often incorporate features like document sharing, real-time
messaging, and project management tools.

2.2 Types of Information System to Support the Organization


Figure 2-2 shows the specific types of information systems that correspond to each organizational
level. The organization has executive support systems (ESS) at the strategic level; management
information systems (MIS) and decision-support systems (DSS) at the management level; and transaction
processing systems (TPS) at the operational level. Systems at each level in turn are specialized to serve
each of the major functional areas. Thus, the typical systems found in organizations are designed to assist
workers or managers at each level and in the functions of sales and marketing, manufacturing and
production, finance and accounting, and human resources.


Fig 2-2 The four major types of information systems

This figure provides examples of TPS, DSS, MIS, and ESS, showing the level of the organization
and business function that each supports.
It should be noted that each of the different systems may have components that are used by organizational
levels and groups other than its main constituencies. A secretary may find information on an MIS, or a
middle manager may need to extract data from a TPS.

2.2.1 Transaction Processing System


A Transaction Processing System (TPS) is an information system that is designed to facilitate and manage
routine business transactions. It is responsible for capturing, processing, and storing transactional data
that occurs as a part of an organization's day-to-day operations.
The primary function of a TPS is to ensure the accurate and efficient processing of transactions. It
processes a wide range of transactions such as sales, purchases, payments, and inventory updates. For
example, when a customer makes a purchase, the TPS records the details of the transaction, such as the
item purchased, the quantity, the price, and the customer's information.

TPSs are characterized by their high volume and rapid processing capabilities. They are designed to handle
a large number of transactions simultaneously and in real-time. This is crucial for organizations that deal
with a high volume of transactions on a daily basis, such as retail stores, banks, and e-commerce platforms.
The key components of a TPS include data entry interfaces, databases for storing transaction data, and
processing logic to validate and process transactions. These systems often employ techniques such as data
validation, error checking, and concurrency control to ensure the accuracy and integrity of the data.
TPSs provide several benefits to organizations. First, they enable efficient and accurate recording of
transactions, reducing the chances of errors and inconsistencies. This helps in maintaining reliable and up-
to-date records of business activities. Second, TPSs support operational decision-making by providing
real-time information on transaction status and inventory levels. This allows organizations to monitor their
operations and make timely adjustments as needed.
Furthermore, TPSs serve as the foundation for other information systems within an organization. Data
captured by TPSs serves as input for management information systems, decision support systems, and
other higher-level systems that rely on transactional data for analysis and reporting.
Overall, a Transaction Processing System plays a vital role in ensuring the smooth and efficient handling
of routine business transactions, maintaining accurate records, and supporting operational decision-
making within an organization.

Fig 2-3 Transaction Processing Information Systems


A symbolic representation of a Transaction Processing System (TPS) could be depicted as follows:
[Input] ---> [TPS] ---> [Output]

In this representation, the "Input" represents the various transactions and data that enter the TPS. This
includes information such as sales data, customer details, inventory updates, financial transactions, or any
other relevant data that needs to be processed.
The "TPS" component represents the Transaction Processing System itself. It consists of the necessary
hardware, software, databases, and processing logic to handle the incoming transactions. The TPS
processes and validates the data, performs necessary calculations or updates, and ensures data integrity
and accuracy.
The "Output" represents the processed results or actions generated by the TPS. This could include updated
inventory records, financial reports, transaction confirmations, customer receipts, or any other relevant
outputs resulting from the processing of the transactions.
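A minimal Python sketch of this Input -> TPS -> Output flow is shown below. The field names and validation rules are assumptions chosen for illustration, not a prescribed design.

    # Hypothetical TPS sketch: validate an incoming sales transaction, store it,
    # and return a confirmation (the output).
    transaction_log = []   # stands in for the TPS database

    def process_transaction(txn):
        """Validate a sales transaction and record it; return a confirmation or an error."""
        required_fields = {"item", "quantity", "unit_price", "customer_id"}
        if not required_fields.issubset(txn):
            return {"status": "rejected", "reason": "missing fields"}
        if txn["quantity"] <= 0 or txn["unit_price"] < 0:
            return {"status": "rejected", "reason": "invalid quantity or price"}
        txn["amount"] = txn["quantity"] * txn["unit_price"]
        transaction_log.append(txn)            # a real TPS would write to durable storage
        return {"status": "accepted", "amount": txn["amount"]}

    print(process_transaction({"item": "Notebook", "quantity": 3,
                               "unit_price": 120.0, "customer_id": "C-101"}))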

Fig 2-4 Typical applications of TPS


Figure 2-5 depicts a payroll TPS, which is a typical accounting transaction processing system found in
most firms. A payroll system keeps track of the money paid to employees. The master file is composed
of discrete pieces of information (such as a name, address, or employee number) called data
elements. Data are keyed into the system, updating the data elements. The elements on the master file
are combined in different ways to make up reports of interest to management and government agencies
and to send paychecks to employees. These TPS can generate other report combinations of existing data
elements.

Fig 2-5 A symbolic representation for a payroll TPS


A payroll system is a typical accounting TPS that processes transactions such as employee time cards
and changes in employee salaries and deductions. It keeps track of money paid to employees, withholding
tax, and paychecks.
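The following hypothetical Python sketch illustrates this idea: master-file data elements are combined with keyed-in hours to produce paycheck records. The employee data, pay rates, and flat withholding rate are invented for illustration only.

    # Hypothetical payroll TPS sketch: master-file data elements -> paycheck records.
    master_file = [
        {"employee_no": 101, "name": "A. Sharma", "hourly_rate": 500.0},
        {"employee_no": 102, "name": "B. Karki",  "hourly_rate": 450.0},
    ]
    hours_worked = {101: 160, 102: 172}    # keyed-in transaction data for the pay period
    WITHHOLDING_RATE = 0.15                # assumed flat withholding rate

    def run_payroll():
        """Combine master-file elements with hours worked to produce paycheck records."""
        paychecks = []
        for record in master_file:
            gross = record["hourly_rate"] * hours_worked[record["employee_no"]]
            tax = gross * WITHHOLDING_RATE
            paychecks.append({"employee_no": record["employee_no"], "name": record["name"],
                              "gross": gross, "tax": tax, "net": gross - tax})
        return paychecks

    for cheque in run_payroll():
        print(cheque)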
There are five functional categories of TPS: sales/marketing, manufacturing/production,
finance/accounting, human resources, and other types of systems specific to a particular industry. Within
each of these major functions are subfunctions. For each of these subfunctions (e.g., sales
management) there is a major application system.
Transaction processing systems are often so central to a business that TPS failure for a few hours can lead
to a firm’s demise and perhaps that of other firms linked to it. Imagine what would happen to UPS if its
package tracking system were not working! What would the airlines do without their computerized
reservation systems?
Managers need TPS to monitor the status of internal operations and the firm’s relations with the external
environment. TPS are also major producers of information for the other types of systems. (For
example, the payroll system illustrated here, along with other accounting TPS, supplies data to the
company’s general ledger system, which is responsible for maintaining records of the firm’s income
and expenses and for producing reports such as income statements and balance sheets.)

2.2.2 Knowledge Work and Office System
Knowledge work and office systems are information systems designed to enhance productivity, support
information sharing, and ensure effective communication among team members involved in
knowledge-intensive tasks. Making personal knowledge available to others is the central activity of the
knowledge-creating company. It takes place continuously and at all levels of the organization.
Knowledge management has thus become one of the major strategic uses of information technology.
Many companies are building knowledge management systems (KMS) to manage organizational
learning and business know-how. The goal of such systems is to help knowledge workers create,
organize, and make available important business knowledge, wherever and whenever it's needed in an
organization. This information includes processes, procedures, patents, reference works, formulas,
"best practices," forecasts, and fixes. Internet and intranet Web sites, group-ware, data mining,
knowledge bases, and online discussion groups are some of the key technologies that may be used by a
KMS.
Knowledge management systems also facilitate organizational learning and knowledge creation. They
are designed to provide rapid feedback to knowledge workers, encourage behavior changes by
employees, and significantly improve business performance. As the organizational learning process
continues and its knowledge base expands, the knowledge-creating company works to integrate its
knowledge into its business processes, products, and services. This integration helps the company
become a more innovative and agile provider of high-quality products and customer services, as
well as a formidable competitor in the marketplace.


Fig 2-5 Knowledge management can be viewed as three levels of techniques, technologies, and
systems that promote the collection, organization, access, sharing and use of workplace and
enterprise knowledge

2.2.3 Management Information System


A Management Information System (MIS) is a type of information system that supports decision-making
and management activities within an organization. It collects, processes, stores, and disseminates data and
information to assist managers in planning, organizing, and controlling operations.
The primary objective of an MIS is to provide accurate, timely, and relevant information to support
managerial decision-making at different levels of an organization. It utilizes technology and data
management techniques to transform raw data into meaningful and actionable information that can be used
by managers to make informed decisions.
We define management information systems as the study of information systems in business and
management. The term management information systems (MIS) also designates a specific category of
information systems serving management-level functions. Management information systems (MIS) serve
the management level of the organization, providing managers with reports and often online access to
the organization's current performance and historical records. Typically, MIS are oriented almost
exclusively to internal, not environmental or external, events. MIS primarily serve the functions of
planning, controlling, and decision making at the management level. Generally, they depend on
underlying transaction processing systems for their data.
Data Collection: MIS gathers data from various sources within and outside the organization. This data
can be both internal (e.g., sales figures, inventory levels) and external (e.g., market trends, industry
reports). Data is collected through manual entry, automated systems, and integration with other
information systems.
Data Processing: Once the data is collected, it undergoes processing to convert it into useful information.
Data processing involves activities such as data validation, aggregation, calculation, transformation, and
formatting. This step ensures that the information generated is accurate, consistent, and meaningful.
Data Storage and Management: MIS includes databases and data repositories to store and manage the
collected data. These databases are designed to facilitate efficient data retrieval, updating, and storage.
Data management practices such as data security, backup, and data governance ensure the integrity and
availability of the stored information.
Information Presentation: MIS presents information in a format that is understandable and useful for
managers. This can include reports, dashboards, charts, graphs, and visualizations. The information is
tailored to meet the specific needs and preferences of different users and can be accessed through user-
friendly interfaces.
Decision Support: MIS provides decision support capabilities to managers by offering tools for data
analysis, modeling, and forecasting. These features enable managers to explore different scenarios,
identify trends, analyze performance, and evaluate potential outcomes before making decisions.
Integration and Connectivity: MIS integrates with other information systems within an organization,
such as transaction processing systems, supply chain systems, or customer relationship management
systems. This integration ensures a seamless flow of data and information across different departments
and functions.
Security and Control: MIS incorporates security measures to protect sensitive data and ensure data
privacy. Access controls, encryption, authentication, and audit trails are implemented to safeguard the
information from unauthorized access and potential threats.
The benefits of a Management Information System include improved decision-making, enhanced
operational efficiency, better resource allocation, increased productivity, and improved communication
and collaboration within the organization. Management Information System plays a crucial role in
providing managers with the necessary information and insights to make informed decisions and
effectively manage the operations of an organization.


Fig 2-6 How management information systems obtain their data from the organization's TPS

In the system illustrated by this diagram, three TPS supply summarized transaction data to the MIS
reporting system at the end of the time period. Managers gain access to the organizational data through
the MIS, which provides them with the appropriate reports.
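A small Python sketch of that flow is given below: raw transaction records, as they might come from a sales TPS, are summarized into the kind of periodic figures an MIS report presents. The records and the grouping field are assumed for illustration.

    # Hypothetical MIS reporting sketch: summarize raw TPS transactions by region.
    from collections import defaultdict

    tps_transactions = [                    # raw data as it might come from a sales TPS
        {"region": "East", "amount": 12000},
        {"region": "West", "amount": 18000},
        {"region": "East", "amount": 9500},
    ]

    def sales_summary_report(transactions):
        """Aggregate transaction amounts into the kind of summary an MIS report presents."""
        totals = defaultdict(float)
        for txn in transactions:
            totals[txn["region"]] += txn["amount"]
        return dict(totals)

    print(sales_summary_report(tps_transactions))   # e.g. {'East': 21500.0, 'West': 18000.0}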

This report showing summarized annual sales data was produced by the MIS in Figure 2-6.
Here is another report which shows monthly portfolio performance data produced by MIS.

Fig 2-7 A sample MIS report


MIS usually serve managers primarily interested in weekly, monthly, and yearly results, although
some MIS enable managers to drill down to see daily or hourly data if required. MIS generally provide
answers to routine questions that have been specified in advance and have a predefined procedure for
answering them. For instance, MIS reports might list the total pounds of lettuce used this quarter by a
fast-food chain or, as illustrated in Figure 2-7, compare total annual sales figures for specific products
to planned targets. These systems are generally not flexible and have little analytical
capability. Most MIS use simple routines such as summaries and comparisons, as opposed to
sophisticated mathematical models or statistical techniques.

2.2.4 Decision Support System


A decision-support system (DSS) is a computer-based information system that supports individuals,
groups, or organizations in making decisions and solving problems. It makes use of data, models, and
analytical techniques to provide relevant information and insights for decision making. DSS also serve
the management level of the organization and are used across a variety of industries such as business,
healthcare, finance, and logistics. DSS help managers make decisions that are unique, rapidly changing,
and not easily specified in advance. They address problems where the procedure for arriving at a solution may

not be fully predefined in advance. Although DSS use internal information from TPS and MIS, they
often bring in information from external sources, such as current stock prices or product prices of
competitors.
Clearly, by design, DSS have more analytical power than other systems. They use a variety of models to
analyze data, or they condense large amounts of data into a form in which they can be analyzed by
decision makers. DSS are designed so that users can work with them directly; these systems explicitly
include user-friendly software. DSS are interactive; the user can change assumptions, ask new
questions, and include new data.
An interesting, small, but powerful DSS is the voyage-estimating system of a subsidiary of a large
American metals company that exists primarily to carry bulk cargoes of coal, oil, ores, and finished
products for its parent company. The firm owns some vessels, charters others, and bids for shipping
contracts in the open market to carry general cargo. A voyage-estimating system calculates financial and
technical voyage details. Financial calculations include ship/time costs (fuel, labor, capital), freight rates
for various types of cargo, and port expenses. Technical details include a myriad of factors, such as ship
cargo capacity, speed, port distances, fuel and water consumption, and loading patterns (location of cargo
for different ports).
The system can answer questions such as the following: Given a customer delivery schedule and an
offered freight rate, which vessel should be assigned at what rate to maximize profits? What is the
optimal speed at which a particular vessel can optimize its profit and still meet its delivery schedule?
What is the optimal loading pattern for a ship bound for the U.S. West Coast from Malaysia? Figure 2-8
illustrates the DSS built for this company. The system operates on a powerful desktop personal computer,
providing a system of menus that makes it easy for users to enter data or obtain information.
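A greatly simplified sketch of the kind of model such a DSS evaluates appears below. The cost structure, rates, and vessel data are assumptions invented for illustration; they are not the company's actual model.

    # Hypothetical voyage-estimating sketch: compare candidate vessels on estimated profit.
    def voyage_profit(freight_rate, cargo_tons, distance_nm, speed_knots,
                      daily_cost, fuel_cost_per_day, port_expenses):
        """Estimate the profit of one voyage under simple assumptions."""
        days_at_sea = distance_nm / (speed_knots * 24)
        revenue = freight_rate * cargo_tons
        cost = days_at_sea * (daily_cost + fuel_cost_per_day) + port_expenses
        return revenue - cost

    vessels = {
        "Vessel A": {"cargo_tons": 50000, "speed_knots": 14,
                     "daily_cost": 9000, "fuel_cost_per_day": 15000},
        "Vessel B": {"cargo_tons": 65000, "speed_knots": 12,
                     "daily_cost": 11000, "fuel_cost_per_day": 13000},
    }

    for name, vessel in vessels.items():
        profit = voyage_profit(freight_rate=22.0, distance_nm=9000,
                               port_expenses=120000, **vessel)
        print(f"{name}: estimated voyage profit {profit:,.0f}")

Changing the freight rate, speed, or port expenses and rerunning the comparison is the "what-if" style of use that makes such a system a decision-support tool rather than a fixed report.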

Fig 2-8 Voyage-estimating decision-support system

This DSS operates on a powerful PC. It is used daily by managers who must develop bids on shipping
contracts.
This voyage-estimating DSS draws heavily on analytical models. Other types of DSS are less model-
driven, focusing instead on extracting useful information to support decision making from massive
quantities of data. For example, Intrawest, the largest ski operator in North America, collects and stores
vast amounts of customer data from its Web site, call center, lodging reservations, ski schools, and ski
equipment rental stores. It uses special software to analyze these data to determine the value, revenue
potential, and loyalty of each customer so managers can make better decisions on how to target their
marketing programs. The system segments customers into seven categories based on needs, attitudes,
and behaviors, ranging from "passionate experts" to "value-minded family vacationers." The company
then e-mails video clips that would appeal to each segment to encourage more visits to its resorts.
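A toy Python sketch of such data-driven segmentation is shown below. The thresholds and segment labels are invented; a real system's categories would be derived from far richer behavioral data.

    # Hypothetical customer-segmentation sketch for a data-driven DSS.
    customers = [
        {"id": 1, "visits_per_year": 12, "avg_spend": 900.0},
        {"id": 2, "visits_per_year": 2,  "avg_spend": 250.0},
        {"id": 3, "visits_per_year": 5,  "avg_spend": 400.0},
    ]

    def segment(customer):
        """Assign a simple marketing segment based on visit frequency and spend."""
        if customer["visits_per_year"] >= 10 and customer["avg_spend"] >= 800:
            return "passionate expert"
        if customer["avg_spend"] <= 300:
            return "value-minded vacationer"
        return "occasional visitor"

    for c in customers:
        print(c["id"], "->", segment(c))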

Fig 2-9 Data Analysis decision-support system

The benefits of a Decision Support System include improved decision quality, increased efficiency,
reduced uncertainty, better resource allocation, and enhanced strategic planning. By providing timely and
relevant insights, DSS empowers decision-makers to make more informed and effective decisions.
Decision Support System is a valuable tool that leverages data, models, and analytical techniques to assist
decision-makers in solving complex problems, evaluating alternatives, and making informed decisions.
It enhances decision-making capabilities, fosters collaboration, and drives organizational success.
A DSS for data analysis empowers decision-makers with the tools and capabilities to analyze data, uncover
patterns and trends, and make informed decisions. By leveraging data analysis techniques, statistical
methods, and advanced visualization, a DSS for data analysis enables organizations to harness the power
of data to drive strategic decision-making and achieve better business outcomes.

Sometimes you'll hear DSS systems referred to as business intelligence systems because they focus on
helping users make better business decisions.
2.2.5 Executive Support Systems or Executive Information Systems
Executive Support Systems (ESS) or Executive Information Systems (EIS) are specialized information
systems designed to support the strategic decision-making needs of top-level executives within an
organization. These systems provide high-level summaries, reports, and analyses of data to aid executives
in monitoring organizational performance, identifying trends, and making strategic decisions.
Strategic Decision Support: ESS/EIS focus on providing executives with information and insights to
support strategic decision-making. They provide a consolidated view of critical data from various sources,
including internal systems, external databases, and market intelligence. This information helps executives
evaluate the current state of the organization, identify emerging opportunities or challenges, and make
informed decisions to drive the organization's strategy.
Customized Dashboards and Reports: ESS/EIS present information in customized dashboards and reports
tailored to the specific needs of executives. These dashboards offer a graphical representation of key
performance indicators (KPIs), financial metrics, market trends, and other relevant data. Executives can
monitor the overall health of the organization, track progress towards strategic goals, and assess the
performance of different business units or departments.
Real-Time Data and Analytics: ESS/EIS provide access to real-time or near-real-time data, allowing
executives to stay updated with the latest information. Real-time data feeds enable executives to respond
quickly to changing market conditions, customer demands, or internal issues. The systems often
incorporate advanced analytics capabilities, such as data visualization, predictive modeling, and scenario
analysis, to support executives in analyzing trends, predicting outcomes, and evaluating strategic options.
Drill-Down and Drill-Up Capabilities: ESS/EIS offer drill-down and drill-up capabilities, allowing
executives to dive deeper into specific areas of interest or zoom out for a broader perspective. Executives
can explore underlying details, access supporting documentation, and analyze data at different levels of
granularity. This flexibility enables executives to investigate specific performance metrics, identify root
causes, and gain a comprehensive understanding of the factors influencing organizational performance.
Integration with External Data Sources: ESS/EIS often integrate external data sources, such as market
research reports, industry benchmarks, and economic indicators. This integration provides executives with
a broader context for decision-making by incorporating external market insights and industry trends. The
systems may also include competitive intelligence to help executives evaluate the organization's position
in the market and make strategic adjustments accordingly.
Collaboration and Communication: ESS/EIS support collaboration and communication among executives
and key stakeholders. They provide features for sharing reports, annotations, and comments, facilitating
discussions and collaboration on strategic initiatives. This enables executives to align their decisions, share
insights, and engage in informed discussions to drive consensus and strategic alignment within the
organization.
Security and Access Control: ESS/EIS incorporate robust security measures to protect sensitive data and
ensure appropriate access controls. Executives often deal with confidential or sensitive information, and

these systems employ encryption, authentication, and access controls to safeguard data integrity and
privacy.
Further, senior managers use executive support systems (ESS) to help them make decisions. ESS
serve the strategic level of the organization. They address non-routine decisions requiring judgment,
evaluation, and insight because there is no agreed-on procedure for arriving at a solution.
ESS are designed to incorporate data about external events, such as new tax laws or competitors, but they
also draw summarized information from internal MIS and DSS. They filter, compress, and track critical
data, displaying the data of greatest importance to senior managers. For example, the CEO of Leiner
Health Products, the largest manufacturer of private-label vitamins and supplements in the United States,
has an ESS that provides on his desktop a minute-to-minute view of the firm's financial performance
as measured by working capital, accounts receivable, accounts payable, cash flow, and inventory.
ESS employs the most advanced graphics software and can present graphs and data from many
sources. Often the information is delivered to senior executives through a portal, which uses a Web
interface to present integrated personalized business content from a variety of sources.
Unlike the other types of information systems, ESS is not designed primarily to solve specific problems.
Instead, ESS provides a generalized computing and communications capacity that can be applied to a
changing array of problems. Although many DSS are designed to be highly analytical, ESS tends to make
less use of analytical models.
Executive Support Systems or Executive Information Systems empower top-level executives with the
information and tools necessary for effective strategic decision-making. By providing timely and relevant
insights, these systems help executives steer the organization toward its strategic objectives, respond to
market dynamics, and gain a competitive edge.
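The drill-down idea described above can be sketched in a few lines of Python. The business units and revenue figures are illustrative assumptions; the point is only that a top-level KPI can be expanded into the detail behind it.

    # Hypothetical ESS drill-down sketch: a firm-level KPI backed by business-unit detail.
    revenue_by_unit = {"Retail": 4.2, "Wholesale": 2.9, "Online": 1.6}   # assumed figures

    def firm_revenue():
        """Top-level KPI shown on the executive dashboard."""
        return sum(revenue_by_unit.values())

    def drill_down():
        """Underlying detail an executive can expand when the KPI looks off-track."""
        return sorted(revenue_by_unit.items(), key=lambda item: item[1], reverse=True)

    print("Firm revenue:", round(firm_revenue(), 1))
    for unit, value in drill_down():
        print(f"  {unit}: {value}")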

Fig 2-9 Model of a typical executive support system

This system pools data from diverse internal and external sources and makes them available to executives
in an easy-to-use form.
2.2.6 Expert Support System
Artificial intelligence
Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are
programmed to think and learn like humans. It involves the development of computer systems capable
of performing tasks that typically require human intelligence, such as visual perception, speech
recognition, decision-making, and problem-solving.
AI can be categorized into two types: Narrow AI and General AI. Narrow AI, also known as Weak AI,
is designed to perform specific tasks within a limited domain. Examples of narrow AI include virtual
personal assistants like Siri and Alexa, recommendation systems, and image recognition software.
On the other hand, General AI, also known as Strong AI or AGI (Artificial General Intelligence), refers
to AI systems that possess human-like intelligence and can understand, learn, and apply knowledge
across various domains. While significant progress has been made in narrow AI, achieving General AI
remains an ongoing area of research.
Artificial intelligence is the future. AI refers to the development of computer systems capable of
performing work that would otherwise require human intelligence. AI technologies are being used in
a variety of ways to improve the decision support provided to managers and business professionals in
many companies. AI-enabled applications are at work in information distribution and retrieval,
database mining, product design, manufacturing, inspection, training, user support, surgical planning,
resource scheduling, and resource management. Indeed, for anyone who schedules, plans, allocates
resources, designs new products, uses the Internet, develops software, is responsible for product quality,
works as an investment professional, heads or uses IT, or operates in any of a score of
other capacities and arenas, AI technologies may already be in place and providing competitive advantage.
An Overview of Artificial Intelligence
Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed
to think, learn, and perform tasks autonomously. AI encompasses a wide range of techniques, algorithms,
and approaches aimed at replicating or augmenting human cognitive abilities. Artificial intelligence has
the potential to revolutionize numerous industries, enhance productivity, and solve complex problems.
Ongoing research and development in AI continue to push the boundaries, bringing us closer to creating
intelligent machines that can assist and augment human capabilities in various domains.
Artificial intelligence (AI) is a field of science and technology based on disciplines such as computer
science, biology, psychology, linguistics, mathematics, and engineering. The goal of AI is to develop
computers that can simulate the ability to think, as well as see, hear, walk, and feel. A major thrust of
artificial intelligence is the simulation of computer functions normally associated with human intelligence,
such as reasoning, learning, and problem solving, as summarized in Figure 2-10.


Fig 2-10 Some of the attributes of intelligent behavior. AI is attempting to duplicate these
capabilities in computer-based systems.
Debate has raged about artificial intelligence since serious work in the field began, and several
technological, moral, and philosophical questions about the possibility of developing intelligent,
thinking machines have been raised. For example, British AI pioneer Alan Turing in 1950 proposed a test
to determine whether machines could think. According to the Turing test, a computer
could demonstrate intelligence if a human interviewer, conversing with an unseen human and an unseen
computer, could not tell which was which. Although much work has been done in many of the subgroups
that fall under the AI umbrella, critics believe that no computer can truly pass the Turing test. They
claim that it is just not possible to develop intelligence to impart true humanlike capabilities to computers,
but progress continues. Only time will tell whether we will achieve the ambitious goals of artificial
intelligence and equal the popular images found in science fiction.
One derivative of the Turing test is the Reverse Turing Test, also known as the "AI Judge" or "AI
Evaluator" test. In this variation, instead of determining if a machine can convincingly mimic human
intelligence, the goal is to identify if a human evaluator can correctly distinguish between interactions with
a machine and interactions with another human. The Reverse Turing Test aims to assess the machine's
ability to exhibit intelligent behavior to the extent that it becomes indistinguishable from a human
counterpart.

During the test, the machine and the human evaluator engage in conversations through a computer
interface. The evaluator is unaware of whether they are interacting with a machine or a human. The
machine's objective is to generate responses or actions that are so convincingly human-like that the
evaluator cannot differentiate between the two.
If the machine successfully deceives the evaluator and is mistaken for a human more often than chance, it
indicates a high level of intelligent behavior and a significant milestone in artificial intelligence research.
The Reverse Turing Test pushes the boundaries of machine intelligence by focusing on the machine's
ability to exhibit human-like behavior rather than solely mimicking human responses.
A related, everyday application of this idea is CAPTCHA (Completely Automated Public Turing test to
tell Computers and Humans Apart). The primary purpose of CAPTCHA is to differentiate between humans
and automated bots by presenting a challenge that is easy for humans to solve but difficult for machines.
Typically, this involves displaying distorted or obscured text, images, or puzzles that users must correctly
identify or solve.
By successfully completing the CAPTCHA challenge, users demonstrate their ability to interpret and
respond to the presented task, which is something that automated bots struggle with due to the complexity
of visual or cognitive recognition. CAPTCHA helps protect websites from spam, fraud, and abuse by
ensuring that only genuine human users can access certain functionalities or submit forms.
While CAPTCHA shares the concept of distinguishing between humans and machines with the Turing
test, it focuses more on practical application rather than evaluating machine intelligence or human-like
behavior. Nonetheless, it leverages the principles of human cognitive abilities to create a task that is
difficult for automated systems to solve, thereby verifying the user's humanity.

Figure 2-11 shows several common examples of CAPTCHA patterns.

Fig 2-11 Examples of typical CAPTCHA patterns that are easily solved by humans but difficult for
computers to solve.

The Domains of Artificial Intelligence


Figure 2-12 illustrates the major domains of AI research and development. Note that AI applications
can be grouped under three major areas-cognitive science, robotics, and natural interfaces-though these
classifications do overlap, and other classifications can be used. Also note that expert systems are
just one of many important AI applications. Let's briefly review each of these major areas of AI and
some of their current technologies.


Fig 2-12 The major application areas of AI.


Cognitive Science
Cognitive science is an interdisciplinary field that focuses on understanding the nature of cognition, which
refers to the mental processes and abilities involved in perception, thinking, learning, memory, language,
decision-making, and problem-solving. It combines insights and methodologies from various disciplines,
including psychology, neuroscience, computer science, linguistics, philosophy, and anthropology.
Cognitive science aims to uncover how the mind works, how information is processed, and how knowledge
is acquired, represented, and used. It seeks to understand both the structure and function of cognitive
processes, as well as the underlying mechanisms that support them.
Cognitive psychology investigates mental processes such as attention, perception, memory, learning, and
problem-solving. It explores how individuals acquire, store, and retrieve information, and how cognitive
processes influence behavior. Neuroscience examines the neural basis of cognition by studying the
structure and function of the brain. It seeks to understand how neural networks and brain regions support
cognitive processes and how they interact to give rise to complex behaviors. Artificial intelligence (AI)
draws upon cognitive science to develop computer systems and algorithms that simulate or replicate
intelligent behavior. AI research focuses on areas such as machine learning, natural language processing,
computer vision, and knowledge representation. Linguistics investigates the structure and processing of
language, studying how language is acquired, represented, and used in communication. It explores the
cognitive mechanisms involved in language production and comprehension. Philosophy of mind examines

fundamental questions about the nature of consciousness, perception, intentionality, and mental
representation. It explores philosophical issues related to cognitive processes and their relationship to the
physical world. Cognitive anthropology studies the role of culture in shaping cognition, including how
cultural beliefs, practices, and artifacts influence cognitive processes. It investigates how cognition is
influenced by social, cultural, and environmental factors. Cognitive modeling involves creating
computational models and simulations to simulate and understand cognitive processes. These models
provide insights into how cognitive abilities and behaviors emerge from the interaction of underlying
cognitive mechanisms.
Cognitive science seeks to integrate knowledge and findings from these diverse fields to gain a holistic
understanding of the human mind and intelligence. It has practical implications in areas such as education,
human-computer interaction, artificial intelligence, clinical psychology, and the development of
interventions for cognitive disorders.
This area of artificial intelligence is based on research in biology, neurology, psychology, mathematics,
and many allied disciplines. It focuses on researching how the human brain works and how humans think and
learn. The results of such research in human information processing are the basis for the development
of a variety of computer-based applications in artificial intelligence. Applications in the cognitive
science area of AI include the development of expert systems and other knowledge-based systems that
add a knowledge base and some reasoning capability to information systems. Also included are adaptive
learning systems that can modify their behaviors on the basis of information they acquire as they operate.
Chess-playing systems are primitive examples of such applications, though many more applications are
being implemented. Fuzzy logic systems can process data that are incomplete or ambiguous, that is, fuzzy
data. Thus, they can solve semistructured problems with incomplete knowledge by developing
approximate inferences and answers, as humans do. Neural network software can learn by processing
sample problems and their solutions. As neural nets start to recognize patterns, they can begin to
program themselves to solve such problems on their own. Genetic algorithm software uses Darwinian
(survival of the fittest), randomizing, and other mathematical functions to simulate evolutionary processes
that can generate increasingly better solutions to problems. In addition, intelligent agents use expert system
and other AI technologies to serve as software surrogates for a variety of end-user applications.
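As a tiny illustration of the "learning by processing sample problems and their solutions" idea mentioned above, the sketch below trains a classic perceptron on an invented dataset (the logical AND function). It is a teaching example only, not a production neural network.

    # Hypothetical perceptron sketch: learn to separate two classes from labelled samples.
    samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]   # logical AND
    weights, bias, rate = [0.0, 0.0], 0.0, 0.1

    def predict(x):
        """Fire (output 1) when the weighted sum of the inputs exceeds zero."""
        return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

    for _ in range(20):                       # repeated passes over the training samples
        for x, target in samples:
            error = target - predict(x)       # adjust weights only when the prediction is wrong
            weights[0] += rate * error * x[0]
            weights[1] += rate * error * x[1]
            bias += rate * error

    print([predict(x) for x, _ in samples])   # expected output: [0, 0, 0, 1]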
Robotics
Robotics is a field that involves the design, development, and operation of robots. It combines knowledge
from various disciplines such as mechanical engineering, electrical engineering, computer science, and
artificial intelligence to create intelligent machines that can perform tasks autonomously or with human
guidance.
AI, engineering, and physiology are the basic disciplines of robotics. This technology produces robot
machines with human-like computer intelligence and computer-controlled humanlike physical
capabilities. This area, thus, includes applications designed to give robots the powers of sight, or visual
perception; touch or tactile capabilities; dexterity, or skill in handling and manipulation; locomotion,
or the physical ability to move over any terrain; and navigation, or the intelligence to find one's way to a
destination.
Robotics involves the design and construction of physical robots. This includes selecting appropriate
materials, components, and actuators to create mechanical systems capable of physical interaction with

the environment. Robot design also considers factors such as size, weight, mobility, and sensory
capabilities based on the intended application. Perception is a crucial aspect of robotics. Robots
need to sense and understand the surrounding environment to make informed decisions and perform tasks.
Sensors such as cameras, lidar, infrared, and tactile sensors are used to capture information about the
environment, including objects, obstacles, and their spatial relationships. Robotics also involves
developing control systems to govern the movement and actions of robots. This includes algorithms for
motion planning, trajectory generation, and feedback control to ensure precise and efficient robot
movements. Actuators such as motors, pneumatic systems, and hydraulics are used to actuate the robot's
physical motions. Localization refers to the ability of a robot to determine its own position
within its environment. Mapping involves creating a representation of the environment. Localization and
mapping techniques, such as simultaneous localization and mapping (SLAM), enable robots to navigate
and operate in unknown or dynamic environments.
Along with that, robotics incorporates principles of artificial intelligence and machine learning to enable
robots to adapt and learn from their experiences. This includes techniques such as reinforcement learning,
where robots can learn through trial and error and receive rewards for desired behaviors. Intelligent robots
can make decisions, recognize patterns, and improve their performance over time. Robotics finds
applications in various industries and sectors. In manufacturing, robots are used for tasks such as assembly,
welding, and material handling to improve efficiency and precision. Robots are also employed in
healthcare for surgical procedures, rehabilitation, and assistance to people with disabilities. In agriculture,
robots are used for crop monitoring, harvesting, and precision farming. Other areas of application include
logistics, exploration, defense, and entertainment. As robots become more integrated into our daily lives,
human-robot interaction becomes crucial. Research in this area focuses on developing intuitive and natural
interfaces for communication and collaboration between humans and robots. This includes speech
recognition, gesture recognition, haptic interfaces, and social interaction capabilities. Robotics is a rapidly
evolving field with ongoing advancements in hardware, software, and algorithms. It holds the promise of
transforming various industries and improving our quality of life by enabling robots to perform complex
tasks, enhance productivity, and assist humans in various domains.
Natural Interfaces
Natural interfaces refer to user interfaces that allow users to interact with technology in a way that closely
resembles natural human communication and behavior. These interfaces aim to bridge the gap between
humans and machines by enabling intuitive and seamless interactions.
The development of natural interfaces is considered a major area of AI applications and is essential to the
natural use of computers by humans. For example, the development of natural languages and speech
recognition are major thrusts of this area of AI. Being able to talk to computers and robots in
conversational human languages and have them "understand" us as we understand each other is a goal of
AI research. This goal involves research and development in linguistics, psychology, computer science,
and other disciplines. Other natural interface research applications include the development of
multisensory devices that use a variety of body movements to operate computers, which is related to the
emerging application area of virtual reality. Virtual reality involves using multisensory human-
computer interfaces that enable human users to experience computer-simulated objects, spaces,
activities, and "worlds" as if they actually exist.

Speech recognition is a natural interface that allows users to interact with technology using spoken
language. It involves converting spoken words into written text or commands that can be understood by
computers. This technology enables users to dictate text, issue voice commands to devices, and interact
with virtual assistants like Siri or Alexa. Gesture recognition enables users to interact with
technology through physical gestures or movements. It involves using cameras or sensors to track and
interpret hand movements, body postures, or facial expressions. Gesture recognition is used in
applications such as gaming, virtual reality, and smart devices to control actions or navigate interfaces
without the need for physical input devices. Touch interfaces have become ubiquitous
in modern technology. They allow users to interact directly with graphical elements on touch-sensitive
screens using their fingers or stylus. Touch interfaces are prevalent in smartphones, tablets, and interactive
kiosks, providing users with intuitive and tactile control over various applications and functions.
Further, Natural Language Processing (NLP) is a field of artificial intelligence that focuses on enabling
computers to understand and respond to human language in a natural way. NLP technologies analyze and
interpret the meaning of text or speech, allowing users to communicate with technology using natural
language. This includes applications like chatbots, language translation, and sentiment analysis. Virtual
reality (VR) and augmented reality (AR) technologies create immersive and interactive experiences by
blending the digital world with the physical world. These interfaces enable users to engage with virtual
environments or overlay digital information onto their real-world surroundings, enhancing natural
interaction through spatial awareness and realistic simulations. Biometric interfaces utilize unique
biological characteristics for authentication and interaction. Biometric technologies include fingerprint
recognition, iris or retinal scanning, facial recognition, and voice recognition. These interfaces offer secure
and personalized interactions, allowing users to access devices or systems based on their distinct biological
traits. Brain-computer interfaces (BCIs) establish a direct communication pathway between the human
brain and computers or external devices. BCIs capture and interpret brain signals, enabling users to control
technology or communicate without physical input. These interfaces have potential applications in
assistive technology, neurorehabilitation, and enhanced communication for individuals with disabilities.
Natural interfaces aim to simplify human-computer interactions by leveraging familiar and intuitive
communication methods. By reducing the learning curve and cognitive effort required to interact with
technology, natural interfaces enhance user experience, accessibility, and engagement. Continued
advancements in technology are expanding the capabilities and potential applications of natural interfaces
in various domains.
Expert systems
One of the most practical and widely implemented applications of artificial intelligence in business
is the development of expert systems and other knowledge-based information systems. A knowledge-based
information system (KBIS) adds a knowledge base to the major components found in other types
of computer-based information systems. An expert system (ES) is a knowledge-based information system
that uses its knowledge about a specific, complex application area to act as an expert consultant to users. Expert
systems provide answers to questions in a very specific problem area by making humanlike inferences
about knowledge contained in a specialized knowledge base. They must also be able to explain their
reasoning process and conclusions to a user, so expert systems can provide decision support to end users
in the form of advice from an expert consultant in a specific problem area.

Components of an Expert System
The components of an expert system include a knowledge base and software modules that perform
inference on the knowledge in the knowledge base and communicate answers to a user's questions. Fig
2-13 illustrates the interrelated components of an expert system. Note the following components:

Knowledge Base- The knowledge base of an expert system contains (1) facts about a specific
subject area (e.g., John is an analyst) and (2) heuristics (rules of thumb) that express the reasoning
procedures of an expert on the subject (e.g., IF John is an analyst, THEN he needs a workstation).
There are many ways that such knowledge is represented in expert systems. Examples are rule-based,
frame-based, object-based, and case-based methods of knowledge representation.
Software Resources- An expert system software package contains an inference engine and other
programs for refining knowledge and communicating with users. The inference engine program
processes the knowledge (such as rules and facts) related to a specific problem. It then makes
associations and inferences resulting in recommended courses of action for a user. User interface
programs for communicating with end users are also needed, including an explanation program
to explain the reasoning process to a user if requested. (A minimal sketch of this inference process appears after this list of components.)
Knowledge acquisition: Knowledge acquisition programs are not part of an expert system but are
software tools for knowledge base development, as are expert system shells, which are used for
developing expert systems. The process of building an expert system involves acquiring and codifying
knowledge from human experts. Knowledge acquisition techniques include interviews, observations,
documentation review, and knowledge elicitation methods. The acquired knowledge is then
organized and represented in a structured format within the knowledge base.
Limitations: Expert systems have certain limitations. They typically excel in well-defined domains
with explicit knowledge and rule-based reasoning. However, they may struggle with complex,
ambiguous, or ill-defined problems that require contextual understanding or creative thinking.
Additionally, expert systems require regular updates and maintenance to keep the knowledge base
up-to-date with advancements in the domain.
Applications: Expert systems find applications in various domains such as medicine, finance,
engineering, troubleshooting, customer support, and decision-making tasks. For example, in
healthcare, expert systems can assist in diagnosing diseases based on symptoms and medical history.
In finance, they can provide recommendations for investment strategies based on market trends and
risk profiles.
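To make the interplay between the knowledge base and the inference engine concrete, the sketch below shows a minimal forward-chaining rule interpreter in Python. It is purely illustrative: the facts, IF-THEN rules, and function names are hypothetical and are not drawn from any particular expert system shell, and a real inference engine would be considerably more sophisticated.

```python
# A minimal sketch of a rule-based expert system: a knowledge base of facts
# and IF-THEN rules, plus a simple forward-chaining inference engine.

facts = {"John is an analyst"}                      # facts about the subject area

rules = [                                           # heuristics (rules of thumb)
    {"if": {"John is an analyst"}, "then": "John needs a workstation"},
    {"if": {"John needs a workstation"}, "then": "Order a workstation for John"},
]

def forward_chain(facts, rules):
    """Fire rules whose conditions are satisfied until nothing new can be inferred."""
    trace = []                                      # record of which rules fired and why
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule["if"] <= facts and rule["then"] not in facts:
                facts.add(rule["then"])
                trace.append((rule["if"], rule["then"]))
                changed = True
    return facts, trace

conclusions, trace = forward_chain(set(facts), rules)
print("Conclusions:", conclusions)
for conditions, conclusion in trace:                # a miniature "explanation program"
    print(f"Because {sorted(conditions)} hold, inferred: {conclusion}")
```

The trace kept by forward_chain plays the role of the explanation program mentioned above: it records which rules fired, so the system can show a user how it reached its advice.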

Fig 2-13 A summary of four ways that knowledge can be represented in an expert system's knowledge base

Fig 2-14 A summary of how knowledge can be represented in an expert system's knowledge base

Expert System Applications


Using an expert system involves an interactive computer-based session in which the solution to a problem
is explored, with the expert system acting as a consultant to an end user. The expert system asks questions
of the user, searches its knowledge base for facts and rules or other knowledge, explains its reasoning
process when asked, and gives expert advice to the user in the subject area being explored.
Expert systems are being used for many different types of applications, and the variety of applications is
expected to continue to increase. You should realize, however, that expert systems typically
accomplish one or more generic uses. As you know, expert systems are being used in many different fields,
including medicine, engineering, the physical sciences, and business. Expert systems are widely used in
healthcare for diagnosis, treatment recommendation, and medical decision support. They analyze patient
symptoms, medical history, and test results to provide accurate and timely diagnoses, suggest treatment
options, and offer guidance to healthcare professionals.
Expert systems assist in financial analysis, investment decision-making, and risk assessment. They analyze
market data, historical trends, and investor profiles to provide personalized investment advice, portfolio
management strategies, and risk evaluation for individuals and financial institutions. Expert systems aid
in process control, quality assurance, and troubleshooting in manufacturing and engineering domains.
They help identify production bottlenecks, optimize manufacturing processes, diagnose equipment
failures, and provide guidance on maintenance and repair procedures. Expert systems support customer
service operations by providing automated assistance and troubleshooting. They offer self-service options,
guiding customers through common issues, resolving problems, and escalating complex cases to human
support agents when necessary. Expert systems assist in IT troubleshooting, system maintenance, and
software support. They help diagnose software and hardware issues, recommend solutions, and provide
step-by-step guidance for resolving technical problems. Expert systems now help diagnose illnesses, search
for minerals, analyze compounds, recommend repairs, and do financial planning. So from a strategic
business standpoint, expert systems can be and are being used to improve every step of the product cycle
of a business, from finding customers to shipping products to providing customer service.

Benefits of Expert Systems


An expert system captures the expertise of an expert or group of experts in a computer-based information
system. Thus, it can outperform a single human expert in many problem situations. That's because an
expert system is faster and more consistent, can have the knowledge of several experts, and does not get
tired or distracted by overwork or stress. Expert systems also help preserve and reproduce the
knowledge of experts. They allow a company to preserve the expertise of an expert before she leaves
the organization. This expertise can then be shared by reproducing the software and knowledge base of
the expert system. It ensures consistency and standardization in decision-making and problem-solving.
They follow predefined rules and guidelines, eliminating variations in human judgment and reducing the
likelihood of errors or biases. This leads to more reliable and predictable outcomes, especially in domains
where consistency and accuracy are critical. By automating complex decision-making processes, expert
systems can significantly improve efficiency and productivity. They can analyze vast amounts of data and
information, perform complex calculations, and generate recommendations or solutions in a fraction of
the time it would take for humans to do the same. This allows organizations to streamline their operations,
save time, and allocate resources more effectively. Also, expert systems enhance decision-making by
providing accurate and timely recommendations based on domain-specific knowledge and reasoning.
They consider various factors, analyze complex relationships, and weigh different options to generate
well-informed decisions. Expert systems can also handle complex scenarios and provide insights that may
not be readily apparent to human decision-makers. Expert systems can be easily scaled and deployed
across different locations or organizations, ensuring consistent access to expertise. They can be accessed
remotely, allowing users to benefit from expert guidance regardless of their physical location. This
scalability and accessibility make expert systems valuable in situations where expert knowledge needs to
be disseminated widely or accessed on demand.

Limitations of Expert Systems


The major limitations of expert systems arise from their limited focus, inability to learn, maintenance and
developmental cost. Expert systems excel only in solving specific types of problems in a limited domain
of knowledge. They fail miserably in solving problems requiring a broad knowledge base and subjective
problem solving. They do well with specific types of operational or analytical tasks but falter at subjective
managerial decision-making.
Expert systems may also be difficult and costly to develop and maintain. The costs of knowledge
engineers, lost expert time, and hardware and software resources may be too high to offset the benefits
expected from some applications. Also, expert systems can't maintain themselves; that is, they can't
learn from experience but instead must be taught new knowledge and modified as new expertise is needed
to match developments in their subject areas.

Although there are practical applications for expert systems, applications have been limited and specific
because, as discussed, expert systems are narrow in their domain of knowledge. An amusing example of
this is the user who used an expert system designed to diagnose skin diseases to conclude that his
rusty old car had likely developed measles. Additionally, once some of the novelty had worn off, most
programmers and developers realized that common expert systems were just more elaborate versions of
the same decision logic used in most computer programs. Today, many of the techniques used to develop
expert systems can now be found in most complex programs without any fuss about them.
Consider, for example, Cutler-Hammer's Bid Manager expert system. With more than 61,000 orders processed electronically in one year at Cutler-Hammer, the expert system unquestionably has proved itself. Plant managers Frank C. Campbell at Sumter and Steven R. Kavanaugh at Fayetteville overflow with praise for the software. It's easy to see why. In the past,
paperwork stifled production flow, but now Bid Manager takes care of even small but significant details.
What's more, says Huber, "Bid Manager has helped us think differently about products." For example,
Cutler-Hammer has standardized its products and models, slimming down the number of steel
enclosure sizes from more than 400 to only 100.
There's no question that the expert system has decisively helped Cutler-Hammer's business. CEO Randy Carson reports that Bid Manager has increased Cutler-Hammer's market share for configured products (motor control centers, control panels, and the like) by 15 percent. He adds that Bid Manager has boosted sales of the larger assemblies by 20 percent, doubling profits, increasing productivity by 35 percent, and reducing quality costs by 26 percent. He concludes, "Bid Manager has transformed Cutler-Hammer into a customer-driven company."

Developing Expert Systems


What types of problems are most suitable to expert system solutions? One way is to identify criteria that
make a problem situation suitable for an expert system.
Figure 2-14 emphasizes that many real-world situations do not fit the suitability criteria for expert
system solutions. Hundreds of rules may be required to capture the assumptions, facts, and reasoning
that are involved in even simple problem situations. For example, a task that might take an expert a
few minutes to accomplish might require an expert system with hundreds of rules and take several months
to develop.
The easiest way to develop an expert system is to use an expert system shell as a developmental tool. An
expert system shell is a software package consisting of an expert system without its kernel, that is, its
knowledge base. This leaves a shell of software (the inference engine and user interface programs) with
generic inference and user interface capabilities. Other development tools (e.g., rule editors, user
interface generators) are added in making the shell a powerful expert system development tool.
Expert system shells are now available as relatively low-cost software packages that help users develop
their own expert systems on microcomputers. They allow trained users to develop the knowledge base
for a specific expert system application. For example, one shell uses a spreadsheet format to help
end users develop IF-THEN rules, automatically generating rules based on examples furnished by a
user. Once a knowledge base is constructed, it is used with the shell's inference engine and user interface
modules as a complete expert system on a specific subject area.


Suitability Criteria For Expert System

• Domain: The domain, or subject area, of the problem is relatively small and limited to a well-defined problem area.
• Expertise: Solutions to the problem require the efforts of an expert. That is, a body of knowledge, techniques, and intuition is needed that only a few people possess.
• Complexity: Solution of the problem is a complex task that requires logical inference processing, which would not be handled as well by conventional information processing.
• Structure: The solution process must be able to cope with ill-structured, uncertain, missing, and conflicting data and a problem situation that changes with the passage of time.
• Availability: An expert exists who is articulate and cooperative, and who has the support of the management and end users involved in the development of the proposed system.

Fig 2-14 Criteria for applications that are suitable for expert system development.
Knowledge Engineering
Knowledge engineering is a discipline within artificial intelligence (AI) that involves acquiring,
representing, organizing, and utilizing knowledge to build intelligent systems, such as expert systems. It
focuses on capturing and formalizing human expertise and domain knowledge in a format that can be
effectively utilized by computer systems.
A knowledge engineer is a professional who works with experts to capture the knowledge (facts and rules
of thumb) they possess. The knowledge engineer then builds the knowledge base (and the rest of the expert
system if necessary), using an iterative, prototyping process until the expert system is acceptable. Thus,
knowledge engineers perform a role similar to that of systems analysts in conventional information
systems development.
Once the decision is made to develop an expert system, a team of one or more domain experts and a
knowledge engineer may be formed. Experts skilled in the use of expert system shells could also
develop their own expert systems. If a shell is used, facts and rules of thumb about a specific domain can be
defined and entered into a knowledge base with the help of a rule editor or other knowledge acquisition
tool. A limited working prototype of the knowledge base is then constructed, tested and evaluated using
the inference engine and user interface programs of the shell. The knowledge engineer and domain
experts can modify the knowledge base, then retest the system and evaluate the results. This process
is repeated until the knowledge base and the shell result in an acceptable expert system.
The process of knowledge engineering typically involves the following steps:
Knowledge Acquisition: This step involves gathering domain-specific knowledge from human experts.
Knowledge engineers interact with subject matter experts through interviews, workshops, observations, or
by studying existing documentation and resources. The goal is to extract relevant knowledge, problem-
solving strategies, rules, and heuristics used by experts in the domain.

Knowledge Representation: Once the knowledge is acquired, it needs to be structured and represented
in a form that can be understood and processed by the computer system. Different representation
techniques can be used, such as rule-based systems, semantic networks, frames, or ontologies. The chosen
representation method should capture the relationships, dependencies, and reasoning mechanisms of the
domain knowledge.
Knowledge Organization and Storage: The acquired knowledge needs to be organized and stored in a
knowledge base or knowledge repository. This involves structuring the knowledge into a logical and
coherent format, ensuring efficient retrieval and utilization by the intelligent system. The knowledge base
may contain facts, rules, procedures, constraints, and other relevant information required for decision-
making and problem-solving.
Knowledge Validation and Verification: It is essential to validate the acquired knowledge to ensure its
accuracy, consistency, and relevance. Knowledge engineers work closely with domain experts to verify
the knowledge base against established standards, guidelines, and real-world examples. This validation
process helps identify any gaps, conflicts, or errors in the knowledge and allows for refinement and
improvement.
Knowledge Integration and Reasoning: The knowledge base is integrated into the intelligent system,
and the reasoning mechanisms are implemented to enable the system to make inferences, solve problems,
and provide recommendations. This involves designing and developing the inference engine, which applies
the domain knowledge and reasoning rules to generate appropriate responses or actions based on user
queries or inputs.
Knowledge Maintenance and Evolution: Knowledge engineering is an iterative process that requires
continuous maintenance and evolution of the knowledge base. The system needs to be regularly updated
with new knowledge, rules, and best practices as the domain evolves. Feedback from users and domain
experts is crucial in identifying areas for improvement, addressing limitations, and ensuring the knowledge
remains relevant and up-to-date.
Knowledge engineering plays a vital role in building intelligent systems, including expert systems, where
the knowledge of human experts is captured and utilized to provide valuable insights, recommendations,
and decision-making support. By effectively acquiring, representing, and utilizing knowledge, knowledge
engineers bridge the gap between human expertise and machine intelligence, enabling the development of
powerful and domain-specific intelligent systems.
Neural networks
Neural networks, also known as artificial neural networks (ANNs), are a key component of artificial
intelligence (AI) that mimic the structure and functioning of the human brain. They are composed of
interconnected nodes, called neurons, which work together to process and analyze complex patterns and
relationships in data.
Neural networks are computing systems modeled after the brain's mesh-like network of interconnected processing elements, called neurons. The three main layers of a neural network are the input layer, hidden layer(s), and output layer. Of course, artificial neural networks are far simpler in architecture (the human brain is estimated to have more than 100 billion neurons!). Like the brain, however, the interconnected processors in a neural network operate in parallel and interact dynamically. Each neuron receives input signals, applies
a specific activation function to the input, and produces an output signal. Activation functions introduce
non-linearity and help capture complex patterns in the data. This interaction enables the network to "learn"
from data it processes. That is, it learns to recognize patterns and relationship in these data. The more data
examples it receives as input, the better it can learn to duplicate the results of the examples it
processes. During this training, the network adjusts the weights and biases to minimize the difference
between its predicted outputs and the desired outputs. This process typically involves an optimization
algorithm, such as gradient descent, which iteratively updates the weights based on the computed errors.
Thus, the neural network will change the strengths of the interconnections between the processing
elements in response to changing patterns in the data it receives and the results that occur. Deep neural
networks have the ability to learn hierarchical representations of data, enabling them to extract high-level
features and solve complex tasks, such as image recognition, natural language processing, and speech
recognition. Neural networks have found applications in various domains, including image and speech
recognition, natural language processing, sentiment analysis, recommendation systems, autonomous
vehicles, financial forecasting, and medical diagnosis, among others. Their ability to learn from large
amounts of data and uncover intricate patterns makes them powerful tools for solving complex problems.
Neural networks have revolutionized the field of AI and are at the core of many advanced machine learning
algorithms. Their ability to model and learn from data has enabled significant advancements in tasks that
were previously challenging for traditional programming approaches. See Figure 2-15.
For example, a neural network can be trained to learn which credit characteristics result in good or bad
loans. Developers of a credit evaluation neural network could provide it with data from many examples
of credit applications and loan results to process, with opportunities to adjust the signal strengths between
its neurons. The neural network would continue to be trained until it demonstrated a high degree of
accuracy in correctly duplicating the results of recent cases. At that point it would be trained enough to
begin making credit evaluations of its own.
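The credit evaluation example can be boiled down to a toy illustration. The sketch below trains a single artificial neuron (the smallest possible "network") with gradient descent on made-up credit data; the feature values, learning rate, and loan outcomes are invented purely for illustration and bear no resemblance to a production credit model.

```python
import math

# Toy training data: [income_score, debt_score] -> 1 = loan repaid, 0 = defaulted.
# The numbers are invented for illustration only.
examples = [([0.9, 0.1], 1), ([0.8, 0.3], 1), ([0.3, 0.8], 0), ([0.2, 0.9], 0)]

weights, bias = [0.0, 0.0], 0.0            # connection strengths the network will learn
learning_rate = 0.5

def predict(x):
    """Weighted sum of inputs passed through a sigmoid activation function."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Training loop: adjust the weights to shrink the gap between predictions and outcomes.
for epoch in range(1000):
    for x, target in examples:
        output = predict(x)
        error = output - target                         # difference from the desired output
        for i in range(len(weights)):
            weights[i] -= learning_rate * error * x[i]  # gradient descent update
        bias -= learning_rate * error

print(round(predict([0.85, 0.2]), 2))   # strong applicant -> output close to 1
print(round(predict([0.25, 0.85]), 2))  # weak applicant   -> output close to 0
```

A real credit evaluation network would use many more features, many neurons arranged in layers, and far more training examples, but the weight-adjustment idea is the same.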


Fig 2-15 Evaluating the training status of a neural network application.

Fuzzy logic systems


Fuzzy logic systems are a branch of artificial intelligence that deals with reasoning and decision-making
in the presence of uncertainty or vagueness. Unlike traditional logic systems that operate with binary values
(true or false), fuzzy logic systems introduce the concept of partial truth and degrees of membership.
In spite of their funny name, fuzzy logic systems represent a small, but serious, application of AI in
business. Fuzzy logic is a method of reasoning that resembles human reasoning. Similar to human
reasoning, it allows for approximate values and inferences (fuzzy logic) as well as incomplete or
ambiguous data (fuzzy data) instead of relying only on crisp data such as binary (yes/no) choices. For
example, Figure 2-16 illustrates a partial set of rules (fuzzy-rules) and a fuzzy SQL query for analyzing
and extracting credit risk information on business that are being evaluated for selection as investments.
Fuzzy logic systems provide a framework for handling uncertainty and imprecision, allowing for more
flexible and human-like reasoning and decision-making. They have been successful in many real-world
applications, particularly in situations where crisp or binary logic systems may not be suitable.
Notice how fuzzy logic uses terminology that is deliberately imprecise, such as very high,
increasing, somewhat decreased, reasonable, and very low. This language enables fuzzy systems to
process incomplete data and quickly provide approximate, but acceptable, solutions to problems
that are difficult for other methods to solve. Thus, fuzzy logic queries of a database, such as the SQL
query shown in Figure 2-16, promise to improve the extraction of data from business databases. It is
important to note that fuzzy logic isn't fuzzy or imprecise thinking. Fuzzy logic actually brings
precision to decision scenarios where it previously didn't exist.
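As a hedged illustration of how such deliberately imprecise terms can still be computed precisely, the sketch below assigns degrees of membership to linguistic terms describing a borrower and combines them with one fuzzy rule. The membership breakpoints, variable names, and rule are hypothetical and are not taken from the figure.

```python
# Degrees of membership (0.0 to 1.0) for fuzzy terms used in a credit risk rule.

def low_debt(ratio):
    """Fully 'low' at or below 0.2, not 'low' at all at or above 0.5, partial in between."""
    if ratio <= 0.2:
        return 1.0
    if ratio >= 0.5:
        return 0.0
    return (0.5 - ratio) / 0.3

def high_revenue_growth(growth):
    """Fully 'high' at or above 15% growth, not 'high' at or below 5%."""
    if growth >= 0.15:
        return 1.0
    if growth <= 0.05:
        return 0.0
    return (growth - 0.05) / 0.10

# Fuzzy rule: IF debt is low AND revenue growth is high THEN credit risk is low.
# The AND of two fuzzy conditions is commonly taken as the minimum of their degrees.
def low_credit_risk(debt_ratio, growth):
    return min(low_debt(debt_ratio), high_revenue_growth(growth))

print(low_credit_risk(0.25, 0.12))  # partially low debt, fairly high growth -> about 0.7
```

Instead of a yes/no answer, the rule yields a degree of confidence (here roughly 0.7) that the business is a low credit risk, which is exactly the kind of graded answer a fuzzy query returns.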


Fig 2-16 An example of fuzzy logic rules and a fuzzy logic SQL query in a credit risk analysis application.
Fuzzy Logic in Business
Examples of applications of fuzzy logic are numerous in Japan but rare in the United States. The United
States has preferred to use AI solutions like expert systems or neural networks, but Japan has implemented
many fuzzy logic applications, especially the use of special-purpose fuzzy logic microprocessor chips, called fuzzy process controllers. Thus, the Japanese ride on subway trains, use elevators, and
drive cars that are guided or supported by fuzzy process controllers made by Hitachi and Toshiba.
Many models of Japanese-made products also feature fuzzy logic microprocessors. The list is growing
and includes autofocus cameras, auto stabilizing camcorders, energy-efficient air conditioners, self-
adjusting washing machines, and automatic transmissions. Fuzzy logic can be used in credit scoring
models to assess the creditworthiness of individuals based on a combination of factors with varying
degrees of importance and uncertainty. It can evaluate risk factors, such as market volatility, economic
conditions, or operational uncertainties, and provide a more nuanced assessment of potential risks. It can
aid in customer segmentation, which involves categorizing customers into distinct groups based on their
characteristics or behavior. Also, it can be used in quality control processes to handle imprecise
measurements and variability in product attributes and can help identify and manage deviations from
desired standards.
GENETIC ALGORITHMS
Genetic algorithms (GAs) are computational search and optimization techniques inspired by the principles
of natural evolution and genetics. They are used to solve complex problems by mimicking the process of
natural selection, reproduction, and genetic variation. Genetic algorithms operate on a population of
potential solutions and iteratively evolve the population to find the best solution or approximate solutions
to a given problem.
The use of genetic algorithms is a growing application of artificial intelligence. Genetic algorithm
software uses Darwinian (survival of the fittest), randomizing, and other mathematical functions to
simulate an evolutionary process that can yield increasingly better solutions to a problem. Genetic
algorithms were first used to simulate millions of years in biological, geological, and ecosystem
evolution in just a few minutes on a computer. Genetic algorithm software is being used to model a variety
of scientific, technical, and business processes.
Genetic algorithms are especially useful for situations in which thousands of solutions are possible
and must be evaluated to produce an optimal solution. Genetic algorithm software uses sets of
mathematical process rules (algorithms) that specify how combinations of process components or steps
are to be formed. This process may involve trying random process combinations (mutation), combining
parts of several good processes (crossover), and selecting good sets of processes and discarding poor ones
(selection) to generate increasingly better solutions.
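The mutation, crossover, and selection steps described above fit in a few lines of code. The toy problem below (evolving a bit string with as many 1s as possible) and all parameter values are hypothetical, chosen only to keep the sketch short; real applications would use problem-specific encodings and fitness functions.

```python
import random

random.seed(42)
GENES, POP_SIZE, GENERATIONS = 20, 30, 40

def fitness(individual):
    return sum(individual)                      # toy objective: more 1s = fitter

def crossover(parent_a, parent_b):
    point = random.randint(1, GENES - 1)        # combine parts of two good solutions
    return parent_a[:point] + parent_b[point:]

def mutate(individual, rate=0.02):
    # occasionally flip a bit: random tweaks that keep the search exploring
    return [1 - g if random.random() < rate else g for g in individual]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: keep the fitter half of the population, discard the rest.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    # Reproduction: refill the population with mutated offspring of surviving pairs.
    children = [mutate(crossover(*random.sample(survivors, 2)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print("Best fitness found:", fitness(max(population, key=fitness)), "out of", GENES)
```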
Genetic algorithms have been successfully applied to a wide range of optimization and search problems,
including parameter optimization, scheduling, routing, machine learning, and resource allocation, among
others. They offer advantages in handling complex, non-linear, and multi-objective problems where
traditional optimization techniques may struggle. By imitating the principles of natural evolution, genetic
algorithms provide an efficient and effective approach for finding near-optimal or approximate solutions
in various domains.
Interrelationships among Systems
In any organization or system, there are interrelationships among different systems that collectively
contribute to the functioning and performance of the overall entity. These interrelationships can be
complex and interconnected, and understanding them is crucial for effective management and decision-
making. Here are some common interrelationships among systems:
Transaction processing systems (TPS) are typically a major source of data for other systems, whereas
Executive Support Systems (ESS) are primarily a recipient of data from lower- level systems. The other
types of systems may exchange data with each other as well. Data may also be exchanged among systems
serving different functional areas. For example, an order captured by a sales system may be transmitted
to a manufacturing system as a transaction for producing or delivering the product specified in the order
or to a Management Information Systems (MIS) for financial reporting.

Fig 2-17 Interrelationships among systems


The various types of systems in the organization have interdependencies. TPS are major producers
of information that is required by the other systems, which, in turn, produce information for other
systems. These different types of systems have been loosely coupled in most organizations.
It is definitely advantageous to integrate these systems so that information can flow easily between
different parts of the organization and provide management with an enterprise-wide view of how the
organization is performing as a whole. But integration costs money, and integrating many different
systems is extremely time consuming and complex. This is a major challenge for large organizations,
which are typically saddled with hundreds, even thousands of different applications serving different
levels and business functions. Each organization must weigh its needs for integrating systems against the
difficulties of mounting a large-scale systems integration effort.
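As a rough sketch of the order hand-off described earlier in this section, the snippet below passes one sales transaction to a manufacturing module and an MIS reporting module. The record layout and function names are hypothetical; real integrations would typically run through middleware, messaging, or shared databases rather than direct function calls.

```python
# A sales order captured by the TPS becomes input data for other systems.
order = {"order_no": 1001, "product_code": "P-204", "quantity": 50, "amount": 125000.0}

def manufacturing_system(order):
    """Turn the order into a production/delivery instruction."""
    return f"Schedule production of {order['quantity']} units of {order['product_code']}"

def mis_reporting(orders):
    """Summarize transactions for management-level financial reporting."""
    return {"orders": len(orders), "total_sales": sum(o["amount"] for o in orders)}

print(manufacturing_system(order))
print(mis_reporting([order]))
```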

2.3 Sales and marketing information systems


Information systems can be classified by the specific organizational function they serve as well as by
organizational level. We now describe typical information systems that support each of the major business
functions and provide examples of functional applications for each organizational level.
The sales and marketing function is responsible for selling the organization's products or services.
Marketing is concerned with identifying the customers for the firm's products or services,
determining what customers need or want, planning and developing products and services to meet
their needs, and advertising and promoting these products and services. Sales are concerned with
contacting customers, selling the products and services, taking orders, and following up on sales. Sales
and marketing information systems support these activities.
At the strategic level, sales and marketing systems monitor trends affecting new products and sales
opportunities, support planning for new products and services, and monitor the performance of
competitors.
At the management level, sales and marketing systems support market research, advertising and
promotional campaigns, and pricing decisions. They analyze sales performance and the performance of
the sales staff. At the operational level, sales and marketing systems assist in locating and contacting
prospective customers, tracking sales, processing orders, and providing customer service support.
Review Figure 2-7. It shows the output of a typical sales information system at the management level.
The system consolidates data about each item sold (such as the product code, product description, and
amount sold) for further management analysis. Company managers examine these sales data to monitor
sales activity and buying trends.
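A minimal sketch of the kind of consolidation such a sales information system performs is shown below; the transaction records and field names are invented for illustration and a real system would read them from the order-processing TPS.

```python
# Individual sales transactions captured at the operational level.
sales = [
    {"product_code": "A10", "description": "Printer", "amount": 300.0},
    {"product_code": "A10", "description": "Printer", "amount": 300.0},
    {"product_code": "B22", "description": "Scanner", "amount": 150.0},
]

# Consolidate the amount sold by product for management-level analysis.
summary = {}
for sale in sales:
    key = (sale["product_code"], sale["description"])
    summary[key] = summary.get(key, 0.0) + sale["amount"]

for (code, description), total in summary.items():
    print(f"{code} {description}: total sold = {total}")
```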
In recent years, sales and marketing information systems (SMIS) have had a significant impact on organizations. One key impact is the ability to gain enhanced customer insights by gathering large amounts of customer data, including demographics, preferences, and behaviour patterns. This deeper understanding of customers enables businesses to tailor their marketing strategies and deliver personalized experiences, ultimately improving customer engagement with the firm's products and services. Further, sales automation tools streamline various sales processes, such as lead management, order processing, and inventory management. These tools reduce administrative tasks and eliminate manual work, resulting in increased sales efficiency, shorter sales cycles, and more time for sales teams to focus on building relationships and closing deals. Furthermore, SMIS enable businesses to develop and execute personalized marketing campaigns.
Overall, SMIS have revolutionized sales and marketing by providing organizations with the tools and
insights needed to better understand their customers, streamline sales processes, and deliver personalized
experiences. These advancements have resulted in improved customer satisfaction, increased sales
efficiency, and more effective marketing strategies, ultimately driving business growth and success.

2.4 Manufacturing and Production Information Systems


The manufacturing and production function is responsible for actually producing the firm's goods
and services. Manufacturing and production systems deal with the planning, development, and
maintenance of production facilities; demand forecasting; the establishment of production goals; the
acquisition, storage, and availability of production materials; and the scheduling of equipment, facilities,
materials, and labor required to fashion finished products. Manufacturing and production information
systems support these activities. With accurate data and advanced algorithms, MPIS can optimize
scheduling, allocate resources effectively, and minimize production bottlenecks. By streamlining the
planning and scheduling processes, organizations can ensure timely delivery, reduce lead times, and
maximize the utilization of resources.
Information systems can guide the actions of machines and equipment to help pharmaceutical and
other types of firms monitor and control the manufacturing process. MPIS also facilitate real-time
monitoring and control of manufacturing processes. By integrating with production equipment and
sensors, these systems collect data on machine performance, quality metrics, and production outputs. This
real-time data enables organizations to identify potential issues or deviations from standards, allowing for
immediate corrective actions.
Strategic-level manufacturing systems deal with the firm's long-term manufacturing goals, such as where
to locate new plants or whether to invest in new manufacturing technology. At the management level,
manufacturing and production systems analyze and monitor manufacturing and production costs and
resources. Operational manufacturing and production systems deal with the status of production tasks.
Overall, MPIS have revolutionized manufacturing and production by providing organizations with the
tools and insights needed to optimize production planning, monitor processes in real-time, and manage
inventory efficiently. These systems contribute to improved productivity, reduced costs, enhanced
product quality, and better decision-making, ultimately leading to increased customer satisfaction and
competitive advantage in the marketplace.
Most manufacturing and production systems use some sort of inventory system, as illustrated in Figure
2-18. Data about each item in inventory, such as the number of units depleted because of a shipment
or purchase or the number of units replenished by reordering or returns, are either scanned or keyed into
the system. The inventory master file contains basic data about each item, including the unique
identification code for each item, a description of the item, the number of units on hand, the number of
units on order, and the reorder point (the number of units in inventory that triggers a decision to reorder
to prevent a stockout). Companies can estimate the number of items to reorder, or they can use a formula
for calculating the least expensive quantity to reorder called the economic order quantity. The system
produces reports that give information about such things as the number of each item available in
inventory, the number of units of each item to reorder, or items in inventory that must be replenished.
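The reorder-point check and the economic order quantity mentioned above can be expressed compactly. The classic EOQ formula is the square root of 2DS/H, where D is annual demand, S is the ordering cost per order, and H is the annual holding cost per unit; the item data and cost figures below are hypothetical and serve only to show the calculation.

```python
import math

def economic_order_quantity(annual_demand, order_cost, holding_cost_per_unit):
    """Classic EOQ: the least expensive quantity to reorder, sqrt(2DS/H)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

def needs_reorder(units_on_hand, units_on_order, reorder_point):
    """Trigger a reorder when the stock position falls to the reorder point."""
    return (units_on_hand + units_on_order) <= reorder_point

item = {"code": "INV-17", "on_hand": 40, "on_order": 0, "reorder_point": 50}

if needs_reorder(item["on_hand"], item["on_order"], item["reorder_point"]):
    qty = economic_order_quantity(annual_demand=1200, order_cost=75, holding_cost_per_unit=2.5)
    print(f"Reorder {item['code']}: order about {round(qty)} units")
```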

Fig 2-18 Overview of an inventory system


This system provides information about the number of items available in inventory to support
manufacturing and production activities.
Product life cycle management (PLM) systems are one type of manufacturing and production system that
has become increasingly valuable in the automotive, aerospace, and consumer products industries. PLM
systems are based on a data repository that organizes every piece of information that goes into making a
particular product, such as formula cards, packaging information, shipping specifications, and patent data.
Once all these data are available, companies can select and combine the data they need to serve specific
functions. For, example, designers and engineers can use the data to determine which parts are needed for
a new design, whereas retailers can use them to determine self height and how materials should be stored
in warehouses.
For many years, engineering-intensive industries have used computer-aided design (CAD) systems
to automate the modeling and design of their products. The software enables users to create a digital model
of a part, a product, or a structure and make changes to the design on the computer without having to
build physical prototypes. PLM software goes beyond CAD software to include not only automated
modeling and design capabilities but also tools to help companies manage and automate materials
sourcing, engineering change orders, and product documentation, such as test results, product packaging,
and post sales data. The Window on Organizations describes how these systems are providing new sources
of value.

2.5 Finance and Accounting Information Systems


Finance and accounting information systems (FAIS) play a critical role in managing and organizing financial data, facilitating financial transactions, and generating accurate financial reports within organizations. These systems make use of technology to automate existing financial processes, improve data accuracy, and enhance financial
decision making. The finance function is responsible for managing the firm's financial assets, such as
cash, stocks, bonds, and other investments, to maximize the return on these financial assets. The finance
function is also in charge of managing the capitalization of the firm (finding new financial assets in stocks,
bonds, or other forms of debt). To determine whether the firm is getting the best return on its investments,
the finance function must obtain a considerable amount of information from sources external to the firm.
The accounting function is responsible for maintaining and managing the firm's financial records-
receipts, disbursements, depreciation, and payroll- to account for the flow of funds in a firm. Finance and
accounting share related problems-how to keep track of a firm's financial assets and fund flows. They
provide answers to questions such as these: What is the current inventory of financial assets? What records
exist for disbursements, receipts, payroll, and other fund flows? FAIS streamline financial processes by
automating tasks such as data entry, invoice processing, and financial reconciliations. By eliminating
manual processes and reducing the reliance on paper-based documentation, FAIS increase efficiency and
reduce the likelihood of errors. This saves time and resources for finance and accounting teams, allowing
them to focus on value-added activities such as financial analysis and strategic planning.
Strategic-level systems for the finance and accounting function establish long-term investment goals for
the firm and provide long-range forecasts of the firm's financial performance. At the management level,
information systems help managers oversee and control the firm's financial resources. Operational systems
in finance and accounting track the flow of funds in the firm through transactions such as paychecks,
payments to vendors, securities reports, and receipts. Review Figure 2-3, which illustrates a payroll
system, a typical accounting TPS found in all businesses with employees.
Overall, finance and accounting information systems have transformed financial management by
automating processes, improving data accuracy, and enabling informed decision-making. These systems
enhance efficiency, reduce errors, and provide timely and reliable financial information, contributing to
effective financial planning, compliance, and overall organizational success.

2.6 Human Resources Information Systems


The human resources function is responsible for attracting, developing, and maintaining the firm's
workforce. Human resources information systems support activities, such as identifying potential
employees, maintaining complete records on existing employees, and creating programs to develop
employees' talents and skills.
Human Resources Information Systems (HRIS) are specialized systems designed to support human
resource management processes within an organization. These systems integrate various HR functions and
activities, ranging from employee recruitment and onboarding to performance management and payroll
administration. HRIS streamlines HR processes, improves data accuracy, and enhances employee
management.
Strategic-level human resources systems identify the manpower requirements (skills, educational level,
types of positions, number of positions, and cost) for meeting the firm's long-term business plans. At the
management level, human resources systems help managers monitor and analyze the recruitment,
allocation, and compensation of employees. Human resources operational systems track the
recruitment and placement of the firm's employees.
Figure 2-19 illustrates a typical human resources TPS for employee record keeping. It maintains basic
employee data, such as the employee's name, age, sex, marital status, address, educational background,
salary, job title, date of hire, and date of termination. The system can produce a variety of reports, such
as lists of newly hired employees, employees who are terminated or on leaves of absence, employees
classified by job type or educational level, or employee job performance evaluations. Such systems
are typically designed to provide data that can satisfy federal and state record keeping requirements for
Equal Employment Opportunity (EEO) and other purposes.
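A toy version of the report-producing logic of such an employee record-keeping system appears below; the employee records, field names, and reporting year are invented, and a real HRIS would draw them from its personnel database.

```python
from datetime import date

employees = [
    {"name": "A. Sharma", "job_title": "Accountant", "education": "Masters",
     "hired": date(2023, 4, 1), "terminated": None},
    {"name": "B. Thapa", "job_title": "Analyst", "education": "Bachelors",
     "hired": date(2022, 7, 15), "terminated": date(2023, 9, 30)},
    {"name": "C. Rai", "job_title": "Accountant", "education": "Bachelors",
     "hired": date(2023, 11, 20), "terminated": None},
]

# Report 1: employees classified by job type.
by_job = {}
for emp in employees:
    by_job.setdefault(emp["job_title"], []).append(emp["name"])

# Report 2: newly hired employees (hired within the reporting year, still employed).
new_hires = [e["name"] for e in employees
             if e["hired"].year == 2023 and e["terminated"] is None]

print(by_job)
print("New hires still employed:", new_hires)
```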


Fig 2-19 An employee record keeping system

This system maintains data on the firm's employees to support the human resources function.

Chapter 3

Information Technology Strategy and Trends
3.1 Enterprise, Strategy and Vision
Enterprise: In the context of business, an enterprise refers to a complex and organized entity, typically a
company or organization, engaged in commercial, industrial, or professional activities. It encompasses the
overall operations, structure, and resources of the entity, including its employees, assets, systems, and
processes.
Strategy: Strategy is the plan of action designed to achieve a specific goal or set of objectives. It involves
making choices and allocating resources to maximize the organization's competitive advantage and attain
desired outcomes. Strategic decisions are typically based on an analysis of internal and external factors,
market dynamics, customer needs, and long-term sustainability.
Vision: Vision refers to an organization's desired future state or long-term direction. It provides a
compelling image of what the organization aspires to become or achieve. A vision statement outlines the
core purpose and values of the organization and serves as a guiding principle for strategic decision-making.
It inspires and aligns stakeholders by painting a picture of the organization's ultimate goals and the impact
it seeks to create.
The relationship between these concepts is as follows: Vision provides a clear and inspiring destination
for the enterprise, guiding its overall direction. Strategy, on the other hand, defines the approach, tactics,
and initiatives that the enterprise will employ to fulfill its vision. It involves the identification of
competitive advantages, target markets, value propositions, and resource allocation. Strategy translates the
long-term vision into actionable plans and sets the course for the enterprise's activities, ensuring that they
are aligned with the desired future state.
The business environment is prone to change, and this makes business planning very complex. Factors such as market forces, technological changes, the complex diversity of business, and competition have a significant impact on business prospects. MIS is designed to assess and monitor these factors.
The MIS design is supposed to provide some insight into these factors enabling the management to evolve
some strategy to deal with them. Since these factors are a part of the environment, MIS design is
required to keep a watch on environment factors and provide information to the management for a strategy
formulation.
Strategy formulation is a complex task based on the strength and the weakness of the organization as well
as the mission and goals it wishes to achieve. Strategy formulation is the responsibility of the top
management and the top management relies on the MIS for information. There are various business
strategies such as overall company growth, product, market, financing and so on. MIS should provide the
relevant information that would help the management in deciding the type of strategies the business needs.
Every business may not require all the strategies all the time. The type of strategy is directly related to
the current status of business and the goals it wishes to achieve. The MIS is supposed to provide current
information on the status of the business vis-a-vis the goals. MIS is supposed to give a status with regard to
whether the business is on a growth path, is stagnant, or is likely to decline, and the reasons thereof. If the status of the business shows a declining trend, the strategy should be one of growth. If the business is losing in a particular market segment, then the strategy should be a market or a product strategy.

The continuous assessment of business progress in terms of sales, market, quality, profit and its direction becomes
the major role of MIS. It should further aid the top management in strategy formulation at each stage of business.
The business does not survive on a single strategy but it requires a mix of strategy operating at different levels
of the management. For example when a business is on the growth path, it would require a mix of price, product
and market strategies. If a business is on a decline, it would need a mix of price-discount, sales promotion and
advertising strategies.
The MIS is supposed to evaluate the strategies in terms of the impact they have on business and provide
an optimum mix. The MIS is supposed to provide a strategy-pay off matrix for such an evaluation.
In business planning, MIS should provide support to top management for focusing its attention on
decision-making and action. In business management, the focus shifts from one aspect to another. In the
introductory phase, the focus would be on a product design and manufacturing. When the business
matures and requires to sustain or to consolidate, the focus would be on the post-sales services and
support. The MIS should provide early warning to change the focus of the management from one aspect
to the other.
Evolving the strategies is not the only task the top management has to perform. It also has to provide the
necessary resources to implement the strategies. The assessment of resource need and its selection
becomes a major decision for the top management. The MIS should provide information on resources,
costs, quality and availability for deciding the cost effective resources mix.
When the strategies are being implemented, it is necessary that the management get a continuous feedback
on its effectiveness in relation to the objective, which they are supposed to achieve. MIS is supposed
to give a critical feedback on the strategy performance. According to the nature of the feedback, the
management may or may not make a change in the strategy mix, the focus and the resource allocation.
MIS has certain other characteristics for the top management. It contains forecasting models that probe into the future for evaluation of strategy performance by simulating business conditions. It contains functional models, such as models for new product launching, budgeting, and scheduling, and models using the PERT/CPM technique for planning.
MIS for the top management relies heavily on databases that are external to the organization. The
management also relies heavily on the internal data, which is evolved out of transaction processing.
Management uses the standards, the norms, the ratios and the yardsticks while planning and controlling
the business activities. They are also used for designing strategies and their mix. The MIS is supposed to
provide correct, precise and unbiased standards to the top management for planning.
We can summarize the role of the MIS in the top management function as follows. MIS supports by way
of information, to
1. Decide the goals and objectives.
2. Determine the correct status of the future business and projects.
3. Provide the correct focus for the attention and action of the management.
4. Evolve, decide and determine the mix of the strategies.
5. Evaluate the performance and give critical feedback on the strategic failures.

6. Provide cost-benefit evaluation to decide on the choice of resources, the mobilization of
resources, and the mix of resources.
7. Generate the standards, the norms, the ratios and the yardsticks for measurement and control.
Success of business depends on the quality of support the MIS gives to the management. The quality
is assured only through an appropriate design of the MIS and proper integration of the business plan
with the MIS plan. Figure 3-1 explains the role of the MIS in strategic planning
and its support in the execution and control of the management processes.

Fig 3-1 MIS and Strategic Management Process

3.1.1 Internal and External Business Issues


Business strategy is a set of activities and decisions firms make that determine the following:
• Products and services the firm produces
• Industries in which the firm competes
• Competitors, suppliers, and customers of the firm
• Long-term goals of the firm
Strategies often result from a conscious strategic planning process in which nearly all small to large
firms engage at least once a year. This process produces a document called the strategic plan, and
the managers of the firm are given the task of achieving the goals of the strategic plan. But firms have
to adapt these plans to changing environments and as a result where firms end up is not necessarily
where they planned to be. Nevertheless, strategic plans are useful interim tools for defining what the
firm will do until the business environment changes.

Thinking about strategy usually takes place at three different levels:


• Business. A single firm producing a set of related products and services
• Firm. A collection of businesses that make up a single, multidivisional firm
• Industry. A collection of firms that make up an industrial environment or ecosystem
Information systems and technologies play a crucial role in corporate strategy and strategic planning at
each of these three different levels. Just about any substantial information system-a supply chain system,
customer relationship system, or enterprise management system-can have strategic implications for a
firm. What the firm wants to do in the next five years will be shaped in large part by what its
information systems enable it to do. IT and the ability to use IT effectively will shape what the
firm makes or provides customers, how it makes the product/service, how it competes with others in its
industry, and how it cooperates with other firms and logistic partners.
To understand how IT fits into the strategic thinking process, it is useful to consider the three levels of
business strategy (the business, the firm, and the industry level). At each level of strategy IT plays
an important role.

Business-Level Strategy: The Value Chain Model


Business-level strategy involves the plans and decisions made by an organization to secure a competitive
edge within a specific market segment. One widely adopted framework for formulating and executing
business-level strategies is the Value Chain Model, devised by Michael Porter. The crux of business-level
strategy is to determine effective competition methods within a specific market, be it a product or service
sector.
The most prevalent generic strategies at this level include: (1) achieving the status of a low-cost producer,
(2) differentiating your product or service, and/or (3) modifying the scope of competition by either
expanding into global markets or focusing on smaller niches that competitors overlook.
In today's digital age, companies have additional capabilities to support business-level strategy. They can
efficiently manage the supply chain, develop advanced customer response systems, and participate in
value webs to deliver innovative products and services to the market.

By leveraging the value chain analysis, organizations can pinpoint opportunities for cost reduction,
differentiation, and gaining a competitive edge in specific activities. Moreover, it aids in identifying
potential areas for collaboration or integration with suppliers, distributors, or other stakeholders to boost
overall value creation. This analysis becomes more dynamic and essential with the rise of digital
transformation, where elements like data analytics, AI, and blockchain can further enhance value chain
efficiency and competitive advantage.
Leveraging Technology in the Value Chain
At the business level the most common analytical tool is value chain analysis. The value chain model
highlights specific activities in the business where competitive strategies can best be applied (Porter, 1985)
and where information systems are most likely to have a strategic impact. The value chain model identifies
specific, critical leverage points where a firm can use information technology most effectively to enhance
its competitive position. This model views the firm as a series or chain of basic activities that add a
margin of value to a firm's products or services. These activities can be categorized as either primary
activities or support activities.
Primary activities are most directly related to the production and distribution of the firm's products
and services that create value for the customer. Primary activities include inbound logistics, operations,
outbound logistics, sales and marketing, and service. Inbound logistics includes receiving and storing
materials for distribution to production. Operations transform inputs into finished products. Outbound
logistics entails storing and distributing finished products. Sales and marketing includes promoting and
selling the firm's products. The service activity includes maintenance and repair of the firm's goods and
services. Using RFID tags and barcode scanning streamlines the tracking and management of
incoming/outgoing materials and inventory, improving accuracy and reducing errors. Leveraging
technology solutions like route optimization algorithms, GPS tracking, and mobile applications can
enhance the efficiency and visibility of last-mile delivery, ensuring timely and accurate delivery to
customers.
Support activities make the delivery of the primary activities possible and consist of organization
infrastructure (administration and management), human resources (employee recruiting, hiring, and
training), technology (improving products and the production process), and procurement (purchasing
input).
Firms achieve competitive advantage when they provide more value to their customers or when they
provide the same value to customers at a lower price. An information system could have a strategic impact
if it helps the firm provide products or services at a lower cost than competitors or if it provides products
and services at the same cost as competitors but with greater value. The value activities that add the most
value to products and services depend on the features of each particular firm. For example, CRM software enables organizations to manage customer data, track interactions, and personalize marketing efforts. It helps in understanding customer preferences, improving sales forecasting, and enhancing customer satisfaction. Building an online presence and leveraging e-commerce platforms enables organizations to reach a wider audience, provide convenient purchasing options, and streamline the sales process. ERP software
integrates various internal functions, including finance, HR, and inventory management, providing a
unified view of operations and improving overall efficiency. Utilizing data analytics tools and techniques
helps organizations gain insights into operational performance, customer behavior, and market trends. It
enables data-driven decision-making and the identification of opportunities for improvement.
It's important for organizations to align their technology investments with their business goals and consider
factors such as scalability, data security, and integration capabilities. By strategically leveraging
technology throughout the value chain, organizations can drive innovation, streamline processes, and
deliver superior value to customers, ultimately achieving a competitive advantage in the market.
The firm's value chain can be linked to the value chains of its other partners, including suppliers,
distributors, and customers. Figure 3-2 illustrates the activities of the firm value chain and the industry
value chain, showing examples of information systems that could be developed to make each of the value
activities more cost-effective. A firm can achieve a strategic advantage over competitors using
information systems not only by improving its internal value chain, but also by developing highly
efficient ties to its industry partners-such as suppliers, logistics firms, and distributors-and their value
chains.

Fig 3-2 The firm value chain and the industry value chain
Illustrated are various examples of strategic information systems for the primary and support activities of
a firm and of its value partners that would add a margin of value to a firm's products or services.
Digitally enabled networks can be used not only to purchase supplies but also to closely
coordinate production of many independent firms. For instance, the Italian casual wear company Benetton
uses subcontractors and independent firms for labor-intensive production processes, such as tailoring,
finishing, and ironing, while maintaining control of design, procurement, marketing, and distribution.
Benetton uses computer networks to provide independent businesses and foreign production centers

with production specifications so that they can efficiently produce the items needed by Benetton retail
outlets (Camuffo, Romano, and Vinelli, 2001).
Internet technology has made it possible to extend the value chain so that it ties together all the firm's
suppliers, business partners, and customers into a value web. A value web is a collection of independent
firms that use information technology to coordinate their value chains to produce a product or service for
a market collectively. It is more customer driven and operates in a less linear fashion than the traditional
value chain.
Figure 3-3 shows that this value web synchronizes the business processes of customers, suppliers,
and trading partners among different companies in an industry or related industries. These
value webs are flexible and adaptive to changes in supply and demand. Relationships can be bundled
or unbundled in response to changing market conditions. A company can use this value web to
maintain long-standing relationships with many customers over long periods or to respond immediately
to individual customer transactions. Firms can accelerate time to market and to customers by optimizing
their value web relationships to make quick decisions on who can deliver the required products or services
at the right price and location.

Fig 3-3 The value web

The value web is a networked system that can synchronize the value chains of business partners within
an industry to respond rapidly to changes in supply and demand.
Businesses should try to develop strategic information systems for both the internal value chain activities
and the external value activities that add the most value. A strategic analysis might, for example, identify
sales and marketing activities for which information systems could provide the greatest boost. The
analysis might recommend a system to reduce marketing costs by targeting marketing campaigns more
efficiently or by providing information for developing products more finely tuned to a firm's target
market. A series of systems, including some linked to systems of other value partners, might be required
to create a strategic advantage.
Value chains and value webs are not static. From time to time they may have to be redesigned to keep
pace with changes in the competitive landscape (Fine et al., 2002). Companies may need to reorganize
and reshape their structural, financial, and human assets and recast systems to tap new sources of value.
We now show how information technology at the business level helps the firm reduce costs, differentiate
products, and serve new markets.
Information Systems Products and Services
Firms can use information systems to create unique new products and services that can be easily
distinguished from those of competitors. Strategic information systems for product
differentiation can prevent the competition from responding in kind so that firms with these
differentiated products and services no longer have to compete on the basis of cost.
Financial institutions have created many of these information technology-based products and services.
Citibank, one of the largest banks in the United States, developed automatic teller machines (ATMs) and
bank debit cards in 1977. Citibank ATMs were so successful that Citibank's competitors
were forced to counterstrike with their own ATM systems. Citibank, Wells Fargo Bank, and others
have continued to innovate by providing online electronic banking services so that customers can do
most of their banking transactions using home computers linked to the Internet. These banks have
recently launched new account aggregation services that enable customers to view all of their accounts,
including their credit cards, investments, online travel rewards, and even accounts from competing
banks, from a single online source. Some companies, such as NetBank, have used the web to set up
virtual banks offering a full array of banking services without any physical branches. (Customers
mail in their deposits and use designated ATMs to obtain cash.)
Computerized reservation systems such as American Airlines' SABRE system started out as a powerful
source of product differentiation for the airline and travel industries. These traditional reservation
systems are now being challenged by new travel services with which consumers can make their own
airline, hotel, and car reservations directly on the Web, bypassing travel agents and other intermediaries.
Manufacturers and retailers are starting to use information systems to create products and services
that are custom-tailored to fit the precise specifications of individual customers. Dell Computer
Corporation sells directly to customers using assemble-to-order manufacturing. Individuals, businesses,
and government agencies can buy computers directly from Dell, customized with the exact features
and components they need. They can place their orders directly using a toll-free telephone number
or Dell's Web site. Once Dell's production control receives an order, it directs an assembly plant to

assemble the computer using components from an on-site warehouse based on the configuration
specified by the customer.
Google is a multinational technology company renowned for its innovative use of information systems.
It has developed numerous products and services by leveraging its extensive data infrastructure and
advanced algorithms, including Google Search, Google Maps, Google Ads, Google Cloud, and Gmail.
This extensive use of information technology and data analysis has allowed Google to create innovative
products and services that have transformed the way people search, navigate, and access information
online.
Internet Connection
You can go to the NetBank website where you can see how one company used the Internet to create an
entirely new type of business. You can complete an exercise for analyzing this website's
capabilities and its strategic benefits.
Systems to Focus on Market Niche
A business can create new market niches by identifying a specific target for a product or service that it
can serve in a superior manner. Through focused differentiation, the firm can provide a specialized
product or service for this narrow target market better than competitors.
An information system can give companies a competitive advantage by producing data for finely tuned
sales and marketing techniques. Such systems treat existing information as a resource that the
organization can mine to increase profitability and market penetration. Information systems
enable companies to analyze customer buying patterns, tastes, and preferences closely so that they
efficiently pitch advertising and marketing campaigns to smaller and smaller target markets.
The data come from a range of sources-credit card transactions, demographic data, purchase data from
checkout counter scanners at supermarkets and retail stores, and data collected when people access and
interact with websites. Sophisticated software tools can find patterns in these large pools of data and
infer rules from them that can be used to guide decision-making. Analysis of such data can drive
one-to-one marketing, where personal messages can be created based on individualized preferences.
Contemporary customer relationship management (CRM) systems feature analytical capabilities for this
type of intensive data analysis.
For example, Sears Roebuck continually analyzes purchase data from its 60 million past and present
credit card users to target appliance buyers, gardening enthusiasts, and mothers-to-be with special
promotions. The company might mail customers who purchase a washer and dryer a maintenance
contract and annual contract renewal forms.
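The underlying idea can be illustrated with a short, hypothetical Python sketch: purchase records are grouped by customer and a simple rule selects the customers to target. The item names, customer codes, and the rule itself are invented for illustration only and are not Sears' actual criteria.

    # Hypothetical sketch: mining purchase records to build a target list for a promotion.
    purchases = [
        {"customer": "C001", "item": "washer"},
        {"customer": "C001", "item": "dryer"},
        {"customer": "C002", "item": "garden hose"},
        {"customer": "C003", "item": "washer"},
    ]

    # Group purchased items by customer.
    by_customer = {}
    for p in purchases:
        by_customer.setdefault(p["customer"], set()).add(p["item"])

    # Simple, invented rule: customers who bought both a washer and a dryer are
    # candidates for a maintenance-contract mailing.
    targets = [c for c, items in by_customer.items() if {"washer", "dryer"} <= items]
    print(targets)   # ['C001']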
Supply Chain Management and Efficient Customer Response Systems
A powerful business-level strategy available to digital firms involves linking the value chains of vendors
and suppliers to the firm's value chain. Digital firms can carry integration of value chains further
by linking the customer's value chain to the firm's value chain in an efficient customer response system.
Firms using systems to link with customers and suppliers are able to reduce their inventory costs while
responding rapidly to customer demands.

By keeping prices low and shelves well stocked using a legendary inventory replenishment system,
Wal-Mart has become the leading retail business in the United States. Wal-Mart's continuous
replenishment system sends orders for new merchandise directly to suppliers as soon as consumers pay
for their purchases at the cash register. Point-of-sale terminals record the bar code of each item passing
the checkout counter and send a purchase transaction directly to a central computer at Wal-Mart
headquarters. The computer collects the orders from all Wal-Mart stores and transmits them to suppliers.
Suppliers can also access Wal-Mart's sales and inventory data using Web technology.
Because the system can replenish inventory with lightning speed, Wal-Mart does not need to spend
much money on maintaining large inventories of goods in its own warehouses. The system also enables
Wal-Mart to adjust purchases of store items to meet customer demands.
Competitors, such as Sears, have been spending 24.9 percent of sales on overhead. But by using systems
to keep operating costs low, Wal-Mart pays only 16.6 percent of sales revenue for overhead. (Operating
costs average 20.7 percent of sales in the retail industry.)
Wal-Mart's continuous inventory replenishment system uses sales data captured at the checkout
counter to transmit orders to restock merchandise directly to its suppliers. The system enables Wal-
Mart to keep costs low while fine-tuning its merchandise to meet customer demands.
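A highly simplified sketch of the reorder logic behind such a continuous replenishment system is shown below. The item codes, reorder points, and order quantities are invented for illustration; a real system such as Wal-Mart's adds demand forecasting, EDI or web-based supplier links, and far more sophistication.

    # Simplified, hypothetical reorder logic: each point-of-sale transaction decrements
    # stock, and an order is sent to the supplier when stock falls below a reorder point.
    stock = {"item-101": 40}            # on-hand quantity by item code (invented)
    reorder_point = {"item-101": 30}
    order_quantity = {"item-101": 100}

    def send_order_to_supplier(item, qty):
        # In practice this would be an electronic message to the supplier's system.
        print(f"Order placed: {qty} units of {item}")

    def record_sale(item, qty):
        stock[item] -= qty
        if stock[item] <= reorder_point[item]:
            send_order_to_supplier(item, order_quantity[item])

    record_sale("item-101", 15)         # stock falls to 25, which triggers an order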
Wal-Mart's continuous replenishment system is an example of efficient supply chain
management. Supply chain management systems can not only lower inventory costs, but they also
can deliver the product or service more rapidly to the customer. Supply chain management plays an
important role in efficient customer response systems that respond to customer demands more efficiently.
An efficient customer response system directly links consumer behavior to distribution and production
and supply chains. Production begins after the customer purchases or orders a product. Wal-Mart's
continuous replenishment system provides such an efficient customer response. Dell Computer
Corporation's assemble-to-order system, described earlier, is another example of an efficient customer
response system.
The convenience and ease of using these information systems raise switching costs (the cost of switching
from one product to a competing product), which discourages customers from going to competitors.
Another example is Baxter International's stockless inventory and ordering system, which uses supply
chain management to create an efficient customer response system. Participating hospitals become
unwilling to switch to another supplier because of the system's convenience and low cost. Baxter
supplies nearly two-thirds of all products used by U.S. hospitals. When hospitals want to place an
order, they do not need to call a salesperson or send a purchase order-they simply use a desktop computer
that links electronically to Baxter's supply catalog either through proprietary software or through the
Web. The system generates shipping, billing, invoicing, and inventory information, providing
customers with an estimated delivery date. With more than 80 distribution centers in the United
States, Baxter can make daily deliveries of its products, often within hours of receiving an order.
Baxter delivery personnel no longer drop off their cartons at loading docks to be placed in hospital
storerooms. Instead, they deliver orders directly to the hospital corridors, dropping them at nursing
stations, operating rooms, and supply closets. This has created in effect a "stockless inventory," with
Baxter serving as the hospitals' warehouse.

Figure 3-4 compares stockless inventory with the just-in-time supply method and traditional inventory
practices. Whereas just-in-time inventory enables customers to reduce their inventories by ordering only
enough material for a few days' inventory, stockless inventory enables them to eliminate their
inventories entirely. All inventory responsibilities shift to the distributor, which manages the supply
flow. The stockless inventory is a powerful instrument for locking in customers, thus giving the
supplier a decided competitive advantage. Information systems can also raise switching costs by
making product support, service, and other interactions with customers more convenient and reliable.

Fig 3-4 Stockless inventory compared to traditional and just-in-time supply methods
The just-in-time supply method reduces inventory requirements of the customer, whereas stockless
inventory enables the customer to eliminate inventories entirely. Deliveries are made daily, sometimes
directly to the departments that need the supplies.
Supply chain management and efficient customer response systems are two examples of how emerging
digital firms engage in business strategies not available to traditional firms. Both types of systems require
network-based information technology infrastructure investment and software competence to make
customer and supply chain data flow seamlessly among different organizations. Both types of strategies
have greatly enhanced the efficiency of individual firms and the U.S. economy as a whole by moving
toward a demand-pull production system and away from the traditional supply-push economic system
in which factories were managed on the basis of 12-month official plans rather than on near-
instantaneous customer purchase information. Figure 3-5 illustrates the relationships between supply
chain management, efficient customer response, and the various business-level strategies.


Fig 3-5 Business-level strategy


Efficient customer response and supply chain management systems are often interrelated, helping
firms lock in customers and suppliers while lowering operating costs. Other types of systems can be used
to support product differentiation, focused differentiation, and low-cost producer strategies.
3.1.2 Factors Influencing IT
A Strategic Business Unit (SBU) is a distinct entity within a larger corporate structure. Each SBU serves
a defined external market and is sufficiently unique to warrant its own strategic planning concerning
products and markets. This setup allows large corporations to promote and manage each unit based on its
specific market conditions and strategic needs.
The concept of the SBU originated in the 1960s, primarily through the diversified organizational structure
of General Electric. Each SBU in an organization is large and homogeneous enough to control most
strategic factors affecting its performance. Thus, it operates as a self-contained planning unit, with distinct
business strategies that might differ from the parent company's overall strategy.
An SBU can represent an entire company or just a smaller part of a company designated for a specific
task. Commonly used analytical tools like the BCG Matrix are used to evaluate the performance and
potential of these units.
SBUs are structured to cater to the unique demands of each market the company operates in. They group
a unique set of products or services, targeting a specific set of customers, and facing a well-defined set of
competitors. This organization reflects the external (market) dimension of a business, emphasizing that an
SBU should serve external customers rather than merely being an internal supplier.
In modern business terminology, the terms "Segment" or "Division" are often used interchangeably
with SBUs, or to describe a collection of SBUs with shared characteristics. By utilizing the SBU
structure, corporations can ensure more responsive and effective strategic planning tailored to diverse
market conditions.

Success factors
There are three factors that are generally seen as determining the success of an SBU:
1. The degree of autonomy given to each SBU manager,
2. The degree to which an SBU shares functional programs and facilities with other SBUs, and
3. The manner in which the corporation responds to new changes in the market.
3.2 Assess Current and Future IT Environments
3.2.1 Current Status of IT
Hardware
Computer hardware consists of a central processing unit, primary storage, secondary storage,
input devices, output devices, and communications devices (see Figure 3-6). The central processing unit
manipulates raw data into a more useful form and controls the other parts of the computer system.
Primary storage temporarily stores data and program instructions during processing, whereas secondary
storage devices (magnetic and optical disks, magnetic tape) store data and programs when they are not
being used in processing. Input devices, such as a keyboard or mouse, convert data and instructions
into electronic form for input into the computer. Output devices, such as printers and video
display terminals, convert electronic data produced by the computer system and display them in a
form that people can understand. Communications devices provide connections between the computer
and communications networks. Buses are paths for transmitting data and signals between the different
parts of the computer system.

Fig 3-6 Hardware components of a computer system

A contemporary computer system can be categorized into six major components. The central processing
unit manipulates data and controls the other parts of the computer system; primary storage temporarily
stores data and program instructions during processing; secondary storage feeds data and instructions
into the central processor and stores data for future use; input devices convert data and instructions for
processing in the computer; output devices present data in a form that people can understand; and
communications devices control the passing of information to and from communications networks.
How Computers Represent Data
In the realm of computers, all forms of data, whether symbols, images, or text, must be translated into a series
of binary digits or bits. A bit, the fundamental unit of data, represents either a 0 or a 1. The presence of an
electronic or magnetic signal in a computer signifies a one, while its absence signifies zero. Digital computers
operate directly with these binary digits, either singly or in groups.
A sequence of eight bits is referred to as a byte, which the computer handles as a unit. Each byte can represent
a variety of data, including a decimal number, a symbol, a character, or part of an image. The decimal system
(base 10) can be converted into the binary system (base 2), where any number can be expressed as a power
of the number 2. This binary representation enables computers to process all data as groups of zeroes and
ones.
While pure binary is vital for numerical representation, computers also need to encode alphabetic characters
and various other symbols used in natural language, such as $ and &. This requirement led to the development
of standard binary codes, including EBCDIC and ASCII.
The Extended Binary Coded Decimal Interchange Code (EBCDIC) was developed by IBM in the 1950s,
encoding each number, alphabetic character, or special character with eight bits. The American Standard
Code for Information Interchange (ASCII), developed by the American National Standards Institute (ANSI),
aimed to provide a standard code usable across different machines to ensure compatibility. Originally a seven-
bit code, most computers today use the eight-bit version of ASCII. EBCDIC is commonly used in IBM and
other mainframe computers, while ASCII is prevalent in data transmission, PCs, and some larger computers.
Modern Unicode standards, like UTF-8, have also emerged to represent a wider array of international
languages.
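The idea can be seen directly in a few lines of Python, which expose the numeric code and binary form of a character, as well as the binary form of a decimal number. This is purely illustrative; Python is used here only as a convenient calculator.

    # Illustrative only: how text and numbers are reduced to bits.
    ch = "A"
    code = ord(ch)                 # ASCII/Unicode code point of 'A': 65
    bits = format(code, "08b")     # one byte, written as eight binary digits
    print(code, bits)              # 65 01000001

    # The same idea for a decimal number: 13 expressed in powers of 2.
    print(format(13, "b"))         # 1101  (8 + 4 + 0 + 1)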
Regarding images, computers store them by overlaying a grid or matrix on the image. Each cell, or pixel
(picture element), in this grid is assessed for its light or color properties. These details are then stored as data.
High-resolution computer displays often operate on standards well beyond the older 1024 x 768 grid,
with modern screens frequently supporting resolutions like 4K (3840 x 2160 pixels) or even 8K (7680 x
4320 pixels), amounting to millions of pixels. Through this process of data reduction and binary
representation, modern computers can seamlessly operate in complex environments, handling text, images,
and much more.
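A toy sketch of the grid-of-pixels idea follows: a tiny 2 x 2 grayscale image stored as one byte per pixel. Real images add colour channels and compression, but the principle of reducing a picture to numbered cells is the same.

    # A 2 x 2 grayscale image stored as one byte (0-255) per pixel.
    image = [
        [0,   255],   # a black pixel and a white pixel
        [128, 64],    # two shades of grey
    ]
    # Each value can be written out as a byte for storage or transmission.
    raw_bytes = bytes(value for row in image for value in row)
    print(list(raw_bytes))   # [0, 255, 128, 64]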
Time and Size in the Computer World
Early computers or less powerful hardware devices measured machine cycle times in milliseconds
(thousandths of a second). As technology advanced, measures evolved to microseconds (millionths of a
second) and nanoseconds (billionths of a second). Today's high-performance computers, especially
those with multiple processors, measure machine cycles in picoseconds (trillionths of a second), with
some capable of executing one billion instructions per second. Each processor can handle 100 MIPS

(Millions of Instructions Per Second), a commonly used benchmark to assess the speed of larger
computers.
In terms of storage capacity, bytes are the standard unit of measurement. Historically, small PCs
contained around 640 kilobytes of internal primary memory (each kilobyte equal to approximately 1024
bytes). Modern PCs can have primary memory storage upwards of 16 gigabytes, a stark contrast to past
capacities. Each gigabyte represents approximately one billion bytes. However, the true giants of storage
are data centers and cloud storage platforms, which can handle petabytes or even exabytes of data. A
petabyte is approximately one quadrillion bytes, while an exabyte equals approximately one quintillion
bytes.
Over the decades, there have been significant generational leaps in computer hardware technology, each
characterized by different processing components. Each generation has drastically improved processing
power and storage capabilities while reducing costs. For instance, the cost of performing 100,000
calculations plummeted from several dollars in the 1950s to less than $0.025 in the 1980s and
approximately $0.00004 in 1995. In today's terms, this cost is practically negligible. These hardware
advancements have been paralleled by generational shifts in software, leading to increasingly powerful,
affordable, and user-friendly computers.
First Generation: Vacuum Tube Technology, 1946-1956
The first generation of computers was defined by the use of vacuum tubes for data storage and processing.
These machines were large, power-hungry, short-lived, and produced significant heat. With a maximum
main memory of around 2 kilobytes and processing speeds of 10 kilo instructions per second, these
computers were primarily utilized for simple scientific and engineering calculations. Data was internally
stored on rotating magnetic drums, while punched cards served as external storage. All tasks, from running
programs to printing, required manual intervention.
Second Generation: Transistors, 1957-1963
Transistors replaced vacuum tubes in the second generation of computers, making them smaller, more
reliable, and less energy-consuming. The primary storage technology was magnetic core memory,
comprising small magnetic "doughnuts" that could represent data bits. These computers had up to 32
kilobytes of RAM memory and processing speeds between 200,000 and 300,000 instructions per second.
Their increased memory and processing capabilities enabled wider scientific use and business applications
like payroll automation and billing.
Third Generation: Integrated Circuits, 1964-1979
The third generation of computers introduced integrated circuits, also known as semiconductors. These
consisted of hundreds and later thousands of tiny transistors imprinted onto small silicon chips. Computer
memory expanded to 2 megabytes of RAM, and speeds reached 5 million instructions per second. This
generation also saw the advent of user-friendly software, enabling non-technical individuals to operate
computers, thus expanding their utility in business operations.
Fourth Generation: Very Large-Scale Integrated Circuits, 1980 to early 2010s
The fourth generation of computers, lasting until the early 2010s, was characterized by very large-scale
integrated circuits (VLSIC), consisting of hundreds of thousands to millions of circuits per chip. The
advent of the microprocessor allowed memory, logic, and control functions to be integrated on a single
chip, greatly reducing the size of computers. This generation saw computers evolve from room-sized
machines to desktop and laptop devices, making them widely accessible for business and personal use.
Memory capacities reached the gigabyte range, and processing speeds surpassed one billion instructions
per second.
Fifth Generation: Cloud Computing and Artificial Intelligence, early 2010s-Present
The ongoing fifth generation of computers has seen the rise of cloud computing and artificial intelligence.
The focus has shifted from hardware development to software and services, with data storage and
processing moving to the "cloud", and AI algorithms enabling machine learning and predictive analytics.
In addition, quantum computing, while still in its nascent stages, is poised to lead the next revolution in
computer technology. Using principles of quantum physics, quantum computers promise unprecedented
computational power, with potential applications spanning cryptography, material science, and artificial
intelligence.
The CPU and Primary Storage
The central processing unit (CPU) is the part of the computer system where the manipulation of symbols,
numbers, and letters occurs, and it controls the other parts of the computer system. The CPU consists of a
control unit and an arithmetic-logic unit (see Figure 3-7). Located near the CPU is primary storage
(sometimes called primary memory or main memory), where data and program instructions are stored
temporarily during processing. Three kinds of buses link the CPU, primary storage, and the other
devices in the computer system. The data bus moves data to and from primary storage. The address bus
transmits signals for locating a given address in primary storage. The control bus transmits signals
specifying whether to read or write data to or from a given primary storage address, input device, or
output device. The characteristics of the CPU and primary storage are very important in determining the
speed and capabilities of a computer.

Fig 3-7 The CPU and primary storage

The CPU contains Arithmetic-Logic unit and a control unit. Data and instructions are stored in unique
addresses in primary storage that the CPU can access during processing. The data bus, address bus and
control bus transmit signals between the central processing unit, primary storage, and other devices
in the computer system.
Primary Storage
Primary storage has three functions. It stores all or part of the program that is being executed. Primary
storage also stores the operating system programs that manage the operation of the computer. Finally,
the primary storage area holds data that are being used by the program. Data and programs are placed
in primary storage before processing, between processing steps, and after processing have ended, prior
to being returned to secondary storage or released as output. How is it possible for an electronic device
such as primary storage to actually store information? How is it possible to retrieve this information from
a known location in memory? Figure 3-8 illustrates primary storage in an electronic digital computer.
Internal primary storage is often
called RAM, or random access memory. It is called RAM because it can directly access any randomly
chosen location in the same amount of time.

Fig 3-8 Primary storage in the computer. Primary storage can be visualized as a matrix. Each byte
represents a mailbox with a unique address.
Figure 3-8 shows that primary memory is divided into storage locations called bytes. Each location
contains a set of eight binary switches or devices, each of which can store one bit of information. The
set of eight bits found in each storage location is sufficient to store one letter, one digit, or one special
symbol (such as $) using either EBCDIC or ASCII. Each byte has a unique address, similar to a mailbox,
indicating where it is located in RAM. The computer can remember where the data in all of the bytes
are located simply by keeping track of these addresses. Most of the information used by a
computer application is stored on secondary storage devices such as disks and tapes, located outside
of the primary storage area. In order for the computer to work on information, information must be

transferred into primary memory for processing. Therefore, data are continually being read into and
written out of the primary storage area during the execution of a program.
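The mailbox analogy can be mimicked with a short sketch in which a byte array stands in for RAM and the index stands in for the address. The addresses chosen here are arbitrary; real memory management is, of course, handled by the hardware and the operating system.

    # A rough sketch of the "mailbox" view of primary storage.
    ram = bytearray(16)          # sixteen byte-sized storage locations

    ram[4] = ord("H")            # write the character 'H' at address 4
    ram[5] = ord("i")            # write 'i' at address 5

    print(ram[4], chr(ram[4]))   # 72 H  -- read the byte back by its address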
Types of Semiconductor Memory
Primary storage typically consists of semiconductor memory. These are integrated circuits created by
imprinting millions of tiny transistors onto a small silicon chip. There are numerous types of
semiconductor memory employed for primary storage.
Random Access Memory (RAM) is used for temporary storage of data or program instructions and is
often described as the computer's "working memory". It's volatile, meaning its contents are lost when
the power supply is interrupted or when the computer is shut down.
Read-Only Memory (ROM) can only be read from and not written to. ROM chips are preloaded with
programs by the manufacturer, often containing essential or frequently used programs, such as certain
computing routines.
ROM has several subclasses, including Programmable ROM (PROM) and Erasable Programmable
ROM (EPROM). PROM chips can be programmed once, allowing manufacturers to avoid the expense
of custom chip production by programming a PROM chip with a specific program for their product.
EPROM chips, on the other hand, can be erased and reprogrammed, making them ideal for applications
like robotics where programs might need regular updates.
In addition to these memory types, modern computers often use variants of RAM, like Dynamic RAM
(DRAM) and Static RAM (SRAM), each with different characteristics regarding speed, capacity, and
volatility. There are also advancements like Flash memory, a type of non-volatile memory that can be
electrically erased and reprogrammed, commonly used in USB drives and solid-state drives (SSDs).
Moreover, new memory technologies such as Resistive RAM (ReRAM) and Magnetoresistive RAM
(MRAM) are being developed, promising faster speeds and higher durability than traditional memory
types.

The Arithmetic-Logic Unit and Control Unit


The arithmetic-logic unit (ALU) performs the principal logical and arithmetic operations of the computer.
It adds, subtracts, multiplies, and divides, and it can determine whether a number is positive, negative, or zero. In
addition to performing arithmetic functions, an ALU must be able to determine when one quantity is
greater than or less than another and when two quantities are equal. The ALU can perform logic
operations on the binary codes for letters as well as numbers. The control unit coordinates and controls
the other parts of the computer system. It reads a stored program, one instruction at a time, and directs
other components of the computer system to perform the tasks required by the program. The series of
operations required to process a single machine instruction is called the machine cycle. As illustrated in
Figure 3-9, the machine cycle has two parts: an instruction cycle and an execution cycle.


Fig 3-9 The various steps in the machine cycle. The machine cycle has two main stages of
operation; the instruction cycle (I-cycle) and the execution cycle (E-cycle). There are several steps
within each cycle required to process a single machine instruction in the CPU.
During the instruction cycle, the control unit retrieves one program instruction from primary storage
and decodes it. It places the part of the instruction telling the ALU what to do next in a special instruction
register and places the part specifying the address of the data to be used in the operation into an address
register. (A register is a special temporary storage location in the ALU or control unit that acts like a
high-speed staging area for program instructions or data being transferred from primary storage to the
CPU for processing.)
During the execution cycle, the control unit locates the required data in primary storage, places it in a
storage register, instructs the ALU to perform the desired operation, temporarily stores the result of the
operation in an accumulator, and finally places the result in primary memory. As each instruction is
completed, the control unit advances to and reads the next instruction of the program.
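A toy fetch-decode-execute loop makes the machine cycle concrete. The miniature instruction set (LOAD, ADD, STORE, HALT) and the memory layout below are invented for teaching purposes and do not correspond to any real CPU.

    # Toy simulation of the machine cycle: fetch and decode an instruction,
    # then execute it against a very small "memory".
    memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12), 3: ("HALT", None),
              10: 7, 11: 5, 12: 0}
    accumulator = 0
    program_counter = 0

    while True:
        opcode, address = memory[program_counter]   # instruction cycle: fetch and decode
        program_counter += 1
        if opcode == "LOAD":                        # execution cycle: act on the data
            accumulator = memory[address]
        elif opcode == "ADD":
            accumulator += memory[address]
        elif opcode == "STORE":
            memory[address] = accumulator
        elif opcode == "HALT":
            break

    print(memory[12])   # 12  (7 + 5)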
Computers and Computer Processing
Computers represent and process data the same way, but there are different classifications. We can
use size and processing speed to categorize contemporary computers as mainframes, minicomputers,
PCs, workstations, and supercomputers.

Categories of Computers
A mainframe, the largest in the realm of computers, is a high-performance machine with enormous
memory and superior processing power. They are primarily used for large-scale business, scientific, or
military applications where handling massive amounts of data or complex processes is essential.
Minicomputers, which are mid-range machines about the size of an office desk, are often used in
academic institutions, factories, or research laboratories.
A Personal Computer (PC), also known as a microcomputer, is designed for individual use. Desktop
PCs have a stationary setup, while laptops offer portability. PCs are commonly used for personal
purposes as well as in business environments.
Workstations, similar in size to a PC, offer more powerful computational and graphics-processing
capabilities. They are typically used for scientific, engineering, and design work that requires robust
graphics or computational capabilities.
Supercomputers, the pinnacle of computational power, are used for tasks requiring extremely rapid and
complex calculations with vast numbers of variable factors. They were traditionally used in scientific
and military applications, but their use has expanded to include business contexts as well.
However, this traditional classification scheme has become less distinct due to rapid advancements in
technology. Powerful PCs now possess graphics and processing capabilities akin to workstations. While
they still can't match the multitasking or simultaneous user handling capabilities of mainframes,
minicomputers, or workstations, the gap continues to narrow. Some of today's high-end workstations
even rival the capabilities of earlier mainframes and supercomputers.
Additionally, any of these computer categories can be designed to support a network, allowing users to
share files, software, peripherals, and other network resources. Server computers are specifically
optimized for network use, boasting large memory, disk storage capacity, high-speed communications
capabilities, and robust CPUs. These servers, including powerful workstations, are being increasingly
customized as web servers to maintain and manage websites.
Further, the emergence of edge computing devices, IoT devices, and quantum computers has broadened
the landscape of computer categories, each with its own specialized applications and performance
characteristics.
Supercomputers and Parallel Processing
Supercomputers are the epitome of high-performance computing, used primarily for tasks requiring
extremely rapid and complex computations involving a multitude of variables. Traditionally,
supercomputers have been instrumental in areas such as classified weapons research, weather
forecasting, petroleum and engineering applications, which require complex mathematical models and
simulations.
The usage of supercomputers is not confined to scientific or military applications. Businesses have also
started harnessing the power of these computational beasts. An example of this would be Trimark
Investment Management, a mutual funds company based in Toronto, Canada. As the company's
customer base expanded from 250,000 to 800,000, it switched to a Pyramid Technology supercomputer
with six processors to handle the increased data processing needs.

Supercomputers possess the unique ability to perform hundreds of billions, and now with the latest
technology, even quadrillions of calculations per second. This level of performance is exponentially
faster than the largest mainframes. This capability is derived from a computational approach known as
parallel processing.
In parallel processing, a problem is broken down into smaller parts, which are assigned to multiple
processing units (CPUs) and solved simultaneously. Some supercomputers can leverage thousands or even
millions of processors for this purpose. To make this parallel processing possible and efficient, it
requires both a rethinking of the problem and special software. This software is responsible for dividing
the problem among different processors efficiently, providing the necessary data, and reassembling the
various subtasks to form the final solution.
The landscape of supercomputing continues to evolve, with new generations of supercomputers offering
unprecedented levels of performance, enabling them to tackle complex problems in business, science,
and engineering that were previously unfeasible.
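The division-and-reassembly idea can be sketched with Python's multiprocessing module: a computation is split into chunks, the chunks are evaluated on several processors at once, and the partial results are combined. This is only a conceptual illustration of parallel processing, not supercomputer-class code.

    # Minimal parallel-processing sketch: divide the work, solve the parts
    # simultaneously, then reassemble the final answer.
    from multiprocessing import Pool

    def subtotal(chunk):
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunks = [data[i::4] for i in range(4)]       # divide the problem into 4 parts
        with Pool(processes=4) as pool:
            partials = pool.map(subtotal, chunks)     # solve the parts in parallel
        print(sum(partials))                          # reassemble the result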

Fig 3-10 Sequential and parallel processing. During sequential processing, each task is assigned to
one CPU that processes one instruction at a time. In parallel processing, multiple tasks are assigned
to multiple processing units to expedite the result.
Some supercomputers can perform more than a trillion mathematical calculations each second, a rate known
as a teraflop. The term teraflop comes from the Greek teras, which for mathematicians means one trillion,
and flop, an acronym for floating-point operations per second. (A floating-point operation is a basic
computer arithmetic operation, such as addition, on numbers that include a decimal point.) Leading
supercomputers today have moved far beyond this benchmark, operating at petaflop and even exaflop scales.
Microprocessor and processing Power
Computers' processing power depends in part on the speed and performance of their microprocessors.
You will often see chips labeled as 8-bit, 16-bit, or 32-bit devices. These labels refer to the word length,
or the number of bits that can be processed at one time by the machine. An 8-bit chip can process 8
bits, or 1 byte, of information in a single machine cycle. A 32-bit chip can process 32 bits or 4 bytes
in a single cycle. The larger the word length, the greater the speed of the computer.
A second factor affecting chip speed is cycle speed. Every event in a computer must be
sequenced so that one step logically follows another. The control unit sets a beat to the chip. This beat
is established by an internal clock and is measured in megahertz (abbreviated MHz, which stands for
millions of cycles per second). The Intel 8088 chip, for instance, originally had a clock speed of 4.77
megahertz, whereas the Intel Pentium II chip has a clock speed that ranges from 233 to 450 megahertz.
A third factor affecting speed is the data bus width. The data bus acts as a highway between the CPU,
primary storage, and other devices, determining how much data can be moved at one time. The 8088
chip used in the original IBM personal computer, for example, had a 16-bit word length but only
an 8-bit data bus width. This meant that data were processed within the CPU chip itself in 16-bit chunks
but could only be moved 8 bits at a time between the CPU, primary storage, and external devices.
On the other hand, the Alpha chip has both a 64-bit word length and a 64-bit data bus width. To have
a computer execute more instructions per second and work through programs or handle users
expeditiously, it is necessary to increase the word length of the processor, the data bus width, or the
cycle speed-or all three.
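As a rough, back-of-the-envelope illustration (not a benchmark), the peak rate at which data can move over a bus is approximately the bus width multiplied by the clock speed:

    # Peak bus transfer rate, ignoring cycles per transfer and other real-world factors.
    def peak_transfer_rate(bus_width_bits, clock_mhz):
        bytes_per_cycle = bus_width_bits / 8
        cycles_per_second = clock_mhz * 1_000_000
        return bytes_per_cycle * cycles_per_second    # bytes per second

    print(peak_transfer_rate(8, 4.77))    # original IBM PC-class bus: roughly 4.8 MB/s
    print(peak_transfer_rate(64, 400))    # wider, faster bus: roughly 3.2 GB/s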
Microprocessors can be made faster by using reduced instruction set computing (RISC) in their design.
Some instructions that a computer uses to process data are actually embedded in the chip circuitry.
Conventional chips, based on complex instruction set computing, have several hundred or more
instructions hard-wired into their circuitry, and they may take several clock cycles to execute a single
instruction. In many instances, only 20 percent of these instructions are needed for 80 percent of the
computer's tasks. If the little-used instructions are eliminated, the remaining instructions can execute
much faster.
Reduced instruction set (RISC) computers have only the most frequently used instructions
embedded in them. A RISC CPU can execute most instructions in a single machine cycle and sometimes
multiple instructions at the same time. RISC is most appropriate for scientific and workstation
computing, where there are repetitive arithmetic and logical operations on data or applications calling
for three-dimensional image rendering.
On the other hand, software written for conventional processors cannot be automatically transferred to
RISC machines; new software is required. Many RISC suppliers are adding more instructions to appeal
to a greater number of customers, and designers of conventional microprocessors are streamlining their
chips to execute instructions more rapidly.
Microprocessors optimized for multimedia and graphics have been developed to improve processing of
visually intensive applications. Intel's MMX (Multimedia extension) microprocessor is a Pentium
chip that has been modified to increase performance in many applications featuring graphics and sound.
Multimedia applications such as games and video will be able to run more smoothly, with more colors,
and be able to perform more tasks simultaneously. For example, multiple channels of audio, high quality
video or animation, and Internet communication could all be running in the same application.
Computer Network and Client/Server Computing
In the modern digital era, stand-alone computers have largely been supplanted by networked systems for
most processing tasks. This practice of leveraging multiple computers connected by a communications
network for processing is referred to as distributed processing. This approach contrasts with centralized
processing, where a single, large central computer performs all processing tasks. Distributed processing,
on the other hand, divides the processing workload among various devices such as PCs, minicomputers,
and mainframes, all interconnected.
A prominent example of distributed processing is the client/server model of computing. This model
divides processing between "clients" and "servers," each performing tasks they're best suited for. Both
entities are part of the network.
The client, typically a desktop computer, workstation, or laptop, serves as the user's point of entry for a
specific function. Users generally interact directly with the client portion of an application, which could
involve data input or retrieval for further analysis.
The server, on the other hand, provides services to the client. It can range from a supercomputer or
mainframe to another desktop computer. Servers store and process shared data and perform backend
functions, often unseen by users, such as managing network activities.
This client/server model is foundational to internet computing, forming the backbone of web
applications, cloud services, and more. With the rise of cloud computing and edge computing, these
concepts have evolved further, enabling even more efficient and scalable distributed processing models
for today's increasingly interconnected world.
Figure 3-11 illustrates five different ways that the components of an application could be partitioned
between the client and the server. The interface component is essentially the application interface, that is,
how the application appears visually to the user. The application logic component consists of the processing
logic, which is shaped by the organization's business rules. (An example might be that a salaried employee is
only to be paid monthly.) The data management component consists of the storage and management of
the data used by the application.

Fig 3-11 Types of client/server computing.

There are various ways in which an application's interface, logic, and data management components can
be divided among the clients and servers in a network.
The exact division of tasks depends on the requirements of each application, including its processing
needs, the number of users, and the available resources. For example, client tasks for a large corporate
payroll might include inputting data (such as enrolling new employees and recording hours worked),
submitting data queries to the server, analyzing the retrieved data, and displaying results on the screen
or on a printer. The server portion will fetch the entered data, and process the payroll. It also will
control access so that only authorized users can view or update the data.
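A minimal sketch of this division of labour for the payroll example is given below. The function and field names are invented, and a real system would communicate over a network rather than through local function calls.

    # Client/server division of labour, sketched with local functions.
    employee_db = {"E042": {"hourly_rate": 20.0}}      # data management (server side)

    def server_process_payroll(employee_id, hours):    # application logic (server side)
        rate = employee_db[employee_id]["hourly_rate"]
        return round(rate * hours, 2)

    def client_submit_timesheet():                     # interface (client side)
        employee_id, hours = "E042", 37.5              # stands in for user input
        pay = server_process_payroll(employee_id, hours)
        print(f"Net pay for {employee_id}: {pay}")     # display the result to the user

    client_submit_timesheet()                          # Net pay for E042: 750.0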
In some firms client/server networks with PCs have actually replaced mainframes and minicomputers.
The process of transferring applications from large computers to smaller ones is called downsizing.
Downsizing has many advantages. Memory and processing power on a PC cost a fraction of their
equivalent on a mainframe. The decision to downsize involves many factors in addition to the cost
of computer hardware, including the need for new software, training, and perhaps new organizational
procedures.
Secondary Storage
In addition to primary storage, which holds information and programs for immediate processing,
contemporary computer systems rely on other types of storage to perform their tasks. Information systems
often require storage solutions that can retain information in a nonvolatile state (not requiring electrical
power) and accommodate large volumes of data too vast for today's largest computers, such as extensive
databases or comprehensive census data. This type of long-term, non-volatile storage of data external to the
CPU and primary storage is known as secondary storage.
Primary storage utilizes the fastest and most expensive technology, with access to information being
electronic and nearly instantaneous. On the other hand, secondary storage retains data even when the
computer is turned off, offering non-volatile storage. Common types of secondary storage include magnetic
disks, optical disks, and magnetic tape. These media can rapidly transfer large volumes of data to the CPU.
However, unlike primary storage, secondary storage requires mechanical movement to access data, making
it comparatively slower.
In recent years, solid-state drives (SSDs) have become an increasingly popular form of secondary storage.
These drives use flash memory and have no moving parts, making them faster and more durable than
traditional hard drives. Furthermore, advancements in cloud storage solutions provide an off-site form of
secondary storage, allowing users to store and access large amounts of data over the internet. Network
Attached Storage (NAS) devices are also a popular choice for businesses and home users that need to store
large amounts of data in a networked environment.
Magnetic Disk
The most widely used secondary-storage medium today is magnetic disk. There are two kinds of magnetic
disks: floppy disks (used in PCs) and hard disks (used on commercial disk drives and PCs). Hard disks
are thin steel platters with an iron oxide coating. In larger systems, multiple hard disks are mounted
together on a vertical shaft. Figure 3-12 illustrates a commercial hard disk pack for a large system.
It has 11 disks, each with two surfaces, top and bottom. However, although there are 11 disks, no
information is recorded on the top or bottom surfaces; thus, there are only 20 recording surfaces on the
disk pack. On each surface, data are stored on tracks.

Fig 3-12 Disk pack storage. Large systems often rely on disk packs, which provide reliable
storage for large amounts of data with quick access and retrieval. A typical removable disk- pack
system contains 11 two- sided disks.
Read/write heads move horizontally over the spinning disks to any of 200 positions, called cylinders.
At any one of these cylinders, the read/write heads can read or write information to any of 20
different concentric circles on the disk surface areas called tracks. (Each track contains several records.)
The cylinder represents the circular tracks on the same vertical line within the disk pack. Read/write
heads are directed to a specific record using an address consisting of the cylinder number, the recording
surface number, and the data record number.
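The addressing scheme can be illustrated with a small sketch that maps a running record number onto a (cylinder, surface, record) address; the capacity figures used here are invented for the example.

    # Illustrative mapping of a record number onto a disk-pack address.
    CYLINDERS, SURFACES, RECORDS_PER_TRACK = 200, 20, 8

    def locate(record_number):
        # Split the running record number into cylinder, surface and slot on the track.
        cylinder, rest = divmod(record_number, SURFACES * RECORDS_PER_TRACK)
        surface, slot = divmod(rest, RECORDS_PER_TRACK)
        return cylinder, surface, slot

    print(locate(0))      # (0, 0, 0)
    print(locate(173))    # (1, 1, 5)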
The entire disk pack is housed in a disk drive or disk unit. Large mainframe or minicomputer systems
have multiple disk drives because they require immense disk storage capacity.
Disk drive performance can be further enhanced by using a disk technology called RAID
(Redundant Array of Inexpensive Disks). RAID devices package more than a hundred 5.25-inch disk
drives, a controller chip, and specialized software into a single large unit. Traditional disk drives deliver
data from the disk drive along a single path, but RAID delivers data over multiple paths simultaneously,
accelerating disk access time. Small RAID systems provide 10 to 20 gigabytes of storage capacity,
whereas larger systems provide more than 10 terabytes. RAID is potentially more reliable than standard
disk drives because other drives are available to deliver data if one drive fails.
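Conceptually, RAID combines two simple ideas, striping and mirroring, sketched below in a deliberately simplified form; real RAID controllers implement these (plus parity schemes) in hardware.

    # Striping spreads data across several drives so it can be read in parallel;
    # mirroring keeps a duplicate copy so a failed drive does not lose data.
    def stripe(data, drives):
        return [data[i::drives] for i in range(drives)]   # one slice per drive

    def mirror(data):
        return data, data                                 # two identical copies

    print(stripe(b"ABCDEFGH", 4))   # [b'AE', b'BF', b'CG', b'DH']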
PCs usually contain hard disks, which can store more than five hundred (500) Gigabytes. (500
Gigabytes is currently the most common size.) PCs also use floppy disks, which are flat, 3.5-inch disks
of polyester film with a magnetic coating (5.25-inch floppy disks are becoming obsolete). These disks

have a storage capacity ranging from 360K to 2.8 megabytes and a much slower access rate than hard
disks. Floppy disks and cartridges and packs of multiple disks use a sector method of storing data. As
illustrated in Figure 3-13, the disk surface is divided into pie-shaped pieces. Each sector is assigned a
unique number. Data can be located using an address consisting of the sector number and an individual
data record number.

Fig 3-13 The sector method of storing data. Each track of a disk can be divided into sectors. Disk
storage location can be identified by sector and data record number.
Magnetic disks on both large and small computers permit direct access to individual records. Each
record can be given a precise physical address in terms of cylinders and tracks or sectors, and the
read/write head can be directed to go directly to that address and access the information. This means that
the computer system does not have to search the entire file, as in a sequential tape file, in order to find
the record. Disk storage is often referred to as a direct access storage device (DASD).
For on-line systems requiring direct access, disk technology provides the only practical means of storage
today. DASD is, however, more expensive than magnetic tape. Updating information stored on a disk
destroys the old information because the old data on the disk are written over if changes are made. The
disk drives themselves are susceptible to environmental disturbances. Even smoke particles can disrupt
the movement of read/write heads over the disk surface, which is why disk drives are sealed from the
environment.
Optical Disks
Optical disks, including compact disks and laser optical disks, can store data at far greater densities than
magnetic disks. They're compatible with both PCs and larger computer systems. Data is recorded on
optical disks when a laser device etches microscopic pits in the reflective layer of a spiral track. These

pits and the spaces between them encode binary information. Optical disks can store substantial
quantities of data, including text, images, sound, and full-motion video in a highly compact format. A
low-power laser beam from an optical head reads the disk.
One of the most prevalent optical disk systems for PCs is CD-ROM (Compact Disk Read-Only
Memory). A standard 4.75-inch CD-ROM can store up to 700 megabytes, a significant upgrade from a
high-density floppy disk. CD-ROMs are particularly suitable for applications that need massive
quantities of static data stored compactly for easy retrieval, including graphics and sound. They are also
less vulnerable to magnetism, dirt, and rough handling compared to floppy disks.
CD-ROMs offer read-only storage, meaning new data can't be written to them, but existing data can be
read. They are typically used for reference materials with considerable data amounts, such as
encyclopedias and directories, and multimedia applications that combine text, sound, and images.
WORM (Write Once/Read Many) and CD-R (Compact Disk-Recordable) optical disk systems allow
users to write data to a disk once. After writing, the data cannot be erased but can be read indefinitely.
CD-RW (CD-Rewritable) technology enables users to create rewritable optical disks. While they're not
competitive with magnetic disk storage in terms of access speed and cost, rewritable optical disks are
useful for applications that require large storage volumes with occasional updates.
While CD-ROMs have been popular, the advent of Digital Versatile Disks (DVDs) offers even higher
storage capacities. DVDs can store a minimum of 4.7 gigabytes of data, sufficient for a full-length, high-
quality motion picture. Initially used for storing movies and multimedia applications with large amounts
of video and graphics, DVDs have increasingly replaced CD-ROMs for storing vast amounts of digitized
text, graphics, audio, and video data.
More recently, the advent of Blu-ray discs, offering capacities of 25 GB for single-layer discs and 50
GB for dual-layer discs, has further enhanced the potential of optical storage media. Cloud storage and
streaming services have also influenced the way we store and access media, reducing the dependence
on physical storage media for everyday use.
Magnetic Tape
Magnetic tape, a seasoned storage technology, continues to find application in secondary storage for
sizeable volumes of information. Its usage is primarily seen in legacy mainframe batch applications and
archiving data. Traditional reel-to-reel magnetic tape systems, using 14-inch reels up to 2400 feet long
and 0.5 inches wide, can store data at various densities.
Modern mass storage systems using tape cartridge libraries with higher density and storage capacity—
reaching up to 35 terabytes in some cases—are phasing out reel-to-reel tapes in mainframe and
minicomputer systems. Small tape cartridges, similar to home audio cassettes, are also used for
information storage in PCs and some minicomputers.
The main advantages of magnetic tape include its cost-effectiveness, stability, and capability to store vast
volumes of information. It can also be reused multiple times. However, it has its disadvantages: magnetic
tape stores data sequentially and is relatively slow compared to other secondary storage media. To access
a specific record, the tape must be read from the beginning to the desired record's location. Consequently,
it's not the most effective medium for tasks requiring rapid information retrieval, such as airline
reservation systems. Other drawbacks include the aging of tape over time and the labor-intensive process
of mounting and dismounting tape reels.
Though magnetic tape might seem like a fading technology, it still finds a place in specific use cases and
continues to evolve. For instance, advancements in tape technology, like Linear Tape-Open (LTO)
systems, now offer storage capacities of up to 12 TB for uncompressed data and 30 TB for compressed
data. These developments ensure that magnetic tape remains relevant in areas like long-term data
archiving and backup systems.
Computer Peripherals: Input, Output, and Storage Technologies
Peripherals
The term "peripherals" is used to refer to all input, output, and secondary storage devices that are integral
to a computer system but not part of the Central Processing Unit (CPU). Peripherals interact with the CPU,
either through direct connections or via telecommunications links, allowing data to be inputted, outputted,
or stored as needed.
Therefore, all peripherals are considered online devices. They are distinct from the CPU but can be
electronically connected to it and operated under its control. This contrasts with offline devices, which
operate independently and aren't directly controlled by the CPU.
Common examples of peripherals include keyboards and mice (input devices), monitors and printers
(output devices), and external hard drives or USB flash drives (secondary storage devices). Other
peripherals like scanners, webcams, or game controllers further extend the functionality of a computer. In
modern computing, peripherals also encompass devices like digital cameras, smartphones, or tablets that
can connect to a computer system for data transfer or synchronization.
Input Technologies
Input technologies have considerably evolved, providing a more intuitive user interface for computer
users. Data and commands can now be directly and easily entered into a computer system through various
means. Pointing devices like electronic mice and touch pads have become standard, but we're seeing a
growth in more interactive and natural technologies.
Optical scanning, handwriting recognition, and voice recognition technologies have revolutionized data
entry, making the process more seamless and efficient. These advancements have eliminated the need to
record data on paper source documents, such as sales order forms, and then keyboard the data into a
computer in a separate data-entry step.
Moreover, recent innovations, such as gesture recognition, facial recognition, and virtual or augmented
reality interfaces, further contribute to a more natural user interface. Devices like smart speakers and
virtual assistants have popularized voice recognition technology, enabling users to control various
functions and perform internet searches using voice commands.
As technology advances, we can anticipate the development of even more intuitive and natural user
interfaces, incorporating technologies like artificial intelligence and machine learning to enhance user-
computer interaction further.

Pointing Devices
Keyboards remain the most commonly used devices for data and text entry into computer systems.
However, pointing devices offer a more intuitive method for issuing commands, making selections, and
responding to prompts on your display screen. These devices work hand-in-hand with your system's
graphical user interface (GUI), which presents a visually intuitive environment with icons, menus,
windows, buttons, and bars.
Electronic mice are the most prevalent pointing devices. They allow easy selection from menus and icons
using point-and-click or point-and-drag techniques. As you move the mouse on a surface, it controls the
on-screen cursor. Clicking or double-clicking the mouse buttons can initiate various actions represented
by the selected icon or menu item.
Alternative pointing devices include trackballs, pointing sticks, touch pads, and touch screens. Trackballs
are stationary devices with a partially exposed rolling ball used to control the cursor. Pointing sticks, also
called track points, resemble pencil eraser heads and are typically located on the keyboard. They move the
cursor based on the direction and intensity of applied pressure. Touch pads, common on laptops, are small
rectangular surfaces that respond to finger movement, translating it into cursor motion.
Touch screens allow direct interaction with the display. Depending on the technology used, these screens
may react to pressure, disruption in an infrared grid, sound waves, or changes in an electrical field. This
allows users to make selections, issue commands, or input data directly on the screen.
Additionally, we're seeing an increase in advanced pointing technologies, such as stylus pens for precision
input, particularly on tablets or drawing pads. Gesture recognition, part of augmented reality (AR) and
virtual reality (VR) systems, allows users to interact with their devices using physical movements.
Biometric input methods, such as fingerprint or facial recognition, also serve as interaction methods in
modern devices.
Pen-Based Computing
Pen-based computing technologies continue to hold relevance in today's digital landscape, particularly in
handheld devices, personal digital assistants (PDAs), and tablet PCs. Despite the prevalence of touch-
screen technologies, many users still prefer using a stylus pen for tasks requiring precision or a more
natural handwriting experience.
Modern tablet PCs and certain PDAs incorporate advanced processors and software that can accurately
recognize, digitize, and interpret handwriting, hand-printing, and hand-drawn sketches. These devices
typically feature a pressure-sensitive layer beneath their Liquid Crystal Display (LCD) screens,
functioning similarly to touch screens. This setup allows users to use a pen to directly input data, make
selections, send emails, and more, essentially transforming the device into a digital notepad.
Various pen-like devices, such as digitizer pens and graphics tablets, offer enhanced capabilities. A
digitizer pen can serve as a pointing device, or it can be used to write or draw on the pressure-sensitive
surface of a graphics tablet. The computer digitizes the handwriting or drawing, which can then be
displayed on the screen and integrated into various applications.

In the context of modern advancements, we see the evolution of stylus pens, like the Apple Pencil or
Samsung S Pen, which provide even more precise control and have pressure sensitivity. They allow for
intricate artwork creation, note-taking, and document markup. Some also include features like tilt
recognition and side buttons for shortcuts, further enhancing the pen-based computing experience. With
the rise of convertible laptops and 2-in-1 devices, pen-based computing continues to be a key aspect of
user-computer interaction.
Speech Recognition Systems
Speech recognition may be the future of data entry and certainly promises to be the easiest method
for word processing, application navigation, and conversational computing because speech is the easiest,
most natural means of human communication. Speech input has now become technologically and
economically feasible for a variety of applications. Early speech recognition products used discrete speech
recognition, where you had to pause between each spoken word. New continuous speech recognition
(CSR) software recognizes continuous, conversationally paced speech.
Speech recognition systems digitize, analyze, and classify your speech and its sound patterns. The
software compares your speech patterns to a database of sound patterns in its vocabulary and passes
recognized words to your application software. Typically, speech recognition systems require training the
computer to recognize your voice and its unique sound patterns to achieve a high degree of accuracy.
Training such systems involves repeating a variety of words and phrases in a training session, as
well as using the system extensively.
Continuous speech recognition software products like Dragon NaturallySpeaking and IBM ViaVoice
have up to 300,000-word vocabularies. Training to 95 percent accuracy may take several hours. Longer
use, faster processors, and more memory make 99 percent accuracy possible. In addition, Microsoft
Office Suite has built-in speech recognition for dictation and voice commands of a variety of
software processes.
Speech recognition devices in work situations allow operators to perform data entry without using
their hands to key in data or instructions and to provide faster and more accurate input. For example,
manufacturers use speech recognition systems for the inspection, inventory, and quality control of a
variety of products; airlines and parcel delivery companies use them for voice-directed sorting of luggage
and parcels. Speech recognition can also help you operate your computer's operating systems and software
packages through voice input of data and commands. For example, such software can be voice-enabled
so you can send e-mail and surf the World Wide Web.
Speaker independent voice recognition systems, which allow a computer to understand a few words from
a voice it has never heard before, are being built into products and used in a growing number of
applications. Examples include voice-messaging computers, which use speech recognition and voice
response software to guide an end user verbally through the steps of a task in many kinds of activities.
Typically, they enable computers to respond to verbal and Touch-Tone input over the telephone.
Examples of applications include computerized telephone call switching, telemarketing surveys, bank
pay-by-phone bill-paying services, stock quotation services, university registration systems, and
customer credit and account balance inquiries.

One of the newest examples of this technology is Ford SYNC. SYNC is a factory-installed, in-car
communications and entertainment system jointly developed by Ford Motor Company and Microsoft.
The system was offered on 12 different Ford, Lincoln, and Mercury vehicles in North America for the
2008 model year.
Ford SYNC allows a driver to bring almost any mobile phone or digital media player into a vehicle and
operate it using voice commands, the vehicle's steering wheel, or manual radio controls. The system can
even receive text messages and read them aloud using a digitized female voice named "Samantha."
SYNC can interpret a hundred or so shorthand messages, such as LOL for "laughing out loud," and it will
read swear words; it won't, however, decipher obscene acronyms. Speech recognition is now common in
your car, home, and workplace.
Speech recognition technology, already making significant strides, is advancing further fueled by
breakthroughs in artificial intelligence and machine learning. These developments are leading to
enhanced accuracy, superior understanding of context, and improved handling of diverse accents,
background noise, and multiple simultaneous voices. Speech recognition technology that uses voice
assistance has become ingrained in daily life. Today, platforms like Amazon's Alexa, Google's Assistant,
and Apple's Siri are present on a variety of devices, including smartphones, PCs, smart home systems,
and automobiles, such as the Ford SYNC system. These digital assistants can do a number of activities
with the aid of voice commands, including creating reminders and controlling smart home products. We
may anticipate voice recognition being even more intricately knit into the fabric of our digital interactions
as technology advances.
Optical Scanning
Optical scanning devices read text or graphics and convert them into digital input for your computer.
Thus, optical scanning enables the direct entry of data from source documents into a computer system.
For example, you can use a compact desktop scanner to scan pages of text and graphics into your computer
for desktop publishing and web publishing applications. You can scan documents of all kinds into
your system and organize them into folders as part of a document management library system for
easy reference or retrieval.
There are many types of optical scanners, but all employ photoelectric devices to scan the characters
being read. Reflected light patterns of the data are converted into electronic impulses that are then
accepted as input to the computer system. Compact desktop scanners have become very popular due to
their low cost and ease of use with personal computer systems. However, larger, more expensive flatbed
scanners are faster and provide higher-resolution color scanning. Another optical scanning technology is
called optical character recognition (OCR). The OCR scanners can read the characters and codes on
merchandise tags, product labels, credit card receipts, utility bills, insurance premiums, airline tickets, and
other documents. In addition, OCR scanners are used to automatically sort mail, score tests, and process
a wide variety of forms in business and government.
Devices such as handheld optical scanning wands are frequently used to read barcodes, codes that use bars
to represent characters. One common example is the Universal Product Code (UPC) bar coding that
you see on just about every product sold. For example, the automated checkout scanners
found in supermarkets read UPC bar coding. Supermarket scanners emit laser beams that are reflected
off a code. The reflected image is converted to electronic impulses that are sent to the in-store computer,
where they are matched with pricing information. Pricing information is returned to the terminal, visually
displayed, and printed on a receipt for the customer.
Other Input Technologies
Magnetic stripe technology is a familiar form of data entry that helps computers read credit cards.
The coating of the magnetic stripe on the back of such cards can hold about 200 bytes of information.
Customer account numbers can be recorded on the magnetic stripe so it can be read by bank ATMs, credit
card authorization terminals, and many other types of magnetic stripe readers.
Smart cards that embed a microprocessor chip and several kilobytes of memory into debit, credit and
other cards are popular in Europe and becoming available in the United States. One example is in the
Netherlands, where Dutch banks have issued millions of smart debit cards. Smart debit cards enable you
to store a cash balance on the card and electronically transfer some of it to others to pay for small
items and services. The balance on the card can be replenished in ATMs or other terminals. The smart
debit cards used in the Netherlands feature a microprocessor and either 8 or 16 kilobytes of memory, plus
the usual magnetic stripe. The smart cards are widely used to make payments in parking meters, vending
machines, newsstands, pay telephones, and retail stores.
Digital cameras represent another fast-growing set of input technologies. Digital still cameras and
digital video cameras (digital camcorders) enable you to shoot, store, and download still photos or full
motion video with audio into your PC. Then you can use image-editing software to edit and enhance the
digitized images and include them in newsletters, reports, multimedia presentations, and Web pages.
Today's typical mobile phone includes digital camera capabilities as well.
The computer systems of the banking industry can magnetically read checks and deposit slips using
magnetic ink character recognition (MICR) technology. Computers can thus sort and post checks to the
proper checking accounts. Such processing is possible because the identification numbers of the bank and
the customer's account are preprinted on the bottom of the checks with an iron oxide-based ink.
The first bank receiving a check after it has been written must encode the amount of the check in magnetic
ink on the check's lower-right corner. The MICR system uses 14 characters (the 10 decimal digits and 4
special symbols) of a standardized design. Reader-sorter equipment reads a check by first magnetizing
the magnetic ink characters and then sensing the signal induced by each character as it passes a reading
head. In this way, data are electronically captured by the bank's computer systems.
Output Technologies
Computers provide information in a variety of forms. Video displays and printed documents have
been, and still are, the most common forms of output from computer systems. Yet other natural and
attractive output technologies such as voice response systems and multimedia output are increasingly
found along with video displays in business applications.
For example, you have probably experienced the voice and audio output generated by speech and
audio microprocessors in a variety of consumer products. Voice messaging software enables PCs and
servers in voice mail and messaging systems to interact with you through voice responses. Of course,
multimedia output is common on the web sites of the Internet and corporate intranets.

Video Output
Video displays are the most common type of computer output. Many desktop computers still rely on video
monitors that use a cathode ray tube (CRT) technology similar to the picture tubes used in home television
sets. Usually, the clarity of the video display depends on the type of video monitor you use and the
graphics circuit board installed in your computer. These can provide a variety of graphics modes of
increasing capability. A high-resolution, flicker-free monitor is especially important if you spend a lot of
time viewing multimedia on CDs, or on the web, or the complex graphical displays of many software
packages.
The biggest use of liquid crystal displays (LCDs) has been to provide a visual display capability for
portable microcomputers and PDAs. However, the use of "flat panel" LCD video monitors for desktop
PC systems has become common as their cost becomes more affordable. These LCD displays need
significantly less electric current and provide a thin, flat display. Advances in technology such as
active matrix and dual scan capabilities have improved the color and clarity of LCD displays. In
addition, high-clarity flat-panel televisions and monitors using plasma display technologies are
becoming popular for large-screen (42- to 80-inch) viewing.
Printed Output
Printing information on paper is still the most common form of output after video displays. Thus, most
personal computer systems rely on an inkjet or laser printer to produce permanent (hard-copy) output in
high-quality printed form. Printed output is still a common form of business communications and is
frequently required for legal documentation. Computers can produce printed reports and correspondence,
documents such as sales invoices, payroll checks, bank statements, and printed versions of graphic
displays.
Inkjet printers, which spray ink onto a page, have become the most popular, low-cost printers for
microcomputer systems. They are quiet, produce several pages per minute of high-quality output, and
can print both black-and-white and high-quality color graphics. Laser printers use an electrostatic
process similar to a photocopying machine to produce many pages per minute of high-quality black-
and-white output. More expensive color laser printers and multifunction inkjet and laser models that
print, fax, scan, and copy are other popular choices for business offices.
Storage Trade-Offs
Data and information must be stored until needed using a variety of storage methods. For example,
many people and organizations still rely on paper documents stored in filing cabinets as a major form
of storage media. However, you and other computer users are more likely to depend on the memory
circuits and secondary storage devices of computer systems to meet your storage requirements. Progress
in very-large-scale integration (VLSI), which packs millions of memory circuit elements on tiny
semiconductor memory chips, is responsible for continuing increases in the main-memory capacity of
computers. Secondary storage capacities are also escalating into the billions and trillions of characters,
due to advances in magnetic and optical media.
There are many types of storage media and devices. Figure 3.14 illustrates the speed, capacity, and cost
relationships of several alternative primary and secondary storage media. Note the cost/speed/capacity
trade-offs as you move from semiconductor memories to magnetic disks to optical disks and to magnetic
tape. High-speed storage media cost more per byte and provide lower capacities. Large-capacity storage
media cost less per byte but are slower. These trade-offs are why we have different kinds of storage
media.

Fig 3.14 Storage media cost, speed, and capacity trade-offs. Note how cost increases with faster
access speeds but decreases with the increased capacity of storage media.
However, all storage media, especially memory chips and magnetic disks, continue to increase in speed
and capacity and decrease in cost. Developments like automated high-speed cartridge assemblies have
given faster access times to magnetic tape, and the speed of optical disk drives continues to increase.
Note in Figure 3.14 that semiconductor memories are used mainly for primary storage, although they are
sometimes used as high-speed secondary storage devices. Magnetic disk and tape and optical disk
devices, in contrast, are used as secondary storage devices to enlarge the storage capacity of computer
systems. Also, because most primary storage circuits use RAM (random-access memory) chips, which
lose their contents when electrical power is interrupted, secondary storage devices provide a more
permanent type of storage media.
Computer Storage Fundamentals
Data are processed and stored in a computer system through the presence or absence of electronic
or magnetic signals in the computer's circuitry or in the media it uses. This character is called "two-state"
or binary representation of data because the computer and the media can exhibit only two possible
states or conditions, similar to a common light switch: "on" or "off." For example, transistors and other
semiconductor circuits are in either a conducting or a non-conducting state. Media such as magnetic
disks and tapes indicate these two states by having magnetized spots whose magnetic fields have one of
two different directions, or polarities. This binary characteristic of computer circuitry and media is what
makes the binary number system the basis for representing data in computers. Thus, for electronic
circuits, the conducting ("on") state represents the number 1, whereas the non-conducting ("off") state
represents the number 0.
For magnetic media, the magnetic field of a magnetized spot in one direction represents a 1, while
magnetism in the other direction represents a 0.
The smallest element of data is called a bit, short for binary digit, which can have a value of either
0 or 1. The capacity of memory chips is usually expressed in terms of bits. A byte is a basic grouping
of bits that the computer operates as a single unit. Typically, it consists of eight bits and represents one
character of data in most computer coding schemes. Thus, the capacity of a computer's memory and
secondary storage devices is usually expressed in terms of bytes. Computer codes such as ASCII
(American Standard Code for Information Interchange) use various arrangements of bits to form bytes
that represent the numbers 0 through 9, the letters of the alphabet, and many other characters. See Figure
3.15

Fig 3.15 Example of the ASCII computer code that computers use to represent numbers and the letters of
the alphabet
Since childhood, we have learned to do our computations using the numbers 0 through 9, the digits of the
decimal number system. Although it is fine for us to use 10 digits for our computations, computers do
not have this luxury. Every computer processor is made of millions of tiny switches that can be turned
off or on. Because these switches have only two states, it makes sense for a computer to perform its
computations with a number system that only has two digits: the binary number system. These digits (0
and 1) correspond to the off/ on positions of the switches in the computer processor. With only these two
digits, a computer can perform all the arithmetic that we can with 10 digits.

The binary system is built on an understanding of exponentiation (raising a number to a power).
In contrast to the more familiar decimal system, in which each place represents the number 10 raised to a
power (ones, tens, hundreds, thousands, and so on), each place in the binary system represents the number
2 raised to successive powers (2⁰, 2¹, 2², and so on).
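The following short Python sketch illustrates both ideas: the bit pattern 01000001 is expanded place by place as powers of 2, giving 65, the ASCII code for the letter 'A'. The bit string and the built-in conversions shown are illustrative only.

# A small sketch of two-state (binary) representation: each place value is a
# power of 2, and one 8-bit byte encodes one ASCII character.
bits = "01000001"            # the ASCII pattern for the letter 'A'

# Expand the byte as powers of 2: 0*2^7 + 1*2^6 + ... + 1*2^0
value = sum(int(b) * 2 ** power
            for power, b in zip(range(7, -1, -1), bits))
print(value)                 # 65, the decimal ASCII code for 'A'
print(chr(value))            # 'A'

# Python can also do the conversions directly:
print(int(bits, 2))              # 65
print(format(ord('A'), '08b'))   # '01000001'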
Storage capacities are frequently measured in kilobytes (KB), megabytes (MB), gigabytes (GB), or
terabytes (TB). Although kilo means 1,000 in the metric system, the computer industry uses K to represent
1,024 (or 2¹⁰) storage positions. For example, a capacity of 10 megabytes is really
10,485,760 storage positions, rather than 10 million positions. However, such differences are frequently
disregarded to simplify descriptions of storage capacity. Thus, a megabyte is roughly
1 million bytes of storage, a gigabyte is roughly 1 billion bytes, and a terabyte represents about 1 trillion
bytes, while a petabyte is more than 1 quadrillion bytes.
To put these storage capacities in perspective, consider the following: A terabyte is equivalent to
approximately 20 million typed pages, and it has been estimated that the total size of all the books,
photographs, video and sound recordings, and maps in the U.S. Library of Congress approximates 3
petabytes (3,000 terabytes).
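The difference between the "kilo = 1,024" convention and the metric meaning can be checked with a few lines of Python; the figures below simply reproduce the 10-megabyte example mentioned earlier.

# A quick check of the powers-of-two convention described above: a capacity
# quoted as 10 megabytes is really 10 x 2^20 bytes, not 10 x 10^6 bytes.
KB = 2 ** 10        # 1,024 bytes
MB = 2 ** 20        # 1,048,576 bytes
GB = 2 ** 30
TB = 2 ** 40

print(10 * MB)                # 10,485,760 storage positions, as noted in the text
print(10 * MB - 10 * 10**6)   # 485,760 bytes of difference, usually disregarded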
Direct and Sequential Access
Primary storage media such as semiconductor memory chips are called direct access memory or random-
access memory (RAM). Magnetic disk devices are frequently called direct access storage devices
(DASDs). In contrast, media such as magnetic tape cartridges are known as sequential access devices.
The terms direct access and random access describe the same concept. They mean that an element
of data or instructions (such as a byte or word) can be directly stored and retrieved by selecting and using
any of the locations on the storage media. They also mean that each storage position (1) has a unique
address and (2) can be individually accessed in approximately the same length of time without having to
search through other storage positions. For example, each memory cell on a microelectronic
semiconductor RAM chip can be individually sensed or changed in the same length of time. Also,
any data record stored on a magnetic or optical disk can be accessed directly in approximately the same
period.
Sequential access storage media such as magnetic tape do not have unique storage addresses that can be
directly addressed. Instead, data must be stored and retrieved using a sequential or serial process. Data
are recorded one after another in a predetermined sequence (e.g., numeric order) on a storage medium.
Locating an individual item of data requires searching the recorded data on the tape until the desired item
is located.
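A minimal sketch of the two access methods, assuming a file of fixed-length records (the record size and file layout are illustrative only): direct access seeks straight to a record's address, while sequential access must pass over every record before it.

RECORD_SIZE = 100            # bytes per fixed-length record (assumed for the example)

def read_direct(path: str, record_number: int) -> bytes:
    """Direct access: jump straight to the record's unique address (its offset)."""
    with open(path, "rb") as f:
        f.seek(record_number * RECORD_SIZE)   # like a DASD going to a known address
        return f.read(RECORD_SIZE)

def read_sequential(path: str, record_number: int) -> bytes:
    """Sequential access: read every preceding record first, as on magnetic tape."""
    with open(path, "rb") as f:
        for _ in range(record_number):
            f.read(RECORD_SIZE)               # skip records one after another
        return f.read(RECORD_SIZE)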
Semiconductor Memory
The primary storage (main memory) of your computer consists of microelectronic semiconductor memory
chips. It provides you with the working storage your computer needs to process your applications. Plug-in
memory circuit boards containing 256 megabytes or more of memory chips can be added to your PC to
increase its memory capacity. Specialized memory can help improve your computer's performance.
Examples include external cache memory of 512 kilobytes to help your microprocessor work faster or a
video graphics accelerator card with 64 megabytes or more of RAM for faster and clearer video
performance. Removable credit-card-size and smaller "flash memory" RAM devices like a jump drive
or a memory stick can also provide hundreds of megabytes of erasable direct access storage for PCs,
PDAs, or digital cameras.
Some of the major attractions of semiconductor memory are its small size, great speed, and shock
and temperature resistance. One major disadvantage of most semiconductor memory is its volatility.
Uninterrupted electric power must be supplied, or the contents of memory will be lost. Therefore, either
emergency transfer to other devices or standby electrical power (through battery packs or emergency
generators) is required if data are to be saved. Another alternative is to permanently "burn in" the
contents of semiconductor devices so they cannot be erased by a loss of power.
Thus, there are two basic types of semiconductor memory: random-access memory (RAM) and read-only
memory (ROM).
• RAM, random-access memory. These memory chips are the most widely used primary storage
medium. Each memory position can be both sensed (read) and changed (written), so it is also called
read/write memory. This is a volatile memory.
• ROM, read-only memory. Nonvolatile random-access memory chips are used for permanent
storage; ROM can be read but not erased or overwritten. Frequently used control instructions in the
control unit and programs in primary storage (such as parts of the operating system) can be
permanently burned into the storage cells during manufacture, sometimes called firmware.
Variations include PROM (programmable read-only memory) and EPROM (erasable
programmable read-only memory), which can be permanently or temporarily programmed after
manufacture.
One of the newest and most innovative forms of storage that uses semiconductor memory is the flash
drive (sometimes referred to as a jump drive).
Flash memory uses a small chip containing thousands of transistors that can be programmed to store
data for virtually unlimited periods without power. The small drives can be easily transported
in your pocket and are highly durable. Storage capacities currently range as high as 20 gigabytes, but
newer flash technologies are making even higher storage capacities a reality. The advent of credit-card-
like memory cards and ever-smaller storage technologies puts more data into the user's pocket every day.
Magnetic Disks
Magnetic disks are the most common form of secondary storage for your computer system. That's
because they provide fast access and high storage capacities at a reasonable cost. Magnetic disk
drives contain metal disks that are coated on both sides with an iron oxide recording material. Several
disks are mounted together on a vertical shaft, which typically rotates the disks at speeds of 3,600 to 7,600
revolutions per minute (rpm). Electromagnetic read/write heads are positioned by access arms between
the slightly separated disks to read and write data on concentric, circular tracks. Data are recorded on
tracks in the form of tiny magnetized spots to form the binary digits of common computer codes.
Thousands of bytes can be recorded on each track, and there are several hundred data tracks on each disk
surface, thus providing you with billions of storage positions for your software and data.

Types of Magnetic Disks
Magnetic disks come in various configurations, including removable disk cartridges and fixed disk units.
Removable disk devices, due to their portability and ability to store backup copies of data offline, offer
both convenience and security.
Floppy disks, or magnetic diskettes, consist of polyester film disks coated with an iron oxide compound.
The disk is housed in a protective flexible or rigid plastic jacket with access openings to accommodate the
read/write head of a disk drive unit. The 3.5-inch floppy disk, which could store up to 1.44 megabytes,
was the most widely used. Super disk technology, offering storage up to 120 megabytes, and Zip drives
with a capacity of up to 750 MB, used similar floppy-like technology. In the modern era, most computers
no longer include floppy disk drives, but these can still be sourced if required.
Hard disk drives integrate magnetic disks, access arms, and read/write heads into a single sealed unit. This
structure allows for increased speeds, higher data recording densities, and precise tolerances in a
controlled, stable environment. Both fixed and removable disk cartridge versions are available. Hard
drives today can hold data from several hundred gigabytes to multiple terabytes.
RAID Storage
Redundant Arrays of Independent Disks (RAID) storage represents a major shift in data storage
technology, providing a significant expansion in online storage capacity. RAID systems connect multiple
small hard disk drives, ranging from six to over a hundred, along with their controlling microprocessors,
into a single unit. This setup offers vast storage capacities (as much as 1-2 terabytes or even more) while
maintaining high access speeds due to the parallel access of data across multiple disks via various
pathways.
Furthermore, because of their redundant nature, which maintains several copies of data across various
drives, RAID systems offer a vital fault-tolerant feature. The system can automatically restore data from
backup copies kept on the other drives in the case of a single disk failure. In addition to individual RAID
units, storage area networks (SANs) further enhance storage capabilities. SANs are high-speed, fiber
channel local area networks that can interconnect numerous RAID units, effectively sharing their
combined capacity with multiple users through network servers.
RAID technology includes various classifications, and recent advancements encompass both hardware-
based solutions and software methods. While the technical details of RAID systems might surpass the
requirements of the typical business technologist, it's essential to understand that most contemporary
organizational storage mechanisms likely utilize some form of RAID technology. For those seeking a more
in-depth understanding of this technology, a plethora of resources are available on the internet.
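The fault-tolerance idea behind parity-based RAID levels can be sketched in a few lines: a parity block is the exclusive-OR (XOR) of the data blocks, so the contents of any single failed drive can be rebuilt from the survivors. This is only a conceptual sketch with invented block contents; real RAID controllers add striping, caching, and hot-swap logic.

def xor_blocks(blocks):
    """XOR several equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

data = [b"DISK-ONE", b"DISK-TWO", b"DISK-SIX"]   # data held on three drives
parity = xor_blocks(data)                        # parity stored on a fourth drive

# Simulate losing drive 2 and rebuilding its contents from parity + survivors:
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt)   # b'DISK-TWO'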
Magnetic Tape
Magnetic tape is still being used as a secondary storage medium in business applications. The read/ write
heads of magnetic tape drives record data in the form of magnetized spots on the iron oxide coating of the
plastic tape. Magnetic tape devices include tape reels and cartridges in mainframes and midrange systems
and small cassettes or cartridges for PCs. Magnetic tape cartridges have replaced tape reels in many
applications and can hold more than 200 megabytes. One growing business application of magnetic
tape involves the use of high-speed 36-track magnetic tape cartridges in robotic automated drive
assemblies that can directly access hundreds of cartridges. These devices provide lower-cost storage to
supplement magnetic disks to meet massive data warehouse and other online business storage
requirements. Other major applications for magnetic tape include long-term archival storage and
backup storage for PCs and other systems.
Optical Disks
Optical disks, a fast-growing type of storage media, use several major alternative technologies. One
version is called CD-ROM (compact disk-read-only memory). CD-ROM technology uses
12-centimeter (4.7-inch) compact disks (CDs) similar to those used in stereo music systems.
Each disk can store more than 600 megabytes. That's the equivalent of more than 400 1.44-megabyte
floppy disks or more than 500,000 double-spaced pages of text. A laser records data by burning
permanent microscopic pits in a spiral track on a master disk from which compact disks can be mass
produced. Then CD-ROM disk drives use a laser device to read the binary codes formed by those pits.
CD-R (compact disk-recordable) is another popular optical disk technology. CD-R drives or CD burners
are commonly used to record data permanently on CDs. The major limitation of CD-ROM and
CD-R disks is that recorded data cannot be erased. However, CD-RW (CD-rewritable) drives
record and erase data by using a laser to heat a microscopic point on the disk's surface. In CD-RW
versions using magneto-optical technology, a magnetic coil changes the spot's reflective properties
from one direction to another, thus recording a binary 1 or 0. A laser device can then read the binary
codes on the disk by sensing the direction of reflected light. DVD technologies have dramatically
increased optical disk capacities and capabilities. DVD (digital video disk or digital versatile disk)
optical disks can hold from 3.0 to 8.5 gigabytes of multimedia data on each side. The large capacities
and high-quality images and sound of DVD technology are expected to replace CD technologies for
data storage and promise to accelerate the use of DVD drives for multimedia products that can be used
in both computers and home entertainment systems. Thus, DVD-ROM disks are increasingly replacing
magnetic tape videocassettes for movies and other multimedia products, while DVD+RW disks are being
used for backup and archival storage of large data and multimedia files.
Business Applications
One of the major uses of optical disks in mainframe and midrange systems is in image processing,
where long-term archival storage of historical files of document images must be maintained. Financial
institutions, among others, are using optical scanners to capture digitized document images and store them
on optical disks as an alternative to microfilm media.
One of the major business uses of CD-ROM disks for personal computers is to provide a publishing
medium for fast access to reference materials in a convenient, compact form. This material includes
catalogs, directories, manuals, periodical abstracts, part listings, and statistical databases of business and
economic activity. Interactive multimedia applications in business, education, and entertainment are
another major use of optical disks. The large storage capacities of CD and DVD disks are a natural choice
for computer video games, educational videos, multimedia encyclopedias, and advertising presentations.
Radio Frequency Identification
One of the newest and most rapidly growing storage technologies is radio frequency identification
[RFID], a system for tagging and identifying mobile objects such as store merchandise, postal packages,
and sometimes even living organisms (like pets). Using a special device called an RFID reader, RFID
allows objects to be labeled and tracked as they move from place to place.
The RFID technology works using small (sometimes smaller than a grain of sand) pieces of hardware
called RFID chips. These chips feature an antenna to transmit and receive radio signals. Currently, there
are two general types of RFID chips: passive and active. Passive RFID chips do not have a power source
and must derive their power from the signal sent from the reader. Active RFID chips are self-powered and
do not need to be close to the reader to transmit their signal. RFID chips may be attached to
objects or, in the case of some passive RFID systems, injected into objects. A recent use for RFID
chips is the identification of pets such as dogs or cats. By having a tiny RFID chip injected just under
their skin, they can be easily identified if they become lost. The RFID chip contains contact
information about the owner of the pet. Taking this a step further, the Transportation Security
Administration is considering using RFID tags embedded in airline boarding passes to keep track of
passengers.
Whenever a reader within range sends appropriate signals to an object, the associated RFID chip responds
with the requested information, such as an identification number or product date. The reader, in turn,
displays the response data to an operator. Readers may also forward data to a networked central computer
system. Such RFID systems generally support storing information on the chips as well as simply reading
data.
The RFID systems were created as an alternative to common bar codes. Relative to bar codes, RFID
allows objects to be scanned from a greater distance, supports storing of data, and allows more
information to be tracked per object.
Recently (as discussed in the next section), RFID has raised some privacy concerns as a result of the
invisible nature of the system and its capability to transmit fairly sophisticated messages. As these types
of issues are resolved, we can expect to see RFID technology used in just about every way imaginable.
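The reader/tag exchange described above can be pictured with a small, purely illustrative sketch; the class names, query strings, and tag data below are invented for the example and do not follow any actual RFID standard.

class PassiveTag:
    def __init__(self, tag_id, data):
        self.tag_id = tag_id
        self.data = data          # e.g. owner contact details or a product date

    def respond(self, query):
        # A passive tag only answers while powered by the reader's signal.
        if query == "IDENTIFY":
            return self.tag_id
        if query == "READ":
            return self.data

class Reader:
    def __init__(self, tags_in_range):
        self.tags_in_range = tags_in_range

    def scan(self):
        # Poll every tag within range; the results could be forwarded to a central computer.
        return {tag.respond("IDENTIFY"): tag.respond("READ")
                for tag in self.tags_in_range}

pet_chip = PassiveTag("PET-0042", {"owner": "example owner"})   # invented identifiers
print(Reader([pet_chip]).scan())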
Software
Software is the detailed instructions that control the operation of a computer system. Without software,
computer hardware could not perform the tasks we associate with computers. The functions of software
are to (1) manage the computer resources of the organization; (2) provide tools for human beings to take
advantage of these resources; and (3) act as an intermediary between organizations and stored
information. Selecting appropriate software for the organization is a key management decision.
Software programs
A software program is a series of statements or instructions to the computer. The process of writing or
coding programs is termed programming, and individuals who specialize in this task are called
programmers.
The stored program concept means that a program must be stored in the computer's primary storage along
with the required data in order to execute, or have its instructions performed by the computer. Once a
program has finished executing, the computer hardware can be used for another task when a new
program is loaded into memory.

Major Types of software
There are two major types of software: system software and application software. Each kind performs a
different function. System Software is a set of generalized programs that manage the resources of the
computer, such as the central processor, communications links, and peripheral devices. Programmers who
write system software are called system programmers.
Application software describes the programs that are written for or by users to apply the computer to a specific
task. Software for processing an order or generating a mailing list is application software. Programmers who
write application software are called application programmers.
The types of software are interrelated and can be thought of as a set of nested boxes, each of which must
interact closely with the other boxes surrounding it. Figure 3-16 illustrates this relationship. The system
software surrounds and controls access to the hardware. Application software must work through the
system software in order to operate. End users work primarily with application software. Each type of
software must be specially designed for a specific machine to ensure its compatibility.

Fig 3-16 The major types of software.


The relationship between the system software, application software, and users can be illustrated by a
series of nested boxes. System software consisting of operating systems, language translators, and utility
programs-controls access to the hardware. Application software, such as the programming languages
and "fourth-generation" languages, must work through the system software to operate. The user
interacts primarily with the application software.
System Software
System software coordinates the various parts of the computer system and mediates between application
software and computer hardware. The system software that manages and controls the activities of the
computer is called the operating system. Other system software consists of computer language translation
programs that convert programming languages into machine language and utility programs that perform
common processing tasks.
Functions of the Operating System
One way to look at the operating system is as the system's chief manager. Operating system software
decides which computer resources will be used, which programs will be run, and the order in which
activities will take place.
An operating system performs three functions. It allocates and assigns system resources; it schedules the
use of computer resources and computer jobs; and it monitors computer system activities.
Allocation and Assignment
The operating system allocates resources to the application jobs in the execution queue. It provides
locations in primary memory for data and programs and controls the input and output devices such as
printers, terminals, and telecommunication links.
Scheduling
Thousands of pieces of work can be going on in a computer simultaneously. The operating system
decides when to schedule the jobs that have been submitted and when to coordinate the scheduling in
various areas of the computer so that different parts of different jobs can be worked on at the same
time. For instance, while a program is executing, the operating system is scheduling the use of input and
output devices. Not all jobs are performed in the order they are submitted; the operating system must
schedule these jobs according to organizational priorities. On-line order processing may have priority over
a job to generate mailing lists and labels.
Monitoring
The operating system monitors the activities of the computer system. It keeps track of each computer job
and may also keep track of who is using the system, of what programs have been run, and of any
unauthorized attempts to access the system.
Multiprogramming, Virtual Storage, Time Sharing, and Multiprocessing
How is it possible for 1000 or more users sitting at remote terminals to use a computer information
system simultaneously if, as we stated in the previous chapter, most computers can execute only one
instruction from one program at a time? How can computers run thousands of programs? The answer is
that the computer has a series of specialized operating system capabilities.
Multiprogramming
The most important operating system capability for sharing computer resources is multiprogramming.
Multiprogramming permits multiple programs to share a computer system's resources at any one time
through concurrent use of a CPU. By concurrent use, we mean that only one program is actually using
the CPU at any given moment but that the input/output needs of other programs can be serviced at the
same time. Two or more programs are active at the same time, but they do not use the same computer
resources simultaneously. With multiprogramming, a group of programs takes turns using the processor.

Figure 3-17 shows how three programs in a multiprogramming environment can be stored in primary
storage. The first program executes until an input/output event is read in the program. The operating
system then directs a channel (a small processor limited to input and output functions) to read
the input and move the output to an output device. The CPU moves to the second program until an
input/output statement occurs. At this point, the CPU switches to the execution of the third program, and
so forth, until eventually all three programs have been executed. In this manner, many different programs
can be executing at the same time, although different resources within the CPU are actually being utilized.

Fig 3-17 Single-program execution versus multiprogramming. In multiprogramming, the computer
can be used much more efficiently because a number of programs can be executing concurrently.
Several complete programs are loaded into memory. This memory management aspect of the
operating system greatly increases throughput by better management of high- speed memory and
input/output devices.
The first operating systems executed only one program at a time. Before multiprogramming, when
a program read data off a tape or disk or wrote data to a printer, the entire CPU came to a stop. This
was a very inefficient way to use the computer. With multiprogramming, the CPU utilization rate is
much higher.
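A toy simulation of the idea in Figure 3-17 is sketched below: the "CPU" executes one program until that program needs input/output, then switches to the next program while the channel handles the I/O. The programs and their step lists are invented purely for illustration.

programs = {
    "Program 1": ["cpu", "io", "cpu"],
    "Program 2": ["cpu", "cpu", "io"],
    "Program 3": ["io", "cpu"],
}

while any(programs.values()):
    for name, steps in programs.items():
        # Run this program on the CPU until it needs I/O or finishes.
        while steps and steps[0] == "cpu":
            print(f"CPU executing {name}")
            steps.pop(0)
        if steps and steps[0] == "io":
            print(f"{name} waiting on I/O -> CPU switches to another program")
            steps.pop(0)     # the channel handles the I/O while the CPU moves on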
Multitasking
Multitasking refers to multiprogramming on single-user operating systems such as those in older
personal computers. One person can run two or more programs or program tasks concurrently on a
single computer. For example, a sales representative could write a letter to prospective clients with a
word processing program while simultaneously using a database program to search for all sales contacts
in a particular city or geographic area. Instead of terminating the session with the word processing
program, returning to the operating system, and then initiating a session with the database program,
multitasking allows the sales representative to display both programs on the computer screen and work
with them at the same time.

Virtual Storage
Virtual storage handles programs more efficiently because the computer divides the programs into
small fixed- or variable-length portions, storing only a small portion of the program in primary memory
at one time. If only two or three large programs can be read into memory, a certain part of main
memory generally remains underutilized because the programs add up to less than the total amount
of primary storage space available. Given the limited size of primary memory, only a small number of
programs can reside in primary storage at any given time.
Only a few statements of a program actually execute at any given moment. Virtual storage breaks
a program into a number of fixed-length portions called pages or into variable-length portions called
segments. Each of these portions is relatively small (a page is approximately 2 to
4 kilobytes). This permits a very large number of programs to reside in primary memory,
inasmuch as only one page of each program is actually located there (see Figure 3-18).

Fig 3-18 Virtual storage.


Virtual storage is based on the fact that, in general, only a few statements in a program can actually be
utilized at any given moment. In virtual storage, programs are broken down into small sections called
pages. Individual program pages are read into memory only when needed. The rest of the program is
stored on disk until it is required. In this way, very large programs can be executed by small machines or
a large number of programs can be executed concurrently by a single machine.
All other program pages are stored on a peripheral disk unit until they are ready for execution. Virtual
storage provides a number of advantages. First, the central processor is utilized more fully. Many
more programs can be in primary storage because only one page of each program actually resides there.
Second, programmers no longer have to worry about the size of the primary storage area. With
virtual storage, programs can be of infinite length and small machines can execute a program of any
size (admittedly, small machines will take longer than big machines to execute a large program).
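A minimal sketch of the paging mechanism, with an assumed page size and table structure, shows how a virtual address is split into a page number and an offset, and how a page is brought into a real-memory frame only when it is first referenced (a page fault).

PAGE_SIZE = 4096                 # roughly the 4-kilobyte pages mentioned above

page_table = {}                  # page number -> frame number in primary memory
next_free_frame = 0

def translate(virtual_address: int) -> int:
    """Return the real-memory address for a virtual address, paging in on demand."""
    global next_free_frame
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in page_table:               # page fault: page is still on disk
        page_table[page] = next_free_frame   # load it into the next free frame
        next_free_frame += 1
    return page_table[page] * PAGE_SIZE + offset

print(translate(8200))   # page 2, offset 8 -> placed in frame 0
print(translate(8300))   # same page, already in memory -> same frame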
Time Sharing
Time-sharing is a method implemented by operating systems to facilitate simultaneous resource
allocation to numerous users. This technique is distinct from multiprogramming as it designates a fixed
duration for the CPU to work on a single program before transitioning to another.
In a time-sharing environment, a multitude of users are allotted small slices of computing time, typically
2 milliseconds each. During this time slot, each user can execute their required operations. After each
slice of time elapses, the CPU shifts its focus to another user, granting them their 2-millisecond slice of
processing time.
Despite each user only receiving a minuscule portion of the CPU's time, the high-speed operation of the
CPU allows for substantial work to be accomplished within each 2-millisecond period. This is because
the CPU operates at the nanosecond level, allowing for thousands of operations to be completed in each
user's designated time slice. This method enables a significant number of users to be connected to a
single CPU simultaneously, promoting efficient use of system resources.
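The round-robin idea behind time sharing can be sketched as follows; the 2-millisecond slice matches the figure quoted above, while the users and their remaining work are invented for the example.

TIME_SLICE_MS = 2

users = {"User A": 5, "User B": 3, "User C": 4}   # milliseconds of work left

clock = 0
while any(remaining > 0 for remaining in users.values()):
    for user, remaining in users.items():
        if remaining <= 0:
            continue                           # this user has finished
        used = min(TIME_SLICE_MS, remaining)   # give the user one time slice
        users[user] = remaining - used
        clock += used
        print(f"t={clock:2d} ms  {user} ran for {used} ms, {users[user]} ms left")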
Multiprocessing
Operating systems that support multiprocessing allow many CPUs to run concurrently in a single
computer system. The operating system can allocate multiple CPUs to carry out various instructions
from the same program or from separate programs at the same time using this approach, thereby
effectively dividing the workload between the CPUs.
Multiprocessing permits simultaneous processing on multiple CPUs, in contrast to multiprogramming,
which makes use of concurrent processing with a single CPU. This implies that multiple tasks may be
carried out simultaneously by each CPU, potentially improving system performance and efficiency.
It's crucial to note that while the fundamental idea of multiprocessing is still relevant, new approaches and
paradigms continue to emerge as a result of technological and architectural improvements.
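As one illustration of dividing a workload across several processors, the sketch below uses Python's standard multiprocessing module to give each CPU its own worker; the work function itself is only a placeholder.

from multiprocessing import Pool, cpu_count

def square(n: int) -> int:
    # Stand-in for a CPU-bound piece of work carried out on one processor.
    return n * n

if __name__ == "__main__":
    with Pool(processes=cpu_count()) as pool:    # one worker process per CPU
        results = pool.map(square, range(10))    # the work is split across CPUs
    print(results)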
Language Translation and Utility Software
When computers execute programs written in languages such as COBOL, FORTRAN, or C, the computer
must convert these human-readable instructions into a form it can understand. System software includes
special language translator programs that translate higher-level language programs written in programming
languages such as BASIC, COBOL, and FORTRAN into machine language that the computer can
execute. This type of system software is called a compiler or interpreter. The program written in the
high level language before translation into machine language is called the Object code. A compiler
translates source code into machine code called object code. Just before execution by the computer,
the object code modules are joined with other object code modules in a process called linkage editing.
The resulting load module is what is actually executed by the computer. Figure 3-19 illustrates the
language translation process.


Fig 3-19 The language translation process.


The source code, the program in a high level language, is translated by the compiler into object code so
that the instructions can be "understood" by the machine. These are grouped into modules. Some
programming languages such as BASIC do not use a compiler but an interpreter, which translates each
source code statement one at a time into machine code and executes it. Interpreter languages such as
BASIC provide immediate feedback to the programmer if a mistake is made, but they are very slow to
execute because they are translated one statement at a time.
An assembler is similar to a compiler, but it is used to translate only assembly language into machine
code.
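As a loose analogy to this translation process, Python's standard library exposes its own translation step: compile() turns source text into a lower-level code object and the dis module displays the resulting instructions. Note that Python translates to bytecode for a virtual machine rather than to native machine code, so this is only an illustration of the idea, not the compiler-to-object-code process itself:

# Python's own translation step, used here as an analogy to the
# source-code -> object-code process described above.
import dis

source_code = "total = price * quantity"                    # high-level "source code"
code_object = compile(source_code, "<example>", "exec")     # translation step

dis.dis(code_object)      # show the lower-level instructions actually executed

# Executing the translated code with some sample values:
namespace = {"price": 12.5, "quantity": 4}
exec(code_object, namespace)
print(namespace["total"])    # 50.0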

System software includes utility programs for routine, repetitive tasks, such as copying, clearing primary
storage, computing a square root, or sorting. If you have worked on a computer and have performed
such functions as setting up new files, deleting old files, or formatting diskettes, you have worked with
utility programs. Utility programs are prewritten programs that are stored so that they can be shared by
all users of a computer system and can be rapidly used in many different information system applications
when requested.
Graphical User Interfaces
When interacting with a computer system, users do so via an operating system that guides the interaction
through a user interface. The user interface is essentially the segment of the information system that the
user directly interacts with, facilitating communication with the operating system. In the early days of PC
operating systems, a command-driven approach was the norm. However, graphical user interfaces
(GUIs), which leverage extensive use of icons, buttons, bars, and boxes for various tasks, have emerged
as the dominant user interface model for PC operating systems and a wide array of application software.
Earlier PC operating systems such as DOS were command-driven, meaning that they required users to
enter text-based commands via a keyboard. For instance, to delete a file named DATAFILE, a user would
need to type a command like "DELETE C:\DATAFILE." This demanded a significant amount of memory
work from the user to recall these commands and their syntax to operate the computer effectively.
GUIs, on the other hand, employ graphic symbols known as icons to represent programs, files, and tasks.
This allows for command activation through the movement of a cursor across the screen using a mouse,
with command selection made by clicking a mouse button. Icons, as symbolic images, are employed in
GUIs to represent programs and files. For instance, deleting a file could be as simple as moving the cursor
to a Trash icon. Many GUIs use pull-down menus to help users select commands and pop-up boxes to
assist users in choosing among command options. Windowing features enable users to generate, stack,
resize, and rearrange information boxes.
Supporters of GUIs argue that they reduce the learning curve since novices don't have to memorize
various arcane commands for each application. Common functions such as accessing help, saving files,
or printing outputs are performed in a uniform manner. Moreover, a complicated series of commands can
be issued simply by connecting icons. However, GUIs may not always simplify complex tasks if users
have to spend excessive time pointing to icons and selecting operations to perform on those icons.
Furthermore, graphic symbols might not be straightforward to understand unless the GUI is well
designed. Current GUIs are modeled after an office desktop, with files, documents, and actions based on
standard office behavior, making them less useful for non-office applications in control rooms or
processing plants.
PC Operating Systems
Like any other software, PC software is based on specific operating systems and computer hardware. A
software package written for one PC operating system generally cannot run on another.
DOS was the most popular operating system for 16-bit PCs. It is used today with older PCs based
on the IBM PC standard because so much available application software was written for systems using
DOS. (PC-DOS is used exclusively with IBM PCs. MS-DOS, developed by Microsoft, is used with other
16-bit PCs that function like the IBM PC.) DOS itself does not support multitasking and limits the size
of a program in memory to 640 K.
DOS is command-driven, but it can present a graphical user interface by using Microsoft Windows,
a highly popular graphical user interface shell that runs in conjunction with the DOS operating system.
Windows supports limited forms of multitasking and networking but shares the memory limitations of
DOS. Early versions of Windows had some problems with application crashes when multiple programs
competed for the same memory space.
Microsoft's Windows 98 and Windows 95 are genuine 32-bit operating systems. A 32-bit operating
system can run faster than DOS, which could only address data in 16-bit chunks, because it can address
data in 32-bit chunks. Both Windows 98 and Windows 95 provide a streamlined graphical user interface
that arranges icons to provide instant access to common tasks. They can support software written for
DOS but can also run programs that take up more than 640 K of memory. Windows 98 and 95 feature
multitasking, multithreading (the ability to manage multiple independent tasks simultaneously), and
powerful networking capabilities, including the capability to integrate fax, e-mail, and scheduling
programs.
Windows 98 is faster and more integrated with the Internet than Windows 95, with support for new hardware
technologies such as MMX, digital video disk, videoconferencing cameras, scanners, TV tuner-adapter
cards, and joysticks. It provides capabilities for optimizing hardware performance and file management
on the hard disk and enhanced three-dimensional graphics. The most visible feature of Windows 98 is the
integration of the operating system with Web browser software. Users will be able to work with the
traditional Windows interface or use the Web browser interface to display information. The user's hard disk
can be viewed as an extension of the World Wide Web so that a document residing on the hard disk or on
the Web can be accessed the same way. Small applet programs on the Windows desktop can automatically
retrieve information from specific Web sites whenever the user logs onto the Internet. These applets can
automatically update the desktop with the latest news, stock quotes, or weather.
WINDOWS NT (for New Technology) is another 32-bit operating system developed by Microsoft
with features that make it appropriate for applications in large networked organizations. It is
used as an operating system for high-performance workstations and network servers. Windows NT shares
the same graphical user interface as the other Windows operating systems, but it has more powerful
networking, multitasking, and memory-management capabilities. Windows NT can support existing
software written for DOS and Windows, and it can provide mainframe-like computing power for new
applications with massive memory and file requirements. It can even support multiprocessing with
multiple CPUs.
There are two versions of Windows NT: a Workstation version for users of standalone or client desktop computers and a Server version designed to run on network servers. Windows NT Server includes
tools for creating and operating Web sites. Unlike OS/2, Windows NT is not tied to computer hardware
based on Intel microprocessors.
WINDOWS CE has some of the capabilities of Windows, including its graphical user interface, but it is
designed to run on small handheld computers, personal digital assistants, or wireless communication
devices such as pagers and cellular phones. It is a portable and compact operating system requiring
very little memory. Non-PC and consumer devices can use this operating system to share information
with Windows-based PCs and to connect to the Internet.
OS/2 is a robust 32-bit operating system for
powerful IBM or IBM-compatible PCs with Intel microprocessors. OS/2 is used for complex, memory-
intensive applications or those that require networking, multitasking or large programs. OS/2 provides
powerful desktop computers with mainframe-operating-system capabilities, such as multitasking and
supporting multiple users in networks, and it supports networked multimedia and pen computing
applications.
OS/2 supports applications that run under Windows and DOS and has its own graphical user interface.
There are now two versions of OS/2. OS/2 Warp is for personal use. It can accept voice-input
commands and run Java applications without a Web browser. OS/2 Warp Server has capabilities similar
to Windows NT for supporting networking, systems management, and Internet access.
UNIX is an interactive, multi-user, multitasking operating system developed by Bell
Laboratories in 1969 to help scientific researchers share data. Many people can use UNIX simultaneously
to perform the same kind of task, or one user can run many tasks on UNIX concurrently. UNIX was
developed to connect various machines together and is highly supportive of communications and
networking. UNIX was designed for minicomputers but now has versions for PCs, workstations, and
mainframes. It is often used on workstations and server computers. UNIX can run on many different
kinds of computers and can be easily customized. Application programs that run under UNIX can be
ported from one computer to run on a different computer with little modification. UNIX also can store
and manage a large number of files.
UNIX is considered powerful but very complex, with a legion of commands. Graphical user interfaces
have been developed for UNIX. UNIX cannot respond well to problems caused by the overuse of system
resources such as jobs or disk space. UNIX also poses some security problems, because multiple
jobs and users can access the same file simultaneously. Vendors have developed different versions of
UNIX that are incompatible, thereby limiting software portability.
Mac OS, the operating system for the Macintosh computer, features multitasking as well as powerful
multimedia and networking capabilities, and a mouse-driven graphical user interface. New features of
this operating system allow users to connect to, explore, and publish on the Internet and World Wide Web
and to use Java software.
Recent update:
DOS, while a critical stepping stone in the history of operating systems, is now obsolete and is not generally used in modern computing. Windows 98 and Windows NT have likewise been succeeded by newer versions of the Windows operating system; the latest major release at the time of writing is Windows 11, which offers a refreshed, centered Start menu, snap layouts, and support for Android apps, among other features.
The Mac OS mentioned above has evolved into macOS. Recent versions such as macOS Monterey (macOS 12) offer features including Universal Control, Shortcuts, and Focus mode.
Windows CE evolved into Windows Mobile and eventually Windows 10 Mobile, which Microsoft stopped supporting in December 2019. Microsoft's focus has since shifted towards integrating with and supporting Android and iOS for mobile device needs.
UNIX, developed by Bell Labs in 1969, has inspired many other operating systems, including Linux and macOS. Linux is a prevalent UNIX-like open-source operating system used in many modern systems, from servers and supercomputers to embedded systems in appliances and vehicles. Linux distributions (such as Ubuntu, Debian, and Fedora) often serve as

alternatives to Windows and macOS on desktop computers, especially for developers and system
administrators.
Application Software
Application software is primarily concerned with accomplishing the tasks of end users. Many different
languages can be used to develop application software. Each has different strengths and drawbacks.
Generations of Programming Languages
To communicate with the first generation of computers, specialized programmers wrote programs in machine language, the 0s and 1s of binary code. Programming in 0s and 1s (reducing all statements such as add, subtract, and divide into a series of 0s and 1s) made early programming a slow, labor-intensive process.
As computer hardware improved and processing speed and memory size increased, computer languages
changed from machine language to languages that were easier for humans to understand. Generations
of programming languages developed to correspond with the generations of computer hardware.
Figure 3-20 shows the development of programming languages during the last 50 years as the
capabilities of hardware have increased. The major trend is to increase the ease with which users can
interact with hardware and software.

Fig 3-20 Generations of programming languages.


As the capabilities of hardware increased, programming languages developed from the first generation
of machine and second generation of assembly languages of the 1950s to 1960s, through the third-
generation, high-level languages

such as FORTRAN and COBOL developed in the 1960s and 1970s, to today's fourth-generation
languages and tools.
Machine language was the first-generation programming language. The second generation of
programming languages occurred in the early 1950s with the development of assembly language. Instead
of using 0s and 1s, programmers could substitute language-like acronyms and words such as add, sub (subtract), and load in programming statements. A language translator called an assembler converted these English-like statements into machine language.
From the mid-1950s to the mid-1970s, the third generation of programming languages emerged. These
languages, such as FORTRAN, COBOL, and BASIC, allowed programs to be written with regular
words using sentence-like statements. These languages are called high-level languages because each
statement generates multiple statements when it is translated into machine language. Programs
became easier to create and started to be used more widely for scientific and business problems.
Beginning in the late 1970s, fourth-generation languages and tools were created. These languages
dramatically reduce programming time and make software tasks so easy that many can be performed
by nontechnical computer users without the help of professional programmers. Software such as word
processing, spreadsheets, data management, and Web browsers became popular productivity tools for end
users.
Popular Programming Languages
Most managers need not be expert programmers, but they should understand how to evaluate software
applications and to select programming languages that are appropriate for their organization's
objectives. We will now briefly describe the more popular high-level languages.
Assembly Language
Like machine language, assembly language (Figure 3-21) is designed for a specific machine and specific
microprocessors. Each operation in assembly corresponds to a machine operation. On the other hand,
assembly language makes use of certain mnemonics (e.g., load, sum) and assigns addresses and storage
locations automatically. Although assembly language gives programmers great control, it is costly in
terms of programmer time, difficult to read and debug, and difficult to learn. Assembly language is used
primarily today in system software.

Fig 3-21 Assembly language. This sample assembly language command adds the contents of
register 3 to register 5 and stores the result in register 5.

FORTRAN
FORTRAN, an acronym for FORmula TRANslator, came into existence in 1956 with the primary goal of
simplifying the process of crafting scientific and engineering software. It became especially prevalent in
computations that involved numerical data. While it is possible to develop various types of business
applications in FORTRAN, modern versions of the language also offer intricate structures for managing
the logic within programs.
However, FORTRAN is not highly regarded for its efficiency in input/output tasks or its abilities in
printing and handling lists. The syntax rules in FORTRAN are stringent, which makes errors during coding
a common occurrence. This, in turn, can make the debugging process quite challenging. Today, languages such as Python, C++, and Java are more commonly used for scientific and engineering applications
due to their powerful features, efficiency, and more intuitive syntax. However, FORTRAN still finds use
in legacy systems and certain areas of scientific computing that require high-performance numerical
computation.

Fig 3-22 This sample FORTRAN program code is part of a program to compute sales figures for a
particular item.
COBOL
COBOL (COmmon Business Oriented Language) (Figure 3-23) came into use in the early
1960s. It was developed by a committee representing both government and industry. Rear
Admiral Grace M. Hopper was a key committee member who played a major role in COBOL
development. COBOL was designed with business administration in mind, for processing large data
files with alphanumeric characters (mixed alphabetic and numeric data), and for performing repetitive
tasks such as payroll. It is poor at complex mathematical calculations. Also, there are many versions of
COBOL, and not all are compatible with each other. Today, more efficient programming languages have largely superseded COBOL, but much COBOL code remains in operation because of the high cost of replacing these legacy systems.

Fig 3-23 COBOL. This sample COBOL program code is part of a routine to compute total sales
figures for a particular item.

BASIC
BASIC (Beginners All-purpose Symbolic Instruction Code) was developed in 1964 by John Kemeny and Thomas Kurtz to teach students at Dartmouth College how to use computers. Today it is a popular
programming language on college campuses and for PCs. BASIC can do almost all computer processing
tasks from inventory to mathematical calculations. It is easy to use, demonstrates computer capabilities
well, and requires only a small interpreter. The weakness of BASIC is that it does few tasks well
even though it does them all. It has no sophisticated program logic control or data structures, which makes
it difficult to use in teaching good programming practices. Different versions of BASIC exist.
Pascal
Named after Blaise Pascal, the seventeenth-century mathematician and philosopher, Pascal was
developed by the Swiss computer science professor Niklaus Wirth of Zurich in the late 1960s. Pascal
programs can be compiled using minimal computer memory, so they can be used on PCs. With
sophisticated structures to control program logic and a simple, powerful set of commands, Pascal is used
primarily in computer science courses to teach sound programming practices. The language is weak at
file handling and input/output and is not easy for beginners to use.
C and C++
The C programming language, developed at AT&T's Bell Labs in the early 1970s, is a highly efficient and
versatile language. It blends the ability to function on diverse computer systems with stringent control and
resourceful utilization of computing assets. C is primarily employed by professional software developers
for creating operating system and application software, predominantly for personal computers.
C++ is an evolution of C that introduces object-oriented programming capabilities. Alongside inheriting
all of C's features, C++ adds functionality for dealing with software objects, making it a powerful tool for
application software development.
Both C and C++ remain relevant and widely used programming languages today. Their efficiency,
flexibility, and performance make them popular choices for system software, game development, and
applications that require direct hardware manipulation or real-time performance. Furthermore, they've
significantly influenced many contemporary programming languages, such as Java, C#, and Python.
However, the rise of higher-level languages and scripting languages has somewhat shifted the use of C
and C++ towards more system-level and performance-critical applications.
Other Programming Languages
Other important programming languages include Ada, LISP, Prolog, and PL/1.
Ada was developed in 1980 by the U.S. Defense Department to serve as a standard for all of its
applications. Named after Ada, Countess of Lovelace, a nineteenth century mathematician, it was
designed to be executed in diverse hardware environments. Ada is used for both military and nonmilitary
applications because it can operate on different brands of computer hardware.
LISP (designating LISt Processor) and Prolog (designating PROgramming LOGic) are used for artificial-
intelligence applications. LISP, created in the late 1950s, is oriented toward putting symbols such as
operations, variables, and data values into meaningful lists. Prolog was introduced around 1970 and also
is well-suited to manipulating symbols. It can run on a wider variety of computers than LISP.

PL/1 (Programming Language 1) is a powerful general-purpose programming language developed by IBM in 1964. It can comfortably handle both mathematical and business problems, but it has
not replaced COBOL or FORTRAN because organizations have already invested so heavily in COBOL
and FORTRAN systems.
These languages, while perhaps not as widely used as some others, still occupy niche markets. For
example, Ada is often used in systems where safety and reliability are paramount, such as avionics and
defense. LISP and Prolog continue to be used in some artificial intelligence and logic programming
applications. PL/1, although not widely used today, influenced many other languages and is still used in
some legacy systems.
Fourth-Generation Languages and PC Software Tools
Fourth-generation languages consist of a variety of software tools that enable end users to develop
software applications with minimal or no technical assistance or that enhance the productivity of
professional programmers. Fourth-generation languages tend to be nonprocedural
or less procedural than conventional programming languages. Procedural languages require
specification of the sequence of steps, or procedures, that tell the computer what to do and how to do
it. Nonprocedural languages need only specify what has to be accomplished rather than provide details
about how to carry out the task. Thus, a nonprocedural language can accomplish the same task with fewer
steps and lines of program code than a procedural language.
There are seven categories of fourth-generation languages: query languages, report generators, graphics
languages, application generators, very high-level programming languages, application software
packages, and PC tools. Figure 3-24 illustrates the spectrum of these tools and some commercially
available products in each category.

Fig 3-24 Fourth-generation languages. The spectrum of major categories of fourth-generation languages and commercially available products in each category is illustrated. Tools range from those that are simple and designed primarily for end users to complex tools designed for information systems professionals.
Query Languages
Query languages are high-level languages for retrieving data stored in databases or files. They are
usually interactive, on-line, and capable of supporting requests for information that are not predefined.
They are often tied to database management systems or some of the PC software tools described later
in this section. For instance, the query

SELECT ALL WHERE age > 40 AND name = "Wilson"
requests all records where the name is "Wilson" and the age is more than 40.
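In practice, a widely used concrete query language is SQL. A minimal, self-contained equivalent of the request above, run with Python's built-in sqlite3 module against an invented customer table, might look like this (the table name and sample rows are assumptions for illustration only):

# The same request expressed in SQL, run against a small in-memory table
# using Python's built-in sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, age INTEGER)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [("Wilson", 45), ("Wilson", 32), ("Shrestha", 51)],
)

# Equivalent of: SELECT ALL WHERE age > 40 AND name = "Wilson"
rows = conn.execute(
    "SELECT * FROM customers WHERE age > 40 AND name = ?", ("Wilson",)
).fetchall()
print(rows)    # [('Wilson', 45)]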
Available query tools have different kinds of syntax and structure, some being closer to natural language
than others (Vassiliou, 1984-85). Natural language software allows users to communicate with the
computer using conversational commands that resemble human speech. Natural language development is
one of the concerns of artificial intelligence. Some consider the movement toward natural language as the
next generation in software development.
Report Generators
Report generators are facilities for creating customized reports. They extract data from files or databases
and create reports in many formats. Report generators generally provide more control over the way data
are formatted, organized, and displayed than query languages. The more powerful report generators can
manipulate data with complex calculations and logic before they are output. Some report generators are
extensions of database or query languages. Today, there are many report generation tools available, some
of which are incorporated into larger software suites like Microsoft Power BI, Tableau, and Oracle's
Business Intelligence tools.
Graphics Languages
Graphics languages retrieve data from files or databases and display them in graphic format. Users
can ask for data and specify how they are to be charted. Some graphics software can perform arithmetic
or logical operations on data as well. SAS and Systat are examples of powerful analytical graphics
software. Popular data visualization tools today include Matplotlib and Seaborn for Python, ggplot2 for R, and D3.js for JavaScript, among others.
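A minimal charting example using Matplotlib, one of the libraries mentioned above, is sketched below; the quarterly sales figures are invented purely for illustration:

# Minimal charting example with Matplotlib.  The figures are invented.
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]
sales = [120, 150, 90, 180]            # assumed figures, in thousands

plt.bar(quarters, sales)
plt.title("Sales by quarter")
plt.ylabel("Sales (thousands)")
plt.savefig("sales_by_quarter.png")    # write the chart to an image file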
Application Generators
Application generators contain preprogrammed modules that can generate entire applications, greatly
speeding development. A user can specify what needs to be done, and the application generator will
create the appropriate code for input, validation, update, processing, and reporting.
Most full-function application generators consist of a comprehensive, integrated set of development
tools: a database management system, data dictionary, query language, screen painter, graphics generator,
report generator, decision support/modeling tools, security facilities, and a high-level programming
language. Application generators now include tools for developing full-function Web sites.
Very High-Level Programming Languages
Very high-level programming languages are designed to generate program code with fewer instructions
than conventional languages such as COBOL or FORTRAN. Programs and applications based on these
languages can be developed in much shorter periods of time. Simple features of these languages can
be employed by end users. However, these languages are designed primarily as productivity tools
for professional programmers. APL, Nomad2, Python, and Ruby are examples of these languages.
Application Software Packages
A software package is a prewritten, precoded, commercially available set of programs that eliminates the
need for individuals or organizations to write their own software programs for certain functions. There

are software packages for system software, but the vast majority of package software is application
software.
Application software packages consist of prewritten application software that is marketed commercially.
These packages are available for major business applications on mainframes, minicomputers, and PCs.
Although application packages for large complex systems must be installed by technical specialists,
many application packages, especially those for PCs, are marketed directly to end users.
The Window on Organizations provides examples of geographic information systems software, a type of
leading-edge application software package that is proving very useful for businesses. Geographic
information systems (GIS) can analyze and display data using digitized maps to enhance planning and
decision making.
PC Software Tools
Some of the most popular and productivity-promoting software tools are the general purpose application
packages that have been developed for PCs, especially word processing, spreadsheet, data management,
presentation graphics, integrated software packages, e-mail, Web browsers, and groupware.
Word Processing Software. Word processing software electronically stores text data as a computer file, reducing the need for physical storage on paper. This software empowers users to edit documents electronically in memory, which eliminates the need to retype
an entire page to make corrections. Formatting options are also provided to adjust line spacing,
margins, character size, and column width. Notable examples of word processing packages include
Microsoft Word and WordPerfect.
Advanced features of most word processing software enhance the writing process. They include spell-
check tools, style checkers for grammar and punctuation analysis, thesaurus programs, and mail merge
programs, which can associate letters or other text documents with names and addresses in a mailing
list.
Spreadsheets. Electronic spreadsheet software provides computerized versions of traditional financial
modeling tools such as the accountant's columnar pad, pencil, and calculator. An electronic spreadsheet
is organized into a grid of columns and rows. The power of the electronic spreadsheet is evident when
one changes a value or values, because all other related values on the spreadsheet will be automatically
recomputed.
Spreadsheets are valuable for applications in which numerous calculations with pieces of data must be
related to each other. Spreadsheets also are useful for applications that require modeling and what-if
analysis. After the user has constructed a set of mathematical relationships, the spreadsheet can be
recalculated instantaneously using a different set of assumptions. A number of alternatives can easily be
evaluated by changing one or two pieces of data without having to rekey in the rest of the worksheet.
Many spreadsheet packages include graphics functions that can present data in the form of line graphs, bar graphs, or pie charts. The most popular spreadsheet packages are Microsoft Excel and Lotus 1-2-3; the newest versions of this software can read and write Web files.
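The what-if behavior of a spreadsheet can be imitated in a few lines of code. The sketch below recomputes a simple, invented profit formula under several sets of assumptions, much as a spreadsheet recalculates when a cell value changes:

# Imitating spreadsheet-style what-if analysis: one formula, several sets of
# assumptions.  The profit model and all figures are invented for illustration.
def profit(units_sold, unit_price, unit_cost, fixed_cost):
    return units_sold * (unit_price - unit_cost) - fixed_cost

scenarios = {
    "base case":    dict(units_sold=1000, unit_price=25.0, unit_cost=15.0, fixed_cost=6000),
    "higher price": dict(units_sold=900,  unit_price=28.0, unit_cost=15.0, fixed_cost=6000),
    "lower volume": dict(units_sold=700,  unit_price=25.0, unit_cost=15.0, fixed_cost=6000),
}

for name, assumptions in scenarios.items():
    print(f"{name}: profit = {profit(**assumptions):,.0f}")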

Data Management Software. Although spreadsheet programs are powerful tools for manipulating
quantitative data, data management Software is more suitable for creating and manipulating lists and for
combining information from different files. PC database management packages have programming
features and easy-to-learn menus that enable nonspecialists to build small information systems.
Data management software typically has facilities for creating files and databases and for storing,
modifying, and manipulating data for reports and queries. Popular database management software for the
personal computer includes Microsoft Access, which has been enhanced to publish data on the Web.
Presentation Graphics. Presentation graphics software allows users to create professional- quality
graphics presentations. This software can convert numeric data into charts and other types of graphics
and can include multimedia displays of sound, animation, photos, and video clips. The leading presentation
graphics packages include capabilities for computer-generated slide shows and translating content for
the Web. Microsoft PowerPoint, Lotus Freelance Graphics, and Aldus Persuasion are popular
presentation graphics packages.
Integrated Software Packages And Software Suites. Integrated software packages combine the
functions of the most important PC software packages, such as word processing, spreadsheets,
presentation graphics, and data management. This integration provides a more general-purpose software
tool and eliminates redundant data entry and data maintenance. Integrated packages are a compromise.
Although they can do many things well, they generally do not have the same power and depth as single-
purpose packages.
Integrated software packages should be distinguished from software suites, which are collections of
applications software sold as a unit. Microsoft Office 97 is an example. This software suite contains Word
processing software, Excel spreadsheet software, Access database software, PowerPoint presentation
graphics software, and Outlook, a set of tools for e-mail, scheduling, and contact management. Software
suites have some features of integrated packages, such as the ability to share data among different
applications, but they consist of full-featured versions of each type of software.
E-Mail Software. Electronic mail (e-mail) has become an integral part of personal and professional communication, allowing for instant exchange of messages between computers.
Individuals or businesses can utilize a networked computer to send short notes or longer documents to
recipients, either within the same network or on a different one. Many organizations maintain their own
e-mail systems, while others rely on services provided by telecommunication companies such as AT&T
and Verizon, commercial online platforms like Gmail and Outlook, or even public networks via the
internet.
Web browsers and comprehensive software suites, like Microsoft Office or Google Workspace, come
with integrated e-mail functions. However, there are also dedicated e-mail software applications, like
Thunderbird or Apple Mail, designed specifically for managing electronic correspondence. Many of
these modern e-mail platforms not only allow the sending and receiving of messages, but also come
equipped with features for routing messages to multiple recipients, forwarding messages, attaching
documents or multimedia files to messages, and sorting or prioritizing incoming mails.
Moreover, the advancement of technology has seen the rise of smart email systems that utilize AI and
machine learning. These systems can automate tasks such as sorting and categorizing emails, providing
smart replies, and even scheduling optimal times for sending emails. Email security has also improved,
with enhanced encryption and spam filtering techniques being incorporated to safeguard users' privacy
and protect them from malicious content.
Incorporating the growing trend of remote work and mobile computing, most modern email services
provide dedicated mobile applications and seamless synchronization across multiple devices. This
means users can access and manage their email correspondence anytime, anywhere, making electronic
communication more efficient and flexible than ever before.
Web Browsers. Web browsers are easy-to-use software tools for displaying Web pages and for accessing
the web and other Internet resources. Web browser software features a point-and-click graphical user
interface that can be employed throughout the Internet to access and display information stored on
computers at other Internet sites. Browsers can display or present graphics, audio, and video information
as well as traditional text, and they allow you to click on-screen buttons or highlighted words to link
to related Web sites. Web browsers have become the primary interface for accessing the Internet or
for using networked systems based on Internet technology. The two leading commercial Web browsers are
Microsoft's Internet Explorer and Netscape Navigator, which is also available as part of the Netscape
Communicator software suite. They include capabilities for using e-mail, file transfer, on-line
discussion groups and bulletin boards, along with other Internet services. Newer versions of these
browsers contain support for Web publishing and workgroup computing.
Groupware. Groupware provides functions and services to support the collaborative activities of work
groups. Groupware includes software for information sharing, electronic meetings, scheduling, and e-mail
and a network to connect the members of the group as they work on their own desktop computers, often
in widely scattered locations.
Groupware enhances collaboration by allowing the exchange of ideas electronically. All the messages on
a topic can be saved in a group, stamped with the date, time, and author. Any group member can review
the ideas of others at any time and add to them, or individuals can post a document for others to comment
upon or edit. Members can post requests for help, allowing others to respond. Finally, if a group so
chooses, members can store their work notes on the groupware so that all others in the group can see what
progress is being made, what problems occur, and what activities are planned.
The leading commercial groupware product has been Lotus Notes from the Lotus Development
Corporation. The Internet is rich in capabilities to support collaborative work. Microsoft Internet Explorer
4.0 and Netscape Communicator include groupware functions, such as e-mail, electronic scheduling
and calendaring, audio and data conferencing, and electronic discussion groups and databases. Microsoft's
Office 2000 software suite includes groupware features using Web technology.
New Software Tools and Approaches
A growing backlog of software projects and the need for businesses to fashion systems that are flexible or
that can run over the Internet have stimulated new approaches to software development with object-
oriented programming tools and new programming languages such as Java and hypertext markup language
(HTML).

Object Oriented Programming
Traditional software development methods have treated data and procedures as independent components.
A separate programming procedure must be written every time someone wants to take an action on a
particular piece of data. The procedures act on data that the program passes to them.
What Makes Object-Oriented Programming Different?
Object-Oriented programming combines data and the specific procedures that operate on those data into
one object. The object combines data and program code. Instead of passing data to procedures, programs
send a message for an object to perform a procedure that is already embedded into it. (Procedures are
termed methods in object oriented languages.) The same message may be sent to many different
objects, but each will implement that message differently. For example, an object-oriented financial
application might have Customer objects sending debit and credit messages to Account objects. The
Account objects in turn might maintain Cash-on- Hand, Accounts-Payable, and Accounts-Receivable
objects.
An object's data are hidden from other parts of the program and can only be manipulated from inside the
object. The method for manipulating the object's data can be changed internally without affecting
other parts of the program. Programmers can focus on what they want an object to do, and the object
decides how to do it.
An object's data are encapsulated from other parts of the system, so each object is an independent
software building block that can be used in many different systems without changing the program
code. Thus, object-oriented programming is expected to reduce the time and cost of writing software by
producing reusable program code or software chips that can be reused in other related systems. Future
software work can draw upon a library of reusable objects, and productivity gains from object-oriented
technology could be magnified if objects were stored in reusable software libraries and explicitly designed
for reuse.
Object-oriented programming has spawned a new programming technology known as Visual programming.
With visual programming, programmers do not write code. Rather, they use a mouse to select and
move around programming objects, copying an object from a library into a specific location in a program,
or drawing a line to connect two or more objects. Visual Basic is a widely used visual programming tool for
creating applications that run under Microsoft Windows.
A few key object-oriented concepts deserve closer attention:
Encapsulation: This is the idea that data (attributes) and the methods that act on that data are bundled together into one unit, an object. This helps to hide the internal implementation details and protects the data from being directly accessed or modified.
Inheritance: Inheritance is the process by which one class takes on the properties (methods and
attributes) of another class. This is done to achieve code reusability and can also be used to add
more features to an existing class without modifying it.
Polymorphism: This is the ability of an object to take on many forms. The most common use of
polymorphism in OOP occurs when a parent class reference is used to refer to a child class object.
This allows functions to use entities of different types at different times.

Abstraction: Abstraction refers to the idea of hiding complexity. It is the process of hiding the
implementation details from the user and showing only the functionality. This is done using abstract
classes and interfaces.
Finally, regarding visual programming, while it's true that some OOP languages like Visual Basic offer
visual interfaces for programming, it's important to note that not all OOP languages offer such features.
The main focus of OOP is not on visual programming, but rather on the organization and structuring of
code for reusability, extensibility, and maintainability.
Object-Oriented Programming Concepts
Object-oriented programming is based on the concepts of class and inheritance. Program code is not
written separately for every object but for classes, or general categories, of similar objects. Objects
belonging to a certain class have the features of that class. Classes of objects in turn can inherit all the
structure and behaviors of a more general class and then add variables and behaviors unique to
each object. New classes of objects are created by choosing an existing class and specifying how the
new class differs from the existing class, instead of starting from scratch each time.
Classes are organized hierarchically into superclasses and subclasses. For example, a car class might
have a vehicle class for a superclass, so that it would inherit all the methods and data previously defined
for vehicle. The design of the car class would only need to describe how cars differ from vehicles. A
banking application could define a Savings-Account object that is very much like a Bank-Account object
with a few minor differences. Savings-Account inherits all the Bank-Account's state and methods and
then adds a few extras.
We can see how class and inheritance work in Figure 3-25, which illustrates a tree of classes concerning
employees and how they are paid. Employee is the common ancestor of the other four classes.
Nonsalaried and salaried are subclasses of Employee, whereas Temporary and Permanent are
subclasses of Nonsalaried. The variables for the class are in the top half of the box, and the methods
are in the bottom half. Darker items in each box are inherited from some ancestor class. Lighter
methods, or class variables, are unique to a specific class and they override, or redefine, existing
methods. When a subclass overrides an inherited method, its object still responds to the same message, but it executes its own definition of the method rather than its ancestor's. Whereas Pay is a method inherited from some superclass, the method Pay-OVERRIDE is specific to the Temporary, Permanent, and Salaried classes.
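A compact Python rendering of the employee hierarchy in Figure 3-25 is sketched below. The pay rules are invented; the point is simply to show inheritance, method overriding, and the same message producing different behavior in different classes:

# A compact version of the Employee hierarchy of Figure 3-25.
class Employee:
    def __init__(self, name):
        self.name = name                 # data and methods bundled in one object

    def pay(self):
        raise NotImplementedError        # each subclass defines its own rule


class Salaried(Employee):
    def __init__(self, name, monthly_salary):
        super().__init__(name)
        self.monthly_salary = monthly_salary

    def pay(self):                       # overrides the inherited method
        return self.monthly_salary


class Temporary(Employee):               # a nonsalaried employee paid by the hour
    def __init__(self, name, hours, hourly_rate):
        super().__init__(name)
        self.hours = hours
        self.hourly_rate = hourly_rate

    def pay(self):                       # overrides the inherited method
        return self.hours * self.hourly_rate


# The same message ("pay") is sent to different objects; each responds with
# its own definition of the method (polymorphism).
for employee in [Salaried("Sita", 60000), Temporary("Ram", 80, 400)]:
    print(employee.name, employee.pay())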
Object-oriented software can be custom-programmed or it can be developed with rapid- application
development tools, which can potentially cost 30 percent to 50 percent less than traditional program
development methods. Some of these tools provide visual programming environments in which
developers can create ready-to-use program code by "snapping" together prebuilt objects. Other tools
generate program code that can be compiled to run on a variety of computing platforms. The Window
on Technology explores the use of such tools to speed up object-oriented software creation.


Fig 3-25 Class, subclasses, inheritance, and overriding. This figure illustrates how a message's method can come from the class itself or an ancestor class. Class variables and methods are shaded when they are inherited from above.
Java
Java is a programming language named after the many cups of coffee its Sun Microsystems developers
drank along the way. It is an object-oriented language, combining data with the functions for processing
the data, and it is platform-independent. Java software is designed to run on any computer or computing
device, regardless of the specific microprocessor or operating system it uses. A Macintosh Apple, an IBM
personal computer running Windows, a DEC computer running UNIX, and even a smart cellular phone
or personal digital assistant can share the same Java application. Java can be used to create miniature programs called "applets" designed to reside on centralized network servers. The network delivers only the applets required for a specific function. With Java applets residing on a network, a user can download
only the software functions and data that he or she needs to perform a particular task, such as analyzing
the revenue from one sales territory. The user does not need to maintain large software programs or data
files on his or her desktop machine. When the user is finished with processing, the data can be saved
through the network. Java can be used with network computers because it enables all processing software
and data to be stored on a network server, downloaded via a network as needed, and then placed back on
the network server.
Java is also a very robust language that can handle text, data, graphics, sound, and video, all within
one program if needed. Java applets often are used to provide interactive capabilities for Web pages. For
example, Java applets can be used to create animated cartoons or real-time news tickers for a Web site, or to add a capability to a Web page to calculate a loan payment schedule on-line in response to financial data input by the user. (Microsoft's ActiveX sometimes is used as an alternative to Java for creating
interactivity on a Web page. ActiveX is a set of controls that enables programs or other objects such as
charts, tables, or animations to be embedded within a Web page. However, ActiveX lacks Java's machine
independence and was designed for a Windows environment.)
Java also can be used to create more extensive applications that can run over the Internet or over a
company's private network. Java can let PC users manipulate data on networked systems using Web
browsers, reducing the need to write specialized software. For example, Sprint PCS, the mobile-phone
partnership, is using Java for an application that allows its employees to use Web browsers to analyze
business data and send reports to colleagues via e-mail on an internal network. The system it replaces
required specialized desktop software to accomplish these tasks and restricted these reports to a smaller
number of employees (Clark, 1998).
To run Java software, a computer needs an operating system containing a Java Virtual Machine (JVM). (A JVM is incorporated into Web browser software such as Netscape Navigator or Microsoft Internet Explorer.) The Java Virtual Machine is a compact program that enables the computer to run Java applications. The JVM lets the computer simulate an ideal standardized Java computer, complete with its own representation of a CPU and its own instruction set. The Virtual Machine executes Java programs
by interpreting their commands one by one and commanding the underlying computer to perform all the
tasks specified by each command.
Management and Organizational Benefits of Java
Companies are starting to develop more applications in java because such applications can potentially
run in Windows, UNIX, IBM mainframe, Macintosh, and other environments without having to be rewritten for each computing platform. Sun Microsystems terms this phenomenon "write once, run anywhere." Java also could allow more software to be distributed and used through networks. Functionality could be stored with data on the network and downloaded only as needed. Companies might
not need to purchase thousands of copies of commercial software to run on individual computers; instead
users could download applets over a network and use network computers.
Java is similar to C++ but considered easier to use. Java program code can be written more quickly than with other languages. Sun claims that no Java program can penetrate the user's computer, making it safe
from viruses and other types of damage that might occur when downloading more conventional programs
off a network.
Despite these benefits, Java has not yet fulfilled its early promise to revolutionize software development and use. Programs written in current versions of Java tend to run slower than "native" programs written for a particular operating system, because they must be interpreted by the Java Virtual Machine. Vendors such as Microsoft are supporting alternative versions of Java that include subtle differences in their Virtual Machines that affect Java's performance on different pieces of hardware and operating systems. Without a standard version of Java, true platform independence cannot be achieved. The Window on Management explores the management issues posed by Java as companies
consider whether to use this programming language.
Hypertext markup language (HTML)
Hypertext markup language (HTML) is a page description language for creating hypertext or hypermedia
documents such as Web pages. HTML uses instructions called tags to specify how text, graphics, video,
and sound are placed on a document and to create dynamic links to other documents and objects stored
in the same or remote computers. Using these links, a user need only point at a highlighted key word
or graphic, click on it, and immediately be transported to another document.
HTML programs can be custom-written, but they also can be created by using the HTML authoring
capabilities of Web browsers or of popular word-processing, spreadsheet, data management, and presentation graphics software packages. HTML editors such as Claris Home Page and Adobe PageMill are more powerful HTML authoring tools for creating Web pages.
Low-Code/No-Code Development Platforms
These platforms enable users to build applications with minimal coding knowledge. They provide visual
interfaces and pre-built components to simplify the development process.
DevOps and CI/CD
DevOps (Development and Operations) focuses on streamlining collaboration between development and
IT operations teams. Continuous Integration/Continuous Deployment (CI/CD) pipelines automate the
building, testing, and deployment of software, ensuring rapid and reliable delivery.
Containerization and Orchestration
Containerization tools like Docker allow applications and their dependencies to be packaged into
lightweight, portable containers. Orchestration frameworks like Kubernetes manage and automate the
deployment, scaling, and management of containers.
Machine Learning and AI Libraries
Libraries such as TensorFlow and PyTorch provide developers with powerful tools to build and train
machine learning models. They offer extensive support for tasks like computer vision, natural language
processing, and data analysis.
Serverless Computing
Serverless platforms, like AWS Lambda and Azure Functions, abstract away the infrastructure
management. Developers can focus on writing code in the form of functions, which automatically
scale based on demand and only incur costs for actual usage.
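The general shape of such a function can be sketched as an AWS Lambda-style Python handler. The event fields, the VAT rate, and the business logic below are assumptions made purely for illustration:

# The general shape of a serverless function (AWS Lambda style): a single
# handler that the platform invokes on demand.  Event fields and the logic
# are invented for illustration.
import json

def handler(event, context):
    # 'event' carries the request data; 'context' carries runtime information.
    amount = float(event.get("amount", 0))
    vat = round(amount * 0.13, 2)          # assumed VAT rate, for illustration only
    return {
        "statusCode": 200,
        "body": json.dumps({"amount": amount, "vat": vat, "total": amount + vat}),
    }

# Locally, the handler can be exercised like any ordinary function:
print(handler({"amount": 1000}, None))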
Microservices Architecture
Microservices involve building applications as a collection of small, loosely coupled services that can be
developed, deployed, and scaled independently. This approach enables flexibility, scalability, and easier
maintenance.
Agile and Scrum Methodologies
Agile and Scrum methodologies prioritize iterative development, frequent collaboration, and adapting to
change. They focus on delivering incremental value and fostering teamwork.
Data Analytics and Visualization
Tools like Tableau, Power BI, and Apache Superset help users analyze and visualize complex data sets,
making it easier to draw insights and communicate findings.

Blockchain Technology
Blockchain provides secure and decentralized data storage and verification. It has applications in areas
like cryptocurrencies, supply chain management, and smart contracts.
Quantum Computing
Although still in its early stages, quantum computing explores the potential of quantum physics principles
to solve complex computational problems. It has the potential to revolutionize fields such as cryptography
and optimization.
Remember to stay updated with the latest industry news and developments to explore the newest software tools and approaches beyond those covered here.
Peoples, Procedures and Data
People
People are the essential ingredient for the successful operation of all information systems. These people include end users and IS specialists.
End users
Also called users or clients, end users are people who use an information system or the information it produces.
They can be customers, salespersons, engineers, clerks, accountants, or managers and are found at all
levels of an organization. In fact, most of us are information system end users. Most end users in business are knowledge workers, that is, people who spend most of their time communicating and collaborating in
teams and workgroups and creating, using and distributing information.
IS Specialists
IS specialists are people who develop and operate information systems. They include systems analysts, software developers, system operators, and other managerial, technical, and clerical IS personnel. Briefly, systems analysts design information systems based on the information requirements of end users, software developers create computer programs based on the specifications of systems analysts, and system operators help monitor and operate large computer systems and networks.
Procedure
Procedures are operating instructions for the people who will use an information system. Examples are
instructions for filling out a paper form or using a software package.
People's View of a Computerized Database
Data
Data are more than the raw material of information systems. Managers and information systems
professionals have broadened the concept of data resources. They realize that data constitute valuable
organizational resources. Thus, you should view data just as you would any organizational resource that
must be managed effectively to benefit all stakeholders in an organization.

The concept of data as an organizational resource has resulted in a variety of changes in the modern
organization. Data that previously were captured as a result of a common transaction are now stored,
processed, and analyzed using sophisticated software applications that can reveal complex relationships
among sales, customers, competitors, and markets. In today's wired world, the data to create a simple list
of an organization's customers are protected with the same energy as the cash in a bank vault. Data are
the lifeblood of today's organizations, and the effective and efficient management of data is considered
an integral part of organizational strategy. Data can take many forms, including traditional alphanumeric
data, composed of numbers, letters, and other characters that describe business transactions and other
events and entities; text data, consisting of sentences and paragraphs used in written communications;
image data, such as graphic shapes and figures or photographic and video images; and audio data,
including the human voice and other sounds.
The data resources of information systems are typically organized, stored, and accessed by a variety of
data resource management technologies into:
• Databases that hold processed and organized data.
• Knowledge bases hold knowledge in a variety of forms, such as facts, rules, and case
examples about successful business practices.
For example, data about sales transactions may be accumulated, processed, and stored in a Web enabled
sales database that can be accessed for sales analysis reports by managers and marketing professionals.
Knowledge bases are used by knowledge management systems and expert systems to share knowledge or
give expert advice on specific subjects.
Data modeling helps identify the definable information required by the work system. (It does not address
soft information such as one-time situations, problems, exceptions, and opportunities.) The next step
after data modeling is deciding how to structure the information in the computerized information
system. Although users are typically shielded from much of the internal complexity of computerized
databases, they need to know about types of data, logical versus physical views of data, and other topics
that help them understand what information the system contains and how they can access it.
Data versus Information
The word data is the plural of datum, though data commonly represents both singular and plural forms.
Data are raw facts or observations, typically about physical phenomena or business transactions. For
example, a spacecraft launch or the sale of an automobile would generate a lot of data describing those
events. More specifically data are objective measurements of the attributes (the characteristics) of entities
(e.g., people, places, things, events).
Example. Business transactions, such as buying a car or an airline ticket, can produce a lot of data.
Just think of the hundreds of facts needed to describe the characteristics of the car you want and its
financing or the intricate details for even the simplest airline reservation.
People often use the terms data and information interchangeably. However, it is better to view data as raw
material resources that are processed into finished information products. Then we can define information as
data that have been converted into a meaningful and useful context for specific end users. Thus, data are
usually subjected to a value-added process (data processing or information processing) during which (1)

their form is aggregated, manipulated, and organized; (2) their content is analyzed and evaluated and
(3) they are placed in a proper context for a human user.
The issue of context is really at the heart of understanding the difference between data and information.
Data can be thought of as context independent: a list of numbers or names, by itself, does not provide any
understanding of the context in which it was recorded. In fact, the same list could be recorded in a variety
of contexts. In contrast, for data to become information, both the context of the data and the perspective of
the person accessing the data become essential. The same data may be considered valuable information to
one person and completely irrelevant to the next. Just think of data as potentially valuable to all and
information as valuable relative to its user. Example. Names, quantities, and dollar amounts recorded on
sales forms represent data about sales transactions. However, a sales manager may not regard these as
information. Only after such facts are properly organized and manipulated can meaningful sales
information be furnished, specifying, for example, the amount of sales by product type, sales territory, or
salesperson.
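To make the example concrete, the conversion of raw sales data into sales information can be expressed as
a simple SQL query. This is only an illustrative sketch; the table and column names (sales, product_type,
sales_territory, sale_amount) are assumed rather than taken from any particular system.

-- Each row of the hypothetical sales table is one raw transaction (data).
-- Grouping and summing the rows produces information a sales manager can use.
SELECT product_type,
       sales_territory,
       SUM(sale_amount) AS total_sales
FROM sales
GROUP BY product_type, sales_territory
ORDER BY total_sales DESC;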
Types of Data
The five primary types of data in today's information systems include predefined data items, text,
images, audio, and video. Traditional business information systems contained only predefined data items
and text. More recent advances in technology have made it practical to process pictures and sounds using
techniques such as digitization, voice messaging, and teleconferencing.
Predefined data items include numerical or alphabetical items whose meaning and format are specified
explicitly and then used to control calculations that use the data. For example, credit card number,
transaction date, purchase amount, and merchant ID are predefined data items in information systems
that authorize and record credit card transactions. Most of the data in transaction-oriented business
systems is of this type, and the operation of these systems is programmed based on the meaning
and precise format of these data items. A well-known illustration is the Y2K (year 2000) problem,
which arose because many systems stored the year as a two-digit item; the problem would never have
occurred if the data item year had been defined as a four-digit number in all information systems.
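As a hedged illustration of how the format of a predefined data item is fixed in advance, the following
SQL definition sketches a credit card transaction record. The table and column names are assumed, and
the two alternative year columns are shown only to contrast a two-digit definition with a four-digit one.

CREATE TABLE credit_card_txn (
    card_number      CHAR(16),
    merchant_id      CHAR(8),
    transaction_date DATE,
    purchase_amount  DECIMAL(10,2),
    txn_year_2digit  CHAR(2),     -- e.g. '99': ambiguous after 1999 (the Y2K defect)
    txn_year_4digit  SMALLINT     -- e.g. 1999: unambiguous
);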
Text is a series of letters, numbers, and other characters whose combined meaning does not depend on a
pre-specified format or definition of individual items. For example, word processors operate on text
without relying on pre-specified meanings or definitions of items in the text; rather, the meaning of
text is determined by reading it and interpreting it.
Images are data in the form of pictures, which may be photographs, hand-drawn pictures, or graphs
generated from numerical data. Images can be stored, modified, and transmitted in many of the same
ways as text. Editing of images provides many other possibilities, however, such as changing the size of
an object, changing its transparency or shading, changing its orientation on the page, and even moving
it from one part of a picture to another. Like text and unlike predefined data items, the meaning of
an image is determined by looking at the image and interpreting it.
Audio is data in the form of sounds. Voice messages are the kind of audio data encountered most
frequently in business. Other examples include the sounds a doctor hears through a stethoscope and the
sounds an expert mechanic hears when working on a machine. The meaning of audio data is determined
by listening to the sounds and interpreting them.

Video combines pictures and sounds displayed over time. The term video is used here because it is
becoming the catch-phrase for multiple types of data display that involve both sound and pictures, such
as a videoconference. The meaning of video data is determined by viewing and listening over a length of
time.
Although this book discusses these five types of data extensively, other types of data can be important in
certain situations as well. For example, taste and smell are important in the restaurant and wine
businesses, and the development of a fine sense of touch for robots is a key technical challenge in that
area.
The five primary types of data serve different purposes and have different advantages and
disadvantages. Predefined data items provide a terse, coded description of some event or object, but lack
the richness of text, images, audio, or video. When Nissan truck designers commissioned a
photographer to take pictures of small trucks in use as commuter and family cars, they were startled
to discover how little their trucks were actually being used for the purposes being advertised and reported
in market surveys. One surprise was how many people were eating in trucks, "not just drinks, but
whole spaghetti dinners!" They also noticed how much people resembled their vehicles and how
scuffed up some of the vehicles were, leading them to wonder whether vehicles could be more like
denim and look better the more worn they became. Richer information is not necessarily better, however,
and it can be worse. For example, a car dealer's accountants just want to know how much the car was
sold for; they have no desire to read a story, listen to a tape, or watch a video. Predefined data items
help them by reducing the sale of a car to a few facts they need to do their jobs. Such data might also be
fine for a manager who needs to know whether weekly sales targets have been met. If the manager
wants to understand why salespeople are having trouble meeting their goals, it might be more useful to
observe their work.
What is a Database?
The information in a computerized system is often called a database, although the term is used in many
ways. For example, people sometimes refer to the World Wide Web as a text database even though
the structure of the data is not defined in any independent way. This discussion of a user's view of a
computerized database assumes the database consists of one or more tables of predefined data items. Text
databases will be discussed separately.
Excluding text databases, we can think of a database as a structured collection of data items stored,
controlled, and accessed through a computer based on predefined relationships between predefined types
of data items related to a specific business, situation, or problem. By this definition, paper memos in a
file cabinet are not a database because they are not accessed through a computer. Similarly, the
entire World Wide Web is not a database of this type because it lacks predefined relationships between
predefined types of data items (even though, as will be discussed later, a particular Web page might
contain links to other pages and might provide access to a database).
Databases come in different forms and are used in many different ways. Work systems discussed thus far
in this book use databases for storing and retrieving information needed for day-to-day operation of firms.
The databases in these systems contain data about things such as inventory orders, shipments, customers,
and employees. Some of the everyday use focuses on retrieving and updating specific items of information,
such as adjusting the units on hand of a product after each sale, or recording an order from a customer.

Other everyday uses of databases produce summaries of current status or recent performance. Examples
include a listing showing the total units on hand for each product group, or a listing showing total sales last
week broken out by state.
In some situations the same database is used for both updating specific information and generating
status and performance reports. In other situations, it is more practical to use one database (often called
the production database) for real time updating and to generate a copy of that database periodically for
status and performance reports for management. If this is done, the copy will be up to one shift or one
day out of date, depending on how frequently the downloads occur, but that is usually current enough for
purposes related to reporting.
Notice the difference between the term database and database management system (DBMS). A DBMS
is an integrated set of programs used to define, update, and control databases.
Logical Versus Physical Views of Data
The basic idea about data organization in computerized information systems is that the person using the
data does not need to know exactly where or how the data are stored in the computer. For example, a real
estate agent wanting a list of all 3-bedroom apartments rented in the last two weeks should ideally be
able to say, "List all 3-bedroom apartments rented in the last two weeks." Even if the information
system only accepts coded questions in special formats, the user should not have to know the computer-
related details of data storage.
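Posed in SQL, the agent's request might look like the sketch below. The table and column names (rental,
bedrooms, rental_date) are assumed for illustration, and the exact date-arithmetic syntax varies from one
DBMS product to another; the point is that the user states what is wanted, not where or how the rows are
stored.

-- "List all 3-bedroom apartments rented in the last two weeks."
SELECT *
FROM rental
WHERE bedrooms = 3
  AND rental_date >= CURRENT_DATE - INTERVAL '14' DAY;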
Even most programmers do not need to know exactly where each item resides in the database. Instead,
users and programmers need a model of how the database is stored. The technical workings of the
computer system then translate between the model of the database and the way the database is actually
handled technically. Hiding unnecessary details in this way is totally consistent with the way many things
happen in everyday life. For example, people can drive a car without knowing exactly how its electrical
system operates.
The terms logical view of data and physical view of data are often used to describe the difference between
the way a user thinks about the data and the way computers actually handle the data. A logical view of
data expresses the way the user or programmer thinks about the data. It is posed in terms of a data model,
a sufficiently detailed description of the structure of the data to help the user or programmer think about
the data. This data model may reveal little about exactly how or where each item of data is stored.
The technical aspects of the information system (the programming language, database management
system, and operating system) then work together to convert this logical view into a physical view of data,
that is, exactly what the machine has to do to find and retrieve the data.
The physical view is stated in terms of specific locations in storage devices plus internal techniques used
to find the data. Because this book is directed at business professionals rather than programmers, it
emphasizes logical views of data.
Files
The file is the simplest form of data organization used in business data processing. A file is a set of related
records. A record is a set of fields, each of which is related to the same thing, person, or event. A field
is a group of characters that have a predefined meaning. A key is a field that uniquely identifies which
person, thing, or event is described by the record. Each record contains a set of fields, such as social
security number, last name, and birth date. Social security number is the key field because two students
will have different social security numbers even if they have the same name.
These basic terms about files correspond to the entities, relationships and attributes discussed in the
previous section. The file contains data about a type of entity (student). Each record is the data for a
particular entity (such as Alvin Bates). The key in the record identifies the entity. The other fields are
attributes of that entity. This example shows that a file can be seen as a table. Each row of the table
corresponds to a different record. Each column represents a different field. The importance of thinking
of a file as a table will become clear when the relational data model is discussed. The data in the file is
organized consistently. This consistency is the fundamental characteristic of computerized files and a
database that makes it possible to write programs that use the data.
The order of records in a file also matters. The four records in the table are sorted by last name. Their
order would have been different if they had been sorted by social security number. Sorting the data by
social security number might be more appropriate for other applications, such as submitting payroll taxes.
Some database management systems make it possible to maintain multiple sorts of the same data so that
it can be accessed in multiple ways.
The general description of a file uses just a few terms (file, record, field, and key) that are widely
applicable and easily understood. When data are in the form of a file, users or programmers can easily
specify the subset of the data they need. They can select the records based on the values in individual
fields. For example, they can say they want all the students who live in Oakdale or all students born
before 1979. They can also identify the specific fields they want. For example, for a mailing list they can
select the names and addresses, but not social security number.
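If the student file is held as a table in a relational DBMS, such selections can be written directly in SQL.
The column names below (city, last_name, street_address, birth_date) are assumed for illustration only.

-- Records selected by the value in one field, and only the fields needed.
SELECT last_name, street_address, city
FROM student
WHERE city = 'Oakdale';

-- Records selected by comparing a date field.
SELECT last_name, birth_date
FROM student
WHERE birth_date < DATE '1979-01-01';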
Organizing data as a file works well when the information needed for the situation is limited to the
attributes of a single type of entity. The entity is the student (identified by social security number), and
the attributes include name, address, and date of birth.
Unfortunately, organizing all the data in a situation as a single file is often impractical. A course
registration system, for example, involves several types of entities, including students and courses. If you
were using a paper-and-pencil system to keep track of this information, you would probably organize it
into separate file folders, one for each type of entity, because it would be easier to keep track of the data
that way.

Organizing the data as totally separate files for each entity type is usually inadequate, however,
because the entity types are related. Otherwise there would be no reason to think of them as parts of
the same system. The registration system requires combining data from different files and therefore needs
to maintain links between entities of different types.
Relational Databases
The relational data model is the predominant logical view of data used in current information systems
because it provides an easily understood way to combine and manipulate data in multiple files in
a database. Posed in terms of this model, a relational database is a set of two- dimensional tables in which
one or more key-fields in each table are associated with corresponding key or non-key fields in other
tables. (The term "relational" comes from the fact that relational databases use the term relation instead
of the term file. A relation is a keyed table consisting of records.)

Relational databases have the advantage of meshing with the data modeling techniques mentioned
earlier. Entity-relationship diagrams provide a simple starting point for thinking about the tables in a
relational database. The starting point includes a table for each entity type and for each relationship in
the diagram.
Designing a database for efficiency requires a technique called normalization, which eliminates
redundancies from the tables in the database and pares them down to their simplest form. Going beyond
just normalization, database designers must also organize the database to achieve internal efficiency
by reflecting the way the users will access the data. For a small database, this may be a simple question.
For a large database with stringent response time requirements, this optimization process may stretch the
knowledge of database experts.
Although the internal structure of a relational database may be quite complicated, its straightforward
appearance to users makes it comparatively easy to work with by combining and manipulating tables to
create new tables. The industry standard programming language for expressing data access and
manipulation in relational databases is called SQL (Structured Query Language), but it is often possible
to pose straightforward database queries without using SQL. Relational databases have become popular
because they are easier to understand and work with than other forms of database organization. Early
implementations of relational databases were slow and inefficient, but faster computers and better software
have reduced these shortcomings.
Database Management Systems
A database management system (DBMS) is an integrated set of programs used to define databases,
perform transactions that update databases, retrieve data from databases, and establish database
efficiency. Some DBMSs for personal computers can be used directly by end users to set up applications.
Other DBMSs are much more complex and require programmers to set up applications. DBMSs include
a query language that allows end users to retrieve data. DBMSs make data more of a resource and
facilitate programming work, thereby making access to data more reliable and robust.
Making Data More of a Resource
DBMSs provide many capabilities that help in treating data as a resource. DBMSs improve data access
by providing effective ways to organize data. They improve data accuracy by checking for identifiable
errors during data collection and by discouraging data redundancy. They encourage efficiency by
providing different ways to organize the computerized database. They encourage flexibility by providing
ways to change the scope and organization of the database as business conditions change. They support
data security by helping control access to data and by supporting recovery procedures when problems
arise. They support data manageability by providing information needed to monitor the database.
Making Programming More Efficient
DBMSs also contain numerous capabilities that make programming more efficient. They provide
consistent, centralized methods for defining the database. Also, they provide standard subroutines
that programmers use within application programs for storing and retrieving data in the database. DBMSs
free the programmer or end user from having to reinvent these complex capabilities.

DBMSs for different purposes provide vastly different features. A DBMS for a personal computer
contains far fewer capabilities than a DBMS for a mainframe or complex network. The following
discussion focuses on the range of DBMS functions rather than on the capabilities in any one DBMS.
Business professionals unaware of these issues do not appreciate what it takes to use a DBMS successfully.
Defining the Database and Access to Data
DBMS applications start with a data definition, the identification of all the fields in the database, how
they are formatted, how they are combined into different types of records, and how the record types
are interrelated. A central tool for defining data in a DBMS is a repository called a data dictionary. For
each data item the data dictionary typically includes:
• Name of the data item
• Definition of the data item
• Name of the file the data item is stored in
• Abbreviation that can be used as a column heading for reports
• Typical format for output (for example, $X,XXX.XX or MM-DD-YY)
• Range of reasonable values (for example, the codes used for months)
• Identification of data flow diagrams where it appears in system documentation
• Identification of user input screens and output reports where it appears
Data dictionaries can be used throughout the system development process. In the early stages they
serve as a repository of terms. This is especially useful for coordination when many people are working
on the project at the same time. During programming, data dictionaries make it unnecessary to write the
same information multiple times and help check for errors and inconsistencies. Instead of cluttering
programs with subroutines that check input data, equivalent data checks can be inserted automatically
from the data dictionary when the program is compiled. This is an example of setting something up
once and reusing it so that the programmer doesn't have to recreate it repeatedly.
A data dictionary consists of metadata, information defining data in the information system. Aside
from defining the data in an information system, metadata helps in linking computer equipment from
different vendors. This can be done by using interfaces that include two types of data, the application data
(such as the information about courses and sections) and metadata defining the meaning and format of the
application data.
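Most relational DBMSs expose their data dictionary (or catalog) as tables of metadata that can themselves
be queried. A minimal sketch using the standard INFORMATION_SCHEMA views is shown below; the
table name employee is assumed, and the exact catalog views available differ between products.

SELECT column_name, data_type, character_maximum_length
FROM information_schema.columns
WHERE table_name = 'employee';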
The data definition for a database is often called a schema. Because some users may not be allowed
access to part of the data in the database, many DBMSs support the use of subschemas. A subschema is
a subset of a schema and therefore defines a particular portion of a database. The system of schemas and
subschemas supports data independence because schemas and subschemas can be defined outside of
the programs that access the data. Data independence permits modifications of the format or content
of part of a database without having to retest every program that accesses the data. This is a major
convenience for programmers, especially in large systems with many programs that access the same
database.
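In a relational DBMS, a view can play the role of a subschema by exposing only part of the schema to a
particular group of users. The sketch below assumes an employee table and a benefits_clerk role; both
names are hypothetical.

-- A subschema for retirement benefits work: only the columns that
-- group of users is allowed to see.
CREATE VIEW benefits_subschema AS
SELECT employee_name, address, social_security_number,
       pension_plan, retirement_benefits
FROM employee;

GRANT SELECT ON benefits_subschema TO benefits_clerk;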

Although schemas and subschemas are logical views of how the database is organized, in order to store
or retrieve data DBMSs also need a physical definition of exactly where the files reside in the computer
system. This physical definition can be quite complicated if the database contains many different
files or is spread across multiple storage devices in multiple locations. A DBMS must reserve the areas
in physical storage where the data will reside. It must also organize the data for efficient retrieval.
Because the number of records in any file in a database can grow or shrink over time, a DBMS must
provide ways to change the amount of space reserved for each file in the database. After the database
is defined, a DBMS plays a role in processing transactions that create or modify data in the database.
Methods for Accessing Data in a Computer System
A computer system finds stored data either by knowing the exact location or by searching for the data.
Different DBMSs contain different internal methods for storing and retrieving data. This section looks at
three methods that could be used: sequential access, direct access, and indexed access. Programmers set
up DBMSs to use whatever method is appropriate for the situation, while also shielding users from
technical details of data access.
Sequential Access
The earliest computerized data processing used sequential access in which individual records within a
single file are processed in sequence until all records have been processed or until the processing is
terminated for some other reason. Sequential access is the only method for data stored on tape, but it can
also be used for data on a direct access device such as a disk. Sequential processing makes it unnecessary
to know the exact location of each data item because data are processed according to the order in which
they are stored.
Although sequential processing is useful for many types of scheduled periodic processing, it has the same
drawback as a tape cassette containing a number of songs. lf you want to hear the song at the end of the
tape, you have to pass through everything that comes before it. Imagine a telephone directory that is stored
alphabetically on a tape. To find the phone number of a person named Adams, you would mount the tape
and search until the name Adams was encountered or passed. If the name were Zwicky you would need
to search past almost every name in the directory before you could find the phone number you needed.
On the average, you would have to read past half of the names in the directory. As if this weren't bad
enough, you would also need to rewind the tape. These characteristics of sequential access make it
impractical to use when immediate processing of the data is required.
Direct Access
Processing events as they occur requires direct access, the ability to find an individual item in a file
immediately. Magnetic disk storage was developed to provide this capability. Optical storage is another
physical implementation of the same logical approach for finding data. To understand how direct access
works, imagine that the phone directory described earlier is stored on a hard disk. A program uses a
mathematical procedure to calculate the approximate location on the hard disk where Sam Patterson's
phone number is stored. Another program instructs the read head to move to that location to find the data.
Using the same logic to change George Butler's phone number, one program calculates a location for
the phone number, and another program directs the read head to store the new data in that location.

Finding data on disk is not as simple as this example implies because procedures for calculating where a
specific data item should reside on a disk sometimes calculate the same location for two different data
items. This result is called a collision. For example, assume that the procedure calculates that the phone
numbers for both Liz Parelli and Joe Ramirez should be stored in location 45521 on a disk. If neither
phone number is yet on the disk and the user wishes to store Joe's number, it will be stored in location 45521.
If the user stores Liz's number later, the computer will attempt to store it in location 45521, but will
find that this location is already occupied. It will then store Liz's phone number in location 45522 if that
location is not occupied. If it is occupied, the computer will look at successive locations until it finds an
empty one. When Liz's number is retrieved at some later time, the computer will look for it first in location
45521. Observing that the number in this location is not Liz's, it will then search through successive
locations until it finds her number.
Because users just want to get a telephone number and don't care about how and where it is stored
on a hard disk, the DBMS shields them from these details. Someone in the organization has to know
about these details, however, because ignoring them can cause serious problems. When direct access
databases are more than 60% to 70% full, the collisions start to compound, and response time degrades
rapidly. To keep storage and retrieval times acceptable, the amount of disk space available for the
database must be increased. Someone must unload the database onto another disk or a tape and then
reload it so that it is more evenly distributed across the allocated disk space. Maintaining the performance
of large databases with multiple users and frequent updating requires fine-tuning by experts.
Indexed Access
A third method for finding data is to use indexed access. An index is a table used to find the location of
data. The index indicates where alphabetical groups of names are stored. For instance, the index
contains the information that the names from Palla to Pearson are on track 53. The user enters the name
Sam Patterson. The program uses the index to decide where to start searching for the phone number.
Using indexes makes it possible to perform both sequential processing and
direct access efficiently. Therefore, access to data using such indexes is often called the indexed sequential
access method (ISAM). To perform a sequential processing task, such as listing the phone directory in
alphabetical order, a program reads each index entry in turn and then reads all of the data pointed to by
that index entry. If the index entries and the data pointed to by the index entries are in alphabetical order,
the listing will also be in alphabetical order.
Although they solve many problems, using indexes also causes complications. Assume that all the
space on a track of a disk is used up and that another telephone number that belongs on that track needs to
be stored. This situation is called an overflow. ISAM will put the data in a special overflow area but then
may have to look in two places when it needs to retrieve a telephone number. Database performance
degrades as more data goes into the overflow area. As a result, it is occasionally necessary to unload the
data, store it again, and revise the indexes. Once again, these are the details the DBMS and technical
staff take care of because most users have neither the desire nor the need to think about them.
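In a modern relational DBMS the same idea is expressed by creating an index on a table; the DBMS then
decides when to use it and handles overflow and reorganization internally. The table and column names
below are assumed for illustration.

-- An index on the name columns lets the DBMS jump close to the right
-- rows instead of scanning the whole table.
CREATE INDEX idx_directory_name ON phone_directory (last_name, first_name);

SELECT phone_number
FROM phone_directory
WHERE last_name = 'Patterson'
  AND first_name = 'Sam';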
Access to data can also be broken down as follows:
File-Based Access:
This method involves accessing data stored in files directly on the file system. Programs read and write
data by opening and manipulating files using file system APIs. This method is commonly used for simple
data storage and retrieval, particularly for smaller datasets or when the data is organized in a file-based
structure.
Database Access:
Databases provide a structured and organized way to store and access data. The most common method for
accessing data in a database is through a Database Management System (DBMS). DBMSs offer query
languages (e.g., SQL) and APIs that allow users to retrieve, update, and manipulate data stored in tables.
Database access provides efficient data management, indexing, transactional support, and security
features.
Network Communication:
Data can be accessed remotely over a network using various protocols. For example, accessing data
through network file systems (e.g., NFS, SMB) allows users to access files on remote servers as if they
were local. Similarly, accessing data through network protocols like HTTP, FTP, or SSH allows users to
retrieve or transfer files over the network.
Application Programming Interfaces (APIs):
Many software applications expose APIs that allow developers to access and manipulate data
programmatically. APIs define a set of functions, methods, or protocols through which developers can
interact with the application's data and functionality. APIs can be specific to a particular application,
service, or platform.
Web-based Access:
With the proliferation of web technologies, data access through web browsers has become prevalent. Web
applications provide user interfaces that allow users to interact with data over the internet. Data can be
accessed and manipulated through web forms, RESTful APIs, web services, and other web-based
interfaces.
Remote Procedure Calls (RPC):
RPC is a method where a program can call procedures or functions on a remote system as if they were
local. RPC enables distributed systems to access and exchange data across different machines or processes.
It allows programs to invoke remote methods and retrieve results seamlessly.
Middleware:
Middleware is software that acts as an intermediary between different applications or systems, facilitating
communication and data exchange. Middleware can provide standardized interfaces, protocols, and data
formats to enable interoperability and data access across disparate systems.
Cloud Storage and Services:
Cloud computing platforms offer storage services and APIs that allow users to access data stored in the
cloud. Services like Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage provide scalable
and reliable storage options with APIs for data access and manipulation.
The Database approach to data management
Database technology can cut through many of the problems a traditional file organization creates.
A more rigorous definition of a database is a collection of data organized to serve many applications

efficiently by centralizing the data and controlling redundant data. Rather than storing data in
separate files for each application, data are stored so as to appear to users as being stored in only one
location. A single database services multiple applications. For example, instead of a corporation
storing employee data in separate information systems and separate files for personnel, payroll, and
benefits, the corporation could create a single common human resources database. Figure 3-26 illustrates
the database concept.

Fig 3-26 The contemporary database environment

A single human resources database serves multiple applications and also enables a corporation to easily
draw together all the information for various applications. The database management system acts as the
interface between the application programs and the data.
Database Management Systems
A database management system (DBMS) is simply the software that permits an organization to centralize
data, manage them efficiently, and provide access to the stored data by application programs. The DBMS
acts as an interface between application programs and the physical data files. When the application
program calls for a data item, such as gross pay, the DBMS finds this item in the database and presents
it to the application program. Using traditional data files, the programmer would have to specify the size
and format of each data element used in the program and then tell the computer where they were located.
A DBMS eliminates most of the data definition statements found in traditional programs.

The DBMS relieves the programmer or end user from the task of understanding where and how the data
are actually stored by separating the logical and physical views of the data. The logical view presents
data as they would be perceived by end users or business specialists, whereas the physical view shows
how data are actually organized and structured on physical storage media. The database management
software makes the physical database available for different logical views presented for various
application programs. The logical description of the entire database showing all the data elements and
relationships among them is called the conceptual schema, whereas the specification of how data from
the conceptual schema are stored on physical media is termed the physical schema or internal schema.
The specific set of data from the database, or view, which is required by each user or application program
is termed the subschema. For example, for the human resources database illustrated in Figure 3-26 an
employee retirement benefits program might use a subschema consisting of the employee's name, address,
social security number, pension plan, and retirement benefits data.
A database management system has three components:
1. A data definition language
2. A data manipulation language
3. A data dictionary
The data definition language is the formal language programmers use to specify the structure of the
content of the database. The data definition language defines each data element as it appears in the
database before that data element is translated into the forms required by application programs.
Most DBMS have a specialized language called a data manipulation language that is used in conjunction
with some conventional third- or fourth-generation programming languages to manipulate the data in the
database. This language contains commands that permit end users and programming specialists to extract
data from the database to satisfy information requests and develop applications.
The most prominent data manipulation language today is Structured Query Language, or SQL. End users
and information systems specialists can use SQL as an interactive query language to access data from
databases, and SQL commands can be embedded in application programs written in conventional
programming languages.
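The contrast between the data definition language and the data manipulation language can be sketched in
SQL as follows. The table and column names are assumed; gross pay is used only because it appears in
the earlier example.

-- Data definition language (DDL): describes the structure of the data.
CREATE TABLE employee (
    employee_id   INTEGER PRIMARY KEY,
    employee_name VARCHAR(60),
    gross_pay     DECIMAL(10,2)
);

-- Data manipulation language (DML): works with the contents.
INSERT INTO employee VALUES (1001, 'A. Sharma', 52000.00);
UPDATE employee SET gross_pay = 55000.00 WHERE employee_id = 1001;
SELECT employee_name, gross_pay FROM employee;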
The third element of a DBMS is a data dictionary. This is an automated or manual file that stores
definitions of data elements and data characteristics, such as usage, physical representation, ownership
(who in the organization is responsible for maintaining the data), authorization, and security. Many data
dictionaries can produce lists and reports of data use, groupings, program locations, and so on.
Figure 3-27 illustrates a sample data dictionary report that shows the size, format, meaning, and uses of
a data element in a human resources database. A data element represents a field. In addition to listing the
standard name (AMT-PAY-BASE), the dictionary lists the names that reference this element in specific
systems and identifies the individuals, business functions, programs, and reports that use this data element.


Fig 3-27 Sample data dictionary report


The sample data dictionary report for a human resources database provides helpful information, such
as the size of the data element, which programs and reports use it, and which group in the organization
is the owner responsible for maintaining it. The report also shows some of the other names that the
organization uses for this piece of data.
By creating an inventory of data contained in the database, the data dictionary serves as an important
data management tool. For instance, business users could consult the dictionary to find out exactly what
pieces of data are maintained for the sales or marketing function or even to determine all the information
maintained by the entire enterprise. The dictionary could supply business users with the name, format,
and specifications required to access data for reports. Technical staff could use the dictionary to
determine what data elements and files must be changed if a program is changed.
Most data dictionaries are entirely passive; they simply report. More advanced types are active; changes
in the dictionary can be automatically used by related programs. For instance, to change ZIP codes from
five to nine digits, one could simply enter the change in the dictionary without having to modify and
recompile all application programs using ZIP codes.
In an ideal database environment, the data in the database are defined only once and used for all
applications whose data reside in the database, thereby eliminating data redundancy and inconsistency.
Application programs, which are written using a combination of the data manipulation language of the
DBMS and a conventional programming language, request data elements from the database. Data
elements called for by the application programs are found and delivered by the DBMS. The
programmer does not have to specify in detail how or where the data are to be found.
How a DBMS Solves the Problems of The Traditional File Environment
A DBMS can reduce data redundancy and inconsistency by minimizing isolated files in which the
same data are repeated. The DBMS may not enable the organization to eliminate data redundancy
entirely, but it can help control redundancy. Even if the organization maintains some redundant data,
using a DBMS eliminates data inconsistency because the DBMS can help the organization ensure that
every occurrence of redundant data has the same values. The DBMS uncouples programs and data,
enabling data to stand on their own. Access and availability of information can be increased and program
development and maintenance costs can be reduced because users and programmers can perform ad hoc

queries of data in the database. The DBMS enables the organization to centrally manage data, their use,
and security.
The following example illustrates some of the benefits of a DBMS for information management.
Procter & Gamble had to manage massive amounts of product data for its more than 300 brands.
Because these data were stored in 30 separate repositories, the company could not easily bring together
information about the company's various products and their components, degrading operational
efficiency. Management of these data improved once P&G created a common set of technical standards
for all its products and organized its data in a single global database.
Types of Databases
Contemporary DBMS use different database models to keep track of entities, attributes, and
relationships. Each model has certain processing advantages and certain business advantages.
Relational DBMS
The most popular type of DBMS today for PCs as well as for larger computers and mainframes is the
relational DBMS. The relational data model represents all data in the database as simple two-
dimensional tables called relations. Tables may be referred to as files. Information in more than one file
can be easily extracted and combined.
Figure 3-28 shows a supplier table and a part table. In each table the rows represent unique records
and the columns represent fields, or the attributes that describe the entities. The correct term for a row
in a relation is tuple. Often a user needs information from a number of relations to produce a report.
Here is the strength of the relational model: It can relate data in any one file or table to data in another
file or table as long as both tables share a common data element.

Fig 3-28 The relational data model

Each table is a relation, each row is a tuple representing a record, and each column is an attribute
representing a field. These relations can easily be combined and extracted to access data and produce
reports, provided that any two share a common data element. In this example, the PART and SUPPLIER
files share the data element Supplier_Number.
To demonstrate, suppose we wanted to find in the relational database in Figure 3-28 the names and
addresses of suppliers who could provide us with part number 137 or part number 152. We would need
information from two tables: the supplier table and the part table. Note that these two files have a
shared data element: Supplier_Number.
In a relational database, three basic operations, as shown in Figure 3-29, are used to develop useful
sets of data: select, project, and join. The select operation creates a subset consisting of all records in the
file that meet stated criteria. In our example, we want to select records (rows) from the part table where
the part number equals 137 or 152. The join operation combines relational tables to provide the user
with more information than is available in individual tables. In our example, we want to join the now-
shortened part table (only parts numbered 137 or 152 will be presented) and the supplier table into a
single new table.

Fig 3-29 The three basic operations of a relational DBMS


The select, project, and join operations enable data from two different tables to be combined and only
selected attributes to be displayed.
The project operation creates a subset consisting of columns in a table, permitting the user to create new
tables (also called views) that contain only the information required. In our example, we want to extract
from the new table only the following columns: Part_Number, Supplier_Number, Supplier_Name, and
Supplier_Address (see Figure 3-29).
The SQL statements for producing the new resultant table in Figure 3-29 would be as follows:
SELECT PART.Part_Number, SUPPLIER.Supplier_Number,
SUPPLIER.Supplier_Name, SUPPLIER.Supplier_Address
FROM PART, SUPPLIER

WHERE PART.Supplier_Number = SUPPLIER.Supplier_Number AND
(Part_Number = 137 OR Part_Number = 152);
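The same query can also be written with the explicit JOIN syntax that most current SQL dialects support;
the IN predicate removes any ambiguity between the AND and OR conditions. This is simply an alternative
formulation of the query above, not a different result.

SELECT p.Part_Number, s.Supplier_Number, s.Supplier_Name, s.Supplier_Address
FROM PART p
JOIN SUPPLIER s ON p.Supplier_Number = s.Supplier_Number
WHERE p.Part_Number IN (137, 152);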
Leading mainframe relational database management systems include IBM's DB2 and Oracle from
the Oracle Corporation. DB2, Oracle, and Microsoft SQL Server are used as DBMS for midrange
computers. Microsoft Access is a PC relational database management system, and Oracle Lite is a DBMS
for small handheld computing devices.
Hierarchical and Network DBMS
You can still find older systems that are based on a hierarchical or network data model. The hierarchical
DBMS is used to model one-to-many relationships, presenting data to users in a treelike structure. Within
each record, data elements are organized into pieces of records called segments. To the user, each record
looks like an organizational chart with one top-level segment called the root. An upper segment is
connected logically to a lower segment in a parent-child relationship. A parent segment can have more
than one child, but a child can have only one parent.
Figure 3-30 shows a hierarchical structure that might be used for a human resources database. The
root segment is Employee, which contains basic employee information such as name, address, and
identification number. Immediately below it are three child segments: Compensation (containing
salary and promotion data), Job Assignments (containing data about job positions and departments), and
Benefits (containing data about beneficiaries and benefit options). The Compensation segment has two
children below it: Performance Ratings (containing data about employees' job performance evaluations)
and Salary History (containing historical data about employees' past salaries). Below the Benefits
segment are child segments for Pension, Life Insurance, and Health, containing data about these benefit
plans.

Fig 3-30 A hierarchical database for a human resources system

The hierarchical database model looks like an organizational chart or a family tree. It has a single
root segment (Employee) connected to lower level segments (Compensation, Job Assignments, and
Benefits). Each subordinate segment, in turn, may connect to other subordinate segments. Here,
Compensation connects to Performance Ratings and Salary History. Benefits connects to Pension, Life
Insurance, and Health. Each subordinate segment is the child of the segment directly above it.
Whereas hierarchical structures depict one-to-many relationships, network DBMS depict data logically
as many-to-many relationships. In other words, parents can have multiple children, and a child can have
more than one parent. A typical many-to-many relationship for a network DBMS is the student-
course relationship (see Figure 3-31). There are many courses in a university and many students.
A student takes many courses, and a course has many students.

Fig 3-31 The network data model

This illustration of a network data model shows the relationship between the students in a university and
the courses they take, an example of a logical many-to-many relationship.
Hierarchical and network DBMS are considered outdated and are no longer used for building new
database applications. They are much less flexible than relational DBMS and do not support ad hoc,
English language-like inquiries for information. All paths for accessing data must be specified in advance
and cannot be changed without a major programming effort.
Relational DBMS, in contrast, have much more flexibility in providing data for ad hoc queries, combining
information from different sources, and providing capability to add new data and records without
disturbing existing programs and applications. However, these systems can be slowed down if they
require many accesses to the data stored on disk to carry out the select, join, and project commands.
Selecting one part number from among millions, one record at a time, can take a long time. Of course,
the database can be tuned to speed up prespecified queries.
Hierarchical DBMS can still be found in large legacy systems that require intensive high-volume
transaction processing. Banks, insurance companies, and other high volume users continue to use reliable
hierarchical databases, such as IBM's Information Management System (IMS) developed in 1969. As
relational products acquire more muscle, firms will shift away completely from hierarchical DBMS, but
this will happen over a long period of time.
Object-Oriented Databases
Conventional database management systems were designed for homogeneous data that can be easily
structured into predefined data fields and records organized in rows and columns. But many
applications today and in the future will require databases that can store and retrieve not only structured
numbers and characters but also drawings, images, photographs, voice, and full- motion video.
Conventional DBMS are not well suited to handling graphics-based or multimedia applications. For
instance, design data in a computer-aided design (CAD) database consist of complex relationships
among many types of data. Manipulating these kinds of data in a relational system requires
extensive programming to translate complex data structures into tables and rows.
An object-oriented DBMS, however, stores the data and procedures that act on those data as objects that
can be automatically retrieved and shared.
Object-oriented database management systems (OODBMS) are becoming popular because they can be
used to manage the various multimedia components or Java applets used in Web applications, which
typically integrate pieces of information from a variety of sources. OODBMS also are useful for
storing data types such as recursive data. (An example would be parts within parts as found in
manufacturing applications.) Finance and trading applications often use OODBMS because they require
data models that must be easy to change to respond to new economic conditions.
Although object-oriented databases can store more complex types of information than relational DBMS,
they are relatively slow compared with relational DBMS for processing large numbers of transactions.
Hybrid object-relational DBMS are now available to provide capabilities of both object-oriented and
relational DBMS. A hybrid approach can be accomplished in three different ways: by using tools that
offer object-oriented access to relational DBMS, by using object- oriented extensions to existing relational
DBMS, or by using a hybrid object-relational database management system.

Creating a Database Environment
To create a database environment, you must understand the relationships among the data, the type of
data that will be maintained in the database, how the data will be used, and how the organization may
need to change to manage data from a company-wide perspective.
Increasingly, database design will also have to consider how the organization can share some of its data
with its business partners (Jukic, Jukic, and Parameswaran, 2002). We now describe important database
design principles and the management and organizational requirements of a database environment.
Designing Databases
To create a database, you must go through two design exercises: a conceptual design and a physical
design. The conceptual, or logical, design of a database is an abstract model of the database from a
business perspective, whereas the physical design shows how the database is actually arranged on direct-
access storage devices. Logical design requires a detailed description of the business information needs
of the actual end users of the database. Ideally, database design will be part of an overall organizational
data-planning effort.
The conceptual database design describes how the data elements in the database are to be grouped.
The design process identifies relationships among data elements and the most efficient way of grouping
data elements to meet information requirements. The process also identifies redundant data elements and
the groupings of data elements required for specific application programs. Groups of data are organized
and refined until an overall logical view of the relationships among all the data elements in the database
emerges.
To use a relational database model effectively, complex groupings of data must be streamlined to
minimize redundant data elements and awkward many-to-many relationships. The process of creating
small, stable, yet flexible and adaptive data structures from complex groups of data is called
normalization. Figures 3-32 and 3-33 illustrate this process.

Fig 3-32 An unnormalized relation for ORDER


An unnormalized relation contains repeating groups. For example, there can be many parts and suppliers
for each order. There is only a one-to-one correspondence between Order_Number, Order_Date, and
Delivery_Date.


Fig 3-33 Normalized tables created from ORDER


After normalization, the original relation ORDER has been broken down into four smaller relations. The
relation ORDER is left with only three attributes and the relation ORDERED_PART has a combined,
or concatenated, key consisting of Order_Number and Part_Number.
In the particular business modeled here, an order can have more than one part but each part is provided
by only one supplier. If we build a relation called ORDER with all the fields included here, we would
have to repeat the name and address of the supplier for every part on the order, even though the order
is for parts from a single supplier. This relationship contains what are called repeating data groups
because there can be many parts on a single order to a given supplier. A more efficient way to
arrange the data is to break down ORDER into smaller relations, each of which describes a single
entity. If we go step by step and normalize the relation ORDER, we emerge with the relations illustrated
in Figure 3-33.
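As a rough illustration of this step-by-step decomposition (the attribute names below are assumptions based on Figures 3-32 and 3-33, not an exact reproduction of them), the following Python sketch splits one unnormalized order record into the smaller ORDER, ORDERED_PART, PART and SUPPLIER relations:

# One unnormalized ORDER record: part and supplier details are repeated
# for every part on the order (a repeating group).
unnormalized_order = {
    "order_number": 1001, "order_date": "2023-07-01", "delivery_date": "2023-07-15",
    "parts": [
        {"part_number": "P10", "description": "Gear", "unit_price": 25.0, "quantity": 4,
         "supplier_number": "S1", "supplier_name": "Acme", "supplier_address": "Lalitpur"},
        {"part_number": "P20", "description": "Bolt", "unit_price": 1.5, "quantity": 100,
         "supplier_number": "S1", "supplier_name": "Acme", "supplier_address": "Lalitpur"},
    ],
}

def normalize(order):
    # ORDER keeps only the attributes that depend on Order_Number alone.
    orders = [{"order_number": order["order_number"],
               "order_date": order["order_date"],
               "delivery_date": order["delivery_date"]}]
    ordered_parts, parts, suppliers = [], {}, {}
    for p in order["parts"]:
        # ORDERED_PART uses the concatenated key (order_number, part_number).
        ordered_parts.append({"order_number": order["order_number"],
                              "part_number": p["part_number"],
                              "quantity": p["quantity"]})
        # PART and SUPPLIER are each stored once, removing the repeated data.
        parts[p["part_number"]] = {"part_number": p["part_number"],
                                   "description": p["description"],
                                   "unit_price": p["unit_price"],
                                   "supplier_number": p["supplier_number"]}
        suppliers[p["supplier_number"]] = {"supplier_number": p["supplier_number"],
                                           "supplier_name": p["supplier_name"],
                                           "supplier_address": p["supplier_address"]}
    return orders, ordered_parts, list(parts.values()), list(suppliers.values())

orders, ordered_parts, parts, suppliers = normalize(unnormalized_order)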
If a database has been carefully considered, with a clear understanding of business information needs and
usage, the database model will be in a normalized form. Many real-world databases are not fully
normalized because this may not be the most efficient or cost-effective way to meet business
requirements.
Database designers document their data model with an entity-relationship diagram, illustrated in Figure
3-34. This diagram illustrates the relationship between the entities ORDER, ORDERED_PART, PART,
and SUPPLIER. The boxes represent entities. The lines connecting the boxes represent relationships. A
line connecting two entities that ends in two short marks designates a one-to-one relationship. A line
connecting two entities that ends with a crow's foot topped by a short mark indicates a one-to-many
relationship. Figure 3-34 shows that one ORDER can contain many ORDERED_PARTs. Each PART
can be ordered many times and can appear many times in a single order. Each PART can have only
one SUPPLIER, but many PARTs can be provided by the same SUPPLIER.


Fig 3-34 An entity-relationship diagram

This diagram shows the relationships between the entities ORDER, ORDERED_PART, PART, and
SUPPLIER that might be used to model the database in Figure 3-33.
Distributing Databases
Database design also considers how the data are to be distributed. Information systems can be designed
with a centralized database that is used by a single central processor or by multiple processors in a
client/server network. Alternatively, the database can be distributed. A distributed database is
one that is stored in more than one physical location.
There are two main methods of distributing a database (see Figure 3-35). In a partitioned database,
parts of the database are stored and maintained physically in one location and other parts are stored
and maintained in other locations (see Figure 3-35a) so that each remote processor has the necessary
data to serve its local area. Changes in local files can be reconciled with the central database on a batch
basis, often at night. Another strategy is to replicate (that is, duplicate in its entirety) the central database
(Figure 3-35b) at all remote locations. For example, Lufthansa Airlines replaced its centralized
mainframe database with a replicated database to make information more immediately available to
flight dispatchers. Any change made to Lufthansa's Frankfurt DBMS is automatically replicated in New
York and Hong Kong. This strategy also requires updating the central database during off-hours.


Fig 3-35 Distributed databases


There are alternative ways of distributing a database. The central database can be partitioned (a) so that
each remote processor has the necessary data to serve its own local needs. The central database also can
be replicated (b) at all remote locations.
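The difference between the two strategies can be sketched in a few lines of Python (an illustration of the idea only, not the API of any particular DBMS; the region names are hypothetical):

REGIONS = ["frankfurt", "new_york", "hong_kong"]

# (a) Partitioned: each location stores and maintains only its own slice
# of the data; local changes are reconciled centrally later, e.g. in a
# nightly batch run.
partitions = {region: {} for region in REGIONS}

def write_partitioned(region, key, record):
    partitions[region][key] = record

# (b) Replicated: every location keeps a full copy, so a change made at
# any site is propagated to all remote copies.
replicas = {region: {} for region in REGIONS}

def write_replicated(key, record):
    for region in REGIONS:
        replicas[region][key] = record

write_partitioned("frankfurt", "FLT-401", {"status": "boarding"})
write_replicated("FLT-402", {"status": "delayed"})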
Distributed systems reduce the vulnerability of a single, massive central site. They increase service
and responsiveness to local users and often can run on smaller, less expensive computers.
Distributed systems, however, depend on high-quality telecommunications lines, which themselves
are vulnerable. Moreover, local databases can sometimes depart from central data standards and
definitions, and they pose security problems by widely distributing access to sensitive data. Database
designers need to weigh these factors in their decisions.
Organizational Obstacles to a Database Environment
Implementing a database requires widespread organizational change in the role of information (and
information managers), the allocation of power at senior levels, the ownership and sharing of
information, and patterns of organizational agreement. A database management system (DBMS)
challenges the existing power arrangements in an organization and for that reason often
generates political resistance. In a traditional file environment, each department constructed files and
programs to fulfill its specific needs. Now, with a database, files and programs must be built that take into
account the whole organization's interest in data. Although the organization has spent the money on
hardware and software for a database environment, it may not reap the benefits it should if it is unwilling
to make the requisite organizational changes.
Cost/Benefit Considerations
Designing a database to serve the enterprise can be a lengthy and costly process. In addition to the cost
of DBMS software, related hardware, and data modeling, organizations should anticipate heavy
expenditures for integrating, merging, and standardizing data from different systems and functional areas.
Despite the clear advantages of the DBMS, the short-term costs of
developing a DBMS often appear to be as great as the benefits. It may take time for the database to
provide value.
Solution Guidelines
The critical elements for creating a database environment are (1) data administration, (2) data-planning
and modeling methodology, (3) database technology and management, and (4) users. This environment
is depicted in Figure 3-36.

Fig 3-36 Key organizational elements in the database environment

For a database management system to flourish in any organization, data administration functions and
data-planning and modeling methodologies must be coordinated with database technology and
management. Resources must be devoted to train end users to use databases properly.
Data Administration
Database systems require that the organization recognize the strategic role of information and begin
actively to manage and plan for information as a corporate resource. This means that the organization
must develop a data administration function with the power to define information requirements for the
entire company and with direct access to senior management. The chief information officer (CIO) or vice
president of information becomes the primary advocate in the organization for database systems.
Data administration is responsible for the specific policies and procedures through which data can be
managed as an organizational resource. These responsibilities include developing information policy,
planning for data, overseeing logical database design and data dictionary development, and monitoring
how information systems specialists and end-user groups use data.
The fundamental principle of data administration is that all data are the property of the organization as a
whole. Data cannot belong exclusively to any one business area or organizational unit. All data should
be available to any group that requires them to fulfill its mission. An organization needs to formulate an
information policy that specifies its rules for sharing, disseminating, acquiring, standardizing, classifying,
and inventorying information throughout the organization. Information policy lays out specific
procedures and accountabilities, specifying which organizational units share information, where
information can be distributed, and who is responsible for updating and maintaining the information.
Although data administration is a very important organizational function, it has proved very challenging
to implement.
Data-Planning and Modeling Methodology
The organizational interests served by the DBMS are much broader than those in the traditional file
environment; therefore, the organization requires enterprise-wide planning for data. Enterprise
analysis, which addresses the information requirements of the entire organization (as opposed to the
requirements of individual applications), is needed to develop databases. The purpose of enterprise
analysis is to identify the key entities, attributes, and relationships that constitute the organization's data.
Database Technology, Management, and Users
Databases require new software and a new staff specially trained in DBMS techniques, as well as new
data management structures. Most corporations develop a database design and management group within
the corporate information systems division that is responsible for defining and organizing the structure
and content of the database and maintaining the database. In close cooperation with users, the design
group establishes the physical database, the logical relations among elements, and the access rules and
procedures. The functions it performs are called database administration.
A database serves a wider community of users than traditional systems. Relational systems with user-
friendly query languages permit employees who are not computer specialists to access large databases.
In addition, users include trained computer specialists. To optimize access for non-specialists, more
resources must be devoted to training end users.

3.2.2 IT Risk and Opportunity
A risk is any anticipated unfavorable event or circumstance that can occur when a project is underway.
If a risk becomes true, it can hamper the successful and timely completion of a project. Therefore
it is necessary to anticipate and identify different risks that a project may be susceptible to so that
contingency plans can be prepared to contain the effects of each risk. In this context risk management
aims at reducing the impact of all kinds of risks that might affect a project. Risk management consists
of three essential activities: risk identification, risk assessment and risk containment.
IT Risk and Opportunity refer to the potential positive and negative outcomes related to information
technology (IT) initiatives, systems, and processes within an organization. Here's an overview of IT risk
and opportunity:
IT Risk:
Security Breaches: Unauthorized access, data breaches, malware attacks, or hacking incidents can result
in the loss, theft, or compromise of sensitive data, leading to financial and reputational damage.
Data Loss: Accidental deletion, hardware failure, software glitches, or natural disasters can lead to the
loss of critical data, impacting business operations and continuity.
System Downtime: IT systems and infrastructure may experience disruptions, outages, or performance
issues, resulting in operational disruptions, loss of productivity, and financial losses.
Compliance and Regulatory Risks: Failure to comply with industry regulations, data protection laws, or
privacy requirements can result in legal consequences, penalties, and damage to the organization's
reputation.
Vendor and Third-Party Risks: Dependence on third-party providers, vendors, or cloud service
providers can introduce risks related to service interruptions, data breaches, or compliance issues.
Technology Obsolescence: Rapid advancements in technology can render existing IT systems, software,
or infrastructure outdated, leading to compatibility issues, limited functionality, and increased security
vulnerabilities.
IT Opportunities:
Digital Transformation: IT enables organizations to leverage technology to transform business
processes, enhance efficiency, streamline operations, and create new opportunities for growth and
innovation.
Automation and Efficiency: IT solutions can automate repetitive tasks, streamline workflows, and
improve operational efficiency, allowing employees to focus on value-added activities and reducing costs.
Enhanced Data Analytics: IT systems enable organizations to collect, analyze, and interpret vast amounts
of data, providing valuable insights for decision-making, customer understanding, and strategic planning.
Improved Collaboration and Communication: IT facilitates collaboration and communication among
employees, teams, and stakeholders through tools like instant messaging, video conferencing, project
management systems, and shared workspaces.

Scalability and Flexibility: IT infrastructure, cloud computing, and virtualization technologies offer
scalability and flexibility, allowing organizations to adapt to changing business needs, handle increased
workloads, and rapidly deploy new services or applications.
Competitive Advantage: Strategic utilization of IT can provide a competitive edge by enabling faster
time-to-market, better customer experiences, personalized services, and the ability to leverage emerging
technologies.
It is essential for organizations to proactively identify and manage IT risks while leveraging IT
opportunities to drive business success. This involves implementing robust security measures, disaster
recovery plans, compliance frameworks, and adopting best practices in IT governance and risk
management. Additionally, organizations should embrace a culture of continuous learning and adaptation
to stay ahead in the rapidly evolving IT landscape.
Risk Identification
The project manager needs to anticipate the risks in the project as early as possible so that the impact of
the risks can be minimized by making effective risk management plans. So early risk identification is
important. Risk identification is somewhat similar to listing down your nightmares. For example, you might be worried about your vendors' ability to complete their work on time as per the company's quality standards, or about your key personnel leaving the organization. All such risks that are likely to affect a project must be identified and listed.
A project can get affected by a large variety of risks. In order to be able to systematically identify
the important risks which might affect a project, it is necessary to categorize risks into different classes.
The project manager can then examine those risks from each class which are relevant to the project. There
are three main categories of risks which can affect a software project.
Project risks. Project risks concern various forms of budgetary, schedule, personnel, resource and customer-
related problems. An important project risk is schedule slippage. Since software is intangible, it is
very difficult to monitor and control a software project. It is very difficult to control something which cannot
be seen. The invisibility of the software product being developed is an important reason why many software
projects suffer from the risk of schedule slippage.
Technical risks. Technical risks concern potential design, implementation, interfacing, testing, and
maintenance problems. Technical risks also include ambiguous specification, incomplete specification,
changing specification, technical uncertainty, and technical obsolescence. Most technical risks occur due
to the development team's insufficient knowledge about the product.
Business risks. Business risks include risks of building an excellent product that no one wants, losing
budgetary or personnel commitments, etc.
In order to be able to successfully foresee and identify different risks that might affect a software project,
it is a good idea to have a company disaster list. This list would contain all the bad events that have
happened to software projects of the company over the years including events that can be laid at the
customer's doors. This list can be read by the project managers in order to be aware of some of the risks
that a project might be susceptible to. Such a disaster list has been found to be of help in performing better
risk analysis.

Risk Assessment
The objective of risk assessment is to rank the risks in terms of their damage causing potential. For
risk assessment, each risk should first be rated in two ways:
• The likelihood of a risk coming true (r).
• The consequence of the problems associated with that risk (s).
Based on these two factors, the priority of each risk can be computed as p = r x s where p is the priority
with which the risk must be handled, r is the probability of the risk becoming true, and s is the severity
of damage caused due to the risk becoming true. If all identified risks are prioritized, then the
most likely and damaging risks can be handled first and more comprehensive risk abatement
procedures can be designed for these risks.
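A minimal worked example of this prioritization (the risks and ratings below are illustrative assumptions, not figures from the text) might look as follows in Python:

# Priority p = r x s, where r is the likelihood of the risk coming true
# and s is the severity of the damage if it does.
risks = [
    {"name": "Schedule slippage",       "r": 0.6, "s": 7},
    {"name": "Key personnel turnover",  "r": 0.3, "s": 9},
    {"name": "Ambiguous specification", "r": 0.5, "s": 5},
]

for risk in risks:
    risk["p"] = risk["r"] * risk["s"]

# Handle the most likely and most damaging risks first.
for risk in sorted(risks, key=lambda x: x["p"], reverse=True):
    print(f"{risk['name']}: priority = {risk['p']:.2f}")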
The process of Risk assessment includes:
Risk Analysis: First analyze each identified risk in terms of its potential impact and likelihood. Assess
the consequences or impacts that could occur if the risk event were to happen and evaluate the likelihood
or probability of the risk event occurring.
Impact Assessment: Then evaluate the potential consequences or impacts of each risk on the project,
organization, or specific objective. Consider the potential effects on various aspects such as cost, schedule,
quality, reputation, safety, compliance, and stakeholders.
Likelihood Assessment: Assess the likelihood or probability of each risk event occurring. Consider
factors such as historical data, expert judgment, project conditions, external influences, and any control
measures in place that could affect the likelihood of the risk event.
Risk Scoring and Prioritization: Assign a risk score or rating to each identified risk based on the impact
and likelihood assessments. This can be done using qualitative scales (e.g., low, medium, high) or
quantitative methods (e.g., numerical scales, probability calculations). Prioritize risks based on their scores
to identify high-priority or critical risks that require immediate attention.
Risk Mapping: Visualize and present risks using techniques like risk maps or risk matrices. These tools
help provide a visual representation of the risks, showing their relationship between impact and likelihood
and assisting in understanding the overall risk profile of the project or organization.
Risk Mitigation Strategies: Identify and evaluate potential risk mitigation strategies or control measures
for each high-priority risk. Assess the effectiveness, feasibility, and cost-benefit analysis of various risk
response options such as risk avoidance, risk reduction, risk transfer, or risk acceptance.
Residual Risk Assessment: Assess the remaining or residual risk level after applying risk mitigation
measures. Determine if the residual risk is within acceptable levels or if further actions are needed to
mitigate the remaining risk.
Documentation and Reporting: Document the results of the risk assessment process, including the
identified risks, their assessments, prioritization, and recommended risk management strategies.
Communicate the findings to stakeholders, project teams, and decision-makers to ensure awareness and
understanding of the risks.

Ongoing Monitoring and Review: Regularly review and update the risk assessments throughout the
project lifecycle or as new risks emerge. Monitor the effectiveness of implemented risk management
strategies and adjust them as necessary.
Risk assessment is a continuous process that requires periodic reviews and updates as the project or
organization progresses. It helps in making informed decisions, allocating resources effectively, and
implementing appropriate risk mitigation measures to minimize the impact of potential risks.
Risk Containment
After all the identified risks of a project are assessed, plans must be made to contain the most damaging
and the most likely risks. Different risks require different containment procedures. In fact, most risks
require ingenuity on the part of the project manager in tackling them.
There are four main strategies used for risk containment:

Avoid the risk. This may take several forms such as discussions with the customer to reduce the
scope of the work, and giving incentives to engineers to avoid the risk of manpower turnover,
etc.
Transfer the risk. This involves the practice of distributing the risk among multiple parties or stakeholders involved in a particular endeavor. Its aim is to mitigate the potential negative impacts and uncertainties associated with a particular venture by spreading the risk burden among
multiple participants. This strategy involves getting the risky component developed by a third
party, or buying insurance cover, etc.
Reduce the risk. This involves planning ways to contain the damage due to a risk. For example, if there is a risk that some key personnel might leave, new recruitment may be planned.
Accept the risk. When the cost of reducing the risk is higher than the cost that would be incurred if the risk occurs, it may be better simply to accept the risk.
To choose between the different strategies of handling a risk, the project manager must consider the
cost of handling the risk and the corresponding reduction in risk. For this, the project manager may compute the risk leverage of the different risks. Risk leverage is the difference in risk exposure divided by the cost of reducing the risk. More formally,
Risk leverage = (risk exposure before risk reduction - risk exposure after risk reduction) / cost of risk reduction
Even though we have identified four broad ways to handle any risk, risk handling requires a lot of ingenuity on the part of the project manager. As an example, let us consider the options available to
contain an important type of risk that occurs in many software projects-that of schedule slippage. Risks
relating to schedule slippage arise primarily due to the intangible nature of software.
Therefore, these risks can be dealt with by increasing the visibility of the software product. Producing
relevant documents during the development process wherever meaningful, and getting these documents
reviewed by an appropriate team can increase visibility of a software product. Milestones should be
placed at regular intervals through a software engineering process in order to provide a manager with
regular indication of progress.
Completion of phases of the development process being followed need not be the only milestones.
Every phase can be broken down to reasonable-sized tasks and milestones can be scheduled for these
tasks too. A milestone is reached, once documentation produced as part of a software engineering task is
successfully reviewed. Milestones need not be placed for every activity. An approximate rule of thumb
is to set a milestone every 10 to 15 days.
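Returning to the risk leverage formula given earlier, a small worked example (the probabilities, loss and mitigation cost below are illustrative assumptions) shows how the computation supports the choice between containment strategies:

# Risk exposure is taken here as probability of the risk x loss if it occurs.
def risk_exposure(probability, loss):
    return probability * loss

exposure_before = risk_exposure(0.6, 500_000)   # before any mitigation
exposure_after = risk_exposure(0.2, 500_000)    # after adding milestones and reviews
cost_of_reduction = 50_000                      # cost of the mitigation itself

risk_leverage = (exposure_before - exposure_after) / cost_of_reduction
print(risk_leverage)   # 4.0 -- the reduction in exposure is four times its cost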
Factors That Increase the Risks
Many examples of system-related accidents and crime have been presented to demonstrate the reality and
breadth of the threat that must be countered by effective management and security measures. Although
each example involved a unique situation, interrelated conditions such as carelessness, complacency, and
inadequate organizational procedures increased the vulnerability to both accidents and crime.
The Nature of Complex Systems
Many complex systems rely on many different human, physical, and technical factors that all have to
operate correctly to avoid catastrophic system failures. Consider how a simple power outage at a New
York City AT&T switching station at 10 A.M. on Sept. 19, 1991, was magnified by a combination
of power equipment failure, alarm system failure, and management failure. When workers activated
backup power at the station, a power surge and an overly sensitive safety device prevented diesel
backup generators from providing power to the telephone equipment, which automatically started
drawing power from emergency batteries. Workers disobeyed standard procedures by not checking that
the diesel generators were working.
Operating on battery power was an emergency situation, but over 100 people in the building that day did
not notice the emergency alarms for various reasons: some alarm lights did not work; others were placed
where they could not be seen; alarm bells had been inactivated due to false alarms; technicians were off-
site at a training course. At 4:50 P.M. the batteries gave out, shutting down the hub's 2.1 million call per
hour capacity. Because communication between the region's airports went through this hub, regional
airport operations came to a standstill, grounding 85,000 air passengers.
In addition to relying on everything to work correctly, computerized systems are often designed to hide
things users don't want to be involved in, such as the details of data processing. Although usually effective,
this approach makes it less likely that users will notice problems. In addition, users often try to bypass computerized systems by inventing new procedures that are convenient but that may contradict the system's original design concepts. The more flexible a system is, the more likely that it will be used in
ways never imagined by its designers.
Information system decentralization and multivendor connectivity also affect security. As networked
workstations become more common, the ability to access, copy, and change computerized data expands.
Electronically stored data in offices are highly vulnerable because many offices are low- security or no-
security environments where people can easily access and copy local data and data extracted from
corporate databases. Storage media such as diskettes and even the computers themselves are easy to move.
Data channels such as electronic message systems and bulletin boards may be poorly controlled. These
areas of vulnerability all result from the worthwhile goal of making information and messages available
and readily usable.

Human Limitations
One of the main factors increasing system vulnerability is human behavior, which includes elements such
as ignorance about security protocols, complacency, carelessness, susceptibility to personal desires, and
the challenge of comprehending complex systems.
Many users of digital systems lack sophistication when it comes to security measures, often resulting in
overlooked or ignored protocols. Additionally, complacency and carelessness can lead individuals to
make assumptions about the flawless functioning of their systems. For instance, in the Philippines, Pepsi-
Cola faced a significant challenge when a system error led to the generation of 800,000 winning numbers
in a promotional campaign, as opposed to the intended 18. The company had promised a sizeable cash
prize to each winner, resulting in a massive financial and public relations crisis.
Complacency and negligence also lead to lax enforcement of security measures. Those responsible for
maintaining these safeguards often disregard the protocols designed to prevent system failures. In an
example from the past, a 1991 audit by the U.S. General Accounting Office revealed 68 security and
control flaws across multiple U.S. stock exchanges.
Moreover, human weaknesses such as greed can also increase system vulnerabilities, as they create
incentives for cybercrimes. Individuals experiencing personal difficulties or seeking revenge against
employers may resort to illicit activities, viewing them as solutions to their problems.
Furthermore, the inherent limitations in human understanding of complex systems contribute to these
vulnerabilities. Even with advanced Computer-Aided Software Engineering (CASE) techniques, it can
be challenging to anticipate exactly how a complex information system will operate under all
circumstances. This lack of holistic understanding can lead to unexpected incidents and heighten the risk
of cybercrime.
Still today, cybersecurity remains a significant concern, with businesses and organizations investing
heavily in education, training, and advanced software to counteract these issues. Increasingly, a focus on
'human factors' in cybersecurity recognizes that system vulnerabilities often stem from human error or
misconduct, and so efforts to minimize these risks include both technological improvements and changes
to company culture and user behavior. Nonetheless, the evolving nature of technology and cyber threats
makes this an ongoing challenge.
Pressures in the Business Environment
The business environment increases vulnerability by adding pressures to complete systems rapidly
with limited staffs. Information system vulnerability may not be considered adequately when
development decisions are driven by needs to maximize return on investment. In the rush to meet
deadlines with insufficient resources, features and testing that reduce vulnerability may be left out.
Hallmarks of careful software development work may be curtailed, such as thorough documentation,
careful design reviews, and complete testing. These things happen not only in information systems, but
also in many other large projects. For example, after years of delays, the billion-dollar Hubble space
telescope was launched into orbit with a warped mirror that had not received a standard final test on earth.
A special mission of the space shuttle was needed to correct the flaw.

The competitive environment has even pushed companies to reduce their executive level attention
to security. Despite the argument that having a high-level security expert is more important to many
organizations today than it ever was in the past, a number of high profile businesses have shifted
these responsibilities to their end-user departments. For example, First Boston Corporation eliminated its
corporate executive position for data security and recovery based on its attempt to eliminate layers of
management and give more local control to end users.

3.3 IT Strategy Planning


A plan is a predetermined course of action to be taken in the future. It is a document containing the details
of how the action will be executed, and it is made against a time scale. The goals and the objectives that a plan is supposed to achieve are the prerequisites of a plan. The setting of the goals and the objectives is the
primary task of the Management without which planning cannot begin.
Planning means taking a deep look into the future and assessing the likely events in the total business
environment and taking a suitable action to meet any eventuality. It further means generating the courses
of actions to meet the most likely eventuality. Planning is a dynamic process. As the future becomes
the present reality, the course of action decided earlier may require a change. Planning, therefore,
calls for a continuous assessment of the predetermined course of action versus the current requirements
of the environment. The essence of planning is to see the opportunities and the threats in the future and
predetermine the course of action to convert the opportunity into a business gain, and to meet the threat
to avoid any business loss. Planning involves a chain of decisions, one dependent on the other, since
it deals with a long term period. A successful implementation of a plan means the execution of these
decisions in a right manner one after another.
Planning, in terms of future, can be long-range or short-range. Long-range planning is for a period
of five years or more, while short-range planning is for one year at the most. The long- range planning is
more concerned about the business as a whole, and deals with subjects like the growth and the rate of
growth, the direction of business, establishing some position in the business world by way of a
corporate image, a business share and so on. On the other hand, short-range planning is more concerned
with the attainment of the business results of the year. It could also be in terms of action by certain
business tasks, such as launching of a new product, starting a manufacturing facility, completing the
project, achieving intermediate milestones on the way to the attainment of goals. The goals relate to
long-term planning and the objectives relate to the short-term planning. There is a hierarchy of
objectives, which together take the company to the attainment of goals. The plans, therefore, relate to the
objectives when they are short-range and to goals when they are the long-range.
Long-range planning deals with resource selection, its acquisition and allocation. It deals with the
technology and not with the methods or the procedures. It talks about the strategy of achieving
the goals. The right strategy improves the chances of success tremendously. At the same time, a wrong
strategy means a failure in achieving the goals.
Corporate business planning deals with the corporate business goals and objectives. The business may be
a manufacturing or a service; it may deal with the industry or trade; may operate in a public or a private
sector; may be national or international business. Corporate business planning is a necessity in all cases.
Though the corporate business planning deals with a company, its universe is beyond the company. The
corporate business plan considers the world trends in the business, the industry, the technology, the
international markets, the national priorities, the competitors, the business plans, the corporate strengths
and the weaknesses for preparing a corporate plan. Planning, therefore, is a complex exercise of steering
the company through the complexities, the difficulties, the inhibitions and the uncertainties towards
the attainment of goals and objectives.
Dimensions of Planning
The corporate business plan has five dimensions. These are time, entity, organization, elements and
characteristics.
Time
The plan may either be long-range or short-range, but the execution of the plan is, year after year.
The plan is made on a rolling basis where every year it is extended by one year, keeping the plan
period as the next five years. The rolling plan provides an opportunity to correct or revise the plan
in the light of any new information the planner may receive. The duration of a plan is expressed in units of time, typically a year.
Entity
The plan entity is the thing on which the plan is focused. The entity could be the production in terms of
quantity or it could be a new product. It could be about the finance, the marketing, the capacity, the
manpower or the research and development. The goals and the objectives would be stated in terms of these entities. A corporate plan may have several entities. An entity, such as growth, product or sales, is a subject for which the corporate plan is made.
Organization
The corporate plan would deal with the company as a whole, but it has to be taken down for its
subsidiaries, if any, such as the functional groups, the divisions, the product groups and the projects. The
breaking of the corporate business plan into smaller organizational units helps to fix the responsibility
for execution. The corporate plan, therefore, would be a master plan and it would comprise several
subsidiary plans.
Elements
The plan is made out of several elements. The plan begins with the mission and goal which the
organization would like to achieve. It may provide a vision statement for all to understand as also the
purpose, focus, and direction the organization would like to move towards. It would, at the outset, place
certain policy statements emerging out of management's business philosophy, culture and style of
functioning. Next it would declare the strategies in various business
functions, which would enable the organization to achieve the business objectives and targets. It would
spell out a programme of execution of plan and achievements. It provides support on rules, procedures
and methods of plan implementation, wherever necessary. One important element of the plan is a
budget stipulated for achieving certain goals and business targets. The budgets are provided for sales,
production, stocks, resources and expenses, which are monitored over the execution period. The
budgets and performance provide meaningful measure about success and failure of the plan designed to
achieve certain goals.

Characteristics
There are no definite characteristics of a corporate plan. The choice of characteristics is a matter of
convenience helping to communicate to everybody concerned in the organization and for an easy
understanding in execution. The features of a plan could be several and could have several parts. The
plan is a confidential written document subject to change, and known to a limited few in the organization.
It is described in the quantitative and qualitative terms. The long-term plan is normally flexible while the
short-term one is generally not. The plan is based on the rational assumptions about the future and
gives weightage to the past achievements, and corporate strength and weaknesses. The typical
characteristics of a corporate plan are the goals, the resources, the important milestones, the investment
details and a variety of schedules.

Essentiality of Strategic Planning


There are some compelling reasons which force all the organizations to resort to strategic business
planning. The following reasons make planning an essential management process to keep the business
in a good shape and condition:
1. Market forces
2. Technological change
3. Complex diversity of business
4. Competition
5. Environment

Market Forces
It is very difficult to predict the market forces such as the demand and supply, the trend of the market
growth, the consumer behavior and the choices, the emergence of new products and the new product
concepts. The ability of the organization to predict these forces and plan the strategies is limited for the
various reasons. The market forces affect the sales, the growth and the profitability. With the problems
arising out of market forces, it is difficult to reorient the organization quickly to meet the eventualities
adversely affecting the business unless the business is managed through a proper business plan.

Technological Change
There are a number of illustrative cases throughout the world on the technological breakthroughs and
changes which have threatened the current business creating new business opportunities. The
emergence of the microchip, plastic, laser technology, fiber optics technology, nuclear energy,
wireless communication, audio-visual transmission, turbo engines, thermal conductivity and many more,
are the examples which have made some products obsolete, threatening the current business, but at the
same time, have created new business opportunities. The technological changes have affected not
only the business prospects but the managerial and operational styles of the organizations.

In the absence of any corporate plan, such a technological change can bring the organization into some
difficult problems and, in some cases, can pose a threat to its survival. The corporate plan is expected
to ensure the recovery of the business investment before such a technological change takes over.
Complex Diversity of Business
The scope of business is wide, touching many fronts. The variety of products, the different market
segments, the various methods of manufacturing, the multiple locations, and the dependence on the
external factors, such as the transport, the communications and the manufacturing resources brings
complexity in the management of business. Many factors are uncontrollable and unless there is a plan,
prepared with due consideration to the diverse and complex nature of business, handling these factors is
not possible. This might lead to the loss of business opportunity.
As the business grows, it reaches a stage where the strategies such as the expansion-vertical or horizontal,
integration-forward or backward, diversification-in the same line or in the diverse line of business, are the
issues which the management is required to handle. These issues are investment- oriented and have a far-
reaching effect on the business growth, direction and profitability.
Competition
Facing competition in the business means fighting on a number of fronts. Competition could be direct or
indirect. It may share the market or create a new product, which will shift the market affecting your
business. Competition could be solely in the management of business, when there is hardly any product
distinction or it may come from certain sectors, which are being promoted by the government. The
companies compete on merits such as know-how, quality, prompt delivery, after-sales service,
etc.
Competition is a natural phenomenon in business, and it has to be dealt with in a proper manner to protect
business interests. This means that the management has to continuously evolve new strategies to deal
with the competition. Evolving strategies and their implementation, calls for forward thinking and
planning, without which it is not possible to handle competition. Competition forces the management
to look for new products, new markets, and new technologies to keep the market share intact, the
process controlled and the quality improved. Strategies also have to be implemented in a proper sequence
as business competition demands an intricate planning, testing and implementation of the strategies. The
competition should never be underestimated and has to be met squarely through corporate planning.
Environment
The environment is beyond the control of the management. Depending upon the organization's business
and its purpose, different environments have a bearing on the fortunes of the business. It could be the social, business, economic, industrial or technological environment affecting the business. Many a time, it is a mix of different environments. Environmental changes are difficult to predict and are generally slow. Therefore, managements are often caught unaware by the
environmental changes. To illustrate the environment's impact on business, some examples of recent
origin are mentioned as follows.
Widespread education programmes have created new opportunities for knowledge processing and
communication. The introduction of television has adversely affected the film industry and its immense
popularity has considerably restricted other amusement activities like going for a picnic or to a circus.
Personal computers are fast replacing the typewriters on account of changing office environment.
Values and attitudes make the penetration in the market difficult. The difference in the values and
attitudes of the rural and urban consumers calls for separate products, with different advertising strategies
for them. The attitude of the consumer towards fast food or frozen food decides its spread and popularity.
Similarly continuous increase in the cost of transport affects the tourism and hotel industry, but promotes
the home entertainment industry. The policies of the Government also affect the business and the
industry. The international laws and agreements create new opportunities and threats to the business.
Forecasting the probable environment changes like the change in population, population mix, consumer
preferences and their behavior, government policies, new opportunities and so on and so forth, is a major
task under corporate planning. Evolving the strategies to meet these changes is another major task.
Business planning, therefore, is absolutely essential for the survival of the business. Peter Drucker
defines long-range planning as the process of making the present managerial (risk taking) decisions
systematically and with the best possible knowledge of their futurity, organizing systematically the
efforts needed to carry out these decisions and measuring the results of these decisions against the
expectations through organized systematic feedback. Planning is neither forecasting nor making future
decisions today; it is making current decisions in the light of the future.
Planning does not eliminate the risk but provides an effective tool to face it. Comprehensive corporate
planning is not an aggregate of the functional plans, but it is a systematic approach aiming to manoeuvre
the enterprise direction over a period of time through an uncertain environment, to achieve the stated goals
and the objectives of the organization.
Development of the Business Strategies
Long-range Strategic Planning
Like any other business activity, planning also has a process and methodology. It goes without any
extra emphasis that the corporate planning is a top management responsibility. It begins with deciding the
social responsibility, and proceeds to spell out the business mission and goals, and the strategies to achieve
them.
In the very beginning of the planning process, it is necessary to establish and communicate to all
concerned the social and economic responsibilities of the organization. In order to discharge these
responsibilities, it is necessary to decide the purpose of the organization for which it works. Many
organizations call it a mission.
The mission or the aim of an organization is a broad statement of the organization's existence which sets
the direction of the organization and decides the scope and the boundaries of the business.
The task after deciding the mission or the aim is to set the goal (s) for the organization. The goal is more
specific and has a time scale of three to five years. It is described in the quantitative terms in the
form of a ratio, a norm or a level of certain business aspect such as the largest share, leader in the
industry, dominant in certain product, quality, reach and distribution, etc. The goals become a reference
for the top management in planning the business activities.

After determining the mission and the goals, the next task is to set various objectives for the organization.
The objectives are described in terms of business results to be achieved in a short duration of a year or
two. The objectives are measurable and can be monitored with the help of business tools and technologies.
Objectives may be profitability, the sales, the quality standard, the capacity utilization, etc. When
achieved, the objectives will contribute to the accomplishment of the goals and subsequently the mission.
The next step in the planning process is to set targets for more detailed working and reference.
The objective of the business is to be translated in term of functional and operational units for easy
communication and decision-making. The targets may be monthly for the sales, production, inventory
and so on. The targets will be the direct descendants of the objective(s).
The success in achieving the goals and objectives is directly dependent on the management's business
strategies. Business is like a war where two or more business competitors are set against each other to
win and are constantly in search of a strategy to win. The strategy means the manner in which the
resources, such as the men, the material, the money and the know-how will be put to use over a period
to achieve the goals. The resources of an organization are deployed based not only on its goals and its business strategies but also on the competition being faced by it. The game is one of evolving strategies and counter
strategies to win.

The development of the strategy also considers the environmental factors such as the technology, the
markets, the life style, the work culture, the attitudes, the policies of the Government and so on.
A strategy helps to meet the external forces affecting the business development effectively and further
ensures that the goals and the objectives are achieved. The development of the strategy
considers the strength of the organization in deploying the resources and at the same time it
compensates for the weaknesses. The strategy formulation, therefore, is an unstructured exercise
of a complex nature riddled with the uncertainties (see Figure 3-37). It sets the guidelines for use
of the resources in kind and manner during the planning period.

Fig 3-37 Strategy Formulation Model


Types of Strategies
A strategy means a specific decision(s) usually, but not always, regarding the development of the
resources to achieve the mission or goals of the organization. The right strategy beats competition
and ensures the attainment of goals, while a wrong strategy fails to achieve the goals. Correction
and improvement, in case of a wrong strategy is possible at a very high cost. Such a situation is described
as a strategic failure.
If a strategy considers a single point of attack by a specific method, it is a pure strategy. If a
strategy acts on many fronts by different means, then it is a mixed strategy. The business strategy
could be a series of pure strategies handling several external forces simultaneously.
Hence, the strategy, may fall in any area of a business and may deal with any aspects of the business. It
could be aspects like price, market, product, technology, process, quality, service, finance, management
strength and so on. When the management decides to fight the external forces of a single area by choice,
it becomes a pure strategy. If it uses or operates in more than one area, then it becomes a mixed strategy.

The success of an organization, in spite of its strength, depends on the strategic moves or planning
the management purposes. The strategy may be pure or mixed. It can be classified into four broad classes:
1. Overall Company Strategy; 2. Growth Strategy; 3. Product Strategy; and 4. Marketing Strategy.

These strategies are applicable to all the types of business and industries.

Overall Company Strategy

This strategy considers a very long-term business perspective, deals with the overall strength of the entire company and evolves those policies of the business which will dominate the course of the business movement. It is the most productive strategy if chosen correctly, and fatal if chosen wrongly. The other
strategies act under the overall company strategy. To illustrate the overall company strategy, following
examples are given:

1. A two wheeler manufacturing company will have a strategy of mass production and an
aggressive marketing.

2. A computer manufacturer will have a strategy of adding new products every two or three years.

3. A consumer goods manufacturer will have a strategy of maximum reach to the consumer and
exposure by way of a wide distribution network.

4. A company can have a strategy of remaining in the low price range and catering to the masses.

5. Another company can have a strategy of expanding very fast to capture the market.

6. A third company can have a strategy of creating a corporate brand image to build brand loyalty, e.g., Escorts, Kirloskar, Godrej, Tata, Bajaj, BHEL, MTNL.

The overall company strategy is broad-based, having a far-reaching effect on the different facets of the business, and forming the basis for generating strategies in the other areas of business.

Growth Strategy

An organization may grow in two different ways. Growth may either mean the growth of the existing
business turnover, year after year, or it may mean the expansion and diversification of the business.

A two wheeler manufacturing company's growth was very rapid on a single product for more than
two decades; then it brought out new models, then came the range of products, and finally the company had
manufacturing units at multiple locations. This is an example of the growth of the existing business
structure.

Another major example is Amazon. It started as an online bookstore in 1994, but by expanding into other industries it has today become one of the world's largest e-commerce and technology companies. Its policy, therefore, is to grow with diversification.

Similarly, Netflix is one of the top-rated companies today. It began as a DVD-by-mail service in 1997 and has since evolved globally into a streaming entertainment platform.

Its growth strategy rests on the transition to streaming, original content production, global expansion and a continuous innovation approach.

Growth strategy means the selection of a product with a very fast growth potential. It means choice of
industries such as electronics, communication, transport, textile, plastic, and so on where the growth
potential exists for expansion, diversification and integration. The growth strategy means acquisition of
business of the other firms and opening new market segments.

Growth strategies are adopted to establish, consolidate, and maintain a leadership and acquire a
competitive edge in the business and industry. It has a direct, positive impact on the profitability.

Product Strategy

A growth strategy, where the company chooses a certain product with particular characteristics, becomes
a product strategy. A product strategy means choice of a product which can expand as a family of products
and provide the basis for adding associated products. It can be positioned into the expanding markets by
way of model, type, and price.

The product strategy can be innovated continuously for new markets. Some examples are as follows:

1. A company producing pressure cookers enters the business of making ovens, boilers, washing
machines and mixers-the products for home market.

2. A company producing a low prices detergent powder enters the business of washing soap and
bath soap.

3. A company producing refrigerators offers a wide range of models with different capacities and features, and further enters into the market of coolers, window air conditioners, etc.

A specific example of product strategy is Apple Inc. Apple is renowned for its innovative and iconic products. It achieved success with a product strategy that began with the Macintosh computers in the 1980s, emphasizing design, aesthetics and user experience to attract a loyal customer base. It then expanded into multiple products such as the iPod, Apple Watch and other wearables, and into services such as iTunes, Apple Music, Apple TV and Apple Fitness, creating an ecosystem with diversified revenue streams and a seamless, integrated user experience across its devices and services.

When a consumer need exists, has the potential of expanding in several dimensions, and a product can be conceived to satisfy that need, it becomes the product strategy.

Market Strategy

The product and the marketing strategies are closely related. The marketing strategies deal with the
distribution, services, market research, pricing, advertising, packing and choice of market itself. A few
examples of marketing strategies are as follows.

1. Many companies adopt the strategy of providing after sales service of the highest order.

They may offer a free after sales service or establish service organizations to solve customer
complaints and problems.
2. A company can offer its products in different packages keeping in mind the consumer
budget.
3. A company can arrange for loan facilities to buy its products and keeps the prices low.
4. A computer company manufactures computers and markets them through the market leader under
their brand name.
For example, "NIKE" has adopted a highly effective market strategy that has propelled its success by focusing on strong branding and marketing efforts, creating a powerful and recognizable brand image that resonates with customers. Through strategic athlete endorsements and sponsorships, Nike has associated itself with top talent and successful sports teams, fostering a perception of performance and excellence. Its continuous commitment to innovation and product development has resulted in cutting-edge athletic footwear, apparel and equipment that provide functional benefits and set it apart from competitors.

The marketing strategies act as an expediting and activating force for the product and the growth strategy
and as a force which accelerates business development. They are generated to create loyalty and
preference, for holding market share, for communicating consumer needs and also explaining how the
product satisfies it. Marketing strategies are generally centered around one of the factors such as quality,
price, service, and availability.

The corporate management formulates the strategies and implements them. The choice of strategy and
the method of implementation affect the corporate success. Development of a strategy is a difficult
task and it is an exercise in multidisciplinary fields. It can be developed by the business analyst under
the directions of the management. The attitude and philosophy of the management will be reflected in the strategy formulation. There are no ready-made formulae or procedures to ensure the selection of a correct strategy; only the results can prove its worth.

The last but not the least point is the business policy evolved by the top management. All the strategies
are governed by the business policy. The policies mirror the management's bias, preference,
attitude, strength and weakness. Business policy is the frame within which the strategies are
sketched.

Business policies provide the necessary guidelines to decide and act across the company and they
generally remain effective for a long time. The business policies inform people in the organization about
the intentions of the management to conduct the business in a particular direction and in a particular
manner. The policies should be clearly stated as they would be used by the people in the organization
without recourse of consultation. This is also true for a strategy formulation.

Short-Range Planning

Short-range planning deals with the targets and the objectives of the organization. Based on the goals and
the objectives, a short-range plan provides the scheme for implementation of the long- range plan. Short-

range plans are made for one year in terms of the targets which are to be achieved within the given
budgets. The organization translates long-range plans into the target covering all the critical areas of
business, to be achieved by the organization on a time scale. A manufacturing organization will make
targets for production, sales, capacity, etc. Most of the companies after deciding the targets, work on the
budgets.

A budget gives details of the resources required to achieve the targets. The budgets are prepared first in terms of physical units and then converted into financial units. Companies prepare budgets for sales, production, expenses, capital expenditure, raw materials, advertising and cash, and use them for decision-making and control.

The budgets are used as a control mechanism. The person responsible for the budget is informed regularly whether the performance is below or above the budget and whether his expense budgets and performance have an adverse relation.

The budgets act as self-motivating tools for achieving operational performance. They induce action on the part of the manager if his performance is under the budget. Though the budgets are made at the 'responsibility centers' of the organization, budgeting is not an exercise in isolation. All the budgets, when computed in monetary terms, result in financial budgets. The diagram in Figure 3-38 shows the relationship of the various budgets.
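To make the control idea concrete, the following is a minimal sketch in Python of how actual performance can be reported against the budget. The responsibility centres and figures are purely hypothetical; this is an illustration only, not a technique prescribed by this study material.

# Illustrative sketch: comparing actual expense performance against budgets.
# Responsibility-centre names and figures are hypothetical examples.

expense_budget = {"Production": 800_000, "Advertising": 150_000, "Administration": 250_000}
actual_spend   = {"Production": 860_000, "Advertising": 140_000, "Administration": 250_000}

for centre, planned in expense_budget.items():
    actual = actual_spend[centre]
    variance = actual - planned
    status = "over budget" if variance > 0 else "within budget"
    print(f"{centre}: budget {planned:,}, actual {actual:,}, variance {variance:+,} ({status})")

Each responsibility centre is simply compared with its budget, so the responsible manager can be informed regularly whether performance is over or within the budget.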


Fig 3-38 Relationship of Budgets to Financial Budgets


The advantages of short-range planning with the help of budgets are as follows:
1. It gives the manager a clear target of achievement.
2. It specifies to the manager the resource allocation for a given task and the freedom to use it.
3. It provides the manager with information on the performance; whether it is under or over the budget.
4. It helps the management assess the overall performance of the business in the light of short-term targets and long-term goals.
5. It provides an efficient tool to coordinate all the efforts within the organization.
6. It provides the management selective information on the shortfalls and overruns, for
immediate action.
7. It provides all the information in monetary terms for comparison between any two business entities in the organization.
The budgeting, as a tool of short-range planning, forces the managers to set the targets and to assess the resource requirements to meet those targets. Since setting the targets and designing the budget to fulfil them is the sole responsibility of the manager, he stands committed to it and works for it.

Strategic Analysis of Business
Business today is competitive. The parameters of competition change from time to time, calling upon the management of the business to make conscious, continuous efforts to remain competitive. The business continues to be competitive so long as it has a competitive advantage in one or more aspects of the business. The competitive advantage may be price, quality, service, product and so on, or a combination of these leveraging factors.
The competitive advantage is not a natural result of a management process. It has to be built through a systematic approach, through manipulation of the leveraging factors. Michael Porter's 'five forces model' suggests how to go about building that approach. The five forces, which drive an organization in the competitive tussle, are:
• Business rivalry
• Threat from substitute product or service
• Threat of new entrants
• Bargaining power of suppliers
• Bargaining power of customers
These forces are handled effectively by analyzing the business of the organization in a different manner and building strategies which offer a competitive advantage. The manner in which such advantage can be built is to use one or a combination of the following five strategic options:
• Create barriers to entry
• Reduce cost of product process
• Differentiate product or service
• Improve quality of offer
• Innovate to move up on value chain
The choice of options is based on the analysis of current business and identifying which options are
feasible and advantageous to the organization. All these strategy options are primarily built through
manufacturing technology, R & D efforts, financial strength, product leadership and continuous
innovation in product and services. Application of information technology in almost all areas of business
offers an added cutting-edge advantage to these strategies. Let us now elaborate how competitive advantage can be gained using information technology.
Create Barriers to Entry
A fundamental concept for a business to survive and grow is creating barriers for new entrants so that the birth of competition itself is prevented. Apart from other strategic initiatives, organizations resort to IT applications in mission-critical areas, which raise the value of the product or service offered.
The implementation of enterprise systems such as ERP (Enterprise Resource Planning), CRM (Customer Relationship Management), or SCM (Supply Chain Management) can greatly enhance business performance. These systems can be applied in various areas such as manufacturing, distribution, post-sales service, and customer relations. They can lead to product and service quality improvements, swift response times to customer orders or queries, reduced cycle times, and cost savings on resources. If an organization successfully implements these applications, that strength becomes a barrier for a newcomer trying to enter with similar matching strength. Such IT systems are often complex and require significant time and resources to implement successfully, so a newcomer to the market would typically find it challenging to replicate this level of efficiency immediately. Also, the rise of cloud-based 'Software as a Service' (SaaS) versions of these enterprise systems has somewhat lowered the barriers to entry, as they typically have lower upfront costs and shorter implementation times compared to traditional on-premise systems. Despite this, the strategic integration and effective use of these systems can still provide a significant competitive advantage.
Reduction in Process Cost
It has been proven beyond doubt in a number of cases that IT applications reduce the cost of business operations by reducing the use of resources and cutting down the process cycle time. IT contributes significantly to the goal of becoming a low-cost producer of goods and services. Transaction processing with the use of automated data capture tools reduces the operative overheads. Robots and process control systems reduce the manufacturing overheads. IT-enabled inventory management and supply chains reduce interest cost due to the reduction of inventory holding. IT applications further help improve the asset turnover ratio, leading to higher profit margins and higher customer satisfaction.
Differentiating Product Functions, Features and Facilities
Product and/or service differentiation can be achieved first through technology, by adding innovative and better functions, features and facilities to the product or service. The technology is used to innovate the functional side of the product or service through better design, customization and a variety of features. Having achieved such differentiation, the next possibility is to differentiate by improving the features and facilities of the process which delivers the product to the customer.
IT applications help monitor and track post-sales activities, helping customers to solve their problems. In the manufacturing process, more online, real-time intelligent support is provided to improve the output of the process. A major contribution to product or service differentiation on the manufacturing side is through CAD/CAE/CAM, affecting both the process and the product.
In non-manufacturing businesses such as banks, insurance, airlines, hotels and hospitals, IT applications enable the creation of innovative new products for different market and customer segments. IT enables not only the creation
of new products but helps manage them as well. For example, HDFC and ICICI offer different loan products to customers. Insurance companies develop a variety of policies meeting the requirements of every segment. Intelligent use of IT differentiates products and services and helps create a variety of new products.
Scoring Through Quality Assurance of Product or Service
Information technology has played an immense role in this key requirement of management. The technology is capable of data capture and of processing the data to help quality management. Technology has the ability to address this issue across the scope of the system, i.e., Input-Process-Output. It is capable of checking the quality of input, controlling the process of input conversion so that process defects are removed, and ensuring the quality of the output. IT-based IS solutions are capable of providing decision-making support through expert systems, control systems, and AI systems.
Statistical quality control (SQC), Total Quality Management (TQM), Quality Assurance Systems and Knowledge Management Systems use IS and IT tools and techniques to improve the quality of the product or service. It is to be noted that, next to price, quality is the most attractive proposition to the customer, giving strategic benefit and competitive advantage to the organization. Organizations are going for ISO certification and work on the Capability Maturity Model and the People Capability Maturity Model to enable continuous quality improvement in products, processes and services.
Moving up on Value Chain
Information Systems (IS) and Information Technology (IT) have played a pivotal role in transforming and enhancing the entire value chain of businesses. The value chain – which encompasses all processes, tasks, and stages from supplier to customer – has seen radical improvements on several fronts. These include the functional quality of products or services, their features, customer problem-solving capabilities, delivery mechanisms, and continuous innovation, as shown in Figure 3-39.

Fig 3-39 Value Chain and Six Dimensions of Improvement


Enhancing the value chain essentially involves improving communication across all levels, reducing costs
in every aspect of business, decreasing transaction or operation cycle times, monitoring and meeting
customer expectations, assessing competitors' moves, and improving customer service and relations.
The value chain is made up of all the entities (processes, tasks, individuals, companies) that participate in the production of a product or service. Each entity adds value to the product or service. It encompasses all processes, tasks and stages between suppliers and customers. IS and IT have helped to develop solutions such as:
• ERP (Enterprise Resource Planning) systems that facilitate integrated resource management, thus improving business operations.
• SCM (Supply Chain Management) systems that streamline and optimize the supply chain, reducing the overall costs of business operations.
• CRM (Customer Relationship Management) systems that enhance customer relationships, resulting in increased loyalty and repeat business.
• PLM (Product Lifecycle Management) systems that manage the entire lifecycle of a product, enabling continuous product improvements and efficient maintenance.
These solutions are often powered by enabling technologies such as the Internet, wireless connectivity,
Electronic Data Interchange (EDI), digital technologies, and CAD/CAM (Computer-Aided
Design/Computer-Aided Manufacturing) systems. These technologies add further value, accelerate
processes, and bolster the overall efficacy of the solutions.
The role of IS and IT in impacting business strategy and development can be summarized in brief as under. They enable the following:
Beyond Cost Savings: IS and IT offer benefits that extend beyond traditional cost savings. They
foster innovation and facilitate the development of new business models, products, and services.
Building Entry Barriers: Through the use of technology solutions, businesses can create entry
barriers across the entire supply chain. These integrated solutions bind customers and suppliers
together, thereby enhancing business performance measurements in terms of cost, quality, and
service.
Facilitating Paradigm Shift: IS and IT enable a shift from traditional 'make and sell' business models
to 'sense and respond' models. This transformation allows businesses to be more adaptive and
responsive to market needs and customer demands.
Applying such a model requires a specific strategic analysis for each business scenario to determine appropriate design, development, and implementation strategies. However, Porter's Five Forces and the strategic options analysis can still provide a valuable starting point. The five forces and five options are applicable to every business, but a specific strategic analysis is required in each case to determine the various strategic options for design, development and implementation. Figure 3-40 shows the strategic analysis model.


Fig 3-40 Strategic Analysis Model


The strategic analysis model proceeds through: Drivers → Measures Analysis → Approach to Strategies → Develop Strategic Plans → Implement → Evaluate and Review.
With a thorough analysis, a business can develop a mix strategy that delivers a competitive advantage.
Creating barriers to entry helps keep out potential competition and substitutes. Enhancing supplier
relationships can lead to cost reductions and quality improvements. Building customer relationships
fosters loyalty and improves profit margins.




Chapter 4

System Development Life Cycle

4.1 Definition, Stages of System Development
A systems development life cycle (SDLC) is a systematic and orderly approach to solving system problems. It involves a series of steps that collectively aim at the development of an effective system that is functional for an organization.
The overall simplified process of system development normally follows the stages below:
Planning and Feasibility: In this initial stage, the project is defined, and the feasibility of developing the
system is assessed. This involves identifying the goals and objectives of the system, conducting a
feasibility study, evaluating technical and economic factors, and determining whether the project is viable
and worth pursuing.
Requirements Gathering and Analysis: This stage involves understanding and documenting the
requirements of the system. System analysts interact with stakeholders, such as users, managers, and
domain experts, to gather information about the system's functionalities, features, and constraints. The
gathered requirements are analyzed, documented, and validated for accuracy and completeness.
System Design: In this stage, the system's architecture and components are designed based on the
requirements gathered in the previous stage. The design includes decisions about the system's structure,
user interface, data storage, algorithms, security measures, and integration with other systems. It may
involve creating diagrams, prototypes, and mock-ups to visualize the system's design.
Implementation: This stage involves coding or configuring the system based on the design specifications.
Programmers or developers write the code, create databases, develop user interfaces, and integrate
different modules or components. Testing is performed throughout the implementation process to identify
and fix any errors or bugs.
Testing and Quality Assurance: In this stage, the developed system is rigorously tested to ensure it meets
the specified requirements and functions correctly. Various testing techniques, such as unit testing,
integration testing, system testing, and acceptance testing, are employed to detect and resolve defects.
Quality assurance measures are implemented to ensure the system’s reliability, performance, and security.
Deployment and Installation: Once the system passes the testing phase, it is deployed and installed in
the production environment. This involves transferring the system from the development environment to
the live environment and configuring it to operate with the necessary hardware, software, and network
infrastructure. User training and documentation may also be provided to support the system's successful
adoption.
Maintenance and Enhancement: After deployment, the system enters the maintenance phase, where it
is continuously monitored, maintained, and enhanced. This includes resolving any issues or bugs reported
by users, applying updates and patches, optimizing performance, and incorporating new features or
functionalities based on user feedback and changing requirements.
It's important to note that these stages are not always strictly sequential and may overlap or be iterative,
depending on the development methodology used (e.g., waterfall, agile, iterative). Each stage's specific
approach and activities may vary based on the project's scope, complexity, and organizational
requirements.


While this problem-solving approach comes in many flavors, it usually incorporates the following general problem-solving steps (see Figure 4-1):
1. Planning-identify the scope and boundary of the problem, and plan the development strategy and
goals.
2. Analysis-study and analyze the problems, causes, and effects. Then, identify and analyze the
requirements that must be fulfilled by any successful solution.
3. Design- If necessary, design the solution-not all solutions require design.
4. Implementation- Implement the solution.
5. Support-analyze the implemented solution, refine the design, and implement improvements to
the solution. Different support situations can thread back into the previous steps.
The term cycle in systems development life cycle refers to the natural tendency for systems to
cycle through these activities, as was shown in Figure 4-1.

How do the activities of the computer programmer compare with those of the systems analyst? First, systems analysts are typically involved in all the aforementioned problem-solving steps, whereas programmers are typically involved only in the last three or four steps. Analysts typically communicate business requirements and design specifications to the programmer. Finally, programmers tend to be concerned only with information technology. Systems analysts, on the other hand, are responsible for other aspects of a system or application, including:
• PEOPLE, including managers, users, and other developers-and including the organizational
behaviors and politics that occur when people interact with one another.
• DATA including capture, validation, organization, storage, and usage.
• PROCESSES, both automated and manual, that combine to process data and produce
information.
• INTERFACES, both to other systems and applications, as well as to the actual users (e.g., reports and display screens).
• GEOGRAPHY, which effectively distributes data, processes, and information to the people.

Fig 4-1 A system development life cycle

Feasibility Analysis
Feasibility is the measure of how beneficial or practical the development of an information system will be to an organization.
Feasibility analysis is the process by which feasibility is measured.
Feasibility should be measured throughout the life cycle. In earlier chapters we called this a creeping
commitment approach to feasibility. The scope and complexity of an apparently feasible project
can change after the initial problems and opportunities are fully analyzed or after the system has been
designed. Thus, a project that is feasible at one point may become infeasible later.
Feasibility analysis is a crucial step in the System Development Life Cycle that helps organizations
determine the viability of implementing a proposed system. By conducting a comprehensive analysis of
technical, economic, operational, and legal/regulatory factors, stakeholders can make informed decisions
about proceeding with system development. The findings and insights from the feasibility analysis serve
as a foundation for planning and executing subsequent phases of the SDLC, ensuring that resources are
allocated effectively and that the system aligns with the organization's strategic objectives. Let's study
some checkpoints for our systems development life cycle.
If you study your company's project standards or systems development life cycle (SDLC), you'll probably
see a feasibility study phase or deliverable, but not an explicit ongoing process. But look more
closely! On deeper examination, you'll probably identify various go/no-go checkpoints or
management reviews. These checkpoints and reviews identify specific times during the life cycle
when feasibility is reevaluated. A project can be canceled or revised in scope, schedule, or budget at
any of these checkpoints. Thus, an explicit feasibility analysis phase in any life cycle should be considered
to be only an initial feasibility assessment.
Feasibility checkpoints can be installed into any SDLC that you are using. Figure 4-2 shows feasibility
checkpoints for a typical life cycle (similar to, but not identical to, the life cycle used in this book). The
checkpoints are represented by red diamonds. The diamonds indicate that a feasibility reassessment and
management review should be conducted at the end of the prior phase (before the next phase). A
project may be canceled or revised at any checkpoint, despite whatever resources have been spent.
This idea may bother you at first. Your natural inclination may be to justify continuing a project based on
the time and money you've already spent. Those costs are sunk! A fundamental principle of
management is never to throw good money after bad-cut your losses and move on to a more feasible
project. That doesn't mean the costs already spent are not important. Costs must eventually be recovered
if the investment is ever to be considered a success. Let's briefly examine the checkpoints in Figure
4-2.
Systems Analysis-A Survey Phase Checkpoint The first feasibility analysis is conducted during the
survey phase. At this early stage of the project, feasibility is rarely more than a measure of the urgency
of the problem and the first-cut estimate of development costs. It answers the question: Do the problems
(or opportunities) warrant the cost of a detailed study of the current system? Realistically, feasibility can't
be accurately measured until the problems (and opportunities) and requirements (definition phase) are
better understood.


Fig 4-2 Feasibility Check Point in the System Development Life Cycle

Systems Analysis-A Definition Phase Checkpoint The next checkpoint occurs after the definition of
user requirements for the new system. These requirements frequently prove more extensive than
originally stated. For this reason, the analyst must frequently revise cost estimates for design and
implementation. Once again, feasibility is reassessed. If feasibility is in question, scope, schedule, and
costs must be rejustified. (Again, Module A offers guidelines for adjusting project expectations.)

If early estimates were adjusted upward, you may still be within the range despite an increase in scope. If not, the project need not always be canceled or reduced in scope. If you have kept track of the increase in problems and requirements since the beginning of the project, your system owner may be willing to pay for the increased requirements (and adjust the schedule accordingly).
Systems Design-A Selection Phase Checkpoint The selection phase in the SDLC of Figure 4-2 is the design decision-making phase. This SDLC separates design decision making from the actual design phase. In any case, the
selection phase represents a major feasibility analysis activity since it charts one of many possible
implementations as the target for systems design.
Problems and requirements should be known by now. During the selection phase, alternative solutions
are defined in terms of their input/output methods, data storage methods, computer hardware and
software requirements, processing methods, and people implications. The following list presents
the typical range of options that can be evaluated by the analyst.
• Do nothing! Leave the current system alone. Regardless of management's opinion or your own
opinion of this option, it should be considered and analyzed as a baseline option against which all
others can and should be evaluated.
• Reengineer the (manual) business processes, not the computer-based processes. This may involve
streamlining activities, reducing duplication and unnecessary tasks, reorganizing office layouts,
and eliminating redundant and unnecessary forms and processes, among others.
• Enhance existing computer processes.
• Purchase a packaged application.
• Design and construct a new computer-based system. This option presents numerous other
options; centralized versus distributed versus cooperative processing; on-line versus batch
processing; and files versus database for data storage. Of course, an alternative could be a
combination of the preceding options.
After defining these options, each option is analyzed for operational, technical, schedule, and economic
feasibility. This module will closely examine these four classes of feasibility criteria. One alternative is
recommended to system owners for approval. The approved solution becomes the basis for general and
detailed design.
Systems Design-A Procurement Phase Checkpoint Because the procurement of hardware and
applications software involves economic decisions that may require sizable outlays of cash, it shouldn't
surprise you that feasibility analysis is required before a contract is extended to a vendor. The
procurement phase may be consolidated into the selection phase because hardware and software
selection may have a significant impact on the feasibility of the solutions being considered.
Systems Design-A Design Phase Checkpoint A final checkpoint is completed after the system is
designed. The general and detailed design specifications have been completed. The complexity of the
solution should be apparent. Because implementation is often the most time-consuming and costly
phase, the checkpoint after design gives us one last chance to cancel or downsize the project.
Downsizing is the act of reducing the scope of the initial version of the system. Future versions can
address other requirements after the system goes into production.

So far, we've defined feasibility and feasibility analysis, and we've identified feasibility checkpoints in
the life cycle. Most analysts agree that there are four categories of feasibility tests:
- Operational feasibility is a measure of how well the solution will work in the organization. It is also
a measure of how people feel about the system/project.
- Technical feasibility is a measure of the practicality of a specific technical solution and the
availability of technical resources and expertise.
- Schedule feasibility is a measure of how reasonable the project timetable is.
- Economic feasibility is a measure of the cost-effectiveness of a project or solution. This is often
called a cost-benefit analysis.
Operational and technical feasibility criteria measure the worthiness of a problem or solution.
Operational feasibility is people oriented. Technical feasibility is computer oriented.
Economic feasibility deals with the costs and benefits of the information system. Actually, few systems
are infeasible. Instead, different options tend to be more or less feasible than others. Let's take a
closer look at the four feasibility criteria.
Operational feasibility criteria measure the urgency of the problem (survey and study phases) or the acceptability of a solution (definition, selection, acquisition, and design phases). How do you measure operational feasibility? There are two aspects of operational feasibility to be considered:
1. Is the problem worth solving, or will the solution to the problem work?
2. How do the end-users and management feel about the problem (solution)?
Operational feasibility is about analyzing the impact of the proposed system to the existing business
process and system in an operational level. Assessing the organization’s readiness and ability to adapt
to the changes brought by the new system is important so that the potential risks and challenges that
may arise during or after the implementation can be addressed.
Is the Problem Worth Solving, or Will the Solution to the Problem Work? Do you recall the PIECES framework for identifying problems? PIECES can be used as the basis for analyzing the urgency of a problem or the effectiveness of a solution. The following is a list of the questions that address these issues; a brief illustrative scoring sketch follows the list:
P Performance. Does the system provide adequate throughput and response time?
I Information. Does the system provide end-users and managers with timely, pertinent,
accurate, and usefully formatted information?
E Economy. No, we are not prematurely jumping into economic feasibility! The question here is,
Does the system offer adequate service level and capacity to reduce the costs of the business or
increase the profits of the business?
C Control. Does the system offer adequate controls to protect against fraud and embezzlement and
to guarantee the accuracy and security of data and information?
E Efficiency. Does the system make maximum use of available resources, including people, time, flow of forms, minimum processing delays, and the like?
S Services. Does the system provide desirable and reliable service to those who need it? Is the system
flexible and expandable?
NOTE The term system, used throughout this discussion, may refer either to the existing system or a
proposed system solution, depending on which phase you're currently working in.
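The PIECES questions above can be turned into a simple scoring checklist. The Python sketch below is only an illustration of that idea; the 1-to-5 rating scale, the individual ratings and the threshold for "weak areas" are hypothetical assumptions, not values prescribed by the framework.

# Illustrative sketch: rating a proposed system against the PIECES
# framework on a hypothetical 1 (poor) to 5 (good) scale.

pieces_ratings = {
    "Performance": 4,   # throughput and response time
    "Information": 3,   # timely, accurate, usefully formatted output
    "Economy":     4,   # service level and capacity vs. cost or profit
    "Control":     2,   # protection against fraud; data accuracy and security
    "Efficiency":  3,   # use of people, time, forms, processing delays
    "Services":    4,   # desirable, reliable, flexible, expandable service
}

average = sum(pieces_ratings.values()) / len(pieces_ratings)
weak_areas = [criterion for criterion, score in pieces_ratings.items() if score <= 2]

print(f"Average PIECES rating: {average:.1f} out of 5")
print("Areas needing attention:", ", ".join(weak_areas) or "none")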

How Do the End-Users and Managers Feel about the Problem (Solution)? It's important not only
to evaluate whether a system can work, but we must also evaluate whether a system will work. A
workable solution might fail because of end-user or management resistance. The following questions
address this concern;
• Does management support the system?
• How do the end-users feel about their role in the new system?
• What end-users or managers may resist or not use the system? People tend to resist change. Can this problem be overcome? If so, how?
• How will the working environment of the end-users change? Can or will end-users and
management adapt to the change?
Essentially, these questions address the political acceptability of solving the problem or the solution.
Usability Analysis When determining operational feasibility in the later stages of the development
life cycle, usability analysis is often performed with a working prototype of the proposed system. This
is a test of the system's user interfaces and is measured in how easy they are to learn and to use and
how they support the desired productivity levels of the users. Many large corporations, software
consultant agencies, and software development companies employ user interface specialists for
designing and testing system user interfaces. They have special rooms equipped with video
cameras, tape recorders, microphones, and two-way mirrors to observe and record a user working
with the system. Their goal is to identify the areas of the system where the users are prone to
make mistakes and processes that may be confusing or too complicated. They also observe the
reactions of the users and assess their productivity.
How do you determine if a systems user interface is usable? There are certain goals or criteria that
experts agree help measure the usability of an interface and they are as follows:
• Ease of learning-How long it takes to train someone to perform at a desired level.
• Ease of use-You are able to perform your activity quickly and accurately. If you are a first-time user or infrequent user, the interface is easy and understandable. If you are a frequent user, your level of productivity and efficiency is increased.
• Satisfaction-You, the user, are favorably pleased with the interface and prefer it over types you
are familiar with.
Technical feasibility can be evaluated only after those phases during which technical issues are
resolved-namely, after the evaluation and design phases of our life cycle have been completed. Today,

very little is technically impossible. Consequently, technical feasibility looks at what is practical and
reasonable. Technical feasibility addresses three major issues:
1. Is the proposed technology or solution practical?
2. Do we currently possess the necessary technology?
3. Do we possess the necessary technical expertise, and is the schedule reasonable?
So, overall, technical feasibility is about assessing the adequacy of the technical infrastructure and capabilities for the system, identifying any technical limitations that may affect system development and implementation, and evaluating the availability of the necessary hardware, software and technical expertise.
Is the Proposed Technology or Solution Practical? The technology for any defined solution is normally
available. The question is whether that technology is mature enough to be easily applied to our
problems. Some firms like to use state-of-the-art technology, but most firms prefer to use mature and
proven technology. A mature technology has a larger customer base for obtaining advice concerning
problems and improvements.
Do We Currently Possess the Necessary Technology? Assuming the solution's required technology is practical, we must next ask ourselves, is the technology available in our information systems shop? If the technology is available, we must ask if we have the capacity. For instance, will our current printer be able to handle the new reports and forms required of the new system?
If the answer to either of these questions is no, then we must ask ourselves, can we get this technology? The technology may be practical and available, and, yes, we need it. But we simply may not be able to afford it at this time. Although this argument borders on economic feasibility, it is truly technical feasibility. If we can't afford the technology, then the alternative that requires the technology is not practical and is technically infeasible!
Do We Possess the Necessary Technical Expertise, and Is the Schedule Reasonable? This
consideration of technical feasibility is often forgotten during feasibility analysis. We may have the
technology, but that doesn't mean we have the skills required to properly apply that technology. For
instance, we may have a database management system (DBMS). However, the analysts and programmers
available for the project may not know that DBMS well enough to properly apply it. True, all information
systems professionals can learn new technologies. However, that learning curve will impact the technical
feasibility of the project; specifically, it will impact the schedule.
Schedule Feasibility Given our technical expertise, are the project deadlines reasonable? Some projects are initiated with specific deadlines. You need to determine whether the deadlines are mandatory or desirable. For instance, a project to develop a system to meet new government reporting regulations may have a deadline that coincides with when the new reports must be initiated. Penalties associated with missing such a deadline may make meeting it mandatory. If the deadlines are desirable rather than mandatory, the analyst can propose alternative schedules.
It is preferable (unless the deadline is absolutely mandatory) to deliver a properly functioning information system two months late than to deliver an error-prone, useless information system on time! Missed schedules are bad; inadequate systems are worse! It's a choice between the lesser of two evils.

Economic Feasibility The bottom line in many projects is economic feasibility. During the early phases of the project, economic feasibility analysis amounts to little more than judging whether the possible benefits of solving the problem are worthwhile. Costs are practically impossible to estimate at that stage because the end-user's requirements and alternative technical solutions have not been identified. However, one needs to estimate or forecast the cost involved in system development, including hardware, software and other operating expenses, and to evaluate the potential benefits and return on investment (ROI) the system is expected to deliver while in use. As soon as specific requirements and solutions have been identified, the analyst can weigh the costs and benefits of each alternative. This is called a cost-benefit analysis.
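As a simple illustration of such a calculation, the Python sketch below estimates a payback period and a simple ROI. All figures are hypothetical, and the formulae used are common textbook approximations rather than a method prescribed by this study material.

# Illustrative sketch: simple cost-benefit figures for a proposed system.
# All amounts are hypothetical assumptions.

development_cost = 2_500_000      # one-time cost of building the system
annual_operating_cost = 300_000   # recurring cost once in production
annual_benefit = 1_100_000        # estimated yearly benefit (savings or revenue)
useful_life_years = 5

net_annual_benefit = annual_benefit - annual_operating_cost
payback_years = development_cost / net_annual_benefit

lifetime_benefit = net_annual_benefit * useful_life_years
roi = (lifetime_benefit - development_cost) / development_cost

print(f"Payback period : {payback_years:.1f} years")
print(f"Simple ROI     : {roi:.0%} over {useful_life_years} years")

A more rigorous analysis would also discount future benefits to their present value, as illustrated under Principle 5 later in this chapter.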
Documentation is the activity of recording facts and specifications for a system. Testing helps the system designers and builders test databases and application programs.
Another area that can be analyzed as part of the feasibility test, as per the need and nature of the organization, is legal and regulatory feasibility. This requires investigating and understanding the relevant legal and regulatory requirements applicable to the organization in the case of the proposed system, evaluating feasibility in terms of data protection, privacy, and security regulations, and identifying any barriers that may affect system development and deployment.
Finally, the findings generated from all the above analyses of the different areas are documented and a conclusion is reached. The findings are presented in the form of a feasibility analysis report to management, stakeholders and decision makers for approval. This feasibility report is the basis for taking an informed decision about proceeding with system development. Feasibility analysis is not a one-time process; rather, it is an iterative process in the SDLC, and at each key milestone the feasibility analysis is revisited to ensure that the project is still viable and aligned with the organization's goals.

4.2 Underlying Principles of System Development


Principle 1: Get the Owners and Users Involved:
It is essential to involve system owners and users throughout the development process. Often, analysts,
programmers, and other IT specialists may possess a possessive attitude towards the system they are
developing, creating a division between technical staff and users or management. While technical teams
may strive to create impressive technological solutions, these solutions can sometimes fail to address the
actual organizational problems or even introduce new issues. Therefore, successful systems development
requires active participation and involvement from system owners and users.
The individuals responsible for systems development should allocate sufficient time for engaging with
owners and users, insist on their participation, and strive to reach agreements on decisions that may
impact them. Miscommunication and misunderstandings have historically been major hurdles in systems
development. However, involving and educating owners and users can minimize these issues and foster
acceptance of new ideas and technological changes. Since people often resist change, information
technology is often seen as a threat. The most effective way to overcome this perception is through
consistent and comprehensive communication with owners and users.

Principle 2: Use a Problem-Solving Approach:
A methodology is a problem-solving approach to building systems. The term problem is used to
include real problems, opportunities for improvement, and directives from management. The classic
problem-solving approach is as follows:
1. Study and understand the problem and its context.
2. Define the requirements of a suitable solution.
3. Identify candidate solutions and select the "best" solution.
4. Design and/or implement the solution.
5. Observe and evaluate the solution's impact, and refine the solution accordingly.
Systems analysts should approach all projects using a problem-solving approach. Inexperienced problem
solvers tend to eliminate or abbreviate one or more of the above steps. The result can range from (1)
solving the wrong problem, to (2) incorrectly solving the problem, to (3) picking the wrong solution. A
methodology's problem-solving orientation can reduce or eliminate the above risks.
Principle 3: Establish Phases and Activities: All life cycle methodologies prescribe phases and
activities. The number and scope of phases and activities varies from author to author, expert to expert,
and company to company. In each phase, the stakeholders are concerned with the building blocks
opposite that phase. The phases are:
1. Preliminary Investigation: Conducting an initial assessment and gathering information about the
problem or opportunity.
2. Problem Analysis: Analyzing and understanding the root causes and nature of the problem.
3. Requirements Analysis: Identifying and documenting the functional and non-functional
requirements of the desired solution.
4. Decision Analysis: Evaluating alternative solutions and selecting the most appropriate one based
on defined criteria.
5. Design: Creating a detailed blueprint or plan for implementing the chosen solution.
6. Construction: Developing or configuring the system components based on the design
specifications.
7. Implementation: Deploying the system into the operational environment and making it available to end-users.
Each phase serves a role in the problem-solving process. Some phases identify problems, while others
evaluate, design and implement solutions.
In addition, a system will eventually enter the operation stage of the life cycle; therefore, we included
operations and support activities to support the final system. Both activities are depicted with a light color
to reflect that they are ongoing activities.
Also, the phases may be tailored to the special needs of any given project (e.g., deadlines, complexity, strategy, resources, and so on). In this chapter, we will describe such tailoring as "routes" through the methodology or problem-solving process.

Principle 4: Establish Standards: An organization should embrace standards for both information
systems and the process used to develop those systems. In medium to large organizations, system
owners, users, analysts, designers, and builders come and go. Some will be promoted; some will quit;
and others will be reassigned. To promote good communication between constantly changing
managers, users, and information technology professionals, you must develop standards to ensure
consistent systems development.
Standards should minimally encompass the following:
Documentation: Standards for documentation ensure that it is consistently created and maintained
throughout the entire system’s development life cycle. Documentation serves as a valuable
communication tool and aids in identifying system strengths and weaknesses, involving users, and
providing progress updates to management.
Quality: Quality standards guarantee that the deliverables of each phase or activity meet the
expectations of the business and technology. They help minimize the chances of missing business
problems and requirements, as well as errors in designs and program implementation. Quality standards
apply not only to documentation but also to technical end products such as databases, programs, user
interfaces, and networks.
Automated Tools: Standards for automated tools define the technology used in developing and
maintaining information systems. They ensure consistency, completeness, and quality throughout the
development process. Automated tools like Microsoft Access or Visual Basic are commonly used to
facilitate different phases and activities, generate documentation, analyze quality, and generate technical
solutions. Computer-aided systems engineering (CASE) tools are another type of automated tool used
in systems development.
Information Technology: Information technology standards guide technology solutions and
configurations of information systems towards a common architecture. They establish guidelines for
computers, peripherals, operating systems, database management systems, network topologies, user
interfaces, and software architectures. Information technology standards aim to reduce efforts and costs
associated with supporting and maintaining technologies. They promote familiarity, ease of learning,
and ease of use across information systems by limiting technology choices. However, these standards
should not hinder the exploration and adoption of emerging technologies that can benefit the business.
These standards will be documented and embraced within the context of the chosen system development process or methodology.
The need for documentation standards underscores a common failure of many analysts-the failure
to document as an ongoing activity during the life cycle. Documentation should be a working by-product
of the entire systems development effort. Documentation reveals strengths and weaknesses of the
system to multiple stakeholders before the system is built. It stimulates user involvement and reassures management about progress.
Quality standards ensure that the deliverables of any phase or activity meet business and technology expectations. They minimize the likelihood of missed business problems and requirements, as well as flawed designs and program errors (bugs). Frequently, quality standards are applied to documentation produced during development, but quality standards must also be applied to the technical end products such as databases, programs, user and system interfaces, and networks.

Automated tool standards prescribe technology that will be used to develop and maintain information
systems and to ensure consistency, completeness, and quality. Today's developers routinely use automated
tools (such as Microsoft Access or Visual Basic ) to facilitate the completion of phases and activities,
produce documentation, analyze quality, and generate technical solutions. Later in this chapter, we will
introduce another type of automated tool called computer-aided systems engineering (CASE).
Finally, information technology standards direct technology solutions and information systems to a
common technology architecture or configuration. This is similar to automated tools except the focus is
on the underlying technology of the finished product, the information systems themselves. For example,
an organization may standardize on specific computers and peripherals, operating systems, database
management systems, network topologies, user interfaces, and software architectures. The intent is to
reduce effort and costs required to provide high-quality support and maintenance of the technologies
themselves. Information technology standards also promote familiarity, ease of learning, and ease of
use across all information systems by limiting the technology choices. Information technology standards
should not inhibit the investigation or use of appropriate emerging technologies that could benefit the
business.
Principle 5: Justify Systems as Capital Investments:
Information systems should be recognized as capital investments, similar to physical assets like trucks or
buildings. Even if management fails to acknowledge this, it is important for analysts to view information
systems as investments. When considering a capital investment, two key issues should be addressed.
Firstly, when faced with a problem, there are likely to be multiple potential solutions. The analyst, with user input, should explore different alternatives rather than accepting the first solution that comes to mind. Failing to consider alternatives may hinder the ability to provide the best solution for the business.
Secondly, after identifying alternative solutions, the systems analyst should evaluate each option for feasibility,
particularly in terms of cost-effectiveness and risk management. Cost-effectiveness refers to finding a balance
between the cost of developing and operating an information system and the benefits derived from it. To assess
cost-effectiveness, a technique called cost-benefit analysis is commonly used, which will be taught later in the
book.
Risk management is the process of identifying, evaluating, and controlling what might go wrong in a project before it becomes a threat to the successful completion of the project or implementation of the information system. Cost-benefit analysis and risk management are important skills to be mastered.
By justifying information systems as capital investments and applying cost-benefit analysis and risk
management techniques, analysts can ensure that the chosen solution is not only technically feasible but
also economically viable and aligned with the organization's objectives.
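To illustrate the idea of treating alternatives as capital investments, the following minimal Python sketch compares two hypothetical options by net present value (NPV); the cash flows and the 10 percent discount rate are assumptions made purely for illustration, not figures or alternatives prescribed by this study material.

# Illustrative sketch: comparing two hypothetical solution alternatives
# by net present value.  All figures and the discount rate are assumptions.

def npv(rate, cash_flows):
    """Net present value of cash flows, where cash_flows[0] occurs now."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

discount_rate = 0.10  # assumed cost of capital

alternatives = {
    # year-0 outlay (negative), followed by yearly net benefits
    "Purchase packaged application": [-1_500_000, 500_000, 500_000, 500_000, 500_000],
    "Build new system in-house":     [-2_400_000, 800_000, 800_000, 800_000, 800_000],
}

for name, cash_flows in alternatives.items():
    print(f"{name}: NPV = {npv(discount_rate, cash_flows):,.0f}")

Under these assumed figures both alternatives show a positive NPV, and such a comparison, together with an assessment of risk, would inform the recommendation made to the system owners.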
Principle 6: Don't Be Afraid to cancel or Revise Scope: A key advantage of the phased approach to
systems development is that it allows for multiple opportunities to reassess cost-effectiveness and
feasibility. It is common to feel tempted to continue with a project simply because of the investment
already made. However, it is important to remember that canceling a project is often less costly than
implementing a flawed or unsuccessful system. This is a crucial lesson for young analysts to keep in mind.

Many system owners have expectations that exceed what they can afford or are willing to pay for.
Additionally, the scope of information system projects tends to expand as the analyst gains a better
understanding of the business problems and requirements throughout the project. Unfortunately, analysts
often fail to adjust their estimated costs and schedules as the scope increases. As a result, they frequently
take on unnecessary responsibility for cost and schedule overruns.
We advocate a creeping commitment approach to systems development. Using the creeping commitment approach, multiple feasibility checkpoints are built into the systems development methodology. At each feasibility checkpoint, all costs incurred so far are considered sunk (meaning not recoverable). They are, therefore, irrelevant to the decision. Thus, the project should be reevaluated at each checkpoint to determine if it remains feasible to continue investing time, effort, and resources.
At each checkpoint, the analyst should consider the following options:
- Cancel the project if it is no longer feasible.
- Reevaluate and adjust the costs and schedule if project scope is to be increased.
- Reduce the scope if the project budget and schedule are frozen and not sufficient to cover all
project objectives.
The concept of sunk costs is frequently forgotten or not used by the majority of practicing analysts, most
users, and even many managers.
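
The sunk-cost idea can be shown with a small, hypothetical calculation: at a checkpoint, only the remaining cost to complete the project is weighed against the expected benefit, and the money already spent plays no part in the decision. The figures below are invented for illustration.

# Hypothetical feasibility-checkpoint reasoning: sunk costs are ignored.
spent_so_far = 1_200_000      # sunk cost; irrelevant to the go/no-go decision
remaining_cost = 800_000      # estimated cost to finish the project
expected_benefit = 1_000_000  # estimated benefit if the system is delivered

# The project remains worth continuing only if future benefits exceed future costs.
if expected_benefit > remaining_cost:
    print("Continue: remaining benefits exceed remaining costs.")
else:
    print("Cancel or reduce scope: finishing would cost more than it returns.")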
Principle 7: Divide and Conquer: Consider the old saying, "If you want to learn anything, you must not try to learn everything, at least not all at once." The principle of "divide and conquer" is a valuable approach
in system development. It follows the idea that if you want to learn or tackle something effectively, you
should not try to handle everything all at once. Instead, you divide a system into smaller subsystems and
components, making it easier to manage and build the larger system. By breaking down a complex problem
into more manageable pieces, the analyst can simplify the problem-solving process.
The divide-and-conquer approach has additional benefits in terms of communication and project
management. It allows different stakeholders to be assigned different portions of the system, promoting
collaboration and specialization. Each stakeholder can focus on their assigned subsystem or component,
leading to more efficient and coordinated efforts.
This approach is familiar to us since our schooling days. For example, when writing a paper, we are often
taught to create an outline before starting to write. The outlining process is a divide-and-conquer approach
to writing. It helps us organize our thoughts, break down the content into manageable sections, and create
a structured and coherent final paper.
Similarly, in system development, dividing a larger problem into smaller subsystems and components
helps us approach the project in a systematic and organized manner. It allows us to concentrate on
individual elements, address specific challenges, and gradually build the complete system.
By applying the divide and conquer principle, system analysts can effectively manage complexity, enhance
communication and collaboration, and ensure a more successful and efficient system development process.

Principle 8: Design Systems for Growth and Change: It is crucial for systems analysts to design
information systems with future growth and change in mind, rather than solely focusing on immediate user
requirements. While there may be pressure to develop systems quickly to meet current needs, neglecting long-
term considerations often leads to problems down the line.
In system science, the natural decay of all systems over time is described as entropy. Once a system is
implemented, it enters the operations and support stage of its life cycle. During this stage, the need for changes
arises, ranging from simple error corrections to accommodating evolving technology or changing user
requirements. These changes often require reworking previously completed phases of the life cycle. Over time,
the cost of maintaining the current system may surpass the cost of developing a replacement system, indicating
that the system has become obsolete due to entropy.
However, system entropy can be managed. Modern tools and techniques enable the design of systems that can adapt and grow alongside evolving requirements, and many of those tools and techniques are covered later in this material. It is important to recognize that flexibility and adaptability should not be accidental but deliberately built into a system.
The principle of designing systems for growth and change emphasizes the importance of considering future
needs, technological advancements, and evolving user requirements during the system development process.
By incorporating flexibility and adaptability into system design, analysts can minimize the challenges associated
with system entropy and ensure that the system remains relevant and effective in the long run.
The eight principles discussed throughout the text serve as a foundation for any methodology, including the
FAST methodology presented in the book. These principles provide a framework for evaluating and guiding
the development of information systems, helping analysts create robust and successful solutions.

4.3 Phases of System Development


The FAST methodology consists of eight system development phases. These eight phases are shown in
Figure 4-3 and described below.
1. Survey Phase: In this phase, the project context, scope, budget, staffing, and schedule are established.
It involves gathering information about the project's requirements and constraints to provide a
foundation for the subsequent phases.
2. Study Phase: The study phase focuses on identifying and analyzing both the business and technical
problem domains. It involves examining specific problems, their causes, and their effects. This phase
helps in gaining a deeper understanding of the issues at hand.
3. Definition Phase: In the definition phase, the business requirements that should apply to any potential
technical solution are identified and analyzed. The emphasis is on understanding the needs and
expectations of the stakeholders and defining the desired outcomes.
4. Configuration Phase: The configuration phase focuses on identifying and analyzing candidate
technical solutions that have the potential to solve the identified problems and meet the business
requirements. This phase results in the development of a feasible application architecture.
5. Procurement Phase (Optional): The procurement phase, if included, involves identifying and
analyzing hardware and software products that will be purchased as part of the target solution. It
ensures that the necessary resources are acquired to support the implementation.

6. Design Phase: The design phase is responsible for specifying the technical requirements of the target
solution. It involves designing the system architecture, data structures, user interfaces, and other
components. In modern practices, the design phase often overlaps with the construction phase.
7. Construction Phase: The construction phase is where the actual solution or interim prototypes of the
solution are built and tested. It involves coding, integration, and testing of the system components.
The focus is on ensuring that the solution meets the defined requirements and functions as intended.
8. Delivery Phase: The delivery phase marks the transition of the solution into daily production. It
involves deploying the system, providing necessary user training and support, and ensuring its
successful operation. This phase concludes the development process and initiates the system's
operational life cycle.

Fig 4-3 system development phases

4.4 Computer Aided System Engineering (CASE)


Computer-aided software engineering (CASE)-sometimes called computer-aided systems engineering-
provides software tools to automate the methodologies we have just described to reduce the amount of
repetitive work the developer needs to do. CASE tools also facilitate the creation of clear
documentation and the coordination of team development efforts. Team members can share their
work easily by accessing each other's files to review or modify what has been done. Modest
productivity benefits can also be achieved if the tools are used properly. Many CASE tools are PC-
based, with powerful graphical capabilities.

CASE tools provide automated graphics facilities for producing charts and diagrams, screen and report
generators, data dictionaries, extensive reporting facilities, analysis and checking tools, code generators,
and documentation generators. In general, CASE tools try to increase productivity and quality by doing
the following:
1. Methodology and Design Discipline: CASE tools help enforce a standard development
methodology and design discipline. They provide templates, guidelines, and prompts to ensure
that developers follow a consistent approach and adhere to best practices.
2. Communication Enhancement: CASE tools facilitate better communication between users and
technical specialists. They provide a common platform for collaboration, allowing team members
to share and access project files, review work, and provide feedback. This improves coordination
and reduces miscommunication.
3. Design Repository: CASE tools offer a design repository where design components, such as
diagrams, models, and documentation, can be stored and organized. This central repository
allows for easy access, retrieval, and management of design artifacts, ensuring consistency and
traceability.
4. Automation of Analysis and Design: CASE tools automate tedious and error-prone tasks involved
in analysis and design. They provide features like code generation, diagramming tools, data
dictionaries, and reporting facilities, which help speed up the development process and minimize
errors.
5. Code Generation and Testing: Some CASE tools offer code generation capabilities, allowing
developers to automatically generate code based on the design specifications. This reduces
manual coding efforts and ensures consistency. Additionally, CASE tools may provide testing
features to automate testing processes and improve code quality.
6. Rollout Control: CASE tools assist in managing the rollout of the developed system. They help
track changes, manage version control, and support configuration management. This ensures that
changes are properly controlled and documented during the implementation phase.
Many CASE tools have been classified in terms of whether they support activities at the front end
or the back end of the systems development process. Front-end CASE tools focus on capturing analysis
and design information in the early stages of systems development, whereas back-end CASE tools
address coding, testing, and maintenance activities. Back-end tools help convert specifications
automatically into program code.
CASE tools automatically tie data elements to the processes where they are used. If a data flow is moved from one process to another on a data flow diagram, the affected entries in the data dictionary are altered automatically to reflect the change in the diagram.
design diagrams and specifications. CASE tools thus support iterative design by automating
revisions and changes and providing prototyping facilities. A CASE information repository stores all
the information defined by the analysts during the project. The repository includes data flow diagrams,
structure charts, entity-relationship diagrams, UML diagrams, data definitions, process specifications,
screen and report formats, notes and comments and test results.
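
As a purely illustrative sketch of this idea, the fragment below models a repository as a mapping from data elements to the processes that use them, so that moving a data flow between processes updates the dictionary automatically. The Repository class, its methods, and the element and process names are invented for illustration and do not describe the interface of any particular CASE product.

# Minimal sketch of a CASE-style repository that keeps the data dictionary
# consistent with diagram changes; purely illustrative, not a real CASE tool API.
class Repository:
    def __init__(self):
        self.usage = {}  # data element name -> set of processes that use it

    def add_usage(self, element, process):
        self.usage.setdefault(element, set()).add(process)

    def move_element(self, element, old_process, new_process):
        # When a diagram change moves a data flow between processes,
        # the corresponding dictionary entry is updated automatically.
        self.usage[element].discard(old_process)
        self.usage[element].add(new_process)

repo = Repository()
repo.add_usage("customer_id", "Validate Order")
repo.move_element("customer_id", "Validate Order", "Approve Credit")
print(repo.usage)  # {'customer_id': {'Approve Credit'}}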
To be used effectively, CASE tools require organizational discipline, management support, and an
organizational culture that appreciates the value of such tools (Limayem, Khalifa, and Chin,

2004). Every member of a development project must adhere to a common set of naming conventions
and standards as well as to a development methodology. The best CASE tools enforce common
methods and standards, which may discourage their use in situations where organizational discipline is
lacking.

4.5 Models of System Development


A software process model is an abstract representation of a software process. Each process model
represents a process from a particular perspective so only provides partial information about that
process. This section introduces a number of very general process models (sometimes called process
paradigms) and presents these from an architectural perspective. That is, we see the framework of the
process but not the details of specific activities.

These generic models are not definitive descriptions of software processes. Rather, they are useful
abstractions, which can be used to explain different approaches to software development. For many
large systems, of course, there is no single software process that is used. Different processes are used
to develop different parts of the system.

The process models discussed in this chapter are:

1) The waterfall model - This model represents the software development process as a sequential
flow of phases, where each phase follows the completion of the previous one. The phases
typically include requirements specification, software design, implementation, testing, and
maintenance. It emphasizes a linear and structured approach to development.
2) Evolutionary development- This approach involves iterative and incremental development.
It starts with the rapid development of an initial system based on abstract specifications. The
system is then refined with continuous customer input to meet their evolving needs. This model
allows for flexibility and adaptation during the development process.
3) Formal systems development- This model is based on producing a formal mathematical system
specification and using mathematical methods to transform it into a program. Verification of
system components is carried out through mathematical arguments that demonstrate their
conformity to the specification. This approach is less common but can be used in projects where
high assurance and correctness are critical.
4) Reuse-based development- This approach focuses on integrating reusable components into a
system rather than building everything from scratch. It assumes the availability of a significant
number of reusable components, and the development process centers around their selection,
adaptation, and integration. Reuse-based development can significantly speed up software
development by leveraging existing components.

Processes based on the waterfall model and evolutionary developments are widely used for practical
systems development. Formal system development has been successfully used in a number of projects
but processes based on this model are still only used in a few organizations. Informal reuse is common
in many processes but most organizations do not explicitly orient their software development processes

around reuse. However, this approach is likely to be very influential in the 21st century as assembling
systems from reusable components is essential for rapid software development.

The 'waterfall' model


The first published model of the software development process was derived from other engineering
processes (Royce, 1970). This is illustrated in Figure 4-4. Because of the cascade from one phase to
another, this model is known as the 'waterfall model' or software life cycle. The principal stages of
the model map onto fundamental development activities:
1. Requirements analysis and definition: The system's services, constraints and goals are
established by consultation with system users. They are then defined in detail and serve as a system
specification.
2. System and software design: The systems design process partitions the requirements to either
hardware or software systems. It establishes overall system architecture. Software design involves
identifying and describing the fundamental software system abstractions and their relationships.
3. Implementation and unit testing: During this stage, the software design is realized as a set of
programs or program units. Unit testing involves verifying that each unit meets its specification.
4. Integration and system testing: The individual program units or programs are integrated and tested
as a complete system to ensure that the software requirements have been met. After testing, the
software system is delivered to the customer.
5. Operation and maintenance: Normally (although not necessarily) this is the longest life-cycle phase.
The system is installed and put into practical use. Maintenance involves correcting errors which were
not discovered in earlier stages of the life cycle, improving the implementation of system units and
enhancing the system's services as new requirements are discovered.
In principle, the result of each phase is one or more documents that are approved ('signed off'). The following phase should not start until the previous phase has finished. In practice, these stages overlap and feed information to each other. During design, problems with requirements are identified; during coding, design problems are found; and so on. The software process is not a simple linear model but involves a sequence of iterations of the development activities.
Because of the costs of producing and approving documents, iterations are costly and involve significant
rework. Therefore, after a small number of iterations, it is normal to freeze parts of the development,
such as the specification, and to continue with the later development stages. Problems are left for later
resolution, ignored or are programmed around. This premature freezing of requirements may mean
that the system won't do what the user wants. It may also lead to badly structured systems as design
problems are circumvented by implementation tricks.
During the final life-cycle phase (operation and maintenance) the software is put into use. Errors and
omissions in the original software requirements are discovered. Program and design errors emerge and
the need for new functionality is identified. The system must therefore evolve to remain useful. Making
these changes (software maintenance) may involve repeating some or all previous process stages.
The problem with the waterfall model is its inflexible partitioning of the project into these distinct
stages. Commitments must be made at an early stage in the process and this means that it is difficult to
respond to changing customer requirements. Therefore, the waterfall model should only be used when
the requirements are well understood. However, the waterfall model reflects engineering practice.
Consequently, software processes based on this approach are still used for software development,
particularly when this is part of a larger systems engineering project.

Fig 4-4 Waterfall model


Spiral development Model
Figure 4-5 shows the spiral model of the software process that was originally proposed by Boehm
(1988). This model is now widely known. Rather than represent the software process as a sequence of
activities with some backtracking from one activity to another, the process is represented as a
spiral. Each loop in the spiral represents a phase of the software process. Thus, the innermost loop
might be concerned with system feasibility, the next loop with system requirements definition, the
next loop with system design and so on.
Each loop in the spiral is split into four sectors:
1. Objective setting: Specific objectives for that phase of the project are defined. Constraints on
the process and the product are identified and a detailed management plan is drawn up.
Project risks are identified. Alternative strategies depending on these risks may be planned.
2. Risk assessment and reduction: For each of the identified project risks, a detailed analysis is carried
out. Steps are taken to reduce the risk. For example, if there is a risk that the requirements are
inappropriate, a prototype system may be developed.
3. Development and validation: After risk evaluation, a development model for the system
is then chosen. For example, if user interface risks are dominant, an appropriate development model
might be evolutionary prototyping. If safety risks are the main consideration, development based on formal transformations may be the most appropriate, and so on. The waterfall model may be the most appropriate development model if the main identified risk is sub-system integration.

4. Planning: The project is reviewed and a decision is made whether to continue with a further loop of the spiral. If it is decided to continue, plans are drawn up for the next phase of the project.
After each spiral iteration, the development process moves on to the next spiral, following the same
activities but with an increased level of detail and functionality. The spiral model emphasizes risk
management throughout the development process, allowing early detection and mitigation of potential
problems.
The key benefits of the spiral model include its flexibility in accommodating changes, the ability to address
risks proactively, and the iterative nature that allows for continuous improvement. However, the spiral
model requires a good understanding of the project's objectives and risks, as well as effective
communication and collaboration among project stakeholders.
Overall, the spiral model is well-suited for large and complex software projects where risks and
uncertainties are high. It provides a systematic approach to software development while allowing for
flexibility and adaptation to changing requirements and conditions.
The important distinction between the spiral model and other software process models is the explicit
consideration of risk in the spiral model. Informally, risk is simply something which can go wrong. For
example, if the intention is to use a new programming language, a risk is that the available compilers are
unreliable or do not produce sufficiently efficient object code. Risks result in project problems such
as schedule and cost overrun, so risk minimization is a very important project management activity.
A cycle of the spiral begins by elaborating objectives such as performance, functionality, etc. Alternative
ways of achieving these objectives and the constraints imposed on each of these alternatives are then
enumerated. Each alternative is assessed against each objective. This usually results in the
identification of sources of project risk.
The next step is to evaluate these risks by activities such as more detailed analysis, prototyping,
simulation, etc. Once risks have been assessed, some development is carried out and this is followed by a
planning activity for the next phase of the process.
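
One common way to make this risk evaluation concrete is to rank risks by exposure, that is, the probability of the risk occurring multiplied by the loss if it does. The sketch below shows this ranking for one loop of the spiral; the risks, probabilities, and loss figures are hypothetical examples.

# Illustrative risk ranking for one loop of the spiral; all values are hypothetical.
risks = [
    {"risk": "Requirements are misunderstood", "probability": 0.4, "impact": 500_000},
    {"risk": "New compiler produces inefficient code", "probability": 0.2, "impact": 150_000},
    {"risk": "Key staff leave mid-project", "probability": 0.1, "impact": 300_000},
]

for r in risks:
    r["exposure"] = r["probability"] * r["impact"]  # expected loss if the risk is not treated

# The highest-exposure risks are addressed first, for example by prototyping or simulation.
for r in sorted(risks, key=lambda item: item["exposure"], reverse=True):
    print(f'{r["risk"]}: exposure {r["exposure"]:,.0f}')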
There are no fixed phases such as specification or design in the spiral model. The spiral model
encompasses other process models. Prototyping may be used in one spiral to resolve requirements
uncertainties and hence reduce risk. This may be followed by a conventional waterfall development.
Formal transformation may be used to develop those parts of the system with high security requirements.


Fig 4-5 Spiral Model


Agile Development Model
The concept of Agile development originated in the field of software development during the 1990s,
in response to dissatisfaction with the then-dominant Waterfall model. The Waterfall model, which
emphasizes a linear, sequential approach to project development, was criticized for being too rigid and
not accommodating changes easily.
In February 2001, 17 software developers met in Snowbird, Utah, to discuss these lightweight development methods. This gathering resulted in the Manifesto for Agile Software Development, a brief
document that expressed the values and principles of Agile development. It emphasized individuals and
interactions, working software, customer collaboration, and responding to change.
Over time, several Agile methodologies have emerged, including Scrum, Kanban, Lean, and Extreme
Programming (XP), each with its own specific practices but adhering to the core values and principles
of Agile.

Agile development has become a mainstream approach in software development and has been adapted
to other fields such as marketing and manufacturing. It is widely recognized for its flexibility,
responsiveness, and focus on delivering value to the customer. Organizations worldwide, from startups
to Fortune 500 companies, use Agile methodologies to manage their projects and deliver high-quality
products and services.
In recent years, the growth of distributed teams and remote work has given rise to tools and practices
that support Agile in these contexts. Agile practices continue to evolve and adapt to changing business
environments and technological advancements.

The generic process for Agile is given below. Note, however, that this process may vary slightly depending on the specific Agile methodology (such as Scrum or Kanban) being used.
Step 1: Project Planning
In this initial stage, the overall project scope, objectives, and potential team members are identified. The
outcome of this stage typically includes a high-level project timeline, a list of team members, and a
rough estimate of the resources required.
Step 2: Product Roadmap Creation
The team, along with stakeholders, identifies key product features and groups them into a product
backlog. These features are usually described in terms of user stories, which define what each feature
will do for the end user. These are then prioritized based on their importance and value to the project.
Step 3: Release Planning
The team decides which features from the product backlog will be included in each release. This is
typically based on the priority of the features, the team's capacity, and the overall project timeline.
Step 4: Sprint Planning
The team plans "sprints," which are short, time-boxed iterations (typically lasting between one to four
weeks) during which a set of features are developed. The team selects features from the top of the
product backlog to include in the sprint, based on the sprint's duration and the team's velocity (the
amount of work they can complete in a sprint).
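
A simple way to picture this selection step is to compare the team's velocity with the sizes of the items at the top of the product backlog, as in the hypothetical sketch below. The story names and point values are invented, and the simple fill-up rule is an assumption used for illustration rather than a prescribed Scrum practice.

# Hypothetical sprint planning: items are taken from the top of the prioritized
# backlog until the team's velocity (story points per sprint) is used up.
velocity = 20  # story points the team typically completes in one sprint

product_backlog = [            # already ordered by priority
    ("User registration", 8),
    ("Book search", 5),
    ("Borrow a book", 8),
    ("Overdue fine calculation", 3),
]

sprint_backlog, remaining = [], velocity
for story, points in product_backlog:
    if points <= remaining:    # skip any story that no longer fits in this sprint
        sprint_backlog.append(story)
        remaining -= points

print(sprint_backlog)  # ['User registration', 'Book search', 'Overdue fine calculation']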
Step 5: Daily Stand-Up or Scrum
During the sprint, the team holds a daily meeting (also known as a stand-up or scrum) to discuss their
progress, any blockers they are facing, and the plan for the next 24 hours.

Step 6: Sprint Review and Retrospective
At the end of the sprint, the team holds a review to demonstrate the completed features to stakeholders.
After the review, a retrospective meeting is held to discuss what went well, what could be improved, and
how to implement the improvements in the next sprint.
Step 7: Repeat the Cycle
The cycle repeats, starting from the sprint planning, for the next set of features in the backlog, until the
product is ready for final release.
This iterative process encourages flexibility, frequent feedback, and adjustment of the product as needed
based on that feedback. It allows the team to deliver work in small, manageable increments, which can be
evaluated and improved upon in subsequent iterations.
The Agile development methodology offers numerous benefits, making it a preferred choice for many
organizations. Agile encourages flexibility and adaptability, allowing teams to respond to changes and
new information rapidly and effectively. This iterative approach delivers working components of the
product early and frequently, which facilitates regular feedback and ensures that the development is
aligned with user needs and expectations. Agile promotes collaborative working relationships among
team members and with stakeholders, fostering a shared understanding of the project goals and progress.
Furthermore, Agile enhances project transparency and predictability, enabling better risk management
and planning. It also often leads to higher product quality, as issues are identified and addressed in smaller,
manageable chunks, rather than during a late-stage review in more traditional development models.
Overall, Agile can lead to increased customer satisfaction and more efficient and effective product
development.

4.6 Integration and System Testing


Integration and system testing are levels of software testing that ensure the system is tested both at the integration level and as a complete system before the product is released. Software testing follows a strict set of rules and guidelines to make sure each individual part of the software is thoroughly checked before it is approved, which ensures that there are no errors and that the software runs as intended.
Integration and system testing are mainly carried out by a team that focuses on the software testing phase of the system development life cycle.
In software testing, each testing level builds on the previous level, so it is important that the testing is done in the correct order and that information from one level is passed on to the next.
Integration
Integration testing in the software testing model comes before system testing and after the unit testing
has been done.
Integration testing works by taking the individual modules that have been through the unit testing phase and integrating them into groups. The integration testing phase makes sure that any problems, for example errors or bugs, caused

due to the integration of the modules are eliminated. Integration testing does not deal with the integration
of the whole system but deals with the integration of a process in the system.
In the integration testing stage, three artifacts are created to ensure that the integration of the modules is successful and that the integrated whole runs correctly: a test plan, test cases, and test data. Test data is normally used within test cases, but each type is described below.
Integration Test Plan
An integration test plan outlines the approach, objectives, scope, and activities to be conducted during
integration testing. It serves as a guide for the testing team and provides a roadmap for executing
integration testing effectively. Here are some key components that are typically included in an integration
test plan:
1. Introduction: This section provides an overview of the test plan, including its purpose, objectives, and
the system or application being tested.
2. Test Objectives: Clearly state the objectives of the integration testing phase. This may include
verifying the correct integration of modules, validating data flow between components, and ensuring
compatibility and interoperability.
3. Test Scope: Define the scope of the integration testing effort. Specify which modules or components
will be included in the integration tests and any specific functionality or scenarios that will be covered.
4. Test Environment: Describe the test environment, including hardware, software, network
configurations, and any additional tools or resources required for integration testing.
5. Test Approach: Outline the approach and strategy for performing integration testing. This may include
top-down, bottom-up, or hybrid approaches, as well as any specific sequencing or prioritization of
integration activities.
6. Test Deliverables: Identify the deliverables that will be produced during integration testing, such as
test cases, test data, test scripts, and test reports.
7. Test Schedule: Provide a timeline or schedule for the integration testing activities, including key
milestones, resource allocation, and dependencies on other testing phases or activities.
8. Test Entry and Exit Criteria: Define the conditions that must be met to initiate integration testing
(entry criteria) and the criteria for completing the integration testing phase (exit criteria).
9. Test Cases: Specify the test cases that will be executed during integration testing. These should cover
different integration scenarios, data flows, and interactions between modules or components.
10. Test Data: Describe the test data that will be used for integration testing, including sample data sets,
boundary values, and any specific data combinations required for testing.
11. Test Execution: Explain how the integration tests will be executed, including the roles and
responsibilities of the testing team, test environment setup, and test execution procedures.
12. Test Risks and Mitigation: Identify any risks or challenges associated with integration testing and
propose mitigation strategies to address them.
13. Test Reporting: Define the format and frequency of test progress reporting, including the types of test
reports to be generated and the stakeholders who will receive them.
14. Test Completion Criteria: Outline the criteria that must be met to consider the integration testing phase
complete, such as achieving a certain level of test coverage, resolving critical defects, or obtaining
stakeholder approval.

15. Test Responsibilities: Clearly define the roles and responsibilities of team members involved in
integration testing, including testers, developers, business analysts, and project stakeholders.
It's important to tailor the integration test plan to the specific project and organization's needs, considering
factors such as the complexity of the system, project timelines, and available resources. The test plan
should be reviewed and approved by relevant stakeholders before commencing integration testing.
Integration Test Cases
Test cases are created to make sure that the integrated modules produce the expected output and work exactly as they are supposed to. They are simply a way to spot any errors or bugs that might have been introduced in the integration phase. The tester works through the program and documents the results using the test cases, which exercise all inputs and outputs of the integrated modules.
Below is a simple set of test cases, prepared for a small library program created as a college exercise; test data is also included in the test cases to show exactly how they work.

| Test Case ID | Test Scenario | Test Steps | Expected Result | Actual Result | Pass/Fail |
|--------------|---------------|------------|-----------------|---------------|-----------|
| IT-01 | Integration of User Registration with Database | 1. Enter user registration details | User registration details should be stored in the database | Details were not stored | Fail |
| IT-01 | Integration of User Registration with Database | 2. Verify user registration in the database | User registration details should be retrievable from the database | Details were retrievable from the database | Pass |
| IT-02 | Integration of Book Search with Database | 1. Enter book search criteria | Book search should return relevant books from the database | Relevant books were returned from the database | Pass |
| IT-02 | Integration of Book Search with Database | 2. Verify the search results | The returned books should match the search criteria | The returned books matched the search criteria | Pass |
| IT-03 | Integration of Borrowing System with Database | 1. Select a book for borrowing | Book availability should be updated in the database | Availability was not updated | Fail |
| IT-03 | Integration of Borrowing System with Database | 2. Verify the borrower's record | The borrower's record should reflect the borrowed book | Record was only partially updated | Fail |
| IT-04 | Integration of Return System with Database | 1. Mark a book as returned | Book availability and borrower's record should be updated in the database | Availability and borrower's record were updated | Pass |
| IT-04 | Integration of Return System with Database | 2. Verify the return status | Book should be marked as returned and available | Book was not marked as returned | Fail |
| IT-05 | Integration of Fine Calculation with User Account | 1. Simulate overdue books | Fine calculation should be triggered for overdue books | Fine calculation was triggered for overdue books | Pass |
| IT-05 | Integration of Fine Calculation with User Account | 2. Verify the fine calculation | Fine amount should be accurately calculated and added to the user account | Fine amount was accurately calculated and added | Pass |

For a large application or program, many test cases may need to be created to test separate sections of the program. These test cases are normally gathered together into test suites, where a test suite is simply a set of related test cases.
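
In practice, a test case such as IT-01 above is often automated with a testing framework. The sketch below uses Python's built-in unittest module against a hypothetical register_user function and an in-memory stand-in for the database; the function names and behavior are assumptions made for illustration, not part of any real system.

# Hypothetical automated version of test case IT-01 (user registration with database).
import unittest

class FakeDatabase:
    """In-memory stand-in for the real database used during integration testing."""
    def __init__(self):
        self.users = {}

def register_user(db, username, email):
    # Module under test: stores the registration details in the database.
    db.users[username] = {"email": email}

class TestUserRegistrationIntegration(unittest.TestCase):
    def test_registration_is_stored_and_retrievable(self):
        db = FakeDatabase()
        register_user(db, "hari", "hari@example.com")        # step 1: enter details
        self.assertIn("hari", db.users)                       # step 2: verify storage
        self.assertEqual(db.users["hari"]["email"], "hari@example.com")

if __name__ == "__main__":
    unittest.main()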

Integration Test Data
Test data is simply the data used to test the actual program or the integrated modules. It is normally used within a test case to check the inputs and the expected outputs.
An example of test data is included in the test case example shown in the previous section.
Different types of Integration Testing
There are several different types of integration testing that can be conducted; the main ones are described below.
Big Bang
In this approach, most or all of the modules are integrated together at once to form a nearly complete system. This is very similar to system testing, as a whole system is essentially assembled before testing begins.
Advantages:
Simplicity for Small Systems: The Big Bang method can be ideal for small systems with few modules,
as it can allow for a straightforward, all-at-once integration and testing process.
Disadvantages:
 Delayed Testing: The Big Bang approach requires all modules to be completed and integrated
before testing can commence. This can cause significant delays in the development process.
 Late Discovery of Defects: Since the testing only happens after the integration of all modules,
defects are discovered late in the development cycle, which can make them more costly and time-
consuming to fix.
 Difficulty in Identifying Issues: When an issue is discovered during Big Bang testing, it can be
challenging to pinpoint the source of the problem since all modules are integrated at once.
 Incomplete Test Coverage Concerns: Given the late-stage integration and testing, it might be
challenging to ensure all aspects of the system have been thoroughly tested before the product
release.
Today, many software development teams prefer incremental integration testing methods (like Top-Down
or Bottom-Up integration testing) over the Big Bang approach, due to their advantages in early defect
detection and ease of isolating issues. However, the choice of method often depends on the specific context
and requirements of the project.
Top-Down testing
Top-Down testing is an approach where testing starts with the highest-level components and gradually
moves towards the lower-level components. This method usually involves the creation of "stubs", or
dummy modules, to stand in for lower-level modules that have yet to be integrated. The highest-level components are tested first, and the work then proceeds step by step downwards to the lower-level components. Top-down testing requires the testing team to separate what is most important from what is least important, so that the most important modules are worked on first. The top-down approach

is similar to traversing a binary tree: you start by integrating the top level, then slowly work your way down the tree, integrating all the components at each level.
The advantage of this way of testing is that if a prototype is released or demonstrated, most of the main functionality will already be working. The code is also easier to maintain, and there is better control over errors, so most errors are removed before moving to the next stage of testing. This method can be particularly effective for complex systems, where understanding the overall functionality early in the process is beneficial.
The disadvantage is that it is hard to test the lower-level components using test data, and the lower-level modules may not be tested as thoroughly as the upper-level modules. The use of stubs in the early stages can also lead to oversights and a false sense of security, as stubs might not perfectly mimic the behavior of the actual modules they replace.
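
A stub can be as simple as a placeholder function that returns a fixed, predictable value so that a higher-level module can be exercised before the real lower-level module exists. The sketch below is illustrative only; the function names and the fixed return value are invented.

# Illustrative top-down integration: the high-level report module is tested first,
# with a stub standing in for the not-yet-integrated fine-calculation module.
def calculate_fine_stub(days_overdue):
    # Stub: returns a fixed value instead of performing the real calculation.
    return 100

def overdue_report(borrowers, calculate_fine=calculate_fine_stub):
    # High-level module under test; the fine calculation is passed in so the
    # stub can later be replaced by the real module without changing this code.
    return {name: calculate_fine(days) for name, days in borrowers.items()}

print(overdue_report({"hari": 3, "sita": 10}))  # {'hari': 100, 'sita': 100}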
Bottom-up testing
In Bottom-Up testing, the process starts with the lower-level components and gradually integrates and tests upwards to the higher-level components. This approach often involves creating "drivers", or test harnesses, to simulate the behavior of higher-level modules not yet integrated. The components are separated by level of importance, the least important modules are worked on first, and you then slowly work your way up, integrating the components at each level before moving upwards.
The advantage of this method is that the code can be maintained more easily and there is a clearer structure for how to proceed. Because testing begins at the lowest level of the system, issues in these fundamental components can be identified and resolved early.
The disadvantage is that a working prototype cannot be seen until nearly all of the program has been completed, which may take a long time. Many errors relating to the GUI and higher-level programming may also surface late in the process. In addition, creating drivers to test lower-level modules can be time-consuming, and the drivers might not accurately represent the behavior of the actual higher-level modules they stand in for.
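
Conversely, a driver is a small piece of throwaway code that calls a completed low-level module with test inputs before the higher-level modules that will eventually call it exist. The sketch below is illustrative; the fine-calculation module and its test values are invented.

# Illustrative bottom-up integration: a throwaway driver exercises the low-level
# fine-calculation module before any higher-level caller has been integrated.
def calculate_fine(days_overdue, rate_per_day=10):
    # Low-level module under test.
    return max(0, days_overdue) * rate_per_day

def driver():
    # Driver: supplies test inputs and checks outputs in place of the real caller.
    test_cases = [(0, 0), (3, 30), (10, 100)]
    for days, expected in test_cases:
        result = calculate_fine(days)
        assert result == expected
        print(f"{days} days overdue -> fine {result} (expected {expected})")

driver()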
Both Top-Down and Bottom-Up integration testing are vital techniques in software development, each
with its unique advantages and disadvantages. The choice between the two typically depends on the
specific project requirements, available resources, and the complexity of the system under development.
Sometimes, a hybrid approach known as "Sandwich" or "Mixed" testing, which combines elements of
both Top-Down and Bottom-Up testing, is used to capitalize on the advantages of both methods.
Integration - Overall
Integration testing is best used as an iterative process, as this saves time in the long run and helps to keep the project within budget. Used in this way, it aligns well with Agile and other iterative development methodologies.
Iterative integration allows for continuous feedback and adjustments throughout the development cycle.
With the active involvement of clients or stakeholders, changes can be identified and incorporated early
in the process, minimizing the need for extensive modifications during the integration phase. This can
contribute to improved efficiency, cost savings, and better alignment with the client's expectations.

Among the various approaches to integration testing, top-down integration can often be the most efficient.
It starts with high-level modules and progressively incorporates and tests lower-level modules. This means
that the system's main functionality can be established and tested early in the process, providing a faster
path to a working prototype. However, it's worth noting that issues encountered during testing, such as
faults or errors, will need to be documented, fixed, and re-tested, which could introduce delays.
Like many other types of testing, integration testing often employs black box testing techniques. This
means that the internal workings of the modules being tested are not considered; instead, the focus is on
ensuring that the integrated system produces the correct outputs when given certain inputs.
System Testing
System testing is a level of testing performed on a complete and integrated software system to evaluate its
compliance with specified requirements. It focuses on validating the system as a whole, including its
functionality, performance, reliability, security, and other non-functional aspects.
Key aspects of system testing include:
 Testing the entire system, including all integrated components, modules, and interfaces.
 Verifying that the system meets the specified functional and non-functional requirements.
 Evaluating system behavior under normal and exceptional conditions.
 Conducting performance testing to assess system response times, throughput, and scalability.
 Testing security features and ensuring data integrity and confidentiality.
 Validating system compatibility with the intended environment and infrastructure.
 Conducting usability testing to assess user-friendliness and ease of use.
System testing is typically performed by a dedicated testing team, and it may involve different testing
techniques such as functional testing, performance testing, security testing, regression testing, and more.
The goal is to ensure that the system functions correctly and meets the expectations of its users and
stakeholders.
In the system testing process, the system is checked not only for errors but also to see whether it does what was intended, whether the functionality is complete, and whether it is what the end user expected.
There are various testing artifacts that need to be produced again for system testing, which include:
• Test Plan
• Test Case
• Test Data
If the integration stage was done thoroughly, most of the test plan and test cases will already exist, and only straightforward testing is needed to confirm there are no bugs, because this will be the final product.
As in the integration stage, the artifacts above need to be produced again, because all modules have now been integrated into one system, and we must check that the combined system runs correctly and produces no errors.

System Test Plan
The test plan contains information similar to the integration test plan, but in more detail, because this time the focus is on the whole system rather than individual sections.
System Test Case
The test cases also have to change, so that the whole system is tested again to confirm that no errors appear after combining the modules into a single system. The test cases include test data to check the expected outputs.
Different types of System Testing
There are many types of system testing; some of the important types that are performed regularly are discussed below.
Functional Testing: This type of testing focuses on verifying the functional requirements of the system.
It involves testing the system's behavior and functionality against the specified requirements. Functional
testing ensures that the system performs the intended functions correctly and meets the user's expectations.
Performance Testing: Performance testing evaluates the system's performance and responsiveness under
various load conditions. It includes testing the system's scalability, reliability, speed, resource usage, and
responsiveness to ensure it can handle the expected workload and perform optimally.
Compatibility Testing: Compatibility testing validates the system's compatibility with different
platforms, devices, browsers, operating systems, and other software components. It ensures that the system
functions correctly across various environments and configurations, providing a consistent user
experience.
Regression Testing: Regression testing is performed to ensure that system changes or enhancements do
not introduce new defects or impact existing functionalities. It involves retesting the previously tested
functionalities to ensure they still work correctly after modifications or additions to the system.
Acceptance Testing: Acceptance testing is conducted to validate whether the system meets the end-user’s
requirements and is ready for deployment. It involves testing the system against user acceptance criteria
to ensure it meets the specified business needs and performs as expected.
Usability Testing: This concerns how well the user can access the different features of the system and how easy it is to use. Usability testing assesses the system's user-friendliness and ease of use. It focuses on
testing the system's user interface, navigation, accessibility, and overall user experience. Usability testing
ensures that the system is intuitive, easy to understand, and efficient for end-users.
GUI Software Testing: This checks that the program looks as intended graphically and that the GUI works as intended.
Security Testing: This checks that important information is secure and that access restrictions work as intended. Security testing assesses the system's vulnerability to potential security
threats and risks. It involves testing the system's ability to protect data, detect and prevent unauthorized
access, and ensure the integrity and confidentiality of information. Security testing helps identify and
address security vulnerabilities to protect the system from potential attacks.

Accessibility Testing: This checks how easily various users, including users with disabilities, can use the system.
Reliability Testing: This checks that the system works for long periods of time and does not constantly crash. Reliability testing evaluates the system's ability to perform consistently and reliably over a
specified period. It includes testing the system's stability, fault tolerance, error handling, and recovery
mechanisms. Reliability testing aims to ensure that the system operates without failures or crashes under
normal and abnormal conditions.
Below is a fuller list of the different types of system testing that are available: GUI software testing, usability testing, performance testing, compatibility testing, error handling testing, load testing, volume testing, stress testing, user help testing, security testing, scalability testing, capacity testing, sanity testing, smoke testing, exploratory testing, ad hoc testing, regression testing, reliability testing, recovery testing, installation testing, idempotency testing, maintenance testing, and accessibility testing.
System Testing - Overall
System testing is carried out by a group of people who deal only with the testing side of the process, so this stage uses black box testing: the testing team deals mainly with the outputs of the system and with documenting any problems revealed by those outputs.
As with integration testing, the time taken to complete this stage depends on how many errors or bugs appear during testing; if many are found, this stage can take a long time to complete.
System testing is important because, in the long run, it helps the project stay within budget and meet its deadlines.
The well-documented testing processes contribute to higher software quality, reduced defects, and
improved customer satisfaction. They also enhance the company's reputation by demonstrating a
commitment to delivering reliable and robust software solutions.

4.7 System Maintenance:


System maintenance refers to the ongoing activities and processes performed to keep a software system
operational, up-to-date, and in optimal working condition after its initial development and deployment. It
involves managing, monitoring, and enhancing the system to ensure its reliability, performance, security,
and usability. System maintenance typically encompasses the following key activities:
Bug Fixes: Addressing and resolving software defects or bugs that are identified during the system's usage.
This may involve analyzing the root cause of the issue, developing patches or updates, and deploying them
to fix the problem.

Updates and Upgrades: Keeping the system up-to-date by applying software updates, patches, and
security fixes released by vendors or developers. Upgrading the system to newer versions or technologies
to leverage new features, improve performance, or address compatibility issues.
Performance Optimization: Analyzing system performance, identifying bottlenecks or areas of
improvement, and implementing optimizations to enhance the system's efficiency, responsiveness, and
scalability. This may include database tuning, code refactoring, or infrastructure enhancements.
Security Enhancements: Applying security measures to protect the system from potential threats or
vulnerabilities. This involves monitoring security risks, implementing security patches, conducting regular
security audits, and enforcing best practices to safeguard sensitive data and prevent unauthorized access.
User Support and Training: Providing ongoing user support, troubleshooting assistance, and training to
system users. Addressing user inquiries, resolving issues, and ensuring users have the necessary
knowledge and skills to effectively utilize the system.
Data Management: Managing and maintaining the system's data, including backups, data integrity
checks, data archiving, and disaster recovery planning. Regularly backing up critical data to prevent data
loss and implementing data retention policies in compliance with regulatory requirements.
Documentation Updates: Keeping system documentation up-to-date, including user manuals, technical
specifications, configuration guides, and operational procedures. Documenting any changes, updates, or
enhancements made to the system for future reference and knowledge transfer.
System Monitoring and Reporting: Monitoring the system's performance, availability, and usage
patterns through various monitoring tools and generating reports to track system metrics, identify trends,
and address potential issues proactively.
There are three different types of software maintenance:
1. Maintenance to repair software faults: Coding errors are usually relatively cheap to correct; design errors are more expensive, as they may involve rewriting several program components. Requirements errors are the most expensive to repair because of the extensive system redesign that may be necessary.
2. Maintenance to adapt the software to a different operating environment: This type of maintenance is required when some aspect of the system's environment, such as the hardware, the platform operating system, or other software, changes. The application system must be modified to cope with these environmental changes.
3. Maintenance to add to or modify the system's functionality: This type of maintenance is necessary when the system requirements change in response to organizational or business change. The scale of the changes required to the software is often much greater than for the other types of maintenance.
In practice, there isn't a clear-cut distinction between these different types of maintenance. Software
faults may be revealed because a system has been used in an unanticipated way and the best way to
repair these faults may be to add new functionality to help users with the system. When adapting the
software to a new environment, functionality may be added to take advantage of new facilities supported
by the environment. Adding new functionality to a system may be necessary because faults have

changed the usage patterns of the system and a side-effect of the new functionality is to remove the
faults from the software.

While these different types of maintenance are generally recognised, different people sometimes give them different names. Corrective maintenance is universally used to refer to maintenance for fault repair. However, adaptive maintenance sometimes means adapting to a new environment and sometimes means adapting the software to new requirements. Perfective maintenance sometimes means perfecting the software by implementing new requirements and, in other cases, maintaining the functionality of the system while improving its structure and its performance.
It is difficult to find up-to-date figures for the relative effort devoted to the different types of maintenance. A rather old survey by Lientz and Swanson (1980) discovered that about 65 per cent of maintenance was concerned with implementing new requirements, 18 per cent with changing the system to adapt it to a new operating environment, and 17 per cent with correcting system faults. Similar figures were reported by Nosek and Palvia (1990) ten years later. For custom systems, this distribution of costs is still roughly correct.
From these figures we can see that repairing system faults is not the most expensive maintenance activity. Rather, evolving the system to cope with new environments and new or changed requirements consumes most maintenance effort.
Maintenance is therefore a natural continuation of the system development process, with associated specification, design, implementation and testing activities. A spiral model, such as the one described earlier in this chapter, is therefore a better representation of the software process than representations such as the waterfall model, where maintenance is represented as a separate process activity.


The costs of system maintenance represent a large proportion of the budget of most organizations that use
software systems. In the 1980s, Lientz and Swanson found that large organizations devoted at least 50 per
cent of their total programming effort to evolving existing systems.
McKee (1984) found a similar distribution of maintenance effort across the different types of maintenance
but suggests that the amount of effort spent on maintenance is between 65 and 75 per cent of total
available effort. As organizations have replaced old systems with off-the-shelf systems, such as enterprise
resource planning systems. this figure may not have come down. Although the details may be uncertain,
we do know that software change remains a major cost for all organizations.
Maintenance costs as a proportion of development costs vary from one application domain to another.
For business application systems, a study by Guimaraes (1983) showed that maintenance costs
were broadly comparable with system development Costs. For embedded real-time systems,
maintenance costs may be up to four times higher than development costs. The high reliability and
performance requirements of these systems may require modules to be tightly linked and hence difficult
to change.
It is usually cost-effective to invest effort when designing and implementing a system to reduce
maintenance costs. It is more expensive to add functionality after delivery because of the need to
understand the existing system and analyse the impact of system changes. Therefore, any work done
during development to reduce the cost of this analysis is likely to reduce maintenance costs. Good software
engineering techniques such as precise specification, the use of object-oriented development and
configuration management all contribute to maintenance cost reduction. Overall lifetime costs may
decrease as more effort is expended during system development to produce a maintainable system.
Because of the potential reduction in the costs of understanding, analysis and testing, there is a
significant multiplier effect when the system is developed for maintainability. For example, if extra
development costs of $25,000 are invested in making a system more maintainable, this may result in a
saving of $100,000 in maintenance costs over the lifetime of the system.

This assumes that a percentage increase in development costs results in a comparable percentage decrease
in overall system costs. One important reason why maintenance costs are high is that it is more
expensive to add functionality after a system is in operation than it is to implement the same
functionality during development. The key factors that distinguish development and maintenance, and
which lead to higher maintenance costs, are:
1. Team stability After the delivery of a system, the development team is often disbanded, and new
individuals or teams are assigned to system maintenance. These new members may lack
understanding of the system and the design decisions made during development. Consequently, a
significant portion of the maintenance effort is dedicated to comprehending the existing system
before implementing changes.
2. Contractual responsibility Maintenance contracts are typically separate from system development
contracts and may be assigned to different companies. This separation, combined with the lack of
team stability, means that there is often no incentive for the development team to prioritize writing
the software in a way that facilitates easy changes. Cutting corners during development to save
effort may increase maintenance costs in the long run.
3. Staff skills Maintenance staff members are often relatively inexperienced and may be unfamiliar
with the specific application domain. Maintenance is sometimes considered a less skilled process
than system development and is frequently assigned to junior staff members. Additionally, legacy
systems may be written in outdated programming languages, requiring maintenance staff to learn
these languages to maintain the system.
4. Program age and structure As programs age, their structure tends to degrade due to multiple
changes, making them more challenging to understand and modify. Many legacy systems were
developed without modern software engineering techniques and may lack proper structure.
Furthermore, these systems were often optimized for efficiency rather than understandability,
adding complexity to maintenance efforts.
The first three of these problems stem from the fact that many organisations still make a
distinction between system development and maintenance. Maintenance is seen as a second-class
activity and there is no incentive to spend money during development to reduce the costs of system
change. The only long-term solution to this problem is to accept that systems rarely have a defined
lifetime but continue in use, in some form, for an indefinite period.
Rather than develop systems, maintain them until further maintenance is impossible and then replace
them, we have to adopt the notion of evolutionary systems. Evolutionary systems are systems that are
designed to evolve and change in response to new demands. They can be created from existing
legacy systems by improving their structure through re-engineering.
The last issue in the list above, namely the problem of degraded system structure, is, in some ways,
the easiest problem to address. Re-engineering techniques may be applied to improve the system structure
and understandability. If appropriate, architectural transformation can adapt the system to new hardware.
Preventative maintenance work (essentially incremental re-engineering) may be carried out to improve
the system and make it easier to change.

4.8 Project Management Tools:


Context Diagram
A context diagram is a project management tool that visually represents the scope and boundaries of a
system or project. It provides a high-level view of the system and its interaction with external entities. The
main purpose of a context diagram is to depict the system's context in relation to its external environment.
The Context Diagram shows the system under consideration as a single high-level process and then shows
the relationship that the system has with other external entities (systems, organizational groups, external
data stores, etc.).
Another name for a Context Diagram is a Context-Level Data-Flow Diagram or a Level-0 Data Flow
Diagram. Since a Context Diagram is a specialized version of Data-Flow Diagram, understanding a bit
about Data-Flow Diagrams can be helpful.
A Data-Flow Diagram (DFD) is a graphical visualization of the movement of data through an information
system. DFDs are one of the three essential components of the structured-systems analysis and design
method (SSADM). A DFD is process-centric and depicts four main components.
• Processes (circle)
• External Entities (rectangle)
• Data Stores (two horizontal, parallel lines or sometimes an ellipse)
• Data Flows (curved or straight line with arrowhead indicating flow direction)
Each DFD may show a number of processes with data flowing into and out of each process. If there is a
need to show more detail within a particular process, the process is decomposed into a number of smaller
processes in a lower-level DFD. In this way, the Context Diagram or Context-
Level DFD is labeled a "Level-0 DFD" while the next level of decomposition is labeled a
"Level-1 DFD", the next is labeled a "Level-2 DFD", and so on.
Context Diagrams and Data-Flow Diagrams were created for systems analysis and design. But like
many analysis tools they have been leveraged for other purposes. For example, they can also be leveraged
to capture and communicate the interactions and flow of data between business processes. So, they don't
have to be restricted to systems analysis.

A sample Context Diagram is shown here.

A Context Diagram (and a DFD for that matter) provides no information about the timing, sequencing,
or synchronization of processes, such as which processes occur in sequence or in parallel. Therefore, it
should not be confused with a flowchart or process flow, which can show these things.
Some of the benefits of a Context Diagram are:
• Shows the scope and boundaries of a system at a glance including the other systems that
interface with it
• No technical knowledge is assumed or required to understand the diagram
• Easy to draw and amend due to its limited notation
• Easy to expand by adding different levels of DFDs
• Can benefit a wide audience including stakeholders, business analysts, data analysts, and
developers
Work Breakdown Structure:
A work breakdown structure (WBS) is a chart in which the critical work elements, called tasks, of a
project are illustrated to portray their relationships to each other and to the project as a whole. The
graphical nature of the WBS can help a project manager predict outcomes based on various scenarios,
which can ensure that optimum decisions are made about whether or not to adopt suggested procedures
or changes.
When creating a WBS, the project manager defines the key objectives first and then identifies the
tasks required to reach those goals. A WBS takes the form of a tree diagram with the "trunk" at the top
and the "branches" below. The primary requirement or objective is shown at the top, with increasingly
specific details shown as the observer reads down.

When completed, a well-structured WBS resembles a flowchart in which all elements are logically
connected, redundancy is avoided and no critical elements are left out. Elements can be rendered as plain
text or as text within boxes. The elements at the bottom of the diagram represent tasks small
enough to be easily understood and carried out. Interactions are shown as lines connecting the elements.
A change in one of the critical elements may affect one or more of the others. If necessary, these lines
can include arrowheads to indicate time progression or cause-and-effect.
A well-organized, detailed WBS can assist key personnel in the effective allocation of resources, project
budgeting, procurement management, scheduling, quality assurance, quality control, risk management,
product delivery and service oriented management.
Benefits of using a Work Breakdown Structure include:
Clarity and organization: The WBS provides a clear and organized structure of the project's scope,
making it easier to understand and manage.
Scope control: By breaking down the project into smaller components, the WBS helps in defining and
controlling the project's scope, ensuring that all necessary work is identified and accounted for.
Resource allocation: The WBS enables effective resource allocation by assigning specific work packages
to team members or resources, allowing for better planning and utilization of resources.
Task dependencies: The hierarchical structure of the WBS helps in identifying and understanding the
dependencies between tasks and work packages, facilitating proper sequencing and scheduling of
activities.
Estimation and tracking: The WBS provides a basis for estimating the effort, time, and resources required
for each work package. It also serves as a reference for tracking progress and monitoring the completion
of each component.
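To make the tree structure concrete, here is a minimal sketch in Python; the project, phases, and work packages are hypothetical examples chosen only for illustration. It stores the WBS as a nested structure and prints it with outline numbering:

```python
# A minimal sketch of a work breakdown structure (WBS) held as a nested tree.
# The project and its work packages are hypothetical examples.

wbs = {
    "Payroll System Project": [
        {"Requirements": ["Interview users", "Document business rules"]},
        {"Design": ["Data model", "Report layouts"]},
        {"Construction": ["Code payroll calculation", "Code reports"]},
        {"Testing": ["Unit testing", "User acceptance testing"]},
    ]
}

def print_wbs(node, prefix="1"):
    """Print WBS elements recursively with outline numbering (1, 1.1, 1.1.1, ...)."""
    if isinstance(node, dict):
        for name, children in node.items():
            print(f"{prefix} {name}")
            for i, child in enumerate(children, start=1):
                print_wbs(child, f"{prefix}.{i}")
    else:
        # a leaf work package given as a plain string
        print(f"{prefix} {node}")

print_wbs(wbs)
```

The bottom-level strings play the role of work packages small enough to be estimated, assigned, and tracked individually.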

Gantt Chart
A Gantt chart is a popular project management tool used to visualize the schedule of tasks and activities
in a project. It provides a graphical representation of the project timeline, showing the start and end dates
of each task and how they relate to each other.
The Gantt chart consists of horizontal bars, where each bar represents a specific task or activity. The length
of the bar represents the duration of the task, and its position along the timeline indicates when it starts
and when it ends. Dependencies between tasks can be represented by linking the bars with arrows to show
the sequence or relationship between them.
This allows you to see at a glance:
• What the various activities are
• When each activity begins and ends
• How long each activity is scheduled to last
• Where activities overlap with other activities, and by how much
• The start and end date of the whole project
To summarize, a Gantt chart shows you what has to be done (the activities) and when
(the schedule).

A simple Gantt chart
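A chart like this can also be roughed out in a few lines of code. The sketch below is only an illustration: the activities, start weeks, and durations are hypothetical, and a real schedule would normally be drawn with a project management or charting tool.

```python
# A minimal text-based Gantt chart; one 3-character cell per week.
# The activities, start weeks, and durations are hypothetical examples.

activities = [
    # (activity name, start week, duration in weeks)
    ("Requirements", 0, 3),
    ("Design",       2, 4),
    ("Construction", 5, 6),
    ("Testing",      9, 3),
]

project_end = max(start + duration for _, start, duration in activities)

header = "".join(f"{week:>3}" for week in range(1, project_end + 1))
print(f"{'Activity':<14}{header}")
for name, start, duration in activities:
    bar = "   " * start + "###" * duration + "   " * (project_end - start - duration)
    print(f"{name:<14}{bar}")

print(f"\nProject duration: {project_end} weeks")
```

Each row of '#' characters is a bar: its position shows when the activity starts, its length shows how long it lasts, and overlaps between activities are visible at a glance.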


PERT and CPM
PERT (Program Evaluation and Review Technique) and CPM (Critical Path Method) are two project
management techniques used to plan, schedule, and manage complex projects. Both methods are based on
network diagrams that represent the project's tasks and their dependencies.
PERT focuses on estimating the time required to complete each task in a project. It uses three time
estimates for each task: the optimistic time (the minimum time required), the pessimistic time (the
maximum time required), and the most likely time (the best estimate). By considering these estimates,
PERT calculates the expected time for each task using a weighted average. PERT also considers the
dependencies between tasks to determine the critical path, which is the longest sequence of dependent
tasks that determines the project's overall duration. PERT is useful when there is uncertainty in task
durations and allows for probabilistic analysis of project timelines.
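The weighted average PERT uses is normally expected time = (optimistic + 4 × most likely + pessimistic) / 6, with the variance taken as ((pessimistic − optimistic) / 6)². A minimal sketch, assuming a few hypothetical tasks and estimates:

```python
# PERT three-point estimation: expected time and variance for each task.
# The task names and estimates are hypothetical examples.

tasks = {
    # task: (optimistic, most likely, pessimistic) durations in weeks
    "Gather requirements": (2, 4, 8),
    "Design database":     (1, 2, 3),
    "Build interface":     (3, 5, 10),
}

for task, (o, m, p) in tasks.items():
    expected = (o + 4 * m + p) / 6      # weighted average of the three estimates
    variance = ((p - o) / 6) ** 2       # a measure of schedule uncertainty
    print(f"{task:<22} expected = {expected:.2f} weeks, variance = {variance:.2f}")
```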
On the other hand, CPM is focused on identifying the critical path and determining the minimum time
required to complete a project. It uses deterministic estimates for task durations, assuming that the most
likely time estimate is accurate. CPM constructs a network diagram to visualize the project's tasks and
their dependencies. By calculating the earliest and latest start and finish times for each task, CPM identifies
the critical path, which represents the sequence of tasks that must be completed on time to prevent project
delays. CPM helps project managers identify tasks that have no flexibility in their schedules and allows
for better resource allocation and project planning.
Brief History of CPM/PERT
CPM/PERT or Network Analysis as the technique is sometimes called, developed along two parallel
streams, one industrial and the other military.
CPM was the discovery of M. R. Walker of E. I. Du Pont de Nemours & Co. and J. E. Kelly of Remington
Rand, circa 1957. The computation was designed for the UNIVAC-I computer. The first test was made
in 1958, when CPM was applied to the construction of a new chemical plant. In March 1959, the method
was applied to a maintenance shut-down at the Du Pont works in Louisville, Kentucky. Unproductive
time was reduced from 125 to 93 hours.
PERT was devised in 1958 for the POLARIS missile program by the Program Evaluation Branch
of the Special Projects office of the U.S. Navy, helped by the Lockheed Missile Systems division and the
consulting firm of Booz Allen & Hamilton. The calculations were arranged so that they could be
carried out on the IBM Naval Ordnance Research Calculator (NORC) at Dahlgren, Virginia.
Planning, Scheduling & Control
Planning, Scheduling (or organising) and Control are considered to be basic Managerial functions,
and CPM/PERT has been rightfully accorded due importance in the literature on Operations Research
and Quantitative Analysis.
Far more than the technical benefits, it was found that PERT/CPM provided a focus around which
managers could brain-storm and put their ideas together. It proved to be a great communication medium
by which thinkers and planners at one level could communicate their ideas, their doubts and fears to
another level. Most important, it became a useful tool for evaluating the performance of individuals and
teams.
There are many variations of CPM/PERT which have been useful in planning costs, scheduling manpower
and machine time. CPM/PERT can answer the following important questions:
• How long will the entire project take to be completed?
• What are the risks involved?
• Which are the critical activities or tasks in the project which could delay the entire project if they
were not completed on time?
• Is the project on schedule, behind schedule or ahead of schedule?
• If the project has to be finished earlier than planned, what is the best way to do this at the least cost?

The Framework for PERT and CPM
Essentially, the same basic procedure is common to both techniques; it is outlined below.
The framework for PERT (Program Evaluation and Review Technique) and CPM (Critical Path Method)
involves several key components:
Activity Identification: The first step is to identify all the activities required to complete the project.
Activities are specific tasks or work packages that need to be accomplished.
Activity Sequencing: Once the activities are identified, their order of execution and dependencies must
be determined. This involves establishing relationships between activities, such as determining which
activities must be completed before others can start.
Activity Time Estimation: Each activity is assigned an estimated duration or time required for
completion. This estimation can be based on historical data, expert judgment, or other techniques.
Network Diagram: A network diagram is created to visually represent the activities and their
relationships. It typically uses nodes to represent activities and arrows to indicate dependencies between
them.
Critical Path Determination: The critical path is the longest sequence of dependent activities that
determine the overall project duration. By identifying the critical path, project managers can focus on
activities that have the most impact on the project timeline.
Activity Slack Calculation: Slack, also known as float, refers to the amount of time an activity can be
delayed without affecting the project's overall duration. Activities on the critical path have zero slack,
while non-critical activities have some degree of slack.
Resource Allocation: Resources required for each activity, such as personnel, equipment, or materials,
are identified and allocated accordingly. This helps ensure that resources are available when needed and
can be managed effectively.
Project Scheduling: Based on the network diagram, activity durations, and resource availability, a project
schedule is developed. This schedule outlines the start and end dates for each activity and provides a
timeline for the entire project.
Project Monitoring and Control: Throughout the project execution, progress is monitored, and any
deviations from the schedule or unexpected issues are identified. Adjustments can be made to optimize
the project's performance and ensure timely completion.
Overall, the PERT and CPM frameworks provide a systematic approach to project planning, scheduling,
and control, enabling project managers to effectively manage resources, mitigate risks, and deliver projects
on time.
The key concept used by CPM/PERT is that a small set of activities, which make up the longest path
through the activity network, controls the entire project. If these "critical" activities can be identified and
assigned to responsible persons, management resources can be optimally used by concentrating on
the few activities which determine the fate of the entire project.

Non-critical activities can be replanned, rescheduled and resources for them can be reallocated
flexibly, without affecting the whole project.
Five useful questions to ask when preparing an activity network are:
 Is this a Start Activity?
 Is this a Finish Activity?
 What Activity Precedes this?
 What Activity Follows this?
 What Activity is Concurrent with this?
Some activities are serially linked. The second activity can begin only after the first activity is completed.
In certain cases, the activities are concurrent, because they are independent of each other and can start
simultaneously. This is especially the case in organisations which have supervisory resources so that work
can be delegated to various departments which will be responsible for the activities and their completion
as planned.
When work is delegated like this, the need for constant feedback and co-ordination becomes an
important senior management pre-occupation.
Drawing the CPM/PERT Network
Each activity (or sub-project) in a PERT/CPM Network is represented by an arrow symbol. Each
activity is preceded and succeeded by an event, represented as a circle and numbered.

At Event 3, we have to evaluate two predecessor activities - Activity 1-3 and Activity 2-3. Activity 1-3
gives us an Earliest Start of 3 weeks at Event 3. However, Activity 2-3 also has to be completed before
Event 3 can begin. Along this route, the Earliest Start would be 4+0=4. The rule is to take the longer
(bigger) of the two Earliest Starts, so the Earliest Start at Event 3 is 4.
Similarly, at Event 4, we find we have to evaluate two predecessor activities - Activity 2-4 and Activity
3-4. Along Activity 2-4, the Earliest Start at Event 4 would be 10 wks, but along Activity 3-4, the
Earliest Start at Event 4 would be 11 wks. Since 11 wks is larger than 10 wks, we select it as the Earliest
Start at Event 4. We have now found the longest path through the network. It will take 11 weeks along
activities 1-2, 2-3 and 3-4. This is the Critical Path.

The Backward Pass - Latest Finish Time Rule
To make the Backward Pass, we begin at the sink or the final event and work backwards to the first event.

At Event 3 there is only one activity, Activity 3-4, in the backward pass, and we find that the value
is 11-7 = 4 weeks. However, at Event 2 we have to evaluate two activities, 2-3 and 2-4. We find that the
backward pass through 2-4 gives us a value of 11-6 = 5 while 2-3 gives us 4-0 = 4. We take the smaller
value of 4 on the backward pass.
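The forward and backward passes just described can be reproduced with a short computation. The sketch below reconstructs the example network implied by the figures above, assuming activities 1-2, 1-3, 2-3, 2-4 and 3-4 with durations of 4, 3, 0, 6 and 7 weeks respectively (activity 2-3 being a zero-duration dummy); it computes the earliest and latest event times, the total float of each activity, and the critical path.

```python
# Forward pass (earliest event times), backward pass (latest event times),
# total float and critical path for the small example network described in
# the text. The durations are inferred from the worked figures and should
# be treated as an illustrative reconstruction.

activities = {
    # (from event, to event): duration in weeks
    (1, 2): 4,
    (1, 3): 3,
    (2, 3): 0,   # dummy activity
    (2, 4): 6,
    (3, 4): 7,
}
events = sorted({event for pair in activities for event in pair})

# Forward pass: earliest event time = longest path from the start event.
earliest = {events[0]: 0}
for event in events[1:]:
    earliest[event] = max(earliest[i] + d for (i, j), d in activities.items() if j == event)

# Backward pass: latest event time, working back from the final event.
project_duration = earliest[events[-1]]
latest = {events[-1]: project_duration}
for event in reversed(events[:-1]):
    latest[event] = min(latest[j] - d for (i, j), d in activities.items() if i == event)

print("Event  Earliest  Latest")
for event in events:
    print(f"{event:>5}  {earliest[event]:>8}  {latest[event]:>6}")

critical_path = []
print("\nActivity  Duration  Total float")
for (i, j), d in activities.items():
    total_float = latest[j] - earliest[i] - d   # slack available to the activity
    if total_float == 0:
        critical_path.append(f"{i}-{j}")
    print(f"{i}-{j:<7}  {d:>8}  {total_float:>11}")

print(f"\nCritical path: {' -> '.join(critical_path)} ({project_duration} weeks)")
```

Running it reproduces the values worked out above: the Earliest Start at Event 3 is 4, the Latest Finish at Event 2 is 4, the project takes 11 weeks, and the critical path is 1-2, 2-3, 3-4.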

Chapter 5

System Analysis and Design, Case study

5.1 Strategies for System Analysis and Problem Solving
Traditionally, systems analysis is associated with application development projects, that is, projects that
produce information systems and their associated computer applications. Your first experiences
with systems analysis will likely fall into this category. But systems analysis methods can be
applied to projects with different goals and scope. In addition to single information systems and
computer applications, systems analysis techniques can be applied to strategic information systems
planning and to the redesign of business processes.
There are also many strategies or techniques for performing systems analysis. They include modern
structured analysis, information engineering, prototyping, and object-oriented analysis. These strategies
are often viewed as competing alternatives. In reality, certain combinations complement one another.
Let's briefly examine these strategies and the scope or goals of the projects to which they are suited. The
intent is to develop a high-level understanding only. The subsequent chapters in this unit will actually
teach you the techniques.
Here are some commonly used strategies:
Define the problem: Clearly define the problem you are trying to solve. Identify the symptoms,
underlying causes, and the desired outcome. This step helps in setting the direction for the analysis and
problem-solving process.
Gather information: Collect relevant data and information about the system and the problem at hand.
This may involve conducting interviews, surveys, observations, or reviewing existing documentation. The
goal is to obtain a comprehensive understanding of the system and the factors contributing to the problem.
Analyze the system: Break down the system into its components and analyze their interactions and
relationships. Use techniques such as data flow diagrams, process mapping, or root cause analysis to
identify patterns, bottlenecks, or areas of improvement. This analysis helps in identifying the underlying
issues causing the problem.
Generate alternative solutions: Brainstorm and generate multiple possible solutions to address the
problem. Encourage creativity and diverse perspectives. Consider both short-term and long-term solutions
and evaluate their feasibility, impact, and potential risks.
Evaluate and select a solution: Assess the alternatives generated in the previous step and evaluate their
advantages, disadvantages, and alignment with the desired outcome. Consider factors such as cost, time,
resources, and stakeholder requirements. Select the most appropriate solution that best addresses the
problem.
Implement the solution: Develop a detailed plan to implement the chosen solution. Identify the necessary
steps, resources, and timelines. Communicate the plan to stakeholders and execute it in a systematic
manner. Monitor the implementation process to ensure it is on track and make adjustments as needed.
Test and validate: Once the solution is implemented, test its effectiveness and validate its impact. Use
metrics, measurements, or user feedback to assess whether the problem has been resolved and the desired
outcome has been achieved. Make any necessary refinements or adjustments based on the test results.
Document and communicate: Document the entire analysis and problem-solving process, including the
problem definition, analysis techniques used, alternative solutions considered, the chosen solution, and the
implementation plan. Communicate the findings and outcomes to stakeholders, ensuring transparency and
shared understanding.
Continuous improvement: Emphasize continuous improvement by monitoring the implemented
solution, gathering feedback, and identifying areas for further enhancement. Apply a feedback loop to
refine the system and address any new challenges that may arise.
Modern Structured Analysis
Modern Structured Analysis (MSA) is an approach to system analysis that focuses on creating clear and
concise models of the system's structure and behavior. It is an evolution of the earlier Structured Analysis
method, incorporating new techniques and tools to improve the effectiveness and efficiency of the analysis
process.
By employing Modern Structured Analysis, system analysts can effectively analyze complex systems,
identify requirements, and create models that serve as a foundation for subsequent design and
implementation activities. The use of graphical notations and object-oriented concepts helps to improve
communication, collaboration, and understanding among stakeholders, leading to more successful system
development projects.
Modern structured analysis is a process-centered technique that is used to model business requirements
for a system. The models are structured pictures that illustrate the processes, inputs, outputs, and
files required to respond to business events (such as ORDERS).
By process-centered, we mean the initial emphasis in this technique is on the PROCESS building blocks
in our information system framework. The technique has evolved to also include the DATA building
blocks as a secondary emphasis.
Structured analysis was not only the first popular systems analysis strategy; it also introduced an overall
strategy that has been adopted by many of the other techniques: model-driven development.
A model is a representation of reality. Just as "a picture is worth a thousand words," most models use
pictures to represent reality.
Model-driven development techniques emphasize the drawing of models to define business
requirements and information system designs. The model becomes the design blueprint for constructing
the final system.
Modern structured analysis is simple in concept. Systems and business analysts draw a series of process
models called data flow diagrams (Figure 5-1) that depict the essential processes of a system along with
inputs, outputs, and files. Because these pictures represent the logical business requirements of the system
independent of any physical, technical solution, the models are said to be a logical design for the system.


Fig 5-1 A process model also called a data flow diagram

Today, many organizations have evolved from a structured analysis approach to an information
engineering approach.
Information engineering is a data-centered, but process-sensitive, technique that is applied to the
organization as a whole (or to a significant part of it, such as a division), rather than on an ad-hoc,
project-by-project basis as in structured analysis.
The basic concept of information engineering is that information systems should be engineered like other
products. Information engineering books typically use a pyramid framework to depict information
systems building blocks and system development phases. The phases are:
1 Information Strategy Planning (ISP) is a systems analysis approach that focuses on examining the
entire business organization to develop an overarching plan and architecture for future information
systems development. The primary goal of ISP is not to create actual information systems or computer
applications but to create a strategic plan that aligns information systems with the organization's business
objectives.
In ISP, the project team analyzes the business mission and goals and formulates an information systems
architecture and plan that optimally supports the organization in achieving its business goals. This strategic
plan guides the identification and prioritization of specific business areas. A business area represents a
collection of cross-organizational business processes that require high integration to support the
information strategy plan and fulfill the business mission.

To further analyze a business area, a technique called business area analysis (BAA) is used. BAA employs
systems analysis methods to study the business area and define the specific business requirements for a
set of highly streamlined and integrated information systems and computer applications that will support
that particular business area.
Based on the analysis conducted during the business area requirements phase, specific information system
applications are identified and prioritized. These applications then become individual projects, and various
systems analysis and design methods are applied to develop the production systems. These methods can
include a combination of structured analysis and design, prototyping, and object-oriented analysis and
design, depending on the specific project requirements. Information engineering is said to be a data-
centered paradigm because it emphasizes the study and definition of DATA requirements before those of
PROCESS, INTERFACE, or GEOGRAPHY requirements. This is consistent with the contemporary
belief that information is a corporate resource that should be planned and managed. Since information is
a product of data, data must be planned first. Data models, such as that shown in Figure 5-2, are drawn
first. In addition to data models, information engineers also draw process models similar to those drawn
in structured analysis.

Fig 5-2 A data model also called an entity relationship diagram


Although information engineering has gradually replaced structured analysis and design as the most
widely practiced strategy for systems analysis, information engineering actually integrates all the process
models of structured analysis with its data models. That should make sense, since we know that an
information system must include both DATA and PROCESS building blocks. Information engineering
was the first formal strategy for synchronizing those building blocks! Information engineering was also
the first widely practiced strategy that considered GEOGRAPHY building blocks through application of
tools that plan and document the distribution of data and processes to locations.
Another strategy for systems analysis is prototyping.
Prototyping is an engineering technique used to develop partial but functional versions of a system or
applications. When extended to system design and construction, a prototype can evolve into the
final, implemented system.
Two flavors of prototyping are applicable to systems analysis:
- Feasibility prototyping: Feasibility prototyping is a type of prototyping that aims to determine the
technical and economic feasibility of a product or system. It involves creating a prototype to test and
validate specific technical aspects or functionalities to assess whether they can be successfully
implemented and integrated into the final product. Feasibility prototypes are often used in complex
projects where there are uncertainties or risks associated with the technology or implementation
approach. The goal is to identify potential challenges, evaluate alternatives, and make informed
decisions regarding the project’s feasibility before investing significant resources into full-scale
development. It is used to test the feasibility of a specific technology that might be applied to the
business problem. For example, we might use Microsoft Access to build a quick-but-incomplete
prototype to assess the feasibility of moving a mainframe application to a PC-based environment.
- Discovery prototyping sometimes called requirements prototyping or exploratory prototyping or
proof of concept prototyping, is a type of prototyping that focuses on the early exploration and
validation of new ideas, concepts or technologies. For example, we might again use Microsoft Access
to create sample forms and reports to solicit user responses as to whether those forms and reports
truly represent business requirements. (Note: In discovery prototyping, we try to discourage users
from worrying about the style and format of the prototypes; that can be changed during system
design!). The primary objective is to gain a deeper understanding of user needs, test novel
approaches, and uncover innovative solutions. Discovery prototypes are typically created in the early
stages of a project when there is a high level of uncertainty and the design direction is not yet well-
defined.
In response to the faster pace of the economy in general, prototyping has become a preferred technique
for accelerating systems development. Many system developers extend the prototyping
techniques to perform what they call rapid application development. Unfortunately, some developers
are using prototyping to replace model-driven strategies, only to learn what true engineers have known
for years: you cannot prototype without some degree of more formal design models.
Prototyping with Visio is nowadays a popular approach used to create visual representations
and interactive simulations of user interfaces, workflows, and system processes. Visio is a diagramming
and visualization tool that offers a range of features suitable for prototyping. Prototypes are also
commonly built using free versions of commercially available server-side scripting environments along
with HTML, JavaScript, electronic mock-ups, and Microsoft Access.
As previously described, modern structured analysis and information engineering both emphasize
model-driven development. Prototyping places emphasis on the construction of working prototypes.
Joint application development, (JAD) complements both of these techniques by emphasizing
participative development among system owners, users, designers, and builders.
Joint application development (JAD)
Joint Application Development (JAD) is a process used in the systems development life cycle to speed
up the design and development of systems. It brings together key stakeholders—system owners, users,
analysts, designers, and builders—in highly structured and focused workshops to collaboratively define
and design systems. Synonyms include joint application design and joint requirements planning.
A JAD-trained systems analyst usually plays the role of facilitator for a workshop that will typically run
from three to five full working days. The facilitator's role is crucial in managing the dynamics of the
group, ensuring effective communication, mediating conflicts, and maintaining focus on the task at hand.
This workshop may replace months of traditional interviews and follow-up meetings.
Benefits:
 Efficiency: A JAD workshop can replace months of traditional interviews and follow-up meetings,
speeding up the system development process.
 Enhanced Participation: JAD promotes active involvement from system owners and users, fostering
a sense of ownership and investment in the project.
 Improved Communication: By gathering all key stakeholders in one place, JAD can improve
communication and ensure that all perspectives are considered.
 Accelerated Deliverables: The structured and focused nature of JAD sessions promotes quick
progress on methodology activities and deliverables

However, for JAD to be successful, it requires a skilled facilitator who can effectively manage group
dynamics, encourage participation from all members, and mediate any conflicts that arise.
One of the most interesting contemporary applications of systems analysis methods is business process
redesign.
Business process redesign (BPR) also called business process reengineering is the application of
systems analysis (and design) methods to the goal of dramatically changing and improving the
fundamental business processes of an organization, independent of information technology. The
motivation behind BPR arose from the realization that many existing information systems and
applications merely automated inefficient business processes. Automating outdated processes does not
add value to the business and may even subtract value from it. BPR is one of several projects influenced
by the total quality management (TQM) trend.
BPR projects primarily focus on non-computer processes within the organization. Each process undergoes
careful analysis to identify bottlenecks, assess value contribution, and identify opportunities for
elimination or streamlining. After redesigning the business processes, BPR projects often explore how
information technology can be effectively applied to support the improved processes. This may lead to the
initiation of new application development projects, which can be addressed using other techniques
discussed in this section.
Object Oriented Analysis
Object-Oriented Analysis (OOA) is a pivotal technique in systems development that strives to harmonize the
traditionally separate concerns of data and processes. In OOA, data and the processes that act upon that data are

encapsulated into entities known as objects. The data within an object (termed properties) can only be
manipulated via the object's encapsulated processes (known as methods).
OOA techniques focus on analyzing existing objects for potential reuse or adaptation, as well as defining new
or modified objects to be assembled into a robust business computing application. This shift in approach
facilitates the assembly of systems from libraries of reusable objects, thereby enhancing efficiency and code
reuse.
For the past 30 years, most systems development strategies have deliberately separated concerns of
DATA from those of PROCESS. The COBOL language, which dominated business application
programming for years, was representative of this separation-the DATA DIVISION was separated from
the PROCEDURE DIVISION. Most systems analysis (and design) techniques similarly separated
these concerns to maintain consistency with the programming technology. Although most systems
analysis and design methods have made significant attempts to synchronize data and process models, the
results have been less than fully successful.
Object technologies and techniques are an attempt to eliminate the separation of concerns about DATA
and PROCESS. Instead, data and the processes that act on that data are combined or encapsulated into
things called objects. The only way to create, delete, change, or use the data in an object (called properties)
is through one of its encapsulated processes (called methods). The system and software development
strategy is changed to focus on the "assembly" of the system from a library of reusable objects. Of course,
those objects must be defined, designed and constructed. Thus in the early part of the system development
process, we need to use object- oriented analysis techniques.
Object-oriented analysis (OOA) techniques are used to (1) study existing objects to see if they can be
reused or adapted for new uses, and to (2) define new or modified objects that will be combined with
existing objects into a useful business computing application.
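The idea of encapsulating properties and methods can be illustrated with a short sketch. The Invoice class below is a hypothetical example rather than a prescribed design; the point is simply that its data can only be created, changed, or used through its own methods.

```python
# A minimal sketch of encapsulation: the object's properties (its data) are
# manipulated only through its methods. The Invoice class is a hypothetical
# illustration.

class Invoice:
    def __init__(self, number, customer):
        self._number = number       # properties (data) held inside the object
        self._customer = customer
        self._lines = []
        self._paid = False

    def add_line(self, description, amount):
        """Method: the only intended way to add a charge to the invoice."""
        if self._paid:
            raise ValueError("Cannot modify a paid invoice")
        self._lines.append((description, amount))

    def total(self):
        """Method: derive the total from the encapsulated data."""
        return sum(amount for _, amount in self._lines)

    def mark_paid(self):
        """Method: change the object's state in a controlled way."""
        self._paid = True


invoice = Invoice("INV-001", "Hypothetical Traders Pvt. Ltd.")
invoice.add_line("Consulting services", 50_000)
invoice.add_line("Travel expenses", 5_000)
print(invoice.total())   # 55000; callers never manipulate the underlying data directly
```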
The application of OOA is best suited to projects implementing object-oriented technologies like
Smalltalk, C++, Delphi, and Visual Basic. Other modern languages and platforms, including Python,
Ruby, Java, C#, .NET, JavaScript, TypeScript, and mobile app development languages such as Swift and
Kotlin, also make extensive use of OOA and Object-Oriented Programming (OOP) principles.
In today's technological landscape, most computer operating systems use graphical user interfaces
(GUIs), such as Microsoft Windows and IBM's OS/2 Presentation Manager, built leveraging object-
oriented or object-like technologies. Libraries of reusable objects, sometimes called components, are
integral to the development of GUIs. The components exhibit the same behaviors across all applications,
enabling the rapid assembly of desired GUI screens for any new application without substantial
programming. For example, Delphi and Visual BASIC contain all the necessary objects (called
components) to assemble the desired GUI screens for any new application (without programming).
Furthermore, OOA's principles have significantly influenced the design of databases with Object-oriented
databases and ORM (Object-Relational Mapping) tools. It has also been instrumental in the development
of the Microservices architecture, which structures an application as a collection of loosely coupled
services, leading to better scalability, maintainability, and delivery speed.

5.2 Concept of Data and Process Modeling
Data Flow Diagram:
Data Flow Diagrams (DFDs) play a crucial role in representing the flow of data in a system and how
this data is processed and transformed to achieve useful outcomes. They are instrumental in identifying
and understanding the functional processes within a system, providing a high-level view of how the
system operates. Data flow diagrams (DFDs) reveal relationships among and between the various
components in a program or system. DFDs are an important technique for modeling a system's high-level
detail by showing how input data is transformed to output results through a sequence of functional
transformations.
The four major components of DFDs are:
 Entities: These are the sources or destinations of data. They could be people, systems, or
organizations that interact with the system.
 Processes: These depict the transformations or computations performed on the data within the
system. They are usually represented by circles or rectangles in a DFD.
 Data Stores: These represent places where data can be stored for later retrieval. They could be
databases, files, or even physical storage locations.
 Data Flows: These are the pathways for data, indicating how it moves from one part of the system
to another.
The symbols used to depict how these components interact in a system are simple and easy to
understand; however, there are several DFD models to work from, each having its own symbology.
DFD syntax does remain constant by using simple verb and noun constructs. The syntactical simplicity
of DFDs and their focus on data transformations make them an excellent tool for object-oriented analysis.
They can facilitate the decomposition of functional specifications into precise diagrams, providing
valuable insights for systems analysts.
Defining Data Flow Diagrams (DFDs)
When it comes to conveying how information data flows through systems (and how that data is
transformed in the process), data flow diagrams (DFDs) are the method of choice over technical
descriptions for three principal reasons.
1. DFDs are easier to understand by technical and nontechnical audiences
2. DFDs can provide a high level system overview, complete with boundaries and connections to
other systems
3. DFDs can provide a detailed representation of system components
DFDs help system designers and others during initial analysis stages visualize a current system or one
that may be necessary to meet new requirements. Systems analysts prefer working with DFDs,
particularly when they require a clear understanding of the boundary between existing systems and
postulated systems. DFDs represent the following:
1. External devices sending and receiving data
2. Processes that change that data
3. Data flows themselves
4. Data storage locations
The hierarchical DFD typically consists of a top-level diagram (Level 0) underlain by cascading lower-
level diagrams (Level 1, Level 2, ...) that represent different parts of the system.
Data Flow Diagrams
Data flow diagrams have replaced flowcharts and pseudocode as the tool of choice for showing program
design. A DFD illustrates those functions that must be performed in a program as well as the data that
the functions will need. A DFD is illustrated in Figure 5-3.

Fig 5-3 An example of a data flow diagram


Defining DFD Components
DFDs consist of four basic components that illustrate how data flows in a system: entity, process,
data store, and data flow.
Entity
An entity is the source or destination of data. Entities in a DFD represent people, systems, or
organizations that are outside the context of the system. Entities either provide data to the system
(referred to as a source) or receive data from it (referred to as a sink). Entities are often represented as
rectangles (a diagonal line across the right-hand corner means that this entity is represented somewhere
else in the DFD). Entities are also referred to as agents, terminators, or source/sink.

Process

The process is the manipulation or work that transforms data, performing computations, making decisions
(logic flow), or directing data flows based on business rules. In other words, a process receives input and
generates some output. Process names (simple verbs and dataflow names, such as "Submit Payment"
or "Get Invoice") usually describe the transformation, which can be performed by people or machines.
Processes can be drawn as circles or a segmented rectangle on a DFD, and include a process name and
process number.
Data Store
A data store is where a process stores data between processes for later retrieval by that same process or
another one. Files and tables are considered data stores. Data store names (plural) are simple but
meaningful, such as "customers," "orders," and "products." Data stores are usually drawn as a rectangle
with the right-hand side missing and labeled by the name of the data storage area it represents,
though different notations do exist.
Data Flow
A data flow represents the movement of data between entities, processes, and data stores within the system.
It shows the path and direction of data as it flows from one component to another. Data flows are depicted
as arrows in a DFD, indicating the movement of data from a source to a destination. They are labeled to
describe the type of data being transmitted, providing clarity on the information being exchanged.
DFDs utilize these components to illustrate the flow of data within a system, enabling analysts and
stakeholders to understand the information exchange and transformations. By visually representing the
entities, processes, data stores, and data flows, DFDs facilitate communication, analysis, and
documentation of system requirements and functionalities. The flow of data in a DFD is named to reflect
the nature of the data used (these names should also be unique within a specific DFD). Data flow is
represented by an arrow, where the arrow is annotated with the data name.
These DFD components are illustrated in Figure 5-4.

Fig 5-4 The major components of DFD

Process for Developing DFDs
Data flow diagrams can be expressed as a series of levels. We begin by making a list of business activities
to determine the DFD elements (external entities, data flows, processes, and data stores). Next, a
context diagram is constructed that shows only a single process (representing the entire system), and
associated external entities. The Diagram-0, or Level 0 diagram, is next, which reveals general
processes and data stores (see Figures 5-5 and 5-6). Following the drawing of Level 0 diagrams, child
diagrams will be drawn (Level 1 diagrams) for each process illustrated by Level 0 diagrams.

Fig 5-5 General form of a level 0 DFD

Fig 5-6 Specific level 0 DFD

Guidelines For Producing DFDs:
Why They Aren't Called "Rules"
The most important thing to remember is that there are no hard and fast rules when it comes to producing
DFDs, but there are when it comes to valid data flows. For the most accurate DFDs, you need to become
intimate with the details of the use case study and functional specification. This isn't a cakewalk
necessarily, because not all of the information you need may be present. Keep in mind that if your DFD
looks like a Picasso, it could be an accurate representation of your current physical system. DFDs
don't have to be art; they just have to accurately represent the actual physical system for data flow.
Preliminary Investigation of Text Information
The first step is to determine the data items, which are usually located in documents (but not always).
Once you identify the data items, you'll need to determine where they come from (source) and
where they go (destination). Construct a table to organize your information, as shown in Table 5-1.
Table 5-1 Data item table

Data Item          Source           Destination
Customer Name      Customer Form    Order System
Product Code       Inventory        Order System
Order Quantity     Customer Form    Order System
Order Total        Order System     Payment System
Shipping Address   Customer Form    Shipping System

Determining System Boundaries


After organizing the data items, sources, and destinations in a table, the next step is to determine the system
boundaries by identifying which entities (sources and destinations) are internal to the system and which
ones are external. This decision can be influenced by having a deeper understanding of the system or by
working backward from higher-level DFDs (such as Level 1 DFDs).
It is important to note that determining system boundaries can be subjective and may vary depending on
the system being modeled or personal preference. It is also crucial to recognize that DFD development is
an iterative process, and multiple drafts of DFDs may be required to accurately represent the system's data
flow.
By considering the relationships between entities, sources, and destinations, and through iterative
refinement, you can gradually develop a Level 0 DFD that effectively captures the data flow within the
system.
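As a rough illustration of this step, the sketch below loads the rows of Table 5-1 and lists the candidate entities and the data flows that cross an assumed system boundary. The decision that only the Order System lies inside the boundary is a hypothetical judgement made for the example.

```python
# Organize the data item table (Table 5-1) and list candidate entities and
# boundary-crossing flows. Which entities are external is a judgement call;
# the 'external' set below is a hypothetical choice for illustration.

data_items = [
    # (data item, source, destination)
    ("Customer Name",    "Customer Form", "Order System"),
    ("Product Code",     "Inventory",     "Order System"),
    ("Order Quantity",   "Customer Form", "Order System"),
    ("Order Total",      "Order System",  "Payment System"),
    ("Shipping Address", "Customer Form", "Shipping System"),
]

sources = {source for _, source, _ in data_items}
destinations = {destination for _, _, destination in data_items}
print("Candidate entities:", sorted(sources | destinations))

# Hypothetical boundary decision: the Order System is the system being
# modelled; everything else is treated as external to it.
external = {"Customer Form", "Inventory", "Payment System", "Shipping System"}
internal = (sources | destinations) - external
print("Inside the system boundary:", sorted(internal))

# Data flows that cross the boundary must appear on the context diagram.
for item, source, destination in data_items:
    if (source in external) != (destination in external):
        print(f"Boundary flow: {item}: {source} -> {destination}")
```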

Developing the Level 0 DFD
When creating a Data Flow Diagram (DFD), it's important to establish the system boundary. The system
boundary encompasses all the components that are part of the system or process being depicted. These
components are represented within a single system/process box in the DFD. On the other hand, external
entities are located outside the system boundary. Internal entities within the system boundary are
represented as process locations. The data flow arrows illustrate the system's interactions with its
environment, indicating the flow of information to and from processes, external entities, and data stores.
If it helps improve clarity, dashed lines can be used to show data flows between external entities that are
strictly external to the system being analyzed.
Child (Level 1+) Diagrams
Child (Level 1+) diagrams are additional levels of DFDs that provide a more detailed analysis of the
system. While the Level 0 DFD focuses on the overall interaction of the system with the external world,
the subsequent levels delve deeper into the system's internal processes and data flows.
Each child diagram represents a specific process or subprocess within the system. It uses the same symbols
and notation as the Level 0 DFD, allowing for consistency and ease of understanding. The child diagrams
provide a more granular view of the system, breaking down complex processes into smaller, manageable
components.
By creating child diagrams, analysts can progressively refine their understanding of the system's
functionality, data flows, and interactions. The hierarchy of DFD levels helps in organizing and structuring
the analysis process, allowing for a systematic exploration of the system's intricacies. See Figure 5-7.

Fig 5-7. Example of a Level 1 DFD Showing the Data Flow and Data Store Associated
With a SubProcess "Digital Sound Wizard."

When producing a first-level DFD, the relationship of the system with its environment must be preserved.
In other words, the data flow in and out of the system in the Level 1 DFD must be exactly the same as
those data flows in Level 0. If you discover new data flows crossing the system boundary when drawing
the Level 1 DFD, then the Level 0 DFD must be amended to reflect the changes in the Level 1 DFD.
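This balancing rule lends itself to a mechanical check. In the sketch below the boundary flows at each level are hypothetical examples; any flow that appears at one level but not the other signals that one of the diagrams needs amending.

```python
# A minimal balancing check: the data flows crossing the system boundary
# must be identical at Level 0 and Level 1. The entities and flow names
# are hypothetical examples.

level0_boundary_flows = {
    ("Customer", "Order Details", "in"),
    ("Customer", "Invoice", "out"),
    ("Bank", "Payment Confirmation", "in"),
}

level1_boundary_flows = {
    ("Customer", "Order Details", "in"),
    ("Customer", "Invoice", "out"),
    ("Bank", "Payment Confirmation", "in"),
    ("Customer", "Delivery Note", "out"),   # introduced while drawing Level 1
}

new_at_level1 = level1_boundary_flows - level0_boundary_flows
missing_at_level1 = level0_boundary_flows - level1_boundary_flows

if not new_at_level1 and not missing_at_level1:
    print("Level 0 and Level 1 are balanced.")
for entity, flow, direction in new_at_level1:
    print(f"New boundary flow at Level 1: {flow} ({direction}) with {entity} "
          f"- amend the Level 0 DFD or remove the flow.")
for entity, flow, direction in missing_at_level1:
    print(f"Boundary flow missing at Level 1: {flow} ({direction}) with {entity}.")
```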
Developing the Level 1 DFD
It is important that the system relationship with its environment be preserved no matter how many levels
deep you model. In other words, you can't have new data flows crossing the system boundary in Level 1.
The next section deals with such non-valid data flows.
The Level 1 Data Flow Diagram (DFD) offers a broad overview of the system, highlighting the main
processes and data stores involved. To analyze the diagram effectively, we need to identify the incoming
and outgoing data flows and match them with the corresponding processes responsible for receiving or
generating that data. It's also essential to refer to the data item table to ensure all internal data flows are
accounted for and to identify data stores.
Here is a simplified explanation of the process, broken down sequentially:
1. Review the Level 1 DFD: Begin by examining the Level 1 DFD, which provides a high-level
representation of the system. It identifies the major processes and data stores involved.
2. Identify incoming and outgoing data flows: Look for data flows entering and leaving the processes. Each
data flow has a source and a destination. Identify the corresponding processes responsible for receiving
or generating the data associated with each flow.
3. Check the data item table: Refer to the data item table to ensure all internal data flows are included. Look
for any missing data flows that are not explicitly shown in the diagram. Additionally, identify potential
data stores based on documents that have the same source and destination.
4. Consider shared data stores: Some processes may share common data stores, meaning multiple processes
use the same data store for storing or retrieving information.
5. Evaluate process-data store relationships: Analyze the relationships between processes and data stores.
Determine if it's possible to move a single process-data store combination inside the process itself,
simplifying the diagram.
6. Address internal outputs and inputs: Identify processes that exclusively handle internal outputs and
inputs. In such cases, use a separate process for each source or destination from the DFD to clearly
represent these internal interactions.
By following these steps, you can effectively analyze the Level 1 DFD, identify the data flows,
processes, and data stores involved, and ensure that the diagram accurately represents the system's
functionality and data flow.
Revising the Level 1 DFD
Once you've finished your first attempt at a Level 1 DFD, review it for consistency and refine it for
balance by asking yourself these questions:
1. Do the Level 1 processes correspond with the major functions that a user expects from the
system?
2. Is the level of detail balanced across the DFD?
3. Can some processes be merged?

4. Can I remove data stores not shared by more than one process?
5. Have I avoided crossed data flow lines by making use of duplicated components (external
entities and data stores)?
Some Guidelines about Valid and Non-Valid Data Flows
Before embarking on developing your own data flow diagram, there are some general guidelines you
should be aware of.
Data stores are storage areas and are static or passive; therefore, having data flow directly from
one data store to another doesn't make sense because neither could initiate the communication.
Data stores maintain data in an internal format, while entities represent people or systems external
to them.
Because data from entities may not be syntactically correct or consistent, it is not a good idea to have a
data flow directly between a data store and an entity, regardless of direction.
Data flow between entities would be difficult because it would be impossible for the system to know
about any communication between them. The only type of communication that can be modeled is that
which the system is expected to know or react to.
Processes on DFDs have no memory, so it would not make sense to show data flows between two
asynchronous processes (between two processes that may or may not be active simultaneously) because
they may respond to different external events.
Therefore, data flow should only occur in the following scenarios:
• Between a process and an entity (in either direction)
• Between a process and a data store (in either direction)
• Between two processes that can only run simultaneously
Figure 5-8 illustrates these valid data-flow scenarios.

Fig 5-8. A Valid DFD Example Illustrating Data Flows, Data Store, Processes, and Entities.

In Figure 5-8, Student and Faculty are the source and destination of information (the entities),
respectively. Register 1, Exam 2, and Graduate 3 are the processes in the program. Student Record
is the data store. Register 1 performs some task on Registration Form from Student, and the Subject
Registered moves to the data store. The Class Rolls information flows on to Faculty. Graduate 3 obtains
Academic Record information from Student Record, and Degree/Transcript information is moved to
Student. Exam 2 obtains exam/paper information from Faculty, and moves the Grades to the Student
Record for storage.
Here are a few other guidelines on developing DFDs:


• Data that travel together should be in the same data flow
• Data should be sent only to the processes that need the data
• A data store within a DFD usually needs to have an input data flow
• Watch for Black Holes: a process with only input data flows
• Watch for Miracles: a process with only output flows
• Watch for Gray Holes: insufficient inputs to produce the needed output
• A process with a single input or output may or may not be partitioned enough
• Never label a process with an IF-THEN statement

• Never show time dependency directly on a DFD (a process begins to perform tasks as soon as it
receives the necessary input data flows)
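To make the preceding guidelines concrete, the following minimal Python sketch (illustrative only; the component names are taken from Figure 5-8, and the rule set is a simplification of the guidelines above) checks a list of data flows against the valid endpoint combinations and flags black holes and miracles.

    # Illustrative sketch only: checks DFD flows against the endpoint rules above
    # and flags black holes (only inputs) and miracles (only outputs).
    # Note: process -> process flows are valid only when both processes run simultaneously.
    VALID_ENDPOINTS = {("process", "entity"), ("entity", "process"),
                       ("process", "datastore"), ("datastore", "process"),
                       ("process", "process")}

    def check_flows(components, flows):
        """components: dict of name -> kind ('process', 'entity', 'datastore').
        flows: list of (source_name, destination_name) tuples."""
        problems = []
        for src, dst in flows:
            pair = (components[src], components[dst])
            if pair not in VALID_ENDPOINTS:
                problems.append(f"Invalid flow: {src} ({pair[0]}) -> {dst} ({pair[1]})")
        for name, kind in components.items():
            if kind != "process":
                continue
            inputs = [f for f in flows if f[1] == name]
            outputs = [f for f in flows if f[0] == name]
            if inputs and not outputs:
                problems.append(f"Black hole: process '{name}' has only input flows")
            if outputs and not inputs:
                problems.append(f"Miracle: process '{name}' has only output flows")
        return problems

    # Example based on Figure 5-8
    components = {"Student": "entity", "Faculty": "entity",
                  "Register": "process", "Exam": "process", "Graduate": "process",
                  "Student Record": "datastore"}
    flows = [("Student", "Register"), ("Register", "Student Record"),
             ("Register", "Faculty"), ("Faculty", "Exam"),
             ("Exam", "Student Record"), ("Student Record", "Graduate"),
             ("Graduate", "Student")]
    print(check_flows(components, flows))   # [] means no rule violations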
Advantages and Disadvantages of DFDs
Strengths
As we have seen, the DFD method is a core element of structured analysis and is widely used. DFDs are easy to learn, with their few and simple-to-understand symbols (once you decide on a particular DFD model), and the syntax used for designing them is simple, employing English nouns or noun-adjective-verb constructs. Because DFDs provide a clear, pictorial view of how data flows and is transformed within a system, their use can expedite the project code development process.
Disadvantages
DFDs for large systems can become cumbersome, difficult to translate and read, and time consuming to construct. Data flow detail can become confusing to programmers, yet DFDs are useless without that prerequisite detail: a Catch-22 situation. Different DFD models also employ different symbols (circles and rectangles, for example).
E-R Diagrams
Entity-relationship (E-R) diagrams are graphic illustrations used to display objects or events within a
system and their relationships to one another. E-R diagrams model data in much the same way as
DFDs model processes and data flows.
Why is a data flow diagram not a complete data description? In a DFD, processes are first identified and then the data flows between the processes are isolated and derived; thus, processes are the focal point of the DFD. A DFD also does not represent the relationships an organization needs among its data entities. This is where E-R data modeling helps. In E-R models, entities (data objects) are isolated and the relationships between them are defined; thus, data is the focal point of E-R diagrams.
This process results in a thorough, systematic investigation of the existing system. It also helps in defining or modifying the data entities and relationships established in the DFD. In the final outcome, all the data stores are normalized.
Purpose of E-R diagram
• Verify accuracy and thoroughness of data design, current and new, with users.
• Organize and record organizational data entities, relationships and scope through
decomposition and layering.
• Enhance the overall communication between development project team members, system
technicians, management and users with the use of graphic models.
• Generally simplify and bolster the creative data design process.

How to represent E-R diagram
Figure 5-9 illustrates an E-R diagram. An E-R model represents entities and the relationships that
exist between them. A box symbol as shown in the figure is used to represent Entities and a diamond
for Relationship.

Fig 5-9 An E-R model.


Generally speaking, the more "event-like" a relationship is, the more likely it will be to require a unique data structure or data table. An important part of the definition of a relationship is its cardinality. Cardinality specifies how many instances of one entity can be associated with one instance of the other entity in the relationship.
Types of E-R Relationships
There are three basic types of relationships modeled between entities on an E-R diagram. They are:
• One-to-one
• One-to-many
• Many-to-many
Example
Consider a student registration system, where the entities identified in the data flow diagram are STUDENTS, INSTRUCTOR, COURSES OFFERED, and COURSE SCHEDULE. We will see how relationships can be established between these entities.
One-to-one
One-to-one is a relationship type in entity-relationship (E-R) diagrams, which indicates that each entity in
one entity set is associated with at most one entity in another entity set, and vice versa. It implies a unique
and singular relationship between the entities.

In a one-to-one relationship, an entity from one entity set is paired with exactly one entity from another
entity set. This relationship can be visualized in an E-R diagram by connecting the two entities with a line
and labeling it as "1:1" or simply "1" on both ends.
In the registration example, if each course offered is taught by exactly one instructor and each instructor teaches exactly one course, the Assigned-to relationship between INSTRUCTOR and COURSES OFFERED is one-to-one (Figure 5-10). If an instructor is always assigned to the same course every year, it might be useful to merely merge the entities.

INSTRUCTOR -- Assigned to -- COURSES OFFERED
Fig 5-10 A one-to-one relationship.

One-to-many
A one-to-many relationship refers to a situation where a single item in one entity can be connected to multiple items in another entity. That is, one item in the first entity can be associated with many items in the second entity, but each item in the second entity is associated with only one item in the first.
Using the previous scenario as an illustration, when an instructor teaches multiple courses within a year,
it establishes a one-to-many relationship. It is important to note that merging entities should be avoided if
there is a potential for transforming a one-to-one relationship into a one-to-many relationship.

Instructor Assigned to Courses offered

Fig 5-11 A one-to-many relationship.


Many-to-many
A many-to-many relationship implies that each item in an entity may be associated with many items
in another entity and vice versa.

As shown in the Figure 5-12, students register for many courses, which represents the case of
many-to-many relationship. Also one course will be taken up by many students.

Figure 5-12 A many-to-many relationship.

Exercise
Consider the Order Tracking system.
Customers place orders for universal products. Orders are filled in the Order Processing department
by order processing clerks. In the Order Processing department, an Order number is assigned to each
order for identification and an invoice with the cost of the products for the order is produced. When the
invoice is sent to the customer, a shipment is also made to the customer by the Shipping department.
After the DFDs are drawn, the following data entities are established: SHIPMENT, CUSTOMER, ORDER, INVOICE, and PRODUCT.
Establish the possible relationships between each of these data entities.
Solution to Exercise
The following diagram represents a one-to-one relationship.

A one-to-one relationship

The relationships shown here are all one-to-many relationships.


Steps in Building an E-R Diagram


There are 6 steps to follow in building an E-R diagram.
• Determine the data entities.
• Generate a list of potential entity relationships or pairings.
• Determine the relationship between the entity and pairings.
• Analyze the significant entity relationships.
• Develop an integrated E-R diagram.
• Define and group the attributes for each data entity.

This top-down approach is applied to the construction of a data model by way of E-R diagram. The
above steps are illustrated as a flowchart in Figure 5-13.

Determine the data entities
The data entities are those objects within the system that are likely candidates to have information about them stored. Thus, all the data stores you have identified in the DFD become the candidates for data entities.
Generate a list of potential entity relationships or pairings
Determine the maximum number of potential combinations among the individual entities; for n entities there are n(n-1)/2 possible pairings, as the short sketch below illustrates. Each such combination creates a candidate pairing between the entities.
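A short, illustrative Python sketch (entity names taken from the registration example) generates the candidate pairings for review:

    # Illustrative: generate all candidate entity pairings for review.
    from itertools import combinations

    entities = ["STUDENTS", "INSTRUCTORS", "COURSES OFFERED", "STUDENT SCHEDULES"]
    pairings = list(combinations(entities, 2))
    print(len(pairings))   # 6 candidate pairings for 4 entities: n(n-1)/2
    for pair in pairings:
        print(pair)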
Determine the relationship between the entity and pairings
Each pairing should be examined both for the nature of its logical association and for its significance. If an association is determined to be insignificant, i.e., it requires no data integration or linkage, then it can be removed from the data model entirely.

Fig 5-13 Procedure for data modeling through E-R diagram

Example
Let us take the example of Student registration system, where the data entities are STUDENTS,
INSTRUCTORS, COURSES, SCHEDULES, and look at all the possible pairings:
Students to Instructors: Students taught by instructors. Linkage required for knowing to which
instructors a student is assigned.
Students to Student schedules: Students generate schedule upon registration. Linkage required for
knowing to which courses a student has scheduled for a given term.
Students to Courses offered: Students select the courses offered. Linkage required for knowing details
about courses for which a student will register.
Instructors to Student schedules: Instructor listed on student schedule. This relationship is not
significant, as the instructor is not listed on the course schedule.
Instructors to Courses offered: Instructor teaches courses. This relationship establishes link to know
which instructor teaches which courses.
Student schedules to Courses offered: Student schedules list selected course offerings. Linkages to
know for which courses a student has registered.
Analyze the significant entity relationships
A single E-R diagram should be developed for each significant pairing, with the type of
relationship (one-to-one, one-to-many, many-to-many).
Example
Let us take the example of student registration system and looking at the pairings, establish the
relationships between entities.
Students to Instructors: Students taught by instructors. Relationship is Many-to-many.
Students to Student schedules: Students generate schedule upon registration. Relationship is
Many-to-many.
Students to Courses offered: Students select the courses offered. Relationship is Many- to- many.
Instructors to Student schedules: Instructor listed on student schedule. This relationship is not
significant.
Instructors to Courses offered: Instructor teaches courses. Relationship is One-to-many.
Student schedules to Courses offered: Student schedules list selected course offerings. Relationship is Many-to-many.
Develop an Integrated E-R Diagram
Finally all the individual E-R diagram/pairings are assembled to represent a single diagram as shown
in Figure 5-14.


Fig 5-14 Final E-R diagram representing relationship among students, instructors and courses
offered.
Next we define and group the attributes for each data entity, as shown below:
COURSES OFFERED = Class-Number
Class-Name
Class-Credits
Class-Room
Class-Time
Class-Instructor
Class-Enrollment
Class-Maximum-Limit

INSTRUCTOR = Instructor-Number
Instructor-Name
Instructor-Department
Instructor-schedule (for all classes taught)

{Class-Number}
{Class-Name}
{Class-Credits}

{Class-Enrollment}
STUDENT = Student-Number
Student-Name
Student-Address
Student-Level
Student-Credits-Earned (for all classes)
{Class-Number}
{Class-Name}
{Class-Credits}
{Class-Grade}
REGISTRATION = Student-Number
Class-Number
SCHEDULE = Instructor-Number
Class-Number
ATTENDANCE = Class-Number
Class-Name
Class-Credits
Class-Room
Class-Time
Class-Instructor
Class-Attendance (all students present/absent)
{Student-Number}
{Student-Name}
{Student-Level}
The above is the partial data dictionary list arrived at for the Student Registration system. This will then be converted into normalized data structures.
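As a hedged illustration of what "converted into normalized data structures" can mean in practice, the sketch below (Python; the sample keys and values are invented for illustration and are not part of the case study) shows the many-to-many relationship between STUDENT and COURSES OFFERED resolved through the associative REGISTRATION entity, so that class rolls and student schedules are derived by key lookups rather than duplicated data.

    # Illustrative only: normalized structures for part of the Student Registration model.
    # The many-to-many STUDENT <-> COURSES OFFERED relationship is resolved by the
    # associative REGISTRATION entity holding (Student-Number, Class-Number) pairs.
    courses = {                      # COURSES OFFERED, keyed by Class-Number
        "CS101": {"name": "Intro to Computing", "credits": 3, "instructor": "I-07"},
        "AC201": {"name": "Cost Accounting",    "credits": 4, "instructor": "I-02"},
    }
    students = {                     # STUDENT, keyed by Student-Number
        "S-1001": {"name": "Anita", "level": "Year 1"},
        "S-1002": {"name": "Binod", "level": "Year 2"},
    }
    registration = [                 # REGISTRATION: one row per student-course pairing
        ("S-1001", "CS101"),
        ("S-1001", "AC201"),
        ("S-1002", "CS101"),
    ]

    def courses_for(student_number):
        """All courses a given student has registered for."""
        return [courses[c]["name"] for s, c in registration if s == student_number]

    def roll_for(class_number):
        """Class roll: all students registered for a given course."""
        return [students[s]["name"] for s, c in registration if c == class_number]

    print(courses_for("S-1001"))   # ['Intro to Computing', 'Cost Accounting']
    print(roll_for("CS101"))       # ['Anita', 'Binod']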

5.3 Strategies for System Design


There are many strategies or techniques for performing systems design. They include modern structured analysis, information engineering, prototyping, JAD, RAD, and object-oriented design. These strategies are often viewed as competing alternative approaches to systems design; in reality, certain combinations complement one another. Let's briefly examine these strategies and the scope or goals of the projects to which they are suited. The intent is to develop a high-level understanding only. The subsequent chapters in this unit will actually teach you the techniques.
Structured design techniques help developers deal with the size and complexity of programs.

Modern structured design is a process-oriented technique for breaking up a large program into a hierarchy of modules that results in a computer program that is easier to implement and maintain (change). Synonyms (although technically inaccurate) are top-down program design and structured programming.
The concept is simple. Design a program as a top-down hierarchy of modules. A module is a group of instructions: a paragraph, block, subprogram, or subroutine. The top-down structure of these modules is developed according to various design rules and guidelines. (Thus, merely drawing a hierarchy or structure chart for a program is not structured design.)
Structured design is considered a process technique because its emphasis is on the PROCESS building blocks in our information system, specifically software processes. Structured design seeks to factor a program into a top-down hierarchy of modules that have the following properties (a brief sketch follows the list):
• Modules should be highly cohesive; that is, each module should accomplish one and only one
function. Theoretically this makes the modules reusable in future programs.
• Modules should be loosely coupled; in other words, modules should be minimally dependent on
one another. This minimizes the effect that future changes in one module will have on other
modules.
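The following minimal Python sketch (illustrative only; the module names and the flat 15 percent tax rate are assumptions, not a prescribed design) conveys the flavor of such a hierarchy: a control module invokes subordinate modules, each doing exactly one job (high cohesion) and communicating only through parameters and return values (loose coupling).

    # Illustrative sketch of a top-down module hierarchy with cohesive,
    # loosely coupled modules. Each function does one job and shares data
    # only through parameters and return values, never through globals.

    def read_gross_pay(record):
        """Cohesive: only extracts the gross pay figure from an input record."""
        return float(record["gross_pay"])

    def compute_tax(gross_pay, rate=0.15):
        """Cohesive: only computes tax; knows nothing about input or output formats."""
        return round(gross_pay * rate, 2)

    def format_pay_slip(employee, gross_pay, tax):
        """Cohesive: only formats the result for output."""
        return f"{employee}: gross {gross_pay:.2f}, tax {tax:.2f}, net {gross_pay - tax:.2f}"

    def produce_pay_slip(record):
        """Top (control) module: coordinates the subordinate modules."""
        gross = read_gross_pay(record)
        tax = compute_tax(gross)
        return format_pay_slip(record["employee"], gross, tax)

    print(produce_pay_slip({"employee": "E-101", "gross_pay": "42000"}))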
The software model derived from structured design is called a structure chart (Figure 5-15). The
structure chart is derived by studying the flow of data through the program. Structured design is
performed during systems design. It does not address all aspects of design; for instance, structured
design will not help you design inputs, databases, or files.
Structured design has lost some of its popularity with many of today's applications that call for newer
techniques that focus on event-driven and object-oriented programming techniques. However, it is still
a popular technique for the design of mainframe-based application software and to address coupling and
cohesion issues at the system level.
To conclude:
• Adoption of Agile and DevOps Practices: In the era of Agile and DevOps, the importance of modular, iterative design and development has increased. Although structured design may not directly cater to these methodologies, its principles of modular, top-down design can still be relevant.
• Event-Driven and Object-Oriented Programming: Modern applications often utilize event-driven and object-oriented programming paradigms. While structured design has lost some popularity due to this shift, understanding structured design can provide a useful foundation for understanding these newer paradigms.
• Microservices Architecture: In recent years, the microservices architectural style, which structures an application as a collection of loosely coupled, highly cohesive services, has gained popularity. The principles of loose coupling and high cohesion advocated by structured design are relevant in this context as well.
• Legacy Systems: Many legacy systems built with older languages such as COBOL and Fortran were designed using structured design principles. Maintaining and modernizing these systems often requires an understanding of structured design.


Fig 5-15 The end product of structured design


Information Engineering (IE) is a data-centered technique. IE involves conducting a business area
requirements analysis from which information system applications are carved out and prioritized.
The applications identified in IE become projects to which other systems analysis and design
methods are intended to be applied to develop the production systems. These methods may
include some combination of modern structured analysis, modern structured design, prototyping,
and object-oriented analysis and design.
Prototyping: Traditionally, physical design has been a paper-and-pencil process. Analysts drew
pictures that depicted the layout or structure of outputs, inputs, and files and the flow of dialogue and
procedures. This is a time-consuming process that is prone to considerable error and omissions.
Frequently, the resulting paper specifications did not prove themselves inadequate, incomplete, or
inaccurate until programming started.
Today many analysts are turning to prototyping, a modern engineering-based approach to design. A prototype, according to Webster's dictionary, is "an original or model on which something is patterned" and/or "a first full-scale and usually functional form of a new type or design of a construction (as an airplane)." Engineers build prototypes of engines, machines, automobiles, and the like, before building the actual products. Prototyping allows engineers to isolate problems in both requirements and designs.

The prototyping approach is an iterative process involving a close working relationship between the
designer and the users. This approach has several advantages.
• Prototyping encourages and requires active end-user participation. This increases end-user morale and support for the project. End-user morale is enhanced because the system appears real to them.
• Iteration and change are natural consequences of systems development; that is, end-users tend to change their minds. Prototyping better fits this natural situation since it assumes that a prototype evolves, through iteration, into the required system.
• It has often been said that end-users don't fully know their requirements until they see them implemented. If so, prototyping endorses this philosophy.
• Prototypes are an active, not passive, model that end-users can see, touch, feel, and experience. Indeed, if a picture such as a DFD is worth a thousand words, then a working model of a system is worth a thousand pictures.
• An approved prototype is a working equivalent to a paper design specification, with one exception: errors can be detected much earlier.
• Prototyping can increase creativity because it allows for quicker user feedback, which can lead to better solutions.
• Prototyping accelerates several phases of the life cycle, possibly bypassing the programmer. In fact, prototyping consolidates parts of phases that normally occur one after the other.
There are also disadvantages or pitfalls to using the prototyping approach. Most of these can be summed up in one statement: Prototyping encourages ill-advised shortcuts through the life cycle. Fortunately, the following pitfalls can all be avoided through proper discipline.
• Prototyping encourages a return to the "code, implement, and repair" life cycle that used to
dominate information systems. As many companies have learned, systems developed in prototyping
languages can present the same maintenance problems that have plagued systems developed
in languages such as COBOL.
• Prototyping does not negate the need for the survey and study phases. A prototype can just as
easily solve the wrong problems and opportunities as a conventionally developed system.
• You cannot completely substitute any prototype for a paper specification. No engineer would
prototype an engine without some paper design. Yet many information systems professionals
try to prototype without a specification.
• Prototyping should be used to complement, not replace, other methodologies. The level of detail
required of the paper design may be reduced, but it is not eliminated. (In the next section, we'll
discuss just how much paper design is needed.)
• Numerous design issues are not addressed by prototyping. These issues can inadvertently be
forgotten if you are not careful.
• Prototyping often leads to premature commitment to a design.

• When prototyping, the scope and complexity of the system can quickly expand beyond
original plans. This can easily get out of control.
• Prototyping can reduce creativity in designs. The very nature of any implementation (for instance, a prototype of a report) can prevent analysts and end-users from looking for better solutions.
• Prototypes often suffer from slower performance than their third-generation language
counterparts.
Building prototypes makes so much sense that you may wonder why we didn't always do it. The reason
is simple: The technology wasn't available. Traditional languages such as COBOL, FORTRAN,
BASIC, Pascal, and C (often called third-generation languages) don't lend themselves to
prototyping. Prototypes must be developed and modified quickly, neither of which is possible with third-
generation languages. Consider the prospects of continually modifying the DATA and PROCEDURE
divisions of a COBOL program as end-users try to make up their mind what they want and how it
should look.
Fourth-generation languages (4GLs), applications generators (AGs), and some object-oriented
programming languages (OOPLs) are software tools that make building systems a simpler task. They
are less procedural than traditional languages.
This means the tools specify more of what the system is or what it should do, and less of how to do it.
In other words, they are not as dependent on specification of logic. Finally, it should also be noted
that many computer-assisted systems engineering (CASE) products also contain limited prototyping
tools for designing screens and reports.
Prototypes can be quickly developed using many of the 4GLs and object-oriented programming
languages available today. Prototypes can be built for simple outputs, computer dialogues, key functions,
entire subsystems, or even the entire system. Each prototype system is reviewed by end-users and
management, who make recommendations about requirements, methods, and formats. The
prototype is then corrected, enhanced, or refined to reflect the new requirements. Prototyping
technology makes such revisions in a relatively straightforward manner. The revision and
review process continues until the prototype is accepted. At that point, the end-users are
accepting both the requirements and the design that fulfills those requirements.
Design by prototyping doesn't necessarily fulfill all design requirements. For instance, prototypes
don't always address important performance issues and storage constraints. Prototypes rarely
incorporate internal controls. These must still be specified by the analyst.
Joint application development (JAD) was introduced as a technique that complements other systems
analysis and design techniques by emphasizing participative development among system owners, users,
designers, and builders. Thus, JAD is frequently used in conjunction with the above design
techniques. During the JAD sessions for systems design, the systems designer will take on the role of
facilitator for possibly several full-day workshops intended to address different design issues and
deliverables. It is a technique that allows the development, management, and customer groups to work
together to build a product (Alan Cline). IBM developed the JAD technique in the late 1970s, and this method is still regarded as one of the best approaches for collecting requirements from the users, customers, or customer advocates.
Another popular design strategy used today is rapid application development.
Rapid application development (RAD)
Rapid Application Development (RAD) is a developmental model that emphasizes a quick and iterative
process to develop applications. RAD combines various structured techniques, with a focus on data-driven
information engineering, prototyping, and Joint Application Development (JAD) to expedite the system
development process.
The RAD process involves an interactive use of structured techniques and prototyping to define user
requirements and design the final system. This process typically begins with the construction of preliminary
data and process models to represent business requirements. These models serve as a foundation for
creating prototypes, which allow analysts and users to validate and refine requirements.
This iterative cycle of model development, prototyping, and refinement continues until a comprehensive
business requirements and technical design document is produced. This document serves as a roadmap for
the construction of the new system.
RAD is a popular approach for developing software applications, particularly in scenarios that demand
quick delivery, frequent updates, or where requirements are expected to change frequently. However, it's
worth noting that the suitability of RAD or any development approach depends on the specific context
and requirements of the project.
Object-oriented design is the newest up-and-coming design strategy. This technique is an extension of the object-oriented analysis strategy. Recall that object technologies and techniques are an attempt to eliminate the separation of concerns about DATA and PROCESS. Object-oriented design (OOD) techniques are used to refine the object requirements definitions identified earlier during analysis and to define design-specific objects.
For example, based on a design implementation decision, during OOD the designer may need to revise
the data or process characteristics for an object that was defined during systems analysis. Likewise, a
design implementation decision may necessitate that the designer define a new set of objects that
will make up an interface screen that the users may interact with in the new system.

5.4 Input Design


"Garbage in! Garbage out!" This overworked expression is no less true today than it was when we first
studied computer programming. Management and users make important decisions based on system
outputs. These outputs are produced from data that are either input or retrieved from databases. And data
in databases must have been input first. In this chapter, you are going to learn how to design computer
inputs. Input design serves an important goal: to capture the data and get it into a format suitable for the computer. Data constitute one of the fundamental building blocks for information systems.
One of the first things you must learn is the difference between data capture and data input. Alternative
input media and methods must also be understood before designing the inputs. And because accurate
data input is so critical to successful processing, file maintenance, and output, you should also learn
about human factors and internal controls for input design. After learning these fundamental concepts,
we will study the tools and techniques of input design and prototyping.

When you think of "input," you usually think of input devices, such as keyboards and mice. But input
begins long before the data arrive at the device. To actually input business data into a computer, the
analyst may have to design source documents, input screens, and methods and procedures for getting
the data into the computer (from customer to form to data entry clerk to disk to computer).
This brings us to our fundamental question. What is the difference among data capture, data entry,
and data input? Data happens! It accompanies business events called transactions. Examples
include orders, time cards, reservations, and the like. We must determine when and how to capture
the data.
Data capture is the identification of new data to be input.
When is easy! It's always best to capture the data as soon as possible after it is originated. How
is another story! Traditionally, special paper forms called source documents were used.
A source document is a paper form used to record data that will eventually be input to a computer.
With advances in video display technology, screen display forms can duplicate the appearance of almost
any paper-based form. Most applications data capture involves the use of source documents and screen
display forms. Their design is not easy. Screen display forms and source documents must be designed
to be easy for the system user to complete and should facilitate rapid data entry.
Data entry is not the same as data capture.
Data entry is the process of translating the source document into a machine readable format. That format
may be a magnetic disk, an optical-mark form, a magnetic tape, or a floppy diskette, to name a few.
Once data entry has been performed, we are ready for data input.
Data input is the actual entry of data in a machine-readable format into the computer.
Let's examine some data capture and data entry issues you should consider during systems design.
The systems analyst usually selects the method and medium for all inputs. Input methods can be broadly
classified as either batch or on-line.
Batch and Online Input Methods
Batch input is the oldest and most traditional input method. Source documents or forms are collected
and then periodically forwarded to data entry operators, who key the data using a data entry device that
translates the data into a machine-readable format.
Traditional media for batch input data included key-to-disk (KTD) and key-to-tape (KTT) workstations
that transcribe data to magnetic disks and magnetic tape, respectively. The data can be corrected,
because they are initially placed into a buffer.
Figures 5-16(a) and 5-16(b) illustrate the key-to-tape and key-to-disk input procedures, respectively.
We have distinguished the data capture activities, the data entry activities, and the data input activities
discussed in the previous section.
Today, most, but not all, systems have been converted or are being converted to on-line methods.


Fig 5-16 Input Methods And Media


On-line input is the capture of data at its point of origin in the business and the direct inputting of that
data to the computer, preferably as soon as possible after the data originates.
The most common on-line medium cannot really be classified as a medium at all: it is the display terminal, or microcomputer display monitor [see Figure 5-16(c)]. The on-line system includes a monitor screen
and keyboard that are directly connected to a computer system. The system user directly enters the data
when or soon after that data originates. No data entry clerks are needed! There is no need to record data
onto a medium that is later input to the computer; this input is direct! If data is entered incorrectly,
the computer's edit program detects the error and immediately requests that the cathode ray tube
(CRT) operator make a correction.
Most new applications being developed today consist of screens having a "graphical" looking
appearance. This type of appearance is referred to as a graphical user interface (GUI). You are likely
familiar with Microsoft Windows-based applications, which have a graphical interface. This chapter
will introduce issues and techniques for designing on-line inputs for a system that will consist of a
graphical user interface.
Now that you understand batch versus on-line, let's address the issue of whether all systems should be designed for on-line input. Technology to support on-line applications is cheaper than it used to be. So why bother with batch input?

No matter how cheap and fast on-line processing gets, an on-line program cannot be nearly as fast as
its batch equivalent. Many (but not all) on-line programs require some human interaction, and people
are slow, relative to computers. Also, for large-volume transactions, too many CRT terminals and
operators may be needed to meet demand. As the number of on-line CRTs grows, the overall
performance of the computer declines. Furthermore, many inputs naturally occur in batches. For
instance, our mail may include a large batch of customer payments on any given day. Postal delivery
is, at least today, a batch operation. Additionally, some input data may not require immediate attention.
Finally, batch processing may be preferable because internal controls (discussed shortly) are simpler.
So you see, batch inputs can still be justified.
But there is a compromise solution, the remote batch.
Remote batch offers on-line advantages for data that is best processed in batches. The data is input on-
line with on-line editing. Microcomputers or minicomputer systems can be used to handle this on-line
input and editing. The data is not immediately processed. Instead, it is batched, usually to some
type of magnetic media. At an appropriate time, the data is uploaded to the main computer, merged,
and subsequently processed as a batch. Remote batch is also called deferred batch or deferred
processing.
With the advancement in today's technology, data input has become more sophisticated. We can
eliminate much (and sometimes all) human intervention associated with the input methods
discussed in the previous section. By eliminating human intervention we can decrease the time delay
and errors associated with human interaction. This opportunity is especially important to businesses
operating in today's globally competitive environment!
A number of alternative automatic data collection (ADC) technologies are available today and finding
their way into batch and on-line applications. Some of these are presented in the sections that follow
(Dunlap, 1995).
Biometric ADC technology is based on unique human characteristics or traits. For example, individuals
can be identified by their own unique finger-print, voice pattern, or pattern of certain veins (retina or
wrist). Biometric ADC systems consist of sensors that capture an individual's characteristic or trait,
digitize the image pattern, and then compare the image to stored patterns for identification. Biometric
ADC is popular because it offers the most accurate and reliable means for identification. This
technology is particularly popular for systems that require security access.
Electromagnetic ADC technology is based on the use of radio frequency to identify physical objects.
This technology involves attaching a tag and antenna to the physical object that is to be tracked. The
tag contains memory that is used to identify the object being tracked. The tag can be read by a reader
whenever the object resides within the electromagnetic field generated by the reader. This identification
technology is becoming very popular in applications that involve tracking physical objects that are
out of sight and on the move. For example, electromagnetic ADC is being used for public
transportation tracking and control, tracking manufactured products, and tracking animals, to name
a few.
Magnetic ADC technology is one you will likely recognize. It usually involves using magnetic stripe
cards, but it also may include the use of magnetic ink character recognition (MICR). Over
1 billion magnetic stripe cards are in use today! They have found their way into a number of
business applications, such as credit card transactions, building security access control, and employee
attendance tracking. MICR is most widely used in the banking industry.
Optical You have likely encountered an example of optical technology almost every day: bar coding.
Point-of-sale terminals in retail and grocery stores frequently include bar code and optical-
character readers. Everyone has seen the bar codes recorded on today's grocery products. These bar
codes eliminate the need for keying data, either by data entry clerks or end-users. Instead, sophisticated
laser readers read the bar code and send the data represented by that code directly to the computer for
processing. Frequently items are encountered in which a bar code can't physically be attached. This
is typically overcome by providing the data entry clerk with a poster sheet containing a picture and
accompanying bar code of those items. The clerk simply scans the bar code of the appropriate picture
appearing on the sheet.
Another optical ADC alternative is the optical-mark form. You may have encountered this medium in
machine-scored tests. Optical-mark forms eliminate most or all of the need for data entry. Essentially,
the source document becomes the input medium and is directly read by an optical-mark reader (OMR)
or optical-character reader (OCR). The computer records the data to magnetic tape, which is then input
to the computer. OCR and OMR input are generally suitable only for high-volume input activities. By
having data directly recorded on a machine-readable document, the cost of data entry is eliminated.
This technology is commonly used for applications involving surveys, questionnaires, or testing.
Smart Cards
Smart cards, slightly thicker yet similar in size to credit cards, embody a significant technological
advancement due to their ability to store a large amount of information. Their distinguishing feature is
the embedded microprocessor, memory circuits, and sometimes even a battery, effectively making them
miniature computers on a card.
Although smart card technology is just beginning to gain traction in certain countries like the United
States, it's already a daily necessity for over 60% of the population in countries such as France. One of
the most promising applications of smart cards is in the healthcare sector, where they can store vital
information like blood type, vaccination records, and other medical history, ensuring ready access when
needed.
Beyond healthcare, smart cards have a broad range of potential applications. They can serve as
electronic passports, store financial information for point-of-sale transactions, facilitate pay-television
subscriptions, and more.
Touch Touch-based ADC systems include touch screens, buttons, and pen-based computing technology.
In particular, touch screen technology has been very popular in restaurant or point- of-sale business
applications. Recently, manufacturing companies have begun to use touch screens throughout the manufacturing shop floor as a means to capture data pertaining to such things as work orders, machine
setup, material requisitions, employee attendance, and scheduling. Pen- based computing is popular for
applications that require handwriting recognition. You may have experienced this technology for
capturing data when you were asked to sign for a special delivery package.

Technology will continue to evolve. It is the systems analyst's responsibility to be aware of trends
in new technology to enhance data capture and input. With an ever-increasing emphasis on helping
companies gain a competitive advantage, those analysts that continue to grow professionally by keeping
abreast of technological advances in the area of data capture and input will certainly enhance their
careers.
System User Issues
Because inputs originate with system users, human factors play a significant role in input design. Inputs
should be as simple as possible and designed to reduce the possibility of incorrect data being entered.
Furthermore, the needs of data entry clerks must also be considered. With this in mind, several human factors should be evaluated.
The volume of data to be input should be minimized. The more data that are input, the greater the
potential number of input errors and the longer it takes to input that data. Thus, numerous considerations
should be given to the data that are captured for input. The following general principles should be followed for input design (a brief sketch follows the list):
• Capture only variable data. Do not enter constant data. For instance, when deciding what elements to include in a SALES ORDER input, we need PART NUMBERS for all parts ordered. However, do we need to input PART DESCRIPTIONS for those parts? PART DESCRIPTION is probably stored in a database table. If we input PART NUMBER, we can look up PART DESCRIPTION. Permanent (or semi-permanent) data should be stored in the database. Of course, inputs must be designed for maintaining those database tables.
• Do not capture data that can be calculated or stored in computer programs. For example, if you
input QUANTITY ORDERED and PRICE, you don't need to input EXTENDED PRICE,
which is equal to QUANTITY ORDERED X PRICE. Another example is incorporating FEDERAL
TAX WITHHOLDING data in tables (arrays) instead of keying in that data every time.
• Use codes for appropriate attributes. Codes were introduced earlier. Codes can be translated in
computer programs by using tables.
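A minimal sketch of the first two principles, in Python (the part table, part numbers, and prices are invented for illustration): only the variable data, PART NUMBER and QUANTITY ORDERED, is keyed; PART DESCRIPTION is looked up from stored data and EXTENDED PRICE is computed rather than input.

    # Illustrative only: capture variable data, look up constant data, compute derived data.
    part_table = {   # constant (semi-permanent) data maintained in the database, not re-keyed
        "P-1001": {"description": "Ball bearing", "price": 120.00},
        "P-2040": {"description": "Drive belt",   "price": 450.00},
    }

    def build_order_line(part_number, quantity_ordered):
        """Only PART NUMBER and QUANTITY ORDERED are input; the rest is derived."""
        part = part_table[part_number]                      # look up PART DESCRIPTION and PRICE
        extended_price = quantity_ordered * part["price"]   # computed, never keyed in
        return {"part_number": part_number,
                "description": part["description"],
                "quantity": quantity_ordered,
                "extended_price": extended_price}

    print(build_order_line("P-1001", 5))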
Second, if source documents are used to capture data they should be easy for system users to complete
and subsequently enter into the system. The following suggestions may help;
• Include instructions for completing the form. Also, remember that people don't like to have
to read instructions printed on the back side of a form.
• Minimize the amount of handwriting. Many people suffer from poor penmanship. The data
entry clerk or CRT operator may misread the data and input incorrect data. Use check boxes
wherever possible so the system user only needs to check the appropriate values.
• Data to be entered (keyed) should be sequenced so it can be read like this book, top to bottom
and left to right [see Figure 5-17(a)]. The data entry clerk should not have to move from right to
left on a line or jump around on the form [see Figure 5-17(b)] to find data items to be entered.
• Ideally, portions of the form that are not to be input are placed in or about the lower
portion of the source document (the last portion encountered when reading top to bottom
and left to right.) Alternatively, this information can be placed on the back of the form.


Fig 5-17 Keying From Source Documents


There are several other guidelines and issues specific to data input for GUI screen designs. Input
controls ensure that the data input to the computer is accurate and that the system is protected against
accidental and intentional errors and abuse, including fraud. The following internal control
guidelines are offered:
1. The number of inputs should be monitored. This is especially true with the batch method,
because source documents may be misplaced, lost, or skipped.
• In batch systems, data about each batch should be recorded on a batch control slip. Data include
BATCH NUMBER, NUMBER OF DOCUMENTS, and CONTROL TOTALS (e.g. total
number of line items on the documents). These totals can be compared with the output totals

on a report after processing has been completed. If the totals are not equal, the cause of the
discrepancy must be determined.
• In batch systems, an alternative control would be one-for-one checks. Each source
document would be matched against the corresponding historical report detail line that
confirms the document has been processed. This control check may be necessary only when
the batch control totals don't match.
• In on-line systems, each input transaction should be logged to a separate audit file so it can
be recovered and reprocessed in the event of a processing error or if data is lost.
2. Care must also be taken to ensure that the data is valid. Two types of errors can infiltrate the data: data entry errors and invalid data recorded by system users. Data entry errors include copying errors, transpositions (typing 132 as 123), and slides (keying 345.36 as 3453.6). The following techniques are widely used to validate data:
• Completeness checks determine whether all required fields on the input have actually been
entered.
• Limit and range checks determine whether the input data for each field falls within the legitimate set or range of values defined for that field. For instance, an upper-limit range may
be put on PAY RATE to ensure that no employee is paid at a higher rate.
• Combination checks determine whether a known relationship between two fields is valid.
For instance, if the VECHILE MAKE is Pontiac, then the VECHILE MODEL must be one
of a limited set of values that comprises cars manufactured by Pontiac (Firebird, Grand Prix,
and Bonneville to name a few).
• Self-checking digits determine data entry errors on primary keys. A check digit is a number or character that is appended to a primary key field. The check digit is calculated by applying a formula, such as Modulus 11, to the actual key (see Figure 5-18). The check digit verifies correct data entry in one of two ways. Some data entry devices
can automatically validate data by applying the same formula to the data as it is entered by the
data entry clerk. If the check digit entered doesn't match the check digit calculated, an error is
displayed. Alternatively, computer programs can also validate check digits by using readily
available subroutines.
• Picture checks compare data entered against the known COBOL picture or other language format defined for that data. For instance, the input field may have a picture clause XX999 AA (where X can be a letter or number, 9 must be a number, and A must be a letter). The field "A4898 DH" would pass the picture check, but the field "A489 ID8" would not.


Fig 5-18 Modulus 11-Self Checking Digit Technique


Data validation requires that special edit programs be written to perform checks. However, the input
validation requirements should be designed when the inputs themselves are designed.
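To suggest how such an edit program might look, here is a hedged Python sketch of three of the checks described above. The Modulus 11 routine follows one common weighting convention (weights 2, 3, 4, ... applied from the rightmost digit); the exact formula shown in Figure 5-18 may differ, so treat this as an illustration rather than the definitive algorithm.

    # Illustrative edit-program sketch: completeness, range, and check-digit validation.

    def completeness_check(record, required_fields):
        """Returns the list of required fields that are missing or blank."""
        return [f for f in required_fields if not str(record.get(f, "")).strip()]

    def range_check(value, low, high):
        """True if value falls within the legitimate range [low, high]."""
        return low <= value <= high

    def mod11_check_digit(key):
        """One common Modulus 11 variant: weight digits 2, 3, 4, ... from the right."""
        total = sum(int(d) * w for d, w in zip(reversed(key), range(2, 2 + len(key))))
        remainder = total % 11
        return 0 if remainder in (0, 1) else 11 - remainder

    def has_valid_check_digit(key_with_digit):
        """Recomputes the check digit and compares it with the last digit entered."""
        key, digit = key_with_digit[:-1], int(key_with_digit[-1])
        return mod11_check_digit(key) == digit

    record = {"employee_no": "1234", "pay_rate": 850.0}
    print(completeness_check(record, ["employee_no", "pay_rate", "department"]))  # ['department']
    print(range_check(record["pay_rate"], 0, 1000))            # upper-limit check on PAY RATE: True
    print(mod11_check_digit("1234"), has_valid_check_digit("12343"))   # 3 True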
GUI Issues for Input Design
As mentioned earlier, most new applications being developed today include a GUI. These types of
interfaces are rapidly replacing the more traditional text-based screen designs that characterized
mainframe-based applications. While GUI designs provide a more user-friendly interface, they also
present many more design issues that must be considered. This chapter will not attempt to address
all the GUI design issues; entire books have been written on the subject. Rather, this chapter will focus
on selecting the proper screen-based controls for entering data on a GUI screen. This approach is
influenced by a new trend in programming, called repository-based programming.
Consider the information entered by a Visual Basic developer in a repository for the physical data attribute frmAuthors. The developer can, in a single location, define most characteristics for a particular
data element. Once the developer defines this information, it can be used by multiple developers in an
organization. This repository-based approach guarantees that every instance of the attribute frmAuthors
will be used in a consistent manner. Furthermore, the dictionary entries can be changed if business
rules dictate and no additional changes to the applications will be required.
This section takes a similar approach to GUI input screen design. We will first learn about available
screen-based controls for inputting data. We address the purpose, advantages, disadvantages and
guidelines for each control. Given this understanding, we are then in a good position to make decisions
concerning which controls should be considered for each data attribute that will be input on our
screens.
Figure 5-19 contains each of the controls to be discussed.

Fig 5-19 Common Screen-Based Controls For Input Data


Text Box Perhaps the most common control used for input of data is the text box. A text box consists
of a rectangular shaped box that is usually accompanied by a caption. This control requires the user to
type the data inside the box. A text box can allow for single or multiple lines of data characters to be
entered. When a text box contains multiple lines of data, scrolling features are also normally
included.
When to Use Text Boxes for Input A text box is most appropriately used in those situations where
the input data values are unlimited in scope and the analyst is unable to provide the users with a
meaningful list of values from which they can select. For example, a single-line text box would be an
appropriate control for capturing a new customer's LAST NAME-since the possibilities for the
customers LAST NAME are virtually impossible to predetermine. A text box would also be appropriate
for capturing data about SHIPPING INSTRUCTIONS that describe a particular order that was placed
by a customer. Once again the possible values for SHIPPING INSTRUCTIONS are virtually
unlimited. In addition, the multiple-line text box would be appropriate due to the unpredictable length
of the SHIPPING INSTRUCTIONS. ln those cases where the text box is not large enough to view the
entire input data values, the text box may use scrolling and word-wrap features.
Suggested Guidelines for Using Text Boxes Numerous guidelines should be followed when using a
text box on an input screen. Let's first address the captions for text boxes. A text box should be
accompanied by a descriptive caption. To avoid possible confusion, the user should be provided with a
meaningful caption. Avoid using abbreviations for captions. Finally, only the first character of the
caption's text should be capitalized.
The location of the caption is also significant. The user should be able to clearly associate the caption
with the text box. Therefore, the caption should be located to the left of the actual text box or left-aligned immediately above the text box.
Finally, it is also generally accepted that the caption be followed by a colon to help the user visually
distinguish the caption from the box.
There are also several guidelines relating to the text box. Generally, the size of the text box should
be large enough for all characters of fixed-length input data to be entered and viewed by the user. When
the length of the data to be input is variable and could become quite long, the text box's scrolling and
word-wrapping features should be applied.
Radio buttons provide the user with an easy way to quickly identify and select a particular value from
a value set. A radio button consists of a small circle and an associated textual description that
corresponds to the value choice. The circle is located to the left of the textual description of the value
choice. Radio buttons normally appear in groups-a radio button per value choice. When a user selects
the appropriate choice from the value set, the circle corresponding to that choice is partially filled to
indicate it has been selected. When a choice is selected, any default or previously selected
choice's circle is deselected. Radio buttons also offer the advantage of allowing the user the
flexibility of selecting via the keyboard or mouse.
When to Use Radio Buttons for Input Radio buttons are most appropriately used in those cases where
a user may be expected to input data that have a limited predefined set of mutually exclusive values.
For example, a user may be asked to input an ORDER TYPE and GENDER. Each of these has a
limited, predefined, mutually exclusive set of valid values. For example, when the users are to input
an ORDER TYPE, they might be expected to indicate one and only one value from the value set "regular
order," "rush order," or "standing order." For GENDER, the user would be expected to indicate one and
only one value from the set "female," "male," or "unknown."
Suggested Guidelines for Using Radio Buttons There are several guidelines to consider when using
radio buttons as a means for data input. First, radio buttons should present the alternatives vertically
aligned and left-justified to aid the user in browsing. If necessary, the choices can be presented
horizontally, but adequate spacing should be used to help visually distinguish the
choices. Also, the group of choices should be visually grouped to set them off from other input
controls appearing on the screen. The grouping should also contain an appropriate meaningful caption.
For example, radio buttons for male, female, and unknown might be vertically aligned and left-justified,
with the heading/caption "Gender" left-justified above the set.
The sequencing of the choices should also be given consideration. The larger the number of choices, the
more thought should be given to how easily the user can scan and identify them. For example, in
some cases it may be more natural for the user to locate choices that are presented in alphabetical
order. In other cases, the frequency with which a value is selected may determine where it
is located in the set of choices.
Finally, it is not recommended that radio buttons be used to select the value for an input data item whose
value is simply Yes/No (or an On/Off state). Instead, a check box control should be considered.
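As a brief illustration of the radio button guidelines, here is a sketch, again using the browser DOM API in TypeScript, of a vertically aligned radio group with a group caption. The GENDER field and its three values are taken from the example above; the function and element structure are illustrative assumptions only.

```typescript
// Sketch of a radio button group: one button per mutually exclusive value,
// vertically aligned, with a group heading/caption ("Gender" in this example).
function createRadioGroup(groupName: string, caption: string, choices: string[]): HTMLElement {
  const group = document.createElement("fieldset");   // visually groups the choices
  const legend = document.createElement("legend");
  legend.textContent = caption;                        // group caption, e.g. "Gender"
  group.appendChild(legend);

  for (const choice of choices) {
    const row = document.createElement("div");         // one row per choice = vertical alignment
    const button = document.createElement("input");
    button.type = "radio";
    button.name = groupName;                           // same name -> mutually exclusive choices
    button.value = choice;
    const label = document.createElement("label");
    label.textContent = choice;                        // textual description to the right of the circle
    row.appendChild(button);
    row.appendChild(label);
    group.appendChild(row);
  }
  return group;
}

document.body.appendChild(createRadioGroup("gender", "Gender", ["Female", "Male", "Unknown"]));
```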
Check Box As with text boxes and radio buttons, a check box also consists of two parts. It consists of
a square box followed by a textual description of the input field for which the user is to provide the
Yes/No value. Check boxes provide the user the flexibility of selecting the value via the keyboard or

mouse. An input data field whose value is Yes is represented by a square that is filled with a check mark.
The absence of a check mark means the input field's value is No. The user simply toggles the input field's
value from one value/state to the other as desired.
When to Use Check Boxes for Input Often a user needs to input a data field whose value set consists
of a simple yes or no value. For example, a user may be asked for a Yes/No value for such items as
the following input data: CREDIT APPROVED? SENIOR CITIZEN? HAVE YOU EVER BEEN
CONVICTED OF FRAUD? and MAY WE CONTACT YOUR PREVIOUS EMPLOYER? In each
situation a check box control could be used. A check box control offers a visual and intuitive means for
the user to input such data.
The previous example represented a simplified scenario for the use of a standalone check box. Often
on a single input screen it may be desirable to ask a user to enter values for a number of related input
fields having a Yes/No value. For example, a receptionist at a health clinic may be entering data from
a completed patient form. On a section of that form, the patient may have been asked about a
number of illnesses. They may have been asked about their past medical history and instructed to "check
all that apply" from a list of various types of illnesses. If properly designed, the receptionist's input
screen would represent each illness as a separate input field using a check box control. The controls
would be physically associated into a group on the screen. The group would also be given an
appropriate Heading/caption. Recognize that even though the check boxes may be visually grouped on
the screen, each check box operates as a separate independent input field.
Suggested Guidelines for Using Check Boxes Here are some recommended guidelines for using
check box controls. Once again, make sure the textual description is meaningful to the user. Look
for opportunities to group check boxes for related Yes/No input fields and provide a descriptive group
heading.
To aid the user in browsing and selecting from a group of check boxes, arrange the group of check box
controls so that they are aligned vertically and left-justified. If necessary, align them horizontally and be sure
to leave adequate space to visually separate the controls from one another. Finally, provide further
assistance to the user by appropriately sequencing the input fields according to their textual
description. In most cases, where the number of check box controls is large, the sequencing should be
alphabetical. In those cases where the text description describes dollar ranges or some other
measurement, the sequencing may be according to the numerical order. Still, in other cases such as
those where a very limited number of controls are grouped, the basis for sequencing may be according
to the frequency that a given input data field's Yes/No value is selected. (All input data fields
represented using a check box have a default value.)
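Following the same pattern, here is a compact TypeScript sketch of a group of independent check boxes for the "check all that apply" illness example, each operating as a separate Yes/No field. The illness list, group heading, and alphabetical sequencing rule are assumptions made for illustration.

```typescript
// Sketch of a check box group: each box is an independent Yes/No input field,
// visually grouped under one heading ("Past medical history" in this example).
function createCheckBoxGroup(caption: string, fields: string[]): HTMLElement {
  const group = document.createElement("fieldset");
  const legend = document.createElement("legend");
  legend.textContent = caption;
  group.appendChild(legend);

  for (const field of fields.sort()) {            // alphabetical sequencing for a longer list
    const row = document.createElement("div");    // vertical, left-justified alignment
    const box = document.createElement("input");
    box.type = "checkbox";                        // default value is "No" (unchecked)
    box.name = field;
    const label = document.createElement("label");
    label.textContent = field;
    row.appendChild(box);
    row.appendChild(label);
    group.appendChild(row);
  }
  return group;
}

document.body.appendChild(createCheckBoxGroup("Past medical history", ["Asthma", "Diabetes", "Hypertension"]));
```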
List Box A list box is a control that requires the user to select a data item's value from a list of possible
choices. The list box is rectangular and contains one or more rows of possible data values. The values
may appear as either a textual description or a graphical representation. List boxes having a large number
of possible values may include scroll bars to navigate through the rows of choices.
It is also common for a list box's row to contain more than one column. For example, a list box could
simply contain rows having a single column of permissible values for an input data item called JOB
CODE. However, it may be asking too much to expect the user to recognize what each job code
actually represents. In this case, to place the values of JOB CODE into a meaningful perspective, the
list box could include a second column containing the corresponding JOB TITLE for each job code.

When to Use List Boxes for Input How does one choose between a radio button and a list box control?
Both controls are useful in ensuring that the user enters the correct value for a data item. Both are also
appropriate when it is desirable to have the value choices constantly visible to the user.
The decision is normally driven by the number of possible values for the data item and the amount
of screen space that is available for the control. Scrolling capabilities make list boxes appropriate for
use in those cases where there is limited screen space available and the input data item has a large
number of predefined, mutually exclusive set of values from which to choose.
Suggested Guidelines for Using List Boxes There are several guidelines to consider when using
a list box as a means for data input. A list box should be accompanied by a descriptive caption. Avoid
using abbreviations for captions and capitalize only the first character of the caption's text. Finally, it
is also generally accepted that the caption be followed by a colon to help the user visually distinguish
the caption from the box.
The location of the caption is also significant. The user should be able to clearly associate the caption
with the list box. Therefore, the caption should appear left-justified immediately above the actual list
box.
There are also several guidelines relating to the list box. First, it is recommended that a list box contain
a highlighted default value. Second, consider the size of the list box. Generally, the width of the
list box should be large enough for most characters of fixed-length input data to be entered and viewed
by the user. The length of the box should allow for at least three choices and be limited to about seven
choices. In both cases, scrolling features should be implemented to indicate to the user that additional
choices are available.
When graphical representations are used for value choices, it is important to ensure that the graphics are
meaningful and accurately represent the choices. If textual descriptions are used, they should employ
mixed-case letters and have meaningful descriptions. It is crucial to make these decisions based on the
perspective and opinions of the user.
Consideration should also be given to the ease with which users can scan and identify the choices in the
list box. The list of choices should be left-justified to facilitate browsing. Additionally, involving the user
in determining the order of choices can be beneficial. In some cases, listing choices alphabetically may
be natural for users, while in other cases, the frequency of selecting a value may dictate its position in the
list.
Drop-Down List A drop-down list is another control that requires the user to select a data item's value from a list of
possible choices. A drop-down list consists of a rectangular selection field with a small button connected
to its side. The small button contains the image of a downward-pointing arrow and bar. This button
is intended to suggest to the user the existence of a hidden list of possible values for a data item.
When requested, the hidden list appears to "drop or pull down" beneath the selection field to reveal
itself to the user. The revealed list has characteristics similar to the list box control mentioned in the
previous section. When the user selects a value from the list of choices, the selected value is displayed
in the selection field and the list of choices once again becomes hidden from the user.
When to Use Drop-Down Lists for Input A drop-down list should be used in those cases where the
data item has a large number of predefined values and limited screen space prohibits the use of a
list box. One disadvantage of a drop-down list is that it requires extra steps by the user in comparison
to the previously mentioned controls.
Suggested Guidelines for Drop-Down Lists Many of the guidelines for using list boxes directly apply
to drop-down lists. One exception is the placement of the caption. The caption for a drop- down list is
generally either left-aligned immediately above the selection field portion of the control or located to
the left of the control.
Combination (Combo) Box A combination box, often simply called a combo box, is a control whose
name reflects the fact that it combines the capabilities of a text box and list box. A combo box gives the
user the flexibility of entering a data item's value (as with a text box) or selecting its value from a list
(as with a list box).
At first glance, a combo box closely resembles a drop-down list control. Unlike the drop-down list
control, however, the rectangular box can serve as an entry field for the user to directly enter a data
item's value. Once the small button is selected, a hidden list is revealed. The revealed list appears slightly
indented beneath the rectangular entry field.
When the user selects a value from the list of choices, the selected value is displayed in the entry field
and the list of choices once again becomes hidden from the user.
When to Use Combo Boxes for Input A combo box is most appropriately used in those cases
where limited screen space is available and it is desirable to provide the user with the option of
selecting a value from a list or typing a value that may or may not appear as an option in the list.
Suggested Guidelines for Combo Boxes The same guidelines for using drop-down lists directly apply
to combo boxes.
Spin (Spinner) Box A spin box is a screen-based control that consists of a single-line text box followed
immediately by two small buttons. The two buttons are vertically aligned. The top button has an
arrow pointing upward and the bottom button has an arrow pointing down. This control allows the user
to enter data directly into the associated text box or to select a value by using the mouse to scroll (or
"spin") through a list of values using the buttons. The buttons have a unit of measure associated with
them. When the user clicks on one of the arrow buttons, a value will appear in the text box. The
value in the text box is manipulated by clicking on the arrow buttons. The upward pointing button
will increase the value in the text box by a unit of measure, whereas the downward pointing button will
decrease the value in the text box by the same unit of measure.
When to Use Spin Boxes for Input A spin box is most appropriately used to allow the user to make
an input selection by using the buttons to navigate through a small set of meaningful choices or
by directly keying the data value into the textbox. The data values for a spin box should be capable
of being sequenced in a predictable manner.
Suggested Guidelines for Spin Boxes Spin boxes should contain a label or caption that clearly
identifies the input data item. This label should be located to the left of the text box or left-aligned
immediately above the text box portion of the control. Finally, spin boxes should always contain a
default value in the text box portion of the control.
A few newer techniques that have recently been incorporated into input design are described below.
Speech Recognition and Voice Input: Speech recognition technology allows users to interact with a
system through spoken words. This input technology has recently appeared in many devices, either as
a separate application or built in, for example, Siri and Google Assistant. It enables seamless interaction
between users and devices through spoken commands and is an efficient method for hands-free
operation, making it especially useful while driving or for individuals with mobility challenges.
Gesture Recognition: Gesture recognition has transformed the way technology is used by allowing
gestures, facial expressions, and hand movements to serve as communication commands. Cameras,
sensors, and other devices interpret these gestures to perform specific actions. It has recently been
incorporated into mobile phones, gaming sensors, and smart home devices, offering a richer user
experience.
AI-Assisted Input and Predictive Input: AI-powered input design has revolutionized the way users
interact with devices. Auto-suggestions, autocomplete, and predictive typing are AI-driven features that
anticipate user input based on context, history, and user behavior. This technology enhances efficiency
and reduces typing efforts on mobile devices and computers, improving overall user productivity.
Natural Language Processing (NLP): Natural Language Processing (NLP) has paved the way for
users to interact with systems using everyday language. Through NLP techniques, users can issue voice
commands, ask questions, or give complex instructions, and the systems can understand and respond
appropriately. This integration of NLP in voice interfaces and virtual assistants has significantly
enhanced the user experience and made technology more accessible to a broader audience.
Brain-Computer Interfaces (BCIs): Brain-Computer Interfaces (BCIs) hold immense potential for
transforming input methods, especially for individuals with disabilities. Although still in the research and
development stage, BCIs establish direct communication between the brain and external devices, allowing
users to control technology through their thoughts. This technology offers hope for people with motor
impairments, providing them with greater independence and access to digital interfaces.
Input Validation and Error Handling:
Input validation is crucial to ensure the accuracy and security of data. Various techniques, such as data
type validation, range checks, and format validation, help prevent erroneous data entry and protect
against security threats like SQL injection. Implementing robust error handling mechanisms ensures
that users receive informative feedback when they input incorrect or incomplete data, enhancing the
overall user experience.
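As a brief illustration of these techniques, the sketch below (TypeScript) shows hypothetical data type, range, and format validators that return informative error messages rather than failing silently. The field rules shown (a quantity between 1 and 999, a simple e-mail pattern) are assumptions for illustration only; protection against threats such as SQL injection additionally relies on measures like parameterized queries, not on input filtering alone.

```typescript
// Minimal sketch of input validation with informative error handling.
interface ValidationResult {
  valid: boolean;
  message: string;
}

// Data type and range check: quantity must be a whole number within limits.
function validateQuantity(raw: string): ValidationResult {
  const value = Number(raw);
  if (!Number.isInteger(value)) {
    return { valid: false, message: "Quantity must be a whole number." };
  }
  if (value < 1 || value > 999) {
    return { valid: false, message: "Quantity must be between 1 and 999." };
  }
  return { valid: true, message: "" };
}

// Format check: a simple e-mail pattern (real systems often use stricter rules).
function validateEmail(raw: string): ValidationResult {
  const pattern = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  return pattern.test(raw)
    ? { valid: true, message: "" }
    : { valid: false, message: "Please enter a valid e-mail address." };
}

console.log(validateQuantity("12"));          // { valid: true, message: "" }
console.log(validateEmail("not-an-email"));   // informative feedback to the user
```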
Accessibility and Inclusive Design:
Input design must prioritize accessibility and inclusivity. Adhering to accessibility standards ensures
that all users, including those with disabilities, can effectively interact with technology. This involves
considerations such as keyboard accessibility for individuals with motor impairments and voice
interfaces for the visually impaired. Emphasizing inclusive design principles allows technology to be
more welcoming and functional for a diverse user base.
Multi-Modal Input:
The concept of multi-modal input combines various input methods, enabling users to interact with
systems using multiple approaches simultaneously. For instance, users may employ touch and voice
commands together or use gestures alongside traditional input methods. Multi-modal input enhances user
flexibility and convenience, providing a more seamless and personalized experience.
That completes our discussion of input controls for designing GUI input screens. Many more controls are
available for designing graphical user interfaces; the ones above are simply the most common controls for
capturing input data. You should make yourself familiar with the others and their proper usage for
inputting data. In later chapters you will be exposed to several other controls used for other purposes.
Keep on top of developments in the area of GUI design, as new controls are sure to be made available.

5.5 Output design


Introduction:
Outputs in an information system are crucial as they present information to system users and serve as
the justification for the system. During systems analysis, output needs and requirements are defined, but
the actual design of these outputs occurs in this section. There are two primary types of computer
outputs: external outputs and internal outputs.
1. External Outputs:
External outputs are those that leave the system to trigger actions or provide confirmation to their
recipients. They are often preprinted forms designed by forms manufacturers for use on computer
printers. Examples of external outputs include invoices, paychecks, course schedules, airline tickets,
boarding passes, travel itineraries, telephone bills, and purchase orders. Some external outputs are
designed as turnaround documents, which eventually reenter the system as inputs. For instance, an
invoice may have a detachable top portion that is returned with the customer payment.
2. Internal Outputs:
Internal outputs, on the other hand, remain within the information system to support system users and
managers. These outputs cater to management reporting and decision support requirements.
System User Issues for Output Design:
Several principles are important to consider when designing outputs for information systems:
a. Readability:
Computer outputs should be simple to read and interpret. To enhance readability, the following
guidelines are suggested:
• Each report or output screen should have a title.
• Reports and screens should include section headings to segment large amounts of information.
• Columns in reports should have clear column headings.
• Legends should be used to interpret abbreviated section and column headings.
• All fields on a report should be formally defined using legends.
• Omit computer jargon and error messages from outputs to avoid clutter and confusion.
b. Timing:
The timing of computer outputs is crucial as recipients must receive the information while it is still
relevant to transactions or decisions. This can influence the design and implementation of the outputs.
c. Distribution:
Computer outputs must be distributed to all relevant system users in sufficient quantities to assist them
effectively.
d. Acceptability:

Outputs should be acceptable to the system users who receive them. Understanding how recipients
plan to use the output is essential to ensure its acceptability.
New Topics Related to Advanced IT:
Now, let's explore some additional topics related to advanced IT that are relevant to output design:
1. Interactive Outputs:
In modern information systems, outputs are often interactive, allowing users to customize and
manipulate data to suit their specific needs. These interactive outputs enable users to drill down
into detailed information, apply filters, and perform real-time data analysis.
2. Mobile Responsiveness:
With the widespread use of mobile devices, output design must consider mobile responsiveness.
Outputs should be optimized for different screen sizes and orientations to ensure a seamless user
experience across various devices.
3. Data Visualization:
Advanced IT has introduced sophisticated data visualization techniques. Output design should
leverage these visualizations, such as charts, graphs, and infographics, to present complex data
in a more understandable and actionable format.
4. Personalization and User Profiles:
Modern systems can offer personalized outputs based on user profiles and preferences. Output
design should consider tailoring information to individual users, providing them with relevant
and targeted insights.
5. Integration with AI and ML:
Integration of artificial intelligence (AI) and machine learning (ML) technologies can enhance
output design. AI algorithms can analyze user interactions and behavior to suggest more relevant
outputs or automate the generation of specific reports.
6. Accessibility:
Designing outputs that adhere to accessibility standards is crucial to ensure that users with
disabilities can access and comprehend the information provided by the system.
7. Security and Privacy:
As outputs may contain sensitive data, advanced IT output design should incorporate robust
security measures to safeguard information and ensure data privacy compliance.
Conclusion:
Effective output design is vital for information systems, as outputs are the visible component that
justifies the system's existence. By following principles of readability, timing, distribution, and
acceptability, and incorporating advanced IT concepts, output design can provide valuable insights to
system users and support decision-making processes.


Chapter 6

E-Commerce and Case Study of Inter Organizational Systems

6.1 Introduction to E-Commerce
E-commerce has revolutionized the competitive landscape, significantly speeding up business
processes and streamlining interactions, transactions, and payments between customers and
companies, as well as between companies and suppliers. Today, electronic commerce encompasses
much more than simple online buying and selling; it involves the entire online ecosystem of product
development, marketing, sales, delivery, servicing, and payment, facilitated by global interconnected
marketplaces and a network of business partners. The term "e-commerce" may soon become
antiquated, blurring the distinction between online and traditional business, as the younger generation
has grown up with online commerce as the norm.
In this chapter, we'll explore how e-commerce systems leverage the Internet and various information
technologies to support each aspect of the business process. Virtually all businesses, regardless of size,
engage in some form of e-commerce activity, making it a competitive necessity in today's market.
The Scope of E-Commerce:
E-commerce involves companies acting as buyers or sellers, utilizing Internet-based technologies and
e-commerce applications to facilitate marketing, discovery, transaction processing, and customer
service processes. Examples include interactive marketing, online ordering, secure payment
processing, and customer support on e-commerce websites and auction platforms. Beyond this, e-
commerce also encompasses e-business processes, such as customers and suppliers accessing
inventory databases through extranets, sales representatives utilizing customer relationship
management systems via intranets, and customer collaboration in product development through email
exchanges and online forums.
The advantage of e-commerce is its ability to bridge geographical barriers, enabling businesses of any
size and location to engage with customers and suppliers worldwide. For instance, a small olive oil
manufacturer in a remote Italian village can now easily sell its products to major department stores
and specialty shops in global markets like New York, London, and Tokyo.
E-Commerce Technologies:
E-commerce relies on a wide array of technologies, many of which are commonly used in the realm
of information technology and the Internet. These technologies facilitate various aspects of e-
commerce systems, such as web development, secure online payment gateways, customer relationship
management (CRM) software, and supply chain management (SCM) systems.
The technologies that underpin e-commerce include but are not limited to:
Web Development: Technologies like HTML, CSS, JavaScript, and various web frameworks are used
to create user-friendly and responsive e-commerce websites.
Secure Payment Systems: Encryption and secure socket layer (SSL) protocols ensure the safety of
online transactions and protect sensitive customer data.
CRM and SCM: Customer relationship management and supply chain management systems help
businesses manage customer interactions and optimize the flow of goods and services from suppliers
to customers.

New Trends in E-Commerce:
The landscape of e-commerce is continually evolving, and several emerging trends are shaping its
future. Among them are:
Artificial Intelligence (AI) and Machine Learning: AI-powered recommendation engines and chatbots
enhance personalized shopping experiences and provide better customer support.
Mobile Commerce (M-Commerce): The rise of smartphones has led to the growth of mobile
commerce, making it essential for businesses to optimize their websites and apps for mobile devices.
Voice Commerce: The increasing popularity of voice-activated smart assistants like Amazon's Alexa
and Google Assistant is opening new opportunities for voice-driven e-commerce interactions.
Blockchain Technology: Blockchain's distributed ledger system is being explored to enhance
transparency and security in e-commerce transactions.
Augmented Reality (AR) and Virtual Reality (VR): AR and VR technologies are being integrated into
e-commerce platforms to provide immersive shopping experiences.
Essential e-commerce processes
The essential e-commerce processes required for the successful operation and management of e-
commerce activities are illustrated in Figure 7-1. This figure outlines the nine key components of an e-
commerce process architecture that is the foundation of the e-commerce initiatives of many companies
today. We concentrate on the role these processes play in e-commerce systems, but you should
recognize that many of these components may also be used in internal, noncommerce e-business
applications. An example would be an intranet-based human resource system used by a company's
employees, which might use all the catalog management and product payment processes shown in
Figure 7-3. Let's take a brief look at each essential process category.


Fig 7-1 This E-Commerce Process Architecture Highlights Nine Essential Categories of
E-Commerce Processes
Access Control and Security
In e-commerce processes, it is crucial to establish mutual trust and ensure secure access between the parties
involved in a transaction. This is accomplished through various measures such as user authentication, access
authorization, and the implementation of security features. For instance, these processes verify the identity of a
customer and an e-commerce site using methods like user names and passwords, encryption keys, or digital
certificates and signatures. Once authenticated, the e-commerce site grants access only to specific parts of the
site that are relevant to the individual user's transactions. Typically, users are granted access to all resources on
an e-commerce site, except for restricted areas such as other users' accounts, confidential company data, and
webmaster administration sections.
In the case of B2B e-commerce, companies may rely on secure industry exchanges or web trading portals that
restrict access to registered customers, ensuring that only authorized individuals can access trading information
and applications. Additional security processes are implemented to safeguard e-commerce resources from
various threats, including hacker attacks, password or credit card number theft, and system failures. These
measures are put in place to maintain the integrity and security of e-commerce sites and protect both the
businesses and their customers.
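A highly simplified sketch of the user name/password part of this access control process is shown below (TypeScript, using Node's built-in crypto module). The salted hashing scheme, the user record, and the list of permitted site areas are illustrative assumptions, not a description of any particular e-commerce platform.

```typescript
// Simplified sketch of password-based authentication and access authorization.
// Passwords are stored only as salted hashes; the record layout is an assumption.
import { scryptSync, randomBytes, timingSafeEqual } from "crypto";

interface UserRecord {
  username: string;
  salt: string;
  passwordHash: string;   // hex-encoded scrypt hash
  allowedAreas: string[]; // parts of the site this user may access
}

function hashPassword(password: string, salt: string): string {
  return scryptSync(password, salt, 32).toString("hex");
}

function register(username: string, password: string): UserRecord {
  const salt = randomBytes(16).toString("hex");
  return {
    username,
    salt,
    passwordHash: hashPassword(password, salt),
    allowedAreas: ["catalog", "own-account"],   // no access to other users' accounts
  };
}

function authenticate(user: UserRecord, password: string): boolean {
  const attempt = Buffer.from(hashPassword(password, user.salt), "hex");
  const stored = Buffer.from(user.passwordHash, "hex");
  return attempt.length === stored.length && timingSafeEqual(attempt, stored);
}

// Access authorization: only areas relevant to the individual user are granted.
function mayAccess(user: UserRecord, area: string): boolean {
  return user.allowedAreas.includes(area);
}
```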
Profiling and Personalizing
Once you have gained access to an e-commerce site, profiling processes can occur that gather data on
you and your Web site behavior and choices, as well as build electronic profiles of your characteristics
and preferences. User profiles are developed using profiling tools such as user registration, cookie files,
website behavior tracking software, and user feedback. These profiles are then used to recognize you as
an individual user and provide you with a personalized view of the contents of the site, as well as product

recommendations and personalized Web advertising as part of a one-to-one marketing strategy. Profiling
processes are also used to help authenticate your identity for account management and payment purposes
and gather data for customer relationship management, marketing planning, and website management.
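The sketch below (TypeScript) suggests how a profile built from tracked site behavior might drive a personalized recommendation. The profile fields and the simple "most-viewed category" rule are purely illustrative assumptions; real profiling tools combine registration data, cookies, and behavior tracking in far richer ways.

```typescript
// Illustrative sketch: build a user profile from tracked behavior and use it
// for one-to-one personalization. The rule used here is deliberately simple.
interface UserProfile {
  userId: string;
  viewedCategories: Map<string, number>;  // category -> number of page views
}

function recordPageView(profile: UserProfile, category: string): void {
  const count = profile.viewedCategories.get(category) ?? 0;
  profile.viewedCategories.set(category, count + 1);
}

// Personalization: recommend products from the category the user views most.
function favouriteCategory(profile: UserProfile): string | undefined {
  let best: string | undefined;
  let bestCount = 0;
  for (const [category, count] of profile.viewedCategories) {
    if (count > bestCount) {
      best = category;
      bestCount = count;
    }
  }
  return best;
}

const profile: UserProfile = { userId: "u-1001", viewedCategories: new Map() };
recordPageView(profile, "laptops");
recordPageView(profile, "laptops");
recordPageView(profile, "printers");
console.log(favouriteCategory(profile)); // "laptops" -> drives personalized recommendations
```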
Search Management
Efficient and effective search processes provide a top e-commerce website capability that helps customers
find the specific product or service they want to evaluate or buy. E-commerce software packages
can include a website search engine component, or a company may acquire a customized e-commerce
search engine from search technology companies like Google and Requisite Technology. Search
engines may use a combination of search techniques, including searches based on content (e.g., a product
description) or parameters (e.g., above, below, or between a range of values for multiple properties of a
product).
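A minimal sketch of a parameter-based search of the kind described above (a price range plus a content keyword) is shown below in TypeScript. The product fields and the small in-memory catalog are illustrative assumptions; a production search engine would of course index a database rather than filter an array.

```typescript
// Sketch of a combined content + parameter search over a product catalog.
interface Product {
  name: string;
  description: string;
  price: number;
}

interface SearchParams {
  keyword?: string;     // content-based criterion (matched against the description)
  minPrice?: number;    // parameter-based criteria (a range of values)
  maxPrice?: number;
}

function search(catalog: Product[], params: SearchParams): Product[] {
  return catalog.filter(p => {
    if (params.keyword &&
        !p.description.toLowerCase().includes(params.keyword.toLowerCase())) return false;
    if (params.minPrice !== undefined && p.price < params.minPrice) return false;
    if (params.maxPrice !== undefined && p.price > params.maxPrice) return false;
    return true;
  });
}

const catalog: Product[] = [
  { name: "Office chair", description: "Ergonomic mesh office chair", price: 120 },
  { name: "Desk lamp", description: "LED desk lamp with dimmer", price: 35 },
];
console.log(search(catalog, { keyword: "chair", minPrice: 50, maxPrice: 200 }));
```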
Content and Catalog Management
Content management software helps e-commerce companies develop, generate, deliver, update, and
archive text data and multimedia information at e-commerce websites. For example, German media giant
Bertelsmann, part owner of BarnesandNoble.com, uses Story Server content manager software to
generate webpage templates that enable online editors from six international offices to easily publish
and update book reviews and other product information, which are sold (syndicated) to other e-commerce
sites.
E-commerce content frequently takes the form of multimedia catalogs of product information. As such,
generating and managing catalog content is a major subset of content management, or catalog
management. For example, W.W. Grainger & Co., a multibillion-dollar industrial parts distributor, uses
the Center Stage catalog management software suite to retrieve data from more than 2,000 supplier
databases, standardize the data, translate it into HTML or XML for Web use, and organize and enhance
the data for speedy delivery as multimedia Web pages at its www.grainger.com website.
Content and catalog management software works with the profiling tools we mentioned previously
to personalize the content of webpages seen by individual users. For example, travelocity.com uses On
Display content manager software to push personalized promotional information about other travel
opportunities to users while they are involved in an online travel- related transaction.
Finally, content and catalog management may be expanded to include product configuration processes
that support web-based customer self-service and the mass customization of a company's products.
Configuration software helps online customers select the optimum feasible set of product features that
can be included in a finished product. For example, both Dell Computer and Cisco Systems use
configuration software to sell built-to-order computers and network processors to their online customers.
Workflow Management
Many of the business processes in e-commerce applications can be managed and partially automated
with the help of workflow management software. E-business workflow systems for enterprise
collaboration help employees electronically collaborate to accomplish structured work tasks within
knowledge-based business processes. Workflow management in both e-business and e-commerce
depends on a workflow software engine containing software models of the business processes to be
accomplished. The workflow models express the predefined sets of business rules, roles of
stakeholders, authorization requirements, routing alternatives, databases used, and sequence of tasks
required for each e-commerce process. Thus, workflow systems ensure that the proper transactions,
decisions, and work activities are performed, and the correct data and documents are routed to the
right employees, customers, suppliers, and other business stakeholders.
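To suggest what a workflow model of predefined business rules, roles, and task sequences might look like in code, here is a small TypeScript sketch of a purchase approval workflow. The steps, roles, and approval threshold are hypothetical and purely illustrative; they are not a description of any real workflow engine.

```typescript
// Sketch of a tiny workflow engine: a predefined sequence of tasks, each with
// a responsible role and a business rule deciding whether the step applies.
interface PurchaseRequest {
  id: string;
  amount: number;
  approvedBy: string[];
}

interface WorkflowStep {
  name: string;
  role: string;                                   // who performs the task
  applies: (req: PurchaseRequest) => boolean;     // business rule / routing alternative
}

// Hypothetical procurement workflow: manager approval always, finance approval
// only for large orders, then routing of the order to the supplier.
const procurementWorkflow: WorkflowStep[] = [
  { name: "Manager approval", role: "manager", applies: () => true },
  { name: "Finance approval", role: "finance", applies: req => req.amount > 10000 },
  { name: "Send order to supplier", role: "purchasing-system", applies: () => true },
];

function run(req: PurchaseRequest, workflow: WorkflowStep[]): void {
  for (const step of workflow) {
    if (!step.applies(req)) continue;             // skip steps whose rule does not apply
    console.log(`Request ${req.id}: routing "${step.name}" to role "${step.role}"`);
    req.approvedBy.push(step.role);
  }
}

run({ id: "PO-2024-001", amount: 15000, approvedBy: [] }, procurementWorkflow);
```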
As many of you begin your business careers, you will be charged with the responsibility of driving
cost out of existing business processes while maintaining or improving the effectiveness of those
processes. As you continue to acquire a greater appreciation for, and understanding of, how technology
can benefit business, you will explore workflow management as the key to this optimization of cost and
effectiveness throughout the business.
For example, Figure 7-2 illustrates the e-commerce procurement processes of the MS Market system of
Microsoft Corp. Microsoft employees use its global intranet and the catalog/content management and
workflow management software engines built into MS Market to purchase electronically more than $3
billion annually of business supplies and materials from approved suppliers connected to the MS Market
system by their corporate extranets.

Fig 7-2 The role of content management and workflow management in a web-based
procurement process: The MS Market System Used By Microsoft Corp
Event Notification
Event notification is a crucial component of modern e-commerce applications. These systems are
predominantly event-driven, responding to a diverse range of events throughout the entire e-commerce
process. From a new customer's initial website access to payment and delivery processes, as well as
various customer relationship and supply chain management activities, event notification plays a vital
role in keeping stakeholders informed of relevant updates and changes that may affect their transactions.
To facilitate event notification, e-commerce systems utilize event notification software, which works
in conjunction with workflow management software. This combination enables continuous monitoring
of all e-commerce processes, capturing essential events, including unexpected changes and problem
situations. Subsequently, the event notification software collaborates with user-profiling software to
automatically notify all involved stakeholders through their preferred electronic messaging methods,

such as e-mail, newsgroup, pager, or fax communications. This ensures that customers, suppliers,
employees, and other relevant parties are promptly informed of significant transaction events.
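The following TypeScript sketch illustrates the idea of capturing e-commerce events and notifying each stakeholder through their preferred messaging method. The event names, channels, and stakeholder records are illustrative assumptions only; a real system would hand the messages to e-mail, SMS, or pager gateways.

```typescript
// Sketch of event notification: stakeholders register a preferred channel and
// are notified when transaction events (order shipped, payment processed, ...) occur.
type Channel = "email" | "sms" | "pager";

interface Stakeholder {
  name: string;
  preferredChannel: Channel;
}

type EventName = "order-confirmed" | "payment-processed" | "order-shipped";

const subscriptions = new Map<EventName, Stakeholder[]>();

function subscribe(event: EventName, stakeholder: Stakeholder): void {
  const list = subscriptions.get(event) ?? [];
  list.push(stakeholder);
  subscriptions.set(event, list);
}

function notifyStakeholders(event: EventName, details: string): void {
  for (const s of subscriptions.get(event) ?? []) {
    // In a real system this line would hand off to a messaging gateway.
    console.log(`[${s.preferredChannel}] to ${s.name}: ${event} - ${details}`);
  }
}

subscribe("order-shipped", { name: "Customer A", preferredChannel: "email" });
subscribe("order-shipped", { name: "Warehouse manager", preferredChannel: "sms" });
notifyStakeholders("order-shipped", "Order #1001 left the warehouse");
```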
Furthermore, event notification extends to a company's management, enabling them to monitor their
employees' responsiveness to e-commerce events and gather customer and supplier feedback. By staying
informed about crucial events, management can make informed decisions and enhance overall
operational efficiency.
With the advent of advanced AI and machine learning technologies, event notification can now be
intelligently tailored to individual users' preferences. For instance, AI-powered recommendation
engines analyze user behavior and past interactions to deliver personalized event notifications,
enhancing the overall customer experience and driving engagement.
Moreover, integration with emerging communication platforms and social media channels has
expanded the scope of event notification. Customers can now receive transaction updates and relevant
offers through messaging apps, such as WhatsApp and Facebook Messenger, in addition to traditional
electronic messaging methods.
One notable change in event notification is the adoption of real-time notifications. Traditional event
notification systems may have some latency in delivering updates to stakeholders, but advancements in
technology now allow for instantaneous event notifications. Real-time updates provide customers with
timely information, creating a sense of trust and transparency in the e-commerce process.
When you make a purchase on a modern e-commerce platform, such as Amazon, you receive real-time
notifications on your mobile device or computer as your order progresses. You'll receive immediate
updates on order confirmation, payment processing, and shipment tracking, ensuring you are well-
informed throughout the entire transaction process.
In conclusion, event notification remains a critical aspect of e-commerce systems, keeping stakeholders
informed of important events and enhancing the overall user experience. As technology continues to
advance, we can expect further innovations in event notification, making it an integral part of delivering
seamless and personalized e-commerce interactions.
Collaboration and Trading
This major category of e-commerce processes encompasses the essential collaboration and trading services
required by customers, suppliers, and other stakeholders to facilitate successful e-commerce transactions.
The effective collaboration among business trading partners is often supported by Internet-based trading
services, leading to seamless interactions and efficient transactions. Notably, B2B (Business-to-Business)
e-commerce web portals, offered by companies like Ariba and Commerce One, play a crucial role in
facilitating matchmaking, negotiation, and mediation processes between business buyers and sellers.
In recent times, the landscape of collaboration and trading in e-commerce has witnessed significant
advancements. The emergence of cutting-edge technologies, such as blockchain, has brought new
opportunities for secure and transparent collaborations between businesses. Blockchain technology can
streamline supply chain management, enhance trust among trading partners, and improve the overall
efficiency of e-commerce transactions. Additionally, the concept of "Smart Contracts" powered by
blockchain has gained popularity, enabling automated and self-executing agreements between parties
based on predefined conditions. Smart Contracts have the potential to revolutionize how transactions are
conducted, reducing the need for intermediaries and ensuring greater accuracy and efficiency in trading
processes.
Imagine a global supply chain network where various stakeholders, including manufacturers, distributors,
and retailers, collaborate seamlessly through a blockchain-based platform. Each transaction and movement
of goods is recorded and verified on the blockchain, ensuring transparency and trust among all participants.
Smart Contracts automatically trigger actions like payment release and shipment verification based on
predefined conditions, reducing administrative overhead and increasing the efficiency of trading
processes.
In conclusion, collaboration and trading in e-commerce are vital components that enable seamless
interactions between stakeholders and facilitate efficient transactions. Advancements in technologies like
blockchain and smart contracts have added new dimensions to this category, promising even greater
transparency, security, and automation in e-commerce collaborations and trading services. Internet-based
trading platforms and portals continue to be essential in fostering B2B e-commerce, empowering
businesses to thrive in a global and interconnected marketplace.

6.2 Features of E-commerce


Each of the dimensions of e-commerce technology and their business significance listed in Table
7-1 deserves a brief exploration, as well as a comparison to both traditional commerce and other forms of
technology-enabled commerce.
Ubiquity
In traditional commerce, a marketplace is a physical place you visit in order to transact. For example,
television and radio typically motivate the consumer to go someplace to make a purchase. E-commerce,
in contrast, is characterized by its ubiquity: it is available just about everywhere, at all times. It liberates
the market from being restricted to a physical space and makes it possible to shop from your desktop, at
home, at work, or even from your car, using mobile commerce. The result is called a marketspace:
a marketplace extended beyond traditional boundaries and removed from a temporal and geographic
location. From a consumer point of view, ubiquity reduces transaction costs-the costs of participating
in a market. To transact, it is no longer necessary that you spend time and money traveling to a market.
At a broader level, the ubiquity of e-commerce lowers the cognitive energy required to transact in a
marketspace. Cognitive energy refers to the mental effort required to complete a task. Humans generally
seek to reduce cognitive energy outlays. When given a choice, humans will choose the path requiring the
least effort-the most convenient path (Shapiro and Varian, 1999; Tversky and Kahneman, 1981).
Global Reach
E-commerce technology permits commercial transactions to cross cultural and national boundaries
far more conveniently and cost-effectively than is true in traditional commerce. As a result, the potential
market size for e-commerce merchants is roughly equal to the size of the world's online population (over
1 billion in 2005, and growing rapidly, according to the Computer Industry Almanac) (Computer
Industry Almanac, Inc., 2006). The total number of users or customers an e-commerce business can
obtain is a measure of its reach (Evans and Wurster, 1997).
In contrast, most traditional commerce is local or regional-it involves local merchants or national
merchants with local outlets. Television and radio stations, and newspapers, for instance, are primarily
local and regional institutions with limited but powerful national networks that can attract a national
audience. In contrast to e-commerce technology, these older commerce technologies do not easily cross
national boundaries to a global audience.
Universal Standards
One strikingly unusual feature of e-commerce technologies is that the technical standards of the Internet,
and therefore the technical standards for conducting e-commerce, are universal standards-they are
shared by all nations around the world. In contrast, most traditional commerce technologies differ from
one nation to the next. For instance, television and radio standards differ around the world, as does cell
telephone technology. The universal technical standards of the Internet and e-commerce greatly lower
market entry costs-the cost merchants must pay just to bring their goods to market. At the same time,
for consumers, universal standards reduce search costs-the effort required to find suitable products. And
by creating a single, one-world market space, where prices and product descriptions can be
inexpensively displayed for all to see, price discovery becomes simpler, faster, and more accurate
(Bakos, 1997; Kambil, 1997). And users of the Internet, both businesses and individuals, experience
network externalities-benefits that arise because everyone uses the same technology. With e- commerce
technologies, it is possible for the first time in history to easily find many of the suppliers' prices, and
delivery terms of a specific product anywhere in the world, and to view them in a coherent,
comparative environment. Although this is not necessarily realistic today for all or many products, it is
a potential that will be exploited in the future.
Richness
Information richness refers to the complexity and content of a message (Evans and Wurster, 1999).
Traditional markets, national sales forces, and small retail stores have great richness: they are able to
provide personal, face-to-face service using aural and visual cues when making a sale.
The richness of traditional markets makes them a powerful selling or commercial environment.
Prior to the development of the Web, there was a trade-off between richness and reach: the larger the
audience reached the less rich the message (see Figure 7-3).
Interactivity
Unlike any of the commercial technologies of the twentieth century, with the possible exception of the
telephone, e-commerce technologies allow for interactivity, meaning they enable two-way
communication between merchant and consumer. Television, for instance, cannot ask viewers any
questions or enter into conversations with them, and it cannot request that customer information be
entered into a form. In contrast, all of these activities are possible on an e- commerce website.
Interactivity allows an online merchant to engage a consumer in ways similar to a face-to-face
experience, but on a much more massive, global scale.
Information Density
The Internet and the web vastly increase information density-the total amount and quality of information
available to all market participants, consumers, and merchants alike. E-commerce technologies reduce
information collection, storage, processing, and communication costs. At the same time, these
technologies increase greatly the currency, accuracy, and timeliness of information-making information
more useful and important than ever. As a result, information becomes more plentiful, less expensive,
and of higher quality. A number of business consequences result from the growth in information
density. In e-commerce markets, prices and costs become more transparent. Price transparency refers

to the ease with which consumers can find out the variety of prices in a market; cost transparency refers
to the ability of consumers to discover the actual costs merchants pay. But there are advantages for
merchants as well. Online merchants can discover much more about consumers; this allows
merchants to segment the market into groups willing to pay different prices and permits them to
engage in price discrimination-selling the same goods, or nearly the same goods, to different targeted
groups at different prices. For instance, an online merchant can discover a consumer's avid interest in
expensive exotic vacations, and then pitch expensive exotic vacation plans to that consumer at a
premium price, knowing this person is willing to pay extra for such a vacation. At the same time, the
online merchant can pitch the same vacation plan at a lower price to more price-sensitive consumers
(Shapiro and Varian, 1999). Merchants also have enhanced abilities to differentiate their products in
terms of cost, brand, and quality.
Personalization/Customization
E-commerce technologies permit personalization: merchants can target their marketing messages to specific
individuals by adjusting the message to a person's name, interests, and past purchases. The technology also
permits customization- changing the delivered product or service based on a user's preferences or prior
behavior. Given the interactive nature of e-commerce technology, much information about the consumer can
be gathered in the marketplace at the moment of purchase.
With the increase in information density, a great deal of information about the consumer's past purchases
and behavior can be stored and used by online merchants. The result is a level of personalization and
customization unthinkable with existing commerce technologies. For instance, you may be able to
shape what you see on television by selecting a channel, but you cannot change the contents of the
channel you have chosen. In contrast, the online version of the Wall Street Journal allows you to select
the type of news stories you want to see first, and gives you the opportunity to be alerted when certain
events happen.
Now, let's return to the question that motivated this section: Why study e-commerce? The answer
is simply that e-commerce technologies-and the digital markets that result-promise to bring about
some fundamental, unprecedented shifts in commerce. One of these shifts, for instance, appears to be a
large reduction in information asymmetry among all market participants (consumers and merchants). In
the past, merchants and manufacturers were able to prevent consumers from learning about their costs,
price discrimination strategies, and profits from sales. This becomes more difficult with e-commerce,
and the entire marketplace potentially becomes highly price competitive.
In addition, the unique dimensions of e-commerce technologies listed in Table 7-1 also suggest many
new possibilities for marketing and selling-a powerful set of interactive, personalized, and rich messages
are available for delivery to segmented, targeted audiences. E-commerce technologies make it possible
for merchants to know much more about consumers and to be able to use this information more
effectively than was ever true in the past. Potentially, online merchants could use this new
information to develop new information asymmetries, enhance their ability to brand products, charge
premium prices for high-quality service, and segment the market into an endless number of subgroups,
each receiving a different price. To complicate matters further, these same technologies make it
possible for merchants to know more about other merchants than was ever true in the past. This
presents the possibility that merchants might collude on prices rather than compete and drive overall
average prices up. This strategy works especially well when there are just a few suppliers (Varian,
2000b).


Fig 7-3 The Changing Trade-Off Between Richness and Reach

6.3 Categories of e-Commerce


Many companies today are participating in or sponsoring four basic categories of e-commerce
applications: business-to-consumer, business-to-business, consumer-to-consumer, and consumer-to-business
e-commerce. Note: we do not explicitly cover the additional category of business-to-government (B2G)
and e-government applications because they are beyond the scope of this text, but many e-commerce
concepts apply to such applications.

Business-to-Consumer (B2C) e-Commerce. In this form of e-commerce, businesses must develop
attractive electronic marketplaces to sell products and services to consumers. The B2C marketplace
has experienced significant growth and transformation with the rapid expansion of mobile commerce
(m-commerce). Today, businesses need to ensure that their e-commerce websites are mobile-
responsive and optimized for seamless shopping experiences on smartphones and tablets.
Additionally, the integration of social commerce has become a powerful tool for B2C businesses.
Social media platforms now offer shopping features, allowing businesses to showcase products
directly to potential customers and enable instant purchases without leaving the platform.
Personalization and artificial intelligence (AI) are also key components of B2C e-commerce
strategies. AI-powered recommendation engines and chatbots help businesses deliver personalized
product suggestions and enhance customer support, ultimately improving customer satisfaction and
loyalty.
Consumer-to-Consumer (C2C) e-Commerce. The huge success of online auctions like eBay, where
consumers (as well as businesses) can buy from and sell to one another in an auction process at an
auction website, makes this e-commerce model an important business strategy. Besides traditional
online auctions, modern C2C e-commerce has seen the rise of peer-to-peer (P2P) marketplaces and
sharing economy platforms. These platforms enable individuals to rent or share their assets, such as
homes, vehicles, and equipment, with other consumers, creating new opportunities for collaborative
consumption. Furthermore, the integration of location-based services and mobile applications has
facilitated hyper-local C2C transactions, enabling consumers to engage in quick and convenient peer-
to-peer exchanges within their local communities.
Business-to-Business (B2B) e-Commerce. B2B e-commerce represents the largest segment of e-
commerce, and it continues to evolve with technological advancements. Today, B2B businesses are
leveraging advanced e-commerce platforms to enhance supply chain management, optimize inventory
control, and streamline procurement processes. The integration of application programming
interfaces (APIs) and cloud-based solutions has facilitated seamless connectivity between business
systems, enabling real-time data exchange and smoother transaction processing. Additionally, B2B
e-commerce platforms are incorporating blockchain technology for secure and transparent data
sharing, supply chain traceability, and smart contract automation. The use of machine learning and
big data analytics has empowered B2B businesses to gain valuable insights into customer behavior
and preferences, enabling more effective targeted marketing and personalized offerings.
Consumer-to-Business (C2B) e-Commerce: C2B e-commerce is a growing model that enables
individual consumers to offer products or services to businesses. With the rise of the gig economy
and freelance marketplaces, C2B e-commerce has become a vital channel for businesses to access a
diverse pool of independent professionals and freelancers. Online platforms facilitate this interaction,
making it convenient for businesses to find and engage with individual contractors, consultants, or
freelancers for specific projects or services. The integration of AI and automation has streamlined the
process of identifying suitable talent and matching them with relevant business requirements. C2B e -
commerce platforms are also focusing on providing secure payment systems and establishing trust
mechanisms to ensure smooth and reliable transactions between individual service providers and
businesses. Moreover, the emergence of influencer marketing and user-generated content has created

new avenues for C2B interactions, with businesses collaborating with individual content creators and
social media influencers to promote their products and services to a broader audience.

6.4 Electronic Payment Processes


Payment for the products and services purchased is an obvious and vital set of processes in e-
commerce transactions. Payment processes, however, are not simple because of the nearly anonymous
electronic nature of transactions taking place between the networked computer systems of buyers and
sellers and the many security issues involved. E-commerce payment processes are also complex because
of the wide variety of debit and credit alternatives, as well as the financial institutions and intermediaries
that may be part of the process. Therefore, a variety of electronic payment systems have evolved over
time. In addition, new payment systems are being developed and tested to meet the security and technical
challenges of e-commerce over the Internet.
Web Payment Processes
Most e-commerce systems on the web involving businesses and consumers (B2C) depend on credit card
payment processes, but many B2B e-commerce systems rely on more complex payment processes
based on the use of purchase orders. However, both types of e-commerce typically use an electronic
shopping cart process, which enables customers to select products from Web site catalog displays and
put them temporarily in a virtual shopping basket for later checkout and processing. Figure 7-4
illustrates and summarizes a B2C electronic payment system with several payment alternatives.


Fig 7-4: An example of a secure electronic payment system with many payment alternatives
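To make the shopping cart idea concrete, the following Python sketch models a virtual basket that accumulates selected catalog items and produces the amount handed to the payment process at checkout. The product codes and prices are illustrative assumptions; a real e-commerce system would persist the cart and validate stock and prices against its catalog.

    from dataclasses import dataclass, field

    @dataclass
    class CartItem:
        sku: str            # product code taken from the catalog display (illustrative)
        unit_price: float
        quantity: int = 1

    @dataclass
    class ShoppingCart:
        items: list = field(default_factory=list)

        def add(self, sku, unit_price, quantity=1):
            # Place the selected product temporarily in the virtual basket.
            self.items.append(CartItem(sku, unit_price, quantity))

        def total(self):
            # Amount passed on to the electronic payment process at checkout.
            return sum(item.unit_price * item.quantity for item in self.items)

    cart = ShoppingCart()
    cart.add("BOOK-101", 450.0)
    cart.add("PEN-007", 25.0, quantity=3)
    print(f"Checkout amount: {cart.total():.2f}")   # 525.00

At checkout, the total and the chosen payment alternative are passed on to one of the payment processes described below.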
Electronic Funds Transfer
Electronic funds transfer (EFT) systems continue to be a vital and pervasive element in modern banking
and retail industries, facilitating swift and secure money and credit transfers between financial institutions,
businesses, and their customers. The landscape of EFT has evolved significantly with the advent of
advanced information technologies, offering a plethora of efficient electronic payment methods and
services.
In the banking sector, robust and interconnected networks support teller terminals at bank branches,
ensuring smooth in-person transactions for customers. Additionally, the proliferation of automated teller
machines (ATMs) has revolutionized access to funds, allowing users to withdraw cash, make deposits,
and perform various banking tasks across the globe.
Moreover, the rise of web-based payment services has transformed the way consumers manage their
finances. Popular platforms like PayPal, BillPoint, and others offer secure cash transfers over the internet,
empowering users to conduct online transactions with ease. Services like CheckFree and Paytrust have
simplified bill payment processes, enabling customers to settle their bills automatically through online
platforms.
In the retail industry, EFT systems have become indispensable, offering seamless and instantaneous
payment options for customers. Point-of-sale (POS) terminals at retail outlets are now connected to bank
EFT systems, facilitating transactions through credit cards or debit cards for purchases such as groceries,
gas, and other goods.

Furthermore, the emergence of mobile banking and digital payment solutions has further expanded the
horizon of EFT. Pay-by-phone services have gained immense popularity, allowing users to initiate
transactions and conduct financial activities through their mobile devices, adding another layer of
convenience to the payment ecosystem.
Overall, EFT systems and electronic payment methods have become integral to the financial fabric of the
modern world, offering speed, security, and convenience in various financial transactions for individuals
and businesses alike.
Secure Electronic Payments
When you make an online purchase on the Internet, your credit card information is vulnerable to
interception by network sniffers, software that easily recognizes credit card number formats. Several
basic security measures are being used to solve this security problem: (1) encrypt (code and scramble)
the data passing between the customer and merchant, (2) encrypt the data passing between the customer
and the company authorizing the credit card transaction, or (3) take sensitive information off-line.
For example, many companies use the Secure Sockets Layer (SSL) security method developed by
Netscape Communications that automatically encrypts data passing between your web browser and a
merchant's server. However, sensitive information is still vulnerable to misuse once it's decrypted
(decoded and unscrambled) and stored on a merchant's server, so a digital wallet payment system was
developed. In this method, you add security software add-on modules to your web browser. That enables
your browser to encrypt your credit card data in such a way that only the bank that authorizes credit card
transactions for the merchant gets to see it. All the merchant is told is whether your credit card transaction
is approved or not.
The Secure Electronic Transaction (SET) standard for electronic payment security extends this digital
wallet approach. In this method, software encrypts a digital envelope of digital certificates specifying the
payment details for each transaction. VISA, MasterCard, IBM, Microsoft, Netscape, and most other
industry players have agreed to SET. Therefore, a system like SET may become the standard for secure
electronic payments on the Internet.
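As a rough illustration of the encryption step, the Python sketch below scrambles card details with a symmetric key so that a network sniffer sees only ciphertext. It assumes the third-party cryptography package is installed (pip install cryptography), and it is not a description of SSL/TLS or SET themselves, which additionally involve certificates and automated key exchange.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()     # secret shared with the authorizing party (assumption for this sketch)
    cipher = Fernet(key)

    card_details = b"4111 1111 1111 1111;12/27;123"   # illustrative card data
    envelope = cipher.encrypt(card_details)           # what actually travels across the network
    print(envelope)                                   # unreadable to a network sniffer

    # Only the holder of the key (e.g., the bank authorizing the transaction)
    # can recover the original details.
    print(cipher.decrypt(envelope).decode())

In practice the browser and server negotiate the keys automatically, and the merchant may only ever see an approval or rejection, as described above.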

6.5 Emerging Technologies in IT Business Environment


Virtualization and Cloud Computing
What is virtualization?
Before delving into the various categories of virtualization and their applications, let's establish a clear
definition of virtualization in the context of computing. Virtualization is a fundamental concept that involves
the abstraction of computer resources, concealing the physical attributes of these resources from users,
whether they are applications or end-users.
In simpler terms, virtualization can be understood through two key scenarios:
The creation of multiple virtual resources from a single physical resource: In this scenario, a single
physical entity, such as a server, operating system, application, or storage device, is utilized to appear
and function as multiple virtual resources. These virtual entities operate independently, providing a more
efficient utilization of the underlying physical resource.

The creation of a single virtual resource from multiple physical resources: Alternatively, virtualization
can involve aggregating multiple physical resources, such as storage devices or servers, to appear as a
single virtual resource. This consolidation allows for improved manageability, scalability, and
optimization of resource utilization.
The concept of virtualization is widely applied across various domains in computing, including networking,
storage, and hardware. Let's explore some of these areas and how virtualization is utilized:
History
Virtualization is not a new concept. One of the early works in the field was a paper by
Christopher Strachey entitled "Time Sharing in Large Fast Computers". IBM began exploring
virtualization with its CP-40 and M44/44X research systems. These in turn led to the commercial
CP-67/CMS. The virtual machine concept kept users separated while simulating a full stand-alone
computer for each.
In the 1980s and early 1990s the industry moved from leveraging single mainframes to running collections of smaller and cheaper x86 servers. As a result, the concept of virtualization became less prominent.
That changed in 1999 with VMware's introduction of VMware Workstation. This was followed by
VMware's ESX Server, which runs on bare metal and does not require a host operating system.
Types of Virtualization
Today the term virtualization is widely applied to a number of concepts including:
• Server Virtualization
• Client / Desktop / Application Virtualization
• Network Virtualization
• Storage Virtualization
• Service / Application Infrastructure Virtualization
In most of these cases, either virtualizing one physical resource into many virtual resources or turning
many physical resources into one virtual resource is occurring.
Server Virtualization
Server virtualization is the most active segment of the virtualization industry featuring established
companies such as VMware, Microsoft, and Citrix. With server virtualization, one physical machine is divided into many virtual servers. At the core of such virtualization is the concept of a hypervisor
(virtual machine monitor). A hypervisor is a thin software layer that intercepts operating system calls to
hardware. Hypervisors typically provide a virtualized CPU and memory for the guests running on top
of them. The term was first used in conjunction with the IBM CP-370.
Hypervisors are classified as one of two types:
Type 1- This type of hypervisor is also known as native or bare-metal. They run directly on the
hardware with guest operating systems running on top of them. Examples include VMware ESX, Citrix
XenServer, and Microsoft's Hyper-V.

Type 2 - This type of hypervisor runs on top of an existing operating system with guests running at a
third level above hardware. Examples include VMware Workstation and SWSoft's Parallels Desktop.
Related to type 1 hypervisors is the concept of paravirtualization. Paravirtualization is a technique
in which a software interface that is similar but not identical to the underlying hardware is
presented. Operating systems must be ported to run on top of a paravirtualized hypervisor. Modified
operating systems use the "hypercalls" supported by the paravirtualized hypervisor to interface
directly with the hardware. The popular Xen project makes use of this type of virtualization.
Starting with version 3.0, however, Xen is also able to make use of the hardware-assisted virtualization technologies of Intel (VT-x) and AMD (AMD-V). These extensions allow Xen to
run unmodified operating systems such as Microsoft Windows.
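As a simple illustration of how a hypervisor carves one physical host into several virtual servers, the Python sketch below tracks the CPU and memory handed to each guest. The host size and guest names are assumptions made for the example; a real hypervisor also virtualizes devices and schedules CPU time rather than merely bookkeeping capacity.

    class Host:
        def __init__(self, cpus, ram_gb):
            self.free_cpus, self.free_ram = cpus, ram_gb
            self.guests = {}

        def create_guest(self, name, cpus, ram_gb):
            # Refuse the guest if the physical machine cannot carry it.
            if cpus > self.free_cpus or ram_gb > self.free_ram:
                return False
            self.free_cpus -= cpus
            self.free_ram -= ram_gb
            self.guests[name] = (cpus, ram_gb)
            return True

    host = Host(cpus=16, ram_gb=64)
    print(host.create_guest("web-vm", 4, 8))    # True  - capacity available
    print(host.create_guest("db-vm", 8, 32))    # True
    print(host.create_guest("big-vm", 8, 64))   # False - not enough free CPU or RAM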
Server virtualization has a large number of benefits for the companies making use of the technology.
Among those frequently listed:
• Increased Hardware Utilization- This results in hardware savings, reduced administration
overhead, and energy savings.
• Security- Clean images can be used to restore compromised systems. Virtual machines can also
provide sandboxing and isolation to limit attacks.
• Development- Debugging and performance monitoring scenarios can be easily setup in a
repeatable fashion. Developers also have easy access to operating systems they might not otherwise
be able to install on their desktops.
Correspondingly there are a number of potential downsides that must be considered:
• Security- There are now more entry points such as the hypervisor and virtual networking layer
to monitor. A compromised image can also be propagated easily with virtualization technology.
• Administration- While there are fewer physical machines to maintain, there may be more machines in aggregate. Such maintenance may require new skills and familiarity with software that
administrators otherwise would not need.
• Licensing/Cost Accounting- Many software-licensing schemes do not take virtualization into
account. For example running 4 copies of Windows on one box may require 4 separate licenses.
• Performance- Virtualization effectively partitions resources such as RAM and CPU on a
physical machine. This, combined with hypervisor overhead, does not result in an environment that maximizes performance.
Application/Desktop Virtualization
Virtualization is not only a server domain technology. It is being put to a number of uses on the client
side at both the desktop and application level. Such virtualization can be broken out into four categories:
• Local Application Virtualization/Streaming
• Hosted Application Virtualization
• Hosted Desktop Virtualization
• Local Desktop Virtualization
With streamed and local application virtualization an application can be installed on demand as needed.
If streaming is enabled then the portions of the application needed for startup are sent first optimizing
startup time. Locally virtualized applications also frequently make use of virtual registries and file
systems to maintain separation from, and avoid cluttering, the user's physical machine. Examples of local application virtualization solutions include Citrix Presentation Server and Microsoft SoftGrid. One could also include virtual appliances in this category, such as those frequently distributed via
VMware's VMware Player.
Hosted application virtualization allows the user to access applications from their local computer that
are physically running on a server somewhere else on the network. Technologies such as Microsoft's
RemoteApp allow for the user experience to be relatively seamless, including the ability for the
remote application to be a file handler for local file types. Benefits of application virtualization include:
• Security- Virtual applications often run in user mode isolating them from OS level functions.
• Management- Virtual applications can be managed and patched from a central location.
• Legacy Support- Through virtualization technologies legacy applications can be run on modern
operating systems they were not originally designed for.
• Access- Virtual applications can be installed on demand from central locations that provide
failover and replication.
Disadvantages include:
• Packaging- Applications must first be packaged before they can be used.
• Resources- Virtual applications may require more resources in terms of storage and CPU.
• Compatibility- Not all applications can be virtualized easily.
Hosted desktop virtualization is similar to hosted application virtualization, expanding the user
experience to be the entire desktop. Commercial products include Microsoft's Terminal Services, Citrix's
XenDesktop, and VMware's VDI.
Benefits of desktop virtualization include most of those with application virtualization as well as:
• High Availability- Downtime can be minimized with replication and fault tolerant hosted
configurations.
• Extended Refresh Cycles- Larger capacity servers as well as limited demands on the client
PCs can extend their lifespan.
• Multiple Desktops- Users can access multiple desktops suited for various tasks from the same
client PC.
Disadvantages of desktop virtualization are similar to those of server virtualization. There is also the added
disadvantage that clients must have network connectivity to access their virtual desktops. This is problematic
for offline work and also increases network demands at the office.

The final segment of client virtualization is local desktop virtualization. It could be said that this is
where the recent resurgence of virtualization began with VMware's introduction of VMware
Workstation in the late 1990s. Today the market includes competitors such as Microsoft Virtual PC
and Parallels Desktop. Local desktop virtualization has also played a key part in the increasing
success of Apple's move to Intel processors since products like VMware Fusion and Parallels allow easy
access to Windows applications. Some of the benefits of local desktop virtualization include:
• Security- With local virtualization organizations can lock down and encrypt just the
valuable contents of the virtual machine/disk. This can be more performant than encrypting a
user's entire disk or operating system.
• Isolation- Related to security is isolation. Virtual machines allow corporations to isolate
corporate assets from third party machines they do not control. This allows employees to use
personal computers for corporate use in some instances.
• Development/Legacy Support- Local virtualization allows a user's computer to support many
configurations and environments it would otherwise not be able to support without different
hardware or host operating system. Examples of this include running Windows in a virtualized
environment on OS X and testing legacy Windows 98 support on a machine whose primary OS is Vista.
Network Virtualization
Up to this point the types of virtualization covered have centered on applications or entire machines.
These are not the only granularity levels that can be virtualized however. Other computing concepts
also lend themselves to being software virtualized as well. Network virtualization is one such concept.
Using the internal definition of the term, desktop and server virtualization solutions provide networking
access between both the host and guest as well as between many guests. On the server side virtual
switches are gaining acceptance as a part of the virtualization stack. The external definition of network
virtualization is probably the more used version of the term however. Virtual Private Networks (VPNs)
have been a common component of the network administrators' toolbox for years with most
companies allowing VPN use. Virtual LANs (VLANs) are another commonly used network
virtualization concept. With network advances such as 10 gigabit Ethernet, networks no longer need to be
structured purely along geographical lines. Companies with products in the space include Cisco and
3Leaf.
In general benefits of network virtualization include:
• Customization of Access-Administrators can quickly customize access and network options such
as bandwidth throttling and quality of service.
• Consolidation- Physical networks can be combined into one virtual network for overall
simplification of management.
Similar to server virtualization, network virtualization can bring increased complexity, some
performance overhead, and the need for administrators to have a larger skill set.

Storage Virtualization
Another computing concept that is frequently virtualized is storage. Unlike the definitions we have
seen up to this point that have been complex at times, Wikipedia defines storage virtualization simply
as:
Storage virtualization refers to the process of abstracting logical storage from physical storage. While
RAID at the basic level provides this functionality, the term storage virtualization typically
includes additional concepts such as data migration and caching. Storage virtualization is hard to define
in a fixed manner due to the variety of ways that the functionality can be provided. Typically, it is
provided as a feature of:
• Host Based with Special Device Drivers
• Array Controllers
• Network Switches
• Stand Alone Network Appliances
Each vendor has a different approach in this regard. Another primary way that storage virtualization
is classified is whether it is in-band or out-of-band. In-band (often called symmetric) virtualization sits
between the host and the storage device, allowing caching. Out-of-band (often called asymmetric) virtualization makes use of special host-based device drivers that first look up the metadata (indicating where a file resides) and then allow the host to retrieve the file directly from the storage location. Caching
at the virtualization level is not possible with this approach.
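To make the logical-to-physical mapping idea tangible, the following Python sketch shows the kind of metadata table a storage virtualization layer maintains; the device names and block numbers are purely illustrative.

    # Metadata: logical block of the virtual volume -> (physical device, physical block)
    virtual_volume = {
        0: ("array-A", 1042),
        1: ("array-A", 1043),
        2: ("array-B", 77),     # a single virtual volume can span several physical arrays
    }

    def read_block(logical_block):
        # Out-of-band style: look up the metadata first, then go straight to the device.
        device, physical_block = virtual_volume[logical_block]
        return f"read {device} block {physical_block}"

    print(read_block(2))                  # read array-B block 77

    # Migrating the data only changes the mapping; hosts keep the same logical address.
    virtual_volume[2] = ("array-C", 5)
    print(read_block(2))                  # read array-C block 5

This mapping is also why the metadata and its management are central to a reliable system, and why backing the virtualization layer out of a system again is non-trivial, as noted among the disadvantages below.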
General benefits of storage virtualization include:
• Migration- Data can be easily migrated between storage locations without interrupting live
access to the virtual partition with most technologies.
• Utilization- Similar to server virtualization, utilization of storage devices can be balanced to address over- and under-utilization.
• Management- Many hosts can leverage storage on one physical device that can be centrally
managed.
Some of the disadvantages include:
• Lack of Standards and Interoperability- Storage virtualization is a concept and not a
standard. As a result vendors frequently do not easily interoperate.
• Metadata- Since there is a mapping between logical and physical location, the storage
metadata and its management become key to a working, reliable system.
• Backout- The mapping between logical and physical locations also makes the backout of
virtualization technology from a system a less than trivial process.
Service / Application Infrastructure Virtualization
Enterprise application providers have also taken note of the benefits of virtualization and begun offering
solutions that allow the virtualization of commonly used applications such as Apache as well as

application fabric platforms that allow software to easily be developed with virtualization capabilities
from the ground up.
Application infrastructure virtualization (sometimes referred to as application fabrics) unbundles an
application from a physical OS and hardware. Application developers can then write to a virtualization
layer. The fabric can then handle features such as deployment and scaling. In essence this process is the
evolution of grid computing into a fabric form that provides virtualization-level features. Companies such as Appistry and DataSynapse provide features including:
• Virtualized Distribution
• Virtualized Processing
• Dynamic Resource Discovery
IBM has also embraced the virtualization concept at the application infrastructure level with the
rebranding and continued enhancement of WebSphere XD as WebSphere Virtual Enterprise. The
product provides features such as service level management, performance monitoring, and fault
tolerance. The software runs on a variety of Windows, Unix, and Linux based operating systems and
works with popular application servers such as WebSphere, Apache, BEA, JBoss, and PHP application
servers. This lets administrators deploy and move application servers at a virtualization layer level
instead of at the physical machine level.
Final Thoughts
In summary it should now be apparent that virtualization is not just a server-based concept. The
technique can be applied across a broad range of computing including the virtualization of:
• Entire Machines on Both the Server and Desktop
• Applications/Desktops
• Storage
• Networking
• Application Infrastructure
The technology is evolving in a number of different ways but the central themes revolve around
increased stability in existing areas and accelerating adoption by segments of the industry that have yet
to embrace virtualization. The recent entry of Microsoft into the bare-metal hypervisor space with
Hyper-V is a sign of the technology's maturity in the industry.
Beyond these core elements the future of virtualization is still being written. A central dividing line is
feature or product. For some companies, such as Red Hat and many of the storage vendors, virtualization is being pushed as a feature to complement their existing offerings. Other companies, such as VMware, have built entire businesses with virtualization as a product.

What is cloud computing?
Cloud computing is a revolutionary paradigm in the field of IT, offering a comprehensive solution that
delivers computing resources and services over the Internet. In essence, it functions like a utility, where
computing resources are shared and provided similar to how electricity is distributed on an electrical
grid. Just as consumers tap into the power grid without needing to own their power plants, cloud
computing allows users to access and utilize computing resources without the burden of owning or
managing physical hardware.
At its core, cloud computing operates on the principle of virtualization. Instead of running applications
and services on a specific physical server, cloud computing pools together resources from multiple
servers to create a virtualized computing environment. This means that various applications can
leverage the collective computing power as if they were running on a single, powerful system.
One of the most compelling features of cloud computing is its flexibility and scalability. Resources are
allocated and provisioned on-demand, allowing users to access the exact amount of computing power
they need at any given moment. This elasticity ensures that computing resources can be quickly scaled
up or down based on varying workloads, optimizing efficiency and cost-effectiveness.
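The elasticity described above is usually implemented as a simple control loop: measure the load, then add or release capacity. The thresholds and the one-instance scaling step in the Python sketch below are illustrative assumptions, not the policy of any particular cloud provider.

    def desired_instances(current, cpu_utilisation, scale_up_at=0.80, scale_down_at=0.30):
        # Capacity follows the workload: grow under pressure, shrink when idle.
        if cpu_utilisation > scale_up_at:
            return current + 1            # provision one more server on demand
        if cpu_utilisation < scale_down_at and current > 1:
            return current - 1            # release capacity and stop paying for it
        return current

    print(desired_instances(current=4, cpu_utilisation=0.92))   # 5 - scale up
    print(desired_instances(current=4, cpu_utilisation=0.15))   # 3 - scale down

Because the user pays only for the capacity actually provisioned, this loop is also what drives much of the cost saving discussed below.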
Before the advent of cloud computing, websites and server-based applications were typically tied to
specific physical servers. With the cloud, applications can operate independently from the underlying
hardware, freeing developers and users from the constraints of hardware configuration. This means that
applications can be designed and deployed with greater agility, and they are not dependent on the
specific hardware environment in which they run.
Cloud computing has several deployment models, including public cloud, private cloud, hybrid cloud,
and multi-cloud, each offering unique benefits and use cases. Public cloud services are provided by
third-party providers, accessible to anyone over the Internet. Private cloud environments, on the other
hand, are dedicated and maintained for a single organization, offering enhanced security and control.
Hybrid clouds combine public and private clouds, allowing organizations to take advantage of both.
Multi-cloud strategies involve using multiple cloud providers for specific tasks or to avoid vendor lock-
in.
Overall, cloud computing has revolutionized the way businesses and individuals utilize and interact
with IT resources. It has empowered organizations with greater flexibility, scalability, and cost
efficiency, driving innovation and transforming the landscape of modern computing. As technology
continues to evolve, cloud computing is expected to remain a central pillar in shaping the future of IT
infrastructure and service delivery.
Why the rush to the cloud?
There are valid and significant business and IT reasons for the cloud computing paradigm shift. The
fundamentals of outsourcing as a solution apply.
• Reduced cost: Cloud computing can reduce both capital expense (CapEx) and operating
expense (OpEx) costs because resources are only acquired when needed and are only paid for
when used.

• Refined usage of personnel: Using cloud computing frees valuable personnel allowing them to
focus on delivering value rather than maintaining hardware and software.
• Robust scalability: Cloud computing allows for immediate scaling, either up or down, at any time
without long-term commitment.
Cloud computing building blocks
The cloud computing model is comprised of a front end and a back end. These two elements are
connected through a network, in most cases the Internet. The front end is the vehicle by which the
user interacts with the system; the back end is the cloud itself. The front end is composed of a client
computer, or the computer network of an enterprise, and the applications used to access the cloud.
The back end provides the applications, computers, servers, and data storage that creates the cloud
of services.
Layers: Computing as a commodity
The cloud concept is built on layers, each providing a distinct level of functionality. This
stratification of the cloud's components has provided a means for the layers of cloud computing to
become a commodity just like electricity, telephone service, or natural gas. The commodity that cloud
computing sells is computing power at a lower cost and expense to the user. Cloud computing is poised
to become the next mega-utility service.
The virtual machine monitor (VMM) provides the means for simultaneous use of cloud facilities. VMM
is a program on a host system that lets one computer support multiple, identical execution environments.
From the user's point of view, the system is a self-contained computer which is isolated from other
users. In reality, every user is being served by the same machine. A virtual machine is one operating
system (OS) that is being managed by an underlying control program allowing it to appear to be
multiple operating systems. In cloud computing, VMM allows users to monitor and thus manage
aspects of the process such as data access, data storage, encryption, addressing, topology, and workload
movement.
These are the layers the cloud provides:
• The infrastructure layer is the foundation of the cloud. It consists of the physical assets - servers, network
devices, storage disks, etc. Infrastructure as a Service (IaaS) has providers such as the IBM Cloud. Using IaaS you don't actually control the underlying infrastructure, but you do have control of the operating systems, storage, deployed applications, and, to a limited degree, control over select
networking components. Print On Demand (POD) services are an example of organizations that can
benefit from IaaS.
The POD model is based on the selling of customizable products. PODs allow individuals to open
shops and sell designs on products. Shopkeepers can upload as many or as few designs as they can
create. Many upload thousands. With cloud storage capabilities, a POD can provide unlimited
storage space.
• The middle layer is the platform. It provides the application infrastructure. Platform as a Service
(PaaS) provides access to operating systems and associated services. It provides a way to deploy
applications to the cloud using programming languages and tools supported by the provider. You

do not have to manage or control the underlying infrastructure, but you do have control over the
deployed applications and, to some degree, over application hosting environment configurations. PaaS has providers such as AWS Elastic Beanstalk and Google App Engine. The small entrepreneurial software house is an ideal enterprise for PaaS. With the platform provided, world-class products
can be created without the overhead of in-house production.
• The top layer is the application layer, the layer most people visualize as the cloud. Applications run here and are provided on demand to users. Software as a Service (SaaS) has providers such as Google, whose Internet-accessible applications include Calendar, Gmail, Docs, and many more.
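The division of control across these three layers can be summarized in a small Python structure; the wording of the responsibilities is an illustrative paraphrase of the descriptions above, not a formal standard.

    responsibility = {
        "IaaS": {"provider": ["physical servers", "storage", "network"],
                 "customer": ["operating systems", "deployed applications", "data"]},
        "PaaS": {"provider": ["infrastructure", "operating systems", "runtime and tools"],
                 "customer": ["deployed applications", "data"]},
        "SaaS": {"provider": ["infrastructure", "platform", "application"],
                 "customer": ["user data", "access management"]},
    }

    for layer, split in responsibility.items():
        # The higher the layer, the less the customer has to manage directly.
        print(f"{layer}: customer controls {', '.join(split['customer'])}")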
Cloud formations
There are three types of cloud formations: private (on premise), public, and hybrid.
• Public clouds are available to the general public or a large industry group and are owned and
provisioned by an organization selling cloud services. A public cloud is what is thought of as the
cloud in the usual sense; that is, resources dynamically provisioned over the Internet using
web applications from an off-site third-party provider that supplies shared resources and bills on
a utility computing basis.
• Private clouds exist within your company's firewall and are managed by your organization.
They are cloud services you create and control within your enterprise. Private clouds offer many of
the same benefits as the public clouds - the major distinction being that your organization is in
charge of setting up and maintaining the cloud.
• Hybrid clouds are a combination of the public and the private cloud using services that are in both
the public and private space. Management responsibilities are divided between the public
cloud provider and the business itself. Using a hybrid cloud, organizations can determine
the objectives and requirements of the services to be created and obtain them based on the
most suitable alternative.
IT roles in the cloud
Let us consider the probability that management and administration will require greater automation,
requiring a change in the tasks of personnel responsible for scripting due to the growth in code
production. You see, IT may be consolidating, with a need for less hardware and software
implementation, but it is also creating new formations. The shift in IT is toward the knowledge worker.
In the new paradigm, the technical human assets will have greater responsibilities for enhancing and
upgrading general business processes.
The developer
The growing use of mobile devices, the popularity of social networking, and other aspects of the
evolution of commercial IT processes and systems, will guarantee work for the developer community;
however, some of the traditional roles of development personnel will be shifted away from the
enterprise's developers due to the systemic and systematic processes of the cloud configuration model.

A recent IBM developerWorks survey, which showed the dominance of cloud computing and mobile application development, demonstrated that the demand for mobile technology will grow
exponentially. This development, along with the rapid acceptance of cloud computing across the
globe, will necessitate a radical increase of developers with an understanding of this area. To meet the
growing needs of mobile connectivity, more developers will be required who understand how cloud
computing works.
Cloud computing provides an almost unlimited capacity, eliminating scalability concerns. Cloud
computing gives developers access to software and hardware assets that most small and mid-sized
enterprises could not afford. Developers, using Internet-driven cloud computing and the assets that are a
consequence of this configuration, will have access to resources that most could have only dreamed of in
the recent past.
The administrator
Administrators are the guardians and legislators of an IT system. They are responsible for the control of
user access to the network. This means sitting on top of the creation of user passwords and the formulation
of rules and procedures for such fundamental functionality as general access to the system assets. The
advent of cloud computing will necessitate adjustments to this process since the administrator in such an
environment is no longer merely concerned about internal matters, but also the external relationship of
his enterprise and the cloud computing concern, as well as the actions of other tenants in a public cloud.
This alters the role of the firewall constructs put in place by the administration and the nature of the
general security procedures of the enterprise. It does not negate the need for the guardian of the system.
With cloud computing comes even greater responsibility, not less. Under cloud computing, the
administrator must not only secure data and systems internal to the organization, but must also monitor
and manage the cloud to ensure the safety of their system and data everywhere.
The architect
The function of the architect is the effective modeling of the given system's functionality in the real
IT world. The basic responsibility of the architect is development of the architectural framework of the
agency's cloud computing model. The architecture of cloud computing essentially comprises the abstraction of the three-layer constructs, IaaS, PaaS, and SaaS, in such a way that the particular enterprise
deploying the cloud computing approach meets its stated goals and objectives. The abstraction of the
functionality of the layers is developed so the decision-makers and the foot soldiers can use the abstraction
to plan, execute, and evaluate the efficacy of the IT system's procedures and processes.
The role of the architect in the age of cloud computing is to conceive and model a functional interaction
of the cloud's layers. The architect must use the abstraction as a means to ensure that IT is playing its
proper role in the attainment of organizational objectives.
To cloud or not to cloud: Risk assessment
When considering the adoption of cloud computing, the primary concerns expressed by organizations
revolve around security and privacy. Cloud computing service providers are aware of these concerns and
understand that reliable security is crucial for the success of their businesses. Therefore, security and
privacy are high priorities for all entities involved in cloud computing.

Organizations must conduct risk assessments to evaluate the potential risks associated with moving to the
cloud. This assessment includes assessing the security measures implemented by cloud computing
providers and ensuring they align with the organization's security requirements. Privacy concerns also
need to be addressed, considering data protection and compliance with relevant regulations.
Governance: How will industry standards be monitored?
Governance is the primary responsibility of the owner of a private cloud and the shared responsibility of
the service provider and service consumer in the public cloud. However, given elements such as
transnational terrorism, denial of service, viruses, worms and the like - which do or could have aspects
beyond the control of either the private cloud owner or public cloud service provider and service consumer
- there is a need for some kind of broader collaboration, particularly on the global, regional, and national
levels. Of course, this collaboration has to be instituted in a manner that will not dilute or otherwise
harm the control of the owner of the process or subscribers in the case of the public cloud.
Bandwidth requirements
If you are going to adopt the cloud framework, bandwidth and the potential bandwidth bottleneck
must be evaluated in your strategy. In the CIO.com article: The Skinny Straw: Cloud Computing's
Bottleneck and How to Address It, the following statement is made:
Virtualization implementers found that the key bottleneck to virtual machine density is memory capacity;
now there's a whole new slew of servers coming out with much larger memory footprints,
removing memory as a system bottleneck. Cloud computing negates that bottleneck by removing the
issue of machine density from the equation-sorting that out becomes the responsibility of the cloud
provider, freeing the cloud user from worrying about it. For cloud computing, bandwidth to and from the
cloud provider is a bottleneck.
So, what is the optimal solution for addressing the bandwidth issue? In the current market, blade servers
offer the best answer. Blade servers are designed to minimize physical space and energy usage, providing
significant improvements in bandwidth speed. For example, the IBM Blade Center is specifically
optimized to accelerate high-performance computing workloads efficiently. Just as overcoming the
memory issue was crucial for resolving the bottleneck in virtual machine density, addressing the
bandwidth bottleneck is equally important in cloud computing. Therefore, it is essential to assess the
capabilities of your provider to determine if the bandwidth bottleneck will significantly impact
performance.
Financial impact
Because a sizable proportion of the cost in IT operations comes from administrative and management
functions, the implicit automation of some of these functions will per se cut costs in a cloud computing
environment. Automation can reduce the error factor and the cost of the redundancy of manual repetition
significantly.
There are other contributors to financial problems such as the cost of maintaining physical facilities,
electrical power usage, cooling systems, and of course administration and management factors. As you
can see, bandwidth is not alone, by any means.

Mitigate the risk
Consider these possible risks:
• Adverse impact of mishandling of data.
• Unwarranted service charges.
• Financial or legal problems of vendor.
• Vendor operational problems or shutdowns.
• Data recovery and confidentiality problems.
• General security concerns.
• Systems attacks by external forces.
With the use of systems in the cloud, there are ever-present risks around data security, connectivity, and malicious actions interfering with the computing processes. However, with a carefully thought-out
plan and methodology of selecting the service provider, and an astute perspective on general risk
management, most companies can safely leverage this technology.
In conclusion
In this revolutionary new era, cloud computing can provide organizations with the means and methods
needed to ensure financial stability and high quality service. Of course, there must be global cooperation
if the cloud computing process is to attain optimal security and general operational standards. With the
advent of cloud computing it is imperative for us all to be ready for the revolution.
Mobile Computing
A mobile computer is effectively any computing device not constrained in its location to a desktop
or data centre. In recent years the variety of mobile computing devices available has rapidly increased. In
doing so, it has also turned from theory to reality a trend for ubiquitous computing, whereby computers
are all around us in the world, enabling access to digital content anytime, any place and anywhere.
Many people believe that the future of computing is mobile -- and, in terms of the devices that most
people and businesses use to access cyberspace, such a view is probably correct. Certainly the sale
of desktop PCs is declining. The transition to mobile computing will also have very major implications.
Not least it is already starting to make the provision of mobile Internet content as important as the
publication of web pages aimed at users of PCs. Fairly soon now the small, often handheld screen is
likely to be king -- a subject that commentators such as Don Tapscott have explained very well.
Since personal computing went mainstream in the early 1980s, most people and businesses purchased
desktop PCs not because they wanted to turn a valuable chunk of office or domestic real-estate into a
permanent home for a computer, but because it was the only option available. Today, however, this is
absolutely no longer the case, with a mobile and far less space-consuming computing
device increasingly able to fulfil the requirements of a great many users. So let's take a look at
the various types of mobile hardware that are now available.

MOBILE COMPUTING CATEGORIES
Mobile computers can usefully be divided into a number of categories. Firstly, many mobile computers
are laptops -- or basically portable versions of desktop PCs, and usually based around the same type of
hardware, and capable of running the same software applications. Secondly, since late 2011, some very thin, light laptops that meet certain Intel specifications have started to be branded as ultrabooks.
A third, if sadly declining, category of mobile computer is the netbook. These are considerably smaller
than most laptops, though usually capable of running the same or similar software as a laptop or desktop
PC. Fourthly, we then have tablet computers -- such as the Apple iPad -- which are like a laptop or netbook
computer but without the keyboard and operated via touchscreen. While some tablets run traditional
desktop operating systems such as Windows 8, the vast majority are loaded with a sleeker embedded
operating system like Apple's iOS, or Google's Android. E-book readers are then a fifth category of
mobile computer, and are effectively tablets dedicated to the presentation of electronic documents.
Decreasing in size, the sixth mobile computing category is smartphones -- which are mobile phones with
Internet connectivity. Also of pocketable size we then have media players and mobile games
consoles. Finally under mobile computing we may also include ambient computing devices that
attempt to embed digital data into mobile computer hardware that operates at the edges of our
perception.
A full discussion of every kind of device that could be considered a mobile computer is not just beyond
the scope of this chapter, but would arguably serve little purpose. What follows is therefore a summary -
- including specific, key product examples as appropriate -- of the aforementioned device categories and
how they are likely to develop. Other good sources of information on mobile computing include suppliers
Fused Mobility and Clove Technology. Indeed, a quick surf around these two websites can provide you
with a very good idea of the vast range of mainstream mobile computing devices now available.
LAPTOPS: PORTABLE DESKTOP COMPUTERS
Although the term "mobile computer" has evolved to encompass smaller and more pocketable devices,
laptops or notebook computers still embody the traditional image of a portable computing device. From a
technological standpoint, laptops are essentially desktop computers repackaged for portability. As a result,
they typically feature either an Intel or AMD microprocessor, similar to those found in desktop PCs, and
run desktop operating systems such as Windows, along with applications like Microsoft Office.
Most of the hardware and software aspects discussed in this chapter also apply to laptops, mirroring their
desktop counterparts. However, laptop processors are often slightly slower to conserve battery life, and
memory and hard disk capacities are usually smaller. This is because notebook computers utilize 2.5" or
smaller hard drives, in contrast to the 3.5" drives commonly found in desktops.
In the past, choosing between a desktop and a laptop PC involved significant performance trade-offs.
However, today's laptops can be considered direct replacements for typical desktop PCs. While laptops
may be more expensive and their keyboards and touchpads (which replace mice) can pose challenges for
some users, they can effectively handle most tasks. With the widespread use of Wi-Fi wireless networking,
laptops have become the preferred choice for many individuals, except for activities like resource-
intensive video editing and 3D rendering.
ULTRABOOKS: SLEEK AND LIGHTWEIGHT LAPTOPS
Ultrabooks represent a relatively new generation of stylish and lightweight laptops that have the potential
to replace larger and bulkier models in the coming years. To be classified as an ultrabook, a laptop must
meet specific hardware specifications outlined by Intel. These requirements include being less than 21mm
thick, resuming from sleep in a matter of seconds, providing a minimum of five hours of battery life, and
incorporating anti-theft technology. Many ultrabooks also utilize solid-state drives (SSDs) instead of
traditional hard disks, weigh around a kilogram, and feature a sleek casing made of materials like
aluminum alloy or carbon fiber.
Although Intel developed the ultrabook specification and actively promotes the brand, it does not
manufacture ultrabooks itself. Instead, Intel's aim is to ensure that successive generations of thin and
lightweight laptops from various manufacturers deliver a consistent user experience. The first ultrabooks
entered the market in late 2011, introduced by brands such as Acer, Asus, Lenovo, and Toshiba, all of
which continue to offer ultrabook models (often marketed as "thin, light laptops" nowadays).
Many of the latest ultrabooks incorporate touchscreens to optimize the user experience with Windows 8.
Some ultrabooks are also referred to as "convertibles," which means they can transform from a standard
laptop/ultrabook form factor into a tablet device by rotating or folding back the screen until it rests flat on
top of the device. For the latest news and reviews on ultrabooks, you may want to visit a website called UltrabookNews.com.
NETBOOKS: SMALL AND LOW POWER LAPTOPS
Netbooks, previously known as ultramobiles or UMPCs (Ultra-Mobile Personal Computers), are compact
and energy-efficient laptops with screen sizes typically ranging from 7 to 13 inches and featuring
keyboards that are smaller than full-size. The first netbook was introduced by Psion in 2000, but it was
Asus who truly ignited the netbook market with their range of "Eee PCs" in late 2007. Initially designed
for children and casual home use, the affordable price tags (ranging from £170 to £300, depending on the
model) of the Eee PCs led to widespread popularity. In its first year, the Eee PC sold over a million units,
sparking other manufacturers to enter the netbook market. These manufacturers had to compete with the
Eee PC's price point, which had a significant impact on their profit margins.
In 2009, netbooks accounted for around 20 percent of the laptop market and experienced increasing sales,
reaching $11.7 billion globally, despite the ongoing recession and stagnant desktop and laptop sales.
However, the netbook form factor is now in a state of decline, with even Asus discontinuing netbook
models. This shift is unfortunate because netbooks still offer excellent value for money and functionality
as highly portable workhorses. Whether
rightly or wrongly, the market now favors ultrabooks for many business customers and tablets for
consumers or professionals who prefer light-keyboarding options. As a result, few new netbooks are
expected to remain available on the market by the end of 2014 (although many existing netbooks will
likely continue to be used happily).
It's worth noting that despite the diminishing popularity of netbooks, Google continues to promote a range
of specialized netbooks known as "Chromebooks." These devices feature SSDs and run Google Chrome
OS instead of Windows, with all applications accessed from the cloud.
TABLETS: A NEW ERA OF MOBILE COMPUTING
Tablets have ushered in a revolutionary era in mobile computing ever since the introduction of the Apple
iPad in 2010. While tablet devices had been seen earlier, it was the iPad that captured the world's
imagination and set the stage for their widespread adoption. Modern tablets have undergone significant
improvements, becoming sleeker, lighter, and equipped with responsive multi-touch interfaces. These
advancements make them perfect for cloud computing, granting users easy access to web-based media
content and a wide array of applications. With longer battery life and diverse options from various
manufacturers, tablets have become indispensable companions for both work and entertainment on the
go.
The tablet market has witnessed remarkable growth, with Android tablets joining the iconic iPad in
popularity. These devices have bridged the gap between casual media consumption and professional
productivity, offering detachable keyboards and stylus pens for versatile usage. Moreover, the
introduction of 5G technology has further enhanced the tablet experience, providing faster connectivity
and improved access to cloud-based services and content. Tablets have become an essential part of our
connected lives, and their continued evolution promises even greater possibilities in the dynamic world
of mobile computing.
In the past, Apple largely dominated the mainstream tablet market for about 18 months. However, the
scenario has changed significantly with the emergence of tablets based on Google's Android operating
system in 2011. Google's own Nexus range of Android tablets has gained popularity, along with
successful offerings from major manufacturers like Samsung with its Galaxy range, Sony, and Amazon's
Kindle Fire models.
Microsoft also made a notable re-entry into the tablet marketplace in October 2012 with its Surface tablets,
now in their second generation. These devices run either the full version of Windows 8.1, similar to
traditional desktops and laptops, or a cut-down tablet-only operating system called Windows RT,
specifically designed for lower-power ARM processors. The competition and diversity in the tablet
market have resulted in continuous innovations and advancements, providing users with a wide range of
options to suit their specific needs and preferences.

As the world embraces a mobile and interconnected future, tablets are expected to play an increasingly
pivotal role in shaping how we interact with technology. With ongoing advancements and improvements
in hardware and software, the journey of tablets is far from over, and exciting developments in the realm
of mobile computing await us. The tablets' adaptability, portability, and versatility make them valuable
tools for productivity, communication, and entertainment, and they are likely to remain an integral part
of our digital lives in the years to come.
E-BOOK READERS
E-book readers, also known as e-readers, made their way into the mass market around 2008 and 2009,
offering a dedicated platform for purchasing, storing, and presenting books, newspapers, and magazines.
Unlike conventional tablets, e-readers are distinct in their use of e-ink displays, which mimic the paper-
like reading experience with high contrast and resolution. E-ink screens are power-efficient as they only
consume energy when changing the image, contributing to extended battery life. However, some e-
readers, like the Kindle Paperwhite, offer an illuminated mode that utilizes constant battery power for
reading in low-light conditions.
Currently, numerous e-book readers are available in the market, with the Kindle range from Amazon
being one of the most well-known options. Amazon's e-readers allow seamless wireless downloading of
books via WiFi or 3G through its "Whispernet" service. Another major player in the e-reader market is
Barnes and Noble, offering its Nook models.
While e-book readers are not likely to replace traditional books entirely, they are predicted to revolutionize newspaper, magazine, and non-fiction publishing. With the decline of newspaper publishers due to free online content, delivering publications through e-book readers offers a potential revenue stream via subscriptions. Moreover, e-readers open new possibilities for readers to subscribe to specific sections of various publications, such as sports news from one newspaper and arts and entertainment updates from another.
As for recent innovations in e-book readers, advancements in display technology have further improved the reading experience. Some e-readers now feature color e-ink displays, offering a richer visual experience for magazines and comic books. Additionally, e-book readers have become more versatile, supporting various file formats and allowing users to access digital libraries from different sources. Integration with audiobook services and the inclusion of note-taking capabilities have also expanded the functionality of e-readers, making them valuable tools for both leisure and academic pursuits.

In conclusion, e-book readers have significantly impacted the way we consume written content,
presenting new opportunities for publishers and readers alike. With ongoing developments and
improvements, e-readers continue to evolve, making reading more accessible, engaging, and enjoyable
in the digital age.
SMARTPHONES
Around 1999, Microsoft introduced the "Pocket PC" technology platform for small organizer-sized
devices known as PDAs (personal digital assistants). However, as the mobile phone market evolved,
people preferred the convenience of having an all-in-one device rather than carrying a separate phone and
pocket computer. This led to the rise of smartphones, which combine internet access, text messaging,
camera, and voice call capabilities in a single device. Modern smartphones typically come with user-
friendly three-to-four-inch touchscreens, making it easy to use various applications. Apple's iPhones
running on iOS and devices powered by Google's Android operating system are among the top choices
in the smartphone market, offering a wide range of options to suit different user preferences and needs.
MEDIA PLAYERS AND MOBILE GAMES CONSOLES
Categorizing the wide range of mobile computing devices available today can be challenging. However,
a final category includes the diverse array of music and video media players that fit in our pockets, as well
as mobile game consoles. Some of these devices now offer web browsing and internet connectivity
features. Notable examples of media players include various models of Apple iPods, as well as devices
from Creative and Sony. Currently, the most popular mobile game consoles are Sony's PlayStation Vita
and Nintendo's two-screen 3DS. These devices provide portable entertainment for users on the move,
combining gaming capabilities with multimedia features.

EMBEDDED AND AMBIENT COMPUTING
Laptops, netbooks, tablets, smartphones, media players, and e-book readers are all recognizable as mobile
computers. However, the integration of computer processing power and wireless connectivity is now
expanding into devices that may not immediately be identified as computers. This brings us to the realm
of "ambient" mobile computing.
While laptops, ultramobile PCs, and similar devices enable ubiquitous computing, allowing people to compute
anytime and anywhere, they also demand the user's full attention. In contrast, ambient computing operates
on the periphery of our senses, utilizing our subconscious processing capabilities. Ambient computing is
less demanding and disruptive to other human activities. In the words of AmbientDevices.com, ambient
computing devices "establish tangible connections between consumers and their digital information
through wireless, self-contained products."
In essence, ambient computing seamlessly integrates technology into our surroundings, creating subtle
interfaces that enhance our interaction with digital information without imposing significant cognitive load
or interrupting our daily routines.
MOBILE COMPUTING: CONCLUSIONS
Any definition of just what constitutes a "mobile computer" inevitably remains both relative and
subjective. For example, back in 1981 one of the very first portable computers was the Osborne
1. This weighed 11.8 kg, was larger than most modern desktop PCs, and ran only on mains power unless fitted with an optional battery pack. At the other end of the scale, the Artigo Pico-ITX PC measures just 150mm x 110mm x 40mm, weighs only 520 grams, and yet is probably best categorised as a very small desktop computer.
Mobile computing is probably an area best defined at any one point in time by those devices that are
challenging paradigms and setting new consumer and business agendas. And right now this includes the
latest tablets, ultrabooks, and even hardware like the Raspberry Pi.
Ultimately, whilst mobile computing is still barely out of its infancy, it is fairly certain to represent
a larger and larger part of the future of computing development. Not least this is because desktop
computers are now a relatively mature platform offering little scope for high-return market development
for companies in the computing industry. The rising green computing agenda will also mean that desktop
computers are replaced far less regularly, in turn making new mobile computing market opportunities
even more attractive. Mobile computing also offers the potential for what Apple once called
"computing for the rest of us" -- or in other words, computing for those people who do not spend
their working day at a desk, and/or those who do not want to spend their leisure time slaved to a desktop
PC.
Mobile computing can also perhaps even be considered as more "natural" than those location- dependent
forms that have gone before. As seekers, consumers, processors, hoarders and communicators of
information, every human being is already a form of mobile computer. Increasingly smart devices
that can travel with us to help in such seeking, consuming, processing, hoarding and communicating
will hence perhaps inevitably be very widely adopted as soon as they become technically and

economically mass-viable. Indeed, one only has to look at the uptake of mobile phones to consider the
potential.
The science fiction of the last decade featured a great many robots walking beside us in servitude. However, we are perhaps far more likely to want to seek assistance from a small device that we can carry with us or find lying around the home or office than from a lumbering mechanical clone.
Virtual Organizations
Definition:
This new form of organisation, i.e., 'virtual organisation' emerged in 1990 and is also known as digital
organisation, network organisation or modular organisation. Simply speaking, a virtual organisation
is a network of cooperation made possible by, what is called ICT, i.e. Information and Communication
Technology, which is flexible and comes to meet the dynamics of the market.
Alternatively speaking, the virtual organisation is a social network in which all the horizontal and vertical
boundaries are removed. In this sense, it is a boundaryless organisation. It consists of individuals working out of physically dispersed workplaces, or even individuals working from mobile devices and not tied to any particular workspace. ICT is the backbone of the virtual organisation.
It is the ICT that coordinates the activities, combines the workers' skills and resources with an objective
to achieve the common goal set by a virtual organisation. Managers in these organisations coordinate and
control external relations with the help of computer network links. The virtual form of organisation is
increasing in India also. Nike, Reebok, Puma, Dell Computers, HLL, etc., are the prominent companies
working virtually.
While considering the issue of flexibility, organisations may have several options like flexi-time, part-time work, job-sharing, and home-based working. Here, one of the most important issues involved in attaining the flexibility to respond to changes - both internal and external - is determining the extent of control, or the amount of autonomy, that the virtual organisation will impose on its members.
This follows from the paradox of flexibility itself: an organisation must possess some procedures that enhance its flexibility and keep it from becoming rigid, while simultaneously maintaining some stability to avoid chaos.
Characteristics:
A virtual organisation has the following characteristics:
1. Flat organisation
2. Dynamic
3. Informal communication
4. Power flexibility
5. Multi-disciplinary (virtual) teams
6. Vague organisational boundaries
7. Goal orientation

8. Customer orientation
9. Home-work
10. Absence of apparent structure
11. Sharing of information
12. Staffed by knowledge workers.
In fact, this list of the characteristics of a virtual organisation is not exhaustive but illustrative only. One can add more characteristics to this list.
Types of virtual organisations:
Depending on the degree or spectrum of virtuality, virtual organisations can be classified into three broad
types as follows:
1. Telecommuters
2. Outsourcing employees/competencies
3. Completely virtual
A brief description of these follows in turn.
Telecommuters:
These companies have employees who work from their homes. They interact with the workplace via
personal computers connected with a modem to the phone lines. Examples of companies using
some form of telecommuting are Dow Chemicals, Xerox, Coherent Technologies Inc., etc.
Outsourcing Employees/Competencies:
These companies are characterised by the outsourcing of all/most core competencies. Areas for
outsourcing include marketing and sales, human resources, finance, research and development,
engineering, manufacturing, information systems, etc. In such cases, the virtual organisation concentrates on one or two core areas of competence and performs them with excellence, relying on outsourcing partners for the rest. For example, Nike performs product design and marketing very well and relies on information technology as a means of maintaining inter-organisational coordination with its outsourcing partners.
Completely Virtual:
These companies are metaphorically described as companies without walls that are tightly linked to a large network of suppliers, distributors, retailers and customers as well as to strategic and joint venture partners. The Atlanta Committee for the Olympic Games (ACOG) in 1996 and IBM's development effort for the PC are examples of completely virtual organisations. These types of virtual organisations are summarised in the following table:

Type of Virtual Organization | Description | Examples
Telecommuters | Companies with employees who work remotely from their homes, utilizing personal computers and modems to connect with the workplace. | Dow Chemicals, Xerox, Coherent Technologies Inc., etc.
Outsourcing Employees/Competencies | Companies that outsource most, if not all, of their core competencies, focusing on excelling in one or two areas while relying on external partners for other functions. | Nike (product design and marketing)
Completely Virtual | Companies metaphorically described as "companies without walls" that are tightly linked to a network of suppliers, distributors, retailers, customers, and partners. | Atlanta Committee for the Olympic Games (ACOG)

Software as a service
SaaS, or Software as a Service, describes any cloud service where consumers are able to access software
applications over the internet. The applications are hosted in "the cloud" and can be used for a wide
range of tasks for both individuals and organisations. Google, Twitter, Facebook and Flickr are all
examples of SaaS, with users able to access the services via any internet enabled device. Enterprise
users are able to use applications for a range of needs, including accounting and invoicing, tracking
sales, planning, performance monitoring and communications (including webmail and instant
messaging).
SaaS is often referred to as software-on-demand and utilising it is akin to renting software rather than
buying it. With traditional software applications you would purchase the software upfront as a package
and then install it onto your computer. The software's licence may also limit the number of users

and/or devices where the software can be deployed. Software as a Service users, however, subscribe to
the software rather than purchase it, usually on a monthly basis.
Applications are purchased and used online with files saved in the cloud rather than on
individual computers.
There are a number of reasons why SaaS is beneficial to organisations and personal users alike:

• Cost-Efficient: SaaS eliminates the need for additional hardware costs as the processing power is provided by the cloud provider.
• Easy Setup: There are no initial setup costs; applications are ready for use once subscribed to.
• Flexible Payment Model: Users pay for what they use, often on a monthly basis, making it cost-effective for short-term needs.
• Scalability: Users can easily access more storage or additional services on demand without installing new software or hardware.
• Automated Updates: Updates are automatically available online to existing customers, usually free of charge.
• Cross-Device Compatibility: SaaS applications can be accessed from any internet-enabled device, providing flexibility for users.
• Remote Accessibility: Applications can be accessed from any location with an internet-enabled device, freeing users from installation restrictions.
• Customization: Some SaaS applications offer customization options, allowing businesses to tailor the software to their specific needs and branding.
Office software extensively utilizes SaaS, providing a range of solutions for accounting, invoicing,
sales, and planning. Businesses can subscribe to the required software and access it online from any
office computer using a username and password. The flexibility of SaaS allows easy switching between
software based on changing needs. Additionally, businesses can set up multiple users with varying
levels of access to the software, accommodating different team sizes and requirements.
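To make the renting-versus-buying contrast concrete, the following short Python sketch compares cumulative spending under the two models. All of the figures are hypothetical and are chosen purely to show the break-even logic; real licence and subscription prices vary widely by vendor.

    # Illustrative comparison of an upfront licence versus a SaaS subscription.
    # All figures are hypothetical and serve only to show the break-even logic.
    upfront_licence = 300000      # one-off purchase price (assumed)
    annual_maintenance = 50000    # yearly support/upgrade fee for the licence (assumed)
    monthly_subscription = 15000  # SaaS fee covering hosting, support and upgrades (assumed)

    def licence_cost(years):
        """Total cost of ownership of the purchased licence after 'years'."""
        return upfront_licence + annual_maintenance * years

    def saas_cost(years):
        """Total spend on the subscription after 'years'."""
        return monthly_subscription * 12 * years

    for years in range(1, 6):
        print(years, licence_cost(years), saas_cost(years))

With these assumed figures the subscription costs less over the first couple of years, while the purchased licence becomes cheaper over a longer horizon, which illustrates why the pay-as-you-go model is described above as cost-effective for short-term needs.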
The SaaS industry has experienced significant growth and diversification since its inception. Many new
players have emerged, offering specialized SaaS solutions for various industries and use cases.
Additionally, advancements in cloud technology have further enhanced the performance, security, and
accessibility of SaaS applications. Furthermore, many SaaS providers now focus on integrating artificial
intelligence and machine learning capabilities into their software, enabling more intelligent and
personalized user experiences. This continuous innovation in the SaaS space ensures that businesses and
individuals have access to cutting-edge software solutions for their needs.
Data Exchange:
Data exchange is the process of converting data from a source schema to a target schema, ensuring that
the target data accurately represents the source data. It involves restructuring the data, which can lead
to some content loss, making it distinct from data integration. During data exchange, instances may face
constraints that make transformation impossible. Conversely, there might be multiple ways to transform
an instance, requiring the identification and justification of the "best" solution among them. This process
plays a crucial role in data management and ensures seamless data sharing and compatibility between
different systems and databases.
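As a minimal sketch of what such a source-to-target transformation can look like, the Python example below restructures a record from a hypothetical source schema into a hypothetical target schema. The field names are assumptions made for illustration, and the dropped middle name shows the kind of content loss mentioned above.

    # Minimal sketch of data exchange: restructuring a record from a
    # hypothetical source schema into a hypothetical target schema.
    source_record = {
        "first_name": "Sita",
        "middle_name": "K.",
        "last_name": "Sharma",
        "dob": "1990-04-12",          # ISO date stored by the source system
    }

    def to_target_schema(record):
        """Transform one source record into the (hypothetical) target schema."""
        birth_year = int(record["dob"].split("-")[0])
        return {
            # the target keeps a single full-name field; the middle name is lost
            "full_name": record["first_name"] + " " + record["last_name"],
            "birth_year": birth_year,   # the target stores only the year of birth
        }

    print(to_target_schema(source_record))
    # {'full_name': 'Sita Sharma', 'birth_year': 1990}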

Single-domain data exchange
Single-domain data exchange involves dealing with multiple source and target schemas, each
representing proprietary data formats within a specific domain. To streamline the process, developers
often create an exchange format or interchange format designed for that particular domain. By using
this format as an intermediate step, they can write a limited number of routines to indirectly translate
data between each source and target schema, rather than creating numerous direct translation routines
for each possible combination.
This approach significantly reduces the amount of work required compared to writing and debugging
hundreds of different direct translation routines. For instance, in geospatial data, there is the Standard
Interchange Format; for spreadsheet data, the Data Interchange Format is used; GPS coordinates are
indicated using formats like GPS eXchange Format or Keyhole Markup Language; financial data relies
on Quicken Interchange Format, while integrated circuit layout data uses GDSII.
By adopting single-domain data exchange and employing interchange formats, data compatibility and
sharing become more manageable and efficient. Developers can focus on creating specific routines for
translating data to and from the interchange format, enabling seamless data communication and
integration across diverse systems and applications within a particular domain.
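The saving can be sketched in a few lines of Python: rather than writing a direct translator for every (source, target) pair, each format only needs converters to and from one interchange representation. The formats and field names below are hypothetical.

    # Hub-and-spoke translation through a common interchange representation.
    def from_format_a(text):
        """Parse a record in hypothetical format A into the interchange dict."""
        name, amount = text.split("|")
        return {"name": name, "amount": float(amount)}

    def to_format_b(record):
        """Serialise an interchange dict into hypothetical format B."""
        return "{},{:.2f}".format(record["name"], record["amount"])

    def translate_a_to_b(text):
        return to_format_b(from_format_a(text))

    print(translate_a_to_b("Widget|125.5"))   # Widget,125.50

    # With n proprietary formats, direct translation needs n*(n-1) routines,
    # whereas the interchange approach needs only 2*n converters.
    n = 10
    print(n * (n - 1), 2 * n)                 # 90 versus 20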
Data exchange languages
Data exchange languages refer to domain-independent languages capable of handling various types of
data. These languages draw their semantic expression capabilities and qualities from natural languages,
enabling them to be used for diverse data applications. The term is also extended to encompass file
formats that multiple programs can read, including proprietary formats like Microsoft Office
documents. However, file formats lack the essential components of a real language, such as grammar
and vocabulary.
Experience has demonstrated that certain formal languages are better suited for data exchange tasks due
to their specification being driven by a formal process rather than specific software requirements. For
instance, XML, a widely used markup language on the internet, was designed to facilitate the creation
of dialects (domain-specific sublanguages). However, it lacks domain-specific dictionaries or fact types.
To ensure reliable data exchange, standard dictionaries and taxonomies, together with tool libraries such as parsers, schema validators, and transformation tools, play a crucial role in streamlining the process.
In recent times, the importance of data exchange languages has grown significantly due to the
proliferation of interconnected systems and the need for seamless data communication across various
platforms and applications. Standardization and interoperability are key factors that influence the choice
of data exchange languages, as they enable data to be understood and utilized consistently by different
software and tools. Additionally, advancements in technology have led to the development of more
efficient and versatile data exchange languages, ensuring smoother and more accurate data transfer in
various domains, including e-commerce, healthcare, and logistics.
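Because XML is cited above as a widely used exchange language, the following sketch reads a small, hypothetical XML invoice fragment using Python's standard-library parser; the element and attribute names are illustrative only.

    # Parsing a hypothetical XML invoice fragment with the standard library.
    import xml.etree.ElementTree as ET

    xml_text = """
    <invoice number="INV-001">
        <customer>Himalaya Traders</customer>
        <line item="Paper" qty="10" unit_price="250"/>
        <line item="Toner" qty="2" unit_price="4500"/>
    </invoice>
    """

    root = ET.fromstring(xml_text.strip())
    total = sum(int(line.get("qty")) * float(line.get("unit_price"))
                for line in root.findall("line"))
    print(root.get("number"), root.findtext("customer"), total)
    # INV-001 Himalaya Traders 11500.0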


Chapter 7

E-business Enabling Software Packages Case Study

7.1 Enterprise Resource Planning (ERP)
Enterprise Resource Planning (ERP) systems have played a crucial role in transforming the core
business information systems of many organizations. In the past, businesses developed their own
applications over time, and by the 1990s, attempts were made to integrate these legacy systems with
mixed success. However, the software industry recognized the need for a more comprehensive solution
and introduced ERP applications, which provide an integrated suite of modules built around a shared database and common business processes. This commonality enables organizations to streamline their operations and achieve better efficiency. While an ERP solution provides the core information system
functions, most organizations need to redesign their business processes to fully leverage its
capabilities. Custom software applications are often required to address unique industry or business
requirements.
Selecting and implementing an ERP solution is a significant undertaking for organizations. It may
involve substantial financial investment and require the collaboration of various stakeholders,
including managers, users, analysts, technical specialists, programmers, and consultants. The ERP
implementation and integration often represent the largest information system project an organization
embarks upon.
Systems analysts play a vital role in the ERP journey. They may be involved in the decision-making
process to select and purchase the appropriate ERP solution. More commonly, analysts are engaged in
customizing the ERP solution to align with the organization's needs and redesigning business processes
to optimize its usage. Furthermore, if custom-built applications are necessary alongside the ERP core
solution, the ERP system's architecture significantly impacts the analysis and design of these custom
applications, ensuring smooth coexistence and interoperability.
As technology advances, ERP systems continue to evolve, offering enhanced functionalities and
adaptability. Modern ERP solutions now incorporate cutting-edge technologies such as artificial
intelligence, machine learning, and data analytics, empowering organizations to make data-driven
decisions and achieve greater agility in the ever-changing business landscape. The latest ERP systems
also focus on user-friendly interfaces and mobile accessibility, enabling employees to access critical
data and perform tasks from anywhere at any time. Cloud-based ERP solutions have gained popularity,
providing scalability and cost-effectiveness by eliminating the need for on-premises infrastructure.
Additionally, ERP systems are playing a significant role in promoting sustainability and environmental
responsibility. With the ability to track and optimize resource usage, organizations can minimize waste
and carbon footprint, contributing to a greener future. As businesses face new challenges and
opportunities, ERP systems will continue to evolve, offering innovative solutions to drive growth and
efficiency. From small businesses to large enterprises, ERP remains a fundamental tool in orchestrating
and optimizing operations for success in a rapidly changing digital world.


Fig 8-1 Enterprise Application


ERP Benefits
The benefits of ERP in any organization are beyond doubt. Some of the key benefits are listed below.
• Reduced Planning Cycle Time: ERP streamlines and automates various planning processes, resulting in a significant reduction in the time required for planning activities.
• Reduced Manufacturing Cycle Time: With ERP, manufacturing processes are optimized and coordinated more efficiently, leading to a reduction in the time it takes to produce goods.
• Reduced Inventory: By providing real-time visibility into inventory levels, ERP enables organizations to maintain optimal inventory levels, minimizing excess stock and reducing carrying costs.
• Reduced Error in Ordering: ERP systems centralize and automate the order management process, reducing errors and improving accuracy in order processing.
• Reduced Manpower Requirements: Through automation and process optimization, ERP systems can help reduce the need for manual labor, leading to cost savings and increased operational efficiency.
• Enables Faster Response to Changing Market Situations: ERP provides organizations with timely and accurate data, empowering them to respond quickly and effectively to market changes, customer demands, and industry trends.
• Better Utilization of Resources: ERP systems facilitate effective resource planning and allocation, ensuring that resources such as materials, equipment, and manpower are utilized optimally, leading to improved productivity and cost savings.

• Increased Customer Satisfaction: By integrating various business functions and providing a holistic view of customer interactions, ERP helps enhance customer service and satisfaction through improved responsiveness and personalized experiences.
• Enables Global Outreach: ERP systems support multi-site operations, multiple languages, and international regulations, enabling organizations to expand their reach and operate globally with ease.
Implementation
ERP systems affect both the internal and external operations of an organization. Hence successful
implementation and use are critical to organizational performance and survival (Markus et al.,
2000). ERP implementation brings with it tremendous organizational change, both cultural and
structural. This is on account of the best practice business processes that ERP systems are based on.
This calls for ERP implementations to be looked at from strategic, organizational and technical
dimensions. The implementation thus involves a mix of business process change and software
configuration to align the software and the business processes (Holland and Light, 1999).
There are two strategic approaches to ERP system implementation. The first approach is where a
company goes for the plain vanilla version of ERP. Here the organization has to reengineer the business
process to fit the functionality of the ERP system which brings with it major changes in the working of
the organization. This approach will take advantage of future upgrades, and allow organizations to
benefit from best business processes. The second approach is where the ERP system is customized to
fit the business processes of the organization. This will not only slow down the implementation but also
will introduce new bugs into the system and make upgrades difficult and costly. ERP vendors advise organizations to take the first approach and focus on process changes.
One third of ERP implementations worldwide fail owing to various factors. One major factor for failure is considering ERP implementation to be a mere automation project instead of a project involving change management. It is a business solution rather than an IT solution, contrary to how it is perceived by most organizations. Yet another reason for failure is over-customization of the ERP system. Therefore,
organizations need to very carefully go about their ERP implementations, if they are to be successful.
Most large companies have either implemented ERP or are in the process of doing so. Several large
companies in India, both in the public and private sectors, have successfully implemented ERP and are
reaping the benefits. Some of them are Godrej, HLL, Mahindra & Mahindra and IOC. With the near
saturation in the large enterprise market, ERP vendors are looking to tap the potential in the SME
segment (Davenport, 1999). The spending on ERP systems worldwide is increasing and is poised for
growth in the next decade (Yen et al., 2002). Some of the reasons for this are:
• Vendors are continuously increasing the capabilities of their ERP system by adding
additional functionality like Business Intelligence, Supply Chain, and CRM, etc.
• Vendors have shifted to web-based ERP.
• The demand for web-based ERP will increase due to the perceived benefits of e-commerce.
• There are several markets that are yet unexplored.

According to an AMR Research report, the Enterprise Applications Market Sizing Report, Enterprise Resource Planning vendor revenue across segments is expected to grow from $28.8 billion in 2006 to $47.7 billion by 2011.
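As a quick check, the growth quoted above implies a compound annual growth rate that can be worked out in a couple of lines; the figures are those reported, and the calculation itself is the standard CAGR formula.

    # Implied compound annual growth rate (CAGR) for the quoted ERP figures:
    # $28.8 billion in 2006 growing to $47.7 billion by 2011.
    start, end, years = 28.8, 47.7, 2011 - 2006
    cagr = (end / start) ** (1 / years) - 1
    print("Implied CAGR: {:.1%}".format(cagr))   # roughly 10-11% per year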
Issues and Challenges
Though the market for ERP seems to be growing, there are several issues and challenges one has to
contend with when implementing an ERP system in the SME segment. Some of these are:
• Awareness: There is a lack of awareness among SMEs regarding ERP vendors, applications, and the
capabilities of ERP systems. Many SMEs are unaware of what ERP systems are and how they can benefit
their business. Some SMEs view ERP as a magic solution that can solve all their business problems,
including quality and process defects. It is essential to create awareness about ERP systems and educate
SMEs about their advantages, such as improved execution of business processes, transparency, and
visibility within the organization.
• Perception: SMEs often perceive ERP as a solution meant only for large firms due to the perceived high
costs of acquisition, implementation, and maintenance, as well as the complexity associated with ERP
systems. Some SMEs may even believe that they do not require an ERP system for their operations.
Addressing these perceptions and highlighting the benefits and cost-effectiveness of ERP in the SME
context is crucial.
• Past Implementations: SMEs may have heard about high-profile ERP failures, including cases that have
led to firms going bankrupt. Some SMEs that have previously implemented ERP systems may have
experienced failures, leading to skepticism and reluctance to pursue ERP implementations. Overcoming
these negative perceptions and showcasing successful ERP implementations in similar SMEs is essential
to build confidence in the system.
• Approach to Implementation: ERP vendors typically advise SMEs to align their business processes with
the standard functionality of the ERP system, embracing best practices. This approach, known as the "plain
vanilla" approach, helps reduce implementation costs. However, SMEs often have unique processes that
they consider vital to their operations and are unwilling to change. Consequently, SMEs tend to request
extensive customization of the ERP system to meet their specific requirements, which increases
implementation costs. Finding a balance between customization and adopting standardized processes is
crucial for SMEs.
• Cost: SMEs generally have limited capital compared to larger organizations. The financial implications
of ERP implementation, including acquisition, implementation, and maintenance costs, can be a
significant challenge for SMEs. Finding cost-effective solutions, exploring financing options, and
considering the long-term benefits of ERP is essential for successful implementation.
• Change Management: ERP implementations often fail when they are treated solely as automation
projects, neglecting the crucial aspect of change management. People's resistance to change and the lack
of proper change management strategies can result in the system not being effectively utilized. Ensuring
that the organization is prepared for the cultural and operational changes brought about by the ERP system
is vital.

• Limited Resources: Most SMEs lack an in-house IT team and must rely on external agencies or
consultants for assistance during the implementation process. This dependency on external resources adds
to the overall implementation costs for SMEs.
Before embarking on an ERP implementation journey, organizations, particularly SMEs, need to evaluate
their readiness for ERP. Factors to consider include:
• Infrastructure resource planning,
• Education about ERP,
• Human resource planning,
• Top management commitment,
• Training facilities, and
• The willingness to allocate the right people for the implementation.
Assessing preparedness in these areas will help organizations set realistic expectations and ensure a
smoother ERP implementation process.

7.2 Supply Chain Management (SCM): Introduction


In today's business landscape, organizations are increasingly focusing on extending their enterprise
applications beyond their core functions. They are seeking to establish efficient collaboration with their
suppliers and distributors to manage the seamless flow of raw materials and products across their networks.
To achieve this, organizations are leveraging Supply Chain Management (SCM) applications that utilize
the Internet as a platform for integration and communication.
It is important to recognize that a supply chain encompasses multiple businesses and carriers working
together to ensure the smooth delivery of goods to end consumers, such as restaurants having an
uninterrupted supply of food. Any disruptions or issues in any part of the supply chain can have adverse
effects on all participants. Therefore, many businesses are adopting SCM software technology to plan,
implement, and manage their supply chains effectively. The margin provides a list of examples of SCM
vendors. (It is worth noting that several ERP application vendors are also expanding their software to
include SCM capabilities. However, the SCM market is expected to undergo consolidation as there are
currently too many vendors for all to succeed.)
SCM applications hold significant relevance for systems analysts, much like ERP applications. As an
analyst, you may be involved in evaluating and selecting an SCM package that best suits the organization's
requirements. Additionally, you may be responsible for implementing and customizing the chosen SCM
solution to align with the organization's needs. Furthermore, your role may extend to redesigning existing
business processes to integrate seamlessly with the SCM solution.
Reverse Logistics
Reverse logistics refers to the systematic planning, implementation, and control of the efficient and cost-
effective flow of raw materials, in-process inventory, finished goods, and associated information from the
point of consumption back to the point of origin. The purpose of reverse logistics is to recapture value
from these goods or ensure their proper disposal. It involves the movement of goods away from their
typical final destination for various reasons, such as value recapture or responsible disposal. The scope of
reverse logistics may also encompass activities like remanufacturing and refurbishing.

The growth of reverse logistics for manufactured products is closely tied to the rapid advancements in
technology and the subsequent decrease in prices as newer and improved products enter the supply chain
at an accelerated pace. Given the narrow profit margins and intense competition, effective management of
the supply chain is crucial to avoid adverse consequences.
Organizations equipped with the necessary infrastructure to capture and analyze the combined value of
components in real-time, along with intelligent decision-making based on factors like refurbishment costs,
resale value, spare parts, repairs, and overall demand, will not only enhance their profitability but also gain
a competitive edge by outperforming and potentially eliminating their rivals. In essence, this is a modern
case of Darwinism, where survival favors the fittest. Collaboration and integration within Supply Chain
Logistics are essential for organizations to thrive; otherwise, they risk being categorized as endangered
species. Even the mighty predator, the Tyrannosaurus Rex, met its demise due to the ongoing progress of
evolution. In today's world, technology drives evolution at an astonishing pace. The ability to effectively
capture, transfer, integrate, and facilitate intelligent data analysis is comparable to the invention of fire. It
is this capability that will differentiate companies that can adapt swiftly from those that become mired in
sluggish responses and suffer a similar fate to that of the tar pits.

7.3 Sales Force Automation


Sales Force Automation (SFA) technologies have experienced significant growth and adoption in the past
decade, becoming a competitive imperative for many organizations (Blodgett, 1995; Schafer, 1997; Stein,
1998). These technologies provide automated sales support and integration of sales data with other
corporate information, with a primary goal of improving productivity. SFA enables sales teams to focus
more on direct selling activities by reducing the time spent on non-selling tasks.
Despite its potential benefits, successful SFA implementation has proven elusive (Schafer, 1997; Stein,
1998). Recent research has identified positive perceptions of SFA technologies as critical factors for
successful adoption. Negative perceptions, particularly regarding SFA as a tool for closely managing the
sales force, have led to resistance and implementation failures. However, academic literature lacks in-
depth insights into this crucial issue, warranting further investigation.
Sales Force Automation (SFA) software is a type of program that automates various business tasks, such
as sales processing, inventory control, and customer interaction tracking. It also facilitates the analysis of
sales forecasts and performance. Businesses can choose from a range of SFA software products, including
custom-developed solutions or existing packages like Interact Commerce's ACT and GoldMine
Software's GoldMine. These SFA packages typically offer features like a Web-ready database, e-mail
integration, and customizable templates, following a three-tiered architecture to reduce programming
demands on clients. Module-based designs allow users to tailor the software to their specific needs.
In the evolving landscape of SFA, a notable development occurred in August 2000 when Oracle released
OracleSalesOnline.com, a free CRM software package designed for medium-to-large enterprises with
mobile workforces. This package provides online access to essential information, such as contacts,
schedules, and performance tracking, through the included database program. Being based on the
Application Service Provider (ASP) model, all data and storage are maintained at an Oracle facility,
enabling access from any internet connection without requiring special hardware or software. The
package also includes online staff training to ensure efficient usage.


As SFA technologies continue to evolve, organizations need to address challenges associated with
implementation and user perceptions to maximize their benefits. A deeper understanding of user needs,
seamless integration with existing systems, and continuous updates to align with changing market
demands will be crucial for successful SFA adoption.

7.4 Customer Relationship Management:


Customer Relationship Management (CRM) is a strategy that companies use to build strong relationships
with their customers and increase sales. It involves implementing CRM solutions that allow customers to
access information and help themselves through the Internet. The main focus of CRM is the customer, and
its goal is to not only provide effective customer support but also to understand customers better for
improved customer relations and marketing.
CRM solutions come in various forms, and there are many vendors offering CRM software. Just like with
Supply Chain Management (SCM), many Enterprise Resource Planning (ERP) vendors are developing
or acquiring CRM capabilities to enhance their ERP solutions. However, over time, there will likely be
consolidation in the market through acquisitions and attrition, resulting in fewer players.
For systems analysts, CRM technology has similar implications as ERP and SCM. They may need to
interface new applications with the core CRM system of the enterprise, ensuring seamless integration and
data exchange.
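As an illustration of such an interface, the sketch below pushes a newly captured customer record to a CRM system over a REST-style API using the Python requests library. The endpoint URL, field names, and authentication scheme are hypothetical placeholders rather than any particular vendor's API.

    # Hypothetical sketch of an integration point between an in-house
    # application and a CRM system that exposes a REST API.
    import requests

    def push_customer_to_crm(customer, api_key):
        """Send a customer record to the (hypothetical) CRM endpoint."""
        response = requests.post(
            "https://crm.example.com/api/customers",   # placeholder endpoint
            json={
                "name": customer["name"],
                "email": customer["email"],
                "segment": customer.get("segment", "retail"),
            },
            headers={"Authorization": "Bearer " + api_key},
            timeout=10,
        )
        response.raise_for_status()      # surface integration failures early
        return response.json()["id"]     # assumed shape of the CRM's reply

In practice the analyst would map calls like this onto the actual API published by the organisation's chosen CRM vendor.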
In simpler terms, CRM is all about businesses building strong relationships with their customers. They use
technology to help customers find information on their own and provide better support. This helps
companies understand their customers and improve how they interact with them. Just like how companies
manage their supply chains, they also need to manage their relationships with customers effectively. There
are many software options available to help with this, and companies that provide other business software

are also adding CRM capabilities. Systems analysts play a role in making sure new applications work well
with the main CRM system of a company.
Benefits
These tools have been shown to help companies attain these objectives:
1. Streamlined sales and marketing processes: CRM systems integrate sales and marketing data,
enabling better coordination and collaboration between teams. This leads to more efficient lead
management and customer targeting.
2. Increased sales productivity: CRM tools provide sales representatives with a centralized platform to
manage leads, track customer interactions, and automate repetitive tasks. This boosts their
productivity and allows them to focus on revenue-generating activities.
3. Opportunities for cross-selling and up-selling: With access to comprehensive customer data,
businesses can identify cross-selling and up-selling opportunities, leading to increased revenue from
existing customers.
4. Improved service, loyalty, and retention: CRM systems help businesses deliver personalized and
timely customer service, enhancing customer satisfaction, loyalty, and retention rates.
5. Enhanced call center efficiency: CRM solutions streamline call center operations by providing agents
with a 360-degree view of customer information, leading to faster query resolution and improved
customer experience.
6. Higher close rates: By effectively managing the sales pipeline and leads, CRM systems help sales
teams close deals more efficiently, increasing overall sales success rates.
7. Better profiling and targeting: CRM software captures valuable customer insights, allowing
businesses to create detailed customer profiles and tailor marketing campaigns for better targeting.
8. Reduced expenses: Automation of sales and marketing processes reduces manual efforts and
operational costs, contributing to overall expense reduction.
9. Increased market share: CRM tools enable businesses to make data-driven decisions, allowing them
to identify market trends and seize opportunities for growth and expansion.
10. Higher overall profitability: Through improved efficiency, customer satisfaction, and targeted
marketing efforts, CRM systems contribute to increased revenue and profitability.
11. Accurate marginal costing: CRM data provides insights into the cost of acquiring and retaining
customers, helping businesses calculate accurate marginal costs for products and services.
12. Personalized customer interactions: CRM systems enable businesses to store detailed customer
information and preferences, allowing them to personalize interactions and provide tailored solutions.
13. Enhanced customer analytics: CRM tools offer powerful analytics capabilities, enabling businesses
to gain valuable insights from customer data and make data-driven decisions.
14. Improved customer communication: CRM systems facilitate seamless communication with customers
through various channels, such as email, social media, and chat, leading to better engagement.
15. Efficient lead management: CRM solutions automate lead capture, distribution, and tracking, ensuring
that sales teams can prioritize leads and follow up promptly.
16. Enhanced customer satisfaction metrics: CRM systems enable businesses to measure and analyze
customer satisfaction levels, identifying areas for improvement.

17. Integration with other business systems: Modern CRM platforms offer integration with other essential
business tools, such as marketing automation, accounting, and e-commerce platforms, creating a
unified business ecosystem.
Challenges
Despite the benefits, many companies are still not fully leveraging these tools and services to align
marketing, sales, and service to best serve the enterprise.
1. Complexity of implementation, especially for large enterprises: Integrating CRM systems with
existing processes and databases can be complex, requiring significant planning and resources.
2. Evolution of CRM tools from contact management to deal tracking, opportunities management, and
sales pipeline monitoring: The evolving nature of CRM software may require businesses to adapt
and invest in new functionalities.
3. Adoption of on-premises software, requiring IT infrastructure management by companies: Some
businesses choose to host CRM systems on their own servers, leading to additional IT maintenance
and support responsibilities.
4. Fragmented implementation with separate initiatives by individual departments: Lack of
coordination between different departments may result in isolated CRM implementations that fail to
deliver a unified view of customer data.
5. Siloed thinking and decision processes leading to incompatible systems and dysfunctional processes:
If departments work independently without considering the bigger picture, it can result in disjointed
systems and inefficient workflows. Cooperation and collaboration are essential for successful CRM
implementation.
6. Data quality and management: Maintaining accurate and up-to-date customer data within CRM
systems can be challenging, especially for organizations with large databases.
7. User adoption and training: Successfully implementing CRM requires proper user training and
adoption strategies to ensure that employees embrace the system and utilize its features effectively.
8. Data security and privacy concerns: Storing sensitive customer data in CRM systems raises security
and privacy concerns, necessitating robust security measures to safeguard against potential breaches.
9. Scalability: As businesses grow and customer data increases, CRM systems must be scalable to
accommodate the expanding data volume and user base.
10. Cost of implementation and maintenance: Implementing and maintaining a CRM system can involve
significant costs, including software licensing, infrastructure setup, and ongoing support and
upgrades.


Fig 8-2 Customer Relationship Management Model

Chapter 8
Information System Security, Protection of Information Assets and Control

8.1 System Vulnerability and Abuse
Prior to the advent of computer automation, individual and organizational data were typically stored and
safeguarded as physical records, distributed across various business or organizational units. However, with
the prevalence of information systems, data is now concentrated within computer files, which can be
accessed by numerous individuals and external groups.
Storing large volumes of data in electronic format exposes it to a wider range of threats compared to its
manual counterpart. The interconnected nature of information systems through communication networks
further amplifies the potential risks. Unauthorized access, misuse, or fraudulent activities are not confined
to a specific location but can occur at any entry point within the network.
Why Systems Are Vulnerable
In contemporary information systems, vulnerabilities arise from a multitude of factors encompassing
technical, organizational, and environmental elements, often compounded by inadequate management
decisions. Figure 9-1 depicts the prevalent threats faced by these systems, which operate in a multitier
client/server computing environment. At each layer and in the communication between layers, there exist
potential weaknesses.
Starting from the client layer, users themselves can inadvertently cause harm by introducing errors into the
system or gaining unauthorized access. Additionally, data flowing over networks is susceptible to
interception, leading to data theft or unauthorized message alterations. Even radiation can disrupt network
functionality at various points. Moreover, attackers can launch denial of service attacks or deploy malicious
software to disrupt the operation of websites. Once intruders penetrate corporate systems, they can wreak
havoc by either destroying or tampering with crucial data stored in databases or files.
Web-based applications, with their interconnected components like Web clients, servers, and corporate
information systems linked to databases, present their unique set of security challenges and vulnerabilities.
Outside of intentional attacks, various technical malfunctions can occur, such as hardware breakdowns,
improper configurations, or damage due to criminal acts. Computer software is also prone to failure due to
programming errors, improper installation, or unauthorized changes. Furthermore, natural disasters like
power failures, floods, and fires can disrupt computer systems and networks.
An additional layer of vulnerability arises when organizations engage in domestic or offshore outsourcing,
entrusting valuable information to networks and computers beyond their direct control. In such cases,
without robust safeguards, critical data becomes susceptible to loss, destruction, or unauthorized access,
potentially exposing sensitive trade secrets or violating personal privacy. The practice of outsourcing
application development to offshore companies also raises concerns about the insertion of hidden code that
might later grant unauthorized control over the application or its data.
Overall, the security of information systems is a multifaceted challenge that necessitates a comprehensive
approach to address technical, managerial, and environmental factors, ensuring the protection of critical data
and smooth system operations.


Fig 9-1 Contemporary security challenges and vulnerabilities


Internet Vulnerabilities
Large public networks such as the Internet are more vulnerable than internal networks because they are
virtually open to anyone. The Internet is so huge that when abuses do occur, they can have an enormously
widespread impact. When the Internet becomes part of the corporate network, the organization's
information systems are even more vulnerable to actions from outsiders.
Computers that are constantly connected to the Internet by cable modems or Digital Subscriber Line
(DSL) are more open to penetration by outsiders because they use fixed Internet addresses where they
can be easily identified. (With dial-up service, a temporary Internet address is assigned for each
session.) A fixed Internet address creates a fixed target for hackers.
Telephone service based on Internet technology can be more vulnerable than the switched voice network
if it does not run over a secure private network. Most Voice over IP (VoIP) traffic over the public Internet
is not encrypted, so anyone linked to a network can listen in on conversations. Hackers can intercept
conversations to obtain credit card and other confidential personal information or shut down voice service
by flooding servers supporting VoIP with bogus traffic.
Vulnerability has also increased from widespread use of e-mail and instant messaging (IM). E-mail can
contain attachments that serve as springboards for malicious software or unauthorized access to internal
corporate systems. Employees may use e-mail messages to transmit valuable trade secrets, financial data,
or confidential customer information to unauthorized recipients. Popular instant messaging applications
for consumers do not use a secure layer for text messages, so they can be intercepted and read by
outsiders during transmission over the public Internet. IM activity over the Internet can in some cases be
used as a back door to an otherwise secure network. (IM systems designed for corporations, such as
IBM's Sametime, include security features.)

The Internet, with its vast interconnectedness and global reach, presents a multitude of vulnerabilities that
can jeopardize the security and privacy of individuals, organizations, and the overall integrity of digital
systems. Understanding these vulnerabilities is essential for effectively mitigating risks and implementing
robust security measures.
Network Attacks: The Internet is susceptible to various types of network attacks, such as Distributed
Denial-of-Service (DDoS) Attacks, where multiple compromised systems flood a target with excessive
traffic, rendering it inaccessible. Other attacks include network eavesdropping, packet sniffing, and
network spoofing, which can lead to unauthorized access, data interception, and identity theft.
Malware and Exploits: Malicious software, commonly known as malware, poses a significant threat on
the Internet. Malware includes viruses, worms, Trojans, ransomware, and spyware, which can infect
systems, compromise data, and exploit vulnerabilities in software or operating systems. Exploit kits, which
bundle known software vulnerabilities, are frequently employed by cybercriminals to target unsuspecting
users.
Phishing and Social Engineering: Phishing attacks rely on deceptive tactics to trick users into revealing
sensitive information, such as passwords or credit card details, by masquerading as trustworthy entities.
Social engineering techniques exploit human psychology and manipulate individuals into divulging
confidential information or granting unauthorized access.
Weak Authentication and Password Security: Weak or easily guessable passwords, along with poor
authentication mechanisms, contribute to Internet vulnerabilities. Brute-force attacks, password cracking,
and credential stuffing are common methods used to exploit weak authentication and gain unauthorized
access to systems and accounts.
Software and System Vulnerabilities: Flaws and vulnerabilities in software applications, operating
systems, and network protocols provide opportunities for cyber attackers to exploit weaknesses. These
vulnerabilities can be used to gain unauthorized access, execute arbitrary code, or steal sensitive data.
Internet of Things (IoT) Security: The proliferation of interconnected IoT devices introduces new
vulnerabilities. Insecure IoT devices, lacking proper security controls, can be compromised, leading to
privacy breaches, unauthorized access to home or corporate networks, and even physical safety risks.
Insider Threats: Trusted individuals within organizations can pose a significant risk to Internet security.
Insider threats can involve intentional or unintentional actions that compromise systems, steal sensitive
data, or disrupt operations from within an organization's network.
Wireless Security Challenges
Wireless networks using radio-based technology remain vulnerable to penetration due to their inherent
characteristics. Radio frequency bands are relatively easy to scan, making it possible for unauthorized
users to detect and exploit vulnerabilities in wireless networks. While the range of Wireless Fidelity (Wi-
Fi) networks is typically limited to a few hundred feet, the use of external antennae can extend this range
up to one-fourth of a mile, increasing the potential attack surface.
Local area networks (LANs) that use the 802.11b (Wi-Fi) standard, while convenient for their ease of
use, can be susceptible to penetration by outsiders armed with laptops, wireless cards, external antennae,
and freeware hacking software. These tools enable hackers to identify unprotected networks, monitor
network traffic, and potentially gain unauthorized access to the Internet or even corporate networks.
Wi-Fi transmission technology employs spread spectrum transmission, spreading the signal over a wide
range of frequencies. While this increases signal reliability and availability, it also inadvertently makes it
easier for stations to find and hear one another. The Service Set Identifiers (SSID) used to identify access
points in a Wi-Fi network are broadcast multiple times and can be picked up fairly easily by intruders'
sniffer programs (see figure 9-2), allowing them to eavesdrop on network traffic and exploit security
weaknesses.
Despite awareness of such vulnerabilities, many wireless networks in various locations still lack basic
protections against war driving. In war driving attacks, eavesdroppers drive around buildings or park
outside to intercept wireless network traffic, further highlighting the need for improved security measures
to safeguard wireless communications.
In conclusion, as technology continues to evolve, so do the methods and tools used by malicious actors
to exploit vulnerabilities in wireless networks. Implementing robust security measures, such as
encryption, strong authentication, and regular security audits, is essential to protect sensitive data and
maintain the integrity of wireless communication.

Fig 9-2 Wi-Fi security challenges


The 802.11 standard specifies the SSID as a form of password for a user's radio network interface
card (NIC) to join a particular wireless network. The user's radio NIC must have the same SSID as the
access point to enable association and communication. Most access points broadcast the SSID multiple
times per second. A hacker can employ an 802.11 analysis tool to identify the SSID. (Windows XP has
capabilities for detecting the SSID used in a network and automatically configuring the radio NIC within
the user's device.) An intruder that has associated with an access point by using the correct SSID can then
obtain a legitimate IP address to access other resources on the network because many wireless LANs
automatically assign IP addresses to users as they become active. This enables an intruder who has
illicitly associated with a wireless LAN to use the Windows operating system to determine which
other users are connected to the network, and even to click on other users' devices, locate their
documents folders, and open or copy their files. This is a serious problem many end users overlook when
connecting to access points at airports or other public locations.
Intruders can also leverage the information they gather about Internet Protocol (IP) addresses and SSIDs
to set up rogue access points on different radio channels, strategically positioned near users. This tactic
compels a user's radio NIC to associate with the rogue access point, granting hackers the ability to capture
the usernames and passwords of unsuspecting users.
The initial security standard developed for Wi-Fi, known as Wired Equivalent Privacy (WEP), proves to
be inadequate in providing robust protection. While WEP is integrated into all standard 802.11 products,
its usage is optional. Users must manually activate it, but many neglect to do so, leaving numerous access
points vulnerable. The fundamental WEP specification mandates that an access point and all its users share
the same 40-bit encrypted password, which can be easily decrypted by hackers with access to a small
amount of network traffic.
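To illustrate just how little protection a 40-bit shared key offers, the following back-of-the-envelope sketch assumes, purely for illustration, an attacker able to test one billion keys per second; in practice WEP falls even faster to statistical attacks on its encryption, but the arithmetic alone shows why the keyspace is too small.

keyspace = 2 ** 40                  # number of possible 40-bit keys (about 1.1 trillion)
guesses_per_second = 1_000_000_000  # assumed attacker speed: one billion keys per second

worst_case_seconds = keyspace / guesses_per_second
print(f"Keyspace size: {keyspace:,} keys")
print(f"Worst-case exhaustive search: {worst_case_seconds / 60:.1f} minutes")
# Prints roughly 18 minutes; on average only half the keyspace must be searched.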
To address these security shortcomings, manufacturers of wireless networking products are now enhancing
security measures by offering stronger encryption and authentication systems. These advancements aim
to bolster the overall security of wireless networks and mitigate the risks associated with unauthorized
access and data breaches.
Malicious Software: Viruses, Worms, Trojan Horses, and Spyware
Malware, a term that refers to various types of malicious software, includes threats such as computer
viruses, worms, and Trojan horses. A computer virus, one form of malware, operates by attaching itself to
other software programs or data files, allowing it to be executed, often without the user's knowledge or
consent. A computer virus carries a "payload," a set of instructions that it performs once activated.
The payload's effects can vary widely: some are relatively benign, merely triggering a display message or
image, while others can be highly destructive. These harmful payloads can destroy data, obstruct computer
memory, reformat a computer's hard drive, or cause programs to malfunction. The propagation of viruses
typically relies on human actions, such as the sharing of an email attachment or the copying of an infected
file.
In more recent times, the evolution of viruses includes the creation of polymorphic and metamorphic
viruses. These advanced forms of malware can change their code as they propagate, making them much
harder to detect and remove. Furthermore, the increasing prevalence of zero-day attacks, where hackers
exploit previously unknown vulnerabilities before they can be patched, also underscores the ever-evolving
threat landscape. The use of sophisticated phishing techniques and social engineering to trick users into
installing malware also underlines the complex and multidimensional nature of contemporary cyber
threats.
Therefore, a comprehensive approach to cybersecurity that encompasses advanced detection methods, user
education, and robust system protections is paramount to ward off these threats and safeguard digital
resources.
Many recent attacks have come from worms, which are independent computer programs that copy
themselves from one computer to others over a network. (Unlike viruses, they can operate on their own
without attaching to other computer program files and rely less on human behavior in order to spread
from computer to computer. This explains why computer worms spread much more rapidly than computer
viruses.) Worms can destroy data and programs as well as disrupt or even halt the operation of computer
networks.
Worms and viruses are often spread over the Internet from files of downloaded software, from files
attached to e-mail transmissions, or from compromised e-mail messages. Viruses have also invaded
computerized information systems from "infected" disks or infected machines. Today e-mail attachments
are the most frequent source of infection, followed by Internet downloads and Web browsing.
Now viruses and worms are spreading to wireless computing devices (Bank, 2004). Mobile device
viruses could pose serious threats to enterprise computing because so many wireless devices are now
linked to corporate information systems.
Over 80,000 viruses and worms are known to exist, with about 25 new ones detected each day. Over the
past decade, worms and viruses have caused billions of dollars of damage to corporate networks, e-mail
systems and data. According to the research firm Computer Economics, viruses and worms caused an
estimated $12.5 billion in damage worldwide in 2003 (Hulme, 2004).
A Trojan horse is a software program that appears to be benign, but then does something other than
expected. The Trojan horse is not itself a virus because it does not replicate, but is often a way
for viruses or other malicious code to be introduced into a computer system. The term Trojan horse is
based on the huge wooden horse used by the Greeks to trick the Trojans into opening the gates to their
fortified city during the Trojan War. Once inside the city walls, Greek soldiers hidden in the horse revealed
themselves and captured the city.
An example of a modern-day Trojan horse is Trojan Xombe, which was detected on the Internet in early
2004. It masqueraded as an e-mail message from Microsoft, directing recipients to open an attached file
that purportedly carried an update to the Windows XP operating system. When the attached file was
opened, it downloaded and installed malicious code on the compromised computer. Once this Trojan
horse was installed, hackers could access the computer undetected, steal passwords, and take over the
machine to launch Denial-of-Service Attacks on other computers (Keizer, 2004).
Spyware, in some cases, also falls under the category of malicious software. These small programs install
themselves on computers to monitor users' web surfing activities and display targeted advertisements.
Some advertisers use spyware to collect information about users' purchasing habits for tailored
advertisements. While many users find such spyware annoying, critics express concerns about the invasion
of computer users' privacy. However, certain types of spyware are far more malicious. Keyloggers, for
example, record every keystroke made on a computer to steal software serial numbers, launch internet
attacks, gain access to email accounts, retrieve passwords for protected systems, or obtain personal
information like credit card numbers. Other spyware programs alter web browser homepages, redirect
search requests, or degrade computer performance by consuming excessive memory. Nearly 1,000 forms
of spyware have been documented.
Hackers and Cyber-vandalism
A hacker is an individual who aims to gain unauthorized access to a computer system. Within the hacking
community, the term "cracker" is often used to refer to a hacker with criminal intent, although the public
press frequently uses the terms "hacker" and "cracker" interchangeably. Hackers and crackers exploit
vulnerabilities in the security measures of websites and computer systems, taking advantage of the various
features of the Internet that make it an open and easily accessible system.
The scope of hacker activities has expanded beyond simple system intrusion, encompassing theft of goods
and information, as well as system damage and cyber-vandalism. Cyber-vandalism involves intentionally
disrupting, defacing, or even destroying websites or corporate information systems. In early 2003, hackers
introduced the Slammer worm, which targeted a known vulnerability in Microsoft SQL Server database
software. Slammer affected thousands of companies, causing significant repercussions. For instance, it
crashed Bank of America's cash machines, particularly in the southwestern part of the United States, and
impacted cash registers at supermarkets such as the Publix chain in Atlanta, where the staff was unable to
dispense cash to frustrated customers. Additionally, it caused widespread internet connection outages in
South Korea, leading to a dip in the stock market. Some hackers, motivated by "hacktivism," launch
politically motivated attacks with similar effects. Following the October 12, 2002, terrorist nightclub
bombings in Bali, Indonesian hackers hacked or defaced over 200 Australian websites to protest security
raids targeting Indonesian families in Australia.
The prevalence of cyber-vandalism and malware attacks continues to be a significant concern. These attacks target various industries and individuals, resulting in
financial losses and disruptions to critical systems. The frequency and sophistication of these incidents
have increased, requiring heightened security measures and proactive defense strategies. Continuous
monitoring and prompt response to emerging threats are crucial in combating cyber-vandalism and
protecting computer systems from unauthorized access. It is important to stay informed about the latest
trends and developments in cybersecurity to mitigate the risks posed by hackers and their malicious
activities.
Global Risks Perception Survey (GRPS) respondents reflect these trends, ranking “cybersecurity failure”
among the top 10 risks that have worsened most since the start of the COVID-19 crisis. Moreover, 85%
of the Cybersecurity Leadership Community of the World Economic Forum have stressed that ransomware
is becoming a dangerously growing threat and presents a major concern for public safety. At a regional
level, “cybersecurity failure” ranks as a top-five risk in East Asia and the Pacific as well as in Europe,
while four countries—Australia, Great Britain, Ireland, and New Zealand—ranked it as the number one
risk. Many small, highly digitalized economies—such as Denmark, Israel, Japan, Taiwan (China),
Singapore, and the United Arab Emirates—also ranked the risk as a top-five concern.
There are concerns that quantum computing could be powerful enough to break encryption keys—which
poses a significant security risk because of the sensitivity and criticality of the financial, personal, and
other data protected by these keys. The emergence of the metaverse could also expand the attack surface
for malicious actors by creating more entry points for malware and data breaches. As the value of digital
commerce in the metaverse grows in scope and scale—by some estimates projected to be over US$800
billion by 2024—these types of attacks will grow in frequency and aggression. The myriad forms of digital
property, such as NFT art collections and digital real estate, could further entice criminal activity.
Spoofing and Sniffing
Hackers attempting to hide their true identity often spoof, or misrepresent themselves by using fake e-
mail addresses or masquerading as someone else. Spoofing also can involve redirecting a Web link to
an address different from the intended one, with the site masquerading as the intended destination.
Links that are designed to lead to one site can be reset to send users to a totally unrelated site, one that
benefits the hacker. For example, if hackers redirect customers to a fake website that looks almost exactly
like the true site, they can then collect and process orders, effectively stealing business as well as sensitive
customer information from the true site. We provide more detail on other forms of spoofing in our
discussion of computer crime.
Spoofing involves disguising one's identity or impersonating another entity on a network. Hackers use
various spoofing techniques to deceive network devices and gain unauthorized access. Some common
forms of spoofing include:
IP Spoofing: By forging the source IP address in network packets, attackers can trick systems into thinking
they are communicating with a trusted entity, enabling them to bypass security measures.
Email Spoofing: Hackers manipulate the email header information to make it appear as if the email
originated from a different sender, often a trusted source. This technique is commonly used in phishing
attacks.
DNS Spoofing: In DNS (Domain Name System) spoofing, hackers alter the DNS resolution process,
redirecting users to malicious websites by associating false IP addresses with legitimate domain names.
Spoofing attacks can lead to various security breaches, including unauthorized access, data theft, and the
injection of malware or malicious code into a network.
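On the receiving side, some email spoofing attempts can be flagged with simple consistency checks on message headers. The sketch below is illustrative only; real mail systems rely on SPF, DKIM, and DMARC verification. It uses Python's standard email module to compare the domain in the visible From header against the domain in the Return-Path header of a hypothetical message.

from email import message_from_string
from email.utils import parseaddr

RAW_MESSAGE = """\
Return-Path: <billing@attacker-mail.example>
From: "Support Team" <support@yourbank.example>
Subject: Please confirm your account
To: customer@example.org

Dear customer, please confirm your details at the link below.
"""

def extract_domain(header_value):
    # Return the domain part of an address header, lower-cased.
    _, address = parseaddr(header_value or "")
    return address.rsplit("@", 1)[-1].lower() if "@" in address else ""

msg = message_from_string(RAW_MESSAGE)
from_domain = extract_domain(msg["From"])
return_path_domain = extract_domain(msg["Return-Path"])

if from_domain and return_path_domain and from_domain != return_path_domain:
    print(f"Possible spoofing: From domain '{from_domain}' "
          f"does not match Return-Path domain '{return_path_domain}'")
else:
    print("Header domains are consistent")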
A sniffer is a type of eavesdropping program that monitors information traveling over a network. When
used legitimately, sniffers can help identify potential network trouble-spots or criminal activity on
networks, but when used for criminal purposes, they can be damaging and very difficult to detect. Sniffers
enable hackers to steal proprietary information from anywhere on a network, including e-mail messages,
company files, and confidential reports.
Denial-of-Service Attacks
Denial-of-Service (DoS) attacks are malicious cyber-attacks that aim to disrupt or disable the normal
functioning of a target system, network, or service, rendering it inaccessible to legitimate users. In a DoS
attack, the attacker overwhelms the target with a flood of requests or malicious traffic, depleting its
resources and causing service disruptions. Let's delve into the history and evolution of DoS attacks:
DoS attacks have been prevalent since the early days of the internet. In the 1990s, attackers commonly
used flooding techniques, such as SYN floods, to exploit vulnerabilities in network protocols. These
attacks exploited the three-way handshake process in the TCP/IP protocol, overwhelming target systems
with a flood of SYN requests and consuming system resources, making the service unavailable.
Several notable DoS and DDoS attacks have occurred over the years, causing significant disruptions. Here
are a few examples:
The "Ping of Death" attack (1996): This attack exploited vulnerabilities in the way certain operating
systems handled oversized ICMP packets, causing system crashes or freezes.
The "Code Red" worm (2001): Code Red exploited a vulnerability in Microsoft IIS web servers,
spreading rapidly and launching DDoS attacks against the White House's website.
The "Mirai" botnet (2016): Mirai infected numerous Internet of Things (IoT) devices, such as cameras
and routers, to create a massive botnet. The botnet launched powerful DDoS attacks on various targets,
including DNS provider Dyn, causing widespread service disruptions.
Amplification Attacks: Amplification attacks leverage vulnerabilities in certain network protocols to
amplify the attacker's traffic, magnifying the impact of the attack. Examples include DNS amplification,
NTP amplification, and SNMP reflection attacks, where attackers exploit misconfigured or insecure
servers to generate a large volume of traffic toward the target.
In a Denial-of-Service (DoS) attack, hackers flood a network server or Web server with many thousands
of false communications or requests for services to crash the network. The network receives so many
queries that it cannot keep up with them and is thus unavailable to service legitimate requests. A
Distributed Denial-of-Service (DDoS) attack uses numerous computers to inundate and overwhelm the
network from numerous launch points. For example, on June 15, 2004, Web infrastructure provider
Akamai Technology was hit by a distributed denial of service attack that slowed some of its customers'
websites for over two hours. The attack used thousands of "zombie" PCs, which had been infected
by malicious software without their owners' knowledge (Thomson, 2004). Microsoft, Apple, and
Yahoo! were among the sites affected.
Although DDoS attacks do not destroy information or access restricted areas of a company's information
systems, they can cause a Web site to shut down, making it impossible for legitimate users to access the
site. For busy e-commerce sites such as eBay and Buy.com, these attacks are costly; while the site is shut
down, customers cannot make purchases.
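A common first line of defence against such floods is rate limiting at the network edge or application layer. The following minimal sketch is not a production implementation; the rate and burst values are illustrative assumptions. It shows a token-bucket limiter that caps how many requests a single client address may make per second.

import time
from collections import defaultdict

class TokenBucketLimiter:
    # Very small token-bucket rate limiter keyed by client identifier.

    def __init__(self, rate_per_second=5, burst=10):
        self.rate = rate_per_second   # tokens added per second
        self.burst = burst            # maximum bucket size
        self.buckets = defaultdict(lambda: {"tokens": burst, "last": time.monotonic()})

    def allow(self, client_id):
        bucket = self.buckets[client_id]
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at the burst size.
        bucket["tokens"] = min(self.burst, bucket["tokens"] + (now - bucket["last"]) * self.rate)
        bucket["last"] = now
        if bucket["tokens"] >= 1:
            bucket["tokens"] -= 1
            return True    # request allowed
        return False       # request dropped or delayed

# Example usage: requests beyond the burst from the same address are rejected
# until the bucket refills.
limiter = TokenBucketLimiter(rate_per_second=5, burst=10)
results = [limiter.allow("203.0.113.7") for _ in range(12)]
print(results)   # first 10 True, the rest False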
Figure 9-3 illustrates the estimated worldwide damage from all forms of digital attack, including hacking,
malware, and spam between 1997 and 2004.
Fig 9-3 Worldwide damage from digital attacks
Figure 9-3 shows estimates of the average annual worldwide damage from hacking, malware, and spam since 1997. The figure reflects only reported damage; many attacks cause substantial additional losses that users and businesses never report for fear of reputational damage.
Computer Crime and Cyber-terrorism
Most hacker activities are criminal offenses, and the vulnerabilities of systems we have just described
make them targets for other types of computer crime as well. The U.S. Department of Justice defines
computer crime as "Any violations of criminal law that involve knowledge of computer technology
for their perpetration, investigation, or prosecution." The computer can be a target of a crime or an
instrument of a crime.
The Magnitude of the Problem:
The true extent of computer crime remains difficult to determine precisely. The number of invaded
systems, the individuals involved, and the total economic damage are challenging to quantify. According
to research conducted by the Computer Crime Research Center, cybercrimes cost U.S. companies
approximately $14 billion annually. However, many companies hesitate to report such crimes due to
concerns about employee involvement or potential damage to their reputation.
Economically Damaging Computer Crimes:
Several types of computer crimes have severe economic consequences. These include:
Distributed Denial-of-Service (DDoS) Attacks: DDoS attacks, which overwhelm systems with a flood
of traffic, can result in significant financial losses for targeted organizations. The disruption caused by
these attacks can lead to prolonged downtime and damage a company's reputation.
Introduction of Viruses: The intentional introduction of viruses into computer systems can cause data
loss, system malfunctions, and financial harm. Malicious actors develop and distribute viruses to
compromise the integrity and availability of systems.
Theft of Services: Unauthorized access to and utilization of computer services without proper
authorization can lead to substantial financial losses for service providers. Hackers exploit vulnerabilities
to gain access to services, resulting in financial damages for the affected organizations.
Disruption of Computer Systems: Deliberate acts that disrupt or disable computer systems, such as
altering or deleting critical files, can cause severe financial repercussions. Businesses rely heavily on
computer systems for daily operations, and disruptions can lead to significant financial losses and
operational setbacks.
Insiders vs. Outsiders:
Historically, insider threats, posed by employees with knowledge, access, and motives, have been
responsible for the most damaging computer crimes. However, the advent of the internet has expanded
opportunities for external actors to engage in computer crime and abuse. The ease of use and accessibility
of the internet has opened avenues for outsiders to exploit vulnerabilities and launch attacks on systems.
Identity Theft
With the growth of the Internet and electronic commerce, identity theft has become especially troubling.
Identity theft is a crime in which an imposter obtains key pieces of personal information, such as social security identification numbers, driver's license numbers, or credit card numbers, in order to impersonate someone else. The information may be used to obtain credit, merchandise, or services in the name of the victim or to provide the thief with false credentials. According to a 2003 U.S. Federal Trade Commission report, 9.9 million cases of identity theft were reported in the United States in the 12 months ending in April 2003, causing consumer losses of about $5 billion (Chipman, 2004).
Identity theft is a serious crime that involves the unauthorized acquisition and use of an individual's
personal information for fraudulent purposes. While specific data on identity theft is continually evolving, the following general points describe identity theft and its impact:
Understanding Identity Theft:
Identity theft occurs when someone gains access to personal information, such as Social Security numbers,
bank account details, credit card information, or driver's license numbers, without the individual's
knowledge or consent. This stolen information is then used to commit various fraudulent activities,
including financial fraud, unauthorized transactions, and impersonation.
Impact of Identity Theft:
Identity theft can have severe consequences for the victims. It can result in financial losses, damaged credit
history, legal complications, and emotional distress. Victims often spend a significant amount of time and
resources to resolve the aftermath of identity theft, including disputing fraudulent charges, repairing their
credit, and restoring their compromised identities.
Data Breaches and Identity Theft:
Data breaches, where sensitive information is exposed or stolen from organizations, can significantly
contribute to identity theft incidents. Cybercriminals target databases and systems that store personal
information, potentially compromising the data of thousands or even millions of individuals. Some notable
data breaches have occurred in recent years, leading to a substantial amount of personal information being
exposed.
The Internet has made it easy for identity thieves to use stolen information because goods can be
purchased online without any personal interaction. Credit card files are a major target of website hackers.
Moreover, e-commerce sites are wonderful sources of customer personal information- name, address, and
phone number. Armed with this information, criminals can assume a new identity and establish new credit
for their own purposes.
One increasingly popular tactic is a form of spoofing called phishing. It involves setting up fake websites
or sending e-mail messages that look like those of legitimate businesses to ask users for confidential
personal data. The e-mail message instructs recipients to update or confirm records by providing social
security numbers, bank and credit card information, and other confidential data either by responding to the
e-mail message or by entering the information at a bogus website.
For example, Dan Marius Stefan was convicted of stealing nearly $500,000 by sending e-mail messages
that appeared to come from the online auction site eBay to people who were unsuccessful auction bidders.
The message described similar merchandise for sale at even better prices. To purchase these goods,
recipients had to provide bank account numbers and passwords and wire the money to a fraudulent
"escrow site" that Stefan had set up.
Phishing appears to be escalating. Brightmail of San Francisco, a company that filters e-mail for spam,
identified 2.3 billion phishing messages in February 2004, representing 4 percent of the e-mail it
processed; this figure is up from only 1 percent in September 2003 (Hansell, 2004). Phishing scams have
posed as PayPal, the online payment service, online service provider America Online (AOL), Citibank,
Fleet Bank, American Express, the Federal Deposit Insurance Corporation, the Bank of England, and
other banks around the world. British security firm mi2g estimates the worldwide economic damage
from phishing scams exceeded $13.5 billion in customer and productivity losses, business
interruptions, and efforts to repair damage to brand reputation (Barrett, 2004).
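Many phishing links can be flagged with a few simple heuristics before a user ever clicks them. The sketch below is purely illustrative; the URLs are hypothetical, and real filters rely on reputation feeds and far more sophisticated analysis. It checks for raw IP hosts, unusually deep subdomain chains, '@' tricks, and missing TLS.

import re
from urllib.parse import urlparse

def looks_like_phishing(url):
    # Return a list of reasons a URL looks suspicious (empty list = no flags).
    reasons = []
    parsed = urlparse(url)
    host = parsed.hostname or ""

    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        reasons.append("host is a raw IP address")
    if host.count(".") >= 4:
        reasons.append("unusually deep subdomain chain")
    if "@" in parsed.netloc:
        reasons.append("'@' in the URL hides the real destination")
    if parsed.scheme == "http":
        reasons.append("no TLS (http rather than https)")
    return reasons

# Example usage with hypothetical URLs.
for link in ["https://www.example-bank.com/login",
             "http://192.0.2.44/secure/update.html",
             "https://login.example-bank.com.account.verify.attacker.example/confirm"]:
    print(link, "->", looks_like_phishing(link) or "no obvious flags")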
The U.S. Congress responded to the threat of computer crime in 1986 with the Computer Fraud and
Abuse Act. This act makes it illegal to access a computer system without authorization. Most states
have similar laws, and nations in Europe have similar legislation. Congress also passed the National
Information Infrastructure Protection Act in 1996 to make virus distribution and hacker attacks to disable
websites federal crimes. U.S. legislation such as the Wiretap Act, Wire Fraud Act, Economic Espionage
Act, Electronic Communications Privacy Act, E-Mail Threats and Harassment Act, and Child
Pornography Act covers computer crimes involving intercepting electronic communication, using
electronic communication to defraud, stealing trade secrets, illegally accessing stored electronic
communications, using e-mail for threats or harassment, and transmitting or possessing child
pornography.
While specific statistics on identity theft are continually evolving, it remains a prevalent and ongoing
concern. According to the Federal Trade Commission (FTC) in the United States, there were over 1.4
million reports of identity theft in 2020 alone. The availability of personal information on the internet,
coupled with the increasing sophistication of cybercriminals, underscores the importance of robust security
measures and proactive identity protection practices.
Phishing attacks continue to be one of the most significant threats facing organizations today. As
businesses increasingly rely on digital communication channels, cybercriminals exploit vulnerabilities in
email, SMS, and voice communications to launch sophisticated phishing attacks. With the COVID-19
pandemic leading to a surge in remote work over the past several years, the risk of phishing attacks has
only increased.
The latest phishing report from Zscaler ThreatLabz reveals that phishing attacks are still on the rise,
detailing a 47.2% increase in phishing attacks in 2022 compared to the previous year, a result of
cybercriminals using increasingly sophisticated techniques to launch large-scale attacks. Education was
the most targeted industry in 2022, with attacks increasing by 576%, while the retail and wholesale sector
dropped by 67% from 2021.
Phishing and identity theft have emerged as serious issues on the Internet. As an example, the Anti-
Phishing Working Group website features a fraudulent message that impersonates the U.S. Federal Deposit
Insurance Corporation (FDIC). This deceptive message deceives recipients into sharing their credit card
and social security numbers on a counterfeit website created by identity thieves. The prevalence of such
phishing attempts highlights the need for heightened awareness and proactive measures to safeguard
personal information online.
Cyber-terrorism and Cyber-warfare
Cyber-terrorism and cyber-warfare have become sources of growing concern due to the potential
exploitation of vulnerabilities in the Internet and other networks. There is increasing apprehension that
terrorists, foreign intelligence services, or other groups may leverage these vulnerabilities to cause
widespread disruption and harm. High-profile targets could include critical infrastructure systems like
electrical power grids, air traffic control systems, and major financial institutions. Several countries,
including China, have been actively exploring and mapping U.S. networks, while approximately 20
nations are believed to be developing both offensive and defensive cyber-warfare capabilities. Notably,
U.S. military networks and government agencies face numerous hacker attacks annually.
To address this evolving threat landscape, the U.S. government has implemented certain measures. The
Department of Homeland Security has established the Information Analysis and Infrastructure Protection
Directorate, responsible for coordinating cyber-security efforts. Within this directorate, the National Cyber
Security Division focuses on safeguarding critical infrastructure, conducting cyberspace analysis,
facilitating information sharing, issuing alerts, and supporting national recovery initiatives. Additionally,
the U.S. Department of Defense has established joint task forces dedicated to computer network defense
and managing computer network attacks. Congress has also approved the Cyber-security Research and
Development Act, allocating funds to universities engaged in researching ways to enhance the protection
of computer systems against cyber threats.
Internal Threats: Employees - A Hidden Risk
While we often focus on external threats to business security, it is essential to recognize that the most
significant financial risks to organizations often originate from within. Surprisingly, insiders, who were
once trusted employees, have caused some of the most significant disruptions to services, destruction of
e-commerce sites, and compromise of customer credit data and personal information. This highlights the
critical importance of addressing internal security threats.
Employees possess privileged access to sensitive information, and if proper internal security procedures
are not in place, they can navigate through an organization's systems without detection. Studies have
revealed that the lack of knowledge among users is the primary cause of network security breaches. Many
employees either forget their passwords or willingly share them with colleagues, inadvertently
compromising the entire system. Moreover, malicious intruders often employ social engineering
techniques, tricking employees into divulging their passwords by posing as legitimate company members
seeking information.
Both end users and information systems specialists, including employees across various roles, can
introduce errors into an information system. Mistakes may arise from the incorrect entry of data or failure
to follow proper data processing instructions and computer equipment usage protocols. Even information
systems specialists themselves can inadvertently introduce software errors during the design,
development, or maintenance phases of new and existing software programs.
Software Vulnerability – A Persistent Threat
Software errors pose an ongoing and significant risk to information systems, resulting in substantial
productivity losses. According to the U.S. Department of Commerce National Institute of Standards and
Technology (NIST), software flaws, including vulnerabilities to hackers and malware, cost the U.S.
economy a staggering $59.6 billion annually (Hulme, 2004). A more recent report published by Synopsys puts the cost of poor software quality (CPSQ) in the U.S. at no less than $2.41 trillion, up from $1.31 trillion two years earlier.
Three main problem areas contribute to CPSQ:
• Cybercrime losses due to software vulnerabilities rose 64% from 2020 to 2021; those losses had not yet been determined for 2022.
• Software supply chain problems in underlying third-party components rose significantly, and the number of failures due to weaknesses in the open-source software supply chain increased by 650% from 2020 to 2021.
• Technical debt has become the biggest obstacle to making changes to existing codebases, with the principal now at roughly $1.52 trillion.
One major challenge with software is the existence of hidden bugs or defects in program code. Studies
have demonstrated the near-impossibility of completely eliminating all bugs from large programs. The
complexity of decision-making code is a primary source of these bugs. Critical programs within
organizations may consist of tens of thousands or even millions of lines of code, each containing multiple
decision paths. Documenting and designing such complexity are challenging, and designers may
unintentionally document reactions incorrectly or overlook certain possibilities. Even after rigorous
testing, the true dependability of software can only be determined through extensive operational use.
Commercial software can contain flaws, which not only affect performance but also create security
vulnerabilities that open networks to potential intruders. These bugs and vulnerabilities can act as
loopholes, enabling malware to bypass antivirus defenses. While much of the malware historically targeted
vulnerabilities in Microsoft Windows and other Microsoft products, there has been an increasing trend of
malware exploiting vulnerabilities in the Linux operating system as well.
Upon identifying these software flaws, vendors devise lines of code known as patches to address the issues
without interrupting the normal functioning of the software. A case in point is Microsoft's XP Service
Pack 2 (SP2) introduced in 2004. This patch enhanced firewall protection, provided automatic security
updates, and offered an easy-to-use interface for managing security applications on the user's computer.
However, the onus of monitoring these vulnerabilities, testing, and applying patches— a process referred
to as patch management— falls on the software users.
Maintaining a comprehensive IT infrastructure, riddled with numerous business applications, operating
system installations, and other system services, necessitates a rigorous process of patch management. This
process can be both time-consuming and expensive. According to estimates from the Yankee Group, a
company with over 500 PCs could spend up to 120 staff hours testing and installing every patch.
The challenge of patch management is further compounded by the swift pace at which malevolent viruses
and worms are created. These malicious programs target systems that fail to implement relevant patches
promptly. The speed at which these threats are developed means companies often have a minimal response
window between the announcement of a vulnerability (and its patch) and the appearance of malware
exploiting this vulnerability. This rapid cycle partly explains why worms and viruses like Sasser, SQL
Slammer, Blaster, SoBig.F, and others have been able to infiltrate numerous computer systems swiftly.
To mitigate these challenges, companies should adopt proactive patch management strategies, deploy
intrusion detection and prevention systems, and embrace a layered defense approach. Emerging solutions
such as automated patch management tools and virtual patching can also help address the increasing
complexity and urgency of patch management.
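Part of the patch management workload can be automated by comparing installed component versions against a locally maintained table of minimum patched versions. The sketch below assumes Python 3.8 or later; the package names and minimum versions shown are hypothetical examples, not authoritative security advisories.

from importlib import metadata

# Hypothetical table of minimum versions that contain required security fixes.
MINIMUM_SAFE_VERSIONS = {
    "requests": (2, 31, 0),
    "urllib3": (1, 26, 18),
}

def version_tuple(version_string):
    # Convert '2.31.0' into (2, 31, 0) for a simple numeric comparison.
    return tuple(int(part) for part in version_string.split(".")[:3] if part.isdigit())

for package, minimum in MINIMUM_SAFE_VERSIONS.items():
    try:
        installed = metadata.version(package)
    except metadata.PackageNotFoundError:
        print(f"{package}: not installed")
        continue
    status = "OK" if version_tuple(installed) >= minimum else "NEEDS PATCHING"
    print(f"{package}: installed {installed}, minimum {'.'.join(map(str, minimum))} -> {status}")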
8.2 System Quality Problems
Software and data
Computer software, or simply software, is the collection of computer programs and related data that provide the instructions telling a computer what to do. The term was coined in contrast to the older term hardware (meaning physical devices). Unlike hardware, software is intangible, meaning it "cannot be touched". Software is also sometimes used in a narrower sense, meaning application software only. Sometimes the term includes data that has not traditionally been associated with computers, such as film, tapes, and records.
Computer software can be classified into several categories, each serving different purposes and having
unique characteristics:
• Application Software: This category encompasses end-user applications such as word processors,
video games, and Enterprise Resource Planning (ERP) software. These programs are designed for
end-users to perform specific tasks such as document creation, entertainment, or business operations
management.
• Middleware: This type of software is essential for controlling and coordinating distributed systems.
It acts as a bridge between various applications and software components, facilitating their interaction
and communication.
• Programming Languages: These define the syntax and semantics of computer programs. For
instance, many legacy banking applications were written in COBOL, a language dating back to 1959.
However, more modern programming languages are often preferred for developing newer
applications, such as Python, JavaScript, and Java.
• System Software: This includes operating systems that manage computing resources. System
software also covers large applications running on remote machines, such as websites, which users
typically interact with through a Graphical User Interface (GUI) like a web browser. System software
essentially provides a platform for other software to run on.
• Testware: This is a subset of software designed specifically for testing other hardware or software
packages. It assists in identifying bugs or issues in the systems before their final deployment.
• Firmware: This is low-level software typically stored on electrically programmable memory devices.
The term 'firmware' comes from its nature—it operates like hardware but can be updated like
software.
• Device Drivers: These are specific types of system software that control particular hardware
components of computers such as disk drives, printers, CD drives, or computer monitors. They act
as an interface between the hardware and the operating system or other software.
• Programming Tools: These tools assist in conducting computing tasks in any of the categories
mentioned above. For programmers, these could include tools for debugging, reverse engineering
legacy systems, source code editors, or integrated development environments (IDEs).
By understanding these various types of computer software, one can better appreciate the complexities
and capabilities of our digital world. It also underscores the need for different software protection measures
tailored to each software type's unique vulnerabilities and usage scenarios.
The term data means groups of information that represent the qualitative or quantitative attributes
of a variable or set of variables. Data (plural of "Datum", which is seldom used) are typically the results
of measurements and can be the basis of graphs, images, or observations of a set of variables. Data are
often viewed as the lowest level of abstraction from which information and knowledge are derived. Raw
data refers to a collection of numbers, characters, images or other outputs from devices that collect
information to convert physical quantities into symbols that are unprocessed.
Bugs and defects
A software bug refers to an error, flaw, fault, or failure in a computer program or system that leads to
incorrect or unexpected results or makes it behave in unintended ways. The majority of bugs result from
human errors in the program's source code or design, while a smaller portion can be attributed to compilers
generating incorrect code. When a program is riddled with a substantial number of bugs or when these
bugs significantly hamper its functionality, the program is said to be buggy. Reports documenting bugs in
a program are often referred to as bug reports, fault reports, problem reports, trouble reports, or change
requests.
These bugs can lead to Type I and Type II errors, which may generate a ripple effect with varying degrees
of impact on the program's user. Some bugs might have minimal influence on the program's functionality,
thus remaining unnoticed for extended periods. However, more severe bugs could cause the program to
crash or freeze, resulting in a denial of service. Certain bugs are classified as security bugs and might, for
example, allow malicious users to bypass access controls and gain unauthorized privileges.
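Even a single misplaced character can introduce such a defect. The fragment below is a simple illustration showing a classic off-by-one error, in which a loop bound runs one step past the end of a list, alongside the corrected version.

readings = [12, 7, 19, 3]

# Buggy version: the range goes one step too far and raises IndexError on the last pass.
def total_buggy(values):
    total = 0
    for i in range(len(values) + 1):   # off-by-one: should be range(len(values))
        total += values[i]
    return total

# Corrected version: iterate over the list directly, so no index can go out of bounds.
def total_fixed(values):
    total = 0
    for value in values:
        total += value
    return total

print(total_fixed(readings))   # 41
# total_buggy(readings) would raise IndexError: list index out of range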
The consequences of bugs can indeed be catastrophic. The Therac-25 radiation therapy machine is a
poignant case, where bugs in the code were directly linked to patient fatalities in the 1980s. The destruction
of the European Space Agency's $1 billion Ariane 5 rocket prototype less than a minute after its 1996
launch was traced back to a bug in the onboard guidance computer program. In another incident, a Royal
Air Force Chinook crashed into the Mull of Kintyre in 1994, causing 29 fatalities. Although initially
dismissed as pilot error, an investigation by Computer Weekly revealed evidence indicating that a software
bug in the aircraft's engine control computer may have been the culprit.
In the modern era, the impact of software bugs can extend even further, influencing a wide array of sectors
including finance, healthcare, transportation, and more. These incidents underscore the critical need for
thorough software testing and bug tracking procedures. Regular software updates and patches are also
crucial to fix these bugs and ensure software reliability and safety. Advanced approaches, such as
employing AI for bug detection and correction, are also being explored to enhance software quality and
security.
Maintenance nightmare
In the realm of software development, code duplication is often deemed a cardinal sin. It creates a host of
maintenance challenges within the system, necessitating redundant efforts to address the same issue in
multiple areas.
However, avoiding code duplication isn't as straightforward as it may seem. Drawing a parallel with dietary
habits, it's akin to advising against consuming unhealthy food. While some individuals exhibit the discipline
to adhere strictly to nutritious food choices to the extent of finding junk food unpalatable, others indulge in a
delectable burger occasionally.
This sentiment is echoed by Dave Thomas, one of the co-authors of "The Pragmatic Programmer," who notes:
"If you have more than one way to express the same thing, at some point the two or three different
representations will most likely fall out of step with each other."
In order to maintain the synchrony between these different representations and keep them up-to-date,
developers often find themselves trapped in a wasteful process of parallel changes— a situation that can be
aptly described as a 'maintenance nightmare.' This predicament is exacerbated when contradictions infiltrate
the code, amplifying the complexity of the maintenance task.
Furthermore, the scourge of duplication is not confined to code alone. It permeates other areas of software
development, including requirements and specifications. The presence of redundant information in these
domains complicates the software development process and can lead to inconsistencies and increased
maintenance effort.
In conclusion, the avoidance of duplication, whether in code or other aspects of software development, is vital
for the maintenance and overall health of software systems. It reduces complexity, improves maintainability,
and can significantly mitigate the occurrence of the 'maintenance nightmare.' Hence, strategies like code
refactoring, modular design, and diligent documentation should be employed to keep code duplication in
check.
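As a concrete illustration of removing duplication, the sketch below (with a hypothetical discount rule) shows the same pricing logic written out twice and then refactored so that the rule lives in exactly one function, leaving only one place to change when the rule changes.

# Duplicated logic: the same discount rule is written out twice,
# so a change to the rule must be remembered in both places.
def invoice_total_with_duplication(retail_items, wholesale_items):
    retail = sum(price * 0.9 if price > 100 else price for price in retail_items)
    wholesale = sum(price * 0.9 if price > 100 else price for price in wholesale_items)
    return retail + wholesale

# Refactored: the discount rule lives in exactly one function.
def discounted(price):
    # Apply a 10% discount to any line item over 100.
    return price * 0.9 if price > 100 else price

def invoice_total(retail_items, wholesale_items):
    return sum(discounted(p) for p in retail_items) + sum(discounted(p) for p in wholesale_items)

items_a, items_b = [50, 150], [200, 80]
assert invoice_total_with_duplication(items_a, items_b) == invoice_total(items_a, items_b)
print(invoice_total(items_a, items_b))   # 445.0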
Data Quality Problems
Data quality issues can inflict significant financial losses on organizations, resulting in wasted time and
resources, and misleading management into making suboptimal decisions. Often, the extent of the
damage caused by poor quality data is underestimated by executives and managers. This misjudgment,
along with a lack of understanding of how to address these data quality problems, frequently results in
the issue being overlooked.
Data is the life-force that powers technological systems. While hardware and software form the skeletal
framework of a system—comparable to the veins and arteries in a body—it is the data that endows the
system with functionality. Absent data, the technological infrastructure holds little to no value. However,
many organizations disproportionately focus on server installation and application development,
neglecting the crucial task of ensuring consistent, high-quality data.
In an organization, data permeates every corner. The same piece of data serves multiple functions and
is utilized multiple times. Take, for instance, address data, which is used for delivery, invoicing, and
marketing purposes. Similarly, product data informs sales forecasting, marketing strategies, financial
forecasts, and supply chain management. Data underpins swift transaction processing (boosting
efficiency) and fuels analytic tools that support improved decision-making (enhancing effectiveness).
Given data's wide-reaching influence and diverse applications, maintaining high-quality data is
paramount. While correctly entering data just once can add value, the negative impact of poor data
quality is experienced every time the data is used. Despite many organizations recognizing poor data
quality as a significant problem, they often view it as an unavoidable pitfall. However, poor data quality
is neither an acceptable nor an inevitable circumstance.
With the rise of data-driven decision making and the advent of technologies like artificial intelligence
and machine learning, the importance of data quality has been further emphasized. High-quality data
ensures the accuracy of these advanced models and their outputs. Moreover, regulatory frameworks like
the General Data Protection Regulation (GDPR) stress the importance of data accuracy and integrity,
underlining the legal implications of poor data quality. Implementing robust data governance strategies,
data quality assurance procedures, and employing data management tools can play a vital role in
ensuring the accuracy, consistency, and overall quality of data within an organization.
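Many data quality problems can be caught with simple automated checks at the point of data entry. The sketch below assumes a hypothetical customer record layout and validates a few fields before a record is accepted into a system.

import re

def validate_customer_record(record):
    # Return a list of data quality problems found in one customer record.
    problems = []
    if not record.get("name", "").strip():
        problems.append("name is missing")
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", record.get("email", "")):
        problems.append("email address is malformed")
    if not record.get("postal_code", "").isdigit():
        problems.append("postal code is not numeric")
    return problems

records = [
    {"name": "Sita Sharma", "email": "sita@example.com", "postal_code": "44600"},
    {"name": "",            "email": "not-an-email",     "postal_code": "KTM"},
]

for record in records:
    issues = validate_customer_record(record)
    print(record.get("name") or "<blank>", "->", issues or "clean")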
8.3 Creating a Control Environment
Creating Control Framework
A control framework comprises a foundational set of controls necessary to safeguard an organization
against financial or information loss. The concept of "Control" originated in the financial realm, with
auditors assessing an organization's accounting practices for effective financial controls. Over time, this
concept transitioned into the technology domain, aligning with risk analysis principles as controls are
designed to prevent common attacks and mitigate vulnerabilities.
An example of a control is the "Separation of Duties" principle, which is crucial in accounting systems. It
ensures that individuals handling cash are not granted access to the corresponding cash records in the
accounting system. By presenting controls within a framework, organizations can evaluate their existing
controls against the framework itself and compare them to those implemented by similar organizations.
This framework-based assessment enables auditors to define audit projects effectively.
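The separation-of-duties principle can also be enforced programmatically when roles are assigned. The sketch below uses hypothetical duty names to reject any role assignment that would give one person both cash-handling and cash-recording rights.

# Pairs of duties that must never be held by the same person.
CONFLICTING_DUTIES = [
    ("handle_cash", "record_cash"),
    ("approve_payment", "create_vendor"),
]

def violates_separation_of_duties(assigned_roles):
    # Return the conflicting pairs present in one user's role set, if any.
    roles = set(assigned_roles)
    return [pair for pair in CONFLICTING_DUTIES if roles.issuperset(pair)]

print(violates_separation_of_duties({"handle_cash", "record_cash"}))
# [('handle_cash', 'record_cash')]  -> assignment should be rejected
print(violates_separation_of_duties({"handle_cash", "approve_payment"}))
# []                                -> assignment is acceptable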
In certain industries, adherence to a specific control framework may be mandated by law or regulatory
bodies. Compliance with these frameworks ensures organizations meet industry standards and regulatory
requirements. By implementing a comprehensive control framework, organizations enhance their ability
to identify control gaps, address vulnerabilities, and effectively manage risk.
Creating a control framework is a proactive measure that enables organizations to establish robust control
mechanisms. It empowers them to protect their financial and information assets, mitigate potential threats,
and ensure compliance with applicable regulations. Developing and adhering to a well-defined control
framework is essential for maintaining the integrity and security of organizational operations.
Layers of control
Operating system control:
The operating system's core role involves managing processes and resources, necessitating up-to-date
information about each process and resource's current status. To provide this, the operating system creates
and maintains data tables about each entity under its management. The operating system should support
four key functions: memory, Input/Output (I/O), file, and process management. While specifics may vary
across different operating systems, all of them maintain information within these categories.
Memory tables are utilized to monitor both main (real) and secondary (virtual) memory. A portion of the
main memory is reserved for the operating system, with the rest available for processes. Processes are held
in secondary memory through a virtual memory system or a basic swapping mechanism. The memory tables
must register information about main and secondary memory allocation to processes, protection attributes
of memory segments—like which processes can access certain shared memory regions—and information
needed for virtual memory management.
I/O tables help the operating system manage the computer system's I/O devices and channels. At any time,
an I/O device may be available or assigned to a specific process. If an I/O operation is underway, the
operating system needs to know the operation's status and the main memory location being used as the I/O
transfer's source or destination.
In some systems, the operating system also maintains file tables that provide information about file
existence, their location in secondary memory, current status, and other attributes. In some cases, this
information is maintained and used by a file management system, and the operating system has limited
knowledge of files. However, in other systems, the operating system manages much of the file management
detail.
Finally, the operating system must maintain process tables to manage processes. These tables must be interconnected or cross-referenced in some way. Since memory, I/O, and files are managed on behalf of processes, there must be a direct or indirect reference to these resources in the process tables, ensuring seamless integration and effective resource management.
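To see the kind of bookkeeping described above from the outside, a monitoring script can read entries back from the operating system's process table. The sketch below uses psutil, a third-party Python package, purely as an illustration of the process identifier, name, and memory information such tables hold.

import psutil  # third-party package: pip install psutil

# Print a few entries from the operating system's process table:
# process id, name, and resident memory, mirroring the bookkeeping described above.
for process in list(psutil.process_iter(attrs=["pid", "name", "memory_info"]))[:5]:
    info = process.info
    memory = info.get("memory_info")
    rss_mb = memory.rss / (1024 * 1024) if memory else 0.0
    print(f"PID {info['pid']:>6}  {str(info.get('name')):<25} resident memory: {rss_mb:6.1f} MB")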
Data Management Control
A statement often heard on programs is that a particular item has been "put under configuration control." Reactions to such a statement vary:
• If you're an experienced data management professional, it might make you wince, forcing you to resist the urge to issue a swift correction.
• If you're a Configuration Management (CM) person, the comment likely seems perfectly natural.
• And for those in program management, it often signals the resolution of numerous issues.
While usually well-intentioned, this comment is often misleading and frequently inaccurate. True, certain
elements are put under Configuration Control, but a more precise statement would likely be to place them under
Data Management (DM) Control. Please note the deliberate and significant capitalization of configuration
control, Configuration Control, and Data Management control. The significance of this will be highlighted as
we delve deeper into the discussion.
Upon a thorough reading and accurate interpretation of ANSI/GEIA-649 A 2004 National Consensus Standard
for Configuration Management (CM), it's evident that CM is tasked with the responsibility of controlling product
data—a crucial subset of the data assets existing within an enterprise. However, Data Management, as defined
in GEIA-859-2004, Data Management, assumes a more comprehensive enterprise perspective. It is responsible
for the management of all other data assets (excluding product data), which marks a clear demarcation.
In modern digital enterprises, the scope of Data Management has expanded to include various types of data such
as structured, semi-structured, and unstructured data, and covers aspects like data quality, data governance, data
privacy, and security. On the other hand, Configuration Management focuses primarily on the control and
management of product or system-related data, and ensures the consistency and reliability of physical and virtual
assets across an organization. Both, while distinct in their roles, are critical in maintaining the overall health of
an organization's data ecosystem.
Where the confusion may arise is that in certain cases DM uses techniques and processes borrowed from
the CM function. DM establishes three levels of control (GEIA-HB-859, Section 5):
Formal Control - this is what most people think when they think of control. This is the level which
requires:
• Formal change initiation
• Comprehensive change evaluation for impact assessment
• Decision action by Board Change Authority
• Full status accounting of implementation and validation
Revision Control - less stringent but used when it is important to track change history of a data asset
• Change is required
• Revision is incremented
• Stakeholders are notified
Custody control
• Provide safe storage
• Provide means for retrieval
• Stakeholders are notified
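The three levels can be pictured as a simple policy lookup. The sketch below, in Python, is purely illustrative; the asset names and the levels assigned to them are hypothetical and would in practice come from the Data Management Plan, not from GEIA-HB-859 itself.

from enum import Enum

class ControlLevel(Enum):
    FORMAL = "formal"      # change board approval, full status accounting
    REVISION = "revision"  # track revision history, notify stakeholders
    CUSTODY = "custody"    # safe storage and retrieval only

# Illustrative mapping of data asset types to DM control levels (example values only).
CONTROL_POLICY = {
    "financial_report": ControlLevel.FORMAL,
    "design_procedure": ControlLevel.REVISION,
    "meeting_minutes": ControlLevel.CUSTODY,
}

def required_actions(asset_type: str) -> list[str]:
    """Return the DM actions implied by the asset's control level."""
    level = CONTROL_POLICY.get(asset_type, ControlLevel.CUSTODY)
    if level is ControlLevel.FORMAL:
        return ["initiate formal change", "evaluate impact",
                "obtain change authority decision", "record status accounting"]
    if level is ControlLevel.REVISION:
        return ["increment revision", "record change history", "notify stakeholders"]
    return ["store securely", "provide retrieval", "notify stakeholders"]

if __name__ == "__main__":
    print(required_actions("financial_report"))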
Configuration and Data Management refer to two specific but related functions, with their own processes and business rules. The term configuration management (non-capitalized) is often used to describe
the control activity resident within the Configuration and Data Management functions.
The above discussion begs two questions: 1) which data assets are under Configuration Management control and which are under Data Management control; and 2) for those data assets under DM control, which require formal control, revision control, or custody control?
Regarding which data assets fall under CM and DM control:
Configuration Management Control: CM primarily manages product or system-related data. This might
include data detailing product specifications, versioning information, configuration settings, system
architecture, and the like. The goal of CM is to ensure consistency and reliability across physical and
virtual assets within the organization.
Data Management Control: DM, on the other hand, handles a broader spectrum of data assets within an
organization. Apart from product data, DM governs customer data, employee data, financial data,
operational data, and more. DM control ensures data quality, accessibility, consistency, security, and
privacy across the enterprise.
Concerning which DM-controlled data assets require formal control, revision control, or custody control:
Formal Control: Formal control is required for data assets that are critical for decision-making or
regulatory compliance. This could include financial data, customer personal data, and health records,
where accuracy, privacy, and security are paramount.
Revision Control: Revision control is necessary for data that undergoes frequent updates or changes. It
enables tracking of changes and reversion to previous versions if necessary. Examples might include
project management data, product development data, and policy or procedural documents.
Custody Control: Custody control becomes crucial when data needs to be preserved and protected for
future use or analysis, such as historical data, audit logs, and archived records. It ensures the data's
integrity and availability over time.
Both CM and DM play their parts in creating a resilient data governance strategy, which is instrumental
in today's data-driven business landscape. Recognizing the specific controls required for different types
of data assets enhances the efficiency and effectiveness of these strategies.
Functional Control Responsibility
This is where the business rules become important. These rules take into consideration Corporate and
Enterprise guidelines, organizational processes and procedures, program/project needs and customer
requirements.
Good data management practice requires, at the Enterprise level, formal, documented policies, organizational processes, and procedures that capture a wide spectrum of these business rules. The application of these rules, along with program/project requirements and customer requirements, is documented in the CM and DM Plans.
It is in these documents that the allocation of control responsibility is most often appropriately defined.
Several factors come into play with this approach:
• Administratively, CM and DM are often performed within the same organization,
• There seems to be a trend to combine CM plans and DM plans into a single plan.
These factors may tend to blur the organizational responsibilities, but the functional
responsibility needs to be maintained.
Level of Control
For those data assets which will functionally be controlled by Data Management processes, it is further
necessary to define for each asset type, the level of control. Again, this definition is most conveniently
reserved for the Data Management Plan, considering the scope of the business rules.
The CM and DM plans, or a combined CDM plan (as a DM practitioner, I often wonder why it is not a DCM plan), are generally the most convenient means to define:
• which data assets are controlled by the CM function and which are controlled by the DM function, and
• for those assets controlled by DM, which require formal control, revision control, or custody control.
Organization and Personnel control
Management deals with all the people working in an organization and with those responsible for managing it. Everyone in the organization has certain responsibilities and duties in the enterprise. Personnel management includes planning and directing the application, development, and utilization of human resources in the enterprise. Employees, unions, and public relations also play a key role in personnel management, so personnel management and the planning of the members play a vital role in the enterprise.
Personnel management is an important branch of management in any business enterprise and holds a key to all actions and successful management. It is also concerned with the human and social implications of change in internal organization and methods of working, and of economic and social changes in the community. The main aim is to establish better coordination between all members, from top-level management down to the subordinates, so as to achieve better cooperation, a sharper focus on bringing out innovative ideas and objectives, and mutual understanding in the enterprise. A co-operative relationship is achieved within the enterprise by creating harmonious relations, genuine consultation and participation, and a system of effective communication.
Personnel management should be designed in such a way that it has the capability to respond to change, maintain good relationships within the organization, and meet the enterprise's social and legal responsibilities. Human relations have to be nurtured constantly in the enterprise. Only an enterprise that is conscious of this need can achieve its targets by efficiently handling the resources available for a particular process.
The objectives of personnel management in any working organization are to bring about the development of individuals, maintain safe and effective working conditions, utilize the available resources, and ensure job satisfaction among workers. The objectives to be focused on are:
• Social.
• Personnel.
• Enterprise.
• Union.

The Institute of Chartered Accountants of Nepal ȁ͵Ͷ͵


Management Information and Control System
The social objective is concerned with how the enterprise creates new employment opportunities, how the productivity of the enterprise can be maximized, bringing satisfaction to the workforce, avoiding wastage of resources, and promoting a healthy relationship between the workforce and social welfare. Personnel objectives specify the needs of the members: providing job security, maximizing the development of the members, and providing proper working conditions to workers. The enterprise objective is to bring a balance between demand and supply of personnel and to maintain competent workers in the enterprise. The union objective deals with the formulation of personnel policies in consultation with unions and self-discipline within the enterprise.
Personnel management is also concerned with the financial and physical resources required for a particular process and with the members of the organization; it is responsible for both the enterprise's operating system and its workers. Other areas in which personnel management is expected to help the workers include maintenance of personnel records, determination of wage policy, and methods and rates of remuneration.
Characteristics of good personnel management are:
• Stability, to appoint or replace key personnel executives with minimal loss
• Flexibility, capability to handle problems encountered within the enterprise.
• Simplicity, balancing the perfect line of relationship among the workers.
• Objectivity, feature of having definite objectives for all the levels or units in the enterprise.
The functional responsibilities of personnel management are:
• Managerial functions
• Operative functions
Managerial functions include planning, which involves formulating policies for the future development of the enterprise, programmes to choose an adequate number of persons who can work efficiently and accomplish the business objectives, providing training to the workers in the enterprise, and the integration and maintenance of the workforce.
Organizing has to provide a clear layout of the inter-relationship between persons, jobs, and physical factors, and every worker should have a proper understanding of their job. Direction involves motivation, which can be either positive or negative for the enterprise; it is necessary to motivate the workers about the nature of their job, and instructions should be clear, neatly explained, and easy to understand.
Control helps in bringing out a performance analysis of all the workers, which is useful in evaluating them and discovering their deviations. Operative functions include procurement, which deals with recruitment of the right kind of persons for the available jobs in the enterprise, i.e. the right person for the right job; development, whereby subordinates of the enterprise should be able to know what qualities are needed to get into higher levels in the organization; and integration, which provides co-operation among the workers, an efficient channel for communication, and satisfactory solutions for problems and grievances.
Planning is a process of deciding the business targets and charting out the path for attaining those targets. It is also described as a process of thinking before doing. Every organization that recruits people to carry out its work, whether an educational institution, an enterprise, or a business, needs a personnel plan covering the various phases of personnel management.
Planning in a personnel management system is concerned with the present manpower position and with what number and kind of employees are required for the enterprise. This can be done only when the enterprise knows its objectives and how its plans are to be accomplished with the right kind of resources, together with the future demand for and supply of personnel. An assessment of all the workers should be carried out, covering:
• what each worker does;
• how he has performed during his career;
• his educational qualifications, skills, and training in the concerned field;
• how his job is related to others; and
• in what kind of environment his performance can be increased.
These evaluations can be carried out by conducting interviews with a selected number of workers in the enterprise or by maintaining detailed performance reports. By assessing the enterprise, we can also determine the plan for the future. This is done by analyzing the enterprise's long-term and short-term objectives and plans, the required number of workers, and the resources needed for future purposes; when forecasting personnel requirements, not only the number of personnel required but also their qualities and types should be specified. Assessment for the future thus holds a key to the development of the enterprise, so it should be carefully analyzed.
Sources of personnel may be internal or external. Internal supply deals with movements within the concern: new appointments determined by the enterprise itself, where it is not difficult to know what type and number of personnel are needed to accomplish the enterprise's objectives; transfers within the enterprise, where it is difficult to satisfy all workers when moving people within a department or organization; and personnel reporting back after a period of leave. Retirements, dismissals, voluntary resignations, retrenchments, and deaths of employees may decrease the internal supply of personnel; of these, retirements are the easiest to forecast, deaths and voluntary resignations are the most difficult to predict, and dismissals and retrenchments can be broadly determined. External supply focuses on schools and colleges from which students graduate, housewives looking for part-time work for income, and those searching for a better job with a good salary.
Network control
System development and maintenance control
The Systems Development Life Cycle (SDLC) phases serve as a programmatic guide to project activity and
provide a flexible but consistent way to conduct projects to a depth matching the scope of the project. Each of
the SDLC phase objectives are described in this section with key deliverables, a description of recommended
tasks, and a summary of related control objectives for effective management. It is critical for the project
manager to establish and monitor control objectives during each SDLC phase while executing projects.
Control objectives help to provide a clear statement of the desired result or purpose and should be used
throughout the entire SDLC process. Control objectives can be grouped into major categories (Domains), and
relate to the SDLC phases.
To manage and control any SDLC initiative, each project will be required to establish some degree
of a Work Breakdown Structure (WBS) to capture and schedule the work necessary to complete the
project. The WBS and all programmatic material should be kept in the "Project Description" section of
the project notebook. The WBS format is mostly left to the project manager to establish in a way that best
describes the project work. There are some key areas that must be defined in the WBS as part of the
SDLC policy.
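As a purely illustrative sketch, a WBS that ties scheduled work packages to the control objectives they support might be represented as a simple nested structure. The phase names, tasks, and control objectives below are hypothetical examples, not prescribed by any particular SDLC policy.

# Illustrative only: a minimal WBS keyed by SDLC phase, with each work
# package linked to the control objective it supports. Names are hypothetical.
wbs = {
    "1 Planning": [
        {"task": "1.1 Define project charter", "control_objective": "Approved scope baseline"},
        {"task": "1.2 Identify stakeholders", "control_objective": "Documented responsibilities"},
    ],
    "2 Analysis": [
        {"task": "2.1 Gather requirements", "control_objective": "Traceable requirements register"},
    ],
    "3 Design": [
        {"task": "3.1 Produce system design", "control_objective": "Design reviewed and signed off"},
    ],
}

# A project manager could then report, per phase, which control objectives
# are covered by at least one scheduled work package.
for phase, packages in wbs.items():
    objectives = {p["control_objective"] for p in packages}
    print(phase, "->", ", ".join(sorted(objectives)))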
8.4 Protection of digital network
High Availability, or HA as it is abbreviated, refers to the availability of resources in a computer system in the wake of component failures in the system. High Availability (HA) is a characteristic of a system that aims to ensure an agreed level of operational performance for a higher than normal period. While its implementation can come in various forms, the solutions often involve redundant hardware or failover mechanisms designed to mitigate system failures. With advancements in cloud computing, achieving HA has become increasingly cost-effective: cloud-based solutions can automatically handle failovers and even distribute resources geographically to protect against localized outages. HA can be achieved in a variety of ways, spanning a spectrum that ranges from solutions using custom and redundant hardware to ensure availability at one end, to software solutions built on off-the-shelf hardware components at the other. The former class of solutions provides a higher degree of availability but is significantly more expensive than the latter class; this has led to the popularity of the latter, with almost all vendors of computer systems offering various HA products. Typically, these products survive single points of failure in the system.
Related terminology:
Continuous Availability: Continuous Availability refers to systems designed to eliminate all single
points of failure and provide an immediate failover mechanism in case of a problem. The goal is to
make sure the service is always available to users. Today, this is often accomplished through the use of
redundant systems, often in different geographical locations, operating in conjunction. This is facilitated
by advancements in real-time data replication and synchronized multi-site deployments, enabling
continuous operation even during maintenance periods or in case of a disaster. This implies non-stop
service, with no lapse in service. This represents an ideal state, and is generally used to indicate a
high level of availability in which only a very small quantity of downtime is allowed. High availability
does not imply continuous availability.
Fault Tolerance: This is a means to achieve very high levels of availability. A fault tolerant system has the ability to continue service despite a hardware or software failure, and is characterized by redundancy in hardware, including CPU, memory, and I/O subsystems. High availability does not imply
fault tolerance. Fault Tolerance is a property of a system that enables it to continue operation, albeit at a
possibly reduced level (also known as graceful degradation), rather than failing completely, when some
part of the system fails. Modern fault-tolerant systems often leverage advanced technologies such as
RAID, mirrored drives, and dual modular redundancy in hardware along with sophisticated error detection
and correction algorithms in software to ensure seamless service continuity even in the face of component
failures.
Single Point of Failure (SPOF): A hardware or software component whose loss results in the loss of service;
such components are not backed up by redundant components. Single Point of Failure (SPOF) is a part of
a system that, if it fails, will stop the entire system from working. In the context of modern systems,
mitigating SPOFs involves a comprehensive approach that includes diverse data routes, load balancing,
network and storage redundancies, and backup power supplies, among other strategies. The elimination
of SPOFs is a key goal in system design to increase overall system reliability and availability.
Failover: When a component in an HA system fails resulting in a loss of service, the service is started by
the HA system on another component in the system. This transfer of a service following a failure
in the system is termed failover. Failover is the process of switching to a redundant or standby system
upon the failure or abnormal termination of the previously active system. In contemporary system
architectures, automated failover processes have become an integral part of maintaining high availability
and minimizing downtime. These automated processes can be facilitated by sophisticated monitoring
tools that can detect failures and initiate failovers to standby systems, often without human intervention.
Cloud-based solutions are playing a significant role in making these automated failover mechanisms more
accessible and affordable.
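To make the idea concrete, the following minimal sketch shows the basic shape of an automated failover loop: monitor the active node and promote a standby when a health check fails repeatedly. The host names, threshold, and check logic are hypothetical; real HA products are far more sophisticated.

import time
import urllib.request

# Hypothetical endpoints; in practice these would be cluster nodes or load-balancer targets.
ACTIVE = "http://app-primary.example.internal/health"
STANDBY = "http://app-standby.example.internal/health"
FAILURE_THRESHOLD = 3  # consecutive failed checks before failing over

def is_healthy(url: str, timeout: float = 2.0) -> bool:
    """Return True if the node answers its health endpoint with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False

def monitor() -> None:
    failures = 0
    while True:
        if is_healthy(ACTIVE):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURE_THRESHOLD and is_healthy(STANDBY):
                # In a real system this step would update DNS, a load balancer,
                # or a virtual IP so that traffic moves to the standby node.
                print("Failing over to standby node")
                failures = 0
        time.sleep(5)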
Impact of Cluster Computing on High Availability
In recent years, cluster computing has gained significant traction and has found applications in various
fields, including those involving critical tasks. These mission-critical applications cannot afford system
failures or service disruptions and require a high level of availability with minimal planned or unplanned
downtime. As a result, there is a growing demand for robust high availability solutions specifically
designed for cluster environments. Major industry players have recognized this need and are offering a
range of HA solutions with different feature sets and varying levels of service, as measured by uptime
guarantees.
Currently, commercial HA systems predominantly support clusters consisting of two or four servers.
However, there are plans in place to expand these capabilities to accommodate larger clusters in the future.
The utilization of cluster computing technology has thus significantly impacted the development of HA
solutions, catering to the specific requirements and complexities of clustered environments.
Fig 9-4 Cluster computing on HA
Internet security
Internet security is a critical facet of contemporary digital communication, encompassing the
safeguarding of a computer's Internet account and files from uninvited access or intrusion. The
fundamental building blocks of Internet security comprise the formulation of robust passwords,
modification of file permissions, and consistent data backups.
Given the ever-evolving digital landscape, Internet security is not merely an ancillary aspect but an
integral component of standard business operations. The sense of trust that businesses can instill in their
clients by maintaining secure IT systems is invaluable. The persistence of cybercriminals, driven by the
potential high gains of successful cyber attacks, keeps Internet security perpetually relevant. As cyber
threats continuously morph, users must remain perpetually vigilant, prioritizing security in system
enhancement considerations.
In the realm of Internet security, professionals need to be well-versed in several key areas:
• Penetration Testing: This process involves simulating real-world attacks on a system to evaluate its defenses and identify vulnerabilities. With the advent of automated tools and cloud-based testing platforms, this process has become more sophisticated, enabling more frequent and comprehensive testing.
• Intrusion Detection: The use of systems or applications to identify unauthorized access attempts or suspicious activity within a network. Modern intrusion detection systems (IDS) often leverage machine learning and AI technologies to detect anomalous behavior and potential threats.
• Incident Response: Involves the creation and implementation of strategies and protocols to swiftly and effectively handle security incidents. With advancements in security orchestration, automation, and response (SOAR) solutions, the process of managing and responding to incidents has become more streamlined.
• Legal/Audit Compliance: Entails ensuring that all security measures and practices are in accordance with relevant laws, regulations, and industry standards. As data privacy laws like the GDPR and CCPA have come into force, businesses must pay more attention to compliance to avoid hefty fines.
As we move further into the digital age, these aspects of Internet security will continue to evolve and
grow in importance, calling for continuous learning and adaptation from security professionals.
What is E-Commerce?
E-Commerce, or Electronic Commerce, signifies the buying and selling of goods or services via the
Internet. It has become a crucial part of modern retail, with many brands maintaining a strong online
presence—some exclusively so, without any physical storefronts. However, e-commerce isn't just
limited to consumer retail but also extends to business-to-business transactions, such as those between
manufacturers, suppliers, or distributors.
There are various business models within the realm of online retail. While historically, many
businesses maintained a clear distinction between their physical and online operations, current trends
lean towards an integrated, multi-channel approach. This has given rise to novel models such as 'click
and collect,' where customers can purchase items online and then pick them up from a physical store.
The influence of e-commerce extends beyond retail and permeates the services sector as well. For
instance, online banking and brokerage services have revolutionized the financial industry. Customers
can now view their bank statements, transfer funds, pay bills, apply for mortgages, buy or sell securities,
and access financial advice—all at the click of a button.
In the era of digital transformation, e-commerce has evolved to leverage technologies such as mobile commerce, electronic funds transfer, supply chain management, Internet marketing, online transaction processing, electronic data interchange (EDI), inventory management systems, and automated data collection systems.
Given the global pandemic and the subsequent rise of remote work and social distancing practices, e -
commerce has gained even more prominence. It is now not just a convenience but a necessity for many
businesses and consumers worldwide. It is expected that e-commerce will continue to grow and
transform, driven by technological innovations, changing consumer behaviors, and the evolving digital
landscape.
Security overview
A secure system accomplishes its task with no unintended side effects. Using the analogy of a house to represent the system, suppose you decide to carve out a piece of your front door to give your pets easy access to the outdoors. However, the hole is too large, giving access to burglars. You have created an unintended side effect and, therefore, an insecure system.
In the software industry, security has two different perspectives. In the software development community,
it describes the security features of a system. Common security features are ensuring passwords that are
at least six characters long and encryption of sensitive data. For software consumers, it is protection
against attacks rather than specific features of the system. Your house may have the latest alarm system
and windows with bars, but if you leave your doors unlocked, despite the number of security features
your system has, it is still insecure. Hence, security is not a number of features, but a system process. The
weakest link in the chain determines the security of the system. In this section, we focus on possible attack scenarios in an e-Commerce system and provide preventive strategies, including security features that you can implement.
Security has three main concepts: confidentiality, integrity, and availability. Confidentiality allows
only authorized parties to read protected information. Confidentiality is about ensuring that data is only
accessible to those who are authorized to view it. In a modern context, consider a private messaging
application like WhatsApp. When you send a message, confidentiality ensures that only the intended
recipient can read it. Breaching this principle would be like a hacker intercepting and reading your private conversation; in the traditional postal analogy, a postman reading your mail is a similar breach of your privacy.
Integrity ensures data remains as is from the sender to the receiver. Integrity is about making sure that the data is accurate and unchanged from its original form. It guarantees that the information has not been tampered with during transmission or storage. Consider online banking as an example. When you transfer money to a friend, integrity ensures that the amount you send is the amount they receive. A violation of this principle would be like a malicious entity altering the transfer amount during transmission, akin to someone adding an extra bill to your envelope.
Availability ensures you have access and are authorized to use resources. If the post office destroys your mail or the postman takes one year to deliver it, he has impacted the availability of your mail. Availability ensures that authorized users have reliable access to the data and resources they need. In a contemporary scenario, consider a cloud storage service like Google Drive. You expect to be able to access your files whenever you want. If a server outage or a denial-of-service attack prevented you from accessing your files, it would be a breach of availability, similar to the post office destroying your mail or the postman taking an excessively long time to deliver it in the example above.
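A small code illustration of the integrity concept: a cryptographic hash can be used to detect whether data changed in transit or storage. The sketch below is a minimal example using Python's standard hashlib module; the message contents are, of course, made up.

import hashlib

def digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

original = b"Transfer NPR 5,000 to account 1234"
sent_digest = digest(original)          # the sender publishes this alongside the message

received = b"Transfer NPR 50,000 to account 1234"  # tampered with in transit
if digest(received) != sent_digest:
    print("Integrity check failed: the message was altered")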
The players
In a typical e-Commerce experience, a shopper proceeds to a Website to browse a catalog and make a
purchase. This simple activity illustrates the four major players in e-Commerce security. One player is
the shopper who uses his browser to locate the site. The site is usually operated by a merchant, also a
player, whose business is to sell merchandise to make a profit. As the merchant business is
selling goods and services, not building software, he usually purchases most of the software to run his
site from third-party software vendors. The software vendor is the last of the three legitimate players.
The attacker is the player whose goal is to exploit the other three players for illegitimate gains.
Figure 9-5 illustrates the players in a shopping experience.
Fig 9-5 The players
The attacker can besiege the players and their resources with various damaging or benign schemes
that result in system exploitation. Threats and vulnerabilities are classified under confidentiality, integrity,
and availability. A threat is a possible attack against a system. It does not necessarily mean that the
system is vulnerable to the attack. An attacker can threaten to throw eggs against your brick house,
but it is harmless. A vulnerability is a weakness in the system, but it is not necessarily known by the
attacker. For example, only you know that you have left your front door unlocked. Vulnerabilities exist
at entry and exit points in the system. In a house, the vulnerable points are the doors and windows.
When the burglar threatens to break into your house and finds the vulnerability of the unlocked door, he
is exploiting the assets in the house.
Security features
Security features, while not providing an absolute guarantee of a secure system, are the essential building
blocks required to construct a robust security infrastructure. These features can be categorized into four
primary areas:
• Authentication: This feature validates a user's identity, affirming that users are indeed who they claim to be. In a contemporary context, think of Two-Factor Authentication (2FA) or Biometric authentication used in online banking or social media platforms. These techniques add an extra layer of security, ensuring that you're the only one able to log into your accounts, even if someone else knows your password.
• Authorization: Once your identity is confirmed, authorization controls what you can and cannot do within a system. It defines permissions and privileges for users, determining what resources you can access and what actions you can perform. For instance, in a project management tool like Trello or Asana, while a team member might be able to view and edit certain project tasks, they may not have the authority to delete or add new tasks - that may be reserved for the project manager.
• Encryption: Encryption plays a crucial role in data privacy and security. It transforms readable data (plaintext) into an encoded version (ciphertext) that can only be deciphered using a decryption key. Consider messaging apps like Signal or WhatsApp that offer end-to-end encryption, meaning only the sender and receiver can read the messages, ensuring that even if a malicious actor intercepts the communication, they would not be able to understand it.
• Auditing: Auditing involves recording system activities for detection and investigation of security breaches or incidents. It helps track user activities, system changes, and data access, providing a valuable trail for forensic investigations and compliance purposes. For instance, e-commerce platforms maintain audit logs to document user transactions, which can be used to resolve disputes over whether a particular item was purchased or not.
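To illustrate the encryption feature in code, the sketch below uses symmetric encryption from the third-party Python cryptography package (Fernet). It is a minimal example only and is not a recommendation of any particular key-management scheme.

# Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice the key must be stored and protected carefully
cipher = Fernet(key)

plaintext = b"card_number=4111111111111111"
token = cipher.encrypt(plaintext)      # ciphertext that is safe to store or transmit
print(token)

recovered = cipher.decrypt(token)      # only possible with the same key
assert recovered == plaintext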
These features work synergistically to create a comprehensive security framework, providing multi-layered protection for systems and data in an increasingly complex digital environment.
The criminal incentive
Attacks against e-Commerce Web sites are alarming enough that they are reported right after violent crimes in the news. Practically every month, there is an announcement of an attack on a major Website where sensitive
information is obtained. Why is e-Commerce vulnerable? Is e-Commerce software more insecure
compared to other software? Did the number of criminals in the world increase? The developers producing
e-Commerce software are pulled from the same pool of developers as those who work on other software.
In fact, this relatively new field is an attraction for top talent. Therefore, the quality of software being
produced is relatively the same compared to other products. The criminal population did not undergo
a sudden explosion, but an e-Commerce exploit is a bargain compared to other illegal opportunities: the cost of entry is low and the potential payoff is high.
Compared to robbing a bank, the tools necessary to perform an attack on the Internet are fairly cheap. The criminal only needs access to a computer and an Internet connection. On the other hand, a bank robbery may require firearms, a getaway car, and tools to crack a safe, but these may still not be enough. Hence, the low cost of entry to attacking an e-Commerce site attracts the broader criminal population.
The payoff of a successful attack is unimaginable. If you were to take a penny from every account
at any one of the major banks, it easily amounts to several million dollars. The local bank robber
optimistically expects a windfall in the tens of thousands of dollars. Bank branches do not keep a lot of
cash on hand. The majority is represented in bits and bytes sitting on a hard disk or zipping through a
network.
While the local bank robber is restricted to the several branches in his region, his online counterpart can choose from the thousands of banks with an online operation. The online bank robber can rob a bank in another country, taking advantage of non-existent extradition rules between the country where the attack originated and the country where the attack is destined.
An attack on a bank branch requires careful planning and precautions to ensure that the criminal does not
leave a trail. He ensures the getaway car is not easily identifiable after the robbery. He cannot leave
fingerprints or have his face captured on the surveillance cameras. If he performs his actions on the
Internet, he can easily make himself anonymous and the source of the attack untraceable.
The local bank robber obtains detailed building maps and city maps of his target. His online counterpart easily and freely finds information on hacking and cracking. He uses different sets of tools and techniques every day to target an online bank.
Points the attacker can target
As mentioned, the vulnerability of a system exists at the entry and exit points within the system. Figure 9-6 shows an e-Commerce system with several points that the attacker can target:
• Shopper
• Shopper's computer
• Network connection between shopper and Web site's server
• Web site's server
• Software vendor
Fig 9-6 Points the attacker can target
These target points and their exploits are explored later in this section.
Attacks
This section describes potential security attack methods from an attacker or hacker.
Tricking the shopper
Some of the easiest and most profitable attacks are based on tricking the shopper, also known as social
engineering techniques. These attacks involve surveillance of the shopper's behavior, gathering
information to use against the shopper. For example, a mother's maiden name is a common challenge
question used by numerous sites. If one of these sites is tricked into giving away a password once the
challenge question is provided, then not only has this site been compromised, but it is also likely that
the shopper used the same logon ID and password on other sites.
A common scenario is that the attacker calls the shopper, pretending to be a representative from a site
visited, and extracts information. The attacker then calls a customer service representative at the site,
posing as the shopper and providing personal information. The attacker then asks for the password to
be reset to a specific value.
Another common form of social engineering attack is the phishing scheme. Typo pirates play on the
names of famous sites to collect authentication and registration information. For example,
http://www.ibm.com/shop is registered by the attacker as www.ibn.com/shop. A shopper mistypes
and enters the illegitimate site and provides confidential information. Alternatively, the attacker sends
emails spoofed to look like they came from legitimate sites. The link inside the email maps to a rogue
site that collects the information.
Snooping the shopper's computer
Millions of computers are added to the Internet every month. Most users' knowledge of security
vulnerabilities of their systems is vague at best. Additionally, software and hardware vendors, in their
quest to ensure that their products are easy to install, will ship products with security features
disabled. In most cases, enabling security features requires a non-technical user to read manuals written
for the technologist. The confused user does not attempt to enable the security features. This creates a
treasure trove for attackers.
A popular technique for gaining entry into the shopper's system is to use a tool, such as SATAN, to
perform port scans on a computer that detect entry points into the machine. Based on the opened ports
found, the attacker can use various techniques to gain entry into the user's system. Upon entry, they
scan your file system for personal information, such as passwords.
While software and hardware security solutions available protect the public's systems, they are not
silver bullets. A user that purchases firewall software to protect his computer may find there are conflicts
with other software on his system. To resolve the conflict, the user disables enough capabilities to render
the firewall software useless.
Sniffing the network
In this scheme, the attacker monitors the data between the shopper's computer and the server. He collects
data about the shopper or steals personal information, such as credit card numbers.
There are points in the network where this attack is more practical than others. If the attacker sits in the
middle of the network, then within the scope of the Internet, this attack becomes impractical. A request
from the client to the server computer is broken up into small pieces known as packets as it leaves
the client's computer and is reconstructed at the server. The packets of a request are sent through different routes, so the attacker cannot access all the packets of a request and cannot decipher what message was sent.
Take the example of a shopper in Toronto purchasing goods from a store in Los Angeles. Some packets
for a request are routed through New York, while others are routed through Chicago. A more practical
location for this attack is near the shopper's computer or the server. Wireless hubs make attacks on the
shopper's computer network the better choice because most wireless hubs are shipped with security
features disabled. This allows an attacker to easily scan unencrypted traffic from the user's computer.
Fig 9-7 Attacker sniffing the network between client and server
Guessing passwords
Another common attack is to guess a user's password. This style of attack is manual or
automated. Manual attacks are laborious, and only successful if the attacker knows something about
the shopper. For example, if the shopper uses their child's name as the password. Automated
attacks have a higher likelihood of success, because the probability of guessing a user ID/password
becomes more significant as the number of tries increases. Tools exist that use all the words in the
dictionary to test user ID/password combinations, or that attack popular user ID/password
combinations. The attacker can automate the attack to run against multiple sites at one time.
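A short back-of-the-envelope calculation shows why longer, more complex passwords resist automated guessing. The guessing rate below is a purely illustrative assumption; real attacker throughput varies enormously.

# Illustrative only: compare the size of the password search space.
lowercase_only_8 = 26 ** 8          # 8 characters, lowercase letters only
mixed_charset_12 = 94 ** 12         # 12 characters drawn from ~94 printable ASCII symbols

guesses_per_second = 1_000_000_000  # hypothetical attacker throughput

for label, space in [("8-char lowercase", lowercase_only_8),
                     ("12-char mixed", mixed_charset_12)]:
    worst_case_years = space / guesses_per_second / (3600 * 24 * 365)
    print(f"{label}: {space:.2e} combinations, ~{worst_case_years:.2e} years to exhaust")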
Using denial-of-service attacks
The denial-of-service attack is one of the best examples of impacting site availability. It involves getting
the server to perform a large number of mundane tasks, exceeding the capacity of the server to cope
with any other task. For example, if everyone in a large meeting asks you your name all at once, and every time you answer they ask again, you have experienced a personal denial-of-service attack. To ask a computer its name, you use ping, and you can use ping to build an effective DoS
attack. The smart hacker gets the server to use more computational resources in processing the
request than the adversary does in generating the request.
Distributed DoS is a type of attack used on popular sites, such as Yahoo!®. In this type of attack, the
hacker infects computers on the Internet via a virus or other means. The infected computers become slaves to the hacker, who controls them at a predetermined time to bombard the target server with useless but resource-intensive requests. This attack causes problems not only for the target site but also for the entire Internet, as the flood of packets is routed via many different paths to the target.
Fig 9-8 Denial of service attacks
Using known server bugs
The attacker analyzes the site to find what types of software are used on the site. He then
proceeds to find what patches were issued for the software. Additionally, he searches on how to exploit
a system without the patch. He proceeds to try each of the exploits. The sophisticated attacker finds a
weakness in a similar type of software, and tries to use that to exploit the system. This is a simple, but
effective attack. With millions of servers online, what is the probability that a system administrator
forgot to apply a patch?
Using server root exploits
Root exploits refer to techniques that gain super user access to the server. This is the most
coveted type of exploit because the possibilities are limitless. When you attack a shopper or his
computer, you can only affect one individual. With a root exploit, you gain control of the merchant's and all the shoppers' information on the site. There are two main types of root exploits: buffer overflow attacks and executing scripts against a server.
In a buffer overflow attack, the hacker takes advantage of a specific type of computer program bug that involves the allocation of storage during program execution. The technique involves tricking the server into executing code written by the attacker.
The other technique uses knowledge of scripts that are executed by the server. This is easily and freely found in the programming guides for the server. The attacker tries to construct scripts in the URL of his browser to retrieve information from the server. This technique is frequently used when the attacker is trying to retrieve data from the server's database.
Defenses
Despite the existence of hackers and crackers, e-Commerce remains a safe and secure activity. The
resources available to large companies involved in e-Commerce are enormous. These companies will
pursue every legal route to protect their customers. Figure 9-9 shows a high-level illustration of defenses
available against attacks.
Fig 9-9 Attacks and their defenses
The specific defenses employed by e-Commerce companies can vary, but they typically encompass a
combination of the following measures:
Network Security: Robust network security measures, such as firewalls, intrusion detection and
prevention systems (IDS/IPS), and network segmentation, are implemented to protect against
unauthorized access and malicious activities.
Encryption: Encryption technologies, such as Secure Sockets Layer (SSL) and Transport Layer Security
(TLS), are utilized to secure the transmission of sensitive data between the shopper's computer and the
server. This ensures that the information remains confidential and protected from interception.
Secure Authentication: Strong authentication mechanisms, such as multi-factor authentication (MFA),
are implemented to verify the identity of shoppers and prevent unauthorized access to accounts.
Regular Software Updates and Patching: System administrators and developers diligently apply
software updates and patches to address vulnerabilities and protect against known security flaws.
Intrusion Detection and Prevention: Advanced intrusion detection and prevention systems are deployed
to monitor network traffic, detect suspicious activities, and block or mitigate potential attacks in real-time.
Security Auditing and Monitoring: Continuous security auditing and monitoring practices are employed
to identify and respond to potential security breaches or anomalies promptly. This includes analyzing
system logs, conducting regular vulnerability assessments, and performing penetration testing.
Employee Education and Awareness: Training programs and awareness campaigns are conducted to
educate employees about security best practices, the importance of maintaining strong passwords, and
recognizing and reporting potential security threats or suspicious activities.
Incident Response Plans: Companies have well-defined incident response plans in place to handle
security incidents effectively and minimize the impact of any potential breaches. These plans include
procedures for containment, mitigation, investigation, and recovery.
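As a concrete illustration of the secure authentication measure above, time-based one-time passwords (TOTP) are a common second factor. The sketch below uses the third-party pyotp library; the flow shown (provision a shared secret, then verify the code the user enters) is simplified for illustration only.

# Requires: pip install pyotp  (third-party library, shown for illustration)
import pyotp

secret = pyotp.random_base32()   # shared secret, provisioned to the user's authenticator app
totp = pyotp.TOTP(secret)

code = totp.now()                # 6-digit code, valid for roughly 30 seconds
print("Current code:", code)

# On the server side, verify what the user typed in:
print("Valid?", totp.verify(code))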
At the end of the day, your system is only as secure as the people who use it. Education is the best
way to ensure that your customers take appropriate precautions:
• Install personal firewalls for the client machines.
• Store confidential information in encrypted form.
• Encrypt the stream using the Secure Socket Layer (SSL) protocol to protect information
flowing between the client and the e-Commerce Web site.
• Use appropriate password policies, firewalls, and routine external security audits.
• Use threat model analysis, strict development policies, and external security audits to protect
ISV software running the Web site.
The Importance of Education in Security
It is crucial to recognize that the security of any system is dependent on the actions and decisions of the
individuals who use it. Regardless of the robustness of security measures in place, if users fail to adopt
good security practices, the system becomes vulnerable to potential attacks. This is particularly concerning
when it comes to the choice and management of passwords.
If a shopper selects a weak or easily guessable password, or shares their password with others, it opens the
door for attackers to impersonate that user and gain unauthorized access to their account. This becomes
even more significant if the compromised password belongs to a system administrator who has elevated
privileges within the system. In such cases, there may be additional layers of physical security in place,
such as firewalls and restricted access. Nevertheless, if an attacker gains control of an administrator
account, it can lead to severe consequences.
To mitigate these risks, users must exercise good judgment when sharing information and be aware of
potential threats, such as phishing schemes and social engineering attacks. Education plays a vital role in
raising awareness and providing users with the knowledge and skills to navigate the digital landscape
securely. Key aspects of user education in security include:
• Password Best Practices: Users should be educated about the importance of creating strong, unique passwords and regularly changing them. They should also understand the significance of keeping passwords confidential and not sharing them with others.
• Phishing Awareness: Users should be trained to identify and avoid phishing emails, which are deceptive messages designed to trick individuals into revealing sensitive information. They should be cautious of clicking on suspicious links or providing personal information in response to unsolicited requests.
• Social Engineering Attacks: Users should be aware of social engineering techniques used by attackers to manipulate individuals into divulging sensitive information or performing certain actions. Education should focus on recognizing common tactics employed by social engineers and practicing skepticism when interacting with unfamiliar individuals or requests.
• Security Hygiene: Users should be encouraged to practice good security hygiene, such as keeping their devices and software up to date with the latest security patches, using reputable antivirus software, and being cautious when downloading or installing applications from unknown sources.
• Reporting Suspicious Activity: Users should be informed about the importance of reporting any suspicious or potentially malicious activity to the appropriate authorities or IT department. This includes reporting phishing emails, unusual system behavior, or any incidents that could indicate a security breach.
By prioritizing user education in security practices, organizations can empower their users to become
active participants in maintaining a secure environment. Regular training sessions, awareness campaigns,
and clear communication about security policies and procedures can significantly enhance the overall
security posture of a system or platform.
Personal firewalls
When connecting your computer to a network, it becomes vulnerable to attack. A personal firewall
helps protect your computer by limiting the types of traffic initiated by and directed to your computer.
Without such protection, an intruder could, for example, scan the hard drive to detect any stored passwords.
A personal firewall acts as a barrier between your computer and the network, monitoring and controlling
incoming and outgoing network traffic. It helps regulate the types of connections and data packets that are
allowed to pass through, based on predefined rules and settings. By implementing a personal firewall, you
can significantly reduce the risk of malicious activity reaching your computer and protect your sensitive
information.
Transport Layer Security (TLS)
Transport Layer Security (TLS) is a cryptographic protocol that ensures secure communication over
computer networks. It is the successor to the older Secure Socket Layer (SSL) protocol. TLS is widely
used to secure web transactions, email communication, and various other network protocols.
TLS provides encryption and authentication mechanisms to protect the confidentiality and integrity of data
transmitted between a client (such as a web browser) and a server. When establishing a TLS connection,
both the client and server undergo a handshake process to negotiate encryption algorithms and exchange
encryption keys. This handshake verifies the authenticity of the server and establishes a secure channel
for data transmission.
The main features and benefits of TLS include:
Encryption: TLS encrypts data to prevent unauthorized access and ensure that information remains
confidential during transmission. Encryption algorithms used in TLS include symmetric encryption (for
bulk data) and asymmetric encryption (for exchanging encryption keys).
Data Integrity: TLS employs cryptographic hash functions to ensure that data transmitted between the
client and server is not tampered with during transit. This provides assurance that the data received at the
destination is the same as the data sent by the sender.
Authentication: TLS supports various authentication mechanisms, including digital certificates issued by
trusted Certificate Authorities (CAs). These certificates verify the identity of the server and sometimes the
client, enabling users to trust the authenticity of the entities they are communicating with.
Forward Secrecy: TLS supports forward secrecy, which means that even if an attacker compromises the
private key of a server, they cannot decrypt past communications that were secured with different session
keys. This enhances the long-term security of the communication.
Interoperability: TLS is a widely adopted standard and is supported by most modern web browsers, email
clients, and network devices. It ensures interoperability between different systems and allows for secure
communication across diverse platforms.
TLS has undergone several versions and improvements over time. The current versions include TLS 1.2
and TLS 1.3, with the latter being the most secure and efficient. TLS is continuously updated to address
known vulnerabilities and adapt to evolving security requirements.
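A minimal sketch of establishing a TLS connection from a client, using Python's standard ssl and socket modules. The host name is only an example; certificate and hostname verification is performed against the system's trusted CA store.

import socket
import ssl

hostname = "www.example.com"              # example host; any HTTPS-enabled server would do
context = ssl.create_default_context()    # enables certificate and hostname verification

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())   # e.g. 'TLSv1.3'
        cert = tls_sock.getpeercert()                        # parsed server certificate
        print("Certificate subject:", cert.get("subject"))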
Secure Socket Layer (SSL)- previously used
Secure Socket Layer (SSL) is a protocol that encrypts data between the shopper's computer and the site's
server. When an SSL-protected page is requested, the browser identifies the server as a trusted entity and
initiates a handshake to pass encryption key information back and forth. Now, on subsequent requests to
the server, the information flowing back and forth is encrypted so that a hacker sniffing the network
cannot read the contents.
The SSL certificate is issued to the server by a trusted certificate authority (CA).
When a request is made from the shopper's browser to the site's server using https://..., the shopper's
browser checks if this site has a certificate it can recognize. If the site is not recognized by a trusted
certificate authority, then the browser issues a warning.
As an end-user, you can determine if you are in an SSL session by checking your browser. For example, in Mozilla® Firefox, the secure icon appears in the URL entry field as shown in Figure 9-10.
Fig 9-10 Secure icon in Mozilla Firefox
In Microsoft® Internet Explorer, the secure icon is at the bottom right of the browser as shown in
Figure 9-11.
Fig 9-11 Secure icon in Microsoft Internet Explorer
Server firewalls
Server firewalls are essential components of network security that help protect servers from unauthorized
access and potential attacks. Similar to a moat surrounding a castle, a firewall acts as a barrier between the
server and the external network, controlling incoming and outgoing network traffic based on predefined
rules.
A common configuration for server firewalls involves the use of a demilitarized zone (DMZ), which is
created using two firewalls. The outer firewall allows incoming and outgoing HTTP requests, enabling
communication between client browsers and the server. The inner firewall, located behind the e-
Commerce servers, provides a higher level of security. It only permits requests from trusted servers on
specific ports to enter the server environment. Intrusion detection software is often employed on both
firewalls to detect any unauthorized access attempts and potential threats.
In addition to a DMZ, another technique often used is the implementation of a honey pot server. A honey
pot is a deceptive resource, such as a fake payment server, intentionally placed in the DMZ to lure and
deceive potential attackers. These servers are closely monitored, and any access or interaction by an
attacker is promptly detected. The honey pot serves as a distraction and can provide valuable insight into
the attacker's methods and intentions, enabling security teams to strengthen defenses and respond
effectively.
Server firewalls, along with other security measures, play a crucial role in safeguarding the integrity and
availability of servers in an e-Commerce environment. By carefully controlling and monitoring network
traffic, organizations can mitigate the risk of unauthorized access, data breaches, and other malicious
activities. Regular updates and configuration reviews are necessary to ensure that firewalls remain
effective against evolving threats and vulnerabilities.
Another such advancement is the advent of Web Application Firewalls (WAFs). Unlike regular firewalls
that filter traffic based on ports and protocols, WAFs operate at the application layer (Layer 7 of the OSI
model) and are specifically designed to inspect HTTP/HTTPS traffic. They can identify and block attacks
such as cross-site scripting (XSS), SQL injection, and other common web-based threats that can
compromise servers. WAFs are particularly effective for e-Commerce servers that often host complex web
applications with multiple potential vulnerabilities.
Another rising trend in server security is the use of machine learning and artificial intelligence. AI-
enhanced firewalls analyze patterns and behaviors in network traffic, learning over time to identify
suspicious activity better. These intelligent firewalls can adapt and respond to threats faster and more
accurately than traditional firewalls, often detecting and mitigating issues before they become significant
problems.
Moreover, the implementation of a Zero Trust security model has gained traction. This model operates on
the principle of "trust nothing, verify everything," irrespective of whether the request originates from
within or outside the network. In the context of firewalls, this means robust identity verification and strict
access controls, ensuring that only verified users or systems can access server resources.
Microsegmentation is another strategy increasingly being employed. It involves breaking the network down into small security zones so that access can be controlled separately for each part of the network. This can help limit an attacker's ability to move laterally across the network even if they breach the initial firewall.
These advancements, combined with traditional firewall systems, help create a layered defense strategy,
reducing the chances of successful server attacks and enhancing overall network security.

Fig 9-12 Firewalls and honey pots

Password policies
Ensure that password policies are enforced for shoppers and internal users. A sample password policy, based on guidance such as the Federal Information Processing Standards (FIPS), is shown in the table below.

Policy Element: Description
Minimum Password Length: Passwords must be at least 12 characters in length.
Complexity Requirements: Passwords must meet complexity requirements, including a combination of uppercase and lowercase letters, numbers, and special characters.
Password Expiration: Passwords must be changed every 90 days.
Password History: Users cannot reuse their previous 10 passwords.
Account Lockout: After 5 unsuccessful login attempts, the account is locked for 15 minutes.
Password Storage: Passwords are stored using strong encryption techniques.
Password Strength Education: Users are educated on creating strong passwords and regularly reminded to update their passwords.
Multi-Factor Authentication: Users are encouraged to enable multi-factor authentication for additional security.
Password Recovery: A secure password recovery mechanism is in place to verify users' identities before resetting passwords.
Password Usage Monitoring: Password usage and activity are monitored for potential security breaches.

You may choose to have different policies for shoppers versus your internal users. For example, you may choose to lock out an administrator after 2 failed login attempts instead of 5. These password policies protect against attacks that attempt to guess the user's password. They ensure that passwords are sufficiently strong that they cannot be easily guessed. The account lockout capability ensures that an automated scheme cannot make more than a few guesses before the account is locked.
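As a minimal sketch (assuming the sample thresholds above), a site might validate new passwords against such a policy with something like the following; the rules and messages are illustrative only:

```python
import re

MIN_LENGTH = 12   # from the sample policy above

def check_password_strength(password):
    """Return a list of policy violations; an empty list means the password passes."""
    problems = []
    if len(password) < MIN_LENGTH:
        problems.append("must be at least %d characters" % MIN_LENGTH)
    if not re.search(r"[a-z]", password):
        problems.append("must contain a lowercase letter")
    if not re.search(r"[A-Z]", password):
        problems.append("must contain an uppercase letter")
    if not re.search(r"[0-9]", password):
        problems.append("must contain a digit")
    if not re.search(r"[^A-Za-z0-9]", password):
        problems.append("must contain a special character")
    return problems

print(check_password_strength("Summer2024"))             # too short, no special character
print(check_password_strength("C0rrect-Horse-Battery"))  # [] (passes the sample policy)
```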
Intrusion detection and audits of security logs
One of the cornerstones of an effective security strategy is not only to prevent attacks but also to detect potential attackers. Detection helps you understand the nature of the system's traffic and can serve as a starting point for litigation against the attackers.

Suppose that you have implemented a password policy, such as the FIPS-style policy described in the section above. If a shopper exceeds the permitted number of failed logon attempts (five in the sample policy), the account is locked out. In this scenario, the company sends an email to the customer informing them that their account is locked. This event should also be logged in the system, either by sending an email to the administrator, writing the event to a security log, or both.
You should also log any attempted unauthorized access to the system. If a user logs on and attempts to access resources they are not entitled to see, or performs actions they are not entitled to perform, this may indicate that the account has been compromised and should be locked out. Analysis of the security logs can detect patterns of suspicious behavior, allowing the administrator to take action.
In addition to security logs, use business auditing to monitor activities such as payment processing. You
can monitor and review these logs to detect patterns of inappropriate interaction at the business process
level. The infrastructure for business auditing and security logging is complex, and most likely will come
as part of any middleware platform selected to host your site. WebSphere Commerce, for example, has
extensive capabilities in this area.
On a recent progression front, many systems now incorporate Artificial Intelligence (AI) and Machine
Learning (ML) technologies to enhance their intrusion detection capabilities. These technologies can learn
from the log data, identify patterns and anomalies, and help in early detection of suspicious activities.
Moreover, Security Information and Event Management (SIEM) tools have become prominent. They
consolidate log data generated across the network, identify deviations from the norm, and take necessary
corrective actions, providing a more comprehensive and integrated view of an organization's IT security
landscape.
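A minimal sketch of the lockout-and-logging behaviour described above, using only Python's standard library, is shown below; the thresholds and account name are hypothetical:

```python
import logging
from collections import defaultdict
from datetime import datetime, timedelta

MAX_FAILED_ATTEMPTS = 5                 # from the sample password policy
LOCKOUT_PERIOD = timedelta(minutes=15)

logging.basicConfig(level=logging.INFO)
security_log = logging.getLogger("security")

failed_attempts = defaultdict(int)
locked_until = {}

def record_failed_login(account):
    """Count a failed logon; lock the account and raise an alert when the threshold is reached."""
    failed_attempts[account] += 1
    security_log.info("failed login for %s (attempt %d)", account, failed_attempts[account])
    if failed_attempts[account] >= MAX_FAILED_ATTEMPTS:
        locked_until[account] = datetime.utcnow() + LOCKOUT_PERIOD
        # A real system would also email the administrator and the customer at this point.
        security_log.warning("account %s locked until %s", account, locked_until[account])

for _ in range(5):
    record_failed_login("shopper42")
```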
Site development best practices
This section describes best practices you can implement to help secure your site.
There are many established policies and standards for avoiding security issues. However, they are not
required by law. Some basic rules include:
Security:
• Implement strong authentication and authorization mechanisms to protect user accounts and sensitive data.
• Utilize secure communication protocols, such as HTTPS, to encrypt data transmitted between the user's browser and the server.
• Regularly update and patch software and frameworks to address security vulnerabilities.
• Implement robust input validation and data sanitization techniques to prevent common security exploits like SQL injection and cross-site scripting (XSS) attacks.
• Employ secure coding practices and follow secure coding guidelines to minimize the risk of introducing vulnerabilities during development.
Performance:
• Optimize website performance by minimizing page load times, reducing file sizes, and leveraging caching techniques.
• Compress and minify CSS, JavaScript, and HTML files to reduce bandwidth usage and improve rendering speed.
• Optimize images by using appropriate formats and sizes without compromising visual quality.
• Employ content delivery networks (CDNs) to distribute static assets globally, reducing latency and improving load times.
• Implement efficient database queries and caching strategies to improve database performance.
Responsive Design:
• Design websites to be mobile-friendly and responsive, ensuring optimal user experience across different devices and screen sizes.
• Use responsive design techniques like flexible grids, fluid layouts, and media queries to adapt content to different screen resolutions.
Accessibility:
• Follow accessibility guidelines (e.g., Web Content Accessibility Guidelines - WCAG) to make websites accessible to people with disabilities.
• Provide alternative text for images, use semantic HTML markup, and ensure proper keyboard navigation.
• Consider color contrast and readability to accommodate users with visual impairments.
Usability:
• Focus on user-centered design principles to create intuitive and user-friendly interfaces.
• Conduct usability testing to gather feedback and make improvements based on user behavior and preferences.
• Optimize navigation and information architecture to ensure easy discovery of content.
Scalability and Maintainability:
• Build websites with scalability in mind, allowing for future growth and increased traffic.
• Follow modular and maintainable coding practices, such as using clean code, clear naming conventions, and separation of concerns.
• Document code and provide comprehensive documentation for easier maintenance and troubleshooting.
Testing and Quality Assurance:
• Conduct thorough testing, including functional, performance, security, and compatibility testing.
• Implement automated testing practices, such as unit tests, integration tests, and regression tests, to ensure ongoing code quality and stability.
• Regularly monitor and analyze website analytics to identify areas for improvement and optimize user experience.
Using cookies
One of the issues faced by Web site designers is maintaining a secure session with a client over subsequent
requests. Because HTTP is stateless, unless some kind of session token is passed back and forth on
every request, the server has no way to link together requests made by the same person. Cookies are
a popular mechanism for this. An identifier for the user or session is stored in a cookie and read on every
request. You can use cookies to store user preference information, such as language and currency. This
simplifies Web page development because you do not have to be concerned about passing this information
back to the server.
The primary use of cookies is to store authentication and session information, your information, and your
preferences. A secondary and controversial usage of cookies is to track the activities of users.
Different types of cookies are:
Session Cookies: Also known as temporary cookies, these exist only for the duration of a web
session. They disappear from your computer when you close the browser or turn off the computer.
These cookies allow websites to link the actions of a user during a browser session, helping with
navigation and functionality.
Persistent Cookies: Unlike session cookies, persistent cookies remain on your computer even after
the browser is closed, for a time period specified by the website that sent them. They are activated
each time you visit the website that created that particular cookie. They are useful for remembering
user preferences and making the interaction between users and websites smoother and faster.
First-party Cookies: These are cookies set directly by the website you are visiting. They could be
session or persistent cookies that help the website remember information about your visit, like your
preferred language and other settings.
Third-party Cookies: These are cookies that are set by a domain other than the one you are visiting.
They are mainly used for tracking purposes and to serve ads based on a user's past visits to a certain
site. Privacy concerns surrounding third-party cookies have led to them being blocked by default in
a number of web browsers.
Secure Cookies: These are cookies transmitted over encrypted connections. They are used to
maintain data privacy as the information transmitted in the cookies is protected by encryption,
significantly reducing the risk of interception by an unauthorized party.
HttpOnly Cookies: These cookies are inaccessible to JavaScript's Document.cookie API, which mitigates the risk of cross-site scripting attacks. They are only used by the server, ensuring a higher level of security.
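As an illustration of the Secure and HttpOnly attributes mentioned above, a server-side sketch using Python's standard http.cookies module might look like the following; the cookie name and lifetime are illustrative:

```python
import secrets
from http.cookies import SimpleCookie

# Store only an unguessable session identifier in the cookie, never credentials.
session_id = secrets.token_urlsafe(32)

cookie = SimpleCookie()
cookie["SESSIONID"] = session_id
cookie["SESSIONID"]["secure"] = True     # only sent over HTTPS (a "secure cookie")
cookie["SESSIONID"]["httponly"] = True   # hidden from Document.cookie (an "HttpOnly cookie")
cookie["SESSIONID"]["samesite"] = "Lax"  # limits cross-site sending of the cookie
cookie["SESSIONID"]["max-age"] = 1800    # persists for 30 minutes; omit for a session cookie

# The Set-Cookie header value a web framework would add to its HTTP response:
print(cookie["SESSIONID"].OutputString())
```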
If you do not want to store cookies, here are other alternatives:
Token-based Authentication: Instead of sending a username and password with each request, you
could use a token-based system. After initial login, the server provides a token, typically a long,
randomized string that is hard to fake. This token is sent with each request and used to authenticate
the user. These tokens can be stored in memory or in local storage, and are generally more secure and
efficient than sending user credentials with each request.
Open Authorization (OAuth): This is a standard for access delegation. It's used as a way for Internet
users to grant websites or applications access to their information on other websites but without giving
them the passwords. This mechanism is used by companies like Google, Facebook, Microsoft, and
Twitter to permit the users to share information about their accounts with third-party applications or
websites.
Single Sign-On (SSO): SSO allows users to log in once and gain access to different applications,
without the need to re-enter login credentials at each application. It's a user-friendly way to avoid the
need for multiple passwords and usernames while maintaining a high level of security.
URL Rewriting: This method is still in use but is generally not recommended due to security concerns. It involves encoding the session ID directly into the URL. While this method does not rely on cookies, it poses security risks because the session ID might be leaked in various ways, such as in browser history or web server logs.
In all cases, using a secure HTTPS connection is highly recommended to protect data during
transmission. SSL/TLS not only encrypts the content but also provides authentication, data integrity,
and privacy.
Using threat models to prevent exploits
When architecting and developing a system, it is important to use threat models to identify all possible
security threats on the server. Think of the server like your house. It has doors and windows to allow for
entry and exit. These are the points that a burglar will attack. A threat model seeks to identify these
points in the server and to develop possible attacks.
Threat models are particularly important when relying on a third-party vendor for all or part of the site's infrastructure, as they help ensure that the suite of threat models is complete and up to date.
The process of creating a threat model involves several key steps:
Identifying assets: Determine the valuable assets within the system that require protection, such as
sensitive data, user information, or critical functionality.
Identifying threats: Identify potential threats that could exploit vulnerabilities in the system. This includes
considering external threats like hackers or malicious actors, as well as internal threats like unauthorized
access by employees.
Assessing vulnerabilities: Evaluate the system's architecture, design, and implementation to identify any
existing vulnerabilities or weaknesses that could be targeted by threats. This includes analyzing potential
attack vectors and weak points in the system.
Mitigating risks: Develop strategies and countermeasures to mitigate the identified risks. This may involve
implementing security controls, enhancing access controls, encrypting sensitive data, or applying secure
coding practices.
Regular review and update: Threat models should be periodically reviewed and updated to account for
new threats, emerging technologies, and changes in the system's architecture or infrastructure.

Responding to security issues


An effective overall security strategy is to be prepared when vulnerabilities are detected. This also
means ensuring that software vendors selected for all or part of the site's infrastructure have proactive
and reactive policies for handling security issues.

In the case of WebSphere Commerce, for example, the vendor can quickly form a SWAT team with key developers, testers, and support personnel. This becomes the highest priority for all involved parties. An assessment is made immediately, usually within the first few hours, to determine the vulnerability of the merchant's sites. A workaround or permanent solution is developed for the affected sites within a day. Then a "flash" is issued to all customers to notify them of the problem, the solution, and how to check whether they have been exploited. For critical issues, no one leaves until there is a solution.

Fig. 9-13 Threat models

Using an online security checklist


Use this security checklist to protect yourself as a shopper:

Secure Connection: Always ensure that the website is using a secure connection, denoted by HTTPS
in the URL, particularly when entering sensitive information like your credit card details or
passwords.
Valid SSL Certificate: Don't shop at sites where your browser shows warnings about the SSL
certificate. This could indicate that the site is not secure and the information you send could be
intercepted by others.
Strong Passwords: Use strong, unique passwords for each online account. Consider using a password
manager to help generate and remember these passwords.
Two-Factor Authentication: If available, enable two-factor authentication for added security. This
typically involves receiving a text or using an app to receive a code that you input when logging in.
Log Out After Shopping: Always log out of your account after you're done shopping, especially on
public or shared computers.
Use Credit Cards Instead of Debit Cards: Credit cards usually offer better protection against
fraudulent charges than debit cards.

Check for a Physical Address and Contact Information: Reputable websites will have contact
information available. Be wary if this information is not present.
Keep Software Updated: Keep your browser, antivirus software, and operating system up to date.
These updates often include security patches and improvements.
Be Aware of Phishing Scams: Be cautious of unsolicited communications asking for personal
information or directing you to log in to your account.
Review Privacy Policies: These should explain how the site collects, uses, and protects your
information. If a site doesn’t have a privacy policy, it’s a good idea to shop elsewhere.

Security and electronic commerce

Customer Security: Basic Principles


Most ecommerce merchants leave the mechanics to their hosting company or IT staff, but it helps
to understand the basic principles. Any system has to meet four requirements:

Privacy: Information must be protected from unauthorized access. Encryption is used to achieve
privacy. In public key infrastructure (PKI), a message is encrypted using the recipient's public key,
which can only be decrypted using their private key.
Integrity: Messages must not be altered or tampered with during transmission. Techniques like
message authentication codes (MACs) or digital signatures can be used to ensure message
integrity.
Authentication: Both the sender and recipient need to prove their identities to each other.
Authentication can be accomplished using digital certificates, passwords, biometrics, or other
methods that verify identity.
Non-repudiation: There should be proof that a message was actually sent and received, so that neither party can later deny the exchange. Techniques like digital signatures provide non-repudiation by ensuring that the sender cannot deny sending the message.

Public Key Infrastructure (PKI) forms the cornerstone of modern encryption techniques, and it's used
extensively to secure both online and offline communications. In PKI, data is encrypted with a public
key and decrypted with a corresponding private key. The public key is widely distributed and
accessible to anyone, while the private key is kept secret by the recipient.

In the realm of authentication, a process that verifies the sender's identity, PKI utilizes a system akin
to digital signatures. Here, a hash of the original message is encrypted using the sender's private key.
This encrypted hash, or signature, can then be decrypted by the recipient using the sender's public
key and compared against their own hash of the message to ensure the integrity and authenticity of
the data.

However, it's worth noting that PKI, due to the computational overhead of asymmetric encryption,
isn't typically efficient for the transmission of large amounts of data. Therefore, it's often used as an
initial step to establish a secure channel. During this phase, both parties can securely agree on a
symmetric key for further communications. This process, known as key agreement, often employs
protocols like Diffie-Hellman key exchange, where both parties contribute to the creation of the key
without a need for a third-party key distribution center.

This symmetric key, which is identical for both parties, is then used to encrypt and decrypt the
exchanged data efficiently. It's essential to understand that this symmetric key must be protected
during its distribution to prevent unauthorized access to the communication.

Maintaining the secrecy of private keys is crucial in this security model, but it's not the only potential
security lapse. Vulnerabilities can arise anywhere in the system, from software weaknesses to user
behavior, thus emphasizing the need for a holistic approach to security.

In summary, PKI systems often leverage a combination of asymmetric encryption, like RSA, for
authentication and secure key exchange, and symmetric encryption for the efficient exchange of data.
The security of these systems hinges on various factors, including safe private key handling, the
robustness of the encryption algorithms, and the overall security of the system they're implemented
in.
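A minimal sketch of this hybrid pattern, assuming the third-party Python cryptography package is installed, is shown below; in practice protocols such as TLS perform this exchange automatically, so the example is purely conceptual:

```python
# pip install cryptography   (assumed third-party dependency)
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# The recipient publishes the public key and keeps the private key secret.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# 1. The sender creates a symmetric session key and protects it with the recipient's
#    public key; asymmetric encryption is used only for this small piece of data.
session_key = Fernet.generate_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# 2. The bulk data is encrypted efficiently with the symmetric session key.
ciphertext = Fernet(session_key).encrypt(b"Order 1001: card ending 4242")

# 3. The recipient recovers the session key with the private key, then decrypts the data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
print(Fernet(recovered_key).decrypt(ciphertext))
```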

Digital Signatures and Certificates


Digital signatures are instrumental in providing authentication and ensuring integrity. To break this
down into simpler terms, a plaintext message is passed through a hash function, creating a unique output
known as the message digest. This digest, along with the plaintext message, is then encrypted using the
sender's private key to form a digital signature. The recipient receives the digitally signed message, the
plaintext message, and the sender's public key.

Upon receiving the message, the recipient uses the sender's public key to decrypt the digital signature,
extracting the original message digest. The recipient also runs the received plaintext message through
the same hash function to produce a new message digest. If the decrypted message digest matches the
newly generated one, it verifies that the message has not been tampered with during transmission. To
further strengthen the security and enforce non-repudiation, a third-party timestamping service is often
employed to validate the time and date at which the message was sent.

Authentication, on the other hand, can be ensured using digital certificates. How does a customer, for
instance, know that a website collecting sensitive information is not a fraudulent setup mimicking a
legitimate e-merchant? They can verify the site's digital certificate. This is a digital document issued by
a trusted Certification Authority (CA) like Verisign, Thawte, etc., that vouches for the website's
authenticity. It uniquely identifies the website or merchant, confirming their identity and legitimacy.
Digital certificates are not only issued for e-commerce sites and web servers but are also used to
authenticate emails and other online services.

To explain this in a more understandable form, let's share an example:

First, digital signatures. Imagine you're writing a secret note. You want to make sure that when your
friend reads it, they know it was you who wrote it, and that nobody else has changed it while it was on
its way. To do this, you create a 'message digest' - a unique representation of your message, sort of like
a digital fingerprint, made using something called a hash function.

Next, you use your private key (a secret code that only you have) to encrypt this fingerprint, creating a
digital signature. Now, you send your note, your digital signature, and your public key (a code that
everyone knows is yours) to your friend.

When your friend gets your note, they use your public key to 'unlock' your digital signature and get the
original message digest. They also create a new message digest by running your note through the same
hash function you used. If these two digests match, your friend knows that the note really is from you
and that nobody messed with it.

Sometimes, to make extra sure that the note really was sent when you say you sent it, a third party might
add a timestamp to your note, sort of like a digital postmark.

Next, let's talk about digital certificates. Imagine you're shopping online. How do you know that the
website you're buying from is the real deal and not some imposter trying to steal your information? This
is where digital certificates come in.

A digital certificate is like a digital ID card for a website or email service, issued by a trusted
organization called a Certification Authority (CA). This CA is like a digital notary, confirming that the
website or service is who they say they are. When you visit a site, you can check its digital certificate
to make sure it's not an imposter.

So, in short, digital signatures make sure a message is genuine and hasn't been tampered with, and
digital certificates prove that a website or online service is the real deal. Both of these help keep our
online world safe and secure.
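The sign-and-verify flow in this example can be sketched in a few lines with the third-party cryptography package (an assumed dependency); the message text is illustrative:

```python
# pip install cryptography   (assumed third-party dependency)
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"Please ship order 1001 to Kathmandu."

# Signing: the library hashes the message (the "message digest") and encrypts that
# digest with the sender's private key, producing the digital signature.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = sender_key.sign(message, pss, hashes.SHA256())

# Verification: the recipient re-hashes the received message and checks it against the
# signature using the sender's public key; any change to the message makes this fail.
public_key = sender_key.public_key()
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("Signature valid: the message is authentic and untampered.")
except InvalidSignature:
    print("Signature invalid: the message was altered or not signed by this sender.")
```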

Some related terms are:

Hash Function: The original message is processed through a hash function to generate a unique
value called the message digest.
Encryption: The message digest is encrypted using the sender's private key, creating the digital
signature. The digital signature is appended to the message.
Verification: The recipient uses the sender's public key to decrypt the digital signature, obtaining
the message digest. They independently run the received message through the same hash function
to generate a new digest.
Integrity Check: The recipient compares the received message digest with the newly generated
one. If they match, it indicates that the message has not been tampered with during transmission.
Digital Certificates:
Digital certificates serve as a means of authenticating the identity of the website or entity receiving
sensitive information. Here's how it works:
Certification Authority (CA): Trusted entities like Verisign or Thawte issue digital certificates.
CAs verify the identity of the certificate holder before issuing the certificate.
Certificate Content: A digital certificate contains information about the certificate holder,
including their public key and other identifying details. The certificate is digitally signed by the CA
to ensure its authenticity.

Certificate Verification: When customers interact with a website, they can check the digital
certificate provided by the server. The customer's browser verifies the certificate's authenticity by
validating the CA's digital signature. It ensures that the website is legitimately associated with the
entity identified in the certificate.

By combining digital signatures and digital certificates, the integrity, authenticity, and non-repudiation
of electronic communications can be ensured. Digital signatures verify the integrity of the message,
while digital certificates authenticate the identity of the receiving entity. Together, they provide a secure
framework for transmitting sensitive information and establishing trust in e-commerce transactions.

Transport Layer Security (TLS)


Transport Layer Security (TLS) is a cryptographic protocol used to provide secure communication over a
network. It is commonly used to secure web traffic and ensure the confidentiality, integrity, and
authenticity of data transmitted between clients and servers.

TLS operates at the transport layer of the networking stack, sitting on top of the reliable transmission
protocol (usually TCP). It uses a combination of symmetric and asymmetric encryption algorithms, digital
certificates, and cryptographic protocols to establish a secure connection between two parties.

Here's a high-level overview of how TLS works:

Handshake: The TLS handshake process begins when a client connects to a server over a secure
connection (usually initiated by the client accessing a website with "https" in the URL). During the
handshake, the client and server negotiate the security parameters for the session.
Encryption Setup: Once the handshake is complete, the client and server establish a shared session
key using asymmetric encryption (public-key cryptography). This session key is then used for
symmetric encryption (faster and more efficient) of the actual data transmitted between the client and
server.
Data Exchange: With the secure connection established, the client and server can securely exchange
data. The data is encrypted using the session key, ensuring confidentiality.
Integrity and Authentication: TLS also provides mechanisms for data integrity and server
authentication. Message Authentication Codes (MACs) are used to verify that the data has not been
tampered with during transmission. Digital certificates issued by trusted Certificate Authorities (CAs)
are used to authenticate the identity of the server, ensuring that the client is communicating with the
intended server and not an imposter.

TLS has evolved over the years, with different versions such as TLS 1.0, TLS 1.1, TLS 1.2, and the most
recent version, TLS 1.3. Newer versions often address security vulnerabilities found in older versions and
introduce improvements in performance and security.

TLS is widely used to secure various network protocols, including HTTPS (secure web browsing), FTPS
(secure file transfer), and secure email protocols like SMTPS and IMAPS. Its widespread adoption has
significantly contributed to the secure transmission of sensitive information over the internet.

Secure Socket Layers
SSL is a protocol that provides security for communications over networks such as the internet. It
operates above the transport layer, where TCP/IP resides, and beneath application protocols such as
HTTP. The goal of SSL is to provide a secure channel between two machines or devices operating over
a network.

Here's a simplified version of the SSL process:

When a client (like your web browser) connects to an SSL-secured server (like a shopping website),
it asks the server to identify itself.
The server sends back a copy of its SSL certificate, which includes the server's public key.
The client checks the server's certificate against a list of trusted Certification Authorities (CAs). If the
certificate is valid and trusted, the client creates, encrypts, and sends back a symmetric session key
using the server's public key.
The server decrypts the symmetric session key using its private key. Now, both the server and client
have the same session key for that specific session.
The server sends back an acknowledgment, encrypted with the session key, to start the encrypted
session.
Server and client now encrypt all transmitted data with the session key.

By using this process, SSL provides a way to securely transmit sensitive information, like credit card
numbers or login credentials, over the internet. The use of both PKI and symmetric encryption helps ensure
both the integrity and confidentiality of the transmitted data.

PCI, SET, Firewalls and Kerberos


PCI (Payment Card Industry): Although "PCI" can also refer to a hardware interface (Peripheral Component Interconnect), in the context of e-commerce security it refers to the Payment Card Industry Data Security Standard (PCI DSS). PCI DSS is a set of security standards designed to ensure the protection of cardholder data during credit card transactions. It specifies requirements for
to ensure the protection of cardholder data during credit card transactions. It specifies requirements for
merchants and service providers who handle, process, or store cardholder information. Compliance with
PCI DSS helps prevent data breaches and ensures the secure handling of credit card details.

SET (Secure Electronic Transaction): SET is a protocol developed by Visa and Mastercard to enhance
the security of electronic payment transactions. It uses public-key cryptography and digital certificates to
ensure the privacy and integrity of transaction data. SET allows for secure communication between the
merchant, customer, and bank, protecting sensitive information during online transactions. While SET was
once widely used, it has been largely replaced by more modern payment security protocols such as EMV
and tokenization.

Firewalls: Firewalls are security mechanisms, either hardware or software-based, that control the
incoming and outgoing network traffic to protect a system or network from unauthorized access and
malicious activity. Firewalls monitor and filter network traffic based on predefined security rules, allowing
only authorized connections and blocking suspicious or potentially harmful traffic. They act as a barrier
between trusted internal networks and untrusted external networks, adding an extra layer of protection
against hackers, viruses, and other cyber threats.

Kerberos: Kerberos is a network authentication protocol that provides secure authentication between
clients and servers in a distributed computing environment. It uses symmetric key cryptography to verify
the identities of users and services, allowing them to securely communicate over a potentially untrusted
network. Kerberos eliminates the need to transmit passwords over the network by using tickets to
authenticate users. It is commonly used in enterprise environments to control access to resources and
protect against unauthorized access.

Securely transmitting credit card details can be achieved with SSL, but once those details are stored on a
server, they become vulnerable to potential hacking attempts. To protect sensitive data, the Payment Card
Industry Data Security Standard (PCI DSS) was developed. This is a set of comprehensive requirements
for enhancing payment account data security and it's globally recognized and adopted.
Firewalls, either software or hardware-based, serve as a primary line of defense, protecting servers,
networks, and individual PCs from outside threats like viruses and hacker attacks.

For internal security and to ensure that only authorized employees have access to certain information,
many companies use authentication protocols like Kerberos. This system uses symmetric key
cryptography to confirm the identities of individuals on a network, helping to maintain the integrity and
confidentiality of information.

Please note that while technologies like firewalls and Kerberos contribute to security, they form just part
of a broader security strategy. It's also essential for organizations to adopt good security practices, like
regularly updating and patching systems, and educating employees about potential security threats.

Transactions
Transactions involving sensitive information, such as credit card details, require robust security measures
to ensure the protection of the data. Let's examine the three stages of these transactions:

Credit card details supplied by the customer: When a customer provides their credit card details to the
merchant or a payment gateway, it is crucial to ensure the secure transmission and storage of this
information. The server's SSL (Secure Sockets Layer) technology plays a significant role in encrypting
the data during transmission between the customer's browser and the server. Additionally, the merchant
or server's digital certificates verify the authenticity and integrity of the communication, assuring the
customer that they are interacting with a trusted entity.

Credit card details passed to the bank for processing: After the merchant or payment gateway receives
the customer's credit card details, they need to securely transmit this information to the bank or payment
processor for processing. The payment gateway employs a range of sophisticated security measures to
protect this data during transit. These measures may include encryption, tokenization, and secure
communication protocols to ensure the confidentiality and integrity of the transaction data.

8.5 Evaluation of IS
Criterion for evaluation and risk
A number of distinct factors help firms to evaluate a particular risk. Let's take a look at the key notions
of exposure, volatility, probability, severity, and time horizon. It's really the interaction of these factors
with two other notions - capital and correlation - that determines the effect of a specific risk on a specific
company.

Exposure, generally speaking, is the maximum amount of damage that will be suffered if some event
occurs. All other things being equal, the risk associated with that event increases as the exposure
increases. For example, a lender is exposed to the risk that a borrower will default. Some exposures
can be pinned down to a specific number, while others are more qualitative - for example, reputational
risks. Exposure can be controlled in a number of ways: for example, it might be reduced by
transferring the risk to another company (such as an insurer), financed (cushioned by capital) or simply
retained.

Volatility, loosely meaning the variability of potential outcomes, is a good proxy for the word "risk" in many
of its applications. This is particularly true for risks that are predominantly dependent on market factors, such
as options pricing. In other applications, it is an important component of the overall risk. Generally, the
greater the volatility, the higher the risk. For example, the number of loans that turn bad is proportionately
higher, on average, in the credit card business than in commercial real estate. Nonetheless, it is real estate
lending that is widely considered to be riskier, because the loss rate is much more volatile - and therefore
harder to cost and manage.

Like exposure, volatility has a specific technical meaning in some areas of risk management. In market
risk, for example, it is synonymous with the standard deviation of returns and can be estimated in a number
of ways.

Probability: How likely is it that some risky event will actually occur? The more likely the event is to occur (in other words, the higher the probability), the greater the risk.
potential outcomes has been a major contribution to the science of risk management. Certain events,
such as interest rate movements or credit card defaults, are so likely that they need to be planned
for as a matter of course and mitigation strategies should be an integral part of the business' regular
operations. Others, such as a fire at a computer center, are highly improbable, but can have a devastating
impact.

Severity: How bad might it get? Whereas exposure is typically defined in terms of the worst that could
possibly happen, severity is the amount of damage that is, in some defined sense, likely to be suffered.
The greater the severity, the higher the risk. Severity is the partner to probability: if we know how likely
an event is to happen, and how much we are likely to suffer as a consequence, we have a pretty good
idea of the risk we are running. But severity is often a function of our other risk factors, such as volatility
- the higher a price might go, the more a company might lose.

Time horizon: The longer the duration of an exposure, the higher the risk. For example, extending a 10-
year loan to the same borrower has a much greater probability of default than a one-year loan. Hiring the
same technology company for a five-year outsourcing contract is much riskier than a six-month consulting
project - though not necessarily ten times as risky. The time horizon can also be thought of as a measure
of how long it takes to reverse the effects of a decision or event. The key issue for financial risk exposures
is the liquidity of the positions affected by the decision or event. Positions in highly liquid instruments,
such as US Treasury bonds, can usually be eliminated in a short period of time, while positions in, say,
real estate, are illiquid and take much longer to sell down.
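The interaction of probability and severity is often summarised as an expected, or annualised, loss. A minimal sketch with purely hypothetical figures:

```python
# Illustrative annualised loss expectancy (ALE) calculation; all figures are hypothetical
# and would come from the organisation's own risk assessment in practice.
risks = {
    # name: (annual probability of occurrence, likely severity per occurrence in Rs.)
    "Credit card fraud": (0.90, 400_000),
    "Data centre fire":  (0.02, 25_000_000),
    "Web defacement":    (0.30, 750_000),
}

for name, (probability, severity) in risks.items():
    ale = probability * severity   # expected loss per year
    print(f"{name:20s} ALE = Rs. {ale:12,.0f}")
```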

There are a few frameworks that should be considered while evaluating risk for information security:

COBIT (Control Objectives for Information and Related Technologies): This framework, developed by
ISACA, provides a set of best practices for IT management and IT governance. It helps organizations
align their IT goals with their business goals, while helping to manage the risks associated with IT and
information systems.

ISO 27001: This is an international standard that provides a framework for establishing, implementing,
maintaining, and continually improving an Information Security Management System (ISMS). It helps
organizations identify, manage, and reduce the range of threats to which their information systems are
exposed.

NIST Cybersecurity Framework: Developed by the National Institute of Standards and Technology, this
framework provides a policy for managing cybersecurity risk. It's widely adopted and provides best
practices for identifying, protecting, detecting, responding to, and recovering from cybersecurity
incidents.

Risk IT Framework: Also developed by ISACA, the Risk IT framework complements COBIT from a risk
management perspective. It provides an end-to-end, comprehensive view of all risks related to the use of
IT and a similarly complete treatment of risk management, from the tone and culture at the top, to
operational issues.
FAIR (Factor Analysis of Information Risk): FAIR is a quantitative risk analysis methodology for
cybersecurity and operational risk. It helps organizations understand, analyze, and quantify information
risk in financial terms.
Incorporating these frameworks into an organization's risk management strategy can significantly
improve their ability to evaluate, manage, and mitigate risks associated with their Information Systems.
It's also worth noting that many of these frameworks complement each other and can be used together to
provide a holistic approach to IT and information systems risk management.
Computer Assisted Audit Techniques (CAATs)
CAATs, or Computer Assisted Audit Tools and Techniques, are an evolving field within the audit profession that involves leveraging technology to automate or enhance the audit process. This can involve the use of various software packages like SAS, Excel, Access, Crystal Reports, Cognos, and Business Objects, among others.
At its core, CAATs involve the use of these technological tools to test and analyze large volumes of data,
which can provide auditors with a deeper understanding of an organization's financial situation,
operational efficiency, and internal controls. In more detail, CAATs can be used for several audit tasks:

Data extraction and analysis: This involves pulling data from an organization's systems and running
analysis to identify anomalies, trends, or discrepancies that might indicate errors or fraud.
Automated testing: Automated tests can be set up to run against an organization's data, providing the
ability to perform more detailed checks than manual processes and often in a shorter amount of time.
Data visualization: By presenting data in a visual format, auditors can more easily identify patterns and
anomalies that may warrant further investigation.
In the current digital era, the use of data analytics in CAATs has become increasingly important. With
businesses now generating vast amounts of data, the ability to analyze this data efficiently and effectively
is key. Advanced data analytics can help auditors gain insights into business operations, financial
transactions, and risk areas, enabling them to make informed decisions and provide valuable
recommendations.
It's also worth noting that with the advancements in machine learning and artificial intelligence, these
technologies are increasingly being incorporated into CAATs to further enhance the audit process.
Machine learning algorithms can be used to identify patterns and anomalies in data, which can help
auditors detect potential fraudulent activities or areas of high risk.
In conclusion, CAATs represent a vital toolset for modern auditors, allowing them to perform their duties
more effectively and efficiently, and enabling a more thorough and accurate audit process.
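As a simple illustration of the data extraction and analysis task described above, the sketch below (using the third-party pandas library, an assumed dependency, and invented figures) flags possible duplicate supplier payments and unusually large amounts, two common CAAT tests:

```python
# pip install pandas   (assumed third-party dependency); the data are invented.
import pandas as pd

payments = pd.DataFrame({
    "vendor":  ["Alpha Ltd", "Beta Pvt", "Alpha Ltd", "Gamma Co", "Alpha Ltd"],
    "invoice": ["INV-101",  "INV-220",  "INV-101",  "INV-310",  "INV-102"],
    "amount":  [150_000,     82_500,     150_000,    40_000,     150_000],
})

# Classic CAAT test: the same vendor, invoice number and amount suggests a duplicate payment.
duplicates = payments[payments.duplicated(subset=["vendor", "invoice", "amount"], keep=False)]
print(duplicates)

# A second simple analytic: payments that are unusually large relative to the population.
threshold = payments["amount"].mean() + 2 * payments["amount"].std()
print(payments[payments["amount"] > threshold])
```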

8.6 Development of Control Structure


Cost and benefits:
A cost-benefit analysis is necessary to determine economic feasibility. The primary objective of cost-
benefit analysis is to find out whether it is economically worthwhile to invest in the project. If the return on the investment is good, then the project is considered economically worthwhile. Cost-benefit analysis
is performed by first listing all the costs associated with the project. Costs consist of both direct costs
and indirect costs. Direct costs are those incurred in buying equipment, employing people, cost of
consumable items, rent for accommodation, etc. Indirect costs include those involving time spent by user
in discussing problems with system analysts, gathering data about problem, etc. Details of direct costs
are:

1. Cost of computer, peripherals and software. It could be either a capital cost for buying a
computer or the cost of renting one.
2. Cost of space such as rent, furniture, etc. In a place like Bombay the cost of space occupied by a
system analyst (5 sq. metres) in prime location could be Rs. 5000 per month!
3. Cost of systems analysts and programmers (salary during the period of assignment).
4. Cost of materials such as stationery, floppy disks, toner, ribbon, etc.
5. Cost of designing and printing new forms, user manuals, documentation, etc.
6. Cost of secretarial services, travel, telephone, etc. An estimate is sometimes made of indirect cost
if it is very high and added to the direct cost.
7. Cost of training analysts and users.
Benefits can be broadly classified as tangible benefits and intangible benefits. Tangible benefits are
directly measurable. These are:
1. Direct savings made by reducing (a) inventories, (b) delays in collecting outstanding payments, (c) wastage, and (d) cost of production, as well as by increasing production and its speed.
2. Savings due to reduction in human resources or increasing volume of work with the same human
resources.
Intangible benefits are:
1. Better service to customers
2. Superior quality of products
3. Accurate, reliable and up to date strategic, tactical and operational information which ensures
better management and thereby more profits.
The sum of all costs (direct and indirect) is compared with the sum of all savings (tangible and intangible). It is not always easy to assign a money value to intangible benefits; this is usually arrived at by discussion amongst the users of the information system.
If the project is a high cost one, extending over a period of time, then it is necessary to estimate costs
during various phases of development of the system so that they can be budgeted by the management.
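A minimal worked sketch of such a comparison, with hypothetical figures, computing the total cost, the annual tangible savings and a simple payback period:

```python
# Hypothetical cost-benefit comparison for a proposed information system (figures in Rs.).
direct_costs = {
    "computer, peripherals and software": 2_500_000,
    "space, rent and furniture": 300_000,
    "analysts and programmers": 1_800_000,
    "materials, forms and training": 400_000,
}
annual_tangible_savings = {
    "inventory reduction": 900_000,
    "faster collection of outstanding payments": 600_000,
    "staff time saved": 700_000,
}

total_cost = sum(direct_costs.values())
total_savings = sum(annual_tangible_savings.values())

print(f"Total one-time cost : Rs. {total_cost:,}")
print(f"Annual savings      : Rs. {total_savings:,}")
print(f"Simple payback      : {total_cost / total_savings:.1f} years")
```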

Role of Auditing in control process:


IT project management has increasingly become a topic of global concern, primarily due to the high
failure rate of IT development projects. It's not unusual for these projects to significantly overshoot their
initial budgets, fail to meet deadlines, or undergo scope reduction. The impact of project failure is
multifaceted, causing not just financial loss but also diminishing investor confidence and negatively
affecting market perception.

Project failures often lead to substantial financial and reputational damage for organizations. For
instance, poorly executed projects can erode shareholder value almost instantly. While no recent
statistics are included in your provided text, multiple studies still confirm high failure rates of IT projects
worldwide. Therefore, addressing the reasons for project failures and strategizing effective management
controls is essential.

There is a consensus that one major factor contributing to project failures is the tendency for
management to ignore early warning signs. To counter this issue, several solutions have been proposed,
many of which focus on improving oversight and control mechanisms:

Early Warning System: Establishing a system to detect and alert about potential issues at the
earliest stages can help prevent bigger problems down the line.
Exit Champion: Recognizing the role of individuals who advocate for project termination when
necessary can save resources and shift focus to more viable projects.
Decision Quality: Focusing on the quality of decisions, rather than solely on the outcome, promotes
a more strategic and sustainable approach to project management.
Independent Reviews: Regular and independent reviews of every major project can provide
objective insights and early detection of issues.
Fail-safe Options: Providing for fail-safe options ensures there's a contingency plan in case the
project does not proceed as expected.

The Chief Information Officer (CIO) at Mercy Health Partners in Ohio, Jim Albin, shared his success story of managing a project portfolio using the "buy, sell or hold" approach, a merit evaluation process in which each project is evaluated for its merit in meeting goals and objectives (interim merit reviews). This approach promotes continuous evaluation and adjustment as needed, and the role of effective monitoring in the success of IT development projects is widely acknowledged.

Extreme project management is another successful technique. In this model, project managers focus on
dealing with external stakeholders and managing the project, while technical teams handle technology
discussions and solution development. Auditors also play a crucial role in project success. They provide
an independent assessment of the project’s adherence to established plans and controls, and their
recommendations can greatly improve project outcomes.

Thus, auditing is a key component of the control process in IT development projects, contributing to their
overall success by ensuring compliance, promoting transparency, and validating performance against
objectives.

The Auditor's Role


The auditor's role in auditing IT development projects involves evaluating and validating controls within
project management processes and business/systems processes. The auditor's objectives include
safeguarding capital investments and recommending internal controls. Here are some control validations
that auditors may perform:

Project Management Process Validations:

Assessing the effectiveness of project planning and governance structures.


Reviewing project documentation, including project charters, plans, and schedules.
Evaluating the adequacy of project risk management and mitigation strategies.
Verifying compliance with project management methodologies and best practices.
Assessing project performance monitoring and reporting mechanisms.

Business Process Validations:

Evaluating the design and implementation of internal controls within business processes.
Verifying the accuracy and completeness of data inputs and outputs.
Assessing the segregation of duties and authorization controls.
Reviewing controls over financial transactions, inventory management, and procurement.
Assessing compliance with regulatory requirements and industry standards.

Systems Process Validations:

Assessing the adequacy of controls within system development life cycle processes.
Evaluating the design and implementation of logical and physical security controls.
Reviewing change management and configuration control processes.
Verifying the effectiveness of system testing and quality assurance procedures.
Assessing the adequacy of system documentation and user training.

By conducting these control validations, auditors provide assurance on the effectiveness of controls and
identify areas for improvement. They play a crucial role in risk mitigation, ensuring project success, and
promoting the integrity and reliability of IT development projects.

Project Management Validation Checklist (Project Management Process Controls)

Governance: Validate that the company investment committee's requirements and the enterprise program management office's recommended structure are in place:
• Steering committee
• Business sponsor
• Program management office (PMO)
• Project manager(s)

Financials: Validate that the project is in compliance with company policies and procedures:
• Project approval process
• Accounting principles (GAAP for capitalization)
• Monitoring activities over budget vs. actual spend
• Phase-by-phase funding stage gates

Methodology: Validate that a proven methodology was adopted:
• Approved project management methodology
• Compliance with the project management methodology

Monitoring: Validate that an effective monitoring process is in place:
• Identify the timeline of the overall project and subprojects
• Identify the roadmap for the project and its critical path
• Follow a formal change control and approval process
• Monitor cost incurred against completion of deliverables

Quality Assurance: Validate the effectiveness of project quality management and quality control:
• Ensure that a project quality management function is in place
• Facilitate continuous process improvement
• Monitor through quality control checks
• Ensure that the project has adopted company quality standards
• Establish exit and entry criteria for project phases

Risk Management: Validate the effectiveness of the business and project risk mitigation process:
• Ensure that a project risk assessment and management process exists
• Identify delivery, technology and business risks
• Identify and implement mitigating controls
• Identify and implement contingency plans

Delivery and Transition: Validate that project activities are in place to ensure the business is ready:
• Ensure organizational alignment and training (business readiness)
• Ensure adequate communication to users and decision makers
• Define service levels
• Define the project delivery process
• Define the transition and support process

Fig 9-14 Project Management Validation Checklist

Business and System Validation Checklist
Process Validation Steps
Business and System Process Controls
System and Validate that an effective design process is in place
Business  Adequate preventive, detective and corrective controls are
Process considered.
Designs  Checklists are designed and standard templates utilized.
 Objectives of the business functionality are being met.
 Policy and procedure changes are noted and revised.
 New reports and system alerts are identified and defined.
 Adequate subject matter expert (SME) participation is secured.
Application Configuration Validate that adequate configuration items are considered, including:
 Application security and access controls
 Automated process workflows setup (segregation of duties)
 System alerts for transaction exceptions
 Adequate controls within system functions
 Test approach and adequate testing
 Performance and stress testing
System Architecture and Security Validate that the system architecture meets standard requirements:
 Validation checklists and standard templates are utilized.
 Objectives of the enterprise target infrastructure are met.
 Enterprise security policy and standards are met.
 Security assessment, fixes and improvement
 Capacity, performances and scalability
 High availability, failover/recovery and disaster recovery
Deployment Validate adequate and approved deployment activities:
 Launch approach and customer impact assessment
 Data conversion and validation process
 Failover/recovery during the migration process
Support Validate preparedness for support activities:
 Expert team's knowledge transfer to the support team
 Project documentation repository for future reference
 Appropriate end-user training
Fig 9-15 Business and System Validation Checklist

In addition, in the context of high-risk projects, it is vital for the auditor to pay special attention to the
monitoring controls as discussed in the previous sections.


Chapter 9
Disaster Recovery and Business Continuity Planning

9.1 Disaster Recovery Planning
Disaster recovery is the process, policies and procedures related to preparing for recovery or continuation
of technology infrastructure critical to an organization after a natural or human- induced disaster. Disaster
Recovery Planning (DRP) is a crucial aspect of IT governance and risk management. It involves preparing
and implementing strategies to minimize the impact of potential disasters or disruptions to business
operations.
Disasters can be classified into two broad categories: 1) Natural disasters: preventing a natural disaster
is very difficult, but it is possible to take precautions to avoid losses. These disasters include floods, fires,
earthquakes, hurricanes, etc. 2) Man-made disasters: these are a major cause of failure. Human error and
intervention, whether intentional or unintentional, can cause massive failures such as loss of
communication and utilities. These disasters include accidents, walkouts, sabotage, burglary, viruses,
intrusion, etc.
The stages of disaster recovery planning are:
1. Risk Assessment and Business Impact Analysis (BIA): This involves identifying the IT services
and systems that are critical for business operations. This step also includes an assessment of potential
threats and the impact they could have on business continuity.
2. Recovery Strategy Planning: This involves developing strategies to recover and restore IT
operations in the event of a disaster. This might include data recovery plans, redundant systems, off-
site backups, cloud solutions, etc.
3. Implementation and Testing: The next step is to implement the disaster recovery plan and regularly
test it to ensure it works as expected. This can be achieved through drills, simulations, and other
testing methods.
4. Training and Awareness: It's important to make sure that everyone in the organization knows what
to do in the event of a disaster. This includes both IT personnel and other employees.
5. Plan Maintenance: A disaster recovery plan is not a static document. It needs to be reviewed and
updated regularly to account for changes in the business environment, technology advancements, and
new potential threats.
Remember, while the goal of DRP is to minimize downtime and data loss, it's also important to factor in
costs and practicality. Not every system needs to be recovered immediately. Prioritizing the recovery of
systems based on their importance to business operations (Recovery Time Objective - RTO and Recovery
Point Objective - RPO) is an integral part of the planning process.
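To make this prioritization concrete, the short Python sketch below ranks systems by RTO and then RPO;
the system names and hour figures are hypothetical, chosen purely for illustration.

    # Hypothetical systems with their recovery objectives (hours).
    systems = [
        {"name": "Core banking", "rto_hours": 2,  "rpo_hours": 0.25},
        {"name": "Email",        "rto_hours": 24, "rpo_hours": 12},
        {"name": "HR portal",    "rto_hours": 72, "rpo_hours": 24},
    ]

    # Lower RTO/RPO means the system is more time-critical and is restored first.
    recovery_order = sorted(systems, key=lambda s: (s["rto_hours"], s["rpo_hours"]))

    for rank, s in enumerate(recovery_order, start=1):
        print(f"{rank}. {s['name']}: restore within {s['rto_hours']}h, "
              f"tolerate at most {s['rpo_hours']}h of data loss")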
General process to follow while creating BCP (Business Continuity Planning) / DRP (Disaster
Recovery Planning)
1. Identify Scope and Boundaries: Determine what parts of your business will be covered by the
BCP/DRP. This can include specific departments, functions, locations, or IT systems. It's crucial to
understand what resources are available, which are most important to maintaining operations, and
how they are interdependent.
2. Risk Assessment: Conduct an analysis to identify the possible threats to your organization, the
likelihood of their occurrence, and their potential impact. This process includes a comprehensive
review of various potential threats such as natural disasters, cyber-attacks, system failures, and human
errors. It also involves identifying your organization's vulnerabilities that these threats could exploit,
and assessing the potential effects on business operations.
3. Conduct a Business Impact Analysis (BIA): This is a systematic process to determine and evaluate
the potential effects of an interruption to critical business operations as a result of a disaster, accident,
or emergency. This should include the identification of essential business functions, dependencies,
and the acceptable downtime for each.
4. Develop Recovery Strategies: After the BIA, recovery strategies should be developed that detail how
to restore the functions that were identified as critical. This might include processes like setting up an
alternate location for operations, data recovery methods, securing replacement equipment, and
establishing reciprocal agreements with third parties.
5. Obtain Organizational and Financial Commitment: This is about pitching the BCP/DRP to upper
management for approval and allocation of necessary resources. It's important to effectively
communicate the potential impacts of a disaster on the organization and the benefits of having a
comprehensive plan in place.
6. Departmental Involvement and Roles: Every department that's included in the BCP/DRP should
understand its role and responsibilities in the event of a disaster. This can range from IT systems
recovery, to human resources responsibilities such as personnel safety assurance and communication.
7. Plan Development: At this stage, the findings from the risk assessment, the BIA, and the recovery
strategies are combined to develop a comprehensive BCP. This should include detailed response and
recovery actions, escalation and communication procedures, and a clear definition of roles and
responsibilities.
8. Training and Awareness: Regularly train staff so everyone in the organization understands the
BCP/DRP, their role in it, and what they need to do during a disaster. This can include practice drills,
distribution of informational materials, or ongoing training sessions.
9. Implementation: Once the plan has been approved by all relevant stakeholders, it should be
implemented throughout the organization. This involves distributing the plan, integrating it into
regular operations, and potentially making adjustments to certain business processes based on the
plan.
10. Testing and Maintenance: Regularly test and update the BCP/DRP to ensure its effectiveness and
to accommodate any changes within the business or the external environment. Testing can take the
form of table-top exercises, structured walk-throughs, or full-scale simulations. Based on the results,
necessary adjustments should be made.
11. Use of Tools: Utilize BCP/DRP tools, such as those provided by the National Institute of Standards
and Technologies (NIST), to aid in the creation, implementation, and maintenance of the plan. These
can provide useful templates, checklists, and guidelines to follow, helping to ensure a comprehensive
and effective plan.
Remember, a BCP/DRP is not a one-and-done project. It's an ongoing process that should evolve with
your business, adjusting to changes in your operations, environment, and the emerging threat landscape.
With the increasing importance of information technology for the continuation of business critical
functions, combined with a transition to an around-the-clock economy, the importance of protecting an
organization's data and IT infrastructure in the event of a disruptive situation has become an increasing
and more visible business priority in recent years.

It is estimated that most large companies spend between 2% and 4% of their IT budget on disaster
recovery planning, with the aim of avoiding larger losses in the event that the business cannot continue
to function due to loss of IT infrastructure and data. In a study of companies that experienced a major
data loss without having a solid BCP/DRP, 43% never reopened, 51% closed within two years, and only 6%
survived long-term (Cummings, Haag & McCubbrey, 2005). In other words, companies that suffer a major
data loss without such a plan face a mortality rate of roughly 94%.

9.2 Data Backup and Recovery


RAID
RAID, standing for Redundant Array of Independent Disks, is an advanced data storage technology that
enhances both data reliability and system performance. Originally defined as a Redundant Array of
Inexpensive Disks by David A. Patterson, Garth A. Gibson, and Randy Katz at the University of
California, Berkeley, in 1987, RAID has evolved significantly and is now a standard technology employed
in data storage systems.
RAID technology can be viewed as an umbrella term for a suite of data storage schemes that distribute
and replicate data among multiple hard disk drives. Various RAID architectures, identified as RAID 0,
RAID 1, RAID 5, etc., focus on two primary objectives: enhancing data reliability (to safeguard against
data loss) and increasing input/output (I/O) performance (to boost system speed).
A crucial aspect of RAID technology is the creation of a RAID array. When multiple physical disks are
configured to use RAID technology, they are collectively referred to as a RAID array. A RAID array
distributes data across various disks, yet it is presented to the user and the operating system as a single
logical storage unit. This illusion is achieved using either special hardware or software, which merges two
or more physical hard disks into a single logical unit.
Three key techniques are utilized in RAID:
 Mirroring: This technique involves writing identical data to more than one disk, thereby creating a
'mirror' image of your data. Mirroring substantially enhances the availability and durability of data
since even if one disk fails, the data can be accessed from the 'mirror' disk, preventing any loss of
data.
 Striping: This technique involves distributing data across multiple disks. With striping, a RAID
array can boost the I/O performance significantly. That’s because multiple disks can read/write data
concurrently, enabling a faster data access speed compared to a single disk.
 Error Correction: Also known as parity checking, this technique involves storing additional parity
data that helps detect and potentially correct errors, thus enhancing the system's fault tolerance. By
comparing the data read from the disk with the stored parity data, the system can identify errors and,
in some configurations, correct these errors.
Different RAID levels use one, two, or all three of these techniques, depending on the system requirements
and the trade-off between redundancy and performance. For example, RAID-5, a common RAID level,
employs all three techniques: it uses striping to distribute data across all disks in the array, parity checking
for error detection, and also includes the concept of distributed parity for data recovery.
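To illustrate how parity supports recovery, the following Python sketch models a parity block as the
bitwise XOR of the data blocks and rebuilds a lost block from the survivors; it is a simplified teaching
example, not an actual RAID implementation.

    from functools import reduce

    def xor_blocks(blocks):
        """Bitwise XOR of equal-length byte blocks (simplified parity calculation)."""
        return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*blocks))

    # Three data blocks, imagined as sitting on three separate disks.
    d1, d2, d3 = b"\x0A\x0B\x0C", b"\x11\x22\x33", b"\xF0\x0F\xAA"
    parity = xor_blocks([d1, d2, d3])        # written to the parity location

    # Suppose the disk holding d2 fails: rebuild it from the survivors plus parity.
    rebuilt = xor_blocks([d1, d3, parity])
    assert rebuilt == d2                     # the lost block is recovered exactly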

Lastly, it is important to note that the choice of RAID level and the resulting configuration of techniques
(mirroring, striping, error correction) profoundly affect the system's balance between reliability,
availability, and performance. This trade-off is crucial to consider when designing and implementing a
RAID system. Therefore, understanding the specific needs of your system is essential when choosing the
most suitable RAID configuration.
In conclusion, RAID is a versatile and indispensable technology in the field of data storage. It offers a
spectrum of configurations, each with its unique balance of data reliability, system performance, and cost-
effectiveness, making it adaptable to a wide range of system requirements.
RAID 0:
RAID 0, often referred to as a striped set without parity or mirroring, is a basic form of RAID that provides
enhanced performance and additional storage. It achieves this by distributing (or 'striping') data across all
disks in the array, effectively increasing the read/write speed.
Notably, RAID 0 doesn't provide any redundancy or fault tolerance, which means it lacks the 'R'
(Redundancy) in RAID (Redundant Array of Independent Disks). Any disk failure will result in the loss
of all data in the array because data is fragmented and these fragments are written simultaneously to their
respective disks in the same sector.
With each piece of data broken into fragments and written across the array, RAID 0 can read smaller
sections of the entire chunk of data in parallel, which significantly enhances data access speed and overall
bandwidth. The number of fragments corresponds to the number of disks in the array, so adding more
disks to a RAID 0 array will increase the bandwidth proportionately.
However, it's crucial to understand that RAID 0 doesn't implement any form of error checking. Therefore,
any data error or disk failure is irrecoverable. Moreover, because all disks in the array are utilized, the risk
of data loss is increased. A failure in any of the disks destroys the entire array, and this risk is amplified
with each additional disk.
In conclusion, RAID 0 is an excellent option when performance and storage capacity are paramount, and
data redundancy isn't a concern. However, the lack of fault tolerance means it's not suitable for systems
where data loss cannot be tolerated. Always consider the specific needs and risk tolerance of your system
when choosing a RAID configuration.
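The striping idea can be pictured with a tiny sketch that maps each logical block number to a disk and
stripe position; the four-disk array and eight blocks are hypothetical values chosen only for illustration.

    def raid0_location(block_number, disk_count):
        """Return (disk index, stripe index) for a logical block in a RAID 0 array."""
        return block_number % disk_count, block_number // disk_count

    disks = 4
    for block in range(8):
        disk, stripe = raid0_location(block, disks)
        print(f"logical block {block} -> disk {disk}, stripe {stripe}")
    # Consecutive blocks land on different disks, so large reads and writes
    # proceed in parallel, but losing any one disk destroys the whole array.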
RAID 1:
RAID 1, also known as mirroring, is a RAID configuration that primarily aims to increase data redundancy
and fault tolerance, making it an excellent choice for critical systems where data integrity and continuous
operation are essential.
 Mirroring: The cornerstone of RAID 1 is mirroring, a technique where data is duplicated on two or
more disks. Every write operation is mirrored, resulting in identical data across all drives in the array.
 Data Redundancy and Fault Tolerance: The mirroring process inherently provides data
redundancy, offering protection against disk failures. If a disk fails, the system can continue to
function using the remaining disks without any interruption. In fact, RAID 1 can tolerate the failure
of all but one disk in the array, making it highly fault tolerant.

 Read Performance: RAID 1 can improve read performance, especially in multi-threaded operating
systems that support split seeks. This capability allows the system to read data from multiple disks
simultaneously, significantly boosting the overall read speed.
 Write Performance: While RAID 1 enhances read performance, its write performance may be
slightly lower than a single disk configuration due to the overhead of duplicating every write
operation across each disk in the array.
 Disk Space Utilization: RAID 1 requires a larger disk space for redundancy. Given that each disk
in the array stores an identical copy of the data, RAID 1 utilizes only half of the total storage capacity
for usable data. For instance, in a RAID 1 array of two 1TB disks, only 1TB will be usable, with the
other 1TB reserved for redundancy.
 Duplexing: Sometimes, RAID 1 is implemented with a separate controller for each disk in a
technique known as duplexing. This approach provides additional fault tolerance by eliminating a
single point of failure in the controller.
 Ease of Recovery: RAID 1 arrays simplify data recovery in the event of disk failure. The failed disk
can be replaced, and the data is automatically rebuilt from the remaining mirror disk(s).
RAID 1 is ideally suited for applications and environments where data redundancy and system availability
take precedence over storage efficiency. It is commonly used in critical systems like database servers,
where data loss or downtime can have significant impacts. Nonetheless, RAID 1 might not be the most
storage-efficient option when compared to other RAID levels that offer both redundancy and striping.
RAID 2:
RAID 2 is a unique and seldom-used level of RAID technology that revolves around the principle of bit-
level striping with Hamming code error correction. Unlike other RAID levels where striping occurs in
blocks, RAID 2 stripes data at the level of single bytes or words across all the disks in the array. This
arrangement implies that all disks must be synchronized and spin in unison, facilitating high data transfer
rates due to simultaneous read/write operations across all disks.
The defining feature of RAID 2 is its reliance on Hamming code parity, a robust error detection and
correction mechanism. Hamming code computes a set of parity bits for each group of data bits.
Interestingly, the placement of these parity bits follows a specific pattern, with each positioned at indices
that are powers of 2. This configuration enables not only the detection of single-bit errors but also the
identification of their precise location.
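As a simplified illustration of this idea, the sketch below encodes four data bits into a 7-bit Hamming(7,4)
code word, with parity bits at positions 1, 2 and 4 (the powers of two), and shows how the parity checks
pinpoint a single flipped bit. This is a generic textbook example, not the exact layout used by any
particular RAID 2 controller.

    def hamming74_encode(d1, d2, d3, d4):
        """Encode 4 data bits; parity bits occupy positions 1, 2 and 4 of the code word."""
        p1 = d1 ^ d2 ^ d4            # covers positions 1, 3, 5, 7
        p2 = d1 ^ d3 ^ d4            # covers positions 2, 3, 6, 7
        p4 = d2 ^ d3 ^ d4            # covers positions 4, 5, 6, 7
        return [p1, p2, d1, p4, d2, d3, d4]   # code word, positions 1..7

    def hamming74_syndrome(code):
        """Return the 1-based position of a single-bit error, or 0 if none is detected."""
        c = [0] + code               # pad so list index matches bit position
        s1 = c[1] ^ c[3] ^ c[5] ^ c[7]
        s2 = c[2] ^ c[3] ^ c[6] ^ c[7]
        s4 = c[4] ^ c[5] ^ c[6] ^ c[7]
        return s1 + 2 * s2 + 4 * s4

    word = hamming74_encode(1, 0, 1, 1)
    word[5] ^= 1                      # simulate a flipped bit at position 6
    print(hamming74_syndrome(word))   # prints 6: the exact position of the error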
In the RAID 2 architecture, the Hamming code parity is calculated across corresponding bits on the data
disks and then stored on multiple dedicated parity disks. This design allows RAID 2 to provide a high
level of data integrity. Even in the case of a single disk failure, the system can continue to read data
correctly, detect any bit errors, and automatically correct them. Consequently, RAID 2 can offer significant
fault tolerance.
Despite these features, RAID 2 has several downsides that limit its practical application. The requirement
for all disks to be synchronized introduces complexities and costs in the hardware setup. Additionally, the
need for a separate disk for each bit of the Hamming code, coupled with the inherently small stripe size,
results in RAID 2 being less storage-efficient compared to other RAID levels.

Moreover, with modern disk technologies incorporating built-in error correction mechanisms, the primary
advantage of RAID 2, i.e., Hamming code parity, is often redundant. As a result, RAID 2 has largely been
phased out in favor of more efficient and cost-effective RAID levels.
In essence, while RAID 2's Hamming code parity provides robust error detection and correction
capabilities, its complex and expensive hardware requirements, coupled with the advent of disk
technologies with built-in error correction, have rendered it nearly obsolete in today's data storage
landscape.
RAID 3:
RAID 3, sometimes referred to as bit-interleaved parity or byte-level parity, is a unique RAID
configuration that employs striping and parity techniques to enhance data access speed and provide fault
tolerance. Here's a more comprehensive explanation of RAID 3.
RAID 3 utilizes striping at a byte or bit level, which means data is divided into byte-sized (or bit-sized)
blocks and then distributed across all disks in the array. This contrasts with other RAID levels, such as
RAID 0 or RAID 5, which typically use larger block sizes for striping. This byte-level striping allows
RAID 3 to handle large, sequential data files very efficiently, making it an excellent choice for applications
that require high data throughput, such as video production and editing, image editing, or any other
application that needs to process large files and perform linear reads and writes.
A crucial characteristic of RAID 3 is its use of a dedicated parity disk. Parity information, calculated across
corresponding bits or bytes on the data disks, is written to this dedicated disk. This mechanism is similar
to RAID 5, but with the difference being that RAID 3's parity information is stored on a single, dedicated
drive, whereas RAID 5 distributes the parity blocks across all drives.
For RAID 3 to function efficiently, all drives must have synchronized rotation. This synchronization
ensures that each drive is ready to read or write data at precisely the right time, aligning with the array's
high linear read/write performance.
In terms of fault tolerance, RAID 3 provides a level of data protection comparable to RAID 5. If one drive
fails, the data on that drive can be reconstructed from the remaining data drives and the dedicated parity
drive. Since parity information is calculated across corresponding bits or bytes on the data disks, the
system can detect which bit was on the failed drive for each bit written to the remaining drives. In this
way, RAID 3 can sustain a single disk failure without any loss of data or degradation in performance.
However, it's essential to note that while RAID 3 can provide high data transfer rates and fault tolerance,
it has certain drawbacks. The need for drive synchronization can make the hardware setup more complex,
and the use of a dedicated parity drive can create a bottleneck during write operations, as the parity drive
must be accessed for each write operation.
In conclusion, RAID 3 is well-suited for systems that need to process large, sequential files with high data
throughput. Its byte-level striping, combined with fault tolerance via dedicated parity, provides a balance
of performance and data protection. However, its requirement for drive synchronization and potential
bottleneck issues should be considered when deciding if RAID 3 is the right choice for a specific
application.

RAID 4:
RAID 4 is a RAID configuration that uses block-level striping and dedicated parity. Here are some key
points about RAID 4.
Block-Level Striping: RAID 4 divides data into blocks and distributes them across multiple disks in the
array. Each disk stores a specific portion of the data blocks, resulting in improved read and write
performance, especially for large sequential operations.
Dedicated Parity: RAID 4 uses a dedicated parity disk to store parity information. Parity is calculated at
the block level, meaning that for each set of data blocks, a corresponding parity block is generated and
stored on the dedicated parity disk.
Independent Disk Operation: Each disk in the RAID 4 array operates independently, allowing for
parallel I/O requests and improved performance. This means that different parts of a file can be accessed
simultaneously from different disks, enhancing overall throughput.
Data Transfer Speeds: While RAID 4 offers parallel I/O and improved performance for certain
operations, the data transfer speeds can suffer due to the type of parity used. This is because every write
operation requires updating the parity block, which can create a bottleneck.
Error Detection: RAID 4 uses dedicated parity for error detection. The parity information is calculated
from the corresponding data blocks and stored on the dedicated parity disk. If a disk fails, the data can be
reconstructed using the remaining data blocks and the parity information.
Storage Efficiency: RAID 4 has good storage efficiency as it uses dedicated parity, which requires only
one disk for storing parity information. The remaining disks store data blocks.
RAID 5:
RAID 5 is a popular RAID level that offers a good balance between performance, storage capacity, and
data protection. It uses a method known as block-level striping with distributed parity. Let's delve into
RAID 5 in greater detail.
Block-Level Striping: Like RAID 4, RAID 5 also stripes data at a block level. This means that data is
divided into blocks and then distributed across all the disks in the array. The block-level striping allows
RAID 5 to perform multiple simultaneous read or write operations across the disks in the array, enhancing
overall performance.
Distributed Parity: The defining feature of RAID 5 is the use of distributed parity. Unlike RAID 4, which
stores parity information on a dedicated disk, RAID 5 spreads the parity data across all disks in the array.
This design helps overcome the bottleneck issue that can occur with RAID 4 during write operations.
Fault Tolerance and Data Recovery: RAID 5 can sustain the failure of one disk without any data loss.
If a disk fails, the parity data on the remaining disks can be used to reconstruct the lost data. This means
that the system can continue to operate, even if a disk fails, effectively masking the drive failure from the
end user.
However, RAID 5 is vulnerable to a second disk failure during the rebuild process. If another disk fails
before the data from the first failed disk is completely rebuilt onto a replacement disk, all data in the array
will be lost. This vulnerability is a critical factor to consider, particularly for arrays with large disks where
rebuild times can be lengthy.
Performance: RAID 5 provides improved read performance due to its block-level striping and ability to
perform simultaneous read operations across multiple disks. Write operations, while generally faster than
in RAID 4 due to the distributed parity, are slower than read operations as they require parity calculation.
In conclusion, RAID 5 is a commonly used RAID level that offers a compelling balance of performance,
data protection, and storage efficiency. It is particularly well-suited for systems where moderate to high
performance is required, and some level of data protection is essential. However, its vulnerability to a
second disk failure during a rebuild should be a key consideration in the risk assessment of using RAID
5.
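A small sketch helps visualise how RAID 5 rotates the parity block from stripe to stripe, which is what
removes the dedicated-parity bottleneck of RAID 4; the four-disk array and the rotation pattern shown are
illustrative assumptions, as real controllers may rotate parity differently.

    def raid5_parity_disk(stripe_index, disk_count):
        """Disk holding the parity block for a given stripe (rotates each stripe)."""
        return (disk_count - 1 - stripe_index) % disk_count

    disks = 4
    for stripe in range(6):
        layout = ["D"] * disks                          # D = data block
        layout[raid5_parity_disk(stripe, disks)] = "P"  # P = parity block
        print(f"stripe {stripe}: {' '.join(layout)}")
    # The 'P' column moves to a different disk on every stripe, so no single
    # disk absorbs every parity update during writes.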
RAID 6:
RAID 6, known for its robust fault tolerance, employs a technique called dual distributed parity, which is
a significant evolution from the principles of RAID 5. Like RAID 5, RAID 6 utilizes block-level striping,
where data is segmented into blocks and spread across all drives in the array. This configuration enables
RAID 6 to carry out read and write operations concurrently across multiple drives, contributing to
enhanced performance.
The key distinguishing feature of RAID 6, however, is the use of two separate parity blocks for each data
block. Unlike RAID 5, which calculates and stores a single parity block, RAID 6 generates two parity
blocks and scatters these across all drives in the array. This dual parity mechanism significantly amplifies
RAID 6's fault tolerance capabilities.
RAID 6 can withstand the failure of up to two drives without any data loss, which means the system can
remain operational even if one or two drives fail. The parity data stored on the remaining drives can be
utilized to rebuild the lost data, ensuring seamless functioning. However, RAID 6 shares a vulnerability
with RAID 5 during the rebuild process. If a third disk were to fail before the first two failed disks are
fully rebuilt onto replacement disks, the array would suffer a complete data loss.
In terms of performance, RAID 6's read operations benefit from block-level striping and the capacity to
perform simultaneous read operations on multiple drives. However, write operations in RAID 6 may be
slower than in RAID 5 due to the computational overhead of calculating and writing two separate parity
blocks.
The strength of RAID 6 is particularly highlighted in large RAID groups and high-availability systems.
As the capacities of hard drives continue to increase, so does the time needed for recovery from a drive
failure. RAID 6, with its ability to continue operations even with two failed drives, affords the system
additional recovery time, minimizing the risk of data loss during the rebuild process.
In conclusion, RAID 6 is a significant step forward in data protection, especially in systems where high
availability is a priority or that utilize large RAID groups. Despite its slightly slower write performance
due to dual parity calculation, RAID 6's superior fault tolerance makes it a strong contender when choosing
the right RAID level, depending on the specific needs and risk tolerance of your system.

RAID 4 vs. RAID 5: RAID 4 and RAID 5 both use block-level striping with parity. The main
difference is that RAID 4 stores the parity on a single dedicated disk, while RAID 5 distributes the parity
blocks across all disks in the array.
Use Cases: RAID 4 is less commonly used in modern storage systems due to its potential performance
bottleneck with the dedicated parity disk. It may find some use in scenarios where large sequential read
operations are more prevalent than write operations.
It's important to note that RAID 4 has been largely replaced by more efficient RAID levels, such as RAID
5 and RAID 6, which offer similar fault tolerance with improved performance characteristics.
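The storage trade-off between the common levels can be made concrete with a small capacity calculator;
the formulas assume equal-sized disks, and the four 1 TB disks are a hypothetical example.

    def usable_capacity_tb(raid_level, disk_count, disk_size_tb):
        """Approximate usable capacity in TB, assuming equal-sized disks."""
        if raid_level == 0:
            return disk_count * disk_size_tb           # striping only, no redundancy
        if raid_level == 1:
            return disk_size_tb                        # every disk holds the same data
        if raid_level == 5:
            return (disk_count - 1) * disk_size_tb     # one disk's worth of parity
        if raid_level == 6:
            return (disk_count - 2) * disk_size_tb     # two disks' worth of parity
        raise ValueError("RAID level not covered by this sketch")

    for level in (0, 1, 5, 6):
        print(f"RAID {level} with 4 x 1 TB disks -> {usable_capacity_tb(level, 4, 1)} TB usable")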
Mirroring:
Mirroring, also known as RAID 1, is a data storage technique that involves the real-time replication of
logical disk volumes onto separate physical hard disks to ensure data availability and continuity. In
essence, a mirrored volume is a complete logical representation of separate volume copies, providing
redundancy and fault tolerance against hardware failure.
When applied in the context of disaster recovery, mirroring can extend over long distances and is often
referred to as storage replication. The replication process can occur in several ways, depending on the
specific technologies employed. It can be performed synchronously, asynchronously, semi-synchronously,
or at specific points in time. Enabling this feature typically involves microcode on the disk array controller
or specific server software. However, it's important to note that replication solutions are often proprietary
and may not be compatible across different storage vendors.
Mirroring is typically synchronous, which means data is written to the primary and mirrored disk
simultaneously. This synchronous writing approach often achieves a Recovery Point Objective (RPO) of
zero data loss. Asynchronous replication, on the other hand, provides an RPO of a few seconds, while
other methodologies can offer an RPO of a few minutes to several hours.
One key advantage of disk mirroring, besides redundancy, is its potential to improve read performance.
Each mirrored disk can be accessed separately for reading purposes. This can be particularly beneficial in
situations where multiple tasks are competing for data on the same disk, as the system can determine which
disk can most quickly seek the required data, reducing latency and improving overall performance.
Moreover, certain implementations allow the mirrored disk to be detached and used for data backup, while
the primary disk remains active. This feature, however, may necessitate a synchronization period if any
write I/O activity has occurred on the mirrored disk after detachment. This synchronization ensures that
the data on the two disks is consistent before they are reunited.
In summary, mirroring or RAID 1 is a critical data storage and disaster recovery technique, providing
redundancy, potential performance improvements, and robust fault tolerance. However, its specific
features and capabilities can vary depending on the technologies and implementations used.


Fig 9-1 RAID Overview


Clustering:
A computer cluster, in essence, is a collection of interconnected computers that work together so
cohesively that they can be viewed as a single computational entity. The individual components, or nodes,
within a cluster are usually linked via high-speed local area networks, although the specific type of
connection can vary. The primary goal of clustering is to enhance performance, availability, or both, by
distributing the workload across multiple machines. This clustered approach enables high computational
power and reliability, often surpassing what a single computer could achieve. Moreover, building a cluster
from several lower-cost computers can be far more cost-effective than investing in a single high-
performance, high-availability machine. In this way, clusters balance performance, availability, and cost
efficiency, making them an integral part of many modern computing environments.
Cluster categorizations
High-availability clusters, also known as failover clusters, are designed primarily to enhance the
availability of the services they provide. They achieve this by incorporating redundant nodes, which can
take over service provision in the event of system component failure. This approach effectively eliminates
single points of failure, thus ensuring uninterrupted service. While the most common high-availability
cluster configuration consists of two nodes, which is the minimum to achieve redundancy, larger setups
are also feasible depending on the system requirements. Numerous commercial implementations of high-
availability clusters are available for a wide variety of operating systems, ensuring adaptability to different
IT environments. For the Linux operating system, one commonly utilized open-source option is the Linux-
HA project. Overall, high-availability clusters play a critical role in maintaining reliable and continuous
service in various computing applications.
Load balancing involves linking multiple computers together to distribute computational workload,
allowing them to function collectively as a single, powerful virtual computer. From the perspective of a
user, while the system appears to be composed of multiple machines, these operate in unison, essentially
acting as a singular virtual entity. User-initiated requests are managed and spread across all individual
computers in the cluster, effectively distributing the computational tasks among these machines. This
distribution results in a balanced workload, preventing any single machine from becoming a bottleneck.
As a result, load balancing significantly enhances the performance of the cluster system, ensuring efficient
utilization of resources and providing improved response times and overall user experience. Therefore,
load balancing is a crucial strategy in managing large-scale computing systems and optimizing their
performance.
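The distribution idea can be sketched with a very small round-robin dispatcher in Python; the node names
are hypothetical, and real load balancers additionally track node health and current load.

    from itertools import cycle

    nodes = ["node-a", "node-b", "node-c"]   # hypothetical cluster members
    rotation = cycle(nodes)                  # simple round-robin order

    def dispatch(request_id):
        """Assign an incoming request to the next node in the rotation."""
        return next(rotation)

    for req in range(7):
        print(f"request {req} -> {dispatch(req)}")
    # Requests are spread evenly across the nodes, so no single machine becomes
    # a bottleneck while the cluster still appears to users as one system.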
Often clusters are used primarily for computational purposes, rather than handling IO-oriented operations
such as web service or databases. For instance, a cluster might support computational simulations of
weather or vehicle crashes. The primary distinction within compute clusters is how tightly-coupled the
individual nodes are. For instance, a single compute job may require frequent communication among
nodes - this implies that the cluster shares a dedicated network, is densely located, and probably has
homogenous nodes. This cluster design is usually referred to as Beowulf Cluster. The other extreme is
where a compute job uses one or few nodes, and needs little or no inter-node communication. This latter
category is sometimes called "Grid" computing. Tightly-coupled compute clusters are designed for work
that might traditionally have been called "supercomputing". Middleware such as MPI (Message Passing
Interface) or PVM (Parallel Virtual Machine) permits compute clustering programs to be portable to a
wide variety of clusters.
Grids are usually computer clusters, but more focused on throughput like a computing utility rather than
running fewer, tightly coupled jobs. Often, grids will incorporate heterogeneous collections of computers,
possibly distributed geographically, sometimes administered by unrelated organizations.
Grid computing is optimized for workloads which consist of many independent jobs or packets of work,
which do not have to share data between the jobs during the computation process. Grids serve to manage
the allocation of jobs to computers which will perform the work independently of the rest of the grid
cluster. Resources such as storage may be shared by all the nodes, but intermediate results of one job do
not affect other jobs in progress on other nodes of the grid.
An example of a very large grid is the Folding@home project. It is analyzing data that is used by
researchers to find cures for diseases such as Alzheimer's and cancer. Another large project is the
SETI@home (Search for Extraterrestrial Intelligence), which may be the largest distributed grid in
existence. It uses approximately three million home computers all over the world to analyze data from the
Arecibo Observatory radio telescope, searching for evidence of extraterrestrial intelligence. In both of
these cases, there is no inter-node communication or shared storage. Individual nodes connect to a main,
central location to retrieve a small processing job. They then perform the computation and return the result
to the central server. In the case of the @home projects, the software is generally run when the computer
is otherwise idle. The University of California, Berkeley has developed an open source application, BOINC
(Berkeley Open Infrastructure for Network Computing) to allow individual users to contribute to the above
and other projects such as LHC@home (Large Hadron Collider) from a single manager which can then be
set to allocate a percentage of idle time to each of the projects a node is signed up for.
The grid configuration enables nodes to process as many jobs as they can in a single session, returning the
results and acquiring a new job from a central project server when done. This arrangement allows for a
high degree of flexibility and scalability, making grid computing an effective solution for tackling large-
scale, distributable computational tasks.

9.3 High Availability Planning of Servers
Hardware Considerations
To ensure high availability for a Server environment, consider the following when planning for hardware:
 Multiple Servers: Plan to run multiple servers in an organization group to accommodate running
multiple instances of organization hosts. This allows for load balancing and fault tolerance of
processes across the server instances.
 Storage Area Network (SAN): Implement a SAN to house the organization server databases.
Configure the SAN disks using RAID 1+0 (a stripe of mirror sets) topology for maximum
performance and high availability.
 Multiple SQL Servers: Install multiple SQL servers to house the organization server databases.
This is required for SQL Server clustering and recommended for separating certain organization
server databases on separate physical SQL Server instances.
 Perimeter Network Domain: Install one or more Windows servers in a perimeter network domain
to provide internet-related services for the organization. Configure multiple Windows servers in the
perimeter network domain using a network load balancing (NLB) solution.
Software Considerations
To ensure high availability for an Organization Server environment, consider the following when planning
for software:
 Enterprise Edition of Organization Server: Consider investing in the Enterprise Edition of
Organization Server to accommodate scenarios that benefit from clustering of organization hosts or
running multiple message box databases. Clustering of organization hosts is recommended for
providing high availability for certain organization adapters.
 Windows Server Cluster: Plan to implement a Windows Server cluster to house the organization
server databases and the Enterprise Single Sign-On master secret server. This provides high
availability for the databases and ensures continuous access to the master secret server.
High Availability vs. Disaster Recovery
High availability and disaster recovery are both approaches to increase the availability of a server
environment, but they have different characteristics and purposes.
High Availability (HA): HA focuses on providing continuous and uninterrupted access to services and
resources. It involves implementing fault-tolerant and load-balancing mechanisms to ensure that if one
component or server fails, another takes over seamlessly. HA solutions typically involve redundant
hardware, clustering, and replication techniques. The primary goal of HA is to minimize downtime and
provide immediate failover in the event of a failure, resulting in high reliability and minimal disruption to
users. HA is suitable for scenarios where real-time access and minimal downtime are critical, such as
online transactions or mission-critical systems.
Disaster Recovery (DR): DR, on the other hand, focuses on recovering and restoring services after a
major disruption or disaster. It involves having plans and processes in place to recover data, systems, and
infrastructure in the event of a catastrophic event like natural disasters, hardware failures, or cybersecurity
breaches. DR typically involves creating backups, off-site replication, and establishing recovery
procedures and infrastructure. The primary goal of DR is to minimize the impact of a disaster, recover
data and systems, and resume operations within an acceptable timeframe. DR is suitable for scenarios
where recovery time is longer, and the focus is on data integrity and business continuity in the face of
major disruptions.
While HA and DR both aim to increase availability, their focus and approach differ. HA emphasizes
continuous access and immediate failover to maintain uninterrupted services, while DR focuses on
recovering from significant disruptions and restoring operations within a specified recovery time objective
(RTO) and recovery point objective (RPO).
Organizations often employ a combination of HA and DR strategies based on their specific needs, risk
tolerance, and budget constraints. Critical systems may require high availability within the infrastructure,
while a comprehensive disaster recovery plan ensures resilience and data protection in worst-case
scenarios.
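A simple way to reason about RPO is sketched below: given the time of the last successful backup and the
time of the disruption, the data-loss window is compared against the agreed objective. The timestamps and
the 4-hour RPO are hypothetical figures used only for illustration.

    from datetime import datetime, timedelta

    rpo = timedelta(hours=4)                     # agreed maximum tolerable data loss
    last_backup = datetime(2023, 7, 1, 22, 0)    # last successful backup
    incident = datetime(2023, 7, 2, 3, 30)       # time of the disruption

    data_loss_window = incident - last_backup
    if data_loss_window <= rpo:
        print(f"Within RPO: at most {data_loss_window} of data would be lost.")
    else:
        print(f"RPO breached: {data_loss_window} exceeds the {rpo} objective; "
              "backups must run more often or replication be introduced.")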

9.4 IT Outsourcing:
The financial services industry has changed rapidly and dramatically. Advances in technology enable
institutions to provide customers with an array of products, services, and delivery channels. One result
of these changes is that financial institutions increasingly rely on external service providers for a variety
of technology-related services. Generally, the term "outsourcing" is used to describe these types of
arrangements.
The ability to contract for technology services typically enables an institution to offer its customers
enhanced services without the various expenses involved in owning the required technology or
maintaining the human capital required to deploy and operate it. In many situations, outsourcing offers
the institution a cost-effective alternative to in-house capabilities.
Outsourcing, however, does not reduce the fundamental risks associated with information technology or
the business lines that use it. Risks such as loss of funds, loss of competitive advantage, damaged
reputation, improper disclosure of information, and regulatory action remain. Because the functions are
performed by an organization outside the financial institution, the risks may be realized in a different
manner than if the functions were inside the financial institution resulting in the need for controls designed
to monitor such risks.
Financial institutions can outsource many areas of operations, including all or part of any service,
process, or system operation. Examples of information technology (IT) operations frequently outsourced
by institutions and addressed in this booklet include: the origination, processing, and settlement of
payments and financial transactions; information processing related to customer account creation and
maintenance; as well as other information and transaction processing activities that support critical banking
functions, such as loan processing, deposit processing, fiduciary and trading activities; security monitoring
and testing; system development and maintenance; network operations; help desk operations; and call centers.
Management may choose to outsource operations for various reasons. These include:
 Gain operational or financial efficiency.
 Increase management focus on core business functions.
 Refocus limited internal resources on core functions.
 Obtain specialized expertise.
 Increase availability of services.

Outsourcing of technology-related services may improve quality, reduce costs, strengthen controls, and
achieve any of the objectives listed previously. Ultimately, the decision to outsource should fit into the
institution's overall strategic plan and corporate objectives.
Before considering the outsourcing of significant functions, an institution's directors and senior
management should ensure such actions are consistent with their strategic plans and should evaluate
proposals against well-developed acceptance criteria. The degree of oversight and review of outsourced
activities will depend on the criticality of the service, process, or system to the institution's operation.
Financial institutions should have a comprehensive outsourcing risk management process to govern their
technology service provider (TSP) relationships. The process should include risk assessment, selection of
service providers, contract review, and monitoring of service providers. Outsourced relationships should
be subject to the same risk management, security, privacy, and other policies that would be expected if the
financial institution were conducting the activities in-house. This booklet primarily focuses on how the
bank regulatory agencies review the risk management process employed by a financial institution when
considering or executing an outsourcing relationship.
To help ensure financial institutions operate in a safe and sound manner, the services performed by TSPs
are subject to regulation and examination. The federal financial regulators have the statutory authority to
supervise all of the activities and records of the financial institution whether performed or maintained by
the institution or by a third party on or off of the premises of the financial institution. Accordingly, the
examination and supervision of a financial institution should not be hindered by a transfer of the
institution's records to another organization or by having another organization carry out all or part of the
financial institution's functions.
IT outsourcing necessitates a meticulous approach to data security and privacy. With the rise of cyber
threats and stringent regulations like GDPR, it's vital for financial institutions to ensure that their service
providers have robust security measures in place. This includes encryption for data in transit and at rest,
routine vulnerability assessments and penetration testing, as well as compliance with privacy laws.
Moreover, disaster recovery and business continuity planning are indispensable aspects of IT outsourcing.
Financial institutions should ensure that their service providers have well-developed strategies to mitigate
disruptions from potential disasters. Additionally, performance measurement is crucial in an outsourcing
relationship. Clear performance metrics and regular reporting requirements should be included in contracts
to monitor the effectiveness and efficiency of outsourced services. Lastly, the institution should have
consistent audits and compliance checks to ensure the service providers' adherence to regulatory
requirements and industry standards.
Board and Management Responsibilities
The board of directors and senior management of an institution bear the ultimate responsibility for
appropriately managing outsourced relationships. Outsourcing, while often driven by the need for
advanced technology, transcends just being a technology issue—it becomes an integral part of corporate
management. Thus, an effective outsourcing oversight program, laid down by the board and senior
management, is necessary to accurately identify, measure, monitor, and control the risks involved in
outsourcing.

This program should establish a coherent structure for managing outsourced relationships across the
enterprise. Its scope should extend from formulating servicing requirements and strategies to selecting a
provider, contract negotiation, ongoing monitoring, and eventually, modifying or terminating the
outsourcing relationship as needed.
Several key considerations should guide the board and management in overseeing outsourcing:
 Alignment with Strategy: Outsourcing relationships should support the institution's overarching
strategic plans and operational requirements.
 Expertise: The institution must have adequate expertise to effectively oversee and manage the
outsourced relationship.
 Provider Evaluation: Potential service providers should be evaluated based on the criticality and
scope of the services being outsourced.
 Risk-Based Monitoring: An institution-wide program to monitor service providers should be
implemented. This program should be tailored based on initial and ongoing risk assessments of the
outsourced services.
 Regulatory Compliance: The institution should notify its primary regulator about its outsourcing
relationships as required.
The effort and resources allocated to manage outsourcing relationships should correspond to the risk that
the relationship presents to the institution. For instance, outsourcing a small credit card portfolio will
necessitate different oversight compared to outsourcing all loan application processes. Additionally, it's
important to note that smaller and less complex institutions may face more challenges compared to larger
ones when negotiating services to meet their specific needs and monitoring their service providers.
In essence, the board and management must adopt a proactive and risk-based approach to manage IT
outsourcing, aligning with the institution's strategic objectives while ensuring compliance with regulatory
requirements.
Risk Assessment and Requirements
IT outsourcing is a strategic move that many organizations undertake to enhance their operations, reduce
costs, and focus on core business functions. While it comes with numerous benefits, it also introduces
various risks, particularly operational or transaction risks, which can originate from fraud, errors, or the
inability to deliver services or products, maintain competitiveness, or manage information.
These risks are present in each stage of service or product delivery and can permeate different areas such
as customer service, systems development and support, internal control processes, and capacity and
contingency planning. If not managed properly, operational risks can exacerbate other risk types,
including reputation, strategic, compliance, liquidity, price, and interest rate risks.
Reputation Risk: The institution's reputation can suffer significantly if any errors, delays, or omissions
in IT services become public knowledge or directly affect customers. For instance, if a third-party service
provider fails to maintain adequate business resumption plans or facilities for key processes, the serviced
financial institution's ability to provide essential services to their customers may be impaired.
Strategic Risk: Strategic risks can arise if there is a lack of management experience and expertise, which
can lead to poor understanding and control of key risks. Furthermore, if the service provider supplies
inaccurate information, the institution's management could make detrimental strategic decisions.
Compliance Risk: Compliance risks are associated with outsourced activities that do not comply with
legal or regulatory requirements, which could expose the institution to legal sanctions, civil money
penalties, or litigation. An example of this could be if the service provider produces inaccurate or
untimely consumer compliance disclosures or discloses confidential customer information without
authorization.
Interest Rate, Liquidity, and Price Risk: Market risks, such as interest rate, liquidity, and price risks,
can be increased due to processing errors related to investment income or repayment assumptions. Such
errors could prompt unwise investment or liquidity decisions.
Security Risk: Security risks can arise due to vulnerabilities in data privacy and cybersecurity introduced
by outsourcing. Therefore, financial institutions must ensure that service providers adhere to strict
security standards to prevent data breaches and unauthorized access.
Vendor Dependability Risk: If a service provider fails to deliver services due to financial instability or
operational inefficiencies, the financial institution's operations could be negatively affected.
Exit Strategy Risk: If the outsourcing relationship needs to be terminated, the financial institution must
have a clear exit strategy in place to prevent disruption of services.
Geographic Risk: Cross-border data flows can introduce additional risks, including legal, data
sovereignty, and geopolitical risks, which require careful management.
Business Continuity Risk: Service providers should have robust business continuity and disaster
recovery plans in place. A failure to restore services promptly after a disaster could disrupt the financial
institution's operations.
The board of directors and senior management of the financial institution have the crucial responsibility
of overseeing these risks. They should develop comprehensive policies to govern outsourcing
relationships consistently, providing a framework for management to identify, measure, monitor, and
control the associated risks. The policy should cover all aspects of outsourcing, including service provider
selection, contract negotiation, and ongoing monitoring of the service provider's performance.
Additionally, the board and management should adopt a proactive and risk-based approach to manage IT
outsourcing, ensuring it aligns with the institution's strategic objectives and complies with regulatory
requirements. They must ensure that the institution has the necessary expertise to oversee and manage
the outsourcing relationship and that the institution evaluates potential providers based on the scope and
criticality of outsourced services.
In conclusion, while IT outsourcing can provide significant benefits, it also introduces a range of risks
that financial institutions need to manage effectively. A strong risk management framework, active board
and management oversight, and a comprehensive outsourcing policy are key elements in successfully
managing IT outsourcing relationships.


Chapter 10
Auditing And Information System

Introduction to IT Audit
As per ISACA (Information Systems Audit and Control Association),
IS audit is the formal examination and/or testing of information systems to determine whether:
1. Information systems are in compliance with applicable laws, regulations, contracts and/or
industry guidelines.
2. Information systems and related processes comply with governance criteria and related and
relevant policies and procedures.
3. IS data and information have appropriate levels of confidentiality, integrity and availability.
4. IS operations are being accomplished efficiently and effectiveness targets are being met.
Auditing is an evaluation of a person, organization, system, process, enterprise, project or product,
performed to ascertain the validity and reliability of information; and also to provide an assessment of a
system's internal controls. The goal of an audit is to express an opinion based on the work performed; because of practical constraints, an audit provides only reasonable assurance that the statements are free from material error and typically relies on statistical sampling.
IT auditing takes that one step further and evaluates the controls around the information with respect to
confidentiality, integrity, and availability. While a financial audit will attest to the validity and reliability
of information, the IT audit will attest to the confidentiality of the information, the integrity of the
information and in situations where availability is a key factor will also attest to the availability and the
ability to recover in the event of an incident.
One of the key factors in IT auditing and one that audit management struggles with constantly, is to ensure
that adequate IT audit resources are available to perform the IT audits. Unlike financial audits, IT audits
are very knowledge intensive, for example, if an IT auditor is performing a Web Application audit,
then they need to be trained in web applications; if they are doing an Oracle database audit, they need to
be trained in Oracle; if they are doing a Windows operating system audit, they need to have some
training in Windows and not just XP, they'll need exposure to Vista, Windows 7, Server 2003, Server
2008, IIS, SQL-Server, Exchange, etc. As you can appreciate being an IT auditor requires extensive
technical training in addition to the normal auditor and project management training.
Another factor that audit management faces is the actual management of the IT auditors, for not only
must they track time against audit objectives, but audit management must also allow for time to follow-
up on corrective actions taken by the client in response to previous findings and/or recommendations.
There are many different types of audits:
• Financial audits
• Operational audits
• Integrated audits
• Administrative audits
• IT audits
• Specialized audits
• Forensic audits

The IT auditor will be involved with all of these except the financial audit. And when we
talk about extensive technical training and forensic IT auditing we are speaking about a significant
investment in time and money for an IT auditor to be qualified to do a forensic IT audit.
Since there is a limited amount of time and a limited number of professionally qualified IT auditors, IT auditing is increasingly moving to a risk-based audit approach, which is usually adapted to develop and improve the continuous audit approach.
But before we get into risk, let's take a look (briefly) at IT audit's role within the organization. IT audit's
role is to provide an opinion on the controls which are in place to provide confidentiality, integrity and
availability for the organization's IT infrastructure and data which supports the organization's business
processes. Now in order to do that there has to be some overall planning to determine which business
processes to audit. I mentioned before that IT auditing is moving towards a risk-based audit approach
and the planning process starts with a review of the organization and gaining an understanding of the
business.
Typically, this starts with a review of the Business Impact Analysis (BIA) which the organization has
prepared for all of its business functions, after which the organization will have established ranking
criteria and determined which functions are essential to the business. Those essential functions will then
have been ranked according to which ones are most critical to the organization and the IT auditor
can start at the top of the list.
Now granted there are a lot of other considerations which go into which functions to audit, including
the last time an area was audited, are there legal requirements which require annual audit/compliance
statements, etc., but for the time being starting at the top will assure management that the most
critical business functions are being reviewed by IT audit.
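To make the ranking idea concrete, the following minimal Python sketch scores each business function by impact and likelihood, with a small weighting for time since the last audit, and sorts the audit universe by that score. The function names, scores and weighting are illustrative assumptions, not figures from any actual BIA.

# Minimal sketch: prioritising business functions for audit coverage.
# All function names, scores and the weighting scheme are hypothetical.
functions = [
    {"name": "Payments processing", "impact": 5, "likelihood": 4, "years_since_audit": 3},
    {"name": "Payroll", "impact": 4, "likelihood": 3, "years_since_audit": 1},
    {"name": "Marketing website", "impact": 2, "likelihood": 3, "years_since_audit": 4},
]

def audit_priority(f):
    # Assumed weighting: risk exposure (impact x likelihood) plus a small
    # bump for functions that have not been audited recently.
    return f["impact"] * f["likelihood"] + f["years_since_audit"]

for f in sorted(functions, key=audit_priority, reverse=True):
    print(f["name"], "priority score =", audit_priority(f))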
There are some other reasons to use risk assessment to determine the areas to be audited, including:
• Enables management to effectively allocate limited audit resources
• Ensures that relevant information has been obtained from all levels of management
• Establishes a basis for effectively managing the IT audit department/function
• Provides a summary of how the individual audit subject area is related to the overall organization
as well as to the business plans.
Now for some definitions before we go any further:
• Audit risk - the risk that information may contain a material error that may go undetected during
the course of the audit. It can be viewed as a function of three other risks: inherent risk, control risk,
and detection risk. The level of audit risk is influenced by the auditor's methodology, the quality of
their judgment, and the nature of the firm or the industry. Example: A toy manufacturer is audited
for its annual financials. However, the auditors fail to account for the fact that the toy industry is
subject to rapid changes due to consumer tastes, competitive pressures, and regulations. If the
auditors do not incorporate the volatility of the industry into their audit plan, they could miss
misstatements in the manufacturer's inventory valuation, leading to incorrect financial reports and
ultimately, a high audit risk.

• Inherent risk - the risk that an error exists that could be material or significant when
combined with other errors encountered during the audit, assuming that there are no related
compensating controls. Inherent risk is the susceptibility of an assertion to a misstatement, in
the absence of any related controls. It's the "natural" risk present in a business or industry,
often associated with the complexity of its transactions or operations. Example: A
pharmaceutical company is involved in extensive research and development (R&D) efforts,
which require significant capital investments. The financial projections and potential return on
these investments are highly uncertain due to unpredictable regulatory approvals, market
acceptance, and scientific success. Therefore, the inherent risk is high as financial estimates
and projections could be significantly misstated.
• Control risk - the risk that a material error exists that will not be prevented or detected in a timely
manner by the internal control systems. If for example, the internal control is a manual
review of computer logs, errors might not be detected in a timely manner simply due to the volume
of data in the computer logs. Control risk refers to the possibility that a company's internal control
system fails to detect or prevent a significant error or fraud. It can be due to deficiencies in the
design or operation of the internal control system. Example: A large corporation with international
operations has complex and decentralized financial processes. The corporation relies on a broad
network of financial controllers across its different business units, each with their own systems and
practices. If the corporation lacks robust central oversight or fails to enforce consistent financial
controls across its network, this could result in control risk. Misstatements could occur and go
undetected due to inconsistent application of financial controls and standards.
• Detection risk - the risk that an IT auditor uses an inadequate test procedure and concludes that
material errors do not exist when, in fact, they do. Detection risk is the chance that an auditor's
procedures will fail to detect a material misstatement that exists in an assertion. This can occur
when the auditors use inappropriate audit techniques, perform inadequate tests, or interpret the
results incorrectly. Example: An auditing firm is auditing a large bank with numerous branches
and millions of transactions. The auditors decide to use a sampling technique for testing the
accuracy of transactions. However, if their sampling technique is flawed (too small,
unrepresentative, etc.), or if their interpretation of the sample results is incorrect, there's a high
detection risk. They may fail to identify errors or fraud that could significantly impact the bank's
financial statements.
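The four risks defined above are commonly related through the multiplicative audit risk model, audit risk = inherent risk x control risk x detection risk. The sketch below rearranges the model to show how much detection risk the auditor can tolerate for a given target level of audit risk; the numeric values are illustrative assumptions only, not thresholds from any standard.

# Minimal sketch of the audit risk model: AR = IR x CR x DR.
# The probabilities below are illustrative assumptions.
inherent_risk = 0.80       # e.g. volatile industry, complex estimates
control_risk = 0.50        # internal controls judged partially effective
target_audit_risk = 0.05   # residual risk the auditor is willing to accept

# Solve for the detection risk the audit procedures may tolerate.
detection_risk = target_audit_risk / (inherent_risk * control_risk)
print("Allowable detection risk: {:.1%}".format(detection_risk))
# A lower allowable detection risk implies more extensive substantive testing
# (larger samples and more persuasive evidence).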
Audit objectives refer to the specific goals that must be accomplished by the IT auditor, and in contrast,
a control objective refers to how an internal control should function. Audit objectives most often, focus
on substantiating that the internal controls exist to minimize business risks, and that they function as
expected. As an example, in a financial audit, an internal control objective could be to ensure that
financial transactions are posted properly to the General Ledger, whereas the IT audit objective will
probably be extended to ensure that editing features are in place to detect erroneous data entry.
So what is a control or an internal control? Let's take a look at some examples. Internal controls are
normally composed of policies, procedures, practices and organizational structures which are
implemented to reduce risks to the organization. There are two key aspects that controls should address:
that is, what should be achieved and what should be avoided.

Controls are generally classified as either preventive, detective or corrective. First, preventive controls are designed to stop problems before they arise, such as a numeric edit check on a dollar data entry field; by not allowing anything other than numeric characters you are preventing things like cross-site scripting or SQL injection. Next, detective controls, such as exception reports from log files which show that an unauthorized user was attempting to access data outside of their job requirements. Finally, corrective controls can be something as simple as taking backups, so that in the event of a system failure you can correct the problem by restoring the database; the backup procedures are the corrective control.
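As a hedged illustration of the preventive control mentioned above, the numeric edit check below accepts only plain decimal amounts before the value ever reaches the database, which is one simple way of blocking injection-style input. The accepted field format and limits are assumptions.

import re

# Minimal sketch of a preventive input control: a numeric edit check on a
# dollar amount field. The accepted pattern is an assumption.
AMOUNT_PATTERN = re.compile(r"^\d{1,10}(\.\d{1,2})?$")

def validate_amount(raw_value):
    # Accept only positive decimal amounts; reject everything else.
    value = raw_value.strip()
    if not AMOUNT_PATTERN.fullmatch(value):
        raise ValueError("Rejected input (not a numeric amount): " + repr(value))
    return float(value)

print(validate_amount("1250.75"))      # accepted
# validate_amount("100 OR 1=1")        # would raise ValueError: injection attempt blocked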
When you look at business functions, one of the things an IT auditor should look for is where in the
process is there a potential for compromise of confidentiality, integrity or availability. For example, if
data is gathered via a web front-end which is then reformatted and sent to the database either for
storage or inquiry and then returned to the web front-end for redisplay to the user there a number of
control points to consider:
• The web front-end itself, who has access and how are they authenticated
• The connection between the web front-end and the database, how is this connection
protected
• The database, who is allowed to update, what data can be returned to the web front-end
• The network, is traffic restricted to just the traffic required to support the web application
The list goes on and on but you get the point, there are a lot of control points to consider when looking
at a particular business function. In trying to determine all the control points, an IT auditor must
consider the system boundary which should be part of the Business Impact Analysis we discussed earlier.
And from that BIA, the IT auditor should be able to construct a data flow diagram and to identify all
the control points that will need to be reviewed as part of his/her audit.
Remember, our work is resource intensive and we have a limited amount of time, so taking a risk based
approach, we would review the control points that represent the greatest risk to the business.
And it is part of our job to identify the risks and to help management understand what the risk to the
business would be if a control at a specific point malfunctions and the information is compromised.
Definition of IT audit - An IT audit can be defined as any audit that encompasses review and evaluation
of automated information processing systems, related non-automated processes and the interfaces
among them. Planning the IT audit involves two major steps. The first step is to gather information and
do some planning the second step is to gain an understanding of the existing internal control structure.
More and more organizations are moving to a risk-based audit approach which is used to assess risk and
helps an IT auditor make the decision as to whether to perform compliance testing or substantive
testing. In a risk-based approach, IT auditors are relying on internal and operational controls
as well as the knowledge of the company or the business. This type of risk assessment decision
can help relate the cost-benefit analysis of the control to the known risk. In the "Gathering Information"
step the IT auditor needs to identify five items:
• Knowledge of business and industry
• Prior year's audit results
• Recent financial information

• Regulatory statutes
• Inherent risk assessments
As a side note, inherent risk can be defined as the risk that an error exists that could be material or significant when combined with other errors encountered during the audit, assuming there
are no related compensating controls. As an example, complex database updates are more likely to be
miswritten than simple ones, and thumb drives are more likely to be stolen (misappropriated) than blade
servers in a server cabinet. Inherent risks exist independent of the audit and can occur because of the
nature of the business.
In the "Gain an Understanding of the Existing Internal Control Structure" step, the IT auditor needs to
identify five other areas/items:
• Control environment
• Control procedures
• Detection risk assessment
• Control risk assessment
• Equate total risk
Once the IT auditor has "Gathered Information" and "Understands the Control" then they are ready to
begin the planning, or selection of areas, to be audited. Remember one of the key pieces of information
that you will need in the initial steps is a current Business Impact Analysis (BIA), to assist you in selecting
the applications which support the most critical or sensitive business functions.
Goals
The goal of an audit is to express an opinion based on the work performed; because of practical constraints, an audit provides only reasonable assurance that the statements are free from material error and typically relies on statistical sampling.
Objectives of an IT audit
The primary objectives of an IT audit are to evaluate the adequacy and effectiveness of an organization's
IT infrastructure, applications, and operations. It aims to ensure that the company's IT systems are secure,
reliable, and properly controlled to achieve their intended results, protect the integrity of the data they
hold, and support the organization's objectives. This includes validating system performance, assessing
the level of risk within the IT infrastructure, verifying compliance with regulatory requirements, and
evaluating whether IT investments align with the organization's strategies and objectives. Essentially, an
IT audit offers assurance that the organization's IT systems are not only safe and efficient but also
contribute to the organization's overall performance and success. Most often, IT audit objectives
concentrate on substantiating that the internal controls exist and are functioning as expected to minimize
business risk. These audit objectives include assuring compliance with legal and regulatory requirements,
as well as the confidentiality, integrity, and availability (CIA - no not the federal agency, but information
security) of information systems and data.
Some common objectives of IT audit are:
1. Evaluating the effectiveness of internal controls: An IT audit aims to assess the design,
implementation, and operational effectiveness of controls in place to safeguard information
systems, data, and technology infrastructure. It helps identify control deficiencies or weaknesses
that could potentially expose the organization to risks.
2. Assessing compliance with laws and regulations: IT audits verify whether the organization's
IT practices and systems comply with relevant laws, regulations, industry standards, and
contractual obligations. This includes data privacy regulations, security requirements, financial
reporting guidelines, and industry-specific compliance frameworks.
3. Identifying and managing risks: IT audits help identify potential risks and vulnerabilities within
the organization's IT environment. This includes risks related to cybersecurity, data breaches,
unauthorized access, system failures, business continuity, and disaster recovery. The audit
provides recommendations to mitigate these risks and improve risk management practices.
4. Evaluating system reliability and availability: An IT audit assesses the reliability, availability,
and performance of information systems and technology infrastructure. It examines the
organization's ability to ensure uninterrupted operations, system uptime, data backup and
recovery processes, and disaster response capabilities.
5. Assessing data integrity and accuracy: IT audits verify the accuracy, completeness, and
reliability of data processed and stored within the organization's systems. This includes assessing
data input controls, data validation processes, data backup and restoration procedures, and data
integrity checks.
6. Reviewing IT governance and strategic alignment: An IT audit evaluates the organization's IT
governance framework and its alignment with business objectives. It assesses the IT strategy,
policies, and decision-making processes to ensure they support the overall goals and objectives
of the organization.
7. Evaluating system development and change management processes: IT audits assess the
organization's system development life cycle (SDLC) processes, change management
procedures, and software development practices. This includes reviewing project management
methodologies, quality assurance processes, testing procedures, and the implementation of
software updates and patches.
8. Verifying user access controls and security measures: IT audits examine the organization's
user access controls, authentication mechanisms, and security measures. This includes reviewing
user account management, password policies, access rights and privileges, network security, and
vulnerability management practices.
9. Assessing IT service management and vendor management: An IT audit evaluates the
organization's IT service management practices, including incident management, problem
management, and service desk operations. It also assesses the management of third-party vendors,
service-level agreements, and outsourcing arrangements.

10.1 IT audit strategies


There are two areas to talk about here, the first is whether to do compliance or substantive testing and
the second is "How do I go about getting the evidence to allow me to audit the application and make
my report to management?" So what is the difference between compliance and substantive testing?

Compliance testing is gathering evidence to determine whether an organization is following its control procedures. On the other hand, substantive testing is gathering evidence to evaluate the integrity of
individual data and other information. For example, compliance testing of controls can be described
with the following example. An organization has a control procedure which states that all application
changes must go through change control. As an IT auditor you might take the current running
configuration of a router as well as a copy of the -1 generation of the configuration file for the same
router, run a file compare to see what the differences were; and then take those differences and look for
supporting change control documentation. Don't be surprised to find that network admins, when
they are simply re-sequencing rules, forget to put the change through change control. For
substantive testing, let's say that an organization has policy/procedure concerning backup tapes at the
offsite storage location which includes 3 generations (grandfather, father, son). An IT auditor would do
a physical inventory of the tapes at the offsite storage location and compare that inventory to the
organization's inventory, as well as looking to ensure that all 3 generations were present.
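The compliance test described above can be scripted. The sketch below diffs the current router configuration against the previous (-1) generation so that every difference can be traced to an approved change control record; the file names are assumptions and a real engagement would read from the organisation's own configuration repository.

import difflib
from pathlib import Path

# Minimal sketch of a compliance test: compare the current router configuration
# with the previous generation. File names are hypothetical.
old_config = Path("router_config_prev.txt").read_text().splitlines()
new_config = Path("router_config_current.txt").read_text().splitlines()

diff = difflib.unified_diff(old_config, new_config,
                            fromfile="previous", tofile="current", lineterm="")
changes = [line for line in diff
           if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))]

for change in changes:
    # Each change listed here should map to an approved change control ticket;
    # anything without one is a potential compliance exception.
    print(change)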
The second area deals with "How do I go about getting the evidence to allow me to audit the application
and make my report to management?" It should come as no surprise that you need to:
• Review IT organizational structure
• Review IT policies and procedures
• Review IT standards
• Review IT documentation
• Review the organization's BIA
• Interview the appropriate personnel
• Observe the processes and employee performance
• Examination, which incorporates by necessity, the testing of controls, and therefore includes the
results of the tests.
As additional commentary of gathering evidence, observation of what an individual actually does
versus what they are supposed to do, can provide the IT auditor with valuable evidence when it
comes to control implementation and understanding by the user. Also performing a walk-through
can give valuable insight as to how a particular function is being performed.
Risk-based approach: This strategy focuses on identifying and assessing risks associated with IT
systems, processes, and controls. It involves understanding the organization's risk appetite, conducting
risk assessments, and prioritizing audit areas based on the level of risk. The audit resources are allocated
to areas with higher risk, enabling a targeted and effective audit approach.
Compliance-focused approach: This strategy emphasizes assessing the organization's compliance with
applicable laws, regulations, industry standards, and internal policies. The audit scope is aligned with
specific compliance requirements, and the audit procedures evaluate the organization's adherence to those
requirements. This approach ensures that the organization meets its legal and regulatory obligations.
Controls-based approach: This strategy focuses on evaluating the design, implementation, and
effectiveness of internal controls within IT systems and processes. The audit examines the control
environment, control activities, and control monitoring mechanisms. It aims to identify control
deficiencies, gaps, and weaknesses, providing recommendations for improving control effectiveness.

Process-oriented approach: This strategy involves auditing specific IT processes or systems within the
organization. It could include auditing areas such as change management, access management, incident
response, data backup and recovery, system development life cycle, and IT service management. The audit
evaluates the efficiency, effectiveness, and compliance of these processes, identifying areas for
improvement.
Technology-focused approach: This strategy involves assessing specific technologies or systems within
the organization. It could include auditing areas such as network security, database management, cloud
computing, mobile device management, or cybersecurity controls. The audit evaluates the adequacy and
effectiveness of technology controls, identifying vulnerabilities and recommending security
enhancements.
Data analytics approach: This strategy leverages data analytics techniques to analyze large volumes of
data and identify patterns, anomalies, or potential risks. It involves using data analysis tools and techniques
to examine transactional data, system logs, access logs, or other relevant data sources. Data analytics can
provide valuable insights into control effectiveness, fraud detection, and compliance monitoring.
Continuous monitoring approach: This strategy involves implementing ongoing monitoring
mechanisms to assess the performance, security, and compliance of IT systems. It utilizes automated tools,
real-time monitoring, and exception reporting to continuously monitor key IT controls and detect
anomalies or potential issues. This approach enables proactive risk management and timely identification
of control deficiencies.
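As a small illustration of the continuous monitoring approach, the sketch below scans authentication log entries and reports accounts whose failed logon count exceeds a threshold. The log format, field positions and threshold are all assumptions; in practice the records would come from the organisation's SIEM or system logs.

from collections import Counter

# Minimal sketch of a continuous-monitoring exception report.
# Log lines and the threshold are hypothetical.
FAILED_LOGON_THRESHOLD = 2

sample_log = [
    "2024-01-05 09:12:01 FAILED_LOGON alice",
    "2024-01-05 09:12:09 FAILED_LOGON alice",
    "2024-01-05 09:12:40 FAILED_LOGON alice",
    "2024-01-05 09:13:44 SUCCESS_LOGON bob",
]

failures = Counter(line.split()[-1] for line in sample_log if "FAILED_LOGON" in line)
for account, count in failures.items():
    if count > FAILED_LOGON_THRESHOLD:
        print("Exception:", account, "had", count, "failed logons")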
The selection of an appropriate IT audit strategy depends on the organization's specific needs, objectives,
and risk profile. A combination of different strategies may be used based on the audit scope and
requirements. The strategy should be aligned with the organization's overall goals, regulatory
requirements, industry best practices, and the expertise and resources available within the audit team.
Application vs. general controls
General controls are foundational measures that apply across an organization, encompassing its IT
infrastructure and support services. Internal accounting controls, for instance, manage risks associated
with financial and accounting processes and may include segregation of duties, proper authorization of
transactions, and regular account reconciliation. Operational controls, on the other hand, ensure business
operations are efficient and effective, employing methods like approval processes for initiating
operations or monitoring of operational efficiency. Administrative controls incorporate procedures and
methods that facilitate control over organizational activities, such as approval hierarchies and employee
training programs.
Furthermore, the organization relies on security policies and procedures to safeguard its information
assets through measures like password policies, access controls, and network security measures. Another
critical aspect of general controls is the policies established for the design and use of documents and
records, guiding how these resources should be created, used, maintained, and eventually destroyed.
The organization also implements procedures and practices to ensure adequate safeguards over access,
such as user access reviews and security measures for physical access to facilities. Lastly, physical and
logical security policies protect the organization's IT resources from both physical threats (like fire or
theft) and logical threats (like cyberattacks). These policies may include physical security measures like

CCTV surveillance and logical security measures like firewalls and intrusion detection systems. All
these general controls are essential in providing a robust control environment, reducing the risk of fraud
and errors, and ensuring the organization's overall effectiveness and efficiency.
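One general control mentioned above, the periodic user access review, lends itself to a simple automated test: comparing the accounts defined in a system with the current list of active employees to surface orphaned or unauthorised accounts. The account and employee identifiers below are hypothetical; in practice the lists would be extracted from the application's user table and the HR system.

# Minimal sketch of a user access review test (a general control check).
# Account and employee identifiers are hypothetical.
system_accounts = {"asharma", "bkarki", "tempadmin", "rthapa"}
active_employees = {"asharma", "bkarki", "rthapa"}

orphaned = system_accounts - active_employees
if orphaned:
    print("Accounts with no matching active employee:", sorted(orphaned))
else:
    print("No orphaned accounts found")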
Application controls are specific procedures applied to individual software applications, ensuring the
accuracy, completeness, and validity of the data they handle. These controls are designed around the
Input, Processing, and Output (IPO) functions of each application. Input controls confirm that only
complete, accurate, and valid data are entered into the system, utilizing techniques like validation rules,
segregation of duties, and authorization checks. Processing controls, on the other hand, guarantee that
data are handled correctly within the application, through mechanisms like logical access controls,
exception reports, and sequence checks. Output controls focus on the end results of data processing,
ensuring accuracy, completeness, and proper distribution via reconciliation procedures, review of output
reports, and secure storage and distribution methods. Lastly, data maintenance controls work to preserve
the integrity of data during storage and retrieval, preventing unauthorized access or alterations, and
mitigating the risk of data loss. Through these concerted control efforts, organizations can trust the
reliability of their application systems and the data they handle.
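A classic processing control of the kind described above is the reconciliation of batch control totals: the record count and hash total computed on input are recomputed after processing and any difference is reported as an exception. The batch contents in this sketch are illustrative assumptions.

# Minimal sketch of a batch control-total reconciliation (a processing control).
input_batch = [120.50, 75.00, 310.25, 42.10]
input_record_count = len(input_batch)
input_hash_total = round(sum(input_batch), 2)

# ... the batch is processed / posted to the ledger ...
posted_batch = [120.50, 75.00, 310.25, 42.10]   # as read back after processing

if (len(posted_batch) != input_record_count
        or round(sum(posted_batch), 2) != input_hash_total):
    print("Control exception: batch totals do not reconcile")
else:
    print("Batch control totals reconcile")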
As an IT auditor, your tasks when performing an application control audit should include:
1. Determining the key components of the application, understanding the transaction flow within the
system, and gaining a thorough understanding of the application. This is typically achieved by
reviewing all relevant documentation and conducting interviews with key personnel, including the
system owner, data owner, data custodian, and system administrator.
2. Identifying the strengths of the application controls and assessing the implications of any weaknesses found within these controls.
3. Crafting a strategic plan for testing the controls.
4. Conducting rigorous tests on the controls to verify their functionality and effectiveness.
5. Analyzing your test results and any other audit evidence to ascertain if the control objectives have been met.
6. Assessing the application in relation to management's objectives for the system to confirm its efficiency and effectiveness.
IT audit control reviews
After gathering all the evidence the IT auditor will review it to determine if the operations audited
are well controlled and effective. Now this is where your subjective judgment and experience come
into play. For example, you might find a weakness in one area which is compensated for by a
very strong control in another adjacent area. It is your responsibility as an IT auditor to report both of
these findings in your audit report.
The audit deliverable
So what's included in the audit documentation and what does the IT auditor need to do once their audit
is finished. Here's the laundry list of what should be included in your audit documentation:

1. The initial planning and preparation phase, including the determination of the audit scope and
objectives.
2. Detailed risk assessment reports that contributed to defining the audit scope.
3. In-depth descriptions or walkthroughs of the specific audit area.
4. An exhaustive audit program that maps out the audit process.
5. All audit steps executed, along with the evidence gathered during the audit.
6. Information about any collaborations with other auditors or experts, including their contributions.
7. Thorough documentation and analysis of the internal control system.
8. Noted discrepancies and explanations for the deviations between expected and actual results.
9. A comprehensive account of audit findings, conclusions, and recommendations.
10. Records of correspondence and communication with management and other relevant stakeholders.
11. Cross-references linking audit documentation to document identification and dates.
12. A copy of the final report generated from the audit work.
13. Proof of a supervisory review to validate the audit process and findings.
This extensive documentation ensures a transparent audit process, providing a clear record of the work
done, findings, and recommendations.
When you communicate the audit results to the organization it will typically be done at an exit interview
where you will have the opportunity to discuss with management any findings and recommendations.
You need to be absolutely certain of:
• The facts presented in the report are correct
• The recommendations are realistic and cost-effective, or alternatives have been negotiated with
the organization's management
• The recommended implementation dates will be agreed to for the recommendations you have
in your report.
Your presentation at this exit interview will include a high-level executive summary (as Sgt. Friday used to say, just the facts please, just the facts). And since a picture is worth a thousand words, include some PowerPoint slides or graphics in your report.
Your Audit Report should be structured so that it includes:
1. Executive Summary: Provides a high-level overview of the audit objectives, key findings, and
recommendations for management's attention. It highlights the most significant issues identified
during the audit.
2. Scope and Objectives: Defines the scope of the audit, including the systems, processes, or areas audited. It clarifies the objectives and criteria used to assess the effectiveness and efficiency of IT controls.
3. Audit Methodology: Describes the approach, methods, and techniques used during the audit. It outlines the procedures followed, including interviews, documentation review, testing, and analysis.
4. Findings: Presents the detailed findings and observations discovered during the audit. It identifies control weaknesses, non-compliance with policies or regulations, and any other deficiencies or vulnerabilities identified.

5. Risk Assessment: Assesses the risks associated with the identified findings and ranks them based on their potential impact and likelihood. It provides insights into the significance and urgency of addressing the identified risks.
6. Conclusions: Summarizes the overall assessment of the IT control environment and the effectiveness of the audited systems or processes. It outlines the strengths and weaknesses observed during the audit.
7. Recommendations: Provides specific actions and measures to address the identified control deficiencies and mitigate the identified risks. It offers practical and actionable recommendations for management to improve IT controls, enhance security, and strengthen overall governance.
8. Management Response: Includes management's formal response to the audit findings and recommendations. It outlines the actions management intends to take to address the identified issues and improve control effectiveness. Management may accept, partially accept, or reject the recommendations, providing justifications for their decisions.
9. Appendices: Contains supporting documentation, evidence, detailed analysis, or additional information relevant to the audit findings. It may include sample testing results, diagrams, policies, or any other supplementary materials.
Lastly, while preparing and presenting your final report, there are several critical aspects to keep in mind.
One of the most important is to understand your audience. If the report is intended for the audit committee,
they might not require the level of detail that would be pertinent for the local business unit report. You'll
need to specify the organizational, professional, and governmental standards adhered to during the audit,
which might include frameworks and guidelines like GAO-Yellow Book, CobiT, or ISO 27001, given that
NIST SP 800-53 was superseded by NIST SP 800-53B as of September 2020. It's essential that your report
is delivered promptly to encourage swift corrective action as needed. All of these elements contribute to a
comprehensive and effective audit report.
And as a final, final parting comment, if during the course of an IT audit, you come across a materially
significant finding, it should be communicated to management immediately, not at the end of the audit.
Steps in Information System Audit:
1. Establish the Terms of the Engagement:
This will allow the auditor to set the scope and objectives of the relationship between the auditor
and the organization. The engagement letter should address the responsibility (scope,
independence, deliverables), authority (right of access to information), and accountability
(auditee rights, agreed completion date) of the auditor.
2. Preliminary Review:
This phase of the audit allows the auditor to gather organizational information as a basis for
creating their audit plan. The preliminary review will identify an organization's strategy and
responsibilities for managing and controlling computer applications. An auditor can provide an
in depth overview of an organization's accounting system to establish which applications are
financially significant at this phase. Obtaining general data about the company, identifying
financial application areas, and preparing an audit plan can achieve this.
3. Obtain understanding of control structure:
Understanding control structure in an organization involves examining both management
controls and application controls. An internal control system should be designed and operated to
provide reasonable assurance that an organization's objectives are being achieved in the
following categories: effectiveness and efficiency of operations, reliability of financial reporting,
and compliance with applicable laws and regulations.
To develop their understanding of internal controls, the auditor should consider information from
previous audits, the assessment of inherent risk, judgments about materiality, and the complexity
of the organization's operations and systems.
Once the auditor develops their understanding of an organization's internal controls, they will be
able to assess the level of their control risk (the risk a material weakness will not be prevented or
detected by internal controls).
4. Assess control risk:
After obtaining satisfactory understanding of internal controls, auditor must assess the level of
control risk. Auditors assess control risk in terms of each major assertion that management
should be prepared to make about material items in the financial statements:
• Existence: Assets and liabilities included in the financial statements actually exist.
• Occurrence: All transactions represent events that have actually occurred.
• Completeness: All transactions have been recorded and presented.
• Rights and obligations: Assets are rights and liabilities are obligations of the organization at the balance sheet date.
• Valuation or allocation: Assets, liabilities, equity and reserves have been recorded at the correct amounts.
• Presentation and disclosure: All items of the financial statements have been properly classified, described and disclosed.

After auditors obtain understanding of internal controls they must determine control risk in relation to
each assertion.
1. If auditors assess control risk at less than the maximum level, they go to the next step and test the controls to evaluate whether they are operating effectively.
2. If auditors assess control risk at the maximum level, they will not test controls at all, and instead carry out detailed substantive procedures.
5. Test of controls:
In this step the auditors will test controls to ascertain whether they are operating effectively or
not. Auditors will carry out testing of both application and management controls. This phase
usually begins by focusing on management controls. If testing shows that, contrary to expectations, management controls are not operating reliably, there may be little point in testing application controls; in such a case auditors may qualify their opinion or carry out detailed substantive tests.
6. Reassess controls:
After auditors have completed tests of controls, they again assess the control risk. In light of test
results, they might revise the anticipated control risk upward or downward. In other words
auditor may conclude that internal controls are stronger or weaker than anticipated. They may
also conclude that it is worthwhile to perform more tests to further reduce substantive testing.
7. Completion of audit:
In the final phase of the audit, audit procedures are developed based on the auditor's understanding of the organization and its environment. A substantive audit approach is used when auditing an
organization's information system. Once audit procedures have been performed and results have
been evaluated, the auditor will issue either an unqualified or qualified audit report based on
their findings.

10.2 Review of DRP/BCP


Business Continuity Plan (BCP)
Business Continuity Planning (BCP) refers to the set of processes and procedures implemented by an
organization to ensure the continuous operation of essential business functions during and after a disaster.
The primary objective of BCP is to safeguard the organization's mission-critical services and increase its
chances of survival. Through effective BCP, organizations can restore their services to a fully operational
state swiftly and seamlessly. BCPs typically encompass the critical business processes and operations of
the organization.
The fundamental concept behind determining the effectiveness of a Business Continuity Plan is to ask the
question, "If we were to lose this building, how would we resume our business?" This conceptual approach
helps organizations assess the adequacy of their BCP by considering the measures in place to restart
operations and mitigate the impact of a significant disruption.
Disaster Recovery Plan (DRP)
As part of the business continuity process, organizations typically develop a set of DRPs. These plans are
more technical in nature and are specifically designed for particular groups within the organization to
facilitate the recovery of specific business applications. The most well-known example of a DRP is the
Information Technology (IT) DRP.
When evaluating a DR Plan for IT, a common test would be to ask, "If we were to lose our IT services,
how would we recover them?" IT DR plans primarily focus on delivering technology services to
employees' workstations. It is then the responsibility of the business units to have plans in place for
subsequent functions.
A mistake that organizations often make is assuming that having an IT DR Plan is sufficient for overall
business continuity. However, this is not the case. It is essential to have a comprehensive Business
Continuity Plan that covers critical personnel, key business processes, recovery of vital records,
identification of critical suppliers, communication with key vendors and clients, and more.

It is crucial for organizations to clearly define the type of plan they are working on. This is one of the first questions to ask, as it determines the approach and processes required. Organizations should understand both types of plans and the processes and profiles involved, and may seek expert consultation and assistance in this regard.

10.3 Evaluation of IS
All around the world there is a huge amount of money invested in IT (e.g. Seddon, 2001). It is therefore
important to evaluate the return on the investment. Evaluation is complicated and consequently there are
a lot of proposals for how to evaluate IT-systems.
Much of the literature on evaluation takes a formal-rational stand and sees evaluation as a largely quantitative
process of calculating the likely cost/benefit on the basis of defined criteria (Walsham,
1993). These approaches are often developed from a management perspective and contain different measures that are often of a hard, economic character. One common criticism of the formal-rational view is
that such evaluation concentrates on technical and economical aspects rather than human and social aspects
(Hirschheim & Smithson, 1988). Further Hirschheim & Smithson maintain that this can have major negative
consequences in terms of decreased user satisfaction but also broader organizational consequences in terms of
system value.
There are also other evaluation approaches such as interpretative (e.g. Remenyi, 1999; Walsham,
1993) and criteria-based. Interpretative approaches often view IT-systems as social systems that have
information technology embedded into them (Goldkuhl & Lyytinen, 1982). Criteria-based approaches
are concerned with identifying and assessing the worth of programme outcomes in the light of initially
specified success criteria (Walsham, 1993). The criteria used are often derived from one specific
perspective or theory.
The evaluation processes described below are based on six generic types of evaluation (cf. Cronholm & Goldkuhl, 2003 for a fuller description of the six generic evaluation types). These types are derived from two groups of strategies: strategies concerning how to evaluate and strategies concerning what to evaluate.
Strategies concerning how to evaluate
We distinguish between three types of strategy:
• Goal-based evaluation
• Goal-free evaluation
• Criteria-based evaluation
The differentiation is made in relation to what drives the evaluation. Goal-based evaluation means that explicit goals from the organisational context drive the evaluation of the IT-system. The basic strategy of this approach is to measure whether predefined goals are fulfilled or not, to what extent and in what ways. The approach is deductive. What is measured depends on the character of the goals, and a quantitative as well as a qualitative approach could be used.
The goal-free evaluation means that no such explicit goals are used. Goal-free evaluation is an inductive
and situationally driven strategy. This approach is a more interpretative approach (e.g. Remenyi,
1999; Walsham, 1993). The aim of interpretive evaluation is to gain a deeper understanding of the

nature of what is to be evaluated and to generate motivation and commitment (Hirschheim &
Smithson, 1988). According to Patton (1990) the aim of goal-free evaluation is to:
1) avoid the risk of narrowly studying stated program objectives and thereby missing important
unanticipated outcomes 2) remove the negative connotations attached to the discovery of an
unanticipated effect: "The whole language of side effect or secondary effect or even unanticipated
effect tended to be a put-down of what might well be a crucial achievement, especially in terms of
new priorities." 3) eliminate the perceptual biases introduced into an evaluation by knowledge of
the goals and 4) maintain evaluator objectivity and independence through goal-free conditions.
The basic strategy of this approach is inductive evaluation. The approach aims at discovering qualities of
the object of study. One can say that the evaluator makes an inventory of possible problems and that the
knowledge of the object of study emerges during the progress of the evaluation.
Criteria-based evaluation means that some explicit general criteria are used as an evaluation yardstick.
The difference to goal-based evaluation is that the criteria are general and not restricted to a
specific organisational context. That means that they are more generally applicable. There are a lot
of criteria-based approaches around such as checklists, heuristics, principles or quality ideals. In the area
of Human-Computer Interaction different checklists or heuristics can be found (e.g. Nielsen, 1994; Nielsen, 1993; Shneiderman, 1998). What is typical for these approaches is that the IT-system's interface and/or the interaction between users and IT-systems acts as a basis for the evaluation together with a set of
predefined criteria. More action oriented quality ideals and principles for evaluation can be found
in Cronholm & Goldkuhl (2002) and in Agerfalk et al (2002).
Strategies concerning what to evaluate
All of the approaches goal-based, goal-free and criteria-based are different answers to how the evaluator should act in order to perform an evaluation. Besides this "how" question, it is also important to decide what to evaluate. When evaluating IT-systems we can think of at least two different situations that can be evaluated. We distinguish between evaluation of IT-systems as such and evaluation of IT-systems in use. IT-systems can be viewed from many different perspectives. The framework for IT
evaluation presented in Cronholm & Goldkuhl (2003) is not dependent on any particular perspective.
Evaluating IT-systems as such means to evaluate them without any involvement from users. In this
situation there are only the evaluator and the IT-system involved. The data sources that could be
used for this strategy are the IT-system itself and possible documentation of the IT-system (see
Figure 1). How the evaluation is performed depends on the "how-strategy" chosen. Choosing to
evaluate "IT-systems as such" does not exclude any of the strategies of "how to evaluate". The
evaluator could use a goal-based, goal-free or criteria-based strategy.
The outcome of the evaluation is based on the evaluator's understanding of how the IT-system supports the
organisation. This strategy is free from a user's perceptions of how the IT-system benefits their work.


Figure 1: Two possible data sources for IT-systems as such

The other strategy of "what to evaluate" is "IT-systems in use". Evaluating IT-systems in use means to
study a use situation where a user interacts with an IT-system. This analysis situation is more complex
than the situation "IT-systems as such" since it also includes a user, but it also has the ability to give a
richer picture.
The data sources for this situation could be interviews with the users and their perceptions and
understanding of the IT-system's quality, observations of users interacting with IT-systems, the IT-
system itself and the possible documentation of the IT-system (see Figure 2). Compared to the
strategy "IT-systems as such" this strategy offers more possible data sources. When high requirements
are placed on data quality the evaluator can choose to combine all the data sources in order to achieve
a high degree of triangulation. If there are fewer resources to hand the evaluator can choose one or two
of the possible data sources.

Figure 2: Four possible data sources for IT-systems in use


An argument for choosing the strategy "IT-systems in use" is presented by Whiteside & Wixon (1987).
They claim "... usability becomes a purely subjective property of the interaction between a specific user
and the computer at a specific moment in time". There are always subjective perceptions such as the user's
attitude towards an IT-system that are harder to measure.
How the evaluation of "IT-systems in use" is performed depends on the "how-strategy" chosen. Ideally,
it should be possible to choose any of the three strategies goal-based, goal-free or criteria-based when
studying "IT-systems in use". The outcome of this evaluation is not only based on the evaluator's
understanding of how the IT-system supports the organisation. It is also based on the user's perceptions
of how the IT-system supports their work.
CAAT
Computer Assisted Audit Techniques (CAATs), sometimes called Computer Aided Audit Tools, are
increasingly gaining traction in the audit profession. Nowadays, these tools are extensively utilized across
the sector, aiding internal auditors in detecting irregularities in data files, supporting internal accounting
departments in performing in-depth analyses, and assisting forensic accountants in extracting and
examining large data sets for further analysis and fraud detection.
In essence, CAATs are employed to streamline or automate the process of data analysis. Presently, there
isn't a single accounting firm that doesn't incorporate some form of CAATs into their conventional
accounting and auditing assignments. Even the basic use of a computer in such engagements can be
regarded as the utilization of CAATs. Firms that have advanced their use of CAATs have recognized the
manifold benefits these tools offer.
Moreover, the use of CAATs has become increasingly linked with the integration of data analytics into
the audit process, reflecting recent developments and trends in the field.
Why Use CAATS?
The use of Computer Assisted Audit Techniques (CAATs) offers numerous advantages, particularly
with the advancements in technology that make it easier to analyze large data sets for discrepancies. The
tools available in today's market simplify the process of obtaining data files and conducting analyses.
Modern CAATs have streamlined the process to the extent that accountants no longer require
programming skills to identify, request, and import data for analysis. Instead, the key task for an
accountant is to choose the appropriate data files and apply their core skills to conduct specific tests on
the data.
Choosing the right data file can sometimes be challenging. However, collaboration with the client,
internal or external tech experts, and the inclusion of a Certified Information Technology Professional
(CITP) on the audit team can facilitate the identification of appropriate data files.
Once the suitable data files are imported into the CAATs tool, the data analytics process begins. Many
tools now offer automated routines to perform standard queries. Additionally, numerous CAATs user
groups are available online, providing a wealth of shared knowledge and resources. Social networking
sites also offer communities with shared interests, providing access to thousands of routines created by
fellow users.
Moreover, once a routine tailored to your specific context is created, it can typically be reused annually,
saving time and effort. Given that data file structures and audit procedures often remain consistent year
after year, once a routine is linked to a data file, it can be reassigned to less experienced audit team
members in subsequent years. This approach not only improves efficiency but also promotes the
professional development of junior team members.

There are many analysis techniques that may be performed using CAATS. Some of these techniques
include the following:
• Filter/Display Criteria
• Aging
• Expressions/Equations
• Join/Relate
• Gaps
• Trend Analysis
• Statistical Analysis
• Regression Analysis
• Duplicates
• Parallel Simulation
• Sort/Index
• Benford's Law
• Summarization
• Matching
• Stratification
• Combination of One or More of the Above

There is a tremendous amount of resources available to educate the end-user in applying the above
techniques and demonstrating all of the additional ways that they can be used.
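Several of the techniques listed above, such as Benford's Law analysis, duplicate testing or gap testing, are straightforward to script. The sketch below compares observed first-digit frequencies in a set of transaction amounts with the Benford distribution; the amounts are hypothetical sample data and a real engagement would import the client's data file into the CAAT tool.

import math
from collections import Counter

# Minimal sketch of a Benford's Law first-digit test, a common CAAT routine.
# The amounts below are hypothetical sample data.
amounts = [1250.75, 23.10, 187.00, 9450.00, 312.40, 47.25, 1020.00, 86.90]

first_digits = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a]
observed = Counter(first_digits)
total = len(first_digits)

for d in range(1, 10):
    expected = math.log10(1 + 1 / d)            # Benford expected proportion
    actual = observed.get(d, 0) / total
    print("digit", d, "expected {:.1%}".format(expected),
          "observed {:.1%}".format(actual))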

10.4 Standards for IS Audit


IS Audit and Assurance Standards
IS Audit and Assurance Standards (ISAAS) are a set of guidelines and best practices established by
professional organizations to govern the conduct of information systems (IS) audit and assurance
engagements. These standards provide a framework for conducting IS audits and assessments, ensuring
that they are performed with integrity, objectivity, and professionalism. While specific standards may vary
depending on the issuing body, the following are some common examples:
International Standards for the Professional Practice of IS Audit and Assurance (ISACA):
Standard 100: Purpose, Authority, and Responsibility of the IS Audit and Assurance Function.
Standard 110: Independence and Objectivity.
Standard 120: Proficiency and Due Professional Care.
Standard 130: Quality Management and Continuous Improvement.
Information Systems Audit and Control Association (ISACA):
IS Auditing Standards (ISAS): These standards provide guidance for conducting IS audits, covering areas
such as planning, risk assessment, control testing, and reporting.
International Federation of Accountants (IFAC):
International Standard on Assurance Engagements (ISAE) 3000 (Revised): Assurance Engagements
Other Than Audits or Reviews of Historical Financial Information.

International Standard on Assurance Engagements (ISAE) 3402: Assurance Reports on Controls at a Service Organization.

Institute of Internal Auditors (IIA):
International Professional Practices Framework (IPPF): The IPPF includes the International Standards for
the Professional Practice of Internal Auditing (Standards 1000-1300), which provide guidance on
conducting internal audits, including those related to information systems.
These standards define the responsibilities and expectations of IS auditors, establish the criteria for
planning and conducting IS audit engagements, and guide the reporting of findings and recommendations.
Compliance with these standards ensures that IS audit and assurance engagements are conducted in a
consistent and effective manner, providing stakeholders with reliable and trustworthy information about
the organization's information systems and controls.
Standards
Standards contain statements of mandatory requirements for IS audit and assurance. They serve the following purposes:
1. Standards set the baseline performance that IS audit and assurance professionals must achieve, as
mandated by their professional obligations under the ISACA Code of Professional Ethics.
2. They convey the expectations of the profession concerning the practitioners' work to management
and other key stakeholders.
3. They detail the particular prerequisites for individuals bearing the Certified Information Systems
Auditor (CISA) credential. Non-compliance with these standards may trigger an investigation by the
ISACA Board of Directors or a designated ISACA committee into the conduct of CISA holders,
potentially leading to disciplinary measures.
ISACA periodically updates its standards and supporting guidance to address emerging technologies; for example, the ISACA Tech Brief: Understanding AI (October 2021) encourages professionals to stay informed about and adapt to technologies such as artificial intelligence. For the latest changes or updates, refer to the official ISACA website or other reliable resources.
IS Audit and Assurance Standards:
• Are a cornerstone of its professional contribution to the audit and assurance community
• Comprise the first level of ITAF guidance
• Provide information required to meet compliance needs
• Supply essential guidance to improve effectiveness and efficiency
• Offer a risk-based approach that is aligned with ISACA methodology
• Apply to individuals providing assurance over some components of IS systems, applications and
infrastructure
• May also provide benefits to a wider audience, including users of IS audit and assurance reports
• Are issued by the Professional Standards and Career Management Committee of
ISACA
Guidelines
The purpose of the IS Audit and Assurance Guidelines is to offer supplementary guidance and additional
information on how to effectively comply with the IS Audit and Assurance Standards. These guidelines
serve as a resource for IS audit and assurance professionals to support them in implementing, applying,
and justifying any deviations from the standards. The guidelines should be taken into consideration when

carrying out audit and assurance activities, providing further insights and practical advice to enhance the
adherence to the standards.

ISACA's Information Systems (IS) Audit and Assurance Standards are principles-based standards that are
mandatory requirements for ISACA members and Certified Information Systems Auditors (CISAs). These
standards provide the minimum level of acceptable performance needed to meet the professional
responsibilities set out in the ISACA Code of Professional Ethics for IS auditors.
ISACA also provides supporting guidance that helps ISACA members and CISAs understand how to
implement the standards. This guidance, which includes guidelines and tools and techniques, is not
mandatory but is highly recommended for effective professional practice.
ISACA regularly updates its body of knowledge to reflect the evolving field of IT governance, risk, audit,
and cybersecurity. This includes periodic updates to the CISA exam, the COBIT framework, and other
professional resources.

Tools and Techniques:


Tools and techniques play a supportive role by providing additional guidance and insights for IS audit and
assurance professionals. While they do not establish requirements themselves, they assist practitioners in
carrying out their engagements effectively. Some examples of tools and techniques include:
1. IS Audit Reporting: These tools and techniques aid in the creation of comprehensive and
informative IS audit reports, ensuring that findings, recommendations, and conclusions are
effectively communicated to stakeholders.

2. White Papers: White papers are informative documents that provide in-depth analysis, research,
and guidance on specific topics related to IS audit and assurance. They offer valuable insights
and best practices for professionals in the field.
3. IS Audit/Assurance Programs: These tools and techniques provide structured frameworks for
planning, executing, and documenting IS audit and assurance engagements. They outline the
scope, objectives, procedures, and deliverables for each stage of the engagement.
4. COBIT 5 Family of Products: COBIT 5 is a globally recognized framework for governance
and management of enterprise IT. The COBIT 5 family of products includes various resources,
such as guidance, frameworks, and tools, that support the effective implementation of IT
governance, risk management, and control practices.
5. Data Analysis Tools: These tools help in examining and analyzing large volumes of data to
identify patterns, anomalies, and trends. They enable auditors to perform data mining, data
visualization, and statistical analysis to gain insights and detect potential risks or irregularities.
6. Control Self-Assessment (CSA): CSA is a technique that involves engaging stakeholders within
the organization to assess the effectiveness of controls and identify areas for improvement. It
allows for a collaborative approach to evaluating the design and operating effectiveness of
controls.
7. Audit Management Software: Audit management software provides a centralized platform to
manage and streamline the entire audit lifecycle. It facilitates planning, scheduling, tracking, and
reporting of audit activities, ensuring efficient and well-organized audit processes.
8. Vulnerability Assessment Tools: These tools assess the security vulnerabilities present in IT
systems and networks. They conduct automated scans and tests to identify weaknesses in system
configurations, software vulnerabilities, and potential entry points for attackers.
9. Process Mining Tools: Process mining tools utilize event log data to reconstruct and visualize
business processes. They provide insights into process performance, bottlenecks, compliance
deviations, and potential improvements. Process mining helps auditors gain a deeper
understanding of the actual execution of processes and identify areas for optimization.
10. Risk Assessment Tools: Risk assessment tools assist in evaluating and prioritizing risks
associated with IT systems and processes. They typically involve risk identification, risk analysis,
and risk evaluation techniques to support informed decision-making and risk mitigation
strategies.
These tools and techniques enhance the effectiveness and efficiency of IS audit and assurance activities,
providing practitioners with valuable resources and methodologies to achieve their objectives.

Audit preparation and Planning


1. Business Understanding
The first crucial step in IS audit preparation and planning is gaining a comprehensive understanding
of the business. This involves gathering relevant information about the organization's structure,
goals, operations, and overall business environment. A thorough study of the organization's

computer application systems is essential to appreciate the complexity of the information systems
at play and to assess their robustness and security.
Furthermore, understanding the business must also encompass an in-depth analysis of the financial
aspects of the organization and the inherent risks associated with its operations. This risk analysis
should account for both business-specific risks and those prevalent in the industry or market in
which the organization operates. Understanding the nature of these risks, their potential impacts,
and the organization's mitigation strategies forms a significant part of the IS auditor's role.
Additionally, it's imperative to gain a solid understanding of the organization's information
architecture. This includes understanding how data is structured, stored, processed, and
communicated, both internally and externally. It also involves studying the organization's
technological direction, which refers to its strategic approach to adopting and implementing new
technologies. This aspect of business understanding is particularly critical given the rapid pace of
technological advancement and the significant implications this has for the security and
effectiveness of information systems.
By embracing these aspects of business understanding, an IS auditor can create a firm foundation
for a successful audit. This comprehensive view not only allows the auditor to identify potential
areas of risk but also enables them to understand the broader context in which these risks occur,
thereby facilitating more effective audit planning and execution.
2. Audit scope and charter
The role of the IS audit function, encompassing its responsibility, authority, and accountability,
should be aptly recorded either in an audit charter or an engagement letter, depending on the scope
and context of the audit activities.
An audit charter is a comprehensive document that encapsulates the complete spectrum of audit
operations within an entity. It serves as the foundational framework, outlining the overall mandate
of the audit function, thus providing guidance for all audit activities and ensuring alignment with
the organization's objectives.
On the other hand, an engagement letter is a more targeted document, honed in on a particular audit
exercise. This is typically used when a specific audit task is initiated within an organization with a
focused objective. While it carries the same weight as an audit charter in terms of outlining
responsibilities, it is more confined in its application, relating specifically to the audit at hand.

The contents of an audit charter typically include the purpose, outlining the fundamental reason for
its existence and the objectives it seeks to achieve. The charter also establishes the responsibilities
of the audit function, clearly defining the tasks and roles it is expected to perform. Moreover, it
describes the authority bestowed upon the audit function, detailing its right to access information,
personnel, and resources necessary for the audit. The charter further clarifies the function's
accountability, stipulating to whom and for what the audit function is answerable.
The audit charter also outlines the exclusions, or areas that are outside the audit's purview, ensuring
clarity and avoiding potential conflicts. Importantly, the charter also sets the standard for effective

communication with the auditee, detailing how and when information will be exchanged, which is
vital for a successful audit.
By providing this comprehensive framework, the audit charter enables effective planning and
execution of audit tasks, ensures alignment with organizational objectives, and facilitates clear and
efficient communication between the audit function and other stakeholders.
3. Audit Planning
Audit planning is a crucial initial step in the audit process. It is conducted to establish the
overarching audit strategy and detail the specific procedures that will be employed to execute the
strategy and conclude the audit. This step includes both short- and long-term planning, thus
ensuring that immediate audit tasks are handled efficiently while also setting the groundwork for
future audit operations.
A central part of audit planning is the establishment of the audit universe, a comprehensive
collection of all relevant processes that constitute the enterprise’s operational blueprint. Ideally, the
audit universe should list all processes that may potentially be subject to audit.
A key part of this planning phase is gaining a deep understanding of the organization’s mission,
objectives, purpose, and processes. This includes knowledge of information and processing
requirements such as data availability, integrity, security, and confidentiality. In understanding
these elements, the auditor can better assess the efficacy of the organization's business technology
and information management systems.
Understanding the organization’s governance structure and practices related to the audit objectives
is also critical. This includes knowledge of how decisions are made, who has authority, and how
accountability is assigned within the organization. Auditors must also account for changes in the
auditee’s business environment, as these could influence risk levels or impact the organization’s
information systems. This can be supplemented by reviewing work papers from prior audits to
identify any ongoing issues or previously identified risks.
Identifying stated content such as policies, standards, guidelines, procedures, and the organization
structure is essential to assess compliance and identify potential areas of risk or weakness.
Risk analysis is an integral part of audit planning. By identifying and assessing the potential risks
an organization faces, auditors can design an audit plan that effectively targets these areas.

The next step in planning is setting the audit scope and objectives, which define what the audit will
cover and what it aims to achieve. This is followed by developing the audit approach or strategy,
which lays out how the audit will be conducted. The allocation of personnel resources is then
considered, ensuring that individuals with the right skills and experience are assigned to relevant
tasks within the audit.
Finally, the planning phase also addresses engagement logistics, including the timing of audit
activities, required resources, and communication protocols.
In summary, audit planning sets the foundation for a successful and efficient audit, guiding the
entire process and ensuring that all potential areas of risk are thoroughly evaluated.
4. Audit Staffing
The process of audit staffing requires careful consideration and effective management skills, as it
involves aligning the technical and auditing skill requirements with the competencies of the
available staff and the developmental goals of team members. It is crucial to ensure that the audit
team possesses a well-rounded understanding of the audit process and the necessary technical
acumen to handle the unique demands of each audit.
The primary auditor in charge, often referred to as the lead auditor, has a significant role in directing
the individual audit. This individual must possess a comprehensive understanding of the
technology, risks, and auditing techniques unique to the subject matter of the audit. Not only is this
expertise vital to conducting the audit effectively, but it also serves as a resource for guidance and
developmental assistance for staff auditors contributing to the fieldwork.
The lead auditor's role extends beyond just possessing knowledge; they should be adept at
imparting this knowledge to their team. They guide staff auditors in applying audit techniques, help
them understand the unique risks associated with the technology in use, and assist them in
navigating the complexities of the audit process.
In essence, effective audit staffing is about more than just filling roles. It involves strategically
deploying the right personnel with the appropriate skills and providing a learning environment for
continuous development. This approach ensures the successful execution of the audit and
contributes to the ongoing professional growth of the team.
5. Using work of other Experts
IS auditor should consider using the work of other experts in the audit when there are constraints
that could impair the audit work to be performed or potential gains in the quality of the audit.
Examples of these are the knowledge required by the technical nature of the tasks to be
performed, scarce audit resources and limited knowledge of specific areas of audit.
6. Audit Schedule
1. Schedules of individual audits, resources, the start and finish deadlines, and the possible overlap of audits must all be reconciled when developing a master information system audit schedule for the information system audit plan.
2. Time allocation for an individual audit should include time for planning, fieldwork, review, report writing, and post-audit follow-up.
7. Communication of the Audit Plan
1. Audit plan divisions are based on the expertise required, geographical divisions, managerial responsibility divisions, or some method that worked well in prior audit approaches.
2. Evidence of approval by audit management, together with their assessment of risks and the planned scope and objectives, should be well documented in this section.
3. The audit plan is a series of audit steps designed to meet the audit objectives by identifying the process-related risks, determining the controls that are in place to mitigate the risks, and testing those

controls for effectiveness and sufficiency to successfully mitigate the risk to an acceptable
level.
Computer-Assisted Auditing Techniques (CAATs)
Computer-Assisted Audit Tools and Techniques (CAATs) are indispensable resources that an
Information Systems (IS) auditor deploys to gather and analyze data during an IS audit or review.
As modern systems often encompass diverse hardware and software environments, varied data
structures, unique record formats, and complex processing functions, obtaining certain evidence
can be virtually impossible without the aid of such software tools.
CAATs offer a way to access and analyze data to meet a predefined audit objective, subsequently
enabling the auditor to report on the audit findings with a particular focus on the reliability of the
system's records. These tools essentially provide a lens through which the auditor can scrutinize
the records produced and maintained by the system, thereby ensuring their accuracy and integrity.
The spectrum of CAATs is broad, encompassing a range of tools and techniques that serve different
purposes within the audit process. These include generalized audit software (GAS), utility software,
debugging and scanning software, test data, application software tracing and mapping, and expert
systems.
These tools and techniques can be used in performing various audit procedures such as:
• Tests of the details of transactions and balances
• Analytical review procedures
• Compliance tests of IS general controls
• Compliance tests of IS application controls
• Network and OS vulnerability assessments
• Penetration testing
• Application security testing and source code security scans
Continuous Auditing
Continuous auditing represents a transformative approach in the audit field, as it enables an IS auditor to
perform tests and assessments within a real-time or near-real-time environment. Unlike traditional
auditing, which operates within defined time frames and often reviews historical data, continuous auditing
allows for immediate analysis and reporting on the subject matter. This results in a significantly shortened
reporting cycle and potentially faster detection and resolution of issues.
Consider, for example, a company that utilizes an automated system for processing its accounts payable
transactions. This system generates a daily report cataloging all transactions, including vendor invoices,
payments made, and any identified discrepancies or errors.
In a continuous auditing scenario, auditors could leverage this daily report to perform ongoing
assessments. By setting up automated tests and controls, they could proactively monitor for common
problems such as duplicate payments, unauthorized vendor payments, or incorrect payment amounts. This
real-time or near-real-time auditing approach not only allows for immediate identification of issues but
also enables swift corrective action, thus enhancing the overall efficiency and reliability of the
organization's financial operations.
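The following sketch illustrates how such automated continuous-auditing tests might look in practice, assuming the daily report is exported as a CSV file. The file name, the column names and the approved-vendor list are illustrative assumptions rather than features of any specific system.

# Sketch of automated continuous-auditing tests over a daily accounts payable report.
# File name, column names and the approved vendor list are illustrative assumptions.
import pandas as pd

APPROVED_VENDORS = {"V001", "V002", "V003"}

def run_daily_ap_tests(path: str) -> dict:
    report = pd.read_csv(path, parse_dates=["payment_date"])

    # Test 1: possible duplicate payments (same vendor, invoice and amount)
    dup = report[report.duplicated(["vendor_id", "invoice_no", "amount"], keep=False)]

    # Test 2: payments to vendors not on the approved vendor master
    unauthorized = report[~report["vendor_id"].isin(APPROVED_VENDORS)]

    # Test 3: payments that differ from the recorded invoice amount
    mismatched = report[report["amount"] != report["invoice_amount"]]

    return {"duplicates": dup, "unauthorized": unauthorized, "mismatched": mismatched}

exceptions = run_daily_ap_tests("ap_report_today.csv")
for name, rows in exceptions.items():
    print(f"{name}: {len(rows)} exception(s) flagged for follow-up")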
In essence, continuous auditing serves as a potent tool for IS auditors, promoting enhanced visibility,
immediacy of reporting, and the ability to respond promptly to issues, thereby improving the effectiveness
of the audit function and the reliability of the system under audit. For the most recent developments in this
area, it's recommended to refer to up-to-date resources and professional bodies such as ISACA.
Continuous Monitoring
Continuous monitoring is a strategy employed by an organization to consistently observe the performance
of various processes, systems, or data types. This method enables the organization to detect and address
potential issues promptly, thereby enhancing the overall efficiency and security of their operations.
For instance, in the realm of cybersecurity, tools like real-time antivirus software or Intrusion Detection
Systems (IDSs) often operate based on the principle of continuous monitoring. These systems vigilantly
oversee the organization’s digital environment, scanning for any signs of suspicious activity or potential
threats. Upon detection, these systems can take immediate action, from alerting relevant personnel to
isolating affected areas of the network, depending on the severity of the threat.
In essence, continuous monitoring allows an organization to maintain a real-time or near-real-time pulse
on its operations, enhancing its ability to respond swiftly and effectively to potential issues. This proactive
approach can result in improved system performance, increased data security, and ultimately, better
alignment with the organization’s objectives.
There are five types of automated evaluation techniques applicable to continuous auditing:
1. Systems Control audit review file and embedded audit modules (SCARF/EAM)
The Systems Control Audit Review File (SCARF) and Embedded Audit Modules (EAM) techniques
involve the integration of specially crafted audit software within the organization’s primary
application system. This strategic placement allows for selective monitoring of application systems,
providing a granular view of system operations and aiding in the timely detection of discrepancies
or anomalies.
These techniques employ software tools such as QualysGuard, RSA Archer, and Solarwinds, which
are embedded into the host application. These embedded tools continuously monitor system
activities, capturing and analyzing transaction data based on predefined criteria. This ongoing
review can lead to immediate detection and flagging of any abnormal activity or deviation from
standard procedures.
Essentially, SCARF and EAM empower organizations with enhanced visibility into their system
operations, enabling them to proactively identify and address potential issues. This proactive
approach contributes to improved system reliability, enhanced data integrity, and overall, a more
robust audit process. As technology continues to evolve, the significance of techniques like SCARF
and EAM is expected to grow, given their value in promoting operational efficiency and system
security.
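A conceptual sketch of this idea is shown below: an embedded audit module is called from the host application's processing routine and copies any transaction meeting predefined criteria into a SCARF-style audit review file. The thresholds, field names and file name are assumptions made for illustration only; real EAMs are built into the application itself.

# Conceptual sketch of an embedded audit module (EAM): transactions that meet
# predefined audit criteria are copied to a SCARF-style audit review file.
# Thresholds, field names and the file name are illustrative assumptions.
import json
from datetime import datetime

AUDIT_FILE = "scarf_audit_review.jsonl"
CRITERIA = {"max_amount": 50_000, "restricted_accounts": {"9999", "8888"}}

def post_to_ledger(txn: dict) -> None:
    pass  # placeholder for the application's real posting routine

def embedded_audit_module(txn: dict) -> None:
    flags = []
    if txn["amount"] > CRITERIA["max_amount"]:
        flags.append("amount_above_threshold")
    if txn["account"] in CRITERIA["restricted_accounts"]:
        flags.append("restricted_account")
    if flags:
        record = {"captured_at": datetime.utcnow().isoformat(), "flags": flags, **txn}
        with open(AUDIT_FILE, "a") as f:
            f.write(json.dumps(record) + "\n")

def process_transaction(txn: dict) -> None:
    """Normal application processing, with the EAM hooked in for selective capture."""
    post_to_ledger(txn)          # the host application's own logic
    embedded_audit_module(txn)   # continuous, selective audit capture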
2. Snapshots
The snapshot technique represents a unique auditing approach where “pictures” or snapshots are
taken to capture the transaction’s processing pathway, tracing it from the input stage right through
to the output. This method relies on applying unique identifiers to input data and meticulously
recording selected information about the transaction’s course, providing a clear audit trail for
subsequent review by an IS auditor.
Notable examples of snapshot technology can be seen in platforms like Microsoft Azure and Google
Cloud Platform. Google Cloud’s Disk Snapshot feature, for instance, allows users to capture
snapshots of their persistent disk volumes. These snapshots serve multiple purposes, from creating
backups and transferring data across regions to restoring data in the event of data loss.
In essence, the snapshot technique provides a detailed record of transactional processes within an
organization’s information systems. By creating a precise timeline of events, it aids in revealing
anomalies, identifying potential risks, and tracing the source of issues. As such, it’s a valuable tool
for IS auditors seeking to ensure the integrity, security, and efficiency of data processing within an
organization. Given the increasing reliance on digital data and the growing complexity of
information systems, techniques like snapshots are likely to play an increasingly significant role in
audit processes.
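The following simplified sketch illustrates the idea behind the snapshot technique: a transaction is tagged with a unique identifier and its state is "photographed" at the input, processing and output stages, producing an audit trail for later review. The stage names and fields are illustrative assumptions.

# Sketch of the snapshot technique: a transaction is tagged with a unique identifier
# and a "picture" of its state is recorded at each processing stage, building an
# audit trail an IS auditor can later review. Stage names are illustrative.
import copy
import uuid
from datetime import datetime

audit_trail = []   # in practice this would be a protected audit log or file

def take_snapshot(stage: str, txn: dict) -> None:
    audit_trail.append({
        "txn_id": txn["txn_id"],
        "stage": stage,
        "timestamp": datetime.utcnow().isoformat(),
        "state": copy.deepcopy(txn),
    })

txn = {"txn_id": str(uuid.uuid4()), "amount": 1250.00, "status": "received"}
take_snapshot("input", txn)

txn["status"] = "validated"
take_snapshot("processing", txn)

txn["status"] = "posted"
take_snapshot("output", txn)

for snap in audit_trail:
    print(snap["txn_id"][:8], snap["stage"], snap["state"]["status"])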
3. Audit Hooks
The audit hook technique refers to the embedding of ‘hooks’ or triggers within application systems.
These hooks act as red flags, alerting IS security personnel and auditors to potential issues before
they escalate. By triggering an alert when certain conditions are met, audit hooks provide a means
for real-time monitoring and immediate response. One common application of audit hooks is in file
access auditing. In this scenario, an audit hook can be programmed to monitor and log every instance
of file access or modification within a system. For example, if an application is required to track
every time a certain file is accessed or altered, an audit hook can be implemented to log these events.
This continuous monitoring can prove invaluable in detecting unauthorized access or modifications
to sensitive files, thereby enhancing data security.
Examples of audit hook usage extend to network monitoring, where hooks can be set up to monitor
network traffic and alert administrators to unusual activity that might signify a security breach. In
user authentication, audit hooks can be used to flag multiple failed login attempts, which could
indicate a potential hacking attempt. In summary, audit hooks serve as an important tool in proactive
risk management. They enable continuous monitoring and real-time response to potential issues,
thereby enhancing the overall security and integrity of information systems. As cybersecurity threats
continue to evolve, the use of audit hooks is likely to remain a critical component of robust IS audit
strategies.
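As a simple illustration, the sketch below shows an audit hook of the kind described for user authentication: repeated failed login attempts within a short window trigger a real-time alert. The threshold, time window and alerting mechanism are assumptions made for the example.

# Sketch of an audit hook for user authentication: repeated failed login attempts
# raise a real-time alert to security personnel. The threshold, window and alert
# mechanism are illustrative assumptions.
from collections import defaultdict, deque
from datetime import datetime, timedelta

FAILED_THRESHOLD = 3
WINDOW = timedelta(minutes=10)
failed_attempts = defaultdict(deque)   # user_id -> timestamps of recent failures

def alert_security(user_id: str, count: int) -> None:
    print(f"ALERT: {count} failed logins for {user_id} - possible intrusion attempt")

def audit_hook_on_login(user_id: str, success: bool) -> None:
    if success:
        failed_attempts.pop(user_id, None)
        return
    now = datetime.utcnow()
    attempts = failed_attempts[user_id]
    attempts.append(now)
    # keep only failures inside the monitoring window
    while attempts and now - attempts[0] > WINDOW:
        attempts.popleft()
    if len(attempts) >= FAILED_THRESHOLD:
        alert_security(user_id, len(attempts))

# Example usage
for _ in range(3):
    audit_hook_on_login("jdoe", success=False)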
4. Integrated Test Facility
The Integrated Test Facility (ITF) technique is a specialized audit method that involves the creation
and utilization of dummy entities within an auditee’s production files. This unique setup allows an
IS auditor to process either live transactions or test transactions during regular processing runs,
updating the records of the dummy entity in the process. Under this system, test transactions are
entered into the system concurrently with live transactions. This simultaneous processing enables a
comprehensive and real-time evaluation of the system’s effectiveness, without disrupting normal
operations.

Following the entry and processing of these transactions, an auditor then compares the system
output with independently calculated data. This comparative analysis serves to verify the accuracy
and reliability of the computer-processed data. In essence, the ITF technique allows auditors to
assess the integrity of a system’s transaction processing under real-world conditions, without
affecting actual data or operations. By introducing test transactions and analyzing their processing
within the live environment, auditors can identify potential issues and validate the system’s
processing accuracy. As such, the ITF method forms an integral part of a comprehensive audit
strategy, contributing to the overall effectiveness and reliability of an organization’s information
systems.
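A much-simplified sketch of the ITF concept follows: test transactions are posted to a dummy entity during the normal processing run, and the system-produced balance is compared with an independently calculated expectation. The entity code and amounts are illustrative assumptions.

# Sketch of an Integrated Test Facility (ITF): test transactions posted to a dummy
# entity are processed alongside live data, and the system's output is compared with
# independently calculated results. Entity codes and amounts are illustrative.
DUMMY_ENTITY = "ITF-9999"      # fictitious customer set up in the production files

ledger = {DUMMY_ENTITY: 0.0}   # stands in for the production master file

def post_transaction(entity: str, amount: float) -> None:
    """The production system's normal posting routine (simplified)."""
    ledger[entity] = ledger.get(entity, 0.0) + amount

# Auditor enters test transactions during the regular processing run
test_transactions = [250.00, -75.50, 1200.00]
for amount in test_transactions:
    post_transaction(DUMMY_ENTITY, amount)

# Independently calculated expectation vs. system-produced balance
expected = sum(test_transactions)
actual = ledger[DUMMY_ENTITY]
print("ITF check passed" if abs(expected - actual) < 0.005 else "ITF discrepancy found")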
5. Continuous and Intermittent Simulation (CIS)
The Continuous and Intermittent Simulation (CIS) technique represents a nuanced approach to IS
auditing. During a transaction’s processing run, a computer system simulates the execution of the
application’s instructions. The simulator then evaluates each transaction against specific
predetermined criteria. If a transaction meets these criteria, it is audited; if not, the simulator
proceeds to the next transaction that fulfills the requirements.
Continuous Simulation is an approach that provides ongoing monitoring of a system, empowering
auditors to detect potential issues and risks in real-time. This is facilitated by software specifically
designed to perform constant testing of the system and promptly report any vulnerabilities or
weaknesses. This constant vigilance enhances an auditor’s ability to swiftly respond to potential
issues.
Contrastingly, Intermittent Simulation involves episodic system testing at pre-set intervals. This
method offers auditors a temporal snapshot of system performance, aiding in the identification of
issues that may have arisen since the last audit cycle.

Risk Ranking
Risk ranking is a critical part of risk management, typically conducted by evaluating risks based on their
impact and likelihood of occurrence. The impact of a risk should be measured in terms that reflect the
organization's objectives, potentially encompassing diverse areas such as financial implications, people-
related consequences, or reputational damage.
Areas identified as low risk, often denoted as 'Green Areas', are considered to pose minimal threat from
both a business and audit perspective. Given their low risk nature, it is not imperative to review the controls
over these areas in detail annually or on a rotational basis. Nonetheless, the choice not to conduct rotational
reviews is a management decision, made in the context of the organization's overall risk strategy and
available resources.
Medium-risk areas, or 'Orange/Yellow Areas', represent a more substantial risk, but not to an extent that
is likely to result in significant loss or reputational damage should the required controls fail. Given their
increased risk status, the controls over these areas should ideally be reviewed every two to three years on
a rotational basis. This helps ensure that the controls remain effective in managing the risks identified.
High-risk areas, also known as 'Red Areas', are considered to be inherently high risk from both a business
and audit standpoint. These areas bear the potential to cause significant financial loss or reputational

damage if not adequately managed. Consequently, the controls over these areas should be reviewed
annually to confirm their adequacy and effectiveness in mitigating the inherent risks.
In sum, risk ranking, when conducted effectively, enables an organization to allocate resources and apply
controls optimally to manage risks across different areas. It forms a vital part of an organization's risk
management strategy, promoting efficient risk mitigation and bolstering organizational resilience.
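A simple way to operationalise this ranking is to score each audit area on impact and likelihood and map the combined score to the green, yellow and red bands described above. The sketch below assumes a 1-5 scale and illustrative cut-off points; an organization would calibrate both to its own risk appetite.

# Sketch of a simple risk-ranking model: each audit area is scored on impact and
# likelihood (1-5), and the combined score is mapped to the green/yellow/red bands
# and review cycles described above. Scales and cut-offs are illustrative assumptions.
def rank(impact: int, likelihood: int) -> tuple[int, str, str]:
    score = impact * likelihood                  # 1 (lowest) to 25 (highest)
    if score >= 15:
        return score, "Red (high risk)", "review controls annually"
    if score >= 6:
        return score, "Orange/Yellow (medium risk)", "review every two to three years"
    return score, "Green (low risk)", "rotational review at management's discretion"

audit_universe = {
    "Payroll processing": (4, 4),
    "Fixed asset register": (2, 2),
    "Online payments platform": (5, 4),
}

for area, (impact, likelihood) in audit_universe.items():
    score, band, cycle = rank(impact, likelihood)
    print(f"{area}: score {score:>2} -> {band}; {cycle}")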

Data Analytics
Data analytics is the systematic examination of voluminous datasets to uncover underlying patterns,
discernible trends, and insightful correlations, ultimately driving informed decision-making and strategic
business initiatives. By deploying various techniques and utilizing sophisticated tools, valuable
information can be derived from the raw data, providing meaningful insights that aid in steering business
strategies.
An IS auditor can use data analytics for the following purposes:
1. Determination of the operational effectiveness of the current control environment
2. Determination of the effectiveness of antifraud procedures and controls
3. Identification of business process errors
4. Identification of business process improvements and inefficiencies in the control environment
5. Identification of exceptions or unusual business rules
6. Identification of fraud
7. Identification of areas where poor data quality exists
8. Performance of risk assessment at the planning phase of an audit
The process of collecting and analyzing data involves several key stages. The first is setting the scope,
which includes determining the objectives of the audit or review, defining data needs, and identifying
reliable data sources. Next, the data is identified and obtained, which may involve requesting data from
responsible sources, testing a data sample, and extracting data for usage.
Following data acquisition, it's crucial to validate the data to determine its sufficiency and reliability for
audit tests. This could involve independent validation of balances, reconciliation of detailed data to report
control totals, and validation of various data fields such as numeric, character, and date fields.
Additionally, the time period of the dataset is verified to ensure it aligns with the scope and purpose of the
audit, and all necessary fields are confirmed to be included in the acquired dataset.
Upon validation, tests are executed, often involving the running of scripts and other analytical tests. The
results of these tests are then meticulously documented, including the purpose of testing, data sources, and
conclusions drawn. Finally, these results are reviewed to ensure the testing procedures have been
adequately performed and have undergone review by a qualified individual.
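The validation step can itself be partly automated. The sketch below, which assumes a CSV extract with account, amount and posting_date columns and a control total taken from the system report, illustrates reconciling detail records to the control total, checking the period against the audit scope, and verifying key fields.

# Sketch of the data validation step described above: detail records are reconciled
# to a report control total, the period is checked against the audit scope, and key
# fields are type-checked. Column names and the control total are assumptions.
import pandas as pd

CONTROL_TOTAL = 1_254_300.25                   # total per the system control report
SCOPE_START, SCOPE_END = "2023-01-01", "2023-12-31"

data = pd.read_csv("gl_detail.csv", parse_dates=["posting_date"])

checks = {
    "reconciles_to_control_total": abs(data["amount"].sum() - CONTROL_TOTAL) < 0.01,
    "within_audit_period": data["posting_date"].between(SCOPE_START, SCOPE_END).all(),
    "no_missing_amounts": data["amount"].notna().all(),
    "amounts_are_numeric": pd.api.types.is_numeric_dtype(data["amount"]),
    "all_required_fields_present": {"account", "amount", "posting_date"} <= set(data.columns),
}

for name, passed in checks.items():
    print(f"{name}: {'OK' if passed else 'EXCEPTION - investigate before testing'}")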


Chapter 11
Ethical and Legal Issues in Information Technology

11.1 Patents, Trademark and Copyright
Patents
A patent is a form of legal protection granted by the United States Patent and Trademark Office (USPTO)
that safeguards an original invention for a specific period. Patents are typically classified into three
categories: utility patents, plant patents, and design patents.
A utility patent, also known as a "patent for invention," pertains to the invention of a new or improved
product, process, or machine. It provides exclusive rights to the patent holder, prohibiting other
individuals or companies from manufacturing, using, or selling the invention without the holder's consent.
Utility patents offer protection for up to 20 years from the date of filing the patent application, although
they necessitate the payment of periodic maintenance fees. Importantly, utility patents can extend beyond
physical inventions to cover software products, business processes, and chemical formulations, including
pharmaceutical drugs.
Plant patents protect new plant varieties capable of asexual reproduction. Like utility patents, plant patents
provide 20 years of protection from the date of application, but uniquely pertain to the plant realm,
reflecting the innovative work of horticulturists and botanists.
Design patents, in contrast, cover the unique aesthetic elements of a manufactured item. For instance, an
automobile featuring a distinctive hood or headlight shape could be protected under a design patent,
preventing competitors from replicating these design aspects without facing legal repercussions. Design
patents offer a shorter duration of protection, lasting only 14 years from the date the patent is granted, and
unlike utility and plant patents, do not require the payment of maintenance fees.
Trademarks
Unlike patents, which protect unique inventions, a trademark provides protection for words and design
elements that distinguish the source of goods or services. Examples of trademarks include brand names
and corporate logos that identify and distinguish a company's goods in the market. Similarly, a service
mark protects the provider of a service rather than a physical good, although the term "trademark" is
often used colloquially to refer to both.
Instances of trademark infringement can be quite clear-cut. For example, bottling a beverage and
labeling it as Coca-Cola, or using the wave pattern from its logo, would constitute trademark
infringement as these elements have been legally protected for many years. However, trademark
protection extends beyond these overt cases. It also prohibits the use of any marks that bear a "likelihood
of confusion" with an existing trademark. As such, a business cannot use a symbol or brand name that
is visually similar, phonetically similar, or has a similar meaning to an existing trademark, particularly
if the goods or services they represent are related.
In cases where the holder of a trademark believes their rights have been infringed upon, they may opt to
initiate legal proceedings. It's important for businesses to thoroughly research potential trademarks to
ensure they are not inadvertently infringing upon existing marks, as this can lead to legal complications
and potential financial penalties.
Infringing on a registered trademark can result in legal action and potential damages. Common types of trademarks include:

• Word Marks: These are trademarks consisting of words, letters, or numbers, such as brand names or slogans.
• Logo Marks: Logo marks are graphical representations, symbols, or designs used to identify a brand or company.
• Combination Marks: Combination marks include both word elements and design elements, combining text and logos to create a distinctive brand identity.
• Service Marks: Service marks specifically identify and distinguish services provided by an organization rather than physical goods.
• Certification Marks: Certification marks are used to indicate that goods or services meet specific standards or quality requirements set by a certifying organization.
• Collective Marks: Collective marks are used by members of an organization, cooperative, or group to identify their goods or services.
Copyrights
Copyrights provide legal protection for "works of authorship," encompassing a diverse range of creative
outputs such as literary pieces, art, architectural designs, and music. As long as the copyright is active,
the copyright owner possesses the exclusive right to display, share, perform, or license the material.
However, the "fair use" doctrine serves as an exception, allowing limited distribution of copyrighted
material for scholarly, educational, or news reporting purposes.
While technically it's not necessary to file for a copyright to ensure protection of a work - since copyright
protection is granted automatically once the work is translated into a tangible form like a book or a
compact disc - officially registering with the U.S. Copyright Office provides a clear record of copyright
ownership. This can greatly facilitate the process of establishing original authorship in any potential legal
disputes.
The lifespan of a copyright varies, contingent on when the work was created. However, for most works
created since 1978, the copyright extends for 70 years following the author's death. After this period, the
work enters the public domain and can be reproduced by anyone without needing permission.
Copyright protection covers various types of creative works, including but not limited to:
• Literary Works: This category includes novels, poems, articles, manuscripts, computer code, and other written works.
• Artistic Works: Artistic works encompass paintings, sculptures, drawings, photographs, architecture, and other visual creations.
• Musical Works: Musical works include compositions, songs, and melodies.
• Dramatic Works: This category covers plays, scripts, screenplays, and choreographic works.
• Audiovisual Works: Audiovisual works include films, TV shows, documentaries, and other audiovisual content.
• Software: Computer software and programs are protected by copyright as literary works, specifically in the field of computer programming.
Typically, copyright privileges are retained by the author, even when the work is published by another
entity. An important exception exists for "works for hire," where materials created as part of job duties
are usually considered to be owned by the employer, not the individual creator. In such cases, it is

advisable to negotiate copyright ownership prior to creating the piece and to secure the agreement in
writing to avoid potential disputes.

11.2 Significance of IT Law:


The term information technology law (IT law) refers to an ever-growing field of law that focuses on those legal issues that arise from the emerging information society, which is driven by certain technologies like computers, wireless communications and the Internet. Topics encompassed within information technology law include: the protection of computer software and databases, information access and controls, privacy and security, Internet law, and electronic commerce.
Information technologies are extremely powerful tools. Therefore, the stakes of identifying the best
laws and policies for their use are very high. These stakes fall into three general categories:
1) economic, 2) social and personal, and 3) political.
Information technologies and information-based products and services are becoming central to the
economy as a whole. The new technologies and the information they embody can be used to improve
efficiency, increase productivity, and thus engender economic growth. Information is reusable, and unlike
capital resources, such as steel or iron, it can be produced and distributed using few physical resources.
Not only is information an efficient substitute for labor, it can also be used to improve the overall
efficiency of the production process. Businesses, for example, are now applying information technology
to almost all of their activities: from recruiting to laying off workers, from ordering raw materials to
manufacturing products, from analyzing markets to performing strategic planning, and from inventing
new technologies to designing applications for their use. To serve these needs, whole new industries have
been spawned.
One of the fastest growing sectors of the economy, the information industry is spearheading national and
international economic growth and enhancing every country's competitive position in the international
marketplace. The economic stakes raised by the new technologies are particularly high for the
copyright industries - publishing and other industries that rely on the legal protections provided by
copyright law. The amount of financial damage that these industries suffer due to infringements
of intellectual property rights is extremely hard to estimate.
Social and personal
Information continues to reign supreme in our daily lives, especially in the United States where a staggering
volume of data is exchanged daily through various channels. In the 1970s, it was estimated that the American
populace was subjected to around 8.7 trillion words each day via electronic platforms like radio, television,
and printed media such as newspapers, books, and magazines. This figure has seen a consistent rise annually;
the volume of words communicated increases at an average rate of 1.2% per year. Fast forward to today, and
the internet stands as the most rapidly growing medium for information exchange.
Information, regardless of its form, is paramount to all aspects of our existence. It forms the primary resource
that we depend upon to address our personal needs: managing daily hurdles, navigating life's traumas and
crises, preserving religious faith, family life, and cultural heritage, and catering to our recreational,
entertainment, and leisure pursuits. Never before in history have we been so thoroughly and promptly
informed about happenings on global, national, and local levels.

Considering the pivotal role of information, the public holds significant stakes in decisions regarding the
protection of and access to this information. Furthermore, the public has grand expectations concerning how
technology can fulfill its information requirements. This is particularly relevant in today's age of data privacy
concerns, internet censorship debates, and evolving AI-driven technologies like personalized
recommendation systems, automated news generation, and natural language processing tools that aim to
enhance our information consumption experience and knowledge base.
Political
In democratic societies, citizens must be well informed about issues, candidates for office, and local
affairs. Similarly, a democratic polity requires a well-informed citizenry.
Increasingly, information and communications technologies serve these information needs. The
government regularly needs huge amounts of information to make complex legal and policy
decisions. Many government agencies would find it impossible to conduct their daily business without
resorting to customized information on demand. The Internal Revenue Service and the Social
Security Administration, for example, require large automated information systems to handle
the accounts of hundreds of millions of clients. And the operation of national defense depends on
the use of complex communications systems both for day-to-day management of the military
establishment and for the command and control of sophisticated weaponry.
Citizens' groups and political parties are also relying more heavily on the new technologies to achieve
their aims. Technology, for example, is being used to target voters and potential supporters, communicate
with voters, manage information, and even to design campaign strategies. Computers are also being used
as lobbying tools.

11.3 Digital Signature and authentication of digitized information


Introduction
In the digital age, where information is increasingly transmitted and stored electronically, ensuring the
integrity, authenticity, and confidentiality of digital data is crucial. Digital signatures and authentication
mechanisms play a vital role in verifying the identity of the sender, ensuring data integrity, and establishing
trust in electronic communications and transactions.
A digital signature is a cryptographic technique that provides a way to electronically sign a document or
message, proving that it originated from a specific sender and has not been tampered with during
transmission. It is a digital equivalent of a handwritten signature, but with added security and reliability.
Digital signatures use public key cryptography, where the signer uses their private key to create a unique
digital signature, which can then be verified using the corresponding public key. The digital signature
binds the signer's identity to the document, providing evidence of authenticity and integrity.
The authentication of digitized information involves verifying the identity of individuals or entities
involved in electronic transactions or communications. Authentication mechanisms aim to ensure that only
authorized individuals or systems can access or modify sensitive information. Various methods of
authentication are employed, including passwords, biometrics (such as fingerprints or facial recognition),
hardware tokens, and digital certificates.

Digital certificates, also known as public key certificates, are used in authentication processes. These
certificates are issued by trusted certification authorities and contain information about the identity of the
certificate holder, such as their name and public key. Digital certificates help establish trust and
authenticity by binding the identity of an individual or organization to their public key. When a digital
certificate is used for authentication, the recipient can verify the authenticity of the certificate and the
identity of the sender by validating the digital signature attached to the certificate.
By employing digital signatures and authentication mechanisms, organizations and individuals can ensure
the integrity and authenticity of digitized information. These technologies provide a means to establish
trust, prevent unauthorized access or tampering, and enable secure electronic transactions. They are
essential components in safeguarding the confidentiality and reliability of digital communications,
protecting against fraud, and promoting secure interactions in the digital realm.
One of the major challenges facing consultants today is maintaining a level of knowledge of leading and
emerging technologies, beyond the superficial or buzzword level. We need to develop a level of
understanding that allows us to communicate effectively with both suppliers and customers. We then can
demonstrate:
• Our knowledge of the business issues being addressed;
• The application of technology to providing solutions;
• The business benefits for the customer;
• The limitations that will remain and that will need to be mitigated in other ways.
Public-key cryptography has been with us for some time now, and a substantial amount of intriguing work
has been accomplished by various committees, including IETF/PKIX and PKCS1, to establish related
standards and techniques. Yet, how familiar are we with what's inside? Do we truly comprehend its
workings? It's about time we lifted the hood, scrutinized the engine, and delved into the specifics of how
public-key encryption and digital signatures operate. Does it still hold relevance today? Undeniably, yes.
Asymmetric cryptography, or public-key cryptography, continues to be an integral component of
numerous security protocols and systems. This includes secure email exchanges (PGP, S/MIME), secure
shell (SSH) for remote server access, and secure internet browsing (HTTPS, SSL/TLS protocols). These
technologies hinge upon the fundamental mechanics of public-key cryptography, underscoring its
enduring relevance and importance in our increasingly digital world.
Public-key, what it is
Public-key refers to a cryptographic mechanism. It has been named public-key to differentiate it from the traditional and more intuitive cryptographic mechanism known variously as symmetric-key, shared-secret or secret-key cryptography (and also sometimes called private-key cryptography).
Symmetric-key cryptography is a mechanism by which the same key is used for both encrypting and
decrypting; it is more intuitive because of its similarity with what you expect to use for locking and
unlocking a door: the same key. This characteristic requires sophisticated mechanisms to securely
distribute the secret-key to both parties.

Public-key on the other hand, introduces another concept involving key pairs: one for encrypting,
the other for decrypting. This concept, as you will see below, is very clever and attractive, and provides
a great deal of advantages over symmetric-key:
• Simplified key distribution
• Digital Signature
• Long-term encryption
Nonetheless, it's crucial to note that symmetric-key cryptography continues to hold significant relevance
in the functioning of a Public-key Infrastructure (PKI), underscoring the enduring importance of both
systems in modern cryptography.
A definition
Public-key cryptography refers to a cryptographic method that utilizes an asymmetric key pair consisting
of a public key and a private key. In this approach, the public key is widely and freely distributed, while
the private key remains confidential and must be kept secret. Public-key encryption employs this key pair
for the purpose of encrypting and decrypting data.
The public key is used for encryption, allowing anyone to encrypt data using the public key. However, the
corresponding private key is required to decrypt the encrypted data. On the other hand, the private key is
used for digital signature generation, enabling the owner to sign digital documents or messages. The
resulting digital signature can be verified by anyone who possesses the corresponding public key. This
process ensures the authenticity and integrity of the signed content.
The distinctive characteristic of public-key cryptography is the asymmetric nature of the key pair. Data
encrypted with the public key can only be decrypted using the corresponding private key, and vice versa.
This property enables secure communication, data encryption, and digital signature verification.
By utilizing public-key cryptography, individuals and organizations can securely transmit confidential
information, authenticate digital documents, and protect sensitive data from unauthorized access. The
public-key approach revolutionized cryptographic practices by providing a practical solution for secure
communication and data protection in various applications.
Encryption and Decryption
Encryption and decryption are cryptographic processes used to protect the confidentiality and integrity of
data during transmission or storage. Encryption transforms plain or clear data (referred to as plaintext) into
an encoded form (referred to as ciphertext) using an encryption algorithm and a secret key. Decryption,
on the other hand, reverses the encryption process, converting ciphertext back into its original plaintext
using a decryption algorithm and the corresponding secret key.
Encryption is the process of converting plaintext into ciphertext using an encryption algorithm and a secret
key. The encryption algorithm takes the plaintext and transforms it according to a specific mathematical
formula or algorithm, making it incomprehensible to unauthorized individuals who do not possess the key.
The resulting ciphertext appears as a scrambled and unreadable form of the original data. The encryption
key is a crucial component as it determines the transformation of the plaintext and is required for successful
decryption.

Decryption:
Decryption is the process of converting ciphertext back into plaintext using a decryption algorithm and
the corresponding secret key. The decryption algorithm applies a reverse transformation to the ciphertext,
utilizing the decryption key to restore the original plaintext. Only individuals possessing the correct key
can decrypt the ciphertext and retrieve the original data.
Key-Based Encryption:
Encryption and decryption processes rely on the use of keys. Symmetric-key encryption uses a single
shared key, known as the secret key or private key, for both encryption and decryption. The same key is
used by both the sender and the recipient to secure and access the data. In contrast, asymmetric-key
encryption (also known as public-key encryption) uses a pair of mathematically related keys: a public key
and a private key. The public key is freely distributed, while the private key is kept secret. The public key
is used for encryption, and the private key is used for decryption. This approach enables secure
communication and data exchange without the need to share a secret key.
Encryption is a mechanism by which a message is transformed so that only the sender and
recipient can read it. For instance, suppose that Alice wants to send a private message to Bob. To do so,
she first needs Bob's public-key; since everybody can see his public-key, Bob can send it over the
network in the clear without any concerns. Once Alice has Bob's public-key, she encrypts the
message using Bob's public-key and sends it to Bob. Bob receives Alice's message and, using his private-
key, decrypts it.
Encryption and decryption techniques are employed in various areas, such as secure communication, data
protection, digital signatures, and secure storage. They are commonly used in technologies like secure
messaging, virtual private networks (VPNs), secure online transactions (e-commerce), and secure file
storage to safeguard sensitive information from unauthorized access or tampering. Encryption ensures
that data remains confidential, even if intercepted, and provides a means to verify the authenticity and
integrity of the data.
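The exchange between Alice and Bob can be demonstrated with a few lines of Python using the widely used cryptography package. The key size and OAEP padding shown are conventional choices made for illustration, not requirements of the principle itself.

# Minimal sketch of the Alice-and-Bob exchange using the Python "cryptography"
# package: Bob's public key encrypts, and only Bob's private key can decrypt.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Bob generates his key pair and shares only the public key
bob_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public_key = bob_private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Alice encrypts with Bob's public key ...
ciphertext = bob_public_key.encrypt(b"Meet at 10:00 - Alice", oaep)

# ... and only Bob's private key can recover the plaintext
plaintext = bob_private_key.decrypt(ciphertext, oaep)
print(plaintext.decode())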

11.4 Digital Signature and Verification


Digital signature is a mechanism by which a message is authenticated i.e. proving that a message is
effectively coming from a given sender, much like a signature on a paper document. For instance, suppose

that Alice wants to digitally sign a message to Bob. To do so, she uses her private-key to encrypt the
message; she then sends the message along with her public-key (typically, the public key is attached to
the signed message). Since Alice's public-key is the only key that can decrypt that message, a successful
decryption constitutes a Digital Signature Verification, meaning that there is no doubt that it is Alice's
private key that encrypted the message.
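A minimal sketch of the signing and verification idea follows, again assuming the third-party 'cryptography' package. The PSS padding and SHA-256 digest are illustrative choices; the essential point is that Alice's private key produces the signature and her public key verifies it.

# A minimal sketch of digital signing and verification, assuming the
# third-party 'cryptography' package; PSS padding and SHA-256 are
# illustrative choices.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

alice_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
alice_public = alice_private.public_key()

message = b"Payment order no. 42"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

signature = alice_private.sign(message, pss, hashes.SHA256())   # signed with the private key

try:
    alice_public.verify(signature, message, pss, hashes.SHA256())
    print("Signature verified: the message comes from Alice and is unaltered")
except InvalidSignature:
    print("Verification failed: reject the message")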

Beyond the principles


The two previous paragraphs illustrate the encryption/decryption and signature/verification principles.
Encryption and digital signature can be combined to provide both privacy and authentication. This means
that a message can be encrypted to ensure its confidentiality while also being digitally signed to verify its
authenticity. By using both techniques together, the recipient can decrypt the message using their private
key and then verify the digital signature to confirm the sender's identity.
Role of Symmetric-Key in Public-Key Encryption:
Symmetric-key algorithms play a significant role in public-key encryption implementations. This is
because asymmetric-key encryption algorithms, which are used for public-key encryption, tend to be
slower compared to symmetric-key algorithms. To address this, public-key encryption often involves
using symmetric-key encryption algorithms to encrypt the actual message, and the symmetric key itself is
then encrypted using the recipient's public key. This hybrid approach combines the speed of symmetric-
key encryption with the security of asymmetric-key encryption.
For Digital signature, another technique used is called hashing. Hashing produces a message digest that
is a small and unique representation (a bit like a sophisticated checksum) of the complete message.
Hashing algorithms are one-way, i.e. it is impossible to derive the message from the digest.
The main reasons for producing a message digest are (a short sketch follows this list):
1 The integrity of the message being sent is preserved; any alteration of the message will immediately be detected;
2 The digital signature will be applied to the digest, which is usually considerably smaller than the
message itself;
3 Hashing algorithms are much faster than any encryption algorithm (asymmetric or
symmetric).
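A short sketch of message digesting with the Python standard library (hashlib) illustrates these points: the digest is a small, fixed-size fingerprint of the whole message, and any alteration changes it completely. The messages are illustrative.

# A minimal sketch of message digesting with SHA-256, using only the
# Python standard library; the messages are illustrative.
import hashlib

message = b"Transfer NPR 10,000 to account 123"
digest = hashlib.sha256(message).hexdigest()
print(digest)   # a small, fixed-size fingerprint of the whole message

# Any alteration, however small, yields a completely different digest.
tampered = b"Transfer NPR 90,000 to account 123"
print(hashlib.sha256(tampered).hexdigest() == digest)   # False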

The following sections explain what really happens when encrypting and signing a message on one hand,
and when decrypting a message and verifying its signature on the other hand.
Steps for signing and encrypting a message
Figure 3 below shows the set of operations required when Alice wants to send a signed and encrypted
message to Bob.

1) Message signature. Digital signature includes two steps:


a) Message digest evaluation. The main purpose for evaluating a digest is to ensure that the message
is kept unaltered; this is called message integrity.
b) Digest signature. A signature is in fact an encryption using the issuer's (Alice in this case)
private-key. Included in the signature is also the hashing algorithm name used by the issuer.
The issuer's public-key is also appended to the signature. Doing so lets anyone decrypt
and verify the signature using the issuer's public-key and hashing algorithm. Given the properties
of public-key encryption and hashing algorithms, the recipient has proof that:
i) The issuer's private-key has encrypted the digest;
ii) The message is protected against any alteration.
2) Message encryption. Encryption includes the following 3 steps:
a) Creation of a one time symmetric encryption/decryption key. Remember that encryption and
decryption algorithms using asymmetric-keys are too slow to be used for long messages;
symmetric-key algorithms are very efficient and are therefore used.
b) Message encryption. The whole message (the message itself and the signature) is encrypted
using SymK, the symmetric-key evaluated above.

c) Symmetric-key encryption. SymK is also used by the recipient to decrypt the message.
SymK must therefore be available to the recipient (Bob) only. The way to hide SymK
from everybody except the recipient is to encrypt it using the recipient's public-key. Since SymK
is a small piece of information compared to a message (that could be very long), the performance
penalty associated with the relative inefficiency of asymmetric-key algorithms is acceptable.
One interesting point to mention is that if Alice wants to send the same message to more than one
recipient, say Bob and John for instance, the only additional operation to be performed is to repeat 'step
2) c)' for John. Hence, the message that both Bob and John would receive would look like:
[Message+[Digest]PrKA+PuKA]SymK+[SymK]PuKB+[SymK]PuKJ. Notice that the exact same SymK
will be used by Bob and John to decrypt the message.
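The multi-recipient pattern just described can be sketched as follows, assuming the third-party 'cryptography' package. One one-time symmetric key (SymK, here a Fernet key) encrypts the message once, and SymK is then wrapped separately with Bob's and John's public keys; all names and algorithm choices are illustrative.

# A minimal sketch of the multi-recipient pattern, assuming the third-party
# 'cryptography' package: one one-time symmetric key (SymK) encrypts the
# message, and SymK is wrapped separately for Bob and John.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

bob_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
john_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)

sym_k = Fernet.generate_key()                          # one-time symmetric key
body = Fernet(sym_k).encrypt(b"signed message ...")    # message (and signature)

wrapped_for_bob = bob_priv.public_key().encrypt(sym_k, oaep)
wrapped_for_john = john_priv.public_key().encrypt(sym_k, oaep)

# Each recipient recovers the same SymK with his own private key.
assert bob_priv.decrypt(wrapped_for_bob, oaep) == sym_k
assert john_priv.decrypt(wrapped_for_john, oaep) == sym_k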
Steps for Decrypting and verifying the signature of a message
Figure below shows the set of operations required when Bob wants to decrypt and verify the message
sent by Alice.

1) Message decryption. The decryption includes the following steps:


a) Symmetric-key decryption. The one time symmetric-key has been used to encrypt the message.
This key (SymK) has been encrypted using the recipient's (Bob) public-key. Only Bob can
decrypt SymK and use it to decrypt the message.
b) Message decryption. The message (which includes the message itself and the signature)
is decrypted using SymK.

2) Signature verification. The signature verification includes the following 3 steps:
a) Message digest decryption. The digest has been encrypted using the issuer's (Alice) private- key.
The digest is now decrypted using the issuer's public-key included in the message.
b) Digest evaluation. Since hashing is a one-way process i.e. the message cannot be derived
from the digest itself, the recipient must re-evaluate the digest using the exact same hashing
algorithm the issuer used.
c) Digests comparison. The digest decrypted in a) and the digest evaluated in b) are
compared. If there is a match, the signature has been verified, and the recipient can accept
the message as coming unaltered from the issuer. If there is a mismatch, this means that either:
i) The message has not been signed by the issuer, or
ii) The message has been altered.
In both cases, the message should be rejected. A short end-to-end sketch of these send and receive steps is given below.
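The following end-to-end sketch combines the send-side steps (sign, then encrypt) with the receive-side steps above (decrypt, then verify). It assumes the third-party 'cryptography' package; the fixed 256-byte signature length comes from the illustrative RSA-2048 choice, and the whole example is a simplified sketch, not a complete secure-messaging implementation.

# A minimal end-to-end sketch: Alice signs and encrypts, Bob decrypts and
# verifies. Assumes the third-party 'cryptography' package; choices are
# illustrative.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

alice_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Alice's side: sign, then encrypt
message = b"Quarterly report attached"
signature = alice_priv.sign(message, pss, hashes.SHA256())     # 1) digest + signature
sym_k = Fernet.generate_key()                                  # 2a) one-time SymK
envelope = Fernet(sym_k).encrypt(message + signature)          # 2b) encrypt message + signature
wrapped_key = bob_priv.public_key().encrypt(sym_k, oaep)       # 2c) wrap SymK for Bob

# Bob's side: decrypt, then verify
sym_k_recovered = bob_priv.decrypt(wrapped_key, oaep)          # 1a) unwrap SymK
plain = Fernet(sym_k_recovered).decrypt(envelope)              # 1b) decrypt
recv_msg, recv_sig = plain[:-256], plain[-256:]                # RSA-2048 signature is 256 bytes
alice_priv.public_key().verify(recv_sig, recv_msg, pss, hashes.SHA256())   # 2a-2c) verify
print("Decrypted and verified:", recv_msg.decode())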
Identity and keys
Until now, we have taken for granted the keys being used for encryption/decryption and digital
signature/verification belong to Bob and Alice. How can we be sure that Alice is really Alice? And, how
can Alice be sure that only Bob will see what she encrypted? So far, the only thing we know is that the
user of a given key pair has signed and encrypted the message.
But, is he really the owner? George, for instance, may have sent a message to Bob pretending that he
is Alice; Bob cannot tell whether or not it is Alice or George who is sending the message. The same
applies to Bob's public-key. This issue is solved by the use of certificates.
What is a Certificate
A certificate is a piece of information that proves the identity of a public-key's owner. Like a passport, a
certificate provides recognized proof of a person's (or entity) identity. Certificates are signed and delivered
securely by a trusted third party entity called a Certificate Authority (CA). As long as Bob and Alice trust
this third party, the CA, they can be assured that the keys belong to the persons they claim to be.
A certificate contains among other things:
1) The CA's identity
2) The owner's identity
3) The owner's public-key
4) The certificate expiry date
5) The CA's signature of that certificate
6) Other information that is beyond the scope of this discussion.
With a certificate instead of a public-key, a recipient can now verify a few things about the issuer to make
sure that the certificate is valid and belongs to the person claiming its ownership (a short sketch follows this list):
1) Compare the owner's identity
2) Verify that the certificate is still valid
3) Verify that the certificate has been signed by a trusted CA
4) Verify the issuer's certificate signature, hence making sure it has not been altered.
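The fields and checks listed above can be seen on a real certificate. The following sketch, assuming the third-party 'cryptography' package and internet access, downloads the TLS certificate of an example host and prints its subject, issuer and validity period; the host name is only an example, and the validity-period check shown is just one part of full certificate validation.

# A minimal sketch of reading certificate fields, assuming the third-party
# 'cryptography' package and internet access; the host name is illustrative.
import ssl
import datetime
from cryptography import x509

pem = ssl.get_server_certificate(("example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())

print("Owner (subject):", cert.subject.rfc4514_string())
print("CA (issuer):    ", cert.issuer.rfc4514_string())
print("Valid from:     ", cert.not_valid_before)
print("Valid until:    ", cert.not_valid_after)

# A basic validity-period check; full validation also verifies the CA's
# signature and checks revocation, as described in the text.
now = datetime.datetime.utcnow()
print("Within validity period:", cert.not_valid_before <= now <= cert.not_valid_after)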

Bob can now verify Alice's certificate and be assured that it is Alice's private-key that has been used to
sign the message. Alice must be careful with her private-key and must not disclose it, in order to preserve
the non-repudiation feature associated with her digital signature. As will be seen in the section on
non-repudiation support below, there is more to consider for effective non-repudiation support.
Note that certificates are signed by a CA, which means that they cannot be altered. In turn, the
CA signature can be verified using that CA's certificate.
Some recent developments in certificate management include:
1. Automation of certificate management: As the number of devices requiring encryption increases,
organizations are leaning towards automated certificate management systems to avoid the
complexity and potential security risks of manual management.
2. Short-lived certificates: To reduce the risk of certificate misuse, many organizations are moving
towards shorter certificate lifetimes. This trend has been accelerated by decisions of major certificate
authorities and browsers like Apple's Safari, which in 2020 began to reject new publicly trusted TLS
certificates valid for more than 398 days.
3. More reliance on public key infrastructure in IoT: As the Internet of Things (IoT) expands, the need
for secure communication between devices is more critical than ever. PKI, and by extension
certificates, are being increasingly used to establish trusted communication between IoT devices.
4. Continued concerns around certificate authorities: Trust in certificate authorities is a cornerstone of
PKI. However, incidents where CAs have been compromised or have issued certificates incorrectly
continue to occur, leading to discussions about the model's future and potential decentralization of
trust.
5. Increased use of wildcard and multi-domain certificates: These types of certificates are being used
more frequently to reduce the complexity of managing multiple certificates. However, they come
with their own security risks, as the compromise of one certificate could potentially affect multiple
services or domains.
Certificate validation added to the process
When Alice encrypts a message for Bob, she uses Bob's certificate. Prior to using the public-key included
in Bob's certificate, some additional steps are performed to validate Bob's certificate:
1) Validity period of Bob's certificate
2) The certificate belongs to Bob
3) Bob's certificate has not been altered
4) Bob's certificate has been signed by a trusted CA
Additional steps would be required to validate the CA's certificate in the case where Alice does not trust
Bob's CA. These steps are identical to the ones required to validate Bob's certificate. In the example below,
it is assumed that both Bob and Alice trust that CA.


In the Figure 5 above, a Certificate validation step is added to what is shown in Figure 3. Only the
fields required for the validation of a certificate are displayed.
Alice wants to make sure that the PuKB included in CertB belongs to Bob and is still valid.
• She checks the Id field and finds BobId, which is Bob's identity. In fact, the only thing she really
knows is that this certificate appears to belong to Bob.
• She then checks the validity fields and finds that the current date and time is within the validity
period. So far the certificate seems to belong to Bob and to be valid.
• The ultimate verification takes place by verifying CertB's signature using the CA's public-key
(PuKCA found in CertCA). If CertB's signature is ok, this means that:
a) Bob's certificate has been signed by the CA in which Alice and Bob have put all their trust.
b) Bob's certificate integrity is proven and has not been altered in any way.
c) Bob's identity is assured and the public-key included in the certificate is still valid and belongs
to Bob. Therefore, Alice can encrypt the message and be assured that only Bob will be able to
read it.
Similar steps will be performed by Bob on Alice's certificate before verifying Alice's signature.
Beyond the mechanics
So far, this chapter has covered in some detail the public-key mechanics associated with encryption and digital
signature. The notion of a Certificate Authority was introduced above. The CA is the heart of
a Public-Key Infrastructure (PKI).

What is a PKI
A PKI is a combination of software and procedures providing a means for managing keys and certificates,
and using them efficiently. Just recall the complexity of the operations described earlier to appreciate the
absolute necessity of providing users with appropriate software support for encryption and digital
signature. But nothing has been said yet about management.
Key and certificate management
Key and certificate management is the set of operations required to create and maintain keys and
certificates. The following is the list of the major points being addressed in a managed PKI:
1) Key and certificate creation: How to generate key pairs? How to issue certificates to the users?
A PKI must offer software support for key pair generation as well as certificate requests (a brief
sketch illustrating this appears after this list). In addition, procedures must be put in place to verify
the user identity prior to allowing him to request a certificate.
2) Private-key protection: How will the user protect his private-key against misuse by other
malicious users? Certificates are widely accessible because they are used for either encryption or
signature verification. Private-keys require some reasonable level of protection because they are used
either for decryption or for digital signature. A strong password mechanism must be part of the
features of an effective PKI.
3) Certificate revocation: How to handle the situation where a user's private-key has been
compromised? Similarly, how to handle the situation where an employee leaves the company?
How to know whether or not a certificate has been revoked?
A PKI must provide a means by which a certificate can be revoked. Once revoked, this certificate
must be included in a revocation list that is available to all users. A mechanism must be provided to
verify that revocation list and refuse to use a revoked certificate.
4) Key backup and recovery: What happens to encrypted files when a user loses his private-key?
Without key backup, all messages and files that have been encrypted with his public-key can no
longer be decrypted and are lost forever. A PKI must offer private-key backup and a private-key
recovery mechanism such that the user can get back his private-key to be able to get access to his
files11.
5) Key and certificate update: What happens when a certificate reaches or is near its expiry date?
Keys and certificates have a finite lifetime. A PKI must offer a mechanism to at least update the
expiry date for that certificate. Good practice though is to update the user's keys and certificates. The
key and certificate update can be automatic, in which case the end user gets notified that his keys
have been updated, or can require that the user performs an action during or before his keys and
certificates expire; in this case, the PKI must inform the user that this action is required prior to the
expiry time of his keys and certificates.
6) Key history management: After several key updates, how will a user decide which private- key to use
to decrypt files?
Each key update operation generates new key pairs. Files that have been encrypted with previous public-
keys can only be decrypted with their associated private-keys. Without key history management, the user
would have to decide, for each file, which private-key to use to decrypt it.

7) Certificate access: How will a user, who wants to send a message to several recipients, get their
certificates?
A PKI must offer an easy and convenient way to make these certificates available. The use of an
LDAP directory is commonly used for that purpose.
However, some recent developments have been made to address these issues:
8) Use of Hardware Security Modules (HSMs): The use of HSMs to store and manage keys has become
more common due to their enhanced security. HSMs provide tamper-proof storage of private keys,
thus offering superior protection against key theft or compromise.
9) Automation: Increasingly, organizations are seeking to automate as much of the PKI management
process as possible. Automation reduces the chance of human error and can also help ensure that
certificate renewals, revocations, and other necessary actions happen in a timely manner.
10) Integration with cloud services: As more organizations move their infrastructure to the cloud, the
need for PKIs that can integrate seamlessly with cloud services has grown. This includes the need
for certificate management capabilities that work across multiple cloud providers.
11) Quantum-Safe Cryptography: As the development of quantum computing advances, concerns about
their potential to crack current cryptographic methods have risen. As a result, the field of quantum-
safe or post-quantum cryptography has grown, exploring new algorithms that could resist quantum
computer attacks. Future PKIs might need to incorporate these quantum-safe cryptographic methods
to ensure their resilience.
12) Expanded use of PKI in IoT: With the rise of the Internet of Things (IoT), PKI is being used more
and more to authenticate and secure communications between a multitude of devices. This has led to
additional requirements and complexities in terms of scalability and management of PKI.
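A minimal sketch of key and certificate creation (point 1 in the list above) follows, assuming the third-party 'cryptography' package. A hypothetical in-house CA key signs a one-year certificate for a user named "Alice"; the names, key sizes and lifetime are illustrative only.

# A minimal sketch of key pair generation and certificate issuance, assuming
# the third-party 'cryptography' package; names and lifetime are illustrative.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example CA")])
user_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Alice")])

now = datetime.datetime.utcnow()
cert = (
    x509.CertificateBuilder()
    .subject_name(user_name)                               # the owner's identity
    .issuer_name(ca_name)                                  # the CA's identity
    .public_key(user_key.public_key())                     # the owner's public key
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))   # the expiry date
    .sign(ca_key, hashes.SHA256())                         # the CA's signature
)
print(cert.subject.rfc4514_string(), "valid until", cert.not_valid_after)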
Support for non-repudiation of digital signature
One important point that has to be clarified is non-repudiation of digital signature. This notion refers to the
fact that a user cannot deny having signed a given message. This implies that the user who signed the message
is the only one who has access to the private-key used for signing. However, as we have seen above, in a
managed PKI, private-keys are kept by the CA for key recovery purposes. Therefore, both the user and the
CA know that private-key, which means that both can (in theory) use that key for signing a message. A user
can then deny having signed that message.
To address this and provide non-repudiation support, a second key pair is used exclusively for signature
and verification purposes. The private key used for signing is not backed up, and only the user has access
to it. If the user loses their password, they also lose their signing key. During key recovery, the
encryption/decryption key pair is restored to the user, and a new signature/verification key pair is
generated. This does not pose a problem because each time a user signs a document, the corresponding
verification certificate is appended to it. This ensures that the document's signature can always be verified,
regardless of any key recovery actions taken.
11.5 Introduction to Digital Data Exchange and digital reporting standard-XML
and XBRL
Digital Data Exchange:
Digital Data Exchange (DDEX) is an organization that focuses on setting standards for the exchange of
metadata in the digital content value chain, also known as the digital supply chain. The primary objective

of DDEX is to design standardized XML message formats and develop common protocols for the
automated communication and management of these messages.
DDEX was established in 2006 with the goal of promoting interoperability and efficiency in the digital
content industry. By providing standardized XML message formats, DDEX aims to facilitate the seamless
exchange of metadata between different entities involved in the digital supply chain, such as content
creators, distributors, retailers, and service providers.
XML Standards
DDEX has developed a series of XML-based standards for the communication of metadata between
record companies, music rights societies and online retailers. These are:
Electronic Release Notification Message Suite Standard: This standard enables record labels to
communicate metadata about music releases to online retailers. It includes information such as artist
names, album names, track names, release dates, and commercial terms associated with the releases. By
using this standard, record labels can provide accurate and comprehensive information to online retailers,
facilitating the distribution and availability of music content.
Digital Sales Reporting Message Suite Standard: This standard facilitates the communication of sales
information from online retailers to record companies and collection societies. Online retailers can use this
standard to report sales data, including details of purchased music tracks or albums, to the relevant record
companies and collection societies. This standard streamlines the sales reporting process, ensuring
accurate and timely reporting of sales data.
Musical Work Licensing Message Suite Standard: This standard allows record labels and online retailers
to obtain licenses for the use of musical works. Typically, these licenses are obtained from music rights
societies. The standard enables the communication of licensing requests and relevant information between
the parties involved. By using this standard, record labels and online retailers can efficiently manage the
licensing process and ensure compliance with copyright regulations.
These XML-based standards developed by DDEX enhance the efficiency, accuracy, and consistency of
metadata communication in the digital content industry. They facilitate seamless information exchange
between different entities, ensuring that accurate and comprehensive metadata is available to support the
distribution, sales reporting, and licensing of music content.
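To illustrate the general idea of exchanging metadata as XML messages, the following Python sketch builds and parses a simplified release-metadata document using only the standard library. The element names are hypothetical and deliberately simplified; they do not reproduce the actual DDEX ERN schema.

# A minimal sketch of building and parsing release metadata as XML, using
# only the Python standard library; element names are hypothetical.
import xml.etree.ElementTree as ET

release = ET.Element("Release")
ET.SubElement(release, "Artist").text = "Example Artist"
ET.SubElement(release, "AlbumTitle").text = "Example Album"
ET.SubElement(release, "ReleaseDate").text = "2024-01-15"
ET.SubElement(release, "Track", attrib={"number": "1"}).text = "Example Song"

xml_message = ET.tostring(release, encoding="unicode")
print(xml_message)

# The receiving system parses the same message and extracts the metadata.
parsed = ET.fromstring(xml_message)
print(parsed.findtext("Artist"), "-", parsed.findtext("AlbumTitle"))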
Communication Standards
In addition to the XML-based standards, DDEX is also developing communication protocols to facilitate
the exchange of XML messages. These protocols define the methods and technologies used for
transmitting the XML messages between different entities in the digital content industry. DDEX is
working on two primary communication standards:

Web Services: DDEX is developing message exchange protocols based on web services. Web services
allow for the exchange of XML messages over the internet using standardized protocols such as SOAP
(Simple Object Access Protocol) and HTTP (Hypertext Transfer Protocol). These protocols ensure secure
and reliable communication between systems, enabling the seamless transfer of XML messages containing
metadata and other relevant information.
FTP (File Transfer Protocol): DDEX is also developing message exchange protocols based on FTP. FTP
is a standard network protocol used for transferring files between computers. By utilizing FTP, DDEX
enables entities in the digital content industry to exchange XML messages in a secure and efficient manner.
FTP-based protocols provide a reliable means of transferring large XML files and ensure data integrity
during the transmission process.
These communication standards developed by DDEX ensure that the XML messages containing metadata
and other information can be transmitted effectively and securely between record companies, music rights
societies, and online retailers. Whether using web services or FTP, these protocols enable seamless
integration and interoperability among various systems, supporting efficient and standardized
communication within the digital content value chain.
What is XBRL?
XBRL, or eXtensible Business Reporting Language, is an international standard for digital business
reporting. It is managed by XBRL International, a global consortium dedicated to improving reporting
practices for the public interest. XBRL is widely used across more than 50 countries and has revolutionized
the way financial and business information is reported and shared.
In essence, XBRL provides a standardized language for defining reporting terms and concepts. These
defined terms are then used to represent the contents of financial statements, compliance reports,
performance reports, and other business-related documents. By using XBRL, reporting information can
be seamlessly exchanged between organizations in a rapid, accurate, and digital manner.
The transition from traditional paper-based or PDF/HTML reports to XBRL brings about a transformation
similar to the shift from film photography to digital photography or paper maps to digital maps. The XBRL
format enables users to perform all the functions they could with traditional reporting methods but also
introduces a range of new capabilities. The information in XBRL reports is clearly defined, platform-
independent, testable, and digital, allowing for simplified usage, sharing, analysis, and value addition to
the data.
By adopting XBRL, organizations can improve the efficiency, accuracy, and usefulness of their reporting
processes. It facilitates automation, standardization, and comparability of financial and business
information, benefiting regulators, investors, analysts, and other stakeholders in making informed
decisions and conducting comprehensive analyses.
What does XBRL do?
Often termed "bar codes for reporting", XBRL makes reporting more accurate and more efficient.
It allows unique tags to be associated with reported facts. The key functions of XBRL include:
 Standardized Tags: XBRL enables the association of unique tags with reported facts, ensuring that
the information in reports can be consumed and analyzed accurately. These tags provide a common
language for reporting data, allowing for consistent interpretation and understanding.

 Rule-Based Testing: Consumers of XBRL reports can test the data against a set of predefined
business and logical rules. This helps identify and rectify errors at the source, ensuring the accuracy
and integrity of the information.
 Flexible Usage: XBRL allows users to utilize the reported information according to their specific
needs. It supports multiple languages, alternative currencies, and various presentation styles, enabling
users to tailor the data to their preferences and requirements.
 Conformance to Definitions: With XBRL, consumers of information can have confidence that the
data provided adheres to sophisticated predefined definitions. Comprehensive definitions and accurate
data tags ensure that the information is consistent and reliable.
Comprehensive definitions and accurate data tags allow the:
• preparation
• validation
• publication
• exchange
• consumption; and
• analysis
of business information of all kinds. Information in reports prepared using the XBRL standard is
interchangeable between different information systems in entirely different organisations. This allows for
the exchange of business information across a reporting chain. People that want to report information,
share information, publish performance information and allow straight through information processing
all rely on XBRL.
In addition to allowing the exchange of summary business reports, like financial statements, and risk and
performance reports, XBRL has the capability to allow the tagging of transactions that can themselves be
aggregated into XBRL reports. These transactional capabilities allow system- independent exchange and
analysis of significant quantities of supporting data and can be the key to transforming reporting supply
chains.
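The idea of tagged, machine-readable facts can be illustrated with a deliberately simplified, XBRL-style instance document parsed with the Python standard library. The namespace, element names, context and amounts below are illustrative only and do not constitute a valid XBRL filing.

# A minimal sketch of reading tagged facts from a simplified, XBRL-style
# instance document; the taxonomy namespace and values are illustrative.
import xml.etree.ElementTree as ET

instance = """
<xbrl xmlns:ex="http://example.com/taxonomy">
  <ex:Revenue contextRef="FY2023" unitRef="NPR" decimals="0">5000000</ex:Revenue>
  <ex:NetProfit contextRef="FY2023" unitRef="NPR" decimals="0">750000</ex:NetProfit>
</xbrl>
"""

root = ET.fromstring(instance)
for fact in root:
    concept = fact.tag.split("}")[1]        # strip the namespace part of the tag
    print(concept, fact.attrib.get("contextRef"), fact.text)

Because each fact carries its concept name, reporting context and unit, software at the receiving end can validate and analyze the figures without re-keying them.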
11.6 Brief Description of COSO, COBIT, CMM, ITIL, ISO/IEC27001
COSO (Committee of Sponsoring Organizations of the Treadway Commission): COSO is a
framework that provides guidance on enterprise risk management, internal control, and fraud prevention.
It helps organizations establish effective internal control systems to achieve their objectives. The
components of the COSO framework are:
 Control Environment: Sets the tone for the organization regarding internal control.
 Risk Assessment: Identifies and analyzes risks relevant to the achievement of objectives.
 Control Activities: Establishes policies and procedures to mitigate identified risks.
 Information and Communication: Ensures relevant information is identified, captured, and
communicated.
 Monitoring Activities: Regularly assesses the effectiveness of internal control processes.
COBIT (Control Objectives for Information and Related Technologies): COBIT is a framework
developed by ISACA that provides guidance on IT governance and management. It helps organizations
align their IT activities with business objectives, ensure effective IT processes, and manage IT-related
risks. Some components of COBIT are:

 Framework: Provides an overall governance and management framework for IT.
 Process Domains: Defines key IT-related processes, such as planning, acquisition, delivery, and
support.
 Process Control Objectives: Identifies specific control objectives for each process domain.
 Management Guidelines: Offers guidance for implementing and managing IT governance and
control practices.
 Maturity Models: Assists in assessing and improving the maturity of IT processes.
CMM (Capability Maturity Model): CMM is a model that assesses and improves the maturity of an
organization's software development processes. It provides a framework to measure and enhance process
capabilities, with a focus on quality, productivity, and consistency. The maturity levels of CMM are:
 Initial: Processes are ad hoc and unpredictable.
 Repeatable: Basic project management processes are established.
 Defined: Processes are well-defined and documented.
 Managed: Processes are quantitatively controlled and monitored.
 Optimizing: Continuous process improvement is emphasized.
ITIL (Information Technology Infrastructure Library): ITIL is a set of best practices for IT service
management. It offers guidance on managing IT services, processes, and infrastructure to meet business
needs. ITIL helps organizations deliver high-quality IT services and improve customer satisfaction. Some
components of ITIL are given below:
 Service Strategy: Determines the organization's approach to delivering IT services.
 Service Design: Designs and develops IT services, processes, and supporting tools.
 Service Transition: Manages the transition of new or changed services into the operational
environment.
 Service Operation: Ensures the effective and efficient delivery of IT services.
 Continual Service Improvement: Focuses on ongoing improvement of IT service quality.
ISO/IEC 27001: ISO/IEC 27001 is an international standard for information security management
systems. It provides a systematic approach for organizations to establish, implement, monitor, and improve
their information security controls. Compliance with ISO/IEC 27001 demonstrates a commitment to
protecting sensitive information and managing security risks.
 Context Establishment: Defines the scope and objectives of the information security management
system.
 Leadership: Demonstrates management commitment to information security.
 Planning: Establishes risk assessment and treatment processes.
 Support: Provides resources and guidance for implementing and operating the system.
 Operation: Implements and controls the information security processes.
 Performance Evaluation: Monitors, measures, and evaluates the performance of the system.
 Improvement: Identifies areas for improvement and takes corrective actions.
These frameworks and standards play important roles in guiding organizations in areas such as risk
management, IT governance, process improvement, IT service management, and information security.
They provide valuable guidance and best practices to help organizations achieve their goals and ensure
effective and secure operations.
Frameworks such as the Control Objectives for Information and related Technology (CobiT) and the
Committee of Sponsoring Organizations of the Treadway Commission (COSO) framework aid
regulatory compliance, but don't provide actual risk management methodologies. Instead they include
some high-level goals for risk management as part of their overall scope. While CobiT helps a company
define risk goals at an operational level, COSO helps a company define organizational risks at a business
level.
Developed by the Information Systems Audit and Control Association and the IT Governance
Institute, CobiT is a framework that defines goals for the controls used to properly manage IT and ensure
that IT maps to business needs.
It is broken down into four domains: Plan and Organize, Acquire and Implement, Deliver and Support,
and Monitor and Evaluate. Each category drills down into subcategories. For example, the Acquire and
Implement section includes information on acquiring and maintaining application software and
managing changes.
Although CobiT is not a risk methodology, it does spell out the goals an organization should aim to
accomplish in its risk management processes. These goals are outlined in these subcategories: business
risk assessment; risk assessment approach; risk identification; risk measurement; risk action plan; risk
acceptance; safeguard selection; and risk assessment commitment.
While CobiT is a model for IT governance, COSO is a model for corporate governance. CobiT was
derived from the COSO framework, developed by the Committee of Sponsoring Organizations of the
Treadway Commission, which was formed in 1985 to deal with fraudulent financial activities and reporting.
COSO has these components:
• Control environment- Management's philosophy and operating style; the company culture as it
pertains to ethics and fraud
• Risk assessment- Establishment of risk objectives; the ability to manage internal and external
change
• Control activities-Policies, procedures, and practices put in place to mitigate risk
• Information and communication-A structure that ensures that the right people get the right
information at the right time
• Monitoring-Detecting and responding to control deficiencies
COSO focuses on the strategic level, while CobiT focuses more on the operational level. You can
think of CobiT as a way to meet many of the COSO objectives, but only from the IT perspective. Like
CobiT and COSO, ISO 17799 includes some high-level risk management guidance, but doesn't provide
an actual risk methodology.
ISO 17799 (since renumbered as ISO/IEC 27002) provides guidelines on how to set up a security program from A to Z. Where
COSO and CobiT call out requirements for various security structures and countermeasures, ISO 17799
provides the details on how to develop and implement these components.
The newest version of this framework includes the following categories: security policy, asset
management, physical and environmental security, communications and operations management, access
control, and information security incident management.
These categories are controls that need to be put into place to reduce risk. For a company to know
the right type and level of access control, incident management, and physical security, it must first
understand its current risk level and its acceptable risk level.
Risk management is a foundational piece of each component of ISO 17799, but the framework does not
specify what methodology an organization should use to accomplish it.

CHAPTER 12
Electronic Transaction Act 2063

12. Electronic Transaction Act 2063
12.1 Electronic Record and Digital Signature
Provisions Relating to Electronic Record and Digital Signature
Authenticity of Electronic Record:
(1) Any subscriber may, subject to the provisions of this section, authenticate to any electronic record
by his/her personal digital signature.
(2) While authenticating the electronic record pursuant to Sub- section (1), an act of
transforming such electronic record to other electronic record shall be effected by the use of
asymmetric crypto system and hash function.
Explanation: For the purpose of this section, "hash function" means the acts of mapping of
algorithm or translating of a sequence of bits into another, generally smaller, set yielding the same
hash result from any record in the same form while executing the algorithm each and every time
by using the same record as an input, infeasible to derive or reconstruct any record from the
hash result produced by the algorithm from the computation point of view, and making the two
records, which produce the same hash result by using the algorithm, computationally infeasible to
derive.
(3) Any person may verify the electronic record by using the public key of the subscriber.
Legal Recognition of Electronic Record: Where the prevailing law requires any information,
documents, records or any other matters to be kept in written or printed typewritten form, then, if such
information, document, record or the matter is maintained in an electronic form by fulfilling the
procedures as stipulated in this Act or the Rules made hereunder, such electronic record shall also have
legal validity.
Legal Recognition of Digital Signature: Where the prevailing law requires any information, document,
record or any other matters to be certified by affixing signature or any document to be signed by any
person; then, if such information, documents, records or matters are certified by the digital signature
after fulfilling the procedures as stipulated in this Act or the Rules made hereunder, such digital
signature shall also have legal validity.
Electronic Records to be Kept Safely: Where the prevailing law requires any information,
document or record to be kept safely for any specific period of time and if such information, document
or record are kept safely in an electronic form, by fulfilling the following conditions, such information,
document or record shall have legal validity if that is,-
(a) kept in an accessible condition making available for a subsequent reference,
(b) kept safely in the format that can be demonstrated subject to presenting again exactly in the same
format in which they were originally generated and transmitted or received or stored,
(c) kept making the details available by which the origin, destination and transmission or date and
time of receipt can be identified, Provided that the provision of this Clause shall not be applied in
regard to any information to be generated automatically for the purpose of transmitting or
receiving any record.
Electronic Record May Fulfill the Requirement of Submission of any Original Document: Where the
prevailing law requires that any record shall have to be submitted or retained in its main or original form
or kept safely, then, such requirement shall, if the following terms are fulfilled, be deemed to have been
satisfied by the electronic records:
(a) If there exists a ground as prescribed that can be believed that any type of change is not made in
such record by any means from the first time of its generation in electronic form,
(b) If such record is of the nature where there is a compulsion of submitting such
document to any person it could be clearly shown to such a person to whom it requires to do so.
Secured Electronic Records: If the verification has been made as prescribed in connection with the
matter as to whether or not any type of changes are made into the electronic records generated
with the application of security procedures as prescribed, such electronic records shall be deemed to be a
secured electronic records.
Secured Digital Signature: Where any digital signature made in electronic record has been examined in
a manner as prescribed with the application of such security procedure as prescribed, then, such
digital signature shall be deemed to be a secured digital signature.

12.2 Dispatch, Receipt and Acknowledgement of Electronic Records
Electronic Record to be Attributed to Originator:
(1) Any specific electronic record shall, in case of any of the following conditions, be attributed to the
originator:
(a) If such an electronic record was transmitted by the originator him/herself,
(b) If such an electronic record was transmitted by a person who had the authority to act on behalf
of the originator in respect of such an electronic record,
(c) Such an electronic record was transmitted through any information system that was
programmed by the originator or on behalf of the originator to operate automatically.
(2) If any condition exists as prescribed in respect of electronic record transmitted pursuant to Sub-
section (1), the addressee shall assume that such an electronic record is attributed to any particular
originator and shall have the authority to act thereon accordingly.

Procedure of Receipt and Acknowledgement of Electronic Record:


(1) Where the originator requests the addressee to transmit the acknowledgement or receipt of
electronic record at the time of or before the dispatch of such electronic record or where there
is an agreement between the originator and addressee to transmit the acknowledgement or receipt
of such an electronic record, then, the provisions of Sub-sections (2), (3) and (4) shall be applied
in relation to the receipt and acknowledgement of such an electronic record.

(2) Where there is no agreement between the originator and addressee that information or
acknowledgement of receipt of electronic record is to be given in a particular format or by a
particular manner, such an information or receipt may be given as the following:-
(a) by automated or any other means of communication by the addressee,
(b) by any conduct of the addressee sufficient to indicate that the originator has received
electronic record.
(3) Where the originator has stipulated in relation to any electronic record that such an electronic
record shall be binding on him/her only after the receipt of information or acknowledgement of
receipt of such electronic record from the addressee, then, unless the information or
acknowledgement of receipt of such an electronic record has been so received from addressee, the
electronic record shall not be deemed to have been transmitted by the originator.
(4) Where the originator has not stipulated that the electronic record shall be binding only on receipt
of such acknowledgement, and where the originator and addressee have not agreed upon or have
not specified any time for acknowledgement of such receipt of electronic record, then, the
originator shall have to receive such acknowledgement of receipt of such an electronic record from
addressee within a specified time as prescribed. If such acknowledgement of receipt is not received
from addressee, then, such an electronic record shall be deemed to have not been transmitted by the
originator.
(5) Other procedures of receipt of acknowledgement of electronic record shall be as
prescribed.
Time and Place of Dispatch and Receipt of Electronic Record:
(1) Save as otherwise agreed between the originator and the addressee, the dispatch of an
electronic record occurs when it enters into an information system outside the control of the
originator.
(2) Save as otherwise agreed between the originator and the addressee, the time of receipt of an
electronic record shall be determined as prescribed.
(3) Save as otherwise agreed between the originator and the addressee, an electronic record shall
be deemed to have been dispatched from the place where the originator has his/her place of
business and shall be deemed to have been received at the place where the addressee has his/her
place of business.
Explanation: For the purpose of this Sub-section "the place of business" means:
(a) In case the originator or the addressee has more than one place of business, the place of business
means the place where the concerned business shall be operated
(b) If the originator or the addressee does not have a place of business, their place of residence shall be
considered their place of business.

12.3 Provisions Relating to Controller and Certifying Authority
Appointment of the Controller and other Employees:
(1) Government of Nepal may, by notification in the Nepal Gazette, designate any Government officer
or appoint any person who has qualifications as prescribed in the office of the Controller.
(2) Government of Nepal may, in order to assist the Controller to perform his/her functions to be
performed under this Act, appoint or assign a Deputy Controller and other employees as required.
The employees so appointed or assigned shall perform their functions under the general direction
and control of the Controller.
Functions, Duties and Powers of the Controller:
The functions, duties and powers of the controller shall be as follows:-
(a) To issue a license to the certifying Authority,
(b) To exercise the supervision and monitoring over the activities of Certifying Authority,
(c) To fix the standards to be maintained by certifying authority in respect to the verification of
digital signature,
(d) To specify the conditions to be complied with by the certifying authority in operating his/her
business,
(e) To specify the format of the certificate and contents to be included therein,
(f) To specify the procedures to be followed by the certifying authority while conducting
his/her dealings with the subscribers,
(g) To maintain a record of information disclosed by the certifying authority under this act
and to make provision of computer database accessible to public and to update such database,
(h) To perform such other functions as prescribed.
License to be obtained: No person shall perform or cause to be performed the functions of a
certifying authority without obtaining a license under this Act.
Application to be submitted for a License:
(1) Any person willing to work as Certifying Authority by issuing a certificate under this Act and
who has the qualifications as prescribed shall have to submit an application to the controller in a
format as prescribed accompanied by a fee as prescribed for obtaining a license for the
certification.
(2) The applicant applying under Sub-section (1) shall also attach the following documents:
(a) Details regarding certification,
(b) Documents to prove the identification and verification of the applicant,
(c) Statements specifying the financial resources, human resources and other necessary facilities,
(d) Such other documents as prescribed.
The controller may, if he/she thinks necessary, ask the applicant to serve additional documents and
details in connection to examine the appropriation of the applicant as to perform the function of
Certifying Authority. If the necessary additional documents and details are so asked, no actions
shall be taken upon the application of the applicant unless he/she submits such documents and
details.
Other Functions and Duties of the Certifying Authority:
Other functions and duties of the certifying authority, other than those to issue a certificate, to
suspend or revoke it, shall be as prescribed.
Procedure for granting of a license:
(1) The Controller may, on receipt of an application under section 16, after considering the
qualification of applicant and also the documents and statements decide upon within a period of two
months of receipt of such application whether or not such a person possesses the financial, physical
and human resources, and other facilities as prescribed and whether or not a license should be issued
to such an applicant and a notice to that effect shall be given to him.
(2) While deciding upon the issuance of a license under Sub-section (1), the Controller may inspect
the facilities, financial and physical resources of the applicant.
(3) If the Controller decides to issue a license under Sub-section (1), a license in the prescribed format
shall be issued to the applicant specifying the period of validity of the license and also the terms and
conditions to be followed by him.
(4) Other procedures relating to the issuance of a license shall be as prescribed.
Renewal of License:
(1) A license obtained by a Certifying Authority shall have to be renewed each year,
(2) A Certifying Authority desirous to renew the license under Sub-section (1), shall have to submit
an application in the prescribed format to the Controller at least two months prior to the expiry
of the period of validity of such a license along with such renewal fee as prescribed,
(3) If an application is submitted for renewal, under Sub-section (2), the Controller shall have to decide
whether to renew the license or not, after completing the procedures as prescribed one month
prior to the expiry date of validity of such a license,
(4) While deciding to reject to renew a license, the applicant shall be given a reasonable
opportunity to present his/her statement in this regard.
License may be suspended:
(1) If the documents or statement and statement of financial and physical resources submitted by the
certifying authority before the Controller in order to obtain a license are found incorrect or false or
the conditions to be complied with in course of operation of business is not complied with or this
Act of the Rules framed hereunder are found to be violated, the Controller may suspend the license
of the certifying authority till the inquiry in this regard is completed. Provided that, Certifying
Authority shall be given the reasonable opportunity to present his/her defense prior to such
suspension of a license.
(2) Other procedures concerning suspension of license and other provisions related thereto be as
prescribed.

License may be revoked:
(1) If the controller believes, after completion of an inquiry in connection to any activity of Certifying
Authority, made duly, as prescribed, that any of the following circumstances have been occurred,
the Controller may revoke a license issued under this Act, at any time, as he deems to be
appropriate:
(a) If the Certifying Authority fails to comply with the liabilities under this act and the rules made
thereunder.
(b) If it is found that the Certifying Authority has submitted false or incorrect document or
statement at the time of submitting an application for obtaining a license or for its renewal, as
the case may be.
(c) If the Certifying Authority operates business in such a manner so that it shall make adverse
effect to the public interest or to the national economy,
(d) If the Certifying Authority commits any act that is defined as an offence under this Act or the
Rules framed hereunder.
(2) The Controller shall, prior to revocation of a license under Sub- section (1), provide a reasonable
opportunity to the Certifying Authority to present his/her defense.
(3) Other procedures concerning revocation of a license shall be as prescribed.
Notice of Suspension or revocation of a License:
(1) Where a license of any Certifying Authority is suspended or revoked under Section 20 or 21, as the
case may be the Controller shall give a written notice to the Certifying Authority of such suspension
or revocation, as the case may be, to such a certifying Authority and shall keep such a notice in his
computer database and also publish in the electronic form.
(2) The Controller shall publish the notice of suspension or revocation of a license at least in two
daily newspapers in Nepali and English languages for two times.
Provided that, there shall be no effect to any decision of suspension or revocation, as the case may
be, made by the Controller under Section 20 or 21, merely on the ground of non-publication of such a
notice.
Recognition to Foreign Certifying Authority may be given:
(1) The Controller may with the prior approval of Government of Nepal, and subject to such
conditions and restrictions as may be prescribed, by notification in the Nepal Gazette,
recognize any Certifying Authority who has obtained a license to certify under any foreign law.
Any foreign Certifying Authority so recognized may issue the certificates under this Act or the
Rules made thereunder throughout the Nepal.
(2) The procedures to be adopted in providing the recognition to a foreign Certifying Authority as
referred to in Sub-section (1), shall be as prescribed.

The Controller may issue Orders:
(1) The Controller may, in order to cause to fulfill the responsibilities in regard to issuance of a
certificate by the Certifying Authorities, issue directives, from time to time. It shall be a duty of the
Certifying Authority to comply with such directives.
The Controller may delegate power: The Controller may, in order to perform the function to be
performed by him/her delegate to any officer subordinate to him/her to exercise all or any of his/her
powers under this Act or the Rules framed thereunder.
The Controller may investigate:
(1) The Controller may, if he/she believes that this Act or the Rules framed hereunder are not complied
with by the Certifying Authority or by other concerned person, conduct him/herself or cause
any officer to conduct necessary investigation in that regard.
(2) It shall be a duty of Certifying Authority to assist the investigations, referred to in Sub- section
(1).
(3) The procedure to be followed by the Controller or any other officer in respect to
investigation referred to in Sub-section (1) shall be as prescribed.
Performance Audit of Certifying Authority:
(1) The Controller may conduct or cause to be conducted performance audit of the Certifying
Authority in each year.
(2) The Controller may, for the purpose of the performance audit referred to in Sub-section (1), appoint
any recognized auditor, who has expertise in computer security or any computer expert.
(3) The Controller shall publish the report of the performance audit in the electronic form made under
Sub-section (1) by maintaining in his/her computer database.
(4) The qualification of the performance auditor or remuneration and the procedures of such audit
shall be as prescribed.
(5) The Controller shall fix the standard of the service of Certifying Authority and publish a notice
thereof publicly for the information to the public-in-general.
The Controller to have the Access to Computers and data:
(1) The Controller shall, if there is a reasonable ground to suspect that provision of this Act and Rules
framed hereunder has been violated, have the power to have the access to any computer
system, apparatus, devices, data, information system or any other materials connected with such
system.
(2) The Controller may, for the purpose of Sub-section (1), issue necessary directives to the owner
of any computer system, apparatus, device, data, information system or any material connected
with such system or to any other responsible person to provide technical or other cooperation as
he/she deems necessary.
(3) It shall be the duty of the concerned person to comply with such directive issued under Sub- section
(2).
Record to be maintained:
(1) The Controller shall maintain records of all Certificates issued under this Act.
(2) The Controller shall, in order to ensure the privacy and security of the digital signatures, perform
following functions:
(a) To use Computer Security System,
(b) To apply security procedures to ensure the privacy and integrity of digital signature,
(c) To comply with the standard as prescribed.
(3) The Controller shall maintain and update a computerized database of all public keys in a computer
system.
(4) For the purpose of verification of Digital Signature, the Controller shall make available a public
key to any person requesting for such a key.
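Illustration (not part of the Act): Sub-section (4) above assumes that a relying party can verify a digital signature using the subscriber's public key obtained from the Controller. A minimal Python sketch, using the third-party "cryptography" library, shows what such a verification looks like in practice; the key names, sample record and the RSA/SHA-256 choices are illustrative assumptions only and are not prescribed by the Act.

    # pip install cryptography  (third-party library; assumed available)
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.exceptions import InvalidSignature

    # Illustrative stand-ins: the subscriber signs with a private key and the
    # relying party fetches the matching public key (for example from the
    # Controller's public-key database or from the certificate itself).
    subscriber_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    published_public_key = subscriber_private_key.public_key()

    electronic_record = b"Sample electronic record"
    signature = subscriber_private_key.sign(
        electronic_record, padding.PKCS1v15(), hashes.SHA256()
    )

    # Verification succeeds silently if the signature matches; otherwise it
    # raises InvalidSignature, meaning the record or signature was altered.
    try:
        published_public_key.verify(
            signature, electronic_record, padding.PKCS1v15(), hashes.SHA256()
        )
        print("Digital signature verified")
    except InvalidSignature:
        print("Digital signature is NOT valid")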
12.5 Provisions Relating to Digital Signature and Certificates
Certifying Authority may issue a Certificate: Only a licensed or recognized Certifying Authority
under this Act may issue a Digital Signature Certificate.
Apply to obtain a Certificate:
(1) Any person desirous to obtain Digital Signature Certificate may apply to the Certifying
Authority in such a format along with such fee and other statements as prescribed.
(2) On receipt of an application under Sub-section (1), the Certifying Authority shall have to decide
whether or not to issue a certificate to the applicant within one month of receipt of such
application.
(3) The Certifying Authority shall, if it decides to issue a certificate under Sub-section (2), issue a
Digital Signature Certificate within seven days, affixing its signature in the prescribed format
with the inclusion of such statements as prescribed, and if it decides to reject issuance of such a
certificate, the applicant shall be notified of the reasons for rejection within seven days.
Certificate may be suspended:
(1) Certifying Authority may suspend the Certificate in following circumstances:
(a) If the subscriber obtaining the certificate or any person authorized to act on behalf of such
a subscriber, requests to suspend the certificate.
(b) If it is found necessary to suspend the certificate that contravenes public interest as
prescribed.
(c) If it is found that significant loss might be caused to those persons who depend on the certificate
by the reason that provisions of this Act or the Rules framed hereunder were not followed at
the time of issuance of the certificate, and if the controller instructs to suspend the certificate
having specified the above ground.
(2) Grounds and procedures for suspension and release of the suspended certificates shall be as
prescribed.
Certificate may be revoked:
(1) The Controller or the Certifying Authority may revoke a Certificate in following conditions:
(a) Where the subscriber or any other person authorized by him requests to revoke a
certificate,
(b) If it is necessary to revoke in a certificate that contravenes the public interest as
prescribed,
(c) Upon the death of the subscriber,
(d) Upon the insolvency, winding up or dissolution of the company or corporate body under
the prevailing laws, where the subscriber is a company or a corporate body.
(e) If it is proved that a requirement for issuance of the Certificate was not satisfied,
(f) If a material fact represented in the certificate is proved to be false,
(g) If a key used to generate the key pair or the security system was compromised in a manner
that materially affects the Certificate's reliability.
(2) The procedures to be followed by the Controller or Certifying Authority with respect to
revocation of a Certificate shall be as prescribed.
Notice of Suspension or Revocation:
(1) Where a Certificate is suspended or revoked under sections 32 or 33, the Certifying
Authority or the Controller, as the case may be, shall publish a public notice thereof
maintaining its record in their repository.
(2) It shall be the responsibility of the Certifying Authority or the Controller, as the case may be, to
communicate to the subscribers as soon as possible the suspension or revocation of Certificates.
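Illustration (not part of the Act): the suspension and revocation provisions above effectively describe a certificate record whose status changes over time and whose public notices are kept in a repository. A minimal, purely hypothetical Python sketch of such a record follows; every field and method name is an assumption introduced only for illustration, not anything defined by the Act.

    from dataclasses import dataclass, field
    from datetime import datetime
    from enum import Enum

    class CertStatus(Enum):
        VALID = "valid"
        SUSPENDED = "suspended"
        REVOKED = "revoked"

    @dataclass
    class CertificateRecord:
        serial_no: str
        subscriber: str
        public_key_pem: str
        status: CertStatus = CertStatus.VALID
        notices: list = field(default_factory=list)  # public notices kept in the repository

        def suspend(self, ground: str) -> None:
            # A valid certificate may be suspended on the prescribed grounds.
            if self.status is CertStatus.VALID:
                self.status = CertStatus.SUSPENDED
                self.notices.append((datetime.utcnow(), "suspended", ground))

        def revoke(self, ground: str) -> None:
            # Revocation is terminal; a revoked certificate is never reinstated.
            if self.status is not CertStatus.REVOKED:
                self.status = CertStatus.REVOKED
                self.notices.append((datetime.utcnow(), "revoked", ground))

    record = CertificateRecord("CERT-001", "Example Subscriber", "<public key PEM>")
    record.suspend("request of the subscriber")
    print(record.status, record.notices)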
12.6 Functions, Duties and Rights of Subscriber
To Generate Key pair:
(1) Where any Certificate issued by the Certifying Authority and accepted by the subscriber contains a
public key which corresponds to the key pair to be listed in such Certificate, and if such key
pair is to be generated by the subscriber only, then the subscriber shall generate such key
pair by applying a secured asymmetric crypto system.
(2) Notwithstanding anything contained in Sub-section (1), if a Certifying Authority and the
subscriber have concluded an agreement or the Certifying Authority has accepted any specific
system regarding the security system to be used to generate the key pair, then, it shall be the
duty of subscriber to apply the security system as specified in agreement or accepted by the
Certifying Authority.
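Illustration (not part of the Act): generating a key pair "by applying the secured asymmetric crypto system", as required above, can be pictured with the following minimal Python sketch using the third-party "cryptography" library; RSA-2048 and PEM encoding are illustrative assumptions, not requirements of the Act or of any agreement with a Certifying Authority.

    # pip install cryptography  (third-party library; assumed available)
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Generate the key pair with an asymmetric crypto system
    # (RSA-2048 here is an illustrative choice only).
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # The public key is the half that would be listed in the Digital Signature
    # Certificate; PEM is one common interchange encoding.
    public_pem = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    print(public_pem.decode())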
To Accept a Certificate:
(1) The certificate shall be deemed to have been accepted by the subscriber in the following
conditions:
(a) If he publishes such a certificate or authorizes to publish to one or more persons, or
(b) If there exists any ground of his acceptance to such certificate which may cause to
believe it.
(2) If the certificate is accepted it shall be deemed that the subscriber, by that reason, has
guaranteed to all who reasonably rely on the information contained in the certificate that-
(a) The subscriber holds the private key corresponding to the public key and is entitled to hold the
same,
(b) All representations and information made by the subscriber to the Certifying Authority in
course of issuance of the certificate are true and correct and all facts relevant to the information
contained in the certificate are true, and
(c) All information mentioned in the certificate is, to the best knowledge of the subscriber, true
and correct.
To retain the private key in a secured manner:
(1) Every subscriber shall exercise reasonable care to retain control of the private key corresponding
to the public key listed in the Certificate and adopt all measures to prevent its disclosure to a person
not authorized to affix the digital signature of subscriber.
(2) If the private key has been disclosed or compromised by any reason whatsoever, then, the
subscriber shall communicate the same without any delay to the Certifying Authority and on receipt
of such information the Certifying Authority shall immediately suspend such a Certificate.
(3) If a certificate is suspended under this Act, it shall be a duty of the subscriber to retain the private
key under this section in a safe manner throughout the duration of such suspension of Certificate.
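Illustration (not part of the Act): one common way to exercise "reasonable care to retain control of the private key" is to keep it encrypted under a passphrase rather than in the clear. The following minimal Python sketch, using the third-party "cryptography" library, shows such encrypted storage and reload; the file name and passphrase are placeholder assumptions, not values prescribed anywhere.

    # pip install cryptography  (third-party library; assumed available)
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Serialize the private key encrypted under a passphrase so that it is not
    # stored in the clear on disk; the passphrase below is a placeholder.
    encrypted_pem = private_key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.BestAvailableEncryption(b"change-this-passphrase"),
    )
    with open("subscriber_key.pem", "wb") as f:
        f.write(encrypted_pem)

    # Reloading requires the same passphrase; without it the key stays unusable.
    with open("subscriber_key.pem", "rb") as f:
        reloaded = serialization.load_pem_private_key(
            f.read(), password=b"change-this-passphrase"
        )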
To Deposit the Private Key to the Controller:
(1) If the Controller thinks it necessary, in order to protect the sovereignty or integrity of Nepal, to
maintain friendly relations with friendly countries, to maintain law and order, to prevent the
commission of any offence under the prevailing laws, or in other conditions as prescribed, to
issue an order to any subscriber to deposit the private key with him/her, specifying the reason therefor,
such a subscriber shall immediately deposit the private key with the Controller.
(2) The controller shall not inform any unauthorized person about the private key deposited as per
sub section (1).
12.7 Electronic Record and Government use of Digital Signature
Government Documents may be published in electronic form:
(1) Government of Nepal may also publish ordinances, Acts, Rules, Bye-laws, Formation Orders or
notifications or any other matters in electronic form which are published in the Nepal Gazette
under the prevailing laws.
(2) Where the prevailing law provides for the filing of any form, application or any other document
or any record to be generated or retained or secured and or any license or permit or approval or
certificate to be issued or provided or any payment to be made in any Government agency, or
public entity or in any bank or financial institution operating business within Nepal, it
may be filed, generated, retained or secured or issued or granted in electronic form or payment
may be made in electronic mode of communication, and, it shall not be denied to provide the legal
validity to such form, application, document record, license, permit or approval, certificate or
payment on the ground of the use of electronic form or electronic communication mode.
To Accept the Document in Electronic Form:
(1) Any Government agency or public entity or bank or financial institution operating business within
Nepal may also accept any document and payment to be submitted or paid to them under
the prevailing law in electronic form or through any electronic mode, and if such documents are
submitted or payment is made, as the case may be, legal recognition shall not be denied
merely on the ground that it was accepted in electronic form or through any electronic mode.
(2) Notwithstanding anything contained in Sub-section (1), no Government agency or public entity
or bank or financial institution operating business within Nepal shall, except in the conditions
as prescribed and for the government agencies as prescribed, be compelled to accept any document or
payment in electronic form, and such an agency or institution shall, except in the conditions and
for the agencies as prescribed, not compel any other person to accept any document in electronic form
or payment through any electronic mode.
(3) For the purpose of Sub-section (1), the provisions relating to the procedure, process and format
to be followed shall be as prescribed.
Use of Digital Signature in Government Offices:
(1) Where it is required that the concerned person shall have to affix his/her signature in any document
or record for verification of such document or record to be transmitted or issued by any
Government agency or public entity or bank or financial institution operating business within
Nepal or to be accepted by such agency or institution, then Government of Nepal may, if it
thinks appropriate, make a provision to use digital signature instead of such a signature.
(2) Notwithstanding anything contained elsewhere in this Act, Government of Nepal may, for the
purpose of the provision made in Sub- section (1), prescribe additional security procedure
for the verification and authentication of such digital signature.
(3) Provisions regarding the Certifying Authority and Digital Signature Certificate to be used by the
government agency or entity referred to in Sub-section (1), shall be as prescribed.
12.8 Provisions Relating to Network Service
Liability of Network Service Providers: Intermediaries providing their services as network service
providers shall undertake the following liabilities in regard to such service provided by them:
(a) Liabilities referred to in the agreement made with the subscriber in regard to service provision,
(b) Liabilities referred to in the license of network service providers, and
(c) Any such other liability as prescribed.
Network Service Provider not to be Liable: Notwithstanding anything contained in Section
42, no network service provider shall be liable to bear any criminal or civil liability arising from
any fact or statement mentioned or included in the information or data of the third party made available
in electronic form by him/her merely on the ground that he/she has made available the access to such
information or data.
Provided that, such a person or institution providing network service shall not be relieved from such
liability, if he/she has made available access to such information or data with the knowledge
that any fact or statement mentioned or included in such information or data contravene this Act or
Rules framed hereunder.
Explanation: For the purpose of this section, "Third Party" means a network service provider who
provides service as an intermediary and any person over whom there is no control of the network service
provider.
12.9 Offence Relating To Computer
To Pirate, Destroy or Alter computer source code: When a computer source code is required to be kept
as it is for the time being by the prevailing law, if any person, knowingly or with mala fide
intention, pirates, destroys or alters the computer source code to be used for any computer, computer
programme, computer system or computer network, or causes others to do so, he/she shall be liable
to the punishment with imprisonment not exceeding three years or with a fine not exceeding two hundred
thousand Rupees or with both.
Explanation: For the purpose of this section "computer source code" means the listing of programmes,
computer command, computer design and layout and programme analysis of the computer resource in
any form.
Unauthorized Access in Computer Materials: If any person with an intention to have access in any
programme, information or data of any computer, uses such a computer without authorization of
the owner of or the person responsible for such a computer or even in the case of authorization, performs
any act with an intention to have access in any programme, information or data contrary to such
authorization, such a person shall be liable to the punishment with the fine not exceeding Two
Hundred Thousand Rupees or with imprisonment not exceeding three years or with both depending
on the seriousness of the offence.
Damage to any Computer and Information System: If any person knowingly and with a mala fide
intention to cause wrongful loss or damage to any institution destroys, damages, deletes, alters, disrupts
any information of any computer source by any means or diminishes value and utility of such
information or affects it injuriously or causes any person to carry out such an act, such a person shall be
liable to the punishment with the fine not exceeding two thousand Rupees and with imprisonment not
exceeding three years or with both.
Publication of illegal materials in electronic form:
(1) If any person publishes or displays any material in the electronic media including computer, internet
which are prohibited to publish or display by the prevailing law or which may be contrary to the
public morality or decent behavior or any types of materials which may spread hate or jealousy
against anyone or which may jeopardize the harmonious relations subsisting among the peoples of
various castes, tribes and communities shall be liable to the punishment with the fine not
exceeding One Hundred Thousand Rupees or with the imprisonment not exceeding five years
or with both.
(2) If any person commits an offence referred to in Sub-section (1) from time to time, he/she shall be liable
to the punishment for each time with one and one half percent of the punishment of the previous
punishment.
Confidentiality to Divulge: Save as otherwise provided in this Act or Rules framed hereunder or in
the prevailing law, if any person who has an access in any record, book, register, correspondence,
information, documents or any other material under the authority conferred under this Act or Rules
framed hereunder divulges or causes to divulge confidentiality of such record, books, registers,
correspondence, information, documents or materials to any unauthorized person, he/she shall
be liable to the punishment with a fine not exceeding Ten Thousands Rupees or with imprisonment not
exceeding two years or with both, depending on the degree of the offence.
To inform False statement: If any person, with an intention to obtain a license or with any other
intention, knowingly conceals any statement or submits any false statement to the Controller, or, with an
intention to obtain a Digital Signature Certificate or with any other intention, knowingly conceals any
statement or submits any false statement to the Certifying Authority, he/she shall be liable to the
punishment with a fine not exceeding One Hundred Thousand Rupees or with an imprisonment not
exceeding two years or with both.
Submission or Display of False License or Certificates:
(1) Any person who works as a Certifying Authority without a license issued by the Controller under
this Act shall be liable to the punishment with a fine not exceeding one hundred thousand Rupees
or with an imprisonment not exceeding two years or with both, depending on the seriousness of the
offence.
(2) Any person who, without obtaining a license from the Certifying Authority, publishes a fake
license or a false statement in regard to a license or provides it to any person by any other means, shall
be liable to the punishment with a fine not exceeding one hundred thousand Rupees in the case where the
act referred to in Sub-section (1) has not been accomplished by such a person.
(3) If any person publishes or otherwise makes available a certificate to any other person by any
means, knowing that such certificate has not been issued by the Certifying Authority referred to in such a
certificate, or that the subscriber listed in such certificate has not accepted the certificate, or that such a
certificate has already been suspended or revoked, he/she shall be liable to the punishment with a fine not
exceeding one hundred thousand Rupees or with an imprisonment not exceeding two years or
with both. Provided that, if such a suspended or revoked certificate is published or provided for the
purpose of verification of a Digital Signature made before it was suspended or revoked, it shall not
be deemed that an offence has been committed under this Sub-section.
Non-submission of Prescribed Statements or Documents:
(1) If any person responsible to submit any statement, document or report to the Controller or
Certifying Authority under this Act or Rules framed hereunder, fails to submit such
statement, document, or report within the specified time limit, such a person shall be liable to the
punishment with a fine not exceeding fifty thousands Rupees.
(2) Any person who fails to maintain duly and in a secured manner any book, register, record or
account required to be so maintained under this Act or Rules framed hereunder shall be liable to
the punishment with a fine not exceeding fifty thousand Rupees.
To commit computer fraud: If any person, with an intention to commit any fraud or any other illegal
act, creates, publishes or otherwise provides digital signature certificate or acquires benefit from
the payment of any bill, balance amount of any one's account, any inventory or ATM card in
connivance of or otherwise by committing any fraud, amount of the financial benefit so acquired
shall be recovered from the offender and be given to the person concerned and such an offender shall
be liable to the punishment with a fine not exceeding one hundred thousand Rupees or with an
imprisonment not exceeding two years or with both.
Abetment to commit computer related offence: A person who abets other to commit an offence relating
to computer under this Act or who attempts or is involved in the conspiracy to commit such an offence
shall be liable to the punishment with a fine not exceeding fifty thousand Rupees or with imprisonment
not exceeding six months or with both, depending on the degree of the offence.
Punishment to the Accomplice: A person who assists others to commit any offence under this Act
or acts as accomplice, by any means shall be liable to one half of the punishment for which the principal
is liable.
Punishment in an offence committed outside Nepal: Notwithstanding anything contained in the
prevailing laws, if any person commits any act which constitutes an offence under this Act and which
involves the computer, computer system or computer network system located in Nepal, even though
such an act is committed while residing outside Nepal, a case may be filed against such a person and
shall be punished accordingly.
Confiscation: Any computer, computer system, floppy disks, compact disks, tape drives, software or any
other accessory devices used to commit any act deemed to be an offence relating to computer
under this Act shall be liable to confiscation.
Offences Committed by a corporate body:
(1) If any act is done by a corporate body which is deemed an offence under this Act, such an
offence shall be deemed to have been committed by the person who was responsible as chief for
the operation of the corporate body at the time of committing such an offence.
Provided that, if the person who was responsible as chief for the operation of such a corporate
body proves that such an offence was committed without his/her knowledge or that he/she
exercised all reasonable efforts to prevent such an offence, he/she shall not be held guilty.
(2) Notwithstanding anything contained in Sub-section (1), if it is proved that any offence under this
Act was committed by a corporate body with the consent or knowledge of, or by reason of the negligence
of, a director, manager, secretary or any other responsible person of such corporate body,
such an offence shall be deemed to have been committed by such corporate body and by
the director, manager, secretary or other responsible person of such corporate body.
Other Punishment: If any violation of this Act or Rules framed hereunder has been committed, for
which no penalty has been separately provided, such a violator shall be liable to the punishment with a
fine not exceeding fifty thousand Rupees, or with an imprisonment not exceeding six months or with
both.
No Hindrance to Punish Under the Laws prevailing: If any act deemed to be an offence under this
Act is also deemed to be an offence under the prevailing laws, this Act shall not be deemed to hinder
the filing of a separate case and punishment accordingly.
12.10 Provisions Relating to Information Technology Tribunal
Constitution of a Tribunal:
(1) Government of Nepal shall, in order to initiate the proceedings and adjudicate the offences
concerning computer as referred to in Chapter -9, constitute a three member Information
Technology Tribunal consisting of one member each of law, Information Technology and
Commerce by notification in the Nepal Gazette from amongst the persons who are qualified under
section 60.
(2) The Law Member shall be the chairperson of the Tribunal.
(3) The Tribunal shall exercise its jurisdiction as prescribed.
(4) Any person aggrieved by an order or a decision made by Tribunal may appeal to the
Appellate Tribunal within thirty five days from the date of such order or decision, as the case
may be.
Qualification of the Member of the Tribunal:
(1) Any person who has the knowledge in information technology and, who is or who has been or
who is qualified to be a judge in the District Court, shall be eligible to be a law member of the
Tribunal.
(2) A Nepalese citizen who holds at least master degree in computer science or information
technology and who has at least three years experience in the field of electronic transactions,
information technology or electronic communication, shall be eligible to be an information
technology member of the Tribunal.
(3) A Nepali citizen who holds at least master degree in management or commerce and who has
specialization in the field of electronic transaction and who has at least three years
experience in the related field shall be eligible to be a commerce member of the Tribunal.
Terms of office, remuneration and conditions of service of the Member of Tribunal:
(1) The term of office of a member of the Tribunal shall be of five years and he/she shall be eligible
for reappointment.
(2) Remuneration and the terms and conditions of the service of a Member of the Tribunal shall be as
prescribed.
(3) Every Member of the Tribunal shall, before assuming his/her office, take the oath of his/her office
and secrecy before the Chief Judge of Appellate Court in a format and in a manner as prescribed.
Circumstances under which office shall be fallen vacant and filling up of vacancy:
(1) Office of a Member of the Tribunal shall be fallen vacant in the following circumstances:
a) On expiry of terms of office,
b) On attainment of sixty three years of age,
c) On death,
d) If one tenders resignation,
e) If one is convicted by a court on any criminal offence involving moral turpitude, or
f) If it is proved that one has misbehaved or has become incompetent to perform one's duty,
while an inquiry is made by Government of Nepal on the charge that one has misbehaved
against one's office or has become incompetent to perform one's duty.
Provided that, a Member of the Tribunal charged under this Clause shall be given a reasonable
opportunity to defend his/her case.
(2) Notwithstanding anything contained in Clause (f), if the law member of the Tribunal is a sitting
judge, while making such an inquiry, it shall be done in accordance with the prevailing law
concerning his/her terms of service.
(3) The procedure of inquiry, for the purpose of Clause (f) of Sub-section (1), shall be as
prescribed.
(4) Government of Nepal shall, in case of vacancy of the office of any member of Tribunal under
Sub-section (1), fulfill such vacancy from among the persons who are qualified under section 61
for remaining term of office of such a member.
Staff of the Tribunal:
(1) Government of Nepal shall make available necessary staff to the Tribunal to perform its functions.
(2) Other provisions regarding the staff of the Tribunal shall be as prescribed.
Procedures to be followed by the Tribunal: The Tribunal shall, while initiating proceedings and
adjudicating the case under section 60, follow the procedures as prescribed.
12.11 Provisions Relating to Information Technology Appellate Tribunal
Establishment and formation of the Appellate Tribunal:
(1) Government of Nepal shall, in order to hear the appeal against the order or the decision made
by the Tribunal and to hear the appeal against the decision or order made by the Controller or by
the Certifying Authority, as the case may be, under this Act, by notification in the Nepal Gazette,
establish a three member Information Technology Appellate Tribunal consisting of one member
each of law, information technology and commerce from among the persons who are qualified
under section 67.
(2) Law Member shall be the chairperson of the Appellate Tribunal.
(3) Exercise of the jurisdiction of Appellate Tribunal shall be as prescribed.
Qualification of the Member of Appellate Tribunal:
(1) A person who has the knowledge in information technology and who is or who has already been
or who is qualified to be a judge in the Appellate Court shall be eligible to be a law member of the
Appellate Tribunal.
(2) A Nepali citizen who holds at least master degree in computer science or information technology
and who has at least five years experience in the electronic transaction, information technology or
electronic communication shall be eligible to be an information technology member of the
Tribunal.
(3) A Nepali citizen who holds at least master degree in management or commerce and who has
specialization in the field of electronic transactions and who has at least five years experience
in the relevant field, shall be eligible to be a commerce member.
Terms of Office, Remuneration and Terms & Conditions of the service of the Member of
Appellate Tribunal:
(1) The term of office of the member of the Appellate Tribunal shall be of five years and he/she shall
be eligible for reappointment.
(2) Remuneration and other terms and conditions of the services of the members of the
Appellate Tribunal shall be as prescribed.
(3) A member of the Appellate Tribunal shall, before assuming his/her office after appointment, take
the oath of his/her office and secrecy before the Chief Justice of the Supreme Court.
Conditions of Vacancy of Office and filling up of such Vacancy:
(1) Office of a Member of Appellate Tribunal shall be fallen vacant in the following circumstances:
(a) On expiry of terms of office,
(b) On attainment of sixty three years of age,
(c) On death,
(d) If one tenders resignation,
(e) If one is convicted by a court on any criminal offence involving moral turpitude, and,
(f) If it is proved that one has misbehaved or has become incompetent to perform one's duty,
while an inquiry is made by Government of Nepal on the charge that one has misbehaved
against one's office or has become incompetent to perform one's duty.
Provided that, a member of the Appellate Tribunal charged under this Clause shall be given a
reasonable opportunity to defend his/her case.
(2) Notwithstanding anything contained in Clause (f), if the law member of the Tribunal is a sitting
judge, while making such an inquiry, it shall be done in accordance with the prevailing law
concerning his/her terms of service.
(3) The procedure of inquiry, for the purpose of Clause (f) of Sub-section (1), shall be as
prescribed.
(4) Government of Nepal shall, in case of vacancy of the office of any member of Tribunal under
Sub-section (1), fulfill such vacancy from amongst the persons who are qualified under section
67 for remaining term of office of such a member.
Staff of the Appellate Tribunal:
(1) Government of Nepal shall make available necessary staff to Appellate Tribunal to perform its
functions.
(2) Other provisions regarding the staff of the Appellate Tribunal shall be as prescribed.
Procedures to be followed by the Appellate Tribunal: The Appellate Tribunal shall, while initiating
proceedings and adjudicating the appeal filed before it, follow the procedures as prescribed.
12.12 Miscellaneous
Provision may be made by an Agreement: The parties involved in the work of creating,
transmitting, receiving, storing or processing, through any other means, any electronic record
may make provision by an agreement not to apply any or all provisions of Chapter 3, or
to alter some of the provisions referred to in the said Chapter in course of their business, and may make
provisions to regulate their activities accordingly.
Government of Nepal may issue Directives: Government of Nepal may, in regard to the implementation
of this Act, issue necessary directives to the Controller or Certifying Authority, and in such a case, it
shall be a duty of the Controller or Certifying Authority, as the case may be, to comply with such
directives.
Time Limitation to file a Complaint: If a violation of this Act or Rules framed hereunder has occurred
or if any act deemed to be an offence under this Act has been committed, a first information report in regard
to such a violation or offence shall have to be filed within thirty five days of knowledge that such
a violation has occurred or an offence has been committed.
Government of Nepal to be a Plaintiff:
(1) Any case deemed to be an offence under this Act shall be initiated by Government of Nepal as
plaintiff and such a case shall be deemed to have been included in Schedule 1 of the Government
Cases Act, 1992 (2049).
(2) While conducting investigation of a case under Sub-section (1), the police have to take the assistance
of the Controller or other concerned expert, as the case may be.
Compensation to be Recovered: If any loss or damage has been caused to any person by the reason of
offence committed under this Act, the compensation of such loss or damage shall also be recovered
from the offender.
This Act shall not Apply:
(1) Notwithstanding anything contained elsewhere in this Act, this Act shall not be applied in the
following matters:
(a) Negotiable Instruments as referred to in the Negotiable Instruments Act, 2034 (1977).
(b) Deed of will, deed of mortgage, bond, deed of conveyance, partition or any such deed
related with transfer of the title in any immovable property,
(c) Any other document which demonstrates title or ownership in any immovable property,
(d) Power of Attorney, statement of claim, statement of defense or any such other
documents as may be used in courts proceedings,
(e) Statement of claim, counter-claim, statement of defense or any such other document as may
be submitted in writing in the proceedings of any Arbitration,
(f) Documents as prescribed by the prevailing law that are required not to be retained in electronic form.
(2) Notwithstanding anything contained in Sub-section (1), Government of Nepal may, by notification
in the Nepal Gazette, alter the documents referred to in Sub-section (1).
Power to Frame Rules: Government of Nepal may in order to fulfill the objective of this Act, frame
necessary Rules.
To Frame and Enforce the Directives: Government of Nepal may, in order to achieve the
objective of this Act, frame and enforce necessary directives, subject to this Act and Rules framed
hereunder.
Effect of inoperativeness of The Electronic Transactions Ordinance, 2063 (2008): With the Electronic
Transactions Ordinance, 2063 (2008) being inoperative, unless a different intention appears, the
inoperativeness shall not,
(a) Revive anything not prevailing or existing at the time, at which the Ordinance became
inoperative,
(b) Affect the matter in operation as per the Ordinance or anything duly done or any punishment
suffered there under,
(c) Affect any right, privilege, obligation or liability acquired, accrued or incurred under the
Ordinance,
(d) Affect any penalty, punishment or forfeiture incurred under the Ordinance,
(e) Affect any action or remedy made or taken in respect of any such right, privilege, obligation,
liability, penalty or punishment as aforesaid; and any such legal proceeding or remedy may be
instituted, continued or enforced as if the Ordinance were in force.