Software Engineering and Modeling F (1) enc

The document outlines the various postgraduate, diploma, and undergraduate programs offered in Software Engineering and Modeling at Amity University. It details the structure and content of the Software Engineering curriculum, including modules on software lifecycle, requirements engineering, design, testing, and project management. Additionally, it emphasizes the importance of systematic approaches in software development to ensure quality and efficiency.


Software Engineering and Modeling

Programs Offered

Post Graduate Programmes (PG)
• Master of Business Administration
• Master of Computer Applications
• Master of Commerce (Financial Management / Financial Technology)
• Master of Arts (Journalism and Mass Communication)
• Master of Arts (Economics)
• Master of Arts (Public Policy and Governance)
• Master of Social Work
• Master of Arts (English)
• Master of Science (Information Technology) (ODL)
• Master of Science (Environmental Science) (ODL)

Diploma Programmes
• Post Graduate Diploma (Management)
• Post Graduate Diploma (Logistics)
• Post Graduate Diploma (Machine Learning and Artificial Intelligence)
• Post Graduate Diploma (Data Science)

Undergraduate Programmes (UG)
• Bachelor of Business Administration
• Bachelor of Computer Applications
• Bachelor of Commerce
• Bachelor of Arts (Journalism and Mass Communication)
• Bachelor of Arts (General / Political Science / Economics / English / Sociology)
• Bachelor of Social Work
• Bachelor of Science (Information Technology) (ODL)

Amity Helpline: (Toll free) 18001023434
For Student Support: +91 - 8826334455
Support Email id: [email protected] | https://amityonline.com
Software Engineering and Modeling

© Amity University Press

All Rights Reserved
No parts of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise without the prior permission of the publisher.
SLM & Learning Resources Committee

Chairman : Prof. Abhinash Kumar
Members : Dr. Ranjit Varma
          Dr. Divya Bansal
          Dr. Arun Som
          Dr. Maitree
          Dr. Sunil Kumar
          Dr. Reema Sharma
          Dr. Winnie Sharma
Member Secretary : Ms. Rita Naskar

Published by Amity University Press for exclusive use of Amity Directorate of Distance and Online Education,
Amity University, Noida-201313
Contents
Page No.

Module - I: Introduction to Software Engineering 01
1.1 Software Engineering
1.1.1 Introduction of Software Engineering
1.1.2 Software Engineering: Purpose and Importance
1.1.3 Software Crisis and Causes
1.1.4 Software Engineering’s Responsibility
1.1.5 Fundamental Qualities of Software Products
1.2 Software Life-Cycle Models
1.2.1 Software Lifecycle Exploration
1.3 Case Study

Module - II: Software Requirement Engineering and Coding 51
2.1 Software Requirement Engineering
2.1.1 Software Requirement Engineering and Coding Overview
2.1.2 Requirement Determination: Traditional and Modern Methods
2.1.3 Process and Data Modeling Techniques
2.1.4 Documenting Requirements with Cases
2.2 Coding
2.2.1 Trigonometry and Theorem Applications
2.2.2 Structured and Pair Programming Essentials
2.3 Case Study
2.3.1 Case Study: Real-life Implementation

Module - III: Software Design 89
3.1 Software Design
3.1.1 Overview of Software Design
3.1.2 Goals and the Software Design Process
3.1.3 Methodologies for Structured Design
3.1.4 Modules Coupling and Cohesion
3.1.5 Types of Coupling and Cohesion
3.1.6 Structured Chart
3.1.7 Quality of Good Software Design

Module - IV: Software Testing 122
4.1 Introduction to Software Testing
4.1.1 Software Testing Overview
4.1.2 Introduction
4.1.3 Level of Testing
4.1.4 Characteristics of Testing
4.2 Types of Software Testing
4.2.1 Black-Box and White-Box Testing
4.2.2 Alpha, Beta and Gamma Testing

Module - V: Software Project Planning and Management 168
5.1 Software Project Planning
5.1.1 Introduction to Software Project Planning and Management
5.1.2 Planning Software Projects
5.1.3 Software Metrics
5.1.4 Cost and Size Metrics: FP and COCOMO
5.1.5 Managing Configurations
5.2 Software Maintenance
5.2.1 Software Maintenance and Its Types
Module - I: Introduction to Software Engineering
Learning Objectives
At the end of this module, you will be able to:
●● Define software engineering
●● Understand software engineering: purpose and importance
●● Analyse software crisis and causes
●● Define fundamental qualities of software products
●● Analyse software lifecycle exploration
Introduction
A methodical and rigorous approach to the creation, use and upkeep of software is known as software engineering. It includes a broad range of techniques, instruments and procedures with the goal of producing software that satisfies user needs, operates dependably and can be maintained over time. Software engineering encompasses all phases of the software development lifecycle, including requirement analysis, design, coding, testing, deployment and maintenance, in contrast to traditional programming, which concentrates on producing code.

By addressing the intricacies and difficulties of contemporary software systems, this method guarantees their scalability, security and effectiveness. Software engineering is a field that combines the ideas of computer science, engineering and project management to create reliable software solutions. In today’s technologically advanced world, these solutions are critical for a wide range of applications.
Before a software product can be developed, user needs and constraints must be identified and made clear; the product must be made to be user- and implementer-friendly; the source code must be carefully implemented and tested; and supporting documentation must be created. Software maintenance tasks include reviewing and analysing change requests, redesigning and altering source code, thoroughly testing updated code, updating documentation to reflect changes and distributing updated work products to the right users.

The need for systematic approaches to software development and maintenance became apparent in the 1960s. Many software projects at the time suffered from cost overruns, schedule slippage, unreliability, inefficiency and lack of customer acceptance. It became evident that the demand for software was exceeding our capacity to produce and update it as computer systems became bigger and more sophisticated. Consequently, software engineering has developed into an important topic in technology.

The nature and complexity of software have seen significant changes in the past forty years. Applications from the 1970s returned alphanumeric results using a single processor and single-line inputs. Modern software, on the other hand, is much more complex, relies on client-server technology and features an intuitive user interface. It is compatible with a wide range of CPUs, operating systems and even hardware from different countries.
Software groups tackle development issues and backlogs in addition to doing their utmost to stay abreast of rapidly emerging new technologies. Improvements to the development process are even recommended by the Software Engineering Institute (SEI). This is a constant requirement that cannot be avoided, but it often results in conflict between individuals who welcome change and others who adamantly stick to traditional working practices. Consequently, in order to prevent disputes, enhance software development and deliver high-quality software on schedule and under budget, it is imperative to embrace software engineering concepts, practices and methods.

Software is a collection of instructions that a user can use to gather inputs, manipulate them and then produce the desired output in terms of features and performance. It also comes with a collection of materials designed to help users comprehend the software system, such as the program handbook. In contrast, engineering focuses on creating goods through the application of precise, scientific ideas and techniques.
1.1 Software Engineering
Software is one of the most significant technologies in the world today and is essential to the development of computer-based systems and goods. Software has developed over the past 50 years from a specialised tool for information analysis and problem solving into a whole industry. However, we still struggle to provide high-calibre software on schedule and within budget.
Figure: Software Engineering


Software covers a broad range of technologies and application domains through programs, data and descriptive information. Those who have to maintain legacy software still face unique difficulties. Simple collections of information content have given way to sophisticated, web-based systems and applications that display complex functionality and multimedia content. These WebApps are software even though they have special features and needs.
Software engineering comprises the process, methods and tools that make it possible to create complex computer-based systems with speed and quality. All software initiatives can benefit from the five framework activities that make up the software process: planning, communication, modelling, construction and deployment. A set of fundamental principles guides the problem-solving process of software engineering practice. Even as our collective understanding of software and the tools needed to construct it advances, a wide range of software myths still mislead managers and practitioners. You will start to see why these fallacies should always be disproved as you gain more knowledge about software engineering.
1.1.1 Introduction of Software Engineering
Software systems are intricately designed intellectual works. Software development must guarantee that the system is delivered on schedule, stays within budget and satisfies the requirements of the intended application. In 1968, at a NATO meeting, the phrase "software engineering" was coined to advocate the need for an engineering approach to software production in order to achieve these aims. Software engineering has advanced significantly and established itself as a field since that time.
The goal of software engineering as a discipline is to significantly increase software productivity and quality while lowering software costs and time to market through research, education and practice of engineering processes, methods and techniques.
The process of turning a preliminary system concept into a software system that operates in the intended environment is known as software development. Software development activities include software specification, software design, implementation, testing, deployment and maintenance, just like many engineering projects. The software specification captures what the user and customer want, stated as requirements or capabilities that the software system needs to meet. In order to fulfil the software requirements, software design creates a software solution. Specifically, it establishes the software system’s general software structure, also referred to as the software architecture. The architecture shows the relationships, interfaces and interactions between the main parts of the system. High-level algorithms and user interfaces for the system’s component parts are also defined by software design. During implementation and testing, the design is turned into computer programs, which are tested to make sure they function as the user and customer expect. After installation, the software system is checked and adjusted to make sure it functions as intended in the target environment. During the maintenance phase, the software system is continuously updated to fix bugs and improve functionality until it is replaced or abandoned.
Software engineering is an engineering profession that covers every facet of software creation, from system specification in its early phases to post-implementation maintenance. Engineers are those who make things work. When applicable, they employ theories, techniques and instruments. They do, however, employ them selectively and always look for answers to issues, even in the absence of relevant theories and techniques. Engineers search for answers within organisational and budgetary constraints because they are aware of these limitations. The technical procedures involved in software development are only one aspect of software engineering. In addition, it covers tasks like managing software projects and creating instruments, procedures and theories to aid in the creation of software.
Getting results of the necessary quality on time and within budget is the goal of engineering. Since engineers cannot be perfectionists, this frequently requires accepting compromises. Programmers, on the other hand, can dedicate as much time as they like to the development of their own programs.
Since producing high-quality software is frequently best achieved by methodical and organised work practices, software engineers generally take this approach to their job. But engineering is all about choosing the best method for a given set of conditions, so in some cases a less formal, more creative approach to development might work. Less formal development is especially appropriate for the creation of web-based systems, which calls for a combination of software and graphical design talents.
Activities related to software quality assurance (QA) are conducted concurrently with development activities. The purpose of QA activities is to guarantee that development activities are executed accurately, that necessary artifacts, such as the software design document (SDD) and requirements documents, are created and adhere to quality standards, and that the software system will meet the requirements. Testing, requirements analysis, design evaluation, code review and inspection are the methods used to achieve these goals.

Activities related to software project management guarantee that the software system under development will be delivered within budget and on schedule. Project planning is a crucial component of project management. It happens right at the start of a project, just after the specifications for the software system are decided. Specifically, the expected amounts of time and effort needed to complete the project activity tracks are estimated and a project schedule is created to provide direction. Project management is in charge of continuously monitoring the project’s expenses and progress during the development and deployment phase, as well as carrying out the necessary adjustments to adapt the project to new circumstances.
Origins of the Term Software Engineering
In the 1960s, software engineering became its own independent field of engineering. Software engineering has been around since, at the very least, 1963–1964. It was first used by Margaret Hamilton, who worked on the Apollo space program, to differentiate software engineering from hardware and other engineering specialties. An hour on a computer cost hundreds of times as much as an hour of a programmer's time back then because hardware was so valuable. In the name of efficiency, maintainability and clarity of the code were frequently compromised. Software became more and more important as computer applications became larger and more complex.
When the phrase software engineering was used as the name of a 1968 NATO conference, it was still more of a pipe dream than a practical field. The title was deliberately chosen by the organisers, particularly Fritz Bauer, to suggest that software manufacturing should be grounded in the kinds of theoretical underpinnings and applied disciplines that are customary in the more established engineering departments.

Many things have changed since then. Margaret Hamilton received the Presidential Medal of Freedom on November 22, 2016, in recognition of her efforts developing software that helped prepare for the Apollo missions.

Why Software Engineering
First, every aspect of modern civilisation uses software. Software is essential to the operation and growth of enterprises. Software is essential to the operation of many machines and gadgets, including cars, trucks, aeroplanes and medical equipment. Software is also essential to cloud computing, artificial intelligence (AI) and the Internet of Things (IoT). Software systems are growing exponentially in size, complexity and distribution. These days, creating systems with millions of lines of code is not unusual.

The F-35 fighter, for instance, has 8 million lines of code; the Windows operating system from Microsoft has roughly 50 million lines of code; and Google Search, Gmail and Google Maps combined have 2 billion lines of code. Three decades ago, the software cost accounted for 5%–10% of the total system cost for many embedded systems, which are composed of both hardware and software. Today, that percentage is between 90% and 95%. Firmware, system on a chip (SoC) and/or application-specific integrated circuits (ASIC) are used in some embedded systems. These are integrated circuits where the hardware and software are fused together. Since they are expensive to replace, the software’s quality is essential. Such systems demand a software engineering methodology.
Second, software engineering aids collaboration, which is necessary for the development of huge systems. It takes a lot of work to design, create, test and maintain large software systems. An average software developer can write between fifty and one hundred lines of source code a day. This covers the amount of time needed for analysis, design, implementation, testing and integration. For a system with 10,000 lines of code, a single software engineer would need to dedicate approximately 100–200 days, or 5–10 months, of effort. For a single software engineer, 5,000–10,000 days, or 20–40 years, would be needed to complete a medium-sized system with 500,000 lines of source code. No business can afford to wait this long. Thus, a team or teams of software engineers are required to design and implement real-world software systems. For instance, 20–40 software engineers are needed for a year to work on a medium-sized software system. Collaboration amongst two or more software engineers presents significant hurdles in terms of coordination, communication and conception when developing software systems.
Collaboration amongst two or more software engineers presents significant hurdles in terms
of coordination, communication and conception when developing software systems.

nl
The process of conceptualisation involves monitoring and categorising real-world
occurrences in order to create a mental model that will aid in understanding the intended

O
use of the system. Because software engineers may have various perspectives on
the world based on variances in their education, cultural backgrounds, professional
experiences, preconceptions and other variables, conceptualisation can be difficult for
teams working together. This is explained in the tale of the blind men and the elephant.

ity
Comparable to the four blind guys attempting to see or comprehend an application are
we software engineers. How can the team members create software that will accurately
automate the application if they have the wrong impression of it? How can a team of people
with disparate perspectives create and execute software components that complement
one another? Software engineering helps developers create a shared knowledge of an

rs
application for which the software is designed by providing modelling languages like the
Unified Modelling Language (UML), methods and procedures.
ve
Software engineers must convey their analysis and design concepts to one another
when working as a team. But the natural language is too colloquial and vague at times.
Once more, UML enhances developer communication. Lastly, how can software
engineering teams coordinate and cooperate with each other while they work together?
How do they assign the components to the teams and individual members, for instance
ni

and split up the work? How do they combine the elements created and put into practice
by various teams and individuals within the team? Software engineering offers an answer.
That is, these issues are resolved by software development methods and methodologies,
U

software project management and quality assurance.

Software Engineering Ethics


ity

Every part of our life is controlled and impacted by software, which permeates every
area of our society. Software has the power to benefit or hurt our society and other people.
Thus, when developing, deploying and testing software, software engineers need to take
social and ethical duties into account. The Software Engineering Code of Ethics and
Professional Practice was suggested by the ACM/IEEE-CS Joint Task Force on Software
m

Engineering Ethics and Professional Practices in this regard.


The goal of software engineers is to elevate the field of software analysis, specification, design, development, testing and maintenance to a useful and esteemed one. As part of their responsibility to the public’s health, safety and welfare, software developers must abide by the following eight principles:

1. Public: The public interest shall guide the actions of software engineers.
2. Client and employer: Software developers are expected to behave in a way that serves the public interest while acting in the best interests of their employers and clients.
3. Product: Software engineers are responsible for making sure that their creations and any associated changes adhere to the strictest industry standards.
4. Judgment: It is expected of software engineers to exercise professional judgment with independence and honesty.
5. Management: Managers and leaders in software engineering must support and adhere to an ethical management philosophy for software development and upkeep.
6. Profession: In line with the public interest, software engineers should enhance the integrity and credibility of their profession.
7. Colleagues: Software developers are expected to treat their peers fairly and to encourage them.
8. Self: Software engineers are expected to support an ethical attitude to the practice of their job and engage in lifelong learning about it.

These ethical guidelines should guide software engineers in both their daily and professional lives. Software engineers, for instance, are required to maintain client or employer confidentiality. A software engineer must also respect and safeguard the intellectual property of an employer or client. A software engineer occasionally has to make a difficult decision. For instance, a software engineer may be aware that, in rare situations, a component may behave unexpectedly, resulting in harm to property or even fatalities. He is also aware that his business needs to regain market share by releasing the product as soon as possible. If he discloses the issue, the release will need to be delayed significantly and he will be labelled as a troublemaker. If he does not report it, a terrible tragedy could occur. In our sector, instances of this hypothetical situation have really happened time and time again. Those in management must also make moral decisions.
Software Engineering and Computer Science
What distinguishes computer science from software engineering? Both working professionals and students frequently ask this question. Computer science prioritises accuracy, performance, resource sharing, computational efficiency and optimisation. These are reasonably fast and accurately measurable. All of the time and money invested in computer science research during the past few decades (from 1950 to the present) has been directed towards enhancing these areas.
computer science research during the past few decades (from 1950 to the present) has
been directed towards enhancing these areas.
U

Software engineering prioritises software PQCT, in contrast to computer science. For


instance, the aim of computer science is frequently to find the best answer. A good-enough
solution would be used in software engineering to cut down on expenses and development
ity

or maintenance time. The goal of software engineering research and development is to


greatly increase software PQCT. Unfortunately, it is difficult and time-consuming to quantify
the influence of a software engineering process or technique. The influence needs to be
evaluated over an extended period of time and with significant resources in order to be
useful. For instance, it took experts over ten years to determine the detrimental effects of
m

the unregulated goto statement. In other words, when the goto statement is used carelessly,
the outcome is badly designed programs that are challenging to read, test and maintain.
Computer science is solely concerned with technical matters. Non-technical problems
)A

are dealt with by software engineering. For instance, the initial phases of the development
process concentrate on determining the needs of the business and creating specifications
and limitations. Domain expertise, experience with research and design, communication
prowess and client interactions are prerequisites for these tasks. Project management
expertise and knowledge are equally necessary for software engineering. Human
variables like user preferences and system usage patterns must be taken into account
when designing user interfaces. Political considerations must also be taken into account
while developing software because the system may have an impact on a large number of
individuals.
Amity University Online
Software Engineering and Modeling 7
Understanding and appreciating software engineering processes, approaches
and principles may be facilitated by being aware of the distinctions between software
engineering and computer science. Take into consideration, for instance, the architecture Notes

e
of a software system that requires database access. Computer science may place an
emphasis on effective data retrieval and storage and support program designs that allow

in
direct database access. A program with such an architecture would be susceptible to
modifications made to the database management system (DBMS) and database design.
The program must be significantly altered if the database schema or DBMS are modified or
replaced. This might be expensive and challenging. Software engineers would therefore not

nl
view this as a sensible design choice unless they really need efficient database access. In
order to save maintenance time, money and effort, software engineers would rather have a
design that minimises the effects of database changes.

O
Computer science and software engineering are closely connected fields,
notwithstanding their differences. Similar to physics and electrical and electronics
engineering or chemistry and chemical engineering, computer science and software

ity
engineering work together. In other words, software engineering is built on the theoretical
and technological basis of computer science. Computer science is applied in software
engineering. Software engineering does, however, have its own areas of study. These
include, among other things, research on software processes and procedures, software
validation, software verification and testing strategies.

rs
The field of software engineering is vast. Programming languages, algorithms and data
structures, database management systems, operating systems, artificial intelligence and
ve
computer networks are just a few of the computer science topics that a software engineer
should be knowledgeable in. Software engineers working on embedded systems must
possess a fundamental understanding of electronic circuits and hardware interface. Lastly,
developing domain expertise and design experience is a gradual process for a software
engineer to become a skilled software architect. Software engineering is an attractive
ni

field because of these problems as well as the capacity to develop and construct big,
sophisticated systems to fulfil real-world objectives. Software engineers and researchers
have a lot of options thanks to the constantly growing field of computer applications.
U

Diverse Applications of Software Engineering
Software engineering plays a pivotal role in shaping the modern world, with applications spanning various industries and domains. From developing cutting-edge mobile apps to designing complex systems for space exploration, the impact of software engineering is profound and far-reaching. Let us delve into the diverse applications of software engineering and how it drives innovation across different sectors.
1. Mobile App Development: Mobile app development is one of the most prominent applications of software engineering. From social networking and entertainment to productivity and e-commerce, mobile apps have become integral to our daily lives. Software engineers leverage their expertise to design, develop, and deploy user-friendly and feature-rich mobile applications across platforms like iOS and Android.
2. Web Development: Web development encompasses the creation of websites and web applications using programming languages, frameworks, and tools. Software engineers utilize their skills to build interactive websites, e-commerce platforms, content management systems (CMS), and more. They focus on optimizing performance, security, and user experience to ensure seamless functionality across different devices and browsers.
security, and user experience to ensure seamless functionality across different devices
and browsers.
3. Embedded Systems: Embedded systems are specialized computing devices designed for specific tasks or functions. From automotive electronics and medical devices to industrial automation and consumer electronics, embedded systems are ubiquitous in modern technology. Software engineers develop firmware and software applications tailored to the unique requirements of embedded systems, ensuring reliability, efficiency, and safety.

in
4. Artificial Intelligence and Machine Learning: Artificial intelligence (AI) and machine
learning (ML) are revolutionizing industries by enabling computers to perform tasks
that traditionally required human intelligence. Software engineers play a crucial role in

nl
developing AI-powered applications and systems, including chatbots, recommendation
engines, autonomous vehicles, and predictive analytics tools. They leverage algorithms,
data analysis techniques, and programming languages to create intelligent solutions that

drive innovation and efficiency.
5. Space Exploration and Aerospace: Software engineering plays a vital role in space
exploration and aerospace industries, where reliability and precision are paramount.
Software engineers develop mission-critical software for spacecraft, satellites, and

ground control systems, ensuring the success of space missions and the safety of
astronauts. They focus on fault tolerance, real-time processing, and system resilience to
withstand the harsh conditions of space.

1.1.2 Software Engineering: Purpose and Importance
A fundamental component of contemporary technology, software engineering
encompasses a wide range of approaches, ideas and procedures for creating, preserving
and overseeing software systems. It fills the void between unstructured programming and the
methodical, structured creation of software. This section takes a close look at the significance
and goal of software engineering, emphasising its vital place in the technology ecosystem.

The Purpose of Software Engineering



The goal of software engineering is to produce software that is dependable, efficient


and of high quality by using organised and systematic methods. The main goals consist of:

Figure: Purpose of Software Engineering


1. Structured Development
Structured development, which is essential to software engineering, provides a clear and
structured technique for ensuring that software projects are well planned, carried out and
maintained. This involves multiple phases:
™™ Planning: Establishing project goals, needs, resources and timelines.

™™ Analysis: Understanding user demands and system requirements.
™™ Design: Creating system architecture and comprehensive design blueprints.

™™ Implementation: Coding and combining software components.
™™ Testing: Verifying that the program satisfies requirements and is free of bugs.

™™ Deployment: Releasing the software to users.
™™ Maintenance: Updating and improving software post-deployment.
Software engineers can control complexity, guarantee coherence and keep their attention

on project objectives by using an organised method.
2. Quality Assurance
Software quality assurance is crucial. The goal of software engineering is to create software that is:
™™ Functional: Fulfils the given specifications.

™™ Reliable: Consistently performs under anticipated circumstances.
™™ Usable: Simple to operate and comprehend.
™™ Effective: Makes the best use of resources.
™™ Maintainable: Simple to improve and adjust.
Code reviews, unit testing, integration testing, system testing and user acceptance
testing are some of the procedures used in quality assurance. By identifying and fixing

problems early on, these procedures lower the expense and labour associated with post-
release fixes.
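The assurance procedures listed above can be illustrated with a short unit test. This is a minimal sketch: the `apply_discount` function and its requirements are hypothetical, invented here for illustration; the pattern of one test per requirement, run automatically, is what matters.

```python
import unittest

# Hypothetical function under test (illustrative only): one small unit
# with clearly stated requirements that tests can verify.
def apply_discount(price, percent):
    """Return price reduced by the given percentage, rounded to 2 places."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    """Each test checks one requirement, catching defects before release."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically so the script works as a plain file.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTests)
unittest.TextTestRunner(verbosity=0).run(suite)
```

Finding the invalid-percent defect here costs seconds; finding it after release would cost a fix, a re-test and a redeployment, which is the economic argument made above.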

3. Risk Management
Risks associated with software projects include scope creep, technical difficulties,
financial overruns and timetable delays. Effective risk management involves:

™™ Identification: Identifying any hazards at the beginning of the project.


™™ Assessment: Calculating the probability and consequences of hazards.
™™ Mitigation: Creating plans to reduce the impact of risks.

™™ Monitoring: Keeping an ongoing eye on potential hazards during the course of a


project.
Software engineering ensures that projects stay on course and achieve their
goals by proactively controlling risks.
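The four activities above are often captured in a risk register. The sketch below is illustrative (the risk names, probabilities and impact scores are invented); ranking by exposure, probability times impact, is one common way to decide where mitigation effort should go first.

```python
# Minimal risk-register sketch; risks and numbers are invented for illustration.
risks = [
    {"name": "scope creep",            "probability": 0.6, "impact": 8},
    {"name": "key developer leaves",   "probability": 0.2, "impact": 9},
    {"name": "third-party API change", "probability": 0.4, "impact": 5},
]

def exposure(risk):
    """Assessment step: expected loss = likelihood (0-1) times impact score."""
    return risk["probability"] * risk["impact"]

# Monitoring step: review the register periodically, highest exposure first.
for risk in sorted(risks, key=exposure, reverse=True):
    print(f"{risk['name']}: exposure {exposure(risk):.1f}")
```

Here scope creep ranks first (0.6 × 8 = 4.8) even though losing a key developer has the larger impact, because its likelihood is so much higher; that is exactly the trade-off the assessment step is meant to surface.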


4. Efficiency and Productivity
Increasing productivity and efficiency is one of the main goals of software engineering.
This includes:

™™ Best Practices: Using tried-and-true procedures and methods.


™™ Automation: The use of technologies to automate repetitive jobs, such as
deployment and testing.

™™ Reuse: Using libraries and frameworks to apply pre-existing solutions, which can
expedite development.
™™ Agile methods: Using iterative and incremental development procedures.

These techniques expedite software delivery, minimise development time and improve
resource utilisation.
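As a sketch of the automation point, the script below runs a test suite programmatically and only proceeds to a simulated deployment step when every test passes, replacing a manual check-then-release routine. The suite and the `deploy` step are placeholders invented for illustration, not from the text.

```python
# Sketch of automating a repetitive release chore (all names illustrative):
# gate deployment on the automated test result instead of a manual check.
import unittest

class SmokeTests(unittest.TestCase):
    """A stand-in suite; a real project would load its full test package."""
    def test_arithmetic_sanity(self):
        self.assertEqual(2 + 2, 4)

def deploy():
    # Placeholder for the real deployment step (copying a build, etc.).
    print("tests passed; deploying build")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(SmokeTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
if result.wasSuccessful():
    deploy()
else:
    print("tests failed; deployment blocked")
```

Because the gate is a script, it can run on every change, which is the idea behind the agile and continuous-integration practices mentioned above.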

5. Maintenance and Scalability
Software needs to be scalable and maintainable in order to accommodate expanding
user bases and shifting requirements. Software engineering achieves this through:
™™ Modular design: Designing components that can be changed or replaced
independently.

™™ Documentation: supplying thorough and understandable documentation.
™™ Coding standards: Enforcing uniform coding techniques.
Software can be made to last longer and have more value by scaling and maintaining it
properly.
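Modular design can be sketched in a few lines: callers depend on an interface, so one implementation can be replaced by another without touching the rest of the system. All class and function names here are illustrative, invented for the sketch.

```python
# Modular design sketch: depend on an interface, not a concrete class.
from abc import ABC, abstractmethod

class Storage(ABC):
    """Stable interface: callers code against this, not an implementation."""
    @abstractmethod
    def save(self, key, value): ...
    @abstractmethod
    def load(self, key): ...

class InMemoryStorage(Storage):
    """One interchangeable module; a database-backed class could replace it
    without any change to the code that uses Storage."""
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data[key]

def remember_user(storage, name):
    # Depends only on the Storage interface, so maintenance stays local.
    storage.save("user", name)

store = InMemoryStorage()
remember_user(store, "Ada")
print(store.load("user"))  # Ada
```

Swapping `InMemoryStorage` for another `Storage` implementation changes one line at the composition point, which is what keeps maintenance cheap as the system grows.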
6. Customer Satisfaction
Software engineering must be driven by customer satisfaction. This includes:

™™ Requirement gathering: Understanding and recording user demands.
™™ Iterative Development: Constantly enhancing the program in response to user
input.
™™ Usability testing: Making sure the program is easy to use.
Software engineering guarantees that the end product meets user expectations and
provides a good experience by concentrating on customer demands and input.

7. Cost-Effectiveness
Saving money without sacrificing quality is an important objective. This is accomplished
by software engineering through:

™™ Efficient resource management: Optimising the use of talent, time and resources.
™™ Error prevention: Identifying and resolving problems early in the development
process.
™™ Reuse: Making use of already written code and parts.
Cost-effective methods guarantee that projects are completed on time and to the highest
possible standard.

8. Innovation and Competitiveness


Software engineering fosters innovation and competitiveness through:
™™ Adopting new technologies: Using state-of-the-art instruments and technologies.
™™ Continuous Learning: Promoting lifelong learning and skill improvement.

™™ Experimentation: Giving opportunities for innovative ideas and trial and error.


Software engineering keeps businesses competitive and relevant by fostering
innovation.

The Importance of Software Engineering
Beyond specific projects, software engineering affects the larger technological
ecosystem and many facets of contemporary society. Some of the key areas of significance
are as follows:
1. Managing Complexity

These days, software systems can have millions of lines of code, interact with various
components and integrate with other systems. They are extremely sophisticated. Software

engineering offers the instruments and processes necessary to efficiently handle this
complexity, guaranteeing that even the most complex systems are trustworthy, intelligible
and maintainable.

2. Ensuring Quality and Reliability
Software quality and dependability are crucial, particularly for mission-critical applications
like banking, healthcare and transportation. Practices in software engineering, such
as thorough testing, code reviews and quality assurance procedures, guarantee that

software operates dependably in any situation and lower the possibility of malfunctions
that could have detrimental effects.
3. Facilitating Collaboration

Large teams are frequently involved in software development, sometimes dispersed
over several time zones and countries. Software engineering uses technologies like
version control systems, documentation and standard procedures to foster productive
teamwork. This lets team members collaborate more easily, with less confusion and
fewer mistakes.
4. Enhancing Efficiency and Productivity
The productivity and efficiency of development teams are increased by software

engineering through the use of best practices and automation. Teams are able to
produce more value in less time thanks to this, which also cuts down on development
time and expense. In particular, agile approaches place a strong emphasis on iterative

development and continual improvement, which speeds up the production of high-caliber


software.
5. Supporting Maintenance and Scalability

In order to adapt to changing user needs and support expanding user bases, software
must change over time. Software engineering ensures that programs are
created with maintainability and scalability in mind, which makes it simpler to upgrade
and expand them as needed. In addition to ensuring that software can expand with its
users, this lowers the long-term cost of ownership.

6. Reducing Risk
Software projects are by their very nature risky, with high failure rates attributable

to things like scope creep, overspending and technological difficulties. A framework


for recognising, evaluating and reducing risks at every stage of the project lifecycle is
offered by software engineering, which raises the possibility that the project will succeed.
Software engineering guarantees that projects are completed on schedule, within budget
and in accordance with their goals by skillfully managing risks.

7. Meeting Legal and Ethical Standards


Legal and ethical requirements for software development must be met, particularly
in regulated sectors like government, finance and healthcare. By integrating best
practices for security, privacy and ethical issues into the development process, software

engineering ensures adherence to these standards. This guarantees software safety
and responsibility and helps prevent legal problems.
8. Driving Innovation

To remain competitive in a continuously evolving technology context, innovation is
essential. By promoting the adoption of novel technologies, techniques and practices,
software engineering drives innovation. This helps companies stay ahead
of the curve and develop ground-breaking solutions that have the power to change
industries and enhance people’s lives.

The significance and purpose of software engineering cannot be overstated. It offers
the framework and techniques required to manage the inherent complexity and risks of
software projects and produce software that is dependable, efficient and of high quality.

Software engineering ensures that software satisfies user needs and expectations, which in
turn fosters customer satisfaction and company success.
Additionally, software engineering fosters innovation and competitiveness by

empowering businesses to use cutting-edge techniques and new technology to produce
innovative solutions. Software engineering plays a more important role than ever in a world
where software is used more and more, influencing the direction of technology and how it
affects society.

Since producing high-quality software is frequently best achieved by methodical and
organised work practices, software engineers generally take this approach to their job. But
engineering is all about choosing the best way for a given set of conditions, so in some
cases, a less formal, more creative approach to development might work. For the creation
of web-based systems, which calls for a combination of software and graphical design
talents, less formal development is especially appropriate.
Software engineering is important for two reasons:

1. People and society as a whole depend more and more on sophisticated software
systems. We must be able to swiftly and affordably develop dependable and trustworthy
systems.

2. Using software engineering approaches and techniques for software systems is typically
less expensive in the long term than writing the programs as though they were personal
programming projects. The majority of expenses for most kinds of systems come from

maintaining the software after it has been put into operation.


A software process is another term for the methodical technique used in software
engineering: a series of actions that results in the creation of a software product. Every
software process involves four essential activities:


●● Software specifications are created by engineers and customers to specify the type of
software to be developed and the limitations placed on it.

●● Software development is the process of designing and programming software.


●● Software validation is the process of confirming that the program satisfies the needs of
the user.
●● Software evolution is the process of altering the program to meet the needs of the

market and customers as they change.


Different development procedures are required for different kinds of systems. For
instance, all specifications for real-time software in aeroplanes must be complete before
development starts. The specification and the program are typically established in tandem for
e-commerce systems. As a result, depending on the kind of software being produced, these
basic operations may be arranged differently and explained at varying depths.
Systems engineering and computer science are related to software engineering:

1. Software engineering is focused on the real-world issues associated with software

production, while computer science is concerned with the theories and techniques
that underpin computers and software systems. Software developers need to know a
little bit about computer science, just as electrical engineers need to know a little bit

about physics. However, the theory of computer science is typically most useful for very
tiny programs. Large, complicated issues requiring software solutions are not always
amenable to elegant computer science theories.

2. The development and evolution of complex systems, where software plays a significant
role, are all under the purview of system engineering. Thus, in addition to software
engineering, system engineering also deals with policy and process design, hardware
development and system deployment. System engineers design the overall architecture

of the system, specify the components and then integrate them to form the completed
system. Their focus is not as much on the engineering of the hardware, software and
other system components.
A methodical approach to software development, software engineering considers

actual cost, scheduling and reliability concerns in addition to the requirements of software
producers and users. Depending on the program type, the organisation creating it and
the individuals working on it, there are wide variations in the actual implementation of this
methodical approach. It is impossible to find a single set of software engineering best
practices that works for every system in every business. Instead, over the past 50 years, a
wide range of software engineering techniques and resources have emerged.
The kind of application being created is arguably the most essential aspect in choosing

which software engineering approaches and techniques are most useful. There are
numerous varieties of applications, such as:

1. Stand-alone applications: These are application systems that run on a PC or other
local computer. They include all the features you need and don’t require a network
connection. CAD programs, photo editing software, office applications on a PC and so
forth are examples of such applications.

2. Interactive transaction-based applications: These are programs that users access
from their personal computers or terminals and that run on a remote computer. These,
of course, include online apps like e-commerce platforms that let you transact with a
distant system to make purchases of goods and services. Additionally included in this

category of applications are business systems, which are accessed by a company via a
web browser, specialised client software, or cloud-based services like email and photo
sharing. A sizable data store that is accessed and modified with each transaction is

frequently included in interactive applications.


3. Embedded control systems: These software systems manage and control hardware
devices. Embedded systems are arguably the most prevalent sort of
systems in terms of quantity. A few instances of embedded systems are the operating

systems found in mobile phones, automobiles with anti-lock brake systems and
microwave ovens that regulate the cooking process.
4. Batch processing systems: These are business systems built to handle massive
volumes of data processing. To provide matching outputs, they process a vast number

of distinct inputs. Salary payment systems and periodic billing systems, like phone billing
systems, are two examples of batch systems.
5. Entertainment systems: These are systems designed mostly for personal use, with the
intention of entertaining the user. The majority of these systems are some sort of game. The
primary feature that sets entertainment systems apart is the calibre of user engagement

they provide.
6. Simulation and modelling systems: These are systems that engineers and scientists
have created to simulate real-world events or processes that involve several,

independently interacting elements. These are frequently computationally demanding
and need to be executed on high-performance parallel platforms.
7. Data collection systems: These systems use a collection of sensors to gather data

from their surroundings, which they then transmit to other systems for processing. The
program must communicate with sensors and is frequently put in harsh environments,
like an engine compartment or a remote area.

8. Systems of systems: These are systems made up of several different software
systems. A spreadsheet program is an example of a generic software product that
could be among them. It’s possible that certain systems in the assembly were created
especially for that setting.

Because software has relatively varied features depending on the type of system, you
utilise distinct software engineering methodologies for each type of system. For instance,
a car’s embedded control system, which is built in the vehicle and is burned into ROM, is
crucial to safety. Changes are consequently exceedingly costly. Such a system requires a
great deal of verification and validation in order to reduce the likelihood of having to recall
automobiles after they are sold in order to address software issues. There is no need to
employ a development method that depends on user interface prototyping because there is
little to no user contact.

Given that a web-based system is made up of reusable components, an iterative


approach to development and delivery may be suitable. For a system of systems, on the

other hand, where precise specifications of the system interactions must be established
beforehand in order for each system to be constructed independently, such an approach
might not be feasible. However, several principles of software engineering are universally
applicable to all kinds of software systems:

1. They ought to be created through a controlled and well-understood development


process. The company creating the software should have a well-defined plan for the
development process, as well as a concept of what it will deliver and when. Naturally,
different software kinds require different procedures.

2. Performance and dependability are crucial for all kinds of systems. Software ought
to function as intended, not malfunction and be accessible when needed. It ought to
operate safely and, to the greatest extent feasible, be protected from outside threats.

The system ought to be resource-efficient and operate well.


3. It’s critical to comprehend and handle the requirements and software specification, which
outline what the program must accomplish. To produce an effective system on time and
within budget, you must manage the expectations of various clients and system users.

You must also understand what they want from the system.
4. Utilise the resources that are now available as efficiently as you can. This implies that
you should write new software less often and reuse existing software when it makes
sense.

Software Engineering Ethics
Similar to other engineering specialties, software engineering operates inside a legal
and social framework that restricts the independence of those who work in the field. It is

imperative for software engineers to acknowledge that their work encompasses more than
just utilising technical expertise. If you want to be acknowledged as a professional engineer,

you must also act morally and ethically.
Naturally, you should maintain the highest standards of integrity and honesty. It is not

appropriate for you to use your expertise to act dishonestly or in a way that will damage the
reputation of the software engineering industry. But in other domains, legal constraints on
acceptable behaviour give way to the fuzzier concept of professional responsibility. Among

them are:
1. Confidentiality: Whether or not a formal confidentiality agreement has been signed,
you should typically maintain the confidentiality of your clients or employers.

2. Competence: It is improper for you to exaggerate your level of expertise. You should
not knowingly take on tasks beyond your area of competence.
3. Intellectual property rights: You should be aware of local regulations pertaining to the
use of intellectual property, including copyright and patents. It’s important to take
precautions to safeguard employers’ and clients’ intellectual property.
4. Computer misuse: It is not appropriate for you to use your technical expertise to abuse
other people’s computers. Computer misuse can range from very minor (like playing
games on a work computer) to very serious (like spreading viruses or other malware).

1.1.3 Software Crisis and Causes


The term software crisis, which was coined in the latter part of the 1960s, describes

the variety of issues that surfaced as software systems grew increasingly sophisticated and
widespread. Project failures on a regular basis, budget overruns, missed deadlines and
software that was difficult to maintain or unstable were characteristics of this crisis. Software

engineering emerged as a profession to address these concerns, once the field’s failure to
manage software development successfully became apparent as software started to play a
major role in a variety of sectors.

Historical Context
Following the 1968 NATO Software Engineering Conference, which raised awareness
of the increasing challenges in software development, the phrase software crisis gained

popularity. The demand for software in this era exceeded the capacity to generate it
properly and efficiently. Projects frequently fell short of expectations, resulting in large
financial losses and eroding trust in the abilities of the software sector.

We have been dealing with the software dilemma since the 1970s. According to the
most recent IBM data, there are 94 project restarts for every 100 completed projects, 53%
overrun their cost estimates by an average of 189% and 31% of the projects are cancelled
before they are completed.

Numerous industry watchers, myself included, have labelled the issues surrounding
software development as a crisis. Numerous books have detailed the consequences of
some of the most notable software malfunctions that have happened in the last ten years.
However, many have questioned whether the phrase software crisis is still suitable given


the software industry’s remarkable achievements. Among those with a change of heart
is Robert Glass, the author of several books on software disasters. “One sees exception
reporting, spectacular failures in the midst of many successes, a cup that is nearly full,” the
speaker says in reference to his failure stories.
It’s true that those who work in software succeed more often than not. It is also true

that there never seemed to be a software crisis, despite predictions made thirty years
ago. It’s possible that what we actually have is quite different. A crisis is described as a
turning point in the course of anything; decisive or crucial time, stage, or event in Webster’s

Dictionary. However, there has been no turning point, no decisive time, just gradual,
evolutionary development interspersed with rapid technological advancements in software-
related fields in terms of overall software quality and the rate at which computer-based

systems and products are built.
The turning point in the course of a disease, when it becomes clear whether the
patient will live or die is another definition of the word crisis. This definition might help us

understand the true nature of the issues that have bedevilled the software development
industry. It might be more accurate to describe what we actually have as a chronic affliction.
According to definitions, an ailment is anything causing pain or distress. The crux of our
argument, however, is in the definition of the term chronic: lasting a long time or recurring

often; continuing indefinitely. The issues we have faced in the software industry are far
better described as a chronic affliction than a crisis.
The software crisis refers to the collection of issues that arise during the software
development process. We will now discuss the problems and causes of the software
crisis that arise at various phases of the software development lifecycle.

Problems

●● Estimates of schedule and cost are frequently wildly off.


●● Software professionals’ productivity hasn’t kept up with the demand for their services.

●● Software is sometimes of inadequate quality.


●● The effectiveness of new tools, procedures, or standards cannot be properly assessed
in the absence of a reliable productivity indicator.

●● There is frequently a lack of communication between software developers and


customers.
●● Maintenance tasks consume the majority of software budgets.

Causes
●● Because most software developers use past data when developing their programs, the
quality of the software is poor.

●● The scheduling does not correspond with the real timing if there is a delay in any of the
processes or stages (such as analysis, design, coding and testing).
●● Misunderstanding the unique qualities of software and the issues surrounding its
development can lead to communication breakdowns between management and

customers, software developers, support personnel, etc.


●● The software engineers in charge of realising this potential frequently resist change,
even when it is proposed and debated.

Software Crisis from the Programmer’s Point-of-View
●● Compatibility issue.
●● The portability issue.
●● Issue with the documentation.

●● Software piracy is an issue.
●● Difficulty coordinating the efforts of several persons.
●● Issue with appropriate upkeep.

Software Crisis from the User’s Point-of-View
●● The price of software is very high.

●● Hardware malfunctions.
●● Lack of development-related specialisation.
●● Problem with several software versions.

●● Issue of perspectives.
●● Issue with bugs.

Key Problems of the Software Crisis


1. Project Failures
The high percentage of project failures was one of the software crisis’ most obvious
signs. Numerous software projects were overbudget, delivered late, or failed to reach
their goals. The ubiquitous nature of these challenges was highlighted by high-profile
failures like the London Stock Exchange’s Taurus project and the automation system of
the U.S. Internal Revenue Service.

2. Budget Overruns and Delays


Schedule delays and budget overruns were common, frequently the consequence of
underestimating complexity and breadth. For example, the CHAOS report from the

Standish Group has always shown that a large portion of projects were completed over
budget, with very few being completed on schedule.
3. Poor Quality and Reliability

Delivered software was frequently unreliable and of inadequate quality. Systems were
often unstable and difficult to maintain, and malfunctions in operation eroded users’
trust in the software sector.

4. Maintenance Challenges
The complexity and size of software systems increased, making maintenance more and
more challenging. Software maintenance expenses frequently outweighed the initial

development costs. Legacy systems have an infamous reputation for being difficult to
maintain and requiring a lot of resources to upgrade and repair. This is especially true in
vital areas like banking and government.
5. Lack of Standardisation

The disarray was exacerbated by the absence of established procedures and methods.
Every development team frequently had its own method, which resulted in discrepancies
and made cooperation challenging. The lack of industry-wide standards made it more
difficult to exchange best practices and information.

6. Skill Shortages
The supply of skilled workers could not keep up with the software industry’s explosive
growth. The talents that were needed and those present in the workforce differed

greatly. Other issues were made worse by this scarcity since unskilled developers found
it difficult to handle challenging projects.

Responses to the Software Crisis
The software crisis sparked numerous responses aimed at enhancing the state of
software development. These responses have had a significant impact on modern
software engineering.
1. Emergence of Software Engineering

The phrase software engineering was first used to express the need for software
development methods that were more methodical and structured, much like those found
in traditional engineering fields. As a result, numerous approaches and best practices

aimed at enhancing project outcomes were created.
2. Methodologies and Models
In order to give software projects an organised framework, new development approaches
and models were presented. One of the first was the Waterfall approach, which placed

a strong emphasis on a sequential design process. Later, more adaptable approaches
such as Agile and DevOps surfaced, encouraging integration, continuous feedback
loops and iterative development.
3. Improved Tools and Technologies
The software crisis has been addressed in large part by technological and tool
advancements such as continuous integration/continuous deployment (CI/CD)

pipelines, version control systems, automated testing tools and integrated development
environments (IDEs).
4. Standardisation and Best Practices

The creation of frameworks and standards like the Capability Maturity Model Integration
(CMMI) and ISO/IEC standards is the result of efforts to standardise procedures and
practices. These offer standards for evaluating and enhancing software development

procedures, guaranteeing increased consistency and quality.


5. Education and Training
Closing the skills gap took a large investment in education and training. Specialised curricula and certification programs created by universities and professional associations now allow software engineers to acquire the skills and knowledge they need.
6. Project Management Techniques
The adoption of advanced project management methodologies has improved the planning, execution and monitoring of software projects. Methods such as the Critical Path Method (CPM) and the Program Evaluation and Review Technique (PERT) have been incorporated into software project management to improve time and resource management.
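The PERT technique mentioned above rests on a simple three-point formula: the expected duration of an activity is (optimistic + 4 × most likely + pessimistic) / 6, with standard deviation (pessimistic − optimistic) / 6. A minimal sketch in Python, using illustrative estimates:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """PERT three-point estimate for one activity: the expected duration
    weights the most likely value, and the standard deviation reflects
    the spread between best and worst cases."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# Illustrative activity: coding a module estimated at 4 / 6 / 14 person-days.
expected, std_dev = pert_estimate(4, 6, 14)
print(f"expected: {expected:.1f} days, std dev: {std_dev:.2f} days")
```

Summing the expected durations along the longest chain of dependent activities gives the critical path that CPM analyses.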

Impact of Addressing the Software Crisis


The software sector has improved significantly as a result of the coordinated efforts to overcome the software crisis. Although difficulties remain, improvements in methods, tools and training have produced software systems that are more dependable, effective and easily maintained.

Amity University Online

1. Enhanced Project Success Rates
Project success rates have increased significantly. Difficulties such as budget overruns and delays have declined in frequency and severity, although they have not completely disappeared. According to the Standish Group's CHAOS reports, the number of projects completed on schedule and within budget has been steadily increasing.
2. Higher Quality Software
The emphasis on testing and quality assurance has made software more reliable. Thanks to automated testing, continuous integration and strict quality assurance procedures, software is tested extensively before deployment, which lowers the frequency of errors and vulnerabilities.

3. Better Maintenance and Scalability
Software is now more scalable and maintainable thanks to advancements in design concepts such as modularity and conformance to coding standards. This ensures that systems do not become cumbersome as user bases expand and requirements change.

4. Standardised Practices
The introduction of standardised methods and frameworks has made the software development process more consistent and predictable. Standards such as CMMI and ISO/IEC have improved overall process maturity by providing benchmarks, allowing organisations to aim higher.
5. Skilled Workforce
The increased emphasis on education and professional development has produced a more skilled workforce. Improved training courses and certification programs leave today's software engineers better prepared to tackle the intricacies of contemporary software development.
Ongoing Challenges
Despite notable advancements, the software sector still faces difficulties. The rapid pace of technological advancement, the growing complexity of software systems and the changing expectations of users pose constant challenges.
1. Rapid Technological Change
Because technology evolves rapidly, software engineers must constantly refresh their skills and adjust to new tools, languages and frameworks. This calls for a dedication to lifelong learning as well as adaptability when adopting new practices.
2. Complexity of Modern Systems
Software systems grow increasingly sophisticated as they become more integrated and networked. Managing this complexity requires robust architectures, sophisticated tool sets and an in-depth knowledge of interdependencies and system interactions.
3. Security Concerns
Security remains a major concern, particularly in light of the growing sophistication of cyberthreats. Keeping software safe from flaws and attacks requires sophisticated security procedures and constant attention.
4. User Expectations
Today's users expect software that is responsive, easy to use and constantly updated with new features. Meeting these demands requires agile development techniques and a significant emphasis on user experience (UX) design.
5. Ethical and Legal Issues
Ethical and legal concerns, such as the ethical use of artificial intelligence (AI), algorithmic bias and data privacy, are becoming increasingly intertwined with software development. Navigating them requires a thorough grasp of ethics and adherence to the law.

Future Directions
To solve these persistent issues, software engineering will probably continue to evolve its methodologies, tools and practices. Important areas of attention could be:
1. Advanced Automation
Advanced automation methods, such as AI and machine learning, can help with maintenance activities, enhance testing and quality assurance and further expedite software development processes.
2. Enhanced Security Practices
Cybersecurity procedures must evolve along with cyberthreats. This calls for integrating security across the software development lifecycle (DevSecOps) as well as deploying new technologies for threat detection and mitigation.

3. User-Centric Development
Greater priority on user-centric development techniques, such as improved UX design and efficient methods for obtaining and incorporating user feedback, will ensure that software is developed with users' requirements and expectations in mind.
4. Ethical Software Engineering
Ethical norms and frameworks for software engineering must be developed to address the moral and legal issues raised by emerging technology. This involves making AI and other cutting-edge systems transparent, accountable and equitable.
5. Lifelong Learning
Professional development and ongoing education will be more crucial than ever. To stay abreast of the latest developments, software engineers will need access to continual training and development opportunities from organisations and educational institutions.

1.1.4 Software Engineering’s Responsibility
Software engineering goes well beyond coding: it covers the entire software development lifecycle, from concept through implementation, testing and deployment to maintenance and retirement. Because software systems are complex and play a vital role in modern society, software engineers carry a wide range of duties. This section explores those duties and looks at how software engineering affects business, technology, ethics and society at large.

Figure: Software Engineer's Responsibilities

Ensuring Quality and Reliability
One of the main duties of software engineers is ensuring the quality and dependability of software products. This entails following standards and best practices at every stage of the software development lifecycle. Engineers are responsible for building robust systems that can manage anticipated loads and work effectively and dependably in a variety of scenarios. Finding and fixing problems early requires thorough testing at every level, from unit tests to system integration tests. Deployment pipelines, continuous integration and automated testing tools help maintain high quality and consistency. Engineers should also establish comprehensive quality assurance procedures, such as code reviews and pair programming, to identify potential problems and foster an environment of excellence.
ni

Addressing Security Concerns
Given the growing frequency of cyberattacks, software engineers have a key duty when it comes to security. To safeguard software systems against unauthorised access, data breaches and other vulnerabilities, engineers must plan and execute security procedures. Security best practices include adopting secure coding methods, encryption and frequent security audits. To protect software systems, engineers need to keep up with new risks and continually improve their knowledge and abilities. Furthermore, by incorporating security into the development process (DevSecOps), security issues are taken into account early on and throughout the software lifecycle.

Ensuring Ethical Practices
Ethical responsibility is a foundation of software engineering. Engineers must uphold ethical standards in their work by taking into account the software's social and moral ramifications. This entails protecting user privacy, preventing algorithmic bias and guaranteeing transparency in data usage. Professional associations such as the ACM and IEEE have codes of ethics that engineers should abide by; these codes provide guidance on upholding honesty, privacy and equity. Software's effect on society is also a matter of ethics, so engineers must weigh the possible advantages and disadvantages of their work and strive to produce software that benefits society.
Managing Complexity and Change
Modern software systems are inherently complex, with multiple interdependent components and frequent interconnections with other systems. Engineers are responsible for managing this complexity by applying sound design concepts such as abstraction, modularity and separation of concerns, which make complex systems easier to understand, develop and maintain by breaking them into smaller, more manageable components. Engineers also need to be skilled at dealing with change, whether adjusting to new specifications, advances in technology or changing user demands. Ensuring that software evolves in a controlled and predictable manner calls for a strong grasp of agile approaches, version control systems and change management procedures.

Ensuring Maintainability and Scalability
Software engineers must design systems that are maintainable and scalable to enable future updates and expansion. Maintainability rests on writing clear, well-documented code that is simple to read and edit; this entails following coding guidelines, giving variables descriptive names and providing thorough documentation. Scalability, on the other hand, means creating software that can grow and withstand higher loads without a drop in performance, which calls for meticulous architectural design, resource efficiency and both horizontal and vertical scaling. Engineers must also plan for possible bottlenecks and create systems that can adjust to shifting needs.

Facilitating Collaboration and Communication
Collaboration and communication are crucial duties in software engineering. Since engineers frequently work in teams, effective communication is essential for organising tasks, exchanging information and resolving problems. This entails facilitating teamwork through collaborative technologies such as communication platforms, project management software and version control systems. Engineers must also be able to explain intricate technical ideas to non-technical stakeholders so that everyone understands the project's objectives, status and difficulties. Effective communication promotes team cohesion, a cooperative atmosphere and the successful completion of projects.

Upholding Professional Standards
Software engineers have a duty to maintain professional standards in their work. This entails complying with legal and regulatory requirements, following industry best practices and consistently upgrading their knowledge and skills. Engineers should pursue lifelong learning and stay up to date on the newest advancements in both technology and methodology; participating in professional organisations, obtaining certifications, attending conferences and continuing study all contribute to professional development. By adhering to these standards, engineers can ensure that they deliver software that is secure, dependable and meets user and stakeholder needs.

Balancing Innovation and Practicality
Innovation drives software engineering, but engineers must balance it with pragmatism. This means adopting fresh approaches and technologies to enhance software development while making sure they are workable and beneficial for the current project. Engineers need to weigh the benefits and drawbacks of novel strategies, taking performance, security and maintainability into account. Striking this balance lets projects take advantage of the latest developments without sacrificing dependability or quality. To spur future innovation, engineers must also be willing to experiment, learn from mistakes and apply the lessons learned.

Commitment to User Experience
Software engineers are responsible for delivering a positive user experience (UX). This entails creating user-friendly, accessible and intuitive interfaces that satisfy users' requirements and expectations. To better understand user demands and adjust the software accordingly, engineers must perform usability testing, user research and feedback gathering. This dedication to UX includes ensuring that software is accessible to all users, including those with impairments. Engineers who put the user experience first produce software that is not just efficient and useful, but also pleasurable to use.

Managing Project Constraints
Software projects commonly face budget, schedule and resource constraints. Engineers must manage these constraints properly to guarantee that projects are finished on schedule, within budget and to the necessary quality standards. This calls for meticulous planning, precise resource estimation and the ability to make trade-offs as needed. Engineers need to be skilled at setting priorities, controlling risks and staying focused on the project's goals. Effective project management keeps constraints in balance and delivers value to stakeholders.
Software engineers have a wide range of obligations spanning professional, ethical and technical dimensions. They must manage complexity and change, promote teamwork, maintain professional standards and guarantee the quality, dependability and security of software systems. Important facets of their work include managing project constraints, committing to user experience and striking a balance between creativity and pragmatism. As long as software plays a crucial part in contemporary society, software engineers' duties will only grow in significance; they will be responsible for shaping the direction of technology and its global effects.
Software engineering has come to be recognised globally as a profession. As professionals, software engineers should follow an ethical code that directs their work and what they create. An ACM/IEEE-CS Joint Task Force has developed a Software Engineering Code of Ethics and Professional Practices (Version 5.1). According to the code, software engineers must commit themselves to making the analysis, specification, design, development, testing and maintenance of software a beneficial and respected profession. In accordance with their responsibility to the public's health, safety and welfare, software engineers shall abide by the following eight principles:


1. Public—The public interest shall guide the actions of software engineers.
2. Client and employer—Software developers are expected to behave in a way that serves
the public interest while acting in the best interests of their employers and clients.
3. Product—Software engineers are responsible for making sure that their creations and
any associated changes adhere to the strictest industry standards.
4. Judgment—It is expected of software engineers to exercise professional judgement with independence and honesty.
5. Management—Managers and leaders in software engineering must support and adhere
to an ethical management philosophy for software development and upkeep.
6. Profession—In line with the public interest, software engineers should enhance the
integrity and credibility of their profession.
7. Colleagues—Software developers are expected to treat their peers fairly and to
encourage them.

8. Self—Software engineers are expected to support an ethical attitude to the practice of their profession and engage in lifelong learning about it.
Although each of these eight principles carries equal weight, a recurring theme emerges: a software engineer ought to act in the public interest. A software engineer should personally follow these guidelines:
●● Never take data for yourself.
●● Never sell or disseminate confidential knowledge acquired while working on a software project.
●● Never intentionally alter or destroy another person's files, programs or data.

●● Never infringe on the privacy of a person, group or organisation.
●● Never breach a system for financial gain or amusement.
●● Never produce or distribute computer worms or viruses.

●● Never use computer technology to support harassment or prejudice.
In the last ten years, some software industry players have advocated for laws that would: (1) permit companies to release software without revealing known flaws; (2) shield developers from liability for any harm caused by these known flaws; (3) prevent third parties from revealing flaws without the original developer's consent; (4) permit the inclusion of self-help software in products that can be remotely disabled; and (5) shield developers of self-help software from liability should a third party disable the software.
As with any legislation, political rather than technological considerations are frequently at the centre of the discussion. But many people, ourselves included, believe that poorly written protective legislation violates the software engineering code of ethics because it subtly absolves software developers of their duty to create high-quality software. In light of the significant social media data breaches of 2018, companies that store large amounts of sensitive consumer data will face demands for stronger security measures.
Because autonomous systems have a growing capacity for decision-making and an impact on our day-to-day lives, we should consider the values they embody. The field of software engineering must investigate methods for quantifying bias in social networks and search algorithms. The updated ACM Code of Ethics and Professional Conduct includes a number of new principles that address concerns with particular computer technologies, such as artificial intelligence (AI), machine learning and autonomous machines making morally significant judgements. The ACM and IEEE will probably take these developments into account when updating their Software Engineering Code of Ethics.
1.1.5 Fundamental Qualities of Software Products
Almost every facet of contemporary life, including communication, entertainment and healthcare, depends on software products. Software's success and efficacy rest on a number of core characteristics that guarantee the program will function consistently, safely and in accordance with user needs. This section explores the fundamental characteristics of high-quality software products, looking at their importance, application and effects on developers and consumers alike.


By definition, an engineering discipline is guided by the pragmatic constraints of budget, time and quality. It may not be acceptable to implement a solution that requires excessive resources and time; likewise, an inexpensive, low-quality solution may not be very helpful. As in all engineering disciplines, the three main driving factors of software engineering are cost, schedule and quality.
The development cost of a system is the cost of the resources necessary to create it; for software, this cost is driven mostly by labour, because software development is a labour-intensive process. For this reason, the cost of a software project is frequently expressed in person-months, the total number of person-months the project required. (Person-months can be converted into monetary terms by multiplying them by the average dollar cost of one person-month, which includes overhead expenses such as hardware and tools.)

Figure: Software quality attributes.

Schedule is a significant consideration in many projects. Business trends demand a shorter time to market, that is, a shorter cycle time from conception to delivery. This implies that software has to be built more quickly.
A person's productivity, measured as output (KLOC) per month, effectively accounts for both scheduling and cost concerns. Higher productivity translates into lower person-month costs, since fewer person-months are needed to complete the same task. Similarly, higher productivity makes it possible to produce software faster: a team with higher productivity will complete a task sooner than a team of the same size with lower productivity. (Of course, the number of workers assigned to the project also affects how long it actually takes.) In other words, productivity is a primary motivator for all firms, and the pursuit of high productivity heavily influences how work is carried out.
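The effort-and-cost relationship described above can be put into a few lines. The sketch below uses purely illustrative figures (not industry benchmarks) for productivity and the loaded person-month rate:

```python
def effort_and_cost(size_kloc, productivity_kloc_per_pm, rate_per_pm):
    """Effort in person-months is size divided by productivity; cost is
    effort times the loaded cost of one person-month (overheads included
    in the rate). All figures passed in below are illustrative."""
    person_months = size_kloc / productivity_kloc_per_pm
    return person_months, person_months * rate_per_pm

effort, cost = effort_and_cost(size_kloc=50, productivity_kloc_per_pm=1.0,
                               rate_per_pm=10_000)
print(f"effort: {effort:.0f} person-months, cost: {cost:,.0f}")
```

Doubling productivity in this model halves both the effort and the cost of the same 50 KLOC system.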
Quality is the other main driver of any production discipline. Quality is now a key component of business strategy and serves as a watchword. Undoubtedly, producing high-quality software is one of software engineering's main objectives.
As illustrated in the figure above, the quality model defined by this standard states that software quality is composed of six primary characteristics. These six characteristics are regarded as fundamental, and their detailed attributes can be measured using appropriate metrics. The characteristics of a software product can be summed up as follows:
●● Functionality. The ability of the program to fulfil explicit and implicit needs when used.
●● Reliability. The capacity to maintain a specified level of performance.
●● Usability. The capacity to be understood, learned and used.
●● Efficiency. The capacity to deliver performance appropriate to the quantity of resources employed.
●● Maintainability. The ability to be altered in order to make corrections, enhancements or other modifications.
●● Portability. The ability to be transferred to various specified environments without requiring actions or means beyond those included in the product.


The sub-characteristics provide more detail on the various properties. For instance, understandability, learnability and operability are attributes of usability; changeability, testability, stability and so on are attributes of maintainability; adaptability, installability and the like are attributes of portability. Suitability (whether the right set of functions is offered), accuracy (whether the outcomes are correct) and security are attributes of functionality. It should be noted that in this classification, security is described as the capability to protect information and data so that authorised persons or systems are not denied access to them and unauthorised persons or systems cannot read or modify them; security is thus viewed as a facet of functionality.

The existence of several dimensions of quality has two significant implications. First, a single number (or parameter) cannot adequately describe the quality of software. Second, quality is a project-specific notion: usability may be more important than reliability in a commercial PC gaming package, while reliability may be more important than usability in an extremely sensitive project. Therefore, the first step in any software development project should be to define a quality objective, and the aim of the development process should be to achieve it.
Despite the existence of numerous other quality factors, reliability is widely acknowledged as the primary quality requirement. Since software defects are the root cause of unreliability, one way to assess software quality is to count the number of defects per unit size (usually thousands of lines of code, or KLOC) in the developed software. Given that reliability is the primary quality requirement, the quality objective is to minimise the number of defects per KLOC. Software engineering best practices today have managed to bring the defect density down to fewer than one defect per KLOC.
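The defect-density measure just described is a simple ratio; the sketch below computes it with illustrative figures:

```python
def defect_density(defects, size_loc):
    """Defects per KLOC: the count of known defects divided by the
    system size expressed in thousands of lines of code."""
    return defects / (size_loc / 1000)

# Illustrative figures: 45 defects found in a 60,000-line system.
print(f"{defect_density(45, 60_000):.2f} defects/KLOC")  # 0.75 defects/KLOC
```

A value below 1.0, as here, meets the "fewer than one defect per KLOC" benchmark mentioned above.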
It should be noted that using this definition of quality requires a precise definition of a defect. A software defect could be anything from a glitch that crashes the program to a misaligned output or a misspelt word. Exactly what constitutes a defect is determined by the project or by the standards the developing organisation adopts (usually the latter).

Functionality
Functionality, a software product's most fundamental quality, is its capacity to carry out the purposes for which it was created. This includes correctness, completeness and appropriateness:
●● Correctness: The program must accurately carry out its intended functions. This calls for precise code, careful requirements analysis and extensive testing to make sure every feature performs as planned.
●● Completeness: The software should implement all necessary features and functionalities, meeting every aspect of the user's needs.
●● Appropriateness: The features and functionalities offered must be pertinent to the user's needs and free of superfluous or extraneous options.
Ensuring functionality requires careful requirements gathering, detailed design and extensive testing. Functional requirements should be thoroughly documented and traceable throughout the development process to guarantee that the finished product serves its intended purpose.
product serves the intended purpose.
Reliability
Reliability is the capacity of software to function consistently and without failure under given conditions. Important facets of reliability include:
●● Availability: The program should be accessible and functional when needed. Reducing downtime requires sound design and efficient maintenance procedures.
●● Fault Tolerance: The software's capacity to carry on even when parts of the system fail. This entails building failover and redundancy mechanisms into systems.
●● Recovery: When something goes wrong, the program should be able to recover quickly and robustly. This includes putting appropriate backup and restore processes in place.
Rigorous testing, including failure mode analysis, load testing and stress testing, is necessary to ensure reliability. Frequent updates and ongoing monitoring help maintain high reliability.
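The availability facet above is commonly quantified from two measured quantities, mean time between failures (MTBF) and mean time to repair (MTTR); a minimal sketch with illustrative numbers:

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability: the fraction of time the system is up,
    computed from mean time between failures (MTBF) and mean time to
    repair (MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative: a failure every 500 hours on average, 2 hours to restore.
print(f"availability: {availability(500, 2):.4%}")
```

Shrinking MTTR (faster recovery) improves availability just as effectively as stretching MTBF (fewer failures), which is why the recovery facet above matters alongside fault tolerance.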

Usability
Usability makes software user-friendly and ensures a satisfying user experience. It includes multiple dimensions:
●● Learnability: How easy it is for inexperienced users to learn to use the program.
●● Efficiency: How quickly and easily users can perform tasks once they have learned to use the program.
●● Memorability: How easily users can regain proficiency after being away from the software for a while.
●● Errors: The frequency and severity of user errors, and how easily users can recover from them.
●● Satisfaction: How pleasant the software is to use.
User-centred design improves usability by understanding user requirements and preferences, conducting iterative testing with real users and integrating user feedback into the design process. Usability testing, including A/B testing and heuristic evaluation, is crucial for finding and fixing usability problems.
Efficiency
Efficiency is the degree to which software makes optimal use of system resources to carry out its tasks. This comprises:
●● Performance: How quickly the program completes its work. Both operational efficiency and user satisfaction depend on high performance.
●● Resource Utilisation: How the program uses the system's disc space, RAM and CPU. Efficient software makes prudent use of these resources, avoiding waste.
●● Scalability: The software's capacity to withstand higher loads without a decrease in performance. This covers both vertical scalability (increasing the capacity of existing machines) and horizontal scalability (adding more machines).
Load balancing, performance optimisation and effective coding techniques are essential to ensure high efficiency. Performance testing and profiling tools help find and fix bottlenecks.
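As a small illustration of measuring rather than guessing, the sketch below uses Python's standard timeit module to compare two implementations of the same task; this is the simplest form of the performance measurement mentioned above:

```python
import timeit

# Two ways of building the same list of squares; timing them locates the
# cheaper one by measurement instead of intuition.
loop_stmt = "result = []\nfor i in range(1000):\n    result.append(i * i)"
comp_stmt = "[i * i for i in range(1000)]"

loop_time = timeit.timeit(loop_stmt, number=500)
comp_time = timeit.timeit(comp_stmt, number=500)
print(f"loop: {loop_time:.4f}s  comprehension: {comp_time:.4f}s")
```

For whole programs rather than single statements, a profiler such as the standard cProfile module serves the same purpose at function granularity.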

Maintainability
Maintainability is the ease with which software can be understood, corrected and enhanced over time. This comprises:
●● Modularity: The program is broken into separate modules that can be independently developed and tested.
●● Reusability: The degree to which software components can be reused in other projects or environments.
●● Analyzability: How easily developers can understand the software in order to identify and correct defects.
●● Modifiability: How easily the software can be modified to add new features or enhance existing ones.
●● Testability: How easily the software can be tested to make sure it functions as intended after changes.
Clean, modular design, compliance with coding standards, extensive documentation and rigorous testing procedures all contribute to good maintainability. Refactoring and code reviews maintain code quality over time.

Portability
Portability is the ease with which software can be moved from one environment to another. This comprises:
●● Adaptability: The software's ability to work well in many different situations. This entails using common libraries and tools and writing platform-independent code.
●● Installability: How easy it is to install and set up the software in a new environment.
●● Replaceability: The software's capacity to be replaced by a different system with minimal disruption.
Using high-level programming languages, following industry guidelines and avoiding platform-specific dependencies all help improve portability. Thorough testing in various environments ensures that the program operates as intended.
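As a small illustration of platform-independent code, the sketch below uses Python's standard pathlib module to build a file path without hard-coding separators; the directory layout shown is purely hypothetical:

```python
from pathlib import Path

# pathlib joins path components with the correct separator for the host
# operating system, so the same line runs unchanged on Windows, macOS
# and Linux. ".myapp" and "settings.ini" are illustrative names.
settings = Path.home() / ".myapp" / "settings.ini"

print(settings.name)    # the file name, independent of platform
print(settings.suffix)  # the extension, here ".ini"
```

Hard-coding "C:\\..." or "/home/..." style paths, by contrast, ties the program to one platform and hurts portability.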

Security
Security is the fundamental attribute that shields software and its data from unwanted
access, usage, disclosure, disruption, alteration and destruction. Important elements
consist of:
●● Confidentiality: Maintaining data confidentiality means limiting access to only those
who are permitted.
●● Integrity: Making certain that information is accurate and hasn’t been altered.
●● Availability: Making certain that resources and data are accessible to authorised users
when required.
●● Authentication: Verifying the identity of users and systems.
●● Authorisation: Verifying that users have the appropriate permission to carry out
actions.
●● Non-repudiation: Recording actions taken within the system so that they cannot later
be denied.

Implementing encryption, access controls, secure coding techniques and routine
security audits are examples of security practices. Vulnerability assessments and
penetration testing aid in locating and resolving possible security problems.
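
One such secure-coding practice, storing salted password hashes rather than plain-text
passwords, can be sketched with Python’s standard library alone. The function names and
the iteration count below are illustrative choices, not a prescribed implementation:

```python
import hashlib
import hmac
import secrets

def hash_password(password, salt=None):
    """Derive a salted hash; the system stores (salt, digest), never the password."""
    salt = salt if salt is not None else secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash and compare in constant time to resist timing attacks."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("s3cret-passphrase")
print(verify_password("s3cret-passphrase", salt, digest))  # True
print(verify_password("wrong-guess", salt, digest))        # False
```

Even if the stored digests leak, the original passwords are not directly disclosed, which
supports the confidentiality and authentication elements listed above.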

Amity University Online
Software Engineering and Modeling

Interoperability
Software’s capacity to communicate with other components or systems is known as
interoperability. For systems that must interface with other hardware or software, this is
essential. Important elements consist of:
●● Compatibility: The program needs to function well across a range of platforms and
systems.
●● Integration: The software’s ease of integration with other systems.
●● Standards Compliance: Following pertinent industry guidelines to guarantee
interoperability and compatibility.

The use of standard protocols, data formats and APIs promotes interoperability.
Extensive testing in a variety of contexts ensures that the program can properly
communicate with other systems.
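
Agreeing on a standard data format is the simplest of these mechanisms. In the sketch
below (the record fields are made up), one system serialises a record as JSON and another
system, possibly written in a different language, reads it back unchanged:

```python
import json

# System A serialises a record into a standard, language-neutral format.
order = {"id": 42, "items": ["keyboard", "mouse"], "total": 59.9}
wire_text = json.dumps(order)

# System B parses the same text; only the JSON standard is shared, not any code.
received = json.loads(wire_text)
print(received["id"], received["total"])  # 42 59.9
```

Nothing about System A’s implementation is exposed to System B; the shared standard is
what makes the exchange possible.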

Adaptability
Software’s ability to adapt to changes in its environment or requirements is referred to
as adaptability. This comprises:
●● Configurability: The capacity of users to alter the program’s settings to meet their
requirements without changing the program’s underlying code.
●● Extensibility: The software’s capacity to add new functions or features.
●● Flexibility: The capacity to adjust to modifications in specifications or operational
settings.

Implementing plug-in architectures, using configuration files and adopting modular design
are ways to achieve adaptability. Frequent upgrades and improvements guarantee the
software’s continued relevance and utility.
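
Configurability, the first element above, is commonly achieved by reading settings from a
user-editable file so that behaviour changes without touching the code. A minimal sketch
with Python’s configparser (the section and key names are invented):

```python
import configparser

# A configuration a user could edit without modifying the program's source code.
CONFIG_TEXT = """
[display]
theme = dark
font_size = 14
"""

parser = configparser.ConfigParser()
parser.read_string(CONFIG_TEXT)

theme = parser.get("display", "theme")
font_size = parser.getint("display", "font_size")
print(theme, font_size)  # dark 14
```

Changing `theme = light` in the file alters the program’s behaviour with no recompilation or
code change, which is precisely what configurability demands.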

Scalability
Software’s scalability refers to its capacity to accommodate growing workloads or easily
expand. This includes:
●● Horizontal Scalability: Adding more machines to handle increasing loads.
●● Vertical Scalability: Giving existing machines more power (CPU, memory) to handle
increasing loads.

Architectural design, load balancing and effective resource management are the keys
to achieving scalability. Testing the software’s performance under varied load scenarios
helps ensure that it scales well.
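
Horizontal scaling can be illustrated with a toy capacity model. The request rates below are
invented, and real systems rarely scale perfectly linearly, but the sketch shows the idea:

```python
def time_to_serve(requests, machines, per_machine_rate):
    """Toy model: seconds needed to serve `requests` when a load balancer
    spreads them evenly across `machines`, each doing `per_machine_rate` req/s."""
    return requests / (machines * per_machine_rate)

# Doubling the machine count (horizontal scaling) halves the time for the same load.
base = time_to_serve(10_000, machines=2, per_machine_rate=50)
scaled = time_to_serve(10_000, machines=4, per_machine_rate=50)
print(base, scaled)  # 100.0 50.0
```

Vertical scaling would instead raise `per_machine_rate` by giving each machine more CPU
or memory; in this simple model both routes increase the same overall capacity term.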

Maintainability
Software must be easily maintainable in order to be updated and enhanced over time.
Important elements consist of:
●● Modularity: Dividing the software into manageable, stand-alone components that may
be independently developed, tested and maintained.
●● Documentation: Providing thorough, understandable documentation to assist
developers in modifying the program.
●● Code Quality: Writing code that complies with coding standards and best practices
while remaining clear, readable and organised.

Practices like code reviews, automated testing and continuous integration support
maintainability. Regular codebase audits and refactoring keep the software maintainable.
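
The interplay of modularity and automated testing can be shown in miniature: a small,
self-contained function is easy to test and change in isolation. The function below is a
stand-in for any independent module:

```python
import unittest

def word_count(text):
    """A small, independent unit that can be developed and tested in isolation."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    """An automated test that documents and protects the unit's behaviour."""

    def test_counts_words(self):
        self.assertEqual(word_count("clean modular code"), 3)

    def test_empty_text(self):
        self.assertEqual(word_count(""), 0)
```

Running `python -m unittest` executes such tests automatically, which is how continuous
integration keeps regressions out of the codebase as it is refactored.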

1.2 Software Life-Cycle Models

The steps and procedures involved in developing software from conception to
retirement are outlined by software life-cycle models, commonly referred to as software
development techniques. These models offer an organised framework for handling
complexity and guaranteeing software project quality. The Waterfall model is a popular life-
cycle model that takes a step-by-step, sequential approach to phases like requirements
analysis, design, implementation, testing and maintenance. In contrast, the Agile
methodology places a strong emphasis on flexibility, teamwork and iterative development,
which enables ongoing feedback and small-scale advancements. Large, complicated
projects can benefit from the Spiral model’s combination of risk management and iterative
development. Every model has benefits and is selected according to the risks, restrictions
and needs of the project. Software engineering teams may efficiently plan, carry out and
deliver successful software projects by using an appropriate life-cycle model.

1.2.1 Software Lifecycle Exploration

The software process outlines the best way to oversee and plan a software
development project while keeping constraints and limitations in mind. A software process is
a set of operations connected by ordering constraints that, when carried out correctly and in
accordance with those constraints, should result in the desired output. The objective
is to provide software of the highest calibre at a fair price. It is obvious that a process is
unacceptable if it is unable to handle large software projects, scale up, or generate high-
quality software.

Large software development firms typically have multiple processes going at
once. Although many of these are unrelated to software engineering, they do affect
software development. Such process models can be categorised as non-software-
engineering models. This category includes training models, social process models and
business process models. Though they fall beyond the purview of software engineering,
these procedures have an effect on software development. A software process is the
procedure that addresses the managerial and technical aspects of software development.
It is obvious
that developing software requires a wide range of tasks. It is preferable to consider the
software process as a collection of component processes, each with a distinct type of
activity, as different kinds of activities are typically carried out by different people. Even
though they obviously cooperate to accomplish the overall software engineering goal,
each of these component processes typically has a distinct objective in mind. A collection
of principles, best practices and recommendations known as the Software Process
Framework delineates high-level software engineering procedures. It makes no mention of
the sequence or method by which these procedures are performed.

A software process outlines a software development methodology. On the other hand,
a software project is a development endeavour that makes use of a software process.
Software products are the end results of a software project. Every software development
project starts with a set of specifications and is anticipated to produce software that satisfies
those specifications by the end. A software process is an abstract sequence of steps that
must be completed to translate user requirements into the final product. The software
process can be thought of as an abstract type and every project is completed using it as
an instance of this type. Put another way, a process may underlie multiple projects, each of
which may result in a multitude of products.

The collection of actions and related outcomes that culminate in a software product is
called a software process. These tasks are primarily completed by software engineers. All
software processes have four basic process actions in common. These pursuits consist of:
●● Software Specification: It is necessary to define the software’s functionality as well as
the limitations that affect how it works.
●● Software Development: It is necessary to build software that complies with the
specification.
●● Software Validation: To make sure the program accomplishes what the user desires, it
must be verified.
●● Software Evolution: As client needs change, software must adapt as well.

These operations are organised differently and are explained in varying degrees of depth
by different software processes. Both the schedule and the outcomes of the various
activities differ. To create the same kind of product, different companies could employ
various procedures. Nonetheless, certain procedures are better suited for particular kinds of
applications than others. The software product that is to be developed will most likely be of
lower quality or less usefulness if an improper process is employed.

A streamlined illustration of a software process, presented from a particular perspective,
is called a software process model. A software process model is an abstraction of the
process it represents, since models are by definition simplifications. Process models might
incorporate tasks associated with software engineering personnel, software products and
activities that are part of the software process.

Example: Here are a few instances of the several kinds of software process models
that could be created:

A Workflow Model: This displays the sequence of activities in the process along with their
inputs, outputs and dependencies. The activities in this model represent human actions.

An Activity or Dataflow Model: This depicts the procedure as a collection of tasks, each
of which transforms data in some way. It demonstrates how an input, like a specification,
gets converted into an output, like a design, during a process. Compared to the activities
in a workflow model, these activities could be lower level. They could represent human or
computer-performed transformations.

An Action/Role Model: This illustrates the responsibilities and tasks of the individuals
working on the software process.

The general models or paradigms of software development vary widely and include:
●● The Waterfall Approach: This portrays the fundamental activities above as distinct
process phases, such as requirements definition, software design, implementation and
testing. Each stage is signed off when it has been completed, at which point work
moves on to the next.
●● Evolutionary Development: The steps of specification, development and validation are
interwoven in this method. From highly abstract specifications, a preliminary system
is quickly created. After receiving feedback from the client, this is improved to create
a system that meets their needs. The system might then be delivered. As an
alternative, it might be reimplemented with a more methodical approach to create a
system that is more reliable and manageable.
●● Formal Transformation: This method is centred on creating a formal mathematical
system specification and turning it into a program by applying mathematical
techniques. Since these transformations preserve correctness, you can be certain that
the created program complies with its specifications.
●● System Assembly from Reusable Components: This methodology presupposes the
existence of certain system components. Rather than creating these components from
the ground up, the system development approach concentrates on their integration.

Characteristics of a Software Model

“What are the qualities that good software should have?” is the first question that every
developer thinks of while designing any kind of software. Before delving into the technical
aspects of any software, we would like to outline the fundamental expectations that users
have. A software solution must, first and foremost, satisfy all end-user or client needs.
The development and maintenance expenses of the program should also be kept to a
minimum. The software development process must be completed in the allotted period.

These, then, were the obvious expectations for any project (recall that software
development is a project in and of itself). Let’s now examine the components of software
quality. The Software Quality Triangle provides a clear explanation for this group of
variables. Three qualities define quality application software:

™™ Operational Characteristics
™™ Transition Characteristics
™™ Revision Characteristics

Operational Characteristics of Software

These elements pertain to the exterior quality of software and are dependent on
functionality. Software has a number of operational characteristics, including:
●● Correctness: Every need specified by the client should be satisfied by the software we
are developing.
●● Usability/Learnability: Less time or effort should be needed to become proficient with
the software. Because of this, even those without any IT experience can easily utilise
the software.
●● Integrity: Software can have side effects, such as affecting how another application
functions, just like medications can. However, good software shouldn’t have any
negative impacts.
●● Reliability: There should be no flaws in the program. In addition, it shouldn’t
malfunction during operation.
●● Efficiency: This quality has to do with how well software makes use of the resources
at its disposal. The program must utilise the storage capacity efficiently and carry out
commands in accordance with the required timing specifications.
●● Security: This element is becoming more significant due to the rise in security risks
in the modern world. The hardware and data shouldn’t be negatively impacted by
the software. It is important to take the right precautions to protect data from outside
dangers.
●● Safety: The program shouldn’t endanger lives or the environment.

Revision Characteristics of Software

These engineering-based variables, such as efficiency, documentation and structure,
are related to the interior quality of the software. Any excellent software should have these
built in. Software’s various revision characteristics include:
●● Maintainability: Any type of user should be able to easily maintain the software.
●● Flexibility: It should be simple to make changes to the software.
●● Extensibility: It ought to be simple to expand the range of tasks it can accomplish.
●● Scalability: It should be fairly simple to upgrade for increased workloads or user
counts.
●● Testability: It should be simple to test the software.
●● Modularity: Software is said to be modular if it is composed of independent parts and
modules. The final software is then created by integrating these parts. Software has
high modularity if it is broken up into independent, discrete components that can be
tested and changed independently.


Transition Characteristics of Software

●● Interoperability: The capacity of software to transparently use and exchange data with
other applications is known as interoperability.
●● Reusability: Software is deemed reusable if its code may be applied to new or different
purposes with only minor alterations.
●● Portability: Software’s portability is demonstrated by its capacity to carry out the same
operations on many platforms and environments.
Each of these criteria has varying degrees of importance depending on the application.

Software Development Life Cycle (SDLC)


These are the models that support the development of the desired program. The SDLC is
the comprehensive and diagrammatic visualisation of the software life cycle. It consists of all
the tasks required to advance a software product through each stage of its life cycle. Stated
differently, it organises the range of tasks carried out on a software product from inception
to retirement. The figure below illustrates the many stages of the SDLC.

Figure: Illustrates various Software Development Life Cycle (SDLC) phases.
https://iaeme.com/MasterAdmin/Journal_uploads/IJARET/VOLUME_11_ISSUE_12/IJARET_11_12_019.pdf

Requirements: Determining the client’s needs is one of the most crucial stages. There
will be multiple review meetings to ensure that the requirements are consistent. Every
review result ought to be recorded and monitored. Both formal and informal interviews
with the appropriate stakeholders of the application are recommended. This will make it
easier for developers to understand exactly what is required of the application. Make sure
to properly record these findings so that the rest of the group is aware of the need. As a
result, it aids in reducing the defects introduced by the requirements alone.

Design: The specifications are transformed into use case diagrams and thorough
business-related design documentation.

Development: The development group is in charge of this phase, wherein the reviewed
technical and structural documents are inputs. Every piece of code needs to go
through the team’s inspection process, which includes going over the developed code and
reviewing the unit’s test cases before executing them.

Testing: The testing step is one of the SDLC’s main validation stages, with emphasis on
thoroughly testing the applications that were created, driven by the requirements matrix.

Maintenance: To finalise and analyse the maintenance phase and organise the issues
and findings under consideration, a technical analysis meeting ought to be conducted.

Incremental Development

The concept behind incremental development is to create a working prototype, share
it with users and iterate through successive versions until a workable solution is created.
Specification, development and validation activities are interleaved rather than done in
isolation, with rapid feedback across all of them.

Figure: Incremental development
https://engineering.futureuniversity.com/BOOKS%20FOR%20IT/Software-Engineering-9th-Edition-by-Ian-
Sommerville.pdf

A key component of agile methodologies is incremental software development, which is
superior to waterfall approaches for the majority of commercial, e-commerce and personal
applications. Incremental development reflects the way we solve problems. We
rarely figure out the entire solution to a problem up front; instead, we approach a solution
piecemeal and then go back when we see that we made a mistake. It is less expensive and
simpler to make modifications to the program while it is being built when it is developed
incrementally.

Part of the functionality required by the customer is incorporated into every system
version or increment. Typically, the most crucial or urgently needed functionality is included
in the system’s initial increments. This implies that the client can assess the system at a
comparatively early development stage to determine whether it meets their needs. If not,
then all that needs to be done is modify the current increment and maybe provide new
functionality for future increments.


Comparing incremental development to the waterfall methodology reveals three key
advantages:
1. It is less expensive to adapt to shifting customer needs. Comparatively speaking,
substantially less analysis and documentation needs to be repeated than with the
waterfall paradigm.
2. Receiving input from customers regarding the development work completed is simpler.
Consumers are able to provide feedback on software demos and observe the extent of
implementation. It is challenging for customers to assess development from software
design documentation.
3. Even in cases when all of the functionality has not been included, it is still possible to
deliver and deploy valuable software to customers more quickly. Clients can utilise and
benefit from the software more quickly than they might in a waterfall process.

Nowadays, the most popular method for developing application systems is incremental
development in one form or another. This strategy can be agile, plan-driven, or, more
frequently, a combination of these strategies. In a plan-driven approach the system
increments are predetermined; if an agile approach is used, the early increments are
identified but the development of later increments is contingent upon progress and client
priorities.

The incremental method has two issues from a management standpoint:
1. The process is not visible. To track their progress, managers require regular
deliverables. Documents reflecting each iteration of the system are not cost-effective to
prepare when systems are developed quickly.
2. With each additional increment, the structure of the system tends to deteriorate.
Frequent change tends to corrupt the software’s structure unless time and resources are
dedicated to refactoring to fix it. Further software updates then become more expensive
and difficult to incorporate.

When multiple teams work on separate parts of large, complex, long-term systems,
the challenges associated with incremental development become especially severe.
Big systems require a solid foundation or architecture and the roles of the many teams
working on different components of the system must be distinctly outlined in relation to that
architecture. Rather than being developed incrementally, this needs to be thought out
beforehand.

It is possible to build a system incrementally and get feedback from users without
actually delivering and deploying it in the user’s environment. When software is deployed
and delivered incrementally, it is incorporated into real, operational processes. This isn’t
always feasible, because trying out new software can interfere with normal business
operations.

Initial software requirements are often quite well specified, but a strictly linear process
is not possible due to the sheer size of the development effort. Furthermore, there can
be a strong need to give customers access to a small number of software features right
away, then improve and expand on those features in upcoming software releases. In these
situations, a process model built to generate the software incrementally can be selected.

The linear and parallel process flow components covered in earlier sections are
combined in the incremental model. With reference to the figure below, as calendar time
advances, the incremental model applies linear sequences in a staggered manner.
Deliverable increments of the software are generated by each linear sequence, in a way
comparable to the increments generated by an evolutionary process flow.


Word processing programs created with the incremental paradigm, for instance,
might offer the following features in stages: basic file management, editing and document
production in the first increment; more complex editing and document production in the
second; advanced page layout in the third increment; and spelling and grammar checking
in the fourth. It is important to remember that the prototyping paradigm can be incorporated
into any increment’s process flow.

A core product is frequently the first increment in an incremental model. In other
words, while many additional features (some known, some unknown) remain undelivered,
the fundamental needs are met.

The consumer uses the core product (or has it thoroughly evaluated). A plan for the
subsequent increment is created in response to use and/or assessment. The plan covers
the development of new features and functionality as well as the modification of the core
product to better suit the needs of the consumer. After each increment is delivered, this
process is continued until the entire product is produced.


Figure: Incremental Process Models
https://www.mlsu.ac.in/econtents/16_EBOOK-7th_ed_software_engineering_a_practitioners_approach_
by_roger_s._pressman_.pdf

The delivery of a functioning product with each increment is the main goal of the
incremental process model. Although early iterations are simplified versions of the finished
product, they do include features that benefit the user and a platform for user assessment.
When staffing is not available for a full implementation by the project’s set business
deadline, incremental development is especially helpful. Fewer personnel are needed to
implement early increments. If the main product is well received, more employees can
be brought on board to carry out the following increment, if needed. Increments can also
be scheduled to manage technical concerns. For instance, new hardware that is under
development and whose delivery date is uncertain may be needed for a major system. Early
increments can be planned so as to avoid utilising this hardware, allowing for the prompt
delivery of some functionality to end customers.

Spiral Model

Boehm proposed the spiral model, a paradigm for risk-driven software processes. This
is depicted in the figure below. In this instance, the software process is depicted as a spiral
rather than as a list of tasks with some backtracking. Every spiral loop stands for a different
stage of the software development process. As a result, the innermost loop may deal with
system feasibility, the subsequent loop with requirements definition, the following loop with
system design and so forth. Change tolerance and change avoidance are combined in
the spiral model. It makes the assumption that project risks are the cause of changes and
incorporates explicit risk management techniques to lower these risks.

A risk-driven process model generator, the spiral development model is used
to direct multi-stakeholder concurrent engineering of software-intensive systems. It stands
out primarily for two reasons. One is a cyclical method that gradually increases the degree
of definition and implementation of a system while lowering the degree of risk associated
with it. The other is a series of anchor point milestones designed to guarantee stakeholder
commitment to feasible and mutually agreeable system solutions.

Figure: Boehm’s spiral model of the software process
https://engineering.futureuniversity.com/BOOKS%20FOR%20IT/Software-Engineering-9th-Edition-by-Ian-
Sommerville.pdf

The spiral’s loops are divided into four sectors:
1. Establishing objectives: Specific goals are set for that project phase. A thorough
management strategy is created after identifying the process and product constraints.
Risks associated with the project are identified. Depending on these risks, alternative
strategies may be planned.
2. Evaluation and mitigation of risks: A thorough study is done for every project risk that has
been identified. Measures are implemented to reduce the risk. For example, a prototype
system might be created if there’s a chance the requirements are inadequate.
3. Creation and confirmation: A development model for the system is selected following
the assessment of risks. Throwaway prototyping, for instance, might be the ideal
development strategy in cases where user interface risks predominate. Development
based on formal transformations might be the best course of action if safety concerns
are the primary concern, and so on. The waterfall approach might be the optimal
development methodology to adopt if sub-system integration is the primary risk that has
been identified.
4. Organising: After evaluating the project, a choice is made regarding whether to proceed
with a further spiral loop. Plans are created for the project’s subsequent phase in the
event that it is agreed to proceed.


The spiral model’s clear identification of risk sets it apart from other software
process models. The spiral cycle starts with the elaboration of goals like functionality and
performance. Then, some approaches to accomplishing these goals and resolving the
obstacles in their path are listed. Sources of project risk are identified and each alternative
is evaluated in relation to each goal. The following stage involves mitigating these risks
through information-gathering exercises including simulation, prototyping and in-depth
analysis.
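
This risk assessment is often quantified as risk exposure, conventionally defined (following
Boehm’s risk analysis) as the probability of the risk occurring times the loss if it does. The
sketch below ranks invented project risks by that measure; the figures are purely illustrative:

```python
# Hypothetical risks: (description, probability of occurring, loss if it occurs).
risks = [
    ("compilers generate inefficient object code", 0.3, 40_000),
    ("requirements misunderstood",                 0.5, 90_000),
    ("key hardware delivered late",                0.2, 50_000),
]

def risk_exposure(probability, loss):
    """Risk exposure: RE = P(risk occurs) * L(loss if it occurs)."""
    return probability * loss

# Each spiral loop would tackle the largest remaining exposure first.
ranked = sorted(risks, key=lambda r: risk_exposure(r[1], r[2]), reverse=True)
for description, p, loss in ranked:
    print(f"{description}: RE = {risk_exposure(p, loss):,.0f}")
```

Ranking exposures makes the choice of what the next spiral loop should address explicit
rather than a matter of intuition.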
After the risks have been evaluated, some development work is done and then the
process moves on to planning the next step. Simply put, risk is the possibility of anything
going wrong. For instance, if a new programming language is to be used, one risk is that
the available compilers may be unreliable or may not generate sufficiently efficient object
code. Risk mitigation is a crucial component of project management, since risks can result
in suggested software changes as well as project issues like schedule and cost overruns.

Software is developed in a sequence of evolutionary releases using the spiral
approach. The release in the early stages could be a model or prototype. Later iterations
result in ever-more-complete versions of the engineered system. A practical method for
creating large-scale software and systems is the spiral model. Software evolves as the
process goes on, which helps the developer and the client recognise and respond to risks
at every stage of development. The prototyping approach can be applied at any point in the
product’s lifecycle thanks to the spiral model, which also leverages it as a risk reduction
tool. It keeps the standard life cycle’s methodical, step-by-step methodology while
incorporating it into an iterative framework that more closely mimics the real world. When
used correctly, the spiral model should lower risks before they become an issue by requiring
a direct assessment of technical risks at every stage of the project.

However, the spiral model is not a cure-all, much like other paradigms. Customers
may be hard to persuade that the evolutionary approach is controllable, especially in
contract scenarios. It requires a high level of expertise in risk assessment and depends on
this expertise to succeed. Issues will surely arise if a significant risk is not identified and
controlled.

Risk Handling in Spiral Model

Any unfavourable circumstance that could compromise the effective execution of a
software project is considered a risk. The most significant feature of the spiral model is
tackling these unforeseen risks after the project has started. It is easier to resolve such risks
by creating a prototype.
1. By giving developers the opportunity to create prototypes at every stage of the software
development life cycle, the spiral approach facilitates risk management.
2. Risk management is also supported by the prototyping model; however, risks have to be
fully identified before the project’s development activity commences.
3. In practice, however, project risks may arise after development work begins; in such
cases, the prototyping model cannot be applied.
4. Every stage of the Spiral Model involves updating and analysing the product’s attributes,
as well as identifying and modelling the risks that exist at that particular moment.
5. As a result, this approach is far more flexible than earlier SDLC models.

Why Spiral Model is called Meta Model?


Because it incorporates every other SDLC model, the Spiral Model is referred to as a meta-model. A single loop of the spiral, for instance, represents the Iterative Waterfall Model.

Amity University Online


40 Software Engineering and Modeling

1. The spiral model applies the Classical Waterfall Model’s step-by-step methodology.
2. As a risk-handling strategy, the spiral model builds a prototype at the beginning of each phase, adopting the Prototyping Model's methodology.

3. Additionally, it is possible to view the spiral model as a support for the evolutionary
model, with each spiral iteration serving as a stage in the evolutionary process that

builds the entire system.

Advantages of the Spiral Model

A few benefits of the Spiral Model are listed below.
●● Risk Handling: Because the Spiral Model incorporates risk analysis and risk handling

at every stage, it is the best development model to use for projects with a high number
of unknown hazards that may arise as development moves forward.
●● Good for big projects: The Spiral Model is advised for usage in complicated and large-
scale projects.

●● Flexibility in Requirements: This model allows for the accurate incorporation of change
requests made at a later stage in the requirements.
●● Customer Satisfaction: Because they may observe the product’s evolution throughout

the early stages of software development, customers become accustomed to using the
system even before the entire product is finished.
●● Iterative and Incremental Approach: In response to shifting requirements or
unforeseen circumstances, the Spiral Model’s iterative and incremental approach to
software development promotes flexibility and adaptability.
●● Stress on Risk Management: To reduce the effects of risk and uncertainty on the
software development process, the Spiral Model lays a heavy emphasis on risk

management.
●● Improved Communication: The Spiral Model calls for frequent reviews and
evaluations, which can help the development team and the client communicate more

effectively.
●● Improved Quality: The Spiral Model facilitates several software development iterations,
which may enhance the dependability and quality of the final product.

Disadvantages of the Spiral Model


Some of the spiral model’s primary drawbacks are listed below.
●● Complex: The Spiral Model is far more complex than other SDLC models.

●● Expensive: The Spiral Model is not appropriate for small projects due to its high cost.
●● Too much Dependency on Risk Analysis: The successful completion of the project is
heavily dependent on Risk Analysis. Without highly experienced experts, developing a

project using this model will be unsuccessful.


●● Difficulty in Time Management: Due to the unknown number of phases at the
beginning of the project, time estimation is very difficult.
●● Complexity: The Spiral Model can be complex because it involves multiple iterations of

the software development process.


●● Time-Consuming: The Spiral Model can be time-consuming because it necessitates
multiple evaluations and reviews.

●● Resource-Intensive: The Spiral Model can be resource-intensive.
The biggest problem with the waterfall approach is that it takes a long time to deliver a finished product, by which time the product may already be outdated. The spiral model, also known as the cyclic model, was introduced to address this problem.

When to Use the Spiral Model?
1. In software engineering, a spiral model is used for large-scale projects.

2. When it’s important to release something frequently, a spiral technique is used.
3. When developing a prototype makes sense

4. When weighing the costs and risks is essential
5. For projects that range in risk from moderate to high, the spiral strategy works well.
6. The spiral model of the SDLC is useful for complex and unclear requirements.

7. When requirements may change at any time and committing to a long-term plan is impractical owing to shifting economic priorities.

Waterfall Model
The waterfall model, also known as the classic life cycle, proposes a methodical,

sequential approach to software development that starts with requirements specified by
the customer and moves through planning, modelling, building and deployment before
ending with continued support for the finished product.
The more generic system engineering methods served as the basis for the first
published model of the software development process. This model is shown in the figure
below. This paradigm is referred to as the software life cycle or the waterfall model because
of the way the phases flow into one another. One example of a plan-driven process is the

waterfall model, which requires that all process activities be scheduled and planned out
before any work is done on them.

Figure: The waterfall model



Source: https://engineering.futureuniversity.com/BOOKS%20FOR%20IT/Software-Engineering-9th-Edition-by-Ian-Sommerville.pdf

The core development activities are directly reflected in the main stages of the waterfall
model:

1. Requirements analysis and definition: Users are consulted to determine the services, limitations and objectives of the system. These are then defined in detail, and this definition acts as a system specification.

2. System and software design: By creating a general system architecture, the systems design process assigns the needs to either hardware or software systems. Determining and outlining the essential software system abstractions and their connections is part of software design.
3. Implementation and unit testing: The software design is realised as a collection of programs, or program units, at this stage. Verifying that each unit satisfies its specification is the goal of unit testing.

4. Integration and system testing: To make sure the software requirements have been satisfied, the separate program units or programs are combined and tested as a whole system. After testing, the software system is delivered to the customer.
5. Operation and maintenance: This is typically (though not always) the longest life-cycle phase. Once delivered, the system is put into practical use. During maintenance, mistakes that were missed in earlier phases of the life cycle are fixed, the implementation of system units is improved and services are enhanced as new requirements are identified.
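The five stages above form a strict pipeline: each stage consumes the signed-off output of the previous one and, in the pure model, is never revisited. A minimal sketch of that idea follows; the stage functions and the artifact strings are placeholders invented for illustration.

```python
# Minimal sketch of the waterfall pipeline: each stage runs only after the
# previous stage's output has been produced. All artifact strings are fake.

def requirements(_):        return "system specification"
def design(spec):           return f"architecture for {spec}"
def implementation(arch):   return f"unit-tested code from {arch}"
def integration(code):      return f"tested system built from {code}"
def operation(system):      return f"maintained {system}"

stages = [requirements, design, implementation, integration, operation]

artifact = None
for stage in stages:        # strictly sequential: no stage is revisited
    artifact = stage(artifact)

print(artifact)
```

The linear `for` loop is the whole point: there is no backward edge, which is exactly what the text criticises next, since in practice the phases overlap and feed information back to one another.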
Each phase should, in theory, provide one or more approved (or signed off) documents.

It is not advisable to begin the next step until the last one is complete. These phases
actually overlap and exchange information with one another. Issues with requirements are
found during the design process.
Design flaws are discovered during coding and so forth. The software process involves
feedback from one phase to the next and is not a straightforward linear model. It could
subsequently be necessary to amend the documents created throughout each phase to

reflect the adjustments made.


Iterations can be costly and require a lot of rework due to the expenses associated with
creating and approving documents. Consequently, it is common practice to freeze certain

aspects of the development process, such as the specification, and go on to other phases
after a limited number of iterations. Issues are disregarded, programmed around, or put off
for later settlement. The system might not function as the user desires as a result of this
early freezing of requirements. Additionally, when implementation strategies are used to get

around design flaws, it could result in systems with poor structure.


The software is utilised in the latter stage of the life cycle, known as operation and
maintenance. It is found that the initial software specifications contained mistakes and
omissions. Errors in the program and design appear and the requirement for more

functionality is determined. Thus, for the system to continue to be useful, it must change.
Repeating earlier steps of the procedure may be necessary to make these modifications
(software maintenance).

The waterfall approach produces documentation at every stage and is compatible


with other engineering process models. This makes the process visible, so managers can track progress against the development plan. The rigid division of the
project into discrete phases is its main issue. Early in the process, commitments must be

made, which makes it challenging to adapt to shifting client needs.


The waterfall methodology should, in theory, only be applied in situations where
the requirements are clear and unlikely to change significantly while the system is being

developed. The waterfall model, however, mimics the kind of procedure applied to other
technical undertakings. Software procedures based on the waterfall model are still widely
employed since it is simpler to utilise a single management model for the entire project.

Formal system development, which involves creating a mathematical model of a
system specification, is a significant variation of the waterfall paradigm. Then, this model is

improved and turned into executable code by mathematical modifications that maintain its
coherence. You can thus make a compelling case that a program developed in this manner
is consistent with its specification, presuming that your mathematical transformations are

accurate. Systems with strict requirements for safety, dependability, or security are best
developed using formal development methods, including those based on the B approach.
The process of creating a safety or security case is made easier by the formal

approach. Customers and regulators will be able to see this as proof that the system
satisfies safety and security criteria. Formal transformation-based processes are typically
limited to the development of systems that are either security-critical or safety-critical. They
call for specific knowledge. This strategy does not provide appreciable cost savings over

alternative methods of system development for most systems.
An intriguing analysis of real projects found that the linear design of the traditional life cycle leads to blocking states, in which some project team members must wait for other team members to finish dependent tasks. As a matter of fact, waiting times may really

be longer than working productively! In a linear sequential process, the blocking states
are more common at the start and finish. Work on software these days is fast-paced and
constantly changing in terms of features, functionalities and information content. For this
kind of job, the waterfall paradigm is frequently unsuitable. Nonetheless, in circumstances
where criteria are set and work is to be completed in a linear fashion, it can be a helpful
process model.

Prototype Model

It is a software development methodology that involves creating, testing and refining


a prototype model until a workable version is produced. It is a demonstration of the real
system or product in action. Users can assess and test developer proposals through

prototyping before they are implemented. When compared to the real program, a prototype
model typically shows less functional capabilities, poor dependability and inefficient
performance. It works well in situations where the client is unaware of all the project’s

requirements in advance. The process is iterative, proceeding by trial and error between the client and the developer.

Prototype Model Phases


●● Requirements gathering and analysis
●● Design
●● Build prototype
●● User evaluation
●● Refining prototype
●● Implementation and Maintenance
Requirements gathering and analysis: The system's needs are specified. Users of the system are interviewed during this step to find out what they expect from it, and various other methods are also used to collect requirements data.
Design: During this stage, the system's basic design is constructed. It provides the user with a quick overview of the system.


Build prototype: During this stage, the first prototype is created, showcasing the fundamental specifications and supplying user interfaces. The information obtained during the design process is used to create the actual prototype.

User evaluation: The customer and other key project stakeholders are shown the prototype during this phase for a preliminary assessment. Their feedback is gathered and applied to the ongoing development of the product. This stage aids in determining the working model's strengths and weaknesses.
Refining prototype: In this stage, issues such as time and budget constraints and the technical feasibility of the actual implementation are discussed in light of customer feedback. Accepted modifications are incorporated into a new prototype, and the cycle continues until the customer's expectations are satisfied.

Implementation and Maintenance: The final system is developed from the final prototype, tested and then put into production. Regular maintenance and upgrades are performed on the system in response to changes in its real-world environment, to minimise downtime and avoid major breakdowns.
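The phases above amount to a feedback loop: build a prototype, show it to the customer, refine it with their feedback, and repeat until it is accepted. The sketch below simulates that loop; the "customer", the feature names and the acceptance rule are all invented stand-ins for real user evaluation.

```python
# Sketch of the prototype refinement cycle. The customer here is simulated:
# each round, one missing feature is reported and folded into the prototype.

def build_prototype(features):
    return {"features": sorted(features)}

def customer_evaluates(prototype, wanted):
    """Return the features the customer still finds missing (invented rule)."""
    return wanted - set(prototype["features"])

wanted = {"login", "search", "reports"}   # hypothetical expectations
features = set()                          # initial prototype: nothing yet
rounds = 0
while True:
    prototype = build_prototype(features)
    missing = customer_evaluates(prototype, wanted)
    if not missing:                       # expectations satisfied -> implement
        break
    features.add(missing.pop())           # refine: fold one request back in
    rounds += 1

print(rounds)  # one refinement round per requested feature -> 3
```

Termination depends entirely on the customer eventually accepting the prototype, which mirrors the model's real weakness: with dynamic requirements, the loop can run for a long time.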

Types of Prototyping Models


Rapid Throwaway prototypes: This strategy provides an effective way to test concepts and gather customer feedback on each one. A prototype produced under this approach does not necessarily become part of the final, approved product. Customer feedback helps produce higher-quality final prototypes while avoiding needless design flaws.
Evolutionary prototype: The prototype is gradually improved, based on client feedback, until it is approved. This saves time and effort, since building a fresh prototype from scratch for every step of the process can be quite tedious.

Incremental Prototype: Incremental prototyping involves breaking the final product down into smaller prototypes that are developed separately. The individual prototypes are eventually combined into a single product. This technique shortens the feedback loop between the application development team and the users.
Extreme Prototype: Extreme prototyping is mainly applied in web development. It consists of three successive phases:

●● All of the pages from the basic prototype are available in HTML format.
●● A prototype services layer can be used to simulate data processing.
●● The services are put into practice and included in the finished prototype.
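The "prototype services layer" in the second phase is typically a stub that returns canned data, so the HTML front end can be exercised before any real back end exists. The sketch below illustrates the idea; the service class, method and order fields are hypothetical names invented for this example.

```python
# Hypothetical sketch of phase two of extreme prototyping: a stub services
# layer returning canned data, later swapped for a real implementation
# without changing the front-end code that calls it.

class StubOrderService:
    """Simulates data processing with fixed responses (no database yet)."""
    def get_order(self, order_id):
        return {"id": order_id, "status": "shipped", "total": 42.0}

def render_order_page(service, order_id):
    """Front-end code written against the service interface, real or stubbed."""
    order = service.get_order(order_id)
    return f"<p>Order {order['id']}: {order['status']}</p>"

html = render_order_page(StubOrderService(), 7)
print(html)  # <p>Order 7: shipped</p>
```

In phase three, `StubOrderService` would be replaced by a class with the same interface backed by real data processing, and `render_order_page` would not need to change.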

Advantages of Prototype Model


●● Users actively contribute to development, so errors can be detected in the early phases of the software development process.

●● Ideal in situations where needs are shifting.


●● Reduced maintenance costs.

Disadvantages of Prototype Model



●● It is a laborious and slow process.


●● Cost rises in relation to time.
●● Because there is more customer participation, requirements are more dynamic and
have an impact on the creation of the entire product.
1.3 Case Study

1.3.1 Case Study

Case-Study

An insulin pump control system
An insulin pump is a medical device that mimics the function of the internal organ
known as the pancreas. This system is run by an embedded system’s software, which

nl
gathers data from a sensor and manages a pump to give a user a precise dosage of insulin.
Individuals with diabetes make use of the system. The very common illness
known as diabetes is caused by insufficient production of the hormone insulin by the human

pancreas. The blood’s glucose, or sugar, is metabolised by insulin. Insulin with genetic
engineering is injected on a daily basis as part of the traditional treatment for diabetes.
Using an external metre to check their blood sugar levels, diabetics determine how much

insulin to inject.
The issue with this treatment is that the amount of insulin needed depends on
the time since the last insulin injection in addition to the blood glucose level. If there is
too much insulin, this can result in extremely low blood glucose levels and if there is

not enough insulin, it can cause extremely high blood sugar levels. In the short run, low
blood sugar is more dangerous since it can cause transient brain damage that can lead
to unconsciousness and even death. However, persistently high blood glucose levels over
time might cause cardiac issues, kidney damage and eye damage.
It is now feasible to create automated insulin delivery systems thanks to recent
developments in the miniaturisation of sensors. When necessary, these systems provide an
adequate dose of insulin and monitor blood sugar levels. Such insulin delivery systems are

already in use for hospital patient care. Many diabetics may be able to have these systems
permanently affixed to their bodies in the future.
A software-controlled insulin delivery system uses a microsensor implanted in the patient to measure a blood parameter proportional to the patient's blood sugar level. This reading is sent to the pump controller, which computes the blood sugar level and the required insulin dosage. The controller then signals a miniature pump to deliver the insulin via a permanently attached needle.

Figure A: Insulin pump hardware


Figure B: Activity model of the insulin pump

The insulin pump’s hardware parts and arrangement are depicted above in Figure A.
Knowing that the blood sensor monitors the electrical conductivity of blood under various
conditions and that these results can be connected to the blood sugar level is all you need
to comprehend the examples. In response to a single controller pulse, the insulin pump

ity
releases one unit of insulin. As a result, the controller gives the pump 10 pulses in order to
deliver 10 units of insulin. An example of how the software converts an input blood sugar
level into a series of commands that operate the insulin pump may be seen in the UML
activity model above Figure B.
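The one-pulse-per-unit protocol can be sketched in a few lines of code. Note that the dose rule below is a made-up placeholder (the text does not give the real computation), and the "safe band" of blood sugar values is an assumption for illustration only.

```python
# Sketch of the pulse-based delivery idea: one controller pulse = one unit
# of insulin. The dose rule and safe band are invented, NOT a medical formula.

SAFE_MAX = 7.0  # assumed upper bound of the safe band (illustrative)

def compute_dose(blood_sugar: float) -> int:
    """Units of insulin for a reading (hypothetical rule for illustration)."""
    if blood_sugar <= SAFE_MAX:
        return 0                           # in or below the safe band
    return round(blood_sugar - SAFE_MAX)   # crude proportional rule

def deliver(dose: int, pump) -> None:
    """Each pulse sent to the pump releases exactly one unit of insulin."""
    for _ in range(dose):
        pump.pulse()

class RecordingPump:
    def __init__(self):
        self.pulses = 0
    def pulse(self):
        self.pulses += 1

pump = RecordingPump()
deliver(compute_dose(17.0), pump)   # reading well above the safe band
print(pump.pulses)                  # 10 pulses -> 10 units of insulin
```

Even this toy version shows why the system is safety-critical: an error in `compute_dose` translates directly into a wrong number of pulses and therefore a wrong insulin dose.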

This system is clearly safety-critical. If the pump malfunctions or operates incorrectly, the user's blood sugar level could become too high or too low, which could damage their health or even put them in a coma. This system therefore needs to satisfy two
primary high-level requirements:
1. When needed, the system must be able to provide insulin.
2. The system must function dependably and provide the appropriate dose of insulin to
lower blood sugar levels.

Case-Study
A patient information system for mental health care

A medical information system that keeps track of patients with mental health issues
and the treatments they have received is known as a patient information system supporting
mental health care. The majority of individuals with mental health issues don’t need

specialised hospital care; instead, they just need to periodically visit clinics where they may
consult with doctors who are well-versed in treating their conditions. To make it easier for patients to attend, these clinics are not run only in hospitals; they may also be held in local medical practices or community centres.

The Mental Health Care-Patient Management System (MHC-PMS) is an information system intended for use in clinics. It makes use of a centralised database of patient data but is also designed to run on a PC, so that it may be accessed and used from locations without secure network connectivity. When the local systems have secure network access they work directly with the patient data in the central database; when they are disconnected, they can download and use local copies of the patient records. The system is not a complete medical records system, so it does not keep track
of information regarding further medical issues. It might, nevertheless, communicate and
share information with other clinical information systems. The figure below shows how the MHC-PMS is organised.


There are two main objectives of the MHC-PMS:
1. To provide management data so that managers of health services can evaluate
performance in relation to regional and national goals.
2. To deliver timely information to medical personnel so they can support patients’ treatment.

Figure: The organisation of the MHC-PMS

Patients with mental health problems are sometimes disorganised, so they may miss appointments, deliberately or accidentally lose prescriptions and medication, forget instructions and make excessive demands on medical staff. They may drop in on clinics unexpectedly. In a small number of cases, they may pose a risk to themselves or others. They may be homeless for extended periods, or they

rs
might move addresses often. Patients who pose a risk may need to be sectioned, or taken
to a secure hospital where they will receive treatment and observation.
Clinical personnel, such as physicians, nurses and health visitors—nurses who visit
patients at home to check on their treatment—are among the system’s users. Receptionists
who schedule appointments, medical records employees who manage the system and
administrative personnel who produce reports are examples of non-medical users.
Information on patients (name, address, age, next of kin, etc.), consultations (date,

doctor seen, patient impressions, etc.), conditions and therapies are all recorded in the
system. Managers of the health authority and medical staff get reports on a regular basis.
Medical staff reports usually concentrate on specific patient data, while management

reports are anonymised and deal with conditions, treatment expenses, etc.
The key features of the system are:
1. Personal care administration: Physicians can create records for patients, update data in the system and check patient histories. The system supports data summaries, which makes it easy for physicians who have never met the patient to learn about the main issues and recommended courses of action.
2. Patient monitoring: The system regularly checks the records of patients receiving treatment and sends out alerts when potential difficulties are found. For example, a warning might be issued if a patient has not seen a doctor for some time. Two of the most crucial functions of the monitoring system are keeping track of patients who have been sectioned and making sure that the legally required checks are carried out on time.

performed on time are two of the most crucial functions of the monitoring system.
3. Administrative reporting: Each month, the system creates management reports that include information on how many patients were treated at each clinic, how many patients entered and left the healthcare system, how many patients were sectioned, the cost of prescribed drugs and other details.
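The patient-monitoring feature described above is essentially a periodic sweep over patient records that raises warnings. The sketch below illustrates that idea; the record fields and the 90-day threshold are invented for this example and are not taken from the MHC-PMS description.

```python
# Sketch of a periodic monitoring sweep, as in the patient monitoring
# feature. Record fields and the 90-day threshold are illustrative only.
from datetime import date, timedelta

DAYS_BEFORE_WARNING = 90  # assumed threshold, not from the text

def sweep(records, today):
    """Return warnings for in-treatment patients overdue for a consultation."""
    warnings = []
    for rec in records:
        overdue = (today - rec["last_seen"]).days > DAYS_BEFORE_WARNING
        if rec["in_treatment"] and overdue:
            warnings.append(f"{rec['name']}: no consultation in over "
                            f"{DAYS_BEFORE_WARNING} days")
    return warnings

today = date(2024, 6, 1)
records = [
    {"name": "P1", "in_treatment": True,  "last_seen": today - timedelta(days=120)},
    {"name": "P2", "in_treatment": True,  "last_seen": today - timedelta(days=10)},
    {"name": "P3", "in_treatment": False, "last_seen": today - timedelta(days=400)},
]
print(sweep(records, today))  # only P1 triggers a warning
```

A real implementation would also track statutory checks for sectioned patients, which the text identifies as the monitoring system's most important legal duty.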


The system is subject to two separate laws. These are mental health regulations that
control the involuntary detention of patients who are judged to be a danger to themselves
or others and data protection laws that control the privacy of personal information. In


this way, mental health is distinct, since no other medical specialty has the authority to recommend detaining individuals against their will. Strict legislative safeguards apply to this. One of the goals of the MHC-PMS is to ensure that staff always act in accordance with the law and that their actions are documented for possible scrutiny in court.
Privacy is an essential system necessity, just like it is in any medical system. It is

imperative that patient data be kept private and never shared with anybody save the patient
and authorised medical personnel. The MHC-PMS is also a safety-critical system.
Some mental health conditions make their sufferers suicidal or dangerous to others. The

system ought to alert medical personnel about potentially harmful or suicidal patients
whenever feasible.
The system’s general architecture must include privacy and security concerns. If

the system is unavailable when needed, patient safety could be jeopardised and it might
be hard to provide them the right prescription. There may be a problem here because
maintaining privacy is facilitated by having a single copy of the system data. Nonetheless,
several copies of the data should be kept in case of server failure or when unplugged from

the network to guarantee availability.

Summary
●● Software engineering is a discipline focused on the design, development,

maintenance, testing and evaluation of software systems. It applies principles from
engineering, computer science, project management and other fields to ensure that
software is reliable, efficient and meets user needs. a) Systematic Development, b)
Quality Assurance, c) Project Management, d) Scalability and Maintenance, e) Risk
Management. Importance of software engineering: a) Managing Complexity, b) Enhancing Productivity, c) Ensuring Reliability and Security, d) Cost Efficiency, e) User Satisfaction, f) Regulatory Compliance, g) Facilitating Innovation. Software engineering is essential for developing complex software systems


that are reliable, efficient and meet user needs. By applying systematic approaches
and principles, software engineering enhances the quality, productivity and cost-
effectiveness of software development processes.

●● Software engineering encompasses a wide range of responsibilities that ensure


the development of high-quality software systems. These responsibilities span
various stages of the software development lifecycle and involve multiple roles, from
ity

developers to project managers to testers. Here are the primary responsibilities in


software engineering:
a) Requirements Analysis, b) Software Design,
c) Implementation (Coding), d) Testing,

e) Maintenance, f) Project Management,


g) Quality Assurance, h) Documentation,

i) Configuration Management, j) Ethical Responsibility,


k) Collaboration and Communication.
The responsibilities in software engineering are extensive and multifaceted, involving
technical skills, project management, quality assurance and ethical considerations. By
fulfilling these responsibilities, software engineers ensure that the software developed is

functional, reliable, maintainable and meets the needs of users and stakeholders.
●● The software lifecycle, also known as the Software Development Lifecycle (SDLC), is
a systematic process used for developing software. It encompasses various stages,
each with specific tasks and deliverables. The goal of the SDLC is to produce high-
quality software that meets or exceeds customer expectations, reaches completion within time and cost estimates and works effectively and efficiently in the current and planned information technology infrastructure. Stages of the lifecycle:

a) Planning, b) Requirement analysis,
c) Design, d) Implementation,

e) Testing, f) Deployment,
g) Maintenance.

The software lifecycle is a comprehensive framework that guides the systematic
development of software from inception to retirement. Each stage has its own objectives,
tasks and deliverables, ensuring that the software is developed in a controlled and

efficient manner. By following a structured lifecycle, development teams can produce
high-quality software that meets user needs and is delivered on time and within budget.

Glossary

●● SEI: Software Engineering Institute
●● QA: Quality Assurance
●● SRD: Software Requirements Document
●● SDD: Software Design Document
●● IoT: Internet of Things
●● ASIC: Application-Specific Integrated Circuits
●● SoC: System on Chip
●● UML: Unified Modeling Language
●● DBMS: Database Management System
●● IDEs: Integrated Development Environments

●● CI/CD: Continuous Integration/Continuous Deployment


●● CMMI: Capability Maturity Model Integration

●● CPM: Critical Path Method


●● PERT: Program Evaluation and Review Technique

Check Your Understanding



1. In which SDLC phase is the software actually coded?


a) Design
b) Implementation

c) Testing
d) Maintenance
2. What does the acronym SDLC stand for?

a) Software Development Lifecycle


b) System Design and Lifecycle
c) Software Deployment Lifecycle

d) System Development and Logic Cycle


3. What is the primary goal of software engineering?
a) To create software that is expensive and complex
b) To produce software that is reliable, efficient and meets user needs

c) To develop software without any planning


d) To ensure software development is always quick and cheap
4. Which phase in the Software Development Lifecycle involves gathering and analysing

user requirements?

a) Design
b) Implementation
c) Requirements Analysis

d) Testing
5. Which of the following models is characterised by its linear and sequential approach?

a) Agile Model
b) Iterative Model
c) Waterfall Model

d) Spiral Model

Exercise
1. Why software engineering? Explain briefly.

2. What are Software engineering ethics?
3. What is the difference between software engineering and computer science?
4. What is the purpose of Software Engineering?
5. What are the key Problems of the Software Crisis? Explain.

Learning Activities
1. Assume you are leading a software development project for a client who prefers a

sequential and linear approach to development. Describe how you would implement the
Waterfall model for this project, outlining the key phases, deliverables and milestones at
each stage. Discuss the advantages and limitations of using the Waterfall model in this

scenario.
2. Your organisation has decided to adopt the Scrum framework for managing software
development projects. Describe the roles and responsibilities of the Scrum Master,

Product Owner and Development Team in the Scrum framework. Discuss how you
would conduct Sprint planning, daily stand-ups, Sprint reviews and retrospectives to
ensure effective collaboration and delivery.

Check Your Understanding - Answers



1. b) 2. a) 3. b) 4. c)
5. c)

Further Readings and Bibliography


1. Software Engineering by Ian Sommerville.
2. Software Engineering: A Practitioner’s Approach by Roger S. Pressman and Bruce
R. Maxim.

3. Fundamentals of Software Engineering by Carlo Ghezzi, Mehdi Jazayeri and Dino


Mandrioli.
4. Introduction to the Team Software Process by Watts S. Humphrey.

Module - II: Software Requirement Engineering and
Coding

Learning Objectives

At the end of this module, you will be able to:

●● Define Software Requirement Engineering and Coding Overview

●● Understand Requirement Determination: Traditional and Modern Methods
●● Define Process and Data Modeling Techniques
●● Analyse Trigonometry and Theorem Applications

●● Define Structured and Pair Programming Essentials

Introduction

Coding is the process of creating software in accordance with the specified
requirements, whereas software requirement engineering concentrates on specifying what
needs to be developed. These stages are essential to the software development process
and help produce software systems that are dependable, functional and of the highest
calibre.

The process of converting software design specifications into executable code using programming languages and development tools is called coding, sometimes referred to as programming or implementation.
During the coding phase, developers turn theoretical ideas into functional software
products, bringing the program design to life. The long-term viability and longevity of the
software depend on clear, organised and maintainable code.
2.1 Software Requirement Engineering
Software requirement engineering is the methodical process of obtaining, evaluating, recording and verifying the requirements and specifications for a software system. First, requirements are gathered from users, clients and business analysts
using a variety of methods including workshops, questionnaires and interviews. Then, using
methods like use cases, user stories and requirement specifications, these requirements
are examined, prioritised and thoroughly documented.
The process of validation guarantees that the requirements are accurate, logical,
verifiable and comprehensive. The cornerstone for software design, development and
testing activities is effective requirement engineering, which guarantees that the finished
product will satisfy stakeholders and achieve business goals.
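The elicit-document-validate cycle described above can be sketched as a simple requirement record. This is a hypothetical illustration: the class, its field names and the sample requirements are invented for the sketch, not taken from the text. Note how a requirement with no acceptance criteria is flagged as unverifiable, anticipating the validation task discussed later.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a requirement record kept during elicitation
# and validation; the field names are assumptions for illustration.
@dataclass
class Requirement:
    req_id: str
    description: str
    source: str                      # stakeholder who raised it
    priority: str = "unranked"       # e.g., "must", "should", "could"
    acceptance_criteria: list = field(default_factory=list)

    def is_verifiable(self) -> bool:
        # A requirement with no acceptance criteria cannot be validated.
        return len(self.acceptance_criteria) > 0

reqs = [
    Requirement("R1", "Store patient records securely", "records staff",
                "must", ["records encrypted at rest"]),
    Requirement("R2", "The software should be user friendly", "end users"),
]

# Validation pass: flag requirements that are not yet verifiable.
unverifiable = [r.req_id for r in reqs if not r.is_verifiable()]
print(unverifiable)
```

A real requirements tool would carry many more attributes (status, version, traceability links), but the principle is the same: recording requirements in a structured form makes checks like this mechanical.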
2.1.1 Software Requirement Engineering and Coding Overview
Creating and designing software is an enjoyable, creative and demanding endeavour.
To be honest, the allure of developing software is so strong that many developers want to
go straight in without fully comprehending what’s required. They contend that as the project
progresses, more will become evident, that stakeholders in the project will only be able to
comprehend the need after reviewing early software iterations, that the requirements are
changing so quickly that it is a waste of time to try to understand them in detail and that
the most important thing is to produce a functional program—everything else is secondary.
These arguments contain elements of truth, which is what makes them alluring. But each has holes in it and can result in a software project going wrong.
The vast range of activities and methods that result in an understanding of requirements
are collectively referred to as requirements engineering. Requirements engineering is a
significant software engineering activity that starts in the communication phase and extends
into the modelling phase from the standpoint of the software process.
The foundation for design and construction is laid by requirements engineering.
Without it, there’s a good chance the final software won’t satisfy the needs of the user. It
needs to be modified to meet the requirements of the project, the product, the workers and
the procedure. It is crucial to understand that all of these tasks are completed iteratively as
the stakeholders and the project team keep exchanging details regarding their own issues.
A requirement is a capability of the system, or a feature it must have, in order to accomplish its intended purpose.
The “what” of a system, not its “how,” is described by its requirements. Requirement
engineering results in a single, substantial document that is written in plain English
and describes what the system will do without going into detail about how it will do it.
Requirements engineering is the methodical use of tried-and-true concepts, methodologies
and linguistic instruments for the economical analysis, recording and continuous
assessment of the requirements of the user and the details of the outward behaviour of a
system to meet those needs. It can be characterised as a discipline that deals with object
needs throughout the system development process.
Design and construction are connected by requirements engineering. However, where
did the bridge come from? It may be argued that the definition of business needs, the
description of user scenarios, the delineation of functions and features and the identification
of project constraints all start with the project stakeholders (managers, customers and
end users, for example). Some would argue that a more comprehensive description of the
system should be the starting point, with software being just one part of the whole system
domain.
Regardless of where you begin, the trip across the bridge elevates you above the
project and gives you the opportunity to assess the software work’s context, the unique
requirements that design and construction must meet, the priorities that dictate the
sequence in which tasks must be finished and the data, features and behaviours that will
significantly influence the final design.
Inception, elicitation, elaboration, negotiation, specification, validation and management
are the seven tasks that make up requirements engineering, which sometimes has hazy
borders. It is significant to remember that some of these jobs are completed concurrently
and that each one is customised to the project’s requirements. You should anticipate
doing some requirements work in advance of design and some design work in advance of
requirements.
Inception
How does one begin working on a software project? Most projects often start with a
recognised business need or the identification of a possible new market or service. You
build a basic grasp of the problem, the stakeholders and the type of intended solution at
the outset of the project. To start an efficient collaboration, communication between the
software team and other stakeholders must be established during this assignment.
Elicitation
Asking the customer, users and others what the goals of the system or product
are, what needs to be achieved, how the system or product fits into the demands of the
business and lastly, how the system or product is to be used on a daily basis, should
answer all of these questions. It certainly seems straightforward enough. However, it’s not
easy—it’s quite difficult.
A crucial aspect of elicitation is comprehending the objectives of the business. A
goal is an ultimate objective that a product or system needs to accomplish. Objectives
may address functional or nonfunctional issues (such as usability, security and reliability).
Once defined, goals can be used to manage disagreements among stakeholders and
are frequently an excellent approach to convey requirements to stakeholders. Precisely
defined goals form the foundation for requirements elaboration, validation and verification,
negotiation, clarification and evolution.
Engaging stakeholders and urging them to be open about their objectives is your
responsibility. After the objectives are determined, you set up a system for prioritising them
and develop a case for a possible architecture (that satisfies the objectives of stakeholders).
One crucial component of requirements engineering is agility. The goal of elicitation is to
quickly and efficiently transmit ideas from stakeholders to the software team. It is very
possible that as iterative product development takes place, new requirements will keep
coming up.
Elaboration
The development of a more detailed requirements model that specifies different facets
of software behaviour, function and information is the main goal of the elaboration work.
The development and improvement of user scenarios gathered during elicitation serves as
the foundation for elaboration. These scenarios outline the interactions that other actors
and end users will have with the system. Analysis classes, or end-user-visible business
domain elements, are extracted from each user scenario through parsing. Every analysis
class has its characteristics specified and the services needed by every class are listed.
The connections and cooperation amongst classes are noted. While being elaborative
might be beneficial, you must also know when to quit. The secret is to define the issue in
a way that provides a solid foundation for design before proceeding. Avoid being fixated on
unimportant minutiae.
Negotiation
Customers and users frequently request more than the firm can reasonably provide,
given its limited resources. Conflicting requirements are also frequently proposed by
different users or consumers, who claim that their version is “essential for our special
needs.” Negotiation is the necessary process to resolve these conflicts. After ranking the
needs, users, customers and other stakeholders are requested to prioritise the discussion
of any conflicts. In a successful negotiation, there should be no winners and no losers.
Because a “deal” that both parties can live with is established, both parties win. It is best
to employ an iterative process that ranks requirements according to importance, evaluates
their risk and expense and resolves internal disputes. Requirements are thus removed,
amalgamated and/or altered in order to satisfy each party to some extent.
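The iterative ranking step described above, weighing importance against risk and expense, can be sketched as a simple scoring function. Everything here is an invented illustration: the candidate requirements, the 1-10 scales and the weights are assumptions, not from the text.

```python
# Hypothetical negotiation aid: score each requirement by stakeholder value
# against estimated cost and risk, then rank. All figures are invented.
candidates = [
    # (name, value 1-10, cost 1-10, risk 1-10)
    ("online appointment booking", 9, 6, 3),
    ("voice-controlled interface", 4, 8, 7),
    ("export records as PDF", 6, 2, 2),
]

def score(value, cost, risk, w_value=2.0, w_cost=1.0, w_risk=1.0):
    # Higher value raises the score; cost and risk lower it.
    return w_value * value - w_cost * cost - w_risk * risk

ranked = sorted(candidates, key=lambda c: score(*c[1:]), reverse=True)
for name, v, c, r in ranked:
    print(f"{score(v, c, r):5.1f}  {name}")
```

A score like this is only an input to the discussion, not a verdict: the point of negotiation is that stakeholders agree on the weights and on the trade-offs the ranking exposes.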
Specification
Regarding computer-based systems (including software), various individuals have
varied interpretations of what is meant by the phrase specification. A written document, a

group of graphical models, a formal mathematical model, an assortment of use cases, a prototype, or any combination of these can all be considered specifications.
There are others who recommend creating a “standard template” for a specification,
claiming that doing so will result in requirements that are more consistently expressed and
comprehensible. When developing a specification, it is sometimes vital to maintain flexibility.
The size and complexity of the software that needs to be developed determine the formality
and format of a specification. A written document that combines graphical models and
natural language descriptions might be the best method for huge systems.
Validation
During a validation process, the work items generated during requirements

requirements. Make sure that requirements have been specified consistently by using the
analysis model. Requirements validation looks over the specification to make sure that all
software requirements have been clearly stated, that errors, omissions and inconsistencies

product standards.
The technical review is the main method for validating requirements. Software
engineers, clients, users and other stakeholders make up the review team that verifies
requirements. They go over the specification, searching for mistakes in content or
interpretation, places where clarification might be needed, information gaps, inconsistencies
(which are a big issue when large products or systems are engineered), contradictory
requirements, or unrealistic (unachievable) requirements.
Take into consideration these two seemingly innocent criteria to demonstrate some of
the issues that arise during requirements validation:
●● The software should be user friendly.
●● The probability of a successful unauthorised database intrusion should be less than 0.0001.
It is very difficult for developers to test or evaluate the first requirement. What does
“user friendly” actually mean? It needs to be qualified or quantified in some way in order to
be validated.
Although there is a quantitative component to the second criterion (“less than 0.0001”),
intrusion testing will be challenging and time-consuming. Is the application even in need of
this level of security? Is it possible for additional security criteria (such as password protection
or specialised handshaking) to take the place of the mentioned quantitative requirement?
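The contrast between the two criteria can be made concrete: once the vague “user friendly” requirement is restated in measurable terms, it becomes checkable automatically. The restated figures (180 seconds, 95% of users) and the timing data below are invented for the sketch.

```python
# Hypothetical restatement of "the software should be user friendly" as a
# verifiable requirement: "95% of first-time users complete registration
# in under 180 seconds." All numbers here are invented for illustration.
threshold_seconds = 180
target_fraction = 0.95

# Assumed timings gathered from a usability session (seconds).
measured_times = [95, 120, 150, 110, 170, 130, 105, 160, 140, 125, 210]

fraction_ok = sum(t < threshold_seconds for t in measured_times) / len(measured_times)
requirement_met = fraction_ok >= target_fraction
print(f"{fraction_ok:.2%} under {threshold_seconds}s -> met: {requirement_met}")
```

The vague form could not be tested at all; the quantified form yields a clear pass/fail answer (here one slow session drags the sample below the 95% target, so the requirement is not yet met).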
Requirements Management
For computer-based systems, requirements are dynamic and subject to change
during the course of the system’s life. A collection of procedures known as requirements
management aids the project team in determining, monitoring and controlling requirements
as well as modifications to them as the project moves forward.
Types of Requirements
The requirements are divided into many groups. The needs are divided into the
following three categories according to priority:
™™ Those that should be absolutely met
™™ Those that are highly desirable but not necessary.
™™ Those that are possible but could be eliminated.
The criteria are divided into the following two categories based on how they function:
1. Essential Functions: They specify elements like timing, synchronisation, computing power, storage architecture and I/O formats.
2. Non-Essential Specifications: They specify a product’s attributes, such as its usability,
performance, efficiency, space, dependability, portability, etc.
Requirements engineering is a crucial part of the software development process, and
there are several common pitfalls that can lead to issues down the line. Here are some
examples:
1. Incomplete Requirements
™™ Pitfall: Requirements are not fully developed, leading to gaps in understanding
and implementation.
™™ Impact: Can result in missing features, incorrect functionality, and significant
rework.
™™ Solution: Ensure thorough elicitation and validation of requirements with
stakeholders. Use techniques like interviews, workshops, and prototyping.
2. Ambiguous Requirements
™™ Pitfall: Requirements are not clearly defined, leaving room for interpretation.
™™ Impact: Different team members may have different understandings, leading to
inconsistent implementation.
™™ Solution: Use precise and unambiguous language. Include examples and
acceptance criteria to clarify intent.
3. Changing Requirements
™™ Pitfall: Requirements are frequently changing without proper management.


™™ Impact: Can cause scope creep, project delays, and budget overruns.
™™ Solution: Implement a robust change management process. Use agile
methodologies to accommodate changes more flexibly.


4. Overly Complex Requirements
™™ Pitfall: Requirements are too detailed or complicated, making them hard to
implement and test.


™™ Impact: Increased risk of errors, longer development time, and higher costs.
™™ Solution: Simplify requirements where possible. Break down complex
requirements into smaller, more manageable pieces.
5. Lack of Stakeholder Involvement


™™ Pitfall: Not involving all relevant stakeholders in the requirements gathering
process.
™™ Impact: Important perspectives and requirements may be missed, leading to a
solution that does not meet all user needs.
™™ Solution: Identify and involve all key stakeholders from the beginning. Maintain
regular communication and feedback loops.


6. Unprioritized Requirements
™™ Pitfall: All requirements are treated as equally important.
™™ Impact: Resources may be allocated inefficiently, critical features may be delayed.
™™ Solution: Prioritize requirements based on business value, risk, and dependencies. Use techniques like MoSCoW (Must have, Should have, Could have, Won’t have).
7. Poor Requirement Documentation
™™ Pitfall: Requirements are not well-documented, leading to confusion and
misinterpretation.
™™ Impact: Increased likelihood of errors and misunderstandings during development.
™™ Solution: Use standardized templates and tools for requirement documentation.
Ensure documentation is clear, complete, and accessible to all team members.
8. Ignoring Non-Functional Requirements
™™ Pitfall: Focusing solely on functional requirements and neglecting non-functional
aspects like performance, security, and usability.
™™ Impact: The system may meet functional needs but fail in other critical areas,
affecting overall quality and user satisfaction.
™™ Solution: Include non-functional requirements in the elicitation and documentation
process. Define clear metrics and acceptance criteria for them.
9. Assuming Requirements are Static
™™ Pitfall: Assuming that once requirements are gathered, they will not change.
™™ Impact: Leads to inflexibility and challenges in adapting to new information or
changes in the market or technology.
™™ Solution: Adopt an iterative approach to requirements engineering. Regularly
review and update requirements as necessary.
10. Lack of Validation and Verification
™™ Pitfall: Failing to validate requirements with stakeholders and verify them during
development.
™™ Impact: Leads to the development of a product that does not meet user needs or
expectations.
™™ Solution: Conduct regular reviews and validation sessions with stakeholders. Use
techniques like prototyping, simulations, and test cases to verify requirements.
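The MoSCoW technique mentioned under pitfall 6 can be sketched in a few lines of code. The requirement names and their category tags below are invented examples, not taken from the text.

```python
# Hypothetical MoSCoW prioritisation: each elicited requirement is tagged
# with one of the four categories and then grouped into planning buckets.
MOSCOW = ("Must have", "Should have", "Could have", "Won't have")

tagged = [
    ("secure login", "Must have"),
    ("audit trail of record changes", "Must have"),
    ("appointment reminders by SMS", "Should have"),
    ("configurable colour themes", "Could have"),
    ("voice control", "Won't have"),
]

# Group into buckets so planning can address them in priority order.
buckets = {category: [] for category in MOSCOW}
for name, category in tagged:
    buckets[category].append(name)

for category in MOSCOW:
    print(f"{category}: {buckets[category]}")
```

The value of the technique lies less in the grouping itself than in forcing stakeholders to agree that not everything can be a “Must have”.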
Coding
Software is created with many years of use in mind. It does, however, occasionally
require adjustments to accommodate users’ evolving needs as a result of ongoing changes
to the organisation’s systems and surroundings. Software maintenance refers to minor
customisation or change of software. Sometimes, maintenance tasks take up more than 50% of the software staff’s time. Occasionally, maintenance expenses surpass the
price of software. Therefore, maintainability is a crucial software quality metric. The ease
with which software allows itself to be changed to meet the evolving demands of users is
known as maintainability.
Multiple programmers work together to code. Therefore, it is ideal that all programmers
adhere to a well-defined and uniform style of coding known as the “Coding Standard” for
consistency and ease of comprehension. Following a coding standard offers the following
benefits:
●● It maintains consistency across the program and offers the codes produced by many
engineers a uniform appearance.
●● It facilitates debugging and modification by making the code easier to grasp.
●● It promotes appropriate programming techniques.
●● Productivity is increased.
A coding standard specifies rules and standards about error return conventions, code
structure (i.e., formatting characteristics), variable declaration and naming, etc. The majority

of software development companies have their own coding guidelines that programmers
must strictly adhere to.
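The idea of a coding standard can be made concrete with a short sketch. The standard itself is hypothetical, as are the function and its names: it assumes descriptive snake_case identifiers, a docstring on every function, a named constant instead of a magic number, and a documented error-return convention of the kind the text mentions.

```python
# Sketch of code written to a hypothetical house coding standard:
# descriptive names, docstrings, named constants, explicit error returns.
MAX_LOGIN_ATTEMPTS = 3  # named constant instead of a bare literal


def count_remaining_attempts(failed_attempts: int) -> int:
    """Return how many login attempts remain.

    Error-return convention (per the hypothetical standard): return -1
    for invalid input rather than raising, so callers must check.
    """
    if failed_attempts < 0:
        return -1  # invalid input, per the standard's error convention
    return max(MAX_LOGIN_ATTEMPTS - failed_attempts, 0)


print(count_remaining_attempts(1))
print(count_remaining_attempts(-5))
```

Whatever the specific rules, the point of a standard is that every programmer on the team writes code the others can read without re-learning a style.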
Three different kinds of coding standards exist.
1. General coding standards: Program codes for all projects produced in any language can
be applied to these standards.
2. Programming language-specific coding standards: These are unique to a given language.
Language-specific coding standards are necessary because computer languages differ
in their functionality and syntax. The way language standards are created ensures that
they complement generic coding standards rather than superseding them.
3. Coding standards unique to a project: These standards are tailored to a single project.
Language and general code standards should be supplemented by project coding
standards, not replaced. It is not ideal to have a different coding standard for every
project since it takes time for individuals to become accustomed to a new standard.
Additionally, it is challenging to reuse program codes across projects if they adhere to diverse coding standards.
Based on the kind of software they create, the platform they utilise, the programming
language they use to create the software and the programming style that their employees
are most familiar with, software companies create their own coding standards. Occasionally,
the contract specifies the coding standard that must be adhered to for software
development. However, when creating a coding standard, a few widely acknowledged rules
and criteria are typically adhered to.
Writing instructions in a programming language for a computer to follow is called
coding, sometimes referred to as programming or software development. In this stage,
a functioning software product is created by translating the requirements and design
specifications. One of the main tasks in software development is coding, which calls for a
blend of technical know-how, inventiveness and problem-solving capabilities.
Key Aspects of Coding
●● Types: Low-level languages (like C, Assembly), high-level languages (like Python,
Java, C#) and scripting languages (like JavaScript, Ruby).
●● Selection: Predicted on elements such as developer experience, performance needs


and application type.
●● IDEs: Integrated Development Environments offer code development, testing and
debugging tools. Examples of these are Visual Studio, Eclipse and IntelliJ IDEA.
●● Version Control Systems: Git and SVN are two tools that facilitate version control over
code versions and teamwork.
●● Coding Practices
™™ Coding Standards: Guidelines for writing clear and consistent code.


™™ Code Reviews: Peer reviews to identify issues and improve code quality.
™™ Refactoring: Improving the structure of existing code without changing its
functionality.
™™ Documentation: Writing comments and documentation to explain code logic and usage.
●● Testing
™™ Unit Testing: Testing individual components or units of code.
™™ Integration Testing: Ensuring that different modules or services work together.
™™ System Testing: Testing the complete system to verify it meets the specified
requirements.
™™ Automated Testing: Using tools to run tests automatically and continuously.
●● Debugging: Using debugging tools, logging and systematic testing to trace and resolve
issues.
●● Deployment: Manual deployment, automated deployment (e.g., CI/CD pipelines),
containerisation (e.g., Docker) and cloud deployment (e.g., AWS, Azure).
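The unit-testing practice listed above can be illustrated with a minimal sketch using Python's standard unittest module; the function under test and its name are invented for the example.

```python
import unittest

# Minimal sketch of unit testing: a small function and a test case
# exercising it. The function is a hypothetical example.
def normalise_patient_id(raw: str) -> str:
    """Trim surrounding whitespace and upper-case a patient identifier."""
    return raw.strip().upper()


class TestNormalisePatientId(unittest.TestCase):
    def test_strips_and_uppercases(self):
        self.assertEqual(normalise_patient_id("  ab123 "), "AB123")

    def test_clean_input_unchanged(self):
        self.assertEqual(normalise_patient_id("XY999"), "XY999")


suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestNormalisePatientId)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests passed:", result.wasSuccessful())
```

In practice such tests would be run automatically on every change (the "Automated Testing" and CI/CD points above), so a regression in one unit is caught before integration.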
Importance of Coding
●● Implementation of Requirements: Design and requirements are translated into a
working product through coding.
●● Innovation and Problem Solving: Makes it possible to develop new features and
resolve challenging issues.
●● Quality Assurance: Robust coding techniques and thorough testing guarantee the software’s dependability and functionality.
●● Scalability and Maintenance: Well-written code is simpler to grow, extend and maintain.
The seamless integration of requirement engineering and coding is critical to the
success of any software project. Good requirement engineering gives coding a firm base
and precise direction. Coding, in the meantime, gives the specifications life and produces a
concrete product that satisfies the demands of the stakeholders.
To produce software projects that are effective, two essential components of software
development—software requirement engineering and coding—must cooperate. Coding
converts requirements from stakeholders’ demands into a functional software system,
whereas requirement engineering makes sure the proper product is developed by gathering
and verifying customer needs. Through the integration of optimal methodologies from both
fields, development teams may produce software that is both user-friendly and long-lasting.
2.1.2 Requirement Determination: Traditional and Modern Methods
In the software development lifecycle, requirement determination is an important stage.
To make sure the finished product satisfies stakeholders’ needs and expectations, it entails identifying and documenting those needs. Many techniques have been developed over time
to collect and specify these criteria. These techniques fall into two general categories:
)A

classic and modern. Every approach has advantages and disadvantages of its own and the
particular context and project requirements typically influence the technique selection.
Conventional techniques for determining requirements frequently take an organised,
linear approach. These techniques are well-established in the literature and have been
applied extensively for a long time.
The elicitation and analysis of requirements may involve a wide range of individuals
within an organisation. Anybody who should have some direct or indirect impact upon the
requirements of the system is considered a system stakeholder. All individuals inside an

organisation who will be impacted by the system, as well as end users who will engage
with it, are considered stakeholders. Trade union representatives, corporate managers,
domain specialists and engineers creating or maintaining other connected systems could be considered additional system stakeholders.
The figure below depicts a process model of the elicitation and analysis procedure.

Every organisation will have a unique implementation of this broad model, which will
vary based on local circumstances including personnel skills, the kind of system being
developed, standards being followed, etc.
Figure: The requirements elicitation and analysis process
Requirement Identification: This is the procedure for communicating with system
stakeholders to find out what they need. Stakeholders’ domain requirements and
documentation are also found during this process. One may address a few of these
complementing approaches further in this section. They can be applied to requirements discovery.
Organisation and classification of requirements: This task takes the disorganised set of
U

requirements and classifies them into meaningful clusters based on shared requirements.
The most popular method for classifying requirements is to identify subsystems and
assign requirements to each one using a model of the system architecture. Requirements
engineering and architectural design are not entirely distinct disciplines in real life.
Prioritising and Bargaining Over Requirements: When there are several parties
involved, requirements will inevitably clash. Prioritising requirements and identifying and
resolving requirements conflicts through negotiation are the main goals of this activity.
Stakeholders typically need to get together to settle disputes and decide on necessary
compromises.
Specification of Requirements: The criteria are recorded and added to the spiral’s
subsequent iteration.
The requirements elicitation and analysis process is iterative, as the above
figure illustrates, with constant feedback from one activity to the next. Requirements
documentation completes the process cycle, which began with requirements discovery.
With each cycle, the analyst’s comprehension of the requirements grows. When the
requirements document is finished, the cycle comes to a close.
It might be challenging to extract and comprehend requirements from system
stakeholders for a number of reasons.

1. Stakeholders frequently have vague ideas about what they want from a computer system
and may find it difficult to express their needs. As a result, they may make irrational
requests since they are unsure of what is and isn’t practical.
2. Stakeholders in a system naturally articulate needs in their own language, drawing on an implicit understanding of the work that they do. Requirements engineers might not comprehend
these needs if they have no prior knowledge in the customer’s domain.
3. Stakeholders vary in what they require and they may communicate this differently.
Requirements engineers must identify commonalities and conflicts as well as all possible
sources of requirements.
4. A system’s requirements may be influenced by political issues. Demands for certain
system needs may come from managers who want to exert more control over the
organisation.
5. The study is conducted in a dynamic corporate and economic context. It will unavoidably
alter as the analysis progresses. The significance of specific requirements could fluctuate.
It’s possible that previously unconsulted new stakeholders will have new requirements.
Different stakeholders will inevitably hold differing opinions on the significance and
order of needs and occasionally these opinions will clash. In order to achieve compromises,
you should schedule frequent stakeholder negotiations throughout the process. While it
is impossible to please every stakeholder, some may purposefully try to sabotage the RE
process if they believe their opinions have not been given enough weight.
The needs that have already been elicited are documented at the requirements
specification stage so that they can be used to aid in requirements discovery. At this point,
a partial set of requirements and missing sections may be included in an early draft of the
system requirements document. On the other hand, the requirements could be recorded
quite differently (for example, on cards or in a spreadsheet). Since needs are simple for
stakeholders to manage, modify and arrange, writing them down on cards can be a very
effective strategy.

Requirements Discovery
Requirement discovery, also known as requirements elicitation, is the process of
gathering data about the needed system and current systems and extracting the user
and system requirements from it. Documentation, system stakeholders and specifications
of comparable systems are some of the information sources that can be consulted during
the requirements discovery phase. Stakeholders are engaged through observation and
interviews and to help them comprehend what the system would be like, you may utilise
prototypes and scenarios.
A system’s end users, managers and external stakeholders like regulators who attest
to the system’s acceptability are all considered stakeholders. Stakeholders in the mental
healthcare patient information system, for instance, include:
●● Patients whose data is stored in the database.
●● Medical professionals in charge of diagnosing and treating patients.
●● Nurses who oversee doctor discussions and provide certain treatments.
●● Medical receptionists who oversee patient scheduling.
●● IT personnel, who are in charge of setting up and looking after the system.
●● A medical ethics manager, whose job it is to make sure the system complies with the
most recent moral standards for patient care.
●● Healthcare administrators who use the system to access management data.
●● Medical records personnel who are in charge of making sure that protocols for
maintaining and preserving records have been correctly followed and that system
information can be maintained and saved.
As we’ve already seen, requirements might originate not just from system stakeholders
but also from the application domain and other systems that communicate with the
system under specification. Each of these needs to be taken into account when gathering
requirements. Stakeholders, domains and systems are some examples of these various
requirements sources that can all be represented as system viewpoints, each of which
displays a portion of the system’s requirements. Diverse perspectives on a given issue give
diverse insights into the problem. They typically overlap, though, so their points of view
are not entirely separate and they share some needs. These perspectives can be used to
organise the process of finding and documenting the system requirements.
Interviewing
The majority of requirements engineering procedures involve conducting formal or
informal interviews with stakeholders in the system. The requirements engineering team
questions stakeholders during these interviews about the system they use now and the
system that will be developed. From the responses to these queries, requirements are
derived. There are two sorts of interviews:
1. Closed interviews, in which the subject responds to a pre-formulated list of inquiries.
2. Open interviews, which don’t follow a set schedule. By investigating a variety of topics with
system stakeholders, the requirements engineering team improves their comprehension
of their requirements.
In practice, stakeholder interviews typically combine both sorts. Answers to some questions might be required, but these typically lead to further topics that are discussed less formally. Completely open-ended conversations are rarely productive. To keep the interview focused on the system to be designed, you normally need to ask a few questions to get things started.
Interviews are useful for gaining a general idea of stakeholders’ roles, their potential interactions with the new system and the challenges they face with the one they currently use. People often enjoy talking about their work and are usually delighted to participate in interviews. Interviews, however, are not very useful for understanding application domain requirements.
Two factors may make it challenging to extract domain knowledge during interviews:
1. Every application specialist speaks in domain-specific terms and jargon. They cannot talk about domain requirements without using these terms. They typically use language in a nuanced and precise way that requirements engineers can easily misinterpret.
2. Some stakeholders believe that certain domain knowledge is so basic that it isn’t worth discussing, or they find it so familiar that it is difficult to articulate. For instance, it goes without saying for a librarian that every acquisition is catalogued before being added to the collection. But this might not be evident to the interviewer, so the requirement may never be made explicit.


Because there are nuanced power dynamics among the various individuals in an organisation, interviews are also a poor method for obtaining information on organisational requirements and constraints. The reality of decision-making in an organisation rarely matches its published structure, and interviewees may be unwilling to disclose the actual, rather than the theoretical, structure to an outsider. People are typically reluctant to discuss the organisational and political issues that could affect the requirements.
Successful interviewers share the following two traits:

1. They are eager to listen to stakeholders, have an open mind and do not approach the interview with preconceived notions about the requirements. They are willing to change their view of the system when stakeholders raise unexpected requirements.
2. They use a springboard question, a requirements proposal, or collaborative work on a prototype system to start a conversation with the interviewee. Simply asking someone to “tell me what you want” is unlikely to yield helpful information. Most people find it much easier to talk in a specific context than in general terms.
Interview data complements other system knowledge obtained from user observations, documentation describing business processes or existing systems, and so on. Sometimes, aside from the information in system documents, interviews may be the only source of information about the system requirements. Interviewing should, however, be used in conjunction with other requirements elicitation approaches, as relying on interviews alone risks missing important information.

Requirement Elicitation Through Questionnaire
Time constraints limit the number of people who can be interviewed. A questionnaire is an easy way for the analyst to reach a large number of people and find out what they think about different parts of the system. When a system is large or the study spans multiple departments, questionnaires can be sent to the appropriate parties asking them to provide the required information about the system.
Extensive distribution guarantees respondent anonymity and promotes more truthful responses. Standardised questions can also produce more reliable data. Obtaining a sufficient response to questionnaires, however, is frequently a major problem. A complete response is quite uncommon, even when questionnaires are distributed to a sizable number of people. Busy people in business organisations typically take their time filling out questionnaires; in general, they do not place much emphasis on answering them.

Review of Records
Many organisations have a sizable number of records and reports available, which offer helpful details about the current system. “Records” refers to the written policy manuals, rules and standard operating procedures that most organisations keep on file as a reference for management and staff. Manuals are documents that describe the organisation’s current practices and activities. Nevertheless, most organisations’ standard operating procedures and manuals are frequently outdated and deviate from the processes actually followed in those organisations.
Records give analysts a sense of the volume of transactions and help them become familiar with existing practices. A thorough examination of how the various kinds of forms are used leads to a deeper comprehension of the organisation’s current working procedures. Further details can also be found in management reports, consultant briefs and earlier study reports. These give the analyst the reasoning behind points that might otherwise be hard to understand.

Observation
One useful method for learning about a system is to watch people as they perform their tasks. Scientists, sociologists, psychologists and industrial engineers all recognise observation as a fact-finding method for examining the range of tasks carried out in an organisation.

Observation offers firsthand knowledge about how activities are actually carried out. It allows the analyst to determine whether the processes described in procedure manuals are actually followed when different tasks are performed. For instance, to learn how senior-level managers make decisions, an analyst can watch what kinds of information are requested, how quickly they are delivered and where they originate.

Requirement Gathering Through Collaboration
All parties participating in the requirement elicitation process must be involved in order to absorb risk and prevent future conflict. Collaborative requirement gathering is therefore one common method of eliciting requirements.

Collaborative requirement gathering is simply a team-focused requirement gathering methodology. A meeting is arranged for this purpose, to which the software team as well as other organisation stakeholders are invited. To produce productive outcomes, a facilitator or coordinator is typically selected to set up and run the meeting in a structured manner.
The first actions define the scope of the problem and the general consensus regarding a solution. The stakeholders draft a system/software product request based on the knowledge gained from these initial discussions. All participants receive the product request well in advance of the scheduled meeting date. Every participant in the meeting is asked to jot down a list of the following:
●● List the items (or components) that make up the environment of the system.
●● Determine which items are generated by the system.
●● Determine the items the system uses to carry out its operations.
●● Determine which services (i.e., functions or processes) manipulate or interact with the objects.
●● Determine the limitations (e.g., size, cost and business norms).
●● Determine the performance standards, such as accuracy and speed.


The attendees are told that the lists are meant to represent each person’s view of the system and are not meant to be exhaustive.
The collaborative requirement phase procedure can be summed up as follows:
●● Guidelines for preparation and participation are developed.
●● Software engineers, customers and other relevant stakeholders conduct and attend meetings.
●● It is recommended that the meeting have an agenda that addresses all pertinent topics. Nonetheless, a planned and controlled exchange of ideas is encouraged during the meeting.
●● A developer, consultant, or client who oversees the meeting’s operations is referred to as the “facilitator.”
●● The purpose of the meeting is to:
(i) define the issue;
(ii) suggest potential solutions;
(iii) discuss and decide on various strategies; and
(iv) develop a draft set of specifications for the solution.

The collaborative process aims to involve various stakeholders in identifying the software’s functional requirements. Collaboration and teamwork among all parties involved are critical to the creation of software.

Example

Let’s look at an example of a collaborative requirement gathering process for a software project. A banking company wishes to replace its traditional teller counters with Automated Teller Machines (ATMs) for money withdrawal. The first step is to find the various functional requirements for an ATM. A team comprising professionals from marketing, hardware engineering, software engineering and production gathers the requirements. A third-party facilitator or consultant could also be useful during the requirement gathering phase.

The first item on the agenda in the requirements gathering team meeting is often “Justification for the new system/product.” Everyone ought to be persuaded of the new system’s or product’s necessity. After a consensus is reached, each participant shares his or her list for discussion. Every team member creates a list of the items that will make up the ATM system. The note counting machine, printing machine, card reader and control panel with display device could be among the items mentioned in relation to the ATM. The services offered might include programming the control panel, inserting paper into the printer and filling the note counting machine. Each team member then creates a list of limitations in a similar manner.
Some of the limitations for the ATM could include, for instance, the requirement that the system be user-friendly, that it be directly interfaced to a central server and that it reject invalid cards inserted by customers. A set of performance criteria might include things like how quickly the system should identify a card (say, one second) and how best to count the notes before paying out to customers. The lists can be put up on an electronic bulletin board or pinned to the room’s walls using large sheets of paper.
The group then creates a combined list after each person presents his or her own. The combined list adds any new ideas that come up during the discussion and removes duplicate items; typically, however, nothing else gets removed from the list. Once combined lists have been created for all topic areas, the facilitator leads the discussion. The combined list is expanded, contracted, or rewritten to accurately represent the system or product under development. The aim is to create a consensus list for each issue area (objects, services, constraints and performance). The lists are then acted upon.
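The list-combining step described above can be sketched in code. This is only an illustrative sketch, not part of the technique itself: the function name, the participant lists and the case-insensitive matching rule are our own assumptions.

```python
# Hypothetical sketch of the list-combining step: duplicates are removed,
# the order of first mention is preserved and nothing else is dropped.
# Function and variable names are illustrative, not from the text.

def build_combined_list(individual_lists):
    """Merge per-participant lists, keeping only first occurrences."""
    combined, seen = [], set()
    for items in individual_lists:
        for item in items:
            key = item.strip().lower()  # assume "Card Reader" == "card reader"
            if key not in seen:
                seen.add(key)
                combined.append(item)
    return combined

alice = ["card reader", "control panel", "printing machine"]
bob = ["Card Reader", "note counting machine", "control panel"]
print(build_combined_list([alice, bob]))
# ['card reader', 'control panel', 'printing machine', 'note counting machine']
```

In practice the facilitator, not a program, performs this merge, but the sketch makes the rule explicit: new ideas are appended, duplicates are dropped and earlier entries are never deleted.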
Following the completion of the consensus lists, the team is split up into smaller subgroups. Each subgroup works on creating the specifications for one or more of the lists’ components. For instance, the item specification for the object “Control Panel” might be as follows: “The control panel is a unit placed close to the bottom of the display device and measuring around 8 by 8 inches. A keypad is used to communicate with users via the control panel. The keypad has two control keys, ‘Cancel’ and ‘Enter,’ in addition to numeric keys numbered 0 through 9. Various menus and displays are shown on a 14 × 14 inch LCD display. The software offers similar features, interactive prompts and messages.”


Following that, each subgroup presents its item specifications for debate in the team meeting. The specifications are further refined, with deletions and additions made. In some cases, additional items, services, limitations, or software performance requirements are discovered while the item specifications are being created. These are then added to the original consensus lists. A wide range of issues may come up during team meetings and some of them may not be resolved during the meeting. An “issue list” is typically kept so that these issues can be examined and addressed at a later time.

Each team member compiles a list of validation criteria for the system or product after the item specifications are finished and gives it to the team coordinator. A consensus set of validation criteria is then produced. Lastly, using all of the meeting’s input, one or more participants are tasked with developing a comprehensive draft specification. This work product or draft specification is usually finalised after being reviewed in a team meeting.

The Ethnography
Software systems are not isolated entities. The social and organisational context in which they are used may influence or dictate the requirements for the software system. Meeting these social and organisational conditions is frequently essential for the system to be successful. A common reason why software systems are deployed but never used is that their requirements do not properly account for how the organisational and social context affects the system’s actual operation.

Ethnography is an observational method that can be used to understand operational processes and determine the kind of support these processes need. An analyst becomes fully immersed in the workplace where the system will be used. The daily work is observed and the actual tasks in which participants are involved are noted. The benefit of ethnography is that it can identify implicit system requirements, which reflect people’s real working practices rather than the official procedures established by the organisation.
It is often very difficult for people to explain the specifics of their work because it comes naturally to them. Although they understand their own work, they might not understand how it relates to other work in the organisation. Social and organisational factors that affect the work, but are not obvious to individuals, may only be noticed by a dispassionate observer. For example, a work group might self-organise so that members know one another’s responsibilities and can cover for one another when needed. Since the group might not consider this to be a crucial part of their work, it might not be mentioned during an interview.


Suchman was a pioneer in the field of office work ethnography. She discovered that, in contrast to the simplistic models that office automation systems presume, the real work practices were far richer, more intricate and more dynamic. The discrepancy between the assumed and the actual work was the primary reason these office systems had no appreciable impact on productivity. Since then, a great deal of research has been done and Crabtree provides a broad overview of the application of ethnography in systems design. Later work has looked into ways to record cooperative system interaction patterns and to link requirements engineering techniques with ethnography so as to incorporate it into the software engineering process.
Ethnography is very useful for identifying two kinds of needs:
1. Requirements that stem from how individuals actually work, as opposed to how process descriptions say they should work. For instance, even though standard control procedures mandate that a conflict alert system be used to identify aircraft with crossing flight paths, air traffic controllers may choose to disable it. To assist in controlling the airspace, they deliberately place aircraft on conflicting routes for brief periods of time. The conflict alert distracts them from their duties and their control plan is designed to ensure that these aircraft are separated before problems arise.
2. Requirements that result from collaboration and awareness of one another’s actions. To estimate the number of aircraft that will be entering their control sector, for instance, air traffic controllers may use their knowledge of other controllers’ work. They then adjust their control strategies in accordance with the anticipated workload. As a result, an automated ATC system ought to give sector controllers some visibility into the activities taking place in neighbouring sectors.
Ethnography and prototyping can be combined. The ethnography informs the development of the prototype, reducing the number of prototype refinement cycles needed. Prototyping in turn helps to focus the ethnography by highlighting issues and questions that the ethnographer can then address. During the next stage of the system study, he or she should search for the answers to these questions.

Ethnographic studies can reveal important process nuances that are frequently overlooked by conventional requirements elicitation approaches. Nevertheless, because of its end-user focus, this method is not always suitable for identifying organisational or domain requirements. Ethnographic studies cannot always identify what additional features should be added to a system. Ethnography is therefore not a substitute for other elicitation techniques, such as use case analysis; rather, it should be used in conjunction with them.

Modern Methods
Contemporary techniques for determining requirements have developed in response to the shortcomings of earlier approaches. They frequently place an emphasis on adaptability, teamwork and ongoing development.

Agile Methodologies
Scrum and Kanban are two examples of agile approaches that emphasise iterative development and ongoing feedback. Requirements are gathered and refined throughout the development process.



In the 1980s and early 1990s, many believed that the best ways to produce better software were meticulous project planning, formalised quality assurance, the use of analysis and design techniques supported by CASE tools and rigorous, controlled software development processes. This view was held by the community of software engineers who built large, long-lived software systems, such as those for government and the aerospace industry.

Large teams of developers from various companies worked on such software. Teams worked on the programs for extended periods of time and were frequently geographically distributed. The control systems of a modern aircraft are an example of this kind of software; from original specification to deployment, development can take up to ten years. These plan-driven methods have a substantial system design, planning and documentation overhead. This overhead is justifiable when several development teams must coordinate their efforts, the system is critical and a wide range of individuals will be involved in maintaining the software over its lifetime.
Software development techniques known as “agile methodologies” are based on iterative development, in which self-organising cross-functional teams collaborate to evolve requirements and solutions. These approaches, which prioritise flexibility, continuous improvement and the delivery of compact, useful software increments, help teams adjust to changing requirements. Agile approaches value customer collaboration over contract negotiation, responding to change over rigid planning and individuals and interactions over processes and tools. Well-known agile frameworks such as Extreme Programming (XP), Scrum and Kanban offer roles and practices that support the agile concepts of frequent iterations (or sprints), daily stand-up meetings and ongoing feedback loops. By delivering working software frequently and incorporating stakeholder feedback throughout the development process, agile techniques attempt to improve productivity, enhance product quality and raise customer satisfaction, encouraging an environment of transparency, inspection and adaptation.

Use Cases and User Stories


Techniques for capturing functional requirements from the viewpoint of end users
include use cases and user stories. Use cases offer comprehensive situations, whereas
U

user stories are succinct, straightforward summaries.


Though they accomplish this in different ways, use cases and user stories are both
crucial tools in software development because they both help to capture and express
ity

requirements from the viewpoint of the end user. User stories usually have the following
structure: “As a [user], I want [goal] so that [reason].” They are brief, straightforward
explanations of a feature from the viewpoint of the user who wants the new capacity. This
concept, which encourages gradual development and ongoing feedback, is common in
m

agile approaches. They are useful for iterative and incremental development because they
are lightweight, simple to understand by all stakeholders and concentrate on delivering
value rapidly. Use cases, on the other hand, offer more thorough scenarios that explain how
users work with the system to accomplish particular objectives.
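The user story template just described can be illustrated with a small sketch. This is our own illustration, not a standard API: the class, its field names and the example story are invented.

```python
# Illustrative sketch only: the "As a [user], I want [goal] so that
# [reason]" template captured as a small data structure. The class and
# field names are our own invention, not a standard API.

from dataclasses import dataclass

@dataclass
class UserStory:
    user: str    # the role requesting the capability
    goal: str    # what the role wants to do
    reason: str  # the value the capability delivers

    def __str__(self) -> str:
        return f"As a {self.user}, I want {self.goal} so that {self.reason}."

story = UserStory("bank customer",
                  "to withdraw cash from an ATM",
                  "I do not have to queue at a teller counter")
print(story)
# As a bank customer, I want to withdraw cash from an ATM so that I do not have to queue at a teller counter.
```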

They frequently handle more intricate interactions and edge cases by including extensions for alternate flows in addition to the primary success scenario. Use cases are helpful for recording functional requirements more thoroughly and systematically, which leads to a better comprehension of user interactions and system behaviours. Both use cases and user stories contribute significantly to making sure that the program fulfils user needs and operates as intended. Use cases offer comprehensive, organised insights for in-depth requirement analysis and system design, while user stories enable agile, iterative development.
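The structure of a use case (a main success scenario plus extensions for alternate flows) can be sketched as structured data. The scenario below is adapted from the ATM example earlier in this unit; the field names and step numbering are our own illustrative assumptions.

```python
# A hedged sketch of how a use case with a main success scenario and
# extensions for alternate flows might be recorded as structured data.
# Field names and step numbering are illustrative, not a standard format.

withdraw_cash = {
    "name": "Withdraw cash",
    "actor": "Bank customer",
    "main_success_scenario": [
        "Customer inserts card",
        "System validates card",
        "Customer enters amount",
        "System dispenses notes and returns card",
    ],
    # Each extension is keyed by the main-scenario step it branches from.
    "extensions": {
        "2a": "Card is invalid: system rejects the card",
        "4a": "Insufficient funds: system displays an error message",
    },
}

for step, flow in sorted(withdraw_cash["extensions"].items()):
    print(f"Step {step}: {flow}")
```

Keying each extension to a step of the main scenario is what lets a use case cover edge cases without obscuring the primary flow.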


2.1.3 Process and Data Modeling Techniques


Three interconnected pieces of information make up the data model: the data object, the attributes that describe the object and the relationships that connect the objects.
Data Objects: Almost any composite information that has to be processed by software can be represented as a data object. By composite information, we mean something with several distinct characteristics. Consequently, dimensions (which include height, width and depth) could be defined as a data object, but width, as a single value, would not be a valid data object.

An external entity (anything that generates or consumes information), a thing (such as a report or display), an occurrence (such as a phone call), an event (such as an alarm), a role (such as a salesperson), an organisational unit (such as the accounting department), a place (such as a warehouse), or a structure (such as a file) can all be considered data objects. A person or a car, for instance, can both be thought of as data objects because each can be described by a set of properties. The data object and all of its attributes are included in the data object description.
Relationships between data objects are also part of the model. For instance, a person may own a car, in which case there is an implied “connection” between the owner and the vehicle. Relationships are always defined by the context of the problem under analysis.

Attributes: A data object’s attributes define its properties and can take one of three roles: they can (1) name an instance of the data object, (2) describe the instance, or (3) make reference to another instance in a different table. When one or more attributes are defined as identifiers, the identifier attribute becomes a “key” for locating an instance of the data object. The values of the identifier(s) are often, though not necessarily, unique. A suitable identifier for the data object car could be its ID number.
Finding the right set of attributes for a particular data object requires an understanding of the problem context. A set of car attributes suitable for a Department of Motor Vehicles application would be of little use to an automaker in need of manufacturing control software. In the latter case, the car’s attributes might still include its ID number, body type and colour, but many more attributes (such as interior code, drive train type, trim package designator and gearbox type) would have to be added to make the car a useful object in the context of manufacturing control.
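The role of the identifier attribute as a “key” can be sketched as follows. All attribute values and ID numbers below are invented for illustration; only the attribute names come from the manufacturing-control discussion above.

```python
# Sketch (values invented for illustration): the identifier attribute
# acting as a "key" that locates exactly one instance of the data
# object car. The attribute set is the manufacturing-control one
# discussed above.

cars = {
    # key: ID number -> descriptive attributes of one car instance
    "V100234": {"body_type": "sedan", "colour": "blue",
                "drive_train": "front-wheel", "trim_package": "LX"},
    "V100567": {"body_type": "hatchback", "colour": "red",
                "drive_train": "rear-wheel", "trim_package": "EX"},
}

# Because the ID number is unique, one lookup finds exactly one instance.
print(cars["V100234"]["colour"])  # blue
```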
Relationships: Data objects are related to one another in various ways. Consider the two data objects, bookstore and book. These objects can be represented using the basic notation. A relationship exists between book and bookstore because the two objects are related. But what are the relationships? To answer this, we must understand the role of books and bookstores in the context of the software to be developed. The pertinent relationships can be defined by establishing a set of object/relationship pairs. For example:
●● A bookstore orders books.
●● A bookstore displays books.
●● A bookstore stocks books.
●● A bookstore sells books.
●● A bookstore returns books.
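The object/relationship pairs listed above can be captured as simple triples so they can be queried during analysis. The representation is our own illustration, not a standard notation.

```python
# Illustrative only: the object/relationship pairs listed above captured
# as (subject, relationship, object) triples. The representation is our
# own, not a standard data-modelling notation.

pairs = [
    ("bookstore", "orders", "book"),
    ("bookstore", "displays", "book"),
    ("bookstore", "stocks", "book"),
    ("bookstore", "sells", "book"),
    ("bookstore", "returns", "book"),
]

# All relationships defined between the two data objects:
relationships = [rel for subj, rel, obj in pairs
                 if subj == "bookstore" and obj == "book"]
print(relationships)
# ['orders', 'displays', 'stocks', 'sells', 'returns']
```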

Data-Flow Diagrams (DFD)
Data-flow diagrams (DFDs) are also known as data-flow graphs and bubble charts. A DFD has two goals: clarifying system requirements and identifying major transformations. DFDs show the flow of data through a system. The DFD is a valuable modelling tool that lets us visualise a system as a network of interconnected functional processes.

Data-flow diagrams are a well-known and widely used tool for describing the functions of an information system. Systems are described as collections of data that are manipulated by functions. Data can be organised in several ways: stored in data repositories, flowing in data flows and transferred to and from the external environment.
One of the factors contributing to the success of DFDs is that they can be expressed using an appealing graphical notation that makes them simple to use.

Symbols Used for Constructing DFDs
Different kinds of symbols are employed in the construction of DFDs. The meaning of each symbol is explained below:
1. Function symbol: A circle is used to represent a function. This symbol, often known as a process or bubble, transforms input data in some way.

2. External entity symbol: A square designates a source or destination of system data. Any entity that provides information to, or receives information from, the system but is not a part of it is referred to as an external entity.

3. Data-flow symbol: An arrow or directed arc represents a data flow. A data-flow symbol shows the data flow that takes place between two processes, or between an external entity and a process, with the arrow indicating the direction of the flow.
4. Data-store symbol: Two parallel lines represent a data store. A data-store symbol can represent a logical file, which may be a data structure or a physical file on disk. A data-flow symbol links each data store to a particular process. The direction of the data-flow arrow indicates whether data is being read from or written into the data store.
5. Output symbol: It is used to represent data production and acquisition during human-computer interaction.

The fundamental notation used to create a DFD is not, by itself, sufficient to fully describe software requirements. For example, an arrow in a DFD represents a data object that enters or leaves a process and a data store represents a structured collection of data. But what information is conveyed by the store or carried by the arrow? What kinds of objects does the arrow, or the store, represent? The data dictionary, another element of the fundamental notation for structured analysis, is used to answer these questions.

Information undergoes a series of transformations as it passes through software. A data flow diagram depicts information flow and the transformations applied as data move from input to output. In its simplest form, a data flow diagram is also referred to as a bubble chart or data flow graph.
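The DFD elements just described can be recorded as a small data structure, which makes the connection rules explicit. This is a hedged sketch: the order-handling example and all element names are invented for illustration.

```python
# A hedged sketch of the DFD elements described above (processes,
# external entities, data stores and data flows) for a tiny invented
# order-handling example.

processes = {"validate order", "update inventory"}
entities = {"customer"}
stores = {"orders file"}

# Each flow is (source, data item, destination); the direction of the
# arrow in the diagram corresponds to source -> destination here.
flows = [
    ("customer", "order details", "validate order"),
    ("validate order", "valid order", "update inventory"),
    ("update inventory", "order record", "orders file"),
]

def well_formed(flow):
    """Stores and external entities must connect through a process."""
    src, _, dst = flow
    return src in processes or dst in processes

print(all(well_formed(f) for f in flows))  # True
```

The `well_formed` check encodes the rule implied by the symbol descriptions above: a data store or external entity never exchanges data directly with another store or entity, only via a process.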

Entity/relationship Diagram (ERD)

Peter Chen originally proposed the ERD for relational database system design and others have since extended it. The ERD is made up of the following main components: data objects, attributes, relationships and various type indicators. The primary purpose of the ERD is to represent data objects and their relationships.
Data objects are represented by labelled rectangles. Relationships between objects are indicated by a labelled line connecting them. In some ERD variants, the connecting line contains a diamond labelled with the relationship. Connections between data objects and relationships are established using a variety of special symbols that denote cardinality and modality.
The figure below illustrates the relationship between the data objects car and manufacturer. One manufacturer builds one or more cars. Given the context implied by the ERD, the specification of the data object car would differ from the earlier one. Examining the symbols at the ends of the connecting line shows that the modality of both occurrences (the vertical lines) is mandatory.

Figure: A simple ERD and data object table (Note: In this ERD the relationship builds is indicated by a
diamond)
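The cardinality of the builds relationship in the ERD above (one manufacturer builds one or more cars) can be made concrete in a small sketch. The manufacturer and model names below are placeholders of our own, not data from the text.

```python
# Sketch under our own naming: the "builds" relationship from the ERD
# above, with its cardinality (one manufacturer builds one or more
# cars) checked explicitly. All names are placeholders.

builds = {
    "Manufacturer A": ["Model X", "Model Y"],
    "Manufacturer B": ["Model Z"],
}

def cardinality_ok(relation):
    """1:N with N >= 1 - every manufacturer builds at least one car."""
    return all(len(cars) >= 1 for cars in relation.values())

print(cardinality_ok(builds))                   # True
print(cardinality_ok({"Manufacturer C": []}))   # False
```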

By extending the model, we obtain a greatly simplified ERD of the distribution component of the automotive industry. Two new data objects, shipper and dealership, are introduced. In addition, new relationships (stocks, contracts, licences and transports) represent the associations between the data objects depicted in the figure.


Figure: An expanded ERD
In addition to the fundamental ERD notation shown in the figures above, the analyst can depict data object type hierarchies. A data object may frequently represent a class or type of information. For instance, the data object automobile falls into one of three categories: Asian, European, or domestic. This classification is represented as a hierarchy using the ERD notation shown in the figure below.
hierarchy using the ERD notation seen in the figure below.



Figure: Data object type hierarchies

Additionally, ERD notation provides a means for representing object associativity. The figure below depicts the representation of an associative data object: the data object automobile is linked to all the data objects in the diagram that represent its various subsystems.
Figure: Associative data objects

The entity relationship diagram and data modelling give the analyst a clear notation for
looking at data in the context of a software program. The data modelling approach can be
used for database design and to support other requirements analysis approaches, but it is
typically used to develop one component of the analysis model.
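As a hedged illustration of carrying a data model into database design, the one-to-many car–manufacturer relationship from the earlier ERD might map to a relational schema as follows (table and column names are assumptions, not from the text):

```python
import sqlite3

# Illustrative sketch: the car-manufacturer ERD carried into a relational schema.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("""
    CREATE TABLE manufacturer (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    )""")
# The 1:N relationship "builds" becomes a foreign key; declaring it NOT NULL
# models the mandatory modality (every car must have a manufacturer).
conn.execute("""
    CREATE TABLE car (
        id              INTEGER PRIMARY KEY,
        model           TEXT NOT NULL,
        manufacturer_id INTEGER NOT NULL REFERENCES manufacturer(id)
    )""")
conn.execute("INSERT INTO manufacturer (id, name) VALUES (1, 'Example Motors')")
conn.execute("INSERT INTO car (model, manufacturer_id) VALUES ('E-100', 1)")
rows = conn.execute("""
    SELECT m.name, c.model
    FROM car c JOIN manufacturer m ON c.manufacturer_id = m.id
""").fetchall()
print(rows)  # [('Example Motors', 'E-100')]
```

Note how the cardinality (one manufacturer, many cars) is expressed by placing the foreign key on the "many" side, while the modality is expressed by the NOT NULL constraint.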

2.1.4 Documenting Requirements with Cases


The explicit declaration of what should be implemented by system developers is
found in the software requirements document, often known as the software requirements
specification, or SRS. It should provide a thorough description of the system requirements
as well as the user needs for the system.

User and system requirements are occasionally combined into a single description. In
other situations, the introduction of the system requirements specification contains a
definition of the user requirements. If there are a large number of requirements, the
detailed system requirements might be placed in a separate document.


Requirements documents are crucial when a software system is being developed by a
third party. Agile development approaches counter that requirements change so quickly
that a requirements document is out of date the moment it is written, so the effort is
largely wasted. Instead of a formal document, approaches such as Extreme Programming
gather user needs incrementally and record them on cards as user stories. The user then
assigns a priority to the requirements for the next system increment.
This strategy works well for business systems whose requirements change rapidly. When
concentrating on the functional needs of the next system release, it is easy to overlook
the requirements that apply to the system as a whole, so it is still helpful to prepare a
brief supporting document that outlines the system's reliability and business requirements.
The users of the requirements document are varied, ranging from the engineers working
on the software development to the senior management of the company paying for the
system. Prospective users of the document and their usage are depicted in the figure below.


Figure: Users of a requirements document

Due to the wide range of potential users, the requirements document must strike a
balance between providing customers with an accurate statement of their needs, giving
developers and testers a detailed definition of the requirements and providing
information on likely system evolution. Knowledge of impending changes benefits both
system maintenance engineers, who must adapt the system to new needs, and system
designers, who want to avoid overly restrictive design choices.
The level of detail in a requirements document varies depending on the kind of system
being designed and the methodology being employed. Critical systems must have
comprehensive requirements because safety and security need to be considered carefully.
The system specifications must be exact and comprehensive when the system is to be
constructed by a different business (for example, through outsourcing). If an internal,
iterative development methodology is adopted, the requirements document can be far less
thorough and any ambiguities can be resolved during development.
The organisation of a requirements document based on an IEEE standard is depicted in
the figure below. This is a general standard that can be tailored to specific applications.
In this instance, we've expanded the standard to incorporate details regarding anticipated
system evolution. This information helps designers support future system features and also
benefits the system maintainers.

Figure: The structure of a requirements document

Naturally, the kind of software being built and the development methodology to be
employed determine what details should be included in a requirements document. If, for
example, an evolutionary method is used for a software product, many of the detailed
chapters mentioned above will be absent from the requirements document.
The main priorities will be determining the high-level, non-functional system
requirements and the user requirements. In this case, the system's designers and
programmers make their own decisions about how to satisfy the general user needs.
However, when the software is part of a major system project involving interacting
hardware and software systems, it is typically necessary to specify the requirements in
great detail.
Organisation of an SRS Document
1. Introduction
   1.1 Purpose
   1.2 Scope
   1.3 Definitions, Acronyms and Abbreviations
   1.4 References
   1.5 Overview
2. The Overall Description
   2.1 Product Perspective
       2.1.1 System Interfaces
       2.1.2 User Interfaces
       2.1.3 Hardware Interfaces
       2.1.4 Software Interfaces
       2.1.5 Communications Interfaces
       2.1.6 Memory Constraints
       2.1.7 Operations
       2.1.8 Site Adaptation Requirements
   2.2 Product Functions
   2.3 User Characteristics
   2.4 Constraints
   2.5 Operating Environment
   2.6 User Environment
   2.7 Assumptions and Dependencies
   2.8 Apportioning of Requirements
3. Specific Requirements
   3.1 External Interfaces
       (i) User Interface
       (ii) Hardware Interface
       (iii) Software Interface
       (iv) Communication Interface
   3.2 Functions
   3.3 Performance Requirements
   3.4 Logical Database Requirements
   3.5 Design Constraints
       3.5.1 Standards Compliance
   3.6 Software System Attributes
       3.6.1 Reliability
       3.6.2 Availability
       3.6.3 Security
       3.6.4 Maintainability
       3.6.5 Portability
   3.7 Organising the Specific Requirements
       3.7.1 System Mode
       3.7.2 User Class
       3.7.3 Objects
       3.7.4 Feature
       3.7.5 Stimulus
       3.7.6 Response
       3.7.7 Functional Hierarchy
   3.8 Additional Comments
4. Supporting Information
   4.1 Table of Contents and Index
   4.2 Appendixes

Uses for SRS Documents

A few significant applications for SRS documents are as follows:
●● It serves as the foundation for project managers' plans and estimates of the time, effort
and resources required.
●● It is necessary for the development team to create the product.
●● It is necessary for the testing team to create test plans based on the documented
external behaviour.
●● It is necessary for the maintenance and product support staff to comprehend the
intended functions of the software product.
●● From it, the publications team creates manuals, documents and other materials.
●● Consumers depend on it to let them know what to expect from the product.
●● It can be used by training staff to help create instructional materials for the
software product.

IEEE Standards for SRS Documents


The IEEE Standards Association (IEEE-SA) Standards Board's Standards Coordinating
Committees and IEEE Societies work together to produce IEEE standards publications.
Committee members give their services voluntarily and without pay, and they do not have
to be Institute members. The standards created by the IEEE reflect the vast subject-matter
expertise within the organisation as well as the declared interest of outside parties in
contributing to a standard's development. Use of an IEEE Standard is entirely voluntary.
The existence of an IEEE Standard does not preclude the production, testing,
measurement, acquisition, marketing and provision of other products and services that
fall within its scope.
Additionally, the views expressed at the time a standard is approved and published are
open to change due to advances in the field and feedback from users of the standard. All
IEEE Standards are reviewed and revised at least once every five years. It is therefore
reasonable to conclude that a document older than five years that has not been reaffirmed
does not accurately reflect the present state of the art, even though its contents are
still of value.

Users are advised to make sure they have the most recent version of any IEEE
standard. Revisions to IEEE Standards are open to all interested parties, regardless of
IEEE membership status. Requests for document modification should take the form of a
proposed text change together with the necessary justification.

Benefits of SRS

A good SRS should offer a number of distinct advantages to suppliers, customers and
other stakeholders, including the following:
1. Create a foundation for agreement between suppliers and customers on what the
   software product will do. The comprehensive description in the SRS of the functions to
   be performed by the software helps potential users decide whether it fits their needs or
   how it must be modified to do so.
2. Reduce the development effort. By forcing the various interested parties in the
   customer's organisation to examine all of the requirements carefully before design
   begins, the SRS reduces later redesign, recoding and retesting. A thorough review of
   the SRS requirements can reveal errors, misinterpretations and inconsistencies early in
   the development cycle, when they are easier to correct.
3. Provide a foundation for cost estimation and scheduling. The description in the SRS of
   the product to be developed gives a realistic basis for estimating project costs and for
   obtaining approval for bids or price estimates.
4. Provide a baseline for verification and validation. A strong SRS helps organisations
   create their validation and verification plans far more efficiently. As part of the
   development contract, the SRS offers a standard against which compliance can be
   evaluated.
5. Facilitate transfer. The SRS makes it simpler to transfer the software product to new
   users or machines. Customers thus find it easier to move the software to other parts of
   their organisation and suppliers find it easier to transfer it to new customers.
6. Provide a foundation for enhancement. Because the SRS discusses the product rather
   than the project that created it, it serves as a basis for later enhancement of the
   finished product. Although it may need to be modified, the SRS offers a framework for
   ongoing product evaluation.
2.2 Coding
Coding is a crucial phase in the software development life cycle (SDLC) because it
transforms abstract designs and specifications into a functional software product. Clean,
well-structured and maintainable code is essential for the software's long-term success,
scalability and ease of maintenance.
2.2.1 Top-Down and Bottom-Up Approaches


In the top-down method, a larger problem or module is split up into smaller modules. The
bottom-up technique, on the other hand, solves smaller problems first and then integrates
them to solve a larger problem.

Top-Down Approach: What Is It?


The top-down approach is an algorithm design methodology that involves segmenting a
larger problem into smaller components; it therefore employs decomposition. Most
structured programming languages, including C, COBOL and FORTRAN, adopt this method.

A disadvantage of the top-down method is that it may produce redundant code, because
each component is developed independently. This technique also results in less interaction
and communication across the modules.

The platform and programming language used to implement an algorithm with a top-down
method may vary. The top-down method is typically applied to code debugging and module
documentation.

Bottom-Up Approach: What Is It?
A bottom-up approach involves solving minor problems first, then integrating those
solutions to solve a larger one. As a result, it employs the composition strategy.
It necessitates intensive communication between the parts. It is typically used with
object-oriented programming languages such as C++, Java and Python, and it also makes
use of data encapsulation and data hiding. The bottom-up method is typically employed
when testing modules.

Top-Down Approach | Bottom-Up Approach
With this method, we concentrate on segmenting the problem into manageable chunks. | With this method, we solve smaller problems first and then integrate them to complete the solution.
Mostly utilised by structured programming languages such as C, COBOL and FORTRAN. | Mostly utilised by object-oriented programming languages such as Python, C++ and C#.
Each part is programmed separately and may therefore contain redundancy. | Redundancy is reduced through data encapsulation and data hiding.
There is less communication between the components. | Modules must communicate with one another.
It is employed in module documentation, debugging, etc. | It is essentially employed in testing.
The top-down method works by decomposition. | The bottom-up technique works by composition.
It can be challenging to pinpoint the system's primary functionality at the outset. | Sometimes a complete program cannot be built from the pieces already developed.
Implementation details may vary. | Programs are not normally composed in this manner.

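To make the contrast concrete, here is a hedged sketch using an invented payroll problem; the function names and rates are illustrative only, not drawn from the text:

```python
# Illustrative sketch: the same invented payroll problem developed both ways.

# Top-down: write the top-level routine first, decomposing the problem into
# sub-functions whose details are filled in by later refinement steps.
def net_pay(gross):
    return gross - total_deductions(gross)

def total_deductions(gross):            # refined in the next step
    return income_tax(gross) + insurance_premium(gross)

def income_tax(gross):                  # lowest-level refinements
    return gross * 0.20

def insurance_premium(gross):
    return 150.0

# Bottom-up: the same program would be produced by first writing and testing
# income_tax() and insurance_premium(), then composing them into
# total_deductions() and finally into net_pay().
print(net_pay(2000.0))  # 2000 - (400 + 150) = 1450.0
```

The finished code is identical; what differs is the order of development: top-down defers details, while bottom-up builds and verifies the small pieces first.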
2.2.2 Structured and Pair Programming Essentials


Structured Programming
)A

Edsgar Dijkstra and his associates’ work provided the basis of component-level design
in the early 1960s and strengthened it. Dijkstra and others suggested using a set of limited
logical constructs in the late 1960s, from which any program might be constructed. A focus
of the constructions was “maintenance of functional domain.” In other words, the reader
could more readily follow the procedural flow because each construct had a known logical
(c

structure, was entered at the top and was exited at the bottom.
The constructs are sequence, condition and repetition. Sequence implements the
processing steps necessary for the formulation of any procedure. Condition provides the
ability for selective processing based on a logical occurrence, and repetition permits
looping. These three constructs are the essential building blocks of structured
programming, a crucial component-level design technique.
The purpose of the structured constructs was to restrict software procedural design
to a limited set of predictable activities. According to complexity metrics, using structured
constructs improves readability, testability and maintainability by reducing program
complexity.
The application of a small number of logical constructs likewise facilitates a process of
human comprehension known as chunking. To understand this process, consider how you
are reading this page: instead of reading individual letters, you recognise patterns of
words or phrases. Rather than reading a design or code line by line, readers can use the
structured constructs, which are logical chunks, to identify the procedural parts of a
module.
Understanding is improved when easily recognised logical patterns are found. Any
program, regardless of application domain or level of technical sophistication, may be
designed and implemented using only these three structured constructs. It should be
noted, nonetheless, that rigidly adhering to these constructs alone can occasionally lead
to practical difficulties.
The term "structured programming" describes an overall approach to creating high-quality
programs. A well-designed program possesses the following qualities:
1. It should perform all the desired actions.
2. It should be reliable, i.e., perform the required actions within acceptable margins of error.
3. It should be clear, i.e., easy to read and understand.
4. It should be easy to modify.
5. It should be implemented within the specified schedule and budget.
Structured programs have the single-entry, single-exit characteristic. This feature helps
to reduce the number of control flow paths. If the flow of control follows unpredictable
paths, the program will be difficult to read, comprehend, debug and maintain.
A program has both a static and a dynamic structure. The static structure is the program's
textual structure, typically just a linear arrangement of the statements in the program. The
dynamic structure is the sequence of statements executed when the program runs.
Both structures are made up of the same statements. The only distinction is that the
sequence of statements is fixed in the static structure but not in the dynamic structure:
the dynamic sequence of statements may vary from execution to execution. A program's
static structure is simple to comprehend, whereas its dynamic structure becomes apparent
only while the program is being executed.

Objectives of Structured Programming


Ensuring consistency between the static and dynamic structures is the aim of structured
programming. Put another way, the goal of structured programming is to write programs
so that the statements executed when the program runs correspond to the statements
written in the program text. Just as the statements in a program text are organised in a
linear fashion, the goal of structured programming is to create programs whose control
flow during execution is linearised and follows the linear organisation of the program text.


It is obvious that no meaningful program can be written as a sequence of straightforward
statements without branching or repetition (which also requires branching). So how will
the goal of linearising the flow of control be accomplished? Through the application of
structured constructs. In structured programming, a statement is a structured statement
as opposed to a simple assignment statement.

Principles of Structured Programming
Stepwise refinement and three structured control structures serve as the foundation
for all structured program design techniques. The goal of program design is to convert the
required program function, expressed in the program specification, into a collection of
instructions that can be readily translated into the chosen programming language. In
stepwise refinement, the stated program function is divided into subsidiary functions in
ever more detailed steps until the lowest-level functions can be expressed in the
programming language.
The second principle of structured program design is that any program can be developed
using just three structured control constructs. Figures (a, b, c, d) below illustrate the
sequence, iteration and selection constructs.


Figure: Basics of Structured Programming: Selection, Iterations and Sequence

A program element with only one part that occurs zero or more times is called an
iteration. The subcomponent runs a certain number of times, depending on when the
condition turns false; this may happen on initial entry or after multiple iterations.
There are two types of repetition structures shown above in Figures c and d.
Nesting, which combines the three fundamental constructs previously covered, is the
fourth type of construct. The figure below illustrates the nested architecture. As we nest
the constructs, the layers of the procedural design change.


Figure: Nesting Construct

Key Features of Structured Programming

A structured program's defining characteristics are its single entry and single exit. As a
result, we can examine the program statement by statement, in order. The most frequently
used single-entry, single-exit statements are:
Selection: if customer type is X
           then price is $100
           else price is $90
Iteration: while customer type is X do use price formula P1.
           Repeat P1 until type is X.
Sequencing: Task 1, Task 2, Task 3.

When these statements are used throughout, a linear flow is produced in the program. If
good constructs are fundamentally about readability and verification, then a structured
program is the means to that end.
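The three constructs, using the chapter's invented customer-pricing example, can be sketched in Python as follows (the customer types and prices are illustrative):

```python
# Illustrative sketch of the structured constructs: sequence, selection,
# iteration and nesting, applied to the invented customer-pricing example.

def order_total(customers):
    total = 0                     # sequence: steps executed one after another
    for customer in customers:    # iteration: single entry, single exit loop
        if customer == "X":       # selection: condition chooses one branch
            price = 100
        else:
            price = 90
        total += price            # constructs nested inside the loop body
    return total

print(order_total(["X", "Y", "X"]))  # 100 + 90 + 100 = 290
```

Each construct is entered at the top and exited at the bottom, so the control flow can be read statement by statement, in order.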

Advantages of Structured Programming


Structured programming has the benefit of making it easy to incorporate logic into the
program in an organised manner. Because complex logic can be handled with ease, the
program is easy to grasp for the user, reader and programmer. Another notable benefit is
that structured programs can easily be verified, reviewed and tested in an organised
manner; if mistakes are discovered, they are simple to find and fix.

Pair Programming
Under another innovative XP practice, pair programming, programmers create software in
pairs: they literally sit together at a shared workstation to develop the software. The same
pairs do not, however, always program together. Instead, pairs are formed dynamically so
that everyone on the team collaborates with everyone else during development.
There are many benefits of using pair programming:


It supports the idea of shared accountability and ownership of the system. This embodies
Weinberg's concept of egoless programming, in which the team owns the software and no
individual is blamed for errors in the code. Rather, the team works together to resolve
these problems.
It functions as an informal review process, because at least two individuals review every
line of code. Code inspections and reviews successfully find a significant portion of
software faults. They do, however, take a lot of time to set up and usually cause delays in
the development process. Although pair programming is a less formal procedure that
probably does not detect as many faults as code inspections, it is a much less expensive
inspection method than formal program inspections.
It facilitates the process of software improvement known as refactoring. Because
refactoring requires effort now for long-term gain, it is challenging to practise in a typical
development environment, where refactoring practitioners may be perceived as less
productive than those who simply keep writing code. When pair programming and
collective ownership are applied, others gain from the refactoring immediately, so they are
more likely to support the process.
Pair programming may seem a less efficient method than individual programming: two
developers working together would apparently generate half as much code in a given time
as two people working alone. Numerous studies comparing the productivity of paired and
individual programmers have yielded inconsistent findings. Using student volunteers,
Williams and her colleagues found that the efficiency of pair programming appears
comparable to that of two people working independently. It is suggested that pairs discuss
the program before developing it, leading to fewer false starts and less rework. In
addition, the informal inspection means fewer mistakes are made, so fixing faults found
during testing takes less time.
Studies using more experienced programmers, however, were unable to reproduce these
findings. They found a considerable loss of productivity compared with two programmers
working alone. Although there were some quality improvements, these were insufficient to
offset the overhead of pair programming. Nevertheless, the knowledge exchanged during
pair programming is crucial, since it lowers the project's overall risk when team members
leave. This on its own may justify pair programming.

2.3 Case Study



2.3.1 Case Study: Real-life Implementation


Introduction

Meticulously documented software requirement implementation case studies can provide
insightful information on the real-world application of requirement collection, analysis,
documentation and management. This case study examines the creation of a "Hospital
Management System" (HMS), which aims to enhance patient care, expedite hospital
operations and manage resources effectively.

Project Overview

Project Name: Hospital Management System (HMS)


Client: Green Valley Hospital
Objective: To create an integrated system that handles staff administration, inventory
management, billing, appointment scheduling and patient data.
Requirement Gathering

Stakeholders Involved:

●● Hospital Administrators
●● Doctors and Nurses

●● IT Staff
●● Patients

●● Billing Department
●● Pharmacy

Methods Used:

Interviews:

●● Interviewed hospital officials in depth to learn about their management needs.
●● Gathered requirements for patient care and record management by interacting with
physicians and nurses.

Surveys and Questionnaires:
●● Distributed questionnaires to employees to get feedback on the current system's
inadequacies and requested features.
Workshops:
●● Convened cross-departmental teams for workshops to go over possible features and
get feedback on the design of the system.

Observation:
●● Observed the hospital's regular operations to pinpoint problems and inefficiencies that
the new system could fix.

Requirement Analysis
An analysis of the gathered data was done to determine the essential needs. According to
the analysis, an all-encompassing system that could integrate several functionalities into a
single platform was required.

Key Functional Requirements:



Patient Management:

●● Patient registration and record keeping



●● Appointment scheduling
●● Electronic health records (EHR) management
●● Integration with lab and diagnostic reports

Staff Management:

●● Staff scheduling and payroll management


●● Performance tracking
●● Communication tools for staff coordination


Billing and Insurance:


●● Automated billing and invoicing
●● Insurance claim processing

●● Payment tracking and financial reporting

Inventory Management:
●● Medical and non-medical inventory tracking

●● Automated restocking alerts
●● Supplier management

Pharmacy Management:

●● Prescription management
●● Drug inventory control
●● Interaction alerts and patient compliance tracking

Key Non-Functional Requirements:

Performance:

●● Up to 1,000 concurrent users should not cause performance issues for the system.
●● Critical procedures (such as patient search and record retrieval) should respond in
less than two seconds.
Security:
●● To safeguard patient data, use role-based access control (RBAC).
●● Make sure that sensitive data is encrypted while it’s in transit and at rest.

●● Regular security audits and compliance with HIPAA and other healthcare regulations.

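As a minimal illustration of the RBAC requirement, access checks might look like the following sketch; the role and permission names are assumptions, not from the case study:

```python
# Illustrative RBAC sketch for the HMS; roles and permissions are hypothetical.

ROLE_PERMISSIONS = {
    "receptionist": {"register_patient", "schedule_appointment"},
    "doctor":       {"view_record", "update_record"},
    "billing":      {"create_invoice", "process_claim"},
}

def is_allowed(role, permission):
    """Grant access only if the role carries the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("receptionist", "register_patient"))  # True
print(is_allowed("receptionist", "update_record"))     # False
```

The central table keeps the policy in one place, so adding a role or permission does not require changing the checking logic.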
Usability:

●● Simple to use interface with intuitive navigation.


●● Thorough instruction and resources for users.
●● Suitable for users with special needs.

Scalability:
●● The system ought to be expandable to handle future additions of medical services and
facilities.
m

Reliability:
●● Using strong disaster recovery mechanisms, guarantee 99.9% uptime.
●● Continual backups and failover systems.

Requirement Documentation
The requirements were documented in a Software Requirements Specification (SRS)
document containing comprehensive descriptions, use cases and diagrams.

Key Sections of the SRS:


●● Introduction:
   ◦◦ Purpose
   ◦◦ Scope
   ◦◦ Definitions and acronyms
   ◦◦ References
●● Overall Description:
   ◦◦ Product perspective
   ◦◦ Product functions
   ◦◦ User characteristics
   ◦◦ Constraints
   ◦◦ Assumptions and dependencies
●● Specific Requirements:
   ◦◦ Functional requirements (detailed descriptions, use cases)
   ◦◦ Non-functional requirements (performance, security, usability, etc.)
   ◦◦ External interface requirements
●● Use Cases:
   ◦◦ Detailed scenarios describing user interactions with the system, including main
      success scenarios and alternative flows.

Use Case Example: Patient Registration


Use Case Name: Patient Registration
Actors:
●● Receptionist
●● Patient

Preconditions:
●● The receptionist is logged into the system.

Postconditions:
●● The patient’s information is stored in the database.

●● A unique patient ID is generated.

Main Success Scenario:
●● The patient gives the receptionist their personal information.
●● The receptionist enters the information into the system.
●● The system validates the data.
●● The system generates a unique patient ID.
●● The system stores the patient data in the database.
●● The system displays a confirmation notification.

Extensions (Alternative Flows):


●● If the information is incomplete, the system prompts the receptionist to enter the
missing details.
●● If the patient is already registered, the system retrieves the existing record and alerts
the receptionist.

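The main success scenario and its extensions can be sketched as a small function; the in-memory store and field names below are illustrative assumptions, not part of the case study:

```python
# Hedged sketch of the patient-registration use case.
import itertools

patients = {}                 # hypothetical in-memory "database": id -> record
_ids = itertools.count(1)

def register_patient(name, date_of_birth):
    # Extension: incomplete information -> prompt for the missing details.
    if not name or not date_of_birth:
        return {"status": "incomplete", "message": "Enter missing details"}
    # Extension: already registered -> retrieve the existing record.
    for pid, rec in patients.items():
        if rec == (name, date_of_birth):
            return {"status": "already_registered", "patient_id": pid}
    # Main success scenario: validate, generate a unique ID, store, confirm.
    pid = next(_ids)
    patients[pid] = (name, date_of_birth)
    return {"status": "confirmed", "patient_id": pid}

print(register_patient("A. Bell", "1990-01-01"))  # confirmed, new unique ID
print(register_patient("A. Bell", "1990-01-01"))  # already_registered
```

Note how each alternative flow maps to an early return, keeping the main success scenario as the straight-line path through the function.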

Implementation and Management

Requirement Management:

●● To keep traceability and track changes, a requirement management solution was
utilised.

●● Stakeholders met on a regular basis to discuss concerns and assess the status of the
project.

Implementation:

●● The system was built using an agile methodology, allowing iterative development and
frequent feedback.

●● The system was implemented in phases, with core functions such as patient
management and billing delivered first.

Testing:

●● Unit tests, integration tests and user acceptance testing (UAT) were all conducted
thoroughly on every module.
●● Performance testing made sure the system satisfied the necessary standards.

Deployment:
●● To reduce interference with hospital activities, the technology was implemented
gradually.
●● Staff members underwent training sessions to guarantee a seamless transition.


Outcome
Hospital operations have improved significantly as a result of the HMS's deployment.
Principal advantages included:

●● Efficiency: Streamlined patient enrollment, invoicing and inventory control procedures.
●● Accuracy: A decrease in billing and patient record errors.
●● Accessibility: Seamless lab report integration and easily accessible patient information.
●● Security: Strong security measures improve the protection of private patient
information.

Conclusion
This case study demonstrates a thorough approach to requirement documentation and
implementation in the creation of a hospital management system. By carefully
understanding and documenting requirements, involving stakeholders and managing the
development process effectively, the project successfully delivered a system that met the
needs of Green Valley Hospital. This highlights the crucial role of thorough and organised
requirement management in software development.

Summary
●● Software Requirement Engineering is crucial for gathering, refining, documenting,
validating and managing requirements to ensure that the software meets stakeholder
needs. It lays the foundation for the software development process, providing clarity
and direction.
●● Coding translates these requirements into a working software product, focusing
on writing high-quality, maintainable and efficient code. Both SRE and coding are
fundamental to developing reliable, effective and user-centered software solutions.
Amity University Online
Software Engineering and Modeling 87
●● Requirement Determination is a critical phase in software development where
the needs and expectations of stakeholders are identified and documented. This
process ensures that the software product will meet the desired goals and functions
as expected. There are various methods to gather these requirements, which can be
broadly categorised into traditional and modern methods.

●● Traditional methods, such as interviews, questionnaires and document analysis,
provide a solid foundation for gathering requirements but can be time-consuming and
may miss unarticulated needs.

●● Modern methods, such as agile user stories, prototyping and contextual inquiry, offer
more dynamic and user-centered approaches, promoting iterative development and
continuous feedback. The choice of method often depends on the specific needs
of the project and the available resources, with many projects benefiting from a
combination of both traditional and modern techniques to ensure comprehensive and
accurate requirement gathering.
●● Process and Data Modeling Techniques are essential in the development of
information systems as they help in understanding, documenting and communicating
the requirements, structure and behavior of the system. These techniques provide a
visual representation that aids in analysing and designing complex systems, ensuring
that all stakeholders have a clear understanding of the processes and data involved.

Glossary
●● CI: Continuous Integration
●● CD: Continuous Delivery
●● ATM: Automated Teller Machines
●● DFD: Data Flow Diagram
●● ERD: Entity-Relationship Diagram


●● SRS: Software Requirement Specification

●● SDLC: Software Development Life Cycle


●● HMS: Hospital Management System

Check Your Understanding



1. What is the primary focus of software requirement engineering?


a) Converting software design specifications into executable code
b) Specifying what needs to be developed

c) Designing the user interface for the software system


d) Testing the software to ensure it meets user requirements
2. Which of the following best describes the purpose of coding in the software development
process?
a) Gathering requirements from users and stakeholders
b) Turning theoretical ideas into functional software products
c) Validating the accuracy and completeness of requirements

d) Prioritising and documenting user stories and use cases


3. In Agile methodologies, what is the primary unit of delivery?
a) A fully completed project

b) Detailed design documents


c) Small, incremental releases of working software
d) A comprehensive test plan

4. Which of the following is a core principle of Agile methodologies?

a) Comprehensive documentation over working software
b) Following a strict sequential development process
c) Customer collaboration over contract negotiation

d) Emphasising processes and tools over individuals and interactions
5. Which of the following statements accurately distinguishes between the top-down and
bottom-up approaches in software design?

a) The top-down approach focuses on creating individual components first, while the
bottom-up approach begins with a high-level overview and breaks it down into
smaller parts.

b) The top-down approach starts with the overall system and progressively breaks
it down into smaller subsystems, while the bottom-up approach begins with
designing the fundamental components and integrates them into a complete
system.
c) The top-down approach and bottom-up approach are identical, both starting with
the smallest components and building up to the complete system.
d) The top-down approach emphasises the design of individual components without
considering the overall system, while the bottom-up approach focuses on the entire
system without addressing individual components.

Exercise

1. Define software requirement engineering.


2. Explain traditional and modern methods for requirement determination.
3. Define the top-down approach and the bottom-up approach.

4. What are the essentials of structured and pair programming?

Learning Activities

1. Develop an ERD for a student management system that includes entities like Students,
Courses and Enrollments.
2. Design a DFD for an inventory management system that includes processes for Inventory
Control, Order Management and Restocking.
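As a starting point for Activity 1, the Students–Courses–Enrollments ERD can be expressed as relational tables, with Enrollments as the associative entity resolving the many-to-many relationship. All table and column names below are illustrative assumptions:

```python
import sqlite3

# Sketch of the ERD as SQL DDL, using an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Students (
    student_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL
);
CREATE TABLE Courses (
    course_id INTEGER PRIMARY KEY,
    title     TEXT NOT NULL
);
CREATE TABLE Enrollments (
    student_id INTEGER NOT NULL REFERENCES Students(student_id),
    course_id  INTEGER NOT NULL REFERENCES Courses(course_id),
    grade      TEXT,
    PRIMARY KEY (student_id, course_id)  -- one enrollment per student-course pair
);
""")

# Minimal sample data to exercise the relationships.
conn.execute("INSERT INTO Students VALUES (1, 'Asha')")
conn.execute("INSERT INTO Courses VALUES (101, 'Software Engineering')")
conn.execute("INSERT INTO Enrollments VALUES (1, 101, NULL)")

rows = conn.execute(
    "SELECT s.name, c.title FROM Enrollments e "
    "JOIN Students s ON s.student_id = e.student_id "
    "JOIN Courses c ON c.course_id = e.course_id").fetchall()
print(rows)  # [('Asha', 'Software Engineering')]
```

The diagram itself would show Students and Courses as entities and Enrollments as the relationship carrying the grade attribute.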

Check Your Understanding - Answers


1. b) 2. b) 3. c) 4. c) 5. b)

Further Readings and Bibliography


1. Head First Software Development by Dan Pilone and Russ Miles

2. The Art of Software Testing by Glenford J. Myers, Corey Sandler and Tom Badgett
3. Software Requirements by Karl Wiegers and Joy Beatty
4. Requirements Engineering: Fundamentals, Principles and Techniques by Klaus
Pohl



Module - III: Software Design
Learning Objectives

At the end of this module, you will be able to:

●● Define software design
●● Understand goals and the software design process

●● Analyse methodologies for structured design
●● Describe module coupling and cohesion
●● Understand different types of coupling and cohesion

Introduction
The process of developing a thorough plan for software systems to satisfy certain
needs and objectives is known as software design. It entails choosing the right architecture,
designing each component separately, building user interfaces and formulating algorithms
to guarantee the program runs consistently, is readily maintained and scales well.
User needs and system requirements are transformed into a logical and well-structured
software solution through the use of an organised method in software design.
For modularity, flexibility and reusability, design patterns, principles and techniques are
frequently used in this process. Effective software design is essential to the successful
development and implementation of high-quality software products because it follows best
practices and standards.

3.1 Software Design



The process of conceiving and drawing up a blueprint or plan for software systems
is known as software design. It entails making crucial choices regarding the software’s
behaviour, architecture and structure to ensure it satisfies predetermined needs
and goals. Software design is a multifaceted field that includes designing algorithms,
data structures, interfaces and system components while taking usability, maintainability,
scalability and dependability into account. Translating user wants and system requirements
into a logical, well-organised software solution that can be successfully deployed and

maintained is the aim of software design. For the software architecture to be modular,
flexible and reusable, design patterns, concepts and procedures are frequently used. The
successful creation and implementation of superior software products is contingent upon
the implementation of effective software design.




3.1.1 Overview of Software Design


Software design is the process of breaking down a system into its component
elements. A system’s effective implementation and evolution depend on its design. To

generate high-quality designs, this decomposition follows several guiding concepts. The
final software system and its design representation are not visually connected, in contrast

to more traditional design domains. This makes it more difficult to communicate design
knowledge and emphasises how crucial it is to use accurate design representations.
As stated by Yourdon and Coad. The process of taking a definition of externally

nl
observable behaviour and adding details—such as task management, data management
and human interaction—needed for the actual implementation of a computer system is
known as software design.

Design Objectives/Properties
The following are some of the many desired qualities or goals of software design:

1. Correctness. If a system is created precisely in accordance with the design and meets
its requirements, then the system is correctly designed. It is obvious that producing
accurate designs is the aim of the design phase. Since there might be multiple correct
designs, accuracy is not the only criterion used throughout the design phase. Making a
system design is not the only objective of the design process. Rather, the objective is to
identify the optimal design feasible given the constraints imposed by the specifications
and the operational physical and social context of the system.
2. Verifiability. The design must be accurate and its accuracy must be confirmed. Verifiability
is the degree to which the accuracy of the design can be independently verified. Applying
different verification approaches to design should be simple.
3. Completeness. To be considered complete, every element of the design must be
validated, meaning that all pertinent modules, data structures, external interfaces and
module linkages must be specified.
4. Traceability. One crucial feature that may be verified during design is traceability. The
entire design element must be able to be linked back to the specifications.


5. Efficiency. Any system’s efficiency is determined by how well it uses its limited resources.
Cost considerations force efficiency to become a need. It is preferable for resources to
be used effectively if they are costly and limited. Processor time and memory are the two
resources in computer systems that are most frequently taken into account for efficiency.
A system that is efficient uses less memory and processor time.
6. Simplicity. Perhaps the most crucial need for software systems’ quality is simplicity.
Software system maintenance is typically highly costly. One of the most crucial elements
influencing the system’s maintainability is its design.


A key phase in the software development lifecycle is software design, which focuses
on specifying a system’s architecture, parts, interfaces and other features in order to meet
predetermined criteria. It serves as a guide for programmers, outlining the architecture
and functionality of the program to guarantee its stability, scalability and maintainability.
Both high-level design decisions—like choosing suitable architectural styles and design
patterns—and low-level decisions—like choosing algorithms, data structures and
interfaces—are made throughout this process.


Performance optimisation, improved user experience and ease of future extensions
and adjustments are the goals of good software design. It necessitates striking a balance
between theoretical concepts and real-world limitations, frequently requiring trade-offs
between competing objectives like speed and memory economy. To express and
convey the design, tools and methods including flowcharts, UML diagrams and design
documentation are frequently employed. Software design helps to create high-quality
software products by lowering complexity, minimising risks and aligning the development
process with business objectives through the provision of a clear and structured plan.

Microservice Architecture
In software development, developers follow different approaches, technologies,
architectural patterns and best practices to build high-quality software systems.
Microservice architecture is one of the popular architectural styles followed in industry; you
might have heard the name, and you may even have worked with it.

In a microservice architecture, we break down an application into smaller services.
Each of these services fulfills a specific purpose or meets a specific business need, for
example customer payment management, sending emails and notifications. In this section,
we will discuss the microservice architecture in detail, the benefits of using it and how to
get started with it.
ity
it, and how to start with the microservice architecture.

What is Microservices Architecture?


In simple words, it’s a method of software development where we break down an
application into small, independent and loosely coupled services. Each service has a
separate codebase that is developed, deployed and maintained by a small team of
developers.
These services are not dependent on each other, so if a team needs to update an
existing service, it can be done without rebuilding and redeploying the entire application.
Using well-defined APIs, these services can communicate with each other; the internal
implementation of one service is not exposed to the others.
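The idea of a service that hides its internals behind a well-defined API can be sketched in a few lines. Below, a hypothetical "patient" service publishes one HTTP endpoint and a consumer (say, a billing service) reads it over the network. The route, the data and the in-process setup are all illustrative assumptions, not a production deployment:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Internal state of the hypothetical patient service: other services
# never touch this dictionary directly, only the HTTP API below.
_PATIENTS = {"p1": {"id": "p1", "name": "A. Sharma"}}

class PatientAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        record = _PATIENTS.get(self.path.rstrip("/").split("/")[-1])
        self.send_response(200 if record else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(record or {"error": "not found"}).encode())

    def log_message(self, format, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), PatientAPI)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A separate "billing" service would call the API, not the dictionary:
url = f"http://127.0.0.1:{server.server_address[1]}/patients/p1"
with urlopen(url) as response:
    patient = json.loads(response.read())
print(patient["name"])
server.shutdown()
```

In a real system each service would run as its own process or container, but the boundary is the same: data crosses it only as an API payload.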

Advantages/Benefits of Microservices
1. Independent Development and Deployment: Each of the services in microservice is
deployed independently. As we have discussed that a small team takes the responsibility
U

for these services to build, test and deploy them. This enables faster development. Also,
it becomes easy to fix the bug and release the features. You can easily update the
service and roll it back if something goes wrong.
ity

This is one of the best things about microservices, this feature is not available in many
traditional applications. If a bug is found in one process or part of the application, then to
resolve it, you will have to block the entire release process.
2. Small Focused Team: To work on a single service, a small team is targeted. Code
m

becomes easy to understand and for new members, it becomes easy to join the team.
There is no need to spend weeks figuring out how complex monoliths work. In a large
team, it becomes difficult to have proper communication that results in a lack of flexibility.
)A

Management time gets increased and agility diminishes.


3. Small CodeBase: In monolithic application code, dependencies become tangled over
time. If a developer needs to add some new features, they need to make changes at
multiple places. In a microservice architecture, a codebase or database is not shared.
Dependency gets minimized, and it becomes easier to add new features. Also, it
(c

becomes easier to understand the code and add new features.


4. Mix of Technologies: In a microservices architecture, you’re free to choose any technology
that suits best for your service. You can mix up multiple technologies and use them for
your service.

5. Fault Isolation: In a microservice architecture, you can easily handle critical points of
failure. If a service goes down, the functioning of the whole application won’t stop, as
Notes long as any upstream microservices are designed to handle the failure.

e
6. Scalability: You break down the application into smaller services and that makes them
easy to scale independently. You don’t need to scale the entire application. Using an

in
orchestrator such as Kubernetes or service fabric, you can pack the higher density of
services onto a single host. Resources are utilized more efficiently.
7. Data Isolation: Schema update is easier to perform because only a single microservice

nl
gets affected. Schema update is difficult in a monolithic application. Various parts of the
application may touch the same data. Updating the data can make schema risky.

O
Challenges/ Disadvantages of Microservices
1. Complexity: Services are simpler in microservices, but the whole system is more
complicated. You need to take care of all the services and the database, and you need
to deploy each of the services independently.

ity
2. Testing: You need a different approach to write small services that rely on other
dependent services. This is different from traditional monolithic or layered applications.
Writing tests will be more complex if services will be dependent on each other. To unit
test, a dependent service mock must be used.
3.
rs
Data Integrity: Microservice support distributed database architecture. Each service in
microservice is responsible for its own data persistence. Data integrity or data consistency
can be a challenge in these situations. You may have to update multiple business
ve
functions of the application for some business transitions. For different services, you
may have to update multiple databases. You will have to set up eventual consistency of
data with more complexity.
4. Network Latency: Many small services in Microservice require more interservice
ni

communication. If services in microservice will have a long chain of dependencies then


additional latency can become a problem for you. You will have to design the APIs
carefully.
U

5. Versioning: If a service needs to get updated then it shouldn’t break the service that
depends on it. If multiple services need to be updated at the same time then without
careful design you may face some problems with backward or forward compatibility.
ity

3.1.2 Goals and the Software Design Process


Regardless of the software process model being employed, software design forms
the technical foundation of software engineering. Software design is the final software

engineering step in the modelling process, starting after software requirements have been
examined and modelled. It prepares the ground for construction (code creation and testing).
The information needed to build the four design models needed for a comprehensive

specification of design is provided by each component of the requirements model. The


figure below shows the information flow that occurs during software design. The design
work is fed by the requirements model, which takes the form of scenario-based, class-
based, flow-orientedandbehavioural features. Design generates a data/class design, an
architectural design, an interface design and a component design using design notation
(c

and design.
Class models are converted into design class realisations and the necessary data
structures needed to build the software via the data/class design. The data design action
is based on the objects and relationships established in the CRC diagram as well as
the detailed data content represented by class attributes and other notation. Software
architecture design may involve some aspects of class design. As each software
component is built, the class design becomes more comprehensive. Notes

e
The architectural design establishes the relationships between the software’s main
structural components, the architectural styles and design patterns that may be applied to

in
meet the system requirements and the limitations that have an impact on how architecture
can be executed. The requirements model serves as the basis for the architectural design
representation, which is the computer-based system’s framework.


Figure: Translating the requirements model into the design model

The architectural design establishes the relationships between the software’s main
structural components, the architectural styles and design patterns that can be applied
U

to meet the system requirements and the limitations that can affect how architecture is
implemented. The requirements model serves as the basis for the architectural design
representation, which is the computer-based system’s framework.
ity

The structural components of the software architecture are transformed into a


procedural description of software components via the component-level design. The
component design is based on data from the flow models, behavioural models and class-
based models.
m

Throughout the design process, choices are made that will eventually impact the
software development process’s success and, just as crucially, the product’s maintainability.
However, why is design so crucial?
)A

One word sums up the significance of software design: quality. In software engineering,
design is where quality is nurtured. You can evaluate software representations through
design and determine their level of quality. Stakeholder requirements can only be accurately
translated into a final software product or system through design. All subsequent software
engineering and support tasks are built upon the basis of software design. Without design,
(c

you run the danger of creating an unstable system that won’t work properly when minor
adjustments are made, could be challenging to test and whose quality won’t be able to be
determined until much later in the software development process, when money has already
been spent and time is at a premium.

Design Process
Requirements are converted into a “blueprint” for software construction through the
iterative process of software design. The blueprint presents a comprehensive overview

e
of software at first. In other words, the design is expressed at a high enough degree of
abstraction to be linked directly to the particular system goal and to more specific data,

in
functional requirementsandbehavioural needs. Subsequent refining results in design
representations at progressively lower levels of abstraction as design iterations happen.
Although there is a more subtle correlation, these can still be linked to requirements.

nl
Software Quality Guidelines and Attributes
A number of technical reviews are conducted during the design process to evaluate

O
the quality of the evolving design. McGlaughlin offers three qualities that can be used to
determine what makes a successful design:
●● All of the implicit requirements that stakeholders have requested be accommodated

ity
by the design, in addition to implementing all of the stated requirements found in the
requirements model.
●● For people who write code, test and then maintain the program, the design has to be a
clear, comprehensible manual.
●●

rs
From an implementation standpoint, the design should give a comprehensive view of
the software, addressing the functional, behavioural and data domains.
Guidelines for Quality. Technical standards for good design must be established by
ve
you and other members of the software team in order to assess the calibre of a design
representation. Design principles that function as standards for software quality. For now,
take into account the following recommendations:
A design should have an architecture that can be executed in an evolutionary manner,
ni

which makes testing and implementation easier, is made up of components with good
design qualities and was produced using recognisable architectural styles or patterns.
U

●● A modular design is one in which the program is logically divided into components or
subsystems.
●● Different representations of the architecture, components, data and interfaces should
all be present in a design.
ity

●● A design should result in data structures that are derived from identifiable data
patterns and suitable for the classes that will be implemented.
●● Components produced by a design should have distinct functional properties.
m

●● Interfaces created by a design should result in less complicated linkages between


parts and the outside world.
●● A dependable procedure that is informed by data gathered from the analysis of
)A

software requirements should be used to develop a design.


●● A notation should be used to represent a design in a way that clearly conveys its
meaning.
The achievement of these design criteria is not accidental. They are accomplished by
(c

putting basic design ideas, a methodical approach and extensive evaluation to use.
Quality Attributes. The software quality attributes that Hewlett-Packard established are
known as FURPS, or functionality, usability, reliability, performance and supportability. Aims
for every software design are represented by the FURPS quality attributes:



●● The program’s feature set and capabilities, the generality of the functions it offers
and the system’s overall security are all taken into consideration when assessing
functionality. Notes

e
●● Human aspects, overall aesthetics, consistency and documentation are taken into
account while evaluating usability.

in
●● The frequency and severity of failures, the precision of output data, the mean-time-
to-failure (MTTF), the program’s predictability and its capacity to bounce back from
setbacks are all used to gauge reliability.

nl
●● Processing speed, response time, resource consumption, throughput and efficiency
are all taken into account when measuring performance.

O
●● In addition to testability, compatibility, configurability (the capacity to arrange and
regulate components of the software configuration), ease of system installation and
ease of problem localisation, supportability also encompasses the ability to extend the
program (extensibility), adaptability and serviceability—these three attributes constitute

ity
a more common term, maintainability.

The Evolution of Software Design


Over the course of nearly six decades, software design has continued to evolve. Early
design work focused on top-down methods for developing software architectures and

rs
standards for the creation of modular programs. A design definition process that included
procedural elements gave rise to the structured programming paradigm. Subsequent
research suggested techniques for converting data flow or data structure into a design
ve
specification. An object-oriented method of design derivation was suggested by more recent
design methodologies.
The focus of software design in recent times has shifted towards software architecture,
ni

design patterns that facilitate the implementation of software architectures and the
utilisation of lower design abstraction levels. A growing number of approaches—such as
aspect-oriented methodologies, model-driven development and test-driven development—
focus on ways to improve designs’ architectural structure and modularity.
U

Numerous design techniques, originating from the previously mentioned study, are
being implemented across the industry. Every software design methodology has its own set
of notations and heuristics, along with a rather narrow definition of what constitutes good
ity

design. However, there are a few features that all of these approaches have in common:
(1) a way to convert the requirements model into a design representation; (2) a notation to
represent functional components and their interfaces; (3) heuristics to refine and partition;
and (4) standards for evaluating quality.
m

A set of fundamental principles should be applied to data, architectural, interface and


component-level design, regardless of the design methodology that is employed. The
sections that follow take these ideas into consideration.
)A

Design Concepts
Over software engineering’s history, a number of core principles for software design
have changed. Every concept has endured over time, despite changes in the level of
(c

interest in each over time. Each gives the software designer a starting point to work from
when implementing more complex design techniques. Each aids in your response to the
following queries:
●● What standards can be applied to separate software into its component parts?




●● How is a functional or data structure detail distinguished from the software’s


conceptual representation?
Notes ●● What set of consistent standards determines a software design’s technical quality?

e
“The beginning of wisdom for a [software engineer] is to recognise the difference
between getting a program to work and getting it right,” stated M. A. Jackson in a statement.

in
The foundational ideas of software design offer the required structure for “getting it right.”

Abstraction

nl
Numerous layers of abstraction can be asked while thinking about a modular solution
to any given problem. A solution is articulated in general terms employing the language of
the problem environment at the highest level of abstraction. A more thorough explanation

O
of the solution is given at lower abstraction levels. To state a solution, implementation-
oriented terminology is combined with problem-oriented terminology. Lastly, the solution is
expressed in a way that is immediately implementable at the lowest level of abstraction.

ity
Procedural and data abstractions are created as various degrees of abstraction are
built. A series of instructions with a definite and restricted purpose is called a procedural
abstraction. A procedural abstraction’s name suggests these features, but it withholds
further information. The term “open” for a door is an illustration of a procedural abstraction.
Open suggests a drawn-out series of procedural actions (e.g., approach the door, extend

door, etc.).
rs
your hand and grab the knob, turn the knob to pull the door, back away from the moving

A designated set of data that characterises a data object is called a data abstraction.
ve
We can define a data abstraction named door in the context of the procedural abstraction
open. Similar to any other data item, the door’s data abstraction would include a collection
of properties that characterise it, such as its kind, swing direction, opening mechanism,
weight and dimensions. Consequently, the data abstraction door’s properties would be
ni

utilised by the procedural abstraction open to access information.
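The door example can be made concrete in code. In this sketch (all names are illustrative), Door is a data abstraction bundling the attributes listed above, while open_door is a procedural abstraction whose name hides the sequence of steps it implies:

```python
from dataclasses import dataclass

# Data abstraction: a named collection of attributes describing a door,
# mirroring the attributes mentioned in the text.
@dataclass
class Door:
    kind: str
    swing_direction: str
    opening_mechanism: str
    weight_kg: float
    width_cm: float
    is_open: bool = False

# Procedural abstraction: "open" names one limited, well-defined action;
# callers need not know the individual steps hidden inside.
def open_door(door: Door) -> None:
    # approach, grasp knob, turn, pull, step back... all abstracted away
    if door.opening_mechanism == "locked":
        raise RuntimeError("door cannot be opened")
    door.is_open = True

front = Door("hinged", "inward", "knob", 25.0, 90.0)
open_door(front)
print(front.is_open)  # True
```

The caller works at the higher level of abstraction ("open the door"); only the function body deals with the lower-level details.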

Architecture
U

“The overall structure of the software and the ways in which that structure provides
conceptual integrity for a system” are mentioned in relation to software architecture. The
simplest definition of architecture is the arrangement or layout of program modules, the way
these modules interact with one another and the data structures that the modules employ.
ity

However, components can also be used more broadly to refer to the main components of a
system and how they interact.
Creating an architectural representation of a system is one of the objectives of software
design. More in-depth design work is carried out using this rendering as a foundation. An
m

architect can handle typical design challenges with a set of architectural patterns.
Shaw and Garlan outline a number of characteristics that must be included in an
architectural design:
)A

Structural properties. This part of the architectural design representation outlines


the elements that make up a system (modules, objects, filters, etc.) as well as how those
elements are assembled and function together. As an illustration, objects are packaged
to include data as well as the processing that modifies the data and communicates with it
(c

through the call of methods.


Extra-functional properties. How the design architecture satisfies criteria for
performance, capacity, reliability, security, adaptability and other system attributes should be
covered in the architectural design description.
Families of related systems. The architectural design ought to incorporate recurring
patterns that are frequently seen in the creation of families of related systems. Essentially,
the architecture should be able to repurpose its building blocks. Notes

e
The architectural design can be represented using one or more of a variety of
models depending on how these attributes are specified. Architecture is portrayed by

in
structural models as a well-organised set of program elements. By aiming to find repeated
architectural design frameworks that are encountered in comparable types of systems,
framework models raise the bar on design abstraction.

The behavioural elements of the program architecture are covered by dynamic models,
which show how the configuration of the system or its structure may alter in response to
outside events. Process models concentrate on the technical or business process design

that the system needs to support. Lastly, the functional hierarchy of a system can be
represented using functional models.
To express these models, several architectural description languages (ADLs) have

been created. Despite the fact that a wide variety of ADLs have been proposed, most of
them offer mechanisms for characterising system elements and the ways in which they are
interconnected.

Patterns

Design patterns are described as “named nuggets of insight which convey the essence
of a proven solution to a recurring problem within a certain context amidst competing
concerns” by Brad Appleton. Put another way, a design pattern is a structure for design
that addresses a specific design issue in a given context and with consideration for external
factors that might affect how the pattern is implemented.
Each design pattern is intended to include a description that helps a designer decide
whether it is (1) appropriate for the work at hand, (2) reusable, so saving design time, or
(3) able to be used as a model for creating a similar but structurally or functionally different
pattern.
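As a concrete sketch of the idea, consider Strategy, a widely catalogued named pattern that addresses the recurring problem of letting an algorithm vary independently of the code that uses it. The shipping example below is hypothetical, not drawn from this text:

```python
class Shipping:
    """Strategy pattern: the costing algorithm is a pluggable collaborator,
    chosen per context, so client code never changes when a policy does."""

    def __init__(self, strategy):
        self._strategy = strategy   # any callable: weight -> cost

    def cost(self, weight):
        return self._strategy(weight)

def flat_rate(weight):
    # one interchangeable costing policy
    return 5.0

def per_kilo(weight):
    # another policy, swapped in without touching Shipping
    return 1.5 * weight
```

Swapping `flat_rate` for `per_kilo` changes behaviour without modifying `Shipping`; the pattern names that proven structure so designers can recognise and reuse it.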
Separation of Concerns
According to the design principle of separation of concerns, any complicated issue can
be solved more readily if it is broken into smaller, independent parts that can be optimised
or solved separately. A behaviour or feature that is outlined in the software requirements
model is considered a concern. It takes less time and effort to solve a problem when it is
divided into smaller, more manageable pieces.
For two problems p1 and p2, if the perceived complexity of p1 is greater than the
perceived complexity of p2, then the effort needed to solve p1 will be greater than the
effort needed to solve p2. This result is intuitively clear in general: it does take longer to
solve a challenging problem.
Consequently, the perceived complexity of two problems considered together is


frequently higher than the sum of the perceived complexity of the two problems taken
separately. This leads to a divide-and-conquer tactic: solving a complicated problem
becomes easier when it is divided into smaller, more manageable tasks. This has significant
effects on the modularity of software.
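The argument can be stated compactly. Writing C(p) for the perceived complexity of a problem p and E(p) for the effort needed to solve it, the claims above amount to:

```latex
% Greater perceived complexity implies greater effort:
C(p_1) > C(p_2) \implies E(p_1) > E(p_2)
% Perceived complexity of the combined problem exceeds the sum of its parts:
C(p_1 + p_2) > C(p_1) + C(p_2)
% Hence solving p_1 and p_2 separately takes less total effort:
E(p_1 + p_2) > E(p_1) + E(p_2)
```

The last inequality is the formal justification for the divide-and-conquer tactic.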
Other related design ideas, such as modularity, aspects, functional independence


and refinement, also exhibit separation of concerns. Each will be covered in the ensuing
subsections.


Modularity
The most prevalent example of concern separation is modularity. Software is broken up
into discrete, addressable parts (also referred to as modules) that are combined to meet the
needs of the problem.
“The one feature of software that enables a program to be intellectually manageable” is

modularity, according to some statements. For a software engineer, monolithic software—a
complex program made up of just one module—is difficult to understand. It would be nearly
impossible to comprehend due to the quantity of control routes, scope of reference, amount

of variables and general complexity. Whenever possible, it is best to divide the design into
multiple modules in order to facilitate comprehension and ultimately lower the software
development cost.

Recalling the discussion of separation of concerns, we might conclude that the effort
needed to produce software would become negligible if we divided it into an endless
number of parts. Unfortunately, other forces come into play that render this conclusion
invalid.

With reference to the figure below, it can be seen that as the total number of software
modules increases, the effort (and cost) required to develop each individual module
decreases, because for the same overall requirements each module becomes smaller.
However, the effort (and cost) involved in integrating the modules increases with the
number of modules. These two trends combine to produce the total cost or effort curve
depicted in the figure.

There is a number of modules, M, that would yield the lowest possible development cost,
but we lack the sophistication needed to predict M with confidence.

Figure: Modularity and software cost

When modularity is taken into account, the curves displayed in the above figure do
offer helpful qualitative guidance. You should modularise, but take care to stay in the
vicinity of M; over- and under-modularity should both be avoided. But how do you know
the vicinity of M? How modular should you make software?
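The qualitative shape of the curve can be sketched with a toy cost model (the formulas and constants below are illustrative assumptions, not taken from the figure): per-module development cost falls as modules get smaller, integration cost grows with the number of modules, and their sum has a minimum at some M.

```python
def total_cost(m, work=1000.0, unit=5.0):
    """Toy model of the modularity trade-off for m modules.

    Building one module is assumed to cost superlinearly in its size,
    so splitting fixed total work across more modules lowers the
    development cost; integration cost grows linearly with m.
    """
    development = m * (work / m) ** 1.2   # falls as m grows
    integration = unit * m                # rises as m grows
    return development + integration

# Scan module counts to locate the cost-minimising M for this model.
costs = {m: total_cost(m) for m in range(1, 201)}
best_m = min(costs, key=costs.get)
```

In practice there is no such closed formula, which is exactly why predicting M with confidence is hard; the sketch only reproduces the U-shaped curve.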
When a design (as well as the resulting program) is modularised, it becomes easier to
plan development, define and deliver software increments, accommodate changes, test and
debug more effectively and perform long-term maintenance without experiencing major side
effects.
Information Hiding
“How do I decompose a software solution to obtain the best set of modules?” is a
fundamental question that arises from the concept of modularity. Modules should be
“characterised by design decisions that (each) hides from all others,” according to the

information hiding principle. Stated differently, the specifications and architecture of
modules ought to be such that any data or algorithms housed within them are inaccessible
to other modules that do not require them.

According to the concept of hiding, successful modularity can be attained by
establishing a collection of discrete modules that only exchange the data required for

software to work. The procedural (or informational) entities that comprise the software can
be better defined with the aid of abstraction. Access restrictions to procedural details inside
a module as well as any local data structures the module uses are defined and enforced by

hiding.
The biggest advantages come from using information hiding as a modular system
design criterion when changes are needed for testing and later software maintenance.

Unintentional mistakes made during modification are less likely to spread to other areas
of the program since the majority of data and process information are hidden from other
sections of the program.
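A minimal sketch of the principle (the sensor-log class is hypothetical, not from the text): only the operations other modules need are public, while the storage format and validation rule stay hidden and can change without rippling outward.

```python
class SensorLog:
    """Information hiding: record() and latest() are the public interface;
    the internal list and the validity rule are hidden design decisions."""

    def __init__(self):
        self._readings = []          # hidden data structure

    def _is_valid(self, value):      # hidden procedural detail
        return 0 <= value <= 100

    def record(self, value):
        # Clients never see how (or whether) a reading is stored.
        if self._is_valid(value):
            self._readings.append(value)

    def latest(self):
        return self._readings[-1] if self._readings else None
```

Replacing the list with a database table, or tightening the validity rule, would not affect any client that uses only `record()` and `latest()`.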

Functional Independence
Separation of concerns, modularity, abstraction and information hiding all directly lead
to the idea of functional independence. Wirth and Parnas mention refinement strategies that
improve module independence in seminal papers on software design. Subsequent research
by Constantine, Myers and Stevens strengthened the idea.

Creating modules with a “single-minded” function and an “aversion” to excessive contact
with other modules is the first step towards achieving functional independence. Put another
way, you should create software such that, when viewed from other areas of the program
structure, each module has a straightforward interface and addresses a particular subset of
needs. It’s reasonable to wonder why independence matters.
Effective modularity, or independent modules, makes software development easier
since functions can be divided into discrete areas and interfaces are made simpler (think
about the implications when development is done in a team). Because there are fewer
unintended consequences from design or code changes, error propagation is minimised
and reusable modules are available, independent modules are simpler to manage (and
test). In conclusion, design is the key to high-quality software and functional independence
is the cornerstone of good design.
Cohesion and coupling are the two qualitative factors used to evaluate independence.
One way to measure a module’s relative functional strength is by its cohesiveness. The
degree of relative interdependence between modules is indicated by coupling.
A cohesive module carries out a single function and doesn’t communicate much with
other elements in other sections of the program. To put it simply, a cohesive module should
preferably do a single task. While high cohesiveness, or single-mindedness, is usually
desirable, it is frequently necessary and wise to have a software component fulfil numerous
jobs. On the other hand, if a good design is to be attained, “schizophrenic” components—


modules that execute numerous unconnected functions—should be avoided.
An indication of the connections between modules in a software structure is coupling.
The degree of complexity of the interfaces between modules, the point of entry or reference
to a module and the types of data that are transferred across the interface all affect
coupling. The lowest possible coupling is what you should aim for when designing software.
Software with simple communication between modules is easier to understand and less
likely to experience the “ripple effect,” which happens when faults happen in one place and
spread throughout the system.
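A minimal sketch (hypothetical classes) of what high cohesion and low coupling look like in code: each class does one job, and the only connection between them is a single narrow method call.

```python
class TemperatureSensor:
    """Cohesive: every member concerns one thing, the current reading."""

    def __init__(self, reading=20.0):
        self._reading = reading

    def read_celsius(self):
        return self._reading

class Thermostat:
    """Loosely coupled: it depends only on the sensor's narrow
    read_celsius() interface, not on how readings are obtained or stored."""

    def __init__(self, sensor, target=21.0):
        self._sensor = sensor
        self._target = target

    def heating_needed(self):
        return self._sensor.read_celsius() < self._target
```

Because the interface between the two classes is a single simple call, a change inside `TemperatureSensor` cannot "ripple" into `Thermostat` as long as `read_celsius()` keeps its meaning.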

Refinement
Niklaus Wirth first suggested the top-down design approach known as stepwise
refinement. Programs are created by iteratively improving procedural detail levels. A

procedural abstraction, or macroscopic declaration of function, is broken down step by step
until programming language statements are obtained, at which point a hierarchy is created.

In actuality, refinement is an elaboration process. A high-level abstraction-defined
declaration of function, or description of information, is where you start. In other words, the
statement conceptually explains a function or piece of information, but it doesn’t go into

detail on how the function or information is organised internally. Then, you expand on the
initial claim, giving ever-more specific details with each new refinement (elaboration).
Refinement and abstraction are complimentary ideas. You can express procedures

and data inside with abstraction, preventing “outsiders” from needing to know specifics.
Refinement helps bring low-level details to light as the design develops. You can use both
ideas together to produce a comprehensive design model.
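Stepwise refinement can be sketched as follows (a hypothetical example): a macroscopic statement of function is elaborated level by level until concrete programming-language statements appear.

```python
# Level 1: macroscopic statement of function, all detail deferred.
def produce_report(records):
    data = gather_data(records)
    return format_report(data)

# Level 2: "gather data" is refined into filtering and totalling.
def gather_data(records):
    valid = [r for r in records if r >= 0]   # drop invalid readings
    return {"count": len(valid), "total": sum(valid)}

# Level 2: "format report" is refined into a concrete output layout.
def format_report(data):
    return f"{data['count']} items, total {data['total']}"
```

Each level elaborates the one above it without changing what the top-level statement promises, which is why refinement and abstraction are complementary.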

Aspects
Findings from requirements analysis reveal a number of “concerns.” These concerns
“include requirements, use cases, features, data structures, variants, intellectual property
boundaries, collaborations, patterns and contracts, as well as quality-of-service issues.” The

ideal structure for a requirements model is one that makes it possible to separate out each
issue (requirement) and evaluate it separately. In actuality, though, a few of these issues
are systemic in nature and are difficult to isolate.
Requirements are honed into a modular design representation as design gets
underway. Consider two requirements, A and B. Requirement A crosscuts requirement B
“if a software decomposition [refinement] has been chosen in which B cannot be satisfied
without taking A into account.”
Consider, for instance, two requirements for the SafeHomeAssured.com WebApp.
Requirement A is described by ACS-DCV. A design refinement would focus on the modules
that would allow a registered user to view footage from cameras positioned throughout a
room. Requirement B is a general security requirement stating that a registered user must
be validated before gaining access to SafeHomeAssured.com. This requirement applies to
all features that registered SafeHome
users can access. As design refinement takes place, A* is a design representation for
requirement A and B* is a design representation for requirement B. As a result, B*
crosscuts A*; A* and B* are both representations of concerns.
An aspect is a representation of a crosscutting concern. Thus, the design representation
B*, of the requirement that a registered user must be validated before using
SafeHomeAssured.com, is an aspect of the SafeHome WebApp. Aspects must be identified


in order for the design to appropriately take them into account when modularisation and
refinement take place. Instead of being implemented as software fragments that are
“scattered” or “tangled” over numerous components, an aspect is ideally implemented


as a distinct module (component). In order to achieve this, the design architecture should
support a mechanism for defining an aspect: a module that enables the concern to be
implemented across all the other concerns that it crosscuts.
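In code, one common way to keep a crosscutting concern in a single module is to wrap every operation it crosses; the sketch below (hypothetical function names, loosely echoing the SafeHome validation example) uses a Python decorator for that purpose.

```python
def requires_validated_user(operation):
    """Aspect: the validation concern lives in this one module instead of
    being scattered or tangled through every protected operation."""
    def wrapper(user, *args, **kwargs):
        if not user.get("validated"):
            raise PermissionError("user must be validated first")
        return operation(user, *args, **kwargs)
    return wrapper

@requires_validated_user
def display_camera_views(user, room):
    # One of many operations the validation concern crosscuts.
    return f"streaming {room} camera to {user['name']}"
```

Any other protected operation gains the same behaviour by adding the one decorator line, so a change to the validation rule is made in a single place.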
Refactoring
Refactoring is a crucial design activity recommended by many agile techniques. It is
a reorganisation technique that makes a component’s design (or code) simpler without
affecting its behaviour or functionality. Refactoring, according to Fowler, is “the process
of changing a software system in such a way that it improves its internal structure while
maintaining the code’s external behaviour.”
Software is refactored to remove redundant code, underused design features,
superfluous or inefficient algorithms, poorly constructed or inappropriate data structures
and any other design flaws that can be remedied to yield a better design. A component with

minimal cohesiveness, for instance, could be the result of the first design iteration; it does
three tasks with little connection to each other. You might determine, after giving it some
thought, that the component should be refactored into three distinct parts, each with a high

degree of cohesiveness.
Software that is simpler to integrate, test and maintain will be the end product.
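A before-and-after sketch (hypothetical code) shows the essential property of refactoring: internal structure improves while external behaviour is preserved.

```python
# Before: one function with low cohesion, mixing unrelated tasks.
def process(order):
    total = sum(item["price"] for item in order["items"])  # pricing
    label = order["name"].strip().title()                  # formatting
    return total, label

# After: each task refactored into its own cohesive unit.
def order_total(order):
    return sum(item["price"] for item in order["items"])

def shipping_label(order):
    return order["name"].strip().title()

def process_refactored(order):
    # Same external behaviour as process(); better internal structure.
    return order_total(order), shipping_label(order)
```

Because the two versions return identical results for every input, the refactoring can be verified by the existing tests, which is what makes the technique safe in agile workflows.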

Object-Oriented Design Concepts
The modern software engineering field makes extensive use of the object-oriented (OO)
paradigm. This section reviews basic OO design ideas, such as classes and objects,
inheritance, messaging and polymorphism, for readers who might not be familiar with them.

Design Classes
A collection of analysis classes is defined by the requirements model. Each focuses on
parts of the problem that are visible to the user and describes a portion of the problem area.

Generally speaking, an analysis class has a high level of abstraction.
As the design model develops, you will construct a software infrastructure to support
the business solution and establish a set of design classes that improve the analysis
classes by offering design information necessary for the classes to be implemented. It
is possible to construct five various kinds of design classes, each of which represents a
different layer of the design architecture.
●● All abstractions required for human-computer interaction are defined by user interface
classes (HCI). A chequebook, an order form, a fax machine, etc. are examples of
metaphors that are frequently used in HCI and the design classes for the interface
may be visual representations of the metaphor’s components.
●● The analysis classes that were previously defined are frequently refined into business
domain classes. The classes list the characteristics and functions (methods) needed to
carry out a certain business domain implementation.
●● In order to properly manage the business domain classes, process classes must
implement lower-level business abstractions.
●● Persistent classes stand for data stores (like databases) that continue to exist after
the program has finished running.
●● The software management and control functions that allow the system to function and
communicate both inside its computing environment and with external entities are
implemented by system classes.
Every analysis class becomes a design representation as the architecture takes shape,
lowering the level of abstraction. In other words, analysis classes use the business domain
vocabulary to represent data items (and related services that are applied to them). Design
classes provide a great deal more technical information as an implementation guide.
Complete and sufficient. A design class should contain every method and attribute
that, given a careful reading of the class name, one could reasonably expect to be present
for the class. For instance, a video-editing software class called Scene is only considered
complete if it has every property and method that is logically connected to the production


of a video scene. By ensuring that the design class only has the methods necessary to
accomplish the class’s goals—neither more nor less—sufficiency is ensured.
Primitiveness. A design class’s methods should be concentrated on completing a

single task for the class. Once a method has been used to implement the service, the class
shouldn’t offer an additional means of achieving the same goal. For instance, the start and

end points of the clip may be indicated by the start and end points of the class VideoClip in
video editing software (keep in mind that the raw video that is fed into the system may be
longer than the clip that is used). The only ways to determine the start and finish points of

the clip are through the use of the setStartPoint() and setEndPoint() methods.
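A minimal sketch of the VideoClip example (the attributes and bounds checks are illustrative assumptions; only the two method names come from the text): each method does exactly one primitive job, and there is no second way to set a clip boundary.

```python
class VideoClip:
    def __init__(self, raw_length):
        self.raw_length = raw_length   # raw video may be longer than the clip
        self.start = 0
        self.end = raw_length

    def setStartPoint(self, t):
        # Primitive: the only way to set the start point of the clip.
        if 0 <= t < self.end:
            self.start = t

    def setEndPoint(self, t):
        # Primitive: the only way to set the end point of the clip.
        if self.start < t <= self.raw_length:
            self.end = t

    def duration(self):
        return self.end - self.start
```

Adding, say, a `trim(start, end)` convenience method that duplicates these two operations would violate primitiveness, because the class would then offer two means of achieving the same goal.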
High cohesion. A cohesive design class applies qualities and techniques with single-
mindedness to carry out a limited, well-defined set of duties. For instance, a collection of

methods for modifying the video clip may be present in the class VideoClip. Cohesion is
preserved as long as every technique concentrates only on characteristics related to the
video clip.

Low coupling. Design classes must cooperate with one another in order to follow
the design model. Collaboration should, nevertheless, be limited to a manageable level.
A design model that is heavily coupled—all design classes work together—makes it
challenging to test, deploy and maintain the system over time. Within a subsystem, design
classes should, in general, know very little about other classes. The Law of Demeter is a

limitation that implies a method should only communicate with methods in classes that are
adjacent to its own.
Design Models
The design model is viewable in two dimensions, as seen in the figure below. As design
tasks are carried out as part of the software process, the process dimension shows how
the design model changes. The degree of detail at which each component of the analytical
model is converted into a design counterpart and subsequently improved through iteration
is represented by the abstraction dimension. The dashed line in the accompanying figure
represents the separation between the analysis and design models. It is possible in some
situations to distinguish the analysis and design models clearly. In other situations, it is harder
to distinguish the analytical model from the design since it gradually becomes part of it.

Many of the UML diagrams used in the analysis model are also used in the design
model’s elements. The distinction is that these diagrams are improved and elaborated
during the design process; more implementation-specific information is given and emphasis

is placed on architectural structure and style, components housed within the architecture
and interfaces with the external environment.

It is important to acknowledge that the development of model elements displayed along
the horizontal axis is not always done in a sequential manner. Preliminary architectural
design typically establishes the framework and is followed by interface and component-level

design, which frequently happen concurrently. Until the design is complete, the deployment
model is typically postponed.

Data Design Elements

Similar to other software engineering endeavours, data design, also known as data
architecting, aims to generate a high-level abstraction model of data and/or information,
which represents the user’s or customer’s perspective on the data. The computer-based

system can then process these increasingly more implementation-specific representations
of the data model. The architecture of the data in many software applications will greatly
impact the architecture of the software that has to process it.
Data structure has always played a significant role in program design. The design

of data structures and the related algorithms needed to handle them at the program
component level are critical to producing applications of the highest calibre. A system’s
ability to accomplish its business goals at the application level depends critically on the
data model—which is created as part of requirements engineering—being translated into a
database. At the corporate level, data mining and knowledge discovery are made possible
by gathering information from many databases and rearranging it into a “data warehouse.”
These activities can have an effect on the company’s overall success. In each situation,
data design is crucial.

Architectural Design Elements


The house floor plan is the analogue of the architectural design for software. The
rooms’ general arrangement, including their dimensions, shapes and connections to one
another, as well as the windows and doors that let passage in and out of the rooms, are
shown in the floor plan. We can have a general overview of the house from the floor plan.
ity

Architectural design components provide us with a broad overview of the program.


Three sources are used to create the architectural model: (1) details about the
application domain in which the software is to be developed; (2) particular requirements
model elements, like data flow diagrams or analysis classes and how they relate to one
m

another and work together to solve the problem; and (3) the availability of architectural
styles and patterns.
The architectural design element is typically represented as a collection of related
)A

subsystems that are frequently taken from the requirements model’s analysis packages.
Every subsystem could have its own architecture; for example, a graphical user interface
could be designed in accordance with an established user interface architectural style.
(c

Interface Design Elements


An extensive collection of drawings (and requirements) for a house’s doors, windows
and external utilities can be compared to the interface design of software. The size,
form and operation of doors and windows are all included in these drawings, along with

the distribution of utility connections (such as phone, gas, electricity and water) across
the rooms that are shown in the floor plan. They provide information on the location of
the doorbell, if a visitor’s presence should be announced over an intercom and how to

establish a security system. The intricate designs (as well as requirements) for the doors,
windows and outside utilities essentially tell us how materials and information enter and exit

the house as well as within the floor plan’s designated rooms. Software interface design
elements show how data enters and exits the system as well as how it is shared across the
pieces that make up the architecture.

The user interface (UI), external interfaces to other systems, devices, networks,
or other information producers or consumers and internal interfaces between different
design components are the three main components of interface design. The program can
communicate externally thanks to these interface design aspects, which also facilitate

internal communication and teamwork among the various parts that make up the software
architecture.
One important software engineering task is UI design, also increasingly known

as usability design. Usability design combines technical (e.g., UI patterns, reusable
components), ergonomic (e.g., information arrangement and placement, metaphors, UI
navigation) and aesthetic (e.g., layout, colour, visuals, interaction mechanisms) elements.
Generally speaking, the user interface is a distinct subsystem inside the larger application

architecture.
Detailed information on the entity to which information is given or received is
necessary for the design of external interfaces. Under all circumstances, this data ought
to be gathered during requirements engineering and confirmed after interface design gets
underway. Error checking and relevant security elements should be incorporated into the
design of external interfaces where needed.
Internal interface design and component-level design are closely related fields.
All operations and the messaging methods needed to facilitate coordination and
communication amongst operations in different classes are represented by design
realisations of analysis classes. Every message needs to be planned to support both the
unique functional requirements of the requested action and the necessary information
transfer. The interface of each software component is developed based on data flow
representations and the functionality described in a processing narrative if the traditional
input-process-output method to design is selected.
ity

An interface and a class can be modelled in similar ways in some situations. “An
interface is a specifier for the externally-visible [public] operations of a class, component, or
other classifier (including subsystems) without specification of internal structure,” according
to the UML definition of an interface. Put more simply, an interface is a collection of actions
m

that give access to and explain a portion of a class’s behaviour.


For instance, the control panel utilised by the SafeHome security feature enables
a homeowner to manage specific security function features. A wireless PDA or mobile
)A

phone may be used to implement control panel functionalities in a more advanced version
of the system.
The readKeyStroke () and decodeKey () functions must be implemented by the
ControlPanel class since it offers keypad behaviour. It is helpful to establish an interface
as seen in the image if these operations need to be made available to other classes (in
(c

this case, WirelessPDA and MobilePhone). The KeyPad interface is displayed as either a
small labelled circle connected to the class with a line, or using the <<interface>>
stereotype. The interface description lists the set of operations required to achieve keypad
behaviour; no attributes are defined.
The ControlPanel class includes KeyPad actions in its behaviour, as seen by the
dashed line with an open triangle at the end. This is known as a realisation in UML. That is,
by implementing the KeyPad actions, a portion of the ControlPanel’s behaviour will be
implemented. Other classes that access the interface will have access to these functions.
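The interface-and-realisation relationship can be sketched in Python with an abstract base class standing in for the UML interface (only the operation names readKeyStroke() and decodeKey() come from the text; the method bodies are illustrative assumptions):

```python
from abc import ABC, abstractmethod

class KeyPad(ABC):
    """Interface: externally visible keypad operations, no attributes."""

    @abstractmethod
    def readKeyStroke(self): ...

    @abstractmethod
    def decodeKey(self, keystroke): ...

class ControlPanel(KeyPad):
    """Realisation: ControlPanel implements the KeyPad operations, making
    keypad behaviour available to clients such as a WirelessPDA or
    MobilePhone that depend only on the KeyPad interface."""

    def __init__(self, buffered_keys):
        self._buffer = list(buffered_keys)

    def readKeyStroke(self):
        return self._buffer.pop(0) if self._buffer else None

    def decodeKey(self, keystroke):
        return {"#": "enter", "*": "cancel"}.get(keystroke, keystroke)
```

A client written against `KeyPad` would work unchanged with any other realisation of the same two operations, which is the point of specifying an interface without specifying internal structure.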

Component-Level Design Elements

Software component-level design is like having a set of meticulous blueprints (along
with specs) for every room in the house. The wiring and plumbing in each room, as well as

the locations of wall switches and electrical outlets, faucets, sinks, showers, tubs, drains,
cabinets and closets, are all shown in these drawings. Along with every other detail related
to a space, they also specify the flooring to be utilised and the mouldings to be installed.
Every software component’s internal detail is fully described by the component-level design.

In order to do this, the component-level design specifies algorithmic detail for all processing
that takes place within a component, data structures for all local data objects and an
interface that grants access to all component operations (behaviours).

In object-oriented software engineering, a component is represented in UML
diagrammatic form. A section of the SafeHome security function called SensorManagement
is shown. The class Sensor is allocated to the component, connected to it by a dashed
arrow. SensorManagement handles all the tasks related to SafeHome sensors, such as
setting them up and keeping an eye on them.
A component’s design details can be modelled at several distinct degrees of
abstraction. Processing logic can be shown using a UML activity diagram. A component’s
detailed procedural flow can be shown diagrammatically using flowcharts, box diagrams,
or pseudo code, among other forms. The guidelines for structured programming—a
collection of limited procedural constructs—are followed by algorithms. Data structures are
typically modelled using pseudo code or the programming language that will be used for
ni

implementation. These choices are made based on the type of data items that need to be
handled.

Deployment-Level Design Elements



The distribution of software functionality and subsystems inside the physical computing
environment supporting the software is indicated by deployment-level design components.
For instance, the SafeHome product’s components are set up to function in three main
ity

computer environments: a PC at home, the SafeHome control panel and a server located at
CPI Corp. that offers Internet-based access to the system.
A UML deployment diagram is created during design and subsequently improved
upon. Three computing environments are depicted in the picture; in reality, there would
m

be more because of sensors, cameras and other devices. Each computer element’s
subsystems, or functionalities, are listed. Subsystems that carry out security, surveillance,
home management and communications features, for instance, are housed within personal
computers. Furthermore, all attempts to access the SafeHome system from an external
)A

source will be handled by an external access subsystem. The components that each
subsystem implements would be indicated by a detailed description.
In other words, the deployment diagram displays the computer environment but omits
any clear information on configuration. The “personal computer,” for instance, is not given
(c

any more details. It may be a Linux box, a Sun workstation, a Mac or Windows-based PC.
When the deployment diagram is reviewed in instance form in the later stages of design
or when building starts, these details are supplied. Every deployment instance (a distinct
hardware configuration with a name) is uniquely identifiable.

3.1.3 Methodologies for Structured Design


Structured design and structured analysis are components of structured methodology.
Software design can be approached with discipline using structured design.

●● It permits the problem’s form to dictate the solution’s shape.

●● Its foundation lies in the idea that breaking down a huge, complex system into smaller
components can make it simpler.
●● It also encourages the use of common graphic tools to help with system design.

●● It provides a range of strategies for deriving a solution, together with a set of criteria
for judging what constitutes a good design.

Software Design Consists of Four Parts:

1. Architectural design: It outlines the connections between the software’s structural parts.
2. Detail design: This component-level design provides procedural information on software
components, or modules; for example, it answers the question, “How is a specific piece

of processing done?”
3. Data design: It converts the data model into the data structures needed for software
implementation, as shown by the ER diagram and the data dictionary. The software
system’s goal is to convert input data into outputs. To create high-quality software,

appropriate data representation is essential.
4. Interface design: Computer systems are intended for a variety of users. The system
is used by users for a variety of objectives. The communications that occur between the
ve
software and its users are characterised by interface design.
Data Flow Diagrams (DFD) and other graphical tools are used in analysis models to illustrate the various processes of the system and the data flow among them. Determining and defining the top-level modules that the software should include is the first stage in the design process. Every software module should carry out a certain set of tasks and advance the system’s goals.
Until the lower-level modules of a system are of a tolerable size and complexity, the top-level modules are further broken down into smaller sub-modules. We refer to this process as decomposition or modularisation. There is a hierarchical structure to the modules. Every module has certain input requirements, executes specific procedures and produces specific results. The modules are integrated with one another after being created as separate, autonomous entities.
The hierarchical perspective of the system is highlighted by structured design. The highest level in the hierarchy represents the most significant division of labour, while the lowest level displays the specifics. The modules are accessed top to bottom, with the module at the upper level gaining access to the module at the lower level. We refer to this method of system design as the top-down approach. Typically, the top-down method is applied across the entire software design process. For instance, there are multiple options on the main menu. Selecting one item opens a new menu with other options for the user to consider. This feature improves the system’s usability by allowing the user to choose one option at a time from the variety of options displayed in the menu.
One tactic for preventing mistakes in software design is modularity. Each module in a correctly modular system is often created to carry out a single, distinct task. Modular architecture is simple to comprehend. It facilitates easy modification and system maintenance.
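The top-down decomposition just described can be sketched in a few lines of Python. The report-generation system below is a hypothetical illustration (the function names are ours, not drawn from the text): one top-level module coordinates three subordinate modules, each with its own inputs and outputs.

```python
# Hypothetical top-down decomposition: one top-level module coordinating
# three subordinate modules, each performing a single well-defined task.

def read_records(raw):
    """Subordinate module: parse 'name,score' lines into records."""
    records = []
    for line in raw.strip().splitlines():
        name, score = line.split(",")
        records.append({"name": name, "score": int(score)})
    return records

def summarise(records):
    """Subordinate module: compute summary statistics."""
    scores = [r["score"] for r in records]
    return {"count": len(scores), "average": sum(scores) / len(scores)}

def format_report(summary):
    """Subordinate module: render the summary as text."""
    return f"{summary['count']} records, average {summary['average']:.1f}"

def generate_report(raw):
    """Top-level module: invokes the subordinates in order, passing data down."""
    return format_report(summarise(read_records(raw)))
```

In a structure chart, each of these functions would appear as one box, with `generate_report` as the root module.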

Software engineers utilise structured design methodologies, which are systematic techniques, to produce software systems that are well-structured and of superior quality. These approaches place a strong emphasis on the methodical breakdown of complicated systems into smaller parts, modularity and clarity. Important methods for structured design include:

1. Top-Down Design
™™ Description: This method divides the highest level of system functionality into smaller, more manageable components.
™™ Process: Starts with a general description of the system and works its way down to specific subsystems and modules.
™™ Benefits: Guarantees a well-defined general framework and facilitates comprehension of the high-level system prior to exploring specifics.
2. Modular Design
™™ Description: Focuses on breaking up the system into separate modules, each of which is in charge of handling a certain task.
™™ Process: Ensures that the modules are well-defined and have little coupling while maintaining good cohesion.
™™ Benefits: Improves concurrent development, reusability and maintainability.
3. Data Flow-Oriented Design
™™ Description: Emphasises comprehending and illustrating the flow of data throughout the system.
™™ Process: Makes use of data flow diagrams (DFDs) to illustrate the entry, processing, storage and output of data.
™™ Benefits: Makes data dependencies and transformations visible, making mistake detection and optimisation easier.


4. Structured Analysis and Design Technique (SADT)
™™ Description: A top-down, hierarchical method of designing systems that represents the functions and data flows with diagrams.
™™ Process: Uses a number of diagrams to represent the system’s functions, data flow and control flow.
™™ Benefits: Makes the functional structure and interactions more understandable, which supports thorough analysis and design.
5. Jackson Structured Programming (JSP)
™™ Description: A technique where program structures are created using the input and output data structures as a guide.
™™ Process: Analyses the input and output data structures and maps them directly to program structures.
™™ Benefits: Guarantees that the program design closely complies with data processing specifications, enhancing performance and dependability.
6. Structured Systems Analysis and Design Method (SSADM)
™™ Description: A methodical, exacting process for developing systems that incorporates organised design and analysis.
™™ Process: Includes phases such as feasibility assessment, requirements analysis, system design and implementation planning.
™™ Benefits: Helps to ensure consistency and lower risk in system development by offering a comprehensive and standardised framework.
7. Hierarchical Input Process Output (HIPO)
™™ Description: A technique that shows the functions of a system as an input, process and output hierarchy.
™™ Process: Uses HIPO diagrams to show how control and information move across the system.
™™ Benefits: Aids in recognising the interactions between various components and in visualising the overall structure of the system.
8. Warnier-Orr Diagramming
™™ Description: Employs a hierarchical representation to show the logical structure of data and control flows.
™™ Process: Produces diagrams that display the data and system processes in a hierarchical arrangement.
™™ Benefits: Improves comprehension of system logic and supports the creation of reliable and effective systems.

Implementation Steps in Structured Design Methodologies:
●● Requirement Analysis: Collecting and evaluating the system’s functional and non-functional needs.
●● System Specification: Outlining the high-level components of the system architecture.
●● Module Decomposition: Dividing the system into smaller, more manageable parts.
●● Interface Design: Defining the interfaces between modules to guarantee integration and transparent communication.
●● Data Structure Design: Creating the data structures needed to run the system.
●● Algorithm Design: Creating the algorithms for every module to make sure they carry out the intended tasks.
●● Documentation: To guarantee clarity and ease future maintenance, thorough documentation must be created at each stage.
●● Review and Verification: To guarantee adherence to specifications and design principles, reviews and verifications should be carried out at each level.

Benefits of Structured Design Methodologies:
●● Enhanced Clarity: Offers an organised, lucid picture of the system, which facilitates comprehension and administration.
●● Improved Maintainability: A modular design guarantees that modifications to one component of the system don’t significantly affect other components.
●● Improved Quality: Detailed design and methodical breakdown aid in the early identification and resolution of possible problems.
●● Easier Collaboration: A modular structure and clear documentation allow for simpler teamwork and parallel development.
●● Lower Risk: Early identification of requirements and possible problems decreases the likelihood of project overruns and failures.

A framework that guarantees the development of reliable, stable and scalable software systems is provided by structured design approaches, which are the cornerstone of disciplined software development. Developers may systematically handle complexity, improve communication and produce high-quality software that fulfils user requirements and survives the test of time by following these approaches.

3.1.4 Modules Coupling and Cohesion
Some data interchange between modules is required for an integrated system. Because of this prerequisite, there is some interdependence between the modules. Coupling is the degree of interdependence between modules. Any modifications made in one module will necessitate matching adjustments in the other if there is greater interdependence or coupling between the two components. Thus, it becomes harder to modify software if components are tightly connected. Furthermore, a mistake made in one module now impacts the other modules as well as the original module. As a result, there are increased opportunities for error and debugging becomes more challenging. Consequently, there should be minimal coupling or data interchange between modules in well-designed software.
The degree of connectivity between modules in a software framework is measured by coupling. The degree of complexity of the interfaces between modules, the point of entry or reference to a module and the types of data that are transferred across the interface all affect coupling. The goal of software design is the lowest possible coupling. Software with simple communication between modules is easier to understand and less likely to experience the “ripple effect,” which happens when faults happen in one place and spread throughout the system.
A few instances of various module coupling types are shown in the figure below. Modules a and d are subordinate to different modules. Since neither is directly connected to the other, there is no direct linkage. Data are supplied to module c, which is accessible using a traditional argument list and is subordinate to module a. Low coupling, also known as data coupling, is demonstrated in this part of the structure as long as a simple argument list is used (that is, simple data are provided; a one-to-one correlation of items occurs). When a part of a data structure (instead of basic arguments) is supplied via a module interface, a variant of data coupling known as stamp coupling results. Between modules b and a, this happens.

Figure: Types of coupling



Control passing between modules is a characteristic of coupling at moderate levels. The majority of software designs use control coupling, as seen in the above figure where modules d and e exchange a “control flag,” or a variable that governs decisions in a subordinate or superordinate module.
When modules are connected to an environment outside of software, rather high amounts of coupling take place. I/O, for instance, connects a module to particular hardware, file systems and communication methods. External connection is necessary, but it ought to be restricted to a select few structurally-based modules. Another sign of high coupling is when multiple modules make reference to the same global data area. This mode, known as common coupling, is depicted in the above figure. A data item in a global data area (such as a disc file or a globally accessible memory area) is accessed by modules c, g and k individually. The item is initialised by module c. The item is updated and recomputed by module g later on. Assume for the moment that g changes the item inaccurately due to a mistake. The item is read by module k much later in processing and when k fails to process it, the software aborts. Module k is the apparent cause of the abort; module g is the true cause. Problem diagnosis in structures with significant common coupling is challenging and time-consuming. This does not imply, however, that using global data is inherently “bad.” It does imply that a software creator needs to be mindful of the possible repercussions of common coupling and exercise extra caution to avoid them.
The usage of data or control information kept inside the borders of another module by one module results in the highest degree of coupling, known as content coupling. Content coupling also takes place when branches are made into the middle of a module. This coupling mode can and should be avoided.
The design choices taken throughout the development of the structure resulted in the coupling types that were just covered. On the other hand, during coding, variations of external coupling could be introduced. For instance, operating system (OS) coupling links design and generated code to operating system “hooks” that can cause chaos when the OS changes; compiler coupling ties source code to certain (and frequently nonstandard) features of a compiler.

A cohesive module works within a software procedure to do a specific task, interacting with other program procedures only minimally. To put it simply, a cohesive module should preferably do a single task.
One way to visualise cohesion is as a “spectrum.” While we sometimes accept the mid-range of the spectrum, we always aim for high cohesiveness. Cohesion has a nonlinear scale: middle-range cohesiveness is almost as “good” as high-range cohesiveness, but low-range cohesiveness is significantly “worse”. In actual use, a designer does not have to worry about assigning a particular module to a precise point on the scale. Instead, when designing modules, low degrees of cohesion should be avoided and the general idea should be grasped.
On the lower (undesirable) end of the scale, we come across a module that carries out a series of functions that have little to no relationship with one another. We refer to these modules as coincidentally cohesive. Logically cohesive modules carry out duties that are related logically (for example, producing all output, regardless of type). Temporal cohesiveness in a module is demonstrated by the presence of linked tasks that must all be completed within the same time frame.
Take an engineering analysis package module that handles error processing as an illustration of a poor-cohesiveness module. When computed data exceeds predefined bounds, the module is called.
It carries out the following duties:
1. Computes additional data using the initial computations as a basis.
2. Generates an error report on the user’s workstation that includes graphical elements.
3. Executes more calculations at the user’s request, modifies a database and enables a menu option for further processing.
Despite their tenuous connections, the aforementioned tasks are each distinct functional entities that would be better served by being completed as separate modules. The only effect of combining the functions into a single module is to make it more likely that an error will spread if one of its processing jobs is altered.
In terms of degree of module independence, moderate degrees of cohesiveness are comparatively near to one another. Procedural cohesion occurs when a module’s processing elements are interrelated and need to be carried out in a particular order. A module exhibits communicational cohesiveness when every processing element focuses on a single area of a data structure. A module with high cohesion completes a single, unique procedural task.
It is not required to ascertain the exact degree of cohesiveness, as we have already mentioned. Instead, it’s critical to identify low cohesion and aim for high cohesion so that software architecture can be changed to increase functional independence.

3.1.5 Types of Coupling and Cohesion


There are various methods for coupling modules together. The descriptions of several coupling types are given below:
1. Data coupling: The obligatory sharing of elementary data items across modules that are otherwise completely independent of one another is known as data coupling. This is the ideal pairing.
2. Stamp coupling: Data structures or records are used to transfer data in stamp coupling. Whole data structures are exchanged instead of elementary data items, which tends to increase system complexity. Therefore, data coupling is superior to stamp coupling.
3. Control coupling: Two modules are said to be control-coupled when one uses a signal or piece of data to direct the actions of the other. The receiver module is instructed on what actions to take by the control information. It is also possible for a subordinate module to transfer control information to a superordinate module; this arrangement is called inversion.
4. Common coupling: Two modules are considered common-coupled when they make reference to the same global data area. Many programming languages, like COBOL (where the data division is global to any paragraph in the procedure division) and FORTRAN (common blocks), support global data areas. In these situations, the degree of module interdependence increases significantly.
5. Content coupling: When two modules are content-coupled, one module directly references the internal operations of the other module. One module could modify a statement coded into another module or change data in a second module. Modules in content coupling are closely connected to one another; no semblance of independence exists. It is the worst kind of coupling. Fortunately, content coupling is not supported by the majority of high-level languages.
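Three of these coupling levels can be contrasted in a short Python sketch. The payroll functions below are hypothetical illustrations (the names and the gross-pay calculation are ours, not drawn from the text):

```python
# Data coupling (ideal): only the elementary items needed are passed.
def net_pay(gross, tax_rate):
    return gross * (1 - tax_rate)

# Stamp coupling: a whole record is passed although only two fields are used.
def net_pay_stamp(employee):
    return employee["gross"] * (1 - employee["tax_rate"])

# Common coupling (to avoid): modules communicate through a shared global
# data area, so a mistake in one writer can surface much later in a reader.
GLOBALS = {"gross": 0.0, "tax_rate": 0.0, "net": 0.0}

def load_employee(gross, tax_rate):
    GLOBALS["gross"] = gross
    GLOBALS["tax_rate"] = tax_rate

def compute_net():
    # Any module that updates GLOBALS incorrectly affects every reader.
    GLOBALS["net"] = GLOBALS["gross"] * (1 - GLOBALS["tax_rate"])
```

The first function is the easiest to test and reuse precisely because its interface names exactly the data it needs and nothing more.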


Each module’s instructions should be related to a particular function in a well-designed system. Cohesion is the degree to which a module’s instructions help carry out a single, cohesive task. An information system’s modules ought to work together. Cohesion comes in a variety of forms, as follows:
1. Functional cohesion: The best kind of module is one that is functionally cohesive, meaning that every instruction in the module relates to a single task or purpose. A module’s name, like “Calculate Pay,” “Select Vendor,” “Register Order,” and so on, could imply that it is functionally cohesive.
2. Sequential cohesion: If a piece of information moves from one instruction to the next to generate the intended result, the module is said to be sequentially cohesive. When there is sequential cohesion, the first instruction processes the data and the second instruction takes its output as input. The second instruction’s output then feeds into the third instruction and so forth. For sequential cohesion, the instructions must occur in a logical order that prevents forward or backward jumps.
3. Communicational cohesion: The order in which activities are completed is not as crucial for communicational cohesion as the fact that all activities must operate upon the same data. Each instruction either uses the same input data or is concerned with the same output data.
4. Logical cohesion: The tasks that need to be completed in a logically cohesive module are chosen from outside the module. Certain commonly used operations, like ADD, DELETE, SORT and so on, are organised into distinct modules that are utilised by other modules.
A well-designed system must strike the correct balance between minimum coupling
and strong cohesiveness.
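The contrast between functional and logical cohesion can be sketched in Python. The first function echoes the “Calculate Pay” naming above; the rest is a hypothetical illustration:

```python
# Functional cohesion: every instruction serves one task, and the name
# says exactly what the module does.
def calculate_pay(hours, rate):
    return hours * rate

# Logical cohesion: loosely related operations grouped behind an action
# flag chosen from outside the module -- harder to test and maintain,
# because every caller drags in code paths it never uses.
def utilities(action, data):
    if action == "SORT":
        return sorted(data)
    if action == "REVERSE":
        return list(reversed(data))
    raise ValueError("unknown action: " + action)
```

Splitting `utilities` into separate `sort` and `reverse` modules would move it up the cohesion spectrum.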

3.1.6 Structured Chart


One of the most often used techniques for system design is the structure chart. Architectural designers utilise structure charts to record system parameters, interconnections and hierarchical systems.
A structured chart is a visual tool used in structured design approaches to depict a software system’s hierarchical organisation. It shows how these parts interact and relate to one another by dissecting the system into its component parts. In order to show a clear hierarchy, the chart usually begins with the highest-level module, often referred to as the main module, at the top and branches out to lower-level modules.
These modules are connected by arrows or lines that show control flow and, on occasion, data flow. The chart’s modules each carry out a distinct, well-defined task, guaranteeing modularity, readability and ease of maintenance. Structured charts help developers and stakeholders better understand, communicate and plan by giving them an organised and thorough perspective of the system architecture. This improves the software’s quality and manageability in the long run.
A structure chart divides a system into “black boxes.” A “black box” is one in which the user is aware of the functionality without having to understand the internal design. Black boxes receive inputs and produce the necessary outputs on their own. Because details are kept hidden from those who have no need to know them, this idea decreases complexity. Systems are therefore simple to build and manage. The black boxes are placed hierarchically in this instance, as seen in Figures (a) and (b) below.

Figure: Hierarchical Format of a Structure Chart

Figure: Format of a Structure Chart

The lower-level modules are called by the upper-level modules. Lines connecting the rectangular boxes indicate the connections between modules. Usually, the components are read from left to right, top to bottom. A hierarchical numbering system is used to identify the modules. There is only one module at the top of any structure chart, referred to as the root.

Fundamental Components of a Structure Chart


A structure chart’s fundamental components are as follows:
1. Rectangular Boxes. A module is represented as a rectangular box. Annotating a rectangular box with the name of the module it represents is standard practice.
2. Arrows. When two modules are connected by an arrow, it means that control is transferred from one module to the other in the arrow’s direction during program execution.
3. Data-flow Arrows. Data-flow arrows indicate the direction in which the designated data flows from one module to the next.
4. Library Modules. Library modules are commonly depicted as double-edged rectangles. A module becomes a library module when it is called by numerous other modules.
5. Selection. One module out of the multiple modules attached to a diamond symbol is invoked when the condition written in the diamond symbol is satisfied.
6. Repetitions. When a loop appears around the control-flow arrows, it means that the corresponding modules are invoked repeatedly.
Structured charts are a key component of structured design approaches, offering an orderly and transparent means of illustrating the software system’s hierarchical structure. Using modules to represent the system’s relationships, structured charts facilitate the understanding, design and upkeep of complex software systems. They encourage modularity, foster better communication and raise the software’s general calibre and maintainability.

3.1.7 Quality of Good Software Design


Strong, effective and maintainable software systems are the result of a variety of principles and characteristics that make up good software design.

1. Modularity
A software system can be made more modular by breaking it up into separate, autonomous modules that each carry out a particular function. This feature makes software development, testing, understanding and maintenance easier.
The term “modularity” describes how easily a system’s parts may be taken apart and put back together. We’ll go into detail in this part on the importance of modularity for parallel development, reusability and maintainability. Additionally, methods like separation of concerns and encapsulation will be covered, with practical examples provided to support the ideas.
●● Maintenance ease: Modular updates and replacements can be made without impacting the system as a whole.
●● Reusability: Modules can be utilised again in other projects or in other sections of the system.
●● Parallel Development: To speed up development, several teams or developers might work on different modules at once.

Implementation Strategies:
™™ Encapsulation: Combining into a single module the data and the operations performed on the data.
™™ Separation of Concerns: Ensuring that distinct modules manage various parts of the system.
™™ Interface Design: Clearly defining the interfaces that allow modules to communicate with one another.
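A minimal sketch of the encapsulation strategy, assuming a hypothetical counter module: the data and the operations on it are bundled together, and callers use only the public interface.

```python
class Counter:
    """Encapsulation: state and the operations on it live in one unit."""

    def __init__(self):
        self._value = 0  # internal state, hidden behind the interface

    def increment(self, by=1):
        # The module protects its own invariants; callers cannot corrupt
        # the state directly, only through this checked operation.
        if by < 1:
            raise ValueError("increment must be positive")
        self._value += by

    def value(self):
        return self._value
```

Because the internal representation is hidden, it could later be changed (for example, to log every update) without touching any caller.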

2. Scalability
Software is said to be scalable if it can expand resources to handle an increase in demand without sacrificing its functionality. A system’s scalability refers to its capacity to accommodate growing loads. The significance of both vertical and horizontal scalability, as well as concepts like load balancing and caching, will be discussed in this section along with several scalable system examples.
●● Performance under Load: Verifies that the program can handle an increase in the number of users and data.
●● Cost-effectiveness: Makes effective use of extra resources to sustain performance standards.

Implementation Strategies:
™™ Load Balancing: Distributing workloads equitably among several servers.
™™ Caching: To lessen the strain on the primary database, frequently accessed material is kept in memory.
™™ Asynchronous Processing: Managing work in the background to increase responsiveness.
Examples: Social media networks that scale out their infrastructure to preserve performance during periods of high usage.
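The caching strategy can be sketched with Python’s standard `functools.lru_cache`. The profile-lookup function below is hypothetical, standing in for a slow database query:

```python
from functools import lru_cache

CALLS = {"count": 0}  # counts how often the "database" is actually hit

@lru_cache(maxsize=128)
def fetch_profile(user_id):
    """Hypothetical lookup; the body stands in for an expensive query."""
    CALLS["count"] += 1
    return "user-%d" % user_id
```

Repeated requests for the same `user_id` are served from memory, so the backing store is queried only once per distinct key.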

3. Maintainability
The ease with which a software system can be updated to fix bugs, enhance functionality, or adjust to a changing environment is known as maintainability. The readability and documentation of the code, as well as best practices like code reviews, refactoring and automated testing, will all be covered in this section. Case studies will be used to demonstrate how maintainability affects a project’s long-term success.
●● Lower Costs: Updating and fixing is easier, which cuts down on maintenance time and expense.
●● Longevity: Assures that the program will continue to be relevant and helpful throughout time.

Implementation Strategies:
™™ Code Readability: Producing comprehensible and transparent code.
™™ Documentation: Giving developers access to thorough documentation.
™™ Automated Testing: Using tests to make sure updates don’t bring in new problems.
Examples: Older systems that still work properly because of thorough test suites and well-documented code.
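The automated-testing strategy can be sketched with Python’s standard `unittest` module. The discount function is a hypothetical example; the test class is the safety net that keeps later modifications from silently breaking it:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical business rule guarded by the tests below."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Running the suite after every change turns “did I break anything?” from guesswork into a mechanical check.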

4. Performance
The term “performance” describes how well software reacts to user input and makes use of system resources. The efficiency with which a system functions is its performance. Performance measurements including throughput and latency, optimisation design ideas and performance testing tools and procedures will all be covered in this part.
●● User Satisfaction: Increased by software that is quick and responsive.
●● Resource Efficiency: Costs are minimised by making the best use of hardware resources.

Implementation Strategies:
™™ Efficient Algorithm Selection: Selecting algorithms that work well with the anticipated input size.
™™ Optimal Data Structure Selection: Choosing data structures that offer effective access and modification operations.
™™ Profiling and Optimisation: Locating performance bottlenecks and optimising the code.
Examples: Applications for real-time gaming that need minimal latency and high frame rates.
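The data-structure-selection strategy can be sketched in a few lines. The blocklist scenario is hypothetical; the point is that membership tests against a set average O(1), versus O(n) for a list:

```python
def build_blocklist(ids):
    # A set gives average O(1) membership tests; scanning a list of the
    # same ids would cost O(n) per lookup.
    return set(ids)

def is_blocked(user_id, blocklist):
    return user_id in blocklist
```

For a handful of ids the difference is invisible, but on a hot path with millions of lookups the choice of structure dominates the running time.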

5. Reliability
The likelihood that software will function flawlessly for a predetermined amount of time under predetermined circumstances is called reliability. This section will address design strategies for reliability, measurements such as MTBF and MTTR and real-world situations where dependability is essential.
●● Dependability: Users can count on the program to carry out the tasks for which it was designed.
●● Data Integrity: Guards against loss and corruption of data.

Implementation Strategies:
™™ Error Control: Putting strong error control systems in place.
™™ Redundancy: Having backup systems ready to take over in the event of an emergency.
™™ Comprehensive Testing: Comprising system, integration and unit tests.
Examples: Software for banks that guarantees data loss-free transaction processing.
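One common error-control tactic, sketched here as a hypothetical helper (not a specific library API), is to retry a transient operation a bounded number of times before giving up:

```python
import time

def with_retries(operation, attempts=3, delay=0.0):
    """Call `operation`; on a transient OSError, retry up to `attempts` times."""
    last_error = None
    for _ in range(attempts):
        try:
            return operation()
        except OSError as exc:       # retry only expected, transient faults
            last_error = exc
            time.sleep(delay)        # optional back-off between attempts
    raise last_error                 # give up: surface the final failure
```

Catching only the fault types known to be transient matters: retrying on every exception would mask genuine program bugs.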

6. Usability
The term “usability” describes how user-friendly and intuitive software is. Usability guarantees that the program is easy to use and meets user needs. The concepts of user-centered design, consistency, methods such as user testing and prototyping and illustrations of highly usable software will all be covered in this part.
●● User Adoption: Software that is easier to use and learn gets adopted by more users.
●● Productivity: Those who use intuitive interfaces can finish activities faster.

Implementation Strategies:
™™ User-Centered Design: Designing with the requirements and constraints of end users in mind.
™™ Consistency: Keeping UI components and actions consistent.
™™ Feedback Mechanisms: Giving users unambiguous feedback regarding their activities and the state of the system.
Examples: Mobile apps with responsive design that adjust to multiple screen sizes and are easy to navigate.

7. Security
Software must be protected from unauthorised access, and data integrity and privacy must be guaranteed. Keeping the system safe from harmful attacks is part of security. This part will cover techniques like secure coding practices and encryption, as well as case studies of security breaches and their resolutions. It will also cover concepts like least privilege and defence in depth.
●● Data Protection: Prevents unauthorised access to private information.
●● Compliance: Fulfils data security obligations set forth by regulations.

Implementation Strategies:
™™ Authentication and Authorisation: Making sure that the system is only accessible by those who are authorised.
™™ Encryption: Keeping data safe both in transit and at rest.
™™ Secure Coding: Adhering to recommended methods to steer clear of prevalent vulnerabilities such as SQL injection and cross-site scripting (XSS).
Examples: Websites for online sales that use secure transactions to safeguard client payment information.
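One secure-coding practice can be sketched with Python’s standard library: store a salted PBKDF2 hash of a password rather than the password itself, and compare digests in constant time. This is an illustrative sketch, not a complete credential-storage design:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest); the plaintext password is never stored."""
    if salt is None:
        salt = os.urandom(16)        # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare
```

The salt defeats precomputed rainbow-table attacks, and `hmac.compare_digest` avoids leaking information through comparison timing.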

8. Portability
The ease with which software can be moved from one environment to another is known as portability. This part will look at topics like platform independence, standards compliance and portability design strategies, with software examples.
●● Versatility: Software can run on a variety of hardware and operating systems.
●● Savings: Lessens the requirement to create distinct copies for various settings.

Implementation Strategies:
™™ Cross-Platform Compatibility: Writing code that can be compiled or interpreted on several systems.
™™ Standards Compliance: Using cross-platform libraries and abiding by industry standards.
Examples: Software created in Java that may be run using a Java Virtual Machine (JVM) on any operating system.

9. Reusability
The degree to which software components can be utilised in many systems is known as
reusability.

Amity University Online

118 Software Engineering and Modeling

Different reusability levels, techniques such as component-based architecture and
design patterns and case studies will all be covered in this part.

●● Efficiency: By reusing pre-existing components, development time is decreased.
●● Consistency: Ensures that the same components are used across all projects.

Implementation Strategies:
™™ Design patterns: Applying standard solutions to recurring problems.

™™ Component-Based Architecture: Creating independent parts that fit together
seamlessly to form various systems.

Examples: Frameworks and libraries that offer reusable parts and functionality for web
development.
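The idea of a reusable component can be sketched as a small, self-contained unit that several systems import unchanged; the validator and regular expression below are illustrative assumptions, not from the text:

```python
import re

# A self-contained, reusable component: no dependency on any one application.
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$")

def is_valid_email(address: str) -> bool:
    return bool(EMAIL_RE.match(address))

# Two different "systems" reuse the same component without modification.
print(is_valid_email("user@example.com"))   # registration system: True
print(is_valid_email("not-an-email"))       # bulk-import tool: False
```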

10. Flexibility

The ease with which software can be changed to satisfy new specifications or adjust
to shifting circumstances is known as flexibility. The balance between flexibility and
over-engineering, methods like dependency injection and loose coupling and practical
examples will all be covered in this part.
●● Flexibility: Without necessitating a total redesign, software can change to
accommodate new requirements.
●● Future-Proofing: Lessens the chance that the program will become outdated.

Implementation Strategies:
™™ Loose coupling: Reducing dependencies between modules so that one can be
changed without impacting the others.
™™ Design Patterns: Using patterns like Strategy and Observer, which promote
flexibility.

Examples: Plugin designs that let new features be added without changing the
main system.
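A minimal sketch of the Strategy pattern mentioned above, assuming a hypothetical checkout function: the pricing rule can change to meet a new requirement without redesigning the caller.

```python
from typing import Callable

def checkout(total: float, discount: Callable[[float], float]) -> float:
    # Depends only on the strategy's interface, not on any concrete rule.
    return round(discount(total), 2)

no_discount = lambda t: t
holiday_sale = lambda t: t * 0.9   # added later; checkout() is untouched

print(checkout(100.0, no_discount))    # 100.0
print(checkout(100.0, holiday_sale))   # 90.0
```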

11. Interoperability

The capacity of software to function with other systems is known as interoperability.


In this section, we’ll go over the significance of standards and protocols, strategies like
middleware and service-oriented architecture (SOA) and case studies that demonstrate
effective interoperability.

●● Functionality: Enhances functionality by facilitating integration with other systems.
●● Efficiency: Lowers the amount of work needed to link several systems.

Implementation Strategies:
™™ Standard Protocols: Making use of widely used data formats and communication
protocols.

™™ APIs: Providing application programming interfaces with thorough documentation
so that other systems can communicate with them.
Examples: Enterprise software that interfaces with third-party services through RESTful
APIs.
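The standard-protocols point can be sketched with JSON, a widely used data format: two otherwise unrelated systems can exchange the same record because both sides agree on the format (the field names here are illustrative):

```python
import json

order = {"id": 42, "status": "shipped"}

payload = json.dumps(order)      # producing system serialises to JSON text
received = json.loads(payload)   # consuming system, in any language, parses it

print(received["status"])        # shipped
```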

Features that improve a piece of software’s functionality, performance and
maintainability are indicative of good design. Important characteristics that affect
software quality as a whole include portability, reusability, flexibility, modularity, scalability,
maintainability, performance, dependability, usability, security and interoperability.
Developers can build software systems that are reliable, effective and flexible to changing
requirements by following these guidelines and using the right design techniques. This will
ensure the systems’ success and longevity in a rapidly evolving technological environment.

Summary

●● The software design process is a critical phase in the software development lifecycle
where the abstract requirements are transformed into a blueprint for constructing the
software system. This phase involves creating a detailed architecture and design that

lays out how the system will function and how the various components will interact.
The design objectives aim to ensure that the software meets user requirements, is
maintainable, efficient and scalable.

●● Design objectives are the goals that the software design process aims to achieve.
These objectives ensure that the software is not only functional but also robust,
maintainable and scalable. The software design process is a critical step in ensuring
that the final product meets user requirements, is robust and can be efficiently
maintained and scaled. By following a structured design process and focusing on key

design objectives, developers can create software systems that are both effective
and sustainable in the long term. This process involves careful planning, iterative
development and continuous validation to achieve a well-architected and reliable
software solution.
●● Structured design methodologies are systematic approaches used in software
development to create well-organised, efficient and maintainable systems. These
methodologies focus on breaking down a complex system into smaller, manageable

components, ensuring that the system’s architecture and design are robust and easy
to understand.
●● Structured design emphasises modularity, clarity and a hierarchical organisation

of software components, which aids in reducing complexity and enhancing


maintainability. Structured design methodologies provide a systematic approach to
software development that emphasises modularity, clarity and maintainability. These

methodologies help in managing the complexity of large systems by breaking them


down into smaller, manageable components. By following a structured approach,
developers can create robust, efficient and maintainable software systems that meet
user requirements and can be easily updated or expanded as needed.
●● In software design, coupling and cohesion are fundamental principles that help define

the structure and quality of software modules. They play a crucial role in determining
how software components are organised and interact with each other, impacting
maintainability, scalability and ease of development. In software engineering, a module

is a self-contained unit of code that encapsulates a specific piece of functionality.


Modules can be functions, classes, or components that can be developed, tested
and maintained independently. The design of these modules should aim to maximise
cohesion and minimise coupling to create a robust and maintainable system. Coupling
refers to the degree of interdependence between software modules. It measures

how closely connected different modules are and the extent to which changes in one
module affect others. Low coupling is desirable because it indicates that modules can
function independently and changes in one module are less likely to impact others.


●● Cohesion refers to the degree to which the elements within a single module belong
together. High cohesion is desired because it means the module is focused on a
single task or related set of tasks, making it easier to understand, maintain and reuse.

Understanding and applying the principles of coupling and cohesion are essential for
creating high-quality, maintainable software systems. By focusing on low coupling and

high cohesion, software engineers can design systems that are easier to understand,
maintain and evolve, leading to more robust and reliable software solutions.
●● A structured chart is a graphical representation of the system architecture that breaks

down the system into smaller, manageable modules and depicts the relationships
among them. It is a fundamental tool in structured design methodologies and helps
visualise the hierarchy and interaction of system components, promoting a better

understanding of the system’s structure and facilitating modular design. Structured
charts are a powerful tool in software engineering for visualising and managing the
design of complex systems.
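The coupling and cohesion points summarised above can be sketched in a few lines; the classes and tax rate below are invented purely for illustration:

```python
class TaxCalculator:
    """High cohesion: only tax arithmetic lives in this unit."""
    def __init__(self, rate: float = 0.2):
        self.rate = rate

    def tax(self, amount: float) -> float:
        return amount * self.rate

class Report:
    """Low coupling: depends only on the calculator's small interface,
    so either class can change or be replaced independently."""
    def __init__(self, calculator: TaxCalculator):
        self.calculator = calculator

    def line(self, amount: float) -> str:
        return f"net={amount} tax={self.calculator.tax(amount)}"

print(Report(TaxCalculator()).line(100))   # net=100 tax=20.0
```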

Glossary

●● FURPS: Functionality, Usability, Reliability, Performance and Supportability.
●● MTTF: Mean-Time-To-Failure
●● HCI: Human Computer Interaction
●● UML: Unified Modeling Language
●● CPI: Cost Performance Index
●● DFD: Data Flow Diagram
●● SADT: Structured Analysis and Design Technique
●● JSP: Jackson Structured Programming
●● SSADM: Structured Systems Analysis and Design Method

●● HIPO: Hierarchical Input Process Output

Check Your Understanding



1. Which of the following principles in software design aims to minimise the dependencies
between software modules?
a) Cohesion

b) Encapsulation
c) Coupling
d) Abstraction
2. In the context of software design, which type of cohesion is considered the most

desirable?
a) Logical Cohesion

b) Procedural Cohesion
c) Temporal Cohesion
d) Functional Cohesion
3. What does the principle of separation of concerns in software design advocate for?

a) Grouping related functionalities into a single module.


b) Dividing a system into distinct features that overlap as much as possible.
c) Avoiding redundancy by merging multiple responsibilities into one module.

d) Splitting a system into distinct sections that address different aspects or concerns.
4. In a structured design approach, which diagram is typically used to represent the
hierarchical relationship among software modules?

a) Entity-Relationship Diagram (ERD)

b) Use Case Diagram
c) Structured Chart
d) Sequence Diagram

5. Which software design pattern is most appropriate for managing multiple versions of an
object without the need to create many subclasses?
a) Singleton Pattern

b) Observer Pattern
c) Factory Pattern
d) Strategy Pattern

Exercise
1. What are the goals of the software design process? Explain briefly.
2. Explain methodologies for structured design.
3. What is the difference between coupling and cohesion?
4. Explain the structured chart in detail.
5. What are the qualities of good software design?

Learning Activities
1. Your team is developing a real-time chat application for a social networking platform.

Describe the software design process you would use to design the chat application’s
architecture and functionality.
2. Assume you are tasked with designing a registration system for an online platform.

Describe the steps you would take in the software design process to create an efficient
and user-friendly registration system. Discuss how you would gather requirements,
define system components, create interfaces and design algorithms to handle user
registration, validation and authentication.

Check Your Understanding - Answers


1. c) 2. d) 3. d) 4. c) 5. d)

Further Readings and Bibliography


1. Design Patterns: Elements of Reusable Object-Oriented Software by Erich

Gamma, Richard Helm, Ralph Johnson and John Vlissides (The Gang of Four).
2. Clean Architecture: A Craftsman’s Guide to Software Structure and Design by
Robert C. Martin.
3. Software Architecture in Practice by Len Bass, Paul Clements and Rick Kazman.

4. Head First Design Patterns: A Brain-Friendly Guide by Eric Freeman and Elisabeth
Robson.
5. Domain-Driven Design: Tackling Complexity in the Heart of Software by Eric Evans.


Module - IV: Software Testing


Learning Objectives

At the end of this module, you will be able to:

●● Define software testing overview
●● Analyse levels of testing

●● Understand characteristics of testing
●● Define Black-Box and White-Box Testing
●● Understand alpha, beta and gamma testing

Introduction
A wide number of approaches and procedures are included in software testing, which

is intended to guarantee the effectiveness, functionality and quality of software products.
These testing kinds are divided into a number of groups, each of which has a distinct
function and stage in the software development lifecycle. The main goal of unit testing is to
check each individual part or code unit for accuracy.

In order to find problems with their interfaces and data flow, integration testing looks at
how integrated units interact with one another. System testing verifies that the entire system
operates as intended in a comprehensive, integrated context by assessing its compliance
with the given standards. Before a system is deployed, acceptance testing, which includes
user acceptance testing (UAT), verifies that it satisfies the needs and expectations of real
users by comparing it to user demands and business requirements.
While non-functional testing looks at things like performance, usability and security to

make sure the program works well in a variety of scenarios, functional testing evaluates
particular functionalities and features against predetermined criteria. Regression testing
makes ensuring that updates or improvements don’t break current features or add new ones.

Sanity testing and smoke testing are short tests to verify recent modifications and
fundamental processes, respectively. In exploratory testing, testers use their instincts and
experience to explore the program without following pre-written scripts in an attempt to find
unforeseen problems. When combined, these many kinds of software testing guarantee

that software products are dependable, strong and up to the highest standards of quality
before they are delivered to end customers.

4.1 Introduction to Software Testing



Software testing is an essential stage in the software development lifecycle that


guarantees a software product’s reliability, quality and usefulness. It entails methodically
testing software by running it under controlled circumstances in order to find any flaws,

problems, or difficulties that might affect user experience or performance. This procedure
aids in confirming that the program satisfies requirements and operates as anticipated
under diverse conditions. Software testing lowers development costs, minimises risks and
improves user happiness by spotting and fixing issues early on. Different software testing
techniques are used to investigate various parts of the program, ranging from individual

components to the system as a whole. These techniques include unit testing, integration
testing, system testing and acceptance testing. In addition to enhancing the product’s
quality, efficient software testing guarantees that it is ready for deployment and use in real-
world settings.
4.1.1 Software Testing Overview
Testing is a collection of procedures that can be organised ahead of time and carried
out methodically. For this reason, the development process should provide a template for

software testing, which is a series of steps into which we may insert particular test-case
creation methodologies and testing methods.
Numerous approaches to software testing have been put forth in the literature. They all
have the following common traits and offer a template for testing to the software developer:
●●

●●
A software team should do efficient formal technical reviews before beginning any
testing. This way, a lot of problems will be fixed before testing starts.
Testing proceeds outward from the component level to the integration of the complete
computer-based system.
●● At different times, different testing methods are applicable.
●● The software developers and (in the case of larger projects) an independent test group
carry out the testing.


●● Although debugging and testing are two independent tasks, every testing strategy
must take debugging into account.

Software testing strategies need to support both high-level tests that confirm key
system functionalities against customer requirements and low-level tests that are required
to confirm that a small part of the source code has been implemented successfully. A
strategy needs to give the manager a set of benchmarks to work towards and direction

for the practitioner. The test strategy’s steps take place when deadline pressure is starting
to build, therefore progress needs to be quantifiable and issues need to arise as soon as
possible. One definition of software testing is:
●● The procedure for dissecting software in order to find errors or discrepancies between

the necessary and actual circumstances (i.e., bugs) and to assess the software’s
characteristics.
●● Testing is the process of running a program with the goal of identifying faults. Analysis
is the examination of a program with the goal of detecting flaws.


●● To prove to the client and developer that the program satisfies their needs. This
means that for custom software, each need in the requirements document should
have a minimum of one test. This indicates that tests for every system feature, as

well as combinations of these features, that will be included in the product release are
necessary for generic software products.
●● To identify circumstances where the program behaves in an improper, undesired, or
non-conforming manner. These result from flaws in the program. Finding and fixing

unwanted system behaviour, such as crashes, undesired interactions with other


systems, inaccurate computations and data corruption, is the focus of defect testing.
The first objective sets up validation testing, in which you utilise a predetermined

collection of test cases that represent the system’s intended use to expect the system to
function successfully. Defect testing, which is the result of the second purpose, uses test

cases that are intended to reveal flaws. Defect testing test cases don’t have to replicate
how the system is typically used; they can be purposefully cryptic. Naturally, there isn’t a
clear distinction between these two testing methodologies. You will discover system flaws

during validation testing and some tests conducted during defect testing will demonstrate
that the program satisfies its requirements.

Testing Principles

Software testing is guided by a number of ideas. A software engineer must
comprehend the fundamental ideas that underpin software testing before implementing
techniques to create efficient test cases. The primary tenets of testing are as follows:

1. Every test ought to be linked to the needs of the client. This is to find any flaws that could
lead to the system or program not meeting the needs of the client.
2. Plans for tests should be made long before the tests start. Testing can start as soon as
the requirements model is finished. As soon as the design model is completed, detailed

3. Software testing is subject to the Pareto principle. To put it simply, the Pareto principle
suggests that 20 percent of all program components are likely to be responsible for
80 percent of all faults found during testing. Of course, separating these questionable
components and giving them a rigorous examination is the issue.
80 percent of all faults found during testing. Of course, separating these questionable
components and giving them a rigorous examination is the issue.
4. Testing ought to start in the small and work its way up to in the large. Typically, the
initial planned and conducted tests concentrate on individual parts. The focus of testing
ni

changes as it goes along in an effort to identify faults in integrated clusters of components


and eventually in the system as a whole.
U

5. Testing in its whole is not feasible. For a program of even a moderate scale, the number
of path permutations is remarkably huge. It is therefore difficult to test every potential
combination of pathways. Nonetheless, it is feasible to guarantee that all conditions in
the component-level design have been met and that program logic has been sufficiently
ity

covered.
6. Independent third-party testing is recommended for optimal efficacy. It is not advisable to
have the software engineer who designed the system perform all of the software’s tests.
The distinctions between defect and validation testing may be better understood by
m

referring to the diagram in the figure below. Consider the system under test as a black box.
The system creates outputs in an output set O after receiving inputs from an input set I.
A portion of the results will be inaccurate. These are the set Oe outputs that the system
)A

produces in reaction to the set Ie inputs. Defect testing prioritises identifying inputs within
the set Ie, as these indicate systemic issues. Validation testing entails utilising accurate
inputs that are not Internet Explorer-based. These encourage the system to produce the
anticipated, accurate outputs.
(c

Testing is not able to show that the software is error-free or that it will operate as
intended under all conditions. There is always a chance that a test you missed could find
more issues with the system. Edsger Dijkstra, a pioneer in the field of software engineering,
has said succinctly that testing cannot show the absence of errors; it can only show their
presence.
Testing is a step in the larger software validation and verification process (V & V).
Despite common confusion, validation and verification are not the same thing.
Notes

e
in
O
Figure: An input-output model of program testing

Software engineering pioneer Barry Boehm summed up the distinction between them
this way:
●● ‘Validation: Are we building the right product?’
●● ‘Verification: Are we building the product right?’

Processes for verification and validation look to see if the program being produced
satisfies its specifications and provides the capabilities that users who are paying for
the product expect. These checks take place at every level of the development process,
ve
beginning as soon as the requirements are ready.
Verification’s goal is to confirm that the program satisfies its declared functional and
non-functional requirements. On the other hand, validation is a broader procedure. Verifying

that the program satisfies the customer’s expectations is the goal of validation. It proves
that the program performs as the customer wants it to, going beyond merely verifying
compliance with the specification. Because requirements specifications don’t always
accurately represent the desires or demands of system users and customers, validation is
U

crucial.
Assuring that the software system is fit for purpose is the ultimate goal of the
verification and validation processes. Thus, the system needs to be adequate for the

purpose for which it is designed. The goal of the system, user expectations and the
system’s present marketing environment all influence the necessary degree of confidence.
1. Software’s objective: The more crucial the software, the more crucial its reliability. In
contrast to a prototype built to showcase novel product concepts, software utilised to
m

operate a safety-critical system requires a far higher degree of confidence.


2. User requirements: Many customers have low expectations for software quality since they
have dealt with unstable, faulty software in the past. When their software malfunctions,

they are not shocked. Users may put up with failures when a new system is implemented
because the advantages of use outweigh the expenses of failure recovery. You might not
need to spend as much time evaluating the software in these circumstances. However,
customers want software to grow more trustworthy as it matures, thus more extensive
testing of subsequent versions would be necessary.

3. Environment for marketing: The price that clients are ready to pay, the deadline for
system delivery and rival products are all factors that system vendors must consider
while marketing their product. In order to get into the market early in a competitive
setting, a software corporation may choose to release a program before it has undergone

extensive testing and debugging. When a software product is inexpensive, users can be
ready to put up with less reliability.
Software inspections and reviews may be a part of the verification and validation

e
process in addition to software testing. The system requirements, design models, program
source code and even suggested system tests are all analysed and verified throughout

in
inspections and reviews.
These are what are known as static V & V procedures, meaning that you can verify
them without running the program. Software testing and inspections help V & V at several

software process stages, as the figure below illustrates. The phases of the procedure when
the techniques may be applied are shown by the arrows.

O
Figure: Inspections and testing

Though any legible representation of the software, including its requirements or a


ve
design model, can be examined, system source code is typically the focus of inspections.
When you examine a system, you look for flaws using your understanding of the system,
the application domain and the programming or modelling language.
Three benefits of software inspection over testing are as follows:

1. Faults in testing might conceal or mask other faults. You can never be certain if successive
output abnormalities are the result of a new error or are a byproduct of the initial problem
when an error produces unexpected outputs. Interactions between mistakes are not
U

a concern because inspection is a static procedure. As a result, numerous faults in a


system can be found during a single inspection session.
2. It is not expensive to inspect incomplete versions of a system. To test the pieces that are
ity

available, you must create specialised test harnesses if a program is unfinished. Clearly,
this raises the price of system development.
3. An inspection can look for more general program quality features including maintainability,
portability and conformity with standards in addition to looking for program flaws. You
m

can search for inefficiencies, improper algorithms and sloppy programming that could
make updating and maintaining the system challenging.
Program inspections are not as new as they may seem, since numerous research and
)A

experiments have shown that they are a more efficient way to find defects than program
testing. According to Fagan, informal program inspections can identify more than 60%
of program flaws. It is stated that program inspections can find over 90% of flaws in the
Cleanroom procedure.
Inspections, however, cannot take the role of software testing. Inspections are not
(c

useful for finding flaws that result from timing issues, unanticipated interactions between
various program components, or issues with system performance.
Additionally, since all possible team members could also be software engineers, it
can be challenging and costly to assemble a distinct inspection team, particularly in small
businesses or development teams. An alternative is automated static analysis, in which
irregularities are found by automatically examining a program’s source text.
An abstract representation of the traditional testing procedure used in plan-driven
Notes

e
development is shown in the figure below. In addition to a description of the test being
conducted, test cases include requirements for the inputs to be used in the test as well as

in
the expected output from the system (the test results).
The inputs designed to test a system are called test data. While test data can
occasionally be generated automatically, it is hard to generate test cases automatically

nl
since the intended test outcomes need to be specified by humans who know what the
system is supposed to perform. Test execution, though, is automatable. It is not necessary
for someone to manually search the test run for mistakes and abnormalities because the

O
expected and projected outcomes are automatically compared.

ity
Figure: A model of the software testing process

A commercial software system typically needs to pass three testing phases:
1. Testing the system as it is being developed to find errors and faults is known as
development testing. Programmers and system designers most typically participate in
the testing phase.
the testing phase.
2. Release testing entails having a different testing team test the entire system before
ni

making it available to users. Verifying that the system satisfies the needs of system
stakeholders is the goal of release testing.
3. User testing is the process of having current or potential users test a system in their own
U

setting. When it comes to software, the user could be an internal marketing team that
determines whether the product can be sold, distributed and advertised. One kind of
user testing is acceptance testing, in which a system is formally tested by the customer
to determine whether it should be accepted from the system supplier or whether more
ity

work needs to be done.


In actuality, a combination of automated and manual testing is typically used in the
testing process. When testing manually, a tester runs the program using test data and
assesses how well the outcomes match their expectations. Discrepancies are noted and
m

reported to the program creators.


When a system under development needs to be tested, an automated testing program
)A

with the tests embedded in it is executed. Regression testing, which involves rerunning
earlier tests to ensure that modifications to the program have not introduced new flaws, is
typically faster than manual testing.
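A hedged sketch of an automated regression run as just described: previously passing cases are re-executed, and actual output is compared with expected output automatically (the tax function and figures are invented for the example).

```python
def tax(amount):
    return round(amount * 0.2, 2)   # function under test

# Each earlier test is kept as (input, expected output).
REGRESSION_SUITE = [(100.0, 20.0), (0.0, 0.0), (50.0, 10.0)]

def run_suite():
    # Report every case whose actual result no longer matches expectations.
    return [(inp, exp, tax(inp)) for inp, exp in REGRESSION_SUITE
            if tax(inp) != exp]

print(run_suite())   # an empty list means the change introduced no regression
```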
Over the past few years, automated testing has become increasingly common.
(c

However, since automated tests can only verify that a program performs as intended,
testing can never be fully automated. Automated testing is nearly impossible to utilise for
testing programs that have unintended side effects or systems that rely on appearances
(such as graphical user interfaces).




4.1.2 Introduction
The goal of software testing, a critical stage in the software development life cycle, is to
find flaws, mistakes and inconsistencies in software systems prior to their implementation.

e
It includes a range of testing kinds, including automated, non-functional, functional and
manual tests using methods including regression, equivalency splitting and boundary value

in
analysis.
Tests are carefully planned, designed, carried out and reported on in order to
guarantee the operation, quality and dependability of the software. Software testing is

nl
essential to validating software systems through the use of techniques and technologies.
This improves the software systems’ performance, usability, security and scalability and
ultimately helps them satisfy stakeholder and end-user expectations.

O
4.1.3 Level of Testing
All testing done by the team developing the system is referred to as development

ity
testing. Though this isn’t always the case, the programmer who created the software
is typically the one testing it. Programmer/tester pairs are used in some development
processes, where a tester is assigned to each programmer to help with test development
and execution. A more formal method with a separate testing group within the development
team may be employed for essential systems. They are in charge of creating tests and

rs
keeping thorough records of the test findings.
Testing is a crucial stage in software engineering that guarantees the reliability, quality
and functionality of software systems. To find and fix problems at different phases of the
ve
development process, testing is done at different levels. Every testing level has a distinct
function and covers a certain feature of the program, ranging from standalone components
to the entire integrated system. This thorough analysis explores the various testing tiers,
emphasising their significance, approaches and influence on software quality.
ni

Three granularity levels of testing can be performed during development:


1. Unit testing is the process of testing individual object classes or program units. Testing
U

an object’s or method’s functionality should be the main goal of unit testing.


2. Integration testing is the process of creating composite components by integrating many
distinct pieces. The primary goal of component testing need to be component interface
ity

testing.
3. System testing is the process of testing a system once some or all of its components
have been integrated. Interactions between components should be the main emphasis
of system testing.

Finding software flaws is the main goal of the defect testing procedure that is called
development testing. As a result, it is typically combined with debugging, which is the
process of identifying coding errors and modifying the program to correct them.

Unit Testing
Testing individual program elements, like methods or object classes, is known as unit
testing. The simplest kinds of components are single functions or methods. These procedures
should be called in your tests with various input parameters. Individual components are

checked to make sure they function properly during unit testing. The verification effort is the
main focus. Every component of the software design is tested separately, apart from other
system components, on the smallest possible unit.
Unit testing is preferable to comprehensive product testing for the following reasons:
Amity University Online
●● Because a single module is small enough, finding an error is not too difficult.
●● We may try to test the module in a way that is clearly comprehensive because it is
small enough.

●● Multiple faults that cause confusion in vastly disparate areas of the software are
removed.

It is best to plan your tests so that they cover every characteristic of the object when
testing object classes. This implies that you ought to:

●● Test every action connected to the item;
●● set and verify the value of every attribute connected to the object; and
●● place the object in every state conceivable. This implies that any scenario that results

in a state change should be simulated.
Think about the weather station, for instance. This object’s interface is displayed in the
figure below. Its identifier is its only attribute. When the weather station is set up, this is
set as a constant.
Thus, all you need to do is run a test to see if everything has been configured
correctly. For each method connected to the object, such as reportWeather, reportStatus,
etc., test cases must be defined. Although it is ideal to test procedures separately, there
are situations where test sequences are required. For instance, you must have used the

restart method before testing the shutdown method, which turns off the weather station’s
equipment.

Figure: The weather station object interface



Object class testing becomes more difficult with generalisation or inheritance. An


operation cannot be tested in the class in which it is defined and then depended upon to
function as intended in subclasses that inherit it. The inherited operation may make
assumptions about attributes and other operations, and certain subclasses that inherit the
operation might not satisfy these assumptions. As a result, you need to test the inherited procedure in every

scenario in which it appears.


State transition sequences that need to be tested can be identified and event
sequences can be defined to trigger these transitions. While testing every potential state

transition sequence should be done in theory, in practice this may be too costly. The
following are some instances of state sequences that ought to be examined in the weather
station:
Shutdown → Running → Shutdown
Configuring → Running → Testing → Transmitting → Running
Running → Collecting → Running → Summarising → Transmitting → Running
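To make the idea concrete, the following sketch drives a small, hypothetical `WeatherStation` model through one of these sequences. The class, its methods and its states are assumptions invented for this illustration, not the textbook's code.

```python
# Hypothetical WeatherStation model used to illustrate state-sequence testing.
class WeatherStation:
    def __init__(self, identifier):
        self.identifier = identifier   # the station's single attribute
        self.state = "Shutdown"

    def restart(self):
        self.state = "Running"

    def collect(self):
        assert self.state == "Running"
        self.state = "Collecting"

    def finish_collecting(self):
        self.state = "Running"

    def shutdown(self):
        # shutdown is only legal from the Running state, so a test of
        # shutdown() must first drive the station through restart()
        assert self.state == "Running"
        self.state = "Shutdown"

def test_shutdown_sequence():
    ws = WeatherStation("WS-01")
    ws.restart()                  # Shutdown -> Running
    ws.shutdown()                 # Running -> Shutdown
    assert ws.state == "Shutdown"

test_shutdown_sequence()
```

Each event that causes a state change becomes a method call in the test, so every sequence listed above maps to one such test function.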


Unit testing should be automated whenever possible. You write and execute your
program tests using an automated test automation framework (like JUnit) in automated
unit testing. Generic test classes are provided by unit testing frameworks, which you can

enhance to generate customised test cases. After that, they can execute every test you’ve
put in place and provide a report on its success or failure, frequently using a graphical user
interface. It is feasible to run the whole test suite in a matter of seconds, meaning that you

can do so any time you make changes to the program.
Three components make up an automated test:

1. A setup phase in which the inputs and anticipated outputs from the test case are used to
initialise the system.

2. A call section, in which the object or test function is called.
3. A section of the assertion where the predicted and call result are compared. The test has
been successful if the claim evaluates to true; if not, it has failed.
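These three parts can be seen in a minimal example using Python's `unittest` framework (the text mentions JUnit; `unittest` plays the same role in Python). The `dew_point_summary` function is an invented component, used only to show the setup, call and assertion sections.

```python
import unittest

# Invented component under test, purely for illustration.
def dew_point_summary(readings):
    """Return (min, max) of a list of temperature readings."""
    return (min(readings), max(readings))

class DewPointSummaryTest(unittest.TestCase):
    def setUp(self):
        # 1. Setup phase: initialise the inputs and the anticipated output.
        self.readings = [12.0, 9.5, 17.2]
        self.expected = (9.5, 17.2)

    def test_summary(self):
        # 2. Call section: invoke the function under test.
        result = dew_point_summary(self.readings)
        # 3. Assertion section: compare the predicted and actual results.
        self.assertEqual(result, self.expected)

# Run the suite programmatically and report success or failure.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DewPointSummaryTest)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
assert outcome.wasSuccessful()
```

A framework like this can execute every such test case and report the results, which is what makes re-running the whole suite after each change practical.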

Occasionally, an object under test has dependencies on other objects that may not
have been written or that, if utilised, slow down the testing process. If your object calls a
database, for instance, it might need to go through a lengthy setup process before it can

be utilised. You might choose to utilise mock objects in these situations. Mock objects are
objects that mimic the functionality of external objects by having an interface similar to it. As
a result, a mock object that mimics a database might only contain a small number of array-
based data elements.

As a result, they can be promptly accessed without incurring the costs associated with
accessing discs and calling databases. Mock objects can also be used to mimic unusual
behaviour or infrequent occurrences. For instance, your mock object can simply return
those times, regardless of the actual clock time, if your system is programmed to execute at
specific times of the day.
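A sketch of this idea using Python's `unittest.mock`: the `Transmitter` class and its clock interface are assumptions made for illustration. The mock clock always reports the same hour, so the time-dependent behaviour can be tested instantly and deterministically.

```python
from unittest import mock

# Illustrative component that acts only at a given hour of the day.
class Transmitter:
    def __init__(self, clock):
        self.clock = clock          # the real implementation would read the system time

    def should_transmit(self):
        return self.clock.current_hour() == 6   # transmit at 06:00 only

# The mock clock stands in for the real one and always reports 06:00,
# so the rare condition can be exercised without waiting for it.
fixed_clock = mock.Mock()
fixed_clock.current_hour.return_value = 6

t = Transmitter(fixed_clock)
assert t.should_transmit()
```

The same pattern replaces a slow database with a mock returning canned data, so tests avoid the cost of real disc and database access.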

Choosing Unit Test Cases



Selecting efficient unit test cases is crucial as testing is costly and time-consuming. In
this context, effectiveness refers to two things:
1. The test cases ought to demonstrate that the component you are testing performs as

intended when utilised in accordance with expectations.


2. Test cases are supposed to disclose any flaws in the component.
Thus, you ought to create two different types of test cases. The first of them ought to

demonstrate how the component functions and represent how a program would normally
run. Your test case should demonstrate that the record exists in a database and that its
fields have been set as stated, for instance, if you are testing a component that creates and
initialises a new patient record.

The other type of test case should be based on scenarios where problems commonly
arise. It should use abnormal inputs to check that these are processed correctly and do
not cause the component to fail.

Here, we go over two viable approaches that may be useful to you in selecting test
cases. These are
1. Partition testing is the process of identifying sets of inputs that share traits and ought to
be handled similarly. Tests from each of these categories should be your selections.

2. Guideline-based testing, in which test cases are selected based on testing


recommendations. These recommendations are based on past knowledge of the kinds
of mistakes that programmers frequently make when creating components.


Figure: Equivalence partitioning

A program’s input data and output results are frequently categorised into several
groups having similar traits. Menu choices, negative numbers and positive numbers are a
few examples of these classes. Typically, programs behave in a similar manner for all
members of a class. In other words, you would anticipate that a program that performs a computation
and needs two positive integers will act in the same manner for all positive numbers when
you tested it.
These classes are sometimes referred to as equivalence partitions or domains due to
their equivalent behaviour. The identification of all input and output partitions for a system or
component is the foundation of one methodical test case design methodology. The inputs
and outputs of test cases are arranged to fall inside these divisions. Test cases can be

created using partition testing for both components and systems.


The set of all potential inputs for the program under test is represented by the huge
shaded ellipse on the left in the accompanying figure. Equivalency partitions are shown by

the smaller, unshaded ellipses. Every participant in an input equivalency partition should
be processed by the program under test in the same manner. Partitions known as output
equivalency partitions are those in which every output has a similar attribute.
The equivalence partitions for the input and output can occasionally be mapped 1:1.

This isn’t always the case, though; in certain situations, you might need to construct a
different input equivalency partition in which the inputs’ only shared trait is that they produce
outputs that fall into the same output partition. In the left ellipse, the invalid inputs are
indicated by the shaded area. Possible exceptions are shown by the darkened area in the

right ellipse (i.e., reactions to incorrect inputs).


Test cases are selected from each of the partitions that you have identified.
Selecting test cases along the partition boundaries as well as those at the partition’s

midpoint is a solid general rule of thumb. This is because, while creating a system,
programmers and designers frequently take normal input values into account. Selecting the
partition’s midway allows you to test them. Developers may ignore boundary values since
they are frequently anomalous (zero, for example, may behave differently from other non-
negative numbers). Processing these unusual data frequently results in program failures.

In order to identify partitions, you can use the user manual or program specification, as
well as your expertise predicting which input value classes are most likely to include faults.
Let’s take an example where a program specification specifies that four to eight five-digit


integers larger than 10,000 can be fed into the program. To determine the input partitions
and potential test input values, use this information. These can be seen in the figure below.
Figure: Equivalence partitions
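The partitions for this specification can be sketched directly. The helper below (its name is my own, not from the textbook) accepts 4 to 8 values, each a five-digit integer greater than 10,000, and the candidate test values sit at the midpoints and boundaries of each partition, following the rule of thumb described above.

```python
# Partition-based test selection for the stated specification:
# 4 to 8 input values, each a five-digit integer greater than 10,000.
def is_valid_input(values):
    return (4 <= len(values) <= 8 and
            all(10_000 < v <= 99_999 for v in values))

# Partitions for the number of inputs: fewer than 4 / 4..8 / more than 8.
count_cases = [3, 4, 6, 8, 9]

# Partitions for each value: too small / valid five-digit / too large.
value_cases = [10_000, 10_001, 50_000, 99_999, 100_000]

assert not is_valid_input([50_000] * 3)                  # too few inputs
assert is_valid_input([50_000] * 4)                      # lower boundary of valid count
assert is_valid_input([10_001, 99_999, 50_000, 60_000])  # value boundaries and midpoint
assert not is_valid_input([10_000] * 5)                  # 10,000 itself is invalid
assert not is_valid_input([100_000] * 5)                 # six digits is invalid
```

Note how the chosen values deliberately straddle each boundary (3/4 and 8/9 inputs; 10,000/10,001 and 99,999/100,000), where programmer errors are most likely.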

Black-box testing refers to the process of determining equivalency partitioning using a
system’s specification. You don’t need to understand how the system operates in this case.
Nevertheless, white-box testing, in which you examine the program’s code to uncover other
test possibilities, might be a useful addition to the black-box tests.
For instance, you might have exceptions in your code to deal with bad inputs. With
this information, you can determine exception partitions, or several ranges where the same

treatment of exceptions ought to be used.


Because it helps account for mistakes that programmers frequently make while
processing inputs at the borders of partitions, equivalency partitioning is a useful testing

strategy. Test case selection might also be aided by consulting testing recommendations.
The knowledge of what kinds of test cases work best for finding faults is embodied in
guidelines. For instance, the following recommendations may assist in identifying errors

when testing programs that use lists, arrays, or sequences:


1. Use sequences with a single value to test software. Sequence construction is a natural
thought process for programmers and they occasionally incorporate this notion into their
code. Consequently, a program might not function correctly if it is given a single-value

sequence.
2. For each test, use a different sequence with a different size. This lessens the possibility
that a defective program will unintentionally generate a correct output due to peculiarities

in the input.
3. Create tests that provide for access to the sequence’s beginning, middle and end
elements. This method highlights issues at the boundaries of partitions.
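The three guidelines can be applied to a hypothetical function that returns the largest element of a sequence; the function and its tests are invented for this sketch.

```python
# Function under test: returns the largest element of a non-empty sequence.
def largest(seq):
    result = seq[0]
    for x in seq[1:]:
        if x > result:
            result = x
    return result

# Guideline 1: a single-value sequence.
assert largest([7]) == 7

# Guideline 2: sequences of different sizes in different tests.
assert largest([3, 1]) == 3
assert largest([4, 9, 2, 9, 5]) == 9

# Guideline 3: the answer at the first, middle and last positions.
assert largest([8, 1, 2]) == 8     # first element
assert largest([1, 8, 2]) == 8     # middle element
assert largest([1, 2, 8]) == 8     # last element
```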

Integration Testing

Integration testing is the term for the second stage of testing. Integration testing is a
methodical approach to building the program’s structure while simultaneously testing to find
interface-related mistakes. Subsystems comprised of numerous unit-tested modules are

assembled and tested in this testing process. Checking if the components can be correctly
merged is the aim here.
Testing module interfaces to make sure there are no problems in parameter passing

when one module calls another is the main goal of integration testing. A system’s many
modules are systematically integrated using an integration plan during integration testing.

The sequence and steps by which the modules are assembled to create the entire system
are laid out in the integration plan. The partially integrated system is tested following each
integration phase.

Approaches to Integration Testing
The methods for integration testing include:

1. Incremental Approach
2. Top-down Integration

3. Bottom-up Integration
4. Regression Testing
5. Smoke Testing
6. Sandwich Integration

Incremental Approach. The incremental approach calls for assembling and testing just
two components at a time. If there are errors, fix them; if not, add another component, test it
again and so on until the system is completed.
The program is built and tested in tiny chunks during incremental integration, making it
simpler to identify and fix problems.

Figure: Incremental Approach



As shown in the preceding figure, tests T1, T2 and T3 are initially conducted on a
system made up of modules A and B (test sequence 1). If these pass, module C is
integrated (test sequence 2) and tests T1, T2 and T3 are repeated; if an issue now
occurs in these tests, it probably stems from interactions with the newly integrated module.
Because the issue is localised, finding and fixing defects is made easier. Ultimately,
test sequence 3 is tested utilising both new (T6) and old (T1 to T5) tests as module D is
merged.

Top-Down Integration Testing. Building program structures incrementally is known as


top-down integration testing. Starting with the main control module, modules are integrated
by descending via the control hierarchy.

Figure: Top-Down Integration Testing

Considering above Figure:
1. Depth-first integration: By doing this, every part on a significant structural control
path would be integrated. M1, M2 and M5, for instance, would be merged first. M8 or

M6 integration would come next. Next, the control routes to the right and centre are
constructed.
2. Breadth-first integration: This includes any element that is directly subordinate at every
level and travels horizontally across the structure. Components M2, M3 and M4 from the
preceding figure would be integrated first. The following control levels are M5, M6 and
so forth.

There are five steps involved in the integration process:


1. All components that report directly to the main-control module are replaced with stubs
and the main-control module serves as a test driver.
2. Subordinate stubs are swapped out for real components one at a time, based on the

integration strategy chosen (i.e., depth or breadth first).


3. As each part is integrated, tests are run.
4. Upon the conclusion of every test set, another stub is replaced with the real
component.
5. It is possible to perform regression testing to make sure that no new faults have been
introduced.
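The first of these steps can be sketched as follows; the component names are invented for illustration. The main-control module is exercised by a driver test while stubs stand in for its subordinates, which are later replaced one at a time by the real components.

```python
# Stubs standing in for subordinate components that do not exist yet.
def authenticate_stub(user):
    # Returns a fixed answer instead of calling the real auth component.
    return True

def load_profile_stub(user):
    # Canned data in place of the real database-backed component.
    return {"user": user, "role": "guest"}

def main_control(user, authenticate, load_profile):
    """Main control module; subordinates are injected so stubs can replace them."""
    if not authenticate(user):
        return None
    return load_profile(user)

# Driver test: exercises the control logic before the real subordinates exist.
profile = main_control("alice", authenticate_stub, load_profile_stub)
assert profile == {"user": "alice", "role": "guest"}
```

As integration proceeds, each stub argument is swapped for the real component and the same driver tests are re-run, which is exactly the regression check described in step 5.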

Bottom-Up Integration Testing: As the name suggests, bottom-up integration testing


starts with the components that are the lowest in the program hierarchy and works its way
up.
The following actions can be taken to implement a bottom-up integration strategy:

●● Low-level parts are assembled into builds, or clusters, that carry out particular software
subtasks.

●● A driver (a control program for testing) is written to coordinate test case input and
output.
●● A test is run on the cluster.
●● Clusters are consolidated and drivers are eliminated as one moves up the program
structure.

Regression Testing. The process of ensuring that modifications—whether brought


about by testing or for other reasons—do not result in unexpected behaviour or new
mistakes is known as regression testing.

Three distinct classes of test cases make up the regression test suite:
●● Additional tests that focus on the software functions most likely to be affected by the
change.
●● A representative sample of tests that exercises all software functions.
●● Tests that focus on the software components that have been changed.

Smoke Testing. One popular integration testing technique used in the development
of shrink-wrapped software products is smoke testing. Because smoke testing involves
rebuilding the software with new components and testing, it is referred to as a rolling

integration technique. The following activities are included in smoke testing:
●● Code-translated software components are combined to form a build. Every data file,
library, reusable module and engineering component needed to carry out one or more

product functions is included in a build.
●● A battery of tests is intended to reveal bugs that prevent the build from operating as
intended.

●● The product (as it stands) is smoke tested every day and the build is integrated with
other builds.


Applying smoke testing to intricate software engineering projects has several


advantages:

●● Integration risk is reduced to a minimum.


●● The final result is of higher quality.

●● The diagnosis and rectification of errors are made simpler.


●● It is simpler to evaluate progress.
Sandwich Integration Testing. Sandwich integration testing combines the bottom-up
and top-down methodologies. Thus, another name for it is mixed integration testing. Like a

sandwich, the entire system is split into three layers: the target layer in the middle, with
one layer above it and one below it.

The layer above the target uses the top-down technique, while the layer below the
target uses the bottom-up approach. The intermediate layer’s testing coverage combines

the benefits of the top-down and bottom-up approaches. It is selected based on the
component hierarchy’s structure and system attributes. Both drivers and stubs are
needed. The integration sequence is difficult to organise and manage, work parallelism is
medium at the beginning and the approach has a medium capacity to test particular paths.

System Testing
The system is composed of integrated subsystems. Errors arising from unexpected
interactions between subsystems and system components are the focus of the testing

process. Verifying that the system satisfies its functional and non-functional requirements is
another of its concerns.

When testing a system during development, components are combined to form
a version of the system, which is then tested. System testing ensures that parts work
together, communicate with one another properly and exchange the appropriate data

across their interfaces at the appropriate moment. Although there are clear similarities with
component testing, there are two key distinctions:
●● Reusable, independently designed components and off-the-shelf systems might be

combined with newly developed components during system testing. Next, a system-
wide test is conducted.
●● At this point, components created by several organisations or team members may be

combined. System testing is a collaborative process as opposed to an individual one.
System testing may be carried out by a different testing team in some businesses, one
that is not composed of programmers or designers.
Emergent behaviour results from the integration of constituent parts to form a system.

This implies that certain aspects of the system’s functionality are only apparent when the
parts are assembled. Testing is necessary to determine whether this is planned emergent
behaviour. For instance, you could combine an information-updating component with an
authentication component. Then, a system feature limits authorised users’ access to update
information. Occasionally, though, the emerging behaviour is unwelcome and unanticipated.
It is necessary to create tests that verify the system is only doing its intended functions.
As a result, the main goal of system testing should be to examine how different objects

and components interact with one another. In order to make sure that reusable parts or
systems function as intended when combined with new parts, you can also test them. The
goal of this interaction testing is to find any issues in the component that are only visible

when it interacts with other parts of the system. Determining misinterpretations of other
system components by component developers is another benefit of interaction testing.
Use case-based testing is a useful method for system testing since it emphasises
interactions. Every use case is usually implemented by multiple objects or components

within the system. These exchanges must take place in order to test the use case. You can
see the items or parts that are engaged in the interaction if you have created a sequence
diagram to represent the use case implementation.
Fundamentally, there are three primary categories of system testing:

●● Alpha testing
●● Beta testing
●● Acceptance testing
Alpha Testing. The term alpha testing describes the system testing that the
development organisation’s test team does. Under the direction of the project team, the
customer conducts the alpha test at the developer’s location. Users test the program on the
development platform during this test, pointing out bugs that need to be fixed.

However, the alpha test’s capacity to identify and fix faults is constrained because it
is only conducted by a small number of users on the development platform. Alpha testing
takes place in a regulated setting. It mimics how things are used in real life. The software

product is prepared to be moved to the customer site for deployment and installation after
the alpha test is over.
Beta Testing. Beta testing is the process of having a small number of friendly
customers test the software. The software is not immediately put into use if the system is
complicated. After installation, all users are requested to use the program in testing mode;

in
live usage is not permitted.
We refer to this as the beta test. Beta testing takes place on the customer’s premises,
where a large number of users are exposed to the program. The software developer

nl
could be present or absent throughout its usage. Thus, beta testing is an actual software
experience that isn’t put into use. End users document their observations, blunders, errors
and so forth in this test and report them on a regular basis.

A user may propose an alteration, a significant change, or a departure during a beta
test. For a seamless transition from freshly developed software to amended, improved
software, the development team must review the suggested change and incorporate it into the

change management system. It is customary to include all such modifications in updates to
the software.
Acceptance Testing. The system testing that the client does to decide whether to
accept or reject the system delivery is known as acceptance testing. A series of acceptance

tests are carried out when custom software is developed for a single customer, allowing
them to verify all requirements. An acceptance test, which is carried out by the end-user
instead of the software engineers, can be anything from a casual test drive to a carefully
thought out and carried out set of tests. Acceptance testing can really take place over a
few weeks or months, allowing for the discovery of accumulating flaws that could eventually
cause the system to malfunction.

Recovery Testing

Many computer-based systems have deadlines for recovering from errors and starting
up again. A system may occasionally need to be fault tolerant, meaning that errors in
processing shouldn’t stop the system from working as a whole. In other situations, there is

a deadline for fixing a system failure or there would be significant financial harm. Recovery
testing is a type of system test that checks that recovery is carried out correctly by forcing
the software to fail in various ways.

Checkpointing methods, restart, data recovery and reinitialisation are all checked for


accuracy if recovery is automatic (carried out by the system itself). In the event that human
involvement is necessary for recovery, the mean time-to-repair (MTTR) is assessed to see if
it falls within acceptable bounds.

Security Testing
Any computer-based system that controls sensitive data or initiates actions that could
wrongfully hurt (or profit) people is a target for improper or unlawful intrusion. The term

penetration refers to a wide range of actors, including hackers who break in for sport,
disgruntled employees and dishonest individuals trying to get unauthorised access to
networks for personal gain.
Verifying that a system’s defences against unwanted intrusion will, in fact, keep it safe
is the goal of security testing. According to Beizer: The system’s security must, of course,

be tested for invulnerability from frontal attack—but must also be tested for invulnerability
from flank or rear attack.
The tester assumes the role(s) of the person who wants to break into the system during
security testing. Everything is acceptable! The tester could try to obtain passwords through

outside clerical means; use custom software to attack the system and undermine any
defences put in place; overload the system and prevent others from using it; intentionally
create system errors in the hopes of breaking through during recovery; or comb through

insecure data in the hopes of discovering the key to system entry.
A system will eventually be breached by competent security testing if it is given

sufficient time and resources. The system designer’s job is to make penetration more
expensive than the information that can be gleaned from it.

Stress Testing

In the earlier steps of the testing process, white-box and black-box techniques produced a
detailed examination of typical program functions and performance. Stress

tests are intended to expose programs to unusual circumstances. When doing stress
testing, the tester essentially asks, How high can we crank this up before it fails?
Stress testing involves running a system in a way that requires unusually high levels
of volume, frequency, or quantity of resources. Examples of such tests include: (1) creating

special tests that produce ten interrupts per second when the average rate is one or two; (2)
increasing input data rates by an order of magnitude to see how input functions react; (3)
executing test cases that require the maximum amount of memory or other resources; (4)
creating test cases that may cause thrashing in a virtual operating system; and (5) creating

test cases that may cause excessive hunting for disk-resident data. In essence, the tester
tries to get the program to malfunction.
Sensitivity testing is a method that is a variant of stress testing. Extreme and
even incorrect processing or significant performance deterioration can result from a
very tiny range of data falling within the parameters of valid data for a program in some
circumstances (mathematical algorithms are the most typical examples of these scenarios).
Sensitivity testing looks for combinations of data within valid input classes that might lead to

processing errors or instability.

Performance Testing

Software that fulfils the necessary function but does not meet performance
requirements is unacceptable for real-time and embedded systems. The purpose of
performance testing is to evaluate a program’s runtime performance in the context of an
integrated system. Every stage of the testing process includes performance testing. As

white-box tests are carried out, the performance of a single module may be evaluated even
at the unit level. But the real performance of a system cannot be determined until all of its
components are properly integrated.
Stress testing and performance testing are frequently combined and both hardware

and software instrumentation are typically needed. That is to say, precise measurements
of resource utilisation (such as processor cycles) are frequently required. External
instrumentation has the capability to regularly sample machine states, log events (such as

interrupts) as they happen and monitor execution intervals. A tester can find conditions that
contribute to system degradation and potential failure by instrumenting the system.

Continuous Integration and Continuous Delivery (CI/CD)


Continuous Integration and Continuous Delivery (CI/CD) involves the right principles,

techniques, practices, and technology to empower the DevOps team to develop and test
the software in an automated and repeatable manner.
CI/CD testing is one of the phases in this process that takes care of quality assurance
and perfect delivery of the software with zero errors. The goal of CI/CD testing is to promote
the update or release only when it passes the functional requirements testing, remains
stable, and is free of defects. Typically, it is an automated process that is fully integrated
with the CI/CD pipeline.

Where Does CI/CD Testing Fit into the CI/CD Process?

It’s much easier to address and rectify an issue soon after it’s introduced into the
codebase. Early detection prevents additional code from being built upon a shaky
foundation, ultimately saving time and effort. Additionally, addressing problems promptly

keeps your development team in context, ensuring efficient problem-solving.
Automated build testing tools seamlessly integrate with CI/CD tools, allowing you
to feed test data into the pipeline and conduct testing in progressive stages. At each step,

you receive results that inform your next move. Depending on your chosen CI tool, you can
decide whether to advance a build to the next stage based on the outcome of previous tests.
To maximize the benefits of your CI/CD pipeline, it’s advisable to organize your build
tests in an order that prioritizes the fastest ones to run first. This approach provides swift

feedback and optimizes the utilization of test environments. You can ensure that initial tests
have passed before moving on to longer and more intricate ones.
When strategizing the creation and execution of automated tests, it’s valuable to
conceptualize the process in terms of the testing pyramid, which emphasizes a balanced

mix of unit, integration, and end-to-end tests. This well-structured approach to CI/CD testing
ensures comprehensive coverage while maintaining efficiency and effectiveness throughout
the software development lifecycle.
A typical CI/CD flow may look like this:

Figure: CI/CD Process

1. Developing the requested feature within a sprint or within a committed deadline to the

customer
2. Pushing and merging code into the existing codebase
3. Initial source code quality analysis by unit test case

4. Building and executing the integrated code through automated test cases
5. Generate the deployable code version in lower environments
6. Report the status in the lower environments and fix the defects

7. Pushing code to higher environments and getting customer feedback


8. Production release of the software
CI/CD testing is performed through a well-defined and automated pipeline that
integrates various testing stages into the software development and deployment process.

Here’s how CI/CD testing is typically done:


●● Code Commit (CI): Developers commit code changes to a version control system
(e.g., Git). This triggers the CI pipeline to initiate.

Amity University Online


140 Software Engineering and Modeling

●● Automated Builds: The CI/CD server automatically pulls the latest code changes and
builds the application. This ensures that the code is compilable and ready for further
testing.
●● Unit Testing (CI): Automated unit tests are executed to verify the functionality of
individual code components. Any failures at this stage are quickly reported to
developers.
●● Integration Testing (CI): After successful unit tests, integration tests are performed
to check how different parts of the application work together. This verifies that code
changes have not introduced integration issues.
●● Automated Deployment (CD): In Continuous Delivery, the tested code is deployed to a
staging environment that closely resembles the production environment.
●● Monitoring and Reporting: Throughout the CI/CD process, monitoring tools collect data
on application performance and test results. Reports and alerts are generated to notify
teams of any issues.
●● Deployment to Production (CD): If all tests pass and stakeholders approve, the code
changes are deployed to the production environment.
●● Post-Deployment Monitoring (CD): After deployment, continuous monitoring ensures
that the application performs as expected in the production environment.
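As a rough illustration of the staged pipeline above (a sketch, not the configuration syntax of any real CI tool), the gating behaviour can be modelled as an ordered list of stages where a failure halts everything downstream. The stage names are assumptions chosen to mirror the bullets.

```python
# Minimal model of a gated CI/CD pipeline: stages run in order and a failing
# stage stops the pipeline before later stages (including production) can run.

STAGES = ["commit", "build", "unit_tests", "integration_tests",
          "deploy_staging", "deploy_production"]

def run_cicd(results):
    """results maps stage name -> True/False; returns the stages that ran."""
    executed = []
    for stage in STAGES:
        executed.append(stage)
        if not results.get(stage, True):
            break  # halt the pipeline: downstream stages never execute
    return executed

# Example: an integration-test failure stops the flow before any deployment.
print(run_cicd({"integration_tests": False}))
```

The design choice shown here is the essential CI/CD property from the text: production deployment is reachable only by passing through every earlier verification stage.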

4.1.4 Characteristics of Testing


We would anticipate that all manufacturers of goods and services would be concerned
about quality. But software has special needs owing to its distinctive characteristics, namely its
intangibility and complexity.
1. Increasing Criticality of Software. Naturally, the end user or client is concerned about the
overall quality of the software, particularly its reliability. This is becoming more and more

of an issue as businesses rely more on their computer systems and software is utilised
in more and more safety-critical applications, like controlling aeroplanes.
2. The Intangibility of Software. This makes it challenging to determine whether a specific

project task has been satisfactorily accomplished. By requiring the developers to provide
deliverables that can be checked for quality, the jobs’ outcomes can be made more
concrete.
3. Accumulating Errors During Software Development. The process of developing a

computer system entails several steps, each of which is the input for the next. As a
result, errors in the initial deliverables will compound and have an adverse effect on
subsequent steps. Generally speaking, the later an error is discovered in a project, the
costlier it will be to correct. Furthermore, it might be particularly challenging to manage

the debugging phases of a project because it is uncertain how many mistakes there are
in the system.
In order to guarantee the quality, reliability and performance of software products,

software testing is a crucial component of the software development lifecycle. Robust


testing finds bugs, confirms that the program satisfies criteria and guarantees that it is
appropriate for its intended use. The attributes of testing comprise a range of concepts,
procedures and objectives that together characterise its extent and significance.

1. Comprehensiveness
Thorough testing entails assessing every facet of the software to guarantee that it
operates accurately in a range of scenarios. This feature is essential for finding bugs that
may not be noticeable in the early stages of testing.
●● Functional Testing: Verifies that every software function performs in accordance with
the requirements specified.
●● Non-Functional Testing: Assesses characteristics such as security, usability,
performance and reliability.
In order to cover as much of the application as possible, including edge cases
and potential failure areas, comprehensive testing employs a variety of test cases and
scenarios. This methodical approach aids in finding any potential flaws and guarantees that
the program functions properly under a variety of circumstances.
2. Systematic Approach
Testing is done in a methodical manner that includes organising, creating, carrying out
and documenting test operations. Testing is made sure to be organised and ordered by this
meticulous approach.
●● Specifying the resources, methodology, timetable and extent of the testing operations
is known as test planning.
●● Test design is the process of writing test scripts and cases in accordance with the
requirements and design guidelines.
●● Executing the tests and recording the outcomes is known as test execution.
●● Test reporting: Providing an overview of the results, highlighting any flaws found, their
seriousness and suggested fixes.
A methodical approach guarantees comprehensive coverage and aids in handling the
complexity of testing large software systems.

3. Objective and Unbiased


To guarantee accurate and dependable results, testing has to be impartial and

unbiased. This quality is essential to the validity of the testing procedure and results.
●● Independent Testing Teams: enlisting testers from outside the development team to
offer a dispassionate viewpoint.

●● Clear Criteria: To guarantee objectivity, precise acceptance criteria and test cases
based on requirements must be established.
When testing, objectivity helps find true defects instead of problems that are missed

because of prejudice or familiarity.

4. Traceability
The capacity to connect test cases to the specifications and requirements that they are
intended to verify is known as traceability.

●● A document that links test cases to user requirements is called a requirements


traceability matrix, or RTM.

●● Test Coverage: Ensuring that test cases address every need.


Traceability makes ensuring that every requirement is checked and that any
modifications to the requirements are represented in the test cases that correspond to those
changes.
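A requirements traceability matrix of this kind can be sketched as a simple mapping from requirement IDs to the test cases that verify them; scanning it reveals coverage gaps. The requirement and test-case IDs below are hypothetical.

```python
# Hypothetical requirements traceability matrix (RTM): each requirement ID maps
# to the test cases that verify it. An empty list marks an untested requirement.

rtm = {
    "REQ-001": ["TC-01", "TC-02"],   # e.g., a login requirement
    "REQ-002": ["TC-03"],            # e.g., a password-reset requirement
    "REQ-003": [],                   # no test case yet -> coverage gap
}

def uncovered_requirements(matrix):
    """Return the requirement IDs that no test case traces back to."""
    return [req for req, cases in matrix.items() if not cases]

print(uncovered_requirements(rtm))  # -> ['REQ-003']
```

When a requirement changes, the same mapping identifies exactly which test cases must be revisited, which is the second benefit of traceability noted above.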

5. Reusability
In testing, reusability refers to the creation of test cases and scripts that are
transferable to other projects or test cycles.


●● Creating reusable automated scripts for repetitive activities is known as automated test
scripting.
●● Creating test cases that are easily modified or reused in many contexts is known as
modular test case design.
Reusability increases productivity and saves time and effort when testing new versions
or projects in the future.

6. Repeatability
Repeatability guarantees that tests may be carried out repeatedly with identical
outcomes, which is essential for confirming fixes and guaranteeing stability.
●● Automated Testing: To guarantee consistent execution, use automated tests.
●● Maintaining thorough records of test cases, scripts and execution procedures is known
as test documentation.
Reliability in regression testing and software integrity preservation across successive
versions are made possible by repeatability.

7. Early Detection
Early fault detection in the software development lifecycle is the goal of effective

testing. Reducing the expense and labour needed to remedy faults requires early discovery.
●● Integrating testing early in the development process, such as during the requirements
and design phases, is known as shift-left testing.
●● Continuous Testing: Integrating testing into the pipeline for continuous integration and
deployment (CI/CD).
Early identification increases overall project efficiency by addressing problems before

they become more complicated and expensive to fix.

8. Scalability

The capacity to test software effectively as it develops in size and complexity is referred
to as scalability in testing.
●● Developing automation frameworks that can manage higher loads and more

complicated test scenarios is known as scalable test automation.


●● Performance testing: Verifying the program’s functionality under varied load scenarios.
Scalability guarantees that the testing procedure can adjust to the software’s
expanding requirements, preserving quality as the program changes.

9. Adaptability
The testing process’s adaptability is its capacity to adjust to modifications in the

specifications, the design, or the implementation.


●● Agile Testing: Using agile approaches that enable ongoing testing and flexibility in
response to evolving needs.
●● Flexible Test Plans: Drafting test strategies that are easily adaptable to new

information.
Testing must be flexible in order to be applicable and efficient in ever-changing
development contexts.

10. Efficiency
In testing, efficiency is defined as using the least amount of resources to achieve
maximum coverage and defect identification.
●● Risk-Based Testing: Concentrating testing resources on the most dangerous regions.
●● Test Automation: Making use of automation to speed up test execution and minimise
manual labour.
Effective time and resource management is made possible by efficient testing,
which guarantees that the testing process contributes substantial value without needless
overhead.

11. Thoroughness
Being thorough guarantees that every facet of the software, including potential weak
points and edge cases, is thoroughly tested.
●● Testing the borders between partitions is known as boundary value analysis.
●● Equivalency partitioning involves testing representative values after dividing input data
into equivalent partitions.
Thoroughness in testing ensures that the software is strong and dependable and aids
in detecting hidden issues.

12. Transparency
Clear communication and documentation of the testing procedure, conclusions and
outcomes are essential components of testing transparency.
●● Detailed Test Reports: delivering thorough test results reports that include coverage,
flaws and execution status.

●● Effective Communication: Making sure that everyone involved is aware of the testing
process and any problems that may arise.
Transparency gives stakeholders a clear picture of the software’s quality state and aids

in fostering confidence.

13. Verification and Validation


Verification (confirming that the program satisfies criteria) and validation (confirming

that the program satisfies user needs and expectations) are both included in testing.
●● Verification: Examining how closely the program adheres to its requirements.
●● Validation: Making sure the program serves end users’ intended purposes.

Together, verification and validation make sure that the right software was built (validation)
and that it was built correctly (verification).
A robust testing process is defined by all of the features of software testing, which

include completeness, transparency, objectivity, traceability, reusability, repeatability, early


detection, scalability, adaptability, efficiency, thoroughness and verification and validation.
These features guarantee that testing finds bugs as well as confirms that the program
satisfies all criteria and operates dependably under actual circumstances. Software
engineering teams may produce dependable software products that fulfil customer

expectations and withstand heavy usage by following these principles.


Software testing is the process of assessing and confirming that an application
functions in accordance with user expectations. Software testing’s primary goal is to
find issues and notify the development team so that they can be fixed. Various testing

methodologies are employed based on the user’s and application’s requirements. Software
testing models include the Spiral model, Agile, SDLC and many more. Software testing
comes in two flavours: automated software testing and manual software testing.
Customer Satisfaction
Serving clients with the greatest features and experiences is the main objective of any
service-based business; an application’s popularity and success are dependent exclusively
on its ability to satisfy its users.
●● By guaranteeing a flawless application, software testing contributes to the
development of consumer satisfaction and trust.
●● User interface testing raises client satisfaction. Software testing looks for any potential
flaws and tests an application in accordance with user specifications.
●● eCommerce, for instance, depends on its customers and happy customers increase
the company’s market worth and profits.
Cost Effective
An application’s owner will save a significant amount of money if it operates flawlessly
and requires little maintenance.
●● Software testing aids in the early identification of defects and their correction,
providing a better application and a more effective return on investment.
●● Every application needs upkeep and the owner of the application must invest a
significant sum of money to keep it operating and functional.
●● The maintenance area saves money by reducing too many folds through application
testing.
●● When software flaws are found early on in the development process, it is easier and

less expensive for the developer to rethink the module rather than having to find bugs
after the product has been fully developed. This helps the developing organisation
save money.

Quality Product
Software testing’s primary goal is to provide clients with a high-quality product that will
satisfy them. A product can only remain high-quality if it is free of errors and satisfies all

user needs.
●● Compatibility testing is required if an application functions well with other related apps
in order for it to meet product standards.
●● Software testing involves a variety of testing methodologies to provide a high-quality

final result. The testing team makes every effort to validate defect-free applications by
creating test cases and test scenarios.

Low Failure

An application’s functionality and brand value are affected when it fails. Software
testing makes it possible to identify the situations in which a given program is most likely to
fail.
●● For instance, high traffic and load increase the likelihood of an eCommerce application

failing.
●● Applications of this kind are subjected to stress testing in order to determine their
maximum capacity to manage the load.

●● Software testing increases the stability and dependability of an application.

Bug-Free Application
Software testing’s primary responsibility is to find defects and notify the relevant
development team so they can be fixed. Testers recheck a corrected problem to determine
its current status.
●● An application that functions properly and without any hiccups, errors, or defects is
said to be bug-free. An application’s sole requirement is that it performs as intended
and exhibits no abnormal behaviour brought on by flaws.
●● Although it is practically impossible to have a 100% bug-free application, the testing
team tries to find as many flaws as they can by developing test cases. To find a bug,
software testing uses the STLC procedure.

Security
Security is the primary concern of the digital world. A secure system is always at the top
of customers’ priority lists. Owners must spend a lot of money to protect their systems from
theft, harmful attacks and hackers. Application security is assessed using security testing
techniques and testers look for ways to compromise an application’s security.
●● Because bank applications handle customer funds and are constant targets for theft,
security is the most important requirement in this context.
●● The development team works to add several levels of protection to the program, while
the testing team uses security testing to find flaws.
●● Software testing is therefore essential to preserving an application’s security.

Easy Recovery
Recovery is the process by which an application, after failing, resumes its normal

operation in a shorter amount of time. When an application recovers fast and resumes its
regular operations, it is considered successful.
●● Software testing aids in determining an application’s recovery rate and the amount of

time it takes to recover overall.


●● Testers run an application to find case scenarios when it is most likely to crash and to
measure how long it takes for it to restart.

●● Testers provide feedback to the development team, which then modifies internal code
to enable quick application recovery.

Speed Up the Development Process



Only when an application is developed quickly can it be supplied early. Software testing
allows the development team to speed up their development process by discovering errors
and reporting them. The observed problem can be quickly rectified in the early stages of
system development, without affecting the functioning of other capabilities.

●● The software testing team coordinates with the development team to make developers
aware of defects.
●● The development process of an application is improved when bugs are found and
fixed concurrently with system development, as this eliminates the need for the

development team to wait for bug finding and fixing.


●● Developers must design and verify the functioning of all linked modules if a module
has a defect that impacts the functionality of other associated modules.


Early Defect Detection


Defect detection is the main goal of software testing. Early defect discovery is made
easier and the development team is assisted in fixing errors if the software testing, or
quality assurance, team operates in tandem with the software development team from the
start. If testing begins after the software has been fully developed, the developer will need
to rethink all linked modules in order to address any flaws in any one module.

Reliable Product
A product may only be considered dependable if it fulfils user expectations and fosters
client confidence.
●● Application reliability is increased through software testing, which tests the program
using performance, security and other testing methods.
●● Software testing, which can be either comprehensive or exhaustive, verifies that
a program operates in accordance with user requirements and looks for as many
potential flaws as feasible.

4.2 Types of Software Testing


The many techniques and procedures used to assess the functionality, quality and
performance of software systems and applications are referred to as types of software
testing. Every form of testing has a distinct function in the software development lifecycle,
from component-level unit testing to system testing for end-to-end validation and from
performance testing to guarantee scalability and responsiveness to security testing to
protect against vulnerabilities. Other kinds, such as usability, compatibility, acceptability,
regression and localisation testing, concentrate on making sure the program satisfies user
needs, is stable in a variety of environments and is applicable to a range of platforms and
geographical areas. These testing techniques work together to produce dependable, safe

and user-friendly software solutions of the highest calibre.

4.2.1 Black-Box and White-Box Testing



Black-box Testing
Functional testing does not take the program’s structure into account. The criteria or

specifications of the program or module are used to determine the test cases; the internal
workings of the program or module are not taken into account. Functional testing is defined
as testing that does not attempt to analyse the code that generates the result; instead, it
simply observes the output for specific input values. The program’s internal organisation
is disregarded. Because of this, functional testing is sometimes referred to as black-box

testing (also known as behavioural testing), in which the inputs and outputs of a black-box
are fully understood but its content is unknown.
)A
(c

Behavioural testing, another name for black-box testing, concentrates on the software’s
functional requirements. In other words, a software engineer can determine sets of input
conditions through black-box testing that will completely test all of a program’s functional
requirements. White-box methodologies are not superseded by black-box testing. Instead,
it’s a supplementary strategy that will probably reveal a distinct category of mistakes than
white-box techniques.
In black-box testing, problems fall into five categories:
1) behaviour or performance errors;
2) interface errors;
3) data structure or external database access errors;
4) wrong or missing functions; and
5) initialisation and termination faults.
Blackbox testing is typically used in later phases of testing, as opposed to white-box
testing, which is carried out early in the process. Black-box testing concentrates on the
information domain since it intentionally ignores control structure. The following questions
are intended to be addressed by tests:
●● How is the test for functional validity conducted?
●● How is the performance and behaviour of the system tested?
●● Which input classes will yield useful test cases?
●● Does a particular input value cause the system to behave differently?
●● How are a data class’s borders isolated?
●● What volumes and rates of data can the system handle?
●● What impact will particular data combinations have on how the system operates?

Through the application of black-box techniques, we derive a set of test cases that
meet the following requirements:
1) test cases that decrease the number of additional test cases that need to be

designed by a count greater than one in order to achieve reasonable testing; and
2) test cases that provide information about the presence or absence of error classes
as opposed to errors related to the particular test at hand.

Categories of Black-box Testing


Functional testing can be divided into two groups:
1. Positive Functional Testing: This type of testing comprises using legitimate input to

exercise the application’s functions and confirming that the outputs are accurate. Keeping
with the word processing example, printing a document with both text and images to a
printer that is online, loaded with paper and has the appropriate drivers installed could
be a good way to test the printing function.

2. Negative Functional Testing: In this type of testing, application functionality is put to


the test through a variety of out-of-bounds scenarios, unexpected operating states and
incorrect inputs.
a. In keeping with the word processing example, unplugging the printer from the

computer while a document is printing could serve as a negative test for the
printing feature.
b. In this case, the customer should likely receive an error notice in simple English
explaining what went wrong and providing instructions on how to fix it.

c. Alternatively, the word processing program may just crash or hang up as a result of
improper handling of the abnormal loss of printer communications.
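The word-processing example above can be turned into a pair of illustrative tests, one positive and one negative. The `print_document` function and its error message are assumptions invented for this sketch, not part of any real word processor's API.

```python
# A hypothetical printing function used to illustrate positive vs. negative
# functional testing. Real printing code would talk to a driver; this stand-in
# only models the two states the tests need.

def print_document(text, printer_online):
    if not printer_online:
        # The negative path should fail gracefully with a plain-English message,
        # not crash or hang.
        raise RuntimeError("Printer is offline: reconnect it and retry.")
    return f"printed {len(text)} characters"

def test_print_positive():
    # Positive test: legitimate input, printer available -> correct output.
    assert print_document("hello", printer_online=True) == "printed 5 characters"

def test_print_negative():
    # Negative test: abnormal operating state -> expect a clear, handled error.
    try:
        print_document("hello", printer_online=False)
        assert False, "expected an error for an offline printer"
    except RuntimeError as err:
        assert "offline" in str(err)

test_print_positive()
test_print_negative()
print("both tests passed")
```

Note how the negative test asserts not just that a failure occurs, but that it is reported in an understandable way, which is the acceptance criterion the example above describes.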
Advantages of Black-box Testing
This kind of testing has the following benefits:
●● Because the tester and the designer are separate entities, the test is objective.
●● The tester is not required to be knowledgeable about any particular programming
language.
●● The user, not the designer, is the point of view from which the test is conducted.
●● Once the specs are finished, test cases can be created.
Graph-Based Testing Methods
Understanding the objects that are modelled in software and the connections between
them is the first step in black-box testing. After completing this, the following action is
to design a set of tests to confirm that all objects have the expected relationship to one
another. To put it another way, software testing starts with the creation of a graph that
includes significant objects and their relationships. Next, a set of tests is designed to cover
the graph, ensuring that every object and relationship is tested and flaws are found.
A graph comprising nodes that represent objects, links that represent the relationships
between objects, node weights that describe a node’s properties (such as a particular data
value or state behaviour) and link weights that describe some aspect of a link is first created
by the software engineer in order to carry out these steps.
A graph’s symbolic representation is displayed in the figure below. Circles with various
types of links connecting them are called nodes. A relationship that moves solely in one
direction is indicated by a directed link, which is symbolised by an arrow. A symmetric

link, often known as a bidirectional link, suggests that the relationship is applied in both
directions. When several distinct relationships are established between graph nodes,
parallel linkages are employed.

Figure: (A) Graph notation (B) Simple example

As a simple example, consider a portion of a graph for a word-processing application
(above Figure B) where
Object #1 = new file menu select
Object #2 = document window

A menu select on new file creates a document window (see figure). A list of the window
properties that should be anticipated when the window is produced may be found in the
document window’s node weight. The window needs to be formed in less than 1.0 seconds,
according to the link weight.
A parallel connection denotes a relationship between the document window and the
document text, while an undirected link creates a symmetric relationship between the
new file menu select and the document text. In actuality, test case design would require
the generation of a significantly more complete graph first. Next, by going over each
relationship in the graph and creating test cases, the software engineer creates test cases.
The purpose of these test cases is to look for mistakes in any of the relationships.
Beizer lists several graph-based behavioural testing techniques that include:
Transaction flow modeling. The links show the logical relationship between steps (e.g.,
flight.information.input is followed by validation/availability.processing) and the nodes
indicate steps in some transaction (e.g., the steps required to make an airline reservation
using an online service). This kind of graph can be made with the help of the data flow
diagram.
Finite state modeling. The links represent the transitions that take place to move
from one state to another, such as when order information is verified during an inventory
availability look-up and is followed by the input of customer billing information. The nodes

represent various user observable states of the software, such as each of the screens that
appear as an order entry clerk takes a phone order. This kind of graph can be created with
the help of the state transition diagram.

Data flow modeling. The links are the conversions that take place to change one data
object into another and the nodes are the data objects. For example, the relationship FTW
= 0.62 GW is used to compute the node FICA.tax.withheld (FTW) from gross.wages (GW).

Timing modeling. Program items are the nodes and the sequential connections
between those objects are the links. The necessary execution times during program
execution are specified using link weights.
The first step in graph-based testing is to define each node and its weight. In other

words, characteristics and items are identified. Although many nodes may be program
objects (not explicitly represented in the data model), the data model can be used as a
starting point. It is helpful to define entry and exit nodes because they show where the
graph starts and stops.

After the identification of nodes, linkages and link weights need to be determined.
Although links that indicate control flow between program objects do not require names,
links should generally have names.
Loops, or paths through the graph where one or more nodes are encountered

repeatedly, are a common feature of graph models. Applying loop testing at the behavioural
(black-box) level is another option. The graph will help determine which loops require
testing.


To create test cases, each relationship is examined independently. To find out how
relationships affect different objects defined in a graph, the transitivity of sequential
relationships is examined. To demonstrate transitivity, let’s look at three objects: X, Y and Z.
Think about the connections below:
X is required to compute Y
Y is required to compute Z
Therefore, a transitive relationship has been established between X and Z:
X is required to compute Z
Based on this transitive relationship, tests to find errors in the calculation of Z must
consider a variety of values for both X and Y.
When designing test cases, symmetry in a relationship (graph link) is a crucial guide.
Testing this feature is crucial to determine if a link is, in fact, bidirectional (symmetric).
Limited symmetry is implemented by the UNDO feature found in many personal computer
applications. In other words, UNDO permits the negation of an action after it has been
carried out. All exceptions (i.e., locations where UNDO cannot be utilised) should be noted
and this should be thoroughly tested. Lastly, a no action or null action loop, or a relationship
that circles around to itself, should exist for each node in the network. These reflexive links
should also be tested.

Attaining node coverage is the primary goal of test case design. This means that tests
ought to be created to show that no nodes have been unintentionally left out and that node
weights—that is, object attributes—are accurate. Link coverage is discussed next. Based
on its features, every relationship is put to the test. To prove that a relationship is, in fact,
bidirectional, for instance, a symmetric relationship is evaluated. To prove that transitivity
exists, a transitive relationship is examined. To verify that there is a null loop, a reflexive
relationship is examined. When link weights have been specified, tests are designed to

establish that these weights are valid. Finally, loop testing is invoked.
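Under the assumption that the graph is stored as a plain list of links, link coverage can be derived mechanically: one test obligation per relationship. The representation below is a sketch of the word-processing example from the figure, not a standard notation; the relationship names and the one link weight are taken loosely from the text.

```python
# Sketch of the word-processing graph: nodes are objects, directed links carry
# a relationship name and an optional weight. Enumerating the links yields one
# test obligation per relationship (link coverage).

links = [
    # (from, to, relationship, weight)
    ("new file menu select", "document window", "generates",
     "creation time < 1.0 s"),
    ("document window", "document text", "contains / displays", None),
    ("new file menu select", "document text", "allows editing of", None),
]

def link_coverage_tests(graph_links):
    """Derive one human-readable test obligation per link."""
    return [f"verify that '{a}' {rel} '{b}'" + (f" ({w})" if w else "")
            for a, b, rel, w in graph_links]

for obligation in link_coverage_tests(links):
    print(obligation)
```

Node coverage would be handled the same way, by iterating over the set of nodes and checking their weights (object attributes); symmetric, transitive and reflexive links would each add their own obligation to the list.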

Equivalence Partitioning

Equivalence partitioning is a black-box testing technique that separates a program’s


input domain into data classes so that test cases can be generated from them. A perfect
test case can identify a class of errors (e.g., processing all character data incorrectly) that

many cases might otherwise need to run through before the general issue is detected. By
defining a test case that finds classes of faults, equivalency partitioning aims to minimise
the overall amount of test cases that need to be created.
The assessment of equivalency classes for an input condition is the basis of the test

case design for equivalency partitioning. An equivalency class exists if a set of objects can
be connected by symmetric, transitive and reflexive links, using ideas from the previous
section. For input conditions, an equivalency class denotes a collection of states that are
either valid or incorrect. An input condition can be any of the following: a Boolean condition,

a range of values, a set of related values, or a specific numerical value. The following
criteria may be used to determine equivalency classes:
●● If an input condition specifies a range, one valid and two invalid equivalence
classes are defined.
●● If an input condition requires a specific value, one valid and two invalid equivalence
classes are defined.
●● If an input condition specifies a member of a set, one valid and one invalid
equivalence class are defined.
Amity University Online
Software Engineering and Modeling 151
●● If an input condition is Boolean, one valid and one invalid class are defined.
Take information kept as a component of an automated banking application, for
instance. Using a personal computer, the user can log in to the bank, enter a six-digit
password and then input a sequence of commands to initiate different banking functions.
The banking application’s software takes data in the following format during the log-on
process:
area code—blank or three-digit number
prefix—three-digit number not beginning with 0 or 1
suffix—four-digit number
password—six-digit alphanumeric string
commands—check, deposit, bill pay and the like
Each data element’s corresponding input requirements for the banking application can
be defined as
area code: Input condition, Boolean—the area code may or may not be present.
Input condition, range—values defined between 200 and 999, with
specific exceptions.
prefix: Input condition, range—specified value > 200.
suffix: Input condition, value—four-digit length.
password: Input condition, Boolean—a password may or may not be present.
Input condition, value—six-character string.
command: Input condition, set—containing commands noted previously.
It is possible to create and run test cases for every input domain data item by following
the procedures for equivalence class derivation. The purpose of selecting test cases is to
exercise as many properties of an equivalence class as possible at once.
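As an illustrative sketch (not code from the text), one representative value can be drawn from a valid and an invalid equivalence class for two of the log-on conditions; the function names are invented:

```python
import re

def classify_password(value):
    """Valid class: six-character alphanumeric string; everything else is invalid."""
    return bool(re.fullmatch(r"[A-Za-z0-9]{6}", value))

def classify_prefix(value):
    """Valid class: three-digit number not beginning with 0 or 1."""
    return value.isdigit() and len(value) == 3 and value[0] not in "01"

# One representative test value per equivalence class:
print(classify_password("ab12cd"))  # True  (valid class)
print(classify_password("ab1"))     # False (invalid class: too short)
print(classify_prefix("234"))       # True  (valid class)
print(classify_prefix("123"))       # False (invalid class: begins with 1)
```

Each assertion-style probe stands in for an entire class of inputs, which is precisely how equivalence partitioning keeps the test suite small.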

Boundary Value Analysis


For reasons that are not completely clear, most errors tend to occur not at the centre
of the input domain but around its edges. Boundary value analysis (BVA) has been
created as a testing technique for this reason. Boundary value analysis leads to a set of
test cases that exercise bounding values.
Boundary value analysis is a test case design method that complements equivalence
partitioning. Rather than selecting any element inside an equivalence class, BVA leads to
the selection of test cases at the edges of the class. BVA also derives test cases from the
output domain, not only the input conditions.


There are numerous similarities between the guidelines for equivalency partitioning and
BVA.
1. If an input condition specifies a range bounded by values a and b, test cases should be
designed with values a and b, as well as values just above and just below a and b.
2. If an input condition specifies a number of values, test cases that exercise the minimum
and maximum values should be created. Values just above and below the minimum and
maximum are also tested.

3. Apply guidelines 1 and 2 to output conditions as well. Consider an engineering analysis
program whose output is required to be a temperature versus pressure table. Test cases
should be designed to produce an output report with the maximum and minimum
allowable number of table entries.
4. If internal program data structures have prescribed boundaries (e.g., an array with a set
maximum of 100 items), be certain to design a test case that exercises the data structure
at its boundary.
To some extent, most software developers perform BVA instinctively. If these
guidelines are followed, boundary testing will be more thorough and will therefore have a
higher chance of identifying errors.
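Guideline 1 can be sketched as a small generator of boundary values for a range [a, b]; the function name is illustrative, not from the text:

```python
def bva_values(a, b):
    """Boundary value analysis for a range [a, b]: values at,
    just below and just above each boundary."""
    return sorted({a - 1, a, a + 1, b - 1, b, b + 1})

# The 200..999 range from the banking example above:
print(bva_values(200, 999))  # [199, 200, 201, 998, 999, 1000]
```

The set removes duplicates when the range is narrow, so the same helper also works for small ranges such as [1, 3].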

Comparison Testing

Certain scenarios (such as automotive braking systems and aeroplane avionics)
require software reliability above everything else. To reduce the chance of error, redundant
hardware and software are frequently employed in these applications. When software is
built redundantly, different software engineering teams use the same specification to create
independent versions of an application. In such cases, the versions can all be checked
with the same test data to make sure that all versions produce the same results. Then, to
guarantee consistency, all versions are run simultaneously and the results compared in
real time.

Researchers have proposed that, even when only one version of the software will
be utilised in the delivered computer-based system, separate versions of the software
should be produced for critical applications in order to gain the benefits of redundancy.
These separate versions serve as the foundation for comparison testing, also known as
back-to-back testing, which is a black-box testing method.
When there are several implementations of the same specification, test cases created
with other black-box methods (such as equivalence partitioning) are fed into every
software version. If the results from every version are the same, every implementation is
considered correct. In the event that the outputs differ, each program is examined to
ascertain whether a flaw in one or more versions is to blame. Most of the time, an
automated tool can handle the output comparison.

Comparison testing is not infallible. All versions will probably reflect any errors found
in the specification that was used to create them. Additionally, comparison testing will be
unable to identify the issue if every independent version generates data that are identical
but inaccurate.
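A minimal sketch of back-to-back comparison, assuming two hypothetical, independently written versions of the same trivial specification (computing an average); real comparison testing would run full applications, not functions:

```python
def version_a(values):
    """Implementation by team A: built-in sum."""
    return sum(values) / len(values)

def version_b(values):
    """Implementation by team B: explicit accumulation loop."""
    total = 0.0
    for v in values:
        total += v
    return total / len(values)

# The same black-box test data is fed to every version and the
# outputs are compared automatically, case by case.
test_data = [[1, 2, 3], [10, 20], [5]]
for case in test_data:
    assert version_a(case) == version_b(case), f"versions disagree on {case}"
print("all versions agree")
```

Note the caveat from the text: if both teams misread the same specification, both versions agree on the wrong answer and the comparison passes silently.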

Orthogonal Array Testing


The input domain is comparatively small in many applications. In other words, there
aren’t many input parameters and the possible values for each parameter are well-defined.
When these numbers are relatively small (e.g., three input parameters taking on three
discrete values each), it is feasible to consider every input permutation and exhaustively
test the input domain’s processing. However, exhaustive testing becomes unfeasible as the
number of input values and the number of discrete values for each data item grow.
Orthogonal array testing can be applied to problems in which the input domain is
relatively small but too large to support exhaustive testing. The orthogonal array testing
method is especially helpful in identifying region faults, a category of defect linked to flawed
logic in a software component.
Consider a system with three input items, X, Y and Z, to show how orthogonal array
testing differs from the traditional one input item at a time techniques. There are three
discrete values connected to each of these input items. 3³ = 27 test scenarios are feasible.
Phadke offers a geometric interpretation of the potential test scenarios connected to X, Y
and Z, shown in the figure below. With reference to the illustration, variations can
be made sequentially along each input axis, one input item at a time. As a result, the input
domain (shown by the left-hand cube in the figure) is only partially covered.
Figure: A geometric view of test cases

An L9 orthogonal array of test cases is produced during orthogonal array testing. The
L9 orthogonal array has a balancing property: as the right-hand cube in the figure above
illustrates, the test cases (represented by black dots in the figure) are dispersed uniformly
throughout the test domain, giving more thorough test coverage across the input domain.
To see how the L9 orthogonal array is used, take a look at the fax application’s transmit
function. Four parameters, P1, P2, P3 and P4, are sent to the transmit function. Each takes
on three discrete values. P1, for instance, takes on the values:
P1 = 1, send it now
P1 = 2, send it one hour later
P1 = 3, send it after midnight
In addition, P2, P3 and P4 would take on the values 1, 2 and 3, corresponding to other
send functions. If a one-input-item-at-a-time testing technique were used, the following test
sequences (P1, P2, P3, P4) would be specified: (1, 1, 1, 1), (2, 1, 1, 1), (3, 1, 1, 1),
(1, 2, 1, 1), (1, 3, 1, 1), (1, 1, 2, 1), (1, 1, 3, 1), (1, 1, 1, 2) and (1, 1, 1, 3). Phadke
evaluates these test cases as follows:
Such test cases are relevant only when it is certain that these test parameters do not
interact. They can identify logic errors in which a single parameter value causes the
software to malfunction; we refer to these as single mode errors. This approach cannot
identify logic errors that cause a malfunction when two or more parameters concurrently
take on certain values; in other words, it cannot identify any interactions. Its capacity to
identify errors is therefore constrained.
Given the comparatively limited number of input parameters and discrete values,
exhaustive testing is feasible: 3⁴ = 81 tests are needed, a significant but doable quantity.
All errors related to data item permutation would be discovered, but it would take a lot of
work. Using the orthogonal array testing approach, we can provide good test coverage
with considerably fewer test cases than the exhaustive strategy. The figure below shows
an L9 orthogonal array for the fax transmit function.


Test Case    Test Parameters
             P1    P2    P3    P4
1            1     1     1     1
2            1     2     2     2
3            1     3     3     3
4            2     1     2     3
5            2     2     3     1
6            2     3     1     2
7            3     1     3     2
8            3     2     1     3
9            3     3     2     1
Figure: An L9 orthogonal array
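As a sketch of how the array drives testing, the rows of the L9 array above can be mapped directly onto tuples of transmit parameters. The `transmit` function here is a hypothetical stand-in, not the real fax code:

```python
# The nine rows of the L9 orthogonal array from the figure above.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

def transmit(p1, p2, p3, p4):
    """Hypothetical stand-in for the fax transmit function under test."""
    return f"send mode {p1}-{p2}-{p3}-{p4}"

# Balancing property: each level of each parameter appears exactly 3 times.
for col in range(4):
    levels = [row[col] for row in L9]
    assert all(levels.count(v) == 3 for v in (1, 2, 3))

results = [transmit(*row) for row in L9]
print(len(results))  # 9 test cases instead of 3**4 = 81
```

The assertion makes the balancing property discussed above checkable rather than taken on faith.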

Phadke uses the L9 orthogonal array to evaluate test results in the following ways:
Detect and isolate all single mode faults. A single mode fault is a consistent problem
with any level of any single parameter. For instance, if every test case for factor P1 = 1
results in an error state, it is a single mode failure. In this example, tests 1, 2 and 3 will
display errors. By examining which tests exhibit faults, one can determine which parameter
values are the source of the error. Here, observing that tests 1, 2 and 3 result in an error
isolates the problem to the logical processing connected with “send it now” (P1 = 1). This
kind of fault isolation is crucial to rectifying the error.
Detect all double mode faults. A double mode fault is a consistent problem that occurs
when specific levels of two parameters occur simultaneously. In fact, a double mode fault
indicates pairwise incompatibility or detrimental interactions between two test parameters.


Multimode faults. Orthogonal arrays of the type depicted can only guarantee the
detection of single and double mode errors. Nonetheless, these tests also identify a large
number of multimode defects.
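The single mode fault isolation argument above can be expressed as a small check over the L9 array. A sketch, where the set of failing test indices is a hypothetical observation:

```python
# Rows of the L9 orthogonal array (same as the figure above).
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

def isolate_single_mode(array, failing):
    """Return (parameter, level) pairs whose test cases are exactly the failing set."""
    suspects = []
    for col in range(4):
        for level in (1, 2, 3):
            rows = {i for i, row in enumerate(array) if row[col] == level}
            if rows == set(failing):
                suspects.append((f"P{col + 1}", level))
    return suspects

# Tests 1, 2 and 3 failed (0-based indices 0, 1, 2): the fault isolates
# to P1 = 1, i.e., the "send it now" processing.
print(isolate_single_mode(L9, {0, 1, 2}))  # [('P1', 1)]
```

Because each parameter level appears in exactly three rows, a failure pattern that matches one level's rows pinpoints that level.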

White-Box Testing
White-box testing is a method of testing programs in which the tests are predicated on
an understanding of the components and structure of the program. Access to the source
code is required in order to conduct white-box testing.
Structural or white-box testing is an approach that is used in addition to functional
or black-box testing. Test groups using this method need to be well conversant with the
internal workings of the product. Structural testing can be defined as a testing methodology
in which the tests are developed based on an understanding of the architecture and
design of the software. Subroutines and other relatively small program units, as well as the
operations connected to an object, are typically the targets of structural testing. As the term
suggests, the tester is able to derive test data by analysing the code and applying their
understanding of a component’s structure. The number of test cases required to ensure that
each statement in the program is run at least once during testing can be determined by
analysing the code. Software with untested statements should not be released, since the
consequences could be severe. This goal may sound straightforward, but even basic
structural testing objectives can be more difficult to accomplish than they initially seem.
Other terms for white-box testing include glass-box testing, clear-box testing, open-box
testing, path-oriented testing and logic-driven testing.
Reasons White-box Testing is Performed
●● To verify that every path within a process functions correctly.
●● To apply true and false conditions to every logical decision.
●● To run every loop and examine its limit values.
●● To determine whether input data structures conform to their standards before being
used for further processing.

Advantages of Structural/White-box Testing

nl
White-box testing has a number of benefits, including:
●● makes the test developer consider implementation carefully.

O
●● roughly represents the division carried out using execution equivalency.
●● reveals hidden coding problems.
White-box testing, sometimes known as glass-box testing or structural testing, is a test
case design concept that derives test cases from the control structure described as part of
component-level design. Using white-box testing techniques, you can derive test cases that
(1) ensure that every independent path in a module has been executed at least once;
(2) exercise all logical decisions on their true and false sides; (3) execute all loops at their
boundaries and within their operational bounds; and (4) exercise internal data structures to
verify their validity.

Basis Path Testing


The white-box testing method known as basis path testing was first proposed by Tom
McCabe. Using the basis path technique, the test case designer can determine a measure
of the logical complexity of a procedural design and use this measure to define a basis set
of execution paths. Test cases created to exercise the basis set are guaranteed to execute
every statement in the program at least once during testing.

Before the basis path technique can be discussed, it is necessary to introduce a basic
notation for the representation of control flow, known as a flow graph (or program graph). A
flow graph should be constructed only when a component’s logical structure is complex.
The flow graph makes it easier to trace program paths.

Figure: (a) Flowchart and (b) flow graph


To see how a flow graph is used, take a look at the procedural design representation in
part (a) of the figure above. Here, the program control structure is shown via a flowchart.
Assuming that the decision diamonds in the flowchart do not contain any compound
conditions, part (b) maps the flowchart into a comparable flow graph. With reference to
part (b), a flow graph node is a circle that represents one or more procedural statements.

A sequence of process boxes and a decision diamond can map into a single node. The
edges or links of a flow graph, like the arrows on a flowchart, indicate the flow of control.
An edge must terminate at a node, even when the node does not represent any procedural
statements (e.g., the flow graph symbol for the if-then-else construct). Regions are areas
enclosed by nodes and edges. When counting regions, the space outside the graph is also
counted as a region.
Any path through the program that introduces at least one new set of processing
statements or a new condition is considered independent. Expressed in terms of a flow
graph, an independent path must travel along at least one edge that has not been
traversed before the path is defined. For the flow graph shown in part (b) above, for
instance, a set of independent paths is
Path 1: 1-11
Path 2: 1-2-3-4-5-10-1-11
Path 3: 1-2-3-6-8-9-10-1-11
Path 4: 1-2-3-6-7-9-10-1-11
Keep in mind that every new path introduces a new edge. The path
1-2-3-4-5-10-1-2-3-6-8-9-10-1-11
is not regarded as an independent path, since it only combines paths that have already
been defined and does not pass through any new edges. Paths 1 through 4 make up the
basis set for the flow graph in part (b). In other words, if you can create tests that force
execution of these paths (a basis set), every statement in the program is guaranteed to be
executed at least once and every condition to be exercised on both its true and false sides.
It is important to remember that the basis set is not unique; it is actually possible to derive
many basis sets for a given procedural design.
Cyclomatic complexity is a software metric that gives a quantitative assessment of a
program’s logical complexity. When applied to the basis path testing method, the value
calculated for cyclomatic complexity gives you an upper bound on the number of tests
required to verify that each statement has been run at least once, and indicates the number
of independent paths in the program’s basis set.

With its roots in graph theory, cyclomatic complexity gives you a very helpful software
metric. Complexity is computed in one of three ways:
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity V(G) for a flow graph G is defined as
V(G) = E − N + 2
where E is the number of flow graph edges and N is the number of flow graph nodes.
3. Cyclomatic complexity V(G) for a flow graph G is also defined as
V(G) = P + 1

where P is the number of predicate nodes in the flow graph G.
Once again with reference to the flow graph in part (b) above, the cyclomatic
complexity can be calculated with each of the algorithms just noted:
1. The flow graph has four regions.
2. V(G) = 11 edges − 9 nodes + 2 = 4.
3. V(G) = 3 predicate nodes + 1 = 4.
Therefore, the flow graph in part (b) above has a cyclomatic complexity of 4. More
importantly, you may use the value of V(G) to obtain an upper bound on the number of
independent paths that make up the basis set and, consequently, on the number of tests
that must be created and run to ensure that all program statements are covered. In this
scenario, we would therefore need to design a maximum of four test cases to exercise
each distinct logic path.
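The three computations can be sketched directly; the edge, node and predicate counts below are those quoted for the flow graph in the text:

```python
def cyclomatic_complexity(edges, nodes):
    """V(G) = E - N + 2, from edge and node counts of the flow graph."""
    return edges - nodes + 2

def cyclomatic_from_predicates(predicates):
    """V(G) = P + 1, from the number of predicate (decision) nodes."""
    return predicates + 1

# Counts for the example flow graph: 11 edges, 9 nodes, 3 predicate nodes.
print(cyclomatic_complexity(11, 9))       # 4
print(cyclomatic_from_predicates(3))      # 4
```

Both formulas agree with the four regions counted geometrically, as the definition requires.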

Control Structure Testing

Although basis path testing is very easy to use and quite effective, it is not enough on
its own. The techniques discussed next broaden the scope of testing and raise the
standard of white-box testing.

A test case design technique called condition testing is used to exercise the logical
conditions in a program module. Data flow testing chooses a program’s test paths based on
where definitions and uses of variables are found inside the program.
Loop testing is a type of white-box testing in which only the validity of loop constructs
is tested. Two classes of loops are considered here: simple loops and nested loops.
Simple Loops. For simple loops, the following set of tests can be used, where n is the
maximum number of allowable passes through the loop.


1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop, where m < n.
5. n − 1, n and n + 1 passes through the loop.
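The five simple loop tests reduce to a small set of pass counts. A sketch, where the interior value m is chosen arbitrarily (any m < n would do):

```python
def simple_loop_passes(n):
    """Pass counts exercising a simple loop with at most n iterations:
    skip (0), one pass, two passes, a typical m < n, and the
    n-1 / n / n+1 boundary cluster."""
    m = n // 2  # an arbitrary interior value with m < n
    return sorted({0, 1, 2, m, n - 1, n, n + 1})

print(simple_loop_passes(10))  # [0, 1, 2, 5, 9, 10, 11]
```

The set collapses duplicates automatically, so for very small n the list simply shrinks.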


Nested Loops. If we were to apply the test strategy for simple loops to nested loops, the
number of possible tests would increase geometrically as the level of nesting increased.
This would result in an impractical number of tests. Beizer offers a strategy that will help to
reduce the number of tests:


1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their
minimum iteration parameter (e.g., loop counter) values. Add additional tests for
excluded or out-of-range values.
3. Work outward, testing the next loop while keeping all other outer loops at minimum
values and all other nested loops at typical values.
4. Continue until every loop has been tested.

Figure: Classes of loops

4.2.2 Alpha, Beta and Gamma Testing


Alpha testing

Users test the program at the developer’s premises during alpha testing, which is
regarded as a form of internal acceptance testing. Stated differently, the purpose of this
testing is to evaluate the software’s performance within the development environment.
When alpha testing is over, users report bugs to the software developers so they can fix
them.

Before a product is made available to a wider public, alpha testing serves as the initial
round of external user testing, making it a crucial stage in the software development
lifecycle. Usually, it is carried out in a controlled setting by internal teams and particular
end-user groups. Finding and fixing bugs, usability problems and other faults that would not
have been discovered during earlier phases of development is the main goal of alpha
testing. This overview examines the significance, methodology and results of alpha testing
to give a thorough grasp of its function in assuring software quality.

Significance of Alpha Testing


Alpha testing is necessary for a number of reasons:
●● Early Detection of Defects: Alpha testing assists in finding defects and problems that
developers might have overlooked by incorporating real users in the testing process.
These first results are essential for fixing bugs before the program reaches later
phases, when the necessary fixes become more involved and expensive.
●● Usability Assessment: Alpha testing offers the chance to assess the software’s user
interface and overall user experience. Testers are able to point out places where users
may find the application difficult or confusing to use, which gives developers the
opportunity to improve usability.
●● Performance and Stability: This stage aids in evaluating the software’s functionality in
various scenarios. It enables programmers to keep an eye on the application’s
performance in order to make sure it can withstand typical usage scenarios.
●● Requirement Validation: Alpha testing aids in making sure the program satisfies the
criteria and user expectations. It offers input regarding the completeness and accuracy
of the software’s implementation.

The Alpha Testing Process
There are various steps to the alpha testing process:
●● Preparation: The software must mature to a certain point before alpha testing starts.
This usually indicates that all significant features have been added and that the
program is reliable enough for end users to test. A comprehensive test plan is
produced that outlines the scope, goals and success criteria.
●● Selection of Testers: The alpha testers are typically a combination of internal staff
members, including developers, testers and product managers, and a small number of
external users who are representative of the target demographic. These testers are
selected based on their domain expertise or familiarity with related products, which
allows them to offer insightful feedback.
●● Test Execution: In the execution stage, testers utilise the program in a way that is
comparable to that of end users. They carry out a range of tasks, from standard use
cases to edge cases intended to test the boundaries of the software. When they come
across bugs, performance difficulties or usability concerns, testers record their
observations in a document.
●● Bug Reporting and Tracking: Testers use a bug tracking system to report any problems
they discover. Every report contains comprehensive details regarding the issue, how
to replicate it and the circumstances surrounding it. This methodical approach helps
developers understand and prioritise changes.
●● Analysis and Prioritisation: The development team looks over the issues that have
been reported, investigates their underlying causes and ranks them according to
importance and severity. Critical bugs that impact essential functions or cause crashes
are fixed first; minor bugs may be postponed to later phases.
●● Iteration and Improvement: The software is modified as needed by developers based
on feedback from alpha testing. Several iterations of testing and bug fixing may be
required until the program satisfies the required quality standards.

Outcomes and Benefits of Alpha Testing



Alpha testing produces a number of significant results and advantages:


●● Improved Software Quality: Alpha testing greatly improves the overall quality of the
product by spotting and fixing bugs early on. As a result, the product is more
dependable, stable and user-friendly.
●● Enhanced User Experience: Developers can refine the user interface and improve the
entire user experience using feedback from real users. This is essential to ensure that
the program fulfils the requirements and expectations of the user.


●● Risk Mitigation: Early testing reduces the risks connected to the deployment of
software. Developers can save money on changes and possibly prevent failures
Notes at later phases, like beta testing or final release, by addressing significant concerns

e
during alpha testing.
●● Requirement Confirmation: Alpha testing verifies that the program satisfies its

in
predetermined specifications. This guarantees that the application complies with the
original project objectives and that all functionalities are implemented correctly.
●● Informed Decision Making: The insights from alpha testing offer useful information for
project stakeholders. Decisions regarding project schedules, resource allocation and
readiness for subsequent testing stages can all be made with the help of this data.

Challenges and Best Practices

Although alpha testing has many advantages, it also has drawbacks.
●● Scope Management: It is critical to define the scope of alpha testing. Testing too
widely might overload testers and diminish focus, while testing too narrowly could
overlook important issues. A balanced approach is required to guarantee thorough
coverage without overtaxing the testing process.
●● Tester Selection: Selecting the appropriate testers is essential to getting insightful
feedback. Testers should represent the desired user base and should be qualified to
find pertinent problems.
●● Effective Communication: It is critical that developers and testers have open lines of
communication. Testers should submit thorough, useful feedback and developers
should answer inquiries and provide clarifications as needed.
●● Iteration Management: It can be difficult to oversee several iterations of testing and
bug fixing. It is crucial to keep an organised procedure in place that monitors progress
and guarantees that all pressing problems are resolved.


●● Balancing Speed and Quality: While it is critical to address problems as soon as
possible, hastily applied changes can introduce new ones. To preserve software
quality, speed and thoroughness must be balanced.


Alpha testing is an essential stage in the software development lifecycle, responsible
for guaranteeing the quality, usability and performance of software products. By involving
internal teams and a limited number of external users, alpha testing offers early feedback
that helps developers find and repair bugs before the product goes to wider testing and
eventual release. Through planning, execution, analysis and improvement, this methodical
process delivers several advantages, such as better software quality, an improved user
experience and lower risks. In spite of its difficulties, effective alpha testing lays the
groundwork for a successful software release and results in a product that satisfies user
needs and can withstand practical use.

Beta Testing
As the last testing stage prior to a product’s official release, beta testing is an essential
part of the software development lifecycle. In contrast to alpha testing, which is carried
out internally and frequently involves a small number of users, beta testing is carried out
by a larger group of actual end users in their natural surroundings. The major goals of
beta testing are to verify the program in practical situations, find any defects or usability
problems that still need to be fixed and make sure the product fulfils the requirements
and expectations of its intended market. The crucial feedback this phase offers allows
developers to make last-minute changes and enhancements before the program is made
widely available.
Key Aspects of Beta Testing

●● Real-World Validation: Beta testing lets actual consumers utilise the software in
real-world scenarios. This assists in identifying problems that might not have been
seen in the controlled context of alpha testing. Performance concerns, compatibility
problems and other flaws resulting from various hardware and software combinations
can be found through real-world usage.
●● User Feedback: Real end users who participate in beta testing offer insightful
commentary on the usability, functionality and general user experience of the program.
Users can communicate their satisfaction or discontent with different aspects, report
any issues they run into and make enhancement suggestions. This input is critical to
improving the software’s usability and guaranteeing that it satisfies consumer needs.
●● Bug Identification: Any lingering bugs that were missed in previous testing stages can
be found and fixed with the help of beta testing. This includes both small and
significant flaws that could affect the functionality of the software or user satisfaction.
Before the official release, developers can use this information to prioritise and
address issues.
●● Performance Assessment: The software’s performance is assessed in real-world
scenarios during beta testing. This involves testing its stability, speed and
responsiveness under various loads and usage scenarios. Performance problems
found during beta testing can be fixed to guarantee that the program functions properly
in a variety of settings.
●● Market Readiness: Beta testing aids in determining whether software is ready for
market release. It offers information on how effectively the product fulfils customer
expectations and highlights any serious problems that may need to be fixed. Positive
input can boost confidence in the product’s readiness, while negative feedback from
beta testers can point out areas that still need work.
●● Usability Enhancements: One common feature of beta tester feedback is
recommendations for enhancing the usability of the software. This may entail adding
useful features, streamlining the navigation, or improving the interface’s intuitiveness.
These improvements have the potential to greatly raise customer satisfaction and
enhance the entire user experience.

Process of Beta Testing


●● Preparation: Prior to beta testing, a detailed strategy is created that includes the goals,
objectives and standards for success. Beta testers are chosen from among current
clients, devoted users and those who fit the desired demographic.
●● Distribution: Beta testers receive the program after completing an invitation-only
or sign-up procedure. Instructions on how to use the product, report bugs and offer
comments are given to testers.
●● Execution: Beta testers carry out their regular tasks using the software in their natural
setting. This makes it easier to mimic real-world use and spot problems that would not
show up in a controlled testing setting.
●● Feedback Collection: When reporting their findings, testers make note of any flaws,
performance concerns, or usability difficulties they run upon. Feedback is gathered via
a number of methods, including forums, issue tracking systems, surveys and direct
conversation.
Amity University Online
162 Software Engineering and Modeling

●● Analysis and Fixing: To find trends and rank fixes in order of importance, the
development team reviews the issues and feedback that have been submitted. Less
serious concerns are addressed after critical bugs and high-impact issues.
●● Iteration: There may be more versions of testing and fixing depending on the
feedback. Every version seeks to enhance the software’s quality by incorporating the
most recent comments from beta testers.
●● Final Review: A final review is carried out to make sure the software satisfies the
required quality standards and is prepared for release, following the resolution of the
major problems and the implementation of the necessary improvements.
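The triage in the middle of this process — classifying the incoming reports and ranking fixes in order of importance — can be sketched in a few lines. The sketch below is illustrative only: the `Report` shape, the severity labels and the numeric weights are assumptions for this example, not part of any particular issue-tracking tool.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical severity weights; a real team would calibrate these.
SEVERITY_WEIGHT = {"critical": 100, "major": 10, "minor": 1}

@dataclass
class Report:
    component: str   # e.g., "login", "checkout"
    severity: str    # "critical", "major" or "minor"

def prioritise(reports):
    """Rank components by the total weighted severity of beta-tester reports."""
    scores = Counter()
    for r in reports:
        scores[r.component] += SEVERITY_WEIGHT[r.severity]
    return [component for component, _ in scores.most_common()]

reports = [
    Report("login", "critical"),
    Report("checkout", "minor"),
    Report("login", "major"),
]
print(prioritise(reports))  # ['login', 'checkout']
```

Weighting by severity rather than by raw report count reflects the principle above: one critical bug outranks many minor ones.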

Benefits of Beta Testing
●● Enhanced Product Quality: Beta testing finds and fixes bugs that were missed in
previous testing stages, ensuring the final product is of the highest calibre.
●● Improved User Satisfaction: Developers can improve software usability and
functionality by incorporating feedback from actual users. This leads to increased user
satisfaction.
●● Reduced Risk: Finding and fixing problems during beta testing lowers the possibility
of launching a faulty product, which could cause financial losses and damage to the
company’s reputation.
●● Market Readiness: By giving a more accurate picture of the software’s preparedness
for the market, beta testing helps to guarantee a successful and positively received
product launch.
●● Customer Engagement: Participating users in beta testing fosters a sense of
camaraderie and allegiance. When users feel that their feedback is valued, they are
more likely to become product advocates.
Challenges and Best Practices


●● Managing Feedback: It can be difficult to manage a lot of beta tester input.
Establishing an effective system for classifying, ranking and responding to feedback is
crucial.
●● Selecting Testers: Selecting the ideal group of beta testers is essential. The desired
user base should be represented by testers, who should offer a variety of viewpoints
on the functionality and performance of the program.
●● Communication: It’s critical to communicate with beta testers in an efficient and
clear manner. To sustain their involvement and guarantee that important insights are
obtained, it can be helpful to give clear instructions, frequent updates and prompt
answers to their input.
●● Scope Control: Establishing a well-defined scope for beta testing facilitates
concentration of efforts and guarantees comprehensive testing of crucial software
functionalities. Excessive scope expansion may result in a loss of concentration and
the omission of important details.


●● Iterative Improvement: Multiple testing and repairing iterations are necessary for beta
testing to be most effective. Every iteration should be an improvement over the last,
progressively raising the standard of the software and fixing any bugs that are found.
A crucial stage in the software development lifecycle, beta testing ensures that the
product is ready for the market, collects insightful user feedback and validates the product
in real-world circumstances. Beta testing assists in locating any lingering bugs, performance
concerns and usability issues that might have gone unnoticed in previous testing stages
by involving a larger audience of actual end users. Developers can make last-minute
tweaks and improvements based on the information gathered from beta testing, which
results in a high-calibre, user-friendly product that lives up to market expectations. Effective
beta testing, in spite of its difficulties, is essential to a successful software release and
a favourable user response, both of which boost a product’s chances of success on the
market.

Gamma Testing
After beta testing but prior to the software product’s official release, gamma testing
is a crucial but less common stage in the software development lifecycle. It is the last phase
of testing and is typically carried out once the program is deemed stable and feature-
complete. Before software is placed onto the market, gamma testing is used to make sure
that all significant problems have been repaired, it operates well under expected conditions
and it satisfies the highest quality standards. This overview provides a thorough
grasp of gamma testing’s function in assuring software quality by examining its significance,
methodology and results.

Significance of Gamma Testing


●● Final Quality Assurance: The final stage of software quality verification is gamma
testing. By verifying that all significant and important problems found during beta
testing have been fixed, it guarantees that the product is prepared for release.
●● Performance Validation: In order to make sure that the program can withstand expected
workloads and usage scenarios without degrading or failing, this step helps test the
software’s performance in real-world settings.
●● User Acceptance: A small number of end users who grant the program their final
approval are frequently involved in gamma testing. By providing feedback, the product
is improved to match user needs and expectations, increasing the chance of user
acceptance and satisfaction after release.
●● Regulatory Compliance: Gamma testing is a way to verify compliance for software
solutions that must adhere to industry or regulatory standards. This guarantees that
the product complies with all regulations and is suitable for marketing and use.

The Gamma Testing Process

●● Preparation: A comprehensive test strategy is created prior to gamma testing. The
scope, goals, success criteria and particular duties and responsibilities of the testing
team are all outlined in this plan. After passing all previous testing stages and being
feature-complete, the software is now ready for testing.
●● Selection of Testers: Internal employees, quality assurance specialists and a small
group of end customers that either have a lot of expertise with the product or belong
to important demographics are usually the ones who serve as gamma testers. These
testers are selected based on their capacity to offer comprehensive and perceptive
input.
●● Test Execution: Gamma testers use the software under situations that closely
resemble real-world settings during the execution phase. They carry out a variety of
duties with an emphasis on usability, functionality, performance and any lingering edge
cases that might not have been thoroughly evaluated during the beta stage.
●● Issue Reporting and Tracking: Every problem found during gamma testing is
documented through a methodical and comprehensive bug tracking procedure. Every
issue report contains a precise description, replication instructions, expected and
actual outcomes and an assessment of the issue’s seriousness.
●● Analysis and Prioritisation: Based on their seriousness and effect on the entire
product, the development team evaluates and ranks the reported issues. While minor
issues may be logged for future updates or enhancements, critical ones are handled
right away.
●● Final Fixes and Validation: The software goes through one further phase of validation
after the significant issues have been fixed to make sure that no new issues have
been introduced and that the solutions have been applied successfully. Until the
product satisfies the established quality criteria, this iterative procedure is carried out.
●● Documentation and Sign-Off: A thorough record of the testing procedure, conclusions
and fixes is created. Important stakeholders examine the testing team’s final report
after it has been provided. The software is approved for production when it satisfies all
requirements for release.
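The issue-report structure described above — a precise description, replication instructions, expected and actual outcomes, and a severity rating — can be modelled directly as a small record type. The field names below are hypothetical; real bug trackers define their own schemas.

```python
from dataclasses import dataclass, field

@dataclass
class IssueReport:
    # Fields mirror the elements of a gamma-test bug report listed above;
    # the names themselves are illustrative, not a standard.
    description: str
    steps_to_reproduce: list = field(default_factory=list)
    expected: str = ""
    actual: str = ""
    severity: str = "minor"   # "critical", "major" or "minor"

    def summary(self):
        """One-line form suitable for a triage list."""
        return f"[{self.severity.upper()}] {self.description}"

bug = IssueReport(
    description="Crash when saving a report with no title",
    steps_to_reproduce=["Open report editor", "Leave title empty", "Click Save"],
    expected="A validation message is shown",
    actual="The application crashes",
    severity="critical",
)
print(bug.summary())  # [CRITICAL] Crash when saving a report with no title
```

Capturing expected versus actual behaviour in separate fields is what makes a report actionable: the gap between the two is the defect.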
Outcomes and Benefits of Gamma Testing
●● High-Quality Release: Before being released, gamma testing makes sure the software
is at the best potential quality. It attests to the fact that all significant problems have
been fixed and that the product operates dependably in typical circumstances.
●● Enhanced User Confidence: Gamma testing contributes to the development of user
confidence in the product by including end users in the last testing stage. Testimonials
and encouraging comments from users who take part in gamma testing can be used in
marketing campaigns.
●● Risk Mitigation: The likelihood of running into serious issues after release is decreased
by finding and fixing any flaws that persist during gamma testing. By being proactive,
the organisation can avoid expensive post-release fixes and reputational harm.
●● Regulatory Assurance: Gamma testing offers a final assurance that software conforms
with all relevant standards and criteria for items that need regulatory certification,
guaranteeing its safe and legal usage in the market.
●● Market Readiness: A definite indicator of the software’s suitability for commercial
release is gamma testing. Stakeholders can be certain that the product will function as
intended and be well-received by users thanks to the phase’s positive outcomes.
Challenges and Best Practices


●● Resource Allocation: It is crucial to make sure that enough money, staff and time are
set out for gamma testing. Cutting corners might result in inadequate testing and the
eventual delivery of a substandard product.
●● Tester Selection: Selecting the appropriate testers is necessary to get insightful input.
To offer insightful feedback, testers should be well-versed in the product and its
intended use cases.
●● Clear Communication: It is essential that developers and testers continue to
communicate in an efficient and transparent manner. Developers must respond to
queries and provide explanations, and testers must offer thorough and useful feedback.
●● Focus on Critical Issues: The primary goal of gamma testing should be to find and fix
serious problems that could affect the functionality, efficiency and user satisfaction of
the product. Small problems should be noted but may not always be fixed in time for
the product’s introduction unless they have a major impact.
●● Comprehensive Documentation: It is essential to have thorough records of the testing
procedure, problems experienced and solutions found. This documentation facilitates
future maintenance and changes by offering a transparent account of the testing
procedures. Notes
Gamma testing is the last stage of quality control before a software product is put
on the market, making it an essential part of the software development lifecycle. Gamma
testing ensures the program is dependable, stable and meets user expectations
by extensively testing it in real-world settings. This stage offers vital confirmation that the
product is ready for release and that all significant problems have been fixed. Though it
presents difficulties, good gamma testing contributes to the software’s commercial success
by reducing risks, boosting user confidence and guaranteeing a high-quality release.

Summary
●● Software testing is a critical aspect of the software development lifecycle (SDLC)
aimed at ensuring the quality, reliability and performance of software. It involves the
systematic identification and correction of defects to ensure that the software meets
its requirements and works as expected. Objectives of Software Testing: a) Validation
and Verification, b) Defect Identification, c) Quality Assurance, d) Risk Mitigation.
Software testing is an essential process that ensures the delivery of high-quality
software. It involves various methods, levels and types of testing to verify and validate
that the software meets its requirements and is free of defects. Effective testing helps
in identifying issues early in the development process, reducing costs and ensuring a
better user experience.
●● Black-box testing, also known as behavioral or functional testing, evaluates the
software based on its specifications and functionalities without any knowledge of the
internal code structure. The tester is unaware of the internal workings and focuses
solely on the inputs and outputs.
●● White-box testing, also known as structural or glass-box testing, involves testing the
internal structures or workings of an application. The tester requires knowledge of the
internal code, algorithms and logic. Both black-box and white-box testing are essential
for a comprehensive testing strategy. Black-box testing ensures that the software
meets user requirements and handles various input scenarios, while white-box testing
ensures that the internal workings of the software are correct and optimised. Using
both approaches in conjunction allows for a thorough evaluation of the software from
multiple perspectives.
●● Alpha testing is an internal testing phase conducted by the development team and
internal testers within the organisation before the software is released to external
testers or customers. It aims to identify and fix bugs before the software is exposed to
a wider audience.
●● Beta testing is an external testing phase where a limited group of actual users (beta
testers) tests the software in a real-world environment. This phase follows alpha
testing and aims to identify issues that may not have been found during alpha testing.
●● Gamma testing is less common and is sometimes considered an extension of beta
testing. It occurs when the software is essentially ready for release, with no more
critical issues expected and is tested by a very limited group of end users before the
final release. Alpha, Beta and Gamma testing are essential stages in the software
testing process, each serving a unique purpose in ensuring the quality and readiness
of software before its official release. These stages help identify and fix issues, gather
user feedback and validate the software’s performance in real-world conditions,
ultimately leading to a more robust and user-friendly product.
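The black-box/white-box distinction summarised above can be made concrete with a small example. Assuming a toy `classify_triangle` function (invented here purely for illustration), the first test class is written only from the specification — inputs and expected outputs — while the second is derived from knowledge of the branches in the code:

```python
import unittest

def classify_triangle(a, b, c):
    """Toy function under test (assumed for this example)."""
    if a <= 0 or b <= 0 or c <= 0:
        raise ValueError("sides must be positive")
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class BlackBoxTests(unittest.TestCase):
    # Derived from the specification alone: no knowledge of the code.
    def test_equilateral(self):
        self.assertEqual(classify_triangle(2, 2, 2), "equilateral")

    def test_invalid_input_rejected(self):
        with self.assertRaises(ValueError):
            classify_triangle(0, 1, 1)

class WhiteBoxTests(unittest.TestCase):
    # Derived from the code structure: exercise each branch of the if-chain.
    def test_isosceles_branch_for_each_equal_pair(self):
        for sides in [(2, 2, 3), (3, 2, 2), (2, 3, 2)]:
            self.assertEqual(classify_triangle(*sides), "isosceles")

    def test_scalene_branch(self):
        self.assertEqual(classify_triangle(2, 3, 4), "scalene")

if __name__ == "__main__":
    unittest.main(argv=["prog"], exit=False, verbosity=2)
```

Note how the white-box class needs three isosceles cases — one per equality comparison — a requirement invisible from the specification alone; this is exactly why the two approaches complement each other.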


Glossary
●● UAT: User Acceptance Testing
●● GUI: Graphical User Interface
●● MTTR: Mean-Time-To-Repair
●● BVA: Boundary Value Analysis

Check Your Understanding
1. Which type of testing is performed without any knowledge of the internal workings of the
application?
a. White-box testing
b. Black-box testing
c. Unit testing
d. Integration testing
2. In which phase of testing are real users involved to ensure the software meets their
needs and works in real-world scenarios?
a. Alpha testing
b. Unit testing
c. Beta testing
d. Integration testing
3. What is the primary focus of integration testing?
a. Testing individual units or components
b. Ensuring that combined components work together
c. Verifying the complete system against requirements
d. Testing the software’s performance under load
4. Which technique involves testing at the boundaries between partitions where errors
often occur?
a. Equivalence partitioning
b. Boundary value analysis
c. Decision table testing
d. State transition testing
5. What is the main objective of alpha testing?
a. To gather feedback from end users in a real-world environment
b. To perform initial internal testing to identify and fix major issues
c. To test the software’s performance under maximum load conditions
d. To verify that the software is ready for release with a final group of trusted users

Exercise
1. What are the principles for testing? Explain briefly.
2. Explain in detail Integration Testing.
3. What is System Testing?
4. What are the characteristics of Testing?
5. What is the difference between Black box testing and white box testing? Explain briefly.
Learning Activities
1. Assume you are a QA engineer responsible for testing a web application. Describe how
you would perform black-box testing for the login functionality of the application. Identify
the inputs, outputs and expected behavior for different test cases.
2. You are tasked with testing a sorting algorithm implemented in a programming language
such as Python or Java. Explain how you would perform white-box testing to ensure the
correctness and efficiency of the algorithm.
Check Your Understanding - Answers
1. b 2. c 3. b 4. b 5. b
Further Readings and Bibliography
●● Software Testing: Principles and Practices by Srinivasan Desikan and Gopalaswamy
Ramesh.
●● Foundations of Software Testing: ISTQB Certification by Dorothy Graham, Erik van
Veenendaal, Isabel Evans and Rex Black.
●● Testing Computer Software by Cem Kaner, Jack Falk and Hung Q. Nguyen.
●● Lessons Learned in Software Testing by Cem Kaner, James Bach and Bret Pettichord.
●● Agile Testing: A Practical Guide for Testers and Agile Teams by Lisa Crispin and Janet
Gregory.

Module - V: Software Project Planning and Management


Learning Objectives

At the end of this module, you will be able to:
●● Explain software project planning and management
●● Analyse the planning of software projects
●● Define software metrics
●● Understand software maintenance and its types
Introduction
Strong coordination, leadership and communication abilities are necessary for the
planning and management of software projects. A thorough understanding of software
development techniques, tools and best practices is also necessary. Teams can enhance
the probability of delivering superior software products within budget and on schedule by
adopting a methodical approach and utilising suitable project management approaches.


5.1 Software Project Planning



Software project planning is the process of defining the scope, objectives, tasks,
schedules and resources needed to successfully execute a software development project.
It involves thorough analysis and decision-making to ensure that the project is completed
on time, within budget and meets stakeholder expectations.

5.1.1 Introduction to Software Project Planning and Management


)A


management. “A temporary endeavour undertaken to create a unique product, service,
or result” is what is meant by a project. Contrarily, operations refer to the work done by
organisations to maintain their business. It centres on the continuous creation of products
and services. Projects terminate when their goals are met or the project is ended, which is

how they vary from operations. It is crucial to remember that collaboration between those
working on operations and projects is necessary for a seamless transition.
DevOps, for instance, is a relatively new term in software development that refers to
a culture of cooperation between operations and software development teams to build,
test and deliver dependable software more quickly. It’s critical to distinguish projects from
organisational procedures before working on them. A process is any continuous, daily
activity that an organisation engages in to create goods and services. A process

integrates efforts in a repeatable way by using already-existing systems. It has multiple
goals and is a component of the line organisation as well.

in
On the other hand, a project is carried out in a different way from regular, process-
driven work. A project, in contrast to a process, is a one-time action having a
predetermined start and end to the series of tasks. It is typically described in terms of

the expected budget, timeline and level of performance. Projects require the abilities and
capabilities of personnel from many organisational departments.
According to the widely accepted definition found in the PMBOK Guide, a project is defined as

a “temporary endeavour undertaken to create a unique product or service.”
A project is defined as the accomplishment of a certain goal, whereas project
management is the process of organising, coordinating and managing a project in order

to achieve its goals. This excludes the crucial interpersonal interactions and project
assessment carried out following project completion. Change is brought about by projects
and project management is the effective handling of change.
“The application of knowledge, skills, tools and techniques to project activities in

order to meet or exceed stakeholder needs and expectations from a project” is the PMI’s
definition of project management, which is more broadly stated.

Program Management and Portfolio Management


A program is an assortment of connected undertakings that rely on one another.
Projects are a component of a program that is managed and coordinated collectively to
provide advantages and control that are not possible with individual management. Its
ultimate delivery may be a good or service and both have the same objective. The primary

focus of program management is on the interdependencies across projects, which enables


the development of the best possible management strategy. Managing resource restrictions,
aligning strategic directions and managing change management challenges are among the

primary actions associated with interdependencies.


Program management is described as “centralised coordinated management of a
program to achieve the program’s strategic objectives and benefits” in the PMBOK (2008)

handbook.
A portfolio is an organisation’s assortment of initiatives and/or programs that are
arranged to support efficient administration in the pursuit of the strategic business goal.
Projects can share the same resources and a common strategic goal thanks to portfolio
m

management. Maintaining a balance between short-term demands and restrictions


and long-term strategic goals is another benefit of portfolio management. “The central
management of one or more portfolios, which includes identifying, prioritising, authorising,

managing and controlling projects, programs and other related work, to achieve specific
strategic business objectives” is how the PMBOK (2008) described portfolio management.
Project portfolio management, or portfolio management as it is referred to in
this text, gathers and manages projects and programs as a portfolio of investments that
contribute to the success of the entire enterprise. Project managers in many organisations

also support this emerging business strategy. Portfolio managers choose and evaluate
projects from a strategic standpoint, assisting their companies in making informed
investment decisions. It is possible for portfolio managers to have worked as project
or program managers in the past. The most crucial thing is that they comprehend how

initiatives and programs may help achieve strategic goals and possess good analytical and
financial skills.
Project Management Body of Knowledge

The majority of projects are managed using the PM body of knowledge as a standard.
The phrase refers to the general knowledge within the project management profession
and is inclusive. It contains tried-and-true methods and instruments for overseeing project
management procedures in order to achieve successful project outcomes. The
PMBOK guide, published by the PMI, is the source of the body of knowledge that identifies
and acknowledges best practices. The body of knowledge outlines the essential knowledge
domains for project management skills and tasks that all practitioners must comprehend in
order to receive comprehensive training for their position. A more comprehensive rundown
of the project management procedures is included in this knowledge area. The PMBOK
handbook lists nine knowledge domains.
“The application of knowledge, skills, tools and techniques to project activities to

ity
of individuals involved in or impacted by project activities, in addition to working towards
defined scope, schedule, cost and quality targets.

Project Scope Management

Four essential tasks are part of the project scope management process: scope
definition, work breakdown structure (WBS), project delivery plan and scope change
control. Every project that is completed follows a defined procedure for project scope
management. A well-executed scope management strategy guarantees that the scope is
precisely specified and communicated to all relevant parties.
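A work breakdown structure is, in essence, a hierarchical decomposition of the project scope into work packages. A minimal sketch of a two-level WBS follows; the package names and the nested-dictionary representation are entirely hypothetical, chosen only to make the idea concrete.

```python
# Hypothetical two-level WBS for a small software project.
# Keys are summary elements; values are their lowest-level work packages.
wbs = {
    "1 Requirements": ["1.1 Elicitation", "1.2 Specification review"],
    "2 Design":       ["2.1 Architecture", "2.2 Interface design"],
    "3 Construction": ["3.1 Coding", "3.2 Unit testing"],
}

def work_packages(wbs):
    """Flatten the WBS into the list of work packages to be estimated and scheduled."""
    return [pkg for packages in wbs.values() for pkg in packages]

print(len(work_packages(wbs)))  # 6
```

The flattened list is what feeds the later knowledge areas: each work package receives a duration estimate, a cost estimate and a responsible owner.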
Project Time Management


Estimating the length of project work packages, estimating resource requirements,
project scheduling (including sequencing and prioritising) and time change control are the
four main components of time management, similar to scope change management.
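Sequencing, one of the scheduling activities mentioned above, amounts to ordering activities so that every activity comes after the activities it depends on. A minimal sketch using Python's standard `graphlib` module; the task names and dependencies are invented for illustration.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical dependencies: each task maps to the set of tasks it depends on.
deps = {
    "requirements": set(),
    "design": {"requirements"},
    "code": {"design"},
    "test": {"code"},
}

# static_order() yields tasks so that every predecessor appears first.
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['requirements', 'design', 'code', 'test']
```

Real scheduling methods (such as critical-path analysis) build on exactly this kind of precedence ordering, adding duration estimates on top of it.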

Project Cost Management


The primary tasks carried out in project cost management are the processes for work
package cost estimation, project cost planning and cost change control.

Project Quality Management


In the context of a project, quality management refers to the processes by which the
executing organisation establishes and carries out quality standards, objectives, policies
and responsibilities in order to meet the needs for which the project was initiated. The three
primary procedures in project quality management are quality planning, quality assurance
and quality control. Since the results of various project management procedures are
compared to predefined criteria, this field of knowledge is crucial.

Project Human Resource Management


A fundamental component of project management expertise and a necessity for project
success is human resource management. It is the procedure necessary to utilise people’s
skills for a project in the most efficient way possible. Customers, sponsors, stakeholders,
team members, individual contributors and others are all included in project HRM. As per
PMBOK 2008, HRM comprises three primary processes:
●● Organisational planning: this involves determining, recording and allocating reporting
relationships, roles and duties for projects (individuals and groups)
●● Staff acquisitions: the process of acquiring the necessary personnel and assigning
them to projects.
●● Team development: focuses on improving the team’s and stakeholders’ managerial
and technical capabilities.

Project Risk Management

Identifying and managing project risk is the process of project risk management. By
taking appropriate management activities, risks can be mitigated or avoided and risk
management helps to maintain a balance between opportunities and dangers. Risk
ownership, risk identification, risk analysis, risk response and backup plans are all included
in project risk management.

Project Communication Management

Project communication management creates the vital connections between individuals,
concepts and data required for a project’s success. Determining who needs what
information, in what format and for how long is one of its components. It is the most crucial
element of the knowledge fields related to project management since it guarantees that
project data are created, gathered, preserved, distributed and shared in a timely manner in
accordance with the official communication plan.

Project Procurement Management

Contract management is another name for project procurement management. It
includes the procedures necessary to purchase goods and services from suppliers.
Planning for the procurement process, requesting bids for goods and services, choosing
possible suppliers, managing contracts and contract close-out are also included.
ni

Project Integration Management


In order to create consistent, thorough and well-designed project processes and
activities, as well as to coordinate the various activities related to project planning,


execution and control, this knowledge area integrates the outputs of other project
management bodies of knowledge. The result is the integrated project plan, which includes
the following: a project charter; a description of the PM strategy; a scope statement (a
ity

list of the deliverables and objectives of the project); a work breakdown structure (WBS);
cost estimates; scheduled start and finish dates; responsibility assignments; performance
measurement; milestone schedules; necessary key personnel; key risks; constraints;
assumptions; subsidiary management plans; and other supporting details (outputs from
m

other planning processes not included in the project plan, documentation of technical details
and relevant standards).

Project - Life - Cycle (PLC)


)A

A project is a work process that involves multiple unique phases or stages in order
to reach a certain goal. This process is known as the project life cycle. It is sometimes
referred to as the project development stages. PLC serves as an example of the project
management logical structure. It serves as a framework for creating our plans, choosing
(c

when to assign resources and assessing the project’s advancement. It is crucial to map the
life cycle and cost of a project over its whole duration, as certain deliverables and activities
may change as it progresses. Several project managers classified PLC into various phases
based on the scope and intricacy of the project. Four phases are recognised for the project
life cycle, according PMBOK;
Amity University Online
172 Software Engineering and Modeling

●● Starting the project


●● Organising and preparing
Notes
●● Carrying out the project

e
●● Closing the project

in
The project manager can divide the task into more manageable chunks by categorising
the deliverables into distinct divisions for each phase of the project. The borders may or
may not be merged and overseeing the various project stages calls for a variety of abilities

nl
as well as control and monitoring systems. The. The first phase of the project involves
planning and involves fewer staff members (key stakeholders), with limited associated
costs. In contrast, the execution phase involves carrying out the actual project activities and
requires a larger degree of staff engagement and costs, as indicated in the figure below.

O
ity
rs
ve
ni

Figure: Typical cost and staffing levels across the Project Life Cycle
U

Importance of Project Management


Applying knowledge, skills, tools and procedures to project activities in order to achieve
project requirements is known as project management. The proper use and integration of
ity

the project management procedures chosen for the project enable project management to
be completed. Organisations may carry out projects effectively and efficiently with the help
of project management.
Good project management facilitates the following activities for people, teams and
m

public and commercial organisations.


●● Meet business objectives;
●● Satisfy stakeholder expectations;
●● Be more predictable;
●● Increase chances of success;
●● Deliver the right products at the right time;
●● Resolve problems and issues;
●● Respond to risks in a timely manner;
●● Optimise the use of organisational resources;
●● Identify, recover, or terminate failing projects;
●● Manage constraints (e.g., scope, quality, schedule, costs, resources);
●● Balance the influence of constraints on the project (e.g., increased scope may
increase cost or schedule); and
●● Manage change in a better manner.

Poorly managed projects or the absence of project management may result in:
●● Missed deadlines,
●● Cost overruns,
●● Poor quality,
●● Rework,
●● Uncontrolled expansion of the project,
●● Loss of reputation for the organisation,
●● Unsatisfied stakeholders and
●● Failure in achieving the objectives for which the project was undertaken.
Project management is no longer a management practice reserved for unique needs;
it is quickly becoming the norm in company operations. (See the 2009 edition of Snapshot
from Practice: Project Management in Action.) The average firm is dedicating a larger
portion of its effort to projects. The importance and contribution of projects to an
organisation's strategic direction are expected to grow in the future. This is the case for a
number of reasons, which are briefly covered here.
Compression of the Product Life Cycle
The reduction of the product life cycle is one of the main factors driving the need for
project management. For instance, the average product life cycle in high-tech companies
nowadays is between one and three years. Life cycles of ten to fifteen years were not
unusual thirty years ago. For innovative products with limited life cycles, time to market has
become more and more crucial. In the area of high-tech product development, it is generally
accepted that a six-month project delay can cause a 33 percent reduction in the product's
revenue share. Speed thus becomes a competitive advantage; increasing numbers of
businesses are depending on interdisciplinary project teams to launch innovative goods and
services as soon as feasible.
Knowledge Explosion
Project complexity has increased as a result of the expansion of new knowledge, since
projects now incorporate the newest developments. For instance, constructing a road
thirty years ago was a reasonably easy procedure. The complexity of materials, standards,
rules, aesthetics, equipment and necessary professionals has expanded in every sector
nowadays. Similarly, it is getting more difficult to find a new product in today's digital and
electronic age that does not include at least one microchip. Integrating divergent
technologies is now more important than ever due to product complexity, and project
management has become a crucial discipline for doing so.

Triple Bottom Line (planet, people, profit)


The increasing threat posed by global warming has elevated the need for sustainable
business operations. Companies can no longer concentrate solely on increasing profits at
the expense of society and the environment. Effective project management makes it
possible to lower carbon footprints and use renewable resources. The goals and methods
employed to finish projects have changed as a result of this shift towards sustainability.
(See the Snapshot from Practice on Dell Children's, the world's first "green" hospital.)
Corporate Downsizing

Organisational life has undergone a significant transformation in the past ten years.
Many businesses now depend on downsizing (or rightsizing, if you're still employed) and
sticking to their core skills in order to survive. Only a skeleton of middle management
remains. In today's flatter and leaner organisations, where change is inevitable, project
management is taking the place of middle management as the go-to method for making
sure things get done. A further effect of corporate downsizing is a shift in how organisations
handle projects: large portions of project work are outsourced, and project managers are
responsible for overseeing both their own staff and those of their colleagues in other
organisations.

Increased Customer Focus

Customer satisfaction is becoming more important than ever due to increased
competition. Consumers no longer accept only generic goods and services; they desire
goods and services that are tailored to their own requirements. The provider and the
recipient must collaborate considerably more closely in order to fulfil this mission.
Salespeople and account executives are taking on more of a project management
responsibility as they collaborate with their company to meet the specific needs and
demands of customers.

Customised goods and services have also been developed as a result of increased
consumer attention. Purchasing a set of golf clubs, for instance, was a fairly straightforward
procedure ten years ago: you chose a set based on feel and price. Golf clubs are now
available for both tall and short players, for those who slice or hook the ball, and as
high-tech clubs featuring the newest metallurgical finding that promises to increase
distance, and so on. Project management is essential for maintaining profitable customer
connections as well as for the development of customised goods and services.
An organisation that works on software projects follows the Software Development Life
Cycle (SDLC). It comprises a thorough plan for how to create, update, swap out, modify
and improve particular software. The life cycle defines a methodology for raising the calibre
of the software and of the entire development process.

The several phases of a typical SDLC are graphically represented in the following
diagram.
Figure: Software Development Cycle

Stage 1: Planning and Requirement Analysis
The most crucial and fundamental phase of the SDLC is requirement analysis. The
senior team members carry it out with assistance from the sales department, the client,
market research and industry domain specialists. The fundamental project strategy is then
planned using this information, and a technical, operational and financial examination of the
product's viability is carried out.

During the planning stage, it is also necessary to identify the risks related to the project
and arrange for quality assurance requirements. The technical feasibility study's conclusion
identifies the different technical strategies that can be used to carry out the project profitably
and with the fewest possible risks.

Stage 2: Defining Requirements
Following requirement analysis, the product needs must be precisely defined,
documented and approved by the customer or market analysts. This is done through an
SRS (Software Requirement Specification) document, which includes all of the product
requirements to be planned and developed throughout the project life cycle.

Stage 3: Designing the Product


When it comes to creating the ideal architecture for a product, product architects use
the SRS as a guide. A DDS (Design Document Specification) records multiple design
approaches for the product architecture, each of which is proposed based on the
requirements outlined in the SRS.

All significant stakeholders review this DDS, and the optimal design approach is
chosen for the product based on a number of factors including risk assessment, product
robustness, design modularity, and budget and schedule restrictions.

The chosen design approach outlines all of the product's architectural modules in
detail, as well as how they communicate and exchange data with external and third-party
modules (if any). The DDS should define the internal design of every module of the
proposed architecture, down to the last detail.
Stage 4: Building or Developing the Product


The actual development phase of the SDLC begins here, and the product is
constructed. At this point, the programming code is generated in accordance with the DDS.
Code creation can be completed easily with careful planning and organisation during the
design process. When writing code, developers have to abide by the coding standards set
forth by their company. Compilers, interpreters, debuggers and other programming tools are
used to generate code. Coding is done in a variety of high-level programming languages,
including C, C++, Pascal, Java and PHP. The kind of software being produced influences
the choice of programming language.
Stage 5: Testing the Product


Since testing activities are included in all stages of the SDLC in modern models, this
stage is typically a subset of all of them. This stage, however, refers only to the product's
dedicated testing phase, during which product flaws are discovered, monitored, corrected
and retested until the product meets the quality standards defined in the SRS.
Stage 6: Deployment in the Market and Maintenance


The product is formally released in the relevant market once it has undergone testing
and is prepared for deployment. Product deployment might occasionally take place in
phases in accordance with the company's business plan. The product might be tested in
an actual business setting before being made available to a restricted market (UAT: User
Acceptance Testing).

The product may then be released with proposed improvements in the targeted market
niche, or as is, depending on the feedback received. Once the product is introduced to the
market, its current customer base receives maintenance.
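The six stages above can be modelled as a simple ordered structure. The sketch below is only illustrative: the stage order and the SRS/DDS deliverables follow the text, but the identifier names and the `next_stage` helper are assumptions, not part of any standard API.

```python
from enum import Enum
from typing import Optional


class SDLCStage(Enum):
    """The six SDLC stages described above, in order."""
    PLANNING_AND_REQUIREMENT_ANALYSIS = 1
    DEFINING_REQUIREMENTS = 2
    DESIGNING_THE_PRODUCT = 3
    BUILDING_THE_PRODUCT = 4
    TESTING_THE_PRODUCT = 5
    DEPLOYMENT_AND_MAINTENANCE = 6


# Key output of each stage, as described in the text (illustrative labels).
STAGE_OUTPUT = {
    SDLCStage.PLANNING_AND_REQUIREMENT_ANALYSIS: "feasibility study and basic project plan",
    SDLCStage.DEFINING_REQUIREMENTS: "SRS (Software Requirement Specification)",
    SDLCStage.DESIGNING_THE_PRODUCT: "DDS (Design Document Specification)",
    SDLCStage.BUILDING_THE_PRODUCT: "source code written to the DDS",
    SDLCStage.TESTING_THE_PRODUCT: "product meeting the quality standards of the SRS",
    SDLCStage.DEPLOYMENT_AND_MAINTENANCE: "released product and maintenance releases",
}


def next_stage(stage: SDLCStage) -> Optional[SDLCStage]:
    """Return the stage that follows, or None after deployment/maintenance."""
    return SDLCStage(stage.value + 1) if stage.value < len(SDLCStage) else None
```

For example, `next_stage(SDLCStage.DEFINING_REQUIREMENTS)` yields the design stage, mirroring the SRS-to-DDS hand-off described above.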

5.1.2 Planning Software Projects

Choosing the right SDLC (or SDLCs) for software development in accordance with the
requirements of the project is crucial. If we choose an inappropriate SDLC and later
determine that our choice was incorrect, we must switch to a different SDLC that better
suits the requirements of our project.

Choosing the wrong SDLC can lead to:


●● Longer development time and increased costs.
●● Missing tasks and inappropriate task ordering, which undercut project planning and
efficiency.
●● Dissatisfaction of the project's stakeholders (i.e., customers, upper managers and
project members).
●● Risk exposure.

SDLC Selection Methods


The following methods/techniques are used for SDLC selection:
●● Making use of rule-based methods, including expert systems.
●● Dillman proposes a decision cube with two conditions for each of three criteria:
budget (high or low), time horizon (short or long) and functionality (dynamic or static);
the suggested SDLC is the resulting quadrant.
●● Making use of SDLC comparison tables.
U

●● Applying heuristic ranking algorithm(s) (the outcome may be a list of models and their
assigned scores).
●● Selecting the right SDLC with the help of specialists; experts from outside the project
or company make this decision, typically based on past successes and failures.
●● You are compelled to choose the SDLC in accordance with the processes established
by the firm or foundation's set procedures. For instance, certain businesses or
foundations describe the development process as a single cycle of phases, including
requirement specification, design, coding, integration and testing, and specify that
several documents must be produced at each stage. You can feel comfortable
including requirement specification, design, coding, integration and testing and they
specify that several papers must be produced at each stage. You feel comfortable
)A

using waterfall or waterfall-style SDLCs in that situation.


●● All projects adhere to the company’s decision to adopt one or more established
SDLCs.
●● Look through books and the internet to select one or more SDLCs.
(c

●● Ad hoc selection: The SDLC selection process lacks a defined, conventional method
or solution. Project managers use a heuristic to select one SDLC. Project and
company managers typically follow the same SDLC from start to finish.
●● Applying fuzzy logic.
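Of the methods above, Dillman's decision cube lends itself to a direct lookup over the three binary criteria. In the sketch below, the SDLC assigned to each of the eight quadrants is an illustrative guess — the text does not list Dillman's actual assignments — so treat the mapping as a placeholder.

```python
# Hypothetical decision-cube lookup: three binary criteria -> suggested SDLC.
# The model chosen for each quadrant is illustrative, not Dillman's published mapping.
DECISION_CUBE = {
    ("low",  "short", "static"):  "Waterfall",
    ("low",  "short", "dynamic"): "Prototyping",
    ("low",  "long",  "static"):  "Incremental",
    ("low",  "long",  "dynamic"): "Agile",
    ("high", "short", "static"):  "V-Model",
    ("high", "short", "dynamic"): "RAD",
    ("high", "long",  "static"):  "Incremental",
    ("high", "long",  "dynamic"): "Spiral",
}


def select_sdlc(budget: str, horizon: str, functionality: str) -> str:
    """Return the SDLC suggested for one quadrant of the decision cube.

    budget: 'high' | 'low'; horizon: 'short' | 'long';
    functionality: 'static' | 'dynamic'.
    """
    key = (budget, horizon, functionality)
    if key not in DECISION_CUBE:
        raise ValueError(f"not a cube quadrant: {key}")
    return DECISION_CUBE[key]
```

The point of the cube is that three yes/no judgements are enough to land in exactly one of eight quadrants, each pre-assigned an SDLC.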

SDLC Selection Criteria
When choosing an SDLC for a project, the following factors can be taken into account:
●● Functionality (a project's functional requirements will either be dynamic and change
during its life cycle, or static and stay the same), budget and time horizon (short term
versus long term),
●● How clearly the requirements are defined,
●● Size of the product and team (number of developers),
●● Intricacy of the system or software,
●● The experience of developers and users, including the users' capacity to articulate
needs and the developers' application domain and software engineering experience,
●● The project's characteristics, including its size, level of structure, development duration
and function within the organisation; the IT and organisational infrastructure to support
the development effort and the application; and the knowledge and skill sets of the
project team,
●● The quantity of critical hazards,
●● Cost estimates and early functional requirements,
●● The level of client participation in the project,
●● The project (development) environment, including the organisation's features, the level
of experience with the technology to be employed and the competitive pressures to get
the project started,
●● Product-related criteria, such as the information system type, software size and
complexity, system architecture, modularity and degree of module integrity, and the
diversity and clarity of user needs,
●● Project-related criteria, including fundamental project dimensions such as the project's
estimated timeline, total project costs, resource characteristics, project hazards and
user types based on computer proficiency.
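The heuristic ranking approach mentioned among the selection methods can be sketched as a weighted sum over criteria like those above. All weights, ratings and candidate models below are made-up illustrations, not values from any published method.

```python
def rank_sdlcs(candidates, weights):
    """Rank candidate SDLCs by a weighted sum of criterion ratings.

    candidates: {model_name: {criterion: rating on a 0-5 scale}}
    weights:    {criterion: importance weight}
    Returns model names sorted best-first.
    """
    def score(model):
        # Weighted sum; criteria without a weight contribute nothing.
        return sum(weights.get(c, 0) * r for c, r in candidates[model].items())

    return sorted(candidates, key=score, reverse=True)


# Hypothetical ratings against two of the criteria listed above.
weights = {"fit_to_stable_requirements": 3, "risk_handling": 2}
candidates = {
    "Waterfall": {"fit_to_stable_requirements": 5, "risk_handling": 1},  # 3*5 + 2*1 = 17
    "Spiral":    {"fit_to_stable_requirements": 2, "risk_handling": 5},  # 3*2 + 2*5 = 16
}
ranking = rank_sdlcs(candidates, weights)
```

With these made-up numbers the output is a ranked list — Waterfall (score 17) ahead of Spiral (score 16) — which matches the method's described outcome: "a list of models and their assigned scores".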
Stages of Project Management


Project management is one of the most important processes in any project, because it
is the fundamental process that ties all other project-related activities and processes
together.

Project management activities take many forms; these numerous tasks, however, can
be divided into five primary processes.

Project Initiation
Any project begins with the project initiation phase, during which all of the procedures
involved in winning a project take place. Pre-sales activities typically constitute this phase's
primary work.

During the pre-sales phase, the service provider wins the business by demonstrating to
the client that they are qualified and capable of finishing the project. Next comes the
process of acquiring precise requirements: all of the customer's needs are gathered and
examined in preparation for implementation. Negotiations may be conducted during this
activity to modify or eliminate specific requirements.

Requirements sign-off often marks the conclusion of the project initiation phase.

Project Planning
Project planning is one of the primary processes in project management. The project
management team may face severe repercussions in the later stages of the project if they
make a mistake at this point, so the team must give the project's methodology careful
consideration.

During this step, the project plan is created to address the project's needs, including its
scope, budget and deadlines. Once the project plan has been determined, the project
schedule is created, and resources are then assigned to the project based on the budget
and timeline. In terms of project effort and cost, this phase is the most crucial.

Project Execution
In this phase, the project management team completes all paperwork and then
manages the project to meet the project goals. During execution, each team member
completes their respective tasks by the deadline specified for each task, and the
comprehensive project schedule is used to monitor the advancement of the project.

Numerous reporting tasks must be completed throughout the project's execution. The
company's senior management will need status updates on the project's advancement on a
daily or weekly basis, and the client might also like to monitor progress. To ascertain
whether the project is moving in the right direction, it is imperative to monitor the project's
effort and expense during its execution.

In addition to reporting, several deliveries must be made during the course of the
project. Project deliveries are typically not one-time items handed over at the project's
conclusion; rather, they are made on schedule, spaced out over the course of the project's
execution.

Control and Validation


The project operations should be closely monitored and verified during the whole
duration of the project. The primary means of control is to follow the original procedures,
such as the project plan, the quality assurance test plan and the communication plan.

There may occasionally be situations that fall outside the purview of these protocols. In
these scenarios, the project manager should control the situation by taking appropriate and
essential measurements.

Validation is an auxiliary task that continues from the first to the last day of a project. To
ensure a successful conclusion and effective completion, every activity and delivery should
have its own set of validation criteria.

A distinct team known as the "quality assurance team" will support the project team
with validation and verification tasks in relation to project deliveries and requirements.
Closeout and Evaluation


It is time to hand over the implemented system and wrap up the project once all
requirements have been met. The client will formally accept and pay for the project if the
deliveries meet the acceptance criteria that have been specified.

Once the project has been closed out, it is time to assess it as a whole. From this
evaluation the project team will identify its faults and take the appropriate action to prevent
them in future projects.

During the project evaluation phase, the service provider may discover that they have
not made the anticipated profit margins and may have gone over the initial schedules. In
these situations, the project is not entirely successful for the service provider. As a result,
such incidents need to be thoroughly examined and appropriate steps taken to prevent
them in the future.

Project management is the process that establishes the project's harmony and links all
other project activities together. As a result, the project management team needs to be
well-versed in all project management processes as well as the tools available to them.

Project Management Plan Contents

The project management plan briefly describes the project's general scope, timeline
and cost baselines. More thorough baseline data is available through the specific plans for
each of those knowledge areas. For instance, the cost baseline created as part of the
project cost management knowledge area includes precise cost forecasts by WBS per
month, while the project management plan may provide only a high-level baseline budget
for the entire project.

A description of the project life cycle and the development approach can also be
included in the project management plan.

Project management plans should be dynamic, adaptable and sensitive to changes in
the project's environment. These plans should greatly help the project manager in
managing the team and determining the project's current state. Project plans are as distinct
as projects themselves: a project charter, scope statement and Gantt chart may be the only
official project planning documents required for a small project lasting a few months,
eliminating the need for a separate project management plan, whereas a comprehensive
project management plan and individual plans for each knowledge area would benefit a
large project employing 100 individuals over a three-year period. Project plans should be
customised for each project as needed, as they are meant to help direct the completion of
the specific project.

Nonetheless, the majority of project management plans share the following
characteristics:
●● Introduction/overview of the project
●● Project organisation
●● Management and technical processes (including project lifecycle description and
development approach, as applicable)
●● Work to be performed (scope)


●● Schedule and budget information
●● References to other project planning documents
(c

Guidelines to Create Project Management Plans


Guidelines are used by many organisationswhile developing project management
strategies. There are various template files included with project management software
programs such as Microsoft Project 2016 that can be used as guides. But a Gantt chart and
Amity University Online
180 Software Engineering and Modeling

a project management plan are not the same thing. As previously said, a Gantt chart is just
one component of a project management strategy.
Notes There are also a lot of government organisations that offer project management plan

e
creation instructions. For instance, contractors developing software development plans
for DOD projects must adhere to the format specified in U.S. Department of Defence

in
(DOD) Standard 2167, Software Development Plan. The contents of its Software Project
Management Plan (SPMP) are outlined in IEEE Standard 1058-1998, which is issued by
the Institute of Electrical and Electronics Engineers.

nl
This or a comparable criteria must be adhered to by businesses who work on
Department of Defence software development projects.
Specific documentation standards are generally less strict in private organisations,

O
but project management plan development is frequently governed by guidelines. It is
best practice to create project management plans in accordance with the organisation’s
standards or rules in order to make its execution easier.

ity
rs
ve
ni
U
ity

IEEE software project management plan (SPMP)

Directing and Managing Project Work


The project management plan, which is used to direct and manage the work that has
to be done, is one of the primary inputs for this process. Approved change requests,
corporate environmental factors and organisational process assets are examples of
additional inputs. Both the bulk of the budget and the majority of the time spent on a project
are typically allocated to its execution.

Since products are generated during the execution phase of a project, the application
area has a direct impact on how the project is carried out. For instance, all related software
and documentation, as well as the next-generation DNA sequencing device from the
opening case, would be generated during project execution. To effectively develop the
device, the project team would need to draw on its experience in testing, hardware and
software development and biology.
To successfully carry out the project management plan, the project manager must
concentrate on managing stakeholder relationships and guiding the project team.
Stakeholder, communications and project resource management are critical to a project's
success.
Coordinating Planning and Execution

e
Project planning and execution are intertwined, inseparable tasks in project integration
management. The primary purpose of developing a project management plan is to guide
the project's execution. A good plan should define what constitutes good work results and
should help produce good products or work outcomes. Plans should be updated to take into
account lessons learned from earlier project work. A solid plan is crucial, as anyone who
has attempted to create a computer program from scratch will attest; good execution is
important, as anyone who has had to document a badly programmed system will know.

To improve coordination between the creation and execution of project plans, this
straightforward guideline should be followed: those who will do the work should plan the
work. Every member of the project team must acquire expertise in both planning and
executing. Programmers who have to develop intricate specifications for IT projects and
then use those specs to create code get better at creating specifications. Similarly, the
majority of systems analysts start out as programmers, so they are aware of the kinds of
analyses and documentation required to produce high-quality code. While creating the
overall project management plan, project managers also need to consult with team
members who are creating plans for each knowledge area.

Providing Strong Leadership and a Supportive Culture


ve
Having a supportive organisational culture and strong leadership are essential for
project execution. Project managers need to set a good example to emphasise how
important it is to draft sound project plans and then adhere to them when carrying out
projects. Project managers frequently make plans for tasks they must do themselves, which
increases the likelihood that their team members will follow through on their own plans.

A supportive organisational culture is also necessary for good project execution.
Organisational practices, for instance, might facilitate or impede the completion of a project.
It will be simpler for project managers and their teams to plan and complete their work if an
organisation has helpful project management templates and guidelines that everyone in the
organisation follows. If work is carried out based on the project plans and progress is
tracked throughout the execution phase, the organisation's culture will support the
relationship between effective planning and execution. Project managers and their teams,
on the other hand, will become irritated if organisations have unclear or bureaucratic project
management principles that make it difficult to complete tasks or track progress against
goals.

In order to deliver project outcomes on time, project managers may occasionally need
to breach the rules, even in the presence of a supportive organisational culture. The
outcomes of project managers who violate regulations will be influenced by politics. For
instance, if the project calls for the use of nonstandard software, the project manager must
use political ability to persuade interested stakeholders that standard software would be
insufficient. Possessing strong political, communication and leadership abilities is
necessary to break organisational regulations and get away with it.
use of nonstandard software. Possessing strong political, communication and leadership
abilities is necessary to break organisational regulations and get away with it.
(c

Capitalising on Product, Business and Application Area Knowledge


To successfully complete projects, project managers need to have a solid
understanding of business, product and application areas in addition to excellent

Amity University Online


182 Software Engineering and Modeling

leadership, communication and political abilities. IT project managers frequently benefit


from having previous technical experience or even just a basic understanding of IT
Notes products. For instance, it would be beneficial for the project manager to speak with the

e
team’s business and technical experts in order to help develop user requirements.
Since many IT projects are tiny, project managers might need to assist with technical

in
tasks or provide team members with guidance in order to finish the project. For instance,
having a project manager who can handle some of the technical work would be quite
beneficial for a three-month project involving just three team members to develop a Web-

nl
based application. But for bigger projects, the project manager’s main duty is to oversee the
group and interact with important project participants. The technical job cannot wait for the
project manager to finish it. It is normally preferable in this situation for the project manager

O
to have a deeper understanding of the project’s business and application area than its
technical aspects.
The project manager for very large projects needs to be knowledgeable about the
project’s business and application areas. For instance, Northwest Airlines has finished a

ity
number of initiatives to improve and expand its reservation systems. During moments of
high demand, the company employed over 70 full-time employees and expended millions of
dollars on the projects.
Despite lacking experience in an IT department, the project manager, PeeterKivestu,

rs
was well-versed in the airline sector and the reservation procedure. He made sure his team
leaders had the necessary technical and product expertise when he carefully selected them.
ResNet was a huge success and the first major IT project at Northwest Airlines
ve
overseen by a business manager as opposed to a technical expert. Numerous
establishments have discovered that sizable information technology initiatives necessitate
seasoned general managers who comprehend the technology’s commercial and application
domains rather than its technical aspects.
ni

Tools and Techniques for Plan Scope Management


It takes specific tools and methods, some of which are exclusive to project
U

management, to oversee and manage project activity. Project managers can carry out tasks
that are a part of execution procedures by using particular tools and methodologies. Among
them are the following:
ity

Expert judgement is crucial for making wise decisions; everyone who has worked on a
big, complicated project understands this. It is imperative for project managers to get advice
from specialists on several subjects, including programming languages, methodologies and
training approaches.
m

Meetings: Throughout the course of a project, meetings are essential. Meetings


over the phone and virtually are just as crucial as in-person interactions with individuals
or groups of people. Meetings facilitate the development of connections, the ability to
)A

read crucial body language and voice tonality and the facilitation of problem-solving
conversations. Setting up specific times for meetings with different stakeholders is
frequently beneficial. Nick could have arranged for senior managers to meet briefly once
a week, for instance. He also had the option of setting up daily stand-up meetings for the
project team, lasting ten minutes each.
(c

Project management information systems: There are currently hundreds of project


management software options available. Strong enterprise project management systems
that integrate with other systems, like financial systems and are available over the Internet
are used by many big businesses. Project managers or other team members can construct
Amity University Online
Software Engineering and Modeling 183
Gantt charts with links to other planning documents on an internal network, even in smaller
organisations. For instance, Nick or his helper may have made links to other important
planning documents made in Word, Excel, or PowerPoint and constructed a thorough Gantt Notes

e
chart for their project in Project 2016.
The procedures necessary to guarantee that the project comprises all of the work

in
necessary and only the work necessary to successfully complete the project are included in
project scope management. Its main focus is on specifying and regulating what is and is not
included in the project.

nl
A summary of the main procedures for project scope management:
●● Initiation—committing the organisation to begin the next phase of the project.

O
●● Scope Planning—developing a written scope statement as the basis for future project
decisions.
●● Scope Definition—subdividing the major project deliverables into smaller, more
manageable components.

ity
●● Scope Verification—formalising acceptance of the project scope.
●● Scope Change Control—controlling changes to project scope.

Initiation

The formal acknowledgement of the existence of a new project, or the recommendation that an ongoing project be carried out to its next stage, is known as initiation. The project is now formally linked to the performing organisation’s ongoing activities. Within certain establishments, the formal start of a project is contingent upon the conclusion of a feasibility study, preliminary plan, or comparable analysis that was started independently. Certain initiatives, particularly those involving internal services and new product development, are started informally and require a little bit of work to obtain the necessary clearances before being formally begun. Projects are usually approved due to one or more of the following reasons:
●● A demand from the market (an oil corporation authorising the construction of a new refinery in response to ongoing petrol shortages, for example).
●● A necessity for the firm (for example, a training provider approves a project to develop a new course in order to boost sales).
●● A request from a customer (for example, when an electrical utility approves a project to construct a new substation to supply a new industrial park).
●● A breakthrough in technology, such as when an electronics company approves a new project to create a video game player following the release of the video cassette recorder.
●● A mandate from the law (a paint manufacturer authorising a project to create regulations for the handling of hazardous products, for example).
Tools and Techniques for Initiation

Project selection methods: Methods for selecting projects typically fall into one of two main categories:
●● Techniques for measuring benefits, such as economic models, benefit contribution models, scoring models and comparison techniques.
●● Constrained optimisation techniques: mathematical models that employ multi-objective, dynamic, integer, nonlinear and linear programming algorithms.

Decision models is a common term used to describe these techniques. Decision models encompass both specialised (Analytic Hierarchy Process, Logical Framework Analysis, etc.) and generalised methodologies (decision trees, forced choice, etc.). It is common practice to approach the application of intricate project selection criteria in a sophisticated model as a distinct project phase.
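As an illustration of the benefit-measurement category, a weighted scoring model can be sketched as follows. The criteria, weights and candidate scores below are hypothetical, not taken from the text.

```python
# Hypothetical weighted scoring model, one of the benefit measurement
# techniques mentioned above. Each candidate project is scored 0-100
# against each criterion; the weights express relative importance and sum to 1.0.
CRITERIA_WEIGHTS = {"strategic_fit": 0.40, "roi": 0.30, "risk": 0.20, "urgency": 0.10}

PROJECTS = {
    "Project A": {"strategic_fit": 90, "roi": 70, "risk": 50, "urgency": 20},
    "Project B": {"strategic_fit": 60, "roi": 80, "risk": 80, "urgency": 60},
}

def weighted_score(scores: dict, weights: dict) -> float:
    """Weighted sum of the criterion scores for one project."""
    return sum(scores[c] * w for c, w in weights.items())

# Rank the candidates by weighted score, highest first.
ranking = sorted(
    PROJECTS,
    key=lambda p: weighted_score(PROJECTS[p], CRITERIA_WEIGHTS),
    reverse=True,
)
print(ranking)
```

The highest-scoring candidate is recommended; in practice, the weights themselves are often the subject of the expert judgement discussed above.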

in
Expert judgment:It will frequently be necessary to use expert judgement to evaluate the
process’s inputs. Such knowledge can be obtained from a variety of sources and can be
supplied by any organisation or person with specific training or knowledge, such as:

nl
●● Other units within the performing organisation.
●● Consultants.

O
●● Professional and technical associations.
●● Industry groups.

Scope Planning

The process of creating a documented scope statement that will serve as the foundation for all project decisions going forward, including the standards by which the project or phase will be judged to have been effectively completed, is known as scope planning. Both projects and subprojects require a formal scope definition. For instance, a scope statement outlining the parameters of the engineering company’s work on the design subproject is mandatory for any firm hired to build a petroleum processing plant. By outlining the main project deliverables and the project objectives, the scope statement serves as the foundation for an agreement between the project team and the project customer.

Tools and Techniques for Scope Planning


ni

●● Product analysis:Product analysis is the process of gaining a deeper comprehension


of the project’s output. Techniques including value engineering, systems engineering,
function analysis, value analysis and quality function deployment are included.
U

●● Cost/benefit analysis: Benefit/cost analysis is the process of calculating the outlays


and returns associated with different project options, both tangible and intangible.
Financial metrics like payback periods and return on investment are then used to
evaluate the relative desirability of the selected alternatives.
ity

●● Identification of alternatives: This is a general word for any method utilised to come up
with several project methods. Here, a range of general management approaches are
frequently employed, the most popular being lateral thinking and brainstorming.
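The payback-period and return-on-investment calculations used in cost/benefit analysis can be sketched as follows; the cost and benefit figures are illustrative only.

```python
# Hypothetical cost/benefit comparison for one project alternative.
def payback_period(initial_cost: float, annual_benefit: float) -> float:
    """Years needed for cumulative benefits to repay the initial outlay."""
    return initial_cost / annual_benefit

def simple_roi(total_benefit: float, total_cost: float) -> float:
    """(benefits - costs) / costs, expressed as a fraction."""
    return (total_benefit - total_cost) / total_cost

# Alternative 1: costs 100,000 and returns 25,000 per year for five years.
print(payback_period(100_000, 25_000))   # 4.0 years
print(simple_roi(25_000 * 5, 100_000))   # 0.25, i.e., 25% over the period
```

Shorter payback periods and higher ROI make an alternative more desirable, which is exactly the comparison the bullet above describes.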
Scope Definition

The goal of scope definition is to increase the accuracy of cost, time and resource predictions by breaking down the primary project deliverables (as stated in the scope statement) into smaller, more controllable components; to establish a baseline for monitoring and controlling performance; and to ensure that responsibilities are assigned clearly.

A project’s success depends on its scope being defined correctly. “Final project costs can be expected to be higher when there is poor scope definition due to the inevitable changes that cause rework, disrupt project rhythm, increase project time and lower worker morale and productivity.”
Tools and Techniques for Scope Definition
●● Work breakdown structure templates: A new project can frequently be started using a work breakdown structure from an earlier one as a template. WBSs can frequently be “reused,” despite the fact that every project is different from the others in some way. For instance, most projects within a certain organisation will have project life cycles that are the same or comparable and, as a result, the deliverables that are needed at each stage will also be the same or comparable. Standard or semi-standard WBSs are available for many application areas and can serve as templates. For instance, the U.S. Department of Defence has established standard work breakdown structures for Defence Materiel Items.
●● Decomposition: Decomposition is the process of breaking down the main project deliverables into smaller, easier-to-manage parts until the deliverables are sufficiently specified to support the planning, executing, controlling and closing phases of the project. The following are the main steps involved in decomposition:
●● Determine the project’s main components. Project management and project deliverables will, in general, be the key components. The main components, however, should always be specified in terms of how the project will actually be managed.
●● Determine whether sufficient time and cost estimates can be created for each component at this level of specificity.
●● List the components that make up the deliverable. To aid in performance measurement, constituent pieces should be defined in terms of concrete, verifiable outcomes.
●● Confirm that the breakdown is accurate.
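A WBS produced by the decomposition steps above can be represented as a simple tree; the project and deliverable names below are hypothetical.

```python
# A work breakdown structure as a nested dictionary: each key is a
# deliverable and its value holds the smaller components it is
# decomposed into. All names are hypothetical.
wbs = {
    "1 Plant design": {
        "1.1 Process design": {},
        "1.2 Civil design": {
            "1.2.1 Site survey": {},
            "1.2.2 Foundation design": {},
        },
    },
}

def leaf_components(node: dict) -> list:
    """Collect the lowest-level components, i.e., the items for which
    time and cost estimates are prepared."""
    leaves = []
    for name, children in node.items():
        if children:
            leaves.extend(leaf_components(children))
        else:
            leaves.append(name)
    return leaves

print(leaf_components(wbs))
```

Decomposition stops when each leaf is specific enough that time and cost estimates can be created for it, which is the check described in the second step.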

Scope Verification

The process of formalising the stakeholders’ (sponsor, client, customer, etc.) acceptance of the project scope is known as scope verification. To make sure everything was finished accurately and successfully, it is necessary to assess the work items and results. The scope verification procedure should determine and record the degree and scope of completion in the event that the project is terminated early. In contrast to quality control, which is primarily concerned with the correctness of the work results, scope verification is more concerned with the acceptability of the work results.
Tools and Techniques for Scope Verification

Inspection: Inspection includes activities such as measuring, examining and testing undertaken to determine whether results conform to requirements. Inspections are variously called reviews, product reviews, audits and walk-throughs; in some application areas, these different terms have narrow and specific meanings.

Scope Change Control

In order to ensure that changes are beneficial, scope change control focuses on three main areas: (a) influencing the factors that lead to scope changes, (b) identifying when a change has occurred and (c) managing the actual changes when and if they occur. Scope change control must be fully integrated with the other control procedures (such as quality control, cost control, time control and others).
Tools and Techniques for Scope Change Control
●● Scope change control system: Procedures for changing the project scope are outlined in a scope change control system. It consists of the documentation, tracking systems and authorisation levels required to approve modifications. When the project is completed under contract, the scope change control system needs to adhere to all applicable clauses in the contract.
●● Performance measurement: Performance measurement techniques help to assess the magnitude of any variations that occur. Determining the cause of the variance and whether corrective action is necessary are crucial components of scope change control.
●● Additional planning: Few projects go exactly as planned. Potential scope adjustments can necessitate WBS adjustments or the examination of different strategies.
5.1.3 Software Metrics

Measuring is an essential component of every engineering process. The software process can benefit from measurement in order to be continuously improved. Throughout a software project, measurement can help with project control, productivity evaluation, estimation and quality assurance. Measures can be used to evaluate the quality of the engineering products or systems that you build, as well as to gain a deeper understanding of the characteristics of the models that you generate.

Software engineers can utilise measurement to make tactical decisions as a project moves along and to evaluate the quality of work deliverables. However, software engineering is not based on the fundamental quantitative laws of physics, as other engineering specialties are. In the domain of software, direct measurements like voltage, mass, velocity, or temperature are rare. Software metrics and measures are debatable because they are frequently imprecise.

A software team’s primary concerns, in the context of the software process and the projects carried out using it, are productivity and quality metrics. These are measurements of the “output” of software development as a function of time and effort invested and of the “fitness for use” of the work products that are generated. Our interest is historical for the sake of planning and estimating. What was the productivity of software development for previous projects? How well-made was the software that was created? How can quality and productivity statistics from the past be projected into the present? How can they make our planning and estimating more precise?
Measurement is both a technological and a managerial tool. When done correctly, it gives you insight. Consequently, it helps the software team and the project manager make decisions that will make the project successful.
Software Measurement

Measurement, machine learning and the forecasting of future occurrences using these metrics are the three main focuses of data science. Measurement ascribes numbers or symbols to characteristics of real-world items. A measurement model with a uniform set of rules is needed to achieve this.

Despite the frequent interchangeability of the terms measure, measurement and metrics, it is crucial to understand their significant distinctions. A measure is established when one data point is gathered, such as the quantity of defects found in a single software component. When one or more data points are gathered, measurement takes place (for example, by looking into the number of errors in several component reviews and unit tests). Software metrics, such as the average number of errors discovered per review or the average number of errors discovered per unit test, somehow connect the various measures. To generate indicators, a software engineer gathers data and creates measurements. A metric, or set of metrics, that offers information about a software project, the software process, or the product itself is called an indicator.
Attributes of Effective Software Metrics

For computer software, hundreds of metrics have been proposed; nevertheless, not all of them offer the software engineer any useful assistance. Some require measurement that is too complicated, some are so arcane that few professionals in the real world can hope to understand them and some defy the most fundamental intuitive ideas about what truly constitutes high-quality software. Based on past experience, a metric will only be applied if it is simple to calculate and intuitive. It seems doubtful that a measure will be widely used if numerous “counts” and intricate calculations must be performed.

Ejiogu outlines a collection of characteristics that useful software metrics ought to possess. The measure should be reasonably simple to calculate and should not need a lot of time or effort to derive. When measuring a product attribute, the metric should validate the engineer’s intuitive beliefs (e.g., a metric measuring module cohesiveness should grow in value as the level of cohesion increases). The metric should always produce unambiguous outcomes. The metric should be computed mathematically using measures that do not result in strange unit combinations. For instance, dividing the number of members on the project team by the number of programming language variables in the program yields an odd mixture of units that is not immediately convincing. The requirements model, the design model, or the program’s actual structure should serve as the foundation for metrics. They should not be subject to the whims of the syntax or semantics of programming languages. Ultimately, the metric ought to furnish you with insights that can culminate in a final product of superior quality.
A metric is a way to quantify how much a system, one of its components, or a process possesses a particular attribute. The various characteristics of a software product, software development resource and software development process are measured using software metrics. Software metrics thus pertain to the assessment and quantification of many characteristics of the software product and the development process.
There are three main categories of software metrics. These are the following:
●● Product Metrics
●● Process Metrics
●● Resource Metrics

Product metrics are the measures of different characteristics of the software product. Two important characteristics of software are:
●● Size and complexity of software
●● Quality and reliability of software

Software attributes, such as the size of the finished program and the complexity of the software design, can be measured using metrics that are created for each stage of the development process.

Process metrics are measurements of many aspects of the software development life cycle. They measure several aspects of the software development process, such as the effectiveness of fault detection. They are employed to gauge the qualities of the tools, processes and strategies used in the creation of software systems.

The quantitative measurements of the different resources used in the software project are called resource metrics. Three main categories of resources are used in software development: (1) hardware resources, (2) software resources and (3) human resources. Resource metrics show how well the project is using its available resources. These kinds of metrics are of concern to project managers.
The metrics may also be hybrid metrics, such as cost per function point (FP), which incorporate data from the product, process and resource domains. The metrics are also categorised as follows:
●● Internal Metrics and
●● External Metrics

Internal metrics are those that are used to assess attributes that software developers consider to be more significant. Lines of Code (LOC) and other size measurements might be considered internal metrics. External metrics are those that are used to measure attributes that are thought to be more significant to the user. External metrics might include a product’s attributes that are apparent to users, such as performance, usability, usefulness and dependability. When compared to internal metrics, they are more difficult to measure and primarily subjective. Typically, they are stated in terms of internal measurements.
Software Size Metrics

The majority of the early research on product metrics focused on source code features. The size of software is the most crucial product metric. Many software size metrics have been put forward and put to use. A software product’s size is usually expressed as a straightforward count of certain properties that indicate its volume. There are three ways to express size metrics:
●● Metrics expressed in terms of the physical size of the program
●● Metrics expressed in terms of the meaningful functions or services it provides
●● Metrics expressed in terms of logical size

The physical size is the count of architectural (and physical) items at a selected architectural level. Some examples of this kind of metric are:
●● LOC
●● Number of classes
●● Number of bytes used by the program
The most popular size metric is LOC. The functional size metrics aim to measure the significant functions or services that consumers receive from the software. One well-known metric used in this area is Function Point Analysis (FPA), as specified by the International Function Point Users Group (IFPUG). There are numerous FP variations. One of these is the Object Point (OP) technique. The logical size measures combine physical and functional size with logical complexity. Logical size measures include Function Bang and Halstead’s software science metrics.

LOC Metrics
)A

The LOC, or thousands of LOC (KLOC), has historically been used as the main
indicator of software size. The majority of LOC metrics include all data declarations and
executable instructions; however, they do not include comments, blanks, or continuation
lines. By contrasting the functionality of the new software with identical functionality found
in other apps that already exist, LOC can be used to estimate size by analogy. Clearly, the
(c

foundation for a more accurate comparison is provided by having more specific information
about the new software’s functioning. It makes it possible to record the size data required
to create precise estimates for upcoming projects. The fact that LOC estimates are directly
related to the software that needs to be developed is by far their greatest benefit.
Amity University Online
Software Engineering and Modeling 189
But connecting software functional requirements to LOC can be challenging,
particularly in the early phases of development. It is difficult to obtain the degree of detail
needed for LOC estimation. It has been difficult to standardise the definition of how LOCs Notes

e
are counted because they are language specific. Despite the availability of conversion
factors, this makes comparing size estimates across programs developed in different

in
programming languages challenging.
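The counting convention described above (count declarations and executable statements, skip comments, blanks and continuation lines) can be sketched as a minimal LOC counter, shown here for Python-style "#" comments.

```python
# Minimal LOC counter following the convention described above:
# blank lines and comment-only lines are excluded from the count.
def count_loc(source: str) -> int:
    loc = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            loc += 1
    return loc

sample = """
# compute a square
def square(x):
    return x * x

print(square(4))
"""
print(count_loc(sample))  # 3
```

A real counter must also handle block comments and continuation lines, which is exactly why standardising LOC counting across languages has proved difficult.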

5.1.4 Cost and Size Metrics: FP and COCOMO

Function Point Metrics

In 1977, A. J. Albrecht created function point (FP) metrics at IBM. In October 1978, they were released into the public domain. FP definitions and counting rules were developed by the International Function Point Users Group (IFPUG), a non-profit organisation that was established in 1986. IFPUG currently boasts over 3,000 members and affiliates in 24 nations. In addition to the IFPUG basic FP, over 24 FP variations have been created. Backfired FPs, COSMIC FPs, Finnish FPs, Engineering FPs, The Netherlands FPs, Unadjusted Function Points (UFPs) and so on are a few examples of these. Nonetheless, the bulk of projects are measured using the IFPUG model and the standard IFPUG FP is still the most often used version.

The quantity and variety of functions utilised in an application are counted to determine its FPs. An application’s various functions can be divided into five categories: external inputs, external outputs, external inquiries, internal files and external interfaces. The table below provides a quick explanation of the five attributes that are used for FPs.
Table: Definition of FP Attributes

External Input (EI): an input that crosses the application boundary from outside, such as a data-entry screen.
External Output (EO): an output that sends derived data outside the boundary, such as a report.
External Inquiry (EQ): an input/output pair that retrieves data without derived calculations.
Internal Logical File (ILF): a logically related group of data maintained within the application.
External Interface File (EIF): a group of data referenced by the application but maintained by another application.

The complexity of each of them is then evaluated separately. For simple external inputs, the weight value is 3, while for complex internal files, it is 15. The table below provides the standard weight values for each category of function type.

Table: Weights of Various FP Attributes

Function Type                   Simple   Average   Complex
External Input (EI)                3        4         6
External Output (EO)               4        5         7
External Inquiry (EQ)              3        4         6
Internal Logical File (ILF)        7       10        15
External Interface File (EIF)      5        7        10
To find the UFP of the subsystem, the functional complexities are multiplied by the appropriate weights against each function and all the results are then added together.

A few General System Characteristics (GSCs) are taken into account when adjusting UFPs. The New Environments Committee of IFPUG has established a set of 14 GSCs, which are listed in the table below.

Table: Categories of GSCs

1. Data communications; 2. Distributed data processing; 3. Performance; 4. Heavily used configuration; 5. Transaction rate; 6. Online data entry; 7. End-user efficiency; 8. Online update; 9. Complex processing; 10. Reusability; 11. Installation ease; 12. Operational ease; 13. Multiple sites; 14. Facilitate change.
The process for modifying the UFP is as follows. First, on a scale of 0 to 5, the Degree of Influence (DI) for each of the 14 GSCs is evaluated. A given GSC’s weight is 0 if it has no influence and 5 if it has a strong influence. The table below lists DIs together with the relevant weights.

Table: Weights for Different DIs

0: No influence
1: Incidental influence
2: Moderate influence
3: Average influence
4: Significant influence
5: Strong influence throughout
The Total Degree of Influence (TDI) is the sum of the DIs of all 14 GSCs. The Value Adjustment Factor (VAF) is then computed as:

VAF = (TDI × 0.01) + 0.65

VAF is a number that falls between 0.65 and 1.35. The final FP count is calculated by multiplying the UFP by the VAF:

FP = UFP × VAF
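The counting procedure can be sketched as follows. The function counts and GSC ratings are hypothetical; the weights are the commonly used IFPUG average-complexity weights.

```python
# Sketch of the FP computation: UFP from weighted function counts,
# then adjustment by the VAF. Counts and GSC ratings are hypothetical.
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}   # average weights
counts = {"EI": 10, "EO": 6, "EQ": 4, "ILF": 5, "EIF": 2}

ufp = sum(counts[k] * WEIGHTS[k] for k in counts)   # Unadjusted Function Points

# Degrees of Influence for the 14 GSCs, each rated 0 (none) to 5 (strong).
gsc_di = [3, 2, 4, 1, 0, 3, 2, 5, 1, 2, 3, 0, 1, 2]
tdi = sum(gsc_di)                   # Total Degree of Influence
vaf = (tdi * 0.01) + 0.65           # Value Adjustment Factor, 0.65-1.35
fp = ufp * vaf

print(ufp, tdi, round(fp, 1))
```

A full count would also classify each function as simple, average or complex before applying the weights; the average weights are used here to keep the sketch short.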
The primary purpose of FP metrics is to quantify the size of management information system (MIS) software. The table below lists some distinctions between FP metrics and LOC metrics.

Table: Difference between FP and LOC

FP: measures functional size; independent of the programming language; can be counted from the requirements or design specification.
LOC: measures physical size; language dependent; can be counted accurately only after the code is written.
An application’s LOC can be calculated from its FPs. LOC varies according to the language: the programming language used will affect the LOC for the same application. The table below displays the LOC per FP for several programming languages for comparison’s sake.

Table: Ratios of LOC to FP for Different Languages

Assembly language: about 320 LOC per FP
C: about 128 LOC per FP
COBOL: about 106 LOC per FP
Java: about 53 LOC per FP

These data can be used to compute LOC from the FP measure.
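This "backfiring" conversion can be sketched as follows; the LOC-per-FP ratios are illustrative, since published ratios vary between sources and language versions.

```python
# Converting an FP count into an LOC estimate, using illustrative
# LOC-per-FP ratios. Treat the figures as order-of-magnitude guides.
LOC_PER_FP = {"assembly": 320, "C": 128, "COBOL": 106, "Java": 53}

def estimate_loc(fp_count: float, language: str) -> int:
    """Backfire an FP count into an estimated line-of-code total."""
    return round(fp_count * LOC_PER_FP[language])

# A hypothetical application measured at 141 FPs.
for lang in LOC_PER_FP:
    print(lang, estimate_loc(141, lang))
```

The same functional size yields very different physical sizes, which is the language dependence of LOC noted above.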

Feature Point Metrics

Another size metric is called feature points. It is comparable to FPs and is mostly applied to software with sophisticated algorithms, such as embedded and real-time systems. It is also comparable to the OP technique, which counts the objects (screens, reports, etc.) that the software is expected to produce. The feature point method gives additional weight to algorithms. Function and feature point counts produce the same numerical values for applications where the number of algorithms and logical data files is the same. Nevertheless, feature points yield a higher total than FPs when there are more algorithms than files. The table below illustrates this by displaying the ratio of feature points to FPs for various application kinds.

Table: Ratios of Feature Points to FP
COCOMO Cost Estimation Model

Barry W. Boehm created the algorithmic software cost estimation model known as the Constructive Cost Model (COCOMO). The model makes use of a straightforward regression technique and its parameters are determined by looking at past project data and present project features.

Based on the size of the source code, COCOMO calculates the amount of effort and time needed to finish the project. It uses 15 multiplying factors derived from various project variables and uses this information to compute time and effort in the end.

In Boehm’s 1981 book Software Engineering Economics, COCOMO was initially presented as a methodology for software project effort, cost and schedule estimation. It was based on an analysis of sixty-three projects conducted by Boehm, the Director of Software Research and Technology at TRW Aerospace.

The study looked at projects with sizes between 2,000 and 100,000 lines of code and programming languages ranging from assembly to PL/I. The waterfall approach of software development, which was the most widely used process in 1981, served as the foundation for these projects.

COCOMO is thus a regression model based on LOC, the number of lines of code. It is a method of projecting the different aspects of a project, including size, effort, cost, time and quality. It is a procedural cost estimation model for software projects and one of the best-documented models.
The primary outputs of COCOMO, which also determine the calibre of software products, are time and effort:
●● Effort: The amount of work needed to finish a project. It is measured in person-months.
●● Schedule: The amount of time needed to complete the work, which is obviously correlated with the amount of effort. It is expressed in units of time such as weeks and months.

Various COCOMO models have been developed to estimate the cost at various levels, depending on the necessary level of precision and accuracy. These models can all be applied to a range of projects, whose attributes dictate the values of the constants to be employed in the ensuing computations. The attributes that are specific to the various system types are listed below.


The meaning of organic, semi-detached and embedded systems according to Boehm:
●● Organic: A software project is classified as organic if the necessary team size is suitably small, the problem is well known and has already been resolved and the team members have prior experience with similar problems.
●● Semi-detached: A software project is classified as semi-detached if it possesses essential attributes that fall between those of an organic and an embedded project, such as team size, experience and familiarity with several programming environments. Compared to organic projects, semi-detached projects are less familiar and more challenging to develop; they call for greater experience, better leadership and more inventiveness. Examples of semi-detached systems are compilers and other embedded systems.
●● Embedded: This type of software project involves the highest degree of intricacy, inventiveness and expertise requirements. Compared to the other two models, this class of software necessitates a larger staff and its developers must be sufficiently skilled and imaginative to create such intricate models.

The constants used in the effort calculations take different values for each of the aforementioned system types.
Basic COCOMO

Basic COCOMO computes software development effort (and cost) as a function of program size. Program size is expressed in estimated thousands of source lines of code (KLOC).

COCOMO applies to three classes of software projects:
●● Organic projects: “small” teams with “good” backgrounds and “less than rigid” specifications.
●● Semi-detached projects: “medium” teams with a range of backgrounds and a combination of strict and flexible needs.
●● Embedded projects: projects developed under “tight” constraints that combine organic and semi-detached characteristics (operations, hardware, software, ...).

The basic COCOMO equations take the form:

Effort Applied (E) = a_b (KLOC)^(b_b) [person-months]
Development Time (D) = c_b (Effort Applied)^(d_b) [months]

where KLOC is the estimated number of delivered lines of code for the project (expressed in thousands).

The coefficients a_b, b_b, c_b and d_b are given in the following table:

Software Project    a_b    b_b    c_b    d_b
Organic             2.4    1.05   2.5    0.38
Semi-detached       3.0    1.12   2.5    0.35
Embedded            3.6    1.20   2.5    0.32
For a rapid assessment of software costs, basic COCOMO is useful. It does not, however, take into consideration variations in hardware limitations, staff qualifications and experience, use of contemporary tools and techniques and other factors.
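The basic COCOMO equations can be sketched as follows, using Boehm's published coefficients for the three project modes; the 32-KLOC example project is hypothetical.

```python
# Basic COCOMO sketch: effort and development time from program size.
COEFFS = {
    # mode: (a_b, b_b, c_b, d_b)
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str):
    """Return (effort in person-months, development time in months)."""
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b      # E = a_b * (KLOC)^b_b
    time = c * effort ** d      # D = c_b * E^d_b
    return effort, time

# A hypothetical 32-KLOC organic project.
effort, time = basic_cocomo(32, "organic")
print(round(effort, 1), round(time, 1))
```

Note how the exponent b_b > 1 makes effort grow slightly faster than linearly with size, and faster still for the embedded mode.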

Intermediate COCOMO

Intermediate COCOMO calculates the software development effort based on program size and a set of “cost drivers” that comprise the user’s subjective evaluation of the product, hardware, personnel and project attributes. Four categories of cost drivers are taken into consideration in this extension, each of which has several supporting attributes:
●● Product attributes
™™ Required software reliability
™™ Size of application database
™™ Complexity of the product
●● Hardware attributes
™™ Run-time performance constraints
™™ Memory constraints
™™ Volatility of the virtual machine environment
™™ Required turnaround time
●● Personnel attributes
™™ Analyst capability
™™ Software engineering capability
™™ Applications experience
™™ Virtual machine experience
™™ Programming language experience
●● Project attributes
™™ Use of software tools
™™ Application of software engineering methods
™™ Required development schedule

The Intermediate COCOMO formula takes the form:

E = a_i (KLOC)^(b_i) × EAF

where E is the effort applied in person-months, KLOC is the estimated number of thousands of delivered lines of code for the project and EAF is the Effort Adjustment Factor, calculated as the product of the ratings assigned to the cost drivers above. The coefficients a_i and b_i are given in the next table:

Software Project    a_i    b_i
Organic             3.2    1.05
Semi-detached       3.0    1.12
Embedded            2.8    1.20
The development time D is calculated from E in the same way as in Basic COCOMO.
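The effect of the EAF can be sketched as follows. The intermediate-mode coefficients are Boehm's published values; the two cost-driver multipliers used in the example are illustrative ratings, not a full set of the 15 drivers.

```python
# Intermediate COCOMO sketch: nominal effort scaled by the Effort
# Adjustment Factor (EAF), the product of the cost-driver multipliers.
import math

COEFFS_I = {
    "organic":      (3.2, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (2.8, 1.20),
}

def intermediate_effort(kloc: float, mode: str, multipliers: list) -> float:
    """Effort in person-months: E = a_i * (KLOC)^b_i * EAF."""
    a, b = COEFFS_I[mode]
    eaf = math.prod(multipliers)    # product of the selected driver ratings
    return a * kloc ** b * eaf

# Assumed ratings for a 32-KLOC organic project: high required
# reliability (1.15) and low product complexity (0.85).
print(round(intermediate_effort(32, "organic", [1.15, 0.85]), 1))
```

A multiplier above 1 inflates the nominal effort and one below 1 deflates it, so the EAF concentrates all 15 driver judgements into a single scaling factor.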
Detailed COCOMO

Detailed COCOMO includes all of the features of the intermediate version, along with an evaluation of how each cost driver affects every stage of the software engineering process (analysis, design, etc.).

The detailed model uses different effort multipliers for every cost driver attribute. These phase-sensitive effort multipliers are used to calculate how much effort is needed to finish each phase.

In detailed COCOMO, the effort is computed from program size and a collection of cost drivers corresponding to each stage of the software life cycle. A comprehensive project timeline is always evolving.

The detailed COCOMO divides the project into five phases:
™™ Plan and requirements
™™ System design
™™ Detailed design
™™ Module code and test
™™ Integration and test
5.1.5 Managing Configurations
The information produced by the software process can be broadly categorised into
three areas: (1) computer programs (both executable and source level); (2) user- and
Notes

e
technical practitioner-oriented documentation describing the computer programs; and
(3) data (either external to or contained within the program). All of the components that
make up the data generated during the software process are referred to as a software
configuration.
There are a growing number of software configuration items (SCIs) as the software
process advances. Documents pertaining to hardware are also produced by a System
Specification, along with a Software Project Plan and Software Requirements Specification.
These in turn give rise to further documents, hence forming a hierarchy of data. Few
misunderstandings would arise if every SCI just produced more SCIs. Regretfully, change
introduces another variable into the equation. Anything can change at any time and for any
cause. The First Law of System Engineering states: “No matter where you are in the life
cycle, the system will change, and the desire to change it will persist throughout the life
cycle.”
Where do these changes come from? The answer to this question is as diverse as the
changes themselves. Nonetheless, there are four main sources from which change originates:
●● Modifications to product criteria or company standards are required due to new
business or market situations.
●● Modification of data generated by information systems, functionality provided by
goods, or services provided by computer-based systems is required to meet new client
needs.
●● Project priorities or the composition of the software engineering team may alter as a
result of reorganisation or business expansion or contraction.
●● The system or product is redefined as a result of financial or scheduling restrictions.
A collection of procedures known as “software configuration management” was created
to control change in computer software during its entire life cycle. SCM can be thought of
as an activity used throughout the software process to ensure the quality of the product.
We look at key SCM duties and key ideas that aid in change management in the following
sections.
Baselines
In the world of software development, change is inevitable. Clients wish to change
the specifications. The goal of developers is to alter the technical strategy. The project
managers wish to change the plan of action. Why the numerous modifications? Actually,
it’s not that hard of an answer. Every constituency gains knowledge over time (about what
they require, the best strategy and how to complete the task while still making money). The
majority of changes are motivated by this new information, which also leads to a fact that
many software engineering practitioners find hard to accept: The majority of modifications
are warranted!
A baseline is a concept from software configuration management that aids in change
management without significantly obstructing acceptable change. According to the IEEE
(IEEE Std. No. 610.12-1990), a baseline is a product or specification that has undergone
formal review and agreement, is used as the foundation for further development and may
only be modified through formal change control procedures.
An analogy can be used to explain a baseline:
Think about the kitchen doors at a sizable restaurant. There are two doors; one is
labelled OUT and the other IN. Because of their stops, the doors can only be opened in the
proper direction. When a waiter receives an order in the kitchen, puts it on a tray and then
realises he ordered the wrong dish, he can quickly and casually switch to the correct dish
before leaving the kitchen.
But if he exits the kitchen, serves the client and later learns of his mistake, he must follow
a set protocol: check the order to see what went wrong, apologise profusely, re-enter
the kitchen through the IN door, explain the issue and so on.
A baseline is comparable to the restaurant’s kitchen doors. A software configuration
item may be changed swiftly and casually before it becomes a baseline. But once we
establish a baseline, we essentially walk through a one-way swinging door. It is possible to
make changes, but each one must be evaluated and verified through the use of a formal,
defined procedure.
Figure: Baselined SCIs and the project database

Within the field of software engineering, a baseline denotes a software development
milestone, identified by the delivery of one or more software configuration items and the
formal technical review acceptance of these SCIs. The components of a design
specification, for instance, have been examined and documented. Errors are located and
fixed. The Design Specification becomes a baseline after every section has been examined,
amended and accepted. The program architecture (described in the Design Specification)
cannot be altered further until each has been assessed and authorised. While it is possible
to define baselines at any level of detail, the most widely used software baselines are
displayed in the above figure.
The preceding Figure also shows the sequence of events that result in a baseline. One
or more SCIs are produced by software engineering tasks. SCIs are stored in a project
database (also referred to as a project library or software repository) once they have
been examined and authorised. Software engineering teams copy baselined SCIs from
the project database into the engineers’ individual workspaces when they wish to make
changes to them. Nevertheless, only if SCM controls are adhered to may this extracted SCI
be altered. The alteration path for a baselined SCI is depicted by the arrows in the above
figure.
Software Configuration Items
Information developed throughout the software engineering process has previously
been described as a software configuration item. At its most extreme, a SCI may be
compared to a single test case inside a big test suite or a single portion of a lengthy
specification. More practically speaking, a named program component (such as an Ada
package or a C++ function) or a document comprise a SCI.
Many software engineering organisations place software tools under configuration
control in addition to the SCIs produced from software work deliverables. In other words,
some iterations of editors, compilers and further CASE tools are “frozen” into the software
setup. These tools must be accessible for making modifications to the programsetup
because they were utilised to create data, source code and documentation. Even if issues
are uncommon, a tool’s (like a compiler) latest version could yield different outcomes from
its previous one. That’s why tools may be baselined as part of an all-inclusive configuration
management approach, just like the software they contribute to.
SCIs are organised to form configuration objects that can be catalogued in the project
database under a single name. A configuration object has a name and attributes and is
“connected” to other objects by relationships. The configuration
objects, test specification, data model, component N, source code and design specification
are all defined independently, as shown in the figure below. Nonetheless, as the arrows
demonstrate, every object is connected to every other one. A compositional link is shown by
a curving arrow. In other words, component N and the data model are included in the object
design specification. A straight arrow with two heads denotes a relationship.
Figure: Configuration objects

A software engineer can identify which other objects (and SCIs) might be impacted by
changes made to the source code object thanks to the interrelationships.

The SCM Process


Software quality assurance includes software configuration management as a key
component. Its main duty is to maintain control over change. Nevertheless, SCM is also in
charge of identifying certain SCIs and different software versions, reviewing the software
configuration to make sure it was created correctly and reporting any modifications made to
the configuration.
A series of difficult problems are introduced in any discussion about SCM:
●● In order to effectively accommodate change, how does an organisation identify and
manage the numerous versions of a program that are currently in use (as well as its
documentation)?
●● How does a company manage modifications both before and after software is made
available to users?
●● Who is in charge of authorising and prioritising changes?
●● How can we make sure the modifications have been applied correctly?
●● What method is employed to evaluate changes made by others?

Identification of Objects in the Software Configuration


Each software configuration item needs to be given a unique name before being
arranged in an object-oriented manner for management and control. Two categories of
objects are discernible: basic objects and aggregate objects. A “unit of text” created by a
software engineer during analysis, design, coding, or testing is referred to as a basic object.
A set of test cases that are used to run the code, a portion of a requirements specification,
or a source listing for a component are a few examples of basic objects. A collection of
fundamental objects and additional aggregate items is known as an aggregate object.
The Design Specification is an aggregate object, as seen in the preceding figure. From a
conceptual standpoint, it can be thought of as a named (identified) collection of pointers that
describe fundamental items like component N and the data model.
Every object is individually identified by a collection of unique qualities that include a
name, a description, a list of resources and a “realisation.” An unambiguous character string
that identifies the object is its name. The object description is a list of data items that
identify the object:
●● the SCI type (e.g., document, program, data) represented by the object
●● a project identifier
●● change and/or version information
Resources are defined as “entities that are provided, processed, referenced or
otherwise required by the object.” Examples of object resources include variable names,
data kinds and particular functions. For a basic object, the realisation is a pointer to the “unit
of text”; for an aggregate object, it is null.
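As an illustration, the identifying attributes just listed (name, description, resources, realisation) can be captured in a small record type. This is only a sketch; the field names are invented for illustration, not a standard schema.

```python
# Sketch of a software configuration item (SCI) record carrying the four
# identifying attributes described in the text. Field names are illustrative.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ConfigurationObject:
    name: str                                   # unambiguous identifying string
    sci_type: str                               # "document", "program" or "data"
    project_id: str
    version: str                                # change and/or version information
    resources: List[str] = field(default_factory=list)  # entities provided/referenced
    realisation: Optional[str] = None           # pointer to "unit of text"; None for aggregates

# An aggregate object (a document) has no realisation of its own:
data_model = ConfigurationObject(
    name="data model", sci_type="document", project_id="P-01",
    version="1.0", resources=["E-R diagram 1.4"])
```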
Relationships between named objects must be taken into account when identifying
configuration objects. An object can be identified as <part-of> an aggregate object. The
relationship <part-of> defines a hierarchy of objects. For instance, using a simple notation:
E-R diagram 1.4 <part-of> data model;
data model <part-of> design specification;


We arrange SCIs in a hierarchy. It is unrealistic to assume that the only connections
between objects in an object hierarchy occur via the direct paths of the hierarchical tree. Objects are
frequently related to one another across different branches of the object hierarchy. For
instance, (assuming structured analysis is used) a data model is coupled to data flow
diagrams and to a collection of test cases for a certain equivalence class. The following
forms of representation are possible for these cross-structural relationships:
data model <interrelated> data flow model;
data model <interrelated> test case class m;
In the first case, the interrelationship is between a composite object, while the second
relationship is between an aggregate object (data model) and a basic object (test case
class m). The interrelationships between configuration objects can be expressed using a
module interconnection language (MIL). A MIL describes the dependencies between
configuration elements and allows any version of a system to be built automatically.
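The <part-of> and <interrelated> relations above can be modelled as a small graph, which is what makes the impact analysis mentioned earlier possible. A minimal sketch, with invented function names:

```python
# Sketch: store <part-of> and <interrelated> links so that a change to one
# object can be traced to the objects it might affect.
from collections import defaultdict

part_of = defaultdict(set)       # child -> parent aggregate objects
interrelated = defaultdict(set)  # symmetric links across hierarchy branches

def add_part_of(child, parent):
    part_of[child].add(parent)

def add_interrelated(a, b):
    interrelated[a].add(b)
    interrelated[b].add(a)

def possibly_impacted(obj):
    """Objects that may be affected when `obj` changes: parents plus cross links."""
    return part_of[obj] | interrelated[obj]

# The relationships from the text:
add_part_of("E-R diagram 1.4", "data model")
add_part_of("data model", "design specification")
add_interrelated("data model", "data flow model")
add_interrelated("data model", "test case class m")
```

Querying `possibly_impacted("data model")` surfaces both its parent document and its cross-structural neighbours.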
The way that software objects are identified needs to take into account the fact that
they change as the software is developed. An object may alter often before it is baselined
and it may continue to alter frequently even after a baseline has been set. Any item can
have an evolution graph made for it. As seen in the following figure, the evolution graph
shows how an object has changed over time. After revision, configuration object 1.0
becomes object 1.1. Versions 1.1.1 and 1.1.2 are minor adjustments and changes, while
object 1.2 is a substantial update that comes after. During versions 1.3 and 1.4, item 1.0’s
evolution proceeds, but in the meantime, a significant alteration to the object gives rise to
version 2.0, a new evolutionary route. As of right now, support is available for both versions.
Any version may be modified, but not all versions necessarily must. In what way
does the developer cite all the parts, papers and test cases for version 1.4? How does
the marketing division find out which clients are using version 2.1 right now? How can we
be certain that the design documentation appropriately reflects modifications made to the
version 2.1 source code? Identification is a critical component in each of these questions’
answers. Numerous automated software configuration management (SCM) tools have been
created to support identification and other SCM operations. A tool may occasionally be
made to keep complete copies of just the most current iteration. In order to obtain previous
iterations (of documents or programs), modifications (identified by the tool) are “subtracted”
from the latest version. This technique facilitates the easy derivation of other versions and
makes the present configuration instantly available.
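The "subtract modifications from the latest version" scheme just described is essentially reverse-delta storage. A minimal sketch, with an invented line-replacement delta format:

```python
# Reverse-delta storage sketch: keep the full text only for the latest version;
# each earlier version stores only the line replacements needed to rebuild it
# from its successor. The delta format here is invented for illustration.
latest = ["line A", "line B v1.4", "line C v1.4"]

# Ordered newest to oldest; each delta maps line index -> the older content.
history = [
    ("1.3", {1: "line B v1.3"}),
    ("1.2", {1: "line B v1.2", 2: "line C v1.2"}),
]

def reconstruct(version):
    """Walk backwards from the latest copy, applying deltas until `version`."""
    text = list(latest)
    for ver, delta in history:
        for index, older_line in delta.items():
            text[index] = older_line
        if ver == version:
            return text
    raise KeyError(f"unknown version {version!r}")
```

The trade-off matches the text: the current configuration is instantly available, while older versions cost a short walk through the deltas.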

Version Control
In order to manage various versions of configuration objects developed during the
software development process, version control integrates processes and tools. Version
control is explained by Clemm within the framework of SCM.
By choosing the right versions, configuration management enables a user to specify
different software system configurations. This is made possible by assigning attributes to
every software version and enabling the specification and construction of a configuration
through the description of the desired attribute set.
These “attributes” could be as straightforward as a version number appended to every
object or as intricate as a series of Boolean variables (switches) denoting distinct functional
modifications implemented in the system.
The evolution graph shown in the figure below is one way to show the several iterations
of a system. Every node in the network represents an aggregate object, or a full copy of the
program. The software is available in multiple versions, each of which is made up of various
SCIs (source code, documents and data). Take a look at a simplified program that consists
of entities 1, 2, 3, 4 and 5 to demonstrate this idea. Entity 4 is only utilised in software
implementations that make use of colour screens. When monochrome monitors are

available, entity 5 is used. Consequently, two variants of the version can be specified:
(1) entities 1, 2, 3 and 4; (2) entities 1, 2, 3 and 5.
Figure: Evolution graph

To generate the suitable variant of a given version of a program, each entity can be
assigned an “attribute-tuple”: a set of properties that determine whether or not the entity
should be employed when a specific variant of a software version is to be constructed.
Each version is given one or more properties. To specify which entity should be included
when colour displays are to be supported, for instance, a colour attribute might be utilised.
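The attribute-tuple idea can be sketched as follows: building a variant selects the entities whose attributes are compatible with the requested configuration. The entity names follow the example in the text; the matching rule is an assumption for illustration.

```python
# Sketch of attribute-tuples: entities with no constraints belong to every
# variant; constrained entities are included only when their attributes match.
entities = {
    "entity 1": {},
    "entity 2": {},
    "entity 3": {},
    "entity 4": {"display": "colour"},
    "entity 5": {"display": "monochrome"},
}

def build_variant(**wanted):
    """Names of the entities whose attribute-tuples match the requested attributes."""
    return sorted(
        name for name, attrs in entities.items()
        if all(wanted.get(key) == value for key, value in attrs.items())
    )
```

For example, `build_variant(display="colour")` yields entities 1 through 4, while `build_variant(display="monochrome")` yields entities 1, 2, 3 and 5.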
Representing entities, variations and versions (revisions) as an object pool is another
approach to think about their relationship. With reference to the figure below, a three-
dimensional representation of the relationship between configuration objects and entities,
variants and versions can be found. A group of objects at the same revision level make up
an entity. A variant coexists in parallel with other variations since it is a distinct collection of
objects at the same revision level.


When significant modifications are made to one or more items, a new version is
defined. Over the past ten years, several automated version control strategies have been
put forth. The key distinction between the techniques is the level of sophistication of the
qualities and the mechanics of the development process that are used to build different
versions and variants of a system.
Figure: Object pool representation of components, variants and versions


Change Control
James Bach has done a fantastic job of distilling the reality of change control in the
context of contemporary software engineering.
Controlling change is essential. However, it is also frustrating due to the same
circumstances that make it necessary. We are concerned about modification because even
a minor alteration to the code can lead to a major breakdown in the final result. However,
it can also unleash amazing new capabilities or correct a major failure. We are concerned
about change because one bad developer could ruin the project, but those same developers
sometimes come up with the best ideas and a cumbersome change management
procedure could deter them from being creative.
Bach acknowledges that we must maintain equilibrium. When change control is
excessive, we run into issues. If we use too little, we’ll run into more issues. Uncontrolled
change quickly becomes pandemonium for a major software engineering project. Change
control for these kinds of projects is a process that combines automated tools and human
procedures to manage change. The figure below shows a schematic representation of
the change control process. Technical merit, any side effects, overall impact on other
configuration items and system functions and the modification’s estimated cost are all
taken into consideration while evaluating a change request. The evaluation’s findings
are provided in a change report that the change control authority (CCA), an individual or
organisation that ultimately determines the change’s priority and status, uses. For every
change that is accepted, an engineering change order (ECO) is created.
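The evaluation-and-approval path just described (change request, change report, CCA decision, ECO) can be sketched as a single function. The field names and decision rule are invented for illustration, not a prescribed format.

```python
# Sketch of the change control flow: a change request is evaluated into a
# change report; the CCA decides; an approved change yields an engineering
# change order (ECO). All field names are illustrative.
def process_change_request(request, cca_approves):
    report = {
        "technical_merit": request["merit"],
        "side_effects": request.get("side_effects", []),
        "impact": request.get("impact", "local"),
        "estimated_cost": request["cost"],
    }
    if not cca_approves(report):
        return {"status": "rejected", "report": report}
    eco = {
        "change": request["description"],
        "constraints": "apply SQA procedures; check object out and back in",
        "audit": "formal technical review plus configuration audit",
    }
    return {"status": "approved", "report": report, "eco": eco}
```

The key structural point is that the ECO exists only for accepted changes, mirroring the process in the figure below.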
Figure: The change control process

The ECO outlines the necessary modification, the requirements that must be followed
and the standards for audit and evaluation. The object to be modified is “checked out” of
the project database, changed and subjected to the necessary SQA procedures. The item
is then “checked in” to the database and the software’s next version is created by utilising
the proper version control systems.
Access control and synchronisation control are two crucial components of change
control that are implemented by the “check-in” and “check-out” procedures. The ability of
software developers to view and alter a specific configuration item is controlled by access
control. The purpose of synchronisation control is to prevent two distinct persons from
overwriting each other while they make concurrent modifications.
The flow of access and synchronisation control is schematically shown in the figure
below. Based on an approved change request and ECO, a software engineer checks
out a configuration object. The software engineer’s ability to check out the object
is guaranteed by an access control function and synchronisation control locks the item in
the project database to prevent updates until the checked-out version has been replaced.
You can check out additional copies, but you cannot make any further edits. The software
engineer makes changes to an extracted version, which is a duplicate of the baselined
item. The updated version of the item is checked in and the new baseline object is unlocked
following the necessary SQA and testing.
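The check-out/check-in discipline just described can be sketched as a toy project database with a lock table. The class and method names are illustrative, not a real tool's API.

```python
# Sketch of access and synchronisation control: check-out locks a baselined
# SCI against parallel modification; check-in stores the new version and
# releases the lock.
class ProjectDatabase:
    def __init__(self):
        self.objects = {}   # object name -> baselined content
        self.locks = {}     # object name -> engineer holding the lock

    def check_out(self, name, engineer):
        if name in self.locks:
            raise RuntimeError(f"{name} is locked by {self.locks[name]}")
        self.locks[name] = engineer        # synchronisation control
        return self.objects[name]          # extracted copy to be modified

    def check_in(self, name, engineer, new_content):
        if self.locks.get(name) != engineer:
            raise RuntimeError("only the lock holder may check in")
        self.objects[name] = new_content   # becomes the new baseline
        del self.locks[name]               # unlock for other engineers
```

A second engineer attempting a concurrent check-out is refused until the first checks the object back in, which is exactly the overwrite protection the text describes.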

Figure: Access and synchronisation control

The amount of bureaucracy suggested by the explanation of the change control
process may start to unnerve some readers. This feeling is common. Change
control, if not properly managed, can impede development and lead to needless red
tape. The majority of software developers who have change control methods in place—
sadly, many do not—have established several levels of control in order to help prevent the
issues mentioned below.
Only informal change control needs to be used before a SCI becomes the baseline.
Changes that are warranted by technical and project requirements may be made by the
developer of the configuration object (SCI) in question, provided that they do not impact
broader system requirements that fall outside the purview of the developer’s job description.
A baseline is established once the object has passed thorough technical evaluation and
been given approval.
Project level change control is put into place after a SCI is established as the standard.
In order to implement a modification, the developer needs permission from the project
manager (in the case of a “local” change) or the CCA (in the case of a change affecting
other SCIs). Formal creation of change requests, change reports and ECOs is optional in
some circumstances. All modifications are tracked and examined, though, and each change
is evaluated.
A formal change control system is implemented when the software product is made
available to customers. The preceding Figure provides an outline of the formal change
control approach. In the second and third tiers of control, the change control authority
actively participates. The CCA may consist of one person, the project manager, or several
from software, hardware, database engineering, support, marketing).
The CCA’s job is to evaluate changes that go beyond the specific SCI in question by
ni

adopting a global perspective. How will hardware be impacted by the change? How will
performance be impacted by the change? How will the alteration affect how customers view
the product? How will the modification impact the dependability and quality of the product?
U

The CCA addresses these and numerous more questions.

Configuration Audit
In an otherwise chaotic and fluid scenario, software developers can preserve order with
the aid of identification, version control and change control. Even the best control systems,
though, can only monitor a change until an ECO is produced. How can we be certain
that the modification has been applied correctly? The answer has two parts: the formal
technical review and the software configuration audit.
The formal technical review focuses on the technical correctness of the modified
configuration object. Reviewers evaluate the SCI for possible side effects, omissions
and consistency with other SCIs. For all changes, even the smallest ones, a full technical
review ought to be carried out. By evaluating a configuration object for attributes that are
typically overlooked during review, a software configuration audit enhances the formal
technical review process. The following queries are posed and addressed by the audit:
●● Has the ECO’s recommended change been implemented? Have there been any
further changes implemented?
●● Has the technical accuracy been evaluated through an official technical review?
●● Have software engineering standards been correctly applied and has the software
process been adhered to?

●● Has the SCI “highlighted” the change? Have the author and date of the change been
mentioned? Has the update been reflected in the configuration object’s attributes?
●● Have the SCM protocols for reporting, documenting and marking the change been
followed?
●● Have all relevant SCIs received the latest updates?
The audit questions are occasionally posed as a component of a formal technical
examination. On the other hand, the quality assurance team performs the SCM audit
independently when SCM is an official activity.
Status Reporting
Configuration status reporting, often known as status accounting, is a software
configuration management activity that provides answers to the following queries: (1) What took
place? (2) Who carried it out? (3) What time did it occur? (4) What other aspects are
impacted? The above figure shows the information flow for configuration status reporting, or
CSR. A CSR entry is made each time a new or updated identity is assigned to a SCI. A CSR

are presented as part of the CSR task after every configuration audit. The CSR output may
be added to an online database so that maintainers or developers of software could obtain
change information categorised by keywords. A CSR report is also produced on a monthly

Reporting on configuration status is essential to a big software development project’s
success. The phenomenon known as “the left hand not knowing what the right hand is
ve
doing” is likely to arise when a large number of people are involved. It is possible for two
developers to try to change the same SCI with opposing and divergent goals. It can take a
software engineering team several months to develop software for a hardware specification
that is now outdated. The individual who would be aware of significant adverse effects from
a proposed change does not know that it is being made. CSR assists in eradicating these
issues by enhancing communication between all parties.
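A CSR entry can be as simple as an append-only log record answering the four questions (what happened, who did it, when, what else is affected). A minimal sketch with invented field names:

```python
# Sketch of configuration status reporting: an append-only log, one entry per
# identification, approved ECO, or configuration audit.
from datetime import datetime

csr_log = []

def record_csr(what, who, affects):
    """Append one CSR entry answering what/who/when/what else is impacted."""
    entry = {
        "what": what,
        "who": who,
        "when": datetime.now().isoformat(timespec="seconds"),
        "affects": list(affects),
    }
    csr_log.append(entry)
    return entry

record_csr("ECO issued: data model change approved", "CCA",
           ["design specification", "test case class m"])
```

Keeping such entries in a queryable store is what lets developers and maintainers look up change information by keyword, as the text suggests.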

SCM Standards
Many software configuration management standards have been put forth during the
previous 20 years. MIL-STD-483, DOD-STD-480A and MIL-STD-1521A are only a few of
the early SCM standards that concentrated on software created for military usage. Though
they are recommended for both large and small software engineering organisations, more
contemporary ANSI/IEEE standards, like ANSI/IEEE Stds. No. 828-1983, No. 1042-1987
and No. 1028-1988, are relevant for nonmilitary software.
5.2 Software Maintenance


Software maintenance refers to the process of modifying, updating and enhancing
existing software applications to ensure they continue to meet the evolving needs of users,
address issues and remain compatible with changing environments. It involves various
activities aimed at prolonging the life of software systems, improving their performance and
enhancing their functionality.

5.2.1 Software Maintenance and Its Types


The computer is a tool to increase the productivity of mental labour. There are two
components to a computer: hardware and software. The physical components of computers
and related devices are referred to as hardware. Software is the term for comprehensive

instructions written in a way that computer hardware can comprehend. Software programs
are frequently utilised for several years after installation. Software is not subject to wear
and tear or degradation over time, unlike hardware, which consists of physical objects. As a result,
maintaining software differs from maintaining physical items. The process of periodically
altering software to meet the evolving requirements of users, organisations, or the
environment is known as software maintenance. According to IEEE, software maintenance
is defined as follows:
Software maintenance refers to the process of modifying a software product to address
issues, enhance functionality or other features, or adjust it to a changed environment after it
has been delivered.
A significant amount of resources used in the information technology sector are
dedicated to software system maintenance. According to a number of recent studies,
maintenance accounts for between 50% and 80% of the entire life cycle costs, with a mean
amount of 65%.
Almost immediately, maintenance starts. After software is made available to end users,
reports of bugs are sent back to the software engineering team in a matter of days. In a
few weeks, one user class demands that the software be modified to meet the unique
requirements of their setting. After a few months, a different business group that had no
intention of using the software upon its release has come to realise that it might have
unforeseen advantages. It will require some adjustments to function in their world.
The task of maintaining software has commenced. There are an increasing number
of bug repairs, requests for adaptations and complete changes that need to be planned,
ve
scheduled and eventually completed. Soon enough, the backlog has gotten longer and
the additional work it requires could soon exceed the limited resources. Over time, your
company discovers that developing new apps takes up less time and money than
maintaining current ones. Indeed, it is not out of the ordinary for a software company to
dedicate up to 60–70% of its total resources on software maintenance for products that
have been in continuous use for a number of years.
The foundation of any software development is the ubiquitous nature of change. When
computer-based systems are designed, change is unavoidable; consequently, you need to
create systems for assessment, management and adjustment.
The practice of keeping a functioning computer system up to date with user needs and
data processing activities is known as software maintenance. The process of maintaining
software can be costly, hazardous and extremely difficult. The following factors make
software maintenance necessary:
●● Changes in user requirements with time
●● Program/System problems
●● Changing hardware/Software environment
●● To improve system efficiency and throughput
●● To modify the components
●● To test the resulting product to verify the correctness of changes
●● To eliminate any unwanted side effects resulting from modifications
●● To augment or fine-tune the software
●● To optimize the code to run faster
●● To review standards and efficiency
●● To make the code easier to understand and work with
●● To eliminate any deviations from specifications
The world around us is always changing. In our surroundings, social, political,
economic and technical developments are constant. Therefore, software must be updated
on a regular basis to reflect changes in the environment and the needs of its users. The

in
final phase of the software life cycle is maintenance. Software should adapt to changes in

nl
the software’s support environment changes, the software also needs to be modified. For

O
activities following its creation.

Software Maintenance Types


●● Corrective maintenance
●● Adaptive maintenance
●● Perfective maintenance
●● Preventive maintenance

Corrective Maintenance
Repairing errors or flaws that are discovered after the software has been used in real
life but were missed during testing is known as corrective maintenance. Errors in design,
logic and coding can all lead to defects. These mistakes, which are often referred to as
)A

“residual errors” or “bugs,” impair the software’s functionality. In the majority of situations,
a software program might operate as intended, but in a few unusual circumstances, it
might not. Bug reports submitted by end users typically signal the need for corrective
maintenance. Corrective maintenance is the process of repairing coding errors.
Reactive software modification to fix issues found after the program has been delivered
(c

to the customer’s end user is known as corrective maintenance. Corrective maintenance


refers to fixing errors in processing or performance or implementing adjustments due to
issues that were left unfixed before.
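A corrective fix can be illustrated with a minimal, hypothetical sketch (the function names and the scenario are invented for illustration): a routine that passed testing on typical inputs but fails on one unusual case, and the patch that repairs the residual error.

```python
# Hypothetical corrective-maintenance sketch: a residual bug that escaped
# testing (division by zero on an empty list) and its fix.

def average_shipped(values):
    """Version delivered to users: works for typical inputs,
    but crashes on the unusual case of an empty list."""
    return sum(values) / len(values)

def average_patched(values):
    """Corrective fix after a bug report: the edge case is handled."""
    if not values:          # the residual error repaired
        return 0.0
    return sum(values) / len(values)
```

In practice the fix would be accompanied by a regression test that reproduces the reported failure, so the defect cannot silently return.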

Adaptive Maintenance

Adaptive maintenance refers to altering the function of the program as a result of adapting to changes in the external environment. For instance, suppose the current system was created to compute taxes on income only after the dividend on stock shares has been subtracted. The government has now issued directives requiring the dividend to be included in the business’s profit when calculating taxes. The function must be modified to work under the new rules. Reactively altering software after it has been delivered, to keep it usable in an evolving end-user environment, is known as adaptive maintenance.

Adapting software to changes in the hardware or operating-system environment also falls under adaptive maintenance. The environment is the entirety of external factors acting on the system. For instance, accounting software may need to be modified if there is a change in the taxation system, hardware, operating systems, work patterns, corporate standards or government policies. Adaptive maintenance, in short, is concerned with adapting software to novel situations.
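The taxation example above can be sketched in code. The flat 30% rate and the function names are illustrative assumptions; only the structural change (the dividend no longer being subtracted from the taxable base) mirrors the scenario in the text.

```python
# Adaptive-maintenance sketch for the tax example (rate and names invented).
TAX_RATE_PERCENT = 30  # illustrative flat rate, integer percent for exact maths

def tax_due_old(profit, dividend):
    """Original rule: the dividend is subtracted before tax is computed."""
    return (profit - dividend) * TAX_RATE_PERCENT // 100

def tax_due_new(profit, dividend):
    """After the directive: the dividend stays in the taxable profit.
    The dividend parameter is kept so existing callers need not change."""
    return profit * TAX_RATE_PERCENT // 100
```

Keeping the old signature while changing the rule is a common adaptive-maintenance tactic: the external rule changes, but the program’s interface to the rest of the system stays stable.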

Perfective Maintenance

Perfective maintenance refers to improving functionality or changing programs to accommodate new or evolving user needs. For instance, before the stores were electronically connected via leased lines, data was transferred from the stores to headquarters on magnetic media; the software was later improved to send the data over the leased lines.

Because maintenance is both necessary and incredibly expensive, there have been attempts to lower its cost. Audits of software modifications and maintenance management are two ways to cut costs. The two types of software alteration are system-level upgrades and program rewriting. Perfective maintenance is the proactive updating of software after delivery with the goal of adding new user features, optimising the program’s code structure, or improving documentation.

Perfective maintenance thus enhances the system with additional functionality. Examples include adding a new report to a sales analysis system, enhancing the payroll program to enable the deduction of insurance premiums from salary, streamlining the user interface to make it more intuitive and incorporating an online HELP command.
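The payroll enhancement mentioned above can be sketched as follows. The names and figures are illustrative assumptions; the design point is that a perfective change should leave existing behaviour intact.

```python
def net_salary(gross, tax):
    """Existing payroll behaviour: only tax is deducted."""
    return gross - tax

def net_salary_v2(gross, tax, insurance_premium=0):
    """Perfective enhancement: an optional insurance-premium deduction.
    The default of 0 keeps every existing caller working unchanged."""
    return gross - tax - insurance_premium
```

Using a defaulted parameter is one way to add the feature without breaking callers of the old routine, which keeps the enhancement purely perfective rather than corrective.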

Preventive Maintenance

Preventive maintenance is the technique by which we keep a system from becoming outdated. It includes the notion of reverse engineering and re-engineering: rebuilding an old system built with outdated technology using modern technology. This kind of maintenance keeps the system from becoming obsolete.

Preventive maintenance is the proactive modification of software following delivery with the goal of identifying and fixing product flaws before users in the field do. Updating documentation and strengthening the software’s modular structure to make it easier to maintain are two typical preventive activities. Corrective maintenance is sometimes referred to as “traditional maintenance,” whilst the other forms are referred to as “software evolution.”

Maintenance Process Models

Software maintenance can be approached in a variety of ways, and these approaches differ in certain aspects. The particular requirements of the maintenance problem are taken into consideration when choosing a software maintenance strategy.

Quick-fix Model

A common strategy for software maintenance is to work on the code first and then make the necessary adjustments to the associated documentation. The quick-fix model encompasses this methodology. It is well suited to small rework projects, where changes are made directly to the code and then reflected in the pertinent documents. Using this method, the first step in the project is acquiring the requirements for the adjustments.

The next step is to analyse the requirements in order to develop the code modification strategies. Usually, a few individuals from the initial development team are involved in this process. The presence of a functional old system makes the job easier for the maintenance crew, since it gives them a better understanding of how the system works and lets them contrast the behaviour of the modified system with that of the original. Debugging also becomes simpler, because the program traces of the two systems can be compared to locate problems.

The quick-fix model uses an impromptu strategy: its objective is to locate the issue and address it swiftly, without considering the long-term consequences of the fix. Ideally, the requirements, design, testing and any other affected documents should be updated after the code has been altered, but these essential updates are frequently overlooked. The adjustments often lack regression testing, impact analysis and sufficient planning, because users and customers demand that software be changed rapidly and affordably, which puts pressure on both time and money. Frequent software alterations made with a quick-fix methodology may erode the original design, increasing the difficulty and expense of subsequent updates.
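One inexpensive safeguard against the risks just described is to pin down the system’s current behaviour with a small regression suite before applying a quick fix, and to re-run it afterwards. A minimal sketch follows; the `discount` function and its cases are hypothetical.

```python
def discount(price, percent):
    """Hypothetical routine about to receive a quick fix."""
    return price - price * percent // 100

# Expected outputs captured from the working system *before* the fix.
REGRESSION_CASES = [((200, 10), 180), ((99, 0), 99), ((50, 100), 0)]

def regression_suite_passes():
    """Re-run after every quick fix to catch unintended side effects."""
    return all(discount(*args) == expected
               for args, expected in REGRESSION_CASES)
```

Even a handful of such recorded cases gives the maintainer early warning when a hurried change disturbs behaviour that users already rely on.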

Iterative-enhancement Model

Evolutionary life-cycle models offer a different strategy for maintaining software. These models are predicated on the notion that a system’s numerous requirements are gradually accumulated and comprehended during the course of its operation. The iterative-enhancement model is such an evolutionary model. It suggests that systems should be created and then improved iteratively in response to user feedback, and it has been adapted for both development and maintenance. The model has three steps: first the system is examined, then the proposed changes are categorised and finally the adjustments are put into practice. One of the iterative-enhancement model’s main benefits is that documentation is updated in tandem with code changes. However, since the model presupposes that the system has complete documentation, it is ineffective when the documentation is incomplete.

Reuse-oriented Model

Another well-liked strategy for software maintenance is the reuse-oriented model. It sees maintenance as a special case of reuse-driven software development and assumes that existing program components can be reused. The steps of the reuse model are to identify the parts of the old system that may be reused, fully comprehend those parts, adapt them in accordance with the new requirements and integrate the updated parts into the new system. The reuse-oriented model works well when there exists a repository of documentation and parts from both current and previous iterations of the system to be changed, as well as from other systems in the same application domain. As a result, reuse is thoroughly documented, and the model also encourages the creation of more reusable parts.

Summary
●● Planning and managing software projects entails methodically arranging, coordinating and carrying out tasks to guarantee the productive creation and completion of software projects. It includes project initiation, scope definition, resource allocation, scheduling, risk management and stakeholder communication, among other aspects. Effective project planning requires setting specific goals, outlining the project’s needs, estimating costs and schedules and drafting a detailed project plan. Project management covers monitoring progress, handling problems, managing changes and making sure the project stays on course to achieve its objectives within financial and time limits. Software project planning and management apply best practices, approaches such as Agile or Waterfall and project management tools in order to maximise productivity, minimise risks and deliver high-quality software products that satisfy stakeholders.
●● Software metrics are the quantitative measurements used to evaluate different facets of the software development process and product. They include project management metrics such as effort estimation, project duration and defect density, as well as code quality metrics such as lines of code, cyclomatic complexity and code coverage. With the aid of software metrics, stakeholders may discover areas for improvement, assess the performance, quality and progress of software projects and make data-driven decisions to increase productivity and efficiency.
●● When it comes to budgeting, resource allocation and software project planning, FPA and COCOMO offer invaluable insights. While FPA concentrates on measuring software functionality, COCOMO provides a more comprehensive view by taking into account a variety of project characteristics in order to predict effort and cost more precisely. Organisations can increase the success rate of software projects and make well-informed decisions by utilising these cost and size measures.
●● Software maintenance is the process of continuously monitoring and updating software systems to guarantee their sustained efficacy, dependability and relevance over time. It includes a range of tasks: preventive maintenance, which aims to proactively identify and address potential problems before they arise; adaptive maintenance, which adjusts software to changing environments or requirements; perfective maintenance, which improves software functionality or performance; and corrective maintenance, which fixes errors and defects. Through these kinds of maintenance, software systems stay secure, functional and in line with changing user needs and technical advances, prolonging their lifespan and maximising their value to stakeholders.
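The function-point adjustment touched on in the summary can be sketched as a small calculation: the unadjusted function points (UFP) are scaled by a value adjustment factor derived from the 14 general system characteristics (GSC), each rated 0 to 5. This is the standard FPA formula; the sample numbers in the test are illustrative only.

```python
def function_points(ufp, gsc_ratings):
    """FP = UFP * (0.65 + 0.01 * sum of the 14 GSC ratings)."""
    if len(gsc_ratings) != 14:
        raise ValueError("FPA defines exactly 14 general system characteristics")
    vaf = 0.65 + 0.01 * sum(gsc_ratings)  # value adjustment factor, 0.65..1.35
    return ufp * vaf
```

Because each rating lies between 0 and 5, the adjustment can move the raw count down to 65% or up to 135% of the UFP, which is how FPA accounts for overall system complexity.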

Glossary
●● SCM: Software Configuration Management
●● SCIs: Software Configuration Items
●● SRD: Software Requirements Document
●● ECO: Engineering Change Order
●● CCA: Change Control Authority
●● GSC: General System Characteristics
●● FP: Function Point
●● LOC: Line of Code
●● SPMP: Software Project Management Plan
●● SDLC: Software Development Life Cycle
●● PLC: Project Life Cycle
Check Your Understanding
1. What is the primary difference between projects and operations?
a) Projects are continuous, while operations are temporary
b) Projects are undertaken to create a unique product, service, or result, while operations focus on the continuous creation of products and services
c) Projects have no defined start or end, while operations have predetermined start and end dates
d) Projects involve daily activities, while operations involve one-time actions
2. What is the role of project management in achieving project goals?
a) Project management ensures that projects never deviate from their original scope
b) Project management guarantees that projects are always completed within budget
c) Project management facilitates the application of knowledge, skills, tools and techniques to project activities to meet project requirements
d) Project management solely focuses on coordinating project activities without considering stakeholder needs
3. Which of the following is NOT a type of software maintenance?
a) Adaptive maintenance
b) Perfective maintenance
c) Corrective maintenance
d) Disruptive maintenance
4. Which type of software maintenance involves modifying the software to ensure compatibility with new hardware or software platforms?
a) Adaptive maintenance
b) Corrective maintenance
c) Preventive maintenance
d) Perfective maintenance
5. Which type of software maintenance aims to improve the software’s performance, efficiency, or user experience?
a) Corrective maintenance
b) Adaptive maintenance
c) Preventive maintenance
d) Perfective maintenance
Exercise
1. What are the key components of software project planning?
2. How does planning software projects differ from managing configurations?
3. Explain the concept of software metrics and its significance in project management.
4. Compare and contrast Function Point Analysis (FPA) and COCOMO in terms of cost and size metrics.
5. What are the different types of software maintenance and how do they differ from each other?
6. Describe the role of configuration management in software project planning and management.
7. How do you measure software quality using metrics?

Learning Activities
1. Conduct a post-implementation review of a recently completed software project, evaluating its success against predefined metrics and identifying lessons learned for future projects.
2. Estimate the project cost using the COCOMO model for a software development project that involves building a mobile app for a startup company.
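A possible starting point for Activity 2 is the basic COCOMO model in organic mode, where effort = 2.4 × KLOC^1.05 person-months and development time = 2.5 × effort^0.38 months. The 20 KLOC size of the mobile app is purely an assumed figure for illustration; your own size estimate would replace it.

```python
def cocomo_basic_organic(kloc):
    """Basic COCOMO, organic mode: returns (effort in person-months,
    development time in months)."""
    effort = 2.4 * kloc ** 1.05
    duration = 2.5 * effort ** 0.38
    return effort, duration

effort, months = cocomo_basic_organic(20.0)  # assumed size: 20 KLOC
# Multiply effort by an average cost per person-month to estimate project cost.
```

For a 20 KLOC app this gives roughly 56 person-months over about 11-12 months; embedded or semi-detached projects would use different coefficients.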

Check Your Understanding - Answers
1. b  2. c  3. d  4. a  5. d

Further Readings and Bibliography
1. The Mythical Man-Month: Essays on Software Engineering by Frederick P. Brooks Jr.
2. Software Project Survival Guide by Steve McConnell
3. Agile Estimating and Planning by Mike Cohn
4. Scrum: The Art of Doing Twice the Work in Half the Time by Jeff Sutherland
5. Software Engineering: A Practitioner’s Approach by Roger S. Pressman
6. Project Management for Software Development by James W. Over
7. Managing the Unmanageable: Rules, Tools and Insights for Managing Software People and Teams by Mickey W. Mantle and Ron Lichty
8. Software Project Management: A Unified Framework by Walker Royce

Amity University Online
