Course Material - Software Engineering
MODULE - I: Software Engineering and Software Process
1.1 The nature of software
Software is: (1) instructions (computer programs) that, when executed, provide the desired features, functions, and performance; (2) data structures that enable the programs to adequately manipulate information; and (3) documentation that describes the operation and use of the programs.
Characteristics of software:
Some important characteristics of software are given below:
1. Functionality
2. Reliability
3. Usability
4. Efficiency
5. Maintainability
6. Portability
Changing Nature of Software:
Today, seven broad categories of computer software present continuing challenges for software engineers. These are described below:
1. System Software:
System software is a collection of programs written to service other programs. Some system software processes complex but determinate information structures; other system applications process largely indeterminate data. In either case, the system software area is characterized by heavy interaction with computer hardware that requires scheduling, resource sharing, and sophisticated process management.
2. Application Software:
Application software is defined as programs that solve a specific business need. Applications in this area process business or technical data in a way that facilitates business operations or management/technical decision making. In addition to conventional data-processing applications, application software is used to control business functions in real time.
3. Engineering and Scientific Software:
This software is used to facilitate engineering functions and tasks. However, modern applications within the engineering and scientific area are moving away from conventional numerical algorithms. Computer-aided design, system simulation, and other interactive applications have begun to take on real-time and even system software characteristics.
4. Embedded Software:
Embedded software resides within a system or product and is used to implement and control features and functions for the end user and for the system itself. Embedded software can perform limited and esoteric functions or provide significant function and control capability.
5. Product-line Software:
Designed to provide a specific capability for use by many different customers, product-line software can focus on a limited and esoteric marketplace or address the mass consumer market.
6. Web Applications:
A web application is a client-server computer program in which the client runs in a web browser. In their simplest form, web apps can be little more than a set of linked hypertext files that present information using text and limited graphics. However, as e-commerce and B2B applications grow in importance, web apps are evolving into sophisticated computing environments that provide standalone features, computing functions, and content to the end user.
7. Artificial Intelligence Software:
Artificial intelligence software makes use of nonnumerical algorithms to solve complex problems that are not amenable to computation or straightforward analysis. Applications within this area include robotics, expert systems, pattern recognition, artificial neural networks, theorem proving, and game playing.
1.2 Software engineering
Software is a collection of integrated programs. It consists of carefully organized instructions and code written by developers in any of various programming languages, together with related documentation such as requirements, design models, and user manuals.
Engineering is the application of scientific and practical knowledge to invent, design, build, maintain, and improve frameworks, processes, etc.
Software engineering, then, is the application of engineering principles and practices to the development and maintenance of software.
1.3 Software engineering layers
1. A quality focus: It defines the continuous process improvement principles of software. It provides integrity, meaning that the software is secured so that data can be accessed only by authorized persons and no outsider can access the data. It also focuses on maintainability and usability.
2. Process: It is the foundation or base layer of software engineering. It is the key that binds all the layers together and enables the development of software on time. Process defines a framework that must be established for the effective delivery of software engineering technology. The software process covers all the activities, actions, and tasks required to be carried out for software development.
Process activities are listed below:
Communication: It is the first and foremost activity in the development of software. Communication is necessary to know the actual demands of the client.
Planning: It basically means drawing a map to reduce the complications of development.
Modeling: In this activity, a model is created according to the client's requirements for better understanding.
Construction: It includes the coding and testing of the software.
Deployment: It includes the delivery of the software to the client for evaluation and feedback.
3. Method: During the process of software development, the answers to all "how to do" questions are given by methods. Methods provide the technical information for all tasks, including communication, requirements analysis, design modeling, program construction, testing, and support.
4. Tools: Software engineering tools provide automated or semi-automated support for processes and methods. Tools are integrated, which means information created by one tool can be used by another.
Fig: 1.1 Software Engineering layers
1.4 The software process
A software process (also known as a software methodology) is a set of related activities that leads to the production of software. These activities may involve developing the software from scratch or modifying an existing system.
Any software process must include the following four activities:
1. Software specification (or requirements engineering): Define the main functionalities of the software and the constraints around them.
2. Software design and implementation: The software is to be designed and
programmed.
3. Software verification and validation: The software must conform to its specification and meet the customer's needs.
4. Software evolution (software maintenance): The software is modified to meet changing customer and market requirements. In practice, these activities include sub-activities such as requirements validation, architectural design, unit testing, etc. There are also supporting activities such as configuration and change management, quality assurance, project management, and user experience.
1.5 Software engineering process
The process encompasses the entire range of activities, from the initial customer concept to software production and maintenance. It's also known as the Software Development Life Cycle (SDLC). Let's take a look at each of the steps involved in a typical software engineering process.
Step 1: Understanding Customer Requirements
This step is also known as the "requirements collection" step. It's all about communicating with the customer before building the software, so you get to know their requirements thoroughly. It's usually conducted by a business analyst or product analyst. A Customer Requirement Specification (CRS) document is written from a customer's perspective and describes, in a simple way, what the software is going to do.
Step 2: Requirement Analysis: Is the Project Feasible?
This stage involves exploring issues related to the financial, technical,
operational, and time management aspects of software development. It's an
essential step towards creating functional specifications and design. It's
usually done by a team of product managers, business analysts, software
architects, developers, HR, and finance managers.
Step 3: Creating a Design
Once the analysis stage is over, it's time to create a blueprint for the
software. Architects and senior developers create a high-level design of the
software architecture, along with a low-level design describing how each and
every component in the software should work.
Step 4: Coding, Testing, and Installation
Next, software developers implement the design by writing code. After all the
code developed by different teams is integrated, test engineers check if the
software meets the required specifications, so that developers can debug
code. The release engineer then deploys the software on a server.
Step 5: Keeping it Going: Maintenance
Maintenance is the application of each of the previous steps to the existing
modules in the software in order to modify or add new features, depending
on what the customer needs.
Fig: 1.2 Software Engineering Process
1.6 Software engineering practice
Practice is a broad array of concepts, principles, methods, and tools that you
must consider as software is planned and developed.
It represents the details—the technical considerations and how to’s—that are
below the surface of the software process—the things that you’ll need to
actually build high-quality computer software.
1.6.1 The Essence of Practice
This section lists the generic framework (communication, planning, modeling,
construction, and deployment) and umbrella (tracking, risk management,
reviews, measurement, configuration management, reusability management,
work product creation, and product) activities found in all software process
models.
George Polya, in a book written in 1945, outlined the essence of problem solving, which is also the essence of software engineering practice:
1. Understand the problem (communication and analysis).
Who are the stakeholders?
What are the unknowns? What data, functions, and features are required to solve the problem?
Can the problem be compartmentalized? Is it possible to represent smaller problems that may be easier to understand?
Can the problem be represented graphically? Can an analysis model be
created?
2. Plan a solution (modeling and software design).
Have you seen a similar problem before?
Has a similar problem been solved? If so, is the solution reusable?
Can sub-problems be defined?
Can you represent a solution in a manner that leads to effective
implementation?
3. Carry out the plan (code generation).
Does the solution conform to the plan?
Is each component part of the solution provably correct?
4. Examine the result for accuracy (testing and quality assurance).
Is it possible to test each component part of the solution?
Does the solution produce results that conform to the data, functions,
features, and behavior that are required?
1.7 Software myths
Software myths are misleading attitudes that have caused serious problems
for managers and technical people alike. Software myths propagate
misinformation and confusion. There are three kinds of software myths:
1) Management myths: Managers with software responsibility are often
under pressure to maintain budgets, keep schedules from slipping, and
improve quality. Following are the management myths:
Myth: We already have a book that's full of standards and procedures for building software; won't that provide my people with everything they need to know?
Reality: The book of standards may very well exist, but it is rarely used. Most software practitioners aren't aware of its existence, and it often neither reflects modern software engineering practice nor is complete.
Myth: My people have state-of-the-art software development tools; after all, we buy them the newest computers.
Reality: It takes much more than the latest model mainframe, workstation,
or PC to do high-quality software development. Computer-aided software
engineering (CASE) tools
are more important than hardware for achieving good quality and
productivity, yet the majority of software developers still do not use them
effectively.
Myth: If we get behind schedule, we can add more programmers and catch
up (sometimes called the Mongolian horde concept).
Reality: Software development is not a mechanistic process like
manufacturing. As new people are added, people who were working must
spend time educating the newcomers, thereby reducing the amount of time
spent on productive development effort. People can be added but only in a
planned and well-coordinated manner.
Myth: If I decide to outsource the software project to a third party, I can just
relax and let that firm build it.
Reality: If an organization does not understand how to manage and control
software projects internally, it will invariably struggle when it outsources
software projects.
2) Customer myths: Customer myths lead to false expectations (by the
customer) and ultimately, dissatisfaction with the developer. Following are
the customer myths:
Myth: A general statement of objectives is sufficient to begin writing
programs-we can fill in the details later.
Reality: A poor up-front definition is the major cause of failed software
efforts. A formal and detailed description of the functions, behavior,
performance, interfaces, design constraints, and validation criteria is
essential.
Myth: Project requirements continually change, but change can be easily
accommodated because software is flexible.
Reality: It is true that software requirements change, but the impact of
change varies with the time at which it is introduced. When changes are
requested during software design, the cost impact grows rapidly. Resources
have been committed and a design framework has been established. Change
can cause heavy additional costs. Change, when requested after software is
in production, can be much more expensive than the same change requested
earlier.
3) Practitioner's myths: Practitioners hold the following myths:
Myth: Once we write the program and get it to work, our job is done.
Reality: Industry data indicate that between 60 and 80 percent of all effort
expended on software will be expended after it is delivered to the customer
for the first time.
Myth: Until I get the program “running” I have no way of assessing its
quality.
Reality: One of the most effective software quality assurance mechanisms
can be applied from the inception of a project—the formal technical review.
1.8 Process models
A software process model is an abstraction of the actual process being described. It can also be defined as a simplified representation of a software process. Each model represents a process from a specific perspective. Basic software process models on which different types of software process models can be implemented are:
A Workflow Model –
It is a sequential series of tasks and decisions that make up a business process.
The Waterfall Model –
It is a sequential design process in which progress is seen as flowing steadily
downwards. Phases in waterfall model:
(i) Requirements Specification
(ii) Software Design
(iii) Implementation
(iv) Testing
Dataflow Model –
It is a diagrammatic representation of the flow and exchange of information within a system.
Evolutionary Development Model –
Following activities are considered in this method:
(i) Specification
(ii) Development
(iii) Validation
Role/Action Model –
It represents the roles of the people involved in the software process and the activities for which they are responsible.
1.9 A generic process model
The generic process model is an abstraction of the software development process. It is used in most software development since it provides a basis for it.
The generic process model encompasses the following five steps:
Communication
Planning
Modelling
Construction
Deployment
Fig: A generic process model
Communication
In this step, we communicate with the clients and end-users.
We discuss the requirements of the project with the users.
The users give suggestions on the project. If any changes are difficult to
implement, we work on alternative ideas.
Planning
In this step, we plan the steps for project development. After completing the
final discussion, we report on the project.
Planning plays a key role in the software development process.
We discuss the risks involved in the project.
Modelling
In this step, we create a model to understand the project in the real world.
We showcase the model to all the developers. If changes are required, we
implement them in this step.
We develop a practical model to get a better understanding of the project.
Construction
In this step, we follow a procedure to develop the final product.
If any code is required for the project development, we implement it in this
phase.
We also test the project in this phase.
Deployment
In this phase, we submit the project to the clients for their feedback and add
any missing requirements.
We get the client's feedback. Depending on that feedback, we make the appropriate changes.
1.10 The waterfall model
The Waterfall approach was the first SDLC model to be used widely in software engineering to ensure the success of a project. In the Waterfall approach, the whole process of software development is divided into separate phases. Typically, the outcome of one phase acts as the input for the next phase sequentially.
The sequential phases in Waterfall model are −
Requirement Gathering and analysis − All possible requirements of the
system to be developed are captured in this phase and documented in a
requirement specification document.
System Design − The requirement specifications from the first phase are studied in this phase and the system design is prepared. This system design helps in specifying hardware and system requirements and helps in defining the overall system architecture.
Implementation − With inputs from the system design, the system is first
developed in small programs called units, which are integrated in the next
phase. Each unit is developed and tested for its functionality, which is
referred to as Unit Testing.
Integration and Testing − All the units developed in the implementation
phase are integrated into a system after testing of each unit. Post integration
the entire system is tested for any faults and failures.
Deployment of system − Once the functional and non-functional testing is done, the product is deployed in the customer environment or released into the market.
Maintenance − There are some issues which come up in the client
environment. To fix those issues, patches are released. Also to enhance the
product some better versions are released. Maintenance is done to deliver
these changes in the customer environment.
All these phases are cascaded to each other in which progress is seen as
flowing steadily downwards (like a waterfall) through the phases. The next
phase is started only after the defined set of goals are achieved for previous
phase and it is signed off, so the name "Waterfall Model". In this model,
phases do not overlap.
Waterfall Model - Application
Every software product developed is different and requires a suitable SDLC approach to be followed based on internal and external factors. Some situations where the use of the Waterfall model is most appropriate are −
Requirements are very well documented, clear and fixed.
Product definition is stable.
Technology is understood and is not dynamic.
There are no ambiguous requirements.
Ample resources with required expertise are available to support the product.
The project is short.
Waterfall Model - Advantages
The advantages of waterfall development are that it allows for
departmentalization and control. A schedule can be set with deadlines for
each stage of development and a product can proceed through the
development process model phases one by one.
Development moves from concept, through design, implementation, testing,
installation, troubleshooting, and ends up at operation and maintenance.
Each phase of development proceeds in strict order.
Some of the major advantages of the Waterfall Model are as follows −
Simple and easy to understand and use
Easy to manage due to the rigidity of the model. Each phase has specific
deliverables and a review process.
Phases are processed and completed one at a time.
Works well for smaller projects where requirements are very well understood.
Clearly defined stages.
Well understood milestones.
Easy to arrange tasks.
Process and results are well documented.
Waterfall Model - Disadvantages
The disadvantage of waterfall development is that it does not allow much
reflection or revision. Once an application is in the testing stage, it is very
difficult to go back and change something that was not well-documented or
thought upon in the concept stage.
The major disadvantages of the Waterfall Model are as follows −
No working software is produced until late during the life cycle.
High amounts of risk and uncertainty.
Not a good model for complex and object-oriented projects.
Poor model for long and ongoing projects.
Not suitable for the projects where requirements are at a moderate to high
risk of changing. So, risk and uncertainty is high with this process model.
It is difficult to measure progress within stages.
Cannot accommodate changing requirements.
Adjusting scope during the life cycle can end a project.
Integration is done as a "big bang" at the very end, which doesn't allow identifying any technological or business bottlenecks or challenges early.
1.11 Incremental process model
The Incremental Model is a process of software development where requirements are divided into multiple standalone modules of the software development cycle. In this model, each module goes through the requirements, design, implementation and testing phases. Every subsequent release of a module adds function to the previous release. The process continues until the complete system is achieved.
1. Requirement analysis: In the first phase of the incremental model, product analysis experts identify the requirements, and the system's functional requirements are understood by the requirement analysis team. This phase plays a crucial role in developing software under the incremental model.
2. Design & Development: In this phase of the Incremental model of SDLC, the
design of the system functionality and the development method are finished with
success. When software develops new practicality, the incremental model uses style
and development phase.
3. Testing: In the incremental model, the testing phase checks the performance of each existing function as well as the additional functionality. In the testing phase, various methods are used to test the behavior of each task.
4. Implementation: The implementation phase covers the final coding of the design produced in the design and development phase, with the functionality verified in the testing phase. After completion of this phase, the working of the product is enhanced and upgraded up to the final system product.
Advantage of Incremental Model
Errors are easy to be recognized.
Easier to test and debug
More flexible.
Simpler to manage risk because it is handled during each iteration.
The Client gets important functionality early.
Disadvantage of Incremental Model
Need for good planning
Total Cost is high.
Well-defined module interfaces are needed.
1.12 Specialized process model
1.12.1 Component-Based Development (CBD)
The Component-Based Development Model has the characteristics of a spiral model and hence is evolutionary and iterative in nature. In this model, applications are built from pre-packaged software components that are available from different vendors. Components are modular products with well-defined functions that can be incorporated into the project of selection. The modeling and construction stages begin with identifying candidate components suitable for the project.
Steps involved in this approach are implemented in an evolutionary fashion:
Components suitable for the application domain are selected.
Component integration issues are catered to.
Software architecture is designed based on the selected components.
Components are integrated into the architecture.
Testing of the functionality of components is done.
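As a loose illustration of component-based assembly, the minimal sketch below builds a small reporting feature from pre-packaged components (here, Python's standard csv and statistics modules) instead of writing the parsing and calculation logic from scratch. The data and the feature itself are hypothetical assumptions for illustration only.

import csv
import io
import statistics

# Hypothetical input data; in a real system this would come from a file or vendor component.
data = io.StringIO("name,marks\nAsha,82\nRavi,74\nMeena,91\n")

# Component 1: a pre-built parsing component (the csv module).
rows = list(csv.DictReader(data))

# Component 2: a pre-built computation component (the statistics module).
marks = [int(row["marks"]) for row in rows]
print("Average marks:", statistics.mean(marks))

The application-specific code is reduced to wiring the components together, which is the essential idea of the CBD model.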
1.12.2 Formal Methods Model (FMM)
In the Formal Methods Model, mathematical methods are applied in the process of developing software. It uses a Formal Specification Language (FSL) to define each system characteristic. An FSL defines the syntax and notations for representing system specifications, along with objects and relations that define the system in detail.
The careful mathematical analysis that is done in FMM results in a defect-free
system. It also helps to identify and correct ambiguity and inconsistency
easily. This method is time-consuming and expensive in nature. Also,
knowledge of formal methods is necessary for developers and is a challenge.
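To give a flavor of formal specification, the sketch below states a precondition and a postcondition for a hypothetical integer square root operation and checks them with assertions. This is only an executable approximation of the idea; a real FSL treatment would be purely mathematical.

def int_sqrt(x: int) -> int:
    """Hypothetical operation, specified formally as:
    precondition:  x >= 0
    postcondition: r * r <= x < (r + 1) * (r + 1)
    """
    assert x >= 0, "precondition violated"       # precondition from the specification
    r = 0
    while (r + 1) * (r + 1) <= x:                # simple loop whose correctness is easy to argue
        r += 1
    assert r * r <= x < (r + 1) * (r + 1)        # postcondition from the specification
    return r

print(int_sqrt(10))  # prints 3, and both assertions hold

Because the specification is precise, ambiguity and inconsistency in the stated behavior can be detected mechanically, which is the main benefit claimed for the formal methods model.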
1.13 The unified process
Unified Process (UP) is an architecture-centric, use-case-driven, iterative and incremental development process. UP is also referred to as the Unified Software Development Process.
The Unified Process is an attempt to draw on the best features and characteristics
of traditional software process models, but characterize them in a way that
implements many of the best principles of agile software development. The Unified
Process recognizes the importance of customer communication and streamlined
methods for describing the customer’s view of a system. It emphasizes the
important role of software architecture and "helps the architect focus on the right goals, such as understandability, resilience to future changes, and reuse".
It suggests a process flow that is iterative and incremental, providing the
evolutionary feel that is essential in modern software development.
1.14 Agile development
The meaning of Agile is swift or versatile. The "Agile process model" refers to a software development approach based on iterative development. Agile methods break tasks into smaller iterations, or parts, and do not directly involve long-term planning. The project scope and requirements are laid down at the beginning of the development process. Plans regarding the number of iterations, and the duration and scope of each iteration, are clearly defined in advance.
Each iteration is considered as a short time "frame" in the Agile process
model, which typically lasts from one to four weeks. The division of the entire
project into smaller parts helps to minimize the project risk and to reduce the
overall project delivery time requirements. Each iteration involves a team
working through a full software development life cycle including planning,
requirements analysis, design, coding, and testing before a working product
is demonstrated to the client.
Phases of Agile Model:
The phases in the Agile model are as follows:
1. Requirements gathering
2. Design the requirements
3. Construction/ iteration
4. Testing/ Quality assurance
5. Deployment
6. Feedback
1. Requirements gathering: In this phase, you must define the requirements.
You should explain business opportunities and plan the time and effort
needed to build the project. Based on this information, you can evaluate
technical and economic feasibility.
2. Design the requirements: When you have identified the project, work with
stakeholders to define requirements. You can use the user flow diagram or the
high-level UML diagram to show the work of new features and show how it will
apply to your existing system.
3. Construction/ iteration: When the team defines the requirements, the work
begins. Designers and developers start working on their project, which aims to
deploy a working product. The product will undergo various stages of
improvement, so it includes simple, minimal functionality.
4. Testing: In this phase, the Quality Assurance team examines the product's
performance and looks for the bug.
5. Deployment: In this phase, the team issues a product for the user's work
environment.
6. Feedback: After releasing the product, the last step is feedback. In this, the team
receives feedback about the product and works through the feedback.
1.15 Extreme programming
Extreme programming (XP) is one of the most important software
development frameworks of Agile models. It is used to improve software
quality and responsiveness to customer requirements. The extreme
programming model recommends taking the best practices that have worked
well in the past in program development projects to extreme levels. Some of the good practices that have been recognized in the extreme programming model, and whose use it suggests maximizing, are given below:
Code Review: Code review detects and corrects errors efficiently. XP suggests pair programming, in which coding and reviewing of written code are carried out by a pair of programmers who switch their roles every hour.
Testing: Testing code helps to remove errors and improves its reliability. XP suggests test-driven development (TDD), in which test cases are continually written and executed; in the TDD approach, test cases are written even before any code is written. (A minimal TDD sketch is given after this list.)
Incremental development: Incremental development is very good because customer feedback is gained, and based on it the development team comes up with a new increment every few days after each iteration.
Simplicity: Simplicity makes it easier to develop good quality code as well as
to test and debug it.
Design: Good quality design is important to develop good quality software.
So, everybody should design daily.
Integration testing: It helps to identify bugs at the interfaces of different
functionalities. Extreme programming suggests that the developers should
achieve continuous integration by building and performing integration testing
several times a day.
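As referenced in the Testing practice above, the following minimal sketch illustrates the test-first rhythm of TDD. It assumes a hypothetical total_marks function: under TDD, the test cases below would be written and run (and would fail) before the function itself exists, and only then is the simplest code written to make them pass.

import unittest

# Step 2 (green): the simplest code that makes the tests below pass.
# Under TDD this function did not exist when the tests were first written.
def total_marks(marks):
    return sum(marks)

# Step 1 (red): the test cases are written before the code.
class TestTotalMarks(unittest.TestCase):
    def test_total_of_three_subjects(self):
        self.assertEqual(total_marks([70, 80, 90]), 240)

    def test_empty_list_totals_zero(self):
        self.assertEqual(total_marks([]), 0)

if __name__ == "__main__":
    unittest.main()

The cycle then repeats: a new failing test is added for the next small piece of behavior, and the code is extended just enough to pass it.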
Basic principles of Extreme programming: XP is based on the frequent
iteration through which the developers implement User Stories. User stories
are simple and informal statements of the customer about the functionalities
needed. A User Story is a conventional description by the user of a feature of
the required system. It does not mention finer details such as the different
scenarios that can occur. Based on User stories, the project team proposes
Metaphors. Metaphors are a common vision of how the system would work.
The development team may decide to build a Spike for some features. A
Spike is a very simple program that is constructed to explore the suitability
of a solution being proposed. It can be considered similar to a prototype.
Some of the basic activities that are followed during software development
by using the XP model are given below:
Coding: The concept of coding which is used in the XP model is slightly
different from traditional coding. Here, the coding activity includes drawing
diagrams (modeling) that will be transformed into code, scripting a web-
based system, and choosing among several alternative solutions.
Testing: XP model gives high importance to testing and considers it to be the
primary factor to develop fault-free software.
Listening: The developers need to listen carefully to the customers if they have to develop good quality software. Sometimes programmers may not have in-depth knowledge of the system to be developed, so they should properly understand the functionality of the system by listening to the customers.
Designing: Without a proper design, a system implementation becomes too complex, and the solution becomes very difficult to understand, making maintenance expensive. A good design results in the elimination of complex dependencies within a system. So, effective use of suitable design is emphasized.
Feedback: One of the most important aspects of the XP model is to gain
feedback to understand the exact customer needs. Frequent contact with the
customer makes the development effective.
Simplicity: The main principle of the XP model is to develop a simple system
that will work efficiently in the present time, rather than trying to build
something that would take time and may never be used. It focuses on some
specific features that are immediately needed, rather than engaging time and
effort on speculations of future requirements.
Applications of Extreme Programming (XP): Some of the projects that are
suitable to develop using the XP model are given below:
Small projects: XP model is very useful in small projects consisting of small
teams as the face-to-face meeting is easier to achieve.
Projects involving new technology or research projects: These projects face rapidly changing requirements and unanticipated technical problems, so the XP model suits them well.
1.16 Scrum
SCRUM is an agile development process focused primarily on ways to
manage tasks in team-based development conditions.
There are three roles in it, and their responsibilities are:
Scrum Master: The Scrum Master sets up the team, arranges the meetings and removes obstacles in the process.
Product owner: The product owner creates the product backlog, prioritizes the backlog and is responsible for the delivery of functionality at each iteration.
Scrum Team: The team manages its work and organizes the work to
complete the sprint or cycle.
1.17 Lean software development
Lean software development methodology follows the principle of "just-in-time production." The lean method aims to increase the speed of software development and reduce costs. Lean development can be summarized in seven phases.
1.18 Dynamic system development method
DSDM (Dynamic Systems Development Method) is a rapid application development strategy for software development that provides an agile project delivery framework. The essential features of DSDM are that users must be actively involved, and teams are given the power to make decisions.
The techniques used in DSDM are:
Time Boxing
MoSCoW Rules
Prototyping
The DSDM project contains seven stages:
Pre-project
Feasibility Study
Business Study
Functional Model Iteration
Design and build Iteration
Implementation
Post-project
Feature Driven Development (FDD):
This method focuses on "designing and building" features. In contrast to other agile methods, FDD describes small steps of the work that should be achieved separately per feature.
1.19 Agile modeling
When you want the ultimate definition of any concept, you can’t do better
than going right to the source. AgileModeling defines Agile modeling as “…a
practice-based methodology for effective modeling and documentation of
software-based systems. Simply put, Agile Modeling (AM) is a collection of
values, principles, and practices for modeling software that can be applied on
a software development project in an effective and lightweight manner.”
The modeling adds to existing Agile methodologies such as the Rational
Unified Process (RUP) or extreme programming (XP). Agile modeling helps
developers create a customized software development process that fulfills
their development needs yet is flexible enough to adjust to future situations.
1.20 Agile unified process
Agile Modeling (AM) is a practice-based methodology for effective modeling
and documentation.
Some important concepts:
Model. A model is an abstraction that expresses important aspects of a thing or concept. Models may be visual (diagrams), non-visual (text descriptions), or executable (working code or equivalent). Models are sometimes called maps or roadmaps within the agile community.
Agile model. Agile models can be something as simple as stickies on a wall,
sketches on a whiteboard, diagrams captured digitally via a drawing tool, or
detailed models captured using a model-based software engineering (MBSE)
tool.
Modeling. Modeling is the act of creating a model. Modeling is sometimes
called mapping.
Agile modeling. Agile modeling is modeling performed in a collaborative and
evolutionary manner.
Document. A document is a persistent representation of a thing or concept. A
document is a model, but not all models are documents (most models are
not persistent).
Agile document. Agile documents can be something as simple as point-form
notes, detailed text, executable tests, or one or more agile models.
1.21 The cleanroom strategy
Processes of Cleanroom development:
The Cleanroom software development approach consists of four key processes:
Management –
It persists throughout the whole project lifetime and covers the project mission, schedule, resources, risk analysis, training, configuration management, etc.
Specification –
It is considered the first process of each increment which consists of requirement
analysis, function specification, usage specification, increment planning, etc.
Development –
It is considered the second process of each increment which consists of software
reengineering, correctness verification, incremental design, etc.
Certification –
It is considered the final process of each increment which consists of usage
modeling and test planning, statistical training and certification process, etc.
MODULE - II: Requirements Engineering and Modeling
2.1 Functional and non-Functional requirements
Functional Requirements: These are the requirements that the end user specifically
demands as basic facilities that the system should offer. All these functionalities
need to be necessarily incorporated into the system as a part of the contract. These
are represented or stated in the form of input to be given to the system, the
operation performed and the output expected. They are basically the requirements
stated by the user which one can see directly in the final product, unlike the non-
functional requirements.
Non-functional requirements: These are basically the quality constraints that the
system must satisfy according to the project contract. The priority or extent to
which these factors are implemented varies from one project to other. They are
also called non-behavioral requirements.
They basically deal with issues like:
Portability
Security
Maintainability
Reliability
Scalability
Performance
Reusability
Flexibility
Table: 2.1 Differences between functional and non-functional requirements
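To make the distinction concrete, the minimal sketch below pairs a hypothetical functional requirement (what the system does: compute an average) with a non-functional performance requirement (how well it does it: within a time budget). The requirement wording, function name, and limits are assumptions for illustration only.

import time

def average_marks(marks):
    """Hypothetical feature governed by two kinds of requirements."""
    return sum(marks) / len(marks)

# Functional requirement: "the system shall compute the average of the entered marks."
assert average_marks([60, 70, 80]) == 70

# Non-functional (performance) requirement: "the computation shall complete within 0.1 s."
start = time.perf_counter()
average_marks(list(range(1, 10001)))
assert time.perf_counter() - start < 0.1
print("Both the functional and the non-functional requirement are satisfied.")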
2.2 The software requirements document
Documentation ensures that the software development team or other stakeholders are
on the same page regarding what needs to be built and are fully aware of the goal,
scope, functional requirements, challenges, and budget regarding the software.
However, as much as creating software is exciting, documenting its requirements
can be boring and tiresome.
Introduction :
(i) Purpose of this Document –
First, the main aim of the document and the purpose it serves are explained and described.
(ii) Scope of this document –
This describes the overall working and main objective of the document and the value it will provide to the customer. It also includes a description of the development cost and the time required.
(iii) Overview –
This gives a description of the product. It is simply a summary or overall review of the product.
General description:
This describes the general functions of the product, including the user's objectives, user characteristics, features, benefits, and why it is important. It also describes the features of the user community.
Functional Requirements:
This fully explains the possible outcomes of the software system, including the effects of the operation of the program. All functional requirements, which may include calculations, data processing, etc., are placed in a ranked order.
Interface Requirements:
This fully describes the software interfaces, i.e., how the software communicates with other programs or with users, whether in the form of a language, code, or message. Examples are shared memory, data streams, etc.
Performance Requirements:
This explains how the software system performs the desired functions under specific conditions. It also states the required time, required memory, maximum error rate, etc.
Design Constraints:
This specifies the constraints (limitations or restrictions) placed on the design team. Examples include the use of a particular algorithm and hardware or software limitations.
Non-Functional Attributes:
This explains the non-functional attributes required of the software system for better performance. Examples include security, portability, reliability, reusability, application compatibility, data integrity, and scalability.
2.3 Requirements specification
The product of the requirements stage of the software development process is the Software Requirements Specification (SRS), also called the requirements document. This report lays a foundation for software engineering activities and is constructed when the entire set of requirements has been elicited and analyzed. The SRS is a formal report that acts as a representation of the software, enabling the customers to review whether it meets their requirements. It also comprises the user requirements for a system as well as detailed specifications of the system requirements.
The SRS is a specification for a specific software product, program, or set of
applications that perform particular functions in a specific environment. It serves
several goals depending on who is writing it. First, the SRS could be written by the
client of a system. Second, the SRS could be written by a developer of the system.
The two approaches create entirely different situations and establish different purposes for the document. In the first case, the SRS is used to define the needs and expectations of the users. In the second case, the SRS is written for different purposes and serves as a contract document between customer and developer.
Following are the features of a good SRS document:
1. Correctness: User review is used to ensure the accuracy of the requirements stated in the SRS. The SRS is said to be correct if it covers all the requirements that are truly expected from the system.
2. Completeness: The SRS is complete if, and only if, it includes the following
elements:
(1). All essential requirements, whether relating to functionality,
performance, design, constraints, attributes, or external interfaces.
(2). Definition of the responses of the software to all realizable classes of input data in all available categories of situations.
Note: It is essential to specify the responses to both valid and invalid values.
(3). Full labels and references to all figures, tables, and diagrams in the SRS
and definitions of all terms and units of measure.
3. Consistency: The SRS is consistent if, and only if, no subset of the individual requirements described in it conflict. There are three types of possible conflict in the SRS:
(1). The specified characteristics of real-world objects may conflict. For example,
(a) The format of an output report may be described in one requirement as
tabular but in another as textual.
(b) One condition may state that all lights shall be green while another states
that all lights shall be blue.
(2). There may be a logical or temporal conflict between two specified actions. For example,
(a) One requirement may determine that the program will add two inputs,
and another may determine that the program will multiply them.
(b) One requirement may state that "A" must always follow "B," while another requires that "A" and "B" occur simultaneously.
(3). Two or more requirements may define the same real-world object but use different terms for that object. For example, a program's request for user input may be called a "prompt" in one requirement and a "cue" in another.
The use of standard terminology and descriptions promotes consistency.
4. Unambiguousness: The SRS is unambiguous when every stated requirement has only one interpretation. This suggests that each element is uniquely interpreted. If a term is used with multiple meanings, the requirements document should clarify the intended meaning so that it is clear and simple to understand.
5. Ranking for importance and stability: The SRS is ranked for importance and stability if each requirement in it has an identifier to indicate either the significance or stability of that particular requirement.
Typically, all requirements are not equally important. Some requirements may be essential, especially for life-critical applications, while others may be desirable. Each element should be identified to make these differences clear and explicit. Another way to rank requirements is to distinguish classes of items as essential, conditional, and optional.
6. Modifiability: The SRS should be made as modifiable as possible and should be capable of quickly accommodating changes to the system to some extent. Modifications should be properly indexed and cross-referenced.
7. Verifiability: The SRS is verifiable when the specified requirements can be checked in a cost-effective way to determine whether the final software meets them. The requirements are verified with the help of reviews.
8. Traceability: The SRS is traceable if the origin of each of the requirements is clear and if it facilitates the referencing of each requirement in future development or enhancement documentation.
2.4 Requirement engineering process
Requirement engineering is the process of defining, documenting and maintaining the requirements. It is a process of gathering and defining the services provided by the system. The requirements engineering process consists of the following main activities:
Requirements elicitation
Requirements specification
Requirements verification and validation
Requirements management
Requirements Elicitation:
It is related to the various ways used to gain knowledge about the project domain and requirements. The various sources of domain knowledge include customers, business manuals, existing software of the same type, standards and other stakeholders of the project.
The techniques used for requirements elicitation include interviews, brainstorming,
task analysis, Delphi technique, prototyping, etc. Some of these are discussed here.
Elicitation does not produce formal models of the requirements understood.
Instead, it widens the domain knowledge of the analyst and thus helps in providing
input to the next stage.
Requirements specification:
This activity is used to produce formal software requirement models. All the
requirements including the functional as well as the non-functional requirements
and the constraints are specified by these models in totality. During specification,
more knowledge about the problem may be required which can again trigger the
elicitation process.
The models used at this stage include ER diagrams, data flow diagrams (DFDs), function decomposition diagrams (FDDs), data dictionaries, etc.
Requirements verification and validation:
Verification: It refers to the set of tasks that ensures that the software correctly
implements a specific function.
Validation: It refers to a different set of tasks that ensures that the software that
has been built is traceable to customer requirements.
If requirements are not validated, errors in the requirement definitions would
propagate to the successive stages resulting in a lot of modification and rework.
The main steps for this process include:
The requirements should be consistent with all the other requirements, i.e., no two requirements should conflict with each other.
The requirements should be complete in every sense.
The requirements should be practically achievable.
Reviews, buddy checks, making test cases, etc. are some of the methods used for
this.
Requirements management:
Requirement management is the process of analyzing, documenting, tracking,
prioritizing and agreeing on the requirement and controlling the communication to
relevant stakeholders. This stage takes care of the changing nature of
requirements. It should be ensured that the SRS is as modifiable as possible so as
to incorporate changes in requirements specified by the end users at later stages
too. Being able to modify the software as per requirements in a systematic and
controlled manner is an extremely important part of the requirements engineering
process.
2.5 Requirements elicitation and analysis
Requirements elicitation is perhaps the most difficult, most error-prone and most communication-intensive aspect of software development. It can be successful only through an effective customer-developer partnership. It is needed to know what the users really need.
Requirements elicitation Activities:
Requirements elicitation includes the following activities. A few of them are listed below –
Knowledge of the overall area where the system is applied.
The details of the precise customer problem where the system is going to be applied must be understood.
Interaction of system with external requirements.
Detailed investigation of user needs.
Define the constraints for system development.
Requirements elicitation Methods:
There are a number of requirements elicitation methods. A few of them are listed below –
Interviews
Brainstorming Sessions
Facilitated Application Specification Technique (FAST)
Quality Function Deployment (QFD)
Use Case Approach
The success of an elicitation technique used depends on the maturity of the analyst,
developers, users, and the customer involved.
1. Interviews:
Objective of conducting an interview is to understand the customer’s expectations
from the software.
It is impossible to interview every stakeholder hence representatives from groups
are selected based on their expertise and credibility.
Interviews may be open-ended or structured.
In open-ended interviews there is no pre-set agenda. Context free questions may
be asked to understand the problem.
In a structured interview, an agenda of fairly open questions is prepared. Sometimes a proper questionnaire is designed for the interview.
2. Brainstorming Sessions:
It is a group technique
It is intended to generate lots of new ideas hence providing a platform to share
views
A highly trained facilitator is required to handle group bias and group conflicts.
Every idea is documented so that everyone can see it.
Finally, a document is prepared which consists of the list of requirements and their
priority if possible.
3. Facilitated Application Specification Technique:
Its objective is to bridge the expectation gap, i.e., the difference between what the developers think they are supposed to build and what customers think they are going to get.
A team oriented approach is developed for requirements gathering.
Each attendee is asked to make a list of objects that are-
Part of the environment that surrounds the system
Produced by the system
Used by the system
Each participant prepares his/her list, different lists are then combined, redundant
entries are eliminated, team is divided into smaller sub-teams to develop mini-
specifications and finally a draft of specifications is written down using all the inputs
from the meeting.
4. Quality Function Deployment:
In this technique customer satisfaction is of prime concern, hence it emphasizes on
the requirements which are valuable to the customer.
Three types of requirements are identified –
Normal requirements –
In this, the objectives and goals of the proposed software are discussed with the customer. Example – normal requirements for a result management system may be entry of marks, calculation of results, etc.
Expected requirements –
These requirements are so obvious that the customer need not explicitly state
them. Example – protection from unauthorized access.
Exciting requirements –
It includes features that are beyond the customer's expectations and prove to be very satisfying when present. Example – when unauthorized access is detected, the system should back up the data and shut down all processes.
The major steps involved in this procedure are –
Identify all the stakeholders, e.g., users, developers, customers, etc.
List out all requirements from customer.
A value indicating degree of importance is assigned to each requirement.
In the end the final list of requirements is categorized as –
It is possible to achieve
It should be deferred and the reason for it
It is impossible to achieve and should be dropped off
5. Use Case Approach:
This technique combines text and pictures to provide a better understanding of the
requirements.
Use cases describe the 'what' of a system, not the 'how'. Hence, they only give a functional view of the system.
The components of the use case design include three major things – actors, use cases, and the use case diagram.
Actor –
It is an external agent that lies outside the system but interacts with it in some way. An actor may be a person, a machine, etc. It is represented as a stick figure. Actors can be primary actors or secondary actors.
Primary actors – require assistance from the system to achieve a goal.
Secondary actors – the system requires assistance from them.
Use cases –
They describe the sequence of interactions between actors and the system. They capture who (actors) does what (interaction) with the system. For example, in a result management system, a Student (actor) might interact with a 'View Result' use case. A complete set of use cases specifies all possible ways to use the system.
Use case diagram –
A use case diagram graphically represents what happens when an actor interacts
with a system. It captures the functional aspect of the system.
A stick figure is used to represent an actor.
An oval is used to represent a use case.
A line is used to represent a relationship between an actor and a use case.
2.6 Requirement validation
Requirements validation is the process of checking that the requirements defined for development describe the system that the customer really wants. To check issues related to requirements, we perform requirements validation. We usually use requirements validation to check for errors at the initial phase of development, as errors may cause excessive rework when detected later in the development process.
In the requirements validation process, we perform a different type of test to check
the requirements mentioned in the Software Requirements Specification (SRS),
these checks include:
Completeness checks
Consistency checks
Validity checks
Realism checks
Ambiguity checks
Verifiability
The output of requirements validation is a list of problems and the agreed actions for the detected problems. The list of problems indicates the problems detected during the process of requirements validation. The list of agreed actions states the corrective actions that should be taken to fix the detected problems.
There are several techniques which are used either individually or in conjunction with other techniques to check the entire system or parts of it:
Test case generation:
Requirements mentioned in the SRS document should be testable; the conducted tests reveal errors present in the requirements. It is generally believed that if a test is difficult or impossible to design, then the requirement will be difficult to implement and should be reconsidered. (A minimal sketch of deriving test cases from a requirement is given at the end of this section.)
Prototyping:
In this validation technique, a prototype of the system is presented to the end users or customers. They experiment with the presented model and check whether it meets their needs. This type of model is generally used to collect feedback about the requirements of the user.
Requirements Reviews:
In this approach, the SRS is carefully reviewed by a group of people, including people from both the contractor organisation and the client side. The reviewers systematically analyse the document to check for errors and ambiguity.
Automated Consistency Analysis:
This approach is used for the automatic detection of errors, such as nondeterminism, missing cases, type errors, and circular definitions, in requirements specifications. First, the requirements are structured in a formal notation; then a CASE tool is used to check the consistency of the system. All inconsistencies are reported, and corrective actions are taken.
Walk-through:
A walkthrough does not have a formally defined procedure and does not require a differentiated role assignment. It is useful for:
Checking early whether the idea is feasible or not.
Obtaining the opinions and suggestions of other people.
Checking the approval of others and reaching an agreement.
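Returning to the test case generation technique above, the sketch below derives boundary-value test cases directly from a hypothetical SRS requirement ("marks entered for a subject shall lie between 0 and 100 inclusive"). The requirement identifier, function name, and cases are illustrative assumptions, not from any real specification.

import unittest

# Hypothetical requirement R-12: "Marks entered for a subject shall lie
# between 0 and 100 inclusive."
def marks_are_valid(marks):
    return 0 <= marks <= 100

class TestRequirementR12(unittest.TestCase):
    def test_cases_derived_from_requirement(self):
        # Boundary and out-of-range cases read straight off the requirement;
        # if these cases were hard to design, the requirement would need rework.
        cases = [(0, True), (100, True), (-1, False), (101, False), (50, True)]
        for marks, expected in cases:
            with self.subTest(marks=marks):
                self.assertEqual(marks_are_valid(marks), expected)

if __name__ == "__main__":
    unittest.main()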
2.7 Requirement management
Requirements management is important because it empowers everyone to clearly
understand stakeholder expectations and confidently deliver a product that has
been verified to meet the requirements and validated to meet the needs.
Requirements management is a sophisticated process that includes many moving
parts and diverse groups of people. Typically, the product management
department, specifically the product manager, is responsible for the requirements
management process. They work with the stakeholders, including business teams,
customers, users, developers, testers, regulators, and quality assurance.
Additionally, a product may have only 100 requirements, or it may have several
thousand. This number will depend on the product’s complexity and level of
regulation. With all these elements at play, success hinges on the ability to get
everyone on the same page, working toward the same goal.
Therefore, the business value of managing requirements is huge, as successful
requirements management is key to project success. Benefits of requirements
management include:
Enhance understanding of stakeholder needs, requirements, and expectations, and of the problem or opportunity the product is meant to address
Gain clarity on scope, budget, and schedule
Minimize costly, time-consuming rework
Increase product quality
Mitigate risk
Improve the likelihood of delivering the right product, within budget and schedule,
with the required quality.
2.8 Requirement analysis
Requirement analysis is a significant and essential activity after elicitation. We
analyze, refine, and scrutinize the gathered requirements to make them consistent
and unambiguous. This activity reviews all requirements and may provide
a graphical view of the entire system. After the completion of the analysis, it is
expected that the understandability of the project may improve significantly. Here,
we may also use the interaction with the customer to clarify points of confusion and
to understand which requirements are more important than others.
The various steps of requirement analysis are shown in the figure below.
Fig: Requirements analysis
(i) Draw the context diagram: The context diagram is a simple model that defines
the boundaries and interfaces of the proposed systems with the external world. It
identifies the entities outside the proposed system that interact with the system.
The context diagram of a student result management system is given below:
Fig: Context diagram of a student result management system
(ii) Development of a Prototype (optional): One effective way to find out what the
customer wants is to construct a prototype, something that looks and preferably
acts as part of the system they say they want.
We can use their feedback to modify the prototype continuously until the customer
is satisfied. Hence, the prototype helps the client to visualize the proposed system
and increases the understanding of the requirements. When developers and users
are not sure about some of the elements, a prototype may help both parties to
reach a final decision.
Some projects are developed for the general market. In such cases, the prototype
should be shown to some representative sample of the population of potential
purchasers. Even though a person who tries out a prototype may not buy the final
system, their feedback may allow us to make the product more attractive to
others.
The prototype should be built quickly and at a relatively low cost. Hence it will
always have limitations and would not be acceptable as the final system. This is an
optional activity.
(iii) Model the requirements: This process usually consists of various graphical
representations of the functions, data entities, external entities, and the
relationships between them. The graphical view may help to find incorrect,
inconsistent, missing, and superfluous requirements. Such models include the Data
Flow diagram, Entity-Relationship diagram, Data Dictionaries, State-transition
diagrams, etc.
(iv) Finalise the requirements: After modeling the requirements, we will have a
better understanding of the system behavior. The inconsistencies and ambiguities
have been identified and corrected. The flow of data amongst various modules has
been analyzed. The elicitation and analysis activities have provided better insight
into the system. Now we finalize the analyzed requirements, and the next step is to
document these requirements in a prescribed format.
2.9 Data modeling concepts
It is the process of creating a data model for the data to be stored in a
database. This data model is a conceptual representation of Data objects, the
associations between different data objects, and the rules.
Data modeling helps in the visual representation of data and enforces business
rules, regulatory compliance, and government policies on the data.
Data Models in DBMS
The Data Model is defined as an abstract model that organizes data description,
data semantics, and consistency constraints of data. The data model emphasizes
what data is needed and how it should be organized, instead of what operations
will be performed on the data. A data model is like an architect's building plan: it
helps to build conceptual models and to set relationships between data items.
The two types of Data Modeling Techniques are
1. Entity Relationship (E-R) Model
2. UML (Unified Modelling Language)
Types of Data Models in DBMS
Types of Data Models: There are mainly three different types of data models:
conceptual data models, logical data models, and physical data models, and each
one has a specific purpose. The data models are used to represent the data and
how it is stored in the database and to set the relationship between data items.
Conceptual Data Model: This Data Model defines WHAT the system contains. This
model is typically created by Business stakeholders and Data Architects. The
purpose is to organize, scope and define business concepts and rules.
Logical Data Model: Defines HOW the system should be implemented regardless of
the DBMS. This model is typically created by data architects and business analysts.
The purpose is to develop a technical map of rules and data structures.
Physical Data Model: This Data Model describes HOW the system will be
implemented using a specific DBMS. This model is typically created by DBAs and
developers. The purpose is the actual implementation of the database.
Fig: Types of data model
Conceptual Data Model
A Conceptual Data Model is an organized view of database concepts and their
relationships. The purpose of creating a conceptual data model is to establish
entities, their attributes, and relationships. In this data modeling level, there is
hardly any detail available on the actual database structure. Business stakeholders
and data architects typically create a conceptual data model.
The 3 basic tenets of the Conceptual Data Model are
Entity: A real-world thing
Attribute: Characteristics or properties of an entity
Relationship: Dependency or association between two entities
Data model example:
Customer and Product are two entities. Customer number and name are
attributes of the Customer entity
Product name and price are attributes of product entity
Sale is the relationship between the customer and product
Fig: Conceptual Data Model
Characteristics of a conceptual data model
Offers Organisation-wide coverage of the business concepts.
This type of data model is designed and developed for a business
audience.
The conceptual model is developed independently of hardware specifications
like data storage capacity, location or software specifications like DBMS
vendor and technology. The focus is to represent data as a user will see it in
the “real world.”
Conceptual data models, also known as domain models, create a common vocabulary
for all stakeholders by establishing basic concepts and scope.
Logical Data Model
The Logical Data Model is used to define the structure of data elements and to
set relationships between them. The logical data model adds further information to
the conceptual data model elements. The advantage of using a Logical data model
is to provide a foundation to form the base for the Physical model. However, the
modeling structure remains generic.
Logical Data Model
At this data modeling level, no primary or secondary key is defined. You need to
verify and adjust the connector details that were set earlier for relationships.
Characteristics of a Logical data model
Describes data needs for a single project but could integrate with other
logical data models based on the scope of the project.
Designed and developed independently from the DBMS.
Data attributes will have datatypes with exact precisions and length.
Normalization is typically applied to the model up to 3NF (third normal form).
Physical Data Model
A Physical Data Model describes a database-specific implementation of the data
model. It offers database abstraction and helps generate the schema. This is
because of the richness of meta-data offered by a Physical Data Model. The physical
data model also helps in visualizing database structure by replicating database
column keys, constraints, indexes, triggers, and other RDBMS features.
Fig:Physical Data Model
Characteristics of a physical data model:
The physical data model describes the data needed for a single project or
application, though it may be integrated with other physical data models based
on project scope.
The model contains relationships between tables, addressing the cardinality
and nullability of the relationships.
Developed for a specific version of a DBMS, location, data storage or
technology to be used in the project.
Columns should have exact datatypes, lengths assigned and default values.
Primary and Foreign keys, views, indexes, access profiles, and
authorizations, etc. are defined.
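As a minimal sketch of how the conceptual Customer-Product-Sale example above might end up at the physical level, the following uses SQLite; all table and column names are illustrative assumptions, not a prescribed schema:

    import sqlite3

    # Throwaway in-memory database; a real system would use a file or server DBMS.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        -- Physical realization of the conceptual Customer, Product, and Sale.
        CREATE TABLE customer (
            customer_number INTEGER PRIMARY KEY,
            name            TEXT NOT NULL
        );
        CREATE TABLE product (
            product_id   INTEGER PRIMARY KEY,
            product_name TEXT NOT NULL,
            price        REAL NOT NULL
        );
        -- 'Sale' is the relationship between Customer and Product.
        CREATE TABLE sale (
            sale_id         INTEGER PRIMARY KEY,
            customer_number INTEGER NOT NULL REFERENCES customer(customer_number),
            product_id      INTEGER NOT NULL REFERENCES product(product_id)
        );
    """)
    conn.close()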
Advantages and Disadvantages of Data Model:
Advantages of Data model:
The main goal of designing a data model is to make certain that the data objects
offered by the functional team are represented accurately.
The data model should be detailed enough to be used for building the
physical database.
The information in the data model can be used for defining the relationship
between tables, primary and foreign keys, and stored procedures.
A data model helps the business to communicate within and across
organizations.
A data model helps to document data mappings in the ETL process.
It helps to recognize correct sources of data to populate the model.
Disadvantages of Data model:
To develop a data model, one should know the characteristics of the physically
stored data.
Such a navigational system produces complex application development and
management, and it requires detailed knowledge of how the data is actually stored.
Even a small change made in the structure may require modification of the entire
application.
There is no set data manipulation language in DBMS.
2.10 Flow oriented modeling
It shows how data objects are transformed by processing functions.
The Flow oriented elements are:
i. Data flow model
It is a graphical technique used to represent information flow.
Data objects flow within the software and are transformed by processing
elements.
Data objects are represented by labeled arrows; transformations are
represented by circles called bubbles.
The DFD is shown in a hierarchical fashion and is split into different levels; the
top level is also called the 'context level diagram'.
ii. Control flow model
A large class of applications requires control flow modeling.
Such applications create control information instead of reports or displays.
The applications process the information within a specified time.
An event is implemented as a Boolean value.
For example, the Boolean values are true or false, on or off, 1 or 0.
iii. Control Specification
A short form for control specification is CSPEC.
It represents the behaviour of the system.
The state diagram in a CSPEC is a sequential specification of the behaviour.
The state diagram includes states, transitions, events, and activities.
The state diagram shows the transition from one state to another state when a
particular event has occurred.
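A state diagram of this kind can be approximated in code. The following minimal sketch treats states, events, and transitions as plain data; the particular states and events are invented for illustration:

    # Minimal state-machine sketch: states, events, and transitions held as data.
    TRANSITIONS = {
        ("idle", "start"): "monitoring",
        ("monitoring", "alarm"): "alerting",
        ("alerting", "reset"): "idle",
    }

    def next_state(state, event):
        # Fire the transition if one is defined; otherwise stay in place.
        return TRANSITIONS.get((state, event), state)

    state = "idle"
    for event in ["start", "alarm", "reset"]:
        state = next_state(state, event)
        print(event, "->", state)   # start -> monitoring, alarm -> alerting, ...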
iv. Process Specification
A short form for process specification is PSPEC.
The process specification is used to describe all flow model processes.
The content of a process specification consists of narrative text, a Program
Design Language (PDL) description of the process algorithm, mathematical
equations, tables, or a UML activity diagram.
2.11 scenario based modeling
Requirements for a computer-based system can be seen in many different ways.
Some software people argue that it's worth using a number of different modes of
representation, while others believe that it's best to select one mode of
representation.
The specific elements of the requirements model are dictated by the analysis
modeling method that is to be used.
Scenario-based elements :
Using a scenario-based approach, the system is described from the user's point of
view. For example, basic use cases and their corresponding use-case diagrams
evolve into more elaborate template-based use cases. Figure 1(a) depicts a UML
activity diagram for eliciting requirements and representing them using use cases.
There are three levels of elaboration.
Class-based elements :
A collection of things that have similar attributes and common behaviors, i.e.,
objects, are categorized into classes. For example, a UML class diagram can be used
to depict a Sensor class for the SafeHome security function. Note that the diagram
lists the attributes of sensors and the operations that can be applied to modify these
attributes. In addition to class diagrams, other analysis modeling elements depict
the manner in which classes collaborate with one another and the relationships and
interactions between classes.
Behavioral elements :
The effect of the behavior of a computer-based system can be seen in the design
that is chosen and the implementation approach that is applied. Modeling elements
that depict behavior must be provided by the requirements model.
2.12 UML modeling
In UML, use-case diagrams model the behavior of a system and help to capture the
requirements of the system.
Use-case diagrams describe the high-level functions and scope of a system. These
diagrams also identify the interactions between the system and its actors. The use
cases and actors in use-case diagrams describe what the system does and how the
actors use it, but not how the system operates internally.
Use-case diagrams illustrate and define the context and requirements of either an
entire system or the important parts of the system. You can model a complex
system with a single use-case diagram, or create many use-case diagrams to model
the components of the system. You would typically develop use-case diagrams in
the early phases of a project and refer to them throughout the development
process.
Use-case diagrams are helpful in the following situations:
Before starting a project, you can create use-case diagrams to model a business so
that all participants in the project share an understanding of the workers,
customers, and activities of the business.
While gathering requirements, you can create use-case diagrams to capture the
system requirements and to present to others what the system should do.
During the analysis and design phases, you can use the use cases and actors from
your use-case diagrams to identify the classes that the system requires.
During the testing phase, you can use use-case diagrams to identify tests for the
system.
The following topics describe model elements in use-case diagrams:
Use cases
A use case describes a function that a system performs to achieve the user’s goal.
A use case must yield an observable result that is of value to the user of the
system.
Actors
An actor represents a role of a user that interacts with the system that you are
modeling. The user can be a human user, an organization, a machine, or another
external system.
Subsystems
In UML models, subsystems are a type of stereotyped component that represent
independent, behavioral units in a system. Subsystems are used in class,
component, and use-case diagrams to represent large-scale components in the
system that you are modeling.
Relationships in use-case diagrams
In UML, a relationship is a connection between model elements. A UML relationship
is a type of model element that adds semantics to a model by defining the structure
and behavior between the model elements.
2.13 case study
A use case diagram is a behavioral UML diagram type, frequently used to analyze
various systems. Use case diagrams enable you to visualize the different types of
roles in a system and how those roles interact with the system. This use case
diagram tutorial will cover the following topics and help you create use cases better.
Importance of use case diagrams
Use case diagram objects
Use case diagram guidelines
Relationships in use case diagrams
How to create use case diagrams ( with example )
o Identifying actors
o Identifying use cases
o When to use “Include”
o How to use generalization
o When to use “Extend”
Use case diagram templates of common scenarios
Importance of Use Case Diagrams
As mentioned before, use case diagrams are used to gather the usage requirements
of a system. Depending on your requirements, you can use that data in different
ways. Below are a few ways to use them.
To identify functions and how roles interact with them – The primary
purpose of use case diagrams.
For a high-level view of the system – Especially useful when presenting
to managers or stakeholders. You can highlight the roles that interact with
the system and the functionality provided by the system without going deep
into inner workings of the system.
To identify internal and external factors – This might sound simple but
in large complex projects a system can be identified as an external role in
another use case.
Use Case Diagram objects
Use case diagrams consist of 4 objects.
Actor
Use case
System
Package
The objects are further explained below.
Actor
An actor in a use case diagram is any entity that performs a role in one given
system. This could be a person, an organization, or an external system, and it is
usually drawn like the skeleton shown below.
Use Case
A use case represents a function or an action within the system. It’s drawn as
an oval and named with the function.
System
The system is used to define the scope of the use case and is drawn as a
rectangle. This is an optional element but useful when you're visualizing large
systems. For example, you can create all the use cases and then use the system
object to define the scope covered by your project. Or you can even use it to show
the different areas covered in different releases.
Package
The package is another optional element that is extremely useful in complex
diagrams. Similar to class diagrams, packages are used to group together use
cases. They are drawn like the image shown below.
Use Case Diagram Guidelines
Although use case diagrams can be used for various purposes there are some
common guidelines you need to follow when drawing use cases.
These include naming standards, directions of arrows, the placing of use cases,
usage of system boxes and also proper usage of relationships.
We’ve covered these guidelines in detail in a separate blog post. So go ahead and
check out use case diagram guidelines.
Relationships in Use Case Diagrams
There are five types of relationships in a use case diagram. They are
Association between an actor and a use case
Generalization of an actor
Extend relationship between two use cases
Include relationship between two use cases
Generalization of a use case
We have covered all these relationships in a separate blog post that has examples
with images. We will not go into detail in this post but you can check
out relationships in use case diagrams.
How to Create a Use Case Diagram
Up to now, you’ve learned about objects, relationships and guidelines that are
critical when drawing use case diagrams. I’ll explain the various processes using a
banking system as an example.
Identifying Actors
Actors are external entities that interact with your system. It can be a person,
another system or an organization. In a banking system, the most obvious actor is
the customer. Other actors can be bank employee or cashier depending on the role
you’re trying to show in the use case.
An example of an external organization can be the tax authority or the central
bank. The loan processor is a good example of an external system associated as an
actor.
Identifying Use Cases
Now it’s time to identify the use cases. A good way to do this is to identify what the
actors need from the system. In a banking system, a customer will need to open
accounts, deposit and withdraw funds, request check books and similar functions.
So all of these can be considered as use cases.
Top level use cases should always provide a complete function required by an actor.
You can extend or include use cases depending on the complexity of the system.
Once you identify the actors and the top level use case you have a basic idea of the
system. Now you can fine tune it and add extra layers of detail to it.
Look for Common Functionality to use Include
Look for common functionality that can be reused across the system. If you find
two or more use cases that share common functionality you can extract the
common functions and add it to a separate use case. Then you can connect it via
the include relationship to show that it’s always called when the original use case is
executed. ( see the diagram for an example ).
Is it Possible to Generalize Actors and Use Cases
There may be instances where actors are associated with similar use cases while
triggering a few use cases unique only to them. In such instances, you can
generalize the actor to show the inheritance of functions. You can do a similar thing
for use case as well.
One of the best examples of this is “Make Payment” use case in a payment system.
You can further generalize it to “Pay by Credit Card”, “Pay by Cash”, “Pay by Check”
etc. All of them have the attributes and the functionality of payment with special
scenarios unique to them.
Optional Functions or Additional Functions
There are some functions that are triggered optionally. In such cases, you can use
the extend relationship and attach an extension rule to it. In the banking
system example below, “Calculate Bonus” is optional and only triggers when a
certain condition is met.
Extend doesn’t always mean it’s optional. Sometimes the use case connected by
extending can supplement the base use case. The thing to remember is that the
base use case should be able to perform a function on its own even if the extending
use case is not called.
Use Case Diagram Templates
MODULE-III -Design Engineering and Metrics
3.1 Design Engineering
Software design sits at the technical kernel of software engineering and is applied
regardless of the software process model that is used. Beginning once software
requirements have been analyzed and modeled, software design is the last software
engineering action within the modeling activity and sets the stage for construction
(code generation and testing).
The requirements model, manifested by scenario-based, class-based, flow-oriented,
and behavioral elements, feeds the design task. Using design notation and design
methods discussed in later chapters, design produces a data/class design, an
architectural design, an interface design, and a component design. The data/class
design transforms class models into design class realizations and the requisite data
structures required to implement the software. The objects and relationships
defined in the CRC diagram and the detailed data content depicted by class
attributes and other notation provide the basis for the data design action. Part of
class design may occur in conjunction with the design of software architecture.
More detailed class design occurs as each software component is designed. The
architectural design defines the relationship between major structural elements of
the software, the architectural styles and design patterns that can be used to
achieve the requirements defined for the system, and the constraints that affect the
way in which architecture can be implemented. The architectural design
representation—the framework of a computer-based system—is derived from the
requirements model. The interface design describes how the software
communicates with systems that interoperate with it, and with humans who use it.
An interface implies a flow of information (e.g., data and/or control) and a specific
type of behavior. Therefore, usage scenarios and behavioral models provide much
of the information required for interface design.The importance of software design
can be stated with a single word—quality. Design is the place where quality is
fostered in software engineering. Design provides you with representations of
software that can be assessed for quality. Design is the only way that you can
accurately translate stakeholder's requirements into a finished software product or
system.
3.2 The design process
Software design is an iterative process through which requirements are translated
into a “blueprint” for constructing the software. Initially, the blueprint depicts a
holistic view of software. That is, the design is represented at a high level of
abstraction, a level that can be directly traced to the specific system objective and
to more detailed data, functional, and behavioral requirements. As design iterations
occur, subsequent refinement leads to design representations at much lower levels
of abstraction. These can still be traced to requirements, but the connection is more
subtle.
The design phase of software development deals with transforming the
customer requirements as described in the SRS documents into a form
implementable using a programming language.
The software design process can be divided into the following three levels or
phases of design:
Interface Design
Architectural Design
Detailed Design
Fig: Design Process
Interface Design:
Interface design is the specification of the interaction between a system and its
environment. This phase proceeds at a high level of abstraction with respect to the
inner workings of the system; i.e., during interface design, the internals of the
system are completely ignored and the system is treated as a black box. Attention
is focused on the dialogue between the target system and the users, devices, and
other systems with which it interacts. The design problem statement produced
during the problem analysis step should identify the people, other systems, and
devices, which are collectively called agents.
Interface design should include the following details:
Precise description of events in the environment, or messages from agents to which
the system must respond.
Precise description of the events or messages that the system must produce.
Specification on the data, and the formats of the data coming into and going out of
the system.
Specification of the ordering and timing relationships between incoming events or
messages, and outgoing events or outputs.
Architectural Design:
Architectural design is the specification of the major components of a system, their
responsibilities, properties, interfaces, and the relationships and interactions
between them. In architectural design, the overall structure of the system is
chosen, but the internal details of major components are ignored.
Issues in architectural design include:
Gross decomposition of the systems into major components.
Allocation of functional responsibilities to components.
Component Interfaces
Component scaling and performance properties, resource consumption
properties, reliability properties, and so forth.
Communication and interaction between components.
The architectural design adds important details ignored during the interface
design. Design of the internals of the major components is ignored until the
last phase of the design.
Detailed Design:
Detailed design is the specification of the internal elements of all major system
components, their properties, relationships, processing, and often their algorithms
and data structures.
The detailed design may include:
Decomposition of major system components into program units.
Allocation of functional responsibilities to units.
User interfaces
Unit states and state changes
Data and control interaction between units
Data packaging and implementation, including issues of scope and visibility
of program elements
Algorithms and data structures
3.3 Design concepts
Concepts are defined as a principal idea or invention that comes into our mind or in
thought to understand something. The software design concept simply means the
idea or principle behind the design. It describes how you plan to solve the problem
of designing software, the logic, or thinking behind how you will design software. It
allows the software engineer to create the model of the system or software or
product that is to be developed or built. The software design concept provides a
supporting and essential structure or model for developing the right software. There
are many concepts of software design and some of them are given below:
The following points should be considered while designing Software:
Abstraction- hide irrelevant data
Abstraction simply means to hide the details to reduce complexity and increase
efficiency or quality. Different levels of abstraction are necessary and must be
applied at each stage of the design process so that any error that is present can be
removed to increase the efficiency of the software solution and to refine the
software solution. The solution should be described in broad ways that cover a wide
range of different things at a higher level of abstraction and a more detailed
description of a solution of software should be given at the lower level of
abstraction.
Modularity- subdivide the system
Modularity simply means dividing the system or project into smaller parts to reduce
the complexity of the system or project. In the same way, modularity in design
means subdividing a system into smaller parts so that these parts can be created
independently and then use these parts in different systems to perform different
functions. It is necessary to divide the software into components known as modules,
because monolithic software is hard for software engineers to grasp. So, modularity
in design has now become a trend and is also important. If the system is left as one
large piece, its complexity requires a lot of effort (cost); but if we are able to divide
the system into components, then the cost would be small.
Architecture- design a structure of something
Architecture simply means a technique to design a structure of something.
Architecture in designing software is a concept that focuses on various elements
and the data of the structure. These components interact with each other and use
the data of the structure in architecture.
Refinement- removes impurities
Refinement simply means to refine something to remove any impurities if present
and increase the quality. The refinement concept of software design is actually a
process of developing or presenting the software or system in a detailed manner
that means to elaborate a system or software. Refinement is very necessary to find
out any error if present and then to reduce it.
Pattern- a repeated form
The pattern simply means a repeated form or design in which the same shape is
repeated several times to form a pattern. The pattern in the design process means
the repetition of a solution to a common recurring problem within a certain context.
Information Hiding- hide the information
Information hiding simply means to hide the information so that it cannot be
accessed by an unwanted party. In software design, information hiding is achieved
by designing the modules in a manner that the information gathered or contained in
one module is hidden and can’t be accessed by any other modules.
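A small illustrative sketch of information hiding (the class and all of its names are invented): the module exposes a narrow interface while its internal representation stays hidden from clients:

    class TemperatureLog:
        """Exposes a narrow interface; the storage format stays hidden."""

        def __init__(self):
            self._readings = []          # leading underscore: internal detail

        def record(self, celsius):
            self._readings.append(celsius)

        def average(self):
            return sum(self._readings) / len(self._readings)

    log = TemperatureLog()
    log.record(21.5)
    log.record(23.0)
    print(log.average())                 # 22.25; clients never touch _readings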
Refactoring- reconstruct something
Refactoring simply means reconstructing something in such a way that it does not
affect the behavior of any other features. Refactoring in software design means
reconstructing the design to reduce complexity and simplify it without affecting the
behavior or its functions. Fowler has defined refactoring as “the process of changing
a software system in a way that it won’t affect the behavior of the design and
improves the internal structure”.
3.4 Software architecture
The architecture of a system describes its major components, their relationships (structures), and how
they interact with each other. Software architecture and design include several contributory factors such
as business strategy, quality attributes, human dynamics, design, and the IT environment.
Fig: Software architecture
Architecture serves as a blueprint for a system. It provides an abstraction to
manage the system complexity and establish a communication and coordination
mechanism among components.
It defines a structured solution to meet all the technical and operational
requirements, while optimizing the common quality attributes like performance and
security.
Further, it involves a set of significant decisions about the organization related to
software development and each of these decisions can have a considerable impact
on quality, maintainability, performance, and the overall success of the final
product. These decisions comprise:
Selection of structural elements and their interfaces by which the system is
composed.
Behavior as specified in collaborations among those elements.
Composition of these structural and behavioral elements into larger
subsystems.
Architectural decisions align with business objectives.
Architectural styles guide the organization.
Goals of Architecture
The primary goal of the architecture is to identify requirements that affect
the structure of the application. A well-laid architecture reduces the business
risks associated with building a technical solution and builds a bridge
between business and technical requirements.
Some of the other goals are as follows −
Expose the structure of the system, but hide its implementation details.
Realize all the use-cases and scenarios.
Try to address the requirements of various stakeholders.
Handle both functional and quality requirements.
Reduce the cost of ownership and improve the organization's market
position.
Improve quality and functionality offered by the system.
Improve external confidence in either the organization or system.
Limitations
Software architecture is still an emerging discipline within software
engineering. It has the following limitations −
Lack of tools and standardized ways to represent architecture.
Lack of analysis methods to predict whether architecture will result in an
implementation that meets the requirements.
Lack of awareness of the importance of architectural design to software
development.
Lack of understanding of the role of software architect and poor
communication among stakeholders.
Lack of understanding of the design process, design experience and
evaluation of design.
Role of Software Architect
A Software Architect provides a solution that the technical team can create
and design for the entire application. A software architect should have
expertise in the following areas −
Design Expertise
Expert in software design, including diverse methods and approaches such as
object-oriented design, event-driven design, etc.
Lead the development team and coordinate the development efforts for the
integrity of the design.
Should be able to review design proposals and make tradeoffs among them.
Domain Expertise
Expert on the system being developed and plan for software evolution.
Assist in the requirement investigation process, assuring completeness and
consistency.
Coordinate the definition of domain model for the system being developed.
Technology Expertise
Expert on available technologies that help in the implementation of the
system.
Coordinate the selection of programming language, framework, platforms,
databases, etc.
Methodological Expertise
Expert on software development methodologies that may be adopted during
SDLC (Software Development Life Cycle).
Choose the appropriate approaches for development that helps the entire
team.
Hidden Role of Software Architect
Facilitates the technical work among team members and reinforces the trust
relationship in the team.
Information specialist who shares knowledge and has vast experience.
Protect the team members from external forces that would distract them and
bring less value to the project.
Deliverables of the Architect
A clear, complete, consistent, and achievable set of functional goals
A functional description of the system, with at least two layers of
decomposition
A concept for the system
A design in the form of the system, with at least two layers of decomposition
A notion of the timing, operator attributes, and the implementation and
operation plans
A document or process which ensures functional decomposition is followed,
and the form of interfaces is controlled.
3.5 Architectural styles
The software needs architectural design to represent the design of the software.
IEEE defines architectural design as “the process of defining a collection of
hardware and software components and their interfaces to establish the framework
for the development of a computer system.” The software that is built for
computer-based systems can exhibit one of these many architectural styles.
Each style will describe a system category that consists of :
A set of components (e.g., a database, computational modules) that will perform a
function required by the system.
The set of connectors will help in coordination, communication, and cooperation
between the components.
Conditions on how components can be integrated to form the system.
Semantic models that help the designer to understand the overall properties of the
system.
The use of architectural styles is to establish a structure for all the components of
the system.
Taxonomy of Architectural styles:
Data centered architectures:
A data store will reside at the center of this architecture and is accessed frequently
by the other components that update, add, delete or modify the data present within
the store.
The figure illustrates a typical data-centered style. The client software accesses a
central repository. Variations of this approach are used to transform the repository
into a blackboard that notifies client software when data related to the client, or
data of interest to the client, changes.
This data-centered architecture promotes integrability. This means that existing
components can be changed and new client components can be added to the
architecture without concern for other clients.
Data can be passed among clients using blackboard mechanism.
Data flow architectures:
This kind of architecture is used when input data is to be transformed into output
data through a series of computational or manipulative components.
The figure represents a pipe-and-filter architecture, which has a set of components
called filters connected by pipes.
Pipes are used to transmit data from one component to the next.
Each filter works independently and is designed to take data input of a certain
form and produce data output of a specified form for the next filter. The filters
don't require any knowledge of the workings of neighboring filters.
If the data flow degenerates into a single line of transforms, then it is termed as
batch sequential. This structure accepts the batch of data and then applies a series
of sequential components to transform it.
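A minimal sketch of the pipe-and-filter idea using Python generators; the filters and data are invented for illustration, with the chained generators acting as the pipes:

    # Each filter consumes an input stream and yields a transformed stream.
    def read_numbers(lines):
        for line in lines:
            yield int(line)

    def drop_negatives(numbers):
        for n in numbers:
            if n >= 0:
                yield n

    def scale(numbers, factor=10):
        for n in numbers:
            yield n * factor

    # The chained generators play the role of the pipes.
    source = ["3", "-1", "5"]
    for value in scale(drop_negatives(read_numbers(source))):
        print(value)                     # 30, then 50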
Call and Return architectures: It is used to create a program that is easy to scale
and modify. Many sub-styles exist within this category. Two of them are explained
below.
Remote procedure call architecture: The components of a main program or
subprogram architecture are distributed among multiple computers on a network.
Main program or subprogram architectures: The main program structure
decomposes into a number of subprograms or functions in a control hierarchy. The
main program contains a number of subprograms that can invoke other components.
Fig: Software architecture styles
Object Oriented architecture: The components of a system encapsulate data and
the operations that must be applied to manipulate the data. The coordination and
communication between the components are established via the message passing.
Layered architecture:
A number of different layers are defined, with each layer performing a well-defined
set of operations. Moving inward, each layer performs operations that become
progressively closer to the machine instruction set.
At the outer layer, components receive user interface operations; at the inner
layers, components perform operating system interfacing (communication and
coordination with the OS).
Intermediate layers provide utility services and application software functions.
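A tiny hypothetical sketch of the layered idea, in which each layer calls only the layer beneath it (all function names are invented):

    # Hypothetical three-layer sketch: each layer calls only the layer below it.
    def storage_read(key):               # inner layer: OS/storage interfacing
        return {"greeting": "hello"}.get(key, "")

    def service_get_greeting():          # intermediate layer: utility/application
        return storage_read("greeting").title()

    def ui_show_greeting():              # outer layer: user interface operation
        print(service_get_greeting())

    ui_show_greeting()                   # prints: Hello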
3.6 Architectural design
An architectural design performs the following functions.
1. It defines an abstraction level at which the designers can specify the
functional and performance behaviour of the system.
2. It acts as a guideline for enhancing the system (whenever required)
by describing those features of the system that can be modified easily
without affecting the system integrity.
3. It evaluates all top-level designs.
4. It develops and documents top-level design for the external and
internal interfaces.
5. It develops preliminary versions of user documentation.
6. It defines and documents preliminary test requirements and the
schedule for software integration.
The sources of architectural design are listed below.
Information regarding the application domain for the software to be developed
Using data-flow diagrams
Availability of architectural patterns and architectural styles.
Architectural Design Representation
Architectural design can be represented using the following models.
Structural model: Illustrates architecture as an ordered collection of program
components
Dynamic model: Specifies the behavioral aspect of the software architecture and
indicates how the structure or system configuration changes as the function
changes due to change in the external environment
Process model: Focuses on the design of the business or technical process,
which must be implemented in the system
Functional model: Represents the functional hierarchy of a system
Framework model: Attempts to identify repeatable architectural design patterns
encountered in similar types of application. This leads to an increase in the level
of abstraction.
Architectural Design Output
The architectural design process results in an Architectural Design Document
(ADD). This document consists of a number of graphical representations
that comprise software models along with associated descriptive text. The
software models include the static model, interface model, relationship model,
and dynamic process model. They show how the system is organized into
processes at run-time.
Architectural design document gives the developers a solution to the problem
stated in the Software Requirements Specification (SRS). Note that it considers
only those requirements in detail that affect the program structure. In addition
to ADD, other outputs of the architectural design are listed below.
Various reports including audit report, progress report, and configuration status
accounts report
Various plans for detailed design phase, which include the following
Software verification and validation plan
Software configuration management plan
Software quality assurance plan
Software project management plan.
3.9 metrics in the process and project domains
A software metric is a measure of software characteristics which are measurable or
countable. Software metrics are valuable for many reasons, including measuring
software performance, planning work items, measuring productivity, and many other
uses.
Within the software development process there are many metrics, and they are all
related. Software metrics are related to the four functions of management:
planning, organization, control, and improvement.
Classification of Software Metrics
Software metrics can be classified into two types as follows:
1. Product Metrics: These are the measures of various characteristics of the
software product. The two important software characteristics are:
1. Size and complexity of software.
2. Quality and reliability of software.
These metrics can be computed for different stages of SDLC.
2. Process Metrics: These are the measures of various characteristics of the
software development process. For example, the efficiency of fault detection. They
are used to measure the characteristics of methods, techniques, and tools that are
used for developing software.
Types of Metrics
Internal metrics: Internal metrics are the metrics used for measuring properties
that are viewed to be of greater importance to a software developer. For example,
Lines of Code (LOC) measure.
External metrics: External metrics are the metrics used for measuring properties
that are viewed to be of greater importance to the user, e.g., portability, reliability,
functionality, usability, etc.
Hybrid metrics: Hybrid metrics are the metrics that combine product, process,
and resource metrics. For example, cost per FP where FP stands for Function Point
Metric.
Project metrics: Project metrics are the metrics used by the project manager to
check the project's progress. Data from past projects are used to collect various
metrics, like time and cost; these estimates are used as a baseline for new software.
Note that as the project proceeds, the project manager will check its progress from
time-to-time and will compare the effort, cost, and time with the original effort,
cost and time. Also understand that these metrics are used to decrease the
development costs, time efforts and risks. The project quality can also be improved.
As quality improves, the number of errors and time, as well as cost required, is also
reduced.
Advantage of Software Metrics
Comparative study of various design methodologies for software systems.
For analysis, comparison, and critical study of different programming languages
with respect to their characteristics.
In comparing and evaluating the capabilities and productivity of people involved in
software development.
In the preparation of software quality specifications.
In the verification of compliance of software systems requirements and
specifications.
In making inference about the effort to be put in the design and development of the
software systems.
In getting an idea about the complexity of the code.
In making decisions about whether further division of a complex module is needed
or not.
In guiding resource managers toward proper resource utilization.
In comparison and making design tradeoffs between software development and
maintenance cost.
3.10 software measurement
A measurement is a manifestation of the size, quantity, amount, or dimension of a
particular attribute of a product or process. Software measurement is a quantified
attribute of a characteristic of a software product or the software process. It is a
discipline within software engineering, and the software measurement process is
defined and governed by ISO standards.
Software Measurement Principles:
The software measurement process can be characterized by five activities-
Formulation: The derivation of software measures and metrics appropriate for the
representation of the software that is being considered.
Collection: The mechanism used to accumulate data required to derive the
formulated metrics.
Analysis: The computation of metrics and the application of mathematical tools.
Interpretation: The evaluation of metrics resulting in insight into the quality of the
representation.
Feedback: Recommendation derived from the interpretation of product metrics
transmitted to the software team.
Need for Software Measurement:
Software is measured to:
Assess the quality of the current product or process.
Anticipate future qualities of the product or process.
Enhance the quality of a product or process.
Regulate the state of the project in relation to budget and schedule.
Classification of Software Measurement:
There are 2 types of software measurement:
Direct Measurement: In direct measurement, the product, process, or thing is
measured directly using a standard scale.
Indirect Measurement: In indirect measurement, the quantity or quality to be
measured is measured using related parameters i.e. by use of reference.
Metrics:
A metric is a measurement of the degree to which any attribute belongs to a
system, product, or process.
Software metrics will be useful only if they are characterized effectively and
validated so that their worth is proven. There are 4 functions related to software
metrics:
Planning
Organizing
Controlling
Improving
Characteristics of software Metrics:
Quantitative: Metrics must possess quantitative nature. It means metrics can be
expressed in values.
Understandable: Metric computation should be easily understood, and the method
of computing metrics should be clearly defined.
Applicability: Metrics should be applicable in the initial phases of the development
of the software.
Repeatable: The metric values should be the same when measured repeatedly and
consistent in nature.
Economical: The computation of metrics should be economical.
Language Independent: Metrics should not depend on any programming language.
Classification of Software Metrics:
There are 3 types of software metrics:
Product Metrics: Product metrics are used to evaluate the state of the product,
tracing risks and uncovering prospective problem areas. The ability of the team to
control quality is evaluated.
Process Metrics: Process metrics pay particular attention to enhancing the long-
term process of the team or organization.
Project Metrics: Project metrics describe the project characteristics and execution
process. Examples include:
Number of software developers
Staffing patterns over the life cycle of the software
Cost and schedule
Productivity
3.11 Metrics for software quality
Software metrics can be classified into three categories −
Product metrics − Describes the characteristics of the product such as size,
complexity, design features, performance, and quality level.
Process metrics − These characteristics can be used to improve the development
and maintenance activities of the software.
Project metrics − These metrics describe the project characteristics and execution.
Examples include the number of software developers, the staffing pattern over the
life cycle of the software, cost, schedule, and productivity.
Some metrics belong to multiple categories. For example, the in-process quality
metrics of a project are both process metrics and project metrics.
Software quality metrics are a subset of software metrics that focus on the quality
aspects of the product, process, and project. These are more closely associated
with process and product metrics than with project metrics.
Software quality metrics can be further divided into three categories −
Product quality metrics
In-process quality metrics
Maintenance quality metrics
Product Quality Metrics
These metrics include the following −
Mean Time to Failure
Defect Density
Customer Problems
Customer Satisfaction
Mean Time to Failure
It is the time between failures. This metric is mostly used with safety critical
systems such as the airline traffic control systems, avionics, and weapons.
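For illustration, MTTF can be estimated as the mean gap between observed failure timestamps; the data below is hypothetical:

    # Estimating MTTF as the mean gap between failure timestamps (hours).
    failure_times = [120, 250, 410, 600]           # hypothetical observations
    gaps = [b - a for a, b in zip(failure_times, failure_times[1:])]
    mttf = sum(gaps) / len(gaps)
    print(mttf)                                    # 160.0 hours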
Defect Density
It measures the defects relative to the software size, expressed in lines of code or
function points, i.e., it measures code quality per unit of size. This metric is used in
many commercial software systems.
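A quick worked example with hypothetical numbers:

    # Defect density = defects found / size in KLOC (hypothetical numbers).
    defects_found = 45
    size_kloc = 30.0                               # thousand lines of code
    print(defects_found / size_kloc)               # 1.5 defects per KLOC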
Customer Problems
It measures the problems that customers encounter when using the product. It
contains the customer’s perspective towards the problem space of the software,
which includes the non-defect oriented problems together with the defect problems.
The problems metric is usually expressed in terms of Problems per User-Month
(PUM).
PUM = Total problems that customers reported (true defect and non-defect oriented
problems) for a time period ÷ Total number of license-months of the software
during the period
Where,
Number of license-months of the software = Number of installed licenses of the
software × Number of months in the calculation period
PUM is usually calculated for each month after the software is released to the
market, and also for monthly averages by year.
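A worked example of the PUM formula with hypothetical figures:

    # PUM = problems reported / license-months (hypothetical figures).
    problems_reported = 90                         # defect and non-defect oriented
    installed_licenses = 1500
    months = 1
    license_months = installed_licenses * months
    print(problems_reported / license_months)      # 0.06 problems per user-month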
Customer Satisfaction
Customer satisfaction is often measured by customer survey data through the five-
point scale −
Very satisfied
Satisfied
Neutral
Dissatisfied
Very dissatisfied
Satisfaction with the overall quality of the product and its specific dimensions is
usually obtained through various methods of customer surveys. Based on the five-
point-scale data, several metrics with slight variations can be constructed and used,
depending on the purpose of analysis. For example −
Percent of completely satisfied customers
Percent of satisfied customers
Percent of dissatisfied customers
Percent of non-satisfied customers
Usually, this percent satisfaction is used.
In-process Quality Metrics
In-process quality metrics deal with the tracking of defect arrival during formal
machine testing for some organizations. These metrics include −
Defect density during machine testing
Defect arrival pattern during machine testing
Phase-based defect removal pattern
Defect removal effectiveness
Defect density during machine testing
Defect rate during formal machine testing (testing after code is integrated into the
system library) is correlated with the defect rate in the field. Higher defect rates
found during testing are an indicator that the software has experienced higher error
injection during its development process, unless the higher testing defect rate is
due to an extraordinary testing effort.
This simple metric of defects per KLOC or function point is a good indicator of
quality, while the software is still being tested. It is especially useful to monitor
subsequent releases of a product in the same development organization.
Defect arrival pattern during machine testing
The overall defect density during testing will provide only the summary of the
defects. The pattern of defect arrivals gives more information about different
quality levels in the field. It includes the following −
The defect arrivals, or defects reported during the testing phase, by time interval
(e.g., week). Not all of these will be valid defects.
The pattern of valid defect arrivals when problem determination is done on the
reported problems. This is the true defect pattern.
The pattern of defect backlog over time. This metric is needed because development
organizations cannot investigate and fix all the reported problems immediately. This
is a workload statement as well as a quality statement. If the defect backlog is
large at the end of the development cycle and a lot of fixes have yet to be
integrated into the system, the stability of the system (hence its quality) will be
affected. Retesting (regression test) is needed to ensure that targeted product
quality levels are reached.
Phase-based defect removal pattern
This is an extension of the defect density metric during testing. In addition to
testing, it tracks the defects at all phases of the development cycle, including the
design reviews, code inspections, and formal verifications before testing.
Because a large percentage of programming defects is related to design problems,
conducting formal reviews, or functional verifications to enhance the defect removal
capability of the process at the front-end reduces error in the software. The pattern
of phase-based defect removal reflects the overall defect removal ability of the
development process.
With regard to the metrics for the design and coding phases, in addition to defect
rates, many development organizations use metrics such as inspection coverage
and inspection effort for in-process quality management.
Defect removal effectiveness
It can be defined as follows −
DRE = (Defects removed during a development phase ÷ Defects latent in the product) × 100%
This metric can be calculated for the entire development process, for the front-end
before code integration and for each phase. It is called early defect removal when
used for the front-end and phase effectiveness for specific phases. The higher the
value of the metric, the more effective the development process and the fewer the
defects passed to the next phase or to the field. This metric is a key concept of the
defect removal model for software development.
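A worked example of DRE with hypothetical counts, where the latent defects are those removed in the phase plus those that escaped to later phases or the field:

    # DRE = defects removed in a phase / defects latent in the product.
    removed_in_phase = 80
    escaped_to_later = 20                          # found in later phases or field
    latent = removed_in_phase + escaped_to_later
    print(removed_in_phase * 100 / latent)         # 80.0 (%)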
Maintenance Quality Metrics
Although much cannot be done to alter the quality of the product during this phase,
the following metrics help ensure that defects are eliminated as soon as possible
and with excellent fix quality.
Fix backlog and backlog management index
Fix response time and fix responsiveness
Percent delinquent fixes
Fix quality
Fix backlog and backlog management index
Fix backlog is related to the rate of defect arrivals and the rate at which fixes for
reported problems become available. It is a simple count of reported problems that
remain at the end of each month or each week. Using it in the format of a trend
chart, this metric can provide meaningful information for managing the
maintenance process.
Backlog Management Index (BMI) is used to manage the backlog of open and
unresolved problems.
BMI = (Number of problems closed during the month ÷ Number of problems arrived during the month) × 100%
If BMI is larger than 100, it means the backlog is reduced. If BMI is less than 100,
then the backlog increased.
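A worked example of BMI with hypothetical counts:

    # BMI = problems closed / problems arrived, per month.
    closed_this_month = 110
    arrived_this_month = 100
    print(closed_this_month * 100 / arrived_this_month)  # 110.0 -> backlog shrank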
Fix response time and fix responsiveness
The fix response time metric is usually calculated as the mean time of all problems
from open to close. Short fix response time leads to customer satisfaction.
The important elements of fix responsiveness are customer expectations, the
agreed-to fix time, and the ability to meet one's commitment to the customer.
Percent delinquent fixes
It is calculated as follows −
Percent Delinquent Fixes = (Number of fixes that exceeded the response time criteria by severity level / Number of fixes delivered in a specified time) × 100%
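Both responsiveness metrics can be derived from simple fix records, as in this sketch (the dates and per-severity response-time criteria are hypothetical):

from datetime import date
from statistics import mean

# Each record: (date opened, date closed, response-time criterion in days for its severity).
fixes = [
    (date(2024, 1, 2), date(2024, 1, 5), 7),
    (date(2024, 1, 3), date(2024, 1, 20), 7),   # 17 days > 7-day criterion: delinquent
    (date(2024, 1, 10), date(2024, 1, 12), 3),
]

# Fix response time: mean time from open to close.
print("mean fix response time:", mean((c - o).days for o, c, _ in fixes), "days")

# Percent delinquent fixes: fixes exceeding their severity-level criterion.
delinquent = sum(1 for o, c, limit in fixes if (c - o).days > limit)
print("percent delinquent fixes:", round(delinquent / len(fixes) * 100, 1), "%")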
Fix Quality
Fix quality or the number of defective fixes is another important quality metric for
the maintenance phase. A fix is defective if it did not fix the reported problem, or if
it fixed the original problem but injected a new defect. For mission-critical software,
defective fixes are detrimental to customer satisfaction. The metric of percent
defective fixes is the percentage of all fixes in a time interval that is defective.
A defective fix can be recorded in two ways: Record it in the month it was
discovered or record it in the month the fix was delivered. The first is a customer
measure; the second is a process measure. The difference between the two dates is
the latent period of the defective fix.
Usually, the longer the latency, the more customers are affected. If the number of
defects is large, then a small value of the percentage metric paints an overly
optimistic picture. The quality goal for the maintenance process, of course,
is zero defective fixes without delinquency.
MODULE-IV Software Testing Strategies and Applications
4.1 Testing Strategies
Software Testing is a type of investigation to find out whether there is any defect
or error present in the software, so that the errors can be reduced or removed to
increase the quality of the software, and to check whether it fulfills the specified
requirements or not.
According to Glen Myers, software testing has the following objectives:
The process of investigating and checking a program to find whether there is an
error or not and does it fulfill the requirements or not is called testing.
When the number of errors found during testing is high, it indicates that the
testing was good and is a sign of a good test case.
Finding an unknown error that was not discovered yet is the sign of a successful
and good test case.
The main objective of software testing is to design the tests in such a way that it
systematically finds different types of errors without taking much time and effort so
that less time is required for the development of the software.
The overall strategy for testing software includes:
Fig: Software testing strategies
Before testing starts, it’s necessary to identify and specify the requirements of the
product in a quantifiable manner.
The software has different quality characteristics, such as maintainability, meaning
the ability to update and modify; probability, meaning the ability to find and
estimate any risk; and usability, meaning how easily it can be used by the
customers or end users. All these quality characteristics should be specified in a
particular order to obtain clear test results without any error.
Specifying the objectives of testing in a clear and detailed manner.
There are several objectives of testing, such as effectiveness, meaning how
effectively the software can achieve its target; failure, meaning the inability to
fulfill the requirements and perform functions; and the cost of defects or errors,
meaning the cost required to fix an error. All these objectives should be clearly
mentioned in the test plan.
For the software, identifying the user’s category and developing a profile for each
user.
Use cases describe the interactions and communication among different classes of
users and the system to achieve the target, so that the actual requirements of the
users can be identified and the actual use of the product can then be tested.
Developing a test plan to give value and focus on rapid-cycle testing.
Rapid-cycle testing is a type of testing that improves quality by identifying and
measuring any changes that are required to improve the software process.
Therefore, a test plan is an important and effective document that helps the tester
perform rapid-cycle testing.
Robust software is developed that is designed to test itself.
The software should be capable of detecting or identifying different classes of
errors. Moreover, software design should allow automated and regression testing
which tests the software to find out if there is any adverse or side effect on the
features of software due to any change in code or program.
Before testing, using effective formal reviews as a filter.
A formal technical review is a technique to identify errors that have not yet been
discovered. Effective technical reviews conducted before testing reduce a
significant amount of testing effort and the time required for testing, so that the
overall development time of the software is reduced.
Conduct formal technical reviews to evaluate the nature, quality or ability of the
test strategy and test cases.
The formal technical review helps in detecting any unfilled gaps in the testing
approach. Hence, it is necessary for technical reviewers to evaluate the ability and
quality of the test strategy and test cases to improve the quality of the software.
For the testing process, developing an approach for continuous improvement.
As part of a statistical process control approach, the test strategy should itself be
measured, and the measurements used to control quality during the development
of the software.
4.2 Strategic issues
Following are the issues considered to implement software testing
strategies.
Specify the requirements in a quantifiable manner before testing starts.
Generate profiles for each category of user, according to the categories of users.
Produce robust software that is designed to test itself.
Use formal technical reviews (FTR) for effective testing.
Conduct FTRs to assess the test strategy and test cases.
To improve the quality level of testing, generate test plans from user
feedback.
Test strategies for conventional software
Following are the four strategies for conventional software:
1) Unit testing
2) Integration testing
3) Regression testing
4) Smoke testing
1) Unit testing
Unit testing focuses on the smallest unit of software design, i.e., the module or
software component.
The test strategy is conducted on each module interface to assess the flow of input
and output.
The local data structure is examined to verify integrity during execution.
Boundary conditions are tested.
All error-handling paths are tested.
Independent paths are tested.
(A minimal unit-test sketch follows the figure below.)
Fig:4.2 unit testing
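A minimal unit-test sketch using Python's built-in unittest framework (the divide function under test is hypothetical); it exercises typical input, a boundary condition, and an error-handling path:

import unittest

def divide(a: float, b: float) -> float:
    """The smallest 'unit' under test; raises on invalid input instead of failing silently."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

class DivideTests(unittest.TestCase):
    def test_typical_input(self):
        self.assertEqual(divide(10, 2), 5)

    def test_boundary_condition(self):
        self.assertEqual(divide(0, 5), 0)  # boundary: zero numerator

    def test_error_handling_path(self):
        with self.assertRaises(ValueError):  # the error-handling path is exercised too
            divide(1, 0)

if __name__ == "__main__":
    unittest.main()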
2) Integration testing
Integration testing is used for the construction of software architecture.
There are two approaches to integration testing:
i) Non-incremental integration testing
ii) Incremental integration testing
i) Non-incremental integration testing
All the components are combined in advance.
When a set of errors occurs, correction is difficult because isolating the cause is
complex.
ii) Incremental integration testing
The programs are built and tested in small increments.
The errors are easier to correct and isolate.
Interfaces are fully tested and applied for a systematic test approach to it.
Following are the incremental integration strategies:
a. Top-down integration
b. Bottom-up integration
a. Top-down integration
It is an incremental approach for building the software architecture.
It starts with the main control module or program.
Modules are merged by moving downward through the control hierarchy.
Problems with top-down approach of testing
Following are the problems associated with the top-down approach of testing:
Top-down integration is an incremental integration testing approach in which
the test conditions can be difficult to create.
When a set of errors occurs, correction is difficult because of the problem of
isolating the cause.
The program is expanded into various modules, which adds complication.
If previous errors are corrected, new ones get created and the process
continues; this situation is like an infinite loop.
b. Bottom-up integration
In bottom up integration testing the components are combined from the lowest
level in the program structure.
Bottom-up integration is implemented in the following steps (a minimal driver
sketch follows the list):
Low-level components are merged into clusters which perform a specific
software sub-function.
A control program for testing (a driver) coordinates test-case input and output.
The clusters are then tested.
The driver is removed and the clusters are merged by moving upward in the
program structure.
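A minimal sketch of a bottom-up test driver (the record-parsing cluster is hypothetical); the driver stands in for upper-level control modules that have not yet been integrated:

# Low-level components forming a cluster for one sub-function: summing records.
def parse_record(line: str) -> dict:
    name, amount = line.split(",")
    return {"name": name, "amount": int(amount)}

def total(records: list) -> int:
    return sum(r["amount"] for r in records)

def driver():
    """Coordinates test-case input and output for the cluster; removed once the
    real upper-level modules are integrated."""
    records = [parse_record(line) for line in ["a,10", "b,20"]]
    assert total(records) == 30
    print("cluster test passed")

driver()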
3) Regression testing
In regression testing, each time a new module is added as part of integration
testing, the software architecture changes; previously executed tests are re-run to
ensure the changes have not introduced unintended side effects.
4) Smoke testing
The developed software components are translated into code and merged to
complete the product; the resulting build is then exercised to uncover errors that
would block progress.
4.3 Test Strategies for object oriented software
Techniques of object-oriented testing are as follows:
Fault Based Testing:
This type of testing permits designing test cases based on the client specification
or the code or both. It tries to identify plausible faults (areas of design or code that
may lead to errors). For each of these faults, a test case is developed to "flush"
the errors out. These tests also force each line of code to be executed.
This method of testing does not find all types of errors; for example, incorrect
specification and interface errors can be missed. These types of errors can be
uncovered by function testing in the traditional testing model. In the object-
oriented model, interaction errors can be uncovered by scenario-based testing. This
form of object-oriented testing can only test against the client's specifications, so
interface errors are still missed.
Class Testing Based on Method Testing:
This approach is the simplest approach to test classes. Each method of the class
performs a well-defined, cohesive function and can, therefore, be related to unit
testing of the traditional testing techniques. Therefore all the methods of a class
can be invoked at least once to test the class.
Random Testing:
It is based on developing a random test sequence that tries the minimum variety
of operations typical of the behavior of the class.
Partition Testing:
This methodology categorizes the inputs and outputs of a class in order to test
them separately. It minimizes the number of test cases that have to be designed.
Scenario-based Testing:
It primarily involves capturing the user actions and then simulating them as
similar actions throughout the test.
These tests tend to uncover interaction types of errors.
4.4 Validation Testing
The process of evaluating software during the development process or at the end of the
development process to determine whether it satisfies specified business requirements.
Validation Testing ensures that the product actually meets the client's needs. It can also be
defined as demonstrating that the product fulfills its intended use when deployed in an
appropriate environment.
Validation Testing - Workflow:
Validation testing can be best demonstrated using V-Model. The Software/product under test is
evaluated during this type of testing.
Fig:4.4 Validation testing
Activities:
Unit Testing
Integration Testing
System Testing
User Acceptance Testing
4.5 system testing
System Testing is a type of software testing that is performed on a complete
integrated system to evaluate the compliance of the system with the corresponding
requirements. In system testing, integration testing passed components are taken
as input. The goal of integration testing is to detect any irregularity between the
units that are integrated together. System testing detects defects within both the
integrated units and the whole system. The result of system testing is the observed
behavior of a component or a system when it is tested. System Testing is carried
out on the whole system in the context of either system requirement specifications
or functional requirement specifications or in the context of both. System testing
tests the design and behavior of the system and also the expectations of the
customer. It is performed to test the system beyond the bounds mentioned in the
software requirements specification (SRS). System testing is basically performed
by a testing team that is independent of the development team, which helps to test
the quality of the system impartially. It includes both functional and non-functional
testing. System testing is a black-box technique, and it is performed after
the integration testing and before the acceptance testing.
System Testing Process: System Testing is performed in the following steps:
Test Environment Setup: Create testing environment for the better quality testing.
Create Test Case: Generate test case for the testing process.
Create Test Data: Generate the data that is to be tested.
Execute Test Case: After the generation of the test case and the test data, test
cases are executed.
Defect Reporting: Defects detected in the system are reported.
Regression Testing: It is carried out to test the side effects of the testing process.
Log Defects: Defects are logged and fixed in this step.
Retest: If a test is not successful, the test is performed again.
Fig:4.5 System Testing
Types of System Testing:
Performance Testing: Performance Testing is a type of software testing that is
carried out to test the speed, scalability, stability and reliability of the software
product or application.
Load Testing: Load Testing is a type of software Testing which is carried out to
determine the behavior of a system or software product under extreme load.
Stress Testing: Stress Testing is a type of software testing performed to check the
robustness of the system under the varying loads.
Scalability Testing: Scalability Testing is a type of software testing which is carried
out to check the performance of a software application or system in terms of its
capability to scale the user request load up or down.
Tools used for System Testing :
JMeter
Galen Framework
Selenium
Advantages of System Testing :
The testers do not require more knowledge of programming to carry out this
testing.
It will test the entire product or software so that we will easily detect the errors or
defects which cannot be identified during the unit testing and integration testing.
The testing environment is similar to that of the real time production or business
environment.
It checks the entire functionality of the system with different test scripts and also it
covers the technical and business requirements of clients.
After this testing, almost all possible bugs or errors will have been covered, and
hence the development team can confidently go ahead with acceptance testing.
Disadvantages of System Testing :
This testing is a more time-consuming process than other testing techniques, since
it checks the entire product or software.
The cost of the testing is high, since it covers the testing of the entire software.
It needs a good debugging tool; otherwise hidden errors will not be found.
4.6 The art of debugging
In the context of software engineering, debugging is the process of fixing a bug in
the software. In other words, it refers to identifying, analyzing, and removing
errors. This activity begins after the software fails to execute properly and
concludes by solving the problem and successfully testing the software. It is
considered to be an extremely complex and tedious task because errors need to be
resolved at all stages of debugging.
Debugging Process: Steps involved in debugging are:
Problem identification and report preparation.
Assigning the report to a software engineer to verify that the defect is
genuine.
Defect Analysis using modeling, documentation, finding and testing candidate flaws,
etc.
Defect Resolution by making required changes to the system.
Validation of corrections.
The debugging process will always have one of two outcomes :
1. The cause will be found and corrected.
2. The cause will not be found.
Later, the person performing debugging may suspect a cause, design a test case to
help validate that suspicion and work toward error correction in an iterative fashion.
During debugging, we encounter errors that range from mildly annoying to
catastrophic. As the consequences of an error increase, the amount of pressure to
find the cause also increases. Pressure sometimes forces a software
developer to fix one error and at the same time introduce two more.
Debugging Approaches/Strategies:
Brute Force: Study the system for a longer duration in order to understand it. This
helps the debugger construct different representations of the system being
debugged, depending on the need. A study of the system is also done actively to
find recent changes made to the software.
Backtracking: Backward analysis of the problem which involves tracing the program
backward from the location of the failure message in order to identify the region of
faulty code. A detailed study of the region is conducted to find the cause of defects.
Forward analysis of the program involves tracing the program forwards using
breakpoints or print statements at different points in the program and studying the
results. The region where the wrong outputs are obtained is the region that needs
to be focused on to find the defect.
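For instance, forward analysis can be as simple as probing intermediate state with print statements or Python's built-in breakpoint(); the function below is a hypothetical example:

def average(values):
    total = 0
    for v in values:
        total += v
    # Probe point for forward analysis: if total or len(values) is already wrong
    # here, the fault lies above this line; otherwise it lies below.
    print(f"DEBUG: total={total}, count={len(values)}")
    # breakpoint()  # alternatively, drop into the interactive debugger (pdb) here
    return total / len(values)

print(average([1, 2, 3]))  # 2.0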
Using past experience: the software is debugged by drawing on experience with
problems similar in nature. The success of this approach depends on the expertise
of the debugger.
Cause elimination: It introduces the concept of binary partitioning. Data related to
the error occurrence are organized to isolate potential causes, as in the sketch
below.
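A minimal sketch of binary partitioning over a failure-inducing input (the failure predicate is hypothetical, and the sketch assumes the failure persists in one of the two halves):

def fails(data):
    return 13 in data  # hypothetical failure predicate: the value 13 triggers the defect

def isolate_cause(data):
    """Repeatedly halve the failing input, keeping whichever half still fails."""
    while len(data) > 1:
        mid = len(data) // 2
        left = data[:mid]
        data = left if fails(left) else data[mid:]
    return data

print(isolate_cause(list(range(20))))  # [13]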
Debugging Tools:
Debugging tool is a computer program that is used to test and debug other
programs. A lot of public domain software like gdb and dbx are available for
debugging. They offer console-based command-line interfaces. Examples of
automated debugging tools include code based tracers, profilers, interpreters, etc.
Some of the widely used debuggers are:
Radare2
WinDbg
Valgrind
4.7 software testing fundamentals
Software testing can be stated as the process of verifying and validating whether a
software or application is bug-free, meets the technical requirements as guided by
its design and development, and meets the user requirements effectively and
efficiently by handling all the exceptional and boundary cases.
The process of software testing aims not only at finding faults in the existing
software but also at finding measures to improve the software in terms of
efficiency, accuracy, and usability. It mainly aims at measuring the specification,
functionality, and performance of a software program or application.
Software testing can be divided into two steps:
1. Verification: It refers to the set of tasks that ensure that the software correctly
implements a specific function.
2. Validation: It refers to a different set of tasks that ensure that the software that
has been built is traceable to customer requirements.
Verification: “Are we building the product right?”
Validation: “Are we building the right product?”
What are different types of software testing?
Software Testing can be broadly classified into two types:
1. Manual Testing: Manual testing includes testing software manually, i.e., without
using any automation tool or any script. In this type, the tester takes over the role
of an end-user and tests the software to identify any unexpected behavior or bug.
There are different stages for manual testing such as unit testing, integration
testing, system testing, and user acceptance testing.
Testers use test plans, test cases, or test scenarios to test software to ensure the
completeness of testing. Manual testing also includes exploratory testing, as testers
explore the software to identify errors in it.
2. Automation Testing: Automation testing, which is also known as Test
Automation, is when the tester writes scripts and uses another software to test the
product. This process involves the automation of a manual process. Automation
testing is used to quickly and repeatedly re-run the test scenarios that were
performed manually in manual testing.
Apart from regression testing, automation testing is also used to test the
application from a load, performance, and stress point of view. It increases the test
coverage, improves accuracy, and saves time and money when compared to
manual testing.
What are the different types of Software Testing Techniques?
Software testing techniques can be majorly classified into two categories:
1. Black Box Testing: The technique of testing in which the tester doesn’t
have access to the source code of the software and is conducted at the
software interface without any concern with the internal logical
structure of the software is known as black-box testing.
2. White-Box Testing: The technique of testing in which the tester is
aware of the internal workings of the product, has access to its source
code, and is conducted by making sure that all internal operations are
performed according to the specifications is known as white box
testing.
Software level testing can be majorly classified into 4 levels:
1. Unit Testing: A level of the software testing process where individual
units/components of a software/system are tested. The purpose is to validate
that each unit of the software performs as designed.
2. Integration Testing: A level of the software testing process where
individual units are combined and tested as a group. The purpose of this level
of testing is to expose faults in the interaction between integrated units.
3. System Testing: A level of the software testing process where a complete,
integrated system/software is tested. The purpose of this test is to evaluate
the system’s compliance with the specified requirements.
4. Acceptance Testing: A level of the software testing process where a
system is tested for acceptability. The purpose of this test is to evaluate the
system’s compliance with the business requirements and assess whether it is
acceptable for delivery.
Fig:4.7 software testing
4.8 White box testing
White box testing techniques analyze the internal structures of the software: the
data structures used, the internal design, the code structure, and the working of
the software, rather than just its functionality as in black box testing. It is also
called glass box testing, clear box testing, or structural testing.
Working process of white box testing:
Input: Requirements, Functional specifications, design documents, source code.
Processing: Performing risk analysis to guide the entire process.
Proper test planning: Designing test cases so as to cover the entire code; executing
rinse-and-repeat until error-free software is reached. The results are also
communicated.
Output: Preparing the final report of the entire testing process.
Testing techniques:
Statement coverage: In this technique, the aim is to traverse all statements at
least once. Hence, each line of code is tested. In the case of a flowchart, every node
must be traversed at least once. Since all lines of code are covered, it helps in
pointing out faulty code.
Fig:4.8.1.Testing mechanism
Branch Coverage: In this technique, test cases are designed so that each branch
from every decision point is traversed at least once. In a flowchart, all edges must
be traversed at least once. (A small sketch contrasting statement and branch
coverage follows the figure below.)
Fig:4.8.2 Testing mechanism
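As a small illustration of the difference (the absolute function is hypothetical; tools such as coverage.py can report these figures automatically), one test can execute every statement yet leave a branch untested:

def absolute(n: int) -> int:
    result = n
    if n < 0:        # decision point with a true branch and a false branch
        result = -n
    return result

# absolute(-5) alone executes every statement (statement coverage satisfied),
# but the false branch of the decision is never taken. Branch coverage needs both:
assert absolute(-5) == 5   # true branch: the if-body runs
assert absolute(3) == 3    # false branch: the if-body is skipped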
White box testing involves the testing of the software code for the following:
Internal security holes
Broken or poorly structured paths in the coding processes
The flow of specific inputs through the code
Expected output
The functionality of conditional loops
Testing of each statement, object, and function on an individual basis
White Box Testing Techniques
A major White box testing technique is Code Coverage analysis. Code Coverage
analysis eliminates gaps in a Test Case suite. It identifies areas of a program that
are not exercised by a set of test cases. Once gaps are identified, you create test
cases to verify untested parts of the code, thereby increasing the quality of the
software product.
There are automated tools available to perform code coverage analysis. Below are
a few coverage analysis techniques a white-box tester can use:
Statement Coverage:- This technique requires every possible statement in the code
to be tested at least once during the testing process of software engineering.
Branch Coverage – This technique checks every possible path (if-else and other
conditional loops) of a software application.
Apart from the above, there are numerous coverage types such as Condition
Coverage, Multiple Condition Coverage, Path Coverage, Function Coverage, etc.
Each technique has its own merits and attempts to test (cover) all parts of the
software code. Using statement and branch coverage, you can generally attain
80-90% code coverage, which is usually considered sufficient.
Following are important WhiteBox Testing Techniques:
Statement Coverage
Decision Coverage
Branch Coverage
Condition Coverage
Multiple Condition Coverage
Finite State Machine Coverage
Path Coverage
Control flow testing
Data flow testing
Types of White Box Testing
White box testing encompasses several testing types used to evaluate the usability
of an application, block of code, or specific software package. They are listed below:
Unit Testing: It is often the first type of testing done on an application. Unit
testing is performed on each unit or block of code as it is developed. Unit
testing is essentially done by the programmer. As a software developer, you
develop a few lines of code, a single function or an object, and test it to make
sure it works before continuing. Unit testing helps identify a majority of bugs
early in the software development lifecycle. Bugs identified in this stage are
cheaper and easier to fix.
Testing for Memory Leaks: Memory leaks are leading causes of slower
running applications. A QA specialist who is experienced at detecting memory
leaks is essential in cases where you have a slow running software
application.
Apart from the above, a few testing types are part of both black box and white box
testing. They are listed below:
White Box Penetration Testing: In this testing, the tester/developer has full
information of the application’s source code, detailed network information, IP
addresses involved and all server information the application runs on. The
aim is to attack the code from several angles to expose security threats.
White Box Mutation Testing: Mutation testing is often used to discover the
best coding techniques to use for expanding a software solution.
White Box Testing Tools
Below is a list of top white box testing tools.
Parasoft Jtest
EclEmma
NUnit
PyUnit
HTMLUnit
CppUnit
Advantages of White Box Testing
Code optimization by finding hidden errors.
White box tests cases can be easily automated.
Testing is more thorough as all code paths are usually covered.
Testing can start early in SDLC even if GUI is not available.
Disadvantages of WhiteBox Testing
White box testing can be quite complex and expensive.
Developers, who usually execute white box test cases, often detest it; white
box testing by developers that is not detailed can lead to production errors.
White box testing requires professional resources, with a detailed
understanding of programming and implementation.
White-box testing is time-consuming; bigger applications take considerable
time to test fully.
4.9 Black box Testing
Black Box Testing is a software testing method in which the functionalities of
software applications are tested without having knowledge of internal code
structure, implementation details and internal paths. Black Box Testing mainly
focuses on input and output of software applications and it is entirely based on
software requirements and specifications. It is also known as Behavioral Testing.
Fig:4.9 black box model
The following generic steps are followed to carry out any type of Black Box Testing:
Initially, the requirements and specifications of the system are examined.
Tester chooses valid inputs (positive test scenario) to check whether SUT
processes them correctly. Also, some invalid inputs (negative test scenario)
are chosen to verify that the SUT is able to detect them.
Tester determines expected outputs for all those inputs.
Software tester constructs test cases with the selected inputs.
The test cases are executed.
Software tester compares the actual outputs with the expected outputs.
Defects if any are fixed and re-tested.
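These steps can be sketched as a small table-driven check (the squaring function standing in for the SUT, and its assumed specification, are hypothetical):

def sut(x):
    return x * x  # system under test: only its specification is known to the tester

# The tester selects inputs and determines the expected output for each.
test_cases = [
    (3, 9),    # positive scenario
    (0, 0),    # boundary value
    (-2, 4),   # negative input, still valid per the assumed specification
]

# Execute the cases and compare actual outputs with expected outputs.
for given, expected in test_cases:
    actual = sut(given)
    print(f"input={given} expected={expected} actual={actual}",
          "PASS" if actual == expected else "FAIL")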
Types of Black Box Testing
There are many types of Black Box Testing but the following are the
prominent ones –
Functional testing – This black box testing type is related to the functional
requirements of a system; it is done by software testers.
Non-functional testing – This type of black box testing is not related to
testing of specific functionality, but non-functional requirements such as
performance, scalability, usability.
Regression testing – Regression Testing is done after code fixes, upgrades or
any other system maintenance to check that the new code has not affected the
existing code.
Tools used for Black Box Testing:
Tools used for Black box testing largely depend on the type of black box
testing you are doing.
For Functional/ Regression Tests you can use – QTP, Selenium
For Non-Functional Tests, you can use – LoadRunner, Jmeter
Table:4.9 Differences between black box and white box testing
Black Box Testing Techniques
Following are the prominent Test Strategy amongst the many used in Black
box Testing
Equivalence Class Testing: It is used to minimize the number of possible test
cases to an optimum level while maintaining reasonable test coverage.
Boundary Value Testing: Boundary value testing is focused on the values at
boundaries. This technique determines whether a certain range of values is
acceptable to the system or not. It is very useful in reducing the number of
test cases. It is most suitable for systems where an input lies within certain
ranges. (A sketch of both techniques follows this list.)
Decision Table Testing: A decision table puts causes and their effects in a
matrix. There is a unique combination in each column.
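A minimal sketch of equivalence class and boundary value testing (the accepts function and its assumed 18-to-65 specification are hypothetical):

def accepts(age: int) -> bool:
    return 18 <= age <= 65  # assumed specification: ages 18..65 inclusive are valid

# Equivalence classes: one representative value per class instead of every value.
assert not accepts(10)   # class: below the valid range
assert accepts(40)       # class: within the valid range
assert not accepts(70)   # class: above the valid range

# Boundary values: defects cluster at the edges of the ranges.
for age, expected in [(17, False), (18, True), (65, True), (66, False)]:
    assert accepts(age) == expected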
4.10 Object oriented testing methods
Software typically undergoes many levels of testing, from unit testing to system or
acceptance testing. Typically, in unit testing, small "units", or modules of the
software, are tested separately, with the focus on testing the code of that module.
In higher-order testing (e.g., acceptance testing), the entire system (or a
subsystem) is tested with the focus on testing the functionality or external behavior
of the system.
As information systems are becoming more complex, the object-oriented paradigm
is gaining popularity because of its benefits in analysis, design, and coding.
Conventional testing methods cannot be applied for testing classes because of
problems involved in testing classes, abstract classes, inheritance, dynamic binding,
message passing, polymorphism, concurrency, etc.
Testing classes is a fundamentally different problem than testing functions. A
function (or a procedure) has a clearly defined input-output behavior, while a class
does not have an input-output behavior specification. We can test a method of a
class using approaches for testing functions, but we cannot test the class using
these approaches.
According to Davis the dependencies occurring in conventional systems are:
Data dependencies between variables
Calling dependencies between modules
Functional dependencies between a module and the variable it computes
Definitional dependencies between a variable and its types.
But in Object-Oriented systems there are following additional dependencies:
Class to class dependencies
Class to method dependencies
Class to message dependencies
Class to variable dependencies
Method to variable dependencies
Method to message dependencies
Method to method dependencies
Issues in Testing Classes:
Additional testing techniques are, therefore, required to test these dependencies.
Another issue of interest is that it is not possible to test a class dynamically; only
its instances, i.e., objects, can be tested. Similarly, the concept of inheritance opens
various issues: e.g., if changes are made to a parent class or superclass in a larger
system, it will be difficult to test subclasses individually and isolate the error to one
class.
In object-oriented programs, control flow is characterized by message passing
among objects, and the control flow switches from one object to another by inter-
object communication. Consequently, there is no control flow within a class as
there is within functions. This lack of sequential control flow within a class requires different
approaches for testing. Furthermore, in a function, arguments passed to the
function with global data determine the path of execution within the procedure. But,
in an object, the state associated with the object also influences the path of
execution, and methods of a class can communicate among themselves through
this state because this state is persistent across invocations of methods. Hence, for
testing objects, the state of an object has to play an important role.
The techniques of object-oriented testing (fault-based testing, class testing based
on method testing, random testing, partition testing, and scenario-based testing)
are the same as those described in Section 4.3.
MODULE-V Risk, Quality Management and Reengineering
5.1 Risk and quality Management
Risk Management is the system of identifying, addressing, and eliminating these
problems before they can damage the project.
We need to differentiate risks, as potential issues, from the current problems of the
project.
A software project can be concerned with a large variety of risks. In order to be
adept to systematically identify the significant risks which might affect a software
project, it is essential to classify risks into different classes. The project manager
can then check which risks from each class are relevant to the project.
There are three main classifications of risks which can affect a software project:
Project risks
Technical risks
Business risks
1. Project risks: Project risks concern different forms of budgetary, schedule,
personnel, resource, and customer-related problems. A vital project risk is
schedule slippage. Since software is intangible, it is very tough to monitor
and control a software project; it is very tough to control something which
cannot be identified. For any manufacturing program, such as the
manufacturing of cars, the plan executive can recognize the product taking
shape.
2. Technical risks: Technical risks concern potential method, implementation,
interfacing, testing, and maintenance issues. They also include ambiguous
specification, incomplete specification, changing specification, technical uncertainty,
and technical obsolescence. Most technical risks appear due to the development
team's insufficient knowledge about the project.
3. Business risks: This type of risk includes the risks of building an excellent product
that no one needs, losing budgetary or personnel commitments, etc.
Other risk categories
1. Known risks: Those risks that can be uncovered after careful assessment of the
project program, the business and technical environment in which the plan is being
developed, and more reliable data sources (e.g., unrealistic delivery date)
2. Predictable risks: Those risks that are hypothesized from previous project
experience (e.g., past turnover)
3. Unpredictable risks: Those risks that can and do occur, but are extremely tough
to identify in advance.
Principle of Risk Management
Global Perspective: In this, we review the bigger system description, design, and
implementation. We look at the chance and the impact the risk is going to have.
Take a forward-looking view: Consider threats which may appear in the future
and create plans for handling future events.
Open Communication: This is to allow the free flow of communications between the
client and the team members so that they have certainty about the risks.
Integrated management: In this method risk management is made an integral part
of project management.
Continuous process: In this phase, the risks are tracked continuously throughout
the risk management paradigm.
5.2 Reactive and proactive risk strategies
Root Cause Analysis (RCA) is one of the best methods to identify the main cause or
root cause of problems or events in a very systematic way. RCA is based on the
idea that, for effective management, we need to find ways to prevent problems
from arising or occurring.
Everyone needs to understand that, to solve or eliminate any problem, it is
essential to go to the root cause of the problem and then eliminate it, so that
its recurrence can be reduced or controlled. For organizations that want to improve
and grow continuously, identifying the root cause is essential, even though it is
tough to do. RCA can also be used to modify or change core processes and issues
in a way that prevents future problems.
Reactive and Proactive RCA :
The main question that arises is whether RCA is reactive or proactive. Some people
think that RCA is only required to solve problems or failures that have already
occurred, but that is not true. RCA can be both reactive and proactive, as described
below.
1. Reactive RCA :
The main question that arises in reactive RCA is "What went wrong?". Before the
root cause of a failure or defect can be investigated or identified, the failure must
already have occurred. One can identify the root cause and perform the analysis
only after a problem or failure has occurred and caused the system to malfunction.
Reactive RCA is a root cause analysis that is performed after the occurrence of a
failure or defect.
It is implemented to control and reduce the impact and severity of a defect that
has occurred. It is also known as reactive risk management. It reacts quickly as
soon as a problem occurs, by simply treating the symptoms. RCA is generally
reactive, but it has the potential to be proactive: RCA is reactive initially, and it can
only be proactive if one addresses and identifies the small things that can cause a
problem as well as exposes the hidden causes of the problem.
Advantages :
Helps one to prioritize tasks according to their severity and then resolve them.
Increases teamwork and the team's knowledge.
Disadvantages :
Sometimes, repairing equipment after a failure can be more costly than preventing
the failure from occurring.
Failed equipment can cause greater damage to the system and interrupt production
activities.
2. Proactive RCA :
The main question that arises in proactive RCA is "What could go wrong?". RCA can
also be used proactively to mitigate failure or risk. The main importance of RCA can
be seen when it is applied to events that have not occurred yet. Proactive RCA is a
root cause analysis that is performed before any occurrence of failure or defect. It
is implemented to prevent defects from occurring. As both reactive and proactive
RCA are important, one should move from reactive to proactive RCA.
It is better to prevent issues from occurring than to correct them after they occur;
in simple words, prevention is better than correction. Here, preventive action is
considered proactive and corrective action is considered reactive. Proactive RCA is
also known as proactive risk management. It identifies the root cause of a problem
to keep it from recurring. With the help of proactive RCA, we can identify the main
root cause that leads to the occurrence of a problem, failure, or defect. Knowing
this, we can take various measures and implement actions to prevent these causes
from occurring.
Advantages :
Future chances of failure occurrence can be minimized.
Reduces the overall cost required to resolve failures, by simply preventing failures
from occurring.
Increases overall productivity by minimizing the chance of interruption due to failure.
Disadvantages :
Sometimes, preventing equipment failure can be more costly than resolving the
failure after it occurs.
Many resources and tools are required to prevent failures from occurring, which
can affect the overall cost.
It requires highly skilled technicians to perform maintenance tasks.
5.3 Software risks
Software development is a multi-stage approach of design, documentation,
programming, prototyping, testing, etc., which follows a Software Development Life
Cycle (SDLC) process. Different tasks are performed based on the SDLC framework
during software development. Developing and maintaining a software project
involves risk at each step.
Most enterprises rely on software; the risks associated with any phase need to be
identified and managed (or solved), otherwise they create unforeseen challenges
for the business. Before analyzing the different risks involved in software
development, let us first understand what risk actually is and why risk management
is important for a business.
Risk and importance of risk management :
A risk is an uncertain event associated with the future; it has a probability of
occurrence, it may or may not occur, and if it occurs it brings loss to the project.
Risk identification and management are very important tasks during software
project development, because the success or failure of any software project
depends on them.
Various Kinds of Risks in Software Development :
Schedule Risk :
Schedule-related risks refer to time-related risks or project-delivery-related
planning risks. A wrong schedule affects project development and delivery.
These risks mainly indicate running behind time; as a result, project
development does not progress on schedule, which directly impacts the delivery
of the project. Finally, if schedule risks are not managed properly, they give rise
to project failure and ultimately affect the organization's economy very badly.
Some reasons for Schedule risks –
Time is not estimated perfectly
Improper resource allocation
Improper tracking of resources like systems, skills, staff, etc.
Frequent project scope expansion
Failure in function identification and its completion
Budget Risk :
Budget-related risks refer to monetary risks; they mainly occur due to budget
overruns. The financial aspect of the project should always be managed as decided,
but if it is mismanaged, budget concerns will arise, giving rise to budget risks. So
proper finance distribution and management are required for the success of the
project; otherwise it may lead to project failure.
Some reasons for Budget risks –
Wrong/Improper budget estimation
Unexpected Project Scope expansion
Mismanagement in budget handling
Cost overruns
Improper tracking of Budget
Operational Risks :
Operational risks refer to procedural risks, meaning the risks which occur in
day-to-day operational activities during project development due to improper
process implementation or to external operational events.
Some reasons for Operational risks –
Insufficient resources
Conflict between tasks and employees
Improper management of tasks
No proper planning about project
Less number of skilled people
Lack of communication and cooperation
Lack of clarity in roles and responsibilities
Insufficient training
Technical Risks :
Technical risks refer to functional or performance risks, meaning they are mainly
associated with the functionality of the product or with the performance of the
software product.
Some reasons for Technical risks –
Frequent changes in requirement
Less use of future technologies
Less number of skilled employee
High complexity in implementation
Improper integration of modules
Programmatic Risks :
Programmatic risks refer to external or otherwise unavoidable risks. These are
external risks which are unavoidable in nature; they come from outside and are
beyond the control of the program.
Some reasons for Programmatic risks –
Rapid development of market
Running out of fund / Limited fund for project development
Changes in Government rules/policy
Loss of contracts due to any reason
5.4 Risk mitigation monitoring and management
RMMM Plan :
A risk management technique is usually seen in the software project plan. It can
be captured in a Risk Mitigation, Monitoring, and Management (RMMM) plan. In this
plan, all work is done as part of risk analysis. The project manager generally uses
this RMMM plan as part of the overall project plan.
In some software teams, each risk is documented with the help of a Risk
Information Sheet (RIS). The RIS is maintained using a database system for easier
management of the information, i.e., creation, priority ordering, searching, and
other analysis. After documentation of the RMMM plan and the start of the project,
the risk mitigation and monitoring steps begin.
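A minimal sketch of such a risk record (the fields shown are an illustrative subset, not a fixed standard):

from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskInformationSheet:
    """One RIS entry; a database of these supports ordering, searching, and analysis."""
    risk_id: str
    description: str
    probability: float      # estimated probability of occurrence, 0..1
    impact: int             # e.g., 1 (negligible) .. 4 (catastrophic)
    mitigation: str = ""
    monitoring: str = ""
    contingency: str = ""
    raised_on: date = field(default_factory=date.today)

    @property
    def exposure(self) -> float:
        return self.probability * self.impact  # common prioritization aid

ris = RiskInformationSheet("R-01", "High staff turnover", probability=0.6, impact=3,
                           mitigation="cross-train team; document all work products")
print(ris.risk_id, round(ris.exposure, 2))  # R-01 1.8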
Risk Mitigation :
It is an activity used to avoid problems (Risk Avoidance).
Steps for mitigating the risks as follows.
Finding out the risk.
Removing causes that are the reason for risk creation.
Controlling the corresponding documents from time to time.
Conducting timely reviews to speed up the work.
Risk Monitoring :
It is an activity used for project tracking.
It has the following primary objectives as follows.
To check if predicted risks occur or not.
To ensure proper application of risk aversion steps defined for risk.
To collect data for future risk analysis.
To attribute which problems are caused by which risks throughout the project.
Risk Management and planning :
It assumes that the mitigation activity has failed and the risk has become a reality.
This task is done by the project manager when the risk becomes a reality and
causes severe problems. If the project manager has used mitigation effectively to
remove risks, managing the remaining risks is easier. The plan records the
response that the manager will take for each risk. The main output of risk
management planning is the risk register, which describes and focuses on the
predicted threats to a software project.
Example:
Let us understand RMMM with the help of an example of high staff turnover.
Risk Mitigation:
To mitigate this risk, project management must develop a strategy for
reducing turnover. The possible steps to be taken are:
Meet the current staff to determine causes for turnover (e.g., poor working
conditions, low pay, competitive job market).
Mitigate those causes that are under our control before the project starts.
Once the project commences, assume turnover will occur and develop
techniques to ensure continuity when people leave.
Organize project teams so that information about each development activity
is widely dispersed.
Define documentation standards and establish mechanisms to ensure that
documents are developed in a timely manner.
Assign a backup staff member for every critical technologist.
Risk Monitoring:
As the project proceeds, risk monitoring activities commence. The project manager
monitors factors that may provide an indication of whether the risk is becoming
more or less likely. In the case of high staff turnover, the following factors can be
monitored:
General attitude of team members based on project pressures.
Interpersonal relationships among team members.
Potential problems with compensation and benefits.
The availability of jobs within the company and outside it.
Risk Management:
Risk management and contingency planning assumes that mitigation efforts have
failed and that the risk has become a reality. Continuing the example, the project is
well underway, and a number of people announce that they will be leaving. If the
mitigation strategy has been followed, backup is available, information is
documented, and knowledge has been dispersed across the team. In addition, the
project manager may temporarily refocus resources (and readjust the project
schedule) to those functions that are fully staffed, enabling newcomers who must
be added to the team to "get up to speed".
Drawbacks of RMMM:
It incurs additional project costs.
It takes additional time.
For larger projects, implementing an RMMM may itself turn out to be another
tedious project.
RMMM does not guarantee a risk-free project; in fact, risks may also come up
after the project is delivered.
5.5 software quality factors
The various factors which influence the software are termed software factors.
They can be broadly divided into two categories. The first category consists of
factors that can be measured directly, such as the number of logical errors; the
second category consists of factors which can be measured only indirectly, for
example, maintainability. Each of the factors is to be measured to check for
content and quality control.
Several models of software quality factors and their categorization have been
suggested over the years. The classic model of software quality factors, suggested
by McCall, consists of 11 factors (McCall et al., 1977). Similarly, models consisting
of 12 to 15 factors, were suggested by Deutsch and Willis (1988) and by Evans and
Marciniak (1987).
All these models do not differ substantially from McCall’s model. The McCall factor
model provides a practical, up-to-date method for classifying software requirements
(Pressman, 2000).
McCall’s Factor Model
This model classifies all software requirements into 11 software quality factors. The
11 factors are grouped into three categories – product operation, product revision,
and product transition factors.
Product operation factors − Correctness, Reliability, Efficiency,
Integrity, Usability.
Product revision factors − Maintainability, Flexibility, Testability.
Product transition factors − Portability, Reusability, Interoperability.
Product Operation Software Quality Factors
According to McCall’s model, product operation category includes five software
quality factors, which deal with the requirements that directly affect the daily
operation of the software. They are as follows −
Correctness
These requirements deal with the correctness of the output of the software system.
They include −
Output mission
The required accuracy of output that can be negatively affected by
inaccurate data or inaccurate calculations.
The completeness of the output information, which can be affected by
incomplete data.
The up-to-dateness of the information defined as the time between the
event and the response by the software system.
The availability of the information.
The standards for coding and documenting the software system.
Reliability
Reliability requirements deal with service failure. They determine the maximum
allowed failure rate of the software system, and can refer to the entire system or to
one or more of its separate functions.
Efficiency
It deals with the hardware resources needed to perform the different functions of
the software system. It includes processing capabilities (given in MHz), its storage
capacity (given in MB or GB) and the data communication capability (given in MBPS
or GBPS).
It also deals with the time between recharging of the system’s portable units, such
as, information system units located in portable computers, or meteorological units
placed outdoors.
Integrity
This factor deals with software system security, that is, preventing access by
unauthorized persons and distinguishing between the groups of people to be given
read as well as write permissions.
Usability
Usability requirements deal with the staff resources needed to train a new
employee and to operate the software system.
Product Revision Quality Factors
According to McCall’s model, three software quality factors are included in the
product revision category. These factors are as follows −
Maintainability
This factor considers the efforts that will be needed by users and maintenance
personnel to identify the reasons for software failures, to correct the failures, and to
verify the success of the corrections.
Flexibility
This factor deals with the capabilities and efforts required to support adaptive
maintenance activities of the software. These include adapting the current software
to additional circumstances and customers without changing the software. This
factor’s requirements also support perfective maintenance activities, such as
changes and additions to the software in order to improve its service and to adapt it
to changes in the firm’s technical or commercial environment.
Testability
Testability requirements deal with the testing of the software system as well as with
its operation. It includes predefined intermediate results, log files, and also the
automatic diagnostics performed by the software system prior to starting the
system, to find out whether all components of the system are in working order and
to obtain a report about the detected faults. Another type of these requirements
deals with automatic diagnostic checks applied by the maintenance technicians to
detect the causes of software failures.
Product Transition Software Quality Factor
According to McCall’s model, three software quality factors are included in the
product transition category that deals with the adaptation of software to other
environments and its interaction with other software systems. These factors are as
follows −
Portability
Portability requirements deal with the adaptation of a software system to other
environments consisting of different hardware, different operating systems, and so
forth. It should be possible to continue using the same basic software in diverse
situations.
Reusability
This factor deals with the use of software modules originally designed for one
project in a new software project currently being developed. They may also enable
future projects to make use of a given module or a group of modules of the
currently developed software. The reuse of software is expected to save
development resources, shorten the development period, and provide higher quality
modules.
Interoperability
Interoperability requirements focus on creating interfaces with other software
systems or with other equipment firmware. For example, the firmware of the
production machinery and testing equipment interfaces with the production control
software.
5.6 Defect Amplification model
Defect Amplification and Removal
Defect amplification model:
used to illustrate the generation and detection of errors during the preliminary
design, detail design, and coding steps of the software engineering process.
Fig:5.6.1 Development step
A box represents a software development step.
During the step, errors may be inadvertently generated.
Review may fail to uncover newly generated errors and errors from previous steps,
resulting in some number of errors that are passed through.
In some cases, errors passed through from previous steps are amplified
(amplification factor, x) by current work.
Box subdivisions represent each of these characteristics, along with the percent
efficiency for detecting errors, a function of the thoroughness of the review.
Fig: 5.6.2 validation Test
Hypothetical example of defect amplification for a software development process in
which no reviews are conducted.
Each test step uncovers and corrects 50 percent of all incoming errors without
introducing any new errors.
Ten preliminary design defects are amplified to 94 errors before testing
commences.
Twelve latent errors are released to the field.
Fig:5.6.3 Testing model
Same conditions except that design and code reviews are conducted as part of each
development step.
Ten initial preliminary design errors are amplified to 24 errors before testing
commences.
Only three latent errors exist.
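The arithmetic behind these figures is easy to reproduce. The following Python
sketch uses illustrative step parameters rather than the exact figures of the
hypothetical example: each step amplifies the incoming errors, adds newly
generated ones, and a review (if one is held) removes the fraction given by its
detection efficiency.

def next_step_errors(errors_in, amplification, new_errors, detection_efficiency):
    """Errors passed to the next step: incoming errors are amplified by
    current work, new errors are added, and a review removes the fraction
    given by its detection efficiency."""
    generated = errors_in * amplification + new_errors
    return generated * (1.0 - detection_efficiency)

def run_process(steps):
    errors = 0.0
    for name, amplification, new_errors, efficiency in steps:
        errors = next_step_errors(errors, amplification, new_errors, efficiency)
        print(f"{name}: {errors:.1f} errors passed through")
    return errors

# Illustrative parameters: (step name, amplification, new errors, review efficiency).
no_reviews   = [("preliminary design", 1.0, 10, 0.0),
                ("detail design",      1.5, 25, 0.0),
                ("code/unit test",     3.0, 25, 0.0)]
with_reviews = [("preliminary design", 1.0, 10, 0.7),
                ("detail design",      1.5, 25, 0.5),
                ("code/unit test",     3.0, 25, 0.6)]

run_process(no_reviews)    # errors grow quickly when nothing is reviewed
run_process(with_reviews)  # per-step reviews keep the count far lower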
5.7 Formal technical reviews (FTR)
Objectives of the FTR –
to uncover errors in function, logic, or implementation for any representation
of the software
to verify that the software under review meets its requirements
to ensure that the software has been represented according to predefined
standards
to achieve software that is developed in a uniform manner
to make projects more manageable.
The FTR also serves as a training ground and promotes backup and continuity,
because a number of people become familiar with parts of the software that they
may not have otherwise seen. FTR is a class of reviews that includes:
walkthroughs
inspections
round-robin reviews
other small group technical assessments of software.
Each FTR is conducted as a meeting that is planned, controlled, and attended.
The Review Meeting
Review meeting constraints:
Between three and five people should be involved in the review.
Advance preparation should require no more than two hours of work for each
person.
The duration of the review meeting should be less than two hours.
NOTE: FTR focuses on a specific (and small) part of the overall software.
o Walkthroughs are conducted for each component or small group of
components.
o FTR focuses on a work product (e.g., a portion of a requirements
specification, a detailed component design, a source code listing for a
component).
o The individual who has developed the work product informs the project
leader that the work product is complete and that a review is required.
o The project leader contacts a review leader, who evaluates the product
for readiness, generates copies of product materials, and distributes
them to two or three reviewers for advance preparation.
o Each reviewer is expected to spend between one and two hours
reviewing the product, making notes, and otherwise becoming familiar
with the work. Concurrently, the review leader also reviews the
product and establishes an agenda for the review meeting, which is
typically scheduled for the next day.
o The review meeting is attended by the review leader, all reviewers, and
the producer.
o One reviewer takes on the role of the recorder; that is, the individual
who records (in writing) all important issues raised during the review.
1. The FTR begins with an introduction of the agenda and a brief introduction by
the producer.
2. The producer then proceeds to "walk through" the work product, explaining the
material, while reviewers raise issues based on their advance preparation.
3. When valid problems or errors are discovered, the recorder notes each.
4. At the end of the review, all attendees of the FTR must decide whether to:
(a) accept the product without further modification,
(b) reject the product due to severe errors (once corrected, another review
must be performed), or
(c) accept the product provisionally (minor errors have been encountered and
must be corrected, but no additional review will be required).
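As a small illustration of the recorder's role and these three outcomes, here is
a hypothetical Python sketch; the issue log, the severity flag, and the decision
rule are illustrative assumptions, since FTR itself leaves the actual decision to
the attendees.

from dataclasses import dataclass, field
from enum import Enum

class Decision(Enum):
    ACCEPT = "accept without further modification"
    ACCEPT_PROVISIONALLY = "accept provisionally; minor errors must be corrected"
    REJECT = "reject due to severe errors; re-review after correction"

@dataclass
class ReviewLog:
    work_product: str
    issues: list = field(default_factory=list)   # (description, is_severe) pairs

    def record(self, description, is_severe=False):
        """The recorder notes each valid problem or error raised."""
        self.issues.append((description, is_severe))

    def decision(self):
        if any(is_severe for _, is_severe in self.issues):
            return Decision.REJECT
        return Decision.ACCEPT_PROVISIONALLY if self.issues else Decision.ACCEPT

log = ReviewLog("detailed design of the payment component")
log.record("module naming deviates from the project standard")  # minor issue
print(log.decision().value)  # accept provisionally; minor errors must be corrected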
5.8 Software quality assurance
Software Quality Assurance (SQA) is simply a way to assure quality in the software.
It is the set of activities which ensure processes, procedures as well as standards
are suitable for the project and implemented correctly.
Software Quality Assurance is a process which works parallel to development of
software. It focuses on improving the process of development of software so that
problems can be prevented before they become a major issue. Software Quality
Assurance is a kind of Umbrella activity that is applied throughout the software
process.
Software Quality Assurance has:
A quality management approach
Formal technical reviews
Multi testing strategy
Effective software engineering technology
Measurement and reporting mechanism
Major Software Quality Assurance Activities:
SQA Management Plan:
Make a plan for how you will carry out SQA throughout the project. Think about
which set of software engineering activities is best for the project, and check
the skill level of the SQA team.
Set The Check Points:
The SQA team should set checkpoints and evaluate the performance of the project
on the basis of the data collected at the different checkpoints.
Multi testing Strategy:
Do not depend on a single testing approach. When a variety of testing approaches
is available, use them.
Measure Change Impact:
A change made to correct an error sometimes reintroduces more errors. Keep a
measure of the impact of each change on the project, and check the compatibility
of every fix with the whole project.
Manage Good Relations:
In the working environment, managing good relations with the other teams involved
in the project development is mandatory. Bad relations between the SQA team and
the programming team will directly and badly impact the project. Don't play
politics.
Benefits of Software Quality Assurance (SQA):
SQA produces high-quality software.
A high-quality application saves time and cost.
SQA improves reliability.
SQA reduces the need for maintenance over long periods.
High-quality commercial software increases the market share of the company.
SQA improves the process of creating software.
SQA improves the quality of the software.
Disadvantage of SQA:
There are a number of disadvantages of quality assurance, such as the cost of
adding more resources and employing additional workers to help maintain quality.
5.9 Software reliability
Software reliability means operational reliability. It is described as the ability
of a system or component to perform its required functions under stated conditions
for a specific period of time.
Software reliability is also defined as the probability that a software system fulfills
its assigned task in a given environment for a predefined number of input cases,
assuming that the hardware and the input are free of error.
Software reliability is an essential facet of software quality, together with
functionality, usability, performance, serviceability, capability, installability,
maintainability, and documentation. Software reliability is hard to achieve
because the complexity of software tends to be high. While any system with a high
degree of complexity, including software, will be hard to bring to a certain level
of reliability, system developers tend to push complexity into the software layer,
with the speedy growth of system size and the ease of doing so by upgrading the
software.
For example, large next-generation aircraft will have over 1 million source lines of
software on-board; next-generation air traffic control systems will contain between
one and two million lines; the upcoming International Space Station will have over
two million lines on-board and over 10 million lines of ground support software;
several significant life-critical defense systems will have over 5 million source lines
of software. While the complexity of software is inversely associated with software
reliability, it is directly related to other vital factors in software quality, especially
functionality, capability, etc.
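The "predefined number of input cases" in the definition above can be turned into
a simple estimator: run the system on n representative input cases and take the
fraction of successful runs as the reliability. The Python sketch below is a
minimal illustration; the demo_system function and its test cases are hypothetical,
and real reliability models are considerably more sophisticated.

def estimate_reliability(system_under_test, input_cases):
    """Estimate reliability as 1 - failures/runs over predefined input cases."""
    failures = 0
    for case in input_cases:
        try:
            system_under_test(case)
        except Exception:
            failures += 1
    return 1.0 - failures / len(input_cases)

def demo_system(x):
    return 100 / x              # fails when x == 0

cases = list(range(-5, 5))      # ten predefined input cases; one triggers a fault
print(estimate_reliability(demo_system, cases))   # 0.9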
5.10 Reengineering
Software re-engineering is a process of software development which is done to
improve the maintainability of a software system. Re-engineering is the
examination and alteration of a system to reconstitute it in a new form. This
process encompasses a combination of sub-processes such as reverse engineering,
forward engineering, and restructuring.
Re-engineering is the reorganizing and modifying existing software systems to
make them more maintainable.
Objectives of Re-engineering:
To describe a cost-effective option for system evolution.
To describe the activities involved in the software maintenance process.
To distinguish between software and data re-engineering and to explain the
problems of data re-engineering.
Steps involved in Re-engineering:
Inventory Analysis
Document Restructuring
Reverse Engineering
Code Restructuring
Data Restructuring
Forward Engineering
Fig: 5.10 Re-engineering
Re-engineering Cost Factors:
The quality of the software to be re-engineered
The tool support available for re-engineering
The extent of the required data conversion
The availability of expert staff for re-engineering
Advantages of Re-engineering:
Reduced Risk: As the software already exists, the risk is less as compared
to new software development. Development problems, staffing problems and
specification problems are among the many problems that may arise in new
software development.
Reduced Cost: The cost of re-engineering is less than the costs of
developing new software.
Revelation of Business Rules: As a system is re-engineered, business rules
that are embedded in the system are rediscovered.
Better use of Existing Staff: Existing staff expertise can be maintained and
extended to accommodate new skills during re-engineering.
Disadvantages of Re-engineering:
There are practical limits to the extent of re-engineering.
Major architectural changes or radical reorganizing of the system's data
management have to be done manually.
A re-engineered system is not likely to be as maintainable as a new system
developed using modern software engineering methods.
5.11 Business process reengineering
Business process re-engineering is not just a change, but a dramatic change and
dramatic improvement. This is only achieved through overhauling the organization
structures, job descriptions, performance management, training and, most
importantly, the use of IT, i.e. Information Technology.
Fig: 5.11 Business Process Re-engineering
BPR projects have sometimes failed to meet high expectations. Many unsuccessful
BPR attempts are due to the confusion surrounding BPR and how it should be
performed, so that it becomes a process of trial and error.
Phases of BPR :
According to Peter F. Drucker, "Re-engineering is new, and it has to be done."
There are 7 different phases for BPR. All the projects for BPR begin with the most
critical requirement i.e. communication throughout the organization.
Begin organizational change.
Build the re-engineering organization.
Identify BPR opportunities.
Understand the existing process.
Reengineer the process.
Blueprint the new business system.
Perform the transformation.
Objectives of BPR :
Following are the objectives of the BPR :
To dramatically reduce cost.
To reduce time requirements.
To improve customer services dramatically.
To reinvent the basic rules of the business, e.g. the airline industry.
Customer satisfaction.
Organizational learning.
Challenges faced by BPR process :
Not all BPR processes are as successful as described. Companies that have started
using BPR projects face many of the following challenges:
Resistance
Tradition
Time requirements
Cost
Job losses
Advantages of BPR :
Following are the advantages of BPR :
BPR offers tight integration among different modules.
It offers a single view of the business, i.e. the same database, consistent
reporting and analysis.
It offers process orientation facility i.e. streamline processes.
It offers rich functionality like templates and reference models.
It is flexible.
It is scalable.
It is expandable.
Disadvantages of BPR :
Following are the Disadvantages of BPR :
Its success depends on various factors such as size and the availability of
resources, so it will not fit every business.
5.12 Software re-engineering
Software re-engineering is the examination and alteration of a system to
reconstitute it in a new form. The principles of re-engineering, when applied to
the software development process, are called software re-engineering. It
positively affects software cost, quality, service to the customer and speed of
delivery. In software re-engineering, we improve the software to make it more
efficient and effective.
The need for software re-engineering: Software re-engineering is an economical
process for software development and quality enhancement of the product. This
process enables us to identify the useless consumption of deployed resources and
the constraints that are restricting the development process, so that the
development process can be made easier, cost-effective (in time, finances, direct
advantages, code optimization and indirect benefits) and maintainable. Software
re-engineering is necessary for the following reasons:
a) Boosts productivity: Software re-engineering increases productivity by
optimizing the code and database so that processing gets faster.
b) Continuity of processes: The functionality of the older software product can
still be used while the new software is tested or developed.
c) Improvement opportunity: During software re-engineering, not only the
software's qualities, features and functionality but also the developers' skills
are refined, and new ideas emerge. This accustoms developers to capturing new
opportunities so that more and more new features can be developed.
d) Reduction in risks: Instead of developing the software product from scratch,
developers evolve the product from its existing stage to enhance the specific
features that are brought into concern by stakeholders or users. Such practice
reduces the chance of faults.
e) Saves time: As stated above, the product is developed from its existing stage
rather than from the beginning, so less time is consumed.
f) Optimization: This process refines the system's features and functionalities
and reduces the complexity of the product through consistent optimization as far
as possible.
Re-Engineering cost factors:
The quality of the software to be re-engineered.
The tool support available for re-engineering.
The extent of the data conversion which is required.
The availability of expert staff for Re-engineering.
Software Re-Engineering Activities:
1. Inventory Analysis:
Every software organisation should have an inventory of all the applications.
Inventory can be nothing more than a spreadsheet model containing information
that provides a detailed description of every active application.
By sorting this information according to business criticality, longevity, current
maintainability and other local important criteria, candidates for re-engineering
appear.
Resources can then be allocated to candidate applications for re-engineering
work.
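As a sketch of such a spreadsheet model, the Python inventory below is sorted on
hypothetical local criteria (business criticality, longevity, current
maintainability); the field names and scores are illustrative, not part of any
standard.

inventory = [
    # application name, criticality (1-10), age in years, maintainability (1-10)
    {"name": "payroll",    "criticality": 9, "age": 22, "maintainability": 2},
    {"name": "inventory",  "criticality": 7, "age": 15, "maintainability": 4},
    {"name": "web portal", "criticality": 6, "age": 3,  "maintainability": 8},
]

# Most critical, oldest, least maintainable applications surface first.
candidates = sorted(inventory,
                    key=lambda app: (-app["criticality"], -app["age"],
                                     app["maintainability"]))
for app in candidates:
    print(app["name"])   # payroll, inventory, web portal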
2. Document restructuring:
Documentation of a system either explains how it operates or how to use it.
Documentation must be kept up to date, but it may not be necessary to fully
document every application; a business-critical system, however, must be fully
re-documented.
3. Reverse Engineering:
Reverse engineering is a process of design recovery. Reverse engineering tools
extract data, architectural and procedural design information from an existing
program.
4. Code restructuring:
To accomplish code restructuring, the source code is analysed using a
restructuring tool. Violations of structured programming constructs are noted and
the code is then restructured.
The resulting restructured code is reviewed and tested to ensure that no
anomalies have been introduced.
5. Data Restructuring:
Data restructuring begins with a reverse engineering activity.
Current data architecture is dissected, and the necessary data models are defined.
Data objects and attributes are identified, and existing data structures are
reviewed for quality.
6. Forward Engineering:
Forward engineering, also called renovation or reclamation, not only recovers
design information from existing software but also uses this information to alter
or reconstitute the existing system in an effort to improve its overall quality.
5.13 Restructuring
Software restructuring modifies source code and/or data in an effort to make it
amenable to future changes. In general, restructuring does not modify the overall
program architecture. It tends to focus on the design details of individual modules
and on local data structures defined within modules. If the restructuring effort
extends beyond module boundaries and encompasses the software architecture,
restructuring becomes forward engineering.
Arnold defines a number of benefits that can be achieved when software is
restructured:
• Programs have higher quality—better documentation, less complexity, and
conformance to modern software engineering practices and standards.
• Frustration among software engineers who must work on the program is reduced,
thereby improving productivity and making learning easier.
• Effort required to perform maintenance activities is reduced.
• Software is easier to test and debug.
Restructuring occurs when the basic architecture of an application is solid, even
though technical internals need work. It is initiated when major parts of the
software are serviceable and only a subset of all modules and data need extensive
modification.
Code Restructuring
Code restructuring is performed to yield a design that produces the same function
but with higher quality than the original program. In general, code restructuring
techniques model program logic using Boolean algebra and then apply a series of
transformation rules that yield restructured logic. The objective is to take
"spaghetti-bowl" code and derive a procedural design that conforms to the
structured programming philosophy.
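A toy illustration of the idea (not taken from the text): the two Python
functions below compute the same result, but the first buries its exit conditions
in flag-driven "spaghetti" control flow, while the restructured version uses a
single-entry, single-exit structured loop.

def find_spaghetti(items, target):
    i = 0
    found = False
    result = -1
    while True:                        # exit conditions tangled in the body
        if i >= len(items):
            break
        if not found and items[i] == target:
            found = True
            result = i
        i += 1
    return result

def find_structured(items, target):
    for i, item in enumerate(items):   # structured, single-exit equivalent
        if item == target:
            return i
    return -1

assert find_spaghetti([3, 1, 4], 4) == find_structured([3, 1, 4], 4) == 2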
Other restructuring techniques have also been proposed for use with reengineering
tools. A resource exchange diagram maps each program module and the resources
(data types, procedures and variables) that are exchanged between it and other
modules. By creating representations of resource flow, the program architecture
can be restructured to achieve minimum coupling among modules.
Data Restructuring
Before data restructuring can begin, a reverse engineering activity called analysis of
source code must be conducted. All programming language statements that contain
data definitions, file descriptions, I/O, and interface descriptions are evaluated. The
intent is to extract data items and objects, to get information on data flow, and to
understand the existing data structures that have been implemented. This activity
is sometimes called data analysis.
Once data analysis has been completed, data redesign commences. In its simplest
form, a data record standardization step clarifies data definitions to achieve
consistency among data item names or physical record formats within an existing
data structure or file format. Another form of redesign, called data name
rationalization, ensures that all data naming conventions conform to local standards
and that aliases are eliminated as data flow through the system.
When restructuring moves beyond standardization and rationalization, physical
modifications to existing data structures are made to make the data design more
effective. This may mean a translation from one file format to another, or in some
cases, translation from one type of database to another.
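Data name rationalization can be sketched as a simple alias-to-canonical-name
mapping; the alias table and the record below are hypothetical examples of a
local naming standard.

ALIASES = {"cust_no": "customer_id",
           "custnum": "customer_id",
           "dob": "date_of_birth"}

def rationalize(record):
    """Rewrite each field name to its canonical form, if one is defined."""
    return {ALIASES.get(name, name): value for name, value in record.items()}

legacy = {"cust_no": 1042, "dob": "1990-05-17", "balance": 12.50}
print(rationalize(legacy))
# {'customer_id': 1042, 'date_of_birth': '1990-05-17', 'balance': 12.5}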
5.14 Reverse engineering
Software Reverse Engineering is a process of recovering the design, requirement
specifications and functions of a product from an analysis of its code. It builds a
program database and generates information from this.
The purpose of reverse engineering is to facilitate the maintenance work by
improving the understandability of a system and to produce the necessary
documents for a legacy system.
Reverse Engineering Goals:
Cope with Complexity.
Recover lost information.
Detect side effects.
Synthesise higher abstractions.
Facilitate Reuse.
Steps of Software Reverse Engineering:
Collecting information:
This step focuses on collecting all possible information (e.g., source code,
design documents, etc.) about the software.
Examining the information:
The information collected in step 1 is studied so as to become familiar with the
system.
Extracting the structure:
This step is concerned with identifying the program structure in the form of a
structure chart, where each node corresponds to some routine (a small sketch of
this step follows the list).
Recording the functionality:
During this step, the processing details of each module of the structure chart
are recorded using a structured language, decision tables, etc.
Recording data flow:
From the information extracted in steps 3 and 4, a set of data flow diagrams is
derived to show the flow of data among the processes.
Recording control flow:
The high-level control structure of the software is recorded.
Reviewing the extracted design:
The extracted design document is reviewed several times to ensure consistency and
correctness. It also ensures that the design represents the program.
Generating documentation:
Finally, in this step, the complete documentation, including the SRS, design
document, history, overview, etc., is recorded for future use.
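As promised under "Extracting the structure", here is a minimal Python sketch of
that step: walking the abstract syntax tree of a program and recording which
routine calls which yields a crude structure chart. Real reverse engineering
tools do far more; this only illustrates the idea, and the sample SOURCE is
hypothetical.

import ast

SOURCE = """
def main():
    load()
    report()

def load():
    parse()

def parse(): pass
def report(): pass
"""

tree = ast.parse(SOURCE)
for func in ast.walk(tree):
    if isinstance(func, ast.FunctionDef):
        for node in ast.walk(func):
            # Record an edge for every direct call made inside this routine.
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                print(f"{func.name} -> {node.func.id}")
# main -> load, main -> report, load -> parse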
Reverse Engineering Tools:
Reverse engineering, if done manually, would consume a lot of time and human
labour and hence must be supported by automated tools. Some of the tools are
given below:
CIAO and CIA: A graphical navigator for software and web repositories along with
a collection of Reverse Engineering tools.
Rigi: A visual software understanding tool.
Bunch: A software clustering/modularization tool.
GEN++: An application generator to support development of analysis tools for the
C++ language.
PBS: Software Bookshelf tools for extracting and visualizing the architecture of
programs.
5.15 Forward engineering
Forward engineering is the process of building from a high-level model or concept
down to the lower-level details and their complexities. This type of engineering
has different principles in various software and database processes.
Generally, forward engineering is important in IT because it represents the
'normal' development process, for example, building from a model into an
implementation language. This will often result in a loss of semantics (if the
models are more semantically detailed) or a loss of levels of abstraction.
Forward engineering is thus related to the term 'reverse engineering', where
there is an effort to build backward, from a coded set to a model, or to unravel
the process of how something was put together.
o Reverse engineering can extract design information from source code, but
the abstraction level, the completeness of the documentation, the degree to
which tools and a human analyst work together, and the directionality of the
process are highly variable.
o The abstraction level of a reverse engineering process and the tools used
to effect it refer to the sophistication of the design information that can
be extracted from the source code.
o Ideally, the abstraction level ought to be as high as is practical.
o The level of detail offered at an abstraction level is referred to as the
completeness of a reverse engineering process. Completeness typically
declines as the abstraction level rises.
o Interactivity refers to the degree to which a human is integrated with
automated tools to produce an effective reverse engineering process.
o In many circumstances, interactivity must rise as the abstraction level
does, or completeness will suffer.
o The reverse engineering method has a one-way directionality: all knowledge
gleaned from the source code is sent to the software engineer for use in
future maintenance tasks.
o An example of backward engineering is research on instruments.
What is Reverse Engineering?
Exploring Existing Designs and Maneuvers
We are able to observe what already exists thanks to reverse engineering. This
includes any components, systems, or procedures that might serve communities in
other ways. Through analysis of existing products, innovation and discovery are
made possible.
Discovering Any Product Vulnerabilities
Reverse engineering helps in identifying product flaws, just as in the prior
step. This is done to protect the safety and wellbeing of the product's users.
Ideally, an issue should come up in the research stage rather than the
distribution stage.
Inspiring Creative Minds with Old Ideas
Last but not least, reverse engineering provides a way for innovative design. An
engineer may come upon a system during the process that could be valuable for a
totally unrelated project. This demonstrates how engineering links tasks to prior
knowledge.
Creating a Reliable CAD Model for Future Reference
The majority of reverse engineering procedures include creating a fully functional
CAD file for future use. A CAD file is made so that the part can be inspected
digitally in the event that problems develop later. This type of technology has
improved product expressiveness and engineering productivity.
Bringing Less Expensive and More Efficient Products to the Market
The basic objective of reverse engineering is to guide engineers toward success and
innovation. Reduced manufacturing costs and maximum product efficacy are
necessary for success.
Reconstructing a Product that is Outdated
Understanding the product itself is a crucial component of redesigning an
existing product. Working out the quirks of an antiquated system with the help of
reverse engineering gives us the full picture. The most crucial factor in this
procedure is quality.
The differences between forward and reverse engineering can be summarized as
follows:
1. In forward engineering, the application is developed from the given
requirements. In reverse engineering (backward engineering), information is
collected from the given application.
2. Forward engineering is a high-proficiency skill. Reverse engineering is a
low-proficiency skill.
3. Forward engineering takes more time to develop an application. Reverse
engineering takes less time to develop an application.
4. The nature of forward engineering is prescriptive. The nature of reverse
engineering is adaptive.
5. In forward engineering, production is started with the given requirements.
In reverse engineering, production is started by taking the existing products.
6. Examples of forward engineering are the construction of an electronic kit,
the construction of a DC motor, etc. An example of reverse engineering is
research on instruments.
Table: 5.15 Differences between forward and reverse engineering