Software Engineering: Self Learning Material
Software Engineering
(BSBC-401)
SECTION B
S/W Project Planning: Objectives, Decomposition Techniques: S/W Sizing, Problem Based
Estimation, Process Based Estimation, Cost Estimation Models: COCOMO Model, The S/W
Equation, System Analysis: Principles Of Structured Analysis, Requirement Analysis, DFD,
Entity Relationship Diagram, Data Dictionary. S/W Design: Objectives, Principles, Concepts,
Design Methodologies: Data Design, Architecture Design, Procedural Design, Object-Oriented
Concepts.
SECTION C
Testing Fundamentals: Objectives, Principles, Testability, Test Case Design: White Box &
Black Box testing, Testing Strategies: Verification & Validation, Unit Testing,
Integration Testing, Validation Testing, System Testing.
SECTION D
1.0 Objectives
1.1 Introduction
1.1.1 Software
1.5 Summary
1.6 Glossary
1.0 Objectives
1.1 Introduction
1.1.1 Software
Software is a set of step-by-step instructions that, when executed, produce the desired output. In
other words, software comprises programs together with the associated data structures that enable
those programs to transform information according to the requirements.
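The idea of software as step-by-step instructions that transform input into a desired output can be illustrated with a minimal sketch; the function name and values here are purely illustrative:

```python
# A minimal illustration: software as step-by-step instructions
# that transform input data into the desired output.

def average_marks(marks):
    """Step 1: accumulate the inputs; Step 2: divide by the count."""
    if not marks:
        return 0.0
    total = sum(marks)           # step 1: accumulate
    return total / len(marks)    # step 2: compute the result

if __name__ == "__main__":
    print(average_marks([70, 80, 90]))  # desired output: 80.0
```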
Individuals develop programs for their personal use. Programs are smaller in size and limited in
functionality, whereas software products belong to the category of larger systems. While
developing a program, the programmer is the user of his own program, but in the case of a
software product the users are people who did not develop it. A program is a single-developer
effort, while a software product involves a large number of developers. For a program, the user
interface may not be very important, because the programmer is the sole user. For a software
product, on the other hand, the user interface must be carefully designed and implemented,
because the developers of the product and its users are entirely different people.
As a discipline, software engineering is the branch of engineering that deals with all aspects of
software production, from the system specification through implementation to maintenance of the
system years into its use. Software engineering has progressed drastically in a few years,
particularly when compared with other engineering disciplines. Computing in its early years was
quite small compared to the present situation; the rapid progression of software engineering over
the past few years is its most notable feature. It is now visible in almost every field, including
scientific computing, military applications and radar, and it touches people of all ages. Today,
software engineering works behind virtually every aspect of our lives, from critical systems that
affect us directly to systems that support our well-being indirectly.
The major elements of software engineering are:
Requirement Specifications
Design Specifications
Testing
Maintenance
Configuration management
Quality Assurance.
Requirement Specifications:- A software requirements specification (SRS) is a detailed
description of the purpose of, and conditions for, the software under development. The SRS
explains in detail what the software is going to deliver and what the expected output will be.
A good SRS reduces the time and effort required by developers to achieve the specified goals
on time, and it also reduces the cost of development. It describes how the application will
interact with the system hardware, with other programs and with the users of the system
under consideration, across a wide variety of real-world situations. Parameters such as
operating speed, response time, availability, portability, maintainability, footprint, security
and speed of recovery from adverse events are evaluated.
Testing:- Software testing, depending upon the application under consideration, can be carried
out at any time in the development process. Usually, most of the testing effort occurs after the
requirements have been elicited and the coding has been done; the testing methodology is
therefore governed by the software development methodology applied. Different software
development models concentrate the test effort at different points in the development process.
Newer development methods, such as Agile, often employ a test-driven strategy and place an
increased share of the testing on the developer side, before the software reaches a formal team
of testers. In a more traditional model, most of the test execution occurs after the requirements
have been defined and the coding has been completed.
Maintenance: - The process of changing a software system or component after delivery, to
correct errors, to improve performance or other characteristics, or to adapt it to a changing
environment. Maintenance typically requires:
Tools to gain insight into the static structure of the system
Tools to gain insight into the dynamic behaviour of the system
Tools to inspect the history of different versions of the same software.
Quality Assurance:-
• Controlling variation in the system is most important in assuring quality.
• Software developers strive to control:
– the processes applied
– the resources used
– the quality characteristics of the end product
• Quality of design
– refers to characteristics designers specify for the end product to be constructed
• Quality of conformance
– degree to which design specifications are followed in manufacturing the product
• Quality control
– series of inspections, reviews, and tests used to ensure conformance of a work product
to its specifications
• Quality assurance
– auditing and reporting procedures used to provide management with data needed to
make proactive decisions
1. Define Software.
2. How is a program different from Software Product?
3. List various elements of Software engineering.
While developing software, the developer must understand the characteristics of software that
make it different from other engineered products. When any hardware is constructed (through
analysis, design and testing), its prototype is converted into a physical form. For example,
constructing a computer system involves steps from the initial design drawings to the integration
of chips and other parts.
Software, by contrast, is a logical rather than a physical entity. Therefore, software has
characteristics that are very different from those of hardware.
1. Software is not manufactured; it is engineered or developed: Though there may exist
similarities between software development and the manufacturing of hardware, the two
activities are performed differently. Both are performed by people, and in both quality is the
major concern, but the approaches to producing them are considerably different.
2. Software does not wear out: The most considerable difference concerns wearing out.
Software never wears out, while hardware does. Hardware failure can be represented by a
curve called the bathtub curve, which indicates that during early life the failure rate is high;
as defects are corrected the failure rate drops to a steady-state level; and with the passage of
time the failure rate rises again as components suffer from various environmental and
physical factors.
Fig 1.1: The bathtub curve (infant mortality, steady state, wear-out)
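The contrast between the hardware and software failure curves can be sketched numerically. This is a toy model, not empirical data: the coefficients are invented solely to give each curve its characteristic shape, with the hardware rate rising again in old age while the software rate settles at a steady-state floor:

```python
import math

def hardware_failure_rate(t, base=0.02):
    """Idealized bathtub curve: high infant-mortality failures early,
    a steady-state floor, then wear-out rising with age.
    All coefficients are illustrative, not measured."""
    infant = 0.5 * math.exp(-t / 2.0)     # early defects flushed out
    wear_out = 0.001 * t ** 2 / 100.0     # physical ageing
    return base + infant + wear_out

def software_failure_rate(t, base=0.02):
    """Idealized software curve: defects corrected early, then flat.
    Software does not wear out, so there is no ageing term."""
    return base + 0.5 * math.exp(-t / 2.0)
```

Evaluating both functions at increasing `t` shows the hardware rate eventually climbing above its steady-state level, while the software rate stays at the floor indefinitely.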
Good software must deliver accurate and timely results, which requires the right functionality
and performance; it should also be easy to maintain, dependable, efficient and acceptable.
3. Efficiency: Good software should make efficient use of the various system resources.
4. Acceptability: Acceptance of the software by its end users is also a feature of good-quality
software.
Components contain different system processes. Data and functions are encapsulated inside
each component in such a way that they are semantically related. As far as system-level
coordination is concerned, components usually communicate with each other via interfaces.
When a component provides services to other parts of the system, it adopts a given interface
that specifies the services other components can use. This interface is essential: in a
client-server environment, for instance, the client does not need to know the inner details of a
component or its implementation in order to use it.
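The interface idea described above can be sketched as follows. The `StorageInterface` name and its methods are hypothetical, chosen only to show a client using a component through its published interface without knowing its internals:

```python
from abc import ABC, abstractmethod

class StorageInterface(ABC):
    """The interface a component publishes; clients code against
    this, not against any concrete implementation."""
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...
    @abstractmethod
    def load(self, key: str) -> str: ...

class InMemoryStorage(StorageInterface):
    """One possible implementation; its internals (a dict here)
    stay hidden from the client."""
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data[key]

def client(store: StorageInterface):
    # The client uses only the published interface; it would work
    # unchanged with any other StorageInterface implementation.
    store.save("greeting", "hello")
    return store.load("greeting")
```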
Software is applied in situations where some predefined set of procedural steps, such as an
algorithm, has been defined. The type of content and the determinacy of the information are
important factors in determining the nature of a software application. Content refers to the form
of the incoming and outgoing information. For example, many business applications use highly
structured input data (a database) and produce well-structured reports.
Determinacy refers to the predictability of the order and timing of information. An engineering
program accepts data in a predefined order, executes the analysis algorithm(s) without
interruption, and produces output data in report format. Such applications are determinate.
System software: System software, or service programs, are written to service other programs.
Compilers, editors and file-management utilities are commonly used system software that
process complex but determinate data structures, while other system applications, such as
operating-system components, device drivers and communications processors, process largely
indeterminate data. In all cases, the system software area is characterized by heavy interaction
with computer hardware, heavy use by multiple users, concurrent operation that requires
process scheduling, resource sharing and process management, and complex data structures.
Real-time software: Software that monitors, analyzes and controls real-world events as they
occur is called real-time software. Real-time software includes a data-gathering component
that collects and formats information from the external environment, an analysis component
that transforms the information as required by the application, a control/output component that
responds to the external environment, and a monitoring component that coordinates the other
components so that real-time response (typically ranging from 1 millisecond to 1 second) can
be maintained.
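The four real-time components named above can be sketched as one minimal processing cycle; the sensor value and the component functions are invented for illustration:

```python
import time

def gather():
    """Data-gathering component: collect and format external input."""
    return {"sensor": 42}  # illustrative sensor reading

def analyze(raw):
    """Analysis component: transform the information as the
    application requires (here, a trivial scaling)."""
    return raw["sensor"] * 2

def control(value):
    """Control/output component: respond to the environment."""
    return f"actuator set to {value}"

def monitor(deadline_s=1.0):
    """Monitoring component: coordinate the others and check that
    the whole cycle met the real-time deadline."""
    start = time.monotonic()
    result = control(analyze(gather()))
    elapsed = time.monotonic() - start
    return result, elapsed <= deadline_s
```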
Business software: Information systems used in business applications come under this category;
business information processing has a much larger domain than other software areas. Discrete
systems such as payroll, inventory and accounts-receivable systems have evolved into
management information system (MIS) software, also known as business information systems
(BIS), that accesses one or more large databases. Applications in this area restructure existing
data in ways that facilitate business operations and management decision making. In addition
to traditional data-processing applications, business software now includes computing
techniques that are far more interactive than earlier systems; point-of-sale and
transaction-processing systems are examples of business information systems.
Scientific software: Such software ranges from automotive applications to astronomy, and from
orbital dynamics to molecular biology. However, many modern applications within the
scientific area are moving away from the traditional numerical-algorithm approach: interactive
applications such as computer-aided design and simulation have begun to take on the
characteristics of real-time system software.
Embedded software: Intelligent products, driven by embedded software, have taken a large
share of the industrial and consumer markets. Embedded software is used to control varied
products. It can perform very limited functions, such as controlling the keypad of a microwave
oven, or provide significant capability, such as fuel control, dashboard displays and braking
systems in an automobile.
Personal computer software: The personal computer software market has grown exponentially
over the past two decades. Word processing, computer graphics, multimedia applications,
spreadsheets, entertainment software, personal and business financial applications, external
network access and database access are only a few of hundreds of applications.
Web-based software: Web-based software runs on a server while users connect to it from their
own computers. Technologies such as HTML, DHTML, XML and PHP are used to develop
web applications.
Artificial intelligence software: Artificial intelligence (AI) software makes use of inference
mechanisms to solve complex problems that are not amenable to computation or
straightforward analysis. Expert systems (also called knowledge-based systems), pattern
recognition, theorem proving, game playing and neural networks come under this category.
5. Education sector: Such software aids in learning and in organizing study material, acting like
a tutor that assists with studying; it can cover millions of subjects and be customized for almost
any kind of learning. With the drive for institutions to become "paperless", more educational
institutions are looking for alternative ways of assessing and testing students' performance in a
virtual environment. Assessment software prompts students to complete online tests and
examinations, usually over a network; the software then evaluates and scores each test
transcript and reports the results for each student.
6. Military applications: Military forces use a wide range of equipment, and tracking that
equipment can be a challenge. The military uses software for communication, tracking and
other purposes; Intellitrack is an example of such software.
The set of problems associated with developing and maintaining software is often termed a
"crisis". Few observers have been able to measure the full impact of some of the more complex
software failures that have occurred over the past few years. At the same time, the great
successes achieved by the software industry in developing complex applications have led many
to question whether the term software crisis is still accurate: software people succeed more
often than they fail, and what we really have may be something rather different.
True or False
Fill in the Blanks
a) response
b) licensing
c) memory usage
d) processing time
a) software companies
b) developers
c) Both a and b
a) drawing DFD
b) SRS Document
c) coding the project
d) the user Manual
a) Developer
b) User
c) Contractor
d) Initiator
e) Client.
6. The nature of software applications can be characterized by their information
a) complexity
b) content
c) determinacy
d) none of the above
7. …………………. software resides only in read only memory and is used to control products and
systems for the consumer and industrial markets.
a) Business
b) Embedded
c) System
d) Personal
a) Productivity
b) Portability
c) Timeliness
d) Visibility
1.5 Summary
Software is a set of instructions used to produce a desired output. Software is not manufactured;
rather, it is engineered or developed. Software engineering involves various phases, from
requirement analysis to maintenance. Some of the major characteristics of good software are
maintainability, dependability, efficiency and acceptability. Components of software include
data and programs. Some types of software are system software, real-time software, artificial
intelligence software and embedded software. Application areas of software engineering include
the education sector, the military, accounting systems, research and development, and more.
1.6 Glossary
Software: A set of step-by-step instructions that, when executed, produce the desired output.
SRS: A software requirements specification (SRS) is a detailed description of the purpose of,
and conditions for, the software under development.
Software component: Usually a software package, a web service, a web resource, or a module.
Ans 1: Software is a set of step-by-step instructions that, when executed, produce the desired
output. In other words, software comprises programs together with the data structures that
enable those programs to transform information according to the requirements.
Ans 2: Programs are smaller in size and limited in functionality, while software products belong
to the category of larger systems. A software product also involves a large number of
developers, whereas a program is typically written by a single developer.
Ans 3:
Requirement Specifications
Design Specifications
Testing
Maintenance
Configuration management
Quality Assurance.
True or False
1. F
2. F
3. T
4. T
5. T
1. CRISIS
2. ARTIFICIAL
3. REAL TIME
4. PHASE
5. ACCOUNTING
1. c
2. b
3. b
4. d
5. e
6. c
7. b
8. a
9. a
1. Roger S. Pressman, Software Engineering: A Practitioner's Approach, 7th Edition, McGraw
Hill, 2010.
2. Rajib Mall, Fundamentals of Software Engineering, 3rd Edition, PHI, 2009.
3. Naseeb Singh Gill, Software Engineering: Software Reliability, Testing and Quality, Khanna
Book Publishing, 2011.
Lesson- 2 Software Process models
Structure of the lesson
2.0 Objective
2.1 Introduction
2.2 Software process
2.3 Life cycle model
2.4 Different software life cycle models
2.4.1 Classical waterfall model
2.4.2 Prototype Model
2.4.3 Spiral model
2.5 Agile development model
2.5.1 Scrum Methodology
2.5.2 Extreme Programming or XP
2.6 Summary
2.7 Glossary
2.8 Answers to check your progress/self assessment questions
2.9 References/ Suggested Readings
2.10 Model questions
2.0 Objective
After studying this lesson, students will be able to:
1. Define the concept of software process.
2. Discuss various software life cycle models.
3. Explain different phases of a life cycle model.
4. Describe the need of agile methodology.
5. Explain XP and Scrum methodologies of Agile Development.
2.1 Introduction
The IT industry went through a phase that is popularly known as the software crisis. Major
software projects either failed or were never finished, and the cost of managing large software
systems went out of bounds. This happened because no software development models were
available at the time; the major thrust was on reducing the cost and improving the efficiency of
computer hardware. Soon the experts realized that there was an urgent need for software
engineering too, and a need to develop models to manage the progress of software projects.
2.2 Software process
A software process is a broader term than software methodology. It includes all the activities
performed in the software development phases, as well as the methodology used to carry out
each activity.
2.3 Life cycle model
A software process model is also known as the software development life cycle (SDLC). It is a
descriptive representation of the software life cycle. A life cycle model represents all the
activities needed to take a software product through its life cycle phases, and it specifies the
order in which these activities should be performed. Different life cycle models map the
activities to phases in different ways. No matter which life cycle model you decide to use, all
the basic activities are performed; only the ordering may change. A single life cycle phase may
include more than one activity. For example, the activities of structured analysis and structured
design may both be included in a single phase called design.
It is called the waterfall model because the process flows from one phase to the next like water
falling downwards, which means you cannot move backwards or upwards.
Common phases of the waterfall life cycle model:
1. Feasibility Study
2. Requirements Analysis and Specification
3. Design
4. Coding and Unit Testing
5. Integration and System Testing
6. Maintenance
Feasibility study: -
A feasibility study is carried out to ascertain whether it is feasible or viable to develop the
software from technical, economic, social and other key standpoints. The process starts with the
project leaders visiting the client site to get a sense of the client's requirements. They study the
different input data, the constraints on the behaviour of the system, the kind of processing
needed, and the output data to be produced by the system. Once they are familiar with the
problem at hand, they formulate different solutions for it and estimate the cost of each solution
in terms of time needed, investment involved, workforce needed and more. The next obvious
step is to understand the customer's budget and offer the solution that is feasible within it.
Design phase: -
The goal of the design phase is to design a structure that is suitable for implementation in some
programming language. This structure is derived from the SRS document. The design phase can
be carried out using either of the two design approaches:
a. Traditional design approach
In the traditional design approach, a structured analysis of the requirements specification is
first carried out, in which the detailed structure of the problem is examined; this is followed by
a structured design activity, in which the results of the structured analysis are transformed into
the software design.
b. Object-oriented design approach
In the object-oriented design approach, the objects in the problem domain and the solution
domain are first identified. The object structure is then obtained by identifying the relationships
between the various objects, and the detailed design is created.
Coding and unit testing phase: -
This is also called the implementation phase. In this phase the design is translated into source
code using a programming language. Each component of the design is implemented as a
program module. These modules are then tested individually, in isolation from one another;
this is called unit testing.
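Unit testing of a single module in isolation might look like this sketch, using Python's standard unittest framework; the `discount_price` module under test is a hypothetical example:

```python
import unittest

# Module under test: one design component, implemented in isolation.
def discount_price(price, percent):
    """Return the price after applying a percentage discount."""
    if not (0 <= percent <= 100):
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100.0), 2)

class TestDiscountPrice(unittest.TestCase):
    """Unit tests exercise this module alone, independent of the
    rest of the system (no other modules are imported or needed)."""
    def test_typical_case(self):
        self.assertEqual(discount_price(200.0, 25), 150.0)
    def test_no_discount(self):
        self.assertEqual(discount_price(99.99, 0), 99.99)
    def test_invalid_percent(self):
        with self.assertRaises(ValueError):
            discount_price(100.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False)
```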
Maintenance phase: -
Maintenance may require more effort than the actual implementation phase. The maintenance
phase consists of the following activities:
a. Corrective maintenance: correcting errors that were not discovered during the
implementation phase.
b. Perfective maintenance: improving or enhancing the system implementation as per the
customer's requirements.
c. Adaptive maintenance: getting the software to work on a new computer platform or with a
new operating system.
Q7. _________________ testing is carried after all the modules have been successfully integrated
and tested.
Q8. Correcting of errors that could not be discovered during the implementation phase is called
_________________ maintenance.
A variation of the classical waterfall model called the iterative waterfall model became very
popular. In the iterative waterfall model, there is a provision for moving back to any earlier
phase when a correction is needed.
The diagrammatic representation of the spiral model is a spiral with multiple loops; the number
of loops is not fixed. A single loop of the spiral represents a single phase of the software
process. The innermost loop is concerned with the first phase, the feasibility study; the next
loop with requirements specification; and so on. Each phase in the spiral model is also split into
four sectors, as shown in the figure above. The activities carried out in each sector of a single
phase are as follows:
1. The objectives of the phase are identified in the first sector.
2. In the second sector, a detailed analysis is carried out for each identified project risk, and
steps are taken to reduce the risks.
3. The third sector involves the activities concerned with the development and validation of the
next level of the product.
4. The fourth and last sector is concerned with reviewing the results achieved and planning the
next iteration around the spiral. With each iteration, a more comprehensive and complete
version of the software is built.
The Agile development model is designed to handle change requests from customers more
quickly than traditional development models. It is optimized to facilitate quick completion of
the project by fitting the process to the project and removing all activities that may not be
needed and would be expected to waste time. Agile refers to a group of development processes
with certain common characteristics. Two Agile development models are:
1. Scrum
2. Extreme Programming (XP).
The Agile model uses an iterative approach to develop the decomposed requirements
incrementally. Agile development tries to keep each iteration small, so that it can be finished in
a maximum period of 2-3 weeks, called a time box. After each iteration, the incremented
software is delivered to the customer.
Scrum Master: Acting as a facilitator between the Product Owner and the team members, a
Scrum Master works to remove the obstacles that are stopping the team from achieving its
sprint goals.
Team: The team in the Scrum model is self-managing. A Scrum development team consists of
about 3-9 members: programmers, software engineers, analysts, architects, testers, UI
designers, etc. The team has both the autonomy and the responsibility to meet the goals of the
sprint.
Scaled Agile
A single Product Owner manages several teams working on one product
2.5.2 Extreme Programming or XP
The Extreme Programming model places a higher priority on adaptability than on predictability,
and assumes that a software developer should be able to react to any change requested by the
customer at any point in time. In this model, changes are addressed on a daily basis by the
managers and the developers.
XP Practices
1. Confronting changes early leaves more time to resolve them.
2. The focus is only on the current requirements, without making predictions about future ones.
3. Even if a set of changes is requested by the customer, only one change at a time is integrated
with the baseline.
4. XP is most preferred where progress needs to be demonstrated frequently.
Team Structure
The concept of pair programming is used in the XP model. The driver is responsible for typing,
and the observer is responsible for reviewing. Individual developers may be given
responsibility for writing prototypes, but production code is written in pairs. Pairs are rotated
often, to give each team member a thorough knowledge of the project.
2.6 Summary
A software process includes all the activities performed in the software development phases, as
well as the methodology used to carry out each activity. A software process model is a
descriptive representation of the software life cycle. A life cycle model represents all the
activities needed to take a software product through its life cycle phases, and it defines entry
and exit criteria for each phase. Common phases of any software development model are
feasibility study, requirements analysis and specification, design, coding and unit testing,
integration and system testing, and maintenance. In the iterative waterfall model, there is a
provision for moving back to any phase when a correction is needed. A prototype exhibits only
limited functional capabilities compared to the actual software; it is built using several
shortcuts and turns out to be a very crude version of the actual system. The spiral model is a
spiral with multiple loops, and the number of loops is not fixed; a single loop of the spiral
represents a single phase of the software process. The Agile development model is designed to
handle change requests from customers more quickly than traditional development models.
Scrum is based on the concept of inspect-and-adapt feedback iterations to cope with complexity
and risk. The Extreme Programming model places a higher priority on adaptability than on
predictability, and assumes that a software developer should be able to react to any change
requested by the customer at any point in time.
2.7 Glossary
Software process- It includes all the activities performed in the software development phases and
also the methodology used to carry out each activity.
Agile model- Agile development model is designed to handle the change requests by the
customers more quickly than any of the traditional development models.
Prototype- A prototype exhibits only limited functional capabilities compared to the actual
software.
Module- It refers to a single component of a software.
SRS- consists of both the functional and non-functional requirements, as well as the goals of
implementation.
2.9 References/ Suggested Readings
"1. Roger. S. Pressman, Software Engineering - A Practitioner's Approach, 7th Edition, McGraw
Hill,
2010.
2. Rajib Mall, “Fundamental of Software Engineering “, 3rd edition, PHI, 2009.
3. Naseeb Singh Gill, “Software Engineering: Software reliability, testing and quality, Khanna
Book
Publishing, 2011."
Unit No 1
Lesson 3: Software Project Management
Structure of the lesson
3.0 Objectives
3.1 Introduction
3.2 Project Management
3.2.1 Need of software project management
3.3 Software Project Manager
3.3.1 Responsibilities of a project manager
3.4 Software Management Activities
3.4.1 Project Planning
3.4.2 Scope Management
3.4.3 Project Estimation
3.5 Software Project Scheduling Techniques
3.5.1 Work break down Structure
3.6 Resource Management
3.7 Software Project Organization
3.7.1 Major Issues in organizing
3.7.2 Organizational Activities for a Software Project
3.8 Project Organization Structure
3.8.1 Functional Project organization
3.8.2 Project Organization
3.9 Software Project Team
3.10 Project Risk Management
3.10.1 Risk Management Process
3.10.2 Project Execution & Monitoring
3.10.3 Project Communication Management
3.10.4 Change Control
3.11 Summary
3.12 Glossary
3.13 Multiple Choice Questions
3.14 Solution to Exercise (Self-Assessment Questions)
3.15 Bibliography/References/Suggested Readings
3.16 Model Questions
3.0 Objectives: -
3.1 Introduction: - A project is a well-defined task: a collection of several operations carried
out in order to achieve a goal (for example, the development and delivery of software). A
project can be characterized as:
Figure 1: https://tensix.com/2014/10/management-and-the-triple-constraints/
The image above shows the triple constraints for software projects. It is essential for a
software organization to deliver a quality product, keep the cost within the client's budget
constraint, and deliver the project on schedule. There are several factors, both internal and
external, that may impact this triple-constraint triangle, and any one of the three factors can
severely impact the other two. Therefore, software project management is essential to
reconcile user requirements with the budget and time constraints.
Managing People
Managing Project
Risk analysis at every phase
Taking necessary steps to avoid or resolve problems
Acting as project spokesperson
Project Planning
Scope Management
Project Estimation
Software project planning is a task performed before the production of software actually starts.
It exists for the sake of software production, but involves no concrete activity with any direct
connection to software production; rather, it is a set of processes that facilitate software
production.
Scope management defines the scope of the project; this includes all the activities and
processes that need to be done in order to produce a deliverable software product. Scope
management is essential because it draws the boundaries of the project by clearly defining
what will be done and what will not. This makes the project a set of limited and quantifiable
tasks, which can easily be documented, and in turn avoids cost and time overruns.
For effective management, accurate estimation of various measures is a must. With correct
estimates, managers can manage and control the project more efficiently and effectively.
Software size may be estimated either in terms of KLOC (kilo lines of code) or by
calculating the number of function points in the software. Lines of code depend upon
coding practices, and function points vary according to the user or software requirements.
Effort estimation
Time estimation
Once size and effort are estimated, the time required to produce the software can
be estimated. The effort required is segregated into sub-categories as per the
requirement specifications and the interdependency of various components of the
software. Software tasks are divided into smaller tasks, activities or events by a
Work Breakdown Structure (WBS). The tasks are scheduled on a day-to-day
basis or in calendar months. The sum of the time required to complete all tasks in
hours or days is the total time invested to complete the project.
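As a rough illustration (with entirely hypothetical task names and durations), the WBS-based time estimate described above amounts to summing the scheduled durations of the leaf tasks:

```python
# Hypothetical WBS leaf tasks with estimated durations in person-days.
wbs_tasks = {
    "requirements review": 5,
    "database design": 8,
    "UI coding": 12,
    "integration": 6,
}

# If the tasks are strictly sequential, total time is the sum of durations.
total_days = sum(wbs_tasks.values())
print(f"Total estimated time: {total_days} person-days")  # 31 person-days
```

In practice tasks overlap, so a scheduler would use dependency information (see the PERT discussion below) rather than a plain sum.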
Cost estimation
This might be considered the most difficult estimate of all because it depends on more
elements than any of the previous ones. For estimating project cost, it is required to
consider -
o Size of software
o Software quality
o Hardware
o Additional software or tools, licenses etc.
o Skilled personnel with task-specific skills
o Travel involved
o Communication
o Training and support
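A minimal sketch of rolling the elements above into a single total, using entirely hypothetical figures (in some currency unit):

```python
# Hypothetical per-element cost estimates; real projects would derive the
# development-effort figure from a cost model such as COCOMO.
cost_elements = {
    "software size (development effort)": 50000,
    "hardware": 8000,
    "tools and licenses": 3000,
    "travel": 2000,
    "communication": 1000,
    "training and support": 4000,
}

total_cost = sum(cost_elements.values())
print(f"Estimated project cost: {total_cost}")
```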
3.5.1 Work Breakdown Structure: - The Work Breakdown Structure (WBS) is used for
defining work packages and for developing and tracking the cost and schedule of the
project. The work is broken down into tasks, each of which has a manager, a responsible
institution, costs and schedule, technical scope, and, to the extent possible, a specific
piece of the product. Each level-3 element has a Task Manager who is
responsible for the execution of the project plans for that element.
There are many ways of breaking down the activities in a project, but the most usual is
into:
- Work packages
- Tasks
- Deliverables
- Milestones
A work package is a large, logically distinct section of work, typically requiring at least
12 months' duration. It may include multiple concurrent activities, may be independent of
other activities or may depend on, or feed into, other activities, and is typically allocated
to a single team.
A task is typically a much smaller piece of work, or a part of a work package, typically
requiring 3–6 person-months of effort. It may be dependent on other concurrent activities
and is typically allocated to a single person.
A deliverable is an output of the project that can meaningfully be assessed. Examples: a
report (e.g., a requirements specification) or code (e.g., an alpha-tested product). Deliverables are
indicators of progress.
Figure 2: Work Breakdown Structure (Source: http://www.technologyuk.net/computing/project_management/work_breakdown_structure.shtml)
3.5.3 Gantt Chart: - A Gantt chart is a bar chart that provides a graphical illustration of a
schedule, helping to plan, coordinate, and track specific tasks in a project. Its purpose is to
present a project in a way that shows the relationship of activities over time. Gantt charts
are often used for reporting because they present an easily understandable picture of
project status; they are useful for tracking and reporting progress, though they are not an
ideal tool for project control. Because Gantt charts are simple to understand and easy to
construct, they are used by most project managers for all but the most complex projects.
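The idea can be sketched in a few lines of Python that render a crude text-mode Gantt chart for a handful of hypothetical tasks (names, start days and durations are invented for illustration):

```python
# Each hypothetical task: (name, start_day, duration_in_days).
tasks = [
    ("analysis", 0, 4),
    ("design",   3, 5),
    ("coding",   7, 6),
    ("testing", 12, 3),
]

# The chart horizon is the latest finish day among all tasks.
horizon = max(start + dur for _, start, dur in tasks)

# One bar per task: leading spaces position the bar at its start day.
for name, start, dur in tasks:
    bar = " " * start + "#" * dur
    print(f"{name:<10}|{bar:<{horizon}}|")
```

Each row shows one task as a horizontal bar, so overlapping bars make the timing relationships between activities visible at a glance.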
Figure 3: Gantt chart (Source: https://www.ablebits.com/office-addins-blog/2014/05/23/make-gantt-chart-excel/)
3.5.4 PERT Chart: - A PERT chart is a project management tool used to schedule,
organize, and coordinate tasks within a project. PERT stands for Program Evaluation
Review Technique. A PERT chart presents a graphic illustration of a project as a network
diagram consisting of numbered nodes (either circles or rectangles) representing events
or milestones in the project, linked by labelled vectors (directional lines) representing
tasks in the project. The direction of the arrows on the lines indicates the sequence of
tasks. PERT charts depict task, duration and dependency information. Each chart starts
with an initiation node from which the first task originates. If multiple tasks begin at the
same time, they all start from nodes branching out from the starting point. Each task is
represented by a line, which states its name or other identifier, its duration, the number of
people assigned to it, and in some cases the initials of the personnel assigned. The other
end of the task line is terminated by another node, which identifies the start of another
task, or the beginning of any slack time, that is, waiting time between tasks.
Figure 4: PERT chart (Source: https://www.edrawsoft.com/template-simple-pert-chart.php)
– nodes correspond to activities;
– arcs are labelled with estimated times;
– activities are linked if there is a dependency between them.
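A minimal sketch of the forward pass over such an activity network, computing earliest finish times and the overall project duration (activity names, durations and dependencies are hypothetical):

```python
# Hypothetical activity network: activity -> (duration, list of predecessors).
# Activities are listed in dependency order for a simple forward pass.
activities = {
    "A": (3, []),
    "B": (5, ["A"]),
    "C": (2, ["A"]),
    "D": (4, ["B", "C"]),
}

# Earliest finish time of each activity: latest predecessor finish + duration.
earliest_finish = {}
for act, (dur, preds) in activities.items():
    start = max((earliest_finish[p] for p in preds), default=0)
    earliest_finish[act] = start + dur

project_duration = max(earliest_finish.values())
print(project_duration)  # 12: the longest (critical) path is A -> B -> D
```

A full PERT analysis would also run a backward pass to compute latest start times and slack, which identifies the critical path explicitly.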
3.6 Resource management: - All elements used to develop a software product may be
considered resources for that project. These may include human resources, productivity
tools and software libraries. Resources are available in limited quantity and stay in the
organization as a pool of assets. A shortage of resources hampers the development of the
project, which can lag behind schedule, while allocating extra resources increases
development cost in the end. It is therefore necessary to estimate and allocate adequate
resources for the project.
Organizing involves:
- Itemizing project activities required to achieve project objectives.
3.7.1 Major Issues in organizing: The major issues in organizing a software engineering
project are as follows:
- A matrix organizational structure is not accepted by many software
development personnel.
- Many team leaders are expected to perform technically as well as manage
their teams.
Figure 5 (Source: https://aacecasablanca.wordpress.com/2012/03/20/w9-0_rq_implementation-of-the-earned-value-management-process-for-housing-project-the-process-steps-3-organization-and-responsibility-assignment/)
Figure 6 (Source: http://www.rff.com/project_orgchart.htm)
Figure 7 (Source: https://www.edrawsoft.com/project-organizational-chart.php)
- Consists of 10 to 12 members
- Decisions are made by consensus
- Group leadership responsibility rotates
- No permanent central authority
- Rarely found today; however, it is sometimes used in research organizations.
3.9.3 The Hierarchical Team (the controlled decentralized team, and project
team):
Experienced staff leaving the project and new staff coming in.
Change in organizational management.
Requirement change or misinterpreting requirement.
Under-estimation of required time and resources.
Technological changes, environmental changes, business competition.
Identification - Make note of all possible risks that may occur in the project.
Categorize - Categorize known risks into high, medium and low risk intensity as
per their possible impact on the project.
Manage - Analyse the probability of occurrence of risks at various phases. Make
plans to avoid or face risks. Attempt to minimize their side-effects.
Monitor - Closely monitor the potential risks and their early symptoms. Also
monitor the effects of steps taken to mitigate or avoid them.
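A minimal risk-register sketch following the identify and categorize steps above (the risk names, probabilities and impact levels are invented for illustration):

```python
# Identification: note all possible risks with an estimated probability and impact.
risks = [
    {"name": "key developer leaves", "probability": 0.2, "impact": "high"},
    {"name": "requirement change",   "probability": 0.5, "impact": "medium"},
    {"name": "hardware delay",       "probability": 0.1, "impact": "low"},
]

# Categorize: order by impact intensity (high first), then by probability,
# so the riskiest items are managed and monitored first.
order = {"high": 0, "medium": 1, "low": 2}
risks.sort(key=lambda r: (order[r["impact"]], -r["probability"]))

for r in risks:
    print(f'{r["impact"]:<6} p={r["probability"]:.1f}  {r["name"]}')
```

The manage and monitor steps would then attach a mitigation plan and status to each entry and revisit the register at every phase.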
In this phase, the tasks described in the project plans are executed according to their
schedules. Execution needs monitoring in order to check whether everything is going
according to plan. Monitoring means observing in order to check the probability of risks,
taking measures to address them, and reporting the status of various tasks.
Activity Monitoring - All activities scheduled within a task can be monitored
on a day-to-day basis. When all activities in a task are completed, the task is
considered complete.
Status Reports - The reports contain status of activities and tasks completed
within a given time frame, generally a week. Status can be marked as finished,
pending or work-in-progress etc.
Milestones Checklist - Every project is divided into multiple phases where major
tasks are performed (milestones) based on the phases of SDLC. This milestone
checklist is prepared once every few weeks and reports the status of milestones.
Effective communication plays a vital role in the success of a project. It bridges the gaps
between the client and the organization, among the team members, as well as with other
stakeholders in the project such as hardware suppliers.
Planning - This step includes the identification of all the stakeholders in the
project and the mode of communication among them. It also considers whether any
additional communication facilities are required.
Sharing - After determining the various aspects of planning, the manager focuses on
sharing the correct information with the correct person at the correct time. This keeps
everyone involved in the project up to date with project progress and status.
Feedback - Project managers use various measures and feedback mechanisms to
create status and performance reports. This mechanism ensures that input from
various stakeholders comes to the project manager as feedback.
Closure - At the end of each major event, end of a phase of the SDLC or end of the
project itself, administrative closure is formally announced, updating every
stakeholder by email, by distributing a hardcopy of the document or by other
means of effective communication. After closure, the team moves to the next phase or
project.
IEEE defines it as “the process of identifying and defining the items in the system,
controlling the change of these items throughout their life cycle, recording and reporting
the status of items and change requests, and verifying the completeness and correctness
of items”.
Generally, once the SRS is finalized there is less chance of change requests from the
user. If they occur, the changes are addressed only with the prior approval of higher
management, as there is a possibility of cost and time overrun.
Baseline
Change control is a function of configuration management which ensures that all changes
made to the software system are consistent and made as per organizational rules and
regulations.
Close request - The change is verified for correct implementation and merging
with the rest of the system. This newly incorporated change in the software is
documented properly and the request is formally closed.
3.11Summary: -
3.12 Glossary: -
A work package is a large, logically distinct section of work, typically requiring at least
12 months' duration. It may include multiple concurrent activities, may be independent of
other activities or may depend on, or feed into, other activities, and is typically allocated
to a single team.
A task is typically a much smaller piece of work, or a part of a work package, typically
requiring 3–6 person-months of effort. It may be dependent on other concurrent activities
and is typically allocated to a single person.
1. The process each manager follows during the life of a project is known as
a) Project Management
b) Manager life cycle
c) Project Management Life Cycle
d) All of the mentioned.
4. Project managers have to assess the risks that may affect a project.
a) True
b) False
7. What is Software?
a) Set of computer programs, procedures and possibly associated documents
concerned with the operation of a data processing system.
b) A set of compiler instructions
c) A mathematical formula
d) None of the above.
1. c 2. d 3. b 4. b 5. c 6. a 7. a 8.d 9. a 10.d
3.14 Solutions to Exercises I & II: -
Exercise-I
Project Planning
Scope Management
Project Estimation
Project Planning
Software project planning is a task performed before the production of software
actually starts. It exists for the sake of software production but involves no concrete activity
that has any direct connection with software production; rather, it is a set of multiple
processes that facilitate software production.
Scope Management
It defines the scope of the project; this includes all the activities and processes that need to be done in
order to make a deliverable software product. Scope management is essential because it
creates boundaries for the project by clearly defining what would be done in the project
and what would not be done. This confines the project to limited and quantifiable tasks,
which can easily be documented and in turn avoids cost and time overrun.
Define the scope
Decide its verification and control
Divide the project into various smaller parts for ease of management.
Verify the scope
Control the scope by incorporating changes to the scope
Managing People
Managing Project
Software size may be estimated either in terms of KLOC (Kilo Lines of Code) or
by calculating the number of function points in the software. Lines of code depend
upon coding practices, while function points vary according to the user or software
requirements.
Effort estimation
The managers estimate effort in terms of personnel requirements and the person-hours
required to produce the software. For effort estimation, the software size should be
known. Effort can be derived from the managers' experience and the organization's
historical data, or software size can be converted into effort by using some
standard formulae.
Time estimation
Once size and effort are estimated, the time required to produce the software can
be estimated. The effort required is segregated into sub-categories as per the
requirement specifications and the interdependency of various components of the
software. Software tasks are divided into smaller tasks, activities or events by a
Work Breakdown Structure (WBS). The tasks are scheduled on a day-to-day
basis or in calendar months. The sum of the time required to complete all tasks in
hours or days is the total time invested to complete the project.
Cost estimation
This might be considered the most difficult estimate of all because it depends on more
elements than any of the previous ones. For estimating project cost, it is required
to consider -
o Size of software
o Software quality
o Hardware
o Additional software or tools, licenses etc.
o Skilled personnel with task-specific skills
o Travel involved
o Communication
o Training and support
Exercise-II
- Consists of 10 to 12 members
- Decisions are made by consensus
- Group leadership responsibility rotates
- No permanent central authority
- Rarely found today; however, it is sometimes used in research organizations.
c) The Hierarchical Team (the controlled decentralized team, and project team):
- The PERT network is continuously useful to project managers prior to and
during a project.
- The PERT network is a graphical representation of the project tasks that helps
to show the task interrelationships.
- It allows scheduling and simulation of alternative schedules.
Lesson No. 4:- Software Measurement and Metrics
4.0 Objectives
4.1 Introduction
4.11 Glossary
4.13 References/ Suggested Readings
4.14 Model questions
4.0 Objective
Student should be able to
4.1 Introduction
As software was suffering from the software crisis, the need of the hour was to define a
methodology for making more accurate schedules and cost estimates, improved quality
products, and better productivity. These can be achieved through an effective strategy of
software management and control measures. Such management and quality control measures
can only be achieved if we have a method of measuring the software. Traditional software
management methodologies are ineffective because software development is extremely
complex. Software engineers require well-defined, reliable measurements of either the process
or the product to help evaluate the development of a product; with these, an effective,
quality-oriented, planned methodology is not difficult to achieve, and hence the role of
software measurement comes into existence.
4.2 Software Measurements
Measurement is an essential component of every discipline, from science to engineering, and
of day-to-day life. Practical implementation of applications would not be possible without
measurements, which give an idea about proper management and planning. Measurements are
a must for planning, monitoring and controlling production from both qualitative and
quantitative perspectives, and for analysing costs and benefits, especially when new techniques
are developed or introduced. Industrial software development requires the application of
various software engineering methods and methodologies that are effective, i.e., they allow
organizations to develop quality-assured software products efficiently and to optimize
production resources to their best.
Reliable - A major characteristic of good measurement is its reliability, i.e. the output or
outcome of a measurement process should be reliable.
Valid - The measurement process must be valid.
Sensitive - The measurement process must show variations in response when there
exists some stimulus or change in situation.
In the context of software engineering, measurement is the ongoing process of defining and
analysing data about the software development process and its products in order to evaluate and
control that process and its products, and to generate meaningful information so as to
improve them. Quality software products cannot be built without measurements, so
measurement is crucial for achieving basic management.
4.2.2 Software Measurement process
The process of measurement should be an orderly method, should be able to quantify and
adjust, and should ultimately lead to improvement of a process. Data is collected based on
known analysis and development issues and queries. Such data is analyzed with respect to the
software development process and products. The major components of any measurement process are:
Use of analytical results to implement process improvements and identify new problems
Fig.4.1 describes the measurement process. The activities required to design a
measurement process using this architecture are:
Developing the measurement process.
Planning the process on projects and documenting procedures.
Process implementation on projects by executing the plans and procedures.
Process improvement by evolving plans and procedures.
Fig.4.1: Software Measurement- Challenges
The vital thing in most software development methods is unambiguous measurement. A
standardized measurement should:
information can be used effectively and can control the development process. There is
hardly any metric that is developed for requirements.
Metrics are applied so as to gain knowledge about process descriptors, from which useful
information can be ascertained.
Some process metrics give indicative information that leads to long-term process quality
improvement.
Project metrics enable a manager to
o Evaluate the status of project under development.
o Track various potential risks arising at different phases.
o Identify problems at early stages
o Adjust various tasks or work flows.
Process Metrics
Private process metrics: metrics that are known only to the individual or team
concerned.
Metrics are used to enable organizations to make various changes to improve the software
process quality.
Metrics should not be used to evaluate the performance of individual personnel.
Statistical software process improvement helps the organization to discover where they are
strong.
Project Metrics
Teams use software project metrics to assess the technical status and workflow of a project.
Project metrics are used to reduce development schedules and delays, mitigate many future
risks, and assess the quality of product.
Every project should measure its inputs and deliverables.
Size-Oriented Metrics
Derived by normalizing (e.g. defects or human effort) associated with the product or project
by LOC that is Lines Of Code.
Size-oriented metrics are widely used, but their validity and applicability are widely debated.
Function-Oriented Metrics
Function points are calculated from direct measures of the information domain of a business
software application and an assessment of its complexity.
Once computed, function points are used like LOC to normalize measures of software
productivity, quality, and other attributes.
The relationship of LOC and function points depends on the programming language used to
implement the software.
Function point and LOC-based metrics are accurate techniques for measuring effort
and cost.
Lines of code and function points are used for estimating and for developing a baseline.
OO Metrics
Quality of a product is viewed from three perspectives (product operation, product revision,
product modification), each with its associated quality factors.
Factors that require measurement include:
o correctness (defects per KLOC)
o maintainability (mean time to change)
o integrity (threat and security)
o usability (easy to learn, easy to use, productivity increase, user attitude)
Defect removal efficiency (DRE) is a measure of the filtering ability of the quality
assurance and control activities as they are applied throughout the process framework:
DRE = E / (E + D)
where E is the number of errors found before delivery and D is the number of defects found
after delivery.
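A small sketch of the computation, with hypothetical defect counts:

```python
def defect_removal_efficiency(errors_before_delivery: int,
                              defects_after_delivery: int) -> float:
    """DRE = E / (E + D): the fraction of all defects filtered out before delivery."""
    e, d = errors_before_delivery, defects_after_delivery
    return e / (e + d)

# Hypothetical counts: 90 errors caught before release, 10 defects found after.
dre = defect_removal_efficiency(90, 10)
print(dre)  # 0.9: 90% of defects were removed before delivery
```

A DRE approaching 1 indicates that the quality assurance activities are catching most defects before the software reaches the user.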
4.4 Effort Estimation
It is an important activity in the early stages of software development. Software size, be it
Function Points or Lines of Code, plays a main role in this process, and forms the basis of
number of metrics to measure various aspects of the software, through the development of a
project. However, measuring the size of software becomes difficult. There are many sizing
measures in practice such as classes, programs, modules and so on, however Lines of Code and
Function Points are most widely used. Following sections explain these two measures in brief.
External Inquiry is a simple process with both input and output that leads to the retrieval of data
from one or more internal logical files and external interface files.
AF = 0.65 + DI/100
where DI is the total degree of influence of the 14 general system characteristics, each rated
from 0 to 5.
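Assuming the usual form of the calculation, the adjusted function point count is the unadjusted count multiplied by this adjustment factor; the counts and ratings below are hypothetical:

```python
# Sketch of the value adjustment: AF = 0.65 + DI/100, where DI is the total
# degree of influence of the 14 general system characteristics (each rated 0-5).
unadjusted_fp = 120  # hypothetical unadjusted function point count
degrees_of_influence = [3, 2, 4, 1, 0, 5, 3, 2, 2, 1, 0, 3, 4, 2]  # 14 ratings

di = sum(degrees_of_influence)        # total degree of influence
adjustment_factor = 0.65 + di / 100
adjusted_fp = unadjusted_fp * adjustment_factor
print(round(adjusted_fp, 1))
```

Because DI ranges from 0 to 70, the adjustment factor ranges from 0.65 to 1.35, so complexity can move the final count roughly 35% in either direction.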
4.6 Lines of Code
Lines of code (LOC) is a software metric used for measuring the length of a program or software
product. LOC measures the effort that will be needed to develop a program, and also serves to
estimate productivity once the software is produced. Software size is measured by the number of
lines of code in one of two ways:
1. Physical LOC is a count of "non-blank, non-comment lines" in the text of the program's source
code.
2. Logical LOC also measures the number of "lines", but its definition differs from language to
language.
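A minimal physical-LOC counter along the lines of definition 1, assuming `#`-style comments (as in Python) purely for illustration:

```python
def physical_loc(source: str) -> int:
    """Count non-blank, non-comment lines ('#' comment syntax assumed)."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        # Skip blank lines and lines that are entirely comments.
        if stripped and not stripped.startswith("#"):
            count += 1
    return count

sample = """
# compute a square
def square(x):
    return x * x

print(square(4))
"""
print(physical_loc(sample))  # 3
```

A real counter must know each language's comment syntax (and handle block comments and comments trailing code), which is exactly why logical LOC definitions differ from language to language.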
(a) Comparison: As the function point metric measures the system from a functional perspective,
it is not only language independent but hardware/software independent. This metric can
be used to compare tools, environments or languages, and to judge whether one
language is more productive than another.
(b) Aid in Monitoring: This metric provides a method to monitor scope creep. FP counts can
be compared after each phase of system design is over: the FP count designed and the FP
count delivered can be compared. With the growth of a project there is scope creep.
(c) Contract Negotiations: It becomes easy for the customer to explain to the vendor the
functionality level and key attributes to be delivered. Function points are often used with
fixed-price contracts, since it becomes clear what will be delivered. For the vendor it becomes
easy to calculate cost and effort by using a metric like function points.
(d) Volatility handling: Function points are calculated directly from the requirements, so they
enable early estimation of project cost, effort and schedule, and show the current status of
requirements completeness. As new features are added, cost, effort and schedule grow
accordingly. If the organization removes some feature, the function point metric is capable of
handling that too and shows the true status.
(e) Historic Data: Once the project size and scope are agreed, the project manager
calculates the appropriate estimate by using data from previous projects. As function points are
independent of language and tools, data from previous projects can be used to produce consistent
outputs, unlike lines-of-code data, which is very much tied to languages and requires many
other parameters to be considered.
(f) Better Communication Management: Function points can greatly enhance communication
with higher management, since they communicate in terms of functionality rather than
processing details, technical issues, or code. Furthermore, function points are easily understood
by the non-technical user. This helps communicate sizing information to a user (or customer) as
well.
4.7.2 Cons
FPA does have many disadvantages. However, organizations can easily overcome these
problems by practicing FPA consistently over a period of time.
(a) Manual Work: Counting the various functions in FPA involves considerable manual work.
(b) Requires Significant Detail: Considerable detail is required for estimating software size in
function points. Information on databases, inquiries, inputs, outputs and even records is
required to perform a function point count.
(c) Requires Experience: Experience is required to handle the details in FPA, as it inherently
requires sufficient knowledge of the counting methods.
(b) Intuitive metric: The LOC metric is an intuitive, physical measure of the size of software,
with a directly visualized effect, whereas a function point is an abstract, derived measure. In
this way LOC lends itself to an easy measurement methodology.
4.8.2 Cons
(a) Missing Accountability: The lines-of-code metric suffers from a basic problem: most
importantly, it measures the productivity of a project by the outcome of only one of its
phases (coding).
(b) Missing Cohesion with Functionality: Effort is much more correlated with functionality,
while functionality is less correlated with lines of code; i.e., developers may implement the
same functionality with a small amount of code.
(c) Experienced Developer: The code written for the same logic may vary from developer to
developer depending on their experience and way of working, which affects the lines-of-code
count. A developer's experience may thus change the line count for the same functionality.
(d) Language Difference: Suppose there are two applications to be developed that provide the
same functionality (screens, reports, databases), the first written in C++ and the other in Perl.
The number of function points would be exactly the same, but the lines of code needed to
develop the same functionality will certainly differ between the two, and hence the effort
required (hours per function point) will also differ.
(e) Enhanced GUI Tools: With GUI-based languages/tools such as Visual Basic, most
development is carried out by drag-and-drop tools, where the programmer often writes
little or no code. Counting lines of code is difficult in this case. Such differences lead to
large variations in productivity metrics across languages, undermining the lines-of-code
measure.
(g) OO Development: In object-oriented methodology, lines of code make little sense,
since everything is treated in terms of objects and classes. FPA is better suited to
object-oriented software development.
(h) Many Languages Issue: In today's world, a single language is no longer used for development.
Often, different languages are used depending upon the requirements. Tracking
and analysing problems then becomes difficult.
Comparison of Function Point and Lines of Code metrics:

Count:
- Function Point: easier to count for any system, whether existing or yet to be developed.
- Lines of Code: difficult to count; counts may be erroneous by about 60-70%.

Estimation:
- Function Point: effort and cost estimation is independent of technology; it depends on
the average productivity of the team. However, function points are not good for measuring
maintenance effort.
- Lines of Code: using LOC for estimating effort and cost makes the estimate dependent on
technology and on the skill set of the person implementing the software, rather than on
productivity.

Selection mechanism for technology:
- Function Point: Function Point Analysis can be used to determine whether a tool is
effective or not within an organization.
- Lines of Code: LOC is difficult and error-prone for technology selection because the
number of lines of code is hard to estimate.

Metrics:
- Function Point: aids in deriving meaningful software metrics such as productivity,
software quality, defect density etc.; these metrics are independent of design and
technology and remain the same.
- Lines of Code: not suitable for metrics such as productivity and software quality, as
the amount of LOC is negatively correlated with design efficiency.
Q4. ____________ and ____________ are the two types of measures of the lines of code metric.
Q5. There are _______ general system characteristics which lead to the Value Adjustment
Factor (VAF) in the function point metric.
Q6. Which of the following does not affect the software quality and organizational
performance?
a) Market
b) Product
c) Technology
d) People
Q11.Which of the following is not an information domain required for determining
function point in FPA ?
a) Number of user Input
b) Number of user Inquiries
c) Number of external Interfaces
d) Number of errors
4.10 Summary
Measurement is an essential component of every discipline, from science to engineering, and of
day-to-day life. A measurement is a number or symbol assigned to an entity through a mapping
that characterizes an attribute. The process of measurement should be an orderly method, able
to quantify and adjust, ultimately leading to improvement of a process. In the context of
software engineering, measurement is the ongoing process of defining and analysing data about
the software development process and its products in order to evaluate and control that process
and its products, and to generate meaningful information so as to improve them. Quality
software products cannot be built without measurements. Process and project metrics
quantitatively measure the inner detail, so as to gain insight into the efficiency of the projects
conducted using various process frameworks. There is a wide variety of size-oriented,
function-oriented, object-oriented and quality-oriented metrics to guide and control the overall
process of software development. Quality of a product is viewed from three perspectives
(product operation, product revision, product modification).
4.11 Glossary
Metric: It can be defined as mapping from the empirical world to relational world.
FP: Function point is a functionality-based metric used, like the LOC metric, to estimate software effort.
Process and Project Metrics: They are applied so as to gain knowledge about process
descriptors
Ans 1:- They are quantitative measures that act as management tools. They offer insight into the
effectiveness of the software process and the projects that are conducted using the process as a
framework. Basic quality and productivity data are collected; the data is analysed and
compared with previous outputs.
Ans 3:- Process metrics are collected across all projects and over long periods of time for making
decisions. The basic concept is to provide techniques that lead to long-term software process
improvement.
Ans 5: - 14
Ans 6:- a
Ans 7:- d
Ans 8: - c
Ans 9:- a
Ans 10: - c
Ans 11:- d
"1. Roger. S. Pressman, Software Engineering - A Practitioner's Approach, 7th Edition, McGraw
Hill, 2010.
3. Naseeb Singh Gill, “Software Engineering: Software reliability, testing and quality, Khanna
Book Publishing, 2011."
Q5. Compare and contrast Lines of Code and Function point metrics in the context of effort
estimation.
Lesson 5: Software Project Planning
Contents
5.0 Objective
5.1 Introduction
5.2 Software project planning
5.2.1 The software project management plan (SPMP) document
5.3 Software decomposition techniques.
5.3.1 Example
5.3.2 Gantt Charts
5.3.3 PERT Charts
5.4 Software sizing
5.5 Problem based estimation techniques
5.5.1 LOC
5.5.2 Function / Feature point metrics
5.6 Project based estimation
5.6.1 Empirical Estimation Technique
5.6.1.1 Expert judgment Technique
5.6.1.2 Delphi cost estimation technique
5.6.2 Heuristic estimation technique
5.6.2.1 Basic COCOMO model
5.6.2.2 Intermediate COCOMO model
5.6.2.3 Complete COCOMO model
5.7 Summary
5.8 Glossary
5.9Answers to self-assessment questions
5.10 References / suggested readings
5.11 Model questions
5.0. Objective
This lesson adds to our knowledge with a basic introduction to software project planning. We will be able to estimate the cost and effort of a software project. The lesson will also enhance our knowledge of decomposing a software product so that project sizing and staffing are done correctly.
5.1. Introduction
Once we are done with the feasibility study and the project is found feasible, the planning of the software project starts. The head of the project, i.e. the software project manager, takes the initiative and starts planning. All these activities should be done before we actually start working on the project, just as we plan our day the night before and expect everything to go as planned. A lot of factors need to be considered in a software project plan; we will study them in this lesson.
A lot of new terms are added in the diagram; all of these terms are related to software project planning and will be discussed in further topics.
The very basic & necessary thing to know is that the software project management process begins with project planning, and the objective of software project planning is to provide estimates that tell the manager about resources, cost and schedules.
Here we will discuss the things which are necessary in software project planning.
Scheduling: How to break down the work, and what resources & manpower are needed to complete the project.
Staffing: How many members would be enough in the team to complete the project? What will be the team structure?
Risk Resolution: If at any time a risk occurs, how will it be resolved & what are our plans to solve it?
Miscellaneous plans like quality assurance, installation, maintenance etc.
Project planning has to be done with a lot of care & full attention, because anything that goes wrong results in dissatisfaction of the customer. In today's world, where "customer is king", he/she has a lot of software companies to choose from, but once the customer is dissatisfied, the company will not get work next time. So the software project manager needs a lot of information and knowledge so that he/she is able to complete the software project with accuracy.
Once software project planning has been completed, the manager has answers to a lot of questions, which the SPMP document should discuss.
Check your progress/ Self-assessment questions
Exercise-I
a. Quality assurance
b. Installation
c. Delivery
d. All of above
Once the task is decomposed into smaller modules, the manager looks for dependencies among the modules. Say we have modules A, B & C that need to be done for completion of the project & all the modules are dependent on each other; then module A should be completed first so that the work of module B can start, followed by module C.
Later on, resources are allocated to the modules, which is done with a Gantt chart; once that is done, a PERT (Project Evaluation and Review Technique) chart representation is developed.
5.3.1. Example
Here we decompose the bigger activity into multiple smaller modules & each module is assigned a particular time period to complete, as shown in the diagram.
Arrows represent the interdependencies and the thicker arrows represent the critical path. From the diagram we can estimate the following things.
The critical tasks are done with zero slack time, as we can calculate from the figure.
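The critical-path computation described above can be sketched in code. The module names, durations and dependencies below are hypothetical (the lesson's actual diagram is not reproduced here); the idea is only that the project duration is the finish time of the longest dependency chain.

```cpp
#include <algorithm>
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Hypothetical module network: each module has a duration (in days)
// and a list of modules that must finish before it can start.
struct Module {
    int duration;
    std::vector<std::string> depends_on;
};

// Earliest finish time of one module. Assumes the dependency graph
// is acyclic, as a decomposed project plan should be.
int earliestFinish(const std::map<std::string, Module>& plan,
                   const std::string& name) {
    const Module& m = plan.at(name);
    int start = 0;
    for (const std::string& dep : m.depends_on)
        start = std::max(start, earliestFinish(plan, dep));
    return start + m.duration;
}

// Project duration = the latest earliest-finish over all modules,
// i.e. the length of the critical path.
int projectDuration(const std::map<std::string, Module>& plan) {
    int finish = 0;
    for (const auto& entry : plan)
        finish = std::max(finish, earliestFinish(plan, entry.first));
    return finish;
}
```

For example, if module A takes 5 days, B takes 3 days after A, and C takes 4 days after B, the critical path A-B-C gives a 12-day project.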
5.3.2. Gantt chart
The resources required by the modules could be staff, hardware & anything else. A Gantt chart represents the resources allocated to the modules. The name Gantt chart was derived from its developer, Henry Gantt. Gantt charts are very useful in resource planning.
Consider the previous example: the Gantt chart is based on bars, where the shaded part represents the estimated time required to complete the module & the white part represents the slack time.
PERT (Project Evaluation & Review Technique), instead of showing each module by a different bar, represents the modules statistically. We consider three time estimates in PERT charts: minimum, likely & maximum. Each module will have some minimum time to get completed, some likely time and some maximum time. Taking the same example as shown in the figure, each module has three variants of time representing its time limits.
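The three estimates can be combined into a single expected time for each module. The (minimum + 4 x likely + maximum) / 6 weighting below is the standard PERT formula; the lesson itself only names the three estimates.

```cpp
// Expected completion time of a module from its three PERT estimates.
// Standard PERT weighting: the likely time counts four times as much
// as the two extremes.
double pertExpectedTime(double minimum, double likely, double maximum) {
    return (minimum + 4.0 * likely + maximum) / 6.0;
}
```

For a module estimated at 2, 4 and 12 days, the expected time is (2 + 16 + 12) / 6 = 5 days.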
Every one of us knows the units for measuring current, room temperature, thickness, length and a lot of other things, but does anyone know the unit to measure software size? Here the question arises: how can we estimate the size of the software so that we can tell the customer about cost & resources?
Let us consider a situation: say we are building a new house & the contractor tells us that the approximate cost required to build it is 3 lac rupees. How does the contractor get to know the cost? The contractor just gives us a rough review by calculating staff, resources, renovation and all other things. The cost may grow up to 3.25 lacs or even 3.50 lacs and we will be all right with that, but what if this cost comes to 4 lacs? Definitely this will pinch us, because we have made up our budget according to 3 lacs. Anyhow we can tolerate up to 3.50 lacs but not 4 lacs. Obviously we will have a word with the contractor & the contractor will give us some reasons why the cost has risen; here we are the customer and 1 lac extra is pinching us.
Take the second side of the coin: you are a software project manager and, with your experience, you tell the customer that the software project will cost up to 50,000 rupees, so a figure is clear in the customer's mind; but if it crosses 60,000, definitely the customer will have a problem with it.
So it is understood that software sizing should be done with expertise, and a lot of things are to be considered in the process.
To accurately estimate the software size we have some techniques, as under.
LOC (Lines of Code):
We can calculate the software size by its lines of code, which will tell us the effort needed to prepare the software product. But some programmers add a lot of extra lines for comments and the starting of modules, which will not give us the proper effort and cost to estimate the software size; so a common criterion for writing lines of code is made up, by which we can calculate the effort and cost.
We can estimate the project size from projects of the same type which were made in the past. Say we had made a project for a school management system; we know the size, effort & cost of that project, and now we get a new project for a college management system which is new to us, but we can estimate its size by studying the previous project.
Here we call the experts & show them the SRS (software requirement specification), and based on their experience and judgment we calculate the software size.
This approach overcomes the problems of the expert judgment technique, where everything was open. Here each expert is called and given a form based on the project. A meeting is carried out, coordinated by a coordinator. The experts fill in the form and then, based on their forms, the coordinator estimates the software size.
Check your progress/ Self-assessment questions
Exercise-II
_____________________________________________________________________
___
5.5. Problem based estimation techniques
Based on the problem we can estimate the size of the software project, even though we know that the exact project size and cost cannot be accurately measured from the problem alone. We have some methods to find out the project cost & size based on the problem; we will discuss these points one by one.
The very simple & easiest method is lines of code. This is a very popular approach used by software project managers. In this approach we count the number of lines of code which the programmers have written; based on these lines of code we calculate the size of the project. The following things are taken into consideration for LOC.
Since it is a very simple method, a lot of problems may occur with it. When the software project manager divides tasks into modules and then modules into sub-modules, different software engineers work on these sub-modules & each engineer has his/her own style of work.
Approach 1:
//line 1 int a = 1;
//line 2 while (a <= 10)
//line 3 {
//line 4 cout << a;
//line 5 cout << endl;
//line 6 a++;
//line 7 }
//line 8 getch();

Approach 2:
//line 1 for (int a = 1; a <= 10; a++)
//line 2 { cout << a << endl; }
//line 3 getch();
These are two different approaches for the same output: one approach took 8 lines & the second approach took 3 lines, so the style of programming needs to be considered.
Problem size does not accurately tell the complexity of the project; it only counts the lines of code & not the functions / features of the project.
Programming language is another factor, because different programming languages have different approaches: where a code can be written in 2 lines in one language, it is possible that another language takes 8 lines for the same code.
For example: one simple hotel management software only books rooms and makes the bill on check-out, but another not only performs the basic functions of a hotel but also entertains with games, good graphics, music and animation, gives information about the staff and has a lot of other features; that will definitely increase its cost & it will be loved by hotel owners, and more demand for this software will come up.
Summing up, a software project with more features will require more effort. So in this approach the cost depends on all the features & not only on the LOC.
Making an estimate for the project is the basic planning activity, which tells the software project manager about the size & effort needed to develop the project. These estimates not only help in finding the cost of the project but are also useful in finding resources & planning the schedule. Here we have the broad categories of project estimation techniques.
It is the most widely used project estimation technique. This approach is very popular because in this approach we call an expert & tell him/her about our project. The expert studies the SRS document & then makes an educated guess of the problem size. Different experts follow different approaches to calculate the cost and to find out the project size. For example, some experts may consider the user interface & features of the project to make a guess about the cost & some experts may not consider these things.
The Delphi cost estimation approach works in a different way. In this technique we call a team of experts and a coordinator, and each of them is provided with the SRS. After studying the SRS they are given a form based on the project, which may include software features, LOC, UI & some other things. All the experts fill in their forms accordingly. The information filled in these forms is then compiled by the coordinator, who makes an estimate of the software project size and cost.
Proposed by Boehm in 1981; based on this model, software projects are divided into three categories.
Boehm proposed that while developing a software product we should consider not only the characteristics of the product but also the staff (development team) and the development environment. According to this model, projects can be classified as follows:
Organic: A type of software project which is well understood; the size of the development team is small and the team is well experienced in developing these types of projects.
Semidetached: This type of software project is a little similar to one which was developed earlier by the team. The team consists of experienced as well as inexperienced staff, and team members are familiar with some aspects of the project.
Embedded: A project with a complex hardware situation, a totally new project; team members are experienced.
5.6.2.1. Basic COCOMO: The basic COCOMO model provides an approximate estimate of the project through the following expressions.
Effort = a1 x (KLOC)^a2 PM
Tdev = b1 x (Effort)^b2 Months
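The basic COCOMO expressions can be sketched in a few lines. The constant values below are the commonly published Boehm (1981) figures for organic mode (a1 = 2.4, a2 = 1.05, b1 = 2.5, b2 = 0.38); the lesson itself leaves a1, a2, b1, b2 symbolic.

```cpp
#include <cmath>

// Effort = a1 * (KLOC)^a2, in person-months (PM).
// Organic-mode constants assumed (Boehm, 1981).
double organicEffortPM(double kloc) {
    return 2.4 * std::pow(kloc, 1.05);
}

// Tdev = b1 * (Effort)^b2, in months.
// Organic-mode constants assumed (Boehm, 1981).
double organicTdevMonths(double effortPM) {
    return 2.5 * std::pow(effortPM, 0.38);
}
```

Under these assumptions, a 32 KLOC organic project works out to roughly 91 person-months of effort and about 14 months of development time.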
The cost drivers considered fall into four areas:
Product
Computer
Personnel
Development environment
In the complete COCOMO model, a large system is divided into subsystems, for example:
Database part
Graphical user interface part
Communication part
Q3. Which of the following formulas is correct to calculate effort in the basic COCOMO model?
a. Effort = a1 x (KLOC)^a2 PM
b. Effort = a1 x (LOC)^a2 PM
c. Effort = a2 x (KLOC)^a1 PM
d. None of these
5.7. Summary: In this lesson we have learnt that software planning is an important step before the development of software starts. A lot of exercises need to be done, like scheduling, staffing, sizing, project estimates etc., to estimate the effort and cost to develop the software product.
We have also learnt that while developing software, not only the project is considered; the development team and development environment are also taken care of.
5.8 Glossary
Delphi cost estimation: This approach overcomes the problems of the expert judgment technique, where everything was open. Here each expert is called and given a form based on the project. A meeting is carried out, coordinated by a coordinator. The experts fill in the form and then, based on their forms, the coordinator estimates the software size.
SPP: The objective of software project planning is to provide estimates that tell the manager about resources, cost and schedules.
Gantt chart: Represents the resources allocated to the modules.
COCOMO model: COnstructive COst MOdel.
Exercise – I
Answer 1. There is always risk in software development because the client may ask for modifications in the software at any point. If something goes wrong, the software manager tries to overcome the risk and reduce it as much as he/she can. He/she identifies the risk, then estimates the risk and resolves it.
Answer 2. D
Answer 3. A
Exercise- II
Answer 1. It is the most widely used project estimation technique. This approach is very popular because in this approach we call an expert & tell him/her about our project. The expert studies the SRS document & then makes an educated guess of the problem size. Different experts follow different approaches to calculate the cost and to find out the project size. For example, some experts may consider the user interface & features of the project to make a guess about the cost & some experts may not consider these things.
Answer 2.
Answer 3. The resources required by the modules could be staff, hardware & anything
else. Gantt chart represents the resources allocated to the modules. The name of Gantt
chart was derived from its developer Henry Gantt. Gantt chart is very useful in resource
planning.
Exercise- III
Answer 1.
Answer 2.
The Delphi cost estimation approach works in a different way. In this technique we call a team of experts and a coordinator, and each of them is provided with the SRS. After studying the SRS they are given a form based on the project, which may include software features, LOC, UI & some other things. All the experts fill in their forms accordingly. The information filled in these forms is then compiled by the coordinator, who makes an estimate of the software project size and cost.
Answer 3. A
2. Fundamentals of software Engineering, Rajib Mall, Second Edition, Prentice-Hall of
India Pvt. Ltd. New Delhi.
1. What are the relative advantages of using LOC and feature point metrics in problem-based estimation?
2. What is egoless programming? How can it be realized?
3. Explain the COCOMO model in detail.
4. Suppose you are developing a software product in organic mode. You have estimated the size of the product to be 100,00 lines of code. Compute the nominal effort and development time.
5. Explain various types of project estimation techniques.
Lesson 6: Cost Estimation Models
Contents
6.0 Objectives
6.1 Introduction
6.2 Software effort estimation models
6.3 Cost estimation
6.4 Constructive cost model (COCOMO)
6.4.1 Basic COCOMO
6.4.2 Intermediate COCOMO
6.4.3 Detailed/Embedded COCOMO
6.5 Software equation
6.6 Summary
6.7 Glossary
6.8 Answers to self-assessment questions
6.9 References/Suggested Readings
6.10 Model Questions
6.0 Objectives
Understanding the basics of software cost estimation and studying the relation between the price and development cost of the software.
Defining the metrics used to measure software.
Elaborating the structure of software estimation.
Implementing the constructive cost model for cost estimation.
Understanding software equations.
6.1 Introduction
With the change in time, the lines of code of software have grown from thousands to millions, software development teams have grown from a single person to a set of experts, and the completion of projects has stretched from months to years. Mistakes in cost estimation create the gap between software profit and loss.
The project starts with a continuous flow of instructions and finishes when the resources and cash are fully used. Complete information about resources, cash, planning and control is under the supervision of the project manager, who uses it to achieve the objectives of the project. The objective of completing the project is achieved through complete teamwork, and every member of the team must understand his/her work and perform the task with full dedication.
Once the estimates of the project are done, they are not final. As the project develops, the estimates also vary as per the project requirements. The prime concern is to estimate the tasks needed to complete the project; the cost of the project is not the initial issue. As the project progresses, more information on project and process requirements is produced.
Software cost estimation is a complex activity that requires knowledge of key attributes of the project for which the estimate is being constructed. The outcome of the project depends upon the accuracy of, and relationships among, the parameters. The parameters that reduce the gap between profit and loss are:
- Size of source code, manuals, and specifications: Simple code is easy to process and to remove errors from. Complex code takes more time and effort to understand and update, which results in a rise in cost and effort.
- Rate at which requirements change: Frequent changes in the project require corresponding updates in the software code. Frequent updates make the difference between profit and loss.
- Set of project activities: The activities of the software change with changes in requirements. These activities stabilize only at the end of the project, and estimating the cost at the end of the project is not practical.
- Comparing the cost and schedule of previous projects with new projects: The cost and schedule calculations of a new project can be compared with similar past projects to find a positive or negative comparison.
- Simple decomposition technique to generate project cost and effort: If a huge project is divided into different modules, then cost estimation is performed on the individual modules; at the end all the modules are integrated and the estimates are summed.
Software estimation models are calculated by using the values of LOC and FP.
E = A + B x (Size)^C
A, B and C are empirically derived constants that depend upon the size and type of software development, and 'Size' may be the code size (KLOC) or function points (FP) of the software.
To calculate the effort estimate for a project with 25,000 (25K) LOC, with the constants A = 5.5, B = 0.73 and C = 1.16:
E = 5.5 + 0.73 x (25)^1.16 = 5.5 + 30.5
E = 36 person-months
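The worked example above can be checked with a few lines of code; the constants passed in the usage note are the ones given in the text (A = 5.5, B = 0.73, C = 1.16).

```cpp
#include <cmath>

// Generic effort estimation model: E = A + B * (Size)^C,
// with Size in KLOC and E in person-months.
double estimateEffortPM(double sizeKloc, double A, double B, double C) {
    return A + B * std::pow(sizeKloc, C);
}
```

With the text's constants, estimateEffortPM(25.0, 5.5, 0.73, 1.16) comes to roughly 36 person-months, matching the calculation above.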
Cost estimation is the approximate measurement of the cost of the project in terms of the amount of time a person works, i.e. effort. The cost estimate will determine which features can be included in the project. The risk of the project is reduced when the most important features are included at the beginning, because the complexity of a project increases with its size, and the probability of mistakes rises with an increase in project size. Cost estimation has a direct effect on the cost and schedule of the project.
Total software project cost = software development labour cost + other labour cost + non-labour cost
Non-labour cost
Project approval, project management, and project team understanding are all supported by estimating the cost of the software:
- Project approval: For every project there must be a decision by the organization to undertake the project. Such decisions require an estimate of the money and resources required to complete the project.
- Project management: Project managers are responsible for the planning and control of the project. Both activities require an estimate of the activities required to complete the project and the resources required for each activity.
- Project team understanding: It is necessary that each project team member understands his/her individual role in the project and the overall activity of the project. The project tasks are generated from the cost estimation.
Que1. Define the various variables used in estimating software cost. How is each individual variable defined?
Ans:
_____________________________________________________________________
________________________________________________________________________
__
Ans:
______________________________________________________________________
_____________________________________________________________________
___
The COCOMO model allows the software project manager to use regression formulae to estimate project cost and duration from current and previous project data. These data are analyzed to discover the formulae that best fit the observations.
The COCOMO model is explained at three levels; each level corresponds to a more detailed analysis of cost estimation. The first level provides an initial rough estimate, the second level is modified using a number of project multipliers, and the third level produces estimates for different phases of the project.
1. Basic COCOMO
2. Intermediate COCOMO
3. Embedded COCOMO
MODES
Organic Mode
Designed in a simple environment using easy tools
Copies the features of previous projects
Little innovation required
Semidetached Mode
In between the organic and embedded modes
Modified by using a number of project multipliers
Embedded Mode
Strong, rigid interface requirements
Densely innovative
Holds all the features of the intermediate version
The basic COCOMO model estimates the software development effort by using a single variable, Delivered Source Instructions (DSI), and three software development modes. It is used for small projects where only a few cost drivers are concerned; the cost drivers depend mainly upon project size. It is useful when the team size is small.
The effort (E) and schedule (S) of the project are calculated as follows:
Effort E = a x (KDSI)^b x EAF, where KDSI is the number of thousands of delivered source instructions, and 'a' and 'b' are constants.
Schedule S = c x (E)^d, where E is the effort and 'c' and 'd' are constants.
EAF is called the Effort Adjustment Factor; it is 1 for basic COCOMO, but its value may vary.
Basic COCOMO is good for quick, early, rough order-of-magnitude estimates of software costs.
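The formulas can be sketched directly in code. The lesson leaves a, b, c, d symbolic; the values used in the usage note are the standard Boehm figures for embedded mode (a = 3.6, b = 1.20, c = 2.5, d = 0.32), which reproduce, within rounding, the 231 MM and 14.25-month figures used in the average-staffing example below.

```cpp
#include <cmath>

// Basic COCOMO effort: E = a * (KDSI)^b * EAF  (person-months).
double basicEffort(double kdsi, double eaf, double a, double b) {
    return a * std::pow(kdsi, b) * eaf;
}

// Basic COCOMO schedule: S = c * (E)^d  (months).
double basicSchedule(double effort, double c, double d) {
    return c * std::pow(effort, d);
}
```

With those embedded-mode constants, basicEffort(32.0, 1.0, 3.6, 1.20) gives about 230 person-months, and basicSchedule of that effort gives about 14.3 months.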
Average staffing = effort / schedule = 231 MM / 14.25 months
= 16 SSP (average staff size for the project)
The intermediate model is used for medium-sized projects. The cost drivers are intermediate between the basic and advanced COCOMO models; they depend upon product reliability, database size, execution and storage. The size of the project development team is medium. There are four areas of cost drivers used in the intermediate COCOMO model:
Product itself
Computer hardware/software
Personnel
Project itself
Product Attributes
RELIB --- Required Software Reliability
DATA --- Data Base Size
CMPLX --- Software Product Complexity (complexity of product)
Computer Attributes
TIME --- Execution Time Constraint
STORG --- Main Storage Constraint
Personnel Attributes
APEXP --- Applications Experience
PCAP --- Programmer Capability
PLE --- Programming Language Experience
Project Attributes
MODP --- Modern Programming Practices
TOOL --- Use of Software Tools
Cost driver (area)        Very Low   Low    Nominal   High   Very High
VEXP  (Personnel)           1.21     1.10     1       0.95
PLE   (Personnel)           1.14     1.07     1       0.95
MODP  (Project)             1.24     1.10     1       0.91     0.82
TOOL  (Project)             1.24     1.10     1       0.91     0.83
Table-4: List of cost drivers used in the Intermediate COCOMO model
Size Strength
Low 5KDSI
Nominal 8KDSI
High 32KDSI
Organic: E = EAF x 3.2 x (KDSI)^1.05, TDEV = 2.5 x (E)^0.38
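The organic-mode formula can be checked directly in code; for 12 KDSI with EAF = 1 it gives the 43.5 person-months and 10.5 months used in the worked answer at the end of this lesson.

```cpp
#include <cmath>

// Intermediate COCOMO, organic mode:
//   E = EAF * 3.2 * (KDSI)^1.05   (person-months)
double organicEffort(double kdsi, double eaf) {
    return eaf * 3.2 * std::pow(kdsi, 1.05);
}

//   TDEV = 2.5 * (E)^0.38   (months)
double organicTdev(double effort) {
    return 2.5 * std::pow(effort, 0.38);
}
```

For example, organicEffort(12.0, 1.0) is about 43.5 person-months, and organicTdev of that effort is about 10.5 months.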
The intermediate model can be applied across the entire software product for easy, rough cost estimation during the early stages, or at the software product component level for more accurate cost estimation in the more detailed stages.
A major project is a combination of various mini projects. The drawback of the basic and intermediate COCOMO models is that they consider a software product as a single homogeneous entity, whereas large systems are actually made up of several smaller subsystems. The detailed COCOMO model differs from the intermediate model in this respect: it uses effort multipliers for each phase of the project, and the cost of each subsystem is estimated separately. This approach reduces the margin of error in the final estimate.
The detailed (embedded) COCOMO model holds all the characteristics of the intermediate model. It uses different effort multipliers for each cost driver attribute in each phase (analysis, design, test, etc.). In the detailed model the whole software is divided into different modules, the intermediate COCOMO model is applied to the different modules to estimate their efforts, and the results are then summed up.
In the detailed COCOMO model, the effort is calculated as a function of program size and a set of cost drivers given for each phase of the software life cycle. The various phases of the detailed COCOMO model are:
The Indian Railways reservation system is a distributed information system, with offices at several places across India, which accesses its data by interacting with different subsystems, databases and communication media. The processing of such a distributed information system project may be distributed across different modes of COCOMO: the communication subsystem can be processed according to the embedded mode, the database subsystem according to the semidetached mode formulae, and the graphical user interface (GUI) subsystem according to the organic mode formulae. The costs of these three subsystems can be estimated separately and summed up to give the overall cost of the system.
1. At the early stage of the project, it is difficult to make estimates.
2. Delivered source instructions (DSI) measure length, not size.
3. It is difficult to collect previous data for current use.
Based on the collected data, the estimation model can be described in the form
E = [LOC x B^0.333 / P]^3 x (1 / t^4)
where E is the effort (in person-months or person-years), B is the special skills factor, P is the productivity parameter, and t is the project duration in months or years.
The software equation has two independent parameters: (1) LOC, the estimated size, and (2) the project duration t.
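The software equation can be sketched as code. Its best-known property is the strong schedule/effort trade-off: because of the 1/t^4 term, doubling the allowed duration cuts the required effort by a factor of 16. The numeric inputs in the usage note are illustrative only, not taken from the lesson.

```cpp
#include <cmath>

// Software equation: E = [LOC * B^(1/3) / P]^3 * (1 / t^4)
// B = special skills factor, P = productivity parameter,
// t = project duration.
double softwareEquationEffort(double loc, double B, double P, double t) {
    double core = loc * std::pow(B, 1.0 / 3.0) / P;  // skills-adjusted size/productivity ratio
    return std::pow(core, 3.0) / std::pow(t, 4.0);
}
```

For instance, with illustrative values LOC = 33200, B = 0.28 and P = 12000, stretching t from 1.0 to 2.0 reduces E sixteen-fold, which is exactly the trade-off the equation is meant to express.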
Self Assessment Questions/Exercise -I
___________________ , ______________________ ,
______________________
_______________________________________
_______________________________________
_______________________________________
6.6 Summary
The contract for software usually depends upon the price/cost of the software. Various techniques are prepared for software cost estimation; some of them are tried, and finally the best is applied. An insufficient amount of information undermines the software estimates. Factors that affect software productivity are the size of the project, the working environment, and the tools used in project development.
Dividing project effort by the designed schedule does not give the number of people required for the project development team. The project manager has to use a well-constructed model to properly estimate the staff and time required for project development. The COCOMO model is a well-designed algorithmic model that takes various project attributes into account when formulating cost estimates.
6.7 Glossary
Function point (FP): It is a unit of measurement of software in software engineering to
express an amount of functionality provided to a user. Function point is a tool to measure
the size of the software.
Line of code (LOC): This metric measures the size of the software by counting the number of lines in the software source code.
COCOMO: COnstructive COst MOdel; allows the software project manager to use regression formulae to estimate project cost and duration from current and previous project data.
Basic COCOMO: Estimates the software development effort by using a single variable, Delivered Source Instructions (DSI), and three software development modes.
Intermediate COCOMO: Used for medium-sized projects. The cost drivers are intermediate between the basic and advanced COCOMO models.
Detailed/Embedded COCOMO: Uses effort multipliers for each phase of the project. The cost of each subsystem is estimated separately.
Ans 4: Basic
Intermediate
Detailed
6.9 References/ Suggested Readings
Q1. Explain briefly the various variables used in estimating software cost.
Q2. Describe the various parameters that reduce the gap between the profit and loss of the software.
Q3. What is the constructive cost model? Explain the various levels of the COCOMO model.
Q4. Calculate the effort and schedule using the basic and intermediate levels (using organic software with the product attribute complexity) for projects of 12000 DSI and 36000 DSI using the COCOMO model.
3. Non-labour cost
Support and services provided in project development, such as workstations, ground equipment, network and phone charges.
Training charges.
Ans-2: The parameters that reduce the gap between benefit and loss are:
Ans-3: The COCOMO model is a model that allows one to estimate the cost, effort, and schedule when planning a new software development activity. The various levels of COCOMO are:
Ans:4.
Intermediate COCOMO model
            DSI = 12000           DSI = 36000
Effort      43.5 person-months    158.5 person-months
Schedule    10.5 months           17.3 months
Lesson- 7 System Analysis
Structure of the lesson
7.0 Objective
7.1 Introduction
7.2 System analysis
7.3 Requirement analysis
7.4 SRS Document
7.5 Structure of SRS Document
7.6 Structured Analysis
7.6.1 Principles of structured analysis
7.6.2 Data Flow Diagram (DFD)
7.6.3 Data dictionary
7.6.4 ER Model and ER Diagram
7.7 Summary
7.8 Glossary
7.9 Answers to check your progress/self assessment questions
7.10 References/ Suggested Readings
7.11 Model questions
7.0 Objective
After studying this lesson, students will be able to:
1. Explain the need of requirement analysis.
2. Discuss the importance of SRS document.
3. Describe the structure of SRS.
4. Define the concept of structured analysis.
5. Explain various tools used for structured analysis.
7.1 Introduction
It is important to understand the requirements of the customer thoroughly before starting the development process. Based on the customer requirements, a document is prepared and passed to the design and implementation teams. A bad specification of requirements can result in a lot of chaos in the end. Changes in the requirements are difficult to manage, especially if those changes are due to bad requirement analysis. This lesson focuses on the need for conducting a proper requirement analysis and how it assists the design and implementation effort.
7.2 System analysis
The system analyst is responsible for conducting the system analysis. Systems analysis refers to the process of collecting relevant data, understanding the processes and activities involved, identifying problems and recommending feasible solutions for improving the functionality of the system. It involves the study of business processes, gathering operational data, understanding the data flow, finding out system weaknesses and evolving solutions to overcome them. It also involves the decomposition of complex business processes into sub-processes. The result of this process is a logical system design. Systems analysis is an iterative process that continues until a preferred and acceptable solution emerges.
perform. It is completed at the end of the requirement phase of the software development life cycle. The system analyst is responsible for developing the system requirement specification document after gathering requirements from all stakeholders. Clients submit their requirements written in natural language, and the system analyst documents the requirements received from the client in technical language. ER diagrams and data flow diagrams make it easy for the development team to understand what is expected from them.
The SRS document captures the system analyst's understanding of the prospective system requirements and dependencies before the actual design or development work.
Need of SRS
It represents the client requirements in a structured form to the design and development
team.
Many people with a variety of backgrounds are involved.
Proper documentation results in a single interpretation, which is vital before the design and
development phase.
Requirements change over time; it is important to incorporate them into the SRS later
and analyze whether those changes are still feasible.
Characteristics of a good SRS
Complete
Consistent
Correct
Modifiable
Traceable
Unambiguous
Valid
Verifiable
1. Introduction
1.1 Purpose of the requirements document
1.2 Intended Audience and Reading Suggestions
1.3 Definitions, Acronyms, and Abbreviations
1.4 Project Scope
Its functions and how it interacts with other systems.
2. System Models:
One or more models clearly showing the relationship between the system components and the
relationship between the system and its environment.
The models include the following:
(a) Object Models
(b) Dataflow Models
(c) Semantic Models
(d) Data Dictionaries
3. Functional Requirements:
These define the functions of the system and its components.
4. Non-functional Requirements
Non-functional requirements are requirements that specify criteria that can be used to judge the
operation of a system, rather than specific behaviours.
4.1 Performance Requirements
4.2 Safety Requirements
4.3 Security Requirements
4.4 Software Quality Attributes
5. Motivation and assumptions:
Assumptions about the system and its environment, anticipated future changes, the reasons for
decisions, etc.
6. Detailed Requirements Specifications:
For every task/function required of the system; may include interfaces to many parts of the
process.
7. Other Requirements:
Definitions of all technical terms used in the document. No assumption should be made about the
reader having expertise or experience either in the problem domain or in aspects of computer
design/development.
A Data Flow Diagram (DFD) helps to build a logical system model that familiarizes the user with
the system processes and the interrelation between those processes before implementation.
A system consists of a number of functions. Following is a list of basic symbols used in a DFD;
these symbols represent the functions performed by a system and the flow of data between those
functions.
A labelled circle is used to represent a transformation or function.
An arrow is used to specify the direction of data flow, and its label identifies the data.
Rectangles are used to indicate the source and destination of data.
Parallel lines are used to indicate permanent storage, such as files, databases, etc.
A DFD with a single bubble or circle is called the context-level DFD, and it specifies the highest
level of abstraction. Initially a DFD consists of a single bubble only. The DFD is then
decomposed into child DFDs, and this process is called levelling. The decomposition is repeated
for the child DFDs until the data flow for the software process is clearly understood.
Observe the figure above: both functions are directly connected by a data flow. It means that the
'validate data' function can begin only after the 'read data' function has successfully supplied data
to it. This type of flow is called synchronous data flow.
In this figure, the two functions are connected through a data store and hence the two functions
are independent. This type of flow is called asynchronous data flow.
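The two kinds of flow can be contrasted with a small Python sketch. The function names mirror the 'read data' and 'validate data' bubbles above; using a deque as the data store is an assumption made for illustration.

```python
from collections import deque

def read_data(raw_lines):
    """'read data' bubble: turn raw input into records."""
    return [line.strip() for line in raw_lines]

def validate_data(records):
    """'validate data' bubble: keep only non-empty records."""
    return [r for r in records if r]

# Synchronous data flow: a direct arrow between the bubbles, so
# validate_data can begin only once read_data has supplied its output.
result_direct = validate_data(read_data([" a ", "", " b "]))

# Asynchronous data flow: the bubbles communicate through a data store,
# so the producer and consumer run independently of each other.
store = deque()
store.extend(read_data([" a ", "", " b "]))   # producer fills the store now
result_store = validate_data(list(store))     # consumer drains it whenever

print(result_direct)  # ['a', 'b']
print(result_store)   # ['a', 'b']
```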
Q5. ________________________________ refers to top-down decomposition of a set of high-
level functions and to represent them graphically.
Q6. Define DFD.
___________________________________________________________________________
____________________________________________________________________________
___________________________________________________________________________
Composite data items are defined using the following data definition operators:
'+' denotes composition of two data items. For example, X + Y represents the composition of
data items X and Y.
'[,]' represents selection. For example, [X, Y] represents the occurrence of either X or Y.
'()' denotes an optional data item that may or may not appear. For example, X + (Y)
represents the occurrence of either X or X + Y.
'{}' represents iterative data definition. For example, {name}5 represents five name data
items, and {name}* represents zero or more instances of the name data.
'=' represents equivalence. For example, X = Y + Z means that X is composed of Y and Z.
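These operators map naturally onto checks in an ordinary programming language. The sketch below uses an invented data-dictionary entry, name = first + (middle) + last + {alias}*, so all field names are illustrative and not from the text.

```python
def valid_name(record):
    """Check a record against: name = first + (middle) + last + {alias}*."""
    return (
        isinstance(record.get("first"), str)           # first   '+' composition
        and isinstance(record.get("last"), str)        # last    '+' composition
        and (record.get("middle") is None
             or isinstance(record["middle"], str))     # (middle) optional item
        and all(isinstance(a, str)
                for a in record.get("alias", []))      # {alias}* zero or more
    )

print(valid_name({"first": "Ada", "last": "Lovelace"}))   # True
print(valid_name({"first": "Ada", "last": "Lovelace",
                  "alias": ["AAL", 7]}))                  # False (7 not a string)
```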
The ER diagram is a tool for visual representation of the entity-relationship model. Key symbols
used in an ER diagram:
Rectangle- The rectangle symbol is used to represent entities.
Diamond shaped box- It is used to represent the relationship between two entities.
Advantages of ER Diagram:
Following are some of the key advantages of the ER Diagram:
1. An ER Diagram provides a visual representation of the entity relationship model. It is easy to
analyze the relationship between the entities with the help of an ER diagram. It focuses on the
data flows and interactions between the various entities of the database.
2. The ER Diagram is an effective tool for communicating the key entities and their relationships
in a database.
3. The ER Diagram is simple to understand. Non-technical end users can easily understand the
workings of, and relationships in, the database.
4. An ER Diagram can make use of already existing ER models. The ER Diagram is a blueprint
of the database.
Check your progress/ Self assessment questions- 3
Q8. Define Data Dictionary.
___________________________________________________________________________
____________________________________________________________________________
___________________________________________________________________________
7.7 Summary
The system analyst is responsible for conducting the system analysis. Systems analysis refers to
the process of collecting relevant data, understanding the processes and activities involved,
identifying problems and recommending feasible solutions for improving the functionality of the
system. A formal model of the system helps to detect subtle anomalies and inconsistencies which
may otherwise go unnoticed. The SRS document records the system analyst's understanding of
the prospective system requirements and dependencies before the actual design or development
work. Structured analysis refers to the top-down decomposition of a set of high-level functions
and their graphical representation. The Data Flow Diagram specifies the various
activities or functions to be performed by the system, and the data interchange between these
functions. A data dictionary is used to list all data items that appear in a DFD of a system, i.e. all
the data flows and data stores appearing on the DFD. It is also used to list the purpose of each
data item along with the definition of all composite data items. Typical information in a data
dictionary includes Name, Alias, Data Structure, Description, Duration, Accuracy, Range, and Data
flows. The ER model is a data model and is best suited for designing relational databases. The ER
Diagram provides a visual representation of the entity relationship model. It is easy to analyze
the relationship between the entities with the help of an ER diagram. It focuses on the data flows
and interactions between the various entities of the database.
7.8 Glossary
SRS- A document that gives a complete description of how the system is expected to perform.
DFD- The Data Flow Diagram is a hierarchical graphical model of a system.
Data Dictionary- It lists all data items, data stores and data flows, and records the purpose of each
data item along with the definition of all composite data items.
ER Model- The Entity Relationship model is used to define the conceptual view of a database.
ER Diagram- The ER diagram is a tool for visual representation of the entity-relationship model.
System Analysis- The process of collecting relevant data, understanding the processes involved,
identifying problems and recommending feasible solutions for improving the functionality of the
system.
a. The ER Diagram provides a visual representation of the entity relationship model. It is easy
to analyze the relationship between the entities with the help of an ER diagram. It focuses on
the data flows and interactions between the various entities of the database.
b. The ER Diagram is simple to understand. Non-technical end users can easily understand the
workings of, and relationships in, the database.
Lesson- 8 System Design
Structure of the lesson
8.0 Objective
8.1 Introduction
8.2 System Design and its objectives
8.3 Design Principles
8.3.1 Problem Partitioning and Hierarchy
8.3.2 Abstraction
8.3.3 Modularity
8.3.4 Top-Down and Bottom-Up Strategies
8.4 Design Concepts
8.4.1 Functional Independence
8.4.2 Data Structure
8.4.3 Coupling
8.4.4 Cohesion
8.5 Summary
8.6 Glossary
8.7 Answers to check your progress/self assessment questions
8.8 References/ Suggested Readings
8.9 Model questions
8.0 Objective
After studying this lesson, students will be able to:
1. Explain the need of system design.
2. Discuss the need for modular design.
3. Describe the strategies of developing system design.
4. Define the concept of functional independence between modules.
5. Explain the concept of cohesion and coupling in system modules.
8.1 Introduction
Once the requirements have been defined in the software requirement specification document
during the analysis phase, it is time to develop the design document that acts as a blueprint for the
development team. It is just like the architecture of a building, which suggests how the final
building will look. Ease of implementation and maintenance of the software system relies on the
quality of the system design. A good system design helps in improving the reusability of already
developed modules.
Q3. Define top level and logic design.
___________________________________________________________________________
____________________________________________________________________________
___________________________________________________________________________
input, transformation or processing, and output as its three partitions. Partitioning the architecture
horizontally makes the software easy to test, maintain and extend. It also results in the
propagation of fewer side effects. One big disadvantage of horizontal partitioning is that it can
complicate the overall control of program flow, especially when more data needs to be passed
across modules or functions.
In the case of vertical partitioning, also known as factoring, control and processing are distributed
top-down in the program structure. Top-level modules are mainly concerned with control
functions and do little processing. Modules low in the structure are called workers, performing all
input, computation, and output tasks.
The probability of propagating side effects to modules low in the structure is much higher for a
change in a control module than for a change in a worker module. In general, changes to
computer programs revolve around changes to input, computation or transformation, and output.
Vertically partitioned structures are therefore less susceptible to side effects when changes are
made, and are preferred over horizontally partitioned structures.
8.3.2 Abstraction
The abstraction principle allows you to separate the conceptual aspects of a system from
implementation details during requirements definition and design. For example, you may specify
whether to use a FIFO-based queue or a LIFO-based stack data structure without having to worry
about the representation scheme used to implement the two data structures. You can also specify
the functional characteristics of routines like PUSH, POP and TOP for a stack, and INSERT,
DELETE, FRONT and REAR for a queue, without concern for their algorithmic details.
Abstraction permits a designer to consider a component at an abstract level without having to
worry about the implementation details of that component. Components of a system or the system
itself provides services to its environment, and abstraction of a component describes the external
behavior of that component without the need of knowing the internal details that bring about the
behavior.
Components of a system are not completely independent and often interact with each other. The
designer has to specify how a component will interact with other components. Abstraction allows
the designer to concentrate on one component at a time.
Three levels of abstraction can be created: procedural abstraction, data abstraction and control
abstraction. A procedural abstraction is a named sequence of instructions that has a specific and
limited function. A data abstraction is a named collection of data that describes a data object.
Control abstraction implies a program control mechanism without specifying internal details.
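The stack mentioned above makes a convenient example of procedural and data abstraction in Python. Callers use only push, pop and top; the internal list is just one possible representation, not something the text prescribes.

```python
class Stack:
    """Data abstraction: the interface hides the representation."""

    def __init__(self):
        self._items = []          # hidden detail; could be a linked list instead

    def push(self, value):        # procedural abstraction: a named,
        self._items.append(value) # limited function

    def pop(self):
        return self._items.pop()

    def top(self):
        return self._items[-1]

s = Stack()
s.push(1)
s.push(2)
print(s.top())   # 2
print(s.pop())   # 2
print(s.top())   # 1
```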
8.3.3 Modularity
The principle of partitioning is successful only if the modules can be solved and modified
separately. It is even better if changes made to one component do not require you to recompile
the whole system. In a modular system, a change in one component has no or minimal impact on
other components. Modularity also helps in easy debugging of the system.
In a modular system, each module supports a well-defined abstraction and has a clear interface
through which it can interact with other modules. Abstraction and partitioning together result in
modularity.
The bottom-up approach starts with the lowest-level components and proceeds progressively by
integrating them to form the higher levels. You need to identify the most basic or primitive
components first and then work up through layers of abstraction. A top-down approach is
followed in cases where the requirements are very well defined; it is useful in cases where you
want to automate an already existing system.
Q6. The component at the _______________ level of the hierarchy refers to the total system.
Q7. Define the bottom-up approach of system design. When should you follow the bottom-up
approach of system design?
___________________________________________________________________________
____________________________________________________________________________
___________________________________________________________________________
Functional independence means that modules are developed as though each will execute
separately, in isolation, with no interaction with the other modules of the system. Module
developers should focus only on the sub-problem in hand. A module's interface should be simple
when viewed from other modules of the program structure. Software that comprises functionally
independent modules is easy to develop, and easy to maintain for the following reasons:
1. Secondary effects caused by modifications to the design are limited.
2. Functional independence means that changes in one module do not affect other modules in
the system, and hence error propagation is reduced.
3. Independent modules can be reused in multiple software systems, as their interfaces are
simple.
8.4.3 Coupling
The objective of a good software design is to reduce the complexity of interconnections between
the system modules. Two modules are considered independent if one can function completely
without the presence of the other. Practically, it is difficult to achieve 100% modularity in a
system. Coupling in software design defines the strength of, or "how strongly", two or more
modules are interconnected.
Coupling refers to a measure of interdependence among modules. "Tightly coupled" means the
two modules are strongly interconnected, and "loosely coupled" modules are weakly
interconnected. It is better to have loose coupling between two modules; completely independent
modules have no coupling at all. Coupling between two modules is defined during the design
phase and is difficult to change later on.
Coupling is affected by the type of connection between modules, the complexity of the interface,
and the type of information flow between modules. Coupling increases with interface complexity
and with the number of interfaces per module; interface complexity here refers to the number of
data items passed between modules. Passing information only through the defined entry interface
of a module helps to reduce coupling, whereas passing information directly through the internals
of a module, or through shared variables, increases coupling.
Data and control are the two types of information that flow between modules. Control
information directs the actions of the receiving module, whereas data as input implies a simple
input-output function. Interfaces that carry control information have high coupling and less
abstraction, and interfaces that carry only data have low coupling and greater abstraction.
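A minimal sketch of the contrast, with invented names: compute_tight reads a shared variable set by some other module, i.e. information passed outside any defined interface, while compute_loose receives everything through its parameter list.

```python
shared_rate = None   # module-level state another part of the program must set

def compute_tight(amount):
    # Tightly coupled: silently depends on shared_rate being set elsewhere.
    return amount * shared_rate

def compute_loose(amount, rate):
    # Loosely (data-)coupled: the whole interface is the parameter list,
    # so the function can be developed and tested in isolation.
    return amount * rate

shared_rate = 0.1
print(compute_tight(100))        # 10.0
print(compute_loose(100, 0.1))   # 10.0
```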
8.4.4 Cohesion
Coupling is concerned with measuring the strength of connections between modules, whereas
cohesion measures the strength of the binding of elements within a module. The levels used to
define the cohesion of elements form a scale from weakest to strongest. In the last section, you
learned that coupling can be reduced by minimizing the connections between two modules.
Coupling can also be reduced by achieving strong cohesion, that is, by strengthening the binding
between elements in the same module. Cohesion tries to determine how closely the elements of a
module are related to each other.
Cohesion and coupling are inversely related to each other: higher cohesion within modules means
lower coupling between modules. This is what a designer tries to achieve, though the correlation
may not be perfect. Following are the different levels of the scale on which cohesion is measured:
Coincidental
Logical
Temporal
Procedural
Communicational
Sequential
Functional
Coincidental represents the lowest level of cohesion and functional represents the highest. The
cohesion of a module is defined by the highest level of cohesion applicable to each element in the
module.
Coincidental cohesion generally arises when already existing software is decomposed into
modules and there is no meaningful relationship between the elements of a module. It may also
result in different modules containing duplicate code. It leads to strong coupling between
modules, so they cannot be modified separately, which is undesirable.
Logical cohesion exists when the elements are logically related to each other and perform
functions that fall in the same logical class. For example, elements performing input functions
fall in the same logical class.
In the case of temporal cohesion, the elements are not only logically connected to each other but
are also executed together. For example, elements involved in activities like initialization and
termination are usually temporally bound.
Procedural cohesion means that the elements belong to some common procedural unit. For
example, elements that belong to some loop structure.
A module has communicational cohesion, if the elements operate on the same input or output
data.
Sequential cohesion is achieved when the output of one element becomes the input of the next
element within the same module. There are no strict guidelines for identifying sequential
cohesion.
Functional cohesion is the strongest of all cohesion levels: all elements within the module are
related to performing a single function.
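The two ends of the scale can be illustrated with a hypothetical pair of modules: `mean` is functionally cohesive (every statement serves one function), while `misc` is coincidentally cohesive (unrelated jobs grouped together only by accident of decomposition).

```python
def mean(values):
    """Functional cohesion: every element computes the one result."""
    total = sum(values)
    return total / len(values)

def misc(values, text):
    """Coincidental cohesion: statistics and string handling thrown together."""
    average = sum(values) / len(values)   # one unrelated job...
    shouted = text.upper()                # ...and another
    return average, shouted

print(mean([2, 4, 6]))          # 4.0
print(misc([2, 4, 6], "ok"))    # (4.0, 'OK')
```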
Check your progress/ Self assessment questions- 3
Q8. ______________________refers to the logical representation of relationship between the
individual elements of data.
Q9. Define coupling.
___________________________________________________________________________
____________________________________________________________________________
___________________________________________________________________________
8.5 Summary
The system design is concerned with defining how to solve the problem, whereas the analysis
phase was concerned with defining what the system should do. It is not feasible to tackle a large
problem in one go, whereas a small problem is easy to solve in one go. The design process of a
software system follows the principle of divide and conquer. The abstraction principle allows you
to separate the conceptual aspects of a system from implementation details during
requirements definition and design. Abstraction permits a designer to consider a component at an
abstract level without having to worry about the implementation details of that component.
The principle of partitioning is successful only if the modules can be solved and modified
separately. It is even better if changes made to one component do not require you to recompile the
whole system. The top-down approach starts from the highest-level component and decomposes
it into smaller components, iterating until the desired level of detail is achieved. The bottom-up
approach starts with the lowest-level components and proceeds progressively by integrating them
to form the higher levels. Data structure refers to the logical representation of the relationship
between individual elements of data. Two modules are considered independent if one can
function completely without the presence of the other. Coupling refers to a measure of
interdependence among modules. Cohesion is a measure of the strength of the binding of
elements within a module. Cohesion and coupling are inversely related to each other: higher
cohesion within modules means lower coupling between modules.
8.6 Glossary
Cohesion- Cohesion is a measure for the strength of binding elements within the module.
Coupling- Coupling refers to a measure of interdependence among modules.
Abstraction- Separation of conceptual aspects of a system from implementation details during
requirements definition and design.
Module- Module refers to a single component within a system.
SRS- A technical document that defines the user requirements.
11. It is good to have high cohesion, as high cohesion within the modules means lower coupling
between the modules and greater functional independence between the modules.
Lesson- 9 Software design methodologies
Structure of the lesson
9.0 Objective
9.1 Introduction
9.2 Design Notation
9.2.1 Structure Charts
9.2.2 Specifications
9.3 Design Phase
9.3.1 Data design
9.3.2 Architecture Design
9.3.2.1 Components of system
9.3.2.2 Connectors of the system
9.3.3 Procedural Design Methodology
9.3.3.1 Restate the Problem as a Data Flow Diagram
9.3.3.2 Identify the Most Abstract Input and Output Data Elements
9.3.3.3 First-Level Factoring
9.3.3.4 Factoring the Input, Output, and Transform Branches
9.4 Summary
9.5 Glossary
9.6 Answers to check your progress/self assessment questions
9.7 References/ Suggested Readings
9.8 Model questions
9.0 Objective
After studying this lesson, students will be able to:
1. Define the design notions used in design phase
2. Discuss the concept of data design.
3. Describe the components and connectors used in the design.
4. Explain in detail the procedural design.
9.1 Introduction
Once the SRS document has been defined, the software development moves to the design phase.
SRS document specifies the problem domain and the focus of the design phase is to specify the
solution domain. The activities in design phase might be similar to the analysis phase, but the
objective is different. The design phase is concerned with creating a document that is closer to the
implementation and is easy for the coding team to understand.
The structure above shows that only data is being passed between the modules; there are four
modules in total. Procedural information like loops and decisions can be explicitly specified in a
structure chart. For example, if a module repeatedly calls its sub-modules, this can be represented
using looping arrows around the arrows used to invoke the sub-modules.
Figure 9.2 Looping arrows in structure chart.
Decisions can also be explicitly specified using the diamond symbol. For example, if a super-
module invokes a sub-module based on some decision, a diamond symbol is added to the head of
the arrow that connects the two modules.
A module can also perform functions of more than one type of module. A structure chart is best
suited for representation of a design that uses functional abstraction. A typical structure chart is
used to specify the following:
1. Modules and their call hierarchy.
2. Modules and their interfaces.
3. Type of information passed between modules.
Once the structure design is finalized, the modules and their interfaces cannot be changed. The
aim of structured design is that the programs implementing it:
1. Also have a hierarchical structure.
2. Have functionally cohesive modules.
3. Have very few interconnections between modules.
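Items 1 and 3 of such a chart, the modules, their call hierarchy and the interconnections, can be captured as a simple parent-to-children mapping; the module names below are invented for illustration.

```python
# Hypothetical structure chart: each module maps to the sub-modules it invokes.
chart = {
    "main":       ["get_input", "transform", "put_output"],
    "get_input":  ["read", "validate"],
    "transform":  [],
    "put_output": ["format", "write"],
    "read": [], "validate": [], "format": [], "write": [],
}

def subordinates(chart, module):
    """All modules below `module` in the call hierarchy."""
    below = []
    for child in chart[module]:
        below.append(child)
        below.extend(subordinates(chart, child))
    return below

print(subordinates(chart, "get_input"))   # ['read', 'validate']
print(subordinates(chart, "main"))        # all seven lower-level modules
```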
9.2.2 Specifications
It is important that design specifications are used to communicate the design to others. Design
specifications specify the data structures, module specifications, and design decisions. A formal
description of all data structures to be used in the software is given in the design document.
Module specifications include a description of the interfaces used between modules, the abstract
behaviour of each module, and the sub-modules used by a module. After the design is approved,
it is implemented using a programming language that best suits the design architecture. The
design document also records all the major decisions taken by the designer, with a brief
description of the available choices and an explanation of why the specified choice was selected.
Q3. If a module repeatedly calls its sub modules, it can be represented using ___________ arrows
around the arrows used to invoke the sub module.
Q4. Decisions can be explicitly specified using structure chart. ( TRUE / FALSE )
___________________________________________________________________________
The design phase is carried out to transform the requirements specified during the analysis phase
into a format that is easy to implement. The design is carried out as follows.
Software architecture provides three types of views:
1. Module
2. Component and connector
3. Allocation
The module view represents the system as a collection of coded units that implement specified
system functionality. Modules are the key elements of this view; examples of module elements
are classes, methods and packages. The relationships between modules depend on the
interactions between them.
The component and connector view represents the system as a collection of run-time
components. If you are familiar with object-oriented programming, then objects, or sets of objects
belonging to a class, are its run-time components; a process is also an example of a run-time
component. Connectors define how two components interact with each other at run time;
examples of connectors are pipes and sockets.
9.3.2.1 Components of system
Components in an architecture design refer to computation units or data stores. A component's
name is based on the function it performs; the name gives it a unique identity that is used for
referencing details about the component in the supporting documents.
architecture. Connectors can also be used to provide n-way communication between multiple
components.
Bus type connector- Used by a system component to broadcast a message to the other
components of the system.
Database connector- Used by a functional component when it wants to access the database
component of the system.
RPC- Used by system components to specify a remote procedure call.
Pipe- Used to represent simple message passing between two components.
Request-Reply- Used to show a simple connection between two components of the system, in
which one component makes a request and the other replies.
An allocation view represents how the different software modules are allocated resources like
hardware, file systems, etc. It represents the relationship between the various elements of the
software system and the environment in which it is to be executed. In technical terms, this view is
concerned with exposing structural properties such as which processes run on which processor,
and how the files are organized on a file system.
The DFD is concerned with identification of all major functions and the flow of data between
these functions.
9.3.3.2 Identify the Most Abstract Input and Output Data Elements
Functions or transformations cannot be applied directly to physical input; the input must first be
converted into a form suitable for applying transformations. Similarly, the outputs generated by
transformations are converted into physical output. This step focuses on separating two types of
transformations: those that perform the actual transformations and those that convert the input
and output formats.
For this, you need to identify the highest abstract level of input and output. Data elements that are
farthest removed from the physical input elements and still can be used to represent input data are
called most abstract level input data elements. You can recognize the most abstract input data
elements by moving from the physical inputs toward the outputs in the data flow diagram, until
you reach the data elements that can no longer be considered incoming.
Similarly, data elements that are farthest removed from the physical output elements and still can
be used to represent output data are called most abstract level output data elements. You can
recognize the most abstract output data elements by moving from the physical outputs toward the
inputs in the data flow diagram, until you reach the data elements that can no longer be
considered outgoing. These data elements represent the logical output data items, and the
transforms after these data items merely convert the logical output into a physical output format.
The actual transformation happens between the most abstract input data elements and most
abstract output data elements.
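The walk described above can be sketched as a small routine. The chain of data elements and the designer's judgment about which of them still count as "incoming" are assumptions made for illustration.

```python
# Hypothetical data flow, from physical input towards physical output.
flow = ["raw_chars", "line", "validated_record", "sorted_report", "printed_page"]

# Designer's judgment: which elements can still be considered incoming data.
incoming = {"raw_chars", "line", "validated_record"}

def most_abstract_input(flow, incoming):
    """Walk from the physical input until data is no longer 'incoming';
    the last such element is the most abstract input data element."""
    last = None
    for element in flow:
        if element not in incoming:
            break
        last = element
    return last

print(most_abstract_input(flow, incoming))   # validated_record
```

The most abstract output data element is found symmetrically, by walking from the physical output back towards the inputs.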
2. Repeat the first-level factoring process for the new central transform, treating the input module
as the main module.
3. Create a subordinate input module for each input data stream coming into this new central
transform
4. Repeat the same process, until the physical inputs are reached.
No output subordinate modules are produced during the factoring of input modules. Output
modules can be factored by repeating the same process.
Factoring the central transform is functional decomposition. Factoring of the central transform
can be achieved by creating a subordinate transform module for each of the transforms in this
data flow diagram. This process can be repeated for the new transform modules that are created,
until we reach atomic modules.
Q10. What is the need of factoring input, output and transformation branches?
___________________________________________________________________________
____________________________________________________________________________
___________________________________________________________________________
9.4 Summary
Design notations are used to represent the design or design decisions during the design phase.
The structure chart is used to describe the structure of a program: a labelled rectangular box
represents a module, and an arrow shows the parent-child relationship between two modules. The
main focus of data design is to define the data structures to be used by the various
software components. Data types and data constraints for all data items listed in the data
dictionary are specified during data design. Software architecture may be defined as a
combination of software elements, their externally visible properties, and the relationships
between these elements. Software architecture provides three types of views, namely the module
view, the component and connector view, and the allocation view. Procedural design
methodology is based on the principle of problem partitioning: the system is partitioned into
sub-systems to handle input, output and transformation. Modelling the solution domain is the key
objective of the DFD in the design phase.
Second step of procedural design focuses on the separation of two types of transformations: those
that perform actual transformations and those that convert the input and output formats. Next you
need to perform the factoring of input, output and transformation modules.
9.5 Glossary
Structure chart- The structure chart is used to describe the structure of a program.
DFD- It is used to show the flow of data between various components of system.
ER Diagram- The relationship between various entities is defined in the ER diagram.
Data Dictionary- Data dictionary is used to list all data items that appear in a DFD of a system
and also to list the purpose of each data item along with the definition of all composite data items.
SRS Document- SRS document specifies the end user requirements or the problem domain.
2. The structure chart is used to describe the structure of a program. Labelled rectangular box is
used to represent a module. An arrow is used to show the parent and child relationship between
the two modules. An arrow from module A to module B, describes that modules A is invoking
module B and that module B is subordinate of module A.
3. Looping.
4. TRUE.
5. Data design is used to define the data structure to be used by various software components. It
helps to reduce the program complexity, makes the program structure modular. The information
domain model developed during analysis phase is transformed into data structures needed for
implementing the software.
6. Following are some of the advantages of software architecture:
a. Understanding and communication
b. Reuse.
c. Construction and Evolution
d. Analysis.
7. Component and connector view represents the view of the system as a collection of run time
components. An object or a set of objects of a class is a run time component, and a process is also
an example of a run time component. Connectors define how two components interact with each
other at run time; examples of connectors are pipes and sockets.
8. DFD for the design phase is different from the DFD for the analysis phase. Modelling of
problem domain is the key objective of DFD in analysis phase, whereas modelling of solution
domain is the key objective of DFD in design phase.
9. Data elements that are farthest removed from the physical input elements and still can be used
to represent input data are called most abstract level input data elements.
10. The first-level factoring leaves a lot of work for each subordinate module to perform. These
modules must be further factored into subordinate modules to reduce the work load on each
module at higher level.
Lesson- 10 Object oriented concepts
Structure of the lesson
10.0 Objective
10.1 Introduction
10.2 Object oriented concepts
10.3 Unified Modelling Language (UML)
10.4 OO Design Methodology
10.5 Summary
10.6 Glossary
10.7 Answers to check your progress/self assessment questions
10.8 References/ Suggested Readings
10.9 Models questions
10.0 Objective
After studying this lesson, students will be able to:
1. Define various terms associated with object oriented programming.
2. Explain the use of UML in object oriented design.
3. Discuss various types of diagrams created in UML.
4. List various steps involved in OO design methodology
10.1 Introduction
Object-oriented (OO) approach is the most popular software development approach today. An
object oriented design is less affected by change in requirements. Inheritance and close
association of objects in design to problem domain encourages the reusability of modules that
help to reduce the overall cost and effort needed to develop the software. Object-oriented
approach provides structural support for implementing abstraction.
oriented programming can be achieved only with the help of the class type; a program or language
cannot be said to support the OOP paradigm if it does not include a class data type.
Object- A class is simply a type that forms the basis for OOP. An object may be defined as an
instance of a class; it is the active entity of a class. An object occupies space in memory. When
you create multiple objects of a class, multiple instances of the class members are created.
There was a clear separation between data and functions in procedural or structured languages,
and more emphasis was given to the code than to the data. A class supports the encapsulation feature of
OOP. Encapsulation means binding of data and functions (coding) in a single type. A class is a
collection of data members and member functions, and hence it supports encapsulation.
Encapsulation leads to data hiding.
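The binding of data and functions described above can be sketched in Python. The `Account` class and its members below are hypothetical, chosen only to illustrate encapsulation and data hiding; they are not taken from the text:

```python
class Account:
    """Encapsulation: data members and member functions bound in one type."""

    def __init__(self, owner, balance=0):
        self._owner = owner        # leading underscore: hidden by convention
        self._balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount    # state changes only via member functions

    def balance(self):
        return self._balance       # limited, read-only access preserves integrity


acc = Account("Ravi")              # acc is an object: an instance of Account
acc.deposit(500)
print(acc.balance())               # 500
```

Because callers reach the balance only through `deposit` and `balance`, invalid updates (such as a negative deposit) are rejected, which is the data-integrity benefit of encapsulation mentioned above.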
Objects are entities that encapsulate some state and provide services to be used by its
environment. The basic property of an object is encapsulation. Interface of an object refers to the
services that can be requested from the object. Encapsulation allows only limited access to the
data, that lets you achieve data integrity. State of an object is preserved until the object is
destroyed. Attributes and services provided by an object are defined by the class it belongs to. A
class may also be considered as a set of objects that share the same behaviour.
A system consists of a number of objects belonging to different classes. These objects interact
with each other in order to achieve the system objective. Mechanism of messaging is used for
interaction between the objects. The object that receives the request executes the appropriate
service requested and returns the result to the object requesting for the service. It is a clear case
of encapsulation and abstraction supported by objects. Abstraction in OOP means providing only
the interface to the users and hiding the unnecessary details or coding from the user. Abstraction
is also a process of creating some abstract object from a class that depicts a real life entity. Some
of the examples of abstract objects are employee, student, car, account, human, etc. The main
objective of abstraction is to reduce the complexity and improve the performance. A class that
contains only the prototype of data members and member functions is a perfect example of
abstract view of class. You can access member function of class using objects of that class
without knowing any details about that member function.
Relation between objects- Two objects are related if one object invokes a service of the other;
there is an association between two objects if one object uses a specified service of another
object. Links are used to represent such associations between objects.
Association leads to visibility. Suppose that object A wants to send a message to object B, or
invoke some service of object B, then the object B must be visible to object A in the final
program definition.
Another important type of relationship between objects is aggregation. It is used to represent the
whole/part-of relationship. Aggregation is often referred to as containment. For example, if an
object OBJ1 is an aggregation of objects OBJ2 and OBJ3, aggregation states that objects OBJ2
and OBJ3 will normally be within object OBJ1.
Inheritance- It is probably the most powerful feature of object oriented programming. It lets you
create a new class by re-using or inheriting the features of already existing class and adding new
features to the same. Inheritance helps you to improve the reusability of your code by re-using
already tested classes. It helps to reduce a lot of programming effort and also improves the
performance. Inheritance also helps you to break one large class into smaller classes that helps
improve the abstraction.
Inheritance represents “is a” relation. Inheritance relation can be best represented using
hierarchical structure. A subclass inherits the features of a superclass. Hierarchy should be such
that an object of a class is also an object of all its superclasses in the problem domain. Subclass is
an extension of the superclass, or all common features of the subclasses are accumulated in the
superclass. Features can be inherited from the superclass and used in the subclass directly. A
derived class can also be considered to be a specialized class of available abstract classes. In case
of strict inheritance, a subclass takes all the features from the superclass and adds additional
features to specialize it.
In strict inheritance, all data members and operations of base class are available in the derived
class.
In case of non-strict inheritance, subclass does not inherit all the features of superclass, or
redefines some of the features of superclass.
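The distinction between strict and non-strict inheritance can be illustrated with a small hypothetical hierarchy (the class names and figures are illustrative assumptions only):

```python
class Employee:                  # superclass: common features accumulate here
    def __init__(self, name):
        self.name = name

    def pay(self):
        return 30000


class Manager(Employee):         # strict inheritance: takes all features of
    def bonus(self):             # Employee and only ADDS a new one
        return 5000


class Intern(Employee):          # non-strict inheritance: REDEFINES pay()
    def pay(self):
        return 10000


m, i = Manager("Asha"), Intern("Dev")
print(m.pay() + m.bonus())       # 35000: inherited pay() plus added bonus()
print(i.pay())                   # 10000: overridden feature of the superclass
```

An object of `Manager` or `Intern` is also an object of `Employee`, matching the "is a" relation described above.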
A class may also inherit from multiple classes. It means that the relationship may not necessarily
be a tree like hierarchical structure. When a subclass inherits features from multiple classes, it is
called multiple inheritance.
Polymorphism- It is yet another key feature of OOP that lets you create multiple forms of a single
object. Polymorphism is also known as overloading. Polymorphism is unavoidable in a system
that supports inheritance, because of the "is a" relation supported by inheritance. If B is a subclass of
superclass A, an object of class B can also be used wherever an instance of class A is expected. Static type of
object polymorphism is specified in the program text, and it remains unchanged. The dynamic
type of object polymorphism can change from time to time and is known only at reference time.
The dynamic type of object will be defined at the time of reference of the object. This type of
polymorphism requires dynamic binding of operations. Dynamic binding means that the code
associated with a given procedure call is not known until the moment of the call.
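A minimal sketch of this idea in Python (which binds all method calls dynamically; the `A`/`B` names mirror the example in the text, and the annotation merely documents the static intent):

```python
class A:
    def service(self):
        return "A's service"


class B(A):                      # B "is a" A
    def service(self):           # redefines the service
        return "B's service"


def invoke(obj: A):
    # The static type written here is A, but the dynamic type of obj may be
    # A or B. The code run for service() is bound only at the moment of the
    # call: this is dynamic binding.
    return obj.service()


print(invoke(A()))               # A's service
print(invoke(B()))               # B's service: subclass object used where A is expected
```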
Q2 Define abstraction.
___________________________________________________________________________
____________________________________________________________________________
___________________________________________________________________________
Class Diagram- It is the core of the UML model. A class diagram is used to define the following:
1. Classes that are part of the system.
2. Association or relationship between the classes.
3. Inheritance relationship between the classes.
A class in UML is represented as a box divided into three parts. The top part specifies the class
name, the middle part lists the data members or attributes of the class, and the bottom part
specifies the functions that transform the class state.
It is important to describe the relationships between the classes, as the interaction between the
classes is must to achieve the system objective. One common relationship is the generalization-
specialization relationship between classes. It can be best represented using inheritance hierarchy
in which, properties of general significance are assigned to a more general class—the
superclass—while properties which can specialize an object further are put in the subclass.
Subclass contains its own properties as well as those of the superclass. The generalization-
specialization relationship is specified by having arrows coming from the subclass to the
superclass, with the empty triangle-shaped arrowhead touching the superclass.
Figure 10.2: A class hierarchy.
Association is another relationship that allows objects to communicate with each other; it means
that an object of one class needs services from objects of another class. A line is used to show the
association between two classes, and a label on the line specifies the name of the association.
Association roles, attributes and cardinality can also be defined. A zero-or-many multiplicity is
represented by a "*". The part-whole type of relationship is used when an object is composed of
many objects. It represents containment, which means that a class object is contained within the
object of another class.
Aggregation relationship is represented using a line originating from a little diamond connecting
it to classes which represent the parts.
collaboration diagrams are used to represent the system behaviour when it performs some of its
functions. Sequence diagrams and collaboration diagrams are collectively known as interaction
diagrams.
A sequence diagram shows the temporal ordering and series of messages exchanged between
objects as they collaborate or interact to implement desired system functionality.
Objects, instead of classes, participate in sequence diagrams, since the diagram tries to depict the
dynamic behaviour of the system. Objects in a sequence diagram are shown on top with the help
of labelled boxes. The lifeline of an object is represented using a vertical bar. An arrow from the
lifeline of one object to the lifeline of another is used to represent a message between them. The message
name generally refers to a method in the class. Each message has a return, which occurs when the
operation finishes and control returns to the invoking object. It is desirable to show the return
message explicitly, even though it is not mandatory; this is done by using a dashed arrow.
A collaboration diagram is also a good representation of object communication and looks more
like a state diagram. An object is represented as a box, and messages are shown as numbered
arrows between the objects. Message numbering is used to capture the chronological ordering of
messages.
Figure 10.5: Collaboration diagram.
Activity Diagram. It is also used for modelling the dynamic behaviour of the system.
It focuses on modelling the activities during the system execution. An activity in activity diagram
is represented using an oval shape, with the activity name written inside it. The system proceeds
from one activity to the next, and which activity comes next may depend on some decision. A
diamond shape is used to represent a decision and is connected to multiple activities. An activity
diagram somewhat resembles a flow chart. An activity diagram also has notation to specify
parallel execution of activities in a system.
___________________________________________________________________________
____________________________________________________________________________
___________________________________________________________________________
Design of dynamic model and the definition of functions on classes
The initial class diagram only gives the module-level design. This design needs further modelling
to ensure that the expected behaviour for the events can be supported. The main aim of this step
is to specify how the state of various objects changes when events occur. An event occurs when a
request is made to an object for some service. A series of events during system execution refers to
a scenario.
A scenario defines the different services being performed by each object. All scenarios put
together represent the behaviour of the complete system. A design capable of supporting all the
scenarios is also capable of supporting the desired dynamic behaviour of the system.
Functional Modelling
It does not consider the control aspects of the computation, and is only concerned with how the
output values are computed from the input values of the system. Functional view represents the mapping
from inputs to outputs and also the steps involved in achieving this mapping. DFD is used to
represent the functional modelling. Functional modelling is done to ensure that the object model
can perform the transformations required from the system.
____________________________________________________________________________
___________________________________________________________________________
10.5 Summary
A class is a collection or binding of data members and member functions. An object may be
defined as an instance of a class; it is the active entity of a class. Mechanism of messaging is used
for interaction between the objects. Abstraction in OOP means providing only the interface to the
users and hiding the unnecessary details or coding from the user. If an object invokes a service of
another object, there is an association between the two objects. Association leads to visibility.
Suppose that object A wants to send a message to object B, or invoke some service of object B,
then the object B must be visible to object A. Inheritance lets you create a new class by re-using
or inheriting the features of already existing class and adding new features to the same.
Polymorphism is also known as overloading, and it lets you create multiple forms of a single
object. Polymorphism is un-avoidable in a system that supports inheritance. Class Diagram is
used to represent classes, association and relationship between classes. Sequence diagrams or
collaboration diagrams are used to represent the system behaviour when it performs some of its
functions. Activity Diagram is also used for modelling the dynamic behaviour of the system. It
focuses on modelling the activities during the system execution. An object oriented design
generally consists of the following steps:
– Identification of classes and the relationships between them.
– Design of dynamic model and the definition functions on classes.
– Design of functional model and the definition of functions on classes.
– Identification of internal classes and functions.
– Optimization and packaging.
10.6 Glossary
Class- A class is a collection or binding of data members and member functions.
Object- An object may be defined as instance of a class. It is the active entity of a class.
Abstraction- Abstraction in OO design means providing only the interface to the users and hiding
the unnecessary details or coding from the user.
Inheritance- It lets you create a new class by re-using or inheriting the features of already existing
class and adding new features to the same.
Polymorphism- It lets you create multiple forms of a single object.
Class Diagram- It is used to represent classes, association and relationship between classes.
Sequence diagram- It is used to represent the system behaviour when it performs some of its
functions.
Activity Diagram- It is also used for modelling the dynamic behaviour of the system.
8. Functional modelling does not consider the control aspects of the computation, and is only
concerned with how the output values are computed from the input values of the system.
Functional view represents the mapping from inputs to outputs and also the steps involved in
achieving this mapping.
Lesson- 11 Software Testing
Structure of the lesson
11.0 Objective
11.1 Introduction
11.2 Software testing
11.3 A software testing cycle
11.4 Objectives of software testing
11.5 Principles of software testing
11.6 Testability
11.7 Black-Box Testing
11.8 White-Box Testing
11.9 Validation Testing
11.10 Types of Validation Testing
11.10.1 Unit Testing
11.10.2 Integration Testing
11.10.3 System Testing
11.10.4 Acceptance Testing
11.11 Summary
11.12 Glossary
11.13 Answers to check your progress/self assessment questions
11.14 References/ Suggested Readings
11.15 Models questions
11.0 Objective
After studying this lesson, students will be able to:
1. Define the term software testing.
2. List key objectives of software testing.
3. Explain key principles of software testing.
4. Explain the concept of testability.
5. Discuss in detail various types of software testing techniques.
11.1 Introduction
No matter what product you produce, it is imperative to test the product before it is handed over
to the customer. Software testing is a key phase in development process of any software.
Software must be tested to determine if it meets the standards and requirements defined in the
SRS. Depending on the nature of the software under development, testing can be performed by
the developers themselves, or by a special team comprising testers. Testing can be performed right
from the beginning of the software development process or at the end, before the software is
handed over to the customer. In this lesson you will also learn about various software testing
techniques used in practice.
Test Suite- It refers to a set or collection of interrelated test cases with a common testing
goal.
Test Driver- Software tool that is responsible for the application of test cases.
Test strategy- It refers to an algorithm used to select test cases from a representation.
The process starts by designing the test cases or picking up already designed test cases. Then the
test data or input is prepared. The actual software to be tested is then executed on the test data and
the results are obtained. Results specify the identified errors, bugs, gaps, and also the value for
various measures such as memory used, time taken to successful completion, etc. The results are
then compared with the test cases to identify the gaps and generate the test report.
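The cycle above (design test cases, prepare test data, execute, compare results, report) can be sketched as a tiny test driver. The `divide` function and its test suite below are hypothetical stand-ins for the software under test:

```python
# Hypothetical unit under test
def divide(a, b):
    if b == 0:
        raise ZeroDivisionError("b must be non-zero")
    return a / b


# Test suite: interrelated test cases, each a pair (inputs, expected outcome)
test_suite = [
    ((10, 2), 5.0),
    ((9, 3), 3.0),
    ((-4, 2), -2.0),
]


def run(suite):
    """A minimal test driver: applies each test case and compares results."""
    report = []
    for inputs, expected in suite:
        actual = divide(*inputs)          # execute software on the test data
        report.append((inputs, expected, actual, actual == expected))
    return report


# Compare obtained results with the test cases and generate the test report
for inputs, expected, actual, ok in run(test_suite):
    print(inputs, "->", actual, "PASS" if ok else "FAIL (expected %s)" % expected)
```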
___________________________________________________________________________
____________________________________________________________________________
___________________________________________________________________________
Q2 What is the advantage of starting software testing from the very beginning of the
development process?
___________________________________________________________________________
____________________________________________________________________________
___________________________________________________________________________
Q3. _______________ refers to a set of inputs, operational conditions, and expected outcome,
and ____________________ refers to a set or collection of interrelated test cases with a
common testing goal.
7. Absence of errors fallacy- If testing fails to find any bugs in the software, it does not mean
that the software is ready to be delivered. It may be that the tests were not designed to check
whether the software matches the user's actual requirements.
11.6 Testability
Testability is a software quality characteristic. Some definitions of testability are:
"The degree to which a software or its modules facilitates testing is called testability."
or
"Testability is the degree of difficulty of testing a system".
Testability is determined by both the system being tested and its development approach. Higher
testability means better tests at the same cost, and lower testability means fewer, weaker tests at the
same cost. Testability determines the limit to which the risk of costly bugs can be reduced to an
acceptable level. Delivering a system with nasty bugs, means poor testability. Improved
testability means there are good chances to find bugs as you do more testing. Designing a
software system with testing in mind is called design for testability.
Following are some of the characteristics of testability:
1. High quality software can be tested in a better manner. This is because if the software is designed
and implemented considering quality, then comparatively fewer errors will be detected during the
execution of tests.
2. Software becomes stable when changes made to the software are controlled and when the existing
tests can still be performed.
3. Testers can easily identify whether the output generated for certain input is accurate simply by
observing it.
4. Software that is easy to understand can be tested in an efficient manner. Software can be properly
understood by gathering maximum information about it. For example, to have a proper
knowledge of the software, its documentation can be used, which provides complete information
about the software code, thereby increasing its clarity and making the testing easier.
5. By breaking software into independent modules, problems can be easily isolated and the modules
can be easily tested.
performed from the user interface. The tester provides inputs using the user interface and checks
the output without knowing the processing details.
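A small sketch of the black-box approach: the test cases below are derived only from a specification (leap-year rules), never from the implementation, which the tester treats as an opaque box. The `leap_year` function is a hypothetical unit under test:

```python
def leap_year(y):
    """Unit under test: the tester never looks at this code."""
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)


# Black-box test cases chosen from the specification using equivalence
# classes and boundary values, with inputs and expected outputs only.
cases = {2000: True, 1900: False, 2024: True, 2023: False}

for year, expected in cases.items():
    assert leap_year(year) == expected, "failed for %d" % year
print("all black-box cases passed")
```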
Figure 11.3 White-Box testing
___________________________________________________________________________
____________________________________________________________________________
___________________________________________________________________________
Validation testing is carried out before the software is handed over to the customer. The aim of
validation testing is to ensure that the software is made according to the customer requirements.
The acceptance of the software by the end customer is part of validation testing.
Figure 11.4 V-Model of validation testing
correctness of the units when executed after integration. Integration testing can be done using two
approaches:
2. Top-down integration
In this testing, the highest-level modules are tested first, followed by testing the lower-level
modules.
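In top-down integration, a not-yet-integrated lower-level module is typically replaced by a stub while the higher-level module is tested. A minimal sketch, with hypothetical `total_bill` and tax modules:

```python
# Top-down integration: test the high-level module first, with its
# subordinate replaced by a stub.

def tax_stub(amount):
    """Stub standing in for the real lower-level tax module."""
    return 0.0  # canned, simplified answer


def total_bill(amount, tax_fn):
    """High-level module under test; its subordinate is passed in."""
    return amount + tax_fn(amount)


# Step 1: exercise the top module against the stub
assert total_bill(100.0, tax_stub) == 100.0

# Step 2: when the real lower-level module is ready, integrate and retest
def real_tax(amount):
    return amount * 0.18


assert total_bill(100.0, real_tax) == 118.0
print("top-down integration steps passed")
```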
A tester has to think like the client, or even interact with the client and test the software with
respect to user needs, requirements, and business processes, and determine whether the software
is ready for the delivery or not. This testing is key to determine the confidence of the client in the
system.
11.11 Summary
Testing refers to the process of evaluating a system or its components to check whether it
satisfies the specified requirements. There are seven key principles of testing.
The degree to which a software or its modules facilitates testing is called testability. Testability
determines the limit to which the risk of costly bugs can be reduced to an acceptable level.
Delivering a system with nasty bugs, means poor testability. In case of Black-Box testing, the
software is treated as a black box and the testing is performed without having any knowledge of
the components or modules of the software. White-box testing methodology refers to the detailed
investigation of the internal structure of the software modules and code. Validation testing is done during
the software development process and also at the end of it. Validation testing is carried out before
the software is handed over to the customer. Unit testing is performed by the developers on the
individual units of source code. Integration testing is done to check the functional correctness of
the units when executed after integration. System testing is performed on the complete software
to check the behaviour of the whole system as defined by the scope of the project. Acceptance
testing is done to test the software with respect to user needs, requirements, and business
processes, and determine whether the software is ready for the delivery or not.
11.12 Glossary
Test case- It refers to a set of inputs, operational conditions, and expected outcome.
Test Suite- It refers to a set or collection of interrelated test cases with a common testing
goal.
Software testing- It refers to the process of evaluating a system or its components to check if it
satisfies the specified requirements or not.
Bug- Missing or incorrect code that may lead to a failure.
Testability- Degree to which a software or its modules facilitates testing is called testability.
2. Starting software testing from the very beginning of the development process helps to
reduce the time and cost of correcting errors.
3. Test case , test suite.
4. The degree to which a software or its modules facilitates testing is called testability, or
testability is the degree of difficulty of testing a system.
5. In case of Black-Box testing, the software is treated as a black box and the testing is performed
without having any knowledge of the components or modules of the software. Tester doesn't have
knowledge of the software architecture and cannot view the source code.
6. Advantages of White-Box Testing:
a. It is easy for the skilled tester to identify the type of data that can help in testing the
application effectively.
b. Maximum coverage is achieved during test, due to the knowledge levels of skilled tester.
7. FALSE.
8. System.
3. Naseeb Singh Gill, Software Engineering: Software Reliability, Testing and Quality, Khanna
Book Publishing, 2011.
11.15 Model questions
1. Differentiate between black-box and white-box testing.
2. What is validation testing? Explain various tests performed in validation testing.
3. List various principles of software testing.
4. Define testability.
5. What is software testing? What is the need of it?
Lesson 12: Verification and Validation
Contents
12.1 Objective
12.2 Introduction
12.3 Verification
12.4 Validation
12.5 Need of Verification and Validation
12.6 Foremost of verification and validation
12.7 Software verification and validation approaches and their applicability
12.8 Principles of Verification and Validation
12.9 Difference between Verification and Validation
12.10 Verification activities
12.11 Validation activities
12.12 System View
12.13 Difficulty & Importance of Verification and Validation
12.14 Verification and Validation in life cycle
12.14.1 Verification and Validation in Requirement phase
12.14.2 Verification and Validation in Design phase
12.14.3 Verification and Validation in Implementation phase
12.14.4 Verification and Validation in Integration phase
12.14.5 Verification and Validation in Testing phase
12.14.6 Verification and Validation in Maintenance phase
12.15 Example
12.16 Summary
12.17 Glossary
12.18 Answers to self-answering questions
12.19 References / Suggested readings
12.20 Model questions / Self answering questions
12.1 Objective
This chapter aims at understanding verification and validation, a major and most important part
of software engineering, because unless one knows how to verify and validate a software
product, there is no way to ensure that the product meets the user's business requirements. In
this chapter we will learn the basics of making a quality software product, which leads to better
performance and a flawless product.
12.2 Introduction
In software engineering, verification and validation are two very widely used terms which
seem to be the same, but in fact the two terms are quite different.
Fulfils the SRS document’s requirements
Fulfils the user’s requirements
When we talk about the SRS document's requirements, we know that the SRS is completely
accepted by both consumer and producer, so once the final product fulfils the SRS
requirements, it will be quality oriented.
We will talk about both of these terms in detail but firstly let’s talk about these terms one
by one and try to understand these terms.
12.3 Verification
By right method we mean that what so ever we have already planned to make that
product/software. Are we following that way so that the final product which will be made
will be correct and by right product we mean that final software which will be handed
over to the end user will perform according to user’s requirements &fulfils his/her
objectives.
12.4 Validation
And if not I will refuse to accept the product. This will be the real validation & complete
testing.
Gaining early intuition in software performance
Synchronizing design and test data
Managing the testing procedure
Improved software quality
Reduced assurance cost
Reduced prototyping cost
Flattened design cycle time
Fig 12.1
Here we have some questions that should be answered before we go into more detail on
verification and validation.
Validation
“Are we working on the right system?”
Does our software product precisely capture the actual problem?
Does our product have justification for the needs of all the participants?
Verification
Does the given system work according to user’s instructions?
Verification and validation of the software product are carried out throughout the
development of the product. There are a lot of techniques to check software, either in
isolation or by combining modules. Broadly, we have classified these approaches into five
categories, which are discussed in the next section.
Levels of testing
Module testing
Integration testing
System testing
Regression testing
3. Proof of correctness
Proof of correctness is a mathematical and analytical technique that formally
demonstrates that a program does its work correctly; facts and figures about
the program are used to argue that it produces correct results. For example,
showing that a restaurant automation system takes correct inputs and always
produces correct bills is a proof of correctness for that behaviour.
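The idea can be illustrated with a small sketch. The billing function and its correctness property below are invented for illustration, not taken from the text; they show how a property ("the total is never below the subtotal") can be stated and asserted.

```python
# Hypothetical restaurant-bill function; all names are illustrative.
def compute_bill(prices, tax_rate):
    """Return the total bill: sum of item prices plus tax."""
    subtotal = sum(prices)
    return subtotal + subtotal * tax_rate

def check_bill_property(prices, tax_rate):
    """Assert a property we can argue mathematically: for non-negative
    prices and tax rate, the total is never less than the subtotal."""
    total = compute_bill(prices, tax_rate)
    assert total >= sum(prices), "total must not be below the subtotal"
    return total

print(check_bill_property([120.0, 80.0, 50.0], 0.05))
```

A full proof of correctness would establish the property for all valid inputs, not just sampled ones; the assertion here only checks individual runs.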
4. Simulation and prototyping
Simulation is a model-building technique which helps us understand the
software product with more clarity. There are many approaches to understanding
the needs and workings of software, but simulating it is among the best.
In simulation we build a dummy model of the software and test it against
our requirements with random sets of inputs.
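As a minimal sketch of this idea (the queue model and its parameters are invented, not from the text), a dummy model can be driven with random inputs while a property that must always hold is asserted on every run:

```python
import random

# A dummy model of a service queue, standing in for the real software.
def simulate_queue(arrivals, service_rate):
    """Return how many customers are still waiting after one service pass."""
    served = min(arrivals, service_rate)
    return arrivals - served

random.seed(7)  # fixed seed so the dummy run is repeatable
for _ in range(100):
    arrivals = random.randint(0, 20)
    waiting = simulate_queue(arrivals, service_rate=10)
    # The model must never report a negative number of waiting customers,
    # nor more waiting customers than arrived.
    assert 0 <= waiting <= arrivals
```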
5. Requirement tracing
Requirement tracing follows the user's requirements, because they may vary as
time passes. There are many situations where users ask the developers to make
changes in the software product for their own convenience, so in that case it
is always important to trace the user's requirements and specifications.
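One simple way to trace requirements is a traceability matrix mapping each requirement to the test cases that cover it. The requirement IDs and test names below are hypothetical, chosen only for illustration:

```python
# Minimal requirements-traceability sketch; IDs and test names are invented.
req_matrix = {
    "REQ-01": ["test_login"],
    "REQ-02": ["test_billing", "test_receipt"],
    "REQ-03": [],  # a changed requirement with no covering test yet
}

def untraced(matrix):
    """Return requirement IDs that no test case covers yet."""
    return [req for req, tests in matrix.items() if not tests]

print(untraced(req_matrix))  # → ['REQ-03']
```

When the user changes a requirement, the matrix shows immediately which tests must be revisited and which requirements have lost their coverage.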
Self-Assessment Questions
Exercise-I
According to the IEEE/ANSI definition, "It is the process of evaluating a system or
component during or at the end of the development process to determine whether it
satisfies the specified requirements." Validation is therefore 'end to end' verification
[IEEE Education Society].
By now we know what verification and validation are, so let us try to
understand the difference between these two terms.
Integration testing, i.e. combining two or more units and checking that the
result performs satisfactorily.
System testing, i.e. compiling the complete system/software to check its
performance.
Acceptance testing, i.e. checking whether the end user accepts the product or not.
Audit
Fig 12.2
Verification Activities
Self-Assessment Questions
Exercise -II
Fig 12.3
12.13 Difficulty & Importance of Verification and Validation
Fig 12.4
Fig 12.5
Process of verification validation with credibility
Self-Assessment Questions
Exercise –III
The software requirement specification, agreed between the user and the
development team, correctly describes the functional requirements of the
system to be built or extended.
The graphical user interface, which includes appearance, look and feel, and
output formats.
Non-functional requirements, which include security requirements, user
friendliness, etc.
Any ambiguity in the definition of the application, its specific content, and
the formulation of requirements.
1. The notation of the specified design language should be used correctly.
2. Design: any mismatch between the functionality expected by the customer and
the design functionality.
3. Any mismatch between the user interface expected by the customer and the
one developed by the development team.
4. This phase ensures that the requirements and the capabilities stated earlier
in each requirement are actually delivered when the system is implemented.
This is the most important phase in software engineering, and it takes a lot of
time and effort to implement the actual product/software at the user's workplace
and in the user's environment. The phase focuses on the following.
Following up on the above phases, the testing phase is the most important phase
of verification and validation. In this phase system testing is performed in the
development environment as well as in the environment provided by the user. The
objective is to make sure that the functional and non-functional requirements
are met as per the user's business necessities. This phase ensures:
1. Random testing
2. Use case based testing
3. Acceptance testing
Once the system is installed at the user's workplace, it is time to maintain
the software in good condition so that it does not fade away. This phase
includes the following.
Firstly, our company's core team will have a meeting with the core team of the
college and finalize the software product's basic structure, also known as the
Software Requirement Specification (SRS), which is later signed by the software
producer and the user. Then the management of our company will nominate a team
to develop that software product. The nominated team will then conduct regular
meetings with the college faculty to understand their requirements for the
software product.
As discussed earlier in the verification and validation life cycle, the
nominated team will conduct regular sessions with the college staff and gather
all the information that will help them develop a good software product. These
meetings will help the team gain knowledge about these phases.
And Finally
In this phase the nominated team will check the software product they have made
with random and unexpected inputs. The development team will check the
integration between the various modules to make a better software product.
When the development team installs the software product at the user's
workplace, the user will check it against his or her specified needs; if the
needs specified in the SRS are met by the software product, the user accepts it.
Then the maintenance phase of the software product is started by the
development team, which keeps updating the product with new technologies,
hardware support, and changes suggested by the user.
Self-Assessment Questions
Exercise –IV
12.16 Summary
This chapter helped us learn various testing techniques as well as the module
checking of software products. Verification and validation are an important
part of testing a software product: verification tells us whether we are going
in the right direction towards the final software product, and at the end of
the whole procedure validation checks the complete software to make sure that
it meets the customer's business requirements.
12.17 Glossary
Acceptance Testing: This type of testing is conducted at the user's workplace
and tells the software developers whether the user accepts the software product
or not.
CMM: The Capability Maturity Model for Software is a model for judging the
maturity of the software processes of an organization and for identifying the
key practices that are required to increase the maturity of these processes.
Exercise-I
1) Acceptance Testing: This type of testing is conducted at the user's
workplace and tells the software developers whether the user accepts the
software product or not.
2) Agile Testing: Testing practice for projects using agile methodologies, treating
development as the customer of testing and emphasizing a test-first design paradigm.
4) Validation
“Are we working on the right system?”
Does our software product precisely capture the actual problem?
Does our product satisfy the needs of all the participants?
Verification
Answers
Exercise-II
1) Black Box Testing: Based on the study of a software product without
reference to its internal workings. The aim is to test how well the component
conforms to the published requirements for the component.
2) CMM: The Capability Maturity Model for Software is a model for judging the
maturity of the software processes of an organization and for identifying the
key practices that are required to increase the maturity of these processes.
Answers
Exercise-III
1) Quality Assurance: All those planned or methodical actions required to
provide sufficient confidence that a software product is of the type and
quality needed and expected, as per the customer's needs.
3) Stress Testing: Testing the software product at its limits and with random
inputs to evaluate its performance under worst-case conditions.
Answers
Exercise-IV
1) This is the most important phase in software engineering, and it takes a lot
of time and effort to implement the actual product/software at the user's
workplace and in the user's environment.
2) The integration phase brings all the modules together to integrate the
complete software. Ensuring dynamic validation is an important part of this
phase. The purpose of this phase is to get the interface between the various
modules right.
1. Correct assignment of actual parameters to formal parameters.
2. Correct assignment of values of variables.
3. Correct sequence of execution of modules.
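The three checks above can be sketched with a couple of toy modules. All function names and values below are invented for illustration; they are not from the text:

```python
# Module A: a hypothetical discount routine.
def apply_discount(amount, discount):
    return amount - discount

# Module B: calls module A; this is the integration point under test.
def billing_total(amount, discount):
    return apply_discount(amount, discount)

# 1. Correct assignment of actual to formal parameters
#    (swapping the arguments would yield -90.0, not 90.0).
assert apply_discount(100.0, 10.0) == 90.0
# 2. Correct assignment of values of variables across the interface.
assert billing_total(100.0, 10.0) == 90.0

# 3. Correct sequence of execution of modules, recorded in a trace list.
trace = []
def open_session(): trace.append("open")
def process(): trace.append("process")
def close_session(): trace.append("close")

def run():
    open_session()
    process()
    close_session()

run()
assert trace == ["open", "process", "close"]
```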
Lesson-13
Structure
13.0 Objectives
13.1 Introduction
13.4 Re-engineering
13.6 Summary
13.7 Glossary
13.9 References
13.0 Objectives
13.1 Introduction
Not all software is developed following the principles of software engineering.
Hence the documents related to the software's development and its various
phases are often not available. Reverse engineering helps to generate those
documents. In this lesson you will learn about the need for reverse
engineering, re-engineering, and Web engineering.
Computer Aided Software Engineering (CASE) tools are built to improve the
effort spent on software development and maintenance. CASE tools help in
automating the activities used in the software development process. In this
chapter we will discuss CASE tools and the CASE environment. There are many
CASE tools; some assist in phase-related tasks like specification, analysis,
design, coding, etc., and some assist in non-phase-related tasks like
configuration management.
The main purpose of using CASE tools is to develop software at low cost, of
high quality, and with increased productivity.
Many CASE tools are integrated to work in a CASE environment. While working
in a CASE environment the power of CASE tools increases to a great extent. A
single CASE tool can also be used for some purpose, but it is not as powerful
when working individually. Format conversion may also be needed if different
CASE tools are not integrated, because the data generated by one tool cannot
be given as input to another CASE tool without format conversion. This may
result in extra effort in exporting and importing data.
When CASE tools work in a common environment they share information
among themselves with the help of a central repository. As different tools
work at different stages, they need to integrate their information to have a
consistent view of the associated information of the software. The central
repository is a data dictionary which contains the definitions of all
elementary and composite data items. To understand this easily, let us take
the example of a programming environment: an integrated collection of tools
that supports the coding phase of software development. The tools that can be
integrated in this programming environment are a text editor, a compiler, and
a debugger. They work collectively by sharing information among themselves;
when the compiler detects an error, the editor automatically goes to the
statement in error, and that statement is highlighted.
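The repository idea can be sketched in a few lines. The tool names and the data-dictionary entry below are invented for illustration; they do not correspond to any actual CASE product's API:

```python
# A toy central repository (data dictionary) shared by two "tools".
repository = {}  # item name -> definition

def analysis_tool_store(name, definition):
    """The analysis tool records a data item it has captured."""
    repository[name] = definition

def design_tool_lookup(name):
    """The design tool reads the same item, so both tools see one
    consistent definition instead of exchanging converted files."""
    return repository.get(name, "<undefined>")

analysis_tool_store("customer_id", "elementary item: 6-digit integer")
print(design_tool_lookup("customer_id"))
```

Because both tools read and write the same repository, no export/import or format conversion is needed between them, which is exactly the benefit the CASE environment provides.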
The CASE environment has a collection of many CASE tools which support
the software development life cycle. CASE tools are most useful when the user
requirements are so complex that they need to be projected easily with these
tools. The following are some types of CASE tools:
Project planning and management tools: These tools are used at the
beginning of the development process for planning and project management.
In this phase of the software development life cycle the initial stages of
the project are taken care of and planned.
Fig 13.1 – Types of CASE Tools
Analysis tools: Analysis tools are used to check the requirements of the
project, i.e. whether all the necessary requirements have been captured or
not. Analysis tools have to ensure that the captured requirements are correct,
consistent, and complete.
Design tools: Design tools are used for detailed specification of the
project. These tools are required to support the drawing of design
diagrams effortlessly and also through different levels of hierarchy.
Database design tool: This tool is used to design the database according
to specification and generate system control information.
Report generator: This tool is used to generate reports based on the
specification.
Benefits of CASE tools:
Cost saving: With the help of CASE tools there is an effective reduction of
cost, as once these tools are implemented we can use them again and again.
They reduce the effort by 30% to 40%, which automatically reduces the cost.
Improves quality: These tools also improve quality, as the chance of human
error is reduced and one can iterate through the different phases of the
development process.
Consistent documents: In the CASE environment all tools save their data in a
central repository, so redundancy is reduced and the data exists in a
consistent state.
The first step in reverse engineering is to focus on the code. The
readability, structure, and understandability of the code are improved first,
and its functionality is changed later as needed. Changing the code can be a
problem, as sometimes complex control structures and unpredictable variable
names are used, which create difficulties in this process. So there should
always be simple control structures and consistent variable and function
names. After these changes are made to the code, the other processes involved
in software development can be carried out, like design, requirement
specification, etc.
[Figure: abstraction levels in reverse engineering — Requirements
specification, Design, Module specification, Code]
In the design phase the reverse engineering tools read the code and produce an
appropriate graphical and textual representation of the design. Some automatic
tools can also be used to derive data flow and control flow diagrams from the
code. The structure chart is also necessary for the design phase, and at the
end the specifications can be written once the code and design have been
extracted.
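As a rough illustration of deriving structure from code, the sketch below uses Python's standard `ast` module to extract which function calls which — a crude input for a structure chart or control-flow view. The sample source is invented for this example:

```python
import ast

# Invented sample source; a real tool would read it from a file.
source = """
def load(): pass
def report(): load()
def main(): report()
"""

# Walk the syntax tree and record, for each function, the functions it calls.
calls = {}
tree = ast.parse(source)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        callees = [n.func.id for n in ast.walk(node)
                   if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)]
        calls[node.name] = callees

print(calls)  # → {'load': [], 'report': ['load'], 'main': ['report']}
```

Real reverse-engineering tools go much further (data flow, object models, UML), but the principle is the same: parse the code into a structured form, then project design information out of it.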
A somewhat high level of abstraction is capable of deriving program and data
structure information, a relatively high level of abstraction is capable of
deriving object models, data models, and control flow models, and a high
level of abstraction is used to derive UML class, state, and deployment
diagrams. As the abstraction level increases, the understandability of the
program increases. Completeness in reverse engineering refers to the level of
detail provided at an abstraction level. If the directionality is one-way,
the information extracted can be used by the engineer for maintenance, but if
it is two-way, it can be used to restructure the old program, as is done in
the re-engineering process.
13.4 Re-engineering
Fig 13.3 – Re-engineering
This approach, re-engineering, is one way of performing maintenance. There
are also other ways to maintain a project. Re-engineering is done when there
is a significant need to rework the code, or the project as a whole. We will
discuss one approach which does not require as much work as re-engineering;
it is used when only a little work needs to be done on the old project. In
this approach, new requirements are gathered which call for changes to the
existing project. These changed requirements are then analyzed. If the
changed requirements are feasible and sound, new code strategies are
developed and applied. Finally, we update the documents and test the new
code. In this way the old project is modified and maintained. For this
approach the old project team is always best placed to modify the project, as
they have proper insight into the old project. This approach also makes
debugging very easy, as traces of both programs can be compared to locate
bugs, and the team is familiar with the code. But this approach does not
produce a restructured design, unlike re-engineering.
Re-engineering tools read the source code and change the existing project to
improve the quality and performance of the new project. In the re-engineering
process the new project offers more efficiency, a more structured design, and
good documentation. The drawback of re-engineering is that it is very costly.
The re-engineering approach is preferred for projects that suffer from high
failure rates, poor design, and poor code structure. This approach brings the
project up to date with new technology, and the new project becomes
restructured and re-documented.
13.5 Web Engineering
The layers of Web engineering may be categorized as Process, Methods, and
Technologies (Tools), which is conceptually equivalent to the layers of
software engineering.
Process
As there are a huge number of Internet users, most Web development efforts
are directed towards reducing development cycle times. In the process layer
the problem must be recognized properly, to ensure that rapid cycle times do
not dominate the development thinking. If they do, the problem needs to be
re-thought, a design should be developed, and an organized testing approach
must be adopted. These Web development activities are framed within a process
that:
1. Accepts change.
2. Inspires the development team and their creativity, and builds strong
relations with WebApp stakeholders.
3. Builds Web systems using a small development staff.
4. Uses short development cycles in the process of development.
Methods
The methods of Web development include technical tasks which help Web
engineers understand and build a high-quality WebApp. The methods used in Web
development are discussed as follows:
13.6 Summary
CASE tools are a set of software application programs which are used to
automate SDLC activities. CASE tools are used by software project managers,
analysts, and engineers to develop software systems. CASE tools are available
to simplify various stages of the software development life cycle: analysis
tools, design tools, project management tools, database management tools, and
documentation tools, to name a few. Reverse engineering is taking apart an
object to see how it works in order to duplicate or enhance the object. The
practice, taken from older industries, is now frequently used on computer
hardware and software. Reverse engineering, also called back engineering, is
the process of extracting knowledge or design information from anything
man-made and reproducing it, or reproducing anything based on the extracted
information.
13.7 Glossary:
1.
Analysis tools
Design tools
Information Integrator
Code Generator
2. Improve quality
Cost saving
Reduced time consumption
3. UML: Unified Modeling Language
4.
13.9 References:
1. http://www.slideshare.net/rhspcte/software-engineering-ebook-roger-s-pressman