
NOTES

ON

SOFTWARE ENGINEERING AND


PROJECT MANAGEMENT

B.E - III YEAR – SEM 6

(2023-24)

DEPARTMENT OF

ARTIFICIAL INTELLIGENCE & DATA SCIENCE


Module 2
Requirement Analysis and Cost Estimation
REQUIREMENTS ENGINEERING
Requirements engineering provides the appropriate mechanism for understanding what the customer
wants, analyzing need, assessing feasibility, negotiating a reasonable solution, specifying the solution
unambiguously, validating the specification, and managing the requirements as they are transformed
into an operational system.

The requirements engineering process can be described in six distinct steps:

• requirements elicitation

• requirements analysis and negotiation

• requirements specification

• system modeling

• requirements validation

• requirements management

a) Requirements Elicitation

It certainly seems simple enough—ask the customer, the users, and others what the objectives for the
system or product are, what is to be accomplished, how the system or product fits into the needs of the
business, and finally, how the system or product is to be used on a day-to-day basis. But it isn’t simple—
it’s very hard.

Requirements elicitation involves a number of people in an organization.

Stakeholder definition -- refers to any person or group who will be affected by the system, directly or indirectly, e.g., end users, engineers, business managers, and domain experts.
Reasons why elicitation is difficult
1. Stakeholders often don't know what they want from the computer system.
2. Stakeholders' expression of requirements in natural language is sometimes difficult to understand.
3. Different stakeholders express requirements differently.
4. Political and organizational factors influence the requirements.
5. Requirements change because of dynamic environments.

REQUIREMENTS DISCOVERY TECHNIQUES

1. Viewpoints -- Based on the viewpoints expressed by the stakeholders.
-- Recognizes multiple perspectives and provides a framework for discovering conflicts in the requirements proposed by different stakeholders.
Three generic types of viewpoints:
1. Interactor viewpoint -- represents people or other systems that interact directly with the system.
2. Indirect viewpoint -- stakeholders who influence the requirements but do not use the system.
3. Domain viewpoint -- domain characteristics and constraints that influence the requirements.
2. Interviewing -- Puts questions to stakeholders about the system that they use and the system to be developed. Requirements are derived from the answers.
Two types of interview:
– Closed interviews, where the stakeholders answer a pre-defined set of questions.
– Open interviews, which discuss a range of issues with the stakeholders to better understand their needs.
Effective interviewers are:
a) Open-minded: they have no pre-conceived ideas.
b) Prompters: they prompt the interviewee to start the discussion with a question or a proposal.
3. Scenarios -- It is easier to relate to real-life examples than to abstract descriptions. A scenario starts with an outline of the interaction, and during elicitation, details are added to create a complete description of that interaction.
A scenario includes:
1. A description of the situation at the start of the scenario.
2. A description of the normal flow of events.
3. A description of what can go wrong and how this is handled.
4. Information about other activities that run in parallel with the scenario.
5. A description of the system state when the scenario finishes.

The work products produced as a consequence of the requirements elicitation activity will vary
depending on the size of the system or product to be built. For most systems, the work products
include
• A statement of need and feasibility.
• A bounded statement of scope for the system or product.
• A list of customers, users, and other stakeholders who participated in the requirements elicitation activity.
• A description of the system's technical environment.
• A list of requirements (preferably organized by function) and the domain constraints that apply to each.
• A set of usage scenarios that provide insight into the use of the system or product under different operating conditions.
• Any prototypes developed to better define requirements.
Each of these work products is reviewed by all people who have participated in the requirements elicitation.

b) Requirements Analysis and Negotiation


Once requirements have been gathered, the work products noted earlier form the basis for
requirements analysis. Analysis categorizes requirements and organizes them into related
subsets; explores each requirement in relationship to others; examines requirements for
consistency, omissions, and ambiguity; and ranks requirements based on the needs of
customers/users. As the requirements analysis activity commences, the following questions are asked and answered:
• Is each requirement consistent with the overall objective for the system/product?
• Have all requirements been specified at the proper level of abstraction? That is, do some requirements provide a level of technical detail that is inappropriate at this stage?
• Is the requirement really necessary or does it represent an add-on feature that may not
be essential to the objective of the system?
• Is each requirement bounded and unambiguous?
• Does each requirement have attribution? That is, is a source (generally, a specific
individual) noted for each requirement?
• Do any requirements conflict with other requirements?
• Is each requirement achievable in the technical environment that will house the system
or product?
• Is each requirement testable, once implemented?
It isn't unusual for customers and users to ask for more than can be achieved, given limited business resources. It also is relatively common for different customers or users to propose conflicting requirements, arguing that their version is "essential for our special needs."
c) Requirements Specification
In the context of computer-based systems (and software), the term specification means different
things to different people. A specification can be a written document, a graphical model, a formal
mathematical model, a collection of usage scenarios, a prototype, or any combination of these.
Some suggest that a “standard template” should be developed and used for a system
specification, arguing that this leads to requirements that are presented in a consistent and
therefore more understandable manner. However, it is sometimes necessary to remain flexible
when a specification is to be developed. For large systems, a written document, combining
natural language descriptions and graphical models may be the best approach. However, usage
scenarios may be all that are required for smaller products or systems that reside within well-
understood technical environments. The System Specification is the final work product produced
by the system and requirements engineer. It serves as the foundation for hardware engineering,
software engineering, database engineering, and human engineering. It describes the function
and performance of a computer-based system and the constraints that will govern its
development. The specification bounds each allocated system element. The System Specification
also describes the information (data and control) that is input to and output from the system.
d) System Modeling
Assume for a moment that you have been asked to specify all requirements for the construction
of a gourmet kitchen. You know the dimensions of the room, the location of doors and windows,
and the available wall space. You could specify all cabinets and appliances and even indicate
where they are to reside in the kitchen. Would this be a useful specification? The answer is
obvious. In order to fully specify what is to be built, you would need a meaningful model of the
kitchen, that is, a blueprint or three-dimensional rendering that shows the position of the cabinets
and appliances and their relationship to one another. From the model, it would be relatively easy
to assess the efficiency of the work flow.
e) Requirements Validation
The work products produced as a consequence of requirements engineering (a system
specification and related information) are assessed for quality during a validation step.
Requirements validation examines the specification to ensure that all system requirements have
been stated unambiguously; that inconsistencies, omissions, and errors have been detected and
corrected; and that the work products conform to the standards established for the process, the
project, and the product. The primary requirements validation mechanism is the formal technical
review. The review team includes system engineers, customers, users, and other stakeholders
who examine the system specification looking for errors in content or interpretation, areas
where clarification may be required, missing information, inconsistencies, conflicting
requirements, or unrealistic (unachievable) requirements. Although the requirements validation
review can be conducted in any manner that results in the discovery of requirements errors, it is
useful to examine each requirement against a set of checklist questions. The following questions
represent a small subset of those that might be asked:
• Are requirements stated clearly? Can they be misinterpreted?
• Is the source (e.g., a person, a regulation, a document) of the requirement identified? Has the
final statement of the requirement been examined by or against the original source?
• Is the requirement bounded in quantitative terms?
• What other requirements relate to this requirement? Are they clearly noted
via a cross-reference matrix or other mechanism?
• Does the requirement violate any domain constraints?
• Is the requirement testable? If so, can we specify tests (sometimes called validation criteria) to
exercise the requirement?
• Is the requirement traceable to any system model that has been created?
• Is the requirement traceable to overall system/product objectives?
f) Requirements Management
Requirements management is a set of activities that help the project team to identify, control, and
track requirements and changes to requirements at any time as the project proceeds. Many of these
activities are identical to the software configuration management techniques. Like SCM,
requirements management begins with identification. Each requirement is assigned a unique
identifier that might take the form <requirement type><requirement #> where requirement type
takes on values such as
F = functional requirement,
D = data requirement, B = behavioral requirement, I = interface requirement, and P = output
requirement. Hence, a requirement identified as F09 indicates a functional requirement assigned
requirement number 9. Once requirements have been identified, traceability tables are developed.
Among many possible traceability tables are the following:
Features traceability table. Shows how requirements relate to important customer observable
system/product features.
Source traceability table. Identifies the source of each requirement.
Dependency traceability table. Indicates how requirements are related to one another.
Subsystem traceability table. Categorizes requirements by the subsystem(s) that they govern.
Interface traceability table. Shows how requirements relate to both internal and external system
interfaces. In many cases, these traceability tables are maintained as part of a requirements
database so that they may be quickly searched to understand how a change in one requirement will
affect different aspects of the system to be built.
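As a rough illustration of the identifier scheme and dependency traceability table described above, the sketch below (plain Python, with hypothetical requirement IDs and requirement texts invented purely for the example) shows how a change-impact query might be answered from such a table:

    # Illustrative sketch (not from the notes): a minimal requirements store with
    # identifiers of the form <requirement type><requirement #> and a dependency
    # traceability table used to answer "what is affected if this changes?"
    from collections import defaultdict

    requirements = {
        "F09": "The system shall let a registered user place an order.",    # functional
        "D02": "Order records shall retain the purchase timestamp.",        # data
        "I03": "The system shall expose order status via a REST interface." # interface
    }

    # Dependency traceability table: requirement -> requirements it depends on.
    dependencies = defaultdict(set)
    dependencies["F09"].update({"D02", "I03"})

    def impacted_by(req_id):
        """Return the requirements that would be affected if req_id changes."""
        return {r for r, deps in dependencies.items() if req_id in deps}

    print(impacted_by("D02"))   # {'F09'} -> changing D02 impacts F09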
THE SOFTWARE REQUIREMENTS SPECIFICATION
The Software Requirements Specification is produced at the culmination of the analysis task. The
function and performance allocated to software as part of system engineering are refined by
establishing a complete information description, a detailed functional description, a representation
of system behavior, an indication of performance requirements and design constraints, appropriate
validation criteria, and other information pertinent to requirements.
The Introduction of the software requirements specification states the goals and objectives of
the software, describing it in the context of the computer-based system. Actually, the
Introduction may be nothing more than the software scope of the planning document.
The Information Description provides a detailed description of the problem that the software
must solve. Information content, flow, and structure are documented. Hardware, software, and
human interfaces are described for external system elements and internal software functions.
Functional Description. A description of each function required to solve the problem is
presented in the Functional Description. A processing narrative is provided for each function,
design constraints are stated and justified, performance characteristics are stated, and one or
more diagrams are included to graphically represent the overall structure of the software and
interplay among software functions and other system elements. The Behavioral Description
section of the specification examines the operation of the software as a consequence of external
events and internally generated control characteristics.
Validation Criteria is probably the most important and, ironically, the most often neglected
section of the Software Requirements Specification. How do we recognize a successful
implementation? What classes of tests must be conducted to validate function, performance, and
constraints? We neglect this section because completing it demands a thorough understanding of
software requirements—something that we often do not have at this stage. Yet, specification of
validation criteria acts as an implicit review of all other requirements. It is essential that time and
attention be given to this section.
Finally, the specification includes a Bibliography and Appendix. The bibliography contains
references to all documents that relate to the software. These include other Functional
Description software engineering documentation, technical references, vendor literature, and
standards. The appendix contains information that supplements the specifications. Tabular data,
detailed description of algorithms, charts, graphs, and other material are presented as appendixes.
Format of SRS
1. Introduction

1.1 Purpose

1.2 Intended Audience

1.3 Intended Use

1.4 Product Scope

1.5 Definitions and Acronyms


2. Overall Description

2.1 User Needs

2.2 Assumptions and Dependencies

3. System Features and Requirements

3.1 Functional Requirements

3.2 External Interface Requirements

3.3 System Features

3.4 Nonfunctional Requirements


4. Change Management Process
5. Documents Approval
6. Supporting Information

Uses of SRS document


a) The development team requires it to develop the product according to the needs.
b) Test plans are generated by the testing group based on the described external behavior.
c) Maintenance and support staff need it to understand what the software product is
supposed to do.
d) Project managers base their plans and estimates of schedule, effort, and resources on it.
e) Customers rely on it to know what product they can expect.
f) It serves as a contract between the developer and the customer.
g) It serves documentation purposes.
The 3Ps in the software project spectrum refer to People, Process, and Product. These are three
key elements that play a crucial role in the success of a software project. Here's a brief
explanation of each:
People: People are the individuals involved in the software project, including stakeholders,
project managers, developers, testers, and end-users. The success of a project heavily depends on
the skills, expertise, and collaboration of the people involved. Effective communication,
teamwork, and leadership are essential for successful project execution.
Process: Process refers to the set of activities, methods, and procedures followed to develop and
deliver the software. It includes project planning, requirement gathering, design, coding, testing,
deployment, and maintenance. A well-defined and structured process ensures that the project
progresses smoothly, minimizes risks, and delivers a high-quality product within the specified
time and budget.
Product: Product represents the software or system being developed. It includes the features,
functionality, and usability that meet the requirements and expectations of the stakeholders. The
product should be reliable, scalable, maintainable, and user-friendly. Continuous monitoring,
feedback, and improvement are necessary to ensure that the product meets the desired objectives.
The 3Ps are interconnected and influence each other throughout the software project lifecycle.
Neglecting any of these elements can lead to project failure. Therefore, it is crucial to focus on
all three aspects and strike a balance between them to achieve successful outcomes.
PROJECT ESTIMATION
In the early days of computing, software costs constituted a small percentage of the overall
computer-based system cost. An order of magnitude error in estimates of software cost had
relatively little impact. Today, software is the most expensive element of virtually all computer-
based systems. For complex, custom systems, a large cost estimation error can make the difference
between profit and loss. Cost overrun can be disastrous for the developer. Software cost and effort
estimation will never be an exact science. Too many variables— human, technical, environmental,
political—can affect the ultimate cost of software and effort applied to develop it. However,
software project estimation can be transformed from a black art to a series of systematic steps that
provide estimates with acceptable risk.
1. Software Sizing
The accuracy of a software project estimate is predicated on a number of things: (1) the degree to
which the planner has properly estimated the size of the product to be built; (2) the ability to
translate the size estimate into human effort, calendar time, and dollars (a function of the
availability of reliable software metrics from past projects); (3) the degree to which the project
plan reflects the abilities of the software team; and (4) the stability of product requirements and
the environment that supports the software engineering effort. The “size” of software to be built
can be estimated using a direct measure, Lines of Code (LOC), or an indirect measure, Function
Point (FP).
In this section, we consider the software sizing problem. Because a project estimate is only as
good as the estimate of the size of the work to be accomplished, sizing represents the project
planner’s first major challenge. In the context of project planning, size refers to a quantifiable
outcome of the software project.
There are four different approaches to the sizing problem: "fuzzy logic" sizing, function point sizing, standard component sizing, and change sizing.
2. LINE OF CODE (LOC)
A line of code (LOC) is any line of program text that is not a comment or a blank line, including header lines, regardless of the number of statements or statement fragments on the line. LOC therefore includes all lines containing variable declarations and both executable and non-executable statements. Because Lines of Code (LOC) only counts the volume of code, it can only be used to compare or estimate projects that use the same language and follow the same coding standards.
Features of Lines of Code (LOC):
Change Tracking: Variations in LOC over time can be tracked to analyze the growth or reduction of a codebase, providing insight into project progress.
Limited Representation of Complexity: Although LOC gives a general idea of code size, it does not accurately depict code complexity; two programs with the same LOC can differ greatly in complexity.
Ease of Computation: LOC is an easy measure to obtain because it is easy to calculate and takes
little time.
Easy to Understand: The idea of expressing code size in terms of lines is one that stakeholders,
even those who are not technically inclined, can easily understand.
Advantages of Lines of Code (LOC):
1. Effort Estimation: LOC is occasionally used to estimate development efforts and project
deadlines at a high level. Although caution is necessary, project planning can begin with
this.
2. Comparative Analysis: High-level productivity comparisons between several projects or
development teams can be made using LOC. It might provide an approximate figure of the
volume of code generated over a specific time frame.
3. Benchmarking Tool: When comparing various iterations of the same program, LOC can
be used as a benchmarking tool. It may bring information on how modifications affect the
codebase’s total size.
Disadvantages of Lines of Code (LOC):
1. Challenges in Agile Work Environments: Focusing on initial LOC estimates may not
adequately reflect the iterative and dynamic nature of development in agile development,
as requirements may change.
2. Not Taking External Libraries into Account: Code from external libraries or frameworks, which can greatly enhance a project's overall usefulness, is not counted by LOC.
3. Challenges with Maintenance: Codebases with a higher LOC are larger and typically demand more maintenance work.
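As a rough illustration of the counting rule described above, the following sketch counts physical LOC in a Python source string, excluding blank lines and full-line comments; organizations differ on the exact rules, so treat this as one possible convention rather than a standard:

    # Minimal physical-LOC counter: non-blank lines that are not full-line comments.
    def count_loc(source_text, comment_prefix="#"):
        loc = 0
        for line in source_text.splitlines():
            stripped = line.strip()
            if stripped and not stripped.startswith(comment_prefix):
                loc += 1
        return loc

    sample = """# demo module
    x = 1

    def add(a, b):
        return a + b
    """
    print(count_loc(sample))  # 3 (the comment and the blank line are excluded)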

3. FUNCTION POINT (FP)


Function Point Analysis gives a dimensionless number, expressed in function points, which has been found to be an effective relative measure of the functional value delivered to the customer.
Objectives of Functional Point Analysis:
Encourage Approximation: FPA helps in the estimation of the work, time and materials needed to
develop a software project. Organizations are able to plan and manage projects more accurately
when a common measure of functionality is available.
To assist with project management: Project managers can monitor and manage software
development projects with the help of FPA. Managers are able to evaluate productivity, monitor
progress, and make well-informed decisions about resource allocation and project timeframes by
measuring the software’s functional points.
Comparative analysis: By enabling benchmarking, it gives businesses the ability to assess how
their software projects measure up to industry standards or best practices in terms of size and
complexity. This can be useful for determining where improvements might be made and for
evaluating how well development procedures are working.
Improve Your Cost-Benefit Analysis: It offers a foundation for assessing the value provided by
the program in respect to its size and complexity, which helps with cost-benefit analysis. Making
educated judgements about project investments and resource allocations can benefit from having
access to this information.
Comply with Business Objectives: It assists in coordinating software development activities with
an organization’s business objectives. It guarantees that software development efforts are directed
towards providing value to end users by concentrating on user-oriented functionality.
Types of Functional Point Analysis:
There are two types of Functional Point Analysis:
1. Transactional Functional Type
External Input (EI): EI processes data or control information that comes from outside the
application’s boundary. The EI is an elementary process.
External Output (EO): EO is an elementary process that generates data or control information sent
outside the application’s boundary.
External Inquiries (EQ): EQ is an elementary process made up of an input-output combination that
results in data retrieval.
2. Data Functional Type
Internal Logical File (ILF): A user-identifiable group of logically related data or control
information maintained within the boundary of the application.
External Interface File (EIF): A user-recognizable group of logically related data that is referenced by the software but maintained within the boundary of another application.

Characteristics of Functional Point Analysis:


We calculate the functional point with the help of the number of functions and types of functions
used in applications. These are classified into five types:
Measurement Parameter                         Examples

Number of External Inputs (EI)                Input screens and tables

Number of External Outputs (EO)               Output screens and reports

Number of External Inquiries (EQ)             Prompts and interrupts

Number of Internal Logical Files (ILF)        Databases and directories

Number of External Interface Files (EIF)      Shared databases and shared routines

Function points help describe system complexity and also give an indication of the project timeline.

Example of Function Point

Finally, the estimated number of FP is derived:

FP estimated = count total × [0.65 + 0.01 × Σ(Fi)]
FP estimated = 375
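The count total and Fi ratings behind the value of 375 are not reproduced here, so the sketch below applies the same formula to assumed counts (using commonly quoted average-complexity weights) purely to show the arithmetic:

    # Hedged worked sketch of the FP formula above; the counts, weights, and Fi
    # ratings are assumed for illustration and are NOT the values behind "375".
    weights = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}   # assumed average weights
    counts  = {"EI": 24, "EO": 16, "EQ": 22, "ILF": 4, "EIF": 2} # assumed counts

    count_total = sum(counts[k] * weights[k] for k in counts)

    # Fourteen value adjustment factors Fi, each rated 0 (no influence) to 5 (essential).
    fi = [3] * 14                                   # assumed ratings
    fp = count_total * (0.65 + 0.01 * sum(fi))
    print(count_total, round(fp, 1))                # e.g. 318 and 340.3 for these inputs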
4. USE CASE POINTS ESTIMATION (UCP)
A Use-Case is a series of related interactions between a user and a system that enables the user to
achieve a goal. Use-Cases are a way to capture functional requirements of a system. The user of
the system is referred to as an ‘Actor’. Use-Cases are fundamentally in text form.
Use-Case Points – Definition
Use-Case Points (UCP) is a software estimation technique used to measure the software size with
use cases. The concept of UCP is similar to FPs.
The number of UCPs in a project is based on the following −
a) The number and complexity of the use cases in the system.
b) The number and complexity of the actors on the system.
c) Various non-functional requirements (such as portability, performance, maintainability)
that are not written as use cases.
d) The environment in which the project will be developed (such as the language, the team's
motivation, etc.)
Estimation with UCPs requires all use cases to be written with a goal and at approximately the
same level, giving the same amount of detail. Hence, before estimation, the project team should
ensure they have written their use cases with defined goals and at a detailed level. A use case is normally completed within a single session; after the goal is achieved, the user may go on to some other activity.
Use-Case Points Counting Process
The Use-Case Points counting process has the following steps −
Step 1: Calculate Unadjusted Use-Case Points.
You calculate Unadjusted Use-Case Points first, by the following steps −

Step 1.1 − Determine Unadjusted Use-Case Weight.

Step 1.1.1 − Find the number of transactions in each Use-Case.

If the Use-Cases are written with User Goal Levels, a transaction is equivalent to a step in the Use-
Case. Find the number of transactions by counting the steps in the Use-Case.

Step 1.1.2 − Classify each Use-Case as Simple, Average or Complex based on the number of
transactions in the Use-Case. Also, assign Use-Case Weight as shown in the following table −

Use-Case Complexity Number of Transactions Use-Case Weight

Simple ≤3 5

Average 4 to 7 10

Complex >7 15

Step 1.1.3 − Repeat for each Use-Case and get all the Use-Case Weights. Unadjusted Use-Case
Weight (UUCW) is the sum of all the Use-Case Weights.

Step 1.1.4 − Find Unadjusted Use-Case Weight (UUCW) using the following table −

Use-Case Complexity    Use-Case Weight    Number of Use-Cases    Product

Simple                 5                  NSUC                   5 × NSUC

Average                10                 NAUC                   10 × NAUC

Complex                15                 NCUC                   15 × NCUC

Unadjusted Use-Case Weight (UUCW) = 5 × NSUC + 10 × NAUC + 15 × NCUC

Where,

NSUC is the no. of Simple Use-Cases.

NAUC is the no. of Average Use-Cases.

NCUC is the no. of Complex Use-Cases.

Step 1.2 − Determine Unadjusted Actor Weight.

An Actor in a Use-Case might be a person, another program, etc. Some actors, such as a system
with defined API, have very simple needs and increase the complexity of a Use-Case only slightly.

Some actors, such as a system interacting through a protocol have more needs and increase the
complexity of a Use-Case to a certain extent.

Other Actors, such as a user interacting through GUI have a significant impact on the complexity
of a Use-Case. Based on these differences, you can classify actors as Simple, Average and
Complex.

Step 1.2.1 − Classify Actors as Simple, Average and Complex and assign Actor Weights as shown
in the following table −

Actor Complexity Example Actor Weight

Simple A System with defined API 1

Average A System interacting through a Protocol 2

Complex A User interacting through GUI 3

Step 1.2.2 − Repeat for each Actor and get all the Actor Weights. Unadjusted Actor Weight
(UAW) is the sum of all the Actor Weights.

Step 1.2.3 − Find Unadjusted Actor Weight (UAW) using the following table −

Actor Complexity    Actor Weight    Number of Actors    Product

Simple              1               NSA                 1 × NSA

Average             2               NAA                 2 × NAA

Complex             3               NCA                 3 × NCA

Unadjusted Actor Weight (UAW) = 1 × NSA + 2 × NAA + 3 × NCA


Where,

NSA is the no. of Simple Actors.

NAA is the no. of Average Actors.

NCA is the no. of Complex Actors.

Step 1.3 − Calculate Unadjusted Use-Case Points.

The Unadjusted Use-Case Weight (UUCW) and the Unadjusted Actor Weight (UAW) together
give the unadjusted size of the system, referred to as Unadjusted Use-Case Points.

Unadjusted Use-Case Points (UUCP) = UUCW + UAW
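A small sketch of Step 1, using assumed use-case and actor counts, is shown below:

    # Sketch of Step 1 with assumed counts (not from the notes).
    def uucw(n_simple, n_average, n_complex):
        return 5 * n_simple + 10 * n_average + 15 * n_complex

    def uaw(n_simple, n_average, n_complex):
        return 1 * n_simple + 2 * n_average + 3 * n_complex

    UUCW = uucw(n_simple=4, n_average=6, n_complex=2)   # = 110
    UAW  = uaw(n_simple=2, n_average=1, n_complex=3)    # = 13
    UUCP = UUCW + UAW
    print(UUCP)  # 123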

The next steps are to adjust the Unadjusted Use-Case Points (UUCP) for Technical Complexity
and Environmental Complexity.

Step 2: Adjust For Technical Complexity

Step 2.1 − Consider the 13 Factors that contribute to the impact of the Technical Complexity of a
project on Use-Case Points and their corresponding Weights as given in the following table −

Factor Description Weight

T1 Distributed System 2.0

T2 Response time or throughput performance objectives 1.0

T3 End user efficiency 1.0

T4 Complex internal processing 1.0

T5 Code must be reusable 1.0

T6 Easy to install .5

T7 Easy to use .5

T8 Portable 2.0

T9 Easy to change 1.0

T10 Concurrent 1.0

T11 Includes special security objectives 1.0

T12 Provides direct access for third parties 1.0

T13 Special user training facilities are required 1.0

Many of these factors represent the project’s nonfunctional requirements.


Step 2.2 − For each of the 13 Factors, assess the project and rate from 0 (irrelevant) to 5 (very
important).

Step 2.3 − Calculate the Impact of the Factor from Impact Weight of the Factor and the Rated
Value for the project as

Impact of the Factor = Impact Weight × Rated Value

Step 2.4 − Calculate the sum of the Impact of all the Factors. This gives the Total Technical Factor (TFactor):

TFactor = Σ (Weight of Ti × Rated Value of Ti), for i = 1 to 13

The factors T1–T13 and their weights are those listed in the table above.

Step 2.5 − Calculate the Technical Complexity Factor (TCF) as −

TCF = 0.6 + (0.01 × TFactor)
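Continuing with assumed ratings for T1–T13 (the weights are those from the table above), Step 2 might be computed as follows:

    # Sketch of Step 2; the ratings (0-5) for the 13 technical factors are assumed.
    t_weights = [2.0, 1.0, 1.0, 1.0, 1.0, 0.5, 0.5, 2.0, 1.0, 1.0, 1.0, 1.0, 1.0]
    t_ratings = [3, 4, 3, 3, 2, 2, 4, 0, 3, 2, 3, 1, 1]   # assumed project assessment

    t_factor = sum(w * r for w, r in zip(t_weights, t_ratings))   # TFactor
    tcf = 0.6 + 0.01 * t_factor                                   # Technical Complexity Factor
    print(t_factor, round(tcf, 2))   # 31.0 and 0.91 for these assumed ratings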

Step 3: Adjust For Environmental Complexity


Step 3.1 − Consider the 8 Environmental Factors that could affect the project execution and their
corresponding Weights as given in the following table −

Factor Description Weight

F1 Familiar with the project model that is used 1.5

F2 Application experience .5

F3 Object-oriented experience 1.0

F4 Lead analyst capability .5

F5 Motivation 1.0

F6 Stable requirements 2.0

F7 Part-time staff -1.0

F8 Difficult programming language -1.0

Step 3.2 − For each of the 8 Factors, assess the project and rate from 0 (irrelevant) to 5 (very
important).

Step 3.3 − Calculate the Impact of the Factor from Impact Weight of the Factor and the Rated
Value for the project as

Impact of the Factor = Impact Weight × Rated Value

Step 3.4 − Calculate the sum of the Impact of all the Factors. This gives the Total Environment Factor (EFactor):

EFactor = Σ (Weight of Fi × Rated Value of Fi), for i = 1 to 8

The factors F1–F8 and their weights are those listed in the table above.
Step 3.5 − Calculate the Environmental Factor (EF) as −

EF = 1.4 + (−0.03 × EFactor)

Step 4: Calculate Adjusted Use-Case Points (UCP)

Calculate Adjusted Use-Case Points (UCP) as −

UCP = UUCP × TCF × EF
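Continuing the same assumed example (UUCP = 123 and TCF = 0.91 from the earlier sketches, together with assumed ratings for F1–F8), Steps 3 and 4 give:

    # Sketch of Steps 3 and 4; the environmental ratings below are assumed.
    e_weights = [1.5, 0.5, 1.0, 0.5, 1.0, 2.0, -1.0, -1.0]
    e_ratings = [4, 3, 4, 4, 5, 3, 1, 2]          # assumed ratings for F1-F8

    e_factor = sum(w * r for w, r in zip(e_weights, e_ratings))   # EFactor
    ef = 1.4 + (-0.03 * e_factor)                                 # Environmental Factor

    UUCP, TCF = 123, 0.91                          # carried over from the earlier sketches
    ucp = UUCP * TCF * ef
    print(round(e_factor, 1), round(ef, 3), round(ucp, 1))   # 21.5, 0.755, ~84.5 UCP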

EMPIRICAL ESTIMATION MODEL

An estimation model for computer software uses empirically derived formulas to predict effort as
a function of LOC or FP. Values for LOC or FP are estimated using the approach described in
Sections 5.6.2 and 5.6.3. But instead of using the tables described in those sections, the resultant
values for LOC or FP are plugged into the estimation model.

The empirical data that support most estimation models are derived from a limited sample of
projects. For this reason, no estimation model is appropriate for all classes of software and in all
development environments. Therefore, the results obtained from such models must be used
judiciously.

The Structure of Estimation Models

A typical estimation model is derived using regression analysis on data collected from past
software projects. The overall structure of such models takes the form

E = A + B × (ev)^C

where A, B, and C are empirically derived constants, E is effort in person-months, and ev is the
estimation variable (either LOC or FP). In addition to the relationship noted in the above equation, the majority of estimation models have some form of project adjustment component that enables E to be adjusted by other project characteristics (e.g., problem complexity, staff experience, development environment). Among the many LOC-oriented estimation models proposed in the literature are:

E = 5.2 × (KLOC)^0.91                       Walston-Felix model

E = 5.5 + 0.73 × (KLOC)^1.16                Bailey-Basili model

E = 3.2 × (KLOC)^1.05                       Boehm simple model

E = 5.288 × (KLOC)^1.047                    Doty model for KLOC > 9

FP-oriented models have also been proposed. These include

E = -13.39 + 0.0545 FP                      Albrecht and Gaffney model

E = 60.62 × 7.728 × 10^-8 × FP^3            Kemerer model

E = 585.7 + 15.12 FP                        Matson, Barnett, and Mellichamp model
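The sketch below simply evaluates the LOC-oriented models listed above for an assumed project size of 33.2 KLOC (the same size as the CAD example used later in these notes); the estimates differ noticeably, which is why results from such models must be used judiciously and calibrated with local data:

    # Evaluate the LOC-oriented models above for an assumed 33.2 KLOC project.
    kloc = 33.2

    models = {
        "Walston-Felix":   5.2 * kloc ** 0.91,
        "Bailey-Basili":   5.5 + 0.73 * kloc ** 1.16,
        "Boehm simple":    3.2 * kloc ** 1.05,
        "Doty (KLOC > 9)": 5.288 * kloc ** 1.047,
    }

    for name, effort in models.items():
        print(f"{name}: {effort:.0f} person-months")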

The COCOMO II Model


In his classic book on “software engineering economics,” Barry Boehm [BOE81] introduced a
hierarchy of software estimation models bearing the name COCOMO, for COnstructive COst
MOdel. The original COCOMO model became one of the most widely used and discussed
software cost estimation models in the industry. It has evolved into a more comprehensive
estimation model, called COCOMO II [BOE96, BOE00].

Like its predecessor, COCOMO II is actually a hierarchy of estimation models that address the
following areas:

Application composition model. Used during the early stages of software engineering, when
prototyping of user interfaces, consideration of software and system interaction, assessment of
performance, and evaluation of technology maturity are paramount.

Early design stage model. Used once requirements have been stabilized and basic software
architecture has been established.

Post-architecture-stage model. Used during the construction of the software.

Like all estimation models for software, the COCOMO II models require sizing information. Three
different sizing options are available as part of the model hierarchy: object points, function points,
and lines of source code.

The COCOMO II application composition model uses object points and is illustrated in the
following paragraphs. It should be noted that other, more sophisticated estimation models (using
FP and KLOC) are also available as part of COCOMO II.

Like function points, the object point is an indirect software measure that is computed using counts
of the number of (1) screens (at the user interface), (2)reports, and (3) components likely to be
required to build the application. Each object instance (e.g., a screen or report) is classified into
one of three complexity levels (i.e., simple, medium, or difficult) using criteria suggested by
Boehm [BOE96]. In essence, complexity is a function of the number and source of the client and
server data tables that are required to generate the screen or report and the number of views or
sections presented as part of the screen or report.
Once complexity is determined, the number of screens, reports, and components are weighted
according to Table 5.1. The object point count is then determined by multiplying the original
number of object instances by the weighting factor in Table 5.1 and summing to obtain a total
object point count. When component-based development or general software reuse is to be applied,
the percent of reuse (%reuse) is estimated and the object point count is adjusted:
NOP = (object points) × [(100 − %reuse) / 100]
where NOP is defined as new object points.
To derive an estimate of effort based on the computed NOP value, a “productivity rate” must be
derived. Table 5.2 presents the productivity rate
PROD = NOP/person-month

for different levels of developer experience and development environment maturity. Once the productivity rate has been determined, an estimate of project effort can be derived as

estimated effort = NOP / PROD
In more advanced COCOMO II models, a variety of scale factors, cost drivers, and adjustment procedures are required. A complete discussion of these is beyond the scope of these notes; the interested reader should see [BOE00] or visit the COCOMO II Web site.
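A minimal sketch of the application composition calculation described above follows; the object counts, complexity weights, %reuse, and productivity rate are assumed for illustration (the actual weights and PROD values come from Tables 5.1 and 5.2, which are not reproduced here):

    # Hedged sketch of the COCOMO II application composition calculation.
    screens, reports, components = 8, 4, 3            # assumed object instance counts
    w_screen, w_report, w_component = 2, 5, 10        # assumed complexity weights

    object_points = screens * w_screen + reports * w_report + components * w_component

    percent_reuse = 20                                # assumed reuse percentage
    nop = object_points * (100 - percent_reuse) / 100 # new object points (NOP)

    prod = 13                                         # assumed productivity rate (NOP per person-month)
    estimated_effort = nop / prod                     # effort in person-months
    print(object_points, nop, round(estimated_effort, 1))   # 66, 52.8, ~4.1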
The Software Equation
The software equation [PUT92] is a dynamic multivariable model that assumes a specific
distribution of effort over the life of a software development project. The model has been derived
from productivity data collected for over 4000 contemporary software projects. Based on these
data, an estimation model of the form
E = [LOC × B^0.333 / P]^3 × (1 / t^4)
where E = effort in person-months or person-years
t = project duration in months or years
B = “special skills factor”
P = “productivity parameter” that reflects:
• Overall process maturity and management practices
• The extent to which good software engineering practices are used
• The level of programming languages used
• The state of the software environment
• The skills and experience of the software team
• The complexity of the application
Typical values might be P = 2,000 for development of real-time embedded software; P = 10,000 for telecommunication and systems software; and P = 28,000 for business systems applications. The productivity parameter can be derived for local conditions using historical data collected from past development efforts.
It is important to note that the software equation has two independent parameters:
(1) an estimate of size (in LOC) and (2) an indication of project duration in calendar months or
years.
To simplify the estimation process and use a more common form for their estimation model,
Putnam and Myers [PUT92] suggest a set of equations derived from the software equation.
Minimum development time is defined as

tmin = 8.14 × (LOC/P)^0.43    in months, for tmin > 6 months        (5-4a)

E = 180 × B × t^3             in person-months, for E ≥ 20 person-months        (5-4b)

Note that t in Equation (5-4b) is represented in years.
Using Equations (5-4) with P = 12,000 (the recommended value for scientific software) for the CAD software discussed earlier in this chapter,

tmin = 8.14 × (33200/12000)^0.43 = 12.6 calendar months
E = 180 × 0.28 × (1.05)^3 = 58 person-months
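The worked example above can be reproduced directly (LOC = 33,200, P = 12,000, B = 0.28):

    # Reproduce the Putnam-Myers worked example from the notes.
    loc, p, b = 33_200, 12_000, 0.28

    t_min_months = 8.14 * (loc / p) ** 0.43      # minimum development time, in months
    t_years = t_min_months / 12                  # t in the effort equation is in years

    effort = 180 * b * t_years ** 3              # effort in person-months
    print(round(t_min_months, 1), round(effort)) # ~12.6 months, ~58 person-months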
