
KCA035:SOFTWARE QUALITY ENGINEERING

UNIT-2

Software Quality Metrics: Product Quality Metrics

Software Quality Metrics : Introduction

 Software quality metrics quantify how well a software product conforms to, and performs against, a defined standard; a metric is a standard of measurement.
 Software quality metrics help ensure that the software product meets the required quality standards.
 There are three major categories of software quality metrics: product quality metrics, in-process quality metrics, and maintenance quality metrics.

Figure: The three categories of software quality metrics

Software Quality Metrics : Customer Problem Metrics

 Customer problem metrics are the quality metrics used in the software industry to measure the problems customers encounter while using the product.
 From the customers' point of view, all the problems they encounter while using the software product are problems with the software, including the valid defects.
 Usability problems, unclear documentation, and user errors cannot be considered valid defects, yet customers still experience them as problems.
 The customer problem metric is usually expressed in terms of problems per user-month (PUM).

PUM = Total problems reported by customers for a period of time ÷ Total number of license-months of the software during the period

Here,

Number of license-months = Number of installed licenses of the software × Number of months in the calculation period

 PUM is usually calculated for each month after the software is released to the market, and
also for monthly averages by year.
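To make the arithmetic concrete, the following is a minimal Python sketch of the PUM calculation described above; the sample figures (problem count, installed licenses, period length) are hypothetical.

```python
def pum(total_problems: int, installed_licenses: int, months: int) -> float:
    """Problems per user-month: problems divided by license-months."""
    license_months = installed_licenses * months
    return total_problems / license_months

# Hypothetical figures: 30 problems reported against 500 installed
# licenses over a 3-month period -> 0.02 problems per user-month.
print(pum(total_problems=30, installed_licenses=500, months=3))
```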

Software Quality Metrics : Customer Satisfaction Metrics

 Customer satisfaction metrics deal with the overall quality of the product and how satisfied a customer is with the software product.
 Customer satisfaction is often measured by customer survey data via a five-point scale. The points are:
1. Very Satisfied.
2. Satisfied.
3. Neutral.
4. Dissatisfied.
5. Very Dissatisfied.

Customer Satisfaction Metrics

 Based on the five-point-scale data, several metrics with slight variations can be constructed and used. These are:
1. Percentages of completely satisfied customers.
2. Percentage of satisfied customers.
3. Percentage of dissatisfied customers.
4. Percentage of non-satisfied customers.

Note: Generally, "Percentage of satisfied customers" is used.

 The Net Satisfaction Index (NSI) is also used by some companies for customer satisfaction metrics.
 The net satisfaction index has the following weighting criteria (a small sketch of the computation follows the list):
1. Completely Satisfied = 100%.
2. Satisfied = 75%.
3. Neutral = 50%.
4. Dissatisfied = 25%.
5. Completely Dissatisfied = 0%.
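A minimal Python sketch of how the NSI could be computed from survey counts, assuming the index is the weighted average of responses under the weighting criteria above; the survey figures are hypothetical.

```python
# Weights from the notes: Completely Satisfied = 100% ... Completely Dissatisfied = 0%.
NSI_WEIGHTS = {
    "completely_satisfied": 1.00,
    "satisfied": 0.75,
    "neutral": 0.50,
    "dissatisfied": 0.25,
    "completely_dissatisfied": 0.00,
}

def net_satisfaction_index(responses: dict) -> float:
    """Weighted average of survey responses, expressed as a percentage."""
    total = sum(responses.values())
    weighted = sum(NSI_WEIGHTS[level] * count for level, count in responses.items())
    return 100.0 * weighted / total

# Hypothetical survey of 200 customers.
survey = {
    "completely_satisfied": 60,
    "satisfied": 80,
    "neutral": 40,
    "dissatisfied": 15,
    "completely_dissatisfied": 5,
}
print(net_satisfaction_index(survey))  # -> 71.875
```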
Software Quality Metrics : Software Maintenance Metrics

 A software product is said to be in the maintenance phase once its development is complete and it has been released to the market.
 During this phase, the defect arrivals by time interval and the customer problem calls are the de facto quality metrics.
 Moreover, the number of defects or problems that arrive depends mainly on the development phase, not on the maintenance phase itself.
 Therefore, certain metrics are required specifically for software maintenance (a combined sketch of all three follows the list). These are:
1. Backlog Management Index(BMI)
 The metric that is used to manage the backlog of open and unresolved problems. It can be defined as:

BMI = (Number of problems closed during the month × 100) / (Number of problems arrived during the month)

2. Fix Response Time (FRT)

 While developing the software product, certain guidelines regarding the reporting of defects are laid down. These include guidelines on the time limit within which fixes must be available for reported defects.
 The fix response time for more severe defects is short, while the fix response time for less severe defects is longer.
 The fix response time metric can be calculated as:

FRT = Mean time of all problems from open to closed

3. Fix Quality
 The fix quality of a software product concerns the number of defective fixes.
 Every software product contains defects, but fix quality ensures that the defects in the software product are fixed on time and to the quality standard.
 Fix quality ensures that the fixes for the defects do not themselves turn out to be defective.
 A fix is called defective if it does not fix the reported problem or if it generates new problems in the software product.
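The following is a minimal Python sketch combining the three maintenance metrics above; the monthly figures are hypothetical.

```python
from statistics import mean

def bmi(problems_closed: int, problems_arrived: int) -> float:
    """Backlog Management Index: a value above 100 means the backlog shrank."""
    return 100.0 * problems_closed / problems_arrived

def fix_response_time(open_to_close_days: list) -> float:
    """FRT: mean time (here, in days) of all problems from open to closed."""
    return mean(open_to_close_days)

def percent_defective_fixes(defective_fixes: int, total_fixes: int) -> float:
    """Fix quality expressed as the percentage of fixes that were defective."""
    return 100.0 * defective_fixes / total_fixes

# Hypothetical month: 45 problems closed against 40 new arrivals,
# five fixes with the given turnaround times, one of which was defective.
print(bmi(45, 40))                         # 112.5 -> backlog reduced
print(fix_response_time([2, 5, 1, 9, 3]))  # 4.0 days on average
print(percent_defective_fixes(1, 5))       # 20.0 %
```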

A Function Point (FP) is a unit of measurement expressing the amount of business functionality an information system (as a product) provides to a user. FPs measure software size and are widely accepted as an industry standard for functional sizing.

Functional Point (FP) Analysis


Allan J. Albrecht initially developed Function Point Analysis in 1979 at IBM, and it has since been further refined by the International Function Point Users Group (IFPUG). FPA is used to estimate a software project, including its testing, in terms of the functionality or functional size of the software product. Function point analysis may also be used for the test estimation of the product. The functional size of the product is measured in function points, a standard unit of measurement for sizing a software application.

Objectives of FPA

The basic and primary purpose of function point analysis is to measure and report the functional size of the software application to the client, customer, and stakeholders on request. Further, it is used to measure software project development, along with its maintenance, consistently throughout the project, irrespective of the tools and technologies used.

Following are the points regarding FPs

1. The FP count of an application is found by counting the number and types of functions used in the application. The various functions used in an application can be put under five types, as shown in the table:


Types of FP Attributes

| Measurement Parameter | Examples |
| --- | --- |
| 1. Number of External Inputs (EI) | Input screens and tables |
| 2. Number of External Outputs (EO) | Output screens and reports |
| 3. Number of External Inquiries (EQ) | Prompts and interrupts |
| 4. Number of Internal Files (ILF) | Databases and directories |
| 5. Number of External Interfaces (EIF) | Shared databases and shared routines |

All these parameters are then individually assessed for complexity.

The FPA functional units are shown in Fig:


2. FP characterizes the complexity of the software system and hence can be used to depict the
project time and the manpower requirement.

3. The effort required to develop the project depends on what the software does.

4. FP is programming language independent.

5. The FP method is used for data processing systems and business systems such as information systems.

6. The five parameters mentioned above are also known as information domain characteristics.

7. All the parameters mentioned above are assigned some weights that have been experimentally
determined and are shown in Table

Weights of 5-FP Attributes

| Measurement Parameter | Low | Average | High |
| --- | --- | --- | --- |
| 1. Number of external inputs (EI) | 3 | 4 | 6 |
| 2. Number of external outputs (EO) | 4 | 5 | 7 |
| 3. Number of external inquiries (EQ) | 3 | 4 | 6 |
| 4. Number of internal files (ILF) | 7 | 10 | 15 |
| 5. Number of external interfaces (EIF) | 5 | 7 | 10 |

The count of each function is multiplied by the weight corresponding to its functional complexity, and the values are added up to determine the UFP (Unadjusted Function Point) count of the subsystem.

Here the weighting factor is low, average, or high for each measurement parameter, depending on its assessed complexity.

The Function Point (FP) count is thus calculated with the following formula:

FP = Count-total * [0.65 + 0.01 * ∑(fi)]
   = Count-total * CAF

where Count-total is the sum of all the weighted counts obtained from the table above,

CAF = [0.65 + 0.01 * ∑(fi)]

and ∑(fi) is the sum of the scores of all 14 questionnaires (where i ranges from 1 to 14), which gives the complexity adjustment value/factor (CAF). Usually, a student is provided with the value of ∑(fi).

Also note that ∑(fi) ranges from 0 to 70, i.e.,

0 <= ∑(fi) <=70

and CAF ranges from 0.65 to 1.35 because


1. When ∑(fi) = 0 then CAF = 0.65
2. When ∑(fi) = 70 then CAF = 0.65 + (0.01 * 70) = 0.65 + 0.7 = 1.35
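A minimal Python sketch of the formula above; the weights used are the average-complexity values from the weights table, and the counts and fi scores are hypothetical. A real count would first classify each function as low, average, or high complexity before weighting.

```python
# Average-complexity weights from the weights table above.
AVERAGE_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def count_total(counts: dict) -> int:
    """Unadjusted count: each parameter count times its complexity weight."""
    return sum(counts[p] * AVERAGE_WEIGHTS[p] for p in counts)

def caf(fi_scores: list) -> float:
    """Complexity Adjustment Factor from the 14 questionnaire scores (0-5 each)."""
    assert len(fi_scores) == 14 and all(0 <= f <= 5 for f in fi_scores)
    return 0.65 + 0.01 * sum(fi_scores)

def function_points(counts: dict, fi_scores: list) -> float:
    return count_total(counts) * caf(fi_scores)

# Hypothetical system with every parameter taken at average complexity
# and every adjustment factor rated 3: 144 * 1.07 = 154.08 FP.
counts = {"EI": 10, "EO": 8, "EQ": 5, "ILF": 3, "EIF": 2}
print(function_points(counts, [3] * 14))
```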

Based on the FP measure of software many other metrics can be computed:

1. Errors/FP
2. $/FP.
3. Defects/FP
4. Pages of documentation/FP
5. Errors/PM.
6. Productivity = FP/PM (effort is measured in person-months).
7. $/Page of Documentation.

8. LOCs of an application can be estimated from FPs. That is, they are interconvertible. This
process is known as backfiring. For example, 1 FP is equal to about 100 lines of COBOL code.

9. FP metrics is used mostly for measuring the size of Management Information System (MIS)
software.

10. But the function points obtained above are unadjusted function points (UFPs). These UFPs of a subsystem are further adjusted by considering some more General System Characteristics (GSCs). There is a set of 14 GSCs that need to be considered. The procedure for adjusting UFPs is as follows:

1. The Degree of Influence (DI) of each of the 14 GSCs is assessed on a scale of 0 to 5. If a particular GSC has no influence, its weight is taken as 0; if it has a strong influence, its weight is 5.
2. The scores of all 14 GSCs are totaled to determine the Total Degree of Influence (TDI).
3. The Value Adjustment Factor (VAF) is then computed from TDI using the formula: VAF = (TDI * 0.01) + 0.65

Remember that the value of VAF lies within 0.65 to 1.35 because

1. When TDI = 0, VAF = 0.65
2. When TDI = 70, VAF = 1.35
3. VAF is then multiplied with the UFP to get the final FP count: FP = VAF * UFP

Example: Compute the function point, productivity, documentation, cost per function for the
following data:

1. Number of user inputs = 24
2. Number of user outputs = 46
3. Number of inquiries = 8
4. Number of files = 4
5. Number of external interfaces = 2
6. Effort = 36.9 p-m
7. Technical documents = 265 pages
8. User documents = 122 pages
9. Cost = $7744/ month

Various processing complexity factors are: 4, 1, 0, 3, 3, 5, 4, 4, 3, 3, 2, 2, 4, 5.

Solution:

| Measurement Parameter | Count | Weighting Factor | Count × Weight |
| --- | --- | --- | --- |
| 1. Number of external inputs (EI) | 24 | 4 | 96 |
| 2. Number of external outputs (EO) | 46 | 4 | 184 |
| 3. Number of external inquiries (EQ) | 8 | 6 | 48 |
| 4. Number of internal files (ILF) | 4 | 10 | 40 |
| 5. Number of external interfaces (EIF) | 2 | 5 | 10 |
| Count-total | | | 378 |

So the sum of all fi (i = 1 to 14) = 4 + 1 + 0 + 3 + 3 + 5 + 4 + 4 + 3 + 3 + 2 + 2 + 4 + 5 = 43

FP = Count-total * [0.65 + 0.01 * ∑(fi)]
   = 378 * [0.65 + 0.01 * 43]
   = 378 * [0.65 + 0.43]
   = 378 * 1.08 = 408.24 ≈ 408

Total pages of documentation = technical documents + user documents
                             = 265 + 122 = 387 pages

Documentation = Pages of documentation / FP
              = 387 / 408 ≈ 0.95

Productivity = FP / Effort = 408 / 36.9 ≈ 11.06 FP per person-month

Cost per function = Cost per person-month / Productivity = 7744 / 11.06 ≈ $700 per FP
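The worked example can be reproduced with a short Python sketch, which is a direct transcription of the solution above.

```python
# Count x weight for each parameter, as in the solution table.
counts_x_weights = {
    "EI":  24 * 4,   # 96
    "EO":  46 * 4,   # 184
    "EQ":   8 * 6,   # 48
    "ILF":  4 * 10,  # 40
    "EIF":  2 * 5,   # 10
}
count_total = sum(counts_x_weights.values())  # 378

fi = [4, 1, 0, 3, 3, 5, 4, 4, 3, 3, 2, 2, 4, 5]
fp = count_total * (0.65 + 0.01 * sum(fi))    # 378 * 1.08 = 408.24

effort_pm = 36.9                # person-months
pages = 265 + 122               # 387 pages of documentation

print(round(fp))                # 408 function points
print(pages / fp)               # ~0.95 pages of documentation per FP
print(fp / effort_pm)           # ~11.06 FP per person-month (productivity)
print(7744 / (fp / effort_pm))  # ~$700 cost per FP
```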
Differentiate between FP and LOC

| FP | LOC |
| --- | --- |
| 1. FP is specification-based. | 1. LOC is analogy-based. |
| 2. FP is language-independent. | 2. LOC is language-dependent. |
| 3. FP is user-oriented. | 3. LOC is design-oriented. |
| 4. It is extendible to LOC. | 4. It is convertible to FP (backfiring). |

Software quality metrics are a subset of software metrics that focus on the quality aspects of the
product, process, and project. These are more closely associated with process and product
metrics than with project metrics.

Software quality metrics can be further divided into three categories:

 Product quality metrics
 In-process quality metrics
 Maintenance quality metrics

Product Quality Metrics

These metrics include the following:

 Mean Time to Failure
 Defect Density
 Customer Problems
 Customer Satisfaction

Mean Time to Failure

It is the average time between failures. This metric is mostly used with safety-critical systems such as airline traffic control systems, avionics, and weapons.

Defect Density

It measures the defects relative to the software size, expressed as lines of code or function points, i.e., it measures code quality per unit of size. This metric is used in many commercial software systems.
Customer Problems

It measures the problems that customers encounter when using the product. It contains the
customer’s perspective towards the problem space of the software, which includes the non-defect
oriented problems together with the defect problems.

The problems metric is usually expressed in terms of Problems per User-Month (PUM).

PUM = Total problems that customers reported (true defects and non-defect-oriented problems) for a time period ÷ Total number of license-months of the software during the period

where

Number of license-months of the software = Number of installed licenses of the software × Number of months in the calculation period

PUM is usually calculated for each month after the software is released to the market, and also
for monthly averages by year.

Customer Satisfaction

Customer satisfaction is often measured by customer survey data through the five-point scale −

 Very satisfied
 Satisfied
 Neutral
 Dissatisfied
 Very dissatisfied

Satisfaction with the overall quality of the product and its specific dimensions is usually obtained
through various methods of customer surveys. Based on the five-point-scale data, several metrics
with slight variations can be constructed and used, depending on the purpose of analysis. For
example −

 Percent of completely satisfied customers
 Percent of satisfied customers
 Percent of dissatisfied customers
 Percent of non-satisfied customers

Usually, the percent of satisfied customers is used.

In-process Quality Metrics

In-process quality metrics deal with tracking defect arrival during formal machine testing in some organizations. These metrics include:
 Defect density during machine testing
 Defect arrival pattern during machine testing
 Phase-based defect removal pattern
 Defect removal effectiveness

Defect density during machine testing

The defect rate during formal machine testing (testing after the code is integrated into the system library) is correlated with the defect rate in the field. A higher defect rate found during testing is an indicator that the software has experienced higher error injection during its development process, unless the higher testing defect rate is due to an extraordinary testing effort.

This simple metric of defects per KLOC or function point is a good indicator of quality, while
the software is still being tested. It is especially useful to monitor subsequent releases of a
product in the same development organization.
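A minimal sketch of this per-unit-size measure, assuming size is expressed in KLOC; the release figures are hypothetical.

```python
def defect_density_per_kloc(defects_found: int, loc: int) -> float:
    """Defects per thousand lines of code found during machine testing."""
    return defects_found / (loc / 1000.0)

# Hypothetical release: 120 defects found while testing an 80 KLOC
# system -> 1.5 defects per KLOC.
print(defect_density_per_kloc(120, 80_000))
```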

Defect arrival pattern during machine testing

The overall defect density during testing will provide only the summary of the defects. The
pattern of defect arrivals gives more information about different quality levels in the field. It
includes the following −

 The defect arrivals, or defects reported during the testing phase, by time interval (e.g., week). Not all of these will be valid defects.
 The pattern of valid defect arrivals when problem determination is done on the reported
problems. This is the true defect pattern.
 The pattern of defect backlog over time. This metric is needed because development
organizations cannot investigate and fix all the reported problems immediately. This is a
workload statement as well as a quality statement. If the defect backlog is large at the end
of the development cycle and a lot of fixes have yet to be integrated into the system, the
stability of the system (hence its quality) will be affected. Retesting (regression test) is
needed to ensure that targeted product quality levels are reached.

Phase-based defect removal pattern

This is an extension of the defect density metric during testing. In addition to testing, it tracks the
defects at all phases of the development cycle, including the design reviews, code inspections,
and formal verifications before testing.

Because a large percentage of programming defects is related to design problems, conducting formal reviews or functional verifications to enhance the defect removal capability of the process at the front end reduces errors in the software. The pattern of phase-based defect removal reflects the overall defect removal ability of the development process.
With regard to the metrics for the design and coding phases, in addition to defect rates, many
development organizations use metrics such as inspection coverage and inspection effort for in-
process quality management.

Defect removal effectiveness

It can be defined as follows:

DRE = (Defects removed during a development phase / Defects latent in the product) × 100%

This metric can be calculated for the entire development process, for the front-end before code
integration and for each phase. It is called early defect removal when used for the front-end and
phase effectiveness for specific phases. The higher the value of the metric, the more effective
the development process and the fewer the defects passed to the next phase or to the field. This
metric is a key concept of the defect removal model for software development.
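A minimal Python sketch of the DRE calculation, where the latent defects of a phase are taken as the defects removed in that phase plus the defects that escaped it and were found later; all figures are hypothetical.

```python
def dre(defects_removed: int, defects_latent: int) -> float:
    """Defect Removal Effectiveness for a phase, as a percentage."""
    return 100.0 * defects_removed / defects_latent

# Hypothetical design phase: 80 defects removed during design reviews,
# 20 design defects escaped and were found later -> latent = 100, DRE = 80%.
removed, escaped = 80, 20
print(dre(removed, removed + escaped))
```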

Maintenance Quality Metrics

Although much cannot be done to alter the quality of the product during this phase, the following metrics help ensure that defects are eliminated as soon as possible and with excellent fix quality:

 Fix backlog and backlog management index
 Fix response time and fix responsiveness
 Percent delinquent fixes
 Fix quality

Fix backlog and backlog management index

Fix backlog is related to the rate of defect arrivals and the rate at which fixes for reported
problems become available. It is a simple count of reported problems that remain at the end of
each month or each week. Using it in the format of a trend chart, this metric can provide
meaningful information for managing the maintenance process.

The Backlog Management Index (BMI) is used to manage the backlog of open and unresolved problems.

BMI = (Number of problems closed during the month / Number of problems arrived during the month) × 100%

If BMI is larger than 100, the backlog is reduced. If BMI is less than 100, the backlog has increased.
Fix response time and fix responsiveness

The fix response time metric is usually calculated as the mean time of all problems from open to
close. Short fix response time leads to customer satisfaction.

The important elements of fix responsiveness are customer expectations, the agreed-to fix time,
and the ability to meet one's commitment to the customer.

Percent delinquent fixes

It is calculated as follows:

Percent delinquent fixes = (Number of fixes that exceeded the response time criteria by severity level / Number of fixes delivered in a specified time) × 100%
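A minimal sketch of this calculation; the monthly figures are hypothetical.

```python
def percent_delinquent_fixes(delinquent_fixes: int, delivered_fixes: int) -> float:
    """Fixes that exceeded their severity-level response time criteria,
    as a percentage of all fixes delivered in the period."""
    return 100.0 * delinquent_fixes / delivered_fixes

# Hypothetical month: 6 of 120 delivered fixes missed their committed
# response time -> 5.0% delinquent.
print(percent_delinquent_fixes(6, 120))
```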

Fix Quality

Fix quality or the number of defective fixes is another important quality metric for the
maintenance phase. A fix is defective if it did not fix the reported problem, or if it fixed the
original problem but injected a new defect. For mission-critical software, defective fixes are
detrimental to customer satisfaction. The metric of percent defective fixes is the percentage of all
fixes in a time interval that is defective.

A defective fix can be recorded in two ways: Record it in the month it was discovered or record
it in the month the fix was delivered. The first is a customer measure; the second is a process
measure. The difference between the two dates is the latent period of the defective fix.

Usually, the longer the latency, the more customers are affected. If the number of defects is large, then a small value of the percentage metric will show an overly optimistic picture.
The quality goal for the maintenance process, of course, is zero defective fixes without
delinquency.

DIFFERENCE BETWEEN QUALITY MEASURE, QUALITY METRIC AND QUALITY INDICATOR:

| QUALITY MEASURE | QUALITY METRIC | QUALITY INDICATOR |
| --- | --- | --- |
| A measure is to ascertain or appraise by comparing to a standard. | A metric is a quantitative measure of the degree to which a system, component, or process possesses a given attribute. | An indicator is a device or variable that can be set to a prescribed state based on the results of a process or the occurrence of a specified condition. |
| A standard or unit of measurement covers the extent, dimensions, or capacity of anything, especially as determined by a standard. | It is a calculated or composite indicator based upon two or more measures. | An indicator usually compares a metric with a baseline or expected result. |
| A measure gives very little or no information in the absence of a trend to follow or an expected value to compare against. | A metric is a comparison of two or more measures, like defects per thousand source lines of code. | Indicators help decision-makers make a quick comparison that can provide a perspective on the "health" of a particular aspect of the project. |
| A measure does not provide enough information to make meaningful decisions. | Software quality metrics are used to assess, throughout the development cycle, whether the software quality requirements are being met. | Software quality indicators act as a set of tools to improve the management capabilities of personnel responsible for monitoring software development projects. |
SOFTWARE QUALITY INDICATOR:

A Software Quality Indicator can be calculated to provide an indication of the quality of the system by
assessing system characteristics.

1) Progress: Measures the amount of work accomplished by the developer in each phase. This measure flows through the development life cycle: first the number of requirements defined and baselined, then the amount of preliminary and detailed design completed, then the amount of code completed, and then the various levels of tests completed.
2) Stability: Assesses whether the products of each phase are sufficiently stable to allow the next phase to proceed.
This measures the number of changes to requirements, design, and implementation.
3) Process compliance: Measures the developer’s compliance with the development procedures approved at the
beginning of the project. Captures the number of procedures identified for use on the project versus those complied
with on the project.
4) Quality evaluation effort: Measures the percentage of the developer’s effort that is being spent on internal
quality evaluation activities. Percent of time developers are required to deal with quality evaluations and related
corrective actions.
5) Test coverage: Measures the amount of the software system covered by the developer's testing process. For module testing, this counts the number of basis paths executed/covered, and for system testing it measures the percentage of functions tested.
6) Defect detection efficiency: Measures how many of the defects detectable in a phase were actually discovered
during that phase. Starts at 100% and is reduced as defects are uncovered at a later development phase.
7) Defect removal rate: Measures the number of defects detected and resolved over a period of time. Number of
opened and closed system problem reports (SPR) reported through the development phases.
8) Defect age profile: Measures the number of defects that have remained unresolved for a long period of time.
Monthly reporting of SPRs remaining open for more than a month’s time.
9) Defect density: Detects defect-prone components of the system. Provides measure of SPRs / Computer Software
Component (CSC) to determine which is the most defect-prone CSC.
10) Complexity: Measures the complexity of the code. Collects basis path counts (cyclomatic complexity) of code
modules to determine how complex each module is.

Change Traffic and Stability

Overall change traffic is one specific indicator of progress and quality. Change traffic is defined
as the number of software change orders opened and closed over the life cycle (Figure 13-5).
This metric can be collected by change type, by release, across all releases, by team, by
components, by subsystem, and so forth. Coupled with the work and progress metrics, it provides
insight into the stability of the software and its convergence toward stability (or divergence
toward instability). Stability is defined as the relationship between opened versus closed SCOs.
The change traffic relative to the release schedule provides insight into schedule predictability,
which is the primary value of this metric and an indicator of how well the process is performing.
The next three quality metrics focus more on the quality of the product.

Figure: Stability expectation over a healthy project's life cycle (opened vs. closed SCOs plotted against the project schedule).

Breakage and Modularity

Breakage is defined as the average extent of change, which is the amount of software baseline
that needs rework (in SLOC, function points, components, subsystems, files, etc.). Modularity is
the average breakage trend over time. For a healthy project, the trend expectation is decreasing
or stable (Figure 13-6).

This indicator provides insight into the benign or malignant character of software change. In a
mature iterative development process, earlier changes are expected to result in more scrap than
later changes. Breakage trends that are increasing with time clearly indicate that product
maintainability is suspect.

Rework and Adaptability

Rework is defined as the average cost of change, which is the effort to analyze, resolve, and retest all changes to software baselines. Adaptability is defined as the rework trend over time. For a healthy project, the trend expectation is decreasing or stable.

MTBF and Maturity

MTBF is the average usage time between software faults. In rough terms, MTBF is computed by
dividing the test hours by the number of type 0 and type 1 SCOs. Maturity is defined as the
MTBF trend over time (Figure 13-8).
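A minimal sketch of the rough MTBF computation described above, dividing usage (test) hours by the count of the most critical software change orders; the release figures are hypothetical.

```python
def mtbf(test_hours: float, critical_scos: int) -> float:
    """Rough MTBF: test hours divided by the number of
    type 0 and type 1 software change orders (SCOs)."""
    return test_hours / critical_scos

# Hypothetical release: 1,200 test hours with 8 critical SCOs
# -> an MTBF of 150 hours.
print(mtbf(1200, 8))
```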

Early insight into maturity requires that an effective test infrastructure be established.
Conventional testing approaches for monolithic software programs focused on achieving
complete test coverage of every line of code, every branch, and so forth.

Figure: Maturity expectation over a healthy project's life cycle (MTBF plotted against released baselines and the project schedule).

In today's distributed and componentized software systems, such complete test coverage is achievable only for discrete components. Systems of components are more efficiently tested by using statistical techniques. Consequently, the maturity metrics measure statistics over usage time rather than product coverage.
