Software Quality Metric Unit 2 Notes
UNIT-2
Software quality metrics measure the ability of a software product to conform to, and perform as per, the defined standard. A metric is a standard of measurement.
Software quality metrics help ensure that the software product is of the highest quality and standard.
There are three major categories of software quality metrics: product quality metrics, in-process quality metrics, and maintenance quality metrics.
Customer Problem Metrics
This quality metric, used in the software industry, measures the problems encountered by customers while using the product.
From the customer's point of view, all the problems they encounter while using the software product are problems with the software, not only the valid defects.
Usability problems, unclear documentation, and user errors are problems that cannot be considered valid defects.
The customer problem metric is usually expressed in terms of problems per user month (PUM):
PUM = Total problems reported by customers for a period of time ÷ Total number of license-months of the software during the period
Here, the number of license-months equals the number of installed licenses of the software multiplied by the number of months in the calculation period.
PUM is usually calculated for each month after the software is released to the market, and also as monthly averages by year.
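A minimal sketch of the PUM calculation (the counts below are assumed, illustrative values):

```python
# Problems per User-Month (PUM) -- assumed, illustrative inputs.
problems_reported = 250      # problems customers reported in the period
install_licenses = 5_000     # installed licenses of the software
months_in_period = 12        # length of the calculation period in months

license_months = install_licenses * months_in_period
pum = problems_reported / license_months
print(f"PUM = {pum:.4f} problems per user-month")  # 0.0042
```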
Customer Satisfaction Metric deals with the overall quality of the product and how much a
customer is satisfied with that software product.
Customer satisfaction is often measured by customer survey data via a five-point scale.
These are:
1. Very Satisfied.
2. Satisfied.
3. Neutral.
4. Dissatisfied.
5. Very Dissatisfied.
Based on the five-point-scale data, several metrics with slight variations can be constructed and used. These are:
1. Percentage of completely satisfied customers.
2. Percentage of satisfied customers (satisfied and completely satisfied).
3. Percentage of dissatisfied customers (dissatisfied and completely dissatisfied).
4. Percentage of non-satisfied customers (neutral, dissatisfied, and completely dissatisfied).
The Net Satisfaction Index (NSI) is also used by some companies for customer satisfaction metrics.
The net satisfaction index has the following weighting criteria:
1. Completely Satisfied = 100%.
2. Satisfied = 75%.
3. Neutral = 50%.
4. Dissatisfied = 25%.
5. Completely Dissatisfied = 0%.
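A hedged sketch of computing the NSI as a weighted average over the survey distribution (the response counts are assumed):

```python
# Net Satisfaction Index from five-point survey data -- assumed counts.
responses = {              # number of customers per rating
    "completely satisfied": 40,
    "satisfied": 30,
    "neutral": 15,
    "dissatisfied": 10,
    "completely dissatisfied": 5,
}
weights = {                # NSI weighting criteria from above
    "completely satisfied": 1.00,
    "satisfied": 0.75,
    "neutral": 0.50,
    "dissatisfied": 0.25,
    "completely dissatisfied": 0.00,
}

total = sum(responses.values())
nsi = sum(responses[r] * weights[r] for r in responses) / total * 100
print(f"NSI = {nsi:.1f}%")  # (40*1.0 + 30*0.75 + 15*0.5 + 10*0.25)/100 = 72.5
```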
Software Quality Metrics: Software Maintenance Metrics
Fix Quality
The fix quality of a software product concerns the number of defective fixes.
Every software product contains defects, but fix quality ensures that the defects in the software product are fixed on time, as per the quality standard, and that the fixes for those defects do not themselves turn out to be defective.
A fix is called defective if it does not fix the reported problem or if it generates new problems in the software product.
A Function Point (FP) is a unit of measurement expressing the amount of business functionality an information system (as a product) provides to a user. FPs measure software size. They are widely accepted as an industry standard for functional sizing.
Objectives of FPA
The basic and primary purpose of function point analysis is to measure and report the functional size of the software application to the client, customer, and stakeholders on request. Further, it is used to measure software project development, along with its maintenance, consistently throughout the project, irrespective of the tools and technologies used.
1. The FPs of an application are found by counting the number and types of functions used in the application. The various functions used in an application can be put under five types of FP attributes: External Inputs (EI), External Outputs (EO), External Inquiries (EQ), Internal Logical Files (ILF), and External Interface Files (EIF).
2. The effort required to develop the project depends on what the software does.
3. The FP method is used for data processing systems and business systems like information systems.
4. The five parameters mentioned above are also known as information domain characteristics.
5. All the parameters mentioned above are assigned weights that have been determined experimentally, as shown in the table below:

Measurement Parameter              Simple   Average   Complex
External Inputs (EI)                  3        4         6
External Outputs (EO)                 4        5         7
External Inquiries (EQ)               3        4         6
Internal Logical Files (ILF)          7       10        15
External Interface Files (EIF)        5        7        10
The functional complexities are multiplied by the corresponding weights against each function, and the values are added up to determine the UFP (Unadjusted Function Points) of the subsystem. Here, the weighting factor will be simple, average, or complex for each measurement parameter type.
The Function Point (FP) is thus calculated with the following formula:
FP = Count-total × [0.65 + 0.01 × ∑(fi)]
   = Count-total × CAF
where Count-total is the UFP obtained from the table above, and ∑(fi) is the sum of all 14 questionnaires and gives the complexity adjustment value/factor CAF (where i ranges from 1 to 14). Usually, a student is provided with the value of ∑(fi).
1. Errors/FP
2. $/FP
3. Defects/FP
4. Pages of documentation/FP
5. Errors/PM
6. Productivity = FP/PM (effort is measured in person-months)
7. $/Page of documentation
8. LOCs of an application can be estimated from FPs. That is, they are interconvertible. This
process is known as backfiring. For example, 1 FP is equal to about 100 lines of COBOL code.
9. FP metrics are used mostly for measuring the size of Management Information System (MIS) software.
10. But the function points obtained above are unadjusted function points (UFPs). The UFPs of a subsystem are further adjusted by considering some more General System Characteristics (GSCs). There is a set of 14 GSCs that need to be considered. The procedure for adjusting UFPs is as follows:
1. The Degree of Influence (DI) for each of these 14 GSCs is assessed on a scale of 0 to 5. If a particular GSC has no influence, then its weight is taken as 0, and if it has a strong influence, then its weight is 5.
2. The score of all 14 GSCs is totaled to determine Total Degree of Influence (TDI).
3. Then Value Adjustment Factor (VAF) is computed from TDI by using the formula: VAF
= (TDI * 0.01) + 0.65
Remember that the value of VAF lies between 0.65 and 1.35, because TDI can range from 0 (every GSC rated 0) to 70 (every GSC rated 5), giving VAF = 0.65 at one extreme and VAF = 0.65 + 0.70 = 1.35 at the other.
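A minimal sketch of the whole UFP → TDI → VAF → FP pipeline (the counts, complexities, and GSC ratings are assumed, illustrative values):

```python
# Sketch: Unadjusted Function Points -> TDI -> VAF -> adjusted FP.
# Weights are the standard simple/average/complex values from the table;
# the counts and GSC ratings are made-up illustration data.
WEIGHTS = {              # (simple, average, complex)
    "EI":  (3, 4, 6),    # external inputs
    "EO":  (4, 5, 7),    # external outputs
    "EQ":  (3, 4, 6),    # external inquiries
    "ILF": (7, 10, 15),  # internal logical files
    "EIF": (5, 7, 10),   # external interface files
}
LEVEL = {"simple": 0, "average": 1, "complex": 2}

counts = [               # (parameter, count, complexity) -- assumed
    ("EI", 24, "average"), ("EO", 46, "average"), ("EQ", 8, "average"),
    ("ILF", 4, "average"), ("EIF", 2, "average"),
]
ufp = sum(n * WEIGHTS[p][LEVEL[c]] for p, n, c in counts)   # 412

gsc_ratings = [3] * 14          # degree of influence for each GSC, 0..5
tdi = sum(gsc_ratings)          # Total Degree of Influence  -> 42
vaf = (tdi * 0.01) + 0.65       # Value Adjustment Factor    -> 1.07
fp = ufp * vaf                  # adjusted function points   -> 440.84
print(f"UFP={ufp}, TDI={tdi}, VAF={vaf:.2f}, FP={fp:.2f}")
```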
Example: Compute the function point, productivity, documentation, and cost per function for a given data set. (A worked sketch with assumed values follows.)
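A worked sketch of these derived metrics, with assumed values (the FP count carries over from the sketch above; effort, cost, and documentation pages are illustrative):

```python
# Worked sketch -- all inputs are assumed, illustrative values.
fp = 440.84            # adjusted function points (from the sketch above)
effort_pm = 30         # effort in person-months
total_cost = 150_000   # total project cost in dollars
doc_pages = 1_200      # total pages of documentation

productivity = fp / effort_pm    # FP per person-month  -> 14.69
cost_per_fp = total_cost / fp    # dollars per FP       -> 340.26
docs_per_fp = doc_pages / fp     # pages per FP         -> 2.72
print(f"Productivity = {productivity:.2f} FP/PM, "
      f"cost/FP = ${cost_per_fp:.2f}, docs/FP = {docs_per_fp:.2f} pages")
```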
Software quality metrics are a subset of software metrics that focus on the quality aspects of the
product, process, and project. These are more closely associated with process and product
metrics than with project metrics.
Mean Time to Failure
It is the time between failures. This metric is mostly used with safety-critical systems such as airline traffic control systems, avionics, and weapons.
Defect Density
It measures the defects relative to the software size, expressed as lines of code, function points, etc.; i.e., it measures the quality per unit of code. This metric is used in many commercial software systems.
Customer Problems
It measures the problems that customers encounter when using the product. It contains the
customer’s perspective towards the problem space of the software, which includes the non-defect
oriented problems together with the defect problems.
The problems metric is usually expressed in terms of Problems per User-Month (PUM).
PUM = Total problems that customers reported (true defects and non-defect-oriented problems) for a time period ÷ Total number of license-months of the software during the period
where the number of license-months of the software = number of install licenses of the software × number of months in the calculation period.
PUM is usually calculated for each month after the software is released to the market, and also for monthly averages by year.
Customer Satisfaction
Customer satisfaction is often measured by customer survey data through the five-point scale −
Very satisfied
Satisfied
Neutral
Dissatisfied
Very dissatisfied
Satisfaction with the overall quality of the product and its specific dimensions is usually obtained through various methods of customer surveys. Based on the five-point-scale data, several metrics with slight variations can be constructed and used, depending on the purpose of analysis. For example −
Percent of completely satisfied customers
Percent of satisfied customers
Percent of dissatisfied customers
Percent of non-satisfied customers
In-process Quality Metrics
In-process quality metrics deal with tracking defect arrival during formal machine testing for some organizations. These metrics include −
Defect density during machine testing
Defect arrival pattern during machine testing
Phase-based defect removal pattern
Defect removal effectiveness
Defect Density during Machine Testing
Defect rate during formal machine testing (testing after the code is integrated into the system library) is correlated with the defect rate in the field. A higher defect rate found during testing is an indicator that the software has experienced higher error injection during its development process, unless the higher testing defect rate is due to an extraordinary testing effort.
This simple metric of defects per KLOC or function point is a good indicator of quality, while
the software is still being tested. It is especially useful to monitor subsequent releases of a
product in the same development organization.
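As a quick sketch, defect density during testing is simply defects normalized by code size (the counts below are assumed):

```python
# Defects per KLOC -- assumed, illustrative counts.
defects_found = 120          # defects found during machine testing
lines_of_code = 80_000       # size of the code under test

defects_per_kloc = defects_found / (lines_of_code / 1000)
print(f"{defects_per_kloc:.2f} defects/KLOC")  # 1.50
```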
Defect Arrival Pattern during Machine Testing
The overall defect density during testing provides only a summary of the defects. The pattern of defect arrivals gives more information about different quality levels in the field. It includes the following −
The defect arrivals or defects reported during the testing phase by time interval (e.g., week). Not all of these will be valid defects.
The pattern of valid defect arrivals when problem determination is done on the reported
problems. This is the true defect pattern.
The pattern of defect backlog over time. This metric is needed because development
organizations cannot investigate and fix all the reported problems immediately. This is a
workload statement as well as a quality statement. If the defect backlog is large at the end
of the development cycle and a lot of fixes have yet to be integrated into the system, the
stability of the system (hence its quality) will be affected. Retesting (regression test) is
needed to ensure that targeted product quality levels are reached.
Phase-based Defect Removal Pattern
This is an extension of the defect density metric during testing. In addition to testing, it tracks the defects at all phases of the development cycle, including design reviews, code inspections, and formal verifications before testing.
Defect Removal Effectiveness
DRE = (Defects removed during a development phase ÷ Defects latent in the product) × 100%
This metric can be calculated for the entire development process, for the front-end before code integration, and for each phase. It is called early defect removal when used for the front-end and phase effectiveness for specific phases. The higher the value of the metric, the more effective the development process and the fewer the defects passed to the next phase or to the field. This metric is a key concept of the defect removal model for software development.
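A minimal sketch of DRE in its simple overall form (counts are assumed; latent defects are approximated as those removed in development plus those later found in the field):

```python
# Defect Removal Effectiveness -- assumed, illustrative counts.
removed_in_development = 450   # defects found and removed before release
found_in_field = 50            # defects that escaped to customers

latent = removed_in_development + found_in_field
dre = removed_in_development / latent * 100
print(f"DRE = {dre:.1f}%")  # 90.0% -> 10% of defects escaped to the field
```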
Maintenance Quality Metrics
Although much cannot be done to alter the quality of the product during this phase, the following fixes can be carried out to eliminate the defects as soon as possible, with excellent fix quality −
Fix backlog and backlog management index
Fix response time and fix responsiveness
Percent delinquent fixes
Fix quality
Fix Backlog and Backlog Management Index
Fix backlog is related to the rate of defect arrivals and the rate at which fixes for reported problems become available. It is a simple count of reported problems that remain open at the end of each month or each week. Used in the format of a trend chart, this metric can provide meaningful information for managing the maintenance process.
The Backlog Management Index (BMI) is used to manage the backlog of open and unresolved problems:
BMI = (Number of problems closed during the month ÷ Number of problem arrivals during the month) × 100%
If BMI is larger than 100, the backlog is reduced. If BMI is less than 100, the backlog has increased.
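A one-month BMI sketch (the counts are assumed):

```python
# Backlog Management Index for one month -- assumed counts.
problems_closed = 90     # problems closed during the month
problems_arrived = 80    # new problem arrivals during the month

bmi = problems_closed / problems_arrived * 100
print(f"BMI = {bmi:.1f}%")  # 112.5% > 100 -> the backlog shrank
```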
Fix response time and fix responsiveness
The fix response time metric is usually calculated as the mean time of all problems from open to
close. Short fix response time leads to customer satisfaction.
The important elements of fix responsiveness are customer expectations, the agreed-to fix time,
and the ability to meet one's commitment to the customer.
Percent Delinquent Fixes
It is calculated as follows −
Percent delinquent fixes = (Number of fixes that exceeded the response time criteria by severity level ÷ Number of fixes delivered in a specified time) × 100%
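A sketch computing both the mean fix response time and the percent of delinquent fixes from a handful of assumed problem records:

```python
from datetime import date

# Assumed records: (opened, closed, response-time criterion in days).
fixes = [
    (date(2024, 1, 2),  date(2024, 1, 9),  7),
    (date(2024, 1, 5),  date(2024, 1, 25), 14),
    (date(2024, 1, 10), date(2024, 1, 12), 7),
]

durations = [(closed - opened).days for opened, closed, _ in fixes]
mean_response = sum(durations) / len(durations)        # 9.7 days

delinquent = sum(1 for o, c, limit in fixes if (c - o).days > limit)
percent_delinquent = delinquent / len(fixes) * 100     # 33.3%
print(f"Mean fix response time: {mean_response:.1f} days; "
      f"delinquent fixes: {percent_delinquent:.1f}%")
```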
Fix Quality
Fix quality or the number of defective fixes is another important quality metric for the
maintenance phase. A fix is defective if it did not fix the reported problem, or if it fixed the
original problem but injected a new defect. For mission-critical software, defective fixes are
detrimental to customer satisfaction. The metric of percent defective fixes is the percentage of all
fixes in a time interval that is defective.
A defective fix can be recorded in two ways: Record it in the month it was discovered or record
it in the month the fix was delivered. The first is a customer measure; the second is a process
measure. The difference between the two dates is the latent period of the defective fix.
Usually, the longer the latency, the more customers will be affected. If the number of defects is large, then a small value of the percentage metric will show an optimistic picture.
The quality goal for the maintenance process, of course, is zero defective fixes without
delinquency.
Measure vs. Metric vs. Indicator
Measure: A measure gives very little or no information in the absence of a trend to follow or an expected value to compare against. A measure does not provide enough information to make meaningful decisions.
Metric: A metric is a comparison of two or more measures, like defects per thousand source lines of code. Software quality metrics are used to assess, throughout the development cycle, whether the software quality requirements are being met or not.
Indicator: An indicator helps the decision-makers to make a quick comparison that can provide a perspective as to the “health” of a particular aspect of the project. Software quality indicators act as a set of tools to improve the management capabilities of personnel responsible for monitoring software development projects.
SOFTWARE QUALITY INDICATOR:
A Software Quality Indicator can be calculated to provide an indication of the quality of the system by
assessing system characteristics.
1) Progress: Measures the amount of work accomplished by the developer in each phase. This measure flows
through the development life cycle with a number of requirements defined and baselined, then the amount of
preliminary and detailed designed completed, then the amount of code completed, and various levels of tests
completed.
2) Stability: Assesses whether the products of each phase are sufficiently stable to allow the next phase to proceed.
This measures the number of changes to requirements, design, and implementation.
3) Process compliance: Measures the developer’s compliance with the development procedures approved at the
beginning of the project. Captures the number of procedures identified for use on the project versus those complied
with on the project.
4) Quality evaluation effort: Measures the percentage of the developer’s effort that is being spent on internal
quality evaluation activities. Percent of time developers are required to deal with quality evaluations and related
corrective actions.
5) Test coverage: Measures the amount of the software system covered by the developer’s testing process. For module testing, this counts the number of basis paths executed/covered, and for system testing it measures the percentage of functions tested.
6) Defect detection efficiency: Measures how many of the defects detectable in a phase were actually discovered
during that phase. Starts at 100% and is reduced as defects are uncovered at a later development phase.
7) Defect removal rate: Measures the number of defects detected and resolved over a period of time. Number of
opened and closed system problem reports (SPR) reported through the development phases.
8) Defect age profile: Measures the number of defects that have remained unresolved for a long period of time.
Monthly reporting of SPRs remaining open for more than a month’s time.
9) Defect density: Detects defect-prone components of the system. Provides a measure of SPRs per Computer Software Component (CSC) to determine which is the most defect-prone CSC (a short computation sketch follows this list).
10) Complexity: Measures the complexity of the code. Collects basis path counts (cyclomatic complexity) of code
modules to determine how complex each module is.
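A minimal sketch of indicator 9, locating the most defect-prone CSC from SPR counts (component names and counts are assumed):

```python
# SPRs per Computer Software Component (CSC) -- assumed data.
sprs_per_csc = {"parser": 42, "scheduler": 17, "ui": 8}

worst = max(sprs_per_csc, key=sprs_per_csc.get)
print(f"Most defect-prone CSC: {worst} ({sprs_per_csc[worst]} SPRs)")
```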
Overall change traffic is one specific indicator of progress and quality. Change traffic is defined
as the number of software change orders opened and closed over the life cycle (Figure 13-5).
This metric can be collected by change type, by release, across all releases, by team, by
components, by subsystem, and so forth. Coupled with the work and progress metrics, it provides
insight into the stability of the software and its convergence toward stability (or divergence
toward instability). Stability is defined as the relationship between opened versus closed SCOs.
The change traffic relative to the release schedule provides insight into schedule predictability,
which is the primary value of this metric and an indicator of how well the process is performing.
The next three quality metrics focus more on the quality of the product.
Breakage is defined as the average extent of change, which is the amount of software baseline
that needs rework (in SLOC, function points, components, subsystems, files, etc.). Modularity is
the average breakage trend over time. For a healthy project, the trend expectation is decreasing
or stable (Figure 13-6).
This indicator provides insight into the benign or malignant character of software change. In a
mature iterative development process, earlier changes are expected to result in more scrap than
later changes. Breakage trends that are increasing with time clearly indicate that product
maintainability is suspect.
Rework is defined as the average cost of change, which is the effort to analyze, resolve, and retest all changes to software baselines. Adaptability is defined as the rework trend over time. For a healthy project, the trend expectation is decreasing or stable.
MTBF is the average usage time between software faults. In rough terms, MTBF is computed by
dividing the test hours by the number of type 0 and type 1 SCOs. Maturity is defined as the
MTBF trend over time (Figure 13-8).
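A rough MTBF sketch following the definition above (the hours and SCO counts are assumed):

```python
# Rough MTBF: test hours divided by type 0 and type 1 SCOs -- assumed data.
test_hours = 1_200    # cumulative test/usage hours in the interval
type0_scos = 3        # critical software change orders
type1_scos = 9        # serious software change orders

mtbf = test_hours / (type0_scos + type1_scos)
print(f"MTBF = {mtbf:.0f} hours")  # 100 hours between type 0/1 faults
```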
Early insight into maturity requires that an effective test infrastructure be established.
Conventional testing approaches for monolithic software programs focused on achieving
complete test coverage of every line of code, every branch, and so forth.
Figure 13-8 shows the maturity expectation over a healthy project's life cycle.
In today's distributed and componentized software systems, such complete test coverage is achievable only for discrete components. Systems of components are more efficiently tested by using statistical techniques. Consequently, the maturity metrics measure statistics over usage time rather than product coverage.