STM Unit 1
Purpose of testing
Dichotomies
Consequences of bugs
Taxonomy of bugs
Introduction
What is Testing?
A Test
Passes: the functionality is OK.
Fails: the application functionality is not OK.
Bug/Defect/Fault: a deviation from the expected functionality.
Whether an observed deviation is really a bug is not always obvious.
Purpose of Testing
1. To Catch Bugs
• Statistics: QA costs range from about 2% of product cost for consumer products to about 80% for critical software.
• Quality and productivity go together.
If a bug is prevented, the rework effort is saved [bug reporting, debugging, correction, retesting].
When prevention is not possible, testing must reach its secondary goal: bug discovery.
Good test design & good tests give a clear diagnosis, and a clear diagnosis makes bug correction easy.
5 phases in a tester's thinking (continued)…
Phase 4: A state of mind regarding what testing can and cannot do, and what makes software testable.
Applying this knowledge reduces the amount of testing needed.
Testable software reduces the testing effort.
Testable software has fewer bugs than code that is hard to test.
Inspection is also called static testing.
The methods and purposes of testing and inspection differ, but they share the objective of catching & preventing different kinds of bugs.
To prevent and catch most of the bugs, we must:
Review
Inspect &
Read the code
Do walkthroughs on the code
Further…
Bug Prevention
Mix of various approaches, depending on factors
culture, development environment, application, project size, history, language
Inspection Methods
Design Style
Static Analysis
Pesticide paradox
Complexity Barrier
Dichotomies
1. Testing vs Debugging
2. Functional vs Structural Testing
3. Designer vs Tester
4. Modularity vs Efficiency
5. Small vs Big
6. Buyer vs Builder
1. Testing Vs Debugging
Testing is to find bugs.
Debugging is to find the cause or misconception leading to the bug.
Their roles are often confused as being the same, but there are differences in the goals, methods, and psychology applied to each:
1. Testing: starts with known conditions, uses a predefined procedure, and has predictable outcomes. Debugging: starts with possibly unknown initial conditions; the end cannot be predicted.
2. Testing: planned, designed and scheduled. Debugging: procedures & duration are not constrained.
5. Testing: should be predictable, dull, constrained, rigid & inhuman. Debugging: there are intuitive leaps, conjectures, experimentation & freedom.
8. Testing: a theory establishes what testing can and cannot do. Debugging: there are only rudimentary results (how much can be done, and the time, effort and method, depend on human ability).
2. Functional Testing Vs Structural Testing
Functional Testing: Treats a program as a black box. Outputs are verified for conformance to
specifications from user’s point of view.
Structural Testing: Looks at the implementation details: programming style, control method,
source language, database & coding details.
Interleaving of functional & Structural testing:
A good program is built in layers from outside.
Outside layer is pure system function from user’s point of view.
Each layer is a structure with its outer layer being its function.
Examples: [Figure: a layered view — user-level applications (Application1, Application2) over O.S. services such as malloc() and link block(), over the devices]
For a given model of programs, Structural tests may be done first and later the Functional,
Or vice-versa. Choice depends on which seems to be the natural choice.
Both are useful, have limitations and target different kind of bugs. Functional tests can
detect all bugs in principle, but would take infinite amount of time. Structural tests are
inherently finite, but cannot detect all bugs.
The Art of Testing is how much allocation % for structural vs how much % for functional.
3. Designer vs Tester
Completely separated in black box testing. Unit testing may be done by either.
Artistry of testing is to balance knowledge of design and its biases against ignorance &
inefficiencies.
Tests are more efficient if the designer, programmer & tester are independent in all of unit,
unit integration, component, component integration, system, formal system feature testing.
The extent to which test designer & programmer are separated or linked depends on testing
level and the context.
4. Modularity Vs Efficiency
A module implies a size, an internal structure, and an interface; in other words, a module (a well-defined, discrete component of a system) has internal complexity and interface complexity, and it has a size.
1. Modularity: the smaller the component, the easier it is to understand. Efficiency: more components mean more interfaces, which increases complexity and reduces efficiency (more bugs become likely).
2. Modularity: small components/modules can be retested independently with less rework (to check whether a bug is fixed). Efficiency: debugging at the module level is more efficient when a bug occurs, with small components.
3. Modularity: microscopic test cases need individual setups of data, systems & software, and hence can themselves have bugs. Efficiency: more test cases imply a higher chance of bugs in the test cases, more rework, and hence less efficiency with microscopic test cases.
4. Modularity: it is easier to design large modules with smaller interfaces at a higher level. Efficiency: this is less complex & more efficient, but the design may not be enough to understand and implement; it may have to be broken down to the implementation level.
So:
Optimize the size & balance internal & interface complexity to increase efficiency
Optimize the test design by setting the scopes of tests & group of tests (modules) to minimize cost of
test design, debugging, execution & organizing – without compromising effectiveness.
5. Small Vs Big
1. Small: more efficiently done by informal, intuitive means and a lack of formality — if it is done by 1 or 2 persons for a small & intelligent user population. Big: a large number of programmers & a large number of components.
2. Small: done, e.g., for oneself, for one's office, or for the institute. Big: program size implies non-linear effects (on complexity, bugs, effort, rework, quality).
3. Small: complete test coverage is easily done. Big: the acceptance level could be test coverage of 100% for unit tests and ≥ 80% for overall tests.
6. Buyer Vs Builder (customer vs developer organization)
Buyer & Builder being the same (organization) clouds accountability.
Separate them to make the accountability clear, even if they are in the same organization.
The accountability increases motivation for quality.
The roles of all parties involved are:
Builder: designs for & is accountable to the Buyer.
Buyer: pays for the system; hopes to get profits from the services provided to the User.
User: the ultimate beneficiary of the system; the User's interests are guarded by the Tester.
Tester: dedicated to the destruction of the s/w (builder); tests the s/w in the interests of the User/Operator.
Operator: lives with the mistakes of the Builder, the murky specs of the Buyer, the oversights of the Tester, and the complaints of the User.
A Model for Testing
A model for testing - with a project environment - with tests at various levels.
(1) understand what a project is. (2) look at the roles of the Testing models.
1. PROJECT:
An archetypical system (product) lets us discuss testing without complications (even for a large project). Testing a one-shot routine and a regularly used routine are different things.
A model project in the real world consists of the following 8 components:
1) Application: an online real-time system (with remote terminals) providing timely responses to user requests for services.
3) Schedule: the project takes about 24 months from start to acceptance, followed by a 6-month maintenance period.
4) Specifications: the specification is good and documented; undocumented details are understood well within the team.
5) Acceptance test: the application is accepted only after a formal acceptance test; this is at first the customer's responsibility and later the software design team's.
6) Personnel: the technical staff comprises a combination of experienced professionals & junior programmers (1–3 years) with varying degrees of knowledge of the application.
7) Standards: programming, test, and interface standards are documented and followed; a centralized standards database is developed & administered.
8) Objectives: the system is expected to operate profitably for more than 10 years after installation, and similar systems with up to 75% of the code in common may be implemented in the future.
[Figure: a model for testing — the environment and an environment model, the program and a program model, a bug model (based on the nature & psychology of bugs), tests, and the expected and unexpected outcomes that feed back into the models]
2. Roles of the Models for Testing:
2) Environment: includes all the hardware & software (firmware, OS, linkage editor, loader, compiler, utilities, libraries) required to make the program run. With established hardware & software, bugs usually do not result from the environment itself but from our understanding of it.
3) Program:
Complicated to understand in detail.
Deal with a simplified overall view.
Focus on control structure ignoring processing & focus on processing ignoring
control structure.
If bug’s not solved, modify the program model to include more facts, & if that fails,
modify the program.
4) Bugs: (bug model) contd ..
Belief that the bugs respect code & data separation in HOL programming.
In real systems the distinction is blurred and hence such bugs exist.
Belief that the language syntax & semantics eliminate most bugs.
But, such features may not eliminate Subtle Bugs.
Belief that testers are better at test design than programmers at code design.
5) Tests:
Formal procedures.
Input preparation, outcome prediction and observation, documentation of test,
execution & observation of outcome are subject to errors.
An unexpected test result may lead us to revise the test and test model.
2) Integration Testing:
[Figure: units A and B are integrated into a component (A,B), which is then combined with components C and D]
Sequence of testing: unit/component tests for A and B; integration tests for A & B together; then component testing for the combined (A,B) component.
3) System Testing: the system model is used to drive the testing process until the system behavior is correct or until the model proves insufficient for further testing.
Oracles, completeness of testing
Additional reading …
Sources of Oracles -
Taxonomy of Bugs etc..
Importance of Bugs
We will see the importance and the consequences of Bugs before turning to the taxonomy of bugs.
Depends on frequency, correction cost, installation cost & consequences of bugs
1. Frequency
• Statistics from different sources are in table 2.1 (Beizer)
• Note the bugs with higher frequency & mark them in this order:
2. Correction Cost
3. Installation Cost
• Depends on # of installations.
• May dominate all other costs, as we need to distribute bug fixes across all installations.
• Depends also on application and environment.
4. Consequences (effects)
• Measured by the mean size of the awards given to the victims of the bug.
• Depend on the application and environment.
• Mild
• Aesthetic bug such as misspelled output or mal-aligned print-out.
• Moderate
• Outputs are misleading or redundant, impacting performance.
• Annoying
• The system's behavior is dehumanizing, e.g., names are truncated or modified arbitrarily, bills for $0.00 are sent.
• Until the bugs are fixed, operators must use unnatural command sequences to get a proper response.
• Disturbing
• Legitimate transactions are refused; e.g., an ATM may refuse to honor a valid ATM card or credit card.
• Serious
• The system loses track of transactions & transaction events, so accountability is lost.
• Very serious
The system performs a transaction other than the one requested, e.g., credits another account or converts withdrawals into deposits.
• Extreme
• The very-serious symptoms occur frequently & arbitrarily rather than sporadically & in unusual circumstances.
• Intolerable
• Long term unrecoverable corruption of the Data base.
(not easily discovered and may lead to system down.)
• Catastrophic
• System fails and shuts down.
• Infectious
• Corrupts other systems, even when it may not fail.
Assignment of severity
• Assign flexible & relative rather than absolute severity values to the bug types.
• The number of bugs and their severity are factors in determining quality quantitatively.
• Organizations design & use quantitative quality metrics based on the above.
• Nightmares
• Define the nightmares – that could arise from bugs – for the context of the
organization/application.
1. List all nightmares in terms of the symptoms & the reactions of the user to their consequences.
2. Convert the consequences of each nightmare into a cost. There could be rework cost (and, if the scope extends to the public, the cost of lawsuits, lost business, or nuclear-reactor meltdowns).
3. Order the nightmares from the costliest to the cheapest and discard those you can live with.
4. Based on experience, measured data, intuition, and published statistics, postulate the kinds of bugs causing each symptom. This is called the 'bug design process'. A bug type can cause multiple symptoms.
5. Order the causative bugs by decreasing probability (judged by intuition, experience, statistics, etc.).
6. Calculate the importance of each bug type (a sketch of the metric follows this list).
7. Design tests & a QA inspection process that are most effective against the most important bugs.
8. If a test is passed or when correction is done for a failed test, some nightmares disappear.
As testing progresses, revise the probabilities & nightmares list as well as the test strategy.
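A sketch of the bug-importance metric referred to in step 6, following Beizer's formulation (the exact notation here is an assumption):
\[
\text{importance}(\text{bug type } j) \;=\; \sum_{i \,\in\, \text{nightmares}} P(\text{bug type } j \text{ causes nightmare } i)\times C(\text{nightmare } i)
\]
where C(nightmare i) is the cost assigned to nightmare i in step 2; tests and inspections are then prioritized by decreasing importance.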
• Designing a reasonable, finite # of tests with high probability of removing the nightmares.
• Test suites wear out.
• As programmers improve programming style, QA improves.
• Hence, know and update test suites as required.
Taxonomy of Bugs .. and remedies
Reference of IEEE Taxonomy: IEEE 87B
Why Taxonomy ?
To study the consequences, nightmares, probability, importance, impact and the methods of prevention
and correction.
Adopt known taxonomy to use it as a statistical framework on which your testing strategy is based.
1) Requirements, Features and Functionality Bugs — three kinds: requirement & specification bugs, feature bugs, and feature-interaction bugs.
Feature-interaction bugs arise from unpredictable interactions between feature groups or individual features. The earlier they are removed the better, as they are costly if detected at the end.
Examples: call forwarding combined with call waiting; federal, state & local tax laws.
There is no magic remedy; explicitly state & test the important combinations.
Remedies
Short-term Support:
Specification languages formalize requirements & so automatic test generation is possible.
It’s cost-effective.
Long-term support:
Even with a great specification language, problem is not eliminated, but is shifted to a higher
level. Simple ambiguities & contradictions may only be removed, leaving tougher bugs.
Testing Techniques
Functional test techniques - transaction flow testing, syntax testing, domain testing, logic
testing, and state testing - can eliminate requirements & specifications bugs.
2. Structural Bugs
Control-flow and sequence bugs: paths left out, unreachable code, spaghetti code, and pachinko code; improper nesting of loops, incorrect loop termination or look-back, ill-conceived switches. Common in code by novice programmers and in old code (assembly language & Cobol).
Logic bugs: using a look-alike operator, improper simplification, confusing exclusive OR with inclusive OR; deeply nested conditional statements & using many logical operations in one statement.
Prevention
Structural bugs contd..
IV. Initialization Bugs
Data-flow testing methods & matrix-based testing methods.
3. Data Bugs
Depend on the types of data or the representation of data. There are 4 sub categories.
Due to data object specifications, formats, the number of objects & their initial values.
Generalized, reusable components must be customized, from large parametric data, to a specific installation.
Using control tables in lieu of code helps software handle many transaction types with fewer data bugs; however, control tables contain a hidden programming language in the database. Caution: there is no compiler for that hidden control language in the data tables.
II. Dynamic Data Vs Static Data
Static or dynamic data can serve in any of the three forms (as information, as a parameter, or as control data); it is a matter of perspective. What is information in one place can be a data parameter or control data elsewhere in the program. Examples: a name, its hash code, and a function using them; the same variable in different contexts.
Bugs
Contents: are pure bit pattern & bugs are due to misinterpretation or corruption of it.
Structure: Size, shape & alignment of data object in memory. A structure may have
substructures.
Attributes: Semantics associated with the contents (e.g. integer, string, subroutine).
Bugs
Severity & subtlety increases from contents to attributes as they get less formal.
Structural bugs may be due to wrong declaration or when same contents are interpreted by
multiple structures differently (different mapping).
Good source-language documentation & coding style (including a data dictionary).
Data structures should be globally administered; local data tends to migrate to global use.
4. Coding Bugs
Coding errors: typographical errors, misunderstanding of operators or statements, or just arbitrary mistakes.
Documentation Bugs
Solution:
5. Interface, Integration and Systems Bugs
Sub-categories discussed below include external interface bugs, internal interface bugs, hardware architecture bugs, integration bugs, and system bugs, among others.
1) External Interfaces
Means to communicate with the world: drivers, sensors, input terminals, communication lines.
Primary design criterion should be - robustness.
Bugs: invalid timing, sequence assumptions related to external signals, misunderstanding external
formats and no robust coding.
Domain testing, syntax testing & state testing are suited to testing external interfaces.
2) Internal Interfaces
3) Hardware Architecture Bugs: software bugs originating from the hardware architecture are due to a misunderstanding of how the hardware works — expecting a device to respond too quickly or waiting too long for a response, assuming a device is initialized, interrupt-handling and I/O device-address errors, hardware simultaneity assumptions, ignored hardware race conditions, device data-format errors, etc. Nowadays hardware provides special test modes & test instructions to exercise the hardware functions.
5) Software Architecture Bugs
These bugs pass through the unit and integration tests of the subroutines without being detected; they depend on the load and appear only when the system is stressed. They are among the most difficult bugs to find and correct.
Due to:
Assumption that there are no interrupts, Or, Failure to block or unblock an interrupt.
Assumption that code is re-entrant or not re-entrant.
Bypassing data interlocks, Or, Failure to open an interlock.
Assumption that a called routine is memory resident or not.
Assumption that the registers and the memory are initialized, Or, that their content did not
change.
Local setting of global parameters & Global setting of local parameters.
Remedies:
Test Techniques
All test techniques are useful in detecting these bugs, Stress tests in particular.
6) Control and Sequence Bugs
Due to:
Ignored timing
Assumption that events occur in a specified sequence.
Starting a process before its prerequisites are met.
Waiting for an impossible combination of prerequisites.
Not recognizing when prerequisites are met.
Specifying wrong priority, Program state or processing level.
Missing, wrong, redundant, or superfluous process steps.
Remedies:
Good design.
highly structured sequence control - useful
Specialized internal sequence-control mechanisms such as an internal job control language
– useful.
Storage of Sequence steps & prerequisites in a table and interpretive processing by control
processor or dispatcher - easier to test & to correct bugs.
Test Techniques
7) Resource Management Problems
Due to:
Wrong resource used (when several resources have similar structure or different kinds of
resources in the same pool).
Resource already in use, or deadlock
Resource not returned to the right pool, Failure to return a resource. Resource use forbidden
to the caller.
Remedies:
Design: keeping resource structure simple with fewest kinds of resources, fewest pools,
and no private resource mgmt.
Designing a complicated resource structure to handle all kinds of transactions to save
memory is not right.
Centralize management of all resource pools thru managers, subroutines, macros etc.
Test Techniques
Path testing, transaction flow testing, data-flow testing & stress testing.
8) Integration Bugs:
Are detected late in the SDLC, affect several components, and hence are very costly.
Due to:
Remedies:
Test Techniques
Those aimed at interfaces, domain testing, syntax testing, and data flow testing when
applied across components.
9) System Bugs:
Due to:
Bugs not ascribed to a particular component, but result from the totality of interactions
among many components such as:
programs, data, hardware, & the O.S.
Remedies:
Thorough testing at all levels and the test techniques mentioned below
Test Techniques
Transaction-flow testing.
All kinds of tests at all levels as well as integration tests - are useful.
6. Testing and Test Design Bugs
It is difficult & time-consuming to identify whether a bug is in the software or in the test script/procedure.
Tests require code that exercises complicated scenarios & databases in order to be executed.
Though independent functional testing provides an unbiased point of view, that lack of bias may lead to an incorrect interpretation of the specs.
Test Criteria
Testing process is correct, but the criterion for judging software’s response to tests is
incorrect or impossible.
If a criterion is quantitative (throughput or processing time), the measurement test can perturb
the actual value.
Remedies:
1. Test Debugging:
Testing & debugging the tests, test scripts, etc. This is simpler when tests have a localized effect.
Good design inhibits bugs and is easy to test. The two factors are multiplicative and result in high productivity.
[Chart: percentage of bugs, by activity]
Q. Specify on which factors the importance of bugs depends. Give the metric for it.
Ans: Importance of bugs as discussed in chapter 2
Q. What are the differences between static data and dynamic data?
Ans: 2nd point in Data bugs in taxonomy of bugs
Control Flow Graphs and Path Testing (Unit 2)
• Predicates
• Path Sensitizing
• Path Instrumentation
Path testing is a family of structural test techniques based on judiciously selecting a set of test paths through the program.
Goal: pick enough paths to assure that every source statement is executed at least once.
Path-testing concepts are used in and along with other testing techniques.
Code coverage (during unit testing) = (# statements executed at least once) / (total # statements).
Assumptions:
Observations
• Assembly language, Cobol, Fortran, Basic & similar languages make path testing necessary.
[Figure: flowgraph elements — a process block ("Do Process A"); a decision ("A = B?" with YES: THEN DO and IF NO: ELSE DO exits); junctions (1, 2); and a case statement (CASE-OF with CASE 1, CASE 2, …, CASE N)]
Control Flow Graph Elements
Process Block:
• A sequence of program statements uninterrupted by either decisions or junctions (single entry, single exit).
Junction:
• A point in the program where control flow can merge (into a node of the graph)
• Examples: target of GOTO, Jump, Continue
Decisions:
• A program point at which the control flow can diverge (based on evaluation of a condition).
• Examples: IF stmt. Conditional branch and Jump instruction.
Case Statements:
• A multi-way branch or decision in which control can diverge to one of several destinations.
Flowgraph vs. flowchart:
• A flowgraph focuses on the inputs, outputs, and the control flow into and out of a block; the inside details of a process block are not shown.
• A (one-to-one) flowchart focuses on the process steps inside the block; every part of the process block is drawn.
Example program:
INPUT X, Y
Z := X + Y
V := X - Y
IF Z >= 0 GOTO SAM
JOE: Z := Z + V
SAM: Z := Z - V
FOR N = 0 TO V
Z := Z - 1
NEXT N
END
[Figure: one-to-one flowchart of the program, showing every statement plus the decisions Z >= 0? and N = V?]
[Figure: the same program drawn as a flowgraph — process blocks P1..P5 connected by the two decisions]
[Figure: the corresponding control flow graph with nodes 1, 2, 3, 4, 5, 6, 7]
[Table: linked-list representation of the flowgraph — Node | Processing, label, decision | Next-node]
A path segment is a succession of consecutive links that belong to the same path, e.g., (3,4,5).
The name of a path is the set of names of the nodes along it, e.g., (1,2,3,4,5,6), or (1,2,3,4,5,6,7,5,6,7,5,6) when the loop is taken twice.
• Integration issues: multi-entry and multi-exit routines complicate integration and path testing.
• Merge all entries into a single entry point: a case statement on an entry parameter dispatches to Begin 1, Begin 2, …, Begin N.
• Merge all exits to a single exit point after setting one exit parameter to a value (Exit 1 sets E = 1, Exit 2 sets E = 2, …, Exit N sets E = N).
[Figure: converting a multi-entry, multi-exit routine into single-entry, single-exit form]
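A minimal sketch of the transformation just described (the routine, entry numbers, and exit codes are illustrative assumptions, not from the source):

def process_one(data):   return data + 1
def process_two(data):   return data * 2
def process_other(data): return -data

def single_entry_exit(entry, data):
    # A case statement on the entry parameter replaces the multiple entry
    # points; every branch sets the exit parameter E and falls through to
    # the single exit point (the return).
    if entry == 1:            # was: Begin 1 ... Exit 1, SET E = 1
        result, e = process_one(data), 1
    elif entry == 2:          # was: Begin 2 ... Exit 2, SET E = 2
        result, e = process_two(data), 2
    else:                     # was: Begin N ... Exit N, SET E = N
        result, e = process_other(data), 3
    return e, result          # single exit point

print(single_entry_exit(2, 10))   # (2, 20)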
• Each pass through a routine from entry to exit, as one traces through it, is a potential path.
• This includes tracing an iterative block 1..n times, each iteration count traced separately.
• Note: a bug could make a mandatory path unexecutable, or could create new paths unrelated to the required processing.
Path-testing criteria:
1. Exercise every path from entry to exit.
2. Exercise every statement at least once.
3. Exercise every branch (each direction of every decision) at least once.
Criterion 1 implies criteria 2 and 3, but criteria 2 and 3 are not the same. Criterion 1 is impractical; for a structured language, criterion 3 implies criterion 2.
Statement coverage (C1): execute enough tests to assure that every statement is exercised at least once.
Branch coverage (C2): execute enough tests to assure that every branch alternative has been exercised at least once under some test.
Objective: 100% branch coverage and 100% link coverage.
For well-structured software, branch testing & coverage include statement coverage.
[Figure: flowgraph of the example routine with links a, b, c, d, e, f, g, nodes 1–6, and the decisions Z >= 0? and N = V?]
[Table: two covering paths, abdefeg and acdefeg, with the decision outcomes taken along each]
Practical path selection to achieve C1 + C2 (a worked sketch follows this list):
1. Pick the simplest, functionally sensible entry/exit path.
2. Pick additional paths as small variations from previous paths, changing only one link or one node at a time where possible (prefer paths with no loops, shorter paths, simple and meaningful ones).
3. Pick additional paths that have no obvious functional meaning only if they are needed to achieve C1 + C2 coverage.
4. Be comfortable with the chosen paths; play hunches and use intuition to achieve C1 + C2.
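A minimal sketch of the example routine above re-coded in Python (the two chosen inputs are illustrative assumptions), showing that two paths — one per direction of the Z >= 0 decision, plus a zero-iteration loop — give statement and branch coverage:

def example(x, y):
    # Python rendering of: INPUT X, Y ... FOR N = 0 TO V ... END
    z = x + y
    v = x - y
    if z < 0:                  # the NO branch of "Z >= 0?" falls through JOE
        z = z + v              # JOE: Z := Z + V
    z = z - v                  # SAM: Z := Z - V
    for n in range(0, v + 1):  # FOR N = 0 TO V (loop body skipped when v < 0)
        z = z - 1
    return z

# Test 1: z >= 0, loop iterates v + 1 = 3 times (covers the "skip JOE" branch).
print(example(3, 1))
# Test 2: z < 0, and v < 0 so the loop body never runs (covers the JOE branch
# and the zero-iteration case of the loop).
print(example(-5, 1))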
Kinds of loops: single, nested, concatenated, and horrible loops.
1. Single loops. Typical boundary cases for the loop-control variable (minimum nmin, maximum nmax):
• Try nmin - 1. Could the value of the loop-control variable be less than nmin? What prevents that?
• Try nmin, then nmin + 1.
• Try n = nmax.
• Try n = nmax + 1. What prevents the variable from having this value? What happens if it is forced?
2. Nested loops [figure: one loop nested inside another, nodes 1, 2, 3, 4]:
• Multiplying # of tests for each nested loop => very large # of tests
1. Start at the innermost loop. Set all outer loops to their minimum iteration parameter values (Vmin).
2. Test Vmin, Vmin + 1, a typical V, Vmax - 1, and Vmax for the innermost loop, holding the outer loops at Vmin. Expand the tests as required for out-of-range & excluded values.
3. If you are done with the outermost loop, go to step 4. Otherwise, move out one loop and repeat step 2 with all other loops set to typical values.
4. Finally, do the five cases for all loops in the nest simultaneously (a sketch of generating these cases follows below).
• Expand tests for solving potential problems associated with initialization of variables and
with excluded combinations and ranges.
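A small sketch (function names and the example loop parameters are assumptions) that generates nested-loop test cases following the steps above:

def five_cases(vmin, vmax, typical):
    # the five boundary cases for one loop
    return [vmin, vmin + 1, typical, vmax - 1, vmax]

def nested_loop_tests(loops):
    # loops: list of (vmin, vmax, typical) tuples, outermost loop first
    mins     = [lo[0] for lo in loops]
    typicals = [lo[2] for lo in loops]
    tests = []
    for i in reversed(range(len(loops))):     # start at the innermost loop
        # outer loops held at Vmin for the innermost loop, typical values otherwise
        held = mins if i == len(loops) - 1 else typicals
        for v in five_cases(*loops[i]):
            case = list(held)
            case[i] = v
            tests.append(tuple(case))
    # finally, the five cases for all loops in the nest simultaneously
    for k in range(5):
        tests.append(tuple(five_cases(*lo)[k] for lo in loops))
    return tests

print(nested_loop_tests([(0, 10, 3), (1, 5, 2)]))   # (outer, inner) iteration counts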
3. Concatenated loops:
• Two loops are concatenated if it’s possible to reach one after exiting the other while still on
the path from entrance to exit.
• If their iteration values are interdependent & the loops lie on the same path, treat them like a nested loop.
4. Horrible loops:
• Avoid these.
• Even after applying the path-analysis techniques above, the resulting test cases are not definitive.
• The thinking required to check end points etc. is unique to each program.
• Jumps into & out of loops, intersecting loops, etc. make test-case selection an ugly task.
• Etc.
• Longer testing time for all loops if all the extreme cases are to be tested.
• Unreasonably long test execution times indicate bugs in the s/w or specs.
Case: Testing nested loops with combination of extreme values leads to long test times.
• Show that it’s due to incorrect specs and fix the specs.
• Prove that combined extreme cases cannot occur in the real world. Cut-off those tests.
• The test time problem is solved by rescaling the test limit values.
• Path testing (with mainly P1 & P2) catches ~65% of unit-test bugs, i.e., ~35% of all bugs.
• Limitations
• Unit-level path testing may not catch interface errors among routines.
• A lot of work
• Creating flow graph, selecting paths for coverage, finding input data values to force
these paths, setting up loop cases & combinations.
• Careful, systematic test design will catch as many bugs as the act of running the tests: the test design process, at all levels, is at least as effective at catching bugs as is running the tests designed by that process.
Path: a sequence of instructions or statements that starts at an entry, junction, or decision and ends at another (or possibly the same) junction, decision, or exit.
Predicate: the logical function evaluated at a decision; its outcome selects the branch taken.
Compound Predicate
• Two or more predicates combined with AND, OR etc.
Path Predicate
• Every path corresponds to a succession of True/False values for the predicates traversed
on that path.
Predicate Interpretation
• The symbolic substitution of operations along the path in order to express the predicate
solely in terms of the input vector is called predicate interpretation.
• Examples (two separate snippets):
INPUT X, Y
ON X GOTO A, B, C
A: Z := 7 @ GOTO H
B: Z := -7 @ GOTO H
C: Z := 0 @ GOTO H
H: DO SOMETHING
K: IF X + Z > 0 GOTO GOOD ELSE GOTO BETTER
Interpreted along the three paths, the predicate X + Z > 0 becomes X + 7 > 0, X - 7 > 0, or X > 0.
INPUT X
IF X < 0 THEN Y := 2
ELSE Y := 1
IF X + Y*Y > 0 THEN …
Interpreted, the second predicate becomes X + 4 > 0 on the X < 0 path and X + 1 > 0 on the other path.
• Path predicates are the specific form of the predicates of the decisions along the selected
path after interpretation.
Process Dependency
• An input variable is independent of the processing if its value does not change as a result of
processing.
• A predicate is process dependent if its truth value can change as a result of processing.
• A predicate is process independent if its truth value does not change as a result of
processing.
• If all the input variables (on which a predicate is based) are process independent, then
predicate is process independent.
Correlation
• Two input variables are correlated if every combination of their values cannot be specified
independently.
• Variables whose values can be specified independently without restriction are uncorrelated.
• A pair of predicates whose outcomes depend on one or more variables in common are
correlated predicates.
• Every path through a routine is achievable only if all predicates in that routine are
uncorrelated.
• If a routine has a loop, then at least one decision's predicate must be process dependent; otherwise, there would be an input value for which the routine loops indefinitely.
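A small illustration (hypothetical routines, not from the source) of process-independent vs. process-dependent predicates, including the loop case just described:

def f(k, data):
    total = sum(data)
    if k > 0:          # process-independent predicate: k is never modified
        return total
    return -total

def g(n):
    count = 0
    while n > 1:       # process-dependent predicate: n changes each pass,
        n = n // 2     # which is what guarantees the loop terminates
        count += 1
    return count

print(f(3, [1, 2, 3]), g(40))   # 6 5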
• Select an entry/exit path. Write down un-interpreted predicates for the decisions along the
path. If there are iterations, note also the value of loop-control variable for that pass.
Converting these into predicates that contain only input variables, we get a set of boolean
expressions called path predicate expression.
Example: along one selected path the interpreted decision predicates are
X1 + 3*X2 + 17 >= 0
X3 = 17
X4 - X1 >= 14*X2
Naming the predicates along two alternative paths to the same exit:
A: X5 > 0                    E: X6 < 0
B: X1 + 3*X2 + 17 >= 0       F: X1 + 3*X2 + 17 >= 0
C: X3 = 17                   G: X3 = 17
D: X4 - X1 >= 14*X2          H: X4 - X1 >= 14*X2
The path predicate expression for reaching that exit is ABCD + EBCD, which (since F = B, G = C, H = D) reduces to (A + E) B C D.
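A minimal sketch (predicate definitions from the example above; the integer search ranges are arbitrary assumptions) that sensitizes the expression (A + E) B C D by brute-force search for an input vector X1..X6:

from itertools import product

def A(x): return x[4] > 0                      # X5 > 0
def B(x): return x[0] + 3 * x[1] + 17 >= 0     # X1 + 3*X2 + 17 >= 0
def C(x): return x[2] == 17                    # X3 = 17
def D(x): return x[3] - x[0] >= 14 * x[1]      # X4 - X1 >= 14*X2
def E(x): return x[5] < 0                      # X6 < 0

def sensitize():
    # Predicate C forces X3 = 17 exactly, so fix it and search the rest.
    for x1, x2, x4, x5, x6 in product(range(-10, 11), repeat=5):
        x = (x1, x2, 17, x4, x5, x6)
        if (A(x) or E(x)) and B(x) and C(x) and D(x):
            return x
    return None

print(sensitize())   # one input vector that drives the routine along the path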
Predicate Coverage:
• Because of the semantics of logic-expression evaluation in many languages (short-circuit evaluation), the entire expression may not always be evaluated.
• Realize that even after achieving C2, the program could still hide some control-flow bugs.
• Predicate coverage:
• If all possible combinations of truth values corresponding to selected path have been
explored under some test, we say predicate coverage has been achieved.
• If all possible combinations of all predicates under all interpretations are covered, we
have the equivalent of total path testing.
Testing blindness
• Testing blindness is reaching the intended path even through a wrong decision at a predicate, so the bug goes unnoticed. Three sources are assignment blinding, equality blinding, and self-blinding.
• Assignment blinding: a buggy predicate appears to work correctly because the specific value chosen in an assignment statement works with both the correct & the buggy predicate.
Correct: X := 7 ; IF Y > 0 THEN …
Buggy:   X := 7 ; IF X + Y > 0 THEN …   (agrees with the correct version for, e.g., Y = 1)
• Equality blinding: the path selected by a prior (equality) predicate results in a value that works for both the correct & the buggy predicate.
Correct: IF Y = 2 THEN … ; IF X + Y > 3 THEN …
Buggy:   IF Y = 2 THEN … ; IF X > 1 THEN …   (identical on the Y = 2 path, where X + Y > 3 reduces to X > 1)
• Self-blinding: a buggy predicate is a multiple of the correct one, and the result is indistinguishable along that path.
Correct: X := A ; IF X - 1 > 0 THEN …
Buggy:   X := A ; IF X + A - 2 > 0 THEN …   (since X = A here, this reduces to 2(X - 1) > 0, indistinguishable for any X, A along this path)
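A small sketch of assignment blinding (illustrative values): with X forced to 7 by the assignment, a test with Y = 1 cannot distinguish the correct predicate from the buggy one, but Y = -3 can:

def correct(y):
    x = 7
    return y > 0          # correct predicate

def buggy(y):
    x = 7
    return x + y > 0      # buggy predicate: X + Y > 0

print(correct(1), buggy(1))     # True True  -> this test is blind to the bug
print(correct(-3), buggy(-3))   # False True -> this test exposes the bug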
Achievable Paths
1. Objective is to select & test just enough paths to achieve a satisfactory notion of
test completeness such as C1 + C2.
2. Extract the program’s control flow graph & select a set of tentative covering paths.
4. Trace the path through, multiplying the individual compound predicates to obtain a boolean expression, for example (A + BC)(D + E); expand it into a sum of products (a sketch follows this list).
6. Each product term denotes a set of inequalities that, if solved, will yield an input
vector that will drive the routine along the selected path.
7. A set of input values for that path is found when any of the inequality sets is solved.
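A minimal sketch (assuming the sympy library is available; the expression is the example from step 4) of expanding a compound path predicate into a sum of products, each product term being one set of conditions to solve:

from sympy import symbols
from sympy.logic.boolalg import to_dnf

A, B, C, D, E = symbols('A B C D E')
path_expr = (A | (B & C)) & (D | E)      # (A + BC)(D + E)

# Disjunctive normal form: each clause is one product term / inequality set.
print(to_dnf(path_expr))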
Path Sensitization
It’s the act of finding a set of solutions to the path predicate expression.
In practice, for a selected path finding the required input vector is not difficult. If there is difficulty, it
may be due to some bugs.
Heuristic procedures:
Choose an easily sensitizable path set first, and pick hard-to-sensitize paths only as needed for the remaining coverage.
Identify all the variables that affect the decisions. For process dependent variables,
express the nature of the process dependency as an equation, function, or whatever is
convenient and clear. For correlated variables, express the logical, arithmetic, or
functional relation defining the correlation.
1. Identify correlated predicates and document the nature of the correlation as for variables.
If the same predicate appears at more than one decision, the decisions are obviously
correlated.
2. Start path selection with uncorrelated & independent predicates. If coverage is achieved,
but the path had dependent predicates, something is wrong.
4. If the coverage is not achieved yet with independent uncorrelated predicates, extend the
path set by using correlated predicates; preferably process independent (not needing
interpretation)
5. If the coverage is not achieved, extend the path set by using dependent predicates
(typically required to cover loops), preferably uncorrelated.
6. Last, use correlated and dependent predicates.
7. For each of the path selected above, list the corresponding input variables. If the variable
is independent, list its value. For dependent variables, interpret the predicate ie., list the
relation. For correlated variables, state the nature of the correlation to other variables.
Determine the mechanism (relation) to express the forbidden combinations of variable
values, if any.
8. Each selected path yields a set of inequalities, which must be simultaneously satisfied to
force the path.
[Example tables: candidate paths and the predicate outcomes each requires, used to illustrate selection with correlated and dependent predicates —
abcdef: A C        abcdef: A C
aghcimkf: A B C D  abcimjef: A C D
aglmjef: A B D     abcimkf: A C D
aghcdef: A B C
aglmkf: A B D]
3. Dependent Predicates
Usually most of the processing does not affect the control flow.
Determine the value of loop control variable for a certain # of iterations, and then
work backward to determine the value of input variables (input vector).
No simple procedure to solve for values of input vector for a selected path.
2. Tackle the path with the fewest decisions first. Select paths with least # of loops.
3. Start at the end of the path and list the predicates while tracing the path in reverse.
Each predicate imposes restrictions on the subsequent (in reverse order) predicate.
4. Continue tracing along the path. Pick the broadest range of values for variables affected
and consistent with values that were so far determined.
5. Continue until you reach the entrance, at which point you have established a set of input conditions for the whole path.
Alternately:
2. Pick a path & adjust all input values. These restricted values are used for next decision.
3. Continue. Some decisions may be dependent on and/or correlated with earlier ones.
4. The path is unachievable if the input values become contradictory, or, impossible.
If the path is achieved, try a new path for additional coverage.
PATH INSTRUMENTATION
Output of a test:
Results observed. But, there may not be any expected output for a test.
Outcome:
Any change or the lack of change at the output.
Expected Outcome:
Any expected change or the lack of change at the output (predicted as part of
design).
Actual Outcome:
Observed outcome
Coincidental Correctness:
Example: with input X = 16, a CASE/SELECT statement might compute Y := X - 14 on one path, Y := 2 on another, and Y := X mod 14 on a third; in every case Y is 2, so the observed outcome cannot tell which path was actually taken.
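A tiny sketch of the coincidental-correctness example (the path functions are stand-ins for the three CASE alternatives): X = 16 makes all three paths agree, while X = 17 separates them:

def path_a(x): return x - 14
def path_b(x): return 2
def path_c(x): return x % 14

print([f(16) for f in (path_a, path_b, path_c)])   # [2, 2, 2]  coincidentally identical
print([f(17) for f in (path_a, path_b, path_c)])   # [3, 2, 3]  now distinguishable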
1. General strategy: use a trace that confirms whether the expected outcome was obtained along the intended path.
2. Traversal (link) markers: name every link with a lowercase letter and instrument the links so that a link's name is recorded when it is executed during the test.
3. The succession of letters produced from the routine's entry to its exit then spells out the path name.
[Figure: an example routine's flowgraph with links named i, j, k, l, m, n, o, p, q, r, s; inputs A, B, C; and decisions A = 7?, B$ = "a"?, C = 0?, B$ = "d"?, C ≤ 0?]
[Figure: single link markers — one marker per link between decisions, e.g., i, j, k, n around Process A, Process B, Process C, Process D]
Problem: the processing within a link may be chewed open by bugs; because of (say) stray GOTO statements, control can leave the link, take a different route, and still rejoin the intended path, so a single marker does not prove the whole link was executed.
[Figure: double link markers — a marker at both the beginning and the end of every link, e.g., i, j, l, m, n, o, p, q, r]
Two link markers specify the path name and both the beginning & end of the link.
• Increment a link counter each time a link is traversed. Path length could confirm
the intended path.
• For avoiding the same problem as with markers, use double link counters.
Expect an even count = double the length.
• Now, put a link counter on every link. (earlier it was only between decisions)
If there are no loops, the link counts are = 1.
• Sum the link counts over a series of tests, say, a covering set. Confirm the total link
counts with the expected.
• Using double link counters avoids the same & earlier mentioned problem.
• Does the input-link count of every decision equal the sum of the link counts of the output links from that decision?
• Do the sum of the input-link counts for a junction equal the output-link count for that
junction?
• Do the total counts match the values you predicted when you designed the covering
test set?
This procedure and the checklist could solve the problem of Instrumentation.
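A minimal sketch (link letters and the routine are illustrative assumptions, reusing the earlier example) of traversal markers combined with link counters — each link appends its letter to a trace and increments a counter, and the trace read at exit is the path name:

from collections import Counter

trace = []
counts = Counter()

def mark(link):
    trace.append(link)    # traversal marker
    counts[link] += 1     # link counter

def example(x, y):
    mark('a')                     # entry link
    z = x + y
    v = x - y
    if z < 0:
        mark('b'); z = z + v      # link through JOE
    else:
        mark('c')                 # link that skips JOE
    mark('d'); z = z - v          # link into the loop
    for n in range(0, v + 1):
        mark('e'); z = z - 1      # loop link, once per iteration
    mark('f')                     # exit link
    return z

example(3, 1)
print(''.join(trace), dict(counts))   # path name and per-link counts for this test

Comparing the recorded path name and counts against the ones predicted when the covering test set was designed is exactly the check described in the checklist above.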
Limitations
• Instrumentation probe (marker, counter) may disturb the timing relations & hide
racing condition bugs.
If the presence or absence of probes modifies things (for example in the data base)
in a faulty way, then the probes hide the bug in the program.
• Probes are written in the source code & tagged into categories.
• Counters & traversal markers can be implemented.
• Can selectively activate the desired probes.
• Use macros or function calls for each category of probes. This may have less bugs.
• A general purpose routine may be written.
In general:
• Plan instrumentation with probes in levels of increasing detail.
To achieve C1 or C2 coverage:
• Predicate interpretation may require us to treat a subroutine as an in-line-code.
• Sensitization becomes more difficult.
• Selected path may be unachievable as the called components’ processing may block it.
• It assumes that effective testing can be done one level at a time without bothering what
happens at lower levels.
• predicate coverage problems & blinding.
Implementation & Application of Path Testing
• Use the procedure similar to the idealistic bottom-up integration testing, using a
mechanized test suite.
• Use the procedure similar to the idealistic bottom-up integration testing, but without using
stubs.
• Newer and more effective strategies could emerge to provide coverage in maintenance
phase.
• Path testing with C1 + C2 coverage is a powerful tool for rehosting old software.
• A translator from the old to the new environment is created & tested. Rehosting
process is to catch bugs in the translator software.
• A complete C1 + C2 coverage path test suite is created for the old software. Tests
are run in the old environment. The outcomes become the specifications for the
rehosted software.
• Another translator may be needed to adapt the tests & outcomes to the new
environment.
• The cost of the process is high, but it avoids risks associated with rewriting the code.
Q. Illustrate with an example, how statement and branch coverage can be achieved during path selection.
Write all steps involved in it.
Q. Define Path Sensitization. Explain heuristic procedure for sensitizing paths with the help of an example.
Q. How can a program control structure be represented graphically? Explain with the help of required
diagrams.
Q. Explain about Multi entry and Multi exit routines & fundamental path selection criteria
Q. Explain about path instrumentation. How are link counters useful in Path Instrumentation method?
Q. Write about implementation of path testing. What are the various applications of path testing?