Chapter 7 - Software Testing-1
Tadele M.
Topics to be covered
What is testing?
Software Testing Terminologies
Software Testing Life Cycle
Test case design
  Black Box testing
    Requirement Based Testing
    Equivalence Class partitioning
    Boundary value analysis
  White Box testing
    Control Flow Based testing
Levels of Testing
  Unit, Integration, System and Acceptance Testing
Test Plan
Test case specifications, execution and Analysis
Test automation
Limitations of Testing
Debugging
What is testing?
Several definitions:
“Testing is the process of establishing confidence that a program or
system does what it is supposed to.” (Hetzel, 1973)
“Testing is any activity aimed at evaluating an attribute or capability
of a program or system and determining that it meets its required
results.” (Hetzel, 1983)
“Testing is the process of executing a program or system with the
intent of finding errors.” (Myers, 1979)
Testing is not the process of demonstrating that errors are not present.
Background
The software testing process has two distinct goals:
To demonstrate to the developer and the customer that the software
meets its requirements (validation testing)
To discover faults or defects in the software, where its behavior is
incorrect or not in conformance with its specification (defect testing)
Who is involved in testing?
Software Test Engineers and Testers
Test manager
Development Engineers
Quality Assurance Group and Engineers
Software Testing: Terminologies
Error, Mistake, Bug, Fault and Failure
An error is a mistake made by an engineer.
This may be a syntax error, a misunderstanding of the specifications,
or a logical error.
Bugs are coding mistakes/errors.
A fault/defect is the representation of an error, where representation
is the mode of expression, such as narrative text, data flow diagrams,
ER diagrams, source code, etc.
A failure is an incorrect output/behavior that is caused by executing
a fault.
A particular fault may cause different failures, depending on
how it has been exercised.
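As an illustration (a hypothetical example, not from the slides), the sketch below shows a single fault that produces a failure only for some inputs; whether a failure is observed depends on how the fault is exercised.

```python
def average(values):
    """Return the arithmetic mean of a non-empty list of numbers."""
    total = 0
    for v in values:
        total += v
    # Fault: a hard-coded divisor instead of len(values), caused by the
    # engineer's erroneous assumption that lists always hold 3 elements.
    return total / 3


# No failure observed: the input happens to match the faulty assumption.
print(average([3, 6, 9]))   # 6.0, which is correct

# Failure observed: the same fault now yields an incorrect output.
print(average([10, 20]))    # 10.0, but the correct mean is 15.0
```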
Software Testing: Terminologies…
To test software, we develop test cases and test suites.
Test cases are specifications of the inputs to test the system and the
expected outputs from the system, plus a statement of what is being
tested.
During testing, a program is executed with a set of test cases; a
failure during testing shows the presence of defects.
Test/Test Suite: a set of one or more test cases used to test a
module, a group of modules, or the entire system.
Verification: the software should conform to its specification.
o i.e., “Are we building the product right?”
Validation: the software should do what the user really requires.
o i.e., “Are we building the right product?”
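As a minimal sketch in Python's standard unittest module (the absolute function and its test values are assumptions made for illustration): each test method is a test case with an input and an expected output, and the test class groups them into a suite. Verification asks whether such checks match the specification; validation asks whether the specification itself matches what the user needs.

```python
import unittest


def absolute(x):
    """Unit under test: return the absolute value of x."""
    return -x if x < 0 else x


class AbsoluteTests(unittest.TestCase):
    """A small test suite: each method below is one test case."""

    def test_negative_input(self):
        # Input -5, expected output 5: negative values are negated.
        self.assertEqual(absolute(-5), 5)

    def test_positive_input(self):
        # Input 7, expected output 7: positive values pass through.
        self.assertEqual(absolute(7), 7)

    def test_zero(self):
        # Boundary between the two behaviours.
        self.assertEqual(absolute(0), 0)


if __name__ == "__main__":
    unittest.main()
```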
Test case design
Test case design involves designing the test cases (inputs and
outputs) used to test the system.
Two approaches to designing test cases are:
Functional / behavioral / black box testing
Structural or white box testing
Black Box testing
It is designed to validate functional requirements without regard
to the internal workings of a program
The test cases are decided solely on the basis of the requirements
or specifications of the program or module
No knowledge of internal design or code required.
The tester only knows the inputs that can be given to the
system and what output the system should give.
Black Box testing…
Black box testing focuses only on functionality
What the program does; not how it is implemented
Advantages
Tester can be non-technical.
Test cases can be designed as soon as the functional specifications are
complete
Disadvantages
The tester can never be sure of how much of the system under test has
been tested.
i.e., there is a chance of leaving paths unexercised during this testing
The test inputs need to be drawn from a large sample space.
Equivalence Class partitioning
Divide the input space into equivalence classes
If the software works for a test case from a class, then it is likely
to work for all values in that class
Can reduce the set of test cases if such equivalence classes can be
identified
Identifying ideal equivalence classes is impossible without looking at
the internal structure of the program
For robustness, include equivalence classes for invalid inputs as well
Example: consider the following taxation table
Income                                 Tax Percentage
Up to and including 500                0
More than 500, but less than 1,300     30
1,300 or more, but less than 5,000     40
Equivalence Class partitioning…
Based on the above table, 3 valid and 4 invalid equivalence classes
can be identified
Valid Equivalence Classes
Values from 0 up to and including 500, more than 500 but less than
1,300, and 1,300 up to (but not including) 5,000
Invalid Equivalence Classes
Values less than 0, values of 5,000 or more, no input at all, and
inputs containing letters
From these classes we can generate the following test cases
Test Case ID    Income      Expected Output (Tax)
1               200         0
2               1000        300
3               3500        1400
4               -4500       Income can't be negative
5               6000        Tax rate not defined
6               (no input)  Please enter income
7               98ty        Invalid income
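A sketch of how these equivalence class test cases could be automated with Python's unittest; the compute_tax function and its error messages are hypothetical stand-ins built from the taxation table above, not code from the slides.

```python
import unittest


def compute_tax(income):
    """Hypothetical unit under test, implementing the taxation table."""
    if income is None or income == "":
        return "Please enter income"
    try:
        income = float(income)
    except (TypeError, ValueError):
        return "Invalid income"
    if income < 0:
        return "Income can't be negative"
    if income <= 500:
        return 0
    if income < 1300:
        return income * 0.30
    if income < 5000:
        return income * 0.40
    return "Tax rate not defined"


class EquivalenceClassTests(unittest.TestCase):
    def test_one_representative_per_class(self):
        # One test case per equivalence class, as in the table above.
        cases = [
            (200, 0),                             # valid: up to 500
            (1000, 300),                          # valid: 500 to 1300
            (3500, 1400),                         # valid: 1300 to 5000
            (-4500, "Income can't be negative"),  # invalid: negative
            (6000, "Tax rate not defined"),       # invalid: too large
            ("", "Please enter income"),          # invalid: no input
            ("98ty", "Invalid income"),           # invalid: letters
        ]
        for income, expected in cases:
            with self.subTest(income=income):
                self.assertEqual(compute_tax(income), expected)


if __name__ == "__main__":
    unittest.main()
```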
Boundary value analysis
It has been observed that programs that work correctly for a set of
values in an equivalence class fail on some special values.
These values often lie on the boundary of the equivalence class.
A boundary value test case is a set of input data that lies on the edge
of an equivalence class of input/output.
Example
Using the example from equivalence class partitioning, generate test
cases that provide 100% BVA coverage.
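Continuing the previous sketch (and assuming the hypothetical compute_tax is defined in the same module), boundary value analysis picks inputs on and immediately around each boundary of the taxation table; a step size of 1 is assumed for illustration.

```python
import unittest

# Reuses the hypothetical compute_tax() from the sketch above, assumed
# to be defined in the same module.

class BoundaryValueTests(unittest.TestCase):
    def test_boundaries_of_each_tax_band(self):
        # Values on and just around each boundary: 0, 500, 1300, 5000.
        cases = [
            (0, 0),                            # lowest valid income
            (-1, "Income can't be negative"),  # just below the lowest
            (500, 0),                          # last value taxed at 0%
            (501, 501 * 0.30),                 # first value taxed at 30%
            (1299, 1299 * 0.30),               # last value taxed at 30%
            (1300, 1300 * 0.40),               # first value taxed at 40%
            (4999, 4999 * 0.40),               # last value with a defined rate
            (5000, "Tax rate not defined"),    # first undefined value
        ]
        for income, expected in cases:
            with self.subTest(income=income):
                self.assertEqual(compute_tax(income), expected)
```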
Control flow based criteria
Considers the program as a control flow graph: nodes represent code
blocks, i.e., sets of statements that are always executed together
An edge (i, j) represents a possible transfer of control from node i
to node j.
Any control flow graph has a start node and an end node
A complete path (or a path) is a path whose first node is the start
node and the last node is an exit node.
A control flow graph supports a number of coverage criteria (see the
sketch after this list). These are
Statement Coverage Criterion
Branch coverage
Linearly Independent paths
(ALL) Path coverage criterion
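As a sketch of how a small function maps onto a control flow graph (the function, node numbering, and path list are illustrative assumptions):

```python
def classify(x):
    # Node 1 (start): entry and the decision x < 0
    if x < 0:
        sign = -1          # Node 2: true branch
    else:
        sign = 1           # Node 3: false branch
    # Node 4: join point and the decision x == 0
    if x == 0:
        sign = 0           # Node 5: true branch
    return sign            # Node 6 (exit)

# Edges: (1,2), (1,3), (2,4), (3,4), (4,5), (4,6), (5,6).
# Statement coverage: the executed paths together visit every node.
# Branch coverage: every edge out of a decision node is taken.
# Path coverage: every complete path from node 1 to node 6 is executed,
# i.e. 1-2-4-5-6, 1-2-4-6, 1-3-4-5-6, 1-3-4-6 (the first of these is
# infeasible, since x cannot be both negative and zero).
```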
Statement Coverage Criterion
The simplest coverage criterion is statement coverage,
which requires that each statement of the program be executed at least
once during testing.
i.e., the set of paths executed during testing should include all nodes
This coverage criterion is not very strong, and can leave errors
undetected,
because it does not require a decision to evaluate to false when there
is no else clause.
E.g.: a single test with A = 3, B = 9 (see the sketch below)
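The code behind the A = 3, B = 9 example is not reproduced on the slide, so the following is a hypothetical reconstruction of the point: one test executes every statement, yet the false outcome of the decision is never exercised.

```python
def scaled(a, b):
    # No else clause: when a <= 1 the decision is false and b is
    # returned unchanged.
    if a > 1:
        b = b / a
    return b

# A single test case achieves 100% statement coverage:
assert scaled(3, 9) == 3   # the assignment inside the if and the return
                           # are both executed

# But the false outcome of the decision (a <= 1) is never tested, so any
# defect that only shows up when the division is skipped goes undetected
# by this test suite.
```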
Linearly Independent paths
Prepare test cases that cover all linearly independent paths
Example: binary search flow graph (figure)
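The flow graph figure is not reproduced here; as a hedged substitute, the sketch below gives an iterative binary search whose decisions determine the linearly independent paths to cover (the test values are illustrative assumptions).

```python
def binary_search(items, key):
    """Return the index of key in the sorted list items, or -1."""
    low, high = 0, len(items) - 1
    while low <= high:           # decision 1: loop continues or exits
        mid = (low + high) // 2
        if items[mid] == key:    # decision 2: found or not
            return mid
        elif items[mid] < key:   # decision 3: search right or left half
            low = mid + 1
        else:
            high = mid - 1
    return -1

# Test cases chosen so that each linearly independent path (one per
# decision outcome beyond a baseline path) is exercised at least once:
assert binary_search([], 5) == -1            # loop body never entered
assert binary_search([1, 3, 5, 7], 5) == 2   # equality branch taken
assert binary_search([1, 3, 5, 7], 8) == -1  # only "search right" taken
assert binary_search([1, 3, 5, 7], 0) == -1  # only "search left" taken
```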
Levels of Testing
(Figure: mapping of development artifacts to testing levels, e.g., user
needs are validated by acceptance testing.)
Test Plan
Testing usually starts with a test plan and ends with acceptance
testing.
The test plan is a general document that defines the scope of and
approach to testing for the whole project
Inputs are the SRS, project plan, design, code, …
The test plan identifies what levels of testing will be done, what
units will be tested, etc., in the project
It usually contains
Test unit specifications: what units need to be tested separately
Features to be tested: these may include functionality, performance,
usability,…
Approach: criteria to be used, when to stop, how to evaluate, etc
Test deliverables
Schedule and task allocation
Test case specifications
The test plan focuses on the approach; it does not deal with the
details of testing a unit.
Test case specification has to be done separately for each unit.
Based on the plan (approach, features, ...), test cases are determined
for a unit
The expected outcome also needs to be specified for each test case
Together, the set of test cases should detect most of the defects,
i.e., a larger set of test cases will detect most of the defects,
while a smaller set may fail to catch some of them
Test data are inputs that have been devised to test the system
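A sketch of what a test case specification for one unit might record, expressed as a simple Python structure; the field names are illustrative assumptions, and compute_tax is the hypothetical unit from the earlier sketches, assumed to be in the same module.

```python
# Each entry records what is being tested, the input, and the expected
# outcome, so that results can be analysed after execution.
test_case_specs = [
    {
        "id": "TC-01",
        "purpose": "Income in the 30% band is taxed correctly",
        "input": {"income": 1000},
        "expected": 300,
    },
    {
        "id": "TC-02",
        "purpose": "Negative income is rejected",
        "input": {"income": -4500},
        "expected": "Income can't be negative",
    },
]

for spec in test_case_specs:
    actual = compute_tax(**spec["input"])   # unit under test
    status = "pass" if actual == spec["expected"] else "fail"
    print(spec["id"], status)
```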
Test automation
Limitations of Testing
Testing has its own limitations.
You cannot test a program completely; exhaustive testing is impossible
(e.g., a function taking just two 32-bit integers already has 2^64
possible input combinations)
You cannot test every path
You cannot test every valid input
You cannot test every invalid input
We can only test against system requirements
- May not detect errors in the requirements
- Incomplete or ambiguous requirements may lead to inadequate or incorrect
testing.
Time and budget constraints
You will run out of time before you run out of test cases
Even if you do find the last bug, you’ll never know it
Debugging
Debugging is the process of locating and fixing or bypassing bugs
(errors) in computer program code.
To debug a program is to start with a problem, isolate the source of the
problem, and then fix it.
Testing does not include efforts associated with tracking down
bugs and fixing them.
Testing finds errors; debugging localizes and repairs them.
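As a small illustration (not from the slides), Python's built-in breakpoint() drops into the pdb debugger so the source of a failure can be isolated before it is fixed; the median function here is a hypothetical example.

```python
def median(values):
    ordered = sorted(values)
    mid = len(ordered) // 2
    # A failing test pointed here; pausing execution lets us inspect
    # `ordered` and `mid` to localize the fault (even-length lists are
    # not handled) before repairing it.
    # breakpoint()   # uncomment to step through interactively with pdb
    return ordered[mid]


# The observed failure: for an even number of values the result is wrong.
print(median([1, 2, 3, 4]))   # prints 3, but the median is 2.5
```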