Software testing
1. Verification:
• Definition: The process of checking whether the software is being built correctly.
• Focus: Ensures the product meets the specified requirements and design specifications.
• Objective: To confirm that the development process is followed correctly and the system aligns with technical specifications.
• When: Done during development (static testing).
• Methods:
– Reviews (code, design, or requirements)
– Walkthroughs
– Inspections
– Static analysis
• Question: Are we building the product right?
2. Validation:
• Definition: The process of checking whether the software being built is the right product, i.e., that it meets the user's actual needs.
• Focus: Ensures the finished product fulfills its intended use for the end-user.
• When: Done by executing the software (dynamic testing).
• Question: Are we building the right product?
• Methods (types of validation testing):
– Volume Testing: Assesses the system's ability to handle large volumes of data efficiently.
– Configuration Testing: Ensures the system works correctly with various hardware and software configurations.
– Regression Testing: Checks that new changes or updates do not negatively impact existing functionality.
– Recovery Testing: Tests the system's ability to recover from crashes, failures, or unexpected disruptions.
– Maintenance Testing: Evaluates the system's performance and functionality after updates or modifications.
– Documentation Testing: Ensures that all user manuals, guides, and documentation are accurate and helpful.
– Usability Testing: Measures how user-friendly and intuitive the system is for end-users.
Testing
• There is a massive misunderstanding about
testing: that it improves software. It doesn't!
– Weighing yourself doesn't reduce your weight
– Going to the doctor doesn't make you healthy.
• Those things help to identify problems that
you might choose to resolve. Testing does too.
• Testing itself does not make the product better, even though it is part of a process that does make the product better.
TEST SELECTION
Test cases
• A test case is a set of conditions under which a tester will determine whether an application, a software system, or one of its features is working as it was originally intended to.
• Test cases are often referred to as test scripts, particularly when written down; written test cases are usually collected into test suites.
• A test case is a set of actions executed to verify a particular feature or functionality of your software application.
Test cases
• A test case is a description of a specific interaction that a tester
will have in order to test a single behavior of the software.
• Test cases are very similar to use cases, in that they are step-by-
step narratives which define a specific interaction between the
user and the software.
• A typical test case is laid out in a table, and includes:
– A unique name and number
– A requirement which this test case is exercising
– Preconditions which describe the state of the software before the test
case
– Steps that describe the specific steps which make up the interaction
– Expected Results which describe the expected state of the software
after the test case is executed
Test cases
• Test cases must be repeatable.
• Good test cases are data-specific, and describe
each interaction necessary to repeat the test
exactly.
Writing test cases
• First you must understand the language fundamentals
– Sizes and limits of variables, platform-specific information
• Second, you must understand the domain
• Read the requirements
• Think like a user – what possible things do they want to do
• Think about possible “mistakes”, e.g., invalid input
• Think about impossible conditions or input
• What is the testing intended to prove?
– Correct operation – gives correct behavior for correct input
– Robustness – responds to incorrect or invalid input with proper results
– User acceptance – typical user behavior
• Write down the test cases
Writing Good Test Cases
• Test Cases need to be simple and transparent
• Create Test Case with end user in mind
• Avoid test case repetition
• Do not Assume
– Stick to the Specification Documents.
• Ensure 100% coverage of the software requirements
• Test Cases must be identifiable.
• Implement Testing Techniques
– It's not possible to check every possible condition in your software
application
– Testing techniques help you select a few test cases with the
maximum possibility of finding a defect
Writing Good Test Cases
• Boundary Value Analysis (BVA)
– Tests the boundaries of a specified range of values. For example, if an input accepts values from 1 to 100, BVA suggests testing 0, 1, 2, 99, 100, and 101.
• Repeatable and self-standing
– The test case should generate the same results
every time no matter who tests it
Writing a test case
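• As an illustration, a test case for a hypothetical login feature might be laid out as follows (all names and values here are assumed, not taken from a real project):
– Name/Number: TC-01, “Valid login”
– Requirement: REQ-5, “Registered users can log in”
– Preconditions: A user account “alice” exists with password “s3cret!”; no user is logged in
– Steps: 1. Open the login page. 2. Enter username “alice” and password “s3cret!”. 3. Click “Log in”.
– Expected Results: The home page is displayed and shows “Welcome, alice”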
Writing test cases
• Cover all possible valid input
– Try multiple sets of values, not just one set of values
– Permutations of values
• Check boundary conditions
– Check for off-by-one conditions
• Check invalid input
– Illegal sets of values
– Illegal input
• Impossible conditions
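• A sketch of how these categories might translate into JUnit 4 tests; the AgeValidator class and its 1..120 rule are assumptions made up for illustration:

import org.junit.Test;
import static org.junit.Assert.*;

public class AgeValidatorTest {

    // Hypothetical class under test, inlined so the example is self-contained:
    // it accepts ages in the range 1..120.
    static class AgeValidator {
        static boolean isValid(int age) {
            return age >= 1 && age <= 120;
        }
    }

    @Test
    public void validInput() {
        assertTrue(AgeValidator.isValid(30));    // a typical valid value
    }

    @Test
    public void lowerBoundary() {
        assertTrue(AgeValidator.isValid(1));     // at the lower bound
        assertFalse(AgeValidator.isValid(0));    // just below: off-by-one check
    }

    @Test
    public void upperBoundary() {
        assertTrue(AgeValidator.isValid(120));   // at the upper bound
        assertFalse(AgeValidator.isValid(121));  // just above: off-by-one check
    }

    @Test
    public void invalidInput() {
        assertFalse(AgeValidator.isValid(-5));   // an illegal value
    }
}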
Writing test cases
• Beware of problems with comparisons
– How do you compare two floating-point numbers?
• Never do the following:
float a, b;
. . .
if (a == b)
• Is it 4.0000000 or 3.9999999 or 4.0000001?
• What is your limit of accuracy?
– In object-oriented languages, make sure whether you are comparing the contents of an object or the reference to an object:
String a = "Hello world!\n";
String b = "Hello world!\n";
if (a == b)        // compares references
vs.
if (a.equals(b))   // compares contents
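• A minimal sketch of a tolerance-based comparison; the epsilon of 1e-6 is an assumption and should be chosen to match your application's limit of accuracy:

public class FloatCompare {
    // Assumed accuracy limit; application-specific.
    static final float EPSILON = 1e-6f;

    // Compare floating-point numbers within a tolerance instead of using ==.
    static boolean nearlyEqual(float a, float b) {
        return Math.abs(a - b) < EPSILON;
    }

    public static void main(String[] args) {
        System.out.println(nearlyEqual(4.0000001f, 3.9999999f));   // true
    }
}

• JUnit's assertEquals(expected, actual, delta) applies the same tolerance idea.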
Tips for testing
• You cannot test every possible input, parameter value, etc.
– So you must think of a limited set of tests likely to expose bugs.
• Think about boundary cases
– positive; zero; negative numbers; infinity; very small
– right at the edge of an array or collection's size (plus or minus
one)
• Think about empty cases and error cases
– 0, -1, null; an empty list or array
• Test behavior in combination (see the sketch after this list)
– maybe add usually works, but fails after you call remove
– make multiple calls; maybe size fails the second time only
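• A sketch of those last two tips in JUnit 4, using java.util.ArrayList purely as a familiar stand-in for the class under test:

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;
import static org.junit.Assert.*;

public class ListBehaviorTest {

    @Test
    public void emptyCase() {
        List<Integer> list = new ArrayList<>();
        assertEquals(0, list.size());         // the empty collection
        assertFalse(list.contains(null));     // probing with null
    }

    @Test
    public void removeAfterAdd() {
        List<Integer> list = new ArrayList<>();
        list.add(42);
        list.remove(Integer.valueOf(42));     // remove after add: the combination case
        assertEquals(0, list.size());         // size checked after the sequence
    }

    @Test
    public void sizeOnSecondCall() {
        List<Integer> list = new ArrayList<>();
        list.add(7);
        assertEquals(1, list.size());         // first call
        assertEquals(1, list.size());         // second call: size must not change
    }
}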
Test Cases – Good Example
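• An illustrative good test case (all names and values here are hypothetical):
– TC-07 / REQ-12: “Reject passwords shorter than 8 characters”
– Preconditions: The registration page is open; no user is logged in
– Steps: 1. Enter username “bob”. 2. Enter password “abc123” (6 characters). 3. Click “Register”.
– Expected Result: Registration is refused with the exact message “Password must be at least 8 characters”
• It tests one behavior, is data-specific, and is repeatable by any tester.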
Test Cases – Bad Example
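• An illustrative bad counterpart:
– “Enter some user details and check that registration works properly.”
• It names no data, no steps, and no expected result, so two testers could execute it differently; it is neither repeatable nor data-specific.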
Test cases
• Why do we write test cases?
– The basic objective of writing test cases is to
validate the testing coverage of the application.
• Keep in mind while writing test cases that all
your test cases should be simple and easy to
understand.
• For any application, you will generally cover all the major types of test cases, including functional, negative, and boundary-value test cases.
Trustworthy tests
• Test one thing at a time per test method.
– 10 small tests are much better than 1 test 10x as large.
• Each test method should have few (likely 1) assert
statements.
– If you assert many things, the first that fails stops the test.
– You won't know whether a later assertion would have also failed.
• Tests should avoid logic.
– minimize if/else, loops, switch, etc.
– avoid try/catch
• If it's supposed to throw, use expected=...; if not, let JUnit catch it (see the sketch after this list).
• Torture tests are okay, but only in addition to simple tests.
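• A minimal JUnit 4 sketch of these guidelines; the Account class is a made-up example, inlined so the test is self-contained:

import org.junit.Test;
import static org.junit.Assert.*;

public class AccountTest {

    // Hypothetical class under test.
    static class Account {
        private int balance = 0;
        void deposit(int amount) {
            if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
            balance += amount;
        }
        int balance() { return balance; }
    }

    // One behavior, one assert: no if/else, loops, or try/catch.
    @Test
    public void depositIncreasesBalance() {
        Account a = new Account();
        a.deposit(100);
        assertEquals(100, a.balance());
    }

    // Expected exception declared via expected=, instead of a try/catch block.
    @Test(expected = IllegalArgumentException.class)
    public void depositRejectsNonPositiveAmount() {
        new Account().deposit(-1);
    }
}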
Test Execution
• The software testers begin executing the test plan after the developers
deliver the alpha build, or a build that they feel is feature complete.
• The alpha should be of high quality—the developers should feel that it
is ready for release, and as good as they can get it.
• There are typically several iterations of test execution.
– First, focus on new functionality
– Then, regression test to make sure that a change to one area of the software has not broken any other part of the software
– Regression testing usually involves executing all test cases which have
previously been executed
– There are typically at least two regression tests for any software project
Test Execution
• When is testing complete?
– No defects found
– Or defects meet acceptance criteria outlined in
test plan
Automating Test Execution
• Designing test cases and test suites is creative
– Like any design activity: A demanding intellectual
activity, requiring human judgment
• Executing test cases should be automatic
– Design once, execute many times
• Test automation separates the creative human
process from the mechanical process of test
execution
From Test Case Specifications to Test Cases
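• As an assumed illustration of this step: an abstract specification such as “a valid age at the lower boundary of 1..120” becomes a concrete, executable test case once actual data and a predicted result are bound to it:

import org.junit.Test;
import static org.junit.Assert.*;

public class BoundarySpecTest {
    // Specification (abstract): ages 1..120 are valid; exercise the lower boundary.
    // Test case (concrete): input 1, expected result true.
    @Test
    public void lowerBoundaryAgeIsValid() {
        int age = 1;                          // concrete datum bound to the spec
        assertTrue(age >= 1 && age <= 120);   // assumed validity rule, inlined
    }
}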
Scaffolding
• Test driver
– A “main” program for running a test
• May be produced before a “real” main program
• Provides more control than the “real” main program
– To drive program under test through test cases
• Test stubs
– Substitute for called functions/methods/objects
• Test harness
– Substitutes for other parts of the deployed environment
• Ex: Software simulation of a hardware device
Stubs
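• A minimal stub sketch; the assumed scenario is code under test that sends email through a MailService interface, with the stub standing in for the real mail server:

// Interface used by the code under test (assumed for illustration).
interface MailService {
    void send(String to, String body);
}

// Stub: substitutes for the real mail server during testing.
// It records calls instead of actually sending anything.
class MailServiceStub implements MailService {
    int sentCount = 0;
    String lastRecipient;

    @Override
    public void send(String to, String body) {
        sentCount++;
        lastRecipient = to;   // the test can assert on this afterwards
    }
}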
Drivers
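• A matching driver sketch: a “main” program that drives a (made-up) unit under test through one test case, reusing the stub above:

public class RegistrationDriver {
    public static void main(String[] args) {
        MailServiceStub mail = new MailServiceStub();
        register("alice@example.com", mail);   // drive the unit under test
        System.out.println(mail.sentCount == 1 ? "PASS" : "FAIL");
    }

    // Stand-in for the real unit under test: notifies a user by mail.
    static void register(String email, MailService mail) {
        mail.send(email, "Welcome!");
    }
}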
Generic or Specific?
• How general should scaffolding be?
– We could build a driver and stubs for each test case
– ... or at least factor out some common code of the driver
and test management (e.g., JUnit)
– ... or further factor out some common support code, to
drive a large number of test cases from data (as in
DDSteps)
– ... or further, generate the data automatically from a more
abstract model (e.g., network traffic model)
• A question of costs and re-use
– Just as for other kinds of software
Oracles
• Did this test case succeed, or fail?
– No use running 10,000 test cases automatically if
the results must be checked by hand!
• Range of specific to general, again
– e.g., JUnit: specific oracle (“assert”) coded by hand in each test case
– Typical approach: “comparison-based” oracle with
predicted output value
– Not the only approach!
Comparison-based oracle
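• A minimal comparison-based oracle sketch; the assumed setup pairs each test input with a predicted output, so verdicts need no human inspection:

import java.util.Map;

public class ComparisonOracle {
    public static void main(String[] args) {
        // Predicted outputs keyed by input (assumed precomputed test data).
        Map<Integer, Integer> predicted = Map.of(0, 0, 2, 4, 5, 25);

        // Oracle: compare actual output against the prediction for each case.
        for (Map.Entry<Integer, Integer> e : predicted.entrySet()) {
            int actual = square(e.getKey());            // program under test
            System.out.println("input " + e.getKey() + ": "
                    + (actual == e.getValue() ? "PASS" : "FAIL"));
        }
    }

    static int square(int x) { return x * x; }          // stand-in program under test
}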
Smoke Tests
• A smoke test is a subset of the test cases that
is typically representative of the overall test
plan.
– Smoke tests are good for verifying proper deployment or other non-invasive changes.
– They are also useful for verifying that a build is ready to send to test.
– Smoke tests are not a substitute for actual functional testing.
Summary
• Goal: Separate creative task of test design from
mechanical task of test execution
– Enable generation and execution of large test suites
– Re-execute test suites frequently (e.g., nightly or after
each program change)
• Scaffolding: Code to support development and
testing
– Test drivers, stubs, harness, including oracles
– Ranging from individual, hand-written test case drivers
to automatic generation and testing of large test suites
– Capture/replay where human interaction is required