
APP 141 - Test Strategy

Author: IBM
Owner: IBM

Document: 725133596.odt
Version: V1.0.0A
Date: 08-26-2004
Status: Draft
Document History

Document Location
This is a snapshot of an on-line document. Paper copies are valid only on the day they are printed. Refer to
the author if you are in any doubt about the currency of this document.

Revision History
Date of this revision: 08-26-2004    Date of next revision: (date)

Revision Number    Revision Date    Summary of Changes    Changes marked
(#)                (-)              (Describe change)     (N)

Approvals
This document requires the following approvals. Signed approval forms are filed in the Quality section of the
PCB.

Name Title
(name) (title)

Distribution
This document has been distributed to

Name Title
(name) (title)

Contents
1. Introduction
1.1 Identification
1.2 References
1.3 Notation
2. Scope
3. Assumptions
4. Testing Objectives
5. Risk Assessment
6. Test Focus Areas
7. Levels of Test
8. Types of Test
8.1 Functional Types of Test
8.2 Structural Types of Test
8.3 Test Focus / Types Matrix
8.4 Test Levels / Types Matrix
8.4.1 Functional Types of Test
8.4.2 Structural Types of Test
9. Organizational Responsibility
10. Test Methods
11. Entry / Exit Criteria
12. Tools
13. Metrics
14. Test Management and Reporting
14.1 Problem Tracking/Management Procedures
14.2 Change Management Procedures
14.3 Progress Tracking Procedures
14.4 Configuration Management
15. Approvals
16. Appendix A – Glossary of Testing Terms

1. Introduction
Please note that:
- Text in the “Editor’s comments” style (such as this) provides guidance to the editor, in conjunction with the corresponding work product description, and should be removed from the published version of the document.
- The “Reader’s comments” style (in purple) provides guidance for the (client) reader and should be modified as appropriate by the editor to reflect any customization of the template.
- The “Normal” style (in black) should be used by the editor when adding the text describing the (sub)system appropriate for this work product.
When using this template, refer to the Global Services Method work product description “APP 141
Test Strategy” and the associated technique papers “CAD Defect Management Procedures”, “CAD
Full Lifecycle Testing Concepts”, “CAD Performance Testing”, “CAD Static Testing Guidelines”,
“CAD Test Management Procedures” and “CAD Usability Testing”.
A significant amount of default text is included in this template. This default text is based on the full
lifecycle testing method described in the like-named technique paper. The text should be adopted,
changed or replaced by the editor as appropriate for the engagement and the (sub)system(s) under
test.
This chapter identifies the document and the (sub)system to which it relates, describes the contents of the
document, and states its purpose.

1.1 Identification
This document describes the test strategy for the (sub)system <(sub)system name>.
Via the IBM menu, Edit the Document Information, enter the name of the (sub)system being described
into the field “Text 2”, and press the “Update” button.

1.2 References
This document is based on and refers to the following documents:
[<no.>] <author>, <title>, <reference>, <version no.>, <date>{,<pages>}

where items in {} are optional.


e.g.,
[1] IBM Global Services, "Use Case Model for <(sub)system name>", gsmtpsdu.doc, V1.0.0A, 18
Apr 2001.
Then in the body of the text refer to the document by: Reference [1].

1.3 Notation
This section is optional. If appropriate describe any notation used in the document.
<Notation>

2. Scope
The scope defines the boundaries of testing. It identifies the applications, networks and operations within
scope, and any known to be out-of-scope.
Identify the applications, networks and operations within scope, and any known to be out-of-scope
and the sequence in which they will be released. Where appropriate refer to the Use Case Model
(APP 130), the Operational Model (ARC 113) and the Release Plan (ENG 108).

3. Assumptions
This chapter states the underlying assumptions of the test strategy.
Ensure that any such assumptions are also recorded and managed by the project manager in the
Issue Document (ENG 331) and Issue Log (ENG 332) work products, and/or using the HE1 Handle
Issues process/work pattern within WWPMM.

4. Testing Objectives
This chapter describes the tangible strategic goals of testing.
Describe the testing objectives as discussed in sections 2 “Testing Fundamentals” and 3.1.4 “Test
Objectives” of the technique paper “CAD Full Lifecycle Testing Concepts”.
Testing is the systematic search for defects in all project deliverables. It is the process of examining an
output of a process under consideration, comparing the results against a set of pre-determined expectations,
and dealing with the variances.
Testing is conducted to ensure that a product is developed that will prove useful to the end user.
The primary objectives of testing are to assure that:
- The system meets the users' needs ... has 'the right system been built'
- The user requirements are built as specified ... has 'the system been built right'
Other secondary objectives of testing are to:
- Instill confidence in the system, through user involvement
- Ensure the system will work from both a functional and performance viewpoint
- Ensure that the interfaces between systems work
- Establish exactly what the system does (and does not do) so that the user does not receive any "surprises" at implementation time
- Identify problem areas where the system deliverables do not meet the agreed-to specifications
- Improve the development processes that cause errors.

5. Risk Assessment
This chapter comprises a risk assessment driven by the business functions and technical requirements of the
solution under development.
Examine the project risk assessment and expand on risks specific to the test domain, as defined in
the Risk Definition (ENG 351), Risk Management Plan (ENG 352) and Risk Occurrence Document
(ENG 353) work products, and as described in section 3.1.2 “Risk Assessment” of the technique
paper “CAD Full Lifecycle Testing Concepts”. Aim to mitigate the project risks through testing and
manage the specific testing risks.

6. Test Focus Areas
Test focus areas are those critical attributes of the system that must be tested to provide the expected level
of confidence in the application or system.
Identify the test focus areas using section 3.1.3 “Test Focus” of the technique paper “CAD Full
Lifecycle Testing Concepts” as a guide. If there are any indicators from the RFP or from discussions
with client, as to what areas the client considers critical to the business or to the technology
implementation, these may be captured here in this section.
The following is a list of test focus areas:
Auditability
Ability to provide supporting evidence to trace processing of data.
Continuity of Processing
Ability to continue processing if problems occur. Includes the ability to back up and recover after a failure.
Correctness
Ability to process data according to prescribed rules. Controls over transactions and data field edits provide
an assurance on accuracy and completeness of data.
Maintainability
Ability to locate and fix an error in the system and to make dynamic changes to the system environment
without making system changes.
Operability
Effort required to learn and operate the system (can be manual or automated).
Performance
Ability of the system to perform certain functions or process certain volumes of transactions within a prescribed time.
Portability
Ability for a system to operate in multiple operating environments.
Reliability
Extent to which system will provide the intended function without failing.
Security
Assurance that the system/data resources will be protected against accidental and/or intentional modifications or misuse.
Usability
Effort required to learn and use the system.

7. Levels of Test
This section defines the levels at which testing will be performed. The levels correspond with different stages
of development and represent known levels of physical integration and quality.
Identify the levels of test using section 4 “Levels of Testing” of the technique paper “CAD Full
Lifecycle Testing Concepts” as a guide. The levels of testing must be synchronized with the build
and release strategies as defined in the Increment Goals (ENG104) and Release Plan (ENG108), as
described in section 3.1.6 “The Build Strategy” and Appendix B “Integration Approaches” of the
same technique paper.
Where appropriate change these default test levels to correspond with the testing standards of the
organization.
The system is tested in steps, in line with the build and release strategies, from individual units of code
through integrated subsystems to the deployed releases and to the final system.
Testing proceeds through various physical levels of the application development lifecycle. Each completed
level represents a milestone on the project plan and each stage represents a known level of physical
integration and quality. These stages of integration are known as Testing Levels.

The Levels of Testing used in the application development lifecycle are:


Requirements Testing
Requirements testing involves the verification and validation of requirements through static and dynamic tests.
The validation testing of requirements is covered under Acceptance Testing.
Design Testing

Design testing involves the verification and validation of the system design through static and dynamic tests.
The validation testing of external design is done during Acceptance Testing and the validation testing of
internal design is covered during Unit, Integration and System Testing.
Unit Testing
Unit level test is the initial testing of new and changed code in a module. It verifies the internal logic of the program or module against the program specifications and validates the logic.
Integration Testing
Integration level tests verify proper execution of application components. Communication between modules
within the sub-system is tested in a controlled and isolated environment within the project.
System Testing
System level tests verify proper execution of the entire set of application components, including interfaces to other
applications. Both functional and structural types of tests are performed to verify that the system is
functionally and operationally sound.
Systems Integration Testing
Systems integration testing is a test level which verifies the integration of all applications, including interfaces
internal and external to the organization, with their hardware, software and infrastructure components in a
production-like environment.
Acceptance Testing
Acceptance tests verify that the system meets user requirements as specified. Acceptance testing simulates the user environment, emphasizes security, documentation and regression tests, and demonstrates to the sponsor and end user that the system performs as expected so that they may accept the system.
Operability Testing
Operability tests verify that the application can operate in the production environment.
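
As an illustration of the unit test level described above, the sketch below shows a minimal automated unit test. The module, function and expected values are hypothetical examples introduced here for illustration; they are not part of the template and simply show how a module's logic can be verified against its specification.

```python
import unittest


def apply_discount(amount, rate):
    """Hypothetical unit under test: apply a percentage discount to an amount."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(amount * (1 - rate), 2)


class ApplyDiscountUnitTest(unittest.TestCase):
    """Unit-level checks: verify the module's logic against its specification."""

    def test_typical_value(self):
        self.assertEqual(apply_discount(100.00, 0.15), 85.00)

    def test_boundary_values(self):
        # Boundary value analysis: exercise the extremes of the valid input range.
        self.assertEqual(apply_discount(100.00, 0.0), 100.00)
        self.assertEqual(apply_discount(100.00, 1.0), 0.00)

    def test_error_handling(self):
        # Error handling: invalid input must be rejected, not silently processed.
        with self.assertRaises(ValueError):
            apply_discount(100.00, 1.5)


if __name__ == "__main__":
    unittest.main()
```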

8. Types of Test
This section defines the types of testing, which relate to business functions (Functional Types) or to structural
functions (Structural Types).
Identify the types of test using section 5 “Types of Test” of the technique paper “CAD Full Lifecycle
Testing Concepts” as a guide.
These are generic for most applications, but if there are some specific to the industry or technology,
it is important to include these as well.

8.1 Functional Types of Test


The following is a list of the functional types of tests:
Audit and Controls
Verifies the adequacy and effectiveness of controls and completeness of data processing results.
Normally carried out as part of System Testing once the primary application functions have been
stabilized.
Conversion
Verifies the compatibility of converted programs, data and procedures with the "old" ones that are being
converted or replaced.
Normally starts early in development and continues.
(User) Documentation and Procedures
Verifies that the interface between the system and the people works and is useable. Verifies that the
instruction guides are helpful and accurate.
Normally done after the externals of the system are stabilized.
Error Handling
Verifies the system function for detecting and responding to exception conditions. Completeness of error
handling determines the usability of a system and ensures that incorrect transactions are properly handled.
Should be included in all levels of testing.
Function
Verifies that each business function operates according to the Detailed Requirements, the External and
Internal Design documents.
Normally done as part of Integration, System and Business Acceptance Testing.
Installation
Verifies that applications can be easily installed and run in the target environment.
Normally done for network systems, multiple location installs and vendor packages after System
Testing, possibly in parallel with Business Acceptance Testing.
Interface / Inter-system
Verifies that the interconnection between applications and systems functions correctly.
Normally done as part of System Testing.

Parallel
Verifies that the same input on "old" and "new" systems produces the same results.
It is more an implementation than a testing strategy. Normally done to compare an old application
with its replacement or when automating manual systems.
Regression
Verifies that, as a result of making changes to one part of the system, unwanted changes were not introduced
to other parts.
Normally done as the last part of System Testing.
Transaction Flow
Verifies the proper and complete processing of a transaction from the time it enters the system to the time of
its completion or exit from the system.
Normally done after the point in System Testing when the application is demonstrably stable.
Usability
Verifies that the final product is user-friendly and easy to use.
Normally done as part of functional testing during System and Business Acceptance Testing.

8.2 Structural Types of Test


The following is a list of the structural types of testing:
Backup and Recovery
Verifies the capability of the application to be restarted after a failure.
Normally carried out as part of System Testing and verified in Operability Testing.
Contingency
Verifies that a crucial application and its databases, networks, and operating processes can be recovered
after a major outage or disaster.
Normally conducted by operations staff after System Testing and probably concurrent with the
Acceptance and Operability testing.
Job Stream
Verifies the execution of job streams and the correct handling of exceptions.
Normally done as a part of operational testing.
Operational
Verifies the ability of the application to operate at an acceptable level of service in the production-like
environment.
Normally carried out in Integration and System Testing and verified during Operability Testing.
Performance
Verifies that the application meets the expected level of performance in a production-like environment.
Normally started as soon as working programs (not necessarily defect free) are available. Attention
to performance issues should begin in the Design Phase. Read the technique paper “CAD
Performance Testing” for more information.

Security
Verifies that the application provides an adequate level of protection for confidential information and data
belonging to other systems.
Normally started during System Testing and continues through Business Acceptance Testing.
Stress / Volume
Verifies that the application has acceptable performance characteristics under peak load conditions.
Normally carried out at the end of System Testing after all functions are stable.
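
Purely as an illustration of the structural types listed above, the sketch below shows one simple way a performance expectation might be asserted in an automated test. The operation, data volume and time threshold are hypothetical assumptions, not prescribed values; real limits come from the project's performance requirements.

```python
import time
import unittest


def process_batch(records):
    """Hypothetical operation whose performance is under test."""
    return [r * 2 for r in records]


class BatchPerformanceTest(unittest.TestCase):
    """Performance-style check: the operation must finish within a prescribed time."""

    def test_completes_within_threshold(self):
        records = list(range(100_000))        # representative, production-like volume
        started = time.perf_counter()
        process_batch(records)
        elapsed = time.perf_counter() - started
        # Threshold chosen for illustration only; real limits come from requirements.
        self.assertLess(elapsed, 0.5, "batch processing exceeded the agreed response time")


if __name__ == "__main__":
    unittest.main()
```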

8.3 Test Focus / Types Matrix


The table below documents which test types cover each test focus area.
This paragraph is optional. If appropriate, include and complete the table below, changing any row or
column headings, and changing any cell markings, as required. Alternatively include an empty table
in an appendix for later completion as suggested by the work product description.
The test focus areas (rows) are: Auditability, Continuity of Processing, Correctness, Maintainability, Operability, Performance, Portability, Reliability, Security and Usability.

The functional test type columns are: Audit & Controls, Conversion, Documentation & Procedures, Error Handling, Function, Installation, Interface / Inter-system, Parallel, Regression, Transaction Flow and Usability.

The structural test type columns are: Backup & Recovery, Contingency, Job Stream, Operational, Performance, Security and Stress / Volume.

Each cell marked in the completed matrix indicates that the test type covers the corresponding test focus area.
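
Where a project also wants this matrix in machine-readable form (for example to drive coverage reporting), one possible representation is sketched below. The mappings shown are illustrative placeholders only; they are not the completed matrix for any particular project.

```python
# Illustrative-only representation of a test focus / test types matrix.
# Each focus area maps to the set of test types recorded against it; the entries
# below are placeholders to be replaced with the project's agreed coverage.
FOCUS_TO_TEST_TYPES = {
    "Auditability": {"Audit and Controls", "Function"},
    "Continuity of Processing": {"Backup and Recovery", "Contingency"},
    "Correctness": {"Function", "Regression", "Transaction Flow"},
    "Performance": {"Performance", "Stress / Volume"},
    "Security": {"Security"},
    "Usability": {"Usability", "Documentation and Procedures"},
}


def types_covering(focus_area):
    """Return the test types recorded against a given focus area."""
    return sorted(FOCUS_TO_TEST_TYPES.get(focus_area, set()))


if __name__ == "__main__":
    for area in sorted(FOCUS_TO_TEST_TYPES):
        print(f"{area}: {', '.join(types_covering(area))}")
```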

8.4 Test Levels / Types Matrix


The tables below document the functional and structural test levels in which each test type is performed.

This paragraph is also optional. If appropriate, include and complete the tables below as specified in
section 5.4 “Relationship Between Levels and Types of Testing” of the technique paper “CAD Full
Lifecycle Testing Concepts” changing any row or column headings, and changing any cell markings,
as required. Alternatively include empty tables in an appendix for later completion as suggested by
the work product description.

8.4.1 Functional Types of Test


The test levels (columns) are: Unit, Integration, System, Systems Integration, Acceptance and Operability.

The functional types of test (rows) are: Audit & Controls, Conversion, Documentation & Procedures, Error Handling, Function, Installation, Interface / Inter-system, Parallel, Regression, Transaction Flow and Usability.

Each cell marked in the completed matrix indicates that the test type is performed at the corresponding level.

8.4.2 Structural Types of Test


The test levels (columns) are: Unit, Integration, System, Systems Integration, Acceptance and Operability.

The structural types of test (rows) are: Backup & Recovery, Contingency, Job Stream, Operational, Performance, Security and Stress / Volume.

Each cell marked in the completed matrix indicates that the test type is performed at the corresponding level.

9. Organizational Responsibility
This chapter describes in generic terms the handoffs and handovers that occur at various points in the
development lifecycle as the product or solution evolves from specification to design to implementation.
The following table summarizes the testing responsibilities.
Identify at a high level which business or functional areas need to get involved in order to achieve the
strategic testing objectives.
Complete the table below using sections 2.10 “The Testing Team” and 8.3 “Test Team Composition &
User Involvement” of the technique paper “CAD Full Lifecycle Testing Concepts” as a guide. Change
any row or column headings as required. The default column headings are roles defined in Global
Services Method.

The role columns are: Test Manager, Test Architect, Test Specialist - Technical, Test Specialist - Business, Test Environment Specialist, Application Developer, Operations and User.

The testing responsibility rows are:

Test Planning
  Master Test Plan
  Detailed Test Plans
    Unit Test
    Integration Test
    System Test
    Systems Integration Test
    Acceptance Test
    Operability Test
Test Preparation
  Unit Test
  Integration Test
  System Test
  Systems Integration Test
  Acceptance Test
  Operability Test
Test Execution
  Unit Test
  Integration Test
  System Test
  Systems Integration Test
  Acceptance Test
  Operability Test
Results Reporting
  Unit Test
  Integration Test
  System Test
  Systems Integration Test
  Acceptance Test
  Operability Test

Each cell marked in the completed table indicates that the role has responsibility for the corresponding activity.

10. Test Methods
This section documents the methods and techniques used for testing the system.
Identify the standards to be applied to testing the system and mention any techniques specific to the
development or technology environment. This could include testing standards of the organization,
business unit or competency.

11. Entry / Exit Criteria
This section defines the pre-requisites (entry criteria) and post conditions (exit criteria) for each of the
detailed levels of testing within project scope.
The concept of establishing pre-requisites (entry criteria) and post conditions (exit criteria) is extremely important in managing any process. Entry / exit criteria are documented to ensure a clear understanding of where hand-offs occur, so that the sending organization or unit can verify completeness before handing over and the receiving unit can validate its readiness to initiate the process.
Entrance criteria are those factors that must be present, at a minimum, to be able to start an activity. In
Integration Testing for example, before a module can be integrated into a program, it must be compiled
cleanly and have successfully completed unit testing.
If the entrance criteria of the next phase have been met, the next phase may be started even though the
current phase is still under way.
Exit criteria are those factors that must be present to declare an activity completed. To proclaim System
Testing completed, two criteria might be that all test cases must have been executed with a defined level of
success (if other than 100%) and that there must be no more than a mutually agreed upon number of
outstanding problems left unresolved. Exit criteria must be expressed in specific terms, such as "X will be accepted if Y and Z are completed."
The User Acceptance Test may be viewed as the Exit Criteria for the development project.
Document the entry and exit criteria for each level of testing using section 2.5 “Entrance and Exit
Criteria” of the technique paper “CAD Full Lifecycle Testing Concepts” for a more complete
description.
Testing Level Entry Criteria Exit Criteria
Unit Test
Integration Test
System Test
Systems Integration Test
Acceptance Test
Operability Test
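
Once the criteria in the table above are agreed, they can also be expressed in a checkable form. The sketch below is a minimal illustration with hypothetical thresholds and inputs; it is not part of the template and does not prescribe particular criteria.

```python
def exit_criteria_met(executed, passed, open_defects,
                      required_pass_rate=1.0, max_open_defects=0):
    """Evaluate simple exit criteria for a test level.

    executed / passed    -- test case counts for the level
    open_defects         -- unresolved problems at the end of the level
    required_pass_rate   -- agreed pass rate (1.0 means 100%)
    max_open_defects     -- mutually agreed number of outstanding problems
    All thresholds are illustrative defaults, not prescribed values.
    """
    if executed == 0:
        return False  # nothing executed: the level cannot be declared complete
    pass_rate = passed / executed
    return pass_rate >= required_pass_rate and open_defects <= max_open_defects


# Example: System Test with a 95% agreed pass rate and up to 3 open problems.
print(exit_criteria_met(executed=200, passed=192, open_defects=2,
                        required_pass_rate=0.95, max_open_defects=3))  # True
```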

12. Tools
This chapter documents the testing tools strategy for the project.
Identify here the technology and/or tools strategy applied to the project, including any technology or
tools strategy that the organization, business unit or competency may have embarked on and which
is relevant to the project. The impact of these on any other strategy components should be noted in
their discussion.

13. Metrics
This section documents the metrics strategy for the project.
Include here any metrics program applied to the project. As with the tool strategy, if the organization,
business unit or competency has any metrics program or measurements strategy relating to testing,
the test strategy is the place to include the information.
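
As a purely illustrative example of the kind of measurement such a program might track, the sketch below computes a test case pass rate and a defect density. The figures and the size measure (KLOC) are hypothetical assumptions, not recommended targets.

```python
def pass_rate(passed, executed):
    """Percentage of executed test cases that passed."""
    return 100.0 * passed / executed if executed else 0.0


def defect_density(defects_found, kloc):
    """Defects found per thousand lines of code (KLOC)."""
    return defects_found / kloc if kloc else 0.0


if __name__ == "__main__":
    # Hypothetical figures for one test level, shown for illustration only.
    print(f"Pass rate: {pass_rate(passed=180, executed=200):.1f}%")                     # 90.0%
    print(f"Defect density: {defect_density(defects_found=24, kloc=12):.1f} per KLOC")  # 2.0
```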

14. Test Management and Reporting
The chapter identifies and describes the purpose of the test management and reporting procedures.
Identify and describe the purpose of the test management and reporting procedures used while
testing the system.
Read the technique paper “CAD Test Management Procedures” for a description of what is required.
IBM Global Services Method processes and Worldwide Project Management Method (WWPMM) work
patterns are employed for test management and reporting purposes.
Visit http://w3-1.ibm.com/transform/project/pmmethod/ for a complete description of the WWPMM
work patterns.

14.1 Problem Tracking/Management Procedures


Read section 8.6 “Problem Management” of the technique paper “CAD Full Lifecycle Testing
Concepts” for a description of what is required.
Defect Management Process
The purpose of the defect management process is to implement the processes to manage the tracking and
fixing of defects found during testing and perform root cause analysis.
Read the technique paper “CAD Defect Management Procedures” for a complete description of the
process.
Severity Level Definitions
When reporting defects, the following severity levels are used:
Severity Level 1
Description: System Failure. No further processing is possible.
Example: Critical to application availability, results, functionality, performance or usability.

Severity Level 2
Description: Unable to proceed with selected function or dependants.
Example: Application sub-system available, key component unavailable or functionally incorrect and workaround is not available.

Severity Level 3
Description: Restricted function capability, however, processing can continue.
Example: Non-critical component unavailable or functionally incorrect; incorrect calculation results in functionally critical key fields/dates and workaround is available.

Severity Level 4
Description: Minor cosmetic change.
Example: Usability errors; screen or report errors that do not materially affect quality and correctness of function, intended use or results.
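
For teams that record defects in a tracking tool, the severity scale above might be captured as an enumeration, as sketched below. The names and the escalation rule are illustrative assumptions, not a mandated implementation.

```python
from enum import IntEnum


class Severity(IntEnum):
    """Defect severity levels as defined in the table above."""
    SYSTEM_FAILURE = 1        # no further processing is possible
    FUNCTION_BLOCKED = 2      # selected function unusable, no workaround
    FUNCTION_RESTRICTED = 3   # processing can continue, workaround available
    COSMETIC = 4              # minor cosmetic change


def needs_immediate_escalation(severity):
    """Illustrative rule: severity 1 and 2 defects block further test progress."""
    return severity <= Severity.FUNCTION_BLOCKED


print(needs_immediate_escalation(Severity.FUNCTION_BLOCKED))   # True
print(needs_immediate_escalation(Severity.COSMETIC))           # False
```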

HE1: Handle Issue


The purpose of this procedure is to monitor the resolution of issues requiring the attention of project management.
HE6: Handle Controversial Defect
The purpose of this procedure is to resolve findings for which immediate agreement about the impact and the
corrective actions cannot be reached. This procedure is performed when a deliverable is said to have a
“defect”, even though the deliverable could be viewed as meeting a plausible interpretation of the

requirements or specifications. The issue with respect to the deliverable may have been uncovered either
before or after a delivery. At this point, the controversial defect is dealt with in order to determine which part
of it will be considered to be a non-conformity and which part will be considered to be a change request.
Then, it will be necessary to ensure that modifications to the plans are made in order to allow the deliverable
in question to be completed. This procedure can be applied to handle a single, potentially significant
controversial defect or a number of smaller controversial defects.

14.2 Change Management Procedures


Read section 8.7 “Change Management” of the technique paper “CAD Full Lifecycle Testing
Concepts” for a description of what is required.
HE5: Handle Change Request
The purpose of this procedure is to resolve a change request. This procedure is performed when a change to
an aspect of the project that is under change control is requested. The request may come from the sponsor,
the delivery organization, or a supplier. It may deal with a single major change or a number of small changes.
It may address the schedule, resources, solution, or any aspect of the project. Handling the change request
includes deciding whether it should be analyzed further. If further analysis is launched, handling the change
request involves revising base-lined elements and dependent documents (which may include the Agreement) along with communicating the change decisions to the team.

14.3 Progress Tracking Procedures


Read section 8.5 “Reporting” of the technique paper “CAD Full Lifecycle Testing Concepts” for a
description of what is required.
M1: Track and Control Progress
The purpose of the procedure is to keep project progress consistent with the plan. This procedure is
performed on a short-term, periodic basis (weekly, semi-monthly, or as required by the project) in order to
control project progress and risks. Project progress and risks are tracked against the plan. Then decisions are made to ensure progress stays within the plan or to update the plan.

14.4 Configuration Management


Identify and describe the purpose of the configuration management procedures.
Read section 8.8 “Configuration Management (Version Control)” of the technique paper “CAD Full
Lifecycle Testing Concepts” for a description of what is required.
As development progresses, the interim work products (e.g., design, modules, documentation, and test
packages) go through levels of integration and quality. Testing is the process of removing the defects of
components under development at one testing level and then promoting the new defect-free work product to
the next test level. A configuration manager assures the quality of the interim work products before they are
promoted. Update access to the work product development libraries is denied to authors. Control is
enforced by a configuration management procedure. These controls are key to avoiding rework and to preventing defective work from slipping through uncontrolled.

15. Approvals
This chapter defines the sign-off requirements at major milestones and checkpoints as well as acceptance
criteria.
Define the sign-off requirements and acceptance criteria for each major milestone and checkpoint.

16. Appendix A – Glossary of Testing Terms
This appendix defines the terms introduced in this document and used in the subsequent test documents.
This glossary defines the testing terms used in this test strategy and the subsequent test documents. These
definitions originate from many different industry standards and sources, such as the British Standards
Institute (BSI), the Institute of Electrical and Electronics Engineers (IEEE), as well as from IBM.
Term Definition
Acceptance Criteria The definition of the results expected from the test cases used for
acceptance testing. The product must meet these criteria before
implementation can be approved.
Acceptance Testing (1) Formal testing conducted to determine whether or not a system satisfies
its acceptance criteria and to enable the client to determine whether or not to
accept the system.
(2) Formal testing conducted to enable a user, client, or other authorized
entity to determine whether to accept a system or component.
Acceptance Test Plan Describes the steps the client will use to verify that the constructed system
meets the acceptance criteria. It defines the approach to be taken for
acceptance testing activities. The plan identifies the items to be tested, the
test objectives, the acceptance criteria, the testing to be performed, test
schedules, entry/exit criteria, staff requirements, reporting requirements,
evaluation criteria, and any risks requiring contingency planning.
Ad-hoc Testing A loosely structured testing approach that allows test developers to be
creative in their test selection and execution. Ad-hoc testing is targeted at
known or suspected problem areas.
Audit and Controls Testing A functional type of test that verifies the adequacy and effectiveness of controls and completeness of data processing results.
Auditability A test focus area defined as the ability to provide supporting evidence to
trace processing of data.
Backup and Recovery Testing A structural type of test that verifies the capability of the application to be restarted after a failure.
Black Box Testing Evaluation techniques that are executed without knowledge of the program’s
implementation. The tests are based on an analysis of the specification of
the component without reference to its internal workings.
Bottom-up Testing Approach to integration testing where the lowest level components are tested
first then used to facilitate the testing of higher level components. This
process is repeated until the component at the top of the hierarchy is tested.
See "Top-down."
Boundary Value Analysis A test case selection technique that selects test data that lie along "boundaries" or extremes of input and output possibilities. Boundary Value Analysis can apply to parameters, classes, data structures, variables, loops, etc.
Branch Testing A white box testing technique that requires each branch or decision point to
be taken once.
Build (1) An operational version of a system or component that incorporates a
specified subset of the capabilities that the final product will provide. Builds
are defined whenever the complete system cannot be developed and
delivered in a single increment.
(2) A collection of programs within a system that are functionally independent.
A build can be tested as a unit and can be installed independent of the rest of
the system.

Business Function A set of related activities that comprise a stand-alone unit of business. It may
be defined as a process that results in the achievement of a business
objective. It is characterized by well-defined start and finish activities and a
workflow or pattern.
Capability Maturity Model (CMM) A model of the stages through which software organizations progress as they define, implement, evolve, and improve their software process. This model provides a guide for selecting process improvement strategies by determining current process capabilities and identifying the issues most critical to software quality and process improvement. This concept was developed by the Software Engineering Institute (SEI) at Carnegie Mellon University.
Causal Analysis The evaluation of the cause of major errors, to determine actions that will
prevent reoccurrence of similar errors.
Change Control The process by which a change is proposed, evaluated, approved or rejected, scheduled, and tracked.
Change Management A process methodology to identify the configuration of a release and to manage all changes through change control, data recording, and updating of baselines.
Change Request A documented proposal for a change of one or more work items or work item
parts.
Condition Testing A white box test method that requires all decision conditions be executed
once for true and once for false.
Configuration Management (1) The process of identifying and defining the configuration items in a system, controlling the release and change of these items throughout the system life cycle, recording and reporting the status of configuration items and change requests, and verifying the completeness and correctness of configuration items.
(2) A discipline applying technical and administrative direction and surveillance to (a) identify and document the functional and physical characteristics of a configuration item, (b) control changes to those characteristics, and (c) record and report change processing and implementation status.
Conversion testing A functional type of test that verifies the compatibility of converted programs,
data and procedures with the “old” ones that are being converted or replaced.
Coverage The extent to which test data tests a program’s functions, parameters, inputs,
paths, branches, statements, conditions, modules or data flow paths.
Coverage Matrix Documentation procedure to indicate the testing coverage of test cases
compared to possible elements of a program environment (i.e., inputs,
outputs, parameters, paths, cause-effects, equivalence partitioning, etc.)
Continuity of Processing A test focus area defined as the ability to continue processing if problems occur. Included is the ability to back up and recover after a failure.
Correctness A test focus area defined as the ability to process data according to
prescribed rules. Controls over transactions and data field edits provide an
assurance on accuracy and completeness of data.
Data flow Testing Testing in which test cases are designed based on variable usage within the
code.
Debugging The process of locating, analyzing, and correcting suspected faults. Compare
with testing.
Decision Coverage Percentage of decision outcomes that have been exercised through (white
box) testing.
Defect A variance from expectations. See also Fault.
Defect Management A set of processes to manage the tracking and fixing of defects found during testing and to perform causal analysis.
Documentation and Procedures Testing A functional type of test that verifies that the interface between the system and the people works and is usable. It also verifies that the instruction guides are helpful and accurate.
Design Review (1) A formal meeting at which the preliminary or detailed design of a system
is presented to the user, customer or other interested parties for comment
and approval.
(2) The formal review of an existing or proposed design for the purpose of
detection and remedy of design deficiencies that could affect fitness-for-use
and environmental aspects of the product, process or service, and/or for
identification of potential improvements of performance, safety and economic
aspects.
Desk Check Testing of software by the manual simulation of its execution. It is one of the
static testing techniques.
Detailed Test Plan The detailed plan for a specific level of dynamic testing. It defines what is to
be tested and how it is to be tested. The plan typically identifies the items to
be tested, the test objectives, the testing to be performed, test schedules,
personnel requirements, reporting requirements, evaluation criteria, and any
risks requiring contingency planning. It also includes the testing tools and
techniques, test environment set up, entry and exit criteria, and
administrative procedures and controls.
Driver A program that exercises a system or system component by simulating the
activity of a higher level component.
Dynamic Testing Testing that is carried out by executing the code. Dynamic testing is a
process of validation by exercising a work product and observing the
behavior of its logic and its response to inputs.
Entry Criteria A checklist of activities or work items that must be complete or exist,
respectively, before the start of a given task within an activity or sub-activity.
Environment See Test Environment.
Equivalence Partitioning Portion of the component’s input or output domains for which the component’s behavior is assumed to be the same from the component’s specification.
Error (1) A discrepancy between a computed, observed or measured value or
condition and the true specified or theoretically correct value or condition.
(2) A human action that results in software containing a fault. This includes
omissions or misinterpretations, etc. See Variance.
Error Guessing A test case selection process that identifies test cases based on the
knowledge and ability of the individual to anticipate probable errors.
Error Handling Testing A functional type of test that verifies the system function for detecting and responding to exception conditions. Completeness of error handling determines the usability of a system and ensures that incorrect transactions are properly handled.
Execution Procedure A sequence of manual or automated steps required to carry out part or all of
a test design or execute a set of test cases.
Exit Criteria (1) Actions that must happen before an activity is considered complete.
(2) A checklist of activities or work items that must be complete or exist,
respectively, prior to the end of a given process stage, activity, or sub-
activity.
Expected Results Predicted output data and file conditions associated with a particular test
case. Expected results, if achieved, will indicate whether the test was
successful or not. Generated and documented with the test case prior to
execution of the test.

Fault (1) An accidental condition that causes a functional unit to fail to perform its
required functions.
(2) A manifestation of an error in software. A fault if encountered may cause
a failure. Synonymous with bug.
Full Lifecycle Testing The process of verifying the consistency, completeness, and correctness of
software and related work products (such as documents and processes) at
each stage of the development life cycle.
Function (1) A specific purpose of an entity or its characteristic action.
(2) A set of related control statements that perform a related operation.
Functions are sub-units of modules.
Function Testing A functional type of test which verifies that each business function operates according to the detailed requirements, the external and internal design specifications.
Functional Testing Selecting and executing test cases based on specified function requirements
without knowledge or regard of the program structure. Also known as black
box testing. See "Black Box Testing."
Functional Test Types Those kinds of tests used to assure that the system meets the business requirements, including business functions, interfaces, usability, audit & controls, error handling, etc. See also Structural Test Types.
Implementation (1) A realization of an abstraction in more concrete terms; in particular, in
terms of hardware, software, or both.
(2) The process by which software release is installed in production and
made available to end users.
Inspection (1) A group review quality improvement process for written material,
consisting of two aspects: product (document itself) improvement and
process improvement (of both document production and inspection).
(2) A formal evaluation technique in which software requirements, design, or
code are examined in detail by a person or group other than the author to
detect faults, violations of development standards, and other problems.
Contrast with walk-through.
Installation Testing A functional type of test which verifies that the hardware, software and
applications can be easily installed and run in the target environment.
Integration Testing A level of dynamic testing which verifies the proper execution of application
components and does not require that the application under test interface with
other applications.
Interface / Inter-system Testing A functional type of test which verifies that the interconnection between applications and systems functions correctly.
JAD An acronym for Joint Application Design. Formal session(s) involving clients
and developers used to develop and document consensus on work products,
such as client requirements, design specifications, etc.
Level of Testing Refers to the progression of software testing through static and dynamic
testing.
Examples of static testing levels are Project Objectives Review,
Requirements Walkthrough, Design (External and Internal) Review, and
Code Inspection.
Examples of dynamic testing levels are: Unit Testing, Integration Testing,
System Testing, Acceptance Testing, Systems Integration Testing and
Operability Testing.
Also known as a test level.
Lifecycle The software development process stages: Requirements, Design, Construction (Code/Program, Test), and Implementation.

Logical Path A path that begins at an entry or decision statement and ends at a decision
statement or exit.
Maintainability A test focus area defined as the ability to locate and fix an error in the
system. Can also be the ability to make dynamic changes to the system
environment without making system changes.
Master Test Plan A plan that addresses testing from a high-level system viewpoint. It ties
together all levels of testing (unit test, integration test, system test,
acceptance test, systems integration, and operability). It includes test
objectives, test team organization and responsibilities, high-level schedule,
test scope, test focus, test levels and types, test facility requirements, and
test management procedures and controls.
Operability A test focus area defined as the effort required (of support personnel) to learn
and operate a manual or automated system. Contrast with Usability.
Operability Testing A level of dynamic testing in which the operations of the system are validated
in the real or closely simulated production environment. This includes
verification of production JCL, installation procedures and operations
procedures. Operability Testing considers such factors as performance,
resource consumption, adherence to standards, etc. Operability Testing is
normally performed by Operations to assess the readiness of the system for
implementation in the production environment.
Operational Testing A structural type of test that verifies the ability of the application to operate at
an acceptable level of service in the production-like environment.
Parallel Testing A functional type of test which verifies that the same input on “old” and “new” systems produces the same results. It is more of an implementation than a testing strategy.
Path Testing A white box testing technique that requires all code or logic paths to be
executed once. Complete path testing is usually impractical and often
uneconomical.
Performance A test focus area defined as the ability of the system to perform certain
functions within a prescribed time.
Performance Testing A structural type of test which verifies that the application meets the expected
level of performance in a production-like environment.
Portability A test focus area defined as ability for a system to operate in multiple
operating environments.
Problem (1) A call or report from a user. The call or report may or may not be defect
oriented.
(2) A software or process deficiency found during development.
(3) The inhibitors and other factors that hinder an organization’s ability to
achieve its goals and critical success factors.
(4) An issue that a project manager has the authority to resolve without
escalation. Compare to ‘defect’ or ‘error’.
Quality Plan A document which describes the organization, activities, and project factors
that have been put in place to achieve the target level of quality for all work
products in the application domain. It defines the approach to be taken when
planning and tracking the quality of the application development work
products to ensure conformance to specified requirements and to ensure the
client’s expectations are met.
Regression Testing A functional type of test, which verifies that changes to one part of the system
have not caused unintended adverse effects to other parts.
Reliability A test focus area defined as the extent to which the system will provide the
intended function without failing.
Requirement (1) A condition or capability needed by the user to solve a problem or achieve an objective.
(2) A condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. The set of all requirements forms the basis for subsequent development of the system or system component.
Review A process or meeting during which a work product, or set of work products, is
presented to project personnel, managers, users or other interested parties
for comment or approval.
Root Cause Analysis See Causal Analysis.
Scaffolding Temporary programs may be needed to create or receive data from the
specific program under test. This approach is called scaffolding.
Security A test focus area defined as the assurance that the system/data resources
will be protected against accidental and/or intentional modification or misuse.
Security Testing A structural type of test which verifies that the application provides an
adequate level of protection for confidential information and data belonging to
other systems.
Software Quality (1) The totality of features and characteristics of a software product that bear
on its ability to satisfy given needs; for example, conform to specifications.
(2) The degree to which software possesses a desired combination of
attributes.
(3) The degree to which a customer or user perceives that software meets his or her composite expectations.
(4) The composite characteristics of software that determine the degree to which the software in use will meet the expectations of the customer.
Software Reliability (1) The probability that software will not cause the failure of a system for a
specified time under specified conditions. The probability is a function of the
inputs to and use of the system as well as a function of the existence of faults
in the software. The inputs to the system determine whether existing faults, if
any, are encountered.
(2) The ability of a program to perform a required function under stated
conditions for a stated period of time.
Statement Testing A white box testing technique that requires all code or logic statements to be
executed at least once.
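For contrast with path testing, an illustration with assumed names: two calls execute every statement of the function below while still leaving one path unexercised.

    def clamp(value, low, high):
        if value < low:
            value = low      # executed only when value < low
        if value > high:
            value = high     # executed only when value > high
        return value

    # clamp(-5, 0, 10) and clamp(50, 0, 10) together execute every statement,
    # yet the path in which neither condition is true is never taken.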
Static Testing (1) The detailed examination of a work product's characteristics against an
expected set of attributes, experiences, and standards. The product under
scrutiny is static and not exercised; therefore its behavior in response to
changing inputs and environments cannot be assessed.
(2) The process of evaluating a program without executing the program. See
also desk checking, inspection, walk-through.
Stress / Volume A structural type of test that verifies that the application has acceptable
Testing performance characteristics under peak load conditions.
Structural Function Structural functions describe the technical attributes of a system.
Structural Test Types Those kinds of tests that may be used to assure that the system is technically
sound.
Stub (1) A dummy program element or module used during the development and
testing of a higher-level element or module.
(2) A program statement substituting for the body of a program unit and
indicating that the unit is or will be defined elsewhere.
The inverse of Scaffolding.
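A minimal sketch with assumed names: the stub below returns canned results in place of a lower-level module that is defined elsewhere, allowing the higher-level module to be tested before the real module exists.

    def authorize_payment_stub(account_id, amount):
        # Stub: canned approval until the real authorization module is available.
        return {"approved": True, "reference": "STUB-0001"}

    def place_order(account_id, amount, authorize=authorize_payment_stub):
        # Higher-level module under test, wired to the stub for now.
        result = authorize(account_id, amount)
        return "confirmed" if result["approved"] else "rejected"

    assert place_order("ACC-1", 25.00) == "confirmed"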
Sub-system (1) A group of assemblies or components or both combined to perform a
single function.
(2) A group of functionally related components that are defined as elements
of a system but not separately packaged.
System A collection of components organized to accomplish a specific function or set
of functions.
Systems Integration A dynamic level of testing which ensures that the systems integration
Testing activities appropriately address the integration of application subsystems,
integration of applications with the infrastructure, and impact of change on
the current live environment.
System Testing A dynamic level of testing in which all the components that comprise a
system are tested to verify that the system functions together as a whole.
Test Bed (1) A test environment containing the hardware, instrumentation tools,
simulators, and other support software necessary for testing a system or
system component.
(2) A set of test files (including databases and reference files), in a known
state, used with input test data to test one or more test conditions, measuring
against expected results.
Test Case (1) A set of test inputs, execution conditions, and expected results developed
for a particular objective, such as to exercise a particular program path or to
verify compliance with a specific requirement.
(2) The detailed objectives, data, procedures and expected results to conduct
a test or part of a test.
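Illustration only (the function, values, and expected result are assumptions): a test case expressed with Python's unittest pairs specific inputs and execution conditions with an expected result.

    import unittest

    def net_price(gross, tax_rate):
        # Simplified stand-in for the unit under test.
        return round(gross * (1 + tax_rate), 2)

    class NetPriceTestCase(unittest.TestCase):
        def test_standard_tax_rate(self):
            # Inputs, execution condition, and expected result form one test case.
            self.assertEqual(net_price(100.00, 0.20), 120.00)

    if __name__ == "__main__":
        unittest.main()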
Test Condition A functional or structural attribute of an application, system, network, or
component thereof to be tested.
Test Conditions A worksheet used to formulate the test conditions that, if met, will produce
Matrix the expected result. It is a tool used to assist in the design of test cases.
Test Conditions A worksheet that is used for planning and for illustrating that all test
Coverage Matrix conditions are covered by one or more test cases. Each test set has a Test
Conditions Coverage Matrix. Rows are used to list the test conditions and
columns are used to list all test cases in the test set.
Test Coverage Matrix A worksheet used to plan and cross check to ensure all requirements and
functions are covered adequately by test cases.
Test Data The input data and file conditions associated with a specific test case.
Test Environment The external conditions or factors that can directly or indirectly influence the
execution and results of a test. This includes the physical as well as the
operational environments. Examples of what is included in a test
environment are: I/O and storage devices, data files, programs, JCL,
communication lines, access control and security, databases, reference
tables and files (version controlled), etc.
Test Focus Areas Those attributes of an application that must be tested in order to assure that
the business and structural requirements are satisfied.
Test Level See Level of Testing.
Test Log A chronological record of all relevant details of a testing activity.
Test Matrices A collection of tables and matrices used to relate functions to be tested with
the test cases that do so. Worksheets used to assist in the design and
verification of test cases.
Test Objectives The tangible goals for assuring that the Test Focus areas previously selected
as being relevant to a particular Business or Structural Function are being
validated by the test.
Test Plan A document prescribing the approach to be taken for intended testing
activities. The plan typically identifies the items to be tested, the test
objectives, the testing to be performed, test schedules, entry / exit criteria,
personnel requirements, reporting requirements, evaluation criteria, and any
risks requiring contingency planning.
Test Procedure Detailed instructions for the setup, operation, and evaluation of results for a
given test. A set of associated procedures is often combined to form a test
procedures document.
Test Report A document describing the conduct and results of the testing carried out for a
system or system component.
Test Run A dated, time-stamped execution of a set of test cases.
Test Scenario A high-level description of how a given business or technical requirement will
be tested, including the expected outcome; later decomposed into sets of test
conditions, each in turn, containing test cases.
Test Script A sequence of actions that executes a test case. Test scripts include
detailed instructions for set up, execution, and evaluation of results for a
given test case.
Test Set A collection of test conditions. Test sets are created for purposes of test
execution only. A test set is created such that its size is manageable to run
and its grouping of test conditions facilitates testing. The grouping reflects
the application build strategy.
Test Sets Matrix A worksheet that relates the test conditions to the test set in which the
condition is to be tested. Rows list the test conditions and columns list the
test sets. A checkmark in a cell indicates the test set will be used for the
corresponding test condition.
Test Specification A set of documents that define and describe the actual test architecture,
elements, approach, data and expected results. Test Specification uses the
various functional and non-functional requirement documents along with the
quality and test plans. It provides the complete set of test cases and all
supporting detail to achieve the objectives documented in the detailed test
plan.
Test Strategy A high level description of major system-wide activities which collectively
achieve the overall desired result as expressed by the testing objectives,
given the constraints of time and money and the target level of quality. It
outlines the approach to be used to ensure that the critical attributes of the
system are tested adequately.
Test Type See Type of Testing.
Testability (1) The extent to which software facilitates both the establishment of test
criteria and the evaluation of the software with respect to those criteria.
(2) The extent to which the definition of requirements facilitates analysis of
the requirements to establish test criteria.
Testing The process of exercising or evaluating a program, product, or system, by
manual or automated means, to verify that it satisfies specified requirements
and to identify differences between expected and actual results.
Testware The elements that are produced as part of the testing process. Testware
includes plans, designs, test cases, test logs, test reports, etc.
Top-down Approach to integration testing where the component at the top of the
component hierarchy is tested first, with lower level components being
simulated by stubs. Tested components are then used to test lower level
components. The process is repeated until the lowest level components
have been tested.
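A minimal sketch with assumed names: the top-level component is tested first against a stub; the real lower-level component later replaces the stub and the tests are repeated.

    def fetch_rate_stub(currency):
        # Lower-level component simulated by a stub during top-down integration.
        return 1.0

    def convert(amount, currency, fetch_rate=fetch_rate_stub):
        # Top-level component tested first, against the stub.
        return amount * fetch_rate(currency)

    assert convert(50, "EUR") == 50.0
    # When the real rate-lookup component is ready, it replaces fetch_rate_stub
    # and the same tests are repeated against the integrated pair.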
Transaction Flow A functional type of test that verifies the proper and complete processing of a
Testing transaction from the time it enters the system to the time of its completion or
exit from the system.
Type of Testing Tests a functional or structural attribute of the system. E.g., Error Handling,
Usability. (Also known as test type.)
Unit Testing The first level of dynamic testing: the verification of new or changed code
in a module to determine whether all new or modified paths function correctly.
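Illustration only, with assumed names: a unit test exercises each new or modified path of a single module in isolation.

    def grade(score):
        # Changed module under test: both of its paths must be exercised.
        return "pass" if score >= 50 else "fail"

    def test_pass_path():
        assert grade(75) == "pass"

    def test_fail_path():
        assert grade(30) == "fail"

    test_pass_path()
    test_fail_path()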
Usability A test focus area defined as the end-user effort required to learn and use the
system. Contrast with Operability.
Usability Testing A functional type of test which verifies that the final product is user-friendly
and easy to use.
User Acceptance See Acceptance Testing.
Testing
Validation (1) The act of demonstrating that a work item is in compliance with the original
requirement. For example, the code of a module would be validated against
the input requirements it is intended to implement. Validation answers the
question "Is the right system being built?”
(2) Confirmation by examination and provision of objective evidence that the
particular requirements for a specific intended use have been fulfilled. See
"Verification".
Variance A mismatch between the actual and expected results occurring in testing. It
may result from errors in the item being tested, incorrect expected results,
invalid test data, etc. See "Error".
Verification (1) The act of demonstrating that a work item is satisfactory by using its
predecessor work item. For example, code is verified against module level
design. Verification answers the question "Is the system being built right?"
(2) Confirmation by examination and provision of objective evidence that
specified requirements have been fulfilled. See "Validation."
Walkthrough A review technique characterized by the author of the object under review
guiding the progression of the review. Observations made in the review are
documented and addressed. A walkthrough is a less formal evaluation
technique than an inspection.
White Box Testing Evaluation techniques that are executed with the knowledge of the
implementation of the program. The objective of white box testing is to test
the program's statements, code paths, conditions, or data flow paths.
Work Item A software development lifecycle work product.