
Test Management

The document discusses test management and outlines key activities in test planning, documentation, estimation, monitoring, control, and risk management. It describes defining test levels and approach, test documentation standards, estimation techniques, common metrics for tracking test progress, reporting test results, and configuration management for releases. It also covers identifying project risks and handling them through mitigation, contingency planning, transfer, or acceptance.


TEST MANAGEMENT
TEST PLANNING ACTIVITIES
 Define test approach, test levels
 Integrate, coordinate testing into life cycle
 Decide who, what, when and how of testing
 Assign resources for test tasks
 Define test documentation
 Set the level of detail for test cases and procedures so that they provide enough information to support reproducible test preparation and execution
 Select test monitoring, controlling and reporting metrics, charts, and reports
IEEE 829 TEST PLAN OUTLINE
 Test plan identifier
 Introduction
 Test items
 Features to be tested
 Features not to be tested
 Approach
 Item pass/fail criteria
 Test criteria (entry/exit, suspension and resumption)
 Test deliverables
 Test tasks
 Environmental needs
 Responsibilities
 Staffing and training needed
 Schedule
 Risk and contingencies
 Approvals
TRANSITIONS: ENTRY CRITERIA
 Entry criteria measure whether the system is ready for a particular test phase
 Deliverables (test objects, test items) ready and testable?
 Lab (including test cases, data, test environment and test tools) ready?
 Teams (developer, tester, others) ready?

 These tend to become increasingly rigorous as the phases proceed
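The readiness checks above amount to a simple gate: every item must pass before the phase begins. A minimal sketch (the check names and values here are invented for illustration):

```python
# Illustrative entry-criteria gate: every readiness check must pass before
# the test phase starts. Check names and values are assumptions.
def entry_criteria_met(checks):
    """Return (ready, failed_checks) for a dict of name -> bool checks."""
    failed = [name for name, ok in checks.items() if not ok]
    return (not failed, failed)

system_test_entry = {
    "deliverables ready and testable": True,
    "lab ready (cases, data, environment, tools)": True,
    "teams ready": False,
}
ready, blockers = entry_criteria_met(system_test_entry)
# ready is False; blockers == ["teams ready"]
```

For a later, more rigorous phase, the same gate would simply carry more (and stricter) checks.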


TRANSITIONS: CONTINUATION CRITERIA
 Continuation criteria measure whether testing can efficiently, effectively proceed
 Test environment problems
 Test blocking bugs in system under test

 “Continuation criteria” is a polite way of saying “stopping criteria” in reverse


 Stopping a test phase is seldom popular
TRANSITIONS: EXIT CRITERIA
 Exit criteria measure whether the test phase can be deemed complete
 Thoroughness measures, such as code coverage, functionality or risk
 Estimates of defect density or reliability measures
 Cost
 Residual risks, such as defects not fixed or lack of test coverage in certain areas
 Schedules such as those based on time to market

 Remember that these are business decisions
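Because exit criteria are business decisions, they work best as explicit, measurable thresholds. A sketch of such a check; the metric names and threshold values are invented examples, not figures from the slides:

```python
# Illustrative exit-criteria check. Thresholds are invented example values;
# in practice they are business decisions recorded in the test plan.
def exit_criteria_met(metrics, thresholds):
    """Return (done, unmet) comparing measures against agreed thresholds."""
    unmet = []
    if metrics["code_coverage_pct"] < thresholds["min_code_coverage_pct"]:
        unmet.append("code coverage below target")
    if metrics["open_critical_defects"] > thresholds["max_open_critical"]:
        unmet.append("too many open critical defects")
    if metrics["cost"] > thresholds["budget"]:
        unmet.append("over budget")
    return (not unmet, unmet)

done, unmet = exit_criteria_met(
    {"code_coverage_pct": 87, "open_critical_defects": 2, "cost": 90},
    {"min_code_coverage_pct": 80, "max_open_critical": 0, "budget": 100},
)
# done is False; unmet == ["too many open critical defects"]
```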


DEVELOPING A WORK BREAKDOWN STRUCTURE (WBS)
 What are the major stages of a testing subproject?
 Planning
 Staffing
 Test environment acquisition and configuration
 Test development
 Test execution

 Break down to discrete tasks within each stage


 What test suites are required for the critical risks?
 E.g. functionality, performance, error handling

 For each suite, what tasks are required?

 Tasks should be short in duration (e.g. a few days)


 Define milestones
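The stage/task breakdown above can be represented as a small nested structure; stage names follow the slide, while the task names and day counts are invented for illustration:

```python
# Minimal WBS sketch: stages broken down into short, discrete tasks.
# Stage names follow the slide; task names and durations are invented.
wbs = {
    "Planning": [("Draft test plan", 3)],
    "Test development": [("Write functionality suite", 4),
                         ("Write performance suite", 5)],
    "Test execution": [("Run functionality suite", 2),
                       ("Run performance suite", 2)],
}

def total_days(wbs):
    """Sum task durations across all stages."""
    return sum(days for tasks in wbs.values() for _, days in tasks)

def too_long(wbs, max_days=5):
    """Tasks longer than a few days should be broken down further."""
    return [name for tasks in wbs.values()
            for name, days in tasks if days > max_days]
```

`total_days` gives the bottom-up effort figure used in the estimation approach on the next slide, and `too_long` flags tasks that violate the "short duration" rule.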
ESTIMATION
 There are two general approaches for estimation
 Estimating individual tasks by the owner of these tasks or by experts (bottom up via WBS)
 Estimating the testing effort based on metrics of former or similar projects, or based on typical values (top-down via parametric models, rules of thumb, etc.)
 While both are useful, drawing upon team wisdom to create a WBS, then applying models and
rules of thumb to check and adjust the estimate, is usually more accurate
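That cross-check can be sketched as comparing the bottom-up WBS total against a parametric figure. The 0.4 test-to-development effort ratio and the 25% tolerance below are assumed example values, not figures from the slides:

```python
# Cross-check a bottom-up WBS total against a top-down rule of thumb.
# The 0.4 effort ratio and 25% tolerance are assumed example values.
def cross_check(bottom_up_days, dev_effort_days, ratio=0.4, tolerance=0.25):
    """Return the parametric estimate and whether the WBS total agrees."""
    top_down = dev_effort_days * ratio
    deviation = abs(bottom_up_days - top_down) / top_down
    return top_down, deviation <= tolerance

top_down, agrees = cross_check(bottom_up_days=45, dev_effort_days=100)
# top_down == 40.0; 45 deviates by 12.5%, within tolerance, so agrees is True
```

A large disagreement between the two figures is a prompt to revisit the WBS or the model assumptions, not to silently average them.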
FACTORS TO CONSIDER IN TEST ESTIMATION
 Test estimation is complex, influenced by:
 Process factor - pervasive testing, change control, process maturity, lifecycle, development and test
processes, earlier test phases, (estimated vs actual) bug level and fixing
 Material factor: tools, test system, test and debugging environment, project documentation, similarity
 People factor: skills, expectation, support and relationship
 Delaying factor: complexity, many stakeholders, too much newness, geographical distribution, need
for detailed test documentation, tricky logistics and fragile test data
 Understand estimation techniques and these factors
 Deviation from the test estimates can arise from outside factors and events before or during testing
TEST PROGRESS MONITORING AND CONTROL
COMMON TEST METRICS
 % completion of test case preparation
 % completion of preparing the test environment
 Test case execution
 no. of test cases run/ not run
 no. of test cases passed / failed

 Defect information
 Defect density
 Defect found and fixed
 Failure rate
 Retest results

 Coverage of requirements, risks or code by tests


 Level of confidence of tester in the product
 Dates of test milestones
 Testing costs
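Several of these metrics are simple ratios over raw counts. A sketch, with all numbers invented for illustration:

```python
# Compute a few of the common test metrics above from raw counts.
# All numbers are invented for illustration.
def execution_metrics(total, run, passed):
    """Percentage of test cases run, and pass rate among those run."""
    return {
        "run_pct": 100.0 * run / total,
        "pass_pct": 100.0 * passed / run if run else 0.0,
    }

def defect_density(defects_found, ksloc):
    """Defects per thousand lines of code."""
    return defects_found / ksloc

m = execution_metrics(total=200, run=150, passed=120)
# m == {"run_pct": 75.0, "pass_pct": 80.0}
d = defect_density(defects_found=36, ksloc=12)   # 3.0 defects per KSLOC
```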
TEST REPORTING
 Summarize / analyse the test results
 Key events (meeting exit criteria)
 Analysis (for recommendations, guidance) of:
 Remaining defects
 Cost/benefit of more testing
 Outstanding risks
 Level of confidence
 Metrics gathered to assess:
 Test objectives adequacy for test level
 Test approaches adequacy
 Testing effectiveness per objectives
TEST CONTROL
 Guiding and corrective actions, based on test information and metrics, which may affect testing activities - or other software life cycle activities
 Examples of test control actions:
 Risk triggered test re-prioritizing
 Test schedule adjustments due to availability of a test environment
 Set an entry criterion requiring retesting of bug fixes by developers before integration into a build
CONFIGURATION MANAGEMENT
TESTING AND CONFIGURATION MANAGEMENT
 Configuration management establishes and maintains the integrity of the items that make up the software or system throughout the project and product life cycle
 For testing, configuration management:
 Allows management of testware and results
 Ensures every item in test can be related back to known system components
 Supports delivery of a test release into test lab

 During project and test planning, the configuration management procedures and infrastructure
(tools) should be chosen, documented, and implemented so that no surprises occur at test
execution time
KEY TASKS OF CM
 Store and control access to the items that make up the system (source code, though it goes
beyond code)
 Identify and document the items under control
 Allow changes to controlled items through an orderly process (change control board)
 Report on changes pending, underway and complete
 Verify completeness of implementation
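The reporting task above can be sketched as a count of change records by state. Item names, versions and states here are assumptions for illustration:

```python
# Minimal sketch of the CM reporting task: controlled items with versions,
# and a report of changes pending, underway and complete.
# Item names, versions and states are invented examples.
from collections import Counter

changes = [
    {"item": "login.c",   "version": "1.5", "state": "pending"},
    {"item": "login.c",   "version": "1.6", "state": "underway"},
    {"item": "build.cfg", "version": "2.0", "state": "complete"},
]

def change_report(changes):
    """Count changes by state, as the CM reporting task requires."""
    return dict(Counter(c["state"] for c in changes))
# change_report(changes) == {"pending": 1, "underway": 1, "complete": 1}
```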
TEST RELEASE MANAGEMENT
 Release schedule (weekly, daily, hourly?)
 Update apply (process to install new build)
 Update un-apply (process to remove bad build)
 Build naming (revision level e.g. X.01.017)
 Interrogation (process to determine rev. level)
 Synchronizing with databases, other systems
 Roles and responsibilities for each step
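Interrogation and naming go together: if builds follow the X.01.017 style named above, the revision level can be parsed and compared mechanically. Reading "X" as a product code followed by major and build numbers is an interpretation for this sketch:

```python
# Parse and compare build names in the X.01.017 style shown on the slide,
# assuming "X" is a product code followed by major and build numbers
# (an interpretation for this sketch).
def parse_rev(name):
    product, major, build = name.split(".")
    return product, int(major), int(build)

def is_newer(a, b):
    """True if build `a` is a later revision than `b` of the same product."""
    pa, ma, ba = parse_rev(a)
    pb, mb, bb = parse_rev(b)
    if pa != pb:
        raise ValueError("revision levels compare only within one product")
    return (ma, ba) > (mb, bb)

parse_rev("X.01.017")             # -> ("X", 1, 17)
is_newer("X.01.017", "X.01.016")  # -> True
```

A comparison like this is what the un-apply step relies on: knowing exactly which build is installed before deciding to roll it back.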
IEEE 829 TEST ITEM TRANSMITTAL REPORT
 A test item transmittal report describes the items being delivered for testing, and includes the
following sections
 Test item transmittal identifier
 Transmitted items (including item names and revision numbers)
 Locations (where, what media, labelling)
 Status (bug fixed, changes introduced)
 Approvals

 More commonly used are release notes, which include some of this information, usually
informally
RISK AND TESTING
PROJECT RISKS
 Testing is also subject to risk
 A risk is the possibility of a negative outcome; for the testing effort, that includes events like late test releases, environment problems, etc.
 To discover risks to the testing effort, ask yourself and other stakeholders:
 What could go wrong on the project that would delay or invalidate your test plan and/or estimate?
 What kind of unacceptable testing outcomes do you worry about?
HANDLING PROJECT RISKS
 For each project risk, you have four options:
 Mitigation: reduce the likelihood or impact through preventive steps
 Contingency: have a plan in place to reduce the impact
 Transfer: get some other party to accept the consequences
 Ignore: do nothing about it (best if both likelihood and impact are low)

 Another typical risk management option, buying insurance, is usually not available
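Choosing among the options is often driven by simple likelihood/impact scoring. A sketch of such a triage; the 1-5 scales and thresholds are assumptions, and transfer is omitted because it depends on finding another party, not on the scores:

```python
# Illustrative triage of one project risk into a handling option.
# The 1-5 likelihood/impact scales and thresholds are assumptions;
# "transfer" is omitted since it depends on another party accepting
# the consequences, not on the risk scores.
def suggest_option(likelihood, impact, preventable=True):
    """Suggest mitigate / contingency / ignore for one risk."""
    if likelihood <= 2 and impact <= 2:
        return "ignore"       # low likelihood and impact: do nothing
    if preventable:
        return "mitigate"     # reduce likelihood or impact up front
    return "contingency"      # plan to reduce the impact if it occurs

suggest_option(1, 2)                     # -> "ignore"
suggest_option(4, 5)                     # -> "mitigate"
suggest_option(4, 5, preventable=False)  # -> "contingency"
```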
