Metrics & Measurement: TSID's Current State - Gaps
Assessment Evaluation by Cognizant
As per the Test Maturity Assessment conducted by Cognizant in August 2012, the Measurement & Analysis area is rated at the Inception level, i.e. it is completely people-dependent, with very minimal or no management and without consistent processes.
Maturity scale: Inception → Functional → Performing → Best in Class
Cognizant's Findings
Areas assessed: Review & Sign-off; Templates, Checklists, etc.; Risk Management; Knowledge Management; Requirement Change Management; Build/Release Management; Test Artifacts Configuration; Test Measurement Strategy.
The effectiveness and efficiency of testing are not measured adequately to initiate corrective and preventive actions.
Current State - Gaps
1. No metrics process for the testing team.
2. Non-availability of tools to capture and gather data for metrics analysis.
3. Only basic metrics captured at project level (such as Voice of Customer; Defects per Severity, Category and Priority; Defect Status).
4. No metrics at the team (TSID) level.
5. No benchmarks (industry or organization level) to compare and measure against.
6. Data analysis mechanism is missing, due to which improvements are not fed back into the system and processes.
7. No MIS/dashboard can be published to the stakeholders / senior management.
Business Benefits of Metrics in TSID
1. 360° view of process performance against the goals.
2. Data-driven process evaluation and decision making.
3. Proactive rather than reactive management.
4. Helps in achieving operational/process excellence through identification of areas for improvement.
5. Embeds predictability (into the estimation process and calibration).
6. Dashboard to senior management/stakeholders providing a performance view of the team and projects.
Current State: Domain-wise Metrics

Domains: Workflow; Siebel Sales & Marketing; Service Order Management; Oracle Order Management; SpeedBoat provisioning; Mediation; OSS Tools.

Metrics tracked (S. No. 1-16):
1. Voice of Customer
2. Test Execution Productivity
3. Cost of Quality
4. Automation ROI
5. Defect Seepage to UAT
6. Defect by Severity
7. Defect by Category
8. Schedule Variance
9. Test Design Productivity
10. Review Effectiveness
11. Requirement Traceability Index
12. Test Case Effectiveness
13. Defect Seepage to SIT
14. Defect Rejection Ratio
15. Defect Ageing by Severity
16. Test cases planned vs. executed

Legend: Not Implemented / Partially Implemented / Implemented (status recorded per metric per domain).

NOTE: Metrics captured in Billing and ERP are still awaited.
Metrics & Measurement: TSID's Desired State
Desired State
1. Develop an effective Metrics Process.
2. Establish an efficient and robust Metrics Model.
3. Tool changes to capture and gather data for metrics analysis.
4. Identification of metrics at both Project and TSID level.
5. Identification of benchmarks at industry and/or organization level.
6. Initiation of data capture and metrics analysis.
7. Continuous improvement cycle through metrics analysis.
8. Publish the MIS reports/dashboard for the stakeholders/senior management.

Maturity (Cognizant's proprietary Test Maturity Model Framework):
1. Inception - completely people-dependent, with very minimal or no test management and without consistent processes.
2. Functional - defined test processes, but less effective, with basic test management practices.
3. Performing - enterprise-wide testing framework with measurable processes, predictive capabilities and sharing of best practices.
4. Best-in-Class - advanced test management and industry-leading practices with continuous process improvement and innovation.
Desired State: Domain-wise Metrics

Domains: Workflow; Siebel Sales & Marketing; Service Order Management; Oracle Order Management; SpeedBoat provisioning; Mediation; OSS Tools.

Metrics targeted (S. No. 1-16):
1. Voice of Customer
2. Test Execution Productivity
3. Cost of Quality
4. Automation ROI (as applicable)
5. Defect Seepage to UAT
6. Defect by Severity
7. Defect by Category
8. Schedule Variance
9. Test Design Productivity
10. Review Effectiveness
11. Requirement Traceability Index
12. Test Case Effectiveness
13. Defect Seepage to SIT
14. Defect Rejection Ratio
15. Defect Ageing by Severity
16. Test cases planned vs. executed

Legend: Not Implemented / Partially Implemented / Implemented (status recorded per metric per domain).

NOTE: We can add more metrics as applicable.
TSID's Metrics Model - Approach
We will follow a phased approach.

PHASE 1 (Inception → Functional; months 1-3) - we are here currently:
1. Define Metrics Strategy/Process: time-box strategy, collection and analysis mechanism.
2. Identify metrics.
3. Identify initial benchmarks.
4. Identify data collection mechanism: changes in tools.
5. Identify roles and responsibilities.

PHASE 2 (Functional → Performing; months 4-6):
1. Collection of metrics.
2. Analysis of metrics: refine and improve strategy, if required.
3. Trend analysis.
4. Causal analysis and improvements fed into the system.

PHASE 3 (Performing → Best in Class; 6 months onwards, ongoing):
1. Define organization benchmarks.
2. Statistical analysis of metrics.
3. Introduce process changes to include metrics related to post-production efficiency.
4. Causal analysis and continuous improvement cycle.
Major Changes Needed - 1 of 2
S. No | Change Category | Change Description | Needed in Phase
1 | Change in Tool | PPM (ATLAS): for accurate time recording | Phase 1
2 | Change in Tool | Quality Centre: for defect model standardization | Phase 1
3 | Institutionalization of Process | Metrics Process: to define the Metrics Model and strategy | Phase 1
4 | Change in Process | Other relevant processes which impact metrics (estimation process, test execution process, defect management process, etc.) | Phase 1
5 | Introduction of Initial Benchmarks | Industry benchmarks to be taken to measure our performance against | Phase 1
Major Changes Needed - 2 of 2

S. No | Change Category | Change Description | Needed in Phase
6 | Introduction of New Role | Introduction of the Metrics Manager role: to perform metrics analysis at TSID level | Phase 2
7 | Introduction of Org. Level Benchmarks | To develop the performance benchmarks relevant to our organization (this will be done after we have 6+ months of data) | Phase 3
8 | Change in Metrics Analysis Method | Statistical analysis of metrics | Phase 3
9 | Change in Process | Introduction of metrics related to post-production efficiency | Phase 3
Appendix
TSID's Proposed Metrics Model
I4 Metrics Model - 1 of 2

The I4 cycle: Identify → Implement → Investigate → Improve.
I4 Metrics Model - 2 of 2

Identify: Define the metrics strategy; what to measure and how to measure; roles and responsibilities.
Implement: Capture metrics and measurements using tools.
Investigate: Analysis of the metrics; trend analysis; statistical analysis.
Improve: Causal analysis and improvement.
Types of Metrics

- TSID Level Metrics: Process Metrics and Product Metrics.
- Project Specific Metrics: Process Metrics and Product Metrics, covering Small Projects, Large Projects and Ongoing Enhancements.
Proposed Metrics - 1 of 6: TSID Level Metrics

PROCESS METRICS

Metric | KPI | Benefit of measuring this | System Changes Needed?
Voice of Customer | Average of individual ratings from the feedback form | Customer feedback on the testing services provided | NO
Test Execution Productivity | # of test cases executed per day per tester | Monitor the execution performance and thereby improve it | NO
Cost of Quality | [(Appraisal Effort + Prevention Effort + Failure Effort) / Total Project Effort] x 100 | Evaluate the cost expended in NOT creating a quality product or service | YES [effort tracking in PPM needs change]
Automation ROI | Total benefit derived from automation / Total cost of automation | Return on investment from automation | YES [effort tracking in PPM needs change]

NOTE: These are a few initial proposed metrics; the list will be refined.
Proposed Metrics - 2 of 6: TSID Level Metrics

PRODUCT METRICS

Metric | KPI | Benefit of measuring this | System Changes Needed?
Defect Seepage to UAT | # of defects seeped to UAT | Evaluate the number of defects that could not be contained in SIT | NO
Defect by Severity | # of defects per severity | Measure the benefit of detecting high-severity defects early, before they reach users | YES [Quality Centre to be streamlined for severity values]
Defect by Category | # of defects per category | Provides a snapshot of how many defects were detected per category | YES [Quality Centre to be streamlined for category values]

NOTE: These are a few initial proposed metrics; the list will be refined.
Proposed Metrics - 3 of 6: Project Level Metrics

PROCESS METRICS

Metric | KPI | Benefit of measuring this | System Changes Needed?
Schedule Variance | [(Actual End Date - Estimated End Date) / (Estimated End Date - Estimated Start Date)] x 100 | Monitor project performance vis-a-vis the plan | NO
Test Design Productivity | # of test cases created per day per tester | Monitor the test design performance and thereby improve it | NO
Test Execution Productivity | # of test cases executed per day per tester | Monitor the execution performance and thereby improve it | NO
Review Effectiveness | Internal vs. external review feedback on test cases | Evaluate the efficiency of finding defects during reviews | NO

NOTE: These are a few initial proposed metrics; the list will be refined.
Proposed Metrics - 4 of 6: Project Level Metrics

PROCESS METRICS

Metric | KPI | Benefit of measuring this | System Changes Needed?
Requirement Traceability Index | Requirements mapped to test scenarios/cases | Ensure that all testable requirements are mapped to test scenarios and test cases | NO
Test Case Effectiveness | ((Total Defects - Defects not mapped to test cases) / Total # of test cases) x 100 | Measures the share of defects mapped to test cases, to evaluate the test design process | NO

NOTE: These are a few initial proposed metrics; the list will be refined.
Proposed Metrics - 5 of 6: Project Level Metrics

PRODUCT METRICS

Metric | KPI | Benefit of measuring this | System Changes Needed?
Defect Seepage to SIT | # of defects seeped to SIT from earlier phases | Evaluate the number of defects that could not be contained in the phases prior to SIT | YES [changes in Quality Centre to introduce the Defect Injection Phase]
Defect Seepage from SIT | # of defects seeped to UAT | Evaluate the number of defects that could not be contained in SIT | NO
Defect by Severity | # of defects per severity | Measure the benefit of detecting high-severity defects early, before they reach users | YES [Quality Centre to be streamlined for severity values]
Defect by Category | # of defects per category | Provides a snapshot of how many defects were detected per category | YES [Quality Centre to be streamlined for category values]

NOTE: These are a few initial proposed metrics; the list will be refined.
Proposed Metrics - 6 of 6: Project Level Metrics

PRODUCT METRICS

Metric | KPI | Benefit of measuring this | System Changes Needed?
Defect Rejection Ratio | # of rejected defects in SIT / Total defects | The share of rejected defects in the total provides inputs for improving the test process | NO
Defect Ageing by Severity | Time taken to fix the defect | Measures how long defects took to fix, and hence the delay caused to the test schedule | NO
Test cases planned vs. executed | No. of test cases planned vs. executed | Gives a snapshot of the test execution status at any given point of test execution | NO

NOTE: These are a few initial proposed metrics; the list will be refined.
VOICE OF CUSTOMER

Metric: Voice of Customer
KPI: Average of individual ratings from the feedback form
Tools used to capture the metric:
1. Feedback form
2. SharePoint site (need to explore this possibility)

Measured for:
- Programs: YES. Frequency: at the completion of UAT for each release in the program, each iteration of the program, or any other agreed milestone.
- Projects: YES. Frequency: at the completion of the UAT phase of the project.
- Enhancements: NO (to be confirmed). Frequency: none.
VOICE OF CUSTOMER - Questionnaire

1. How would you rate the level of professionalism of the testing team (test manager and testing team members)?
2. How would you rate the knowledge levels of the testing team members?
3. Was the testing team efficient in responding to the project's and business's needs?
4. Has the testing team adhered to the testing schedule and project budget as agreed?
5. How do you rate the quality of the test plan and control deliverables: test plan, test estimates, daily test status and test closure report?
6. How do you rate the testers' efficiency w.r.t. test cases, test execution and the quality of defects logged?
7. How do you rate the usage and effectiveness of the tools used (QC / QTP / LR)?
8. How do you rate the testing team's efficiency w.r.t. defect seepage to UAT?
9. Did you find the communication with the testing team open and timely?
10. Did you find the testing team proactive in overcoming project issues?
11. How would you rate the overall experience of the testing services offered?

5-Point Scale Measurement:
Scale | Equivalent Rating
Excellent | 5
Good | 4
Average | 3
Needs Improvement | 2
Poor | 1
VOICE OF CUSTOMER - Work Instructions (Test Manager / Project Manager)

1. On completion of the project, the Test Manager sends the testing feedback form to the Project Manager.
2. The Project Manager fills in the feedback and returns the form.
3. The Test Manager computes the average rating from the feedback on the different parameters.
4. Is the feedback <= 3? If yes: conduct causal analysis and identify improvement areas. If no: prepare the lessons learnt and document good practices.
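For illustration, a minimal sketch in Python of the averaging step, using the 5-point scale above. The question keys and ratings are hypothetical, not the actual feedback-form schema:

```python
# Hypothetical feedback-form responses on the 5-point scale (5 = Excellent ... 1 = Poor).
ratings = {
    "professionalism": 4,
    "knowledge_levels": 5,
    "responsiveness": 3,
    "schedule_and_budget_adherence": 4,
}

# KPI: average of the individual ratings from the feedback form.
voc_score = sum(ratings.values()) / len(ratings)
print(f"Voice of Customer score: {voc_score:.1f}")

# Per the work instructions, a score of 3 or below triggers causal analysis.
if voc_score <= 3:
    print("Conduct causal analysis and identify improvement areas.")
else:
    print("Prepare lessons learnt and document good practices.")
```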
DEFECT SEEPAGE TO UAT

Metric: Defect Seepage to UAT
KPI: # of defects seeped to UAT
Tools used to capture the metric:
1. Quality Center
2. Excel or any other tool used by the UAT team

Measured for:
- Programs: YES. Frequency: at the completion of UAT for each release in the program, each iteration of the program, or any other agreed milestone.
- Projects: YES. Frequency: at the completion of the UAT phase of the project.
- Enhancements: YES. Frequency: monthly, consolidated for all enhancements that went live in the month.
Defect Seepage to UAT - Work Instructions (Test Manager)

1. On completion of UAT, the Test Manager gathers the data on UAT defects.
2. The Test Manager analyzes the UAT defects and compares them against the established project- and TSID-level goals.
3. Are the UAT defects more than the defined goals? If yes: conduct causal analysis and identify improvement areas.
4. Submit the data to the Metrics Manager for the trend graph and TSID-level analysis.
5. Close.
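A minimal sketch of the comparison step, assuming a hypothetical defect export and an illustrative goal value; real goals would come from the project- and TSID-level baselines:

```python
# Hypothetical UAT defect records (in practice: Quality Center plus the UAT team's tracker).
uat_defects = [
    {"id": "D-101", "severity": "High"},
    {"id": "D-102", "severity": "Medium"},
    {"id": "D-103", "severity": "Low"},
]

# KPI: # of defects seeped to UAT.
seepage_to_uat = len(uat_defects)

GOAL_MAX_UAT_DEFECTS = 5  # illustrative goal value

if seepage_to_uat > GOAL_MAX_UAT_DEFECTS:
    print("Above goal: conduct causal analysis and identify improvement areas.")
else:
    print("Within goal: submit to the Metrics Manager for trend analysis.")
```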
TEST EXECUTION PRODUCTIVITY

Metric: Test Execution Productivity
KPI: # of test cases executed per day per tester
Tools used to capture the metric:
1. Quality Center
2. Execution efforts logged in ATLAS

Measured for:
- Programs: YES. Frequency: at the completion of each release in the program, each iteration of the program, or any other agreed milestone.
- Projects: YES. Frequency: at the completion of the project.
- Enhancements: YES. Frequency: monthly, consolidated for all enhancements that went live in the month.
TEST EXECUTION PRODUCTIVITY - Work Instructions (Tester / Test Manager)

1. During the test execution phase, the tester logs the test cases executed per day; execution time is logged in QC.
2. At the completion of test execution, the Test Manager collates this data and computes Test Execution Productivity.
3. Is the Test Execution Productivity more than the defined goals? If no: conduct causal analysis and identify improvement areas.
4. Submit the data to the Metrics Manager for the trend graph and TSID-level analysis.
5. Close.
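A minimal computation sketch, assuming a hypothetical per-tester-per-day execution log (averaging across tester-days is an interpretation of the KPI); actual data would come from QC and ATLAS:

```python
# Hypothetical log: test cases executed, keyed by (tester, day).
executed = {
    ("tester_a", "2012-09-03"): 12,
    ("tester_a", "2012-09-04"): 9,
    ("tester_b", "2012-09-03"): 15,
}

# KPI: # of test cases executed per day per tester
# (here averaged over all tester-days in the log).
productivity = sum(executed.values()) / len(executed)
print(f"Test Execution Productivity: {productivity:.1f} test cases per tester-day")
```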
COST OF QUALITY

Metric: Cost of Quality
KPI: [(Appraisal Effort + Prevention Effort + Failure Effort) / Total Project Effort] x 100
Tools used to capture the metric:
1. Efforts logged in ATLAS

Measured for:
- Programs: YES. Frequency: at the completion of each release in the program, each iteration of the program, or any other agreed milestone.
- Projects: YES. Frequency: at the completion of the project.
- Enhancements: NO. Frequency: N/A.
COST OF QUALITY - Work Instructions (Tester / Test Manager)

1. During the testing life cycle, the testing team logs their efforts in ATLAS.
2. At the completion of the project/release, the Test Manager collates this data and computes the Cost of Quality.
3. Is the COQ more than the defined goals? If yes: conduct causal analysis and identify improvement areas.
4. Submit the data to the Metrics Manager for the trend graph and TSID-level analysis.
5. Close.
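A minimal sketch of the KPI with hypothetical effort figures; in practice the effort buckets would be derived from the (changed) ATLAS time recording, and the example classification of effort into appraisal/prevention/failure is an assumption:

```python
# Hypothetical effort buckets in person-hours, as they might be tagged in ATLAS.
appraisal_effort = 120.0    # e.g. reviews and test execution
prevention_effort = 40.0    # e.g. training, process definition
failure_effort = 60.0       # e.g. rework and retesting after defects
total_project_effort = 800.0

# KPI: [(Appraisal + Prevention + Failure) / Total Project Effort] x 100.
coq = (appraisal_effort + prevention_effort + failure_effort) / total_project_effort * 100
print(f"Cost of Quality: {coq:.1f}%")
```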
AUTOMATION ROI

Metric: Automation ROI
KPI: Total benefit derived from automation / Total cost of automation
Tools used to capture the metric:
1. Efforts logged in ATLAS

Measured for:
- Programs (if automation is involved): YES. Frequency: at the completion of each release in the program, each iteration of the program, or any other agreed milestone.
- Projects (if automation is involved): YES. Frequency: at the completion of the project.
- Enhancements (if automation is involved): NO. Frequency: N/A.
AUTOMATION ROI - Work Instructions (Tester / Test Manager)

1. During the testing life cycle, the testing team logs their efforts in ATLAS.
2. At the completion of the project/release, the Test Manager collates this data and computes Automation ROI.
3. Submit the data to the Metrics Manager for the trend graph and TSID-level analysis.
4. Close.
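A minimal sketch with hypothetical figures; note that computing the "total benefit" as the manual-minus-automated execution cost is an assumption made for the example, not the deck's stated definition:

```python
# Hypothetical cost figures (same currency units throughout).
manual_execution_cost = 500.0     # cost of running the regression suite manually
automated_execution_cost = 120.0  # cost of running and maintaining it automated
automation_build_cost = 300.0     # one-off cost of building the automation

# KPI: total benefit derived from automation / total cost of automation.
total_benefit = manual_execution_cost - automated_execution_cost  # assumed definition
total_cost = automation_build_cost
roi = total_benefit / total_cost
print(f"Automation ROI: {roi:.2f}")
```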
DEFECT BY SEVERITY

Metric: Defect by Severity
KPI: # of defects per severity
Tools used to capture the metric:
1. Defects logged in Quality Center

Measured for:
- Programs: YES. Frequency: at the completion of each release in the program, each iteration of the program, or any other agreed milestone.
- Projects: YES. Frequency: at the completion of the project.
- Enhancements: YES. Frequency: monthly, consolidated for all enhancements that went live in the month.

This metric can also be reported daily to all relevant stakeholders as part of the daily status.
DEFECT BY SEVERITY - Work Instructions (Tester / Test Manager)

1. During test execution, defects are logged in Quality Center.
2. At the completion of the project/release, the Test Manager collates this data and computes Defect by Severity.
3. Submit the data to the Metrics Manager for the trend graph and TSID-level analysis.
4. Close.
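A minimal sketch of the count, assuming a hypothetical Quality Center export; the same grouping serves Defect by Category by swapping the field:

```python
from collections import Counter

# Hypothetical defect export from Quality Center.
defects = [
    {"id": "D-1", "severity": "Critical"},
    {"id": "D-2", "severity": "High"},
    {"id": "D-3", "severity": "High"},
    {"id": "D-4", "severity": "Medium"},
]

# KPI: # of defects per severity.
by_severity = Counter(d["severity"] for d in defects)
for severity, count in by_severity.most_common():
    print(f"{severity}: {count}")
```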
DEFECT BY CATEGORY

Metric: Defect by Category
KPI: # of defects per category
Tools used to capture the metric:
1. Defects logged in Quality Center

Measured for:
- Programs: YES. Frequency: at the completion of each release in the program, each iteration of the program, or any other agreed milestone.
- Projects: YES. Frequency: at the completion of the project.
- Enhancements: YES. Frequency: monthly, consolidated for all enhancements that went live in the month.

This metric can also be reported daily to all relevant stakeholders as part of the daily status.
DEFECT BY CATEGORY - Work Instructions (Tester / Test Manager)

1. During test execution, defects are logged in Quality Center.
2. At the completion of the project/release, the Test Manager collates this data and computes Defect by Category.
3. Submit the data to the Metrics Manager for the trend graph and TSID-level analysis.
4. Close.
SCHEDULE VARIANCE

Metric: Schedule Variance
KPI: [(Actual End Date - Estimated End Date) / (Estimated End Date - Estimated Start Date)] x 100
Tools used to capture the metric:
1. MPP
2. Excel spreadsheet used to maintain the plan

Measured for:
- Programs: YES. Frequency: at the completion of TSID activities for each release in the program, each iteration of the program, or any other agreed milestone.
- Projects: YES. Frequency: at the completion of the TSID activities of the project.
- Enhancements: YES. Frequency: monthly, consolidated for all enhancements that went live in the month.
SCHEDULE VARIANCE - Work Instructions (Test Manager)

1. On completion of the TSID activities, the Test Manager gathers the data and computes the Schedule Variance.
2. The Test Manager analyzes the Schedule Variance and compares it against the established project- and TSID-level goals.
3. Is the Schedule Variance more than the defined goals? If yes: conduct causal analysis and identify improvement areas.
4. Submit the data to the Metrics Manager for the trend graph and TSID-level analysis.
5. Close.
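A minimal sketch of the KPI with hypothetical plan dates; real dates would come from the MPP/Excel plan:

```python
from datetime import date

# Hypothetical planned and actual dates.
estimated_start = date(2012, 9, 3)
estimated_end = date(2012, 10, 12)
actual_end = date(2012, 10, 19)

# KPI: [(Actual End - Estimated End) / (Estimated End - Estimated Start)] x 100.
schedule_variance = ((actual_end - estimated_end).days
                     / (estimated_end - estimated_start).days) * 100
print(f"Schedule Variance: {schedule_variance:+.1f}%")  # positive = slippage
```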
TEST DESIGN PRODUCTIVITY

Metric: Test Design Productivity
KPI: # of test cases created per day per tester
Tools used to capture the metric:
1. Quality Center
2. Test design efforts logged in ATLAS

Measured for:
- Programs: YES. Frequency: at the completion of each release in the program, each iteration of the program, or any other agreed milestone.
- Projects: YES. Frequency: at the completion of the project.
- Enhancements: NO. Frequency: N/A.
TEST DESIGN PRODUCTIVITY - Work Instructions (Tester / Test Manager)

1. During the test design phase, the tester logs the test cases designed per day; test design time is logged in QC.
2. At the completion of test design, the Test Manager collates this data and computes Test Design Productivity.
3. Is the Test Design Productivity more than the defined goals? If no: conduct causal analysis and identify improvement areas.
4. Submit the data to the Metrics Manager for the trend graph and TSID-level analysis.
5. Close.
REVIEW EFFECTIVENESS

Metric: Review Effectiveness
KPI: Internal vs. external review feedback on test cases
Tools used to capture the metric:
1. Quality Center for review defects
2. Review defects logged in an Excel sheet by external reviewers

Measured for:
- Programs: YES. Frequency: at the completion of each release in the program, each iteration of the program, or any other agreed milestone.
- Projects: YES. Frequency: at the completion of the project.
- Enhancements: NO. Frequency: N/A.
REVIEW EFFECTIVENESS - Work Instructions (Tester / Test Manager)

1. During the test design phase, testers peer-review the test cases designed by their peers.
2. During the test design phase, the test cases are also reviewed by external stakeholders.
3. The Test Manager collates this data and computes Review Effectiveness.
4. Is the Review Effectiveness more than the defined goals? If no: conduct causal analysis and identify improvement areas.
5. Submit the data to the Metrics Manager for the trend graph and TSID-level analysis.
6. Close.
REQUIREMENT TRACEABILITY INDEX

Metric: Requirement Traceability Index
KPI: Requirements mapped to test scenarios/cases
Tools used to capture the metric:
1. Quality Center
2. TCM Matrix

Measured for:
- Programs: YES. Frequency: during the course of each release in the program, each iteration of the program, or any other agreed milestone.
- Projects: YES. Frequency: during the course of the project.
- Enhancements: NO. Frequency: N/A.
REQUIREMENT TRACEABILITY INDEX - Work Instructions (Tester / Test Manager)

1. During the test design phase, the tester maps the test cases to the requirements; the mapping is managed and updated throughout the test lifecycle.
2. During the course of the testing lifecycle, the Test Manager computes this metric to ensure that all testable requirements are tested.
3. Close.
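A minimal sketch, assuming a hypothetical requirement-to-test-case mapping; expressing the index as the percentage of requirements covered is an assumption made for the example:

```python
# Hypothetical traceability data (in practice: Quality Center / the TCM Matrix).
requirements = {"R1", "R2", "R3", "R4"}
requirement_to_cases = {
    "R1": ["TC-01", "TC-02"],
    "R2": ["TC-03"],
    "R4": ["TC-04"],
}

# KPI: requirements mapped to test scenarios/cases.
mapped = {req for req, cases in requirement_to_cases.items() if cases}
unmapped = requirements - mapped
traceability_index = len(mapped) / len(requirements) * 100
print(f"Requirement Traceability Index: {traceability_index:.0f}%")
if unmapped:
    print(f"Requirements without mapped test cases: {sorted(unmapped)}")
```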
TEST CASE EFFECTIVENESS

Metric: Test Case Effectiveness
KPI: ((Total Defects - Defects not mapped to test cases) / Total # of test cases) x 100
Tools used to capture the metric:
1. Quality Center

Measured for:
- Programs: YES. Frequency: at the completion of each release in the program, each iteration of the program, or any other agreed milestone.
- Projects: YES. Frequency: at the completion of the project.
- Enhancements: NO. Frequency: N/A.
TEST CASE EFFECTIVENESS - Work Instructions (Tester / Test Manager)

1. During the test execution phase, the tester logs each defect and maps it to the test case concerned.
2. At the completion of test execution, the Test Manager collates this data and computes Test Case Effectiveness.
3. Is the Test Case Effectiveness below the defined goals? If yes: conduct causal analysis and identify improvement areas.
4. Submit the data to the Metrics Manager for the trend graph and TSID-level analysis.
5. Close.
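A minimal sketch of the KPI as defined above, with hypothetical defect and test-case counts:

```python
# Hypothetical data: which logged defects trace back to a designed test case.
defects = [
    {"id": "D-1", "mapped_to_test_case": True},
    {"id": "D-2", "mapped_to_test_case": True},
    {"id": "D-3", "mapped_to_test_case": False},  # found outside the designed cases
]
total_test_cases = 50

# KPI: ((Total Defects - Defects not mapped to test cases) / Total # of test cases) x 100.
not_mapped = sum(1 for d in defects if not d["mapped_to_test_case"])
effectiveness = (len(defects) - not_mapped) / total_test_cases * 100
print(f"Test Case Effectiveness: {effectiveness:.1f}%")
```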
DEFECT SEEPAGE TO SIT

Metric: Defect Seepage to SIT
KPI: # of defects seeped to SIT from earlier phases
Tools used to capture the metric:
1. Quality Center

Measured for:
- Programs: YES. Frequency: at the completion of UAT for each release in the program, each iteration of the program, or any other agreed milestone.
- Projects: YES. Frequency: at the completion of the UAT phase of the project.
- Enhancements: YES. Frequency: monthly, consolidated for all enhancements that went live in the month.

This metric can also be reported daily to all relevant stakeholders as part of the daily status.
Defect Seepage to SIT - Work Instructions (Test Manager)

1. On completion of SIT, the Test Manager gathers the data on SIT defects.
2. The Test Manager analyzes which phase contributed the most defects to SIT.
3. The Test Manager submits the data to the PM and may participate in causal analysis with stakeholders, if required.
4. Submit the data to the Metrics Manager for the trend graph and TSID-level analysis.
5. Close.
DEFECT REJECTION RATIO

Metric: Defect Rejection Ratio
KPI: # of rejected defects in SIT / Total defects
Tools used to capture the metric:
1. Quality Center

Measured for:
- Programs: YES. Frequency: at the completion of each release in the program, each iteration of the program, or any other agreed milestone.
- Projects: YES. Frequency: at the completion of the project.
- Enhancements: YES. Frequency: monthly, consolidated for all enhancements that went live in the month.

This metric can also be reported daily to all relevant stakeholders as part of the daily status.
DEFECT REJECTION RATIO - Work Instructions (Tester / Test Manager)

1. During the test execution phase, the tester logs each defect and its status.
2. At the completion of test execution, the Test Manager collates this data and computes the Defect Rejection Ratio.
3. Is the Defect Rejection Ratio very high compared to the TSID- or project-level goals? If yes: conduct causal analysis and identify improvement areas.
4. Submit the data to the Metrics Manager for the trend graph and TSID-level analysis.
5. Close.
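A minimal sketch, assuming a hypothetical status field; the status vocabulary is illustrative, not the streamlined Quality Center defect model:

```python
# Hypothetical SIT defects with their final statuses.
defects = [
    {"id": "D-1", "status": "Fixed"},
    {"id": "D-2", "status": "Rejected"},
    {"id": "D-3", "status": "Closed"},
    {"id": "D-4", "status": "Rejected"},
]

# KPI: # of rejected defects in SIT / Total defects.
rejected = sum(1 for d in defects if d["status"] == "Rejected")
rejection_ratio = rejected / len(defects)
print(f"Defect Rejection Ratio: {rejection_ratio:.0%}")
```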
DEFECT AGEING BY SEVERITY

Metric: Defect Ageing by Severity
KPI: Time taken to fix defects, across the various severity levels
Tools used to capture the metric:
1. Quality Center

Measured for:
- Programs: YES. Frequency: at the completion of each release in the program, each iteration of the program, or any other agreed milestone.
- Projects: YES. Frequency: at the completion of the project.
- Enhancements: NO. Frequency: N/A.

This metric can also be reported daily to all relevant stakeholders as part of the daily status.
DEFECT AGEING BY SEVERITY - Work Instructions (Tester / Test Manager)

1. During the test execution phase, the tester logs each defect's open and close dates.
2. At the completion of test execution, the Test Manager collates this data and computes Defect Ageing by Severity.
3. The Test Manager submits the data to the PM and may participate in causal analysis with stakeholders, if required.
4. Close.
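A minimal sketch computing the average age per severity from open/close dates; the date fields are hypothetical stand-ins for the Quality Center export:

```python
from collections import defaultdict
from datetime import date

# Hypothetical defects with open and close dates.
defects = [
    {"severity": "High", "opened": date(2012, 9, 3), "closed": date(2012, 9, 7)},
    {"severity": "High", "opened": date(2012, 9, 5), "closed": date(2012, 9, 6)},
    {"severity": "Medium", "opened": date(2012, 9, 4), "closed": date(2012, 9, 14)},
]

# KPI: time taken to fix defects, grouped by severity.
ages_by_severity = defaultdict(list)
for d in defects:
    ages_by_severity[d["severity"]].append((d["closed"] - d["opened"]).days)

for severity, ages in ages_by_severity.items():
    print(f"{severity}: average ageing {sum(ages) / len(ages):.1f} days")
```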
TEST CASES PLANNED vs. EXECUTED

Metric: Test cases planned vs. executed
KPI: No. of test cases planned vs. executed
Tools used to capture the metric:
1. Quality Center

Measured for:
- Programs: YES. Frequency: ongoing, as part of test execution.
- Projects: YES. Frequency: ongoing, as part of test execution.
- Enhancements: NO. Frequency: N/A.

This metric can also be reported daily to all relevant stakeholders as part of the daily status.
TEST CASES PLANNED vs. EXECUTED - Work Instructions (Tester / Test Manager)

1. During the test execution phase, the tester updates the test cases executed against the planned test cases.
2. As an ongoing activity, the Test Manager computes test cases planned vs. executed.
3. The Test Manager reports this metric on a daily basis as part of the daily status.
4. Close.
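A minimal sketch of the daily snapshot, with hypothetical counts that in practice would be read from Quality Center:

```python
# Hypothetical counts for today's daily status.
planned_test_cases = 120
executed_test_cases = 85

# KPI: No. of test cases planned vs. executed.
execution_pct = executed_test_cases / planned_test_cases * 100
print(f"Executed {executed_test_cases} of {planned_test_cases} planned "
      f"test cases ({execution_pct:.0f}%)")
```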