Software Test Metrics Report: What is a Metric?, Types of Metrics, Crucial Web-Based Testing Metrics, and the Software Test Metrics Life Cycle.
Manual Testing – Software Test Metrics Report
When we can measure what we are speaking about and express it in numbers, we know something about it; but when we cannot measure it, when we cannot express it in numbers, our knowledge is of a meager and unsatisfactory kind: it may be the beginning of knowledge, but we have scarcely, in our thoughts, advanced to the stage of science.
What is a Metric?
A measurement scale and the method used for measurement
Why do we need Metrics?
“We cannot improve what we cannot measure”
“We cannot control what we cannot measure”
Test Metrics help to,
• Make decisions for the next phase of activities
• Provide evidence for a claim or prediction
• Understand the type of improvement required
• Decide on process or technology changes
2. Types of Metrics
a) Base Metrics (Direct Measure)
Base metrics constitute the raw data gathered by a Test Analyst throughout the testing effort. These metrics are used to provide project status reports to the Test Lead and Project Manager; they also feed into the formulas used to derive Calculated Metrics.
Ex: # of Test Cases, # of Test Cases Executed
b) Calculated Metrics (Indirect Measure)
Calculated Metrics convert the Base Metrics data into more useful information. These types of metrics are generally the responsibility of the Test Lead and can be tracked at many different levels (by module, tester, or project).
Ex: % Complete, % Test Coverage
Base Metrics & Test Phases
o # of Test Cases (Test Development Phase)
o # of Test Cases Executed (Test Execution Phase)
o # of Test Cases Passed (Test Execution Phase)
o # of Test Cases Failed (Test Execution Phase)
o # of Test Cases Under Investigation (Test Development Phase)
o # of Test Cases Blocked (Test dev/execution Phase)
o # of Test Cases Re-executed (Regression Phase)
o # of First Run Failures (Test Execution Phase)
o Total Executions (Test Reporting Phase)
o Total Passes (Test Reporting Phase)
o Total Failures (Test Reporting Phase)
o Test Case Execution Time (Test Reporting Phase)
o Test Execution Time (Test Reporting Phase)
Calculated Metrics & Test Phases
The metrics below are created during the Test Reporting phase or the Post-Test Analysis phase.
o % Complete
o % Defects Corrected
o % Test Coverage
o % Rework
o % Test Cases Passed
o % Test Effectiveness
o % Test Cases Blocked
o % Test Efficiency
o 1st Run Fail Rate
o Defect Discovery Rate
o Overall Fail Rate
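As a minimal sketch of how base metrics feed the calculated ones, the helper below derives a few of the percentages listed above from raw test-case counts. The function name, parameter names, and the exact formulas for each percentage are my assumptions (the source lists the metric names but not their formulas); common definitions are used.

```python
def calculated_metrics(total, executed, passed, failed, first_run_failures):
    """Derive sample calculated metrics from base test-case counts.

    Formulas are assumed common definitions, not taken from the source:
      % Complete        = executed / total * 100
      % Passed          = passed / executed * 100
      1st Run Fail Rate = first-run failures / executed * 100
      Overall Fail Rate = failed / executed * 100
    """
    return {
        "% Complete": executed * 100.0 / total,
        "% Test Cases Passed": passed * 100.0 / executed,
        "1st Run Fail Rate": first_run_failures * 100.0 / executed,
        "Overall Fail Rate": failed * 100.0 / executed,
    }
```

A Test Lead could run this per module, per tester, or per project, as the text notes for Calculated Metrics.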
3. Crucial Web-Based Testing Metrics
1) Requirement Volatility
Number of Requirements agreed vs. number of Requirements changed.
• (Number of Requirements Added + Deleted + Modified) *100 / Number of Original Requirements
• Ensure that the Requirements are normalized or defined properly while estimating
Example: The VSS 1.3 release initially had a total of 67 requirements; later, 7 new requirements were added, 3 of the initial requirements were removed, and 11 requirements were modified.
So, Requirement Volatility is
(7 + 3 + 11) * 100 / 67 = 31.34%
This means almost one third of the requirements changed after their initial identification.
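The volatility formula above can be sketched as a one-line helper (the function name is illustrative, not from the source):

```python
def requirement_volatility(original, added, deleted, modified):
    """Requirement Volatility % = (Added + Deleted + Modified) * 100 / Original requirements."""
    return (added + deleted + modified) * 100.0 / original

# VSS 1.3 example from the text: 67 original, 7 added, 3 deleted, 11 modified
volatility = requirement_volatility(67, 7, 3, 11)  # ≈ 31.34%
```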
2) Test Plan Coverage on Functionality
Total number of Requirements vs. number of Requirements covered through Test scripts.
• (No of Requirements covered / Total number of Requirements) * 100
Define Requirements at the time of Effort estimation
Example: The total number of Requirements estimated is 46; the total number of requirements tested is 39, with 7 blocked…define what the coverage is?
Note: Define Requirement clearly at project level
3) Test Case Defect Density
Total number of Defects found in Test scripts vs. the number of Test scripts developed and executed.
• (Defective Test Scripts / Total Test Scripts) * 100
Total Test scripts developed 1360,
Total Test scripts executed 1280,
Total Test scripts passed 1065,
Total test script failed 215
So, Test Case Defect Density is
215 * 100 / 1280 = 16.8%
This 16.8% value can also be called the Test Case Efficiency %, which depends upon the total number of test cases that uncovered defects.
4) Defect Slippage Ratio
Number of Defects slipped (reported from production) vs. number of Defects reported during execution.
• Number of Defects Slipped / (Number of Defects Raised – Number of Defects Withdrawn)
Customer-filed Defects are 21,
Total Defects found while testing are 267,
Total number of invalid Defects is 17,
So, the Slippage Ratio is
[21/ (267-17)] X 100 = 8.4%
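The slippage formula, as a sketch (function and parameter names are my own; invalid defects are treated as the "withdrawn" ones in the formula):

```python
def defect_slippage_ratio(slipped, raised, withdrawn):
    """% Slippage = defects reported from production * 100 / (defects raised - defects withdrawn)."""
    return slipped * 100.0 / (raised - withdrawn)

# Example from the text: 21 customer-filed, 267 raised in testing, 17 invalid
slippage = defect_slippage_ratio(21, 267, 17)  # = 8.4%
```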
4. Review Efficiency
Review Efficiency is a metric that offers insight into the quality of reviews and testing. Some organizations also call this "Static Testing" efficiency and aim to find a minimum of 30% of defects during static testing.
Review Efficiency = (Total number of Defects found by Reviews / Total number of Project Defects) * 100
A project found a total of 269 defects in various reviews, which were fixed; the test team then reported 476 valid defects.
So, Review Efficiency is [269 / (269 + 476)] * 100 = 36.1%
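Sketched as a helper (names are illustrative; total project defects is taken as review defects plus valid test defects, as in the example):

```python
def review_efficiency(review_defects, test_defects):
    """% Review Efficiency = review defects * 100 / (review defects + valid test defects)."""
    return review_defects * 100.0 / (review_defects + test_defects)

# Example from the text: 269 found in reviews, 476 valid defects found in testing
efficiency = review_efficiency(269, 476)  # ≈ 36.1%
```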
Efficiency and Effectiveness of Processes
• Effectiveness: Doing the right thing. It deals with meeting the desirable attributes that are expected by the customer.
• Efficiency: Doing the thing right. It concerns the resources used for the service to be rendered
Important Metrics for Software Testing:
Defect Removal Effectiveness
DRE = (Defects removed during Development phase / Defects latent in the product) x 100%
Defects latent in the product = Defects removed during Development phase + Defects found later by User
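Combining the two lines above, DRE can be sketched as follows. The example numbers in the comment are hypothetical, not from the source:

```python
def defect_removal_effectiveness(removed_in_dev, found_by_user):
    """DRE % = defects removed in development * 100 / (removed in development + found later by user)."""
    return removed_in_dev * 100.0 / (removed_in_dev + found_by_user)

# Hypothetical example: 90 defects removed in development, 10 escaped to users
dre = defect_removal_effectiveness(90, 10)  # = 90.0%
```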
Efficiency of Testing Process (define size in KLoC or FP, Req.)
Testing Efficiency= Size of Software Tested / Resources used
What is a Metric?
A measurement scale and the method used for measurement.
What is Measurement?
The process of assigning a number or category to an entity to describe an attribute of that entity.
What is Measurement Scale?
A scale that constrains the type of data analysis that can be performed on it.
What is Measure?
The number or category assigned to an attribute of an entity by making a measurement.
Test Metrics Life Cycle
Phase 1: Analysis
• Identification of Metrics
• Define the identified Metrics
Phase 2: Communication
• Explain the need for Metrics to Stakeholders
• Educate Testing team about the Datapoints
Phase 3: Evaluation
• Capture and Verify the Data
• Calculate the Metrics
Phase 4: Report
• Develop the Report
• Distribute the Report to Stakeholders
• Collect feedback from Stakeholders