Let's discuss testing metrics: how to select the ones most useful for your testing process, and how to record and measure them. I'll show some examples of how this works at Innotech.
Why Record Metrics?
Testing metrics are used to track the effort spent on ensuring the quality of the released software code. They allow you to quantify whether you have achieved the specified quality level or target. A visual representation of the results creates a clear picture of the testing process that may reveal problems or bottlenecks.
In the testing process, metrics are used to:
track the team's progress in terms of the project timeframe, deadlines and other time periods;
assess the quality of the current system state;
control the quality of the testing process;
set targets and plan efficiently based on an understanding of existing problems.
You shouldn't remain content with the current quality of the systems and, more importantly, of the processes. These are the foundation for improving the team's efficiency and results. The rational use of human resources is directly linked to overall performance: a team member working at barely 50% of their potential is a waste, and that lack of focus can cost you dearly, including reputational and financial losses.
Metrics Most Often Mentioned in Articles on Testing
Across training courses and articles on testing, a handful of metrics come up most often.
Passed/Failed Test Cases. The number of test cases that passed compared to the number that failed. This metric lets you identify the causes of test failures and the ways to eliminate them.
Test Cases Not Run. The number of tests still awaiting execution on the project. This metric lets you identify the causes of test non-execution and the ways to eliminate them.
Open/Closed Bugs. The ratio of open to closed bugs. This metric helps you evaluate the bug-fixing rate and identify why bugs remain open.
Reopened/Closed Bugs. The ratio of reopened to closed bugs. This metric reflects how effectively developers fix bugs and helps you identify the causes of a low fixing rate.
Bugs by Severity/Priority. The total number of bugs broken down by severity/priority. This metric reflects the quality of the code submitted for testing.
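Most of these standard metrics are simple counts and ratios over data exported from a test management or bug tracking system. As a rough sketch (the sample data and field names below are hypothetical, not from any specific tool):

```python
from collections import Counter

# Hypothetical exports: test run results and bug records.
test_results = ["passed", "failed", "passed", "not_run", "passed", "failed"]
bugs = [
    {"id": 1, "status": "closed",   "severity": "high"},
    {"id": 2, "status": "open",     "severity": "low"},
    {"id": 3, "status": "reopened", "severity": "high"},
    {"id": 4, "status": "closed",   "severity": "medium"},
]

def standard_metrics(test_results, bugs):
    """Compute the commonly cited metrics (assumes at least one closed bug)."""
    runs = Counter(test_results)
    statuses = Counter(b["status"] for b in bugs)
    return {
        "passed_vs_failed": (runs["passed"], runs["failed"]),
        "not_run": runs["not_run"],
        "open_to_closed": statuses["open"] / statuses["closed"],
        "reopened_to_closed": statuses["reopened"] / statuses["closed"],
        "by_severity": dict(Counter(b["severity"] for b in bugs)),
    }

metrics = standard_metrics(test_results, bugs)
print(metrics["passed_vs_failed"])  # (3, 2)
```

In practice the input would come from a tracker API or an Excel export rather than inline literals, but the arithmetic stays the same.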
What Additional Metrics Need to Be Recorded?
In addition to the standard metrics, Innotech uses a number of additional ones. They allow us to get an objective picture of the process.
Coverage by Test Cases / Checklists
It is applied to requirements, user stories, risks or code acceptance criteria. Based on this metric, testers can identify features that are not covered by the test documentation. Uncovered functionality poses a risk that many projects would find unacceptable. To evaluate the coverage, you can use a requirements traceability matrix.
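A requirements traceability matrix can be kept as simply as a mapping from requirements to the test cases that cover them. A minimal sketch, with hypothetical requirement and test-case IDs:

```python
# Hypothetical traceability matrix: each requirement mapped to the
# test cases that cover it. An empty list means no coverage.
traceability = {
    "REQ-1": ["TC-101", "TC-102"],
    "REQ-2": ["TC-103"],
    "REQ-3": [],  # not covered -> a risk to flag
}

def coverage_report(matrix):
    """Return (coverage percentage, list of uncovered requirements)."""
    uncovered = [req for req, cases in matrix.items() if not cases]
    covered_pct = 100 * (len(matrix) - len(uncovered)) / len(matrix)
    return covered_pct, uncovered

pct, gaps = coverage_report(traceability)
print(gaps)  # ['REQ-3']
```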
Written Test Cases / Checklists Percentage
You can use the overall number of features, the traceability matrix or user stories as the basis. The requirements coverage density should also be taken into account. Apply the atomicity principle, i.e. break each piece of functionality down into atomic features, each of which should also be covered by the test documentation.
Completed Test Environment / Test Data Preparation Percentage
Testing is carried out on test data, so it's important to know that this data is complete and correct. Tests must be accompanied by relevant test data covering all aspects of the code under test. To prepare the data, we use cross-domain analysis and pairwise testing tables, as well as state transition diagrams and tables. You need this metric to assess the ready-to-start criteria: define the required quantity of test data up front and track progress by comparing actual readiness against that baseline.
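To illustrate why pairwise tables help with test-data preparation: instead of running the full cartesian table of parameter values, you keep only enough rows to cover every pair of values. A small sketch with hypothetical parameters; the four hand-picked rows below cover every pair that the full eight-row table would:

```python
from itertools import combinations, product

# Hypothetical test-environment parameters.
params = {
    "browser": ["chrome", "firefox"],
    "os": ["windows", "linux"],
    "locale": ["en", "ru"],
}

def required_pairs(params):
    """Every value pair across every pair of parameters."""
    pairs = set()
    for (p1, v1s), (p2, v2s) in combinations(params.items(), 2):
        for v1, v2 in product(v1s, v2s):
            pairs.add(((p1, v1), (p2, v2)))
    return pairs

def covered_pairs(tests, params):
    """Value pairs actually exercised by a candidate test set."""
    names = list(params)
    pairs = set()
    for t in tests:
        for p1, p2 in combinations(names, 2):
            pairs.add(((p1, t[p1]), (p2, t[p2])))
    return pairs

# Four rows instead of the full 2*2*2 = 8-row cartesian table.
tests = [
    {"browser": "chrome",  "os": "windows", "locale": "en"},
    {"browser": "chrome",  "os": "linux",   "locale": "ru"},
    {"browser": "firefox", "os": "windows", "locale": "ru"},
    {"browser": "firefox", "os": "linux",   "locale": "en"},
]
missing = required_pairs(params) - covered_pairs(tests, params)
print(len(missing))  # 0 -> full pairwise coverage with half the rows
```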
Test Execution Metrics
These reflect the number of passed and failed test cases, the ratio of passed/failed cases to the overall number of cases, and the average test case execution time. We track these metrics to allocate testers' resources correctly, prioritise cases, involve additional resources in regression testing, and determine the current status so we can adjust the time frames. As a rule, test management systems are used for this purpose. The number of passed cases can also be recorded in Excel spreadsheets.
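A sketch of how these execution figures might be computed from an exported run log. The data shape here is hypothetical; in practice the source would be a test-management-system report or an Excel export:

```python
# Hypothetical execution log: (case id, result, duration in minutes).
runs = [
    ("TC-1", "passed", 4.0),
    ("TC-2", "failed", 6.5),
    ("TC-3", "passed", 3.5),
    ("TC-4", "passed", 2.0),
]

def execution_metrics(runs):
    """Pass/fail rates and average duration over a run log."""
    total = len(runs)
    passed = sum(1 for _, result, _ in runs if result == "passed")
    return {
        "pass_rate": 100 * passed / total,
        "fail_rate": 100 * (total - passed) / total,
        "avg_minutes": sum(minutes for *_, minutes in runs) / total,
    }

print(execution_metrics(runs)["pass_rate"])  # 75.0
```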
Defect Metrics
These include defect density, the number of identified and fixed defects, the failure rate and verification test results. They allow us to obtain the most objective information about product quality over a particular period of time. Obviously, it is difficult to assess usability based on defect density; however, it's a good measure of the product's status against the requirements. Likewise, information about blocking and high-priority bugs allows for corrections to the development path. You need to record these metrics in terms of regressions and current project status. They can be tracked via failed test cases or additionally recorded in Excel spreadsheets.
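The basic defect figures reduce to a couple of simple formulas, for example defect density per thousand lines of code and the fix rate over a period. A minimal sketch (the sample numbers are purely illustrative):

```python
def defect_density(defects_found, size_kloc):
    """Defects per thousand lines of code (KLOC).

    The denominator could equally be the number of features or
    requirements, depending on how the project measures size.
    """
    return defects_found / size_kloc

def fix_rate(fixed, found):
    """Percentage of identified defects fixed within the period."""
    return 100 * fixed / found

print(defect_density(30, 12))  # 2.5 defects per KLOC
print(fix_rate(24, 30))        # 80.0 percent fixed
```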
Information about the Status of Tasks, Testers, Load and Effort
This metric tracks the load on the testing team, the allocation of effort across tasks, etc. Introducing it is most difficult if the project does not practise daily task logging. However, once in place, the metric allows you to allocate time across tasks correctly, track the actual and expected scope of the tasks, and adjust this scope in the future. You can track it by keeping the daily task log in JIRA, or in whatever other system the project uses.
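If the project keeps a daily task log (in JIRA or elsewhere), team load can be derived from the exported entries. A sketch, assuming a hypothetical export of (tester, task, hours) tuples:

```python
from collections import defaultdict

# Hypothetical daily work-log export from the tracker.
log = [
    ("alice", "TASK-1", 3.0),
    ("alice", "TASK-2", 5.0),
    ("bob",   "TASK-1", 6.0),
]

def load_by_tester(log, workday_hours=8.0):
    """Each tester's logged hours as a percentage of the workday."""
    hours = defaultdict(float)
    for tester, _task, h in log:
        hours[tester] += h
    return {tester: 100 * h / workday_hours for tester, h in hours.items()}

print(load_by_tester(log))  # {'alice': 100.0, 'bob': 75.0}
```

The same aggregation, grouped by task instead of tester, gives the effort-per-task view mentioned above.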
Cost Metrics
This metric concerns testing, project and other team costs. It is necessary for planning financial resources and tracking "leaks". You need it to avoid budget holes and to present the team's expenses to stakeholders more effectively. For this purpose, account for your team's financial expenses and plan your future costs.
You need to monitor the processes regularly, comparing the results at various development stages. Metric relevance is controlled by the specialist implementing them, and the results are used by the stakeholders to make corrections to the development path.
A separate article could be written about recording the metrics. To save time, I'll summarise the key points:
To win big in the long run, you need to spend just a little time now introducing metrics recording into your processes;
A dedicated specialist is essential for controlling the quality of your metrics.