Will number of bugs and number of tests KPIs improve quality?
My company wishes to use two primary KPIs to help improve quality.
- Number of bugs (less is better)
- Number of tests (more is better)
Will improving these figures actually improve quality?
carriann:
Some general guidelines:
- Any metric can and will be gamed. It is possible to have an application with thousands of passing unit tests that crashes on startup. It is also possible for bug count metrics to drive actual bug counts underground.
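To make the gaming point concrete, here is a deliberately useless sketch (all names invented for illustration): tests that always pass, inflate the test-count KPI, and tell us nothing about whether the application even starts.

```python
import unittest

class TestPadding(unittest.TestCase):
    """Deliberately useless tests: they pass and raise the test count,
    but exercise none of the application's actual behaviour."""

    def test_truth(self):
        self.assertTrue(True)

    def test_arithmetic(self):
        self.assertEqual(1 + 1, 2)

# Both tests pass; the KPI goes up; quality is untouched.
suite = unittest.TestLoader().loadTestsFromTestCase(TestPadding)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

A team measured on test count can produce any number of these; a team measured on bug count can simply stop filing bugs.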
- Using a metric as a guideline to indicate the team's current status will work. Using one to push the team to change/improve their ways will not.
- The proposed metrics find favor in some organizations because they are easy to collect, not because they provide value.
- The way to improve quality in any product or organization is for everyone involved to actively work to improve quality in their own part of the organization. This includes ensuring that nobody feels the need to take shortcuts to meet deadlines, accepting that bugs will happen, and recognizing that when you trace a bug far enough, the root cause is usually some variation of:
- product complexity, where what appears to be a simple change causes unforeseen knock-on effects. This usually happens because the developers/testers are insufficiently familiar with the application under test, although if the application is large or complex enough, it's possible that nobody can be familiar enough with it to prevent such bugs.
- communication issues, where there is a mismatch between what customers expect, what developers/testers are told, and what actually gets built. It's entirely possible for nobody to be at fault in these cases, and just as possible for everyone to be at fault.
- schedule pressures, forcing developers/testers to take shortcuts to meet deadlines outside their control.
- Metrics imposed without the agreement of a team will generate resentment and mistrust, which inevitably leads to gaming the metric.
- Not all bugs are equal, and not all tests are equal. Using bug counts as KPIs effectively ranks the showstopper bug that prevents users from logging in as equal to a typo on a page seen only by top-level users maybe once a year. Using test counts as KPIs effectively ranks tests of the most commonly used functionality as equal to tests of "set it and forget it" functionality.
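A small arithmetic sketch of that last point (bug titles and severity weights are invented for illustration, not a metric I'd endorse): the same bug list measured two ways gives very different pictures.

```python
# Hypothetical bug list: one showstopper, two trivial issues.
bugs = [
    {"title": "Crash on login", "severity": "showstopper"},
    {"title": "Typo on annual admin report", "severity": "trivial"},
    {"title": "Typo in tooltip", "severity": "trivial"},
]

# Invented weights purely to show how severity changes the picture.
weights = {"showstopper": 100, "major": 10, "minor": 3, "trivial": 1}

raw_count = len(bugs)
weighted = sum(weights[b["severity"]] for b in bugs)

print(raw_count)  # 3: a raw-count KPI treats all three bugs as equal
print(weighted)   # 102: dominated by the single showstopper
```

Fixing the two typos improves the raw count by two-thirds while leaving users unable to log in, which is exactly the distortion a raw-count KPI rewards.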
I have yet to see any good reason for using bug counts or test counts as a KPI.