Is it valuable to measure bug count per hour of dev time?
Recently a custom metric was presented to our team during a meeting about software quality.
The metric was: the number of bugs found divided by total dev time, aggregated both as a team average and per individual developer.
How are those numbers calculated?
We have a team of testers who do manual testing (based on new features and test scenarios). If a bug is discovered, it is tracked in our internal bug tracker. Additionally, the overall dev time is recorded as well.
After doing some research online I couldn't find anything about such a metric. I would say it is related to defect density (bugs per LOC), but I'm not sure.
Does this kind of metric make sense? Does it measure dev quality or software testing quality? And is there a common term for this metric?
Answer by emmalee:
It absolutely is not valuable to measure bug count per hour of dev time. It is especially bad to measure bug count per hour for individual devs.
Devs on more complex or difficult code will generally have more bugs/hour than devs working on cleaner, more straightforward code. Similarly, devs will produce more bugs/hour at the end of a long workday than they do at the beginning.
Unclear requirements will generate more bugs than clear, well-structured requirements.
If devs are penalized for excess bugs, some testers will start reporting problems informally to avoid being the cause of a penalty. Others could well decide to be petty and file a lot of small bug reports against someone they dislike.
In addition, there is a difference between a single application-breaking bug and a number of small, trivial bugs - which is worse for the application? Under this kind of metric, the trivial bugs would count as more damaging than the application-breaking one.
In my opinion, the only legitimate use of a bugs-per-hour figure is as a team-level average for forecasting how much padding to leave in the schedule for bug fixes. For example: if team A typically produces 0.05 bugs/hour, and 1 in 5 bugs typically needs to be fixed before release, then in 500 hours there will probably be about 25 bugs, of which around 5 will need fixing. If a typical fix (including testing) takes 5 hours, you'd include padding of around 25 hours to cover likely bug fixes.
(caveat: the numbers are not real - they're just for illustration of how the metric should be used)
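That forecast is just three multiplications, so it can be sketched in a few lines. The function name and parameters here are my own illustration (not an established formula), and the inputs are the made-up numbers from the example above:

```python
def padding_hours(dev_hours, bugs_per_hour, must_fix_ratio, hours_per_fix):
    """Estimate schedule padding for pre-release bug fixes
    from a team-average bug rate. All inputs are historical
    team averages, not per-developer figures."""
    expected_bugs = dev_hours * bugs_per_hour    # 500 * 0.05 = 25 bugs
    must_fix = expected_bugs * must_fix_ratio    # 25 * (1/5) = 5 bugs
    return must_fix * hours_per_fix              # 5 * 5 = 25 hours

print(padding_hours(500, 0.05, 1 / 5, 5))  # prints 25.0
```

Note that the output is only as good as the historical averages feeding it, and it says nothing about any individual developer's quality.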