Does defect density make a good key performance indicator?



  • Does defect density make for a good KPI?

    Is there any baseline recommending that defect density should be above or below a certain amount for a release to be considered 'healthy' or a 'better release'?



  • As with many KPIs that try to use a quantity of defects as part of the measurement, I'd say "Yes, but use it with caution. It's a matter of context".

    As an example, most of my career has been with startups at varying stages of maturity. Often what we were developing wasn't meant to be entirely bug-free, stable, and able to scale; it was meant to solicit feedback. In situations like that, an absolute expectation for density may not be useful and may not be how you want the business to measure the success of your team. Any reasonably experienced and capable test engineer can shake an immature application and a lot of defects will fall out.

    An alternative I'd suggest considering is approaching density as "how many defects did the internal team find and report" vs. "how many did customers find" (a rough sketch of that comparison is at the end of this answer). If an application is released and fixing known defects is deferred to later, that is an intentional decision about how to use resources and not necessarily an indicator of success for development or QA. If an application is shipped and the customer finds all the bugs while the team is unaware of them, I'd argue that is a riskier situation.

    Additionally, I think density is a fantastic factor to consider when doing exercises like risk analysis, which can be useful when determining how to allocate finite development and testing resources for future work.
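
    To make the comparison concrete, here is a minimal sketch in Python of the two measurements discussed above: classic defect density (defects per KLOC) and the internal-vs-customer split, sometimes called defect detection effectiveness. All counts, names, and the size measure here are hypothetical and purely for illustration; use whatever size and defect-tracking data your team actually has.

        def defect_density(defect_count: int, size_kloc: float) -> float:
            """Defects per thousand lines of code (KLOC); any consistent size measure works."""
            return defect_count / size_kloc

        def detection_effectiveness(internal_defects: int, customer_defects: int) -> float:
            """Share of all known defects that the internal team found before release."""
            total = internal_defects + customer_defects
            return internal_defects / total if total else 1.0

        # Hypothetical release numbers, purely for illustration.
        internal_found = 42      # defects logged by the team before release
        customer_found = 8       # defects reported by customers after release
        release_size_kloc = 120  # size of the release in KLOC

        print(f"Defect density: {defect_density(internal_found + customer_found, release_size_kloc):.2f} per KLOC")
        print(f"Detection effectiveness: {detection_effectiveness(internal_found, customer_found):.0%}")

    The point of tracking the second number alongside raw density is that a high density found internally can be a sign of thorough testing, while the same density found by customers is the riskier situation described above.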

