Defect Removal Efficiency confusion



  • I have read several articles by Capers Jones, who states that:

    Defect Detection Efficiency (DDE): the percentage of defects found before release.
    Defect Removal Efficiency (DRE): the percentage of defects fixed before release, measured against those fixes plus customer-reported issues.

    Later in the text, he gives an example of DRE: if the development team finds and fixes 90 defects and the customer then finds 10, DRE = 90%.

    There is a note about the difference between DDE and DRE, stating that some companies choose not to fix all found defects before release, which makes DRE lower than DDE.

    But how are those unfixed issues reflected? Let's say I find 100 defects and fix only 50. The customer, for some reason, finds only 5. Following the stated process, DRE should then be 50 / (50 + 5) ≈ 91%, even though I left 50 defects unfixed.

    Is that truly correct? It seems counterintuitive to me that when I leave 50% of defects unfixed, DRE stays this high, and that if the customer had found nothing at all, DRE would even be 100%.
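
    For concreteness, here is a minimal Python sketch of the arithmetic as I understand it (the formula and both scenarios are from the text above; the function name is mine):

        def dre(fixed_before_release, customer_found):
            """DRE as stated: defects fixed before release, divided by
            those fixes plus the defects the customer later reports."""
            return fixed_before_release / (fixed_before_release + customer_found)

        # Capers Jones's example: 90 fixed internally, 10 found by the customer.
        print(f"{dre(90, 10):.0%}")  # 90%

        # My scenario: 100 found, only 50 fixed, customer finds 5.
        # The 50 unfixed defects never enter the formula.
        print(f"{dre(50, 5):.0%}")   # 91%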



  • Bugs that are not fixed fall into many categories:

    • they were unknown
    • they were not found
    • they are viewed as features
    • they are not well understood
    • they are not important enough to fix
    • they are not worth fixing based on the effort required
    • they are not worth fixing based on the life span of the product
    • they are not worth fixing because customers expect them and use workarounds

    Using only the categories of "we found them" vs. "customers found them" is too crude a measure for improving quality. Such a simplification does not address the complexity of the underlying issues (as the sketch below illustrates), and it is also likely to invite manipulation and misrepresentation in pursuit of other personal goals.
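
    To make this concrete, here is a small Python sketch (the metric names and the fix-rate definition are my own illustration, not an established standard) of how tracking detection, removal, and the fix rate separately surfaces the unfixed backlog that a single DRE figure hides:

        def quality_metrics(found_internally, fixed_internally, customer_found):
            total_known = found_internally + customer_found
            return {
                # DDE: share of all known defects that were detected in-house.
                "DDE": found_internally / total_known,
                # DRE as in the question: ignores defects found but left unfixed.
                "DRE": fixed_internally / (fixed_internally + customer_found),
                # Fix rate: how much of what was found actually got removed.
                "fix rate": fixed_internally / found_internally,
            }

        # The questioner's scenario: 100 found, 50 fixed, 5 found by customers.
        for name, value in quality_metrics(100, 50, 5).items():
            print(f"{name}: {value:.0%}")
        # DDE: 95%, DRE: 91%, fix rate: 50% -- the 50% fix rate exposes
        # exactly what the headline DRE number conceals.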



