Examples and Best Practices for Seeding Defects?
Defect Seeding seems to be one of the few ways a development organization can tell how thorough an independent testing group is. I'm a fan of using metrics to help counter overconfidence biases and drive discussions around facts. With that said, I haven't seen Defect Seeding used in practice.
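For readers unfamiliar with the technique: the estimate behind defect seeding is a capture-recapture style calculation. A minimal sketch, with hypothetical numbers and a function name I made up for illustration:

```python
def estimate_total_real_defects(seeded_planted, seeded_found, real_found):
    """If testers found seeded_found of seeded_planted known (seeded)
    defects, assume they found the same fraction of the real defects,
    and extrapolate the total number of real defects from that."""
    if seeded_found == 0:
        raise ValueError("no seeded defects found; cannot estimate")
    return real_found * seeded_planted / seeded_found

# Example: 20 defects seeded, testers found 15 of them plus 30 real defects.
total = estimate_total_real_defects(20, 15, 30)  # 40.0 real defects estimated
remaining = total - 30                           # ~10 still undiscovered
```

The interesting output for a QA discussion is `remaining`: an estimate of how many real defects the testing effort likely missed.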
Are there best practices above and beyond what McConnell explained? Are there public examples where this has been done? In the absence of the above, any thoughts on why it hasn't been done more?
Alberto
I, too, have never seen this in practice. In my experience, QA departments are always short of resources and/or time, so taking extra time to validate the tests instead of the software they are meant to test would never gain traction.
What I have seen is QA groups knowing the coverage of their tests. Where the coverage is thin, they build it up. Typically this starts with manual tests, where the test scripts are documented in Word documents (for example). If a script doesn't match the software, then either the software is at fault (a bug) or the script is updated (assuming the feature has been modified).
Where tests can be automated, this is done (although not by all QA departments I've seen). Having automated tests allows them to test more, faster. The tests do require maintenance, but it's time worth spending (in my opinion).
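To make that concrete, here's a minimal sketch of the kind of automated check a QA group might maintain alongside (or instead of) a manual script, using Python's built-in `unittest`. The `apply_discount` function is a hypothetical stand-in for a feature under test, not a real API:

```python
import unittest

def apply_discount(price, percent):
    """Toy function standing in for the feature under test."""
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)

# Run with: python -m unittest <this_file>
```

The maintenance cost mentioned above shows up when the feature changes: if the discount rounding rules change, these assertions must be updated to match, just like a manual script would be.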