Can anyone provide actual data or scientific studies that show developers are (or are not) less effective at testing their own code than "independent" testers? I've seen many opinions but little hard data.
A few follow-up clarifications:
- This refers to system/end-to-end (e2e) tests.
- There are two interpretations of "developers testing their own code": an individual developer testing code they authored, or, more generally, developers testing code that may have been written by other developers on the team. Data on either would be instructive, and I think there are different considerations for each. We currently do both.
- We use team inspections for most artifacts (requirements, code, system/e2e tests, etc.), with the exception of unit tests. In my opinion this mitigates at least some of the risk of the test author being "blind" to gaps in what is being tested.
I'm part of a team of around 20 developers, and until recently we had no dedicated test team. On my first project I was the "software test lead" while still doing actual development, and the expectation was that when you implemented a feature, you didn't merge it until you had also written the tests for it. We also have a strong culture of celebrating finding (and fixing) bugs, egoless reviews, and so on, and we have produced very high-quality results. In my opinion, developers can be just as enthusiastic about breaking their own code as independent testers are, if that kind of culture is in place.
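To make "developers test their own code" concrete, here is a minimal sketch of the kind of system-level test a feature author would write before merging. Everything here is illustrative, not from a real project: the service URL, endpoints, and payloads are hypothetical, and it assumes a locally running instance of the system under test plus `pytest` and `requests`.

```python
# Illustrative only: a system/e2e test the feature's author writes
# before merging. Service URL and endpoints are hypothetical.
import requests

BASE_URL = "http://localhost:8080"  # assumed local deployment of the system under test

def test_new_user_can_place_order():
    # Exercise the feature end to end through the public API,
    # the same external interface an independent tester would use.
    user = requests.post(f"{BASE_URL}/users", json={"name": "alice"}).json()
    order = requests.post(
        f"{BASE_URL}/users/{user['id']}/orders",
        json={"item": "widget", "quantity": 2},
    )
    assert order.status_code == 201
    assert order.json()["status"] == "pending"
```

The specifics don't matter; the point is that the merge gate forces the author to exercise the feature through the same external interface an independent tester would.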
I just started reading How Google Tests Software, and it seems we aren't the only ones who have had good results when developers are responsible for testing their own code. I haven't even gotten past the introduction yet, but the idea of having an "Engineering Productivity" group rather than one labeled "QA" or "Testing", with those people working to enable developers to test faster and better, sounds very similar to my experience: though we didn't have separate titles, there are people in our group known for writing tools to increase productivity, in testing or otherwise, which, combined with the culture of quality mentioned above, has proven successful.
Our management is pushing to create a dedicated testing team. They've given a number of reasons, and I've seen many of the same reasons/opinions offered in answers to questions like How does a tester's perspective towards software differ from a developer's? and Can developer do automation test for the feature that he has implemented?, but while I've seen lots of opinions (many from experience, and I respect them as such), I've yet to see any hard evidence or data that developers are inherently less effective at testing their own code than an "independent" party. My biggest concern with moving from our current model to a separate-test-team model is that developers will no longer view quality as their responsibility, and I've seen some of that mentality start to set in on a project where we did have a test team. It was an offshore team that proved incapable of writing quality tests, but because we didn't ditch them until a year into the project, we built up a lot of quality debt while letting many devs slip out of the mindset of being testers themselves.
I'd like to push back on this idea of separating development and testing, but I want to do my homework first. There are opinions and anecdotal/experiential evidence supporting both viewpoints, but I'm interested in whether there's any actual data to support one or the other (or "it depends").