How do I assess the testing process of a project?
I need to assess the efficiency/productivity of a project's testing process; the project also uses Scrum. How do I go about it? Are there peer-reviewed or well-known standard approaches to evaluating the testing process of a project? Are there any useful, industry-standard metrics I can use?
It depends, and there are no industry standards. Seriously. Any metric can be gamed (and will be, if you use it for assessment). I'm not aware of any standard approaches, not least because the teams are - or should be - evaluating themselves regularly and looking for ways to improve their own processes. If they aren't, they're probably doing "Scrum-but": they have the forms of Scrum but are actually doing something not at all agile. This is sadly quite common, and it's a failure of management rather than a team failure.

Some of the things you can do:

- Be part of the retrospectives. All of them, if possible. This is where the team evaluates its performance and looks for ways to improve. If most of the problems highlighted in retrospectives are external to the team, then the problems lie with management, not with the team.
- Talk to the testers. Ask them how they feel they're doing. As long as they don't think the information will be used as a stick in performance evaluations, they'll be honest; I've yet to meet a tester who likes being involved with something that isn't meeting his or her standards.

If you absolutely must have metrics, there are exactly two that have any value.

The first is the ratio of customer-reported issues to tester-reported issues. After you filter out the customer issues that stem from misunderstandings, from the customer realizing only after using the product that what they thought would work doesn't, and so forth (these happen, but they say nothing about how effectively your testers are working), you'll have a reasonable metric. What you should find is that the majority of customer-reported issues are edge cases. Yes, you do have to do a fair bit of evaluation to use this information properly; there's a sketch of the arithmetic at the end of this answer.

The second is the WTF-per-minute metric. I'm actually not joking here: if the reaction of someone using the software for the first time is, effectively, "WTF were they thinking?", there is a big problem. The more often that reaction comes up, the bigger the problem.

No matter what you choose to do, do not blame your test team. I can't emphasize this enough. With very few exceptions, your team is doing the best they can with the information and constraints they've been given. They'll work with you if they know the goal is to build the best software possible. They won't if they think you're going to penalize them because things didn't go well. That's human nature. Treat the evaluation as a way to get better software without blaming anyone, and you'll likely find your test team will be harder on themselves than anyone else would be.
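If it helps to see the first metric made concrete, here's a minimal sketch in Python. Everything in it is a hypothetical stand-in - the issue records, the category names, and the filtering rules would come from your own issue tracker - but it shows the arithmetic: filter the customer-reported issues first, then compare counts and check how much of what escaped is edge cases.

```python
# Minimal sketch of the customer-to-tester issue ratio.
# The records and categories below are hypothetical; adapt them to
# whatever your issue tracker actually exports.

# Each issue: (reported_by, category)
issues = [
    ("tester", "functional"),
    ("tester", "functional"),
    ("tester", "edge-case"),
    ("customer", "misunderstanding"),  # filtered out: not a test escape
    ("customer", "edge-case"),
    ("customer", "functional"),
]

# Customer issues that say nothing about how effectively the testers worked.
NOT_TEST_ESCAPES = {"misunderstanding", "changed-expectation"}

tester_found = sum(1 for who, _ in issues if who == "tester")
customer_escapes = [
    cat for who, cat in issues
    if who == "customer" and cat not in NOT_TEST_ESCAPES
]

# Ratio of (filtered) customer-reported issues to tester-reported issues.
escape_ratio = len(customer_escapes) / tester_found if tester_found else float("inf")

# Healthy sign: most of what escapes to customers is edge cases.
edge_share = (
    sum(1 for cat in customer_escapes if cat == "edge-case") / len(customer_escapes)
    if customer_escapes else 0.0
)

print(f"customer/tester ratio: {escape_ratio:.2f}")
print(f"share of escapes that are edge cases: {edge_share:.0%}")
```

A low ratio with a high edge-case share is roughly the healthy pattern described above; a high ratio dominated by ordinary functional bugs suggests the testing process is missing things it should be catching.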