Writing test cases before implementation?
At my company it is currently common practice to write the high-level steps of test cases before implementation. Several of us find this to be somewhat wasted effort: I often end up throwing away most of those steps when I write the final draft. In my opinion, the final draft is very quick to write once the feature is implemented.
So my question/problem to solve is: what could we do better with our time during this pre-implementation phase? What would be a more constructive use of it?
My current suggestion is to map the use case and the data/control flow of the requirement being tested. That way, writing the test case later should be simple, because time was already spent understanding the problem.
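As a minimal sketch of what such a flow map might look like, here is a hypothetical example in Python: the steps of a use case are recorded as a small graph, and a quick reachability check catches steps that were mapped but never connected. All step names are invented for illustration.

```python
# Hypothetical sketch: a requirement's use-case/control flow captured as a
# simple step graph before implementation. Step names are invented.
flow = {
    "open_form": ["fill_personal_info"],
    "fill_personal_info": ["attach_documents", "submit"],
    "attach_documents": ["submit"],
    "submit": ["confirmation", "validation_error"],
    "validation_error": ["fill_personal_info"],
    "confirmation": [],
}

def reachable(graph, start):
    """Return all steps reachable from `start` (helps spot dead paths in the map)."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return seen

# Every mapped step should be reachable from the entry point;
# anything left over is a step nobody can actually get to.
unmapped = set(flow) - reachable(flow, "open_form")
```

Even a throwaway map like this forces the questions ("what happens on a validation error?") that make the later detailed test cases quick to write.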
Well, first of all - congrats, glad to know you work for a customer/company that understands that involving QA at the early stages is definitely worth it!
Below I'll share several related points from my own experience managing a QA team (up to 10 FTE) testing a fairly big project, where the requirements ran to 1000+ pages and the business logic was complicated enough that a typical project investigation for a QA team newcomer took about 2-3 weeks.
Many customers believe that investment in any kind of test documentation is a waste of effort in any case: indeed, during those hours we neither test nor reveal any bugs, so where's the profit? Consider the following before making up your mind!
- Creating test cases (or any kind of test documentation) at early project stages, like prototyping, requirements refinement, or early development, has one great advantage: at the very same time we are testing the requirements! Bugs there are sometimes even more frequent than in the application itself, but the cost of fixing them at an early stage is a few hours of BA/analyst work instead of hundreds of hours of DEV work later. In my case, an initial team of 3 QAs who began creating test cases 2 months before the first build saw the light of day was able to reveal more than 500 requirements issues of varying severity: from incorrect warning messages to contradictions in the business process. All of these were easily handled by a team of 3 analysts; compare that to the 40+ DEV team at the active stage of implementation and actual app testing.
- At the same time, I must admit that many of the early test cases were dropped from the final suites. In fact, of the roughly 2.5k test cases created before the first build, only about 1.5k were included in the running suites. Was that really worth it? Well, in addition to the requirements-testing point above, by the time of the first test runs the QA team itself was familiar enough with the functionality that the effort for those same tests at the very start was only about 20-25% higher than it would have been later. Given the complexity of the system and the immovable project release dates (REALLY immovable in our case!), any other approach was effectively impossible, since for a team of any size the project investigation would have taken the same 2-3 weeks. So, in our case, this was the most important part of product investigation.
- Regarding test case format and detail: creating detailed test cases at the requirements refinement stage is certainly not the best choice. In the described case we eventually settled on the following approach: first, a short document describing only the high-level test directions, e.g. "For the document submission form: personal info integrity, availability of attachments for download, and the printed form version are tested using such-and-such sets of test data". This document, which we called a "Test Design" (our PM just liked the name), was then passed to the analysts, who crossed out, corrected, and re-prioritized the described scenarios. Only after that stage (which took about 10-15% of the time of detailed test case creation) was the refined test approach transformed into detailed test cases. This is undoubtedly worth it when the logic is complex and the project is huge. To me, that looks much the same as "map the use case and data/control flow of the requirement being tested" from the original question!
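One lightweight way to keep such high-level test directions alongside the code is to record them as skipped placeholder tests that get fleshed out after implementation. A minimal sketch using Python's standard `unittest` module, with hypothetical test names based on the document-submission example above:

```python
import unittest

# Hypothetical sketch: high-level "Test Design" directions captured as
# skipped placeholders before implementation. Each placeholder later
# expands into a group of detailed test cases.
class DocumentSubmissionFormDirections(unittest.TestCase):
    @unittest.skip("direction only; detailed steps written after implementation")
    def test_personal_info_integrity(self):
        """Personal info details survive submission unchanged."""

    @unittest.skip("direction only; detailed steps written after implementation")
    def test_attachment_download(self):
        """Attachments are available for download after submission."""

    @unittest.skip("direction only; detailed steps written after implementation")
    def test_printed_form_version(self):
        """The printed form version matches the submitted data."""
```

The skip reasons make it obvious in every test report which directions still await detailed cases, so the "Test Design" never silently goes stale.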
- Do not neglect test environment preparation, or at least its planning, at this stage: it will help you avoid surprises later and save hours or even days of precious time during release week. (Besides, for me that was THE project where the true meaning of "an hour lost at project start is the very same hour you so terribly miss at the release date" finally sank in.)
Perhaps the above is a bit broader than the OP expected, but I hope some of these ideas will be helpful to others as well. Of course, the actual approach to early QA involvement depends on the project, team, complexity, release calendar, etc., so the "test cases before the application" approach is doubtful for small projects with simple logic; for big projects, though, these QA efforts yield a definite overall profit when planned wisely.