Approach to writing manual test cases/scripts
I am in the process of writing manual test scripts for a system in the System Testing phase. A comprehensive set of use cases has been prepared by the Business Analysts on the team, and from these, test scenarios and test conditions have been derived.

The system is a multi-component request processing system where a user at one end can request a service from a device at the other end. The system processes these requests (authenticates the user, authorises the request, routes it to the correct device, etc.). There are also different data variables involved - different service users, different device alerts, different device statuses, etc. - and these will also need to be considered when testing.

So what is the best way of writing the scripts? Each use case has multiple steps in its flow. Do we write a test for each step in the flow, or for the whole use case? I wrote tests for each step in the flow, but in the end it looked very repetitive. I could have written one test for the whole use case and just changed the test data (but is this really one test?).

An example use case would be 'Process request'. A sample step in the flow is 'Acknowledge request from user' and another would be 'Log request'. Should a separate test be written for each of these parts of the flow, or one test for 'Process request'?

I don't know if I'm looking in the wrong places, but I usually find lots of articles and docs on test planning, test preparation, etc., and little on writing effective manual test cases/scripts. Thanks
In my experience with documenting system tests, I've found a multi-layered approach works. I really like Microsoft Test Manager for this because of two things: the ability to define input parameters for manual tests, and the concept of shared test steps which can be used by any test case. You don't mention whether you're using a test case tool, Word, Excel or some other method of documentation, but you can adapt what I'm describing to any tool.

Work top-down: I start at the use cases, or at the acceptance criteria if there are no test scenarios or test conditions defined, and treat each scenario as a high-level test in a larger test suite.

Identify and extract repetitions: In any moderately complex system there will be a lot of repeated actions, either with identical data or with near-identical data. I aim to extract anything that is going to happen more than once into its own unit (either a test that gets linked as a prerequisite to another test, or as shared test steps) and nest as deep as I have to. For instance, with login credentials, I'll define a test or shared steps for "successful admin login" which contains the admin login credentials, a link to the "successful login" shared steps, and the expected view or data returned (this may be in the generic successful login). The "successful login" contains even more generic "login" shared steps with no indication of what happens on completion - that's handled by the success/fail test steps. It's effectively refactoring for DRY in your manual test cases.

Define data separately: I prefer to keep my test data in a separate source from the test cases, and reference it. The precise form of the data will vary, but I've used databases, zipped files containing data sets, text files, XML... whatever works. The key thing is that it's something I can reference from any test case and reuse indefinitely.

"Just in time" detailing: Generally speaking, I don't go into detail in my test cases until I need to.
When I'm initially creating them, I'll keep it basic - a test case might be "logged in admin requests A from device Z, all OK" (this usually corresponds closely to any identified scenarios). Then I'll break this down into a series of smaller tests: "1. log in as admin; 2. send request A; 3. check response from device Z". From there, I'll reference the more detailed repeated items - so, for instance, I'll have a reference to the exact structure of the request being sent. I rarely, if ever, go as far as "click this button" type test steps.

Leave room for exploration: Rather than scripting manual tests in extreme detail, I prefer to work at a level that allows the tester to choose different ways of entering data or performing actions. I'll usually mention critical actions to cover (such as "check each defined keyboard shortcut triggers the defined action in document X page Y"). This works best when testers are familiar with the application and/or are experienced testers.

Steel thread first: This should go without saying, but still... My priority is always to start with the functionality defined in the use cases, and make sure that works under the conditions which are expected and/or defined. I don't even consider anything outside that boundary until I know that much is working. Often I don't detail other tests until after I've got the steel thread sorted (because there is rarely time to test as completely as I'd like, and if I had a dollar for every time I've been unable to test anything outside the steel thread, I'd have a whole lot more money than I do).

Remember the goal: The reason for your tests is to provide the business people running the project with information about the state of the project. Testers aren't the gatekeepers - we don't have the perspective. Your tests should be designed to give as much information as possible about the project's state in the shortest possible time.
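To make the "extract repetitions" and "define data separately" ideas concrete, here's a rough sketch in Python (all names, credentials and steps are invented for illustration - this is just one way to model shared steps and external data, not a prescribed tool or format). Generic "login" steps are reused by "successful login", which is reused by "successful admin login", and a high-level scenario composes them with its own steps:

```python
# Hypothetical sketch: "shared test steps" as reusable functions, and
# "data defined separately" as a table referenced by key. All values
# below are invented examples, not taken from any real system.

# Test data lives in its own structure, referenced from any test case.
CREDENTIALS = {
    "admin": {"user": "admin", "password": "s3cret", "view": "Admin dashboard"},
}

def login_steps(user, password):
    """Generic 'login' shared steps - no assumption about the outcome."""
    return [
        "Open the login page",
        f"Enter username '{user}' and password '{password}'",
        "Submit the login form",
    ]

def successful_login_steps(user, password, expected_view):
    """'Successful login' = generic login steps plus the success check."""
    return login_steps(user, password) + [
        f"Verify that the '{expected_view}' view is displayed",
    ]

def successful_admin_login_steps():
    """'Successful admin login' = successful login with the admin data."""
    c = CREDENTIALS["admin"]
    return successful_login_steps(c["user"], c["password"], c["view"])

def process_request_scenario(request, device):
    """A high-level scenario composes shared steps with its own steps."""
    return successful_admin_login_steps() + [
        f"Send request {request}",
        f"Check the response from device {device}",
    ]

for step in process_request_scenario("A", "Z"):
    print(step)
```

Changing the admin credentials or the success check in one place then updates every scenario that references those shared steps - the same DRY payoff you get from shared test steps in a test management tool.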
In order to do this, you'll need to identify the most critical scenarios (which can be interesting when you're in an environment where everything is critical - been there...) and prioritize your tests accordingly.
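One lightweight way to keep that prioritization actionable is to record a priority against each scenario and derive the run (or reporting) order from it. A minimal sketch, with invented scenario names:

```python
# Hypothetical sketch: scenarios tagged with a priority (1 = most
# critical), sorted so the most informative tests run first. The
# scenario names below are invented examples.

scenarios = [
    {"name": "Process request - happy path", "priority": 1},
    {"name": "Request to offline device", "priority": 2},
    {"name": "Unauthorised user request", "priority": 1},
    {"name": "Device alert during request", "priority": 3},
]

# Order by priority first, then by name so the order is stable.
run_order = sorted(scenarios, key=lambda s: (s["priority"], s["name"]))

for s in run_order:
    print(s["priority"], s["name"])
```

If time runs out partway through the suite, the tests already executed are the ones that tell the business the most about the project's state.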