How to write Test Strategy

  • How do I write a test strategy in a simple way, referring to the Functional Requirement Document and the Business Requirement Document? What are the bullet points?

  • A simple test strategy can only guarantee a simple assessment of quality.

    According to James Bach:

    The purpose of a test strategy is to clarify the major tasks and challenges of the test project.

    You can (and probably should) expand "tasks and challenges" to mean "goals, activities, deliverables, constraints, risks, and dependencies." Given that, your test strategy should answer his later question:

    How will you cover the product and assess quality?

    If all the business requires is a simple assessment of quality, by all means, stick to a simple strategy. I've found that these cases are the exception, though.

    1. What's the goal?

    Start by committing to a description of goals of this effort. Note that there are valid business reasons for saying something like: "The purpose of this testing effort is primarily to verify that the same tests we've manually run for the past six months still fail to detect any defects in the product." It's not a grand goal, but neither is it a deceptive, corner-cutting goal, if everyone agrees that that really is the goal.

    Your project may require serious compromise. Here are some other goals that may sound like cheating, but that I've seen used to get everyone on board with the testing effort. (The wording may show some dramatic license, but no dishonesty.)

    • "Robotically execute the predefined list of manual tests so that we can report compliance with obscure government regulations before an immovable date."
    • "Verify only the newest changes, and the lightest possible sampling of legacy features, to show that this week's release is better than the one that broke customers last week and to give us the time to do it right."
    • "Use the full set of automated tests and the P1 and P2 manual tests to ensure backwards compatibility at all costs, even if it means that we pull that new feature for the second time, in favor of low-risk stability fixes."

    I focus here on "emergency" situations, because they emphasize the need to balance business and functional requirements. Each of these one-line descriptions briefly addresses both parts of the main question: "How will you cover the product and assess quality?" They also imply responses to the functional (what will you verify?) and business (for what purpose?) requirements.

    2. What will you do to meet the goal?

    Here's where you describe the balance between running existing manual and automated tests, performing exploratory testing, and investing (hopefully) in new automation. Back your decisions up with data, and projected costs in time and people. Weigh those costs against the need to plan for the future. Questions you want to consider in this section include:

    • How long will it take for new automation to pay for itself?
    • How productive are our manual tests at finding defects?
    • How experienced are our testers with the product?
    • How much of the product is affected by these changes?
    • Can we rely on existing and/or new dev unit tests?
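    The first of those questions can often be answered with a back-of-the-envelope calculation. Here's a minimal sketch in Python; every number in it is hypothetical, and your own build costs and run frequencies will differ:

    ```python
    # Rough break-even estimate for new test automation.
    # All figures below are hypothetical placeholders.

    def automation_break_even(build_hours, manual_hours_per_run,
                              automated_hours_per_run, runs_per_month):
        """Return months until the automation investment pays for itself."""
        savings_per_run = manual_hours_per_run - automated_hours_per_run
        monthly_savings = savings_per_run * runs_per_month
        return build_hours / monthly_savings

    # Example: 120 hours to build, saving 9.5 tester-hours per run, 8 runs/month.
    months = automation_break_even(build_hours=120,
                                   manual_hours_per_run=10,
                                   automated_hours_per_run=0.5,
                                   runs_per_month=8)
    print(round(months, 1))  # -> 1.6
    ```

    If the break-even point lands after the project ships, that's a strong argument for sticking with the existing manual tests.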

    Here, also, is where you most directly address the Functional requirements. For example:

    • What types of tests are most likely to reveal defects introduced with these changes?
    • Which areas are most likely to have defects?
    • If this module failed, or if this requirement were incorrectly implemented, what would be the impact on the user?

    If the original planning for this product went well, most of your Business requirements should be covered by Functional requirements. But there are always some that aren't necessarily Functional, such as:

    • What's the performance delta over last release?
    • Are we vulnerable to data loss or theft in some new way?
    • Does the new stuff look mostly like the old stuff?
    • Will customers actually be able to use this for the intended purpose?
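    The performance-delta question, for instance, can be answered with a simple comparison against the previous release's baseline. A minimal sketch (operation names, timings, and the 5% threshold are all made up for illustration):

    ```python
    # Hypothetical performance-delta check: compare this release's measured
    # response times against the last release's baseline and flag regressions.

    baseline_ms = {"search": 120, "checkout": 450}   # last release (made up)
    current_ms = {"search": 130, "checkout": 430}    # this release (made up)

    THRESHOLD_PCT = 5  # tolerate up to 5% slowdown before flagging

    for operation, old in baseline_ms.items():
        new = current_ms[operation]
        delta_pct = 100 * (new - old) / old
        flag = "REGRESSION" if delta_pct > THRESHOLD_PCT else "ok"
        print(f"{operation}: {delta_pct:+.1f}% {flag}")
    ```

    The point isn't the tooling; it's that a Business requirement like "no slower than last release" becomes testable once you write down a baseline and a tolerance.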

    You could spend forever on this section. Keep focused on the goal, and stop when it's clear how you'll meet it.

    3. How will you show that you met the goal?

    This step tends to get skipped in smaller organizations, or in big ones that might view QA as an enemy instead of a partner. You need to have a way to demonstrate the results of all your hard work. It doesn't matter how much you got accomplished if you can't show that work to those who make the final decisions. Again, be frank here about what you can measure and what you can't. Some helpful metrics:

    • Code coverage
    • Requirement coverage, mapped to test cases
    • Run, Pass, and Fail results for your sets of manual and automated tests
    • Defects found while still in active development (earlier is better!)
    • Defects found after code freeze
    • Severity and priority of open defects over time
    • Timeline comparing new code changes and new defects found
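    Requirement coverage in particular falls out of a simple traceability mapping from requirement IDs to test cases. A minimal sketch (all IDs and test names are invented):

    ```python
    # Minimal requirement-coverage sketch: map requirement IDs to the test
    # cases that exercise them, then report which requirements are uncovered.
    # All requirement IDs and test names here are hypothetical.

    traceability = {
        "REQ-001": ["test_login_success", "test_login_bad_password"],
        "REQ-002": ["test_export_csv"],
        "REQ-003": [],  # no tests yet -- a gap the strategy should call out
    }

    covered = [req for req, tests in traceability.items() if tests]
    coverage_pct = 100 * len(covered) / len(traceability)

    print(f"Requirement coverage: {coverage_pct:.0f}%")  # -> 67%
    for req, tests in traceability.items():
        if not tests:
            print(f"UNCOVERED: {req}")
    ```

    Even at this toy scale, the useful output is the list of uncovered requirements, not the percentage; the number alone invites exactly the kind of gaming the next paragraph warns about.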

    None of these are enough by themselves; they all work together. Also, here are a couple to avoid, if at all possible. These are either useless, or directly harmful to the organization:

    • Number of tests executed
    • Any "per-tester" metrics (defects found, hours spent, tests authored, etc.)

    4. What can you use to get the job done?

    These are the things that are under your control. What can you depend on having available to you?

    Questions that need answering here include:

    • How much time and/or money do we have?
    • How many people?
    • What hardware and software are available?
    • How much time will we need to set up, or to learn the product or tools?

    This is also a good place to call out your understanding of what is most flexible on this project: schedule, scope, or resources?

    5. What's outside your control, but has to happen anyway?

    Most projects need at least a few things that they can't get themselves. At a minimum, you'll need something to test!

    Here is where you define your pre-conditions for each activity and deliverable you listed above. If there's anything missing from those sections that you have to have, it goes here. Other candidates:

    • "We can't start testing module X until we have (or know) ..."
    • "We don't have anyone on the team who knows about technology Y. This prevents us from ..."
    • "Bob is out during milestone Z. We'll need additional help or a change in scope there."

    6. What might, realistically, prevent you from meeting the goal?

    Dependencies are a good place to start looking; what happens if you don't get what you need, or if the planned thing falls through? What events (scheduled or otherwise) have pulled resources away from previous efforts? Is any of the planning based on staff that haven't been hired, or assets not yet acquired? How often have requirements or schedule changed in the past?

    Once you've got a good list of these, come up with a brief response plan for each. It will almost always mean a change in either schedule or scope. Sometimes extra resources may be brought on, but it's rarely as useful as anyone wants it to be.


    The Test Strategy document is your very best tool for communicating your hard work to those outside the organization. A simple bullet list might be all you need, but it's more likely that you're passing up an opportunity.

    Know your goal. Be clear and open on your priorities and plans, and be flexible when they must change.
