Can we avoid some Test Cases?



  • There is a question that has been bothering me a lot while writing test cases. For every test suite, say we have a login window. When writing the test cases, do we need to list out all the possible test cases? Is there any standard or way that we can avoid some basic test cases? (For example, the editability of fields.) There can be many test cases if we extend our imagination, but do we have to list them all? If some of the basic ones can be avoided, who takes that decision in the real-world software development cycle?



  • "When writing the test cases, do we need to list out all the possible test cases?"

    You have a limited amount of time for testing, and if you think about it long enough, you will realize there is no end to the number of test cases. You need to consider as many as you can, but you need to prioritize them according to how hard they are to test and how much pain they cause if they fail (the first sketch after this answer shows one way to score that).

    "Is there any standard or way that we can avoid some basic test cases? (For example, the editability of fields.)"

    I don't know anything about testing standards, but here are some techniques you can use to avoid some basic test cases, using the editability of text fields as an example:

    Don't test every editable field if they are all coded the same way. In manufacturing, when there are too many units to test every one, testers use sampling to infer something about the product's quality. They can do that because every unit is made the same way. A similar idea holds in software testing. If every editable field is implemented in exactly the same way, you may be able to test a small number of fields (or perhaps just a single one) and infer from the results whether the other fields are correct. To draw that inference, you need to know how the software is put together. (The second sketch after this answer shows what this can look like.)

    Spend less effort on editable fields that were unit tested. Sometimes developers test their software themselves before they make it available to testers. You can ask your developers whether (and how) they tested a feature, and use that information to decide how much additional testing you need to do.

    Spend less effort on editable fields that have not changed. If a feature hasn't changed since the last time you tested it, you may not need to test it again. Be careful, though; those fields interact with other components, which in turn interact with other components, and so on.

    Spend less effort on editable fields that don't tend to break. If editability doesn't tend to be a problem, it may be better to spend your time on things that do tend to break.

    None of these techniques are perfect, but when you have limited time, you have to make judgement calls based on the information you have.

    "There can be many test cases if we extend our imagination, but do we have to list them all?"

    I assume this question is more about the quantity of test cases than about whether you should actually list them. If you are curious and you pay attention, you will realize that it is not possible to test (or list) everything.

    "If some of the basic ones can be avoided, who takes that decision in the real-world software development cycle?"

    Ask someone: your manager or a co-worker. If there is no one to ask, you get to decide. Software changes constantly. To do a good job, you need to keep track of how it is changing and which aspects of it matter most. It is not easy, and none of us do it perfectly. Gather as much information as you can absorb, make deliberate decisions, and when you fail, try to learn from your mistakes. That is what the rest of us try to do.
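    To make that prioritization concrete, one common heuristic is to score each case by how likely it is to fail and how much damage a failure would do, then run the riskiest cases first and drop the tail when time runs out. Here is a minimal Python sketch of that idea; the TestCase fields, the example cases, and the 1-5 scales are assumptions for illustration, not a standard tool:

        # Minimal sketch of risk-based test prioritization.
        # The fields and 1-5 scales here are illustrative assumptions.
        from dataclasses import dataclass

        @dataclass
        class TestCase:
            name: str
            failure_likelihood: int  # 1 (rarely breaks) .. 5 (breaks often)
            failure_impact: int      # 1 (minor annoyance) .. 5 (blocks users)

            @property
            def risk(self) -> int:
                # A common heuristic: risk = likelihood x impact.
                return self.failure_likelihood * self.failure_impact

        cases = [
            TestCase("login with valid credentials", 2, 5),
            TestCase("login field is editable", 1, 3),
            TestCase("password masking", 1, 2),
        ]

        # Run the riskiest cases first; cut the tail when time runs out.
        for case in sorted(cases, key=lambda c: c.risk, reverse=True):
            print(f"{case.risk:>2}  {case.name}")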
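    And here is what sampling identically-built fields can look like in an automated suite. This is a hedged sketch using pytest's parametrize; the LoginPage object, its get_field and is_editable methods, and the stub fixture are hypothetical stand-ins for whatever UI driver you actually use:

        # Sketch of sampling: test a few fields, infer the rest.
        # LoginPage/get_field/is_editable are hypothetical; the stubs
        # stand in for a real UI driver (e.g. Selenium).
        import pytest

        class _StubField:
            def is_editable(self) -> bool:
                return True  # a real test would query the UI here

        class _StubLoginPage:
            def get_field(self, name: str) -> _StubField:
                return _StubField()

        @pytest.fixture
        def login_page() -> _StubLoginPage:
            return _StubLoginPage()

        ALL_FIELDS = ["username", "password", "email", "phone", "address"]

        # If every field is built from the same widget, testing a small
        # sample lets you infer that the rest behave the same way.
        SAMPLED_FIELDS = ["username", "address"]

        @pytest.mark.parametrize("field_name", SAMPLED_FIELDS)
        def test_sampled_field_is_editable(login_page, field_name):
            field = login_page.get_field(field_name)
            assert field.is_editable()

    The point is the shape: one parametrized test over a small sample, justified by knowing that all the fields share an implementation.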


