Pairwise Testing: How do you identify equivalent software code?
As some of you might've already noticed, I'm trying to find shortcuts in software testing with the pairwise method.
Last Friday I stumbled upon some modules under test which all have some basic features in common. When I asked some colleagues (from QA) whether the code for a common feature is the same across all modules or not, I got five different opinions. Some of them said developers are likely to copy/paste when programming common features; others said you can't drop any tests because in general more than one developer codes a common feature (e.g. one writes the code of that feature for module A and another one for module B), or sometimes two developers code one feature for one module (e.g. one is ill and another one takes over his work).
My feeling tells me there are unnecessary tests being done, because I think some of the testers (if not most of them) are already skipping tests of the kind I described above (though they would never admit it, of course).
Creating a test plan with a pairwise tool wouldn't be a problem. My problem is that I need some kind of proof, or at least a very convincing argument, that we can skip most of these tests. I already asked a similar question and I'm waiting for a developer in order to carry out an experiment for that case. In this particular case, though, I think it's hard to do an experiment: even if you take two modules and let two developers each code a common feature for one of them (which would be a best-case scenario without any interaction between developers), and even if pairwise testing did a good job in that case, you still couldn't be sure it does in general, could you?
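To make the "pairwise tool" part concrete, here is a minimal sketch of greedy all-pairs test-case generation in Python. The parameter names and values (OS, browser, language) are hypothetical stand-ins, not taken from the question, and a real pairwise tool (e.g. PICT or AllPairs) uses more sophisticated algorithms; this only illustrates why a pairwise plan is smaller than exhaustive testing.

```python
import itertools

def pairs_of(combo):
    """All (parameter index, value) pairs exercised by one test case."""
    return {((i, combo[i]), (j, combo[j]))
            for i, j in itertools.combinations(range(len(combo)), 2)}

def all_pairs(parameters):
    """Greedy all-pairs generator: repeatedly pick the full combination
    that covers the most still-uncovered pairs of parameter values."""
    uncovered = set()
    for combo in itertools.product(*parameters):
        uncovered |= pairs_of(combo)
    cases = []
    while uncovered:
        best = max(itertools.product(*parameters),
                   key=lambda c: len(pairs_of(c) & uncovered))
        cases.append(best)
        uncovered -= pairs_of(best)
    return cases

# Hypothetical parameters: 2 x 2 x 2 = 8 exhaustive combinations.
params = [["win", "linux"], ["firefox", "chrome"], ["en", "de"]]
for case in all_pairs(params):
    print(case)  # 4 cases are enough to cover every pair of values
```

With three two-valued parameters, four cases cover every pair, half of the exhaustive eight; the savings grow quickly with more parameters and values.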
I once read this paragraph about test repetition in the Django (Python framework) tutorial, and it definitely convinced me:
When testing, more is better: It might seem that our tests are growing out of control. At this rate there will soon be more code in our tests than in our application, and the repetition is unaesthetic, compared to the elegant conciseness of the rest of our code.

It doesn't matter. Let them grow. For the most part, you can write a test once and then forget about it. It will continue performing its useful function as you continue to develop your program.

Sometimes tests will need to be updated. Suppose that we amend our views so that only Questions with Choices are published. In that case, many of our existing tests will fail - telling us exactly which tests need to be amended to bring them up to date, so to that extent tests help look after themselves.

At worst, as you continue developing, you might find that you have some tests that are now redundant. Even that's not a problem; in testing redundancy is a good thing.

As long as your tests are sensibly arranged, they won't become unmanageable. Good rules-of-thumb include having:

- a separate TestClass for each model or view
- a separate test method for each set of conditions you want to test
- test method names that describe their function
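The rules of thumb in that quote can be sketched with Python's standard unittest module (in Django you would subclass django.test.TestCase instead, but the structure is the same). The function under test here, normalize_name, is a hypothetical stand-in for a "common feature" shared by several modules:

```python
import unittest

# Hypothetical function standing in for a common feature under test.
def normalize_name(raw):
    """Trim surrounding whitespace and collapse internal runs of spaces."""
    return " ".join(raw.split())

class TestNormalizeName(unittest.TestCase):
    # One test method per set of conditions, named after what it checks.
    def test_strips_surrounding_whitespace(self):
        self.assertEqual(normalize_name("  Alice  "), "Alice")

    def test_collapses_internal_spaces(self):
        self.assertEqual(normalize_name("Alice   Smith"), "Alice Smith")

    def test_empty_input_gives_empty_string(self):
        self.assertEqual(normalize_name(""), "")

if __name__ == "__main__":
    unittest.main()
```

Even if two modules ship this same feature, each keeps its own test class; the redundancy is cheap, exactly as the quote argues.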
For me, test repetition is never a problem. The main problem is functional code repetition, and you can manage that with tools like Sonar.
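As a rough illustration of what duplication detectors like Sonar look for, the similarity of two snippets can be estimated with Python's standard difflib. The two load_config snippets below are hypothetical examples of the same feature implemented in two modules, possibly by copy/paste; real tools compare token streams or syntax trees rather than raw text:

```python
import difflib

# Hypothetical implementations of one common feature in two modules.
module_a = """
def load_config(path):
    with open(path) as f:
        return f.read().splitlines()
"""

module_b = """
def load_config(file_path):
    with open(file_path) as fh:
        return fh.read().splitlines()
"""

ratio = difflib.SequenceMatcher(None, module_a, module_b).ratio()
print(f"similarity: {ratio:.2f}")  # a ratio near 1.0 suggests near-duplicate code
```

A high ratio would support the colleagues who suspect copy/paste, while a low one would back those who expect independent implementations.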