Checking two related functionalities in the same test case
Each of our end-to-end test cases checks two aspects/functionalities of message processing:
- Whether the message has been processed correctly (resulted in a correct action of the system).
- Whether information about the processing has been correctly logged to the DB (for further reporting purposes).
We decided to verify both aspects in a single test case, because the end-to-end tests take a long time (this is a distributed system, and each message traverses a number of slower, legacy systems).
Now, when the assertions for the first functionality pass but those for the second fail, the distinction is lost: the whole test case is marked as failed. The message sent to management is that 100% of test cases failed, ignoring the fact that the latter functionality is less important to our business, because customers won't care about correct reports when the processing itself is incorrect.
How would you solve this?
- Create separate test cases for different aspects? (more time to run the tests) OR
- Configure TestNG to create separate reports for each functionality/group of assertions? (if so, then how?)
Update: Having correct reports when processing is incorrect is useful only to us, for troubleshooting. That is another reason why we decided to combine both checks in one test.
You did not provide a lot of detail about the relationship between the logging system and the processing system, so my answer is going to be general/vague. I would try the following, in this order:
Educate your management. You have a good reason for how you structured your tests. Explain what they can and cannot infer from a 100% failure rate. Explain the cost of structuring the tests in a different way.
Reorganize the tests even if they need to run twice as long. You didn't actually say whether running twice as long is just an inconvenience to you or an impediment to others in your organization as well. The next two alternatives will almost certainly be more complicated than this one; you need to decide which is more painful: the extra run time of this alternative or the additional coding time and maintenance cost of the next two.
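If you do reorganize, TestNG's suite file lets you run the two aspects as separate `<test>` blocks, and each block gets its own section in the default report, which directly addresses the "separate reports per functionality" idea. A sketch, assuming your methods are tagged with hypothetical groups `processing` and `logging` and live in a hypothetical class `com.example.MessageProcessingTest`:

```xml
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="End-to-end suite">
  <!-- Each <test> block is reported separately in the TestNG report. -->
  <test name="Message processing">
    <groups><run><include name="processing"/></run></groups>
    <classes><class name="com.example.MessageProcessingTest"/></classes>
  </test>
  <test name="DB logging">
    <groups><run><include name="logging"/></run></groups>
    <classes><class name="com.example.MessageProcessingTest"/></classes>
  </test>
</suite>
```

Note this still runs the slow end-to-end scenario once per block unless you share state between the groups, so it doesn't by itself avoid the doubled run time.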
Structure your logging tests so that, when the processing aspect fails, the logging tests report errors rather than failures. I'm not sure this obviates the need to educate your management, but at least you will have a way to distinguish "logging not tested" conditions from "logging tests failed" conditions.
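TestNG supports this distinction out of the box: if the logging test declares `dependsOnMethods` on the processing test, TestNG marks it SKIPPED (not FAILED) whenever the processing test fails, and you can also throw `SkipException` to mark a test as skipped explicitly. A sketch, where the class, method, and helper names are hypothetical stand-ins for your real end-to-end checks:

```java
import org.testng.SkipException;
import org.testng.annotations.Test;
import static org.testng.Assert.assertTrue;

public class MessageProcessingTest {

    // Hypothetical helpers standing in for the real end-to-end checks.
    private boolean processedCorrectly() { return true; }
    private boolean loggedCorrectly()    { return true; }
    private boolean canReachDb()         { return true; }

    @Test(groups = "processing")
    public void messageIsProcessedCorrectly() {
        assertTrue(processedCorrectly(), "message was not processed correctly");
    }

    // If messageIsProcessedCorrectly fails, TestNG reports this method
    // as SKIPPED rather than FAILED, so the report separates
    // "logging not tested" from "logging tests failed".
    @Test(groups = "logging", dependsOnMethods = "messageIsProcessedCorrectly")
    public void processingIsLoggedToDb() {
        if (!canReachDb()) {
            // Explicitly mark as "not tested" rather than failed.
            throw new SkipException("DB unavailable; logging not verified");
        }
        assertTrue(loggedCorrectly(), "processing was not logged to the DB");
    }
}
```

With this structure the management summary would show one failure and one skip instead of two failures, which matches the relative business importance you described.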
Find a way to test logging (aspect 2) without actually doing any processing (aspect 1). You did not provide enough detail to indicate whether this is possible, and presumably if it were, you would have explored this already. Still, if the first three alternatives are not doable, this is what you are left with.