How do you manage dependencies between automated UI tests?



  • I am currently helping out with the TodoMVC project by adding automated tests. I have been 'exercising' my tests by trying them out on new submissions, such as this one recently. One problem I have discovered is that the test results often look worse than they actually are because of dependencies between tests. Since these are automated UI tests, if, for example, the process of adding items to the todo list has a small flaw, the knock-on effect is that numerous other tests fail indirectly, even if the feature they are trying to test is actually implemented correctly. I don't think it looks great to have 14 failures when there are really only 3 underlying failures, and it doesn't help people diagnose and solve these issues. What might be better is that if a prerequisite test fails, subsequent tests are not executed (one way to express that is sketched just below).

    In a general / conceptual sense, how do you manage dependencies between tests? Do you acknowledge these dependencies in any way?
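One concrete way to express this "skip the dependents when a prerequisite fails" idea is shown below as a minimal sketch. It assumes a Playwright-based runner (@playwright/test); the URL, selectors, and grouping are placeholders, not taken from the actual TodoMVC test suite. In Playwright's 'serial' mode, once one test in the group fails, the rest are skipped, so a broken "add item" flow reports as one failure plus skips instead of a cascade of indirect failures.

```typescript
// Minimal sketch, assuming a Playwright-based suite (@playwright/test).
// The URL and selectors below are illustrative placeholders.
import { test, expect } from '@playwright/test';

test.describe('todo list', () => {
  // In serial mode, a failure skips the remaining tests in this group,
  // so dependent tests do not pile up as indirect failures.
  test.describe.configure({ mode: 'serial' });

  test('can add an item', async ({ page }) => {
    await page.goto('http://localhost:8080/'); // hypothetical local TodoMVC URL
    await page.getByPlaceholder('What needs to be done?').fill('buy milk');
    await page.keyboard.press('Enter');
    await expect(page.locator('.todo-list li')).toHaveCount(1);
  });

  test('can complete an item', async ({ page }) => {
    // Skipped automatically if 'can add an item' failed,
    // instead of failing for a reason unrelated to completion.
  });
});
```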



  • Whenever possible, instead of managing dependencies, I work very hard to eliminate them, or at least to reduce them. Another high-priority goal for me is to eliminate any technology that is not directly involved in the feature I'm testing. Every additional element of technology offers possibilities for the test to fail independently of whether the system implements the feature correctly.

    For through-the-UI tests, I like each test to use the GUI only for the GUI feature the test is testing. Think of a test in three rough parts:

    1. Set up preconditions.
    2. Invoke the feature being tested.
    3. Verify the results produced by the system.

    If my test is testing that the feature can be invoked through the GUI, then I'll go through the GUI to invoke the feature, but I'll skip around the GUI to set up the preconditions and verify the results: I'll call some lower-level API, or stuff data into the database, or something similar. If my test is testing that the system displays specific results in some specific way through the GUI, then I'll go through the GUI to verify the results, but I'll skip around the GUI to set up the preconditions and invoke the feature. And if my test is not testing anything specifically about the GUI, I'll eliminate the GUI from the test altogether and make API calls into the system (see the first sketch below).

    Further, when I am testing GUI stuff, I like to eliminate the system. Instead, I provide a mock system that I can control directly with my test code. Then my test verifies that when the user fiddles with some widget, the GUI makes the right API call, or that when the system gives a response, the GUI displays it correctly, to the limited extent that can be verified with automation (see the second sketch below).

    I like to have very few end-to-end automated tests: just enough to discover whether the parts are wired together correctly. For more complex things, like following data through the system, have people do the tests. People are much less sensitive to the minor technology burps that would cause an automated end-to-end test to fail inadvertently.
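A sketch of the three-part structure described above, again assuming a Playwright-based runner. TodoMVC implementations keep their data in localStorage, so "skipping around the GUI" here means seeding and reading storage directly; the storage key ('todos-vanillajs'), the record shape, and the selectors are assumptions, not taken from any real suite.

```typescript
// Sketch: use the GUI only for the feature under test; go around it for
// setup and verification. Storage key, data shape and selectors are assumed.
import { test, expect } from '@playwright/test';

test('completing a todo through the UI', async ({ page }) => {
  // 1. Set up preconditions around the GUI: seed the store before the app loads.
  await page.addInitScript(() => {
    localStorage.setItem(
      'todos-vanillajs',
      JSON.stringify([{ id: 1, title: 'buy milk', completed: false }])
    );
  });
  await page.goto('http://localhost:8080/'); // hypothetical local TodoMVC URL

  // 2. Invoke the feature being tested through the GUI, because the GUI is the point.
  await page
    .locator('.todo-list li', { hasText: 'buy milk' })
    .getByRole('checkbox')
    .check();

  // 3. Verify the results around the GUI: read the store, not the screen.
  const stored = await page.evaluate(() =>
    JSON.parse(localStorage.getItem('todos-vanillajs') ?? '[]')
  );
  expect(stored[0].completed).toBe(true);
});
```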
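And a sketch of the "test the GUI against a mock system" idea, using Playwright's network interception as the mock. TodoMVC has no server, so imagine a GUI that posts new items to a backend; the '/api/todos' endpoint and payload shape are invented purely to illustrate verifying what the GUI asks the system to do.

```typescript
// Sketch: replace the system behind the GUI with a mock the test controls,
// then assert on the call the GUI makes. Endpoint and payload are assumed.
import { test, expect } from '@playwright/test';

test('adding a todo sends the right API call', async ({ page }) => {
  let captured: unknown;

  // Stand in for the real system with a canned response the test controls.
  await page.route('**/api/todos', async (route) => {
    captured = route.request().postDataJSON();
    await route.fulfill({ status: 201, contentType: 'application/json', body: '{}' });
  });

  await page.goto('http://localhost:8080/'); // hypothetical local URL
  await page.getByPlaceholder('What needs to be done?').fill('buy milk');
  await page.keyboard.press('Enter');

  // The assertion is about the GUI's behaviour (what it asked the system to do),
  // not about any backend state.
  expect(captured).toMatchObject({ title: 'buy milk' });
});
```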


