Using a system test framework that handles test dependencies
To explain better, if test B depends on test A, then
- if test A fails, then test B should not be run and should fail implicitly
- if test A produces some output, then it should be available for test B
I can see why this would be bad for Unit Tests, which should run in isolation, but I am thinking about System Tests, where these scenarios are common and unavoidable, and where this could be a better alternative to long tests that require many steps. I am imagining something that works like a build tool such as Ant, where targets play the role of tests.
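To make the semantics concrete, here is roughly the kind of declaration I have in mind. The syntax below happens to be TestNG's `dependsOnMethods`, borrowed purely to illustrate the shape; the class, the method names, and the order ID are invented:

```java
import org.testng.Assert;
import org.testng.annotations.Test;

public class OrderWorkflowTest {

    // "Output" of testCreateOrder, reused by the test that depends on it
    private String orderId;

    @Test
    public void testCreateOrder() {
        // ... drive the system under test, then capture what later tests need
        orderId = "ORD-12345";
    }

    // If testCreateOrder fails, this test is not run; it is reported as skipped
    @Test(dependsOnMethods = {"testCreateOrder"})
    public void testCancelOrder() {
        Assert.assertNotNull(orderId);
        // ... continue the workflow using the output of the earlier test
    }
}
```

Because both methods run on the same test class instance, the field set by the first test is still available to the second, and a failure in the first turns the second into a skip rather than a second misleading failure.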
Is this a terrible idea? If not, are there testing frameworks that support this test style? If it is, what problems am I overlooking?
There are tools that support this. SmartBear's TestComplete does - you can configure it to continue after a failure or to stop after a failure - and you'll find some details below about how the team at my previous employer handled dependent tests with TestComplete.
I don't consider it a terrible idea - I've worked mostly with large, complex applications where it's simply not feasible to make each test independent.
Here are some of the reasons you'd want to use a sequence of dependent tests (a sketch of the shared-setup pattern follows the list):
- You have a long sequence of events that would have to be repeated for every test (such as restoring a known database, installing the target build of the application, running the application, logging on with specific credentials, performing the configuration you need... all before you get to the actual test that interests you)
- You have a limited set of resources to run automated tests in parallel and a limited timeframe in which to run your automated tests
- You have a long sequence of cleanup events after each test or tests (exit the application, run database validation, back up application data to an archival location, export application logs to an archival location...)
- You have a large number of tests where the setup steps are identical or nearly identical
- You have a large number of tests that are logically dependent on prior actions (practically anything that operates as a process flow can fall into this bucket)
- You have an application that has a significant startup/initialization time (I worked with one that could take over a minute to start and load everything - and that was with a relatively small data set. Large data sets could take much longer)
- You need a lot of data to perform a single test due to the long sequence of preparation and/or cleanup events, and much of this data is the same for other tests.
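To show how those long setup and cleanup sequences collapse into a single shared routine once the tests in a suite are allowed to depend on each other, here is a minimal sketch in the `dependsOnMethods` style from the question. The helper methods (`restoreDatabase()`, `installBuild()` and so on) are placeholders for whatever your environment actually needs:

```java
import org.testng.annotations.AfterSuite;
import org.testng.annotations.BeforeSuite;
import org.testng.annotations.Test;

public class TransactionSuiteTest {

    // The expensive preparation runs once for the whole suite, not once per test.
    @BeforeSuite
    public void prepareEnvironment() {
        restoreDatabase();   // placeholder: restore the known database backup
        installBuild();      // placeholder: install the target build under test
        startApplication();  // placeholder: launch and log on with test credentials
    }

    @Test
    public void testConfigureAccount() { /* ... */ }

    // Later tests depend on earlier ones, so one early failure skips the rest
    // of the chain instead of producing a cascade of misleading failures.
    @Test(dependsOnMethods = {"testConfigureAccount"})
    public void testCreateTransaction() { /* ... */ }

    @Test(dependsOnMethods = {"testCreateTransaction"})
    public void testSettleTransaction() { /* ... */ }

    // The expensive cleanup also runs once, after the whole suite.
    @AfterSuite(alwaysRun = true)
    public void tearDownEnvironment() {
        archiveLogsAndData(); // placeholder: export logs, back up data
        exitApplication();    // placeholder: close the application
    }

    private void restoreDatabase() { }
    private void installBuild() { }
    private void startApplication() { }
    private void archiveLogsAndData() { }
    private void exitApplication() { }
}
```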
I've done a lot of work with applications where every single one of these points is true - not surprisingly, we used a lot of dependent tests. In our case, a 3-hour configuration script ran once a week to set everything up, because that was more efficient than each of the 30-some test suites running up to 90 minutes of configuration on a daily basis, and the focus of our automation was transaction handling rather than configuration. Each test suite then ran a common startup routine that copied the database and file backups from a known location, restored the database backup, unzipped the file backup, installed the target build of the applications being tested, set the specialized permissions required, and then ran the application. This usually took about 40 minutes.
Even with a farm of virtual and physical machines, and the regression scripts made as efficient as possible, it wasn't possible to run one suite per machine, so the team relied on dependent tests and defined which kinds of failures were fatal - for fatal failures, we configured the scripts to stop, perform the teardown, and send an email before closing completely.
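A rough sketch of that kind of fatal-failure handling, again using the question's TestNG-style annotations (the "fatal" group and `notifyTeamOfFatalFailure()` are placeholders I've made up; TestComplete itself expresses this through its stop-on-error project settings rather than code):

```java
import java.util.Arrays;
import java.util.concurrent.atomic.AtomicBoolean;

import org.testng.ITestResult;
import org.testng.SkipException;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;

// Base class for a suite of dependent tests. A failure in any test tagged
// with the illustrative "fatal" group causes every remaining test to be
// skipped rather than executed.
public abstract class BaseSuiteTest {

    private static final AtomicBoolean fatalFailureSeen = new AtomicBoolean(false);

    @BeforeMethod(alwaysRun = true)
    public void skipAfterFatalFailure() {
        if (fatalFailureSeen.get()) {
            throw new SkipException("A fatal failure occurred earlier in the run");
        }
    }

    // TestNG injects the result of the test method that just ran.
    @AfterMethod(alwaysRun = true)
    public void recordFatalFailure(ITestResult result) {
        boolean failed = result.getStatus() == ITestResult.FAILURE;
        boolean fatal = Arrays.asList(result.getMethod().getGroups()).contains("fatal");
        if (failed && fatal && fatalFailureSeen.compareAndSet(false, true)) {
            notifyTeamOfFatalFailure(result); // placeholder: teardown and email go here
        }
    }

    private void notifyTeamOfFatalFailure(ITestResult result) {
        // e.g. archive logs, then send the notification email with your mail library
    }
}
```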
Microsoft's CodedUI framework doesn't support this style at all. I don't know enough about any other frameworks to give an opinion.