To mock or not to mock with 'Automated Acceptance Tests'
I have been reading a lot about automated acceptance testing, but I have not found any information about how to manage external dependencies. Say you have a web application that acts as a UI for various central business applications, which expose services in different technologies. The web application also has a reasonable amount of business logic of its own, but that logic is specific to the application; the company's core business logic is handled entirely by the central services.

The issue is that writing automated acceptance tests against the "real" system leads to two main problems:

1. It's not possible to guarantee that a given dataset in the core systems will remain unaltered (or even still coherent) the next time a test needs it. Adjusting it via direct data injection is not an option (too complex), and adjusting it via service calls is possible but only covers the most common cases, meaning someone will regularly have to inspect the tests to see what happened to the dataset (or determine whether it's actually a code issue).

2. It's not possible to guarantee a stable development environment. Other teams might be testing features of their own, so the central systems become unavailable, or at least unreliable. One option is to run the tests against a more stable environment (QA, preproduction), but if our development depends on a change to a central service, we would lose the "quick feedback" nature of acceptance tests in agile development.

Given these issues, it occurred to me that if the team already tested all the services with automated integration tests, at least for coherence, we could trust the communication layer and the input/output mappings of the services we call. An error in these tests would point clearly at the problem (the connection to the service, or the coherence of the service itself), and we could stop depending on business logic that is not on our side.
With that in place, we could mock the service layer in the acceptance tests, isolating the behaviour of our application from the environment. We would focus on testing what our system does and avoid having to validate the behaviour of other entities over which we have no direct control. However, I am unable to find any literature from the widely known agile gurus on this subject. It would be great to hear whether someone has had success with such an approach, why such an approach would be moronic, or about some reading on similar approaches...
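To make the idea concrete, here is a minimal sketch of mocking the service layer in an acceptance test, using Python's `unittest.mock`. The `QuoteService` interface, the `premium_discount` logic, and all values are hypothetical, invented for illustration; the point is only that the test controls the service's responses completely and never touches a central system.

```python
from unittest.mock import Mock

# Hypothetical interface to a central business service (illustrative name).
class QuoteService:
    def get_quote(self, customer_id):
        raise NotImplementedError  # the real implementation calls a central system

# Application-side business logic under test; the service is an injected dependency.
def premium_discount(service, customer_id):
    quote = service.get_quote(customer_id)
    return quote * 0.9 if quote > 1000 else quote

# In the acceptance test, substitute a mock whose responses we fully control.
service = Mock(spec=QuoteService)
service.get_quote.return_value = 1200

assert premium_discount(service, "C42") == 1080.0
service.get_quote.assert_called_once_with("C42")
```

Because the mock is constrained with `spec=QuoteService`, a typo in the method name fails immediately, so the test still guards the input/output map of the service we call, which is exactly the part the integration tests are meant to vouch for.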
Demir:
I would agree with Laurent; we have faced a very similar situation. Our team tests the APIs for data transport and parsing. The developers' unit/component-level tests all ran against mock objects and data stores. Since unit tests are not always comprehensive, we also built a 'fake server' to emulate the various web services we use, for running additional functional tests. The pros of this approach are control over the data (contacts, feeds, photos, etc.) and the ability to simulate network errors via fault injection.

But while a fake server is good for evaluating raw functionality in an isolated environment, it doesn't test how the software performs and functions end-to-end in the "real world." The point of acceptance testing is to evaluate the software in a real-world situation similar to what customers will experience, and that is the biggest advantage of going against real servers.

One of the downsides of going against real services is that some of your tests may become unreliable. Some sporadic issues with real services can be overcome by redesigning the automated tests. For example, ping the service before the test begins to make sure it is running, and abort the test if it is not ready after a brief period of time.

So we still use a mix of both approaches, but our growing suite of customer scenario tests (acceptance tests) and our performance tests now run against real servers.
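The "ping the service before the test begins" idea could be sketched as follows, using only the Python standard library. The endpoint, timeouts, and test class are illustrative assumptions, not details from the thread; the technique is to poll the service's TCP port and skip (rather than fail) the scenario when it never becomes reachable.

```python
import socket
import time
import unittest

def wait_for_service(host, port, timeout=5.0, interval=0.5):
    """Poll a TCP endpoint until it accepts connections or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False

class CustomerScenarioTest(unittest.TestCase):
    SERVICE = ("quotes.internal.example", 8080)  # illustrative endpoint

    def setUp(self):
        # Abort (skip) rather than fail when the central service is down,
        # so sporadic environment issues don't show up as red tests.
        if not wait_for_service(*self.SERVICE, timeout=5.0):
            self.skipTest("quote service not reachable; aborting scenario test")

    def test_quote_scenario(self):
        ...  # real end-to-end steps against the live service
```

Skipped tests stay visible in the report, so a chronically unreachable service still surfaces as a trend rather than being silently ignored.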