What arguments would you make to persuade management that UI automation is a bad investment
morde
After working with UI automation for some time and maintaining many system tests (huge end-to-end tests that start all the way from software installation and finish with testing via the UI), I finally realized that having many UI tests is a bad investment. I've seen diagrams (the classic "test pyramid") showing roughly how tests should be distributed across the unit/integration/system levels (around 70/25/5).
Question: what strong, well-founded arguments would you use to persuade management not to invest heavily in UI automation, but rather in unit and integration tests (ideally with links to case studies, testing books, etc.)?
inna
This is often a difficult argument, not least because it frequently shifts "test" responsibility to developers, who may not wish to write unit or integration tests and may see no value in either.
Some points I've found helpful here are:
- It's much cheaper, in terms of both licenses and time, to keep business-logic tests at the unit or integration level, where they can run in seconds on every build. If a large number of your UI-level tests are covering business logic or calculations, this one is (almost) a gimme. (As an example, a prior workplace had thousands of UI-based tests that took nearly 12 hours to run in total. They could, and should, have been unit or integration tests of tax calculations with different input parameters. There was NO unit testing around that code, and worse, it was so spaghettified it was untestable without major refactoring.)
- The more unit and integration testing developers build into the codebase, the more stable the application will be and the easier it will be to modify without breaking existing functionality. If you have a lot of regression problems, this is a big deal. (In the company with no unit testing, emergency patch releases consumed more than half the test team's time, which meant that new-development testing and regression testing got short-changed, leading to more regression bugs, leading to... you get the picture.)
- Even the most well-designed UI automation is fragile and high-maintenance. Even after the issues caused by record/playback tooling have been dealt with, something as simple as making a field non-editable can cause hours of automation refactoring.
- UI automation tends to run into one of two problems: either there are a lot of repeated actions, with the corresponding problems of repeated code, or the test run is highly dependent on prior tests having completed correctly. In the first case, a small change to the UI can force massive refactoring; in the second, a failure early in the run can cascade into failures further along. I have yet to find a clean way to maximize DRY automation code while minimizing inter-test dependency.
- The closer a test is to the thing it's testing, the easier (and faster, and therefore cheaper) it is to identify the problem. If a unit test fails, the problem is in that unit or in the unit test. If an integration test fails, the problem could be in any of the units involved, the interfaces between them, or the test. If a UI test fails, add to that list the user interface itself, communication between the UI and the back end, and anything else the machine happened to be doing at the time (a scheduled virus scan can wreak havoc on a UI automation run). Even with the best logging and test design in the world, this makes UI test failures expensive to diagnose.
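To make the first bullet concrete, here is a minimal sketch of what those UI-level calculation checks look like pushed down to the unit level. `calculate_tax` and every value in the table are hypothetical stand-ins for the real business logic; plain `assert`s are used for portability, though a parameterized test in your framework of choice (e.g. pytest's `parametrize`) is the more idiomatic form:

```python
# Table-driven unit test for business logic. calculate_tax is a
# hypothetical stand-in for the real calculation code.

def calculate_tax(income: float, rate: float) -> float:
    """Hypothetical tax calculation (placeholder for the real rules)."""
    if income < 0:
        raise ValueError("income must be non-negative")
    return round(income * rate, 2)

# One row per scenario; hundreds of rows still run in well under a
# second, with no software install, no browser, and no UI to break.
CASES = [
    (0, 0.20, 0.00),
    (50_000, 0.20, 10_000.00),
    (123_456.78, 0.31, 38_271.60),
]

def test_calculate_tax():
    for income, rate, expected in CASES:
        assert calculate_tax(income, rate) == expected
```

Compare the minutes a single UI-driven calculation check takes (launch, navigate, fill fields, read result) against this whole table running in milliseconds; that ratio, multiplied across thousands of cases, is where 12-hour runs come from.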
The short version: if you're running a lot of UI tests to validate business logic, those checks would be better handled as unit or integration tests in terms of time, cost, and effectiveness.
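On the fragility point above: the usual mitigation for UI-change churn is a page-object layer, which centralizes locators so a renamed field is fixed in one place rather than in every test. It reduces the maintenance cost, but does not remove it. A minimal sketch, with a fake driver standing in for a real WebDriver and every name hypothetical:

```python
class FakeDriver:
    """Stand-in for a real browser driver: element id -> typed value."""
    def __init__(self):
        self.elements = {}

    def type_into(self, element_id, text):
        self.elements[element_id] = text

    def read(self, element_id):
        return self.elements.get(element_id)


class LoginPage:
    """Page object: the only place that knows the UI's element ids."""
    # If the UI renames these fields, only these two lines change;
    # every test that goes through LoginPage keeps working.
    USERNAME_ID = "username-input"
    PASSWORD_ID = "password-input"

    def __init__(self, driver):
        self.driver = driver

    def enter_credentials(self, user, password):
        self.driver.type_into(self.USERNAME_ID, user)
        self.driver.type_into(self.PASSWORD_ID, password)


driver = FakeDriver()
LoginPage(driver).enter_credentials("alice", "s3cret")
assert driver.read("username-input") == "alice"
```

Note that this layering does nothing for the other problem (inter-test dependency): tests still need independent setup, typically by seeding state through an API rather than relying on earlier UI tests having passed.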