What are anti-patterns in test automation?



  • Anti-pattern:

    There must be at least two key elements present to formally distinguish an actual anti-pattern from a simple bad habit, bad practice, or bad idea:

    • A commonly used process, structure or pattern of action that despite initially appearing to be an appropriate and effective response to a problem, typically has more bad consequences than beneficial results, and
    • A good alternative solution exists that is documented, repeatable and proven to be effective.

    I have recently encountered several anti-patterns in test automation that make tests hard to read, reuse, and maintain.

    A similar catalog has been created for anti-patterns in unit testing, but automating end-to-end tests differs in nature from unit testing. First, some patterns that are anti-patterns in unit testing might be acceptable in end-to-end test automation. For instance, adding a new assertion to an existing test instead of creating a new test case (the Free Ride / Piggyback pattern) might be acceptable because setup in an end-to-end environment usually takes more time (a sketch follows the list below). Second, there are anti-patterns specific to end-to-end test automation, e.g.:

    • Test data too tightly coupled to the SUT database
    • Environment configuration hardcoded in tests (see the second sketch below)
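
    To make the Free Ride trade-off above concrete, here is a hypothetical JUnit 5 sketch (all names invented) in which several related assertions share one expensive end-to-end setup:

        import org.junit.jupiter.api.Test;
        import static org.junit.jupiter.api.Assertions.*;

        class CheckoutEndToEndTest {

            // Hypothetical page object; stands in for the state reached after
            // the slow part: starting a browser, logging in, placing an order.
            record Confirmation(boolean bannerVisible, String paymentStatus,
                                String orderNumber) {}

            private Confirmation loginAndPlaceOrder() {
                return new Confirmation(true, "PAID", "ORD-1042"); // stub for the expensive setup
            }

            @Test
            void completedOrderShowsConfirmationDetails() {
                Confirmation page = loginAndPlaceOrder();

                // Free Ride: several related checks piggyback on one costly
                // setup. Questionable in a unit test, often pragmatic when
                // the setup takes minutes rather than milliseconds.
                assertTrue(page.bannerVisible());
                assertEquals("PAID", page.paymentStatus());
                assertNotNull(page.orderNumber());
            }
        }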
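
    And as a minimal sketch of the second list item (the property and variable names are illustrative, not standard), the environment can be supplied from outside the test code instead of being hardcoded:

        import java.util.Optional;

        public class TestEnvironment {

            // Anti-pattern: private static final String BASE_URL = "http://test-srv-03:8080";
            // Alternative: resolve the SUT location at run time from a system
            // property, an environment variable, or a default for local runs.
            public static String baseUrl() {
                return Optional.ofNullable(System.getProperty("sut.baseUrl"))
                        .or(() -> Optional.ofNullable(System.getenv("SUT_BASE_URL")))
                        .orElse("http://localhost:8080");
            }

            public static void main(String[] args) {
                // e.g. java -Dsut.baseUrl=http://staging.example:8080 TestEnvironment
                System.out.println("Running tests against " + baseUrl());
            }
        }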

    Can you suggest others?



  • Unit and system test automation are different, but at least a few of the unit-test anti-patterns apply, such as concentrating on happy-path scenarios. Thanks for including that link!

    In the automation I have implemented, I was forced into GUI automation by the structure of the legacy Java client-server systems I was testing. However, the anti-patterns below also apply to web automation tools like Selenium that act at the user interface. Note that this applies to keyboard/mouse interfaces; touch interfaces are a different animal entirely and would require their own evaluation.

    I will start with an obvious anti-pattern: Dependence on Record and Playback. The alternative I have implemented in all of my automation is to use recorded actions to obtain the structure of the GUI elements, then parameterize them into a function that searches the application's interface structure to locate each object. This makes the GUI automation resilient to structural changes in the user interface (which happen often).
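
    As a minimal Selenium sketch of that idea (the locator strategy and names are illustrative, not prescriptive), the recorded structure is reduced to one stable attribute, and a parameterized function searches for it at run time:

        import java.time.Duration;

        import org.openqa.selenium.By;
        import org.openqa.selenium.WebDriver;
        import org.openqa.selenium.WebElement;
        import org.openqa.selenium.support.ui.ExpectedConditions;
        import org.openqa.selenium.support.ui.WebDriverWait;

        public class GuiFinder {

            private final WebDriver driver;

            public GuiFinder(WebDriver driver) {
                this.driver = driver;
            }

            // Instead of replaying a recorded absolute path, search the
            // interface for a stable attribute taken from the recording.
            // (Assumes the label contains no single quotes.)
            public WebElement findByLabel(String label) {
                String xpath = String.format(
                        "//*[@aria-label='%s' or normalize-space(text())='%s']",
                        label, label);
                return new WebDriverWait(driver, Duration.ofSeconds(10))
                        .until(ExpectedConditions.presenceOfElementLocated(By.xpath(xpath)));
            }
        }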

    Making Intermittent Bugs Low Priority. These defects show up as "glitches" that a user or a manual tester would quickly work around and often ignore. However, unstable or unpredictable operation is the bane of automated tests. Even if they are EBM ("every blue moon") intermittent defects, they would still prevent most automation from running to completion without a considerable number of restarts.
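
    To illustrate that cost, here is a sketch (hypothetical, not from any framework) of the restart scaffolding such defects force on a suite; note that it hides the instability rather than fixing it:

        import java.util.function.Supplier;

        public class Retry {

            // Reruns a test step until it passes or the attempt budget runs
            // out. Every "every blue moon" glitch that stays low priority
            // pushes more steps behind wrappers like this one.
            public static <T> T withRetries(int maxAttempts, Supplier<T> step) {
                RuntimeException last = null;
                for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                    try {
                        return step.get();
                    } catch (RuntimeException e) {
                        last = e; // intermittent failure: swallow and try again
                    }
                }
                throw last; // out of attempts; surface the last failure
            }
        }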

    Here is my favorite: Ignoring Accessibility Development Standards. These standards were developed to give persons with disabilities access to software applications. Not only is adopting them the right thing to do, but implementing them improves accessibility for all users. A little-known benefit is the dramatic improvement in the testability of the application, for both manual and automated approaches. One aspect of these standards is especially important: making the application interface readable by external screen readers (such as JAWS). Not implementing accessibility standards forces you into "blind" automation, which is essentially the Record and Playback approach discussed above.
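
    For example (the attribute values are illustrative), once the interface exposes accessible names the way screen readers require, Selenium can target those names directly instead of screen positions or recorded paths:

        import org.openqa.selenium.By;
        import org.openqa.selenium.WebDriver;
        import org.openqa.selenium.WebElement;

        public class AccessibleLocators {

            // The same accessible name a screen reader announces becomes a
            // stable, human-readable locator for the test.
            public static WebElement searchField(WebDriver driver) {
                return driver.findElement(By.cssSelector("input[aria-label='Search']"));
            }

            public static WebElement submitButton(WebDriver driver) {
                return driver.findElement(
                        By.cssSelector("[role='button'][aria-label='Submit order']"));
            }
        }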

