How do I automate "service-level" testing for a GUI app

  • I have extensive experience automating GUI testing with tools like TestComplete or Microsoft's CodedUI. However, I am trying to get out of the game of testing at the UI layer and instead would like to get started testing at the "service" or "API" or "logical" level, as advocated widely throughout the internet. My understanding of this is that I need to somehow access the functions being called under the hood when I am clicking buttons or otherwise interacting with the UI. However, I am not really sure how to go about doing this.

    With my GUI automation scripts, basically I just wrote a separate application that was run side by side with the application under test to click buttons and whatnot. Since there isn't really a test API available to touch the service layer externally, how do I go about testing this layer? Do I just go into Eclipse and make some "Unit Tests" to call functions from the application directly? Do I need to bother the developers and ask them to write me a test API so I can access the service layer externally? Or is there a better way to go about doing this?

    EDIT: Here's an example of what I wish to accomplish. Let's say I have some application wherein I am supposed to take some sales order with ID SO12345 through some business workflow with three separate screens (Initiate Order, Process Order, Complete Order). In each of these screens, I basically just enter the sales order ID into a text box and click save, which then updates the status of this order in the backend SQL database (insert new sales order record with status "New Order", update status to "Order Processed", update status to "Order Complete").

    Basically all I want to do in this example is move a sales order through these three screens and test the database to see if the correct updates have been made along the way. With UI automation, I would make an entirely new application and write a method for each screen and write code in each of those methods to navigate to the appropriate screen, enter the sales order ID into a text box, then click save. Then with these three methods in hand, I could write code calling these three methods and check the database in between each automated step. But now if I want to instead test at the service level and just call the underlying methods underneath each button click, where should I be doing that? Can I make a unit test project and write my automation code in there (even though it's not actually a true unit test) or is it better to get the developers to write me a library that I can reference in a separate app to access these navigation methods directly?
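    To make the idea concrete, here is a rough sketch of the kind of test being described, where the `OrderService` class and its method names are invented placeholders for whatever the real application exposes, and a plain dict stands in for the backend SQL database:

    ```python
    # Hypothetical service-level test. OrderService and its methods are
    # stand-ins for whatever the developers would actually expose; the
    # dict simulates the backend order-status table.

    class OrderService:
        """Invented stand-in for the application's service layer."""
        def __init__(self):
            self._db = {}  # simulated table: order_id -> status

        def initiate_order(self, order_id):
            self._db[order_id] = "New Order"

        def process_order(self, order_id):
            self._db[order_id] = "Order Processed"

        def complete_order(self, order_id):
            self._db[order_id] = "Order Complete"

        def get_status(self, order_id):
            return self._db.get(order_id)


    def test_order_workflow(service, order_id):
        """Move one order through all three steps, checking the
        database state after each step instead of driving the GUI."""
        service.initiate_order(order_id)
        assert service.get_status(order_id) == "New Order"

        service.process_order(order_id)
        assert service.get_status(order_id) == "Order Processed"

        service.complete_order(order_id)
        assert service.get_status(order_id) == "Order Complete"


    test_order_workflow(OrderService(), "SO12345")
    ```

    The structure mirrors the three GUI methods described above, just with direct calls in place of button clicks.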

  • Paul,

    A lot depends on what's available in your Application Under Test: often, if there is no API available, the functions are only accessible when you have the correct GUI controls open. This is, alas, all too common, particularly with older applications. I spent a lot of time working with an application that still had 25-year-old code grandfathered in, where much of the functional logic happened at the presentation layer. You can imagine the fun that was to test! (The devs hated it too, but between its complexity and the company's business decisions, there was never time to clean it up properly.)

    Where you can access the service layer, it's easy to work with no matter what tool you're dealing with. I've done API-level testing with TestComplete, dynamically building XML files to send to the service and parsing the responses. When you can't, what I've found to be the best option is to include data layer tests in your code, checking your database for correct data storage after you've exercised the GUI for the tests you're performing (which with your experience you probably know already).
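    Building the XML dynamically and parsing the response is straightforward in most tools; here is a minimal sketch using Python's standard library, where the request/response element names are invented for illustration and the actual HTTP call to the service is omitted:

    ```python
    import xml.etree.ElementTree as ET

    def build_order_request(order_id, action):
        """Dynamically build the XML payload for one service call.
        Element names here are hypothetical."""
        root = ET.Element("OrderRequest")
        ET.SubElement(root, "OrderId").text = order_id
        ET.SubElement(root, "Action").text = action
        return ET.tostring(root, encoding="unicode")

    def parse_order_response(xml_text):
        """Pull the status field out of the service's XML reply."""
        root = ET.fromstring(xml_text)
        return root.findtext("Status")

    request = build_order_request("SO12345", "ProcessOrder")
    # The request would be POSTed to the service here; the reply below
    # is a canned example of what such a service might send back.
    reply = "<OrderResponse><Status>Order Processed</Status></OrderResponse>"
    status = parse_order_response(reply)
    ```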

    Some of the things I've done to make this less painful when the inevitable GUI changes occur include:

    • Object-oriented test code.
    • Heavily data-driven tests. At my previous employer I worked with a relational database of tests and test data that had evolved from the first data-driven tests, and was maintained entirely in CSV files: functional, but not exactly easy for a new automator to follow.
    • Shared library code to handle navigation within the AUT. In the code at my last employer, the automation typically had a class for each object's data and a library of navigation functions for manipulating that object. (I'm starting clean at my current employer, so there's nothing there yet, but this is the model I'm going to use.)
    • Shared definitions of all the AUT objects manipulated. At my last employer this was a shared list of string constants for form names and menu choices, so that instead of calling Appname.FormName.Section.Section.Fieldname.SetText we could call CUSTOMER_FORM_CONSTANT.Fieldname.SetText, and for the most commonly used fields, CUSTOMER_FORM_FIELD_NAME_CONST.SetText. That meant that if a form field changed, at best the fix was a single line of code; at worst, a search-and-replace in one unit. (The exception was the oldest automation code, over ten years old and still in use, which was mostly record/playback with some parameterization and library use.)
    • Data validation per test and at the end of each run. This part needs to happen with API testing too: I'm sure you've seen tests that appear to complete without any problems, yet store incorrect data. At my last employer, there was typically key validation of data at the end of each test in the suite; then, when the GUI tests completed, every database table the tests changed was checked against a set of baseline files.
    • You probably already know this one, but it never hurts to be reminded to start from a known data set. The first thing any test involving complex data should do is set the AUT database to a known starting point. Because the applications I work with are so complex that it's not feasible to input all the data as part of a test (it would take hours before there was enough data in place to run it), I start by restoring a known database and build tests from that.
    • Where you can make function calls directly, do that. If your AUT is compiled with debug info, TestComplete can access most of its internal functions and members, and can often call them directly; developers can use this to expose internal process calls for your automation. (One caveat: Embarcadero introduced a nasty bug into Delphi XE where building a sufficiently large and complex application (the one I was working with had over 4 million lines of code and counting) corrupted the debug info. Since I'm no longer with that company, I don't know which version of the Delphi IDE contains the fix, but you can imagine the havoc it caused in a half-million-line automation code base utterly dependent on that debug info.)
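    The shared-constants and navigation-library ideas above might look something like this in outline. All the names here are invented, and the `FakeGui` recorder stands in for whatever driver object your tool provides:

    ```python
    # Shared object definitions: if a form or field is renamed in the
    # AUT, only these constants need to change.
    CUSTOMER_FORM = "Appname.CustomerForm.Details"
    CUSTOMER_NAME_FIELD = CUSTOMER_FORM + ".NameField"

    class FakeGui:
        """Records actions instead of driving a real GUI, purely so the
        sketch is self-contained."""
        def __init__(self):
            self.actions = []

        def set_text(self, path, text):
            self.actions.append(("set_text", path, text))

        def click(self, path):
            self.actions.append(("click", path))

    class CustomerScreen:
        """Navigation library: the one place that knows how to drive
        this screen, so tests never touch raw control paths."""
        def __init__(self, gui):
            self.gui = gui

        def set_customer_name(self, name):
            self.gui.set_text(CUSTOMER_NAME_FIELD, name)

        def save(self):
            self.gui.click(CUSTOMER_FORM + ".SaveButton")

    gui = FakeGui()
    screen = CustomerScreen(gui)
    screen.set_customer_name("Acme Corp")
    screen.save()
    ```

    A test built on `CustomerScreen` survives a renamed field with a one-line constant change, which is the whole point of the pattern.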

    Good luck.

    UPDATE: Adding to this to reflect your updated question:

    I wouldn't call what you want to do "unit tests" - they're functional or API tests, since none of them exercises only a single unit.

    Here's how I would approach this, if no API exists -

    • I'd start with a database containing a number of orders in various states of fulfilment. The key thing here is knowing what the orders are and what state they're in.
    • I'd write my script code to take an order ID as a parameter, check what status it should be in (this could be maintained in a second database, a CSV file, or within the test tool), and run the sequence based on that. I'd loop through the orders, have the test code expect an error message if it tries to create a new order for an existing order ID, and check the database after each step. If you organize your script to treat each order ID as its own object with whatever properties it needs, this becomes a trivial exercise, and one that lets you perform a large number of tests with the same code base.
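    A skeleton of that loop, with a dict standing in for the database query and a fake screen driver in place of the real GUI code (all names invented):

    ```python
    # Expected state transitions for the order workflow.
    NEXT_STEP = {"New Order": "Order Processed",
                 "Order Processed": "Order Complete"}

    def run_order(db, drive_screen, order_id, expected_start):
        """Walk one order through its remaining screens, checking the
        database after every step. `db` stands in for a query against
        the real backend; `drive_screen` exercises the AUT."""
        assert db[order_id] == expected_start, "database not at known start"
        while db[order_id] in NEXT_STEP:
            wanted = NEXT_STEP[db[order_id]]
            drive_screen(order_id, wanted)            # exercise the screen
            assert db[order_id] == wanted, "wrong status after step"

    # Fake driver for illustration: the real one would click through
    # the GUI (or call the service) and let the AUT update the database.
    db = {"SO12345": "New Order", "SO12346": "Order Processed"}
    fake_drive = lambda oid, status: db.__setitem__(oid, status)

    for oid, start in [("SO12345", "New Order"),
                       ("SO12346", "Order Processed")]:
        run_order(db, fake_drive, oid, start)
    ```

    The per-order expected states would come from whatever source you maintain them in; the loop itself doesn't care whether the driver underneath is GUI automation or direct service calls.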

    Without the API access it's slower, but still feasible.
