Comparing GUI Automation Recording Approaches
I have recently joined the automation team. We are currently developing an automation framework for our web app, which has been live for a while. I have designed and developed the framework to execute test scripts recorded using Sahi Open Source 4.4. I am now stuck evaluating approaches for recording GUI test cases.
Application Under Test: The AUT is a web app that uses AJAX and also has some custom graphs and reports.
The available recording approaches are as follows:
- Recording regression bugs: There are around 3,000 bugs (open and closed) in the system that can be automated and used as a regression suite. The upfront problem is that even after recording 3,000 flows, the automation may not cover the complete web app.
- Recording your business flows: This approach records each business flow along with its validations. It will eventually cover the complete web app if data-driven testing is used smartly. But the web app is complex and can have a large number of flows even with data-driven testing. We are also working on a tight time frame, which might rule this method out. Some flows are also complex enough that the scripts and their data-driven support could themselves become very complex and, in turn, unmaintainable.
- Recording validations and flows independently: This approach records each validation as a separate test case and the actual business use case as another separate test case. This will reduce the size of my recorded scripts and hence keep them maintainable. The separation also gives me an exact point of failure, stating whether a field validation or the business flow is failing, which simplifies debugging script failures. But I am not able to decide whether this approach has any unseen issues.
My Queries / Concerns:
- I want to know if I can go ahead with Approach 3 as my recording approach, or whether there should be other changes or a new approach altogether. I have tried to research how recording is done in other companies but could not reach a conclusion.
- I want to evaluate whether Approach 3 will hold up under the agile methodology we will be adopting soon.
- Whether Sahi Open Source is the right recording tool choice for an AJAX application.
- Whether the recording tool's output should be kept as recorded scripts or exported to a programming language, such as Java code.
Answer by jeanid:
It's rather difficult to give a clear answer here, but I can offer a few thoughts for you.
- Go with maintainable as a first priority. In my experience once a regression suite is up and running it can be a very long time before it goes away.
- After maintainability, look to the 80/20 rule - the 20% of the application that gets 80% of the use. This is where your highest regression ROI comes from.
- After you have the 80/20 rule set up, look for gaps in your coverage. This will show up as the areas where most of the regression bug reports come in from customers. The manual testers will also know which parts of the application are most fragile.
- Each time you add coverage to your regression tests, start with a smoke test of core functionality for the feature you're adding. Then expand to the 80/20 rule. After that you can consider adding tests for the bugs that have been reported against that feature.
- Don't worry about the tool. If it can reach the components on the page that it needs to reach, it's good enough for the task.
- Be prepared for chaos during the adoption of agile. I've yet to see an adoption of any agile implementation that didn't include a phase of utter chaos. Your concern for GUI automation is to avoid automating against a moving target: as a rule, GUI automation should happen after each slice of functionality is stable. Any attempt at GUI automation before this turns into thrashing (yes, I've been there). Whichever approach you choose will make no difference - the time needed to build GUI regression doesn't change because the development methodology changes. On the plus side, if moving to a more agile approach adds unit testing, you should be able to reduce the amount of GUI regression that focuses on areas which are more properly the domain of unit tests.
- Record/playback is dangerous. I can't stress this enough. Almost every tool sells itself on being "code-free", but the truth is that sooner or later you will need to shift to using the record feature only to identify the components you need to interact with, and then code your interactions for maximum reusability and maintainability.
- Output doesn't matter - reusability and maintainability do. As long as you can run the regression on any system and spin systems up and down at will, it doesn't matter whether you're exporting your tool output as scripts or as a compilable/compiled language export.
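The "record to identify components, then code the interactions" advice can be sketched as follows. This is a hedged illustration, not Sahi output: the locator strings, the `login` helper, and the action-tuple representation are all assumptions standing in for whatever your real tool records and replays.

```python
# Hypothetical sketch: use the recorder only to harvest locators, then
# wrap interactions in small reusable functions and drive them with data.
# Locator strings and helper names are illustrative, not Sahi's API.

LOCATORS = {
    # Captured once from a recording session and kept in one place, so a
    # UI change means editing this dictionary, not every recorded script.
    "username": "textbox[name='user']",
    "password": "textbox[name='pass']",
    "login":    "button[text()='Log in']",
}


def login(actions, user, password):
    """One reusable interaction built on the recorded locators.

    `actions` is a plain list standing in for the tool's replay queue;
    each entry is an (operation, locator[, value]) tuple.
    """
    actions.append(("set", LOCATORS["username"], user))
    actions.append(("set", LOCATORS["password"], password))
    actions.append(("click", LOCATORS["login"]))
    return actions


# Data-driven: the same coded flow runs against many credential rows,
# instead of one recorded script per credential pair.
CASES = [("alice", "secret1"), ("bob", "secret2")]

if __name__ == "__main__":
    for user, pwd in CASES:
        steps = login([], user, pwd)
        print(len(steps))  # 3 steps per case
```

The design point is the single `LOCATORS` table: the recorder's real value is filling that table, while the flow logic lives in ordinary, reviewable code that survives UI churn far better than raw recordings.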