UAT best practices | Level of detail
For our current software product, some of our enterprise clients want to run User Acceptance Testing on their own acceptance environment. To help them get started, we have set up a UAT suite which we deliver as a base set of tests for our products.
My question is: what are the best practices for setting up UAT tests for larger software products? Of course this depends on what the end client wants, but surely there are some good guidelines to get started with.
Some challenges that come to mind are:
What level of detail do you describe in the steps? Full detail or shorthand?
"Left click on the [Refresh] button" versus just "Click [Refresh]"
Every feature versus a high-level workflow walk-through only: our current suite feels more like a system regression test, as it tries every possible combination. Personally, I think end users should just test their main workflows and see if the product still fits their needs. So I would rather supply them with end-to-end test cases that represent a realistic way of working.
How much effort can you expect from clients? Test coverage versus time investment.
Prior knowledge: should anyone (even without knowledge of the application) be able to run the suite, or should we expect basic training first?
Random data or exact steps: we have a basic test data set, and we often instruct the tester to pick a random item to run the steps against. This sometimes leads to extra thinking and extra time investment; it's easier to just follow clear steps, but those take more effort to set up in the suite.
As you say, 'it depends', but here are a few comments from my experience:
If you supply exact steps and data, then what is the point of UAT? You might as well get your own testers to run the scripts. I'd rather give the users some training on the system and give them scenarios to follow, which will have been developed with their input.
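To illustrate the difference, here is a minimal sketch of what a scenario-style UAT case might look like if captured as structured data: it states the business goal, preconditions, and outcomes to verify, but deliberately leaves the exact clicks to the trained user. The class and field names, and the invoice example, are purely illustrative assumptions, not part of any particular tool or product.

```python
from dataclasses import dataclass, field

@dataclass
class UatScenario:
    """A scenario-style UAT case: goal and checkpoints, not click paths."""
    title: str
    goal: str                                         # what the user is trying to achieve
    preconditions: list = field(default_factory=list)  # state assumed before starting
    checkpoints: list = field(default_factory=list)    # outcomes to verify, not steps

# Hypothetical example scenario for an invoice workflow.
invoice_flow = UatScenario(
    title="Create and approve a purchase invoice",
    goal="An invoice entered by a clerk is approved and visible in reporting",
    preconditions=["Logged in as a clerk with invoice-entry rights"],
    checkpoints=[
        "Invoice appears in the approval queue",
        "Approver can approve it without errors",
        "Approved invoice shows up in the monthly report",
    ],
)

print(invoice_flow.title)
```

Note there is no "left click [Refresh]" level of detail anywhere: the checkpoints describe business outcomes, so the user exercises the system the way they actually would at go-live.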
Should anyone be able to run the suite without knowledge? Again, it depends on the app. If it requires domain knowledge, why would you expect just anyone to run it? That won't happen when the system goes live either.
You'll likely run into problems getting hold of people to run the tests: the ones with deep domain knowledge are also the people most likely to be in demand for their 'normal' jobs.
Talk with your users and see what their expectations and requirements are. Usually they will be fine with just running the main workflows, especially if you've explained what other testing has already happened and how much time and effort would be involved in them running all the cases. Adding in some exception cases so they can see how the system handles them can also help.