How is QA involved in usability testing?
Usability testing provides feedback on how well users can accomplish tasks using a GUI.
In many companies there are dedicated departments for usability testing, separate from QA.
Still, I wonder what low-cost usability testing approaches QA teams can perform.
Approaches I have tried occasionally, in industry or in academia, formally or informally:
- Hallway testing, where you pick a random person from your office and ask them to try something in the UI
- Task-based testing with a think-aloud protocol, where you prepare tasks and listen as a person performs them with your app, noting how she perceives the UI and why she takes one action rather than another
- Compliance with UX guidelines, though such compliance does not guarantee that a GUI is usable.
However, all these approaches were usually applied only once the GUI was implemented, or at least partially implemented. Can QA be involved in shaping the usability aspect at the design phase?
I work for a small company with four developers and two testers. Our product is a web-based application that we deploy to Amazon EC2. The application targets holders of a special kind of bank account.
I will describe the process we use with external participants, i.e. with participants who are not our employees (we also run informal usability tests with employees). My process will not work for all companies or all products, but it may work for yours.
We use UserTesting.com, a site that uses freelance testers for fifteen-minute testing sessions. You supply the instructions and describe the kind of tester you are looking for in terms of age range, kind of computer/mobile device, or anything else you can think of. (We only accept testers with a specific kind of bank account.) For $39, a tester will follow your instructions and give you verbal and written feedback on their experience. You also get a video recording of the entire session. We point the tester to a non-production installation of our site that we maintain specifically for testing. (It too is deployed at Amazon EC2.) When we aren't testing, we shut the test site down, so we only pay for it when we need it.
For each round of testing, we pick a new/updated feature or an area that seems to generate a lot of support calls. We design some specific tasks that we want to test. We craft our scenarios and instructions carefully because the tester won't have us around to ask questions. The instructions are designed to be completed in fifteen minutes or less. We typically use 3-5 testers per round of testing.
For new features, we run the test as soon as the feature is stable enough that the tester is not at risk of being blocked by a bug. It is OK to tell the tester, "This is a new feature and it isn't finished yet, but we are looking for early feedback."
We often have some tester feedback within an hour of posting a test. I often have feedback from 3-5 testers within a morning or an afternoon.
You have to be careful about how you use test results. They will frequently identify some kind of difficulty, but just because a single tester finds something hard or confusing does not necessarily mean you need to make any changes in your product. Instead, you need to consider the circumstances and look for trends.
These tests are not designed to be statistically significant. Instead, the goal is to pick a feature, point the testers at it, and just see what happens. It is an informal, qualitative process, and yet we have initiated a lot of improvements in our site this way.
Because these tests are informal and inexpensive, they can be executed frequently and quickly, which means we are more likely to use them than we would if there were a lot of process/bureaucracy involved.
I read a few books on usability testing before going down this path. One I recommend is Rocket Surgery Made Easy, by Steve Krug. It is short, easy to read, and practical.