Every project can certainly launch without being tested, and we can hope that the entire team made the right choices for the users involved. The alternative is to identify potential pitfalls and issues before the product is out there on the marketplace, before a lot of money and effort have been spent, and before a phase 2.0 is needed to address those pesky UX issues that could seriously hurt conversion.
Why — This is where we outline the objectives of the usability test and align these objectives with one or multiple goals of the project.
Who — How many user groups should be represented and how many should be involved in each group?
What — What types of tasks should we test and how many?
Heuristic Evaluation — Performed by a small group of expert evaluators rather than end users, a heuristic evaluation is an extremely quick and cost-effective method of usability testing. A group of about 5 evaluators critiques the interface against established usability principles so that we can quickly find and fix any usability issues.
Direct Observation — With direct observation in a lab setting, we experience first-hand how the user feels, reacts and works through specific tasks. This setting generally involves one-on-one interviews and think-aloud sessions with each participant. On the flip side of the lab is a somewhat rogue but highly effective method for mobile research and testing: direct observation in the field, or shall we say testing on the streets? A lab setting won't quite do the trick for mobile app testing; the in-the-field approach provides real context for mobile device usage and adds the realism of the actual location. Whether the test occurs in a lab or out on the streets, direct observation captures user feedback that other usability testing methods could easily miss.
Lean participant testing — Jakob Nielsen notes that “The maximum benefit-cost ratio is achieved when using between three and five subjects, …” The rationale is that a smaller group greatly reduces the overhead of each test and lets you test more frequently. Performing multiple rounds of testing allows for a true iterative design approach.
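The diminishing returns behind Nielsen's recommendation come from the Nielsen & Landauer model: with n testers, the expected share of usability problems found is roughly 1 − (1 − L)^n, where L is the fraction a single tester uncovers (Nielsen reports about 31% on average). A quick sketch of the math, purely illustrative:

```python
# Nielsen & Landauer model of usability-problem discovery:
# expected fraction of problems found by n testers is 1 - (1 - L)^n,
# where L is the share one tester uncovers (~31% on average,
# per Nielsen's published figure; actual values vary by project).

def problems_found(n: int, L: float = 0.31) -> float:
    """Expected fraction of usability problems found by n testers."""
    return 1 - (1 - L) ** n

# Each additional tester mostly re-finds problems already seen:
for n in (1, 3, 5, 10):
    print(f"{n:2d} testers -> {problems_found(n):.0%} of problems")
```

Five testers already surface roughly 85% of the problems, which is why several small rounds of testing beat one large one.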
Appropriate participants — Screening participants is another vital element of a successful usability test. It’s far more important to focus on a participant’s behavior and attitude than on simply matching the known demographics of the project’s user groups.
Participant incentives — It’s great to know there is a desire to be heard, but ultimately an incentive will prevent no-shows and may encourage a participant to return for a second round if needed. Small gift cards are a great way to provide that incentive to show up and to make participants feel truly valued for providing vital feedback. Of course, there are all kinds of ways to compensate participants depending on the time requested and what is being tested; we like being inventive here and making it worth everyone’s time.
Pre-session questions — These questions are kept quite general. Most include a sprinkling of demographic information and first impressions of what the participant is viewing.
Task-based questions — These questions are the core of hands-on user testing. They ask the participant to carry out a specific task or realistic scenario while sharing their thoughts and experience out loud as the task is completed. All task-based questions should be simple and non-leading so as not to bias the participant’s response or taint the test results.
Post-session questions — This is when the participant is asked to share their overall impression of a given feature or system. Like their pre-session counterparts, these questions should remain general, but focus on the tested system rather than participant information.
It’s time for the test! First, there’s a warm welcome to make the participant feel at ease and next we begin with our pre-determined set of questions. After each test, we make sure to summarize our findings and observations, highlighting the likes/dislikes, the expected/unexpected and pinpointing the friction points as well as the successes.