Remote moderated usability testing: A participant and a facilitator are in different locations and connect through real-time two-way communication. It works well when participants are hard to find, travel budgets are low, and time frames are tight. The downsides are that body language cannot be observed, it is harder to build rapport with participants, and there may be unexpected distractions (doorbells, children, etc.). Since the non-verbal cues that signal a participant is struggling are unavailable, follow-up questions and probing during the test are especially important.
Tools: You can either set up the necessary tools yourself or use an all-in-one moderated testing service.
If you set things up yourself, you will need:
- A real-time communication tool. Skype, Webex, GoToMeeting, JoinMe, or any other tool will do, as long as it allows audio and visual communication between the participant and the facilitator.
- A screen capture and recording tool. There is plenty of software to choose from (e.g., Camtasia, SnagIt, Adobe Captivate); any of it is suitable as long as it records the audio and video of the participant’s screen along with the moderator’s voice.
- A video editing tool. If you need to communicate findings to other team members, you will want to show them only the clips in which users discover usability issues, not full test recordings. Tools such as Camtasia, Movie Maker, or iMovie let you cut recordings down to the parts that need attention.
Alternatively, there are paid all-in-one testing services:
- Validately (https://validately.com): makes it easy to schedule sessions, find participants who match your user personas, observe and question participants in real time, flag notable moments during a session, and jump to those flags when watching the recordings later.
- UserTesting Pro Version (https://www.usertesting.com/plans): an all-in-one solution that supports remote moderated testing either with your own participants or with UserTesting.com panel members, and allows highlighting important moments during a session.
These tools make testing easier by giving you access to large panels of users and by simplifying incentive payments and session scheduling. However, if your budget does not allow for them, you can get the same quality of results by doing everything yourself.
Basic moderated user testing script (in-person or remote):
- Define the goals of the study – whether you are testing the full interface or particular aspects of it.
- Identify the tasks you would like participants to perform and write them down as short scenarios.
- Recruit some representative users.
- Conduct a pilot study (a session or two before the real test) to find out whether the tasks are clear, the time frames are realistic, and the equipment works as expected.
- Greet each participant and introduce them to the study and to thinking aloud: explain that they should express their thoughts verbally, and that if they have a question or do not understand something they should say so, but they will not get answers until the session is over.
- Demonstrate thinking aloud. Since thinking aloud is not something people do naturally, demonstrate it by first doing something yourself while thinking aloud, then ask participants to try a simple task, e.g., finding something on Google while thinking aloud. Practice tasks should be simple so that participants feel confident in their thinking-aloud skills.
- Ask the participants to carry out the tasks you identified earlier while thinking aloud, and observe them. It is good practice to record sessions.
- Discuss the findings with your team and consider how the interface could be changed to address the issues observed.
Number of participants: Five participants are recommended for qualitative testing, since that number gives the best return on investment. However, if there is more than one clearly different target group (e.g., teachers, students, and parents), you will need 3-4 participants per group, depending on the overlap between the groups; essentially, you are running a separate test for each audience.
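As a rough sketch, the participant-count heuristic above (about 5 for a single audience, 3-4 per clearly different group) can be written as a small helper. The function name and the default of 4 per group are illustrative assumptions, not part of any standard methodology:

```python
def recommended_participants(num_groups: int, per_group: int = 4) -> int:
    """Illustrative heuristic (an assumption, not a standard):
    ~5 participants for a single audience; 3-4 per group when there
    are several clearly different target groups (default here: 4)."""
    if num_groups < 1:
        raise ValueError("need at least one target group")
    # With a single group, 5 participants give the best return on investment.
    # With several groups, each audience is effectively a separate mini-test.
    return 5 if num_groups == 1 else num_groups * per_group

print(recommended_participants(1))               # 5
print(recommended_participants(3))               # 12 (teachers, students, parents)
print(recommended_participants(3, per_group=3))  # 9 (groups overlap heavily)
```

The exact numbers matter less than the principle: budget participants per distinct audience, and reduce the per-group count when audiences overlap.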
Tasks: Rather than telling participants to do something without any context or explanation, it is better to provide a short scenario that sets the context and helps them fully understand the task. An example task scenario: “You are going to Florida for a week on the 20th of June. You would like to find the best deal for a hotel”.
Good task scenarios should:
- Be realistic: they should not ask participants to do something they would not normally do; otherwise they might complete the task without realistically engaging with the interface. Overly specific tasks (e.g., “Buy a size 12 red skirt of X brand”) are often unrealistic for many participants, who might not search that way and might browse by style instead. A better scenario would be “Buy a skirt for under $20”. The focus is not on how successful participants are at finding things, but on how they use the interface to achieve their goals.
- Encourage action: participants should actually perform the task, not just describe how they would do it (e.g., instead of “Where would you click to find X?”, say “Find X”). Participants should not answer in words; you need to observe the whole process of completing the task.
- Avoid hidden clues in task descriptions: for example, if a sign-up button is labeled “Sign up for a newsletter”, the task should ideally not be “Sign up for the events newsletter”, but rather “Find a way to get email updates about upcoming events”. However, do not be too vague either, since that makes tasks hard to understand; all the information the participant needs to complete the task must be provided.
Poorly written tasks focus too much on making participants interact with a specific feature instead of observing how they choose to use the interface.
What to look at: Focus on the paths participants take to achieve their goals, the problems they encounter, and the comments they make.
Talking to participants: Talk less and observe more; the key is observation, not conversation. Before probing a participant further, always consider whether you already have enough information from observation alone and whether you would truly benefit from asking questions. Appropriate moments to interrupt are when the participant has offered a comment, asked a question, or naturally paused their work.
It is important not to provide answers or hints on how to complete tasks. If a participant is unsure how to do something, instead of giving advice you can say, “What would you do if you were doing this at home on your own?”; always reflect participants’ questions back to them. This is the hardest part of being a facilitator.
Thinking aloud: It is good practice to encourage participants to think aloud during the process. This way you can discover misconceptions and misinterpretations, and understand why users struggle with particular parts of the interface. However, do not push participants too hard to think aloud at all times, since that makes the session unnatural.