“Remote, unmoderated testing is as reliable as lab-based testing”
My view is that remote, unmoderated testing is actually less reliable than lab-based testing and that in some situations it's positively misleading.
There are two reasons for this.
The first problem with remote testing is that you lose those “teachable moments”. These are the critical moments when a participant does something surprising or unusual, something you just didn't expect. In a lab, these are the moments when we can probe deeper, get behind the behaviour and rummage around in the participant's head. The participant can teach us about the way they see the world. With a remote test, all you have is a data dashboard, or perhaps a video, and these act as an invisible wall between you and your user. Reliability isn't just about numbers: it's about understanding why a problem is a problem.
But there's a second reason to do with reliability which I think is actually more important: remote usability testing has the potential to be positively harmful to creating great user experiences. Here's why. In many design teams, a lab-based usability test is the only chance the design team gets to see real users. Jared Spool's research shows that the most effective design teams get two hours of exposure to users every six weeks. Remote, unmoderated testing puts a wall between you and your users. It encourages you to look at the data first and the users second. Exposure to users — not just exposure to user data — is something that remote, unmoderated testing just can't recreate, and it's why lab-based testing is still the canonical research method in user experience. Would you seriously replace the one-way mirror of a usability lab with the brick wall of a remote, unmoderated usability study?
“Report only the 3-5 most serious problems to the client. Tell them that you would be happy to give them more problems to work on, but ONLY after they had fixed the most serious ones.”
To give you an idea of how patronising this statement is, let me use an analogy.
Imagine you're thinking about re-decorating your house and you contact an interior designer for advice. He turns up at your house and flounces from room to room, making notes and tut-tutting. Eventually, he sashays over to you and says, “There are many things wrong with your house. However, I will tell you only the 3-5 most serious design crimes. When you have fixed those, I will tell you what to work on next.”
You would probably punch him in the face.
The reason this philosophy is so unhelpful is that user experience doesn’t live in a bubble under the control of an external consultant. Design teams have a range of stakeholders who want to influence the product, including marketing, product design, legal, manufacturing, senior managers and so on. If you tell them only the 3-5 most serious problems, you are hiding other problems from them that will almost certainly have an impact on these other discussions. For example, “difficulty navigating” might not be one of your top 5 issues, so what happens when the dev team wants to discuss possible changes? Important data is being hidden from them.
You don’t know what might come up in the future in other meetings where people need to know what's been discovered.
“You don't need to follow an iterative process: one-off usability tests are still useful.”
My position is that one-off usability tests aren't just useless — they're worse than useless.
The reason for this is very simple. It's because you play into the hands of managers and design teams who incorrectly conflate usability testing with user acceptance testing. You've condemned yourself to being the UAT guy. A usability test will never make anything more usable. A usability test is great for identifying problems. But, it’s poor at identifying solutions. It's only when you take action on the results of the test, iterate the design, and then check your solution with another test that you begin to make progress.
In contrast, we know that iterative design is the cornerstone of usability. And this is where it gets interesting. Because, with the growth of Agile and Lean, people outside our field are beginning to get it too. Don’t let this moment pass. If you continue to encourage bad behaviour in your clients by running one-off usability tests then you're turning your back on a critical moment in our industry. Iterative testing as part of Agile — well, it's like Adam's first words to Eve: “Stand back! I don't know how big this thing's going to get.”
And that's why usability testing must be iterative.
“Users have valuable insight into why they struggle”
One of the books that's had most influence on the way I do usability testing was written by someone who has never been part of the user experience field. His name is Timothy Wilson and the book is called Strangers to Ourselves. In the book, Wilson, who's a psychologist, marshals research study after research study that shows just how poor we are at introspecting into the reasons for our own behaviour. “When it comes to maintaining a sense of well-being,” Wilson notes, “each of us is the ultimate spin doctor”.
More recently, Daniel Kahneman has updated the list of studies in another book, Thinking, Fast and Slow. You might not have heard of Timothy Wilson, but you will have heard of Daniel Kahneman. He won a Nobel Prize for his work in this area, showing we have very poor insight into our own behaviour.
It's a hard truth to acknowledge but we just don’t know ourselves very well. It's the same reason most of us describe our driving skills as better than average. It's tempting to believe a participant when he explains why he missed the navigation bar. It's not that the participant is lying — although that's always a possibility — it's just that our understanding of our own behaviour is an illusion.
If you choose to take the opposite view, you're like a creationist denying evolutionary theory. The science is stacked against you. You might want to believe that users know why they struggle, but they don't.
It's not what users say. It's what users do that matters.
“If you’ve found 90 serious or critical problems in a usability test, you shouldn’t report them all in the usability test report”
When I hear statements like this I need to check my watch to make sure I've not travelled back to the 1990s. This is a curious world, where the usability testers “own” user experience and create test reports that we throw over the wall to our clients. Well, it didn’t work then, and it doesn’t work now. The only way to effectively analyse usability problems is by carrying out an affinity sort with the design team. The design team are just as responsible for prioritising the problems as you are.
Listen to that statement again: “90 serious or critical problems”. I've never tested a system that bad. If such an awful system exists, the design team need to know it. They need to work with you to analyse the problems. It's only through that experience that you'll change hearts and minds. Otherwise you're being complicit with them in avoiding the truth.
Some people in our field spend a lot of time claiming that we should make written usability test reports more usable. In my view, this is nonsense. Not because it’s wrong, but because it belongs to a different age. A usability report these days is a collaborative meeting between you and the design team, not a 50-page Word document.
What's your view?
I'm being deliberately provocative here and I can of course see some merit in the opposing view — but not enough merit to change my opinion. I'd like to hear your thoughts for or against. Please try to convince me in the comments.
About the author
Dr. David Travis (@userfocus on Twitter) holds a BSc and a PhD in Psychology and is a Chartered Psychologist. He has worked in the fields of human factors, usability and user experience since 1989 and has published two books on usability. David helps both large firms and start-ups connect with their customers and bring business ideas to market. If you like his articles, you'll love his online user experience training course.