Her team had spent weeks creating an Axure prototype of a new Android media player on a PC.
Last week, she’d demonstrated the prototype to the CEO. He was so convinced by the demo that he immediately gave Rebecca the green light for a usability test, and he was already talking to the press about the product launch.
Naturally, Rebecca needed to usability test the app on a real Android device (rather than on the PC on which it had been created and demonstrated). Getting the prototype to run on the Android device was easy — with its HTML underpinnings, Axure had taken care of that. The problem was that the prototype didn’t know how to deal with gestures. Clicks, drags and swipes would work, but how was she going to deal with all the long presses and two-finger pinches that the new app used — let alone the ‘shake’ gesture to call up a random song?
Here’s how we can come to Rebecca’s aid.
Meet the Wizard of Oz
More precisely, meet Wizard of Oz usability testing.
There are various forms of Wizard of Oz testing. In one form we shamelessly fool the participants into believing the system is doing something that it is not. For example, Wizard of Oz testing was used in developing early speech recognition software. The original recognition algorithms were far too slow and unreliable so while the techies sorted that out, the designers got on with creating the interface. They set up a usability test where the participant talked into a microphone and an expert typist, listening on headphones next door, made the right words appear on the participant’s screen using a pair of slaved monitors. This test design meant that the interface could be built in parallel to the core technology.
But there’s another type of Wizard of Oz test that’s less well known. With this type of test, we exploit our human ability to imagine things.
Let’s see how this might work using our media player example.
Imagine Rebecca’s media player displaying a list of songs in a playlist. She wants this app to play the song when a user taps on it, but delete the song from the playlist if the user does a long press. There are two steps we need to take: first, we need to create ‘hooks’ in the Axure prototype to support these actions; and then we need to exploit those hooks in the usability test.
Step 1: The Axure prototype
When developing the Axure prototype, we set up two ‘mutually exclusive non-conditional cases’ for a song’s OnClick event. In general terms, this is a technique whereby we attach two (or more) cases to an event, none of which has a condition. This forces Axure to ask the user which case to execute when the event is detected. In this way, we effectively define the behaviour of ‘songs’ so that Tap = ‘Play’ and Long press = ‘Remove from playlist’. In both cases, all the user ever actually does is click on a song.
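Axure handles all of this through its visual interface, but it can help to see the underlying idea in code. The sketch below (TypeScript, with illustrative names — this is not Axure’s real internals) shows the disambiguation logic: an event with several non-conditional cases cannot be resolved automatically, so the runtime asks the user which case to fire.

```typescript
// One 'case' on an event: a menu label plus the action it triggers.
type Case = { label: string; action: () => void };

// Fire an event: if there is exactly one case, run it directly;
// if there are several non-conditional cases, ask the user which
// one to run (Axure shows a small pop-up menu at this point).
function fireEvent(cases: Case[], choose: (labels: string[]) => number): string {
  const index = cases.length === 1 ? 0 : choose(cases.map(c => c.label));
  cases[index].action();
  return cases[index].label;
}

// The song's OnClick event with two non-conditional cases,
// mirroring the set-up described above.
let playlist = ["Song A", "Song B"];
const songCases: Case[] = [
  { label: "Tap: Play", action: () => { /* start playback */ } },
  {
    label: "Long press: Remove from playlist",
    action: () => { playlist = playlist.filter(s => s !== "Song A"); },
  },
];

// The facilitator saw the participant long-press, so picks the
// matching menu item on their behalf.
const chosen = fireEvent(songCases, (labels) =>
  labels.indexOf("Long press: Remove from playlist"));
```

In the real test, of course, it is a human — the participant, prompted by the facilitator — who makes that menu choice, and that is exactly what lets one physical click stand in for several different gestures.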
Step 2: The usability test
When facilitating the usability test, I tell participants that they will be using a prototype media player app and — because it’s just a prototype — they’ll need to use a bit of imagination at times, but should assume that they can do all of the things that they’d normally do when using a mobile app.
In the first task, I ask the participant to remove a song from a playlist. I notice that he tries to drag the song off the screen with his finger, so I ask him what he’s trying to do. He replies: “I was trying to drag it off the screen”.
So I say: “In this system, that action wouldn’t have any effect. What would you do next?”
He then tries a long press on the song, and the Axure prototype pops up a little menu with two items: “Tap: Play” and “Long press: Remove from playlist”. This is where our “mutually exclusive non-conditional cases” come into play.
I then ask, “Please tell me what you were doing there”.
He replies: “Pressing on the song for a while”.
At this point I have detected a behaviour that is supported in the prototype (in this case, a long press), so I ask the user to select the ‘Long press: Remove from playlist’ item in the menu so that the Axure prototype removes the song. I then confirm that this was what the participant expected.
Using similar functionality in Axure (and a few more insider techniques), I’m able to simulate the app’s other gestures, like a two-finger pinch and even a shake, while keeping the test reasonably realistic and flowing well.
Limitations of the approach
This kind of Wizard of Oz test generally works well. It will often quickly tell you what you want to know and at relatively low cost. Of course, it does have some limitations:
- The usability test has to be carefully designed and facilitated by someone who is an expert in this type of testing. There are lots of pitfalls that an inexperienced tester can easily fall into, such as asking leading questions (“Was that a long press you just did?”) and inadvertently providing extrinsic feedback (like tacitly suggesting or reinforcing ‘correct’ actions in the participant through body language and tonality).
- The prototype and usability test must be designed simultaneously so that the test tasks we want to use and the associated special control functionality are supported inside the prototype.
- Time on task can be difficult, or even impossible, to measure accurately.
The wider lesson
Although I've focussed on the specific problem of usability testing a mobile prototype on a native mobile device, the general lesson applies more widely. Wizard of Oz testing allows us to perform usability tests much earlier in the prototyping cycle than is commonly the case. This helps de-risk development projects by catching show-stopping usability problems much sooner and, therefore, more cheaply.
About the author
Dr. Ritch Macefield (@Ax_Stream on Twitter) holds a BA in Creative Design, an MSc in IT/Computing and a PhD in HCI. He is an acknowledged expert in Axure having led Axure projects for clients like Thomson-Reuters, Dell computers and Vodafone. He was a panel speaker at Axure World 2012, contributed to the book “Axure RP 6 Prototyping Essentials” and founded the Axure RP Pro LinkedIn Group.