“If I had 20 days to solve a problem,”
…observed Albert Einstein, illuminating an approach to research that may come as a shock to some customer and user researchers in the corporate world…
“I would take 19 days to define it.”
In a culture somewhat preoccupied with solutions, the idea of deliberating over a research problem may seem heretical to some. Of course, logic and common sense tell us that you can’t arrive at a solution if you don’t understand the problem. And turning that thought around, one may conclude that customer-focused research that fails to get to grips with a meaningful problem is pretty much destined to arrive at no useful solution. And that’s a rather worrying thought because companies spend a lot of money on customer and user research and it would be nice to think it was solving something.
According to the Council of American Survey Research Organizations (CASRO), an estimated $6.7 billion is spent in the USA, $2 billion in the UK, and $18.9 billion globally, each year on survey research (research concerned with measuring the opinions, attitudes, perceptions and behaviors of population samples).
Alas, most of the research is inadequate because it doesn’t move knowledge forward. In fact, Rohit Deshpande, Harvard Business School professor and former executive director of the Marketing Science Institute, estimates that 80% of all customer research serves only to reinforce what companies already know, rather than testing or developing new possibilities.
Can we detect such inadequate research? Fortunately, we can. Brace yourself, this might ring some bells. In his book Research Strategies, William Badke explains that inadequate research:
- Merely gathers data and regurgitates it.
- Deals in generalities and superficial surveys, avoiding depth and analysis.
- Asks no analytical questions.
- Does not advance knowledge, but is happy to summarize what’s already known.
- Is boring.
I sense you are nodding. Most of us have experienced research presentations or reports that bear some of these hallmarks. We’ve sat behind the one-way mirror fighting ennui as the moderator goes through the motions of rolling out the same old method, asking the same old questions, revealing the same old ‘insights’, and generally inspiring what writer Douglas Adams would have called an acute attack of no curiosity.
But then, from time to time, we get to experience research that blows us away: it’s sharp, incisive and positively exciting, and we want to tell everyone about it.
So why are some research studies so lame while others are so inspiring?
It’s obvious from the list above that inadequate research lacks a clearly defined and interesting research problem. There are two main reasons why this happens with consumer research:
- The superficial motivation to hear what people think about X, or whether they prefer X, Y or Z (a common but too-literal understanding of what is meant by ‘Voice of the Customer’) is often enough to meet an internal requirement to “do some research” even though it’s not addressing anything interesting. A ‘problem’ may be assumed to exist, but it turns out to be next to impossible to find anyone who can actually write the research question on a whiteboard and put a question mark at the end of it.
- In lieu of an actual research question, customer and user research often ends up being ‘method led’ or sometimes ‘technology led’. That is to say, we do a survey or a card sort because doing surveys or card sorts is what we do. We do eye tracking because we have eye tracking equipment. Hammer therefore nail.
But Einstein knew a thing or two about doing research. Not for nothing is his name synonymous with genius. He knew that the research problem dictates the research method and the study design, and that every aspect of an investigation follows from that point. So let’s take his approach a bit further.
What if you had ‘19 days’ to define a research problem? How might you go about it?
Here are five techniques you can use to help you better understand a research problem, determine clear objectives, and sharpen the research question:
- “Move off the solution”
- Find out what other stakeholders need to know
- Deconstruct the construct
- Measure something
- Shake out the issues
1. “Move off the solution”
When a client or research sponsor asks you for help, deliberately shift the conversation away from solutions as quickly as you can. It might not be what your client expects, but it’s important that you don’t start out talking about solutions, methods or outcomes. Not only can you not possibly know what the solution should be before knowing the problem, but you will start to close down possibilities and paint yourself into a corner.
Starting with the solution is the very definition of ‘method led’ research. It’s where you receive a request such as, “We need an online survey (or focus group, field visit, usability test etc.)” and you simply comply. But remember, your main goal is not to ‘do some research’, it’s to help your client to succeed. If you too eagerly accede to a misguided request you may pick up Brownie points for appearing compliant or for racing ahead to data collection, but you’ll fail your client in the long run when it turns out your data don’t address the issues the development team were worrying about. Of course, every now and again you’ll get it right and hit on a winning solution using this approach, but only in the same way that a stopped clock tells the correct time twice a day.
In his brilliant book, Let’s Get Real or Let’s Not Play, business-development consultant Mahan Khalsa introduces a phrase that I really like: Move off the solution. I like it because it describes a simple but clever strategy. It’s not that you’re avoiding the solution or won’t be able to think of a solution; it’s just not time to talk about it yet. I can help you more effectively, make better use of your valuable resources, design a better study, collect more useful data, and make you look good, if I first understand your problem.
If your initial contact doesn’t know what the problem is, find out who does and talk to them. No matter how keen you are to impress your client, or how interesting your potential solutions might be — or even if you think you can anticipate ‘the answer’ — keep it under wraps for now.
Top tip: One way of shifting from the solution to the problem is to first acknowledge that “Yes, usability testing (or whatever method your client is asking for) is one of a number of different techniques we sometimes use. If we ran the best usability test imaginable, what problem would it solve for you?”
2. Find out what other stakeholders need to know
It’s very tempting — especially if timelines are short — to simply take your brief from the first meeting, or from a Request for Proposal (RFP) and assume you’ve learned all there is to know about a project. In reality, an initial brief can be quite weak, especially for internal researchers who may not get a formal RFP. Perhaps your sponsor has not really had an opportunity to prepare anything or think things through other than to know that it’s time for some research. But, rather than guessing or making assumptions, there’s an opportunity for a researcher to add real value right at the outset, by helping to define and sharpen the research problem.
Keep in mind that the person who commissions your work and gives you the initial brief is representing a larger development team, and the team has a lot of knowledge and experience that you can, and must, tap into. Different disciplines within the team are likely to need your research data in order to make business, design or engineering decisions, so you need to find out what they want to know and how they think about the research problem.
Begin by creating a list of possible stakeholders, and then arrange to meet with them. Your list is likely to include marketing experts, market researchers, engineers, designers, customer support agents, user experience specialists, technical writers, business analysts, and even legal experts. Find out what they know about the research problem, how it is being experienced, what’s been tried already, what will happen if nothing is done, why this problem, why now, what success will look like, and what each person’s needs, wishes and concerns are. Find out what the pressure points are, identify any constraints, discover the timeline and the budget, get the background and the history. Peel back the layers to get at what’s motivating the call for help. Seeing the research requirements through the eyes of these key players will help you understand the kind of research that’s needed.
It’s not uncommon for customer and user research activities to pass right by a development team, because no one thought to share the plan or invite contributions from the different disciplines. In my experience, members of a development team are always pleased to be consulted, and value the opportunity to contribute. After all, when the dust has settled, these are the players who are going to put your design recommendations into action, so you need them involved from the beginning. This is not only a necessary way to see a problem in a new light, but it is also a great way to connect with the team and get early buy-in.
Top tip: Remember that your goal here is to better understand the initial research problem. However, you will invariably collect a shopping list of other research wants and needs. Feed these back to your client and work together to determine the priorities. But at all costs, resist the temptation to try addressing every need in the same research study — that’s a recipe for disaster.
3. Deconstruct the construct
Another way of better defining a research problem is to deconstruct the phenomenon that is being investigated.
Most of the phenomena that market researchers and UX researchers are likely to study are constructs. That is to say they do not exist in any physical sense and cannot be directly observed. Usability is an example of a construct. You can’t weigh it or put it in a box like you can with say, pencils or armadillos. Quality is also a construct. So are emotions, desires, intelligence, attitudes, preferences and the propensity to buy. This doesn’t mean we can’t research them or measure them, but in order to do so we have to deconstruct them to reveal their constituent elements and then find ways to operationalize those elements. Not only is this an essential step in designing research, it’s really the essence of what’s meant by ‘drilling down’ into a problem.
The construct ‘quality’ gives us a good example. You know what quality is and how to judge it, but have you ever tried defining or explaining it to someone? Of course, you could simply ask customers what they think of the quality of a product, but you can have no idea what they are really responding to or whether their concept of quality is the same thing that you’re talking about. Even experts can’t agree on a definition of quality, though there have been some very useful attempts (‘fitness for use’ is my favorite). In fact, if you were thinking that quality is quite straightforward, take a look at Robert Pirsig’s exploration of the metaphysics of quality in his now classic work Zen and the Art of Motorcycle Maintenance. Thankfully, in the world of product development and system design, we don’t need to delve as deeply as Pirsig (who drove himself to a mental breakdown in his pursuit of understanding), but we do need to unpack the construct if we are to design a study around it.
When we do this we get to see that quality is not some homogeneous blob of stuff but is an artificial construct made up of internal components such as performance, features, reliability, conformance to standards, durability, serviceability and aesthetics. Suddenly the research problem starts to look clearer, and immediately we can see ways of measuring the components. The same holds true for usability — the research problem leads us directly to the test design when we deconstruct the concept into elements such as system effectiveness, efficiency and satisfaction (following ISO 9241-11).
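As a sketch of what this deconstruction might look like in practice, here is a minimal illustration of operationalizing ‘usability’ into the three ISO 9241-11 components mentioned above. The task data, field names and thresholds are hypothetical, invented purely for illustration; they are not from any real study.

```python
# A minimal sketch of operationalizing the 'usability' construct into
# the three ISO 9241-11 components: effectiveness, efficiency and
# satisfaction. All task data below are hypothetical.
from dataclasses import dataclass

@dataclass
class TaskResult:
    completed: bool           # effectiveness: did the participant finish?
    time_on_task_s: float     # efficiency: how long did it take?
    satisfaction_1_to_7: int  # satisfaction: post-task rating

def summarize(results: list[TaskResult]) -> dict:
    """Turn raw task observations into the three ISO 9241-11 components."""
    n = len(results)
    return {
        "effectiveness": sum(r.completed for r in results) / n,           # completion rate
        "efficiency_s": sum(r.time_on_task_s for r in results) / n,      # mean time on task
        "satisfaction": sum(r.satisfaction_1_to_7 for r in results) / n,  # mean rating
    }

results = [
    TaskResult(True, 42.0, 6),
    TaskResult(True, 55.0, 5),
    TaskResult(False, 90.0, 3),
    TaskResult(True, 47.0, 6),
]
print(summarize(results))
# {'effectiveness': 0.75, 'efficiency_s': 58.5, 'satisfaction': 5.0}
```

The point is not the code itself but the shift it forces: once the construct is broken into components, each component suggests an observable measure, and the study design starts to write itself.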
Top tip: Begin by reading about the construct under investigation. Don’t simply make up, or guess at, component elements. These concepts and constructs are the result of decades of work by psychologists and standards organizations and the constituent elements, and how to measure them, are well documented.
4. Measure something
“We’re not here to admire the architecture, Hodgson, we’re here to measure it.” You may not be using a tape measure or a theodolite, but the admonition received by a rather naïve 16-year-old trainee draughtsman (caught gawping at the interior of a 14th-century church) still rings true. Whatever kind of customer or user research you are doing, and no matter whether your data will be quantitative or qualitative, you are measuring some aspect of human behavior.
Understanding a problem and understanding what can be measured are inextricably linked such that focusing on the measurements you will make is a way of clarifying the nature of the problem. So ask questions like:
- What specifically do we need to measure?
- What kinds of metrics will differentiate specific concepts or different levels of a variable?
- What will my dependent variable be and what will I need to manipulate in order to detect differences?
- What variables will I need to control?
- Will these kinds of data convince the development team?
- Can I just use rating scales or are there some objective behavioral measures I can use?
- How will I analyze the data?
- How can I connect my metrics back to the business?
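To make questions like these concrete, here is a small sketch of one common behavioral metric: task completion rate reported with an adjusted-Wald (Agresti–Coull) confidence interval, a standard way to present small-sample proportions. The success and trial counts are hypothetical.

```python
# Completion rate with an adjusted-Wald (Agresti-Coull) 95% confidence
# interval -- a common way to report small-sample usability proportions.
# The counts used below are hypothetical.
import math

def completion_rate_ci(successes: int, trials: int, z: float = 1.96):
    """Return (point estimate, lower bound, upper bound) for a proportion."""
    # Adjusted-Wald: add z^2/2 to the successes and z^2 to the trials
    # before computing the ordinary Wald interval.
    n_adj = trials + z**2
    p_adj = (successes + z**2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return p_adj, max(0.0, p_adj - margin), min(1.0, p_adj + margin)

p, lo, hi = completion_rate_ci(successes=7, trials=10)
print(f"completion rate ~ {p:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

An interval this wide is itself informative: it tells the development team how much (or how little) ten participants can really establish, which is exactly the kind of evidence the “will these data convince the team?” question is probing.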
Remember, if you’re not measuring something … you’re just gawping!
Top tip: Don’t just regurgitate the raw data or report obvious descriptive statistics. Analyze the data properly. There are hidden gems. Interrogate the data and make it work for you.
5. Shake out the issues
Customer research can often require a sizable investment in time and costs, and because the outcome will dictate the direction of a project and influence its success, there’s too much at stake to risk mishaps or misunderstandings happening during the test. Although it seems increasingly common in the corporate world to skip this step, you should always conduct a pilot test prior to commencing the full research project. The term ‘pilot’ derives from the Greek word for rudder, and refers to steering and adjusting the course of something. TV shows are always piloted to get early audience reaction, engineers test jet engines on the ground before they use them to fly aircraft, and military leaders send out an advanced scouting party to check the lie of the land before any major action, all so that they can make adjustments to the plan. Doing research is no different. In fact, we would be remiss in our obligations to our client team if we jumped straight into a ‘stage production’ study without first giving everything a good shakedown.
Typically, a research pilot test is conducted quite late in the preparation stage and resembles the kind of full dress rehearsal that theatrical actors would perform. It is typically used to check that the test design will return valid data, give the test administrators and data loggers an opportunity to practice, make sure the timing and logistics are in order, and to check for any potential glitches in testing or recording equipment.
But we can also run a much earlier and much less formal pilot to help us better understand the research problem. This ‘pre-pilot’ is more akin to actors doing an early read-through of a script. It requires no costumes or stage props, virtually no budget, and no recording equipment or testing lab. It’s not about collecting real data; it’s just about airing the research problem and getting it in front of some customers or users to help flush out any issues before advancing further.
The Chinese have a phrase: “Hitting the grass to startle the snake.” This is the same thing. It’s a way of ‘hitting’ the problem to see what jumps out, and it can be a useful way of testing any assumptions you might have made, and discovering any previously unknown facets to the problem, prior to moving on to the test design step.
It’s also a good way to identify any stakeholders you might have missed. For example, a while back we did a user research study for an organisation that required store visits to generate personas. During the planning phase, we made sure that senior management were aware of the research. At the time, the organisation was in the midst of a merger. As we started preparing for our pre-pilot, word came back down the chain to delay the site visits because store managers were concerned that their staff would see us as management consultants in search of cost savings. If staff thought we were doing time and motion studies as part of a downsizing exercise we would create confusion and anxiety, and we would be unlikely to get any good data. By planning an early pre-pilot we created an opportunity for this potentially damaging issue to reveal itself.
Top tip: If you’re planning a pilot test or a pre-pilot, remember to include members of the development team and invite them to join you so they can give you feedback and help shape the final test design.
The 140-character version
I hope you find these techniques useful. You don’t need to use all of them, but using one or two of them should help you and your development team get a better grasp of the problem or underlying question that is motivating the need for research.
I want to leave the final word to my colleague, psychologist Dr. Tendayi Viki (@tendayiviki) who must be on the same wavelength as me. Just as I was putting the final touches to this article, he tweeted: “Never, ever, ever, talk about your solution, before you understand the customer's problem.”
Good advice. Same thought. Fewer words.
About the author
Dr. Philip Hodgson (@bpusability on Twitter) holds a PhD in Experimental Psychology and is a member of the User Experience Professionals' Association, the Association for Psychological Science, the Industrial Designers Society of America, and the Association for the Advancement of Medical Instrumentation. He has over 25 years of experience as a researcher, consultant, and trainer in product usability, user experience, human factors and experimental psychology.