But it’s not a panacea. The fact is, there are many questions that qualitative testing either doesn’t answer well or isn’t the most efficient way to answer. I cover some of them in my A Faster Horse post.
The trick is knowing which questions you can answer by listening to your users and which questions need a different methodology.
Unfortunately, one of the most important questions people want answered isn’t particularly well suited to qualitative testing.
If I Build It, Will They Buy?

I get asked a lot whether users will buy a product if the team adds a specific feature. Sadly, I always have to answer, “I have no idea.”
The problem is, people are terrible at predicting their own future behavior. Imagine that somebody asked you whether you were going to buy a car this year. Now, for some of you, that answer is almost certainly yes, and for others it’s almost certainly no. But for most of us, the answer is, “It depends on the circumstances.”
For some, the addition of a new feature - say, an electric motor - might be the deciding factor, but for many of us the decision to buy a car depends on a lot of factors, most of which aren’t controlled by the car manufacturer: the economy, whether our current car breaks down, whether we win the lottery or land that job at Goldman Sachs, etc. Other factors are under the car company’s control but aren’t related to the feature: maybe the new electric car isn’t the right size, isn’t in our price range, or isn’t our style.
This is true for smaller purchases too. Can you absolutely answer whether or not you will eat a cookie this week? Unless you never eat cookies (I'm told these people exist), it’s probably not something you give a lot of thought to. If somebody were to ask you in a user study, your answer would be no better than a guess and would possibly even be biased by the simple act of having the question asked.
Admit it, a cookie sounds kind of good right now, doesn’t it?
There are other reasons why qualitative testing isn't great at predicting future behavior, but I'm not going to bore you with them. The fact is, it's just not the most efficient or effective method for answering the question, "If I build it, will they come?"
What Questions Can Qualitative Research Answer Well?

Qualitative research is phenomenal for telling you whether your users can do x. It tells you whether the feature makes sense to them and whether they can complete a given task successfully.
To a smaller extent, it can even tell you whether they are likely to enjoy performing the task, and can certainly tell you if they hate it. (Trust me, run a few user tests on a feature they hate. You'll know.)
This obviously has some effect on whether the user will do x, since they’re a lot more likely to do it if it isn’t annoying or difficult. But it’s really better at predicting the negative case (i.e., the user most likely won’t use this feature as you’re currently building it) than the positive one.
Sometimes qualitative research can also give you marginally useful feedback when your users are extremely likely or unlikely to make a purchase. For example, if you were to show them an interactive prototype with the new feature built into it, and all of your participants were exceptionally excited or incredibly negative about the feature, you might be able to make a decent judgement based on their immediate reactions.
Unfortunately, this, in my experience, is the exception, rather than the rule. It’s rare that a participant in a study sees a new feature and shrieks with delight or recoils in horror. Although, to be fair, I’ve seen both.
What’s the Best Way to Answer This Question?

Luckily, this is a question that can be pretty effectively answered using quantitative data, even before you build a whole new feature. A lot of companies have had quite a bit of success with adding a “fake” feature or doing a landing page test.
For example, one client who wanted to know their expected purchase conversion rate before doing all the work to integrate purchasing methods and accept credit cards simply added a Buy button to each of their product pages. When customers clicked the button, they were told that the feature wasn’t quite ready, and the click was registered so that the company could tell how many people were showing a willingness to buy.
By measuring the number of people who thought they were committing to a purchase, the client was able to more effectively estimate the number of people who would actually purchase if given the option.
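To make that concrete, here is a minimal sketch of how you might turn fake-door clicks into a conversion estimate. The numbers and the `conversion_estimate` helper are my own illustration, not anything the client actually used, and a Wilson score interval is just one reasonable way to put error bars on a small rate:

```python
import math

def conversion_estimate(clicks: int, visitors: int, z: float = 1.96):
    """Estimate purchase-intent conversion from a fake-door test.

    Returns (rate, low, high): the observed click rate plus a Wilson
    score interval, which behaves better than the plain normal
    approximation when the rate is small.
    """
    if visitors <= 0:
        raise ValueError("need at least one visitor")
    p = clicks / visitors
    denom = 1 + z**2 / visitors
    center = (p + z**2 / (2 * visitors)) / denom
    margin = (z / denom) * math.sqrt(
        p * (1 - p) / visitors + z**2 / (4 * visitors**2)
    )
    return p, center - margin, center + margin

# Hypothetical numbers: 120 "Buy" clicks out of 4,000 product-page visitors.
rate, low, high = conversion_estimate(120, 4000)
print(f"observed rate: {rate:.1%}, 95% interval: {low:.1%} to {high:.1%}")
```

Keep in mind that clicking a Buy button is cheaper than actually paying, so even this figure is best treated as an upper bound on real purchase conversion.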
The upshot is that the only really effective way to tell if users will do something is to set up a test and watch what they actually do, and that requires a more quantitative testing approach.
Are There Other Questions I Can’t Answer Qualitatively?

Yep. Lots of them. I’ll probably cover them at some point in the future if people are interested. Feel free to ask about other specific questions in the comments, and I’ll try to let you know what sorts of testing methods work best for answering them.
Enjoy the post? Thanks!
How about following me on Twitter?