I was talking to an engineer the other day who was describing his startup's first experience in trying to get user feedback about their new product. Since it was a small company and the product didn't exist in production yet, their goals for gathering user feedback were:
- Get information about whether people thought the product was a good idea.
- Identify potential customer types, both for marketing and further research purposes.
- Talk to as many potential users as possible to get a broad range of feedback.
- Keep it as cheap as possible!
Those are reasonable goals, but pursuing them without any user experience background leads to some predictable mistakes. To help others avoid them, I've compiled a list of five things you're almost certainly doing wrong if you're trying to get customer feedback without much experience. Even if you've been talking to users for years, you might still be doing these things, since I've seen these mistakes made by people who really should know better. Of course, this list is not exhaustive. You could be making dozens of other mistakes, for all I know! But just fixing these few small problems will dramatically increase the quality of your user feedback, regardless of the type of research you're doing.
Don't give a guided tour

One of the most common problems I've seen in customer interviews is inexperienced moderators wanting to give way too much information about the product up front. Whether they're trying to show off the product or trying to "help" the user not get lost, they start the test by launching into a long description of what the product is, who it's for, what problems it's trying to solve, and all the cool features it has. At the end of the tour, they wrap up with a question like, "So, do you think you would use this product to solve this exact problem that I told you about?" Is there any other possible answer than, "ummm...sure?"
Instead of the guided tour, start by letting the user explore a bit on their own. Then, give the user as little background information as possible to complete a task. For example, to test the cool new product we worked on for Superfish, I might give them a scenario they can relate to like, "You are shopping online for a new pair of pants to wear to work, and somebody tells you about this new product that might help. You install the product as a plug-in to Firefox and start shopping. Show me what you'd do to find that pair of pants." The only information I've given the user is stuff they probably would have figured out if they'd found the product on their own and installed it themselves. I leave it up to them to figure out what Superfish is, how it works, and whether or not it solves a problem that they have.
Ask open-ended questions

When you do start to ask questions, never give the participant a chance to simply answer yes or no. The idea here is to ask questions that start a discussion.
These questions are bad for starting a discussion:
- "Do you think this is cool?"
- "Was that easy to use?"

These are much better:
- "What do you think of this?"
- "How'd that go?"
Follow up

This conversation happens at least a dozen times in every test:
Me: What did you think about that?
User: It was cool.
Me: WHAT WAS COOL ABOUT IT?
User: [something that's actually interesting and helpful.]
Study participants will often respond to questions with words that describe their feelings about the product but that don't get at why they might feel that way. Words like "cool," "intuitive," "fun," and "confusing" are helpful, but it's more helpful to know what it was about the product that elicited that user reaction. Don't assume you know what makes a product cool!
Let the user fail

This can be painful, I know. Especially if it's your design or product that's failing. I've had engineers observing study sessions grab the mouse and show the participant exactly what to do at the first sign of hesitation. But the problem is, you're not testing to see if somebody can be SHOWN how to use the product. You're testing to see if a person can FIGURE OUT how to use the product. And frequently, you learn the most from failures. When four out of four participants all fail to perform a task in exactly the same way, maybe that means that the product needs to change so that they can perform the task in that way.
Also, just because a participant fails to perform a task immediately doesn't mean that they won't discover the right answer with a little exploration. Watching where they explore first can be incredibly helpful in understanding the participant's mental model of the application. So let them fail for a while, and then give them a small hint to help them toward their goal. If they still don't get it, you can keep giving them stronger hints until they've completed the task.
Are those all the tricks to a successful user study? Well, no. But they're solutions to mistakes that get made over and over, especially by people without much experience or training in talking to users, and they'll help you get much better information than you would otherwise. Now get out there and start talking to your users!