Monday, August 17, 2009

5 Things People Get Wrong When Talking to Users

This post originally appeared on the Sliced Bread Design blog.

I was talking to an engineer the other day who was describing his startup's first experience in trying to get user feedback about their new product. Since it was a small company and the product didn't exist in production yet, their goals for gathering user feedback were:
  • Get information about whether people thought the product was a good idea.
  • Identify potential customer types, both for marketing and further research purposes.
  • Talk to as many potential users as possible to get a broad range of feedback.
  • Keep it as cheap as possible!
He had, unsurprisingly, a number of stories about mistakes they had made and lessons they'd learned during the process of talking to dozens of people. As he was sharing the stories with me, the thought that kept going through my head was, "OF COURSE that didn't work! Why didn't you [fill in the blank]?" Obviously, the reason he had to learn all this from scratch was that he hadn't moderated or observed hundreds of usability sessions, and he'd never had any training in appropriate user interview techniques. Many of the things that user researchers take for granted were brand new to him. Having spoken with many other people at small companies with almost non-existent research budgets, I can tell you that this is not an isolated incident. While it's wonderful that more companies are taking user research seriously and understanding how valuable talking to users can be, it seems like people are relearning the same lessons over and over.

To help others without a user experience background avoid those same mistakes, I've compiled a list of 5 things you're almost certainly doing wrong if you're trying to get customer feedback without much experience. Even if you've been talking to users for years, you might still be doing these things, since I've seen these mistakes made by people who really should know better. Of course, this list is not exhaustive. You could be making dozens of other mistakes, for all I know! But just fixing these few small problems will dramatically increase the quality of your user feedback, regardless of the type of research you're doing.

Don't give a guided tour

One of the most common problems I've seen in customer interviews is inexperienced moderators giving way too much information about the product up front. Whether they're trying to show off the product or trying to "help" the user not get lost, they start the test by launching into a long description of what the product is, who it's for, what problems it's trying to solve, and all the cool features it has. At the end of the tour, they wrap up with a question like, "So, do you think you would use this product to solve this exact problem that I told you about?" Is there any other possible answer than, "Ummm...sure?"

Instead of the guided tour, start by letting the user explore a bit on their own. Then give the user as little background information as possible to complete a task. For example, to test the cool new product we worked on for Superfish, I might give them a scenario they can relate to, like, "You are shopping online for a new pair of pants to wear to work, and somebody tells you about this new product that might help. You install the product as a plug-in to Firefox and start shopping. Show me what you'd do to find that pair of pants." The only information I've given the user is stuff they probably would have figured out if they'd found the product on their own and installed it themselves. I leave it up to them to figure out what Superfish is, how it works, and whether or not it solves a problem that they have.

Shut up, already

Remember, while you may have been staring at this design for weeks or months, this may be the first time your participant has even heard of your product. When you first share a screen or present a task, you may want to immediately start quizzing the participant about it. Resist that impulse for a few minutes! Give people a chance to get their bearings and start to notice things on their own. There will be plenty of time to have a conversation with the person after they've become a little more comfortable with the product, and you'll get more in-depth comments than if you put them on the spot immediately.

Ask open-ended questions

When you do start asking questions, never give the participant a chance to answer with a simple yes or no. The idea here is to ask questions that start a discussion.

These questions are bad for starting a discussion:
  • "Do you think this is cool?"
  • "Was that easy to use?"
These questions are much better:
  • "What do you think of this?"
  • "How'd that go?"
The broader and more open-ended you keep your questions, the less likely you are to lead the user, and the more likely you are to get interesting answers to questions you didn't even think to ask.

Follow up

This conversation happens at least a dozen times in every test:
Me: What did you think about that?
User: It was cool.
Me: WHAT WAS COOL ABOUT IT?
User: [something that's actually interesting and helpful.]

Study participants will often respond to questions with words that describe their feelings about the product but that don't get at why they feel that way. Words like "cool," "intuitive," "fun," and "confusing" are a start, but it's far more useful to know what it was about the product that elicited that reaction. Don't assume you know what makes a product cool!

Let the user fail

This can be painful, I know. Especially if it's your design or product that's failing. I've had engineers observing study sessions grab the mouse and show the participant exactly what to do at the first sign of hesitation. But the problem is, you're not testing to see if somebody can be SHOWN how to use the product. You're testing to see if a person can FIGURE OUT how to use the product. And frequently, you learn the most from failures. When four out of four participants attempt a task in exactly the same way and fail, maybe that means the product should change so the task works the way they all expected.

Also, just because a participant fails to perform a task immediately doesn't mean that they won't discover the right answer with a little exploration. Watching where they explore first can be incredibly helpful in understanding the participant's mental model of the application. So let them fail for a while, and then give them a small hint to help them toward their goal. If they still don't get it, you can keep giving them stronger hints until they've completed the task.

Are those all the tricks to a successful user study? Well, no. But they're solutions to mistakes that get made over and over, especially by people without much experience or training in talking to users, and they'll help you get much better information than you would otherwise. Now get out there and start talking to your users!