Thursday, August 18, 2011

Breaking the Rules: A UX Case Study

Recently, I was lucky enough to be featured in Smashing Magazine's brand new UX section! Smashing is already a fabulous resource for web design and coding, and I think it's going to be a great place to learn about user experience.

You should read my first article, Breaking the Rules: A UX Case Study.

Here's a little something to get you started:

I read a lot of design articles about best practices for improving the flow of sign-up forms. Most of these articles offer great advice, such as minimizing the number of steps, asking for as little information up front as possible, and providing clear feedback on the status of the user’s data.

If you’re creating a sign-up form, you could do worse than to follow all of these guidelines. On the other hand, you could do a lot better.

Design guidelines aren’t one size fits all. Sometimes you can improve a process by breaking a few rules. The trick is knowing which rules to break for a particular project.

Read the rest of the article!

Tuesday, August 9, 2011

Stop Worrying About the Cupholders

Every startup I’ve ever talked to has too few resources. Programmers, money, time, you name it: startups don’t have enough of it.

When you don’t have enough resources, prioritization becomes even more important. You don’t have the luxury to execute every single great idea that you have. You need to pick and choose, and the life of your company depends on choosing wisely.

Why is it that so many startups work so hard on the wrong stuff?

By “the wrong stuff” I mean, of course, stuff that doesn’t move a key metric: projects that don’t convert people into new users, increase revenue, or drive retention. It’s especially problematic for new startups, since they are often missing really important features that would drive all of those key metrics.

It’s as if they had a car without any brakes, and they’re worried about building the perfect cupholder.

For some reason, when you’re in the middle of choosing features for your product, it can be really hard to distinguish between brakes and cupholders. How do you do it?

You need to start by asking (and answering) two simple questions:
  • What problem is this solving?
  • How important is this problem in relation to the other problems I have to solve?
To accurately answer these questions, it helps to be able to identify some things that frequently get worked on that just don’t have that big of a return. So, what does a cupholder project look like? It often looks like:

Visual Design

Visual design can be incredibly important, but nine times out of ten, it’s a cupholder. Obviously colors, fonts, and layout can affect things like conversion, but it’s typically an optimization of conversion rather than a conversion driver.

For example, the fact that you allow users to buy things on your website at all has a much bigger impact on revenue than the color of the buy button. Maybe that’s an extreme example, but I’ve seen too many companies spending time quibbling over the visual design of incredibly important features, which just ends up delaying the release of these features.

Go ahead. Make your site pretty. Some of that visual improvement may even contribute to key metrics. But every time you put off releasing a feature in order to make sure that you’ve got exactly the right gradient, ask yourself, “Am I redesigning a cupholder here, or am I turbocharging the engine?”

Monday, August 1, 2011

Hypothesis Generation vs. Validation

A lot of people ask me what sort of research they should be doing on their products. There are a lot of factors that go into deciding which sort of information you should be getting from users, but it pretty much boils down to one question: what do you want to learn?

Today, I’m going to explore one of the many ways you can go about looking at this: Hypothesis Generation vs. Hypothesis Validation. Don’t worry, it’s not as complicated as I’ve made it sound.

What is Hypothesis Generation?

In a nutshell, hypothesis generation is what helps you come up with new ideas for what you need to change. Sure, you can do this by sitting around in a room and brainstorming new features, but reaching out and learning from your users is a much faster way of getting the right data.

Imagine you were building a product to help people buy shoes online. Hypothesis generation might include things like:

  • Talking to people who buy shoes online to explore what their problems are
  • Talking to people who don’t buy shoes online to understand why
  • Watching people attempt to buy shoes both online and offline in order to understand what their problems really are rather than what they tell you they are
  • Watching people use your product to figure out if you’ve done anything particularly confusing that is keeping them from buying shoes from you

As you can see, you can do hypothesis generation at any point in the development of your product. For example, before you have any product at all, you need to do research to learn about your potential users’ habits and problems. Once you have a product, you need to do hypothesis generation to understand how people are using your product and what problems you’ve caused.

To be clear, the research itself does not generate hypotheses. YOU do that. The goal is not to just go out and have people tell you exactly what they want and then build it. The goal is to gain an understanding of your users or your product to help you think up clever ideas for what to build next.

Good hypothesis generation almost always involves qualitative research. At some point, you need to observe people or talk to people in order to understand them better.

However, you can sometimes use data mining or other metrics analysis to begin generating a hypothesis. For example, you might look at your registration flow and notice a severe drop-off halfway through. That’s a clue that users are hitting some sort of problem at that point in the process, one you might want to look into with some qualitative research.
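That kind of funnel analysis is easy to automate. Here’s a minimal sketch in Python; the step names and counts are made up for illustration, not from any real product:

```python
# Hypothetical registration funnel: (step name, number of users who completed it).
funnel = [
    ("landed on signup page", 1000),
    ("entered email", 620),
    ("chose password", 580),
    ("confirmed email", 210),
    ("completed profile", 180),
]

def drop_offs(steps):
    """Return (step name, fraction of users lost) for each funnel transition."""
    losses = []
    for (_, prev_count), (name, count) in zip(steps, steps[1:]):
        losses.append((name, 1 - count / prev_count))
    return losses

# The transition with the biggest loss is where to aim your qualitative research.
worst_step, worst_loss = max(drop_offs(funnel), key=lambda pair: pair[1])
print(f"Biggest drop-off: {worst_step} ({worst_loss:.0%} of users lost)")
```

The number tells you *where* people are leaving, not *why*; that’s what the follow-up qualitative research is for.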

What is Hypothesis Validation?

Hypothesis validation is different. In this case, you already have an idea of what is wrong, and you have an idea of how you might possibly fix it. You now have to go out and do some research to figure out if your assumptions and decisions were correct.

For our fictional shoe-buying product, hypothesis validation might look something like:

  • Standard usability testing on a proposed new purchase flow to see if it goes more smoothly than the old one
  • Showing mockups to people in a particular persona group to see if a proposed new feature appeals to that specific group of people
  • A/B testing of changes to see if a new feature improves purchase conversion
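For the A/B testing case, the validation step is usually a simple statistical check on conversion counts. Here’s one common way to do it, a two-proportion z-test, sketched in plain Python with made-up numbers (the counts and the 5% significance threshold are illustrative assumptions, not from the post):

```python
from statistics import NormalDist

def ab_conversion_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate different from A's?

    conv_* = number of conversions, n_* = number of visitors per variant.
    Returns (z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical experiment: old purchase flow vs. new purchase flow.
z, p = ab_conversion_test(conv_a=120, n_a=2000, conv_b=160, n_b=2000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 would support the hypothesis
```

This only validates the hypothesis you stated up front (the new flow converts better); it won’t tell you why, which is where the qualitative methods in the other bullets come in.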

Hypothesis validation also almost always involves some sort of tangible thing that is getting tested. That thing could be anything from a wireframe to a prototype to an actual feature, but there’s something that you’re testing and getting concrete data about.

You can use both quantitative and qualitative data to validate a hypothesis, but you have to choose carefully to make sure you’re testing the right thing. In fact, sometimes a combination of the two is most effective. I’ve got some information on choosing the right type of test in my post Qual vs. Quant: When to Listen and When to Measure.

Types of Research

Why is this distinction between generation and validation important? Because figuring out whether you’re generating hypotheses or validating them is necessary for deciding which type of research you want to do.

Want to understand why nobody is registering for your site? Generate some hypotheses with observational testing of new users. Want to see if the mockups for your new registration flow are likely to improve matters? Validate your hypothesis with straight usability testing of a prototype.

These aren’t the only factors that go into determining the type of research necessary for your stage of product development, but they’re an important part of deciding how to learn from your users.

Like the post? Follow me on Twitter!