Tuesday, October 26, 2010

The Dangers of Metrics (Only) Driven Product Development

When I first started designing, it was a lot harder to know what I got right. Sure, we ran usability tests, and we looked generally at things like page counts and revenue before and after big redesigns, but it was still tough to know exactly what design changes were making the biggest difference. Everything changed once I started working with companies that made small, iterative design changes and a/b tested the results against specific metrics.

To be clear, not all the designers I know like working in this manner. After all, it's no fun being told that your big change was a failure because it didn't result in a statistically significant increase in revenue or retention. In fact, if you're a designer or a product owner and are required to improve certain metrics, it can sometimes be tempting to cheat a little.
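For readers who haven't lived inside an A/B testing tool, here's a minimal sketch of how that "statistically significant" verdict typically gets computed: a two-proportion z-test comparing conversion rates between control and variant. The numbers here are entirely made up, and real experimentation tools handle many more edge cases:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical test: control converts 500/10000, variant 560/10000.
z = two_proportion_z(500, 10000, 560, 10000)
print(round(z, 2))  # → 1.89: below 1.96, so NOT significant at the 95% level
```

Note that the variant shows a 12% relative lift and still fails the significance bar at these sample sizes, which is exactly the situation that tempts designers and product owners to "cheat a little."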

This leads to a problem that I don't think we talk about enough: Metrics (Only) Driven Product Development.

What Is Metrics (Only) Driven Product Development?

Imagine that you work at a store, and your manager has noticed that when the store is busy, the store makes more money. The manager then tells you that your job is to make the store busier - that's the metric you need to improve.

You have several options for improving your metric. You could:
  • Improve the quality of the shopping experience so that people who are already in the store want to stay longer
  • Offer more merchandise so that people find more things they want to buy
  • Advertise widely to try to attract more people into the store
  • Sell everything at half off
  • Remove several cash registers so that checking out takes longer, which keeps more people in the store at any given time
  • Hire people to come hang out in the store
As you can see, all of the above would very likely improve the metric you were supposed to improve. They would all ensure that, for a while at least, the store was quite busy. However, some are significantly better for the overall health of the store than others.

Thursday, October 21, 2010

A Review of UserTesting.com

A lot of people recently have asked me about the new crop of user testing tools available on the internet. One specific tool that comes up a lot is usertesting.com, and I’d like to talk a little bit about my experience with it.

Frankly, when I first saw the site, my initial reaction was, “Well, that’s not going to be very useful.” I’ve heard a similar gut reaction from several user researchers. Once I’d given it a try, my reaction changed to, “Oh, shit. This could seriously cut into my income.”

Having used it several times now, I can happily say that neither of these reactions was correct.

The Cons:

Let’s start with the things that I originally noted about usertesting.com that made me think it wouldn’t be very useful.

There is no moderator.
Not having a moderator means that there isn’t a human being running the test who can ask follow-up questions and delve deeply into issues that come up naturally during a session. Good moderators don’t just follow a script; they ask the right questions to really understand why users are doing what they’re doing.

It’s less useful for testing incomplete interactive prototypes.
I have not yet found a good way to test prototypes with usertesting.com. When I design, I create very sketchy, incomplete mockups of products. Typically, these only run in one browser, they have no visual design, and large parts of them won’t really work or will use fake data.

I can deal with all of these issues in a one-on-one session by explaining that the prototypes are not real, giving the participant some help in areas where the prototype isn’t perfect, or gently reminding them not to fixate on the visual design. This isn’t really possible without a human moderator. 

Monday, October 11, 2010

Pie-Jacking and Other Tips for Making Engineers Tolerate UX

Obviously, I find UX to be incredibly important. And these days, I’m finding more and more people who agree with me. Unfortunately, in many organizations, there are people who still feel that user research and interaction design slow things down or aren’t really necessary.

Sometimes, those people are engineers. Not always, of course. I’ve worked with lots of engineers who are very excited about the idea of good design and getting user feedback. But occasionally you run into groups of engineers who have yet to be convinced.

In the interest of achieving harmony between the Engineering and UX departments at your company, here are a few tips for convincing people (especially engineers) of the value of user research and design.

To be clear, I am in no way suggesting that you trick engineers or lie to them or manipulate them in any way. I’m working on the assumption that engineers are frequently extremely logical people who just need evidence that things are useful before they buy in.

Involve Them in the User Research

The number one way to get anybody excited about user research and UX is to get them involved with it. Too frequently, the only thing about user research that engineers get to see is a thirty-page paper detailing all of the problems with their product. This is boring, painful, and easily ignored.

In my experience working with dozens of companies, the single most powerful tool for getting engineers in touch with users was having them sit in on a few usability sessions. The sessions didn’t have to be formal. I’d just put the engineers in chairs in the back of a small conference room and have a participant use the product in front of them.

Without fail, the engineers started to understand the pain that their users were feeling. And, since engineers are not monsters, they wanted to help those people. They would fix bugs they observed during the sessions. They would ask for advice on how to improve screens that they had previously thought were just fine.

Most importantly, the engineers would start to understand how different they were from their users! Suddenly, it became much harder for the engineers to believe that they had all the answers to their customers’ problems, since they had seen firsthand just how different they really were.

Tuesday, October 5, 2010

6 Incredibly Important Lean UX Lessons

A project I’ve been working on recently should really be featured in all books on agile and lean design. I’ve found that many projects and clients have similar issues with their design processes, but it’s rare that a single project clearly demonstrates so many incredibly important tenets of lean UX. 

So far, over the course of about a month, this one has driven home all of the following lessons:

So, what happened?

I’m working with a very cool startup that is absolutely devoted to metrics. They A/B test and measure everything. When they first brought me on, the team went over some history of the product.

They originally had a particular flow for the product, but when they did qualitative testing, it became clear to them that users expected things to behave differently. The team designed a new version of the product that they felt addressed the issues that testers were having. They then built and released the new version.

It bombed. Well, perhaps that’s a bit strong, but it performed significantly worse than the original design in an A/B test.

Thursday, September 30, 2010

User Research Tips

After my talk at Web 2.0 Expo on combining qualitative research, quantitative metrics, and design vision for better products, there were some questions from the audience. Interestingly, the large majority of questions were about the qualitative research part of the talk.

And that makes sense. Qualitative research can be tough to incorporate into your development process. Until fairly recently, it's been a big, expensive, time-consuming endeavor. Often it required having outside consultants come in to run tests in a rented lab behind a one-way mirror. Additionally, a lot of product folks assumed that it would slow down the development process, since it would often add a step between design and engineering.

Now, if you read my blog or listen to me speak, you know that I advocate quick and cheap testing over large, formal studies, and I like taking advantage of tools that let me run remote usability studies. I also feel that testing and research speeds up your development process, since it tends to catch problems early, when they're easier to fix.

That said, user research is easy to get wrong. It takes some practice to be good at things like moderating sessions and analyzing data. For those of you who are interested in learning more about these things, I've compiled a list of resources to get you started.

My Blog Posts

These older posts should help you fix some of the common problems people have with user research:

Books

There are a million books about user research. These are two very good ones. Let me know in the comments if you've read any other particularly helpful ones. 


Online Tools

These tools do NOT eliminate the need to actually interact with your users in person, but they can be extremely valuable additions to your user research process. 
  • Skype, GoToMeeting, WebEx, etc. - Allow you to screen share so that you can observe how your users are interacting with your product. 
  • usertesting.com - Very fast & cheap way to test your new user experience. 
  • NavFlow - Lets you test your site navigation using mockups, which allows you to get feedback before you build the product. 
  • Five Second Test - Great for testing things like landing pages or whether your calls to action are obvious enough. 
  • Ethnio - Helps recruit session participants who are currently using your product. 
  • Revelation - Helps you run longer term studies with current users.

Friday, September 24, 2010

Please Stop Annoying Your Users

Once upon a time, I worked with a company that was addicted to interstitials. Interstitials, for those of you who don’t know the term, are web pages or advertisements that show up before an expected content page. For example, the user clicks a link or button and expects to be taken to a news article or to take some action, and instead she is shown a web page selling her something.

Like many damaging addictions, this one started out innocently enough. You see, the company had a freemium product, so they were constantly looking for ways to share the benefits of upgrading to the premium version in a way that flowed naturally within the product.

They had good luck with one interstitial that informed users of a useful new feature that required the user to upgrade. They had more good luck with another that asked the user to consider inviting some friends before continuing on with the product.

Then things got ugly.

Customers could no longer use the product for more than a few minutes without getting asked for money or to invite a friend or to view a video to earn points. Brand new users who didn’t even understand the value proposition of the free version were getting hassled to sign up for a monthly subscription.
Every time I tried to explain that this was driving users away, management explained, “But people buy things from these interstitials! They make us money! Besides, if people don’t want to see them, they can dismiss them.”

How This Affects Metrics

Of course, you know how this goes. Just looking at the metrics from each individual interstitial, it was pretty clear that people did buy things or invite friends or watch videos. Each interstitial did, in fact, make us some money. The problem was that overall the interstitials lost us customers and potential customers by driving away people who became annoyed.

The fact that the users could simply skip the interstitials didn’t seem to matter much. Sure, people could click the cleverly hidden “skip” button – provided they could find it – but they had already been annoyed. Maybe just a little. Maybe only momentarily. But it was there. The product had annoyed them, and now they had a slightly more negative view of the company.

Here’s the important thing that the company had to learn: a mildly annoyed user does not necessarily leave immediately. She doesn’t typically call customer service to complain. She doesn’t write a nasty email. She just gets a little bit unhappy with the service. And the next time you do something to annoy her, she gets a little more unhappy with the service. And if you annoy her enough, THEN she leaves.

The real problem is that this kind of damage is often tricky to identify with metrics. It’s a combination of a lot of little things, not one big thing, that makes the user move on, so it doesn’t show up as a giant drop off in a particular place. It’s just a slow, gradual attrition of formerly happy customers as they get more and more pissed off and decide to go elsewhere.

If you fix each annoyance and A/B test it individually, you might not see a very impressive lift, because, of course, you still have dozens of other things that are annoying the user. But over time, when you’ve identified and fixed most of the annoyances, what you will see is higher retention and better word of mouth as your product stops vaguely irritating your users.
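To see why those individual lifts are so hard to measure, it helps to look at the sample-size math. Here's a rough sketch using the standard normal-approximation formula for a two-proportion test (95% confidence, 80% power) with hypothetical retention rates; it shows how quickly the required traffic balloons as the lift you're trying to detect shrinks:

```python
import math

def sample_size_per_arm(p_base, relative_lift, alpha_z=1.96, power_z=0.84):
    """Rough per-arm sample size needed to detect a relative lift in a base
    rate, via the normal approximation (z-values for 95% conf / 80% power)."""
    p_new = p_base * (1 + relative_lift)
    delta = p_new - p_base
    p_bar = (p_base + p_new) / 2
    variance = 2 * p_bar * (1 - p_bar)  # combined variance of the two arms
    return math.ceil((alpha_z + power_z) ** 2 * variance / delta ** 2)

# Hypothetical: detecting a 10% vs. a 2% relative lift on a 5% retention rate.
print(sample_size_per_arm(0.05, 0.10))  # ~31,000 users per arm
print(sample_size_per_arm(0.05, 0.02))  # ~750,000 users per arm, about 24x more
```

A fix that removes one small annoyance might honestly move retention by only a percent or two, and at that effect size most products simply don't have the traffic to "prove" each fix individually, which is why the cumulative approach requires some faith.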

Some Key Offenders

I can’t tell you exactly what you’re doing that is slightly annoying your customers, but here are a few things that I’ve seen irritate people pretty consistently over the years:
  • Slowness
  • Too many interstitials
  • Not remembering information - for example, not maintaining items in a shopping cart or deleting the information that a user typed into a form if there is an error
  • Confusing or constantly changing navigation
  • Inconsistent look and feel, which can make it harder for users to quickly identify similar items on different screens
  • Hard-to-find or inappropriately placed call-to-action buttons
  • Bad or unresponsive customer service

It’s frankly not easy to fix all of these things, and it can be a leap of faith for companies who want every single change to show a measurable improvement in key metrics. But by making your product less annoying overall, you will end up with happier customers who stick around.

Like the post? Follow me on Twitter!

Also, come hear me speak on Wednesday, Sept. 29th, at Web 2.0 Expo New York. I’ll be talking about how to effectively combine qualitative research, quantitative analytics, and design vision in order to improve your products.

Thursday, September 16, 2010

Everything In Its Place

I talk to a lot of designers. We’re all different kinds of designers: visual, interaction, user experience, information, blah blah blah, but many of us take the same things for granted. Because of this, designers will probably be bored to tears by this post, while non-designers may learn something that can make it much easier to build products that people can use.

But first, a story. A designer friend of mine had a baby. She asked her husband to put up a note asking people to please not ring the doorbell, since the baby was sleeping. Later, after somebody rang the doorbell and the baby woke up and she was contemplating divorce, she wondered why her husband hadn’t put up the damn note like she had asked.

The thing is, he HAD put up the note. He had put it right on the door at eye level so anybody could see it. What he hadn’t done was associate the call to action with the actual action.

What the hell does that mean? 

A big part of any user experience design is figuring out where to put stuff. This may sound obvious, but it’s best to put stuff where people are most likely to use it. That means associating calls to action with the thing that is being acted upon.

Here’s an example you may have considered. Where do you put a buy button on a page? Well, when a user is trying to decide whether or not to buy something, which pieces of information is the user most likely to need? He definitely needs to know how much he’s paying for the item. He might need pictures of the item. He almost certainly needs to know the name of the item and perhaps a short description.

Considering those needs, the Buy button should probably go near those things on the page. It should even go in a defined visual area with just those things. Here’s the hard part: it needs to go near those things EVEN IF IT LOOKS BETTER SOMEPLACE ELSE.

What's with all the screaming?

I’m all for having a nice visual design. I believe that a page should be balanced and pretty and have a reasonable amount of white space and all that. But if one element of your gorgeous visual design has separated your Buy button from the information that your user needs in order to decide to buy, then your gorgeous visual design is costing you more money than you think.

This isn’t just true for Buy buttons; it’s true any time the user has to make a decision. The call to action to make the decision must be visually associated with any information that the user needs to make that decision. Additionally, any information that is NOT related to the decision should be visually separate.

This also applies to things that aren't calls to action, of course. Related information should all be grouped together while unrelated information should be somewhere else. It's just that simple. Oh, and bonus points if you keep all similar items in the same place on every screen of your product so people always know where to look.

So, where should my friend’s husband have put the note? He should have put it within inches of the doorbell itself. Why? Because the decision the user was making was whether or not to ring the doorbell. The husband needed to put the information about the sleeping baby right at the point where the user was making the decision, not in a completely different part of the interface (the door) where the user might or might not even notice it.

The next time you're deciding where something goes, remember, this strategy is not only important for creating a usable product, it just might save your marriage!

Like the post? I'm a veritable font of useful information, I swear. Follow me on Twitter!