
Thursday, December 3, 2009

Which Metrics Equal Happy Users?

This post originally appeared on the Sliced Bread Design blog.

One of the greatest tools available to me as an interaction designer is the ability to see real metrics. I’m guessing that’s surprising to some people. After all, many people still think that design happens entirely before a product ever gets into the hands of users, so how could I possibly benefit from finding out what users are actually doing with my products?

Well, for one thing, I believe that design should continue for as long as a product is being used by or sold to customers. It’s an iterative process, and there’s nothing that gives me quicker, more accurate insight into how a new product version or feature is performing than looking at user metrics.

But there’s something that I, as a user advocate, care about quite a lot that is really very hard to measure accurately. I care about User Happiness. Now, I don’t necessarily care about it for some vague, good karma reason. I care because I think that happy users are retained users and, often, paying users. I believe that happy users tell their friends about my product and reduce my acquisition costs. I truly believe that happy users can earn money for my product.

So, how can I tell whether my users are happy? You know, without talking to every single one of them?

Although I think that happy users can mean more registrations, more revenue, and more retention, I don’t believe the reverse necessarily holds. In other words, there are all sorts of things I can do to retain customers or get more money out of them that don’t actually make them happy. Here are a few of the important business metrics you might be tempted to use as shorthand for customer happiness, and why that shorthand doesn’t always hold:

Retention

An increase in retention numbers seems like a good indication that your customers are happy. After all, happier customers stay longer, right?

Friday, November 13, 2009

6 Reasons Users Hate Your New Feature

This post originally appeared on the Sliced Bread Design blog.

You spend months on a new feature for your existing product: researching it, designing and building it, launching it. Finally, it’s out in the world, and you sit back and wait for all those glowing comments to come in about how happy your users are that you’ve finally solved their biggest problems. Except, when the emails, forum posts, and adoption data actually come in, you realize that they hate it.

There is, sadly, no single reason why your new feature failed, but there are a number of possibilities. The failure of brand new products is its own complicated subject. To keep the scope narrow, I’m just going to concentrate on failed feature additions to current products with existing users.

Your Existing Product Needs Too Much Work

Ah, the allure of the shiny new feature! It’s so much more exciting to work on the next big thing than to fix bugs or improve the user experience of a boring old existing feature.

While working with one company, I spoke with and read forum posts written by thousands of users. I also used the product extensively myself. One of the recurring themes of the complaints I heard was that the main product was extremely buggy and slow. The problem was, fixing the bugs and the lagging was really, really hard. It involved a significant investment in infrastructure change and a serious rewrite of some very tricky code.

Instead of buckling down and making the necessary improvements, management spent a long time trying to build new features on top of the old, buggy product. Unfortunately, the response to each new, exciting feature tended to be, “Your product still crashes my computer. Why didn’t you make it stop doing that instead of adding this worthless thing that I can’t use?”

Now, you obviously don’t need to fix every last bug in your existing offering before you move on and add something new. You do, however, need to be sensitive to the actual quality of your product and the current experience of your users. You wouldn’t build a second story on a house with a shaky foundation. Don’t tack brand new features onto a product that has an unacceptably high crash rate or severe usability problems, or that runs too slowly for a significant percentage of your users.

Before you add a new feature to a product, ask yourself, “Have I fixed the major bugs, crashes, and UX issues that are currently preventing my users from taking advantage of core features?”



Wednesday, November 4, 2009

Is Continuous Deployment Good for Users?

This post originally appeared on the Sliced Bread Design blog.

The recent release of Windows 7 got me thinking about development cycles. For those of us who suffered through the last 2+ years of Vista, Windows 7 has been a welcome relief from the lagging, bugs, and constant hassle of a failed operating system. Overall, as a customer, I’m pretty happy with Windows 7. But, at least on my part, there is still some latent anger - if Windows 7 hadn’t been quite as good as it seems to be, they would have lost me to Apple. They still might.

A big part of my unhappiness is the fact that I had to wait for more than two years before they fixed my problems. That’s a lot of crashes and frustration to forget about.

One approach that many software companies have been adopting to combat the huge lag time built into traditional software releases is something called continuous deployment. This sort of deployment means that, instead of having large, planned releases that go through a strict process and may take months or years, engineers release new code directly to users constantly, sometimes multiple times a day. A “release” could include almost anything: a whole new feature, a bug fix, or a text change on the landing page.
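To make the mechanics concrete, here is a minimal sketch of the kind of automated gate that makes shipping many times a day possible. This is illustrative only: the test command and the build.sh/deploy.sh scripts are hypothetical stand-ins, not a description of any particular company's pipeline.

```python
# A hypothetical continuous-deployment gate: every merge to the main branch
# runs the automated checks and, if they pass, pushes the build straight to
# production. The commands and script names are assumptions for illustration.
import subprocess
import sys

def run(cmd):
    """Run one pipeline step; abort the release if it fails."""
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(f"Step failed, release aborted: {' '.join(cmd)}")

if __name__ == "__main__":
    run(["pytest", "tests/"])           # automated tests stand in for a long QA cycle
    run(["./build.sh"])                 # package the current revision
    run(["./deploy.sh", "production"])  # ship it to users immediately
```

The point of the sketch is the shape of the process: because every step is automated, the gap between "fix is done" and "fix is in users' hands" collapses from months to minutes.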

I worked with a software development organization that practiced continuous deployment on a very large, complicated code base, and I can definitely say, the engineers loved it. From the point of view of the employees, continuous deployment was a giant win.

But how was it for the users? The fact is, some decisions that seem like they only affect engineering (or marketing, business, PR, etc.) can actually have a huge impact on end users. So, whenever organizations make decisions, they should always be asking, “How might this affect my customers, and how can I make it work best for them?”

Is Continuous Deployment Good For Users?

As with so many decisions, the answer is yes and no. Continuous deployment has some natural pros and cons for the customer experience, but knowing about them can help you fix the cons and benefit even more from the pros.

Big Customer Wins

Fast Bug Fixes

Perhaps the biggest win for users is that bugs can get addressed immediately. Even Microsoft ships interim patches for its worst security holes, but there is a whole class of non-critical but still important bugs that has to wait until the next major release to get addressed. That means weeks, months, or even years of your users dealing with something broken, even if the fix is simple. With continuous deployment, a fix can ship as soon as it's done.




Friday, October 2, 2009

A Faster Horse - When Not To Listen To Users

This post originally appeared on the Sliced Bread Design blog.

Henry Ford once said that, if he’d asked his customers what they wanted, they’d have asked for a faster horse. In the high tech industry, this quote is often used to justify not talking to users. After all, if customers don’t know what they want, why bother talking to them?

You need to talk to users because, if you ask the right questions, they will help you build a better product. The key is figuring out the right questions.

For starters, users are great at telling you when there’s something wrong with your product. They can tell you exactly which parts of the product are particularly confusing for them or are keeping them from being happy, repeat customers. Figuring out what to do about those problems is your job.

In general, users are not going to be able to answer the following types of questions:
  • What new technical innovation is going to revolutionize a particular industry?
  • What’s the next cool gadget that you’d like to buy?
  • Do you think that people like you would buy this new cool gadget that you’ve just learned about?
  • What new features would make this product more interesting/compelling/fun/easy to use? (although this question becomes more answerable when the user is presented with some options for which features they might prefer)
  • How exactly should we change the product to make it easier for you to use?
They are fantastic at answering questions like these:
  • What do you most love or hate about this product?
  • Do you find anything about this product hard to use or confusing?
  • Does this product solve your problem better or worse than what you’re currently doing?
  • How are you currently solving a particular problem that may or may not be addressed by this product?
  • What don’t you like about your current solutions for a particular problem?
  • Why did you choose this particular solution as opposed to another solution?
Obviously, there are innumerable other questions that you might want to ask your users, so how do you decide which ones they’ll be able to answer with any degree of accuracy?


Wednesday, September 23, 2009

Improving the ROI for Your User Research

This post originally appeared on the Sliced Bread Design blog.

So, you decided to do some user research in order to find out where you can make improvements. After a few hours of user interviews, you ended up with a notebook full of scribbled information that all seemed really critical. How in the world do you figure out what to do with all that information?

If your answer is “talk about it all abstractly with everybody in the company or write a huge paper that nobody will read and then go on with business as usual,” you're in good (bad?) company.

But you have to DO something with all that data. You have to analyze it and turn it into actionable items that your engineering department can use to fix your product. It's not always easy, but I'm going to give you an approach that should make it a little easier. This isn't the only way to do your test analysis, but it's one of the quickest and easiest that I've found when you are concerned with key metrics.

When to use this method:
  • You have an existing product with a way to measure key metrics, and you’re interested in improving in particular areas related to your bottom line
  • You have a limited research and development budget and want to target your changes specifically to move key metrics
  • You are looking for the “low-hanging fruit”: easy-to-fix problems that are getting in the way of your users performing important tasks with your product
  • You are working in an agile development environment that is constantly tweaking and improving your product and then testing the changes
When not to use this method:
  • You have an existing product that you are planning to completely overhaul, and you want to understand all of the major problems before you do your redesign
  • You are trying to create an overall awesome, irresistible user experience that is not related to a specific metric
  • You are designing a new product or feature and are observing people using other products to identify opportunities for innovation
If you fall into the first bucket, read on…

The Five Basic Steps:
  • Identify key metrics you'd like to improve (see the sketch after this list)
  • Identify the tasks on your site that correlate with improvement in those metrics
  • Observe people performing the appropriate tasks
  • Identify the barriers preventing people from completing or repeating the tasks
  • Develop recommendations that address each specific barrier to task completion
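Here is the sketch promised in step one: a rough illustration of turning raw event logs into a task-level metric you can try to move. The CSV layout and the event names are hypothetical; substitute whatever your own analytics actually records.

```python
# A minimal sketch of computing a key task metric from raw event logs.
# Assumes a hypothetical CSV with columns: user_id, event, timestamp.
import csv
from collections import defaultdict

def task_completion_rate(path, start_event, done_event):
    """Fraction of users who finished a task among those who started it."""
    events_by_user = defaultdict(set)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            events_by_user[row["user_id"]].add(row["event"])
    started = [u for u, evts in events_by_user.items() if start_event in evts]
    finished = [u for u in started if done_event in events_by_user[u]]
    return len(finished) / len(started) if started else 0.0

# Hypothetical usage: what fraction of users who begin checkout complete it?
print(task_completion_rate("events.csv", "start_checkout", "complete_checkout"))
```

Once you have a number like this, the research sessions in steps three and four are about explaining it: watching where people actually stall between the start event and the completion event.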


Wednesday, September 16, 2009

Why I Hate Paper Prototypes

This post originally appeared on the Sliced Bread Design blog.

Ok, maybe hate is a little strong. Paper prototypes and sketches have their place in interaction design. For example, they're great for quickly brainstorming different approaches to a problem at the beginning of a design process. They're also a very fast and cheap way to illustrate a new idea, since most people can draw boxes faster than they can build interactive prototypes. But, in my opinion, they have several serious drawbacks.

Before I get too far into this, let me define what I mean by a paper prototype, since I've heard people use the term to refer to everything from sketches on actual pieces of paper (or cocktail napkins in a couple of cases) to full color printed mockups with a polished visual design. In this instance, I'm referring to a totally non-interactive screen, mockup, or sketch of any sort of application that is meant to be shared with customers, test participants, or team members. It can be printed on actual paper or shown on a computer screen, but whatever the viewer does to it, a paper prototype is not interactive.

So, what don't I like about them?

Screen vs. Paper

These first two peeves apply to screens that are actually printed out or drawn directly on paper. With a few exceptions that I've listed below, I've found this approach to be really counterproductive.

Iterating On a Design

One of the biggest problems with hand-drawn sketches on paper has less to do with user interactions and more to do with my workflow as a designer. Sure, sketching something on a piece of paper can be quick, but what happens when I realize that I want to swap two sections of the screen? I can draw arrows and lines all over it, but that gets messy pretty fast. Whenever I want to make any changes to my design, I need to create a whole new sketch. This can mean redrawing the entire screen quite a few times.

If I'm creating a design in HTML or any other prototyping tool, the very first version might take a little longer than a quick sketch, but the second through nth iterations are a whole lot faster. And, as a bonus, I can check them into source control, which means I'm a lot less likely to lose my work than if I have dozens of pieces of paper scattered all over my office.

Interacting With Paper

Whether they're sketched out by hand or printed out on paper, people interact with paper screens differently than they do with computer screens. They view them at a different angle. They focus on different parts of the screen. They use their hands to interact with them rather than a mouse and keyboard. Any feedback that you get on a printed design will be colored by the fact that people are fundamentally interacting with it differently than they would if it were on a computer screen.

Given all of these drawbacks, there are a few situations when designs printed on paper can be used effectively:
  • You are at the very beginning of the design process, and you want to explore a bunch of different possible directions with other team members very quickly before committing yourself to fleshing out one or two specific options.
  • You're designing material that is meant to be printed, like brochures, user manuals, books, etc. In this case, you want to know how people will interact with the printed media.
  • Your product is an interface for some sort of embedded or small screen device that would be very difficult to replicate in a quick interactive prototype. For example, a screen for certain mobile devices or the heads-up display for a car dashboard might be hard to show interactively in the appropriate context.
  • You have several different visual designs, and you'd like to show them all to users at the same time in order to see which one is the most attention-getting. You’ll still need to show the designs on screen, of course, since colors can vary so much between screen and print, but it can be helpful to lay out several pieces of paper so that the various options can easily be compared.
  • You need to share screens with people in an environment with absolutely no access to a computer whatsoever. You know, maybe you’re in the middle of a meeting and need to sketch something quickly, or the rest of your design team is Amish, or you are designing in a post-apocalyptic wasteland where the computers are trying to destroy humanity.
On the other hand, if you're designing desktop or web applications for standard computers, at the very least, show your prototypes on a computer, even if they are not interactive!

Friday, September 11, 2009

6 Stupid Excuses for Not Getting Feedback

This post originally appeared on the Sliced Bread Design blog.

Almost every company I talk to wants to test their products, get customer feedback, and iterate based on real user metrics, but all too often they have some excuse for why they just never get around to it. Despite people's best intentions, products constantly get released with little to no customer feedback until it's too late.

I'm not trying to promote any specific methodology for testing your products or getting customer feedback. Whether you're doing formal usability testing, contextual inquiries, surveys, A/B testing, or just calling up users to chat, you should be staying in contact with customers and potential customers throughout the entire design and development process.

To help get you to stop avoiding it, I've explored six of the most common stupid excuses for not testing your designs and getting feedback early.

Excuse 1: It's a design standard

You can't test every little change you make, right? Can't you sometimes just rely on good design practices and standards? Maybe you moved a button or changed some text. But the problem is, sometimes design standards can get in the way of accomplishing your business goals.

For example, in a talk a few months ago, Bill Scott described a developer who had A/B tested the text on a link. One option read, "I'm now on Twitter." The second read, "Follow me on Twitter." The third read, "Click here to follow me on Twitter." Now, anybody familiar with "good design practices" will tell you that you should never, ever use the words "click here" to get somebody to click here. It's SO Web 1.0. But guess which link converted best in the A/B test? That's right. "Click here" generated significantly more Twitter followers than the other two. If that was the business goal, the bad design principle won hands down.

Does this mean that you have to do a full scale usability test every time you change link text? Of course not. Does it mean you have to use the dreaded words "click here" in all your links? Nope. What it does mean is that you should have some way to keep an eye on the metrics you care about for your site, and you should be testing how your design changes affect customer behavior, even when your changes adhere to all the best practices of good design. So, to put it simply: prioritize what you care about and then make sure you test your top priorities.
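If you're wondering what that testing looks like in practice for a change like the link text above, a two-proportion z-test is one standard way to check whether a difference in click-through rate is more than noise. The counts below are invented for illustration; they are not the numbers from Bill Scott's talk.

```python
# A hedged sketch of evaluating an A/B test result with a two-proportion
# z-test. All counts are made up for illustration.
from math import erf, sqrt

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """Return the z statistic and two-sided p-value for the difference."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# Hypothetical: "I'm now on Twitter" vs. "Click here to follow me on Twitter"
z, p = two_proportion_z(180, 1000, 240, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value suggests a real difference
```

You don't need to run this math by hand; most A/B testing tools report significance for you. The point is that "which link converted best" is a question with a quantitative answer, whatever the design guidelines say.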

Excuse 2: Company X does it this way

I can't tell you how many times I've heard, "Oh, we know that will work. Google/Facebook/Apple does it that way." This is the worst kind of cargo cult mentality. While it's true that Google, Facebook, and Apple are all very successful companies, you aren't solving exactly the same problem that those companies are, you don't have exactly the same customers that they do, and you don’t know if they have tested their designs or even care about design in that particular area. You are, hopefully, building an entirely different product, even if it may have some of the same features or a similar set of users.

Is it ok to get design ideas from successful companies? Of course it is. But you still need to make sure your solutions work for your customers.

I previously worked with a company that had a social networking product. Before I joined them, the company decided that, since other companies had had good luck with showing friend updates, they would implement a similar feature, alerting users when their friends updated their profiles or bought products. Unfortunately, the company's users weren't very interested in the updates feature as it was implemented. When we finally asked them why they weren't using the feature, the users told us that they would have been very interested in receiving an entirely different type of update. Of course, if the company had connected with users earlier in the process, they would have rolled the feature out with the right information and gotten a much more positive reaction on launch.

Another thing to remember is that just because a company is successful and has a particular feature doesn't mean it's that exact feature that makes them successful. Google has admitted that the "I'm Feeling Lucky" button loses them page views, but they keep it because they, and their customers, like the feature. That doesn't mean it's a good business plan for your budding search engine startup to adopt a strategy of only providing people with the equivalent of the "I'm Feeling Lucky" button. In fact, this is a great example of why you might need to employ multiple testing methods: qualitative (usability, contextual inquiry, surveys), to find out if users find the feature compelling and usable, and quantitative (A/B, analytics), to make sure that the feature doesn't bankrupt you.

The bottom line is, it doesn't matter if something works for another company. If it’s a core interaction that might impact your business or customer behavior, you need to test new features and designs with your customers to make sure that they work for you. Obviously, you also need to make sure that you’re not violating anybody’s IP, but that’s another blog post.


Monday, August 17, 2009

5 Things People Get Wrong When Talking to Users

This post originally appeared on the Sliced Bread Design blog.

I was talking to an engineer the other day who was describing his startup's first experience in trying to get user feedback about their new product. Since it was a small company and the product didn't exist in production yet, their goals for gathering user feedback were:
  • Get information about whether people thought the product was a good idea.
  • Identify potential customer types, both for marketing and further research purposes.
  • Talk to as many potential users as possible to get a broad range of feedback.
  • Keep it as cheap as possible!
He had, unsurprisingly, a number of stories about mistakes they had made and lessons they'd learned during the process of talking to dozens of people. As he was sharing the stories with me, the thought that kept going through my head was, "OF COURSE that didn't work! Why didn't you [fill in the blank]?" Obviously, the reason he had to learn all this from scratch was that he hadn't moderated and viewed hundreds of usability sessions or had any training in appropriate user interview techniques. Many of the things that user researchers take for granted were brand new to him. Having spoken with many other people at small companies with almost non-existent research budgets, I can tell you that this is not an isolated incident. While it's wonderful that more companies are taking user research seriously and understanding how valuable talking to users can be, it seems like people are relearning the same lessons over and over.

To help others who don't have a user experience background avoid those same mistakes, I've compiled a list of 5 things you're almost certainly doing wrong if you're trying to get customer feedback without much experience. Even if you've been talking to users for years, you might still be doing these things, since I've seen these mistakes made by people who really should know better. Of course, this list is not exhaustive. You could be making dozens of other mistakes, for all I know! But just fixing these few small problems will dramatically increase the quality of your user feedback, regardless of the type of research you're doing.

Don't give a guided tour

One of the most common problems I've seen in customer interviews is inexperienced moderators wanting to give way too much information about the product up front. Whether they're trying to show off the product or trying to "help" the user not get lost, they start the test by launching into a long description of what the product is, who it's for, what problems it's trying to solve, and all the cool features it has. At the end of the tour, they wrap up with a question like, "So, do you think you would use this product to solve this exact problem that I told you about?" Is there any other possible answer than, "ummm...sure?"

Instead of the guided tour, start by letting the user explore a bit on their own. Then, give the user as little background information as possible to complete a task. For example, to test the cool new product we worked on for Superfish, I might give them a scenario they can relate to like, "You are shopping online for a new pair of pants to wear to work, and somebody tells you about this new product that might help. You install the product as a plug-in to Firefox and start shopping. Show me what you'd do to find that pair of pants." The only information I've given the user is stuff they probably would have figured out if they'd found the product on their own and installed it themselves. I leave it up to them to figure out what Superfish is, how it works, and whether or not it solves a problem that they have.

Thursday, May 21, 2009

A/B and Qualitative User Testing

Recently, I worked with a company devoted to A/B testing. For those of you who aren't familiar with the practice, A/B testing (sometimes called bucket testing or multivariate testing) is the practice of creating multiple versions of a screen or feature and showing each version to a different set of users in production in order to find out which version produces better metrics. These metrics may include things like "which version of a new feature makes the company more money" or "which landing screen positively affects conversion." Overall, the goal of A/B testing is to allow you to make better product decisions based on the things that are important to your business by using statistically significant data.
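For those who haven't seen one of these systems up close, here is a minimal sketch of how the "different set of users" split is commonly done. Hashing a stable user id means a given user always sees the same variant across visits; the experiment name and the even split below are assumptions for illustration.

```python
# A minimal sketch of deterministic A/B bucketing: hash a stable user id
# together with an experiment name so assignment is consistent across visits.
import hashlib

def assign_variant(user_id, experiment, variants=("A", "B")):
    """Deterministically map a user to one variant of an experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("user-42", "landing_page_test"))  # same answer every run
```

Salting the hash with the experiment name keeps a user's bucket in one test independent of their bucket in any other test, which matters when several experiments run at once.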

Qualitative user testing, on the other hand, involves showing a product or prototype to a small number of people while observing and interviewing them. It produces a different sort of information, but the goal is still to help you make better product decisions based on user feedback.

Now, a big part of my job involves talking to users about products in qualitative tests, so you might imagine that I would hate A/B testing. After all, wouldn't something like that put somebody like me out of a job? Absolutely not! I love A/B testing. It's a phenomenal tool for making decisions about products. It is not the only tool, however. In fact, qualitative user research combined with A/B testing creates the most powerful system for informing design that I have ever seen. If you're not doing it yet, you probably should be.

A/B Testing

What It Does Well

A/B testing on its own is fantastic for certain things. It can help you:
  • Get statistically significant data on whether a proposed new feature or change significantly increases metrics that matter - numbers like revenue, retention, and customer acquisition
  • Understand more about what your customers are actually doing on your site
  • Make decisions about which features to cut and which to improve
  • Validate design decisions
  • See which small changes have surprisingly large effects on metrics
  • Get user feedback without actually interacting with users